Fedora People

Server Updates & Reboots

Posted by Fedora Infrastructure Status on May 18, 2022 08:00 PM

We will be updating and rebooting various servers to bring them up to date. During the outage window, services may be intermittently unavailable as proxies and gateways are rebooted. Any fedoraproject service may be affected, with the exception of mirrorlists and static web content.

OAuth Security Workshop 2022

Posted by Kushal Das on May 16, 2022 03:46 PM

Last week I attended the OAuth Security Workshop in Trondheim, Norway. It was a three-day, single-track conference, where the first half of each day consisted of pre-selected talks and the second half of unconference talks and side meetings. This was also my first proper conference since COVID emerged in the world.

Back to the starting line

After many years, I felt the whole excitement of being a total newbie in something and of suddenly being able to meet all the people behind the ideas. I reached the conference hotel in the afternoon of day 0 and met the organizers in the lobby area. That chat went on for a long time, and as more and more people kept checking into the hotel, I realized that it was a kind of reunion for many of the participants. Though a few of them had met at a conference in California just a week earlier, they were all excited to meet again.

To understand how welcoming any community is, just notice how it behaves towards new folks. I think the Python community stands high in this regard, and I am very happy to say the whole OAuth/OIDC/identity-related community is excellent in this regard too. Even though I kept introducing myself as the new person in this identity land, not once did I feel unwelcome. I attended OpenID-related working group meetings during the conference, had multiple hallway chats, and talked to people while walking around the beautiful city. Everyone was happy to explain things to me in detail, even though most of them have already spent 5-15+ years in the identity world.

The talks & meetings

What happens in Trondheim, stays in Trondheim.

I generally do not attend many talks at conferences, as they get recorded. But this conference was single-track, and there were no recordings.

The first talk was about formal verification, and it was the first time I saw that (scary, in my mind) maths on the big screen. But full credit to the speakers: they explained things in such a way that even an average programmer like me understood each step. After this talk, we jumped into the world of OAuth/OpenID. One funny thing: whenever someone mentioned an RFC number, its authors turned out to be right there in the meeting room.

In the second half, we had the GNAP master class from Justin Richer. Once again, the speaker explained deep technical details so straightforwardly that everyone in the room could follow.

The evening before, a few people had mentioned that in heated technical discussions many RFC numbers would be thrown around, but there were not enough of them to scare me too much :)

I also managed to meet Roland for the first time. We had long chats about the status of Python in the identity ecosystem and about Identity Python. I took some notes on how we can improve the usage of Python here, and I will most probably start writing about that in the coming weeks.

In multiple talks, researchers and people from the industry pointed out mistakes made in this space from a security point of view. Even though the specs give clear instructions for many things, there is no guarantee that implementors will follow them properly, and that causes security gaps.

At the end of day 1, we had a special Organ concert at the beautiful Trondheim Cathedral. On day 2, we had a special talk, “The Viking Kings of Norway”.

If you let me talk about my experience at the conference, I don’t think I will stop before two hours have passed. There was so much excitement and new information, and the whole feeling of going back to my starting days, when I knew nothing much. Every discussion was full of learning opportunities (all discussions are anyway, but being a newbie brings a different level of excitement). The only sad part was leaving Anwesha & Py back in Stockholm; this was the first time I stayed away from them since moving to Sweden.

Just before the conference ended, Aaron Parecki gave me a surprise gift. I spent time with it during the whole flight back to Stockholm.

This conference had the best food I have ever experienced at a conference: breakfasts, lunches, big snack tables, dinners, and restaurant meals. At least four people said in front of me during the conference, “oh, it feels like we are only eating and sometimes talking”.

Another thing I really loved to see: the two primary conference organizers are former university roommates who have continued their friendship and journey in a very beautiful way. Standing outside the hotel after midnight, talking about random things in life, and watching two longtime friends get excited about the same things felt so nice.

I also want to thank the whole organizing team, including the local organizers; Steinar and the rest of the team did a superb job.

So long, Shadowman

Posted by Joe Brockmeier on May 16, 2022 12:46 PM
After nearly nine years, I'm no longer at Red Hat. Feels weird to type that, but it's true. I joined in August 2013 to work in the Open Source and Standards office (now OSPO) when the company was fewer than 6,000 people, Jim Whitehurst was CEO and everybody thought OpenStack was going to be the … Continue reading So long, Shadowman

Friday the 13th: a lucky day :-)

Posted by Peter Czanik on May 16, 2022 11:50 AM

I’m not superstitious, so I never really cared about black cats, Friday the 13th, and other signs of (imagined) trouble. Last Friday (which was the 13th) I had an article printed in a leading computer magazine in Hungary, and I gave my first IRL talk at a conference in well over two years. Best of all, I also met many people, some for the first time in real life.

Free Software Conference: sudo talk

Last Friday, I gave a talk at the Free Software Conference in Szeged. It was my first IRL conference talk in well over two years. I gave my previous non-virtual talk in Pasadena at SCALE; after that, I arrived in Hungary only a day before flights between the EU and the US were shut down due to Covid.

I must admit that I could not finish presenting all my slides. I practiced my talk many times, so that in the end it fit into my time slot. However, I practiced by talking to my screen. That gives no feedback, which is one of the reasons I hate virtual talks. At the event, I could see my audience and read from their faces when something was really interesting or difficult to follow. In both cases, I improvised and added some more details. In the end, I had to skip three of my slides, including the summary. Luckily, all the important slides had already been shown, and since the talk was short, the summary was probably not really missed. Once my talk was over, many people came to me for stickers and to tell me which of the features they had just learned about they planned to implement once back home.

My talk was in Hungarian. Everything about sudo is in English in my head. I had to do some simultaneous interpreting from English to Hungarian, at least when I started practicing my talk. I gave an earlier version of this talk at FOSDEM in English. So, if you want to learn about some of the latest sudo features in English, you can watch it at the FOSDEM website at https://fosdem.org/2022/schedule/event/security_sudo/.

ComputerWorld: syslog-ng article

Once upon a time, I studied journalism in Hungarian. I even wrote a couple of articles in Hungarian half a decade ago. However, I have been writing only in English ever since. The syslog-ng blog, the sudo blog, and even my own personal blog, where you are reading this, are all in English. Other than a few chats and e-mails, all my communication is in English.

Last week the Hungarian edition of ComputerWorld came out with a few extra pages for the Free Software Conference. It also featured an article I wrote about some little-known facts about syslog-ng. Writing in Hungarian was quite a challenge, just like talking in Hungarian. I tried to find a balance in the use of English words. Some people use English expressions for almost everything, so only a few of their words are actually Hungarian. I hate the other extreme even more: when every word is Hungarian and I need to guess what the author is trying to say. I hope I found an enjoyable compromise.

I must admit, it was a great feeling to see my article printed. :-)

Meeting people: Turris, Fedora community

Last Friday I also met many people for the first time in person, or for the first time in person in a long while. I am a member of the Hungarian Fedora community. We used to meet regularly, but not anymore. I keep in touch with individual members over the Internet, but in Szeged I could meet some of them in person and have some longer discussions.

If you checked the website of the conference, you could see that it was the first-ever international version of the event. When I learned that the conference was open not just to Hungarians but to the V4 countries as well, I reached out to the Turris guys. Their booth was super busy during the conference, but luckily, I had a chance to chat with them a bit. Two of their talks were at the same time as mine, but I could listen to their third talk. It was really nice: I learned about the history of the project.

As you can see, my Friday the 13th was a fantastic day. I knock on wood hoping that Friday the 13th stays a lucky day. OK, just kidding :-)

MeeraNew font new release 1.3

Posted by Rajeesh K Nambiar on May 16, 2022 09:43 AM

MeeraNew is the default Malayalam font for Fedora 36. I have just released a much-improved version of this libre font, and it is built for Fedora 36 & Rawhide; the update should reach users within a week. For the impatient, you may enable the updates-testing repository and provide karma/feedback.
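
If you would like to try the update before it reaches the stable repository, something like the following should work (a sketch; I am assuming the Fedora package is named rit-meera-new-fonts, so check the actual name with dnf search meera first):

$ sudo dnf upgrade --enablerepo=updates-testing --refresh rit-meera-new-fonts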

Fig. 1: MeeraNew definitive-script Malayalam font version 1.3.

Two major improvements are present in version 1.3:

  1. Improved x-height to match RIT Rachana, the serif counterpart. This should improve readability at default font sizes.
  2. Large improvements to many kerning pairs including the above-base mark positioning of dotreph (0D4E) character (e.g. ൎയ്യ), ി (0D3F), ീ (0D40) vowel symbols (e.g. ന്റി), post-base symbols of Ya, Va (e.g. സ്ത്യ) etc.

The font source and OTF/TTF files can be downloaded at the Rachana website or on GitLab.

How to rebase to Fedora Linux 36 on Silverblue

Posted by Fedora Magazine on May 16, 2022 08:00 AM

Fedora Silverblue is an operating system for your desktop built on Fedora Linux. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. If you want to update or rebase to Fedora Linux 36 on your Fedora Silverblue system (these instructions are similar for Fedora Kinoite), this article tells you how. It not only shows you what to do, but also how to revert things if something unforeseen happens.

Prior to actually doing the rebase to Fedora Linux 36, you should apply any pending updates. Enter the following in the terminal:

$ rpm-ostree update

or install updates through GNOME Software and reboot.

Rebasing using GNOME Software

GNOME Software shows you that there is a new version of Fedora Linux available on the Updates screen.

Fedora 36 update available

The first thing you need to do is download the new image, so click on the Download button. This will take some time. When it’s done, you will see that the update is ready to install.

Fedora 36 update ready to install

Click on the Restart & Upgrade button. This step will take only a few moments, and the computer will restart at the end. After the restart you will end up in the new and shiny release of Fedora Linux 36. Easy, isn’t it?

Rebasing using terminal

If you prefer to do everything in a terminal, then this part of the guide is for you.

Rebasing to Fedora Linux 36 using the terminal is easy. First, check if the 36 branch is available:

$ ostree remote refs fedora

You should see the following in the output:

fedora:fedora/36/x86_64/silverblue

If you want to pin the current deployment (a pinned deployment will stay as an option in GRUB until you remove it), you can do so by running:

$ sudo ostree admin pin 0

To remove the pinned deployment use the following command:

$ sudo ostree admin pin --unpin 2

where 2 is the position of the deployment in the output of rpm-ostree status.
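
For reference, a trimmed rpm-ostree status might look like the following (versions are illustrative). Deployments are counted from 0 starting at the top, and pinned ones are marked:

$ rpm-ostree status
State: idle
Deployments:
● fedora:fedora/36/x86_64/silverblue
                  Version: 36.20220516.0 (2022-05-16T00:41:55Z)
  fedora:fedora/35/x86_64/silverblue
                  Version: 35.20220514.0 (2022-05-14T00:40:35Z)
                   Pinned: yes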

Next, rebase your system to the Fedora Linux 36 branch.

$ rpm-ostree rebase fedora:fedora/36/x86_64/silverblue

Finally, restart your computer and boot into Fedora Linux 36.

How to roll back

If anything bad happens—for instance, if you can’t boot to Fedora Linux 36 at all—it’s easy to go back. Pick the previous entry in the GRUB menu at boot (if you don’t see it, try pressing ESC during boot), and your system will start in the state it was in before switching to Fedora Linux 36. To make this change permanent, use the following command:

$ rpm-ostree rollback

That’s it. Now you know how to rebase Fedora Silverblue to Fedora Linux 36 and roll back. So why not do it today?

Can we fix bearer tokens?

Posted by Matthew Garrett on May 16, 2022 07:48 AM
Last month I wrote about how bearer tokens are just awful, and a week later Github announced that someone had managed to exfiltrate bearer tokens from Heroku that gave them access to, well, a lot of Github repositories. This has inevitably resulted in a whole bunch of discussion about a number of things, but people seem to be largely ignoring the fundamental issue that maybe we just shouldn't have magical blobs that grant you access to basically everything even if you've copied them from a legitimate holder to Honest John's Totally Legitimate API Consumer.

To make it clearer what the problem is here, let's use an analogy. You have a safety deposit box. To gain access to it, you simply need to be able to open it with a key you were given. Anyone who turns up with the key can open the box and do whatever they want with the contents. Unfortunately, the key is extremely easy to copy - anyone who is able to get hold of your keyring for a moment is in a position to duplicate it, and then they have access to the box. Wouldn't it be better if something could be done to ensure that whoever showed up with a working key was someone who was actually authorised to have that key?

To achieve that we need some way to verify the identity of the person holding the key. In the physical world we have a range of ways to achieve this, from simply checking whether someone has a piece of ID that associates them with the safety deposit box all the way up to invasive biometric measurements that supposedly verify that they're definitely the same person. But computers don't have passports or fingerprints, so we need another way to identify them.

When you open a browser and try to connect to your bank, the bank's website provides a TLS certificate that lets your browser know that you're talking to your bank instead of someone pretending to be your bank. The spec allows this to be a bi-directional transaction - you can also prove your identity to the remote website. This is referred to as "mutual TLS", or mTLS, and a successful mTLS transaction ends up with both ends knowing who they're talking to, as long as they have a reason to trust the certificate they were presented with.

That's actually a pretty big constraint! We have a reasonable model for the server - it's something that's issued by a trusted third party and it's tied to the DNS name for the server in question. Clients don't tend to have stable DNS identity, and that makes the entire thing sort of awkward. But, thankfully, maybe we don't need to? We don't need the client to be able to prove its identity to arbitrary third party sites here - we just need the client to be able to prove it's a legitimate holder of whichever bearer token it's presenting to that site. And that's a much easier problem.

Here's the simple solution - clients generate a TLS cert. This can be self-signed, because all we want to do here is be able to verify whether the machine talking to us is the same one that had a token issued to it. The client contacts a service that's going to give it a bearer token. The service requests mTLS auth without being picky about the certificate that's presented. The service embeds a hash of that certificate in the token before handing it back to the client. Whenever the client presents that token to any other service, the service ensures that the mTLS cert the client presented matches the hash in the bearer token. Copy the token without copying the mTLS certificate and the token gets rejected. Hurrah hurrah hats for everyone.

Well except for the obvious problem that if you're in a position to exfiltrate the bearer tokens you can probably just steal the client certificates and keys as well, and now you can pretend to be the original client and this is not adding much additional security. Fortunately pretty much everything we care about has the ability to store the private half of an asymmetric key in hardware (TPMs on Linux and Windows systems, the Secure Enclave on Macs and iPhones, either a piece of magical hardware or Trustzone on Android) in a way that avoids anyone being able to just steal the key.

How do we know that the key is actually in hardware? Here's the fun bit - it doesn't matter. If you're issuing a bearer token to a system then you're already asserting that the system is trusted. If the system is lying to you about whether or not the key it's presenting is hardware-backed then you've already lost. If it lied and the system is later compromised then sure all your apes get stolen, but maybe don't run systems that lie and avoid that situation as a result?

Anyway. This is covered in RFC 8705 so why aren't we all doing this already? From the client side, the largest generic issue is that TPMs are astonishingly slow in comparison to doing a TLS handshake on the CPU. RSA signing operations on TPMs can take around half a second, which doesn't sound too bad, except your browser is probably establishing multiple TLS connections to subdomains on the site it's connecting to and performance is going to tank. Fixing this involves doing whatever's necessary to convince the browser to pipe everything over a single TLS connection, and that's just not really where the web is right at the moment. Using EC keys instead helps a lot (~0.1 seconds per signature on modern TPMs), but it's still going to be a bottleneck.

The other problem, of course, is that ecosystem support for hardware-backed certificates is just awful. Windows lets you stick them into the standard platform certificate store, but the docs for this are hidden in a random PDF in a Github repo. Macs require you to do some weird bridging between the Secure Enclave API and the keychain API. Linux? Well, the standard answer is to do PKCS#11, and I have literally never met anybody who likes PKCS#11 and I have spent a bunch of time in standards meetings with the sort of people you might expect to like PKCS#11 and even they don't like it. It turns out that loading a bunch of random C bullshit that has strong feelings about function pointers into your security critical process is not necessarily something that is going to improve your quality of life, so instead you should use something like this and just have enough C to bridge to a language that isn't secretly plotting to kill your pets the moment you turn your back.

And, uh, obviously none of this matters at all unless people actually support it. Github has no support at all for validating the identity of whoever holds a bearer token. Most issuers of bearer tokens have no support for embedding holder identity into the token. This is not good! As of last week, all three of the big cloud providers support virtualised TPMs in their VMs - we should be running CI on systems that can do that, and tying any issued tokens to the VMs that are supposed to be making use of them.

So sure this isn't trivial. But it's also not impossible, and making this stuff work would improve the security of, well, everything. We literally have the technology to prevent attacks like Github suffered. What do we have to do to get people to actually start working on implementing that?

Response to “Flatpak Is Not the Future”

Posted by TheEvilSkeleton on May 16, 2022 12:00 AM

Introduction

Late last year, the interesting article “Flatpak Is Not the Future” was published, and it very quickly grabbed the Linux community’s attention. I want to go over some of the author’s arguments and clear up some of the misunderstandings behind its claims.

Do keep in mind that I have nothing against the author’s opinion. The point of this response is to reduce the amount of misinformation and misunderstanding that the article might have caused, as I have seen (and still see) many users post this article very frequently, without having a proper understanding of the subject.

Alright, let’s get started.

“Size”

Suppose you want to make a simple calculator app. How big should the download be?

[…]

Other solutions like Flatpak or Steam download the runtime separately. Your app metadata specifies what runtime it wants to use and a service downloads it and runs your app against it.

So how big are these runtimes? On a fresh machine, install KCalc from Flathub. You’re looking at a nearly 900 MB download to get your first runtime. For a calculator.

        ID                                      Branch    Op   Remote    Download
 1.     org.freedesktop.Platform.GL.default     20.08     i    flathub   < 106.4 MB
 2.     org.freedesktop.Platform.VAAPI.Intel    20.08     i    flathub    < 11.6 MB
 3.     org.freedesktop.Platform.openh264       2.0       i    flathub     < 1.5 MB
 4.     org.kde.KStyle.Adwaita                  5.15      i    flathub     < 6.6 MB
 5.     org.kde.Platform.Locale                 5.15      i    flathub   < 341.4 MB (partial)
 6.     org.kde.Platform                        5.15      i    flathub   < 370.1 MB
 7.     org.kde.kcalc.Locale                    stable    i    flathub   < 423.1 kB (partial)
 8.     org.kde.kcalc                           stable    i    flathub     < 4.4 MB

Note that the app package itself is only 4.4 MB. The rest is all redundant libraries that are already on my system. I just ran the kcalc binary straight out of its Flatpak install path unsandboxed and let it use my native libraries. It ran just fine, because all of the libraries it uses are backwards compatible.

Flatpak wants to download 3D drivers, patented video codecs, themes, locales, Qt 5, KDE 5, GTK 3, ICU, LLVM, ffmpeg, Python, and everything else in org.kde.Platform, all to run a calculator. Because unlike AppImage, the runtime isn’t stripped down to just what the app needs. It’s got every dependency for any app. It’s an entire general-purpose OS on top of your existing OS.

Flatpak installs these runtimes to ensure that you and other users run the exact same binaries and libraries across different systems, whether the libraries are backwards compatible or not. This reduces the amount of quality assurance (QA) application developers need to do and reduces bugs as much as possible, because developers can test against the same toolchain and dependencies their users run.

By running host libraries, there is a risk of running into distribution-specific bugs or other negative side effects caused by, e.g., patched libraries, slightly older or newer versions of libraries, overlooked dependencies, etc.

Also, this is actually quite similar to system packages. Suppose you are using GNOME and you want to install KCalc. If you don’t have the Plasma dependencies installed, the package manager will download and install all the needed dependencies, which means installing the 4.4 MB of KCalc atop the Qt and Plasma dependencies.

Just like with system packages, the more applications you install, the more space efficient Flatpak becomes. Flatpak goes through a process called deduplication, where it reuses dependencies whenever possible, avoiding the need to duplicate data. In the author’s example, KCalc pulls 900 MB because of the runtime and drivers. Now, suppose you install 10 more Qt applications. Instead of redownloading and reinstalling 900 MB of runtime 10 more times (9 GB in total), all 10 Qt applications will keep using the same 900 MB runtime. Deduplication ensures that applications relying on the same dependencies keep reusing them.

“Sharing Runtimes?”

They claim that they deduplicate runtimes. I question how much can really be shared between different branches when everything is recompiled. How much has /usr changed between releases of Ubuntu? I would guess just about all of it.

Regarding this section, Will Thompson already wrote a response on his website: On Flatpak disk usage and deduplication. He goes over the sizes of runtimes and how much effect deduplication has. It’s a very detailed explanation of deduplication, and I recommend reading it.

To summarize, between the freedesktop.org 20.08 and 21.08 runtimes, 113 MB out of 498 MB were deduplicated. And between the GNOME 41 and freedesktop.org 21.08 runtimes, 388 MB out of 715 MB were deduplicated. He also goes more in depth with these runtimes and applications.

To add to that, not only are the runtimes deduplicated; contents outside the runtimes are deduplicated too, as long as the files share the same hash (checksum). Alexander Larsson, the maintainer of Flatpak, explains it in more detail in this video.

Storage Usage

Let’s check the amount of storage the runtimes use on my system. First, let’s count how many I have installed:

$ flatpak list --runtime --user | wc -l
57

Here, I have 57 runtimes installed.

Now, let’s look at the runtimes I have installed:

$ flatpak list --runtime --user
Name                                                   Application ID                                               Version             Branch              Origin
Codecs                                                 com.github.Eloston.UngoogledChromium.Codecs                                      stable              flathub
Proton (community build)                               com.valvesoftware.Steam.CompatibilityTool.Proton             7.0-2               stable              flathub
Proton experimental (community build)                  com.valvesoftware.Steam.CompatibilityTool.Proton-Exp         7.0-20220511        stable              flathub
Proton-GE (community build)                            com.valvesoftware.Steam.CompatibilityTool.Proton-GE          7.17-1              stable              flathub
gamescope                                              com.valvesoftware.Steam.Utility.gamescope                    3.11.28             stable              flathub
steamtinkerlaunch                                      com.valvesoftware.Steam.Utility.steamtinkerlaunch                                test                steamtinkerlaunch-origin
Codecs                                                 org.audacityteam.Audacity.Codecs                                                 stable              flathub
Codecs                                                 org.chromium.Chromium.Codecs                                                     stable              flathub
Fedora Platform                                        org.fedoraproject.Platform                                   35                  f35                 fedora
LSP                                                    org.freedesktop.LinuxAudio.Plugins.LSP                       1.1.30              20.08               flathub
LSP                                                    org.freedesktop.LinuxAudio.Plugins.LSP                       1.2.1               21.08               flathub
TAP-plugins                                            org.freedesktop.LinuxAudio.Plugins.TAP                       1.0.1               21.08               flathub
ZamPlugins                                             org.freedesktop.LinuxAudio.Plugins.ZamPlugins                3.14                20.08               flathub
ZamPlugins                                             org.freedesktop.LinuxAudio.Plugins.ZamPlugins                3.14                21.08               flathub
SWH                                                    org.freedesktop.LinuxAudio.Plugins.swh                       0.4.17              21.08               flathub
Freedesktop Platform                                   org.freedesktop.Platform                                     20.08.19            20.08               flathub
Freedesktop Platform                                   org.freedesktop.Platform                                     21.08.13            21.08               flathub
i386                                                   org.freedesktop.Platform.Compat.i386                                             21.08               flathub
Mesa                                                   org.freedesktop.Platform.GL.default                          21.1.8              20.08               flathub
Mesa                                                   org.freedesktop.Platform.GL.default                          21.3.8              21.08               flathub
default                                                org.freedesktop.Platform.GL32.default                                            21.08               flathub
MangoHud                                               org.freedesktop.Platform.VulkanLayer.MangoHud                0.6.6-1             21.08               flathub
vkBasalt                                               org.freedesktop.Platform.VulkanLayer.vkBasalt                0.3.2.5             21.08               flathub
ffmpeg-full                                            org.freedesktop.Platform.ffmpeg-full                                             21.08               flathub
i386                                                   org.freedesktop.Platform.ffmpeg_full.i386                                        21.08               flathub
openh264                                               org.freedesktop.Platform.openh264                            2.1.0               2.0                 flathub
Freedesktop SDK                                        org.freedesktop.Sdk                                          21.08.13            21.08               flathub
i386                                                   org.freedesktop.Sdk.Compat.i386                                                  21.08               flathub
.NET Core SDK extension                                org.freedesktop.Sdk.Extension.dotnet6                        6.0.300             21.08               flathub
Free Pascal Compiler and Lazarus                       org.freedesktop.Sdk.Extension.freepascal                     3.2.2               21.08               flathub
toolchain-i386                                         org.freedesktop.Sdk.Extension.toolchain-i386                                     21.08               flathub
GNOME Boxes Osinfo DB                                  org.gnome.Boxes.Extension.OsinfoDb                           20220214            stable              flathub
GNOME Application Platform version 41                  org.gnome.Platform                                                               41                  flathub
GNOME Application Platform version 42                  org.gnome.Platform                                                               42                  flathub
GNOME Application Platform version Nightly             org.gnome.Platform                                                               master              gnome-nightly
i386                                                   org.gnome.Platform.Compat.i386                                                   41                  flathub
i386                                                   org.gnome.Platform.Compat.i386                                                   42                  flathub
GNOME Software Development Kit version 41              org.gnome.Sdk                                                                    41                  flathub
GNOME Software Development Kit version 42              org.gnome.Sdk                                                                    42                  flathub
GNOME Software Development Kit version Nightly         org.gnome.Sdk                                                                    master              gnome-nightly
i386                                                   org.gnome.Sdk.Compat.i386                                                        41                  flathub
i386                                                   org.gnome.Sdk.Compat.i386                                                        42                  flathub
Adwaita dark GTK theme                                 org.gtk.Gtk3theme.Adwaita-dark                                                   3.22                flathub
adw-gtk3 Gtk Theme                                     org.gtk.Gtk3theme.adw-gtk3                                                       3.22                flathub
adw-gtk3-dark Gtk Theme                                org.gtk.Gtk3theme.adw-gtk3-dark                                                  3.22                flathub
Kvantum theme engine                                   org.kde.KStyle.Kvantum                                       1.0.1               5.15-21.08          flathub
KDE Application Platform                               org.kde.Platform                                                                 5.15-21.08          flathub
QGnomePlatform                                         org.kde.PlatformTheme.QGnomePlatform                                             5.15                flathub
QGnomePlatform                                         org.kde.PlatformTheme.QGnomePlatform                                             5.15-21.08          flathub
QtSNI                                                  org.kde.PlatformTheme.QtSNI                                                      5.15                flathub
QtSNI                                                  org.kde.PlatformTheme.QtSNI                                                      5.15-21.08          flathub
KDE Software Development Kit                           org.kde.Sdk                                                                      5.15-21.08          flathub
QGnomePlatform-decoration                              org.kde.WaylandDecoration.QGnomePlatform-decoration                              5.15                flathub
QGnomePlatform-decoration                              org.kde.WaylandDecoration.QGnomePlatform-decoration                              5.15-21.08          flathub
DXVK                                                   org.winehq.Wine.DLLs.dxvk                                    1.10.1              stable-21.08        flathub
Gecko                                                  org.winehq.Wine.gecko                                                            stable-21.08        flathub
Mono                                                   org.winehq.Wine.mono                                                             stable-21.08        flathub

This is a huge list of all the runtimes I have installed, spanning different versions, branches, and origins.

I install Flatpak applications with the --user flag, meaning everything is installed per user. This means that all the Flatpak installs are located in the ~/.local/share/flatpak directory.

I recently wrote a script that contrasts the size of Flatpak applications, runtimes, or both. It reports the size without deduplication, with deduplication, and with compression if applicable.

The script checks how many hard links each file in a given Flatpak directory has and multiplies accordingly. For example, if a file has 10 hard links, the script counts its size 10 times (though in reality it is stored only once). Then it checks the actual size (with deduplication). And lastly, on btrfs with transparent compression, the script also checks the size with compression taken into account.
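
As a rough sketch of the same idea (not the author’s script): list each unique inode once with its link count and size, then sum the sizes both ways. Note that link counts include hard links outside the scanned tree (e.g., into the ostree repo), so treat the first number as an upper bound:

$ find ~/.local/share/flatpak/runtime -type f -printf '%i %n %s\n' | sort -u \
      | awk '{ apparent += $2 * $3; actual += $3 }
             END { printf "without dedup: %.2f GB\nwith dedup:    %.2f GB\n",
                   apparent / 1e9, actual / 1e9 }'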

With the help of this script, let’s look at how much data they’re taking up:

$ ./flatpak-dedup-checker --path=~/.local/share/flatpak --runtime
Directory:                  /var/home/TheEvilSkeleton/.local/share/flatpak/runtime
Size without deduplication: 36.22 GB
Size with deduplication:    13.07 GB (36% of 36.22 GB)

That’s it! With deduplication, the 57 runtimes use only 13.07 GB, versus 36.22 GB without: way less than half.

We can observe that deduplication is actually effective. A lot more than half is deduplicated because runtimes already share tons of files to begin with.

”"Disk space is cheap!"”

They say disk space is cheap. This is not true, not for the root devices of modern computers. Built-in storage has in fact been shrinking.

Software has gotten so much slower and more bloated that operating systems no longer run acceptably on spinning rust. Laptop manufacturers are switching to smaller flash drives to improve performance while preserving margins. Budget laptops circa 2015 shipped with 256 GB or larger mechanical drives. Now in 2021 they ship with 120 GB flash. NVMe drives are around $100/TB and laptop manufacturers inflate these prices 500% or more so upgrading a new laptop can be pricey.

Chromebooks are even smaller as they push everything onto cloud storage. Smartphones are starting to run full-fledged Linux distributions. The Raspberry Pi 4 and 400 use an SD card as root device and have such fantastic performance that we’re on the verge of a revolution in low-cost computing. Surely Flatpak should be usable on these systems! There is no reason why a 16 GB root device shouldn’t fit every possible piece of non-game software we could want. Flatpak isn’t part of the revolution; it’s holding it back.

Built-in storage has definitely become smaller. However, flash storage has a higher effective density than hard drives because of built-in compression and deduplication in SSD and flash controllers.

Thanks to these features, the operating system, games, and other data take less space on flash memory. In other words: you, as an end user, see a smaller advertised capacity, but your data also takes less space on flash than it would on a hard drive.

Additionally, we can enable btrfs transparent compression to save a lot of space. Fedora Linux ships with zstd:1 (zstd level 1) compression by default, which is what I use in the following example.
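
On Fedora this is just a btrfs mount option; you can confirm it is active like so (output shown from a default Fedora install):

$ findmnt -no OPTIONS / | tr ',' '\n' | grep compress
compress=zstd:1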

Let’s contrast the size of the results above, but now with compression:

$ ./flatpak-dedup-checker --path=~/.local/share/flatpak --runtime
Directory:                  /var/home/TheEvilSkeleton/.local/share/flatpak/runtime
Size without deduplication: 36.22 GB
Size with deduplication:    13.07 GB (36% of 36.22 GB)
Size with compression:      9.41 GB (25% of 36.22 GB; 71% of 13.07 GB)

Even better. With deduplication alone, the runtimes take 36% of 36.22 GB. Paired with zstd:1, they literally take a quarter.

Let’s do the same with all the installed applications:

$ flatpak list --app --user | wc --lines
79

I have 79 applications installed. Here’s the list of all the applications installed:

$ flatpak list --app --user
Name                                       Application ID                                        Version                       Branch                   Origin
Decoder                                    com.belmoussaoui.Decoder                              0.2.2                         stable                   flathub
Brave Browser                              com.brave.Browser                                     1.38.115                      stable                   flathub
Discord                                    com.discordapp.Discord                                0.0.17                        stable                   flathub
Discord Canary                             com.discordapp.DiscordCanary                          0.0.135                       beta                     flathub-beta
Mindustry                                  com.github.Anuken.Mindustry                           126.2                         stable                   flathub
ungoogled-chromium                         com.github.Eloston.UngoogledChromium                  101.0.4951.64                 stable                   flathub
Notepad Next                               com.github.dail8859.NotepadNext                       v0.5.1                        stable                   flathub
Tor Browser Launcher                       com.github.micahflee.torbrowser-launcher              0.3.5                         stable                   flathub
waifu2x-ncnn-vulkan                        com.github.nihui.waifu2x-ncnn-vulkan                  20220419                      stable                   flathub
Czkawka                                    com.github.qarmin.czkawka                             4.1.0                         stable                   flathub
Avvie                                      com.github.taiko2k.avvie                              2.1                           stable                   flathub
Flatseal                                   com.github.tchx84.Flatseal                            1.7.5                         stable                   flathub
EasyEffects                                com.github.wwmm.easyeffects                           6.2.5                         stable                   flathub
NewsFlash                                  com.gitlab.newsflash                                  1.5.1                         stable                   flathub
Google Chrome                              com.google.Chrome                                     101.0.4951.41-1               beta                     flathub-beta
Extension Manager                          com.mattjakeman.ExtensionManager                      0.3.0                         stable                   flathub
Microsoft Edge                             com.microsoft.Edge                                    101.0.1210.39-1               stable                   flathub
OBS Studio                                 com.obsproject.Studio                                 27.2.4                        stable                   flathub
BlackBox                                   com.raggesilver.BlackBox                              42.alpha0                     master                   blackbox-origin
Bottles                                    com.usebottles.bottles                                2022.5.2-trento-3             stable                   flathub
Steam                                      com.valvesoftware.Steam                               1.0.0.74                      stable                   flathub
Visual Studio Code                         com.visualstudio.code                                 1.67.0-1651667246             stable                   flathub
Fragments                                  de.haeckerfelix.Fragments                             2.0.2                         stable                   flathub
Boop-GTK                                   fyi.zoey.Boop-GTK                                     1.6.0                         stable                   flathub
Element                                    im.riot.Riot                                          1.10.12                       stable                   flathub
Amberol                                    io.bassi.Amberol                                      0.6.1                         stable                   flathub
youtubedl-gui                              io.github.JaGoLi.ytdl_gui                             3.0                           stable                   flathub
Celluloid                                  io.github.celluloid_player.Celluloid                  0.23                          stable                   flathub
Mousai                                     io.github.seadve.Mousai                               0.6.6                         stable                   flathub
LibreWolf                                  io.gitlab.librewolf-community                         100.0-2                       stable                   flathub
Lutris                                     net.lutris.Lutris                                     0.5.10.1                      beta                     flathub-beta
Poedit                                     net.poedit.Poedit                                     3.0.1                         stable                   flathub
Color Picker                               nl.hjdskes.gcolor3                                    2.4.0                         stable                   flathub
Audacity                                   org.audacityteam.Audacity                             3.1.3                         stable                   flathub
Chromium Web Browser                       org.chromium.Chromium                                 101.0.4951.64                 stable                   flathub
Chromium application base                  org.chromium.Chromium.BaseApp                                                       21.08                    flathub
Electron2 application base                 org.electronjs.Electron2.BaseApp                                                    21.08                    flathub
Fedora Media Writer                        org.fedoraproject.MediaWriter                         4.2.2                         stable                   fedora
Flatpak External Data Checker              org.flathub.flatpak-external-data-checker                                           stable                   flathub
appstream-glib                             org.freedesktop.appstream-glib                                                      stable                   flathub
Feeds                                      org.gabmus.gfeeds                                     1.0.3                         stable                   flathub
GNU Image Manipulation Program             org.gimp.GIMP                                         2.99.10                       beta                     flathub-beta
Adwaita Demo                               org.gnome.Adwaita1.Demo                               1.2.alpha                     master                   gnome-nightly
Boxes                                      org.gnome.Boxes                                       42.0                          stable                   flathub
Builder                                    org.gnome.Builder                                     42.0                          stable                   flathub
Calendar                                   org.gnome.Calendar                                    42.0                          stable                   flathub
Contacts                                   org.gnome.Contacts                                    42.0                          stable                   flathub
Web                                        org.gnome.Epiphany.Devel                              42~beta                       master                   devel-origin
File Roller                                org.gnome.FileRoller                                  3.42.0                        stable                   flathub
Firmware                                   org.gnome.Firmware                                    42.1                          stable                   flathub
Fractal                                    org.gnome.Fractal.Devel                               5.alpha                       master                   gnome-nightly
Geary                                      org.gnome.Geary                                       40.0                          stable                   flathub
Notes                                      org.gnome.Notes                                       40.1                          stable                   flathub
Text Editor                                org.gnome.TextEditor                                  42.1                          stable                   flathub
Weather                                    org.gnome.Weather                                     42.0                          stable                   flathub
Clocks                                     org.gnome.clocks                                      42.0                          stable                   flathub
Contrast                                   org.gnome.design.Contrast                             0.0.5                         stable                   flathub
Image Viewer                               org.gnome.eog                                         42.1                          stable                   flathub
Fonts                                      org.gnome.font-viewer                                 42.0                          stable                   flathub
gitg                                       org.gnome.gitg                                        41                            stable                   flathub
Identity                                   org.gnome.gitlab.YaLTeR.Identity                      0.3.0                         stable                   flathub
Iotas                                      org.gnome.gitlab.cheywood.Iotas                       0.1.1                         stable                   flathub
Apostrophe                                 org.gnome.gitlab.somas.Apostrophe                     2.6.3                         stable                   flathub
Inkscape                                   org.inkscape.Inkscape                                 1.1.2                         stable                   flathub
Kdenlive                                   org.kde.kdenlive                                      22.04.0                       stable                   flathub
Krita                                      org.kde.krita                                         5.0.2                         stable                   flathub
LibreOffice                                org.libreoffice.LibreOffice                           7.3.3.2                       stable                   flathub
Thunderbird                                org.mozilla.Thunderbird                               91.9.0                        stable                   flathub
Firefox                                    org.mozilla.firefox                                   100.0                         stable                   flathub
Olive                                      org.olivevideoeditor.Olive                            0.1.2                         stable                   flathub
ONLYOFFICE Desktop Editors                 org.onlyoffice.desktopeditors                         7.0.1                         stable                   flathub
Helvum                                     org.pipewire.Helvum                                   0.3.4                         stable                   flathub
PolyMC                                     org.polymc.PolyMC                                     1.2.2                         stable                   flathub
PulseAudio Volume Control                  org.pulseaudio.pavucontrol                            5.0                           stable                   flathub
qBittorrent                                org.qbittorrent.qBittorrent                           4.4.2                         stable                   flathub
QOwnNotes                                  org.qownnotes.QOwnNotes                               22.5.0                        stable                   flathub
Tenacity                                   org.tenacityaudio.Tenacity                                                          nightly                  tenacity
Wine                                       org.winehq.Wine                                       7.0                           stable-21.08             flathub
Commit                                     re.sonny.Commit                                       3.2.0                         stable                   flathub

Let’s check the amount of storage they take without deduplication, with deduplication, and with deduplication and compression:

$ ./flatpak-dedup-checker --path=~/.local/share/flatpak --app
Directory:                  /var/home/TheEvilSkeleton/.local/share/flatpak/app
Size without deduplication: 11.16 GB
Size with deduplication:    9.78 GB (87% of 11.16 GB)
Size with compression:      7.81 GB (69% of 11.16 GB; 79% of 9.78 GB)

Again, deduplication and compression are effective here, too. Applications already take up relatively little space, and since applications generally contain non-identical files, there is less opportunity for deduplication than with runtimes. With deduplication, applications use 87% of the non-deduplicated 11.16 GB; paired with zstd:1, 69% of 11.16 GB.

Now, with runtimes and applications combined:

$ ./flatpak-dedup-checker --path=~/.local/share/flatpak
Directories:                /var/home/TheEvilSkeleton/.local/share/flatpak/{runtime,app}
Size without deduplication: 47.38 GB
Size with deduplication:    22.75 GB (48% of 47.38 GB)
Size with compression:      17.17 GB (36% of 47.38 GB; 75% of 22.75 GB)

We can observe here that compression is effective. This is also a driving force behind Fedora Linux pushing btrfs to users, so we can take advantage of modern features like compression. Hopefully more and more distributions follow in the same footsteps.

“Memory Usage, Startup Time”

A bigger problem is that these applications can actually take several seconds to start up. They have to load all their own libraries from disk instead of using what’s already on the system, already in memory.

This assumes the user has the same application installed both on the host system and as a Flatpak, and loads both. I can only imagine that a tiny percentage of users have both variants (Flatpak and native) installed and use both frequently. Otherwise, there shouldn’t be a problem if these applications are primarily used as Flatpaks.

Distributions that heavily push Flatpak, like Fedora Silverblue/Kinoite, Endless OS and elementary OS, use it as the primary way to install applications. As a side effect, these distributions have a really small base install. For example, an Endless OS install takes roughly 4.2 GB, according to Will Thompson in On Flatpak disk usage and deduplication.

“Security”

Flatpak allows applications to declare that they need full access to your filesystem or your home folder, yet graphical software stores still claim such applications are sandboxed. This has been discussed before. Here’s what happens when I search GIMP in the Software application on a fresh install of Fedora 34:

[Screenshot: GIMP shown as sandboxed in GNOME Software on Fedora 34]

This has been discussed before too. I also want to add that this is an issue of GNOME Software displaying the information incorrectly, rather than Flatpak being at fault. On another note, this has been fixed since GNOME 41:

[Screenshots: GNOME Software 41 showing the corrected permission display]

Regardless, I personally believe it is unfair to blame the backend utility (Flatpak) for an issue caused by the frontend (GNOME Software). The flatpak command-line utility makes very clear which permissions GIMP, or any other application, will use by default:

$ flatpak install org.gimp.GIMP
org.gimp.GIMP permissions:
    ipc                   network       x11      dri      file access [1]
    dbus access [2]       tags [3]

    [1] /tmp, host, xdg-config/GIMP, xdg-config/gtk-3.0, xdg-run/gvfs, xdg-run/gvfsd
    [2] org.freedesktop.FileManager1, org.gnome.Shell.Screenshot, org.gtk.vfs, org.gtk.vfs.*,
        org.kde.kwin.Screenshot
    [3] stable


        ID                    Branch         Op         Remote         Download
 1.     org.gimp.GIMP         stable         i          flathub        < 121.3 MB

Proceed with these changes to the user installation? [Y/n]:

Flatpak itself has always been upfront about permissions, and I don’t think it’s fair to criticize it for an issue it didn’t cause.

”"It’s better than nothing!"”

Flatpak and Snap apologists claim that some security is better than nothing. This is not true. From a purely technical perspective, for applications with filesystem access the security is exactly equal to nothing. In reality it’s actually worse than nothing because it leads people to place more trust than they should in random applications they find on the internet.

From a purely technical perspective, Flatpak does have some security benefits, whether filesystem=host/home is granted or not. Filesystem access does not automatically mean that the application in question has access to everything.

First of all, Flatpak applications cannot see the content of other Flatpak applications. Let’s contrast what the host shell sees with what the shell inside the GIMP container sees under ~/.var/app, where Flatpak applications’ user configuration lives.

Let’s first check what the host shell sees:

$ ls ~/.var/app | wc --lines
83
$ ls ~/.var/app
com.belmoussaoui.Decoder                  com.microsoft.Edge                    org.chromium.Chromium                      org.gnome.Contacts         org.gnome.gitlab.cheywood.Iotas    org.mozilla.Firefox
com.brave.Browser                         com.obsproject.Studio                 org.cubocore.CoreArchiver                  org.gnome.design.Contrast  org.gnome.gitlab.somas.Apostrophe  org.mozilla.firefox
com.discordapp.Discord                    com.raggesilver.Terminal              org.fedoraproject.MediaWriter              org.gnome.eog              org.gnome.gitlab.YaLTeR.Identity   org.olivevideoeditor.Olive
com.discordapp.DiscordCanary              com.usebottles.bottles                org.flathub.flatpak-external-data-checker  org.gnome.eog.Devel        org.gnome.NautilusDevel            org.onlyoffice.desktopeditors
com.github.Anuken.Mindustry               com.valvesoftware.Steam               org.freedesktop.appstream-glib             org.gnome.Epiphany.Devel   org.gnome.Notes                    org.pipewire.Helvum
com.github.dail8859.NotepadNext           de.haeckerfelix.Fragments             org.gabmus.gfeeds                          org.gnome.Evince           org.gnome.Screenshot               org.polymc.PolyMC
com.github.Eloston.UngoogledChromium      fyi.zoey.Boop-GTK                     org.gimp.GIMP                              org.gnome.FileRoller       org.gnome.TextEditor               org.pulseaudio.pavucontrol
com.github.micahflee.torbrowser-launcher  im.riot.Riot                          org.gnome.Adwaita1.Demo                    org.gnome.Firmware         org.gnome.Totem                    org.qbittorrent.qBittorrent
com.github.nihui.waifu2x-ncnn-vulkan      io.bassi.Amberol                      org.gnome.Boxes                            org.gnome.font-viewer      org.gnome.Weather                  org.qownnotes.QOwnNotes
com.github.tchx84.Flatseal                io.github.celluloid_player.Celluloid  org.gnome.Builder                          org.gnome.Fractal.Devel    org.inkscape.Inkscape              org.tenacityaudio.Tenacity
com.github.wwmm.easyeffects               io.github.JaGoLi.ytdl_gui             org.gnome.Calculator                       org.gnome.FractalDevel     org.kde.alligator                  org.winehq.Wine
com.gitlab.newsflash                      io.github.seadve.Mousai               org.gnome.Calendar                         org.gnome.Geary            org.kde.kdenlive                   re.sonny.Commit
com.google.Chrome                         net.poedit.Poedit                     org.gnome.Characters                       org.gnome.gedit            org.kde.krita                      us.zoom.Zoom
com.mattjakeman.ExtensionManager          nl.hjdskes.gcolor3                    org.gnome.clocks                           org.gnome.gitg             org.libreoffice.LibreOffice

We can observe that the host sees everything inside ~/.var/app.

Let’s check inside the GIMP container:

$ flatpak run --command=bash org.gimp.GIMP
[📦 org.gimp.GIMP theevilskeleton.gitlab.io]$ ls ~/.var/app
org.gimp.GIMP

Unlike the host shell, the GIMP container can only see ~/.var/app/org.gimp.GIMP, which is where the GIMP configuration lives. GIMP cannot see my Geary, GNOME Contacts or other Flatpak applications’ configurations by default, even with filesystem=host, meaning it is much harder for GIMP to tamper with or read other applications’ data.

Additionally, these applications don’t have access to every API or framework on the system by default. I have my computer connected and synced with my Nextcloud instance and my Google and Microsoft accounts. My computer communicates with these accounts through the OnlineAccounts framework, which also allows applications like GNOME Contacts, Notes and more to sync my contacts, notes and other information. GNOME Contacts has the org.gnome.OnlineAccounts=talk permission, which means it can “talk” to the framework and access this information, whereas GIMP doesn’t.

To confirm, let’s check if GIMP, an application that comes with filesystem=host, has the org.gnome.OnlineAccounts=talk permission:

$ flatpak info --show-permissions org.gimp.GIMP
[Context]
shared=network;ipc;
sockets=x11;wayland;fallback-x11;
devices=dri;
filesystems=xdg-config/GIMP;xdg-config/gtk-3.0;/tmp;xdg-run/gvfsd;host;xdg-run/gvfs;

[Session Bus Policy]
org.kde.kwin.Screenshot=talk
org.gtk.vfs.*=talk
org.gnome.Shell.Screenshot=talk
org.freedesktop.FileManager1=talk

We notice that there is no org.gnome.OnlineAccounts=talk here. Without this permission, it is much harder for GIMP to communicate with the OnlineAccounts framework, so it cannot fetch my contacts, notes or other information from the accounts I am connected to.

Needless to say, it is definitely a problem that filesystem=host/home can write to sensitive locations. But the fact that GIMP would need extra steps to gain access to my other personal information means that Flatpak slows down bad actors even with filesystem=host/home. Also, the majority of applications don’t come with those permissions. In the realm of security, 100% security doesn’t exist, but we can always take a step closer, which is exactly what Flatpak does.

Furthermore, we can use Flatseal to manage these permissions if we are uncomfortable with the defaults. While I admit that this is not a great approach, it certainly is the best option for now. The alternative is manually configuring bubblewrap, Firejail or AppArmor, which requires a lot more time and knowledge than Flatseal. The other “alternative” is to rely on unsandboxed environments. Flatseal also neatly documents the main permissions, to help users understand what they are about to change.
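For those who prefer the terminal, the same overrides can be applied with Flatpak’s own override subcommand; a minimal sketch, using GIMP as the example:

# Revoke GIMP's broad filesystem access for the current user only
$ flatpak override --user --nofilesystem=host org.gimp.GIMP

# Review the overrides in effect, and undo them if needed
$ flatpak override --user --show org.gimp.GIMP
$ flatpak override --user --reset org.gimp.GIMP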

“Permissions and Portals”

Flatpak is working on a fine-grained permission system to improve the security of its sandbox. Permissions are things like whether the app is allowed to access the microphone or the printer. Portals are things like a file open dialog that runs outside the sandbox, so the app in the sandbox gets only the file the user chose.

Flatpak documents these portals and provides libportal, a client library to access them. However this isn’t really intended for individual apps. It’s all meant to be integrated in the toolkits. From the documentation:

Interface toolkits like GTK3 and Qt5 implement transparent support for portals, meaning that applications don’t need to do any additional work to use them (it is worth checking which portals each toolkit supports).

Apparently, developing client APIs for apps themselves is antithetical to Flatpak’s mission. They want the apps running on Flatpak to be unaware of Flatpak. They would rather modify the core libraries like GTK to integrate with Flatpak. So for example if you want to open a file, you don’t call a Flatpak API function to get a file or request permissions. Instead, you call for an ordinary GTK file open dialog and your Flatpak runtime’s GTK internally does the portal interaction with the Flatpak service (using all sorts of hacks to let you access the file “normally” and pretend you’re not sandboxed.)

This is the most complicated and brittle way to implement this. It’s also not at all how other sandboxed platforms work. If I want file access permissions on Android, I don’t just try to open a file with the Java File API and expect it to magically prompt the user. I have to call Android-specific APIs to request permissions first. iOS is the same. So why shouldn’t I be able to just call flatpak_request_permission(PERMISSION) and get a callback when the user approves or declines?

Portals are designed to be standards, not Flatpak-specific interfaces. This means that portals can be used outside of Flatpak, e.g. by system packages or even by Snap. It also means that we can take advantage of these portals to integrate better with the desktop.
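Since portals are ordinary D-Bus services, they can even be poked at from a plain shell, sandboxed or not; a minimal sketch, assuming xdg-desktop-portal is running in the session:

# Ask the FileChooser portal to show a file picker; it returns a request
# handle, and the selection is delivered as a signal on that handle.
$ gdbus call --session \
      --dest org.freedesktop.portal.Desktop \
      --object-path /org/freedesktop/portal/desktop \
      --method org.freedesktop.portal.FileChooser.OpenFile \
      "" "Select a file" '{}'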

A notorious example is the file picker problem on Linux. For the longest time, Firefox and Chromium-based browsers (including Electron) used the GTK file picker. This means that if you were using Plasma at the time, these browsers used the GTK file picker instead of the KDE file picker, which made them look inconsistent with the rest of the desktop.

This was later solved with the XDG FileChooser portal, which makes applications open the host’s file picker instead of the one hardcoded in the toolkit or framework. Nowadays, if you use Firefox or Google Chrome on Plasma, Flatpak or not, and open the file picker within these applications, they will open the KDE file picker and not the previously hardcoded GTK file picker. Likewise, Kdenlive, a KDE application, opens the GTK file picker under GNOME, not the KDE file picker.

Another huge benefit of portals is that they come into use automatically as soon as the application developer upgrades to a version of the toolkit or framework that supports them, making the transition transparent. Last year, Electron started supporting the FileChooser portal. Element, a Matrix client that uses Electron, upgraded to a newer version of Electron that supports it. After that upgrade, Element “magically” started opening the host’s file picker rather than the previously hardcoded GTK file picker. So, if you are using Plasma, Element should now open the KDE file picker. It literally can’t get easier than that.
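Outside a sandbox, portal usage can even be forced for testing; a small sketch, assuming a natively installed GTK3 application (gedit here is just an arbitrary example, and GTK_USE_PORTAL is a debugging-oriented environment variable):

# Make a native GTK application use the portal file picker,
# e.g. to get the KDE dialog under Plasma.
$ GTK_USE_PORTAL=1 gedit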

In contrast, I can only imagine that if the Flatpak developers shipped Flatpak-specific, i.e. nonstandard, APIs, individual application developers would be forced to integrate and continuously maintain them. This would duplicate work among application developers, and may also force them to maintain Flatpak-specific APIs on top of the APIs they already use, instead of one API for every use case. And worst of all, this would only work with Flatpak, so native applications wouldn’t have the luxury of opening the desktop’s file picker.

The approach the Flatpak developers are taking is standardized, transparent and easy to integrate: application developers have to put very little to no effort in to get these APIs working. The majority of developers already don’t prioritize the Linux desktop; expecting adoption on the Linux desktop by asking developers to implement and maintain separate sets of APIs solely for Flatpak won’t help.

This is why. Fedora is auto-converting all of their rpm apps to Flatpak. In order for this to work, they need the Flatpak permission system and Flatpak in general to require no app changes whatsoever.

Why on Earth would they do a mass automatic conversion of apps? Your guess is as good as mine. The video claims that Fedora’s apps are higher quality than upstream, and Fedora makes their Flatpaks available on older distributions. I think it’s more likely they just want huge numbers of auto-converted apps to make it look like Flatpak is useful. Whatever the reason, it’s clear that this requirement has influenced many of their poor design decisions.

This is… unrelated? The previous slides in the presentation have no correlation with portals whatsoever. This is solely the Fedora Project’s decision.

And logically, if the Flatpak developers created Flatpak-specific APIs and application developers integrated them into their applications, these converted RPMs would have no problem running inside Flatpak containers, because upstream would already support those APIs. Calling Flatpak-specific APIs wouldn’t affect Fedora Flatpaks at all.

Since Fedora Flatpaks are RPMs from the Fedora repositories converted into Flatpak applications, they are much easier to trust and audit from the perspective of Fedora Project developers and maintainers. Furthermore, these RPMs already comply with all of the Fedora Project’s policies and standards. They are all built inside the Fedora Project’s infrastructure and based on RPMs maintained by Fedora Project maintainers. Flathub, on the other hand, is independent of and unaffiliated with the Fedora Project, which makes auditing harder for Fedora Project maintainers.

“Identifier Clashes”

So Fedora auto-converts all its apps to Flatpak. Does it at least namespace them to something specific to Fedora?

No, it doesn’t. Fedora publishes its Flatpak of GIMP as org.gimp.GIMP. This conflicts with the official org.gimp.GIMP published by the GIMP developers on Flathub. On a fresh install of Fedora 34, if you add the Flathub repository and type flatpak install org.gimp.GIMP, you get prompted for which one to install:

[nick@fedora active]$ flatpak install org.gimp.GIMP
Looking for matches…
Remotes found with refs similar to ‘org.gimp.GIMP’:

   1) ‘fedora’ (system)
   2) ‘flathub’ (system)

Which do you want to use (0 to abort)? [0-2]:

If you choose option 1, you get a build of GIMP with Fedora’s patches that uses the 650 MB Fedora 35 runtime. If you choose option 2, you get a different build of GIMP that uses the 1.8 GB freedesktop.org GNOME runtime.

Isn’t the whole point of reverse DNS that the org.gimp prefix is reserved for people who actually own the gimp.org domain? How can Fedora justify publishing apps while masquerading as the upstream developers? If major Linux distributions won’t even respect DNS, who will?

The point of the reverse DNS notation is to use the domain name of the author of the application, not that of the packager.

Needless to say, this is indeed a problem. The decentralized nature of Flatpak gives remotes the freedom to use the same application ID, which as a side effect may result in identifier clashes. Still, application IDs prevent naming collisions far better than having no IDs at all.
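In practice, the ambiguity is easy to sidestep by naming the remote explicitly on the command line:

# Install specifically from Flathub, skipping the remote-selection prompt
$ flatpak install flathub org.gimp.GIMP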

“Services”

All of these app packaging systems require that the user have some service installed on their PC before any packages can be installed.

AppImage, to its credit, technically does not require a service to run apps, but it doesn’t integrate with the desktop without it. I needed to use an AppImage app for a while and my solution was to just leave it in my ~/Downloads folder and double click it from my file manager to run it. This is a terrible user experience.

All of the desktop integration (launcher entries, mimetypes, icons, updates) is provided by either appimaged or AppImageLauncher, one of which must be installed by the user for any of this to work. So in practice, AppImage is no different than any of our other solutions: it requires a service to be usable.

There is one key piece missing: store integration. AppImages don’t (and won’t anytime soon) integrate with software stores like GNOME Software or Discover. GNOME Software and Discover manage Flatpak applications and system packages simultaneously, whereas AppImageHub is for AppImage specifically.

Most AppImages also have large baseline requirements. Most AppImages don’t bundle glibc or other core dependencies, and many don’t even bundle the higher-level dependencies needed to make an application actually functional without relying heavily on the host. Many AppImages simply assume that certain dependencies are already installed on the host, at the right version.

This means that most AppImages are not actually universal, and using them gets harder depending on the distribution. If you are using a musl-based distribution, AppImages won’t work. Likewise, if you use an immutable distribution, chances are it won’t ship many of those lower- and higher-level dependencies, because Flatpak and other container utilities already take care of that. Flatpak is compatible in the majority of desktop cases; even the Steam Deck uses Flatpak by default, precisely because Flatpak ships both lower- and higher-level dependencies.

“Is Flatpak Fixable?”

If the Flatpak developers truly want an app distribution system that rivals Android and iOS, the sandbox, permissions and portal system should be the only focus.

They should:

  • Abandon everything related to runtimes, and instead mount the native /usr (or a restricted set of core libraries from /usr) read-only into each container;

/usr is unique to almost every install, because users install different packages. Under this proposal, Flatpak would have to assume that certain libraries and dependencies are already installed. This would recreate the same problems as AppImage: applications would no longer be universal, which would defeat the purpose of Flatpak.

Runtimes ensure that core dependencies are always available on any given system, and more importantly, the right version.
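To see how widely runtimes are shared on a given system, a quick sketch (the --columns names below are an assumption about your flatpak version; --columns=help lists the valid ones):

# Show installed runtimes; each entry may be shared by many applications
$ flatpak list --runtime --columns=application,branch,size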

  • Build a fine-grained user-interactive runtime permission system that requires the app to make Flatpak-specific API calls to activate permission dialogs; and

This is exactly what Flatpak is doing; the only distinction is that these “Flatpak-specific API calls” are portals, which are also standards. As time goes by, we’ll see more and more applications using portals. GTK applications already use portals, as do Qt applications; Firefox, Chromium-based browsers and Electron have started supporting some of the portals too.

  • Deprecate install-time permissions (especially filesystem access) and remove all apps from Flathub that use them.

Mass adoption doesn’t happen in 10 days. It mostly depends on application developers’ priorities. Right now, we rely on install-time (static) permissions because many framework developers don’t consider portals a priority. This is not even a Flatpak-specific problem; the same happens with Wayland, the Windows Store and many other new technologies on other platforms. Modern video games still rely on 32-bit libraries. This is much easier said than done, and it is practically impossible to make applications that were designed for unsandboxed environments run sandboxed without expecting issues.

Anything that requires transitioning or adapting to a different technology requires time and effort. Flatpak is not magic, neither are Wayland, PipeWire and others.

Even with filesystem permissions, there are still security benefits. These applications typically don’t have access to all the APIs, nor to individual Flatpak applications’ data. Furthermore, managing permissions is much easier than with the alternatives, because the documentation is available and the interface is easy to use.

The container aspect is also really helpful, because it makes upgrading systems much easier. Instead of relying on PPAs, Copr and the like, Flatpak applications are updated independently of major system versions and only need to be packaged once for everyone.

On another note, moving away from static permissions is indeed a priority, because ideally we want to switch to portals entirely, as mentioned by Alexander Larsson in this presentation.

Under this system, apps would be encouraged to statically link many of their dependencies, but use the system GTK/Qt/SDL, OpenGL/Vulkan, OpenSSL/curl, and other large or security-critical libraries. The community could maintain guidelines and wrappers to make apps that dynamically link against the system libraries cross-version and cross-distribution. Apps would be expected to make changes to run sandboxed and request permissions directly through a Flatpak client API.

This, again, causes issues with universality. These dependencies may be older, newer or patched variants depending on the distribution, whereas Flatpak uses the same ones everywhere. I recently came across a user whose Bottles install failed to render fonts; the cause was a patched version of GTK. The issue was later resolved, but it shows that even simple patches can cause usability issues, let alone version differences.

“Conclusion”

If you are a Linux distribution maintainer, please understand what all of these solutions are trying to accomplish. All your hard work in building your software repository, maintaining your libraries, testing countless system configurations, designing a consistent user experience… they are trying to throw all of that away. Every single one of these runtime packaging mechanisms is trying to subvert the operating system, replacing as much as they can with their own. Why would you support this?

Flatpak’s goal is not to “throw all of that away”. Rather, it is to avoid duplicated work and give distribution developers more room to innovate, instead of spending the majority of their time packaging and maintaining software. Alexander Larsson made a presentation explaining this in detail.

This is also why Fedora Silverblue/Kinoite is improving very quickly. Fedora Silverblue/Kinoite is entirely different from most distributions because of its immutable nature. Since it relies mainly on Flatpak and Toolbx, there is a lot more room to improve core utilities like rpm-ostree (Fedora Silverblue/Kinoite’s package manager). Another prime example is SteamOS.

Conclusion

In conclusion, I believe that the author of the article has misunderstood many aspects of Flatpak and drawn conclusions too quickly.

Before Flatpak was first announced, there were already plenty of issues on the Linux desktop: X11, PulseAudio, fragmentation, etc. As for fragmentation, packages are generally built against different toolchains and different libraries and dependencies, so there are endless possibilities for software not working as intended on any distribution at any given time.

These are real-world issues on the Linux desktop, and we can clearly see that Flatpak developers are doing an outstanding job of solving them, properly too: Flatpak was first announced in 2015 as xdg-app, and not even a decade later, GTK applications work excellently as Flatpaks, sometimes even better than their system-package counterparts. Complex applications like Bottles, Firefox and Chromium-based browsers work very well inside Flatpak. The Steam Deck ships and primarily uses Flatpak on SteamOS. Fedora Silverblue/Kinoite, Endless OS and elementary OS primarily use Flatpak as well.

Is Flatpak perfect? No. We still heavily rely on static permissions. However, as time goes by, I believe more and more frameworks and toolkits will start using portals. More applications that use these frameworks and toolkits will start supporting portals with very minimal work put into them, and applications will be designed to be secure by default on the operating system because portals take care of the majority of the security problems already.


  • Edit 1: removed mentions of TRIM. It cleans up deleted data and therefore doesn’t really help with reducing data usage (credit to re:fi.64).

Episode 323 – The fake 7-Zip vulnerability and SBOM

Posted by Josh Bressers on May 16, 2022 12:00 AM

Josh and Kurt talk about a fake 7-Zip security report. It’s pretty clear that everyone is running open source all the time. We end on some thoughts around what SBOM is good for, and who should be responsible for them.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_323_The_fake_7Zip_vulnerability_and_SBOM.mp3

Show Notes

Datacenter outage

Posted by Fedora Infrastructure Status on May 15, 2022 09:30 PM

There is an ongoing outage of our main IAD2 datacenter. All servers may be down. We are working on fixing things.

This outage is over and everything should be back online.

nbdkit now supports LUKS encryption

Posted by Richard W.M. Jones on May 14, 2022 10:52 AM

nbdkit, our permissively licensed plugin-based Network Block Device server, can now transparently decode encrypted disks, for both reading and writing:

qemu-img create -f luks --object secret,data=SECRET,id=sec0 -o key-secret=sec0 encrypted-disk.img 1G

nbdkit file encrypted-disk.img --filter=luks passphrase=+/tmp/secret

We use LUKSv1 as the encryption format. That’s an older version [more on that in a moment] of the format used for Full Disk Encryption on Linux. It’s much preferable to use LUKS rather than using qemu’s built-in qcow2 encryption, and our implementation is compatible with qemu’s.

You can place the filter on top of other nbdkit plugins, like the curl plugin:

nbdkit curl https://example.com/encrypted-disk.img --filter=luks passphrase=+/tmp/secret

The threat model here is that you can store the encrypted data on a remote server, and the admin of the server cannot decrypt the disk (assuming you don’t give them the passphrase).
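A quick way to sanity-check the decrypted export from the client side is any NBD client; for instance libnbd’s nbdinfo, assuming nbdkit is serving on the default port:

# Query the decrypted device exported by nbdkit
$ nbdinfo nbd://localhost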

If you try this filter (or qemu’s device) with a modern Linux LUKS disk you’ll find that it doesn’t work. This is because modern Linux uses LUKSv2, although it can still create, read and write LUKSv1 if you set things up that way in advance. Unfortunately LUKSv2 is significantly more complicated than LUKSv1: it requires parsing JSON data(!) stored in the header, and supports a wider range of key derivation functions, typically the very slow and memory-intensive argon2. LUKSv1, by contrast, only requires support for PBKDF2 and is generally far more straightforward to implement.
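For interoperability testing, a LUKSv1 container can also be created with standard tools; a minimal sketch, assuming cryptsetup is installed:

# Create a 1G image and format it as LUKSv1, the version the filter understands
$ truncate -s 1G encrypted-disk.img
$ cryptsetup luksFormat --type luks1 encrypted-disk.img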

The new filter will be available in nbdkit 1.32, or you can grab the development version now.

An introduction to USB Device Emulation and how to take advantage of it

Posted by Fedora Magazine on May 14, 2022 08:00 AM

Introduction

Nowadays the number of devices keeps growing, and modern operating systems must try to support all of them, with every integration and every release. Maintaining support for a large number of devices is difficult, expensive and hard to test, especially for plug-and-play devices like USB devices.

Therefore, it is necessary to create a mechanism that makes maintaining and testing old and new USB devices easier. This is where USB device emulation comes in: a complete framework including a large set of emulated and validated USB devices will allow easier integration and release. The range of applications is very wide: earlier bug detection even during development, automated tests, continuous integration, etc.

How to emulate USB devices

The USB/IP project allows sharing the USB devices connected to a local machine so that they can be managed by another machine on the network over a TCP/IP connection.

The USB/IP project consists of two parts:

  • local device support (host), which allows remote access to all necessary control events and data
  • remote control, which catches all necessary control events and data and processes them like a normal driver

The procedure is valid for Linux and Windows; here I will focus only on Linux.

The idea behind emulation is to replace the local device support with an application that behaves in the same way. In this way we can emulate devices with software applications that follow the USB/IP protocol specification.

In the following sections I will describe how to configure and run the remote support and how to connect to our emulated USB device.

Remote support

Remote support is divided into two parts:

  • kernel space, to control a remote device as if it were local, that is, so it can be probed by the normal driver.
  • a user space application to configure access to remote devices.

At this point, it is important to note that the device emulators, after configuration by the user space application, will communicate directly with the kernel space.

Local support has a very similar structure, but the focus of this article is device emulation.

Let’s analyze every part of remote support.

Kernel space

First of all, in order to get the functionality we need to compile the Linux Kernel with the following options:

CONFIG_USBIP_CORE=m
CONFIG_USBIP_VHCI_HCD=m

These options enable the USB/IP virtual host controller driver, which is run on the remote machine.

Normal USB drivers also need to be included, because they will be probed and configured in the same way by the virtual host controller driver.

Besides these, there are other important configuration options:

CONFIG_USBIP_VHCI_HC_PORTS=8
CONFIG_USBIP_VHCI_NR_HCS=1

These options define the number of ports per USB/IP virtual host controller and the number of USB/IP virtual host controllers, as if adding physical host controllers. These are the default values if CONFIG_USBIP_VHCI_HCD is enabled; increase them if necessary.

These options and kernel modules are already included in some Linux distributions, like Fedora Linux.

Let’s see an example of available virtual USB buses and ports that we will use later.

Default and real resources in example equipment:

$ lsusb 
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
$ lsusb -t
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 5000M
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 480M
|__ Port 1: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 480M
$

Now, we will load the module vhci-hcd into the system (default configuration for CONFIG_USBIP_VHCI_HC_PORTS and CONFIG_USBIP_VHCI_NR_HCS):

$ sudo modprobe vhci-hcd 
$ lsusb
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
$ lsusb -t
/: Bus 04.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 5000M
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 480M
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 5000M
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 480M
|__ Port 1: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 480M
$

The remote USB/IP virtual host controller driver will only use the configured virtualized resources. Of course, emulated devices will work in the same way.

User space

The other necessary part of the USB/IP project is the user space tool usbip, which is used to configure the kernel space described above on both sides. Again, we only focus on the remote side, since the local side will be represented by the emulator.

That is, the usbip tool configures the USB/IP virtual host controller (TCP client) in kernel space to connect to the device emulator (TCP server), establishing a direct connection between them for exchanging USB configuration, events, data, etc.

The tool is independent of the type of device and can provide information about available and reserved resources (see the examples below).
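For instance, before attaching you can query what a USB/IP server exports; a small sketch, assuming the emulator answers the standard device-list request (the TCP port matches the one the emulator listens on, e.g. 3241 in the first example below):

# List USB devices exportable from a USB/IP server
$ usbip --tcp-port 3241 list -r 127.0.0.1

# List devices currently attached through the local virtual host controller
$ sudo usbip port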

When attaching, the usbip tool needs to specify the bus-port pair of the USB device on the serving machine. The same goes for emulated devices, but in this case the pair can be anything, because there is no real device and no resource reservation is necessary.

The tool lives in the Linux kernel repository so that it stays fully synchronized with the kernel.

Location of the tool on the Linux Kernel repository: ./tools/usb/usbip

In some distributions, like Fedora Linux, the usbip utility can be installed from the usbip package in the repositories. If the usbip utility or a related package cannot be found, follow the instructions in the README file to compile and install it. A suitable rpm package can also be generated from the usbip-emulator repository:

$ git clone https://github.com/jtornosm/USBIP-Virtual-USB-Device.git 
$ cd USBIP-Virtual-USB-Device/usbip 
$ make rpm 
...
$

How to emulate USB devices

Emulators are available in Python and C. I started with the C development (and will focus on that part), but the same could be done in Python.

For C development, compile emulation tools from the usbip-emulator repository:

$ git clone https://github.com/jtornosm/USBIP-Virtual-USB-Device.git 
$ cd USBIP-Virtual-USB-Device/c 
$ make 
...
$

This generates all the devices supported at the moment:

  • hid-keyboard
  • hid-mouse
  • cdc-acm
  • hso
  • cdc-ether
  • bt

An rpm package (usbip-emulator) can also be generated with:

$ make rpm 
...
$

As these are examples, the vendor and product IDs are hardcoded.

Three examples follow to show how emulation works. We use the same machine for the emulator and the remote USB/IP side, but they could run on different machines. Besides, we reserve different resources so that all the devices can be emulated at the same time.

Example 1: hso

From one terminal, let’s emulate the hso device:

(“1-1” is the bus-port pair for the USB device on the local machine; as we are emulating, it could be anything. It only matters because the usbip tool will have to use the same name to request the emulated device.)

$ hso -p 3241 -b 1-1 
hso started.... 
server usbip tcp port: 3241 
Bus-Port: 3-0:1.0 
...

From another terminal, connect to the emulator:

(We use localhost because the emulator runs on the same machine, and the same bus-port pair as the emulator.)

$ sudo modprobe vhci-hcd 
$ sudo usbip --tcp-port 3241 attach -r 127.0.0.1 -b 1-1
usbip: info: using port 3241 ("3241")
$

Now we can check that the new device is present:

(As we saw previously, for this example machine, bus 3 is virtualized)

$ ip addr show dev hso0
3: hso0: <POINTOPOINT,MULTICAST,NOARP> mtu 1486 qdisc noop state DOWN group default qlen 10 
link/none 
$ rfkill list 
0: hso-0: Wireless WAN 
Soft blocked: no 
Hard blocked: no 
...
$ lsusb 
... 
Bus 003 Device 002: ID 0af0:6711 Option GlobeTrotter Express 7.2 v2 
... 
$ lsusb -t 
...
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 480M 
|__ Port 1: Dev 2, If 0, Class=Vendor Specific Class, Driver=hso, 12M 
...
$

In order to release resources:

$ sudo usbip port 
Imported USB devices
====================
Port 00: <Port in Use> at Full Speed(12Mbps)
Option : GlobeTrotter Express 7.2 v2 (0af0:6711)
3-1 -> usbip://127.0.0.1:3241/1-1
-> remote bus/dev 001/002
$ sudo usbip detach -p 00
usbip: info: Port 0 is now detached!
$

And we can check that the device is released:

$ ip addr show dev hso0 
Device "hso0" does not exist. 
$ rfkill list 
...
$ lsusb 
... 
$

After this, we can emulate again or stop the emulated device from the first terminal (e.g. with Ctrl-C).

Example 2: cdc-ether

From one terminal, let’s emulate the cdc-ether device (root permission is required because a raw socket needs to bind to the specified interface for the data plane):

(“1-1” is the bus-port pair for the USB device on the local machine; as we are emulating, it could be anything. It only matters because the usbip tool will have to use the same name to request the emulated device.)

$ sudo cdc-ether -e 88:00:66:99:5b:aa -i enp1s0 -p 3242 -b 1-1 
cdc-ether started.... 
server usbip tcp port: 3242 
Bus-Port: 1-1 
Ethernet address: 88:00:66:99:5b:aa 
Manufacturer: Temium 
Network interface to bind: enp1s0 
...

From another terminal connect to the emulator:

(We use localhost because the emulator runs on the same machine, and the same bus-port pair as the emulator.)

$ sudo modprobe vhci-hcd 
$ sudo usbip --tcp-port 3242 attach -r 127.0.0.1 -b 1-1
usbip: info: using port 3242 ("3242")
$

Now we can check that the new device is present:

(As we saw previously, for this example machine, bus 3 is virtualized)

$ ip addr show dev eth0 
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000 
link/ether 88:00:66:99:5b:aa brd ff:ff:ff:ff:ff:ff 
$ sudo ethtool eth0 
...
Link detected: yes 
$ lsusb 
... 
Bus 003 Device 003: ID 0fe6:9900 ICS Advent 
... 
$ lsusb -t 
...
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 480M 
|__ Port 2: Dev 3, If 0, Class=Communications, Driver=cdc_ether, 480M 
|__ Port 2: Dev 3, If 1, Class=CDC Data, Driver=cdc_ether, 480M 
...
$

For this example, we can also test the data plane.

(IP forwarding is disabled on both sides.)

First, we can configure the IP address in the emulated device:

$ sudo ip addr add 10.0.0.1/24 dev eth0 
$ ip addr show dev eth0
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether 88:00:66:99:5b:aa brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 scope global eth0
valid_lft forever preferred_lft forever
$

Second, from another machine (real or virtual) directly connected over Ethernet, we can configure a macvlan interface in the same subnet to send/receive traffic (ping, iperf, etc.):

$ sudo ip link add macvlan0 link enp1s0 type macvlan mode bridge 
$ sudo ip addr add 10.0.0.2/24 dev macvlan0 
$ sudo ip link set macvlan0 up 
$ ip addr show dev macvlan0 
3: macvlan0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 
link/ether d6:f1:cd:f1:cc:02 brd ff:ff:ff:ff:ff:ff 
inet 10.0.0.2/24 scope global macvlan0 
valid_lft forever preferred_lft forever 
inet6 fe80::d4f1:cdff:fef1:cc02/64 scope link 
valid_lft forever preferred_lft forever 
$ ping 10.0.0.1 
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=55.6 ms 
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=2.19 ms 
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=1.74 ms 
64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=1.76 ms 
64 bytes from 10.0.0.1: icmp_seq=5 ttl=64 time=1.93 ms 
64 bytes from 10.0.0.1: icmp_seq=6 ttl=64 time=1.65 ms 
...

In order to release resources:

$ sudo usbip port 
Imported USB devices 
==================== 
...
Port 01: <Port in Use> at High Speed(480Mbps) 
ICS Advent : unknown product (0fe6:9900) 
3-2 -> usbip://127.0.0.1:3242/1-1 
-> remote bus/dev 001/003 
$ sudo usbip detach -p 01 
usbip: info: Port 1 is now detached! 
$

And we can check that the device is released:

$ ip addr show dev eth0 
Device "eth0" does not exist.
$ lsusb
...
$

And of course, traffic from the other machine stops working:

From 10.0.0.2 icmp_seq=167 Destination Host Unreachable 
From 10.0.0.2 icmp_seq=168 Destination Host Unreachable 
From 10.0.0.2 icmp_seq=169 Destination Host Unreachable 
From 10.0.0.2 icmp_seq=170 Destination Host Unreachable 
...

After this, we can emulate again or stop the emulated device from the first terminal (e.g. with Ctrl-C).

Example 3: bt

From one terminal, let’s emulate the Bluetooth device:

(“1-1” is the bus-port pair for the USB device on the local machine; as we are emulating, it could be anything. It only matters because the usbip tool will have to use the same name to request the emulated device.)

$ bt -a aa:bb:cc:dd:ee:11 -p 3243 -b 1-1 
bt started.... 
server usbip tcp port: 3243 
Bus-Port: 1-1 
BD address: aa:bb:cc:dd:ee:11 
Manufacturer: Trust 
...

From another terminal connect to the emulator:

(We use localhost because the emulator runs on the same machine, and the same bus-port pair as the emulator.)

$ sudo modprobe vhci-hcd 
$ sudo usbip --tcp-port 3243 attach -r 127.0.0.1 -b 1-1
usbip: info: using port 3243 ("3243")
$

Now we can check that the new device is present:

(As we saw previously, for this example machine, bus 3 is virtualized)

$ hciconfig -a 
hci0: Type: Primary Bus: USB 
BD Address: AA:BB:CC:DD:EE:11 ACL MTU: 310:10 SCO MTU: 64:8 
UP RUNNING PSCAN ISCAN INQUIRY 
RX bytes:1451 acl:0 sco:0 events:80 errors:0 
TX bytes:1115 acl:0 sco:0 commands:73 errors:0 
Features: 0xff 0xff 0x8f 0xfe 0xdb 0xff 0x5b 0x87 
Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3 
Link policy: RSWITCH HOLD SNIFF PARK 
Link mode: SLAVE ACCEPT 
Name: 'BT USB TEST - CSR8510 A10' 
Class: 0x000000 
Service Classes: Unspecified 
Device Class: Miscellaneous, 
HCI Version: 4.0 (0x6) Revision: 0x22bb 
LMP Version: 3.0 (0x5) Subversion: 0x22bb 
Manufacturer: Cambridge Silicon Radio (10)

$ rfkill list 
...
1: hci0: Bluetooth 
Soft blocked: no 
Hard blocked: no 
$ lsusb 
...
Bus 003 Device 004: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode) 
...
$ lsusb -t 
...
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 480M 
|__ Port 3: Dev 4, If 0, Class=Wireless, Driver=btusb, 12M 
|__ Port 3: Dev 4, If 1, Class=Wireless, Driver=btusb, 12M 
...
$

And we can turn the emulated Bluetooth device off and on, and it detects several fake Bluetooth devices:

(At this moment the fake Bluetooth devices are not themselves emulated/simulated, so they cannot be set up.)

[Screenshot: Turn Bluetooth off]
[Screenshot: Turn Bluetooth on]
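The same checks can be done from the command line with BlueZ’s bluetoothctl; a small sketch, assuming a recent bluetoothctl that accepts commands as arguments:

# Power the emulated adapter on and scan briefly for the fake devices
$ bluetoothctl power on
$ bluetoothctl --timeout 10 scan on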

In order to release resources:

$ sudo usbip port 
Imported USB devices 
==================== 
...
Port 02: <Port in Use> at Full Speed(12Mbps) 
Cambridge Silicon Radio, Ltd : Bluetooth Dongle (HCI mode) (0a12:0001) 
3-3 -> usbip://127.0.0.1:3243/1-1 
-> remote bus/dev 001/002 
$ sudo usbip detach -p 02 
usbip: info: Port 2 is now detached! 
$

And we can check that the device is released:

$ hciconfig 
$ rfkill list 
...
$ lsusb 
... 
$

And of course, the device is no longer detected (as before emulation):

[Screenshot: Bluetooth is not found]

After this, we can emulate again or stop the emulated device from the first terminal (e.g. with Ctrl-C).

Emulated vs real USB devices

When the real hardware or final device is not used for testing, we may always feel unsure about the results; this is the biggest hurdle to overcome when verifying correct device operation by means of emulation.

So, in order to be confident, the emulation must be as close as possible to the real hardware, and to get there every aspect of the device must be covered (or at least the necessary ones, if they are unrelated to other aspects). In fact, for a correct test we must not modify the driver; that is, we must emulate only the physical layer, so that the driver cannot tell whether the device is real or emulated.

Starting by testing with the real hardware device is a very good idea, to get a reference for building an emulator with the same features. For USB devices, building the emulator is easier thanks to the existing remote-control procedure, which complies with all the characteristics mentioned above.

Conclusion

USB device emulation is the best way to integrate and test the related features in an efficient, automated and easy way. But in order to trust the emulation procedure, device emulators need to be validated beforehand, to confirm that they work in the same way as the real hardware.

Of course, a USB device emulator is not the same as the real hardware device, but the method described here, thanks to the proven procedure for remote control of the device, is very close to the real scenario and can help a lot to improve our release and testing processes.

Finally, one of the best advantages of software emulators is that they can easily trigger, in a simple way, specific behaviors that would be very difficult to reproduce with real hardware, which can help to find issues and make the software more robust.

Friday’s Fedora Facts: 2022-19

Posted by Fedora Community Blog on May 14, 2022 01:43 AM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

Fedora Linux 36 was released on Tuesday 10 May. Join us tomorrow for the second day of the F36 Release Party!

Election nominations are open through 25 May.

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference                 Location                   Date       CfP
CentOS Summer Dojo         virtual                    17 Jun     open
DDD Perth                  Perth, AU                  9 Sep      closes 27 May
Open Source Summit Europe  Dublin, IE & virtual       13–16 Sep  closes 30 May
#a11yTO                    Toronto, ON, CA & virtual  20–21 Oct  closes 31 May
I Code Java                Johannesburg, ZA & hybrid  12–14 Oct  closes 5 Jun
PyCon SK                   Bratislava, SK             9–11 Sep   closes 30 Jun
SREcon22 EMEA              Amsterdam, NL              25–27 Oct  closes 30 Jun
Write the Docs Prague      virtual                    11–13 Sep  closes 30 Jun
React India                Goa, IN & virtual          22–24 Sep  closes 30 Jun
NodeConf EU                Kilkenny, IE & virtual     3–5 Oct    closes 6 Jul
Nest With Fedora           virtual                    4–6 Aug    closes 8 Jul

Help wanted

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

Bug ID   Component  Status
1955416  shim       POST

Meetings & events

Releases

Release  Open bugs
F34      5819
F35      4269
F36      1913
Rawhide  6811

Fedora Linux 36

Schedule

  • 2022-05-25 — Election nominations end
  • 2022-06-03 — Election voting begins
  • 2022-06-16 — Election voting ends

See the schedule website for the full schedule.

Fedora Linux 37

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal                                                                       Type            Status
Make Rescue Mode Work With Locked Root                                         System-Wide     FESCo #2713
Replace jwhois package with whois for Fedora Workstation                       Self-Contained  FESCo #2785
Strong crypto settings: phase 3, forewarning 1/2                               System-Wide     FESCo #2788
Node.js 18.x by default                                                        System-Wide     FESCo #2789
Perl 5.36                                                                      System-Wide     FESCo #2791
Build all JDKs in Fedora against in-tree libraries and with static stdc++lib   System-Wide     Announced
Python: Add -P to default shebangs                                             System-Wide     Announced
BIOS boot.iso with GRUB2                                                       System-Wide     Announced

Fedora Linux 38

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal                   Type            Status
Major upgrade of Microdnf  Self-Contained  FESCo #2784

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-19 appeared first on Fedora Community Blog.

CPE Weekly Update – Week 19 2022

Posted by Fedora Community Blog on May 13, 2022 10:00 AM

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on libera.chat (https://libera.chat/).

Week 9th May – 13th May 2022

Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for new Fedora releases (mirrors, mass branching, new namespaces, etc.).
The ARC (a subset of the team) investigates possible initiatives that CPE might take on.
Link to planning board: https://zlopez.fedorapeople.org/I&R-2022-05-11.pdf

Update

Fedora Infra

  • F34/F35 container builds failing due to 32bit arm ( https://bugzilla.redhat.com/show_bug.cgi?id=2077680 )
  • A git-core change broke koji. We downgraded git, and upstream koji already has a fix.
  • Got an FMW macOS build fully signed and notarized! Unfortunately, we now need to find out how to build it so it can run on older macOS versions. ;(
  • Fedora 36 release went pretty smoothly, we are now out of Freeze
  • Business as usual, misc tickets, etc.

CentOS Infra including CentOS CI

  • CentOS Stream storage migration spike (Netapp for nfs/iscsi)
  • Duffy fixes and tests
  • Investigating hardware issue on CI pool
  • Investigating ci.centos.org decommission steps
  • Git.centos.org pagure upgrade/migration (blocked, waiting on internal Red Hat Team)
  • Updated sshd host key signing (sha1 issue for el9 ssh clients)
  • Business as usual (mirrors, tags)

Release Engineering

  • F36 is out
  • Firmware win binaries signed
  • Business as usual – stalled epel packages, package unretirements

CentOS Stream

Goal of this Initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this new distribution a reality. The goal of this initiative is to prepare the ecosystem for the new CentOS Stream.

Updates

  • Finished the RPM import for c8s to Stream Koji
  • Business as usual otherwise

CentOS Duffy CI

Goal of this Initiative

Duffy is a system within CentOS CI Infra which allows tenants to provision and access bare metal resources of multiple architectures for the purposes of CI testing.
We need to add the ability to check out VMs in CentOS CI in Duffy. We have an OpenNebula hypervisor available and have started developing playbooks to create VMs using the OpenNebula API, but due to the current state of how Duffy is deployed, we are blocked on new dev work to add the VM checkout functionality.

Updates

  • More deployment tests
  • Per tenant session lifetimes
  • Some bug fixes

Package Automation (Packit Service)

Goal of this initiative

Automate RPM packaging of infra apps/packages

Updates

  • The team is hitting lots of dependency and sub-dependency issues; we are working through them, but it’s slow
  • fasjson-client is our first package to be fully automated
    • upstream release -> src.fp.o PR -> koji -> bodhi
  • Thanks to Nils, Aurelien and Kevin for their help/advice
  • fedora-messaging, datagrepper, fasjson currently being worked on (all have deps issues)
  • spec files will be staying downstream, packit has a way to facilitate this

Flask-oidc: oauth2client replacement

Goal of this initiative

Flask-oidc is a library used across the Fedora infrastructure; it is the client used for authentication against Ipsilon. flask-oidc depends on oauth2client, a library that is now deprecated and no longer maintained, so it will need to be replaced with authlib.

Updates:

  • Set up dev environment (work in progress!)
  • Starting to implement the flask-oidc API using authlib.

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

Updates

  • epel9 up to 2568 source packages (increase of 113 from last week).
  • Added rhel+epel-9 mock configs to mock-core-configs.
  • Updated slurm in epel7 and epel8 to fix CVE-2022-29500 and CVE-2022-29501.
  • Retired swtpm and libtpms from epel8 because they were added to RHEL8.6.
  • Added python-texttable to epel9 to allow c8s maintainers (Johnny) to run c9s as a workstation.
  • Added missing devel packages for cogl, clutter, and clutter-gtk to epel9 to unblock other epel9 requests.

Kindest regards,
CPE Team

The post CPE Weekly Update – Week 19 2022 appeared first on Fedora Community Blog.

You’re invited: Fedora Ambassador Call

Posted by Fedora Community Blog on May 13, 2022 07:30 AM

A couple of weeks ago the Fedora Community Outreach Revamp (FCOR) team announced that we would be organising an Ambassador Call kick-off and collected feedback about availability. Based on the results from the whenisgood, we are excited to invite you to the Ambassador Call Kick-Off on May 18th at 3PM UTC.

This meeting will be an hour-long video session with the following agenda:

  • Introduction
  • Updates from the FCOR Objective
  • Review beginning tasks
  • How to get involved
  • Set up a recurring call

This call is not limited to Ambassadors. It’s for anyone who is interested in Fedora’s outreach, including:

  • Ambassadors
  • Advocates
  • CommOps Team Members
  • Join SIG Team Members
  • Any Fedora community member interested in outreach

The FCOR team will circulate an agenda for the meeting in advance. We can’t wait to see you there!

Background

Since July of 2020, co-leads Mariana Balla and Sumantro Mukherjee, with the support of Marie Nordin, have been working on the Fedora Community Outreach Revamp Objective (FCOR).

This objective will be an initiative to revamp several community outreach teams within Fedora that are struggling to function, or need more support to truly become a success. The objective will bring together Ambassadors, Ambassador Emeritus, Join SIG, and Advocates all under the same umbrella of CommOps in a clear and cohesive structure.

This FCOR revolves around processing feedback from the Fedora community and its various outreach groups to help restructure ongoing and new processes. CommOps, Ambassadors, Join SIG are important teams that exist to make the Fedora Project a sustainable open source project by implementing internal and external outreach. Many of the Objective deliverables are completed or in progress.

The post You’re invited: Fedora Ambassador Call appeared first on Fedora Community Blog.

PHP version 8.0.19 and 8.1.6

Posted by Remi Collet on May 13, 2022 05:27 AM

RPMs of PHP version 8.1.6 are available in remi-modular repository for Fedora ≥ 34 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in remi-php81 repository for EL 7.

RPMs of PHP version 8.0.19 are available in remi-modular repository for Fedora ≥ 34 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in remi-php80 repository for EL 7.

No security fix this month, so no update for version 7.4.29.

PHP version 7.3 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.1 installation (simplest):

dnf module reset php
dnf module enable php:remi-8.1
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php81
yum update php\*

Parallel installation of version 8.1 as Software Collection

yum install php81

Replacement of default PHP by version 8.0 installation (simplest):

dnf module reset php
dnf module enable php:remi-8.0
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php80
yum update php\*

Parallel installation of version 8.0 as Software Collection

yum install php80

Replacement of default PHP by version 7.4 installation (simplest):

dnf module reset php
dnf module enable php:remi-7.4
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php74
yum update php\*

Parallel installation of version 7.4 as Software Collection

yum install php74
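After any of these installations, a quick sanity check of the active version; a sketch assuming the Software Collections layout of these packages:

# Check the default PHP after a module switch
$ php --version

# Check a parallel Software Collection build (e.g. the php74 collection)
$ scl enable php74 -- php --version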

And soon in the official updates:

To be noticed:

  • EL-8 RPMs are built using RHEL 8.5
  • EL-7 RPMs are built using RHEL 7.9
  • EL-7 builds now use libicu69 (version 69.1)
  • EL builds now use oniguruma5php (version 6.9.5, instead of the outdated system library)
  • the oci8 extension now uses Oracle Client version 21.5
  • a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php74 / php80 / php81)

Nest with Fedora: Call for proposals and sponsors

Posted by Fedora Community Blog on May 12, 2022 08:00 AM

As we celebrate Fedora Linux 36 with the upcoming Release Party, we are also looking forward to our next virtual event: Nest with Fedora. As I mentioned in my last update, the 2022 edition of our annual contributor conference will be our third virtual Nest paired with Fedora Hatches (local in person meetups). A big thanks to Fedorans for showing up with seven proposals for Hatches all across the world! Furthermore, I am excited to announce that the Nest with Fedora call for proposals and sponsors is now open.

You’ll also notice a new Fedora mascot in the banner for this post, a wonderfully Fedorable design developed by intern Jess Chitas. Introducing Colúr, inspired by the Nest logo and named with the Gaelic word for pigeon. The Design Team was inspired by wordplay because Colúr sounds similar to the word color in English and he is composed of all the Fedora colors! Welcome, Colúr!

Call for Proposals

Submit your proposals by opening a ticket in the Flock repo on Pagure. We will have a rolling deadline process with acceptance dates of July 1st and July 8th, 2022. On behalf of the review committee, we look forward to your proposals! If you have any questions, please feel free to direct them to the Mindshare Committee at mindshare@lists.fedoraproject.org

What to submit?

Generally the sessions will be in one of these formats:

  • Session (25 minutes)
  • Session (50 minutes)
  • Workshop/Tutorial (110 minutes)
  • Half Day Hackfest

Submissions can span a broad range. We are currently open for proposals for the following:

  • Keynote speakers/topics
  • Informational sessions
  • Team/SIG meetups
  • Panel sessions
  • Project sprints
  • Tutorials
  • Workshops
  • Social sessions

There is also an opportunity to incorporate external activities that fit with our communities. For example we will be socializing in the Fedora Museum (WorkAdventure), and in the past we’ve “Beat the Bomb” or run a Minecraft instance! If you have an idea for a virtual social experience for the Fedora community at Nest, we are open to your ideas and you are welcome to open a ticket on the Flock pagure repo.

Call for Sponsors

We welcome sponsors for the third edition of Nest with Fedora, and our first run at Fedora Hatch. Nest with Fedora is a virtual variation of our longstanding conference Flock to Fedora. This annual gathering is an opportunity for our community to discuss new ideas, share what they have been working on, connect with each other, and revitalize for the upcoming year. Fedora Hatches are local in person meet-ups organized by Ambassadors and passionate folks who want to bring Fedorans together. If your organization is interested in sponsoring this year’s contributor conference and events please take a look at our Sponsor Prospectus and contact sponsors@fedoraproject.org

The post Nest with Fedora: Call for proposals and sponsors appeared first on Fedora Community Blog.

Secureboot Signing With Your Own Key

Posted by Robbie Harwood on May 12, 2022 04:00 AM

Create and enroll a key

First, we'll need some packages:

dnf install pesign mokutil keyutils

(Package names are the same on most major distros, though of course your package manager won't be the same.)

Next, we create a key for signing. This uses efikeygen, from the pesign project; I prefer efikeygen because it also creates an NSS database that will be useful for pesign later.

efikeygen -d /etc/pki/pesign -S -TYPE -c 'CN=Your Name Key' -n 'Custom Secureboot'

Replace Your Name Key with your name, and Custom Secureboot with a name for the key that you'll remember for later steps. For -TYPE, replace with -m if you only plan to sign custom kernel modules, and -k otherwise.

(Note that this will set up the key for use in /etc/pki/pesign, which is convenient for pesign later, but it can also be placed elsewhere, like on a hardware token - see efikeygen(1) for more options.)
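To confirm the key landed where pesign expects it, you can list the contents of the NSS database; the nickname shown should match what you passed to -n:

certutil -d /etc/pki/pesign -L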

Next, export the public key and import it into the MOK (the Machine Owner Keystore - the list of keys trusted for signing):

certutil -d /etc/pki/pesign -n 'Custom Secureboot' -Lr > sb_cert.cer
mokutil --import sb_cert.cer

Again, replace Custom Secureboot with the name you chose. mokutil will prompt you for a password - this will be used in a moment to import the key.

Reboot, and press any key to enter the mok manager. Use the up/down arrow keys and enter to select "enroll mok", then "view key". If it's the same key you generated earlier, continue, then "yes" to enroll. The mok manager will prompt you for the password from before - note that it will not be echoed (no dots, either). When finished, select reboot.

To check that the key imported okay, we can use keyctl. The output could look something like this:

~# keyctl show %:.platform
Keyring
1072527305 ---lswrv      0     0  keyring: .platform
1013423757 ---lswrv      0     0   \_ asymmetric: Microsoft Windows Production PCA 2011: a92902398e16c49778cd90f99e4f9ae17c55af53
 246036308 ---lswrv      0     0   \_ asymmetric: Your Name Key: 31fe6684706ff53faf26cec7e700f84aa0fd22ae
 919193603 ---lswrv      0     0   \_ asymmetric: Red Hat Secure Boot CA 5: cc6fa5e72868ba494e939bbd680b9144769a9f8f
 341707055 ---lswrv      0     0   \_ asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed522988a1bd4

where the important part is that your key is among those listed. (The Microsoft keys are the "normal" anchors for secureboot, and the Red Hat one is present because this is a RHEL machine.)

Unenroll a key

If you want to unenroll a key you added, just do

mokutil --delete sb_cert.cer

This will prompt for the password, and reboot through mok manager as before, except this time select the option to delete the key.
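As a sanity check, mokutil can show what is queued for deletion and what is still trusted; a small sketch using the nickname from earlier:

mokutil --list-delete                             # keys queued for deletion, before the reboot
mokutil --list-enrolled | grep 'Your Name Key'    # should print nothing once the key is gone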

Sign a kernel

Kernel signatures are part of the vmlinuz file. Unfortunately, the process differs between x64 (or amd64, or x86_64, or whatever you want to call it) and aarch64. First, x64 because it's simpler:

pesign -c 'Custom Secureboot' -i vmlinuz-VERSION -s -o vmlinuz-VERSION.signed
pesign -S -i vmlinuz-VERSION.signed # check the signatures
mv vmlinuz-VERSION.signed vmlinuz-VERSION

Replace VERSION with whatever suffix your vmlinuz has, and Custom Secureboot with whatever name you chose earlier.

On aarch64/aa64, things are slightly more involved because the signature is pre-compression. Not to worry, though:

zcat vmlinuz-VERSION > vmlinux-VERSION
pesign -c 'Custom Secureboot' -i vmlinux-VERSION -s -o vmlinux-VERSION.signed
pesign -S -i vmlinux-VERSION.signed # check signature
gzip -c vmlinux-VERSION.signed > vmlinuz-VERSION
rm vmlinux-VERSION*

Sign a kernel module

First, prerequisites - the signing tool. On Fedora/RHEL-likes:

dnf install kernel-devel

while on Debian-likes I believe this is part of linux-kbuild, and therefore pulled in by linux-headers.

The signing tool uses openssl, so we need to extract the key from the NSS database:

pk12util -o sb_cert.p12 -n 'Custom Secureboot' -d /etc/pki/pesign

Replacing Custom Secureboot as before. This will prompt for a password, which will encrypt the private key - we'll need this for the next step:

openssl pkcs12 -in sb_cert.p12 -out sb_cert.priv -nocerts -noenc

This is exporting an unencrypted private key, so of course handle with care :)

Signing will be something like:

/usr/src/kernels/$(uname -r)/scripts/sign-file sha256 sb_cert.priv sb_cert.cer my_module.ko

where my_module.ko is of course the module to be signed. Debian users will I think want a path more like /usr/lib/linux-kbuild-5.17/scripts/sign-file.

To inspect:

~# modinfo my_module.ko | grep signer
  signer:         Your Name Key

where Your Name Key will be your name as entered during generation.

To test, insmod my_module.ko; to remove it, rmmod my_module.
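A quick round trip might look like this (run as root; my_module.ko being whatever module you just signed):

insmod my_module.ko
dmesg | tail -n 5    # a rejected signature would show up here as an error
rmmod my_module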

Sign a grub build

This is fairly straightforward - the signatures live in the .efi files, which are just PE binaries, which live in /boot/efi/EFI/distro_name (e.g., /boot/efi/EFI/redhat).

pesign -i grubx64.efi -o grubx64.efi.signed -c 'Custom Secureboot' -s
pesign -i grubx64.efi.signed -S # check signatures
mv grubx64.efi.signed grubx64.efi

where Custom Secureboot is once again the name you picked above. Note that x64 is the architecture name in EFI - so if you're on aarch64, it'd be aa64, etc..

Cockpit 269

Posted by Cockpit Project on May 12, 2022 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit 269 and cockpit-machines 268:

Dark mode for Cockpit Client

The chrome of Cockpit Client now follows the system dark style preference. We’re working on adding dark mode for Cockpit itself in the near future.

[Image: Cockpit Client in dark mode compared to light mode]

Metrics: Show Podman containers in top CPU and memory lists

Current metrics cards for CPU and memory now show Podman containers, in addition to systemd services.

[Image: container memory usage]

Machines: Provide default name for new VMs

If no name is specified for a new virtual machine, generate a unique name based on the name of the operating system. The autogenerated name is visible only after the operating system is selected.

[Image: create VM dialog with a generated default name]

Try it out

Cockpit 269 and cockpit-machines 268 are available now:

Why is the open source driver release from NVIDIA so important for Linux?

Posted by Christian F.K. Schaller on May 11, 2022 07:52 PM

Background
Today NVIDIA announced that they are releasing an open source kernel driver for their GPUs, so I want to share with you some background information and how I think this will impact Linux graphics and compute going forward.

One thing many people are not aware of is that Red Hat is the only Linux OS company with a strong presence in the Linux compute and graphics engineering space. There are of course a lot of other people working in the space too, like engineers working for Intel, AMD and NVIDIA, people working for consultancy companies like Collabora, or individual community members, but Red Hat as an OS integration company has been very active in trying to ensure we have a maintainable and shared upstream open source stack. This engineering presence is also what has allowed us to move important technologies forward, like getting HiDPI support for Linux some years ago, or working with NVIDIA to get glvnd implemented to remove a pain point for our users, since the original OpenGL design only allowed for one OpenGL implementation to be installed at a time. We see ourselves as the open source community’s partner here, fighting to keep the Linux graphics stack coherent and maintainable, and as a partner for the hardware OEMs to work with when they need help pushing major new initiatives around GPUs for Linux forward. And as the only Linux vendor with a significant engineering footprint in GPUs, we have been working closely with NVIDIA. People like Kevin Martin, the manager for our GPU technologies team; Ben Skeggs, the maintainer of Nouveau; Dave Airlie, the upstream kernel maintainer for the graphics subsystem; Nouveau contributor Karol Herbst; and our accelerator lead Tom Rix have all taken part in meetings, code reviews and discussions with NVIDIA. So let me talk a little about what this release means (and also what it doesn’t mean) and what we hope to see come out of this long term.

First of all, what is in this new driver?
What has been released is an out-of-tree source code kernel driver which has been tested to support CUDA use cases on datacenter GPUs. There is code in there to support display, but it is not complete or fully tested yet. Also, this is only the kernel part; a big part of a modern graphics driver is found in the firmware and userspace components, and those are still closed source. But it does mean we have an NVIDIA kernel driver now that will start being able to consume the GPL-only APIs in the Linux kernel, although this initial release doesn’t consume any APIs the old driver wasn’t already using. The driver also only supports NVIDIA Turing chip GPUs and newer, which means it is not targeting GPUs from before 2018. So for the average Linux desktop user, while this is a great first step and hopefully a sign of what is to come, it is not something you are going to start using tomorrow.

What does it mean for the NVidia binary driver?
Not too much immediately. The binary kernel driver will continue to be needed for older pre-Turing NVIDIA GPUs, and until the open source kernel module is fully tested and extended for display use cases you are likely to continue using it for your system even if you are on Turing or newer. Also, as mentioned above regarding the firmware and userspace bits, the binary driver is going to continue to be around even once the open source kernel driver is fully capable.

What does it mean for Nouveau?
Let me start with the obvious: this is actually great news for the Nouveau community and the Nouveau driver, and NVIDIA has done a great favour to the open source graphics community with this release. For those unfamiliar with it, Nouveau is the in-kernel graphics driver for NVIDIA GPUs today, originally developed as a reverse-engineered driver, but which over recent years has had active support from NVIDIA. It is fully functional, but is severely hampered by not having the ability to, for instance, re-clock the NVIDIA card, meaning that it can’t give you the full performance the binary driver can. This was something we were working with NVIDIA to remedy, but this new release provides us with a better path forward. So what does this new driver mean for Nouveau? Less initially, but a lot in the long run. To give a little background first: the Linux kernel does not allow multiple drivers for the same hardware, so in order for a new NVIDIA kernel driver to go in, the current one will have to go out or at least be limited to a different set of hardware. The current one is Nouveau. And just like the binary driver, a big chunk of Nouveau is not in the kernel but in the userspace pieces found in Mesa and in the Nouveau-specific firmware that NVIDIA currently kindly makes available. So regardless of the long-term effort to create a new open source in-tree kernel driver based on this new open source driver for NVIDIA hardware, Nouveau will very likely stay around to support pre-Turing hardware, just like the NVIDIA binary kernel driver will.

The plan we are working towards from our side, though it is likely to take a few years to come to full fruition, is to come up with a way for the NVIDIA binary driver and Mesa to share a kernel driver. The details of how we will do that are something we are still working on and discussing with our friends at NVIDIA, to address both the needs of the NVIDIA userspace and the needs of the Mesa userspace. Along with that evolution we hope to work with NVIDIA engineers to refactor the userspace bits of Mesa that currently target just Nouveau so they can interact with this new kernel driver, and also to let the binary driver and Nouveau share the same firmware. This has clear advantages for both the open source community and NVIDIA. For the open source community it means that we will have a kernel driver and firmware that allow things like changing the clocking of the GPU to provide the kind of performance people expect from NVIDIA graphics cards, and it means that we will have an open source driver with access to the firmware and kernel updates from day one for new generations of NVIDIA hardware. For the ‘binary’ driver, and I put that in quotes because it will now be less binary :), it means, as stated above, that it can start taking advantage of the GPL-only APIs in the kernel, that distros can ship it and enable secure boot, and that it gets an open source consumer of its kernel driver, allowing it to go upstream.
Whether this new shared kernel driver will be known as Nouveau or something completely different is still an open question, and of course its happening at all depends on whether we, the rest of the open source community, and NVIDIA are able to find a path together to make it happen, but so far everyone seems to be of good will.

What does this release mean for linux distributions like Fedora and RHEL?

Over time it provides a pathway to radically simplify supporting NVIDIA hardware, thanks to the opportunities discussed elsewhere in this document. Long term, we hope to be able to offer a better user experience with NVIDIA hardware in terms of out-of-the-box functionality: day-1 support for new chipsets, a high-performance open source Mesa driver for NVIDIA, and the ability to sign the NVIDIA driver alongside the rest of the kernel to enable things like secure boot support. Since this first release targets compute, one can expect these options to become available for compute users first and for graphics at a later time.

What are the next steps?
Well, there is a lot of work to do here. NVIDIA needs to continue the effort to make this new driver feature-complete for both compute and graphics/display use cases, we’d like to work together to come up with a plan for what the future unified kernel driver can look like and a model around it that works for both the community and NVIDIA, and we need to add things like a Mesa Vulkan driver. We at Red Hat will play an active part in this work as the only Linux vendor with the capacity to do so, and we will also work to ensure that the wider open source community has a chance to participate fully, as we do for all open source efforts we are part of.

If you want to hear more about this, I talked with Chris Fisher and Linux Action News about this topic. Note: I stated some timelines in that interview without making clear that they were my guesstimates and not in any way official NVIDIA timelines, so I apologize for the confusion.

New badge: Lets have a party Fedora 36!

Posted by Fedora Badges on May 11, 2022 12:04 PM
Lets have a party Fedora 36: You attended the F36 Virtual Release Party!

F36 elections nominations now open

Posted by Fedora Community Blog on May 11, 2022 12:01 AM

Today we are starting the Nomination & Campaign period during which we accept nominations to the “steering bodies” of the following teams:

This period is open until 2022-05-25 at 23:59:59 UTC.

Candidates may self-nominate. If you nominate someone else, please check with them to ensure that they are willing to be nominated before submitting their name.

The steering bodies are currently selecting interview questions for the candidates.

Nominees submit their questionnaire answers via a private Pagure issue. The Election Wrangler or their backup will publish the interviews to the Community Blog before the start of the voting period.

Please note that the interview is mandatory for all nominees. Nominees who do not have their interview ready by the end of the Interview period (2022-06-01) will be disqualified and removed from the election.

As part of the campaign people may also ask questions to specific candidates on the appropriate mailing list.

The full schedule of the elections is available on the Elections schedule. For more information about the elections process, see the Elections docs.

The post F36 elections nominations now open appeared first on Fedora Community Blog.

Fedora Linux 36 released

Posted by Charles-Antoine Couret on May 10, 2022 02:00 PM

This Tuesday, May 10, Fedora Project users will be delighted to learn that Fedora Linux 36 is now available.

Fedora Linux is a community distribution developed by the Fedora Project and sponsored by Red Hat, which provides it with developers as well as financial and logistical resources. Fedora Linux can be seen as a kind of technology showcase for the free software world, which is why it is quick to include new developments.

Fedora keeps a central role in the development of these new features through upstream work. Indeed, the distribution's developers also contribute directly to the code of a number of free software projects it ships, including the Linux kernel, GNOME, NetworkManager, PackageKit, PulseAudio, Wayland, systemd, the famous GCC compiler suite, and more. Click here to see the full set of Red Hat contributions.

This has also been covered in a series of articles here, and here again.

[Image: GNOME desktop, April 2022]

User experience

GNOME has been updated to version 42. This version brings many aesthetic and ergonomic changes. First of all, there is now a proper dark theme setting. Previously, the Tweaks application let you select the Adwaita-dark theme to get dark applications. This is now available in the GNOME Settings panel, and it configures not only the application theme but also the wallpapers (when they support it), and applications can adapt their display accordingly since they have access to this information. For now this change only affects applications written with GTK4 and the new libadwaita library.

Overall, the style of widgets and applications has been lightly reworked. Icons have also been refreshed, in particular in the Files application.

This dark theme also ties in with the new version of the GTK4 graphics toolkit; many applications now use it: Settings, Files, Disk Usage Analyzer, Fonts, To Do, Tour, Calendar, Clocks, Software, Characters, Contacts, Weather and Calculator.

Screenshots get a very substantial refresh. Rather than a classic window opening, a more discreet overlay application lets you select the area to capture and configure the recording options. It can also record the screen as a video.

Two new applications enter the arena. Console is a simplified version of Terminal, built with GTK4; the two target somewhat different audiences, with Terminal remaining the more complete and complex tool. And Text Editor arrives alongside Gedit for editing text or code. It is simpler than Gedit and also uses GTK4, which is all the more relevant now that Gedit is no longer officially maintained.

You can share your screen remotely over the RDP protocol from the Sharing tab of the Settings panel.

[Image: GNOME in light mode, April 2022]

There are several performance improvements, notably in Videos, which now uses an OpenGL surface to accelerate rendering on the graphics card. The Web browser also uses hardware-accelerated rendering to improve performance. File indexing with Tracker consumes less memory and starts faster. Input handling (keyboard, mouse, etc.) has been improved to reduce latency and make the desktop feel more fluid.

GNOME now uses Wayland with the proprietary nVidia driver by default. Starting with version 495.44, the proprietary driver provides hardware acceleration for applications that are not Wayland-compatible and therefore go through the XWayland compatibility layer. Since all applications can now get decent performance with good stability, it becomes possible to offer this as the default. Only users with multiple graphics cards, such as nVidia plus Intel on laptops, keep X11 as the default for now.

Note that in any case users can still choose X11 if they wish.

The LXQt environment is updated to version 1.0.0. This version builds on the latest Qt5 LTS release, 5.15. A do-not-disturb mode appears, suppressing notifications while it is active. Buttons running custom commands can be added to the main panel. Two new themes make their appearance, along with the ability to configure Qt palettes. The file manager can attach emblems to files and folders, and an option lets you choose whether hidden files are shown.

The anaconda installer now ticks the option making the created user a system administrator by default. Since the main user should by default be the administrator via sudo, it is preferable that those who deliberately choose otherwise untick this option, rather than requiring someone without the necessary background to tick it.

Note that the Workstation and Silverblue editions do not show this screen, since for many releases their user setup has been done through GNOME's own tool, and the user created that way is already an administrator by default, which makes this decision more consistent.

The default font becomes Noto, for more uniform rendering. Until now the DejaVu fonts were the default for European languages and scripts, while Noto was used for Asian languages for its better coverage of those languages' character sets. The default font is now the same for all languages. An application displaying several languages will thus render more cleanly, and dropping the DejaVu fonts saves some 6 MiB in the base system image.

[Image: GNOME in dark mode, April 2022]

Hardware support

The POWER LE architecture switches to an ABI based on the 128-bit IEEE long double standard. This standard is more common than AIX double-double or IBM long double (two 64-bit doubles packed together), the latter being the one Fedora used. Both alternatives had the drawback of a discontinuous mantissa, which made them impractical to work with.

The old fbdev kernel framebuffer driver is replaced by simpledrm, which builds on the kernel's DRM infrastructure while providing a compatibility layer. The kernel's DRM subsystem became the reference for video display a good decade ago. However, fbdev is still used by some firmware to initialize the video display at boot, such as aarch64 via the device tree, efifb on EFI machines, or vesafb on machines with a VESA display. Kernel 5.14 introduced simpledrm, which can handle this use case through a compatibility layer, making fbdev optional.

This simplifies maintenance and is a step toward eventually removing the subsystem entirely; for now it is kept because of the pure-text framebuffer console.

Support for Wireless Extensions is removed from the kernel and the system tools; it was replaced in 2007 by mac80211/cfg80211. This API between the kernel and userspace is very old, and its main drawback is that it is only compatible with WEP encryption, which is no longer secure at all. In practice it has not seen real use for a long time, and removing it improves security by reducing the attack surface, while signalling to any remaining users that they should move their infrastructure to a more secure solution.

The wireless-tools package, which provided the userspace utilities, is removed accordingly. On the kernel side this is done by disabling the following options: CONFIG_WEXT_CORE, CONFIG_WEXT_PROC, CONFIG_WEXT_SPY, CONFIG_WEXT_PRIV, CONFIG_CFG80211_WEXT and CONFIG_CFG80211_WEXT_EXPORT.

Internationalization

The default input method for the zh_HK (Hong Kong) locale moves to ibus-table-chinese-cangjie. The previous input method, ibus-cangjie, is no longer maintained but remains available for those who want it or for existing configurations.

The default fonts for the Malayalam language have been updated for better readability and for Unicode 13 compatibility. The smc-meera-fonts and smc-rachana-fonts packages (serif and sans serif respectively) give way to rit-meera-new-fonts and rit-rachana-fonts.

[Image: Console and Text Editor, April 2022]

System administration

System authentication and the associated security devices (fingerprint readers, smart cards) must now always go through authselect, for more consistency and security. It becomes harder to end up with a broken or incoherent configuration.

Without authselect, the /etc/nsswitch.conf and /etc/pam.d/* configuration files are never overwritten during an update; instead a .rpmnew file is created alongside them, leaving it to the user to take the new format or new settings into account without breaking their existing configuration. As a result, even default configuration files were never updated, and rather complicated scripts were needed to update these files without breaking existing setups, or to detect whether a configuration was the default one, which was a source of many bugs.

authselect can auto-generate configuration files and keeps better control over them; this removes much of the difficulty and makes these operations simpler.

This change means the /etc/nsswitch.conf file moves from the glibc package to authselect, and the latter becomes a dependency of many base system packages such as pam and glibc. The systemd, ecryptfs, nss-mdns and fingerprint packages no longer support configuration without authselect.

However, users who want to opt out entirely can use the authselect opt-out command or delete the /etc/authselect/authselect.conf file.
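For example, inspecting and switching profiles looks like this (a minimal sketch; the available profiles and features depend on your installation, see authselect(8)):

authselect current                        # show the active profile and its features
authselect list                           # list the available profiles
authselect select sssd with-fingerprint   # switch to the sssd profile with fingerprint support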

In the systemd logs, the name of the service concerned is now shown alongside its description, for better clarity. Compare:

Before:

 Started Journal Service.

 Finished Load Kernel Modules.

After:

 Started systemd-journald.service - Journal Service.

 Finished systemd-modules-load.service - Load Kernel Modules.

More precisely, systemd renders these lines with three different formats: name, description and combined. The first is the unit's name, the second its description (the previous default), and the last combines the two in the form <Name> - <Description>, which is the format now chosen.

This choice makes it easier to identify the unit in question and, if needed, to copy and paste its name for later command-line operations.
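This corresponds to systemd's StatusUnitFormat setting, so the old output can be restored if you prefer it; a minimal sketch of /etc/systemd/system.conf (systemd also accepts "name" here):

# /etc/systemd/system.conf
[Manager]
StatusUnitFormat=description   # "combined" is the new default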

A Cockpit module has been added to ease file sharing over Samba or NFS. Configuring these network file systems becomes simpler and graphical for those who want it. It is available through the cockpit-file-sharing package.

NetworkManager no longer supports ifcfg configurations. These files, which lived in /etc/sysconfig/network-scripts/ifcfg-*, were created by the old network service, which has not shipped since Fedora... 25! NetworkManager handles far more functionality than its illustrious predecessor, which made these configuration files hard to work with: they were never designed for all these use cases and the format is poorly documented. Since NetworkManager cannot convert its configuration from one format to another, this situation was a major source of bugs despite the unit tests and a large maintenance effort. Fedora 33 changed the default format to keyfile; users still on the old format will now have to redo their configuration.

This removes some 130,000 lines of code from the NetworkManager project.

ostree gains support for the OCI/Docker formats as a transport and deployment mechanism for containers. The feature is still considered experimental, and the data and interfaces may still change. It makes it possible to benefit from the tooling and ecosystem around these technologies to ease deployment of applications or of the system itself. Creating derived versions of the base system also becomes easier, while keeping the benefit of deltas between images to save resources.

To get there, ostree had to be able to encapsulate its commits as OCI/Docker images, and rpm-ostree in turn had to be able to consume those images while keeping all of its functionality.

The keylime agent, used to establish and maintain secure distributed systems, is split into subpackages for more flexibility. On Cloud or IoT systems, only the necessary components need to be installed. The keylime agent also has an alternative implementation in Rust (instead of Python), installable via the keylime-agent-rust package in place of the Python-based keylime package.


A new remove-retired-packages tool has been added to remove packages that are no longer offered by the new Fedora release and therefore will never be updated again. It helps clean old, no-longer-updated packages off the system, and avoids a Fedora upgrade being blocked by dependency conflicts over a package that effectively no longer exists.
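Typical usage after an upgrade could look like this (a sketch; passing the release you upgraded from as an argument is an assumption, check the tool's help):

sudo dnf install remove-retired-packages
remove-retired-packages        # compare against the previous release
remove-retired-packages 34     # or name the release you upgraded from explicitly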

Programs using GnuTLS can re-enable security algorithms beyond those allowed by the system security policy, without altering the global policy.

The crypto-policies tool configures the set of security algorithms allowed by the system's base tools, which makes a coherent security policy possible. But this is too rigid: if a user wants an algorithm that the chosen policy forbids for one single program, they have to change the policy for the whole system, lowering overall security for a very localized need.

The goal here is to allow partially modifying the security policy for the GnuTLS library without affecting the rest of the system. GnuTLS was chosen in connection with a request about a VPN that uses this library; a broader mechanism may be put in place later.

Systems based on rpm-ostree now mount the /var directory from the var subvolume by default when Btrfs is the file system. This mount point thus joins /home and /, which each already have their own subvolume. The goal is to ease taking snapshots of the system independently of /, which also contains /etc and /usr and must be read-only in this context. Backing up and restoring the system becomes simpler: the /home and /var volumes hold all of the system's customized data, and the rest can easily be restored via ostree or a reinstall.

The RPM database moves to /usr/lib/sysimage/rpm; the old path /var/lib/rpm becomes a symbolic link pointing to the new location. This unifies the location with rpm-ostree-based systems, namely CoreOS, IoT, Silverblue and Kinoite, and also with openSUSE, which has already made this change.

In connection with the previous change, this also simplifies managing system snapshots and rolling back if needed after a failed update. An automatic rollback mechanism for that scenario, a bit like what rpm-ostree offers, is being considered for later.

The hunspell dictionaries directory migrates from /usr/share/myspell/ to /usr/share/hunspell/. Most distributions made this change long ago.

The locate file-search program is now implemented by plocate instead of the venerable mlocate. It is faster while using somewhat less disk space, thanks to the liburing and libzstd libraries. mlocate will be fully retired in Fedora 37 or 38.
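Day-to-day usage is unchanged; something like the following should work out of the box (plocate also ships a periodic updatedb timer, so the manual step is usually unnecessary):

sudo updatedb      # (re)build the index
locate '*.spec'    # query the compressed index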

For containers, podman 4.0 is provided. This major version features a rewritten network stack built on the new Netavark and Aardvark tools. Network performance is improved, as is support for IPv6 and for being connected to several different networks at once. Pods can share more resources, such as volumes, devices, and security and sysctl configurations. Many other changes and bug fixes are included. This version has numerous incompatibilities, notably around the networking changes; using podman 4.0 and then going back to an earlier version can cause problems.
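For instance, connecting a container to two networks at once, which the new network stack supports, could look like this (a sketch; the image name is just an example):

podman network create net-a
podman network create net-b
podman run -d --name web --network net-a registry.fedoraproject.org/fedora:36 sleep inf
podman network connect net-b web   # attach a second network to the running container
podman network ls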

The PostgreSQL database engine is shipped in its 14th version. The CYCLE and SEARCH options for common table expressions have been implemented. Range types can now hold several ranges, to represent discontinuous intervals. Performance has been improved for parallel queries, highly concurrent workloads, partitioned tables, and more. The libpq library can put several successive queries into a pipeline to improve result throughput. B-tree index updates have been improved to reduce index size, and the VACUUM operation is less aggressive, avoiding non-essential cleanup.

The Ansible configuration platform is set up to use version 5. This version introduces more flexibility in the provided collection of tools. Fedora takes the opportunity to have the ansible package install ansible-core, the engine, plus a few additional packages providing part of the extra tooling by default. Users can instead install ansible-core and manually add whichever other tools they want if the default choice does not suit them.

The Stratis storage manager is provided in its 3rd version. Its DBus interface has been thoroughly reworked, with the FetchProperties method for retrieving properties removed in favour of standard DBus properties. When creating a file system it finally becomes possible to configure its logical size.
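For example, setting the logical size at creation time might look like this (a sketch assuming an existing pool named pool1; check stratis(8) for the exact syntax on your version):

stratis filesystem create pool1 fs1 --size "2TiB"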

OpenLDAP is updated to version 2.6.1. This version removes the ndb backend, while the Perl and SQL backends are shelved ahead of a future removal. slapd can now write its error logs directly to a file rather than going through syslog. An internal load balancer makes its appearance to improve the infrastructure's performance. Multi-factor authentication also makes its entrance.

The OpenSSL security library moves to version 3.0. While the ABI changes deeply, the API remains mostly unchanged. Besides major progress in automated testing and documentation, one of the big changes is the addition of the Providers concept. The idea is to be able to easily change, through configuration or code, the provider of a given cryptographic algorithm. One such provider is, for example, the FIPS module, an American standard providing a certified implementation, which can be useful in some development contexts. Use of the low-level API now produces deprecation warnings ahead of a future removal; only the high-level API is meant to be used by applications.
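You can check which providers are active on a given system; on a stock Fedora 36 install this typically shows only the default provider:

openssl version          # should report 3.0.x
openssl list -providers  # list the loaded providers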

The nscd name-service caching package is removed for good. It was barely evolving, carried a large technical debt, and has for some time been replaced by default by systemd-resolved and sssd.

[Image: LXQt desktop, April 2022]

Development

The venerable autoconf is updated to version 2.71. A directory for temporary files used during operations can be specified via the runstatedir argument. Cross-compilation is better supported by the provided macros. Among the incompatibilities, the macros are now stricter about the use of quotes. Configuring a C++ compiler as the C compiler is discouraged; later this behaviour will be rejected outright, given the growing incompatibilities between the two languages. By default it now targets compatibility with the C11 and C++11 standards.

The GNU toolchain is updated with GCC 12 and glibc 2.35. For GCC, the -O2 optimization level now enables vectorization. The OpenMP 5.0 and 5.1 standards are better supported. For OpenACC it is also possible to detect suboptimal parallelism choices with the -Wopenacc-parallelism option. Compatibility with the Ada 2022 language has also improved, with more optimizations in the generated code. On the C-family side, the C2X standard makes its debut while C++20 and C++23 support keeps progressing. GCC can emit debug information in the CTF or BTF formats. It is finally possible to have the stack implicitly initialized, notably with a defined pattern via the -ftrivial-auto-var-init=pattern option.

As for glibc, it supports the Unicode 14.0.0 encoding standard and the C.UTF-8 locale.

The LLVM compiler suite delivers its 14th winter collection. The Armv9-A architecture is fully supported. Like GCC 12, it begins support for the C2X standard, and supports NVIDIA CUDA v11.5. The C++20 standard is also better handled. The default debug format becomes DWARFv5 instead of DWARFv4. By default the compiler now generates position-independent code via the -fPIE option, following the same path as GCC.

The Go language shifts into gear 1.18. The headline change is the long-awaited introduction of generic programming. It also provides a fuzzing engine that generates inputs to test compiled software and check whether it behaves correctly on unexpected data. The TLS 1.1 and 1.0 security protocols are disabled on the client side, while a new net/netip module appears to fix some shortcomings of the net.IP type.

For Java, the reference JVM OpenJDK moves from version 11 to 17. Among the changes, the experimental AOT and JIT compilers are gone, while the low-latency ZGC garbage collector has been introduced. Pseudo-random number generation has been improved. Pattern matching makes its entrance as a preview feature, while the Applet API will soon take its leave. Text blocks can be defined to reduce the formatting burden of long strings in code. The EdDSA security algorithm is also provided.

The Ruby language shows off a 3.1-carat cut. A new experimental JIT compiler joins the dance: YJIT, the reference MJIT compiler not having improved performance enough for real-world applications, although progress was noted in this release. The debug.gem debugger, which performs better, is also included, while the error_highlight gem is provided to highlight compilation errors in code, similar to what can be found in the C ecosystem, for example.

Meanwhile its favourite web toolkit, Ruby on Rails, pulls into the station on track 7.0. The new Hotwire library makes it possible to reduce the use of the JavaScript/JSON pair in favour of HTML when writing applications. With Active Record it is possible to define an encrypted attribute in a model that is transparent in use. Finally, controller actions can run in parallel thanks to Relation#load_async.

The Cucumber Rubygem is offered at 7.1.0. This testing toolkit for Behaviour Driven Development gains new InstallPlugin, BeforeAll and AfterAll hooks. Fedora took the opportunity to improve the integration of plugins into the system.

The PHP language now weighs in at 8.1 tonnes. This version offers enumerations and read-only properties. It is possible to obtain references to any function, and to define type intersections to express several constraints on a type. It defines the final keyword to block the inheritance of a specific method. And the work on performance improvements has continued.

The famous Python web toolkit Django is wanted in version 4.0. A cache backend for Redis has been introduced with RedisCache. Forms, Formsets and ErrorList are handled by the template engine so their rendering can be customized. The default time zone implementation is based on the zoneinfo module from the Python standard library.

The python-setuptools Python tool suite is offered at version 58. It drops support for 2to3, the helper for porting from Python 2 to 3, following the end of Python 2 support in early 2020, which breaks backward compatibility.

Haskell fans will be delighted to learn of packages of the form ghcX.Y for installing several versions of their favourite compiler in parallel from the repositories. This simplifies developing and testing across several versions. While the default version via the ghc package is 8.10, the ghc9.0 or ghc9.2 packages (and derivatives) can be installed to take advantage of those alternative versions.

The libffi foreign function interface library jumps from version 3.1 to version 3.4. Its main improvement is support for Intel Control-flow Enforcement Technology and ARM Pointer Authentication. The Power10 and RISC-V hardware architectures are now supported.

The MLT video editing toolkit is on its 7th film. The build system now uses CMake exclusively. The location of modules and headers has moved, making this version incompatible with the previous one. Numerous modules have been removed, such as GTK2 and swfdec.

Debug information produced by the MinGW toolchain now lives in the /usr/lib/debug directory. This change prevents the generated debug files from ending up next to the binary and being shipped in the standard package.

Fedora Project

Information has been added to allow modules to be obsoleted and end-of-lifed, so that upgrading a system with modules enabled can pick the most appropriate module to continue the upgrade if one exists, or remove the module entirely if it has been dropped. Until now, DNF lacked information about module upgrades. The default behaviour can be changed via DNF's module_obsoletes and module_stream_switch options.
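These are regular options in /etc/dnf/dnf.conf; a minimal sketch showing where they live (the values here are illustrative, not necessarily the shipped defaults):

# /etc/dnf/dnf.conf
[main]
module_obsoletes=True
module_stream_switch=False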

The %set_build_flags macro is now called automatically at the start of the %build, %check and %install phases. This macro exports the CFLAGS, CXXFLAGS, FFLAGS, FCFLAGS, LDFLAGS, LT_SYS_LIBRARY_PATH, CC and CXX variables. These very important variables must be defined uniformly across these phases, since some projects may compile code, in particular tests, outside of the software's main build. This ensures that all of a project's code is built with the same environment. The feature can be disabled with the %undefine _auto_set_build_flags macro.

The .la files generated by autotools/libtool are no longer installed into the buildroot. These files are libtool archives, an antediluvian format meant to compensate for the absence of the ELF binary format, which added a lot of extra information. Many autotools-based projects generate and install them by default, even though in practice this is not desired. Until now the workaround was to run the command find $RPM_BUILD_ROOT -name "*.la" -delete in the package build; with this change that step is no longer necessary, and these files should no longer end up in packages by accident.

All binaries now have their ELF objects annotated with the name of the package they come from, to ease debugging. systemd-coredump takes advantage of this information to report bugs after a crash. Previously only an opaque ID in the .note.gnu.build-id format was provided, which was meaningless to the user without resorting to the command dnf repoquery --whatprovides debuginfo(build-id) = …, and looking up the package version in the RPM database is not always relevant, since an update may have happened in the meantime. The information is contained in a new .note.package section of the ELF binary, in JSON format.

Example:

$ objdump -s -j .note.package build/libhello.so

build/libhello.so:     file format elf64-x86-64

Contents of section .note.package:
 02ec 04000000 63000000 7e1afeca 46444f00  ....c...~...FDO.
 02fc 7b227479 7065223a 2272706d 222c226e  {"type":"rpm","n
 030c 616d6522 3a226865 6c6c6f22 2c227665  ame":"hello","ve
 031c 7273696f 6e223a22 302d312e 66633335  rsion":"0-1.fc35
 032c 2e783836 5f363422 2c226f73 43706522  .x86_64","osCpe"
 033c 3a226370 653a2f6f 3a666564 6f726170  :"cpe:/o:fedorap
 034c 726f6a65 63743a66 65646f72 613a3333  roject:fedora:33
 035c 227d0000                             "}..

For Silverblue and Kinoite systems, a new kernel-devel-matched metapackage is provided to let akmods build external kernel modules, such as the proprietary nVidia driver, while allowing the kernel-devel and glibc-devel packages to be removed from the base image and thus save space. The new package completes the akmods dependency chain as follows: akmods -> akmod -> kernel-devel-matched -> kernel & kernel-devel.

Since these variants have a read-only system and the installed kernel is not necessarily the latest one, this package forces installation of the development package matching the installed kernel version, which then allows the driver to build correctly. Without it, the dependency would automatically install the latest kernel-devel, which does not necessarily match the installed kernel, making the operation impossible; the alternative was to specify the exact kernel-devel version to install, which complicated the process.

Packages pulled in as weak dependencies are no longer installed when updating the package that suggests them; this now only happens when that package is first installed. Weak dependencies differ from ordinary dependencies in being optional. If package A depends on B, installing A installs B, and removing B removes A. If instead A weakly depends on B, installing A installs B, but removing B does not remove A. This allows some flexibility when several packages can provide the same service, such as community-mysql and mariadb, by offering a default recommendation without forcing it, and it makes a minimal installation possible while still making a full-featured system very easy to obtain.

However, many weak dependencies are not installed in the default image, for minimalism and security reasons, yet they used to be installed at the first system update, which was not the desired behaviour. Likewise, if the user had manually removed such a weak dependency, the removed package could come back when another package was updated.

The exclude_from_weak_autodetect=false option can be added to the /etc/dnf/dnf.conf file to restore the previous behaviour.
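In other words, to get the old behaviour back, a one-line addition to the DNF configuration suffices:

# /etc/dnf/dnf.conf
[main]
exclude_from_weak_autodetect=false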

The francophone community

The association

[Image: Borsalinux-fr logo, October 2017]

Borsalinux-fr is the association that handles the promotion of Fedora in the French-speaking world. For several years we have seen a gradual decline in paid-up members and in volunteers to take on the activities entrusted to the association.

We are therefore issuing a call to join us and help out.

The association owns the official website of the French-speaking Fedora community, regularly organizes promotional events such as the Rencontres Fedora, and takes part in all the major free software events, mainly across France.

If you love Fedora and want our work to continue, you can:

  • Join the association: membership fees help us produce goodies, travel to events, and pay for equipment;
  • Take part in the forum and the mailing lists, help rework the documentation, or represent the association at various francophone events;
  • Design goodies;
  • Organize events like the Rencontres Fedora in your city.

We would be delighted to welcome you and to help you get started. Any contribution, however small, is appreciated.

If you would like a glimpse of our activity, you can join our weekly meetings every Monday evening at 8:30 PM (Paris time) on IRC (channel #fedora-meeting-1 on Libera).

Documentation

Since June 2017, a major cleanup effort has been under way on the French-language Fedora documentation, to catch up on the five years of backlog accumulated on the subject.

The least one can say is that a lot of work has been done: nearly 90 articles corrected and brought up to date. Many thanks to Charles-Antoine Couret, Nicolas Berrehouc, Édouard Duliège, José Fournier and the other contributors and reviewers for their work.

The team meets every Monday evening after 9 PM (Paris time) on IRC (channel #fedora-doc-fr on Libera) to move the documentation forward through collaborative work. The rest of the week, things happen on the forum. For more conviviality, we have also set up a monthly video-conference meeting on Jitsi, on the first Monday of each month at the same time.

If you have ideas for articles or corrections to make, or a technical skill to pass on, don't hesitate to take part.

How to get Fedora Linux 36?

[Image: Fedora Media Writer, October 2018]

If you already have Fedora Linux 35 or 34 on your machine, you can upgrade to Fedora Linux 36. This amounts to one big update; your applications and data are preserved.

Otherwise, no panic: you can download Fedora Linux and then proceed with the installation. The procedure takes only a few minutes.

In both cases, we recommend backing up your data first.

Furthermore, to avoid unpleasant surprises, we also recommend reading up on the important bugs known to date in Fedora Linux 36.

What’s new in Fedora Workstation 36

Posted by Fedora Magazine on May 10, 2022 06:32 AM

Fedora Workstation 36 continues the Fedora Project’s ongoing commitment to delivering the latest innovations in the open source world. This article describes some of the notable user-facing changes that appear in this version.

GNOME 42

Fedora Workstation 36 includes the latest version of the GNOME desktop environment. GNOME 42 brings many improvements and new features, including:

  • Significantly improved input handling, resulting in lower input latency and improved responsiveness when the system is under load. This is particularly beneficial for games and graphics applications.
  • The Wayland session is now the default for those who use Nvidia’s proprietary graphics driver.
  • A universal dark mode is now available.
  • A new interface has been added for taking screenshots and screen video recordings.

In addition, many of the core apps have been ported to GTK 4, and the shell features a number of subtle refinements.

Refreshed look and feel

[Image: GNOME 42 as featured in Fedora Workstation 36]

GNOME Shell features a refreshed look and feel, with rounder and more clearly separated elements throughout. All the symbolic icons have been updated and the top bar is no longer rounded.

Universal dark mode option

In Settings > Appearance, you can now choose a dark mode option which applies a dark theme to all supported applications. In addition, the pre-installed wallpapers now include dark mode variants. Dark themes can help reduce eye-strain when there is low ambient light, can help conserve battery life on devices with OLED displays, and can reduce the risk of burn-in on OLED displays. Plus, it looks cool!

New screenshot interface

[Image: Taking screenshots and screen video recordings is now easier than ever]

Previously, pressing the Print Screen key simply took a screenshot of the entire screen and saved it to the Pictures folder. If you wanted to customize your screenshots, you had to remember a keyboard shortcut, or manually open the Screenshots app and use that to take the screenshot you wanted. This was inconvenient.

Now, pressing Print Screen presents you with an all-new user interface that allows you to take a screenshot of either your entire screen, just one window, or a rectangular selection. You can also choose whether to hide or show the mouse pointer, and you can also now take a screen video recording from within the new interface.

Core applications

[Image: Apps made in GTK 4 + libadwaita feature a distinct visual style]

GNOME’s core applications have seen a number of improvements. Several of them have been ported to GTK 4 and use libadwaita, a new widget library that implements GNOME’s Human Interface Guidelines.

  • Files now includes the ability to sort files by creation date, and includes some visual refinements, such as a tweaked headerbar design and file renaming interface.
  • The Software app now includes a more informative update interface, and more prominently features GNOME Circle apps.
  • The Settings app now has a more visually appealing interface matching the visual tweaks present throughout GNOME Shell.
  • Text Editor replaces Gedit by default. Text Editor is an all-new app built in GTK 4 and libadwaita. You can always reinstall Gedit by searching for it in the Software app.

Wayland support on Nvidia’s proprietary graphics driver

In previous versions, Fedora Workstation defaulted to the X display server when using Nvidia’s proprietary graphics driver. Fedora Workstation 36 now uses the Wayland session by default with that driver.

If you experience issues with the Wayland session, you can always switch back to the Xorg session by clicking the gear icon at the bottom-right corner of the login screen and choosing “GNOME on Xorg”.

Under-the-hood changes throughout Fedora Linux 36

  • When installing or upgrading packages with DNF or PackageKit, weak dependencies that have been manually removed will no longer be reinstalled. That is to say: if foo is installed and it has bar as a weak dependency, and bar is then removed, bar will not be reinstalled when foo is updated.
  • The Noto fonts are now used by default for many languages. This provides greater coverage for different character sets. For users who write in the Malayalam script, the new Meera and RIT Rachana fonts are now the default.
  • systemd messages now include unit names by default rather than just the description, making troubleshooting easier.
[Image: systemd messages show unit names by default]

Upgrade now!

You can upgrade your system through GNOME Software, via dnf system-upgrade in the terminal, or download the live ISO image from the official website.
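If you take the dnf system-upgrade route, the typical sequence looks like this (release number shown for Fedora Linux 36):

$ sudo dnf upgrade --refresh
$ sudo dnf install dnf-plugin-system-upgrade
$ sudo dnf system-upgrade download --releasever=36
$ sudo dnf system-upgrade reboot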

Also check out…

There are always cool things happening in the Fedora Project!

Next Open NeuroFedora meeting: 9 May 1300 UTC

Posted by The NeuroFedora Blog on May 09, 2022 10:11 AM
Photo by William White on Unsplash.


Please join us at the next regular Open NeuroFedora team meeting on Monday 9 May at 1300 UTC. The meeting is a public meeting, and open for everyone to attend. You can join us over:

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 2022-05-09'

The meeting will be chaired by @shaneallcroft. The agenda for the meeting is:

We hope to see you there!

Episode 322 – Adam Shostack on the security of Star Wars

Posted by Josh Bressers on May 09, 2022 12:00 AM

Josh and Kurt talk to Adam Shostack about his new book “Threats: What Every Engineer Should Learn From Star Wars”. We discuss some of the lessons and threats in the Star Wars universe (it’s an old code, I hear). We also discuss whether Star Wars is better than Star Trek for teaching security (it probably is). It’s a fun conversation, and it sounds like an amazing book.

Listen to the episode: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_322_Adam_Shostack_on_the_security_of_Star_Wars.mp3

Show Notes

Onlykey DUO

Posted by Kevin Fenzi on May 08, 2022 07:47 PM

Last year, I backed the onlykey DUO on kickstarter: https://www.kickstarter.com/projects/timsteiner/onlykey-duo-portable-protection-for-all-of-your-devices It seemed like an interesting device, and I like that it’s fully open source, unlike modern yubikeys.

The device finally arrived last month, and I’ve had a chance to play around with it some. Sadly, I don’t think it’s going to replace my yubikey anytime soon.

On the good side: the device itself is nicely constructed. It has a multicolored LED that indicates which profile is in use (there are 4: green, blue, yellow, purple). It’s got 2 buttons on the end; you can press one, the other, or both at the same time, with long or short presses selecting different slots. That means each profile has 6 ‘slots’, for a total of 24 across all 4 profiles. You can set a PIN to lock the key, which you have to enter before using it, along with a ‘self destruct’ PIN that wipes all configuration when entered.

On the bad side, however, there’s a fair bit. The software to manage the onlykey is provided as either an Ubuntu .deb or a snap. I tried to get the snap working with no luck at all, and ended up unpacking the deb to get things working. I looked into making a Fedora package, but it’s a node app and has a pile of deps.
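For reference, extracting a .deb without installing it is a single dpkg-deb command; the package filename here is illustrative:

$ dpkg-deb -x onlykey-app_5.x_amd64.deb ./onlykey-app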

Next, I tried to enroll an OTP for our Fedora account system, but found that the TOTP secret wouldn’t work. Further investigation showed that the onlykey NEO only supports SHA1 for TOTP secrets and our account system uses SHA512. ;( There’s an old closed ticket about this on the onlykey firmware repo: https://github.com/trustcrypto/OnlyKey-Firmware/issues/101

There’s also no way to generate an SSH private key on the device (like you can using the OpenSC support on a yubikey). You can generate ecdsa-sk OpenSSH keys, which is great, but not too useful to me yet as RHEL7 and RHEL8 don’t support them.
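For context, such a hardware-backed key is generated with the standard OpenSSH tooling (OpenSSH 8.2 or newer), nothing OnlyKey-specific:

$ ssh-keygen -t ecdsa-sk -f ~/.ssh/id_ecdsa_sk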

So, at this point I would not recommend these devices if you need to interact with the Fedora account system or want to use the device with a Fedora Linux install.

A hybrid development Docker Compose setup for Rails

Posted by Josef Strzibny on May 08, 2022 12:00 AM

Lots of developers choose between dockerizing their development setup and leaving it as is. There is also a viable hybrid approach: combining Docker Compose with native processes.

I am usually in the camp of running things directly or creating Vagrant environments that closely resemble what I normally run. I also think twice before introducing more layers than I need, so I usually run without Docker if I can.

Nevertheless, I realized that running a Docker Compose setup alongside your regular Puma and Sidekiq processes is actually a pretty nice sweet spot to be in. It’s what we use at Phrase.

Why, but why

The arguments for dockerizing the whole development environment are usually about matching production: running the same versions of databases, utilities, and services. Having it formalized also means that every team member can immediately start working or return to a working setup.

I understand this argument well, as it’s the reason I usually had a Vagrant environment around for my own projects. Even when I developed without a virtual machine, I would write a Vagrantfile to be able to run things in case anything broke. So I get it.

But it’s not the same with Docker. Dockerizing an entire development setup requires a somewhat different mindset, in my opinion. And while leaving virtual machines behind sounds like an improvement, performance might still suffer.

This makes you wonder whether dockerizing everything is worth it. Full Docker setups seem to be in the minority for this reason.

Can we not go overboard and still enjoy some Docker, though? What’s an alternative?

The alternative is installing Ruby, Rails, and system utilities as usual while dockerizing the rest. This way we solve the annoying part of managing different databases, at the cost of not solving parity in system dependencies.

It’s not perfect, but it’s simple. It gets you 80% of the benefits for 20% of the effort. The end result should be running bin/dev and bin/rails test as usual. Not a single command has to run within a container.

Implementation

There are three steps to turn a regular setup to a hybrid Docker Compose one. We’ll write the docker-compose.yml specification of our databases, update the ports in Rails configuration files, and finally include Docker Compose in our Procfile.dev.

The Docker Compose file for a typical new Rails application with a relational database and Redis server might look like the following:

# docker-compose.yml
version: '3.7'

services:
  postgres:
    image: postgres:14.2
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - 54320:5432
    volumes:
      - postgres:/var/lib/postgresql/data

  redis:
    image: redis:5.0.4
    command: redis-server /etc/redis.conf
    ports:
      - 63790:6379
    volumes:
      - redis:/data

volumes:
  redis:
  postgres:

The first thing you might notice is that it’s very short and understandable. Two database services, each with a volume for data and ports we expose to the host. The PostgreSQL server is run with a default password while we can omit a password for Redis.

Remember that some other services or databases might require different development and test entries, but this is not necessary here as we can use the same servers for both environments (the database name will differ).
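If you later do want an explicit test entry, it can point at the same containerized server and differ only in the database name; a minimal sketch matching the ports above:

# config/database.yml
test:
  <<: *default
  username: postgres
  password: postgres
  port: 54320
  host: "0.0.0.0"
  database: app_test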

Running with this Compose setup is as easy as typing docker-compose up and updating your Rails configuration.

If you have to run Docker with sudo, add your user to the docker group first:

$ sudo gpasswd -a $USER docker
$ newgrp docker

And start Docker Compose:

$ docker-compose up

docker-compose up should download the database images and start these two services for you.

Now that your databases are ready, update the Rails configuration:

# config/database.yml
development:
  <<: *default
  username: postgres
  password: postgres
  # 5432 for local, 54320 for Docker Compose
  port: 54320
  host: "0.0.0.0"
  database: app_development
...

# config/cable.yml
development:
  adapter: redis
  # 6379 for local, 63790 for Docker Compose
  url: redis://localhost:63790/1
...

At this point you should be able to run bin/rails s, bin/rails test and other usual commands against these new databases.

Finally, to put these things together, we’ll update Procfile.dev:

$ cat Procfile.dev
web: bin/rails server -p 3000
css: yarn build:css --watch
live_reload: bin/guard
js: yarn build --watch
services: docker-compose up

Conclusion

If we now want to start Rails in development, all we have to do is to run bin/dev as usual.

We haven’t solved everything with the new setup, but we gained a lot for very little effort. I think that’s the setup I’ll go with in my kit.

Friday’s Fedora Facts: 2022-18

Posted by Fedora Community Blog on May 06, 2022 09:10 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

Fedora Linux 36 will be released on Tuesday 10 May.

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference | Location | Date | CfP
RustConf | Portland, OR, US & virtual | 4–5 Aug | closes 8 May
CentOS Summer Dojo | virtual | 17 Jun | open
DDD Perth | Perth, AU | 9 Sep | closes 27 May
Open Source Summit Europe | Dublin, IE & virtual | 13–16 Sep | closes 30 May
#a11yTO | Toronto, ON, CA & virtual | 20–21 Oct | closes 31 May
I Code Java | Johannesburg, ZA & hybrid | 12–14 Oct | closes 5 Jun
PyCon SK | Bratislava, SK | 9–11 Sep | closes 30 Jun
SREcon22 EMEA | Amsterdam, NL | 25–27 Oct | closes 30 Jun
Write the Docs Prague | virtual | 11–13 Sep | closes 30 Jun
NodeConf EU | Kilkenny, IE & virtual | 3–5 Oct | closes 6 Jul

Help wanted

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

Bug ID | Component | Status
1955416 | shim | POST

Meetings & events

Releases

Release | Open bugs
F34 | 5217
F35 | 4240
F36 (pre-release) | 1812
Rawhide | 6909

Fedora Linux 36

Schedule

  • 2022-05-10 — Target date #3
  • 2022-05-11 — Election nominations open
  • 2022-05-25 — Election nominations end

See the schedule website for the full schedule.

Fedora Linux 37

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Make Rescue Mode Work With Locked Root | System-Wide | FESCo #2713
Deprecate Legacy BIOS | System-Wide | Rejected
RPM 4.18 | System-Wide | Approved
Legacy Xorg Driver Removal | System-Wide | Approved
Haskell GHC 9 & Stackage LTS 19 | Self-Contained | Approved
Replace jwhois package with whois for Fedora Workstation | Self-Contained | FESCo #2785
Strong crypto settings: phase 3, forewarning 1/2 | System-Wide | Announced
Node.js 18.x by default | System-Wide | Announced
Perl 5.36 | System-Wide | Announced

Fedora Linux 38

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Major upgrade of Microdnf | Self-Contained | FESCo #2784

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-18 appeared first on Fedora Community Blog.

Announcing Fedora Linux 36

Posted by Fedora Magazine on May 06, 2022 08:04 PM

Today, I’m excited to share the results of the hard work of thousands of Fedora Project contributors: our latest release, Fedora Linux 36, is here!

By the community, for the community

Normally when I write these announcements, I talk about some of the great technical changes in the release. This time, I wanted to put the focus on the community that makes those changes happen. Fedora isn’t just a group of people toiling away in isolation — we’re friends. In fact, that’s one of our Four Foundations.

One of our newest Fedora Friends, Juan Carlos Araujo, said it beautifully in a Fedora Discussion post:

Besides functionality, stability, features, how it works under the hood, and how cutting-edge it is, I think what makes or breaks a distro are those intangibles, like documentation and the community. And Fedora has it all… especially the intangibles.

We’ve worked hard over the years to make Fedora an inclusive and welcoming community. We want to be a place where experienced contributors and newcomers alike can work together. Just like we want Fedora Linux to be a distribution that appeals to both long-time and novice Linux users.

Speaking of Fedora Linux, let’s take a look at some of the highlights this time around. As always, you should make sure your system is fully up-to-date before upgrading from a previous release. This time especially, because we’ve squashed some very important upgrade-related bugs in F34/F35 updates. Your system upgrade to Fedora Linux 36 could fail if those updates aren’t applied first.
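Bringing your current release fully up to date first is a single command:

$ sudo dnf upgrade --refresh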

Desktop improvements

Fedora Workstation focuses on the desktop, and in particular, it’s geared toward users who want a “just works” Linux operating system experience. As usual, Fedora Workstation features the latest GNOME release: GNOME 42. While it doesn’t completely provide the answer to life, the universe, and everything, GNOME 42 brings a lot of improvements. Many applications have been ported to GTK 4 for improved style and performance. And two new applications come in GNOME 42: Text Editor and Console. They’re aptly named, so you can guess what they do. Text Editor is the new default text editor and Console is available in the repos.

If you use NVIDIA’s proprietary graphics driver, your desktop sessions will now default to using the Wayland protocol. This allows you to take advantage of hardware acceleration while using the modern desktop compositor.

Of course, we produce more than just the Editions. Fedora Spins and Labs target a variety of audiences and use cases, including Fedora Comp Neuro, which provides tools for computational neuroscience, and desktop environments like Fedora LXQt, which provides a lightweight desktop environment. And don’t forget our alternate architectures: ARM AArch64, Power, and S390x.

Sysadmin improvements

Fedora Linux 36 includes the latest release of Ansible. Ansible 5 splits the “engine” into an ansible-core package and collections packages. This makes maintenance easier and allows you to download only the collections you need. See the Ansible 5 Porting Guide to learn how to update your playbooks.
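For illustration, with the split packaging you install just the engine from the repos and pull additional collections from Ansible Galaxy as needed (the collection name here is only an example):

$ sudo dnf install ansible-core
$ ansible-galaxy collection install community.general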

Beginning in Fedora Server 36, Cockpit provides a module for provisioning and ongoing administration of NFS and Samba shares. This allows administrators to manage network file shares through the Cockpit web interface used to configure other server attributes.

Other updates

No matter what variant of Fedora Linux you use, you’re getting the latest the open source world has to offer. Podman 4.0 will be fully released for the first time in Fedora Linux 36. Podman 4.0 has a huge number of changes and a brand new network stack. It also brings backwards-incompatible API changes, so read the upstream documentation carefully.

Following our “First” foundation, we’ve updated key programming language and system library packages, including Ruby 3.1, Golang 1.18 and PHP 8.1. 

We’re excited for you to try out the new release! Go to https://getfedora.org/ and download it now. Or if you’re already running Fedora Linux, follow the easy upgrade instructions. For more information on the new features in Fedora Linux 36, see the release notes.

In the unlikely event of a problem…

If you run into a problem, visit our Ask Fedora user-support forum. This includes a category for common issues.

Thank you everyone

Thanks to the thousands of people who contributed to the Fedora Project in this release cycle. We love having you in the Fedora community. Be sure to join us May 13 – 14 for a virtual release party!

CPE Weekly Update – Week 18 2022

Posted by Fedora Community Blog on May 06, 2022 10:00 AM
[Featured image with the CPE team's name]

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on the #redhat-cpe channel on libera.chat.

Week: 2nd – 6th May 2022

Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
The ARC (which is a subset of the team) investigates possible initiatives that CPE might take on.
Link to planning board

Update

Fedora Infra

  • Resultsdb now entirely working in stg ocp4 (Many thanks Leo!)
  • Cleaned up pagure.io SSL cert issues
  • Fixed an issue on ipsilon02 that was preventing bugzilla.redhat.com logins
  • Business as usual tickets (lists, groups, etc)

CentOS Infra including CentOS CI

Release Engineering

  • F36 RC-1.4 is out
  • 1 proposed blocker, GO/NOGO is tomorrow

CentOS Stream

Goal of this Initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this new distribution a reality. The goal of this initiative is to prepare the ecosystem for the new CentOS Stream.

Updates

  • May planning meeting happening now – outlining goals to work towards for the month
  • Active discussion happening on RHEL and CentOS Stream module synchronization. Planning the branch names, stream names and DistroBaker rules.

CentOS Duffy CI

Goal of this Initiative

Duffy is a system within CentOS CI Infra which allows tenants to provision and access bare metal resources of multiple architectures for the purposes of CI testing.
We need to add the ability to check out VMs in CentOS CI in Duffy. We have an OpenNebula hypervisor available and have started developing playbooks that can be used to create VMs using the OpenNebula API, but due to the current state of how Duffy is deployed, we are blocked on the new dev work to add the VM checkout functionality.

Updates

  • More deployment testing
  • Reverse lookup IPs to get hostnames when provisioning
  • Legacy API: map parameter combinations to pools in configuration rather than hard-coded
  • Node quotas (ongoing)

Package Automation (Packit Service)

Goal of this initiative

Automate RPM packaging of infra apps/packages

Updates

  • Business as usual, working through errors with packit configs
  • Ready to try our first full release this week (hopefully) resulting in koji builds and bodhi updates

Flask-oidc: oauth2client replacement

Goal of this initiative

Flask-oidc is a library used across the Fedora infrastructure and is the client for ipsilon authentication. flask-oidc relies on oauth2client, a library that is now deprecated and no longer maintained, so it will need to be replaced with authlib.

Updates:

  • Investigating where oauth2client appears in the code, possible replacement functions in authlib
  • Met with Aurelien this afternoon to gain some of his knowledge

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

Updates

  • epel9 up to 2455 source packages (increase of 39 from last week).
  • Qt5 rebuild issue from last week has been resolved in epel9-next, but epel8-next rebuilds are blocked by a CentOS Stream 8 module bug.
  • Partially unblocked azure-cli addition to epel9 by adding python-jwt.
  • Bootstrapped the “testing-cabal” suite in epel9 (python-extras, python-fixtures, python-testresources, python-testscenarios, and python-testtools) (update). This potentially unblocks many new epel9 packages, notably several Openstack client tools and libraries.
  • ImageMagick incompatible upgrade and related rebuilt packages are available in epel8-testing. (Fixes 81 bugs, 69 of them CVE bugs)

Kindest regards,
CPE Team

The post CPE Weekly Update – Week 18 2022 appeared first on Fedora Community Blog.

Taking farther strides now – An Outreachy Blog about Revamping Meetbot Logs

Posted by Fedora Community Blog on May 05, 2022 08:00 AM

I cannot help but feel a little nostalgic while writing this last blog post about my Outreachy internship. This post marks the end of my three-month internship where I worked with the community members to revamp the web application housing IRC meeting logs and summaries. I cannot believe how the time flew by – from applying, to getting selected, to contributing, and finally ending the journey. So today, I would be summarising my overall experience of this beautiful journey and will introduce the people who have helped me throughout the way.

Getting started

I still remember the nervousness I had back when answering the application questions. I was not sure if my efforts would amount to getting selected for the opportunity. But still, I never lost hope and gave my best in every phase of the selection process: the initial application form, the contribution period, and the final application form. I had contributed to various free and open-source programmes in the past but even then, I had this fear in my mind: “What if I don’t get selected for Outreachy?”

More specifically, I was determined to make the best of the contribution period, as that phase decides whether a contributor makes it in or not. My focus was always on the quality of my contributions, rather than racking up as many issues as I could. Project maintainers/mentors observe how well you are acquainted with the code and how it functions. They are more favourable when they see you fix existing bugs and bring new features to the table, not when you change just one line of code.

When the selection results were out, I could not believe that I had made it into the internship. My parents had tears of joy in their eyes and blessed me, saying that efforts never go in vain. And that is when I started working on my project “Revamp web application to aggregate and distribute IRC meeting minutes and logs” – a Python Flask project to aggregate and distribute the minutes and logs for IRC/Matrix meetings of the Fedora Project community.

Doing the work

I discussed ideas with my mentors Akashdeep Dhar and Francois Andrieu and implemented the solutions under their guidance and supervision. They were always available for me whenever I needed any help or I had to clarify any doubt despite being in different time zones. I worked on using Poetry for dependency management, building a calendar view, and creating a quicker search. To start with, I knew only JavaScript, Python, and Flask but my mentors helped me learn the skills I needed.

My journey of working with a variety of community teams like the design team and the Red Hat Community Platform Engineering team has been a great learning experience. I would recommend Outreachy as one of the best opportunities to start with for folks keen on free and open-source software. I will always cherish the journey of my Outreachy internship. I am thankful to my amazing mentors and all the people who have supported me throughout this journey.

The post Taking farther strides now – An Outreachy Blog about Revamping Meetbot Logs appeared first on Fedora Community Blog.

Keystone LDAP with Bifrost

Posted by Adam Young on May 04, 2022 07:39 PM

I got keystone in my Bifrost install to talk via LDAP to our Freeipa server. Here’s what I had to do.

I started with a new install of bifrost, using Keystone and TLS.

./bifrost-cli install --enable-keystone --enable-tls  --network-interface enP4p4s0f0np0 --dhcp-pool 192.168.116.25-192.168.116.75

After making sure that Keystone could work for normal things:

source /opt/stack/bifrost/bin/activate
export OS_CLOUD=bifrost-admin
 openstack user list -f yaml
- ID: 1751a5bb8b4a4f0188069f8cb4f8e333
  Name: admin
- ID: 5942330b4f2c4822a9f2cdf45ad755ed
  Name: ironic
- ID: 43e30ad5bf0349b7b351ca2e86fd1628
  Name: ironic_inspector
- ID: 0c490e9d44204cc18ec1e507f2a07f83
  Name: bifrost_user

I had to install python3-ldap and python3-ldappool.

sudo apt install python3-ldap python3-ldappool

Now create a domain for the LDAP data.

openstack domain create freeipa
...
openstack domain show freeipa -f yaml

description: ''
enabled: true
id: 422608e5c8d8428cb022792b459d30bf
name: freeipa
options: {}
tags: []

Edit /etc/keystone/keystone.conf to support domain-specific backends and back them with file config. When you are done, your identity section should look like this:

[identity]
domain_specific_drivers_enabled=true
domain_config_dir=/etc/keystone/domains
driver = sql

Create the corresponding directory for the new configuration files.

sudo mkdir /etc/keystone/domains/

Add in a configuration file for your LDAP server. Since I called my domain freeipa I have to name the config file /etc/keystone/domains/keystone.freeipa.conf

[identity]
driver = ldap

[ldap]
url = ldap://den-admin-01


user_tree_dn = cn=users,cn=accounts,dc=younglogic,dc=com
user_objectclass = person
user_id_attribute = uid
user_name_attribute = uid
user_mail_attribute = mail
user_allow_create = false
user_allow_update = false
user_allow_delete = false
group_tree_dn = cn=groups,cn=accounts,dc=younglogic,dc=com
group_objectclass = groupOfNames
group_id_attribute = cn
group_name_attribute = cn
group_member_attribute = member
group_desc_attribute = description
group_allow_create = false
group_allow_update = false
group_allow_delete = false
user_enabled_attribute = nsAccountLock
user_enabled_default = False
user_enabled_invert = true

To apply the changes, restart the Keystone uWSGI service:

sudo systemctl restart uwsgi@keystone-public

And test that it worked

openstack user list -f yaml  --domain freeipa
- ID: b3054e3942f06016f8b9669b068e81fd2950b08c46ccb48032c6c67053e03767
  Name: renee
- ID: d30e7bc818d2f633439d982783a2d145e324e3187c0e67f71d80fbab065d096a
  Name: ann

This same approach can work if you need to add more than one LDAP server to your Keystone deployment.
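As a sketch (the domain name and LDAP URL here are hypothetical), a second directory gets its own domain plus a matching config file, followed by a restart:

openstack domain create example
sudo tee /etc/keystone/domains/keystone.example.conf <<'EOF'
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
user_tree_dn = cn=users,cn=accounts,dc=example,dc=com
EOF
sudo systemctl restart uwsgi@keystone-public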

GNOME Foundation Board Elections 2022

Posted by Felipe Borges on May 04, 2022 03:11 PM

My involvement with GNOME started in my teens and has continued over the years where it influenced my studies, career, and even the place I chose to live. One of my desires in my journey has been to help the GNOME project achieve its goals and fulfill its vision of building an open source desktop environment that is accessible and easy to use to a general audience. Sitting on the Board has enabled me to contribute to these efforts more directly and has also taught me plenty about community governance and nonprofit sustainability.

My Board term is ending now, and I will not run for reelection, for a few reasons. Firstly, I believe that a rotation of board members can help increase community engagement and transparency. The current model our Board has of renewing part of its membership every year does, IMO, a great job at ensuring continuity of board programs while allowing new voices and perspectives to come aboard and maximize their impact.

Another reason why I will not be running for reelection is that I am convinced I can be more beneficial to the GNOME project by contributing to more operational tasks and running some of our programs, instead of the position of governance and oversight expected of the Board members. I would like for my seat on the Board to be filled by someone with skills and enthusiasm for reaching out to broader audiences beyond GNOME, someone capable of bridging our plans and vision with opportunities that can bring funding, diversity, and sustainability to the Foundation.

I am not going anywhere. You will still see me around the chat channels, forums, and conferences. I want to focus on improving our Newcomers onboarding experience as well as increase our conversion rate of Outreachy/GSoC interns that become long-term contributors. This also involves helping application developers monetize their work and making sure volunteers are given employment opportunities that allow them to continue working on open source software. I also want to refocus on my coding contributions, while learning new things and keeping up with modern technologies.

All in all, I am looking forward to meeting my fellow GNOME friends in GUADEC this year after such a long time with no travel. o/

Analyzing Apache HTTPD logs in syslog-ng

Posted by Peter Czanik on May 04, 2022 12:26 PM

Recently, I started my own blog, and as Google Analytics seems to miss a good part of visitors, I wanted to analyze my web server logs myself. I use syslog-ng to read Apache logs, process them, and store them to Elasticsearch. Along the way, I resolve the IP address using a Python parser, analyze the Agent field of the logs, and also use GeoIP to locate the user on the map.
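As a sketch of the moving parts (not the exact configuration from the article; file paths and the index name are assumptions), a setup along these lines combines a file source, the apache-accesslog-parser, the geoip2 parser, and an Elasticsearch destination:

# /etc/syslog-ng/conf.d/apache.conf -- illustrative sketch
source s_apache {
  file("/var/log/httpd/access_log" flags(no-parse));
};

parser p_access {
  apache-accesslog-parser(prefix("apache."));
};

parser p_geoip {
  geoip2("${apache.clientip}", prefix("geoip2.")
         database("/usr/share/GeoIP/GeoLite2-City.mmdb"));
};

destination d_elastic {
  elasticsearch-http(url("http://localhost:9200/_bulk") index("apache") type("")
                     template("$(format-json --scope rfc5424 --key apache.* --key geoip2.*)"));
};

log { source(s_apache); parser(p_access); parser(p_geoip); destination(d_elastic); };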

From this blog, you can learn how I built my configuration. Note that once I was ready, I realized that my configuration is not GDPR compliant, so I also show you which parts to remove from the final configuration :-).

Read the rest of my blog at https://www.syslog-ng.com/community/b/blog/posts/analyzing-apache-httpd-logs-in-syslog-ng

Bazsi, founder of the syslog-ng project, is looking for your feedback. He writes:

“In the past few weeks I performed a round of discussions/interviews with syslog-ng users. I also spent time looking at other products and analyst reports on the market. Based on all this information I’ve come up with a list of potential strategic directions for syslog-ng to tackle. Focusing on these and prioritizing features that fall into one of these directions ensures that syslog-ng indeed moves ahead.”

  • The Edge
  • Cloud Native
  • Observability
  • Application awareness
  • User friendliness

You can read the rest of his blog and provide him (and the syslog-ng team) with feedback at https://syslog-ng-future.blog/syslog-ng-on-the-long-term-a-draft-on-strategic-directions/

[Image: syslog-ng logo]

FMW is finished

Posted by Evzen Gasta on May 04, 2022 10:55 AM

As all good things must come to an end, my bachelor’s thesis has to end someday too. But I’m still looking forward to contributing to FMW whenever updates or fixes are needed. The official FMW 5.0.0 will be released soon. This was my first experience with an open source project, and I liked it very much. I’m looking forward to working on new open source projects in the future.

This project gave me a lot of experience, such as learning how to deploy Qt applications on various operating systems and how applications are structured. I learned a new programming language, QML, and how to use CMake in an open source project. Another big gain for me was understanding how GitHub CI works; it was used to automatically create builds for various systems. I also practiced many options and advantages of Git (at the beginning I had been using only “git pull” and “git push”).

As a result, I was able to create a new generation of FMW.

[Gallery: New generation of FMW]

This result couldn’t have been achieved without my technical mentor Jan Grulich (his blog and twitter), nor without the supervisor of my bachelor’s thesis, Dominika Regéciová (her blog and twitter). I would like to thank you both for the opportunity, patience, and overall help.

Distrobox for Ubuntu (and soon, Debian!)

Posted by Michel Alexandre Salim on May 04, 2022 12:00 AM
I’ve been a fan of distrobox for a while - it really makes it easy to experiment with different Linux distributions, and also to do packaging work for different distributions regardless of what’s running on the physical machine. It’s not only ridiculously flexible compared to the Toolbx project that inspires it - the latter requires specially modified containers - but it also just consists of Bash scripts, with a simple installation script.
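For a quick feel of the workflow (container name and image tag are illustrative), creating and entering an Ubuntu container takes two commands:

$ distrobox create --name ubuntu-22 --image ubuntu:22.04
$ distrobox enter ubuntu-22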

Sudo for blue teams: how to control and log better

Posted by Peter Czanik on May 03, 2022 01:47 PM

Sudo had many features to help blue teams in their daily job even before 1.9 was released. Session recordings, plugins, and other features made sure that most administrative access could be controlled and problems easily detected. Version 1.9 introduced Python support, new APIs, and centralized session recordings; however, some blind spots still remained. Learn how some of the latest sudo features can help you better control and log administrative access to your hosts. You will learn about JSON logging in sudo, chroot support, logging sub-commands, and how to work with these logs in syslog-ng.
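To give a flavor of the sudoers settings involved (a sketch; check your sudo version, as these options landed across the 1.9.x series, and the group and chroot directory here are illustrative):

# /etc/sudoers fragment -- illustrative
Defaults log_format=json        # emit event logs as JSON instead of the traditional format
Defaults log_subcmds            # also log commands spawned by the allowed command
# Per-command chroot: run the command chrooted into /srv/jail
%admins ALL = CHROOT=/srv/jail /bin/bash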

You can read the rest of my blog at https://www.sudo.ws/posts/2022/05/sudo-for-blue-teams-how-to-control-and-log-better/

<figure><figcaption>

Sudo logo

</figcaption> </figure>

Community Blog monthly summary: April 2022

Posted by Fedora Community Blog on May 03, 2022 08:00 AM

This is the latest in our monthly series summarizing the past month on the Community Blog. Please leave a comment below to let me know what you think.

Stats

In April, we published 19 posts. The site had 13,824 visits from 9,035 unique viewers. Both of these numbers are the highest in a long time. 3,555 visits came from search engines, while 2,308 came from the WordPress Android app and 441 came from Twitter.

The most read post last month was Anaconda is getting a new suit and a wizard with 3,580 views.

Badges

Your content here!

The Community Blog is the place to publish community-facing updates on what you’re working on in Fedora. The process is easy, so submit early and submit often.

The post Community Blog monthly summary: April 2022 appeared first on Fedora Community Blog.

Fitting Everything Together

Posted by Lennart Poettering on May 02, 2022 10:00 PM

TLDR: Hermetic /usr/ is awesome; let's popularize image-based OSes with modernized security properties built around immutability, SecureBoot, TPM2, adaptability, auto-updating, factory reset, uniformity – built from traditional distribution packages, but deployed via images.

Over the past years, systemd gained a number of components for building Linux-based operating systems. While these components individually have been adopted by many distributions and products for specific purposes, we did not publicly communicate a broader vision of how they should all fit together in the long run. In this blog story I hope to provide that from my personal perspective, i.e. explain how I personally would build an OS and where I personally think OS development with Linux should go.

I figure this is going to be a longer blog story, but I hope it will be equally enlightening. Please understand though that everything I write about OS design here is my personal opinion, and not one of my employer.

For the last 12 years or so I have been working on Linux OS development, mostly around systemd. In all those years I had a lot of time thinking about the Linux platform, and specifically traditional Linux distributions and their strengths and weaknesses. I have seen many attempts to reinvent Linux distributions in one way or another, to varying success. After all this most would probably agree that the traditional RPM or dpkg/apt-based distributions still define the Linux platform more than others (for 25+ years now), even though some Linux-based OSes (Android, ChromeOS) probably outnumber the installations overall.

And over all those 12 years I kept wondering, how would I actually build an OS for a system or for an appliance, and what are the components necessary to achieve that. And most importantly, how can we make these components generic enough so that they are useful in generic/traditional distributions too, and in other use cases than my own.

The Project

Before figuring out how I would build an OS it's probably good to figure out what type of OS I actually want to build, what purpose I intend to cover. I think a desktop OS is probably the most interesting. Why is that? Well, first of all, I use one of these for my job every single day, so I care immediately, it's my primary tool of work. But more importantly: I think building a desktop OS is one of the most complex overall OS projects you can work on, simply because desktops are so much more versatile and variable than servers or embedded devices. If one figures out the desktop case, I think there's a lot more to learn from, and reuse in the server or embedded case, than going the other way. After all, there's a reason why so much of the widely accepted Linux userspace stack comes from people with a desktop background (including systemd, BTW).

So, let's see how I would build a desktop OS. If you press me hard, and ask me why I would do that given that ChromeOS already exists and more or less is a Linux desktop OS: there's plenty I am missing in ChromeOS, but most importantly, I am a lot more interested in building something people can easily and naturally rebuild and hack on, i.e. Google-style over-the-wall open source with its skewed power dynamic is not particularly attractive to me. I much prefer building this within the framework of a proper open source community, out in the open, and basing all this strongly on the status quo ante, i.e. the existing distributions. I think it is crucial to provide a clear avenue to build a modern OS based on the existing distribution model, if there shall ever be a chance to make this interesting for a larger audience.

(Let me underline though: even though I am going to focus on a desktop here, most of this is directly relevant for servers as well, in particular container host OSes and suchlike, or embedded devices, e.g. car IVI systems and so on.)

Design Goals

  1. First and foremost, I think the focus must be on an image-based design rather than a package-based one. For robustness and security it is essential to operate with reproducible, immutable images that describe the OS or large parts of it in full, rather than operating always with fine-grained RPM/dpkg style packages. That's not to say that packages are not relevant (I actually think they matter a lot!), but I think they should be less a tool for deploying code and more one for building the objects to deploy. A different way to see this: any OS built like this must be easy to replicate in a large number of instances, with minimal variability. Regardless if we talk about desktops, servers or embedded devices: focus for my OS should be on "cattle", not "pets", i.e. that from the start it's trivial to reuse the well-tested, cryptographically signed combination of software over a large set of devices the same way, with a maximum of bit-exact reuse and a minimum of local variances.

  2. The trust chain matters, from the boot loader all the way to the apps. This means all code that is run must be cryptographically validated before it is run. All storage must be cryptographically protected: public data must be integrity checked; private data must remain confidential.

    This is in fact where big distributions currently fail pretty badly. I would go as far as saying that SecureBoot on Linux distributions is mostly security theater at this point, if you so will. That's because the initrd that unlocks your FDE (i.e. the cryptographic concept that protects the rest of your system) is not signed or protected in any way. It's trivial to modify for an attacker with access to your hard disk in an undetectable way, and collect your FDE passphrase. The involved bureaucracy around the implementation of UEFI SecureBoot of the big distributions is to a large degree pointless if you ask me, given that once the kernel is assumed to be in a good state, as the next step the system invokes completely unsafe code with full privileges.

    This is a fault of current Linux distributions though, not of SecureBoot in general. Other OSes use this functionality in more useful ways, and we should correct that too.

  3. Pretty much the same thing: offline security matters. I want my data to be reasonably safe at rest, i.e. cryptographically inaccessible even when I leave my laptop in my hotel room, suspended.

  4. Everything should be cryptographically measured, so that remote attestation is supported for as much software shipped on the OS as possible.

  5. Everything should be self descriptive, have single sources of truths that are closely attached to the object itself, instead of stored externally.

  6. Everything should be self-updating. Today we know that software is never bug-free, and thus requires a continuous update cycle. Not only the OS itself, but also any extensions, services and apps running on it.

  7. Everything should be robust in respect to aborted OS operations, power loss and so on. It should be robust towards hosed OS updates (regardless if the download process failed, or the image was buggy), and not require user interaction to recover from them.

  8. There must always be a way to put the system back into a well-defined, guaranteed safe state ("factory reset"). This includes that all sensitive data from earlier uses becomes cryptographically inaccessible.

  9. The OS should enforce clear separation between vendor resources, system resources and user resources: conceptually and when it comes to cryptographical protection.

  10. Things should be adaptive: the system should come up and make the best of the system it runs on, adapt to the storage and hardware. Moreover, the system should support execution on bare metal equally well as execution in a VM environment and in a container environment (i.e. systemd-nspawn).

  11. Things should not require explicit installation. i.e. every image should be a live image. For installation it should be sufficient to dd an OS image onto disk. Thus, strong focus on "instantiate on first boot", rather than "instantiate before first boot".

  12. Things should be reasonably minimal. The image the system starts its life with should be quick to download, and not include resources that can as well be created locally later.

  13. System identity, local cryptographic keys and so on should be generated locally, not be pre-provisioned, so that there's no leak of sensitive data during the transport onto the system possible.

  14. Things should be reasonably democratic and hackable. It should be easy to fork an OS, to modify an OS and still get reasonable cryptographic protection. Modifying your OS should not necessarily imply that your "warranty is voided" and you lose all good properties of the OS, if you so will.

  15. Things should be reasonably modular. The privileged part of the core OS must be extensible, including on the individual system. It's not sufficient to support extensibility just through high-level UI applications.

  16. Things should be reasonably uniform, i.e. ideally the same formats and cryptographic properties are used for all components of the system, regardless if for the host OS itself or the payloads it receives and runs.

  17. Even taking all these goals into consideration, it should still be close to traditional Linux distributions, and take advantage of what they are really good at: integration and security update cycles.

Now that we know our goals and requirements, let's start designing the OS along these lines.

Hermetic /usr/

First of all the OS resources (code, data files, …) should be hermetic in an immutable /usr/. This means that a /usr/ tree should carry everything needed to set up the minimal set of directories and files outside of /usr/ to make the system work. This /usr/ tree can then be mounted read-only into the writable root file system that then will eventually carry the local configuration, state and user data in /etc/, /var/ and /home/ as usual.

Thankfully, modern distributions are surprisingly close to working without issues in such a hermetic context. Specifically, Fedora works mostly just fine: it has adopted the /usr/ merge and the declarative systemd-sysusers and systemd-tmpfiles components quite comprehensively, which means the directory trees outside of /usr/ are automatically generated as needed if missing. In particular /etc/passwd and /etc/group (and related files) are appropriately populated, should they be missing entries.
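To make the declarative approach concrete, here is a sketch of the two file formats (service name and paths are illustrative); on boot, systemd-sysusers and systemd-tmpfiles recreate the user and directory if they are missing:

# /usr/lib/sysusers.d/myservice.conf (illustrative)
u myservice - "My Service Daemon" /var/lib/myservice

# /usr/lib/tmpfiles.d/myservice.conf (illustrative)
d /var/lib/myservice 0750 myservice myservice -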

In my model a hermetic OS is hence comprehensively defined within /usr/: combine the /usr/ tree with an empty, otherwise unpopulated root file system, and it will boot up successfully, automatically adding the strictly necessary files, and resources that are necessary to boot up.

Monopolizing vendor OS resources and definitions in an immutable /usr/ opens multiple doors to us:

  • We can apply dm-verity to the whole /usr/ tree, i.e. guarantee structural, cryptographic integrity on the whole vendor OS resources at once, with full file system metadata.

  • We can implement updates to the OS easily: by implementing an A/B update scheme on the /usr/ tree we can update the OS resources atomically and robustly, while leaving the rest of the OS environment untouched.

  • We can implement factory reset easily: erase the root file system and reboot. The hermetic OS in /usr/ has all the information it needs to set up the root file system afresh — exactly like in a new installation.

Initial Look at the Partition Table

So let's have a look at a suitable partition table, taking a hermetic /usr/ into account. Let's conceptually start with a table of four entries:

  1. An UEFI System Partition (required by firmware to boot)

  2. Immutable, Verity-protected, signed file system with the /usr/ tree in version A

  3. Immutable, Verity-protected, signed file system with the /usr/ tree in version B

  4. A writable, encrypted root file system

(This is just for initial illustration here, as we'll see later it's going to be a bit more complex in the end.)

The Discoverable Partitions Specification provides suitable partition types UUIDs for all of the above partitions. Which is great, because it makes the image self-descriptive: simply by looking at the image's GPT table we know what to mount where. This means we do not need a manual /etc/fstab, and a multitude of tools such as systemd-nspawn and similar can operate directly on the disk image and boot it up.
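Concretely, that self-descriptiveness is what lets a tool boot such an image without any configuration at all; a minimal example (image name illustrative):

$ sudo systemd-nspawn --image=os.raw --boot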

Booting

Now that we have a rough idea how to organize the partition table, let's look a bit at how to boot into that. Specifically, in my model "unified kernels" are the way to go, specifically those implementing Boot Loader Specification Type #2. These are basically kernel images that have an initial RAM disk attached to them, as well as a kernel command line, a boot splash image and possibly more, all wrapped into a single UEFI PE binary. By combining these into one we achieve two goals: they become extremely easy to update (i.e. drop in one file, and you update kernel+initrd) and more importantly, you can sign them as one for the purpose of UEFI SecureBoot.

In my model, each version of such a kernel would be associated with exactly one version of the /usr/ tree: both are always updated at the same time. An update then becomes relatively simple: drop in one new /usr/ file system plus one kernel, and the update is complete.

The boot loader used for all this would be systemd-boot, of course. It's a very simple loader, and implements the aforementioned boot loader specification. This means it requires no explicit configuration or anything: it's entirely sufficient to drop in one such unified kernel file, and it will be picked up, and be made a candidate to boot into.

You might wonder how to configure the root file system to boot from with such a unified kernel that contains the kernel command line and is signed as a whole and thus immutable. The idea here is to use the usrhash= kernel command line option implemented by systemd-veritysetup-generator and systemd-fstab-generator. It does two things: it will search and set up a dm-verity volume for the /usr/ file system, and then mount it. It takes the root hash value of the dm-verity Merkle tree as the parameter. This hash is then also used to find the /usr/ partition in the GPT partition table, under the assumption that the partition UUIDs are derived from it, as per the suggestions in the discoverable partitions specification (see above).
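To sketch how the pieces connect (partition labels and the hash value are illustrative): the root hash that veritysetup prints when formatting the hash partition is the same value that ends up in usrhash= on the kernel command line:

# Generate the Verity hash tree for the /usr/ file system
$ sudo veritysetup format /dev/disk/by-partlabel/usr-A /dev/disk/by-partlabel/usr-A-verity
Root hash: 4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076   (output abridged)

# The same value goes into the unified kernel image's command line:
usrhash=4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076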

systemd-boot (if not told otherwise) will do a version sort of the kernel image files it finds, and then automatically boot the newest one. Picking a specific kernel to boot will also fixate which version of the /usr/ tree to boot into, because — as mentioned — the Verity root hash of it is built into the kernel command line the unified kernel image contains.

In my model I'd place the kernels directly into the UEFI System Partition (ESP), in order to simplify things. (systemd-boot also supports reading them from a separate boot partition, but let's not complicate things needlessly, at least for now.)

So, with all this, we now already have a boot chain that goes something like this: once the boot loader is run, it will pick the newest kernel, which includes the initial RAM disk and a secure reference to the /usr/ file system to use. This is already great. But a /usr/ alone won't make us happy, we also need a root file system. In my model, that file system would be writable, and the /etc/ and /var/ hierarchies would be located directly on it. Since these trees potentially contain secrets (SSH keys, …) the root file system needs to be encrypted. We'll use LUKS2 for this, of course. In my model, I'd bind this to the TPM2 chip (for compatibility with systems lacking one, we can find a suitable fallback, which then provides weaker guarantees, see below). A TPM2 is a security chip available in most modern PCs. Among other things it contains a persistent secret key that can be used to encrypt data, in a way that only if you possess access to it and can prove you are using validated software you can decrypt it again. The cryptographic measuring I mentioned earlier is what allows this to work. But … let's not get lost too much in the details of TPM2 devices, that'd be material for a novel, and this blog story is going to be way too long already.

What does using a TPM2 bound key for unlocking the root file system get us? We can encrypt the root file system with it, and you can only read or make changes to the root file system if you also possess the TPM2 chip and run our validated version of the OS. This protects us against an evil maid scenario to some level: an attacker cannot just copy the hard disk of your laptop while you leave it in your hotel room, because unless the attacker also steals the TPM2 device it cannot be decrypted. The attacker can also not just modify the root file system, because such changes would be detected on next boot because they aren't done with the right cryptographic key.

So, now we have a system that already can boot up somewhat completely, and run userspace services. All code that is run is verified in some way: the /usr/ file system is Verity protected, and the root hash of it is included in the kernel that is signed via UEFI SecureBoot. And the root file system is locked to the TPM2 where the secret key is only accessible if our signed OS + /usr/ tree is used.

(One brief intermission here: so far all the components I am referencing here exist already, and have been shipped in systemd and other projects already, including the TPM2 based disk encryption. There's one thing missing here however at the moment that still needs to be developed (happy to take PRs!): right now TPM2 based LUKS2 unlocking is bound to PCR hash values. This is hard to work with when implementing updates — what we'd need instead is unlocking by signatures of PCR hashes. TPM2 supports this, but we don't support it yet in our systemd-cryptsetup + systemd-cryptenroll stack.)
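For completeness, the PCR-bound TPM2 enrollment that does ship today is a one-liner, after which the volume can unlock automatically via crypttab (device path illustrative):

$ sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/sda4

# /etc/crypttab
root  /dev/sda4  -  tpm2-device=auto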

One of the goals mentioned above is that cryptographic key material should always be generated locally on first boot, rather than pre-provisioned. This of course has implications for the encryption key of the root file system: if we want to boot into this system we need the root file system to exist, and thus a key already generated that it is encrypted with. But where precisely would we generate it, given that we have no installer which could generate it at installation time (as is done in traditional Linux distribution installers)? My proposed solution here is to use systemd-repart, which is a declarative, purely additive repartitioner. It can run from the initrd to create and format partitions on boot, before transitioning into the root file system. It can also format the partitions it creates and encrypt them, automatically enrolling a TPM2-bound key.

So, let's revisit the partition table we mentioned earlier. Here's what in my model we'd actually ship in the initial image:

  1. A UEFI System Partition (ESP)

  2. An immutable, Verity-protected, signed file system with the /usr/ tree in version A

And that's already it. No root file system, no B /usr/ partition, nothing else. Only two partitions are shipped: the ESP with the systemd-boot loader and one unified kernel image, and the A version of the /usr/ partition. Then, on first boot systemd-repart will notice that the root file system doesn't exist yet, and will create it, format it, encrypt it, and enroll the key into the TPM2. It will also create the second /usr/ partition (B) that we'll need for later A/B updates (it is created empty for now, until the first update operation actually takes place, see below). Once done, the initrd will combine the fresh root file system with the shipped /usr/ tree, and transition into it. Because the OS is hermetic in /usr/ and contains all the systemd-tmpfiles and systemd-sysusers information, it can then set up the root file system properly and create any directories and symlinks (and maybe a few files) necessary to operate.
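
A minimal sketch of what the systemd-repart drop-in for the root partition could look like (the file name and the Format= choice are assumptions; Encrypt=tpm2 asks repart to encrypt the freshly created partition and enroll a TPM2-bound key):

    # /usr/lib/repart.d/50-root.conf (hypothetical)
    [Partition]
    Type=root
    Format=btrfs
    Encrypt=tpm2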

Besides the fact that the root file system's encryption keys are generated on the system we boot from and never leave it, it is also pretty nice that the root file system will be sized dynamically, taking into account the physical size of the backing storage. This is perfect, because on first boot the image will automatically adapt to what it has been dd'ed onto.

Factory Reset

This is a good point to talk about the factory reset logic, i.e. the mechanism to place the system back into a known good state. This is important for two reasons: in our laptop use case, once you want to pass the laptop to someone else, you want to ensure your data is fully and comprehensively erased. Moreover, if you have reason to believe your device was hacked, you want to revert the device to a known good state, i.e. ensure that exploits cannot persist. systemd-repart already has a mechanism for this: in the declarations of the partitions the system should have, entries may be marked as candidates for erasing on factory reset. The actual factory reset is then requested by one of two means: by specifying a specific kernel command line option (which is not too interesting here, given we lock that down via UEFI SecureBoot; but then again, one could also add a second kernel to the ESP that is identical to the first, differing only in that it lists this command line option: when the user selects this entry it will initiate a factory reset) — or via an EFI variable that can be set and is honoured on the immediately following boot. So here's how a factory reset would then go down: once the factory reset is requested, it's enough to reboot. On the subsequent boot systemd-repart runs from the initrd, where it will honour the request and erase the partitions marked for erasing. Once that is complete the system is back in the state we shipped it in: only the ESP and the /usr/ file system will exist, but the root file system is gone. And from here we can continue as on the original first boot: create a new root file system (and any other partitions), and encrypt/set it up afresh.
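
In repart terms, opting a partition into this mechanism is a single additional line in its drop-in, marking it for erasure whenever a factory reset is requested. Continuing the hypothetical drop-in from above:

    # /usr/lib/repart.d/50-root.conf (hypothetical)
    [Partition]
    Type=root
    Format=btrfs
    Encrypt=tpm2
    FactoryReset=yes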

So now we have a nice setup, where everything is either signed or encrypted securely. The system can adapt automatically on first boot to the hardware it is booted on, and can easily be brought back into a well-defined state identical to the one it was shipped in.

Modularity

But of course, such a monolithic, immutable system is only useful for very specific purposes. If /usr/ can't be written to (at least not in the traditional sense), one cannot just go and install a new software package that one needs. So here two goals are superficially conflicting: on one hand one wants modularity, i.e. the ability to add components to the system, and on the other immutability, i.e. that precisely this is prohibited.

So let's see what I propose as a middle ground in my model. First, what's the precise use case for such modularity? I see a couple of different ones:

  1. For some cases it is necessary to extend the system itself at the lowest level, so that the components added in extend (or maybe even replace) the resources shipped in the base OS image, live in the same namespace, and are subject to the same security restrictions and privileges. This kind of modularity has maximum exposure to the details of the base OS and its interfaces.

    Example: a module that adds a debugger or tracing tools into the system. Or maybe an optional hardware driver module.

  2. In other cases, more isolation is preferable: instead of extending the system resources directly, additional services shall be added in that bring their own files, can live in their own namespace (but with "windows" into the host namespaces), however still are system components, and provide services to other programs, whether local or remote. Exposure to the details of the base OS for this kind of modularity is restricted: it mostly focuses on the ability to consume and provide IPC APIs from/to the system. Components of this type can still be highly privileged, but the level of integration is substantially smaller than for the type explained above.

    Example: a module that adds a specific VPN connection service to the OS.

  3. Finally, there's the actual payload of the OS. This stuff is relatively isolated from the OS and definitely from each other. It mostly consumes OS APIs, and generally doesn't provide OS APIs. This kind of stuff runs with minimal privileges, and in its own namespace of concepts.

    Example: a desktop app, for reading your emails.

Of course, the lines between these three types of modules are blurry, but I think distinguishing them does make sense, as I think different mechanisms are appropriate for each. So here's what I'd propose in my model to use for this.

  1. For the system extension case I think the systemd-sysext images are appropriate (see the command sketch after this list). This tool operates on system extension images that are very similar to the host's disk image: they also contain a /usr/ partition, protected by Verity. However, they just include additions to the host image: binaries that extend the host. When such a system extension image is activated, it is merged via an immutable overlayfs mount into the host's /usr/ tree. Thus any file shipped in such a system extension will suddenly appear as if it was part of the host OS itself. This is a very simple and powerful way to combine an immutable OS with immutable extensions for optional components that should more or less be considered part of the OS. Note that extensions for an OS matching this tool should most likely be built at the same time, within the same update cycle scheme, as the host OS itself. After all, the files included in the extensions will have dependencies on files in the system OS image, and care must be taken that these dependencies remain in order.

  2. For adding in additional somewhat isolated system services in my model, Portable Services are the proposed tool of choice. Portable services are in most ways just like regular system services; they could be included in the system OS image or an extension image. However, portable services use RootImage= to run off separate disk images, thus within their own namespace. Images set up this way have various ways to integrate into the host OS, as they are in most ways regular system services, which just happen to bring their own directory tree. Also, unlike regular system services, for them sandboxing is opt-out rather than opt-in. In my model, here too the disk images are Verity protected and thus immutable. Just like the host OS they are GPT disk images that come with a /usr/ partition and Verity data, along with signing.

  3. Finally, the actual payload of the OS, i.e. the apps. To be useful in real life it is important to hook into existing ecosystems here, so that a large set of apps is available. Given that on Linux flatpak (or, on servers, OCI containers) is the established format that has pretty much won, it is probably the way to go. That said, I think both of these mechanisms have relatively weak properties, in particular when it comes to security, since immutability/measurements and similar are not provided. This means that, unlike for system extensions and portable services, a complete trust chain with attestation and per-app cryptographically protected data is much harder to implement sanely.
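
Here's roughly how the first two mechanisms are driven with today's tools ("foo" is a placeholder image name; extension images are typically picked up from /var/lib/extensions/, portable service images from /var/lib/portables/):

    # merge all discovered system extension images into /usr/ via overlayfs
    systemd-sysext merge
    systemd-sysext status

    # attach a portable service image, then run it like any other service
    portablectl attach /var/lib/portables/foo.raw
    systemctl start foo.service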

What I'd like to underline here is that the main system OS image, as well as the system extension images and the portable service images, are put together the same way: they are GPT disk images, with one immutable file system and associated Verity data. The latter two should also contain a PKCS#7 signature for the top-level Verity hash. This uniformity has many benefits: you can use the same tools to build and process these images, but most importantly: by using a single way to validate them throughout the stack (i.e. Verity, in the latter cases with PKCS#7 signatures), validation and measurement are straightforward. In fact it's so obvious that we don't even have to implement it in systemd: the kernel has direct native support for this Verity signature checking already (IMA).

So, by composing a system at runtime from a host image, extension images and portable service images we have a nicely modular system where every single component is cryptographically validated on every single IO operation, and every component is measured, in its entire combination, directly in the kernel's IMA subsystem.

(Of course, once you add the desktop apps or OCI containers on top, then these properties are lost further down the chain. But well, a lot is already won, if you can close the chain that far down.)

Note that system extensions are not designed to replicate the fine grained packaging logic of RPM/dpkg. Of course, systemd-sysext is a generic tool, so you can use it for whatever you want, but there's a reason it does not bring support for a dependency language: the goal here is not to replicate traditional Linux packaging (we have that already, in RPM/dpkg, and I think they are actually OK for what they do) but to provide delivery of larger, coarser sets of functionality, in lockstep with the underlying OS' life-cycle and in particular with no interdependencies, except on the underlying OS.

Also note that depending on the use case it might make sense to also use system extensions to modularize the initrd itself. This is probably less relevant for a desktop OS, but for server systems it might make sense to package up support for specific complex storage setups in a systemd-sysext system extension, which can be applied to the initrd that is built into the unified kernel. (In fact, we have been working on bringing signed yet modular initrd support to general purpose Fedora this way.)

Note that portable services are composable from system extensions too, by the way. This makes them even more useful, as you can share a common runtime between multiple portable services, or even use the host image as the common runtime for portable services. In this model a common runtime image is shared between one or more system extensions, and composed at runtime via an overlayfs instance.

More Modularity: Secondary OS Installs

Having an immutable, cryptographically locked down host OS is great I think, and if we have some moderate modularity on top, that's also great. But oftentimes it's useful to be able to depart from (i.e. compromise on) that for some specific use cases, e.g. to provide a bridge that allows workloads designed around RPM/dpkg package management to coexist reasonably nicely with such an immutable host.

For this purpose, in my model, I'd propose using systemd-nspawn containers. systemd-nspawn is focused on OS containerization, i.e. it allows you to run a full OS with init system and everything as payload (unlike for example Docker containers, which focus on a single service, and where running a full OS in them is a mess).

Running systemd-nspawn containers for such secondary OS installs has various nice properties. One of course is that systemd-nspawn supports the same level of cryptographic image validation that we rely on for the host itself. Thus, to some level the whole OS trust chain is reasonably recursive if desired: the firmware validates the OS, and the OS can validate a secondary OS installed within it. In fact, we can run our trusted OS recursively on itself and get similar security guarantees! Besides these security aspects, systemd-nspawn also has really nice properties when it comes to integration with the host. For example, the --bind-user= option permits binding a host user record and their directory into a container as a simple one-step operation. This makes it extremely easy to have a single user and $HOME and share it concurrently between the host and a zoo of secondary OSes in systemd-nspawn containers, each of which could even run a different distribution.
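
A sketch of what such a secondary OS install could look like in practice (the image path and user name are made up):

    # boot a full secondary OS off a disk image, binding in the host
    # user "nora" and her home directory in a single step
    systemd-nspawn --image=/var/lib/machines/debian.raw --bind-user=nora --boot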

Developer Mode

Superficially, an OS with an immutable /usr/ appears much less hackable than an OS where everything is writable. Moreover, an OS where everything must be signed and cryptographically validated makes it hard to insert your own code, given that you are unlikely to have access to the signing keys.

To address this issue other systems have supported a "developer" mode: when entered, the security guarantees are disabled, and the system can be freely modified, without cryptographic validation. While that's a great concept to have, I doubt it's what most developers really want: the cryptographic properties of the OS are great after all, and it sucks having to give them up the moment developer mode is activated.

In my model I'd thus propose two different approaches to this problem. First of all, I think there's value in allowing users to additively extend/override the OS via local developer system extensions. With this scheme the underlying cryptographic validation would remain intact, but — if this form of development mode is explicitly enabled — the developer could add in more resources from local storage that are not tied to the OS builder's chain of trust, but to a local one (i.e. simply backed by encrypted storage of some form).

The second approach is to make it easy to extend (or in fact replace) the set of trusted validation keys with local ones that are under the control of the user, in order to make it easy to operate with kernel, OS, extension, portable service or container images signed by the local developer without involvement of the OS builder. This is relatively easy to do for components down the trust chain; the elements further up the chain should optionally accept additional certificates to validate against.

(Note that systemd currently has no explicit support for a "developer" mode like this. I think we should add that sooner or later however.)

Democratizing Code Signing

Closely related to the question of developer mode is the question of code signing. If you ask me, the status quo of UEFI SecureBoot code signing in the major Linux distributions is pretty sad. The work to get stuff signed is massive, but in effect it delivers very little in return: because initrds are entirely unprotected and reside on partitions lacking any form of cryptographic integrity protection, any attacker can trivially modify the boot process of any such Linux system and freely collect the FDE passphrases entered. There's little value in signing the boot loader and kernel in a complex bureaucracy if it then happily loads entirely unprotected code that processes the actually relevant security credentials: the FDE keys.

In my model, through the use of unified kernels this important gap is closed, and hence UEFI SecureBoot code signing becomes an integral part of the boot chain from firmware to the host OS. Unfortunately, code signing and having something a user can locally hack are to some degree conflicting goals. However, I think we can improve the situation here, and put more emphasis on enrolling developer keys in the trust chain easily. Specifically, I see one relevant approach here: enrolling keys directly in the firmware is something that we should make less of a theoretical exercise and more something we can realistically deploy. See this work in progress making this more automatic and eventually safe. Other approaches are conceivable (including some that build on the existing MokManager infrastructure), but given the politics involved, are harder to implement conclusively.

Running the OS itself in a container

What I explain above is put together with running on a bare metal system in mind. However, one of the stated goals is to make the OS adaptive enough to also run nicely in a container environment (specifically: systemd-nspawn). Booting a disk image on bare metal or in a VM generally means that the UEFI firmware validates and invokes the boot loader, and the boot loader invokes the kernel, which then transitions into the final system. This is different for containers: here the container manager immediately calls the init system, i.e. PID 1. Thus the validation logic must be different: cryptographic validation must be done by the container manager. In my model this is solved by shipping the OS image not only with a Verity data partition (as is already necessary for the UEFI SecureBoot trust chain, see above), but also with another partition containing a PKCS#7 signature of the root hash of said Verity partition. This of course is exactly what I propose for the system extension and portable service images too. Thus, in my model the images for all three uses are put together the same way: an immutable /usr/ partition, accompanied by a Verity partition and a PKCS#7 signature partition. The OS image itself then has two ways "into" the trust chain: either through the signed unified kernel in the ESP (which is used for bare metal and VM boots) or by using the PKCS#7 signature stored in the partition (which is used for container/systemd-nspawn boots).
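
In practice this means the very same image file can be booted both ways; for the container case a single command suffices (reusing the fooOS example name from earlier):

    # boot the OS image as a container; the /usr/, Verity and signature
    # partitions are discovered as per the Discoverable Partitions Specification
    systemd-nspawn --image=fooOS.raw --boot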

Parameterizing Kernels

A fully immutable and signed OS has to establish trust in the user data it makes use of before doing so. In the model I describe here, for /etc/ and /var/ we do this via disk encryption of the root file system (in combination with integrity checking). But the point where the root file system is mounted comes relatively late in the boot process, and thus cannot be used to parameterize the boot itself. However, in many cases it's important to be able to parameterize the boot process.

For example, for the implementation of the developer mode indicated above, it's useful to be able to pass this fact safely to the initrd, in combination with other fields (e.g. a hashed root password for allowing in-initrd logins for debug purposes). After all, if the initrd is pre-built by the vendor and signed as a whole together with the kernel, it cannot be modified to carry such data directly (which is in fact how parameterizing of the initrd was traditionally done, to a large degree).

In my model this is achieved through system credentials, which allow passing parameters to systems (and services, for that matter) in an encrypted and authenticated fashion, bound to the TPM2 chip. This means that we can securely pass data into the initrd so that it can be authenticated and decrypted only on the system it is intended for, and with the unified kernel image it was intended for.
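
With the systemd-creds tool this could look as follows (a sketch; the credential name and file names are made up):

    # encrypt a parameter so that it can only be decrypted on this very TPM2,
    # e.g. a hashed root password for in-initrd debug logins
    systemd-creds encrypt --with-key=tpm2 --name=debug.root-password plaintext.txt debug.cred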

Swap

In my model the OS would also carry a swap partition, for the simple reason that only then can systemd-oomd.service provide the best results. Also see In defence of swap: common misconceptions.

Updating Images

Now that we have a rough idea of how the system shall be organized, let's next focus on the deployment cycle: software needs regular update cycles, and software that is not updated regularly is a security problem. Thus, I am sure that any modern system must be updated automatically, without requiring avoidable user interaction.

In my model, this is the job for systemd-sysupdate. It's a relatively simple A/B image updater: it operates either on partitions, on regular files in a directory, or on subdirectories in a directory. Each entry has a version (which is encoded in the GPT partition label for partitions, and in the filename for regular files and directories): whenever an update is initiated the oldest version is erased, and the newest version is downloaded.

With the setup described above, a system update becomes a really simple operation. On each update, the systemd-sysupdate tool downloads a /usr/ file system partition, an accompanying Verity partition and a PKCS#7 signature partition, and drops them into the host's partition table (where they possibly replace the oldest versions stored there so far). Then it downloads a unified kernel image and drops it into the EFI System Partition's /EFI/Linux directory (as per the Boot Loader Specification; possibly erasing the oldest such file there). And that's already the whole update process: four files are downloaded from the server, unpacked and put in the most straightforward of ways into the partition table or file system. Unlike in other OS designs, no mechanism is required to explicitly switch to the newer version; the aforementioned systemd-boot logic will automatically pick the newest kernel once it is dropped in.
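
A rough sketch of a systemd-sysupdate transfer definition for the /usr/ partition (the file name, URL and match patterns are assumptions; @v is the version wildcard, and one such definition would exist per downloaded artifact):

    # /usr/lib/sysupdate.d/50-usr.conf (hypothetical)
    [Transfer]
    ProtectVersion=%A

    [Source]
    Type=url-file
    Path=https://download.example.com/
    MatchPattern=fooOS_@v.usr.raw.xz

    [Target]
    Type=partition
    Path=auto
    MatchPattern=fooOS_@v
    MatchPartitionType=usr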

Above we talked a lot about modularity, and how to put systems together as a combination of a host OS image, system extension images for the initrd and the host, portable service images and systemd-nspawn container images. I already emphasized that these image files are actually always the same: GPT disk images with partition definitions that match the Discoverable Partition Specification. This comes in very handy when thinking about updating: we can use the exact same systemd-sysupdate tool for updating these other images as we use for the host image. The uniformity of the on-disk format allows us to update them uniformly too.

Boot Counting + Assessment

Automatic OS updates do not come without risks: if they happen automatically and an update goes wrong, your system might be automatically updated into a brick. This of course is less than ideal. Hence it is essential to address this reasonably automatically. In my model, there's systemd's Automatic Boot Assessment for that. The mechanism is simple: whenever a new unified kernel image is dropped into the system it will be stored with a small integer counter value included in the filename. Whenever the unified kernel image is selected for booting by systemd-boot, the counter is decreased by one. Once the system has booted up successfully (which is determined by userspace) the counter is removed from the file name (which indicates "this entry is known to work"). If the counter ever hits zero, this indicates that the system tried to boot it a couple of times and failed each time; the entry is thus apparently "bad". In this case systemd-boot will not consider the kernel anymore, and revert to the next older entry (one whose counter is not zero).

By sticking the boot counter into the filename of the unified kernel we can directly attach this information to the kernel, and thus need not concern ourselves with cleaning up secondary information about the kernel when the kernel is removed. Updating with a tool like systemd-sysupdate hence remains a very simple operation: drop one old file, add one new file.
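
To illustrate, the life of a single unified kernel file under this scheme might look like this (version and counter values are made up; the +LEFT-DONE suffix encodes the boot counter):

    fooOS_0.8+3.efi      freshly installed: 3 boot attempts left
    fooOS_0.8+2-1.efi    one attempt failed: 2 left, 1 done
    fooOS_0.8.efi        boot succeeded: counter removed for good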

Picking the Newest Version

I already mentioned that systemd-boot automatically picks the newest unified kernel image to boot, by looking at the version encoded in the filename. This is done via a simple strverscmp() call (well, truth be told, it's a modified version of that call, different from the one implemented in libc, because real-life package managers use more complex rules for comparing versions these days, and hence it made sense to do that here too). The concept of having multiple entries of some resource in a directory, and picking the newest one automatically is a powerful concept, I think. It means adding/removing new versions is extremely easy (as we discussed above, in systemd-sysupdate context), and allows stateless determination of what to use.

If systemd-boot can do that, what about system extension images, portable service images, or systemd-nspawn container images that do not actually use systemd-boot as the entrypoint? All these tools implement the very same logic, but on the partition level: if multiple suitable /usr/ partitions exist, the newest is determined by comparing their GPT partition labels.

This is in a way the counterpart to the systemd-sysupdate update logic described above: we always need a way to determine which partition to actually use after an update took place, and this stays easy every time: enumerate the possible entries, pick the newest as per the (modified) strverscmp() result.
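
As a rough shell approximation of this pick-the-newest logic (assuming the ESP is mounted on /efi; GNU sort -V only approximates the modified strverscmp() described above):

    # enumerate the unified kernels, version-sort them, take the newest
    ls /efi/EFI/Linux/*.efi | sort -V | tail -n 1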

Home Directory Management

In my model the device's users and their home directories are managed by systemd-homed. This means they are relatively self-contained and can be migrated easily between devices. The numeric UID assignment for each user is done only at the moment of login, and the files in the home directory are mapped as needed via a uidmap mount. It also allows us to protect the data of each user individually with a credential that belongs to the user itself: instead of binding the confidentiality of the user's data to the system-wide full-disk encryption, each user gets their own encrypted home directory, where the user's authentication token (password, FIDO2 token, PKCS#11 token, recovery key, …) is used as the authentication and decryption key for the user's data. This brings a major improvement for security, as it means the user's data is cryptographically inaccessible except when the user is actually logged in.
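
Creating such a user with systemd-homed could look like this (the user name is, again, an example):

    # create a user whose home directory is a LUKS2 volume, unlockable
    # via password and/or a FIDO2 security token
    homectl create nora --storage=luks --fido2-device=auto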

It also allows us to correct another major issue with traditional Linux systems: the way data encryption works during system suspend. Traditionally on Linux, the disk encryption credentials (e.g. the LUKS passphrase) are kept in memory even while the system is suspended. This is a bad choice for security, since many (most?) of us probably never turn off our laptops but suspend them instead. But if the decryption key is always present in unencrypted form during the suspended time, then it could potentially be read from there by a sufficiently equipped attacker.

By encrypting the user's home directory with the user's authentication token we can first safely "suspend" the home directory before going to the system suspend state (i.e. flush out the cryptographic keys needed to access it). This means any process currently accessing the home directory will be frozen for the time of the suspend, but that's expected anyway during a system suspend cycle. Why is this better than the status quo ante? In this model the home directory's cryptographic key material is erased during suspend, but it can be safely reacquired on resume, from system code. If the system is only encrypted as a whole however, then the system code itself couldn't reauthenticate the user, because it would be frozen too. By separating home directory encryption from the root file system encryption we can avoid this problem.

Partition Setup

So, we discussed the organization of the partitions in OS images multiple times above, each time focusing on a specific aspect. Let's now summarize how this should look all together.

In my model, the initial, shipped OS image should look roughly like this:

  • (1) A UEFI System Partition, with systemd-boot as boot loader and one unified kernel
  • (2) A /usr/ partition (version "A"), with a label fooOS_0.7 (under the assumption we called our project fooOS and the image version is 0.7).
  • (3) A Verity partition for the /usr/ partition (version "A"), with the same label
  • (4) A partition carrying the Verity root hash for the /usr/ partition (version "A"), along with a PKCS#7 signature of it, also with the same label

On first boot this is augmented by systemd-repart like this:

  • (5) A second /usr/ partition (version "B"), initially with a label _empty (which is the label systemd-sysupdate uses to mark partitions that currently carry no valid payload)
  • (6) A Verity partition for that (version "B"), similar to the above case, also labelled _empty
  • (7) And ditto a Verity root hash partition with a PKCS#7 signature (version "B"), also labelled _empty
  • (8) A root file system, encrypted and locked to the TPM2
  • (9) A home file system, integrity protected via a key also in TPM2 (encryption is unnecessary, since systemd-homed adds that on its own, and it's nice to avoid duplicate encryption)
  • (10) A swap partition, encrypted and locked to the TPM2

Then, on the first OS update the partitions 5, 6, 7 are filled with a new version of the OS (let's say 0.8) and thus get their label updated to fooOS_0.8. After a boot, this version is active.

On a subsequent update the three partitions fooOS_0.7 get wiped and replaced by fooOS_0.9 and so on.

On factory reset, the partitions 8, 9, 10 are deleted, so that systemd-repart recreates them, using a new set of cryptographic keys.

Here's a graphic that hopefully illustrates the partition table from shipped image, through first boot, multiple update cycles and eventual factory reset:

Partitions Overview

Trust Chain

So let's summarize the intended chain of trust (for bare metal/VM boots) that ensures every piece of code in this model is signed and validated, and any system secret is locked to TPM2.

  1. First, firmware (or possibly shim) authenticates systemd-boot.

  2. Once systemd-boot picks a unified kernel image to boot, it is also authenticated by firmware/shim.

  3. The unified kernel image contains an initrd, which is the first userspace component that runs. It finds any system extensions passed into the initrd, and sets them up through Verity. The kernel will validate the Verity root hash signature of these system extension images against its usual keyring.

  4. The initrd also finds credentials passed in, then securely unlocks (which means: decrypts + authenticates) them with a secret from the TPM2 chip, locked to the kernel image itself.

  5. The kernel image also contains a kernel command line which contains a usrhash= option that pins the root hash of the /usr/ partition to use.

  6. The initrd then unlocks the encrypted root file system, with a secret bound to the TPM2 chip.

  7. The system then transitions into the main system, i.e. the combination of the Verity protected /usr/ and the encrypted root file system. It then activates two more encrypted (and/or integrity protected) volumes for /home/ and swap, also with a secret tied to the TPM2 chip.

Here's an attempt to illustrate the above graphically:

Trust Chain

This is the trust chain of the basic OS. Validation of system extension images, portable service images, systemd-nspawn container images always takes place the same way: the kernel validates these Verity images along with their PKCS#7 signatures against the kernel's keyring.

File System Choice

In the above I left the choice of file systems unspecified. For the immutable /usr/ partitions squashfs might be a good candidate, but any other that works nicely in a read-only fashion and generates reproducible results is a good choice, too. The home directories as managed by systemd-homed should certainly use btrfs, because it's the only general purpose file system supporting online grow and shrink, which systemd-homed can take advantage of to manage storage.

For the root file system btrfs is likely also the best idea. That's because we intend to use LUKS/dm-crypt underneath, which by default only provides confidentiality, not authenticity of the data (unless combined with dm-integrity). Since btrfs (unlike xfs/ext4) does full data checksumming it's probably the best choice here, since it means we don't have to use dm-integrity (which comes at a higher performance cost).

OS Installation vs. OS Instantiation

In the discussion above a lot of focus was put on setting up the OS and completing the partition layout and such on first boot. This means installing the OS becomes as simple as dd-ing (i.e. "streaming") the shipped disk image into the final HDD medium. Simple, isn't it?

Of course, such a scheme is just too simple for many setups in real life. Whenever multi-boot is required (i.e. co-installing an OS implementing this model with another unrelated one), dd-ing a disk image onto the HDD is going to overwrite user data that was supposed to be kept around.

In order to cover this case, in my model, we'd use systemd-repart (again!) to allow streaming the source disk image onto the target HDD in a smarter, additive way. The tool after all is purely additive: it will add in partitions or grow them if they are missing or too small. systemd-repart already has all the necessary provisions to not only create a partition on the target disk, but also copy blocks from a raw installer disk. An install operation would then become a two-step process: one invocation of systemd-repart that adds the /usr/, its Verity and the signature partition to the target medium, populated with a copy of the same partitions of the installer medium; and one invocation of bootctl that installs the systemd-boot boot loader in the ESP. (Well, there's one thing missing here: the unified OS kernel also needs to be dropped into the ESP. For now, this can be done with a simple cp call. In the long run, this should probably be something bootctl can do as well, if told so.)
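
A sketch of those two steps (device and mount paths are examples, and option spellings vary somewhat across systemd versions; the repart definitions here would use CopyBlocks= to populate the new partitions from the installer medium):

    # step 1: additively create the /usr/, Verity and signature partitions
    systemd-repart --definitions=/usr/lib/repart.d --dry-run=no /dev/sdb

    # step 2: install the systemd-boot boot loader into the target ESP
    bootctl install --esp-path=/mnt/esp

    # interim workaround: copy the unified kernel over manually
    cp fooOS_0.7.efi /mnt/esp/EFI/Linux/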

So now we have a simple scheme that covers all bases: we can either just dd an image to disk, or we can stream an image onto an existing HDD, adding a couple of new partitions and files to the ESP.

Of course, in reality things are more complex than even that: there's a good chance that the existing ESP is simply too small to carry multiple unified kernels. In my model, the way to address this is by shipping two slightly different systemd-repart partition definition file sets: one for the ideal case when the ESP is large enough, and a fallback for when it isn't, where we then add in an additional XBOOTLDR partition (as per the Discoverable Partitions Specification). In that mode the ESP carries the boot loader, but the unified kernels are stored in the XBOOTLDR partition. This scenario is not quite as simple as the XBOOTLDR-less scenario described first, but is equally well supported in the various tools. Note that systemd-repart can be told size constraints on the partitions it shall create or augment; thus, to implement this scheme it's enough to invoke the tool with the fallback partition scheme if invocation with the ideal scheme fails.

Either way: regardless how the partitions, the boot loader and the unified kernels ended up on the system's hard disk, on first boot the code paths are the same again: systemd-repart will be called to augment the partition table with the root file system, and properly encrypt it, as was already discussed earlier here. This means: all cryptographic key material used for disk encryption is generated on first boot only, the installer phase does not encrypt anything.

Live Systems vs. Installer Systems vs. Installed Systems

Traditionally on Linux three types of systems were common: "installed" systems, i.e. those stored on the main storage of the device, which are the primary place people spend their time in; "installer" systems, which are used to install them and whose job is to copy and set up the packages that make up the installed system; and "live" systems, which are a middle ground: systems that behave like an installed system in most ways, but live on removable media.

In my model I'd like to remove the distinction between these three concepts as much as possible: each of these three images should carry the exact same /usr/ file system, and should be suitable to be replicated the same way. Once installed the resulting image can also act as an installer for another system, and so on, creating a certain "viral" effect: if you have one image or installation it's automatically something you can replicate 1:1 with a simple systemd-repart invocation.

Building Images According to this Model

The above explains what the image should look like and how its first boot and update cycles will modify it. But this leaves one question unanswered: how does one actually build the initial image for OS instances according to this model?

Note that there's nothing too special about images following this model: they are ultimately just GPT disk images with Linux file systems, following the Discoverable Partition Specification. This means you can use any set of tools of your choice that can put together compliant GPT disk images.

I personally would use mkosi for this purpose though. It's designed to generate compliant images, and has a rich toolset for SecureBoot and signed/Verity file systems already in place.
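
A bare-bones sketch of a mkosi configuration for such an image (option names differ between mkosi versions, so treat this as purely illustrative):

    # mkosi.default (hypothetical)
    [Distribution]
    Distribution=fedora
    Release=36

    [Output]
    Format=gpt_squashfs
    Bootable=yes
    Verity=yes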

What is key here is that this model doesn't depart from RPM and dpkg; instead it builds on top of them: in this model they are excellent for putting together images on the build host, but deployment onto the runtime host does not involve individual packages.

I think one should not underestimate the value traditional distributions bring regarding security, integration and general polish. The concepts I describe above inherit from this, but depart from the idea that distribution packages are a runtime concept, making them a build-time concept instead.

Note that the above is pretty much independent from the underlying distribution.

Final Words

I have no illusions, general purpose distributions are not going to adopt this model as their default any time soon, and it's not even my goal that they do that. The above is my personal vision, and I don't expect people to buy into it 100%, and that's fine. However, what I am interested in is finding the overlaps, i.e. work with people who buy 50% into this vision, and share the components.

My goals here thus are to:

  1. Get distributions to move to a model where images like this can be built from the distribution easily. Specifically this means that distributions make their OS hermetic in /usr/.

  2. Find the overlaps, share components with other projects to revisit how distributions are put together. This is already happening, see systemd-tmpfiles and systemd-sysusers support in various distributions, but I think there's more to share.

  3. Make people interested in building actual real-world images based on general purpose distributions adhering to the model described above. I'd love a "GnomeBook" image with full trust properties, that is built from true Linux distros, such as Fedora or ArchLinux.

FAQ

  1. What about ostree? Doesn't ostree already deliver what this blog story describes?

    ostree is fine technology, but with respect to security and robustness properties it's not too interesting I think, because unlike image-based approaches it cannot really deliver integrity/robustness guarantees over the whole tree easily. To be able to trust an ostree setup you have to establish trust in the underlying file system first, and the complexity of the file system makes that challenging. To provide an effective offline-secure trust chain through the whole depth of the stack, it is essential to cryptographically validate every single I/O operation. In an image-based model this is trivially easy, but in the ostree model it's not possible with current file system technology. And even if this is added in one way or another in the future (though I am not aware of anyone doing on-access file-based integrity that spans a whole hierarchy of files and is compatible with ostree's hardlink farm model), I think validation would still happen at too high a level, since Linux file system developers have made very clear that their implementations are not robust against rogue images. (There's this stuff planned, but doing structural authentication ahead of time instead of on access makes the idea too weak — and, I'd expect, too slow — in my eyes.)

    With my design I want to deliver similar security guarantees as ChromeOS does, but ostree is much weaker there, and I see no prospect of this changing. In a way, ostree's integrity checks are similar to RPM's, and are enforced on download rather than on access. In the model I suggest above, checking is always done on access, and is thus safe against offline attacks (i.e. evil maid attacks). In today's world, I think offline security is absolutely necessary.

    That said, ostree does have some benefits over the model described above: it naturally shares file system inodes if many of the modules/images involved share the same data. It's thus more space efficient on disk (and thus also in RAM/cache to some degree) by default. In my model it would be up to the image builders to minimize shipping overly redundant disk images, by making good use of suitably composable system extensions.

  2. What about configuration management?

    At first glance immutable systems and configuration management don't go that well together. However, do note that in the model I propose above, the root file system with all its contents, including /etc/ and /var/, is actually writable and can be modified like on any other typical Linux distribution. The only exception is /usr/, where the immutable OS is hermetic. That means configuration management tools should work just fine in this model — up to the point where they are used to install additional RPM/dpkg packages, because that's something not allowed in the model above: packages need to be installed at image build time, and thus on the image build host, not the runtime host.

  3. What about non-UEFI and non-TPM2 systems?

    The above is designed around the feature set of contemporary PCs, and this means UEFI and TPM2 being available (simply because the PC is pretty much defined by the Windows platform, and current versions of Windows require both).

    I think it's important to make the best of the features of today's PC hardware, and then find suitable fallbacks on more limited hardware. Specifically this means: if there's a desire to implement something like this on non-UEFI or non-TPM2 hardware, we should look for suitable fallbacks for the individual functionality, but generally try to add glue to the old systems so that conceptually they behave more like the new systems, instead of the other way round. Or in other words: most of the above is not strictly tied to UEFI or TPM2, and for many cases there are already reasonable fallbacks in place for more limited systems. Of course, without TPM2 many of the security guarantees will be weakened.

  4. How would you name an OS built that way?

    I think a desktop OS built this way if it has the GNOME desktop should of course be called GnomeBook, to mimic the ChromeBook name. ;-)

    But in general, I'd call hermetic, adaptive, immutable OSes like this "particles".

How can you help?

  1. Help making Distributions Hermetic in /usr/!

    One of the core ideas of the approach described above is to make the OS hermetic in /usr/, i.e. make it carry a comprehensive description of what needs to be set up outside of it when instantiated. Specifically, this means that system users that are needed are declared in systemd-sysusers snippets, and skeleton files and directories are created via systemd-tmpfiles. Moreover additional partitions should be declared via systemd-repart drop-ins.

    At this point some distributions (such as Fedora) are (probably more by accident than on purpose) already mostly hermetic in /usr/, at least for the most basic parts of the OS. However, this is not complete: many daemons require specific resources to be set up in /var/ or /etc/ before they can work, and the relevant packages do not carry systemd-tmpfiles descriptions that add them if missing. So there are two ways you could help here: politically, it would be highly relevant to convince distributions that an OS that is hermetic in /usr/ is highly desirable and a worthy goal for packagers. More specifically, it would be desirable if RPM/dpkg packages would ship with enough systemd-tmpfiles information so that configuration files the packages strictly need for operation are symlinked (or copied) from /usr/share/factory/ if they are missing (even better, of course, would be if packages in their upstream sources would just work with an empty /etc/ and /var/, creating what they need themselves and defaulting to good defaults in the absence of configuration files).

    Note that distributions that adopted systemd-sysusers, systemd-tmpfiles and the /usr/ merge are already quite close to providing an OS that is hermetic in /usr/. Those were the big, major advancements; making the image fully hermetic should be less controversial — at least that's my guess.

    Also note that making the OS hermetic in /usr/ is not just useful in scenarios like the above. It also means that stuff like this and like this can work well.

  2. Fill in the gaps!

    I already mentioned a couple of missing bits and pieces in the implementation of the overall vision. In the systemd project we'd be delighted to review/merge any PRs that fill in the voids.

  3. Build your own OS like this!

    Of course, while we built all these building blocks and they have been adopted at various levels and for various purposes in the various distributions, no one so far has built an OS that puts things together just like that. It would be excellent if we had communities that work on building images like what I propose above. I.e. if you want to help make a secure GnomeBook as suggested above a reality, that would be more than welcome.

    What could this look like specifically? Pick an existing distribution, write a set of mkosi descriptions plus some additional drop-in files, and then build this on some build infrastructure. While doing so, report the gaps, and help us address them.

Further Documentation of Used Components and Concepts

  1. systemd-tmpfiles
  2. systemd-sysusers
  3. systemd-boot
  4. systemd-stub
  5. systemd-sysext
  6. systemd-portabled, Portable Services Introduction
  7. systemd-repart
  8. systemd-nspawn
  9. systemd-sysupdate
  10. systemd-creds, System and Service Credentials
  11. systemd-homed
  12. Automatic Boot Assessment
  13. Boot Loader Specification
  14. Discoverable Partitions Specification
  15. Safely Building Images

Earlier Blog Stories Related to this Topic

  1. The Strange State of Authenticated Boot and Disk Encryption on Generic Linux Distributions
  2. The Wondrous World of Discoverable GPT Disk Images
  3. Unlocking LUKS2 volumes with TPM2, FIDO2, PKCS#11 Security Hardware on systemd 248
  4. Portable Services with systemd v239
  5. mkosi — A Tool for Generating OS Images

And that's all for now.

Improvements to Fedora Docs

Posted by Fedora Magazine on May 02, 2022 08:57 AM

The Docs team is experiencing a new burst of energy. As part of this, we have several big improvements to the Fedora Docs site that we want to share.

Searchable docs

For years, readers have asked for search. We have a lot of documentation on the site, but you sometimes struggle to find what you’re looking for. With the new search feature, you can search the entire Fedora Documentation content.

Lunr.js powers the search. This means your browser downloads the index and does the search locally. The advantage is that there are no external dependencies: searches send nothing to a remote server and there is no external Javascript required. The downside is that the index has to be downloaded before search is available. Although we compress the index, if you’re on a slower connection, you may experience delays.

While the search is a major improvement, it’s not perfect. The search tool is not aware of the context of your search and can’t offer “do you mean _ ?” suggestions. Also, because many pages have similar titles, you can’t always tell which page has the information you’re looking for. We’re looking into adding more context to the page titles and working with teams to make titles more useful.

Stable “latest” URL

Many times when you link to a page on Fedora Docs, you don’t care about the version number. For example, if you’re writing a blog post that links to the Installation Guide, you’d rather it go to the Installation Guide for the latest version. If you don’t actively update your links to Fedora Docs, they grow stale over time.

We recently added a stable /latest URL for release docs. For example, https://docs.fedoraproject.org/en-US/fedora/latest/ points to the Fedora Linux 35 documentation. When we release Fedora Linux 36, soon, that URL will point to F36 documentation. You can use this stable URL when you want to target the latest released version and only use specific versions in the URL when the version matters.

Redesigning the site

Over the next few months, we're working on a two-pronged approach to documentation. First, we want the user documentation to better reflect the changes in the Fedora distribution over the last few years. By now, there are significant differences between the Editions in terms of installation and administration, which makes uniform guides difficult to write. In fact, most Editions now have their own installation guide. As a first step, we will turn the current installation guide into a guide describing where to find the installation guide for the appropriate Edition. It will also include a generic description of Anaconda that Editions can link to or include parts of. We expect to land this restructuring during the Fedora Linux 37 development cycle. As a next step, we will revise the Administration Guide and Quick Docs.

In addition, the Docs team will be selecting an Outreachy intern to work on a redesign of the site. This will include both the graphical design of the user interface as well as improving the information architecture of the site. We want to make it easier for you to find exactly what you’re looking for.

Help wanted

Just like the rest of Fedora, the Docs team is a community effort. We welcome new contributors. You can join us in the #docs tag on Fedora Discussion or in #docs on Fedora Chat.

If you see an issue on any Fedora Docs page, you can click the bug icon on the top of the page to report an issue. Or click the edit icon to submit a correction!

Episode 321 – Relativistic Security: Project Zero on 0day

Posted by Josh Bressers on May 02, 2022 12:01 AM

Josh and Kurt talk about the Google Project Zero blog post about 0day vulnerabilities in 2021. There were a lot more than ever before, but why? Part of the challenge is the whole industry is expanding while a lot of our security technologies are not. When the universe around you is expanding but you’re staying the same size, you are actually shrinking.

Listen: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_321_Relativistic_Security_Project_Zero_on_0day.mp3

Show Notes

A cool planet!

Posted by Jon Chiappetta on May 01, 2022 01:32 AM

Friday’s Fedora Facts: 2022-17

Posted by Fedora Community Blog on April 29, 2022 07:17 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

The F36 Final freeze is underway. F36 Final is on track for target date #3 (2022-05-10).

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference | Location | Date | CfP
DevConf.US | Boston, MA | 18–20 Aug | closes 30 Apr
Container Days | Hamburg, DE & virtual | 5–7 Sep | closes 30 Apr
ShipItCon | Dublin, IE | Sep | closes 30 Apr
RustConf | Portland, OR, US & virtual | 4–5 Aug | closes 8 May
DDD Perth | Perth, AU | 9 Sep | closes 27 May
Open Source Summit Europe | Dublin, IE & virtual | 13–16 Sep | closes 30 May
#a11yTO | Toronto, ON, CA & virtual | 20–21 Oct | closes 31 May
PyCon SK | Bratislava, SK | 9–11 Sep | closes 30 Jun
SREcon22 EMEA | Amsterdam, NL | 25–27 Oct | closes 30 Jun

Help wanted

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

Bug ID | Component | Status
1955416 | shim | POST

Meetings & events

Releases

Release | Open bugs
F34 | 5227
F35 | 4155
F36 (pre-release) | 1793
Rawhide | 6757

Fedora Linux 36

Schedule

  • 2022-05-10 — Target date #3

See the schedule website for the full schedule.

Changes

Status | Count
ON_QA | 49

Blockers

Bug ID | Component | Bug Status | Blocker Status
2056303 | selinux-policy | ON_QA | Accepted(Final)
2079308 | gnome-photos | NEW | Accepted(Final)
2079344 | gnome-photos | NEW | Accepted(Final)
2072070 | wpa_supplicant | NEW | Accepted(Final)
2079274 | gnome-contacts | NEW | Proposed(Final)

Fedora Linux 37

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Make Rescue Mode Work With Locked Root | System-Wide | FESCo #2713
Deprecate Legacy BIOS | System-Wide | Rejected
RPM 4.18 | System-Wide | Approved
Legacy Xorg Driver Removal | System-Wide | Approved
Haskell GHC 9 & Stackage LTS 19 | Self-Contained | Approved
Replace jwhois package with whois for Fedora Workstation | Self-Contained | FESCo #2785

Fedora Linux 38

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Major upgrade of Microdnf | Self-Contained | FESCo #2784

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-17 appeared first on Fedora Community Blog.

threads and libxcb, part 2

Posted by Adam Jackson on April 29, 2022 04:35 PM

I've been working on kopper recently, which is a complementary project to zink. Just as zink implements OpenGL in terms of Vulkan, kopper seeks to implement the GL window system bindings - like EGL and GLX - in terms of the Vulkan WSI extensions. There are several benefits to doing this, which I'll get into in a future post, but today's story is really about libX11 and libxcb.

Yes, again.

One important GLX feature is the ability to set the swap interval, which is how you get tear-free rendering by syncing buffer swaps to the vertical retrace. A swap interval of 1 is the typical case, where an image update happens once per frame. The Vulkan way to do this is to set the swapchain present mode to FIFO, since FIFO updates are implicitly synced to vblank. Mesa's WSI code for X11 uses a swapchain management thread for FIFO present modes. This thread is started from inside the vulkan driver, and it only uses libxcb to talk to the X server. But libGL is a libX11 client library, so in this scenario there is always an "xlib thread" as well.

libX11 uses libxcb internally these days, because otherwise there would be no way to intermix xlib and xcb calls in the same process. But it does not use libxcb's reflection of the protocol: XGetGeometry does not call xcb_get_geometry, for example. Instead, libxcb has an API to allow other code to take over the write side of the display socket, with a callback mechanism to get it back when another xcb client issues a request. The callback function libX11 uses here is straightforward: lock the Display, flush out any internally buffered requests, and return the sequence number of the last request written. Both libraries need this sequence number for various reasons internally; xcb for example uses it to make sure replies go back to the thread that issued the request.

But "lock the Display" here really means call into a vtable in the Display struct. That vtable is filled in during XOpenDisplay, but the individual function pointers are only non-NULL if you called XInitThreads beforehand. And if you're libGL, you have no way to enforce that, your public-facing API operates on a Display that was already created.

So now we see the race. The swapchain management thread calls into libxcb while the main thread is somewhere inside libX11. Since libX11 has taken the socket, the xcb thread runs the release callback. Since the Display was not made thread-safe at XOpenDisplay time, the release callback does not block, so the xlib thread's work won't be correctly accounted for. If you're lucky, the two sides will at least write to the socket atomically with respect to each other, but at this point they have diverging opinions about the request sequence numbering, and it's only a matter of time until you crash.

It turns out kopper makes this really easy to hit. Like "resize a glxgears window" easy. However, this isn't just a kopper issue: this race exists for every program that uses xcb on a not-necessarily-thread-safe Display. The only reasonable fix is for libX11 to just always be thread-safe.

So now, it is.


CPE Weekly Update – Week 17 2022

Posted by Fedora Community Blog on April 29, 2022 10:00 AM

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us in the #redhat-cpe channel on libera.chat (https://libera.chat/).

Week: 25th April – 29th April 2022

Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

The purpose of this team is to take care of the day-to-day business of the CentOS and Fedora infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for new Fedora releases (mirrors, mass branching, new namespaces, etc.).
The ARC (a subset of the team) investigates possible initiatives that CPE might take on.
Link to planning board: https://zlopez.fedorapeople.org/I&R-27-04-2022.pdf

Update

Fedora Infra

  • Fixed issues sending email from fedoraproject.org to redhat.com: finally got TLS connection reuse working so we no longer hit their limits.
  • FCOS apps (almost) all moved from ocp3->ocp4 clusters
  • Cleaned up after a bodhi bug left rawhide updates in limbo.
  • Very close to having resultsdb working in OCP4 staging; hopefully just some URL adjustments left.
  • Anitya and the-new-hotness messaging schemas were moved to separate repositories
    https://github.com/fedora-infra/the-new-hotness-messages
    https://github.com/fedora-infra/anitya-messages

CentOS Infra including CentOS CI

Release Engineering

  • F36 RC composes: 1.1 on Friday, 1.2 yesterday
  • Discussion about where container images live and potential move to quay.io

CentOS Stream

Goal of this Initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this new distribution a reality. The goal of this initiative is to prepare the ecosystem for the new CentOS Stream.

Updates

  • Prototyping CI actions for Stream 9 using the Duffy system; awaiting onboarding from infra
  • New compose released
  • Upcoming: Reviewing Stream container and image build processes
  • Improvements to the CVE checker have been rolled out
  • Fedora ELN is being used as a prototype for SHA-1 removal in Fedora

CentOS Duffy CI

Goal of this Initiative

Duffy is a system within CentOS CI Infra which allows tenants to provision and access bare metal resources of multiple architectures for the purposes of CI testing.
We need to add the ability to check out VMs in CentOS CI via Duffy. We have an OpenNebula hypervisor available and have started developing playbooks which can be used to create VMs using the OpenNebula API, but due to the current state of how Duffy is deployed, we are blocked on new dev work to add the VM checkout functionality.

Updates

  • Parallelize running provisioning/deprovisioning playbooks
  • More testing and minor fixes

Package Automation (Packit Service)

Goal of this initiative

Automate RPM packaging of infra apps/packages

Updates

  • The backlog is stocked with plenty of our apps for the time being; more will be added as needed
  • Expect emails, lots of emails
  • fasjson-client, fedora-messaging and datagrepper are currently being worked on
  • Currently able to build in Copr, minus some hiccups
  • We are currently looking at the best way to version releases. What are the team's thoughts on moving spec files upstream? It makes everything cleaner in Packit. Thoughts on a postcard, please

Flask-oidc: oauth2client replacement

Goal of this initiative

Flask-oidc is a library used across the Fedora infrastructure as the client for Ipsilon authentication. flask-oidc depends on oauth2client, a library that is now deprecated and no longer maintained, so it needs to be replaced with authlib.

Updates:

  • Currently finding all instances of oauth2client code in the current flask-oidc code and mapping their functionality to what's available in the authlib library.

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

Updates

  • EPEL9 up to 2416 source packages (increase of 45 from last week)
  • qt5-5.15.3 update in CentOS Stream 8 and 9 has caused update breakages for KDE users on those distros (EPEL8 and EPEL9). Updated packages are being built as we speak.
  • Unblocked python-aiosignal by adding python-pytest-asyncio to EPEL9.
  • Unblocked several openstack packages by adding python-ddt to EPEL9.
  • Resolved python-aiohttp installation issue by adding python-async-timeout to EPEL9.
  • Updated GitPython in EPEL7 to resolve issue with cloning large repositories.
  • Bumped the release of python-cheetah in EPEL7 to fix the upgrade path from python-cheetah in CentOS 7 Extras.
  • Updated incompatible updates policy to reduce minimum time in testing from 2 weeks to 1 week to match regular updates policy.

Kindest regards,
CPE Team

The post CPE Weekly Update – Week 17 2022 appeared first on Fedora Community Blog.

From ifcfg to keyfiles: modernizing NetworkManager configuration in Fedora Linux 36

Posted by Fedora Magazine on April 29, 2022 08:22 AM

One of the changes in Fedora Linux 36 is that new installations will no longer support the ifcfg files to configure networking. What are those and what replaces them?

A bit of history

In the good old days, connecting a Linux box to a network was easy. For each of the interface cards connected to a network, the system administrator would drop a configuration file into the /etc directory. That configuration file would describe the addressing configuration for a particular network. On Fedora Linux, the configuration file would actually be a shell script snippet like this:

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
DEVICE=eth0
BOOTPROTO=dhcp

A shell script executed on startup would read the file and apply the configuration. Simple.

Towards the end of 2004, however, a change was in the air. Quite literally: Wi-Fi was becoming ubiquitous. The portable computers of the day could rapidly connect to new networks, and the USB bus allowed even wired network adapters to come and go while the system was up and running. Network configuration became more dynamic than ever before, rendering the existing tooling impractical. To the rescue came NetworkManager. On a Fedora Linux system, NetworkManager uses configuration like this:

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
DEVICE=eth0
BOOTPROTO=dhcp

Look familiar? It should. From the beginning, NetworkManager was intended to work with the existing configuration formats. In fact, it ended up with plugins that seamlessly convert between NetworkManager's internal configuration model and the distribution's native format. On Fedora, that means the aforementioned ifcfg files.

Let’s take a closer look at them.

Ifcfg files

The legacy network service, now part of the network-scripts package, originally defined the ifcfg file format. Along with the package comes a file called sysconfig.txt that, quite helpfully, documents the format.

As NetworkManager gained traction, it often found itself needing to express configuration that the old-fashioned network service did not support. Given the nature of configuring things with shell scripts, adding new settings is no big deal; unknown ones are generally just silently ignored. NetworkManager's idea of what ifcfg files should look like is described in the nm-settings-ifcfg-rh(5) manual.

In general, NetworkManager tries hard to write ifcfg files that work well with the legacy network service. Nevertheless, sometimes it is just not possible. These days, the number of connection types that NetworkManager supports vastly outnumbers what the legacy network service can configure. A new format is now used to express what the legacy format cannot. This includes VPN connections, broadband modems and more.

Keyfiles

The new format closely resembles NetworkManager's native configuration model:

$ cat /etc/NetworkManager/system-connections/VPN.ovpn
[connection]
id=My VPN
uuid=c85a7cdb-973b-491f-998d-b09a590af10e
type=vpn

[vpn]
ca=/etc/pki/tls/certs/vpn-ca.pem
connection-type=password
remote=vpn.example.com
username=lkundrak
service-type=org.freedesktop.NetworkManager.openvpn

[ipv6]
method=auto
never-default=true

The format itself should be instantly familiar to anyone who has spent time around Linux systems. It's the "ini file" or "keyfile": a bunch of plain-text key-value pairs grouped into sections, much like the ifcfg files use. The nm-settings-keyfile(5) manual documents the format thoroughly.

The main advantage of using this format is that it closely resembles how NetworkManager expresses network configuration internally and on the D-Bus API. It's easier to extend without having to consider the quirks of a mechanism designed without the benefit of foresight back when the world was young. This means less code, fewer surprises and fewer bugs.

In fact, there is nothing ifcfg files can express that the keyfile format can't. It handles simple wired connections just as well as VPNs or modems.
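For example, the simple DHCP Ethernet profile from the beginning of this article might look roughly like this as a keyfile. This is a hand-written sketch with an illustrative filename; the file NetworkManager actually writes will also carry a uuid and other bookkeeping keys:

$ cat /etc/NetworkManager/system-connections/eth0.nmconnection
[connection]
id=eth0
type=ethernet
interface-name=eth0

[ipv4]
method=auto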

Migrating to keyfiles

The legacy network service served us well for many years, but its days are now long over. Fedora Linux dropped it many releases ago, and without it there is little reason to use ifcfg files for new configurations. While Fedora Linux still supports ifcfg files, it has defaulted to writing keyfiles for quite some time.

Starting with Fedora Linux 36, ifcfg support will no longer be present in new installations. If you're still using ifcfg files, do not worry: existing systems keep the support on upgrade. Nevertheless, you can still decide to uninstall it and carry your configuration over to keyfiles. Keep on reading to learn how.

If you’re like me, you installed your system years ago and you have a mixture of keyfiles and ifcfg files. Here’s how can you check:

$ nmcli -f TYPE,FILENAME,NAME conn
TYPE      FILENAME                                         NAME
ethernet  /etc/sysconfig/network-scripts/ifcfg-eth0        eth0
wifi      /etc/sysconfig/network-scripts/ifcfg-Guest       Guest
wifi      /etc/NetworkManager/system-connections/Base48    Base48
vpn       /etc/NetworkManager/system-connections/VPN.ovpn  My VPN

This example shows a VPN connection that must have always used a keyfile and a Wi-Fi connection presumably created after Fedora Linux switched to writing keyfiles by default. There are also an Ethernet connection and a Wi-Fi connection from back in the day that use the ifcfg plugin. Let's see how we can convert those to keyfiles.

The NetworkManager’s command line utility, nmcli(1), acquired a new connection migrate command, that can change the configuration backend used by a connection profile.

It’s a good idea to make a backup of /etc/sysconfig/network-scripts/ifcfg-* files, in case anything goes wrong. Once you have the backup you can try migrating a single connection to a different configuration backend (keyfile by default):

$ nmcli connection migrate eth0
Connection 'eth0' (336aba93-1cd7-4cf4-8e90-e2009db3d4d0) successfully migrated.

Did it work?

$ nmcli -f TYPE,FILENAME,NAME conn
TYPE      FILENAME                                         NAME
ethernet  /etc/NetworkManager/system-connections/eth0.nmc  eth0
wifi      /etc/sysconfig/network-scripts/ifcfg-Guest       Guest
wifi      /etc/NetworkManager/system-connections/Base48    Base48
vpn       /etc/NetworkManager/system-connections/VPN.ovpn  My VPN

Cool. Can I migrate it back, for no good reason?

$ nmcli conn migrate --plugin ifcfg-rh eth0
Connection 'eth0' (336aba93-1cd7-4cf4-8e90-e2009db3d4d0) successfully migrated.

Excellent. Without specifying more options, the “connection migrate” command ensures all connections use the keyfile backend:

$ nmcli conn migrate
Connection '336aba93-1cd7-4cf4-8e90-e2009db3d4d0' (eth0) successfully migrated.
Connection '3802a9bc-6ca5-4a17-9d0b-346f7212f2d3' (Guest) successfully migrated.
Connection 'a082d5a0-5e29-4c67-8b6b-09af1b8d55a0' (Base48) successfully migrated.
Connection 'c85a7cdb-973b-491f-998d-b09a590af10e' (My VPN) successfully migrated.

And that’s all. Now that your system has no ifcfg files, the configuration backend that supports them is of no use and you can remove it:

# dnf remove NetworkManager-initscripts-ifcfg-rh

Your system now works the same as it did before, but you can rejoice, for it is now modern.

PHP version 8.0.19RC1 and 8.1.6RC1

Posted by Remi Collet on April 29, 2022 06:19 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and also as base packages.

RPMs of PHP version 8.1.6RC1 are available

  • as SCL in remi-test repository
  • as base packages
    • in the remi-php81-test repository for Enterprise Linux 7
    • in the remi-modular-test for Fedora 34-36 and Enterprise Linux ≥ 8

RPMs of PHP version 8.0.19RC1 are available

  • as SCL in remi-test repository
  • as base packages
    • in the remi-php80-test repository for Enterprise Linux 7
    • in the remi-modular-test for Fedora 34-36 and Enterprise Linux ≥ 8


PHP version 7.4 is now in security mode only, so no more RCs will be released for it.

Installation: follow the wizard instructions.

Parallel installation of version 8.1 as Software Collection:

yum --enablerepo=remi-test install php81

Parallel installation of version 8.0 as Software Collection:

yum --enablerepo=remi-test install php80

Update of system version 8.1 (EL-7):

yum --enablerepo=remi-php81,remi-php81-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.1
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.0 (EL-7):

yum --enablerepo=remi-php80,remi-php80-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf --enablerepo=remi-modular-test update php\*

Notice: version 8.1.6RC1 is also in Fedora rawhide for QA.

EL-9 packages are built using RHEL-9.0-Beta.

EL-8 packages are built using RHEL-8.5.

EL-7 packages are built using RHEL-7.9.

The RC version is usually the same as the final version (no changes are accepted after RC, except for security fixes).

Software Collections (php80, php81)

Base packages (php)

fwupd 1.8.0 and 50 million updates

Posted by Richard Hughes on April 28, 2022 03:05 PM

I’ve just tagged the 1.8.0 release of fwupd, with these release notes — there’s lots of good stuff there as always. More remarkable is that LVFS has now supplied over 50 million updates to Linux machines all around the globe. The true number is going to be unknown, as we allow vendors to distribute updates without any kind of logging, and also allow companies or agencies to mirror the entire LVFS so the archive can be used offline. The true number of updates deployed will be a lot higher than 50 million, which honestly blows my tiny mind. Just 7 years ago Christian asked me to “make firmware updates work on Linux” and now we have a thriving client project that respects both your freedom and your privacy, and a thriving ecosystem of hardware vendors who consider Linux users first class citizens. Of course, there are vendors who are not shipping updates for popular hardware, but they’re now in the minority — and every month we have two or three new vendor account requests. The logistical, security and most importantly commercial implications of not being “on the LVFS” are now too critical even for tier-1 IHVs, ODMs and OEMs to ignore.

I’m still amazed to see Reddit posts, YouTube videos and random people on Twitter talk about the thing that’s been my baby for the last few years. It’s both frightening as hell (because of the responsibility) and incredibly humbling at the same time. Red Hat can certainly take a lot of credit for the undeniable success of LVFS and fwupd, as they have been the people paying my salary and pushing me forward over the last decade and more. Obviously I’m glad everything is being used by the distros like Ubuntu and Arch, although for me it’s Fedora that’s at least technically the one pushing Linux forward these days. I’ve seen Fedora grow in market share year on year, and I’m proud to be one of the people pushing the exciting Future Features into Fedora.

So what happens next? I guess we have the next 50 million updates to look forward to. The LVFS has been growing ever so slightly exponentially since it was first conceived so that won’t take very long now. We’ve blasted through 1MM updates a month, and now regularly ship more than 2MM updates a month and with the number of devices supported growing like it has (4004 different streams, with 2232 more planned), it does seem an exciting place to be. I’m glad that the number of committers for fwupd is growing at the same pace as the popularity, and I’m not planning to burn out any time soon. Google has also been an amazing partner in encouraging vendors to ship updates on the LVFS and shipping fwupd in ChromeOS — and their trust and support has been invaluable. I’m also glad the “side-projects” like “GNOME Firmware“, “Host Security ID“, “fwupd friendly firmware” and “uSWID as an SBoM” also seem to be flourishing into independent projects in their own right. It does seem now is the right time to push the ecosystem towards transparency, open source and respecting the users privacy. Redistributing closed source firmware may be an unusual route to get there, but it’s certainly working. There are a few super-sekret things I’m just not allowed to share yet, but it’s fair to say that I’m incredibly excited about the long term future.

From the bottom of my heart, thank you all for your encouragement and support.

Mindshare Committee Quarterly Report – Q1 2022

Posted by Fedora Community Blog on April 28, 2022 08:00 AM

The Mindshare Committee publishes a Quarterly Report, with this post being our second edition. It covers activities from the Mindshare Committee and related teams for the months of January, February, and March of 2022. As always, we welcome feedback on how we can improve these reports in the related Mindshare ticket.

Help Wanted in Q2 2022

Take a look at the links below and see how you can get involved. For tickets and Discussion threads, make sure to comment with your interest in getting involved.

Events

Team Updates

CommOps Team

  • Community Blog Activity
    • 15 posts in January
    • 18 posts in February
    • 17 posts in March

Docs · Forum · Chat · Repo

Community Outreach Revamp (FCOR)

Docs · Chat

Council Representative

Docs · Forum · Chat · Repo

Design Team

  • The Fedora 36 artwork is complete and ready, created by Madeline Peck & Micah Denn. This is the first release wallpaper process Madeline has managed 🙂
  • We had our first ever default wallpaper test day. The test day was a success, with many participants and contributions.
  • Intern Jess Chitas is in the finishing stages of the new Nest mascot.
  • The new tablecloth design for the events box is complete and has been sent to the vendor. A new banner design is upcoming!
  • The Fedora Workstation website redesign is almost complete. We have a strong start on IoT, and progress has been made on the website revamp design.

Docs · Mailing List · Chat · Repo

Internationalization Team

  • The Fedora i18n team implemented 4 Change proposals for the Fedora Linux 36 release.
  • The Fedora Linux 36 i18n test week was successfully completed.

Docs · Mailing List · Chat · Repo

Magazine Team

  • A five-year-old request to add some more social media links was closed. Links to the Fedora Project’s YouTube channel, Matrix channel and Discourse Forum have been added to the list of social media links in the upper-right corner of the Fedora Magazine website.
  • A request to add a Lightbox plugin so that enlarged versions of images could be viewed by clicking on them was also closed.

Fedora Magazine · Docs · Forum · Chat · Repo

Mentored Projects

  • The Fedora Project is participating in the Outreachy May 2022 cohort. Projects are listed on the Outreachy website.
  • Unfortunately, the Fedora Project was not accepted into GSoC 2022. We missed the requirement of having at least 3 projects listed even before the org is accepted. We have readjusted our timeline to gather more projects for the next round.
  • We had our first Fedora Mentor Summit on April 1st and 2nd. It was a successful event, and video recordings are already posted on the Fedora Project YouTube channel.

Docs · Mailing List · Chat · Repo

Mindshare Committee

Docs · Mailing List · Chat · Repo

Websites & Apps Team & Revamp

  • The Websites Revamp stakeholder team is actively coming up with design mockups for the Fedora Workstation and Server websites. We are getting them reviewed by the wider community and working on improving them.
  • We have finished refining the action items for the expected outcomes and outputs of the Fedora Council objective. The action items will be continually updated as we go.
  • We have started establishing processes for the technical, management, and operational aspects of the team, and have documented them to ensure they are reproducible.
  • We provided progress updates at a Fedora Council meeting and posted an article on the Community Blog.

Docs · Mailing List · Chat · Forum · Repo

Mindshare Updates from the Fedora Community Action and Impact Coordinator (FCAIC)

Thanks, we’ll see you online!

Thank you for reading the latest edition of the Mindshare Committee Quarterly Report. We will publish our next report sometime in July. Feel free to join us in the #fedora-mindshare chat on IRC/Element and drop by our Mindshare weekly meetings.

The post Mindshare Committee Quarterly Report – Q1 2022 appeared first on Fedora Community Blog.