Fedora People

Episode 343 – Stop trying to fix the open source software supply chain

Posted by Josh Bressers on October 03, 2022 12:01 AM

Josh and Kurt talk about a blog post that explains there isn’t really an open source software supply chain. The whole idea of open source being one thing is incorrect; open source is really a lot of little things put together. A lot of companies and organizations get this wrong.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_343_Stop_trying_to_fix_the_open_source_software_supply_chain.mp3

Show Notes

Toolbx — running the same host binary on Arch Linux, Fedora, Ubuntu, etc. containers

Posted by Debarshi Ray on October 02, 2022 05:28 PM

This is a deep dive into some of the technical details of Toolbx and is a continuation from the earlier post about bypassing the immutability of OCI containers.

The problem

As we saw earlier, Toolbx uses a special entry point for its containers. It’s the toolbox executable itself.

$ podman inspect --format "{{.Config.Cmd}}" --type container fedora-toolbox-36
toolbox --log-level debug init-container ...

This is achieved by bind mounting the toolbox executable invoked by the user on the hosts to /usr/bin/toolbox inside the containers. While this has some advantages, it opens the door to one big problem. It means that executables from newer or different host operating systems might be running against older or different run-time environments inside the containers. For example, an executable from a Fedora 36 host might be running inside a Fedora 35 Toolbx, or one from an Arch Linux host inside an Ubuntu container.
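To picture the setup, here is a rough sketch of how such a container could be created; the flag set and image name are assumptions based on the inspect output above, not the exact invocation Toolbx uses:

```shell
# Illustrative only: a container whose entry point is the host's own
# toolbox binary, bind mounted read-only to /usr/bin/toolbox inside.
podman create --name fedora-toolbox-36 \
    --volume "$(command -v toolbox)":/usr/bin/toolbox:ro \
    registry.fedoraproject.org/fedora-toolbox:36 \
    toolbox --log-level debug init-container
```
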

This is very unusual. We only expect executables from an older version of an OS to keep working on newer versions of the same OS, but never the other way round, and definitely not across different OSes.

When binaries are compiled and linked against newer run-time environments, they may start relying on symbols (i.e., non-static global variables, functions, class and struct members, etc.) that are missing in older environments. For example, glibc-2.32 (used in Fedora 33 onwards) added a new version of the pthread_sigmask symbol. If toolbox binaries built and linked against glibc-2.32 are run against older glibc versions, then they will refuse to start.

$ objdump -T /usr/bin/toolbox | grep GLIBC_2.32
0000000000000000      DO *UND*        0000000000000000  GLIBC_2.32  pthread_sigmask

This means that one couldn’t use Fedora 32 Toolbx containers on Fedora 33 hosts, or similarly any containers with glibc older than 2.32 on hosts with newer glibc versions. That’s quite the bummer.

If the executables are not ELF binaries, but carefully written POSIX shell scripts, then this problem goes away. Incidentally, Toolbx used to be implemented in POSIX shell, until it was re-written in Go two years ago, which is how it managed to avoid this problem for a while.

Fortunately, Go binaries are largely statically linked, with the notable exception of the standard C library. The scope of the problem would be much bigger if it involved several other dynamic libraries, like in the case of C or C++ programs.


Potential options

In theory, the easiest solution is to build the toolbox binary against the oldest supported run-time environment so that it doesn’t rely on newer symbols. However, it’s easier said than done.

Usually downstream distributors use build environments that are composed of components that are part of that specific version of the distribution. For example, it will be unusual for an RPM for a certain Fedora version to be deliberately built against a run-time from an older Fedora. Carlos O’Donell had an interesting idea on how to implement this in Fedora by only ever building for the oldest supported branch, adding a noautobuild file to disable the mass rebuild automation, and having newer branches always inherit the builds from the oldest one. However, this won’t work either. Building against the oldest supported Fedora won’t be enough for Fedora’s Toolbx because, by definition, Toolbx is meant to run different kinds of containers on hosts. The oldest supported Fedora hosts might still be too new compared to containers of supported Debian, Red Hat Enterprise Linux, Ubuntu etc. versions.

So, yes, in theory, this is the easiest solution, but, in practice, it requires a non-trivial amount of cross-distribution collaboration, and downstream build system and release engineering effort.

The second option is to have Toolbx containers provide their own toolbox binary that’s compatible with the run-time environment of the container. This would substantially complicate the communication between the toolbox binaries on the hosts and the ones inside the containers, because the binaries on the hosts and containers will no longer be exactly the same. The communication channel between commands like toolbox create and toolbox enter running on the hosts, and toolbox init-container inside the containers can no longer use a private and unstable interface that can be easily modified as necessary. Instead, it would have complicated backwards and forwards compatibility requirements. Other than that, it would complicate bug reports, and every single container on a host may need to be updated separately to fix bugs, with updates needing to be co-ordinated across downstream distributors.

The next option is to either statically link against the standard C library, or disable its use in Go. However, that would prevent us from using glibc’s Name Service Switch to look up usernames and groups, or to resolve host names. The replacement code, written in pure Go, can’t handle enterprise set-ups involving Network Information Service and Lightweight Directory Access Protocol, nor can it talk to host OS services like SSSD, systemd-userdbd or systemd-resolved.
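In Go toolchain terms, the trade-off being described looks roughly like this (a sketch assuming the standard Go build flags, not Toolbx’s actual build scripts; neither variant is what Toolbx ended up doing):

```shell
# Option A: disable cgo entirely. The binary is fully static, but user,
# group and host-name lookups use pure-Go code that bypasses glibc's NSS
# (so no SSSD, NIS, LDAP, systemd-userdbd, etc.).
CGO_ENABLED=0 go build -o toolbox .

# Option B: keep cgo but statically link the C library. glibc loads NSS
# plugins with dlopen at run time, so they still won't work reliably from
# a statically linked binary.
go build -ldflags '-linkmode external -extldflags "-static"' -o toolbox .
```
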

It’s true that Toolbx currently doesn’t support enterprise set-ups with NIS and LDAP, but not using NSS will only make it more difficult to add that support in future. Similarly, we don’t resolve any host names at the moment, but given that we are in the business of pulling content over the network, it can easily become necessary in the future. Disabling the use of NSS will leave the toolbox binary as this odd thing that behaves differently from the rest of the OS for some fundamental operations.

An extension of the previous option is to split the toolbox executable into two. One dynamically linked against the standard C library for the hosts, and another that has no dynamic linkage to run inside the containers as their entry point. This can impact backwards compatibility and affect the developer experience of hacking on Toolbx.

Existing Toolbx containers want to bind mount the toolbox executable from the host to /usr/bin/toolbox inside the containers and run toolbox init-container as their entry point. This can’t be changed because of the immutability of OCI containers, and Toolbx simply can’t afford to break existing containers in a way where they can no longer be entered. This means that the toolbox executable needs to become a shim, without any dynamic linkage, that forwards the invocation to the right executable depending on whether it’s running on the hosts or inside the containers.
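To make the shim idea concrete, here is a minimal sketch of the dispatch it would perform, written as a shell script for brevity; the real shim would have to avoid dynamic linkage, and the executable paths here are hypothetical (the /run/.containerenv marker file is real: Podman creates it inside its containers):

```shell
# Sketch only: a dispatcher that forwards to a host-side or container-side
# executable depending on where it finds itself running.
cat > toolbox-shim.sh <<'EOF'
#!/bin/sh
if [ -f /run/.containerenv ]; then
    # Inside a container: run the entry-point executable with no
    # dynamic linkage. (Hypothetical path.)
    exec /usr/libexec/toolbox-container "$@"
else
    # On the host: run the dynamically linked executable. (Hypothetical path.)
    exec /usr/libexec/toolbox-host "$@"
fi
EOF
chmod +x toolbox-shim.sh
```
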

That brings us to the developer experience of hacking on Toolbx. The first thing to note is that we don’t want to go back to using POSIX shell to implement the executable that’s meant to run inside the container. Ondřej spent a lot of effort replacing the POSIX shell implementation of Toolbx, and we don’t want to undo any part of that. Ideally, we would use the same programming language (i.e., Go) to implement both executables so that one doesn’t need to learn multiple disparate languages to work on Toolbx. However, even if we do use Go, we would have to be careful not to share code across the two executables, or be aware that they may have subtle differences in behaviour depending on how they might be linked.

Then there’s the developer experience of hacking on Toolbx on Fedora Silverblue and similar OSTree-based OSes, which is what you would do to eat your own dog food. Experiences are always subjective and this one is unique to hacking Toolbx inside a Toolbx. So let’s take a moment to understand the situation.

On OSTree-based OSes, Toolbx containers are used for development, and, generally speaking, it’s better to use container-specific locations invisible to the host as the development prefixes because the generated executables are specific to each container. Executables built on one container may not work on another, and not on the hosts either, because of the run-time problems mentioned above. Plus, it’s good hygiene not to pollute the hosts.

Similar to Flatpak and Podman, Toolbx is a tool that sets up containers. This means that, unlike most other executables, toolbox must be on the hosts because, barring the init-container command, it can’t work inside the containers. The easiest way to do this is to have a separate terminal emulator with a host shell, and invoke toolbox directly from Meson’s build directory in $HOME that’s shared between the hosts and the Toolbx containers, instead of installing toolbox to the container-specific development prefixes. Note that this only works because toolbox has always been implemented in programming languages with no or minimal dynamic linking, and only if you ensure that the Toolbx containers for hacking on Toolbx match the hosts. Otherwise, you might run into the run-time problems mentioned above.

The moment there is one executable invoking another, the executables need to be carefully placed on the file system so that one can find the other one. This means that either the executables need to be installed into development prefixes or that the shim should have special logic to work out the location of the other binary when invoked directly from Meson’s build directory.

The former is a problem because the development prefixes will likely default to container-specific locations invisible from the hosts, preventing the built executables from being trivially invoked from the hosts. One could have a separate development prefix only for Toolbx that’s shared between the containers and the hosts. However, I suspect that a lot of existing and potential Toolbx contributors would find that irksome. They either don’t know how, or don’t want, to set up a prefix manually, and instead use something like jhbuild to do it for them.

The latter requires two different sets of logic depending on whether the shim was invoked directly from Meson’s build directory or from a development prefix. At the very least this would involve locating the second executable from the shim, but could grow into other areas as well. These separate code paths would be crucial enough that they would need to be thoroughly tested. Otherwise, Toolbx hackers and users won’t share the same reality. We could start by running our test suite in both modes, and then meticulously increase coverage, but that would come at the cost of a lengthier test suite.

Failed attempts

Since glibc uses symbol versioning, it’s sometimes possible to use some .symver hackery to avoid linking against newer symbols even when building against a newer glibc. This is what Toolbx used to do to ensure that binaries built against newer glibc versions still ran against older ones. However, this doesn’t defend against changes to the start-up code in glibc, like the one in glibc-2.34 that performed some security hardening.

Current solution

Alexander Larsson and Ray Strode pointed out that all non-ancient Toolbx containers have access to the hosts’ /usr at /run/host/usr. In other words, Toolbx containers have access to the host run-time environments. So, we decided to ensure that toolbox binaries always run against the host run-time environments.

The toolbox binary has an rpath pointing to the hosts’ libc.so somewhere under /run/host/usr, and its dynamic linker (i.e., PT_INTERP) is changed to the one inside /run/host/usr. Unfortunately, there can only be one PT_INTERP entry inside the binary, so there must be a /run/host on the hosts too for the binary to work on the hosts. Therefore, a /run/host symbolic link is also created on the hosts pointing to the hosts’ /.
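One way to picture this relinking (it’s an assumption that the actual Toolbx build uses patchelf; any tool that can rewrite PT_INTERP and the rpath would do):

```shell
# Sketch only: retarget the binary's dynamic linker and rpath at the host
# run-time environment exposed under /run/host inside the containers.
patchelf --set-interpreter /run/host/usr/lib64/ld-linux-x86-64.so.2 \
         --set-rpath /run/host/usr/lib64 \
         ./toolbox

# On the host, make the same paths resolve by pointing /run/host at /:
sudo ln -sfn / /run/host
```
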

The toolbox binary now looks like this, both on the hosts and inside the Toolbx containers:

$ ldd /usr/bin/toolbox
    linux-vdso.so.1 (0x00007ffea01f6000)
    libc.so.6 => /run/host/usr/lib64/libc.so.6 (0x00007f6bf1c00000)
    /run/host/usr/lib64/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f6bf289a000)

It’s been almost a year and thus far this approach has held its own. I am mildly bothered by the presence of the /run/host symbolic link on the hosts, but not enough to lose sleep over it.

Other options

Recently, Robert McQueen brought up the idea of possibly using the Linux kernel’s binfmt_misc mechanism to modify the toolbox binary on the fly. I haven’t explored this in any seriousness, but maybe I will if the current set-up doesn’t work out.

Fedora 38 Wallpaper in motion!

Posted by Madeline Peck on September 30, 2022 10:18 PM

Fedora 38 Wallpaper Process Starting

Photo by Sonika Agarwal on Unsplash

That’s right!!! We are officially ready to start brainstorming for Fedora 38 Wallpaper ideas because our candidate with an M last name has been chosen (drum roll please) and it’s Samuel Massie!!!

Ideas and progress are going to be documented on gitlab here!

Thank you to everyone who voted and gave comments on the candidates! It’s greatly appreciated! After choosing our inspiration, we had a brainstorming session for figuring out different paths we could go down together. We met on Wednesday, September 21st at the design team meeting which was 1:30-2:30 PM EST*

*This is our usual time slot, which we will continue to use for most of the future sessions. If you can’t make this time of the week, feel free to tune into the recording, or, if you’re nervous about participating, join the livestream to watch only.


To recap, Samuel Massie was a chemist who studied a variety of chemicals that contributed to the development of therapeutic drugs, including the chemistry of phenothiazine. As an African American scientist, he found it harder than his white colleagues to find work, and, to avoid being drafted, he worked on the Manhattan Project, which developed atomic bombs in World War II.

This presents a few different paths we could go in, as well as some that we want to avoid since they’ve been done before for past wallpapers.

We took our time looking into the various parts of his background, learning he originally studied chemistry because he wanted to cure his father’s asthma, and reading articles by his granddaughter where she admits how she wrestles with his legacy as a groundbreaking Black chemist… whose works also contributed to thousands of deaths with the atomic bomb.

Themes we thought about included glowing things that are beautiful but dangerous, and “we are all the same”, with visuals of repeating lines or pointillism.

Below is the mind map we created together in our brainstorm session!


1. Beauty in Death / Contrast Visual

We found inspiration in Holocaust memorials that are eerily beautiful while reminding the visitor of the gruesome deaths involved. It made us think of other visuals that are beautiful but dangerous, like jellyfish, volcanoes, uranium glassware, and the fireflies from the movie ‘Grave of the Fireflies’ that also resemble sparks and ash from the bombing. “Beautiful toxic like lead and uranium- should look beautiful, repeating sameness but separate.”

2. False Dichotomy: We Are All the Same

Oftentimes throughout history, humanity forgets that we are all the same.

3. Butterfly Effect

Massie worked without directly witnessing the effects of his work across the world. Without his work, where would we be now? The small ripples and after-effects. Remote presence is increasing, pushing us toward container technology, and we’re all connected even though we never meet.

All three of these themes can be woven together and don’t have to be separate at all! They’re simply to encourage what to think of next.

Next steps:

Going forward, anyone who wants to be involved is encouraged to pursue whichever avenue they’re most interested in and work on some thumbnail sketches (they’re meant to get the creative juices flowing, not to be overly detailed) to figure out some potential compositions.

:D Please have fun with this and don’t feel restricted by any of the topics discussed above! Any other ideas are welcomed.

And one last reminder that the ticket issue is on gitlab here!

Photo by Pat Whelen on Unsplash

Photo by Shawn Appel on Unsplash

Friday’s Fedora Facts: 2022-39

Posted by Fedora Community Blog on September 30, 2022 02:00 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

<figure class="wp-block-table">
Conference | Location | Date | CfP
Fedora Week of Diversity | virtual | 14–15 Oct | closes 1 Oct
Python Web Conf | virtual | 14–17 Mar | closes 1 Oct
RSA Conference | San Francisco, CA, US | 24–27 Apr | closes 6 Oct
OLF | Columbus, OH, US | 2–3 Dec | closes 15 Oct
</figure>

Help wanted

Upcoming test days

Meetings & events

Releases

<figure class="wp-block-table">
Release | open bugs
F35 | 4003
F36 | 4142
F37 (beta) | 1220
Rawhide | 6383
</figure>

Fedora Linux 37

Schedule

Below are some upcoming schedule dates. See the schedule website for the full schedule.

  • 2022-10-04 — Final freeze begins
  • 2022-10-18 — Current final target (early target date)

Changes

<figure class="wp-block-table">
Status | Count
MODIFIED | 2
ON_QA | 43
CLOSED | 2
</figure>

Blockers

<figure class="wp-block-table">
Bug ID | Component | Bug Status | Blocker Status
2049849 | grub2 | NEW | Accepted(Final)
2128662 | abrt | ASSIGNED | Accepted(Final)
2129358 | glibc | POST | Accepted(Final)
2129914 | gnome-maps | VERIFIED | Accepted(Final)
2130087 | firefox | NEW | Proposed(Final)
2130964 | gnome-calendar | NEW | Proposed(Final)
2130977 | gnome-calendar | NEW | Proposed(Final)
2130981 | gnome-calendar | NEW | Proposed(Final)
2130657 | gnome-contacts | POST | Proposed(Final)
2130661 | gnome-contacts | NEW | Proposed(Final)
2130927 | gnome-photos | NEW | Proposed(Final)
2130937 | gnome-photos | NEW | Proposed(Final)
2128423 | gnome-settings-daemon | POST | Proposed(Final)
2127618 | nautilus | NEW | Proposed(Final)
</figure>

Fedora Linux 38

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

<figure class="wp-block-table">
Proposal | Type | Status
Add -fno-omit-frame-pointer to default compilation flags | System-Wide | FESCo #2817
Minizip Renaming | Self-Contained | Approved
Strong crypto settings: phase 3, forewarning 2/2 | System-Wide | FESCo #2865
Node.js Repackaging | Self-Contained | Approved
KTLS implementation for GnuTLS | System-Wide | Approved
PHP 8.2 | Self-Contained | FESCo #2875
SWIG 4.1.0 | Self-Contained | FESCo #2877
</figure>

Fedora Linux 39

Changes

The table below lists proposed Changes.

<figure class="wp-block-table">
Proposal | Type | Status
libsoup 3: Part Two | System-Wide | Rejected
Replace DNF with DNF5 | System-Wide | FESCo #2870
</figure>

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-39 appeared first on Fedora Community Blog.

CPE Weekly Update – Week 39 2022

Posted by Fedora Community Blog on September 30, 2022 10:00 AM
featured image with CPE team's name

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on libera.chat.

We provide you with both infographic and text versions of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, read the text version.

Week: 26th September to 30th September 2022


Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
The ARC (which is a subset of the team) investigates possible initiatives that CPE might take on.
Docs

Update

Fedora Infra

  • Cleaned up space on log01 (archived 2021 logs)
  • Koji had a prod->stg sync and is all back up and working in stg
  • Hit a bug where createrepo_c blows up when a changelog has an ESC in it. 
  • FMN in stg (python3) working! Now to add to ansible and redeploy to make sure we captured everything and then on to prod!
  • Networking upgrades yesterday, mostly went well

CentOS Infra including CentOS CI

  • CentOS Stream infrastructure maintenance window planning
    • NFS storage for koji
    • iSCSI LUN for lookaside cache
    • kojid builder kernel bump
    • OpenShift ownership + subscription change + update to a newer version
  • Kiwi plugin WIP (updated CBS/koji to 1.30.0-2 with backported koji fixes)

Release Engineering

  • F37 Final freeze next Tuesday!

CentOS Stream

Goal of this Initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this new distribution a reality. The goal of this initiative is to prepare the ecosystem for the new CentOS Stream.

Updates

  • Moved c8s to source from 8.8, and built new 8.8 packages. Although no 8.8 modules, as yet.
  • Put together a set of backup Gitlab runners to mitigate the risk of an outage during the CS OpenShift migration.
  • Centpkg development is still underway with PRs submitted to enable users to build source-rpms from the exploded srpm format – useful to some SIGs!
  • Fixing ELN package builds

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high-quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including a build system, Bugzilla instance, updates manager, mirror manager and more.

Updates

FMN replacement

Goal of this initiative

FMN (Fedora-Messaging-Notification) is a web application allowing users to create filters on messages sent to (currently) fedmsg and forward these as notifications to email or IRC.
The goal of the initiative is mainly to add fedora-messaging schemas, create a new UI for a better user experience, and create a new service to triage incoming messages to reduce the current message delivery lag. The community will profit from speedier notifications based on their own preferences (IRC, Matrix, email), a Fedora Project unified on one messaging service, and human-readable results in Datagrepper.
Also, CPE tech debt will be significantly reduced by dropping the maintenance of fedmsg altogether.

Updates

  • Frontend implementation
  • Mockup implementation for Rule page
  • Investigating HTTPX-GSSAPI to interface with FASJSON

The post CPE Weekly Update – Week 39 2022 appeared first on Fedora Community Blog.

PHP version 7.4.32, 8.0.24 and 8.1.11

Posted by Remi Collet on September 30, 2022 05:48 AM

RPMs of PHP version 8.1.11 are available in remi-modular repository for Fedora ≥ 35 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in remi-php81 repository for EL 7.

RPMs of PHP version 8.0.24 are available in remi-modular repository for Fedora ≥ 35 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in remi-php80 repository for EL 7.

RPMs of PHP version 7.4.32 are available in remi-modular repository for Fedora ≥ 35 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in remi-php74 repository for EL 7.

The modules for EL-9 are available for x86_64 and aarch64.

PHP version 7.3 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

These versions fix 2 security bugs, so updating is strongly recommended.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.1 installation (simplest):

dnf module reset php
dnf module enable php:remi-8.1
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php81
yum update php\*

Parallel installation of version 8.1 as Software Collection

yum install php81

Replacement of default PHP by version 8.0 installation (simplest):

dnf module reset php
dnf module enable php:remi-8.0
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php80
yum update php\*

Parallel installation of version 8.0 as Software Collection

yum install php80

Replacement of default PHP by version 7.4 installation (simplest):

dnf module reset php
dnf module enable php:remi-7.4
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php74
yum update php\*

Parallel installation of version 7.4 as Software Collection

yum install php74

And soon in the official updates:

To be noted:

  • EL-9 RPMs are built using RHEL-9.0
  • EL-8 RPMs are built using RHEL-8.6
  • EL-7 RPMs are built using RHEL-7.9
  • intl extension now uses libicu71 (version 71.1)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.8, instead of the outdated system library)
  • oci8 extension now uses Oracle Client version 21.7
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php74 / php80 / php81)

The Fedora Project Remains Community Driven

Posted by TheEvilSkeleton on September 30, 2022 12:00 AM

Introduction

Recently, the Fedora Project removed all patented codecs from their Mesa builds, without the rest of the community’s input. This decision was heavily criticized by the community. Some even asked the Fedora Project to remove “community driven” from its official description. I’d like to spend some time explaining why, in my opinion, this decision was completely justified, and how the Fedora Project remains community driven.

Law Compliance Cannot Be Voted On

Massive organizations, like the Fedora Project, must comply with laws to avoid lawsuits as much as possible. Patent trolls are really common and will target big organizations. Let’s not forget that, in 2019, GNOME was sued by a patent troll. Even worse, patent trolling against open source projects has considerably increased since early this year. So, this decision had to be made quickly, to avoid potential lawsuits as soon as possible.

Complying with laws is not up to the community to decide. For example, Arch Linux, another community driven distribution, cannot and will not redistribute proprietary software at all. And this is not something that can be voted on, but must be complied with. This doesn’t mean that Arch Linux is not community driven whatsoever; it only means that it’s legally bound, just like how the Fedora Project cannot ship these patented codecs.

Even if the Fedora Project wasn’t sued in the past years, it doesn’t mean that they will continue to be free of lawsuits in the future. The increase in patent trolling is a good reason for the Fedora Project to quickly react on this. If they ever get sued, is the community going to pay for lawyers?

Community Driven

As a volunteer of the Fedora Project who is unaffiliated with Red Hat, I believe that the Fedora Project remains community driven. I am currently a part of Fedora Websites & Apps Team with the “developer” role of the upcoming website revamp repository. This is mainly a volunteer effort, as the majority of us contributing to it are unaffiliated with Red Hat and unpaid developers.

Since we (volunteers) are the ones in control of the decisions, we could intentionally make the website look displeasing and appalling. Of course, we care about the Fedora Project, so we want it to look appealing to potential users, contributors and even enterprises that are willing to switch to open source.

Recently, I proposed to unify Silverblue, Kinoite, CoreOS, and other pages’ layouts into one that looks uniform and consistent when navigating, e.g. same navigation bar, footer, color palette, etc. Some developers are considering joining the effort, but some disagree. Of course, this is merely a proposal, but if everyone is on board, then we volunteers will be the ones leading this initiative.

This is one example from personal experience, but many initiatives were (and will be) proposed by independent contributors, who can also lead the effort. Nonlegal proposals are still democratically voted on, and surveys are still taken seriously. Currently, the Fedora Project is in the process of migrating from Pagure to GitLab, and from IRC to Matrix. That is because the community voted on it. I voiced my opinion and was one of the people who proposed both of those changes in the surveys.

Conclusion

I completely agree with the Fedora Project’s decision to disable patented codecs in Mesa. Decisions like these cannot and should not be put to the community, as they are legal matters about potential lawsuits. Anything that is nonlegal remains democratically voted on by the community, as long as you comply with US laws (unfortunately) and the Fedora Code of Conduct.


Edit 1: Use Arch Linux as an example instead of Gentoo Linux, as it is a binary based distribution

From Infrastructure as Code to Policy as Code

Posted by Fabio Alessandro Locati on September 30, 2022 12:00 AM
I still remember when, 15 years ago, the topic of Infrastructure as Code was beginning to be discussed. At the time, the majority of tools we know and use for Infrastructure as Code did not exist. Some people and companies realized the need for such a paradigm, while many others were skeptical or against it. In the last few months, I had a kind of déjà vu when I started to have conversations with some stakeholders around Policy as Code, or, as some prefer to call it, Compliance as Code.

Contribute to the Fedora Project during Hacktoberfest 2022

Posted by Fedora Community Blog on September 29, 2022 08:00 AM
Lit jack-o-lanterns in a dark location with the white text "Hacktoberfest"

Allow us to wake you up when September ends because Hacktoberfest is (nearly) here. And you can contribute to the Fedora Project while participating in Hacktoberfest 2022! This event is an excellent opportunity to advocate for free and open-source software, all while giving back to the community with the contribution of your choice. Hacktoberfest includes low and non-code contributions. You can diversify your contributions to include writing docs, creating designs, running tests, mentoring folks, and much more. This global event is open for anyone, from students to professionals. People of all backgrounds and skill levels are encouraged to join us.

Hold on – What’s in it for me?

Folks who participate during Hacktoberfest 2022:

  • Get to expand their network with like-minded folks who maintain or contribute to the projects of their choices and interests
  • Get a pristine opportunity to give back to the free and open-source software project that they use and/or are a big fan of
  • Receive an absolutely awesome-looking Hacktoberfest 2022 tee-shirt to exhibit those bragging rights
  • Get a great opportunity to advocate for environmental conservation by having DigitalOcean plant a tree in their name

Alright, I’m sold – How do I participate?

Folks can participate either as project maintainers or as individual contributors.

As a project maintainer

If you want to participate as a project maintainer:

  • Ensure that you have your project repository available on either GitLab or GitHub
  • Add the “hacktoberfest” topic to your repository to indicate you’re looking for contributors
  • Add a “CONTRIBUTING.md” file with contribution guidelines to your repository
  • Choose issue tickets that are well-defined, self-contained, and have a limited scope
  • Adopt a code of conduct to create a greater sense of inclusion and community
  • Be active in reviewing, approving valid pull/merge requests and merging them
  • Reject any spammy requests you receive by labelling them as “spam” and closing them

Project maintainers get rewards for:

  • Merging unique pull requests or merge requests
  • Providing an approving review for the pull requests and merge requests
  • Adding the “hacktoberfest-accepted” label to the valid requests
  • Adding the “invalid” or “spam” labels to the invalid requests

We encourage teams who have their repositories in the Fedora namespace on GitLab, or in Fedora-related organizations on GitHub, to participate in this event.

As an individual contributor

If you want to participate as an individual contributor:

  • Ensure that you register on the Hacktoberfest website between Sep 26 to Oct 31
  • Look for the “hacktoberfest” topic in GitLab or GitHub project repositories
  • Open four valid pull requests or merge requests for these repositories (maintainers must accept your requests for them to be counted towards your total)
  • Adhere to the project’s contributing guidelines and code of conduct
  • Be active in revising your requests as and when the maintainers request changes

Individual contributors get rewards for:

  • Creating four merged pull requests
  • Being among the top 40,000 participants to get their four requests in
  • Making contributions with a lasting effect, long after October ends
  • Not sending spammy requests to the project maintainers

It’s cool and all – but I am new to this

Not a problem. The Fedora Join SIG is here to help you out. Drop in and introduce yourself. From there on, the helpful folks at the Join SIG will help you find a place to contribute.

Happy hacking!

The post Contribute to the Fedora Project during Hacktoberfest 2022 appeared first on Fedora Community Blog.

syslog-ng 101: how to get started with learning syslog-ng?

Posted by Peter Czanik on September 28, 2022 01:03 PM

How do you get started with syslog-ng? There are two main resources: the syslog-ng documentation and the syslog-ng blogs. You should learn the concepts and basics from the documentation; the blogs document use cases, and you can use the docs as a reference.

<figure><figcaption>

syslog-ng logo

</figcaption> </figure>

Read the rest of my blog at: https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-101-how-to-get-started-with-learning-syslog-ng

Using a GraphQL endpoint from an Ansible playbook

Posted by Maxim Burgerhout on September 28, 2022 11:50 AM

Calling a GraphQL API from an Ansible playbook

I recently had to call a GraphQL API from an Ansible playbook, and considering I had never done that before and there is little to no documentation about this online, it was a bit tricky.

In the end, I got everything working, and figured I would share it here for posterity (and for myself, when I have to do this again in the future!)

A GraphQL API call consists of one or two parts: the actual query or mutation, and optionally a set of variables as input to the query or mutation.

I found it works easiest to drop both the query and the variables into separate files. This allows me to edit them with appropriate syntax highlighting and formatting. The variables are probably dynamic, so they are in a template. For simplicity, I put the mutation (I was mutating, not querying) in a template file as well.

Ok, so in the end, my mutation looked like this:

<figure class="highlight">
mutation createEventUrl($createDto: EventUrlCreateInput!) {
  createEventUrl(createDto: $createDto) {
    id
  }
}
</figure>

While the input variables looked like this:

<figure class="highlight">
{ "createDto": {
  "url": "https://foo.bar.baz.9099/webhooks"
  }
}
</figure>

The Ansible task was called like this, with two lookups in the body: one for the mutation itself (as “query”) and one for the variables:

<figure class="highlight">
  - name: test graphql
    ansible.builtin.uri:
      url: https://graphql.api.url.here/
      method: POST
      headers:
        content-type: "application/json"
      body_format: json
      body:
        query: '{{ lookup("template", "./templates/create_event_url.graphql") }}'
        variables: '{{ lookup("template", "./templates/create_event_url_variables.graphql") }}'
    connection: local
</figure>

This worked quite nicely for me, hope this helps!

What Not to Recommend to Flatpak Users

Posted by TheEvilSkeleton on September 28, 2022 12:00 AM

Introduction

Whenever I browse the web, I find many “tips and tricks” from various blog writers, YouTubers and others who recommend that users take steps they either aren’t supposed to take, or that have better alternatives. In this article, I will go over some of those steps you should not be taking and explain why.

Setting GTK_THEME

The GTK_THEME variable is often used to force custom themes for GTK applications. For example, setting GTK_THEME=adw-gtk3-dark will set the dark variant of adw-gtk3 if installed on Flathub.

GTK_THEME is a debug variable. It is intended for testing stylesheets with GTK3 and GTK4. However, it is NOT intended to be used by end users. libhandy and libadwaita ship additional widgets that the majority of GTK3 and GTK4 themes don’t support, as those themes are made for plain GTK3 and/or GTK4. This means that using a custom theme on a GTK4+libadwaita application may break libadwaita widgets.

Many applications are increasingly porting from GTK3 to GTK4+libadwaita. While GTK_THEME may work fine on GTK3, the application will appear broken after it gets ported to GTK4+libadwaita, if GTK_THEME is kept. The solution in that case is to unset GTK_THEME.

When recommending GTK_THEME, ensure that the user knows they will need to unset the variable after the application gets ported to GTK4+libadwaita. Or better yet, don’t recommend debug variables to users at all. Otherwise, they will get the impression that the application itself is buggy and not working as intended. They won’t realize that GTK_THEME caused it.

Aliasing flatpak run

A common recommendation is to alias flatpak run. When launching Flatpak applications from the terminal, it’s typical to type flatpak run $APP_ID, where $APP_ID is the application ID, for example org.mozilla.firefox for Firefox. So, logically, users find that flatpak run is too long, so they alias it to, e.g. fr.

While this works, there is a better way to improve this situation. Flatpak has its own /bin directories that we can add to PATH. For system installations, the directory is located at /var/lib/flatpak/exports/bin. For user installations, it’s located at ~/.local/share/flatpak/exports/bin.

After we restart the shell, we should be able to use application IDs alone, without needing to type flatpak run. These directories are not added to PATH by default, to avoid Flatpak’s /bin entries clashing with distribution packages’ binaries that follow the reverse domain name notation convention.
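As a concrete sketch (assuming a POSIX shell and a per-user Flatpak installation), adding the directory to PATH in a shell startup file looks like this:

```shell
# Add Flatpak's per-user exports/bin directory to PATH, e.g. in ~/.bashrc
# or ~/.profile. For system installations, use /var/lib/flatpak/exports/bin.
export PATH="$PATH:$HOME/.local/share/flatpak/exports/bin"

# After restarting the shell, applications can be launched by ID alone,
# e.g.:  org.mozilla.firefox
echo "$PATH"
```

The same line works for both directories; append the system one as well if you install Flatpaks system-wide.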

flatpak run is often used to temporarily add or remove permissions when running the Flatpak application. For example, flatpak run --nofilesystem=home $APP_ID denies access to filesystem=home for $APP_ID for that session specifically.

Placing Themes and Cursors in ~/.icons and ~/.themes

A major mistake users often make is overriding filesystem permissions to allow read access to ~/.icons and ~/.themes. These aforementioned paths are entirely unsupported by Flatpak, as they are legacy directories.

A user who uses a cursor from ~/.icons may encounter an issue where Flatpak applications fall back to the Xlib cursor. And a user who uses ~/.themes may encounter an issue where Flatpak doesn’t automatically detect the theme and install it (if available).

Flatpak heavily relies on XDG standards and thus honors XDG compliant equivalent paths:

  • ~/.icons → ~/.local/share/icons
  • ~/.themes → ~/.local/share/themes

It is best to use these XDG compliant paths to avoid overriding permissions, as they are better supported long-term. If you use a program that installs cursors/icons or themes in legacy paths, contact the developers and kindly ask them to follow XDG standards instead!
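As a sketch, a one-time migration from the legacy paths to their XDG-compliant equivalents might look like this (contents are copied rather than moved, so nothing is lost; the legacy directories may or may not exist):

```shell
# Create the XDG-compliant target directories.
mkdir -p "$HOME/.local/share/icons" "$HOME/.local/share/themes"

# Copy the contents of the legacy directories, if they exist.
if [ -d "$HOME/.icons" ]; then
    cp -a "$HOME/.icons/." "$HOME/.local/share/icons/"
fi
if [ -d "$HOME/.themes" ]; then
    cp -a "$HOME/.themes/." "$HOME/.local/share/themes/"
fi
```

Once you have verified that everything works, the legacy directories can be removed.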


Edit 1: Correct mistake (Credit to re:fi.64)

Networking upgrades and reboots

Posted by Fedora Infrastructure Status on September 27, 2022 09:00 PM

On 2022-09-27 1gb internal switches will be upgraded. 30min outage is likely during this time period.

Call for Proposals: FWD 2022

Posted by Fedora Community Blog on September 27, 2022 08:00 AM

The Diversity, Equity, and Inclusion (DEI) team is hard at work preparing for Fedora Week of Diversity (FWD). We would love to have a larger group of volunteers to make Fedora Week of Diversity bigger and better this year, celebrating our diverse and inclusive community.

FWD will take place on October 14th and 15th on the Hopin platform. We are also hosting online connection games on https://teambuilding.com/ for all participants to play.

We are calling for participants to be interviewed or give a talk on personal experiences on diversity, equity, and inclusion within Fedora and in your life. We would love to hear your story!

Submit your Proposal

Submit your idea for your talk on the fedora-diversity repo today! Fedora Week of Diversity will be a fun way to hear and share unique perspectives. Submit your proposals by October 1st for review. 

If you are unsure what to submit a proposal on, you can always ask us in the #diversity channel on Matrix. We look forward to hearing what you are planning!

The post Call for Proposals: FWD 2022 appeared first on Fedora Community Blog.

Networking upgrades and reboots

Posted by Fedora Infrastructure Status on September 26, 2022 09:00 PM

On 2022-09-26 internet edge routers and switches will be upgraded. During this time period various Fedora services will be down as switches and routers are rebooted.

Episode 342 – Programming languages are the new operating system

Posted by Josh Bressers on September 26, 2022 12:01 AM

Josh and Kurt talk about programming language ecosystems tracking and publishing security advisory details. We are at a point in the language ecosystems where they are giving us services that have historically been reserved for operating systems.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2904-2" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_342_Programming_languages_are_the_new_operating_system.mp3?_=2" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_342_Programming_languages_are_the_new_operating_system.mp3</audio>

Show Notes

Google Analytics and EU rules

Posted by Fabio Alessandro Locati on September 26, 2022 12:00 AM
In the last few months, we have witnessed multiple European data protection offices weigh in on the legitimacy of Google Analytics. Back in January, I published a post that touched on the topic but was not really about Google Analytics. So, let’s start by looking at what happened, then at why Google Analytics seems to be so interesting to the European privacy authorities, and finish with some guesses about what could happen in the next few months.

Holding open source to a higher standard

Posted by Josh Bressers on September 25, 2022 11:54 PM

Open source has always been held to a higher standard. It has always surpassed this standard.

I ran across a story recently about a proposed bill in the US Congress that is meant to “help” open source software. The bill lays out steps CISA should take to help secure open source software. This post isn’t meant to argue if open source needs to be fixed (it doesn’t), but rather let’s consider the standards and expectations open source is held to.

In the early days of open source, there was an ENORMOUS amount of attention paid to the fact that open source was built by volunteers. It was amateur software, not like the professional companies of Sun, DEC, and Microsoft. The term FUD was used quite a lot by the open source developers to explain what was going on. And even though all the big commercial companies kept changing the reasons open source couldn’t be trusted, open source just kept exceeding every expectation. Now the FUD slingers are either out of business or have embraced open source.

The events following Log4Shell created whole new industries bent on convincing us open source can’t be trusted. This time instead of the answer being closed source software, it’s some sort of trusted open source, that only their company can sell you, and don’t forget to be afraid of open source! I don’t think humans will ever learn that fear doesn’t work. It might work for a little while during an emergency, but it’s just not a long term strategy. Open source won because it works. No amount of trying to convince us otherwise will change a single mind.

We’re again at a place where there are a lot of governments and companies telling us open source needs to be fixed. We also see attempts at creating ways to measure open source so we can determine which bits of open source should be trusted and which should be eschewed. On the surface this seems like a fair idea. Obviously some open source would measure well, and some would measure badly (this doesn’t mean a good score is better; more on that another day). All software works like this. There’s no call to measure closed source; consider it homework to figure out why not.

Higher Standards

So this brings us to how open source surpasses whatever standards are put in front of it. This will probably keep happening. Open source is easy to pick on. Nobody is in charge. There’s no legal department to come after you for making up lies. It’s just people building software in an amazing way. When someone starts trying to scare off users, they just keep building software and ignore the nonsense. It’s generally very impressive.

If we look at some of the most successful open source companies (Red Hat, Canonical, GitHub, and GitLab, to name a few), they aren’t trying to convince you open source is dangerous. Open source is how they build their products and their business. They understand open source, and that allows them to reap the benefits. These companies directly benefit from the higher standards of open source. Ironically, their competition created the very environment that let the open source companies win.

This isn’t the risk you are looking for

Now, all this said, there is risk when you use open source. There is risk in everything you do: crossing the street, driving too fast, eating expired ketchup. However, the risk that comes with open source isn’t what we may think it is. There are many who will tell you some open source is dangerous because it’s old, or has vulnerabilities, or doesn’t have the backing of a foundation, or fill in some metric. These reasons aren’t the risk; they’re marketing. The actual risk is never a simple one-line explanation.

Open source is like any asset. If you know you have it, you can make informed decisions. There will always be vulnerabilities. There will always be old open source. There will always be projects run by one person in their basement. None of these things are a problem by themselves if you know about it.

Here’s an easy example that picks on Red Hat and Google. Google has container images called “distroless”. They are designed to be incredibly stripped down, the idea being you should only run what you need. The Red Hat container images are gigantic in comparison. One reason often brought up for running distroless is they have fewer security vulnerabilities, which is completely true. If you have a surface understanding of the issues, this seems like a no brainer, smaller image equals less risk. But nothing is ever this simple. Red Hat has a great team of people that is constantly providing care and feeding of their packages (I used to be one of them). They don’t just patch the security problems, they are in the trenches helping open source projects with the updates and making sure they understand the vulnerabilities better than anyone else. Red Hat knows exactly how a security vulnerability affects them and they fix the things that matter very quickly.

The risk in the container images you run is really about support and tooling. Some projects and companies will benefit more from small container images. Maybe because they have fewer vulnerabilities, or maybe because they want to move at lightning speed. Some companies will benefit more from the slower pace Red Hat affords and they want to have world class support. There’s no right answer, everyone has to decide for themselves what makes sense.

The only constant is FUD

I don’t expect how open source is judged or measured to change anytime soon. It’s just too easy to blame with one hand, and keep using with the other. Only time will tell if and how governments get involved. I’m sure some ideas will be good and some will be bad. Something about a road paved with good intentions maybe.

The one thing that gives me the most solace about all of this is how much open source has won. It hasn’t won by a little bit, software ate the world, then open source ate the software. If it uses electricity, it uses open source.

Next Open NeuroFedora meeting: 26 September 1300 UTC

Posted by The NeuroFedora Blog on September 24, 2022 12:43 PM
Photo by William White on Unsplash



Please join us at the next regular Open NeuroFedora team meeting on Monday 26 September at 1300 UTC. The meeting is a public meeting, and open for everyone to attend. You can join us over:

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 2022-09-26'

The meeting will be chaired by @ankursinha. The agenda for the meeting is:

We hope to see you there!

Friday’s Fedora Facts: 2022-38

Posted by Fedora Community Blog on September 23, 2022 02:00 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

<figure class="wp-block-table">
Conference | Location | Date | CfP
ConFoo | Montreal, CA | 22–24 Feb | closes 23 Sep
geecon | Prague, CZ | 24–25 Oct | closes 25 Sep
JavaLand | Brühl, DE | 21–23 Mar | closes 26 Sep
Python Web Conf | virtual | 14–17 Mar | closes 1 Oct
RSA Conference | San Francisco, CA, US | 24–27 Apr | closes 6 Oct
OLF | Columbus, OH, US | 2–3 Dec | closes 15 Oct
</figure>

Help wanted

Meetings & events

Releases

<figure class="wp-block-table">
Release | open bugs
F35 | 4017
F36 | 4033
F37 (beta) | 1205
Rawhide | 6264
</figure>

Fedora Linux 37

Schedule

Below are some upcoming schedule dates. See the schedule website for the full schedule.

  • 2022-10-04 — Final freeze begins
  • 2022-10-18 — Current final target (early target date)

Changes

<figure class="wp-block-table">
Status | Count
MODIFIED | 2
ON_QA | 43
CLOSED | 2
</figure>

Blockers

<figure class="wp-block-table">
Bug ID | Component | Bug Status | Blocker Status
2088113 | anaconda | ON_QA | Accepted(Final)
2123494 | gnome-initial-setup | VERIFIED | Accepted(Final)
2121110 | gnome-shell | VERIFIED | Accepted(Final)
2121944 | greenboot | NEW | Accepted(Final)
2049849 | grub2 | NEW | Accepted(Final)
2123274 | mesa | NEW | Accepted(Final)
2128662 | abrt | ASSIGNED | Proposed(Final)
2127618 | nautilus | NEW | Proposed(Final)
</figure>

Fedora Linux 38

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

<figure class="wp-block-table">
Proposal | Type | Status
Add -fno-omit-frame-pointer to default compilation flags | System-Wide | FESCo #2817
Minizip Renaming | Self-Contained | Approved
Strong crypto settings: phase 3, forewarning 2/2 | System-Wide | FESCo #2865
Node.js Repackaging | Self-Contained | FESCo #2869
KTLS implementation for GnuTLS | System-Wide | FESCo #2871
PHP 8.2 | Self-Contained | Announced
SWIG 4.1.0 | Self-Contained | Announced
</figure>

Fedora Linux 39

Changes

The table below lists proposed Changes.

<figure class="wp-block-table">
Proposal | Type | Status
libsoup 3: Part Two | System-Wide | FESCo #2863
Replace DNF with DNF5 | System-Wide | FESCo #2870
</figure>

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-38 appeared first on Fedora Community Blog.

CPE Weekly Update – Week 38 2022

Posted by Fedora Community Blog on September 23, 2022 10:00 AM
featured image with CPE team's name

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on libera.chat.

We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 19th September – 23rd September 2022

<figure class="wp-block-image size-full">CPE Infographics</figure>

Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

Purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
The ARC (which is a subset of the team) investigates possible initiatives that CPE might take on.
Planning board
Docs

Update

Fedora Infra

  • Proxies moving to f36 (ran into fun amazon nvme firmware bug)
  • Reissued all fedora-messaging and vpn certs that expired this year.
  • Closed 2 really old infra tickets (6year and 5year) and got them done.

CentOS Infra including CentOS CI

Release Engineering

  • We are at 77 opened tickets
  • Investigation on quay updates
  • Testing of SCM toddler in staging

CentOS Stream

Goal of this Initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this new distribution a reality. The goal of this initiative is to prepare the ecosystem for the new CentOS Stream.

Updates

  • Manually push c8s modules, working around CVE checker problems.
  • Building several modules for Stream 8 that were stuck in CVE checker: 389-ds, 2 container tools modules, and virt. Also two standard module releases: python38 and python39
  • Focusing on centpkg-sig and its support of the exploded SRPM format.
  • Discussed details around repo split (BaseOS, AppStream, CRB, etc.) in Content Resolver and working on a standalone script for now.

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

Updates

FMN replacement

Goal of this initiative

FMN (Fedora-Messaging-Notification) is a web application allowing users to create filters on messages sent to (currently) fedmsg and forward these as notifications via email or IRC.
The goal of the initiative is mainly to add fedora-messaging schemas, create a new UI for a better user experience, and create a new service to triage incoming messages to reduce the current message delivery lag. The community will benefit from speedier notifications based on their own preferences (IRC, Matrix, email), a Fedora Project unified on one messaging service, and human-readable results in Datagrepper.
Also, CPE tech debt will be significantly reduced by dropping the maintenance of fedmsg altogether.

Updates

  • Added logic to create a user in the DB once logged into the frontend
  • Reworked the backend’s authentication with Ipsilon
  • Added new unit tests for the consumer
  • Investigated ORM models and their relationship to each other for the DB structure

The post CPE Weekly Update – Week 38 2022 appeared first on Fedora Community Blog.

FontForge gains ability to reuse OpenType rules for different fonts

Posted by Rajeesh K Nambiar on September 22, 2022 06:08 AM

FontForge is the long-standing libre font development tool: it can be used to design glyphs, import glyphs in many formats (SVG, PS, PDF, …), write OpenType lookups or integrate Adobe feature files, and produce binary fonts (OTF, TTF, WOFF, …). It has excellent scripting abilities, especially a Python library to manipulate fonts, which I use extensively in producing & testing fonts.

When I wrote advanced definitive OpenType shaping rules for Malayalam and build scripts based on FontForge, I also wanted to reuse the comprehensive shaping rules in all the fonts RIT develops. The challenge in reusing the larger set of rules in a ‘limited’ character-set font was that FontForge would (rightly) throw errors that such-and-such glyph does not exist in the font and thus the lookup is invalid. For instance, the definitive OTL shaping rules for Malayalam have nearly 950 glyphs and lookup rules, but a limited character-set font like ‘Ezhuthu’ has about 740 glyphs.

One fine morning in 2020, I set out to read FontForge’s source code to study whether functionality could be added to safely skip lookups that do not apply to a font (because the glyphs specified in the lookup are not present in the font, for instance). A few days later, I had modified the core functionality and adapted the Python interface (specifically, the Font.mergeFeature method) to do exactly that, preserving backward compatibility.

Next, the same functionality also needed to be exposed in the graphical interface (via the File → Merge Feature Info menu). FontForge uses its own GUI toolkit (neither GTK nor Qt); but with helpful pointers from Fredrick Brennan, I developed the GUI to take a flag (default ‘off’, to retain backward compatibility) that allows users to try skipping lookup rules that do not apply to the current font. In the process, I had to touch the innards of FontForge’s low-level code and learn about it.

<figure class="wp-block-image size-large"><figcaption class="wp-element-caption">Fig. 1: Fontforge now supports skipping non-existent glyphs when merging a comprehensive OpenType feature file.</figcaption></figure>

This worked fine for our use case, typically ignoring GSUB lookups of the type sub glyph1 glyph2 by glyph3 where glyph3 does not exist in the font. But it did not properly handle the cases where glyph1 or glyph2 were non-existent. I tried to fix the issue but was unable to spend more time to finish it, as Real Life™ caught up; c’est la vie. It was later attempted as part of the Free Software Camp mentoring program in 2021, but that didn’t bear fruit.

A couple of weeks ago, Fred followed up now that this functionality is found very useful; so I set aside time again to finish the feature. With fresh eyes, I was able to fix remaining issues quickly, rebase the changes to current master and update the pull request.

The merge request has landed in FontForge master branch this morning. There’s a follow up pull request to update the Python scripting documentation as well. I want to thank Fredrick Brennan and Jeremy Tan for the code reviews and suggestions, and KH Hussain and CVR for sharing the excitement.

This functionality added to FontForge helps immensely in reusing the definitive Malayalam OpenType shaping rules without any modification for all the fonts! 🎉

Migrate to APIv3 queries

Posted by Copr on September 22, 2022 12:00 AM

We had planned the APIv2 drop for a very long time, and we started on it quite some time ago (api_2 was dropped from our Python API lib). The team was so familiar with this ongoing change, and so bored of the announcements, that we forgot to analyze the remaining api_2 Apache access_log entries.

The change has already happened; api_2 is gone. So here comes at least a small helper post that should make the migration from api_2 to api_3 trivial. Only the routes that are still being accessed (and now return 404) are covered here.

Also, it might be a good idea to use the Python API (python3-copr) if possible.

How to get a build info

  • Old route: /api_2/builds/<ID>
  • New route: /api_3/build/<ID>

This should be as easy as it looks. But note that you now want to look at the .projectname and .ownername fields in the output, because you’ll need them in the other routes.

How to get a project info

  • Old route: /api_2/projects/<project_id>
  • New route: /api_3/project?ownername=<ownername>&projectname=<projectname>

Listing builds in a project

  • Old route: /api_2/builds?project_id=<proj_id>
  • New route: /api_3/build/list/?ownername=<ownername>&projectname=<projectname>

We no longer work with project IDs here.

Note the trailing slash needed after list/?...; this is a bug and should be fixed in the future so that both the list? and list/? variants work.

Last build in a project

Same as the previous one, just add the &limit=N argument:

  • Old route: /api_2/builds?project_id=<proj_id>&limit=<N>
  • New route: /api_3/build/list/?ownername=<ownername>&projectname=<projectname>&limit=<N>
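To make the mapping above concrete, here is a small shell sketch that builds the new APIv3 URLs from a build ID or an owner/project pair. The helper function names and the instance URL are assumptions for illustration, not part of the Copr API:

```shell
# Assumed Copr instance URL; adjust for other deployments.
BASE="https://copr.fedorainfracloud.org"

# /api_2/builds/<ID>  ->  /api_3/build/<ID>
build_info_url() {
    echo "$BASE/api_3/build/$1"
}

# /api_2/builds?project_id=<id>[&limit=N]
#   ->  /api_3/build/list/?ownername=...&projectname=...[&limit=N]
build_list_url() {
    url="$BASE/api_3/build/list/?ownername=$1&projectname=$2"
    if [ -n "${3:-}" ]; then
        url="$url&limit=$3"
    fi
    echo "$url"
}

build_info_url 123456
build_list_url someowner someproject 1
```

Note that the trailing slash after list/ is deliberate, matching the quirk described above.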

List of owner’s projects

  • Old route: /api_2/projects?owner=<owner>&name=<name>
  • New route: /api_3/project?ownername=<owner>&projectname=<projectname>

Build config

The “build config” info is a rather internal thing (we have part of it in APIv3 because of the $ copr mock-config <project> <chroot> functionality). But previously the config route was used to get the build result directory, aka the chroot’s result_dir_url. This can now be obtained through the /api_3/build-chroot route (the result_url field). The rest of the info goes to the /backend internal namespace.

  • Old route: /api_2/build_tasks/<build_id>/<chroot_name>
  • New route: /api_3/build-chroot?build_id=<build-id>&chrootname=<chroot_name>
  • New route: /api_3/build-chroot/build-config?build_id=<build-id>&chrootname=<chroot_name>
  • New route: /backend/get-build-task/<build-id>-<chroot_name> (internal only)

Cockpit 277

Posted by Cockpit Project on September 22, 2022 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit 277, cockpit-machines 275, and cockpit-podman 54:

Login: Improve password manager compatibility

The HTML of the login page has been adjusted to be more compatible with password managers in popular browsers. Usernames and passwords are more likely to be prefilled or selectable, depending on the password manager and browser.

Thanks to Jacek Tomasiak for pointing this out and providing a fix!

Login: Fix “This web browser is too old” error with upcoming browsers

Newer, upcoming browser versions have improved support for the :is() and :where() selectors. Cockpit was checking for support using an empty selector, which passed on browsers with the earlier (current as of writing) implementation. However, browsers have recently updated their parsing support for “Forgiving Selector Parsing”, which caused the newer development versions of Firefox, Chrome, and WebKit to fail this check, preventing those browsers from logging into Cockpit.

The check has now been adjusted so current and upcoming browser versions all pass.

Additionally, hotfixes for older versions of Cockpit have been published for various distributions. If you have an error while trying to log in with a new browser on an older Cockpit version, please upgrade your version of Cockpit.

Updates have been issued for the following distributions and versions:

  • CentOS Stream 8 & 9: Fixed.
  • Fedora: Fixed in ≥ 36.
  • RHEL 8.6 & 9.0 are affected, fixes are not yet published.
  • RHEL versions older than 8.6 are not affected.
  • Debian: Fixed in testing and backports. Stable (11 and earlier) are not affected.
  • Ubuntu Kinetic (current devel series) and the backport for 22.04 LTS got fixed.
  • Ubuntu 22.04 LTS is still affected, but will be fixed soon.

Huge thanks to Emilio Cobos Álvarez for bringing this to our attention and sending a PR with a fix!

Machines: RHEL offline token management improvements

Downloading Red Hat Enterprise Linux directly from within Cockpit’s Machines page requires an offline token. It’s now validated on entry.

demo1

Valid tokens are saved in browser LocalStorage and re-validated and re-used the next time a machine image is downloaded.

demo2

Podman: Show all containers by default

Cockpit-podman now shows all containers by default. This is especially useful for seeing containers that are stopped or restarting.

Try it out

Cockpit 277, cockpit-machines 275, and cockpit-podman 54 are available now:

Upgrade of Copr servers

Posted by Fedora Infrastructure Status on September 21, 2022 01:00 PM

We're updating copr packages to the new versions which will bring new features and bugfixes.

This outage impacts the copr-frontend and the copr-backend.

Google Summer of Code 2022: It’s a wrap!

Posted by Felipe Borges on September 20, 2022 07:16 PM

Google Summer of Code logo

Another program year is ending and we are extremely happy with the resulting work of our contributors!

This year GNOME had nine Google Summer of Code projects covering various areas, from improving apps in our ecosystem to standardizing our web presence. We hope our interns got a glimpse of our community that motivates them to stay engaged with their projects and involved with the broader GNOME ecosystem.

A special thanks goes to our mentors, who are the front line of this initiative, sharing their knowledge and introducing the new contributors to our community. Thank you so much!

We encourage interns to contemplate their future after GSoC. If you want to continue with us, speak to your mentor about your interests and ask for tips on how you can keep participating in the project. There are also employment opportunities that can help you build a career in open source.

Thanks for choosing GNOME for your internship! We were lucky to have you!

EuroBSDcon 2022

Posted by Peter Czanik on September 20, 2022 07:51 AM

Last weekend I was in Vienna for EuroBSDcon, an event where BSD users are gathering from Europe (and all around the world). And while you could follow the event online, to me, the greatest value of the conference was not in the talks themselves (not to lessen their value of course, as they were fantastic) but rather in meeting people during the hallway session. The line-up consisted of sudo and syslog-ng users, BSD users and developers, and even some people from history books :-)

The venue

This year the conference took place in Vienna, in one of the buildings of the technical university. Talks were given in two large and several smaller auditoriums. Luckily, there was enough space for the hallway session too. And not just enough space, but also enough time for discussions. Coffee and tea helped us stay awake :-) Wearing a mask was mandatory in the building, but luckily, we could take it off when drinking a coffee or giving a talk.

The people

The best part of the event. At most conferences, people try to hide their badges. Here, most of the badges were perfectly visible. Anyone less introverted than me could easily read names from badges and start a discussion with other participants.

To me, approaching people is not so easy. Luckily, I got some help during the event. I participate in various Bastille discussion groups, and one of the members introduced me to many interesting people from the BSD community. There was one case where I collected all my courage and asked for a selfie with someone without any help: Eric Allman. He is the creator of sendmail and syslog. I’m not that good at taking selfies, but luckily, he was patient :-) The first one, with masks on, was probably the best photo of the series, but I also wanted one with his badge and my syslog-ng t-shirt visible… Eric explained that he is still using the original syslogd, but was curious to learn what new features syslog-ng provides. He was surprised to hear that even syslog-ng is 24 years old already. :-)

<figure><figcaption>

With Eric Allman at EuroBSDcon

</figcaption> </figure> <figure><figcaption>

With Eric Allman at EuroBSDcon

</figcaption> </figure>

The social event was fantastic in many ways. It was probably the nicest venue that I have ever visited for a conference event: the Vienna City Hall. The city of Vienna was one of the main sponsors of the event, with one of its council members opening it. The food and drinks were really good, but those just provided a comfortable environment to many good discussions.

The talks

Listing everything here would probably be too much, as I listened to more than ten talks altogether. Neither of the keynotes was strictly BSD-specific, however both had a strong message for the world of BSD. Frank Karlitschek of Nextcloud talked about decentralized infrastructure and the importance of open source and open standards. Dylan Beattie of Microsoft talked about legacy code and also said a few words about the rockstar programming language :-)

From the more BSD-focused talks, Eirik Øverby described the very same problems I had 15+ years ago while running thousands of web servers: default kernel parameters and web server parameters are not so well-documented and need lots of experimentation and tuning to survive attacks.

Toshaan Bharvani, who participates in the work of the OpenPower Foundation, described the current status of FreeBSD on OpenPower and the resources currently available to developers. He also said a few words about the upcoming Power SBC, hopefully arriving mid-next year.

Netflix is active at open source events and proudly explains how they serve movies to the world using a FreeBSD-based platform. Almost every time they give a talk, the performance they demonstrate has doubled. This time, they explained the hardware and FreeBSD tuning they use to reach 800 Gbit/s from a single host.

My talk

This year, I gave a combined talk at EuroBSDcon about sudo and syslog-ng. The focus was on the very latest sudo features, and I also demonstrated how to work with the logs of these features from syslog-ng. Sudo logs, both traditional and JSON-formatted, are automagically parsed by syslog-ng. Parsed messages are easier to alert on in real-time in syslog-ng, and also more efficient to work with in various NoSQL databases, where name-value pairs enable easier searching and reporting. Of course, as I was talking at a BSD event, I also talked about the history and status of syslog-ng in FreeBSD ports.

There were some good discussions already before my talk. Python support both in sudo and syslog-ng resonated well with the audience. BSD users consider syslog-ng to be the best maintained logging application in ports, which I was very happy to hear. :-) Of course, I also learned about some technical problems – luckily, none of them cause any real problems, only some ugly error messages. Still, I’ll try to reproduce them. The syslog-ng 4.0 news was also well-received, as many users use syslog-ng to forward log messages to Elasticsearch.

Next year

I hope you could feel from my blog that I really enjoyed this conference, both from the BSD and from the sudo / syslog-ng points of view. So I hope to participate in the next EuroBSDcon as well!

Handling WebAuthn over remote SSH connections

Posted by Matthew Garrett on September 20, 2022 02:17 AM
Being able to SSH into remote machines and do work there is great. Using hardware security tokens for 2FA is also great. But trying to use them both at the same time doesn't work super well, because if you hit a WebAuthn request on the remote machine it doesn't matter how much you mash your token - it's not going to work.

But could it?

The SSH agent protocol abstracts key management out of SSH itself and into a separate process. When you run "ssh-add .ssh/id_rsa", that key is being loaded into the SSH agent. When SSH wants to use that key to authenticate to a remote system, it asks the SSH agent to perform the cryptographic signatures on its behalf. SSH also supports forwarding the SSH agent protocol over SSH itself, so if you SSH into a remote system then remote clients can also access your keys - this allows you to bounce through one remote system into another without having to copy your keys to those remote systems.
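As an illustration of the framing the agent protocol uses (this is a Python sketch, not part of OpenSSH; message numbers come from the SSH agent protocol draft, and SSH_AUTH_SOCK is assumed to point at a running agent's socket):

```python
import os
import socket
import struct

# Message numbers from the SSH agent protocol (draft-miller-ssh-agent).
SSH_AGENTC_REQUEST_IDENTITIES = 11
SSH_AGENT_IDENTITIES_ANSWER = 12

def frame(payload: bytes) -> bytes:
    """Every agent message is a 4-byte big-endian length, then the payload."""
    return struct.pack(">I", len(payload)) + payload

def count_agent_keys() -> int:
    """Ask the agent behind $SSH_AUTH_SOCK how many identities it holds.

    A real client should loop on recv(); this sketch assumes small replies.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(os.environ["SSH_AUTH_SOCK"])
        s.sendall(frame(bytes([SSH_AGENTC_REQUEST_IDENTITIES])))
        length = struct.unpack(">I", s.recv(4))[0]
        reply = s.recv(length)
        assert reply[0] == SSH_AGENT_IDENTITIES_ANSWER
        return struct.unpack(">I", reply[1:5])[0]
```

Agent forwarding simply tunnels exactly these framed messages over the SSH connection, which is why remote clients can use your local keys.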

More recently, SSH gained the ability to store SSH keys on hardware tokens such as Yubikeys. If configured appropriately, this means that even if you forward your agent to a remote site, that site can't do anything with your keys unless you physically touch the token. But out of the box, this is only useful for SSH keys - you can't do anything else with this support.

Well, that's what I thought, at least. And then I looked at the code and realised that SSH is communicating with the security tokens using the same library that a browser would, except it ensures that any signature request starts with the string "ssh:" (which a genuine WebAuthn request never will). This constraint can actually be disabled by passing -O no-restrict-websafe to ssh-agent, except that was broken until this weekend. But let's assume there's a glorious future where that patch gets backported everywhere, and see what we can do with it.

First we need to load the key into the security token. For this I ended up hacking up the Go SSH agent support. Annoyingly it doesn't seem to be possible to make calls to the agent without going via one of the exported methods here, so I don't think this logic can be implemented without modifying the agent module itself. But this is basically as simple as adding another key message type that looks something like:
type ecdsaSkKeyMsg struct {
       Type        string `sshtype:"17|25"`
       Curve       string
       PubKeyBytes []byte
       RpId        string
       Flags       uint8
       KeyHandle   []byte
       Reserved    []byte
       Comments    string
       Constraints []byte `ssh:"rest"`
}
Where Type is ssh.KeyAlgoSKECDSA256, Curve is "nistp256", RpId is the identity of the relying party (eg, "webauthn.io"), Flags is 0x1 if you want the user to have to touch the key, KeyHandle is the hardware token's representation of the key (basically an opaque blob that's sufficient for the token to regenerate the keypair - this is generally stored by the remote site and handed back to you when it wants you to authenticate). The other fields can be ignored, other than PubKeyBytes, which is supposed to be the public half of the keypair.

This causes an obvious problem. We have an opaque blob that represents a keypair. We don't have the public key. And OpenSSH verifies that PubKeyBytes is a legitimate ecdsa public key before it'll load the key. Fortunately it only verifies that it's a legitimate ecdsa public key, and does nothing to verify that it's related to the private key in any way. So, just generate a new ECDSA key (ecdsa.GenerateKey(elliptic.P256(), rand.Reader)) and marshal it (elliptic.Marshal(ecKey.Curve, ecKey.X, ecKey.Y)) and we're good. Pass that struct to ssh.Marshal() and then make an agent call.

Now you can use the standard agent interfaces to trigger a signature event. You want to pass the raw challenge (not the hash of the challenge!) - the SSH code will do the hashing itself. If you're using agent forwarding this will be forwarded from the remote system to your local one, and your security token should start blinking - touch it and you'll get back an ssh.Signature blob. ssh.Unmarshal() the Blob member to a struct like
type ecSig struct {
        R *big.Int
        S *big.Int
}
and then ssh.Unmarshal the Rest member to
type authData struct {
        Flags    uint8
        SigCount uint32
}
The signature needs to be converted back to a DER-encoded ASN.1 structure (eg,
var b cryptobyte.Builder
b.AddASN1(asn1.SEQUENCE, func(b *cryptobyte.Builder) {
        b.AddASN1BigInt(ecSig.R)
        b.AddASN1BigInt(ecSig.S)
})
signatureDER, _ := b.Bytes()
, and then you need to construct the Authenticator Data structure. For this, take the RpId used earlier and generate the sha256. Append the one byte Flags variable, and then convert SigCount to big endian and append those 4 bytes. You should now have a 37 byte structure. This needs to be CBOR encoded (I used github.com/fxamacker/cbor and just called cbor.Marshal(data, cbor.EncOptions{})).
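To make the byte layout of that 37-byte authenticator data structure concrete, here is the same construction sketched in Python (the post's implementation is in Go; this is just an illustration of the layout, before the CBOR encoding step):

```python
import hashlib
import struct

def authenticator_data(rp_id: str, flags: int, sig_count: int) -> bytes:
    """sha256(rp_id) (32 bytes) + flags (1 byte) + sig_count (4 bytes, big endian)."""
    return (hashlib.sha256(rp_id.encode()).digest()
            + bytes([flags])
            + struct.pack(">I", sig_count))

# 32 + 1 + 4 = 37 bytes, as described above.
data = authenticator_data("webauthn.io", 0x01, 5)
assert len(data) == 37
```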

Now base64 encode the sha256 of the challenge data, the DER-encoded signature and the CBOR-encoded authenticator data and you've got everything you need to provide to the remote site to satisfy the challenge.

There are alternative approaches - you can use USB/IP to forward the hardware token directly to the remote system. But that means you can't use it locally, so it's less than ideal. Or you could implement a proxy that communicates with the key locally and have that tunneled through to the remote host, but at that point you're just reinventing ssh-agent.

And you should bear in mind that the default behaviour of blocking this sort of request is for a good reason! If someone is able to compromise a remote system that you're SSHed into, they can potentially trick you into hitting the key to sign a request they've made on behalf of an arbitrary site. Obviously they could do the same without any of this if they've compromised your local system, but there is some additional risk to this. It would be nice to have sensible MAC policies that default-denied access to the SSH agent socket and only allowed trustworthy binaries to do so, or maybe have some sort of reasonable flatpak-style portal to gate access. For my threat model I think it's a worthwhile security tradeoff, but you should evaluate that carefully yourself.

Anyway. Now to figure out whether there's a reasonable way to get browsers to work with this.

comment count unavailable comments

Fedora at OpenAlt 2022

Posted by Jiri Eischmann on September 19, 2022 04:02 PM

Covid stopped a lot of activities, including IT events. As things are hopefully coming back to normal, the Czech Fedora community had its first booth at a physical event since 2019. It was also a revival for OpenAlt, the traditional open source conference in Brno, because its last edition was in 2019, too. The traditional date of OpenAlt is the first weekend of November, but to avoid any possible autumn covid waves the organizers decided to hold it on Sep 17-18.

<figure class="aligncenter size-large"><figcaption class="wp-element-caption">Fedora booth</figcaption></figure>

The conference had grown to occupy pretty much the whole venue (Faculty of Informatics of Brno Technical University) and offer 6+ tracks. This year it shrank back to its pre-2012 size.

For us from the Fedora community it was great to be back among people, talking directly to our users. We demoed the freshly released Fedora 37 Beta and also showcased Fedora on a Pinephone (with the Phosh environment). The theme of this year’s OpenAlt turned out to be Linux on mobile phones. There were several talks on this topic, and people showed up with different phones (Pinephone, Librem 5…) and different OSes on them (Fedora, Manjaro, Debian, SailfishOS…).

<figure class="aligncenter size-large is-resized"><figcaption class="wp-element-caption">Fedora on Pinephone</figcaption></figure>

During the last 3 years we also accumulated a lot of Fedora swag in storage, so we had plenty to give away, and people appreciated it because apparently getting a sticker of your favorite distribution is something they were missing too.

I’d also like to thank Vojtech Trefny, Jan Beran, Ondrej Michal, and Lukas Kotek for helping me staff the booth.

Is Wayland really a future of desktop?

Posted by Marcin 'hrw' Juszkiewicz on September 19, 2022 11:35 AM

Each time I update my Fedora desktop to a new release (usually around Beta) I give Wayland a try. Which shows that I still use X11.

My setup

My desktop has a Ryzen CPU and an NVidia GTX 1050 Ti graphics card. Only one monitor (34” 3440x1440). I use the binary blobs, as this generation of GPU chipset is not really usable with the FOSS driver (nouveau).

For the desktop environment I use KDE. Which means the Plasma desktop/panel, Konsole and a few KDE apps. Firefox and Chrome as web browsers, Thunderbird for mail, Steam for gaming and Zoom (or Google Meet) for most video calls.

Issues this time

So what went wrong? Several things:

  • KDE decided that KEY_4 is the same as KEY_KP4. Too bad that I have separate actions on them. Already reported by someone else.
  • Zoom does not even want to cooperate — opens window for a moment and quits. There is a discussion in Zoom forum for this.
  • Google Meet in Chrome listed only 3 windows when I tried to share a window and there was no way to share whole screen (got black screen instead). XWayland related?
  • MPV movie player refused to use any hardware accelerated output.
  • Factorio game at first worked in 60 fps. But then I loaded bigger base and it went down to 1.3 fps (while it is around 60 fps under X11).

Did not test other games — I have spent far too much time in Factorio already and plan to finally do the last achievement there.

I would love to get rid of video meetings, but they are part of remote work. Having to switch the whole desktop session just to attend a meeting feels weird.

Things which went fine

I would like to list the things which went fine compared to my previous attempts. The most visible one was that font sizes are finally more consistent with the X11 session. They are not the same size in pixels, but a year ago they were much bigger under Wayland.

Maybe next time

I will give another try in 6 months — after upgrade to Fedora 38 Beta.

Bring Your Own Disaster

Posted by Matthew Garrett on September 19, 2022 07:12 AM
After my last post, someone suggested that having employers be able to restrict keys to machines they control is a bad thing. So here's why I think Bring Your Own Device (BYOD) scenarios are bad not only for employers, but also for users.

There's obvious mutual appeal to having developers use their own hardware rather than rely on employer-provided hardware. The user gets to use hardware they're familiar with, and which matches their ergonomic desires. The employer gets to save on the money required to buy new hardware for the employee. From this perspective, there's a clear win-win outcome.

But once you start thinking about security, it gets more complicated. If I, as an employer, want to ensure that any systems that can access my resources meet a certain security baseline (eg, I don't want my developers using unpatched Windows ME), I need some of my own software installed on there. And that software doesn't magically go away when the user is doing their own thing. If a user lends their machine to their partner, is the partner fully informed about what level of access I have? Are they going to feel that their privacy has been violated if they find out afterwards?

But it's not just about monitoring. If an employee's machine is compromised and the compromise is detected, what happens next? If the employer owns the system then it's easy - you pick up the device for forensic analysis and give the employee a new machine to use while that's going on. If the employee owns the system, they're probably not going to be super enthusiastic about handing over a machine that also contains a bunch of their personal data. In much of the world the law is probably on their side, and even if it isn't then telling the employee that they have a choice between handing over their laptop or getting fired probably isn't going to end well.

But obviously this is all predicated on the idea that an employer needs visibility into what's happening on systems that have access to their systems, or which are used to develop code that they'll be deploying. And I think it's fair to say that not everyone needs that! But if you hold any sort of personal data (including passwords) for any external users, I really do think you need to protect against compromised employee machines, and that does mean having some degree of insight into what's happening on those machines. If you don't want to deal with the complicated consequences of allowing employees to use their own hardware, it's rational to ensure that only employer-owned hardware can be used.

But what about the employers that don't currently need that? If there's no plausible future where you'll host user data, or where you'll sell products to others who'll host user data, then sure! But if that might happen in future (even if it doesn't right now), what's your transition plan? How are you going to deal with employees who are happily using their personal systems right now? At what point are you going to buy new laptops for everyone? BYOD might work for you now, but will it always?

And if your employer insists on employees using their own hardware, those employees should ask what happens in the event of a security breach. Whose responsibility is it to ensure that hardware is kept up to date? Is there an expectation that security can insist on the hardware being handed over for investigation? What information about the employee's use of their own hardware is going to be logged, who has access to those logs, and how long are those logs going to be kept for? If those questions can't be answered in a reasonable way, it's a huge red flag. You shouldn't have to give up your privacy and (potentially) your hardware for a job.

Using technical mechanisms to ensure that employees only use employer-provided hardware is understandably icky, but it's something that allows employers to impose appropriate security policies without violating employee privacy.

comment count unavailable comments

Episode 341 – Time till open source alternative

Posted by Josh Bressers on September 19, 2022 12:01 AM

Josh and Kurt talk about the Time Till Open Source Alternative blog post. The numbers probably don’t mean what we think they mean anymore. A lot of modern open source is really corporate controlled. Just because something carries an open source license doesn’t mean you can contribute to it.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2897-3" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_341_Time_till_open_source_alternative.mp3?_=3" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_341_Time_till_open_source_alternative.mp3</audio>

Show Notes

PCIe CXL investigation

Posted by Adam Young on September 18, 2022 04:17 PM

I’ve been looking in to PCIe+CXL. These are my notes.

There is a cxl_test module the Linux tree under tools/testing/cxl/.

There is a cxl command line tool. On Ubuntu and CentOS you install it via the ndctl package. This is short for libnvdimm, or Non-Volatile Memory. I think it is needed for the CXL kernel tests, but it is interesting in its own right, too.

When trying to build the cxl_test module from its directory, I got…

make -C ../../.. M=$PWD
/home/ayoung/linux/tools/testing/cxl/config_check.c: In function ‘check’:
././include/linux/compiler_types.h:352:45: error: call to ‘__compiletime_assert_117’ declared with attribute error: BUILD_BUG_ON failed: !IS_MODULE(CONFIG_CXL_BUS)
  352 |         _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)

This means I need to change the config option in the kernel build from ‘y’ to ‘m’ in order to build it as a module. The make menuconfig search function shows the output below. Note that PCI support is the top menu item on the device drivers page.

   Symbol: CXL_BUS [=y]                                                                                                                                                                                 
   Type  : tristate                                                                                                                                                                                     
   Defined at drivers/cxl/Kconfig:2                                                                                                                                                                 
     Prompt: CXL (Compute Express Link) Devices Support                                                                                                                                               
     Depends on: PCI [=y]                                                                                                                                                                             
     Location:                                                                                                                                                                                        
       Main menu                                                                                                                                                                                      
         -> Device Drivers                                                                                                                                                                            
   (1)     -> PCI support (PCI [=y])                                                                                                                                                                    

Once made, there are a bunch of .ko files in the subdir:

$ find . -name \*.ko
./test/cxl_mock.ko
./test/cxl_mock_mem.ko
./test/cxl_test.ko
./cxl_mem.ko
./cxl_pmem.ko
./cxl_acpi.ko
./cxl_core.ko
./cxl_port.ko
$ sudo insmod test/cxl_test.ko 
insmod: ERROR: could not insert module test/cxl_test.ko: Unknown symbol in module
[11204.608668] cxl_test: Unknown symbol cxl_decoder_autoremove (err -2)
[11204.615136] cxl_test: Unknown symbol devm_cxl_add_dport (err -2)
[11204.621236] cxl_test: Unknown symbol is_cxl_memdev (err -2)
[11204.626927] cxl_test: Unknown symbol cxl_decoder_add_locked (err -2)
[11204.633917] cxl_test: Unknown symbol cxl_switch_decoder_alloc (err -2)
[11204.640706] cxl_test: Unknown symbol cxl_endpoint_decoder_alloc (err -2)
[11204.647649] cxl_test: Unknown symbol to_cxl_port (err -2)
[11204.653117] cxl_test: Unknown symbol register_cxl_mock_ops (err -2)
[11204.659676] cxl_test: Unknown symbol unregister_cxl_mock_ops (err -2)

The mock module reports the error

[11573.093178] cxl_mock: Unknown symbol nvdimm_bus_register

So I built ../nvdimm using the same approach as above. This symbol is defined in

../nvdimm/nfit.mod.c:105:	{ 0xe9117c1f, "nvdimm_bus_register" },

That brings up the errors

[11907.753694] libnvdimm: Unknown symbol __wrap_devm_memunmap (err -2)
[11907.760070] libnvdimm: Unknown symbol __wrap___release_region (err -2)
[11907.766676] libnvdimm: Unknown symbol __wrap___devm_request_region (err -2)
[11907.773764] libnvdimm: Unknown symbol __wrap_memunmap (err -2)
[11907.779997] libnvdimm: Unknown symbol __wrap___devm_release_region (err -2)
[11907.787085] libnvdimm: Unknown symbol __wrap_memremap (err -2)
[11907.793345] libnvdimm: Unknown symbol __wrap_iounmap (err -2)
[11907.799217] libnvdimm: Unknown symbol __wrap___request_region (err -2)
[11907.806304] libnvdimm: Unknown symbol __wrap_devm_memremap (err -2)

Some guidance from Dan Williams on how to run the test: https://github.com/pmem/ndctl/blob/main/README.md. To build the nvdimm code:

make M=tools/testing/nvdimm
make M=tools/testing/cxl/
sudo make M=tools/testing/nvdimm modules_install

Both of those give:

depmod: WARNING: /lib/modules/5.19.0_ampcxl_+/extra/test/nfit_test.ko needs unknown symbol libnvdimm_test
depmod: WARNING: /lib/modules/5.19.0_ampcxl_+/extra/test/nfit_test.ko needs unknown symbol acpi_nfit_test
depmod: WARNING: /lib/modules/5.19.0_ampcxl_+/extra/test/nfit_test.ko needs unknown symbol pmem_test
depmod: WARNING: /lib/modules/5.19.0_ampcxl_+/extra/test/nfit_test.ko needs unknown symbol device_dax_test
depmod: WARNING: /lib/modules/5.19.0_ampcxl_+/extra/test/nfit_test.ko needs unknown symbol dax_pmem_test

When I try to run the ndctl test:

sudo meson test -C build

The tests are skipped due to

libkmod: DEBUG libkmod/libkmod-module.c:202 kmod_module_parse_depline: 1 dependencies for nfit
test/init: ndctl_test_init: nfit.ko: appears to be production version: /lib/modules/5.19.0_ampcxl_+/kernel/drivers/acpi/nfit/nfit.ko
__ndctl_test_skip: explicit skip test_libndctl:2600
nfit_test unavailable skipping tests

The instructions above showed the way forward: I needed to perform a modules_install of the modules built for the test (tools/testing/nvdimm and tools/testing/cxl, including explicitly installing the ones for tools/testing/nvdimm/test) before the tests will run, which is clearly stated in the instructions.

The error in the logfile now shows that the code is x86_64 specific: there is a failure to load the module nd_e820, which is related to memory management on x86_64 platforms. The file ndctl/test/core.c has the following line:

        if (access("/sys/bus/acpi", F_OK) == 0)
                family = NVDIMM_FAMILY_INTEL;

and then later

                        if (family != NVDIMM_FAMILY_INTEL &&
                            (strcmp(name, "nfit") == 0 ||
                             strcmp(name, "nd_e820") == 0))
                                continue;

However, my machine does have the path /sys/bus/acpi but will not build/load the nd_e820 module. This seems to indicate at least where to start working on the test: making an appropriate AARCH64 family for the core test framework. I suspect the right thing is to add a check to something like /proc/cpuinfo and look at the manufacturer. Alternatively, I could look at uname -r and see what architecture the kernel is running on, if the solution is less vendor-specific than required for x86_64. Tasks for future days.
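A sketch of what such a vendor check could look like, in Python for illustration (ndctl itself is C, and the choice of cpuinfo keys here is my assumption, not existing ndctl code):

```python
def cpuinfo_vendor(cpuinfo_text: str) -> str:
    """Extract a vendor identifier from /proc/cpuinfo-style text.

    x86 kernels expose a "vendor_id" line; arm64 kernels typically expose
    "CPU implementer" instead, so check both.
    """
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() in ("vendor_id", "CPU implementer"):
            return value.strip()
    return "unknown"

# Usage: cpuinfo_vendor(open("/proc/cpuinfo").read())
```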

For now, I am just going to hijack this check and say that it should set the family equal to NVDIMM_FAMILY_AARCH64. With that, the first test passes, and maybe some others; I have not looked that closely yet.

Next up I will continue through the tests and see what else I can hammer in to place to get them to pass.


Sourceware Infrastructure / Conservancy / GNU Toolchain at Cauldron

Posted by Mark J. Wielaard on September 18, 2022 04:14 PM

Cauldron was fun, heard so many interesting talks, met so many fun people, had great conversations.

I also had a BoF about all the great infrastructure work we have been adding to Sourceware over the last year, and had wanted to discuss how the different projects hosted on Sourceware wanted to use it to improve their processes: New services for sourceware projects. The sourceware.md file also has the presenter notes and discussion items I had prepared.

But when I said that at the end I would also like to discuss the recent proposal for Sourceware to become a Conservancy member project, and that I had asked both Conservancy and FSF members to call in to help with that discussion, there was what felt like a shouting match. It was the first time I felt an in-person event was worse than an email discussion. Hopefully people will calm down and restart this discussion on the sourceware overseers list.

Using Mock is easy

Posted by Jakub Kadlčík on September 18, 2022 12:00 AM

There are a lot of articles and documentation pages explaining how to use Mock but I am hesitant to share them with new Fedora packagers because they make the tool look much scarier than it actually is. Using Mock is easy!

Why Mock?

Unlike rpmbuild, which builds packages directly on your computer without any isolation, Mock spawns a container with a minimal build environment and builds your package inside. As a consequence:

  • A bug in the package won’t break your system
  • Two people building a package on different computers will always get the same results
  • All BuildRequires that you forgot to put into the spec file will be revealed

Setup

Install Mock from the Fedora repositories.

sudo dnf install mock

By default, Mock can be used only by the root user. Please don’t run it with sudo; instead, add yourself to the mock group (log out and back in afterwards for the group change to take effect).

sudo usermod -a -G mock $USER

Usage

Mock takes an SRPM and produces an RPM package for a given Fedora version and architecture. If you don’t have an SRPM package yet, you need to build it from a spec file first. If you downloaded an SRPM package from the internet or already built it, you can skip this step.

rpmbuild -bs /path/to/your/foo.spec

And pass the resulting SRPM to Mock.

mock /path/to/your/foo.src.rpm

By default Mock builds the RPM packages for the Fedora version and architecture that matches your system. If you want to specify a different target, use the -r parameter and press <TAB> twice to see all the possible options.
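For scripted builds, the same flow can be driven from Python. A small sketch (the chroot name fedora-37-x86_64 is only an example of the configs the -r completion offers; the mock CLI itself remains the real interface):

```python
import subprocess

def mock_build_cmd(srpm, chroot=None):
    """Assemble a Mock command line for rebuilding an SRPM.

    Without a chroot, Mock targets the host's Fedora version and
    architecture; with one, -r selects the target explicitly."""
    cmd = ["mock"]
    if chroot:
        cmd += ["-r", chroot]  # e.g. "fedora-37-x86_64"
    cmd.append(srpm)
    return cmd

# To actually run the build (requires Mock installed and mock group
# membership, as set up above):
# subprocess.run(mock_build_cmd("foo.src.rpm", "fedora-37-x86_64"), check=True)
```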

Read further

If interested, you can read more about Mock configuration files, containers, plugins, and other features.

In case you want to build your packages using Mock but you don’t want to do it on your computer, use Copr. It’s easy: just follow this screenshot tutorial.

Friday’s Fedora Facts: 2022-37

Posted by Fedora Community Blog on September 16, 2022 07:25 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)! F37 Beta was released on Tuesday.

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

<figure class="wp-block-table">
Conference | Location | Date | CfP
EmacsConf | virtual | 3–4 Dec | closes 18 Sep
ConFoo | Montreal, CA | 22–24 Feb | closes 23 Sep
JavaLand | Brühl, DE | 21–23 Mar | closes 26 Sep
Python Web Conf | virtual | 14–17 Mar | closes 1 Oct
OLF | Columbus, OH, US | 2–3 Dec | closes 15 Oct
</figure>

Help wanted

Meetings & events

Releases

<figure class="wp-block-table">
Release | open bugs
F35 | 4027
F36 | 3995
F37 (beta) | 1209
Rawhide | 6203
</figure>

Fedora Linux 37

Schedule

Below are some upcoming schedule dates. See the schedule website for the full schedule.

  • 2022-10-04 — Final freeze begins
  • 2022-10-18 — Current final target (early target date)

Changes

<figure class="wp-block-table">
Status | Count
ASSIGNED | 1
MODIFIED | 2
ON_QA | 43
CLOSED | 1
</figure>

Blockers

<figure class="wp-block-table">
Bug ID | Component | Bug Status | Blocker Status
2088113 | anaconda | ON_QA | Accepted(Final)
2111003 | gnome-contacts | ON_QA | Accepted(Final)
2123494 | gnome-initial-setup | ASSIGNED | Accepted(Final)
2121110 | gnome-shell | ASSIGNED | Accepted(Final)
2125252 | gnome-shell | VERIFIED | Accepted(Final)
2124869 | gnome-software | VERIFIED | Accepted(Final)
2121944 | greenboot | NEW | Accepted(Final)
2049849 | grub2 | NEW | Accepted(Final)
2123274 | mesa | NEW | Accepted(Final)
2124127 | uboot-tools | ON_QA | Accepted(Final)
2106868 | chrome-gnome-shell | NEW | Proposed(Final)
2125439 | gnome-shell | NEW | Proposed(Final)
2127192 | gnome-shell-extension-background-logo | NEW | Proposed(Final)
2121106 | systemd | NEW | Proposed(Final)
</figure>

Fedora Linux 38

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

<figure class="wp-block-table">
Proposal | Type | Status
Add -fno-omit-frame-pointer to default compilation flags | System-Wide | FESCo #2817
Minizip Renaming | Self-Contained | FESCo #2857
Pcre deprecation | System-Wide | Approved
Strong crypto settings: phase 3, forewarning 2/2 | System-Wide | FESCo #2865
Node.js Repackaging | Self-Contained | FESCo #2869
KTLS implementation for GnuTLS | System-Wide | FESCo #2871
</figure>

Fedora Linux 39

Changes

The table below lists proposed Changes.

<figure class="wp-block-table">
Proposal | Type | Status
libsoup 3: Part Two | System-Wide | FESCo #2863
Replace DNF with DNF5 | System-Wide | FESCo #2870
</figure>

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-37 appeared first on Fedora Community Blog.

LPC 2022 GPU BOF (user console and cgroups)

Posted by Dave Airlie on September 16, 2022 03:56 PM

 At LPC 2022 we had a BoF session for GPUs with two topics.

Moving to userspace consoles:

Currently most mainline distros still use the kernel console, provided by the VT subsystem. We'd like to move to CONFIG_VT=n, as the console and VT subsystem have historically been a source of bugs and are also nasty places for locking. They can also cause an oops to go missing when locking bugs take out the panic path, stopping other paths (like pstore or serial) from completely processing the oops.

The session started by discussing what things would look like. Lennart gave a great summary of the work David did a few years ago and the train of thought involved.

Once you think through all the paths and things you want supported, you realise the best user console is going to be one that supports emojis and non-Latin scripts. This probably means you want a lightweight wayland compositor running a fullscreen VTE based terminal. Working back from the consequences of this means you probably aren't going to want this in systemd, and it should be a separate development.

The other area discussed was the requirements for a panic/emergency kernel console, likely called drmlog. This would just be something to output to the display whenever the kernel panics, or during boot before the user console is loaded.

cgroups for GPU

This has been an ongoing topic: vendors often suggest cgroup patches, are told on review that they are too vendor-specific and should simplify and try again, and then never try again. We went over the problem space of both memory allocation accounting/enforcement and processing restrictions. We all agreed that local memory accounting was probably the easiest place to start doing something. We also agreed that accounting should be implemented and solid before we start digging into enforcement. We had a discussion about where CPU memory should be accounted, especially around integrated vs discrete graphics, since users might care about the application memory usage apart from the GPU objects usage, which would be CPU on integrated and GPU on discrete. I believe a follow-up hallway discussion also covered a bunch of this with the upstream cgroups maintainer.

LPC 2022 Accelerators BOF outcomes summary

Posted by Dave Airlie on September 16, 2022 03:08 PM

 At Linux Plumbers Conference 2022, we held a BoF session around accelerators.

This is a summary made from memory and notes taken by John Hubbard.

We started with defining categories of accelerator devices.

1. single shot data processors, submit one off jobs to a device. (simpler image processors)

2. single-user, single task offload devices (ML training devices)

3. multi-app devices (GPU, ML/inference execution engines)

One of the main points made is that common device frameworks are normally about targeting a common userspace (e.g. Mesa for GPUs). Since a common userspace doesn't exist for accelerators, this presents the problem of what sort of common things can be targeted. There was discussion of TensorFlow and PyTorch being that userspace, but also camera image processing and OpenCL. OpenXLA was also named as a userspace API that might be of interest as a target for implementations.

There was a discussion on what to call the subsystem and where to place it in the tree. It was agreed that the drivers would likely need to use DRM subsystem functionality, but having things live in drivers/gpu/drm would not be great. Moving things around now for current drivers is too hard to deal with for backports etc. Adding a new directory for accel drivers would be a good plan, even if they used the DRM framework. There was a lot of naming discussion; I think we landed on drivers/skynet or drivers/accel (Greg and I like skynet more).

We had a discussion about RAS (Reliability, Availability, Serviceability), which is how hardware is monitored in data centers. GPU and acceleration drivers for datacentre operations define their own RAS interfaces that get plugged into monitoring systems. This seems like an area that could be standardised across drivers. Netlink was suggested as a possible solution for this area.

Namespacing for devices was brought up. I think the advice was if you ever think you are going to namespace something in the future, you should probably consider creating a namespace for it up front, as designing one in later might prove difficult to secure properly.

We should use the drm framework with another major number to avoid some of the pain points and lifetime issues other frameworks see.

This is mostly a summary of the notes. I think we have a fair idea of a path forward; we just need to start bringing the pieces together upstream now.

How to rebase to Fedora Silverblue 37 Beta

Posted by Fedora Community Blog on September 16, 2022 02:00 PM

Silverblue is an operating system for your desktop built on Fedora Linux. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. Let’s see the steps to upgrade to the newly released Fedora 37 Beta, and how to revert if anything unforeseen happens.

Before attempting an upgrade to the Fedora 37 Beta, apply any pending upgrades.

Updating using terminal

Because the Fedora 37 Beta is not available in GNOME Software, the whole upgrade must be done through a terminal.

First, check if the 37 branch is available, which should be true now:

$ ostree remote refs fedora

You should see the following line in the output:

fedora:fedora/37/x86_64/silverblue

If you want to pin the current deployment (this deployment will stay as an option in GRUB until you remove it), you can do it by running:

# 0 is entry position in rpm-ostree status
$ sudo ostree admin pin 0

To remove the pinned deployment, use the following command (2 corresponds to the entry position in rpm-ostree status):

$ sudo ostree admin pin --unpin 2

Next, rebase your system to the Fedora 37 branch.

$ rpm-ostree rebase fedora:fedora/37/x86_64/silverblue

Finally, the last thing to do is restart your computer and boot to Fedora Silverblue 37 Beta.

How to revert

If anything bad happens — for instance, if you can’t boot to Fedora Silverblue 37 Beta at all — it’s easy to go back. Pick the previous entry in the GRUB boot menu (you need to press ESC during the boot sequence to see the GRUB menu in newer versions of Fedora Silverblue), and your system will start in its previous state. To make this change permanent, use the following command:

$ rpm-ostree rollback

That’s it. Now you know how to rebase to Fedora Silverblue 37 Beta and back. So why not do it today?

The post How to rebase to Fedora Silverblue 37 Beta appeared first on Fedora Community Blog.

PHP version 8.0.24RC1 and 8.1.11RC1

Posted by Remi Collet on September 16, 2022 10:10 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for a parallel installation, a perfect solution for such tests, and also as base packages.

RPMs of PHP version 8.1.11RC1 are available

  • as SCL in remi-test repository
  • as base packages
    • in the remi-php81-test repository for Enterprise Linux 7
    • in the remi-modular-test for Fedora 34-36 and Enterprise Linux ≥ 8

RPMs of PHP version 8.0.24RC1 are available

  • as SCL in remi-test repository
  • as base packages
    • in the remi-php80-test repository for Enterprise Linux 7
    • in the remi-modular-test for Fedora 34-36 and Enterprise Linux ≥ 8

 

PHP version 7.4 is now in security-only mode, so no more RCs will be released.

Installation: follow the wizard instructions.

Parallel installation of version 8.1 as Software Collection:

yum --enablerepo=remi-test install php81

Parallel installation of version 8.0 as Software Collection:

yum --enablerepo=remi-test install php80

Update of system version 8.1 (EL-7):

yum --enablerepo=remi-php81,remi-php81-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.1
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.0 (EL-7):

yum --enablerepo=remi-php80,remi-php80-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf --enablerepo=remi-modular-test update php\*

Notice: version 8.1.9RC1 is also in Fedora rawhide for QA.

EL-9 packages are built using RHEL-9.0

EL-8 packages are built using RHEL-8.6

EL-7 packages are built using RHEL-7.9

The oci8 extension now uses Oracle Client version 21.7; the intl extension now uses libicu 71.1.

The RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

Version 8.2.0RC2 is also available

Software Collections (php80, php81)

Base packages (php)

CPE Weekly Update – Week 37 2022

Posted by Fedora Community Blog on September 16, 2022 10:00 AM

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on libera.chat.

We provide both an infographic and a text version of the weekly report. If you just want a quick overview of what we did, look at the infographic. If you are interested in more in-depth details, look below it.

Week: September 12th – 16th 2022

<figure class="wp-block-image size-large"><figcaption>CPE infographic</figcaption></figure>

Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
The ARC (which is a subset of the team) investigates possible initiatives that CPE might take on.
Planning board
Docs

Update

Fedora Infra

  • F37 Beta release seems to have gone pretty smoothly.
  • A bunch of certs are expiring (VPN, messaging, etc.). We will be renewing them this week.
  • Business as usual

CentOS Infra including CentOS CI

Release Engineering

CentOS Stream

Goal of this Initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this new distribution a reality. The goal of this initiative is to prepare the ecosystem for the new CentOS Stream.

Updates

  • Actively working on resolving centpkg issues
    • New version of centpkg in testing so it works with the new rpkg
  • The last CentOS Stream 8 release tracking 8.7 is out; next week we’re switching to 8.8. We also generated new cloud images.
  • IDM modules, SSSD, IPA, all updated and working in CentOS Stream 8.

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

Updates

  • EPEL 9 is up to 8639 (+75) packages from 3640 (+39) source packages
  • qt6 package rebased from 6.2 to 6.3 in EPEL 9 testing
  • Initial ipython package in EPEL 9 testing
  • fedpkg in EPEL9 testing
  • nagios-plugins-check-updates package backported support for Alma and Rocky in EPEL 8 testing
  • Early observations from the EPEL survey:
    • On a scale of 1 to 5, average happiness is 4.5
    • Many people suggested a website to look up EPEL packages, so we added a link to https://packages.fedoraproject.org to our docs
    • 41% of people who don’t contribute don’t know where to get started
    • 30% of people who don’t contribute don’t want to contribute
    • 19% of people who don’t contribute want training

FMN replacement

Goal of this initiative

FMN (Fedora-Messaging-Notification) is a web application that lets users create filters on messages sent to (currently) fedmsg and forward them as notifications to email or IRC.
The goal of the initiative is mainly to add fedora-messaging schemas, create a new UI for a better user experience, and create a new service to triage incoming messages to reduce the current message delivery lag. The community will benefit from speedier notifications based on their own preferences (IRC, Matrix, email), a Fedora Project unified on one messaging service, and human-readable results in Datagrepper.
Also, CPE tech debt will be significantly reduced by dropping the maintenance of fedmsg altogether.

Updates

  • Database connection and models
  • Frontend mockups
  • Work on the consumer and the sender modules

The post CPE Weekly Update – Week 37 2022 appeared first on Fedora Community Blog.

Using Python and NetworkManager to control the network

Posted by Fedora Magazine on September 16, 2022 08:00 AM

NetworkManager is the default network management service on Fedora and several other Linux distributions. Its main purpose is to take care of things like setting up interfaces, adding addresses and routes to them and configuring other network related aspects of the system, such as DNS.

There are other tools that offer similar functionality. However, one of the advantages of NetworkManager is that it offers a powerful API. Using this API, other applications can inspect, monitor and change the networking state of the system.

This article first introduces the API of NetworkManager and presents how to use it from a Python program. In the second part it shows some practical examples: how to connect to a wireless network or to add an IP address to an interface programmatically via NetworkManager.

The API

NetworkManager provides a D-Bus API. D-Bus is a message bus system that allows processes to talk to each other; using D-Bus, a process that wants to offer some services can register on the bus with a well-known name (for example, “org.freedesktop.NetworkManager”) and expose some objects, each identified by a path. Using d-feet, a graphical tool to inspect D-Bus objects, we can see the object tree exposed by the NetworkManager service:

<figure class="aligncenter size-full"></figure>

Each object has properties, methods and signals, grouped into different interfaces. For example, the following is a simplified view of the interfaces for the second device object:

<figure class="aligncenter size-large"></figure>

We see that there are different interfaces; the org.freedesktop.NetworkManager.Device interface contains some properties common to all devices, like the state, the MTU and IP configurations. Since this device is Ethernet, it also has an org.freedesktop.NetworkManager.Device.Wired D-Bus interface containing other properties such as the link speed.

The full documentation for the D-Bus API of NetworkManager is here.

A client can connect to the NetworkManager service using the well-known name and perform operations on the exposed objects. For example, it can invoke methods, access properties or receive notifications via signals. In this way, it can control almost every aspect of network configuration. In fact, all the tools that interact with NetworkManager – nmcli, nmtui, GNOME control center, the KDE applet, Cockpit – use this API.
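As a small illustration (a sketch assuming the python3-gobject package; the well-known name, object path and interface come from the NetworkManager D-Bus reference), a client can read the daemon’s Version property with a plain Gio proxy, before any libnm is involved:

```python
# Well-known name, object path and interface of the root NetworkManager
# object, as documented in the NM D-Bus API reference.
NM_BUS_NAME = "org.freedesktop.NetworkManager"
NM_OBJECT_PATH = "/org/freedesktop/NetworkManager"
NM_INTERFACE = "org.freedesktop.NetworkManager"

def read_nm_version():
    # Imported lazily so the constants above can be reused even where
    # the GObject bindings are not installed.
    from gi.repository import Gio
    bus = Gio.bus_get_sync(Gio.BusType.SYSTEM, None)
    proxy = Gio.DBusProxy.new_sync(bus, Gio.DBusProxyFlags.NONE, None,
                                   NM_BUS_NAME, NM_OBJECT_PATH,
                                   NM_INTERFACE, None)
    return proxy.get_cached_property("Version").unpack()

# print("version:", read_nm_version())  # needs a running NetworkManager
```

This is exactly the kind of boilerplate that libnm, described next, hides behind native objects.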

libnm

When developing a program, it can be convenient to automatically instantiate objects from the objects available on D-Bus and keep their properties synchronized; or to be able to have method calls on those objects automatically dispatched to the corresponding D-Bus method. Such objects are usually called proxies and are used to hide the complexity of D-Bus communication from the developer.

For this purpose, the NetworkManager project provides a library called libnm, written in C and based on GNOME’s GLib and GObject. The library provides C language bindings for functionality provided by NetworkManager. Being a GLib library, it is usable from other languages as well via GObject introspection, as explained below.

The library maps fairly closely to the D-Bus API of NetworkManager. It wraps remote D-Bus objects as native GObjects, and D-Bus signals and properties to GObject signals and properties. Furthermore, it provides helpful accessors and utility functions.

Overview of libnm objects

The diagram below shows the most important objects in libnm and their relationship:

<figure class="aligncenter size-full"></figure>

NMClient caches all the objects instantiated from D-Bus. The object is typically created at the beginning of the program and provides a way to access other objects.

A NMDevice represents a network interface, physical (such as Ethernet, InfiniBand or Wi-Fi) or virtual (such as a bridge or an IP tunnel). Each device type supported by NetworkManager has a dedicated subclass that implements type-specific properties and methods. For example, a NMDeviceWifi has properties related to the wireless configuration and to access points found during the scan, while a NMDeviceVlan has properties describing its VLAN-id and the parent device.

NMClient also provides a list of NMRemoteConnection objects. NMRemoteConnection is one of the two implementations of the NMConnection interface. A connection (or connection profile) contains all the configuration needed to connect to a specific network.

The difference between a NMRemoteConnection and a NMSimpleConnection is that the former is a proxy for a connection existing on D-Bus while the latter is not. In particular, NMSimpleConnection can be instantiated when a new blank connection object is required. This is useful, for example, when adding a new connection to NetworkManager.

The last object in the diagram is NMActiveConnection. This represents an active connection to a specific network using settings from a NMRemoteConnection.

GObject introspection

GObject introspection is a layer that acts as a bridge between a C library using GObject and programming language runtimes such as JavaScript, Python, Perl, Java, Lua, .NET, Scheme, etc.

When the library is built, sources are scanned to generate introspection metadata describing, in a language-agnostic way, all the constants, types, functions, signals, etc. exported by the library. The resulting metadata is used to automatically generate bindings to call into the C library from other languages.

One form of metadata is a GObject Introspection Repository (GIR) XML file. GIRs are mostly used by languages that generate bindings at compile time. The GIR can be translated into a machine-readable format called Typelib that is optimized for fast access and lower memory footprint; for this reason it is mostly used by languages that generate bindings at runtime.

This page lists all the introspection bindings for other languages. For a Python example we will use PyGObject, which is included in the python3-gobject RPM on Fedora.

A basic example

Let’s start with a simple Python program that prints information about the system:

import gi

gi.require_version("NM", "1.0")
from gi.repository import GLib, NM

client = NM.Client.new(None)
print("version:", client.get_version())

At the beginning we import the introspection module and then the Glib and NM modules. Since there could be multiple versions of the NM module in the system, we make certain to load the right one. Then we create a client object and print the version of NetworkManager.

Next, we want to get a list of devices and print some of their properties:

devices = client.get_devices()
print("devices:")
for device in devices:
    print("  - name:", device.get_iface())
    print("    type:", device.get_type_description())
    print("    state:", device.get_state().value_nick)

The device state is an enum of type NMDeviceState and we use value_nick to get its description. The output is something like:

version: 1.41.0
devices:
  - name: lo
    type: loopback
    state: unmanaged
  - name: enp1s0
    type: ethernet
    state: activated
  - name: wlp4s0
    type: wifi
    state: activated

In the libnm documentation we see that the NMDevice object has a get_ip4_config() method which returns a NMIPConfig object and provides access to addresses, routes and other parameters currently set on the device. We can print them with:

    ip4 = device.get_ip4_config()
    if ip4 is not None:
       print("    addresses:")
       for a in ip4.get_addresses():
           print("      - {}/{}".format(a.get_address(),
                                        a.get_prefix()))
       print("    routes:")
       for r in ip4.get_routes():
           print("      - {}/{} via {}".format(r.get_dest(),
                                               r.get_prefix(),
                                               r.get_next_hop()))

From this, the output for enp1s0 becomes:

 - name: enp1s0
   type: ethernet
   state: activated
   addresses:
     - 192.168.122.191/24
     - 172.26.1.1/16
   routes:
     - 172.26.0.0/16 via None
     - 192.168.122.0/24 via None
     - 0.0.0.0/0 via 192.168.122.1

Connecting to a Wi-Fi network

Now that we have mastered the basics, let’s try something more advanced. Suppose we are in the range of a wireless network and we want to connect to it.

As mentioned before, a connection profile describes all the settings required to connect to a specific network. Conceptually, we’ll need to perform two different operations: first, insert a new connection profile into NetworkManager’s configuration, and second, activate it. Fortunately, the API provides the method nm_client_add_and_activate_connection_async() which does everything in a single step. When calling the method we need to pass at least the following parameters:

  • the NMConnection we want to add, containing all the needed properties;
  • the device to activate the connection on;
  • the callback function to invoke when the method completes asynchronously.

We can construct the connection with:

def create_connection():
    connection = NM.SimpleConnection.new()
    ssid = GLib.Bytes.new("Home".encode("utf-8"))

    s_con = NM.SettingConnection.new()
    s_con.set_property(NM.SETTING_CONNECTION_ID,
                       "my-wifi-connection")
    s_con.set_property(NM.SETTING_CONNECTION_TYPE,
                       "802-11-wireless")

    s_wifi = NM.SettingWireless.new()
    s_wifi.set_property(NM.SETTING_WIRELESS_SSID, ssid)
    s_wifi.set_property(NM.SETTING_WIRELESS_MODE,
                        "infrastructure")

    s_wsec = NM.SettingWirelessSecurity.new()
    s_wsec.set_property(NM.SETTING_WIRELESS_SECURITY_KEY_MGMT,
                        "wpa-psk")
    s_wsec.set_property(NM.SETTING_WIRELESS_SECURITY_PSK,
                        "z!q9at#0b1")

    s_ip4 = NM.SettingIP4Config.new()
    s_ip4.set_property(NM.SETTING_IP_CONFIG_METHOD, "auto")

    s_ip6 = NM.SettingIP6Config.new()
    s_ip6.set_property(NM.SETTING_IP_CONFIG_METHOD, "auto")

    connection.add_setting(s_con)
    connection.add_setting(s_wifi)
    connection.add_setting(s_wsec)
    connection.add_setting(s_ip4)
    connection.add_setting(s_ip6)

    return connection

The function creates a new NMSimpleConnection and sets all the needed properties. All the properties are grouped into settings. In particular, the NMSettingConnection setting contains general properties such as the profile name and its type. NMSettingWireless indicates the wireless network name (SSID) and that we want to operate in “infrastructure” mode, that is, as a wireless client. The wireless security setting specifies the authentication mechanism and a password. We set both IPv4 and IPv6 to “auto” so that the interface gets addresses via DHCP and IPv6 autoconfiguration.

All the properties supported by NetworkManager are described in the nm-settings man page and in the “Connection and Setting API Reference” section of the libnm documentation.

To find a suitable interface, we loop through all devices on the system and return the first Wi-Fi device.

def find_wifi_device(client):
    for device in client.get_devices():
        if device.get_device_type() == NM.DeviceType.WIFI:
            return device
    return None

What is missing now is a callback function, but it's easier to look at that later. We can proceed by invoking the add_and_activate_connection_async() method:

import gi
gi.require_version("NM", "1.0")
from gi.repository import GLib, NM

# other functions here...

main_loop = GLib.MainLoop()
client = NM.Client.new(None)
connection = create_connection()
device = find_wifi_device(client)
client.add_and_activate_connection_async(
    connection, device, None, None, add_and_activate_cb, None
)
main_loop.run()

To support multiple asynchronous operations without blocking execution of the whole program, libnm uses an event loop mechanism. For an introduction to event loops in GLib see this tutorial. The call to main_loop.run() waits until there are events (such as the callback for our method invocation, or any update from D-Bus). Event processing continues until the main loop is explicitly terminated. This happens in the callback:

def add_and_activate_cb(client, result, data):
    try:
        ac = client.add_and_activate_connection_finish(result)
        print("ActiveConnection {}".format(ac.get_path()))
        print("State {}".format(ac.get_state().value_nick))
    except Exception as e:
        print("Error:", e)
    main_loop.quit()

Here, we use client.add_and_activate_connection_finish() to get the result for the asynchronous method. The result is a NMActiveConnection object and we print its D-Bus path and state.

Note that the callback is invoked as soon as the active connection is created; it may still be attempting to connect. In other words, when the callback runs we have no guarantee that the activation completed successfully. To ensure that, we would need to monitor the active connection state until it changes to activated (or to deactivated in case of failure). In this example, we just print that the activation started, or why it failed, and then quit the main loop; after that, the main_loop.run() call returns and our program terminates.
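If you do need to wait for full activation, one possible approach, sketched here rather than taken from the original example, is to subscribe to the active connection's "state-changed" signal and quit the main loop only once the state settles. The helper name on_ac_state_changed is ours, and the sketch assumes the main_loop variable from the example above:

```python
def add_and_activate_cb(client, result, data):
    try:
        ac = client.add_and_activate_connection_finish(result)
    except Exception as e:
        print("Error:", e)
        main_loop.quit()
        return
    # Keep the main loop alive until activation settles one way or the other.
    ac.connect("state-changed", on_ac_state_changed)

def on_ac_state_changed(ac, state, reason):
    if state == NM.ActiveConnectionState.ACTIVATED:
        print("Activation succeeded")
        main_loop.quit()
    elif state == NM.ActiveConnectionState.DEACTIVATED:
        print("Activation failed (reason code {})".format(reason))
        main_loop.quit()
```

Any other state values (activating, deactivating) are transient, so the handler simply waits for the next signal emission.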

Adding an address to a device

Once there is a connection active on a device, we might decide that we want to configure an additional IP address on it.

There are different ways to do that. One way would be to modify the profile and activate it again, similar to what we saw in the previous example. Another is to change the runtime configuration of the device without updating the profile on disk.

To do that, we use the reapply() method. It requires at least the following parameters:

  • the NMDevice on which to apply the new configuration;
  • the NMConnection containing the configuration.

Since we only want to change the IP address and leave everything else unchanged, we first need to retrieve the current configuration of the device (also called the “applied connection”). Then we update it with the static address and reapply it to the device.

The applied connection, not surprisingly, can be queried with the get_applied_connection() method of NMDevice. Note that the method also returns a version id that can be used during reapply to avoid race conditions with other clients. For simplicity, we are not going to use it.

In this example we suppose that we already know the name of the device we want to update:

import gi
import socket

gi.require_version("NM", "1.0")
from gi.repository import GLib, NM

# other functions here...

main_loop = GLib.MainLoop()
client = NM.Client.new(None)
device = client.get_device_by_iface("enp1s0")
device.get_applied_connection_async(0, None, get_applied_cb, None)
main_loop.run()

The callback function retrieves the applied connection from the result, changes the IPv4 configuration and reapplies it:

def get_applied_cb(device, result, data):
    (connection, v) = device.get_applied_connection_finish(result)

    s_ip4 = connection.get_setting_ip4_config()
    s_ip4.add_address(NM.IPAddress.new(socket.AF_INET,
                                       "172.25.12.1",
                                       24))

    device.reapply_async(connection, 0, 0, None, reapply_cb, None)

Omitting exception handling for brevity, the reapply callback is as simple as:

def reapply_cb(device, result, data):
    device.reapply_finish(result)
    main_loop.quit()

When the program quits, we will see the new address configured on the interface.
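The callback above ignores the returned version id and passes 0, which skips the check. If you do want the race-condition protection mentioned earlier, a variant of the callback (a sketch under the same assumptions as the example) can simply pass the id through:

```python
def get_applied_cb(device, result, data):
    # version_id identifies the configuration snapshot we just read; if
    # another client reapplies in the meantime, our reapply fails instead
    # of silently overwriting their changes.
    (connection, version_id) = device.get_applied_connection_finish(result)

    s_ip4 = connection.get_setting_ip4_config()
    s_ip4.add_address(NM.IPAddress.new(socket.AF_INET, "172.25.12.1", 24))

    device.reapply_async(connection, version_id, 0, None, reapply_cb, None)
```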

Conclusion

This article introduced the D-Bus and libnm API of NetworkManager and presented some practical examples of its usage. Hopefully it will be useful when you need to develop your next project that involves networking!

Besides the examples presented here, the NetworkManager git tree includes many others for different programming languages. To stay up to date with news from the NetworkManager world, follow the blog.


gnome-info-collect closing soon

Posted by Allan Day on September 15, 2022 01:43 PM

There has been a fantastic response to gnome-info-collect since Vojtěch announced it three weeks ago. To date we’ve had over 2,200 responses. That’s amazing! Thanks to everyone who has run the tool.

We now have enough data to perform the analyses we want. As a result, it’s time to close data collection. This will happen next Monday, 19 September. On that day, the data collection server will be turned off.

If you haven’t run gnome-info-collect yet, and would like to, there’s still a little time. See the project readme for instructions on how to install and run it.

Just because we’re shutting down gnome-info-collect doesn’t mean that it doesn’t have a future. Hopefully there will be further rounds of data collection in the future, where we can look at other aspects of GNOME usage that we didn’t examine this time round.

In the meantime, we have lots of great data to process and analyse. Watch this space to learn about what we find!

git signatures with SSH certificates

Posted by Matthew Garrett on September 15, 2022 01:34 AM
Last night I complained that git's SSH signature format didn't support using SSH certificates rather than raw keys, and was swiftly corrected, once again highlighting that the best way to make something happen is to complain about it on the internet in order to trigger the universe to retcon it into existence to make you look like a fool. But anyway. Let's talk about making this work!

git's SSH signing support is actually just it shelling out to ssh-keygen with a specific set of options, so let's go through an example of this with ssh-keygen. First, here's my certificate:

$ ssh-keygen -L -f id_aurora-cert.pub
id_aurora-cert.pub:
Type: ecdsa-sha2-nistp256-cert-v01@openssh.com user certificate
Public key: ECDSA-CERT SHA256:(elided)
Signing CA: RSA SHA256:(elided)
Key ID: "mgarrett@aurora.tech"
Serial: 10505979558050566331
Valid: from 2022-09-13T17:23:53 to 2022-09-14T13:24:23
Principals:
mgarrett@aurora.tech
Critical Options: (none)
Extensions:
permit-agent-forwarding
permit-port-forwarding
permit-pty

Ok! Now let's sign something:

$ ssh-keygen -Y sign -f ~/.ssh/id_aurora-cert.pub -n git /tmp/testfile
Signing file /tmp/testfile
Write signature to /tmp/testfile.sig

To verify this we need an allowed signatures file, which should look something like:

*@aurora.tech cert-authority ssh-rsa AAA(elided)

Perfect. Let's verify it:

$ cat /tmp/testfile | ssh-keygen -Y verify -f /tmp/allowed_signers -I mgarrett@aurora.tech -n git -s /tmp/testfile.sig
Good "git" signature for mgarrett@aurora.tech with ECDSA-CERT key SHA256:(elided)


Woo! So, how do we make use of this in git? Generating the signatures is as simple as

$ git config --global commit.gpgsign true
$ git config --global gpg.format ssh
$ git config --global user.signingkey /home/mjg59/.ssh/id_aurora-cert.pub


and then getting on with life. Any commits will now be signed with the provided certificate. Unfortunately, git itself won't handle verification of these - it calls ssh-keygen -Y find-principals which doesn't deal with wildcards in the allowed signers file correctly, and then falls back to verifying the signature without making any assertions about identity. Which means you're going to have to implement this in your own CI by extracting the commit and the signature, extracting the identity from the commit metadata and calling ssh-keygen on your own. But it can be made to work!

But why would you want to? The current approach of managing keys for git isn't ideal - you can kind of piggy-back off github/gitlab SSH key infrastructure, but if you're an enterprise using SSH certificates for access then your users don't necessarily have enrolled keys to start with. And using certificates gives you extra benefits, such as having your CA verify that keys are hardware-backed before issuing a cert. Want to ensure that whoever made a commit was actually on an authorised laptop? Now you can!

I'll probably spend a little while looking into whether it's plausible to make the git verification code work with certificates or whether the right thing is to fix up ssh-keygen -Y find-principals to work with wildcard identities, but either way it's probably not much effort to get this working out of the box.

Edit to add: thanks to this commenter for pointing out that current OpenSSH git actually makes this work already!

comment count unavailable comments

Fedora Linux 37 Beta released

Posted by Fedora fans on September 14, 2022 08:07 PM

The Fedora development team has announced the release of Fedora Linux 37 Beta. The next step is the final release of Fedora Linux 37, planned for the end of October.

The most notable highlights of Fedora 37 Beta include:

  • Fedora 37 Workstation Beta includes the GNOME 43 desktop; the final GNOME 43 release is expected within a few weeks.
  • GNOME 43 includes a new device security panel in Settings, giving users information about the security of the hardware and firmware on their system.
  • Building on the previous GNOME release, more core applications have been ported to the latest version of the GTK toolkit, providing improved performance and a modern look.
  • The Raspberry Pi 4 is now officially supported in Fedora Linux, including accelerated graphics.
  • Fedora Linux 37 Beta no longer supports the ARMv7 architecture, also known as arm32 or armhfp.
  • Updates to several programming languages, such as Python 3.11, Perl 5.36 and Golang 1.19.

 

Download Fedora 37 Beta:

Fedora 37 Workstation Beta:

https://getfedora.org/workstation/download/

Fedora 37 Server Beta:

https://getfedora.org/server/download/

Fedora 37 IoT Beta:

https://getfedora.org/iot/download/

Fedora Spins 37 Beta:

https://spins.fedoraproject.org/prerelease

Fedora Labs 37 Beta:

https://labs.fedoraproject.org/prerelease

Onward to Fedora 37!

The post "Fedora Linux 37 Beta released" first appeared on Fedora fans.

Working Hybrid

Posted by Peter Czanik on September 14, 2022 11:13 AM

I worked from home all my life, or at least that's what I thought. Recently I learned that what I do is actually called "hybrid" work. I do most of my work from home, but I also regularly visit the office. I can work a lot more efficiently at home, so I work from there. Once a week I'm at the office, where I don't progress as well with my tasks. Still, I find these visits very important.


This is a follow-up post to my working from home blog from last week.

Meetings

When I started working at my current workplace more than a decade ago, I had a saying that "I only work four days a week for Balabit". When someone asked what I did on the fifth work day, I said: "I go to the office for a full day of meetings." At that time I considered any meeting a waste of time, taking time away from doing actual work.

As a first step, I learned that meetings can actually be useful. What I work on between meetings, for example the main targets of the next software release, is decided in meetings. I need to participate to take part in those decisions. Of course, I still think that many meetings could easily be replaced by a few e-mails; those I quickly remove from my calendar. There are topics, however, that are not easily discussed in e-mail threads. Talking with the interested participants in real time can speed up the discussion and lead to a decision quicker.

In person

The pandemic also taught me that meeting people in person regularly is important as well. Yes, virtual meetings are just as efficient as IRL meetings. In some ways even better, as during less interesting topics you can catch up on your e-mails without disturbing others. But meetings normally have a few fixed topics, and when the discussion is finished or the meeting time is over, people leave, with a lot less wasted time. Or is it really wasted time?

Virtual meetings can completely fulfill their purpose. However, many of the best discussions and ideas are born after meetings, in the kitchen or around the water cooler. These are informal discussions with no formal agenda and no fixed participant list, just a few random people meeting without any planning. Rushing back to your desk, or avoiding the office completely, makes these random encounters impossible.

Going hybrid

Many people, including me, are (a lot) more efficient when working from home. That is where work can be done. The office is a lot less efficient: meetings, people talking around you, lots of interruptions. Why do I still say that visiting the office regularly is very important? Because you can meet random people in person. Why is meeting random people important?

  • In regular meetings people talk to close colleagues; you do not see outside your silo. In the kitchen you might run into members of other teams, join discussions with higher management, or the other way around. This can help break down silos and expose you to different views of the same problems.

  • In an informal discussion people are more open about why a problem frustrates them. If the other side better understands a problem and why it is frustrating, there is a better chance of resolving it quickly.

  • Even when you start talking about the weather with someone you just met at the coffee machine, the discussion can easily lead to something useful from a work point of view.

  • Learning about a common hobby with a colleague can bring you closer together and gain you more of their attention even when discussing work.

These random in-person encounters in the office can create tremendous value. Working from home is more efficient in most cases; it is how most people in IT can work through their task lists systematically with minimal interruptions. However, random discussions in the office can break down silos, facilitate faster problem resolution and even lead to groundbreaking new ideas. This is why, in most cases, hybrid is better than either full office or full remote work.

Happy Hours

If you really cannot get to the office regularly, for example because you moved far away while remote work was mandatory during the pandemic, you should still visit the office every once in a while. A good occasion is the happy hours at the office, when people stay after work and have some great discussions over a hamburger or other fine food. You can even meet some of the otherwise fully remote colleagues on these special evenings. These events also help team bonding, which is especially important when team members rarely see each other in real life. And, of course, the informal discussions have all the benefits I listed earlier.

What do you think?

Of course, in a stricter interpretation, hybrid means that everyone works where they are most efficient. From this point of view, the kind of office day I describe is just socializing. In my view, going to the office with an open mind and talking to the random people you meet there is beneficial, even if you cross fewer tasks off your to-do list on office days. These discussions open new perspectives and can be huge productivity boosts, even if somewhat unpredictable. So it even fits the stricter definition of hybrid: you spend part of your work hours in the office to be more efficient, probably not on the day you are in the office, but in the long run.

What do you think? You can reach me on Twitter and LinkedIn.

Fedora 37 Beta is available!

Posted by Charles-Antoine Couret on September 13, 2022 02:00 PM

On this Tuesday, 13 September, the Fedora Project community will be delighted to learn of the availability of the Beta version of Fedora Linux 37.

Despite the stability risks inherent in a Beta version, it is important to test it! By reporting bugs now, you will discover the new features before everyone else, while improving the quality of Fedora Linux 37 and thereby reducing the risk of delay. Development versions lack testers and feedback to achieve their goals.

The final release is currently scheduled for 15 or 25 October.

User experience

  • Move to GNOME 43;
  • The LXQt desktop environment is updated to version 1.1.0;
  • The voice chat software Mumble is updated to 1.4;
  • The /sysroot directory becomes read-only on Fedora Silverblue and Kinoite images;
  • The new Anaconda installer, based on web technologies, can be tested on dedicated images;
  • The boot.iso image for BIOS systems uses GRUB2 instead of syslinux as its bootloader;
  • A KVM disk image is created to ease the use of Fedora Server in a virtualized environment.

Hardware support

  • The ARMv7 architecture is no longer supported;
  • Official support for the Raspberry Pi 4;
  • Support for the new Device Onboarding standard to improve the security of IoT machines;
  • By default on the x86_64 architecture with a BIOS, the GPT partition scheme is used rather than its ancestor MBR;
  • The packages for JDK 8, 11, 17 and 18 no longer ship i686 builds;
  • A big cleanup of orphaned i686 packages, which are removed from the repositories.

Internationalization

  • Firefox's additional languages ship in a dedicated firefox-langpacks package, before moving to one package per language as was done for LibreOffice years ago;
  • Update to IBus 1.5.27;
  • Update to ibus-libpinyin 1.13;
  • Improved support for Persian fonts.

System administration

  • All systemd units are marked as preset on first boot;
  • The default hostname, when not configured, is localhost instead of fedora;
  • SELinux context labeling is now parallelized;
  • Package management moves to the new RPM 4.18;
  • All files provided by an RPM package are signed, for added security;
  • Fedora Workstation ships the whois package by default instead of jwhois;
  • A first step toward hardening the default security policy managed via the update-crypto-policies tool;
  • The openssl1.1 package is marked as deprecated, with removal planned for F38 or F39;
  • The gettext package is split into gettext and gettext-runtime to reduce the minimal system footprint;
  • Update to the BIND 9.18 DNS server;
  • Update to the Stratis 3.2.0 storage manager.

Development

  • Update of the GNU toolchain with glibc 2.36 and binutils 2.38;
  • The LLVM toolchain also gets its share of updates with its 15th version;
  • A boost to version 1.78 of the Boost C++ library;
  • The Go gopher leaps to version 1.19;
  • The Node.js 18.x branch becomes the reference;
  • The Perl language benefits from version 5.36;
  • While the Python language slithers to version 3.11;
  • The Erlang language gets its 25th release;
  • While its rival Haskell offers GHC 8.10.7 with the Stackage LTS 18.28 suite;
  • The Emacs text editor moves to version 28;
  • Version 3 of the libsoup library enters Fedora, alongside version 2 for compatibility reasons;
  • Update of MinGW, the toolchain for building Windows binaries;
  • MinGW now uses the OpenSSL 3 library;
  • It also gains the UCRT target in addition to MSVCRT, the former being recommended since Windows 10.

Fedora Project

  • Fedora CoreOS becomes an official edition;
  • Fedora Cloud Base also becomes an official edition again;
  • IoT images are generated with the osbuild tool;
  • Python packages have a shebang with the -P option enabled by default;
  • Python Dist RPM provides only names normalized according to PEP 503;
  • Addition of a new ELN-extras target to extend the features of EPEL and get closer to RHEL, with content defined by the community rather than Red Hat.

Testing

During the development of a new Fedora Linux version, such as this Beta, the project holds test days almost every week. The goal is to spend a day testing a specific feature such as the kernel, Fedora Silverblue, upgrades, GNOME, internationalization, etc. The quality assurance team designs and proposes a series of tests that are generally simple to run. You just follow them and report whether the result is as expected. If not, a bug report should be filed so that a fix can be prepared.

It is very easy to follow and usually requires little time (15 minutes to an hour at most) if you have a usable Beta at hand.

The tests to run and the reports are handled via the following page. I regularly announce on my blog when a test day is planned.

If you are up for the adventure, the images are available via Torrent or from the official site.

If you already have Fedora Linux 36 or 35 on your machine, you can upgrade to the Beta. It amounts to one big update; your applications and data are preserved.

In both cases, we recommend backing up your data beforehand.

If you hit a bug, remember to reread the documentation for reporting issues on Bugzilla, or contribute to the translation on Weblate. Don't forget to check the already known bugs for Fedora 37.

Happy testing, everyone!

Nightly syslog-ng container images

Posted by Peter Czanik on September 13, 2022 10:51 AM

The syslog-ng team started publishing container images many years ago. For quite a while, it was a manual process, however, a few releases ago, publishing a container image became part of the release process. Recently, nightly container images have also become available, so you can test the latest features and bug fixes easily.

The syslog-ng images are still available under the Balabit namespace on Docker Hub. Balabit was bought by One Identity almost five years ago, and we stopped using the old company name years ago. However, there are many scripts, blogs and even books that reference this location, so changing it would cause problems for many people. Not to mention that moving to a new location would reset the download counter :-) Close to 50 million pulls over the years!

From this blog you can learn about the various container images the syslog-ng team provides. You can find basic information about how to use the syslog-ng container image at https://hub.docker.com/r/balabit/syslog-ng, where you can learn how to use your own syslog-ng configuration or open ports for syslog-ng.

<figure><figcaption>

syslog-ng logo

</figcaption> </figure>

You can read the blog at https://www.syslog-ng.com/community/b/blog/posts/nightly-syslog-ng-container-images

Announcing the release of Fedora Linux 37 Beta

Posted by Fedora Magazine on September 13, 2022 12:12 AM

The Fedora Project is pleased to announce the immediate availability of Fedora Linux 37 Beta, the next step towards our planned Fedora Linux 37 release at the end of October.

Download the prerelease from our Get Fedora site:

Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for specific use cases like Computational Neuroscience.

Beta Release Highlights

Fedora Workstation

Fedora 37 Workstation Beta includes a beta release of GNOME 43. (We expect the final GNOME 43 release in a few weeks.) GNOME 43 includes a new device security panel in Settings, providing the user with information about the security of hardware and firmware on the system. Building on the previous release, more core GNOME apps have been ported to the latest version of the GTK toolkit, providing improved performance and a modern look. 

Other updates

The Raspberry Pi 4 is now officially supported in Fedora Linux, including accelerated graphics. In other ARM news, Fedora Linux 37 Beta drops support for the ARMv7 architecture (also known as arm32 or armhfp).

We are preparing to promote two of our most popular variants, Fedora CoreOS and Fedora Cloud Base, to Editions. Fedora Editions are our flagship offerings targeting specific use cases.

In order to keep up with advances in cryptography, this release introduces a TEST-FEDORA39 policy that previews changes planned for Fedora Linux 39. The new policy includes a move away from SHA-1 signatures.
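As a sketch of how opting in looks (commands from the update-crypto-policies tool, run as root on a Fedora 37 system; the exact policy name comes from the announcement above):

```shell
# Opt in to the preview policy, check it took effect, then revert.
update-crypto-policies --set TEST-FEDORA39
update-crypto-policies --show    # should report TEST-FEDORA39
update-crypto-policies --set DEFAULT
```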

Of course, there’s the usual update of programming languages and libraries: Python 3.11, Perl 5.36, Golang 1.19, and more!

Testing needed

Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the test mailing list or in the #quality channel on Matrix (bridged to #fedora-qa on Libera.chat). As testing progresses, we track common issues on Ask Fedora.

For tips on reporting a bug effectively, read how to file a bug.

What is the Beta Release?

A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora Linux users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora Linux, but the Linux ecosystem and free software as a whole.

More information

For more detailed information about what’s new on Fedora Linux 37 Beta release, you can consult the Fedora Linux 37 Change set. It contains more technical information about the new packages and improvements shipped with this release.

Sourceware as Conservancy member project

Posted by Mark J. Wielaard on September 12, 2022 03:39 PM

Sourceware

Last month the Sourceware overseers started a discussion with the projects hosted on Sourceware and the Software Freedom Conservancy (SFC) to become a Conservancy member project (which means Conservancy would become the fiscal sponsor of Sourceware). After many positive responses the SFC’s Evaluations Committee has voted to accept Sourceware.

Software Freedom Conservancy