Fedora People

PHP version 8.1.23 and 8.2.10

Posted by Remi Collet on September 01, 2023 05:37 AM

RPMs of PHP version 8.2.10 are available in remi-modular repository for Fedora ≥ 36 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in remi-php82 repository for EL 7.

RPMs of PHP version 8.1.23 are available in remi-modular repository for Fedora ≥ 36 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in remi-php81 repository for EL 7.

The modules for EL-9 are available for x86_64 and aarch64.

No security fix this month, so no update for version 8.0.30.

PHP version 7.4 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.2 installation (simplest):

dnf module reset php
dnf module enable php:remi-8.2
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php82
yum update

Parallel installation of version 8.2 as Software Collection:

yum install php82

Replacement of default PHP by version 8.1 installation (simplest):

dnf module reset php
dnf module enable php:remi-8.1
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php81
yum update php\*

Parallel installation of version 8.1 as Software Collection:

yum install php81

And soon in the official updates:

To be noticed:

  • EL-9 RPMs are built using RHEL-9.2
  • EL-8 RPMs are built using RHEL-8.8
  • EL-7 RPMs are built using RHEL-7.9
  • intl extension now uses libicu72 (version 72.1)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.8, instead of the outdated system library)
  • oci8 extension now uses Oracle Client version 21.10
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php80 / php81 / php82)

PHP on the road to the 8.3.0 release

Posted by Remi Collet on August 31, 2023 11:50 AM

Version 8.3.0 Release Candidate 1 is released. It now enters the stabilisation phase for the developers, and the test phase for the users.

RPMs are available in the php:remi-8.3 stream or in the remi-php83 repository for Enterprise Linux 7 (RHEL, CentOS), and as a Software Collection in the remi-safe repository (or remi for Fedora).

 

The repository provides development versions which are not suitable for production usage.

Also read: PHP 8.3 as Software Collection

Installation: follow the Wizard instructions.

Replacement of default PHP by version 8.3 installation, module way (simplest way on Fedora, EL-8 and EL-9):

dnf module reset php
dnf module install php:remi-8.3
dnf update

Replacement of default PHP by version 8.3 installation, repository way (simplest way on EL-7):

yum-config-manager --enable remi-php83
yum update php\*

Parallel installation of version 8.3 as Software Collection (recommended for tests):

yum install php83

To be noticed:

  • EL-9 RPMs are built using RHEL-9.2
  • EL-8 RPMs are built using RHEL-8.8
  • EL-7 RPMs are built using RHEL-7.9
  • a lot of extensions are also available, see the PHP extension RPM status page and the PHP version 8.3 tracker
  • follow the comments on this page for updates until the final version
  • proposed as a Fedora 40 change

Information, read:

Base packages (php)

Software Collections (php83)

Kiwi TCMS 12.6.1

Posted by Kiwi TCMS on August 31, 2023 06:03 AM

We're happy to announce Kiwi TCMS version 12.6.1!

IMPORTANT: This is a small release which contains several improvements, bug fixes and new translations!

You can explore everything at https://public.tenant.kiwitcms.org!

Supported upgrade paths:

5.3   (or older) -> 5.3.1
5.3.1 (or newer) -> 6.0.1
6.0.1            -> 6.1
6.1              -> 6.1.1
6.1.1            -> 6.2 (or newer)

---

Upstream container images (x86_64):

kiwitcms/kiwi   latest  c2a9b82871d9    598MB

IMPORTANT: version tagged and multi-arch container images are available only to subscribers!

Changes since Kiwi TCMS 12.5

Improvements

  • Update allpairspy from 2.5.0 to 2.5.1
  • Update django from 4.2.3 to 4.2.4
  • Update mysqlclient from 2.1.1 to 2.2.0
  • Update uwsgi from 2.0.21 to 2.0.22
  • Update pygments from 2.15.1 to 2.16.1
  • Update psycopg2 from 2.9.6 to 2.9.7
  • Update node_modules/datatables.net-buttons from 2.3.6 to 2.4.1
  • Update node_modules/markdown from 3.4.3 to 3.4.4
  • Update node_modules/word-wrap from 1.2.3 to 1.2.4
  • Update documentation for JIRA integration
  • Clarify the django-ses add-on mentioned in documentation
  • Add a button to delete URLs from test executions. Fixes Issue #2936
  • Show traceback info during IssueTracker health-check to make it easier to debug problems

API

  • Define IssueTracker.rpc_credentials property to make it easier to override credentials for IssueTracker integrations

Settings

  • Allow overriding IssueTrackerType.rpc_credentials via the EXTERNAL_ISSUE_RPC_CREDENTIALS setting

Bug fixes

  • Hide all expanded child rows in TestPlan Search page. Fixes Issue #3245 (@somenewacc)
  • Fix wrong query parameter on DASHBOARD page (@somenewacc)
  • Fix template variable for form fields in search pages (@somenewacc)
  • Prevent multiplication of callbacks for data tables (@somenewacc)
  • Don't fail IssueTracker health-check if we didn't use OpenGraph
  • Reorder items under SEARCH menu for consistency with items under the TESTING menu. Fixes Issue #3315

Refactoring and testing

  • Update node_modules/eslint from 8.44.0 to 8.48.0
  • Update node_modules/eslint-plugin-import from 2.27.5 to 2.28.1
  • Update node_modules/eslint-plugin-n from 16.0.1 to 16.0.2
  • Update node_modules/webpack from 5.88.1 to 5.88.2
  • Fix exception when no history objects found in TestExecutionFactory
  • Move append items to list definition
  • Provide /usr/lib64/pkgconfig/mariadb.pc inside buildroot
  • Remove unused translation string in ar_SA locale

Translations

Kiwi TCMS Enterprise v12.6.1-mt

  • Based on Kiwi TCMS v12.6.1

  • Update dj-database-url from 2.0.0 to 2.1.0

    Private images:

    quay.io/kiwitcms/version            12.6.1 (aarch64)        323f49dbe0f8    31 Aug 2023     607MB
    quay.io/kiwitcms/version            12.6.1 (x86_64)         c2a9b82871d9    31 Aug 2023     598MB
    quay.io/kiwitcms/enterprise         12.6.1-mt (aarch64)     34a63fa8e979    31 Aug 2023     860MB
    quay.io/kiwitcms/enterprise         12.6.1-mt (x86_64)      dbf819ed00cc    31 Aug 2023     849MB
    

IMPORTANT: version tagged, multi-arch and Enterprise container images are available only to subscribers!

How to upgrade

Backup first! Then execute the commands:

cd path/containing/docker-compose/
docker-compose down
docker-compose pull
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py upgrade

Refer to our documentation for more details!

Happy testing!

---

If you like what we're doing and how Kiwi TCMS supports various communities please help us grow!

Red Hat Certified Specialist in Managing Automation with Ansible Automation Platform

Posted by Fabio Alessandro Locati on August 31, 2023 12:00 AM
A few weeks ago, I passed the Red Hat EX467 exam, which allowed me to become a Red Hat Certified Specialist in Managing Automation with Ansible Automation Platform. As of today, this is the newest Red Hat exam on Ansible. You can notice this from the version of Ansible Automation Platform that this exam uses: 2.2. An aspect that is already clear by looking at the objectives is that this exam is completely complementary to the EX294 exam.

Building a Kernel RPM with the Builtin Makefile target

Posted by Adam Young on August 30, 2023 03:35 PM

Note that you need to have a .config file that will be included in the build. It will also use the Version as specified in your Makefile. Then run

make rpm-pkg

This will use the RPM build infrastructure set up for your user and put the RPM in $HOME/rpmbuild/.
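
A minimal sketch of the whole workflow, assuming you seed the configuration from the running kernel (the paths below are illustrative, not from the original post):

cp /boot/config-$(uname -r) .config    # seed the build configuration (assumed starting point)
make olddefconfig                      # accept defaults for any options new to this tree
make rpm-pkg                           # build the source and binary RPMs
ls $HOME/rpmbuild/RPMS/$(uname -m)/    # the resulting packages land here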

Developing a syslog-ng configuration

Posted by Peter Czanik on August 29, 2023 11:43 AM

This year I started publishing a syslog-ng tutorial series both on my blog and on YouTube: https://peter.czanik.hu/posts/syslog-ng-tutorial-toc/ And while the series was praised as the best possible introduction to syslog-ng, viewers also mentioned that one interesting element is missing from it: namely, it does not tell users how to develop a syslog-ng configuration.

So, in this blog, learn how to develop a syslog-ng configuration from the ground up! I will explain not just the end result, but also the process and the steps to take to develop a configuration. It starts with a single source and destination, then concludes with a conditional log path and sending parsed and enriched logs to Elasticsearch (or a compatible document store).

You can read it at https://www.syslog-ng.com/community/b/blog/posts/developing-a-syslog-ng-configuration

syslog-ng logo

New badge: F38 Kernel Test Day Participant !

Posted by Fedora Badges on August 29, 2023 07:27 AM
F38 Kernel Test Day Participant: You helped with kernel testing in Fedora 38

Unix sockets, Cygwin, SSH agents, and sadness

Posted by Matthew Garrett on August 29, 2023 06:57 AM
Work involves supporting Windows (there's a lot of specialised hardware design software that's only supported under Windows, so this isn't really avoidable), but also involves git, so I've been working on extending our support for hardware-backed SSH certificates to Windows and trying to glue that into git. In theory this doesn't sound like a hard problem, but in practice oh good heavens.

Git for Windows is built on top of msys2, which in turn is built on top of Cygwin. This is an astonishing artifact that allows you to build roughly unmodified POSIXish code on top of Windows, despite the terrible impedance mismatches inherent in this. One is that until 2017, Windows had no native support for Unix sockets. That's kind of a big deal for compatibility purposes, so Cygwin worked around it. It's, uh, kind of awful. If you're not a Cygwin/msys app but you want to implement a socket they can communicate with, you need to implement this undocumented protocol yourself. This isn't impossible, but ugh.

But going to all this trouble helps you avoid another problem! The Microsoft version of OpenSSH ships an SSH agent that doesn't use Unix sockets, but uses a named pipe instead. So if you want to communicate between Cygwinish OpenSSH (as is shipped with git for Windows) and the SSH agent shipped with Windows, you need something that bridges between those. The state of the art seems to be to use npiperelay with socat, but if you're already writing something that implements the Cygwin socket protocol you can just use npipe to talk to the shipped ssh-agent and then export your own socket interface.
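
For reference, the commonly documented socat/npiperelay bridge looks something like the following sketch (based on npiperelay's documented WSL-style usage; the socket path is illustrative, and the named pipe is the one used by the Windows OpenSSH agent):

# Relay a Unix socket to the Windows ssh-agent named pipe
socat UNIX-LISTEN:/tmp/ssh-agent.sock,fork EXEC:"npiperelay.exe -ei -s //./pipe/openssh-ssh-agent",nofork &
export SSH_AUTH_SOCK=/tmp/ssh-agent.sock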

And, amazingly, this all works? I've managed to hack together an SSH agent (using Go's SSH agent implementation) that can satisfy hardware backed queries itself, but forward things on to the Windows agent for compatibility with other tooling. Now I just need to figure out how to plumb it through to WSL. Sigh.


Misconceptions About Immutable Distributions

Posted by TheEvilSkeleton on August 29, 2023 12:00 AM

What Is an Immutable Distribution?

“Immutable” is a fancy way of saying unchangeable. So, an immutable distribution is essentially an unchangeable distribution, i.e. read-only. However, in most cases, the entirety of the distribution is not immutable. Some directories, like /tmp, are writable, as some directories need write capabilities to actually function. Some distributions may even have other directories, such as /etc, writable for extra functionality.

To redefine, an immutable distribution is when, at a minimum, only the core of the distribution is read-only, and the rest is read-write.

Note: The more accurate terms are “reprovisionable” and “anti-hysteresis”. However, since “immutable” is more commonly used, I’ll use it instead throughout this article. Further reading: “Immutable” → reprovisionable, anti-hysteresis by Colin Walters.

Technical Definitions

There are a few technical definitions that should be understood before reading the rest of the article:

  • Rollback (rolling back): The process of undoing a change or update. For example, you upgrade your system to the latest version, which introduces bugs. You can then revert that update and go back to the version that worked well.
  • Reproducibility: The process of delivering identical software regardless of the time or system used. Suppose you install a distribution on hundreds of laptops and desktops over the course of a week. The distribution you install on the first desktop (on the first day) will be exactly the same as the last laptop (on the last day). That is, all systems will be identical. You can think of it as a perfect clone.
  • Atomicity (atomic updates): The process of ensuring that a change (update, downgrade, etc.) is either fully applied or aborted. For example, your system shuts down in the middle of an upgrade due to a power outage. Instead of having a half-updated system, the update is fully aborted, meaning that your system was unchanged, just like before you started the update.

Concept Versus Implementations

It’s really easy to confuse the concept and the implementation when discussing immutability.

As we noted above, an immutable distribution is when the core of the distribution is read-only — that’s it. You (or someone else) decide to implement that concept as you see fit. You could have a locked down and inflexible system, like GNOME OS,1 where there isn’t even a system package manager; or you could have something that is designed to be flexible, like NixOS, where you can change the desktop environment, the kernel, the boot loader, and so on.

You can think of an immutable distribution as any traditional distribution. Some traditional distributions are locked down and inflexible, such as Fedora Workstation, where removing or even disabling SELinux can cause problems. Fedora Workstation also prevents you from accidentally removing core packages like sudo. Meanwhile, some traditional distributions are designed to be flexible, such as Gentoo Linux, where you can change almost anything in your system.

Misconceptions

There are several misconceptions both for and against immutable distributions that I would like to address.

I’m going to use pseudo quotes, which means I will be quoting and refining arguments I’ve interpreted from debates and media.

“Needing to Reboot”

Due to the read-only nature of immutable distributions, applying changes (updates, installs, etc.) absolutely requires a reboot.

The need for reboots varies per implementation. With Endless OS, you must reboot to apply updates. On Fedora Silverblue, you can run sudo rpm-ostree apply-live in the terminal to apply changes while the system is booted. On NixOS, you don’t need to reboot at all.

Even then, rebooting your system after updating it is really important to keep it running optimally. As the article Restarting and Offline Updates explains: “The Linux Kernel can change files without restarting, but the services or application using that file don’t have the same luxury. If a file being used by an application changes while the application is running then the application won’t know about the change. This can cause the application to no longer work the same way. As such, the adage that ‘Linux doesn’t need to restart to update’ is a discredited meme. All Linux distributions should restart.”

Many immutable distributions focus on reboots when performing system-level updates, because in many cases a reboot is crucial for stability. That’s why Android, iOS, macOS, Windows, ChromeOS, etc. require a reboot.

“Difficult to Use”

Since immutable distributions are locked down, inflexible, and often need workarounds for various issues, they’re more difficult to use.

Difficulty varies per implementation and users’ priorities. For me, Fedora Silverblue is really easy to use, because it provides a great experience by default;2 I want something that is reliable, gets out of my way as much as possible, and follows closest with my political views (I value free software, inclusiveness in the community, and progressing the Linux desktop), which Fedora Silverblue is one of the few that provides.3 However, NixOS isn’t for me, because I’m not a power user.

Endless OS is used by many educational institutions, because these institutions typically need a system that is simple, accessible and secure. On the contrary, NixOS is a powerful system typically used by power users and organizations that absolutely need the features it offers.

However, it’s worth noting that, no matter the distribution, immutable or not, workarounds may be needed. Nothing is perfect, but there are many cases when external developers refuse to collaborate. For example, many drivers are unavailable in the Linux kernel, so we’re sometimes forced to inconveniently install these drivers manually. While a distribution like Arch Linux has the AUR, most other traditional distributions do not have the luxury of installing obscure drivers. In short, this isn’t a fundamental problem with immutable distributions; it’s a problem that you have with every distribution.

“Inflexible by Nature”

Due to the read-only nature of immutable distributions, users cannot tinker with their system.

Again, this heavily depends on the implementation. If we look at traditional distributions, Ubuntu is not designed to be tinkered with, nor is Fedora Linux. As such, these distributions are prone to breaking after a significant amount of modification.

Meanwhile, NixOS, an immutable distribution, empowers the user and allows them to change many aspects of their system. Don’t like GRUB? Use systemd-boot, rEFInd or something else. You want to add a Mandatory Access Control? Use SELinux. Don’t like SELinux? Use AppArmor or something else. Don’t like using GNOME? Use Plasma, XFCE, MATE, Cinnamon, Sway or any desktop environment/window manager. Don’t like the fact that you’re using an LTS version of NixOS? Just switch to an unstable or even a bleeding edge channel. What’s that, you want to go back to LTS? You can. Want to mix and match packages from different channels? Go for it… You could view NixOS as Arch Linux on steroids.

Really, some traditional and immutable distributions are locked down and inflexible, while others are flexible. This heavily varies per implementation.

“Immutable Distributions Are More Secure”

Immutable distributions provide better security, because […]

I truly believe that statements like these create a false sense of security because “security” can be interpreted differently by different people. Allow me to identify the meanings of “security” and address each of them:

“Security as a Safety Net”

You can roll back or prevent damage in case an error occurs. For example, your system shuts down in the middle of an update due to a power outage. However, thanks to the system’s atomicity, nothing was changed on the system, thereby allowing you to boot back into the system without a problem.

Atomicity could be replicated on traditional distributions if someone put in the effort.

If the system updates but runs into problems, you can roll back from your bootloader.

Rollbacks can be implemented on traditional distributions. Linux Mint and openSUSE Tumbleweed/Leap are some implementors of this feature.
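
As a sketch of what that looks like in practice (both commands are real, but which one applies depends on the distribution):

rpm-ostree rollback     # OSTree-based systems such as Fedora Silverblue
sudo snapper rollback   # Btrfs-snapshot-based systems such as openSUSE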

“Security as a Measure to Prevent Malicious Actors From Compromising Your System”

Thanks to the read-only nature of immutable distributions, they prevent malicious actors from doing more damage than traditional distributions.

While immutable distributions might prevent malicious actors from doing certain things on your system, the bigger problem is that your personal data is at risk. Whether the actor can use sudo is less of an issue than the fact that your GPG keys, personal media and documents, or other sensitive data are easily accessible once compromised.

Whether your system is immutable or not, there are still really important steps you should be taking to make your system more secure. For example, you can use LUKS2 to encrypt your drive(s) to protect your data from physical access, and you can use Flatpak (with Flatseal) to make sure that your data is better protected from malicious/compromised apps. These steps are highly beneficial and should not be taken lightly!
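
For instance, tightening a Flatpak app's sandbox takes a single override; the application ID below is a hypothetical placeholder:

# Revoke broad home-directory access from one app (app ID is hypothetical)
flatpak override --user --nofilesystem=home org.example.App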

“Security as a Measure to Prevent Users From Breaking Their Own Systems”

Thanks to the read-only nature of immutable distributions, even the most destructive actions, like sudo rm -rf --no-preserve-root /* won’t destroy the system.

While running such commands might prevent total destruction, they can still do enough damage to make the benefit meaningless. In this example, running sudo rm -rf --no-preserve-root /* will delete your $HOME (user) directory if you wait long enough, since the user directory is mutable; depending on the implementation, it could delete your boot partition as well. These commands are destructive in other ways on immutable distributions, because the vulnerable parts are precisely the mutable locations. Only the immutable parts of the system are unaffected; the rest is just as vulnerable as any writable location in a traditional distribution.

Immutability Is an Implementation Detail

Immutable and traditional distributions can literally share the same benefits and drawbacks, because, in reality, whether the core of the distribution is read-only or read-write is a minor detail. An immutable distribution could be as unstable as Manjaro and rely on distribution packages, meanwhile a traditional distribution could be as stable as Endless OS and fully embrace containers, such as Flatpak. A traditional distribution could be as locked down and inflexible as GNOME OS, meanwhile an immutable distribution could be as flexible as Gentoo Linux (theoretically).

Immutable distributions are different from each other, the same way traditional distributions are different from each other as well. For example, Vanilla OS utilizes ABRoot, whose concept is inspired from Android’s A/B system updates. Fedora Silverblue and Endless OS utilize OSTree, which is similar to git in paradigm. MicroOS utilizes btrfs’s features. NixOS does, uhh… yes.

While immutable and traditional distributions are presented as different in categories, they have very little difference in nature. It’s best to treat immutable distributions like traditional distributions, rather than a whole different paradigm. It’s up to the maintainers to decide what they want to do with their distribution.

Immutability Relieves Maintenance Burden

Okay, as explored above, if the “benefits” are not benefits that are exclusive to immutable distributions and can be applied to traditional distributions, then why do immutable distributions exist in the first place? Why not just implement atomicity, reproducibility, and rollback functionalities to a traditional distribution?

Simply put, immutable distributions make it much easier for maintainers to focus on their high-level goals, because immutability eases the burden of maintenance.

The benefits are indirect for users, but direct for maintainers. Maintainers have fewer things to worry about than in traditional distributions, which allows them to spend their precious time and resources on the things they actually want to worry about, such as stability, reproducibility, providing a safety net, and so on. As a result, users get a more robust system.

This is why immutable distributions are generally considered “safer” than traditional distributions, even though the concept is not. So far, maintainers have put more effort into making their immutable distribution robust, as opposed to traditional distributions, which usually have to worry about the whole mutable aspect. This benefits immutable distributions greatly, because they don’t have to worry about solving the problems that traditional distributions typically run into.

Similarly, many developers prefer to officially support Flatpak over traditional packaging formats, because they only have to worry about one format as opposed to hundreds. As a result, they have more time and energy to focus on achieving their real goals, instead of focusing on tedious repetitive tasks and supporting a wider range of combinations (packaging format, dependencies version and patches, etc.).

All of these features (reproducibility, atomicity, and rollback) can also be implemented in traditional distributions. However, these features are typically treated as first-class features in immutable distributions, even though they are not mandated, whereas they are second-class features in most traditional distributions that support them. Immutability greatly reduces the maintenance overhead and leaves more room to improve the things maintainers want to improve. Immutable distributions have recently gained traction, and are already widely used in educational and personal systems.

Conclusion

To summarize, an immutable distribution is when the core of the distribution is read-only. Anyone is free to grab that concept and implement a distribution with it. The “benefits” and “inconveniences” that we tend to see with immutable distributions could be applied to traditional distributions, but immutable distributions tend to make it easier to accomplish the high-level goals, because they are easier to maintain.

In the end, immutability should be seen as an implementation detail and philosophical/political decision, rather than a purely technical one, because traditional and immutable distributions can literally share the same benefits and drawbacks.

Footnotes

  1. GNOME OS is used for development purposes, so it doesn’t need a package manager. 

  2. Except its downsides

  3. I’m currently looking into Vanilla OS and am thinking about switching to it once 2.0 is released.

The Investor's Curse

Posted by ! Avi Alkalay ¡ on August 28, 2023 05:34 PM

A newsletter from Nord (an investment advisory firm) that didactically analyzes NVIDIA stock, comparing it with Apple over recent decades and with Sun Microsystems. Spoiler: the analyst does not recommend buying, because NVIDIA's stock is trading at 40× the company's sales.

An investor is a person who lives unhappily. They lament not having a time machine to go back and buy Bitcoin, Apple, Nvidia, Yahoo. Or, when they get it right, having invested almost by luck in something that skyrocketed, they lament having bought so little when the price was so low.

An update on GNOME Settings

Posted by Felipe Borges on August 28, 2023 11:39 AM

There’s no question that GNOME Settings is important to the overall GNOME experience and I feel flattered to share the responsibility of being one of its maintainers. I have been involved with Settings for almost a decade now, but only in the last few months have I started to wear the general maintainer hat “officially”.

That’s why I am compelled to update our community on the current state of the project. Settings is also co-maintained by Robert Ancell who has been doing great work with reviews and also helping us improve our code readability/quality.

The last general update from Settings you might have heard of was Georges’ Maintainership of GNOME Settings Discourse post. Some of what’s written there still holds true: Settings is one of the largest modules in GNOME, and being this hub connecting the Shell, the settings daemons, network manager, portals, cups, etc., it needs more maintainers, especially maintainers with domain expertise. We have a handful of active contributors doing great frontend/UI work, but we lack active contributors with expertise in the deep dungeons of networking or color management, for example.

To tackle this issue, one of my goals is to improve the developer experience in GNOME Settings to attract new contributors and to enable drive-by contributors to post changes without struggling much with the process. For that, I kickstarted our Developer documentation. It is in an early stage now and welcoming contributions.

I also have been invested in fixing some of our historical UI consistency problems. A lot has been done in the gnome-44 and gnome-45 cycles to adopt the latest design patterns from the GNOME Human Interface Guidelines with libadwaita and modern GTK. Alice Mikhaylenko and Christopher Davis did an outstanding job with the ports to modern Adwaita navigation widgets. We also gained a new “About” panel that can condense more information that is useful especially for debugging/supporting issues. There’s still work to be done on this front especially with certain views that are currently looking a bit out of place in comparison to modern views.

Screenshot of the new "About" panel.

The new Privacy hub is a “hub” panel introduced by Marco Melorio in gnome-45 that is our initial step towards reducing the overall number of panels.

Screenshot of the new "Privacy" panel.

For GNOME 46 we want to introduce a new “System” hub panel, developed by our Google Summer of Code intern Gotam Gorabh, as well as a new “Network & Internet” panel that is already being worked on by contributor Inam Ul Haq. These are two epics that involve reworking some complicated panels, such as the Wifi/Network and User Accounts ones. These panels should also see a big frontend rework in the gnome-46 cycle, and I plan to work on that myself.

Also a big thank you to Allan Day, Jakub Steiner, Tobias Bernard, Sam Hewitt, and other folks doing outstanding design and UX work for Settings.

GNOME 45.0 (stable) will be released in September, shipping plenty of new stuff and bugfixes. It would be extremely helpful if you could test the latest changes and report issues and regressions in our issue tracker. GNOME Settings 45.rc has been released and should be available soon in GNOME OS and unstable/development distro releases such as Fedora Rawhide.

If you want to get involved, feel free to join our Matrix chat channel and ask questions there. I also monitor the “settings” Discourse tag, where you can ask support questions and suggest features.

Week 34 in Packit

Posted by Weekly status of Packit Team on August 28, 2023 12:00 AM

Week 34 (August 22nd – August 28th)

  • We have fixed a bug in packit source-git init caused by the changed behaviour in the newer version of rpmbuild. (packit#2048)
  • We have fixed an issue in our API endpoint that could cause DoS until manual intervention from our team. (packit-service#2164)
  • We have fixed a bug causing broken retriggering of Github checks. (packit-service#2161)
  • SRPM build commit statuses for monorepo projects are now being correctly updated. (packit-service#2157)
  • We have fixed the bug resulting in incorrect reporting for tests when retriggering a build of a different target that was not configured for tests. (packit-service#2144)
  • We have fixed an issue that caused retriggers of Testing Farm to fail, if you specified any labels in the comment and had one or more test job definitions without any labels specified. (packit-service#2156)
  • Macro definitions and tags gained a new valid attribute. A macro definition/tag is considered valid if it doesn't appear in a false branch of any condition appearing in the spec file. (specfile#276)

Episode 390 – Rust shipping binaries doesn’t matter

Posted by Josh Bressers on August 28, 2023 12:00 AM

Josh and Kurt talk about a blog post that explains how C and C++ compilers prioritize performance over correctness. This is the classic story of security vs usability. Security is never the primary goal. If a security requirement doesn’t also enable other business goals, it will fail. We also touch on the news of a Rust package containing binary files. It doesn’t really have anything to do with security; it’s all about convenience.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_390_Rust_shipping_binaries_doesnt_matter.mp3

Show Notes

Handling Inbox spam

Posted by ! Avi Alkalay ¡ on August 27, 2023 02:25 PM

I’m a serial unsubscriber — absolutely ruthless when it comes to keeping my inbox in order. If I get a new ad or newsletter in my inbox, I immediately scroll to the end of it to click on the tiny “unsubscribe” link. I admit I take great pleasure in doing this without even seeing the ad.

I also never ever give my personal e-mail address in account registrations. Instead, if an e-mail address is absolutely necessary, I have my own mail relay service that creates a unique address for each account registration. If you think about it, real people do not communicate through e-mail anymore, only chat nowadays. You keep your e-mail address to communicate mostly with companies only, not friends and family.

I also stopped using social login buttons (technology known as OAuth). They are pretty convenient, but also very efficient in broadcasting your e-mail address everywhere you use them.

Hiding your e-mail address (and serially unsubscribing from ads) is good not just for reducing clutter in your inbox, but also for protecting it from theft. When a service with which you have an account suffers a data leak, only the random fake/relay/unique e-mail address used on that specific service is compromised, not your real one.

I run my own e-mail relay, but I know that Mozilla and iCloud provide this functionality as a paid service. Who else?

And before you ask, of course I don’t have to deal with hundreds of e-mail addresses nor text files full of passwords to copy and paste. Password managers do this for me almost transparently and in a very secure way. You can see them in action when your password is automatically filled in login forms. They are part of all Apple platforms, Google Chrome, Mozilla Firefox, Microsoft Edge, Linux desktops and also — now unnecessary and obsolete — third party apps such as 1Password. I’ve used them all and Apple iCloud Keychain is the most complete, advanced, well synchronized and seamless.

<figure class="wp-block-image size-large"><figcaption class="wp-element-caption">Apple iCloud Keychain synchronized across multiple devices</figcaption></figure>

Also, if you use GMail and your address is myname@gmail.com, you can create variations such as myname+someword@gmail.com. This is useful but also weak, because it doesn’t hide your real address, and it is sometimes problematic because of the ‘+’ character.

A relaying e-mail address that hides your real address looks like “lots-of-cryptic-chars@relay.somedomain.com”.

Also on my LinkedIn.

Contribute at the Test Week for the Anaconda WebUI Installer for Fedora Workstation

Posted by Fedora Magazine on August 25, 2023 08:00 PM

The Workstation team is working on the final integration of Anaconda WebUI Installer for Fedora Linux Workstation. As a result, the Fedora Workstation Working Group and QA teams have organized a test week from Monday, Aug 28, 2023 to Monday, Sept 04, 2023. The wiki page in this article contains links to the test images you’ll need to participate. Please continue reading for details.

How does a test week work?

A test week is an event where anyone can help ensure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the Anaconda WebUI test week has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the days of the event, please do some testing and report your results. We have a document which provides all the necessary steps.

Fedora meets RHEL: upgrading UBI to RHEL

Posted by Debarshi Ray on August 25, 2023 07:41 PM

Six years ago

As part of our efforts to make Fedora Workstation more attractive for developers, particularly those building applications that would be deployed on Red Hat Enterprise Linux, we had made it easy to create gratis, self-supported Red Hat Enterprise Linux virtual machines inside GNOME Boxes. You had to join the Red Hat Developer Program by creating an account on developers.redhat.com and with that you not only had gratis, self-supported access to RHEL, but also a series of other products like Red Hat JBoss Middleware, Red Hat OpenShift Container Platform and so on.


A few years later

Fedora Silverblue became a thing, and so did Toolbx. So, we decided to take this one step further.

Toolbx has had support for Red Hat Enterprise Linux for the past two and a half years. This means that on RHEL hosts, Toolbx sets up a RHEL container that has access to all the software and support that the host is entitled to. On hosts that aren’t running RHEL, if you want, it will set up a gratis, self-supported container based on the Red Hat Universal Base Image, which is a limited subset of RHEL:

$ toolbox create --distro rhel --release 9.2

However

This works well only as long as you are running a Red Hat Enterprise Linux host. Otherwise, you quickly run into the limitations of the Red Hat Universal Base Image, which is designed for distributing server-side applications, and not so much for persistent interactive CLI environments. For example, you won’t enjoy hacking on GNOME components like GTK, Settings, Shell or WebKitGTK in it because of the sheer amount of missing dependencies.

Today is glorious

You can now have gratis, self-supported Red Hat Enterprise Linux Toolbx containers on Fedora hosts that have access to the entire set of RHEL software, beyond the limited subset offered by UBI.

As always, you need to join the Red Hat Developer Program. Make sure that your account has simple content access enabled, by logging into access.redhat.com and then clicking the subscriptions link at the very top left.


Install subscription-manager on your Fedora host to register it with your Red Hat Developer Program account, create a RHEL Toolbx container as before, and you are off to the races!
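
A minimal sketch of those steps on a Fedora host (the register step prompts for your Red Hat Developer Program credentials):

sudo dnf install subscription-manager       # install the registration tooling
sudo subscription-manager register          # register the host with your account
toolbox create --distro rhel --release 9.2  # then create the RHEL container as before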

Thanks to Pino Toscano for helping me iron out all the wrinkles in the pipeline to get everything working smoothly.


Docs workshop: Virtually writing together

Posted by Fedora Magazine on August 25, 2023 07:00 PM

At the Fedora Linux 38 release party, the Docs team suggested that we take advantage of a virtual meetup to bring teamwork into documentation writing. Documentation writing shouldn’t be a solitary pursuit.

An interactive session at Flock 2023 helped exchange ideas on a collaborative way to run meetings and invite more contributions for documentation.

After months of waiting for ideas to be finalized, the Docs team is pleased to announce that the workshop will begin in September 2023.

If you fancy coming along, just let us know your preferred timeslot in the When-is-good scheduler by September 15, 2023.

But why and how?

The idea behind a virtual writing session is to combine the power of the Fedora Podcast with advocacy of writing and maintaining excellent user documentation. Here is why.

  • Documentation in any free and open source software project provides reasons for users and contributors to stay loyal to the project and software.
  • The Docs workshop aims to facilitate individual and collaborative work through a supportive community of documentarians.
  • Documentation is more than fixing its visual presentation: we’re writing, reviewing, and deploying docs.
  • In accordance with the Fedora project motto “First”, we like to try new things in toolset, automation, and UI improvement.

Building on feedback from each session, the Docs team wants to empower people to learn about templates, issue tickets, review processes, and tool chains to improve documentation for Fedora Linux users and contributors.

Program agenda

A monthly agenda will be posted in Fedocal and Fosstodon (@fedora@fosstodon.org).

Track 1: Introduction and onboarding (odd months)
– What the Docs team is all about. What role will interest you?
– The types of user documentation Fedora Linux publishes
– How you can help improve Fedora Documentation.

Track 2: Skill-based workshop (even months)
– Technical review, Git workshop, AsciiDoc template and attributes
– Use of local build and preview script
– Test documentation quality

Format of Track 2
– Demo
– Try it yourself
– Q&A

If you come along to the Track 2 workshop, all you need is a Fedora account and Pagure account with your computer, preferably with Git and Podman (or Docker) installed.

In the meantime, if you have questions, feel free to drop by our Discussion forum. I’m looking forward to saying hello at our first virtual docs workshop someday in late September (the exact date depends on the when-is-good responses)! Let’s do it!

CPE Weekly update – Week 34 2023

Posted by Fedora Community Blog on August 25, 2023 10:00 AM

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on libera.chat.

We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 21 August – 25 August 2023

CPE infographic

Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

The purpose of this team is to take care of the day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
The ARC (which is a subset of the team) investigates possible initiatives that CPE might take on.
Planning board
Docs

Update

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

Updates

The post CPE Weekly update – Week 34 2023 appeared first on Fedora Community Blog.

No matching key found

Posted by Pablo Iranzo Gómez on August 25, 2023 09:42 AM
As you might have experienced, using a recent system to connect to a legacy one can be complicated, as some insecure protocols have been disabled, producing a message like:

Unable to negotiate with 192.168.2.82 port 22: no matching host key type found. Their offer: ssh-rsa,ssh-dss

Create an entry like this in your .ssh/config file, so that insecure methods can be used to connect to a specific host:

Host 192.168.2.82
    HostKeyAlgorithms=+ssh-rsa
    KexAlgorithms=+diffie-hellman-group1-sha1
    PubkeyAcceptedKeyTypes=+ssh-rsa
    User root

or alternatively on the command line:
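
A one-off invocation assembled from the same options would look something like this sketch:

ssh -o HostKeyAlgorithms=+ssh-rsa \
    -o KexAlgorithms=+diffie-hellman-group1-sha1 \
    -o PubkeyAcceptedKeyTypes=+ssh-rsa \
    root@192.168.2.82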

Catch me at Ohio LinuxFest (OLF)

Posted by Joe Brockmeier on August 24, 2023 02:40 PM

Ohio LinuxFest (or OLF these days) is returning to Columbus, Ohio on September 8th and 9th. Happy to announce that I’m going to be doing the Friday keynote, “Open Source Can’t Win.” Wait, you might say, hasn’t open source already won? It’s popular to say that open source has won – but the truth is that open source can never win. Not permanently, at least.

Open source is mainstream, and a popular choice with developers, users, admins, companies, and government. But as open source has grown in popularity, complacency has crept in. Many people have forgotten, or never knew, the “why” of open source and open computing.

Open source communities can’t stand on past success, stand down from educating newcomers, or stop evolving to meet new challenges. And we certainly can’t give an inch on what it means to be open source and not merely source available.

So that’s what I’m going to be talking about. I’m also excited to see Jon “maddog” Hall and Catherine Devlin giving keynotes on Saturday, and some other fantastic speakers like Scott McCarty and Steven Pritchard.

Please join us if you can!

Small Caps in Impress

Posted by Caolán McNamara on August 23, 2023 04:01 PM

Writer supports Small Caps, but Impress and drawing shapes in general never fully supported Small Caps. The option was available, and the character dialog provided a preview, but Small Caps was rendered the same as All Caps, as seen here.

This has lingered for years, and it's not easy as a user to manually work around it with varying font sizes, because underline/overline/strike-through decorations won't link up, as seen in this example:


 but finally for Collabora Hack Week I was able to carve out some time to fix this. So in today's LibreOffice trunk we finally have this implemented as:

In addition a buggy case seen in the preview of double overlines for mixed upper/lower case was also fixed, from:

to:


Also noticed during all of this was the wrong width scaling used for the red wavy line underneath incorrect spelling in Impress when the text is superscript or subscript, so the line didn't stretch over the whole text, as seen here:

Now corrected as:

And finally, a missing implementation was added in the RTF export of shape text, to allow the small caps format to be copied and pasted from Impress into other applications.

Systemd-journald vs. syslog-ng

Posted by Peter Czanik on August 23, 2023 12:35 PM

Even though most people ask me to compare systemd-journald and syslog-ng, I would say that they complement each other. Systemd-journald excels at collecting local log messages, including those of various system services. The focus of syslog-ng is on central log collection and forwarding the logs to a wide variety of destinations after processing and filtering. Combining the two gives you the most flexibility.
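
A minimal sketch of that combination in syslog-ng configuration terms (the central host name is an illustrative placeholder): read from the local journal, forward everything to a central server.

# Collect local messages from systemd-journald
source s_journal { systemd-journal(); };
# Forward to a central syslog-ng server over the IETF syslog protocol
destination d_central { syslog("logs.example.com" transport("tcp") port(601)); };
log { source(s_journal); destination(d_central); };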

Read more at https://www.syslog-ng.com/community/b/blog/posts/systemd-journald-vs-syslog-ng

syslog-ng logo

Call for Mentors and projects for Outreachy December ’23 – March ’24 cohort

Posted by Felipe Borges on August 23, 2023 09:54 AM

The GNOME Foundation is interested in sponsoring up to 3 Outreachy projects for the December-March cohort.

If you are interested in mentoring AND have a project idea in mind, visit GNOME: Call for Outreachy mentors and volunteer and submit your proposal.

We can also use pre-submitted ideas from our Internship project ideas repository.

GNOME has a committee (Allan Day, Matthias Clasen and Sri Ramkrishna) that will triage projects before approval. More information about GNOME’s participation in Outreachy is available at Outreach/Outreachy – GNOME Wiki.

If you have any questions, please feel free to reply to this thread or e-mail soc-admins@gnome.org, which is a private mailing list with the GNOME internship coordinators.

This is a repost from https://discourse.gnome.org/t/deadline-sept-20-2023-call-for-mentors-for-outreachy-december-23-march-24-cohort/16748

Cockpit 299

Posted by Cockpit Project on August 23, 2023 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit 299:

Kdump: Show location of kdump to verify the successful configuration test

You can now see the location where the kdump will be written when testing of the kdump settings is successful.


Storage: Support for no-overprovisioning with Stratis

A Stratis pool can be put into a “no-overprovisioning” mode where the bad effects of running out of space can be avoided via careful management of filesystem sizes. Cockpit now supports this mode.

Storage: Cockpit can now add caches to encrypted Stratis pools

Encrypted caches are a new feature in Stratis 3.5, and you can now use them from Cockpit as well. Read more about Stratis caches in the Stratis release notes.


Try it out

Cockpit 299 is available now:

Qt theming in Fedora Workstation

Posted by Jan Grulich on August 22, 2023 03:47 PM

We have been working on and using custom Qt theming in Fedora Workstation for many years now. By custom Qt theming, I’m talking about the QGnomePlatform and Adwaita-qt projects. If you haven’t heard of them, you can read my recent blog post explaining what they are. While these projects are in some ways better than what Qt upstream has to offer, there were also drawbacks/issues, and that’s why I decided to make a final decision and discontinue both projects. The issues are explained in the aforementioned blog post, but one of the main drawbacks is that we are in this development alone, and not working directly in the upstream makes it less attractive for other contributors. It’s also not used by default anywhere other than Fedora, so it’s not properly tested by other developers working on Qt applications using different distributions. These reasons led me to submit a Fedora 39 feature to remove our custom Qt theming in Fedora Workstation in favor of Qt’s defaults. The only problem is that if we just go with Qt’s defaults, we would go backwards a bit. This is because upstream Qt does not provide any decent client-side window decorations (problem #1), and the QGtkTheme in Qt5 (the QGnomePlatform equivalent) is a bit behind its Qt6 version, which recently gained many improvements and integration goodies (problem #2) from Axel Spoerl of the Qt Group, whom I met during this year’s KDE Akademy.

Solution to problem #1

QGnomePlatform used to be our solution to this problem, as it implemented its own version of the QWaylandAbstractDecoration plugin. This was a GTK 3-like decoration plugin that used Adwaita-qt for button rendering and QGnomePlatform bits (e.g. GSettings configuration) to get the titlebar layout. Since we are going to remove QGnomePlatform, we needed an alternative, so I started working on the QAdwaitaDecorations project. This is supposed to be an intermediate step, as I would like to have proper GNOME/GTK decorations directly in Qt upstream, but since I was in a hurry to get everything done in time for Fedora 39, we have this for now. The QAdwaitaDecorations plugin is based on the decorations we have in QGnomePlatform, but there is no dependency on GTK or Glib (e.g. GSettings) or on Adwaita-qt. We use xdg-desktop-portal to get the titlebar layout and do our own drawing instead. This decoration plugin now also has a GTK 4-like style, so the buttons and colors of the decorations are different.
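
For testing, QtWayland selects the decoration plugin via the QT_WAYLAND_DECORATION environment variable; in this sketch the plugin name "adwaita" is an assumption to verify against the QAdwaitaDecorations packaging:

# Run a Qt app with the Adwaita-style decorations (plugin name assumed)
QT_WAYLAND_DECORATION=adwaita ./some-qt-app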

Below is a screenshot of Wireshark (Qt6) using QAdwaitaDecorations plugin + QGtkTheme + Fusion:

<figure class="wp-block-image size-large"></figure>

Solution to problem #2

Since Qt5 is no longer actively developed, the only possible solution is to backport all the QGtkTheme improvements from Qt6, so I did that and modified some of those changes to avoid breaking binary compatibility. This results in about 15 related backports to Qt5 so far, and it seems to work pretty well. I also made sure that Fedora 38 and older will still use QGnomePlatform by default, so we don’t change the behavior for existing users. Also, a small change to our QtWayland package was needed to make it use the new decoration plugin by default.

Future plans

As mentioned, I would really like to have everything directly in Qt upstream (talking about QAdwaitaDecorations). That way we get other contributions and thus fixes/improvements for free and a lot more users. Another thing is that QGnomePlatform supports things that are not yet supported/implemented in QGtkTheme, like support for xdg-desktop-portal instead of just relying on GSettings. Not to mention that GTK 4 has been around for a while, and both QGnomePlatform and QGtkTheme are still GTK 3 based. I will definitely try to make some of these things happen for Fedora 40, but knowing myself, it’s better not to make any promises, as things usually don’t go according to plan.

Untitled Post

Posted by Zach Oglesby on August 22, 2023 01:03 AM

Finished reading: The Frugal Wizard’s Handbook for Surviving Medieval England by Brandon Sanderson 📚

This was a fun story; I would love to read more of this world.

Fedora IPU6 black image issue

Posted by Hans de Goede on August 21, 2023 03:46 PM
I have just become aware that Fedora users using the packaged IPU6 camera stack are having an issue where the output from the camera is black. We have been updating the stack and the new version of ipu6-camera-bins has hit the stable updates repo, while the matching new version of ipu6-camera-hal is currently in the updates-testing repo.

This causes the versions of ipu6-camera-bins and ipu6-camera-hal to get out of sync (unless you have updates-testing enabled), which leads to the camera output being all black.

You can fix this issue by running the following command:

sudo dnf update --enablerepo=rpmfusion-nonfree-updates-testing 'ipu6-camera-*'

Sorry about the inconvenience, we'll make sure to push both packages at the same time for the next set of updates.

I have tagged all the new ipu6-camera-hal builds to be moved to the stable update repositories, so on the next rpmfusion updates push this should be resolved.

CPE Weekly update – Week 33 2023

Posted by Fedora Community Blog on August 21, 2023 10:00 AM

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on libera.chat.

We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 14 August – 18 August 2023

CPE Infographics

Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

The purpose of this team is to take care of the day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
The ARC (which is a subset of the team) investigates possible initiatives that CPE might take on.
Planning board
Docs

Update

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

Updates

Community Design

Goal of this initiative

CPE has few members that are working as part of Community Design Team. This team is working on anything related to design in Fedora Community.

Updates

The post CPE Weekly update – Week 33 2023 appeared first on Fedora Community Blog.

Episode 389 – What would HashiCorp do?

Posted by Josh Bressers on August 21, 2023 12:00 AM

Josh and Kurt talk about the HashiCorp license change and copyright problems in open source. This isn’t the first and won’t be the last time we see this, but it’s very likely open source developers and communities will view any project that has a contributor license agreement as a problem moving forward.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_389_What_would_HashiCorp_do.mp3

Show Notes

Change

Posted by Zach Oglesby on August 20, 2023 04:00 AM

As the old saying goes, “The only constant in life is change.” My life has been ruled by it, moving around as a kid, life in the military, and the same old stuff we all deal with as we age and grow.

For the last 15 years, my wife has been the only constant in my life. We met in Delaware while I was stationed there. We fell in love, got married, and I scooped her away from the only home she had ever known to go live in Spain with me.

We were young, very young. I now know I was dealing with the early stages of PTSD; having met her just weeks after coming home from a rough deployment, she gave me an escape from my pain. The thing about escapes is that they only work for so long, and eventually, I had to deal with my issues. It was not easy, and sometimes she bore the brunt of that pain. Anyone who knows me can tell you I don’t express my emotions well; she, on the other hand, wore all of hers on her sleeve. Needless to say, this difference resulted in a lot of tension, frustration, and confusion. We worked through it, but not completely. We had our first child in Spain; I got out of the military, and we had two more kids in Maryland, but tensions always remained.

Tension eventually rubs raw, like a dog licking a wound; the longer it lasts, the worse it gets. We both tried to change and adapt for the sake of the other, but changing who you are is difficult, if not impossible, and it never worked.

So here I am 15 years later, looking change in the eyes again. We are separating, and my home will not be hers for the first time since 2008. It’s hard to explain; we don’t hate each other or even really fight, but we are distant, and I can sense some resentment in her that I can’t bear. I am thankful that we get along and can raise our kids together, even if we are apart. It will be an adjustment for all of us, but I pray they will understand and feel our love for them.

I have lived with my wife for longer than anyone else (I lived with my mom until I was 11 and my dad until I was 18). If I was not as open as she wanted, she still knew me better than anyone else. It’s going to be strange to live alone again (part-time), it’s going to be hard not to see my kids every day, and it will be difficult not to be upset. Still, in the end, I will never feel regret. I will always love her, be glad for our time together, and forever cherish our children.

XMPP onion

Posted by Casper on August 19, 2023 08:49 AM

New concept, new design, new model.

We are going to see how to install the most secure instant messaging system of all time. We will go through the Prosody server, a Tor router, and Gajim as the client. These components will be installed at the system level, on a single machine: a workstation or a laptop. The security of this design rests on its simplicity. Choosing the shortest path for message transit is a deliberate choice, aimed at reducing latency as much as possible, like a peer-to-peer (P2P) communication mode. In my tests, latency is under a second; it is nearly instantaneous.

This short guide lets you manually install an instant messaging system on your own machine. All the actions described here are reversible. For long-term use, the data to back up is listed at the very end. Docker/Podman containers are not on the agenda: the architecture would be too complicated and too hard to set up.

Design

Who is it for?

Everyone. No need to own a dedicated server. This method applies to everyone's "client machine" or "workstation". We all own at least a desktop computer or a laptop. If you have never had a Jabber account, welcome to the world of Jabber/XMPP through my method.

As I write these lines, the Poezio client cannot be used with my method. The developers are doing everything they can to solve the problems in the "slixmpp" library. Maybe it will work in 6 months, or in a year. I will publish a new article if the situation changes.

I have tested with the Gajim and Profanity clients, but I will only talk about Gajim here. I have not tested with Dino either.

Messages in transit

Messages will transit over the Tor Network, also called the Dark Net. The Tor Network is a sub-network within the network. It offers a number of possibilities that we are going to exploit in this design. To learn more, I invite you to watch the talk (in English) "The Dark Net Isn't what you think. It's actually key to our privacy" by Alex Winter at TEDxMidAtlantic:

https://youtube.com/watch?v=luvthTjC0O1

Advantages of the Tor network

By going through the Tor Network, this design sidesteps the usual problems. You do not need to buy a domain name (.org, .net, etc.). Since there is no domain name, there is no need for SSL certificates either. There is no connection-encryption problem, because all the traffic is encrypted by default.

As for server-side concerns, with my method there is no need to configure NAT routing rules, nor the machine's firewall.

If you have a laptop, the connection will work wherever you are. Even on the move, and even through a public WiFi access point.

No remote servers

The key word of this design is lightness. It requires no infrastructure and no dedicated server. Prosody is an extremely lightweight Jabber server and puts no load on the machine. On a client machine, it will be invisible.

The goal is to do away with remote servers. No need to rent a machine in a datacenter, and no need to buy a second box left running 24/7. For the first time in the history of Jabber/XMPP, communication will happen over a direct line.

There is a hidden advantage: notice that there is no intermediate server between 2 correspondents. The 2 correspondents communicate peer to peer and are linked by the Tor Network, through the Tor routers installed on their own machines. It is a form of P2P, and I sincerely believe the security level is raised.

The 2 correspondents are in "P2P", on a direct line, except that the Tor Network anonymizes the connection between them. That is a basic feature of the Tor network.

Still in pursuit of lightness, the absence of SSL certificates simplifies the system administration tasks as much as possible. No maintenance will be needed. Once installed, this system will keep working for a good long while.

What's more, Prosody is a reliable Jabber server that never causes problems with SELinux. A detail, admittedly, but it is not the case with other Jabber servers. It is the perfect program for a client machine.

XMPP client

Technically, you can use any XMPP client. The configuration will be a little different from usual, because it does not connect to the Internet but directly to the Prosody server on the same machine, taking the shortest path. The connections are established on the localhost interface.

The C2S (Client-to-Server) listening port is a port that must be unreachable from outside the machine. The only expected connection is that of the machine's user. I chose an arbitrary, non-standard port, 15222, blocked by default at the firewall level.
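It is worth double-checking that assumption against your own firewall zone, since some Fedora editions (Workstation, notably) open high ports by default:

firewall-cmd --list-all

Even if 15222 turns out to be allowed by your zone, remember that in this setup the client port is only expected on the localhost interface anyway.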

When the connections go over the localhost interface, SSL/TLS is useless; in the present case, it is disabled. The clients of the 2 correspondents may enable end-to-end encryption (E2EE) with OMEMO, but remember that all the traffic is already encrypted by the Tor network. There is no extra configuration to add on the Prosody side for OMEMO to work properly.

Jabber ID

This method offers guarantees for maintaining and persisting the user's Jabber identity. A Jabber identity, a Jabber address, cannot be spoofed.

The security scheme rests on the .onion domain names provided by the Tor network. In this model, each correspondent has a unique Jabber address thanks to the .onion domain name. Onion addresses are unique, so if an address is no longer used, it cannot be picked up again for identity theft. It will be lost forever.

Each correspondent hosts on their own machine the XMPP server that serves their .onion domain name. A correspondent may well find that the recipient's "XMPP server" is unreachable. That is a possible scenario, and it is not a problem: the messages will be stored on the sending machine and delivered once the remote XMPP server is reachable again. In a peer-to-peer communication mode, if a peer is offline, then communication cannot take place. That is logical.

The XMPP protocol has everything needed to handle message delivery problems (and deferred delivery).

A user can create several Jabber addresses by adding 2 or 3 user accounts on the Prosody server. All the addresses will share the machine's .onion domain name. But this feature is of no real interest... One onion address per person is already plenty.

Drawbacks

This design is not perfect; it suffers from a compatibility problem with the rest of the XMPP network.

In this minimalist model, a correspondent with an onion address cannot contact someone whose domain is on the clearnet (such as .org, .net, .com, etc.). The initial objective of this model is to democratize onion addresses. If every person hosts their own address on their workstation, then the model works. It is only a concept. A design.

Its solution is beyond the scope of this article.

As for the Jabber addresses, .onion domain names cannot be personalized or chosen by the user. They are generated automatically by the Tor router and are made up of randomly mixed letters and digits. But they are unique.

On to the technical part

Installing and configuring the Tor router

It is Free Software, protected by a BSD license: it cannot be subject to a patent. Start by installing it:

dnf install tor

There are several modes for the Tor router. The best known is "relay" mode, which relays Tor network traffic on your machine. That mode is not what we want here; we will configure it as a simple "router". It only connects our machine to the Tor network, without relaying traffic.

My /etc/tor/torrc file:

Log notice stdout
SocksPort [::1]:9050 PreferIPv6
SocksPort 127.0.0.1:9050 PreferIPv6
#SocksPort 172.17.0.1:9050 PreferIPv6 # Optional for Docker
ClientPreferIPv6ORPort 1
HiddenServiceDir /var/lib/tor/hidden_service1/
HiddenServicePort 5269 [::1]:5269

Then start the process in the background:

systemctl enable tor
systemctl start tor

(The process takes about 2 minutes to start, depending on the machine.)
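To confirm the router is fully connected before moving on, the bootstrap progress can be read from the journal (assuming the logs land there, as they do for a standard systemd service):

journalctl -u tor | grep Bootstrapped

Once it reports "Bootstrapped 100%", the hidden service is ready.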

Retrieving the .onion domain name

You must switch to the "root" user to access this information. As explained earlier, the Tor router automatically generates a unique onion domain name. It writes it into a text file, but modifying the file will not change the domain name. Cryptographic keys determine the domain names, and they cannot be modified without corrupting them.

All of the files are stored in this directory:

/var/lib/tor/hidden_service1/

The information is stored in this file:

cat /var/lib/tor/hidden_service1/hostname

Write it down; we will need it for what follows.

Installing and configuring Prosody

It is available for download in the Fedora repository. We start by installing it on the machine:

dnf install prosody

My config file:

/etc/prosody/prosody.cfg.lua

I have put it online so it can be fetched more quickly with curl (it is 300 lines long).

curl -o /etc/prosody/prosody.cfg.lua https://dl.casperlefantom.net/pub/prosody.cfg.lua.txt

Or (through Tor):

torsocks curl -o /etc/prosody/prosody.cfg.lua http://uhxfe4e6yc72i6fhexcpk4ph4niueexpy4ckc3wapazxqhv4isejbnyd.onion/pub/prosody.cfg.lua.txt

Let's go over what needs to be modified and adapted to your setup:

----------- Virtual hosts -----------
-- You need to add a VirtualHost entry for each domain you wish Prosody to serve.
-- Settings under each VirtualHost entry apply *only* to that host.

--VirtualHost "localhost"
-- Prosody requires at least one enabled VirtualHost to function. You can
-- safely remove or disable 'localhost' once you have added another.

-- Section for VirtualHost onion address

VirtualHost "p4ac3ntp3ai643k3h5f7ubggg7zmdf7ddsnfybn5rejy73vqdcplzxid.onion"
    ssl = { }

Component "rooms.p4ac3ntp3ai643k3h5f7ubggg7zmdf7ddsnfybn5rejy73vqdcplzxid.onion" "muc"
    name = "Hidden Chatrooms"
    modules_enabled = { "muc_mam" }
    restrict_room_creation = "local"
    ssl = { }

The VirtualHost configuration sits directly in the main config file, to keep things as simple as possible. Replace the onion addresses with your own .onion domain name. That's all; the config file is then ready to use.
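Before going any further, you can let Prosody validate the file itself; prosodyctl includes a built-in configuration checker that catches syntax errors and common misconfigurations:

prosodyctl check config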

There are some useless files that cannot simply be deleted:

  • /etc/prosody/conf.d/example.com.cfg.lua
  • /etc/prosody/conf.d/localhost.cfg.lua

To keep them from interfering with the main config file, we can replace their content with a single comment line. The comment indicates that the file is not used:

echo "-- Fichier vide" > /etc/prosody/conf.d/example.com.cfg.lua
echo "-- Fichier vide" > /etc/prosody/conf.d/localhost.cfg.lua

If you delete these files with "rm", they will be recreated later, when the Prosody RPM is updated. Their content causes problems: if you overlook this detail, the whole onion XMPP setup can break down without your understanding why.

Tools for managing modules

Prosody modules have their own module manager named "luarocks". Without this program, the prosodyctl command cannot install modules. So it must be installed:

dnf install luarocks lua-devel

Lua scripts to install

The "mod_onions" module needs 2 Lua programs to work. They are available in the repository and must be installed on the system:

dnf install luajit lua-bit32

Then we can install the module:

prosodyctl install --server=https://modules.prosody.im/rocks/ mod_onions

After running the prosodyctl command, you can, if you wish, remove luarocks via the dnf history, since we will not be using it again (see the sketch below).
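A minimal sketch of that cleanup, where the transaction ID is whatever the first command reports for the luarocks installation:

dnf history list luarocks
dnf history undo <transaction-ID>   # replace <transaction-ID> with the ID shown above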

The "mod_onions" module redirects all of Prosody's outgoing connections through Tor. When it attempts to connect to another server, Prosody sends a request to establish a TLS connection. That is how the code is written, and it is not configurable. The correspondent's server then replies that it cannot establish a TLS connection, so the connection fails. To change this default behavior of the program, we can go through a module.

Here is my own module, to be installed at:

/var/lib/prosody/custom_plugins/share/lua/5.4/mod_s2s_never_encrypt.lua

-- Disable STARTTLS on server-to-server streams, so that s2s connections
-- tunnelled through Tor are not forced into a TLS negotiation.
local libev = module:get_option_boolean("use_libevent")

local function disable_tls_for_baddies_in(event)
    local session = event.origin
    module:log("debug", "disabling tls on incoming stream from %s...", tostring(session.from_host));
    if libev then session.conn.starttls = false; else session.conn.starttls = nil; end
end

local function disable_tls_for_baddies_out(event)
    local session = event.origin
    module:log("debug", "disabling tls on outgoing stream from %s...", tostring(session.to_host));
    if libev then session.conn.starttls = false; else session.conn.starttls = nil; end
end

module:hook("s2s-stream-features", disable_tls_for_baddies_in, 600)
module:hook("stanza/http://etherx.jabber.org/streams:features", disable_tls_for_baddies_out, 600)

SELinux denials

We will skip the experimentation phase and jump straight to the solution.

SELinux blocks two things. It prevents Prosody from listening on port 15222, because that port belongs to a range not reserved for any specific service; SELinux therefore stops it from picking a "random" port. A (network) service must use a port reserved for that service, which is logical.

This behavior is allowed by the "nis_enabled" boolean:

setsebool -P nis_enabled on

Alternatively, you can change the SELinux label of port 15222, as you prefer (check the state of the boolean first):

semanage port -a -t prosody_port_t -p tcp 15222
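Either way, you can verify the result afterwards; the port should now be listed under the prosody_port_t type:

semanage port -l | grep prosody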

The second denial prevents Prosody from establishing a connection to Tor's SOCKSv5 proxy. That is, Prosody is trying to connect to the TCP socket of another service (on the same machine). This is not the usual behavior of a network service.

To solve the problem, we create a text file (prosody-connect-tor-port.txt) containing the following single line:

type=AVC msg=audit(1673295193.979:4392): avc:  denied  { name_connect } for  pid=935298 comm="prosody" dest=9050 scontext=system_u:system_r:prosody_t:s0 tcontext=system_u:object_r:tor_port_t:s0 tclass=tcp_socket permissive=0

Then we use audit2allow to generate an SELinux policy module:

cat prosody-connect-tor-port.txt | audit2allow -M prosody-connect-tor-port

Then we can install the module:

semodule -i prosody-connect-tor-port.pp
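You can confirm the module is loaded by listing the installed policy modules:

semodule -l | grep prosody-connect-tor-port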

Removing the useless certificates

When prosody is installed, an x509 certificate is generated automatically. It contains generic information, is valid for 1 year, and is self-signed. Here is an example of what it contains:

Issuer: C = --, ST = SomeState, L = SomeCity, O = SomeOrganization, OU = SomeOrganizationalUnit, CN = vulcain, emailAddress = root@vulcain
Subject: C = --, ST = SomeState, L = SomeCity, O = SomeOrganization, OU = SomeOrganizationalUnit, CN = vulcain, emailAddress = root@vulcain
Validity:
    Not Before: Jun 17 07:02:32 2023 GMT
    Not After : Jun 16 07:02:32 2024 GMT

It is filled with junk, plainly. It is unusable, and it will cause problems once it expires in a year. So I recommend wiping it:

echo "" > /etc/pki/prosody/localhost.crt
echo "" > /etc/pki/prosody/localhost.key

Technically, we do not need it.

Starting prosody

This is the moment. The process is started in the background.

systemctl enable prosody
systemctl start prosody

You will notice that it consumes almost no resources on the machine.
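You can also see for yourself that it is listening where expected; as root, list the listening sockets (ss comes from the iproute package):

ss -tlnp | grep 15222

Prosody should appear bound on the localhost interface only, as described above.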

Creating a user account

To create a user account, you can go through either Gajim or the prosodyctl command. I will not detail the procedure in Gajim; feel free to choose whatever suits you best. For the name, I recommend something short, because the final address is already very long.

Here are the commands using prosodyctl:

prosodyctl adduser user@adresse.onion
prosodyctl passwd user@adresse.onion

(Nobody can connect to the client listening port, so nobody can test the strength of the password.)

Installing and configuring Gajim

On the Gajim side, the config is relatively simple. It is again Free Software, protected by a GPLv3 license: nobody can file a patent on this software. It is in the Fedora repository, and you start by installing it:

dnf install gajim

Then it's off on the guided tour. Follow the arrows!

Go to "Accounts" > Edit account > Add account

(Screenshots: the Gajim account setup steps.)

Installing and configuring OMEMO is not covered in this article.

In summary

In the end, whatever client you use, here is the key information to fill in:

  • Account name (username@hostname, also called the JID)
  • Server: localhost
  • Port: 15222
  • Use an unencrypted connection (disable TLS)

Data to back up

This system is reliable over the long term only if it is backed up. Let's take stock here, so as to guarantee the longevity of your installation.

To create a backup copy of the system side (as root):

tar -Jcf xmpp-onion-system.tar.xz /etc/tor/torrc /var/lib/tor/hidden_service1/ /etc/prosody/prosody.cfg.lua /etc/prosody/conf.d/example.com.cfg.lua /etc/prosody/conf.d/localhost.cfg.lua /var/lib/prosody/ /etc/pki/prosody/localhost.crt /etc/pki/prosody/localhost.key

User data (not as root):

$ tar -Jcf xmpp-onion-user.tar.xz .config/gajim/ .local/share/gajim/

The advantage is that if you decide to change clients, there is no need to back everything up again.

To restore from the backup (as root):

pushd /
tar -Jxf /home/user/xmpp-onion-system.tar.xz
popd

And to restore the user data:

$ tar -Jxf xmpp-onion-user.tar.xz

Simple and effective. But you should also build in redundancy by backing up your $HOME. Redundancy is something we need.

It is possible to restore the backup onto a Fedora LiveUSB after booting from it. That will work too.

And it works.

We have just seen how to install and set up a decentralized, centerless messaging system over an anonymization network. Help and support in the fedora@chat.jabberfr.org (XMPP) chat room, where I am always present.

TransFLAC: Convert FLAC to lossy formats

Posted by Fedora Magazine on August 18, 2023 08:00 AM

FLAC: The Lossless Audio Compression Format

FLAC, or Free Lossless Audio Codec, is a lossless audio compression format that preserves all the original audio data. This means that FLAC files can be decoded to an identical copy of the original audio file, without any loss in quality. However, lossless compression typically results in larger file sizes than lossy compression, which is why a method to convert FLAC to lossy formats is desirable. This is where TransFLAC can help.

FLAC is a popular format for archiving digital audio files, as well as for storing music collections on home computers. It is also becoming increasingly common for music streaming services to offer FLAC as an option for high-quality audio.

For portable devices, where storage space is limited, lossy audio formats such as MP3, AAC, and OGG Vorbis are often used. These formats can achieve much smaller file sizes than lossless formats, while still providing good sound quality.

In general, FLAC is a good choice for applications where lossless audio quality is important, such as archiving, mastering, and critical listening. Lossy formats are a good choice for applications where file size is more important, such as storing music on portable devices or streaming music over the internet.

TransFLAC: Convert FLAC to lossy formats

TransFLAC is a command-line application that converts FLAC audio files to a lossy format at a specified quality level. It can keep both the FLAC and lossy libraries synchronized, either partially or fully. TransFLAC also synchronizes album art stored in the directory structure, such as cover, albumart, and folder files. You can run TransFLAC interactively in a terminal window, or you can schedule it to run automatically using applications such as cron or systemd.

The following four parameters must be specified:

  1. Input FLAC Directory: The directory to recursively search for FLAC audio files. The case of the directory name matters. TransFLAC will convert all FLAC audio files in the directory tree to the specified lossy codec format. The program will resolve any symlinks encountered and display the physical path.
  2. Output Lossy Directory: The directory to store the lossy audio files. The case of the directory name matters. The program will resolve any symlinks encountered and display the physical path.
  3. Lossy Codec: The codec used to convert the FLAC audio files. The case of the codec name does not matter. OPUS generally provides the best sound quality for a given file size or bitrate, and is the recommended codec.
    Valid values are: OPUS | OGG | AAC | MP3
  4. Codec Quality: The quality preset used to encode the lossy audio files. The case of the quality name does not matter. OPUS STANDARD quality provides full bandwidth, stereo music, good audio quality approaching transparency, and is the recommended setting.
    Valid values are: LOW | MEDIUM | STANDARD | HIGH | PREMIUM

TransFLAC allows for customization of certain items in the configuration. The project wiki provides additional information.
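As a concrete illustration, here is a hypothetical invocation passing the four parameters in the order listed above, together with a crontab entry for a weekly run. The argument syntax shown is an assumption, not the documented interface; check transflac --help or the project wiki for the real form:

transflac ~/Music/flac ~/Music/opus OPUS STANDARD   # hypothetical argument order

0 2 * * 0 transflac ~/Music/flac ~/Music/opus OPUS STANDARD   # weekly run (Sundays 02:00), same hypothetical syntax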

Installation on Fedora Linux:

$ sudo dnf install transflac
(Screenshot: TransFLAC converting FLAC to lossy formats)

GNOME 45 Core Apps Update

Posted by Michael Catanzaro on August 17, 2023 03:57 PM

It’s been a few months since I last reviewed the state of GNOME core apps. For GNOME 45, we have implemented the changes proposed in the “Imminent Core App Changes” section of that blog post:

  • Loupe enters core as GNOME’s new image viewer app, developed by Christopher Davis and Sophie Herold. Loupe will be branded as Image Viewer and replaces Eye of GNOME, which will no longer use the Image Viewer branding. Eye of GNOME will continue to be maintained by Felix Riemann, and contributions are still welcome there.
  • Snapshot enters core as GNOME’s new camera app, developed by Maximiliano Sandoval and Jamie Murphy. Snapshot will be branded as Camera and replaces Cheese. Cheese will continue to be maintained by David King, and contributions are still welcome there.
  • GNOME Photos has been removed from core without replacement. This application could have been retained if more developers were interested in it, but we have made the decision to remove it due to lack of volunteers interested in maintaining it. Photos will likely be archived eventually, unless a new maintainer volunteers to save it.

GNOME 45 beta will be released imminently with the above changes. Testing the release and reporting bugs is much appreciated.

We are also looking for volunteers interested in helping implement future core app changes. Specifically, improvements are required for Music to remain in core, and improvements are required for Geary to enter core. We’re also not quite sure what to do with Contacts. If you’re interested in any of these projects, consider getting involved.

PHP version 8.1.23RC1 and 8.2.10RC1

Posted by Remi Collet on August 17, 2023 03:36 PM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, a perfect solution for such tests, and also as base packages.

RPMs of PHP version 8.2.10RC1 are available

  • as base packages
    • in the remi-php82-test repository for Enterprise Linux 7
    • in the remi-modular-test repository for Fedora 36-38 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.1.23RC1 are available

  • as base packages
    • in the remi-php81-test repository for Enterprise Linux 7
    • in the remi-modular-test repository for Fedora 36-38 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

emblem-notice-24.pngPHP version 8.0 is now in security mode only, so no more RC will be released.

emblem-notice-24.pngInstallation : follow the wizard instructions.

Parallel installation of version 8.2 as Software Collection:

yum --enablerepo=remi-test install php82

Parallel installation of version 8.1 as Software Collection:

yum --enablerepo=remi-test install php81

Update of system version 8.2 (EL-7) :

yum --enablerepo=remi-php82,remi-php82-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.2
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.1 (EL-7) :

yum --enablerepo=remi-php81,remi-php81-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.1
dnf --enablerepo=remi-modular-test update php\*

emblem-notice-24.png EL-9 packages are built using RHEL-9.2

emblem-notice-24.png EL-8 packages are built using RHEL-8.8

emblem-notice-24.png EL-7 packages are built using RHEL-7.9

emblem-notice-24.png oci8 extension now uses Oracle Client version 21.10, intl extension now uses libicu 72.1

emblem-notice-24.png The RC version is usually the same as the final version (no changes accepted after the RC, except security fixes).

emblem-notice-24.png versions 8.1.23 and 8.2.10 are planned for August 31st, in 2 weeks.

Software Collections (php81, php82)

Base packages (php)

PHP version 8.0.30, 8.1.22 and 8.2.9

Posted by Remi Collet on August 17, 2023 05:18 AM

RPMs of PHP version 8.2.9 are available in remi-modular repository for Fedora ≥ 36 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in remi-php82 repository for EL 7.

RPMs of PHP version 8.1.22 are available in remi-modular repository for Fedora ≥ 36 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in remi-php81 repository for EL 7.

RPMs of PHP version 8.0.30 are available in remi-modular repository for Fedora ≥ 36 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in remi-php80 repository for EL 7.

emblem-notice-24.png The modules for EL-9 are available for x86_64 and aarch64.

emblem-important-2-24.pngPHP version 7.4 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

security-medium-2-24.pngThese versions fix 2 security bugs, so updating is strongly recommended.

Version announcements:

emblem-notice-24.pngInstallation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.2 installation (simplest):

dnf module reset php
dnf module enable php:remi-8.2
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php82
yum update

Parallel installation of version 8.2 as Software Collection

yum install php82

Replacement of default PHP by version 8.1 installation (simplest):

dnf module reset php
dnf module enable php:remi-8.1
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php81
yum update php\*

Parallel installation of version 8.1 as Software Collection

yum install php81

And soon in the official updates:

emblem-important-2-24.pngTo be noticed :

  • EL-9 RPMs are built using RHEL-9.2
  • EL-8 RPMs are built using RHEL-8.8
  • EL-7 RPMs are built using RHEL-7.9
  • intl extension now uses libicu72 (version 72.1)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.8, instead of the outdated system library)
  • oci8 extension now uses Oracle Client version 21.10
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

emblem-notice-24.pngInformation:

Base packages (php)

Software Collections (php80 / php81 / php82)

[F39] Take part in the GNOME 45 and DNF5 test days

Posted by Charles-Antoine Couret on August 16, 2023 10:14 PM

From August 11 through 17, a week is dedicated to several test days around DNF5, complemented until August 20 by tests of GNOME 45 and its applications. During the development cycle, the QA team dedicates a few days to particular components or new features in order to surface as many issues as possible on the topic.

It also provides a list of specific tests to run. You just have to follow them, compare your result to the expected result, and report it.

What do these tests involve?

We are approaching the release of the Fedora 39 Beta. Many new features are well along in their development and must be stabilized before the final version, due this autumn.

For GNOME 45 the tests consist of:

  • Detection of the Fedora upgrade by GNOME Software;
  • Locking and unlocking the screen;
  • Proper operation of the Web browser, Maps, Music, Disks, and the Terminal;
  • Logging in / out and switching users;
  • Sound working correctly, in particular detection of headphones or headsets being plugged in or unplugged;
  • Overall operation of the desktop: the Activities overview, the settings, the extensions;
  • The ability to launch graphical applications from the menu.

For DNF5, based on microdnf, which is a rewrite of DNF, the goal is mostly to make sure this component does not regress compared with its predecessor. To invoke it, install the dnf5 package and use the command of the same name instead of dnf.

The tests mostly consist of the following (a sample session is sketched after the list):

  • Installing, removing, and updating packages;
  • Refreshing and clearing the repository cache;
  • Listing available and installed packages;
  • Searching for a package in the repositories by name or description;
  • Listing enabled and disabled repositories;
  • Checking the package transaction history and, if you like, undoing a past transaction.
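A minimal sketch of such a test session, using the hello package purely as an example:

sudo dnf install dnf5
dnf5 search hello
sudo dnf5 install hello
dnf5 repolist
dnf5 history
sudo dnf5 remove hello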

How can you take part?

Visit the GNOME 45 and DNF5 pages, follow the instructions, and report your results there.

If you need help while running the tests, feel free to drop by IRC for a hand in the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Libera server.

If you hit a bug, it must be reported on Bugzilla. If you do not know how, feel free to consult the corresponding documentation.

Also, even though a week is dedicated to these tests, you can still run them a few days later without any problem! The results will remain broadly relevant.

Server updates/reboots

Posted by Fedora Infrastructure Status on August 16, 2023 09:00 PM

We will be updating/rebooting various servers. Services may be up and down in this outage window.

Flock Ireland

Posted by David Cantrell on August 16, 2023 05:34 PM

The annual Fedora Contributor Conference, Flock to Fedora, has come and gone. Our last conference was actually in 2019 in Budapest, Hungary. Then when the pandemic hit, we discontinued in-person events. We held Fedora Nest online events during those years. While those were successful, I miss being able to gather in person. Starting this year, we are back to in-person conferences for Fedora. This year the conference was held in Cork, Ireland.

Getting There

Having done many conferences during my career, I have found the best way for me to be most present is if I arrive early. I at least like to have most of a day before the event in order to adjust to the time zone (if necessary) and get settled. I am not someone who can go directly from a plane to a meeting and then back to a plane.

I booked my travel through my employer using Egencia (link not provided as I find them to be not very helpful for any travelers). In the past I usually booked my travel directly and expensed things using our usual reimbursement system, but we now have new travel guidelines and need to use Egencia.

Given the time I was booking, approved options through Egencia were limited for me. Going there, my flight departed from Boston headed for Amsterdam. There I had a 6-hour layover and then flew from Amsterdam to Cork. I have done the 6-hour layover in Amsterdam before, so I was prepared for that. I was surprised the system did not suggest I fly to Dublin, but what do I know? I think next time I would tell Egencia that I was flying to Dublin instead of Cork and then arrange transport to Cork separately. Egencia is apparently unaware that trains are a thing that exists.

While in Amsterdam I got some breakfast (or breafkast as one sign said), called home, and then headed to the Lounge 2 Starbucks, which is the last place I know of, when I’m headed to Europe, where I can get an American-sized drip coffee. I purchased my “filter coffee” and then got the laptop out to work on my presentation material. I was able to meet up with Justin Flory and Kevin Fenzi and we all took the same flight to Cork.

Once in Cork we went through passport control because Ireland is not part of the Schengen Area. This was fun for the 12-year-old me because now I got another visa stamp in my passport that is not Amsterdam. Seems all of my EU travel is just through Amsterdam and it’s pages of those stamps. Occasionally Paris. I liked how the Irish visa is a large green stamp (I assume) and the agent writes down why you are there and for how long. Those Schengen 90-day stamps are boring by comparison.

Justin found his luggage and then we got a cab to the hotel. Felt like we drove in circles for a while because Ireland has more rotaries than Massachusetts, but we eventually made it there.

The Hotel and Venue

Event hotels can make or break the event. If there are problems, that’s all anyone will ever remember about an event. The Clayton Silver Springs looked right for Flock. I checked in and went to my room on the 3rd floor. Hotel navigation took me a bit because the conference center part of the hotel is in a separate detached building behind the hotel, but it’s connected by walkway on the 3rd floor of the hotel. Strange.

The elevator also had an off-by-one display problem on the floor indicator. The trick was to still just press the button you wanted and ignore the display. The 3rd floor was always displayed as the 4th. Or the 2nd. I can’t remember.

Breakfast was included and was the usual hotel buffet spread.

I prefer to sleep in a very cold room and this hotel did not have in-room air conditioning. It did have windows that opened a bit, so I opened them and kept them open the whole time. The room also included a desktop oscillating fan which I ran at full blast in front of my face. What can I say, I like to sleep in an igloo.

I found the shower to be dangerous. There was a non-slip mat rolled up with a sign indicating that you must use this mat if you intend to use the shower (who doesn’t shower in a hotel?). But then the shower itself was extremely narrow. I remember that from the UK too. Do people in Ireland shower standing sideways rather than facing the showerhead?

Day 0

On the day of check-in, I was able to watch as people arrived and checked in. Some of us gathered in the hotel bar for the usual socializing. I did not bring toiletries and the hotel did not have anything complimentary or a shop, but advised I try the Circle K at the corner. Strange things were afoot at the Circle K as they sold nearly every imaginable toiletry item except toothpaste and deodorant.

That evening we held the sponsor dinner at a nice restaurant downtown. Before the dinner, I was able to acquire the items I was looking for at a nearby Tesco.

The restaurant was able to accommodate us, but it was tight quarters. For my dinner I chose the steak. This obviously came with potatoes (Ireland), which was fine by me. The beef was Irish and the menu indicated the family farm where it came from, even with contact information. Suddenly I was reminded of Portlandia and meeting the chicken.

Irish beef: recommended

Day 1

Day 0 was the day where we were all trying to finish our presentation material, but instead socialized. I decided to break from the standard conference tradition of finishing the slides 10 minutes before the presentation and I finished them the evening before. Mine were more meant for reference material later if you were going to watch the video. Others had complex slides with charts, images, and so on.

I was able to start early and get breakfast and head over to the conference venue to collect my name tag and other stuff. There were conference shirts as well as assorted swag.

The opening talk is always about the State of Fedora. It included a bit of information about recent happenings within the enterprise Linux world, to which we are not immune, as well as recent changes in the Fedora space. All in all things look good and the overall feelings about the project are positive.

The keynote was the long established “What Does Red Hat Want?” talk. In the past this was usually just a combination of a wish list as well as acknowledgement of already in-progress projects in Fedora that will ultimately end up in RHEL. This talk was different. Most notably because Red Hat has made some recent changes about the positioning of RHEL and the relationship to CentOS Stream. The talk was by Brian Exelbierd who I thought did a really good job of blending the business speak with the engineering speak. Too often we hear from people who skew too far to one of those directions, but Brian did a good job (I thought) of speaking to the Fedora Project in a meaningful way.

Side note…one thing I dislike about current conferences is the use of online schedule tools. I miss the days of the printed schedule or conference booklet. First, my phone may not work in the country the conference is held in. Second, I kind of want to detach from my phone for the event if at all possible. Flock uses sched.com, but I did notice this time it emailed me a daily schedule that I was able to reference more easily. Usability here is a slight tick upward.

Next was a snack and coffee break with sessions starting at 11:00.

CentOS Connect was co-located with Flock this year, which I thought was a really good idea. I hope we continue that in the future. Sadly I was not able to attend the Welcome to CentOS Connect nor the State of the Fedora Kernel talk because I was giving my talk on rpminspect. This is both a good thing and a bad thing. You’re never able to attend everything, but you try to get to as many things as possible.

I went to the SPDX talk and the Rocky Linux secureboot talks before lunch. After lunch I got tied up in the always present hallway track discussing various things, so I missed two sets of talks. I went to the Fedora Asahi talk because I was interested in the state of Linux on Apple Silicon. This unfortunately conflicted with the Podman Desktop talk which I was also interested in. And Fedora CI which I was also interested in. I’m going to be watching a lot of talks later.

More hallway track for me and coffee and finally the State of EPEL talk. One thing I found interesting here is that EPEL is the most downloaded content out of everything the Fedora project produces. Not surprising, but it does indicate how much enterprise Linux is out there. EPEL maintenance should be taken seriously and not just a fedpkg build once you get the branch.

I am not much of a gamer or a candy fan, so I socialized at the hotel bar for the first night. Really, I was exhausted.

Day 2

The second day starts a little more slowly, but still up in time for breakfast and coffee. I was in the Meet your FESCo session. We had four of us in person and I pointed out that the entire time I had been part of FESCo was during the pandemic. This was the first time many of us had met in person. We were not able to have all members present, but four was pretty good. We got things going and let people ask questions and then took turns answering. It was nice to hear from people.

Snacks and coffee again and some sidebar conversations from the FESCo session led to hallway track sidebars for myself. Day 2 saw some session cancellations due to some people being unable to attend. While this was not the largest Flock, it was nice to see the turnout we had. But people were still missed.

I attended the Discourse session and that ended up going a different direction than I think Matthew had intended, but I believe it was still useful.

The enterprise Linux panel was moderated by Matthew representing Fedora and included representatives for Rocky Linux, Red Hat Enterprise Linux, CentOS Stream, and Alma Linux. After some introductions and paying tribute to Fedora, questions happened. I was surprised there was not a line for questions, so I raised my hand to ask one. My question was serious, but I was also trying to keep things light. I said I have a server at home that performs some rather mundane tasks. I used to run old CentOS on it because I don’t like to have “IT day” be every day at home. I kind of want to rely on the system and not babysit it all the time. Fedora changes too fast for this system. With CentOS now gone, why would I choose Rocky Linux, CentOS Stream, or Alma Linux for this system? Put another way, what distinguishes your distribution from the others? This generated lots of talk and discussion, but at the session I did not actually get a feeling for what makes them different other than their individual communities. The Q&A session continued and the topic of rebuilding RHEL vs. maintaining ABI compatibility with the major version was presented. Alma Linux has announced a plan to stick with ABI compatibility. Given this, they have the opportunity to differ from being an exact RHEL rebuild while still being compatible. The example given was Alma choosing to enable kernel drivers that RHEL has stopped supporting. I think this is a huge thing and worth pointing out for an enterprise Linux project. Yeah, Red Hat stops supporting older hardware sooner than some users want. Alma can turn that on and still offer a compatible ABI environment. That’s nice.

For the evening event we had a buffet dinner at Tequila Jack’s followed by a team scavenger hunt. This was downtown from the hotel, but Flock had two big coaches that moved us between the hotel and downtown. The restaurant was ok. We had a drink ticket and otherwise it was a cash bar.

The scavenger hunt was interesting. It was a fun way to see different parts of the city. I would have preferred it to be about a third or half the length and also if it had not started so closely after dinner. Having some time to socialize at the restaurant would have been better.

Some of us stayed downtown and hit some other bars before returning to the hotel very late. I was surprised at the early closing times of the bars and restaurants, but it was evident why. Lots of stumbling intoxicated people wandering home or at least somewhere else. Still, never saw any fighting or anything bad. That may have been because of an increase in police visibility later at night.

Oh, I kept seeing vans and cars that said GARDA and I thought it was related to GardaWorld. Justin clarified for me that Garda is the police force in Ireland. Ooops, noted. I’m just used to seeing some word that looks visually similar to “police” in other countries: polizei, policie, politie, police, and so on. Add garda to that list.

Day 3

The last day was a late start for me. We had an SPDX hackfest session. Not widely attended, but we were able to answer questions and help people with some of the license matching tools.

The closing session for Flock offered an opportunity to address the room. I noted that I was glad we are back to in-person events. Virtual just isn’t a replacement. I also asked that we not have Windows laptops set up for presenters at a Fedora conference, which to my surprise received applause. Seriously, at this point we should have live media ready to go for that regardless of what hardware appears on the podium.

I took some time during the day to go back in to town with Justin to find some gifts for my family. We got lunch as well and then headed back to the hotel. The last night’s event was a ghost tour. I skipped that along with some others and we found alternative Mexican food (I mean, when in Ireland…). By this point I was pretty exhausted, so it was nice to have a smaller group.

Back at the hotel people said their goodbyes and either started leaving or got ready to leave the next day. A large group was in the hotel bar rotating out as the night went on so we could see people and meet up one last time.

Day 4

My flight out of Cork was around 5pm so I had some time during the day. Isaac Chute offered to take me to Kinsale and see some things before dropping me at the airport. Isaac is a former Red Hatter, also lives in metro Boston near me, is also a fellow boater, and now works for the Linux Foundation. And he is originally from Cork. Perfect tour guide!

He arrived at the hotel and I put my bags in the car and off we went. Our driver was his father in law. We proceeded through Cork and towards Kinsale. Outside of Cork the roads became very narrow. There are also dirt roads that are actually named streets and have speed limits posted saying 80 km/h. Sure!

Kinsale was really nice. While it was only a few hours for me, I could see spending more time there. Recommended if you have not been.

After grabbing lunch at a pub, they took me to the airport and I began the check-in process to depart. I had a last Guinness-in-Ireland before boarding a plane to Paris.

I arrived in Paris around 7:00 pm local time. This is where things began to derail for me. To keep this part of the trip report short, I will just say that my original travel plans had me flying from Cork to Paris to Amsterdam to Boston to get home. The Paris layover was 12 hours and the Amsterdam layover was 9 hours. Trying to change this before my flight was impossible and I was continually told to have Air France make the change in Paris. OK, I guess I will try that. The goal here was to rebook for a direct Paris to Boston flight, but leaving the next day at a non-insane hour. That would give me some hours to see something other than the airport before flying home.

Well, there were other problems. I started getting emails some days before my return flight about “civil unrest” in France. Air France ended up being short staffed and had cancelled many flights. Or many flights were diverted so the equipment ended up elsewhere. Inside the airport were MANY people waiting to talk to a ticketing agent for help.

And here is where we get to my problem. I had a valid booking, it was just terrible. I wanted something nicer. I can wait in a line and ask. But this process went on and on and had me go from line to line to talk to different people none of which could actually help me. What I should have done was abort early and just say I tried and failed and gotten some sleep and caught the horrible early flight. But I kept thinking I could get a rebooking. Nope.

By the time I was at the hotel and going to bed, I was going to get about 3 hours of sleep before I needed to make my way to the airport.

Day 5

So I missed that alarm. Fortunately the night before I had purchased a one-way CDG to BOS flight just in case I was going to have more rebooking problems. I made some attempts to call Air France and our horrible terrible corporate travel agency and nothing worked, so whatever. I went ahead with my “see Paris” plan.

Paris is huge and not very tall. Subway doors are manually operated by you if you want to exit. The subways also make a muffled hockey goal horn sound before closing (compare).

Saw some stuff, ate some things, had some drinks, walked a lot, practiced absolutely none of my French, bought touristy things. Returned to hotel.

The hotel bar was full of displaced travelers too. I met a brother and sister from Nebraska who were stuck at the airport due to cancelled flights but didn’t know how long they would be there. There was a woman who was separated from her family traveling back from Norway and had no idea when she would be flying out.

Day 6

Checked out of the hotel and headed to Terminal 2E to check in, security check myself, “exit France”, and find the Air France lounge that due to my Delta status I was able to use. Breakfast buffet there was great. Their lounges are way nicer than Delta lounges. For starters they are not full of people all waiting to get a Saran wrapped apple and biscoff cookie. Second, they have both a salon and massage place INSIDE the lounge.

Boarded plane, flew home, sat in front of Loud Howard, watched the entire first season of The Last Of Us.

Once home I went through Global Entry (woo!) and headed to the Silver Line to take me to South Station where I took the Red Line to Downtown Crossing where I changed to the Orange Line to take it to North Station where I changed to the Green Line E branch to take it to Medford. The Sumner Tunnel is closed this summer in Boston and just for fun they decided to close the Government Center and Haymarket Stations so a destruction crew could remove a couple more bricks from the Government Center parking monstrosity.

End of trip.

The University of Utah uses Kiwi TCMS

Posted by Kiwi TCMS on August 16, 2023 04:13 PM

"University of Utah + Kiwi TCMS logos"

The University of Utah is a public research university in Salt Lake City, USA. It is the flagship institution of the Utah System of Higher Education and was established in 1850.

The University of Utah's School of Computing, founded as the Computer Science Department in 1965, has a long and distinguished record of high impact research. The university has provided large, automated testbeds since around the year 2000, funded by the National Science Foundation.

The Flux Research Group conducts research in operating systems, networking, security, and virtualization. The group consists of three faculty and over two dozen research staff, graduate students, and undergrads.

POWDER (the Platform for Open Wireless Data-driven Experimental Research) is flexible infrastructure enabling a wide range of software-defined experiments on the future of wireless networks. POWDER supports software-programmable experimentation on 5G and beyond, massive MIMO, ORAN, spectrum sharing and CBRS, RF monitoring, and anything else that can be supported on software-defined radios.

In the words of David M. Johnson, research staff:

The addition of Kiwi TCMS to our POWDER mobile wireless testbed helps to support the complex multi-system, end-to-end functional test and integration scenarios we see in the 5G/O-RAN/beyond mobile wireless space.

We use Kiwi TCMS as part of an on-demand environment that POWDER provides to users that can help them automate testing using a workflow approach, from CI-triggered orchestration from scratch in our cloud-like environment, through resource configuration and running test suites, to finally collecting results into private instances of Kiwi TCMS.

We use both the Stackstorm and Dagster workflow engines to execute our test and integration workflows. The stackstorm-kiwitcms library is a simple Stackstorm "integration pack" (Python source code in this case) that invokes and re-exports much of the core Kiwi TCMS XML-RPC API (with some minor sugar) into Stackstorm, so that each API function is exposed as a Stackstorm action (the fundamental unit of its workflows). This means that the workflows can orchestrate resources into test scenarios; configure the resources; create or instantiate Kiwi TCMS test runs/executions/metadata; execute tests; and push test results/status into Kiwi TCMS records, upload attachments, etc, for persistence.

We use a fork of Kiwi TCMS right now so that we could upload attachments to test runs via the API. That was a trivial change which made its way upstream as part of Kiwi TCMS version 12.1.


If you like what we're doing and how Kiwi TCMS supports various communities, please help us!

Installing and configuring Cilium on Kubernetes – Part 6

Posted by Fedora fans on August 16, 2023 09:38 AM

In the sixth part of the series "Installing and configuring Cilium on Kubernetes", we are going to talk about L7 policies.

Testing and applying an HTTP-aware L7 policy

In the simple scenario above, it was enough either to give tiefighter / xwing full access to the deathstar API or no access at all. But to provide the strongest security (e.g., to enforce least-privilege isolation) between microservices, each service that calls the deathstar API should be limited to making only the set of HTTP requests it needs to operate correctly. For example, consider that the deathstar service exposes some maintenance APIs that should not be called at random by imperial spaceships. To see this, you can run the following command:

kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port

A sample output of running the above command is shown in the image below:

cilium

While this is an illustrative example, unauthorized access like this can have undesirable security consequences.

L7 Policy with Cilium and Kubernetes

cilium_http_l3_l4_l7_gsg

Cilium can enforce HTTP-layer (i.e., L7) policies to limit which URLs tiefighter is allowed to access. Below is an example policy file that extends our original policy by limiting tiefighter to making only a POST API call to /v1/request-landing, while disallowing all other calls (including PUT /v1/exhaust-port).

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L7 policy to restrict access to specific HTTP call"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "POST"
          path: "/v1/request-landing"


Update the existing rule to apply the L7-aware policy protecting deathstar with the following command:

kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.12/examples/minikube/sw_l3_l4_l7_policy.yaml

We can now run the same test as above again, but we will see a different result:

kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

cilium-policy

And this other test:

kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port

cilium-policy

As you can see, with Cilium L7 security policies we can allow tiefighter to access only the API resources it needs on deathstar, implementing a “least privilege” security approach for communication between microservices. Note that the path is matched against the URL exactly; for example, if you want to allow anything under /v1/, you need to use a regular expression:

path: "/v1/.*"
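For example, a rules section that allows any GET request under /v1/ could look like the following sketch (the method and pattern here are illustrative, not taken from the tutorial's policy file):

rules:
  http:
  # match any path under /v1/ via regex
  - method: "GET"
    path: "/v1/.*"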

You can view the L7 policy using kubectl:

kubectl describe ciliumnetworkpolicies

and via the cilium CLI:

kubectl -n kube-system exec cilium-rwmwr -- cilium policy get

HTTP requests can also be monitored live using “cilium monitor”:

kubectl exec -it -n kube-system cilium-rwmwr -- cilium monitor -v --type l7

cilium-policy

The output above shows a successful response to a POST request, followed by a PUT request that is denied by the L7 policy.

Cleanup:

To clean up after these experiments, simply run the following commands:

kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/1.11.5/examples/minikube/http-sw-app.yaml
kubectl delete cnp rule1

Continuing on …

In this series, we walked through installing Cilium on Kubernetes and touched on some of its capabilities. For more information and to keep learning about Cilium, visit its website, documentation, and GitHub:

https://cilium.io

https://docs.cilium.io/en/stable

https://github.com/cilium/cilium


You can also use Cilium's labs at the following address to experiment with various scenarios:

https://isovalent.com/resource-library/labs/

We hope this series has been useful to you.


The post Installing and Configuring Cilium in Kubernetes – Part 6 first appeared on Fedora Fans.

Using Cockpit to graphically manage systems, without installing Cockpit on them!

Posted by Fedora Magazine on August 16, 2023 08:00 AM

It probably sounds too good to be true: the ability to manage remote systems using an easy-to-use, intuitive graphical interface – without installing extra software on the remote systems, enabling additional services, or making any other changes on them. This functionality, however, is now available with a combination of the recently introduced Python bridge for Cockpit and the Cockpit Client Flatpak! This allows Cockpit to manage remote systems, assuming only SSH access and that Python is installed on the remote host. Read on for more information on how this works and how to get started.

If you are not familiar with Cockpit, it is described on the project’s web site as a web-based graphical interface for servers. Cockpit is intended for everyone, especially those who are:

  • new to Linux (including Windows admins)
  • familiar with Linux and want an easy, graphical way to administer servers
  • expert admins who mainly use other tools but want an overview on individual systems

You can easily and intuitively complete a variety of tasks from Cockpit. These include tasks such as:

  • expanding the size of a filesystem
  • creating a network bond
  • modifying the firewall
  • viewing log entries
  • viewing real time and historical performance information
  • managing Podman containers
  • managing KVM virtual machines

and many additional tasks.

Objections to using Cockpit on systems

In the past, I’ve heard two main objections to using Cockpit on systems:

  1. I don’t want to run the Cockpit web server on my systems. Additional network services like this increase the attack surface. I don’t want to open another port in the firewall. I don’t want more HTTPS certificates in my environment to manage and maintain.
  2. I don’t want to install additional packages on my systems (I may not even have access to install additional packages). The more packages installed, the larger my footprint is, and the more attack surface there is. For me to install additional packages in a production environment, I have to go through a change management process, etc. What a hassle!

Let’s address these one at a time. For the first concern, you have actually had several options for connecting to Cockpit over SSH, without running the Cockpit web server, for quite some time. These options include:

  • The ability to set up a bastion host, which is a host that has the Cockpit web server running on it. You can then connect to Cockpit on the bastion host using a web browser. From the Cockpit login screen on the bastion host, you can use the Connect to option to specify an alternate host to log in to (refer to the LoginTo cockpit.conf configuration option). Another option is to authenticate to Cockpit on the bastion host and use the Add new host option. In either case, the bastion Cockpit host connects to these additional remote hosts over SSH (so only the bastion host in your environment needs to run the Cockpit web server).
  • You can use the Cockpit integration available with the upstream Foreman, or downstream Red Hat Satellite, to connect to Cockpit on systems in your environment over SSH.  
  • You can use the Cockpit Client Flatpak, which will connect to systems over SSH.
  • You can use the cockpit/ws container image. This is a containerized version of the Cockpit web server that acts as a containerized bastion host.

For more information on these options, refer to the Connecting to the RHEL web console, part 1: SSH access methods blog post. That post focuses on the downstream RHEL web console; however, the information also applies to the upstream Cockpit available in Fedora.

This brings me to the second concern, and the main focus of this article. This is the concern that I don’t want to install additional packages on the remote systems I am managing.  While there are several options for using the web console without the Cockpit web server, all of these options previously had a prerequisite that the remote systems needed to have at least the cockpit-system package installed.  For example, previously if you tried to use the Cockpit Client Flatpak to connect to a remote system that didn’t have Cockpit installed, you’d see an error message stating that the remote system doesn’t have cockpit-bridge installed. 

The Cockpit team has replaced the previous Cockpit bridge (implemented using C) with a new bridge written in Python.  For a technical overview of the function of the Cockpit bridge, and how the new Python bridge was implemented, refer to the recent Monty Python’s Flying Cockpit DevConf presentation by Allison Karlitskaya and Martin Pitt. 

This new Python bridge overcomes the previous limitation requiring Cockpit to be installed on the remote hosts.  

Using the Cockpit Client Flatpak

With the Cockpit Client Flatpak application installed on a workstation, we can connect to remote systems over SSH and manage them using Cockpit.

Installation

In the following example, I’m using a Fedora 38 workstation. Install the Cockpit Client Flatpak by simply opening the GNOME Software application and searching for Cockpit. Note that you’ll need to have Flathub enabled in GNOME Software.
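If you prefer the command line, the same installation can be done with flatpak directly. This sketch assumes the Flathub remote and the application ID org.cockpit_project.CockpitClient, which is the ID the client is published under on Flathub:

# Add the Flathub remote if it is not configured yet
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
# Install the Cockpit Client
flatpak install flathub org.cockpit_project.CockpitClient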


Using the Cockpit Client

Once installed, you’ll see the following when opening the Cockpit Client:


You can type in a hostname or IP address that you would like to connect to. To authenticate as a user other than the one you are currently using, use the user@hostname syntax. If this is not your first time using the Cockpit Client, a list of recently connected hosts will appear; in that case, you can simply click on a host name to reconnect.

If you have SSH key-based authentication set up, you’ll be logged in to the remote host using your key. Without SSH keys set up, you’ll be prompted to authenticate with a password. In either case, if it is your first time connecting to the host over SSH, you’ll be prompted to accept the host key fingerprint.

As a special case, you can log into your currently running local session by connecting to localhost, without authentication.  

Once connected, you’ll see the Cockpit Overview page:

Cockpit overview menu

Select the Terminal menu item in Cockpit to confirm that the remote system I’m logged in to does not have any Cockpit packages installed:

Cockpit Terminal view

Prerequisites for connecting to systems with Cockpit Client

There are several prerequisites for utilizing Cockpit Client to connect to a remote system. If you are familiar with managing remote hosts with Ansible, you’ll likely already be familiar with the prerequisites. They are the same:

  1. You must have connectivity to the remote system over SSH.
  2. You must have a valid user account on the remote system that you can authenticate with.
  3. If you need the ability to complete privileged operations in Cockpit, the user account on the remote system will need sudo privileges.

If you are connecting to a remote system that doesn’t have Cockpit installed, there are a couple of additional prerequisites:

  1. Python 3.6 or later must be installed on the remote host; you can verify this with a quick probe, as shown below. This is not usually an issue, with some exceptions such as Fedora CoreOS, which does not include Python by default.
  2. An older version of Cockpit Client cannot be used to connect to a newer operating system version. For example, if I installed Cockpit Client on my Fedora 38 workstation today and never updated it, it may not work properly to manage a Fedora 39 or Fedora 40 server in the future.
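If you want to sanity-check these prerequisites before connecting, a plain SSH probe from your workstation is enough; the user and host below are hypothetical:

# Verifies SSH connectivity, a valid account, and the installed Python version
ssh admin@server.example.com python3 --version
# If you need privileged operations, confirm sudo works too
# (-t allocates a TTY for the password prompt)
ssh -t admin@server.example.com sudo -v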

Frequently asked questions

Here are some frequently asked questions about this functionality:

Question: Cockpit is extendable via additional Applications.  Which Cockpit applications are available if I use the Cockpit Client to connect to a remote system that doesn’t have Cockpit installed?

Answer: Currently, Cockpit Client includes

  • cockpit-machines (virtual machine management)
  • cockpit-podman (Podman container management)
  • cockpit-ostree (used to manage rpm-ostree based systems)
  • cockpit-storaged (storage management)
  • cockpit-sosreport (for generating diagnostic reports)
  • cockpit-selinux (for managing SELinux)
  • cockpit-packagekit (for managing software updates)
  • cockpit-networkmanager (network management)
  • cockpit-kdump (kernel dump configuration) 

The Cockpit team is looking for feedback on what Cockpit applications you’d like to see included in the Cockpit Client. Post a comment below with your feedback. 

Question:  I connected to a remote system that doesn’t have Cockpit installed, but I don’t see Virtual Machines or one of the other applications listed in the menu.  I thought you just said these were included in the Cockpit Client Flatpak?

Answer:  When you login to a remote system that doesn’t have Cockpit packages installed, you’ll only see the menu options for underlying functionality available on the remote system.  For example, you’ll only see Virtual Machines in the Cockpit menu if the remote host has the libvirt-dbus package installed. 

Question: Can Cockpit applications available in the Cockpit Client be used with locally installed Cockpit applications on the remote host?  In other words, if I need a Cockpit application not included in the Cockpit Client, can I install just that single package on the remote host?  

Answer:  No, you cannot mix and match applications included in the Cockpit Client flatpak and those installed locally on the remote host.  For a remote host that has the cockpit-bridge package installed, Cockpit Client will exclusively use the applications that are installed locally on the remote host.  If the remote host does not have the cockpit-bridge package installed, Cockpit Client will exclusively use the applications bundled in the Cockpit Client Flatpak.  

Question:  Can I use Cockpit Client to connect to the local host?

Answer: Yes!  Simply open Cockpit Client and type in localhost and you’ll be able to manage the local host.  You don’t need to have any Cockpit packages installed on the local host if you use this method. You only need the Cockpit Client Flatpak.  

Question:  What Linux distributions can I connect to using the Cockpit Client?

Answer: Cockpit is compatible with a number of different Linux distributions. For more information, see the Running Cockpit page. If connecting to a remote system that doesn’t have Cockpit installed, keep in mind the previously mentioned requirement: do not connect to newer operating systems from an older Cockpit Client.

Question:  Does the Cockpit team have any future plans regarding this functionality? 

Answer: The Cockpit team is planning to add the ability to connect to remote hosts without Cockpit packages installed to the cockpit-ws container image. See the COCKPIT-954 ticket for more info.

Have more questions not covered here? Ask them in the comments section below!

Conclusion

The new Python bridge, and the corresponding ability to use the Cockpit Client to connect to remote systems without installing Cockpit, makes it incredibly easy to use Cockpit in almost any circumstance.

Try this out! It’s easy to do. Simply install the Cockpit Client Flatpak, and use it to connect to either your localhost or a remote system. Once you’ve tried it, let us know what you think in the comments below.

Bisecting Fedora kernel

Posted by Kamil Páral on August 15, 2023 03:07 PM

This post shows how to bisect a Fedora kernel to find the source of a regression. I needed that recently and I found no good guide, so I’m at least capturing my notes here, perhaps you find it useful. This approach can be used to identify which exact commit caused a bad kernel behavior on your hardware, and then report it to kernel maintainers. Note, you need to have a reliable way of reproducing the problem. If it happens randomly and infrequently, it’s much harder to debug.

0. Try the latest Rawhide kernel

Before you spend too much time on this, it’s always worth a shot to test the latest Rawhide kernel. Perhaps the bug is fixed already?

Usually the kernel consists of these installed packages: kernel, kernel-core, kernel-modules, kernel-modules-core, kernel-modules-extra. But check what you actually have installed on your system, e.g. with: rpm -qa | grep '^kernel' | sort.

Install the latest Rawhide kernel:

sudo dnf update --setopt=installonly_limit=0 --repo fedora --releasever rawhide kernel{,-core,-modules,-modules-core,-modules-extra}

You want to use --setopt=installonly_limit=0 throughout this exercise to make sure you don’t accidentally remove a working kernel from your system and don’t end up with just broken ones (there’s a limit of three kernels installed at the same time by default). But it means you’ll need to remove tested kernels manually from time to time, otherwise you run out of space in /boot.

Reboot and keep pressing F8 during startup to display the GRUB boot menu. Make sure to select the newly installed kernel, boot it, test it. Note down whether it’s good or bad. If the problem is still there, we’ll need to continue debugging.

Note: When you want to remove that tested kernel, obviously you can’t be currently running from it. Then use standard dnf remove to get rid of it, or use dnf history for a more convenient way (e.g. dnf history undo last).

I. Narrow down the issue in Fedora-packaged kernels

As the first step, it’s useful to figure out which Fedora-packaged kernel is the last one with good behavior (a “good kernel”), and which one is the first with bad behavior (a “bad kernel”). That will help you narrow down the scope. It’s much faster to download and install already-built kernels than to compile your own (which we’ll do later).

Most probably you’re currently running a bad kernel (because you’re reading this). So reboot, display the GRUB boot menu, and boot an older kernel. See if it’s good or bad, and note it down. Unless the problem is very recent, all available kernels (usually three) in the GRUB menu will be bad. It’s time to start downloading older kernels from Koji. Use a reasonable strategy, e.g. install a kernel that is one or several months old, and gradually halve the interval until you find the latest good kernel. You don’t need to worry about using kernels from other Fedora releases (as you can see in their .fcNN suffix); they are standalone and work in any release. You can download the kernel subpackages manually, or use the koji command (from the koji package), e.g.:

koji download-build --arch x86_64 kernel-6.5.0-0.rc6.43.fc39

That downloads many more subpackages than you need, so install just those needed (see the previous section), e.g. like this:

sudo dnf --setopt=installonly_limit=0 install ./kernel{,-core,-modules,-modules-core,-modules-extra}-6.5*.rpm

For each picked kernel, install it, boot into it, test it, note down whether it’s good or bad. Continue until you’ve found the latest good packaged kernel and the first bad packaged kernel.

II. Find git commits used for building identified good and bad kernels

Now that you have the closest good and bad packaged kernel, we need to figure out which git commits from the upstream Linux kernel were used to build them. In some cases, the git commit hash is included directly in the RPM filename. For example in my case, I reported that kernel-6.4.0-0.rc0.20230427git6e98b09da931.5.fc39 is the last good kernel, and kernel-6.4.0-0.rc0.20230428git33afd4b76393.7.fc39 is the first bad kernel. From those filenames, you can see that git commit 6e98b09da931 is good and git commit 33afd4b76393 is bad.

The commit hash is not always part of the filename, e.g. in the case of kernel-6.5.0-0.rc6.43.fc39. In that case, you need to download the .src.rpm file from that build, either manually from Koji, or using:

koji download-build --arch src kernel-6.5.0-0.rc6.43.fc39

Unpack that .src.rpm (my favorite decompress tool is deco), find the linux-*.tar.xz archive, and run the following command (adjust the archive filename):

$ xzcat -qq linux-6.5-rc6.tar.xz | git get-tar-commit-id
2ccdd1b13c591d306f0401d98dedc4bdcd02b421

(This command is documented in the kernel.spec file, which is also in that directory.) Now you know the git commit hash used for that kernel build. Figure out the commits for both the good and the bad kernel you identified.

III. Use git bisect to find the exact commit that broke it

It’s time to clone the upstream Linux kernel repo:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git ~/src/linux

And also the Fedora distgit kernel repo:

fedpkg clone -a ~/distgit/kernel

We’ll now use git bisect to arrive at the breaking commit which caused the problem. After each step, we’ll need to build the kernel, test it, and mark it as good or bad. Let’s start:

cd ~/src/linux
git bisect start
git bisect good YOUR_GOOD_COMMIT
git bisect bad YOUR_BAD_COMMIT

Git now prints a commit hash to be tested (and switches the repository to that commit), and an estimate of how many steps remain. We now need to take the current contents of the source code and build our own kernel.

Note: When building the kernel, I was advised to avoid the overhead of packaging to speed up the process. I’m sure it’s good advice, but I didn’t find a good guide on how to do that (including how to retrieve the Fedora kernel config, build the kernel manually, copy it to the right places, create the initramfs, create a boot option in GRUB, etc). So I just ran the whole process including packaging. On my machine, the compilation took about 40 minutes and packaging took 10 minutes, and I needed to do about 11 rounds, so it was an OK tradeoff for me. (If you can write a guide on how to do it without packaging, please do and link it in the comments, I’d love to read it.)

Let’s create a tarball of the current source code like this:

git archive --prefix=linux-local/ HEAD | xz -0 -T0 > linux-local.tar.xz

Usually the tarballs have a version number in both the filename and the included directory (which is then also matched in the spec file). You can do that if you wish; I didn’t want to spend too much time on throwaway builds, so I just used a static filename and overwrote it each time.

Let’s move the tarball to the distgit repo:

mv ~/src/linux/linux-local.tar.xz ~/distgit/kernel/

Now we need to adjust the distgit spec file a bit:

cd ~/distgit/kernel
# edit kernel.spec

I made the following changes to the spec file:

-# define buildid .local
+%define buildid .local
-%define specrpmversion 6.4.9
+%define specrpmversion 6.4.0
-%define specversion 6.4.9
+%define specversion 6.4.0
-%define tarfile_release 6.4.9
+%define tarfile_release local
-%define specrelease 200%{?buildid}%{?dist}
+%define specrelease 0.gitYOUR_TESTED_COMMIT%{?buildid}%{?dist}

Now we can start the build:

nice fedpkg mockbuild --with baseonly --with vanilla --without debuginfo

The options --with baseonly and --without debuginfo make sure we don’t build unnecessary stuff. --with vanilla was needed because the Fedora-specific patches didn’t apply to the older source code.

After a long time, your results should be available in results_kernel/ and look something like this:

$ ls -1 results_kernel/6.4.0/0.git6e98b09da931.local.fc38/
build.log
hw_info.log
installed_pkgs.log
kernel-6.4.0-0.git6e98b09da931.local.fc38.src.rpm
kernel-6.4.0-0.git6e98b09da931.local.fc38.x86_64.rpm
kernel-core-6.4.0-0.git6e98b09da931.local.fc38.x86_64.rpm
kernel-devel-6.4.0-0.git6e98b09da931.local.fc38.x86_64.rpm
kernel-devel-matched-6.4.0-0.git6e98b09da931.local.fc38.x86_64.rpm
kernel-modules-6.4.0-0.git6e98b09da931.local.fc38.x86_64.rpm
kernel-modules-core-6.4.0-0.git6e98b09da931.local.fc38.x86_64.rpm
kernel-modules-extra-6.4.0-0.git6e98b09da931.local.fc38.x86_64.rpm
kernel-modules-internal-6.4.0-0.git6e98b09da931.local.fc38.x86_64.rpm
kernel-uki-virt-6.4.0-0.git6e98b09da931.local.fc38.x86_64.rpm
root.log
state.log

See that all the RPMs carry the git commit hash identifier that you specified in the spec file. Now you just need to install the kernel (as shown in a previous section), boot it (make sure to display the GRUB menu and verify that the correct kernel is selected), and test it.

Note: If you have Secure Boot enabled, you’ll need to disable it in order to boot your own kernel (or figure out how to sign it yourself). Don’t forget to re-enable it once this is all over.

Once you’ve determined whether this kernel is good or bad, tell it to git bisect:

cd ~/src/linux
git bisect good   # or bad

And now the whole cycle repeats. Create a new archive using git archive, move it to the distgit directory, adjust the specrelease field in kernel.spec to match the new commit hash, and use fedpkg to build another kernel. Eventually, git bisect will print out the exact commit that caused the problem.
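Putting it all together, one bisect iteration looks roughly like this sketch (using the paths and spec edits from above; the kernel.spec edit itself is still a manual step):

cd ~/src/linux
git archive --prefix=linux-local/ HEAD | xz -0 -T0 > linux-local.tar.xz
mv linux-local.tar.xz ~/distgit/kernel/
cd ~/distgit/kernel
# edit kernel.spec: set specrelease to 0.git<current commit hash>.local
nice fedpkg mockbuild --with baseonly --with vanilla --without debuginfo
# install the freshly built RPMs, reboot into the new kernel, test it, then:
cd ~/src/linux
git bisect good   # or: git bisect bad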

IV. Report your findings

Report the problem and the identified breaking commit in Red Hat Bugzilla under the kernel component. Please also save and attach the bisect log:

cd ~/src/linux
git bisect log > git-bisect-log.txt

Then also report this problem (possibly a regression) to the kernel upstream and mention it in the RH Bugzilla ticket. Thanks and good luck.

Backward compatibility in syslog-ng by using the version number in syslog-ng.conf

Posted by Peter Czanik on August 15, 2023 11:22 AM

Many users are annoyed by the version number included in the syslog-ng configuration. However, it ensures backward compatibility in syslog-ng. It is especially useful when updating to syslog-ng 4 from version 3, but also when updating within the same major version.

Read more about it at https://www.syslog-ng.com/community/b/blog/posts/backward-compatibility-in-syslog-ng-by-using-the-version-number-in-syslog-ng-conf
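As a minimal illustration, the version number is a declaration at the very top of the configuration file. The number below is an example; it should match the syntax the config was written for:

# first line of a typical /etc/syslog-ng/syslog-ng.conf
@version: 3.38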

syslog-ng logo

New responsibilities

Posted by Bastien Nocera on August 14, 2023 09:31 AM

As part of the same process outlined in Matthias Clasen's "LibreOffice packages" email, my management chain has made the decision to stop all upstream and downstream work on desktop Bluetooth, multimedia applications (namely totem, rhythmbox and sound-juicer) and libfprint/fprintd. The rest of my upstream and downstream work will be reassigned depending on Red Hat's own priorities (see below), as I am transferred to another team that deals with one of a list of Red Hat’s priority projects.

I'm very disappointed, because those particular projects were already starved for resources: I spent less than 10% of my work time on them in the past year, with other projects and responsibilities taking most of my time.

This means that, in the medium-term at least, all those GNOME projects will go without a maintainer, reviewer, or triager:
- gnome-bluetooth (including Settings panel and gnome-shell integration)
- totem, totem-pl-parser, gom
- libgnome-volume-control
- libgudev
- geocode-glib
- gvfs AFC backend

Those freedesktop projects will be archived until further notice:
- power-profiles-daemon
- switcheroo-control
- iio-sensor-proxy
- low-memory-monitor

I will not be available for reviewing libfprint/fprintd, upower, grilo/grilo-plugins, gnome-desktop thumbnailer sandboxing patches, or any work related to XDG specifications.

Kernel work, reviews and maintenance, including recent work on SteelSeries headset and Logitech devices kernel drivers, USB revoke for Flatpak Portal support, or core USB is suspended until further notice.

All my Fedora packages were orphaned about a month and a half ago; it's likely that some are still orphaned, if there are takers. RHEL packages were unassigned about 3 weeks ago and have since been reassigned, so I cannot point to the new maintainer(s).

If you are a partner, or a customer, I would recommend that you get in touch with your Red Hat contacts to figure out what the plan is going forward for the projects you might be involved with.

If you are a colleague that will take on all or part of the 90% of the work that's not being stopped, or a community member that was relying on my work to further advance your own projects, get in touch, I'll do my best to accommodate your queries, time permitting.

I'll try to make sure to update this post, or create a new one if and when any of the above changes.

Week 32 in Packit

Posted by Weekly status of Packit Team on August 14, 2023 12:00 AM

Week 32 (August 8th – August 14th)

  • Two new configuration options for filtering tags when looking up the latest upstream release were introduced: upstream_tag_include and upstream_tag_exclude. Each should contain a Python regex that can be used as an argument to re.match; see the sketch after this list. (packit#2030, packit-service#2138)
  • Retriggering of pull-from-upstream via a comment will now use the correct configuration file from the default dist-git branch. (packit-service#2140)
  • The pull-from-upstream job can now be used with upstream repos that are not hosted on a supported git forge. (packit-service#2137)
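For illustration, the new options might look like this in a package's .packit.yaml; the key placement and the regexes themselves are assumptions for this sketch, not taken from the release notes:

# Only consider 4.x release tags, but skip release candidates
upstream_tag_include: "^v4\\..*"
upstream_tag_exclude: "^v4\\..*-rc.*"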

Episode 388 – Video game vulnerabilities

Posted by Josh Bressers on August 14, 2023 12:00 AM

Josh and Kurt ask what a vulnerability is, framed through video games. Security loves to categorize every bug as either a security vulnerability or not a security vulnerability. But in reality, nothing is so simple. Everything is a question of risk, not vulnerability, and talking about video games can help us have this discussion better.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_388_Video_game_vulnerabilities.mp3

Show Notes

Industrializing Machine Learning

Posted by ! Avi Alkalay ¡ on August 13, 2023 12:35 PM

I’ve been doing Machine Learning industrialization for more than two years, and I’m thrilled to see it featured by McKinsey as a top-2 item in its 2023 tech trends!


Industrializing ML is about applying software engineering best practices to the whole AI modeling process, starting from its first line of code. It is about Data Scientists focusing on math and stats while the AI artifact is cast as a software product aimed at production environments. This is different from MLOps, which is commonly positioned as a mere wrapping activity that happens after, and separate from, AI modeling and before production. In the whole industrialization practice, MLOps is a subset activity that sits between, but quite apart from, both the Data Scientists' work and the infrastructure. Industrializing Machine Learning contains MLOps, plus other concepts that are even more important.

The term “industrial” is accurate precisely to antagonize with the artisanal way that Machine Learning squads usually operate nowadays. It’s common to see a lot of mathematics, good statistics, but few software engineering best practices, little DevOps, few design patterns, minimal automation, and limited standardization.

I practically invented Machine Learning industrialization for myself, out of necessity and intuition, when I was at Loft in 2021. Work that I proposed and led there allowed us to scale from 4 models that were laborious to maintain and monitor to over 70 models, without growing the team of Data Scientists. Those 70+ models are now easy to maintain, audit, observe, reproduce, retrain, and find and handle in general.

Also on my LinkedIn.

Flock 2023 trip report

Posted by Tomas Tomecek on August 13, 2023 06:00 AM

My first conference outside of Brno since the pandemic. I had forgotten how stressful travelling is for me. I didn't have to wait long to be reminded why:

  • traffic jam in Brno
  • 90 minutes flight delay
  • a downpour of people at the airport
  • border control scanners not made for my height
  • missing connections because of delays
  • never-ending transfers

All of this frustration was worth it, though, to absorb the energy and magic of Flock.