Fedora People

Change

Posted by Zach Oglesby on August 20, 2023 04:00 AM

As the old saying goes, “The only constant in life is change.” My life has been ruled by it, moving around as a kid, life in the military, and the same old stuff we all deal with as we age and grow.

For the last 15 years, my wife has been the only constant in my life. We met in Delaware while I was stationed there. We fell in love, got married, and I scooped her away from the only home she had ever known to go live in Spain with me.

We were young, very young. I now know I was dealing with the early stages of PTSD; having met her just weeks after coming home from a rough deployment, she gave me an escape from my pain. The thing about escapes is that they only work for so long, and eventually, I had to deal with my issues. It was not easy, and sometimes she bore the brunt of that pain. Anyone who knows me can tell you I don’t express my emotions well; she, on the other hand, wore all of hers on her sleeve. Needless to say, this difference resulted in a lot of tension, frustration, and confusion. We worked through it, but not absolutely. We had our first child in Spain, I got out of the military, and we had two more kids in Maryland, but tensions always remained.

Tension eventually rubs raw, like a dog licking a wound; the longer it lasts, the worse it gets. We both tried to change and adapt for the sake of the other, but changing who you are is difficult, if not impossible, and it never worked.

So here I am 15 years later, looking change in the eyes again. We are separating, and my home will not be hers for the first time since 2008. It’s hard to explain; we don’t hate each other or even really fight, but we are distant, and I can sense some resentment in her that I can’t bear. I am thankful that we get along and can raise our kids together, even if we are apart. It will be an adjustment for all of us, but I pray they will understand and feel our love for them.

I have lived with my wife for longer than anyone else (I lived with my mom until I was 11 and my dad until I was 18). If I was not as open as she wanted, she still knew me better than anyone else. It’s going to be strange to live alone again (part-time), it’s going to be hard not to see my kids every day, and it will be difficult not to be upset. Still, in the end, I will never feel regret. I will always love her, be glad for our time together, and forever cherish our children.

XMPP onion

Posted by Casper on August 19, 2023 08:49 AM

New concept, new design, new model.

We are going to see how to install the most secure instant messaging system of all time. We will use the Prosody server, a Tor router, and Gajim as the client. These components will be installed at the system level, on a single machine: a workstation or a laptop. The security of this concept rests on its simplicity. Choosing the shortest path for message transit is deliberate, in order to reduce latency as much as possible, like a peer-to-peer (P2P) communication mode. In my tests, the latency is under one second; it is nearly instantaneous.

This short guide shows how to manually install an instant messaging system on your own machine. All the actions described here are reversible. For long-term use, the data to back up is listed at the very end. Docker/Podman containers are not on the agenda: the architecture would be too complicated and difficult to set up.

Design

Who is it for?

Everyone. You don't need to own a dedicated server. This method applies to anyone's "client machine" or "workstation". We all own at least one desktop computer or laptop. If you have never had a Jabber account, welcome to the world of Jabber/XMPP through my method.

At the time of writing, the Poezio client cannot be used with my method. The developers are doing everything they can to resolve the issues in the "slixmpp" library. Maybe it will work in 6 months, or in a year. I will publish a new article if the situation changes.

I have tested with the Gajim and Profanity clients, but I will only cover Gajim here. I have not tested with Dino either.

Messages in transit

Messages will transit through the Tor network, also called the Dark Net. The Tor network is a sub-network within the network. It offers a number of possibilities that we are going to exploit in this concept. To learn more, I invite you to watch Alex Winter's talk (in English) "The Dark Net Isn't what you think. It's actually key to our privacy" from the TEDxMidAtlantic event:

https://youtube.com/watch?v=luvthTjC0O1

Advantages of the Tor network

By going through the Tor network, this concept does away with the usual problems. You don't need to buy a domain name (.org, .net, etc.). Since there is no domain name, there is no need for SSL certificates either. There is no connection-encryption problem, because all traffic is encrypted by default.

As for server-side problems, with my method there is no need to configure NAT routing rules or the machine's firewall.

If you have a laptop, the connection will work wherever you are, even while traveling, and even through a public WiFi hotspot.

No remote servers

The keyword of this concept is lightness. It requires no infrastructure and no dedicated server. Prosody is an extremely lightweight Jabber server and puts no load on the machine. On a client machine, it will be invisible.

The goal is to do away with remote servers. No need to rent a machine in a datacenter, and no need to buy a second box left running 24/7. For the first time in the history of Jabber/XMPP, communication will take place over a direct line.

There is a hidden advantage: there is no intermediate server between two correspondents. The two correspondents communicate peer to peer, linked by the Tor routers installed on their own machines. It is a form of P2P, and I sincerely believe the security level is higher.

The two correspondents are in "P2P", over a direct line, except that the Tor network anonymizes the connection between them. That is a basic feature of the Tor network.

Still in pursuit of lightness, the absence of SSL certificates simplifies system administration as much as possible. No maintenance will be necessary. Once installed, this system will keep working for a good while.

Moreover, Prosody is a reliable Jabber server that never causes problems with SELinux. It's a detail, admittedly, but that's not the case with other Jabber servers. It is the perfect program for a client machine.

XMPP client

Technically, you can use any XMPP client. The configuration will be slightly different than usual, because the client does not connect to the Internet, but directly to the Prosody server on the same machine, via the shortest path. Connections are established on the localhost interface.

The C2S (Client-to-Server) listening port must be unreachable from outside the machine. The only expected connection is from the machine's own user. I chose an arbitrary, non-standard port, 15222, blocked by default at the firewall.
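
To double-check that nothing has opened this port on the machine's firewall, you can list the explicitly opened ports (an optional sanity check, assuming firewalld is in use; 15222 should not appear):

firewall-cmd --list-ports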

When connections go through the localhost interface, SSL/TLS is useless; in this case, it is disabled. The two correspondents' clients can still enable end-to-end encryption (E2EE) with OMEMO, but remember that all traffic is already encrypted by the Tor network. No extra Prosody configuration is needed for OMEMO to work correctly.

Jabber ID

This method guarantees the preservation and persistence of the user's Jabber identity. A Jabber identity, a Jabber address, cannot be usurped.

The security model rests on the .onion domain names provided by the Tor network. In this model, each correspondent has a unique Jabber address thanks to the .onion domain name. Onion addresses are unique, so if an address is no longer used, it cannot be recovered for the purpose of identity theft. It is lost forever.

Each correspondent hosts, on their own machine, the XMPP server that serves their .onion domain name. A correspondent may find that the recipient's "XMPP server" is unreachable. That is a possible scenario, and it is not a problem: messages will be stored on the originating machine and sent once the remote XMPP server is reachable again. In a peer-to-peer communication mode, if a peer is offline, the communication cannot take place. That's only logical.

The XMPP protocol has everything needed to handle message-delivery problems (and deferred delivery).

A user can create several Jabber addresses by adding two or three user accounts on the Prosody server. All the addresses will share the machine's .onion domain name. But this feature is of no real interest... One onion address per person is already plenty.

Drawbacks

This design is not perfect; it suffers from a compatibility problem with the rest of the XMPP network.

In this minimalist model, a correspondent with an onion address cannot contact someone with a clearnet domain (such as .org, .net, .com, etc.). The initial objective of this model is to democratize onion addresses. If every person hosts their own address on their own workstation, then this model works. It is only a concept. A design.

Its solution is beyond the scope of this article.

As for the Jabber addresses, .onion domain names cannot be customized or chosen by the user. They are generated automatically by the Tor router and consist of randomly mixed letters and digits. But they are unique.

On to the technical part

Installing and configuring the Tor router

It is Free Software, protected by a BSD license: it cannot be subjected to a patent. Start by installing it:

dnf install tor

The Tor router has several modes. The best-known is "relay" mode, which relays Tor network traffic through your machine. That mode is not what we want here; we will configure it in plain "router" mode instead. It merely connects our machine to the Tor network, without relaying any traffic.

My /etc/tor/torrc file:

Log notice stdout
SocksPort [::1]:9050 PreferIPv6
SocksPort 127.0.0.1:9050 PreferIPv6
#SocksPort 172.17.0.1:9050 PreferIPv6 # Optional for Docker
ClientPreferIPv6ORPort 1
HiddenServiceDir /var/lib/tor/hidden_service1/
HiddenServicePort 5269 [::1]:5269

Then start the process in the background:

systemctl enable tor
systemctl start tor

(The process takes about 2 minutes to start, depending on the machine.)
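
To check that the router has finished connecting to the Tor network, you can watch for the bootstrap messages in the journal (an optional check; the exact wording may vary between Tor versions):

journalctl -u tor | grep Bootstrapped

Once a line like "Bootstrapped 100% (done): Done" appears, the router is ready.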

Retrieving the .onion domain name

You must switch to the "root" user to access this information. As explained earlier, the Tor router automatically generates a unique onion domain name. It writes it into a text file, but modifying that file will not change the domain name: cryptographic keys determine the domain names, and they cannot be modified without being corrupted.

All of the files are stored in this directory:

/var/lib/tor/hidden_service1/

The information itself is stored in this file:

cat /var/lib/tor/hidden_service1/hostname

Write it down; we will need it later.

Installing and configuring Prosody

It is available for download in the Fedora repository. Start by installing it on the machine:

dnf install prosody

My config file:

/etc/prosody/prosody.cfg.lua

I put it online so it can be fetched more quickly with curl (it is 300 lines long).

curl -o /etc/prosody/prosody.cfg.lua https://dl.casperlefantom.net/pub/prosody.cfg.lua.txt

Or (through Tor):

torsocks curl -o /etc/prosody/prosody.cfg.lua http://uhxfe4e6yc72i6fhexcpk4ph4niueexpy4ckc3wapazxqhv4isejbnyd.onion/pub/prosody.cfg.lua.txt

Let's review what needs to be modified and adapted to your setup:

----------- Virtual hosts -----------
-- You need to add a VirtualHost entry for each domain you wish Prosody to serve.
-- Settings under each VirtualHost entry apply *only* to that host.

--VirtualHost "localhost"
-- Prosody requires at least one enabled VirtualHost to function. You can
-- safely remove or disable 'localhost' once you have added another.

-- Section for VirtualHost onion address

VirtualHost "p4ac3ntp3ai643k3h5f7ubggg7zmdf7ddsnfybn5rejy73vqdcplzxid.onion"
    ssl = { }

Component "rooms.p4ac3ntp3ai643k3h5f7ubggg7zmdf7ddsnfybn5rejy73vqdcplzxid.onion" "muc"
    name = "Hidden Chatrooms"
    modules_enabled = { "muc_mam" }
    restrict_room_creation = "local"
    ssl = { }

The VirtualHost configuration lives directly in the main config file, to keep things as simple as possible. Replace the onion addresses with your own .onion domain name. That's all; the config file is then ready to use.

There are useless files that cannot simply be deleted:

  • /etc/prosody/conf.d/example.com.cfg.lua
  • /etc/prosody/conf.d/localhost.cfg.lua

To keep them from interfering with the main config file, you can replace their contents with a single comment line. The comment indicates that the file is not used:

echo "-- Fichier vide" > /etc/prosody/conf.d/example.com.cfg.lua
echo "-- Fichier vide" > /etc/prosody/conf.d/localhost.cfg.lua

If you delete these files with "rm", they will be recreated later, when the Prosody RPM is updated. Their content causes problems; if you overlook this detail, the whole onion XMPP project can break down without you understanding why.

Tools for managing modules

Prosody modules have their own module manager, named "luarocks". Without this program, the prosodyctl command cannot install modules, so install it:

dnf install luarocks lua-devel

Lua scripts to install

Le module "mod_onions" a besoin de 2 programmes Lua pour marcher. Il sont disponibles dans le depot et il faut les installer sur le système :

dnf install luajit lua-bit32

Next, we can install the module itself:

prosodyctl install --server=https://modules.prosody.im/rocks/ mod_onions

After running the prosodyctl command, you can remove luarocks with the dnf history if you wish, since we will not use it again.

Le module "mod_onions" permet de rediriger vers Tor toutes les connexions sortantes de Prosody. Lors de sa tentative de connexion à un autre serveur, il va envoyer une requête pour établir une connexion TLS. C'est écrit comme ça dans le code, et ce n'est pas configurable. Le serveur du correspondant répond ensuite qu'il ne peut pas établir de connexion TLS, donc la connexion échoue. Pour modifier le comportement de base du programme, on peut passer par un module.

Here is my custom module, to be installed at:

/var/lib/prosody/custom_plugins/share/lua/5.4/mod_s2s_never_encrypt.lua

local libev = module:get_option_boolean("use_libevent")

local function disable_tls_for_baddies_in(event)
    local session = event.origin
    module:log("debug", "disabling tls on incoming stream from %s...", tostring(session.from_host));
    if libev then session.conn.starttls = false; else session.conn.starttls = nil; end
end

local function disable_tls_for_baddies_out(event)
    local session = event.origin
    module:log("debug", "disabling tls on outgoing stream from %s...", tostring(session.to_host));
    if libev then session.conn.starttls = false; else session.conn.starttls = nil; end
end

module:hook("s2s-stream-features", disable_tls_for_baddies_in, 600)
module:hook("stanza/http://etherx.jabber.org/streams:features", disable_tls_for_baddies_out, 600)

SELinux denials

We will skip the experimentation phase and go straight to the solution.

SELinux blocks two things. It prevents Prosody from listening on port 15222, because that port belongs to a range not reserved for any specific service, so SELinux stops Prosody from picking a "random" port. A network service must use a port reserved for that service; that's only logical.

This mode of operation is allowed by the "nis_enabled" boolean:

setsebool -P nis_enabled on

Alternatively, you can change the SELinux label of port 15222; it's up to you (check the state of the boolean first):

semanage port -a -t prosody_port_t -p tcp 15222
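
To verify that the label was applied, you can list the ports associated with Prosody in the SELinux policy (an optional check):

semanage port -l | grep prosody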

The second denial prevents Prosody from connecting to Tor's SOCKSv5 proxy. In other words, Prosody tries to connect to the TCP socket of another service (on the same machine). That is not the usual mode of operation for a network service.

To solve the problem, create a text file (prosody-connect-tor-port.txt) containing this single line:

type=AVC msg=audit(1673295193.979:4392): avc:  denied  { name_connect } for  pid=935298 comm="prosody" dest=9050 scontext=system_u:system_r:prosody_t:s0 tcontext=system_u:object_r:tor_port_t:s0 tclass=tcp_socket permissive=0

Then use audit2allow to generate an SELinux policy module:

cat prosody-connect-tor-port.txt | audit2allow -M prosody-connect-tor-port

Next, install the module:

semodule -i prosody-connect-tor-port.pp
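
You can confirm that the module is loaded by listing the installed policy modules (an optional check):

semodule -l | grep prosody-connect-tor-port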

Removing the useless certificates

When prosody is installed, an x509 certificate is generated automatically. It contains generic information, it is valid for one year, and it is self-signed. Here is an example of its contents:

Issuer: C = --, ST = SomeState, L = SomeCity, O = SomeOrganization, OU = SomeOrganizationalUnit, CN = vulcain, emailAddress = root@vulcain
Subject: C = --, ST = SomeState, L = SomeCity, O = SomeOrganization, OU = SomeOrganizationalUnit, CN = vulcain, emailAddress = root@vulcain
Validity:
    Not Before: Jun 17 07:02:32 2023 GMT
    Not After : Jun 16 07:02:32 2024 GMT

It is filled with nonsense, plain and simple. It is unusable, and it will cause problems when it expires in a year. So I recommend wiping it:

echo "" > /etc/pki/prosody/localhost.crt
echo "" > /etc/pki/prosody/localhost.key

Technically, we don't need it.

Starting prosody

This is the moment. The process runs in the background.

systemctl enable prosody
systemctl start prosody

You will notice that it consumes almost no resources on the machine.
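
To confirm that the server is running and that the C2S port only listens on the loopback interface, two quick optional checks (ss is provided by the iproute package):

prosodyctl status
ss -tlnp | grep 15222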

Creating a user account

To create a user account, you can go through either Gajim or the prosodyctl command. I will not detail the Gajim procedure; feel free to choose whichever suits you best. For the name, I recommend something short, because the final address is already very long.

Here are the commands using prosodyctl:

prosodyctl adduser user@adresse.onion
prosodyctl passwd user@adresse.onion

(Nobody can connect to the client listening port, so nobody can test the strength of the password.)

Installing and configuring Gajim

On the Gajim side, the configuration is relatively simple. It is, again, Free Software, protected by a GPLv3 license: nobody can patent this software. It is in the Fedora repository, so start by installing it:

dnf install gajim

Then follow the signposted route. Follow the arrows!

Aller dans "Comptes" > Modifier le compte > Ajouter un compte

(Screenshots: Gajim account configuration steps)

Installing and configuring OMEMO is not covered in this article.

In summary

In the end, no matter which client you use, here is the key information to enter:

  • Account name (username@hostname, also called the JID)
  • Server: localhost
  • Port: 15222
  • Use an unencrypted connection (disable TLS)

Data to back up

This system is reliable over the long term only if it is backed up. Let's take stock here, to guarantee the longevity of your installation.

To generate a backup copy of the system (as root):

tar -Jcf xmpp-onion-system.tar.xz /etc/tor/torrc /var/lib/tor/hidden_service1/ /etc/prosody/prosody.cfg.lua /etc/prosody/conf.d/example.com.cfg.lua /etc/prosody/conf.d/localhost.cfg.lua /var/lib/prosody/ /etc/pki/prosody/localhost.crt /etc/pki/prosody/localhost.key

User data (not as root):

$ tar -Jcf xmpp-onion-user.tar.xz .config/gajim/ .local/share/gajim/

The advantage is that if you decide to switch clients, there is no need to back everything up again.

To restore from the backup (as root):

pushd /
tar -Jxf /home/user/xmpp-onion-system.tar.xz
popd

And to restore the user data:

$ tar -Jxf xmpp-onion-user.tar.xz

Simple and effective. But you should also create redundancy by backing up your $HOME. Redundancy is something we need.

You can also restore the backup onto a Fedora LiveUSB after booting from it. That will work too.

And it works.

We have just seen how to install and set up a decentralized, centerless messaging system over an anonymization network. Help and support are available in the fedora@chat.jabberfr.org (XMPP) chat room, where I am permanently present.

TransFLAC: Convert FLAC to lossy formats

Posted by Fedora Magazine on August 18, 2023 08:00 AM

FLAC: The Lossless Audio Compression Format

FLAC, or Free Lossless Audio Codec, is a lossless audio compression format that preserves all the original audio data. This means that FLAC files can be decoded to an identical copy of the original audio file, without any loss in quality. However, lossless compression typically results in larger file sizes than lossy compression, which is why a method to convert FLAC to lossy formats is desirable. This is where TransFLAC can help.

FLAC is a popular format for archiving digital audio files, as well as for storing music collections on home computers. It is also becoming increasingly common for music streaming services to offer FLAC as an option for high-quality audio.

For portable devices, where storage space is limited, lossy audio formats such as MP3, AAC, and OGG Vorbis are often used. These formats can achieve much smaller file sizes than lossless formats, while still providing good sound quality.

In general, FLAC is a good choice for applications where lossless audio quality is important, such as archiving, mastering, and critical listening. Lossy formats are a good choice for applications where file size is more important, such as storing music on portable devices or streaming music over the internet.

TransFLAC: Convert FLAC to lossy formats

TransFLAC is a command-line application that converts FLAC audio files to a lossy format at a specified quality level. It can keep both the FLAC and lossy libraries synchronized, either partially or fully. TransFLAC also synchronizes album art stored in the directory structure, such as cover, albumart, and folder files. You can run TransFLAC interactively in a terminal window, or you can schedule it to run automatically using applications such as cron or systemd.

The following four parameters must be specified (an example invocation follows the list):

  1. Input FLAC Directory: The directory to recursively search for FLAC audio files. The case of the directory name matters. TransFLAC will convert all FLAC audio files in the directory tree to the specified lossy codec format. The program will resolve any symlinks encountered and display the physical path.
  2. Output Lossy Directory: The directory to store the lossy audio files. The case of the directory name matters. The program will resolve any symlinks encountered and display the physical path.
  3. Lossy Codec: The codec used to convert the FLAC audio files. The case of the codec name does not matter. OPUS generally provides the best sound quality for a given file size or bitrate, and is the recommended codec.
    Valid values are: OPUS | OGG | AAC | MP3
  4. Codec Quality: The quality preset used to encode the lossy audio files. The case of the quality name does not matter. OPUS STANDARD quality provides full bandwidth, stereo music, good audio quality approaching transparency, and is the recommended setting.
    Valid values are: LOW | MEDIUM | STANDARD | HIGH | PREMIUM
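
As a sketch, an invocation might look like the following, assuming the four parameters are passed positionally in the order listed above; the directory paths are illustrative:

$ transflac ~/Music/flac ~/Music/opus OPUS STANDARD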

TransFLAC allows for customization of certain items in the configuration.  The project wiki provides additional information.

Installation on Fedora Linux:

$ sudo dnf install transflac
(Screenshot: TransFLAC converting FLAC to lossy formats)

GNOME 45 Core Apps Update

Posted by Michael Catanzaro on August 17, 2023 03:57 PM

It’s been a few months since I last reviewed the state of GNOME core apps. For GNOME 45, we have implemented the changes proposed in the “Imminent Core App Changes” section of that blog post:

  • Loupe enters core as GNOME’s new image viewer app, developed by Christopher Davis and Sophie Herold. Loupe will be branded as Image Viewer and replaces Eye of GNOME, which will no longer use the Image Viewer branding. Eye of GNOME will continue to be maintained by Felix Riemann, and contributions are still welcome there.
  • Snapshot enters core as GNOME’s new camera app, developed by Maximiliano Sandoval and Jamie Murphy. Snapshot will be branded as Camera and replaces Cheese. Cheese will continue to be maintained by David King, and contributions are still welcome there.
  • GNOME Photos has been removed from core without replacement. This application could have been retained if more developers were interested in it, but we have made the decision to remove it due to lack of volunteers interested in maintaining it. Photos will likely be archived eventually, unless a new maintainer volunteers to save it.

GNOME 45 beta will be released imminently with the above changes. Testing the release and reporting bugs is much appreciated.

We are also looking for volunteers interested in helping implement future core app changes. Specifically, improvements are required for Music to remain in core, and improvements are required for Geary to enter core. We’re also not quite sure what to do with Contacts. If you’re interested in any of these projects, consider getting involved.

PHP version 8.1.23RC1 and 8.2.10RC1

Posted by Remi Collet on August 17, 2023 03:36 PM

Release Candidate versions are available in the testing repositories for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections (for parallel installation, a perfect solution for such tests) and also as base packages.

RPMs of PHP version 8.2.10RC1 are available

  • as base packages
    • in the remi-php82-test repository for Enterprise Linux 7
    • in the remi-modular-test repository for Fedora 36-38 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.1.23RC1 are available

  • as base packages
    • in the remi-php81-test repository for Enterprise Linux 7
    • in the remi-modular-test repository for Fedora 36-38 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

PHP version 8.0 is now in security-only mode, so no more RCs will be released.

Installation: follow the wizard instructions.

Parallel installation of version 8.2 as Software Collection:

yum --enablerepo=remi-test install php82

Parallel installation of version 8.1 as Software Collection:

yum --enablerepo=remi-test install php81

Update of system version 8.2 (EL-7):

yum --enablerepo=remi-php82,remi-php82-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.2
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.1 (EL-7):

yum --enablerepo=remi-php81,remi-php81-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.1
dnf --enablerepo=remi-modular-test update php\*

EL-9 packages are built using RHEL-9.2

EL-8 packages are built using RHEL-8.8

EL-7 packages are built using RHEL-7.9

The oci8 extension now uses Oracle Client version 21.10, and the intl extension now uses libicu 72.1.

The RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

Versions 8.1.23 and 8.2.10 are planned for August 31st, in two weeks.

Software Collections (php81, php82)

Base packages (php)

PHP version 8.0.30, 8.1.22 and 8.2.9

Posted by Remi Collet on August 17, 2023 05:18 AM

RPMs of PHP version 8.2.9 are available in the remi-modular repository for Fedora ≥ 36 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php82 repository for EL 7.

RPMs of PHP version 8.1.22 are available in the remi-modular repository for Fedora ≥ 36 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php81 repository for EL 7.

RPMs of PHP version 8.0.30 are available in the remi-modular repository for Fedora ≥ 36 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php80 repository for EL 7.

The modules for EL-9 are available for x86_64 and aarch64.

PHP version 7.4 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

These versions fix 2 security bugs, so updating is strongly recommended.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of the default PHP with version 8.2 (simplest):

dnf module reset php
dnf module enable php:remi-8.2
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php82
yum update

Parallel installation of version 8.2 as a Software Collection:

yum install php82

Replacement of the default PHP with version 8.1 (simplest):

dnf module reset php
dnf module enable php:remi-8.1
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php81
yum update php\*

Parallel installation of version 8.1 as a Software Collection:

yum install php81

And soon in the official updates:

To be noted:

  • EL-9 RPMs are built using RHEL-9.2
  • EL-8 RPMs are built using RHEL-8.8
  • EL-7 RPMs are built using RHEL-7.9
  • intl extension now uses libicu72 (version 72.1)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.8, instead of the outdated system library)
  • oci8 extension now uses Oracle Client version 21.10
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php80 / php81 / php82)

[F39] Take part in the GNOME 45 and DNF5 test days

Posted by Charles-Antoine Couret on August 16, 2023 10:14 PM

From August 11 through August 17, a week is dedicated to several tests around DNF5, complemented until August 20 by tests of GNOME 45 and its applications. Indeed, during the development cycle, the quality assurance team dedicates a few days to specific components or new features in order to surface as many problems as possible.

It also provides a list of precise tests to perform. You just have to follow them, compare your result with the expected result, and report it.

What do these tests consist of?

We are approaching the release of the Fedora 39 Beta edition. Many new features are well advanced in their development and must be stabilized before the final version, which will be released this fall.

For GNOME 45, they consist of:

  • Detection of the Fedora upgrade by GNOME Software;
  • Locking and unlocking the screen;
  • Correct operation of the Web browser, Maps, Music, Disks, and the Terminal;
  • Logging in / logging out and switching users;
  • Sound working properly, notably detecting when headphones or headsets are connected or disconnected;
  • Overall operation of the desktop: the activities view, the settings, the extensions;
  • The ability to launch graphical applications from the menu.

As for DNF5, based on microdnf, which is a rewrite of DNF, the tests mostly amount to making sure this component has not regressed compared with its predecessor. To invoke it, install the dnf5 package and use the command of the same name instead of dnf.

The tests mainly consist of the following (sample commands follow the list):

  • Installing, removing, and updating packages;
  • Refreshing and clearing the repository cache;
  • Listing available and installed packages;
  • Searching for a package in the repositories by name or description;
  • Listing enabled and disabled repositories;
  • Checking the package transaction history and possibly undoing a past transaction.
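
As a sketch, here are a few dnf5 invocations matching the items above (the package name is illustrative, and exact options may differ slightly between dnf5 versions):

sudo dnf5 install foo
sudo dnf5 remove foo
sudo dnf5 upgrade
sudo dnf5 makecache
dnf5 list --installed
dnf5 search foo
dnf5 repolist --all
dnf5 history list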

How can you participate?

Visit this GNOME 45 and DNF5 page, follow the instructions, and report your results there.

If you need help while running the tests, don't hesitate to drop by IRC to get a hand on the #fedora-test-days and #fedora-fr channels (in English and in French, respectively) on the Libera server.

If you find a bug, you need to report it on Bugzilla. If you don't know how, feel free to consult the corresponding documentation.

Moreover, even though a week is dedicated to these tests, you can still run them a few days later without any problem! The results will remain broadly relevant.

Server updates/reboots

Posted by Fedora Infrastructure Status on August 16, 2023 09:00 PM

We will be updating/rebooting various servers. Services may be up and down in this outage window.

The University of Utah uses Kiwi TCMS

Posted by Kiwi TCMS on August 16, 2023 04:13 PM

"University of Utah + Kiwi TCMS logos"

The University of Utah is a public research university in Salt Lake City, USA. It is the flagship institution of the Utah System of Higher Education and was established in 1850.

The University of Utah's School of Computing, founded as the Computer Science Department in 1965, has a long and distinguished record of high impact research. The university has provided large, automated testbeds since around the year 2000, funded by the National Science Foundation.

The Flux Research Group conducts research in operating systems, networking, security, and virtualization. The group consists of three faculty and over two dozen research staff, graduate students, and undergrads.

POWDER (the Platform for Open Wireless Data-driven Experimental Research) is flexible infrastructure enabling a wide range of software-defined experiments on the future of wireless networks. POWDER supports software-programmable experimentation on 5G and beyond, massive MIMO, ORAN, spectrum sharing and CBRS, RF monitoring, and anything else that can be supported on software-defined radios.

In the words of David M. Johnson, research staff:

The addition of Kiwi TCMS to our POWDER mobile wireless testbed helps to support the complex multi-system, end-to-end functional test and integration scenarios we see in the 5G/O-RAN/beyond mobile wireless space.

We use Kiwi TCMS as part of an on-demand environment that POWDER provides to users that can help them automate testing using a workflow approach, from CI-triggered orchestration from scratch in our cloud-like environment, through resource configuration and running test suites, to finally collecting results into private instances of Kiwi TCMS.

We use both the Stackstorm and Dagster workflow engines to execute our test and integration workflows. The stackstorm-kiwitcms library is a simple Stackstorm "integration pack" (Python source code in this case) that invokes and re-exports much of the core Kiwi TCMS XML-RPC API (with some minor sugar) into Stackstorm, so that each API function is exposed as a Stackstorm action (the fundamental unit of its workflows). This means that the workflows can orchestrate resources into test scenarios; configure the resources; create or instantiate Kiwi TCMS test runs/executions/metadata; execute tests; and push test results/status into Kiwi TCMS records, upload attachments, etc, for persistence.

We use a fork of Kiwi TCMS right now so that we could upload attachments to test runs via the API. That was a trivial change which made its way upstream as part of Kiwi TCMS version 12.1.


If you like what we're doing and how Kiwi TCMS supports various communities, please help us!

Installing and configuring Cilium in Kubernetes – Part 6

Posted by Fedora fans on August 16, 2023 09:38 AM

In the sixth part of the series "Installing and configuring Cilium in Kubernetes", we are going to talk about L7 policies.

Testing and applying an HTTP-aware L7 policy

In the simple scenario above, it was enough to give tiefighter / xwing full access to the deathstar API, or no access at all. But to provide the strongest security between microservices (e.g., to enforce least-privilege isolation), each service calling the deathstar API should be limited to making exactly the set of HTTP requests it needs in order to function correctly. For example, consider that the deathstar service exposes some maintenance APIs that should not be called at random by imperial spaceships. To see this, you can run the following command:

kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port

A sample output of the above command is shown in the image below:

(Screenshot: sample command output)

While this is an illustrative example, unauthorized access like the above can have undesirable security consequences.

L7 policy with Cilium and Kubernetes

(Diagram: cilium_http_l3_l4_l7_gsg)

Cilium can enforce HTTP-layer (i.e., L7) policies to limit which URLs tiefighter is allowed to access. Here is an example policy file that extends our original policy by limiting tiefighter to making a POST API call to /v1/request-landing and disallowing all other calls (including PUT /v1/exhaust-port):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L7 policy to restrict access to specific HTTP call"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "POST"
          path: "/v1/request-landing"


Update the existing rule to apply the L7-aware policy protecting deathstar using the following command:

kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.12/examples/minikube/sw_l3_l4_l7_policy.yaml

Now we can re-run the same test as above, but we will see a different result:

kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

(Screenshot: command output)

And this other test:

kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port

(Screenshot: command output)

As you can see, with Cilium L7 security policies we can allow tiefighter to access only the API resources it needs on deathstar, thereby implementing a "least privilege" security approach for communication between microservices. Note that the path must match the URL exactly; for example, if you want to allow anything under /v1/, you have to use a regular expression:

path: "/v1/.*"

You can view the L7 policy using kubectl:

kubectl describe ciliumnetworkpolicies

And via the cilium CLI:

kubectl -n kube-system exec cilium-rwmwr -- cilium policy get

HTTP requests can also be monitored live using "cilium monitor":

kubectl exec -it -n kube-system cilium-rwmwr -- cilium monitor -v --type l7

(Screenshot: cilium monitor output)

The output above shows a successful response to a POST request, followed by a PUT request that was denied by the L7 policy.

Cleanup:

To clean up what we did in these experiments, just run the following commands:

kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/1.11.5/examples/minikube/http-sw-app.yaml
kubectl delete cnp rule1

Going further…

In this series, we covered installing Cilium on Kubernetes and touched on some of its capabilities. For more information and to keep learning about Cilium, check out its website, documentation, and GitHub:

https://cilium.io

https://docs.cilium.io/en/stable

https://github.com/cilium/cilium


You can also use the Cilium labs at the following address to experiment with various scenarios:

https://isovalent.com/resource-library/labs/

We hope this series has been useful to you.



Using Cockpit to graphically manage systems, without installing Cockpit on them!

Posted by Fedora Magazine on August 16, 2023 08:00 AM

It probably sounds too good to be true: the ability to manage remote systems using an easy to use, intuitive graphical interface – without the need to install extra software on the remote systems, enable additional services, or make any other changes on the remote systems. This functionality, however, is now available with a combination of the recently introduced Python bridge for Cockpit and the Cockpit Client Flatpak! This allows Cockpit to manage remote systems, assuming only SSH access and that Python is installed on the remote host. Read on for more information on how this works and how to get started.

If you are not familiar with Cockpit, it is described on the project’s web site as a web-based graphical interface for servers. Cockpit is intended for everyone, especially those who are:

  • new to Linux (including Windows admins)
  • familiar with Linux and want an easy, graphical way to administer servers
  • expert admins who mainly use other tools but want an overview on individual systems

You can easily and intuitively complete a variety of tasks from Cockpit. These include tasks such as:

  • expanding the size of a filesystem
  • creating a network bond
  • modifying the firewall
  • viewing log entries
  • viewing real time and historical performance information
  • managing Podman containers
  • managing KVM virtual machines

and many additional tasks.

Objections to using Cockpit on systems

In the past, I’ve heard two main objections to using Cockpit on systems:

  1. I don’t want to run the Cockpit web server on my systems. Additional network services like this increase the attack surface. I don’t want to open another port in the firewall. I don’t want more HTTPS certificates in my environment to manage and maintain.
  2. I don’t want to install additional packages on my systems (I may not even have access to install additional packages). The more packages installed, the larger my footprint is, and the more attack surface there is. For me to install additional packages in a production environment, I have to go through a change management process, etc. What a hassle!

Let’s address these one at a time. For the first concern, you have actually had several options for connecting to Cockpit over SSH, without running the Cockpit web server, for quite some time. These options include:

  • The ability to set up a bastion host, which is a host that has the Cockpit web server running on it. You can then connect to Cockpit on the bastion host using a web browser. From the Cockpit login screen on the bastion host you can use the Connect to option to specify an alternate host to log in to (refer to the LoginTo cockpit.conf configuration option). Another option is to authenticate to Cockpit on the bastion host, and use the Add new host option. In either case, the bastion Cockpit host will connect to these additional remote hosts over SSH (so only the bastion host in your environment needs to be running the Cockpit web server).
  • You can use the Cockpit integration available with the upstream Foreman, or downstream Red Hat Satellite, to connect to Cockpit on systems in your environment over SSH.  
  • You can use the Cockpit Client Flatpak, which will connect to systems over SSH.
  • You can use the cockpit/ws container image. This is a containerized version of the Cockpit web server that acts as a containerized bastion host

For more information on these options, refer to the Connecting to the RHEL web console, part 1: SSH access methods blog post. That post focuses on the downstream RHEL web console; however, the information also applies to the upstream Cockpit available in Fedora.

This brings me to the second concern, and the main focus of this article. This is the concern that I don’t want to install additional packages on the remote systems I am managing.  While there are several options for using the web console without the Cockpit web server, all of these options previously had a prerequisite that the remote systems needed to have at least the cockpit-system package installed.  For example, previously if you tried to use the Cockpit Client Flatpak to connect to a remote system that didn’t have Cockpit installed, you’d see an error message stating that the remote system doesn’t have cockpit-bridge installed. 

The Cockpit team has replaced the previous Cockpit bridge (implemented using C) with a new bridge written in Python.  For a technical overview of the function of the Cockpit bridge, and how the new Python bridge was implemented, refer to the recent Monty Python’s Flying Cockpit DevConf presentation by Allison Karlitskaya and Martin Pitt. 

This new Python bridge overcomes the previous limitation requiring Cockpit to be installed on the remote hosts.  

Using the Cockpit Client Flatpak

With the Cockpit Client Flatpak application installed on a workstation, we can connect to remote systems over SSH and manage them using Cockpit.

Installation

In the following example, I’m using a Fedora 38 workstation. Install the Cockpit Client Flatpak by simply opening the GNOME Software application and searching for Cockpit. Note that you’ll need to have Flathub enabled in GNOME Software.

(Screenshot: Cockpit Client in GNOME Software)
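
If you prefer the command line, you can likely install the same application with the flatpak tool instead (this assumes the Flathub remote is configured; the application ID below is the one published on Flathub):

flatpak install flathub org.cockpit_project.CockpitClient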

Using the Cockpit Client

Once installed, you’ll see the following when opening the Cockpit Client:

(Screenshot: Cockpit Client start screen)

You can type in a hostname or IP address that you would like to connect to. To authenticate as a user other than the one you are currently using, use the user@hostname syntax. If this is not your first time using the Cockpit Client, a list of recent hosts you have connected to will appear; in that case, you can simply click on a host name to reconnect.

If you have SSH key-based authentication set up, you’ll be logged in to the remote host using key-based authentication. Without SSH keys set up, you’ll be prompted to authenticate with a password. In either case, if it is your first time connecting to the host over SSH, you’ll be prompted to accept the host key fingerprint.

As a special case, you can log into your currently running local session by connecting to localhost, without authentication.  

Once connected, you’ll see the Cockpit Overview page:

(Screenshot: Cockpit Overview page)

Select the Terminal menu item in Cockpit to show that the remote system that I’m logged in to does not have any Cockpit packages installed:

(Screenshot: Cockpit Terminal view)

Prerequisites for connecting to systems with Cockpit Client

There are several prerequisites for utilizing Cockpit Client to connect to a remote system. If you are familiar with managing remote hosts with Ansible, you’ll likely already be familiar with the prerequisites. They are the same:

  1. You must have connectivity to the remote system over SSH.
  2. You must have a valid user account on the remote system that you can authenticate with.
  3. If you need the ability to complete privileged operations in Cockpit, the user account on the remote system will need sudo privileges.

If you are connecting to a remote system that doesn’t have Cockpit installed, there are a couple of additional prerequisites:

  1. Python 3.6 or later must be installed on the remote host. This is not usually an issue, with some exceptions, such as Fedora CoreOS which does not include Python by default.
  2. An older version of Cockpit Client cannot be used to connect to a newer operating system version. For example, if I installed Cockpit Client on my Fedora 38 workstation today and never updated it, it may not work properly to manage a Fedora 39 or Fedora 40 server in the future.

Frequently asked questions

Here are some frequently asked questions about this functionality:

Question: Cockpit is extendable via additional Applications.  Which Cockpit applications are available if I use the Cockpit Client to connect to a remote system that doesn’t have Cockpit installed?

Answer: Currently, Cockpit Client includes:

  • cockpit-machines (virtual machine management)
  • cockpit-podman (Podman container management)
  • cockpit-ostree (used to manage rpm-ostree based systems)
  • cockpit-storaged (storage management)
  • cockpit-sosreport (for generating diagnostic reports)
  • cockpit-selinux (for managing SELinux)
  • cockpit-packagekit (for managing software updates)
  • cockpit-networkmanager (network management)
  • cockpit-kdump (kernel dump configuration) 

The Cockpit team is looking for feedback on what Cockpit applications you’d like to see included in the Cockpit Client. Post a comment below with your feedback. 

Question:  I connected to a remote system that doesn’t have Cockpit installed, but I don’t see Virtual Machines or one of the other applications listed in the menu.  I thought you just said these were included in the Cockpit Client Flatpak?

Answer:  When you login to a remote system that doesn’t have Cockpit packages installed, you’ll only see the menu options for underlying functionality available on the remote system.  For example, you’ll only see Virtual Machines in the Cockpit menu if the remote host has the libvirt-dbus package installed. 

Question: Can Cockpit applications available in the Cockpit Client be used with locally installed Cockpit applications on the remote host?  In other words, if I need a Cockpit application not included in the Cockpit Client, can I install just that single package on the remote host?  

Answer:  No, you cannot mix and match applications included in the Cockpit Client flatpak and those installed locally on the remote host.  For a remote host that has the cockpit-bridge package installed, Cockpit Client will exclusively use the applications that are installed locally on the remote host.  If the remote host does not have the cockpit-bridge package installed, Cockpit Client will exclusively use the applications bundled in the Cockpit Client Flatpak.  

Question:  Can I use Cockpit Client to connect to the local host?

Answer: Yes!  Simply open Cockpit Client and type in localhost and you’ll be able to manage the local host.  You don’t need to have any Cockpit packages installed on the local host if you use this method. You only need the Cockpit Client Flatpak.  

Question:  What Linux distributions can I connect to using the Cockpit Client?

Answer:  Cockpit is compatible with a number of different Linux distributions.  For more information, see the Running Cockpit page.  If connecting to a remote system that doesn’t have Cockpit installed, keep in mind the previously mentioned requirements regarding not connecting to newer OS’s from an older Cockpit Client.  

Question:  Does the Cockpit team have any future plans regarding this functionality? 

Answer:  The Cockpit team is planning on adding the ability to connect to remote hosts without Cockpit packages installed to the cockpit-ws container image. See COCKPIT-954 ticket for more info.  

Have more questions not covered here? Ask them in the comments section below!

Conclusion

The new Python bridge, and the corresponding ability to use the Cockpit Client to connect to remote systems without installing Cockpit, makes it incredibly easy to use Cockpit in almost any circumstance.

Try this out! It’s easy to do. Simply install the Cockpit Client Flatpak, and use it to connect to either your localhost or a remote system. Once you’ve tried it, let us know what you think in the comments below.

Bisecting Fedora kernel

Posted by Kamil Páral on August 15, 2023 03:07 PM

This post shows how to bisect a Fedora kernel to find the source of a regression. I needed to do that recently and found no good guide, so I’m at least capturing my notes here; perhaps you’ll find them useful. This approach can be used to identify which exact commit caused a bad kernel behavior on your hardware, and then report it to the kernel maintainers. Note, you need to have a reliable way of reproducing the problem. If it happens randomly and infrequently, it’s much harder to debug.

0. Try the latest Rawhide kernel

Before you spend too much time on this, it’s always worth a shot to test the latest Rawhide kernel. Perhaps the bug is fixed already?

Usually the kernel consists of these installed packages: kernel, kernel-core, kernel-modules, kernel-modules-core, kernel-modules-extra. But see what you have installed on your system, e.g. with: rpm -qa | grep ^kernel | sort .

Install the latest Rawhide kernel:

sudo dnf update --setopt=installonly_limit=0 --repo fedora --releasever rawhide kernel{,-core,-modules,-modules-core,-modules-extra}

You want to use --setopt=installonly_limit=0 throughout this exercise to make sure you don’t accidentally remove a working kernel from your system and don’t end up with just broken ones (by default, there’s a limit of three kernels installed at the same time). But it means you’ll need to remove tested kernels manually from time to time; otherwise you’ll run out of space in /boot.

Reboot and keep pressing F8 during startup to display the GRUB boot menu. Make sure to select the newly installed kernel, boot it, test it. Note down whether it’s good or bad. If the problem is still there, we’ll need to continue debugging.
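
If you prefer to pick the kernel before rebooting, grubby can list the installed kernels and change the default boot entry (the index below is illustrative; check the output of the first command):

sudo grubby --info=ALL | grep -E '^(index|title)'
sudo grubby --set-default-index=1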

Note: When you want to remove that tested kernel, obviously you can’t be currently running from it. Then use standard dnf remove to get rid of it, or use dnf history for a more convenient way (e.g. dnf history undo last).

I. Narrow down the issue in Fedora-packaged kernels

As the first step, it’s useful to figure out which Fedora-packaged kernel is the last one with good behavior (a “good kernel”), and which one is the first one with bad behavior (a “bad kernel”). That will help you narrow down the scope. It’s much faster to download and install already-built kernels than to compile your own (which we’ll do later).

Most probably you’re currently running a bad kernel (because you’re reading this). So reboot, display the GRUB boot menu and boot an older kernel. See if it’s good or bad, note it down. Unless the problem is very recent, all available kernels (usually three) in the GRUB menu will be bad. It’s time to start downloading older kernels from Koji. Use a reasonable strategy, e.g. install a month-old kernel, or one several months old, and gradually halve the intervals and narrow down until you find the latest good kernel. You don’t need to worry about using kernels from other Fedora releases (as you can see in their .fcNN suffix); they are standalone and work in any release. You can download the kernel subpackages manually, or use the koji command (from the koji package), e.g.:

koji download-build --arch x86_64 kernel-6.5.0-0.rc6.43.fc39

That downloads many more subpackages than you need, so install just those needed (see the previous section), e.g. like this:

sudo dnf --setopt=installonly_limit=0 install ./kernel{,-core,-modules,-modules-core,-modules-extra}-6.4.0*x86_64.rpm

For each picked kernel, install it, boot into it, test it, note down whether it’s good or bad. Continue until you’ve found the latest good packaged kernel and the first bad packaged kernel.

II. Find git commits used for building identified good and bad kernels

Now that you have the closest good and bad packaged kernels, we need to figure out which git commits from the upstream Linux kernel were used to build them. In some cases, the git commit hash is included directly in the RPM filename. For example in my case, I reported that kernel-6.4.0-0.rc0.20230427git6e98b09da931.5.fc39 is the last good kernel, and kernel-6.4.0-0.rc0.20230428git33afd4b76393.7.fc39 is the first bad kernel. From those filenames, you can see that git commit 6e98b09da931 is good and git commit 33afd4b76393 is bad.

The commit hash is not always part of the filename, though, as with the example of kernel-6.5.0-0.rc6.43.fc39. In this case, you need to download the .src.rpm file from that build. Either manually from Koji, or using:

koji download-build --arch src kernel-6.5.0-0.rc6.43.fc39

Unpack that .src.rpm (my favorite decompression tool is deco), find the linux-*.tar.xz archive, and run the following command on it, e.g.:

$ xzcat -qq linux-6.5-rc6.tar.xz | git get-tar-commit-id
2ccdd1b13c591d306f0401d98dedc4bdcd02b421

(This command is explained in the kernel.spec file, which is also inside the .src.rpm.) Now you know the git commit hash used for that kernel build. Figure out the commits for both the good and the bad kernel you identified.
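
If you don’t have deco for unpacking the .src.rpm, a minimal sketch using standard tools works as well (the filename is illustrative):

# extract the contents of the .src.rpm into the current directory
rpm2cpio kernel-6.5.0-0.rc6.43.fc39.src.rpm | cpio -idm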

III. Use git bisect to find the exact commit that broke it

It’s time to clone the upstream Linux kernel repo:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git ~/src/linux

And also the Fedora distgit kernel repo:

fedpkg clone -a kernel ~/distgit/kernel

We’ll now use git bisect to arrive at the breaking commit which caused the problem. After each step, we’ll need to build the kernel, test it, and mark it as good or bad. Let’s start:

cd ~/src/linux
git bisect start
git bisect good YOUR_GOOD_COMMIT
git bisect bad YOUR_BAD_COMMIT

Git now prints a commit hash to be tested (and switches the repository to that commit), and an estimate of how many steps remain. We now need to take the current contents of the source code and build our own kernel.

Note: When building the kernel, I was advised to avoid the overhead of packaging to speed up the process. I’m sure it’s good advice, but I didn’t find a good guide on how to do that (including how to retrieve the Fedora kernel config, build the kernel manually, copy it to the right places, create an initramfs, create a boot option in GRUB, etc). So I just ran the whole process including packaging. On my machine, the compilation took about 40 minutes and packaging took 10 minutes, and I needed to do about 11 rounds, so it was an OK tradeoff for me. (If you can write a guide on how to do that without packaging, please do and link it in the comments; I’d love to read it.)

Let’s create a tarball of the current source code like this:

git archive --prefix=linux-6.4.0-bisect/ -o linux-6.4.0-bisect.tar HEAD

In my case, all my work was done on the 6.4.0 kernel version, so I used that number, but I don’t think it actually matters; you don’t need to worry about it too much. Also, I didn’t distinguish the tarballs at all; I always used the name linux-6.4.0-bisect.tar, overwriting the old one each time. That was easier than appending commit hashes and adjusting the spec file (shown later) in multiple places each time. Feel free to take a different approach.

Let’s move the tarball to the distgit repo:

mv ~/src/linux/linux-6.4.0-bisect.tar ~/distgit/kernel/

Now we need to adjust the distgit spec file a bit:

cd ~/distgit/kernel
# edit kernel.spec

I made the following changes to the spec file:

-%define specrpmversion 6.4.9
+%define specrpmversion 6.4.0
-%define specversion 6.4.9
+%define specversion 6.4.0
-%define tarfile_release 6.4.9
+%define tarfile_release 6.4.0-bisect
-Release: %{pkg_release}
+Release: %{pkg_release}.gitYOUR_TESTED_COMMIT
-Source0: linux-%{tarfile_release}.tar.xz
+Source0: linux-%{tarfile_release}.tar

Now we can start the build:

fedpkg mockbuild --with baseonly --with vanilla --without debuginfo

The --with baseonly and --without debuginfo options make sure we don’t build unnecessary stuff. --with vanilla was needed because the Fedora-specific patches didn’t apply to the older source code.

After a long time, your results should be available in results_kernel/ and look something like this:

$ ls -1 results_kernel/6.4.0/200.fc38.git6e98b09da931/
build.log
hw_info.log
installed_pkgs.log
kernel-6.4.0-200.fc38.git6e98b09da931.src.rpm
kernel-6.4.0-200.fc38.git6e98b09da931.x86_64.rpm
kernel-core-6.4.0-200.fc38.git6e98b09da931.x86_64.rpm
kernel-devel-6.4.0-200.fc38.git6e98b09da931.x86_64.rpm
kernel-devel-matched-6.4.0-200.fc38.git6e98b09da931.x86_64.rpm
kernel-modules-6.4.0-200.fc38.git6e98b09da931.x86_64.rpm
kernel-modules-core-6.4.0-200.fc38.git6e98b09da931.x86_64.rpm
kernel-modules-extra-6.4.0-200.fc38.git6e98b09da931.x86_64.rpm
kernel-modules-internal-6.4.0-200.fc38.git6e98b09da931.x86_64.rpm
kernel-uki-virt-6.4.0-200.fc38.git6e98b09da931.x86_64.rpm
root.log
state.log

Note that all the RPMs carry the git commit hash identifier that you specified in the spec file. Now you just need to install the kernel (see a previous section), boot it (make sure to display the GRUB menu and verify that the correct kernel is selected), and test it.

Note: If you have Secure Boot enabled, you’ll need to disable it in order to boot your own kernel (or figure out how to sign it yourself). Don’t forget to re-enable it once this is all over.
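
For example, you can check the current Secure Boot state with mokutil (from the mokutil package):

mokutil --sb-state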

Once you’ve determined whether this kernel is good or bad, tell it to git bisect:

cd ~/src/linux
git bisect good   # or bad

And now the whole cycle repeats. Create a new archive using git archive, move it to the distgit directory, adjust the Release: field in kernel.spec to match the new commit hash, and use fedpkg to build another kernel. Eventually, git bisect will print out the exact commit that caused the problem.
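
To recap, one bisect iteration looks roughly like this minimal sketch (assuming the directory layout above; the sed pattern is illustrative, adjust it to your spec file):

cd ~/src/linux
COMMIT=$(git rev-parse --short HEAD)
git archive --prefix=linux-6.4.0-bisect/ -o linux-6.4.0-bisect.tar HEAD
mv linux-6.4.0-bisect.tar ~/distgit/kernel/
cd ~/distgit/kernel
# stamp the Release tag with the commit under test
sed -i "s/^Release: .*/Release: %{pkg_release}.git${COMMIT}/" kernel.spec
fedpkg mockbuild --with baseonly --with vanilla --without debuginfo
# install the result, reboot, test, then:
cd ~/src/linux
git bisect good   # or: git bisect bad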

IV. Report your findings

Report the problem and the identified breaking commit in Red Hat Bugzilla under the kernel component. Please also save and attach the bisect log:

cd ~/src/linux
git bisect log > git-bisect-log.txt

Then also report this problem (possibly a regression) to the kernel upstream and mention it in the RH Bugzilla ticket. Thanks and good luck.

Backward compatibility in syslog-ng by using the version number in syslog-ng.conf

Posted by Peter Czanik on August 15, 2023 11:22 AM

Many users are annoyed by the version number included in the syslog-ng configuration. However, it ensures backward compatibility in syslog-ng. It is especially useful when updating to syslog-ng 4 from version 3, but also when updating within the same major version.
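
As a brief illustration (this snippet is mine, not from the linked post; the version number and config are examples): the version declaration at the top of syslog-ng.conf tells syslog-ng which compatibility behavior to apply when parsing the rest of the configuration:

@version: 4.2
# with an older number here, e.g. 3.38, syslog-ng keeps the old behaviors
# (and warns about them) instead of breaking the config after an upgrade
source s_local { system(); internal(); };
destination d_messages { file("/var/log/messages"); };
log { source(s_local); destination(d_messages); };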

Read more about it at https://www.syslog-ng.com/community/b/blog/posts/backward-compatibility-in-syslog-ng-by-using-the-version-number-in-syslog-ng-conf

[Image: syslog-ng logo]

New responsibilities

Posted by Bastien Nocera on August 14, 2023 09:31 AM

As part of the same process outlined in Matthias Clasen's "LibreOffice packages" email, my management chain has made the decision to stop all upstream and downstream work on desktop Bluetooth, multimedia applications (namely totem, rhythmbox and sound-juicer) and libfprint/fprintd. The rest of my upstream and downstream work will be reassigned depending on Red Hat's own priorities (see below), as I am transferred to another team that deals with one of a list of Red Hat’s priority projects.

I'm very disappointed, because those particular projects were already starved for resources: I spent less than 10% of my work time on them in the past year, with other projects and responsibilities taking most of my time.

This means that, in the medium-term at least, all those GNOME projects will go without a maintainer, reviewer, or triager:
- gnome-bluetooth (including Settings panel and gnome-shell integration)
- totem, totem-pl-parser, gom
- libgnome-volume-control
- libgudev
- geocode-glib
- gvfs AFC backend

Those freedesktop projects will be archived until further notice:
- power-profiles-daemon
- switcheroo-control
- iio-sensor-proxy
- low-memory-monitor

I will not be available for reviewing libfprint/fprintd, upower, grilo/grilo-plugins, gnome-desktop thumbnailer sandboxing patches, or any work related to XDG specifications.

Kernel work, reviews and maintenance, including recent work on SteelSeries headset and Logitech devices kernel drivers, USB revoke for Flatpak Portal support, or core USB, is suspended until further notice.

All my Fedora packages were orphaned about a month and a half ago; it's likely that some are still orphaned and available, if there are takers. RHEL packages were unassigned about 3 weeks ago; they've been reassigned since then, so I cannot point to the new maintainer(s).

If you are a partner, or a customer, I would recommend that you get in touch with your Red Hat contacts to figure out what the plan is going forward for the projects you might be involved with.

If you are a colleague that will take on all or part of the 90% of the work that's not being stopped, or a community member that was relying on my work to further advance your own projects, get in touch, I'll do my best to accommodate your queries, time permitting.

I'll try to make sure to update this post, or create a new one if and when any of the above changes.

Week 32 in Packit

Posted by Weekly status of Packit Team on August 14, 2023 12:00 AM

Week 32 (August 8th – August 14th)

  • Two new configuration options for filtering when getting the latest upstream release tag were introduced: upstream_tag_include and upstream_tag_exclude. They should contain a Python regex that can be used as an argument in re.match; see the sketch after this list. (packit#2030, packit-service#2138)
  • Retriggering of pull-from-upstream via a comment will now use the correct configuration file from the default dist-git branch. (packit-service#2140)
  • The pull-from-upstream job can now be used with upstream repos that are not hosted on a supported git forge. (packit-service#2137)
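
A hypothetical .packit.yaml snippet showing how the new filtering options might look (the regexes are illustrative only, not from the release notes):

upstream_tag_include: "^v4\\..*"         # only consider 4.x tags
upstream_tag_exclude: "^v4\\.0\\.0-rc.*" # skip 4.0.0 release candidates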

Episode 388 – Video game vulnerabilities

Posted by Josh Bressers on August 14, 2023 12:00 AM

Josh and Kurt ask the question of what a vulnerability is, but in the framing of video games. Security loves to categorize all bugs as either security vulnerabilities or not. But in reality, nothing is so simple. Everything is a question of risk, not vulnerability. Talking about video games can help us have this discussion better.

[Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_388_Video_game_vulnerabilities.mp3]

Show Notes

Industrializing Machine Learning

Posted by ! Avi Alkalay ¡ on August 13, 2023 12:35 PM

I’ve been doing Machine Learning Industrialization for more than two years, and I’m thrilled to see it featured by McKinsey as number two in its 2023 tech trends!


Industrializing ML is about applying software engineering best practices to the whole AI modeling process from its first line of code. It is about Data Scientists focusing on math and stats while the AI artifact is cast as a software product aimed at production environments. This is different from MLOps, which is commonly positioned as a mere wrapping activity that happens after, and separated from, AI modeling and before production. In the whole industrialization practice, MLOps is a subset activity that happens in between, but quite apart from, both the Data Scientists’ work and the infrastructure. Industrializing Machine Learning contains MLOps, plus other concepts that are even more important.

The term “industrial” is accurate precisely because it antagonizes the artisanal way that Machine Learning squads usually operate nowadays. It’s common to see a lot of mathematics and good statistics, but few software engineering best practices, little DevOps, few design patterns, minimal automation, and limited standardization.

I practically invented Machine Learning Industrialization for myself when I was at Loft, out of necessity and intuition, in 2021. Work that I proposed and led while I was there allowed us to scale from 4 models that were laborious to maintain and monitor to over 70 models, without growing the team of Data Scientists. Those 70+ models are now easy to maintain, audit, observe, reproduce, retrain, find, and handle in general.

Also on my LinkedIn.

Flock 2023 trip report

Posted by Tomas Tomecek on August 13, 2023 06:00 AM

My first conference outside of Brno since the pandemic. I had forgotten how stressful travelling is for me. I didn’t have to wait long to be reminded why:

  • traffic jam in Brno
  • 90 minutes flight delay
  • downpour of people at the airport
  • border control scanners not made for my height
  • missing connections because of delays
  • never-ending transfers

All of this frustration was worth it to absorb the Flock energy and magic.

Measuring the health of language communities

Posted by Jean-Baptiste Holcroft on August 13, 2023 12:00 AM

A few days ago, I told you about the creation of sites to showcase our translators. Here is a first site, built a few years ago now: https://languages.fedoraproject.org What does languages.fedoraproject.org contain? This rudimentary site’s value is in providing translation statistics for all of the Fedora packages available in the repositories. The last version of Fedora on which we ran this site is Fedora 36.

Flock to Fedora 2023 report

Posted by Alexander Bokovoy on August 11, 2023 10:46 AM

On August 2nd-4th, 2023, the Fedora Project ran its annual contributors conference, Flock to Fedora, in Cork, Ireland. After the previous successful Flock in 2019 in Budapest, Fedora contributors did not meet in person due to the rough pandemic years and created the online event Nest with Fedora instead. Nest ran for three years, but online meetings aren’t a full replacement for face-to-face collaboration. Cork’s Flock was meant to combine online and offline events.

I have been attending and presenting at various Flock and Nest events over the past seven years. I was looking forward to seeing and collaborating with many other project participants and users, and to getting to know new people as well.

My travel to Cork was unremarkable. I took a direct flight from Helsinki to Dublin and then an Aircoach bus to Cork. The ‘unremarkable’ part was really about the unexpected delays other people reported over the Matrix channel. The only ‘trouble’ I had was catching a taxi at 11pm after arriving in Cork to get to my B&B. The Aircoach bus from Dublin airport is very popular in summer, and whatever taxi fleet Cork has was DDoSed by the passengers.

Cork is hilly. I stayed in an excellent B&B across the road from the conference hotel. The hotel and its conference facilities are in separate buildings; the events building is uphill from the hotel. Walking is helpful, climbing harder, but given that we sit most of the time, it was a welcome ‘struggle’. Perhaps my stay outside of the conference hotel also helped me avoid COVID-19, which a few other participants, sadly, contracted. It is hit or miss every time.

Unfortunately, not everyone made it to Cork. Marina Zhurakhinskaya passed away in June 2022. Ben Cotton, Fedora Program Manager, was let go as a part of Red Hat’s layoffs earlier this year. Both had definitely changed the Fedora project dramatically, in many ways, both contributing to the openness and friendliness Fedora is known for. Many presenters remembered both Marina and Ben during their sessions.

The 2023 edition of Flock to Fedora was also the first Fedora Project event co-located with CentOS Connect. As a result, it brought Red Hat Enterprise Linux distribution upstreams and downstreams together.

Talks

In total, there were up to four parallel tracks, dedicated to different areas of distribution development and the project’s life, spanning three days. That, unsurprisingly, made it challenging to visit all talks and activities. It is a common trait shared by many successful events. And for those who wanted to continue discussions after a talk ended, there was always the ‘hallway track’.

State of Fedora 2023

The first talk was ‘State of Fedora 2023’ by the project leader, Matthew Miller. The recording is available here. I am linking to the re-take of the talk, as the original streaming was off by 20 minutes and Matthew had to reprise it.

A major announcement made during the talk was a hiring one. A Fedora Operations Architect role has been introduced after the program manager role that Ben Cotton so masterfully executed was eliminated. Hopefully, this new role will be filled soon and will capture the same benefits that Ben brought to Fedora. The role is a bit different, though, as it is focused on cross-project and cross-distro impact across Fedora and RHEL.

Fedora contributors’ survey results were also unveiled by Matthew in the talk. In general, contributors keep their trust in the project and continue their participation at pre-pandemic levels. The recent social networking turmoil around Red Hat’s actions hasn’t influenced the results much. The screenshots below are from the video stream, as the talk’s slides aren’t yet available.

The talk went into detail on what Matthew and the Fedora Council aim for in the Fedora Project’s future. Growing a project with thousands of contributors spread around the world and representing different cultures is hard. A lot of effort is put into making Fedora a welcoming place for everyone who is willing to work together towards a common goal.

rpminspect: Lessons from three distributions

David Cantrell created and maintains a tool that helps RHEL maintainers keep their packages sane over years of maintenance. It runs as a part of the CentOS Stream merge request process, as part of Fedora gating and pull request testing, and as a gating test for RHEL.

The talk itself is an excellent retrospective on what one should consider when creating a new open source project while working on it full time. David provided observations on how to sell the idea to your management, how to get people interested in becoming a community for your project, and how to sustain development in the long run. This is one of those rare gems of ‘lone wolf’ maintainership stories that everybody should absorb when starting a new journey. Believe me, it is worth it.

Using AI/ML to process automated test results from OpenQA

Tim Flink from the Fedora QE team decided to apply AI/ML to the problem of identifying hanging or failing jobs in OpenQA. OpenQA runs full-VM tests and records screencasts of everything that is shown on a VM screen. A crash of a graphical environment is abruptly visible there, as the graphics would be replaced by a terminal with a Wayland stacktrace. What followed was an experiment in processing these screens to reliably detect a particular type of crash.

We spent some time with Tim discussing how these experiments could be applied to finding possible issues in other system reports. Since Fedora is upstream of CentOS Stream and RHEL, certain issues – and their fixes – often appear in Fedora first. If we could train a model on those issues in Fedora, could we automatically detect whether a particular fix is required in RHEL later? This is quite relevant to FreeIPA and SSSD, as we run their tests in OpenQA as a part of the Fedora Server release criteria.

Another possible use case is reverse training. Since we know what a potential failure could look like, we can intentionally build an OpenQA test environment that reproduces the failure and then train a model to recognize logs from such failures in real-life scenarios. For example, establishing trust to Active Directory in FreeIPA relies on a working DNS setup, a working firewall, etc. Failure to communicate through an incomplete firewall would show up as timeouts in the logs, which we could train the model to recognize. There are endless possibilities here to aid with known errors.

Hallway track

On a similar note, I had a discussion with Amazon’s David Duncan in the ‘hallway track’, which started from an observation that the Cloud SIG would really benefit from our passwordless work: distributing VMs with pre-set passwords is not ideal, and an ability to inject FIDO2 passkey information and have everything obey it at login in the cloud would be great to have. Somewhere along the way, the discussion switched to CoreOS-based environments, and I realised my experiments with Fedora Silverblue to develop passwordless support for FreeIPA would probably make a good subject for a talk that would interest others as well.

I am running my own Silverblue images which pull in SSSD and FreeIPA upstream test builds to allow me to easily switch between different potential options in one go, without messing with an installation environment. This is quite important for the integration work we do and will be crucial for end-to-end testing of upcoming GNOME changes.

This also gave me insight into what container-based environments need from FreeIPA, and from enterprise domains in general, to fit in nicely. I should have submitted a talk about that to Flock! Well, I will do one next year, for sure. (And, TODO: file issues in FreeIPA upstream to track that integration!)

I also had an interesting discussion with Jonathan Dieter. Jonathan is a long-term Fedora contributor and FreeIPA user. For the past several years Jonathan has worked with a local Irish company that provides services around the world to test local phone numbers. They maintain an infrastructure in more than 80 countries, where there might be no global cloud providers at all. To keep that infrastructure reliable, they use FreeIPA (not alone, of course) and OSTree-based images.

Asahi Linux and Fedora

This is one of the talks I missed attending in person, as conflicts are inevitable: Mo Duffy’s Podman Desktop talk and Adam Williamson’s Fedora CI state talk were running at the same time.

Asahi Linux is a project which aims to upstream support for Apple’s ARM64 hardware, best known through Apple’s M1 and M2 systems. At Flock, Asahi Linux project members announced that not only will Fedora Asahi Remix be the flagship distribution for the project, but the Fedora Discourse instance will also be used to handle Asahi Linux community collaboration.

Asahi’s announcement is also an example of how friendly the Fedora Project community has become over the years. I am definitely looking forward to seeing the remix become one of the official builds of Fedora.

Podman desktop: from Fedora to Kubernetes for beginners

Mo Duffy gave an outstanding talk about using Podman Desktop to deliver workloads to non-technical people. It was a highlight of the conference, for sure. She also made a few interesting points. For one, running cloud-based workloads locally to allow offline operations is nice. Mo demonstrated a Penpot instance, which is a design and prototyping application. Running it locally helps to maintain the same workflow while on an intercontinental flight. However, even more interesting is that this approach also allows using cloud software that is otherwise considered insecure. For example, running a WordPress setup locally to benefit from its nice UI in a local browser and then exporting static website content to push to the actual web hosting.

By lowering the barrier to using containerised applications through Podman Desktop, we may hope to get more people to join our community and contribute. Starting with Podman Desktop’s friendliness would allow these newcomers to discover other Fedora flavors and features. It is certainly an interesting aspect we could expand further, in a way similar to how the ‘F’ in Fedora got expanded in Mo’s presentation.

Panel: Upstream collaboration & cooperation in the Enterprise Linux ecosystem

Another conference highlight was the panel that brought representatives of Fedora, RHEL, Rocky Linux, Alma Linux, and CentOS Stream together on stage. Distributions upstream and downstream of RHEL presented their views on various development and community topics. It is worth watching the stream.

State of EPEL

Troy Dawson and Carl George presented another ‘State of EPEL’. EPEL has a solid contributor base who keep thousands of packages available to users of RHEL and downstream distributions. EPEL uses Fedora infrastructure, and for many packages it shares maintainers with Fedora (EPEL branches are branches in Fedora dist-git for the same package, if this package is not in RHEL). So all EPEL contributors are Fedora contributors. ;)

One interesting aspect in every “State of EPEL” talk is the long tail of the EPEL demographics. Much like “State of Fedora” shows demographics of Fedora releases, the EPEL statistics include details on who is running the lowest number of downstream systems.

Passwordless Fedora

My talk was on the morning of the second day. People were still recovering from the night of the International Candy Swap and table games, so at the start I had maybe a couple of attendees. Eventually we got more people in the room, and there were also online attendees, so it didn’t feel so lonely.

My talk was similar to previous ones at FOSDEM and SambaXP. What was new was a demo from Ray Strode of what a user experience could potentially look like in GNOME for a passwordless login. Ray implemented a prototype of the external identity provider login flow that Allan Day shared recently. This flow could be used for login through Microsoft’s Entra ID (a.k.a. Azure AD) or any OAuth2 provider supported by FreeIPA. We aren’t fully there yet, but the goal is to do this work once for GNOME and reuse it for the various passwordless authentication approaches supported through SSSD.

I also showed an old demo from my FOSDEM and Flock 2016 talks. It shows how we integrated 2FA tokens (Yubikeys in this example) with FreeIPA to authenticate and obtain Kerberos tickets through a KDC proxy over HTTPS. These tickets were then used to log in to a VPN. This is something that has been possible in Fedora and RHEL for almost a decade now.

OpenQA hacking

Before Flock, Adam Williamson started working on integrating Samba AD tests into OpenQA for Fedora. It almost worked, but there were a few issues Adam wasn’t able to resolve, so we sat down at Flock and figured out at least a few of those. The only remaining one was an apparent race condition within a test that enrolls a system to Samba AD using kickstart. SSSD, it seems, starts before networking is up and stable, and decides that it is offline. When the test tries to resolve an Active Directory user, SSSD fails to do so, as it thinks it is offline.

Interestingly, the same test against FreeIPA works fine. The same test done after kickstart works fine as well, for both FreeIPA and Samba AD. There is probably a need to add a waiting period to let the network state settle. We saw this in the past too, but never found a good way to trigger a proper event for SSSD to recover.

Social events

I am trying to reduce my candy consumption, so I skipped the social events on the first day but attended the conference dinner on the second day. All social events during Flock were well organized, and this one was no exception. We had interesting discussions with Fedora and Rocky folks, learning that there is a lot of similarity in how people live their lives across the world.

On Friday night, another social event was a Ghost Tour. However, we skipped it and, together with a few other people, went on a bit of a memorabilia tour through another Mexican place and (of course!) a local bar. Life in IT and development in the ’90s and early 2000s wasn’t that much different in the US and Europe, really. Thanks to Spot and Amazon for covering the dinner, and thanks to the other folks for the beer and the company.

I left on Saturday at noon using the same Aircoach bus towards Dublin airport. The bus was full – make sure you have booked your seat online in advance. My flight back to Finland was uneventful as well. Overall, it was a great conference, as usual. I’d like to say thank you to all volunteers and organizers who keep Flock so wonderful and Fedora project so welcoming. Thank you!

CPE Weekly update – Week 32 2023

Posted by Fedora Community Blog on August 11, 2023 10:00 AM

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us in the #redhat-cpe channel on libera.chat.

We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 07 August – 11 August 2023

[Image: CPE infographic]

Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for new Fedora releases (mirrors, mass branching, new namespaces, etc.).
The ARC (a subset of the team) investigates possible initiatives that CPE might take on.
Planning board
Docs

Update

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high-quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL), and Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including the build system, Bugzilla instance, updates manager, mirror manager, and more.

Updates

  • Presented “State of EPEL” at Flock
  • Participated in packaging “Review-a-thon” hackfest at Flock

The post CPE Weekly update – Week 32 2023 appeared first on Fedora Community Blog.

Contribute during the DNF5, GNOME 45, and i18n test days

Posted by Fedora Magazine on August 11, 2023 08:00 AM

Fedora test days are events where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora Linux before, this is a perfect way to get started.

There are four test periods in the upcoming weeks:

  • Friday 11 August through Thursday 17 August is to test DNF5.
  • Monday 14 August through Sunday 20 August, two test day periods focus on testing GNOME Desktop and Core Apps.
  • Tuesday 5 September through Monday 11 September is to test i18n.

Come and test with us to make the upcoming Fedora Linux 39 release even better. Read more below about how to do it.

DNF5

Since the brand new dnf5 package has landed in Rawhide, we would like to organize a test week to get some initial feedback on it before it becomes the default. We will be testing DNF5 to iron out any rough edges.

The test week will be Friday 11 August through Thursday 17 August. The test week page is available here.

GNOME 45 test week

GNOME 45 has landed and will be part of the Fedora Linux 39 release. Since GNOME is the default desktop environment for Fedora Workstation, and thus for many Fedora users, this interface and environment merit a lot of testing. The Workstation Working Group and Fedora Quality team have decided to split the test week into two parts:

Monday 14 August through Thursday 17 August, we will be testing GNOME Desktop and Core Apps. You can find the test day page here.

Friday 18 August through Sunday 20 August, the focus will be on testing GNOME Apps in general, as shipped by default. The test day page is here.

i18n test week

The i18n test week focuses on testing internationalization features in Fedora Linux.

The test week is Tuesday 5 September through Monday 11 September. The test week page is available here.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is available on the wiki pages mentioned above. If you’re available on or around the days of the events, please do some testing and report your results. All the test day pages receive some final touches that are completed about 24 hours before the test day begins. We urge you to be patient about resources that are, in most cases, uploaded just hours before the test day starts.

Come and test with us to make the upcoming Fedora Linux 39 even better.

Connect Element with Fedora:

Posted by Arnulfo Reyes on August 11, 2023 03:51 AM

Configuration on fedora.im! 🎩💬

Discover how to join forces and configure Element with the Fedora server: fedora.im

Step 1: Download and Install Element

  1. Go to your device’s app store (iOS, Android, or desktop) and search for “Element” or “Element Matrix Client”.
  2. Download and install the app on your device.

Step 2: Open the Element App

  1. Open the Element app after installation.
[Screenshot: matrix.org is the default; we need to tap Edit]

Step 3: Configure the Account with the Fedora Server: fedora.im

  1. On the main screen, tap or click “Edit”.
  2. Select “Continue with your Fedora Account”, depending on the version of the app you are using.
[Screenshot: fedora.im]

You will be redirected to Fedora Accounts; log in with your username and password.

Confirm the configuration and wait for Element to establish the connection with the Fedora server.


Step 4: Start Chatting

  1. Once your account is configured, you will be ready to start chatting securely with other users on the Fedora server.
  2. You can join public or private rooms, find friends by their usernames, and start conversations.

We look forward to seeing you in Fedora Latam:

https://matrix.to/#/#latam:fedoraproject.org

The synergy between Element and Fedora has given us not only an encrypted communication platform, but also digital style.

I appreciate you taking the time to read this far. If you want to stay connected and continue the conversation, I invite you to follow me on social media: Instagram @arnulfo or LinkedIn.

Your comments are always welcome. Until next time!

Flock to Fedora in Cork, IE (2023)

Posted by Kevin Fenzi on August 11, 2023 01:26 AM

Just got back from our first in-person Flock since 2019, and it was amazing. I thought I’d share my journey and thoughts from it here in longer form.

I did do some prep work the previous week, in that I tried to make sure as best I could that infrastructure and releng stuff was stable and wouldn’t need any interventions from me. That mostly worked out in the end, with only some releng work (starting mass signing of f40 in prep for branching this coming week) and some troubles with the matrix/irc bridge (which I can’t do much about aside from letting people know).

Travel to Flock was fine. I took advantage of my favorite flight from PDX to AMS and then flew to Cork directly. In AMS I ran into Justin Flory and David Cantrell, who were both not planning on being there at the same time as me, but were due to flight changes. The flight to Cork was quick, and we took a cab to the hotel.

The hotel was fine. It was a bit out of the center of town, which meant you had to take a cab or bus most of the time, but it wasn’t too bad. The rooms oddly didn’t have heating or cooling, but did have windows that opened, so I just kept mine open and was just fine. The conference center was in a separate building behind the hotel, and it was a bit weird to walk all the way back there on the 3rd floor to get to it, but it worked out fine. The elevator had an amusing ‘off by one’ error in it: the ground floor was “1”, the first floor was “2”, etc. The food was all fine to me. The hotel did a buffet breakfast and did lunch/coffee breaks, etc.

I left Monday, but arrived Tuesday afternoon, got settled and looked around, then went off to dinner with various folks. It was really nice. We went to a place called Market Lane, and they did a pretty good job accommodating a large group of rowdy nerds.

Wednesday the conference kicked off with some introductions from Justin and then Matthew Miller’s ‘State of Fedora’ talk. This was great as always, but sadly the streaming/recording had problems at the beginning, so Matthew redid the talk on the last day in the afternoon for a smaller audience. I think both were great. I think I liked the first one better, but that might be because I was less tired. Next I went to Justin Forbes’s talk on the state of the kernel, informative as always. Then on to a roundtable about packaging problems in modern languages from Jens Petersen. There were some good comments from a number of folks, but I am not sure we came up with much of a plan to help, aside from moving to more bundling. I was hoping we could look at sharing tooling to handle large package sets, but we can discuss this more on other platforms moving forward. Then on to a talk about the current state of Infra and Releng applications from Akash and Aoife. They made this talk really nice and fun and gave out prizes! Great work by them putting this together. Next I wanted to go to the RISC-V talk, but somehow decided to go to the AI/ML in QA talk instead. Really interesting seeing how to try and use AI for our QA efforts. I’m not a big fan of AI/LLMs in general, but I think they do have uses, and this is a clever one. Next up was the state of Fedora CI. It’s amazing how long it’s taken us, but we are finally getting there with CI. Next, a talk about EPEL. Great to get more visibility for a subproject of Fedora that’s so popular and useful. The great Flock international candy swap took place. So much cool and interesting candy. Someone (Carl!) even brought some jerky. Tons of good stuff. The day ended with a lot of talking to various people at the game night, the hotel bar, and then finally the hotel lobby when they kicked us out of the bar.

Thursday started too early at 8:30 with a great keynote by Jen Madriaga on Diversity, Equity, and Inclusion. Some very good points and things to think about. We can all be better here and help each other. Next up was a “Meet your FESCo” session. We only had 4 of the 9 FESCo members present this time, but we talked and answered questions and hopefully made some amount of sense. I then headed out to a talk on what’s new in systemd. Wow, so many things I had no idea about. I hope the slides for this are up somewhere, because even though I followed along on my laptop, there were things I didn’t get to try out. So many good things. Next was a talk on Ansible packaging in Fedora/EPEL. An excellent overview from gotmax23, who I was finally able to meet in person. So nice to have someone take over the Ansible maintainer mantle. I almost always enjoyed it, but I just don’t have the time to devote to it that I used to. It’s in good hands. Next was Matthew Miller’s Discourse discourse, but it didn’t really get into how to move it away from him doing so much, and instead got into a general discussion about it and how and whether we should convince people to move there. Still lots of good info and things that were good to bring up. Next was the upstream collaboration in Enterprise Linux panel. It was all surprisingly cordial, and for much of it the panelists seemed to all agree on things. Then the evening events: dinner at a Mexican food place and a scavenger hunt. I was wiped out, so I headed back to sleep after the dinner. Even though I passed out at like 8:30pm, I didn’t really seem to catch up on sleep much. Seems to be what happens at Flock.

The last day of Flock, Friday, again started off at 8:30am. This day was devoted to a Mentor Summit. After an introduction, I was on a mentoring panel. This session was one of the best of the conference, I thought. Some really great questions from the audience and from our moderator, Amita Sharma. She kept us going with great questions, and the time flew by. My fellow panelists did an awesome job too. Next I went to a workshop I was running with James Richardson on revamping our onboarding and mentoring, and the docs about those, in Fedora Infrastructure and Release Engineering. Surprisingly, we had a really nice crowd of folks, and we dived right in. Tons of good ideas and suggestions. As soon as we have recovered from travel, James and I will be writing things up for a round of review with the community, and then we can dive in and revamp the docs and start trying ideas. Infra and Releng are great, fun areas to contribute to, and I look forward to onboarding and mentoring a bunch of new folks. The workshop was scheduled to go on after lunch, but we lost a lot of people (either leaving early or going to other sessions), so we did just a bit more and wrapped things up. Then off to the final night. There was a ghost tour, but I was tired and a bit footsore, so I tagged along with some folks getting dinner in Cork and then passed out early for my super early travel back.

My Saturday started at 4:30am or so. I got up, showered, packed, checked out, and met the cab to the airport at 5:30am. Then a flight to Heathrow (which I had never been to before). Amusingly, my brother had been vacationing in the area and was in fact flying back to the US that same morning. So we met up in the airport; he got me into the lovely Virgin Atlantic lounge, where we had breakfast and caught up a bit. Sometimes these weird things work out. Next was my 9.5 hour flight to Seattle. That went mostly fine; I usually just read or listen to podcasts and ignore the world. I have to say that noise-canceling headphones are sure nice for these trips. I picked up a new Sony WH-1000XM5 set before this trip, and they did an outstanding job. Tons of battery life, super good noise canceling. Landed in Seattle and then walked. And walked. And walked. I think it was probably a mile or two of corridors, and up and down and around, before getting to the passport control line. It was really a lot of walking after being in planes all day. Finally got past that and… my next flight was in another terminal, so more walking to a train and more walking. 🙂 (I did over 10k steps Saturday.) Finally got to my gate and… they changed the gate. Got to the new gate and… they didn’t have a driver for the bus from the gate to the plane. When they finally did, they made me check my bag because of limited space. Finally off and landed in Portland, then in to wait for my bag. Then the shuttle to the parking lot where I parked, and finally the 2 hour drive home. Whew. Pretty epic day.

I have to say the hallway track was excellent as always. I had a number of really nice conversations with all kinds of people on all kinds of topics, from Irish taxes to community building to mobile devices and hardware support to boating to weather to books, etc.

So, whirlwind travel, but really, really nice to see people in person again whom I talk with most days over the network. I really hope next year we get more of the folks who were not able to make it this time around. As always, Flock leaves me body-tired but mind bursting with all the possibilities!

Testing latest KDE software, from Apps to the Plasma desktop

Posted by Timothée Ravier on August 10, 2023 10:00 PM

This is the transcript of the talk I gave at Akademy 2023. If you prefer, you can watch the video (the sound is not great unfortunately) or look at the slides. The videos from the talk are in this post.

While this talk is focused on KDE software and Fedora Kinoite, most of the concepts described here also apply to other Flatpak’ed applications (notably GNOME ones) and other rpm-ostree based desktops (Silverblue, Sericea, Onyx, etc.).

Testing: what, when, how?

Software has bugs! One way to find bugs is to have users test changes.

To make that possible, we need to deliver pre-release versions of our software in a way that is accessible to our users which are usually not developers.

Remember that even as a developer, you are always the user of someone else’s project. The technology stack is now so complex that it is mostly impossible to understand every single project included in a modern desktop environment, even if it is fully open source.

Testing pre-release software also has to be reasonably safe for users’ data, as it’s often not practical to ask users to back up everything all the time, and testing a small fix for an application should not crash your entire system.

Users must also be able to go back to a state where their system is running only “stable” software again, after they have completed testing a change or a fix.

Ideally, we would let users test all changes before they are committed to a repo, during the merge request process. But sometimes this is too difficult, and we should then enable them to test those changes as soon as possible after they are committed.

Let’s start with KDE Apps

We’re now publishing most KDE Apps as Flatpaks on Flathub. We track the latest stable releases. Updates are shipped directly to users.

See Albert Astals Cid’s talk (Flatpak and KDE) for more details about Flatpak, how they work and how we use it for KDE Apps.

KDE Apps on Flathub

On Flathub, the pull-request workflow is enforced. Each PR builds a “test” Flatpak. This Flatpak can be installed using a single command.

This lets developers create Flatpaks with fixes for users to test on top of stable releases.

Below is a demo of testing a Flatpak fix from Flathub:

[Video: /downloads/videos/Akademy_2023_1_flathub_demo.mkv]


Transcript of the video:

# Go to PR: https://github.com/flathub/org.kde.gwenview/pull/97

# Download Flatpak:
$ flatpak install --user \
    https://dl.flathub.org/build-repo/35177/org.kde.gwenview.flatpakref

# Run Flatpak
$ flatpak --user run org.kde.gwenview

# Run Gwenview from the host to compare versions
$ gwenview

# Cleanup
$ flatpak --user uninstall org.kde.gwenview

KDE Apps on KDE Invent

We are progressively setting up the same infrastructure in KDE GitLab instance (invent.kde.org) using GitLab CI.

Flatpak manifests are directly stored in application’s repos. Every pull request creates a Flatpak bundle to try out.

This lets users test fixes and features before the change is merged.

Below is a demo of testing a Flatpak from GitLab CI:

[Video: /downloads/videos/Akademy_2023_2_invent_demo.mkv]


Transcript of the video:

# Go to PR: https://invent.kde.org/graphics/gwenview/-/merge_requests/208

# Download the artifact:
$ curl -O https://invent.kde.org/graphics/gwenview/-/jobs/1041125/artifacts/download?file_type=archive

# Unzip and install the Flatpak bundle
$ unzip Flatpak_artifacts.zip
$ flatpak --user install --bundle ./gwenview.flatpak

# Run the Flatpak
$ flatpak --user run org.kde.gwenview

# Cleanup
$ flatpak --user uninstall org.kde.gwenview

More advanced Flatpak usage

Flatpak content is stored in an ostree repo. Similarly to a Git repo, you can fetch any previous build to test regressions or compare behavior. This works with Flatpaks from Flathub.

Note that apps strictly tied to Qt versions will also need an older Runtime.

Below is a demo of bisecting a Flatpak from Flathub:

[Video: /downloads/videos/Akademy_2023_3_flatpak_adv_demo.mkv]


Transcript of the video:

# Install a Flatpak from Flathub
$ flatpak --user install org.kde.kcalc 

# Look at the log of versions (ostree commit log)
$ flatpak --user remote-info --log flathub org.kde.kcalc | less

# “Checkout” an older version
$ flatpak --user update --commit=... org.kde.kcalc

# Run the older version
$ flatpak --user run org.kde.kcalc

# Reset to latest version (note: operation canceled)
$ flatpak update

# Skipping updates for a Flatpak
$ flatpak --user mask org.kde.kcalc

# Testing updates again
$ flatpak update

# Removing the mask
$ flatpak --user mask --remove org.kde.kcalc

# Listing masks
$ flatpak --user mask

# Cleanup
$ flatpak --user uninstall org.kde.kcalc

Can we do the same for KDE Plasma?

(Re-)introducing Fedora Kinoite

Fedora Kinoite is a Fedora variant featuring the KDE Plasma desktop. It follows the latest upstream KDE releases. It is stable and based on an up-to-date software stack from Fedora: Wayland, PipeWire, systemd user sessions, etc.

Fedora Kinoite brings the concept of an immutable desktop operating system, which means that you control when your system is changed.

The system is focused on Flatpak and container based workflows.

See my previous talk at Akademy: Akademy 2021: Kinoite, a new Fedora variant with the KDE Plasma desktop (slides, video).

Benefits of rpm-ostree

The system is shipped as a single consistent image. Updates are performed atomically: either fully applied or not at all, so there are no broken updates and your system is always in a working state.

System updates (rpm-ostree) keep all your data and Flatpak apps as-is. This makes it easy to roll back to a previous known-good version.

You also have access to package diff between versions.
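
A minimal sketch of that update workflow (the staged changes don’t affect the running system until you reboot):

$ rpm-ostree update      # stage an atomic update as a new deployment
$ rpm-ostree db diff     # package diff between the booted and staged deployments
$ rpm-ostree rollback    # make the previous deployment the default again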

rpm-ostree ❤️ containers

rpm-ostree is now capable of delivering operating system images as container images. This lets you manage operating system versions with container tags.

You can store each version of your operating system inside a container registry and rebase your system to almost any version.

Below is a demo of rebasing to a container on Kinoite:

[Video: /downloads/videos/Akademy_2023_4_rpm-ostree_rebase_demo.mkv]


Transcript of the video:

# Looking at current state
$ rpm-ostree status

# Find the version to rebase to in the repo on Quay.io:
https://quay.io/repository/fedora-ostree-desktops/kinoite?tab=tags

# Rebase to this version
$ sudo rpm-ostree rebase \
ostree-unverified-registry:quay.io/fedora-ostree-desktops/kinoite:38.20230710.xyz
$ reboot

# Package diff
$ rpm-ostree db diff

# Cleanup and rollback
$ rpm-ostree cleanup
$ rpm-ostree rollback

Looking forward to Plasma 6

Fedora Kinoite Beta & Nightly

See Introducing Kinoite Nightly (and Kinoite Beta).

The builds for those images are currently paused (waiting for Plasma 6).

Plasma 6 Kinoite images?

We’re working on it! We’ll make Fedora Kinoite Nightly images, with Plasma 6 packages, on top of stable Fedora.

Hopefully coming soon!

Future options for testing?

Could we do pre-merge checks? Testing with OpenQA?

Running OpenQA tests for each Plasma PR is likely to create too much overhead, but maybe we can do daily or weekly checks?

Bringing RPM spec files into Git repos and building them in GitLab CI would significantly help the Fedora Kinoite Nightly and Beta testing efforts.

Conclusion

Happy testing!

Introducing Kinoite Nightly (and Kinoite Beta)

Posted by Timothée Ravier on August 10, 2023 10:00 PM

Update: Kinoite Nightly & Beta images are temporarily paused while we work on making a Kinoite image with Plasma 6 content available.


As announced during the Fedora Kinoite “Hello World!” talk (slides) last year at the Fedora 35 release party, one of the goals for Fedora Kinoite is to make it easier for everyone to try and test the latest KDE Plasma desktop and Apps, without having packaging, compiler or development knowledge.

We are now much closer to that goal with the introduction of Kinoite Nightly, an unofficial variant of Fedora Kinoite based on stable Fedora plus nightly packages for KDE software (Plasma desktop and a base set of apps).

Alongside Kinoite Nightly, we are also introducing Kinoite Beta, another unofficial variant of Fedora Kinoite, also based on stable Fedora but with KDE Plasma Beta packages. This variant is based on the fresh release of KDE Plasma 5.27 Beta.

While the Nightly variant will be built daily and will always be available, the Beta variant will only be built and available during KDE Plasma Beta testing phases.

All of this is only made possible by the very good work of the Fedora KDE SIG members who maintain KDE packages for Fedora, including those nightly packages.

Those variants are built daily and published as container images hosted in repos on Quay.io.

Pre-release software notice

Warning: This should be obvious, but in case it needs to be said: this is pre-release software that may include major bugs. Only use this on systems where you are confident you will be able to roll back and have backups of your collection of favorite cat pictures.

Additionally, for Kinoite Nightly: the functionality, features, and bugs might change at any time, and there is no guarantee of compatibility or stability.

If you find bugs, you are welcome to report them to KDE developers on bugs.kde.org or to the project on GitLab.

How do I try it?

We currently do not have installation ISOs or pre-installed images available. To try it, you can follow these steps:

 1. Install the latest official Fedora Kinoite release.

 2. Update your system to the latest version and reboot:

$ sudo rpm-ostree update --reboot

 3. Pin your current deployment to make sure that you will be able to rollback if something fails:

$ sudo ostree admin pin 0

 4. Switch to either Kinoite Nightly or Kinoite Beta:

# For Kinoite Nightly
$ sudo rpm-ostree rebase --reboot \
    ostree-unverified-registry:quay.io/fedora-ostree-desktops/kinoite-nightly

# For Kinoite Beta
$ sudo rpm-ostree rebase --reboot \
    ostree-unverified-registry:quay.io/fedora-ostree-desktops/kinoite-beta

 5. Test and report bugs!

Kinoite Beta with KDE Plasma 5.27 Beta

Connecting the packages with the sources in Kinoite Nightly

Note: This mostly applies to Kinoite Nightly as Kinoite Beta is made from released Beta sources.

You can figure out exactly which commit is included in the image by looking at the Git short hash included at the end of the package version (right before -1.fc37.x86_64, which is the revision, distribution release, and architecture):

$ rpm -qa | grep kf5
...
kf5-kio-core-5.100.0^20221108.0a9215b-1.fc37.x86_64
...
$ rpm -qa | grep plasma
...
plasma-desktop-5.26.80^20221108.a2a62b5-1.fc37.x86_64
...

Testing an image built on a specific date

As the images are rebuilt daily, you can also fetch a version that was built on a specific date. You can see all builds (tags) in the repos on Quay.io.

To do that, you need to specify the tag referencing the container image to fetch with the rebase command:

$ sudo rpm-ostree rebase \
    ostree-unverified-registry:quay.io/fedora-ostree-desktops/kinoite-nightly:37.20230109.0.be10bf34

This should make it much easier to bisect regressions that impact a lot of components, either in the KDE packages themselves or in their dependencies.

You can also diff the list of package between two versions with:

$ rpm-ostree db diff
...

Or override a specific package with another version, or with your own build with custom patches, for example:

$ rpm-ostree override replace plasma-workspace-XYZ.rpm
...
$ reboot

But what about all the other apps?

Just like with Fedora Kinoite, most applications should run just fine as Flatpaks, and we’re working on making most of them available on Flathub (we’re really close now). To be able to test development versions of those apps, you can try them via the Nightly Flatpak repo hosted by KDE.
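
As a hedged sketch of how adding that repo might look (the URL is an assumption on my part; check KDE’s documentation for the current one):

# hypothetical example; verify the repo URL in KDE's docs before using it
$ flatpak remote-add --user --if-not-exists kdeapps \
    https://distribute.kde.org/kdeapps.flatpakrepo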

This single repo might go away or be reworked once we are able to fully move to GitLab CI based builds for Flatpak apps. You’ll then be able to get nightly Flatpak builds for applications directly from the KDE Invent GitLab instance.

See also my post about the future for Flatpak support and integration in KDE.

Conclusion

Feel free to drop by the Fedora KDE channel or to open an issue if you have feedback, suggestions or questions.

Job Posting: Fedora Operations Architect

Posted by Fedora Community Blog on August 10, 2023 04:25 PM

Red Hat is hiring for a new Fedora role

Red Hat is hiring for a new full-time role supporting the Fedora Project. The job listing (replicated below) is open now, and if you are interested, you can apply online.

About the job

Red Hat’s Linux Integration Team is looking for a Fedora Operations Architect. In this new role, you will work as a member of the Fedora Council, the project’s top-level governance and leadership body, to coordinate and execute key strategic initiatives. You will also work with internal and external stakeholders and the community-elected Fedora Engineering Steering Committee to guide technical changes in the Fedora Linux distribution and coordinate with RHEL engineers and product managers on the impact of these changes. As Fedora Operations Architect, you will analyze Fedora processes and programs for measurable impact, and develop new practices to reduce complexity and improve outcomes in making open source innovation available to Fedora users and to Fedora’s downstream distributions. Successful applicants for this remote-flexible role must reside in a state or country where Red Hat is registered to do business.

What you will do

  • As a member of the Fedora Council, work with other Council members and the community to advance strategic initiatives
  • Use communication and coordination skills to drive an effective Change process in Fedora that avoids or mitigates surprises, and delivers desirable, innovative results into Fedora Linux and Red Hat Enterprise Linux
  • Maintain the Fedora release cadence so that it aligns with Red Hat Enterprise Linux needs
  • Evaluate and improve technical and social processes across the project
  • Provide status reports and communications to the Fedora Community
  • Participate in relevant community teams as an ongoing stakeholder

What you will bring

  • Extensive experience with the Fedora Project or a comparable open source community
  • Experience with software development and open source developer communities; understanding of development processes
  • Demonstrated ability in organizing complex projects with multiple interests and diverse stakeholders
  • Ability to lead teams through empathy, inspiration, and persuasion with multiple cross-organizational groups that span the globe
  • Outstanding organizational skills; ability to prioritize tasks to match short- and long-term goals and focus on high-priority tasks
  • Experience motivating and respecting capacity-limited volunteers and associates across teams and companies
  • Exceptional English language communication abilities in both written and verbal forms

The post Job Posting: Fedora Operations Architect appeared first on Fedora Community Blog.

Drex announcement

Posted by ! Avi Alkalay ¡ on August 09, 2023 09:30 AM

The digital Real will be called Drex, as the Brazilian Central Bank announced, and it has little to do with cryptocurrencies, aside from the technology it runs on: blockchain.

For those of us buying snacks at lunchtime, it will be barely visible; it won't be part of people's daily lives the way Pix is. Drex comes into play in larger transactions, such as buying a house or a car, making use of mini-contracts to simplify, speed up, and reduce the cost of banking integration and services such as security deposits, cashier's checks, court deposits, and so on.

Also on Facebook and LinkedIn.

Fedora Linux Flatpak cool apps to try for August

Posted by Fedora Magazine on August 09, 2023 08:00 AM

This article introduces projects available in Flathub with installation instructions.

Flathub is the place to get and distribute apps for all of Linux. It is powered by Flatpak, allowing Flathub apps to run on almost any Linux distribution.

Please read “Getting started with Flatpak“. To enable Flathub as your Flatpak provider, use the instructions on the Flatpak site.
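For reference, the command documented on the Flatpak site for adding the Flathub remote is essentially the following (check the site for the current URL):

flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo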

Authenticator

Authenticator is a simple app that allows you to generate two-factor authentication codes. It has a very simple and elegant interface, with support for a lot of algorithms and methods. Some of its features are:

  • Time-based/Counter-based/Steam methods support
  • SHA-1/SHA-256/SHA-512 algorithms support
  • QR code scanner using a camera or from a screenshot
  • Lock the application with a password
  • Backup/Restore from/into known applications like FreeOTP+, Aegis (encrypted / plain-text), andOTP, Google Authenticator

You can install “Authenticator” by clicking the install button on the site or manually using this command:

flatpak install flathub com.belmoussaoui.Authenticator

Secrets

Secrets is a password manager that integrates with GNOME. It’s easy to use and uses the KeePass file format. Some of its features are:

  • Supported Encryption Algorithms:
    • AES 256-bit
    • Twofish 256-bit
    • ChaCha20 256-bit
  • Supported Derivation algorithms:
    • Argon2 KDBX4
    • Argon2id KDBX4
    • AES-KDF KDBX 3.1
  • Create or import KeePass safes
  • Add attachments to your encrypted database
  • Generate cryptographically strong passwords
  • Quickly search your favorite entries
  • Automatic database lock during inactivity
  • Support for two-factor authentication

You can install “Secrets” by clicking the install button on the site or manually using this command:

flatpak install flathub org.gnome.World.Secrets

Flatsweep

Flatsweep is a simple app to remove residual files after a Flatpak is uninstalled. It uses GTK4 and Libadwaita to provide a coherent user interface that integrates nicely with GNOME, but you can use it on any desktop environment.

You can install “Flatsweep” by clicking the install button on the site or manually using this command:

flatpak install flathub io.github.giantpinkrobots.flatsweep

Solanum

Solanum is a time tracking app that uses the Pomodoro Technique. It uses GTK4, and its interface integrates nicely with GNOME.

You can install “Solanum” by clicking the install button on the site or manually using this command:

flatpak install flathub org.gnome.Solanum

Cockpit 298

Posted by Cockpit Project on August 09, 2023 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit 298:

PatternFly 5

Cockpit, Cockpit-Podman, Cockpit-Machines and Cockpit-OSTree now all use PatternFly 5.

Storage: Stratis pools can be bound to Tang servers

Stratis pools can now use a Tang server for encryption, either in addition to or instead of a passphrase.

[Screenshot: Stratis pools can now be bound to a Tang server]

Try it out

Cockpit 298 is available now:
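On Fedora Linux, for example, installing and enabling Cockpit looks roughly like this (package and socket names as shipped in Fedora):

sudo dnf install cockpit
sudo systemctl enable --now cockpit.socket

Then browse to https://localhost:9090 to log in.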

Matrix to libera.chat (IRC) bridge unavailable

Posted by Fedora Community Blog on August 08, 2023 07:44 PM

The Fedora Project has been moving to Matrix for our interactive chat needs for a while, but we wanted to make any such transition smooth and not leave behind users who preferred IRC for whatever reason. When we set up Matrix rooms, we also set up a portal using the Matrix<->libera.chat IRC bridge. This allows Matrix and IRC users to see the same content and interact with each other. There have, of course, been occasional issues with dropped messages or clashes between the Matrix and IRC cultures, but overall it has been a great help in keeping our community from fragmenting.

Unfortunately, issues with the bridge have reached a point where the libera.chat folks have asked for the bridge to be taken down until it can be fixed up. This happened at 14:00 UTC on 2023-08-06.

What does this mean for the Fedora Community? A few things:

  • Realize that part of the community will not see your messages. If you send from IRC, Matrix users will not see or be able to respond to those messages. Likewise if you send from Matrix, no IRC users will see or be aware of your messages. If you are expecting a reply from someone and don’t see it, do realize they could be on the other platform.
  • Fedora’s meeting bot (zodbot) is on IRC. This means you will need to have everyone involved in meetings on IRC. Meetings can’t be recorded from Matrix.

Hopefully this is a very temporary issue and the bridge will be back soon.

The post Matrix to libera.chat (IRC) bridge unavailable appeared first on Fedora Community Blog.

EDA and the Three Dwarves

Posted by Madeline Peck on August 08, 2023 06:05 PM

What a long journey this coloring book has gone on! This blog post has been sitting in my drafts for over a year and I thought it was finally time to publish it.

If you’re not aware of the previous coloring books, they have been a series of projects started by Máirín Duffy and Dan Walsh to increase awareness and convey a better understanding of different technology.

For example, ‘The Container Coloring Book: Who’s Afraid of the Big Bad Wolf?’ is a coloring book where the three little pigs teach you how to keep the big bad wolf from blowing your container-based applications down. The book covers security, management, resource control, namespaces, and much more that people should keep in mind when creating their own applications with containers.

All of the past (and hopefully future!) coloring books are kept at www.red.ht/coloring

<figure class=" sqs-block-image-figure intrinsic "> </figure>

‘EDA and the Three Dwarves’ was written by Máirín Duffy and then the script was handed off to me when I was an intern in 2020.

I numbered each beat in the script that I could imagine a visual for, which was a lot since this was going to be a comic and not just one picture on each page.

When I drew a thumbnail version of the page I could then add the numbers to keep track of how far I was. This way I knew which parts of the script had visuals.

Below are some of the pages in thumbnail form. There are page numbers (1, 2, 3, 4, etc.) at the bottom, as well as the other numbers that correlate with the script scattered across the page. Ultimately, after I thumbnailed the whole script, I decided that it would be 20 pages total.

<figure class=" sqs-block-image-figure intrinsic "> </figure> <figure class=" sqs-block-image-figure intrinsic "> </figure>

After the pages were roughly blocked out, I went into Krita and sketched out the illustrations. Some pages were easier than others, and some needed a reference photo, but I drew all the pages in a good amount of detail. You can see how the first thumbnail above, on the top left, turned into the first page below.

Snow White is the main character of our story and I wanted the first page to have her in front of her bakery and draw the readers in.

<figure class=" sqs-block-image-figure intrinsic "> </figure> <figure class=" sqs-block-image-figure intrinsic "> </figure>

After the pages were blocked out in Krita, I brought them into Inkscape to vectorize them. Below you can see how the sketch transformed into the final!

<figure class=" sqs-block-image-figure intrinsic "> </figure> <figure class=" sqs-block-image-figure intrinsic "> </figure>

This was a really great project to launch my Red Hat career; it gave me a great base of knowledge about containers and a chance to familiarize myself with open source programs.

<figure class=" sqs-block-image-figure intrinsic "> </figure>

Mo and I gave some comments about the process that you can read in this article.

As always, thank you for reading about the process! If you are interested in reading the coloring book or printing it for yourself at home, you can find the PDF here.

Community Blog monthly summary: July 2023

Posted by Fedora Community Blog on August 08, 2023 08:00 AM

This is the latest in our monthly series summarizing the past month on the Community Blog. Please leave a comment below to let us know what you think.

Stats

In July, we published eight posts. The site had 2,763 visits from 1,890 unique viewers. 986 visits came from search engines, while 21 came from Fedora Discussion and 14 came from Fedora Magazine.

The most-read post last month was “Fedora Linux 39 development schedule” with 479 views. The most-read post published last month was “Community Blog monthly summary: June 2023” with 60 views.

Badges

Your content here!

The Community Blog is the place to publish community-facing updates on what you’re working on in Fedora. The process is easy, so submit early and submit often.

The post Community Blog monthly summary: July 2023 appeared first on Fedora Community Blog.

Updating Fedora the unsupported way

Posted by Matthew Garrett on August 08, 2023 05:54 AM
I dug out a computer running Fedora 28, which was released on 2018-04-01 - over 5 years ago. Backing up the data and re-installing seemed tedious, but the current version of Fedora is 38, and while Fedora supports updates from N to N+2, that was still going to be 5 separate upgrades. That seemed tedious too, so I figured I'd just try to do an update from 28 directly to 38. This is, obviously, extremely unsupported, but what could possibly go wrong?

Running sudo dnf system-upgrade download --releasever=38 didn't successfully resolve dependencies, but sudo dnf system-upgrade download --releasever=38 --allowerasing passed and dnf started downloading 6GB of packages. And then promptly failed, since I didn't have any of the relevant signing keys. So I downloaded the fedora-gpg-keys package from F38 by hand and tried to install it, and got a signature hdr data: BAD, no. of bytes(88084) out of range error. It turns out that rpm doesn't handle cases where the signature header is larger than a few K, and RPMs from modern versions of Fedora exceed that limit. The obvious fix would be to install a newer version of rpm, but that wouldn't be easy without upgrading the rest of the system as well - or, alternatively, downloading a bunch of build depends and building it. Given that I'm already doing all of this in the worst way possible, let's do something different.

The relevant code in the hdrblobRead function of rpm's lib/header.c is:

int32_t il_max = HEADER_TAGS_MAX;
int32_t dl_max = HEADER_DATA_MAX;

if (regionTag == RPMTAG_HEADERSIGNATURES) {
il_max = 32;
dl_max = 8192;
}

which indicates that if the header in question is RPMTAG_HEADERSIGNATURES, it sets more restrictive limits on the size (no, I don't know why). So I installed rpm-libs-debuginfo, ran gdb against librpm.so.8, loaded the symbol file, and then did disassemble hdrblobRead. The relevant chunk ends up being:

0x000000000001bc81 <+81>: cmp $0x3e,%ebx
0x000000000001bc84 <+84>: mov $0xfffffff,%ecx
0x000000000001bc89 <+89>: mov $0x2000,%eax
0x000000000001bc8e <+94>: mov %r12,%rdi
0x000000000001bc91 <+97>: cmovne %ecx,%eax

which is basically "If ebx is not 0x3e, set eax to 0xfffffff - otherwise, set it to 0x2000". RPMTAG_HEADERSIGNATURES is 62, which is 0x3e, so I just opened librpm.so.8 in hexedit, went to byte 0x1bc81, and replaced 0x3e with 0xfe (an arbitrary invalid value). This has the effect of skipping the if (regionTag == RPMTAG_HEADERSIGNATURES) code and so using the default limits even if the header section in question is the signatures. And with that one-byte modification, rpm from F28 would suddenly install the fedora-gpg-keys package from F38. Success!
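For reference, the same one-byte patch could be applied non-interactively; a sketch, assuming the same offset (which will differ between librpm builds):

# Overwrite the byte at offset 0x1bc81 with 0xfe (offset is build-specific)
printf '\xfe' | dd of=librpm.so.8 bs=1 seek=$((0x1bc81)) conv=notrunc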

But short-lived. dnf now believed packages had valid signatures, but sadly there were still issues. A bunch of packages in F38 had files that conflicted with packages in F28. These were largely Python 3 packages that conflicted with Python 2 packages from F28 - jumping this many releases meant that a bunch of explicit replaces and the like no longer existed. The easiest way to solve this was simply to uninstall Python 2 before upgrading, avoiding the entire transition. Another issue was that some data files had moved from libxcrypt-common to libxcrypt, and removing libxcrypt-common would remove libxcrypt and a bunch of important things that depended on it (like, for instance, systemd). So I built a fake empty package that provided libxcrypt-common and removed the actual package. Surely everything would work now?
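Building such an empty stand-in can be sketched with a minimal spec file; roughly (the package name and metadata here are illustrative, not the exact package used):

# Create a spec for an empty package that only provides libxcrypt-common
cat > fake-libxcrypt-common.spec << 'EOF'
Name:      fake-libxcrypt-common
Version:   1
Release:   1
Summary:   Empty stand-in that provides libxcrypt-common
License:   MIT
BuildArch: noarch
Provides:  libxcrypt-common

%description
Empty package so the real libxcrypt-common can be removed without
taking libxcrypt (and systemd) with it.

%files
EOF
rpmbuild -bb fake-libxcrypt-common.spec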

Ha no. The final obstacle was that several packages depended on rpmlib(CaretInVersions), and building another fake package that provided that didn't work. I shouted into the void and Bill Nottingham answered - rpmlib dependencies are synthesised by rpm itself, indicating that it has the ability to handle extensions that specific packages are making use of. This made things harder, since the list is hard-coded in the binary. But since I'm already committing crimes against humanity with a hex editor, why not go further? Back to editing librpm.so.8 and finding the list of rpmlib() dependencies it provides. There were a bunch, but I couldn't really extend the list. What I could do is overwrite existing entries. I tried this a few times but (unsurprisingly) broke other things since packages depended on the feature I'd overwritten. Finally, I rewrote rpmlib(ExplicitPackageProvide) to rpmlib(CaretInVersions) (adding an extra '\0' at the end of it to deal with it being shorter than the original string) and apparently nothing I wanted to install depended on rpmlib(ExplicitPackageProvide) because dnf finished its transaction checks and prompted me to reboot to perform the update. So, I did.

And about an hour later, it rebooted and gave me a whole bunch of errors due to the fact that dbus never got started. A bit of digging revealed that I had no /etc/systemd/system/dbus.service, a symlink that was presumably introduced at some point between F28 and F38 but which didn't get automatically added in my case because, well, who knows. That was literally the only thing I needed to fix up after the upgrade, and on the next reboot I was presented with a gdm prompt and had a fully functional F38 machine.
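For the curious: on current Fedora that unit is a symlink to the dbus-broker service, so restoring it by hand is a one-liner (the path assumes the dbus-broker backend that Fedora ships):

sudo ln -s /usr/lib/systemd/system/dbus-broker.service /etc/systemd/system/dbus.service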

You should not do this. I should not do this. This was a terrible idea. Any situation where you're binary patching your package manager to get it to let you do something is obviously a bad situation. And with hindsight performing 5 independent upgrades might have been faster. But that would have just involved me typing the same thing 5 times, while this way I learned something. And what I learned is "Terrible ideas sometimes work and so you should definitely act upon them rather than doing the sensible thing", so like I said, you should not do this in case you learn the same lesson.

comment count unavailable comments

Keyboard Gear Update – The Five Tactiles

Posted by Jon Chiappetta on August 07, 2023 05:03 PM
<figure class="aligncenter wp-block-table">
Ducky Mini Switches: Cherry MX Browns
Mods: Wooden Case
Sound: Marbley
Drop Alt Switches: Halo True + Holy Pandas
Mods: Stab Tape + Lube + Shim
Sound: Clacky
Matias Mini Switches: Alps White
Mods: None
Sound: Clicky
KBD67 Lite Switches: Halo True + Holy Pandas
Mods: All Foam
Mods: Stab Shim
Sound: Chalky
Mode Envoy Switches: Halo True + Holy Pandas
Mods: Plate Foam
Mods: Silicone Feet
Sound: Creamy
 
</figure>

Sourceware 25 Roadmap

Posted by Mark J. Wielaard on August 07, 2023 01:34 PM

Sourceware has been running for almost 25 years, providing a worry-free, developer friendly home for Free Software core toolchain and developer tool communities. And we would like to keep providing that for the next 25 years.

That is why, in the last couple of years, we have started to diversify our hardware partners, set up new services using containers and isolated VMs, investigated secure supply chain issues, added redundant mirrors, created a non-profit home, collected funds, invested in open communication and open office hours, and introduced community oversight by a Sourceware Project Leadership Committee with help from the Software Freedom Conservancy.

Please participate and let us know what more we (and you!) can do to make Sourceware and all hosted projects a success for the next 25 years.

Full history and roadmap for the next 25 on sourceware.org: Sourceware 25 Roadmap.

Video content creation with Kdenlive

Posted by Fedora Magazine on August 07, 2023 08:00 AM

Fedora Linux packages a suite of graphical software for content creators. This article introduces a use case and suggestions for creating tutorial videos with Kdenlive.

Plan tutorial

A question you need to address is whether text and images are enough to share your knowledge. If you create resources for learners of graphical software, a tutorial video is something to consider.

Review abstract and draft script

An abstract in content writing helps reviewers look for key points of your tutorial. Depending on your workflow, you can submit this abstract to reviewers for comments, questions, or updates.

Once the reviewers agree on an abstract for the tutorial video, a video script is created; it works like a manuscript for your tutorial. Break the process down into steps for each sequence. Check this link for an example.

Screen recording

Use your preferred recording tool that comes with the desktop environment or enable the ‘Screen Grab’ option in Kdenlive in the ‘View’ menu. Alternatively, you can install OBS Studio.

Kdenlive can process various container formats. You should transcode your recording to a lossless Matroska file (.mkv), which preserves quality while keeping a good compression ratio.
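For example, ffmpeg can do such a transcode; a minimal sketch, assuming a WebM screen recording as input (the file names are placeholders):

# Lossless H.264 in a Matroska container; slower presets compress better
ffmpeg -i recording.webm -c:v libx264 -qp 0 -preset veryslow -c:a copy recording.mkv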

Installation

Kdenlive supports Linux, Mac, Windows, and FreeBSD, which encourages collaboration among content creators. If you are a Linux user, install Kdenlive from your distribution’s package manager. If you use Fedora Linux, we recommend the Fedora Linux RPM version or the Flatpak.

Set up Kdenlive

Let’s start with Kdenlive’s user interface and focus on three sections: the Project Bin, the Monitors, and the Timeline.

<figure class="aligncenter size-large"><figcaption class="wp-element-caption">Kdenlive user interface</figcaption></figure>

Project bin

Load video clips into the Project Bin in the upper left. The Project Bin lists all the clips associated with your project. You can drag and drop clips onto the Project Bin.

Monitors

The Clip Monitor on the left displays the unedited clip currently selected in the Project Bin. If you have loaded multiple takes of the same scene (process steps), you need to know which one you are going to choose and edit. If you change your mind during editing, that’s no problem: you can move a sequence around the timeline and tracks after the initial cut.

The Project Monitor is a place to watch your edited footage.

Timeline

The Timeline is the place for all the selected clips you edit. Drag and drop clips onto the Timeline directly from the Project Bin.

Editing processes

Cut and stitch

The timeline cursor, also known as the playhead, indicates the position of the clips you are working on and previewing in the Project Monitor.

<figure class="aligncenter size-full"><figcaption class="wp-element-caption">Timeline</figcaption></figure>

The initial cut means editing on a scene-by-scene basis until you’re ready to stitch tracks together into a complete piece.

Cut when:
  • A delayed, boring, or repetitive part was recorded. This often happens when recording a scene of apps loading or a web browser waiting to render
  • A scene starts or ends and you need transition pieces
  • You want to trim off a few frames before you tidy up
  • You need to ensure basic continuity - let it flow!

In the Timeline, video tracks (V2, V1) are stacked upward while audio tracks (A1, A2) are stacked downward by default.

Slide the trimmed video track up and stitch together the frames you want to keep. Delete trimmed frames once you’re sure you don’t need them.

<figure class="aligncenter size-full"><figcaption class="wp-element-caption">Cut </figcaption></figure>

The Timeline works like a chef’s chopping board, and it takes time for new users to get familiar with it. Check the upstream documentation at this link.

Text effects with Titles

Titles are text elements that you can overlay on the timeline. To create a Title, right-click in the Project Bin and open the Title window as shown below. Select ‘Create Title’ to save it. Drag and drop the Title onto video track 2 in the timeline. Check this link for more information.

<figure class="aligncenter size-large"><figcaption class="wp-element-caption">Titler</figcaption></figure>

Sound effects

Ambient music could jazz up your video tutorial for the audience.

Go to the top left corner of the Project Bin and select the arrow to list options. Select ‘Online Resources’. In the ‘Service’ drop-down menu at the top right, choose ‘Freesound’. Select ‘Preview’ to play a sound back, then import it to download it and overlay it on the A1 audio track.

<figure class="aligncenter size-full"><figcaption class="wp-element-caption">Sound</figcaption></figure>

Transition and finishing touch

Text and sound effects will blend well if tracks have transitions. Check this link for fine-tuning your final cut.

Rendering

In the Render dialog box (Ctrl + Return) on the Project Bin, choose WebM as the output file format, select ‘More options’ to de-select the ‘Export Audio’ option, and select ‘Render to File’ to save the clip.

<figure class="aligncenter size-full"><figcaption class="wp-element-caption">Rendering</figcaption></figure>

WebM offers good compression and output quality.

Rendering speed depends on the number of CPU cores in your computer. If you work with high-quality footage and visual effects on a computer with a low-end CPU and limited RAM, adapt your workflow with proxy clips and use a script for rendering.

Share your tutorial video

PeerTube is a video-sharing platform that runs on GNU/Linux infrastructure and is Open Source/Free Software. Just like with Vimeo or YouTube, you can embed your content from PeerTube into your documentation site.

Credits and acknowledgements

Big thanks to Seth Kenlon, who provided me with a great deal of inspiration through his publication on Opensource.com and his Kdenlive workshop.

Kdenlive Version 23.04.2 was used for this article.

Week 31 in Packit

Posted by Weekly status of Packit Team on August 07, 2023 12:00 AM
  • Licenses in Packit specfiles are now confirmed to be SPDX-compatible. (example PR: packit#2026) If you are interested in more details, see a talk from Flock 2023 by Mirek.

Episode 387 – Enterprise open source is different

Posted by Josh Bressers on August 07, 2023 12:00 AM

Josh and Kurt talk about the difference between what we think of as traditional open source, and enterprise software projects that have an open source license. They are both technically open source, but how the projects work is very very different.

Listen to the episode: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_387_Enterprise_open_source_is_different.mp3

Show Notes

Matrix / libera.chat IRC bridge unavailable

Posted by Fedora Infrastructure Status on August 06, 2023 12:00 PM

The Matrix / libera.chat IRC bridge is unavailable currently. See ticket for more information.

Ebook formats

Posted by Matěj Cepl on August 05, 2023 09:32 PM

This is just a collection of alternatives to EPUB for electronic books and similar material.

:w gnu_bram_moolenaar.md

Posted by Joe Brockmeier on August 05, 2023 06:28 PM

Just learned this morning that Bram Moolenaar, creator and maintainer of Vim, passed away recently at 62. I’ve been a user of Vim since my 20s, so even though I’ve never met Bram, his work has been an important part of my life.

I’ve already written about how I got started with Vim a while back, so I won’t rehash that here. Suffice to say that I’ve spent a lot of time in Vim since 1999 when I got started with it.

At a conservative estimate, I’ve written more than 2.5 million words in Vim in my career. (Some of them about Vim, too.) That’s a conservative estimate, as Vim was my primary editor throughout my freelance writing and editing career. It’s also been my go-to for first drafts long after I switched to Google Docs for most of my collaborative work.

In those years I’ve spent a lot of time in the Vim help.txt, reading release notes and announcements and so forth, perusing his website, and even checking out some of his photo album. Might sound silly to say, but it did feel like I knew him a bit through his work – and certainly, I felt a great deal of affection for someone who touched so many people with his work on a vital open source tool and used it to try to put a spotlight on people who need help.

Earlier this year I gave a talk about Vim at Open Source Summit North America. I was pleasantly surprised by how many people came to the talk on a Friday, on a beautiful day in Vancouver. Clearly Vim still has some drawing power. A few of the folks who attended asked me after the talk why I didn’t switch to one of the newer variants like Neovim. I didn’t really have a good answer other than a strong fondness for the original. Part of that, illogically, is that I felt like I’d be letting Bram down by switching.

I’ve donated a few times to ICCF Holland, and will do so again today. What I hadn’t done, and really wish I had, was to send a thank you note to Bram for all those years of work on Vim. GNU, Bram Moolenaar. Your work has helped millions of people and you asked very little in return. I hope other folks will join me in making donations to ICCF in Bram’s memory.

:wq

Investing in global markets via Interactive Brokers

Posted by ! Avi Alkalay ¡ on August 05, 2023 11:58 AM

In recent years, easier ways to invest in global markets have appeared in Brazil, with Nomad, Avenue, and XP. But the best investment platform I have seen is Interactive Brokers (if you open an account with this link, you and I both earn credits and shares).

Of all the ones I have used, the Interactive Brokers app and website give the clearest overview, the best ease of use, the most reporting options, and the best access to market information, even compared with large Brazilian investment banks such as XP and BTG.

<figure class="wp-block-image size-large"></figure>

There is no cost to open or maintain an account, and they charge very low fees, around $1, per buy or sell order. They also do not charge for money transfers in or out via ACH or wire transfer.

You can operate in dozens of currencies, and the platform gives access to hundreds of stock and financial asset markets around the world, to name a few: NYSE and NASDAQ (the US exchanges), B3, the European exchanges, the Tel Aviv Stock Exchange, and many others.

How do you choose what to invest in?

The advantage of investing outside Brazil is more diversification and access to more stable currencies and markets. The disadvantage is the complexity, including figuring out how to handle income tax in Brazil. You also need more fluency to handle the larger amount of information about stocks, assets, indexes, and markets in other countries.

Happy investing!

Also on Facebook and LinkedIn.

My time on the IBM Linux Impact Team, and legacy

Posted by ! Avi Alkalay ¡ on August 04, 2023 10:36 PM

In this extensive article, Jon “MadDog” delves into the behind-the-scenes narrative of how Linux and Open Source gained acceptance within the corporate sphere, eventually establishing itself as the dominant platform in today’s enterprise information technology. It has become the operating system powering contemporary cloud infrastructure and, most notably, has transformed into the primary methodology for driving software innovation.

Interwoven throughout the article are accounts of my personal journey during my time at IBM, specifically during the unforgettable years as a member of the IBM Linux Impact Team. Our mission achieved remarkable success, evident today every time a non-desktop IT professional interacts with a computer system, which is now commonly built entirely on dependable, enterprise-ready Open Source software.

While people tell success stories in specific projects, what we did between 2001 and 2008 changed the entire IT industry! Without that evangelism and adoption of Open Source work, you as an IT professional would probably be using only Windows and Solaris today. You wouldn’t have embraced the cloud, wouldn’t know what DevOps is, and would likely be completely 100% dependent on software licensing.

My blog is full of stories from that time.

Time to tag some people I worked with on those years: Marcelo Braunstein, Cecilia Faria, Jose Carlos Fadel, John Walicki, Manuel Silveyra, Gisele Lloret Spinola de Araújo, Haroldo Hoffmann, Paulo Aragão, Auta Souza, Samuel Garofalo Masini, Rafael Peregrino da Silva, Lucas Ravagnani

Also on LinkedIn.

nvk: the kernel changes needed

Posted by Dave Airlie on August 04, 2023 10:26 PM

The initial NVK (nouveau vulkan) experimental driver has been merged into mesa master[1], and although there's lots of work to be done before it's application ready, the main reason it was merged was because the initial kernel work needed was merged into drm-misc-next[2] and will then go to drm-next for the 6.6 merge window. (This work is separate from the GSP firmware enablement required for reclocking; that is a parallel development needed to make nvk usable.) Faith at Collabora will have a blog post about the Mesa side; this is more about the kernel journey.

What was needed in the kernel?

The nouveau kernel API was written 10 years or more ago, and was designed around OpenGL at the time. There were two major restrictions in the current uAPI that made it unsuitable for Vulkan.

  1. buffer objects (physical memory allocations) were allocated 1:1 with virtual memory allocations for a file descriptor. This meant the kernel managed the virtual address space. For proper Vulkan support, the bo allocation and vm allocation have to be separate, and userspace should control the virtual address space.
  2. Command submission didn't use sync objects. The nouveau command submission wasn't wired up to the modern sync objects. These are pretty much a requirement for Vulkan fencing and semaphores to work properly.

How to implement these?

When we kicked off the nvk idea, I made a first pass at implementing a new user API to allow the above features. I took a look at how GPU VMA management was done in current drivers and realized that there was scope for a common component to manage the GPU VA space. I did a hacky implementation of some common code and a nouveau implementation. Luckily, at the time, Danilo Krummrich had joined my team at Red Hat and needed more kernel development experience in GPU drivers. I handed my sketchy implementation to Danilo and let him run with it. He spent a lot of time learning and writing copious code. His GPU VA manager code was merged into drm-misc-next last week, and his nouveau code landed today.

What is the GPU VA manager?

The idea behind the GPU VA manager is that there is no need for every driver to implement something that should essentially not be a hardware specific problem. The manager is designed to track VA allocations from userspace, and keep track of what GEM objects they are currently bound to. The implementation went through a few twists and turns and experiments. 

For a long period we considered using maple tree as the core of it, but we hit a number of messy interactions between the dma-fence locking and the memory allocations required to add new nodes to the maple tree. The dma-fence critical section is a hard requirement that everyone else has to deal with. In the end Danilo used an rbtree to track things. We will revisit whether we can use maple tree again in the future.

We had a long discussion, and a couple of implement-it-both-ways-and-see experiments, on whether we needed to track empty sparse VMA ranges in the manager or not. Nouveau wanted these, but generically we weren't sure they were helpful, and they also affected the uAPI, as it needed explicit operations to create and drop them. In the end we started tracking these in the driver and left the core VA manager cleaner.

Now the code is in tree we will start to push future drivers to use it instead of spinning their own.

What changes are needed for nouveau?

Now that the VAs are being tracked, the nouveau API needed two new entrypoints. Since BO allocation will no longer create a VM allocation, a new API is needed to bind BO allocations to VM addresses. This is called the VM_BIND API. It has two variants:

  1. a synchronous version that immediately maps a BO to a VM and is used for the common allocation paths.
  2. an asynchronous version that is modeled after the Vulkan sparse API, and takes in/out sync objects, which use the drm scheduler to schedule the vm/bo binding.

The VM_BIND backend then does all the page table manipulation required.

The second API added was an EXEC call. This takes in/out sync objects and a set of addresses that point to command buffers to execute. This uses the drm scheduler to deal with the synchronization and hands the firmware the command buffer address to execute.

Internally for nouveau this meant having to add support for the drm scheduler, adding new internal page table manipulation APIs, and wiring up the GPU VA.

Shoutouts:

My input was the sketchy sketch at the start, and doing the userspace changes to the nvk codebase to allow testing.

The biggest shoutout to Danilo, who took a sketchy sketch of what things should look like, created a real implementation, did all the experimental ideas I threw at him, and threw them and others back at me, negotiated with other drivers to use the common code, and built a great foundational piece of drm kernel infrastructure.

Faith at Collabora, who has done the bulk of the work on nvk, did a code review at the end and pointed out some missing pieces of the API and the optimisations it enables.

Karol at Red Hat worked on the main nvk driver, and Ben at Red Hat gave advice on how nouveau worked while he smashed away at the GSP rock.

(and anyone else who has contributed to nvk, nouveau and even NVIDIA for some bits :-)

[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/24326

[2] https://cgit.freedesktop.org/drm-misc/log/

Does open source matter?

Posted by Ben Cotton on August 03, 2023 12:00 PM

Matt Asay’s article “The Open Source Licensing War is Over” has been making the rounds this week, as text and subtext. While his position is certainly spicy, I don’t think it’s entirely wrong. “It’s not that open source doesn’t matter, but rather it has never mattered in the way some hoped or believed,” Asay writes. I think that’s true, and it’s our fault.

To the average person, and even to many developers, the freeness or openness of the software doesn’t matter. They want to be able to solve their problem in the easiest (and cheapest) way. Often that’s open source software. Sometimes it isn’t. But they’re not sitting there thinking about the societal impact of their software choices. They’re trying to get a job done.

Free and open source software (FOSS) advocates often tout the ethical benefits of FOSS. We talk about the “four essential freedoms“. And while those should matter to people, they often don’t. I’ve said before — and I still believe it — FOSS is not the end goal. Any time we end with “and thus: FOSS!”, we’re doing it wrong.

FOSS advocacy — and I suspect this is true of other advocacy efforts as well — tends to try to meet people where we want them to be. The problem, of course, is that people are not where we want them to be. They’re where they are. We have to meet them there, with language that resonates with them, addressing the problems they currently face instead of hypothetical future problems. This is all easier said than done, of course.

Open source licenses don’t matter — they’ve never mattered — except as an implementation detail for the goal we’re trying to achieve.

The post Does open source matter? appeared first on Blog Fiasco.

Bodhi Upgrade

Posted by Fedora Infrastructure Status on August 03, 2023 12:01 AM

Updating Bodhi to version 7.2.1 -- which also includes upgrading bodhi-backend from Fedora 37 to Fedora 38.

One minute hacks: Saving time inserting images in Libreoffice

Posted by James Just James on August 02, 2023 05:35 PM
You’ve probably used LibreOffice. You opened a document and proceeded to insert an image into the text. It wiggles all over and never rests where you want it to. The solution: you most likely want the “anchor as character” option. Right-click on the newly inserted image, go to the Anchor menu item, and choose As Character. You will have to do this for each image you insert, which is annoying…

Coming soon: Fedora for Apple Silicon Macs!

Posted by Fedora Magazine on August 02, 2023 04:15 PM

Today at Flock, we announced that Fedora Linux will soon be available on Apple Silicon Macs. Developed in close collaboration with the Fedora Asahi SIG and the Asahi Linux project, the Fedora Asahi Remix will provide a polished experience for Workstation and Server use cases on Apple Silicon systems. The Asahi Linux project has also announced that the new Asahi Linux flagship distribution will be Fedora Asahi Remix.

We are using a Remix as opposed to delivering this support in Fedora Linux proper because this ecosystem is still very fast moving and we believe a Remix will offer the best user experience for the time being. Also, the Remix will allow us to integrate hardware support as it becomes available. Nonetheless, as much of this work as possible is being conducted upstream, with several key components being developed, maintained and packaged in Fedora Linux upstream. Ultimately, we expect Apple Silicon support to be integrated in Fedora Workstation and Fedora Server in a future release, and are working towards this goal. This approach is in line with the overarching goal of the Asahi project itself to integrate support for these systems in the relevant upstream projects.

The first official release of Fedora Asahi Remix is slated to be available by the end of August 2023. Development builds are already available for testing at https://fedora-asahi-remix.org/, though they should be considered unsupported and likely to break until the official release.