Fedora People

Infra & RelEng Update – Week 2 2024

Posted by Fedora Community Blog on January 12, 2024 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates from the CPE (Community Platform Engineering) Team, as the CPE initiatives are in most cases tied to I&R work.

We provide both an infographic and a text version of the weekly report. If you just want a quick overview of what we did, look at the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 08 January – 12 January 2024

[Infra & Releng infographic]

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives

EPEL

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high-quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL), and Oracle Linux (OL).

Updates

  • Monthly Office Hours call last Wednesday
  • EPEL Steering Committee call
  • James Richardson onboarding
  • Requested vorbis-tools for EPEL9 to fix icedax package

Community Design

CPE has a few members who work as part of the Community Design Team. This team works on anything related to design in the Fedora community.

Updates

  • Working on Creative Freedom Summit 🤩
  • Podman / Podman Desktop Hackathon running this week
  • Getting assets ready for FOSDEM

List of new releases of apps maintained by CPE

Minor update of Noggin from 1.8.0 to 1.9.0 on 2024-01-10
Minor update of FMN from 3.2.0 to 3.3.0 on 2024-01-10

If you have any questions or feedback, please respond to this report or contact us in the -cpe channel on Matrix.

The post Infra & RelEng Update – Week 2 2024 appeared first on Fedora Community Blog.

Pros/Cons Of The Controversial Apple Studio Display VESA Edition

Posted by Jon Chiappetta on January 11, 2024 07:46 PM

Pros:

  • Flat Industrial Design (Metal & Glass)
  • Clean Even Rounded Simple Bezels
  • 5K+ Display Resolution & 200+ Pixels Per Inch
  • Single Cable Connection (Hub + Power)
  • VESA Mount Monitor Arm Movement
  • High Brightness Helps Reduce Reflections
  • Pretty Good Quality Speakers

Cons:

  • Lower refresh rate
  • No Local Dimming Zones
  • No physical buttons or controls
  • No HDMI input port for an Apple TV
  • No affordable larger-sized 32-inch version

Overall this is an amazing monitor for text-based productivity activities such as emailing, browsing, coding, terminals, etc., as the text is crisp, clear, sharp, and smooth to look at. Most PC monitors have all sorts of different styles, technologies, and designs built into them which are primarily geared towards PC gaming performance instead. This monitor is good at what it does, is nice to look at, and lastly inspires me to use it — I just wish Apple offered a larger 32″ option that was more affordable!

bodhi-server 8.0.2

Posted by Bodhi on January 11, 2024 04:23 PM

Released on 2024-01-11.
This is a bugfix release.

Bug fixes

  • Fixed Automated Tests table in the web UI not showing missing results or remote rule errors correctly (#5581).

Contributors

The following developers contributed to this release of Bodhi:

  • Adam Williamson

Music of the week: five albums to bring with me to the desert island

Posted by Peter Czanik on January 11, 2024 02:19 PM

I love music. My family, friends, and colleagues love music. I am in quite a few music-related Facebook groups. A recurring question everywhere in the past couple of weeks, in various wordings, was: what are the five albums you would bring to a desert island? This list of course changes almost every year. It also depends on the number of albums, and on whether live concert recordings, “best of” compilations, and similar albums can be included. So, this is the January 2024 edition with just studio albums :-)


You are what you listen to. I read several articles with this or a similar title. Well, almost all focused on the lyrics of music, and how those relate to the personality of the listener. This approach has multiple problems in my case. The majority of the music I listen to does not have any lyrics. I prefer instrumental music. And even if a piece of music has lyrics, I do not care. I can turn off interpreting English almost completely, and even my native Hungarian to a degree. To me the human voice is just yet another instrument. Probably the only exception is The War of the Worlds, but that’s another story.

The first album on my list comes from Vangelis. “Chariots of Fire” was the first CD I ever bought, and it has been one of my favorite albums ever since. There were times when I listened to it almost daily. Nowadays I listen to it a few times a year. I learned only years later that it is a film soundtrack. I also watched the movie. Not bad at all, but its soundtrack is much better :-) After 32 years I still have the CD, and it plays perfectly well.

YouTube: https://www.youtube.com/embed/8a-HfNE3EIo

TIDAL: https://listen.tidal.com/album/103208768

The second album comes from Pink Floyd. Many say that the album “Atom Heart Mother” is the odd one out in the Pink Floyd discography. They are probably right, but that’s why I love this album. The first song, the “Atom Heart Mother Suite”, almost sounds like classical music. Oh, and it’s definitely the odd one out on my list: the only album with lyrics. Not that I have any idea what it is about, I just enjoy the music. The last song begins with a couple of sound effects. Listening to those on a quality pair of headphones or speakers can be scary :-)

YouTube: https://www.youtube.com/embed/uUHb3cBvWMY
TIDAL: https://listen.tidal.com/album/7909666

If you take a look at my CDs, you will see that the largest collection is from Mike Oldfield. The various Tubular Bells albums are among my favorites. Of course I like the original version the most. But not the original recording. “Tubular Bells 2003” is a rerecording of the original version, and sounds much better than the original recording. It was not always this way. For many years the original recording was so much in my ears that I could hear all the little changes. It took some time before I could really enjoy the added quality of the new recording.

YouTube: https://www.youtube.com/embed/EGNPYXCvId8

TIDAL: https://listen.tidal.com/album/2115739

The most recent album on my list comes from Japan. “Spectrum” by Hiromi is a solo piano jazz album. Most songs are her originals, and show off her virtuosity on the keyboard. The only exception is the song “Rhapsody in various shades of blue”. I hope you can guess from the title where the main motifs of this song come from :-)


TIDAL: https://listen.tidal.com/album/119047902

Last but not least, an album from Hungary. To quote myself from a month ago: “One of my favorite albums is Vedres Csaba és a Kairosz kvartett – Áldott Idő / Blessed Time. It was made by Hungarian pianist Csaba Vedres, who worked together with a string quartet. Their music taught me that string quartets playing alone, with a piano, or with any other instrument can do some fantastic music.”

YouTube: https://www.youtube.com/embed/8x5mpbNT4wY

TIDAL: https://listen.tidal.com/album/27780222

You can also find the CD at the publisher http://perifericrecords.com/hun/catalogue.php?cont=artist&artist_id=1002, together with other albums from Csaba Vedres.

Obviously, this selection is just the tip of the iceberg. With more than five possible albums I would add many others: Philip Glass, King Crimson, ELP, Jean Michel Jarre, Kitaro, Kraftwerk and Rick Wakeman from abroad, or Solaris and After Crying from Hungary. This is just the music I listen to the most, and we did not even mention classical music.

Finally, a question to all the amateur psychologists out there. If the statement “You are what you listen to” is true, what does this selection of music say about me? Am I a scary person or a lovely person? Or both? :-) You can share your opinion with me on LinkedIn, Twitter or Mastodon. My accounts are listed in the top right corner of my blog.

Container Orchestration with Fedora CoreOS and Kubernetes

Posted by Daniel Lara on January 11, 2024 12:41 PM

 



Why Fedora CoreOS?

Fedora CoreOS is a minimal Linux distribution optimized for running containers. Built on the principles of an immutable operating system, Fedora CoreOS guarantees a consistent base for container workloads. It supports tools such as Ignition for automatic configuration, along with transactional updates, ensuring a consistent and reliable environment for running container-based applications.

Before installing Fedora CoreOS, let's create our .ign file, the base installation file for Fedora CoreOS. This is where we define settings such as the hostname, repositories, adjustments to some configuration files, and so on.

Creating our ign file for the control plane

k8s-control-plane.bu

variant: fcos
version: 1.4.0
storage:
  links:
    - path: /etc/localtime
      target: ../usr/share/zoneinfo/America/Sao_Paulo
  files:
    # Hostname
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: k8s-control-plane
    
    - path: /etc/yum.repos.d/kubernetes.repo
      mode: 0644
      overwrite: true
      contents:
        inline: |
          [kubernetes]
          name=Kubernetes
          baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
          enabled=1
          gpgcheck=0
          repo_gpgcheck=0
          gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    # Configure automatic loading of required Kernel modules on startup
    - path: /etc/modules-load.d/crio-net.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          overlay
          br_netfilter
    # Set kernel parameters required by kubelet
    - path: /etc/sysctl.d/kubernetes.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          net.bridge.bridge-nf-call-iptables=1
          net.ipv4.ip_forward=1
passwd:
  users:
    - name: core
      groups:
        - wheel
        - sudo
      ssh_authorized_keys:
        - ssh-rsa AAAAB3..............


Now let's generate the ign file:

$ butane --pretty --strict k8s-control-plane.bu > k8s-control-plane.ign

We now have our file.
Now let's create the files for the two workers.

k8s-worker1.bu

variant: fcos
version: 1.4.0
storage:
  links:
    - path: /etc/localtime
      target: ../usr/share/zoneinfo/America/Sao_Paulo
  files:
    # Hostname
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: k8s-worker1
    
    - path: /etc/yum.repos.d/kubernetes.repo
      mode: 0644
      overwrite: true
      contents:
        inline: |
          [kubernetes]
          name=Kubernetes
          baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
          enabled=1
          gpgcheck=0
          repo_gpgcheck=0
          gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    # Configure automatic loading of required Kernel modules on startup
    - path: /etc/modules-load.d/crio-net.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          overlay
          br_netfilter
    # Set kernel parameters required by kubelet
    - path: /etc/sysctl.d/kubernetes.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          net.bridge.bridge-nf-call-iptables=1
          net.ipv4.ip_forward=1
passwd:
  users:
    - name: core
      groups:
        - wheel
        - sudo
      ssh_authorized_keys:
        - ssh-rsa AAAA........

The only difference is the hostname; everything else stays the same.
Create the ign file:

$ butane --pretty --strict k8s-worker1.bu > k8s-worker1.ign

Do the same for worker2.

In this case I am booting from a Live CD; there are several ways to install, so it comes down to personal preference.

After booting the Live CD, I download the .ign file and run the installation.


We perform an installation on each machine, each using its own .ign file.

After the installation, reboot, log in via SSH, and let's install the packages on all three servers:

$ sudo rpm-ostree install kubelet kubeadm cri-o


Reboot all of them after the packages are installed.

Then let's enable the services on all three servers:

$ sudo systemctl enable --now crio

$ sudo systemctl enable kubelet

With that done on all three, now on the control plane let's create a YAML file to initialize our cluster:

$ vi config.yml

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.2
controllerManager:
  extraArgs: # Specify a R/W directory for FlexVolumes (cluster won't work without this even though we use PVs)
    flex-volume-plugin-dir: "/etc/kubernetes/kubelet-plugins/volume/exec"
networking: # Pod subnet definition
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration


And let's run it:

$ sudo kubeadm init --config config.yml

Now let's add the nodes: on each worker, run the kubeadm join command that kubeadm init printed at the end of its output.




Now that the cluster is initialized, let's install Weave Net:

$ kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml


Done, our cluster is up and running.


Software Freedom Conservancy Fundraiser

Posted by Mark J. Wielaard on January 11, 2024 12:21 PM

The Software Freedom Conservancy Fundraiser runs for another 4 days. We urge you to become a Sustainer, renew your existing membership or donate before January 15th to maximize your contribution to furthering the goals of software freedom!

They have been a great partner to Sourceware, helping with the GNU Toolchain Infrastructure, putting developers and community first.

OpenTofu project serves up stable release

Posted by Joe Brockmeier on January 10, 2024 04:28 PM

In August of last year, HashiCorp decided to move its products away from open source licenses to a source-available license with fuzzy parameters on its use in production. Shortly afterwards, the community forked Terraform as OpenTF, and it was then endorsed and picked up by the Linux Foundation as OpenTofu. Now the project is ready to declare a stable release that it says is a production-ready “drop-in replacement for Terraform.”

OpenTofu isn’t a direct clone of Terraform, however. Kuba Martin, the interim technical lead of OpenTofu, says that the project is working to include client-side state encryption and other features that the community has proposed. Read the post for more details, but it looks like the project has made some strong strides in just a few months.

As I wrote last year on The New Stack about the OpenTofu fork, the Linux Foundation made the right call to endorse this fork. Companies and open source projects had adopted Terraform as part of their infrastructure and contributed to its success under the idea that it was open source. The abrupt change to a non-OSI license – and one that’s poorly understood and intentionally vague – set organizations scrambling.

Zero day licensing event

I’ve been thinking of this as a “zero day” licensing event, which is in some ways worse than a security incident. One hopes when an open source product or project has a major security hole, it’s unintentional. It’s also something that the larger community had an opportunity to participate in and try to head off before it happened.

A zero day licensing event, however, is fully intentional and opaque to the larger community until it happens. More on that soon, because I expect we’ll be seeing more of this in 2024 – though the LF’s intervention here might give other companies pause before they go this direction.

Your OpenTofu is served…

There was a bit of skepticism at first when there was talk of a fork. Much less when the LF endorsed and picked up the fork. I still think it’s a silly name, but I doubt that will affect anybody’s production use.

Kudos to all of the contributors who made this release happen. If you’re looking to deploy OpenTofu 1.6, you’ll find the release on GitHub with Debian packages and RPMs for Arm 64, x86_64, 386(!), and some of the BSDs, macOS, Windows, and (of course) source code if you’d like to compile it yourself.

How build services make life easier for upstream developers

Posted by Peter Czanik on January 10, 2024 09:14 AM

Many Linux distributions provide build services under various names: openSUSE Build Service (OBS), Fedora Copr, and so on. These resources are indispensable for upstream developers, and also for their users. I will demonstrate this through some examples from the syslog-ng project.

Note: this blog is loosely based on a talk idea I had for the FOSDEM Distributions Devroom. There is no deep technical information about syslog-ng in this blog. This is more like a history of syslog-ng packaging, and how the fantastic tools by openSUSE and Fedora made it a lot easier and made me an active part of these communities.

Read more at https://www.syslog-ng.com/community/b/blog/posts/how-build-services-make-life-easier-for-upstream-developers

[syslog-ng logo]

Crash Course On Using Textual

Posted by Fedora Magazine on January 10, 2024 08:00 AM

Python on Linux has nice GUI (Graphical User Interface) development libraries like Tkinter, but what if you cannot run graphical applications?

Text terminals are available not just on Linux but also on BSD and other great Unix-like operating systems. If you write code in Python, you should be using Textual to help you write TUIs (Text User Interfaces). In this quick introduction, I will show you two examples of what you can do with Textual and where you can go after that.

So what is Textual?

Textual is a Rapid Application Development framework for Python, built by Textualize.io. Build sophisticated user interfaces with a simple Python API. Run your apps in the terminal or a web browser!
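
To get a feel for that API before diving into the tutorial code, here is a minimal, hypothetical “hello world” sketch of my own (not part of the tutorial repository); it assumes a reasonably recent Textual release:

from textual.app import App, ComposeResult
from textual.widgets import Header, Footer, Static

class HelloApp(App):
    """A tiny Textual application that shows a single line of text."""

    # Press 'q' to quit, using the built-in quit action
    BINDINGS = [("q", "quit", "Quit")]

    def compose(self) -> ComposeResult:
        # Widgets are yielded in the order they should appear on screen
        yield Header()
        yield Static("Hello from Textual!")
        yield Footer()

if __name__ == "__main__":
    HelloApp().run()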

What you need to follow this tutorial

You will need the following:

  1. Basic programming experience, preferably in Python.
  2. Understanding of basic object-oriented concepts like classes and inheritance
  3. A machine with Linux and Python 3.9+ installed
  4. A good editor (Vim or PyCharm are good choices)

I tried to keep the code simple, so you can follow it. Also, I strongly recommend you download the code or at least install the programs as explained next.

Installation

First create a virtual environment:

python3 -m venv ~/virtualenv/Textualize

Now you can either clone the Git repository and make an editable distribution:

. ~/virtualenv/Textualize/bin/activate
pip install --upgrade pip
pip install --upgrade wheel
pip install --upgrade build
pip install --editable .

Or just install from PyPI.org:

. ~/virtualenv/Textualize/bin/activate
pip install --upgrade KodegeekTextualize

Our first application: A log scroller

[Screenshot: Log scroller, select commands to run]

The log scroller is a simple application that executes a list of UNIX commands that are on the PATH and captures the output as they finish.

The resulting application code:


import shutil
from textual import on
from textual.app import ComposeResult, App
from textual.widgets import Footer, Header, Button, SelectionList
from textual.widgets.selection_list import Selection
from textual.screen import ModalScreen

# Operating system commands are hardcoded
OS_COMMANDS = {
    "LSHW": ["lshw", "-json", "-sanitize", "-notime", "-quiet"],
    "LSCPU": ["lscpu", "--all", "--extended", "--json"],
    "LSMEM": ["lsmem", "--json", "--all", "--output-all"],
    "NUMASTAT": ["numastat", "-z"]
}

class LogScreen(ModalScreen):
    # ... Code of the full separate screen omitted, will be explained next
    def __init__(self, name=None, ident=None, classes=None, selections=None):
        super().__init__(name, ident, classes)
        pass

class OsApp(App):
    BINDINGS = [
        ("q", "quit_app", "Quit"),
    ]
    CSS_PATH = "os_app.tcss"
    ENABLE_COMMAND_PALETTE = False  # Do not need the command palette

    def action_quit_app(self):
        self.exit(0)

    def compose(self) -> ComposeResult:
        # Create a list of commands. Valid commands are assumed to be on the PATH variable.
        selections = [Selection(name.title(), ' '.join(cmd), True) for name, cmd in OS_COMMANDS.items() if shutil.which(cmd[0].strip())]
        yield Header(show_clock=False)
        sel_list = SelectionList(*selections, id='cmds')
        sel_list.tooltip = "Select one or more commands to execute"
        yield sel_list
        yield Button(f"Execute {len(selections)} commands", id="exec", variant="primary")
        yield Footer()

    @on(SelectionList.SelectedChanged)
    def on_selection(self, event: SelectionList.SelectedChanged) -> None:
        button = self.query_one("#exec", Button)
        selections = len(event.selection_list.selected)
        if selections:
            button.disabled = False
        else:
            button.disabled = True
        button.label = f"Execute {selections} commands"

    @on(Button.Pressed)
    def on_button_click(self):
        selection_list = self.query_one('#cmds', SelectionList)
        selections = selection_list.selected
        log_screen = LogScreen(selections=selections)
        self.push_screen(log_screen)

def main():
    app = OsApp()
    app.title = "Output of multiple well known UNIX commands".title()
    app.sub_title = f"{len(OS_COMMANDS)} commands available"
    app.run()

if __name__ == "__main__":
    main()

Let’s quickly dissect the code for the application:

  1. An application extends the class App. It has several methods but the most important are compose and mount. Only compose is implemented in this app.
  2. In compose, you yield back Widgets, and they get added in the same order to the main screen. Each Widget has options to customize their appearance.
  3. You can define single letter bindings, in this case the letter ‘q’ allows you to exit the application (see the function action_quit_app and the BINDINGS list)
  4. We display the list of commands to run using a SelectionList widget. You can then tell your application to capture what was selected by using the annotation @on(SelectionList.SelectedChanged) and the method on_selection.
  5. It is important to react to a lack of selected elements: we disable or enable the ‘exec’ button depending on how many commands were selected to run.
  6. A similar listener ( @on(Button.Pressed) ) is used to execute the commands. We do that by pushing our selection to a new screen that handles the execution and collection of results.

Notice the CSS_PATH = “os_app.tcss” variable? Textual allows you to control the appearance (colors, position, size) of individual widgets or classes of widgets using CSS:

Screen {
        layout: vertical;
}

Header {
        dock: top;
}

Footer {
        dock: bottom;
}

SelectionList {
        padding: 1;
        border: solid $accent;
        width: 1fr;
        height: 80%;
}

Button {
        width: 1fr
}

Quoting from the Textual website:

The dialect of CSS used in Textual is greatly simplified over web based CSS and much easier to learn.

This is great, as you can customize the appearance of your application using a separate stylesheet without too much effort.
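
As an aside, if you prefer not to ship a separate .tcss file, Textual also accepts the same stylesheet dialect inline through a CSS class attribute. This is a minimal, hypothetical sketch of my own (not part of the tutorial code), assuming a recent Textual release:

from textual.app import App, ComposeResult
from textual.widgets import Button

class InlineCssApp(App):
    # The same CSS dialect as a .tcss file, embedded directly in the class
    CSS = """
    Button {
        width: 1fr;
        border: solid green;
    }
    """

    def compose(self) -> ComposeResult:
        yield Button("Styled by inline CSS")

if __name__ == "__main__":
    InlineCssApp().run()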

Let’s now look at how to display the results on a separate screen.

Display results on a separate screen

[Screenshot: The results of the command, pretty printed]

The code that handles the output on a separate screen is here:

import asyncio
from typing import List
from textual import on, work
from textual.reactive import reactive
from textual.screen import ModalScreen
from textual.widgets import Button, Label, Log
from textual.worker import Worker
from textual.app import ComposeResult

class LogScreen(ModalScreen):
    count = reactive(0)
    MAX_LINES = 10_000
    ENABLE_COMMAND_PALETTE = False
    CSS_PATH = "log_screen.tcss"

    def __init__(
            self,
            name: str | None = None,
            ident: str | None = None,
            classes: str | None = None,
            selections: List = None
    ):
        super().__init__(name, ident, classes)
        self.selections = selections

    def compose(self) -> ComposeResult:
        yield Label(f"Running {len(self.selections)} commands")
        event_log = Log(
            id='event_log',
            max_lines=LogScreen.MAX_LINES,
            highlight=True
        )
        event_log.loading = True
        yield event_log
        button = Button("Close", id="close", variant="success")
        button.disabled = True
        yield button

    async def on_mount(self) -> None:
        event_log = self.query_one('#event_log', Log)
        event_log.loading = False
        event_log.clear()
        lst = '\n'.join(self.selections)
        event_log.write(f"Preparing:\n{lst}")
        event_log.write("\n")

        for command in self.selections:
            self.count += 1
            self.run_process(cmd=command)

    def on_worker_state_changed(self, event: Worker.StateChanged) -> None:
        if self.count == 0:
            button = self.query_one('#close', Button)
            button.disabled = False
        self.log(event)

    @work(exclusive=False)
    async def run_process(self, cmd: str) -> None:
        event_log = self.query_one('#event_log', Log)
        event_log.write_line(f"Running: {cmd}")
        # Combine STDOUT and STDERR output
        proc = await asyncio.create_subprocess_shell(
            cmd,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.STDOUT
        )
        stdout, _ = await proc.communicate()
        if proc.returncode != 0:
            raise ValueError(f"'{cmd}' finished with errors ({proc.returncode})")
        stdout = stdout.decode(encoding='utf-8', errors='replace')
        if stdout:
            event_log.write(f'\nOutput of "{cmd}":\n')
            event_log.write(stdout)
        self.count -= 1

    @on(Button.Pressed, "#close")
    def on_button_pressed(self, _) -> None:
        self.app.pop_screen()

You will notice the following:

  1. The LogScreen class extends ModalScreen which handles screens in modal mode.
  2. The screen also has a compose method where we add widgets to show the contents of the Unix commands.
  3. We have a new method called on_mount. Once you ‘compose’ the widgets, you can run code to retrieve data and customize their appearance even further.
  4. To run the commands we use asyncio, so we give the TUI main worker thread a chance to update the contents as soon as results for each command are known.
  5. On the ‘workers’ topic, please note the @work(exclusive=False) annotation on the run_process method used to run the commands and capture the STDOUT + STDERR output. Using workers to manage concurrency is not complicated, but they do have a dedicated section in the manual; a minimal standalone sketch follows this list. This extra complexity arises because we are running external commands that may or may not take a long time to complete.
  6. In run_process we update the event_log by calling write with the contents of the command output.
  7. Finally, the on_button_pressed takes us back to the previous screen (pop the screen from the stack).
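
To isolate the worker idea from the subprocess handling above, here is a minimal, hypothetical sketch of @work on its own (my own example, not from the tutorial repository), assuming a recent Textual release:

import asyncio
from textual import work
from textual.app import App, ComposeResult
from textual.widgets import Log

class WorkerDemo(App):
    def compose(self) -> ComposeResult:
        yield Log()

    def on_mount(self) -> None:
        # Calling a @work-decorated method schedules it as a background worker
        self.slow_task()

    @work(exclusive=False)
    async def slow_task(self) -> None:
        await asyncio.sleep(1)  # stand-in for a long-running job
        self.query_one(Log).write_line("Worker finished without blocking the UI")

if __name__ == "__main__":
    WorkerDemo().run()

The UI keeps responding while slow_task sleeps, which is exactly what the log scroller relies on when several commands run at once.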

This little app shows you how to write a simple front end to run non-Python code, in less than 200 lines of code.

Now let’s move to a more complex example that uses new features of Textual we haven’t explored yet.

Second example: A table with race results

[Screenshot: Racing summary table, a table application created via Textual]

This example shows you how to display race results in a table (Using a DataTable widget). The application allows you to:

  • Sort a table by column
  • Select a row to show race details in a full window, using the same ‘push screen’ technique we saw in the log scroll application.
  • Search the table and show racer details or run other commands like exit the application.

Let’s see the application code then:

#!/usr/bin/env python
"""
Author: Jose Vicente Nunez
"""
from typing import Any, List

from rich.style import Style
from textual import on
from textual.app import ComposeResult, App
from textual.command import Provider
from textual.screen import ModalScreen, Screen
from textual.widgets import DataTable, Footer, Header

MY_DATA = [
    ("level", "name", "gender", "country", "age"),
    ("Green", "Wai", "M", "MYS", 22),
    ("Red", "Ryoji", "M", "JPN", 30),
    ("Purple", "Fabio", "M", "ITA", 99),
    ("Blue", "Manuela", "F", "VEN", 25)
]

class DetailScreen(ModalScreen):
    ENABLE_COMMAND_PALETTE = False
    CSS_PATH = "details_screen.tcss"

    def __init__(
            self,
            name: str | None = None,
            ident: str | None = None,
            classes: str | None = None,
            row: List[Any] | None = None,
    ):
        super().__init__(name, ident, classes)
        # Rest of screen code will be shown later

class CustomCommand(Provider):

    def __init__(self, screen: Screen[Any], match_style: Style | None = None):
        super().__init__(screen, match_style)
        self.table = None
        # Rest of provider code will be shown later

class CompetitorsApp(App):
    BINDINGS = [
        ("q", "quit_app", "Quit"),
    ]
    CSS_PATH = "competitors_app.tcss"
    # Enable the command palette, to add our custom filter commands
    ENABLE_COMMAND_PALETTE = True
    # Add the default commands and the TablePopulateProvider to get a row directly by name
    COMMANDS = App.COMMANDS | {CustomCommand}

    def action_quit_app(self):
        self.exit(0)

    def compose(self) -> ComposeResult:
        yield Header(show_clock=True)

        table = DataTable(id='competitors_table')
        table.cursor_type = 'row'
        table.zebra_stripes = True
        table.loading = True
        yield table
        yield Footer()

    def on_mount(self) -> None:
        table = self.get_widget_by_id('competitors_table', expect_type=DataTable)
        columns = [x.title() for x in MY_DATA[0]]
        table.add_columns(*columns)
        table.add_rows(MY_DATA[1:])
        table.loading = False
        table.tooltip = "Select a row to get more details"

    @on(DataTable.HeaderSelected)
    def on_header_clicked(self, event: DataTable.HeaderSelected):
        table = event.data_table
        table.sort(event.column_key)

    @on(DataTable.RowSelected)
    def on_row_clicked(self, event: DataTable.RowSelected) -> None:
        table = event.data_table
        row = table.get_row(event.row_key)
        runner_detail = DetailScreen(row=row)
        self.show_detail(runner_detail)

    def show_detail(self, detailScreen: DetailScreen):
        self.push_screen(detailScreen)

def main():
    app = CompetitorsApp()
    app.title = "Summary".title()
    app.sub_title = f"{len(MY_DATA)} users"
    app.run()

if __name__ == "__main__":
    main()

What is interesting here?

  1. compose adds the header where the ‘command palette’ will live, as well as our table (DataTable). The table gets populated in the on_mount method.
  2. We have the expected bindings (BINDINGS) and external CSS for appearance (CSS_PATH)
  3. By default, if we want to have the command palette we do nothing, but it is explicitly enabled here (ENABLE_COMMAND_PALETTE = True)
  4. Our application has a custom search in the table contents. When the user types a name, a possible match is shown and the user clicks it to display the details for that racer. This requires telling the application that we have a custom provider (COMMANDS = App.COMMANDS | {CustomCommand}), which is the class CustomCommand(Provider) 
  5. If the user clicks a table header, the contents are sorted by that header. This is done using on_header_clicked which is annotated with @on(DataTable.HeaderSelected) 
  6. Similarly, when a row is selected, the method on_row_clicked is called thanks to the annotation @on(DataTable.RowSelected). The method receives the selected row that is then used to push a new screen with details (class DetailScreen(ModalScreen))

Now let’s explore in detail how the racer details are shown

Using screens to show more complex views

[Screenshot: Runner details, using a Markdown renderer]

When the user selects a row, the method on_row_clicked gets called. It receives an event of type DataTable.RowSelected. From there we construct an instance of class DetailScreen(ModalScreen) with the contents of the selected row:

from typing import Any, List
from textual import on
from textual.app import ComposeResult
from textual.screen import ModalScreen
from textual.widgets import Button, MarkdownViewer

MY_DATA = [
    ("level", "name", "gender", "country", "age"),
    ("Green", "Wai", "M", "MYS", 22),
    ("Red", "Ryoji", "M", "JPN", 30),
    ("Purple", "Fabio", "M", "ITA", 99),
    ("Blue", "Manuela", "F", "VEN", 25)
]

class DetailScreen(ModalScreen):
    ENABLE_COMMAND_PALETTE = False
    CSS_PATH = "details_screen.tcss"

    def __init__(
            self,
            name: str | None = None,
            ident: str | None = None,
            classes: str | None = None,
            row: List[Any] | None = None,
    ):
        super().__init__(name, ident, classes)
        self.row: List[Any] = row

    def compose(self) -> ComposeResult:
        self.log.info(f"Details: {self.row}")
        columns = MY_DATA[0]
        row_markdown = "\n"
        for i in range(0, len(columns)):
            row_markdown += f"* **{columns[i].title()}:** {self.row[i]}\n"
        yield MarkdownViewer(f"""## User details:
        {row_markdown}
        """)
        button = Button("Close", variant="primary", id="close")
        button.tooltip = "Go back to main screen"
        yield button

    @on(Button.Pressed, "#close")
    def on_button_pressed(self, _) -> None:
        self.app.pop_screen()

The responsibility of this class is very simple:

  1. Method compose takes the row and displays the content using a widget that knows how to render Markdown. Pretty neat as it creates a table of contents for us.
  2. The method on_button_pressed pops back the original screen once the user clicks ‘close’. (Annotation @on(Button.Pressed, “#close”) takes care of receiving pressed events)

Now for the last bit of the puzzle, which requires more explanation: the multipurpose search bar (known as the command palette).

You can search too, using the command palette


The command palette is enabled by default on every Textual application that uses a header. The fun part is that you can add your own commands in addition to the default commands, on class CompetitorsApp:

COMMANDS = App.COMMANDS | {CustomCommand}

And now the class that does all the heavy lifting, CustomCommand(Provider):

from functools import partial
from typing import Any, List
from rich.style import Style
from textual.command import Provider, Hit
from textual.screen import ModalScreen, Screen
from textual.widgets import DataTable
from textual.app import App

class CustomCommand(Provider):

    def __init__(self, screen: Screen[Any], match_style: Style | None = None):
        super().__init__(screen, match_style)
        self.table = None

    async def startup(self) -> None:
        my_app = self.app
        my_app.log.info(f"Loaded provider: CustomCommand")
        self.table = my_app.query(DataTable).first()

    async def search(self, query: str) -> Hit:
        matcher = self.matcher(query)

        my_app = self.screen.app
        assert isinstance(my_app, CompetitorsApp)

        my_app.log.info(f"Got query: {query}")
        for row_key in self.table.rows:
            row = self.table.get_row(row_key)
            my_app.log.info(f"Searching {row}")
            searchable = row[1]
            score = matcher.match(searchable)
            if score > 0:
                runner_detail = DetailScreen(row=row)
                yield Hit(
                    score,
                    matcher.highlight(f"{searchable}"),
                    partial(my_app.show_detail, runner_detail),
                    help=f"Show details about {searchable}"
                )

class DetailScreen(ModalScreen):
     def __init__(
            self,
            name: str | None = None,
            ident: str | None = None,
            classes: str | None = None,
            row: List[Any] | None = None,
    ):
        super().__init__(name, ident, classes)
        # Code of this class explained on the previous section

class CompetitorsApp(App):
    # Add the default commands and the TablePopulateProvider to get a row directly by name
    COMMANDS = App.COMMANDS | {CustomCommand}
    # Most of the code shown before, only displaying relevant code
    def show_detail(self, detailScreen: DetailScreen):
        self.push_screen(detailScreen)

  1. Any class extending Provider only needs to implement the method search. In our case we also override the method startup to get a reference to our application table (and its contents), using App.query(DataTable).first(). startup gets called only once in the lifetime of the instantiated class.
  2. Inside the search method we use the Provider.matcher to do a fuzzy search on the second column (name) of each table row, comparing it with the query (the term typed by the user in the TUI). matcher.match(searchable) returns a score, where greater than zero indicates a match.
  3. Inside search, if the score is greater than zero we yield a Hit object that tells the command palette whether the search query was successful or not.
  4. Each Hit carries the following information: the score (used for sorting matches in the command palette), a highlighted search term, a reference to a callable (in our case a function that will push our table row to a new screen), and an optional help text.
  5. All the methods of the Provider class are async. This allows you to free the main worker thread and only return once the response is ready to be used (no frozen UI).

With all that information we can now display the racer details.

While the framework is simple enough to follow, there is also a lot of complexity in the messages passed back and forth between the components. Luckily for us, Textual has a nice debugging framework that will help us understand what is going on behind the scenes.

Troubleshooting a Textual application

Debugging a Python Textual application is a little bit more challenging. This is because some operations can be asynchronous and setting breakpoints may be cumbersome when troubleshooting widgets.

Depending on the situation, there are some tools you can use. But first make sure you have the textual dev tools:

pip install textual-dev==1.3.0

Make sure you are capturing the right keys

Not sure what keys are being captured by a Textual application? Run the keys app:

textual keys

This lets you press your key combinations and confirm what events are generated in Textual.

A picture is worth more than a thousand words

Say that you have a problem placing components on a layout, and you want to show others where you are stuck. Textual allows you to take a screenshot of your running application:

textual run --screenshot 5 ./kodegeek_textualize/log_scroller.py

That’s how I created the images for this tutorial.

Capturing events and printing custom messages

Textual has a logger that is part of every instance of an Application:

my_app = self.screen.app
my_app.log.info(f"Loaded provider: CustomCommand")

In order to see the messages, you first need to start a console:

. ~/virtualenv/Textualize/bin/activate
textual console

Then, in another terminal, run your application:

. ~/virtualenv/Textualize/bin/activate
textual run --dev ./kodegeek_textualize/log_scroller.py

You will now see events and messages flowing into the terminal where the console is running:

▌Textual Development Console v0.46.0                                                                                                                                                      
▌Run a Textual app with textual run --dev my_app.py to connect.                                                                                                                           
▌Press Ctrl+C to quit.                                                                                                                                                                    
─────────────────────────────────────────────────────────────────────────────── Client '127.0.0.1' connected ────────────────────────────────────────────────────────────────────────────────
[20:29:43] SYSTEM                                                                                                                                                                 app.py:2188
Connected to devtools ( ws://127.0.0.1:8081 )
[20:29:43] SYSTEM                                                                                                                                                                 app.py:2192
---
[20:29:43] SYSTEM                                                                                                                                                                 app.py:2194
driver=<class 'textual.drivers.linux_driver.LinuxDriver'>
[20:29:43] SYSTEM                                                                                                                                                                 app.py:2195
loop=<_UnixSelectorEventLoop running=True closed=False debug=False>
[20:29:43] SYSTEM                                                                                                                                                                 app.py:2196
features=frozenset({'debug', 'devtools'})
[20:29:43] SYSTEM                                                                                                                                                                 app.py:2228
STARTED FileMonitor({PosixPath('/home/josevnz/TextualizeTutorial/docs/Textualize/kodegeek_textualize/os_app.tcss')})
[20:29:43] EVENT                                                                              

Another advantage of running your application in developer mode is that if you change your CSS, the application will try to render again without a restart.

Writing unit tests

What if you want to write unit tests for your brand new Textual application?

The documentation shows there are several ways to test our application.

I will be using unittest for that. We will need the special class unittest.IsolatedAsyncioTestCase to handle our asyncio routines:

import unittest
from textual.widgets import Log, Button
from kodegeek_textualize.log_scroller import OsApp

class LogScrollerTestCase(unittest.IsolatedAsyncioTestCase):
    async def test_log_scroller(self):
        app = OsApp()
        self.assertIsNotNone(app)
        async with app.run_test() as pilot:
            # Execute the default commands
            await pilot.click(Button)
            await pilot.pause()
            event_log = app.screen.query(Log).first()  # We pushed the screen, query nodes from there
            self.assertTrue(event_log.lines)
            await pilot.click("#close")  # Close the new screen, pop the original one
            await pilot.press("q")  # Quit the app by pressing q


if __name__ == '__main__':
    unittest.main()

What is happening in the method test_log_scroller:

  1. Get a Pilot instance using app.run_test(). Then click the main button to run the query with the default commands, and then wait until all the events are processed.
  2. Next, get the Log from the new screen we pushed and make sure we got some lines back and that it is not empty.
  3. Then close the new screen and pop the old one back
  4. Finally, press ‘q’ and exit the application

What about the table application, can it be tested?

import unittest
from textual.widgets import DataTable, MarkdownViewer
from kodegeek_textualize.table_with_detail_screen import CompetitorsApp


class TableWithDetailTestCase(unittest.IsolatedAsyncioTestCase):
    async def test_app(self):
        app = CompetitorsApp()
        self.assertIsNotNone(app)
        async with app.run_test() as pilot:

            """
            Test the command palette
            """
            await pilot.press("ctrl+\\")
            for char in "manuela":  # type the name one character at a time
                await pilot.press(char)
            await pilot.press("enter")
            markdown_viewer = app.screen.query(MarkdownViewer).first()
            self.assertTrue(markdown_viewer.document)
            await pilot.click("#close")  # Close the new screen, pop the original one

            """
            Test the table
            """
            table = app.screen.query(DataTable).first()
            coordinate = table.cursor_coordinate
            self.assertTrue(table.is_valid_coordinate(coordinate))
            await pilot.press("enter")
            await pilot.pause()
            markdown_viewer = app.screen.query(MarkdownViewer).first()
            self.assertTrue(markdown_viewer)
            # Quit the app by pressing q
            await pilot.press("q")


if __name__ == '__main__':
    unittest.main()

If you run all the tests you will see something like this:

(Textualize) [josevnz@dmaf5 Textualize]$ python -m unittest tests/*.py
..
----------------------------------------------------------------------
Ran 2 tests in 2.065s

OK

Not a bad way to test a TUI, is it?

Packaging a Textual application

Packaging is not much different from packaging a regular Python application. You just need to remember to include the CSS files that control the appearance of your application:

. ~/virtualenv/Textualize/bin/activate
python -m build
pip install dist/KodegeekTextualize-*-py3-none-any.whl

This tutorial's pyproject.toml file is a good starting point that shows you what to do to package your application.

[build-system]
requires = [
    "setuptools >= 67.8.0",
    "wheel>=0.42.0",
    "build>=1.0.3",
    "twine>=4.0.2",
    "textual-dev>=1.2.1"
]
build-backend = "setuptools.build_meta"

[project]
name = "KodegeekTextualize"
version = "0.0.3"
authors = [
    {name = "Jose Vicente Nunez", email = "kodegeek.com@protonmail.com"},
]
description = "Collection of scripts that show how to use several features of textualize"
readme = "README.md"
requires-python = ">=3.9"
keywords = ["running", "race"]
classifiers = [
    "Environment :: Console",
    "Development Status :: 4 - Beta",
    "Programming Language :: Python :: 3",
    "Intended Audience :: End Users/Desktop",
    "Topic :: Utilities"
]
dynamic = ["dependencies"]

[project.scripts]
log_scroller = "kodegeek_textualize.log_scroller:main"
table_detail = "kodegeek_textualize.table_with_detail_screen:main"

[tool.setuptools]
include-package-data = true

[tool.setuptools.packages.find]
where = ["."]
exclude = ["test*"]

[tool.setuptools.package-data]
kodegeek_textualize = ["*.txt", "*.tcss", "*.csv"]
img = ["*.svg"]

[tool.setuptools.dynamic]
dependencies = {file = ["requirements.txt"]}

What is next

This short tutorial only covers a few aspects of Textual. There is so much more to discover and learn:

  • You should definitely take a look at the official tutorial. Lots of examples and pointers to the reference API.
  • Textual can use widgets from the project that started it all, Rich. I think some, if not all, of these components will get merged into Textual at some point. The Textual framework is more capable for complex applications, with a high-level API, but Rich has lots of nice features.
  • Make your own widgets! Also, while designing the TUI, grab a piece of paper and draw how you picture the components aligning together. It will save you time and headaches later.
  • Debugging applications in Python can get complicated. Sometimes you may have to mix different tools to figure out what is wrong with an application.
  • Asyncio is a complex topic; you should read the developer documentation to see your alternatives.
  • Textual is used by other projects. One that is super easy to use is Trogon. It will make your CLI self-discoverable.
  • Textual-web is a promising project that will allow you to run Textual applications on a browser. It is less mature than Textual but is evolving really fast.
  • Finally, check the external projects. There are a lot of useful Open Source applications in the portfolio.

Recent GNOME design work

Posted by Allan Day on January 09, 2024 02:58 PM

The GNOME 46 development cycle started around October last year, and it has been a busy one for my GNOME user experience design work (as they all are). I wanted to share some details of what I’ve been working on, both to provide some insight into what I get up to day to day, and because some of the design work might be interesting to the wider community. This is by no means everything that I’ve been involved with, but rather covers the bigger chunks of work that I’ve spent time on.

Videos

GNOME’s video player has yet to be ported to GTK 4, and it has been a long time since it received major UX attention. This development cycle I worked on a set of designs for what a refreshed default GNOME video player might look like. These built on previous work from Tobias Bernard and myself.

The new Videos designs don’t have a particular development effort in mind, and are instead intended to provide inspiration and guidance for anyone who might want to work on modernising GNOME’s video playback experience.

A mockup of a video player app, with a video playing in the background and playback controls overlaid on top

The designs themselves aim to be clean and unobtrusive, while retaining the essential features you need from a video player. There’s a familial resemblance to GNOME’s new image viewer and camera apps, particularly with regards to the minimal window chrome.

Two mockups of the videos app, showing the window at different sizes and aspect ratios

One feature of the design that I’m particularly happy with is how it manages to scale to different form factors. On a large display the playback controls are constrained, which avoids long pointer travel on super wide displays. When the window size is reduced, the layout updates to optimize for the smaller space. That this is possible is of course thanks to the amazing breakpoints work in libadwaita last cycle.

These designs aren’t 100% complete and we’d need to talk through some issues as part of the development process, but they provide enough guidance for development work to begin.

System Monitor

Another app modernisation effort that I’ve been working on this cycle is for GNOME’s System Monitor app. This was recently ported to GTK 4, which meant that it was a good time to think about where to take the user experience next.

It’s true that there are other resource monitoring apps out there, like Usage, Mission Center, or Resources. However, I thought that it was important for the existing core app to have input from the design team. I also thought that it was important to put time into considering what a modern GNOME resource monitor might look like from a design perspective.

While the designs were created in conversation with the system monitor developers (thank you Robert and Harry!) and I’d love to take them forward in that context, the ideas in the mockups are free for anyone to use and it would be great if any of the other available apps wanted to pick them up.

A mockup of the system monitor app, showing a CPU usage figures and a list of apps

One of the tricky aspects of the system monitor design is how to accommodate different types of usage. Many users just need a simple way to track down and stop runaway apps and processes. At the same time, the system monitor can also be used by developers in very specific or nuanced ways, such as to look in close detail at a particular process, or to examine multithreading behaviour.

A mockup of the system monitor app, showing CPU usage figures and a list of processes

Rather than designing several different apps, the design attempts to reconcile these differing requirements by using disclosure. It starts off simply by default, with a series of small graphs that give a high-level overview and allow quickly drilling down to a problem app. However, if you want more fine-grained information, it isn’t hard to get to. For example, to keep a close eye on a particular type of resource, you can expand its chart to get a big view with more detail, or to see how multi-threading is working in a particular process, you can switch to the process view.

Settings

A gallery of mockups for the Settings app, including app settings, power settings, keyboard settings, and mouse & touchpad settings

If my work on Videos and System Monitor has largely been speculative, my time on Settings has been anything but. As Felipe recently reported, there has been a lot of great activity around Settings recently, and I’ve been kept busy supporting that work from the design side. A lot of that has involved reviewing merge requests and responding to design questions from developers. However, I’ve also been active in developing and updating various settings designs. This has included:

  • Keyboard settings:
  • Region and language settings:
    • Updated the panel mockups
    • Modernised language dialog design (#202)
  • Apps settings:
    • Designed banners for when an app isn’t sandboxed (done)
    • Reorganised some of the list rows (#2829)
    • Designs for how to handle the flatpak-spawn permission (!949)
  • Mouse & touchpad settings:
    • New click areas setting (done)
    • Updated designs for the test area (almost done)
  • Power
    • Updated the style of the charge history chart (#1419)
    • Reorganised the battery charge threshold setting (#2553)
    • Prettier battery level display (#2707)

Another settings area where I particularly concentrated this cycle was location services. This was prompted by a collection of issues that I discovered where people experience their location being determined incorrectly. I was also keen to ensure that location discovery is a good fit for devices that don’t have many ways to detect the location (say if it’s a desktop machine with no Wi-Fi).

A mockup of the Settings app, showing the location settings with an embedded map

This led to a round of design which proposed various things, such as adding a location preview to the panel (#2815) and portal dialog (#115), and some other polish fixes (#2816, #2817). As part of these changes, we’re also moving to rename “Location Services” to “Automatic Device Location”. I’d be interested to hear if anyone has any opinions on that, one way or another.

Conclusion

I hope this post has provided some insight into the kind of work that happens in GNOME design. It needs to be stressed that many of the designs that I’ve shared here are not being actively worked on, and may even never be implemented. That is part of what we do in GNOME design – we chart potential directions which the community may or may not decide to travel down. However, if you would like to help make any of these designs a reality, get in touch – I’d love to talk to you!

Looking for LogoFAIL on your local system

Posted by Richard Hughes on January 09, 2024 01:45 PM

A couple of months ago, Binarly announced LogoFAIL, which is a pretty serious firmware security problem. There is lots of complexity that Alex explains much better than I might, but essentially a huge amount of system firmware running right now is vulnerable: the horribly insecure parsing in the firmware allows the user to use a corrupted OEM logo (the one normally shown as the system boots) to run whatever code they want, providing a really useful primitive to do basically anything the attacker wants while running in a super-privileged boot state.

Vendors have to release new firmware versions to address this, and OEMs using the LVFS have pumped out millions of updates over the last few weeks.

So, what can we do to check that your system firmware has been patched [correctly] by the OEM? The only real way we can detect this is by dumping the BIOS in userspace, decompressing the various sections and looking at the EFI binary responsible for loading the image. In an ideal world we’d be able to look at the embedded SBoM entry for the specific DXE, but that’s not a universe we live in yet — although it is something I’m pushing the IBVs really hard to do. What we can do right now is token matching (or control flow analysis) to detect the broken and fixed image loader versions.

The four words decompressing the various sections hide how complicated taking an Intel Flash Descriptor image and breaking it into EFI binaries actually is. There are many levels of Matryoshka-doll stacking involving hideous custom LZ77 and Huffman decompressors, and of course vendor-specific section types. It’s been several programmer-months spread over the last few years figuring it all out. Programs like UEFITool do a very good job, but we need to do something super-lightweight (and paranoid) at every system boot as part of the HSI tests. We only really want to stream a few kB of SPI contents, not MB, as it’s actually quite slow and we only need a few hundred bytes to analyze.
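To make the token-matching idea concrete, here is a toy sketch in Python; the byte signatures are invented placeholders for illustration, not the real detection data used by fwupd:

#!/usr/bin/env python3
# Toy token matcher: scan an extracted EFI binary for known byte sequences.
# The signatures below are made-up placeholders, for illustration only.
import sys

SIGNATURES = {
    b"\x55\xaa\xde\xad\xbe\xef": "hypothetical vulnerable image-loader build",
    b"\x55\xaa\xca\xfe\xf0\x0d": "hypothetical fixed image-loader build",
}

def scan(path):
    data = open(path, "rb").read()
    for token, label in SIGNATURES.items():
        offset = data.find(token)
        if offset != -1:
            print(f"{path}: {label} at offset {offset:#x}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan(path)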

In Fedora 40 all the kernel parts are in place to actually get the image from userspace in a sane way. It’s a 100% read-only interface, so don’t panic about bricking your system. This is currently Intel-only — AMD wasn’t super-keen on allowing userspace read access to the SPI, even as root — even though it’s the same data you can get with a $2 SPI programmer and 30 seconds with a Pomona clip.

Intel laptops and servers should both have an Intel PCI SPI controller — but some OEMs manually hide it for dubious reasons — and if that’s the case there’s nothing we can do, I’m afraid.

You can help the fwupd project by contributing test firmware we can use to verify we parse it correctly, and to prevent regressions in the future. Please follow these steps only if:

  1. You have an Intel CPU laptop, desktop or server machine
  2. You’re running Fedora 39, (no idea on other distros, but you’ll need at least CONFIG_MTD_SPI_NOR, CONFIG_SPI_INTEL_PCI and CONFIG_SPI_MEM to be enabled in the kernel)
  3. You’re comfortable installing and removing a kernel on the command line
  4. There’s not already a test image for the same model provided by someone else
  5. You are okay with uploading your SPI contents to the internet
  6. You’re running the OEM-provided firmware, and not something like coreboot
  7. You’re aware that the firmware image we generate may have an encrypted version of your BIOS supervisor password (if set) and also all of the EFI attribute keys you’ve manually set, or that have been set by the various crash reporting programs.
  8. The machine is not a secure production system or a machine you don’t actually own.

Okay, let’s get started:

sudo dnf update kernel --releasever 40

Then reboot into the new kernel, manually selecting the fc40 entry on the grub menu if required. We can check that the Intel SPI controller is visible.

$ cat /sys/class/mtd/mtd0/name 
BIOS

Assuming it’s indeed BIOS and not some other random system MTD device, lets continue.

$ sudo cat /dev/mtd0 > lenovo-p1-gen4.bin

The filename should be lowercase, have no spaces, and identify the machine you’re using — using the SKU if that’s easier.

Then we want to compress it (as it will have a lot of 0xFF padding bytes) and encrypt it (otherwise github will get most upset that you’re attaching something containing “binary code”):

zip lenovo-p1-gen4.zip lenovo-p1-gen4.bin -e
Enter password: fwupd
Verify password: fwupd

It’s easier if you use the password of “fwupd” (lowercase, no quotes) but if you’d rather send the image with a custom password just get the password to me somehow. Email, mastodon DM, carrier pigeon, whatever.

If you’re happy sharing the image, can you please create an issue and then attach the zip file and wait for me to download the file and close the issue. I also promise that I’m only using the provided images for testing fwupd IFD parsing, rather than anything more scary.

NOTE: If you’re getting a permission error (even running with sudo) you’re probably hitting a kernel MTD issue we’re trying to debug and fix. I wrote a python script that can be run as root to try to get each partition in turn.
If this script works, can you please also paste the output of that script into the submitted github issue.
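As a rough illustration of what a per-partition dump can look like (this is a sketch, not the script referred to above), something along these lines reads each MTD device in turn as root; the output filenames are arbitrary:

#!/usr/bin/env python3
# Illustration only: read each MTD partition in turn and save it to a file,
# reporting any partitions that fail with a permission or I/O error.
import glob
import os

for dev in sorted(glob.glob("/dev/mtd[0-9]*")):
    if dev.endswith("ro"):  # skip the read-only alias device nodes
        continue
    name_file = "/sys/class/mtd/{}/name".format(os.path.basename(dev))
    try:
        with open(name_file) as f:
            name = f.read().strip()
    except OSError:
        name = "unknown"
    out = "{}-{}.bin".format(os.path.basename(dev), name.lower())
    try:
        with open(dev, "rb") as src, open(out, "wb") as dst:
            dst.write(src.read())
        print("dumped {} ({}) to {}".format(dev, name, out))
    except OSError as exc:
        print("failed to read {}: {}".format(dev, exc))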

Thanks!

Print the line after a match using AWK

Posted by Adam Young on January 08, 2024 06:46 PM

We have an internal system for allocating hardware to developers on a short-term basis. While the software does have a web API, it is not enabled by default, nor in our deployment. Thus, we end up caching a local copy of the data about the machine. The machine names are a glom of architecture and location. So I make a file with the name of the machine, and a symlink to the one I am currently using.

One of the pieces of information I need out of this file is the IP address for ssh access. The format is two lines in the file, like this:

BMCIP
10.76.244.85
SYSTEMIP
10.76.244.65

While the SYSTEMIP is an easy regular expression match, what I need is the line after the match, and that is a little less trivial. Here is what I have ended up with (using this post as a reference).

awk 'f{print; f=0};/SYSTEMIP/{f=1}' current_dev 

Which produces the expected output:

10.76.244.65
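For contexts where Python is handier than a one-liner, a rough equivalent (assuming the same two-line file layout) is:

#!/usr/bin/env python3
# Print the line that follows the SYSTEMIP marker in the current_dev file.
with open("current_dev") as f:
    lines = f.read().splitlines()

for marker, value in zip(lines, lines[1:]):
    if marker.strip() == "SYSTEMIP":
        print(value)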

Chingam: a new libre Malayalam traditional script font

Posted by Rajeesh K Nambiar on January 08, 2024 08:16 AM

‘Chingam’/ചിങ്ങം (named after the first month of Malayalam calendar) is the newest libre/open source font released by Rachana Institute of Typography in the year 2024.

<figure class="wp-block-image size-large is-resized">Chingam font specimen</figure>

It comes with a regular variant, embellished with stylistic alternates for a number of characters. The default shapes of characters such as D, O, etc. are wide, in stark contrast with the other characters, which are designed with a narrow width. The font contains alternate shapes for these characters, more in line with the general narrow-width characteristic.

<figure class="wp-block-image size-large">Example of stylistic alternate characters in Chingam font. Text without stylistic alternate above, same text with stylistic alternate below.</figure>

Users can enable the stylistic alternates in typesetting systems, should they wish.

  • XeTeX: the stylistic variant can be enabled with the StylisticSet={1} option when defining the font via the fontspec package. For example:
% in the preamble
\newfontfamily\chingam[Ligatures=TeX,Script=Malayalam,StylisticSet={1}]{Chingam}

\begin{document}
\chingam{മനുഷ്യരെല്ലാവരും തുല്യാവകാശങ്ങളോടും അന്തസ്സോടും സ്വാതന്ത്ര്യത്തോടുംകൂടി ജനിച്ചിട്ടുള്ളവരാണ്‌…}
\end{document}
  • Scribus: extra font features are accessible since version 1.6
  • LibreOffice: extra font features are accessible since version 7.4. Enable them via Format – Character – Features.
<figure class="wp-block-image size-large">LibreOffice stylistic alternates with Chingam font.</figure>
  • InDesign: very similar to Scribus; there should be an option in the text/font properties to choose the stylistic set.

Development

Chingam is designed and drawn by Narayana Bhattathiri. Based on the initial drawings on paper, the glyph shapes were created in vector format (SVG) following the glyph naming convention used in RIT projects. A new build script, developed by Rajeesh, makes it easier for designers to iterate on and adjust the font metadata and metrics. Review and scrutiny by CVR, Hussain KH and Ashok Kumar improved the font substantially.

Download

Chingam is licensed under the Open Font License. The font can be downloaded from the Rachana website; sources are available on the GitLab page.

Running a UEFI program upon reboot

Posted by Adam Young on January 08, 2024 03:51 AM

Linux is only one of the Operating Systems that our chips need to support. While some features need a complete OS for testing, we want to make sure that features are as tested as possible before we try to debug them through Linux.

My current code has to talk to the firmware. Thus we need to have a responder in the firmware. Testing this from Linux has to go through the entire Linux networking stack.

The developers of the firmware already have to deal with UEFI. UEFI is installed with the firmware. Since writing UEFI apps is basically like writing DOS apps (according to Ron Minnich), it is not a huge demand on our Firmware team to have them use UEFI as the basis for writing their test code. Thus, they produced a simple app called RespTest.efi that checks that their code does what they expect it to do.

The normal process is that they boot the UEFI shell, switch to the file system, and copy up the executable via tftp. Once it is on the system, they run it manually. After the first time the executable is loaded, it is persisted on the system. Thus, running it a second or third time, after, say, a change to the firmware, can be automated from the operating system.

ls /boot/efi/
EFI  ResponderTest.efi  spinor.img

The above was executed from a Linux system running that has the efi partition mounted in /boot/efi. The RespTest.efi binary was copied up using tftp.

To see the set of UEFI boot options, we can use efibootmgr. These are mostly various flavors of PXE.

efibootmgr
BootCurrent: 0001
Timeout: 10 seconds
BootOrder: 0000,0001,0002,0003,0004,0005,0006,0007,0008,0009,000A
Boot0000* UiApp	FvVol(5c60f367-a505-419a-859e-2a4ff6ca6fe5)/FvFile(462caa21-7614-4503-836e-8ab6f4662331)
Boot0001* Fedora (Samsung SSD 970 EVO Plus 250GB)	PcieRoot(0x60000)/Pci(0x1,0x0)/Pci(0x0,0x0)/NVMe(0x1,00-25-38-5A-91-51-38-0A)/HD(1,GPT,4c1510a2-110f-492f-8c64-50127bc2e552,0x800,0x200000)/File(\EFI\fedora\shimaa64.efi) File(.????????)
Boot0002* UEFI PXEv4 (MAC:0C42A15A9B28)	PcieRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/MAC(0c42a15a9b28,1)/IPv4(0.0.0.00.0.0.0,0,0){auto_created_boot_option}
...
Boot0009* UEFI HTTPv6 (MAC:0C42A15A9B29)	PcieRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x1)/MAC(0c42a15a9b29,1)/IPv6([::]:<->[::]:,0,0)/Uri(){auto_created_boot_option}
Boot000A* UEFI Shell	FvVol(5c60f367-a505-419a-859e-2a4ff6ca6fe5)/FvFile(7c04a583-9e3e-4f1c-ad65-e05268d0b4d1)

Let’s say I want to boot to the UEFI shell next time. Since that has an index of 000A, I can run the following command to tell UEFI to boot to the shell once and only once. After that it will return to the default order:

efibootmgr --bootnext 000A

Here are the first few lines of output from that command:

# efibootmgr --bootnext 000A
BootNext: 000A
BootCurrent: 0001
Timeout: 10 seconds

If I want to add an additional boot option, I can use the --create option to tell it to boot the ResponderTest.efi executable:

efibootmgr --create -d /dev/nvme0n1p1 --label RespTest -l RespTest.efi

The last line of output from that command is:

Boot000B* RespTest	HD(1,GPT,4c1510a2-110f-492f-8c64-50127bc2e552,0x800,0x200000)/File(RespTest.efi)efibootmgr

If I then want to make UEFI select this option upon next boot…

# efibootmgr --bootnext 000B
BootNext: 000B
BootCurrent: 0001
Timeout: 10 seconds

Note that the executable is in /boot/efi mounted on the Linux filesystem, but that will be the EF0: filesystem in UEFI. If you wish to put something further down the tree, you can, but remember that UEFI uses backslashes. I think it would look something like this… but I have not tried it yet:

efibootmgr --create -d /dev/nvme0n1p1 --label RespTest -l EFI\\RespTest.efi

Note the double backslash to avoid shell escaping… I think this is necessary.

Creating a Pod with Ansible on Podman

Posted by Daniel Lara on January 08, 2024 01:25 AM


 

A quick tip on starting a simple pod with Ansible on Podman.

I have a habit of always keeping a hosts file for my playbooks; it helps a lot. In this case it is just my localhost.

Now let's create our YAML file. In it we will pull the WildFly image, create the pod and run the container; that is, three tasks: the image pull, the pod creation and the container run.

Here is the file with a simple pod configuration:


Now let's run our playbook:

$ ansible-playbook -i hosts pod.yaml

Done. Now we can inspect the pod.

Checking access via the web.

Done: simple, quick and easy.

Reference guide:

https://docs.ansible.com/ansible/latest/collections/containers/podman/podman_pod_module.html


Week 1 in Packit

Posted by Weekly status of Packit Team on January 08, 2024 12:00 AM

Week 1 (January 2nd – January 8th)

  • We have changed the behaviour of loading Packit configuration for koji_build and bodhi_update jobs. For both of them, the behaviour is the same as for pull_from_upstream - the configuration is taken from the default branch of the dist-git repository (usually rawhide) and other branches are ignored. (packit-service#2295)

Episode 410 – Package identifiers are really hard

Posted by Josh Bressers on January 08, 2024 12:00 AM

Josh and Kurt talk about package identifiers. We break this down in the context of an OpenSSF response to a CISA paper on software identifications. The identifiers that get all the air time are purl, CPE, SWID, and OmniBOR. This is a surprisingly complex problem space. It feels easy, but it’s not.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3289-1" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_410_Package_identifiers_are_really_hard.mp3?_=1" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_410_Package_identifiers_are_really_hard.mp3</audio>

Show Notes

On The Lineageos User Exposure Window

Posted by Robbie Harwood on January 07, 2024 05:00 AM

Unfortunate scheduling means that LineageOS users are exposed to publicly disclosed vulnerabilities for typically nine days per month. Here's why, and what I (a user who is otherwise uninvolved in the project) think could be done to improve the situation.

Per official Android documentation:

Important: The Android Security Bulletins are published on the first Monday of each month unless that Monday falls on a holiday. If the first Monday of the month is a holiday the bulletins will be published on the following work day.

Holiday time off makes sense for human reasons, though it makes release days inconsistent. (And I assume we're talking US holidays because Google, though this isn't stated.) Adherence to this isn't great - most egregiously, something happened that resulted in a March 13, 2023 release, which is probably the largest slip since August 13, 2015 (which is as far back as the table goes). But I've worked in security engineering long enough to know that sometimes dates will slip, and I'm sure it's not intentional. Clerical errors like the November 2023 bulletin date are also inevitable.

But let's assume for the sake of simplicity that disclosure happens as written: on the first Monday of the month (or the following day) a bulletin is published listing the CVEs included, their type/severity, and which Android (i.e., AOSP) versions are affected. Importantly absent here are the patches for these vulnerabilities, which per the bulletin boilerplate take 2 days to appear. So what's the point of the bulletin?

In any case, this means that patches won't be available until Wednesday or Thursday. Lineage posts updates on Friday; per datestamps, these updates are built on Thursdays. This means that in order to have the security fixes (and security string) in the next Lineage update, there is less than a day's window to gather patches, backport them, post pull requests, run CI, get code review, etc., before the weekly build is made - a seemingly herculean feat. And some months, it won't be technically possible at all.

So since patches will not land in the next release, all users of official builds are exposed to publicly disclosed vulnerabilities for typically nine days. (I think it's 8-10, but I don't discount off-by-one, and that also relies on Lineage not slipping further; everyone's human and it does happen, especially on volunteer projects.)

Clearly, the schedule is a problem. Here are my initial thoughts on how this might be addressed:

  1. Release Lineage updates on a different day. If it takes four days to run through the backport+review+build pipelines, then plan to release on Monday. Users will still be exposed for the length of time it takes to backport+review+build.
  2. Add Lineage to the embargo list. This would mean that backport and review could take place prior to public disclosure, and so including the security fixes in the upcoming Friday update becomes doable. Users are still exposed for 2 days, but that's way better than 9. (I am not involved in the Lineage project so it's possible they already are included, but that seems unlikely given security update PRs are not usually sent on the day code becomes publicly available.)
  3. Stop the bulletin/patch desync. I cannot come up with a good reason why the security bulletin and patches are released at different times, let alone releasing the bulletin before the patches. This makes reasoning about fix availability unnecessarily complicated. However, it would probably require Google to care.
  4. Update Lineage less frequently. It seems like the least amount of work, but if, e.g., Lineage released once per month, then the day of the release could be whenever the security fixes for the month land. This is also helpful because it minimizes the number of times that flashing needs to occur. (But that may be a larger discussion for a different post.)

These are not mutually exclusive, either.

To close out, I need to head off something that I am quite sure will come up. Yes, I am capable of running my own custom Android builds and kernel trees. I do not want to do this, and more importantly, users who are not developers by definition cannot do it. That is to say: OTA updates and official builds are the most important versions of the project, not whatever custom stuff we make. (After all, if that weren't the case, there wouldn't be five major forks over the signature spoofing decision.)

Updates to my mod-nginx-proxy replacement script in Python

Posted by Jon Chiappetta on January 06, 2024 06:08 PM

So in my quest to replace my modified-nginx network-wide proxy service with a Python script, I was interested to see what kind of performance I could achieve without having to write the whole thing in C this time. So far, the networking performance has been pretty good after I was able to iron out some connection tracking state issues. I then started looking into making it multi-threaded to help manage a large number of connections in parallel and I came across potential limitations related to the Global Interpreter Lock in Python. This then directed me to the multi-processing capabilities in Python and the ability to lock and share memory variables and values which reminded me of the old-school c-type variable definitions that I enjoy working with. Anyway, I tried to implement my first multi-process shared-variable class which should allow for locking data reads/writes between processes or threads. Because of this functionality and capability in Python, I was able to transform my proxy script to now allow for various processing modes (loop, thread, process) and buffer types (shared, piped).
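As a minimal sketch of the shared, locked variable idea (not the actual class from the proxy script), Python's multiprocessing module already provides C-typed values that live in shared memory and carry their own lock:

#!/usr/bin/env python3
# Minimal sketch: a C-typed integer shared between processes, with a lock
# guarding each read-modify-write cycle.
from multiprocessing import Process, Value

def worker(counter):
    for _ in range(100_000):
        with counter.get_lock():
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)  # "i" = C int, allocated in shared memory
    procs = [Process(target=worker, args=(counter,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # 400000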


No, WordPress doesn’t offer newsletters – not really

Posted by Joe Brockmeier on January 05, 2024 06:19 PM

Switching away from Substack is a hot topic right now, for reasons I won’t belabor here (hint: it’s because of the Nazis) – which means that people are searching for alternatives. One that’s getting tossed around is WordPress. As much as I’d love that to be true, the WordPress Newsletter functionality is not what most folks think of when they think “newsletter.”

<figure aria-describedby="caption-attachment-5301" class="wp-caption alignright" id="attachment_5301" style="width: 249px">WordPress.com Newsletter Settings<figcaption class="wp-caption-text" id="caption-attachment-5301">The backend for WordPress.com Newsletter Settings – pretty much just “advanced email sending” settings.</figcaption></figure>

I subscribe to 10-15 various technology and other newsletters, which vary in format quite a bit – but generally show some editorial discretion from the newsletter editor or publisher. That is to say, if you follow an organization’s blog and its newsletter, you expect to see some additional / different content and curation go into its newsletter.

WordPress… doesn’t have that. Or if it does it’s nowhere to be found in the Newsletter Settings or Jetpack Settings for Newsletters via WordPress.com. It’s frustrating because I see this suggestion being thrown around on Mastodon and elsewhere, and have checked it out only to find some features around “let people subscribe via email to your posts.” That’s… not a newsletter. Let me create an actual newsletter, and link to my blog content with additional material, and then we’re in good shape.

<figure aria-describedby="caption-attachment-5298" class="wp-caption alignleft" id="attachment_5298" style="width: 150px">Jetpack Newsletter Settings<figcaption class="wp-caption-text" id="caption-attachment-5298">WordPress.com’s Jetpack Newsletter Settings</figcaption></figure>

Which is disappointing, really. I’ve been thinking about launching my own newsletter in 2024, and the only real value offered by WordPress in this department is delivering things via email instead of RSS / visiting the blog directly.

It’s a pity, because WordPress / Automattic are ceding a huge market to others instead of offering real value to people subscribing to their WP.com plans. A proper newsletter offering in addition to the WordPress hosting and other features you get with their various paid plans would be a major draw. Why they haven’t upped their game here, even before Substack’s Nazi problem came to light, mystifies me.

Substack has posed a “threat” to the WordPress publishing model for quite a while. What Automattic offers right now might check a box somewhere that says “newsletter” but it’s hardly a real offering. While you can find some stats for emails, it’s basically just opens and clicks. Not over time, segregated by month or week or anything like that … just a list of opens and clicks. My last post apparently got 22 opens and 5 clicks. That’s all I know.

I’d love to recommend WordPress for this use case, I really would. I think that Automattic is a good company and it’s much nicer to be able to satisfy the need for a site, blog, and newsletter in one product. Sadly, it’s not ready to fill that use case just yet.

Infra & RelEng Update – Week 1 2024

Posted by Fedora Community Blog on January 05, 2024 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for the CPE (Community Platform Engineering) Team as the CPE initiatives are in most cases tied to I&R work. Happy New Year to everyone!

We provide you both infographic and text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in depth details look below the infographic.

Week: 01 January – 05 January 2024

<figure class="wp-block-image size-full wp-lightbox-container" data-wp-context="{ "core": { "image": { "imageLoaded": false, "initialized": false, "lightboxEnabled": false, "hideAnimationEnabled": false, "preloadInitialized": false, "lightboxAnimation": "zoom", "imageUploadedSrc": "https://communityblog.fedoraproject.org/wp-content/uploads/2024/01/IR_Weekly_01-scaled.jpg", "imageCurrentSrc": "", "targetWidth": "2560", "targetHeight": "2191", "scaleAttr": "", "dialogLabel": "Enlarged image" } } }" data-wp-interactive="data-wp-interactive">I&R infographic </figure>

Infrastructure & Release Engineering

Purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).

Fedora Infra

  • RH IT tickets to open firewall ports for production zabbix completed (mostly). Starting to auto enroll the hosts with the server.
  • Update infrastructure documentation
  • [PagureExporter] v0.1.2 tag temporarily yoink’d in order for RH IT to expunge any (potentially) sensitive info from cached views

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives

EPEL

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

Updates

ARC Investigations

The ARC (which is a subset of the CPE team) investigates possible initiatives that CPE might take on.

Updates

  • Dist-Git decoupling & ecosystem mapping
    • Investigation finished and document available
    • Announced to community and looking for feedback

If you have any questions or feedback, please respond to this report or contact us on -cpe channel on matrix.

The post Infra & RelEng Update – Week 1 2024 appeared first on Fedora Community Blog.

37c3 notes

Posted by Nikos Roussos on January 05, 2024 09:27 AM

37c3

It’s been a few years since the last Chaos Computer Congress. Like many other people, I highly enjoyed being there: meeting people, participating in discussions and doing a bit of hacking. Most of the things taking place in a congress are quite difficult to describe in writing, and most of them happen outside of the presentation rooms. But still, I thought I should share at least some sessions I enjoyed.

💡 If you use Kodi, install the relevant add-on to watch these in comfort (or any other of the apps)

Talks

  • Predator Files: How European spyware threatens civil society around the world
    A technical deep dive into Amnesty International's investigation of the spyware alliance Intellexa, which is used by governments to infect the devices and infrastructure we all depend on.

  • Tech(no)fixes beware!
    Spotting (digital) tech fictions as a replacement for social and political change. As the climate catastrophe is imminent and global injustice is rising, a lot of new tech is supposed to help the transition to a sustainable society. Although some of it can actually help with parts of the transition, it is usually discussed not as a tool to assist the broader societal change but as a replacement for it.

  • A Libyan Militia and the EU - A Love Story?
    An open source investigation by Sea-Watch and other organizations on how the EU (either directly or through Frontex) is collaborating with the Tariq Ben Zeyad Brigade (TBZ), a notorious East Libyan land-based militia. TBZ was deeply involved in the failed passage of the boat that sank near Pylos, in which up to 500 people drowned.

  • Tractors, Rockets and the Internet in Belarus
    How the Belarusian authoritarian regime is using technology to repress its population. With the dropping costs of surveillance, smaller authoritarian regimes are gaining easier access to various "out of the box" security solutions used mainly to further oppress people.

  • Please Identify Yourself!
    Focused mostly on EU's eIDAS and India's Aadhaar, and highlighting how Digital Identity Systems proliferate worldwide without any regard for their human rights impact or privacy concerns. Driven by governments and the crony capitalist solutionism peddled by the private sector, these identification systems are a frontal attack on anonymity in the online world and might lead to completely new forms of tracking and discrimination.

  • On Digitalisation, Sustainability & Climate Justice
    A critical talk about sustainability, technology, society, growth and ways ahead. Which digital tools make sense, which do not, and how can we achieve global social emancipation from self-destructive structures, towards ecological sustainability and a just world?

  • Energy Consumption of Datacenters
    The increase of datacenter energy consumption has already been exponential for years. With the AI hype, this demand for energy, cooling and water has increased dramatically.

  • Software Licensing For A Circular Economy
    How Open Source Software connects to sustainability, based on the KDE Eco initiative. Concrete examples of how Free & Open Source Software licenses can disrupt the produce-use-dispose linear model of hardware consumption and enable the shift to a reduce-reuse-recycle circular model.

Self-organized sessions

Anyone who has participated in a Congress knows that there is a wide variety of workshops and self-organized sessions outside of the official curated talks. Most of them are not recorded, but I still thought I should share some highlights and thoughts in case people want to dig a bit deeper into these topics.

Projects

Some quick links on projects captured in my notes based on discussions during the Congress.

How Did I break my Code

Posted by Adam Young on January 04, 2024 11:17 PM

Something I did in a commit broke my code. I have a git bisect that shows me the last good commit and the first bad one.

The code in question is a device driver for MCTP over PCC. MCTP is a network protocol used to have different components on a single system talk to each other. PCC is a shared buffer mechanism with a “doorbell” or register-that-triggers-an-interrupt. Another team needs to use this driver.

In looking through the output, a co-worker stated “it looks like you are sending the wrong buffer.” I suspected he was correct.

The buffers are pulled from ACPI information in a table specific to PCC, called the PCCT.

This is from the PCCT, which is common to both the good and bad run, and shows the machine (physical) addresses of the memory.


03 [Extended PCC Master Subspace]
               Base Address       = 00004000000D4004


04 [Extended PCC Slave Subspace]
               Base Address       = 00004000000D4404

Here is output data I get from a run of code from tip of tree.




[  644.338469] remap_pcc_comm_addresses outbox pcc_chan->shmem_base_addr = 4000000d4004 mapped to  ffff800080621004
[  644.348632] remap_pcc_comm_addresses inbox pcc_chan->shmem_base_addr = 4000000d4404 mapped to ffff80008061f404

[  828.014307] _mctp_get_buffer pcc_chan->shmem_base_addr = 1  ptr = 000000007f3ab582 
[  828.021950] mctp_pcc_tx buffer = ffff80008061f404

What I see is that I am sending data on the inbox address, not the outbox. So where did I swap it?

541392a553a2c2b7f741d89cff0ab8014a367299 is the first bad commit.

Let’s take a look at the code in that commit. The diff shows a lot of changes in _mctp_get_buffer. Here’s what it looks like in non-diff form:

static unsigned char * _mctp_get_buffer (struct mbox_chan * chan){
        void __iomem * pcc_comm_addr;
         
        struct pcc_mbox_chan * pchan = (struct pcc_mbox_chan *) chan;
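        /* NOTE: this cast is only valid if struct mbox_chan is embedded as
         * the first member of struct pcc_mbox_chan; as explained below, it
         * is a pointer member instead, so the comparison never matches. */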

        
        if (pchan == mctp_pcc_dev->out_chan){
                pcc_comm_addr =  mctp_pcc_dev->pcc_comm_outbox_addr;
        }else{
                pcc_comm_addr = mctp_pcc_dev->pcc_comm_inbox_addr;
        }
        pr_info("%s pcc_chan->shmem_base_addr = %llx  ptr = %p \n",
                        __func__, pchan->shmem_base_addr, pcc_comm_addr );
        return (unsigned char *  )pcc_comm_addr;
} 

How could this fail? Well, what if the cast from chan to pchan is bogus? If that happens, the if block will not match, and we will end up in the else block.

Why would I cast like that? Assume for a moment (as I did) that the definitions looked like this:

struct   pcc_mbox_chan {
 struct mbox_chan  chan;
};

Ignore anything after the member variable chan. This is a common pattern in C when you need to specialize a general case.

This is not what the code looks like. Instead, it looks like this:

struct   pcc_mbox_chan {
 struct mbox_chan*  chan;
};

The structure that chan points to was allocated outside of this code, and thus casting it back to a pcc_mbox_chan is not valid.

The else block always matched.

The reason the old code ran without crashing is it was getting a valid buffer, just not the right one. I was trying to send the “outbox” buffer, but instead sent the “inbox” buffer.

Here is the corrected version of the if block.

if (chan == mctp_pcc_dev->out_chan->mchan){
        pcc_comm_addr = mctp_pcc_dev->pcc_comm_outbox_addr;
}else if (chan == mctp_pcc_dev->in_chan->mchan){
        pcc_comm_addr = mctp_pcc_dev->pcc_comm_inbox_addr;
}
if (pcc_comm_addr){
        pr_info("%s buffer ptr = %px \n",__func__, pcc_comm_addr );
        return (unsigned char * )pcc_comm_addr;
}

What made this much harder to debug is the fact that the “real” system I was testing against had changed and I was not able to test my code against it for a week or so, and by then I had added additional commits on top of the original one. This shows the importance of testing early, testing often, and testing accurately.

To give a sense of how things got more complicated, here is the current version of the get_buffer function:

static unsigned char * _mctp_get_buffer (struct mbox_chan * chan){
        struct mctp_pcc_ndev * mctp_pcc_dev = NULL;
        struct list_head *ptr;
        void __iomem * pcc_comm_addr = NULL;
        list_for_each(ptr, &mctp_pcc_ndevs) {
                mctp_pcc_dev = list_entry(ptr,struct mctp_pcc_ndev, head);
        
                if (chan == mctp_pcc_dev->out_chan->mchan){
                        pcc_comm_addr =  mctp_pcc_dev->pcc_comm_outbox_addr;
                }else if (chan == mctp_pcc_dev->in_chan->mchan){
                        pcc_comm_addr = mctp_pcc_dev->pcc_comm_inbox_addr;
                }
                if (pcc_comm_addr){
                        pr_info("%s buffer ptr = %px \n",
                                __func__, pcc_comm_addr );
                        return (unsigned char *  )pcc_comm_addr;
                }
        }
        //TODO this should never happen, but if it does, it will
        //Oops.  Handle the error better
        pr_err("%s unable to match mctp_pcc_ndev", __func__);
        return 0;
}

Why is it now looking through a list? One of the primary changes I had to make was to move the driver from only supporting a single device to supporting multiple. Finding which buffer to get from struct mbox_chan is again going from general to specific.

This code is still in flight. The list lookup may well not survive the final cut, as there is possibly a better way to get the buffer from the original structure, but it is not clear cut. This works, and lets the unit tests continue to pass. This allows the team to make progress. It might be technical debt, but that is a necessary part of a complicated development integration process.

Advice for Telegram usage in education

Posted by Pablo Iranzo Gómez on January 04, 2024 11:00 PM
Telegram is being used frequently at schools to allow an easy communication flow through ‘channels’, where teachers can send information to families without sharing their personal contact details, so that they can’t be contacted outside of the official tools. This convenience probably came about without considering the problems and dangers involved. To make it easier for parents to join the class channel, public channels are often created… this allows using a shorter ‘alias’, but one that everyone else can search for and find… and here is the problem: ‘everyone’ is not just parents or other teachers. If you do the test by searching for clase de or classe dels you’ll find a lot of groups:

PHP version 8.2.15RC1 and 8.3.2RC1

Posted by Remi Collet on January 04, 2024 04:07 PM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests, and also as base packages.

RPMs of PHP version 8.3.2RC1 are available

  • as base packages
    • in the remi-modular-test for Fedora 37-39 and Enterprise Linux ≥ 8
    • in the remi-php83-test repository for Enterprise Linux 7
  • as SCL in remi-test repository

RPMs of PHP version 8.2.15RC1 are available

  • as base packages
    • in the remi-modular-test for Fedora 37-39 and Enterprise Linux ≥ 8
    • in the remi-php82-test repository for Enterprise Linux 7
  • as SCL in remi-test repository

The Fedora 39, EL-8 and EL-9 packages (modules and SCL) are available for x86_64 and aarch64.

PHP version 8.1 is now in security mode only, so no more RC will be released.

Installation: follow the wizard instructions.

Announcements:

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Parallel installation of version 8.2 as Software Collection:

yum --enablerepo=remi-test install php82

Update of system version 8.3 (EL-7) :

yum --enablerepo=remi-php83,remi-php83-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.2 (EL-7) :

yum --enablerepo=remi-php82,remi-php82-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module switch-to php:remi-8.2
dnf --enablerepo=remi-modular-test update php\*

Notice:

  • version 8.3.2RC1 is also in Fedora rawhide for QA
  • EL-9 packages are built using RHEL-9.3
  • EL-8 packages are built using RHEL-8.9
  • EL-7 packages are built using RHEL-7.9
  • oci8 extension uses the RPM of the Oracle Instant Client version 21.12 on x86_64 or 19.19 on aarch64
  • intl extension uses libicu 73.2
  • RC version is usually the same as the final version (no change accepted after RC, exception for security fix).
  • versions 8.2.15 and 8.3.2 are planned for January 18th, in 2 weeks.

Software Collections (php82, php83)

Base packages (php)

Missing rubygem json-canonicalization 0.3.2

Posted by Kushal Das on January 03, 2024 05:10 PM

I did not upgrade our mastodon server to 4.2.0 from 4.1.9 for a long time. Finally, while doing so in the morning, I got the following error with the bundle install command.

Your bundle is locked to json-canonicalization (0.3.2) from rubygems repository
https://rubygems.org/ or installed locally, but that version can no longer be
found in that source. That means the author of json-canonicalization (0.3.2) has
removed it. You'll need to update your bundle to a version other than
json-canonicalization (0.3.2) that hasn't been removed in order to install.

I have no clue about how Ruby works, but somehow only updating the lockfile via bundle lock --update json-canonicalization did not help. Finally updated the Gemfile.lock file to have json-canonicalization (0.3.3) manually. That solved the issue and I could continue with the update steps.

Writing Docs with Kate

Posted by Fedora Magazine on January 03, 2024 08:00 AM

Kate (KDE Advanced Text Editor) is a Free and Open Source text editor, available for Linux, Windows and macOS.

For documentation writers, the integrated Git features of Kate can help simplify the writing process. You do not need to remember Git commands and type them in the terminal every time you make changes to files or switch branches.

This article focuses on key features of Kate for contributors working on various Fedora documentation repositories. The capabilities can be extended to other documentation repositories.

Preparations For Using Kate With Your Repository

  1. Add SSH key to settings of your account on Pagure, GitLab or GitHub.
    • On Pagure, go to My Settings – SSH Keys – Add SSH Key
    • On GitLab, Preferences – User Settings – Add an SSH Key
    • On GitHub, Settings – SSH and GPG keys – New SSH key
  2. Fork a project: Go to the upstream repository and select the Fork button
  3. Clone the repository
    • In your forked repository, select Clone with SSH.
    • Next, copy that link to your clipboard and paste it in GIT URL in terminal.
    • When cloning a repository, you can specify the new directory name as an additional argument: $ git clone <GIT URL> <new-directory>
  4. Install Kate. If you are a Linux user, go to the package manager of your distro to install Kate. If you use Fedora Linux, we recommend the Fedora Linux RPM version or Flatpak.

Sessions

Sessions in Kate text editor keep separate projects grouped together and help you work with multiple documentation repositories in a single view.

To save repositories in a session:

Go to the File pull-down menu – Select Open folder – Choose the cloned directory.
From the Sessions pull-down menu – Select Save session – Enter a session name – Press OK.

On the left pane, click the project list saved to a new session ‘Magazine’. The next time you open Kate, the cloned repositories saved to the session will reappear.

<figure class="aligncenter size-large"><figcaption class="wp-element-caption">Sessions Menu</figcaption></figure>

Using the Status Bar to checkout a branch

With Kate editor, you can switch branches or create a new branch on the status bar and pop-up screen.

The current branch is shown in the lower right corner in the status bar.
To create a new branch, select Main branch.
From the popup menu select Create New Branch and enter a new branch name.

<figure class="aligncenter size-large"><figcaption class="wp-element-caption">Popup menu showing Create New branch</figcaption></figure>

Built-in support for AsciiDoc highlighting

Files with the AsciiDoc extension will automatically be highlighted using the rules in asciidoc.xml. You don’t need to install external plugins.

On-the-fly spell checking

If you want automatic spell checking as you type, press Ctrl + Shift + O. This key combination will toggle spell check on and off.

Git toolview

The toolview on the left pane shows the git status of each open file.

<figure class="aligncenter size-large"><figcaption class="wp-element-caption">Show diff</figcaption></figure>

Staged means the files are added (same as Git add) and will be committed if you select the Commit button at the top.

Modified shows the changes that are not staged yet.

Click the Commit button at the top of the left panel to show the diff for that commit. This will open the selected commit in the Commit toolview. If you want to see all the changes in the commit, right click and select Show Full Commit. Add a commit message.

The git push button is to the right of the Commit button, and the git pull button is to the right of the git push button.

Select the refresh icon (circular arrows) to see what has been going on with staged files and commits.

Integrated terminal

Press F4, or select the terminal button, to toggle the integrated terminal on and off.

You can take your writing to the next level and ensure documentation quality by using build scripts and the Vale linter via the integrated terminal.

Step 1. Run build scripts

To check document quality locally, you can run build and preview scripts in the integrated terminal. Build and preview scripts let you view the changes exactly as they will be published on the Docs pages through the Antora static site generator.

Note: check the README page of your Fedora documentation repositories to use the correct file name for build scripts and instructions. The following is an example:

To build and preview the site, run:

$ ./docsbuilder.sh -p

The result will be available at http://localhost:8080

To stop the preview:

$ ./docsbuilder.sh -k

Step 2. Run Vale on your text

Vale is a command line tool that checks your text for adherence to a defined style guide. Run Vale locally referring to the guide.

Credits and acknowledgements

Big thanks to Nicco, a KDE developer, who provided me with a great deal of inspiration from his video tutorial channel ‘Nicco loves Linux‘.

The Kate version used in this article was 23.08.3.

Upstream documentation

The following are the Fedora documentation Git repos used in this article:

Quick Docs
Kinoite User Docs
IoT User Docs
Documentation Contributors Guide

Gaming only on Linux, one year in

Posted by Timothée Ravier on January 02, 2024 11:00 PM

I have now been playing games only on Linux for a year, and it has been great.

With the GPU shortage, I had been waiting for prices to come back to reasonable levels before buying a new GPU. So far, I had always bought NVIDIA GPUs as I was using Windows to run games and the NVIDIA drivers had a better “reputation” than the AMD/Radeon ones.

With Valve’s Proton seriously taking off thanks to the Steam Deck, I wanted to get rid of the last computer in the house that was running Microsoft Windows, that I had kept only for gaming.

But the NVIDIA drivers story on Linux had never been great, especially on distributions that move kernel versions quickly to follow upstream releases like Fedora. I had tried using the NVIDIA binary drivers on Fedora Kinoite but quickly ran into some of the issues that we have listed in the docs.

At the time, the Universal Blue project did not exist yet (Jorge Castro started it a bit later in the year), otherwise I would have probably used that instead. If you need NVIDIA support today on Fedora Atomic Desktops (Silverblue, Kinoite, etc.), I heavily recommend using the Universal Blue images.

Hopefully this will be better in the future for NVIDIA users with the work on NVK.

So, at the beginning of last year (January 2023), I finally decided to buy an AMD Radeon RX 6700 XT GPU card.

What a delight. Nothing to setup, fully supported out of the box, works perfectly on Wayland. Valve’s Proton does wonders. I can now play on my Linux box all the games that I used to play on Windows and they run perfectly. Just from last year, I played Age of Wonders 4 and Baldur’s Gate 3 without any major issue, and did it pretty close to the launch dates. Older titles usually work fairly well too.

Sure, some games require some little tweaks, but it is nothing compared to the horrors of managing a Windows machine. And some games require tweaks on Windows as well (looking at you Cyberpunk 2077). The best experience is definitely with games bought on Steam which usually work out of the box. For those where it is not the case, protondb is usually a good source to find the tweaks needed to make the games work. I try to keep the list of tweaks I use for the games that I play updated on my profile there.

I am running all of this on Fedora Kinoite with the Steam Flatpak. If you want a more console-like or Steam Deck-like experience on your existing computers, I recommend checking out the great work from the Bazzite team.

Besides Steam, I use Bottles, Cartridge and Heroic Games Launcher as needed (all as Flatpaks). I have not looked at Origins or Uplay/Ubisoft Connect games yet.

According to protondb, the only games from my entire Steam library that are not supported are mostly multiplayer games that require some specific anti-cheat that is only compatible with Windows.

I would like to say a big THANK YOU to all the open source graphics and desktop developers out there and to (in alphabetical order) AMD, Collabora, Igalia, Red Hat, Valve, and other companies for employing people or funding the work that makes gaming on Linux a reality.

Happy new year and happy gaming on Linux!

Starting a container on Podman with Ansible

Posted by Daniel Lara on January 02, 2024 09:12 PM

 

A quick tip on starting a simple container with Ansible on Podman.

Since it is a remote server, I copied over my SSH key:

$ ssh-copy-id fedora.server

I created a hosts file for Ansible:

$ vim hosts

and added the server to it.

Now let's create our YAML file. In it we will pull the WildFly image and run the container on port 8080; that is, two tasks: the image pull and the container run.


A fairly simple playbook.

Now let's run it:

$ ansible-playbook -i hosts wildfly.yaml

We can verify it if we want:

$ ansible fedora.server -i hosts -a "podman ps"

And we can see WildFly running.

Done: simple, quick and easy.

Reference guides:

https://www.redhat.com/sysadmin/automate-podman-ansible

https://fedoramagazine.org/using-ansible-to-configure-podman-containers/

How systemd exponential restart delay works

Posted by Tomasz Torcz on January 02, 2024 08:25 PM

Since the beginning, systemd has had a Restart= directive to do just that – restart a service when it fails or exits. Some tuning was provided by RestartSec= (how long to wait between restarts), StartLimitBurst= (how many times to try) and a few minor directives.

Starting with systemd v254, we have two new knobs:

  • RestartSteps= the number of steps to take to increase the interval of auto-restarts

  • RestartMaxDelaySec= the longest time to sleep before restarting a service as the interval goes up

Together, they provide the ability to exponentially extend the wait time between restarts. The first restart is quite quick, but then the system waits more and more. Here is what that means.

Let's assume a service with the following settings:

Restart=always
RestartSec=100ms (the default)
RestartMaxDelaySec=10s
RestartSteps=5

Upon a failure, systemd will wait (rounded a bit):

  1. 100ms until first restart

  2. 250ms until next restart (first step)

  3. 630ms until next restart (second step)

  4. 1.58s until next restart (third step)

  5. 3.98s until next restart (fourth step)

  6. 10.0s until next restart (RestartSteps=5 – fifth step) and following restarts

Second example. Given:

RestartSec=1s
RestartMaxDelaySec=10s
RestartSteps=3

subsequent restarts will be done after waiting:

  1. 1s

  2. 2.15s (step 1)

  3. 4.64s (step 2)

  4. 10.0s (step 3, as in RestartSteps=3).

Hope this helps. You can find the exact formula in src/core/service.c.
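A small sketch that reproduces the numbers above; the interpolation is inferred from the examples in this post, and src/core/service.c remains the authoritative source:

#!/usr/bin/env python3
# Delay before restart number n (n=1 is the first restart), interpolating
# exponentially from RestartSec up to RestartMaxDelaySec over RestartSteps.

def restart_delay(n, restart_sec, max_delay_sec, steps):
    if n <= 1 or steps == 0 or restart_sec >= max_delay_sec:
        return restart_sec
    if n - 1 >= steps:
        return max_delay_sec
    ratio = max_delay_sec / restart_sec
    return restart_sec * ratio ** ((n - 1) / steps)

# First example: RestartSec=100ms, RestartMaxDelaySec=10s, RestartSteps=5
print([round(restart_delay(n, 0.1, 10.0, 5), 2) for n in range(1, 7)])
# [0.1, 0.25, 0.63, 1.58, 3.98, 10.0]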

Dealing with weird ELF libraries

Posted by Matthew Garrett on January 02, 2024 07:04 PM
Libraries are collections of code that are intended to be usable by multiple consumers (if you're interested in the etymology, watch this video). In the old days we had what we now refer to as "static" libraries, collections of code that existed on disk but which would be copied into newly compiled binaries. We've moved beyond that, thankfully, and now make use of what we call "dynamic" or "shared" libraries - instead of the code being copied into the binary, a reference to the library function is incorporated, and at runtime the code is mapped from the on-disk copy of the shared object[1]. This allows libraries to be upgraded without needing to modify the binaries using them, and if multiple applications are using the same library at once it only requires that one copy of the code be kept in RAM.

But for this to work, two things are necessary: when we build a binary, there has to be a way to reference the relevant library functions in the binary; and when we run a binary, the library code needs to be mapped into the process.

(I'm going to somewhat simplify the explanations from here on - things like symbol versioning make this a bit more complicated but aren't strictly relevant to what I was working on here)

For the first of these, the goal is to replace a call to a function (eg, printf()) with a reference to the actual implementation. This is the job of the linker rather than the compiler (eg, if you use the -c argument to tell gcc to simply compile to an object rather than linking an executable, it's not going to care about whether or not every function called in your code actually exists - that'll be figured out when you link all the objects together), and the linker needs to know which symbols (which aren't just functions - libraries can export variables or structures and so on) are available in which libraries. You give the linker a list of libraries, it extracts the symbols available, and resolves the references in your code with references to the library.

But how is that information extracted? Each ELF object has a fixed-size header that contains references to various things, including a reference to a list of "section headers". Each section has a name and a type, but the ones we're interested in are .dynstr and .dynsym. .dynstr contains a list of strings, representing the name of each exported symbol. .dynsym is where things get more interesting - it's a list of structs that contain information about each symbol. This includes a bunch of fairly complicated stuff that you need to care about if you're actually writing a linker, but the relevant entries for this discussion are an index into .dynstr (which means the .dynsym entry isn't sufficient to know the name of a symbol, you need to extract that from .dynstr), along with the location of that symbol within the library. The linker can parse this information and obtain a list of symbol names and addresses, and can now replace the call to printf() with a reference to libc instead.

(Note that it's not possible to simply encode this as "Call this address in this library" - if the library is rebuilt or is a different version, the function could move to a different location)

Experimentally, .dynstr and .dynsym appear to be sufficient for linking a dynamic library at build time - there are other sections related to dynamic linking, but you can link against a library that's missing them. Runtime is where things get more complicated.

When you run a binary that makes use of dynamic libraries, the code from those libraries needs to be mapped into the resulting process. This is the job of the runtime dynamic linker, or RTLD[2]. The RTLD needs to open every library the process requires, map the relevant code into the process's address space, and then rewrite the references in the binary into calls to the library code. This requires more information than is present in .dynstr and .dynsym - at the very least, it needs to know the list of required libraries.

There's a separate section called .dynamic that contains another list of structures, and it's the data here that's used for this purpose. For example, .dynamic contains a bunch of entries of type DT_NEEDED - this is the list of libraries that an executable requires. There's also a bunch of other stuff that's required to actually make all of this work, but the only thing I'm going to touch on is DT_HASH. Doing all this re-linking at runtime involves resolving the locations of a large number of symbols, and if the only way you can do that is by reading a list from .dynsym and then looking up every name in .dynstr that's going to take some time. The DT_HASH entry points to a hash table - the RTLD hashes the symbol name it's trying to resolve, looks it up in that hash table, and gets the symbol entry directly (it still needs to resolve that against .dynstr to make sure it hasn't hit a hash collision - if it has it needs to look up the next hash entry, but this is still generally faster than walking the entire .dynsym list to find the relevant symbol). There's also DT_GNU_HASH which fulfills the same purpose as DT_HASH but uses a more complicated algorithm that performs even better. .dynamic also contains entries pointing at .dynstr and .dynsym, which seems redundant but will become relevant shortly.

So, .dynsym and .dynstr are required at build time, and both are required along with .dynamic at runtime. This seems simple enough, but obviously there's a twist and I'm sorry it's taken so long to get to this point.

I bought a Synology NAS for home backup purposes (my previous solution was a single external USB drive plugged into a small server, which had uncomfortable single point of failure properties). Obviously I decided to poke around at it, and I found something odd - all the libraries Synology ships were entirely lacking any ELF section headers. This meant no .dynstr, .dynsym or .dynamic sections, so how was any of this working? nm asserted that the libraries exported no symbols, and readelf agreed. If I wrote a small app that called a function in one of the libraries and built it, gcc complained that the function was undefined. But executables on the device were clearly resolving the symbols at runtime, and if I loaded them into ghidra the exported functions were visible. If I dlopen()ed them, dlsym() couldn't resolve the symbols - but if I hardcoded the offset into my code, I could call them directly.

Things finally made sense when I discovered that if I passed the --use-dynamic argument to readelf, I did get a list of exported symbols. It turns out that ELF is weirder than I realised. As well as the aforementioned section headers, ELF objects also include a set of program headers. One of the program header types is PT_DYNAMIC. This typically points to the same data that's present in the .dynamic section. Remember when I mentioned that .dynamic contained references to .dynsym and .dynstr? This means that simply pointing at .dynamic is sufficient, there's no need to have separate entries for them.
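
As a rough picture of how much the program headers alone give you, here is a small Python sketch (standard library only; it assumes a little-endian ELF64 shared object and omits error handling) that locates PT_DYNAMIC, collects the DT_NEEDED entries, and resolves them against the string table pointed to by DT_STRTAB, roughly the same list that readelf --use-dynamic prints:

import struct
import sys

PT_LOAD, PT_DYNAMIC = 1, 2
DT_NULL, DT_NEEDED, DT_STRTAB = 0, 1, 5

def needed_libs(path):
    data = open(path, "rb").read()
    # ELF64 header fields: program header table offset, entry size, entry count.
    e_phoff, = struct.unpack_from("<Q", data, 0x20)
    e_phentsize, e_phnum = struct.unpack_from("<HH", data, 0x36)

    loads, dynamic = [], None
    for i in range(e_phnum):
        p_type, _, p_offset, p_vaddr, _, p_filesz = struct.unpack_from(
            "<IIQQQQ", data, e_phoff + i * e_phentsize)
        if p_type == PT_LOAD:
            loads.append((p_vaddr, p_offset, p_filesz))
        elif p_type == PT_DYNAMIC:
            dynamic = (p_offset, p_filesz)

    def vaddr_to_off(addr):
        # DT_STRTAB holds a virtual address; map it back to a file offset
        # using the PT_LOAD segments.
        for vaddr, offset, filesz in loads:
            if vaddr <= addr < vaddr + filesz:
                return addr - vaddr + offset
        raise ValueError("address not backed by the file")

    # Dynamic entries are (d_tag, d_val) pairs, terminated by DT_NULL.
    off, end = dynamic[0], dynamic[0] + dynamic[1]
    needed, strtab = [], None
    while off < end:
        d_tag, d_val = struct.unpack_from("<qQ", data, off)
        off += 16
        if d_tag == DT_NULL:
            break
        if d_tag == DT_NEEDED:
            needed.append(d_val)              # offset into the string table
        elif d_tag == DT_STRTAB:
            strtab = vaddr_to_off(d_val)

    def cstr(o):
        return data[o:data.index(b"\0", o)].decode()

    return [cstr(strtab + n) for n in needed]

if __name__ == "__main__":
    print(needed_libs(sys.argv[1]))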

The same information can be reached from two different locations. The information in the section headers is used at build time, and the information in the program headers at run time[3]. I do not have an explanation for this. But if the information is present in two places, it seems obvious that it should be possible to reconstruct the missing section headers in my weird libraries. So that's what this tool does: it extracts information from the DYNAMIC entry in the program headers and creates equivalent section headers.

There's one thing that makes this more difficult than it might seem. The section header for .dynsym has to contain the number of symbols present in the section. And that information doesn't directly exist in DYNAMIC - to figure out how many symbols exist, you're expected to walk the hash tables and keep track of the largest number you've seen. Since every symbol has to be referenced in the hash table, once you've hit every entry the largest number is the number of exported symbols. This seemed annoying to implement, so instead I cheated, added code to simply pass in the number of symbols on the command line, and then just parsed the output of readelf against the original binaries to extract that information and pass it to my tool.
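
(As an aside: for libraries that still carry a classic DT_HASH table, the count can be read without walking anything, because the table starts with two 32-bit words, nbucket and nchain, and the ELF gABI says nchain should equal the number of symbol table entries. A minimal Python sketch, assuming you have already located the hash table's file offset; binaries that only carry DT_GNU_HASH do need the walk described above.)

import struct

def dynsym_count_from_sysv_hash(data: bytes, hash_off: int) -> int:
    # Classic DT_HASH layout: nbucket, nchain, bucket[nbucket], chain[nchain].
    # nchain mirrors the number of entries in .dynsym.
    nbucket, nchain = struct.unpack_from("<II", data, hash_off)
    return nchain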

Somehow, this worked. I now have a bunch of library files that I can link into my own binaries to make it easier to figure out how various things on the Synology work. Now, could someone explain (a) why this information is present in two locations, and (b) why the build-time linker and run-time linker disagree on the canonical source of truth?

[1] "Shared object" is the source of the .so filename extension used in various Unix-style operating systems
[2] You'll note that "RTLD" is not an acronym for "runtime dynamic linker", because reasons
[3] For environments using the GNU RTLD, at least - I have no idea whether this is the case in all ELF environments


neovim, dap and gdb 14.1

Posted by Andreas Schneider on January 02, 2024 04:59 PM

At the beginning of December 2023, gdb version 14.1 was released. The feature I’ve waited for is support for the Debug Adapter Protocol. Today I finally looked into this and got gdb working with nvim-dap.


If you have already set up nvim-dap, it is pretty easy to add support for C/C++ with gdb. First, you need to enable DAP for the filetype; I load DAP with lazy only for the filetypes I use it with.

Then I created the file plugins/dap/c.lua with the following content:

local ok, dap = pcall(require, 'dap')
if not ok then
    return
end

--
-- See
-- https://sourceware.org/gdb/current/onlinedocs/gdb.html/Interpreters.html
-- https://sourceware.org/gdb/current/onlinedocs/gdb.html/Debugger-Adapter-Protocol.html
dap.adapters.gdb = {
    id = 'gdb',
    type = 'executable',
    command = 'gdb',
    args = { '--quiet', '--interpreter=dap' },
}

dap.configurations.c = {
    {
        name = 'Run executable (GDB)',
        type = 'gdb',
        request = 'launch',
        -- This requires special handling of 'run_last', see
        -- https://github.com/mfussenegger/nvim-dap/issues/1025#issuecomment-1695852355
        program = function()
            local path = vim.fn.input({
                prompt = 'Path to executable: ',
                default = vim.fn.getcwd() .. '/',
                completion = 'file',
            })

            return (path and path ~= '') and path or dap.ABORT
        end,
    },
    {
        name = 'Run executable with arguments (GDB)',
        type = 'gdb',
        request = 'launch',
        -- This requires special handling of 'run_last', see
        -- https://github.com/mfussenegger/nvim-dap/issues/1025#issuecomment-1695852355
        program = function()
            local path = vim.fn.input({
                prompt = 'Path to executable: ',
                default = vim.fn.getcwd() .. '/',
                completion = 'file',
            })

            return (path and path ~= '') and path or dap.ABORT
        end,
        args = function()
            local args_str = vim.fn.input({
                prompt = 'Arguments: ',
            })
            return vim.split(args_str, ' +')
        end,
    },
    {
        name = 'Attach to process (GDB)',
        type = 'gdb',
        request = 'attach',
        processId = require('dap.utils').pick_process,
    },
}

And loaded it with the following snippet:

if vim.fn.executable('gdb') == 1 then
    require('plugins.dap.c')
end

My neovim dotfiles are here.

End of the year in Packit

Posted by Weekly status of Packit Team on January 02, 2024 12:00 AM

End of the year in Packit

  • We have hit some issues with the firewall rules on the new cluster hosting our production deployment. We are waiting for the required fix, which is being handled outside of our team and should be deployed around January 11th. We will keep you informed about the current status. At the moment we are aware of issues related to:
    • pull-from-upstream jobs that have sources hosted at infradead.org, sourceforge.net and gitlab.gnome.org
    • jobs running on the gitlab.gnome.org in general
  • We have also fixed an issue that caused some Cockpit releases to fail; other projects might have been affected too, though there are no reports.

Year End Review - 2023

Posted by Farhaan Bukhsh on January 01, 2024 07:30 PM

There are plenty of things that I am grateful for this year. This year had a lot of firsts, and I am thankful that I could experience each of these moments in my life.

So let's go with the firsts:

  1. I had my first international travel this year. I was so happy to meet my team at OpenCraft in Bogota. The trip was a crazy experience that deserves a blog of its own.

  2. Shabnam and I bought our first car, and it humbles me every time we think about it.

  3. We did a month-long excursion and stayed in the mountains. The digital nomad feel was a high point.

  4. We did our first Himalayan trek. We summited Kedarkantha which is at 12,500 feet.

  5. I started dancing (without rhythm) in group dance fitness sessions.

  6. Did pottery and enjoyed it. (Thanks to Tanvi).

  7. Got my driver's license finally.

  8. Watched Zakir Khan perform live.

Some habits that I cultivated (thanks to Shabnam) and that I am happy about:

  1. We took up Badminton as a sport and were consistent on the court.

  2. We started eating clean, so much so that we made some good-looking smoothie/oat bowls this year.


  3. We have been consistent in improving our health, both physical and mental.

  4. Consistent weekend morning walks at Cubbon Park. This might look underrated but the peace it brings you is divine.

Things that I could improve this year:

  1. Write more blog posts; I have been lazy or caught up with some other things.

  2. Attend more meetups.

  3. Read and understand more about nutrition and have a meaningful relationship with food.

  4. Cultivate the habit of reading research papers.

  5. Avoid Doom scrolling.

Apart from all this, on the professional front I was able to work on a lot of OpenEdx stuff, especially in the front-end components. There was a lot of good momentum with Metamind as well, and we should be able to gauge our direction better now.

Ending This Year With A New Proxy Python Server Script

Posted by Jon Chiappetta on January 01, 2024 02:43 AM

So it’s been a busy year for me personally, as I have been trying my best to get set up in a new location and start some new routines with more focus placed on my health (diet, sleep, and exercise). I’ve been battling some allergies, including seasonal ones, but when I get them under control in time I do feel much better (almost 95% of the way I used to feel back in the day). I haven’t been able to post much here lately because of work and life, but I plan to keep the blog alive if any new technical projects or anything interesting comes up.

One ongoing project at home has been my network-wide VPN tunnelling experiment via the Mac Mini, which has been interesting. OpenVPN worked perfectly speed-wise, but I was having some performance issues with WireGuard. I believe that was due to the lower MTU required, along with the fact that WG cannot pre-fragment larger-sized packets coming in from the network clients being routed inside of the tunnel. I had a workaround using a custom-modified version of nginx to proxy the client-side connection and start a newly-sized one on behalf of the client, thus allowing the MTU to be respected directly from the server side. It seemed to perform better; however, I believe I was running into timing-based connection errors, potentially with expired states, sessions, and processes.

So, to end the year, I am trying a new project that I wrote from scratch in Python. It performs the same core functionality as what I modded into nginx, for both UDP and TCP connection types. It’s multi-process via the load-balancing workaround/trick of using multiple localhost IP addresses, and the core proxying recvs/sends are multi-threaded (you can set a single-threaded loop if you prefer). I will see how long this one lasts me going into the new year!

https://github.com/stoops/pyproxy
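
For readers curious about the general shape of such a proxy, here is a stripped-down Python sketch of the thread-per-direction TCP forwarding pattern (this is not the author's pyproxy, just an illustration; the listen and upstream addresses are hypothetical):

import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 8080)   # hypothetical local endpoint
UPSTREAM_ADDR = ("10.0.0.1", 80)    # hypothetical upstream server

def pipe(src, dst):
    # Copy bytes from src to dst until EOF, then half-close the writer.
    try:
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def handle(client):
    upstream = socket.create_connection(UPSTREAM_ADDR)
    # One thread per direction: upstream->client here, client->upstream below.
    t = threading.Thread(target=pipe, args=(upstream, client), daemon=True)
    t.start()
    pipe(client, upstream)
    t.join()
    client.close()
    upstream.close()

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LISTEN_ADDR)
    srv.listen(128)
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()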

Episode 409 – You wouldn’t hack a train?

Posted by Josh Bressers on January 01, 2024 12:00 AM

Josh and Kurt talk about how some hackers saved the day with a Polish train. We delve into a discussion about how we don’t really own anything anymore if you look around. There’s a great talk from the Blender Conference about this and how GPL makes a difference in the world of software ownership. It’s sort of a dire conversation, but not all hope is lost.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3285-2" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_409_You_wouldnt_hack_a_train.mp3?_=2" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_409_You_wouldnt_hack_a_train.mp3</audio>

Show Notes

Year in books for 2023

Posted by Zach Oglesby on December 31, 2023 09:23 PM

Here are the books I finished reading in 2023.

  • No Time Like The Past
  • A Trail Through Time
  • A Second Chance
  • A Symphony of Echoes
  • Just One Damned Thing After Another
  • The Internet Con
  • The Sunlit Man
  • Klara and the Sun
  • Yumi and the Nightmare Painter
  • The Frugal Wizard’s Handbook for Surviving Medieval England
  • Astrophysics for People in a Hurry (Astrophysics for People in a Hurry Series)
  • Waking Up to What You Do
  • On the Origin of Time
  • Chip War
  • Ordinary Wonder
  • Sapiens: A Brief History of Humankind
  • The Lost Metal
  • Clear and Present Danger
  • Secret Project #1
  • The Complete Malazan Book of the Fallen

Music of the week: New Year edition

Posted by Peter Czanik on December 31, 2023 11:05 AM

This is my last blog for 2023, Budapest time. However, it might already be the first blog of the year from me, if you live in Japan or New Zealand :-) This time it’s a single song: “Happy new year” from ABBA (and from me :-) ).

<iframe allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/3Uo0JAUWijM" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube Video"></iframe>

TIDAL: https://listen.tidal.com/album/575781/track/575787


Storytelling: 2023 was a quiet blog year. In 2024, I recommit to storytelling.

Posted by Justin W. Flory on December 31, 2023 05:05 AM

The post Storytelling: 2023 was a quiet blog year. In 2024, I recommit to storytelling. appeared first on /home/jwf/.


A light-colored wall with bright red neon light letters is shown. The neon lights write out, "We are all made of stories." A text overlay reading "2024: Storytelling" is captioned toward the top of the image.

2023 is almost over. It was a busy year. When I was a student, I used to write about what I was learning, but after finishing my studies, I stopped writing regularly. Now I want to focus on the future and adopt a storytelling theme for 2024. This post summarizes my intention to commit to storytelling.


about adopting a theme

Recently, Joseph Gayoso proposed the idea of the Fedora Marketing Team adopting a theme for 2024. Together with the video explainer below, I found his explanation convincing for the Team.

<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio">
<iframe allowfullscreen="true" class="youtube-player" height="473" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation allow-popups-to-escape-sandbox" src="https://www.youtube.com/embed/NVGuFdX5guE?version=3&amp;rel=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;fs=1&amp;hl=en-US&amp;autohide=2&amp;wmode=transparent" style="border:0;" width="840"></iframe>
<figcaption class="wp-element-caption">YouTube: Your Theme. CGP Grey. Premiered 26 January 2020.</figcaption></figure>

But it was not only good advice for the team. I tend to avoid resolutions as a new year tradition, but I recognize that change can happen independently of January 1st. That is where an annual theme comes into focus. It offers a flexible framework with wide guideposts, and I can choose how to measure my success. Working from a theme gives me a clear way to measure incremental progress while also letting me feel tangible accomplishments along the journey.

So, if I could commit to one theme, what would it be? It would have to be something that I believe in.

storytelling is my theme

I have admired storytelling for a long time. I admire its flexibility: it can be simple yet powerful. It is flexible because there are multiple forms of storytelling, and it can be defined in both a literal sense and a metaphorical sense.

In a literal sense, storytelling is the telling of stories. Telling could mean written, spoken, or shown. Stories could mean almost any expression of human experience that fits into a timeline with a plot. Therefore, storytelling is creatively sharing a human experience with others.

In the metaphorical sense, storytelling connects communities. Stories represent several aspects of life that happen around humans. The most powerful stories compel hearts and minds to change. Someone who tells stories that change the hearts and minds of others is an influential person. In this metaphorical sense, storytelling becomes a skill that is honed and practiced.

building my storytelling habit back

What does this have to do with my theme? I am adopting storytelling as my theme because I admire the habits of good storytellers. I want to hone my own ability in both personal and professional contexts. My ability is weakened from lack of practice; it is like a muscle that is sore from not being used in a while. Adopting storytelling as my 2024 theme empowers me to write more often in my authentic voice. Lately, posts on my blog have undergone rigorous self-editing before I publish them. But in adopting a theme of storytelling, I commit to being fine with not maintaining maximum production value on everything I publish. I commit to being authentic over rigorous; honest yet open. I commit to the pursuit of documenting my own human history, or “the world as I see it.”

So, 2023 was a year of big changes for me personally and professionally. I have plenty of things to start writing about. To improve, I need to publish more and be ready to make some mistakes. That’s how I learn, after all.

So, with all that in mind, more blog posts seem like a good starting point. To make this plan actionable, it needs more specific steps. My goal for right now is to make the commitment within myself and follow it up with action in 2024.

Happy New Year, reader.


Original photo by S O C I A L . C U T on Unsplash. Modified by Justin W. Flory. CC BY-SA 4.0.

Share volumes between Podman Systemd services

Posted by Fabio Alessandro Locati on December 31, 2023 12:00 AM
Since the merge of Quadlet in Podman, I’ve been moving multiple services to Podman Systemd services. I find them to be easy to create, manage, and automate. I recently migrated a complex system to Podman Systemd, where multiple processes write in a folder, and one process reads the folder’s content. Before the migration, everything worked properly since all the processes were running natively on the machine with the same user. After the migration, there were some permissions issues.

Fedora turns 20: a look back with Renault

Posted by Charles-Antoine Couret on December 30, 2023 06:40 PM

The Fedora Project celebrated its 20th anniversary on 6 November 2023. It was the public announcement of the release of what was then called Fedora Core 1 that kicked off this story.

A continuation of the Red Hat Linux distribution with a community dimension, it has over time established itself as a major general-purpose distribution in the Linux ecosystem. Over the years, many technologies first incorporated and tested in the distribution have spread to the rest of the Linux distributions.

Let's take this anniversary as an opportunity to look back at how the distribution has evolved, and to remember where we were starting from 20 years ago. It is also a chance to think about what the future holds and, who knows, to do another check-in 20 years from now.

The genesis

The beginnings of the Fedora distribution require going back a little before the release of Fedora Core 1, to 2002. At that time, Red Hat maintained two distributions: Red Hat Enterprise Linux and Red Hat Linux. The former still exists and is the company's flagship product, namely a distribution focused on professional needs and advanced use cases such as servers or professional workstations, sold through licenses and support. The latter was aimed more at individuals who wanted a distribution to use at home, the way one could buy a Mandrake or a Windows XP on CD in a shop back then.

In both cases, while the distributions were free software with accessible source code, development was not community-driven and Red Hat only offered in its repositories the packages it maintained itself.

Warren Togami started, as a university project, a community repository complementing these distributions. The goal was to provide the software missing from the repositories with good integration quality, while allowing other people to help him with this task.

The project took the name fedora.us, in reference to the iconic hat model in Red Hat's logo, and it quickly took off.

Resources not being infinite, Red Hat wanted to step away from Red Hat Linux, which was not profitable enough, and offered to take the Fedora project under its wing: adopting community development, using it as the base for its flagship RHEL distribution, and providing hardware and human resources to help the Fedora project with its task. In a sense, the Fedora project and Red Hat Linux merged. At first, however, Red Hat maintained the Core repository (hence the name Fedora Core) while outside contributors could manage the Extra repository autonomously.

And so, at the end of 2003, Fedora Core 1 was published, based on Red Hat Linux 9. Red Hat Linux stopped receiving updates as of mid-2004, marking the end of that distribution.

Fedora Core 1

Installing a Fedora Core 1 today is a rather disorienting experience. Beyond the nostalgia of interfaces that are ultimately classic but a bit dated, with very simple, even austere graphics, we are not really lost, because the fundamentals are all there. The system is, however, far more responsive than it was back then, given how much performance has improved since.

Indeed, Fedora Core 1 required at minimum:

  • a 400 MHz Pentium II processor for graphical use
  • between 500 MiB of disk space at minimum and 5 GiB to install the full set of packages
  • and, memory-wise, 256 MiB were recommended for graphical use.

All that is missing to recreate that early-2000s atmosphere is the soft scratching of the hard drive, the characteristic buzz of the currents steering the electron beams in a CRT displaying a screensaver, or the tones of a 56k modem establishing a connection.

But if at first glance nothing has changed, in reality everything is different. The x86_64 architecture was already offered at that point, but it was very much in its infancy: the first compatible consumer processor, the Athlon 64, had been released only two months earlier. The flagship architecture was x86 in its generic but old i386 variant. And needless to say, with machines lacking hardware virtualization instructions, testing Fedora Core in a virtual machine was not really an option.

To continue: at that time, official Fedora Core Live CDs did not yet exist. To try it, you had to install it. Installing Fedora Core meant downloading either the 3.7 GiB DVD image or several CD images. USB installation was still only a concept back then. It may seem superfluous, but when Internet access was not as widespread, or was very slow, having the complete software library from the start was an important asset. Buying a magazine just to get an installation CD was common, to save yourself those interminable download times, and sometimes the associated cost.

In fact, this aspect is obvious from the installation screen, since you could literally choose the packages to install at installation time and insert the additional CDs if needed to complete the process.

The interface of anaconda, the installer, was very linear, with many questions that make us smile today: choosing the mouse model, notably to determine the number of buttons and whether it had a scroll wheel, or choosing the monitor model to determine the resolution and refresh rate. Time could be synchronized via NTP, but you had to fill in the NTP server field yourself. The administrator account and the main user account were clearly separate.

Once the installation was done, you could enjoy the graphical boot with the rhgb (Red Hat Graphical Boot) tool, a name still visible today in the kernel arguments of Fedora Linux. It was visually pleasant, with the possibility of watching the console as services started while keeping a graphical interface around it.

The GDM login screen starts, and you have to type the name of the account you want to use; there is no list of available accounts. Finally, the GNOME interface launches. Even though it is GNOME 2, it started out as GNOME 2.4 with a look close to KDE: a large main bar at the bottom, without the bar at the top of the screen that became its characteristic canonical layout. The main menu was accessible through the Red Hat hat icon, which remained there for the first 4 versions of the distribution.

On the application side, while some names are familiar, such as GIMP and most of the GNOME utilities, others have changed a great deal. Firefox did not exist; its ancestor, the Mozilla suite, was at the helm, which unfortunately does not handle recent TLS protocols, making it impossible to use on almost all of the modern web. Email consequently went through the same tool instead of Thunderbird, although Evolution was already available. Instant messaging was provided by Pidgin's ancestor, Gaim, which supported many protocols of that era, including the market leader in France at the time, MSN. For the office suite, OpenOffice.org, LibreOffice's predecessor, was included.

The system tools were also very different. DNF was not even a dream yet; package management went through YUM, which was particularly slow and unreliable. Dependency resolution breaking the system was a frequent occurrence, especially when using third-party repositories that were not always well managed. System configuration did not rely only on GNOME but on individual interfaces designed by Red Hat, named system-config-*. It was relatively inflexible: you often had to reboot the machine or restart services by hand to apply changes, the superuser password was often required, the network was not managed by NetworkManager, network handling was not as dynamic as it is today, particularly for wireless connections, and the same goes for system integration via the dbus interface. Printing could also quickly become laborious, between setting up the printer and getting the right driver, when one existed; the same went for scanning documents with XSane.

Startup was handled by classic SysV init scripts; systemd was not there, nor were all the tools that now gravitate around it. The Linux kernel itself was a 2.4.22, making this the only Fedora release before the 2.6 series, which was a major evolution of the kernel.

The standard partitioning was based on the ext3 filesystem. Btrfs and ext4 did not exist yet. Logical volumes by default would also only appear a little later.

Legal restrictions also made it impossible to play MP3 music files without third-party repositories. And the limitations of the time prevented several applications from playing sound at the same time.

Still, the foundations were there, with an interface matching the canons of the era, which many desktop environments such as Xfce or MATE still maintain today. The Bluecurve theme shows its age, but the overall graphical integration was good across all the components and along the whole chain, including at machine startup.

20 years of visible, gradual evolution...

After the release of Fedora Core 1, the changes arrived little by little until they led to our modern systems, sometimes with a bit of pain while those transitions were carried out.

As early as the 2nd release, Fedora Core switched display servers from XFree86 to X.org, because the project's license had changed to become non-free. SELinux was also offered, before being enabled by default in the 3rd release. This remains a distinguishing feature of Fedora in the ecosystem outside Android to this day, even though Ubuntu, for example, has since favored the AppArmor alternative. The 3rd release also made the Extra repository mentioned above available, and Firefox became the default web browser.

After Fedora Core 4, which stands out for having the same wallpaper as its predecessor, a unique case to this day, Fedora Core 5 marked a break with a very elaborate bubble-based graphical theme. It showcased the new Fedora logo, which would remain in place for many years. Fedora Core 6 continued in that direction with a new dedicated theme and, above all, the arrival of the AIGLX technology to bring graphics acceleration to desktop environments. This marked the beginning of the fashion for desktops with virtual workspaces as the faces of a cube, wobbly windows, and other visual effects that could be customized via Compiz and Beryl.

2007 would be quite a rich year. May was marked by the release of Fedora 7, which dropped the Core qualifier thanks to the merge of the core and extra repositories into the fedora repository, still used today as the base of the system.

Fedora 7 also featured an iconic theme by Diana Fong, probably the last theme to be so heavily customized in the distribution. Fedora 7 likewise offered an installable Live CD, which completely changed the way the system was installed and tested. At the same time, the preupgrade tool was introduced to perform upgrades over the Internet rather than via the CDs (or a reinstall). In both cases, the reliability of the operation remained rather hit or miss.

During 2007, the Fedora Legacy project threw in the towel. It was a community initiative to extend the support period of Fedora releases beyond the average of 13 months, but it was resource-hungry and there were few volunteers. It has to be said that Fedora users are more attracted by new features than by the prolonged use of an old release.

At the end of the year, in November, Fedora 8 came out, introducing NetworkManager for network management and PulseAudio for sound. The latter change came very early and required many releases before sound handling became stable, notably because of ill-suited drivers and the deep changes in the system that were needed. More anecdotally, the wallpaper also changed every hour, a principle retained today to vary the hue according to the assumed outdoor light.

But it did not stop there: Fedora 9 migrated from SysV to upstart for managing services at boot, a technology from Canonical that would later serve as a springboard for systemd, which fixed its limitations. PackageKit also made its entrance, providing a universal package management layer able to sit on top of Yum, Apt and the like, a technology still at the heart of GNOME Software to this day. It is also what offers to install the package providing a command you just typed when it is not present on your system. This release also shipped KDE 4.0, another disruptive new version, the first of several for that desktop environment, and again with delicate reliability in its early days.

Fedora 10 marked the replacement of rhgb by plymouth for displaying the boot screen, a component that has not moved since. The ext4 filesystem also became the default, replacing ext3.

Fedora 11 introduced, as a first and after an intense testing campaign, the free Nvidia driver nouveau, enabling kernel modesetting and greatly improving the user experience of owners of that brand's graphics cards.

Fedora 12 introduced the abrt tool to detect crashes and automatically generate bug reports from them, an important tool for improving the quality of the Fedora project. The x86 architecture now also requires the i686 variant, improving performance at the expense of support for older processors.

It was around this time that the third-party repositories Livna, Dribble and Freshrpms merged to form the RPMFusion repository. This repository is still the community reference for obtaining non-free packages or those encumbered by software patents, such as multimedia codecs and software like VLC.

A few years later, in May 2011, Fedora 15 replaced OpenOffice.org with LibreOffice, while GNOME 3 became the new reference interface, with a more modern and streamlined style. systemd also replaced upstart for managing system services, although it would take until Fedora 17 for all services to be based on systemd units. This release also introduced a new dynamic firewall, firewalld, which is still in use. The system directories were merged so that `/bin` and `/lib` point to `/usr`.

Fedora 18 replaced the preupgrade tool with FedUp, which marked a leap forward in the reliability of the system upgrade process, even if there was still room for improvement. Fedora 20 introduced support for the ARM architecture.

2013 was another pivotal year for the Fedora project. Faced with a lack of vision, the project paused for a year to think about its future: the Fedora.next project. This led to the adoption of the Workstation, Server and Cloud / Atomic products that persist to this day, with a focus on the user experience that has been much discussed in recent years.

The reflection on the software distribution model would give rise to the modular repositories and to the immutable variants of the distribution, such as Fedora Silverblue.

That year also marked the official birth of Fedora Magazine, which publishes concise and informative news about the Fedora project, in English only.

Fedora 21, the first release after this period of reflection, put the new products in place. Symbolically, Fedora releases no longer carried a name; previously, each release had a unique name that had to have a link (however tenuous) with the name of the previous release.

Fedora 22 marked the end of the yum and FedUp era, relying on dnf as the default package manager. It was faster, more reliable, and able to handle the upgrade itself via a plugin.

In November 2016, Fedora 25 shipped Wayland as the display technology in the default GNOME environment, after an experiment in GDM in the previous release. While the adoption of Wayland has not been entirely smooth sailing, the distance covered is significant and the progress visible. That same year, the cross-platform Fedora Media Writer tool was introduced to easily create a bootable USB key with Fedora on it.

Given the continuous improvement in reliability, there have been no more alpha releases since Fedora 27. Release 28 dropped the separate superuser account in favor of the native use of sudo.

Fedora 29 partly realized the Fedora.next goals stated a few years earlier with the arrival of the modular repositories, which would only last 5 years. But above all, it was the first release of Fedora Silverblue, marking the start of a series of immutable distributions and variants beyond containers.

From Fedora 31 on, the historic x86 architecture is no longer supported, even though 15 years earlier it was the distribution's reference image.

It would take until Fedora 33 for btrfs to become the default filesystem, which in effect meant abandoning logical volumes as the preferred partitioning method, since btrfs provides that directly. In addition, zram compresses RAM to increase the amount of virtual memory, instead of a dedicated swap partition as was the custom.

Fedora 34 let PipeWire replace PulseAudio for sound management, with a smoother transition this time. The distribution's logo changed once again. And Fedora Linux 35 adopted the distribution's current name.

... and 20 years of evolution behind the scenes

The distribution itself is not the only thing to have changed in 20 years. The community and the infrastructure used to run this project have also evolved.

In terms of infrastructure, creating a distribution requires managing translation, the source code of the RPM packages to build and of the tools around them, and the building of those RPM packages; it also requires means of communication between contributors as well as between users, and various internal services or websites to display the relevant information.

For example, translation started with the Transifex tool, before moving to Zanata and then to Weblate. For source code, it began with Trac, in the era when the SVN version control system reigned supreme, before moving to an in-house solution named pagure to take advantage of git, and finally ending up on gitlab. The official forums arrived late and went from Ask to Discourse.

The development tools have evolved significantly, even if some have not actually been replaced along the way, such as Bugzilla for reporting bugs, Koji for building packages, or the mailing lists with Mailman / Hyperkitty for email exchanges between developers. Tools have been added along the way, such as fedmsg, which enables communication and notification between the different applications of the Fedora project infrastructure, or Anitya, which notifies you when a free software project publishes a new version that might justify updating a package in Fedora. And many others.

In terms of decision-making structures, the changes have not been very significant, apart from the creation of a body dedicated to gathering ideas from the community, and the centralization of communication activities, notably those of the ambassadors, with the Fedora Ambassadors Steering Committee, which has since become Mindshare with a broader scope.

Over the years, despite its unchanged strategy of introducing major changes ahead of their time, the quality of the Fedora project has improved considerably. It used to be common wisdom that you should wait a few weeks or months before upgrading your system, to let the early adopters iron out the kinks. That time is over: while problems sometimes occur, the system remains globally stable, even for the releases in development. Before 2010, using Fedora Rawhide was a challenge in itself, for example; today it no longer is.

This work is the result of a maturing free software ecosystem: the basic building blocks change less often. Software is, in general, better tested and more polished, and Fedora is no exception. The distribution's quality assurance team has also gained resources and responsibilities to achieve this result. The tools developed for this purpose, such as rating updates with karma, the creation of test suites for many packages, or the abrt tool, as well as practices such as test days, also contribute. This allows Fedora today to pursue its mission without discouraging people from using it daily, which is important for that goal.

A French-speaking community almost as old

The French-speaking community is also approaching its 20th birthday. On 24 May 2004, the Fedora-fr site was born, offering a forum and documentation in French, built on the Xoops tool.

A dedicated page shows how the site's visual identity has evolved over time. After a few years, the platform migrated to eZ Publish for the home page and FluxBB for the forum, while since this year a WordPress and Flarum base has taken over. The maintenance work continues. For a long time the server was a dedicated machine at the hosting provider Ikoula, including the one named Zod, before ending up on a virtual server at the hosting provider Scaleway.

For a very long time, the Fedora project was very English-centric: the official sites were not always translated, the wiki was not multilingual, the official forums arrived late and remain mostly focused on English, and Fedora Magazine remains untranslated. Hence the need, early on, for a French-speaking community with its own space, independent of the Fedora project as an organization.

To better manage the resources and responsibilities around the Fedora-fr site, the Fedora-fr association was founded on 17 April 2007 in Charleville-Mézières. The association later renamed itself Borsalinux-fr following an agreement signed with Red Hat on the matter, as American law required it. The association was also moved to Paris to make it easier to manage.

The French-speaking community has often been recognized as dynamic, with good initiatives and skilled members. The high point was the organization of FUDCon (Fedora Users and Developers Conference) in Paris in 2012.

And what next?

The Fedora project's adventure does not stop there.

While disruptive technologies are less frequent than in its early days, there are many challenges ahead. For example, the immutable variants are gaining in popularity and usability. The goal remains to move toward that model in the long run; will Fedora take the plunge and abandon Fedora Workstation in favor of Silverblue?

The package distribution model may also evolve. The Fedora project is investing heavily in generating Flatpak packages from RPMs. What if, tomorrow, most software were distributed as these Flatpaks rather than as the classic RPM files?

The boot chain also suffers from many limitations tied to the history of the x86 architecture. It seems clear that the future of BIOS support is bleak and that it is only a matter of years before it is dropped. Supporting UEFI only will make it possible to simplify this part of the system and to consider other ways of booting it, such as systemd-boot instead of GRUB, with the features it can provide in that context, or unified kernel images, which have been discussed during the latest Fedora releases. But that would probably mean the end of support for many old machines.

And no doubt many other things, which will also depend on how computing in general, and free software in particular, evolve.

And you, what memories do you have of these 20 years with Fedora?

New badge: Fedora 39 CoreOS Test Day !

Posted by Fedora Badges on December 28, 2023 11:05 AM
Fedora 39 CoreOS Test Day: You helped solidify the core for the Fedora 39 rebase!

Untitled Post

Posted by Zach Oglesby on December 27, 2023 06:47 PM

Had a goal to read 24 books this year, going to fall four short as I have not read anything cover-to-cover since the beginning of the month. Hard to read for fun when I am trying to write a non-fiction book at the same time.

New badge: Fedora 38 CoreOS Test Day !

Posted by Fedora Badges on December 27, 2023 11:55 AM
Fedora 38 CoreOS Test Day: You helped solidify the core for the Fedora 38 rebase!

Kubernetes with CRI-O on Fedora Linux 39

Posted by Fedora Magazine on December 27, 2023 08:00 AM

Kubernetes is a self-healing and scalable container orchestration platform. It abstracts away the underlying infrastructure and makes life easier for administrators and developers by improving productivity, deployment lifecycle, and by streamlining devops processes. The goal of this article is to show how to deploy a Kubernetes cluster on Fedora Linux 39 machines using CRI-O as a container engine.

1. Preparing the cluster nodes

Both master and worker nodes must be prepared before installing Kubernetes. Preparation ensures that the proper capabilities are available, the proper kernel modules are loaded, and that swap, the cgroups version, and the other prerequisites for installing the cluster are taken care of.

Kernel modules

Kubernetes, in its standard configuration, requires the following kernel modules and configuration values for bridging network traffic, overlaying filesystems, and forwarding network packets. An adequate size for user and pid namespaces for userspace containers is also provided in the below configuration example.

[user@fedora ~]$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

[user@fedora ~]$ sudo systemctl restart systemd-modules-load.service

[user@fedora ~]$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
user.max_pid_namespaces             = 1048576
user.max_user_namespaces            = 1048576
EOF

[user@fedora ~]$ sudo sysctl --system

Installing CRI-O

Container Runtime Interface OCI (CRI-O) is an open source container engine dedicated to Kubernetes. The engine implements the Kubernetes Container Runtime Interface (CRI) gRPC protocol and is compatible with any low-level OCI container runtime. All supported runtimes must be installed separately on the host. It is important to note that CRI-O is version-locked with Kubernetes. We will deploy cri-o:1.27 with kubernetes:1.27 on fedora-39.

[user@fedora ~]$ sudo dnf install -y cri-o cri-tools

To check the dependencies and configuration files that the package brings in:

[user@fedora ~]$ rpm -qRc cri-o
config(cri-o) = 0:1.27.1-2.fc39
conmon >= 2.0.2-1
container-selinux
containers-common >= 1:0.1.31-14
libseccomp.so.2()(64bit)
/etc/cni/net.d/100-crio-bridge.conflist  
/etc/cni/net.d/200-loopback.conflist
/etc/crictl.yaml
/etc/crio/crio.conf
...

Notice that it uses conmon for container monitoring and the container-selinux policies. Also, the main configuration file is crio.conf, and the package added some default networking plugins to /etc/cni. For networking, this guide will not rely on the default CRI-O plugins, though it is possible to use them.

[user@fedora ~]$ sudo rm -rf /etc/cni/net.d/*  

Besides the above configuration files, CRI-O uses the same image and storage libraries as Podman. So you can use the same configuration files for registries and signature verification policies as you would when using Podman. See the CRI-O README for examples.

Cgroups v2

Recent versions of Fedora Linux have cgroups v2 enabled by default. Cgroups v2 brings better control over memory and CPU resource management. With cgroups v1, a pod would receive a kill signal when a container exceeds the memory limit. With cgroups v2, memory allocation is “throttled” by systemd. See the cgroupfsv2 docs for more details about the changes.

[user@fedora ~]$ stat -f /sys/fs/cgroup/
  File: "/sys/fs/cgroup/"
    ID: 0        Namelen: 255     Type: cgroup2fs

Additional runtimes

In Fedora Linux, systemd is both the init system and the default cgroups driver/manager. While checking crio.conf we notice this version already uses systemd. If no other cgroups driver is explicitly passed to kubeadm, then kubelet will also use systemd by default in version 1.27. We will set systemd explicitly, nonetheless, and change the default runtime to crun which is faster and has a smaller memory footprint. We will also define each new runtime block as shown below. We will use configuration drop-in files and make sure the files are labeled with the proper selinux context.

[user@fedora ~]$ sudo dnf install -y crun

[user@fedora ~]$ sudo sed -i 's/# cgroup_manager/cgroup_manager/g' /etc/crio/crio.conf
[user@fedora ~]$ sudo sed -i 's/# default_runtime = "runc"/default_runtime = "crun"/g' /etc/crio/crio.conf

[user@fedora ~]$ sudo mkdir /etc/crio/crio.conf.d
[user@fedora ~]$ sudo tee -a /etc/crio/crio.conf.d/90-crun <<CRUN 
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
CRUN


[user@fedora ~]$ echo "containers:1000000:1048576" | sudo tee -a /etc/subuid
[user@fedora ~]$ echo "containers:1000000:1048576" | sudo tee -a /etc/subgid
[user@fedora ~]$ sudo tee -a /etc/crio/crio.conf.d/91-userns <<USERNS 
[crio.runtime.workloads.userns]
activation_annotation = "io.kubernetes.cri-o.userns-mode"
allowed_annotations = ["io.kubernetes.cri-o.userns-mode"]
USERNS

[user@fedora ~]$ sudo chcon -R --reference=/etc/crio/crio.conf  /etc/crio/crio.conf.d/ 

[user@fedora ~]$ sudo ls -laZ /etc/crio/crio.conf.d/ 
root root system_u:object_r:container_config_t:s0  70 Nov  1 19:26 .
root root system_u:object_r:container_config_t:s0  40 Nov  1 11:12 ..
root root system_u:object_r:container_config_t:s0  81 Nov  1 11:14 90-crun
root root system_u:object_r:container_config_t:s0 148 Dec 11 13:20 91-user

crio.conf uses the TOML format and is easy to manage and maintain. The help/man pages are also detailed. After you change the configuration, enable the service.

[user@fedora ~]$ sudo systemctl daemon-reload
[user@fedora ~]$ sudo systemctl enable crio --now 

Disable swap

The latest Fedora Linux versions enable swap-on-zram by default. zram creates an emulated device that uses RAM as storage and compresses memory pages. It is faster than traditional disk partitions. You can use zramctl to inspect and configure your zram device(s). However, the device’s initialization and mounting are performed by systemd on system startup as configured in the zram-generator.conf file.

[user@fedora ~]$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
zram0  251:0    0  3.8G  0 disk [SWAP]
vda    252:0    0   15G  0 disk 

[user@fedora ~]$ sudo swapoff -a
[user@fedora ~]$ sudo zramctl --reset /dev/zram0
[user@fedora ~]$ sudo dnf -y remove zram-generator-defaults

Firewall rules

Keep the firewall enabled and open only the necessary ports, in accordance with the official docs. The following set of rules is for the control plane nodes.

[user@fedora ~]$ sudo firewall-cmd --set-default-zone=internal
[user@fedora ~]$ sudo firewall-cmd --permanent \
--add-port=6443/tcp --add-port=2379-2380/tcp \
--add-port=10250/tcp --add-port=10259/tcp \
--add-port=10257/tcp 
[user@fedora ~]$ sudo firewall-cmd --reload

For worker nodes, the following configuration must be used, given the default NodePort range.

[user@fedora ~]$ sudo firewall-cmd --set-default-zone=internal
[user@fedora ~]$ sudo firewall-cmd --permanent  \
--add-port=10250/tcp --add-port=30000-32767/tcp 
[user@fedora ~]$ sudo firewall-cmd --reload

Please note that we did not discuss network topology. In many topologies, control plane nodes and worker nodes are on different subnets, and each subnet has an interface that connects all hosts. VMs could have multiple interfaces, and/or the administrator might want to associate a specific interface with a specific zone and open ports only on that interface. In such cases you will explicitly provide the zone argument to the above commands.

The DNS service

Fedora Linux 39 comes with systemd-resolved configured as its DNS resolver. In this configuration the user has access to a local stub file that contains a 127.0.0.53 entry that directs local DNS clients to systemd-resolved.

lrwxrwxrwx. 1 root root 39 Sep 11  2022 /etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf

The reference to 127.0.0.53 triggers a coredns loop plugin error in Kubernetes. A list of next-hop DNS servers is maintained by systemd in /run/systemd/resolve/resolv.conf. According to the systemd-resolved man page, the /etc/resolv.conf file can be symlinked to /run/systemd/resolve/resolv.conf so that local DNS clients will bypass systemd-resolved and talk directly to the DNS servers. For some DNS clients, however, bypassing systemd-resolved might not be desirable.

A better approach is to configure kubelet to use the resolv.conf file. Configuring kubelet to reference the alternate resolv.conf will be demonstrated in the following sections.

Kubernetes packages

We will use kubeadm, a mature tool for installing production-grade Kubernetes quickly and easily.

[user@fedora ~]$ sudo dnf install -y kubernetes-kubeadm kubernetes-client

kubernetes-kubeadm generates a kubelet drop-in file at /etc/systemd/system/kubelet.service.d/kubeadm.conf. This file can be used to configure instance-specific kubelet configurations. However, the recommended approach is to use kubeadm configuration files. For example, kubeadm creates /var/lib/kubelet/kubeadm-flags.env that is referenced by the above mentioned kubelet drop-in file.

The kubelet will be started automatically by kubeadm. For now we will enable it so it persists across restarts.

[user@fedora ~]$ sudo systemctl enable kubelet

2. Initialize the Control Plane

For the installation, we pass some cluster-wide configuration to kubeadm, such as the pod and service CIDRs. For more details, refer to the kubeadm configuration docs and the kubelet config docs.

[user@fedora ~]$ cat <<CONFIG > kubeadmin-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  name: master1
  criSocket: "unix:///var/run/crio/crio.sock"
  imagePullPolicy: "IfNotPresent"
  kubeletExtraArgs: 
    cgroup-driver: "systemd"
    resolv-conf: "/run/systemd/resolve/resolv.conf"
    max-pods: "4096"
    max-open-files: "20000000"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "1.27.0"
networking:
  podSubnet: "10.32.0.0/16"
  serviceSubnet: "172.16.16.0/22"
controllerManager:
  extraArgs:
    node-cidr-mask-size: "20"
    allocate-node-cidrs: "true"
---
CONFIG

In the above configuration, we have chosen different IP subnets for pods and services. This is useful when debugging. Make sure they do not overlap with your node’s CIDR. To summarize the IP ranges:

  • services “172.16.16.0/22” – 1024 services cluster wide
  • pods “10.32.0.0/16” – 65536 pods cluster wide, max 4096 pods per kubelet and 20 million open files per kubelet. For other important kubelet parameters refer to kubelet config docs. Kubelet is an important component running on the worker nodes so make sure you read the config docs carefully.

kube-controller-manager has a component called nodeipam that splits the pod CIDR into smaller ranges and allocates these ranges to each node via the node.spec.podCIDR / node.spec.podCIDRs properties. The Controller Manager property --node-cidr-mask-size defines the size of each per-node range. By default it is /24, but if you have enough resources you can make it larger; in our case /20. This will result in 4096 pods per node with a maximum of 65536/4096=16 nodes. Adjust these properties to fit the capacity of your bare-metal server.
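
To sanity-check those numbers, here is a small Python snippet using only the standard ipaddress module (the CIDRs are the ones chosen in the kubeadm configuration above):

import ipaddress

pod_cidr = ipaddress.ip_network("10.32.0.0/16")    # podSubnet from the kubeadm config
node_mask = 20                                     # --node-cidr-mask-size

pods_per_node = 2 ** (32 - node_mask)              # 4096 addresses in each node range
max_nodes = 2 ** (node_mask - pod_cidr.prefixlen)  # 16 non-overlapping /20s in a /16

node_ranges = list(pod_cidr.subnets(new_prefix=node_mask))
print(pods_per_node, max_nodes)        # 4096 16
print(node_ranges[0], node_ranges[1])  # 10.32.0.0/20 10.32.16.0/20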

[user@fedora ~]$ hostnamectl set-hostname master1
[user@master1 ~]$ sudo kubeadm init --skip-token-print=true --config=kubeadmin-config.yaml

[user@master1 ~]$ mkdir -p $HOME/.kube
[user@master1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[user@master1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

There are newer networking plugins that leverage eBPF kernel capabilities or OVN. However, installing such plugins requires uninstalling kube-proxy, and we want to keep the deployment as standard as possible. Some of the networking plugins read the kubeadm-config configmap and set up the correct CIDR values without the need to read a lot of documentation.

[user@master1 ~]$ kubectl create -f https://github.com/antrea-io/antrea/releases/download/v1.14.0/antrea.yml

Antrea and OVN-Kubernetes are interesting CNCF projects, especially for bare-metal clusters where network speed becomes a bottleneck. Antrea also has support for some high-speed Mellanox network cards. Check pod and svc health and whether a correct IP address was assigned.

[user@master1 ~]$ kubectl get pods -A -o wide
NAME                                 READY  IP              NODE     
antrea-agent-x2j7r                   2/2    192.168.122.3   master1
antrea-controller-5f7764f86f-8xgkc   1/1    192.168.122.3   master1
coredns-787d4945fb-55pdq             1/1    10.32.0.2       master1
coredns-787d4945fb-ndn78             1/1    10.32.0.3       master1
etcd-master1                         1/1    192.168.122.3   master1
kube-apiserver-master1               1/1    192.168.122.3   master1
kube-controller-manager-master1      1/1    192.168.122.3   master1
kube-proxy-mx7ns                     1/1    192.168.122.3   master1
kube-scheduler-master1               1/1    192.168.122.3   master1

[user@master1 ~]$ kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP 
default       kubernetes   ClusterIP   172.16.16.1
kube-system   antrea       ClusterIP   172.16.18.214
kube-system   kube-dns     ClusterIP   172.16.16.10 

[user@master1 ~]$ kubectl describe node master1 | grep PodCIDR
PodCIDR:                      10.32.0.0/20
PodCIDRs:                     10.32.0.0/20

All pods should be running and healthy. Notice how the static pods and the daemonsets have the same IP address as the node. CoreDNS is also reading directly from the /run/systemd/resolve/resolv.conf file and not crashing.

Generate a token for joining the worker node.

[user@master1 ~]$ kubeadm token create --ttl=30m --print-join-command

The output of this command contains details for joining the worker node.

3. Join a Worker Node

We need to set the hostname and run kubeadm join. The kubelet on this node also requires configuration. Do this at the systemd level or by using a kubeadm config file with placeholders; replace the placeholders with the values from the previous command. The kubelet args follow the same convention as the kubelet parameters, but without leading dashes.

[user@fedora ~]$ hostnamectl set-hostname worker1

[user@worker1 ~]$ cat <<CONFIG > join-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: <TOKEN>
    apiServerEndpoint: <MASTER-IP:PORT>
    caCertHashes: ["<HASH>"]
  timeout: 5m
nodeRegistration:
  name: worker1
  criSocket: "unix:///var/run/crio/crio.sock"
  imagePullPolicy: "IfNotPresent"
  kubeletExtraArgs: 
    cgroup-driver: "systemd"
    resolv-conf: "/run/systemd/resolve/resolv.conf"
    max-pods: "4096"
    max-open-files: "20000000"
---
CONFIG

[user@worker1 ~]$ sudo kubeadm join --config=join-config.yaml

From master node check the range allocated by nodeipam to both nodes:

[user@master1 ~]$ kubectl describe node worker1 | grep PodCIDR
PodCIDR:                      10.32.16.0/20
PodCIDRs:                     10.32.16.0/20

Notice the cluster-wide pod CIDR (10.32.0.0/16) was split by the Controller Manager into 10.32.0.0/20 for the first node and 10.32.16.0/20 for the second node, with non-overlapping segments of 4096 IP addresses each.

4. Security Considerations

Run three sample pods to test the setup.

[user@master1 ~]$ kubectl apply -f - <<EOF
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: fedora
    image: fedora/fedora:latest
    args: ["sleep", "infinity"]
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-userns-1
  annotations:
    io.kubernetes.cri-o.userns-mode: "auto:size=256"
spec:
  containers:
  - name: fedora
    image: fedora/fedora:latest
    args: ["sleep", "infinity"]
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-userns-2
  annotations:
    io.kubernetes.cri-o.userns-mode: "auto:size=256"
spec:
  containers:
  - name: fedora
    image: fedora/fedora:latest
    args: ["sleep", "infinity"]
---
EOF

4.1 Discretionary Access Control

By default, Linux’s security model is based on Discretionary Access Control (DAC). This security model is based on user identity and the filesystem ownership and permissions associated with that user.

Since containers are Linux processes, you can watch them by running the ps command on your host server. Start a container process and check it using ps. The kubelet is the main process on the worker node and, by default, it runs as root (uid=0). There is a feature gate, KubeletInUserNamespace, but it is currently in an alpha stage of development. Unless configured otherwise, all the other containers run as user id 0 as well. To function properly, the containers must mount the /proc and /sys pseudo-filesystems and have access to some processes on the host. Under these circumstances, a rogue container process running as root could assume elevated privileges on the host. This explains the need for isolating processes by running them as unprivileged users.

This “soft” isolation can be done via Kubernetes’ spec.securityContext.(runAsUser|runAsGroup|fsGroup), but this method requires additional administrative work, like creating and maintaining users and groups. It can be automated via admission controllers, but below we discuss a different approach using user namespaces.
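As an illustration of that first approach, here is a hypothetical pod spec that pins the container to an unprivileged user via securityContext. The numeric IDs are arbitrary values chosen for this sketch and would have to be managed by the administrator.

apiVersion: v1
kind: Pod
metadata:
  name: test-runasuser
spec:
  securityContext:
    # arbitrary unprivileged IDs, chosen only for this example
    runAsUser: 2000
    runAsGroup: 2000
    fsGroup: 2000
  containers:
  - name: fedora
    image: fedora/fedora:latest
    args: ["sleep", "infinity"]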

User namespaces are a Linux feature that is part of the same basic DAC security model. They are enabled by default in the latest Linux versions and you might have encountered them while working with Podman or Singularity.

CRI-O schedules userns workloads via the io.kubernetes.cri-o.userns-mode: “auto:size=n” annotation. This annotation can be added manually to YAML files, as demonstrated in the example above, or automatically via an admission controller. The annotation-based behavior might change, so follow the version updates for Kubernetes and CRI-O.

user@worker1:~$ cat /etc/subuid
user:524288:65536
containers:1000000:1048576

user@worker1:~$ ps -eo pid,uid,gid,args,label | grep -E 'kubelet|sleep'
  2980      0 0 kubelet system_u:system_r:kubelet_t:s0-s0:c0.c1023
  13067     0 0 sleep   system_u:system_r:container_t:s0:c483,c911
  13078 1000000 1000000 sleep system_u:system_r:container_t:s0:c508,c675
  13105 1000256 1000256 sleep system_u:system_r:container_t:s0:c300,c755

Notice that the kubelet and the test-pod are running as root on the host, while both test-pod-userns pods are running as temporary dynamic users from the “containers” range defined in /etc/subuid. CRI-O uses the containers/storage plugin and therefore looks for the default user “containers” to map subuids and subgids. According to the current /etc/subuid file, the dynamic users begin at UID 1000000 with a maximum of 1048576 users. The annotation assigns a range of 256 UIDs to each container. To change the defaults and mappings, refer to the containers-storage.conf man page.

4.2 Mandatory Access Control

SELinux is enabled on Fedora Linux in enforcing mode by default and it implements the Mandatory Access Control (MAC) security model. This model requires explicit rules that allow a labeled source context (process) to access a labeled target context (files|ports).

The labels have the following format as shown in the above examples:

user:role:type:sensitivity-level:category-levels

CRI-O requires the containers-selinux package. We installed Kubernetes while keeping SELinux in enforcing mode, but there are a few general scenarios that might require additional SELinux configuration:

  • Binding ports
  • Mounting storage
  • Sharing storage

Binding ports

Create a sample pod binding to a privileged host port. This is useful, for example, when creating ingress controllers. You will notice the rootless container was able to bind to the privileged port.

[user@master1 ~]$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-hostport
  annotations:
    io.kubernetes.cri-o.userns-mode: "auto:size=256"
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
      hostPort: 80
EOF

user@worker1:~$ sudo semanage port -l 
http_port_t tcp 80, 443 ...

user@worker1:~$ sudo ss -plntZ
Address:Port Process                                                                                          
0.0.0.0:80 "crio",proc_ctx=system_u:system_r:container_runtime_t:s0

Port 80 (the target context) is labeled http_port_t and the process trying to access it (the source context) is labeled container_runtime_t. To check the specific rules that allow this and to debug potential issues, use sesearch. Although, in this specific example, the container_t process was allowed to assume the container_runtime_t domain and eventually bind to port 80, this might not always be desirable.

user@worker1:~$ sesearch -A -s container_runtime_t -t http_port_t -c tcp_socket

Mounting storage

The container_t process is MCS-constrained, which means every new container receives two new random categories. At the moment, Kubernetes does not automatically re-label files with these two categories when mounting a volume. There is a community effort via features like SELinuxMountReadWriteOncePod, but you will have to follow the progress in future versions. For this demo, we will label the files manually.

The categories cannot take arbitrary values. The valid range is defined in the setrans.conf file, as shown below. Refer to the SELinux documentation for details about modifying the sizes of the MCS ranges.

user@worker1:~$ cat /etc/selinux/targeted/setrans.conf 
s0=SystemLow
s0-s0:c0.c1023=SystemLow-SystemHigh
s0:c0.c1023=SystemHigh

The DAC permissions are enforced in parallel with the MAC permissions, so the Linux mode bits must grant sufficient access in addition to the SELinux labels. We also need to set the proper container_file_t label and a category level. With the s0 level, all containers will be able to write to the volume; to restrict access, we need to label the directories with the containers’ process categories.

user@worker1:~$ sudo mkdir -m=777 /data
user@worker1:~$ sudo semanage fcontext -a -t container_file_t /data
user@worker1:~$ sudo restorecon -R -v /data
user@worker1:~$ mkdir -m=777 /data/folder{1..2}
user@worker1:~$ ls -laZ /data
drwxrwxrwx. root root unconfined_u:object_r:container_file_t:s0 .
drwxrwxrwx. user user unconfined_u:object_r:container_file_t:s0 folder1
drwxrwxrwx. user user unconfined_u:object_r:container_file_t:s0 folder2

The semanage fcontext command cannot assign category labels so we will have to use chcat:

user@worker1:~$ chcat -- +c800 /data/folder1
user@worker1:~$ chcat -- +c801 /data/folder2
user@worker1:~$ ls -laZ /data
drwxrwxrwx. unconfined_u:object_r:container_file_t:s0      .
drwxrwxrwx. unconfined_u:object_r:container_file_t:s0:c800 folder1
drwxrwxrwx. unconfined_u:object_r:container_file_t:s0:c801 folder2

With the configuration shown above, the container process must have category c800 to access folder1, and c801 is required to access folder2. To avoid random labeling, pass the spec.securityContext.seLinuxOptions object.

[user@master1 ~]$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-hostpath1
  annotations:
    io.kubernetes.cri-o.userns-mode: "auto:size=256"
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c800"
  containers:
  - name: test
    image: fedora/fedora:latest
    args: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /test
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data
---
apiVersion: v1
kind: Pod
metadata:
  name: test-hostpath2
  annotations:
    io.kubernetes.cri-o.userns-mode: "auto:size=256"
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c801"
  containers:
  - name: test
    image: fedora/fedora:latest
    args: ["sleep", "infinity"]
    volumeMounts:
    - mountPath: /test
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data
EOF

Next, try to write to these folders. Notice the process labels, file labels, and the file ownership.

user@master1:~$ kubectl exec test-hostpath1 -- touch /test/folder1/testfile
user@master1:~$ kubectl exec test-hostpath1 -- touch /test/folder2/testfile
touch: cannot touch '/test/folder2/testfile': Permission denied

user@master1:~$ kubectl exec test-hostpath2 -- touch /test/folder2/testfile
user@master1:~$ kubectl exec test-hostpath2 -- touch /test/folder1/testfile
touch: cannot touch '/test/folder1/testfile': Permission denied

user@worker1:~$ ps -eo pid,uid,gid,args,label | grep -E 'sleep'
40475 1000512 1000512 sleep system_u:system_r:container_t:s0:c801
40500 1000256 1000256 sleep system_u:system_r:container_t:s0:c800

user@worker1:~$ ls -laZ /data/folder1
drwxrwxrwx. user   unconfined_u:object_r:container_file_t:s0:c800 .
-rw-r--r--. 1000256 system_u:object_r:container_file_t:s0:c800     testfile

user@worker1:~$ ls -laZ /data/folder2
drwxrwxrwx. user   unconfined_u:object_r:container_file_t:s0:c801 .
-rw-r--r--. 1000512 system_u:object_r:container_file_t:s0:c801     testfile

Sharing storage

In the above examples, the containers share storage that has the same group and categories or storage with the most permissive s0 level. In production environments you will most likely deal with dynamic storage provisioners that will have to automatically relabel directories and files with whatever random category labels were assigned by Kubernetes. This means the storage provisioner must be SELinux aware and you need to read the configuration settings carefully for anything SELinux-specific.

Proper file permissions achieve a lot of security. SELinux simply adds a layer of security on top of the base file permissions.

More Security

We have touched on the basics of Fedora Linux’s security models. Securing Kubernetes is a broad field of study and it requires significant effort to come to a full understanding of how it all works. To review the best practices and tools beyond what this article has covered, refer to the SELinux docs and the Linux Foundation CKS learning track.

Conclusion

In this article, we have achieved a small, bare-metal Kubernetes setup running on Fedora Linux. CRI-O is a versatile CNCF graduated project that supports user namespaces and any OCI-compliant runtime. Just like Fedora, Kubernetes is continuously improving and can only benefit from Fedora Linux’s advanced security model and features. Follow the Kubernetes QuickDocs to stay apprised of the latest changes. Thanks to all the hard-working people maintaining the above-mentioned packages.

D-Bus overview

Posted by Fedora Magazine on December 25, 2023 08:00 AM

What D-Bus is

D-Bus serves various purposes aiming to facilitate the cooperation between different processes in the system. This article will describe D-Bus and how it performs this function.

From the D-Bus creators definition:

D-Bus is a message bus system, a simple way for applications to talk to one another. In addition to interprocess communication, D-Bus helps coordinate process lifecycle; it makes it simple and reliable to code a “single instance” application or daemon, and to launch applications and daemons on demand when their services are needed.

D-Bus is, mainly, an interprocess communication (IPC) protocol. Its main advantage is that applications don’t need to establish one-to-one communication with every other process they need to talk to. Instead, D-Bus offers a virtual bus to which all applications connect and through which they send their messages.

Figure: Processes without D-Bus
Figure: Processes with D-Bus

Reaching the desired application

We talk about a D-Bus system, but a single system can have more than one bus. Normally, there are at least two:

  • A system bus: applications that manage system-wide resources connect to it.
  • One session bus per logged-in user: desktop applications commonly use this bus.

To send D-Bus messages to the desired destination on a bus, there must be a way to identify each of the applications connected to that bus. For that reason, each application gets at least one bus name that others can specify as the destination of their messages. There are two name types:

  • Unique connection names: Each application that connects to the bus automatically gets one. It’s a unique identifier that will never be reused for a different application. These names start with a colon (‘:’) and normally look something like “:1.94” (the characters after the colon have no special meaning but are unique).
  • Well-known names: While unique connection names are dynamically assigned, applications that need to be easily discoverable by others must own a fixed name that is well known to them. They look like reversed domain names, similar to “org.freedesktop.NetworkManager”.

You can see what applications are connected to the two main buses using the busctl commands below. We’ll discover that nowadays many of the main components of the system use D-Bus. In the following examples, omit the --acquired option to also see unique connection names.

$ busctl list --system --acquired
...
$ busctl list --user --acquired
...

In an analogy with IP networks, unique connection names are like dynamic IP addresses and well-known names are like hostnames or domain names.

For example, NetworkManager is a daemon that controls the networking configuration of the system. Clients like GNOME control center, or the command line tools nmcli and nmstate, connect to the system bus and send D-Bus messages to the “org.freedesktop.NetworkManager” well-known name to get info or request changes.

Note: the D-Bus specification uses the term “bus name”, so it is the official term. This can be quite confusing because people tend to think that it refers to different buses, instead of apps connected to the same bus. Many articles and tools use the term “service” instead of “bus name” because it’s more intuitive, but it has a different meaning in the spec, so that usage is discouraged. In this article I use the term “destination” whenever possible. In any case, remember that “bus name” refers to a “name IN the bus”, not the “name OF a bus”.

D-Bus objects

D-Bus applications can expose some of their resources and actions in what we could call a D-Bus API.

The exposed resources resemble the main concepts of object oriented programming languages so they are called D-Bus objects. Specifically, each application can expose an indefinite number of objects with properties and methods. Individual D-Bus objects are identified by their D-Bus object paths, which look much like Unix paths (i.e. /path/to/the/object). 

You can inspect what objects a certain application exposes using the busctl command. For example:

$ busctl tree org.freedesktop.NetworkManager

Thanks to this path-like identifier, applications can express the hierarchy between objects in an intuitive way. It is easy to see that “/Background/Color” is an object that hierarchically belongs to the object “/Background”. The spec does not mandate following this hierarchical design, but almost all applications do.

Note: it is common in many applications that the object paths begin with the reverse domain of the author (i.e. /org/freedesktop/NetworkManager/*). This avoids name collisions with other objects from different libraries or sources. However, this is not mandatory and not all applications do it.

D-Bus interfaces, methods and properties

Each D-Bus object implements one or more D-Bus interfaces and each interface defines some properties and methods. Properties are like variables and methods are like functions. The names of D-Bus interfaces look like “org.example.AppName.InterfaceName”.

If you already know a programming language that uses the concept of interfaces, like Java, this will be familiar to you. If not, just remember that interfaces are like types for the objects, and each object can have more than one type at the same time. Keep in mind, though, that unlike in many of these languages, D-Bus objects never define direct members; only the interfaces do.

Example:

  • A printing server application defines the following interfaces:
    • Printer: defines the method “PrintDocument” and the property “InkLevel”.
    • Scanner: defines the method “ScanDocument”.
  • The object “/Devices/1” represents a normal printer, so it only implements the interface “Printer”.
  • The object “/Devices/2” represents a printer with scanner, so it implements both interfaces.

Introspection

Introspection allows you to get a list of the interfaces, with their methods and properties, that a D-Bus object implements. For example, the busctl command is used as follows:

$ busctl introspect org.freedesktop.NetworkManager /org/freedesktop/NetworkManager

Methods and Properties

Now let’s see how to call D-Bus methods and read properties from the terminal in two examples:

# Method call
busctl call                                                \
    --system                                               \
    org.freedesktop.NetworkManager           `# app name`  \
    /org/freedesktop/NetworkManager/Settings `# object`    \
    org.freedesktop.NetworkManager.Settings  `# interface` \
    ListConnections                          `# method`

# Property read
busctl get-property                                           \
    --system                                                  \
    org.freedesktop.NetworkManager              `# app name`  \
    /org/freedesktop/NetworkManager/Devices/2   `# object`    \
    org.freedesktop.NetworkManager.Device.Wired `# interface` \
    PermHwAddress                               `# property`

To read or write properties, we actually have to call methods of the standard interface “org.freedesktop.DBus.Properties” defined by the D-Bus specs. Busctl, however, does this under the hood with its get/set-property commands to abstract that away a bit. Exercise suggestion: read a property using busctl call (hint: you need to specify the arguments’ signature, which is “ss”); one possible solution is sketched below.
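The standard Properties.Get method takes two STRING arguments (hence the “ss” signature), the interface name and the property name, and returns the value wrapped in a VARIANT. Reusing the object and property from the get-property example above:

# Read the same property through the generic Properties interface
busctl call                                                   \
    --system                                                  \
    org.freedesktop.NetworkManager              `# app name`  \
    /org/freedesktop/NetworkManager/Devices/2   `# object`    \
    org.freedesktop.DBus.Properties             `# interface` \
    Get ss                                      `# method`    \
    org.freedesktop.NetworkManager.Device.Wired PermHwAddress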

Note: when calling D-Bus methods, it’s not mandatory to specify the interface, but if two interfaces define the same method name, the result is undefined, so you should always specify it. The tool busctl makes it mandatory for this reason.

D-Bus type system

D-Bus uses a strict type system, so all properties, arguments and return values must have a well defined type.

There are some basic types like BYTE, BOOLEAN, INT32, UINT32, DOUBLE, STRING or OBJECT_PATH. These basic types can be grouped into 3 different types of containers: STRUCT, ARRAY and DICT_ENTRY (dictionaries). Containers can be nested within other containers of the same or different type.

There is also a VARIANT type which allows some kind of dynamic typing.

Each type can be identified by a signature string. The signatures of basic types are single characters like “i”, “u”, “s”, etc. Signatures of compound types are strings like “(iibs)”, “ai” or “a{s(ii)}”. A complete description of all types and signatures is beyond the scope of this article, but depending on the language and/or D-Bus library that you use, you will need at least some knowledge of how to specify the type of the values you pass or receive. Check the D-Bus specification for more info.

Putting it all together (python example)

Now that we know the basic concepts of D-Bus, we are ready to send D-Bus messages. Let’s see a complete Python example.

#!/usr/bin/env python3

# Import the 'dbus' module from dbus-python package.
# The package can be installed with `pip install dbus-python`.
# Documentation: https://dbus.freedesktop.org/doc/dbus-python/
import dbus

# We need the NetworkManager's D-Bus API documentation to
# know what objects and methods we are interested in:
# https://networkmanager.dev/docs/api/latest/spec.html

# We'll connect to system bus
bus = dbus.SystemBus()

# We'll send our messages to NetworkManager
NM_WELL_KNOWN_NAME = "org.freedesktop.NetworkManager"

# Call to the following method:
#  - object: /org/freedesktop/NetworkManager
#  - interface: org.freedesktop.NetworkManager
#  - method: GetDeviceByIpIface
#  - input argument: "eth0" (type: STRING)
#  - return value: the device's path (type: OBJECT_PATH)
#
# Get the path to the object that represents the device with
# the interface name "eth0".
nm_dbus_obj = bus.get_object(
    NM_WELL_KNOWN_NAME, "/org/freedesktop/NetworkManager"
)
nm_dbus_iface = dbus.Interface(
    nm_dbus_obj, "org.freedesktop.NetworkManager"
)
try:
    device_dbus_path = nm_dbus_iface.GetDeviceByIpIface("eth0")
except dbus.exceptions.DBusException as e:
    print("D-Bus error: " + str(e))
    quit()

print("D-Bus path to eth0 device: " + str(device_dbus_path))

# Call to the following method:
#  - object: the device that we obtained in the previous step
#  - interface: org.freedesktop.NetworkManager.Device
#  - method: Disconnect
#
# Request to the NM daemon to disconnect the device
# Note: NM will return an error if it was already disconnected
device_dbus_obj = bus.get_object(
    NM_WELL_KNOWN_NAME, device_dbus_path
)
device_dbus_iface = dbus.Interface(
    device_dbus_obj, "org.freedesktop.NetworkManager.Device"
)
try:
    device_dbus_iface.Disconnect()
except dbus.exceptions.DBusException as e:
    print("D-Bus error: " + str(e))
    quit()

print("Device disconnected")

Note that we didn’t need to specify the type of the method’s argument. This is because dbus-python does its best to convert the Python values to the equivalent D-Bus values (e.g. str to STRING, list to ARRAY, dict to DICT_ENTRYs, etc.). However, as D-Bus has a strict type system, you will need to specify the type when there is ambiguity. For example, for integer types you will often need to use dbus.UInt16(value), dbus.Int64(value), etc.
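As a small illustration (the option names here are made up; only the dbus type wrappers matter), a dictionary of string-to-variant options could be built with explicit types like this:

import dbus

# Hypothetical options mapping: explicit wrappers remove any ambiguity
options = dbus.Dictionary(
    {
        "autoconnect": dbus.Boolean(True),
        "priority": dbus.UInt32(10),
    },
    signature="sv",  # a{sv}: ARRAY of DICT_ENTRY<STRING, VARIANT>
)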

More D-Bus features

Another important and widely used D-Bus feature is signal messages. Applications can subscribe to other applications’ signals, specifying which of them they are interested in. The producer application sends signal messages when certain events happen, and the subscribers receive them asynchronously. This avoids the need for constant polling.
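Here is a minimal sketch of subscribing to a signal with dbus-python. It assumes the StateChanged signal of the org.freedesktop.NetworkManager interface and requires the PyGObject package for the GLib main loop that dispatches the incoming messages.

#!/usr/bin/env python3
import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

# dbus-python delivers signals asynchronously through a main loop
DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()

def on_state_changed(new_state):
    print("NetworkManager state changed to: " + str(new_state))

# Subscribe to the StateChanged signal emitted by NetworkManager
bus.add_signal_receiver(
    on_state_changed,
    signal_name="StateChanged",
    dbus_interface="org.freedesktop.NetworkManager",
    bus_name="org.freedesktop.NetworkManager",
)

GLib.MainLoop().run()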

There are many other useful features in D-Bus, like service activation, authentication, etc., which are beyond the scope of this article.

Useful tools

On the command line, systemd’s busctl is the most intuitive and complete tool, with great features like monitoring or capturing D-Bus traffic as a pcap file. However, value types have to be specified as D-Bus signatures, which is hard if you don’t know them very well. In that case, dbus-send from the dbus-tools package or Qt’s qdbus might be easier to use.

When starting to play around with D-Bus, exploring the object hierarchies of the different applications is much easier with a GUI tool like the Qt D-Bus Viewer (QDbusViewer).

Figure: Screenshot of QDbusViewer


Episode 408 – Does Kubernetes need long term support?

Posted by Josh Bressers on December 25, 2023 12:00 AM

Josh and Kurt talk about a story asking for a Kubernetes LTS. Should open source projects have LTS versions? What does LTS even mean? Why is maintaining software so hard? It’s a lively discussion all about the past, present, and future of open source LTS.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_408_Does_Kubernetes_need_long_term_support.mp3

Show Notes

Fedora Ops Architect Weekly

Posted by Fedora Community Blog on December 24, 2023 03:02 PM

Your weekly summary of stuff ‘n things happening around Fedora!

Holidays!

Between now and early January, expect things to be a little quieter around the Fedora space as many of us will be taking some time off to spend with friends and family. If something is not working as expected, check the status page and the common issues forum, and if things really start to escalate, open a ticket in the infra issues tracker and ask around on the #fedora matrix channel to see if someone might be able to help.

On a personal note, I will be on vacation from now until I am back to work on Thursday, January 4th. I will be online on Wednesday, 27th December to wrangle those changes, so if you need me for anything, message or email me; if I can, I will get to it then, and if not, it’s a 2024 task 🙂

I hope you all have a wonderful holiday/end of year period, and I cannot wait to see you in 2024!

F39 Election Results!

The F39 elections have concluded. A huge congratulations again to our newly elected members of Council, Mindshare and FESCo, and an even bigger thank you to all those who were nominated and who voted. To read more on this election cycle, visit the F39 Election Results blog post.

Fedora Linux 40 Upcoming Milestones

The following is a list of submission deadlines for Change Proposals. If you wish to submit a change proposal, please do so *before* the below dates.

  • 26th December – Proposals requiring mass rebuilds
  • 26th December – Proposals that are System-Wide changes
  • 16th January ’24 – Proposals that are Self-Contained

For those who have approved F40 changes, weekly reports will begin in early January, but do take note of the schedule links below too:

Please visit the F40 Change Set page for an up to date list of accepted System Wide and Self-Contained changes. The full overview of the Fedora Linux 40 release schedule can also be viewed >>here<<.

Travel & Events

FOSDEM

FOSDEM ’24 is back on February 3rd & 4th in Brussels, Belgium. The CfP has now closed and notifications have been sent to inform folks whether their talks have been accepted or not. A provisional schedule for the Distributions Devroom is available >>here<<, and Fedora will have a stand at the conference too. If you’re attending, make sure to stop by to say hi!

DevConf.cz

DevConf.cz will return in Brno, Czech Republic on June 13 – 15th 2024. Call for proposals is now open until March 3rd.

Discussions

Announced F40 Change Proposals

The post Fedora Ops Architect Weekly appeared first on Fedora Community Blog.

bodhi-server 8.0.1

Posted by Bodhi on December 24, 2023 08:58 AM

Released on 2023-12-24.
This is a bugfix release that fixes an urgent issue about bodhi-server not honouring cookie authentication settings.

Bug fixes

  • The Bodhi authentication policy wasn't honoring settings from config (#5572).

Contributors

The following developers contributed to this release of Bodhi:

  • Mattia Verga