Fedora Security Planet

Episode 429 – The autonomy of open source developers

Posted by Josh Bressers on May 20, 2024 12:00 AM

Josh and Kurt talk about open source and autonomy. This is even related to some recent return-to-office news. The conversation weaves between a few threads, but fundamentally it asks why people do what they do, especially in the world of open source. This is also a problem we see in security: security people love to tell developers what to do, and developers don't like being told what to do.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3388-1" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_429_The_autonomy_of_open_source_developers.mp3?_=1" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_429_The_autonomy_of_open_source_developers.mp3</audio>

Show Notes

Episode 428 – GitHub artifact attestation

Posted by Josh Bressers on May 13, 2024 12:00 AM

Josh and Kurt talk about a new way to sign artifacts on GitHub. It’s in beta, it’s not going to be easy to use, and it will have bugs. But that’s all OK. This is how we start. We need infrastructure like this to enable easier-to-use features in the future. Someday, everything will be signed by default.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3382-2" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_428_GitHub_artifact_attestation.mp3?_=2" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_428_GitHub_artifact_attestation.mp3</audio>

Show Notes

changing author on lots of commits at once

Posted by Adam Young on May 10, 2024 08:18 PM
git rebase HEAD~4 --exec 'git commit --amend --author "Adam Young <admiyo@os.amperecomputing.com>" --no-edit'
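Note that git expects the author in the form `Name <email>`, angle brackets included; without them the `--amend` call fails. A quick way to sanity-check the rewrite in a throwaway repository (all names and emails below are placeholders):

```shell
#!/bin/sh
# Throwaway demo: rewrite the author of the last two commits, leaving
# earlier history untouched.
set -eu

git init -q demo && cd demo
git config user.name "Old Name"
git config user.email "old@example.com"

for n in 1 2 3; do
    echo "$n" > file && git add file && git commit -q -m "commit $n"
done

# git requires the author as "Name <email>", angle brackets included.
git rebase HEAD~2 --exec \
    'git commit --amend --author "New Name <new@example.com>" --no-edit' \
    >/dev/null 2>&1

git log --format='%an'   # newest first: the last two commits are rewritten
```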

How not to waste time developing long-running processes

Posted by Adam Young on May 09, 2024 06:22 PM

Developing long-running tasks might be my least favorite coding activity. I love writing and debugging code…I’d be crazy to be in this profession if I did not. But when a task takes long enough, your attention wanders and you get out of the zone.

Building the Linux Kernel takes time. Even checking the Linux Kernel out of git takes a non-trivial amount of time. The Ansible work I did back in the OpenStack days to build and tear down environments took a good bit of time as well. How do I keep from getting out of the zone while coding on these? It is hard, but here are some techniques.

Put everything in the container ahead of time

Build processes necessarily need a lot of tools. On a Fedora system, I often start by running:

sudo dnf groupinstall "Development Tools"

And then I may still need to install a few more things: Java, Rust, or Python libraries, meson, dwarves, bison, and others.

If you are doing CI inside a container, as is the norm for git hosting services, you can specify a custom container. Anything your build needs that it does not itself produce should already be in the container.

Note that doing this introduces another variable into your build process: which version of the container was used. Installing packages at build time means you always get the latest versions; with a pre-built container, some of those packages may have gone stale. This is, in fact, part of the reason for using containers, since it lets you change package versions at your own cadence, but make sure you are deliberate about it.
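As a concrete illustration, a pre-baked build container might look something like this Containerfile sketch; the base image tag and package list are assumptions, not a definitive set:

```dockerfile
# Sketch of a pre-baked CI build container (package list illustrative).
# Pinning the base tag is the deliberate-cadence part: bump it on purpose.
FROM fedora:40

RUN dnf -y groupinstall "Development Tools" && \
    dnf -y install bison flex meson dwarves openssl-devel elfutils-libelf-devel && \
    dnf clean all
```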

Get it working, then disable it

If part of a long-running task works, hold on to the output of that task, and disable the task itself. This implies that any information generated by this stage of the task can be deduced from its artifacts.

A Linux Kernel build takes a long time, even with all 100+ processors working on it. Skip the build until you need it to be fresh again.
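A minimal shell sketch of this guard; the artifact path and the build function body are placeholders for your own long-running stage:

```shell
#!/bin/sh
# Skip the expensive stage whenever its artifact already exists.
set -eu

build_kernel() {
    mkdir -p build
    # stand-in for the real build, e.g. make -j"$(nproc)" O=build
    echo "built" > build/vmlinux
}

if [ -f build/vmlinux ]; then
    echo "artifact present, skipping build"
else
    build_kernel
fi
```

Delete the artifact (or flip the condition) when you need the build to be fresh again.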

Use mkdir to simulate git checkouts

If you are doing a git checkout but don’t actually need the checkout contents for the follow-on stage, you can short-circuit it with a mkdir of the root directory. This is a more useful hack than I originally realized. Pulling down a huge tree like the Linux kernel can take quite some time; skipping it until you are ready to test can save all of that time.
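Sketched in shell, with the skip as the default while iterating (the variable name and target directory are illustrative):

```shell
#!/bin/sh
# Short-circuit a slow clone while developing the follow-on stages.
set -eu

SKIP_CHECKOUT="${SKIP_CHECKOUT:-1}"   # default to the fast path while iterating

if [ "$SKIP_CHECKOUT" = "1" ]; then
    # Fake the checkout: later stages often only need the directory to exist.
    mkdir -p linux
else
    git clone --depth 1 \
        https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git linux
fi
```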

Use touch or fallocate to simulate build artifacts

If your automation is primarily for moving artifacts around, you can temporarily skip the stage that builds the artifacts until you get the follow-on stages working. This is often done in conjunction with the git checkout skip above.
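A short sketch; the file names and sizes are placeholders:

```shell
#!/bin/sh
# Fabricate placeholder artifacts so copy/upload stages can be exercised
# without waiting for a real build.
set -eu

mkdir -p artifacts
touch artifacts/build.log                # an empty file satisfies existence checks
fallocate -l 1M artifacts/kernel.img     # a right-sized file for transfer tests
ls -l artifacts/
```

Note that fallocate needs filesystem support; `truncate -s` is a portable fallback if it fails.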

There are many variations on these workarounds, as well as several more I have not yet documented. The point is to identify where you are waiting on tasks, and to find ways to avoid that waiting while developing the rest of your automation.

If you are working with Ansible, the --start-at-task flag can be a powerful time saver. However, that means structuring your playbook so that you don’t gather up all the information you need for a task at the beginning.

The same general idea holds for bash-based operations: you can use functions that you call directly, but make sure you don’t need too much context for that function call. Make it easy to start in the middle, and end in the middle.

Keeping the CI logic in bash

Posted by Adam Young on May 08, 2024 02:46 PM

As much as I try to be a “real” programmer, the reality is that we need automation, and setting up automation is a grind. A necessary grind.

One thing that I found frustrating was that, in order to test our automation, I needed to kick off a pipeline in our git server (GitLab, but the logic holds for others) even though the majority of the heavy lifting was done in a single bash script.

In order to get to the point where we could run that script in a GitLab runner, we needed to install a bunch of packages (dwarves, make, and so forth) as well as do some SSH key provisioning in order to copy the artifacts off to a server. The gitlab-ci.yml file ended up being a couple dozen lines long, and all of those lines were bash commands.

So I pulled those lines out of gitlab-ci.yml and put them into the somewhat intuitively named file workflow.sh. Now my gitlab-ci.yml file is basically a one-liner that calls workflow.sh.
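The resulting file looks roughly like this sketch (the job name and stage are illustrative, not the actual file):

```yaml
# Minimal .gitlab-ci.yml that delegates everything to the script.
build:
  stage: build
  script:
    - ./workflow.sh
```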

But I also made workflow.sh runnable from the bash command line of a fresh machine. This is the key part: it gives the rest of my team automation they can use without relying on GitLab. Since GitLab still runs the automation in CI, no one can check in a change that breaks the CI without it being caught, but they can make changes that make life easier for them on the remote systems.

The next step is to start breaking apart the workflow into separate pipelines, due to CI requirements. To do this, I do three things:

  • Move the majority of the logic into functions, and source a functions.sh file. This lets me share code across top-level bash scripts.
  • Make one top-level function for each pipeline.
  • Replace workflow.sh with one script per pipeline, named pipeline_<stage>. These scripts merely change to the source directory and then call the top-level functions in functions.sh.

The reason for the last split is to keep logic from creeping into the pipeline functions. They are merely interfaces to the single set of logic in functions.sh.
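A minimal sketch of that layout; all function and file names here are illustrative, and the heredocs simply stand in for the real files:

```shell
#!/bin/sh
# Sketch: functions.sh holds the logic; pipeline scripts are thin wrappers.
set -eu

# functions.sh: single source of truth, safe to source interactively too.
cat > functions.sh <<'EOF'
build_artifacts() { echo "building"; }
pipeline_build() { build_artifacts; }
EOF

# pipeline_build.sh: no logic of its own, just an interface to functions.sh.
cat > pipeline_build.sh <<'EOF'
#!/bin/sh
cd "$(dirname "$0")"
. ./functions.sh
pipeline_build
EOF
chmod +x pipeline_build.sh

./pipeline_build.sh   # this one line is all gitlab-ci.yml needs to call
```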

The goal of making the separate functions source-able is to be able to run interior steps of the overall process without having to run the end-to-end work. This saves the time spent sitting around waiting for a long-running process to complete. More on that in a future article.

Episode 427 – Will run0 replace sudo?

Posted by Josh Bressers on May 06, 2024 12:00 AM

Josh and Kurt talk about a sudo replacement going into systemd called run0. It sounds like it’ll get a lot right, but systemd is a pretty big attack surface and not everyone is a fan. We shall have to see if this ends up replacing sudo.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3377-3" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_427_Will_run0_replace_sudo.mp3?_=3" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_427_Will_run0_replace_sudo.mp3</audio>

Show Notes

Episode 426 – Automatically exploiting CVEs with AI

Posted by Josh Bressers on April 29, 2024 12:00 AM

Josh and Kurt talk about a paper describing using an LLM to automatically create exploits for CVEs. The idea is probably already happening in many spaces such as pen testing and intelligence services. We can’t keep up with the number of vulnerabilities we already have; there’s no way we can possibly keep up with a glut of LLM-generated vulnerabilities. We really need to rethink how we handle vulnerabilities.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3372-4" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_426_Automatically_exploiting_CVEs_with_AI.mp3?_=4" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_426_Automatically_exploiting_CVEs_with_AI.mp3</audio>

Show Notes

Episode 425 – Video game cheaters, also pretendo

Posted by Josh Bressers on April 22, 2024 12:00 AM

Josh and Kurt talk about a database of game cheaters. Cheating in games has many similarities to security problems. Anti-cheat rootkits are also terrible. The clever thing, however, is using statistics to identify cheaters. Statistics don’t lie. Also, we discuss the Pretendo project sitting on a vulnerability for a year. Is this ethical?

<audio class="wp-audio-shortcode" controls="controls" id="audio-3367-5" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_425_Video_game_cheaters_also_pretendo.mp3?_=5" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_425_Video_game_cheaters_also_pretendo.mp3</audio>

Show Notes

Running Keystone in development mode on Ubuntu 22.04

Posted by Adam Young on April 15, 2024 06:30 PM

Things have diverged a bit from the docs. Just want to document here what I got working:

I had already checked out Keystone and run the unit tests.

I needed uwsgi

sudo apt install uwsgi-core
sudo apt install uwsgi-plugin-python3


Then a modified command line to run the server:

uwsgi --http-socket 127.0.0.1:5000 \
    --plugin /usr/lib/uwsgi/plugins/python3_plugin.so \
    --wsgi-file $(which keystone-wsgi-public)

This Stack Overflow answer got me the last part:

https://stackoverflow.com/questions/31330905/uwsgi-options-wsgi-file-and-module-not-recognized

Episode 424 – The Notepad++ Parasite Website

Posted by Josh Bressers on April 15, 2024 12:00 AM

Josh and Kurt talk about a Notepad++ fake website. It’s possibly not illegal, but it’s certainly ethically wrong. We also end up discussing why it seems like all these weird and wild things keep happening. It’s probably due to the massive size of open source (and everything) now. Things have gotten gigantic and we didn’t really notice.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3363-6" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_424_The_Notepad_Parasite_Website.mp3?_=6" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_424_The_Notepad_Parasite_Website.mp3</audio>

Show Notes

One Week With KDE Plasma Workspaces 6 on Fedora 40 Beta (Vol. 2)

Posted by Stephen Gallagher on April 08, 2024 08:31 PM

Checking In

It’s been a few days since my first entry in this series. For the most part, things have been going quite smoothly. I have to say, I am liking KDE Plasma Workspaces 6 much better than previous releases (which I dabbled with but admittedly did not spend a significant amount of time using). The majority of what I want to do here Just Works. This should probably not come as a surprise to me, but I’ve been burned before when jumping desktops.

I suppose that should really be my first distinct note here: the transition from GNOME Desktop to KDE Plasma Workspaces has been minimally painful. No matter what, there will always be some degree of muscle memory that needs to be relearned when changing working environments. It’s as true going from GNOME to KDE as it is from Windows to Mac, Mac to ChromeOS and any other major shift. That said, the Fedora Change that prompted this investigation is specifically about the possibility of changing the desktop environment of Fedora Workstation over to using KDE Plasma Workspaces and away from GNOME. As such, I will be keeping in mind some of the larger differences that users would face in such a transition.

Getting fully hooked up

The first few days of this experience, I spent all of my time directly at my laptop, rather than at my usual monitor-and-keyboard setup. This was because I didn’t want to taint my initial experience with potential hardware-specific headaches. My main setup involves a very large 21:9 aspect monitor, an HDMI surround sound receiver and a USB stereo/mic headset connected via a temperamental USB 3.2/Thunderbolt hub and the cheapest USB A/B switch imaginable (I share these peripherals with an overpowered gaming PC). So when I put aside my usual daily driver and plugged my Thinkpad into the USB-C hub, I was prepared for the worst. At the best of times, Fedora has been… touchy about working with these devices.

Let’s start with the good bits: When I first connected the laptop to my docking station, I was immediately greeted by an on-screen display asking me how I wanted to handle the new monitor. Rather than just making a guess between cloning or spanning the desktop, it gave me an easy and visual prompt to do so. Unfortunately, I don’t have a screenshot of this, as after the first time it seems that the system “remembers” the devices and puts them back the way I had them. This is absolutely desirable for the user, but as a reviewer it makes it harder to show it off. (EDIT: After initial publication, I was informed of the meta-P shortcut which allowed me to grab this screenshot)

<figure class="wp-block-image size-large"></figure>

Something else that I liked about the multi-monitor support was the way that the virtual desktop space on the taskbar automatically expanded to include the contents from both screens. It’s a simple thing, but I found that it made it really easy to tell at a glance which desktop I had particular applications running on.

All in all, I want to be clear here: the majority of my experience with KDE Plasma Workspaces has been absolutely fine. So many things work the same (or close enough) to how they work in GNOME that the transition has actually been much easier than I expected. The biggest workflow changes I’ve encountered are related to keyboard shortcuts, but I’m not going to belabor that, having discussed it in the first entry. The one additional keyboard-shortcut complaint I will make is this: using the “meta” key and typing an application name has a strange behavior that gets in my way. It almost behaves identically to GNOME; I tap “meta”, start typing, and then hit enter to proceed. But the issue I have with KDE is this: I’m a fast typist, and the KDE prompt doesn’t accept <enter> until the visual effect of opening the menu completes. This baffles me, as it accepts all of the other keys. So my muscle memory of launching a terminal by quickly tapping “meta”, typing “term”, and hitting enter doesn’t actually launch the terminal. It leaves me at the menu with konsole sitting there. When I hit enter after the animation completes, it works fine. So while the behavior isn’t wrong, per se, it is frustrating. The fact that it accepts the other characters makes me think this was a deliberate choice that I don’t understand.

There have been a few other issues, mostly around hardware support. I want to be clear: I’m fully aware that hardware is hard. One issue in particular that has gotten in the way is support for USB and HDMI sound devices in KDE Plasma. I don’t know if it’s specifically my esoteric hardware or a more general problem, but it has been very hard to get KDE to use the correct inputs and outputs. In the case of the HDMI audio receiver, I still haven’t been able to get KDE to present it as an output option in the control panel. It connects to the receiver and treats it as a very basic 720p video output device, but it just won’t recognize it as an audio output device. My USB stereo headset with mic has also been more headache than headset: after much trial and error, I’ve managed to identify the right output to send stereo output to it, but no matter what I have fiddled with, it does not recognize the microphone.

More issues on the hardware front are related to having two webcam devices available. KDE properly detects both the built-in camera on the laptop as well as the external webcam I have clipped to the top of my main monitor, but it seems to have difficulty switching between them. I’m not yet 100% sure how much of this is a KDE problem and how much a Firefox problem, but it is frustrating. Sometimes I’ll select my external webcam and it will still be taking input from the built-in camera. Also, it seems to always show two entries for both devices. I need to do more digging here, but I anticipate that I’ll be filing a bug report once I gather enough data.

Odds and Ends

I have mixed feelings about KDE’s clipboard applet in the toolbar. On the one hand, I can certainly see the convenience of digging into the clipboard history, particularly if you accidentally drag-select something and replace the clipboard copy you intended to keep. On the other hand, as a heavy user of Bitwarden who regularly copies passwords1 out of the wallet and into other applications, the fact that all of the clipboard contents are easily viewable in plaintext to anyone walking by if I forget to lock my screen for a few seconds is quite alarming. I’m pretty sure I’ll either have to disable this applet or build a habit of clearing it any time I copy a password. Probably the former, as I don’t like the fact that I have to call up and make the plaintext visible first in order to delete it without clearing the entire history anyway.

Conclusion

This will probably seem odd after a post that mostly contained complaints and nitpicks, but I want to reiterate: my experience over the last several days has actually been quite good. When dealing with a computer, I consider “it was boring” to be the highest of praise. Using KDE has not been a life-altering experience. It has been a stable, comfortable environment in which to get work done. Have I experienced some issues? Absolutely. None of them are deal-breakers, though the audio issues are fairly annoying. My time in the Fedora Project has shown me that hardware issues inevitably get fixed once they are noticed, so I’m not overly worried.

As for me? I’m going to stick around in KDE for a while and see how things play out. If you’re reading this and you’re curious, I’ll happily direct you to the Fedora KDE Spin for the Live ISO or the Kinoite installer if, like me, you enjoy an atomic update environment. Make sure to select “Show Beta downloads” to get Plasma 6!

  1. I generate high-entropy, unique random passwords for everything. Don’t you? ↩

Episode 423 – FCC cybersecurity label for consumer devices

Posted by Josh Bressers on April 08, 2024 12:00 AM

Josh and Kurt talk about a new FCC program to provide a cybersecurity certification mark. Similar to other consumer safety marks such as UL or CE. We also tie this conversation into GrapheneOS, and what trying to claim a consumer device is secure really means. Some of our compute devices have an infinite number of possible states. It’s a really weird and hard problem.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3355-7" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_423_FCC_cybersecurity_label_for_consumer_devices.mp3?_=7" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_423_FCC_cybersecurity_label_for_consumer_devices.mp3</audio>

Show Notes

One Week With KDE Plasma Workspaces 6 on Fedora 40 Beta (Vol. 1)

Posted by Stephen Gallagher on April 03, 2024 11:41 PM

Why am I doing this?

As my readers may be aware, I have been a member of the Fedora Engineering Steering Committee (FESCo) for over a decade. One of the primary responsibilities of this nine-person body is to review the Fedora Change Proposals submitted by contributors and provide feedback as well as being the final authority as to whether those Changes will go forth. I take this responsibility very seriously, so when this week the Fedora KDE community brought forth a Change Proposal to replace GNOME Desktop with KDE Plasma Workspaces as the official desktop environment in the Fedora Workstation Edition, I decided that I would be remiss in my duties if I didn’t spend some serious time considering the decision.

As long-time readers of this blog may recall, I was a user of the KDE desktop environment for many years, right up until KDE 4.0 arrived. At that time, (partly because I had recently become employed by Red Hat), I opted to switch to GNOME 2. I’ve subsequently continued to stay with GNOME, even through some of its rougher years, partly through inertia and partly out of a self-imposed responsibility to always be running the Fedora/Red Hat premier offering so that I could help catch and fix issues before they got into users’ and customers’ hands. Among other things, this led to my (fairly well-received) series of blog posts on GNOME 3 Classic. As it has now been over ten years and twenty(!) Fedora releases, I felt like it was time to give KDE Plasma Workspaces another chance with the release of the highly-awaited version 6.0.

How will I do this?

I’ve committed to spending at least a week using KDE Plasma Workspaces 6 as my sole working environment. This afternoon, I downloaded the latest Fedora Kinoite installer image and wrote it to a USB drive.1 I pulled out a ThinkPad I had lying around and went ahead with the install process. I’ll describe my setup process a bit below, but (spoiler alert) it went smoothly and I am typing up this blog entry from within KDE Plasma.

What does my setup look like?

I’m working from a Red Hat-issued ThinkPad T490s, a four-core Intel “Whiskey Lake” x86_64 system with 32 GiB of RAM and embedded Intel UHD 620 graphics. Not a powerhouse by any means, but only about three or four years old. I’ve wiped the system completely and done a fresh install rather than installing the KDE packages by hand onto my usual Fedora Workstation system. This is partly to ensure that I get a pristine environment for this experiment and partly so I don’t worry about breaking my existing system.

Thoughts on the install process

I have very little to say about the install process. It was functionally identical to installing Fedora Silverblue, with the minimalist Anaconda environment providing me some basic choices around storage (I just wiped the disk and told it to repartition it however it recommends) and networking (I picked a pithy hostname: kuriosity). That done, I hit the “install” button, rebooted and here we are.

First login

Upon logging in, I was met with the KDE Welcome Center (Hi Konqi!), which I opted to proceed through very thoroughly, hoping that it would provide me enough information to get moving ahead. I have a few nitpicks here:

First, the second page of the Welcome Center (the first with content beyond “this is KDE and Fedora”) was very sparse, saying basically “KDE is simple and usable out of the box!” and then using up MOST of its available screen real estate with a giant button directing users to the Settings app. I am not sure what the goal is here: it’s not super-obvious that it is a button, but if you click on it, you launch an app that is about as far from “welcoming” as you can get (more on that later). I think it might be better to just have a little video or image here that just points at the settings app on the taskbar rather than providing an immediate launcher. It both disrupts the “Welcome” workflow and can make less-technical users feel like they may be in over their heads.

<figure class="wp-block-image size-large"></figure>

I actually think the next page is a much better difficulty ramp; it presents some advanced topics that they might be interested in, but it doesn’t look quite as demanding of them and it doesn’t completely take the user out of the workflow.

<figure class="wp-block-image size-large"></figure>

Next up on the Welcome Center was something very welcome: an introduction to Discover (the “app store”). I very much like this (and other desktop environments could absolutely learn from it). It immediately provides the user with an opportunity to install some very popular add-ons.2

<figure class="wp-block-image size-large"></figure>

The next page was a bit of a mixed bag for me. I like that the user is given the option to opt-in to sharing anonymous user information, but I feel like the slider and the associated details it provided are probably a bit too much for most users to reasonably parse. I think this can probably be simplified to make it more approachable (or at least bury the extra details behind a button; I had to extend the window from its default size to get a screenshot).

<figure class="wp-block-image size-large"></figure>

At the end of the Welcome Center was a page that gave me pause: a request for donations to the KDE project. I’m not sure this is a great place for it, since the user hasn’t even spent any time with the environment yet. It seems a bit too forward to ask for donations here. I’m not sure where a better place is, but being asked for spare change minutes after installing the OS doesn’t feel right. I think that if we were to make KDE the flagship desktop behind Fedora Workstation, this would absolutely have to come out. It gives a bad first impression. I think a far better place to leave things would be the preceding page:

<figure class="wp-block-image size-large"></figure>

OK, so let’s use it a bit!

With that out of the way, I proceeded to do a bit of setup for personal preferences. I installed my preferred shell (zsh) and some assorted CLI customizations for the shell, vi, git, etc. This was identical to the process I would have followed for Silverblue/GNOME, so I won’t go into any details here. I also have a preference for touchpad scrolling to move the page (like I’m swiping a touch-screen), so I set that as well. I was confused for a bit as it seemed that wasn’t having an effect, but I realized I had missed that “touchpad” was a separate settings page from “mouse” and had flipped the switch on the wrong devices. Whoops!

In the process of setting things up to my liking, I did notice one more potential hurdle for newcomers: the default keyboard shortcuts for working with desktop workspaces are different from GNOME, MacOS and Windows 11. No matter which major competitor you are coming from, this will cause muscle-memory stumbles. It’s not that any one approach is better than another, but the fact that they are all completely different makes me sigh and forces me to think about how I’m interacting with the system instead of what I want to do with it. Unfortunately, KDE did not make figuring this out easy on me; even when I used the excellent desktop search feature to find the keyboard shortcut settings, I was presented by a list of applications that did not clearly identify which one might contain the system-wide shortcuts. By virtue of past experience with KDE, I was able to surmise that the KWin application was the most likely place, but the settings app really didn’t seem to want to help me figure that out. Then, when I selected KWin, I was presented with dozens of pages of potential shortcuts, many of which were named similarly to the ones I wanted to identify. This was simply too many options with no clear way to sort them. I ended up resorting to trying random combinations of ctrl, alt, meta and shift with arrow keys until I eventually stumbled upon the correct set.

Next, I played around a bit with Discover, installing a pending firmware update for my laptop (which hadn’t been turned on in months). I also enabled Flathub and installed Visual Studio Code to see how well Flatpak integration works and also for an app that I know doesn’t natively use Wayland. That was how I discovered that my system had defaulted to a 125% fractional scaling setup. In Visual Studio Code, everything looked very slightly “off” compared to the rest of the system. Not in any way I could easily put my finger on, until I remembered how badly fractional scaling behaved on my GNOME system. I looked into the display settings and, sure enough, I wasn’t at an integer scaling value. Out of curiosity, I played around with the toggle for whether to have X11 apps scale themselves or for the system to do it and found that the default “Apply scaling themselves” was FAR better looking in Visual Studio Code. At the end of the day, however, I decided that I preferred the smaller text and larger available working area afforded me by setting the scaling back to 100%. That said, if my eyesight were poorer or I needed to sit further away from the screen, I can definitely see the advantages to the fractional scaling, and I was very impressed by how sharp it managed to be. Full marks on that one!

I next went to play around in Visual Studio Code with one of my projects, but when I tried to git clone it, I hit an issue where it refused my SSH key. Digging in, I realized that KDE does not automatically check for keys in the default user location (~/.ssh) and prompt for their passphrases. I went ahead and used ssh-add to manually import them into the SSH keyring and moved along. I find myself going back and forth on this; on the one hand, there’s a definite security tradeoff inherent in allowing the desktop to prompt (and offer to save) the passphrase in the desktop keyring (encrypted by your login password). I decline to save mine persistently, preferring to enter it each time. However, there’s a usability tradeoff to not automatically at least launching an askpass prompt. In any case, it’s not really an issue for me to make this part of my usual toolbox entry process, but I’m a technical user. Newbies might be a bit confused if they’re coming from another environment.

I then went through the motions of getting myself signed in to the various messaging services that I use on a daily basis, including Fedora’s Matrix. Once signed in there via Firefox, I was prompted to enable notifications, which I did. I then discovered the first truly sublime moment I’ve had with Plasma Workspaces: the ephemeral notifications provided by the desktop. The way they present themselves off to the side, with a vibrant preview window and a progress countdown until they vanish, is just *chef’s kiss*. If I take nothing else away from this experience, it’s that it is possible for desktop notifications to be beautiful. Other desktops need to take note here.

I think this is where I’m going to leave things for today, so I’ll end with a short summary: As a desktop environment, it seems to do just about everything I need it to do. It’s customizable to the point of fault: it’s got so many knobs to twist that it desperately needs a map (or perhaps a beginner vs. expert view of the settings app). Also, the desktop notifications are like a glass of icy lemonade after two days lost in the desert.

  1. This was actually my first hiccough: I have dozens of 4 GiB thumbdrives lying around, but the Kinoite installer was 4.2 GiB, so I had to go buy a new drive. I’m not going to ding KDE for my lack of preparedness, though! ↩
  2. Unfortunately I hit a bug here; it turns out that all of those app buttons will just link to the updates page in Discover if there is an update waiting. I’m not sure if this is specific to Kinoite yet. I’ll be investigating and filing a ticket about it in the appropriate place. ↩

XZ Bonus Spectacular Episode

Posted by Josh Bressers on April 01, 2024 12:03 PM

Josh and Kurt talk about the recent events around XZ. It’s only been a few days, and it’s amazing what we already know. We explain a lot of the basics as we currently understand them, with the expectation that many of these details will change quickly over the coming week. We can’t fix this problem as it stands; we don’t even know where to start yet. But that’s not a reason to lose hope. We can fix this if we want to, but it won’t be flashy, it’ll be hard work.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3359-8" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/XZ_Bonus_Spectacular_Episode.mp3?_=8" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/XZ_Bonus_Spectacular_Episode.mp3</audio>

Show Notes

Episode 422 – Do you have a security.txt file?

Posted by Josh Bressers on April 01, 2024 12:00 AM

Josh and Kurt talk about the security.txt file. It’s not new, but it’s not something we’ve discussed before. It’s a great idea, an easy format, and well defined. It’s not high on many of our todo lists, but it’s something worth doing.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3351-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_422_Do_you_have_a_securitytxt_file.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_422_Do_you_have_a_securitytxt_file.mp3</audio>

Show Notes

Revision control and sheet music

Posted by Adam Young on March 29, 2024 03:38 PM

Musescore is a wonderful tool. It has made a huge impact on my musical development over the past couple of decades. Sheet music is the primary way I communicate and record musical ideas, and Musescore the tool and musescore.com have combined to make a process that works for me and my musical needs.

I have also spent a few years writing software, and the methods that we have learned to use in software development have evolved due to needs of scale and flexibility. I would like to apply some of those lessons to how I manage sheet music. But there are disconnects.

The biggest issue I have is that I want the same core song in multiple different but related formats. The second biggest issue is that I want to be able to make changes to a song, and to collaborate with other composers in making those changes.

The sheet music I work with is based on the western notation system. I use a fairly small subset of the overall notation system, as what I am almost always working toward is a musical arrangement for small groups. The primary use case I have is for lead sheets: a melody line and a series of chords for a single instrument. Often, this is for a horn player. I need to be able to transpose the melody line into the appropriate key and range for that particular horn: E flat for a baritone sax, C for a flute, B flat for a saxophone, C but in bass clef for a trombone.

The second use case I have is to be able to arrange a section harmonically for a set of instruments. This means reusing the melody from a lead sheet, but then selecting notes for the other instruments to play as accompaniment. Often, this can be done on piano first, and then allocating the notes the piano plays out to the single-note horns.

I also wish to make play-along recordings, using ireal pro. This uses a very small subset of the lead sheet information: really just the chords and the repeats. A more complex version could be used to build MMA versions.

When I work with a more complex arrangement, I often find that I change my mind on some aspect: I wish to alter a repeat, a chord symbol, or the set of notes in the melody line. Ideally, that change would be reflected through all the various aspects of the score.

The musescore files are stored in an XML format developed for the program. This XML file is compressed, and the file extension reflects this: mscz. I have a song that is 40 measures long (32 with 8 bars repeated), with no more than 2 chords per measure and no more than 8 notes per measure. The underlying document used to store this song, when transposed for 6 different instruments, is 22037 lines long. This is not a document format meant to be consumed by humans.

Here is a sample of how a single articulation on a note is represented:

        <Articulation name="marcatoStaccato">
          <velocity>120</velocity>
          <gateTime>50</gateTime>
          </Articulation>

Here is an example of a chord:

 <Chord>
            <linkedMain/>
            <durationType>quarter</durationType>
            <Note>
              <linkedMain/>
              <pitch>74</pitch>
              <tpc>16</tpc>
              </Note>
            </Chord>

This is generated code. When software is responsible for creating more software, it will often produce the output in a format that can then be consumed by another tool designed to read human-readable source and convert it to binary. XML is a notoriously chatty format, and the number of lines in the Musescore file reflects that.
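Still, because the format is plain XML, simple text tools can mine it. A quick sketch, using the chord fragment from above (/tmp/fragment.xml is an arbitrary path; MIDI pitch 60 is middle C, so 74 is the D a ninth above it):

```shell
# Recreate the chord fragment from the post and pull out its MIDI pitch
# number with sed.
cat > /tmp/fragment.xml <<'EOF'
<Chord>
  <durationType>quarter</durationType>
  <Note>
    <pitch>74</pitch>
    <tpc>16</tpc>
  </Note>
</Chord>
EOF
sed -n 's|.*<pitch>\([0-9]*\)</pitch>.*|\1|p' /tmp/fragment.xml
# prints: 74
```

The same kind of one-liner run over a full uncompressed score gives a crude but diffable view of the pitches, which hints at what revision control over this format could look like.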

The “document” that we humans interface with is based on pen and paper. If I were to do this by hand, I would start with a blank page of pre-printed staff paper, and annotate on it such details as the clef, the key signature, the bar lines, the notes, and the chord symbols. This system is optimized for hand-written production, and human reading-on-the-stage consumption. This is how it is displayed on the screen as well. Why, then, is the underlying file format so unfriendly?

Because file formats are based on text, and we have restricted text to very specific rules as well: characters are represented as numeric values (ASCII, now Unicode), and anything implying the layout on the page needs to be embedded in the document as well.

There are options for turning text into sheet music: ABC and LilyPond are both formats that do this. Why don’t I use those? I have tried in the past. But when I am writing music, I am thinking in terms of notation, not in terms of text, and they end up preventing me from doing what I need. They don’t solve the transposition or other problems as-is. Perhaps the issue is that we need more tooling around one of these formats, but then we return to the problem of generated code.

Once the sheet music is printed out, the options for annotating a revision are to write on the sheet music, or to edit the original source and reprint it. In practice, both of these are done often. In front of me on my desk is a copy of a tune where both chords and individual notes have been changed in pencil. After living with these revisions for quite some time, I eventually got them recorded back into the source file and reprinted it.

Carrying around reams of sheet music quickly becomes infeasible. If you are in multiple groups, each of which has a large repertoire, the need to reduce the physical burden will likely lead you to an electronic solution: sheet music displayed on a tablet. However, the way you distribute sheet music here determines what options the artist has for annotating corrections and instructions on the music in front of them: most PDFs don’t let you edit them, and you cannot write on a screen with a ballpoint pen.

As a band director, I would like to be able to note changes on a score for a particular group and automate the distribution of that change to the group.

As a band member I would like to be able to make changes to a score and have them show up when I view the score electronically. Ideally, these changes would be reflected in the ireal/MMA version that I use to practice as well as the sheet music I read.

As a collaborator, I would like to be able to submit a suggested change to a score as played by my group and have the music director be able to incorporate my suggestion into the score.

As a collaborator, I would like to be able to receive suggestions from another collaborator, and provide feedback on the suggestion if I decide not to include it as-is.

As an arranger, I would like to be able to come up with a score on piano and automate the process of converting it to a set of instruments.

As a band leader, I would like to be able to come up with a lead sheet in Musescore and automate the process of converting it to MMA and irealpro formats. I would like to be able to import the accompaniment from an irealpro or MMA format into a musescore document as a first pass for the percussion or other rhythm section arrangement.

I had long thought about an application to make it possible to work on music together with other people. At West Point we used to say Collaborate and Graduate. For this, I say:

Collaborate and Orchestrate.

Episode 421 – CISA’s new SSDF attestation form

Posted by Josh Bressers on March 25, 2024 12:00 AM

Josh and Kurt talk about the new SSDF attestation form from CISA. The current form isn’t very complicated, and the SSDF has a lot of room for interpretation. But this is the start of something big. It’s going to take a long time to see big changes in supply chain security, but we’re confident they will come.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3345-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_421_CISA_new_SSDF_attestation_form.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_421_CISA_new_SSDF_attestation_form.mp3</audio>

Show Notes

When to Ansible? When to Shell?

Posted by Adam Young on March 19, 2024 06:00 PM

Any new technology requires a mental effort to understand. When trying to automate the boring stuff, one decision I have to make is whether to perform an operation with straight shell scripting or with Ansible. What I want to do is look at a simple Ansible playbook I have written, compare it to what the equivalent shell script would look like, and decide whether it would help my team to use Ansible in this situation.

The activity is building a Linux Kernel that comes from a series of topic branches applied on top of a specific upstream version. The majority of the work is done by a pre-existing shell script, so what we mostly need to do is git work.
Here is an annotated playbook. After each task, I note what it would take to do the same operation in shell.

---
- name: Build a kernel out of supporting branches
  hosts: servers
  remote_user: root
  vars:
    #defined in an external vars file so we can move ahead
    #kernel_version: 
    #soc_id: 
    test_dir: /root/testing
    linux_dir: "{{ test_dir }}/linux"
    tools_dir: "{{ test_dir }}/packaging"

  tasks:

  - name:  ssh key forwarding for gitlab
    ansible.builtin.copy:
      src: files/ssh.config
      dest: /root/.ssh/config
      owner: root
      group: root
      mode: '0600'
      backup: no
  #scp $SRCDIR/files/ssh.config $SSH_USER@$SSH_HOST:/root/.ssh/config
  #ssh $SSH_USER@$SSH_HOST chmod 600 /root/.ssh/config
  #ssh $SSH_USER@$SSH_HOST chown root:root /root/.ssh/config

  - name: create testing dir
    ansible.builtin.file:
      path: /root/testing
      state: directory
      #ssh $SSH_USER@SSH_HOST mkdir -p /root/testing


  - name: Install Build Tools
    ansible.builtin.yum:
      name:
        - make
        - gcc
        - git
        - dwarves
        - openssl
        - grubby
        - rpm-build
        - perl
      state: latest
    #ssh $SSH_USER@$SSH_HOST yum -y install make gcc git dwarves openssl grubby rpm-build perl

  - name: git checkout Linux Kernel
    ansible.builtin.git:
      repo: git@gitlab.com:{{ GIT_REPO_URL }}/linux.git
      dest: /root/testing/linux
      version: v6.5
#This one is a bit more complex, as it needs to check if the repo already 
#exists, and, if so, do a pull, otherwise do a clone.

  - name: add stable stable Linux Kernel repo
    ansible.builtin.command: git remote add stable https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
    changed_when: false
    args:
      chdir: /root/testing/linux
    ignore_errors: true
#There should be an Ansible git option for adding an additional remote.
#I could not find it, so I resorted to command.
#This is identical to running via ssh.

  - name: fetch stable stable Linux Kernel repo
    ansible.builtin.command: git fetch stable
    args:
      chdir: /root/testing/linux
#Same issue as above.  This shows that, when an Ansible command is
#well crafted, it can link multiple steps into a single task, reducing the
#need for an additional ssh-based command.

  - name: git checkout gen-early-patches
    ansible.builtin.git:
      repo: git@gitlab.com:{{ GIT_REPO_URL }}/packaging.git
      dest: "{{ tools_dir }}"
      version: main
#same issue as with the clone for  the Linux Kernel repository

  - name: generate early kernel patches
    ansible.builtin.shell:
      cmd: "{{ tools_dir }}/git-gen-early-patches.sh {{ tools_dir }}/gen-{{ soc_id }}-{{ kernel_version }}-general.conf"
      chdir: /root/testing/linux
#One benefit to running with Ansible is that it will automatically
#wrap a shell call like this with an error check.  This cuts down
#on boilerplate code and the potential to miss one.

  - name: determine current patch subdir
    ansible.builtin.find:
      paths: /root/testing/linux
      use_regex: true
      #TODO build this pattern from the linux kernel version
      pattern: ".*-v6.7.6-.*"
      file_type: directory
    register: patch_directory
#This would probably be easier to do in shell:
#BUILD_CMD=$( find . -path '*-v6.7.6-*/build.sh' | sort | tail -1 )
#

  - ansible.builtin.debug:
      msg: "{{ patch_directory.files | last }}"

  - set_fact:
      patch_dir: "{{ patch_directory.files | last }}"

  - ansible.builtin.debug:
      msg: "{{ patch_dir.path }}/build.sh"


  - name: build kernel
    ansible.builtin.shell:
      cmd: "{{ patch_dir.path }}/build.sh"
      chdir: /root/testing/linux
#Just execute the  value of BUILD_CMD
#ssh $SSH_USER@SSH_HOST /root/testing/linux/$BUILD_CMD

So, should this one be in Ansible or shell? It is a close call. Ansible makes it hard to do shell things, and this needs a bunch of shell things. But Ansible is cleaner for doing stuff on a remote machine from a known starting machine, which is how this is run: I keep the Ansible playbook on my laptop, connect via VPN, and run the playbook on a newly provisioned machine, or rerun it on a machine while we are in the process of updating the kernel version, etc.

This use case does not make use of one of the primary things that Ansible is very good at: running the same thing on a bunch of machines at the same time. Still, it shows that Ansible is at least worth evaluating if you are running a workflow that spans two machines and has to synchronize state between them. For most tasks, the Ansible play will be sufficient, and falling back to shell is not difficult for the rest.
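For comparison, the skeleton of a pure-shell version has to supply its own error handling, which each Ansible task provides for free. A minimal sketch, assuming SSH_USER and SSH_HOST are set, with an SSH override so the function can be dry-run:

```shell
#!/bin/bash
# Abort on the first failed command or unset variable -- roughly the
# check Ansible wraps around every task.
set -euo pipefail

SSH=${SSH:-ssh}

# Run one command on the remote build box, echoing it first for feedback.
run_remote() {
    echo "+ $*" >&2
    $SSH "$SSH_USER@$SSH_HOST" "$@"
}
```

A call like `run_remote yum -y install make gcc git dwarves openssl grubby rpm-build perl` would then replace the Install Build Tools task.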

Cleaning a machine

Posted by Adam Young on March 19, 2024 05:10 PM

After you get something working, you may find you missed a step in documenting how you got it working. You might have installed a package that you didn’t remember. Or maybe you set up a network connection. In my case, I find I have often brute-forced the SSH setup for later provisioning. Since this is done once and then forgotten, often in the push to “just get work done,” I have had to go back and redo it (again, usually manually) when I get to a new machine.

To avoid this, I am documenting what I can do to get a new machine up and running in a state where SSH connections (and forwarding) can be reliably run. This process should be automatable, but at a minimum, it should be understood.

To start with, I want to PXE boot the machine and reinstall the OS. Unless you are using a provisioning system like Ironic or Cobbler, this is probably a manual process. But you still can automate a good bit of it. The first step is to tell the IPMI-based (we are not on Redfish yet) BMC to boot to PXE upon the next reboot.

This is the first place to introduce some automation. All of our ipmitool commands are going to take mostly the same parameters, so we can take the easy step of creating a variable for this command and using environment variables to fill in the repeated values.

export CMD_IPMITOOL="ipmitool -H $DEV_BMCIP -U $IPMI_USER -I lanplus -P $IPMI_PASS"

One benefit of this is that you can now extract the variables into an environment file that you source separately from the functions. That makes this command reusable for other machines.
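A hedged sketch of what that looks like in practice (the file name and values are made up):

```shell
# Hypothetical per-machine environment file; keep one of these per box
# and source the right one before defining the command variable.
cat > /tmp/jade01.env <<'EOF'
export DEV_BMCIP=10.76.111.10
export IPMI_USER=admin
export IPMI_PASS=changeme
EOF

source /tmp/jade01.env
export CMD_IPMITOOL="ipmitool -H $DEV_BMCIP -U $IPMI_USER -I lanplus -P $IPMI_PASS"
echo "$CMD_IPMITOOL"
# prints: ipmitool -H 10.76.111.10 -U admin -I lanplus -P changeme
```

Swapping machines then means sourcing a different env file; the functions below stay unchanged.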

To force PXE booting on the next boot, we also make use of a function that can power cycle the system. Note that I include a little bit of feedback in the commands I use so that the user does not get impatient.

dev_power_on(){
        CMD_="$CMD_IPMITOOL power on"
        echo $CMD_
        $CMD_
}

dev_power_off(){
        CMD_="$CMD_IPMITOOL power off"
        echo $CMD_
        $CMD_
}

dev_power_status(){
        CMD_="$CMD_IPMITOOL power status"
        echo $CMD_
        $CMD_
}

dev_power_cycle(){
        dev_power_off
        dev_power_on
        dev_power_status
}

dev_pxe(){
        CMD_="$CMD_IPMITOOL chassis bootdev pxe"
        echo $CMD_
        $CMD_
        dev_power_cycle
}

Once this function is executed, the machine will boot to PXE mode. What this looks like is very dependent on your setup. There are two things that tend to vary. One is how you connect to the machine in order to handle the PXE setup. If you are lucky, you have a simple interface. We have a serial console concentrator here, so I can connect to the machine using a telnet command; I get this command from our lab manager. In other stages of life, I have had to use minicom to connect to a physical UART (serial port) to handle PXE boot configuration. I highly recommend the serial concentrator route if you can swing it.

But usually you have an IPMI-based option to open the serial console. Just be aware that this might conflict with, and thus disable, a UART-based way of connecting. For me, I can do this using:

$CMD_IPMITOOL sol activate

The other thing that varies is your PXE setup. We have a PXE menu that allows us to select between many different Linux distributions with various configurations. My usual preference is to do a minimal install, just enough to get the machine up, on the network, and accessible via SSH. This is because I will almost always do an upgrade of all packages (.deb/.rpm) on the system once it is booted. I also try to make sure I don’t have any major restrictions on disk space. Some of the automated provisioning approaches make the root filesystem or the home filesystem arbitrarily small. For development, I need to be able to build a Linux Kernel and often a lot of other software, and I don’t want to run out of disk space. A partitioning scheme that is logical for a production system may not work for me. My Ops team provides an option that has Fedora 39 Server GA + Updates, Minimal, Big Root. This serves my needs.

I tend to reuse the same machines, and thus have SSH host information in ~/.ssh/known_hosts. After a reprovision, this information is no longer accurate and needs to be replaced. In addition, the newly provisioned machine will not have an SSH public key on it that corresponds to my private key. If only they used FreeIPA…but I digress…

If I try to connect to the reprovisioned machine:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:EcPh9oazsjRaC9q8fJqJc8OjHPoF4vtXQljrHJhKDZ8.
Please contact your system administrator.
Add correct host key in /home/ayoung/.ssh/known_hosts to get rid of this message.
Offending ED25519 key in /home/ayoung/.ssh/known_hosts:186
  remove with:
  ssh-keygen -f "/home/ayoung/.ssh/known_hosts" -R "10.76.111.73"
Host key for 10.76.111.73 has changed and you have requested strict checking.
Host key verification failed.

The easiest way to wipe the stale entry is:

ssh-keygen -R $DEV_SYSTEMIP

Coupling this with provisioning the public key makes sense. And, as I wrote in the past, I need to set up SSH key forwarding for gitlab access. Thus, this is my current SSH prep function:

#Only run this once per provisioning
dev_prep_ssh(){
        ssh-keygen -R $DEV_SYSTEMIP
        ssh-copy-id -o StrictHostKeyChecking=no root@$DEV_SYSTEMIP
        ssh-keyscan gitlab.com 2>&1  | grep -v \# | ssh root@$DEV_SYSTEMIP "cat >> .ssh/known_hosts"
}

The first two steps could be done via Ansible as well. I need to find a better way to do the last step via Ansible (lineinfile seems to be broken by this), or to bash script it so that it is idempotent.
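A bash sketch of the idempotent version: a helper that appends a line only when it is missing, so rerunning the provisioning step does not duplicate fingerprints. (append_once is my own name, not a standard tool; on the remote side you would call it against /root/.ssh/known_hosts.)

```shell
# Append a line to a file only if that exact line is not already there.
# grep -x matches the whole line, -F disables regex interpretation.
append_once() {
    local line=$1 file=$2
    grep -qxF -- "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}
```

Piping the `ssh-keyscan gitlab.com | grep -v '^#'` output through a loop of append_once calls would then replace the blind `cat >>` above.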

Episode 420 – What’s going on at NVD

Posted by Josh Bressers on March 18, 2024 12:00 AM

Josh and Kurt talk about what’s going on at the National Vulnerability Database. NVD suddenly stopped enriching vulnerabilities, and it’s sent shock-waves through the vulnerability management space. While there are many unknowns right now, the one thing we can count on is things won’t go back to the way they were.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3341-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_420_Whats_going_on_at_NVD.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_420_Whats_going_on_at_NVD.mp3</audio>

Show Notes

Remotely checking out from git using ssh key forwarding.

Posted by Adam Young on March 13, 2024 11:16 PM

Much of my work is done on machines that are only on loan to me, not permanently assigned. Thus, I need to be able to provision them quickly and with a minimum of fuss. One action I routinely need to do is to check code out of a git server, such as gitlab.com. We use SSH keys to authenticate to gitlab. I need a way to do this securely when working on a remote machine. Here’s what I have found.

Key Forwarding

While it is possible to create an SSH key for every server I use, that leads to a mess. As important, it leads to an insecure situation where my SSH keys are sitting on machines that are likely to be reassigned to another user. To perform operations on git over SSH, I prefer to use key forwarding. That involves setting up on the development host a .ssh/config file that looks like this:

Host gitlab.com *.gitlab.com
   ForwardAgent yes

Depending on your setup, you might find it makes sense to just copy this file over as is, which is what I do. A more flexible scheme using something that appends these entries if they are non-existent may make sense if you are using Ansible and the ssh_config module or a comparable tool.
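If you want the append-if-missing behavior in plain shell rather than Ansible, here is a small sketch (ensure_forward_agent is a made-up name):

```shell
# Add the gitlab ForwardAgent stanza to an ssh config file, but only if
# a matching Host line is not already there, so reruns are harmless.
ensure_forward_agent() {
    local cfg=$1
    grep -qs '^Host .*gitlab.com' "$cfg" || \
        printf 'Host gitlab.com *.gitlab.com\n   ForwardAgent yes\n' >> "$cfg"
}
```

Running it twice against the same file leaves a single stanza, which is the property the straight copy lacks.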

known_hosts seeding

When you first ssh to a development host, there is a good chance it will not know about the git server host. In order to make connections without warnings or errors, you need to add the remote host’s fingerprints to the ~/.ssh/known_hosts file. This one-liner can do that:

ssh-keyscan gitlab.com 2>&1  | grep -v \# | ssh $DEV_USER@$DEV_HOST  "cat >> .ssh/known_hosts"

ssh-keyscan will produce output like this:

# gitlab.com:22 SSH-2.0-GitLab-SSHD
gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
# gitlab.com:22 SSH-2.0-GitLab-SSHD
gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
# gitlab.com:22 SSH-2.0-GitLab-SSHD
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
# gitlab.com:22 SSH-2.0-GitLab-SSHD
# gitlab.com:22 SSH-2.0-GitLab-SSHD

So I remove the comments and just add the fingerprints.

I tried to get this to work using Ansible and the lineinfile module, but I got an error 127…not sure why.

EDIT: I have corrected it. I should have used with_items, not with_lines, and ssh_keyscan_output.stdout_lines

---
- name: Set up ssh forwarding for gitlab 
  hosts: servers
  remote_user: root

  tasks:

  - name: keyscan gitlab.com
    command: ssh-keyscan gitlab.com
    register: ssh_keyscan_output

  - name: Save key fingerprints
    ansible.builtin.lineinfile:
      path: /root/.ssh/known_hosts
      line: "{{ item }}"
    with_items: "{{ ssh_keyscan_output.stdout_lines }}"

But something like that should be possible. When I did not first pre-seed the fingerprint, and I tried to do a git checkout over ssh, I would get this error:

 ssh root@10.76.111.74 "cd testing ;  git clone git@gitlab.com:$REMOTE_REPO "
bash: line 1: cd: testing: No such file or directory
Cloning into 'kernel-tools'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

I saw a comparable error via Ansible. The solution was to run the one-liner I posted above.

EDIT: One thing I did not make explicit is that you need to enable agent forwarding in your ssh command:

        ssh $DEV_USER@$DEV_SYSTEMIP -A
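Alternatively, the -A flag can be made the default for your development hosts in the laptop’s ~/.ssh/config (dev-* is a placeholder pattern; scope it as narrowly as you can, since agent forwarding extends trust to the remote host):

Host dev-*
   ForwardAgent yes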

Episode 419 – Malicious GitHub repositories

Posted by Josh Bressers on March 11, 2024 12:00 AM

Josh and Kurt talk about an attack against GitHub where attackers are creating malicious repositories then artificially inflating the number of stars and forks. This is really a discussion about how can we try to find signal in all the noise of a massive ecosystem like GitHub.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3336-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_419_Malicious_GitHub_repositories.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_419_Malicious_GitHub_repositories.mp3</audio>

Show Notes

Episode 418 – Being right all the time is hard

Posted by Josh Bressers on March 04, 2024 12:00 AM

Josh and Kurt talk about recent stories about data breaches, flipper zero banning, and realistic security. We have a lot of weird challenges in the world of security, but hard problems aren’t impossible problems. Sometimes we forget that.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3331-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_418_Being_right_all_the_time_is_hard.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_418_Being_right_all_the_time_is_hard.mp3</audio>

Show Notes

Episode 417 – Linux Kernel security with Greg K-H

Posted by Josh Bressers on February 26, 2024 12:10 AM

Josh and Kurt talk to GregKH about Linux Kernel security. We mostly focus on the topic of vulnerabilities in the Linux Kernel, and what being a CNA will mean for the future of Linux Kernel security vulnerabilities. That future is going to be very interesting.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3325-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_417_Linux_Kernel_security_with_Greg_K-H.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_417_Linux_Kernel_security_with_Greg_K-H.mp3</audio>

Show Notes

Anatomy of a Jam Performance

Posted by Adam Young on February 20, 2024 10:53 PM

My Band, The Standard Deviants, had a rehearsal this weekend. As usual, I tried to record some of it. As so often happens, our best performance was our warm up tune. This time, we performed a tune called “The SMF Blues” by my good friend Paul Campagna.

<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio">
<iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen" frameborder="0" height="329" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/ZWs9G1dTfJA?feature=oembed" title="SMF Blues" width="584"></iframe>
</figure>

What is going on here? Quite a bit. People talk about Jazz as improvisation, but that does not mean that “Everything” is made up on the spot. It takes a lot of preparation in order for it to be spontaneous. Here are some of the things that make it possible to “just jam.”

Blues Funk

While no one in the band besides me knew the tune before this session, the format of the tune is a 12-bar blues. This is probably the most common format for a jam tune. A lot has been written about the format elsewhere, so I won’t detail it here. All of the members of this group know a 12-bar blues, and can jump into one after hearing the main melody once or twice.

The beat is a rock/funk beat set by Dave, our Drummer. This is a beat that we all know. This is also a beat that Dave has probably put 1000 hours into playing and mastering in different scenarios. Steve, on Bass, has played a beat like this his whole career. We all know the feel and do not have to think about it.

This song has a really strong turn around on the last two bars of the form. It is high pitched, repeated, and lets everyone know where we are anchored in the song. It also tells people when a verse is over, and we reset.

Signals

The saxophone plays a lead role in this music, and I give directions to the members of the band. This is not to “boss” people around, but rather to reduce the number of options at any given point so that we as a unit know what to do. Since we can’t really talk in this scenario, the directions have to be simple. There are three main ways I give signals to the band.

The simplest is to step up and play. As a lead instrument, the saxophone communicates one of two things to the rest of the band: either we are playing the head again, or I am taking a solo. The only person who really has to majorly change his behavior is Adam 12 on trombone. Either he plays the melody with me or he moves into a supporting role. The rest of the band just adjusts their energy accordingly. We play louder for a repetition of the head, and they back off a bit for a solo.

The second way I give signals to the band is by direct eye contact. All I have to do is look back at Dave in the middle of a verse to let him know that the next verse is his to solo. We have played together long enough that he knows what that means. I reinforce the message by stepping to the side and letting the focus shift to him.

The third way is to use hand gestures. As a sax player, I have to make these brief, as I need two hands to play many of the notes. However, there are alternative fingerings, so for short periods, I can play with just my left hand, and use my right to signal. The most obvious signal here is the 4 finger hand gesture I gave to Adam 12 that we are going to trade 4s. This means that each of us play for four measures and then switch. If I gave this signal to all of the band, it would mean that we would be trading 4s with the drums, which we do on longer form songs. Another variation of this is 2s, which is just a short spot for each person.

One of the wonderful things about playing with this band is how even “mistakes” work out, such as when I tried to end the tune and no one caught the signal…and it became a bass solo that ended up perfectly placed. Steve realized we had all quieted down, took that as an indication for the bass to step up, and he did.

Practice Practice Practice

Everyone here knows their instrument. We have all been playing for many decades, and know what to do. But the horn is a harsh mistress, and she demands attention. As someone once said:

Skip a day and you know. Skip two days and your friends know. Skip three days and everyone knows.

Musical Wisdom from the ages

Adam 12 is the newest member of the band. It had been several years since he played regularly before he joined us. His first jam sessions were fairly short. We have since heard the results of the hard work that he has put in.

I try to play every day. It competes with many other responsibilities and activities in modern life. But my day seems less complete if I do not at least blow a few notes through the horn.

Listening

Music is timing. Our ears are super sensitive to changes in timing, whether at the micro level, translating to differences in pitch, or at the macro level, with changes in tempo…which is just another word for time. Dave is the master of listening. He catches on to a pattern one of us is playing and works it into his drumming constantly. Steve is the backbone of the band. Listening to the bass line tells us what we need to know about speed and location in the song. The more we play together, the more we pick up on each other’s cues through playing. The end effect is that we are jointly contributing to an event, an experience, a performance.

Episode 416 – Thomas Depierre on open source in Europe

Posted by Josh Bressers on February 19, 2024 12:00 AM

Josh and Kurt talk to Thomas Depierre about some of the European efforts to secure software. We touch on the CRA, MDA, FOSDEM, and more. As expected Thomas drops a huge amount of knowledge on what’s happening in open source. We close the show with a lot of ideas around how to move the needle for open source. It’s not easy, but it is possible.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3321-8" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_416_Thomas_Depierre_on_open_source_in_Europe.mp3?_=8" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_416_Thomas_Depierre_on_open_source_in_Europe.mp3</audio>

Show Notes

Swimming positions improvements

Posted by Adam Young on February 16, 2024 06:58 PM

I have been getting in the pool for the past couple of months, with a goal of completing a triathlon this summer. Today I got some pointers on things I need to improve in my freestyle technique.

Kick

My kick needs some serious work. I am pulling almost exclusively with my arms. As Jimmy (the lifeguard and a long time swim coach) said, “You’re killing yourself.”

He had me do a drill with just a kickboard. “If your thighs are not hurting you are doing it wrong.” I focused on barely breaking the surface with my heels, pointing my toes, and keeping my legs relatively straight…only a slight bend.

Next he had me do a lap with small flippers. “You shouldn’t feel like you are fighting them.” They force you to point your toes. It felt almost too easy. We ran out of time for me to try integrating it into a regular stroke.

For weight lifting, he recommended squats.

For a sprint he recommended 3 kicks per stroke (6 per pair of strokes). For longer courses, two kicks per stroke. I think I will shoot for 2 per stroke, as I am going for a 1/2 mile total.

Breathing

Improving my kick will improve my whole body position, including my breathing. It seems I am pulling my head too far out of the water, mainly because my legs are dropping. Although the opposite is true, too: pulling my head too far out of the water is causing my legs to drop. The two should be fixed together.

Arm Entry

One other swimmer at the pool that I asked for advice told me to “lead with my elbows” and then to think about entering the water “like a knife through butter”. Jimmy added that I should be reaching “long and lean.” Like a fast sailboat.

After that, the stroke should go out, and then finish in an S.

I think I need to glide more during the initial entry of the arm into the water.

Jimmy recommended a drill, either using a kickboard or a ring, and holding that out in front, and passing it from hand to hand.

Head position

I should be looking down and to the front, while the top of my head breaks the surface.

CentOS Connect 2024 report

Posted by Alexander Bokovoy on February 13, 2024 07:25 AM

February 1st-4th I participated in two events in Brussels: CentOS Connect and FOSDEM. FOSDEM is getting closer to its quarter-century anniversary next year. With 67 mini-conferences and another 30 events around it, it is considered one of the largest conferences in Europe on any topic. This report is about CentOS Connect.

CentOS Connect

CentOS Connect was a two-day event preceding FOSDEM. Organized by the CentOS project, it brought together contributors from multiple projects around CentOS Stream upstreams (such as Fedora Project) and downstreams (from Red Hat Enterprise Linux, AlmaLinux, Rocky Linux, and others), long-time users and community members. A lot of talks were given on both days and several special interest groups (SIGs) ran their workshops on the first day.

CentOS Integration SIG

I attended a workshop organized by the CentOS Integration SIG. The focus of this group is to define and run integration tests around CentOS Stream delivery to the public. While CentOS Stream composes are built using the messages that describe how the next RHEL minor version is composed, after being thoroughly tested by Red Hat’s quality engineering group, there is value in other community members testing CentOS Stream composes independently, as their use cases might differ. Right now the set of tests is defined as a t_functional suite maintained by the CentOS Core SIG, but this suite is not enough for more complex scenarios.

The Integration SIG is looking at improving the situation. Two projects were present and interested in this work: RDO (RPM-based distribution of OpenStack) and RHEL Identity Management. Both teams have to deal with coordinated package deliveries across multiple components and both need to get multi-host deployments tested. RDO can be treated as a product built on top of CentOS Stream where its own RDO deliveries are installed on top of CentOS Stream images. RHEL Identity Management (IdM), on the other hand, is a part of RHEL. It is famously known as a ‘canary in the RHEL mine’ (thanks to Stephen Gallagher’s being poetic a decade ago): if anything is broken in RHEL, chances are it will be visible in RHEL IdM testing. With RHEL IdM integration scope expanding to provide passwordless desktop login support, this becomes even more important and also requires multi-host testing.

CentOS Integration SIG’s proposal to address these requirements is interesting. Since the CentOS Stream compose event is independent of the tests, a proposal by Aleksandra Fedorova and Adam Samalik is to treat the compose event in a way similar to a normal development workflow. Producing a CentOS Stream compose would generate a merge request, with the metadata associated with it, against a particular git repository. This merge request can then be reacted to by various bots and individuals. Since the CentOS Stream compose is public, it is already accessible to third-party developers to run their tests and report back the results. This way qualification of the compose promotion to CentOS Stream mirrors can be affected by all parties and will help to keep already published composes in a non-conflicting state.

Since most of the projects involved already have their own continuous integration systems, adding another trigger for those would be trivial. For RHEL IdM and its upstream components (FreeIPA, SSSD, 389-ds, Dogtag PKI, etc.) it would also finally be possible to react to changes in CentOS Stream well ahead of time. In the past this was complicated by relative turbulence in the CentOS Stream composes early in the next RHEL minor release’s development time, when everybody updates their code and stabilization needs a lot of coordination. I am looking forward to Aleksandra’s proposal landing in the Integration SIG’s materials soon.

Update: Aleksandra has published her proposal at the Fedora Discussion board

CentOS Stream and EPEL

Two talks, by Troy Dawson and Carl George, described the current state of the EPEL repository and its future interactions with CentOS Stream 10. Troy’s discussion was fun, as usual: a lot of statistics obtained from the DNF repository trackers show that old habits are hard to get rid of and moving forward is a struggle for many users, despite security and supportability challenges with older software. Both CentOS 7 and CentOS Stream 8 end of life dates are coming in 2024, adding plenty of pressure of their own.

There are interesting statistics on the evolution of EPEL. In August 2023, at the Flock to Fedora conference, Troy and Carl pointed out that 727 packagers maintained 7298 packages in EPEL 7, 489 packagers handled 4968 packages in EPEL 8, and 396 packagers were handling 5985 packages in EPEL 9. Half a year later we have 7874 packages in EPEL 7, 5108 packages in EPEL 8, and 6868 packages in EPEL 9. The pace seems to be picking up for EPEL 9, with the end of life dates for older versions approaching. Since soon only EPEL 9 and EPEL 10 will be actively built, there will probably be more active packagers in them as well.

EPEL 10 is coming soon. It will bring a slight difference in package suffixes and overall reduction of complexity to packagers. It is great to see how close EPEL work tracks to ELN activities in Fedora. One thing worth noting is that every Fedora package maintainer is a potential EPEL maintainer as well because EPEL reuses Fedora infrastructure for package maintenance. Even if someone is not maintaining EPEL branches on their Fedora packages (I am not doing that myself – my packages are mostly in RHEL), it allows easy jump-in and collaboration. After all, if packages aren’t in RHEL but are in Fedora, their EPEL presence is just one git branch (and Fedora infrastructure ticket) away.

20 years of CentOS project

First day of the CentOS Connect event ended with a party to celebrate 20 years of the CentOS Project. First release (“CentOS version 2”) went out on May 14th, 2004 but since CentOS Connect is the closest big event organized by the project this year, getting a lot of contributors together to celebrate this anniversary seemed appropriate. A huge cake was presented, so big that it wasn’t possible to consume it completely during the party. It was delicious (like a lot of Belgian chocolate!) and next day coffee breaks allowed me to enjoy it a lot.

FreeIPA and CentOS project infrastructure

My own talk was originally planned to gather feedback from all projects which build on top of CentOS, as they use a very similar infrastructure. CentOS Project infrastructure is shared with Fedora Project and is built around FreeIPA as a backend and Noggin as a user management frontend. I asked in advance for feedback from the Fedora, CentOS, AlmaLinux, and RockyLinux infrastructure teams and didn’t get much, which prompted my own investigation. It is not an easy job, since most organizations aren’t really interested in telling the world the details of their core infrastructure. My hope was that I’d be able to see real world usage and maybe take some lessons from it back to the development teams.

While working on my talk, we also experienced an outage in Fedora infrastructure related to the upgrade of the FreeIPA setup used there. My team has been helping Fedora infrastructure administrators so I finally got the feedback I was looking for. That led to several fixes upstream and they have recently been backported to RHEL and CentOS Stream as well. However, the overall lack of feedback is concerning – or at least I thought so.

During CentOS Connect I had the opportunity to discuss this with both AlmaLinux (Jonathan) and RockyLinux (Luis) projects’ sysadmins. “It just works” is a common response I get. Well, that’s nice to hear but what was more exciting to hear is that these projects went a bit further than we did in Fedora and CentOS Stream with their environments. Luis has published the whole RockyLinux infrastructure code responsible for FreeIPA deployment. It is based heavily on ansible-freeipa, reuses the same components we developed for Fedora Infrastructure. Rocky also runs FreeIPA tests in OpenQA instance, similarly to Fedora, and hopefully Luis would contribute more tests to cover passwordless login, already available in CentOS Stream. This would be a very welcome contribution in the light of the Integration SIG activities, helping us to test more complex scenarios community-wide. AlmaLinux infrastructure also uses ansible-freeipa but the code is not publicly available, for whatever reason. Jonathan promised to rectify this problem.

Hallway tracks

I had a number of discussions with people from all kinds of CentOS communities. FreeIPA sits at a unique position by providing secure infrastructure to run POSIX workloads and linking it with modern requirements to web authentication. Our effort to improve passwordless login to Linux environments with reuse of Kerberos to propagate authorization state across multiple machines helps to solve the ‘between the worlds’ problems. That is something that many academic institutions have noticed already and many talks I had in the hallways were related to it.

Episode 415 – Reducing attack surface for less security

Posted by Josh Bressers on February 12, 2024 12:00 AM

Josh and Kurt talk about a blog post explaining how to create a very, very small container image. Generally in the world of security less is more, but it’s possible to remove too much. A lot of today’s security tooling relies on certain things existing in a container image; if we remove them we could actually end up with worse security than leaving them in. It’s a weird topic, but probably pretty important.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3316-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_415_Reducing_attack_surface_for_less_security.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_415_Reducing_attack_surface_for_less_security.mp3</audio>

Show Notes

Episode 414 – The exploited ecosystem of open source

Posted by Josh Bressers on February 05, 2024 12:00 AM

Josh and Kurt talk about open source projects proving builds, and things nobody wants to pay for in open source. It’s easy to have unrealistic expectations for open source projects, but we have the open source that capitalism demands.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3310-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_414_The_exploited_ecosystem_of_open_source.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_414_The_exploited_ecosystem_of_open_source.mp3</audio>

Show Notes

Vector times Matrix in C++

Posted by Adam Young on February 03, 2024 11:56 PM

The basic tool for neural networks is: vector times matrix equals vector. The first vector is your input pattern, the second is your output pattern. Stack these in a series and you have a deep neural network. The absolute simplest implementation I could find for this is in C++ using boost.

Here it is:



#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/io.hpp>

int main () {
    using namespace boost::numeric::ublas;
    matrix<double> m (3, 3);
    vector<double> v (3);
    for (unsigned i = 0; i < std::min (m.size1 (), v.size ()); ++ i) {
        for (unsigned j = 0; j < m.size2 (); ++ j)
            m (i, j) = 3 * i + j;
        v (i) = i;
    }


    vector<double> out1 = prod (m, v);
    vector<double> out2 = prod (v, m);

    std::cout << out1 << std::endl;
    std::cout << out2 << std::endl;


}

This is almost verbatim the sample code from Boost BLAS. Scroll down to the prod example. So now the question is, how does one go from that to a neural network? More to come.

Another question that comes to my mind is how would you optimize this if you have a vector based co-processor on your machine?

A Different Key Setup for Chords

Posted by Adam Young on January 29, 2024 01:45 AM

For my birthday last year, my family got me an accordion. Yes, I am one of those people that collect musical instruments, not for their monetary value but because I enjoy trying them out. I spent some time trying to wrap my mind and my hands around the one-button chord setup on the side. I know there are masters out there that have developed a set of techniques, but for those of us coming from other instruments, the arrangement of notes and chords in a circle of fifths is challenging.

It got me thinking about how else you could arrange them. This might also be somewhat inspired by guitar and bass setups, but here is what I came up with:

Start with a chromatic arrangement like this:

<figure class="wp-block-table">
G  A# C# E
F# A  C  D#
F  G# B  D
E  G  A# C#
D# F# A  C
D  F  G# B
C# E  G  A#
C  D# F# A
B  D  F  G#
A# C# E  G
A  C  D# F#
<figcaption class="wp-element-caption">Chromatic arrangement</figcaption></figure>

Assume that this is a keyboard with the bottom left nearest to you, pointing toward the ground. The top right is pointing away and skyward. You put your pinky on A, ring finger on C, middle finger on D#, and index finger on F#. If you play those four notes, you get a diminished 7th chord.

What should be apparent from this setup is that you can play all forms of 7th chords with your pinky on the root note. To play an A major seventh: pinky on A, ring on C#, middle on E, index on G#. If each button is a centimeter apart, your index finger reaches at most 4 cm from lowest to highest.

To play an A augmented: pinky on A, ring on C#, middle on F. Your index could play the root an octave up.

This same pattern would work up to F. We’d need to extend up four notes in each column to get something that would work for each chord.

Let’s move our index finger to the A four notes from the bottom. From here, an A dom7 chord in second inversion could be played with pinky (P) on C#, ring (R) on E, middle (M) on G. An A Maj7 moves the ring one more space up, which should still be comfortable to reach.

Let’s say you have to play a I-vi-ii-V7 progression. We’ll do this in C major, both because it is well known and because it is in the middle of our keyboard. Let’s say we start with the Cmaj7 in root position. Using the first letter of each finger as shorthand: P-C, R-E, M-G, I-B.

<figure class="wp-block-image size-large"></figure>

(I realize now that all the images are flipped 180 degrees from how I described it above. Just treat it like guitar tab….)

Now to Amin7, we’ll play it as I noted above: Index stays on the A. All other fingers stay in position: P-C, R-E, M-G, I-A.

<figure class="wp-block-image size-large"><figcaption class="wp-element-caption">A MINOR</figcaption></figure>

Now, to move to a Dmin7, we leave the pinky and the index in place (P-C, I-A). The ring and middle fingers move down two keys, to D and F.  Seems pretty efficient: P-C, R-D, M-F, I-A.

<figure class="wp-block-image size-large"></figure>

That leaves the G7. For this we want to use a third inversion, with the middle finger staying on the G. The ring finger extends up to the B. The pinky extends up to the D. The index finger extends up to the F. Again, seems pretty efficient.

<figure class="wp-block-image size-large"></figure>

I think I am going to do some paper prototyping here.

And, with gratitude, I think this is a more efficient G7 position.

<figure class="wp-block-image size-large"><figcaption class="wp-element-caption">Alternative voicing for G7.</figcaption></figure> <figure class="wp-block-image size-large"><figcaption class="wp-element-caption">My Accordion. Ain’t she pretty?</figcaption></figure>

Episode 413 – PyTorch and NPM get attacked, but it’s OK

Posted by Josh Bressers on January 29, 2024 12:00 AM

Josh and Kurt talk about an attack against PyTorch and NPM. The PyTorch attack shows the difficulty of operating a large open source project. The NPM situation continues to show the difficulty in trying to backdoor open source. Many people are watching, and it only takes one person to notice a problem and report it, and we all benefit.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3303-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_413_PyTorch_and_NPM_get_attacked_but_its_OK.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_413_PyTorch_and_NPM_get_attacked_but_its_OK.mp3</audio>

Show Notes

Episode 412 – Blame the users for bad passwords!

Posted by Josh Bressers on January 22, 2024 12:00 AM

Josh and Kurt talk about the 23andMe compromise and how they are blaming the users. It’s obviously the fault of the users, but there’s still a lot of things to discuss on this one. Every company has to care about cybersecurity now, even if they don’t want to.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3298-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_412_Blame_the_users_for_bad_passwords.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_412_Blame_the_users_for_bad_passwords.mp3</audio>

Show Notes

Episode 411 – The security tools that started it all

Posted by Josh Bressers on January 15, 2024 12:00 AM

Josh and Kurt talk about a grab bag of old technologies that defined the security industry. Technology like SELinux, SSH, Snort, ModSecurity and more all started with humble beginnings, and many of them created new security industries.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3292-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_411_The_security_tools_that_started_it_all.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_411_The_security_tools_that_started_it_all.mp3</audio>

Show Notes

Print the line after a match using AWK

Posted by Adam Young on January 08, 2024 06:46 PM

We have an internal system for allocating hardware to developers on a short term basis. While the software does have a web API, it is not enabled by default, nor in our deployment. Thus, we end up caching a local copy of the data about the machine. The machine names are a glom of architecture and location. So I make a file with the name of the machine, and a symlink to the one I am currently using.

One of the pieces of information I need out of this file is the IP address for ssh access. The format is two lines in the file, like this:

BMCIP
10.76.244.85
SYSTEMIP
10.76.244.65

While the SYSTEMIP is an easy regular expression match, what I need is the line after the match, and that is a little less trivial. Here is what I have ended up with (using this post as a reference).

awk 'f{print; f=0};/SYSTEMIP/{f=1}' current_dev 

Which produces the expected output:

10.76.244.65
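For comparison, the same line-after-match extraction can be done with grep or sed. These are just equivalent sketches, not from the original post, and the sample file at /tmp/current_dev is recreated here purely for illustration:

```shell
# Recreate a sample file in the format described above.
cat > /tmp/current_dev <<'EOF'
BMCIP
10.76.244.85
SYSTEMIP
10.76.244.65
EOF

# grep -A1 prints the matching line plus one line after it;
# tail keeps only the line after the match.
grep -A1 '^SYSTEMIP$' /tmp/current_dev | tail -n 1

# sed: on a SYSTEMIP match, n reads the next line and p prints it.
sed -n '/^SYSTEMIP$/{n;p}' /tmp/current_dev
```

Both commands print `10.76.244.65`, matching the awk output; the awk version has the advantage of not depending on GNU grep's `-A` extension.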

Running a UEFI program upon reboot

Posted by Adam Young on January 08, 2024 03:51 AM

Linux is only one of the Operating Systems that our chips need to support. While some features need a complete OS for testing, we want to make sure that features are as tested as possible before we try to debug them through Linux.

My current code has to talk to the firmware. Thus we need to have a responder in the firmware. Testing this from Linux has to go through the entire Linux networking stack.

The developers of the firmware already have to deal with UEFI. UEFI is installed with the firmware. Since writing UEFI apps is basically like writing DOS apps (according to Ron Minnich), it is not a huge demand on our firmware team to have them use UEFI as the basis for writing their test code. Thus, they produced a simple app called RespTest.efi that checks that their code does what they expect it to do.

The normal process is that they boot the UEFI shell, switch to the file system, and copy up the executable via tftp. Once it is on the system, they run it manually. After the first time the executable is loaded, it is persisted on the system. Thus, running it a second or third time, after, say, a change to the firmware, can be automated from the operating system.

ls /boot/efi/
EFI  ResponderTest.efi  spinor.img

The above was executed from a Linux system running that has the efi partition mounted in /boot/efi. The RespTest.efi binary was copied up using tftp.

To see the set of UEFI boot options, we can use efibootmgr. These are mostly various flavors of PXE:

efibootmgr
BootCurrent: 0001
Timeout: 10 seconds
BootOrder: 0000,0001,0002,0003,0004,0005,0006,0007,0008,0009,000A
Boot0000* UiApp	FvVol(5c60f367-a505-419a-859e-2a4ff6ca6fe5)/FvFile(462caa21-7614-4503-836e-8ab6f4662331)
Boot0001* Fedora (Samsung SSD 970 EVO Plus 250GB)	PcieRoot(0x60000)/Pci(0x1,0x0)/Pci(0x0,0x0)/NVMe(0x1,00-25-38-5A-91-51-38-0A)/HD(1,GPT,4c1510a2-110f-492f-8c64-50127bc2e552,0x800,0x200000)/File(\EFI\fedora\shimaa64.efi) File(.????????)
Boot0002* UEFI PXEv4 (MAC:0C42A15A9B28)	PcieRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/MAC(0c42a15a9b28,1)/IPv4(0.0.0.00.0.0.0,0,0){auto_created_boot_option}
...
Boot0009* UEFI HTTPv6 (MAC:0C42A15A9B29)	PcieRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x1)/MAC(0c42a15a9b29,1)/IPv6([::]:<->[::]:,0,0)/Uri(){auto_created_boot_option}
Boot000A* UEFI Shell	FvVol(5c60f367-a505-419a-859e-2a4ff6ca6fe5)/FvFile(7c04a583-9e3e-4f1c-ad65-e05268d0b4d1)

Let’s say I want to boot to the UEFI shell next time. Since that has an index of 000A, I can run the following command to tell UEFI to boot to the shell once and only once. After that it will return to the default order:

efibootmgr --bootnext 000A

Here are the first few lines of output from that command:

# efibootmgr --bootnext 000A
BootNext: 000A
BootCurrent: 0001
Timeout: 10 seconds

If I want to add an additional boot option, I can use the --create option. To tell it to boot the ResponderTest.efi executable:

efibootmgr --create -d /dev/nvme0n1p1 --label RespTest -l RespTest.efi

The last line of output from that command is:

Boot000B* RespTest	HD(1,GPT,4c1510a2-110f-492f-8c64-50127bc2e552,0x800,0x200000)/File(RespTest.efi)

If I then want to make UEFI select this option upon next boot…

# efibootmgr --bootnext 000B
BootNext: 000B
BootCurrent: 0001
Timeout: 10 seconds

Note that the executable is in /boot/efi mounted on the Linux filesystem, but that will be FS0: in UEFI. If you wish to put something further down the tree, you can, but remember that UEFI uses backslashes. I think it would look something like this…but I have not tried it yet:

efibootmgr --create -d /dev/nvme0n1p1 --label RespTest -l EFI\\RespTest.efi

Note the double backslash to avoid Linux shell escaping….I think this is necessary.

Episode 410 – Package identifiers are really hard

Posted by Josh Bressers on January 08, 2024 12:00 AM

Josh and Kurt talk about package identifiers. We break this down in the context of an OpenSSF response to a CISA paper on software identifications. The identifiers that get all the air time are purl, CPE, SWID, and OmniBOR. This is a surprisingly complex problem space. It feels easy, but it’s not.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3289-7" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_410_Package_identifiers_are_really_hard.mp3?_=7" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_410_Package_identifiers_are_really_hard.mp3</audio>

Show Notes

On The Lineageos User Exposure Window

Posted by Robbie Harwood on January 07, 2024 05:00 AM

Unfortunate scheduling means that LineageOS users are exposed to publicly disclosed vulnerabilities for typically nine days per month. Here's why, and what I (a user who is otherwise uninvolved in the project) think could be done to improve the situation.

Per official Android documentation:

Important: The Android Security Bulletins are published on the first Monday of each month unless that Monday falls on a holiday. If the first Monday of the month is a holiday the bulletins will be published on the following work day.

Holiday time off makes sense for human reasons, though it makes release days inconsistent. (And I assume we're talking US holidays because Google, though this isn't stated.) Adherence to this isn't great - most egregiously, something happened that resulted in a March 13, 2023 release, which is probably the largest slip since August 13, 2015 (which is as far back as the table goes). But I've worked in security engineering long enough to know that sometimes dates will slip, and I'm sure it's not intentional. Clerical errors like the November 2023 bulletin date are also inevitable.

But let's assume for the sake of simplicity that disclosure happens as written: on the first Monday of the month (or the following day) a bulletin is published listing the CVEs included, their type/severity, and which Android (i.e., AOSP) versions are affected. Importantly absent here are the patches for these vulnerabilities, which per the bulletin boilerplate take 2 days to appear. So what's the point of the bulletin?

In any case, this means that patches won't be available until Wednesday or Thursday. Lineage posts updates on Friday; per datestamps, these updates are built on Thursdays. This means that in order to have the security fixes (and security string) in the next Lineage update, there is less than a day's window to gather patches, backport them, post pull requests, run CI, get code review, etc., before the weekly build is made - a seemingly herculean feat. And some months, it won't be technically possible at all.

So since patches will not land in the next release, all users of official builds are exposed to publicly disclosed vulnerabilities for typically nine days. (I think it's 8-10, but I don't discount off-by-one, and that also relies on Lineage not slipping further; everyone's human and it does happen, especially on volunteer projects.)

Clearly, the schedule is a problem. Here are my initial thoughts on how this might be addressed:

  1. Release Lineage updates on a different day. If it takes four days to run through the backport+review+build pipelines, then plan to release on Monday. Users will still be exposed for the length of time it takes to backport+review+build.
  2. Add Lineage to the embargo list. This would mean that backport and review could take place prior to public disclosure, and so including the security fixes in the upcoming Friday update becomes doable. Users are still exposed for 2 days, but that's way better than 9. (I am not involved in the Lineage project so it's possible they already are included, but that seems unlikely given security update PRs are not usually sent on the day code becomes publicly available.)
  3. Stop the bulletin/patch desync. I cannot come up with a good reason why the security bulletin and patches are released at different times, let alone releasing the bulletin before the patches. This makes reasoning about fix availability unnecessarily complicated. However, it would probably require Google to care.
  4. Update Lineage less frequently. It seems like the least amount of work, but if, e.g., Lineage released once per month, then the day of the release could be whenever the security fixes for the month land. This is also helpful because it minimizes the number of times that flashing needs to occur. (But that may be a larger discussion for a different post.)

These are not mutually exclusive, either.

To close out, I need to head off something that I am quite sure will come up. Yes, I am capable of running my own custom Android builds and kernel trees. I do not want to do this, and more importantly, users who are not developers by definition cannot do it. That is to say: OTA updates and official builds are the most important versions of the project, not whatever custom stuff we make. (After all, if that weren't the case, there wouldn't be five major forks over the signature spoofing decision.)

How Did I break my Code

Posted by Adam Young on January 04, 2024 11:17 PM

Something I did in a commit broke my code. I have a git bisect that shows me the last good commit and the first bad one.

The code in question is a device driver for MCTP over PCC. MCTP is a network protocol used to let different components on a single system talk to each other. PCC is a shared-buffer mechanism with a “doorbell”: a register that triggers an interrupt when written. Another team needs to use this driver.

In looking through the output, a co-worker stated “it looks like you are sending the wrong buffer.” I suspected he was correct.

The buffers are pulled from ACPI information in a table specific to PCC, called the PCCT.

This is from the PCCT, which is common to both the good and bad run, and shows the machine (physical) addresses of the memory.


03 [Extended PCC Master Subspace]
               Base Address       = 00004000000D4004


04 [Extended PCC Slave Subspace]
               Base Address       = 00004000000D4404

Here is the output data I get from a run of the code at the tip of the tree.




[  644.338469] remap_pcc_comm_addresses outbox pcc_chan->shmem_base_addr = 4000000d4004 mapped to ffff800080621004
[  644.348632] remap_pcc_comm_addresses inbox pcc_chan->shmem_base_addr = 4000000d4404 mapped to ffff80008061f404

[  828.014307] _mctp_get_buffer pcc_chan->shmem_base_addr = 1  ptr = 000000007f3ab582 
[  828.021950] mctp_pcc_tx buffer = ffff80008061f404

What I see is that I am sending data on the inbox address, not the outbox. So where did I swap it?

541392a553a2c2b7f741d89cff0ab8014a367299 is the first bad commit.

Let’s take a look at the code in that commit. The diff shows a lot of changes in _mctp_get_buffer. Here’s what it looks like in non-diff form:

static unsigned char *_mctp_get_buffer(struct mbox_chan *chan)
{
        void __iomem *pcc_comm_addr;
        struct pcc_mbox_chan *pchan = (struct pcc_mbox_chan *)chan;

        if (pchan == mctp_pcc_dev->out_chan) {
                pcc_comm_addr = mctp_pcc_dev->pcc_comm_outbox_addr;
        } else {
                pcc_comm_addr = mctp_pcc_dev->pcc_comm_inbox_addr;
        }
        pr_info("%s pcc_chan->shmem_base_addr = %llx  ptr = %p\n",
                        __func__, pchan->shmem_base_addr, pcc_comm_addr);
        return (unsigned char *)pcc_comm_addr;
}

How could this fail? Well, what if the cast from chan to pchan is bogus? If that happens, the if block will not match, and we will end up in the else block.

Why would I cast like that? Assume for a moment (as I did) that the definitions looked like this:

struct pcc_mbox_chan {
        struct mbox_chan chan;
};

Ignore anything after the member variable chan. This is a common pattern in C when you need to specialize a general case.

This is not what the code looks like. Instead, it looks like this:

struct pcc_mbox_chan {
        struct mbox_chan *chan;
};

The structure that chan points to was allocated outside of this code and is not embedded in pcc_mbox_chan, so the cast is invalid.

The else block always matched.
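To see why the cast fails, here is a minimal sketch using Python's ctypes, with made-up field contents standing in for the kernel structures: when the inner structure is embedded as the first member, the container and the member share an address, so the cast is valid; when it is held by pointer, they do not.

```python
from ctypes import POINTER, Structure, addressof, c_int, pointer

class MboxChan(Structure):
    _fields_ = [("data", c_int)]

# Embedded as the first member: the container and the member
# start at the same address, so casting between them is valid.
class PccEmbedded(Structure):
    _fields_ = [("chan", MboxChan)]

# Held by pointer: the mbox_chan lives somewhere else entirely,
# so casting the pointer target back to the container is bogus.
class PccByPointer(Structure):
    _fields_ = [("chan", POINTER(MboxChan))]

emb = PccEmbedded()
assert addressof(emb) == addressof(emb.chan)

outside = MboxChan()
byptr = PccByPointer(pointer(outside))
assert addressof(byptr) != addressof(outside)
```

In the embedded layout a pointer to the container is also a pointer to the member; in the pointer layout the two objects live at unrelated addresses, so the comparison in the if block can never match.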

The reason the old code ran without crashing is it was getting a valid buffer, just not the right one. I was trying to send the “outbox” buffer, but instead sent the “inbox” buffer.

Here is the corrected version of the if block.

if (chan == mctp_pcc_dev->out_chan->mchan){
        pcc_comm_addr = mctp_pcc_dev->pcc_comm_outbox_addr;
}else if (chan == mctp_pcc_dev->in_chan->mchan){
        pcc_comm_addr = mctp_pcc_dev->pcc_comm_inbox_addr;
}
if (pcc_comm_addr){
        pr_info("%s buffer ptr = %px \n",__func__, pcc_comm_addr );
        return (unsigned char * )pcc_comm_addr;
}

What made this much harder to debug was that the “real” system I was testing against had changed, and I was unable to test my code against it for a week or so; by then I had added additional commits on top of the original one. This shows the importance of testing early, testing often, and testing accurately.

To give a sense of how things got more complicated, here is the current version of the get_buffer function:

static unsigned char *_mctp_get_buffer(struct mbox_chan *chan)
{
        struct mctp_pcc_ndev *mctp_pcc_dev = NULL;
        struct list_head *ptr;
        void __iomem *pcc_comm_addr = NULL;

        list_for_each(ptr, &mctp_pcc_ndevs) {
                mctp_pcc_dev = list_entry(ptr, struct mctp_pcc_ndev, head);

                if (chan == mctp_pcc_dev->out_chan->mchan) {
                        pcc_comm_addr = mctp_pcc_dev->pcc_comm_outbox_addr;
                } else if (chan == mctp_pcc_dev->in_chan->mchan) {
                        pcc_comm_addr = mctp_pcc_dev->pcc_comm_inbox_addr;
                }
                if (pcc_comm_addr) {
                        pr_info("%s buffer ptr = %px\n",
                                __func__, pcc_comm_addr);
                        return (unsigned char *)pcc_comm_addr;
                }
        }
        /* TODO: this should never happen, but if it does, it will
         * Oops. Handle the error better. */
        pr_err("%s unable to match mctp_pcc_ndev", __func__);
        return NULL;
}

Why is it now looking through a list? One of the primary changes I had to make was to move the driver from only supporting a single device to supporting multiple. Finding which buffer to get from struct mbox_chan is again going from general to specific.

This code is still in flight. The list lookup may well not survive the final cut, as there is possibly a better way to get the buffer from the original structure, but it is not clear cut. This works, and lets the unit tests continue to pass. This allows the team to make progress. It might be technical debt, but that is a necessary part of a complicated development integration process.

Episode 409 – You wouldn’t hack a train?

Posted by Josh Bressers on January 01, 2024 12:00 AM

Josh and Kurt talk about how some hackers saved the day with a Polish train. We delve into a discussion about how we don’t really own anything anymore if you look around. There’s a great talk from the Blender Conference about this and how GPL makes a difference in the world of software ownership. It’s sort of a dire conversation, but not all hope is lost.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3285-8" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_409_You_wouldnt_hack_a_train_fixed.mp3?_=8" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_409_You_wouldnt_hack_a_train_fixed.mp3</audio>

Show Notes

Episode 408 – Does Kubernetes need long term support?

Posted by Josh Bressers on December 25, 2023 12:00 AM

Josh and Kurt talk about a story asking for a Kubernetes LTS. Should open source projects have LTS versions? What does LTS even mean? Why is maintaining software so hard? It’s a lively discussion all about the past, present, and future of open source LTS.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3280-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_408_Does_Kubernetes_need_long_term_support_fixed.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_408_Does_Kubernetes_need_long_term_support_fixed.mp3</audio>

Show Notes

Episode 407 – Should Santa use AI?

Posted by Josh Bressers on December 18, 2023 12:00 AM

It’s the 2023 Christmas Spectacular! Josh and Kurt talk about what would happen if Santa starts using AI to judge which children are naughty and nice. There’s some fun in this one, but it does get pretty real. While we tried to discuss Santa using AI, the reality is this sort of AI is coming for many of us. AI will be making decisions for all of us in the near future (if it isn’t already). While less fun than we had hoped for, it’s an important conversation.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3272-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_407_Should_Santa_use_AI.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_407_Should_Santa_use_AI.mp3</audio>

Show Notes

Episode 406 – The security of radio

Posted by Josh Bressers on December 11, 2023 12:00 AM

Josh and Kurt talk about a few security stories about radio. The TETRA:BURST attack on police radios, spoofing GPS for airplanes near Iran, and Apple including cellular radios in the macbooks. The common thread between all these stories is looking at the return on investment for security. Sometimes good enough security is fine, sometimes it’s not worth fixing certain security problems because the risk vs reward doesn’t work out.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3268-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_406_The_security_of_radio.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_406_The_security_of_radio.mp3</audio>

Show Notes

Episode 405 – Modding games isn’t cheating and security isn’t fair

Posted by Josh Bressers on December 04, 2023 12:00 AM

Josh and Kurt talk about Capcom claiming modding a game is akin to cheating. The arguments used are fundamentally one of equity vs equality. Humans love to focus on equality instead of equity when we deal with most problems. This is especially true in the world of security. Rather than doing something that has a net positive, we ignore the details and focus on doing something that feels “right”.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3263-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_405_Modding_games_isnt_cheating_and_security_isnt_fair.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_405_Modding_games_isnt_cheating_and_security_isnt_fair.mp3</audio>

Show Notes

recvfrom system call in python

Posted by Adam Young on November 27, 2023 07:21 PM

With a socket created, and a message sent, the next step in our networking journey is to receive data.

Note that I used this post to get the parameter right. Without using the ctypes pointer function I got an address error.

print("calling recvfrom")
recv_sockaddr_mctp = struct_sockaddr_mctp(AF_MCTP, 0, mctp_addr, 0, 0)
buflen = 2000  # renamed from "len" to avoid shadowing the builtin
rxbuf = create_string_buffer(buflen)
sock_addr_sz = c_int(sizeof(recv_sockaddr_mctp))
MSG_TRUNC = 0x20
rc = libc.recvfrom(sock, rxbuf, buflen, MSG_TRUNC, byref(recv_sockaddr_mctp), pointer(sock_addr_sz))
errno = get_errno_loc()[0]

if rc < 0:
    print("rc = %d errno = %d" % (rc, errno))
    exit(rc)

print(rxbuf.value)

From previous articles you might be wondering about these values.

AF_MCTP = 45
mctp_addr = struct_mctp_addr(0x32)

However, the real magic is the structures defined using ctypesgen. I recommend reading the previous article for the details.

While the previous two articles are based on the connect code example that I adjusted and posted here, the recvfrom code is an additional call. The C code it is based on looks like this:

    addrlen = sizeof(addr);

    rc = recvfrom(sd, rxbuf, len, MSG_TRUNC,
                (struct sockaddr *)&addr, &addrlen);
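The effect of the MSG_TRUNC flag used in that call can be demonstrated with an ordinary UDP socket pair (the loopback sockets and the 100-byte payload here are stand-ins, not MCTP): on Linux, recvfrom with MSG_TRUNC returns the full datagram length even when the receive buffer is shorter, which is how truncation can be detected.

```python
import ctypes
import socket

libc = ctypes.CDLL("libc.so.6", use_errno=True)
MSG_TRUNC = 0x20

# A pair of loopback UDP sockets stands in for the MCTP endpoints.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"x" * 100, rx.getsockname())

# Receive into a deliberately short buffer. With MSG_TRUNC the kernel
# reports the full datagram length (100), not the bytes copied (10).
buf = ctypes.create_string_buffer(10)
rc = libc.recvfrom(rx.fileno(), buf, 10, MSG_TRUNC, None, None)
print(rc)  # 100 on Linux
```

Comparing the return value against the buffer size tells you whether the 2000-byte rxbuf above was actually big enough for the message.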

Episode 403 – Does the government banning apps work?

Posted by Josh Bressers on November 27, 2023 12:00 AM

Josh and Kurt talk about the Canadian Government banning WeChat and Kaspersky. There’s a lot of weird little details in this conversation. It fundamentally comes down to a conversation about risk. It’s easy to spout nonsense about risk, but having an honest discussion about it is REALLY complicated. But the government plays by a very different set of rules.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3258-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_403_Does_the_government_banning_apps_work.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_403_Does_the_government_banning_apps_work.mp3</audio>

Show Notes

sendto system call from python

Posted by Adam Young on November 22, 2023 08:18 PM

Once we open a socket, we probably want to send and receive data across it. Here is the system call we need to make in order to send data, as I wrote about in my last post:

c = sendto(sd, buf, sizeof(buf), 0,
                (struct sockaddr *)&addr, sizeof(addr));

Note the cast of the addr parameter. This is a structure with another structure embedded in it. I tried hand-jamming the structure, but the data sent across the system call interface was not what I expected. To make it work, I used a utility called ctypesgen with the header I separated out in my last post. Running this:

ctypesgen ./spdmcli.h

generates a lot of code, much of which I don’t need. The part I do need looks like this:


__uint8_t = c_ubyte# /usr/include/bits/types.h: 38
__uint16_t = c_ushort# /usr/include/bits/types.h: 40
__uint32_t = c_uint# /usr/include/bits/types.h: 42
uint8_t = __uint8_t# /usr/include/bits/stdint-uintn.h: 24
uint16_t = __uint16_t# /usr/include/bits/stdint-uintn.h: 25
uint32_t = __uint32_t# /usr/include/bits/stdint-uintn.h: 26

# /root/spdmcli/spdmcli.h: 5
class struct_mctp_addr(Structure):
    pass

struct_mctp_addr.__slots__ = [
    's_addr',
]
struct_mctp_addr._fields_ = [
    ('s_addr', uint8_t),
]

# /root/spdmcli/spdmcli.h: 9
class struct_sockaddr_mctp(Structure):
    pass

struct_sockaddr_mctp.__slots__ = [
    'smctp_family',
    'smctp_network',
    'smctp_addr',
    'smctp_type',
    'smctp_tag',
]
struct_sockaddr_mctp._fields_ = [
    ('smctp_family', uint16_t),
    ('smctp_network', uint32_t),
    ('smctp_addr', struct_mctp_addr),
    ('smctp_type', uint8_t),
    ('smctp_tag', uint8_t),
]

# /root/spdmcli/spdmcli.h: 3
try:
    MCTP_TAG_OWNER = 0x08
except:
    pass

mctp_addr = struct_mctp_addr# /root/spdmcli/spdmcli.h: 5

sockaddr_mctp = struct_sockaddr_mctp# /root/spdmcli/spdmcli.h: 9
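The generated layout also shows why hand-jamming the bytes failed: on a typical 64-bit Linux ABI, natural alignment inserts two bytes of padding after the uint16_t family field, so the structure occupies 12 bytes rather than the 9 a naive packing would suggest. A quick sketch using the same field layout:

```python
from ctypes import Structure, c_uint8, c_uint16, c_uint32, sizeof

class MctpAddr(Structure):
    _fields_ = [("s_addr", c_uint8)]

class SockaddrMctp(Structure):
    _fields_ = [
        ("smctp_family", c_uint16),
        ("smctp_network", c_uint32),
        ("smctp_addr", MctpAddr),
        ("smctp_type", c_uint8),
        ("smctp_tag", c_uint8),
    ]

# Two bytes of padding follow smctp_family so smctp_network is 4-aligned.
print(SockaddrMctp.smctp_network.offset)  # 4
print(sizeof(SockaddrMctp))               # 12, not 9
```

ctypes applies the native alignment rules by default, which is exactly what the kernel expects on the other side of the system call.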

With those definitions, the following code makes the system call with the parameter data in the right order:


mctp_addr = struct_mctp_addr(0x32) # /root/spdmcli/spdmcli.h: 16
sockaddr_mctp = struct_sockaddr_mctp(AF_MCTP, 1, mctp_addr , 5, 0x08)

p = create_string_buffer(b"Held") 
p[0] = 0x10 
p[1] = 0x82

sz = sizeof(sockaddr_mctp)
print("calling sendto")


rc = libc.sendto(sock, p, sizeof(p), 0, byref(sockaddr_mctp), sz) 
errno = get_errno_loc()[0]
print("rc = %d errno =  %d" % (rc, errno)   )

if (rc < 0 ):
    exit(rc)

print("OK")

With this working, the next step is to read the response.

Updated MCTP send code

Posted by Adam Young on November 22, 2023 04:26 PM

While the existing documentation is good, there are a couple of things that have changed since it was originally written, and I had to make a couple of minor adjustments to get it to work. Here’s the code to send a message. The receive part should work as originally published; what is important is the set of headers. I built and ran this on an AARCH64 platform running Fedora 38.

I split the code into a header and a .c file for reasons I will address in another article.

spdmcli.h

#include <stdint.h>

#define MCTP_TAG_OWNER 0x08

struct mctp_addr {
    uint8_t             s_addr;
};

struct sockaddr_mctp {
    uint16_t            smctp_family;
    uint32_t            smctp_network;
    struct mctp_addr    smctp_addr;
    uint8_t             smctp_type;
    uint8_t             smctp_tag;
};

I also coded this for SPDM, not MCTP control messages. The difference is the value of addr.smctp_type.

EDIT: I also added the recvfrom call to make this complete.

spdmcli.c

/* struct sockaddr_mctp is supplied by spdmcli.h here; including
 * <linux/mctp.h> as well would redefine it */
#include <linux/if_link.h>
#include <linux/rtnetlink.h>

#include <err.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>

#include "spdmcli.h"

int main(void)
{
    struct sockaddr_mctp addr = { 0 };
    char buf[] = "hello, world!";
    unsigned char rxbuf[2048];
    socklen_t addrlen;
    int sd, rc;

    /* create the MCTP socket */
    sd = socket(AF_MCTP, SOCK_DGRAM, 0);
    if (sd < 0)
        err(EXIT_FAILURE, "socket() failed");

    /* populate the remote address information */
    addr.smctp_family = AF_MCTP;     /* we're using the MCTP family */
    addr.smctp_addr.s_addr = 0x32;   /* send to remote endpoint ID 0x32 */
    addr.smctp_type = 5;             /* encapsulated protocol type: SPDM = 5 */
    addr.smctp_tag = MCTP_TAG_OWNER; /* we own the tag, and so the kernel
                                        will allocate one for us */
    addr.smctp_network = 1;

    /* send the MCTP message */
    rc = sendto(sd, buf, sizeof(buf), 0,
                (struct sockaddr *)&addr, sizeof(addr));

    if (rc != sizeof(buf))
        err(EXIT_FAILURE, "sendto() failed");

    /* EDIT: the following code receives a message from the network
       and prints it to the console */
    addrlen = sizeof(addr);
    rc = recvfrom(sd, rxbuf, sizeof(rxbuf), MSG_TRUNC,
                  (struct sockaddr *)&addr, &addrlen);
    if (rc < 0)
        err(EXIT_FAILURE, "recvfrom() failed");

    printf("recv rc = %d\n", rc);

    for (int i = 0; i < rc; i++)
        printf("%x", rxbuf[i]);
    printf("\n");

    return EXIT_SUCCESS;
}

socket system call from python

Posted by Adam Young on November 21, 2023 08:37 PM

While the Python socket API is mature, it does not yet support MCTP. I thought I would thus try to make a call from python into native code. The first step is to create a socket. Here is my code to do that.

Note that this is not the entire C code needed to make a network call, just the very first step. I did include the code to read errno if the call fails.

#!/usr/bin/python3
from ctypes import *
libc = CDLL("/lib64/libc.so.6")
AF_MCTP = 45
SOCK_DGRAM  = 2
rc = libc.socket (AF_MCTP, SOCK_DGRAM, 0)
#print("rc = %d " % rc)
get_errno_loc = libc.__errno_location
get_errno_loc.restype = POINTER(c_int)
errno = get_errno_loc()[0]
print("rc = %d errno =  %d" % (rc, errno)   )
print("OK")

Running this code on my machine shows success

# ./spdm.py 
rc = 3 errno =  0
OK
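As an aside, ctypes has built-in errno support that avoids calling __errno_location by hand; here is a sketch of the same socket call using use_errno=True (on a kernel without MCTP support, socket() will fail and get_errno() reports why):

```python
import os
from ctypes import CDLL, get_errno

# use_errno=True makes ctypes capture errno after each foreign call
libc = CDLL("libc.so.6", use_errno=True)
AF_MCTP = 45
SOCK_DGRAM = 2

fd = libc.socket(AF_MCTP, SOCK_DGRAM, 0)
if fd < 0:
    # get_errno() returns the errno value captured after the call
    print("socket() failed: %s" % os.strerror(get_errno()))
else:
    print("fd = %d" % fd)
    os.close(fd)
```

This does the same job as reading __errno_location()[0] directly, but ctypes handles the thread-local bookkeeping for you.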