Fedora Security Planet

Error building Kernel ln: target ‘+/source’: No such file or directory

Posted by Adam Young on September 25, 2023 04:47 PM

I have battled this problem a couple of times, so I am documenting the issue and solution here.
Here is the error message from make modules_install:

ln: target '+/source': No such file or directory

The short solution is to watch out for stray whitespace in the Makefile.

This happens when I attempt to modify the Makefile in order to revision-control the kernel. If I suspect that a build I am about to make will prevent the machine I am working on from booting, I want to keep an older build functional in order to restore the machine.

Here is the diff for my Makefile:

diff --git a/Makefile b/Makefile
index 2fdd8b40b7e0..cb747c16e33c 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
 PATCHLEVEL = 5
-SUBLEVEL = 0
+SUBLEVEL = 5
 EXTRAVERSION =
 NAME = Hurr durr I'ma ninja sloth

When the run fails, the diff will show a stray whitespace character around the SUBLEVEL value. This value is used to generate the string that becomes part of the path where the install happens. At the top level, kernels get installed here:

# ls -la /usr/lib/modules/
total 20
drwxr-xr-x.  8 root root  122 Sep 25 08:15 .
dr-xr-xr-x. 34 root root 4096 Sep 18 09:51 ..
drwxr-xr-x.  8 root root 4096 May 15 19:06 5.17.5-300.fc36.aarch64
drwxr-xr-x.  8 root root 4096 Sep 18 09:53 6.2.15-100.fc36.aarch64
drwxr-xr-x.  3 root root 4096 Sep 22 10:26 6.5.0+
drwxr-xr-x.  2 root root    6 Sep 23 07:42 6.5.1
drwxr-xr-x.  2 root root    6 Sep 23 07:57 6.5.5
drwxr-xr-x.  3 root root 4096 Sep 25 08:18 6.5.5+

The source directory is a symlink underneath it:

# ls -la /usr/lib/modules/6.5.5+/source
lrwxrwxrwx. 1 root root 11 Sep 25 08:15 /usr/lib/modules/6.5.5+/source -> /root/linux

Thus an extra space anywhere in those values corrupts the generated path, which is where the stray ‘+/source’ in the error message comes from.
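
One quick way to spot the stray whitespace is to make it visible (a sketch, run from the top of the kernel source tree; cat -A marks the end of each line with a $, so a trailing space stands out):

head -5 Makefile | cat -A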

Episode 394 – The lie anyone can contribute to open source

Posted by Josh Bressers on September 25, 2023 12:00 AM

Josh and Kurt talk about filing bugs for software. There’s the old saying that anyone can file bugs and submit patches for open source, but the reality is most people can’t. Filing bugs for both closed and open source is nearly impossible in many instances. Even if you want to file a bug for an open source project, there are a lot of hoops before it’s something that can be actionable.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_394_The_lie_anyone_can_contribute_to_open_source.mp3

Make Haste Slowly

Posted by Adam Young on September 20, 2023 08:18 PM

In the software development world, we call it technical debt.


In the Army it was “Half-assed, full-blast. Don’t know where we are going but we should have been there yesterday.”

And the solution was told to me by a guy going through officer basic with me…after a long career as an NCO in Army Special Forces.


“Make Haste Slowly.”


It is ok to “just make it work.” But have a code review process strict enough that it will happily kick semi-functional code back until it is production quality.

Think of it like an English essay: it is ok to show your teacher a rough draft, but expect lots of red ink and rewriting on it.


Unit test everything. Automated testing will catch it when you change code in a way that breaks other code. Visual and manual testing does not count; it has to be automated or it is not sufficient. Not writing unit tests is heavy tech debt.

Finding a line of code in the Kernel from a stack trace

Posted by Adam Young on September 20, 2023 08:14 PM

To find out what line a particular stack trace entry points to, use the script ./scripts/faddr2line. For example, if I have the line __get_vm_area_node+0x17c/0x1a8, I can run

./scripts/faddr2line vmlinux.o __get_vm_area_node+0x17c/0x1a8
__get_vm_area_node+0x17c/0x1a8:
__get_vm_area_node at /root/linux/mm/vmalloc.c:2579 (discriminator 1)

Episode 393 – Can you secure something you don’t own?

Posted by Josh Bressers on September 18, 2023 12:00 AM

Josh and Kurt talk about the weird world we live in, where we can’t control a lot of our hardware. We don’t really have control over most devices we interact with on a daily basis. The conversation shifts into a question of how we can decide what to trust, and where. It’s a very strange problem we experience now.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_393_Can_you_secure_something_you_dont_own.mp3

Episode 392 – Curl and the calamity of CVE

Posted by Josh Bressers on September 11, 2023 12:00 AM

Josh and Kurt talk about why CVE is making the news lately. Things are not well in the CVE program, and it’s not looking like anything will get fixed anytime soon. Josh and Kurt have a unique set of knowledge around CVE. There’s a lot of confusion and difficulty in understanding how CVE works.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_392_Curl_and_the_calamity_of_CVE.mp3

Following a code path in the Linux Kernel without a debugger

Posted by Adam Young on September 10, 2023 03:18 AM

Sometimes you don’t get to use a debugger. When doing bare metal development, it is often faster to get to the root of a problem by throwing in trace statements and seeing what path is taken through the code.

There are two main techniques I have been using to do this. The first is to print out the spot in the code using built-in macros that give the file, the name of the function, and the line number. That looks like this:

pr_info("%s %s %d\n", __FILE__, __func__, __LINE__);

I know it looks a little weird having some upper and some lower case in there, but that is what works.

However, Linux makes heavy use of function pointers, and you cannot use tags to jump to a function whose name you do not know. To print out the source of a function from a pointer, you can use the print formatting macros specific to the Linux kernel. For example, I can use

printk("%ps", pmu->event_init);

In my case, that prints out:

arm_cspmu_event_init [arm_cspmu_module]

Which I could then jump to using the :tag command in vim.

Episode 391 – The WordPress 100 year disaster recovery problem

Posted by Josh Bressers on September 04, 2023 12:00 AM

Josh and Kurt talk about WordPress selling web services with a 100 year lifespan. Will WordPress still be around in 100 years? What would 100 years of disaster recovery look like? Most of us will never need to think about 100 years of disaster recovery.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_391_The_Wordpress_100_year_disaster_recovery_problem.mp3

How to optimize a flipit solve

Posted by Adam Young on September 01, 2023 10:13 PM

According to the puzzle’s coder, any puzzle can be solved in 3 clicks (or fewer). That means that following my algorithm is likely to be significantly less efficient than an optimal solve.

I am not sure I buy it…yet.

Let’s see if we can optimize a solve path for a puzzle.

To simplify the notation, I am going to label the boxes using hexadecimal notation.

  • The top left box is 0,
  • the one next to it is 1,
  • the one at the end of the first row is 3,
  • the first one in the second row is 4,
  • and the last one is F.

A solve can thus be listed as a stream of hexadecimal characters. For example, the sequence C 8 C D C E F B F 8 9 D C solves the puzzle below.


That is 13 clicks, 10 more than the proposed optimum.


However, if I then go to swap cells 4 and E, I end up clicking C three times, on every other click. Something tells me this is going to be redundant. The same is true for the bottom right: I end up clicking F twice in alternation. And I end up back on D C.

I note that this sequence now also solves the puzzle:

8 D E B 8 9 D

7 clicks. Over a 40% improvement.

What about that D? What if we skip that? Yep.

8 E B 8 9 solves it. That is less than half the number of clicks we started with. Can we go further? What if we drop those 8s?

Yep. E B 9 solves it as well.

It seems like the pattern is to remove any double clicks. Let’s try another puzzle and see.

Puzzle 65535 is all black. Using my algorithm, there are no repeated clicks, so that does us no good. But it implies that there must be a way to transition from one solve pattern to another.


The algorithmic solution to this puzzle is

0 2 3 4 0 3 2 6 7

If we drop the repeated 0, 2, and 3, we get

4 6 7

Which also is a solution.

The blank puzzle (puzzle 0) can be solved with

0 2 5 4 2 3 7 6 8 9 D C A B F E 0 3 F C

What if we drop all of the duplicates?

547689DABE

It does not work…but it is close…if we add a further 2 3 at the very end, it does. Curious.

The following pattern also solves it: click the three dots around each corner, but not the corner itself, in any order.

12456789ABDE

Does that mean that order is meaningless? Let’s go back to our original puzzle, #4079:

EB9 E9B 9BE 9EB BE9 B9E

Yes. They all solve it.

So it appears that the 3-clicks-to-solve claim might have some merit. We need a way to convert from non-overlapping solutions to solutions that overlap with redundancy. This means we might well go from an algorithmic solution to a less efficient solution, and from there to a more efficient solution.
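
Why dropping doubled clicks works is a parity argument. Here is a minimal sketch, assuming only that each click toggles some fixed set of cells; the masks below are random placeholders, not the real game’s toggle sets. Because XOR is commutative and self-inverse, click order is irrelevant, and any box clicked an even number of times cancels out, so the 13-click solve reduces to exactly E, B, 9:

import random

# Hypothetical toggle masks: one 16-bit mask per box 0..F.  The real game
# defines specific masks; the argument only needs them to be fixed.
random.seed(4079)
MASKS = [random.getrandbits(16) for _ in range(16)]

def apply_clicks(state, clicks):
    # Each click XORs its mask into the 16-bit board state.
    for c in clicks:
        state ^= MASKS[c]
    return state

long_solve = [0xC, 0x8, 0xC, 0xD, 0xC, 0xE, 0xF, 0xB, 0xF, 0x8, 0x9, 0xD, 0xC]
short_solve = [0xE, 0xB, 0x9]  # the same clicks with even-count boxes removed

start = random.getrandbits(16)  # arbitrary starting board
assert apply_clicks(start, long_solve) == apply_clicks(start, short_solve)

In these terms a solve is just a set of boxes, which also explains why all six orderings of E, B, and 9 above work.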

Edit: Rereading what he said, I see now that it was “any daily or random” puzzle. Those are, I am guessing, generated using the algorithm of “start with the solution, and randomly flip three dots.”

How to Win at Flip It

Posted by Adam Young on September 01, 2023 05:04 PM

My friend Evan Barr coded up a puzzle he had read about in a Mensa magazine. I suggest you click around on it a bit before reading on.

Techniques to Solve


Here is how I went about solving it, not just for one incarnation, but for any.

If we number the grid like a Sudoku puzzle, we have boxes (1,1) through (4,4).

Swapping the corners

Start by setting all four corners to black by clicking them directly. This will mess up other dots, but we will fix them later.

Swapping interior dots


There are a series of click patterns that have desired results. The first of these is for switching the state of any of the four interior dots.

If you click each of the four dots connected to a corner in either a clockwise or a counterclockwise fashion, you can swap the state of the interior dot diagonally across from that corner.

Stated another way, clicks [(1,1),(1,2),(2,2),(2,1)] will switch (3,3). Every other dot on the board remains in the initial state.
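
A minimal sketch can check this, assuming (inferred from the patterns in this post, not stated anywhere) that a click toggles the clicked dot and all of its neighbors, a 3x3 block clipped at the board edge:

def click(board, r, c):
    # Toggle the 3x3 neighborhood of (r, c), clipped to the 4x4 board (0-based).
    for i in range(max(0, r - 1), min(4, r + 2)):
        for j in range(max(0, c - 1), min(4, c + 2)):
            board[i][j] ^= 1

board = [[0] * 4 for _ in range(4)]
for r, c in [(1, 1), (1, 2), (2, 2), (2, 1)]:  # the clicks from the text, 1-based
    click(board, r - 1, c - 1)
assert board == [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]  # only (3,3) flipped

The same inferred rule reproduces the edge and corner techniques below, which is some evidence the inference is right.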


Swapping internal side blocks.

Once you know the interior technique, you do not need to worry about maintaining the state of the interior blocks. Thus, you can use a more destructive technique to swap any of the edge blocks not in a corner. For example, to swap block (1,3), you can click on (1,2) and then block (1,1). This will also swap block (2,3), but you can reverse that with the interior dots technique.


Chaining techniques

Once you have this set of techniques, you can chain them to solve the whole puzzle.

  • Swap the corner blocks so they are all the same.
  • Swap just the edge internal blocks to get the external and internal dots in opposition.
  • Use the interior dots technique to set all of the interior dots to the same color.

Additional Techniques

You won’t need these techniques, but they are neat.

Swap the whole grid

It should be trivial to see that clicking any of the corners will swap the four blocks connected to that corner. Clicking all four corner blocks swaps the entire state of the grid.

Swapping just the corner blocks.

To swap the states of just the four corner blocks, click the four internal blocks one at a time in either a clockwise or counterclockwise pattern: [(2,2), (2,3), (3,3), (3,2)]


Swapping just the edge internal blocks.

To swap the blocks on the edges that are not corner blocks, click each of them in sequence in either a clockwise or counterclockwise pattern, e.g. [(1,2),(1,3),(2,4),(3,4),(4,3),(4,2),(3,1),(2,1)]
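
The same sketch as before (same inferred 3x3 toggle rule, hypothetical helper functions) can verify all three bonus patterns at once:

def click(board, r, c):
    # Inferred rule: toggle the 3x3 neighborhood of (r, c), clipped to the board.
    for i in range(max(0, r - 1), min(4, r + 2)):
        for j in range(max(0, c - 1), min(4, c + 2)):
            board[i][j] ^= 1

def run(clicks):
    # Apply a list of 1-based (row, col) clicks to an all-zero board.
    board = [[0] * 4 for _ in range(4)]
    for r, c in clicks:
        click(board, r - 1, c - 1)
    return board

assert run([(1, 1), (1, 4), (4, 1), (4, 4)]) == [[1] * 4 for _ in range(4)]  # whole grid
assert run([(2, 2), (2, 3), (3, 3), (3, 2)]) == [
    [1, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 1]]  # corners only
assert run([(1, 2), (1, 3), (2, 4), (3, 4), (4, 3), (4, 2), (3, 1), (2, 1)]) == [
    [0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]]  # edge ring only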


Building a Kernel RPM with the Built-in Makefile target

Posted by Adam Young on August 30, 2023 03:35 PM

Note that you need to have a .config file that will be included in the build. The build will also use the version specified in your Makefile. Then run

make rpm-pkg

This will use the RPM build infrastructure set up for your user and put the RPM in $HOME/rpmbuild/.
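
Afterwards, the packages can be found under the standard rpmbuild layout (a sketch; the architecture directory varies by machine):

ls $HOME/rpmbuild/RPMS/*/kernel-*.rpm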

Episode 390 – Rust shipping binaries doesn’t matter

Posted by Josh Bressers on August 28, 2023 12:00 AM

Josh and Kurt talk about a blog post that explains how C and C++ compilers prioritize performance over correctness. This is the classic story of security vs usability. Security is never the primary goal. If a security requirement doesn’t also enable other business goals it will fail. We also touch on the news of a Rust package containing binary files. It doesn’t really have anything to do with security; it’s all about convenience.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_390_Rust_shipping_binaries_doesnt_matter.mp3

Episode 389 – What would HashiCorp do?

Posted by Josh Bressers on August 21, 2023 12:00 AM

Josh and Kurt talk about the HashiCorp license change and copyright problems in open source. This isn’t the first and won’t be the last time we see this, but it’s very likely open source developers and communities will view any project that has a contributor license agreement as a problem moving forward.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_389_What_would_HashiCorp_do.mp3

Multi-homed FreeIPA server investigation

Posted by Alexander Bokovoy on August 16, 2023 07:04 AM

Once in a while people come and ask for FreeIPA servers to work in multi-homed environments. A multi-homed environment in this context is a deployment where the same server is accessible through multiple network interfaces which connect networks that are not routable to each other. This is typical for administrative and operational networks, but there are other types of environments which employ disconnected networks for their operations. A FreeIPA server right now has a single host name that resolves to the same IP address in all networks, and if one cannot reach the server through that IP address, access to the IPA server is not possible. This typically assumes the use of unicast networking as well.

A solution many people look for is to be able to access IPA servers by their interface-specific addresses. Since all secure communication over HTTPS and other protocols (LDAP, Kerberos, etc.) uses name-based resolution in the first place, the use of different host names is implied. For example, if FreeIPA is deployed at DNS domain example.test, it would be using Kerberos realm EXAMPLE.TEST, and the original IPA server would be deployed at a host named ipa.example.test (the server host name is not that important here, rather the fact that it is an individual host name). Let’s look at possible communications with this server in a non-multi-homed environment first.

Single-homed environment

An IPA client uses HTTPS to communicate with the IPA management API; SSSD on the IPA client would use the LDAP(S) and Kerberos protocols. In both the HTTPS and LDAP(S) cases, TLS negotiation forces a check of the server TLS certificate’s correctness. The hostname of the host we connect to (ipa.example.test) has to be present as a dNS SAN record in the TLS certificate presented by the IPA server.

In the Kerberos protocol case a different mechanism is used. Still, the Kerberos KDC must know the name of the service principal that a client is asking a service ticket for. If the client wants to acquire a service ticket to ldap/ipa.example.test@EXAMPLE.TEST, this service principal must exist in the Kerberos database that the KDC looks up.

Multi-homed environment

There are multiple ways of exposing a single hostname in a multi-homed environment, but they generally involve the use of DNS views specific to the individual networks. In such cases, DNS servers visible to clients in one network would resolve ipa.example.test to an IP address in that specific network. FreeIPA DNS integration does not support DNS views; this means any such DNS manipulation would have to be done externally to FreeIPA. This is, of course, possible, but then it is really no different from a single-homed environment from the FreeIPA perspective.

Thus, we cannot have a single ipa.example.test hostname; a different host name must be present for each independent network’s address on the IPA server. Let’s assume these are ipa1.example.test and ipa2.example.test. Had we not done this split and simply added multiple addresses for the same ipa.example.test name, clients might resolve the name to an IP address which they could not reach through their own network routing.

Requirements

Immediately we get a set of requirements here:

  • TLS certificates issued by IPA CA for HTTPS and LDAP(S) use on IPA server must include dNS SAN records for each hostname represented by the multi-homed server.

  • Kerberos principals for at least LDAP (ldap/), HTTP (HTTP/), and the system (host/) service principals must have aliases for all multi-homed hostnames.

The latter requirement means that if ipa1.example.test is the primary name, then ldap/ipa1.example.test should have an alias of ldap/ipa2.example.test, HTTP/ipa1.example.test should have an alias of HTTP/ipa2.example.test, and host/ipa1.example.test should have an alias of host/ipa2.example.test.
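
For service principals, FreeIPA already exposes Kerberos principal alias management on the command line. A sketch of what adding the LDAP and HTTP aliases could look like with the existing service-add-principal command (hostnames from the example above; the host/ principal is the part that needs more work, as discussed below):

ipa service-add-principal ldap/ipa1.example.test ldap/ipa2.example.test
ipa service-add-principal HTTP/ipa1.example.test HTTP/ipa2.example.test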

The same would apply to any other service hosted on the IPA server: SMB (cifs/..) or DNS services would need those aliases as well. However, this is not enough.

Implementation considerations

Hostname aliases

FreeIPA does additional checks when issuing certificates, prior to passing the request to the Dogtag CA that is integrated into FreeIPA. For hosts and services on those hosts, we also check whether a requestor is allowed to have these certificates issued. In FreeIPA terms, a host ipa1.example.test would be allowed to get certificates with a dNS SAN record of ipa2.example.test if the host object of ipa1.example.test manages the host object ipa2.example.test in FreeIPA.

Here lies our first problem. A host object in FreeIPA represents the host principal in Kerberos, host/ipa1.example.test. If we created two host objects, ipa1.example.test and ipa2.example.test, they could not be aliases of each other at the Kerberos level, because they would be two completely different objects from the FreeIPA perspective.

Perhaps we can avoid creating two different host objects? DNS records for hosts are separate from the host objects themselves; we only need different IP addresses for the hostnames represented by these host entries, not different host entries. Maybe we could mark one hostname as an alias of the other on the host object?

At the Kerberos level, FreeIPA does have Kerberos principal name aliasing already. However, it does not exist for hosts, as this task has never come up in the past. We would need to add a way to attach multiple names to the host object. One way to achieve that is to rely on the fact that the fqdn LDAP attribute is multi-valued. Unfortunately, it is also enforced to be a primary key in the IPA API; while the underlying LDAP attribute is multi-valued, the IPA API will enforce a single value:

$ ipa host-mod ipa1.example.test --addattr fqdn=ipa2.example.test
ipa: ERROR: fqdn: Only one value allowed.

This happens because in the IPA API any parameter which could be multi-valued must explicitly set multivalue=True in its definition. We would probably need to change the multi-valued state for the fqdn parameter:

...
        Str('fqdn', hostname_validator,
            cli_name='hostname',
            label=_('Host name'),
            primary_key=True,
            normalizer=normalize_hostname,
>>>>>>      multivalue=True,
        ),
...

and review the countless places where a single value is assumed throughout the code, like in the resolve_fqdn() helper below. LDAP does not guarantee a particular order of returned values for multi-valued attributes. From the LDAP protocol point of view they are all equal; there is no particular order.

def resolve_fqdn(name):
    hostentry = api.Command['host_show'](name)['result']
    return hostentry['fqdn'][0]

An alternative would be to introduce a separate attribute purely for hostname alias management. We don’t need to use it anywhere else at the Kerberos level, because there we use Kerberos-specific attributes to handle Kerberos principal names and aliases.

LDAP Access Controls

A big part of the FreeIPA access control mechanism relies on the 389-ds LDAP server’s access control interface. Permissions and roles in FreeIPA effectively define a set of ACIs for 389-ds to use when checking access rights. IPA servers are verified to be present in certain resource groups (like cn=masters,cn=ipa,cn=etc,$BASEDN). For host aliases this means they should be present in the same groups to be able to operate as their own entities when permissions are checked. This is important for internal logic in IPA LDAP plugins and in the KDC driver. For the cases when authentication is done via GSSAPI, the resulting Kerberos principal will be normalized to the primary name of the system anyway.

Certificate issuance

Issuing a certificate is a whole separate topic which still awaits its write-up. An abridged version can be found in my freeipa-users@ mailing list response from May 2022. I need to turn that into a proper document one day.

From the perspective of aliases, we would need to teach the certificate request processing code to look at the host and service aliases when validating SAN records.

Installer integration

There are two approaches to setting up this multi-homed environment: we can provide all the information upfront, or we can add a tool that adds individual aliases after deployment. This would mean (re-)generating certificates, creating host aliases and services, and creating configuration snippets and the other details required to handle multiple host names for the same host from different networks.

The after-deployment case would cause re-issuance of certificates. For external CA providers it could be handled with the existing tool that allows replacing existing TLS certificates with externally provided ones. We need to create a checker to verify that all required dNS SAN names were added and are available. In general, troubleshooting this environment would be non-trivial, so a special module for ipa-healthcheck would definitely help.

Final thoughts

Multi-homed environments are hard to automate, as many assumptions aren’t actually known to us. They are partly implicit in system and network administrators’ work and cannot be derived merely from the system state. It means administrators would need to aid IPA installers with additional information. At this point, it is unclear how to structure this information and which parts of it are going to be useful enough. In contemporary Linux environments you might have DNS resolution depend on a specific network interface thanks to systemd-resolved or VPN connection properties. We might not have that information for introspection in advance. While automatically issuing certificates with the required names to cover a multi-homed setup is not going to be easy, writing down requirements for external CAs and verifying those certificates before applying them at the second stage of external CA enrollment would further complicate things.

All these problems could be solved, of course. Prioritization of this work against other, more urgent tasks is what we need to figure out first…

Episode 388 – Video game vulnerabilities

Posted by Josh Bressers on August 14, 2023 12:00 AM

Josh and Kurt ask the question of what a vulnerability is, but in the framing of video games. Security loves to categorize all bugs as security vulnerabilities or not security vulnerabilities. But in reality nothing is so simple. Everything is a question of risk, not vulnerability. The discussion about video games can help us have this discussion better.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_388_Video_game_vulnerabilities.mp3

Flock to Fedora 2023 report

Posted by Alexander Bokovoy on August 11, 2023 10:46 AM

On August 2nd-4th, 2023, the Fedora Project ran its annual contributors conference, Flock to Fedora, in Cork, Ireland. After a previous successful Flock in 2019 in Budapest, Fedora contributors did not meet in person due to the rough pandemic years and created the online event Nest with Fedora instead. Nest ran for three years, but online meetings aren’t a full replacement for face to face collaboration. Cork’s Flock was supposed to combine both online and offline events.

I have been attending and presenting at various Flock and Nest events over the past seven years. I was looking forward to seeing and collaborating with many other project participants and users, and to getting to know new people as well.

My travel to Cork was unremarkable. I took a direct flight from Helsinki to Dublin and then an Aircoach bus to Cork. The ‘unremarkable’ part was really about the unexpected delays other people reported over the Matrix channel. The only ‘trouble’ I had was catching a taxi at 11pm after arrival in Cork to get to my B&B. The Aircoach bus from Dublin airport is very popular in summer, and whatever taxi fleet is in Cork was DDoSed by the passengers.

Cork is hilly. I stayed in an excellent B&B across the road from the conference hotel. The hotel and its conference facilities are in separate buildings; the events building is uphill from the hotel. Walking is helpful; climbing is harder, but given that we are sitting most of the time, it was a welcome ‘struggle’. Perhaps my stay outside of the conference hotel also helped me avoid COVID-19, which a few other participants, sadly, contracted. It is hit or miss every time.

Unfortunately, not everyone made it to Cork. Marina Zhurakhinskaya passed away in June 2022. Ben Cotton, Fedora Program Manager, was let go as a part of Red Hat’s layoffs earlier this year. Both had definitely changed the Fedora project dramatically, in many ways leading to the openness and friendliness Fedora is known for. Many presenters remembered both Marina and Ben during their sessions.

The 2023 edition of Flock to Fedora was also the first Fedora Project event collocated with CentOS Connect. As a result, it brought Red Hat Enterprise Linux’s distribution upstreams and downstreams together.

Talks

In total, there were up to four parallel tracks, dedicated to different areas of distribution development and the project’s life, spanning three days. That, unsurprisingly, made it challenging to visit all talks and activities. It is a common trait shared by many successful events. And for those who wanted to continue discussions after a talk ended, there was always the ‘hallway track’.

State of Fedora 2023

The first talk was ‘State of Fedora 2023’ by the project leader, Matthew Miller. Recording is available here. I am linking to the re-take of the talk, as the original streaming was off by 20 minutes and Matthew had to reprise it.

A major announcement made during the talk was a hiring one. The Fedora Operations Architect role has been introduced after the program manager role that Ben Cotton so masterfully executed was eliminated. Hopefully, this new role will be filled soon and will capture the same benefits that Ben brought to Fedora. The role is a bit different, though, as it is focused on cross-project and cross-distro impact across Fedora and RHEL.

Fedora contributors’ survey results were also unveiled by Matthew in the talk. In general, contributors keep their trust in the project and continue their participation at pre-pandemic levels. The recent social networking turmoil around Red Hat’s actions hasn’t influenced the results too much. The screenshots below are from the video stream, as the talk’s slides aren’t yet available.

The talk went into detail on what Matthew and the Fedora Council aim for in the Fedora Project’s future. Growing a project with thousands of contributors spread around the world and representing different cultures is hard. A lot of effort is put into making Fedora a welcoming place for everyone who is willing to work together towards a common goal.

rpminspect: Lessons from three distributions

David Cantrell created and maintains a tool that helps RHEL maintainers to keep their packages sane over years of maintenance. It is run as a part of CentOS Stream merge request process, as part of Fedora gating and pull request testing, and as a gating test for RHEL.

The talk itself is an excellent retrospective on what one should consider when creating a new Open Source project while working on it full time. David provided observations on how to sell the idea to your management, how to get people interested in becoming a community for your project, and how to sustain development in the long run. This is one of those rare gems of ‘lone wolf’ maintainership stories that everybody needs to absorb when they start their new journey. Believe me, it is worth it.

Using AI/ML to process automated test results from OpenQA

Tim Flink from the Fedora QE team decided to apply AI/ML to the problem of identifying hanging or failing jobs in OpenQA. OpenQA runs full-VM tests and records screencasts of everything that is shown on a VM screen. A crash of a graphical environment is immediately visible there, as the graphics would be replaced by a terminal with a Wayland stacktrace. What followed was an experiment in processing these screens to reliably detect a particular type of crash.

We spent some time with Tim discussing how these experiments can be applied to finding out possible issues in other system reports. Since Fedora is upstream of CentOS Stream or RHEL, it means certain issues – and their fixes – would often appear in Fedora first. If we could train a model on those issues in Fedora, can we detect automatically whether a particular fix is required in RHEL later? This is quite relevant to FreeIPA and SSSD as we do run their tests in OpenQA as a part of Fedora Server release criteria.

Another possible use case is to do reverse training. Since we know what a potential failure could look like, we can intentionally build an OpenQA test environment that reproduces the failure and then train a model to recognize logs from such failures in real life scenarios. For example, establishing trust to Active Directory in FreeIPA relies on a working DNS setup, a working firewall, etc. Failure to communicate through an incomplete firewall would be reflected as timeouts in the logs, which we could train a model to recognize. There are endless possibilities here to aid with known errors.

Hallway track

On a similar note, I had a discussion with Amazon’s David Duncan in the ‘hallway track’ which started from an observation that the Cloud SIG would really benefit from our passwordless work: distributing VMs with pre-set passwords is not ideal, and the ability to inject FIDO2 passkey information and have everything obey it at login in the cloud would be great to have. Somewhere along the way, the discussion switched to CoreOS-based environments and I realised my experiments with Fedora Silverblue to develop passwordless support for FreeIPA would probably be a subject for a talk that would be interesting to others as well.

I am running my own Silverblue images which source SSSD and FreeIPA upstream test builds, allowing me to switch easily between different potential options in one go, without messing with an installation environment. It is quite important for the integration work we do and would be crucial for end-to-end testing of upcoming GNOME changes.

This also provided me an insight into what container-based environments need from FreeIPA and overall from enterprise domains to fit nicely. I should have submitted a talk about that to Flock! Well, I will do one next year, for sure. (And, TODO: file issues to track for that integration to FreeIPA upstream!)

Another interesting discussion was with Jonathan Dieter. Jonathan is a long-term Fedora contributor and FreeIPA user. For the past several years Jonathan has worked with a local Irish company that provides services around the world to test local phone numbers. They maintain an infrastructure in more than 80 countries, where there might be no global cloud providers at all. To keep that infrastructure reliable, they use FreeIPA (not alone, of course) and OStree-based images.

Asahi Linux and Fedora

This is one of the talks that I missed attending in person, as conflicts are inevitable: Mo Duffy’s Podman Desktop talk and Adam Williamson’s Fedora CI state talk were running at the same time.

Asahi Linux is a project which aims to upstream support for Apple’s ARM64 architecture, best known through Apple’s M1 and M2 systems. At Flock, Asahi Linux project members announced that not only will Fedora Asahi Remix be the flagship distribution for the project, but also that the Fedora Discourse instance will be used to handle Asahi Linux community collaboration.

Asahi’s announcement is also an example of how friendly a community the Fedora Project has become over the years. I am definitely looking forward to seeing the remix become one of the official builds of Fedora.

Podman desktop: from Fedora to Kubernetes for beginners

Mo Duffy gave an outstanding talk about using Podman Desktop to deliver workloads for non-technical people. It was a highlight of the conference, for sure. She also made a few interesting points. For one, running cloud-based workloads locally to allow offline operations is nice. Mo demonstrated a Penpot instance, which is a design and prototyping application. Running it locally helps to maintain the same workflow while on an intercontinental flight. Even more interesting, this approach also allows using cloud software that would otherwise be considered insecure: for example, running a WordPress setup locally to benefit from its nice UI in a local browser, then exporting the static web site content to push to the actual web hosting.

By lowering the barrier to using containerised applications through Podman Desktop, we may hope to get more people to join our community and contribute. Starting with Podman Desktop’s friendliness would allow these newcomers to discover other Fedora flavors and features. It is certainly an interesting aspect we could expand further, in a way similar to how the ‘F’ in Fedora got expanded in Mo’s presentation.

Panel: Upstream collaboration & cooperation in the Enterprise Linux ecosystem

Another conference highlight was the panel that brought representatives of Fedora, RHEL, Rocky Linux, Alma Linux, and CentOS Stream together on stage. Distributions upstream and downstream of RHEL presented their views on various development and community topics. It is worth watching the stream.

State of EPEL

Troy Dawson and Carl George presented another State of EPEL. EPEL has a solid contributor base who keep thousands of packages available to users of RHEL and downstream distributions. EPEL uses Fedora infrastructure, and for many packages it shares maintainers with Fedora (EPEL branches are branches in Fedora dist-git for the same package, if this package is not in RHEL). So all EPEL contributors are Fedora contributors. ;)

One interesting aspect in every “State of EPEL” talk is the long tail of the EPEL demographics. Much like “State of Fedora” shows demographics of Fedora releases, EPEL statistics include details on who is running the lowest number of downstream systems.

Passwordless Fedora

My talk was on the morning of the second day. People were still recovering from the night of the International Candy Swap and table games, so at the start I had maybe a couple of attendees. Eventually we got more people in the room, and there were also online attendees, so it wasn’t feeling so lonely.

My talk was similar to previous ones at FOSDEM and SambaXP. What was new is a demo from Ray Strode of what a user experience could potentially look like in GNOME for a passwordless login. Ray implemented a prototype of the external identity provider login flow that Allan Day shared recently. This flow could be used for login through Microsoft’s Entra ID (a.k.a. Azure AD) or any OAuth2 provider supported by FreeIPA. We aren’t fully there yet, but the goal is to do this work once for GNOME and reuse it for the various passwordless authentication approaches supported through SSSD.

I also showed an old demo from my FOSDEM and Flock 2016 talks. It shows how we integrated 2FA tokens (Yubikeys in this example) with FreeIPA to authenticate and obtain Kerberos tickets through a KDC proxy over HTTPS. These tickets were then used to log in to a VPN. This is something that has been possible in Fedora and RHEL for almost a decade now.

OpenQA hacking

Before Flock, Adam Williamson started to work on integrating Samba AD tests into OpenQA for Fedora. It almost worked, but there were a few issues Adam wasn’t able to resolve, so we sat down at Flock and figured out at least a few of those. The only remaining one was an apparent race condition within a test that enrolls a system to Samba AD using kickstart. SSSD, it seems, starts before networking is up and stable, and decides that it is offline. When the test tries to resolve an Active Directory user, SSSD fails to do so, as it thinks it is offline.

Interestingly, the same test against FreeIPA works fine. The same test run after kickstart works fine as well, for both FreeIPA and Samba AD. There is probably a need to add a waiting period to let the network state settle. We saw this in the past too, but never found a good way to trigger a proper event for SSSD to recover.

Social events

I am trying to reduce my candy consumption, so I skipped the social events on the first day but attended the conference dinner on the second day. All social events during Flock were well organized, and this one was no exception. We had interesting discussions with Fedora and Rocky folks, getting to know that there is a lot of similarity in how people live their lives across the world.

On Friday night another social event was a Ghost Tour. However, we skipped it, and together with a few other people went on a bit of a memorabilia road trip through another Mexican place and (of course!) a local bar. Life in IT and development in the 90s and early 2000s wasn’t that much different in the US and Europe, really. Thanks to Spot and Amazon for covering the dinner, and thanks to the other folks for the beer and the company.

I left on Saturday at noon using the same Aircoach bus towards Dublin airport. The bus was full – make sure you have booked your seat online in advance. My flight back to Finland was uneventful as well. Overall, it was a great conference, as usual. I’d like to say thank you to all volunteers and organizers who keep Flock so wonderful and Fedora project so welcoming. Thank you!

Episode 387 – Enterprise open source is different

Posted by Josh Bressers on August 07, 2023 12:00 AM

Josh and Kurt talk about the difference between what we think of as traditional open source, and enterprise software projects that have an open source license. They are both technically open source, but how the projects work is very very different.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_387_Enterprise_open_source_is_different.mp3

Episode 386 – We are watching web 2.0 burn

Posted by Josh Bressers on July 31, 2023 12:00 AM

Josh and Kurt talk about a new Google proposal that would add DRM to the web. All the ad-driven companies seem to be acting very strangely; there’s probably a reason for this. The way ads used to pay for content is changing, but a lot of these giant companies don’t know how to adapt. It’s going to be very interesting times in the near future.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_386_We_are_watching_web_2_0_burn.mp3

Episode 385 – Is open source an insider threat?

Posted by Josh Bressers on July 24, 2023 12:00 AM

Josh and Kurt talk about insider threats, but not quite in the way one would expect. The potential for insider threats is possibly higher than usual right now, but what about open source? Are open source developers insider threats for your organization? Have you ever thought about this before?

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_385_Is_open_source_an_insider_threat.mp3

Episode 384 – What’s next for open source?

Posted by Josh Bressers on July 17, 2023 12:00 AM

Josh and Kurt talk about some of the efforts to measure and understand open source. There are projects like the OpenSSF Scorecard. We want to measure open source for some idea of quality. Is AI generated code better than a random open source project found on GitHub? Can we track the countries contributors are from? These are all interesting problems that everyone will have to deal with soon.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_384_Whats_next_for_open_source.mp3

Episode 383 – Is open source dying?

Posted by Josh Bressers on July 10, 2023 12:00 AM

Josh and Kurt talk about the notion that open source is somehow dying. What’s actually happening is that corporate open source is changing, which some are trying to twist into something being wrong with open source. Open source is doing great, probably better than ever.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_383_Is_open_source_dying.mp3

Faking a device via ACPI

Posted by Adam Young on July 08, 2023 04:46 AM

I need to write a driver for a device that does not exist yet. So I am going to use the Linux kernel tooling for ACPI to create the illusion that the device exists. Here is how.

Devices on an ACPI-enabled Linux machine typically exist in a table called the DSDT. This has a real name (Differentiated System Description Table), but I just think of it as “Different Stuff Different Table”. However, you can also have a machine-specific table that you create at boot time and put entries in. This is called the SSDT. Again, it has a real name (Secondary System Description Table), but I just think of it as “Same Device Different Table”. Here is my simple SSDT definition in the ACPI domain-specific language.

 /*
 * Template for [SSDT] ACPI Table (AML byte code table)
 */
DefinitionBlock("","SSDT", 2, "Ampere", "_SSDT_01", 0x00000001)
{

    Scope(\_SB)
    {
       Device(MCTP)
       {
           Name (_HID, "MCTPA1B2")

           Name (_STA, 0x0F)
       }
       Device(PLDM)
       {
           Name (_HID, "PLDMA1B2")

           Name (_STA, 0x0F)
       }
       Device(PSTL)
       {
           Name (_HID, "POSTA1B2")

           Name (_STA, 0x0F)
       }
     }

}

Now, this actually has three devices in it; I was doing some unit testing setup with dependent devices, so I thought I might use the other ones. I left them in as an example to show how multiple devices look in an SSDT.

To convert this to the binary AML format, I use iasl, a utility in the acpica-tools RPM. Yes, I still do RPM, although all of this works on Debian-based systems as well.
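
The compile step itself is a single command (assuming the source above is saved as mctpdev.dsl; iasl writes the compiled mctpdev.aml next to it):

iasl mctpdev.dsl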

Here is my script to load it into the Linux kernel. It uses a module for the ACPI configuration filesystem.

#! /bin/sh 
modprobe acpi_configfs
mkdir /sys/kernel/config/acpi/table/ssdd
cat ~/acpi/bak/mctpdev.aml > /sys/kernel/config/acpi/table/ssdd/aml

To confirm that the table is loaded, you can use the acpidump command. If you run it with the -b flag, you get all the tables in binary format.

[root@hackery tmp]# mkdir acpi
[root@hackery tmp]# cd acpi/
[root@hackery acpi]# acpidump -b

The iasl command will then decompile the table so you can view the contents. I’ll leave you with the full content.

[root@hackery acpi]# iasl ssdt.dat 

Intel ACPI Component Architecture
ASL+ Optimizing Compiler/Disassembler version 20220331
Copyright (c) 2000 - 2022 Intel Corporation

File appears to be binary: found 31 non-ASCII characters, disassembling
Binary file appears to be a valid ACPI table, disassembling
Input file ssdt.dat, Length 0x83 (131) bytes
ACPI: SSDT 0x0000000000000000 000083 (v02 Ampere _SSDT_01 00000001 INTL 20220331)
Pass 1 parse of [SSDT]
Pass 2 parse of [SSDT]
Parsing Deferred Opcodes (Methods/Buffers/Packages/Regions)

Parsing completed
Disassembly completed
ASL Output:    ssdt.dsl - 1141 bytes
[root@hackery acpi]# cat ssdt.dsl 
/*
 * Intel ACPI Component Architecture
 * AML/ASL+ Disassembler version 20220331 (64-bit version)
 * Copyright (c) 2000 - 2022 Intel Corporation
 * 
 * Disassembling to symbolic ASL+ operators
 *
 * Disassembly of ssdt.dat, Sat Jul  8 04:42:31 2023
 *
 * Original Table Header:
 *     Signature        "SSDT"
 *     Length           0x00000083 (131)
 *     Revision         0x02
 *     Checksum         0xCE
 *     OEM ID           "Ampere"
 *     OEM Table ID     "_SSDT_01"
 *     OEM Revision     0x00000001 (1)
 *     Compiler ID      "INTL"
 *     Compiler Version 0x20220331 (539099953)
 */
DefinitionBlock ("", "SSDT", 2, "Ampere", "_SSDT_01", 0x00000001)
{
    Scope (\_SB)
    {
        Device (MCTP)
        {
            Name (_HID, "MCTPA1B2")  // _HID: Hardware ID
            Name (_STA, 0x0F)  // _STA: Status
        }

        Device (PLDM)
        {
            Name (_HID, "PLDMA1B2")  // _HID: Hardware ID
            Name (_STA, 0x0F)  // _STA: Status
        }

        Device (PSTL)
        {
            Name (_HID, "POSTA1B2")  // _HID: Hardware ID
            Name (_STA, 0x0F)  // _STA: Status
        }
    }
}

Episode 382 – Red Hat, you were the chosen one!

Posted by Josh Bressers on July 03, 2023 12:00 AM

Josh and Kurt talk about Red Hat closing up the RHEL source code. Kurt and Josh both worked at Red Hat in the past. This isn’t a show that bashes Red Hat, and it’s not a show praising them. We take an honest look at the past, present, and future of Linux. There’s a lot to talk about in this one. TL;DR, Red Hat was the chosen one, and we all feel betrayed.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_382_Red_Hat_you_were_the_chosen_one.mp3

Episode 381 – WTF Reddit, APIs and risk

Posted by Josh Bressers on June 26, 2023 12:00 AM

Josh and Kurt talk about the incredible Reddit debacle. At the center of it all is an API. What does it mean to be using an API, and how does this relate back to our own risk? Many of us rely on APIs for countless things, and if a company decides to cut off that API somehow, it could create a mess.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_381_WTF_Reddit_APIs_and_risk.mp3

RHEL, Rocky, and CentOS

Posted by Adam Young on June 23, 2023 12:33 AM

When talking about Rocky Linux or other distros based on RHEL,
I don’t think that it is a good idea to paint any one person as the bad guy or as operating in bad faith. I do not think this is the case with any of the people behind Rocky.

I was in sales at Red Hat during this time, and I winced when I heard the announcement. While Stream was and is a great step forward, closing off the open but unsupported CentOS option was bad for Red Hat’s business. Instead of walking into an organization built around Yum and RPM, and the tools that manage them, we’d end up walking into organizations built around apt and .deb.

As a Linux guy, I don’t care. As a Red Hat employee, I did.

Rocky fills a need for people that need their stuff to behave like RHEL, but are doing something that makes Red Hat support an unnecessary expense. For example, when I worked at Penguin Computing, we needed our stuff to act like RHEL, as it was going to be loaded on machines that could be rebooted to RHEL, but we were doing a custom kernel module (BProc) that was not going to be accepted upstream. Loading it into a RHEL system would taint the kernel.

We were PXE booting a large array of very non-RHEL systems from this one. We could not afford the expense of RHEL support for 200+ nodes that were only installed with our own code. CentOS was much easier for us than rolling our own RPMs.

Did we benefit from Red Hat’s effort? Yes. Did we contribute back to the community? As much as my 9-person development team could. Aside from the base OS, we also put significant effort into scientific libraries and message passing subsystems. We were stretched thin.

Rocky is good for Red Hat. If I were back in sales and I knew that a customer team was based on Rocky, I would know how to get them running on RHEL if the opportunity arose. I would know that their stuff would run. That is a huge benefit to the sales process.

If no one is calling Red Hat for support or filing customer support tickets for Rocky, it costs Red Hat nothing. If people find bugs on Rocky and file them against RHEL in Red Hat Bugzilla, it is good for Red Hat.

Stream is also a good effort, but it does not fulfill the need that Rocky does.

LED Keyboard

Posted by Adam Young on June 19, 2023 09:53 PM
xset led on

Episode 380 – A new Sovereign Tech Fund program and the BBC on destroying hard drives

Posted by Josh Bressers on June 19, 2023 12:00 AM

Josh and Kurt talk about a new program from the Sovereign Tech Fund to fund open source work. It’s a great looking program with an acceptable amount of money behind it. We also talk about a story claiming millions of perfectly good hard drives are destroyed per year. They’re probably not OK at all.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_380_A_new_Sovereign_Tech_Fund_program_and_the_BBC_on_destroying_hard_drives.mp3

All Twelve Keys

Posted by Adam Young on June 14, 2023 12:06 AM

As Jazz students, we are often exhorted to take a lesson and practice it in all 12 keys. How many ways can we take a pattern through all 12 keys? Let’s do some math.

Bottom Line Up Front: The easiest options are chromatic or cycle of fifths. Here’s how I justify that statement.

Let’s map each note to a number, using the positions on an analog clock face, starting with A = 1.


The number 12 is very useful as a collection of things, because it can be evenly broken up in many ways:

  • 2 groups of 6
  • 3 groups of 4
  • 6 groups of 2
  • 4 groups of 3

However, this means that if we try to use one of those numbers as the basis for skipping notes, we will not hit every key. For example, take the “two groups of 6” option. That is what you get if you use the whole tone scale. If you start at 1 (A) you will get 1, 3, 5, 7, 9, 11 and then come back to 1. To hit every key you would then have to do a half step or a step and a half to get to the other 6 notes. Four groups of 3 are the augmented triads; 3 groups of 4 are the diminished seventh chords.

Add in the fact that 8 and 9 are multiples of 4 and 3, and that leaves us with a few options; only step sizes that share no factor with 12 will visit every key:

  • Plus or Minus 1
  • Plus or Minus 5
  • Plus or Minus 7

These map to the most common ways we are told to practice in all 12 keys: chromatically or around the cycle of fifths. It should be pretty obvious how plus 1 means a half step up: start on A, then A#, etc. This is more common than minus 1, a half step down, but the effect is fairly similar.
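
A quick sketch checks this claim: a fixed step visits all 12 keys exactly when it shares no factor with 12, i.e. gcd(step, 12) == 1.

from math import gcd

# Only the steps coprime with 12 (1, 5, 7, 11) print "all 12 keys".
for step in range(1, 12):
    keys = {(step * i) % 12 for i in range(12)}
    result = "all 12 keys" if len(keys) == 12 else f"only {len(keys)} keys"
    print(f"+{step}: {result} (gcd with 12 is {gcd(step, 12)})")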

If we start with the fact that 5 plus 7 = 12, it should not be a surprise that +7 gives the same pattern as -5: 1 + 7 = 8. Going backwards around the clock is easiest if we add 12 before we cross the 12 boundary, so 1 + 12 - 5 = 13 - 5 = 8.

So plus 5 is an interval of a fourth (A to D), as is minus 7.

Plus 7 is an interval of a fifth (A to E), as is minus 5.

So if we want an unbroken pattern that is going to hit every key, those are our only options…or we can make an irregular pattern whose steps sum to a number that is itself coprime with 12. For example, +3 +2, which would be a minor third followed by a whole step. It can also be larger intervals, for example:

  • +6 +1: tritone, half step.
  • +7 +6: perfect fifth, tritone.
  • +6 +5: tritone, perfect fourth.

Those work because the fourth/fifth shifts between the two whole tone scales.

This means we could also do +4 +3: major third, minor third. This is roughly based on a major triad; the final note of one is the root of the next.

I suspect that it was pondering along these lines that led to the Coltrane Circle.


Episode 379 – Will open source save the world, again?

Posted by Josh Bressers on June 12, 2023 12:00 AM

Josh and Kurt talk about some new open source projects that aim to start taking back some of our privacy and rights. It’s a huge hill to climb, but it seems like there is some hope. Open source doesn’t care about growth, or numbers, or anything really, so it can’t ever lose.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3150-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_379_Will_open_source_save_the_world_again.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_379_Will_open_source_save_the_world_again.mp3</audio>

Show Notes

“Agree to Disagree”

Posted by Adam Young on June 10, 2023 08:48 PM

Never use that phrase with me again, please. It is a most insulting phrase.

It is saying “You are wrong, and you are not even worth engaging with to try and change your mind.” I am always willing to listen to a strong argument. I do not agree to disagree with you; I agree to hold on to my opinions until shown a superior one. I agree to keep trying to change your mind until it is changed, or until you provide a better argument.

That phrase has been used on me too many times to dismiss my concerns. It is associated with one of the worst job interviews of my career.

Either argue your point or stay silent.

I do not agree to disagree with you in perpetuity. I disagree with you now. I am willing to put in the effort to bring you around to my point of view. I am willing to listen to your point of view.

If you are not willing to engage with me, do not engage with me.

I am far more likely to say “I am not willing to pursue this argument.” Which I often will do for things that may be too painful to discuss. That is usually politics, religion or other sensitive matters like that. But that is almost always an issue of self-preservation.

This phrase has no value and you should excise it from your lexicon.

If you disagree, you had better have your argument in order.

Rocket ships and radishes

Posted by Josh Bressers on June 07, 2023 01:08 AM

There’s been something in the back of my brain that’s been bothering me about talks at the big conferences lately but I just couldn’t figure out how to talk about it. Until I listened to this episode of The Hacker Mind Podcast on Self Healing Operating Systems (it’s a great podcast, like and subscribe). The episode was all about this incredibly bizarre way to store operating system state in a SQL database (yeah, you read that right). The guest made no excuses that this is a pretty wild idea and it’s not going to happen anytime soon. But we need weird research like this, it’s part of the forward march of progress.

This sort of obviously experimental research is what I’m going to call “rocket ships” in the context of this blog post. It’s work that is extremely interesting, but will take decades to turn into something that affects us directly. Almost nobody can really do it, there’s no short term money in this research, and even fewer will directly benefit for decades. It will almost certainly have benefits someday, and won’t look anything like it does today.

Now, there’s another side of this story. All that stuff we actually have to do. This is the boring every day stuff that keeps the trains moving and electricity running. Nobody is giving talks about this because it just sort of exists. I don’t think KubeCon is going to accept the talk “How I keep your family from freezing to death in the winter”. Let’s call this work “eating our vegetables” or “radishes” because I needed something that started with an R. Humans like alliteration. Rocket Ship … Radish … oh hey, Research also starts with R!

So anyway, this brings me to a lot of the big conferences. I think what I see is a lot of rocket ship research that’s dressed up as radishes. I’m not going to pick on any one company or project specifically because that’s not just bad taste, but I also can’t afford to make any more enemies. Instead, I’m going to make up an example to explain what I see. Let’s use supply chain security as the backdrop because that’s the universe I mostly live in right now. You could just as easily substitute in AI, or blockchain, or french fries.

Our hero, or maybe villain, who knows, everyone is a hero and villain to someone, they are giving a talk on how we can secure open source build systems by rewriting them all in Apple II basic. It will of course be creating reproducible binaries, and we should sign everything with a new variant of Sigstore crossed with PGP called Pigstore, the mascot is a duck. And it probably also draws pictures of clowns, or maybe cats, some sort of mammal. There’s a demo, and a GitHub website. You know it’s been donated to the OpenSSF because that’s a cool thing to do. And of course the name is an unpronounceable German word related to ship building, and the mascot is some sort of obscure animal that lives in a cave.

The room was packed, the talk went over great. The conference WiFi only let half the demos work, but that was enough to show off the internet connected Apple II building Node.js. Animated GIFs showed off the rest. On second thought none of this mattered because the demos were 75 pages of terminal text nobody could read. Whatever, everything went perfect. Oh, and did they mention we should expect all open source projects to start using this build system because if they don’t everyone will call them names! The future starts today!

In the academic days (like our operating system example from the opening), it would be well understood that this was rocket ship research. It almost certainly wouldn’t go anywhere anytime soon, but was a step as part of the larger story of progress. As the arrow of time drags us all into the future, so does the path of progress, as long as you don’t live in Florida.

But this isn’t how these talks work anymore. On Monday, there will be customers asking where on your roadmap this new build system falls so they can take advantage of all these features they didn’t even know they needed. They didn’t know they needed more clown pictures, but suddenly it’s very obvious it was the thing missing from their soul that previously a fidget spinner was filling. That senior developer who quit last week said the biggest problems were no unit tests or code reviews, but it’s probably actually this.

Your leadership will say this needs to be part of Q2 planning because all the competitors are also doing it, or so they read on LinkedIn. The community meeting for the project has ballooned to 300 people, all talking about how they are busy integrating things into their production environments as we speak; just skip right over test and dev, this goes straight to prod! So you know it’s going to be a huge deal.

I want to stress, these sort of projects have immense value, but I also suspect that pretending they’re something that can be used, a radish, is actually hurting the research, or rocket ship if you’re keeping track. Now instead of researchers doing weird research things, you have community meetings filled with developer advocates trying to make sure their company gets in on the ground floor. There’s no more room for experimentation because apparently version 0.1 of the framework just got released yesterday, nobody knows who did it or how. In 3 months at the next conference something even newer will get talked about, this project will be mostly forgotten, and what could have been important research is going to fester in an abandoned GitHub repository forever.

I’m not sure how to wrap this up. Is this a problem with the companies funding this research wanting to pretend it’s something more real, or is it the large conferences looking for more “real world” focused solutions instead of research that won’t go anywhere for decades? Maybe things have always been like this and I just didn’t notice before now.

Episode 378 – Naming things is harder than security

Posted by Josh Bressers on June 05, 2023 12:00 AM

Josh and Kurt talk about namespaces. They were a topic in the last podcast, and resulted in a much much larger discussion for us. We decided to hash out some of our thinking in an episode. This is a much harder problem than either of us expected. We don’t have any great answers, but we do have a lot of questions.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3130-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_378_Naming_things_is_harder_than_security.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_378_Naming_things_is_harder_than_security.mp3</audio>

Show Notes

Episode 377 – The world is changing too fast for humans to understand

Posted by Josh Bressers on May 29, 2023 12:00 AM

Josh and Kurt talk about PyPI suspending new accounts and packages for a day, and a 60 minutes story about deepfakes. The problems are mostly the same, but for very different reasons. The world is changing faster than we can keep up, so what is a human to do?

<audio class="wp-audio-shortcode" controls="controls" id="audio-3126-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_377_The_world_is_changing_too_fast_for_humans_to_understand.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_377_The_world_is_changing_too_fast_for_humans_to_understand.mp3</audio>

Show Notes

Opening the perf file descriptor

Posted by Adam Young on May 26, 2023 07:53 PM

A couple months back I recorded this line on my blog as part of investigating perf:

perf record --branch-filter any,save_type,u true 

What is the interface between the perf binary and the Linux kernel when I run this? There is a system call to open a file handle. The man page says this:

int syscall(SYS_perf_event_open, struct perf_event_attr *attr,
                    pid_t pid, int cpu, int group_fd, 
                    unsigned long flags);


But how does that get called by the perf binary…the answer is trickier than I originally thought.

When debugging system calls, the primary tool I use is strace. I ran it like this:

sudo strace  perf record --branch-filter any,save_type,u true  2>&1  | less

Inside of less, I can search for the string “event_open” and I am taken to a spot in the output where the system call is made…11 times. Each one opens a file handle, and all of them are kept open after the system call. Here is the first call:


perf_event_open({
                   type=PERF_TYPE_SOFTWARE, 
                   size=0 /* PERF_ATTR_SIZE_??? */,
                   config=PERF_COUNT_SW_CPU_CLOCK,
                   sample_period=0,
                   sample_type=0,
                   read_format=0,
                   exclude_kernel=1,
                   precise_ip=0 /* arbitrary skid */, ...},
                 -1,
                  2,
                 -1,
                  PERF_FLAG_FD_CLOEXEC) = 4 

And here is the last one


perf_event_open({
                   type=PERF_TYPE_HARDWARE, 
                   size=PERF_ATTR_SIZE_VER7,
                   config=PERF_COUNT_HW_CPU_CYCLES,
                   sample_freq=4000,
                   sample_type=PERF_SAMPLE_IP|PERF_SAMPLE_TID|PERF_SAMPLE_TIME|PERF_SAMPLE_ID|PERF_SAMPLE_PERIOD|PERF_SAMPLE_BRANCH_STACK,
                   read_format=PERF_FORMAT_ID, 
                   disabled=1, 
                   inherit=1, 
                   mmap=1, 
                   comm=1, 
                   freq=1, 
                   enable_on_exec=1, 
                   task=1,
                   precise_ip=3 /* must have 0 skid */,
                   sample_id_all=1,
                   exclude_guest=1,
                   mmap2=1,
                   comm_exec=1,
                   ksymbol=1,
                   bpf_event=1,
                ...},
                11127,
                7,
               -1,
                PERF_FLAG_FD_CLOEXEC) = 12 

I’ve formatted this to make the relationship of the parameters a little clearer. Most significant is that the first parameter is a large structure, which strace knows how to interpret.

If we keep looking down the output of strace, we can see another block of perf_event_open. When all of these are executed we end up with file descriptors 4-12 and 13-20 open for the remainder of the program as products of the perf_event_open system call. These are never read from, so the information must come from the kernel via another means. Again, we can see a block of related system calls, this time ioctls:


ioctl(13, PERF_EVENT_IOC_ENABLE, 0)     = 0
ioctl(14, PERF_EVENT_IOC_ENABLE, 0)     = 0
ioctl(15, PERF_EVENT_IOC_ENABLE, 0)     = 0
ioctl(16, PERF_EVENT_IOC_ENABLE, 0)     = 0
ioctl(17, PERF_EVENT_IOC_ENABLE, 0)     = 0
ioctl(18, PERF_EVENT_IOC_ENABLE, 0)     = 0
ioctl(19, PERF_EVENT_IOC_ENABLE, 0)     = 0
ioctl(20, PERF_EVENT_IOC_ENABLE, 0)     = 0

But we do see a clone system call, which implies a child process or thread.

clone3({flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, child_tid=0x7faf329ef910, parent_tid=0x7faf329ef910, exit_signal=0, stack=0x7faf321ef000, stack_size=0x7ffec0, tls=0x7faf329ef640} => {parent_tid=[11539]}, 88) = 11539

If we look for system calls based on this pid (11539) we later see:

poll([{fd=13, events=POLLIN|POLLERR|POLLHUP}, {fd=14, events=POLLIN|POLLERR|POLLHUP}, {fd=15, events=POLLIN|POLLERR|POLLHUP}, {fd=16, events=POLLIN|POLLERR|POLLHUP}, {fd=17, events=POLLIN|POLLERR|POLLHUP}, {fd=18, events=POLLIN|POLLERR|POLLHUP}, {fd=19, events=POLLIN|POLLERR|POLLHUP}, {fd=20, events=POLLIN|POLLERR|POLLHUP}], 8, 1000 <unfinished ...>

However, the only time this pid is referenced again is to clean up, in this block of syscalls:


[pid 11539] <... poll resumed>)         = 0 (Timeout)
[pid 11539] rt_sigprocmask(SIG_BLOCK, ~[RT_1], NULL, 8) = 0
[pid 11539] madvise(0x7faf321ef000, 8368128, MADV_DONTNEED) = 0
[pid 11539] exit(0)                     = ?
[pid 11539] +++ exited with 0 +++
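
To see this interface without perf in the middle, here is a minimal self-monitoring sketch adapted from the perf_event_open man page example. It opens one counter, enables it with the same PERF_EVENT_IOC_ENABLE ioctl shown above, and reads the count; error handling is trimmed for space:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

/* glibc provides no wrapper, so everyone writes this shim */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
        return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
        struct perf_event_attr attr;
        uint64_t count;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        /* pid 0 = this process, cpu -1 = any cpu, no group leader */
        fd = perf_event_open(&attr, 0, -1, -1, PERF_FLAG_FD_CLOEXEC);

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        for (volatile int i = 0; i < 1000000; i++)
                ;       /* the work being measured */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        read(fd, &count, sizeof(count));
        printf("instructions: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
}

Unlike this sketch, perf itself never reads its descriptors directly; as the strace output above shows, it waits on them with poll, and the sample data arrives through mmapped ring buffers instead.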

Episode 376 – Open Source Summit, who built your open source, and AI

Posted by Josh Bressers on May 22, 2023 12:00 AM

Josh and Kurt talk about the Open Source Summit in Vancouver. Josh was there, and we pick out two observations. First, security keeps trying to use fear as a feature, except it doesn’t work. Second, we discuss AI and how people are talking about it. It is changing things; how much is yet to be seen.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3122-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_376_Open_Source_Summit_who_built_your_open_source_and_AI.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_376_Open_Source_Summit_who_built_your_open_source_and_AI.mp3</audio>

Show Notes

Testing an MCTP Driver via Echo request

Posted by Adam Young on May 19, 2023 10:27 PM

MCTP stands for Management Component Transport Protocol. While it is designed for management traffic inside a server, it is also designed as a network protocol. As such, the Linux implementation makes use of the Socket interface and the Kernel plumbing for dealing with network protocols. To support a new transport mechanism, you implement a new device driver specific to that transport mechanism that implements the struct net_device contract, which includes implementing the functions for struct net_device_ops.

Before I write a full implementation, I want to write something that only echoes a packet back to the sender. This mimics the behavior of the mctp-echo server in the user tools, and can make use of the mctp-req echo client.

A network driver moves traffic from the operating system to the network device or in the opposite direction. My proof-of-concept code is going to do half of each operation: it is going to take a packet from the operating system, munge it, and send it back to the operating system. In order to perform this operation, I have to implement the function .ndo_start_xmit from the struct net_device_ops. Here’s my implementation:

static netdev_tx_t mctp_pcc_tx(struct sk_buff *skb, struct net_device *ndev)
{
        struct mctp_hdr * hdr;
        struct control_work_data * control_write_data;

        u8      orig_dest;
        u8      orig_src;
        
        printk(KERN_INFO "mctp_pcc_tx called\n");

        hdr = mctp_hdr(skb);
        
        printk("Version:     %x\n",hdr->ver);
        printk("Source:      %x\n",hdr->src);
        printk("Destination: %x\n",hdr->dest);
        printk("flags_seq_tag:      %x\n",hdr->flags_seq_tag);
        pkt_hex_dump(skb); 


        orig_dest = hdr->dest;
        orig_src = hdr->src;

        __mctp_cb(skb);

        hdr = mctp_hdr(skb);
        hdr->src=orig_dest;
        hdr->dest=orig_src;
        hdr->flags_seq_tag += 8;
        //This is a single static block instead of a kmalloc'ed block. The kmalloc
        //was triggering a check during interrupt context and stack dumping.
        control_write_data = &control_write_data_static;//kmalloc(sizeof(struct control_work_data), GFP_KERNEL);
        control_write_data->skb = skb;
        INIT_WORK(&control_write_data->work, temp_echo_handler);
        init_waitqueue_head(&control_write_data->wq);

        schedule_work(&control_write_data->work);

        return NETDEV_TX_OK;
}

A few observations.

  • I am assuming that this code is supposed to consume and free the sk_buff that is handed in. Instead of doing that, I reuse it for the outward journey. I’d like to confirm that this is correct, but I have not gotten a double free message, so I think I am right.
  • I swap the source and destination values, as that is where the response message needs to go. The original destination value lets me set a routing rule to ensure that the packet gets sent to the new device driver. The recvfrom call in the mctp_req program looks for a packet from this address.
  • There is a flag on the header that indicates that this is a response packet. Without it, the packet does not get delivered back to the mctp_req process. That is what this line does: hdr->flags_seq_tag += 8; It should really be a bitwise OR of the flag bit, but this proves the concept.
  • Since the start_xmit call is done in interrupt context, we want to exit out as quickly as possible. The essential work that has to be done here is scheduling the work that cannot be done in interrupt context. This is done in the function linked in the control_write_data block, with the skbuff data attached. I could have put more of this work into the callback function, but nothing I did here would block.

The work_queue function that sends the packet back to the kernel looks like this:

void temp_echo_handler(struct work_struct *work){
        struct control_work_data *my_data = container_of(work, \
                                 struct control_work_data, work);
        pr_info("%s\n", __func__);
        if (my_data->skb != NULL){
                pr_info("%s calling netif_rx\n", __func__);
                netif_rx(my_data->skb);
        }
 
        pr_info("%s after netif_rx \n", __func__);
}

Aside from the logging, the function calls netif_rx(my_data->skb), which sends the packet back into the kernel’s network processing code…it ends up on a queue to get delivered to the appropriate waiting socket.

To set up a test for this code, I run the following shell script:

#!/bin/sh
insmod mctp_pcc.ko 
mctp route add 9 via mctpipcc2d
mctp address add 10 dev mctpipcc2d
mctp link set mctpipcc2d  up

Once this has run, I can see the mctp device using the mctp tool:

# mctp link
dev lo index 1 address 0x00:00:00:00:00:00 net 1 mtu 65536 up
dev mctpipcc2d index 9 address 0x(no-addr) net 1 mtu 68 up

To avoid fooling myself, I need to ensure that the mctp-echo server is not running, or it will respond for me.

A test run looks like this:

# ./obj/mctp-req eid 9
req:  sending to (net 1, eid 9), type 1
-> sent
req:  message from (net 1, eid 9) type 1 len 1: 0x00..

I added the -> sent message for debugging, but otherwise this is the vanilla code linked above.
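
For comparison, the userspace side is ordinary sockets. Here is a rough sketch of the core of an echo client like mctp-req, based on the kernel's AF_MCTP documentation (Documentation/networking/mctp.rst); the eid 9 and type 1 values match the test run above, and error handling is omitted:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/mctp.h>

int main(void)
{
        struct sockaddr_mctp addr = { 0 };
        socklen_t addrlen = sizeof(addr);
        unsigned char buf[1] = { 0 };
        int sd;

        /* AF_MCTP needs a recent kernel and headers */
        sd = socket(AF_MCTP, SOCK_DGRAM, 0);

        addr.smctp_family = AF_MCTP;
        addr.smctp_network = MCTP_NET_ANY;
        addr.smctp_addr.s_addr = 9;       /* the route added in the script above */
        addr.smctp_type = 1;              /* the "type 1" in the mctp-req output */
        addr.smctp_tag = MCTP_TAG_OWNER;  /* we own the tag; the kernel allocates it */

        sendto(sd, buf, sizeof(buf), 0, (struct sockaddr *)&addr, sizeof(addr));

        /* the response flag set by the driver is what routes this back to us */
        recvfrom(sd, buf, sizeof(buf), 0, (struct sockaddr *)&addr, &addrlen);
        printf("reply byte: 0x%02x\n", buf[0]);
        return 0;
}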

System Tap on Fedora 38

Posted by Adam Young on May 19, 2023 09:59 PM

Getting System Tap up and running on F38 has involved chasing down a few modules.

I am running on

uname -a
Linux hackery 6.2.15-300.fc38.aarch64+debug #1 SMP PREEMPT_DYNAMIC Thu May 11 16:10:31 UTC 2023 aarch64 GNU/Linux

I had to get the kernel-debug-devel package in order to get the files under /usr/src/kernels/6.2.15-300.fc38.aarch64+debug/. Running stap looks for a file in /lib/modules/6.2.15-300.fc38.aarch64+debug/build, which is a symlink to this directory.

sudo yum install kernel-debug-devel kernel-debug-modules kernel-debuginfo kernel-debug-core kernel-devel

With that installed I can run the command:

stap --dump-probe-types

I knew that things were OK once I could get the stap-prep command to work. That involved getting the debug kernel running and the appropriate other RPMs. As of now I am running this version of the Linux kernel from the Fedora RPMs: 6.2.15-300.fc38.aarch64+debug.

The funny thing is that I ended up not using System Tap after all this, but I wanted to record the thing I learned.

Smirking Sigma

Posted by Adam Young on May 18, 2023 08:53 PM

Design prototypes for a band logo


All images Copyright Adam Young.

Designed in Inkscape. The working title is “Smirking Sigma.”

The font used is Beachman Script, by David Rakowski. It really seems to capture the classic feel of a fifties music venue.

Remote git with ssh

Posted by Adam Young on May 17, 2023 05:37 PM

Working on a different architecture from my laptop means that I am invariably working on a remote machine. My current development can be done on an Ampere AltraMax machine, which means I have 80 processors available; quite a nice benefit when doing a Linux kernel compile that can use all of them.

Because the machine is a shared resource out of our lab, I want to make sure I can recreate my work on another machine; this one could be reclaimed or powered down due to lab priorities. Thus, all my remote work is done in git, and I use the ssh protocol to pull changes from my work server to my laptop fairly regularly…whenever I feel I have valuable work that could potentially be lost.

I will assume that you can figure out how to create a git repo on the work machine (write a comment if you feel it would be helpful for me to expand on that) and will just show how to pull it onto my laptop.

Here are the steps:

  • create a local git repo.
  • add your remote server ssh login as a git remote
  • git fetch from the remote server.

Here’s what it looks like:

mkdir myproj
cd myproj
git init
git remote add eng16sys root@eng16sys:devel/myproj
git fetch eng16sys

That last command will generate output like this:


remote: Enumerating objects: 13, done.
remote: Counting objects: 100% (13/13), done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 13 (delta 3), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (13/13), 3.07 KiB | 785.00 KiB/s, done.
From 10.76.105.76:devel/mctp_pcc
 * [new branch]      main       -> eng16sys-r105/main

This is a two-way setup in that I can push code from my laptop to the server as well. However, since I don’t have ssh from the server back to my laptop, I cannot pull code to the server from a command prompt on the server. This is a security constraint: I don’t want to allow people that have access to my server (a shared resource) to also have access to my personal laptop.

Acronym Challenge Programmatic Interface

Posted by Adam Young on May 15, 2023 07:45 PM

How do you know what is inside your computer? There are a couple of tools. If the hardware is on the PCI bus, from the command line you can run lspci, which will in turn enumerate the discovered devices on that bus. But what if the hardware is not on the PCI bus? And how does the Kernel discover it in the first place? For the hardware that I have to work with, the answer is that it is enumerated by the Unified Extensible Firmware Interface (UEFI) code embedded in the device and exposed via the Advanced Configuration and Power Interface (ACPI). This world is full of four letter acronyms. Here are my notes on some of them.

What I am about to write about is based on what I’ve learned from looking at Ampere System(s)-on-a-Chip (SoC). While details change from generation to generation, this is the general pattern I’ve learned. It is not specific to ARM servers, but it is common in ARM servers, or so I’ve been told by people who have spent more time in this world than I have.

Adam, we ain’t in Keystone anymore.

We don’t tend to call them Central Processing Units (CPUs) anymore. It seems kinda silly to call them Central when there are dozens or hundreds of them on a single SoC. We’ll call them cores. When a core needs to talk to another component on the SoC it uses some form of interconnect. The one that I have used is called Platform Communication Channel (PCC).

There are lots of channels on a board, for communication between many components. There are two items of note in a PCC: the mailbox and the doorbell. I love that these two terms are NOT ACRONYMS. They are, of course, analogies, and good ones, if not perfect. A mailbox is a shared memory location (offset and length in physical memory). A doorbell is a register. The idea is that you put your message in the mailbox, and then push the doorbell.

The analogy breaks down when you realize that the doorbell is not just a bit, but rather a full number that can carry additional information about the package that was left in the mailbox. And, because this is a multi-user operating system, there is a locking and synchronization process involved.

The channel is a generic communication mechanism. On an AltraMax system it is used by (at least) the xgene_hwmon Linux kernel module and the cppc_cpufreq kernel module. hwmon is for temperature and power monitoring. Despite the similar letters, the PC in CPPC is not the PCC from above: CPPC stands for Collaborative Processor Performance Control.

ACPI information is exposed in what the spec calls tables. For the most part, these are normalized tuples, but there is some nesting. Two tables of interest for communication between parts of an Ampere System on a Chip (SoC) are the Differentiated System Description Table (DSDT) and the Platform Communication Channel Table (PCCT). I wanted to take a closer look at what is in these tables. To do so, I used acpidump with the -b switch to extract the tables, and the iasl command to convert these two tables to their human readable source code forms.

The PCCT table has a short header, and then a repeated series of entries like this one:

[032h 0050   4]           Platform Interrupt : 00000038
[036h 0054   1]        Flags (Decoded Below) : 00
                                    Polarity : 0
                                        Mode : 0
[037h 0055   1]                     Reserved : 00
[038h 0056   8]                 Base Address : 00000000F0C10000
[040h 0064   8]               Address Length : 0000000000004000

[048h 0072  12]            Doorbell Register : [Generic Address Structure]
[048h 0072   1]                     Space ID : 00 [SystemMemory]
[049h 0073   1]                    Bit Width : 20
[04Ah 0074   1]                   Bit Offset : 00
[04Bh 0075   1]         Encoded Access Width : 03 [DWord Access:32]
[04Ch 0076   8]                      Address : 0000100001540010

[054h 0084   8]                Preserve Mask : 0000000000000000
[05Ch 0092   8]                   Write Mask : 0000000053000040
[064h 0100   4]              Command Latency : 000003E8
[068h 0104   4]          Maximum Access Rate : 00000000
[06Ch 0108   2]      Minimum Turnaround Time : 0000
[06Eh 0110  12]        Platform ACK Register : [Generic Address Structure]
[06Eh 0110   1]                     Space ID : 00 [SystemMemory]
[06Fh 0111   1]                    Bit Width : 20
[070h 0112   1]                   Bit Offset : 00
[071h 0113   1]         Encoded Access Width : 03 [DWord Access:32]
[072h 0114   8]                      Address : 0000100001540020

[07Ah 0122   8]            ACK Preserve Mask : 0000000000000000
[082h 0130   8]               ACK Write Mask : 0000000000010001

[08Ah 0138   1]                Subtable Type : 02 [HW-Reduced Comm Subspace Type2]
[08Bh 0139   1]                       Length : 5A
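
Both the Doorbell Register and the Platform ACK Register above are instances of the ACPI Generic Address Structure. The Linux kernel declares it (roughly, in include/acpi/actbl.h) like this; the comments show how the dumped fields land in it:

struct acpi_generic_address {
        u8  space_id;      /* 00 above: SystemMemory */
        u8  bit_width;     /* 0x20: a 32 bit wide register */
        u8  bit_offset;    /* 00 */
        u8  access_width;  /* 03: DWord (32 bit) accesses */
        u64 address;       /* physical address of the register */
};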

Let’s compare this with entries in the DSDT.

 Device (C000)
        {
            Name (_HID, "ACPI0007" /* Processor Device */)  // _HID: Hardware ID
            Name (_UID, Zero)  // _UID: Unique ID
            Method (_LPI, 0, NotSerialized)  // _LPI: Low Power Idle States
            {
                Return (PLPI) /* \_SB_.PLPI */
            }

            Name (PCPC, Package (0x17)
            {
                0x17, 
                0x03, 
                ResourceTemplate ()
                {
                    Register (PCC, 
                        0x20,               // Bit Width
                        0x00,               // Bit Offset
                        0x0000000000000000, // Address
                        0x02,               // Access Size
                        )
                }, 

What I have learned so far is that the _HID (Hardware ID) for this device is ACPI0007, which is a Processor, or a core using the earlier terminology. It has inside it a PCPC package. I have not been able to expand this acronym, but I have been told that these are registers within CPPC, so possibly the letters mean the same thing but are in a different order. Maybe Processor Collaborative Performance Counter?

The two values we are interested in here are the Access Size and the Address. The Access Size is actually a zero-relative index: the 0x02 here points to the third entry in the PCCT. The Address is the register offset: this particular register has an offset of 0, the next has 0x0000000000000004, then 0x0000000000000008, then 0x000000000000000C, and so on.

My understanding is that in PCC there are registers and there are shared memory regions. Because a shared memory region might be off the SoC in SDRAM, it can have a lot higher latency than a register. Thus, to keep latency down, these use the on-chip registers.

More on this as I learn more. And I might be WAAAAY off on this, so please do not use it to design your hardware.

Episode 375 – The market forces of left-pad, Episode 77 remaster part 2

Posted by Josh Bressers on May 15, 2023 12:00 AM

Josh and Kurt finish up the leftpad discussion. We spent a lot of time talking about how the market will respond to these sort of events, and the market did indeed speak; very little has changed. There is an aspect of all these security events where we need to understand the cost vs benefit just isn’t there. It may never be there. Rather than whine and complain, we need to work with our constraints.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3119-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_375_The_market_forces_of_left-pad_Episode_77_remaster_part_2.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_375_The_market_forces_of_left-pad_Episode_77_remaster_part_2.mp3</audio>

Show Notes

Episode 374 – The event we called left-pad, Episode 77 remaster part 1

Posted by Josh Bressers on May 08, 2023 12:00 AM

Josh and Kurt revisit Episode 77, which was named “npm and the supply chain” but was a discussion about the incident we all know now as “leftpad”. We didn’t understand what was happening at the time, but this would become an event we talk about for years to come. It’s shocking how many of the things we discuss are still completely valid five years later.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3114-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_374_The_event_we_called_left-pad_Episode_77_remaster_part_1.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_374_The_event_we_called_left-pad_Episode_77_remaster_part_1.mp3</audio>

Show Notes

ipmitool lan print

Posted by Adam Young on May 04, 2023 02:13 PM

When run from inside a console/ssh session, this will tell you the IPMI address of the machine you are on.

Episode 373 – HHGG security, Episode 42 remaster part 2

Posted by Josh Bressers on May 01, 2023 12:00 AM

This is the second part of remastering Episode 42 which is all about the security in the Hitchhiker’s Guide to the Galaxy movie. It’s a fun show and it’s shocking how many of these security themes are still relevant today.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3110-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_373_HHGG_security_Episode_42_remaster_part_2.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_373_HHGG_security_Episode_42_remaster_part_2.mp3</audio>

Show Notes

The Minimum Linux Kernel Module Code to Register a Driver

Posted by Adam Young on April 24, 2023 10:42 PM

I’ve been working through John Madieu’s book on Linux Device Driver Development. When typing in the sample code for the platform device, I got a segmentation fault registering the device (insmod).

I started with the example code for registering the platform_driver:

static struct platform_driver mctp_pcc_driver = {
    .probe = mctp_pcc_driver_probe,
    .remove = mctp_pcc_driver_remove,
    
};
static int __init mctp_pcc_mod_init(void)
{
        int rc = 0;
        printk(KERN_INFO "initializing MCTP over PCC\n");
        rc = platform_driver_register(&mctp_pcc_driver);
        return rc;
}

The stack trace in the kernel oops showed that there was a string compare happening in the platform_driver_register call. Looking at other kernel modules’ code, I noticed that they set the .driver { .name } value. This change worked:

static struct platform_driver mctp_pcc_driver = {
    .probe = mctp_pcc_driver_probe,
    .remove = mctp_pcc_driver_remove,
    .driver = {
          .name = "mctp_pcc",
    }
};

Once I made this change, I can insmod the module, and check to see that the system sees the driver:

# find /sys/ -name mctp_pcc
/sys/kernel/btf/mctp_pcc
/sys/kernel/debug/printk/index/mctp_pcc
/sys/bus/platform/drivers/mctp_pcc
/sys/module/mctp_pcc

And rmmod-ing the module removes these /sys entries.

If you go to the book’s GitHub site and check the code, you can see he fills out the structure with the owner value as well:

static struct platform_driver mypdrv = {
    .probe      = my_pdrv_probe,
    .remove     = my_pdrv_remove,
    .driver     = {
        .name     = "platform-dummy-char",
        .owner    = THIS_MODULE,
    },
};

One other change I noticed when looking at his sample code is that he does not use an init or exit function for the module, but instead registers the driver with a single macro:

 module_platform_driver(mypdrv);

Which I confirmed also creates and removes the /sys entries upon insmod/rmmod.
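
Putting the pieces together, the minimum promised by the title comes out to something like this (a sketch following the book's naming, plus the MODULE_LICENSE line every loadable module needs):

#include <linux/module.h>
#include <linux/platform_device.h>

static int my_pdrv_probe(struct platform_device *pdev)
{
        pr_info("platform-dummy-char probed\n");
        return 0;
}

static int my_pdrv_remove(struct platform_device *pdev)
{
        pr_info("platform-dummy-char removed\n");
        return 0;
}

static struct platform_driver mypdrv = {
        .probe  = my_pdrv_probe,
        .remove = my_pdrv_remove,
        .driver = {
                /* without .name, platform_driver_register oopses as above */
                .name  = "platform-dummy-char",
                .owner = THIS_MODULE,
        },
};

/* expands to the module init/exit boilerplate */
module_platform_driver(mypdrv);

MODULE_LICENSE("GPL");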

Episode 372 – HHGG security, Episode 42 remaster part 1

Posted by Josh Bressers on April 24, 2023 12:00 AM

The podcast is on hiatus for a little while due to some personal matters, but that creates an opportunity to remaster some fun old episodes. These shows are REALLY hard to listen to at the current quality (tools and talent have come a long way in the last few years).

This is a remaster of Episode 42 which is all about the security in the Hitchhiker’s Guide to the Galaxy movie. It’s a fun show and it’s shocking how many of these security themes are still relevant today.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3105-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_372_HHGG_security_Episode_42_remaster_part_1.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_372_HHGG_security_Episode_42_remaster_part_1.mp3</audio>

Show Notes

Keeping build output on one screen

Posted by Adam Young on April 20, 2023 07:40 PM

When a build goes wrong, the amount of error messaging can easily scroll off the screen. Usually the error is on the first line reported. Here are a couple of ways to make it easier to read just the lines you want.

The first is just to grep for the word ‘error.’ This works for gcc:

make 2>&1 | grep error

Which produces

/root/devel/mctp_pcc/mctp_pcc.c:178:18: error: ‘name’ undeclared (first use in this function)
/root/devel/mctp_pcc/mctp_pcc.c:178:52: error: ‘fe’ undeclared (first use in this function); did you mean ‘fd’?
/root/devel/mctp_pcc/mctp_pcc.c:179:9: error: ‘ndev’ undeclared (first use in this function); did you mean ‘cdev’?
/root/devel/mctp_pcc/mctp_pcc.c:179:37: error: ‘dev’ undeclared (first use in this function); did you mean ‘cdev’?
/root/devel/mctp_pcc/mctp_pcc.c:182:17: error: ‘rc’ undeclared (first use in this function); did you mean ‘rq’?
/root/devel/mctp_pcc/mctp_pcc.c:187:20: error: ‘idx’ undeclared (first use in this function); did you mean ‘ida’?
/root/devel/mctp_pcc/mctp_pcc.c:190:24: error: ‘STATE_IDLE’ undeclared (first use in this function); did you mean ‘VTIME_IDLE’?
/root/devel/mctp_pcc/mctp_pcc.c:193:34: error: ‘mctp_serial_tx_work’ undeclared (first use in this function)
/root/devel/mctp_pcc/mctp_pcc.c:197:17: error: label ‘free_netdev’ used but not defined
/root/devel/mctp_pcc/mctp_pcc.c:183:17: error: label ‘free_ida’ used but not defined
/root/devel/mctp_pcc/mctp_pcc.c:199:1: error: no return statement in function returning non-void [-Werror=return-type]

A bit of background…all Linux processes have 3 file descriptors (FDs) by default. STDIN is for reading in information from other processes; it is file descriptor 0. STDOUT is for, well, standard output, the output that the program is supposed to produce; it is FD 1. STDERR is for output that is not standard, and is intended for error messages; it is FD 2. Make and GCC spit their diagnostics into STDERR. The trick is to tell the shell to redirect standard error into STDOUT via the magic symbol 2>&1. I read this in my head as “two goes into ampersand one.” STDERR is 2, STDOUT is 1.
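
To make the descriptor numbers concrete, here is a tiny C program (my own illustration, nothing to do with make) that writes to both streams:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
        printf("this goes to STDOUT, file descriptor 1\n");
        fflush(stdout);
        fprintf(stderr, "this goes to STDERR, file descriptor 2\n");

        /* the raw descriptors underneath the stdio streams */
        write(STDOUT_FILENO, "fd 1 directly\n", 14);
        write(STDERR_FILENO, "fd 2 directly\n", 14);
        return 0;
}

Run it as ./a.out 2>/dev/null and only the STDOUT lines survive. Add 2>&1 and both streams flow down the same pipe, which is exactly what lets grep see the compiler errors.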

If you want to get just the first set of lines, you can use the head command like this:

make 2>&1 | head -10

Which produces

make -C /lib/modules/6.2.0+/build M=/root/devel/mctp_pcc modules
make[1]: Entering directory '/root/devel/linux'
  CC [M]  /root/devel/mctp_pcc/mctp_pcc.o
/root/devel/mctp_pcc/mctp_pcc.c: In function ‘create_mctp_pcc_nnetdev’:
/root/devel/mctp_pcc/mctp_pcc.c:178:18: error: ‘name’ undeclared (first use in this function)
  178 |         snprintf(name, sizeof(name), "mctpipcc%x", fe);
      |                  ^~~~
/root/devel/mctp_pcc/mctp_pcc.c:178:18: note: each undeclared identifier is reported only once for each function it appears in
/root/devel/mctp_pcc/mctp_pcc.c:178:52: error: ‘fe’ undeclared (first use in this function); did you mean ‘fd’?
  178 |         snprintf(name, sizeof(name), "mctpipcc%x", fe);

This gets just the first 10 lines…use a different number if you want a different amount of output.

These two options are fairly easy to type and thus don’t really call for scripting. As the filtering gets more involved, you may want to write scripts that build up a more complex selection of the output.

If you have control of your Makefile, you can use the -Wfatal-errors flag as is written up here, but I am calling into precanned Makefiles and it drops the flag. To modify Makefiles, see this discussion.


Episode 371 – pip install is the tool we deserve but not the tool we need

Posted by Josh Bressers on April 17, 2023 12:00 AM

Josh and Kurt talk about a blog post about pip and virtual environments. This eventually turns into a larger conversation around packaging tools and how we see incremental changes over time. The package ecosystems were what we needed a few years ago, but our needs have changed.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3099-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_371_-_pip_install_is_the_tool_we_deserve_but_not_the_tool_we_need.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_371_-_pip_install_is_the_tool_we_deserve_but_not_the_tool_we_need.mp3</audio>

Show Notes

Episode 370 – Open Source is bigger than you can imagine

Posted by Josh Bressers on April 10, 2023 12:00 AM

Josh and Kurt talk about some data on the size of NPM. Josh wrote a blog post and a report about the amount of SEO spam in NPM was released. Open source is enormous, and it’s mostly one person. It’s hard to imagine how this all works sometimes and this lack of understanding can create challenges.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3094-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_370_Open_Source_is_bigger_than_you_can_imagine.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_370_Open_Source_is_bigger_than_you_can_imagine.mp3</audio>

Show Notes