Fedora Security Planet

Episode 329 – Signing (What is it good for)

Posted by Josh Bressers on June 27, 2022 12:01 AM

Josh and Kurt talk about what the actual purpose of signing artifacts is. This is one of those spaces where the chain of custody for signing content is a lot more complicated than it sometimes seems to be. Is delivering software over https just as good as using a detached signature? How did we end up here, what do we think the future looks like? This episode will have something for everyone to complain about!

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_329_Signing_What_is_it_good_for.mp3

Show Notes

XPath for libvirt external snapshot path

Posted by Adam Young on June 24, 2022 07:20 PM

The following xmllint XPath query will pull out the name of the backing file for a VM named fedora-server-36 and an external snapshot named fedora-server-36-post-install:

virsh snapshot-dumpxml fedora-server-36 fedora-server-36-post-install | xmllint --xpath "string(//domainsnapshot/disks/disk[@snapshot='external']/source/@file)" -

The string function extracts the attribute value.

This value can be used in the process of using or deleting the snapshot.
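For example, the extracted path can be captured in a shell variable and reused. This is a small sketch, not from the original post; the qemu-img call is just one way to consume the value:

SNAP_BACKING_FILE=$(virsh snapshot-dumpxml fedora-server-36 fedora-server-36-post-install \
  | xmllint --xpath "string(//domainsnapshot/disks/disk[@snapshot='external']/source/@file)" -)
# inspect the backing file that the external snapshot points at
qemu-img info "$SNAP_BACKING_FILE"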

Copy in for-each loops in C++

Posted by Adam Young on June 23, 2022 05:03 PM

I had a bug in my OpenGL program. Here was the original code:

  for (Orbitor o : orbitors){
    o.calculate_position();
  }

and here was the working version

  for (std::vector<Orbitor>::iterator it = orbitors.begin() ;
       it != orbitors.end();
       ++it)
    {
      it->calculate_position();
    }

The bug was that the o.calculate_position(); call was supposed to update the internal state of the Orbitor object, but it was called on a copy of the instance stored in the container, not on the stored instance itself. Thus, when a later call tried to show the position, it was working with the object whose position had never been updated, and so it drew the orbitors in the wrong place.

The reason for this bug makes sense to me: the first version of the loop makes a copy of each instance to use inside the for loop. The second uses an iterator, which effectively points at the original object.

One way that I can protect against these kinds of bugs is to create a private copy constructor, so that the objects cannot be accidentally copied.
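Here is a minimal sketch of both fixes: taking the loop variable by reference, and (optionally) deleting the copy operations so an accidental by-value loop fails to compile. The Orbitor body below is a stand-in, not the real class:

#include <vector>

struct Orbitor {
  double position = 0.0;
  void calculate_position() { position += 1.0; }  // stand-in for the real math

  // Uncommenting these makes the by-value loop a compile-time error:
  // Orbitor(const Orbitor &) = delete;
  // Orbitor &operator=(const Orbitor &) = delete;
};

int main() {
  std::vector<Orbitor> orbitors(3);
  for (Orbitor &o : orbitors) {   // reference, so the stored objects are updated
    o.calculate_position();
  }
}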

Intro to libvirt based virtualization on Linux

Posted by Adam Young on June 22, 2022 03:14 PM

The processes of development, installation, testing, and debugging of software all benefit from the use of virtual machines. If you are working in a Linux-based infrastructure, you have access to virtual machine management on your system. There are a handful of related technologies that all work together to help you get your work done.

  • A hypervisor is a machine that runs virtual machines. There are several proprietary hypervisor distributions in the marketplace, such as VMware's ESXi and Microsoft's Hyper-V.
  • KVM is the Linux Kernel module that allows you to run a virtual machine in a process space isolated from the rest of the system.
  • Qemu is an implementation of that virtual machine. It was originally an emulator (hence the name) and can still be run that way, but it is far more powerful and performant when run in conjunction with KVM.
  • Xen is an alternative approach that preceded KVM. It implemented the entire virtualization layer in kernel space; Linus did not like this approach and declined to merge it into the mainline Linux kernel.
  • libvirt is a client/server implementation to allow you to communicate with a Linux machine running an implementation of virtualization like KVM/Qemu or Xen. There are a handful of other implementations as well, but for our purposes, we will focus on the KVM/Qemu approach.
  • libvirtd is the server daemon for libvirt. It is run via systemd on the current Fedora and Ubuntu releases.
  • virsh is a CLI application that allows you to send commands to the libvirt subsystem.
  • virt-manager is a GUI program that lets you send libvirt commands via a more discoverable workflow.

There are other tools, but these are enough to get started.

In order to run a virtual machine, you need a hypervisor machine to host it. This might be your laptop, or it might be a remote server. For example, I am running Ubuntu 22.04 on a Dell Latitude laptop, and I can run a virtual machine on that directly. Here is the set of libvirt-related packages I have installed:

  • gir1.2-libvirt-glib-1.0/jammy
  • libvirt-clients/jammy-updates
  • libvirt-daemon-config-network/jammy-updates
  • libvirt-daemon-config-nwfilter/jammy-updates
  • libvirt-daemon-driver-qemu/jammy-updates
  • libvirt-daemon-system-systemd/jammy-updates
  • libvirt-daemon-system/jammy-updates
  • libvirt-daemon/jammy-updates
  • libvirt-glib-1.0-0/jammy
  • libvirt-glib-1.0-data/jammy
  • libvirt0/jammy-updates
  • python3-libvirt/jammy

I am fairly certain I only had to install libvirt-daemon-system and libvirt-daemon-driver-qemu and then enable the daemon via systemd.
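A sketch of what that install-and-enable step probably looks like on Ubuntu; the usermod line is an optional extra (not from the original post) so a non-root user can manage VMs:

sudo apt install libvirt-daemon-system libvirt-daemon-driver-qemu
sudo systemctl enable --now libvirtd
sudo usermod -aG libvirt $USER   # optional: manage libvirt without sudo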

Once the daemon is up and listening, you can run the virt-manager gui to connect to it and perform basic operations. On Fedora and Ubuntu, this is provided by the package virt-manager and can be run from the command line as virt-manager.

[Image: virt-manager with two servers listed]

In the above picture, you can see the localhost listed in black, but also another host from our lab listed in gray. It is grayed out because I have not yet attempted to connect to this server this session.

The localhost system has no virtual machines currently defined. Adding one is simple, but you do need to have the install media to do anything with it. I’ll show this on the remote system instead. The steps you use to build a VM remotely are the same as you use to build it locally.

To create a new connection, start by selecting the File menu and then Add Connection. Leave the Hypervisor as QEMU/KVM and select the checkbox to connect to the remote host over ssh. You probably need to connect to the remote machine as root. You can use either a FQDN or an IPv4 address to connect. IPv6 might work, but I have not tried it.

[Image: virt-manager Add Connection dialog]

This is going to use the same key or password you would use to connect via ssh from the command line.
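The same connection can be made from the command line with virsh; this is a sketch with a placeholder hostname:

virsh -c qemu+ssh://root@remote-hypervisor.example.com/system list --all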

Once you have a connection, the line is displayed darker with any VMs listed. I have one VM, named fedora-server-36.

[Image: virt-manager remote connection with one VM]

When running a virtual machine, you want to use the same architecture as the underlying hypervisor. For the most part, this means X86_64. However, I work for a company that builds AARCH64 based server chips, and the machine listed above is one of ours.

If you hover the mouse over a running VM, right click, and select “Open,” you will get a terminal console.

[Image: VM terminal in virt-manager]

To see the details, select the View->Details menu item. Here you can see the architecture is aarch64.

[Image: VM details]

In the next article, I will go through network configuration and launching a virtual machine.

Conan-izing an OpenGL project.

Posted by Adam Young on June 22, 2022 02:25 PM

Now that I can build my app with Autotools, I want to make it work with Conan. In my head, I have Conan mapped to projects like Cargo in Rust and pip in Python. However, C++ has a far less homogenized toolchain, and I expect things are going to be more “how to make it work for you.” I started with Autotools to minimize that.

I did, however, install a few packages for development, and I am tempted to start by removing those packages and try to make conan do the work to fetch and install them.

 history | grep "apt install"
 2067  sudo apt install freeglut3-dev
 2095  sudo apt install  libboost-date-time-dev 
 2197  sudo apt install autoconf

So I removed those

sudo apt remove freeglut3-dev libboost-date-time-dev

Now, follow the same general pattern as the getting started guide:

conan search freeglut --remote=conancenter
Existing package recipes:

freeglut/3.2.1

Because I am using the Python conanfile.py instead of a conanfile.txt, I have:

$ git diff conanfile.py
diff --git a/conanfile.py b/conanfile.py
index a59fca3..b545ca0 100644
--- a/conanfile.py
+++ b/conanfile.py
@@ -40,3 +40,8 @@ class OrbitsConan(ConanFile):
     def package(self):
         autotools = Autotools(self)
         autotools.install()
+
+    def build_requirements(self):
+        self.tool_requires("boost/1.79.0")
+        self.test_requires("freeglut/3.2.1")
+

but when I try to install:

mkdir build
cd build
conan install ..

After much output, I get the error:

Installing (downloading, building) binaries...
ERROR: There are invalid packages (packages that cannot exist for this configuration):
freeglut/3.2.1: Invalid ID: freeglut does not support gcc >= 10 and clang >= 11

So…I guess we build that, too….nope. OK, going to punt on that for the moment, and see if I can get the rest to build, including boost. Comment out the freeglut line in conanfile.py and try again.
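After commenting it out, the build_requirements method would look roughly like this:

    def build_requirements(self):
        self.tool_requires("boost/1.79.0")
        # self.test_requires("freeglut/3.2.1")  # disabled: the recipe rejects gcc >= 10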

ERROR: Missing prebuilt package for 'boost/1.79.0'
Use 'conan search boost/1.79.0 --table=table.html -r=remote' and open the table.html file to see available packages
Or try to build locally from sources with '--build=boost'

OK, add in the --build=boost flag as recommended. I am not certain whether this builds just the date_time library or all of boost, and building all of boost might take a while… but no. Seems like it worked. Let’s try configure and build.

conan build .. 
/home/ayoung/devel/git/admiyo/orbits/./src/orbits.h:2:10: fatal error: boost/date_time/gregorian/gregorian.hpp: No such file or directory
    2 | #include "boost/date_time/gregorian/gregorian.hpp"

OK, let’s see if that is in the cached files from conan:

$ find  ~/.conan/ -name gregorian.hpp | grep date_time
/home/ayoung/.conan/data/boost/1.79.0/_/_/package/dc8aedd23a0f0a773a5fcdcfe1ae3e89c4205978/include/boost/date_time/gregorian/gregorian.hpp
/home/ayoung/.conan/data/boost/1.79.0/_/_/source/source_subfolder/boost/date_time/gregorian/gregorian.hpp

So, yeah, it is there, but Autotools does not seem to be setting the include directory from the package. Looking back at the output from the previous command, I see that the g++ line only has these two -I flags.

-I. -I/home/ayoung/devel/git/admiyo/orbits/./src

I would expect that conan would implicitly add the include directories from the dependency packages to the build of a new package. However, that does not seem to be the case. I’ll look into it. But for now, I can work around it by adding the directories myself. That is the power of using a full programming language like Python as opposed to a Domain Specific Language. Here’s my build step:

    # Assumes "import os" at the top of conanfile.py; AutoToolsBuildEnvironment
    # comes from the legacy "conans" package, Autotools from conan.tools.gnu.
    def build(self):
        env_build = AutoToolsBuildEnvironment(self)
        autotools = Autotools(self)

        # Collect the include paths of the dependencies and pass them to the
        # compiler via CXXFLAGS, since they are not added automatically.
        CXXFLAGS=""
        for PATH in env_build.include_paths:
            incdir = " -I%s" % PATH
            CXXFLAGS = CXXFLAGS + incdir
        os.environ["CXXFLAGS"] = CXXFLAGS

        autotools.autoreconf()
        autotools.configure()
        autotools.make()

Note that I used the pdb mechanism pixelbeat wrote about way back. It let me inspect what was going on during the build process…huge time saver.

This might be why conan punts on the include path: I had to use environment variables to pass the additional values on down through Autotools to the Makefile. I don’t know cmake well enough to say whether it would have the same issue.

With this change added, the build completes to the point that I have a running orbit executable.

If this were part of a larger workflow, I would post this build to an internal conan repository server to be shared amongst the team. We’ll be doing that with the actual binaries used for production work.
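With Conan 1.x, publishing to an internal server is roughly the following sketch; the remote name and URL here are placeholders:

conan remote add internal https://conan.example.com/artifactory/api/conan/team
conan upload orbits/0.1 -r internal --all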

Poking back at the FreeGLUT build, I see a couple things. First, if we look at the conanfile.py that was used to build it, we see that the check is right there in the code. We also see that it points to an old issue that was long ago fixed. Note that the link is not versioned, so if it gets changed, this article will not point to the error anymore.

I tried downloading that recipe and running it locally. It turns out that it depends on a slew of system packages (all of the X stuff required for OpenGL, etc). Which I installed. At that point, the build fails with:

CMake Error: The source directory “/home/ayoung/devel/conan/freeglut” does not appear to contain CMakeLists.txt.

And, while I could debug this, I decided that I am not going to pursue building against the package posted to central. For now, using system packages for the build seems to make more sense, especially for a dependency like FreeGLUT. If a larger ecosystem of native packages emerges, I might revisit this decision.

Converting an OpenGL project to use Autotools

Posted by Adam Young on June 21, 2022 01:51 PM

In a science fiction game I play, we often find ourselves asking “which side of the Sun is Mars from the Earth right now?” Since this is for a game, the exact distance is not important, just “same side” or “90 degrees ahead” is a sufficient answer. But “right now” actually means a couple hundred years in the future.

Perfect problem to write a little code to solve. Here’s what it looks like.

Video: http://adam.younglogic.com/wp-content/uploads/2022/06/orbits-running.webm (the orbits program running)

My team is using Conan for package management of our native code. I want to turn this Orbits project into a Conan package.

We’re using Autotools for other things, so I’ll start by creating a new Autotools-based project.

conan new orbits/0.1 --template autotools_exe
File saved: Makefile.am
File saved: conanfile.py
File saved: configure.ac
File saved: src/Makefile.am
File saved: src/main.cpp
File saved: src/orbits.cpp
File saved: src/orbits.h
File saved: test_package/conanfile.py

Aaaaaand….I see it is opinionated. I am going to move my code into this structure. Specifically, most of the code will go into orbits.cpp, but I do have stuff that needs to go into main. Time to restructure.

To start, I move my existing orbit.cpp into src, overwriting the orbits.cpp file in there. I rename the main function to orbits, and add in the argc and argv parameters so I can call it from main. That should be enough to get it to compile, assuming I can get the Autotools configuration correct. I start by running autoconf to see where things stand.

ayoung@ayoung-Latitude-7420:~/devel/git/admiyo/orbits$ autoconf 
configure.ac:3: error: possibly undefined macro: AM_INIT_AUTOMAKE
...

And many other lines of errors, mostly about missing files. This gives me a way forward.

Many of the files are project administration, such as the AUTHORS file, which credits the people that work on it, or the LICENSE file which, while boilerplate, makes it unambiguous what the terms are for redistribution of code and binaries. It is tempting to just use touch to create them, but you are better off putting in the effort to at least start the files. More info here.

Remember that the README.md file will be displayed on the git repo’s landing page for the major git projects, and it is worth treating that as your primary portal for communicating with any potential users and contributors.

Autotools assumes that you have just the minimal set of files checked in to git, and that the rest of the build files are generated. I ended up with this short script to run the steps in order:

#!/bin/sh

autoreconf --install
./configure
automake
make

Since you are generating a slew of files, it is worth knowing the command to remove everything that is not tracked by git.

First, make sure you have everything you are actively working on committed! If not, you will lose files.

Then run:

git clean -xdf .

The primary file I needed to work on was configure.ac. This is the file that generates the configure script. This script is used to test if the required dependencies are available on the system. Here’s my configure.ac

AC_INIT([orbits], [0.1], [])
AM_INIT_AUTOMAKE([-Wall -Werror foreign])
AC_PROG_CXX
AM_PROG_AR
LT_INIT
AC_CHECK_LIB([glut], [glutInit])
AC_CHECK_LIB([GL],[glVertex2f])
PKG_CHECK_MODULES([GL], [gl])
AC_CONFIG_FILES([Makefile src/Makefile])
AC_OUTPUT

This was the hardest part to get right.

AC_CHECK_LIB([GL],[glVertex2f])
PKG_CHECK_MODULES([GL], [gl])

The top line checks that the libGL shared library is installed on the system, and has the glVertex2f symbol defined in it. You would think that you just need the second line, which checks for the module. It turns out that configure is also responsible for generating the linker flags in the Makefile. So, while PKG_CHECK_MODULES will tell you that you have OpenGL installed, the AC_CHECK_LIB line is what actually makes the build link against it.
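For completeness, a hypothetical src/Makefile.am that consumes those results might look like the following; AC_CHECK_LIB appends the libraries to LIBS automatically, while GL_CFLAGS and GL_LIBS come from PKG_CHECK_MODULES (the real file may differ):

bin_PROGRAMS = orbits
orbits_SOURCES = main.cpp orbits.cpp orbits.h
orbits_CXXFLAGS = $(GL_CFLAGS)
orbits_LDADD = $(GL_LIBS)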

I’m sure there is more wisdom to be gained in working with Autotools, but this is enough to get me started.

Now that I have the Autotools part working (maybe even correctly?!) I need to work on making it build via conan.

But that is another story.

Episode 328 – The Security of Jobs or Job Security

Posted by Josh Bressers on June 20, 2022 12:01 AM

Josh and Kurt talk about the security of employees leaving jobs. Be it a voluntary departure or in the context of the current layoffs we see, what are the security implications of having to remove access for one or more people departing their job?

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_328_The_Security_of_Jobs_or_Job_Security.mp3

Show Notes

Parsing libvirt xmldump using xpath

Posted by Adam Young on June 14, 2022 07:58 PM

In a recent article, I saw yet another example of using grep to pull information out of xml, and then to manually look for a field. However, XML is structured, and with XPath, we can pull out exactly what we need.

virsh dumpxml fedora-server-36 | xmllint --xpath "//domain/devices/disk[@device='disk']"  -

That will produce output like this:

<disk device="disk" type="file">
  <driver discard="unmap" name="qemu" type="qcow2"/>
  <source file="/var/lib/libvirt/images/fedora-server-36.qcow2"/>
  <target bus="virtio" dev="vda"/>
</disk>

Note that I did more in my XPath than required by the original article. I wanted to show an example of querying based on an attribute inside the selected node.

Update: Here is an example of what is done later in the article: pulling the path out of the pool XML.

virsh pool-dumpxml default |  xmllint --xpath "//pool/target/path/text()"  -
/var/lib/libvirt/images

Episode 327 – The security of alert fatigue

Posted by Josh Bressers on June 13, 2022 12:01 AM

Josh and Kurt talk about a funny GitHub reply that notified 400,000 people. It’s fun to laugh at this, but it’s an easy open to discussing alert fatigue and why it’s important to be very mindful of our communications.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_327_The_security_of_alert_fatigue.mp3

Show Notes

Episode 326 – Big fat containers

Posted by Josh Bressers on June 06, 2022 12:01 AM

Josh and Kurt talk about containers. There are a lot of opinions around what type of containers is best. Back when it all started there were only huge distro sized containers. Now we have a world with many different container types and sizes. Is one better?

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_326_Big_fat_containers.mp3

Show Notes

Episode 325 – Is one open source maintainer enough?

Posted by Josh Bressers on May 30, 2022 12:00 AM

Josh and Kurt talk about a recent OpenSSF issue that asks the question how many open source maintainers should a project have that’s “healthy”? Josh did some research that shows the overwhelming majority of packages have one maintainer. What does that mean?

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_325_Is_one_open_source_maintainer_enough.mp3

Show Notes

Building Linux tip-of-tree on an Ampere based system

Posted by Adam Young on May 25, 2022 04:37 PM

I have an Ampere Altra-Max/INGRASYS Yushan server system running CentOS Stream 8.

Because we are a chip manufacturer, we don’t sell end systems; we provide a reference platform that is a starting point for our customers to make a product. This leads to a bizarre set of internal versus external names. One thing that you can rely on, however, is the identifier of the processor itself:

# cat /proc/cpuinfo 
processor	: 0
BogoMIPS	: 50.00
Features	: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
CPU implementer	: 0x41
CPU architecture: 8
CPU variant	: 0x3
CPU part	: 0xd0c
CPU revision	: 1
...

To make this readable, use the lscpu utility:

[root@eng14sys-r111 ~]# lscpu 
Architecture:        aarch64
Byte Order:          Little Endian
CPU(s):              80
On-line CPU(s) list: 0-79
Thread(s) per core:  1
Core(s) per socket:  80
Socket(s):           1
NUMA node(s):        1
Vendor ID:           ARM
BIOS Vendor ID:      Ampere(R)
Model:               1
Model name:          Neoverse-N1
BIOS Model name:     Ampere(R) Altra(R) Processor
Stepping:            r3p1
CPU max MHz:         3000.0000
CPU min MHz:         1000.0000
BogoMIPS:            50.00
L1d cache:           64K
L1i cache:           64K
L2 cache:            1024K
NUMA node0 CPU(s):   0-79
Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs


I want to build the latest kernel from Linus’s repo and run it on the server. Here are the steps I went through.

Since this was pretty much a clean install, I had none of the development tools installed, including git. I’ll also include here a couple of other packages that I learned I needed later in the process. This is the set of successful commands I ended up running:

yum groupinstall "Development Tools"
yum install openssl-devel
yum install python3
yum install ncurses-devel #for menuconfig, which I didn't really need.
dnf config-manager --set-enabled "powertools"
yum install dwarves
git clone https://github.com/torvalds/linux
cd linux
export NPROC=`nproc`
yes "" | make oldconfig 

make -j $NPROC
make -j $NPROC modules
make modules_install
make install
grub2-mkconfig -o /boot/grub2/grub.cfg
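As an aside (not a step the author used), on CentOS Stream grubby can confirm which entry will boot by default:

grubby --default-kernel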

Before a reboot I get

[root@eng14sys-r111 linux]# uname -mrs
Linux 4.18.0-365.el8.aarch64 aarch64

After a reboot my grub menu looks like this

      CentOS Stream (4.18.0-365.el8.aarch64) 8                                 
      CentOS Stream (4.18.0-348.el8.aarch64) 8                                 
      CentOS Stream (0-rescue-306ea476c3584adea2089f1980a56ca3) 8              
      System setup

At the start of the boot log I see

[    0.000000] Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
[    0.000000] Linux version 5.18.0+ (root@eng14sys-r111.scc-lab.amperecomputing.com) (gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-13), GNU ld version 2.30-114.el8) #2 SMP Wed May 25 08:33:07 PDT 2022

After boot and log in:

[root@eng14sys-r111 ~]# uname -a
Linux eng14sys-r111.scc-lab.amperecomputing.com 5.18.0+ #2 SMP Wed May 25 08:33:07 PDT 2022 aarch64 aarch64 aarch64 GNU/Linux

Episode 324 – WTF is up with WFH

Posted by Josh Bressers on May 23, 2022 12:00 AM

Josh and Kurt talk about the whole work from home debate. It seems like there are a lot of very silly excuses why working from home is bad. We’ve both been working from home for a long time and have a chat about the topic. There’s not much security in this one, but it is a fun discussion.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_324_WTF_is_up_with_WFH.mp3

Show Notes

Errors running Keystone pep8

Posted by Adam Young on May 17, 2022 10:41 PM

The command to run the formatting tests for the keystone project is:

tox -e pep8

Running this on Fedora35 failed for me with this error:

ERROR:   pep8: could not install deps [-chttps://releases.openstack.org/constraints/upper/master, -r/opt/stack/keystone/test-requirements.txt, .[ldap,memcache,mongodb]]; v = InvocationError("/opt/stack/keystone/.tox/pep8/bin/python -m pip install -chttps://releases.openstack.org/constraints/upper/master -r/opt/stack/keystone/test-requirements.txt '.[ldap,memcache,mongodb]'", 1)

What gets swallowed up is the actual error in the install, and it has to do with the fact that the Python dependencies are compiled against native libraries. If I activate the venv and run the command by hand, I can see the first failure. But if I look back at the previous output, I can see it there too, just buried a few screens up:

    Error: pg_config executable not found.

A later error was due to the compile step failing to find lber.h:

 In file included from Modules/LDAPObject.c:3:
  Modules/common.h:15:10: fatal error: lber.h: No such file or directory
     15 | #include <lber.h>
        |          ^~~~~~~~
  compilation terminated.
  error: command '/usr/bin/gcc' failed with exit code 1

To get the build to run, I needed to install both libpq-devel and libldap-devel. Now it fails like this:

File "/opt/stack/keystone/.tox/pep8/lib/python3.10/site-packages/pep257.py", line 24, in <module>
    from collections import defaultdict, namedtuple, Set
ImportError: cannot import name 'Set' from 'collections' (/usr/lib64/python3.10/collections/__init__.py)

This appears to be due to the version of python3 on my system (3.10), which is later than what upstream OpenStack supports. I do have python3.9 installed on my system, and can modify tox.ini to use it by specifying the basepython version.

 
[testenv:pep8]
basepython = python3.9
deps =

And then I can run tox -e pep8.

Episode 323 – The fake 7-Zip vulnerability and SBOM

Posted by Josh Bressers on May 16, 2022 12:00 AM

Josh and Kurt talk about a fake 7-Zip security report. It’s pretty clear that everyone is running open source all the time. We end on some thoughts around what SBOM is good for, and who should be responsible for them.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_323_The_fake_7Zip_vulnerability_and_SBOM.mp3

Show Notes

Secureboot Signing With Your Own Key

Posted by Robbie Harwood on May 12, 2022 04:00 AM

Create and enroll a key

First, we'll need some packages:

dnf install pesign mokutil keyutils

(Package names are the same on most major distros, though of course your package manager won't be the same.)

Next, we create a key for signing. This uses efikeygen, from the pesign project; I prefer efikeygen because it also creates an NSS database that will be useful for pesign later.

efikeygen -d /etc/pki/pesign -S -TYPE -c 'CN=Your Name Key' -n 'Custom Secureboot'

Replace Your Name Key with your name, and Custom Secureboot with a name for the key that you'll remember for later steps. For -TYPE, replace with -m if you only plan to sign custom kernel modules, and -k otherwise.

(Note that this will set up the key for use in /etc/pki/pesign, which is convenient for pesign later, but it can also be placed elsewhere, like on a hardware token - see efikeygen(1) for more options.)

Next, export the public key and import it into the MOK (the Machine Owner Keystore - the list of keys trusted for signing):

certutil -d /etc/pki/pesign -n 'Custom Secureboot' -Lr > sb_cert.cer
mokutil --import sb_cert.cer

Again, replace Custom Secureboot with the chosen name. mokutil will prompt you for a password - this will be used in a moment to import the key.

Reboot, and press any key to enter the mok manager. Use the up/down arrow keys and enter to select "enroll mok", then "view key". If it's the same key you generated earlier, continue, then "yes" to enroll. The mok manager will prompt you for the password from before - note that it will not be echoed (no dots, either). When finished, select reboot.

To check that the key imported okay, we can use keyctl. Output could look something like this:

~# keyctl show %:.platform
Keyring
1072527305 ---lswrv      0     0  keyring: .platform
1013423757 ---lswrv      0     0   \_ asymmetric: Microsoft Windows Production PCA 2011: a92902398e16c49778cd90f99e4f9ae17c55af53
 246036308 ---lswrv      0     0   \_ asymmetric: Your Name Key: 31fe6684706ff53faf26cec7e700f84aa0fd22ae
 919193603 ---lswrv      0     0   \_ asymmetric: Red Hat Secure Boot CA 5: cc6fa5e72868ba494e939bbd680b9144769a9f8f
 341707055 ---lswrv      0     0   \_ asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed522988a1bd4

where the important part is that your key is among those listed. (The Microsoft keys are the "normal" anchors for secureboot, and the Red Hat one is present because this is a RHEL machine.)
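As a related sanity check (an aside, not in the original steps), mokutil can also report whether secureboot is currently enabled on the machine:

mokutil --sb-state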

Unenroll a key

If you want to unenroll a key you added, just do

mokutil --delete sb_cert.cer

This will prompt for the password; then reboot through the mok manager as before, except this time select the option to delete the key.

Sign a kernel

Kernel signatures are part of the vmlinuz file. Unfortunately, the process differs between x64 (or amd64, or x86_64, or whatever you want to call it) and aarch64. First, x64 because it's simpler:

pesign -c 'Custom Secureboot' -i vmlinuz-VERSION -s -o vmlinuz-VERSION.signed
pesign -S -i vmlinuz-VERSION.signed # check the signatures
mv vmlinuz-VERSION.signed vmlinuz-VERSION

Replace VERSION with whatever suffix your vmlinuz has, and Custom Secureboot with whatever name you chose earlier.

On aarch64/aa64, things are slightly more involved because the signature is pre-compression. Not to worry, though:

zcat vmlinuz-VERSION > vmlinux-VERSION
pesign -c 'Custom Secureboot' -i vmlinux-VERSION -s -o vmlinux-VERSION.signed
pesign -S -i vmlinux-VERSION.signed # check signature
gzip -c vmlinux-VERSION.signed > vmlinuz-VERSION
rm vmlinux-VERSION*

Sign a kernel module

First, prerequisites - the signing tool. On Fedora/RHEL-likes:

dnf install kernel-devel

while on Debian-likes I believe this is part of linux-kbuild, and therefore pulled in by linux-headers.

The signing tool uses openssl, so we need to extract the key from the NSS database:

pk12util -o sb_cert.p12 -n 'Custom Secureboot' -d /etc/pki/pesign

Replacing Custom Secureboot as before. This will prompt for a password, which will encrypt the private key - we'll need this for the next step:

openssl pkcs12 -in sb_cert.p12 -out sb_cert.priv -nocerts -noenc

This is exporting an unencrypted private key, so of course handle with care :)

Signing will be something like:

/usr/src/kernels/$(uname -r)/scripts/sign-file sha256 sb_cert.priv sb_cert.cer my_module.ko

where my_module.ko is of course the module to be signed. Debian users will I think want a path more like /usr/lib/linux-kbuild-5.17/scripts/sign-file.

To inspect:

~# modinfo my_module.ko | grep signer
  signer:         Your Name Key

where Your Name Key will be your name as entered during generation.

To test, insmod my_module.ko; to remove it, rmmod my_module.

Sign a grub build

This is fairly straightforward - the signatures live in the .efi files, which are just PE binaries, which live in /boot/efi/EFI/distro_name (e.g., /boot/efi/EFI/redhat).

pesign -i grubx64.efi -o grubx64.efi.signed -c 'Custom Secureboot' -s
pesign -i grubx64.efi.signed -S # check signatures
mv grubx64.efi.signed grubx64.efi

where Custom Secureboot is once again the name you picked above. Note that x64 is the architecture name in EFI - so if you're on aarch64, it'd be aa64, etc.

Episode 322 – Adam Shostack on the security of Star Wars

Posted by Josh Bressers on May 09, 2022 12:00 AM

Josh and Kurt talk to Adam Shostack about his new book “Threats: What Every Engineer Should Learn From Star Wars”. We discuss some of the lessons and threats in the Star Wars universe, it’s an old code I hear. We also discuss if Star Wars is a better than Star Trek for teaching security (it probably is). It’s a fun conversation and sounds like an amazing book.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_322_Adam_Shostack_on_the_security_of_Star_Wars.mp3

Show Notes

Keystone LDAP with Bifrost

Posted by Adam Young on May 04, 2022 07:39 PM

I got Keystone in my Bifrost install to talk via LDAP to our FreeIPA server. Here’s what I had to do.

I started with a new install of bifrost, using Keystone and TLS.

./bifrost-cli install --enable-keystone --enable-tls  --network-interface enP4p4s0f0np0 --dhcp-pool 192.168.116.25-192.168.116.75

After making sure that Keystone could work for normal things:

source /opt/stack/bifrost/bin/activate
export OS_CLOUD=bifrost-admin
 openstack user list -f yaml
- ID: 1751a5bb8b4a4f0188069f8cb4f8e333
  Name: admin
- ID: 5942330b4f2c4822a9f2cdf45ad755ed
  Name: ironic
- ID: 43e30ad5bf0349b7b351ca2e86fd1628
  Name: ironic_inspector
- ID: 0c490e9d44204cc18ec1e507f2a07f83
  Name: bifrost_user

I had to install python3-ldap and python3-ldappool.

sudo apt install python3-ldap python3-ldappool

Now create a domain for the LDAP data.

openstack domain create freeipa
...
openstack domain show freeipa -f yaml

description: ''
enabled: true
id: 422608e5c8d8428cb022792b459d30bf
name: freeipa
options: {}
tags: []

Edit /etc/keystone/keystone.conf to support domain-specific backends and back them with file-based config. When you are done, your identity section should look like this:

[identity]
domain_specific_drivers_enabled=true
domain_config_dir=/etc/keystone/domains
driver = sql

Create the corresponding directory for the new configuration files.

sudo mkdir /etc/keystone/domains/

Add in a configuration file for your LDAP server. Since I called my domain freeipa, I have to name the config file /etc/keystone/domains/keystone.freeipa.conf:

[identity]
driver = ldap

[ldap]
url = ldap://den-admin-01


user_tree_dn = cn=users,cn=accounts,dc=younglogic,dc=com
user_objectclass = person
user_id_attribute = uid
user_name_attribute = uid
user_mail_attribute = mail
user_allow_create = false
user_allow_update = false
user_allow_delete = false
group_tree_dn = cn=groups,cn=accounts,dc=younglogic,dc=com
group_objectclass = groupOfNames
group_id_attribute = cn
group_name_attribute = cn
group_member_attribute = member
group_desc_attribute = description
group_allow_create = false
group_allow_update = false
group_allow_delete = false
user_enabled_attribute = nsAccountLock
user_enabled_default = False
user_enabled_invert = true

To make the changes take effect, restart the Keystone uWSGI service:

sudo systemctl restart uwsgi@keystone-public

And test that it worked

openstack user list -f yaml  --domain freeipa
- ID: b3054e3942f06016f8b9669b068e81fd2950b08c46ccb48032c6c67053e03767
  Name: renee
- ID: d30e7bc818d2f633439d982783a2d145e324e3187c0e67f71d80fbab065d096a
  Name: ann

This same approach can work if you need to add more than one LDAP server to your Keystone deployment.
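A sketch of what a second backend could look like, with placeholder names: create another domain, say corp, and give it its own /etc/keystone/domains/keystone.corp.conf pointing at the other server, then restart uwsgi@keystone-public as above.

openstack domain create corp

# /etc/keystone/domains/keystone.corp.conf  (hypothetical second LDAP server)
[identity]
driver = ldap

[ldap]
url = ldap://ldap2.example.com
user_tree_dn = ou=people,dc=example,dc=com
group_tree_dn = ou=groups,dc=example,dc=com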

Episode 321 – Relativistic Security: Project Zero on 0day

Posted by Josh Bressers on May 02, 2022 12:01 AM

Josh and Kurt talk about the Google Project Zero blog post about 0day vulnerabilities in 2021. There were a lot more than ever before, but why? Part of the challenge is the whole industry is expanding while a lot of our security technologies are not. When the universe around you is expanding but you’re staying the same size, you are actually shrinking.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_321_Relativistic_Security_Project_Zero_on_0day.mp3

Show Notes

Episode 320 – Security Twitter is not the real world

Posted by Josh Bressers on April 25, 2022 12:01 AM

Josh and Kurt talk about a TuxCare survey about patch management and vulnerability detection. Sometimes our security bubble makes us forget what it’s like in the real world for the people who keep our infrastructure running. Patching isn’t always immediate, automation doesn’t fix everything, and accepting risk is very important.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_320_Security_Twitter_is_not_the_real_world.mp3

Show Notes

ipxe.efi for aarch64

Posted by Adam Young on April 20, 2022 08:31 PM

To make the AARCH64 iPXE process work using Bifrost, I had to:

git clone https://github.com/ipxe/ipxe.git
cd ipxe/src/
make bin-arm64-efi/snponly.efi ARCH=arm64
sudo cp bin-arm64-efi/snponly.efi /var/lib/tftpboot/ipxe.efi

This works for the Ampere reference implementation servers that use a Mellanox network interface card, which supports (only) SNP.

How do you keep the Kolla Playing?

Posted by Adam Young on April 20, 2022 07:37 PM

(With apologies to Bergman and Legrand)

I need to modify how the ipxe container mounts directories. Why? AARCH64 iPXE stuff. Specifically, I need to get my own version of a file into the directory that a container mounts when it is running. How do I do that? I don’t know yet, so I am going to look.

I do know that the starting point for running a container in Kolla is the Ansible playbook that launches it, and that for the Ironic containers at least, that calls into a custom library called kolla_docker. This is implemented as a Python executable: kolla-ansible/ansible/library/kolla_docker.py. The vast majority of that file, however, is parameter parsing, and the real work is done in the call to DockerWorker. While this has ansible as the first segment in the package name, it is actually under the kolla_ansible repository.

The call to this module is a little convoluted. The module variable is of type AnsibleModule. After that:

dw = DockerWorker(module)
result = bool(getattr(dw, module.params.get('action'))())
module.exit_json(changed=dw.changed, result=result, **dw.result)

This is a way of getting an attribute off an object by name. If the dw object has the action attribute, getattr returns it and it is called as a no-args method on the dw object, with the boolean of its return value ending up in the result variable; otherwise getattr raises an exception. Hence, this is put in an exception-handling block in case the action is a bogus string that does not map to a function.

This could be thought of as a builder pattern, with the exit_json call being the build method. It seems the intention is to be more like the command pattern, but the command is built and executed in a single function.
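Stripped of the Kolla specifics, the dispatch pattern looks roughly like this illustrative sketch (not actual kolla-ansible code):

class Worker:
    def __init__(self):
        self.changed = False
        self.result = {}

    def start_container(self):
        # real work would go here
        self.changed = True
        return True

def dispatch(worker, action):
    # getattr raises AttributeError if `action` names no method on worker
    return bool(getattr(worker, action)())

dw = Worker()
print(dispatch(dw, "start_container"))  # True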

In this case, we know that the “action” is "start_container"; however, it is possible that there were previous calls to the same module earlier in the workflow. Let’s check:

$ grep kolla_docker ansible/roles/ironic/tasks/*
ansible/roles/ironic/tasks/bootstrap_service.yml:  kolla_docker:
ansible/roles/ironic/tasks/bootstrap_service.yml:  kolla_docker:
ansible/roles/ironic/tasks/bootstrap.yml:  kolla_docker:
ansible/roles/ironic/tasks/check-containers.yml:  kolla_docker:
ansible/roles/ironic/tasks/rolling_upgrade.yml:  kolla_docker:

bootstrap.yml calls import_tasks: bootstrap_service.yml, so we know the service call happens before any calls in bootstrap. And in both cases, the container started is part of the bootstrap process:

import_tasks: bootstrap_service.yml

name: "bootstrap_ironic"
name: "bootstrap_ironic_inspector"

At the end of the bootstrap_service.yml file we can see the actual start of the ironic_pxe container:

name: "bootstrap_ironic_pxe"

More interesting is when we run grep -rn ironic_ipxe ansible/roles/* in this directory; that returns, amongst other things:

ansible/roles/ironic/defaults/main.yml:67:    container_name: ironic_ipxe

Looking at the defaults file, we can see that the set of services for Ironic is defined in a declarative style of YAML, much as it would be handled by Terraform. Here are the two services I care about, under the top-level dictionary key ironic_services:

  ironic-pxe:
    container_name: ironic_pxe
    group: ironic-pxe
    enabled: true
    image: "{{ ironic_pxe_image_full }}"
    volumes: "{{ ironic_pxe_default_volumes + ironic_pxe_extra_volumes }}"
    dimensions: "{{ ironic_pxe_dimensions }}"
  ironic-ipxe:
    container_name: ironic_ipxe
    group: ironic-ipxe
    # NOTE(mgoddard): This container is always enabled, since may be used by
    # the direct deploy driver.
    enabled: true
    image: "{{ ironic_pxe_image_full }}"
    volumes: "{{ ironic_ipxe_default_volumes + ironic_ipxe_extra_volumes }}"
    dimensions: "{{ ironic_ipxe_dimensions }}"
    healthcheck: "{{ ironic_ipxe_healthcheck }}"

The actual kolla_docker task that uses this is defined in a handler in ansible/roles/ironic/handlers/main.yml. While I don’t yet know what registers this handler, I know that it gets called at some point. It has an action of “recreate_or_restart_container”

Interesting to note that this seems to have changed in the master branch; there is no ipxe container, but rather dnsmasq and tftp containers as well as an ironic-http container.

My question now becomes: What happens when the action “recreate_or_restart_container” gets executed by the DockerWorker module? We can see that the function is implemented here. The interesting work is delegated to the docker_client, stored in the member variable dc:

self.dc = get_docker_client()(**options)

At this point, I have all I need in order to understand how Kolla effects change on the Docker containers. I do not yet understand the Docker infrastructure that makes that happen in the context of systemd, but that is a tale for another day.

In my next post, I’ll go through the steps I need to take to change the way the ipxe container is run.

Episode 319 – Patch Tuesday with a capital T

Posted by Josh Bressers on April 18, 2022 12:01 AM

Josh and Kurt talk about a lot of security vulnerabilities in this month’s Patch Tuesday. There’s also a new Git vulnerability. This sparks the age old question of how fast to patch? The answer isn’t binary, the right answer is whatever works best for you, not what someone tells you is best.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_319_Patch_Tuesday_with_a_capital_T.mp3

Show Notes

Episode 318 – Social engineering and why zlib got a 2018 CVE ID

Posted by Josh Bressers on April 11, 2022 12:01 AM

Josh and Kurt talk about hackers using emergency data requests to gain access to sensitive data. The argument that somehow backdoors can be protected falls under this problem. We don’t yet have the technical or policy protections in place to actually protect this data. We also explain why this zlib issue got a 2018 CVE ID in 2022.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_318_Social_engineering_and_why_zlib_got_a_2018_CVE_ID.mp3

Show Notes

Bifrost Spike on an Ampere AltraMax

Posted by Adam Young on April 08, 2022 11:43 PM

For the past week I worked on getting a standalone Ironic install to run on an Ampere AltraMax server in our lab. Since I was recently able to get a baremetal node to boot, I wanted to record the steps I went through.

Our base operating system for this install is Ubuntu 20.04.

The controller node has 2 Mellanox Technologies MT27710 network cards, each with 2 ports.

I started by following the steps to install with the bifrost-cli. However, there were a few places where the installation assumes an x86_64 architecture, and I hard-swapped them to be AARCH64/ARM64 specific:

$ git diff HEAD
diff --git a/playbooks/roles/bifrost-ironic-install/defaults/required_defaults_Debian_family.yml b/playbooks/roles/bifrost-ironic-install/defaults/required_defaults_Debian_family.yml
index 18e281b0..277bfc1c 100644
--- a/playbooks/roles/bifrost-ironic-install/defaults/required_defaults_Debian_family.yml
+++ b/playbooks/roles/bifrost-ironic-install/defaults/required_defaults_Debian_family.yml
@@ -6,8 +6,8 @@ ironic_rootwrap_dir: /usr/local/bin/
 mysql_service_name: mysql
 tftp_service_name: tftpd-hpa
 efi_distro: debian
-grub_efi_binary: /usr/lib/grub/x86_64-efi-signed/grubx64.efi.signed
-shim_efi_binary: /usr/lib/shim/shimx64.efi.signed
+grub_efi_binary: /usr/lib/grub/arm64-efi-signed/grubaa64.efi.signed
+shim_efi_binary: /usr/lib/shim/shimaa64.efi.signed
 required_packages:
   - mariadb-server
   - python3-dev
diff --git a/playbooks/roles/bifrost-ironic-install/defaults/required_defaults_Ubuntu.yml b/playbooks/roles/bifrost-ironic-install/defaults/required_defaults_Ubuntu.yml
index 7fcbcd46..4d6a1337 100644
--- a/playbooks/roles/bifrost-ironic-install/defaults/required_defaults_Ubuntu.yml
+++ b/playbooks/roles/bifrost-ironic-install/defaults/required_defaults_Ubuntu.yml
@@ -26,7 +26,7 @@ required_packages:
   - dnsmasq
   - apache2-utils
   - isolinux
-  - grub-efi-amd64-signed
+  - grub-efi-arm64-signed
   - shim-signed
   - dosfstools
 # NOTE(TheJulia): The above entry for dnsmasq must be the last entry in the

The long term approach to these is to make those variables architecture specific.

In order to install, I ran the cli:

./bifrost-cli install --network-interface enP4p4s0f1 --dhcp-pool 192.168.116.100-192.168.116.150 

It took me several tries with -e variables until I realized that it was not going to honor them. I did notice that the heart of the command was the Ansible call, which I ended up running directly:

/opt/stack/bifrost/bin/ansible-playbook   ~/bifrost/playbooks/install.yaml -i ~/bifrost/playbooks/inventory/target -e bifrost_venv_dir=/opt/stack/bifrost -e @/home/ansible/bifrost/baremetal-install-env.json 

You may notice that I added a -e with the baremetal-install-env.json file. That file had been created by the earlier CLI run, and contained the variables specific to my install. I also edited it to trigger the build of the Ironic cleaning image.

{
  "create_ipa_image": false,
  "create_image_via_dib": false,
  "install_dib": true,
  "network_interface": "enP4p4s0f1",
  "enable_keystone": false,
  "enable_tls": false,
  "generate_tls": false,
  "noauth_mode": false,
  "enabled_hardware_types": "ipmi,redfish,manual-management",
  "cleaning_disk_erase": false,
  "testing": false,
  "use_cirros": false,
  "use_tinyipa": false,
  "developer_mode": false,
  "enable_prometheus_exporter": false,
  "default_boot_mode": "uefi",
  "include_dhcp_server": true,
  "dhcp_pool_start": "192.168.116.100",
  "dhcp_pool_end": "192.168.116.150",
  "download_ipa": false,
  "create_ipa_image": true
}

With this in place, I was able to enroll nodes using the Bifrost CLI:

 ~/bifrost/bifrost-cli enroll ~/nodes.json

I prefer this to using my own script. However, my script checks for existence and thus can be run idempotently, unlike this one. Still, I like the file format and will likely script to it in the future.

With this, I was ready to try booting the nodes, but they hung, as I reported in an earlier article.

The other place where the deployment is x86_64 specific is the iPXE binary. In a Bifrost install on Ubuntu, the binary is called ipxe.efi, and it is placed in /var/lib/tftpboot/ipxe.efi. It is copied from the grub-ipxe package, which places it in /boot/ipxe.efi. Although this package is not tagged with an x86_64 architecture (Debian/Ubuntu mark it as architecture "all"), the file itself is architecture specific.

I went through the steps to fetch and install the latest one out of jammy which has an additional file: /boot/ipxe-arm64.efi. However, when I replaced the file /var/lib/tftpboot/ipxe.efi with this one, the baremetal node still failed to boot, although it did get a few steps further in the process.

The issue, as I understand it, is that the binary needs a set of drivers to set up the HTTP request in the network interface cards, and the build in the Ubuntu package did not have them. Instead, I cloned the source git repo and compiled the binary directly. Roughly:

git clone https://github.com/ipxe/ipxe.git
cd ipxe/src
make bin-arm64-efi/snponly.efi  ARCH=arm64

SNP stands for the Simple Network Protocol. I guess this protocol is esoteric enough that Wikipedia has not heard of it.

The header file in the code says this:

  The EFI_SIMPLE_NETWORK_PROTOCOL provides services to initialize a network interface,
  transmit packets, receive packets, and close a network interface.
 

It seems the Mellanox cards support/require SNP. With this file in place, I was able to get the cleaning image to PXE boot.

I call this a spike as it has a lot of corners cut that I would not want to maintain in production. We’ll work with the distributions to get a viable version of ipxe.efi produced that can work for an array of servers, including Ampere’s. In the meantime, I need a strategy for building our own binary. I also plan on reworking the Bifrost variables to handle ARM64/AARCH64 alongside x86_64; a single server should be able to handle both based on the architecture option sent in the initial DHCP request.

Note: I was not able to get the cleaning image to boot, as it had an issue with werkzeug and JSON. However, I had an older build of the IPA kernel and initrd that I used, and the node properly deployed and cleaned.

And yes, I plan on integrating Keystone in the future, too.

ERROR: Boot option loading failed

Posted by Adam Young on April 07, 2022 02:43 AM

When PXE Booting an AARCH64 server, the above message probably means that you are fetching an x86_64 image for iPXE, not ARM64. Here’s how I debugged it.

You can check by looking in the DHCP messages for the file. I track this using tcpdump:

sudo  tcpdump -i enP4p4s0f0  port 67 or port 68 -e -n -vv

In my case (using Ironic):

    BF Option 67, length 10: "/ipxe.efi^@"

I can fetch that using curl:

curl tftp://192.168.116.64/ipxe.efi --output /tmp/y

And check the file:

# file /tmp/y
/tmp/y: MS-DOS executable PE32+ executable (DLL) (EFI application) x86-64, for MS Windows

If I do it with the right file:

$ curl tftp://192.168.116.64/ipxe.efi --output /tmp/z
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1000k  100 1000k    0     0  28.7M      0 --:--:-- --:--:-- --:--:-- 28.7M
100 1000k  100 1000k    0     0  28.7M      0 --:--:-- --:--:-- --:--:-- 28.7M
ansible@openstack02-r116:/tmp$ file z
z: MS-DOS executable PE32+ executable (DLL) (EFI application) Aarch64, for MS Windows

On Ubuntu 20.04, you can confirm in the syslog that the file was downloaded via tftp. Go to the end and search backwards using ?tftp.

Apr  7 02:38:48 openstack02-r116 dnsmasq-tftp[280480]: sent /var/lib/tftpboot/ipxe.efi to 192.168.116.64

If it works correctly, you will see output like this:

>>Checking Media Presence......
>>Media Present......
>>Start PXE over IPv4 on MAC: 0C-42-A1-52-19-B0.
  Station IP address is 192.168.116.119

  Server IP address is 192.168.116.64
  NBP filename is /ipxe.efi
  NBP filesize is 1024000 Bytes

Episode 317 – The lack of compromise in security

Posted by Josh Bressers on April 04, 2022 12:01 AM

Josh and Kurt talk about the binary nature of security. Many of our ideas are yes or no, there’s not much in the middle. The conversation ends up derailed due to a Twitter thread about pinning dependencies. This gives you an idea how contentious of a topic pinning is. The final takeaway is not to let security turn into your identity, it ends up making a mess.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_317_The_lack_of_compromise_in_security.mp3

Show Notes

Episode 316 – You have to use open source

Posted by Josh Bressers on March 28, 2022 12:01 AM

Josh and Kurt talk about the latest NPM backdoored package. It feels like this keeps happening. We talk about why this is and why it’s probably OK. Kurt fixes Linus’ Law, in open source the superpower isn’t bugs are shallow (they’re not), the superpower is security bugs in open source can’t be ignored.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_316_You_have_to_use_open_source.mp3

Show Notes

Discoverability in API design

Posted by Adam Young on March 25, 2022 03:06 AM

There are a handful of questions a user will (implicitly) ask when using your API:

  1. What actions can I do against this endpoint?
  2. How do I find the URLs for those actions?
  3. What information do I need to provide in order to perform this action?
  4. What permission do I need in order to perform this action?

Answering these questions can be automated. The user, and the tools they use, can discover the answers by working with the system. That is what I mean when I use the word “Discoverability.”

We missed some opportunities to answer these questions when we designed the APIs for Keystone OpenStack. I’d like to talk about how to improve on what we did there.

First I’d like to state what not to do.

Don’t make the user read the documentation and code to an external spec.

Never require a user to manually perform an operation that should be automated. Answering every one of those questions can be automated. If you can get it wrong, you will get it wrong. Make it possible to catch errors as early as possible.

Let’s start with the question: “What actions can I do against this endpoint?” In the case of Keystone, the answer would be some of the following:

Create, Read, Update and Delete (CRUD) Users, Groups of Users, Projects, Roles, and Catalog Items such as Services and Endpoints. You can also CRUD relationships between these entities. You can CRUD Entities for Federated Identity. You can CRUD Policy files (historical). Taken in total, you have the tools to make access control decisions for a wide array of services, not just Keystone.

The primary way, however, that people interact with Keystone is to get a token. Let's use this use case to start. To get a token, you make a POST to the $OS_AUTH_URL/v3/auth/tokens/ URL. The data you have to post is defined only in the API documentation.

How would you know this? Only by reading the documentation. If someone handed you the value of their OS_AUTH_URL environment variable, and you looked at it using a web client, what would you get? Really, just the version URL. Assuming you chopped off the V3:

$ curl http://10.76.10.254:35357/
{"versions": {"values": [{"id": "v3.14", "status": "stable", "updated": "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": "http://10.76.10.254:35357/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}]}}

and the only URL in there is the version URL, which gives you back the same thing.

If you point a web browser at the service, the output is in JSON, even though the web browser told the server that it preferred HTML.

What could this look like? If we look at the API spec for Keystone, we can see that the various entities referred to above have fairly predictable URL forms. However, for this use case, we want a token, so we should, at a minimum, see the path to get to the token. Since this is the V3 API, we should see an entry like this:

{"rel": "auth", "href": "http://10.76.10.254:35357/v3/auth"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}

And if we then performed an HTTP GET on http://10.76.10.254:35357/v3/auth we should see a link to:

{"rel": "token", "href": "http://10.76.10.254:35357/v3/auth/token"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}

Is this 100% of the solution? No. The Keystone API shows its prejudices toward password-based authentication, a very big antipattern. The password goes in clear text into the middle of the JSON blob posted to this API. We trust SSL/TLS to secure it over the wire, and we have had to erase it from logs and debugging output. This is actually a step backwards from BASIC-AUTH in HTTP. All this aside, there is still no way to tell what you need to put into the body of the token request without reading the documentation…unless you know the magic of JSON-HOME.

Here is what you would need to do to get a list of the top-level URLs, excluding all the ones that are templated and thus require knowing an ID.

curl 10.76.116.63:5000 -H "Accept: application/json-home" | jq '.resources | to_entries | .[] | .value | .href ' | sort -u
  • “/v3/auth/catalog”
  • “/v3/auth/domains”
  • “/v3/auth/OS-FEDERATION/saml2”
  • “/v3/auth/OS-FEDERATION/saml2/ecp”
  • “/v3/auth/projects”
  • “/v3/auth/system”
  • “/v3/auth/tokens”
  • “/v3/auth/tokens/OS-PKI/revoked”
  • “/v3/credentials”
  • “/v3/domains”
  • “/v3/domains/config/default”
  • “/v3/ec2tokens”
  • “/v3/endpoints”
  • “/v3/groups”
  • “/v3/limits”
  • “/v3/limits/model”
  • “/v3/OS-EP-FILTER/endpoint_groups”
  • “/v3/OS-FEDERATION/identity_providers”
  • “/v3/OS-FEDERATION/mappings”
  • “/v3/OS-FEDERATION/saml2/metadata”
  • “/v3/OS-FEDERATION/service_providers”
  • “/v3/OS-OAUTH1/access_token”
  • “/v3/OS-OAUTH1/consumers”
  • “/v3/OS-OAUTH1/request_token”
  • “/v3/OS-REVOKE/events”
  • “/v3/OS-SIMPLE-CERT/ca”
  • “/v3/OS-SIMPLE-CERT/certificates”
  • “/v3/OS-TRUST/trusts”
  • “/v3/policies”
  • “/v3/projects”
  • “/v3/regions”
  • “/v3/registered_limits”
  • “/v3/role_assignments”
  • “/v3/role_inferences”
  • “/v3/roles”
  • “/v3/s3tokens”
  • “/v3/services”
  • “/v3/users”

This would be the friendly list to return from the /v3 page. Or, if we wanted to streamline it a bit for human consumption, we could put a top level grouping around each of these APIs. A friendlier list would look like this (chopping off the /v3)

  • auth
  • assignment
  • catalog
  • federation
  • identity
  • limits
  • resource
  • policy

There are a couple ways to order the list. Alphabetical order is the simplest for an English speaker if they know what they are looking for. This won’t internationalize, and it won’t guide the user to the use cases that are most common. Thus, I put auth at the top, as that is, by far, the most common use case. The others I have organized based on a quick think-through from most to least common. I could easily be convinced to restructure this a couple different ways.

However, we are starting to trip over some of the other aspects of usability. We have provided the user with way more information than they need, or, indeed, can use at this point. Since none of those operations can be performed unauthenticated, we have led the user astray; we should show them, at this stage, only what they can do in their current state. Thus, the obvious entries would be:

  • /v3/auth/tokens
  • /v3/auth/OS-FEDERATION

These are the only two directions they can go unauthenticated.

Let's continue on with the old-school version of a token request using the /v3/auth/tokens resource, as that is the most common use case. How does a user request a token? That depends on whether they want to use a password or another token, or multifactor authentication, and whether they want an unscoped token or a scoped token.

None of this information is in the JSON home. You have to read the docs.

If we were using straight HTML to render the response, we would expect a form with fields for the user name, the password, the user's domain, the scope type, and the project or domain name for that scope.

There is, as of now, no standard way to put form data into JSON. However, there are numerous standards to choose from. One such standard is the FormData API; another is JSON Schema (https://json-schema.org/). If we look at the API doc, we get a table that specifies the names of the fields. Anything that is not a single value is specified as an object, which really means a JSON object: a dictionary that can be deeply nested. We can see the complexity in the form described above, where the scope value determines what is meant by the project/domain name field. And these versions don’t allow for IDs to be used instead of the names for users, projects, or domains.
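
For concreteness, here is roughly what that request body looks like for a password-authenticated, project-scoped token against the endpoint used in the earlier curl examples (the user, project, domain, and password values are placeholders):

curl -si -X POST http://10.76.10.254:35357/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"],
                             "password": {"user": {"name": "admin",
                                                   "domain": {"name": "Default"},
                                                   "password": "secret"}}},
               "scope": {"project": {"name": "admin",
                                     "domain": {"name": "Default"}}}}'

The token itself comes back in the X-Subject-Token response header, not in the JSON body. None of that structure is discoverable from the service.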

A lot of the custom approach here is dictated by the fact that Keystone does not accept standard authentication. The password-based token request could easily be replaced with BASIC-AUTH. Tokens themselves could be stored as session cookies, with the same timeouts as the token expiration. All of the one-offs in Keystone make it more difficult to use, and require more application-specific knowledge.

Many of these issues were straightened out when we started doing federation. Now, there is still some out-of-band knowledge required to use the Federated API, but this was due to concerns about information leaking that I am going to ignore for now. The approach I am going to describe is basically what is used by any app that allows you to log in with the different cloud providers' identity sources today.

From the /v3 page, a user should be able to select the identity provider that they want to use. This could require a jump to /v3/FEDERATION and then to /v3/FEDERATION/idp, in order to keep things namespaced, or the list could be expanded in the /v3 page if there is really nothing else that a user can do unauthenticated.

Let us assume a case where there are three companies that all share access to the cloud: Acme, Beta, and Charlie. The JSON response would be the same as the list identity providers API. The interesting part of the result is this one:

 "protocols": "http://example.com/identity/v3/OS-FEDERATION/identity_providers/ACME/protocols"

Let's say that a given identity provider supports multiple protocols. Here is where the user gets to choose which one they want to use to try to authenticate. An HTTP GET on the link above would return that list. The documentation shows an example of an identity provider that supports saml2. Here is an expanded one that shows the set of protocols a user could expect in a private cloud running FreeIPA and Keycloak, or Active Directory and ADFS.

{
    "links": {
        "next": null,
        "previous": null,
        "self": "http://example.com/identity/v3/OS-FEDERATION/identity_providers/ACME/protocols"
    },
    "protocols": [
        {
            "id": "saml2",
            "links": {
                "identity_provider": "http://example.com/identity/v3/OS-FEDERATION/identity_providers/ACME",
                "self": "http://example.com/identity/v3/OS-FEDERATION/identity_providers/ACME/protocols/saml2"
            },
            "mapping_id": "xyz234"
        },
        {
            "id": "x509",
            "links": {
                "identity_provider": "http://example.com/identity/v3/OS-FEDERATION/identity_providers/ACME",
                "self": "http://example.com/identity/v3/OS-FEDERATION/identity_providers/ACME/protocols/x509"
            },
            "mapping_id": "xyz235"
        },
        {
            "id": "gssapi",
            "links": {
                "identity_provider": "http://example.com/identity/v3/OS-FEDERATION/identity_providers/ACME",
                "self": "http://example.com/identity/v3/OS-FEDERATION/identity_providers/ACME/protocols/gssapi"
            },
            "mapping_id": "xyz236"
        },
        {
            "id": "oidc",
            "links": {
                "identity_provider": "http://example.com/identity/v3/OS-FEDERATION/identity_providers/ACME",
                "self": "http://example.com/identity/v3/OS-FEDERATION/identity_providers/ACME/protocols/oidc"
            },
            "mapping_id": "xyz237"
        },
        {
            "id": "basic-auth",
            "links": {
                "identity_provider": "http://example.com/identity/v3/OS-FEDERATION/identity_providers/ACME",
                "self": "http://example.com/identity/v3/OS-FEDERATION/identity_providers/ACME/protocols/basic-auth"
            },
            "mapping_id": "xyz238"
        }
    ]
}

Note that this is very similar to the content that a web server gives back in a 401 response: the set of acceptable authentication mechanisms. I actually prefer this here, as it allows the user to select the appropriate mechanism for the use case, which may vary depending on where the user connects from.

Let's ignore the actual response from the above links and assume that, if the user is unauthenticated, they merely get a link to where they can authenticate: /v3/OS-FEDERATION/identity_providers/{idp_id}/protocols/{protocol_id}/auth. The follow-on link is a GET, not a POST. There is no form data required. The mapping resolves the user's domain name/ID, so there is no need to provide that information, and the token is a Federated unscoped token.

The actual response contains the list of groups that a user belongs to. This is an artifact of the mapping, and it is useful for debugging. However, what the user has at this point is, effectively, an unscoped token. It is passed in the X-Subject-Token header, and not in a session cookie. However, for an HTML-based workflow, and, indeed, for sane HTTP workflows against Keystone, a session-scoped cookie containing the token would be much more useful.
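
A rough sketch of what fetching that federated unscoped token could look like with curl, using the gssapi protocol from the list above (the ACME IdP name, the endpoint, and the use of Kerberos credentials via --negotiate are all illustrative assumptions):

curl -si --negotiate -u : \
  http://10.76.10.254:35357/v3/OS-FEDERATION/identity_providers/ACME/protocols/gssapi/auth \
  | grep -i X-Subject-Token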

With an unscoped token, a user can perform some operations against a Keystone server, but those operations are either read-only, operations specific to the user, or administrative actions specific to the Keystone server. For OpenStack, the vast majority of the time the user is going to Keystone to request a scoped token to use on one of the other services. As such, the user probably needs to convert the unscoped token shown above to a token scoped to a project. A very common setup has the user assigned to a single project. Even if they are scoped to multiple, it is unlikely that they are scoped to many. Thus, the obvious next step is to show the user a URL that will allow them to get a token scoped to a specific project.

Keystone does not have such a URL. In Keystone, again you are required to go through /v3/auth/tokens to request a scoped token.

A much friendlier URL scheme would be /v3/auth/projects, which lists the set of projects a user can request a token for, and /v3/auth/project/{id}, which lets a user request a scoped token for that project.

However, even if we had such a URL pattern, we would need to direct the user to that URL. There are two distinct use cases. The first is the case where the user has just authenticated, and in the token response, they need to see the project list URL. A redirect makes the most sense, although the list of projects could also be in the authentication response. However, the user might also be returning to the Keystone server from some other operation, still have the session cookie with the token in it, and start at the discovery page again. In this case, the /v3/ response should show /v3/auth/projects/ in its list.

There is, unfortunately, one case where this would be problematic. With hierarchical projects, a single assignment could allow a user to get a token for many projects. While this is a useful hack in practice, it means that the project list page could get extremely long. This is, unfortunately, also the case with the project list page itself; projects may be nested, but the namespace needs to be flat, and listing projects will list all of them; only the parent-project ID distinguishes them. Since we do have ways to do path nesting in HTTP, this is a solvable problem. Let's lump the token request and the project list APIs together. This actually makes a very elegant solution:

Instead of /v3/auth/projects, we put a link off the project page itself back to /v3/auth/tokens, but accepting the project ID as a URL parameter, like this: /v3/auth/tokens?project_id=abc123.
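
A sketch of what a request against that proposed pattern could look like (to be clear, this query-parameter form is the hypothetical scheme described here, not something today's Keystone accepts; the unscoped token is assumed to be in $UNSCOPED_TOKEN):

curl -si -X POST "http://10.76.10.254:35357/v3/auth/tokens?project_id=abc123" \
  -H "X-Auth-Token: $UNSCOPED_TOKEN"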

Of course, this means that there is a hidden mechanism now. If a user wants to look at any resource in Keystone, they can do so with an unscoped token, provided they have a role assignment on the project or domain that manages that object.

To this point we have discussed implicit answers to the questions of finding URLs and discovering what actions a user can perform. For the token request, I started discussing how to provide the answer to “What information do I need to provide in order to perform this action?” I think now we can state how to do that: the list page for any collection should either provide an inline form or a link to a form URL. The form provides the information in a format that makes sense for the content type. If the user does not have the permission to create the object, they should not see the form. If the form is on a separate link, a user that cannot create that object should get back a 403 error if they attempt to GET the URL.

If Keystone had been written to return HTML when hit by a browser instead of JSON, all of this navigation would have been painfully obvious. Instead, we subscribed to the point of view that UI was to be done by the Horizon server.

There still remains the last question: “What permission do I need in order to perform this action?” The user only thinks to ask this question when they come across an operation that they cannot perform. I'll dig deeper into this in the next article.


Facts vs Feelings

Posted by Josh Bressers on March 21, 2022 10:07 PM

Earlier today I asked a question on Twitter


Holy cow that thread took on a life of its own. The question is easy enough: do we have any security data on pinning vs not pinning dependencies? We don’t, I know this, but I was hoping someone was working on something (I don’t think they are). But during the thread I also think I figured out how to start collecting this data. That’s a post for the future.

Now, I should clarify, I think pinning dependencies is a fine idea. I imagine my question was misread by a number of people to be some secret attempt at pushing an agenda. Sadly it was not. I am a bit harsh in the thread on people trying to pass off opinion as fact. The whole point of the question was to find some data, then things got weird.

I want to address something I realized in this thread. There is no data on this point, yet there is quite a bit of guidance in that thread. Everyone just sort of talks past each other. Some people are horrified by the idea of not pinning your dependencies. Some think pinning dependencies is the worst thing ever. It’s all quite fascinating. Everyone has an opinion, nobody has data. How often do we make decisions based on how we feel about something, but then end up confusing our feelings for facts? I think all the random odd views we can see in that thread are the result of different opinions. Opinions based on how someone feels. Data is like a line in the sand you can always see, opinions are like waves washing back and forth. Sometimes weird things wash up.

It’s easy to get caught up in things we want to believe to be true. I know people who still maintain changing your password on a regular basis is more secure. There is actual research that shows changing your passwords on a regular cadence is less secure than using one very long password, actual real research. But because it feels wrong, it must be wrong.

I find myself doing this all the time. I might have a certain opinion and it’s really hard to objectively unlearn something. My favorite example I use of this is I once thought PGP was the best possible form of secure messaging. The keys are generated locally, you can pass messages via any medium. You can send messages to nearly anyone. In theory it’s a great system! But in reality it’s terrible. It took me a long time to realize my love of PGP was mostly emotional. The data was clear, PGP has a lot of problems. I try not to use it now.

I’m not trying to make anyone feel bad for whatever view they may or may not have in this post. I wanted to scribble down some quick thoughts because the whole conversation reminded me that opinion is not fact. Facts need data. And sometimes we believe something for so long, we don’t even notice we are trying to pass off our opinions as facts.

There’s an old hacker saying “question everything”. It’s very relevant in this discussion.

It’s relevant in every discussion, especially this blog post.

Episode 315 – Who even makes all these terrible decisions?

Posted by Josh Bressers on March 21, 2022 12:01 AM

Josh and Kurt talk about Microsoft accidentally letting us find out about ads in file explorer. Changing your clocks sucks. And touch on some of the security implications of the Russian invasion and sanctions. There are a lot of security lessons we can all learn. Mostly what not to do.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2728-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_315_Who_even_makes_all_these_terrible_decisions.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_315_Who_even_makes_all_these_terrible_decisions.mp3</audio>

Show Notes

Generating a clouds.yaml file

Posted by Adam Young on March 16, 2022 11:00 PM

Kolla creates an admin.rc file using the environment variables. I want to then use this in a terraform plan, but I’d rather not generate terraform-specific code for the Keystone login data. So, a simple python script converts from env vars to yaml.

#!/usr/bin/python3
import os
import yaml

clouds = {
   "clouds":{
    "cluster": {
        "auth" : {
            "auth_url" : os.environ["OS_AUTH_URL"], 
            "project_name": os.environ["OS_PROJECT_NAME"],
            "project_domain_name": os.environ["OS_PROJECT_DOMAIN_NAME"],
            "username": os.environ["OS_USERNAME"],
            "user_domain_name": os.environ["OS_USER_DOMAIN_NAME"],
            "password": os.environ["OS_PASSWORD"]
        }
    }
    }
}


print (yaml.dump(clouds))

To use it:

./clouds.py > clouds.yaml

Note that you should have sourced the appropriate config environment variables file, such as :

. /etc/kolla/admin-openrc.sh
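
Once the clouds.yaml file is in place, tools that understand it can select the cloud by name instead of reading the environment variables; for example, assuming python-openstackclient is installed:

openstack --os-cloud cluster token issue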

Episode 314 – The Linux Dirty Pipe vulnerability

Posted by Josh Bressers on March 14, 2022 12:01 AM

Josh and Kurt talk about the Linux Kernel Dirty Pipe security vulnerability. This bug is an amazing combination of amazing complexity, incredible simplicity, and a little bit of luck. The discovery is amazing, the analysis is enlightening. There’s almost no way a bug like this could be found outside of open source.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2723-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_314_The_Linux_Dirty_Pipe_vulnerability.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_314_The_Linux_Dirty_Pipe_vulnerability.mp3</audio>

Show Notes

Secure Boot Statements

Posted by Robbie Harwood on March 09, 2022 05:00 AM

I have said some things about secure boot and my involvement here.

I will try to say the same things monthly from here on out. The current plan is for them to appear dated in this folder.

That said, I am human and will probably forget sometimes. So while I encourage keeping an eye out for them, it's probably best to reach out if I miss one and not worry too much unless several get missed.

Please remember that this only relates to what I do and what I know: it doesn't include other distros, other parts of the supply chain, etc.. Other maintainers are encouraged to offer similar statements for their respective involvement.

(This statement, this post, and especially the encouragement for other maintainers to do the same originate from my co-worker pjones's statement.)

Episode 313 – Insecurity at scale

Posted by Josh Bressers on March 07, 2022 12:01 AM

Josh and Kurt talk about the challenges of security at scale. Specifically we focus on why a lot of security starts to fall apart once you have to do something more than a few times. There’s a lot of new thinking we need to push security forward.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2719-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_313_Insecurity_at_scale.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_313_Insecurity_at_scale.mp3</audio>

Show Notes

Episode 312 – The Legend of the SBOM

Posted by Josh Bressers on February 28, 2022 12:01 AM

Josh and Kurt talk about SBOMs. Not what they are, there’s plenty about that. We talk about why everyone keeps claiming they’re super important, and why we’re starting to see some people question if we really need them. SBOMs are part of a future that’s still being invented.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2712-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_312_The_Legend_of_the_SBOM.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_312_The_Legend_of_the_SBOM.mp3</audio>

Show Notes

Episode 311 – Did you scan the QR code?

Posted by Josh Bressers on February 21, 2022 12:01 AM

Josh and Kurt talk about the Coinbase Super Bowl ad. It was a QR code, lots of security people were aghast at how many people scanned the QR code. The reality is scanning QR codes isn’t dangerous. What other security advice just won’t go away?

<audio class="wp-audio-shortcode" controls="controls" id="audio-2706-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_311_Did_you_scan_the_QR_code.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_311_Did_you_scan_the_QR_code.mp3</audio>

Show Notes

Root versus groups

Posted by Adam Young on February 17, 2022 10:12 PM

If you lock down everything, you either need to hand out keys, or plan on doing everything yourself, and getting overwhelmed.

Probably the single most powerful tool in Linux land to keep people from having to be “root” is the group concept. For example, if I want people to run Docker containers, they need to be able to talk to the Docker socket. The root user can do this by virtue of its global access. However, the more limited access approach is to add a user to the docker group.

Here is a short session I had on a newly provisioned machine:

docker ps
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json: dial unix /var/run/docker.sock: connect: permission denied
ansible@eng13sys-r111scc-lab:~/openstack-kolla-ampere-aio-scripts$ sudo docker ps
[sudo] password for ansible: 
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
ansible@eng13sys-r111scc-lab:~/openstack-kolla-ampere-aio-scripts$ groups
ansible adm cdrom sudo dip plugdev lxd
ansible@eng13sys-r111scc-lab:~/openstack-kolla-ampere-aio-scripts$ sudo usermod -a -G docker ansible
ansible@eng13sys-r111scc-lab:~/openstack-kolla-ampere-aio-scripts$ exit

Edit: Note that I added the -a flag. Without it, usermod -G replaces the user's supplementary group list, which removes the user from the sudo group and breaks sudo.

Now when I log back in to the system:

ansible@eng13sys-r111scc-lab:~$ groups
ansible docker
ansible@eng13sys-r111scc-lab:~$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
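
The docker group works because the daemon's socket is group-owned by docker. A quick way to confirm that, and to see who is already in the group (assuming a standard Docker install), is:

ls -l /var/run/docker.sock
getent group docker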

This points out a couple of the shortcomings of the Linux approach. First, the group is specific to this particular workload, running docker containers, as opposed to the role that the user has in the organization. For example, we can think of (at least) two different types of users that need to run docker containers: database admins and web server admins. However, a web server admin does not get to play around with the storage subsystems the way that a DB admin can. Thus, in order to distinguish the power between these two users, we have to assign a specific user to either just the docker group, or the docker group plus whatever else he is going to do. We cannot just create a webserver group, and a docker group, and assign users at that level.

The second is that, in order to assign a user to a group, you need to be root. You cannot re-delegate some access you have already.

FreeIPA can actually provide this degree of group nesting and redelegation, but that does not help if you are administering a stand alone system.

You might notice that the name of my user in the example above is ansible. This is, again, collecting up a bunch of different users into a single account, much like doing everything as root is. The ansible user would then have to be a member of every group that has access to everything that every user needs to do.

Episode 310 – Hayley Tsukayama from the EFF talks about privacy

Posted by Josh Bressers on February 14, 2022 12:01 AM

Josh and Kurt talk to Hayley Tsukayama from the EFF about privacy. We all know privacy in the modern age is very complicated and difficult. Normal people don’t have many allies when it comes to privacy. The EFF has been blazing the trail for digital rights for more than 30 years! This episode has a ton of amazing details, it’s easy to see how the EFF became the jewel of the Internet.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2701-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_310_Hayley_Tsukayama_from_the_EFF_talks_about_privacy.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_310_Hayley_Tsukayama_from_the_EFF_talks_about_privacy.mp3</audio>

Show Notes

Fixing Playback in Musecore on Fedora 35

Posted by Adam Young on February 08, 2022 03:21 PM

Recently, the playback on Musescore became distorted. It was sped up, the notes were dissonant (no, that is not my writing!), and they seemed to crackle and pop.

When both of my systems exhibited the same problem, I knew it was an upgrade issue, and not my hardware.

This phenomenon seems to have occurred a few times over the years, and I tried many of the recommended fixes. What finally worked was changing the output from PulseAudio to Jack.

From the top menu select Edit->Preferences. On the dialog window that comes up, select the I/O tab. There are four radio buttons, one next to each of the following:

  • PulseAudio
  • PortAudio
  • ALSA Audio
  • Jack Audio Server

I changed it from PulseAudio to Jack Audio Server and playback worked.


I suspect this has something to do with PipeWire. Jack and PulseAudio are now two interfaces into the same sound system, implemented as PipeWire. While this has simplified audio for me elsewhere, I have heard comments from others about issues getting audio straightened out. Jack is more latency focused and latency aware than Pulse, which is designed more for the desktop, with an all-sound-or-nothing, per-application model. I’ve been a fan of Jack for a while, so I am happy to find that Musescore works when it uses Jack as the audio interface.
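
One way to check whether PulseAudio really is PipeWire underneath (assuming the default pipewire-pulseaudio setup on Fedora 35) is to ask the Pulse client tools what server they are talking to:

pactl info | grep "Server Name"

On a PipeWire system this reports something like “PulseAudio (on PipeWire …)”.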

Episode 309 – The bright future of open source security

Posted by Josh Bressers on February 07, 2022 12:00 AM

Josh and Kurt talk about NPM requiring 2FA for the top 100 packages. We discuss the new Alpha and Omega projects from the OpenSSF and what it could mean for the future of open source security. Then we end on a note about the new Samba critical vulnerability.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2697-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/forcedn/opensourcesecuritypodcast/Episode_309_The_bright_future_of_open_source_security.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/secure/forcedn/opensourcesecuritypodcast/Episode_309_The_bright_future_of_open_source_security.mp3</audio>

Show Notes

Enabling Fedora 35 Virtualization on an Ampere based machine

Posted by Adam Young on February 03, 2022 12:21 AM

I have a fresh install of Fedora 35 on a lab machine. I want to run a virtual machine on it. I have ssh access to the root account and a public key copied over.

The packages you need are listed below:

dnf install libvirt libvirt-daemon-kvm qemu-kvm virt-install

Run the daemon, and make sure it runs on startup

systemctl enable libvirtd
systemctl start libvirtd

You can do a simple test to see if it is ready with:

virt-install --name centos8-2 --memory 10240 --vcpus=2 --os-type=Linux --os-variant=centos7.0

If it is OK, you will either get a vm, or a message like:

ERROR The requested volume capacity will exceed the available pool space when the volume is fully allocated. (20480 M requested capacity > 12728 M available) (Use --check disk_size=off or --check all=off to override)
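
If you hit the capacity error, one option (besides the --check override the message suggests) is to request a smaller disk explicitly; a sketch based on the test command above, with an arbitrary size in GB:

virt-install --name centos8-2 --memory 10240 --vcpus=2 --os-type=Linux --os-variant=centos7.0 --disk size=10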

Below are my notes from debugging a new setup. Before I had the libvirt-daemon-kvm and qemu-kvm packages installed, I got the error:

ERROR    Host does not support any virtualization options

Looking at the CPU info:

# lscpu 
Architecture:           aarch64
  CPU op-mode(s):       32-bit, 64-bit
  Byte Order:           Little Endian
CPU(s):                 80
  On-line CPU(s) list:  0-79
Vendor ID:              ARM
  BIOS Vendor ID:       Ampere(R)
  Model name:           Neoverse-N1
    BIOS Model name:    Ampere(R) Altra(R) Processor
    Model:              1
    Thread(s) per core: 1
    Core(s) per socket: 80
    Socket(s):          1
    Stepping:           r3p1
    Frequency boost:    disabled
    CPU max MHz:        3000.0000
    CPU min MHz:        1000.0000
    BogoMIPS:           50.00
    Flags:              fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs

Note that you will not see the x86_64-specific virtualization extensions listed. On an x86_64 system, you can find them with:

egrep '^flags.*(vmx|svm)' /proc/cpuinfo

But they are not defined in the ARM64 headers in the Linux Kernel. The only way I’ve found thus far is to look in the journal:

# journalctl | grep EL2
Feb 01 07:20:14 eng14sys-r111.scc-lab.amperecomputing.com kernel: CPU: All CPU(s) started at EL2
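
Another way to confirm that KVM is usable, once the libvirt packages above are installed, is virt-host-validate, which checks (among other things) whether /dev/kvm exists and is accessible:

virt-host-validate qemu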


Episode 308 – Welcome to the jungle – How to talk about open source security

Posted by Josh Bressers on January 31, 2022 12:01 AM

Josh and Kurt talk about how to get attention for security problems. Recent research around Twitter credentials checked into GitHub showed us how to get a lot of attention when compared to a problem like Log4Shell which took years before anyone really picked up on the problem. It’s hard to talk about security sometimes.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2693-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_308_Welcome_to_the_jungle_How_to_talk_about_open_source_security.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_308_Welcome_to_the_jungle_How_to_talk_about_open_source_security.mp3</audio>

Show Notes

Episode 307 – Got vulnerabilities? Introducing GSD

Posted by Josh Bressers on January 24, 2022 12:01 AM

Josh and Kurt talk about the Global Security Database (GSD) project. This is a Cloud Security Alliance (CSA) effort to build community around vulnerability identifiers.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2689-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/forcedn/opensourcesecuritypodcast/Episode_307_Got_vulnerabilities_Introducing_GSD.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/secure/forcedn/opensourcesecuritypodcast/Episode_307_Got_vulnerabilities_Introducing_GSD.mp3</audio>

Show Notes

Episode 306 – Open source isn’t broken, it’s an experience

Posted by Josh Bressers on January 17, 2022 12:01 AM

Josh and Kurt talk about the faker and colors NPM events. There is a lot of discussion around open source being broken or somehow failing because of these events. The real answer is open source is an experience. How we interact with our dependencies determines what the experience looks like.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2685-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_306_Open_source_isnt_broken_its_an_experience.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_306_Open_source_isnt_broken_its_an_experience.mp3</audio>

Show Notes

Issue with the solution to the 6s problem

Posted by Adam Young on January 11, 2022 06:54 PM

I recently came across a set of posted solutions to the 6s problem. I’m going to argue that several of these solutions are invalid. Or, more precisely, I am going to argue that they are only considered valid due to a convention in notation.

Part of the problem definition states that you cannot add additional digits to get to the solution, only operators. The operators that are used start with addition, subtraction, multiplication, division, and factorial. To solve some of the more difficult lines of the problem, they introduce the square root operator. This, however, is the degenerate form of a fractional exponent. In other words, you can write either

\sqrt{2}

or

2^{1/2}

Note that in the bottom case, you introduce two new digits: a 1 and a 2.

To be fair, the factorial operator is also shorthand for a fairly long operation. If it was written in product notation, it would be:

n! = \prod_{k=1}^{n} k

This also introduces an additional 1.

This arbitrary distinction occurred to me when I was looking at the solution for the 8s problem. It occurred to me that 2^3 is 8, and so a more elegant solution would be to take the cube root of 8 for each digit and sum them. However, this explicitly violates the rules of the puzzle, as the symbol for the cube root is the same as the square root, but with a superscript 3.
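
To make that concrete, the disallowed 8s idea, written in both notations, would be:

\sqrt[3]{8} + \sqrt[3]{8} + \sqrt[3]{8} = 8^{1/3} + 8^{1/3} + 8^{1/3} = 2 + 2 + 2 = 6

The explicit exponent form introduces a 1 and a 3, just as the square root form used in the 6s solutions hides a 1 and a 2.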

Why do I care? Because there is a pattern with notation that mixes the default case with the more explicit non-default expansions. For example, look at these two network device names:

enp1s0f0np0 and enP4p4s0f0np0.

You have to look close to parse the difference. It is easier to see when they are stacked:

  • enp1s0f0np0
  • enP4p4s0f0np0

The fact that the second one is longer helps your eye see that the third character is a lowercase p in the first and uppercase in the second. Why? That field indicates some aspect of the internal configuration of the machine, something about the bridge to which the device is attached. The first is attached to bridge 0, which is the default, and is thus elided. The second is attached to bridge 4, and is thus explicitly named.

Yeah, it is a pain to differentiate.

So the solution to the problem is based on context sensitive parsing of the definition of the problem, to include the fact that the square root is considered a standard symbol without a digit to explicitly state what root is being taken.

Let’s take that option off the table. Are there still solutions to the 6s problem when it is defined more strictly? What is the set of acceptable operators that can be used to solve this puzzle? What is the smallest set?


Episode 305 – Norton, Ethereum, NFT, and Apes

Posted by Josh Bressers on January 10, 2022 12:01 AM

Josh and Kurt talk about Norton creating an Ethereum mining pool. This is almost certainly a bad idea, we explain why. We then discuss the reality of NFTs and the case of stolen apes. NFTs can be very confusing. The whole world of cryptocurrency is very confusing for normal people. None of this is new, there have always been con artists, there will always be con artists.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2681-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_305_Norton_Ethereum_NFT_and_Apes.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_305_Norton_Ethereum_NFT_and_Apes.mp3</audio>

Show Notes

Reading a log out of a docker file

Posted by Adam Young on January 07, 2022 04:50 PM

I have to pull the log out of a docker process to figure out why it is crashing. The Docker container name is ironic_ipxe.

cat $( docker inspect ironic_ipxe  | jq -r  '.[] | .LogPath' )
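
If the container is crash-looping and you want to watch the log as it grows, the same expression works with tail (a sketch; same container name, and you may need root to read the log path):

tail -f $( docker inspect ironic_ipxe  | jq -r  '.[] | .LogPath' )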

Fedora Has Too Many Security Bugs 3

Posted by Robbie Harwood on January 04, 2022 05:00 AM

(Previously: part 2 and part 1.)

Right now, there are 1917 open CVE bugs against Fedora. This is a decrease of 172 from last year - so again, we report good news. Gratitude toward maintainers who have been reducing their backlog.

Year breakdown:

2005: 1
2011: 1
2012: 4
2013: 4
2014: 5
2015: 17
2016: 71
2017: 227
2018: 341
2019: 225
2020: 311
2021: 710

While the bug that was last year's tail (a 2009 bug) has disappeared, the tail is now much longer with the addition of the 2005 bug. The per-year deltas are:

2005: +1
2006: N/A
2007: N/A
2008: N/A
2009: -1
2010: N/A
2011: -1
2012: -5
2013: -1
2014: ±0
2015: ±0
2016: -1
2017: -25
2018: -26
2019: -35
2020: -390
2021: +302

(N/A is reported where neither last year's run nor this year's run had bugs in that year bucket, while ±0 indicates no change in the number year to year. The 2021 change is somewhat expected since there's a lag between CVEs being assigned numbers and being disclosed.)

Unfortunately, the balance has shifted back toward EPEL: EPEL has 1035 of the 1917 total, a change of +77. This has outsized impact because EPEL is much smaller than non-EPEL Fedora.

For ecosystems, the largest ones I see are:

mingw: 99 (-41)
python: 95 (-14)
nodejs: 85 (-14)
rubygem: 20 (-7)
php: 13 (-6)

and it's nice to see a reduction on all of them.

Finally, to close as before, there have been no changes to Fedora policy around security handling, nor is there a functioning Security Team at this time. Obviously no one should be forced into that role, but if anyone wants a pet project: the incentive structure here is still wrong.

For completeness, since bugzilla has changed a bit, here's my script operating on the CSV file:

#!/usr/bin/python3

import csv
import re

from collections import defaultdict

with open("Bug List 2.csv", "r") as f:
    db = list(csv.DictReader(f))

print(f"total bugs: {len(db)}")

years = defaultdict(int)
r = re.compile(r"CVE-(\d{4})-")
for bug in db:
    match = r.search(bug["Summary  "])
    if match is None:
        continue
    year = match.group(1)
    years[year] += 1

for key in sorted(years.keys()):
    print(f"    {key}: {years[key]}")

epel = [bug for bug in db if bug["Product  "] == "Fedora EPEL"]

print(f"epel #: {len(epel)}")

components = defaultdict(int)
for bug in db:
    components[bug["Component  "]] += 1

# This spews - but uncomment to identify groups visually
# for c in sorted(components.keys()):
#     print(f"{c}: {components[c]}")

def ecosystem(e: str) -> int:
    count = 0
    for c in components:
        if c.startswith(f"{e}-"):
            count += components[c]
    return count

print(f"mingw ecosystem: {ecosystem('mingw')}")
print(f"python ecosystem: {ecosystem('python')}")
print(f"nodejs ecosystem: {ecosystem('nodejs')}")
print(f"rubygem ecosystem: {ecosystem('rubygem')}")
print(f"php ecosystem: {ecosystem('php')}")

Episode 304 – Will we ever fix all the vulnerabilities?

Posted by Josh Bressers on January 03, 2022 12:01 AM

Josh and Kurt talk about the question will we ever fix all the vulnerabilities? The question came from Reddit and is very reasonable, but it turns out this is REALLY hard to discuss. The answer is of course “no”, but why it is no is very complicated. Far more complicated than either of us thought it would be.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2675-10" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_304_Will_we_ever_fix_all_the_vulnerabilities.mp3?_=10" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_304_Will_we_ever_fix_all_the_vulnerabilities.mp3</audio>

Show Notes