Fedora security Planet

Episode 184 - It’s DNS. It's always DNS

Posted by Open Source Security Podcast on February 24, 2020 12:10 AM
Josh and Kurt talk about the sale of the corp.com domain. Is it going to be the end of the world, or a non event? We disagree on what should happen with it. Josh hopes an evildoer buys it, Kurt hopes for Microsoft. We also briefly discuss the CIA owning Crypto AG.

Show Notes


    Episode 183 - The great working from home experiment

    Posted by Open Source Security Podcast on February 17, 2020 12:13 AM
    Josh and Kurt talk about a huge working from home experiment because of the the Coronavirus. We also discuss some of the advice going on around the outbreak, as well as how humans are incredibly good at ignoring good advice, often to their own peril. Also an airplane wheel falls off.


    Show Notes


      Episode 182 - Does open source owe us anything?

      Posted by Open Source Security Podcast on February 10, 2020 12:01 AM
      Josh and Kurt talk about open source maintainers and building communities. While an open source maintainer doesn't owe anyone anything, there are some difficult conversations around holding back a community rather than letting it flourish.


      Show Notes


        Create a host and get a keytab from the CLI

        Posted by Adam Young on February 04, 2020 03:07 AM

        Since I have to do this a lot, figured I would write it down here. Follow on to Kerberizing a Service in OpenShift.

        export HOST=krbocp-container-krbocp.apps.demo.redhatfsi.com
        export PRINCIPAL=HTTP/$HOST@REDHATFSI.COM
        ipa host-add $HOST --force
        ipa service-add $PRINCIPAL --force
        ipa-getkeytab -k keytabs/$PRINCIPAL.keytab -p $PRINCIPAL
        

        With that keytab uploaded as a secret, the host krbocp-container-krbocp.apps.demo.redhatfsi.com also allows authentication via Kerberos. Note that I first scp'd the keytab to my local machine:

        $ scp idm.redhatfsi.com:keytabs/HTTP/krbocp-container-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM.keytab ~/keytabs/HTTP/krbocp-container-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM.keytab 
        $ mkdir ~/keytabs/HTTP/krbocp-container-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM
        $ cp ~/keytabs/HTTP/krbocp-container-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM.keytab ~/keytabs/HTTP/krbocp-container-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM/apache.keytab
        

        The command to upload it is then:

         
        oc create secret generic apache-container-keytab --from-file ~/keytabs/HTTP/krbocp-container-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM
        

        Yes, this is screaming for Ansible.
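
        A first pass at that Ansible could simply wrap the same CLI calls with the command module. This is only a sketch: the play targets a hypothetical idm host, assumes a valid Kerberos ticket is already in place there, and these tasks are not idempotent.

        - hosts: idm
          vars:
            service_host: krbocp-container-krbocp.apps.demo.redhatfsi.com
            principal: "HTTP/{{ service_host }}@REDHATFSI.COM"
          tasks:
            - name: Create the host record
              command: "ipa host-add {{ service_host }} --force"
            - name: Create the service principal
              command: "ipa service-add {{ principal }} --force"
            - name: Fetch a keytab for the service
              command: "ipa-getkeytab -k keytabs/{{ principal }}.keytab -p {{ principal }}"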

        Protecting a Service in OpenShift using Kerberos

        Posted by Adam Young on February 04, 2020 02:52 AM

        The same container image that can run HTTPD using Kerberos to authenticate in Podman can be used to do the same thing in OpenShift. Here are the changes.

        When running in OpenShift, my app gets a hostname of krbocp-git-krbocp.apps.demo.redhatfsi.com, which I can create as a host inside my IdM server, along with a service of type HTTP running on that host. I’ll need a keytab for this service.

        [ayoung@idm ~]$ kinit ayoung
        Password for ayoung@REDHATFSI.COM: 
        $ export PRINCIPAL=HTTP/krbocp-git-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM
        $ ipa service-show $PRINCIPAL
          Principal name: HTTP/krbocp-git-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM
          Principal alias: HTTP/krbocp-git-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM
          Keytab: True
          Managed by: krbocp-git-krbocp.apps.demo.redhatfsi.com
        $ ipa-getkeytab -k keytabs/$PRINCIPAL.keytab -p $PRINCIPAL
        Keytab successfully retrieved and stored in: keytabs/HTTP/krbocp-git-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM.keytab
        

        I have to bring it over to my workstation. This is obviously a sub-optimal step that I would not mind eliding in the future, but for now I copy it locally to a name that is friendly for the OpenShift API, so we can upload that file as a secret to OpenShift.

        $ mkdir ~/keytabs/HTTP/krbocp-git-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM
        $ scp idm.redhatfsi.com:keytabs/HTTP/krbocp-git-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM.keytab ~/keytabs/HTTP/krbocp-git-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM/apache.keytab 
        $ oc create secret generic apache-keytab --from-file ~/keytabs/HTTP/krbocp-git-krbocp.apps.demo.redhatfsi.com@REDHATFSI.COM/
        secret/apache-keytab created
        $ oc get secret apache-keytab -o yaml
        apiVersion: v1
        data:
          apache.keytab: ... elided 
        kind: Secret
        metadata:
          creationTimestamp: "2020-02-03T18:03:43Z"
          name: apache-keytab
          namespace: krbocp
          resourceVersion: "2507619"
          selfLink: /api/v1/namespaces/krbocp/secrets/apache-keytab
          uid: 9c3ffc0f-544a-4912-a591-549fe392fae0
        type: Opaque
        

        To make this secret usable in the container, I find the deployment named krbocp-git and edit it. Here is what the spec section of the YAML looks like:

            spec:
              containers:
              - image: image-registry.openshift-image-registry.svc:5000/krbocp/krbocp-git@sha256:ec778f7df6ed4768fa54a84f87dc6e2b2be619395ef1bf7a2bd9efb73ca7c865
                imagePullPolicy: Always
                name: krbocp-git
                resources: {}
                terminationMessagePath: /dev/termination-log
                terminationMessagePolicy: File
                volumeMounts:
                - mountPath: /etc/httpd/secrets
                  name: secret-volume
                  readOnly: true
              dnsPolicy: ClusterFirst
              restartPolicy: Always
              schedulerName: default-scheduler
              securityContext: {}
              terminationGracePeriodSeconds: 30
              volumes:
              - name: secret-volume
                secret:
                  defaultMode: 420
                  items:
                  - key: apache.keytab
                    mode: 511
                    path: apache.keytab
                  secretName: apache-keytab
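
        Rather than hand-editing the YAML, the same volume and mount can probably be added from the CLI. A sketch, assuming oc set volume accepts these options (the readOnly flag can still be set in the manifest as shown above):

        oc set volume deployment/krbocp-git --add --name=secret-volume \
            --type=secret --secret-name=apache-keytab \
            --mount-path=/etc/httpd/secrets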
        

        Et Voila:

        $ curl -s  --negotiate -u : http://krbocp-git-krbocp.apps.demo.redhatfsi.com/envvars | grep REMOTE_USER
        REMOTE_USER  'custom/sampleapp.apps.demo.redhatfsi.com@REDHATFSI.COM'

        I was getting confused about why this was not working for the image-based deployment I did in the same project, until I realized it has a different hostname: http://krbocp-container-krbocp.apps.demo.redhatfsi.com/ will not work with the same keytab.

        The only if available in the Kitchen

        Posted by Adam Young on February 03, 2020 08:14 PM

        This old kitchen…

        8 Tone scale for that strange chord in Take The A-Train

        Posted by Adam Young on February 03, 2020 04:25 PM

        You must Take the A Train…if you want to improvise over a standard. But this standard tune has a non-standard chord in bars 3 and 4. If you are playing the “Real Book” version in C, the song starts with two measures of C Major 7, and then goes up a whole step to D. If we stayed in the key of C, this would be a D minor chord. Billy Strayhorn was much more creative than that, and he put in a chord rarely seen anywhere else: D7 b5.

        The notes of this chord are D and F#, just as in the major chord, but the fifth is dropped from A to A flat. The final note is the C.

        (Notation: the chord, in ABC: D ^F _A c.)

        Say we want to Bebop over this chord, and we need an 8 tone scale. What notes should we pick? The general rule is to keep the chord tones on the down beat, and get filler notes on the up beat. We also know that we are modulating from the C Major scale used in the first two measures, as well as the ii-V7-I progression that follows it. We can chose to stay as close to C as possible, as far from C as possible, or somewhere in the middle.

        If we try to stay on the white notes in between, we get:

        D E F# G Ab B C C#

        (Notation: the scale, in ABC: D E ^F G _A B c ^c d.)

        Note that this gives you a minor third leading into the 7th. Thus, it sounds somewhat like a harmonic minor. This pattern is W W H H b3 H H H.

        To convert to my older post, rotate the minor third to the end:

        H H H W W H H b3

        This is scale #12.

        And over the chord:

        (Notation: the scale D E ^F G _A B c ^c d over the chord D ^F _A c.)

        Let’s say we want to be as different as possible from that C Major scale while maintaining the chord tones on the down beat. That gives us:

        D Eb F# G Ab Bb C C#

        (Notation: the scale D _E ^F G _A _B c ^c d over the chord D ^F _A c.)

        This pattern is H b3 H H W W H H

        rotated to put the b3 at the end

        H H W W H H H b3

        It is scale #6

        These two scales sound very similar, as we put a lot of restrictions on them. A scale is constructed from 12 chromatic tones; 12 choose 8 means that only 4 notes are going to be skipped. The harder-hitting chord tones mean that the chords are going to ring through the passing tones and give the same general effect.

        This post was inspired by this video by Scott Paddock. One other scale he proposed is the whole tone scale. A D whole tone scale looks like this:

        D E F# G# A# C

        It only has 6 distinct tones, not 7 like a major scale. It also treats the F# to G# (enharmonically the Ab from the chord) as a suspension, as there are not any skipped tones between them. If we set that next to our first scale:

        D E F# G# A# C

        D E F# G Ab C C#

        We see that we are missing the G natural but get the rest of the notes. If we add the G natural and the C# to get 8 tones, we have

        D E F# G G# A# C C#

        Over the chords:

        (Notation: D E ^F G ^G ^A c ^c d over the chord D ^F _A c.)

        We could alternatively put the B natural in, but it means we lose the A#, which is part of the whole tone effect.

        (Notation: D E ^F G ^G B c ^c d over the chord D ^F _A c.)

        This is W W H H H H H H. I don’t see that in my list. We can fit that in to #42 in place of the duplicate I had there

        We could also move the C natural to a passing tone and lose the C sharp.

        (Notation: D E ^F G ^G ^A B c d over the chord D ^F _A c.)

        This is W W W H H H H W which is scale # 35.


        From an ease-of-playing perspective, I would probably stick with the first scale. It emphasizes the notes that are different from C major, and it gets the minor third in there, which is a great color.

        Fixing a MacBook Pro 8,2 with dead AMD GPU

        Posted by William Brown on February 03, 2020 02:00 PM

        Fixing a MacBook Pro 8,2 with dead AMD GPU

        I’ve owned a MacBook Pro 8,2 late 2011 edition, which I used from 2011 to about 2018. It was a great piece of hardware, and honestly I’m surprised it lasted so long given how many MacOS and Fedora installs it’s seen.

        I upgraded to a MacBook Pro 15,1, and I gave the 8,2 to a friend who was in need of a new computer so she could do her work. It worked really well for her until today, when she messaged me that the machine was having a problem.

        The Problem

        The machine appeared to be in a bootloop, where just before swapping from the EFI GPU to the main display server, it would go black and then lock up/reboot. Booting to single user mode (boot holding cmd + s) showed the machine’s disk was intact with a clean apfs. The system.log showed corruption at the time of the fault, which didn’t instill confidence in me.

        Attempting a recovery boot (boot holding cmd + r), this also yielded the bootloop. So we have potentially eliminated the installed copy of MacOS as the source of the issue.

        I then used the Apple hardware test (boot while holding d), and it gave the machine a clean bill of health.

        I have seen one of these machines give up in the past - my friend’s mother had one from the same generation, and that died in almost the same way - could it be the same?

        The 8,2’s cursed GPU stack

        The 8,2 15” MBP has dual GPUs - it has the on-CPU Intel 3000, and an AMD Radeon 6750M. The two pass through an LVDS graphics multiplexer to the main panel. The external display port, however, is not so clear - the DDC lines are passed through the GMUX, but the data lines attach directly to the display port.

        The machine is also able to boot with EFI rendering to either card. By default this is the AMD Radeon. Whichever card is used at boot is also the first card MacOS attempts to use, but it will try to swap to the Radeon later on.

        This generation had a large number of the Radeons develop faults in their 3D rendering capability, so the card would render the EFI buffer correctly, but fail as soon as 3D rendering was initiated. Sounds like what we have here!

        To fix this …

        Okay, so this is fixable. First, we need to tell EFI to boot primarily from the Intel card. Boot to single-user mode and then run:

        nvram fa4ce28d-b62f-4c99-9cc3-6815686e30f9:gpu-power-prefs=%01%00%00%00
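
        To double-check that the variable took before rebooting, the firmware variables can be dumped (purely a sanity check):

        nvram -p | grep gpu-power-prefs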
        

        Now we need to prevent loading of the AMD drivers so that during the boot MacOS doesn’t attempt to swap from Intel to the Radeon. We can do this by hiding the drivers. System integrity protection will stop you, so you need to do this as part of recovery. Boot with cmd + r, which now works thanks to the EFI changes, then open terminal

        cd "/Volumes/Macintosh HD"
        sudo mkdir amdkext
        sudo mv System/Library/Extensions/AMDRadeonX3000.kext amdkext/
        

        Then reboot. You’ll notice the fans go crazy because the Radeon card can’t be disabled without the driver. To fix this up, we can load the driver after boot, which stops the fans.

        To achieve this we make a helper script:

        # cat /usr/local/libexec/amd_kext_load.sh
        #!/bin/sh
        /sbin/kextload /amdkext/AMDRadeonX3000.kext
        

        And a launchd daemon to load it at boot:

        # cat /Library/LaunchDaemons/au.net.blackhats.fy.amdkext.plist
        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
                <dict>
                        <key>Label</key>
                        <string>au.net.blackhats.fy.amdkext</string>
                        <key>Program</key>
                        <string>/usr/local/libexec/amd_kext_load.sh</string>
                        <key>RunAtLoad</key>
                        <true/>
                        <key>StandardOutPath</key>
                        <string>/var/log/amd_kext_load.log</string>
                </dict>
        </plist>
        

        Now if you reboot, you’ll have a working mac, and the fans will stop properly. I’ve tested this with suspend and resume too and it works! The old beast continues to live :)

        Episode 181 - The security of SIM swapping

        Posted by Open Source Security Podcast on February 03, 2020 12:01 AM
        Josh and Kurt talk about SIM swapping. What is it, how does it work. Why should you care? There's not a ton you can do to protect yourself, but we go over some of the basic concepts and what to watch out for. It's unfortunate this is still a problem.


        Show Notes


          Using mod_auth_gssapi via Podman

          Posted by Adam Young on February 02, 2020 03:25 AM

          Kerberos is a cryptographically secure authentication mechanism in use in many large organizations. Developers may want to make their applications work with Kerberos while developing inside containers. Here is a quick proof-of-concept that shows how to set up a container to work with mod_auth_gssapi, the Apache module that makes use of Kerberos.

          Kerberos identity is based on hostnames. So, even though the webserver is running on localhost, we want to address it via a hostname. I allocated localdev.redhatfsi.com for my development. On the IdM server, I created a host called localdev.redhatfsi.com and a service principal called HTTP/localdev.redhatfsi.com@REDHATFSI.COM, then allocated a keytab, which I store on my local machine in ~/keytabs/HTTP/localdev.redhatfsi.com@REDHATFSI.COM.keytab.

          I added the following entry to my /etc/hosts file:

          0.0.0.0 localdev.redhatfsi.com

          I am still using the setup from my earlier post, although I have now created server and client subdirectories. The server code is here

          I added the mod_auth_gssapi RPM in the yum install portion of the Dockerfile:

          RUN yum -y install httpd mod_wsgi mod_auth_gssapi

          Here is my configuration for the apache server:

          PidFile /tmp/apache.pid
          ErrorLog /dev/stderr
          TransferLog /dev/stdout
          
          # Note: the arguments to the configuration sections below were stripped
          # when this post was rendered; they are reconstructed here.
          <VirtualHost *:8080>
              DocumentRoot /var/www/html
              WSGIScriptAlias /envvars /var/www/cgi-bin/envvars.wsgi
              <Directory /var/www/cgi-bin>
              Order allow,deny
              Allow from all
              </Directory>
          </VirtualHost>

          <Location "/">
              AuthType GSSAPI
              AuthName "GSSAPI Single Sign On Login"
              GssapiCredStore keytab:/etc/httpd/localdev.redhatfsi.com@REDHATFSI.COM.keytab
              Require valid-user
          </Location>
          
          

          This is not the cleanest setup, as it was somewhat trial-and-error. I did peek at the file /etc/httpd/conf.d/ipa.conf on the IdM server to get a sense of what they are tracking, but there is far more there than I need.

          To run the server, I bindmount in the Keytab:

           
          podman run -P --name krbocp --mount type=bind,source=/home/ayoung/keytabs/HTTP/localdev.redhatfsi.com@REDHATFSI.COM.keytab,destination=/etc/httpd/localdev.redhatfsi.com@REDHATFSI.COM.keytab --rm admiyo:krbocp

          Most notable is that I am still using the mod_wsgi code that dumps the environment. I am specifically looking for the GSSAPI-specific variables, as well as REMOTE_USER.

          PORT=$( podman port -l | cut -d':' -f2 )
          curl -s --negotiate -u : localdev.redhatfsi.com:$PORT/envvars

          This returns the following data:

          euid  48
          UNIQUE_ID  'XjY-1MjfUl0pGWTY63U-OAAAAA0'
          GSS_NAME  'ayoung@REDHATFSI.COM'
          GSS_SESSION_EXPIRATION  '1580698393'
          GATEWAY_INTERFACE  'CGI/1.1'
          SERVER_PROTOCOL  'HTTP/1.1'
          REQUEST_METHOD  'GET'
          QUERY_STRING
          REQUEST_URI  '/envvars'
          SCRIPT_NAME  '/envvars'
          HTTP_HOST  'localdev.redhatfsi.com:38319'
          HTTP_USER_AGENT  'curl/7.66.0'
          HTTP_ACCEPT  '*/*'
          SERVER_SIGNATURE
          SERVER_SOFTWARE  'Apache/2.4.37 (centos) mod_auth_gssapi/1.6.1 mod_wsgi/4.6.4 Python/3.6'
          SERVER_NAME  'localdev.redhatfsi.com'
          SERVER_ADDR  '10.0.2.100'
          SERVER_PORT  '38319'
          REMOTE_ADDR  '10.0.2.2'
          DOCUMENT_ROOT  '/var/www/html'
          REQUEST_SCHEME  'http'
          CONTEXT_PREFIX
          CONTEXT_DOCUMENT_ROOT  '/var/www/html'
          SERVER_ADMIN  'root@localhost'
          SCRIPT_FILENAME  '/var/www/cgi-bin/envvars.wsgi'
          REMOTE_PORT  '53008'
          REMOTE_USER  'ayoung@REDHATFSI.COM'
          AUTH_TYPE  'Negotiate'
          PATH_INFO
          mod_wsgi.script_name  '/envvars'
          mod_wsgi.path_info
          mod_wsgi.process_group
          mod_wsgi.application_group  '10.0.2.100:38319|/envvars'
          mod_wsgi.callable_object  'application'
          mod_wsgi.request_handler  'wsgi-script'
          mod_wsgi.handler_script
          mod_wsgi.script_reloading  '1'
          mod_wsgi.listener_host
          mod_wsgi.listener_port  '8080'
          mod_wsgi.enable_sendfile  '0'
          mod_wsgi.ignore_activity  '0'
          mod_wsgi.request_start  '1580613588392411'
          mod_wsgi.request_id  'XjY-1MjfUl0pGWTY63U-OAAAAA0'
          mod_wsgi.script_start  '1580613588404414'
          wsgi.version  (1, 0)
          wsgi.multithread  True
          wsgi.multiprocess  True
          wsgi.run_once  False
          wsgi.url_scheme  'http'
          wsgi.errors  <_io.TextIOWrapper name='<wsgi.errors>' encoding='utf-8'>
          wsgi.input  <mod_wsgi.input object at ...>
          wsgi.input_terminated  True
          wsgi.file_wrapper  <class ...>
          apache.version  (2, 4, 37)
          mod_wsgi.version  (4, 6, 4)
          mod_wsgi.total_requests  3
          mod_wsgi.thread_id  4
          mod_wsgi.thread_requests  0

          kinit with a service keytab

          Posted by Adam Young on January 31, 2020 10:32 PM

          Remote services are not you; they do work on your behalf. When a remote service authenticates to another service, it should not impersonate you. If you use a keytab issued to your principal (say yourname@YOUNGLOGIC.INFO), you are not going to be able to log in to things using a password; the IdM server only allows one or the other credential to be active at any given time. Even if you do use the keytab, if you need to have it in two locations, you need to copy it, which becomes a nightmare if it gets compromised. So, we want to make service accounts to work on our behalf. Here’s what I have so far.

          Although my IdM server is not the authoritative name server for my domain, I can add entries to it to give it subdomains. So I have created apps.demo.redhatfsi.com. I can make hosts under this to use as services, even though they are really all going to be on one IP address and exposed via a wildcard DNS entry. For this sample I have a host named sampleapp.apps.demo.redhatfsi.com. I might end up creating a DNS entry for it eventually, but I don’t need it at the moment; just the IdM host record is sufficient.

          Under that host I created a service called custom. That leads to the full principal name of custom/sampleapp.apps.demo.redhatfsi.com@REDHATFSI.COM

          I’ve granted permission to the ayoung user to both create and fetch the keytab (using the Web UI, a nice extension, props to the coders!). To actually fetch it, I log in to an ipa-client machine, kinit as ayoung, and run:

          ipa-getkeytab -k custom.keytab -p custom/sampleapp.apps.demo.redhatfsi.com@REDHATFSI.COM

          I can now use that to run kinit. Note that I need to pass the principal name in, as well as the file where the keytab resides:

          kinit -t custom.keytab  custom/sampleapp.apps.demo.redhatfsi.com@REDHATFSI.COM
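
          When switching between my own credentials and the service keytab, it helps to confirm which principal actually holds the ticket. A quick check (kdestroy -A wipes all existing caches first):

          kdestroy -A
          kinit -t custom.keytab custom/sampleapp.apps.demo.redhatfsi.com@REDHATFSI.COM
          klist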
          

          I can use this in a “curl with negotiate” call like this:

          curl --negotiate -u custom/sampleapp.apps.demo.redhatfsi.com@REDHATFSI.COM: https://idm.redhatfsi.com/ipa/
          

          Kerberos Secured Web Call from a Podman container

          Posted by Adam Young on January 31, 2020 09:40 PM

          What does it take to make a call to a Kerberized service from a container running in podman? Here are the steps I am going through to debug and troubleshoot.

          UPDATE: Now in a rootless container.

          I have an IPA server set up. If I call $HOSTNAME/ipa/ui and I am not authenticated, I can see the response:

          Unable to verify your Kerberos credentials.

          If I am authenticated I get:

          Moved Permanently.

          Bad Habit alert: you can do

          echo $PASSWORD | kinit

          Don’t do something like that. Instead, for processes inside containers, you want to use a mechanism to automate the fetching of a TGT. Typically, you want to use a keytab. I wrote this up years ago.

          So here, I bindmount the directory with the keytab into the container at run time, and automate the fetching of the TGT.

          1. Create a directory structure on your workstation like this: ~/user/1001
          2. Go to an IPA client machine and generate a keytab. Note that IPA will disable your password once you do this. You have been warned.
          3. scp the keytab onto the workstation, into the numbered subdirectory from the first step.
          4. Make sure the keytab is readable by the desired end user. Yes, this is a security risk. You have been warned.
          5. Test that the keytab can be used to get a TGT: kinit -t /home/ayoung/user/1001/client.keytab ayoung@EXAMPLE.COM. The whole sequence is sketched as shell below.
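
          Put together, the steps above look roughly like this (the ipa-client hostname is a placeholder; the user, realm, and paths are the ones used in this post):

          # 1. per-EUID directory on the workstation (1001 matches USER in the Dockerfile)
          mkdir -p ~/user/1001

          # 2. on an ipa-client machine: generate the keytab (this disables the password)
          ipa-getkeytab -p ayoung@EXAMPLE.COM -k client.keytab

          # 3. copy it back to the workstation
          scp ipaclient.example.com:client.keytab ~/user/1001/client.keytab

          # 4. make it readable by the container user -- a known security trade-off
          chmod 644 ~/user/1001/client.keytab

          # 5. confirm the keytab can be used to get a TGT
          kinit -t ~/user/1001/client.keytab ayoung@EXAMPLE.COM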

          Here is my Dockerfile:

          FROM centos:latest
          
          MAINTAINER Adam Young
          
          RUN yum -y install krb5-workstation strace curl sssd-client
          RUN chmod a+rwx /var/kerberos/krb5/user
          COPY krb5.conf /etc/krb5.conf
          # Note: I added the following and can run this all in a 
          # non-root Container 
          USER 1001
          CMD KRB5_TRACE=/dev/stderr  curl -k --negotiate -u : https://idm.example.com/ipa 
          #CMD kinit -V testuser@REDHATFSI.COM  -t /var/kerberos/krb5/user/0/client.keytab ; curl -k --negotiate -u : https://idm.example.com/ipa
          #CMD echo $EUID
          #CMD cat /var/kerberos/krb5/user/0/client.keytab
          #CMD curl -k --negotiate -u : https://idm.example.com/ipa
          #CMD cat /etc/krb5.conf
          

          Note that I left in many of my debugging lines. Each of these has been useful for troubleshooting.

          The top, uncommented, CMD line is the one that I actually use. The KRB5_TRACE env var is useful for debugging, although you will want to remove it for production.

          The line below it shows an explicit kinit. You can do something like this if you want to put the keytab in some other location, or to check a permissions problem.

          The numbered subdirectory reflects the effective user ID (EUID) that the process runs as in the container: 0 for the original root-based container, 1001 for the rootless update. The echo $EUID line was for confirming that the EUID was correct.

          I use a slightly modified krb5.conf file from the IPA server. I removed two lines from the top that looked like this:

          • includedir /etc/krb5.conf.d/
          • includedir /var/lib/sss/pubconf/krb5.include.d/

          I do not have anything in those directories that I need for this use case, and if I include those lines, I either need to create those subdirs, or I get the error:

          “kinit: Included profile directory could not be read while initializing Kerberos 5 library”
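
          For reference, a stripped-down krb5.conf along those lines might look like the following. This is only a sketch using the example.com names from this post, not the exact file pulled from the IPA server:

          [libdefaults]
            default_realm = EXAMPLE.COM
            dns_lookup_realm = false
            dns_lookup_kdc = true
            rdns = false

          [realms]
            EXAMPLE.COM = {
              kdc = idm.example.com
              admin_server = idm.example.com
            }

          [domain_realm]
            .example.com = EXAMPLE.COM
            example.com = EXAMPLE.COM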

          I put the build command into a file named build.sh. It looks like this:

          podman build --tag admiyo:curlkrb -f Dockerfile

          I put the run command into a file called run.sh. It looks like this:

          podman run -P --name krbocp --mount type=bind,source=/home/ayoung/user,destination=/var/kerberos/krb5/user --rm admiyo:curlkrb

          Note the bindmount. When I move this to Kubernetes, this will be done by a secret instead.

          Code on github.

          EDIT: Note that the SELinux context on the keytab must be set properly for the container user to be able to access it. I have been able to use the following command to set it:

          sudo chcon -R -t httpd_sys_content_t ~/user/1001/client.keytab
          

          Running HTTPD as an ordinary user using Podman

          Posted by Adam Young on January 31, 2020 03:44 AM

          While it is always tempting to run a program as root, we know we should not do it. When developing, you want to make the process as non-root as possible. Here is what I am doing to write mod_wsgi code and run it as a non-root user.

          See my last article for the code I am running.

          To Build the container image:

          podman build --tag admiyo:krbocp -f Dockerfile
          

          Nothing strange there, but worth noting that the build is done as the ayoung user, not as root.

          To run the container:

          podman run  -P --name krbocp  --rm admiyo:krbocp
          

          To find the ip/port of the container:

          podman port -l
          

          Which looks like this:

          $ podman port -l
          8080/tcp -> 0.0.0.0:36727
          $ curl 0.0.0.0:36727
          

          Building (and running) a custom HTTPD container image

          Posted by Adam Young on January 29, 2020 10:49 PM

          Having used Apache HTTPD for a good portion of my professional career, and being responsible for explaining how OpenShift works, I decided to try to build an Apache HTTPD container from scratch. For follow-on work, I want to see the environment, so the container essentially wraps a mod_wsgi app that dumps the environment. It took some trial and error to get it to run. Here is the end result:

          I’m building using a Dockerfile.

          FROM centos:latest
          
          MAINTAINER Adam Young
          
          RUN yum -y install httpd mod_wsgi
          RUN sed -i.bak 's/Listen 80/Listen 8080/' /etc/httpd/conf/httpd.conf
          RUN sed -i.bak 's/    CustomLog/#    CustomLog/' /etc/httpd/conf/httpd.conf
          
          
          COPY index.html /var/www/html/
          COPY envvars.py   /var/www/cgi-bin/envvars.wsgi
          RUN  chmod a+rx   /var/www/cgi-bin/envvars.wsgi
          COPY krbocp.conf /etc/httpd/conf.d/
          
          CMD /usr/sbin/httpd -D FOREGROUND
          
          EXPOSE 8080
          
          

          While I had wanted to keep all of the HTTPD configuration in an external file, it turns out I needed to turn a couple things off in the main file first. Hence the two sed lines. The first one:

          sed -i.bak 's/Listen 80/Listen 8080/' /etc/httpd/conf/httpd.conf

          Changes the port on which the container instance listens from 80 to 8080. Since my end state is going to be to run this in OpenShift, and I want OpenShift to provide TLS, port 8080 cleartext is OK for now.

          The second sed line:

          sed -i.bak 's/    CustomLog/#    CustomLog/' /etc/httpd/conf/httpd.conf

          Turns off the default logging for the access_log file. I need to override this later.

          This is my simplistic mod_wsgi conf file:

          PidFile /tmp/apache.pid
          ErrorLog /dev/stderr
          TransferLog /dev/stdout
          
          
          
          # Note: the arguments to the configuration sections below were stripped
          # when this post was rendered; they are reconstructed here.
          <VirtualHost *:8080>
              DocumentRoot /var/www/html
              WSGIScriptAlias /envvars /var/www/cgi-bin/envvars.wsgi
              <Directory /var/www/cgi-bin>
              Order allow,deny
              Allow from all
              </Directory>
          </VirtualHost>
          
          

          I got to this via trial and error. The logging section was based on a hint from https://twitter.com/nazgullien. Thank You Very Much.

          Here is the WSGI code itself.

          from os import environ

          def application(environ, start_response):
              status = '200 OK'
              # The HTML tags inside these string literals were eaten when this post
              # was rendered; a plain table layout is used here as a stand-in.
              output = b'<html><head></head><body><table>\n'
              keys = environ.keys()
              for key in keys:
                  line = '<tr><td>%s</td><td>%s</td></tr>\n' % (key, repr(environ[key]))
                  output += line.encode('utf-8')
              output += b'</table></body></html>'
              response_headers = [('Content-type', 'text/html'),
                                  ('Content-Length', str(len(output)))]
              start_response(status, response_headers)
              return [output]

          The complete code repository is here. Last tested commit is 33f4bcfae23aba4b60cb485e52b11a07caa50a8f but this repo is going to keep moving, so don’t be surprised if it looks nothing like I describe above by the time you look.

          Fedora Has Too Many Security Bugs

          Posted by Robbie Harwood on January 28, 2020 05:00 AM

          I don't work on Fedora security directly, but I do maintain some crypto components. As such, I have my own opinions about how things ought to work, which I will refrain from here. My intent is to demonstrate the problem so that the project can discuss solutions.

          To keep this easy to follow, my data and process are in a section at the end; curious readers should be able to double-check me.

          Problem

          At the time of writing, there are 2,336 open CVE bugs against Fedora. While it's not realistic for that number to be 0, this is clearly way too many.

          Additionally, a majority of them (2309) are older than 4 weeks. I understand from experience that even the most important bugs are rarely fixed instantaneously, but having bugs that old (and so many) speaks to deeper problems with the way maintenance is done right now. In fact, here's the year breakdown:

          2010: 4
          2011: 11
          2012: 17
          2013: 18
          2014: 30
          2015: 74
          2016: 195
          2017: 425
          2018: 682
          2019: 852
          2020: 27
          

          So a concentration in the past couple years, but with a very long tail.

          My query includes both "regular" Fedora and EPEL. There are 1266 EPEL bugs in this set (which leaves 1070 non-EPEL). So the problem is worse for EPEL, but EPEL is by no means responsible for this huge number.

          It's possible to slice this by component, but I don't actually want to do that here because my intent is not to point fingers at specific people. However, there are a few ecosystems that seem to be having particular trouble (based on visual inspection of names):

          mingw: 459
          python: 81
          nodejs: 72
          ruby: 32
          php: 23
          

          (The remainder don't clearly group.)

          That's all the analysis I could think of to run, but see the methodology section below if you want to build on what's here.

          Fedora policies

          A majority of CVE bugs are created by Red Hat's Product Security (of which team I am not a member; I'm in Security Engineering). They provide this service on a best-effort basis. As I understand it, the theory is that maintainers should be aware enough of their packages to know whether a release fixes a security bug or not. (And also that an extra bug for us to close once in a while isn't the end of the world.)

          Fedora has some policy around security bugs in the Package maintainer responsibilities document, but it's very weak (to someone coming from RFC 2119-land, at any rate). It says:

          Package maintainer should handle security issues quickly, and if they need help they should contact the Security Response Team.

          ("Security Response Team" is a broken link, which I've reported here.)

          This effectively treats security bugs no differently than other bugs. The only recourse for maintainers not fixing bugs in general is the nonresponsive maintainer process, which won't help if the maintainer is still active in the process but hostile toward fixing/triaging their bugs.

          So it has to go to FESCo. FESCo presumably does not want to handle the hundreds of tickets for all of these, which means that the status quo is inadequate.

          In short: no one is minding the store, and more worryingly, there is no way for anyone to start minding the store.

          What I've done

          I've reached out to some maintainers, including folks from mingw. I currently have a FESCo ticket (#2333) for getting those resolved.

          Reaching out to EPEL maintainers has proven unsuccessful on the whole. From my scattered sampling, these bugs aren't getting fixed because the default package assignee is not interested in maintaining EPEL. (Presumably someone else did in the past, and they have vanished.)

          I've also written this post, which I hope will spark a concerted effort to fix the problem.

          Methodology

          All information in this post is readily accessible to any Fedora contributor, no special access required.

          I used this bugzilla query. I downloaded the data as CSV, then queried and filtered it using python. I'm sure there are better ways to do this, but I'm not a statistician. I also mostly write C.

          Once I had downloaded the CSV, I imported it like so:

          import csv
          
          with open("bugs-2020-01-28.csv", "r") as f:
              db = list(csv.DictReader(f))
          

          The csv module's interface is obnoxious. It wants to give back an iterator over the file, so we have to drain the iterator before the file can be closed. (Otherwise it becomes unhappy.) This leaves us with an object of type roughly List<OrderedDict<String, String>>. Really what I'd like is Set<Dict<String, String>>, but neither Dict nor OrderedDict are hashable, so Python doesn't allow that.

          For determining age, I'm abusing the fact that it's January 2020 right now:

          old = [bug for bug in db if "CVE-2020" not in bug["Summary"]]
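
          As a quick sanity check, the headline numbers from the top of this post fall straight out of these two lists:

          # 2336 open CVE bugs, 2309 of them without a 2020 CVE id
          print(len(db), len(old))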
          

          Mapping years:

          import re
          
          from collections import defaultdict
          
          years = defaultdict(int)
          r = re.compile(r"CVE-(\d{4})-")
          for bug in db:
              match = r.search(bug["Summary"])
              if match is None:
                  continue
          
              year = match.group(1)
              years[year] += 1
          
          for key in sorted(years.keys()):
              print(f"{key}: {years[key]}
          

          EPEL:

          epel = [bug for bug in db if bug["Product"] == "Fedora EPEL"]
          

          Components:

          components = defaultdict(int)
          for bug in db:
              components[bug["Component"]] += 1
          
          for c in sorted(components.keys()):
              print(f"{c}: {components[c]}")
          
          def ecosystem(e):
              count = 0
              for c in components:
                  if c.startswith(f"{e}-"):
                      count += components[c]
          
              return count
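
          The ecosystem counts listed earlier come from calling that helper with the obvious prefixes, roughly:

          for e in ("mingw", "python", "nodejs", "ruby", "php"):
              print(f"{e}: {ecosystem(e)}")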
          

          Episode 180 - A Tale of Two Vulnerabilities

          Posted by Open Source Security Podcast on January 27, 2020 01:01 AM
          Josh and Kurt talk about two recent vulnerabilities that have had very different outcomes. One was the Citrix remote code execution flaw. While the flaw is bad, the handling of the flaw was possibly worse than the flaw itself. The other was the Microsoft ECC encryption flaw. It was well handled even though it was hard to understand and it is a pretty big deal. As all these things go, fixing and disclosing vulnerabilities is hard.


          Show Notes


            Episode 179 - Google Project Zero and the 90 day clock

            Posted by Open Source Security Podcast on January 20, 2020 12:34 AM
            Josh and Kurt talk about the updated Google Project Zero disclosure policy. What's the new policy, what does it mean, and will it really matter? We suspect it will improve some things, but won't drastically change much.


            Show Notes


              There are no root causes

              Posted by William Brown on January 19, 2020 02:00 PM

              There are no root causes

              At Gold Coast LCA2020 I gave a lightning talk on swiss cheese. Well, maybe not really swiss cheese, but the swiss cheese failure model, which was proposed at the University of Manchester.

              Please note this will cover some of the same topics as the talk, but in more detail, and with less jokes.

              An example problem

              So we’ll discuss the current issues behind modern CPU isolation attacks, i.e. Spectre. Spectre is an attack that uses the timing of a CPU’s speculative execution unit to retrieve information from another running process on the same physical system.

              Modern computers rely on hardware features in their CPU to isolate programs from each other. This could be isolating your web-browser from your slack client, or your sibling’s login from yours.

              This isolation however has been compromised by attacks like Spectre, and it looks unlikely that it can be resolved.

              What is speculative execution?

              In order to be “fast”, modern CPUs are far more complex than most of us have been taught. Often we believe that a CPU thread/core is executing “one instruction/operation” at a time. However, this isn’t how most CPUs work. Most work by having a pipeline of instructions that are in various stages of execution. You could imagine it like this:

              let mut x = 0
              let mut y = 0
              x = 15 * some_input;
              y = 10 * other_input;
              if x > y {
                  return true;
              } else {
                  return false;
              }
              

              This is some made up code, but in a CPU, every part of this could be in the “pipeline” at once.

              let mut x = 0                   <<-- at the head of the queue and "further" along completion
              let mut y = 0                   <<-- it's executed part way, but not to completion
              x = 15 * some_input;
              y = 10 * other_input;           <<-- all of these are in pipeline, and partially complete
              if x > y {                      <<-- what happens here?
                  return true;
              } else {
                  return false;
              }
              

              So how does this “pipeline” handle the if statement? If the pipeline is looking ahead, how can we handle a choice like an if? Can we really predict the future?

              Speculative execution

              At the if statement, the CPU uses past measurements to make a prediction about which branch might be taken, and it then begins to execute that path, even though ‘x > y’ has not been executed or completed yet! At this point x or y may not have even finished being computed yet!

              Let’s assume for now our branch predictor thinks that ‘x > y’ is false, so we’ll start to execute the “return false” or any other content in that branch.

              Now the instructions ahead catch up, and we resolve “did we really predict correctly?”. If we did, great! We have been able to advance the program state asynchronously even without knowing the answer until we get there.

              If not, ohh nooo. We have to unwind what we were doing, clear some of the pipeline and try to do the correct branch.

              Of course this has an impact on the timing of the program. Some people found that you could write a program to manipulate this predictor: using specific addresses and content, they could let the speculative executor touch code and memory they are not allowed to access before the unwind occurs. They could time this, and retrieve the memory contents from areas they are not allowed to access, breaking isolation.

              Owwww my brain

              Yes. Mine too.

              Community Reactions

              Since this has been found, a large amount of the community reaction has been about the “root cause”. ‘Clearly’ the root cause is “Intel are bad at making CPUs”, and so everyone should buy AMD instead because they “weren’t affected quite as badly”. We’ve had some Intel CPU updates and kernel/program fixes, so all good, right? We addressed the root cause.

              Or … did we?

              Our computers are still asynchronous, and contain many out-of-order parts. It’s hard to believe we have “found” every method of exploiting this. Indeed, in the last year many more ways to bypass hardware isolation due to our systems’ async nature have been found.

              Maybe the “root cause” wasn’t addressed. Maybe … there are no ….

              History

              To understand how we got to this situation we need to look at how CPUs have evolved. This is not a complete history.

              The PDP-11 was a system in use at Bell Labs, where the C programming language was developed. Back then CPUs were very simple - a CPU and memory, executing one instruction at a time.

              The C programming language gained a lot of popularity as it was able to be “quickly” ported to other CPU models to allow software to be compiled on other platforms. This led to many systems being developed in C.

              Intel introduced the 8086, and many C programs were ported to run on it. Intel then released the 80486 in 1989, which had the first pipeline and cache to improve performance. In order to continue to support C, this meant the memory model could not change from the PDP11 - the cache had to be transparent, and the pipeline could not expose state.

              This has of course led to computers being more important in our lives and businesses, so we expected further performance, leading to increased frequencies and async behaviours.

              The limits of frequencies were really hit in the Pentium 4 era, when about 4GHz was shown to be a barrier of stability for those systems. They had very deep pipelines to improve performance, but that also had issues when branch prediction failed, causing pipeline stalls. Systems had to improve their async behaviours further to squeeze out every single piece of performance possible.

              Compiler developers also wanted more performance so they started to develop ways to transform C in ways that “took advantage” of x86_64 tricks, by manipulating the environment so the CPU is “hinted” into states we “hope” it gets into.

              Many businesses also started to run servers to provide to consumers, and in order to keep costs low they would put many users onto single pieces of hardware so they could share or overcommit resources.

              This has created a series of positive reinforcement loops - C is ‘ABI stable’ so we keep developing it due to its universal nature. C code can’t be changed without breaking every existing system. We can’t change the CPU memory model without breaking C, which is hugely prevalent. We improve the CPU to make C faster, transparently, so that users/businesses can run more C programs and users. And then we improve compilers to make C faster given quirks of the current CPU models that exist …

              Swiss cheese model

              It’s hard to look at the current state of systems security and simply say “it’s the CPU vendor’s fault”. There are many layers that have come together to cause this situation.

              This is called the “swiss cheese model”. Imagine you take a stack of swiss cheese and rotate and rearrange the slices. You will not be able to see through it, but as you continue to rotate and rearrange, eventually you may see a tunnel through the cheese where all the holes line up.

              This is what has happened here - we developed many layers socially and technically that all seemed reasonable over time, and only after enough time and re-arrangements of the layers have we now arrived at a situation where a failure has occurred that permeates all of computer hardware.

              To address it, we need to look beyond just “blaming hardware makers” or “software patches”. We need to help developers move away from C to other languages that can be brought onto new memory models that have manual or other cache strategies. We need hardware vendors to implement different async models. We need to educate businesses on risk analysis and how hardware works to provide proper decision making capability. We need developers to alter their behaviour to work in environments with higher performance constraints. And probably much much more.

              There are no root causes

              It is a very pervasive attitude in IT that every issue has a root cause. However, looking above we can see it’s never quite so simple.

              Saying an issue has a root cause, prevents us from examining the social, political, economic and human factors that all become contributing factors to failure. Because we are unable to examine them, we are unable to address the various layers that have contributed to our failures.

              There are no root causes. Only contributing factors.

              Shift on Stack: api_port failure

              Posted by Adam Young on January 19, 2020 12:55 AM

              I finally got a right-sized flavor for an OpenShift deployment: 25 GB Disk, 4 VCPU, 16 GB Ram. With that, I tore down the old cluster and tried to redeploy. Right now, the deploy is failing at the stage of the controller nodes querying the API port. What is going on?

              Here is the reported error on the console:

              (Screenshot of the console error.)

              The IP address of 10.0.0.5 is attached to the following port:

              $ openstack port list | grep "0.0.5"
              | da4e74b5-7ab0-4961-a09f-8d3492c441d4 | demo-2tlt4-api-port       | fa:16:3e:b6:ed:f8 | ip_address='10.0.0.5', subnet_id='50a5dc8e-bc79-421b-aa53-31ddcb5cf694'      | DOWN   |

              That final “DOWN” is the port state. It is also showing as detached. It is on the internal network:

              (Screenshot of the port details, showing it attached to the internal network.)
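
              The same details can be pulled from the CLI; for example (column names from the standard openstack client):

              openstack port show demo-2tlt4-api-port -c status -c device_id -c device_owner

              An empty device_id is another way to see that nothing is attached to the port.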

              Looking at the installer code, the one place I can find a reference to the api_port is in the template data/data/openstack/topology/private-network.tf used to build the value openstack_networking_port_v2. This value is used quite heavily in the rest of the installer’s Go code.

              Looking in the terraform data built by the installer, I can find references to both the api_port and openstack_networking_port_v2. Specifically, there are several objects of type openstack_networking_port_v2 with the names:

              $ cat moc/terraform.tfstate  | jq -jr '.resources[] | select( .type == "openstack_networking_port_v2") | .name, ", ", .module, "\n" '
              api_port, module.topology
              bootstrap_port, module.bootstrap
              ingress_port, module.topology
              masters, module.topology
              

              On a baremetal install, we need an explicit A record for api-int.<cluster_name>.<base_domain>. That requirement does not exist for OpenStack, however, and I did not have one the last time I installed.

              api-int is the internal access to the API server. Since the controllers are hanging trying to talk to it, I assume that we are still at the stage where we are building the control plane, and that it should be pointing at the bootstrap server. However, since the port above is detached, traffic cannot get there. There are a few hypotheses in my head right now:

1. The port should be attached to the bootstrap device.
2. The port should be attached to a load balancer.
3. The port should be attached to something that is acting like a load balancer.

              I’m leaning toward 3 right now.
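
One way to test hypothesis 1, purely as an experiment, would be to attach the port to the bootstrap server and watch whether the controllers make progress. The bootstrap server name below is a guess based on the cluster naming pattern, so check openstack server list first:

# Show full details of the detached port
openstack port show demo-2tlt4-api-port

# Attach it to the bootstrap server (server name is an assumption)
openstack server list
openstack server add port demo-2tlt4-bootstrap demo-2tlt4-api-port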

              The install-config.yaml has the line:
octaviaSupport: "1"

              But I don’t think any Octavia resources are being used.

              $ openstack loadbalancer pool list
              
              $ openstack loadbalancer list
              
              $ openstack loadbalancer flavor list
              Not Found (HTTP 404) (Request-ID: req-fcf2709a-c792-42f7-b711-826e8bfa1b11)
              
              

              Self Service Speedbumps

              Posted by Adam Young on January 15, 2020 05:18 PM

              The OpenShift installer is fairly specific in what it requires, and will not install into a virtual machine that does not have sufficient resources. These limits are:

              • 16 GB RAM
              • 4 Virtual CPUs
              • 25 GB Disk Space

This is fairly frustrating if your cloud provider does not give you a flavor that matches this. The last item in particular is an artificial limitation, as you can always create an additional disk and mount it, but the installer does not know how to do that.

              In my case, there is a flavor that almost matches; it has 10 GB of Disk space instead of the required 25. But I cannot use it.

Instead, I have to use a larger flavor that has double the VCPUs, and thus eats up more of my VCPU quota…to the point that I cannot afford more than 4 virtual machines of this size. That means I cannot create more than one compute node, since OpenShift needs 3 nodes for the control plane.

I do not have permissions to create a flavor on this cloud. Thus, my only option is to open a ticket, which has to be reviewed and acted upon by an administrator. Not a huge deal.
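
For reference, the administrator side of that ticket is a single command; something like the following (the flavor name is made up, and it requires admin credentials):

openstack flavor create --ram 16384 --vcpus 4 --disk 25 ocp-node

With a flavor sized like that, the quota math works out and the install can proceed without over-allocating VCPUs.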

This is how self service breaks down: a non-security decision (linking disk size to the other characteristics of a flavor) combines with access control rules that prevent end users from customizing. So the end user waits for a human to respond.

              In my case, that means that I have to provide an alternative place to host my demonstration, just in case things don’t happen in time. Which costs my organization money.

              This is not a ding on my cloud provider. They have the same OpenStack API as anyone else deploying OpenStack.

              This is not a ding on Keystone; create flavor is not a project scoped operation, so I can’t even blame my favorite bug.

This is not a ding on the Nova API. It is reasonable to reserve the ability to create flavors to system administrators, and, if instances have storage attached, to provide it in reasonably sized chunks.

My problem just falls at the junction of several different zones of responsibility. It is the overlap that causes the pain in this case. This is not unusual.

              Would it be possible to have a more granular API, like “create customer flavor” that built a flavor out of pre-canned parts and sizes? Probably. That would solve my problem. I don’t know if this is a general problem, though.

This does seem like something that could be addressed by a GitOps type approach. In order to perform an operation like this, I should be able to issue a command that gets checked in to git, confirmed, and posted for code review. An administrator could then confirm or provide an alternative approach. Today this happens in the ticketing system, and it is human-resource-intensive. If no one says “yes,” the default is no…and the thing just sits there.

              What would be a better long term solution? I don’t know. I’m going to let this idea set for a while.

              What do you think?

              Episode 178 - Are CVEs important and will ransomware put you out of business?

              Posted by Open Source Security Podcast on January 13, 2020 12:03 AM

Josh and Kurt talk about a discussion on Twitter about whether discovering CVE IDs is important for a resume. We don't think it is. We also discuss the idea of ransomware putting a company out of business. Did it really? Possibly, but it probably won't create any substantial change in the industry.


              Show Notes


On the Passing of Neil Peart

                Posted by Adam Young on January 11, 2020 11:57 PM

I’m a nerdy male of Jewish, Eastern European descent. I was born in 1971. My parents listened to John Denver, Simon and Garfunkel, Billy Joel, Mac Davis, Anne Murray and Carly Simon. My Uncle Ben started me on Saxophone in second grade.

[Image from “The Buffalo News”]

Second grade was also the year that I moved from one side of Stoughton to the other, to 86 Larson Road, 3 houses up from the Grabers. While Brian Graber was my age, and destined to be one of my best friends through high school, it was his older brother, Stephen, who introduced me to two things that would change my life: the game of Dungeons and Dragons, and the band Rush.


                I said Nerdy.

                I can’t say enough about how D&D got me into history, reading, and all that propelled me through life.

                The soundtrack to that life was provided by Rush. Why? The stories that they told. Xanadu. The Trees, 2112. Hemispheres.

                But the song that grabbed me? The Spirit of Radio.


I even had the name wrong for all my life: I thought it was the Spirit of THE Radio. But that was not the tag line from the radio station that Neil used when he was inspired to write the song.

Invisible airwaves Crackle with Life

                Bright Antennae Bristle with the Energy

                Emotional Feedback on a Timeless Wavelength

                Bearing a gift beyond price, Almost Free.

                The Spirit of Radio

                The opening Riff is still my favorite thing to play on Guitar.

                The chords, all four of them, are simple, and yet just enough of a variation from the “Same Three Chords” to give the song its own sound.

                “Begin the Day with a Friendly voice, companion unobtrusive.”

How many trips to school started with the radio? In 1986, when my sister was driving me, and Brian, and her friend to Heidi, the radio was our start of the day.

You can choose a ready guide in some celestial voice

If you choose not to decide you still have made a choice

You can choose from phantom fears and kindness that can kill

I will choose a path that’s clear

I will choose freewill.

                Freewill

My Mother was (and is) a huge Robert Frost fan. We often talked about how he was sometimes belittled by other poets for being too “pretty” in his rhymes. Was Neil Peart a poet? A philosopher? He certainly got me started on Ayn Rand (a stage I moved beyond, eventually) but also taught me the term “Free Will.”


                Leaving my homeland
                Playing a lone hand
                My life begins today

                Fly By Night

My cousin Christopher Spelman came up for a week in July when I was 12 and he ended up staying all summer. We listened to Rush endlessly, discussed lyrics and drum technique. It was part of the cement that held our lifelong friendship together. I remember riding down the “Lazy River” trying to figure out which song sounded like “da DAHn da da DUH DUH…” and finally realizing it was the bridge from “Fly By Night.”


                Any escape might help to smooth
                The unattractive truth
                But the suburbs have no charms to soothe
                The restless dreams of youth


                Subdivisions

I didn’t like the synthesized 80s. I followed Rush through them, but wished they would write music like I had heard on their earlier albums. But they had all grown. Neil moved from storytelling to philosophy. Many of his lyrics on “Presto” could have been written for me as an adolescent.

[Image: Counterparts]

                As the years went by, we drifted apart
                When I heard that he was gone
                I felt a shadow cross my heart
                But he’s nobody’s
                Hero

Nobody’s Hero

                And so I grew apart from my favorite band. Music, the pillar of my life in High School, took a second seat during my Army years. By the time I emerged, Rush was in hiatus. I had my own stories.

                Keep on riding North and West
                Then circle South and East
                Show me beauty, but there is no peace
                For the ghost rider

                Ghost Rider

And that was 20-plus years ago. They went fast. In the past several years, I’ve shared a love of rock music with my elder son, with Rush holding a special place in our discussions. The song “Ghost Rider” grabbed his imagination. We’ve both read the graphic novel of “Clockwork Angels.” He was the first person I told when I heard the news. The second was my friend Steve, a bassist I jam with far less frequently, and a member of our weekly D20 Future game night…a direct descendant of that D&D obsession from my elementary school years.

                Living in a fisheye lens
                Caught in the camera eye
                I have no heart to lie
                I can’t pretend a stranger
                Is a long awaited friend

                Limelight

Neil held a certain fascination for me. He was an introvert, a technician, a writer, and a perfectionist. The nicest of people, he seemed to have to learn how to protect himself. I don’t think he wrote anything more personal than “Limelight,” where he explained to his fanbase the feelings he had about fame. We all knew him, but we were strangers, not long-awaited friends.

                Suddenly, you were gone
                From all the lives you left your mark upon

                Afterimage

As I process the loss of one of the most important artists in my life, I am listening to their albums. I started in chronological order, but I was drawn to different songs. “Afterimage,” the song where he says goodbye to an old friend, was the first that came to my mind. And so many others. Most people think of him as a drummer (and that he was, indeed), but I think of him as a lyricist, and I keep seeking out his words. Lucky for me, he wrote so many of them. I am mostly drawn now to the ones I know less well, from the end of their career, the last few albums.

Geddy Lee is the voice of Rush, and we hear the words in his performances, but they are Neil’s words. And they have meant a lot to me.



                Episode 177 - Fake or real? The security of counterfeit goods

                Posted by Open Source Security Podcast on January 06, 2020 12:03 AM

Josh and Kurt talk about marketplace safety and security. Will we ever see an end to the constant flow of counterfeit goods? The security industry has the same problem the marketplace industry has: without substantial injury we don't see movement towards meaningful change.


                Show Notes


                  Episode 176 - The 'predictions are stupid' prediction episode

                  Posted by Open Source Security Podcast on December 30, 2019 12:00 AM

Josh and Kurt talk about security predictions for 2020. None of the predictions are even a bit controversial or unexpected. We're in a state of slow change; without disruptive technology, next year will look a lot like this year.


                  Show Notes


                    Concurrency 2: Concurrently Readable Structures

                    Posted by William Brown on December 28, 2019 02:00 PM

                    Concurrency 2: Concurrently Readable Structures

In this post, I’ll discuss concurrently readable datastructures that exist, and ideas for future structures. Please note, this post is an in-progress design and may be altered in the future.

                    Before you start, make sure you have read part 1

                    Concurrent Cell

                    The simplest form of concurrently readable structure is a concurrent cell. This is equivalent to a read-write lock, but has concurrently readable properties instead. The key mechanism to enable this is that when the writer begins, it clones the data before writing it. We trade more memory usage for a gain in concurrency.

To see an implementation, see my Rust crate, concread.
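
As a rough illustration of the idea, here is a simplified clone-on-write cell using only the standard library. This is my own sketch, not concread’s actual API; a production version would avoid having readers take the lock at all (for example by atomically swapping a pointer), and it assumes a single writer.

use std::sync::{Arc, Mutex};

// Readers take a cheap Arc clone of the current generation; the writer clones
// the inner data, mutates the copy, and swaps it in. Readers still holding the
// old Arc are unaffected.
struct CowCell<T> {
    current: Mutex<Arc<T>>,
}

impl<T: Clone> CowCell<T> {
    fn new(data: T) -> Self {
        CowCell { current: Mutex::new(Arc::new(data)) }
    }

    // Readers hold the lock only long enough to clone the Arc.
    fn read(&self) -> Arc<T> {
        Arc::clone(&*self.current.lock().unwrap())
    }

    // Single-writer assumption: clone the data outside the lock, mutate the
    // copy, then briefly lock again to publish the new generation.
    fn write<F: FnOnce(&mut T)>(&self, f: F) {
        let snapshot = self.read();
        let mut copy = (*snapshot).clone();
        f(&mut copy);
        *self.current.lock().unwrap() = Arc::new(copy);
    }
}

fn main() {
    let cell = CowCell::new(vec![1, 2, 3]);
    let before = cell.read();      // this reader keeps its snapshot
    cell.write(|v| v.push(4));     // the writer publishes a new generation
    assert_eq!(before.len(), 3);
    assert_eq!(cell.read().len(), 4);
}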

                    Concurrent Tree

                    The concurrent cell is good for small data, but a larger structure - like a tree - may take too long to clone on each write. A good estimate is that if your data in the cell is larger than about 512 bytes, you likely want a concurrent tree instead.

                    In a concurrent tree, only the branches involved in the operation are cloned. Imagine the following tree:

                             --- root 1 ---
                            /               \
                        branch 1         branch 2
                        /     \         /        \
                    leaf 1   leaf 2  leaf 3    leaf 4
                    

                    When we attempt to change a value in leaf 4 we copy it before we begin.

                          ---------------------------
                         /   --- root 1 ---          \-- root 2
                         v  /               \                \
                        branch 1         branch 2            branch 2(c)
                        /     \         /        \           /    \
                    leaf 1   leaf 2  leaf 3    leaf 4        |    leaf 4(c)
                                        ^                    |
                                        \-------------------/
                    

                    In the process the pointers from the new root 2 to branch 1 are maintained. branch 2(c) also maintains a pointer to leaf 3.

This means that in this example only 3 of 7 nodes are copied, saving a lot of cloning. As your tree grows this saves a lot of work. Consider a tree with a node width of 7 pointers and a height of 5 levels. Assuming perfect layout, you only need to clone 5 of roughly 16,000 nodes - a huge saving in memory copies!

The interesting part is that a reader of root 1 is unaffected by the changes to root 2 - the tree from root 1 hasn’t been changed, as all its pointers and nodes are still valid.

                    When any reader of root 1 ends, we clean up all the nodes it pointed to that no longer are needed by root 2 (this can be done with atomic reference counting, or garbage lists in transactions).

                             --- root 2 ---
                            /               \
                        branch 1         branch 2(c)
                        /     \         /        \
                    leaf 1   leaf 2  leaf 3    leaf 4(c)
                    

It is through this copy-on-write (also called multi-version concurrency control) that we achieve concurrent readability in the tree.

                    This is really excellent for databases where you have in memory structures that work in parallel to the database transactions. In kanidm an example is the in-memory schema that is used at run time but loaded from the database. They require transactional behaviours to match the database, and ACID properties so that readers of a past transaction have the “matched” schema in memory.

                    Future Idea - Concurrent Cache

                    A design I have floated in my head is a concurrently readable cache - it should have the same transactional properties as a concurrently readable structure - one writer, multiple readers with consistent views of the data. As well it should support rollbacks if a writer fails.

                    This scheme should work with any cache type - LRU, LRU2Q, LFU. I plan to use ARC however.

                    ARC was popularised by ZFS - ARC is not specific to ZFS, it’s a strategy for cache replacement.

                    ARC is a combination of an LRU and LFU with a set of ghost lists and a weighting factor. When an entry is “missed” it’s inserted to the LRU. When it’s accessed from the LRU a second time, it moves to the LFU.

                    When entries are evicted from the LRU or LFU they are added to the ghost list. When a cache miss occurs, the ghost list is consulted. If the entry “would have been” in the LRU, but was not, the LRU grows and the LFU shrinks. If the item “would have been” in the LFU but was not, the LFU is expanded.

                    This causes ARC to be self tuning to your workload, as well as balancing “high frequency” and “high locality” operations.
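
As a sketch of that weighting idea (my own simplification, not code from a real ARC implementation): treat p as the target size of the recency (LRU) side, and nudge it on each ghost hit.

// `p` is the target size of the LRU ("recency") half; the LFU half gets the
// remainder of `capacity`. A ghost-LRU hit means recency was undersized, so
// grow p; a ghost-LFU hit means frequency was undersized, so shrink p.
fn adjust_target(p: &mut usize, capacity: usize, hit_ghost_lru: bool, hit_ghost_lfu: bool) {
    if hit_ghost_lru {
        *p = (*p + 1).min(capacity);
    } else if hit_ghost_lfu {
        *p = p.saturating_sub(1);
    }
}

fn main() {
    let (mut p, capacity) = (4usize, 8usize);
    adjust_target(&mut p, capacity, true, false);
    assert_eq!(p, 5);
}

A real ARC adjusts p by an amount weighted by the relative ghost list sizes, but the direction of the adjustment is the important part.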

A major problem though is that ARC is not designed for concurrency - the LFU/LRU rely on doubly linked lists, which are very much something that only a single thread can modify safely.

                    How to make ARC concurrent

                    To make this concurrent, I think it’s important to specify the goals.

                    • Readers should be able to read and find entries in the cache
                    • If a reader locates a missing entry it must be able to load it from the database
                    • The reader should be able to send loaded entries to the cache so they can be used.
                    • Reader access metrics should be acknowledged by the cache.
                    • Multiple reader generations should exist
                    • A writer should be able to load entries to the cache
                    • A writer should be able to modify an entry of the cache without affecting readers
                    • Writers should be able to be rolled back with low penalty

There are a lot of places to draw inspiration from, and I don’t think I can list - or remember - them all.

                    My current “work in progress” is that we use a concurrently readable pair of trees to store the LRU and LFU. These trees are able to be read by readers, and a writer can concurrently write changes.

                    The ghost lists of the LRU/LFU are maintained single thread by the writer. The linked lists for both are also single threaded and use key-references from the main trees to maintain themselves. The writer maintains the recv end of an mpsc queue. Finally a writer has an always-incrementing transaction id associated.

A reader, when initiated, has access to the writer end of the queue and the transaction id of the writer that created this generation. The reader has an empty hash map.

                    Modification to ARC

A modification is that we need to retain the transaction ids related to items. This means the LRU and LFU contain:

type Txid = usize;

struct ARC<K, V> {
    lru: LRU<K, Value<V>>,
    lfu: LFU<K, Value<V>>,
    ghost_lru: BTreeMap<K, Txid>,
    ghost_lfu: BTreeMap<K, Txid>,
}

struct Value<V> {
    txid: Txid,
    data: V,
}
                    

                    Reader Behaviour

                    The reader is the simpler part of the two, so we’ll start with that.

                    When a reader seeks an item in the cache, it references the read-only LRU/LFU trees. If found, we queue a cache-hit marker to the channel.

                    If we miss, we look in our local hashmap. If found we return that.

                    If it is not in the local hashmap, we now seek in the database - if found, we load the entry. The entry is stored in our local hashmap.

As the reader transaction ends, we send the set of entries in our local hash map as values (see Modification to ARC), so that the data and the transaction id of the generation when we loaded it stay associated. This has to be kept together, as the queue could be receiving items from many generations at once.

                    The reader attempts a “try_include” at the end of the operation, and if unable, it proceeds.

enum State<V> {
    Missed(V),
    Accessed,
}

struct ChanValue<K, V> {
    txid: Txid,
    key: K,
    data: State<V>,
}
                    

                    Writer Behaviour

There are two major aspects to writer behaviour. The writer is responsible for maintaining a local cache of missed items, a local cache of written (dirty) items, managing the global LRU/LFU, and responding to the reader inclusion requests.

                    When the writer looks up a value, it looks in the LFU/LRU. If found (and the writer is reading) we return the data to the caller, and add an “accessed” value to the local thread store.

                    If the writer is attempting to mutate, we clone the value and put it into the local thread store in the “dirty” state.

enum State<V> {
    Dirty(V),
    Clean(V),
    Accessed,
}

struct Value<V> {
    txid: Txid,
    state: State<V>,
}
                    

                    If it is not found, we seek the value in the database. It is added to the cache. If this is a write, we flag the entry as dirty. Else it’s flagged clean.

                    If we abort, we move to the include step before we complete the operation.

If we commit, we write our clean- and dirty-flagged data to the LFU/LRU as required. The LRU/LFU manages its own lists and sets, so this is fine with the concurrent behaviours. We indicate which items have been accessed.

We then perform an “include” operation. Readers attempt this at the end of their operations if the lock can be taken, and skip it if not.

We dequeue from the queue up to some limit of values. For each value that is requested, we look it up in our LRU/LFU and apply the following rules (a rough sketch of this loop follows the list):

• If the value was not in the ARC, and is in the ghost list, include it with its txid if the txid is higher than the ghost key’s txid.
• If the value was not in the ARC, and not in the ghost list, include it.
• If the value is in the ARC, but with a lower txid than the cached copy, we only update the access metrics.
• If the value is in the ARC with a higher txid, we update the access metrics and update the value to the newer version.
• If the value is an accessed marker, and the item is in the ghost list, continue.
• If the value is an accessed marker, and the item is in the ARC, we update its access metrics.
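
Here is the rough sketch of that loop. The types are my own simplifications for illustration (a plain BTreeMap standing in for the LRU/LFU, and no metric updates), not the actual implementation.

use std::collections::BTreeMap;

// Each queued item is either a loaded value (Some) or an "accessed" marker
// (None), tagged with the reader generation's txid.
fn include<K: Ord, V>(
    cache: &mut BTreeMap<K, (usize, V)>,
    ghost: &BTreeMap<K, usize>,
    queued: Vec<(K, usize, Option<V>)>,
) {
    for (key, txid, data) in queued {
        let cached_txid = cache.get(&key).map(|(t, _)| *t);
        let ghost_txid = ghost.get(&key).copied();
        match data {
            Some(v) => match cached_txid {
                // Not cached: include it, unless a newer ghost entry says our copy is stale.
                None => {
                    if ghost_txid.map_or(true, |g| txid > g) {
                        cache.insert(key, (txid, v));
                    }
                }
                // Cached with an older txid: replace with the newer version.
                Some(t) if txid > t => {
                    cache.insert(key, (txid, v));
                    // ...and update the access metrics here.
                }
                // Cached with a newer or equal txid: only update access metrics.
                Some(_) => {}
            },
            // Accessed marker: update metrics only if the item is still cached.
            None => {
                if cached_txid.is_some() { /* update access metrics */ }
            }
        }
    }
}

fn main() {
    let mut cache: BTreeMap<&str, (usize, i32)> = BTreeMap::new();
    let ghost: BTreeMap<&str, usize> = BTreeMap::new();
    include(&mut cache, &ghost, vec![("a", 1, Some(10)), ("a", 2, None)]);
    assert_eq!(cache.get("a"), Some(&(1, 10)));
}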

                    Questions for Future William

ARC moves an item from the LRU to the LFU if the LRU has a hit, but this seems overly aggressive. Perhaps this should only happen if the LRU gets a hit on 2 occasions?

                    A thread must wake and feed the cache if we are unable to drain the readers, as we don’t want the queue to grow without bound.

                    Limitations and Concerns

                    Cache missing is very expensive - multiple threads may load the value, the readers must queue the value, and the writer must then act on the queue. Sizing the cache to be large enough is critically important as eviction/missing will have a higher penalty than normal. Optimally the cache will be “as large or larger” than the working set.

Due to the inclusion cost, the cache may be “slow” during warm up, so this style of cache really matters for highly concurrent software that can not tolerate locking behaviour, and for items where the normal code paths are extremely slow - e.g. large item deserialisation and return.

                    Concurrency 1: Types of Concurrency

                    Posted by William Brown on December 28, 2019 02:00 PM

                    Concurrency 1: Types of Concurrency

                    I want to explain different types of concurrent datastructures, so that we can explore their properties and when or why they might be useful.

                    As our computer systems become increasingly parallel and asynchronous, it’s important that our applications are able to work in these environments effectively. Languages like Rust help us to ensure our concurrent structures are safe.

                    CPU Memory Model Crash Course

                    In no way is this a thorough, complete, or 100% accurate representation of CPU memory. My goal is to give you a quick brief on how it works. I highly recommend you read “what every programmer should know about memory” if you want to learn more.

                    In a CPU we have a view of a memory space. That could be in the order of KB to TB. But it’s a single coherent view of that space.

Of course, over time systems and people have demanded more and more performance. But we also have languages like C that won’t change their view of the system as a single memory space, or change how they work. Of course, it turns out C is not a low level language, but we like to convince ourselves it is.

To keep working with C and others, CPUs have acquired caches that are transparent to the operation of the memory. You have no control over what is - or is not - in the cache. It “just happens” asynchronously. This is exactly why Spectre and Meltdown happened (and will continue to happen): these async behaviours will always have the observable effect of making your CPU faster. Who knew!

Anyway, for this to work, each CPU has multiple layers of cache. At L3, the cache is shared between all the cores on the die. At L1, it is per core.

Of course, it’s a single view into memory. So if address 0xff is in the CPU cache of core 1, and also in the cache of core 2, what happens? Well, it’s supported! Caches between cores are kept in sync via a state machine called MESI. Its states are:

                    • Exclusive - The cache is the only owner of this value, and it is unchanged.
                    • Modified - The cache is the only owner of this value, and it has been changed.
                    • Invalid - The cache holds this value but another cache has changed it.
                    • Shared - This cache and maybe others are viewing this valid value.

To gloss very heavily over this topic, we want to avoid Invalid. Why? It means two CPUs are contending for the value, causing many attempts to keep each other in check. These contentions cause CPUs to slow down.

We want values to be in either E/M or S. In Shared, many CPUs are able to read the value at maximum speed, all the time. In E/M, we know only this CPU is changing the value.

                    This cache coherency is also why mutexes and locks exist - they issue the needed CPU commands to keep the caches in the correct states for the memory we are accessing.

Keep in mind that Rust’s variables are either immutable and able to be shared between threads, or mutable and single-threaded only. Sound familiar? Rust is helping with concurrency by keeping our variables in the fastest possible cache states.
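
For example (a small illustration of my own, not from the post):

use std::sync::Arc;
use std::thread;

fn main() {
    // Immutable data can be shared freely between threads, staying in the
    // Shared cache state on every core that reads it.
    let shared = Arc::new(vec![1, 2, 3]);
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let data = Arc::clone(&shared);
            thread::spawn(move || data.iter().sum::<i32>())
        })
        .collect();
    for h in handles {
        println!("sum = {}", h.join().unwrap());
    }

    // Mutable data is confined to a single thread at a time (Exclusive/Modified);
    // the compiler will not let a &mut be shared across threads without synchronisation.
    let mut exclusive = vec![1, 2, 3];
    exclusive.push(4);
    println!("{:?}", exclusive);
}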

                    Data Structures

                    We use data structures in programming to help improve behaviours of certain tasks. Maybe we need to find values quicker, sort contents, or search for things. Data Structures are a key element of modern computer performance.

However, most data structures are not thread safe. This means only a single CPU can access or change them at a time. Why? Because if a second CPU reads them while they are being changed, cache differences in content mean the second CPU may see an invalid datastructure, leading to undefined behaviour.

Mutexes can be used, but this causes other CPUs to stall and wait for the mutex to be released - not really what we want on our system. We want every CPU to be able to process data without stopping!

                    Thread Safe Datastructures

There exist many types of thread safe datastructures that can work on parallel systems. They often avoid mutexes to try to keep CPUs moving as fast as possible, relying on special atomic CPU operations to keep all the threads in sync.

                    Multiple classes of these structures exist, which have different properties.

                    Mutex

I have mentioned these already, but it’s worth specifying the properties of a mutex. A mutex allows only a single CPU inside it at a time. That CPU becomes the one “reader/writer”, and all other CPUs must wait until the mutex is released by the current holder.

                    Read Write Lock

                    Often called RWlock, these allow one writer OR multiple parallel readers. If a reader is reading then a writer request is delayed until the readers complete. If a writer is changing data, all new reads are blocked. All readers will always be reading the same data.

These are great for highly concurrent systems provided your data changes infrequently. If you have a writer changing data a lot, your readers end up continually blocked. The delay on the writer is also high, due to the potentially large number of parallel readers that need to exit before it can proceed.
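
In Rust this is std::sync::RwLock; a minimal illustration (my own example):

use std::sync::RwLock;

fn main() {
    let lock = RwLock::new(5);

    {
        // Many readers may hold the lock at the same time...
        let r1 = lock.read().unwrap();
        let r2 = lock.read().unwrap();
        assert_eq!(*r1 + *r2, 10);
    } // ...but they must all finish before a writer can enter.

    {
        let mut w = lock.write().unwrap();
        *w += 1; // all new readers are blocked until this guard drops
    }

    assert_eq!(*lock.read().unwrap(), 6);
}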

                    Lock Free

Lock free is a common (and popular) datastructure type. These are structures that don’t use a mutex at all, and can have multiple readers and multiple writers at the same time.

The most common and popular lock free structure is the queue, where many CPUs can append items and many can dequeue at the same time. There are also a number of lock free sets which can be updated in the same way.

An interesting part of lock free is that all CPUs are working on the same set - if CPU 1 reads a value, then CPU 2 writes the same value, the next read from CPU 1 will show the new value. This is because these structures aren’t transactional - lock free, but not transactional. There are times when this is a really useful property: when you need a single view of the world between all threads, and your program can tolerate data changing between reads.
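
As a tiny illustration of that non-transactional property (my own example, using a single atomic value rather than a full lock free structure):

use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

fn main() {
    let value = Arc::new(AtomicUsize::new(0));

    let writer = {
        let v = Arc::clone(&value);
        thread::spawn(move || v.store(42, Ordering::SeqCst))
    };

    let first = value.load(Ordering::SeqCst);  // may observe 0 or 42
    writer.join().unwrap();
    let second = value.load(Ordering::SeqCst); // definitely 42 after the join
    println!("first = {}, second = {}", first, second);
}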

                    Wait Free

                    This is a specialisation of lock free, where the reader/writer has guaranteed characteristics about the time they will wait to read or write data. This is very detailed and subtle, only affecting real time systems that have strict deadline and performance requirements.

                    Concurrently Readable

                    In between all of these is a type of structure called concurrently readable. A concurrently readable structure allows one writer and multiple parallel readers. An interesting property is that when the reader “begins” to read, the view for that reader is guaranteed not to change until the reader completes. This means that the structure is transactional.

                    An example being if CPU 1 reads a value, and CPU 2 writes to it, CPU 1 would NOT see the change from CPU 2 - it’s outside of the read transaction!

In this way there is a lot of read-only immutable data, and one writer mutating and changing things… sounds familiar? It’s very close to how our CPU caches work!

These structures also naturally lend themselves well to long-running processing or database systems where you need transactional (ACID) properties. In fact, some databases use concurrently readable structures to achieve ACID semantics.

                    If it’s not obvious - concurrent readability is where my interest lies, and in the next post I’ll discuss some specific concurrently readable structures that exist today, and ideas for future structures.

                    Episode 175 - Defenders will always be one step behind

                    Posted by Open Source Security Podcast on December 23, 2019 12:00 AM

                    Josh and Kurt talk about the opportunistic nature of crime. Defenders have to defend, which means the adversaries are by definition always a step ahead. We use the context of automobile crimes to frame the discussion.


                    Show Notes


                      Running the TripleO Keystone Container in OpenShift

                      Posted by Adam Young on December 21, 2019 12:31 AM

                      Now that I can run the TripleO version of Keystone via podman, I want to try running it in OpenShift.

                      Here is my first hack at a deployment yaml. Note that it looks really similar to the keystone-db-init I got to run the other day.

                      If I run it with:

                      oc create -f keystone-pod.yaml
                      

I get a CrashLoopBackOff error, with the following in the logs:

$ oc logs pod/keystone-api
+ sudo -E kolla_set_configs
sudo: unable to send audit message: Operation not permitted
INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
ERROR:__main__:Unexpected error:
Traceback (most recent call last):
  File "/usr/local/bin/kolla_set_configs", line 412, in main
    config = load_config()
  File "/usr/local/bin/kolla_set_configs", line 294, in load_config
    config = load_from_file()
  File "/usr/local/bin/kolla_set_configs", line 282, in load_from_file
    with open(config_file) as f:
IOError: [Errno 2] No such file or directory: '/var/lib/kolla/config_files/config.json'
                      
                      

I modified the config.json to remove steps that were messing me up. I think I can now remove even that last config file, but I left it for now.

                      {
                         "command": "/usr/sbin/httpd",
                         "config_files": [
                              {  
                                    "source": "/var/lib/kolla/config_files/src/*",
                                    "dest": "/",
                                    "merge": true,
                                    "preserve_properties": true
                              }
                          ],
                          "permissions": [
                      	    {
                                  "path": "/var/log/kolla/keystone",
                                  "owner": "keystone:keystone",
                                  "recurse": true
                              }
                          ]
                      }
                      

                      I need to add the additional files to a config map and mount those inside the container. For example, I can create a config map with the config.json file, a secret for the Fernet key, and a config map for the apache files.

                      oc create configmap keystone-files --from-file=config.json=./config.json
                      kubectl create secret generic keystone-fernet-key --from-file=../kolla/src/etc/keystone/fernet-keys/0
                      oc create configmap keystone-httpd-files --from-file=wsgi-keystone.conf=../kolla/src/etc/httpd/conf.d/wsgi-keystone.conf
                      

                      Here is my final pod definition

                      apiVersion: v1
                      kind: Pod
                      metadata:
                        name: keystone-api
                        labels:
                          app: myapp
                      spec:
                        containers:
                        - image: docker.io/tripleomaster/centos-binary-keystone:current-tripleo 
                          imagePullPolicy: Always
                          name: keystone
                          env:
                          - name: KOLLA_CONFIG_FILE
                            value: "/var/lib/kolla/config_files/src/config.json"
                          - name: KOLLA_CONFIG_STRATEGY
                            value: "COPY_ONCE"
                          volumeMounts:
                          - name: keystone-conf
                            mountPath: "/etc/keystone/"
                          - name: httpd-config
                            mountPath: "/etc/httpd/conf.d"
                          - name: config-json
                            mountPath: "/var/lib/kolla/config_files/src"
                      
                          - name: keystone-fernet-key
                            mountPath: "/etc/keystone/fernet-keys/0"
                        volumes:
                        - name: keystone-conf
                          secret:
                            secretName: keystone-conf
                            items:
                            - key: keystone.conf
                              path: keystone.conf
                              mode: 511	
                        - name: keystone-fernet-key
                          secret:
                            secretName: keystone-fernet-key
                            items:
                            - key: "0"
                              path: "0"
                              mode: 511	
                        - name: config-json
                          configMap:
                             name: keystone-files
                        - name: httpd-config
                          configMap:
                             name: keystone-httpd-files
                      
                      

                      And show that it works for basic stuff:

                      $ oc rsh keystone-api
                      sh-4.2# curl 10.131.1.98:5000
                      {"versions": {"values": [{"status": "stable", "updated": "2019-07-19T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.13", "links": [{"href": "http://10.131.1.98:5000/v3/", "rel": "self"}]}]}}curl (HTTP://10.131.1.98:5000/): response: 300, time: 3.314, size: 266
                      

                      Next steps: expose a route, make sure we can get a token.
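
As a sketch of those next steps (the service name, route handling, and admin credentials below are placeholders and assumptions, not values from this cluster, and it assumes an admin user has already been bootstrapped):

# Create a service for the pod, then expose it as a route
oc expose pod keystone-api --port=5000 --name=keystone
oc expose service keystone

# Request a token through the route (the password is a placeholder)
curl -i -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "admin", "domain": {"id": "default"}, "password": "CHANGEME"}}}}}' \
  http://$(oc get route keystone -o jsonpath='{.spec.host}')/v3/auth/tokens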

                      Official TripleO Keystone Images

                      Posted by Adam Young on December 19, 2019 09:00 PM

                      My recent forays into running containerized Keystone images have been based on a Centos base image with RPMs installed on top of it. But TripleO does not run this way; it runs via containers. Some notes as I look into them.

                      The official containers for TripleO are currently hosted on docker.com. The Keystone page is here:

                      Don’t expect the docker pull command posted on that page to work. I tried a comparable one with podman and got:

                      $ podman pull tripleomaster/centos-binary-keystone
                      Trying to pull docker.io/tripleomaster/centos-binary-keystone...
                        manifest unknown: manifest unknown
                      Trying to pull registry.fedoraproject.org/tripleomaster/centos-binary-keystone...
                      

                      And a few more lines of error output. Thanks to Emilien M, I was able to get the right command:

                      $ podman pull tripleomaster/centos-binary-keystone:current-tripleo
                      Trying to pull docker.io/tripleomaster/centos-binary-keystone:current-tripleo...
                      Getting image source signatures
                      ...
                      Copying config 9e85172eba done
                      Writing manifest to image destination
                      Storing signatures
                      9e85172eba10a2648ae7235076ada77b095ed3da05484916381410135cc8884c
                      
                      

                      Since I did this as a normal account, and not as root, the image does not get stored under /var, but instead goes somewhere under $HOME/.local. If I type

                      $ podman images
                      REPOSITORY                                       TAG               IMAGE ID       CREATED        SIZE
                      docker.io/tripleomaster/centos-binary-keystone   current-tripleo   9e85172eba10   2 days ago     904 MB
                      

I can see the short form of the hash starting with 9e85. I use that to match the full-hash subdirectory under /home/ayoung/.local/share/containers/storage/overlay-images:

                      ls /home/ayoung/.local/share/containers/storage/overlay-images/9e85172eba10a2648ae7235076ada77b095ed3da05484916381410135cc8884c/
                      

                      If I cat that file, I can see all of the layers that make up the image itself.
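
Alternatively (not something I did here, but a podman feature that should give the same information), the layer digests can be printed directly from the image metadata:

podman image inspect --format '{{.RootFS.Layers}}' \
    docker.io/tripleomaster/centos-binary-keystone:current-tripleo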

                      Trying a naive: podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo I get an error that shows just how kolla-centric this image is:

                      $ podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo
                      + sudo -E kolla_set_configs
                      INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
                      ERROR:__main__:Unexpected error:
                      Traceback (most recent call last):
                        File "/usr/local/bin/kolla_set_configs", line 412, in main
                          config = load_config()
                        File "/usr/local/bin/kolla_set_configs", line 294, in load_config
                          config = load_from_file()
                        File "/usr/local/bin/kolla_set_configs", line 282, in load_from_file
                          with open(config_file) as f:
                      IOError: [Errno 2] No such file or directory: '/var/lib/kolla/config_files/config.json'
                      

                      So I read the docs. Trying to fake it with:

                      $ podman run -e KOLLA_CONFIG='{}'   docker.io/tripleomaster/centos-binary-keystone:current-tripleo
                      + sudo -E kolla_set_configs
                      INFO:__main__:Validating config file
                      ERROR:__main__:InvalidConfig: Config is missing required "command" key
                      

When running with TripleO, the config files are generated from Heat Templates. The values for the config.json come from here.
                      This gets me slightly closer:

                      podman run  -e KOLLA_CONFIG_STRATEGY=COPY_ONCE   -e KOLLA_CONFIG='{"command": "/usr/sbin/httpd"}'   docker.io/tripleomaster/centos-binary-keystone:current-tripleo
                      

                      But I still get an error of “no listening sockets available, shutting down” even if I try this as Root. Below is the whole thing I tried to run.

                      $ podman run   -v $PWD/fernet-keys:/var/lib/kolla/config_files/src/etc/keystone/fernet-keys   -e KOLLA_CONFIG_STRATEGY=COPY_ONCE   -e KOLLA_CONFIG='{ "command": "/usr/sbin/httpd", "config_files": [ { "source": "/var/lib/kolla/config_files/src/etc/keystone/fernet-keys", "dest": "/etc/keystone/fernet-keys", "owner":"keystone", "merge": false, "perm": "0600" } ], "permissions": [ { "path": "/var/log/kolla/keystone", "owner": "keystone:keystone", "recurse": true } ] }'  docker.io/tripleomaster/centos-binary-keystone:current-tripleo

Let’s go back to simple things. What is inside the container? We can peek using:

$ podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo ls
                      

                      Basically, we can perform any command that will not last longer than the failed kolla initialization. No Bash prompts, but shorter single line bash commands work. We can see that mysql is uninitialized:

                       podman run  docker.io/tripleomaster/centos-binary-keystone:current-tripleo cat /etc/keystone/keystone.conf | grep "connection ="
                      #connection = 
                      

                      What about those config files that the initialization wants to copy:

                      podman run  docker.io/tripleomaster/centos-binary-keystone:current-tripleo ls /var/lib/kolla/config_files/src/etc/httpd/conf.d
                      ls: cannot access /var/lib/kolla/config_files/src/etc/httpd/conf.d: No such file or directory

                      So all that comes from external to the container, and is mounted at run time.

                      $ podman run  docker.io/tripleomaster/centos-binary-keystone:current-tripleo cat /etc/passwd  | grep keystone
                      keystone:x:42425:42425::/var/lib/keystone:/usr/sbin/nologin

That user owns the config and the log files.

                      $ podman run  docker.io/tripleomaster/centos-binary-keystone:current-tripleo ls -la /var/log/keystone
                      total 8
                      drwxr-x---. 2 keystone keystone 4096 Dec 17 08:28 .
                      drwxr-xr-x. 6 root     root     4096 Dec 17 08:28 ..
                      -rw-rw----. 1 root     keystone    0 Dec 17 08:28 keystone.log
                      $ podman run  docker.io/tripleomaster/centos-binary-keystone:current-tripleo ls -la /etc/keystone
                      total 128
                      drwxr-x---. 2 root     keystone   4096 Dec 17 08:28 .
                      drwxr-xr-x. 2 root     root       4096 Dec 19 16:30 ..
                      -rw-r-----. 1 root     keystone   2303 Nov 12 02:15 default_catalog.templates
                      -rw-r-----. 1 root     keystone 104220 Dec 14 01:09 keystone.conf
                      -rw-r-----. 1 root     keystone   1046 Nov 12 02:15 logging.conf
                      -rw-r-----. 1 root     keystone      3 Dec 14 01:09 policy.json
                      -rw-r-----. 1 keystone keystone    665 Nov 12 02:15 sso_callback_template.html
                      $ podman run  docker.io/tripleomaster/centos-binary-keystone:current-tripleo cat /etc/keystone/policy.json
                      {}
                      

                      Yes, policy.json is empty.

Let’s go back to the config file. I would rather not have to pass in all the config info as an environment variable each time. If I run as root, I can use the podman bind-mount option to relabel it:

                       podman run -e KOLLA_CONFIG_FILE=/config.json  -e KOLLA_CONFIG_STRATEGY=COPY_ONCE   -v $PWD/config.json:/config.json:z   docker.io/tripleomaster/centos-binary-keystone:current-tripleo  

This eventually fails with the error message “no listening sockets available, shutting down”, which seems to be due to the lack of httpd conf.d entries for keystone:

                      # podman run -e KOLLA_CONFIG_FILE=/config.json  -e KOLLA_CONFIG_STRATEGY=COPY_ONCE   -v $PWD/config.json:/config.json:z   docker.io/tripleomaster/centos-binary-keystone:current-tripleo  ls /etc/httpd/conf.d
                      auth_mellon.conf
                      auth_openidc.conf
                      autoindex.conf
                      README
                      ssl.conf
                      userdir.conf
                      welcome.conf
                      

The clue seems to be in the Heat Templates. There are a bunch of files that are expected to be in /var/lib/kolla/config_files/src inside the container. Here’s my version of the WSGI config file:

Listen 5000
Listen 35357

ServerSignature Off
ServerTokens Prod
TraceEnable off

ErrorLog "/var/log/kolla/keystone/apache-error.log"
<IfModule log_config_module>
    CustomLog "/var/log/kolla/keystone/apache-access.log" common
</IfModule>

LogLevel info

<Directory "/usr/bin">
    <FilesMatch "^keystone-wsgi-(public|admin)$">
        AllowOverride None
        Options None
        Require all granted
    </FilesMatch>
</Directory>


<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP} python-path=/usr/lib/python2.7/site-packages
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog "/var/log/kolla/keystone/keystone-apache-public-error.log"
    LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b %D \"%{Referer}i\" \"%{User-Agent}i\"" logformat
    CustomLog "/var/log/kolla/keystone/keystone-apache-public-access.log" logformat
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP} python-path=/usr/lib/python2.7/site-packages
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog "/var/log/kolla/keystone/keystone-apache-admin-error.log"
    LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b %D \"%{Referer}i\" \"%{User-Agent}i\"" logformat
    CustomLog "/var/log/kolla/keystone/keystone-apache-admin-access.log" logformat
</VirtualHost>
                      

                      So with a directory structure like this:

[root@ayoungP40 kolla]# find src/ -print
                      src/
                      src/etc
                      src/etc/keystone
                      src/etc/keystone/fernet-keys
                      src/etc/keystone/fernet-keys/1
                      src/etc/keystone/fernet-keys/0
                      src/etc/httpd
                      src/etc/httpd/conf.d
                      src/etc/httpd/conf.d/wsgi-keystone.conf
                      
                      

                      And a Kolla config.json file like this:

                      {
                         "command": "/usr/sbin/httpd",
                         "config_files": [
                              {
                                    "source": "/var/lib/kolla/config_files/src/etc/keystone/fernet-keys",
                                    "dest": "/etc/keystone/fernet-keys",
                                    "merge": false,
                                    "preserve_properties": true
                              },{
                                    "source": "/var/lib/kolla/config_files/src/etc/httpd/conf.d",
                                    "dest": "/etc/httpd/conf.d",
                                    "merge": false,
                                    "preserve_properties": true
                              },{  
                                    "source": "/var/lib/kolla/config_files/src/*",
                                    "dest": "/",
                                    "merge": true,
                                    "preserve_properties": true
                              }
                          ],
                          "permissions": [
                      	    {
                                  "path": "/var/log/kolla/keystone",
                                  "owner": "keystone:keystone",
                                  "recurse": true
                              }
                          ]
                      }
                      
                      

                      I can run Keystone like this:

                      podman run -e KOLLA_CONFIG_FILE=/config.json  -e KOLLA_CONFIG_STRATEGY=COPY_ONCE   -v $PWD/config.json:/config.json:z   -v $PWD/src:/var/lib/kolla/config_files/src:z  docker.io/tripleomaster/centos-binary-keystone:current-tripleo
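
To reach the API from the host, the container ports would also need to be published; a variant of the same command (my addition, not part of the original run):

podman run -p 5000:5000 -p 35357:35357 \
    -e KOLLA_CONFIG_FILE=/config.json -e KOLLA_CONFIG_STRATEGY=COPY_ONCE \
    -v $PWD/config.json:/config.json:z \
    -v $PWD/src:/var/lib/kolla/config_files/src:z \
    docker.io/tripleomaster/centos-binary-keystone:current-tripleo

# then, from another terminal:
curl http://localhost:5000/v3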
                      

                      keystone-db-init in OpenShift

                      Posted by Adam Young on December 18, 2019 08:48 PM

                      Before I can run Keystone in a container, I need to initialize the database. This is as true for running in Kubernetes as it was using podman. Here’s how I got keystone-db-init to work.

                      The general steps were:

                      • use oc new-app to generate the build-config and build
                      • delete the deployment config generated by new-app
                      • upload a secret containing keystone.conf
                      • deploy a pod that uses the image built above and the secret version of keystone.conf to run keystone-manage db_init
oc delete deploymentconfig.apps.openshift.io/keystone-db-init
                      

To upload the secret:

                      kubectl create secret generic keystone-conf --from-file=../keystone-db-init/keystone.conf
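
A quick sanity check that the secret landed with the expected key:

kubectl describe secret keystone-conf
# should list a single data key, keystone.conf, with its size in bytes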
                      

Here is the YAML definition for the pod:

                      apiVersion: v1
                      kind: Pod
                      metadata:
                        name: keystone-db-init-pod
                        labels:
                          app: myapp
                      spec:
                        containers:
                        - image: image-registry.openshift-image-registry.svc:5000/keystone/keystone-db-init
                          imagePullPolicy: Always
                          name: keystone-db-init
                          volumeMounts:
                          - name: keystone-conf
                            mountPath: "/etc/keystone/"
                        volumes:
                        - name: keystone-conf
                          secret:
                            secretName: keystone-conf
                            items:
                            - key: keystone.conf
                              path: keystone.conf
                              mode: 511       
                          command: ['sh', '-c', 'cat /etc/keystone/keystone.conf']
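
Assuming the definition above is saved as keystone-db-init-pod.yaml (the filename is my choice), creating the pod and watching its output looks like:

oc create -f keystone-db-init-pod.yaml
oc get pod keystone-db-init-pod
oc logs keystone-db-init-pod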
                      

                      While this is running as the keystone unix account, I am not certain how that happened. I did use the patch command I talked about earlier on the deployment config, but you can see I am not using that in this pod. That is something I need to straighten out.

                      To test that the database was initialized:

                      $ oc get pods -l app=mariadb-keystone
                      NAME                       READY   STATUS    RESTARTS   AGE
                      mariadb-keystone-1-rxgvs   1/1     Running   0          9d
                      $ oc rsh mariadb-keystone-1-rxgvs
                      sh-4.2$ mysql -h mariadb-keystone -u keystone -pkeystone keystone
                      Welcome to the MariaDB monitor.  Commands end with ; or \g.
                      Your MariaDB connection id is 908
                      Server version: 10.2.22-MariaDB MariaDB Server
                      
                      Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
                      
                      Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
                      
                      MariaDB [keystone]> show tables;
                      +------------------------------------+
                      | Tables_in_keystone                 |
                      +------------------------------------+
                      | access_rule                        |
                      | access_token                       |
                      ....
                      +------------------------------------+
                      46 rows in set (0.00 sec)
                      

I’ve fooled myself in the past into thinking that things have worked when they have not. To make sure I am not doing that now, I dropped the keystone database and recreated it from inside the mysql monitor program. I then re-ran the pod, and was able to see all of the tables.

                      When to B Sharp

                      Posted by Adam Young on December 18, 2019 03:55 PM

                      A Friend asked me a question about the following piece of music:

[Figure: the musical passage in question]

This song has 3 flats: E, B, A. Why do they write a C flat and an F flat, instead of writing B with a natural sign, or E with a natural sign?

The answer has to do with the relationship between the major and minor scales. Let’s start there.

In the Middle Ages, when musicians were standardizing musical notation, music was often performed in a minor key. Thus, if you play all the white keys on a piano starting from the lowest one, a really low A, you play up a minor scale.

[A natural minor scale, ABC notation: X:1 K:C L:1/4 ABcdefga]


                      This is called the Natural minor scale, to distinguish it from many other minor sounding scales. This is the same set of notes as the C Major scale.

[C major scale, ABC notation: X:1 K:C L:1/4 CDEFGABc]

We call these two scales the “relative major” and “relative minor” of each other because they share the same set of notes.

Let’s now look at the keys we are not playing. The black keys on the piano appear in a repeating pattern: a group of two followed by a group of three. None of these notes appear in the A natural minor or C major scale.

[Figure: piano keyboard diagram showing the groups of two and three black keys]

We use these groupings of black keys to keep track of our location on the keyboard. If we look at the key all the way to the left of the diagram, we know it is a C because it is followed by a group of two black keys. This diagram has four Cs on it: three of them right before the pattern, and one all the way at the right.

                      We need a way to name these black keys. The white keys have letter names that are in sequence, A through G. The black keys are usually named in relationship to the nearest white keys. If we go up to a black key from a white key, we use the same name as that white key, but add the term Sharp. If we go down from a White key, we append the term flat. The generic term for sharps and flats is “accidentals.”

You might be tempted to call the first black key all the way to the left C sharp (C#) and the last black key on the right B flat (B♭) and people would know what you are talking about. However, you would be equally correct to call these two notes D♭ and A# respectively. There is no one canonical name for these two notes…or any note on the keyboard. More on that in a bit.

Because we skip these black keys as we play up the white keys, the difference in frequencies between two successive notes changes. When we go from C to D, we skip the leftmost key of the group of two black keys. By skipping a key, we say we are progressing in a “Whole step.” When we go from E to F, there is no black key to skip. We call this a “Half step.”

                      I’ll abbreviate W for Whole and H for Half.

                      The pattern for the Major scale is W W H W W W H.

                      The pattern for the Natural Minor Scale is W H W W H W W.

                      We usually talk about the pattern of the notes from low to high, but the pattern holds equally from high to low. Thus we can take these patterns and start on any note of the piano and build a scale. If we start too far to the right, where we would run out of keyboard, we can build the scale downwards. Thus, we say there are 12 distinct major scales and 12 distinct natural minor scales.

                      We can start on the A and make a major scale. From A we Go to B, just like we did in the natural minor scale. But instead of the Half step to C, we go a whole step to the black key after it. We call this C#….more on that in a bit. The pattern continues with D E F# G# and finally returning to A.

[A major scale, ABC notation: X:1 K:C L:1/4 AB^cde^f^ga]

                      This has the same major quality as the C major scale I posted above, but starts lower. This pattern can be moved all around the keyboard.

                      The same thing can be done using the minor pattern starting on C. The end result of that is:

[C natural minor scale, ABC notation: X:1 K:C L:1/4 CD_EFG_A_Bc]

                      Instead of adding 3 sharps, we have added three flats.

                      When we compare A major to A minor, or C major to C minor, we call these parallel scales.

                      • We call A minor the parallel minor to A major
                      • We call C major the parallel major to C minor.
                      • E flat Major is the Relative Major to C minor
                      • F sharp minor is the relative minor to A major

                      Music often switches between the major and parallel minor for dramatic effect.

My friend tells me that the music has 3 flats in the key signature. This is the key of E flat major. In order to play the parallel minor, we would have to add 3 more flats. But there are only 5 black keys on the piano. How can we get 6 flats? Let’s first talk about the order in which we add sharps and flats to a scale.


                      We use the term “interval” to define the distance between two notes, based on the major scale. The distance from C to E is a third, as E is the third note in the C scale. The distance from C to G is a fifth. The fifth is a really important interval.

                      The image below demonstrates a concept called the cycle of fifths.

[Figure: circle of fifths diagram]

                      If you start at C, you can play up an interval of a fifth to G. Repeat this pattern to D, and so on. Eventually you will return to C. You get all 12 distinct notes of the chromatic scale this way.

One thing that this cycle of fifths diagram shows is the relationship between a scale and its accidentals. As we take one step around the circle clockwise, we add a sharp. As we take one step counter-clockwise, we add a flat (or, equivalently, remove a sharp). The key of C has no sharps. The key of G has one sharp (F#), and the key of B has 5 sharps. The key of C has no flats, the key of F has one flat (B♭), and the key of A♭ has four flats.

Look at the pattern of the sharps and flats we add. First we add F#, then C#, then G#. Each of these notes is a fifth away from the one prior to it. Adding flats, we add B♭, then E♭, then A♭. These are also a fifth away from each other (just in the other direction). When you think about it, it makes sense: the scales are a fifth apart, so the changes to the scales are also a fifth apart. Math is fun!

What about those two scales at the bottom that have two names: D♭/C# and G♭/F#? Why do they have both notations? This gets back to what I was talking about when I said there is no one canonical name for a black key, or any note on the keyboard. The sequence of black keys can be thought of as C# D# F# G# A#. They can also be D♭ E♭ G♭ A♭ B♭. Once we get above 5 accidentals, one of the white keys will also get labeled with an accidental. The C note can be thought of as B#. The B note can be named C♭. F♭ is the same key as E, and F is the same key as E#.

                      So that key of B with 5 sharps could be rewritten as the key of C♭ with 7 flats. C♭D♭E♭F♭G♭A♭B♭C♭. That would balance off the D♭/C# scale posted above.

While the circle of fifths above is labeled with the major scales, we can use it to determine the notes of the parallel natural minor scale. We know that A natural minor has the same set of notes as C major. This makes sense: we said we add a sharp each time we go clockwise, and add a flat each time we go counter-clockwise. Adding a flat is the same as subtracting a sharp. To go from A major to A minor, we want to subtract 3 sharps. Thus, we look three spaces counter-clockwise around the circle of fifths. We would notate this by marking the “borrowed” notes as natural using the sign: F♮, C♮ and G♮.

                      If we are writing in the key of C major, and we want to write in the key of C minor, we would add 3 flats.

At this point, we have enough theory to understand why the notation at the top of this article uses a C♭ to indicate a note that could just as clearly be marked as a B. It is because we are in the key of E♭ and we are borrowing the C flat from the parallel minor scale. Compare the two passages below:

[Passage 1, ABC notation: X:1 K:Eb L:1/8 B=B_B=B_B=B_B=B]
[Passage 2, ABC notation: X:1 K:Eb L:1/8 B_cBcBcBc]

                      The second one is much simpler.

Musical notation is an aid to communication of musical intent. The same thing can be written multiple ways. The writer should choose the notation that makes it easiest to communicate their intent.

Let me close by adding that musical notation editors like Musescore will transpose parts and keep them in key. If this part had originally been written in C, with the minor notes using added flats, it would translate to E♭ by adding the C♭. Notation like we see in this example is very common with the musical tools used today.

                      Packaging and the Security Proposition

                      Posted by William Brown on December 18, 2019 02:00 PM

                      Packaging and the Security Proposition

                      As a follow up to my post on distribution packaging, it was commented by Fraser Tweedale (@hackuador) that traditionally the “security” aspects of distribution packaging was a compelling reason to use distribution packages over “upstreams”. I want to dig into this further.

                      Why does C need “securing”

                      C as a language is unsafe in every meaning of the word. The best C programmers on the planet are incapable of writing a secure program. This is because to code in C you have to express a concurrent problem, into a language that is linearised, which is compiled relying on undefined behaviour, to be executed on an asynchronous concurrent out of order CPU. What could possibly go wrong?!

                      There is a lot you need to hold in mind to make C work. I can tell you now that I spend a majority of my development time thinking about the code to change rather than writing C because of this!

                      This has led to C based applications having just about every security issue known to man.

                      How is C “secured”

                      So as C is security swiss cheese, this means we have developed processes around the language to soften this issue - for example advice like patch and update continually as new changes are continually released to resolve issues.

Distribution packages have always been the “source” of updates for these libraries and applications. These packages are maintained by humans who need to keep them updated. This means that when a C project releases a fix, these maintainers apply the patch to the various versions, and then release the updates. Because of C’s dynamic linking, these library updates mean that when the machine is next rebooted (yes rebooted, not application restarted) the fixes apply to all consumers who have linked to that library - change one, fix everything. Great!

But there are some (glaring) weaknesses to this model. C projects historically have little to no application testing, so many of these patches and their effects can’t be reproduced, which in turn means that consuming applications aren’t re-tested adequately either. A change to a shared library can also impact a consuming application in a way that was unforeseen when the library changed.

                      The Dirty Secret

                      The dirty secret of many of these things is that “thoughts and prayers” is often the testing strategy of choice when patches are applied. It’s only because humans carefully think about and write tiny amounts of C that we have any reliability in our applications. And we already established that it’s nearly impossible for humans to write correct C …

                      Why Are We Doing This?

                      Because C linking and interfaces are so fragile, and due to the huge scope in which C can go wrong due to being a memory unsafe language, distributions and consumers have learnt to fear version changes. So instead we patch ancient C code stacks, barely test them, and hope that our castles of sand don’t fall over, all so we can keep “the same version” of a program to avoid changing it as much as possible. Ironically this makes those stacks even worse because we’ve developed infinite numbers of bespoke barely tested packages that people rely on daily.

                      To add more insult to this, most of this process is manual - humans monitor mailing lists, and have to know what code needs what patch, and when in what release streams. It’s a monumental amount of human time and labour involved to keep the sand castles standing. This manual involvement is what leads to information overload, and maintainers potentially missing security updates or releases that causes many distribution packages to be outdated, missing patches, or vulnerable more often than not. In other cases packages continue to be shipped that are unmaintained or have no upstream, so any issues that may exist are unknown or unresolved.

                      Distribution Security

This means all of platform and distribution security comes down to one factor.

                      A lot of manual human labour.

It is only because distributions have so many volunteers and paid staff that this entire system continues to function and give the illusion of security and reliability. When it fails, it fails silently.

Heartbleed really dragged the poor state of C security into the open, and it still has not been addressed.

                      When people say “how can we secure docker/flatpak/Rust” like we do with distributions, I say: “Do we really secure distributions at all?”. We only have a veneer of best effort masquerading as a secure supply chain.

                      A Different Model …

                      So let’s look briefly at Rust and how you package it today (against distribution maintainer advice).

Because it’s statically linked, each application must be rebuilt if a library changes. Because the code comes from a central upstream, there are automated tools to find security issues (like cargo audit). Updates are pulled in from the library as a whole, working, tested unit, and then built into our application, which then receives further testing and verification as a whole, singular, functional unit.

These dependencies can then be vendored into a tar (allowing offline builds and some aspects of reproducibility). This vendor.tar.gz is placed into the source rpm along with the application source, and then built.
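
Roughly, that workflow looks like this (a sketch; cargo-audit is a separate install via cargo install cargo-audit):

cargo audit                       # check Cargo.lock against the RustSec advisory database
cargo vendor                      # copy every dependency into ./vendor for offline builds
tar -czf vendor.tar.gz vendor/    # the tarball that accompanies the application source in the source rpm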

There is a much stronger pipeline of assurances here! And to further aid Rust’s cause, because it is a memory safe language, it eliminates most of the security issues that C is afflicted by, causing security updates to be far fewer, and to often affect higher level or esoteric situations. If you don’t believe me, look at the low frequency and low severity of the commits to the Rust advisory-db.

People have worried that because Rust is statically linked we’ll have to rebuild and update applications continually to keep them secure - I’d say that because it’s Rust we’ll have stronger guarantees at build time that security issues are less likely to exist, and we won’t have to ship updates nearly as often as a C stack.

Another point to make is that Rust libraries don’t release patches - because of Rust’s stronger guarantees at compile time and its integrated testing, people are less afraid of version updates. We are very unlikely to see Rust libraries releasing patches rather than just shipping “updates” and expecting you to update. Because these are statically linked, we don’t have to worry about the versions of other libraries on the platform; we only need to assure the application is currently working as intended. Because of the strong typing, the interfaces of those libraries have stronger guarantees at compile time, meaning the issues around shared object versioning and symbol/version mismatching simply don’t exist - one of the key reasons people became version-change averse in the first place.

                      So Why Not Package All The Things?

                      Many distribution packagers have been demanding a C-like model for Rust and others (remember, square peg, round hole). This means every single crate (library) is packaged, and then added to a set of buildrequires for the application. When a crate updates, it triggers the application to rebuild. When a security update for a library comes out, it rebuilds etc.

                      This should sound familiar … because it is. It’s reinventing Cargo in a clean-room.

                      RPM provides a way to manage dependencies. Cargo provides a way to manage dependencies.

                      RPM provides a way to offline build sources. Cargo provides a way to offline build sources.

                      RPM provides a way to patch sources. Cargo provides a way to update them inplace - and patch if needed.

                      RPM provides a way to … okay you get the point.

                      There is also a list of what we won’t get from distribution packages - remember distribution packages are the C language packaging system

                      We won’t get the same level of attention to detail, innovation and support as the upstream language tooling has. Simply put, users of the language just won’t use distribution packages (or toolchains, libraries …) in their workflows.

What distribution packages can’t offer is integration with tools like cargo-audit for scanning for security issues - that still needs Cargo, not RPM, meaning the RPM would need to emulate exactly what Cargo does.

                      Using distribution packages means you have an untested pipeline that may add more risks now. Developers won’t use distribution packages - they’ll use cargo. Remember applications work best as they are tested and developed - outside of that environment they are an unknown.

Finally, the distribution maintainers’ security proposition is to secure our libraries - for distributions only. That’s acting in self-interest. Cargo is offering a way to secure upstream so that everyone benefits. That means less effort and less manual labour all around. And secure libraries are not the full picture. Secure applications are what matter.

                      The large concerning factor is the sheer amount of human effort. We would spend hundreds if not thousands of hours to reinvent a functional tool in a disengaged manner, just so that we can do things as they have always been done in C - for the benefit of distributions individually rather than languages upstream.

                      What is the Point

                      Again - as a platform our role is to provide applications that people can trust. The way we provide these applications is never going to be one size fits all. Our objective isn’t to secure “this library” or “that library”, it’s to secure applications as a functional whole. That means that companies shipping those applications, should hire maintainers to work on those applications to secure their stacks.

                      Today I honestly think Rust has a better security and updating story than C packages ever has, powered by automation and upstream integration. Let’s lean on that, contribute to it, and focus on shipping applications instead of reinventing tools. We need to accept our current model is focused on C, that developers have moved around distribution packaging, and that we need to change our approach to eliminate the large human risk factor that currently exists.

                      We can’t keep looking to the models of the past, we need to start to invest in new methods for the future.

                      Today, distributions should focus on supporting and distributing _applications_ and work with native language supply chains to enable this.

Which is why I’ll keep using cargo’s tooling and auditing, and use distribution packages as a delivery mechanism for those applications.

                      What Could it Look Like?

                      We have a platform that updates as a whole (Fedora Atomic comes to mind …) with known snapshots that are tested and well known. This platform has methods to run applications, and those applications are isolated from each other, have their own libraries, and security audits.

And because there are now far fewer moving parts, quality is easier to assert and understand, and security updates are far easier, faster, and less risky.

                      It certainly sounds a lot like what macOS and iOS have been doing with a read-only base, and self-contained applications within that system.

                      Running as keystone

                      Posted by Adam Young on December 17, 2019 02:38 PM

                      In order to run the various Keystone containers as the Keystone user, we can use the modification specified here.

                      First, add a keystone service account and a security constraint to let it run as the keystone user. This is probably more power than we really want to give it, but it will force things to work. Update the deployment config to use this new service account.

                      $  oc create serviceaccount keystone
                      serviceaccount/keystone created
                      $ oc adm policy add-scc-to-user anyuid -z keystone --as system:admin
                      securitycontextconstraints.security.openshift.io/anyuid added to: ["system:serviceaccount:keystondev:keystone"]
$ oc patch deploymentconfig.apps.openshift.io/keystone-db-init --patch '{"spec":{"template":{"spec":{"serviceAccountName": "keystone"}}}}'
deploymentconfig.apps.openshift.io/keystone-db-init patched
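
To confirm the patch took effect, and which SCC the new pod was actually admitted under, something like this works (the pod name comes from oc get pods; on OpenShift the admitted SCC shows up as a pod annotation):

oc get pod keystone-db-init-2-nwg8g -o yaml | grep -E 'serviceAccountName|openshift.io/scc'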
                      

                      Now looking at the log:

                      $ oc logs pod/keystone-db-init-2-nwg8g
                      Databasedb-sync+ echo -n Database
                      + echo -n db-sync
                      + keystone-manage db_sync
                       [COMPLETE]
                      bootstrap + echo ' [COMPLETE]'
                      + echo -n 'bootstrap '
                      + keystone-manage bootstrap --bootstrap-password=FreeIPA4All
                      /etc/keystone/fernet-keys/ does not exist
                       [COMPLETE]
                      + echo ' [COMPLETE]'
                      

                      We see that the application ran without throwing an error.

                      Packaging, Vendoring, and How It’s Changing

                      Posted by William Brown on December 17, 2019 02:00 PM

                      Packaging, Vendoring, and How It’s Changing

                      In today’s thoughts, I was considering packaging for platforms like opensuse or other distributions and how that interacts with language based packaging tools. This is a complex and … difficult topic, so I’ll start with my summary:

                      Today, distributions should focus on supporting and distributing _applications_ and work with native language supply chains to enable this.

                      Distribution Packaging

Let’s start by clarifying what distribution packaging is. This is your Linux platform’s method of distributing its programs and libraries. For our discussion we really only care about Linux, so say SUSE or Fedora here. How macOS or FreeBSD deal with this is quite different.

Now these distribution packages are built to support certain workflows and end goals. Many open source C projects release their source code in varying states, perhaps also patches to improve or fix issues. This code is then put into packages, dependencies between them are established due to dynamic linking, and the packages are signed for verification purposes and then shipped.

                      This process is really optimised for C applications. C has been the “system language” for many decades now, and so we can really see these features designed to promote - and fill in gaps - for these applications.

For example, C applications are dynamically linked. Because of this, package maintainers are encouraged to “split” applications into smaller units that can have shared elements. An example that I know is openldap, which may be a single source tree, but is often packaged into multiple parts such as libldap.so, lmdb, the openldap client applications, its server binary, and probably others. The package maintainer is used to taking their scalpel and carefully slicing sources into elegant packages that minimise how many things are installed to what is “just needed”.

                      We also see other behaviours where C shared objects have “versions”, which means you can install multiple versions of them at once and programs declare in their headers which library versions they want to consume. This means a distribution package can have many versions of the same thing installed!
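For instance, on a typical Linux system several soname versions of the same library can be installed side by side (the paths and versions here are purely illustrative and vary by distribution):

ls -1 /usr/lib64/libssl.so.*
# libssl.so.10
# libssl.so.1.1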

With this in mind, the linking is simplistic and naive. If a shared object symbol doesn’t exist, or you don’t give it the “right arguments” via a weak compile-time contract, it’s likely bad things (tm) will happen. So for this, distribution packaging provides the stronger assertion that “this program requires that library version”.

                      As well, in the past the internet was a more … wild place, where TLS wasn’t really widely used. This meant that to gain strong assertions about the source of a package and that it had not been tampered, tools like GPG were used.

                      What about Ruby or Python?

Ruby and Python are very different languages compared to C though. They don’t have information in their programs about what versions of software they require, and how they mesh together. Both languages are interpreted, and simply “import library” by name, searching a filesystem path for a matching library regardless of its version. Python then just loads that library as source straight into the running VM.

                      It’s already apparent how we’ll run into issues here. What if we have a library “foo” that has a different function interface between version 1 and version 2? Python applications only request access to “foo”, not the version, so what happens if the wrong version is found? What if it’s not found?
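A tiny illustration of that name-only lookup - the interpreter simply takes the first match on its search path, and no version is requested anywhere:

python3 -c 'import json, sys; print(json.__file__); print(sys.path[:3])'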

                      Some features here are pretty useful from the “distribution package” though. Allowing these dynamic tools to have their dependencies requested from the “package”, and having the package integrity checked for them.

But over time, conflicts started and issues arose. A real turning point was Ruby in Debian/Ubuntu, where Debian package maintainers (who are used to C) brought out the scalpel and attempted to slice Ruby down to “parts” that could be reused, from a C mindset. This led to combinations of packages that didn’t make sense (rubygems minus TLS, but rubygems requires https), which really disrupted the communities.

Another issue was that, as these languages grew in popularity, different projects required different versions of libraries - which, as mentioned before, isn’t possible without library search path manipulation, which is frankly user hostile.

                      These issues (and more) eventually caused these communities as a whole to stop recommending distribution packages.

                      So put this history in context. We have Ruby (1995) and Python (1990), which both decided to avoid distribution packages with their own tools aka rubygems (2004) and pip (2011), as well as tools to manage multiple parallel environments (rvm, virtualenv) that were per-application.

                      New kids on the block

                      Since then I would say three languages have risen to importance and learnt from the experiences of Ruby - This is Javascript (npm/node), Go and Rust.

Rust went further than Ruby and Python and embedded distribution of libraries into its build tools from an early date with Cargo. As Rust is statically linked (libraries are built into the final binary, rather than being dynamically loaded), this moves all dependency management to build time - which prevents runtime library conflicts. And because Cargo is involved and controls all the paths, it can do things such as having multiple versions available in a single build for different components, coordinating all these elements.

Now to hop back to npm/js. This ecosystem introduced a new concept - micro-dependencies. This happened because javascript doesn’t have dead code elimination. So if you are given a large utility library and you call one function out of 100, you still have to ship the 99 unused ones. This means they needed a way to manage and distribute hundreds, if not thousands, of tiny libraries, each doing “one thing”, so that you pulled in “exactly” the minimum required (that’s not how it turned out … but it was the intent).

Rust has also inherited a similar culture - not to the same extreme as npm, because Rust DOES have dead code elimination, but still enough that my concread library with 3 direct dependencies pulls in 32 dependencies, and kanidm, from its 30 direct dependencies, pulls 365 into its graph.
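
cargo itself can show that graph, which is a quick way to see the scale involved (counts will differ between crates and versions):

cargo tree | wc -l        # every node in the resolved dependency graph
cargo tree --duplicates   # crates that appear at more than one version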

                      But in a way this also doesn’t matter - Rust enforces strong typing at compile time, so changes in libraries are detected before a release (not after like in C, or dynamic languages), and those versions at build are used in production due to the static linking.

This has led to a great challenge in distribution packaging for Rust - there are so many “libraries” that packaging them all would be a monumental amount of human effort, time, and work.

                      But once again, we see the distribution maintainers, scalpel in hand, a shine in their eyes looking and thinking “excellent, time to package 365 libraries …”. In the name of a “supply chain” and adding “security”.

                      We have to ask though, is there really value of spending all this time to package 365 libraries when Rust functions so differently?

                      What are you getting at here?

                      To put it clearly - distribution packaging isn’t a “higher” form of distributing software. Distribution packages are not the one-true solution to distribute software. It doesn’t magically enable “security”. Distribution Packaging is the C language source and binary distribution mechanism - and for that it works great!

                      Now that we can frame it like this we can see why there are so many challenges when we attempt to package Rust, Python or friends in rpms.

                      Rust isn’t C. We can’t think about Rust like C. We can’t secure Rust like C.

                      Python isn’t C. We can’t think about Python like C. We can’t secure Python like C.

                      These languages all have their own quirks, behaviours, flaws, benefits, and goals. They need to be distributed in unique ways appropriate to those languages.

                      An example of the mismatch

To help drive this home, I want to bring up FreeIPA. FreeIPA has a lot of challenges in packaging due to its huge number of C, Python and Java dependencies. Recently on Twitter it was announced that “FreeIPA has been packaged for Debian”, as the last barrier (being dogtag/java) was overcome by packaging the hundreds of required dependencies.

                      The inevitable outcome of debian now packaging FreeIPA will be:

                      • FreeIPA will break in some future event as one of the python or java libraries was changed in a way that was not expected by the developers or package maintainers.
• Other applications may be “held back” from updating out of fear of breaking FreeIPA, which stifles innovation in the surrounding Java/Python ecosystems.

                      It won’t be the fault of FreeIPA. It won’t be the fault of the debian maintainers. It will be that we are shoving square applications through round C shaped holes and hoping it works.

                      So what does matter?

                      It doesn’t matter if it’s Kanidm, FreeIPA, or 389-ds. End users want to consume applications. How that application is developed, built and distributed is a secondary concern, and many people will go their whole lives never knowing how this process works.

                      We need to stop focusing on packaging libraries and start to focus on how we distribute applications.

                      This is why projects like docker and flatpak have surprised traditional packaging advocates. These tools are about how we ship applications, and their build and supply chains are separated from these.

                      This is why I have really started to advocate and say:

                      Today, distributions should focus on supporting and distributing _applications_ and work with native language supply chains to enable this.

Only once we accept this shift can we start to find value in distributions again as sources of trusted applications, and see the distribution as an application platform rather than a collection of tiny libraries.

                      The risk of not doing this is alienating communities (again) from being involved in our platforms.

                      Follow Up

                      There have been some major comments since:

First, there is now a C package manager named conan. I have no experience with this tool, so at a distance I can only assume it works well for what it does. However, it was noted it has not gained much popularity, likely because distro packages are the current C language distribution channel.

The second was about the security component of distribution packaging - that topic is long enough that I’ve written another post about it, to keep this one focused.

                      Finally, The Fedora Modularity effort is trying to deal with some of these issues - that modules, aka applications have different cadences and requirements, and those modules can move more freely from the base OS.

Some of the challenges have been explored by LWN and are worth reading about. But I think the underlying issue is that again we are approaching things in a way that may not align with reality - people are looking at modules as libraries, not applications, which is causing things to go sideways. And when those modules are installed, they aren’t isolated from each other, meaning we are back to square one, with a system designed only for C. People are starting to see that, but the key point is continually missed - that modularity should be about applications and their isolation, not about multiple library versions.

                      oc new-app

                      Posted by Adam Young on December 16, 2019 04:49 PM

The tools you use should help you grow from newbie to power user. OpenShift’s command line is one such tool. When getting started with Kubernetes development, the new-app option to the oc command line can help move you along that spectrum.

To isolate what the new-app command does, I am going to start a new project that has nothing in it. Continuing with the OpenStack Keystone WSGI app as my target application, I am going to call the new project keystondev:

                      $ oc new-project keystondev
                      Now using project "keystondev" on server "https://api.demo.redhatfsi.com:6443".
                      
                      You can add applications to this project with the 'new-app' command. For example, try:
                      
                          oc new-app django-psql-example
                      
                      to build a new example application in Python. Or use kubectl to deploy a simple Kubernetes application:
                      
                          kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
                      
                      [ayoung@ayoungP40 ~]$ oc get all
                      No resources found.
                      

                      Now I run new-app:

                      $ oc new-app https://github.com/admiyo/keystone-db-init.git#chmodconf
                      --> Found Docker image 5e35e35 (4 weeks old) from docker.io for "docker.io/centos:7"
                      
                          * An image stream tag will be created as "centos:7" that will track the source image
                          * A Docker build using source code from https://github.com/admiyo/keystone-db-init.git#chmodconf will be created
                            * The resulting image will be pushed to image stream tag "keystone-db-init:latest"
                            * Every time "centos:7" changes a new build will be triggered
                          * This image will be deployed in deployment config "keystone-db-init"
                          * The image does not expose any ports - if you want to load balance or send traffic to this component
                            you will need to create a service with 'oc expose dc/keystone-db-init --port=[port]' later
                          * WARNING: Image "docker.io/centos:7" runs as the 'root' user which may not be permitted by your cluster administrator
                      
                      --> Creating resources ...
                          imagestream.image.openshift.io "centos" created
                          imagestream.image.openshift.io "keystone-db-init" created
                          buildconfig.build.openshift.io "keystone-db-init" created
                          deploymentconfig.apps.openshift.io "keystone-db-init" created
                      --> Success
                          Build scheduled, use 'oc logs -f bc/keystone-db-init' to track its progress.
                          Run 'oc status' to view your app.
                      

                      Taking the advice to run oc status:

                      $ oc status
                      In project keystondev on server https://api.demo.redhatfsi.com:6443
                      
                      dc/keystone-db-init deploys istag/keystone-db-init:latest <-
                        bc/keystone-db-init docker builds https://github.com/admiyo/keystone-db-init.git#chmodconf on istag/centos:7 
                          build #1 running for about a minute - ee21636: removed all but db-sync call (Adam Young <ayoung>)
                        deployment #1 waiting on image or update
                      
                      
                      2 infos identified, use 'oc status --suggest' to see details.
                      

                      But what has that created:

                      $ oc get all
                      NAME                           READY   STATUS    RESTARTS   AGE
                      pod/keystone-db-init-1-build   1/1     Running   0          2m55s
                      
                      NAME                                                  REVISION   DESIRED   CURRENT   TRIGGERED BY
                      deploymentconfig.apps.openshift.io/keystone-db-init   0          1         0         config,image(keystone-db-init:latest)
                      
                      NAME                                              TYPE     FROM            LATEST
                      buildconfig.build.openshift.io/keystone-db-init   Docker   Git@chmodconf   1
                      
                      NAME                                          TYPE     FROM          STATUS    STARTED         DURATION
                      build.build.openshift.io/keystone-db-init-1   Docker   Git@ee21636   Running   2 minutes ago   
                      
                      NAME                                              IMAGE REPOSITORY                                                                             TAGS   UPDATED
                      imagestream.image.openshift.io/centos             default-route-openshift-image-registry.apps.demo.redhatfsi.com/keystondev/centos             7      2 minutes ago
                      imagestream.image.openshift.io/keystone-db-init   default-route-openshift-image-registry.apps.demo.redhatfsi.com/keystondev/keystone-db-init  
                      

                      From top to bottom:

                      • a pod used to perform the build. This will be relatively short lived. This pod performs git clone and buildah bud and will store the image produced in the local repository.
                      • A deploymentconfig. This is an OpenShift pre-cursor to the Kubernetes deployment. See here for a distinction between the two objects.
                      • A buildconfig. This will be used to produce the build above as well as additional builds if triggered
                      • A build. This is the process performed in the build pod above. This is a common pattern in OpenShift: a pod is used to manage some other resource. The build will live beyond the life of the build pod.
                      • An imagestream for our code. This is the set of images produced by the build. The deployment will always attempt to run the latest image in the image stream when triggered.
                      • An imagestream for the Centos7 repo, which is the basis for the keystone-db-init image above.
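Any of these objects can be inspected individually with describe, for example:

oc describe buildconfig keystone-db-init
oc describe imagestream keystone-db-init
oc describe deploymentconfig keystone-db-init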

                      If I wait a little while and run oc get all again, I see a few more pods.

pod/keystone-db-init-1-deploy   0/1   Completed          0   6m42s
pod/keystone-db-init-1-fq8qv    0/1   CrashLoopBackOff   5   6m32

                      When you deploy an application in openshift, you run a command like

                      oc create -f <file>

                      When the new-app command runs, it needs a place to perform this action as well. The pod that ends with -deploy is the process in which this happens.

The final pod is the product of running the code we just built. It is in CrashLoopBackOff state, which means something went wrong. This is not a surprise, considering there is no database associated with this code, but let’s see what the error log shows:

                      $ oc logs pod/keystone-db-init-1-fq8qv
                      Databasedb-sync+ echo -n Database
                      + echo -n db-sync
                      + keystone-manage db_sync
                      ...
                      IOError: [Errno 13] Permission denied: '/var/log/keystone/keystone.log'
                       [COMPLETE]
                      bootstrap + echo ' [COMPLETE]'
                      + echo -n 'bootstrap '
                      + keystone-manage bootstrap --bootstrap-password=FreeIPA4All
                      Traceback (most recent call last):
                        File "/usr/bin/keystone-manage", line 10, in <module>
                          sys.exit(main())
                      ...
                      IOError: [Errno 13] Permission denied: '/var/log/keystone/keystone.log'
                       [COMPLETE]
                      + echo ' [COMPLETE]'
                      

                      Some output elided. But it should be clear that my container has permission issues. This parallels what I saw running locally with podman: I need to run the pod as a specific user. More on that in a bit.

                      Episode 174 - GitHub turns security up to 11; A discussion with Rob Schultheis

                      Posted by Open Source Security Podcast on December 16, 2019 12:01 AM

                      Josh and Kurt talk to Rob Schultheis from GitHub about some of the amazing projects GitHub is working on. We discuss GitHub security advisories, getting a CVE from GitHub, and what the new GitHub Security Lab is doing. It's a great conversation about how GitHub is working to make security better for all of us.

                      <iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/12409715/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

                      Show Notes


                        Fixing opensuse virtual machines with resume

                        Posted by William Brown on December 14, 2019 02:00 PM

                        Fixing opensuse virtual machines with resume

Today I hit an unexpected issue - after changing a virtual machine’s root disk to SCSI, I was unable to boot the machine.

                        The host is opensuse leap 15.1, and the vm is the same. What’s happening!

                        The first issue appears to be that opensuse 15.1 doesn’t support scsi disks from libvirt. I’m honestly not sure what’s wrong here.

The second is that by default openSUSE Leap configures suspend and resume to disk - but it uses the PCI path instead of a swap volume UUID. So when you change the bus type, the path changes, making the volume inaccessible. This causes boot to fail.

To work around it, you can remove “resume=/disk/path” from the kernel command line. Then, to fix it permanently, you need:

                        transactional-update shell
                        vim /etc/default/grub
                        # Edit this line to remove "resume"
                        GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0,115200 resume=/dev/disk/by-path/pci-0000:00:07.0-part3 splash=silent quiet showopts"
                        
                        vim /etc/default/grub_installdevice
                        # Edit the path to the correct swap location as by ls -al /dev/disk/by-path
                        /dev/disk/by-path/pci-0000:00:07.0-part3
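
Depending on the setup, the grub configuration likely needs to be regenerated for the edit to take effect; on openSUSE that is typically something like the following (my assumption, not part of the original fix), still inside the transactional-update shell, followed by a reboot:

grub2-mkconfig -o /boot/grub2/grub.cfg
exit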
                        

                        I have reported these issues, and I hope they are resolved.

                        Reading keystone.conf in a container

                        Posted by Adam Young on December 12, 2019 12:09 AM

Step 3 of the 12 Factor app is to store config in the environment. For Keystone, the set of configuration options is controlled by the keystone.conf file. In an earlier attempt at containerizing the scripts used to configure Keystone, I had passed an environment variable into the script that would then be written to the configuration file. I realize now that I want the whole keystone.conf external to the application. This allows me to set any of the configuration options without changing the code in the container. More importantly, it allows me to make the configuration information immutable inside the container, so that the applications cannot be hacked to change their own configuration options.

                        I was running the pod and mounting the local copy I had of the keystone.conf file using this command line:

                        podman run --mount type=bind,source=/home/ayoung/devel/container-keystone/keystone-db-init/keystone.conf,destination=/etc/keystone/keystone.conf:Z --add-host keystone-mariadb:10.89.0.47   --network maria-bridge  -it localhost/keystone-db-init 
                        

                        It was returning with no output. To diagnose, I added on /bin/bash to the end of the command so I could poke around inside the running container before it exited.

podman run --mount type=bind,source=/home/ayoung/devel/container-keystone/keystone-db-init/keystone.conf,destination=/etc/keystone/keystone.conf:Z --add-host keystone-mariadb:10.89.0.47 --network maria-bridge -it localhost/keystone-db-init /bin/bash
                        

Once inside, I was able to look at the keystone log file. A stack trace made me realize that I was not able to actually read the file /etc/keystone/keystone.conf. Using ls, it would show up like this:

                        -?????????? ? ?        ?             ?            ? keystone.conf:
                        

It took a lot of trial and error to rectify it, including:

• adding a parallel entry to my host’s /etc/passwd and /etc/group files for the keystone user and group (sketched after this list)
                        • Ensuring that the file was owned by keystone outside the container
                        • switching to the -v option to create the bind mount, as that allowed me to use the :Z option as well.
• adding the -u keystone option to the command line
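
In shell terms, the host-side preparation amounted to roughly this (a sketch; the names are illustrative and the uid/gid has to line up with what the keystone user maps to inside the image):

getent passwd keystone || sudo useradd --system keystone
sudo chown keystone:keystone keystone.conf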

                        The end command looked like this:

                        podman run -v /home/ayoung/devel/container-keystone/keystone-db-init/keystone.conf:/etc/keystone/keystone.conf:Z  -u keystone         --add-host keystone-mariadb:10.89.0.47   --network maria-bridge  -it localhost/keystone-db-init 

                        Once I had it correct, I could use the /bin/bash executable to again poke around inside the container. From the inside, I could run:

                        $ keystone-manage db_version
                        109
                        $ mysql -h keystone-mariadb -ukeystone -pkeystone keystone  -e "show databases;"
                        +--------------------+
                        | Database           |
                        +--------------------+
                        | information_schema |
                        | keystone           |
                        +--------------------+
                        

                        Next up is to try this with OpenShift.

                        From Double Harmonic to Octotonic

                        Posted by Adam Young on December 11, 2019 05:30 PM

                        Say you want to play fast 8th note runs on a double harmonic minor song. What note should you add by default?

The Double Harmonic scale has two minor thirds in it. Two common modes of it are the Hungarian and the Arabic minor. The Hungarian version is structured W, H, b3, H, H, b3, H.

                        Here is the Hungarian.

ABC notation (K:C, L:1/4): cd_e^f|g_abc'|c'b_ag|^f_edc|

                        The Arabic scale is the same, but played from the fifth. If we consider the Hungarian the Ionian, then the Arabic scale is the Mixolydian. They don’t exactly play the same roles.

ABC notation (K:C, L:1/4): G_ABc|d_e^fg|g ^f _e d| cB_AG|

If you want to play an eighth note run, you need to decide whether you want to maintain the sound of the two minor thirds or not. If you do, that means you are going to have to split the whole step. That means H, H, H, b3, H, H, b3, H. In C:

ABC notation (K:C, L:1/4): c_d=d_e|^fg_ab|c'b_ag|^f_ed_d|c

Going back to my 8 tone scales post, that is scale #10. There are four half steps in a row, and two half steps between the minor thirds.

By putting the half step right at the beginning, it changes where the scale tones align with the chord tones. Assuming you are playing this over a C minor, you've just moved the third and the fifth off the downbeat.

                        If we want to make sure to hit the chord tones on the beat, we want to push the extra note to the end of the scale. That implies splitting the second of the two minor thirds. Adding the dominant seventh is a likely target:

ABC notation (K:C, L:1/4): cd_e^f|g_a_b=b|c'b_b_a|g^f_ed|c

                        This scale pattern is W, H, b3, H, H, W, H, H. Since I put the Minor thirds at the end in my original post, we can rewrite it as: H, H, W, H, H, W, H, b3 which is scale 22.

                        This puts the Dominant Seventh on the down beat. If the desired sound is more like a minor/major seventh, you might prefer to use the sixth instead of the dominant seventh.

ABC notation (K:C, L:1/4): cd_e^f|g_a=ab|c'ba_a|g^f_ed|c

                        This is W, H, b3, H, H, H, W, H. Rotating the minor 3rd to the last position: H, H, H, W, H, W, H, b3 is scale 25.

                        What if you are looking to play a Blues lick? The scale already has a Tritone in it. The minor blues scale is

ABC notation (K:C, L:1/4): c_ef^f|g_bc'

The example of adding just the seventh is above. Here is what we get if we just add the fourth, and "break" the first minor third:

ABC notation (K:C, L:1/4): cd_ef|^fg_ab|c'b_ag|^f=f_ed|c

                        This is W H W H H H 3 H. Rotated so the 3rd is at the end: H W H W H H H 3 or scale 16.

                        If we are willing to break both of the minor 3rds, we can cover the blues scale by dropping the Major seventh to a dominant seventh:

ABC notation (K:C, L:1/4): cd_ef|^fg_a_b|c' _b _a g|^f =f _e d|c

                        This sounds “bluesy” to me.

                        This is W H W H H H W W. Rotated to H H H W W W H W this matches scale 38.

What about off the Arabic Harmonic Minor scale? Let's put it beside a G blues scale:

ABC notation (K:C, L:1/4): G_ABc|d_e^fg|G_Bc^c|dfg

To convert from the Harmonic to a Blues, we need to add a Bb, a C#, and an F natural. Adding three tones would give us a 10 tone scale, which might be interesting; a chromatic scale minus A and E.

ABC notation (K:C, L:1/4): G_A_B=B|c^cd_e|f^fg

This also seems to have both the minor and blues feel in it. But how could we pare it down to 8 notes? We can drop any of the notes not in the blues scale: Ab, B natural, Eb, or F#. All of these notes make up the minor 3rd intervals that give the scale its distinctive sound. The F# is used in a Mixolydian Bebop scale to make sure the Dom 7th lands on a down beat. The Ab is a b9, often added to 7th chords. This implies we should drop the Eb and B natural, although this decision is a little arbitrary.

ABC notation (K:C, L:1/4): G_A_Bc|^cdf^f|g^f=fd|^c=c_B_A|G

This is H W W H H 3 H H. What is interesting is that it has a minor third, but it is not one of the ones from the original scale. Rotating it to put that minor third at the end: H H H W W H H 3, or scale 37

                        8 Tone scales #1: Lots of Half Steps

                        Posted by Adam Young on December 10, 2019 08:05 PM

                        Since I posted my article on 8 tone scales, I’ve gotten an embedded player. Here’s what the first of these 8 tone scales sounds like.

Let's start with Scale 1: H H H H H H H 5.

ABC notation (K:C, L:1/4): A^ABc^c|d^d^g|a^g^d=d|^c=cB^A=A|

What if we put the jump in the middle:

ABC notation (K:C, L:1/4): c^cd^d^ga^ab|c'b^a=a^g^d=d^c=c

So, this sounds pretty good. It has a chromatic feel with a big intervallic jump, and that describes a lot of good melody lines. The half steps could be made to work out with numerous chords. For example, let's say we want to play this over an A Major 7 chord, starting on the third. Ascending, the downbeats would be C# G# A B, which are the third, seventh, root and ninth. That big jump lines up with chord tones.

If you start a half step earlier, you will also get an interval of a fifth lining up with the jump, but on the upbeats.

If we want to play an A Dominant 7th, we have a tritone between the 3rd and the seventh. We could align the big fifth of a jump with the root to the fifth (A to E), but then we miss the third, arguably the most important tone for capturing the sound of the scale. If we put the jump starting on the third, we miss the 7th. If we put the jump on the 7th, we miss both the root and the third.

This scale may work better over a minor chord. The jump from the minor third to the 7th aligns nicely with the scale's jump, skipping over the fifth.

ABC notation (K:C, L:1/4): ^c^g

                        We can lead in to the jump with a chromatic run:

ABC notation (K:C, L:1/8): z^ABcg^ga^a|b

                        Benefits of Saxophone

                        Posted by Adam Young on December 09, 2019 05:50 PM

                        When you play Saxophone, your whole body plays music.

Playing Sax by the Seine

                        Hearing, visual, and temporal. You feel the vibrations on your skin and the contact of the instrument. You taste the reed and the metal of the mouthpiece. You even smell the cork grease.

                        Touch.

Thumb rest

Playing Sax is a tactile experience. Your fingers stay in contact with the keys. You do not depend on your eyes to see where to place your fingers. Emilio replaced the thumb rest on my Tenor with a brass one, and it feels better and transmits the vibrations better. Playing Saxophone is very oral; probably why I gave up sucking my thumb. It's physical, holding up the horn, and the strap often cuts in to your neck; making this more comfortable with better, wider, softer straps has been very pleasant.

                        Breath.


                        Of all the things that Saxophone demands, perhaps it is air that is the most personal. It is like swimming under water, and trying to reach the far side of the pool. Wind instruments demand a degree of development of your diaphragm only achievable by good cardiovascular workouts.

                        Time


Sound is timing. The biggest difference between music and most visual arts is the timing aspect. You have to make sure you hit at the right time, and that is more like athletics than painting or carving. Jazz takes the mental aspect of playing far beyond any other activity I've done, with the possible exception of software development, and even there it is debatable. While I've had to meet the occasional deadline, there is not a fine-grained time aspect to coding. You need to do all of the slow thinking in the practice room, and then fast, reactive thinking while playing. But you are processing the chords, and figuring out…not what series of notes, but rather what patterns of scales, which notes to add in and take out as your fingers move, based on the chords played by the rhythm section.

                        Speed. 


Saxophone is a fairly static instrument as far as hand movement goes; unlike the piano, or the string instruments for that matter, you don't need to reach for notes. While you might rotate your hands to reach the keys, the movements are small and efficient. It's why Charlie Parker was able to play those rapid streams of notes; there is very little to slow you down.

                        People


The Saxophone is designed to produce a single note at a time. Unlike a guitar or a piano, it is not a simple task to sing a song and accompany yourself on the saxophone. Instead, a Saxophone plays best in conjunction with other musicians. A Saxophonist may spend long hours in the practice room working solo, but performance calls for an ensemble. I particularly love playing off another saxophone. The intervals and chords formed by multiple Saxophones provide a rich experience.


                        These are some of my favorite aspects of playing saxophone. There are others. Some of these things apply to other instruments and activities as well. But Saxophone holds a special place in my heart…and in my hands and my ears.

Containers from first principles

                        Posted by Adam Young on December 09, 2019 03:20 PM

                        Computing is three things: calculation, movement, and storage. The rest is commentary.

What are containers? I was once told they were "just" processes. It took me a long time to get beyond that "just" to really understand them. Processes sit in the middle of a set of abstractions in computer science. Containers are built on that abstraction. What I'd like to do here is line up the set of abstractions that support containers from the first principles of computer science.

                        Computation is simple math: addition and the operations built from it like subtraction and multiplication, and simple binary tricks like left shift which are effectively forms of multiplication.

                        A CPU takes a value out of memory, performs math on it, and stores it back in memory. Sometimes that math requires two values from memory. This process is repeated endlessly as long as your computer is on.

                        Storage is the ability to set a value somewhere and come back later to see that it has the same value. If maintaining that value requires electricity, we call it volatile memory. If it can survive a power outage, we call it persistent storage.

The movement of information from one location to another involves the change of voltage across a wire. Usually, one value is used to select the destination, and another value is transferred.

That is it. Those are the basics in a nutshell. All other abstractions in computer science are built from these three pieces.

                        One little quibble: there is a huge bit I am skipping over: interactions with the outside world. Input, from sensors, and various parts of the output story as well. I’ll just acknowledge those now, but I’m not going to go in to them in too much depth.

If computation could only transform existing data, it would be useless. Devices that interact with the outside world make computation capable of reacting to and inducing change in the real world. Want to draw a picture on the screen? You need to store the right values in the right places so the graphics hardware can tell the monitor what value to set on a pixel on the screen. Want to tell the computer what to do? Press a button on a keyboard that changes a voltage in a circuit. As soon as possible, these changes become just more computation.

When computers were just getting started, back in the 1940s or so, there was very little abstraction. The output from a computer was produced by technology not much different from a typewriter. Instead of human fingers, a solenoid would depress a key. Want to type an 'A' character? Send a pulse to the right solenoid. Input came from switches, not much different from sending the wakeup message to your light-bulb.

In the intervening years, the complexity of how we generate that output has exploded. The keyboard that I use to type this article contains within it a full computer that talks to the outside world via the USB protocol. The monitor I read from as I type contains a different computer that talks HDMI. In between is a laptop capable of those protocols, TCP/IP, Bluetooth, 802.11 and more. These systems are possible because the protocols, the pre-agreed meaning of a sequence of signals, are well understood and implemented by the manufacturers. The basic ideas of compute, movement, and storage are used at all layers of implementing these protocols.

                        Early computers were given one task at a time. Most of those tasks were mathematical in nature, such as computing tables of logarithms or settings for cannons to fire at targets at different ranges and elevations. These tasks, which were both time consuming and error prone when performed by humans, were sufficient to keep these computers occupied. A small amount of time was required to load in a new program, or to read out the results of the old one. As the value of the computer time grew, additional mechanisms allowed the computer operators to queue up batches of jobs, so that the machine could immediately start the next once the previous one finished.

When a human manually loads and unloads a computer, there is not much call for a naming structure for resources. However, once the loading and storage of resources is automated, a second process needs a way to find a resource produced by a previous one. While there are many ways this has been implemented, one of the most common and easiest to understand is a directed-acyclic-graph (DAG) with a single root node. All resources live as nodes within this graph. We call these nodes directories, unless they are end nodes, in which case we call them files.

To find a resource, you start at the root and navigate from node to node until you find the resource desired. You recognize this as the filesystem structure of your computer. We can produce a name for a resource by building a path from the root of the tree to the resource itself. For example, we can find an executable file at the node /usr/bin/xz. This path traverses from the root node (/) through the usr and bin nodes, and finally identifies the xz end node.
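
To make that traversal concrete, util-linux ships a namei command that prints each node it visits along a path (a sketch; any path will do):

# walk the graph from the root node (/) through usr and bin to the xz end node
namei -l /usr/bin/xz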

Further complexity emerged as computers became more powerful and we expected them to do more complex tasks. As the amount of space a computer required decreased, engineers put more compute power into that space. Where before there had been one processing unit, there might now be two, four, or more. These processors were linked together, and programmers came up with ways of splitting up mathematical tasks to use multiple processors at once.

                        Computers were also monitoring sensors in the outside world. When a sensor hit a threshold, it would send a signal to the computer. The computer needed to be able to pause what it was doing in order to be able to handle that sensor. We call these signals “interrupts” because they interrupt the flow of execution of a program. When a processor takes an interrupt, it stores the values it was working on, and switches to another program that contains the steps to handle the interrupt.

Taken together, the ability to have multiple processors and the ability for a single processor to handle an interrupt provide the basis for sharing a computer. The simplest sharing is that each person gets their own processor. But sometimes one person needs many processors, and another person only one. Being able to transfer the workloads between processors becomes important. But even if a single person is using the computers, she might want them to perform different tasks, and make sure that the results are not accidentally mixed. The ability to time share a computer means that the resources of a computer need to be divisible. What are those resources? Essentially, the same three we started with: computation, movement, storage. If the processor is working on one task, it should not see instructions from the other tasks. If the processor is storing a new value in memory, it should not overwrite a value from another task.

The simplest sharing abstraction for a single processor is called "threading." A thread is a series of instructions. If there are two threads running, it might be that one thread runs on one processor, and a second runs on a different one, or both might run on a single processor, with an interrupt telling the processor when it is time to switch between the threads. The two threads share all of the resources that are used for the larger abstractions. They see the same volatile memory. They share the same abstractions that we use to manage movement and storage: network devices and names, file systems and devices, and so on. The only thing that distinguishes the two threads is the current instruction each one is processing.

If we wish to keep the two threads from overwriting each other's values, we can put the next abstraction between them: virtual memory. When we have virtual memory, we have a subsystem that translates from the destination locations in the process to the physical locations in memory. That way, if each of our now-separated threads wants to store and retrieve a value from memory location 4822, they will each see their own values, and not the values of the other thread. When we add memory separation to threads, we have created the abstraction called a "process."

If the only abstraction we add to the thread is virtual memory, then the two processes still share all of the other resources in the system. This means that if one process opens and writes to a file named /ready, the other process can then open and read that file. If we wish to prevent that, we need a permission system in place that says what a given process can or cannot do with outside resources. This access control system is used for all resources in the DAG/Filesystem.

What if we want to make sure that two processes cannot interact even inside the filesystem? We give them two distinct filesystems. Just as virtual memory puts a layer of abstraction between the process and the physical memory, we put an abstraction between a process and its filesystem. There might be other resources that are sharable and not put into the filesystem, such as the identification number system for processes, called process ids, or PIDs. The network devices have separate means of identification as well. We call each of these things "namespaces." When we give two processes distinct namespaces, and ensure that they cannot interact with processes that have a different set of namespaces, we call these processes "containers."
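
As a rough sketch of that last step, util-linux also ships an unshare command that starts a process in fresh namespaces; a shell with its own mount, PID, network, and hostname (UTS) namespaces is most of the way to the container abstraction, minus the image, cgroup, and security pieces that tools like podman layer on top:

# start a shell in new mount, UTS, network, and PID namespaces;
# --fork and --mount-proc give the new PID namespace its own view of /proc
sudo unshare --mount --uts --net --pid --fork --mount-proc /bin/bash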

                        This whole article was to build up to the understanding of containers from the simplest possible abstractions. From computation, movement, and storage, we can see how greater degrees of complexity and abstraction grow up until we have the container abstraction.


                        Episode 173 - Ho Ho Homeland Security

                        Posted by Open Source Security Podcast on December 09, 2019 12:10 AM

Josh, Santa, and Kurt talk about the border nightmare Santa Claus has to deal with as he traverses the globe. Questions we explore include: Are the reindeer farm animals? Is the North Pole a farm? Is Santa an intellectual property thief? Does Krampus eat politicians? Does Santa have a passport? Does Santa have an emergency radio?
                        <iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/12317846/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

                        Show Notes


                          Building an OpenShift LDAP URL from FreeIPA

                          Posted by Adam Young on December 07, 2019 07:31 PM

                          If you want to use LDAP authentication with OpenShift, you need to build an LDAP URL from the information you do have. Here are the steps.

I've installed a Red Hat IdM server. If I ssh in to the server, I can use kinit to authenticate, and use the ipa command line to query. I created a user named openshift that will be used to perform the operations from the OpenShift instance. Let's use that as a starting point. The user-show command does not show the LDAP info by default, but if you add the --all flag, you do get it:

                          $ ipa user-show openshift  --all | grep dn:
                            dn: uid=openshift,cn=users,cn=accounts,dc=redhatfsi,dc=com
                          

                          The users are all stored in the same tree. So we can remove the uid entry from the start of that line to get the base DN. We can use curl to test:

                          curl  ldaps://$HOSTNAME/cn=users,cn=compat,dc=redhatfsi,dc=com
                          

                          This is the output produced:

                          DN: cn=users,cn=compat,dc=redhatfsi,dc=com
                          	objectClass: extensibleObject
                          
                          	cn: users
                          

Note that this can be done as an anonymous user; I have not had to authenticate to the IdM server. However, it does not list the users. To get some values back, we need to tell the query how deep to go in the tree. Use the scope "one" for IdM, as the user tree is flat.

                          $ curl  ldaps://idm.redhatfsi.com/cn=users,cn=compat,dc=redhatfsi,dc=com?uid
                          DN: cn=users,cn=compat,dc=redhatfsi,dc=com
                          
                          [ayoung@idm ~]$ curl  ldaps://idm.redhatfsi.com/cn=users,cn=compat,dc=redhatfsi,dc=com?uid?one
                          DN: uid=openshift,cn=users,cn=compat,dc=redhatfsi,dc=com
                          	uid: openshift
                          
                          
                          DN: uid=ayoung,cn=users,cn=compat,dc=redhatfsi,dc=com
                          	uid: ayoung
                          
                          
                          DN: uid=admin,cn=users,cn=compat,dc=redhatfsi,dc=com
                          	uid: admin
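
Putting the pieces together, the LDAP URL OpenShift needs is just host + base DN + attribute + scope. A sketch of assembling it and re-testing with curl (the variable name is mine):

# base DN from the user entry, the uid attribute, and scope "one" since the user tree is flat
export LDAP_URL="ldaps://idm.redhatfsi.com/cn=users,cn=compat,dc=redhatfsi,dc=com?uid?one"
curl "$LDAP_URL"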
                          


                          Copying files into a container at run time

                          Posted by Adam Young on December 06, 2019 11:36 PM

There are three distinct things that have to happen between installing the Keystone software and running a Keystone instance. The first is management of the configuration files. Second is the database migrations, and third is the keystone bootstrap of the database values. When coding container images to run a Keystone server, not only do you need to be aware of each of these steps, you need to make sure you are performing them in such a way that you can scale the Keystone server horizontally, handle zero downtime upgrades, and handle token-validating key rotations. Federated identity adds an additional twist, as you need to handle the addition of httpd config changes for new identity providers.

                          Let’s walk through this setup in detail.

                          Keystone was written assuming a Linux setup with configuration files in /etc. The keystone package owns the /etc/keystone directory. The primary configuration file is /etc/keystone/keystone.conf. This file is not checked in to the Keystone source repository, but is rather generated from defaults embedded in the source code. I generate the base config file using tox:

tox -e genconfig

While the RPMs and other package files do something similar, they then hold on to the artifact, and will have the Keystone package own the file once installed.

                          When tox builds the sample configuration file, it ends up in the keystone git repository directory under etc/keystone.conf.sample. The values in this file direct all other changes.

The most important change to the keystone.conf file deals with the database connection. The connection config is in the [database] section of the config file. The default value is

                          connection = 

This cannot be defaulted. This is the SQLAlchemy connection string used to find and authenticate to the relational database that stores all of the data for the Keystone instance. It looks like this:

                          mysql+pymysql://username:password@hostname/dbname
                          

The password field is used for authentication, and it must be clear text. This is, obviously, a bad pattern. Finding a way to fix this problem is beyond the scope of this article. For now, let's talk about mitigation.

In the Kubernetes world, passwords should be kept in secrets. Using a local podman, we can mimic this by passing in environment variables when we run the container, or by mounting a volume. The container needs to use these variables to update the config file prior to running. The script that runs your application, whether to run bootstrap or to run the keystone server itself, must update the config file. If the environment variable is passed in on the command line, the script would look like this:

                          #!/bin/bash
openstack-config  --set  /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:$MYSQL_PASSWORD@keystone-mariadb/keystone
                          

                          This way, the user only has one variable they can affect: the password itself.

                          If you want to do the same thing using a file:

                          #!/bin/bash
                          MYSQL_PASSWORD=$( cat /etc/keystone/mysql.pass )
openstack-config  --set  /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:$MYSQL_PASSWORD@keystone-mariadb/keystone
                          

                          You could use both, to allow a command line override for testing:

                          if [ -z "$MYSQL_PASSWORD" ]
                          then MYSQL_PASSWORD=$( cat /etc/keystone/dbpass.txt )
                          fi

                          This allows me to run the database initialization like this from the cli:

                           podman run -it   --add-host keystone-mariadb:10.89.0.47   --network maria-bridge   --env MYSQL_PASSWORD=my-secret-pw   localhost/keystoneconfig
                          

                          But better yet, if I want to run it while mounting a file inside the container:

                           podman run -it --mount type=bind,source=/tmp/pass.txt,destination=/etc/keystone/dbpass.txt  --add-host keystone-mariadb:10.89.0.47   --network maria-bridge   localhost/keystoneconfig
                          

                          This last option matches the way that Kubernetes would mount a secret as a file.
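
For reference, creating that secret in OpenShift so it can later be mounted at /etc/keystone/dbpass.txt would look roughly like this (a sketch; the secret name is mine, and the deployment still needs a matching volume and volumeMount):

# store the password file as a secret; the mount into the pod is configured separately
oc create secret generic keystone-db-pass --from-file=dbpass.txt=/tmp/pass.txt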

We can use this same method for handling the keys for fernet tokens. The following line runs the keystone server:

                           podman run -d  --mount type=bind,source=/tmp/pass.txt,destination=/etc/keystone/dbpass.txt      --mount type=bind,source=/etc/keystone/fernet-keys,destination=/etc/keystone/fernet-keys   --add-host keystone-mariadb:10.89.0.47   --network maria-bridge   -it localhost/keystone 
                          

                          Here’s the modified version of the script that runs the web server and wsgi module. You can see where I commented out the explicit creation of the fernet key repository.

                          #!/bin/bash
                          # Copied from
                          #https://github.com/CentOS/CentOS-Dockerfiles/blob/master/httpd/centos7/run-httpd.sh
                          # Make sure we're not confused by old, incompletely-shutdown httpd
                          # context after restarting the container.  httpd won't start correctly
                          # if it thinks it is already running.
                          rm -rf /run/httpd/* /tmp/httpd*
                          keystone-manage bootstrap --bootstrap-password my-secret-password
                          
                          MYSQL_HOST=keystone-mariadb
                          MYSQL_PORT=3306
                          
                          if [ -z "$MYSQL_PASSWORD" ]
                          then MYSQL_PASSWORD=$( cat /etc/keystone/dbpass.txt )
                          fi
                          
                          openstack-config  --set  /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:$MYSQL_PASSWORD@$MYSQL_HOST/keystone
                          #keystone-manage fernet_setup  --keystone-user keystone  --keystone-group keystone
                          
                          exec /usr/sbin/apachectl -DFOREGROUND
                          

                          Password Quality and Badlisting in Kanidm

                          Posted by William Brown on December 06, 2019 02:00 PM

                          Password Quality and Badlisting in Kanidm

                          Passwords are still a required part of any IDM system. As much as I wish for Kanidm to only support webauthn and stronger authentication types, at the end of the day devices can be lost, destroyed, some people may not be able to afford them, some clients aren’t compatible with them and more.

                          This means the current state of the art is still multi-factor auth. Something you have and something you know.

                          Despite the presence of the multiple factors, it’s still important to quality check passwords. Microsoft’s Azure security team have written about passwords, and it really drives home the current situation. I would certainly trust these people at Microsoft to know what they are talking about given the scale of what they have to defend daily.

                          The most important take away is that trying to obscure the password from a bruteforce is a pointless exercise because passwords end up in password dumps, they get phished, keylogged, and more. MFA matters!

It's important here to look at the "easily guessed" and "credential stuffing" categories. That's what we really want to defend against with password quality, and MFA protects us against keylogging, phishing (only webauthn), and reuse.

                          Can we Avoid This?

Yes! Kanidm supports a "generated" password that is a long, high entropy password that should be stored in a password manager or similar tool to prevent human knowledge. This fits our "device as authentication" philosophy. You authenticate to your device (phone, laptop etc), and then the device's stored passwords authenticate you from that point on. This has the benefit that devices and password managers generally perform better checking of the target where we enter the password, making phishing less likely.
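
Kanidm generates these passwords internally; purely to illustrate what a long, machine-generated secret looks like, something like this from standard tooling produces comparable entropy (this is not Kanidm's mechanism):

# 24 random bytes, base64 encoded: roughly 192 bits of entropy,
# meant to live in a password manager rather than in human memory
openssl rand -base64 24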

                          But sometimes we can’t rely on this, so we still need human-known passwords, so we still take steps to handle these.

                          Quality

In the dark ages, we would decree that human known passwords had to have a number, symbol, roman numeral, no double letters, one capital letter, two kanji and no symbols that could be part of an SQL injection because of that one legacy system we can't patch.

This led to people making horrid, un-rememberable passwords in leetspeak, or giving up altogether and making the excellent "Password1" (which passes AD minimum password requirements on Server 2003).

What we really want is entropy, length, and memorability. Dropbox made a great library for this called zxcvbn, which has since been ported to Rust. I highly recommend it.

                          This library is great because it focuses on entropy, and then if the password doesn’t meet these requirements, the library recommends ways to improve. This is excellent for human interaction and experience, guiding people to create better passwords that they can remember, rather than our outdated advice of the complex passwords as described above.

                          Badlisting

                          Badlisting is another great technique for improving password quality. It’s essentially a blocklist of passwords that people are not allowed to set. This way you can have corporate-specific breach lists, or the top 10k most used passwords badlisted to prevent users using them. For example, “correct horse battery staple” may be a strong password, but it’s well known thanks to xkcd.

It's also good for preventing password reuse in your company if you are phished and the credentials are privately notified to you, as some of the regional CERTs do, allowing you to block these without them being in a public breach list.

                          This is important as many bots will attempt to spam these passwords against accounts (rate limiting auth and soft-locking accounts also helps to delay these attack styles).

                          In Kanidm

                          In Kanidm, we chose to use both approaches. First we check the password with zxcvbn, then we ensure it’s not in a badlist.

                          In order to minimise the size of the badlist, the badlist uses case insensitive storage so that multiple variants of “password” and “PasSWOrD” are only listed once. We also preprocessed the badlist with zxcvbn to remove any passwords that it would have denied from being entered. The preprocessor tool will be shipped with kanidm so that administrators can preprocess their own lists before adding them to the badlist configuration.
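
The kanidm preprocessor does the real work (including the zxcvbn pass), but the case-insensitive deduplication half of the idea can be sketched with standard shell tools (file names here are mine):

# lowercase every candidate, then drop duplicates so "password" and "PasSWOrD"
# collapse to a single badlist entry
tr '[:upper:]' '[:lower:]' < raw-breach-list.txt | sort -u > badlist.txt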

                          Creating a Badlist

I decided to do some analysis on a well known set of passwords maintained in the seclists repository. Apparently this is what pentesters reach for when they want to bruteforce or credential stuff on a domain.

I analysed this in three ways. The first was the full set of passwords (about 25 million); the second was a smaller but "popular" set, the "rockyou" files, which is about 60,000 passwords. Finally I did an analysis of the rockyou + top 10 million most common (which combined was 1011327 unique passwords, so about 50k of the rockyou set is from the top 10 million).

From the 25 million set, I ran this through a preprocessor tool that I developed for kanidm. It eliminated anything with a score of less than 3, with no length rule. This showed that zxcvbn was able to prevent 80% of these inputs from being allowed. If I was to ship this full list, it would contain 4.8 million badlisted passwords. It's pretty amazing already that zxcvbn stops 80% of bad passwords that end up in breaches from being able to be used, with the remaining 20% likely to be complex passwords that just got dumped by other means.

                          However, for the badlist in Kanidm, I decided to start with “what’s popular” for now, and to allow sites to add extra content if they desire. This meant that I focused instead on the “rockyou” password set instead.

From the rockyou set I did more tests. zxcvbn has a concept of scores, and we can have policy to request that a minimum score is present to allow the password. I did a score 3 test, a score 3 with min pw len 10, and a score 4 test. This showed the following results, which have the % blocked by zxcvbn and the no. that passed, which will require badlisting as zxcvbn can't detect them (today).

                          TEST     | % blocked | no. passed
                          ---------------------------------
                           s3      |  98.3%    |  1004
                           s3 + 10 |  98.9%    |  637
                           s4      |  99.7%    |  133
                          

                          Personally, it’s quite hilarious that “2fast2furious” passed the score 3 check, and “30secondstomars” and “dracomalfoy” passed the score 4 check, but who am I to judge - that’s what bad lists are for.

More seriously, I found it interesting to see the effect of the check on length - not only was the preprocessor step faster, but that alone eliminated ~400 passwords that would have "passed" on score 3.

                          Finally, from the rockyou + 10m set, the results show the following in the same conditions.

                          TEST     | % blocked | no. passed
                          ---------------------------------
                           s3      |  89.9%    |  101349
                           s3 + 10 |  92.4%    |  76425
                           s4      |  96.5%    |  34696
                          

                          This shows that a very “easy” win is to enforce password length, in addition to entropy checkers like zxcvbn, which are effective to block 92% of the most common passwords in use on a broad set and 98% of what a pentester will look for (assuming rockyou lists). If you have a high security environment you should consider setting zxcvbn to request passwords of score 4 (the maximum), given that on the 10m set it had a 96.5% block rate.

                          Conclusions

                          You should use zxcvbn, it’s a great library, which quickly reduces a huge amount of risk from low quality passwords.

                          After that your next two strongest controls are password length, and being able to support badlisting.

                          Better yet, use MFA like Webauthn as well, and support server-side generated high-entropy passwords!

                          Injecting a Host Entry in podman-run

                          Posted by Adam Young on December 05, 2019 04:27 PM

                          How does an application find its database? For all but the most embedded of solutions, the database exposes a port on a network. In a containerized development process, one container needs to find another container’s network address. But podman only exposes the IP address of a pod, not the hostname. How can we avoid hardcoding IP addresses of remote services into our containers?

                          Here is the database I built in a recent post:

                          # podman ps
                          CONTAINER ID  IMAGE                             COMMAND  CREATED       STATUS           PORTS  NAMES
                          c89f6ae06c1f  docker.io/library/mariadb:latest  mysqld   12 hours ago  Up 12 hours ago         keystone-mariadb
                          

But if I try to attach to it using the pod name, things fail:

                          # podman  run -it --network maria-bridge    -e MYSQL_ROOT_PASSWORD="my-secret-pw"    --rm mariadb sh    -c 'exec mysql -hkeystone-mariadb -P3306 -uroot -p"$MYSQL_ROOT_PASSWORD"'
                          ERROR 2005 (HY000): Unknown MySQL server host 'keystone-mariadb' (-2)
                          

                          I can grab the IP address from the mariadb pod:

                          # podman inspect keystone-mariadb | jq -r '.[] | .NetworkSettings | .IPAddress'
                          10.89.0.47
                          

The --add-host flag to the podman run command allows us to inject an entry into /etc/hosts.

                          # podman  run -it --network maria-bridge   --add-host keystone-mariadb:10.89.0.47    -e MYSQL_ROOT_PASSWORD="my-secret-pw"    --rm mariadb sh    -c 'exec mysql -hkeystone-mariadb -P3306 -uroot -p"$MYSQL_ROOT_PASSWORD"'
                          Welcome to the MariaDB monitor.  Commands end with ; or \g.
                          Your MariaDB connection id is 13
                          Server version: 10.4.10-MariaDB-1:10.4.10+maria~bionic mariadb.org binary distribution
                          
                          Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
                          
                          Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
                          
                          MariaDB [(none)]> 
                          


This is the start of a strategy for service-to-service communication. I'll be building on this approach moving forward.
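
Pulling the two steps together into something scriptable (a sketch; the variable name is mine, and the image and network are the ones used above):

# look up the database container's current address, then inject it as a host entry
MARIADB_IP=$( podman inspect keystone-mariadb | jq -r '.[] | .NetworkSettings | .IPAddress' )
podman run -it --network maria-bridge --add-host keystone-mariadb:$MARIADB_IP \
    -e MYSQL_ROOT_PASSWORD="my-secret-pw" --rm mariadb \
    sh -c 'exec mysql -hkeystone-mariadb -P3306 -uroot -p"$MYSQL_ROOT_PASSWORD"'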

                          Injecting Parameters into container image

                          Posted by Adam Young on December 04, 2019 03:12 PM

An earlier post hard-coded the IP address and port used for MariaDB connections. I want to pull these out so I can pass them in on the command line when I create the client.

First step: apply the "introduce a parameter" refactoring, but to the shell script. This involves changing both the calling and called programs. Here's what that change looks like:

                          $ git diff
                          diff --git a/keystoneconfig/Dockerfile b/keystoneconfig/Dockerfile
                          index e9ea6a3..7b76558 100644
                          --- a/keystoneconfig/Dockerfile
                          +++ b/keystoneconfig/Dockerfile
                          @@ -8,4 +8,4 @@ RUN yum install -y centos-release-openstack-stein &&\
                            
                           COPY ./keystone-configure.sql /
                           COPY ./configure_keystone.sh /
                          -CMD /configure_keystone.sh 
                          +CMD /configure_keystone.sh 10.89.0.4 3306
                          diff --git a/keystoneconfig/configure_keystone.sh b/keystoneconfig/configure_keystone.sh
                          index 0ae6410..7982813 100755
                          --- a/keystoneconfig/configure_keystone.sh
                          +++ b/keystoneconfig/configure_keystone.sh
                          @@ -1,7 +1,11 @@
                           #!/bin/bash
                          - 
                          +
                          +MYSQL_HOST=$1
                          +MYSQL_PORT=$2
                          +
                          +
                           echo -n Database 
                          -mysql -h 10.89.0.4  -P3306 -uroot --password=my-secret-pw < keystone-configure.sql
                          +mysql -h $MYSQL_HOST  -P$MYSQL_PORT -uroot --password=my-secret-pw < keystone-configure.sql
                           echo " [COMPLETE]"
                            
                           echo -n "configuration "
                          -openstack-config  --set  /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@172.17.0.2/keystone
                          +openstack-config  --set  /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@$MYSQL_HOST/keystone
                           DATABASE_CONN=`openstack-config  --get  /etc/keystone/keystone.conf database connection `
                           echo $DATABASE_CONN
                          
                          

Note that I am also going to run this by changing the IP address to the currently active one in my MariaDB container. That way I can test it. First, grab the IP.

                          #  podman inspect $PODID | jq -r '.[] | .NetworkSettings | .IPAddress'
                          10.89.0.4
                          

                          And run it:

                          podman run -it --network maria-bridge  localhost/keystoneconfig
                          

                          Gives the output:

                          # podman run -it --network maria-bridge  localhost/keystoneconfig 
                           [COMPLETE]
                          configuration mysql+pymysql://keystone:keystone@10.89.0.4/keystone
                           [COMPLETE]
                          db-sync  [COMPLETE]
                          bootstrap /etc/keystone/fernet-keys/ does not exist
                           [COMPLETE]
                          

Let's run it again to see what happens:

Database ERROR 1007 (HY000) at line 2: Can't create database 'keystone'; database exists
 [COMPLETE]
configuration mysql+pymysql://keystone:keystone@10.89.0.4/keystone
 [COMPLETE]
db-sync  [COMPLETE]
bootstrap /etc/keystone/fernet-keys/ does not exist
 [COMPLETE]

Similar, but with the warning that the database is already created. Let's look using the MySQL client:

                          # podman  run -it --network maria-bridge    -e MYSQL_ROOT_PASSWORD="my-secret-pw"    --rm mariadb sh    -c 'exec mysql -h10.89.0.4 -P3306 -uroot -p"$MYSQL_ROOT_PASSWORD"'
                          Welcome to the MariaDB monitor.  Commands end with ; or \g.
                          Your MariaDB connection id is 14
                          Server version: 10.4.10-MariaDB-1:10.4.10+maria~bionic mariadb.org binary distribution
                          
                          Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
                          
                          Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
                          
                          MariaDB [(none)]> show databases;
                          +--------------------+
                          | Database           |
                          +--------------------+
                          | information_schema |
                          | keystone           |
                          | mysql              |
                          | performance_schema |
                          +--------------------+
                          4 rows in set (0.001 sec)
                          
                          MariaDB [(none)]> use keystone;
                          Reading table information for completion of table and column names
                          You can turn off this feature to get a quicker startup with -A
                          
                          Database changed
                          MariaDB [keystone]> show tables;
                          +------------------------------------+
                          | Tables_in_keystone                 |
                          +------------------------------------+
                          | access_rule                        |
                          .....
                          +------------------------------------+
                          46 rows in set (0.001 sec)
                          

Now that the script is parameterized, the next step is to parameterize the container image itself. It turns out that there are two main ways to do this: pass in environment variables, or add to the command line. First, the env vars approach.

I made a slight change to the script above so that it used the env vars, and expected them to be passed in from outside rather than taken from the positional parameters:

                          diff --git a/keystoneconfig/configure_keystone.sh b/keystoneconfig/configure_keystone.sh
                          index 05c8ed8..094e30e 100755
                          --- a/keystoneconfig/configure_keystone.sh
                          +++ b/keystoneconfig/configure_keystone.sh
                          @@ -1,7 +1,7 @@
                           #!/bin/bash
                           
                          -MYSQL_HOST=$1
                          -MYSQL_PORT=$2
                          +#MYSQL_HOST=$1
                          +#MYSQL_PORT=$2
                          

                          To run the container:

                          podman run -it --network maria-bridge  -e MYSQL_HOST=10.89.0.4 -e MYSQL_PORT=3306 localhost/keystoneconfig
                          

I like this from the calling side, as it names the parameters explicitly. However, if I were re-using the script inside the container, I might want to have the parameters on the command line. They are also global parameters, which is an anti-pattern as well.

To run the script with the command line parameters passed in order, I revert keystoneconfig/configure_keystone.sh so that it pulls in the parameters on the command line, but leave the parameters off the CMD line in the Dockerfile.

                          However, if I want to replace the command line parameters, I need to specify the executable as well. Which means that I have to execute the container like this:

                          # podman run -it --network maria-bridge  localhost/keystoneconfig    /configure_keystone.sh   10.89.0.4  3306
                          

                          There is an alternative to using configuration values this way. We can mount the config file inside the docker container. But that is a tale for another day.

                          Running MariaDB from Podman

                          Posted by Adam Young on December 03, 2019 07:23 PM

I am moving all of my tooling over from Docker to podman and buildah. One thing I want to reproduce is the MariaDB setup I used.

                          First, check the networks status:

sudo -i
                          podman network ls
                          NAME     VERSION   PLUGINS
                          podman   0.4.0     bridge,portmap,firewall
                          

                          Now create the network and check it:

                          /etc/cni/net.d/maria-bridge.conflist
                          [root@ayoungP40 ~]# podman network ls
                          NAME           VERSION   PLUGINS
                          podman         0.4.0     bridge,portmap,firewall
                          maria-bridge   0.4.0     bridge,portmap,firewall
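
The step that creates the network is not shown above; with podman it would be something along these lines (a sketch, which writes the conflist file listed above):

# create a new CNI bridge network; podman writes /etc/cni/net.d/maria-bridge.conflist
podman network create maria-bridge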
                          

                          Deploy the Database:

                          $ export MYSQL_ROOT_PASSWORD=my-secret-pw
$ PODID=$( podman run --network=maria-bridge --name some-mariadb -e MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD -d mariadb:latest )
                          

                          Find out the IP address of the container we just created:

                          POD_IP=$( podman inspect $PODID | jq -r '.[] | .NetworkSettings | .IPAddress' )
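
Echoing the variable shows the address that ends up hard coded in the client command below (10.89.0.4 in this run):

echo $POD_IP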
                          

Connect to it from the client. Note that I was not able to pass the IP address through as a variable. More on that in a bit; a sketch of one possible workaround follows the session output below.

                          podman  run -it --network maria-bridge    -e MYSQL_ROOT_PASSWORD="my-secret-pw"    --rm mariadb sh    -c 'exec mysql -h10.89.0.4 -P3306 -uroot -p"$MYSQL_ROOT_PASSWORD"'
                          Welcome to the MariaDB monitor.  Commands end with ; or \g.
                          Your MariaDB connection id is 8
                          Server version: 10.4.10-MariaDB-1:10.4.10+maria~bionic mariadb.org binary distribution
                          
                          Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
                          
                          Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
                          
                          MariaDB [(none)]> show databases;
                          +--------------------+
                          | Database           |
                          +--------------------+
                          | information_schema |
                          | mysql              |
                          | performance_schema |
                          +--------------------+
                          3 rows in set (0.001 sec)
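
One way to avoid hard coding the address, sketched here rather than taken from this post, is to hand it to the container as another environment variable. The host shell expands $POD_IP while building the -e argument, and the single quoted command lets the shell inside the container expand $MYSQL_HOST and $MYSQL_ROOT_PASSWORD:

podman run -it --network maria-bridge \
    -e MYSQL_ROOT_PASSWORD="my-secret-pw" \
    -e MYSQL_HOST="$POD_IP" \
    --rm mariadb sh -c 'exec mysql -h"$MYSQL_HOST" -P3306 -uroot -p"$MYSQL_ROOT_PASSWORD"'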
                          


                          Innovation

                          Posted by Russel Doty on December 03, 2019 06:51 PM

                          The previous article introduced the concept of product lifecycles. Examining the lifecycle model leads to the conclusion that the most profitable approach is to focus on the majority markets and largely ignore the innovators. In fact this is valid – within limits!

                          Clayton Christensen addresses this in The Innovator’s Dilemma where he introduces two types of innovation: sustaining innovation, which is innovation directed at solving an existing problem, and disruptive innovation, which involves using new technology to initially create new markets and then to ultimately address mainstream markets.

The concept can be summarized as: sustaining innovation is a problem looking for a solution, while disruptive innovation is a solution looking for a problem. For sustaining innovation you understand the problem that needs to be solved, and the challenge is to solve it. You understand the market, the customers and their needs, alternative solutions, and competitors. You can perform valid market research, make financial projections, and apply existing resources, processes, and skills.

Christensen discovered that existing companies do very well with sustaining innovation. They can tackle extraordinarily complex and difficult technologies and apply them to meeting their customers’ needs. They can make large investments and overcome seemingly impossible challenges. As an old saying goes, understanding the problem is 80% of the solution.

                          On the other hand, Christensen also discovered that successful companies do a poor job of dealing with disruptive technologies. They tend to either ignore a new technology until a competitor has established a strong position or they fail to successfully develop and market products built on the new technologies.

What is going on here? Is the problem with sustaining innovation? Not at all – successful companies are built on continuous improvement. Companies that don’t continuously improve their products and processes will fall behind the companies that do. Unswerving dedication to customers is a hallmark of a great company. Attempts to challenge a successful company in an established market are expensive and usually unsuccessful.

Making sense of this apparent contradiction requires several more concepts.

                          Customer Needs

There are several components to the model that Christensen proposes. A core concept is customer needs – specifically, how well a technology meets them.

                          Capabilities vs. customer requirements of a technology

This chart is a different look at the innovators/majority markets model used by Moore in Crossing the Chasm. It shows a typical technology development curve, where a new technology starts out being useful but not meeting all customer needs. The technology improves to the point where it meets, and then ultimately exceeds, customer needs.

Note that a “good enough” product can still be improved – it doesn’t meet all the needs of all customers, and customer needs continue to grow over time. The interesting case occurs when technology/performance improvement grows faster than customer demands. When this occurs, the customer focus moves from technology and performance to other factors such as convenience, reliability – and cost! Customers are unwilling to pay a premium for product capabilities that exceed their needs.

                          Technology Evolution

                          Christensen proposed that the evolution of technology shown in the customer needs chart follows an “S” curve. In the early stages investments in a new technology are largely speculative. This is fundamental research – experimentation to discover how to build the new technology and discovery of what it can do.

If the technology is viable, an inflection point is reached where incremental investments in the technology or product produce significant increases in performance or capabilities. This is typically where large market growth occurs.

As the technology matures, each increment of investment produces smaller returns – you reach a point of diminishing returns on investment.

                          With successful products you have typically been moving up-market as the technology evolves – delivering more support to more demanding customers in a broader market. This requires – and delivers! – larger gross margins for the products and a larger organization with more overhead to meet the demands of large customers.

                          Improvements in a technology vs. investments

                          Following this model we have seen a scrappy startup with an exciting new technology growing into a successful and profitable mainstream company – the classic success story!

                          This leaves us with unanswered questions: First, how does the scrappy startup grow into a profitable company rather than becoming another failure? Second, how can an existing successful company deal with disruptive innovation?