Fedora Security Planet

Episode 254 – Right to Repair Security

Posted by Josh Bressers on January 18, 2021 12:01 AM

Josh and Kurt talk about the new right to repair rules in the EU. There’s a strange line between loving the idea of right to repair and being horrified, as security people, at the idea of a device being on the Internet for 30 years.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_254_Right_to_Repair_Security.mp3

Show Notes

Episode 253 – Defenders only need to be right once

Posted by Josh Bressers on January 11, 2021 12:01 AM

Josh and Kurt talk about this idea that seems to exist in security of “attackers only need to be right once” which is silly. The reality is attackers have to get everything right, defenders really only need to get it right once. But “defenders only need to be right once” isn’t going to sell any products.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_253_Defenders_only_need_to_be_right_once.mp3

Show Notes

Homelab OpenShift 4 on Baremetal: Part 1

Posted by Adam Young on January 08, 2021 04:16 PM

My work as a cloud Solutions Architect is focused on OpenShift. Since I work in the financial sector, my customers are very security focused. These two factors have converged on me working on OpenShift installs on disconnected networks.

The current emphasis on OpenShift is for virtualization. While virtualization can be nested, it typically has a performance penalty. More important, though, is that virtualization is a technology for taking advantage of bare metal installs.

I need to run OpenShift 4 on baremetal in my homelab via a disconnected install. Here we go.

To read the OpenShift install directions, you need a Red Hat subscription.

Hardware

I’ve written up my hardware setup before. Not a lot has changed, except that I have a fourth Dell r610 now. This one is problematic, in that I can’t seem to talk to the iDRAC on it. I can get it to PXE boot, but I need to go push the button manually. I’ve kept it close; the rack is now in my attic, behind a small door.

[Figure: Attic Rack. Rattick? Keep your friends close, and your servers...close enough to reboot.]

It is quieter in my office now, but not silent, when the cluster is running.

Thus, I can install the machines via PXE or via USB. Right now, I am going to work through a USB based install. This removes one layer of technology for most people. I can layer on the PXE approach after.

I do my work from a machine called Nuzleaf. This is the bastion host; it has direct network access to the outside world. Nuzleaf runs the HTTPD server that hands out the control plane’s ignition data. It also serves Yum repositories and container image repositories. It has two interfaces: a wireless one for the outside traffic and an ethernet port connected to the servers.
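A quick sanity check for that HTTPD setup (this is a sketch, not part of the original write-up; the web root and file names are placeholders): once the ignition configs exist, drop them into Apache’s document root on Nuzleaf and confirm a machine on the server-side network can fetch them.

sudo cp homelab/*.ign /var/www/html/
sudo chmod a+r /var/www/html/*.ign
# from a host on the server-side network, confirm the bastion serves the file
curl -I http://nuzleaf.home.younglogic.net/bootstrap.ign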

[Figure: Nuzleaf: the bastion host]

While I have a programmable switch, I don’t need it for this setup. Instead, everything is in a very flat setup behind the NUC.

[Figure: Simple switch setup]

Downloads

I downloaded the ISO rhcos-4.6.8-x86_64-live.x86_64.iso from the Red Hat site. This has been copied to a USB drive with:

sudo dd bs=4M if=rhcos-4.6.8-x86_64-live.x86_64.iso of=/dev/sda

If you look closely you can see the USB stick in the top server (the one not racked properly, of course), which is going to act as the bootstrap server.

The bootstrap server has been named Boldore. Mnemonic for Bootstrap. I really should rename the three servers. I think I want to go backwards from Z. I have Zygarde and Zubat…whichever of those is on the bottom should continue. Yungoos. Xatu. I like this. Need to redo things with those names.

If I were to boot the Dell r610s from the USB stick right now, they would not be able to download their ignition data. Thus, when I boot them, I need to intercept the Grub stage and inject the right values for ignition and other things.
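For the record, that interception amounts to editing the kernel line at the RHCOS live ISO boot menu and appending the coreos.inst arguments. This is only a sketch: the disk device, host name, port, and file name below are assumptions for my network, not values from the install docs.

# appended to the kernel (linux) line in the GRUB entry; values are placeholders
coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://nuzleaf.home.younglogic.net:8080/bootstrap.ign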

You can see why I want to do this all via PXE.

In addition to the coreos image, I grabbed the installer and the command line tools. All of these files are on Nuzleaf in /home/ayoung/apps/ocp4.6.

$ ls ..
homelab  openshift-client-linux.tar.gz  openshift-install  openshift-install-linux.tar.gz  pull-secret  README.md  rhcos-4.6.8-x86_64-live.x86_64.iso

I’ve already extracted the installer:

tar -zxf openshift-install-linux.tar.gz

Install Config

I made a subdir for the generated files, including the one I need to manage by hand: install-config.yaml

BIG RED BLINK TAG WARNING: If you run the installer, you will delete this file. It is essential for sanity and reproducibility that you always have a backup copy of this. I create a file called install-config.yaml.orig. I tend to edit this one and copy it over to the file install-config.yaml. I’m going to try something different this time, and see if I can do it with a symlink instead.
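In shell terms, the copy-over workflow is just this one line (a sketch; the paths assume the layout described below):

cp homelab/install-config.yaml.orig homelab/install-config.yaml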

Here is my starter install-config.yaml.orig

apiVersion: v1
baseDomain: home.younglogic.net
compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3

metadata:
  name: homelab 
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14 
    hostPrefix: 23 
  networkType: OpenShiftSDN

  serviceNetwork: 
  - 10.22.21.240/16
platform:
  none: {} 
fips: false 
pullSecret: 'removed'
sshKey: |
  ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSmDjgSrcA9NSTbv1XN7ZFnH7kDE2fq69KaKDqYM/JVM3gYP7+NwRwlsulNrtLjsdbOgocAMGKdCy3UWpKNLCDwIPC2ONW8xMrc3mjVfK2Te7Tbgu1E+QeD7p76nJAn22+hTFa1nMFvkGEpvBE1b2YYEHV+UrsVN/f/+YUNeD2Wvfcaq/H7kgWkwanwTq4NpljEa/5UDXjYopL2W8IpQyKo5xR7srLm8AwAaWVLUBaSRsIHI2BUgIjk07TT+4/9K/BlMtaomL4/a8mW56FbE6ZmsyD7tgKT6TKkezlXAm2VyaWUG8+Ee2jQ2gW5eRvZuafirfSjpXRrO7FdLukygkDtGWDYwZnEi8zn5zPVRw7Eor8jg6xAkHdicGsMxK5JdMfkE7BHB7f5RKIv6je3aH1SG9LAYOKuMvVu7Z0UbgTTAmwITo/A3VxxRD4MWePce0fGhKHEKcT+hb5Aevnjeej08YvT8ErGvIOzq6tP9MKgN01ZrMnJ28idYzQXAfUZLM= ayoung@nuzleaf.home.younglogic.net


Aside: One thing I would love to see (since I am designing…er dreaming here) is the ability to pull in the ssh key and pull secret from remote files.

To generate the manifests:

[ayoung@nuzleaf ocp4.6]$ ./openshift-install create manifests --dir=./homelab/
INFO Consuming Install Config from target directory 
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings 
INFO Manifests created in: homelab/manifests and homelab/openshift 
[ayoung@nuzleaf ocp4.6]$ ls homelab/
install-config.yaml.bak  install-config.yaml.orig  manifests  openshift

It worked! I mean, running this way removed the symlink but left the .orig file intact.

Inside the homelab directory we have two generated sub-directories: manifests and openshift.

For iterative development, you want to be able to re-establish your starting point quickly. One technique is to make sure that the directory that gets filled with auto-generated files can always be wiped out and regenerated.

So I am moving my install config up one level.

[ayoung@nuzleaf ocp4.6]$ rm -rf homelab
[ayoung@nuzleaf ocp4.6]$ mkdir homelab
[ayoung@nuzleaf ocp4.6]$ ln -s $PWD/homelab-install-config.yaml homelab/install-config.yaml
[ayoung@nuzleaf ocp4.6]$ ls -la homelab
total 0
drwxrwxr-x. 2 ayoung ayoung  33 Jan  8 10:21 .
drwxrwxr-x. 3 ayoung ayoung 264 Jan  8 10:21 ..
lrwxrwxrwx. 1 ayoung ayoung  52 Jan  8 10:21 install-config.yaml -> /home/ayoung/apps/ocp4.6/homelab-install-config.yaml

I will put this in an Ansible playbook at some point. For now, I can recreate the install manifests in a clean and easy manner.
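Until that playbook exists, the reset loop is small enough to collect into a shell sketch (the same commands as above, assuming we start in the ocp4.6 directory):

#!/bin/sh
# wipe and regenerate the manifests from the hand-managed install config
rm -rf homelab
mkdir homelab
ln -s $PWD/homelab-install-config.yaml homelab/install-config.yaml
./openshift-install create manifests --dir=./homelab/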

Episode 252 – Is open source dangerous? Open source won, who cares, shut up!

Posted by Josh Bressers on January 04, 2021 12:01 AM

Josh and Kurt talk about a report on open source security from the Canadian Centre for Cyber Security. The title pretty much sums it up.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_252_Is_open_source_dangerous_Open_source_won_who_cares_shut_up.mp3

Show Notes

Episode 251 – Communication is hard, security communication is more hard

Posted by Josh Bressers on December 28, 2020 12:01 AM

Josh and Kurt talk about communication. It’s really hard to talk about a lot of what we do. How do we know if a device is secure? How do we know our knowledge is correct?

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_251_Communication_is_hard_security_communication_is_more_hard.mp3

Show Notes

Episode 250 – Door 25: Why do we do the things we do? Question everything

Posted by Josh Bressers on December 25, 2020 12:01 AM

Josh and Kurt talk about why we do the things we do. Sometimes we have to question everything

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_250_Door_25_Why_do_we_do_the_things_we_do_Question_everything.mp3

Links

Episode 249 – Door 24: Information wants to be free

Posted by Josh Bressers on December 24, 2020 12:01 AM

Josh and Kurt talk about the idea of information wanting to be free. It’s Christmas, we should give it what it wants!

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_249_Door_24_Information_wants_to_be_free.mp3

Links

Episode 248 – Door 23: How to report 1000 security flaws

Posted by Josh Bressers on December 23, 2020 12:01 AM

Josh and Kurt talk about how to file 1000 security flaws. One is easy, scale is hard.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_248_Door_23_How_to_report_1000_security_flaws.mp3

Episode 247 – Door 22: How to report one security flaw

Posted by Josh Bressers on December 22, 2020 12:01 AM

Josh and Kurt talk about how to report one security flaw

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_247_Door_22_How_to_report_one_security_flaw.mp3

Episode 246 – Door 21: Bug bounties

Posted by Josh Bressers on December 21, 2020 12:01 AM

Josh and Kurt talk about bug bounties

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_246_Door_21_Bug_bounties.mp3

Episode 245 – Door 20: Is SMS 2FA better than no 2FA?

Posted by Josh Bressers on December 20, 2020 12:01 AM

Josh and Kurt talk about whether SMS 2-factor auth is better than no 2FA

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_245_Door_20_Is_SMS_2FA_better_than_no_2FA.mp3

Links

Episode 244 – Door 19: TLS certificate trust

Posted by Josh Bressers on December 19, 2020 12:01 AM

Josh and Kurt talk about modern TLS certificate trust

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_244_Door_19_TLS_certificate_trust.mp3

Episode 243 – Door 18: Don’t roll your own crypto or auth

Posted by Josh Bressers on December 18, 2020 12:01 AM

Josh and Kurt talk about why it’s a horrible idea to roll your own crypto or auth

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_243_Door_18_Dont_roll_your_own_crypto_or_auth.mp3

Jamulus Latency on Fedora 33

Posted by Adam Young on December 17, 2020 08:51 PM

Somehow or other I seem to be able to get things working with Jamulus, but I really am struggling to explain how.

[Figure: Jamulus]

As I type this, I am connected to a local Jamulus server, and I have an Overall delay of about 70ms….which seems to be in the acceptable range.

[Figure: Jamulus settings show overall delay in the bottom right.]

I am running the Focusrite Scarlett Solo analog-to-digital converter from a microphone pickup. This is connected to my laptop via USB (2.0).

[Figure: Scarlett Solo 2nd Generation.]

And right now the USB tree looks like this:

$ lsusb --tree
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/10p, 5000M
    |__ Port 4: Dev 2, If 0, Class=Hub, Driver=hub/4p, 5000M
        |__ Port 4: Dev 3, If 0, Class=Hub, Driver=hub/4p, 5000M
/:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/16p, 480M
    |__ Port 4: Dev 2, If 0, Class=Hub, Driver=hub/4p, 480M
        |__ Port 4: Dev 6, If 0, Class=Hub, Driver=hub/4p, 480M
            |__ Port 2: Dev 10, If 1, Class=Printer, Driver=usblp, 480M
            |__ Port 2: Dev 10, If 2, Class=Mass Storage, Driver=usb-storage, 480M
            |__ Port 2: Dev 10, If 0, Class=Vendor Specific Class, Driver=, 480M
            |__ Port 1: Dev 9, If 1, Class=Human Interface Device, Driver=usbhid, 1.5M
            |__ Port 1: Dev 9, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
        |__ Port 2: Dev 4, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
    |__ Port 6: Dev 3, If 2, Class=Audio, Driver=snd-usb-audio, 480M
    |__ Port 6: Dev 3, If 0, Class=Audio, Driver=snd-usb-audio, 480M
    |__ Port 6: Dev 3, If 3, Class=Vendor Specific Class, Driver=snd-usb-audio, 480M
    |__ Port 6: Dev 3, If 1, Class=Audio, Driver=snd-usb-audio, 480M
    |__ Port 8: Dev 5, If 1, Class=Video, Driver=uvcvideo, 480M
    |__ Port 8: Dev 5, If 0, Class=Video, Driver=uvcvideo, 480M
    |__ Port 9: Dev 7, If 0, Class=Vendor Specific Class, Driver=, 12M
    |__ Port 14: Dev 8, If 0, Class=Wireless, Driver=btusb, 12M
    |__ Port 14: Dev 8, If 1, Class=Wireless, Driver=btusb, 12M

So, yeah, I have a few things connected: mouse, keyboard, printer. I’m in a docking station, and I think that is the difference between Bus 1 and Bus 2.

I’m running Fedora 33 with the low-latency kernel from Stanford CCRMA, built for Fedora 32 (the Fedora 33 kernel is not out yet). Fetched from here.

I have disabled pulse at the user level as I wrote about in a previous article.

I still don’t really know Jack.

In QJackCtl I’ve gone to Settings > Advanced and set the Scarlett Solo as the input and output device. You can see the green card icon in the combo boxes on the right-hand side below.

[Figure: QJackCtl advanced settings]

Here’s the black magic. I was getting really poor round trip times (120 ms), so I went to QJackCtl and messed with the Parameters. I upped the Periods/Buffer from 2 to 4, and restarted the server.

[Figure: QJackCtl Parameters]

The latency value in the bottom right went up to 21.3. But the overall delay dropped to about 70ms.

Then I set it back to the values you see above. Periods per buffer: 2. And the Overall Delay stayed the same. As I write this now, it is at 64 ms.

I’m tempted to keep tweaking the values to see what happens. But for now…I know there seems to be a relationship between Jack parameters and the Overall Delay in Jamulus. I just don’t know what it is.
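For reference, the QJackCtl Parameters correspond to a jackd command line, and the latency figure QJackCtl reports is roughly frames/period × periods/buffer ÷ sample rate. The device name and the 256/48000 values below are guesses that happen to reproduce the 21.3 ms number; substitute whatever aplay -l shows for the Scarlett.

# 256 frames * 4 periods / 48000 Hz is about 21.3 ms
jackd -d alsa -d hw:USB -r 48000 -p 256 -n 4
# back to 2 periods: 256 * 2 / 48000 is about 10.7 ms
jackd -d alsa -d hw:USB -r 48000 -p 256 -n 2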

Episode 242 – Door 17: Vulnerability response

Posted by Josh Bressers on December 17, 2020 12:01 AM

Josh and Kurt talk about vulnerability response. What is it, what does it mean, how does it work

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_242_Door_17_Vulnerability_response.mp3

Getting SweetHome3D To Run on Fedora 33

Posted by Adam Young on December 16, 2020 08:53 PM

When I tried running SweetHome3D, I got two different problems depending on which of the scripts I tried. I eventually was able to get ./SweetHome3D-Java3D-1_5_2 to run. At first I got this error:

$ ./SweetHome3D-Java3D-1_5_2 
Exception in thread "main" java.lang.UnsatisfiedLinkError: /home/ayoung/apps/sweet/SweetHome3D-6.4.2/lib/libj3dcore-ogl.so: libnsl.so.1: cannot open shared object file: No such file or directory

I was able to resolve it with guidance from this thread. I had to install libnsl.

$ yum search libnsl
========================= Name Exactly Matched: libnsl =========================
libnsl.i686 : Legacy support library for NIS
libnsl.x86_64 : Legacy support library for NIS
======================== Name & Summary Matched: libnsl ========================
libnsl2-devel.i686 : Development files for libnsl
libnsl2-devel.x86_64 : Development files for libnsl
============================= Name Matched: libnsl =============================
libnsl2.x86_64 : Public client interface library for NIS(YP) and NIS+
libnsl2.i686 : Public client interface library for NIS(YP) and NIS+
[ayoung@ayoungP40 SweetHome3D-6.4.2]$ sudo yum install libnsl

And then it runs.


Moving things around in OpenStack

Posted by Adam Young on December 16, 2020 12:49 AM

While reviewing the comments on the Ironic spec for Secure RBAC, I had to ask myself if the “project” construct makes sense for Ironic.  I still think it does, but I’ll write this down to see if I can clarify it for me, and maybe for you, too.

Baremetal servers change.  The whole point of Ironic is to control the change of Baremetal servers from inanimate pieces of metal to “really useful engines.”  This needs to happen in a controlled and unsurprising way.

Ironic the server does what it is told. If a new piece of metal starts sending out DHCP requests, Ironic is going to PXE boot it.  This is the start of this new piece of metal’s journey of self-discovery.  At least as far as Ironic is concerned.

But really, someone had to rack and wire said piece of metal.  Likely the person that did this is not the person that is going to run workloads on it in the end.  They might not even work for the same company;  they might be a delivery person from Dell or Supermicro.  So, once they are done with it, they don’t own it any more.

Who does?  Who owns a piece of metal before it is enrolled in the OpenStack baremetal service?

No one.  It does not exist.

Ok, so lets go back to someone pushing the button, booting our server for the first time, and it doing its PXE boot thing.

Or, we get the MAC address and enter that into the ironic database, so that when it does boot, we know about it.

Either way, Ironic is really the playground monitor, just making sure it plays nice.

What if Ironic is a multi-tenant system?  Someone needs to be able to transfer the baremetal server from where ever it lands up front to the people that need to use it.

I suspect that transferring metal from project to project is going to be one of the main use cases after the sun has set on day one.

So, who should be allowed to say what project a piece of baremetal can go to?

Well, in Keystone, we have the idea of hierarchy.  A Project is owned by a domain, and a project can be nested inside another project.

But this information is not passed down to Ironic.  There is no way to get a token for a project that shows its parent information.  But a remote service could query the project hierarchy from Keystone. 

https://docs.openstack.org/api-ref/identity/v3/?expanded=show-project-details-detail#show-project-details

Say I want to transfer a piece of metal from one project to another.  Should I have a token for the source project or the destination project?  OK, dumb question, I should definitely have a token for the source project.  The smart question is whether I should also have a token for the destination project.

Sure, why not.  Two tokens: one with the “delete” role and one with the “create” role.

The only problem is that nothing like this exists in OpenStack.  But it should.

We could fake it with hierarchy;  I can pass things up and down the project tree.  But that really does not one bit of good.  People don’t really use the tree like that.  They should.  We built a perfectly nice tree and they ignore it.  Poor, ignored, sad, lonely tree.

Actually, it has no feelings.  Please stop anthropomorphising the tree.

What you could do is create the destination object, kind of a potential piece-of-metal or metal-receiver.  This receiver object gets a UUID.  You pass this UUID to the “move” API, but you call the move API with a token for the source project.  The move is done atomically. Let’s call this thing identified by a UUID a move-request.

The order of operations could be done in reverse.  The operator could create the move request on the source, and then pass that to the receiver.  This might actually make more sense, as you need to know about the object before you can even think to move it.

Both workflows seem to have merit.

And…this concept seems to be something that OpenStack needs in general. 

In fact, why should the API not be a generic API? I mean, it would have to be per service, but the same API could be used to transfer VMs between projects in Nova and volumes between projects in Cinder. The API would have two verbs: one for creating a new move request, and one for accepting it.

POST /thingy/v3.14/resource?resource_id=abcd&destination=project_id

If this is called with a token, it needs to be scoped. If it is scoped to the project_id in the API, it creates a receiving type request. If it is scoped to the project_id that owns the resource, it is a sending type request. Either way, it returns a URL. Call GET on that URL and you get information about the transfer. Call PATCH on it with the appropriately scoped token, and the resource is transferred. The PATCH should probably also carry enough information to prove that you know what you are doing: maybe you have to specify the source and target projects in that request.
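To make that concrete, here is a purely hypothetical sketch of the flow. None of these paths, fields, or responses exist in any OpenStack service today; the service is still called “thingy” for a reason.

# receiver side: create a move request with a token scoped to the destination project
curl -X POST http://thingy.example.com/thingy/v3.14/move-requests \
     -H "X-Auth-Token: $DEST_PROJECT_TOKEN" \
     -d '{"resource_id": "abcd", "destination": "dest_project_id"}'
# returns a URL such as /thingy/v3.14/move-requests/mr-123

# anyone with an appropriately scoped token can check on it
curl http://thingy.example.com/thingy/v3.14/move-requests/mr-123 \
     -H "X-Auth-Token: $DEST_PROJECT_TOKEN"

# sender side: accept the transfer with a token scoped to the project that owns the resource
curl -X PATCH http://thingy.example.com/thingy/v3.14/move-requests/mr-123 \
     -H "X-Auth-Token: $SOURCE_PROJECT_TOKEN" \
     -d '{"source": "source_project_id", "destination": "dest_project_id"}'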

A foolish consistency is the hobgoblin of little minds.

Edit: OK, this is not a new idea. Cinder went through the same thought process according to Duncan Thomas. The result is this API: https://docs.openstack.org/api-ref/block-storage/v3/index.html#volume-transfer

Which looks like it then morphed to this one:

https://docs.openstack.org/api-ref/block-storage/v3/index.html#volume-transfers-volume-transfers-3-55-or-later


Episode 241 – Door 16: 16 bits of change

Posted by Josh Bressers on December 16, 2020 12:01 AM

Josh and Kurt talk about the switch from 16 to 32 to 64 bit and even the changes from Intel to ARM

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_241_Door_16_16_bits_of_change.mp3

Episode 240 – Door 15: Supplier compliance

Posted by Josh Bressers on December 15, 2020 12:01 AM

Josh and Kurt talk about supplier compliance

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_240_Door_15_Supplier_compliance.mp3

Links

Adding an IP address to a Bridge

Posted by Adam Young on December 14, 2020 04:59 PM

OpenShift requires a load balancer for providing access to the hosted applications. Although I can run a three node cluster, I need a fourth location to provide a load balancer that can then provide access to the cluster.

For my home lab set up, this means I want to run one on my bastion host….but it is already running HTTP and (FreeIPA) Red Hat IdM. I don’t want to break that. So, I want to add a second IP address to the bastion host, and have all of the existing services make use of the existing IP address. Only the new HA Proxy instance will use the new IP address.

This would be trivial for a simple Ethernet port, but I am using a Bridge, which makes it a touch trickier, but not terribly so.

Adding an IP address can be done using the following command:

sudo ip addr add 192.168.123.6/24 dev br0

The IP address comes from the same subnet that both the bastion host and the OpenShift cluster machines already use. The DHCP server does not allocate addresses below .100, so this is a safe static value to use. br0 already has the address 192.168.123.1.
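A quick way to confirm both addresses are attached to the bridge (just a verification step, not from the original write-up):

ip -4 addr show dev br0
# should list both 192.168.123.1/24 and 192.168.123.6/24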

In fact, that IP address is visible in the network scripts:

$ cat /etc/sysconfig/network-scripts/ifcfg-br0 
STP=yes
BRIDGING_OPTS=priority=32768
TYPE=Bridge
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=br0
UUID=4ca027d3-c472-4471-888b-12c295ad2cc1
DEVICE=br0
ONBOOT=yes


IPADDR=192.168.123.1
PREFIX=24


However, I want this to persist over a reboot. If I bring the br0 connection down and then back up again, it is gone.

Well, if I am dumb and I bring it down when I am logged on over it, I lock myself out, but fortunately the device also has a Wireless connection.

I can use the nmcli command to add the additional address like this:

sudo nmcli con mod br0 +ipv4.addresses "192.168.123.6/24"

Which does not make the change immediately, but rather requires that I bring the device down and back up.

And I freeze myself out of the Bastion host on that interface. What is wrong?

$ ping nuzleaf
PING nuzleaf.home.younglogic.net (192.168.123.1) 56(84) bytes of data.
From ayoungP40 (192.168.123.2) icmp_seq=8 Destination Host Unreachable

Looking at the routing table:

$ ip route
default via 10.0.0.1 dev wlp2s0 proto dhcp metric 600 
10.0.0.0/24 dev wlp2s0 proto kernel scope link src 10.0.0.240 metric 600 
10.88.0.0/16 dev cni-podman0 proto kernel scope link src 10.88.0.1 linkdown 
192.168.100.0/24 dev virbr0 proto kernel scope link src 192.168.100.1 linkdown 
192.168.123.0/24 dev br0 proto kernel scope link src 192.168.123.1 metric 425 
192.168.123.0/24 dev br0 proto kernel scope link src 192.168.123.6 metric 425 
192.168.130.0/24 dev virbr1 proto kernel scope link src 192.168.130.1 linkdown 

We have two entries for the 192.168.123.0/24 network. I know that I want the .1 entry. If I delete both, and add back in one, I get ping responses:

sudo ip route del 192.168.123.0/24
sudo ip route del 192.168.123.0/24
sudo ip route add 192.168.123.0/24 via 192.168.123.1

Can I add this as a static route? I try

 sudo  nmcli connection modify br0 +ipv4.routes "192.168.123.0/24 192.168.123.1"

But now I have 3 routes. I need to get rid of that DEFROUTE=yes value. I resist the urge to do this via a text editor and instead turn again to nmcli:

sudo  nmcli connection modify br0 ipv4.never-default yes

Bring the device down and back up again. It takes a moment for the route information to settle, but I start getting ping responses again after a few seconds. But can I log in? ssh to the machine….yes. Eventually.

Going back to the routing table:

$ ip route
default via 10.0.0.1 dev wlp2s0 proto dhcp metric 600 
10.0.0.0/24 dev wlp2s0 proto kernel scope link src 10.0.0.240 metric 600 
10.88.0.0/16 dev cni-podman0 proto kernel scope link src 10.88.0.1 linkdown 
192.168.100.0/24 dev virbr0 proto kernel scope link src 192.168.100.1 linkdown 
192.168.123.0/24 dev br0 proto kernel scope link src 192.168.123.1 metric 425 
192.168.123.0/24 dev br0 proto kernel scope link src 192.168.123.6 metric 425 
192.168.123.0/24 via 192.168.123.1 dev br0 proto static metric 425 
192.168.130.0/24 dev virbr1 proto kernel scope link src 192.168.130.1 linkdown 

Once again, I delete all of the routes for the 192.168.123.0/24 network. I run the following command three times:

sudo ip route del 192.168.123.0/24

Then recycle the bridge interface:

$ sudo nmcli conn down br0
Connection 'br0' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/19)
$ sudo nmcli conn up br0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/21)

They are still there…but I can still connect over the wired interface. I think the static route takes precedence. I’m going to leave it as is for now.

EDIT: SSH is taking forever to connect, even though pings are returned. Not sure if it is the routing, or DNS. It is always DNS, isn’t it?

Committee or Community: Slowing down the future

Posted by Josh Bressers on December 14, 2020 03:02 PM

I wrote a blog post about looking back, and I have a bit of snark in there where I talk about slowing down the future. I wanted to explain this a bit more and give everyone some food for thought around how we used to do things and how we should do them moving forward. There are groups and people that exist to slow things down. Sometimes that’s on purpose for good reasons, sometimes it’s on purpose for bad reasons, sometimes it’s not on purpose at all.

I want to start with the idea that a lot of standards are there to slow us down on purpose. This isn’t meant to be a hot take, this is the actual truth and it’s a good thing. Standards exist to help everyone work together. If standards change too quickly it creates barriers instead of opportunities. Imagine if HTTP or TCP/IP changed drastically every year. It would be horrible, the internet wouldn’t look anything like it does today.

Now, there are times when slow change is the opposite of what we want to do. Emerging technologies are a great example of this. Imagine if the Linux Kernel API changes had to pass a standards committee. There would be no progress, development would grind to a halt and nobody would want to contribute to such a project. The project wouldn’t be the success it is today.

There are some standards groups where being slow actually helps progress, and there are some groups that hurt progress by moving slowly. For the purpose of this blog post, let’s focus on new technologies. New technology needs to move fast and iterate without a committee telling them what to do. New technologies should work more like an open source project to move forward. In the world of open source it’s easier to build an example then talk about what the example does. The work is fast and the work itself is the discussion. This model has mostly taken over the world. It is fast, open, and makes it easy to help.

What does this have to do with standards?

Every now and then I get invited to a meeting for a standards group, usually having something to do with open source security, and every time it ends the same. It’s the same people saying the same things they’ve been saying for decades, and most of those ideas didn’t work back then either. There is little progress because the solution to most problems seems to be “committee harder”. I have a suspicion two hours of building a prototype is worth two months of these meetings.

There are problems that can’t be solved by a committee when the committee moves slower than reality. You’re actually moving backwards in relation to the world around you. Consider the case where you’re rowing a boat against the current in a river. If you’re rowing slower than the river is flowing, you’re not making progress. You’re moving backwards, just a bit more slowly than if you weren’t rowing at all.

How does progress work in the open source world? Progress in the open source world gets done because someone does it. It doesn’t get done because a committee talks about the problem and solution for months. There are things that move forward because someone does the work. You try a solution, if it doesn’t work you iterate on it a bit and try again. Eventually something that works emerges. I am aware there are counter examples to this, by all means, Tweet them at me.

BUT NO CIVILIZED SOCIETY CAN WORK THIS WAY! you shout. Sometimes it shouldn’t (I mentioned that above). But there are many, many examples against this today. Successful open source isn’t built on top of committees. Successful open source is built on top of community.

I have two examples that show off both sides of this universe.

First, if we look at all the source composition scanners that exist, we can see a lot of progress. I’ve written at length about the quality of these scanners (spoiler: it’s not good), but fundamentally they are getting better and will keep getting better. There is no standardization at the moment, which does create some challenges as a user, but given how new this technology is, nobody really knows what it will look like in a year; I’d rather see them move fast and get better. Trying to standardize this would stifle progress at a time when progress is hugely important.

But as a related counterexample I want to pick on security vulnerability identifiers. CWE is very much a committee-driven boat. Anyone who knows how CWE works understands it’s not moving fast enough in the modern world. CWE is paddling against the current, but it’s not paddling fast enough. I have a previous post where I look at some CWE details; it’s pretty clear it needs some updating.

I think a huge reason we don’t see much progress in this space is because there is more talking than work. It’s possible nobody will ever do the work because it’s not very exciting. There are just certain things that struggle to exist as open source projects.

The next obvious question is how do we decide if our project should be a committee or a community? I think if you have to ask the question you should start with a community. It’s probably going to be a lot easier to transition from a community to a committee than the other way around. I also have a suspicion the future will be communities, even for slow moving things. It takes reality time to catch up sometimes.

So how do you start a community instead of a committee? Committees are mostly driven from meetings, communities are not. If your group has a lot of meetings and not a lot of work (meetings are not work), then you have a committee. If you have almost no meetings, many public discussions, and a lot of git commits – you have a community.

The purpose of this post is really about setting expectations for the future. If you’re looking to change the future, a committee isn’t going to do it. What will change the future is something that looks like an open source project driven by community collaboration. Open source won for a reason. It doesn’t just work better, it is better.

Episode 239 – Door 14: Backdoors

Posted by Josh Bressers on December 14, 2020 12:01 AM

Josh and Kurt talk about backdoors in open source software

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_239_Door_14_Backdoors.mp3

Episode 238 – Door 13: Unlucky or survivor bias?

Posted by Josh Bressers on December 13, 2020 12:01 AM

Josh and Kurt talk about the unluckiest man in the world and survivor bias

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_238_Door_13_Unlucky_or_survivor_bias.mp3

Links

Uniforms

Posted by Adam Young on December 12, 2020 03:00 AM

My folks asked me to get my stuff out of their storage. Included were my old West Point uniforms…and something special.

[Gallery: Uniforms]

My grandfather’s World War Two Eisenhower jacket and a shirt with his First Sergeant stripes.

A tight fit. His build was like mine…when he was 30 and I was 20.

Episode 237 – Door 12: Video game hacking

Posted by Josh Bressers on December 12, 2020 12:01 AM

Josh and Kurt talk about video game hacking. The speedrunners are doing the best security research today

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_237_Door_12_Video_game_hacking.mp3

Links

Episode 236 – Door 11: Should you get on a 737?

Posted by Josh Bressers on December 11, 2020 12:01 AM

Josh and Kurt talk about the safety of a 737

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_236_Door_11_Should_you_get_on_a_737.mp3

Links

Content Based Access Control in Messaging

Posted by Adam Young on December 10, 2020 02:58 PM

In an OpenStack system, the communication between the compute nodes and the scheduler goes through a messaging system such as RabbitMQ. While there have been different models over the years, the basic assumption has remained that all actors identify themselves to the broker via a password and are trusted from that point forward.

What would happen if a compute node was compromised? The service running on the node could send any message on the bus that it wanted. Some of these messages are not ones that a compute node should ever send, such as “Migrate VM X to Node Y.” If the compromise was delivered via a VM, that hostile VM could then attempt to migrate itself to other nodes and compromise them, or could attempt to migrate other VMs to the compromised nodes and read their contents.

How could we mitigate attacks of this nature?


One way to defend against message-based attacks is to use the identity of the sender to confirm it is authorized to send messages of that type. However, since a compromised node can write any data into the message it wants, we cannot rely on the content of the messages to establish identity.

Let’s say we require all messages to be signed. This would be comparable to the CMS-based PKI tokens I wrote, which have since been removed from Keystone. CMS is a PKI-based approach to document signing. The payload of a message is hashed, the hash is encrypted with a private key, and the recipient uses the associated public key to verify the message.

While the acronym PKI is short for Public Key Infrastructure, or asymmetric cryptography, it really means X509 based certificates as the means of distributing the public keys, as that provides a way to establish a chain of trust. There has been a lot written about X509, here and elsewhere.

This approach suffers from the same key distribution problems as PKI tokens, not to mention the expense of encryption. While the cryptography story in Python is better than it was back then, we’d still like to avoid building all that additional infrastructure. Especially since, with Transport Layer Security (TLS), the messages are already encrypted across the wire. PKI is not just the certificates themselves, but the totality of the infrastructure for requesting and signing certificates as well.

TLS uses PKI to set up an encrypted network connection. For the vast majority of TLS applications, only the server certificate is required. However, the server can authenticate the client using client certificates. This is called Mutual TLS, or MTLS. This is a more secure way for a client to identify itself than passwords. We will have to handle the certificate distribution. But we need to do that for the server certificates already, so we need some semblance of PKI anyway.
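As one example of what this looks like in practice, a broker like RabbitMQ can be told to require client certificates with a few rabbitmq.conf entries. This is only a sketch; the listener port is RabbitMQ’s default TLS port and the certificate paths are placeholders.

listeners.ssl.default = 5671
ssl_options.cacertfile = /etc/rabbitmq/ca.pem
ssl_options.certfile   = /etc/rabbitmq/server.pem
ssl_options.keyfile    = /etc/rabbitmq/server.key
# refuse clients that cannot present a certificate signed by our CA
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = true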


Keystone already has the ability to map an X509 certificate attribute to a user; it was one of the first protocols supported in the Identity Federation effort. Thus, if the service has a “user” record in Keystone, the X509 client certificate used to set up the MTLS connection can be used to identify that user. We can even assign roles to these users. Thus, all compute nodes would be assigned users with something like the “compute” role at the service scope.

Probably worthwhile to point out that service:all is a lousy way to scope these privileges. It would be better for the scope to be limited to the Nova service, or better yet, the endpoint. It would be an improvement to even scope them to the region. If regions were projects, we could scope to regions. Anything that can contain other things should be a project. Keystone has too many abstractions that are disjoint. Everything in Keystone should be a project, and live in a unified hierarchy.

If messages only had a “type” field, then the policy rules would attempt to match a role to a message type. Users with the role “compute_scheduler” would be able to send messages of type “migrate VM” but users with the role “compute_node” would not.
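A hypothetical set of rules for that check, written in the spirit of oslo.policy (the message types and role names here are made up for illustration):

# message-policy.yaml (hypothetical)
"message:migrate_vm":    "role:compute_scheduler"
"message:delete_vm":     "role:compute_scheduler"
"message:report_status": "role:compute_node or role:compute_scheduler"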

I briefly looked into doing this kind of check with RabbitMQ, and it seems like it would work. QPid offered GSSAPI and Kerberos support, which would be ideal, but I think it used X509 for TLS, not the GSSAPI-based encryption, so there would still be the need for PKI. FreeIPA easily supports both, but it would be nice to get things down to a single type of cryptography. However, since QPid support was pulled from OpenStack, it was a non-starter.


Looking to the future, Pulsar and Kafka both seem to support TLS, and multiple types of encryption. Message level policy enforcement could be built right into the platform. This would be useful for a much wider set of applications beyond just OpenStack services.



Episode 235 – Door 10: Deciding what information matters

Posted by Josh Bressers on December 10, 2020 12:01 AM

Josh and Kurt talk about Apple leaking internal IP addresses. Sometimes we create our own emergencies over things that don’t matter.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_235_Door_10_Deciding_what_information_matters.mp3

Links

Standing Desk

Posted by Adam Young on December 09, 2020 08:02 PM

It’s been a while since I built my standing desk. Here’s the current state of it.


The major support is steel pipe, with an external diameter of 2 inches. The wood is all attached via holes cut using a hole saw slightly larger than the pipe.

The bench is heavy, but I’ve still had to tie it to the desk to let me lean against it while working. It lets me stand all day.


The monitor is mounted on a wall mount bracket attached to a 4×4 post.

The top bracket used to have a shelf with a speaker, but these days acts as a hat rack.


Getting the laptop its own shelf allowed me to bring it up to eye level. With the number of video conference meetings, this has become my normal place to see my coworkers. The larger screen usually has two windows docked side by side.

The wrist rest is rounded and polished to be forgiving to arm contact. That is one piece I’d like to redo eventually, but only for aesthetic reasons. It works fine as is.


A section of 4×4 post supports the keyboard tray and desk. The clamp underneath keeps it from slowly slipping down the pipe.


Cables run to the power and to the keyboard and mouse via 1-inch holes.

Episode 234 – Door 09: public key cryptography

Posted by Josh Bressers on December 09, 2020 12:01 AM

Josh and Kurt talk about public key cryptography

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_234_Door_09_public_key_cryptography.mp3

Episode 233 – Door 08: man 8 security

Posted by Josh Bressers on December 08, 2020 12:01 AM

Josh and Kurt talk about the OpenBSD security(8) man page and the importance of automating security

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_233_Door_08_man_8_security.mp3

Links

Episode 232 – Door 07: 7 is the best prime, 2 is the dumbest

Posted by Josh Bressers on December 07, 2020 12:01 AM

Josh and Kurt talk about prime numbers

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_232_Door_07_7_is_the_best_prime_2_is_the_dumbest.mp3

Episode 231 – Door 06: 6 wifi risks … that don’t actually matter

Posted by Josh Bressers on December 06, 2020 12:01 AM

Josh and Kurt talk about the non-problems with public wifi that we love to pretend matter

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_231_Door_06_6_wifi_risks_that_dont_actually_matter.mp3

Links

Episode 230 – Door 05: 5 reasons you need 24/7 robot monitoring

Posted by Josh Bressers on December 05, 2020 12:01 AM

Josh and Kurt talk about why you need 24/7 monitoring of all the things

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_230_Door_05_5_reasons_you_need_247_robot_monitoring.mp3

Links

Episode 229 – Door 04: EFF’s Cover Your Tracks

Posted by Josh Bressers on December 04, 2020 12:01 AM

Josh and Kurt talk about how the EFF is helping us prevent Internet tracking

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_229_Door_04_EFFs_Cover_Your_Tracks.mp3

Links

Episode 228 – Door 03: Do all vulnerabilities matter equally?

Posted by Josh Bressers on December 03, 2020 12:01 AM

Josh and Kurt talk about how many security vulnerabilities matter enough to fix.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_228_Door_03_Do_all_vulnerabilities_matter_equally.mp3

Links

Episode 227 – Door 02: Marketing department or selection bias?

Posted by Josh Bressers on December 02, 2020 12:01 AM

Josh and Kurt talk about cybersecurity statistics and the value of the data we have.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_227_Door_02_Marketing_department_or_selection_bias.mp3

Links

Episode 226 – Door 01: Advent calendars

Posted by Josh Bressers on December 01, 2020 12:01 AM

Josh and Kurt talk about advent calendars. We are publishing 25 five-minute episodes in 25 days. Also portable X-ray machines.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_226_Door_01_Advent_calendars.mp3


Musical Midi Accompaniment: First Tune

Posted by Adam Young on November 23, 2020 02:16 AM

Here is a tune I wrote called “Standard Deviation” done as an accompaniment track using MMA. This is a very simplistic interpretation that makes no use of dynamics, variations in the BossaNova Groove, or even decent repeat logic. But it compiles.

Here’s the MMA file.

Slightly Greater than one Standard Deviation from the Mean:

Episode 225 – Who is responsible if IoT burns down your house?

Posted by Josh Bressers on November 23, 2020 12:01 AM

Josh and Kurt talk about the safety and liability of new devices. What happens when your doorbell can burn down your house? What if it’s your fault the doorbell burned down your house? There isn’t really any prior art for where our devices are taking us, who knows what the future will look like.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_225_Who_is_responsible_if_IoT_burns_down_your_house.mp3

Show Notes

Musical Midi Accompaniment: Understanding the Format

Posted by Adam Young on November 22, 2020 07:52 PM

Saxophone is a solo instrument. Unless you are into the sounds of saxophone multiphonics, harmony requires playing with some other instrument. For Jazz, this tends to be a rhythm section of piano, bass, and drums. As a kid, my practicing (without a live rhythm section) required playing along with pre-recordings of tunes. I had my share of Jamie Aebersold records.

Nowadays, the tool of choice for most Jazz musicians, myself included, is iReal Pro, a lovely little app for the phone. All of the Real Book tunes have had their chord progressions posted and generated. The format is simple enough.

But it is a proprietary app. While I continue to support and use it, I am also looking for alternatives that let me get more involved. One such tool is Musical MIDI Accompaniment. I’m just getting started with it, and I want to keep my notes here.

First is just getting it to play. Whether you get the tarball or check out from Git, there is a trick that you need to do in order to even play the examples: regenerate the libraries.

./mma.py -G

That allows me to generate a MIDI file from a file in the MMA Domain Specific Language (DSL), which is also called MMA. I downloaded the backing track for I’ve Got You Under My Skin from https://www.mellowood.ca/mma/examples/examples.html and, once I regenerated the libraries with the above command, was able to run:

./mma.py ~/Downloads/ive-got-you-under-my-skin.mma
Creating new midi file (120 bars, 4.57 min / 4:34 m:s): '/home/ayoung/Downloads/ive-got-you-under-my-skin.mid'

Which I can then play with timidity.
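For completeness, that playback step is just this (assuming timidity++ is installed):

timidity ~/Downloads/ive-got-you-under-my-skin.mid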

The file format is not quite as simplistic as iReal Pro, but does not look so complex that I won’t be able to learn it.

There are examples of things that look like real programming. Begin and End Blocks.

Line Numbers. This is going to give me flashbacks to coding in Basic on my C64…not such an unpleasant set of memories. And musical ones at that.

OK, let’s take this apart. Here are the first few lines:

// I've Got You Under My Skin

Tempo 105
Groove Metronome2-4

	z * 2

Comments are double slashes. The title is just for documentation.

Tempo is in BPM.

Groove Metronome2-4 says to use a Groove. “Grooves, in some ways, are MMA’s answer to macros … but they are cooler, easier to use, and have a more musical name,” says the manual. So, somewhere we have inherited a Groove called Metronome…something. Is the 2-4 part of the name? It looks like it. I found this in the library:

lib/stdlib/metronome.mma:97:DefGroove Metronome2-4 A very useful introduction. On bar one we have hits on beats 1 and 3; on bar two hits on beats 1, 2, 3 and 4.

This is based on a leader counting off the time in the song. If you play the MIDI file, you can hear the cowbell effect used to count off.

z * 2 is the way of saying that this extends for 2 measures.

The special sequences, “-” or “z”, are also the equivalent of a rest or “tacet” sequence. For example, in defining a 4 bar sequence with a bass pattern on the first 3 bars and a walking bass on bar 4 you might do something like:

If you already have a sequence defined you can repeat or copy the existing pattern by using a single “*” as the pattern name. This is useful when you are modifying an existing sequence.

The next block is the definition of a section he calls Solo. This is a Track.

Begin Solo
	Voice Piano2
	Octave 4
	Harmony 3above
	Articulate 90
	Accent 1 20
 	Volume f
End

I think that the expectation is that you get the majority of the defaults from the Groove, and customize the Solo track.


As a general rule, MELODY tracks have been designed as a “voice” to accompany a predefined form defined in a GROOVE—it is a good idea to define MELODY parameters as part of a GROOVE. SOLO tracks are thought to be specific to a certain song file, with their parameters defined in the song file.

So if this were a Melody track definition, it would be ignored, and the track from the Rhumba base would be used instead.

The next section defines what is done overall.

Keysig 3b


Groove Rhumba
Alltracks SeqRnd Off
Bass-Sus Sequence -		// disable the strings

Cresc pp mf 4

The Keysig directive can be found here. This will generate a MIDI KeySignature event. 3b means 3 flats in the MIDI spec. Major is assumed if not specified. Thus this is the key of E flat.

The Groove Rhumba directive is going to drive most of the song. The definitions for this Groove can be found under the standard library. I might tear apart a file like this one in a future post.

The next two lines specify how the Groove is to be played. SeqRnd inserts randomness into the sequencing to make it more like a live performance; this directive turns that randomness off for all tracks.

Bass-Sus Sequence – seems to be defining a new, blank sequence. The comment implies that it is shutting off the strings. I have to admit, I don’t quite understand this. I’ve generated the file with this directive commented out and detected no differences. Since Bass-Sus is defined in the Bossa Nova Groove under the standard library, I’m tempted to think this is a copy-pasta error. Note that it defines “Voice Strings” and I think that is what he was trying to disable. I suspect a git history will show the Bass-Sus getting pulled out of the Rhumba file.

Cresc pp mf 4 grows the volume from piano (soft) to mezzo-forte (medium loud) over 4 bars. Since no track is specified, it applies to the master volume.

// 4 bar intro

1 	Eb		{4.g;8f;2e;}
2 	Ab      {4.a;8g;2f;}
3 	Gm7     {1g;}
4 	Bb7     {2b;}

Delete Solo

Now we start seeing the measures. The numbers are optional, and just for human readers to keep track.
Measure 1 is an E flat chord. The braces delineate a Riff line. The 4 means a quarter note. The period after it makes it dotted, half again as long, or the equivalent of 3 tied eighth notes. The note played is a g, adjusted for the octave appropriate to the voice. This is followed by an eighth note f and a half note e. This adds up to a full measure: 3/8 + 1/8 + 4/8 = 8/8.
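As a quick sanity check of that arithmetic, here is a small Python sketch. It is not part of MMA; the parsing rule (a leading digit d means a 1/d note, a trailing dot adds half again) is just my reading of the description above.

from fractions import Fraction

def riff_duration(riff):
    # Assumed rule: leading digit d is a 1/d note; a trailing '.' makes
    # it dotted (half again as long). Octaves and articulation ignored.
    total = Fraction(0)
    for note in riff.strip('{}').split(';'):
        note = note.strip()
        if not note:
            continue
        i = 0
        while i < len(note) and note[i].isdigit():
            i += 1
        if i == 0:
            continue  # no explicit duration; skip in this sketch
        length = Fraction(1, int(note[:i]))
        if i < len(note) and note[i] == '.':
            length += length / 2
        total += length
    return total

print(riff_duration("{4.g;8f;2e;}"))  # Fraction(1, 1): a full bar of 4/4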

After the four bar intro, the solo part is deleted, and the normal Rhumba patterns take effect.

The next line is a Repeat directive, which is paired with the repeatending directive on line 129 and repeatend directives on line 135. This says that measures 5-60 should be repeated once, first and second ending style.

The Groove changes many times during the song, and I think this leads to the one bug I noticed: the tempo keeps changing, speeding up and slowing down. I think these changes match up with the Groove changes, but I am not yet certain.

It should be fairly easy to translate one of my songs into this format.

We can’t move forward by looking back

Posted by Josh Bressers on November 19, 2020 03:24 PM

For the last few weeks Kurt and I have been having a lively conversation about security ratings scales. Is CVSS good enough? What about the Microsoft scale? Are there other scales we should be looking at? What’s good, what’s missing, what should we be talking about?

There’s been a lot of back and forth and different ideas. Over the course of our discussions I’ve come to realize an important aspect of security: we don’t look forward very often. What I mean by this is that there is a very strong force in the world of security to use prior art to drive our future decisions. Except all of that prior art is comically out of date in the world of today.

An easy example is existing security standards. All of the working groups that build the standards, and the ideas the working groups bring to the table, are using ideas from the past to solve problems for the future. You can argue that standards are at best a snapshot of the past, made in the present, to slow down the future. I will elaborate on that “slow down the future” line in a future blog post; for now I just want to focus on the larger problem.

It might be easiest to use an example; I shall pick on CVSS. The vast majority of ideas and content in a standard such as CVSS is heavily influenced by what once was. If you look at how CVSS scores things, it’s clear a computer in a datacenter was in mind for many of the metrics. That was fine a decade ago, but it’s not fine anymore. Right now anyone overly familiar with CVSS is screaming “BUT CVSS DOESN’T MEASURE RISK IT MEASURES SEVERITY”, to which I will say: you are technically correct, nobody cares, and nobody uses it like this. Sit down. CVSS is a perfect example of the theory being out of touch with reality.

Am I suggesting CVSS has no value? I am not. In its current form CVSS has some value (it should have a lot more). It’s all we have today, so everyone is using it, and it’s mostly good enough in the same way you can drive a nail with a rock. I have a suspicion it won’t be turned into something truly valuable because it is a standard based on the past. I would like to say we should take this into account when we use CVSS, but nobody will. The people doing the work don’t have time to care about improving something that’s mostly OK, and the people building the standards don’t do the work, so it’s sort of like a Mexican standoff, but one where nobody showed up.

There are basically two options for CVSS: don’t use it because it doesn’t work properly, or use it and just deal with the places it falls apart. Both of those are terrible options. There’s little chance it’s going to get better in the near future. There is a CVSSv4 design document here. If you look at it, does it look like something describing a modern cloud based architecture? They’ve been working on this for almost five years; do you remember what your architecture looked like even a year ago? For most of us in the world of IT a year is a lifetime now. Looking backwards isn’t going to make anything better.

OK, I’ve picked on CVSS enough. The real reason to explain all of this is to change the way we think about problems. Trying to solve problems we already had in the past won’t help with problems we have today, or will have in the future. I think this is more about having a different mindset than security had in the past. If you look at the history of infosec and security, there has been a steady march of progress, but much of that progress has been slower than the forward movement of IT in general. What’s holding us back?

Let’s break this down into People, Places, and Things

People

I use the line above “The people doing the work don’t have time to care, and the people building the standards don’t do the work”. What I mean by this is there are plenty of people doing amazing security work. We don’t hear about them very often though because they’re busy working. Go talk to someone building detection rules for their SIEM, those are the people making a difference. They don’t have time to work on the next version of CVSS. They probably don’t even have the time to file a bug report against an open source project they use. There are many people in this situation in the security world. They are doing amazing work and getting zero credit. These are the heroes we need.

But we have the heroes we deserve. If you look at many of the people working on standards, and giving keynotes, and writing blogs (oh hi), a lot of them live in a world that no longer exists. I willingly admit I used to live in a world that didn’t exist. I had an obsession with problems nobody cared about because I didn’t know what anyone was really doing. I didn’t understand cloud, or detection, or compliance, or really anything new. Working at Elastic and seeing what our customers are accomplishing in the world of security has been a life changing experience. It made me realize some of those people I thought were leaders weren’t actually making the world a better place. They were desperately trying to keep the world in a place where they were relevant and which they could understand.

Places

One of my favorite examples these days is the fact that cloud won, but a lot of people are still talking about data centers or “hybrid cloud” or some other term that means owning a computer. A data center is a place. Places don’t exist anymore, at least not for the people making a difference. Now there are reasons to have a data center, just like there are reasons to own an airplane. Those reasons are pretty niche and solve a unique problem. We’re not worried about those niche problems today.

How many of our security standards focus on having a computer in a room, in a place? Too many. Why doesn’t your compliance document ask about the seatbelts on your airplane? Because you don’t own an airplane, just like you don’t (or shouldn’t) own a server. The world changed, security is still catching up. There are no places anymore. Trying to secure a server in a room isn’t actually helping anyone.

Things

Things is one of the most interesting topics today. How many of us have corporate policies that say you can only access company systems from your laptop, while connected to a VPN, and wearing a hat, or some other draconian rule? Then how many of us have email on our personal phones? But that’s not a VPN, or a hat, or a laptop! Trying to secure a device is silly because there are a near infinite number of devices and possible problems.

We used to think about securing computers. Servers, desktops, laptops, maybe a router or two. Those are tangible things that exist. We can look at them, we can poke them with a stick, we can unplug them. We don’t have real things to protect anymore and that’s a challenge. It’s hard to think about protecting something that we can’t hold in our hand. The world has changed in such a way that the “things” we care about aren’t even things anymore.

The reality is we used to think of things as objects we use, but things of today are data. Data is everything now. Every service, system, and application we use is just a way to understand and move around data. How many of our policies and ideas focus on computers that don’t really exist instead of the data we access and manipulate?

Everything new is old again

I hope the one lesson you take away from all of this is to be wary of leaning on the past. The past contains lessons, not directions. Security exists in a world unlike any we’ve ever seen, the old rules are … old. But it’s also important to understand that even what we think of as a good idea today might not be a good idea tomorrow.

Progress is ahead of you, not behind.

Keystone and Cassandra: Parity with SQL

Posted by Adam Young on November 18, 2020 09:41 PM

Look back at our Pushing Keystone over the Edge presentation from the OpenStack Summit. Many of the points we make are problems faced by any application trying to scale across multiple datacenters. Cassandra is a database designed to deal with this level of scale. So Cassandra may well be a better choice than MySQL or another RDBMS as a datastore for Keystone. What would it take to enable Cassandra support for Keystone?

Let’s start with the easy part: defining the tables. Let’s look at how we define the Federation back end for SQL. We use SQLAlchemy to handle the migrations; we will need something comparable for Cassandra Query Language (CQL), but we also need to translate the table definitions themselves.

Before we create the tables, we need to create a keyspace. I am going to make separate keyspaces for each of the subsystems in Keystone: Identity, Assignment, Federation, and so on. Here’s the Federated one:

CREATE KEYSPACE keystone_federation WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': '3'}  AND durable_writes = true;

The Identity provider table is defined like this:

    idp_table = sql.Table(
        'identity_provider',
        meta,
        sql.Column('id', sql.String(64), primary_key=True),
        sql.Column('enabled', sql.Boolean, nullable=False),
        sql.Column('description', sql.Text(), nullable=True),
        mysql_engine='InnoDB',
        mysql_charset='utf8')
    idp_table.create(migrate_engine, checkfirst=True)

The comparable CQL to create a table would look like this:

CREATE TABLE identity_provider (id text PRIMARY KEY , enables boolean , description text);

However, when I describe the schema to view the table definition, we see that there are many tuning and configuration parameters that are defaulted:

CREATE TABLE federation.identity_provider (
    id text PRIMARY KEY,
    description text,
    enables boolean
) WITH additional_write_policy = '99p'
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND cdc = false
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '16', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND default_time_to_live = 0
    AND extensions = {}
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair = 'BLOCKING'
    AND speculative_retry = '99p';

I don’t know Cassandra well enough to say if these are sane defaults to have in production. I do know that someone, somewhere, is going to want to tweak them, and we are going to have to provide a means to do so without battling the upgrade scripts. I suspect we are going to want to only use the short form (what I typed into the CQL prompt) in the migrations, not the form with all of the options. In addition, we might want an if not exists clause on the table creation to allow people to make these changes themselves. Then again, that might make things get out of sync. Hmmm.
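To make that concrete, here is a rough sketch of what a short-form, idempotent migration step could look like using the DataStax cassandra-driver directly (the post suggests Flask-CQLAlchemy and cassandra-migrate; this is only an illustration, and the contact points and migration harness are assumptions):

from cassandra.cluster import Cluster

# Short-form DDL only, with IF NOT EXISTS so rerunning the step is
# harmless; all tuning options are left to the cluster defaults.
IDENTITY_PROVIDER_DDL = """
CREATE TABLE IF NOT EXISTS identity_provider (
    id text PRIMARY KEY,
    enabled boolean,
    description text
)
"""

def migrate(contact_points=("127.0.0.1",), keyspace="keystone_federation"):
    cluster = Cluster(list(contact_points))
    session = cluster.connect(keyspace)
    session.execute(IDENTITY_PROVIDER_DDL)
    cluster.shutdown()

if __name__ == "__main__":
    migrate()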

There are three more entities in this back end:

CREATE TABLE federation_protocol (id text, idp_id text, mapping_id text,  PRIMARY KEY(id, idp_id) );
CREATE TABLE mapping (id text primary key, rules text);
CREATE TABLE service_provider ( auth_url text, id text primary key, enabled boolean, description text, sp_url text, RELAY_STATE_PREFIX  text);

One thing that is interesting is that we will not be limiting the ID fields to 32, 64, or 128 characters. There is no performance benefit to doing so in Cassandra, nor is there any way to enforce the length limits. From a Keystone perspective, there is not much value either; we still need to validate the UUIDs in Python code. We could autogenerate the UUIDs in Cassandra, and there might be some benefit to that, but it would diverge from the logic in the Keystone code, and explode the test matrix.

There is only one foreign key in the SQL section; the federation protocol has an idp_id that points to the identity provider table. We’ll have to accept this limitation and ensure the integrity is maintained in code. We can do this by looking up the Identity provider before inserting the protocol entry. Since creating a Federated entity is a rare and administrative task, the risk here is vanishingly small. It will be more significant elsewhere.
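A minimal sketch of that check-before-insert pattern, assuming a cassandra-driver session and the table definitions above (the function and exception names are made up for illustration):

class IdentityProviderNotFound(Exception):
    pass

def create_federation_protocol(session, protocol_id, idp_id, mapping_id):
    # Emulate the missing foreign key: confirm the identity provider
    # exists before inserting the protocol row that points at it.
    # The check and the insert are not atomic; as noted above, for a
    # rare administrative operation the risk is small.
    row = session.execute(
        "SELECT id FROM identity_provider WHERE id = %s", (idp_id,)).one()
    if row is None:
        raise IdentityProviderNotFound(idp_id)
    session.execute(
        "INSERT INTO federation_protocol (id, idp_id, mapping_id) "
        "VALUES (%s, %s, %s)",
        (protocol_id, idp_id, mapping_id))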

For access to the database, we should probably use Flask-CQLAlchemy. Fortunately, Keystone is already a Flask based project, so this makes the two projects align.

For migration support, it looks like the best option out there is cassandra-migrate.

An effort like this would best be started out of tree, with an expectation that it would be merged in once it had shown a degree of maturity. Thus, I would put it into a namespace that would not conflict with the existing keystone project. The python imports would look like:

from keystone.cassandra import migrations
from keystone.cassandra import identity
from keystone.cassandra import federation

This could go in its own git repo and be separately pip installed for development. The entrypoints would be registered such that the configuration file would have entries like:

[application_credential]
driver = cassandra

Any tuning of the database could be put under a [cassandra] section of the conf file, or tuning for individual sections could be in keys prefixed with cassandra_ in the appropriate sections, such as [application_credential] as shown above.

It might be interesting to implement a Cassandra token backend and use the default_time_to_live value on the table to control the lifespan and automate the cleanup of the tables. This might provide some performance benefit over the fernet approach, as the token data would be cached. However, the drawbacks due to token invalidation upon change of data would far outweigh the benefits unless the TTL was very short, perhaps 5 minutes.

Just making it work is one thing. In a follow on article, I’d like to go through what it would take to stretch a cluster from one datacenter to another, and to make sure that the other considerations that we discussed in that presentation are covered.

Feedback?

Episode 224 – Are old Android devices dangerous?

Posted by Josh Bressers on November 16, 2020 12:01 AM

Josh and Kurt talk about what happens when important root certificates expire on old Android devices. Who should be responsible? How can we fix this? Is this even something we can or should fix? How devices should age is a really hard problem that needs a lot of discussion.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2038-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_224_Are_old_Android_devices_dangerous.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_224_Are_old_Android_devices_dangerous.mp3</audio>

Show Notes

Episode 223 – Full disclosure won, deal with it

Posted by Josh Bressers on November 09, 2020 12:01 AM

Josh and Kurt talk about the idea behind the full disclosure of security vulnerability details. There have been discussions about this topic for decades with many people on all sides of the issue. The reality is, however, if you look at the current state of things, this discussion is settled: full disclosure won.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2027-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_223_Full_disclosure_won_deal_with_it.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_223_Full_disclosure_won_deal_with_it.mp3</audio>

Show Notes

Dependency Injection in Java

Posted by Adam Young on November 07, 2020 12:04 AM

You might be thinking that this is a long solved problem. I think I have something a little bit different.

This is very similar to the C++ based one that I wrote long ago.

There are several design decisions in this approach that I would like to make explicit.

  • It favors constructor dependency resolution, although it does not require it.
  • It favors type safety.
  • It does not require modification of existing code to add decorators, and thus allows you to mix code from different sources.
  • The dependency declarations are all done in simple Java.
  • The components built this way can implement the composite design pattern, and the framework can be thought of as the builder for them.
  • The declaration of the factories is separate from the implementation of the objects themselves. This allows different factories to be used in different settings.

The resolvers can be chained in parent/child relationships. A common example of this is in a web application, a request will be the child of a session, which in turn will be the child of the web application. The rule is that a child object can resolve a dependency via the parent, but a parent cannot resolve a dependency via a child. Parent scoped objects are expected to live beyond the lifespan of child scoped objects.

Java Generics can be used to select the appropriate factory function to create an instance of a class. Say I want to create an instance of MessagePump, a class that pulls a message from a source and sends it to a sink.

MessagePump messagePump = childResolver.fetch(MessagePump.class);

If we have a factory interface like this:

package com.younglogic.resolver;

public interface Factory<T> {
	T create(Resolver registry);
}

You can collect them up in a registry like this:

package com.younglogic.resolver;

import java.util.HashMap;
import java.util.Map;

public class Registry {
	
	@SuppressWarnings("rawtypes")
	Map<Class, Factory> factories = new HashMap<Class, Factory>();

	public Resolver createResolver(Resolver parent) {
		return new Resolver(this, parent);
	}

	<T> void register(Class<T> c, Factory<T> f) {
		factories.put(c, f);
	}

}

The actual factory implementation can be anonymous classes.

registry.register(MessageSink.class, new Factory<MessageSink>() {
    @Override
    public ConcreteMessageSink create(Resolver registry) {
	return new ConcreteMessageSink();
    }
});

The resolution is done via a lazy load proxy defined like this:

package com.younglogic.resolver;

import java.util.HashMap;
import java.util.Map;
import java.lang.InstantiationError;

public class Resolver {

	@SuppressWarnings("rawtypes")
	Map<Class, Object> instances = new HashMap<Class, Object>();

	private Registry registry;
	private Resolver parent;

	Resolver(Registry registry, Resolver parent) {
		super();
		this.registry = registry;
		this.parent = parent;
	}

	@SuppressWarnings("unchecked")
	public <T extends Object> T fetch(@SuppressWarnings("rawtypes") Class c) throws InstantiationError {
		T o = (T) instances.get(c);
		if (o == null) {
			// Don't synchronize for the fast path, only the
			// slow path where we need to create a new object.

			Factory<T> factory = registry.factories.get(c);
			if (factory == null) {
				if (parent == null){
					throw new InstantiationError();
				}
				return parent.fetch(c);
			}
			synchronized (instances) {
				// Re-check inside the lock: another thread may have
				// created the instance while we were waiting.
				o = (T) instances.get(c);
				if (o == null) {
					try {
						o = (T) factory.create(this);
						instances.put(c, o);
					} catch (ClassCastException e) {
						throw new InstantiationError();
					}
				}
			}
		}
		return o;
	}
}

One feature I have not implemented is a way to distinguish between two different components that implement the same interface.

The drawback to Java in this case is that there is no cleanup; you are still depending on the garbage collector, and finalize might never be called for your objects.

Full code is here: https://github.com/admiyo/javaresolver

Hidden Tuples

Posted by Adam Young on November 06, 2020 11:17 PM

If you are going to write a Sudoku solver, write a brute force, depth first search. You can get it running fast enough.

But what if you couldn’t? What if the puzzles were so big that solving them by brute force was not computationally feasible? A Sudoku puzzle is built on a basis of 3: the blocks are 3 x 3, there are 3 x 3 of them in the puzzle, and the rows and columns are 9 cells (3 * 3) long. This approach scales up. If you were to use a basis of 4, you could use the hexadecimal digits and have 16 x 16 puzzles.

A Basis of K leads to a puzzle size of K^4 cells. The basis can be any integer. A Basis of 10 would lead to a puzzle size of 10,000 cells.

The Sudoku puzzle shows exponential growth. https://en.wikipedia.org/wiki/Combinatorial_explosion#Sudoku

What could you do for a complex puzzle? Use heuristics to reduce the problem set to the point where the brute force algorithm can complete.

One of the most powerful heuristics to use is the Hidden Tuples algorithm, and I recently went through the process of implementing it. I’ll post the code, but first, a bit on notation.

I am going to use the term house to refer to a row, block, or column. It is a generic term to mean any one of them, or, when I say houses, all three of them.

The simplest format for storing a Sudoku puzzle is as a string. You can mark the solved cells with a single digit, and an unsolved cell as a blank. However, to calculate operations on a column you need to do the math to find the column’s location. It turns out that most programming languages do exactly this for two dimensional arrays. So the logical way to store it in memory while working on the puzzle is as a 2D array.

Thus a puzzle defined like this:

sample_puzzle = ("800000000" +
                 "003600000" +
                 "070090200" +
                 "050007000" +
                 "000045700" +
                 "000100030" +
                 "001000068" +
                 "008500010" +
                 "090000400")

would be laid out logically like this:

 8                
     3 6          
   7     9   2    
   5       7      
         4 5 7    
       1       3  
     1         6 8
     8 5       1  
   9         4    
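For illustration, here is a small sketch of how a puzzle string like this might be expanded into a 2D array; the post does not show its build_board function, so this is just an assumed shape for it:

def board_from_string(puzzle, basis=3):
    # '0' marks an unsolved cell; each row becomes a list of 9 characters.
    side = basis * basis
    assert len(puzzle) == side * side
    return [list(puzzle[row * side:(row + 1) * side]) for row in range(side)]

board = board_from_string(sample_puzzle)
print(board[0][0])  # '8'
print(board[2][1])  # '7'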

Another way of interpreting the blank cell is that it can potentially be any value: 1-9 for a Basis of 3, 1 through Basis^2 for larger puzzles. However, those blank cells can be rewritten as arrays of digits. You can then remove each digit once it is no longer viable. Thus, a solved cell is one that contains one digit in it, and an unsolved cell contains more than one digit. This is one of the techniques used when solving a puzzle by hand. But when solving it computationally, it is actually easier to go ahead and expand out a blank board to be filled with:

[‘1′,’2′,’3′,’4′,’5′,’6′,’7′,’8′,’9’]

Now, when you write your puzzle on the board, as you write each solved digit into its location you also go ahead and remove that digit from all of the other cells in the same column, row, and block. Thus, a partial solution looks like this:

┏━━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━┳━━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━┳━━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━┓
┃         │         │         ┃         │         │         ┃         │         │         ┃
┃    8    │   1246  │  24569  ┃   2347  │  12357  │   1234  ┃  13569  │   4579  │ 1345679 ┃
┃         │         │         ┃         │         │         ┃         │         │         ┃
┠─────────┼─────────┼─────────╂─────────┼─────────┼─────────╂─────────┼─────────┼─────────┨
┃         │         │         ┃         │         │         ┃         │         │         ┃
┃  12459  │   124   │    3    ┃    6    │  12578  │   1248  ┃   1589  │  45789  │  14579  ┃
┃         │         │         ┃         │         │         ┃         │         │         ┃
┠─────────┼─────────┼─────────╂─────────┼─────────┼─────────╂─────────┼─────────┼─────────┨
┃         │         │         ┃         │         │         ┃         │         │         ┃
┃   1456  │    7    │   456   ┃   348   │    9    │   1348  ┃    2    │   458   │  13456  ┃
┃         │         │         ┃         │         │         ┃         │         │         ┃
┣━━━━━━━━━┿━━━━━━━━━┿━━━━━━━━━╋━━━━━━━━━┿━━━━━━━━━┿━━━━━━━━━╋━━━━━━━━━┿━━━━━━━━━┿━━━━━━━━━┫
┃         │         │         ┃         │         │         ┃         │         │         ┃
┃  123469 │    5    │   2469  ┃   2389  │   2368  │    7    ┃   1689  │   2489  │  12469  ┃
┃         │         │         ┃         │         │         ┃         │         │         ┃
┠─────────┼─────────┼─────────╂─────────┼─────────┼─────────╂─────────┼─────────┼─────────┨
┃         │         │         ┃         │         │         ┃         │         │         ┃
┃  12369  │  12368  │   269   ┃   2389  │    4    │    5    ┃    7    │   289   │   1269  ┃
┃         │         │         ┃         │         │         ┃         │         │         ┃
┠─────────┼─────────┼─────────╂─────────┼─────────┼─────────╂─────────┼─────────┼─────────┨
┃         │         │         ┃         │         │         ┃         │         │         ┃
┃  24679  │   2468  │  24679  ┃    1    │   268   │   2689  ┃   5689  │    3    │  24569  ┃
┃         │         │         ┃         │         │         ┃         │         │         ┃
┣━━━━━━━━━┿━━━━━━━━━┿━━━━━━━━━╋━━━━━━━━━┿━━━━━━━━━┿━━━━━━━━━╋━━━━━━━━━┿━━━━━━━━━┿━━━━━━━━━┫
┃         │         │         ┃         │         │         ┃         │         │         ┃
┃  23457  │   234   │    1    ┃  23479  │   237   │   2349  ┃   359   │    6    │    8    ┃
┃         │         │         ┃         │         │         ┃         │         │         ┃
┠─────────┼─────────┼─────────╂─────────┼─────────┼─────────╂─────────┼─────────┼─────────┨
┃         │         │         ┃         │         │         ┃         │         │         ┃
┃  23467  │   2346  │    8    ┃    5    │   2367  │  23469  ┃    39   │    1    │   2379  ┃
┃         │         │         ┃         │         │         ┃         │         │         ┃
┠─────────┼─────────┼─────────╂─────────┼─────────┼─────────╂─────────┼─────────┼─────────┨
┃         │         │         ┃         │         │         ┃         │         │         ┃
┃  23567  │    9    │   2567  ┃   2378  │  123678 │  12368  ┃    4    │   257   │   2357  ┃
┃         │         │         ┃         │         │         ┃         │         │         ┃
┗━━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━┻━━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━┻━━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━┛

We can further reduce the above puzzle by running additional algorithms against it. At the simplest, we can just search for singletons: numbers that appear in only one cell in a house.
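As a rough illustration of that singleton scan, here is a sketch that works on a plain list of candidate strings rather than the board and iterator classes used later in this post; the house contents are contrived for the example:

def hidden_singles(house):
    # house: a list of 9 candidate strings, e.g. ['123', '23', ...].
    # Returns {digit: cell index} for digits that appear in exactly one
    # unsolved cell of the house.
    singles = {}
    for digit in "123456789":
        spots = [i for i, cell in enumerate(house)
                 if len(cell) > 1 and digit in cell]
        if len(spots) == 1:
            singles[digit] = spots[0]
    return singles

# A contrived house: the digit 1 only appears in the first cell.
house = ['123', '23', '23', '45', '45', '67', '67', '89', '89']
print(hidden_singles(house))  # {'1': 0}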

The most complex algorithm I have heard of people actually using is hidden tuple removal. I use the word tuple to cover the three lengths that are most valuable: pairs, triplets, and quads. A hidden pair is a pair of digits that only exists in 2 cells in a house. When these are identified, you can remove any other numbers from the two cells where the pair resides. More powerfully, you can extrapolate into the other houses; if both of those cells from a row or column reside in the same block, you can remove those two numbers from all of the other cells in that block.

When we scale this pattern up to the quad tuple, it gets even more powerful. See, no one cell needs to hold all four numbers. We just need 4 cells. These cells can contain other numbers, but the numbers in this set must be constrained to those four cells. Thus, if one cell has [1,2,8,9], another has [1,3,8,9], another has [2,3,4,8,9], and a fourth has [4,8,9], and the digits 1, 2, 3, and 4 appear nowhere else in the house, we can define the tuple as [1,2,3,4] and remove the 8 and the 9 from all four of these cells.

The algorithm is expensive. Basically, create an exhaustive list of sets, figure out which cells contain digits from each set, and then look for sets whose digits are confined to the right number of cells, e.g. 4 for hidden quads.

def find_and_reduce_hidden_tuples(board, n):
    found = find_hidden_tuples(board, n)
    reduce_hidden_tuples(board, found, n)

First let’s talk about finding the hidden tuples. Generate a map and look through it for sets of length k (4 for hidden quads).

def find_hidden_tuples(board, k):
    tuple_map = generate_hidden_tuple_map(board, k)
    found = dict()
    for (key, values) in tuple_map.items():
        if len(values) == k:
            found[key] = values
    return found

To generate the map, we need to generate an exhaustive list of all the subsets, then iterate through each cell on the board. If a cell contains at least one digit of a candidate subset, we add it to the potential matches for that subset.

def generate_hidden_tuple_map(board, k):
    tuples = gen_subsets(common.FULL_SET, k)
    tuple_map = dict()
    for cell in BoardCellIterator(board):
        if len(cell.get()) == 1:
            continue
        for tuple in tuples:
            for digit in tuple:
                if digit in cell.get():
                    append_to_tuple_map(
                        tuple_map, cell.r, cell.c, tuple, board)
                    break
    return tuple_map

To create the exhaustive list of sets:

# select k from n.  Number of elements will be
# n!/(k!(n-k)!)
def gen_subset_indexes(n, k):
    subsets = []
    max = 1 << n
    for i in range(max):
        indexes = []
        for x in range(n):  # check each of the n bit positions
            if (i >> x) & 1 == 1:
                indexes.append(x)
        if len(indexes) == k:
            subsets.append(indexes)
    return subsets


# generates all subsets of length k
def gen_subsets(allset, k):
    subsets = []
    indexes = gen_subset_indexes(len(allset), k)
    for i in indexes:
        subset = []
        for j in i:
            subset.append(allset[j])
        subsets.append(subset)
    return subsets

I am kind of proud of this code. It uses the fact that a subset can be implemented as a bit map, a technique I remember learning from Programming Pearls. Basically, look for (9 bit long) values where the number of bits set is k (4 for hidden quads). Those bits map to the indexes of the subset characters…which line up with the digits themselves.

Once we have all of the subsets, we look for the matches. These will go in a map. I created a custom type for the key for this map:

class TupleKey:
    def __init__(self, house, location, tuple):
        self.house = house
        self.location = location
        self.tuple = tuple

    def __eq__(self, other):
        if not isinstance(other, TupleKey):
            return False
        return ((self.house == other.house) and
                (self.location == other.location) and
                (self.tuple == other.tuple))

    def __hash__(self):
        return hash(self.__str__())

    def __repr__(self):
        return self.__str__()

    def __str__(self):
        return "%s %s %s" % (self.house, self.location, self.tuple)

This key is tuned to how we need to look up the value at the end; it contains all of the information that makes it unique, and it provides a way to find the value in the overall board. Note that the location field is tuned to the house; if the house is a row, we store the row, if it is a block, we store a composite value that uniquely identifies the block.

Appending a match to the map involves checking for singletons: cells in the house that are already solved with one of the tuple’s digits. A solved cell like that removes the tuple in question from being a potential match in that house.

def append_to_tuple_map(tuple_map, r, c, tuple, board):
    def no_singletons(board, itr):
        ok = True
        for cell in itr:
            if not ok:
                break
            for digit in tuple:
                if cell.get() == digit:
                    ok = False
                    break
        return ok

    tuple_str = ""
    for i in tuple:
        tuple_str += i
    keys = []

    if no_singletons(board, RowCellIterator(board, r)):
        keys.append(TupleKey('r', r, tuple_str))

    if no_singletons(board, ColCellIterator(board, c)):
        keys.append(TupleKey('c', c, tuple_str))

    if no_singletons(board, BlockCellIterator(board, r, c)):
        keys.append(TupleKey('b', block_for_rc(r, c), tuple_str))

    for key in keys:
        if tuple_map.get(key) is None:
            tuple_map[key] = []
        tuple_map[key].append((r, c))


def block_for_rc(row, col):
    sg_row = math.floor(row / common.BASIS)
    sg_col = math.floor(col / common.BASIS)
    return [sg_row, sg_col]

Now that we have a map that contains all the hidden tuples, we reduce the cells affected by each hidden tuple:

def reduce_hidden_tuples(board, found, n):
    for (key, cells) in found.items():
        digits_in = key.tuple
        digits_out = common.FULL_SET
        for c in digits_in:
            digits_out = digits_out.replace(c, '')

        itr = None
        if key.house == 'b':
            itr = BlockCellIterator(board, cells[0][0], cells[0][1])
        elif key.house == 'r':
            itr = RowCellIterator(board, cells[0][0])
        elif key.house == 'c':
            itr = ColCellIterator(board, cells[0][1])
        else:
            assert(itr is not None)

        for cell in itr:
            ct = (cell.r, cell.c)
            if ct in cells:
                cell.set(cell.remove_from_set(digits_out))
            else:
                cell.set(cell.remove_from_set(digits_in))

The digits_in removal is, I think, ineffectual. I convinced myself that I needed it when I wrote it, but I am fairly certain that the cells outside of the found list will never contain those numbers.

I am not certain if it makes sense to run this algorithm unless it is coupled with other techniques that make use of the small reductions found by it to do larger reductions on the whole board.

If we go back to our Sudoku puzzles with basis > 3, the tuples found by this technique can be larger than 4. I suspect that there is a relationship of the nature Basis+1. Thus, for our basis of 10, we probably could expect to find matches with a tuple size of 11, slightly greater than one in ten cells in a house.

But this algorithm is expensive. The tuple map grows with the number of elements on the board. I’ve not calculated the growth, but I suspect it is of the nature of n^2. Still, that should be better than N! or exponential growth.

The number of distinct subsets of size k from a set of size n is n!/(k!(n-k)!), which means, among other things, that the number of subsets stays roughly comparable across the small values of k we care about.

However, as the size of n grows, the number of subset lengths we need to generate also grows. A 9 x 9 Sudoku only needs subsets of lengths 2, 3, and 4. If we assume that we should not cross the halfway boundary, a 25 x 25 will need lengths from 2 through 12.

Well, not “need” but rather “benefit from.” You can limit it to hidden “quads” and lower. But the larger puzzle is also going to have higher hit rates at all tuple sizes. I don’t think I’m prepared to advise what would be the right way to scale it.
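A quick back-of-the-envelope check of those subset counts with Python’s math.comb makes the growth concrete, and shows why limiting the tuple size matters on the larger boards:

import math

# Candidate tuples per house on a classic 9x9 board:
for k in (2, 3, 4):
    print(k, math.comb(9, k))    # 2 36, 3 84, 4 126

# On a basis-5 (25x25) board, up to the halfway boundary:
for k in (2, 4, 8, 12):
    print(k, math.comb(25, k))   # 2 300, 4 12650, 8 1081575, 12 5200300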

Episode 222 – HashiCorp Boundary with Jeff Mitchell

Posted by Josh Bressers on November 02, 2020 12:01 AM

Josh and Kurt talk to Jeff Mitchell about the new HashiCorp project Boundary. We discuss what Boundary is, why it’s cooler than a VPN, and how you can get involved.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2023-9" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_222_HashiCorp_Boundary_with_Jeff_Mitchell.mp3?_=9" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_222_HashiCorp_Boundary_with_Jeff_Mitchell.mp3</audio>

Show Notes

Long Refactoring: Completing the Iterator

Posted by Adam Young on October 29, 2020 01:01 AM

Now that the algorithm does not need a new test_board every time, we no longer need to treat the Node object as a Flyweight. We can move the board into the Node object and remove it from the parameter list of all the functions that operate on it.

Changing the Node object parameter list requires changing all the places it is called. The unit test helps here by identifying the places I break when I modify the function declaration.

______________________________________________________ ERROR collecting treesudoku/test_tree_soduku.py _______________________________________________________
treesudoku/test_tree_soduku.py:1: in <module>
    from treesudoku import tree_sudoku
treesudoku/tree_sudoku.py:245: in <module>
    solver = SudokuSolver(import_csv())
treesudoku/tree_sudoku.py:34: in __init__
    return_string = self.tree_to_solution_string(value)
treesudoku/tree_sudoku.py:38: in tree_to_solution_string
    head_node = Tree_Node(None, 0)
E   TypeError: __init__() missing 1 required positional argument: 'table'

Here are the places I had to change. Note that the new parameter is not yet used. We just know, logically, that the parameter passed in to the functions is always going to match the member.

diff --git a/treesudoku/test_tree_soduku.py b/treesudoku/test_tree_soduku.py
index be4b95b..1266e11 100644
--- a/treesudoku/test_tree_soduku.py
+++ b/treesudoku/test_tree_soduku.py
@@ -50,7 +50,7 @@ def test_sudoku_solver():
 
 def test_advance():
     test_board = tree_sudoku.build_board(puzzle0)
-    node = tree_sudoku.Tree_Node(None, 0)
+    node = tree_sudoku.Tree_Node(None, 0, test_board)
     node.write(test_board)
     assert(test_board[0][0] == '9')
     node = node.advance(test_board)
diff --git a/treesudoku/tree_sudoku.py b/treesudoku/tree_sudoku.py
index 89096d7..7d288ce 100644
--- a/treesudoku/tree_sudoku.py
+++ b/treesudoku/tree_sudoku.py
@@ -35,9 +35,9 @@ class SudokuSolver:
             self.solved_board_strings[key] = return_string
 
     def tree_to_solution_string(self, original_board):
-        head_node = Tree_Node(None, 0)
-        curr_node = head_node
         test_board = copy.deepcopy(original_board)
+        head_node = Tree_Node(None, 0, test_board)
+        curr_node = head_node
         curr_node.check_solved(test_board)
         while True:
             if not curr_node.solved:
@@ -187,12 +187,13 @@ board_index = BoardIndexTable()
 
 
 class Tree_Node:
-    def __init__(self, last_node, index):
+    def __init__(self, last_node, index, board):
         self.board_spot = board_index.table[index]
         self.last_node = last_node
         self.next_node = None
         self.solved = False
         self.index = index
+        self.board = board
         self.reset()
 
     def reset(self):
@@ -223,7 +224,7 @@ class Tree_Node:
     def advance(self, test_board):
         node = self
         if node.next_node is None:
-            new_node = Tree_Node(node, self.index + 1)
+            new_node = Tree_Node(node, self.index + 1, self.board)
             new_node.check_solved(test_board)
             node.next_node = new_node
         return node.next_node

Next step is to wire up the functions to use the member variable instead of the parameter. There are two ways to do this: change one function body, parameter list, and calling locations all at the same time, or change the function bodies first, and then each of the places that call it. I prefer the latter, as it lets me keep the code running successfully with fewer breaking changes between stable points.

This is a long change.

--- a/treesudoku/test_tree_soduku.py
+++ b/treesudoku/test_tree_soduku.py
@@ -50,19 +50,19 @@ def test_sudoku_solver():
 
 def test_advance():
     test_board = tree_sudoku.build_board(puzzle0)
-    node = tree_sudoku.Tree_Node(None, 0)
-    node.write(test_board)
+    node = tree_sudoku.Tree_Node(None, 0, test_board)
+    node.write()
     assert(test_board[0][0] == '9')
-    node = node.advance(test_board)
-    node = node.advance(test_board)
-    node.write(test_board)
+    node = node.advance()
+    node = node.advance()
+    node.write()
     assert(test_board[0][3] == '0')
-    node = node.advance(test_board)
-    node.write(test_board)
+    node = node.advance()
+    node.write()
     assert(test_board[0][3] == '9')
-    back_node = node.retreat(test_board)
+    back_node = node.retreat()
     assert(test_board[0][3] == '0')
     assert(node.value == "9")
-    back_node.write(test_board)
+    back_node.write()
     assert(test_board[0][2] == '3')
     assert(back_node.board_spot == '02')
diff --git a/treesudoku/tree_sudoku.py b/treesudoku/tree_sudoku.py
index 89096d7..00ee60c 100644
--- a/treesudoku/tree_sudoku.py
+++ b/treesudoku/tree_sudoku.py
@@ -35,24 +35,24 @@ class SudokuSolver:
             self.solved_board_strings[key] = return_string
 
     def tree_to_solution_string(self, original_board):
-        head_node = Tree_Node(None, 0)
-        curr_node = head_node
         test_board = copy.deepcopy(original_board)
-        curr_node.check_solved(test_board)
+        head_node = Tree_Node(None, 0, test_board)
+        curr_node = head_node
+        curr_node.check_solved()
         while True:
             if not curr_node.solved:
-                curr_node.write(test_board)
+                curr_node.write()
             if self.box_index.is_value_valid(test_board, curr_node):
                 if curr_node.index + 1 >= MAX:
                     break
-                curr_node = curr_node.advance(test_board)
-                curr_node.check_solved(test_board)
+                curr_node = curr_node.advance()
+                curr_node.check_solved()
             else:
                 # backtrack
                 while len(curr_node.possible_values) == 0:
-                    curr_node = curr_node.retreat(test_board)
+                    curr_node = curr_node.retreat()
                 curr_node.next()
-                curr_node.write(test_board)
+                curr_node.write()
         return self.build_solution_string(head_node)
 
     def build_solution_string(self, head_node):
@@ -187,12 +187,13 @@ board_index = BoardIndexTable()
 
 
 class Tree_Node:
-    def __init__(self, last_node, index):
+    def __init__(self, last_node, index, board):
         self.board_spot = board_index.table[index]
         self.last_node = last_node
         self.next_node = None
         self.solved = False
         self.index = index
+        self.board = board
         self.reset()
 
     def reset(self):
@@ -205,36 +206,36 @@ class Tree_Node:
     def __str__(self):
         return self.value
 
-    def write(self, board):
+    def write(self):
         row = int(self.board_spot[0])
         col = int(self.board_spot[1])
-        board[row][col] = self.value
+        self.board[row][col] = self.value
 
-    def check_solved(self, board):
+    def check_solved(self):
         if self.solved:
             return
         row = int(self.board_spot[0])
         col = int(self.board_spot[1])
-        if board[row][col] != '0':
-            self.value = board[row][col]
+        if self.board[row][col] != '0':
+            self.value = self.board[row][col]
             self.possible_values = []
             self.solved = True
 
-    def advance(self, test_board):
+    def advance(self):
         node = self
         if node.next_node is None:
-            new_node = Tree_Node(node, self.index + 1)
-            new_node.check_solved(test_board)
+            new_node = Tree_Node(node, self.index + 1, self.board)
+            new_node.check_solved()
             node.next_node = new_node
         return node.next_node
 
-    def retreat(self, test_board):
+    def retreat(self):
         node = self
         node.reset()
         if not self.solved:
             row = int(self.board_spot[0])
             col = int(self.board_spot[1])
-            test_board[row][col] = '0'
+            self.board[row][col] = '0'
         node = self.last_node
         node.next_node = None
         return node

Here is our current algorithmic function:

    def tree_to_solution_string(self, original_board):
        test_board = copy.deepcopy(original_board)
        head_node = Tree_Node(None, 0, test_board)
        curr_node = head_node
        curr_node.check_solved()
        while True:
            if not curr_node.solved:
                curr_node.write()
            if self.box_index.is_value_valid(test_board, curr_node):
                if curr_node.index + 1 >= MAX:
                    break
                curr_node = curr_node.advance()
                curr_node.check_solved()
            else:
                # backtrack
                while len(curr_node.possible_values) == 0:
                    curr_node = curr_node.retreat()
                curr_node.next()
                curr_node.write()
        return self.build_solution_string(head_node)

One thing that occurred to me when I reviewed this is that when we create a new node, we immediately check whether it is solved. Node creation is done at the top of the function and in the advance function. If we move the is_solved check into the constructor, we treat it as an invariant.

-- a/treesudoku/tree_sudoku.py
+++ b/treesudoku/tree_sudoku.py
@@ -38,7 +38,6 @@ class SudokuSolver:
         test_board = copy.deepcopy(original_board)
         head_node = Tree_Node(None, 0, test_board)
         curr_node = head_node
-        curr_node.check_solved()
         while True:
             if not curr_node.solved:
                 curr_node.write()
@@ -46,7 +45,6 @@ class SudokuSolver:
                 if curr_node.index + 1 >= MAX:
                     break
                 curr_node = curr_node.advance()
-                curr_node.check_solved()
             else:
                 # backtrack
                 while len(curr_node.possible_values) == 0:
@@ -195,6 +193,7 @@ class Tree_Node:
         self.index = index
         self.board = board
         self.reset()
+        self.check_solved()
 
     def reset(self):
         self.value = '9'
@@ -225,7 +224,6 @@ class Tree_Node:
         node = self
         if node.next_node is None:
             new_node = Tree_Node(node, self.index + 1, self.board)
-            new_node.check_solved()
             node.next_node = new_node
         return node.next_node

Something else that is now apparent: we are writing the value both at the bottom and the top of the loop, and that is redundant. We should be able to remove the write at the bottom.

diff --git a/treesudoku/tree_sudoku.py b/treesudoku/tree_sudoku.py
index eadd374..0c7b16c 100644
--- a/treesudoku/tree_sudoku.py
+++ b/treesudoku/tree_sudoku.py
@@ -50,7 +50,6 @@ class SudokuSolver:
                 while len(curr_node.possible_values) == 0:
                     curr_node = curr_node.retreat()
                 curr_node.next()
-                curr_node.write()
         return self.build_solution_string(head_node)
 
     def build_solution_string(self, head_node):

This is the common pattern of a long refactoring. You tease apart, and then you simplify. Extract Method in one place, Inline Method somewhere else if applicable. The algorithm is now fairly straightforward and understandable, and I feel the effort put in thus far would be justified in a production project. But more can certainly be done.