Fedora Security Planet

Episode 92 - Chat with Rami Sass, the CEO of WhiteSource

Posted by Open Source Security Podcast on April 15, 2018 11:00 PM
Josh and Kurt talk to Rami Sass, the CEO of WhiteSource, about third-party open source security as well as open source licensing.

Show Notes


Comparing Keystone and Istio RBAC

Posted by Adam Young on April 09, 2018 05:55 PM

To continue my previous investigation into Istio, and to continue the comparison with the comparable parts of OpenStack, I want to dig deeper into how Istio performs RBAC. Specifically, I would love to answer the question: could Istio be used to perform the Role check?

Scoping

Let me reiterate what I’ve said in the past about scope checking. Oslo-policy performs the scope check deep in the code base, long after Middleware, once the resource has been fetched from the Database. Since we can’t do this in Middleware, I think it is safe to say that we can’t do this in Istio either. So that part of the check is outside the scope of this discussion.

Istio RBAC Introduction

Let's look at how Istio performs RBAC.

The first thing to compare is the data that is used to represent the requester. In Istio, this is the requestcontext. This is comparable to the Auth-Data that Keystone Middleware populates as a result of a successful token validation. How does Istio populate the requestcontext? My current assumption is that it makes a remote call to Mixer with the authenticated REMOTE_USER name.

What is telling is that, in Istio, you have

      user: source.user | ""
      groups: ""
      properties:
         service: source.service | ""
         namespace: source.namespace | ""

Groups, but no roles. Kubernetes has RBAC and Roles, but it is a late addition to the model. However…

Istio RBAC introduces ServiceRole and ServiceRoleBinding, both of which are defined as Kubernetes CustomResourceDefinition (CRD) objects.

ServiceRole defines a role for access to services in the mesh.
ServiceRoleBinding grants a role to subjects (e.g., a user, a group, a service)

This is interesting. Whereas Keystone requires a user to go to Keystone to get a token that is then associated with a set of role assignments, Istio expands this assignment inside the service.

Keystone Aside: Query Auth Data without Tokens

This is actually not surprising. When looking into Keystone Middleware years ago, in the context of PKI tokens, I realized that we could do exactly the same thing: make a call to Keystone based on the identity, and look up all of the data associated with the token. This means that a user can go from a SAML provider right to the service without first getting a Keystone token.

What this means is that Mixer can return the Roles assigned by Kubernetes as additional parameters in the “Properties” collection. However, with the ServiceRole, you would instead get the ServiceRoleBinding list from Mixer and apply it in process.

We discussed Service Roles on multiple occasions in Keystone. I liked the idea, but wanted to make sure that we didn’t limit the assignments, or even the definitions, to just a service. I could see specific Endpoints varying in their roles even within the same service, and certainly having different Service Role Assignments. I’m not certain if Istio distinguishes between “services” and “different endpoints of the same service” yet…something I need to delve into. However, assuming that it does distinguish, what Istio needs to be able to request is “Give me the set of Role bindings for this specific endpoint.”

A history lesson in Endpoint IDs

It was this last step that was a problem in Keystonemiddleware. An endpoint did not know its own ID, and the provisioning tools really did not like the workflow of

  1. create an endpoint for a service
  2. register endpoint with Keystone
  3. get back the endpoint ID
  4. add endpoint  ID to the config file
  5. restart the service

Even if we went with a URL-based scheme, we would have had this problem.  An obvious (in hindsight) solution would be to pre-generate the IDs as a unique hash, to pre-populate the configuration files, and to post the IDs to Keystone.  These IDs could easily be tagged as a nickname, not even the canonical name of the service.
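
As a rough sketch of what that pre-generation could look like (the inputs and the truncation to 32 characters are my own choices, not a Keystone convention), the installer could derive a deterministic ID from values it already knows, then write it into the service config file and post it to Keystone:

import hashlib

def endpoint_id(service_name, interface, url):
    # Derive a stable, repeatable ID from data the installer already has.
    seed = '|'.join((service_name, interface, url)).encode('utf-8')
    return hashlib.sha256(seed).hexdigest()[:32]

print(endpoint_id('nova', 'public', 'https://compute.example.com:8774'))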

Istio Initialization

Istio does not have this problem directly, as it knows the name of the service that it is protecting, and can use that to fetch the correct rules.  However, it does point to a chicken-and-egg problem that Istio has to solve: which is created first, the service itself, or the abstraction in Istio to cover it?  Since Kubernetes is going to orchestrate the Service deployment, it can make the sensible call; Istio can cover the service and just reject calls until it is properly configured.

URL Matching Rules

If we look at the Policy enforcement in Nova, we can use the latest “Policy in Code” mechanisms to link from the URL pattern to the Policy rule key, and the key to the actual enforced policy.  For example, to delete a server we can look up the API

And see that it is

/servers/{server_id}

And from the Nova source code:

policy.DocumentedRuleDefault(
    SERVERS % 'delete',
    RULE_AOO,
    "Delete a server",
    [
        {
            'method': 'DELETE',
            'path': '/servers/{server_id}'
        }
    ]),

With SERVERS % 'delete' expanding, via SERVERS = 'os_compute_api:servers:%s', to os_compute_api:servers:delete.
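
To make that expansion concrete, this is nothing more than Python string interpolation (a trivial illustration, not Nova code):

SERVERS = 'os_compute_api:servers:%s'

# The policy rule key used for the delete-server API.
assert SERVERS % 'delete' == 'os_compute_api:servers:delete'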

Digging into OpenStack Policy

Then, assuming you can get your hands on the policy file specific to that Nova server, you could look at the policy for that rule. Nova no longer includes the generated file in the etc directory, but in my local repo I have:
"os_compute_api:servers:delete": "rule:admin_or_owner"

And rule:admin_or_owner expands to "admin_or_owner": "is_admin:True or project_id:%(project_id)s", which does not do a role check at all. The policy.yaml or policy.json file is not guaranteed to exist, in which case you can either use the tool to generate it, or read the source code. From the above link we see the Rule is:

RULE_AOO = base.RULE_ADMIN_OR_OWNER

and then we need to look where that is defined.

Let's assume, for the moment, that a Nova deployment has overridden the main rule to implement a custom role called Custodian which has the ability to execute this API. Could Istio match that? It really depends on whether it can match the URL pattern '/servers/{server_id}'.

In ServiceRole, the combination of “namespace”+”services”+”paths”+”methods” defines “how a service (services) is allowed to be accessed”.

So we can match down to the Path level. However, there seems to be no way to tokenize a Path. Thus, while you could set a rule that says a client can call DELETE on a specific instance, or DELETE on /services, or even DELETE on all URLs in the catalog (whether they support that API or not), you could not say that it could call DELETE on all services within a specific Namespace. If the URL were defined like this:

DELETE /services?service_id={someuuid}

Istio would be able to match the service ID in the set of keys.

In order for Istio to be able to effectively match, all it really would need is to identify that a URL that ends in /services/feed1234 matches the pattern /services/{service_id}, which is all that the URL pattern matching inside the web servers does.
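
A minimal sketch of that kind of tokenized matching, using nothing but the standard library (this is illustrative only, not Istio or Nova code):

import re

def pattern_to_regex(pattern):
    # Turn {service_id}-style tokens into named groups matching one path segment.
    regex = re.sub(r'\{(\w+)\}', r'(?P<\1>[^/]+)', pattern)
    return re.compile('^%s$' % regex)

matcher = pattern_to_regex('/services/{service_id}')
print(matcher.match('/services/feed1234').groupdict())  # {'service_id': 'feed1234'}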

Istio matching

It looks like paths can have wildcards. Scroll down a bit to the quote:

In addition, we support prefix match and suffix match for all the fields in a rule. For example, you can define a “tester” role that has the following permissions in “default” namespace:

which has the further example:

    - services: ["bookstore.default.svc.cluster.local"]
      paths: ["*/reviews"]
      methods: ["GET"]

Deep URL matching

So, while this is a good start, there are many more complicated URLs in the OpenStack world which are tokenized in the middle: for example, the new API for System role assignments has both the Role ID and the User ID embedded. The Istio match would be limited to matching PUT /v3/system/users/*, which might be OK in this case. But there are cases where a PUT at one level means one role, much more powerful than a PUT deeper in the URL chain.

For example, the base role assignments API itself is much more complex. Assigning a role on a domain uses a URL fragment comparable to the one used to edit the domain specific configuration file. Both would have to be matched with

       paths: ["/v3/domains/*"]
       methods: ["PUT"]

But assigning a role is a far safer operation than setting a domain specific config, which is really an administrative only operation.

However, I had to dig deeply to find this conflict. I suspect that there are ways around it, and comparable conflicts in the catalog.

Conclusion

So, the tentative answer to my question is:

Yes, Istio could perform the Role check part of RBAC for OpenStack.

But it would take some work. Of course. An early step would be to write a Mixer plugin to fetch the auth-data from Keystone based on a user. This would require knowing about Federated mappings and how to expand them, plus querying the Role assignments. Oh, and getting the list of Groups for a user. And the project ID needs to be communicated, somehow.
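
As a very rough sketch of the kind of calls such a plugin would make, the existing Keystone v3 API already exposes role assignments and group membership per user; the endpoint URL and the service token handling below are placeholders, not a real deployment:

import requests

KEYSTONE = 'https://keystone.example.com:5000/v3'       # placeholder endpoint
HEADERS = {'X-Auth-Token': 'service-token-goes-here'}   # placeholder credential

def auth_data_for_user(user_id):
    # Role assignments for the user, with names expanded.
    roles = requests.get(
        '%s/role_assignments?user.id=%s&include_names=true' % (KEYSTONE, user_id),
        headers=HEADERS).json()
    # Groups the user belongs to.
    groups = requests.get(
        '%s/users/%s/groups' % (KEYSTONE, user_id),
        headers=HEADERS).json()
    return {'role_assignments': roles.get('role_assignments', []),
            'groups': groups.get('groups', [])}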

Episode 91 - Security lessons from a 7 year old

Posted by Open Source Security Podcast on April 08, 2018 10:51 PM
Josh and Kurt talk to a 7 year old about security. We cover Minecraft security, passwords, hacking, and many many other nuggets of wisdom.

Show Notes


Comparing Istio and Keystone Middleware

Posted by Adam Young on April 07, 2018 10:00 PM

One way to learn a new technology is to compare it to what you already know. I’ve heard a lot about Istio, and I don’t really grok it yet, so this post is my attempt to get the ideas solid in my own head, and to spur conversations out there.

I asked the great Google “Why is Istio important” and this was the most interesting response it gave me: “What is Istio and Its Importance to Container Security.” So I am going to start there. There are obviously many articles about Istio, and this might not even be the best starting point, but this is the internet: I’m sure I’ll be told why something else is better!

Let's start with the definition:

Istio is an intelligent and robust web proxy for traffic within your Kubernetes cluster as well as incoming traffic to your cluster

At first blush, this seems to be nothing like Keystone. However, let's take a look at the software definition of a proxy:

A proxy, in its most general form, is a class functioning as an interface to something else.

In the OpenStack code base, the package python-keystonemiddleware provides a Python class that complies with the WSGI contract and serves as a proxy to the web application underneath. Keystone Middleware, then, is an analogue to the Istio proxy in that it performs some of the same functions.
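
The pattern itself is small. Here is a bare-bones sketch of a WSGI filter in the same spirit (the names and behavior are illustrative, not the actual keystonemiddleware code):

def auth_filter(app):
    def wrapper(environ, start_response):
        token = environ.get('HTTP_X_AUTH_TOKEN')
        if token is None:
            start_response('401 Unauthorized', [('Content-Type', 'text/plain')])
            return [b'Authentication required']
        # A real filter would validate the token and stash the resulting
        # auth data (user, project, roles) in the environ for the app below.
        environ['auth.user'] = 'validated-user'
        return app(environ, start_response)
    return wrapper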

Istio enables you to specify access control rules for web traffic between Kubernetes services

So…Keystone + Oslo-policy serves this role in OpenStack. The Kubernetes central control is a single web server, and thus it can implement access control for all subcomponents in a single process space. OpenStack is distributed, and thus the access control is also distributed. However, due to the way that OpenStack objects are stored, we cannot do the full RBAC enforcement in middleware (much as I would like to). In order to check access to an existing resource object in OpenStack, you have to perform the policy enforcement check after the object has been fetched from the Database. That check needs to ensure that the project of the token matches the project of the resource. Since we don’t know this information based solely on the URL, we cannot perform it in Middleware.
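
In code terms, the check has to look something like this sketch (the fetch call and field names are stand-ins, not actual Nova code):

def fetch_server_from_db(server_id):
    # Stand-in for the real database lookup.
    return {'id': server_id, 'project_id': 'project-a'}

def delete_server(context, server_id):
    server = fetch_server_from_db(server_id)
    # The scope check: the project in the token must match the project that
    # owns the resource, which is only known after the fetch.
    if server['project_id'] != context['project_id']:
        raise PermissionError('Token is scoped to a different project')
    return 'deleted %s' % server_id

print(delete_server({'project_id': 'project-a'}, 'abc123'))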

What we can perform in Middleware, and what I presented on last year at the OpenStack Summit, is the ability to perform the Role check portion of RBAC in middleware, but defer the project check until later. While we are not going to be doing exactly that, we are pursuing a related effort for application credentials. However, that requires a remote call to a database to create those rules. Istio is not going to have that leeway. I think? Please correct me if I am wrong.

I don’t think Istio could perform this level of deep check, either. It requires parsing the URL and knowing the semantics of the segments, and having the ability to correlate them. That is a lot to ask.

Istio enables you to seamlessly enforce encryption and authentication between nodes

Keystone certainly does not do this. Nothing enforces TLS between services in OpenStack. Getting TLS everywhere in TripleO was a huge effort, and it still needs to be explicitly enabled. OpenStack does not provide a CA. TripleO, when deployed, depends on the Dogtag instance from the FreeIPA server to manage certificates.

By the time Keystone Middleware is executed, the TLS layer would be a distant memory.

Keystoneauth1 is the client piece from Keystone, and it could be responsible for making sure that only HTTPS is supported, but it does not do that today.

Istio collects traffic logs, and then parses and presents them for you:

Keystone does not do this, although it does produce some essential log entries about access.

At this point, I am wondering if Istio would be a viable complement to the security story in OpenStack. My understanding thus far is that it would. It might conflict a bit with the RBAC enforcement, but I suspect that is not the key piece of what it is doing, and conflict there could be avoided.

Please post your comments, as I would really like to get to know this better, and we can share the discussion with the larger community.

Recursive DNS and FreeIPA

Posted by Adam Young on April 05, 2018 05:38 PM

DNS is essential to Kerberos. Kerberos Identity for servers is based around host names, and if you don’t have a common view between client and server, you will not be able to access your remote systems. Since DNS is an essential part of FreeIPA, BIND is one of the services integrated into the IPA server.

When a user wants to visit a public website, like this one, they click a link or type that URL into their browser's navigation bar. The browser then requests the IP address for the hostname inside the URL from the operating system via a library call. On a Linux based system, the operating system makes the DNS call to the server specified in /etc/resolv.conf. But what happens if the DNS server does not know the answer? It depends on how it is configured. In the simple case, where the server is not allowed to make additional calls, it returns a response that indicates the record is not found.
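
That library call looks something like this in Python (the hostname is just an example):

import socket

# Ask the resolver configured in /etc/resolv.conf for the address records.
for family, _, _, _, sockaddr in socket.getaddrinfo('www.younglogic.com', 443):
    print(family, sockaddr)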

Since IPA is supposed to be the one-source-of-truth for a client system, it is common practice to register the IPA server as the sole DNS resolver. As such, it cannot just short-circuit the request. Instead, it performs a recursive search via the machines it has set up as forwarders. For example, I often will set up a sample server that points to the Google resolver at 8.8.8.8. Or, now that Cloudflare has DNS privacy enabled, I might use that.

This is fine inside controlled environments, but is sub-optimal if the DNS portion of the IPA server is accessible on the public internet. It turns out that forwarding requests allows a DNS server to be used in a distributed denial of service attack against other DNS servers. In this attack, the attacker sends requests to all the DNS servers that are acting as forwarders, and these forwarders hammer on the central DNS servers.

If you have set up a FreeIPA server on the public internet, you should plan on disabling Recursive DNS queries. You do this by editing the file /etc/named.conf and setting the values:

allow-recursion {"none";};
recursion no;

And restarting the named service.

And then everything breaks. All of your IPA clients can no longer resolve anything except the entries you have in your IPA server.

The fix for that is to add the (former) DNS forward address as a nameserver entry in /etc/resolv.conf on each machine, including your IPA server. Yes, it is a pain, but it limits the query capacity to only requests local to those machines. For example, if my IPA server is on 10.10.2.1 (yes, I know this is not routable, it is just an example) my resolv.conf would look like this:

search younglogic.com
nameserver 10.10.2.1
nameserver 1.1.1.1

If you wonder if your Nameserver has this problem, use this site to test it.

Episode 90 - Humans and misinformation

Posted by Open Source Security Podcast on April 02, 2018 01:24 AM
Josh and Kurt talk about all the current misinformation, how humans react to it, and what it means for security.

Show Notes


Home made Matzo

Posted by Adam Young on April 01, 2018 05:01 PM

Plate of Matzo
Sufficient quantities to afflict everyone.

Recipe found from the story here.

Episode 89 - Short selling AMD security flaws

Posted by Open Source Security Podcast on March 25, 2018 09:04 PM
Josh and Kurt talk about the recent AMD flaws and the events surrounding the disclosure.

Show Notes


Ansible, Azure, and Managed Disks

Posted by Adam Young on March 22, 2018 02:27 AM

Many applications have a data directory, usually due to having an embedded database. For the set I work with, this includes Red Hat IdM/FreeIPA, CloudForms/ManageIQ, Ansible Tower/AWX, and OpenShift/Kubernetes. It's enough of a pattern that I have Ansible code for pairing a set of newly allocated partitions with a set of previously built virtual machines.

I’ll declare a set of variables like this:

cluster_hosts:
  - {name: idm,    flavor:  m1.medium}
  - {name: sso,    flavor:  m1.medium}
  - {name: master0, flavor:  m1.xlarge}
  - {name: master1, flavor:  m1.xlarge}
  - {name: master2, flavor:  m1.xlarge}
  - {name: node0,  flavor:  m1.medium}
  - {name: node1,  flavor:  m1.medium}
  - {name: node2,  flavor:  m1.medium}
  - {name: bastion,  flavor:  m1.small}
cluster_volumes:
  - {server_name: master0, volume_name: master0_var_volume, size: 30}
  - {server_name: master1, volume_name: master1_var_volume, size: 30}
  - {server_name: master2, volume_name: master2_var_volume, size: 30}

In OpenStack, the code looks like this:

- name: create servers
  os_server:
    cloud: "{{ cloudname }}"
    state: present
    name: "{{ item.name }}.{{ clustername }}"
    image: rhel-guest-image-7.4-0
    key_name: ayoung-pubkey
    timeout: 200
    flavor: "{{ item.flavor }}"
    security_groups:
      - "{{ securitygroupname }}"
    nics:
      -  net-id:  "{{ osnetwork.network.id }}"
         net-name: "{{ netname }}_network" 
    meta:
      hostname: "{{ item.name }}.{{ clustername }}"
      fqdn: "{{ item.name }}.{{ clustername }}"
    userdata: |
      #cloud-config
      hostname: "{{ item.name }}.{{ clustername }}"
      fqdn:  "{{ item.name }}.{{ clustername }}"
      write_files:
        -   path: /etc/sudoers.d/999-ansible-requiretty
            permissions: 440
            content: |
              Defaults:{{ netname }} !requiretty
  with_items: "{{ cluster_hosts }}"
  register: osservers

- name: create openshift var volume
  os_volume:
    cloud: "{{ cloudname }}"
    size: 40
    display_name: "{{ item.volume_name }}"
  register: openshift_var_volume
  with_items: "{{ cluster_volumes }}"

- name: attach var volume to OCE Master
  os_server_volume:
    cloud: "{{ cloudname }}"
    state: present
    server: "{{ item.server_name }}.{{ clustername }}"
    volume:  "{{ item.volume_name }}"
    device: /dev/vdb
  with_items: "{{ cluster_volumes }}"

I wanted to do something comparable with Azure. My First take was this:

- name: Create virtual machine
  azure_rm_virtualmachine:
    resource_group: "{{ az_resources }}"
    name: "{{ item.name }}"
    admin_username: "{{ az_username }}"
    admin_password: "{{ az_password }}"
    image:
      offer: RHEL
      publisher: RedHat
      sku: '7.3'
      urn: 'RedHat:RHEL:7.3:latest'
      version: '7.3.2017090723'
  with_items: "{{ cluster_hosts }}"
  register: az_servers

- name: create additional volumes
  azure_rm_managed_disk:    
    name: "{{ item.volume_name }}"
    location: eastus
    resource_group: "{{ az_resources }}"
    disk_size_gb: 40
    managed_by: "{{ item.server_name }}"
  register: az_cluster_volumes
  with_items: "{{ cluster_volumes }}"

However, when I ran that, I got the error:

“Error updating virtual machine idm – Azure Error: OperationNotAllowed
Message: Addition of a managed disk to a VM with blob based disks is not supported.
Target: dataDisk”

And I was not able to reproduce using the CLI:

$ az vm create -g  ayoung_resources  -n IDM   --admin-password   e8f58a03-3fb6-4fa0-b7af-0F1A71A93605 --admin-username ayoung --image RedHat:RHEL:7.3:latest
{
  "fqdns": "",
  "id": "/subscriptions/362a873d-c89a-44ec-9578-73f2e492e2ae/resourceGroups/ayoung_resources/providers/Microsoft.Compute/virtualMachines/IDM",
  "location": "eastus",
  "macAddress": "00-0D-3A-1D-99-18",
  "powerState": "VM running",
  "privateIpAddress": "10.10.0.7",
  "publicIpAddress": "52.186.24.139",
  "resourceGroup": "ayoung_resources",
  "zones": ""
}
[ayoung@ayoung541 rippowam]$ az vm disk attach  -g ayoung_resources --vm-name IDM --disk  CFME-NE-DB --new --size-gb 100

However, looking into the https://github.com/ansible/ansible/blob/v2.5.0rc3/lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py#L1150:

if not data_disk.get('managed_disk_type'):
    data_disk_managed_disk = None
    disk_name = data_disk['storage_blob_name']
    data_disk_vhd = self.compute_models.VirtualHardDisk(uri=data_disk_requested_vhd_uri)

It looks like there is code to default to blob type if the “managed_disk_type” value is unset.

I added in the following line:

    managed_disk_type: "Standard_LRS"

Thus, my modified ansible task looks like this:

- name: Create virtual machine
  azure_rm_virtualmachine:
    resource_group: "{{ az_resources }}"
    name: "{{ item.name }}"
    managed_disk_type: "Standard_LRS"
    admin_username: "{{ az_username }}"
    admin_password: "{{ az_password }}"
    image:
      offer: RHEL
      publisher: RedHat
      sku: '7.3'
      urn: 'RedHat:RHEL:7.3:latest'
      version: '7.3.2017090723'
  with_items: "{{ cluster_hosts }}"
  register: az_servers

- name: create additional volumes
  azure_rm_managed_disk:    
    name: "{{ item.volume_name }}"
    location: eastus
    resource_group: "{{ az_resources }}"
    disk_size_gb: 40
    managed_by: "{{ item.server_name }}"
  register: az_cluster_volumes
  with_items: "{{ cluster_volumes }}"

Which completed successfully.

Launching Custom Image VMs on Azure With Ansible

Posted by Adam Young on March 20, 2018 08:37 PM

Part of my job is making sure our customers can run our software in public clouds.  Recently, I was able to get CloudForms Management Engine (CFME) to deploy to Azure. Once I got it done manually, I wanted to automate the deployment, and that means Ansible.  It turns out that launching custom images from Ansible is not supported in the current GA version of the Azure modules, but it has been implemented upstream.

Ansible releases package versions here. I wanted the 2.5 build aligned with Fedora 27, which is RC3 right now. Install using dnf:

sudo dnf install  http://releases.ansible.com/ansible/rpm/preview/fedora-27-x86_64/ansible-2.5.0-0.1003.rc3.fc27.ans.noarch.rpm

And then I can launch using the new syntax for the image dictionary. Here is the task fragment from my tasks/main.yml

- name: Create virtual machine
  azure_rm_virtualmachine:
    resource_group: "{{ az_resources }}"
    name: CloudForms
    vm_size: Standard_D1
    admin_username: "{{ az_username }}"
    admin_password: "{{ az_password }}"
    image:
      name: cfme-azure-5.9.0.22-1      
      resource_group: CFME-NE

Note the two dictionary values under image. This works, so long as the user has access to the image, even if it comes from a different resource group.

Big thanks to <sivel> and <jborean93> in #ansible on Freenode IRC for helping get this to work.

SELinux should and does BLOCK access to Docker socket

Posted by Dan Walsh on March 19, 2018 10:19 AM

I get lots of bugs from people complaining about SELinux blocking access to the Docker socket.  For example https://bugzilla.redhat.com/show_bug.cgi?id=1557893

The aggravating thing is, this is exactly what we want SELinux to prevent.  If a container process got to the point of talking to /var/run/docker.sock, you know there is a serious security issue.  Giving a container access to the Docker socket means you are giving it full root on your system.

A couple of years ago, I wrote why giving docker.sock to non privileged users is a bad idea.

Now I am getting bug reports about allowing containers access to this socket.

Access to the docker.sock is the equivalent of sudo with NOPASSWD, without any logging. You are giving the process that talks to the socket the ability to launch a process on the system as full root.

Usually people are doing this because they want the container to do benign operations, like list which containers are on the system, or look at the container logs.  But Docker does not have a nice RBAC system; you basically get full access or no access.  I choose to default to NO ACCESS.

If you need to give a container full access to the system, then run it as a --privileged container or disable SELinux separation for the container.

podman run --privileged ...
or
docker run --privileged ...

podman run --security-opt label:disable ...
or
docker run --security-opt label:disable ...

Run it privileged

There is a discussion going on in the Moby GitHub about breaking out more security options, specifically adding a flag to container runtimes to allow users to specify whether they want kernel file systems in the container (/proc, /sys, ...) to be read-only.

I am fine with doing this, but my concern is that people want to make little changes to the security of their containers, and at a certain point you allow full breakout.  Like above, where you allow a container to talk to the docker.sock.

Security tools are being developed to search for things like containers running as --privileged, but they might not understand that --security-opt label:disable -v /run:/run is the SAME THING from a security point of view.  If it is simple to break out of container confinement, then we should just be honest and run the container with full privilege (--privileged).

Episode 88 - Chat with Chris Rosen from IBM about Container Security

Posted by Open Source Security Podcast on March 18, 2018 08:34 PM
Josh and Kurt talk about container security with IBM's Chris Rosen.

Show Notes


Generating a list of URL patterns for OpenStack services.

Posted by Adam Young on March 16, 2018 05:35 PM

Last year at the Boston OpenStack summit, I presented on an Idea of using URL patterns to enforce RBAC. While this idea is on hold for the time being, a related approach is moving forward building on top of application credentials. In this approach, the set of acceptable URLs is added to the role, so it is an additional check. This is a lower barrier to entry approach.

One thing I requested on the specification was to use the same mechanism as I had put forth on the RBAC in Middleware spec: the URL pattern. The set of acceptable URL patterns will be specified by an operator.

The user selects the URL pattern they want to add as a “white-list” to their application credential. A user could further specify a dictionary to fill in the segments of that URL pattern, to get a delegation down to an individual resource.

I wanted to see how easy it would be to generate a list of URL patterns. It turns out that, for the projects that are using the oslo-policy-in-code approach, it is pretty easy:

cd /opt/stack/nova
 . .tox/py35/bin/activate
(py35) [ayoung@ayoung541 nova]$ oslopolicy-sample-generator  --namespace nova | egrep "POST|GET|DELETE|PUT" | sed 's!#!!'
 POST  /servers/{server_id}/action (os-resetState)
 POST  /servers/{server_id}/action (injectNetworkInfo)
 POST  /servers/{server_id}/action (resetNetwork)
 POST  /servers/{server_id}/action (changePassword)
 GET  /os-agents
 POST  /os-agents
 PUT  /os-agents/{agent_build_id}
 DELETE  /os-agents/{agent_build_id}
...

Similar for Keystone

$ oslopolicy-sample-generator  --namespace keystone  | egrep "POST|GET|DELETE|PUT" | sed 's!# !!' | head -10
GET  /v3/users/{user_id}/application_credentials/{application_credential_id}
GET  /v3/users/{user_id}/application_credentials
POST  /v3/users/{user_id}/application_credentials
DELETE  /v3/users/{user_id}/application_credentials/{application_credential_id}
PUT  /v3/OS-OAUTH1/authorize/{request_token_id}
GET  /v3/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}
GET  /v3/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}/roles/{role_id}
GET  /v3/users/{user_id}/OS-OAUTH1/access_tokens
GET  /v3/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}/roles
DELETE  /v3/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}

The output of the tool is a little sub-optimal: since oslo policy enforcement used to be done using only JSON, and JSON does not allow comments, the URL patterns only show up as comments that I had to scrape out of the YAML format. Ideally, we could tweak the tool to output the URL patterns and the policy rules that enforce them in a clean format.
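
Something like the following would do the scraping (a quick-and-dirty sketch that assumes the comment format shown above, where URL lines start with "# METHOD /path" and rule lines start with #"...):

import re
import subprocess

output = subprocess.check_output(
    ['oslopolicy-sample-generator', '--namespace', 'nova']).decode('utf-8')

url_lines, mapping = [], {}
for line in output.splitlines():
    if re.match(r'#\s+(GET|POST|PUT|DELETE|PATCH)\s+/', line):
        # A commented URL pattern line, e.g. "# DELETE  /os-agents/{agent_build_id}"
        url_lines.append(line.lstrip('#').strip())
    elif line.startswith('#"'):
        # The commented-out default rule that the URL lines above belong to.
        mapping[line.lstrip('#')] = url_lines
        url_lines = []

for rule, urls in mapping.items():
    print(rule, urls)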

What roles are used? Turns out, we can figure that out, too:

$ oslopolicy-sample-generator  --namespace keystone  |  grep \"role:
#"admin_required": "role:admin or is_admin:1"
#"service_role": "role:service"

So only admin or service are actually used. On Nova:

$ oslopolicy-sample-generator  --namespace nova  |  grep \"role:
#"context_is_admin": "role:admin"

Only admin.

How about matching the URL pattern to the policy rule?
If I run

oslopolicy-sample-generator  --namespace nova  |  less

In the middle I can see an example like this (# marks removed for syntax):

# Create, list, update, and delete guest agent builds

# This is XenAPI driver specific.
# It is used to force the upgrade of the XenAPI guest agent on
# instance boot.
 GET  /os-agents
 POST  /os-agents
 PUT  /os-agents/{agent_build_id}
 DELETE  /os-agents/{agent_build_id}
"os_compute_api:os-agents": "rule:admin_api"

This is not 100% deterministic, though, as some services, Nova in particular, enforce policy based on the payload.

For example, these operations can be done by the resource owner:

# Restore a soft deleted server or force delete a server before
# deferred cleanup
 POST  /servers/{server_id}/action (restore)
 POST  /servers/{server_id}/action (forceDelete)
"os_compute_api:os-deferred-delete": "rule:admin_or_owner"

Whereas these operations must be done by an admin operator:

# Evacuate a server from a failed host to a new host
 POST  /servers/{server_id}/action (evacuate)
"os_compute_api:os-evacuate": "rule:admin_api"

Both map to the same URL pattern. We tripped over this when working on RBAC in Middleware, and it is going to be an issue with the Whitelist as well.

Looking at the API docs, we can see that difference in the bodies of the operations. The Evacuate call has a body like this:

{
    "evacuate": {
        "host": "b419863b7d814906a68fb31703c0dbd6",
        "adminPass": "MySecretPass",
        "onSharedStorage": "False"
    }
}

Whereas the forceDelete call has a body like this:

{
    "forceDelete": null
}

From these, it is pretty straightforward to figure out which policy to apply, but as of yet, there is no programmatic way to access that.
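
A sketch of what that disambiguation could look like, if the mapping were maintained by hand (the table below only covers the examples above):

# Several actions share POST /servers/{server_id}/action; the body's single
# top-level key identifies which policy rule applies.
ACTION_TO_POLICY = {
    'evacuate': 'os_compute_api:os-evacuate',
    'forceDelete': 'os_compute_api:os-deferred-delete',
    'restore': 'os_compute_api:os-deferred-delete',
}

def policy_for_action(body):
    (action,) = body.keys()
    return ACTION_TO_POLICY[action]

print(policy_for_action({'forceDelete': None}))               # os_compute_api:os-deferred-delete
print(policy_for_action({'evacuate': {'host': 'b419863b'}}))  # os_compute_api:os-evacuate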

It would take a little more scripting to try and identify the set of rules that mean a user should be able to perform those actions with a project-scoped token, versus the set of APIs that are reserved for cloud operations. However, just looking at the admin_or_owner rule for most is sufficient to indicate that it should be performed using a scoped token. Thus, an end user should be able to determine the set of operations that she can include in a white-list.

Teaching an old dog new tricks

Posted by Dan Walsh on March 12, 2018 08:16 PM

I have been working on SELinux for over 15 years.  I switched my primary job to working on containers several years ago, but one of the first things I did with containers was to add SELinux support.  Now all of the container projects I work on, including CRI-O, Podman, Buildah, as well as Docker, Moby, Rocket, runc, systemd-nspawn, and lxc, have SELinux support.  I also maintain the container-selinux policy package which all of these container runtimes rely on.

Anyway, container runtimes started adding the no-new-privileges capability a couple of years ago.

no_new_privs 

The no_new_privs kernel feature works as follows:

  • Processes set no_new_privs bit in kernel that persists across fork, clone, & exec.
  • no_new_privs bit ensures process/children processes do not gain any additional privileges.
  • Processes aren't allowed to unset the no_new_privs bit once it is set.
  • no_new_privs processes are not allowed to change uid/gid or gain any other capabilities, even if the process executes setuid binaries or executables with file capability bits set.
  • no_new_privs prevents Linux Security Modules (LSMs) like SELinux from transitioning to process labels that have access not allowed to the current process. This means an SELinux process is only allowed to transition to a process type with less privileges.

Oops, that last item is a problem for containers and SELinux.  If I am running a command like

# podman run -ti --security-opt no-new-privileges fedora sh

On an SELinux system, the podman command would usually be running as unconfined_t, and podman asks for the container process to be launched as container_t.

or 

docker run -ti --security-opt no-new-privileges fedora sh

In the case of Docker, the docker daemon is usually running as container_runtime_t, and it will attempt to launch the container as container_t.

But the user also asked for no-new-privileges. If both flags are set the kernel would not allow the process to transition from unconfined_t -> container_t. And in the Docker case the kernel would not allow a transition from container_runtime_t -> container_t.

Well, you may say that is pretty dumb.  no_new_privileges is supposed to be a security measure that prevents a process from gaining further privileges, but in this case it is actually preventing us from lessening the process's SELinux access.

Well, the SELinux kernel and policy have the concept of "typebounds", where a policy writer can declare that one type bounds another type.  For example

typebounds container_runtime_t container_t, and the kernel would then make sure that container_t has no more allow rules than container_runtime_t.  This concept proved to be problematic for two reasons.

Typebounds

Writing policy for the typebounds was very difficult, and in some cases we would have to add additional access to the bounding type.  An example of this is that SELinux can control the `entrypoint` of a process.  For example, we write policy that says httpd_t can only be entered by executables labeled with the entrypoint type httpd_exec_t.  We also had a rule that container_runtime_t can only be entered via the entrypoint type container_runtime_exec_t.  But we wanted to allow any process to be run inside of a container, so we wrote a rule that all executable types could be used as entrypoints to container_t.  With typebounds we would have needed to add all of these rules to container_runtime_t, meaning we would have to allow all executables to be run as container_runtime_t.  Not ideal.

The second problem was that the kernel and policy only allowed a single typebounds per type.  So if we wanted to allow unconfined_t processes to launch container_t processes, we would end up writing rules like

typebounds unconfined_t container_runtime_t
typebounds container_runtime_t container_t.

Now unconfined_t would need to grow all of the allow rules of container_runtime_t and container_t.

Yuck!

nnp_transitions

Well, I was complaining about this to Lukas Vrabec, the guy who took over selinux-policy from me, and he told me about this new allow rule called nnp_transition.  The policy writer can write a rule into policy to say that a process may nnp_transition from one domain to another.

allow container_runtime_t confined_t:process2 nnp_transition;
allow unconfined_t confined_t:process2 nnp_transition;

With a recent enough kernel, SELinux would allow the transition even if the no_new_privs kernel flag was set, and the typebounds rules were NOT in place.  

Boy did I feel like an SELinux NEWBIE.  I added the rules on Fedora 27 and suddenly everything started working.  This feature will also be backported into the RHEL 7.5 kernel.  Awesome.

nosuid_transition

While I was looking at the nnp_transition rules, I noticed that there was also a nosuid_transition permission.  nosuid allows people to mount a file system with the nosuid flag; this tells the kernel that even if a setuid application exists on this file system, the kernel should ignore it and not allow a process to gain privilege via the file.  You always want untrusted file systems like USB sticks to be mounted with this flag.  Well, SELinux systems similarly ignore transition rules on labels based on a nosuid file system.  Similar to the no_new_privs case, this blocks a process from transitioning from a privileged domain to a less privileged domain.  But the nosuid_transition permission allows us to tell the kernel to allow transitions from one domain to another even if the file system is marked nosuid.

allow container_runtime_t confined_t:process2 nosuid_transition;
allow unconfined_t container_t:process2 nosuid_transition;

This means that even if a user used podman to execute a file on a nosuid file system, it would be allowed to transition from unconfined_t to container_t.

Well, it is nice to know there are still new things I can learn about SELinux.

Episode 87 - Chat with Let's Encrypt co-founder Josh Aas

Posted by Open Source Security Podcast on March 11, 2018 11:00 PM
Josh and Kurt talk about Let's Encrypt with co-founder Josh Aas. We discuss the past, present, and future of the project.

Show Notes


Containers and MLS

Posted by Dan Walsh on March 08, 2018 10:43 AM

I have just updated the container-selinux policy to support MLS (Multi Level Security).  

SELinux and container technology have a long history together.  Some people imagine that containers started just a few years ago with the introduction of Docker, but the technology goes back a lot further than that.

SELinux originally supported two types of Mandatory Access Control: Type Enforcement and RBAC (Roles Based Access Control).

Type Enforcement

Type Enforcement is SELinux's main security measure.  Basically, every process on the system gets assigned a type (httpd_t, unconfined_t, container_t ...) and every object (file, directory, socket, tcp port ...) gets assigned a type (httpd_sys_content_t, user_home_t, httpd_port_t, container_file_t).  Then we write rules that define the access between process types and object types.  Note: the type field is always the third field of the SELinux label.

allow container_t container_file_t:file { open read write }

Anything that is not allowed is denied.

RBAC (Roles Based Access Control)

RBAC, although seldom used, was an enhancement on Type Enforcement; it basically controlled the types that a user process could become.  When a user logs onto the system he gets assigned an SELinux user (staff_u, which can have one or more roles, including a default role of staff_r).  Note: the user field is the first field of the SELinux label and the role field is the second field.  Policy defines which types the staff_r role can run with.  For example, staff_r can run with the staff_t type, but it is not allowed to use the unconfined_t type.  If the SELinux user the user logs in with has multiple roles, then he can change his role using the newrole command or sudo.  I often set up my account with a user that has the staff_r and unconfined_r roles.  When I log in I get the staff_r role, but when I become root through sudo I get the unconfined_r role with the unconfined_t type.  This means that if I accidentally ran a setuid app on my system that made me root, I would still be staff_t and not able to modify the kernel.

MLS (Multi Level Security)

Back around the time we were working on RHEL 5 (the 2005/2006 time frame), we decided we wanted to use SELinux for MLS.  We wanted to get RHEL certified as EAL4+ LSPP, which required us to support handling data at different levels.  MLS is very different than Type Enforcement, in that it is not concerned with the type of the process that is running, but with its sensitivity or security level.  The easiest way to understand this is to imagine a group of processes running as TopSecret and another group of processes running as Secret, with the data/files on the system also assigned a level, like TopSecret or Secret.  The kernel then controls the communications between these processes based on levels.  A Secret process can not read TopSecret data.  MLS labels consist of this sensitivity field and up to 1024 different categories.  Note: the MLS field is the fourth field of an SELinux label.  You might have a TopSecret process with the British Intelligence category.  In order for processes to communicate, the kernel enforces SELinux policy on both the sensitivities and categories.  I don't want to get into the weeds on this; there is plenty of information on the web on how MLS systems work.

Way back in RHEL 5 time, the "mount namespace" was added to the kernel to handle MLS workloads.  We wanted to set up a system which allowed a user to log in to a system at the Secret level, and then later log in to the same system as TopSecret.  But we wanted him to have different home directories and /tmp directories when he logged in.  He would see his Secret home directory when he logged in as Secret and the TopSecret directory when he logged in as TopSecret.  The mount namespace allowed us to change the "mount tables" for a group of processes so /home/dwalsh and /tmp would vary for different processes on the system.  Now the mount namespace is the crucial container feature that allows us to have multiple containers on a system, each seeing a different version of the OS mounted on /.

MCS (Multi Category Security)

MCS was introduced around the RHEL 6 time frame (2008) in order to keep virtual machines separate.  We needed a way to make sure that two virtual machines could not attack each other if there was a hypervisor breakout.  From a Type Enforcement point of view, each VM would run as svirt_t and their images would be svirt_image_t; we had rules that said:

allow svirt_t svirt_image_t:file manage_file_perms;

This means we could constrain a VM to only be able to read/write svirt images, so it would not be allowed to attack the rest of the host.  But if every VM ran as svirt_t and all their images were svirt_image_t, they would be able to attack each other.  We needed a way in SELinux to keep the VMs apart.

Most systems did not use the MLS field of SELinux since they were not running in MLS mode; if you are on a targeted system, you will see most processes running with an MLS field of `s0`.  We decided to take advantage of the MLS field and create a new form of policy enforcement.  We would ignore the sensitivity and just use the categories.  We modified libvirt, the tool we use to launch VMs, to assign 2 random categories to each process and image, making sure the categories for the VM matched its image.  Then the kernel would enforce that the categories had to match exactly to allow access.  This means a VM running with a label like system_u:system_r:svirt_t:s0:c1,c2 would be allowed to manage an image labeled system_u:object_r:svirt_image_t:s0:c1,c2 but not allowed access to one labeled system_u:object_r:svirt_image_t:s0:c3,c4.  We called this isolation svirt.
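
A toy illustration of the category assignment idea (this shows the shape of the mechanism, not libvirt's actual code):

import random

def random_mcs_pair():
    # Pick two distinct categories and order them the way labels are written.
    c1, c2 = sorted(random.sample(range(1024), 2))
    return 's0:c%d,c%d' % (c1, c2)

mcs = random_mcs_pair()
print('process label: system_u:system_r:svirt_t:%s' % mcs)
print('image label:   system_u:object_r:svirt_image_t:%s' % mcs)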

We now use MCS Separation and Type Enforcement to isolate containers.  We use different types for containers, so containers run as container_t and content is labeled container_file_t.

Containers and MLS

But what about people who want to run containers on MLS systems?  I recently worked with a user who wanted to run containers on an MLS machine, so we had to modify the policy.  The container-selinux policy is available on GitHub:

https://github.com/projectatomic/container-selinux

ifdef(`enable_mls',`
init_ranged_daemon_domain(container_runtime_t, container_runtime_exec_t, s0 - mls_systemhigh)
')
mls_file_read_to_clearance(container_runtime_t)
mls_file_write_to_clearance(container_runtime_t)

The only changes we needed to make were to allow the container runtimes to run fully ranged.  This means container runtimes like CRI-O, Podman, Buildah and Docker can execute at any level and MCS category.  We also had to allow container runtimes to manage content at all of their sensitivities and categories.  Luckily this only meant we had to add three lines to the existing policy.  Now an MLS user can install the container-selinux package onto an MLS system and install a container runtime, and they can select which MLS label they want to run at.

podman run --security-opt level:TopSecret:BritishIntelligence fedora sh
docker run --security-opt level:s15:c100,c200 -v /var/lib/content:/var/lib/content:Z nginx 

And the processes in the container and the content in the container will match the label.


Managing CloudForms’ Certificates with certmonger

Posted by Adam Young on March 08, 2018 04:30 AM

When you enroll CloudForms with an IdM Server, you do not automatically get the HTTPS certificates from that server. It takes a deliberate additional step to do so.

Since I am using Ansible to provision the server, I have a task to enroll it with the IPA server. This is handled by the appliance_console_cli. My Ansible tasks look like this:

- name: Install IPA Client packages
  tags:
    - ipaclient
  yum: name=ipa-client,ipa-admintools,python-memcached
       state=present

- name: Set nameserver
  tags:
    - ipaclient
  lineinfile:
    path: /etc/sysconfig/network-scripts/ifcfg-eth0
    line: DNS1={{ nameserver }}

- name: Setup resolv.conf
  tags:
    - ipaclient
  template: src=resolv.conf.j2
            dest=/etc/resolv.conf

- name:  ipa-client
  shell: >
    appliance_console_cli --host cfme.{{ ipa_domain }} --ipaserver idm.{{ ipa_domain }} --iparealm {{ ipa_realm }} --ipaprincipal admin --ipapassword {{ ipa_server_password }}
  args:
    creates: /etc/ipa/default.conf
  tags:
    - ipaclient

This does the following:

  1. Installs the packages required to run IPA client
  2. Tells the network layer to use the specified DNS value the next time it updates resolv.conf
  3. Forces an immediate update to resolv.conf with the nameserver needed for IPA client installation
  4. Uses the console script to run ipa-client-install with the appropriate parameters

Running getcert afterwards shows 0 certificates being tracked.

I did not see an option on the appliance_console Text UI to update the certificates, but there is an option using the CLI.

Just running it dumps a stack trace.

[root@cfme ~]# appliance_console_cli --http-cert
creating ssl certificates

ipa: ERROR: did not receive Kerberos credentials

/opt/rh/cfme-gemset/gems/awesome_spawn-1.4.1/lib/awesome_spawn.rb:105:in `run!': /usr/bin/ipa exit code: 1 (AwesomeSpawn::CommandResultError)
from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/principal.rb:42:in `request'
from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/principal.rb:22:in `register'
from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/certificate.rb:43:in `request'
from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/certificate_authority.rb:109:in `configure_http'
from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/certificate_authority.rb:47:in `activate'
from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/cli.rb:339:in `install_certs'
from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/cli.rb:185:in `run'
from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/cli.rb:425:in `parse'
from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/bin/appliance_console_cli:7:in `'
from /opt/rh/cfme-gemset/bin/appliance_console_cli:23:in `load'
from /opt/rh/cfme-gemset/bin/appliance_console_cli:23:in `'

But it does tell you the problem right up front. It is easy enough to kinit and run the script.

[root@cfme ~]# kinit admin
Password for admin@AYOUNG.RDUSALAB:
[root@cfme ~]# appliance_console_cli --http-cert
creating ssl certificates
configuring apache to use new certs

certificate result: http: complete
[root@cfme ~]# getcert list
Number of certificates and requests being tracked: 1.
Request ID '20180308034344':
status: MONITORING
stuck: no
key pair storage: type=FILE,location='/var/www/miq/vmdb/certs/server.cer.key'
certificate: type=FILE,location='/var/www/miq/vmdb/certs/server.cer'
CA: IPA
issuer: CN=server
subject: CN=server
expires: 2021-02-21 16:28:07 UTC
pre-save command:
post-save command: chmod 644 /var/www/miq/vmdb/certs/server.cer /var/www/miq/vmdb/certs/root.crt
track: yes
auto-renew: yes

However, it does not automatically restart the server. So you would get an error like this:

[root@cfme ~]# curl https://`hostname`
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.

Use the CLI to restart the server:

appliance_console_cli --server=restart
[root@cfme ~]# curl https://`hostname`
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.

Hmmm, still no love. What is wrong? Let's check getcert:

[root@cfme ~]# getcert list
Number of certificates and requests being tracked: 1.
Request ID '20180308034344':
	status: MONITORING
	stuck: no
	key pair storage: type=FILE,location='/var/www/miq/vmdb/certs/server.cer.key'
	certificate: type=FILE,location='/var/www/miq/vmdb/certs/server.cer'
	CA: IPA
	issuer: CN=server
	subject: CN=server
	expires: 2021-02-21 16:28:07 UTC
	pre-save command: 
	post-save command: chmod 644 /var/www/miq/vmdb/certs/server.cer /var/www/miq/vmdb/certs/root.crt
	track: yes
	auto-renew: yes

The problem can be seen in the issuer: CN=server field. Instead of generating a new Signing request, it reused the old one.

Let's get rid of the old cert and try again:

[root@cfme ~]# getcert stop-tracking -i  20180308034344
Request "20180308034344" removed.
[root@cfme ~]# sudo mv /var/www/miq/vmdb/certs/server.cer /tmp
[root@cfme ~]# appliance_console_cli --http-cert
creating ssl certificates
configuring apache to use new certs

certificate result: http: complete
[root@cfme ~]# getcert list
Number of certificates and requests being tracked: 1.
Request ID '20180308035841':
	status: MONITORING
	stuck: no
	key pair storage: type=FILE,location='/var/www/miq/vmdb/certs/server.cer.key'
	certificate: type=FILE,location='/var/www/miq/vmdb/certs/server.cer'
	CA: IPA
	issuer: CN=Certificate Authority,O=AYOUNG.RDUSALAB
	subject: CN=cfme.ayoung.rdusalab,O=AYOUNG.RDUSALAB
	expires: 2020-03-08 03:58:42 UTC
	dns: cfme.ayoung.rdusalab
	principal name: HTTP/cfme.ayoung.rdusalab@AYOUNG.RDUSALAB
	key usage: digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment
	eku: id-kp-serverAuth,id-kp-clientAuth
	pre-save command: 
	post-save command: chmod 644 /var/www/miq/vmdb/certs/server.cer /var/www/miq/vmdb/certs/root.crt
	track: yes
	auto-renew: yes

That looks better. The Issuer is the IPA server. Restart the service and check again.

[root@cfme ~]# curl https://`hostname`

<html><head>

</head><body>

Service Unavailable

The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.

</body></html>

Crud. What did I do now?

looking in /var/www/miq/vmdb/log/apache/ssl_error.log

[Wed Mar 07 23:11:54.095648 2018] [proxy:error] [pid 15934] (111)Connection refused: AH00957: HTTP: attempt to connect to 0.0.0.0:3009 (0.0.0.0) failed
[Wed Mar 07 23:11:54.095661 2018] [proxy:error] [pid 15934] AH00959: ap_proxy_connect_backend disabling worker for (0.0.0.0) for 60s
[Wed Mar 07 23:11:54.095667 2018] [proxy_http:error] [pid 15934] [client 10.10.124.229:33360] AH01114: HTTP: failed to make connection to backend: 0.0.0.0

Maybe something didn’t get restarted? Reboot the whole server just to force everything to reinitialize. And then:

# curl https://`hostname`

...

Success.

I suspect that if I had done the certificate work prior to starting the services, I would not have had that problem. I did direct Certmonger calls before I realized that there was a CLI option and did not have to reboot.

So I can add tasks to perform these steps in my Ansible role, right after the IPA client install. Or I can use a custom getcert task. But that is a tale for a different day.

Generating a Callgraph for Keystone

Posted by Adam Young on March 06, 2018 09:53 PM

Once I know a starting point for a call, I want to track the other functions that it calls. pycallgraph will generate an image that shows me that.

All this is done inside the virtual env set up by tox at keystone/.tox/py35.

I need a stub of a script file in order to run it. I'll put this in /tmp:

from keystone.identity import controllers
from keystone.server import wsgi
from keystone.common import request

def main():

    d = dict()
    r  = request.Request(d)

    wsgi.initialize_admin_application()
    c = controllers.UserV3()
    c.create_user(r)

if __name__ == '__main__':
    main()

To install pycallgraph:

pip install pycallgraph

And to run it:

 pycallgraph  --max-depth 6  graphviz /tmp/call_create_user.py 

It errors out due to auth issues (it is actually running the code, so don't do this on a production server).

Here is what it generated.

Not great, but it is a start.
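
If the command-line flags get unwieldy, pycallgraph can also be driven from Python, which makes it easier to filter the trace down to just the keystone modules. Here is a minimal sketch, run from the same tox virtualenv; the include pattern and output file name are my own choices, not anything pycallgraph requires:

# call_graph.py -- a sketch that wraps the stub script shown above
import sys
sys.path.insert(0, '/tmp')  # so the stub in /tmp is importable

from pycallgraph import Config, GlobbingFilter, PyCallGraph
from pycallgraph.output import GraphvizOutput

config = Config(max_depth=6)
# Only trace keystone itself; everything else just adds noise to the image.
config.trace_filter = GlobbingFilter(include=['keystone.*'])

with PyCallGraph(output=GraphvizOutput(output_file='create_user.png'),
                 config=config):
    import call_create_user
    call_create_user.main()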

Inspecting Keystone Routes

Posted by Adam Young on March 06, 2018 05:17 PM

What policy is enforced when you call a Keystone API? Right now, there is no definitive way to say. However, with some programmatic help, we might be able to figure it out from the source code. Let's start by getting a complete list of the Keystone routes.

In the WSGI framework that Keystone uses, a Route is the object that is used to match the URL. For example, when I try to look at the user with UserId abcd1234, I submit a GET request to the URL https://hostname:port/v3/users/abcd1234. The route path is the pattern /users/{user_id}. The WSGI framework handles the parts of the URL prior to that, and eventually needs to pull out a Python function to execute for the route. Here is how we can generate a list of the route paths in Keystone:

from keystone.server import wsgi
app = wsgi.initialize_admin_application()
composing = app['/v3'].application.application.application.application.application.application._app.application.application.application.application
for route in composing._router.mapper.matchlist:
    print(route.routepath)

I’ll put the output at the end of this post.

That long chain of .application properties is due to the way that the pipeline is built using the paste file. In keystone/etc/keystone-paste.ini we see:

[pipeline:api_v3]
# The last item in this pipeline must be service_v3 or an equivalent
# application. It cannot be a filter.
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension_v3 s3_extension service_v3

Each of those pipeline elements is a Python class, specified earlier in the file, that honors the middleware contract. Most of these can be traced back to the keystone.common.wsgi.Middleware base class, which implements this as its __call__ method.

    @webob.dec.wsgify(RequestClass=request_mod.Request)
    @middleware_exceptions
    def __call__(self, request):
        response = self.process_request(request)
        if response:
            return response
        response = request.get_response(self.application)
        return self.process_response(request, response)

The odd middleware out is AuthContextMiddleware, which extends from keystonemiddleware.auth_token.BaseAuthProtocol. See if you can spot the difference:

    @webob.dec.wsgify(RequestClass=_request._AuthTokenRequest)
    def __call__(self, req):
        """Handle incoming request."""
        response = self.process_request(req)
        if response:
            return response
        response = req.get_response(self._app)
        return self.process_response(response)

Yep: self._app.

Here is the output from the above code, executed in the Python interpreter. This does not have the verbs in it yet, but a little more poking should show where they are stored; see the sketch after the route list:

>>> for route in composing._router.mapper.matchlist:
...     print(route.routepath)
... 
/auth/tokens
/auth/tokens
/auth/tokens
/auth/tokens
/auth/tokens/OS-PKI/revoked
/auth/catalog
/auth/projects
/auth/domains
/auth/system
/users/{user_id}/projects
/roles/{prior_role_id}/implies
/roles/{prior_role_id}/implies/{implied_role_id}
/roles/{prior_role_id}/implies/{implied_role_id}
/roles/{prior_role_id}/implies/{implied_role_id}
/roles/{prior_role_id}/implies/{implied_role_id}
/role_inferences
/projects/{project_id}/users/{user_id}/roles/{role_id}
/projects/{project_id}/users/{user_id}/roles/{role_id}
/projects/{project_id}/users/{user_id}/roles/{role_id}
/projects/{project_id}/groups/{group_id}/roles/{role_id}
/projects/{project_id}/groups/{group_id}/roles/{role_id}
/projects/{project_id}/groups/{group_id}/roles/{role_id}
/projects/{project_id}/users/{user_id}/roles
/projects/{project_id}/groups/{group_id}/roles
/domains/{domain_id}/users/{user_id}/roles/{role_id}
/domains/{domain_id}/users/{user_id}/roles/{role_id}
/domains/{domain_id}/users/{user_id}/roles/{role_id}
/domains/{domain_id}/groups/{group_id}/roles/{role_id}
/domains/{domain_id}/groups/{group_id}/roles/{role_id}
/domains/{domain_id}/groups/{group_id}/roles/{role_id}
/domains/{domain_id}/users/{user_id}/roles
/domains/{domain_id}/groups/{group_id}/roles
/system/users/{user_id}/roles
/system/users/{user_id}/roles/{role_id}
/system/users/{user_id}/roles/{role_id}
/system/users/{user_id}/roles/{role_id}
/system/groups/{group_id}/roles
/system/groups/{group_id}/roles/{role_id}
/system/groups/{group_id}/roles/{role_id}
/system/groups/{group_id}/roles/{role_id}
/role_assignments
/OS-INHERIT/domains/{domain_id}/users/{user_id}/roles/{role_id}/inherited_to_projects
/OS-INHERIT/domains/{domain_id}/users/{user_id}/roles/{role_id}/inherited_to_projects
/OS-INHERIT/domains/{domain_id}/users/{user_id}/roles/{role_id}/inherited_to_projects
/OS-INHERIT/domains/{domain_id}/groups/{group_id}/roles/{role_id}/inherited_to_projects
/OS-INHERIT/domains/{domain_id}/groups/{group_id}/roles/{role_id}/inherited_to_projects
/OS-INHERIT/domains/{domain_id}/groups/{group_id}/roles/{role_id}/inherited_to_projects
/OS-INHERIT/domains/{domain_id}/groups/{group_id}/roles/inherited_to_projects
/OS-INHERIT/domains/{domain_id}/users/{user_id}/roles/inherited_to_projects
/OS-INHERIT/projects/{project_id}/users/{user_id}/roles/{role_id}/inherited_to_projects
/OS-INHERIT/projects/{project_id}/users/{user_id}/roles/{role_id}/inherited_to_projects
/OS-INHERIT/projects/{project_id}/users/{user_id}/roles/{role_id}/inherited_to_projects
/OS-INHERIT/projects/{project_id}/groups/{group_id}/roles/{role_id}/inherited_to_projects
/OS-INHERIT/projects/{project_id}/groups/{group_id}/roles/{role_id}/inherited_to_projects
/OS-INHERIT/projects/{project_id}/groups/{group_id}/roles/{role_id}/inherited_to_projects
/regions/{region_id}
/OS-EP-FILTER/endpoints/{endpoint_id}/projects
/OS-EP-FILTER/projects/{project_id}/endpoints/{endpoint_id}
/OS-EP-FILTER/projects/{project_id}/endpoints/{endpoint_id}
/OS-EP-FILTER/projects/{project_id}/endpoints/{endpoint_id}
/OS-EP-FILTER/projects/{project_id}/endpoints
/OS-EP-FILTER/projects/{project_id}/endpoint_groups
/OS-EP-FILTER/endpoint_groups
/OS-EP-FILTER/endpoint_groups
/OS-EP-FILTER/endpoint_groups/{endpoint_group_id}
/OS-EP-FILTER/endpoint_groups/{endpoint_group_id}
/OS-EP-FILTER/endpoint_groups/{endpoint_group_id}
/OS-EP-FILTER/endpoint_groups/{endpoint_group_id}/projects/{project_id}
/OS-EP-FILTER/endpoint_groups/{endpoint_group_id}/projects/{project_id}
/OS-EP-FILTER/endpoint_groups/{endpoint_group_id}/projects/{project_id}
/OS-EP-FILTER/endpoint_groups/{endpoint_group_id}/projects
/OS-EP-FILTER/endpoint_groups/{endpoint_group_id}/endpoints
/users/{user_id}/password
/groups/{group_id}/users
/groups/{group_id}/users/{user_id}
/groups/{group_id}/users/{user_id}
/groups/{group_id}/users/{user_id}
/users/{user_id}/groups
/users/{user_id}/application_credentials
/users/{user_id}/application_credentials
/users/{user_id}/application_credentials/{application_credential_id}
/users/{user_id}/application_credentials/{application_credential_id}
/registered_limits
/registered_limits
/registered_limits
/registered_limits/{registered_limit_id}
/registered_limits/{registered_limit_id}
/limits
/limits
/limits
/limits/{limit_id}
/limits/{limit_id}
/domains/{domain_id}/config
/domains/{domain_id}/config
/domains/{domain_id}/config
/domains/{domain_id}/config
/domains/{domain_id}/config/{group}
/domains/{domain_id}/config/{group}
/domains/{domain_id}/config/{group}
/domains/{domain_id}/config/{group}/{option}
/domains/{domain_id}/config/{group}/{option}
/domains/{domain_id}/config/{group}/{option}
/domains/config/default
/domains/config/{group}/default
/domains/config/{group}/{option}/default
/projects/{project_id}/tags
/projects/{project_id}/tags
/projects/{project_id}/tags
/projects/{project_id}/tags/{value}
/projects/{project_id}/tags/{value}
/projects/{project_id}/tags/{value}
/OS-REVOKE/events
/OS-FEDERATION/identity_providers/{idp_id}
/OS-FEDERATION/identity_providers/{idp_id}
/OS-FEDERATION/identity_providers/{idp_id}
/OS-FEDERATION/identity_providers/{idp_id}
/OS-FEDERATION/identity_providers
/OS-FEDERATION/identity_providers/{idp_id}/protocols/{protocol_id}
/OS-FEDERATION/identity_providers/{idp_id}/protocols/{protocol_id}
/OS-FEDERATION/identity_providers/{idp_id}/protocols/{protocol_id}
/OS-FEDERATION/identity_providers/{idp_id}/protocols/{protocol_id}
/OS-FEDERATION/identity_providers/{idp_id}/protocols
/OS-FEDERATION/mappings/{mapping_id}
/OS-FEDERATION/mappings/{mapping_id}
/OS-FEDERATION/mappings/{mapping_id}
/OS-FEDERATION/mappings/{mapping_id}
/OS-FEDERATION/mappings
/OS-FEDERATION/service_providers/{sp_id}
/OS-FEDERATION/service_providers/{sp_id}
/OS-FEDERATION/service_providers/{sp_id}
/OS-FEDERATION/service_providers/{sp_id}
/OS-FEDERATION/service_providers
/OS-FEDERATION/domains
/OS-FEDERATION/projects
/OS-FEDERATION/identity_providers/{idp_id}/protocols/{protocol_id}/auth
/auth/OS-FEDERATION/saml2
/auth/OS-FEDERATION/saml2/ecp
/auth/OS-FEDERATION/websso/{protocol_id}
/auth/OS-FEDERATION/identity_providers/{idp_id}/protocols/{protocol_id}/websso
/OS-FEDERATION/saml2/metadata
/OS-OAUTH1/consumers
/OS-OAUTH1/consumers
/OS-OAUTH1/consumers/{consumer_id}
/OS-OAUTH1/consumers/{consumer_id}
/OS-OAUTH1/consumers/{consumer_id}
/users/{user_id}/OS-OAUTH1/access_tokens
/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}
/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}
/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}/roles
/users/{user_id}/OS-OAUTH1/access_tokens/{access_token_id}/roles/{role_id}
/OS-OAUTH1/request_token
/OS-OAUTH1/access_token
/OS-OAUTH1/authorize/{request_token_id}
/endpoints/{endpoint_id}/OS-ENDPOINT-POLICY/policy
/policies/{policy_id}/OS-ENDPOINT-POLICY/endpoints
/policies/{policy_id}/OS-ENDPOINT-POLICY/endpoints/{endpoint_id}
/policies/{policy_id}/OS-ENDPOINT-POLICY/endpoints/{endpoint_id}
/policies/{policy_id}/OS-ENDPOINT-POLICY/endpoints/{endpoint_id}
/policies/{policy_id}/OS-ENDPOINT-POLICY/services/{service_id}
/policies/{policy_id}/OS-ENDPOINT-POLICY/services/{service_id}
/policies/{policy_id}/OS-ENDPOINT-POLICY/services/{service_id}
/policies/{policy_id}/OS-ENDPOINT-POLICY/services/{service_id}/regions/{region_id}
/policies/{policy_id}/OS-ENDPOINT-POLICY/services/{service_id}/regions/{region_id}
/policies/{policy_id}/OS-ENDPOINT-POLICY/services/{service_id}/regions/{region_id}
/OS-SIMPLE-CERT/ca
/OS-SIMPLE-CERT/certificates
/OS-TRUST/trusts
/OS-TRUST/trusts
/OS-TRUST/trusts/{trust_id}
/OS-TRUST/trusts/{trust_id}
/OS-TRUST/trusts/{trust_id}/roles
/OS-TRUST/trusts/{trust_id}/roles/{role_id}
/roles
/roles
/roles/{role_id}
/roles/{role_id}
/roles/{role_id}
/regions
/regions
/regions/{region_id}
/regions/{region_id}
/regions/{region_id}
/services
/services
/services/{service_id}
/services/{service_id}
/services/{service_id}
/endpoints
/endpoints
/endpoints/{endpoint_id}
/endpoints/{endpoint_id}
/endpoints/{endpoint_id}
/credentials
/credentials
/credentials/{credential_id}
/credentials/{credential_id}
/credentials/{credential_id}
/users
/users
/users/{user_id}
/users/{user_id}
/users/{user_id}
/groups
/groups
/groups/{group_id}
/groups/{group_id}
/groups/{group_id}
/policies
/policies
/policies/{policy_id}
/policies/{policy_id}
/policies/{policy_id}
/domains
/domains
/domains/{domain_id}
/domains/{domain_id}
/domains/{domain_id}
/projects
/projects
/projects/{project_id}
/projects/{project_id}
/projects/{project_id}
/
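
As for the verbs: in the routes library, each Route carries a conditions dict, and routes registered with an HTTP method restriction keep the verbs under its 'method' key. A small extension of the loop above, under that assumption, prints the verb next to each path (routes with no method condition fall back to 'ANY'):

for route in composing._router.mapper.matchlist:
    # conditions may be None; when present it typically holds {'method': [...]}
    methods = (route.conditions or {}).get('method', [])
    print('%-8s %s' % (','.join(methods) or 'ANY', route.routepath))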

Episode 86 - What happens when 23 thousand certificates leak?

Posted by Open Source Security Podcast on March 05, 2018 10:05 PM
Josh and Kurt talk about the Trustico certificate incident and Let's Encrypt.

Show Notes


Enable Logging for root Certmonger

Posted by Adam Young on March 03, 2018 01:02 AM

While trying to debug an Ansible module calling Certmonger, I found myself running afoul of some mistake I could not quite trace. Certmonger was having trouble reading the key to generate the certificate. But nothing was showing up in the log. Here’s how I got some logging info.

Certmonger is managed by systemd. The configuration is managed in /usr/lib/systemd/system/certmonger.service and on my system it looks like this:

[Unit]
Description=Certificate monitoring and PKI enrollment
After=syslog.target network.target dbus.service

[Service]
Type=dbus
PIDFile=/var/run/certmonger.pid
EnvironmentFile=-/etc/sysconfig/certmonger
ExecStart=/usr/sbin/certmonger -S -p /var/run/certmonger.pid -n $OPTS
BusName=org.fedorahosted.certmonger

[Install]
WantedBy=multi-user.target

I want to enable logging, and to do so, I need to fill in that $OPTS param. I can do that by putting the entry I want in the EnvironmentFile. To see mine:

$ sudo cat /etc/sysconfig/certmonger
OPTS="-d 15"

And restart certmonger:

sudo systemctl restart certmonger

Next time I do a certmonger command I can tail the journal:

 sudo journalctl -f

And I will see output along the lines of:

Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Certificate issued (0 chain certificates, 0 roots).
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Request2('20180303005218') moved to state 'NEED_TO_SAVE_CERT'
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Will revisit Request2('20180303005218') now.
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Request2('20180303005218') taking writing lock
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] No hooks set for pre-save command.
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Request2('20180303005218') moved to state 'START_SAVING_CERT'
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Will revisit Request2('20180303005218') now.
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Request2('20180303005218') moved to state 'SAVING_CERT'
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Will revisit Request2('20180303005218') on traffic from 11.
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Request2('20180303005218') moved to state 'SAVED_CERT'
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Will revisit Request2('20180303005218') now.
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Request2('20180303005218') moved to state 'NEED_TO_SAVE_CA_CERTS'
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Will revisit Request2('20180303005218') now.
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Request2('20180303005218') moved to state 'START_SAVING_CA_CERTS'
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Will revisit Request2('20180303005218') now.
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Request2('20180303005218') moved to state 'SAVING_CA_CERTS'
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Will revisit Request2('20180303005218') on traffic from 11.
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Request2('20180303005218') moved to state 'NEED_TO_READ_CERT'
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Will revisit Request2('20180303005218') now.
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Request2('20180303005218') moved to state 'READING_CERT'
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [15680] Will revisit Request2('20180303005218') on traffic from 11.
Mar 02 19:52:19 sso.ayoung.rdusalab certmonger[15680]: 2018-03-02 19:52:19 [16036] Read value "0" from "/proc/sys/crypto/fips_enabled".

OpenStack Role Assignment Inheritance for CloudForms

Posted by Adam Young on February 28, 2018 07:04 PM

Operators expect to use CloudForms to perform administrative tasks. For this reason, the documentation for OpenStack states that the Keystone user must have an ‘admin’ role. We found at least one case, however, where this was not sufficient. Fortunately, we have a better approach, and one that can lead to success in a wider array of deployments.

Background

CloudForms uses the role assignments for the given user account to enumerate the set of projects. Internally it creates a representation of these projects to be used to track resources. However, the way that ‘admin’ is defined on OpenStack is tied to a single project. This means that CloudForms really has no way to ask “what projects can this user manage?” Now, since admin anywhere is admin everywhere, you would not think that you need to enumerate projects, but it turns out that some of the more complex operations, such as mounting a volume, have to cross service boundaries, and need the project abstraction to link the sets of operations. The CloudForms design did not see this disconnect, and so some of those operations fail.

Let's assume, for the moment, that a user had to have a role on a project in order to perform operations on that project. The current admin-everywhere approach would break. What CloudForms would require is an automated way to give a user a role on a project as soon as that project was created. It turns out that CloudForms is not the only thing that has this requirement.
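
For reference, the enumeration in question boils down to a single Keystone call, GET /v3/auth/projects, which returns only the projects the user has a role assignment on; with a single project-scoped ‘admin’ assignment, that is one project. A rough sketch with python-requests, where the endpoint and token are placeholders:

import requests

KEYSTONE = "https://keystone.example.com:5000/v3"   # placeholder endpoint
TOKEN = "gAAAAAB..."                                 # a token for the CloudForms user

resp = requests.get(KEYSTONE + "/auth/projects",
                    headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()
for project in resp.json()["projects"]:
    print(project["id"], project["name"])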

Role Assignment Inheritance

Keystone projects do not have to be organized as a flat collection. They can be nested into a tree form. This is called “Hierarchical Multitenancy.” Added to that, a role can be assigned to a user or group on a parent project, and that role assignment is inherited down the tree. This is called “Role Assignment Inheritance.”

This presentation, while old, does a great job of putting the details together.

You don’t need to do anything different in your project setup to take advantage of this mechanism. Here’s something that is subtle: a Domain IS A project. Every project is already in a domain, and thus has a parent project. Thus, you can assign a user a role on the domain-as-a-project, and they will have that role on every project inside that domain.

Sample Code

Here it is in command line form.

openstack role add --user CloudAdmin --user-domain Default --project Default --project-domain Default --inherited admin

Let's take those arguments step by step.

--user CloudAdmin  --user-domain Default

This is the user that CloudForms is using to connect to Keystone and OpenStack. Every user is owned by a domain, and this user is owned by the “Default” domain.

--project Default --project-domain Default

This is the blackest of magic. The Default domain IS-A project. So it owns itself.

--inherited

A role assignment is either on a project OR on all its subprojects. So, the user does not actually have a role that is usable against the Default DOMAIN-AS-A-PROJECT, but only on all of the subordinate projects. This might seem strange, but it was built this way for exactly this reason: being able to distinguish between levels of a hierarchy.

admin

This is the role name.
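
For completeness, the same inherited assignment can be made directly against the Keystone API; it corresponds to one of the OS-INHERIT routes listed in the earlier routes post. A sketch using python-requests, with the endpoint, token, and IDs as placeholders you would look up first:

import requests

KEYSTONE = "https://keystone.example.com:5000/v3"   # placeholder endpoint
TOKEN = "gAAAAAB..."                                 # an admin-scoped token
domain_id = "default"                                # the Default domain
user_id = "<cloudadmin-user-id>"
role_id = "<admin-role-id>"

# PUT on the inherited_to_projects route grants the role on every project
# in the domain, now and in the future; expect 204 No Content on success.
url = ("%s/OS-INHERIT/domains/%s/users/%s/roles/%s/inherited_to_projects"
       % (KEYSTONE, domain_id, user_id, role_id))
requests.put(url, headers={"X-Auth-Token": TOKEN}).raise_for_status()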

Conclusion

With this role assignment, the CloudForms Management Engine instance can perform all operations on all projects within the default domain. If you add another domain to manage a separate set of projects, you would need to perform this same role assignment on the new domain as well.

I assume this is going to leave people with a lot of questions. Please leave comments, and I will try to update this with any major concepts that people want made lucid.

Episode 85 - NPM ate my files

Posted by Open Source Security Podcast on February 28, 2018 02:45 PM
Josh and Kurt talk about npm 5.7.0 breaking Linux systems.


Show Notes


Smartcards and You - How To Make Them Work on Fedora/RHEL

Posted by William Brown on February 26, 2018 02:00 PM

Smartcards and You - How To Make Them Work on Fedora/RHEL

Smartcards are a great way to authenticate users. They have a device (something you have) and a pin (something you know). They prevent password transmission, use strong crypto and they even come in a variety of formats. From your “card” shapes to yubikeys.

So why aren’t they used more? It’s the classic issue of usability - the setup for them is undocumented, complex, and hard to discover. Today I hope to change this.

The Goal

To authenticate a user with a smartcard to a physical linux system, backed onto LDAP. The public cert in LDAP is validated, as is the chain to the CA.

You Will Need

I’ll be focusing on the yubikey because that’s what I own.

Preparing the Smartcard

First we need to make the smartcard hold our certificate. Because of a crypto issue in yubikey firmware, it’s best to generate certificates for these externally.

I’ve documented this before in another post, but for accessibility here it is again.

Create an NSS DB, and generate a certificate signing request:

certutil -d . -N -f pwdfile.txt
certutil -d . -R -a -o user.csr -f pwdfile.txt -g 4096 -Z SHA256 -v 24 \
--keyUsage digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment --nsCertType sslClient --extKeyUsage clientAuth \
-s "CN=username,O=Testing,L=example,ST=Queensland,C=AU"

Once the request is signed, and your certificate is in “user.crt”, import this to the database.

certutil -A -d . -f pwdfile.txt -i user.crt -a -n TLS -t ",,"
certutil -A -d . -f pwdfile.txt -i ca.crt -a -n TLS -t "CT,,"

Now export that as a p12 bundle for the yubikey to import.

pk12util -o user.p12 -d . -k pwdfile.txt -n TLS

Now import this to the yubikey - remember to use slot 9a this time! As well make sure you set the touch policy NOW, because you can’t change it later!

yubico-piv-tool -s9a -i user.p12 -K PKCS12 -aimport-key -aimport-certificate -k --touch-policy=always

Setting up your LDAP user

First set up your system to work with LDAP via SSSD. You’ve done that? Good! Now it’s time to get our user ready.

Take our user.crt and convert it to DER:

openssl x509 -inform PEM -outform DER -in user.crt -out user.der

Now you need to transform that into something that LDAP can understand. In the future I’ll be adding a tool to 389-ds to make this “automatic”, but for now you can use python:

python3
>>> import base64
>>> with open('user.der', 'rb') as f:
...     print(base64.b64encode(f.read()).decode('ascii'))

That should output a long base64 string on one line. Add this to your ldap user with ldapvi:

uid=william,ou=People,dc=...
userCertificate;binary:: <BASE64>

Note that the ‘;binary’ tag has an important meaning here for certificate data, and the ‘::’ tells LDAP that this is base64 encoded, so it will decode on addition.

Setting up the system

Now that you have done that, you need to teach SSSD how to interpret that attribute.

In your various SSSD sections you’ll need to make the following changes:

[domain/LDAP]
auth_provider = ldap
ldap_user_certificate = userCertificate;binary

[sssd]
# This controls OCSP checks, you probably want this enabled!
# certificate_verification = no_verification

[pam]
pam_cert_auth = True

Now the TRICK is letting SSSD know to use certificates. You need to run:

sudo touch /var/lib/sss/pubconf/pam_preauth_available

Without this, SSSD won’t even try to process CCID authentication!

Add your ca.crt to the system trusted CA store for SSSD to verify:

certutil -A -d /etc/pki/nssdb -i ca.crt -n USER_CA -t "CT,,"

Add coolkey to the database so it can find smartcards:

modutil -dbdir /etc/pki/nssdb -add "coolkey" -libfile /usr/lib64/libcoolkeypk11.so

Check that SSSD can find the certs now:

# sudo /usr/libexec/sssd/p11_child --pre --nssdb=/etc/pki/nssdb
PIN for william
william
/usr/lib64/libcoolkeypk11.so
0001
CAC ID Certificate

If you get no output here you are missing something! If this doesn’t work, nothing will!

Finally, you need to tweak PAM to make sure that pam_unix isn’t getting in the way. I use the following configuration.

auth        required      pam_env.so
# This skips pam_unix if the given uid is not local (IE it's from SSSD)
auth        [default=1 ignore=ignore success=ok] pam_localuser.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        sufficient    pam_sss.so prompt_always ignore_unknown_user
auth        required      pam_deny.so

account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 1000 quiet
account     [default=bad success=ok user_unknown=ignore] pam_sss.so
account     required      pam_permit.so

password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password    sufficient    pam_unix.so sha512 shadow try_first_pass use_authtok
password    sufficient    pam_sss.so use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
-session    optional      pam_systemd.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     optional      pam_sss.so

That’s it! Restart SSSD, and you should be good to go.

Finally, you may find SELinux isn’t allowing authentication. It is really sad that smartcards don’t work with SELinux out of the box, and I have raised a number of bugs, but check this just in case.

Happy authentication!

Episode 84 - Have I been pwned?

Posted by Open Source Security Podcast on February 25, 2018 08:09 PM
Josh and Kurt talk about the new data dump from Have I been pwned?


Show Notes



Episode 83 - XKCD + CVE = XKCVE

Posted by Open Source Security Podcast on February 21, 2018 12:08 AM
Josh and Kurt talk about the XKCD CVE comic and a flight simulator stealing credentials.

https://xkcd.com/1957/

Show Notes









Java on Port 443

Posted by Adam Young on February 18, 2018 02:54 AM

I’ve been working on setting up a Java based SAML provider. This means that the application needs to handle requests and responses over HTTPS. And, since often this is deployed in data centers where non-standard ports are blocked, it means that the HTTPS really needs to be supported on the proper port, which is 443. Here is the range of options.

Let's assume the app is being served by Tomcat, although this goes for any HTTP server, especially the interpreter-based ones.

You have two choices.

  1. Run Tomcat to listen and serve on port 443 directly
  2. Run a proxy in front of it.

For proxies, you have three easy choices, and many others, if you are running on Fedora/RHEL/Centos.

  1. iptables: listen on port 443 and forward to the local port that Tomcat is listening on for HTTPS
  2. Apache, forwarding either HTTP or AJP
  3. HAProxy, forwarding HTTP

Those each have configuration-specific issues.  I am not going to go deep into them here.

Let's return to the case where you want Tomcat to be able to directly listen and respond on port 443.

Your first, and worst, option is to run as root.  Only root is able to listen to ports under 1024 on a default Linux setup.

Apache (and the others) does something like this.  But it uses Unix-specific mechanisms to drop privileges.  So when you run ps, you can see that the HTTPD process is running as apache or nobody or httpd depending on your distro.  Basically, the process runs as root, listens on port 443, and then tells the kernel to downgrade its userid to the less privileged one.  It might change groups too, depending on how it's coded.

Java could, potentially, do this, but it would take a JNI call to make the appropriate system call.  Tomcat can’t really handle that.  It also prevents you from re-opening a closed connection.  While Apache tends to fork a new process to handle that problem, Tomcat is not engineered that way.  You might be coding yourself into a corner.

It turns out that the application does not need access to everything that root does.  And this is a pattern that is not restricted to network listeners. Thus, a few kernel versions ago, they added “capabilities” to the kernel. This seems like a better solution. Specifically, our application needs

CAP_NET_BIND_SERVICE
Bind a socket to Internet domain privileged ports
(port numbers less than 1024).
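
The restriction is enforced by the kernel, not by Java, so it is easy to reproduce from any unprivileged process. A quick sanity check in Python (the port numbers are arbitrary):

import socket

for port in (4444, 443):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(('', port))   # 443 fails without CAP_NET_BIND_SERVICE or root
        print(port, "bound OK")
    except PermissionError as err:
        print(port, "denied:", err)
    finally:
        s.close()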

Can we add this to a Tomcat app?

Let's do a little test.  Instead of Tomcat, we can use something simpler: the EchoServer code used in a Princeton computer science class.  Download EchoServer.java, In.java and Out.java.

Compile using

javac EchoServer.java

And run using

java EchoServer 4444

In another window, you can telnet into the server and type in a string, which will be echoed back to you.

$ telnet localhost 4444
Trying ::1...
Connected to localhost.
Escape character is '^]'.
test
test

If you Ctrl C the echo server, you will close the connection.

Ok, what happens if we try this on a port under 1024?  Let's try.  First, edit EchoServer.java so it is listening on port 400, not 4444.

$ diff -u EchoServer.java.orig EchoServer.java
--- EchoServer.java.orig 2018-02-17 18:09:42.846674768 -0500
+++ EchoServer.java 2018-02-17 18:09:57.211684501 -0500
@@ -26,7 +26,7 @@
 public static void main(String[] args) throws Exception {
 
 // create socket
- int port = 4444;
+ int port = 400;
 ServerSocket serverSocket = new ServerSocket(port);
 System.err.println("Started server on port " + port);

Recompile and run:

$ java EchoServer 400
Exception in thread "main" java.net.BindException: Permission denied (Bind failed)
 at java.net.PlainSocketImpl.socketBind(Native Method)
 at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
 at java.net.ServerSocket.bind(ServerSocket.java:375)
 at java.net.ServerSocket.<init>(ServerSocket.java:237)
 at java.net.ServerSocket.<init>(ServerSocket.java:128)
 at EchoServer.main(EchoServer.java:30)

How can we add CAP_NET_BIND_SERVICE?  We have the setcap utility:

setcap – set file capabilities

However… it turns out that this is set on an executable. What executable?  It can’t be a shell script, as the capabilities are dropped when the shell executes the embedded interpreter.  We would have to set it on the Java executable itself.  This is, obviously, a dangerous approach, as it means ANY Java program can listen on any port under 1024, but let's see if it works.

First we need to find the Java executable:

[ayoung@ayoung541 echo]$ which java
/usr/bin/java
[ayoung@ayoung541 echo]$ ls -a\l `which java`
lrwxrwxrwx. 1 root root 22 Feb 15 09:10 /usr/bin/java -> /etc/alternatives/java
[ayoung@ayoung541 echo]$ ls -al /etc/alternatives/java
lrwxrwxrwx. 1 root root 72 Feb 15 09:10 /etc/alternatives/java -> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-5.b14.fc27.x86_64/jre/bin/java

Can we use this directly?  Let's see:

$ /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-5.b14.fc27.x86_64/jre/bin/java EchoServer 
Exception in thread "main" java.net.BindException: Permission denied (Bind failed)
 at java.net.PlainSocketImpl.socketBind(Native Method)
 at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
 at java.net.ServerSocket.bind(ServerSocket.java:375)
 at java.net.ServerSocket.<init>(ServerSocket.java:237)
 at java.net.ServerSocket.<init>(ServerSocket.java:128)
 at EchoServer.main(EchoServer.java:30)

Looks OK.  Let's try setting the capability:

[ayoung@ayoung541 echo]$ sudo /sbin/setcap cap_net_bind_service=+ep /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-5.b14.fc27.x86_64/jre/bin/java
[ayoung@ayoung541 echo]$ /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-5.b14.fc27.x86_64/jre/bin/java EchoServer 
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-5.b14.fc27.x86_64/jre/bin/java: error while loading shared libraries: libjli.so: cannot open shared object file: No such file or directory
[ayoung@ayoung541 echo]$ java EchoServer 
java: error while loading shared libraries: libjli.so: cannot open shared object file: No such file or directory

Something does not like that capability.  We can unset it and get the same result as before.

[ayoung@ayoung541 echo]$ sudo /sbin/setcap cap_net_bind_service=-ep /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-5.b14.fc27.x86_64/jre/bin/java
[ayoung@ayoung541 echo]$ java EchoServer 
Exception in thread "main" java.net.BindException: Permission denied (Bind failed)
 at java.net.PlainSocketImpl.socketBind(Native Method)
 at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
 at java.net.ServerSocket.bind(ServerSocket.java:375)
 at java.net.ServerSocket.<init>(ServerSocket.java:237)
 at java.net.ServerSocket.<init>(ServerSocket.java:128)
 at EchoServer.main(EchoServer.java:30)

Seems I am not the first person to hit this, and a step by step is laid out in the answer here.

Once I add exactly the path to the jli library:

 $ cat /etc/ld.so.conf.d/java.conf
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-5.b14.fc27.x86_64/jre/bin/../lib/amd64/jli

And run (you may need an ldconfig first so the new path is picked up):

[ayoung@ayoung541 echo]$ java EchoServer 
Started server on port 400

OK, that works.

Would you want to do that?  Probably not.  If you did, you would probably want a special, limited JDK available only to the application.

It is also possible to build a binary that kicks off the Java process, add the capability to that, and further limit what could call this code.  There is still the risk of someone running it with a different JAVA_PATH and getting different code in place, and using that for a privilege elevation. The only secure path would be to have a custom classloader that read the Java code from a segment of the file; static linking of a Jar file, if you will. And that might be too much. However, all these attacks are still possible with the Java code the way it is set up now, just that we would expect a system administrator to lock down what code could be run after configuring, say, an HTTPD instance as a reverse proxy.

Java and Certmonger Continued

Posted by Adam Young on February 16, 2018 04:52 AM

Now that I know that I can do things like read the keys from a programmatically registered provider and properly set up SELinux to deal with it, I want to see if I can make this work for a pre-compiled application, using only environment variables.

I’ve modified the test code to just try and load a provider.

import java.util.Enumeration;
import java.security.KeyStore;
import java.security.PrivateKey;
import java.security.Provider;
import java.security.Security;

import sun.security.pkcs11.SunPKCS11;

public class ReadNSSProps{

    public static char[] password = new char[0];

    public static void main(String[] args) throws Exception{

         for (Provider p: Security.getProviders()){
             System.out.println(p);
        }
        Provider p = Security.getProvider("SunPKCS11-NSScrypto");
        System.out.println(p);
        KeyStore ks = KeyStore.getInstance("PKCS11", p); //p is the provider created above
        ks.load(null, password);
        for (Enumeration<String> aliases = ks.aliases(); aliases.hasMoreElements();){
             System.out.println(aliases.nextElement());
        }

        KeyStore.ProtectionParameter protParam =
           new KeyStore.PasswordProtection(password);

        KeyStore.PrivateKeyEntry pkEntry = (KeyStore.PrivateKeyEntry)
            ks.getEntry("RHSSO", protParam);

        System.out.println(pkEntry);
        PrivateKey pkey =  pkEntry.getPrivateKey();
        System.out.println(pkey);
    }
}

The pkcs11.cfg file still is pretty much the same:

# cat pkcs11.cfg 
name = NSScrypto
nssModule = keystore
nssDbMode = readOnly
nssLibraryDirectory = /lib64/
nssSecmodDirectory = /etc/opt/rh/rh-sso7/keycloak/standalone/keystore

Call the code like this:

java  -Djava.security.properties=$PWD/java.security.properties  ReadNSSProps

And…lots of output including a dump of the private key.

Thanks to these two articles for pointing the way.

Next up is trying to use these to provide the keystore for HTTPS.

Certmonger, SELinux and Keystores in random locations

Posted by Adam Young on February 15, 2018 03:45 PM

In my last post, SELinux was reporting AVCs when certmonger tried to access an NSS Database in a non-standard location. To get rid of the AVC, and get SELinx to allow the operations, we need to deal with the underlying cause of the AVC.

Bottom Line Up Front:

Run these commands.

[root@sso standalone]# semanage fcontext -a -t cert_t $PWD/"keystore(/.*)?"
[root@sso standalone]# restorecon -R -v keystore

Thanks to OZZ for that.

Here’s how I got there.

Debugging

The original error was:

type=AVC msg=audit(1518668324.903:6506): avc:  denied  { write } for  pid=15310 comm="certmonger" name="cert9.db" dev="vda1" ino=17484324 scontext=system_u:system_r:certmonger_t:s0 tcontext=unconfined_u:object_r:etc_t:s0 tclass=file

Since I created the NSS database without a relabel or other operation, it is still in its default form. Looking at the whole subdirectory:

[root@sso standalone]# ls -Z keystore
-rw-------. root root unconfined_u:object_r:etc_t:s0   cert8.db
-rw-------. root root unconfined_u:object_r:etc_t:s0   cert9.db
-rw-------. root root unconfined_u:object_r:etc_t:s0   key3.db
-rw-------. root root unconfined_u:object_r:etc_t:s0   key4.db
-rw-------. root root unconfined_u:object_r:etc_t:s0   pkcs11.txt
-rw-------. root root unconfined_u:object_r:etc_t:s0   secmod.db

Compare with a properly configured system

Let's contrast this with an NSS database that is properly labeled. For example, on my IPA server, where SELinux is enforcing, I can look at certmonger and see where it is tracking files.

$ ssh cloud-user@idm.ayoung.rdusalab 
Last login: Wed Feb 14 22:53:20 2018 from 10.10.120.202
[cloud-user@idm ~]$ sudo -i
[root@idm ~]# getcert list
Number of certificates and requests being tracked: 9.
...
Request ID '20180212165505':
	status: MONITORING
	stuck: no
	key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt'
	certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB'
...

So looking at

[root@idm ~]# ls -Z /etc/httpd/alias
-rw-r-----. root apache unconfined_u:object_r:cert_t:s0  cert8.db
-rw-r-----. root apache unconfined_u:object_r:cert_t:s0  cert8.db.orig
-rw-------. root root   unconfined_u:object_r:cert_t:s0  install.log
-rw-------. root root   system_u:object_r:ipa_cert_t:s0  ipasession.key
-rw-r-----. root apache unconfined_u:object_r:cert_t:s0  key3.db
-rw-r-----. root apache unconfined_u:object_r:cert_t:s0  key3.db.orig
lrwxrwxrwx. root root   system_u:object_r:cert_t:s0      libnssckbi.so -> /usr/lib64/libnssckbi.so
-rw-------. root apache unconfined_u:object_r:cert_t:s0  pwdfile.txt
-rw-r-----. root apache unconfined_u:object_r:cert_t:s0  secmod.db
-rw-r-----. root apache unconfined_u:object_r:cert_t:s0  secmod.db.orig

The interesting value here is cert_t. From man ls

Display security context so it fits on most displays. Displays only mode, user, group, security context and file name.

The Security context is unconfined_u:object_r:cert_t:s0 which is in user:role:type:level format. What we want to do, then, is change the type on our NSS Database files. We could use chcon to test out the change temporarily, and then semanage fcontext to make the change permanent.

Method

Let's get a method in place to make changes and confirm they happen. I use two terminals. In one I'll type commands; in the second, I'll use tail -f to watch the log.

[root@sso ~]# tail -f  /var/log/audit/audit.log | grep AVC

Once I request a cert, I will see a line like this added to the output

type=AVC msg=audit(1518708370.985:6639): avc:  denied  { write } for  pid=16459 comm="certmonger" name="cert8.db" dev="vda1" ino=17484343 scontext=system_u:system_r:certmonger_t:s0 tcontext=unconfined_u:object_r:etc_t:s0 tclass=file

In the coding window, I can run commands like this to trigger output from the log:

[root@sso standalone]# ipa-getcert  request  -w -d dbm:$PWD/keystore -D $HOSTNAME -K RHSSO/$HOSTNAME -n RHSSO
New signing request "20180215152610" added.
[root@sso standalone]# getcert stop-tracking  -i 20180215152610
Request "20180215152610" removed.

chcon

Now that I have a baseline, I’m going to try chcon to ensure that I have the type correct.

[root@sso standalone]# sudo chcon -t cert_t keystore keystore/*
[root@sso standalone]# ls -Z keystore
-rw-------. root root unconfined_u:object_r:cert_t:s0  cert8.db
-rw-------. root root unconfined_u:object_r:cert_t:s0  cert9.db
-rw-------. root root unconfined_u:object_r:cert_t:s0  key3.db
-rw-------. root root unconfined_u:object_r:cert_t:s0  key4.db
-rw-------. root root unconfined_u:object_r:cert_t:s0  pkcs11.txt
-rw-------. root root unconfined_u:object_r:cert_t:s0  secmod.db

Lets run the test again:

Running:

# ipa-getcert  request  -w -d dbm:$PWD/keystore -D $HOSTNAME -K RHSSO/$HOSTNAME -n RHSSO
New signing request "20180215153108" added.

Produces no new output from our log. We also see that the cert is being tracked.

[root@sso standalone]# getcert list
Number of certificates and requests being tracked: 1.
Request ID '20180215153108':
	status: MONITORING

setenforce

Let's try this again, but with SELinux enforcing. First, clean up from our last run:

[root@sso standalone]# getcert stop-tracking  -i 20180215153108
Request "20180215153108" removed.
[root@sso standalone]# getcert list
Number of certificates and requests being tracked: 0.

And now:

[root@sso standalone]# getenforce 
Permissive
[root@sso standalone]# setenforce 1
[root@sso standalone]# getenforce 
Enforcing
[root@sso standalone]# ipa-getcert  request  -w -d dbm:$PWD/keystore -D $HOSTNAME -K RHSSO/$HOSTNAME -n RHSSO
New signing request "20180215153334" added.
[root@sso standalone]# getcert list
Number of certificates and requests being tracked: 1.
Request ID '20180215153334':
	status: MONITORING

And the only thing we see in our log is a warning about switching enforcement.

type=USER_AVC msg=audit(1518708789.490:6646): pid=2501 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  received setenforce notice (enforcing=1)  exe="?" sauid=81 hostname=? addr=? terminal=?'

semanage

OK, so let's make this change permanent. First, restore the context so we know we are having the desired effect.

[root@sso standalone]# restorecon -R -v keystore
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/pkcs11.txt context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/cert9.db context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/key4.db context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/secmod.db context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/cert8.db context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/key3.db context unconfined_u:object_r:cert_t:s0->unconfined_u:object_r:etc_t:s0
[root@sso standalone]# ls -Z keystore
-rw-------. root root unconfined_u:object_r:etc_t:s0   cert8.db
-rw-------. root root unconfined_u:object_r:etc_t:s0   cert9.db
-rw-------. root root unconfined_u:object_r:etc_t:s0   key3.db
-rw-------. root root unconfined_u:object_r:etc_t:s0   key4.db
-rw-------. root root unconfined_u:object_r:etc_t:s0   pkcs11.txt
-rw-------. root root unconfined_u:object_r:etc_t:s0   secmod.db

Now use semanage to make the change persist:

[root@sso standalone]# semanage fcontext -a -t cert_t $PWD/"keystore(/.*)?"
[root@sso standalone]# restorecon -R -v keystore
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/pkcs11.txt context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/cert9.db context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/key4.db context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/secmod.db context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/cert8.db context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0
restorecon reset /etc/opt/rh/rh-sso7/keycloak/standalone/keystore/key3.db context unconfined_u:object_r:etc_t:s0->unconfined_u:object_r:cert_t:s0

Do another list to check the current state of the file:

[root@sso standalone]# ls -Z keystore
-rw-------. root root unconfined_u:object_r:cert_t:s0  cert8.db
-rw-------. root root unconfined_u:object_r:cert_t:s0  cert9.db
-rw-------. root root unconfined_u:object_r:cert_t:s0  key3.db
-rw-------. root root unconfined_u:object_r:cert_t:s0  key4.db
-rw-------. root root unconfined_u:object_r:cert_t:s0  pkcs11.txt
-rw-------. root root unconfined_u:object_r:cert_t:s0  secmod.db

One last time, stop tracking the existing cert, and request a new one:

[root@sso standalone]# getcert stop-tracking  -i 20180215153334
Request "20180215153334" removed.
[root@sso standalone]# ipa-getcert  request  -w -d dbm:$PWD/keystore -D $HOSTNAME -K RHSSO/$HOSTNAME -n RHSSO
New signing request "20180215154055" added.
[root@sso standalone]# getcert list
Number of certificates and requests being tracked: 1.
Request ID '20180215154055':
	status: MONITORING
	stuck: no
	key pair storage: type=NSSDB,location='dbm:/etc/opt/rh/rh-sso7/keycloak/standalone/keystore',nickname='RHSSO',token='NSS Certificate DB'
	certificate: type=NSSDB,location='dbm:/etc/opt/rh/rh-sso7/keycloak/standalone/keystore',nickname='RHSSO',token='NSS Certificate DB'

Java and Certmonger

Posted by Adam Young on February 15, 2018 06:04 AM

Earlier this week, I got some advice from John Dennis on how to set up the certificates for a Java based web application. The certificates were to be issued by the Dogtag instance in a Red Hat Identity Management (RH IdM) install. However, unlike the previous examples I’ve seen, this one did some transforms from the certificate files, into PKCS12 and then finally into the keystore. It looks like this:

ipa-getcert request -f /etc/pki/tls/certs/rhsso-cert.pem -k /etc/pki/tls/private/rhsso-key.pem -I rhsso -K RHSSO/`hostname` -D `hostname`

openssl pkcs12 -export -name rhsso -passout pass:FreeIPA4All -in /etc/pki/tls/certs/rhsso-cert.pem -inkey /etc/pki/tls/private/rhsso-key.pem -out rhsso.p12

keytool -importkeystore -srckeystore rhsso.p12 -srcstoretype PKCS12 -srcstorepass FreeIPA4All -destkeystore keycloak.jks -deststorepass FreeIPA4All -alias rhsso

keytool -keystore keycloak.jks -import -file /etc/ipa/ca.crt -alias ipa-ca

cp keycloak.jks /etc/opt/rh/rh-sso7/keycloak/standalone/

Aside from the complications of this process, it also means that the application will not be updated when Certmonger automatically renews the certificate, leading to potential down time. I wonder if there is a better option.

Keystore Formats

The latest couple releases of Java have supported a wider array of Keystore formats.

from /usr/lib/jvm/java-1.8.0-openjdk-$VERSION.b14.fc27.x86_64/jre/lib/security/java.security

#
# Default keystore type.
#
keystore.type=jks

#
# Controls compatibility mode for the JKS keystore type.
#
# When set to 'true', the JKS keystore type supports loading
# keystore files in either JKS or PKCS12 format. When set to 'false'
# it supports loading only JKS keystore files.
#
keystore.type.compat=true

So it appears that one step above is unnecessary: we could use a PKCS-12 file instead of the native Java KeyStore. However, Certmonger does not manage PKCS-12 files either, so that is not a complete solution.

PKCS-11

But what about PKCS-11?

One thing that is tricky is that you are rarely going to find much about creating PKCS-11 files: instead, you find ways to work with them via various tools. Why is that? PKCS-11 is not a file format per se, it is a standard.

The PKCS#11 standard specifies an application programming interface (API), called “Cryptoki,” for devices that hold cryptographic information and perform cryptographic functions. Cryptoki follows a simple object based approach, addressing the goals of technology independence (any kind of device) and resource sharing (multiple applications accessing multiple devices), presenting to applications a common, logical view of the device called a “cryptographic token”.

From the standard.

In other words, PKCS-11 is an API for talking to various forms of storage for cryptographic information, specifically asymmetric keys.

The thing about asymmetric keys is that they come in pairs. One is public, the other is kept private. The PKCS-11 API helps enforce that. Instead of extracting a private key from a database in order to encrypt or decrypt data, the data is moved into the container and signed internally. The private key never leaves the container.

That is why we have two standards: PKCS-11 and PKCS-12. PKCS-12 is the standard way to safely extract a key and transport it to another location.

Ideally, the PKCS-11 token is a hardware device. For example, a Yubikey device. Many computers come with Hardware Security Modules (HSMs) built in for just this purpose.

The Mozilla project developed a cryptography library to work with these standards. It used to be called Netscape Security Services, but has since been retconned to be Network Security Services. Both, you notice, are the acronym NSS. To be clear, this is separate from the Name Service Switch API, which is also called NSS. I seem to recall having written this all before.

The Firefox Browser, and related programs like Thunderbird, can fetch and store cryptographic certificates and keys in a managed database. This is usually called an NSS database, and it is accessed via PKCS-11, specifically so they have a single API to use if the site wants to do something more locked down, like use an HSM.

OK, so this is a long way of saying that, maybe it is possible to use an NSS database as the Java Keystore.

NSS Database

First, lets create a scratch NSS Database:

[root@sso ~]# cd /etc/opt/rh/rh-sso7/keycloak/standalone/
[root@sso standalone]# mkdir keystore
[root@sso standalone]# certutil -d dbm:$PWD/keystore -N
Enter a password which will be used to encrypt your keys.
The password should be at least 8 characters long,
and should contain at least one non-alphabetic character.

Enter new password: 
Re-enter password: 

Now let's request a cert. Because this NSS database is in a custom location, SELinux is going to block Certmonger from talking to it. For now, I'll set the machine in permissive mode to let the request go through.

[root@sso standalone]# setenforce permissive
[root@sso standalone]# ipa-getcert  request  -w -d dbm:$PWD/keystore -D $HOSTNAME -K RHSSO/$HOSTNAME -n RHSSO
New signing request "20180215041951" added.
[root@sso standalone]# getcert list
Number of certificates and requests being tracked: 1.
Request ID '20180215041951':
	status: MONITORING
	stuck: no
	key pair storage: type=NSSDB,location='dbm:/etc/opt/rh/rh-sso7/keycloak/standalone/keystore',nickname='RHSSO',token='NSS Certificate DB'
	certificate: type=NSSDB,location='dbm:/etc/opt/rh/rh-sso7/keycloak/standalone/keystore',nickname='RHSSO',token='NSS Certificate DB'
	CA: IPA
	issuer: CN=Certificate Authority,O=AYOUNG.RDUSALAB
	subject: CN=sso.ayoung.rdusalab,O=AYOUNG.RDUSALAB
	expires: 2020-02-16 04:19:46 UTC
	dns: sso.ayoung.rdusalab
	principal name: RHSSO/sso.ayoung.rdusalab@AYOUNG.RDUSALAB
	key usage: digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment
	eku: id-kp-serverAuth,id-kp-clientAuth
	pre-save command: 
	post-save command: 
	track: yes
	auto-renew: yes

SELinux reworking will come at the end.

OK, we should be able to list the certs in the database:

[root@sso standalone]# certutil -L -d dbm:$PWD/keystore 

Certificate Nickname                                         Trust Attributes
                                                             SSL,S/MIME,JAR/XPI

RHSSO                                                        ,,   
[root@sso standalone]# 

Let's try it in Java. First, I need the compiler:

sudo yum install java-1.8.0-openjdk-devel

Now… it turns out the Java libraries are not allowed by default to deal with NSS. We need a configuration file, and we can create the provider dynamically. For NSS we need: security.provider.10=sun.security.pkcs11.SunPKCS11. The following code seems to succeed:

import java.security.KeyStore;
import java.security.Provider;
import java.security.Security;
import java.util.Enumeration;

import sun.security.pkcs11.SunPKCS11;

public class ReadNSS{

    public static char[] password = new char[0];

    public static void main(String[] args) throws Exception{
        String configName = "/etc/opt/rh/rh-sso7/keycloak/standalone/pkcs11.cfg";
        Provider p = new sun.security.pkcs11.SunPKCS11(configName);
        Security.addProvider(p);
        KeyStore ks = KeyStore.getInstance("PKCS11", p); //p is the provider created above
        ks.load(null, password);
        for (Enumeration<String> aliases = ks.aliases(); aliases.hasMoreElements();){
             System.out.println(aliases.nextElement());
        }
    }
}

With the corresponding config file:

name = NSScrypto
nssModule = keystore
nssDbMode = readOnly
nssLibraryDirectory = /lib64/
nssSecmodDirectory = /etc/opt/rh/rh-sso7/keycloak/standalone/keystore

Compile and run

[root@sso standalone]# javac ReadNSS.java 
[root@sso standalone]# java ReadNSS
RHSSO

We can list the keys. Ideally, I would pass that provider information on the command line, though.

Conclusion

It does look like there is a way to create a database that Java can use as a KeyStore. The question, now, is whether Tomcat and JBoss based web apps can use this mechanism to manage their HTTPS certificates.

SE Linux

What should the SELinux rule be:

type=AVC msg=audit(1518668385.358:6514): avc:  denied  { unlink } for  pid=15316 comm="certmonger" name="key4.db-journal" dev="vda1" ino=17484326 scontext=system_u:system_r:certmonger_t:s0 tcontext=system_u:object_r:etc_t:s0 tclass=file
	Was caused by:
		Missing type enforcement (TE) allow rule.

		You can use audit2allow to generate a loadable module to allow this access.

But that generates:

#============= certmonger_t ==============

#!!!! WARNING: 'etc_t' is a base type.
allow certmonger_t etc_t:file { create setattr unlink write };

Which, if I read it right, allows Certmonger to unlink and write any etc_t file. We want something more targeted.
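
One more targeted direction, sketched here only as a guess at what the fix might look like (the path is the keystore directory from above, and I have not verified that certmonger_t can manage cert_t in this policy version), would be to relabel the custom NSS DB location rather than opening up etc_t:

# Hedged sketch: give the custom keystore directory a certificate label
semanage fcontext -a -t cert_t "/etc/opt/rh/rh-sso7/keycloak/standalone/keystore(/.*)?"
restorecon -Rv /etc/opt/rh/rh-sso7/keycloak/standalone/keystore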

How to deal with this is in the next post.

Episode 82 - RSA, TLS, Chrome HTTP, and PCI

Posted by Open Source Security Podcast on February 13, 2018 11:56 PM
Josh and Kurt talk about problems with textbook RSA implementations, the upcoming changes in TLS, and the insecurity of HTTP in Chrome.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/6257462/height/90/theme/custom/autoplay/no/autonext/no/thumbnail/yes/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes


Virtualization Setup for RH CSA study

Posted by Adam Young on February 10, 2018 03:37 AM

While my company has wonderful resources to allow employees to study for our certifications, they are time-limited to prevent waste. I find I've often kicked off the lab, only to get distracted with a real-world interrupt, and come back to find the lab has timed out. I like working on my own systems, and having my own servers to work on. As such, I'm setting up a complementary system to the corporate one for my own study.

Overview

Here’s the general idea: I’m going to use libvirt and the virtualization infrastructure of my system to build a private network that I can use to provision virtual machines. This network will not have the standard DHCP server, but will, instead, have a specifically targeted one that works in conjunction with a TFTP server to provision images. This setup is called the Preboot Execution Environment (PXE, usually pronounced Pixie). This should help you remember it.

Thanks, Mairin.

Pixie booting as we would all like to imagine it.

Network Setup

While I originally created the network using the virt-manager UI, I now have it in XML form. It looks like this:

  
<network>
  <name>provision</name>
  <uuid>bc9db3c9-89a8-41de-9cfa-bd9b5bd771f3</uuid>
  <bridge name="virbr1" stp="on" delay="0"/>
  <mac address="52:54:00:39:08:19"/>
  <domain name="provision"/>
  <ip address="192.168.131.1" netmask="255.255.255.0">
  </ip>
</network>

Create a blank VM on the network, set up to PXE boot, and ensure that it does not get a DHCP response.

There are a few guides to setting up PXE on Fedora. This one seems fairly recent, although a few things have changed and I’ll note them. Let's start with the DHCP setup. Aside from replacing yum with dnf, the name of the package is dhcp-server, not dhcp.

sudo dnf install -y dhcp-server

We want this listening on the interface that the VMs are on. From the XML fragment above, we can see that is the virbr1 bridge. Running ip addr on my laptop shows:

16: virbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:39:08:19 brd ff:ff:ff:ff:ff:ff
    inet 192.168.131.1/24 brd 192.168.131.255 scope global virbr1
       valid_lft forever preferred_lft forever

Since the IP address is 192.168.131.1/24, which maps to the virsh network XML document above, I am fairly certain that if I get a DHCP server listening on this interface (and only this one) I can use it to provision VMs without interference from other DHCP providers.

TCPDump

Let's check with tcpdump. Thanks for this little fragment.

$ sudo tcpdump -i virbr1 -n port 67 and port 68
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on virbr1, link-type EN10MB (Ethernet), capture size 262144 bytes

To start that shows nothing. Now fire up the virtual machine and:

20:02:03.781250 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 52:54:00:4f:e4:fa, length 396
20:02:05.758560 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 52:54:00:4f:e4:fa, length 396
20:02:09.713201 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 52:54:00:4f:e4:fa, length 396

So that is what our DHCPD instance needs to field.

DHCPD Configuration

The configuration file is now in /etc/dhcp/dhcpd.conf

$ rpmquery -f /etc/dhcp/dhcpd.conf
dhcp-server-4.3.6-8.fc27.x86_64

And it is pretty much empty.

$ sudo cat  /etc/dhcp/dhcpd.conf
#
# DHCP Server Configuration file.
#   see /usr/share/doc/dhcp-server/dhcpd.conf.example
#   see dhcpd.conf(5) man page
#

Here is a minimal configuration.

subnet 192.168.131.0 netmask 255.255.255.0 {
  allow booting;
  allow bootp;
  option domain-name "ayoung541.test";
  option domain-name-servers 127.0.0.1;
  option routers 192.168.131.1;
  next-server 192.168.131.1;
  filename "pxelinux.0";

  range 192.168.131.10 192.168.131.20;
}

I poked around a bit, seeing if I needed to do something to make dhcpd only listen on the specified interface, and found this in /etc/sysconfig/dhcp

# WARNING: This file is NOT used anymore.

# If you are here to restrict what interfaces should dhcpd listen on,
# be aware that dhcpd listens *only* on interfaces for which it finds subnet
# declaration in dhcpd.conf. It means that explicitly enumerating interfaces
# also on command line should not be required in most cases.

So I figure with the above configuration I should be OK. Restart the dhcpd daemon via systemctl, run tcpdump with the same options, and restart the virtual machine.
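
Concretely, that amounts to something like the following (assuming the dhcpd unit name on Fedora):

sudo systemctl enable --now dhcpd
# or, if it was already running with an older configuration:
sudo systemctl restart dhcpd
sudo tcpdump -i virbr1 -n port 67 and port 68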

First off, I see the following in tcpdump’s output:

21:39:11.791024 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 52:54:00:4f:e4:fa, length 396
21:39:12.792374 IP 192.168.131.1.bootps > 192.168.131.10.bootpc: BOOTP/DHCP, Reply, length 300
21:39:12.792506 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 52:54:00:4f:e4:fa, length 408
21:39:12.792616 IP 192.168.131.1.bootps > 192.168.131.10.bootpc: BOOTP/DHCP, Reply, length 300

Which is pretty encouraging. Looking at the VM's console, I see:

PXEBoot Output, No such file.

This is success! A couple things to note:

  • The VM got the IP address of 192.168.131.10 which was the first one in our range.
  • It attempted to download a file named pxelinux.0 via tftp.  That matches what I set in the config file.

Leases

We can see the lease granted to this machine via the leases file.

$ sudo cat /var/lib/dhcpd/dhcpd.leases
# The format of this file is documented in the dhcpd.leases(5) manual page.
# This lease file was written by isc-dhcp-4.3.6

# authoring-byte-order entry is generated, DO NOT DELETE
authoring-byte-order little-endian;

lease 192.168.131.10 {
  starts 6 2018/02/10 00:57:12;
  ends 6 2018/02/10 12:57:12;
  tstp 6 2018/02/10 12:57:12;
  cltt 6 2018/02/10 00:57:12;
  binding state active;
  next binding state free;
  rewind binding state free;
  hardware ethernet 52:54:00:4f:e4:fa;
  uid "\001RT\000O\344\372";
  set vendor-class-identifier = "PXEClient:Arch:00000:UNDI:002001";
}
server-duid "\000\001\000\001\"\020\365\344RT\0009\010\031";

So if we know a MAC address, we can find the IP address for it by searching through this file. If we want to pre-allocate an address for a known MAC, we can either do it in the main /etc/dhcp/dhcpd.conf file or in a file in /var/lib/dhcpd.
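
For example, a pre-allocation for the VM above would look roughly like this in /etc/dhcp/dhcpd.conf (the host name and fixed address here are just illustrations, picked outside the dynamic range):

host lab-vm {
  hardware ethernet 52:54:00:4f:e4:fa;
  fixed-address 192.168.131.50;
}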

TFTP Server

Next, I need to set up a TFTP server on that interface and serve out a file. Let's do that.

sudo dnf install -y tftp-server
sudo touch /var/lib/tftpboot/pxelinux.0

This is obviously a bogus file, but we are going step by step. Let's use the tftp client to see if we can fetch it.

$ tftp 192.168.131.1 -c get pxelinux.0
$ file pxelinux.0 
pxelinux.0: empty

Looks good. Delete the empty files and keep going.
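
If that fetch had failed, one thing worth checking (depending on the Fedora release) is whether the socket-activated service is enabled:

# May be required on some releases; tftp-server ships a tftp.socket unit
sudo systemctl enable --now tftp.socket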

SYSLINUX

sudo dnf install -y syslinux
sudo cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/

Let's try to PXE boot the VM again.

PXE Boot Fail looking for ldlinux.c32

Again, we move toward success. The TFTP server is working, and the PXE boot process on the VM attempts to download syslinux.

Let's get the rest of the files in place and try again:

 sudo cp -a /usr/share/syslinux /var/lib/tftpboot/

Reboot the VM. A new failure:

Kickstart

Let's make that configuration:

sudo mkdir  /var/lib/tftpboot/pxelinux.cfg

Create the file /var/lib/tftpboot/pxelinux.cfg/default and put in it:

path syslinux
include menu.cfg
default syslinux/vesamenu.c32
prompt 0
timeout 50

Create the file /var/lib/tftpboot/menu.cfg

menu hshift 13
menu width 49
menu margin 8
menu tabmsg

menu title automated install boot menu
label fedora-26-automated-install
  menu label ^Fedora 26 automated install
  kernel vmlinuz
  append vga=788 initrd=initrd.img ip=dhcp inst.repo=${MIRROR_URL} \
    ks=nfs:${SERVER_IPADDR}:/var/lib/nfsroot/fedora-26-ks.cfg
menu end

This is obviously a straight lift from the above-referenced install guide. We are not going to leave it like this; I just want to get it started for testing.

Let's kick off that PXE boot again:

PXE Automated install Screen

This keeps looping around to 0 and restarting, as it lacks proper values for MIRROR_URL etc, and thus cannot download anything. So while we are still at a failure mode, we are ready to start looking at Kickstart files, which is one of the study objectives for the RH CSA exam. I’ll go through those more in depth in the future.
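
For reference, here is a minimal sketch of what a fedora-26-ks.cfg might contain; every value here is a placeholder to be replaced for a real install:

# Minimal kickstart sketch; all values are placeholders
lang en_US.UTF-8
keyboard us
timezone America/New_York --utc
rootpw --plaintext changeme
network --bootproto=dhcp
zerombr
clearpart --all --initlabel
autopart
bootloader --location=mbr
reboot
%packages
@core
%end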

Deleting an image on RDO

Posted by Adam Young on February 10, 2018 12:06 AM

So I uploaded a qcow image… but did it wrong. It was tagged as raw instead of qcow, and now I want it gone. Only problem: it is stuck.

$ openstack image delete rhel-server-7.4-update-4-x86_64
Failed to delete image with name or ID 'rhel-server-7.4-update-4-x86_64': 409 Conflict
Image 2e77971e-7746-4992-8e1e-7ce1be8528f8 could not be deleted because it is in use: The image cannot be deleted because it is in use through the backend store outside of Glance.

But….I deleted all of the instances connected to it! Come On!

Answer is easy once the code-rage wears off…

When I created a server based on this image, it created a new volume. That volume is locking the image into place.

$ openstack volume list
+--------------------------------------+------+-----------+------+----------------------------------+
| ID                                   | Name | Status    | Size | Attached to                      |
+--------------------------------------+------+-----------+------+----------------------------------+
| 97a15e9c-2744-4f31-95f3-a13603e49b6d |      | error     |    1 |                                  |
| c9337612-8317-425f-b313-f8ba9336f1cc |      | available |    1 |                                  |
| 9560a18f-bfeb-4964-9785-6e76fa720892 |      | in-use    |    9 | Attached to showoff on /dev/vda  |
| 0188edd7-7e91-4a80-a764-50d47bba9978 |      | in-use    |    9 | Attached to test1 on /dev/vda    |
+--------------------------------------+------+-----------+------+----------------------------------+

See that error? I think it's that one. I can't confirm now, as I also deleted the available one, since I didn't need it either.

$ openstack volume delete 97a15e9c-2744-4f31-95f3-a13603e49b6d
$ openstack volume delete c9337612-8317-425f-b313-f8ba9336f1cc
$ openstack image delete rhel-server-7.4-update-4-x86_64

And that last command succeeded.

$ openstack image show  rhel-server-7.4-update-4-x86_64
Could not find resource rhel-server-7.4-update-4-x86_64
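
For the record, uploading with the disk format stated explicitly avoids the whole mess; something like the following (the file name is assumed from the original download):

openstack image create --disk-format qcow2 --container-format bare \
    --file rhel-server-7.4-update-4-x86_64.qcow2 rhel-server-7.4-update-4-x86_64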

Keystonerc for RDO cloud

Posted by Adam Young on February 09, 2018 10:17 PM

If you are using RDO Cloud and want to do command line Ops, here is the outline of a keystone.rc file you can use to get started.

unset $( set | awk '{FS="="} /^OS_/ {print $1}' )

export OS_AUTH_URL=https://phx2.cloud.rdoproject.org:35357/v3/
export OS_USERNAME={username}
export OS_PASSWORD={password}
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_PROJECT_NAME={projectname}
export OS_IDENTITY_API_VERSION=3

You might have been given a different AUTH URL to use. The important parts are appending the /v3/ and explicitly setting the OS_IDENTITY_API_VERSION=3. Setting both is overkill, but you can never have too much overkill.

Once you have this set, source it, and you can run:

$ openstack image list
+--------------------------------------+-------------------------------------------+--------+
| ID                                   | Name                                      | Status |
+--------------------------------------+-------------------------------------------+--------+
| af47a290-3af3-4e46-bb56-4f250a3c20a4 | CentOS-6-x86_64-GenericCloud-1706         | active |
| b5446129-8c75-4ce7-84a3-83756e5f1236 | CentOS-7-x86_64-GenericCloud-1701         | active |
| 8f41e8ce-cacc-4354-a481-9b9dba4f6de7 | CentOS-7-x86_64-GenericCloud-1703         | active |
| 42a43956-a445-47e5-89d0-593b9c7b07d0 | CentOS-7-x86_64-GenericCloud-1706         | active |
| ffff3320-1bf8-4a9a-a26d-5abd639a6e33 | CentOS-7-x86_64-GenericCloud-1708         | active |
| 28b76dd3-4017-4b46-8dc9-98ef1cb4034f | CentOS-7-x86_64-GenericCloud-1801-01      | active |
| 2e596086-38c9-41d1-b1bd-bcf6c3ddbdef | CentOS-Atomic-Host-7.1706-GenericCloud    | active |
| 1dfd12d7-6f3a-46a6-ac69-03cf870cd7be | CentOS-Atomic-Host-7.1708-GenericCloud    | active |
| 31e9cf36-ba64-4b27-b5fc-941a94703767 | CentOS-Atomic-Host-7.1801-02-GenericCloud | active |
| c59224e2-c5df-4a86-b7b6-49556d8c7f5c | bmc-base                                  | active |
| 5dede8d3-a723-4744-97df-0e6ca93f5460 | ipxe-boot                                 | active |
+--------------------------------------+-------------------------------------------+--------+

Episode 81 - Autosploit, bug bounties, and the future of security

Posted by Open Source Security Podcast on February 07, 2018 12:05 AM
Josh and Kurt talk about AutoSploit, bug bounties and fixing flaws, market forces in security, future expectations, and how humans perceive threats.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/6232416/height/90/theme/custom/autoplay/no/autonext/no/thumbnail/yes/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes


This blog has moved

Posted by Josh Bressers on February 06, 2018 08:16 PM
If you've managed to find this blog and are wondering where the updates are, it's found a new home. You can find the new posts here
https://www.csoonline.com/blog/sober-security/

You can find the RSS feed here
https://www.csoonline.com/blog/sober-security/index.rss

As always, feel free to hit me up on Twitter with questions or comments
@joshbressers

Fedora Red Team on ITProTV

Posted by Jason Callaway on February 03, 2018 02:18 PM

Back at BSidesDE — which was awesome, BTW — I was interviewed by ITProTV, and had the opportunity to discuss the Fedora Red Team.

<iframe allowfullscreen="true" class="youtube-player" height="446" src="https://www.youtube.com/embed/tQojNeA-IiY?version=3&amp;rel=1&amp;fs=1&amp;autohide=2&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" style="border:0;" type="text/html" width="739"></iframe>

Matching Create and Teardown in an Ansible Role

Posted by Adam Young on January 31, 2018 07:13 PM

Nothing lasts forever. Except some developer setups that no-one seems to know who owns, and no one is willing to tear down. I’ve tried to build the code to clean up after myself into my provisioning systems. One pattern I’ve noticed is that the same data is required for building and for cleaning up a cluster. When I built Ossipee, each task had both a create and a teardown stage. I want the same from Ansible. Here is how I’ve made it work thus far.

The main mechanism I use is a conditional include based on a variable that gets set by the calling playbook. Here is the tasks/main.yml file for one of my modules:

---
- include_tasks: create.yml
  when: not teardown

- include_tasks: teardown.yml
  when: teardown

I have two playbooks which call the same role. The playbooks/create.yml file:

---
- hosts: localhost
  vars:
    teardown: false
  roles:
    -  provision

and the playbooks/teardown.yaml file:

---
- hosts: localhost
  vars:
    teardown: true
  roles:
    -  provision

All of the real work is done in the tasks/create.yml and tasks/teardown.yml files. For example, I need to create a bunch of Network options in Neutron in a particular (dependency driven) order. Teardown needs to be done in the reverse order. Here is the create fragment for the network pieces:

- name: int_network
  os_network:
    cloud: "{{ cloudname }}"
    state: present
    name: "{{ netname }}_network"
    external: false
  register: osnetwork

- os_subnet:
    cloud: "{{ cloudname }}"
    state: present
    network_name: "{{ netname }}_network"
    name: "{{ netname }}_subnet"
    cidr: 192.168.24.0/23
    dns_nameservers:
      - 8.8.8.7
      - 8.8.8.8

- os_router:
    cloud: "{{ cloudname }}"
    state: present
    name: "{{ netname }}_router"
    interfaces: "{{ netname }}_subnet"
    network: public

To tear this down, I can reverse the order:

    
- os_router:
    cloud: rdusalab
    state: absent
    name: "{{ netname }}_router"

- os_subnet:
    cloud: rdusalab
    state: absent
    network_name: "{{ netname }}_network"
    name: "{{ netname }}_subnet"

- os_network:
    cloud: rdusalab
    state: absent
    name: "{{ netname }}_network"
    external: false

As you can see, the two files share a naming convention: name: “{{ netname }}_network” should really be precalculated in the vars file and then used in both cases. That is a good future improvement.
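
A sketch of that improvement, assuming the role's defaults/main.yml is where these would live:

# defaults/main.yml (hypothetical): calculate the names once
network_name: "{{ netname }}_network"
subnet_name: "{{ netname }}_subnet"
router_name: "{{ netname }}_router"

Both create.yml and teardown.yml would then reference network_name and friends instead of repeating the expression.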

You can see the real value when it comes to lists of objects. For example, to create a set of virtual machines:

- name: create CFME server
  os_server:
    cloud: "{{ cloudname }}"
    state: present
    name: "cfme.{{ clustername }}"
    key_name: ayoung-pubkey
    timeout: 200
    flavor: 2
    boot_volume: "{{ cfme_volume.volume.id }}"
    security_groups:
      - "{{ securitygroupname }}"
    nics:
      -  net-id:  "{{ osnetwork.network.id }}"
         net-name: "{{ netname }}_network"
    meta:
      hostname: "{{ netname }}"
  register: cfme_server

It is easy to reverse this with the list of host names. In teardown.yml:

- os_server:
    cloud: "{{ cloudname }}"
    state: absent
    name: "cfme.{{ clustername }}"
  with_items: "{{ cluster_hosts  }}"

To create the set of resources I can run:

ansible-playbook   playbooks/create.yml 

and to clean up

ansible-playbook   playbooks/teardown.yml 

This pattern scales. If you have three roles that all follow this pattern, they can be run in forward order to set up, and in reverse order to tear down. However, it does tend to work at odds with Ansible's role dependency mechanism: Ansible does not give you a way to specify that the dependent roles should be run in reverse order during teardown.
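
One workaround is simply to list the roles by hand, in the opposite order, in the teardown playbook; a sketch with hypothetical role names:

---
- hosts: localhost
  vars:
    teardown: true
  roles:
    - hosts      # hypothetical role created last, so torn down first
    - network    # hypothetical
    - provision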

Deploying an image on OpenStack that is bigger than the available flavors.

Posted by Adam Young on January 31, 2018 03:44 AM

Today I tried to use our local OpenStack instance to deploy CloudForms Management Engine (CFME). Our OpenStack deployment has a set of flavors that are all defined with 20 GB disks. The CFME image is larger than this, and will not deploy with any of those flavors. Here is how I worked around it.

The idea is that, instead of booting a server on Nova using an image and a flavor, first create a bootable volume, and use that to launch the virtual machine.

The command line way to create an 80 GB volume would be:

openstack volume create --image cfme-rhevm-5.9.0.15-1 --size 80 bootable_volume

But as you will see later, I used Ansible to create it instead.

Uploading the image (downloaded from the redhat.com portal)

openstack image create --file ~/Downloads/cfme-rhevm-5.9.0.15-1.x86_64.qcow2 cfme-rhevm-5.9.0.15-1

Which takes a little while. Once it is done:

$ openstack image show cfme-rhevm-5.9.0.15-1
+------------------+---------------------------------------------------------------------------------+
| Field            | Value                                                                           |
+------------------+---------------------------------------------------------------------------------+
| checksum         | 52c57210cb8dd2df26ff5279a5b0be06                                                |
| container_format | bare                                                                            |
| created_at       | 2018-01-30T21:09:20Z                                                            |
| disk_format      | raw                                                                             |
| file             | /v2/images/cfcca613-40d9-44c8-b12f-e0ddc93ab914/file                            |
| id               | cfcca613-40d9-44c8-b12f-e0ddc93ab914                                            |
| min_disk         | 0                                                                               |
| min_ram          | 0                                                                               |
| name             | cfme-rhevm-5.9.0.15-1                                                           |
| owner            | fc56aad6163c44dc8beb0c287a975ca3                                                |
| properties       | direct_url='file:///var/lib/glance/images/cfcca613-40d9-44c8-b12f-e0ddc93ab914' |
| protected        | False                                                                           |
| schema           | /v2/schemas/image                                                               |
| size             | 1072365568                                                                      |
| status           | active                                                                          |
| tags             |                                                                                 |
| updated_at       | 2018-01-30T21:35:30Z                                                            |
| virtual_size     | None                                                                            |
| visibility       | private                                                                         |
+------------------+---------------------------------------------------------------------------------+

I used Ansible to create the volume and the server. This is the fragment from my task.yaml file.

- name: create CFME volume
  os_volume:
    cloud: "{{ cloudname }}"
    image: cfme-rhevm-5.9.0.15-1
    size: 80
    display_name: cfme_volume
  register: cfme_volume

- name: create CFME server
  os_server:
    cloud: "{{ cloudname }}"
    state: present
    name: "cfme.{{ clustername }}"
    key_name: ayoung-pubkey
    timeout: 200
    flavor: 2
    boot_volume: "{{ cfme_volume.volume.id }}"
    security_groups:
      - "{{ securitygroupname }}"
    nics:
      -  net-id:  "{{ osnetwork.network.id }}"
         net-name: "{{ netname }}_network"
    meta:
      hostname: "{{ netname }}"
  register: cfme_server

The interesting part is the boot_volume: "{{ cfme_volume.volume.id }}" line, which uses the value registered in the volume create step to get the ID of the new volume.
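
For comparison, the command line equivalent of booting from that volume would be roughly the following (the flavor, key, network, and server names are assumed):

openstack server create --flavor 2 --key-name ayoung-pubkey \
    --volume bootable_volume --network provision_network cfme.example.test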

Freeing up a Volume from a Nova server that errored

Posted by Adam Young on January 31, 2018 03:44 AM

Trial and error. It's a key part of getting work done in my field, and I make my share of errors. Today, I tried to create a virtual machine in Nova using a bad glance image that I had converted to a bootable volume:

The error message was:

 {u'message': u'Build of instance d64fdd07-748c-4e27-b212-59e8cef9d6bf aborted: Block Device Mapping is Invalid.', u'code': 500, u'created': u'2018-01-31T03:10:56Z'}

The VM could not release the volume.

$  openstack server remove volume d64fdd07-748c-4e27-b212-59e8cef9d6bf de4909df-e95c-4a54-af5c-c24a26146a89
Can't detach root device volume (HTTP 403) (Request-ID: req-725ce3fa-36e5-4dd8-b10f-7521c91a5c32)

So I deleted the instance:

  openstack server delete d64fdd07-748c-4e27-b212-59e8cef9d6bf

But when I went to list the volumes:

+--------------------------------------+-------------+--------+------+---------------------------------------------------------------+
| ID                                   | Name        | Status | Size | Attached to                                                   |
+--------------------------------------+-------------+--------+------+---------------------------------------------------------------+
| de4909df-e95c-4a54-af5c-c24a26146a89 | xxxx        | in-use |   80 | Attached to d64fdd07-748c-4e27-b212-59e8cef9d6bf on /dev/vda  |
+--------------------------------------+-------------+--------+------+---------------------------------------------------------------+
$ openstack volume delete de4909df-e95c-4a54-af5c-c24a26146a89
Failed to delete volume with name or ID 'de4909df-e95c-4a54-af5c-c24a26146a89': Invalid volume: Volume status must be available or error or error_restoring or error_extending and must not be migrating, attached, belong to a group or have snapshots. (HTTP 400) (Request-ID: req-f651299d-740c-4ac9-9f52-8a603eace8f6)
1 of 1 volumes failed to delete.

To unwedge it I need to run:

$ cinder reset-state --attach-status detached de4909df-e95c-4a54-af5c-c24a26146a89
Policy doesn't allow volume_extension:volume_admin_actions:reset_status to be performed. (HTTP 403) (Request-ID: req-8bdff31a-7745-4e5e-a449-a5dac5d87f70)
ERROR: Unable to reset the state for the specified entity(s).

So, finally, I had to get an admin account (role admin on any project will work, still…)

. ~/devel/openstack/salab/rduv3-admin.rc
cinder  reset-state --attach-status detached de4909df-e95c-4a54-af5c-c24a26146a89

And now (as my non admin user)

$ openstack volume list  
+--------------------------------------+-------------+-----------+------+-----------------------------------------------+
| ID                                   | Name        | Status    | Size | Attached to                                   |
+--------------------------------------+-------------+-----------+------+-----------------------------------------------+
| de4909df-e95c-4a54-af5c-c24a26146a89 | xxxx        | available |   80 |                                               |
+--------------------------------------+-------------+-----------+------+-----------------------------------------------+
$ openstack volume delete xxxx
$ openstack volume list  
+--------------------------------------+-------------+--------+------+-----------------------------------------------+
| ID                                   | Name        | Status | Size | Attached to                                   |
+--------------------------------------+-------------+--------+------+-----------------------------------------------+
+--------------------------------------+-------------+--------+------+-----------------------------------------------+

I talked with the Cinder team about the policy for volume_extension:volume_admin_actions:reset_status and they seem to think that it is too unsafe for an average user to be able to perform. Thus, a “force delete” like this would need to be a new operation, or a different flag on an existing operation.

We’ll work on it.

Episode 80 - GPS tracking and jamming

Posted by Open Source Security Podcast on January 31, 2018 12:14 AM
Josh and Kurt talk about GPS metadata giving away military bases and GPS jamming as part of testing.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/6207077/height/90/theme/custom/autoplay/no/autonext/no/thumbnail/yes/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes



Creating an Ansible Inventory file using Jinja templating

Posted by Adam Young on January 29, 2018 12:31 AM

While there are lots of tools in Ansible for generating an inventory file dynamically, in a system like this, you might want to be able to perform additional operations against the same cluster. For example, once the cluster has been running for a few months, you might want to do a Yum update. Eventually, you want to de-provision. Thus, having a remote record of what machines make up a particular cluster can be very useful. Dynamic inventories can be OK, but often it takes time to regenerate the inventory, and that may slow down an already long process, especially during iterated development.

So, I like to generate inventory files. These are fairly simple files, but they are not one of the supported file types in Ansible. Ansible does support ini files, but inventory files may have lines that are not in key=value format.

Instead, I use Jinja formatting to generate inventory files, and they are pretty simple to work with.

UPDATE: I jumped the gun on the inventory file I was generating. The template and completed inventory have been corrected.

To create the set of hosts, I use the OpenStack server (os_server) task, like this:

- name: create servers
  os_server:
    cloud: "{{ cloudname }}"
    state: present
    name: "{{ item }}.{{ clustername }}"

    image: rhel-guest-image-7.4-0
    key_name: ayoung-pubkey
    timeout: 200
    flavor: 2
    security_groups:
      - "{{ securitygroupname }}"
    nics:
      -  net-id:  "{{ osnetwork.network.id }}"
         net-name: "{{ netname }}_network" 
    meta:
      hostname: "{{ netname }}"
  with_items: "{{ cluster_hosts }}"
  register: osservers

- file:
    path: "{{ config_dir }}"
    state: directory
    mode: 0755

- file:
    path: "{{ config_dir }}/deployments"
    state: directory
    mode: 0755

- file:
    path: "{{ cluster_dir }}"
    state: directory
    mode: 0755

- template:
    src: inventory.ini.j2
    dest: "{{ cluster_dir }}/inventory.ini"
    force: yes
    backup: yes

A nice thing about this task is that, whether it is creating a new server or not, it produces the same output: a JSON object that has the server data in an array.

The following template is my current fragment.

[all]
{% for item in osservers.results %}
{{ item.server.interface_ip }}
{% endfor %}

{% for item in osservers.results %}
[{{ item.server.name }}]
{{ item.server.interface_ip  }}

{% endfor %}


[ipa]
{% for item in osservers.results %}
{% if item.server.name.startswith('idm')  %}
{{ item.server.interface_ip  }}
{% endif %}
{% endfor %}



[all:vars]
ipa_server_password={{ ipa_server_password }}
ipa_domain={{ clustername }}
deployment_dir={{ cluster_dir }}
ipa_realm={{ clustername|upper }}
cloud_user=cloud-user
ipa_admin_user_password={{  ipa_admin_password }}
ipa_forwarder={{ ipa_forwarder }}
lab_nameserver1={{ lab_nameserver1 }}
lab_nameserver2={{ lab_nameserver2 }}

I keep the variable definitions in a separate file. This produces an inventory file that looks like this:

[all]
10.11.95.161
10.11.95.149
10.11.95.152
10.11.95.159

[idm.ayoung.rdusalab]
10.11.95.161

[master.ayoung.rdusalab]
10.11.95.149

[node0.ayoung.rdusalab]
10.11.95.152

[node1.ayoung.rdusalab]
10.11.95.159



[ipa]
10.11.95.161



[all:vars]
ipa_server_password=FreeIPA4All
ipa_domain=ayoung.rdusalab
deployment_dir=/home/ayoung/rippowam/deployments/ayoung.rdusalab
ipa_realm=AYOUNG.RDUSALAB
cloud_user=cloud-user
ipa_admin_user_password=FreeIPA4All
ipa_forwarder=192.168.52.3
lab_nameserver1=8.8.8.8
lab_nameserver2=8.8.8.7

My next step is to create a host group for all of the nodes (node0, node1) based on a shared attribute. I probably will do that by converting the list of hosts to a dictionary keyed by hostname, and have the name of the groups as the value.
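
A simpler prefix-based sketch, in the same style as the [ipa] group in the template above, would look like this, though the dictionary approach described here is more general:

[nodes]
{% for item in osservers.results %}
{% if item.server.name.startswith('node') %}
{{ item.server.interface_ip }}
{% endif %}
{% endfor %}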

Getting Shade for the Ansible OpenStack modules

Posted by Adam Young on January 28, 2018 06:55 PM

When Monty Taylor and company looked to update the Ansible support for OpenStack, they realized that there was a neat little library waiting to emerge: Shade. Pulling the duplicated code into Shade brought along all of the benefits that a good refactoring can accomplish: fewer cut and paste errors, common things work in common ways, and so on. However, this means that the OpenStack modules are now dependent on a remote library being installed on the managed system. And we do not yet package Shade as part of OSP or the Ansible products. If you do want to use the OpenStack modules for Ansible, here is the “closest to supported” way you can do so.

The Shade library does not attempt to replace the functionality of the python-*client libraries provided by upstream OpenStack, but instead uses them to do work. Shade is thus more of a workflow coordinator between the clients. It should not surprise you, then, to find that shade requires such libraries as keystoneauth1 and python-keystoneclient. In an OSP12 deployment, these can be found in the rhel-7-server-openstack-12-rpms repository. So, as a prerequisite, you need to have this repository enabled for the host where you plan on running the playbooks. If you are setting up a jumphost for this, that jumphost should be running RHEL 7.3, as that has the appropriate versions of all the other required RPMs as well. I tried this on a RHEL 7.4 system, and it turns out it has too new a version of python-urllib3.

Shade has one additional dependency beyond what is provided with OSP: Munch. This is part of Fedora EPEL and can be installed from the provided link. Then, shade can be installed from RDO.

Let me be clear that these are not supported packages yet. This is just a workaround to get them installed via RPMs. This is a slightly better solution than using PIP to install and manage your Shade deployment, as some others have suggested. It keeps the set of python code you are running tracked via the RPM database. When a supported version of shade is provided, it should replace the version you install from the above links.

Notes from Jim Peckham’s wrestling camp.

Posted by Adam Young on January 26, 2018 02:12 AM

I twice had the privilege of attending Jim Peckham’s Earning the Right to Win wrestling camp. It was a fantastic experience. Jim was a wonderful coach, teacher, and role model.

Photo from The Boston Globe

Jim Peckham

The camp was run right out of his two-car garage, on its Resolite(tm)-covered floor. The numbers were kept low, to fit into the limited space.

Each morning, he woke us early, had us perform pushups and situps. There used to be a rope climb, but he had to discontinue it for insurance reasons…but he still had it on the T-Shirt. Breakfast was so healthy that we would spend the rest of the day passing gas.

He taught an insane number of moves: I think he counted to 300. And he insisted we take notes. I thought I had a full set, but it turns out that I got maybe half of them. They stayed locked away until a week ago, when I finally transcribed them. They are not the greatest notes, but hopefully they will trigger a few memories.

Earning the right to win

At least 100% on the mat

Difference is in lifestyle

Extra effort

greatest tool to posses imagination. imagine what it feel like to be #1. Believe you can win. 2 forms of discipline internal and external.

Develop blameless attitude. Don’t blame (even yourself);  learn

must believe in yourself

Pride 2 edged sword must have class, get courteous fashion real pride not fake

Courage is ideals ideas what willing

Single I

Low Singles

Defend Change Angle Turn Knee Into man get weight over the man

1. Basic lock both hands to ankles shoulder against thigh

2. setup snatch single step on his foot hooks

3. Low Single to other leg power impact single

4. If poor shot, bring trailing knee in and down lift lead knee drive in and up with leg behind you to knee tap on far knee with lead hand

5. Lockup push head away and single

6. Miss far knee wipeout behind leg hops forward hops back & block

7. hops forward, hops back cut off above knee picket fence

8. Drag arm to opposite side single transpose

9. Step across and in front lift leg high and foot sweep

10. Same setup hook behind knee and jam

11. Grab his write pull in he reacts back in single.

12. Cheat on transpose bring left arm higher on thigh standing cradle. Heel trip.

two knees down

13. On Knees sprawls post hand stand on right foot, shrub punch & look back

14. Single sprawls. Turn away limp arm he falls beside you right hand windmills

15. Single step over w/ right shoulder limp arm behind him

16. Single Sprawls whizzers post hand, butt up, right knee down and across to underneath lift to double

17. Single Whizzer sprawl trap whizzer wrist step across hug knee to chest push w/ head pull wrist pivot

18. single pull leg laterally step up far ankle pick

3rd position both on knees

19. Single hands locked pull sprawl whizzers squeeze elbow to thigh pull up on toes drive come around

20. Knee against shin pull shin up thigh step up on right foot head down trap left arm above the elbow throw right arm to armpit

21. Same set up his head down my head up legal full nelson locked in armpit

22. or near side cradle with step over

23. same setup he stands up to feet clear leg to double

Fourth position

2bent left in the air, knee inside

24. bump forward knee pick walk into him w/ right leg

Roll strawberry shortcake step back with right, forward with left

25. squeeze elbow to side step back w/ right foot forward with left under his leg squat arch bridge

26. Trap knee rotate right arm to instep pull leg up step out hook ankle cross ankle pick

27. Pull

28. forward drive underneath to backdoor or Bulgarian

29. V on knees he’s on feet single post trap draped arm step up on right foot fall to hip, throw, put

30. (cont) weight on his chin then high leg over

Single II

5th position
High Single
Low Single

1. He sprawls post on head bring head out up ear to sternum up to Foot between Legs, toes and knees inward your chin on his thigh step back w right foot head up.

2. Step forward front or trip back to run pipe or double

5. high single (block) punch far knee with right hand step out into him lift, step through, turk

6.if the knockdown doesn’t work use switch to left step back

7. high single pushes head sideways go to crotch squat and lift leg goes down go to a turk if it start up go between legs from behind go to his back and crunch

8. if he squats con’t he lifted go behind knee right under but w/ crotch hand trip

9. high single trap far hand lift foot picket fence at mid thigh

10. high single reach behind leg w/ left grab foot knee sideways w/ head step back and squat.

11. high single he deep tight whizzers locinch (?) step forward pinch elbow to side squat rotate backwards head to chest

12. high single foot comes down to get inside of leg. pull left hand through step, way back with right hand regroup

13. press down on shoulder reach between knees w/ hand come out behind leg

14. man turns away take foot out while he hops

15. push down on heel up on toe man to man

16. inside end of foot force him to hop forward and take his foot out

Inside

17. hop back wards wallop side of head block ankles he falls back

7th position high single left ends up on chest

8. Grabs elbow jumps up in air to make you carry his weight step back with left foot, clear leg take him down (swing left behind right)

19. or ankle sweep (17)

8th head outside

20. high crotch from knees drive is w left foot step w/ right foot chop knee

21 Get knee under elbow high crotch lift swing feet in front trap ankle drop cover body bulgarian

22. if he grabs you ankle or cross chest slap wrist step up on right foot break to left hip keep weight up on him

23. high crotch spreads legs (sprawls) grab instep step up pull laterally right hand cuts behind knees pivot backwards lean forward.

24. on knees he sprawls cross faces grab ligament on knee right arm up through crotch grabbing ankle post left hand step up on left then right turn to face left hand over inside hip Bulgarian.

25. Dump put shoulder on thigh go cover with right come across

26. head down 1/2 nelson

27. goes through crotch trap wrist break to side bring head out bridge up on him turn down hill

Double Legs

Angle, Base, Penetration

both knees both feet one knee

2 stance parallel sugar foot

parallel drive off either foot 190degress more easily attacked.

2 circle on man to get him into or out of position
break grab wrist elbow pocket come up elbow pocket V hip [?]

Use sugarfoot feed man his weaker side to attack

Elevation change

Flat back
weight on front 1/2 of feet.
keep head at proper angle to back

not too high up, not looking at floor

drop butt to keep head up, strengthen neck

start with hands

com,e up from inside
be in position hand together

cow catcher on a sprawl
under armpit

dropping right hip
he over compensates
rotate knees punch hip with right shoulder chop knee

cow cathers O.C’s bu dropping left hip
he ties up pull in elbow to chest and punch across pull out duck under
step up underneath hand through crotch right and left lock and lift

slap hip drag

throw hand across grab w other hand to Russian 2 on 1

attack center of gravity double takes 10′

head to center chest to thigh man sprawl to side and head goes to side

1st lift man or limb you are attacking

(more) Doubles.

Classical Tie Up
foot leading on side you are on top am on top of

Pull him in a circle step in behind his right foot look for your show &squat slide hand down to knee

He reaches for lock up roll hand over the top and drag
A foot comes forward step in (hook_) pull back on leg. cover his boy with right arm (torque)

He has tie up. Keep head clear grab four fingers force his arm down & out and duck double under pull w/ left force w/ head picot turk
————
You have right write he has left pull right reacts back pull left to double duck under w/ knee

back heel chop

—-

knee buckle double : man reaches for lockup Russian 2 on 1 drive arm up hit double.

he hook back of shoulder lock (chop) hands) behind knees

——–

Ties up baseball grip down grabs wrist drag to double

——-

He grabs wrists split thumb and fingers post up

head between legs sit up grab left ankle out back door

touch and go to double post left hand to his right knee chop left knee to nay and turk

must have angle (picture show shoulder over knee, hip back)

—-

he reaches block hand stop not push and shoot he sprawls go up to bear hug and down legs — move in — step up hook from inside to outside

from stance . he palms shoulders. stutter step to elbow post to double left left hand trap bottom leg. hand between legs/ left knee up to finish

—–

touch and go tap toward apex

tap front blind eyes (cover) snap with other touch front of head to apex go back of head forward apex

–head between trap ankle pull down punch with right shoulder

land with brace (head forward)

Power double: Squat lean forward until you on tips lunge off both feet and double

FIREMANS!

1. Guy hits lead leg up Fireman underhook far arm hook leg whipover

2. Fireman both knees down rotate pal away put it on his chest. squat

3. Fireman to side hold on to butt he reverses high leg bring arm cross chest

4. Firemans sprawl he turns toward you put left hand on his arm, he goes double whiparound

5. Firemans carry Tie up Thumb out palm up pull arm over head [square] hold on arm up crotch swing legs around step up on right foot lower him to mat lower left shoulder raise right

6. He sprawls power double

7. He squats get arm away single

8. land on right knee walk to left knee finish as fireman

9. He sprawls put s right knees down put right first outside knee drive forward left knee to side

10. FM Head underneath sprawls walk up po9st r fist wing left leg to fist break to side finish

11. FM sprawls post r hand hit him in ribs with right with in ribs w/ r and right leg drive left shoulder to mat squeezing arm finish.

Bottom

1. Trap outside wrist
outside knee forward break to side high left under over w/ l arm don’t let go of wrist

2. Step up Drop inside elbow to mat dip in underneath elevate legs push arm down hips up, to head lock

3. Stand up inside push up look up trap 4 fingers hit him in sternum w/ left forearm to thigh. step up pivot step back look for high single weigh back into him

4. Left arm punch ref step outside left hand trap hand switch hand punch elbow in armpit throw hips out

5. outside stand up Trap hand shift weight forward in step pivot out backwards step away w/ left back with right

6. wrist drag up on hand down on wrist (overhinged)
(step up out away lean forward over foot)

7. Long sitout [HEAD Forward] low right knee below left foot let go of hand [turn away]

8. short sitout [Head forward] create space get further away. R leg forward left leg back pivot on shoulder punch far away w/ right hand.

9. Swivel feet away overhook arm drive back into him bring right leg up drive off left foot pivot on shoulder step over go neutral

10. If he follows grab ankle roll him to 4:30 roll down change hand control back to head lock

Peterson

10. Swivel come out front w/ arm must bring arm out at right angle to body both thumbs on top.

11. push but up bring left foot across trap wrist roll across both shoulder turn face

12. Flip Granby step up on right foot fork with feet left hand down throw left elbow back jump land on right hip.

13. 2 on 1 overdrive Grab his wrist w yolk of hand resist then drive

14. post back on right foot hip hoist he grabs ankle grab wrist sit back into him cut back under lower and away

15. man holds ankle grab 4 fingers twist shrug away pivot if he overhooks redrag.

16. same he follows dip roll

17. Same start (ear pop shrug) ankle hip heist

18. he has ankle step step back with left arm doesn’t clear leg does down to right knee elbow up crotch roll

19. if more behind him l arm behind his left left take him crib near leg with my leg

21. if body too far forward drive across straighten out right leg catch elbow w right heel post right hand reverse 1/2

MISTAKES:

takedown 3 mistakes

head down
not set up arms extended

Keep head up!

low single keep head inside

hand in front, down
don’t shoot while other can controls my head .
from knees don reach.

Not Good!

Barrel catcher
Boarding house reach
Quickdraw

Shift balance by weight

foot must be behind or under center of gravity and have weight on it
no set up telegraphs move

man is sugar foot try to circle toward his back
hit when rear foot is in the air.

shoot from close in

never stand up trailing leg w head inside

bottom up

land bell down on head lock

don’t wrestle ear to ear

don’t shoot in a circle
keep elbows in
move both feet at a time
don’t cross feet

Don’t shoot down shoot in
never step over from whizzer

wrestle eye to eye or lower than opponent

wrestle ball of foot

don’t cross feet
don’t wrestle on knees keep weight on opponent.
movement to apex of triangle
bottom position angle and weight)

Adam Young June 21

Age 15 11 months
weight 155

Chinup 12 11 8
Pushup 40 30 28

Deadlift 275 206
Squad 80 60 clean 140 105

9-11 reps

Using JSON home on a Keystone server

Posted by Adam Young on January 25, 2018 03:42 PM

Say you have an AUTH_URL like this:

$ echo $OS_AUTH_URL 
http://openstack.hostname.com:5000/v3

And now you want to do something with it.  You might think you can get the info you want from the /v3 url, but it does not tell you much:

$ curl $OS_AUTH_URL 
{"version": {"status": "stable", "updated": "2016-10-06T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.7", "links": [{"href": "http://openstack.hostname.com:5000/v3/", "rel": "self"}]}}[ayoung@ayoung541 salab]$

Not too helpful.  Turns out, though, that there is data; it just requires the json-home Accept header.

You access the document like this:

curl $OS_AUTH_URL -H "Accept: application/json-home"

 

I’m not going to paste the output: it is huge.

Here is how I process it:

curl $OS_AUTH_URL -H "Accept: application/json-home" | jq '. | .resources '

This will format it somewhat legibly.  To get a specific section, say the endpoint list, you can find it in the doc like this:

 "http://docs.openstack.org/api/openstack-identity/3/rel/endpoints": {
 "href": "/endpoints"
 },

And to pull it out programmatically:

curl -s $OS_AUTH_URL -H "Accept: application/json-home" | jq '. | .resources | .["http://docs.openstack.org/api/openstack-identity/3/rel/endpoints"] | .href'
"/endpoints"

Attributes make writing SELinux policy easier

Posted by Dan Walsh on January 25, 2018 04:10 AM

Yesterday I received an email from someone who was attempting to write SELinux policy for a daemon process, "abcd", that he was being required to run on his systems.

"

...

 Here's the problem:  the ****** agent runs as root and has the ability to
make changes to system configuration files.  So deploying this agent
means handing over root-level control of all of my RHEL servers to
people I don't know and who have no accountability to my system owner or
authorizing official.  ..... really, really
uncomfortable about this and want to protect our system from
unauthorized changes inflicted by some panicking yabbo ....

This seems like a job for SELinux.  Our RHEL7 baseline has SELinux
enforcing (
AWESOME) already, so I figured we just need to write a custom policy
that defines a type for the ******* agent process (abcd_t) and confines
that so that it can only access the agent's own data directories.

We did that, and it's not working out.  Once we start the confined
agent, it spews such a massive volume of AVC denials that it threatens
to fill the log volume and the agent never actually starts.  There are
way too many errors for the usual 'read log/allow exception' workflow to
be viable.  My guess is the agent is trying to read a bunch of sensitive
system files and choking because SELinux won't let it.

Have you got any advice or ideas here?

...

Here is my response:

I need to see the AVC's to fully diagnose.  But usually there is a type  attribute for these things.  Type attributes are basically names for  groups of types.  A common usecase for an attribute would be group all the process types on the system.  In SELinux we call these domains, so we created a domain attribute.  I would figure you are seeing something like  abcd_t is trying to list the processes in /proc. (This is basically the equivalent of running ps -ef ) All process types on the  system have the domain attribute.  Instead of adding an allow rule  for abcd_t to read every process type, you could just use the domain  attribute.
 

allow abcd_t domain:dir list_dir_perms; 

allow abcd_t domain:file read_file_perms; 

allow abcd_t domain:lnk_file read_lnk_file_perms;
 

These rules allow abcd_t processes to read all of the data in /proc about processes.  Better yet, there is a SELinux interface that implements this which you could use.

domain_read_all_domains_state(abcd_t)

Another quick and dirty one would be to allow abcd_t to read all files  on the system.  All file types have the file_type attribute. If you wanted to allow abcd_t to read all files on your system, you could do:

files_read_all_files(abcd_t) 

A slightly more secure version of this would be to allow it to read all non-security files.

files_read_non_security_files(abcd_t)

This would allow abcd_t to read all files except for files like /etc/shadow.

BTW, there are also big interfaces to "dontaudit" a process from reading all files on your system.  With files_dontaudit_getattr_all_files(abcd_t), you could then specify the file types that you want to allow abcd_t to read, and not have to see all the noise when it attempts to read files you don't want it to.

How do I find out the attributes on the system?

If you want to see attributes associated with a type, you can use the seinfo command. 

You can list all of the attributes in the SELinux policy using the following:

seinfo -a


If I want to look at attributes on a specific type, I can use the following seinfo command:

> seinfo -tusr_t -x
Types: 1
  type usr_t, base_file_type, base_ro_file_type, entry_type,  exec_type, file_type, mountpoint, non_auth_file_type,  non_security_file_type; 

Types your processes can write to.

When writing SELinux policy I always worry more about what the processes can write than what they read, although obviously there are lots of files that I don't want processes to be able to read.  But the way a hacked process is going to attack my system is by writing to it.

Now when it comes to adding rules and types for what the abcd_t process can write, that should be more specific. 
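
A hedged sketch of what that more specific part might look like, with a made-up type name for abcd's own data directory:

# Hypothetical dedicated type for abcd's writable data
type abcd_var_lib_t;
files_type(abcd_var_lib_t)

# Let abcd_t fully manage only its own data, not generic system files
allow abcd_t abcd_var_lib_t:dir manage_dir_perms;
allow abcd_t abcd_var_lib_t:file manage_file_perms;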

Look at other policy files for inspiration:

Often when I write policy, or code in general, I look at examples.  The best examples of existing policy are in the source of the selinux-policy package; you can get all of them from github.com:

git clone git@github.com:fedora-selinux/selinux-policy-contrib.git contrib

cd contrib

Have a look around...

Using SELinux attributes and interfaces makes grouping a lot of rules easier when writing policy.
 

Episode 79 - Skyfall: please don't yell 'fire'

Posted by Open Source Security Podcast on January 24, 2018 12:11 AM
Skyfall Scotland
Josh and Kurt talk about Skyfall, fake reports, risk, logging, and how a civilized society functions.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/6182481/height/90/theme/custom/autoplay/no/autonext/no/thumbnail/yes/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes



Episode 78 - Risk lessons from Hawaii

Posted by Open Source Security Podcast on January 16, 2018 10:40 PM
Josh and Kurt talk about the accidental missile warning in Hawaii. We also discuss general preparedness and risk.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/6156285/height/90/theme/custom/autoplay/no/autonext/no/thumbnail/yes/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes


Episode 77 - npm and the supply chain

Posted by Open Source Security Podcast on January 11, 2018 02:42 AM
Josh and Kurt talk about the recent npm happenings. What it means for the supply chain, and we end with some thoughts on how maybe none of this matters.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/6134997/height/90/theme/custom/autoplay/no/autonext/no/thumbnail/yes/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes


Episode 76 - Meltdown aftermath

Posted by Open Source Security Podcast on January 07, 2018 09:11 PM
Josh and Kurt talk about the aftermath of Meltdown. The details of the flaw are probably less interesting than what happens now.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/6123826/height/90/theme/custom/autoplay/no/autonext/no/thumbnail/yes/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes