Fedora Security Planet

Wildcard certificates in FreeIPA

Posted by Fraser Tweedale on February 20, 2017 04:55 AM

The FreeIPA team sometimes gets asked about wildcard certificate support. A wildcard certificate is an X.509 certificate where the DNS-ID has a wildcard in it (typically as the most specific domain component, e.g. *.cloudapps.example.com). Most TLS libraries match wildcard domains in the obvious way.

In this blog post we will discuss the state of wildcard certificates in FreeIPA, but before proceeding it is fitting to point out that wildcard certificates are deprecated, and for good reason. While the compromise of any TLS private key is a serious matter, the attacker can only impersonate the entities whose names appear on the certificate (typically one or a handful of DNS addresses). But a wildcard certificate can impersonate any host whose name happens to match the wildcard value.

In time, validation of wildcard domains will be disabled by default and (hopefully) eventually removed from TLS libraries. The emergence of protocols like ACME that allow automated domain validation and certificate issuance mean that there is no real need for wildcard certificates anymore, but a lot of programs are yet to implement ACME or similar; therefore there is still a perceived need for wildcard certificates. In my opinion some of this boils down to lack of awareness of novel solutions like ACME, but there can also be a lack of willingness to spend the time and money to implement them, or a desire to avoid changing deployed systems, or taking a "wait and see" approach when it comes to new, security-related protocols or technologies. So for the time being, some organisations have good reasons to want wildcard certificates.

FreeIPA currently has no special support for wildcard certificates, but with support for custom certificate profiles, we can create and use a profile for issuing wildcard certificates.

Creating a wildcard certificate profile in FreeIPA

First, kinit admin and export an existing service certificate profile configuration to a file:

ftweedal% ipa certprofile-show caIPAserviceCert --out wildcard.cfg
---------------------------------------------------
Profile configuration stored in file 'wildcard.cfg'
---------------------------------------------------
  Profile ID: caIPAserviceCert
  Profile description: Standard profile for network services
  Store issued certificates: TRUE

Modify the profile; the minimal diff is:

--- wildcard.cfg.bak
+++ wildcard.cfg
@@ -19 +19 @@
-policyset.serverCertSet.1.default.params.name=CN=$request.req_subject_name.cn$, o=EXAMPLE.COM
+policyset.serverCertSet.1.default.params.name=CN=*.$request.req_subject_name.cn$, o=EXAMPLE.COM
@@ -108 +108 @@
-profileId=caIPAserviceCert
+profileId=wildcard

Now import the modified configuration as a new profile called wildcard:

ftweedal% ipa certprofile-import wildcard \
    --file wildcard.cfg \
    --desc 'Wildcard certificates' \
    --store 1
---------------------------
Imported profile "wildcard"
---------------------------
  Profile ID: wildcard
  Profile description: Wildcard certificates
  Store issued certificates: TRUE

Next, set up a CA ACL to allow the wildcard profile to be used with the cloudapps.example.com host:

ftweedal% ipa caacl-add wildcard-hosts
-----------------------------
Added CA ACL "wildcard-hosts"
-----------------------------
  ACL name: wildcard-hosts
  Enabled: TRUE

ftweedal% ipa caacl-add-ca wildcard-hosts --cas ipa
  ACL name: wildcard-hosts
  Enabled: TRUE
  CAs: ipa
-------------------------
Number of members added 1
-------------------------

ftweedal% ipa caacl-add-profile wildcard-hosts --certprofiles wildcard
  ACL name: wildcard-hosts
  Enabled: TRUE
  CAs: ipa
  Profiles: wildcard
-------------------------
Number of members added 1
-------------------------

ftweedal% ipa caacl-add-host wildcard-hosts --hosts cloudapps.example.com
  ACL name: wildcard-hosts
  Enabled: TRUE
  CAs: ipa
  Profiles: wildcard
  Hosts: cloudapps.example.com
-------------------------
Number of members added 1
-------------------------

Then create a CSR with subject CN=cloudapps.example.com. A minimal sketch of the CSR creation with OpenSSL (the key size and key filename are illustrative; my.csr is the file used below):
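
ftweedal% # my.key is an illustrative filename
ftweedal% openssl req -new -newkey rsa:2048 -nodes \
    -keyout my.key -out my.csr \
    -subj '/CN=cloudapps.example.com'

Now issue the certificate: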

ftweedal% ipa cert-request my.csr \
    --principal host/cloudapps.example.com \
    --profile wildcard
  Issuing CA: ipa
  Certificate: MIIEJzCCAw+gAwIBAgIBCzANBgkqhkiG9w0BAQsFADBBMR8...
  Subject: CN=*.cloudapps.example.com,O=EXAMPLE.COM
  Issuer: CN=Certificate Authority,O=EXAMPLE.COM
  Not Before: Mon Feb 20 04:21:41 2017 UTC
  Not After: Thu Feb 21 04:21:41 2019 UTC
  Serial number: 11
  Serial number (hex): 0xB

Discussion

Observe that the subject common name (CN) in the CSR does not contain the wildcard. FreeIPA requires naming information in the CSR to perfectly match the subject principal. As mentioned in the introduction, FreeIPA has no specific support for wildcard certificates, so if a wildcard were included in the CSR, it would not match the subject principal and the request would be rejected.

When constructing the certificate, Dogtag performs a variable substitution into a subject name string. That string contains the literal wildcard and the period to its right, and the common name (CN) from the CSR gets substituted in after that. The relevant line in the profile configuration is:

policyset.serverCertSet.1.default.params.name=CN=*.$request.req_subject_name.cn$, o=EXAMPLE.COM

When it comes to wildcards in Subject Alternative Name DNS-IDs, it might be possible to configure a Dogtag profile to add this in a similar way to the above, but I do not recommend it, nor am I motivated to work out a reliable way to do this, given that wildcard certificates are deprecated. (By the time TLS libraries eventually remove support for treating the subject CN as a DNS-ID, I will have little sympathy for organisations that still haven’t moved away from wildcard certs.)

In conclusion: you shouldn’t use wildcard certificates, and FreeIPA has no special support for them, but if you really need to, you can do it with a custom certificate profile.

Episode 33 - Everybody who went to the circus is in the circus (RSA 2017)

Posted by Open Source Security Podcast on February 15, 2017 06:22 AM
Josh and Kurt are at the same place at the same time! We discuss our RSA sessions and how things went. Talk of CVE IDs, open source libraries, Wordpress, and early morning sessions.

Download Episode

Show Notes


Reality Based Security

Posted by Josh Bressers on February 13, 2017 04:37 AM
If I demand you jump off the roof and fly, and you say no, can I call you a defeatist? What would you think? To a reasonable person it would be insane to associate this attitude with being a defeatist. There are certain expectations that fall within the confines of reality. Expecting things to happen outside of those rules is reckless and can often be dangerous.

Yet in the universe of cybersecurity we do this constantly. Anyone who doesn’t pretend we can fix problems is a defeatist and part of the problem. We just have to work harder and not claim something can’t be done, that’s how we’ll fix everything! After being called a defeatist during a discussion, I decided to write some things down. We spend a lot of time trying to fly off of roofs instead of looking for practical realistic solutions for our security problems.

The way cybersecurity works today, someone will say “this is a problem”. Maybe it’s IoT, or ransomware, or antivirus, secure coding, security vulnerabilities; whatever, pick something, there’s plenty to choose from. It’s rarely in a general context though; it will be sort of specific, for example “we have to teach developers how to stop adding security flaws to software”. Someone else will say “we can’t fix that”, then they get called a defeatist for being negative, and it’s assumed the defeatists are the problem. The real problem is they’re not wrong. It can’t be fixed. We will never see humans write error-free code; there is no amount of training we can give them. Pretending it can be fixed is what’s dangerous. Pretending we can fix problems we can’t is lying.

The world isn’t fairy dust and rainbows. We can’t wish for more security and get it. We can’t claim to be working on a problem if we have no clue what it is or how to fix it. I’ll pick on IoT for a moment. How many security IoT “experts” exist now? The number is non-trivial. Does anyone have any ideas how to understand the IoT security problems? Talking about how to fix IoT doesn’t make sense today; we don’t even really understand what’s wrong. Is the problem devices that never get updates? What about poor authentication? Maybe managing the devices is the problem? It’s not one thing, it’s a lot of things put together in a martini shaker, shaken up, then dumped out in a heap. We can’t fix IoT because we don’t know what it even is in many instances. I’m not a defeatist, I’m trying to live in reality and think about the actual problems. It’s a lot easier to focus on solutions for problems you don’t understand. You will find a solution; those solutions won’t make sense though.

So what do we do now? There isn’t a quick answer, there isn’t an easy answer. The first step is to admit you have a problem though. Defeatists are a real thing, there’s no question about it. The trick is to look at the people who might be claiming something can’t be fixed. Are they giving up, or are they trying to reframe the conversation? If you declare them a defeatist, the conversation is now over, you killed it. On the other side of the coin, pretending things are fine is more dangerous than giving up, you’re living in a fantasy. The only correct solution is reality based security. Have honest and real conversations, don’t be afraid to ask hard questions, don’t be afraid to declare something unfixable. An unfixable problem is really just one that needs new ideas.

You can't fly off the roof, but trampolines are pretty awesome.

I'm @joshbressers on Twitter, talk to me.

Episode 32 - Gambling as a Service

Posted by Open Source Security Podcast on February 08, 2017 01:35 AM
Josh and Kurt discuss random numbers, a lot. Also slot machines, gambling, and dice.

Download Episode

Show Notes


A sweet metaphor

Posted by Stephen Gallagher on February 06, 2017 05:17 PM

If you’ve spent any time in the tech world lately, you’ve probably heard about the “Pets vs. Cattle” metaphor for describing system deployments. To recap: the idea is that administrators treat their systems as animals. Some they treat very much like a pet: they care for them, monitor them intently and, if they get “sick”, nurse them back to health. Other systems are more like livestock: their value is in their ready availability, and if any individual one gets sick, lamed, etc., you simply euthanize it and then go get a replacement.

Leaving aside the dreadfully inaccurate representation of how ranchers treat their cattle, this metaphor is flawed in a number of other ways. It’s constantly trotted out as being representative of “the new way of doing things vs. the old way”. In reality, I cannot think of a realistic environment that would ever be able to move exclusively to the “new way”, with all of their machines being small, easily-replaceable “cattle”.

No matter how much the user-facing services might be replaced with scalable pods, somewhere behind that will always be one or more databases. While databases may have load-balancers, failover and other high-availability and performance options, ultimately they will always be “pets”. You can’t have an infinite number of them, because the replication storm would destroy you, and you can’t kill them off arbitrarily without risking data loss.

The same is true (perhaps doubly so) for storage servers. While it may be possible to treat the interface layer as “cattle”, there’s no way that you would expect to see the actual storage itself being clobbered and overwritten.

The main problem I have with the traditional metaphor is that it doesn’t demonstrate the compatibility of both modes of operation. Yes, there’s a lot of value to moving your front-end services to the high resilience that virtualization and containerization can provide, but that’s not to say that it can continue to function without the help of those low-level pets as well. It would be nice if every part of the system from bottom to top was perfectly interchangeable, but it’s unlikely to happen.

So, I’d like to propose a different metaphor to describe things (in keeping with the animal husbandry theme): beekeeping. Beehives are (to me) a perfect example of how a modern hybrid-mode system is set up. In each hive you have thousands of completely replaceable workers and drones; they gather nectar and support the hive, but the loss of any one (or even dozens) makes no meaningful difference to the hive’s production.

However, each hive also has a queen bee; one entity responsible for controlling the hive and making sure that it continues to function as a coherent whole. If the queen dies or is otherwise removed from the hive, the entire system collapses on itself. I think this is a perfect metaphor for those low-level services like databases, storage and domain control.

This metaphor better represents how the different approaches need to work together. “Pets” don’t provide any obvious benefit to their owners (save companionship), but in the computing world, those systems are fundamental to keeping things running. And with the beekeeping metaphor, we even have a representative for the collaborative output… and it even rhymes with “money”.


There are no militant moderates in security

Posted by Josh Bressers on February 06, 2017 04:16 PM
There are no militant moderates. Moderates never stand out for having a crazy opinion or idea, moderates don’t pick fights with anyone they can. Moderates get the work done. We could look at the current political climate, how many moderate reasonable views get attention? Exactly. I’m not going to talk about politics, that dumpster fire doesn’t need any more attention than it’s already getting. I am however going to discuss a topic I’m calling “security moderates”, or the people who are doing the real security work. They are sane, reasonable, smart, and actually doing things that matter. You might be one, you might know one or two. If I was going to guess, they’re a pretty big group. And they get ignored quite a lot because they're too busy getting work done to put on a show.

I’m going to split existing security talent into some sort of spectrum. There’s nothing more fun than grouping people together in overly generalized ways. I’m going to use three groups. You have the old guard on one side (I dare not mention left or right lest the political types have a fit). This is the crowd I wrote about last week: the people who want to protect their existing empires. On the other side you have a lot of crazy untested ideas, many of which nobody knows if they work or not. Most of them won’t work; at best they’re a distraction, at worst they are dangerous.

Then in the middle we have our moderates. This group is the vast majority of security practitioners. The old guard think these people are a bunch of idiots who can’t possibly know as much as they do. After all, 1999 was the high point of security! The new crazy ideas group thinks these people are wasting their time on old ideas, their new hip ideas are the future. Have you actually seen homomorphic end point at rest encryption antivirus? It’s totally the future!

Now here’s the real challenge. How many conferences and journals have papers about reasonable practices that work? None. They want sensational talks about the new and exciting future, or maybe just new and exciting. In a way I don’t blame them, new and exciting is, well, new and exciting. I also think this is doing a disservice to the people getting work done in many ways. Security has never been an industry that has made huge leaps driven by new technology. It’s been an industry that has slowly marched forward (not fast enough, but that’s another topic). Some industries see huge breakthroughs every now and then. Think about how relativity changed physics overnight. I won’t say security will never see such a breakthrough, but I think we would be foolish to hope for one. The reality is our progress is made slowly and methodically. This is why putting a huge focus on crazy new ideas isn’t helping, it’s distracting. How many of those new and crazy ideas from a year ago are even still ideas anymore? Not many.

What do we do about this sad state of affairs? We have to give the silent majority a voice. Anyone reading this has done something interesting and useful. In some way you’ve moved the industry forward, you may not realize it in all cases because it’s not sensational. You may not want to talk about it because you don’t think it’s important, or you don’t like talking, or you’re sick of the fringe players criticizing everything you do. The first thing you should do is think about what you’re doing that works. We all have little tricks we like to use that really make a difference.

Next write it down. This is harder than it sounds, but it’s important. Most of these ideas aren’t going to be full papers, but that’s OK. Industry changing ideas don’t really exist, small incremental change is what we need. It could be something simple like adding an extra step during application deployment or even adding a banned function to your banned.h file. The important part is explaining what you did, why you did it, and what the outcome was (even if it was a failure, sharing things that don’t work has value). Some ideas could be conference talks, but you still need to write things down to get talks accepted. Just writing it down isn’t enough though. If nobody ever sees your writing, you’re not really writing.  Publish your writing somewhere, it’s never been easier to publish your work. Blogs are free, there are plenty of groups to find and interact with (reddit, forums, twitter, facebook). There is literally a security conference every day of the year. Find a venue, tell your story.

There are no militant moderates, and this is a good thing. We have enough militants with agendas. What we need more than ever are reasonable and sane moderates with great ideas, making a difference every day. If the sane middle starts to work together, things will get better, and we will see the change we need.

Have an idea how to do this? Let me know: @joshbressers on Twitter

Mapping from iSCSI session to device.

Posted by Adam Young on February 03, 2017 07:11 PM

I was monitoring my system, so I knew that /dev/sdb was the new iSCSI target I was trying to turn into a file system. To prove it, I ran:

iscsiadm -m session --print=3

And saw:

...
		scsi4 Channel 00 Id 0 Lun: 0
		scsi4 Channel 00 Id 0 Lun: 1
			Attached scsi disk sdb		State: running

But what did that do? Using strace helped me sort it out a little. I worked backwards. An invocation along these lines shows the interesting calls (the exact syscall filter here is my guess):
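
$ strace -e trace=file,read,write iscsiadm -m session -P 3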

stat("/sys/subsystem/scsi/devices/4:0:0:1", 0x7ffc3aab0a50) = -1 ENOENT (No such file or directory)
stat("/sys/bus/scsi/devices/4:0:0:1", {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
lstat("/sys/bus/scsi/devices/4:0:0:1/state", {st_mode=S_IFREG|0644, st_size=4096, ...}) = 0
open("/sys/bus/scsi/devices/4:0:0:1/state", O_RDONLY) = 3
read(3, "running\n", 256)               = 8
close(3)                                = 0
write(1, "\t\t\tAttached scsi disk sdb\t\tState"..., 42			Attached scsi disk sdb		State: running

But looking in the file /sys/bus/scsi/devices/4:0:0:1/state I saw only the word “running”, so it must have found the device earlier.

Looking at the complete list of files opened is illuminating, although too long to list here. It starts by enumerating through /sys/class/iscsi_session and hits
/sys/class/iscsi_session/session3. Under /sys/class/iscsi_session/session3/device/target4:0:0 I found:

$ ls -la /sys/class/iscsi_session/session3/device/target4:0:0  
total 0
drwxr-xr-x. 5 root root    0 Feb  3 13:04 .
drwxr-xr-x. 6 root root    0 Feb  3 13:04 ..
drwxr-xr-x. 6 root root    0 Feb  3 13:04 4:0:0:0
drwxr-xr-x. 8 root root    0 Feb  3 13:04 4:0:0:1
drwxr-xr-x. 2 root root    0 Feb  3 14:03 power
lrwxrwxrwx. 1 root root    0 Feb  3 13:04 subsystem -> ../../../../../bus/scsi
-rw-r--r--. 1 root root 4096 Feb  3 13:04 uevent

And, following the symlink:

[ansible@dialga ~]$ ls -la /sys/bus/scsi/devices/4:0:0:1/block
total 0
drwxr-xr-x. 3 root root 0 Feb  3 13:35 .
drwxr-xr-x. 8 root root 0 Feb  3 13:04 ..
drwxr-xr-x. 8 root root 0 Feb  3 13:35 sdb

Notice that /sys/bus/scsi/devices/4:0:0:0/ does not have a block subdirectory.

There is probably more to the chain than this, but it should be enough to connect the dots. I am not sure if there is a way to reverse it short of listing the devices under /sys/bus/scsi/devices/ .
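
That said, one untested candidate: the canonical sysfs path of a block device encodes the session, target, and LUN, so resolving the /sys/block symlink should lead back to the session directory (the exact prefix depends on the transport driver):

$ readlink -f /sys/block/sdb
/sys/devices/platform/host4/session3/target4:0:0/4:0:0:1/block/sdb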

Installing and Running Ansible on Fedora 25

Posted by Adam Young on February 03, 2017 11:30 AM

I have two machines beyond the Laptop on which I am currently typing this article. I want to manage them from my workstation using Ansible. All three machines are running Fedora 25 Workstation.

The two nodes are called dialga and munchlax. You can guess my kids’ interests. My inventory.ini for them is minimal:

[all]
dialga 
munchlax

Make sure basic Ansible functionality works:

$ ansible -i $PWD/inventory.ini all -m ping
munchlax | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
dialga | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

Some config changes I have to make:

Create a new user and group, both called ansible, on this machine. Change the sudoers file to let the ansible user perform sudo operations without supplying a password. This is a security risk in general, but I will be gating all access via my desktop machine and key-based auth only. I can use my ~ayoung/.ssh directory to pre-populate the ansible user’s, as it only has public keys in it. A minimal sketch of the user and sudoers steps (the sudoers drop-in file name is my choice):
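
$ sudo useradd ansible
$ # the drop-in file name below is illustrative
$ echo 'ansible ALL=(ALL) NOPASSWD: ALL' | \
    sudo tee /etc/sudoers.d/ansible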

A cloud-init install on OpenStack would have set this for me, but since we are talking bare metal here, and no Ironic/PXE, I include this to document what was done manually.

$ sudo cp -a ~ayoung/.ssh/ ~ansible/
$ sudo chown -R ansible:ansible ~ansible/.ssh

Get rid of GSSAPI auth for SSH. I am not using it, and, since I have a TGT for my work account, it is slowing down all traffic. Ideally, I would leave GSSAPI enabled, but prioritize key-based auth higher.

$ sudo grep GSSAPI /etc/ssh/sshd_config
# GSSAPI options
#GSSAPIAuthentication yes
GSSAPIAuthentication no
GSSAPICleanupCredentials no
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no
#GSSAPIEnablek5users no

Make sure to restart sshd:

sudo systemctl restart sshd

Ensure that the ansible, python2, and python2-dnf RPMs are installed. Ansible now runs with local modules. This speeds things up, but requires the nodes to have pre-installed code, which I don’t really like; I don’t want to have to update Ansible at the start of all playbooks. I am fairly certain that these can all be installed during the initial install of the machine if you chose the additional ansible dnf group. Failing that, the raw module runs over plain SSH without any Python on the node, so one bootstrap option looks like this (a sketch, assuming the passwordless sudo set up above):
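
$ ansible -i $PWD/inventory.ini all -b -m raw \
    -a 'dnf install -y python2 python2-dnf'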

Episode 31 - XML is never the solution

Posted by Open Source Security Podcast on February 01, 2017 01:37 AM
Josh and Kurt discuss door locks, Ikea, chair testing sounds, electrical safety, autonomous cars, and XML vs JSON.

Download Episode

Show Notes


Barely Functional Keystone Deployment with Docker

Posted by Adam Young on January 31, 2017 04:10 PM

My eventual goal is to deploy Keystone using Kubernetes. However, I want to understand things from the lowest level on up. Since Kubernetes will be driving Docker for my deployment, I wanted to get things working for a single node Docker deployment before I move on to Kubernetes. As such, you’ll notice I took a few short cuts. Mostly, these involve configuration changes. Since I will need to use Kubernetes for deployment and configuration, I’ll postpone doing it right until I get to that layer. With that caveat, let’s begin.

My last post showed how to set up a database and talk to it from a separate container. After I got that working, it stopped working, so I am going to back off that a bit and just focus on the other steps. I do know that the issue was in the setup of the separate bridge, as when I changed to using the default bridge network, everything worked fine.

Of the many things I skimped on, the most notable is that I am not doing Fernet tokens, nor am I configuring the Credentials key. These both require outside coordination to have values synchronized between Keystone servers. You would not want the secrets built directly into the Keystone container.
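
For reference, the Fernet side of that is a one-time key setup whose output (by default under /etc/keystone/fernet-keys/) would then have to be distributed to every Keystone node out-of-band:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone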

To configure the Keystone database system, I use a single-shot container. This can be thought of as the Command design pattern: package up everything you need to perform an action, and send it to the remote system for execution. In this case, the Dockerfile pulls together the dependencies and calls a shell script to do the configuration. Here is the Dockerfile:

FROM index.docker.io/centos:7
MAINTAINER Adam Young <adam@younglogic.com>

RUN yum install -y centos-release-openstack-newton &&\
    yum update -y &&\
    yum -y install openstack-keystone mariadb openstack-utils  &&\
    yum -y clean all

COPY configure_keystone.sh /
COPY keystone-configure.sql /
CMD /configure_keystone.sh

The shell script initializes the database using an external SQL file. I use echo statements for logging/debugging. Passwords are hard coded, as are host names. These should be extracted out to environment variables in the next iteration.

#!/bin/bash

echo -n Database 
mysql -h 172.17.0.2  -P3306 -uroot --password=my-secret-pw < keystone-configure.sql
echo " [COMPLETE]"

echo -n "configuration "
openstack-config  --set  /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@172.17.0.2/keystone
DATABASE_CONN=`openstack-config  --get  /etc/keystone/keystone.conf database connection `
echo $DATABASE_CONN

echo " [COMPLETE]"

echo -n "db-sync "
keystone-manage db_sync
echo " [COMPLETE]"

echo -n "bootstrap "
keystone-manage bootstrap --bootstrap-password=FreeIPA4All
echo " [COMPLETE]"

The SQL file merely creates the Keystone database and initializes access.

-- Don't drop database keystone;
create database keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone'; 
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone'; 
exit

By not dropping the Keystone database, we keep from destroying data if we accidentally run this container twice. It means that, for iterative development, I have to manually delete the database prior to a run, but that is easily done from the command line. I can check the state of the database using:

docker run -it     -e MYSQL_ROOT_PASSWORD="my-secret-pw"    --rm mariadb sh    -c 'exec mysql -h 172.17.0.2 -P3306 -uroot -p"$MYSQL_ROOT_PASSWORD"  keystone'

With minor variations on that, I can delete; for example, passing the statement directly with -e (a sketch along the same lines):
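
docker run -it     -e MYSQL_ROOT_PASSWORD="my-secret-pw"    --rm mariadb sh    -c 'exec mysql -h 172.17.0.2 -P3306 -uroot -p"$MYSQL_ROOT_PASSWORD" -e "drop database keystone;"'

Once I can connect and confirm that the database is correctly initialized, I can launch the Keystone container. Here is the Dockerfile: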

FROM index.docker.io/centos:7
MAINTAINER Adam Young <adam@younglogic.com>

RUN yum install -y centos-release-openstack-newton &&\
    yum update -y &&\
    yum -y install openstack-keystone httpd mariadb openstack-utils mod_wsgi &&\
    yum -y clean all

RUN openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@172.17.0.2/keystone &&\
    cp /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d

ADD run-httpd.sh /run-httpd.sh
RUN chmod -v +x /run-httpd.sh

CMD ["/run-httpd.sh"]

This makes use of the resources inside the openstack-keystone RPM to configure the Apache HTTPD instance.

The run-httpd.sh script is lifted from the HTTPD container.

#!/bin/bash
# Copied from
#https://github.com/CentOS/CentOS-Dockerfiles/blob/master/httpd/centos7/run-httpd.sh
# Make sure we're not confused by old, incompletely-shutdown httpd
# context after restarting the container.  httpd won't start correctly
# if it thinks it is already running.
rm -rf /run/httpd/* /tmp/httpd*
exec /usr/sbin/apachectl -DFOREGROUND

This should probably be done as an additional layer on top of the CentOS-Dockerfiles version.
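
Both images need to be built before anything runs. Assuming each Dockerfile lives in its own directory, the sequence is roughly as follows (the image and directory names here are my own choices):

# image and directory names below are illustrative
docker build -t keystone-configure keystone-configure/
docker run --rm keystone-configure
docker build -t openstack-keystone keystone/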

I can then run the Keystone container using:

docker run -it -d  --name openstack-keystone    openstack-keystone

The Keystone and MariaDB containers are both running; the single-shot configuration container stopped after it completed its tasks, so it does not appear:

$ docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS               NAMES
6f6afa855cae        openstack-keystone   "/run-httpd.sh"          29 minutes ago      Up 29 minutes                           openstack-keystone
1127467c0b2b        mariadb:latest       "docker-entrypoint.sh"   17 hours ago        Up 17 hours         3306/tcp            some-mariadb

Confirm access:

$ curl http://172.17.0.3:35357/v3
{"version": {"status": "stable", "updated": "2016-10-06T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.7", "links": [{"href": "http://172.17.0.3:35357/v3/", "rel": "self"}]}}
$ curl http://172.17.0.3:5000/v3
{"version": {"status": "stable", "updated": "2016-10-06T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.7", "links": [{"href": "http://172.17.0.3:5000/v3/", "rel": "self"}]}}

Everything you know about security is wrong, stop protecting your empire!

Posted by Josh Bressers on January 30, 2017 12:01 AM
Last week I kept running into old school people trying to justify why something that made sense in the past still makes sense today. Usually I ignore these sorts of statements, but I feel like I’m seeing them often enough that it’s time to write something up. We’re in the middle of disruptive change. That means that the way security used to work doesn’t work anymore (some people think it does) and in the near future, it won’t work at all. In some instances it will actually be harmful, if it’s not already.


The real reason I’m writing this up is because there are really two types of leaders. Those who lead to inspire change, and those who build empires. For empire builders, change is their enemy, they don’t welcome the new disrupted future. Here’s a list of the four things I ran into this week that gave me heartburn.


  • You need AV
  • You have to give up usability for security
  • Lock it all down then slowly open things up
  • Firewall everything


Let’s start with AV. A long time ago everyone installed an antivirus application. It’s just what you did, sort of like taking your vitamins. Most people can’t say why, they just know if they didn't do this everyone would think they're weird. Here’s the question for you to think about though: How many times did your AV actually catch something? I bet the answer is very very low, like number of times you’ve seen bigfoot low. And how many times have you seen AV not stop malware? Probably more times than you’ve seen bigfoot. Today malware is big business, they likely outspend the AV companies on R&D. You probably have some control in that phone book sized policy guide that says you need AV. That control is quite literally wasting your time and money. It would be in your best interest to get it changed.


Usability vs security is one of my favorite topics these days. Security lost. It’s not that usability won, it’s that there was never really a battle. Many of us security types don’t realize that though. We believe that there is some eternal struggle between security and usability where we will make reasonable and sound tradeoffs between improving the security of a system and adding a text field here and an extra button there. What really happened was the designers asked to use the bathroom and snuck out through the window. We’re waiting for them to come back and discuss where to add in all our great ideas on security.


Another fan favorite is the idea that the best way to improve network security is to lock everything down, then start to open it up slowly as devices try to get out. See the above conversation about usability. If you do this, people just work around you. They’ll use their own devices with network access, or just work from home. I’ve seen employees using the open wifi of the coffee shop downstairs. Don’t lock things down; solve problems that matter. If you think this is a neat idea, you’re probably the single biggest security threat your organization has today, so at least identifying the problem won’t take long.


And lastly let’s talk about the old trusty firewall. Firewalls are the friend who shows up to help you move, drinks all your beer instead of helping, then tells you they helped because now you have less stuff to move. I won’t say they have no value, they’re just not great security features anymore. Most network traffic is encrypted (or should be), and the users have their own phones and tablets connecting to who knows what network. Firewalls only work if you can trust your network, you can’t trust your network. Do keep them at the edge though. Zero trust networking doesn’t mean you should purposely build a hostile network.

We’ll leave it there for now. I would encourage you to leave a comment below or tell me how wrong I am on Twitter. I’d love to keep this conversation going. We’re in the middle of a lot of change. I won’t say I’m totally right, but I am trying really hard to understand where things are going, or need to go in some instances. If my silly ramblings above have put you into a murderous rage, you probably need to rethink some life choices, best to do that away from Twitter. I suspect this will be a future podcast topic at some point, these are indeed interesting times.

How wrong am I? Let me know: @joshbressers on Twitter.



Connecting to MariaDB with a network configuration in Docker

Posted by Adam Young on January 28, 2017 02:07 AM

Since the “link” directive has been deprecated, I was wondering how to connect to a MariaDB instance on a non-default network when both the database and the monitor are running in separate containers. Here is what I got:

First I made sure I could get the link method to work as described on the Docker MariaDB site.

Create the network

docker network create --driver bridge maria-bridge

Create the database on that network

docker run --network=maria-bridge --name some-mariadb -e \
      MYSQL_ROOT_PASSWORD=my-secret-pw -d mariadb:latest

Create the monitor, also on that network. Note that none of the env vars set by link have been set; for now, just hard-code them

docker run -it --network maria-bridge \
   -e MYSQL_ROOT_PASSWORD="my-secret-pw" \
   --rm mariadb sh \
   -c 'exec mysql -hsome-mariadb -P3306 -uroot -p"$MYSQL_ROOT_PASSWORD"'
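
The reason -hsome-mariadb resolves at all is that user-defined bridge networks provide built-in DNS for container names, which is precisely the functionality the deprecated --link used to provide. To see both containers and their addresses on the bridge:

docker network inspect maria-bridge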

Running GUI Applications in a container with SELinux

Posted by Adam Young on January 26, 2017 05:13 PM

As I work more and more with containers, I find myself wanting to make more use of them to segregate running third party apps. Following the lead of Jessie Frazelle, I figured I would try to run the Minecraft client in a container on Fedora 25. As expected, it was a learning experience, but I got it working. Here’s the summary:

I started with Wakaru Himura’s docker-minecraft-client Dockerfile, which was written for Ubuntu. When it didn’t work for me, I started trying for a Fedora-based one. It took a couple of iterations.

The error indicated that the container was having trouble connecting to, or communicating with, the Unix domain socket used by the X server. It was returned by the Java code, and here is an abbreviated version of the stack trace.

You can customize the options to run it in the run.sh and rebuilding the image
No protocol specified
Exception in thread "main" java.lang.InternalError: Can't connect to X11 window server using ':1' as the value of the DISPLAY variable.
	at sun.awt.X11GraphicsEnvironment.initDisplay(Native Method)
	at sun.awt.X11GraphicsEnvironment.access$200(X11GraphicsEnvironment.java:65)
	at sun.awt.X11GraphicsEnvironment$1.run(X11GraphicsEnvironment.java:110)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.awt.X11GraphicsEnvironment.(X11GraphicsEnvironment.java:74)
        . . .
	at net.minecraft.bootstrap.Bootstrap.main(Bootstrap.java:378)

Wakaru’s version uses the Oracle JDK. I’ve been running the OpenJDK from Fedora with Minecraft with few problems, so I simplified the Java install.

I ended up doing a lot of trial and error to get the X authorization code to find the right information from the parent. One thing I tried was the creation of a user inside the container, to mirror my account outside the container, using Wakaru’s code, but modifying it for my personal account.

FROM index.docker.io/fedora:25
MAINTAINER Adam Young <adam@younglogic.com>

RUN dnf -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel java-1.8.0-openjdk-headless
RUN dnf -y  install strace  xorg-x11-xauth
RUN  dnf -y clean all
COPY Minecraft.jar ./

RUN export uid=14370 gid=14370 && \
    mkdir -p /home/ayoung && \
    echo "ayoung:x:${uid}:${gid}:ayoung,,,:/home/ayoung:/bin/bash" >> /etc/passwd && \
    echo "ayoung:x:${uid}:" >> /etc/group  && \
    chown ${uid}:${gid} -R /home/ayoung


CMD XAUTHORITY=~/.Xauthority  /usr/bin/java -jar ./Minecraft.jar

Running that still gave the error from the X server connection. The audit log shows that SELinux was denying the connection.

$  sudo tail -f  /var/log/audit/audit.log | grep avc
type=AVC msg=audit(1485401508.386:1616): avc:  denied  { connectto } for  pid=18405 comm="java" path=002F746D702F2E5831312D756E69782F5831 scontext=system_u:system_r:container_t:s0:c151,c769 tcontext=unconfined_u:unconfined_r:xserver_t:s0-s0:c0.c1023 tclass=unix_stream_socket permissive=0

Disable SELinux (for now) to see if we can get a success.

sudo setenforce permissive

And run like this

 docker run -ti --rm -e DISPLAY --user ayoung:ayoung -v /run/user/14370/gdm/Xauthority:/run/user/14370/gdm/Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix -v /home/ayoung/.Xauthority:/home/ayoung/.Xauthority --net=host minecraft

Et voilà!

OK. Let’s deal with SELinux. First, re-enable, and confirm.

$ sudo setenforce enforcing
$ getenforce 
Enforcing

Let’s break apart the AVC: Gentoo’s page has a good summary of the pieces:

denied { connectto }: the attempt to connect was denied.
pid=18405: the ID of the process that triggered the AVC. Not useful now, as the process has since been terminated.
comm="java": the command exec'ed by the process that triggered the AVC. This comes from the line run in the container: CMD XAUTHORITY=~/.Xauthority /usr/bin/java -jar ./Minecraft.jar
path=002F746D702F2E5831312D756E69782F5831: path=/tmp/.X11-unix/X1 in hex. Can be deciphered using:
/bin/python3 -c 'print(bytearray.fromhex("'$1'").decode())'
scontext=: the security context of the process, in parts:
  system_u: the system user, since dockerd is running from systemd
  system_r: the system role
  container_t: the container type; Docker-specific resources have this label
  s0: sensitivity level 0
  c151,c769: the MCS categories assigned to this container
tcontext=: the security context of the target Unix domain socket, in parts:
  unconfined_u: the unconfined user (me), since the X server was started from my login session
  unconfined_r: the unconfined role
  xserver_t: the X server type, to keep all of X's resources labeled the same way
  s0-s0: sensitivity range, level 0
  c0.c1023: the full category range
tclass=unix_stream_socket: the target class shows it is a Unix domain socket
permissive=0: SELinux was not running in permissive mode.

More information on the levels and contexts is available for those who wish to understand them, but I didn’t need them for this. They are used by other access control tools, and we are not going to bother with them for the Fedora desktop system.

When dealing with SELinux problems, we have a couple tools in the toolkit.

  • We can change the context of the caller
  • We can change the labels
  • we can change the policy

Of the three, changing the policy is most common. We don’t want to break existing policy, so we need a new rule that says that containers can talk to domain sockets for the X server. That policy looks like this:

(allow container_t xserver_t (unix_stream_socket (connectto)))

Elsewhere, we’ve seen that the connection reads the X rules in the Xauthority file, which I have pointing to ~/.Xauthority, so a second rule makes that part happy. Here is my complete mycontainer.cil file

(allow container_t xserver_t (unix_stream_socket (connectto)))
(allow container_t user_home_t (dir (read)))

Add that to the systems policy with:

sudo  semodule -i mycontainer.cil

Re-enable SELinux enforcing, run the container again, and it all works.

It took a lot of troubleshooting to get to that point. Special thanks to grift in #selinux for helping with the policy.

Below this are my raw notes and logs, mostly kept for my own historical perspective. There were a few commands I used that I will want to look at again. I have the IRC log and output from more commands below, too.

The file in question is:

$ ls -Z /tmp/.X11-unix/
 system_u:object_r:user_tmp_t:s0 X0 unconfined_u:object_r:user_tmp_t:s0 X1

And we’ll relabel X1.  Since  this is a test, we’ll do a temporary relabel.  Worst case, everything locks up and we have to reboot.

First I better save this post.  I might get locked out….

$ sudo chcon -t container_t /tmp/.X11-unix/X1
[sudo] password for ayoung: 
chcon: failed to change context of '/tmp/.X11-unix/X1' to ‘unconfined_u:object_r:container_t:s0’: Permission denied

Hmmm.  In the Audit log:

type=AVC msg=audit(1485434178.292:1753): avc: denied { relabelto } for pid=28244 comm="chcon" name="X1" dev="tmpfs" ino=47315 scontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:container_t:s0 tclass=sock_file permissive=0

To test, I am going to disable SELinux, relabel, then re-enable it.

 

$ chcon -t container_t /tmp/.X11-unix/X1
$ sudo ls -Z /tmp/.X11-unix/
     system_u:object_r:user_tmp_t:s0 X0  unconfined_u:object_r:container_t:s0 X1
$ sudo setenforce enforcing
$ docker run -ti --rm -e DISPLAY --user ayoung:ayoung -v /run/user/14370/gdm/Xauthority:/run/user/14370/gdm/Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix -v /home/ayoung/.Xauthority:/home/ayoung/.Xauthority --net=host minecraft
Exception in thread "main" java.awt.AWTError: Can't connect to X11 window server using ':1' as the value of the DISPLAY variable.

and in the audit log:

type=AVC msg=audit(1485434417.111:1789): avc: denied { connectto } for pid=28511 comm="java" path=002F746D702F2E5831312D756E69782F5831 scontext=system_u:system_r:container_t:s0:c124,c220 tcontext=unconfined_u:unconfined_r:xserver_t:s0-s0:c0.c1023 tclass=unix_stream_socket permissive=0

It still has the xserver_t label. Looking at it via ls:

$ sudo ls -Z /tmp/.X11-unix/
ls: cannot access '/tmp/.X11-unix/X1': Permission denied
system_u:object_r:user_tmp_t:s0 X0 (null) X1

Ooops. Probably got lucky there that we didn’t crash. Let’s reset it:

$ sudo setenforce permissive
$ sudo ls -Z /tmp/.X11-unix/
     system_u:object_r:user_tmp_t:s0 X0  unconfined_u:object_r:container_t:s0 X1
$ chcon -t user_tmp_t /tmp/.X11-unix/X1
$ sudo ls -Z /tmp/.X11-unix/
    system_u:object_r:user_tmp_t:s0 X0	unconfined_u:object_r:user_tmp_t:s0 X1
$ sudo setenforce enforcing
$ sudo ls -Z /tmp/.X11-unix/
    system_u:object_r:user_tmp_t:s0 X0	unconfined_u:object_r:user_tmp_t:s0 X1

Since we can’t do ls, it is safe to assume that user launched X processes will also not be able to connect to the socket. But, since the labels don’t match, I am going to assume, also, that we are looking at the wrong file. The target that was denied had a label of xserver_t, and this had user_tmp_t.

Perhaps it was something in /run/user/14370/gdm/Xauthority ? Let’s look.

$ ls -Z /run/user/14370/gdm/Xauthority
unconfined_u:object_r:user_tmp_t:s0 /run/user/14370/gdm/Xauthority

Nope. That path=002F746D702F2E5831312D756E69782F5831 must be pointing somewhere else. Let’s see if it is an inode.

$  find / -inum 002F746D702F2E5831312D756E69782F5831
find: invalid argument `002F746D702F2E5831312D756E69782F5831' to `-inum'

Nope. What is it?

Here is a hint:

$ sudo netstat -xa | grep X11-unix
unix  2      [ ACC ]     STREAM     LISTENING     29431    @/tmp/.X11-unix/X0
unix  2      [ ACC ]     STREAM     LISTENING     47313    @/tmp/.X11-unix/X1
unix  2      [ ACC ]     STREAM     LISTENING     47314    /tmp/.X11-unix/X1
unix  2      [ ACC ]     STREAM     LISTENING     29432    /tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     1612587  @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     40628    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     42628    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     37201    @/tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     30693    @/tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     395122   @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     38481    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     1614636  @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     38798    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     1714036  @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     353557   @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     30688    @/tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     42436    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     36697    @/tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     42430    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     42363    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     37936    @/tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     3713614  @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     55540    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     48315    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     38629    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     3204892  @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     49616    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     1612589  @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     1709056  @/tmp/.X11-unix/X1

Resorting to IRC, got some help in #selinux:

grift: path=/tmp/.X11-unix/X1
       /bin/python3 -c 'print(bytearray.fromhex("'$1'").decode())'
       thats now selinux deals with stream connect
        the "connectto" check is done on the process listening on the socket
        basically stream connect/dgram sendto is a two step thing
        1. step on connectto sendto process listening on the socket respectively
        2. step two writing the actual sock file
        allow container_t xserver_t:unix_stream_socket connectto;
        allow container_t user_tmp_t:sock_file write
        you might want to run semodule -DB before you try it
        fedora is kind of quick to hide events

Here is some of the output from the audit log:

    type=AVC msg=audit(1485447098.800:1983): avc:  denied  { connectto } for  pid=1455 comm="java" path=002F746D702F2E5831312D756E69782F5831 scontext=system_u:system_r:container_t:s0:c250,c486 tcontext=unconfined_u:unconfined_r:xserver_t:s0-s0:c0.c1023 tclass=unix_stream_socket permissive=1
    type=AVC msg=audit(1485447098.800:1984): avc:  denied  { read } for  pid=1455 comm="java" name=".Xauthority" dev="dm-1" ino=393222 scontext=system_u:system_r:container_t:s0:c250,c486 tcontext=system_u:object_r:user_home_t:s0 tclass=dir permissive=1

The convo continued:

grift: echo "(allow container_t xserver_t (unix_stream_socket (connectto)))" > mycontainer.cil
       theres another one that might be related (or not)
       where "java" lists ~/.Xauthority
       echo " (allow container_t user_home_t (dir (read)))" >> mycontainer.cil && semodule -i mycontainer.cil
       run semodule -B to hide it again
       if everything works now for you

Episode 30 - I'm not an expert but I've been yelled at by experts

Posted by Open Source Security Podcast on January 26, 2017 02:02 PM
Josh and Kurt discuss security automation. Machine learning, AI, and a bunch of moral and philosophical boundaries that new future will bring. You've been warned.

Download Episode

Show Notes


Reimagining the Saxophone

Posted by Adam Young on January 25, 2017 06:39 PM

During a trip to Manhattan last winter (Jan 2016 or so) I heard some buskers in Union Square station making sounds that were at once familiar and new.

This is not my video, but this is roughly where they were playing, and this is how Too Many Zooz sounded.

Video: https://www.youtube.com/watch?v=jMe6Y8GDVEI

My whole family stayed and watched for a while.  Entranced.

It turns out, there is a lot of new style music played with old instruments.  Gogol Bordello, Golem, the Pogues, the Dropkick Murphys and many others have done a wonderful job of merging Klezmer and Irish music with Punk, and Post Modern Jukebox has managed to make modern Pop work in older styles. What Too Many Zooz is doing is applying the same ethos to techno/trance/dance.  They call it Brasscore.

I’d call it Jazz.

The wonderful thing about the Tenor Sax is that it sits low enough in the range to cover the low end of the male voice and, with practice, you can get a huge range above.  Sometimes, it needs a little help, though, and you can see Moon Hooch get creative to get notes below the low B flat.

Video: https://www.youtube.com/watch?v=wwBhxBBa7tE

Both groups use effects that were novelties in older music.  Probably the most notable is the use of overtones, that rising squeal that sounds almost electronic.  It turns out there is a whole body of sounds that the dedicated wind player can get from an instrument.  When put together, and played creatively, they can give the impression of a small ensemble, even when just a single sax player is producing the music.

Derek Brown has put them together masterfully:

Video: https://www.youtube.com/watch?v=8LEuTnykXp4

He has a whole body of tutorials on his site that explain the various techniques.  I’ve been following a few of them.  Right now the two things I am working on are slap tonguing and overtones.  For the overtones work, I am following the manual of the master:

 

It has been a lot of fun work.  I can hit the second-octave B flat pretty consistently, and the C and C sharp with work.  I can cheat up to the D using the low B flat fingering, by starting with the high D key open.

In doing so, I’ve felt my sound get stronger, especially at the top range.  I’m not where I want to be there yet, though.

The slap tonguing is, in some ways, harder to learn, as all of the technique is internal to the mouth.  I’ve followed a few different tutorials, but the instructions here have proven to be the most helpful.

I’ve also gone through this sequence many times.

Not there yet, but every now and then, I get the sound.

I’ve really been motivated to practice and master these techniques.  I’ll post video when I feel I have them down sufficiently.

 

Return on Risk Investment

Posted by Josh Bressers on January 24, 2017 08:25 PM
I found myself in a discussion earlier this week that worked its way into return on investment topics. Of course nobody could really agree on what the return was which is sort of how these conversations often work out. It’s really hard to decide what the return on investment is for security features and products. It can be hard to even determine cost sometimes, which should be the easy number to figure out.

All this talk got me thinking about something I’m going to call risk investment. The idea here is that you have a risk, which we’ll think about as the cost. You have an investment of some sort, it could be a product, training, maybe staff. This investment in theory reduces your risk in some measurable way. The reduction of the risk is the return on risk investment. We like to think about these things in the context of money, but risk doesn’t exactly work that way. Risk isn’t something that can often be measured easily. Even incredibly risky behaviors can work out fine, and playing it safe can end horribly. Rather than try to equate everything to money, what if we ignored that for the moment and just worried about risk.

 First, how do you measure your risk? There isn’t a nice answer for this. There are plenty of security frameworks you can use. There are plenty of methodologies that exist, threat modeling, attack surface analysis, pen test reports, architecture reviews, automated scanning of products and infrastructure. There’s no single good answer to this question. I can’t tell you what your risk profile is, you have to decide how you’re going to measure this. What are you protecting? If it’s some sort of regulated data, there will be substantial cost in losing it, so this risk measurement is easy. It’s less obvious if you’re not operating in an environment that has direct cost to having an incident. It’s even possible you have systems and applications that pose zero risk (yeah, I said it).

 Assuming we have a way to determine risk, now we wonder how do you measure the return on controlling risk? This is possibly more tricky than deciding on how to measure your risk. You can’t prove a negative in many instances, there’s no way to say your investment is preventing something from happening. Rather than measure how many times you didn’t get hacked, the right way to think about this is if you were doing nothing, how would you measure your level of risk? We can refer back to our risk measurement method for that. Now we think about where we do have certain protections in place, what will an incident look like? How much less trouble will there be? If you can’t answer this you’re probably in trouble. This is the important data point though. When there is an incident, how do you think your counter measures will help mitigate damage? What was your investment in the risk?

 And now this brings us to our Return on Risk Investment, or RORI as I’ll call it, because I can and who doesn’t like acronyms? Here’s the thing to think about if you’re a security leader. If you have risk, which we all do, you must find some way to measure it. If you can’t measure something you don’t understand it. If you can’t measure your risk, you don’t understand your risk. Once you have your method to understand what’s happening, make note of your risk measurement without any sort of security measures in place, your risk with ideal (not perfect, perfect doesn't exist) measures in place, and your risk with existing measures in place. That will give you an idea of how effective what you’re doing is. Here’s the thing to watch for. If your existing measures are close to the risk level for no measures, that’s not a positive return. Those are things you either should fix or stop doing. Sometimes it’s OK to stop doing something that doesn’t really work. Security theater is real, it doesn’t work, and it wastes money. The trick is to find a balance that can show measurable risk reduction without breaking the bank.


How do you measure risk? Let me know: @joshbressers on Twitter.


Running SAS University Edition on Fedora 25

Posted by Adam Young on January 24, 2017 03:19 AM

My wife is a statistician. Over the course of her career, she’s done a lot of work coding in SAS and, due to the expense of licensing, I’ve never been able to run that code myself. So, when I heard about SAS having a free version, I figured I would download it and have a look, maybe see if I could run something.

Like many companies, SAS went the route of shipping a virtual appliance, and they chose VirtualBox as the virtualization platform. However, when I tried to install and run the VM in VirtualBox, the build assumptions of the mechanism that builds the VirtualBox-specific module for the Linux kernel were not met, and the VM would not run.

Instead of trying to fix that situation, I investigated the possibility of running the virtual appliance via libvirt on my Fedora system’s already installed and configured KVM setup. Turns out it was pretty simple.

To start I went through the registration and download process from here. Once I had a login, I was able to download a file called unvbasicvapp__9411008__ova__en__sp0__1.ova.

What is an ova file? It turns out it is an uncompressed tar file.

$ tar -xf unvbasicvapp__9411008__ova__en__sp0__1.ova
$ ls
SAS_University_Edition.mf  SAS_University_Edition.ovf   SAS_University_Edition.vmdk  unvbasicvapp__9411008__ova__en__sp0__1.ova

Now I had to convert the disk image into something that would work for KVM.

$ qemu-img convert -O qcow2 SAS_University_Edition.vmdk SAS_University_Edition.qcow2
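
To sanity-check the result before importing it, qemu-img can report the format and virtual size of the new image:

$ qemu-img info SAS_University_Edition.qcow2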

Then, I used the virt-manager GUI to import the VM. To be sure I met the constraints, I looked inside the SAS_University_Edition.ovf file. It turns out they ship a pretty modest VM: 1024 MB of memory and 1 virtual CPU. These are easy constraints to meet, and I might actually increase the amount of memory or the number of CPUs in the VM in the future, depending on the size of the data sets I end up playing around with. For now, though, this is enough to make things work.
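
If you would rather not read the raw XML, a quick grep over the OVF can surface those resource values; treat this as a rough filter, since element names vary between OVF producers:

$ grep -iE 'memory|cpu' SAS_University_Edition.ovf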

Add a new VM from the file menu.

Import the existing image

Use the Browse Local button to browse to the directory where you ran the qemu-img convert command above.

Complete the rest of the VM creation. Defaults should suffice. Run the VM inside VM Manager.

Once the boot process has completed, you should get enough information from the console to connect to the web UI.

Hitting the Web UI from a browser shows the landing screen.

Click Start… and start coding

Hello, World.

 

Episode 29 - The Security of Rogue One

Posted by Open Source Security Podcast on January 22, 2017 11:00 PM
Josh and Kurt discuss the security of the movie Rogue One! Spoiler: Security in the Star Wars universe is worse than security in our universe.

Download Episode

Show Notes


Mechanical Computer


Episode 28 - RSA Conference 2017

Posted by Open Source Security Podcast on January 19, 2017 02:00 PM
Josh and Kurt discuss their involvement in the upcoming 2017 RSA conference: Open Source, CVEs, and Open Source CVE. Of course IoT and encryption manage to come up as topics.

Download Episode

Show Notes


What does security and USB-C have in common?

Posted by Josh Bressers on January 16, 2017 06:39 PM
I've decided to create yet another security analogy! You can’t tell, but I’m very excited to do this. One of my long-standing complaints about security is that there are basically no good analogies that make sense. We always try to talk about auto safety, or food safety, or maybe building security, or even pollution. There’s always some sort of existing real-world scenario we try to warp and twist so we can tell a security story that makes sense. So far they’ve all failed. The analogy always starts out strong, then something happens that makes everything fall apart. I imagine a big part of this is because security is really new, but it’s also really hard to understand. It’s just not something humans are good at understanding.

The other day this article was sent to me by @kurtseifried
How Volunteer Reviewers Are Saving The World From Crummy—Even Dangerous—USB-C Cables

The TL;DR is essentially that the world of USB-C cables is a modern-day wild west. There’s no way to really tell which ones are good and which ones are bad, so some people test the cables. It’s nothing official; they’re basically volunteers doing this in their free time. Their feedback is literally the only real way to decide which cables are good and which are bad. That’s sort of crazy if you think about it.

This really got me thinking: it has a lot in common with our current security problems. We have a bunch of products and technologies. We don’t have a good way to tell if something is good or bad. There are some people who try to help with good information. But fundamentally most of our decisions are made with bad or incomplete data.

In the case of the cables, I see two practical ways out of this. The first is some sort of official testing lab: if something doesn’t pass testing, it can’t be sold. This makes sense; there are plenty of things on the market today that go through similar testing, and if the product fails, it doesn’t get sold. In this case the comparable analogies hold up. Auto safety, electrical safety, HDMI; there are plenty of organizations responsible for ensuring the quality and safety of certain products. The cables would be no different.

A possible alternative is to make sure every device exists in a way that assumes bad cables are possible, and deals with that situation in hardware. This would mean devices smart enough not to draw too much power, or not to provide too much power; smart enough to know when some failure mode is coming and disconnect. There are a lot of possibilities here and, to be perfectly honest, no device will be able to do this with 100% accuracy. More importantly, no manufacturer will be willing to add this functionality, because it would add cost, probably a lot of cost. It’s still a remote possibility though, and for the sake of the analogy, we’re going to go with it.

The first example twisted to cybersecurity would mean you need a nice way to measure security. There would be a lab or organization that is capable of doing the testing, then giving some sort of stamp of approval. This has proven to be a really hard thing to do in the past. The few attempts to do this have failed. I suspect it’s possible, just very difficult to do right. Today Mudge is doing some of this with the CITL, but other than that I’m not really aware of anything of substance. It’s a really hard problem to solve, but if anyone can do it right, it’s probably Mudge.

This then leads us to the second possibility which is sort of how things work today. There is a certain expectation that an endpoint will handle certain situations correctly. Each endpoint has to basically assume anything talking to it is broken in some way. All data transferred must be verified. Executables must be signed and safely distributed. The networks the data flows across can’t really be trusted. Any connection to the machine could be an attacker and must be treated as such. This is proving to be very hard though and in the context of the cables, it’s basically the crazy solution. Our current model of security is the crazy solution. I doubt anyone will argue with that.

This analogy certainly isn’t perfect, but the more I think about it the more I like it. I’m sure there are problems thinking about this in such a way, but for the moment, it’s something to think about at least. The goal is to tell a story that normal people can understand so we can justify what we want to do and why. Normal people don’t understand security, but they do understand USB cables.


Do you have a better analogy? Let me know @joshbressers on Twitter.

Episode 27 - Prove to me you are human

Posted by Open Source Security Podcast on January 16, 2017 03:45 PM
Josh and Kurt discuss NTP, authentication issues, network security, airplane security, AI, and Minecraft.

Download Episode

Show Notes


Episode 26 - Tell your sister, Stallman was right

Posted by Open Source Security Podcast on January 12, 2017 02:03 PM
Josh and Kurt end up discussing video game speed running, which is really just hacking. We also end up discussing the pitfalls of the modern world where you don't own your software or services. Stallman was right!

Download Episode

Show Notes





Exploring long JSON files with jq

Posted by Adam Young on January 12, 2017 01:39 PM

The JSON file format is used for marshalling data in lots of different applications. If you are new to an application, and don’t know the data, it might be hard to visually parse the JSON and understand what you are seeing.  The jq command line utility can help make it easier to scope in to a section of the file.  This is a starting point.

Kubelet, the daemon that runs on a Kubernetes node, has a web API for returning stats.  To query it from that node:

curl -k https://localhost:10250/stats/

However, the amount of text returned is several thousand lines.  The first few lines look like this:

$ curl -sk https://localhost:10250/stats/ | head 
{
 "name": "/",
 "subcontainers": [
 {
 "name": "/machine.slice"
 },
 {
 "name": "/system.slice"
 },
 {

Since the JSON top-level construct is a dictionary, we can use jq’s keys function to enumerate just the keys.

$ curl -sk https://localhost:10250/stats/ | jq keys
[
 "name",
 "spec",
 "stats",
 "subcontainers"
]

To view the subcontainers, use that key:

$ curl -sk https://localhost:10250/stats/ | jq .subcontainers
[
 {
 "name": "/machine.slice"
 },
 {
 "name": "/system.slice"
 },
 {
 "name": "/user.slice"
 }
]

The stats key returns an array:

$ curl -sk https://localhost:10250/stats/ | jq .stats | head
[
 {
 "timestamp": "2017-01-12T13:23:45.301168504Z",
 "cpu": {
 "usage": {
 "total": 420399104294,
 "per_cpu_usage": [
 202178115170,
 218220989124
 ],

How long is it?  Use the length function.  Note that jq functions are piped one into the next.

$ curl -sk https://localhost:10250/stats/ | jq ".stats | length"
9

Want to see the keys of an element?  Index it as an array:

$ curl -sk https://localhost:10250/stats/ | jq ".stats[0] | keys"
[
 "cpu",
 "diskio",
 "filesystem",
 "memory",
 "network",
 "task_stats",
 "timestamp"
]

To see a subelement, use the pipe format.  For example, to see the timestamp of the top element,

$ curl -sk https://localhost:10250/stats/ | jq ".stats[0] | .timestamp"
"2017-01-12T13:29:16.162797308Z"

To see a value for all elements, remove the index from the array. Again, use the pipe notation:

$ curl -sk https://localhost:10250/stats/ | jq ".stats[] | .timestamp"
"2017-01-12T13:32:13.732338602Z"
"2017-01-12T13:32:25.713656307Z"
"2017-01-12T13:32:43.443936137Z"
"2017-01-12T13:33:02.796007138Z"
"2017-01-12T13:33:14.53537449Z"
"2017-01-12T13:33:32.540031699Z"
"2017-01-12T13:33:42.732536856Z"
"2017-01-12T13:33:53.235774027Z"
"2017-01-12T13:34:10.351984713Z"

This shows that the last element of the array is the latest.  Use an index of -1 to reference this value:

$ curl -sk https://localhost:10250/stats/ | jq ".stats[-1] | .timestamp"
"2017-01-12T13:33:53.235774027Z"

 

Edit: added below.

To find an element of a list based on the value of a key, or the value of a sub-element, use the pipe notation within the parameter list of the call to select. I use a slightly different curl query here; note the summary element at the end. I want to get the pod entry that matches a section of a particular pod name.

curl -sk https://localhost:10250/stats/summary | jq '.pods[] | select(.podRef | .name | contains("virt-launcher-testvm"))'

Episode 25 - The future is now

Posted by Open Source Security Podcast on January 10, 2017 02:00 PM
Josh and Kurt end up discussing CES, IoT, WiFi everywhere, and the future.

Download Episode

Show Notes


Security Advice: Bad, Terrible, or Awful

Posted by Josh Bressers on January 09, 2017 03:30 PM
As an industry, we suck at giving advice. I don’t mean this in some negative hateful way, it’s just the way it is. It’s human nature really. As a species most of us aren’t very good at giving or receiving advice. There’s always that vision of the wise old person dropping wisdom on the youth like it’s candy. But in reality they don’t like the young people much more than the young people like them. Ever notice the contempt the young and old have for each other? It’s just sort of how things work. If you find someone older and wiser than you who is willing to hand out good advice, stick close to that person. You won’t find many more like that.

Today I’m going to pick on security though. Specifically security advice directed at people who aren’t security geeks. Heck, some of this will probably apply to security geeks too, so let’s just stick to humans as the target audience. Of all our opportunities around advice, I think the favorite is blaming the users for screwing up. It’s never our fault, it’s something they did, or something wasn’t configured correctly, but still probably something they did. How many times have you dealt with someone who clicked a link because they were stupid. Or they opened an attachment because they’re an idiot. Or they typed a password in that web page because they can’t read. The list is long and impressive. Not once did we do anything wrong. Why would we though? It’s not like we made anyone do those things! This is true, but we also didn’t not make them do those things!

Some of the advice we expect people to listen to is good advice. A great example is telling someone to “log out” of their banking site when they’re done. That makes sense, it’s easy enough to understand, and nothing lights on fire if they forget to do this. We also like to tell people things like “check the URL bar”. Why would a normal person do this? They don’t even know what a URL is. They know what a bar is, it’s where they go to calm down after talking to us. What about when we tell people not to open attachments? Even attachments from their Aunt Millie? She promised that cookie recipe months ago, it’s about time cookies.exe showed up!

The real challenge is understanding what good advice looks like as a supplement to a properly functioning system. Advice and instructions do not replace a proper solution. A lot of the advice we give out really just masks something that’s already broken. The fact that we expect users to care about a URL or an attachment is basically nuts. These are failures in the system, not failures of users. We should be investing our resources into solving the root of the problem, not yelling at people for clicking on links. Instead of telling users not to click on attachments, just don’t allow attachments. Expecting new behavior from people rarely changes them. At best it creates an environment of shame, but more likely it creates an environment of contempt. They don’t like you, you don’t like them.

As a security practitioner, look for ways to eliminate problems without asking users to intervene. A best-case situation will be 80% user compliance, and the remaining 20% would take more effort to deal with than anyone could handle. If your solution depends on getting people to listen, you need 100% compliance all the time, which is impossible for humans but not for computers.

It’s like the old saying, an ounce of prevention is worth a pound of cure. Or if you’re a fan of the metric system, 28.34 grams of prevention is worth 453.59 grams of cure!

Do you have some bad advice? Lay it on me! @joshbressers on Twitter.

Looks like you have a bad case of embedded libraries

Posted by Josh Bressers on January 03, 2017 03:39 PM
A long time ago pretty much every application and library carried around its own copy of zlib. zlib is a library that does really fast and really good compression and decompression. If you’re storing data or transmitting data, it’s very likely this library is in use. It’s easy to use and is public domain. It’s no surprise it became the industry standard.

Then one day, CVE-2002-0059 happened. CVE-2002-0059 was a security flaw that was easy to trigger and easy to exploit. It affected network-listening applications that used zlib (which was most of them). If this came out today, it would make Heartbleed look like a joke. This was long, long ago though; most people didn’t know anything about security (or care, in many instances). If you look at the updates that came out because of this flaw, they were huge, because literally hundreds of software applications and libraries had to be patched. This affected Windows and Linux, which was most everything back then. Today it would affect every device on the planet. This isn’t an exaggeration. Every. Single. Device.

A lot of people learned a valuable lesson from CVE-2002-0059. That lesson was to stop embedding copies of libraries in your applications. Use the libraries already available on the system. zlib is pretty standard now, you can find it most anywhere, there is basically no reason to carry around your own version of this library in your project anymore. Anyone who does this would be seen as a bit nuts. Except this is how containers work.

Containing Containers

If you pay attention at all, you know the future of most everything is moving back in the direction of applications shipping with all the bits they need to run. Linux containers have essentially a full Linux distribution inside them (a very small one, of course). Now, there’s a good reason for needing containers today. A long time ago, things moved very slowly. It wouldn’t have been crazy to run the same operating system for ten years. There weren’t many updates to anything; even security updates were pretty rare. You knew that if you built an application on top of a certain version of Windows, Solaris, or Linux, it would be around for a long time. Those days are long gone. Things move very, very quickly today.

I’m not foolish enough to tell anyone they shouldn’t include embedded copies of things in their containers; this is basically how containers work. Besides, everything is fast now, including the operating system. You can’t count on the level of stability that once existed. This is a good thing, because it gives us the ability to create faster than ever before, and container technology is how we solve the problem of a fast-changing operating system.

The problem we have today is our tools aren’t quite ready to deal with a security nightmare like CVE-2002-0059. If we found a serious problem like this (we sort of did with CVE-2015-7547 which affected glibc) how long would it take you to update all your containers? How would you update them? How would you even know if the flaw affected you?
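
There is no standard answer to those questions today; about the best generic approach is to interrogate every image you have, something like this sketch (it assumes RPM-based images and a local docker CLI; images without rpm will simply print nothing):

for img in $(docker images --format '{{.Repository}}:{{.Tag}}'); do
  echo "== $img"
  # Ask each image which glibc it ships; errors are discarded.
  docker run --rm "$img" rpm -q glibc 2>/dev/null
done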

The answer is most people wouldn’t update their containers quickly, some wouldn’t update them ever. This sort of goes against the whole DevOps concept. The right way this should work is if some horrible flaw is found in a library you’re shipping, your CI/CD infrastructure just magically deals with it. You shouldn’t have to really know or care. Humans are slow and make a lot of mistakes. They’re also hard to predict. All of these traits go against DevOps. The less we have humans do, the better. This has to be the future of security updates. There’s no secret option C where we stop embedding libraries this time. We need tools that can deal with security updates in a totally automated manner. We’re getting there, but we have a long way to go.

If you’re using containers today, and you can’t rebuild everything with the push of a button, you’re not really using containers. You’re running a custom Linux distribution. Don’t roll your own crypto, don’t roll your own distro.

Do you roll your own distro? Tell me, @joshbressers on Twitter.

Episode 24 - The 2016 prediction edition! (yeah, that's right, 2016)

Posted by Open Source Security Podcast on January 03, 2017 01:14 PM
Josh and Kurt discuss 2016 predictions in 2017, what they got right, what they got wrong, and a bunch of other random things.

Download Episode

Show Notes


Future Proof Security

Posted by Josh Bressers on January 02, 2017 04:00 PM
If you’ve ever written code, even a few lines of it, you know there is always some sort of tradeoff between doing it “right” and doing it “now”. This is basically the reality of any industry: there is always the right way, and then there’s the way it’s going to get done. If you’ve ever done any sort of home remodeling project, you’re well aware of uncovering the sins of the past as soon as that wall gets opened up.


When you’re writing software there are some places you should never try to make this tradeoff though. In the industry we like to call some of these decisions “technical debt”. It’s not called that to be clever, it’s called that because like all debt, someday you have to pay it back, plus interest. Sometimes those loans come with huge interest rates. How many of us have seen entire projects that were thrown out because of the terrible design decisions made way back at the beginning? It’s sadly not uncommon.


Are there times we should never make a tradeoff between “right” and “now”? Yes, yes there are. The single most important is verifying data correctness, especially if you think it’s trusted input. Today’s trusted input is tomorrow’s SQL injection. Let’s use a few examples (these are actual examples I saw in the past, with the names of the innocent changed).


Beware the SQL
Once Bob wrote some SQL to return all the names in one of the ‘Users’ tables. It’s a simple enough query; the code looks something like this:

def get_clients():
    table_name = "clients"
    query = "SELECT * from Users_" + table_name
    # ... execute the query and return the rows ...


That’s easy enough to understand: for every other ‘get_’ function, you change the table name variable. Someday in the future, they let the intern write some code, and he decides it would be way easier if the table_name variable were passed to the function and set from the URL. Now you have a SQL injection, as any remote user can set the table_name variable to anything, including dangerous SQL. If you’re ever doing SQL queries, use prepared statements, even if you don’t think you need them. It’ll save a lot of trouble later.
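
As a minimal sketch of what that looks like (using Python’s sqlite3 placeholders; the whitelist and the active column are my own illustration, since a table name can never be a bound parameter):

import sqlite3

ALLOWED_TABLES = {"clients", "vendors"}  # hypothetical whitelist

def get_users(conn, table_name):
    # Table names cannot go through placeholders, so validate them
    # against a fixed whitelist before building the query string.
    if table_name not in ALLOWED_TABLES:
        raise ValueError("unknown table: %r" % table_name)
    # Values always go through placeholders, never string concatenation.
    query = "SELECT * FROM Users_%s WHERE active = ?" % table_name
    return conn.execute(query, (1,)).fetchall()

Even if table_name later ends up coming from a URL, the worst a remote user can do here is trip the ValueError.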


Images as far as the eye can see!
There is an application that has some internal icons; they’re used for the buttons that get displayed for users to click on, no big deal. The developer took an existing image library they found under the rug. It has some security flaws, but who cares: all the images it displays are shipped with the app, they’re trusted, no big deal.


In a few years the intern (that guy again!) decides that it would be awesome to show images off the Internet. There just happens to be an image library already included in the application, which is a huge win. There’s even some example code that can be copied from where the buttons are drawn!


This one is pretty easy to see. You have a known-bad library that used to parse only trusted input. Now it’s parsing untrusted input, and that is a pretty big problem. There isn’t an easy fix for this one, unfortunately. It’s rarely wise to ship embedded libraries in your projects, but everyone does it. I won’t tell you to stop doing this, but I also understand this is one of the great problems we have to solve now that open source is everywhere.

These two examples have been grossly simplified, but this stuff has happened and will continue to happen. If you’re a software developer, be careful with your shortcuts. Always ask yourself the question: “what happens if this suddenly starts parsing untrusted input?” It’ll save you a lot of trouble down the road. Never forget that the technical debt bill will show up someday. Make sure you can afford it.

Do you have a clever technical debt story? Tell me, @joshbressers on Twitter.

We are (still) not who we are

Posted by Stephen Gallagher on December 31, 2016 06:28 PM

This article is a reprint. It first appeared on my blog on January 24, 2013. Given the recent high-profile hack of Germany’s defense minister, I decided it was time to run this one again.

 

In authentication, we generally talk about three “factors” for determining identity. A “factor” is a broad category for establishing that you are who you claim to be. The three types of authentication factor are:

  • Something you know (a password, a PIN, the answer to a “security question”, etc.)
  • Something you have (an ATM card, a smart card, a one-time-password token, etc.)
  • Something you are (your fingerprint, retinal pattern, DNA)

Historically, people have most commonly used the first of these three forms. Whenever you’ve logged into Facebook, you’re entering something you know: your username and password. If you’ve ever used Google’s two-factor authentication to log in, you probably used a code stored on your smartphone to do so.

One of the less common, but growing, authentication methods is biometrics. A couple of years ago, a major PC manufacturer ran a number of television commercials advertising their laptop models with a fingerprint scanner. The claim was that it was easy and secure to unlock the machine with the swipe of a finger. Similarly, Google introduced a service to unlock an Android smartphone using facial recognition with the built-in camera.

Pay attention folks, because I’m about to remove the scales from your eyes. Those three factors I listed above? I listed them in decreasing order of security. “But how can that be?” you may ask. “How can my unchangeable physical attributes be less secure than a password? Everyone knows passwords aren’t secure.”

The confusion here is due to subtle but important definitions in the meaning of “security”. Most common passwords these days are considered “insecure” because people tend to use short passwords which by definition have a limited entropy pool (meaning it takes a smaller amount of time to run through all the possible combinations in order to brute-force the password or run through a password dictionary). However, the pure computational complexity of the authentication mechanism is not the only contributor to security.
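
To put rough numbers on that (my own back-of-the-envelope, not from the original article): the entropy of a uniformly random password is its length times the log of the alphabet size, so short passwords are cheap to brute-force no matter what characters they use.

import math

def entropy_bits(alphabet_size, length):
    # log2(alphabet_size ** length) == length * log2(alphabet_size)
    return length * math.log2(alphabet_size)

print(entropy_bits(26, 8))   # ~37.6 bits: 8 random lowercase letters
print(entropy_bits(95, 12))  # ~78.8 bits: 12 random printable ASCII characters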

The second factor above, “something you have” (known as a token), is almost always of significantly higher entropy than anything you would ever use as a password. This is to eliminate the brute-force vulnerability of passwords. But it comes with a significant downside as well: something you have is also something that can be physically removed from you. Where a well-chosen password can only be removed from you by social engineering (tricking you into giving it to an inappropriate recipient), a token might be slipped off your desk while you are at lunch.

Both passwords and tokens have an important side-effect that most people never think about until an intrusion has been caught: remediation. When someone has successfully learned your password or stolen your token, you can call up your helpdesk and immediately ask them to reset the password or disable the cryptographic seed in the token. Your security is now restored and you can choose a new password and have a new token sent to you.

However, this is not the case with a biometric system. By its very nature, it is dependent upon something that you cannot change. Moreover, the nature of its supposed security derives from this very fact. The problem here is that it’s significantly easier to acquire a copy of someone’s fingerprint, retinal scan or even blood for a DNA test than it is to steal a password or token device and in many cases it can even be done without the victim knowing.

Many consumer retinal scanners can be fooled by a simple reasonably-high-resolution photograph of the person’s eye (which is extremely easy to accomplish with today’s cameras). Some of the more expensive models will also require a moving picture, but today’s high-resolution smartphone cameras and displays can defeat many of these mechanisms as well. It’s well-documented that Android’s face-unlock feature can be beaten by a simple photograph.

These are all technological limitations and as such it’s plausible that they can be overcome over time with more sensitive equipment. However, the real problem with biometric security lies with its inability to replace a compromised authentication device. Once someone has a copy of your ten fingerprints, or a drop of your blood from a stolen blood-sugar test or a close-up video of your eye from a scoped video camera, there is no way to change this data out. You can’t ask helpdesk to send you new fingers, an eyeball or DNA. Therefore, I contend that I lied to you above. There is no full third factor for authentication, because, given a sufficient amount of time, any use of biometrics will eventually degenerate into a non-factor.

Given this serious limitation, one should never under any circumstances use biometrics as the sole form of authentication for any purpose whatsoever.

One other thought: have you ever heard the argument that you should never use the same password on multiple websites because if it’s stolen on one, they have access to the others? Well, the same is true of your retina. If someone sticks malware on your cellphone to copy an image of your eye that you were using for “face unlock”, guess what? They can probably use that to get into your lab too.

The moral of the story is this: biometrics are minimally useful, since they are only viable until the first exposure across all sites where they are used. As a result, if you are considering initiating a biometric-based security model, I encourage you to save your money (those scanners are expensive!) and look into a two-factor solution involving passwords and a token of some kind.


Episode 23 - We can't patch people

Posted by Open Source Security Podcast on December 28, 2016 03:45 PM
Josh and Kurt talk about scareware, malware, and how hard this stuff is to stop, and how the answer isn't fixing people.

Download Episode

Show Notes


The art of cutting edge, Doom 2 vs the modern Security Industry

Posted by Josh Bressers on December 25, 2016 06:05 PM
During the holiday, I started playing Doom 2. I bet I’ve not touched this game in more than ten years; I can't even remember the last time I played it. My home directory was full of garbage, and while cleaning it up I came across doom2.wad. I’ve been carrying this file around in my home directory for nearly twenty years now. It’s always there, like an old friend you know you can call at any time, day or night. I decided it was time to install one of the Doom engines and give it a go. I picked prboom; it’s something I used a long time ago, and it doesn’t have any fancy features like mouselook or jumping. Part of the appeal is to keep the experience close to the original. Plus, if you could jump, a lot of these levels would be substantially easier; the game depends on not having those features.

This game is a work of art. You don’t see games redefining the industry like this anymore. The original Doom is good, but Doom 2 is like adding color to a black and white picture, it adds a certain quality to it. The game has a story, it’s pretty bad but that's not why we play it. The appeal is the mix of puzzles, action, monsters, and just plain cleverness. I love those areas where you have two crazy huge monsters fighting, you wonder which will win, then start running like crazy when you realize the winner is now coming after you. The games today are good, but it’s not exactly the same. The graphics are great, the stories are great, the gameplay is great, but it’s not something new and exciting. Doom was new and exciting. It created a whole new genre of gaming, it became the bar every game that comes after it reaches for. There are plenty of old games that when played today are terrible, even with the glasses of nostalgia on. Doom has terrible graphics, but that doesn’t matter, the game is still fantastic.

This all got me thinking about how industries mature. Crazy new things stop happening, the existing players find a rhythm that works for them and they settle into it. When was the last time we saw a game that redefined the gaming industry? There aren’t many of these events. This brings us to the security industry. We’re at a point where everyone is waiting for an industry defining event. We know it has to happen but nobody knows what it will be.

I bet this is similar to gaming back in the days of Doom. The 486 just came out, it had a ton of horsepower compared to anything that had come before it. Anyone paying attention knew there were going to be awesome advancements. We gave smart people awesome new tools. They delivered.

Back to security now. We have tons of awesome new tools. Cloud, DevOps, Artificial Intelligence, Open Source, microservices, containers. The list is huge and we’re ready for the next big thing. We all know the way we do security today doesn’t really work, a lot of our ideas and practices are based on the best 2004 had to offer. What should we be doing in 2017 and beyond? Are there some big ideas we’re not paying attention to but should be?

Do you have thoughts on the next big thing? Or maybe which Doom 2 level is the best (Industrial Zone). Let me know.

Episode 22 - IoT Wild West

Posted by Open Source Security Podcast on December 25, 2016 01:36 PM
Josh and Kurt talk about planned obsolescence and IoT devices. Should manufacturers brick devices? We also have a crazy discussion about the ethics of hacking back.

Download Episode

Show Notes


Episode 21 - CVE 10K Extravaganza

Posted by Open Source Security Podcast on December 21, 2016 02:14 PM
Josh and Kurt talk about CVE 10K. CVE IDs have finally crossed the line, we need 5 digits to display them. This has never happened before now.

Download Episode

Show Notes


Does "real" security matter?

Posted by Josh Bressers on December 19, 2016 06:45 PM
As the dumpster fire that is 2016 crawls to the finish line, we had another story about a massive Yahoo breach. 1 billion user accounts had data stolen. Just to give some context here, that has to be hundreds of gigabytes at an absolute minimum. That's a crazy amount of data.

And nobody really cares.

Sure, there is some noise about all this, but in a week or two nobody will even remember. There has been a similar story about every month all year long. Can you even remember any of them? The stock market doesn't; basically no one who has ever had a crazy breach has seen a long-term problem with their stock. Sure, there will be a blip where everyone panics for a few days, then things go back to normal.

So this brings us to the title of this post.

Does anyone care about real security? What I mean here is I'm going to lump things into three buckets: no security, real security, and compliance security.

No Security
This one is pretty simple. You don't do anything. You just assume things will be OK, someday they aren't, then you clean up whatever mess you find. You could call this "reactive security" if you wanted. I'm feeling grumpy though.

Real Security
This is when you have a real security team, and you spend real money on features and technology. You have proper logging, and threat models, and attack surfaces, and hardened operating systems. Your applications go through a security development process and run in a sandbox. This stuff is expensive. And hard.

Compliance Security
This is where you do whatever you have to because some regulation from somewhere says you have to. Password lengths, enabling TLS 1.2, encrypted data, the list is long. Just look at PCI if you want an example. I have no problem with this, and I think it's the future. Here is a picture of how things look today.

I don't think anyone would disagree that if you're doing only the minimum compliance suggests, you will still have plenty of insecurity. The problem with real security is that you're probably not getting any ROI; it's likely a black hole you dump money into and get minimal value back (remember the bit about long-term stock prices not mattering here).

However, when we look at the sorry state of nearly all infrastructure, and especially the IoT universe, it's clear that No Security is winning this race. Expecting anyone to make great leaps in security isn't realistic; most won't follow unless they absolutely have to. This is why compliance is the future. We have to keep nudging compliance to the right on this graph, but we have to move it slowly.

It's all about the Benjamins
As I mentioned above, security problems don't seem to cause a lot of negative financial impact. Compliance problems do. Right now there are very few instances where compliance is required, and even when it is, it's not always as strong as it could be. Good security will have to first show value (actual measurable value, not made-up statistics); then, once we see the value, it should be mandated by regulation. Not everything should be regulated, but we need clear rules as to what needs compliance, why, and especially how. I used to despise the idea of mandatory compliance around security, but at this point I think it's the only plausible solution. This problem isn't going to fix itself. If you want to make a prediction, ask yourself: is there a reason 2017 will be more secure than 2016?

Do you have thoughts on compliance? Let me know.

Episode 20 - The Death of PGP

Posted by Open Source Security Podcast on December 19, 2016 01:43 PM
Josh and Kurt talk about the death of PGP, and how it's not actually dead at all. It's still really hard to use though.

Download Episode

Show Notes


Episode 19 - A field full of razor blades and monsters

Posted by Open Source Security Podcast on December 14, 2016 03:55 PM
Josh and Kurt talk about the bricking devices (on purpose).

Download Episode

Show Notes


SELinux Security Policy: Part 3 – Labels in action!

Posted by Miroslav Grepl on December 13, 2016 11:08 AM

SELinux is all about labels. Do you remember from the previous blog post?

We know that SELinux decisions are based on labels and made by the kernel according to the loaded policy. Access is granted (an action is permitted) if and only if there is a rule allowing it. You can see this in picture 2 in the first part of this blog series. From the second part we also know how these labels look and where they are stored.

Now let’s put the pieces together and demonstrate real SELinux decisions between real system entities – between the system and service manager

$ ps -eZ | grep systemd
system_u:system_r:init_t:s0         1 ?        00:00:02 systemd

which is trying to read a picture file in your home directory.

$ ls -Z /home/mgrepl/Pictures/picture 
unconfined_u:object_r:user_home_t:s0 /home/mgrepl/Pictures/picture

We use the Fedora targeted policy, so we already know that we care only about types (the third field of the labels above). Together with the class (file) and the permission (read) we get the following tuple.

(init_t, user_home_t, file, read)

which is used for the decision made by the kernel: whether the policy permits a process with the source type to access an object of the given class and target type with the requested action.

We can demonstrate this kernel decision using the security_compute_av() function from the SELinux Python module.

$ python
Python 2.7.12 (default, Oct 12 2016, 14:31:21) 
[GCC 6.2.1 20160916 (Red Hat 6.2.1-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import selinux 
>>> avd=selinux.av_decision()
>>> slabel="system_u:system_r:init_t:s0"
>>> tlabel="unconfined_u:object_r:user_home_t:s0"
>>> selinux_class=selinux.string_to_security_class("file")
>>> selinux_permission=selinux.string_to_av_perm(selinux_class,"read")
>>> 
>>> selinux.security_compute_av(slabel,tlabel,selinux_class,selinux_permission,avd)
0
>>> 
>>> if (selinux_permission&avd.allowed == selinux_permission): print("The access is granted.")
... else: print("The access is not granted.")
... 

The access is not granted.

As we can see, the access is not granted by the kernel, because there is no rule for our (init_t, user_home_t, file, read) tuple. If you want to test a different tuple, simply replace the label, class, and permission values in the code above. As I said, there is no rule for the tested tuple.

Do we have a way to check if a rule exists?

Yes, we do. We can use the sesearch tool to query existing rules in the SELinux security policy. For our example, the query has the following form

$ sesearch -A -s init_t -t user_home_t -c file -p read

and will return an empty output because there is no rule as we already know.

What happens if the type of the /home/mgrepl/Pictures/picture file is different, and readable by the init_t process type?

We can demonstrate the kernel decision again using the security_compute_av() function.

$ python
Python 2.7.12 (default, Oct 12 2016, 14:31:21) 
[GCC 6.2.1 20160916 (Red Hat 6.2.1-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import selinux 
>>> avd=selinux.av_decision()
>>> slabel="system_u:system_r:init_t:s0"
>>> tlabel="unconfined_u:object_r:systemd_home_t:s0"
>>> selinux_class=selinux.string_to_security_class("file")
>>> selinux_permission=selinux.string_to_av_perm(selinux_class,"read")
>>> 
>>> selinux.security_compute_av(slabel,tlabel,selinux_class,selinux_permission,avd)
0
>>> 
>>> if (selinux_permission&avd.allowed == selinux_permission): print("The access is granted.")
... else: print("The access is not granted.")
... 
The access is granted.

We can again query the rule using setools, as we did previously.

$ sesearch -A -s init_t -t systemd_home_t -c file -p read
Found 1 semantic av rules:
   allow init_t systemd_home_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ;

So, as we can see, SELinux really is all about labels. You can define hundreds or thousands of types, assign them to system entities, and write rules between them as you wish. SELinux gives you fine-grained control over your system.

If systemd is exploited and an attacker gains the permissions of the main systemd process, which runs with root permissions, he could access any file on your system.

With SELinux he can access only files with specific labels for which we define SELinux policy rules. He could access files with the systemd_home_t type, but he would not be able to access the user’s home files with the user_home_t type.

Is your system protected by SELinux? Do you know how to check it? Do you know how to get more details about decisions performed by SELinux?

I will give you the answers to these questions in the next part of this blog series 😉

Thank you.


A Ray in a Minecraft Mod

Posted by Adam Young on December 13, 2016 02:19 AM

I want to shoot a ray, and not just parallel to one of the axes of the Cartesian coordinate system. I want to look in a direction and shoot a ray in that direction. I want to be able to shoot a ray in any direction and walk on it, like certain ice-based superheroes. And now I can do that.

In math, a line used to be defined as “The [straight or curved] line is the first species of quantity, which has only one dimension, namely length, without any width nor depth, and is nothing else than the flow or run of the point which […] will leave from its imaginary moving some vestige in length, exempt of any width. […] The straight line is that which is equally extended between its points.” While that definition has since changed, it works for Minecraft. A ray, then, is half a line: fix a point in the middle, and go in only one direction.

My rays are not infinite, due to resource constraints. So, really, I want to draw an arbitrary line segment, starting from Steve, and continuing on in the direction that Steve is looking.

When I first learned Cartesian coordinates, I learned to plot Y in terms of X. For three dimensions, we need to add Z. When adding Z, it might be tempting to plot Z in terms of X as well, but that does not generalize to pointing in an arbitrary direction. It turns out to be easier to define X, Y, and Z each in relation to the distance of the point from the origin. From our starting point, we move one block along the line and figure out what the X, Y, and Z values are for that block, then continue with the next block, and so on.

We are going to make our ray shoot 100 blocks.

    double distance = 100.0;

Our origin is the x, y, and z of the player. From this we need to calculate the end position.

Minecraft gives the facing of the player in two values: Yaw and Pitch. These are pilot terms.

  • Yaw roughly means “how far around”.
  • Pitch means the angle above or below the horizon.

    double yaw = ( player.rotationYaw + 90 ) * ( Math.PI / 180 );
    double pitch = ( player.rotationPitch * -1 ) * ( Math.PI / 180 );

    double endX = player.posX + distance * Math.cos( yaw ) * Math.cos( pitch );
    double endZ = player.posZ + distance * Math.sin( yaw ) * Math.cos( pitch );
    double endY = player.posY + distance * Math.sin( pitch );

One way to think of this is: go around the circle yaw amount, and then up pitch. As you raise the pitch, the X and Z values get closer to the origin, while the Y value climbs higher into the sky.

Once we have our start and endpoints, we can create an iterator that, when we call next(), will give the next pos object:

Iterator< BlockPos > itr = new LinearIterator(
            player.getPosition( ),
            new BlockPos( endX, endY, endZ ) );

What does this iterator do? It calculates a difference for X, Y, and Z, each divided by the number of blocks (the 100 passed in from before): deltas, if you will, in the calculus meaning of the term, but with a granularity of one block.
It then uses the distance formula to figure out the distance from the origin to the current block, and multiplies each delta by that distance.

    
    public LinearIterator(BlockPos start, BlockPos end) {
        if (start.equals(end)) {
            throw new IllegalArgumentException(
                    "line cannot be defined by two identical points.");
        }

        // Origin of the ray.
        x = start.getX();
        y = start.getY();
        z = start.getZ();

        // Differences along each axis.
        int dx = end.getX() - start.getX();
        int dy = end.getY() - start.getY();
        int dz = end.getZ() - start.getZ();

        // Distance formula: the length of the segment.
        int interim = dx * dx + dy * dy + dz * dz;
        this.segmentLength = Math.sqrt(interim);

        // Per-block step (delta) along each axis.
        quantx = dx / segmentLength;
        quanty = dy / segmentLength;
        quantz = dz / segmentLength;
    }
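
The post doesn’t show next(), but its shape follows from the constructor. Here is a hedged sketch of what it plausibly looks like (the steps counter, the rounding, and hasNext() are my assumptions, not the mod’s actual code): each call steps one block-length along the deltas and rounds to the nearest block position.

    private int steps = 0;

    @Override
    public boolean hasNext() {
        // Stop once we have covered the full segment length.
        return steps < segmentLength;
    }

    @Override
    public BlockPos next() {
        steps++;
        return new BlockPos(
                (int) Math.round(x + quantx * steps),
                (int) Math.round(y + quanty * steps),
                (int) Math.round(z + quantz * steps));
    }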

If we hit something solid, leave it alone. Otherwise, put a leaf block at that position (leaves clean up on their own, kind of like how ice is supposed to melt; but ice does not melt in Minecraft).

    
Iterator< BlockPos > itr = new LinearIterator(
            player.getPosition( ),
            new BlockPos( endX, endY, endZ ) );
    BlockPos previous = null;
    BlockPos current = null;
    BlockPos stairPos = null;
    while ( itr.hasNext( ) ) {
      current = itr.next( );
      Block currentBlock = world.getBlockState( current ).getBlock( );
      if ( currentBlock.equals( Blocks.AIR ) ||
              currentBlock.equals( Blocks.WATER ) ||
              currentBlock.equals( Blocks.PACKED_ICE ) ) {
        world.setBlockState( current, baseState );
        stairPos = current.add( 0, 1, 0 );
      }
      previous = current;
    }

I really need to get an image in there for the ray gun, but for now it is the default pink and black box.

Video demo: https://www.youtube.com/embed/uuXnOmZ2uxo

A security lifetime every five years

Posted by Josh Bressers on December 12, 2016 02:36 PM
A long time ago, it wouldn’t be uncommon to have the same job at the same company for ten or twenty years. People loved their seniority, they loved their company, they loved everything staying the same. Stability was the name of the game. Why learn something new when you can retire in a few years?

Well, a long time ago was a long time ago. Things are quite a bit different now. If you’ve been doing the same thing at the same company for more than five years, there’s probably something wrong. Of course there are always exceptions to every rule, but I bet more than 80% of the people in their jobs for more than five years aren’t exceptions. It’s easy to get too comfortable, and that’s dangerous.

Rather than spending too much time expanding on this idea, I’m going to take it and move into the security universe as that’s where I spend all my time. It’s a silly place, but it’s all I know, so it’s home. While all of IT moves fast, the last few years have been out of control for security. Most of the rules from even two years ago are different now. Things are moving at such a fast pace I’m comfortable claiming that every five years is a lifetime in the security universe.

I’m not saying you can’t work for the same company this whole time. I’m saying that if you’re doing the same thing for five years, you’re not growing. And if you’re not growing, what’s the point?

Now here’s the thing about security. If we think about the people we would consider the “leaders” (using the term loosely, there aren’t even many of those types) we will notice something about the whole “five years” I mentioned. How many of them have done anything on a level that got them where they are today in the last five years? Not many.

Again, there are exceptions. I’ll point to Mudge and the CITL work. That’s great stuff. But for every Mudge I can think of more than ten that just aren’t doing interesting things. There’s nothing wrong with this, I’m not pointing it out to diminish any past contributions to the world. I point it out because sometimes we spend more time looking at the past than we do looking even where we are today, much less where we’re heading in the future.

What’s the point of all this (other than making a bunch of people really mad)? It’s to point out that the people and ideas that are going to move things forward aren’t the leaders from the past, they’re new and interesting people you’ve never heard of. Look for new people with fresh ideas. Sure it’s fun to talk to the geezers, but it’s even more fun to find the people who will be the next geezers.

Episode 18 - The Security of Santa

Posted by Open Source Security Podcast on December 11, 2016 09:48 PM
Josh and Kurt talk about the security concerns and logistics of Santa, elves, and the North Pole.

Download Episode

Show Notes

Back of the envelope
3589 x1.32xlarge instances (1952 GB of RAM each)
hold 7 petabytes of data in memory
cost $1,483,979.72 per hour

Keystone Development Bootstrap with Service Catalog

Posted by Adam Young on December 07, 2016 12:01 AM

My last post showed how to get a working Keystone server. Or did it?

$ openstack service list
The service catalog is empty.

Turns out, to do most things with Keystone, you need a service catalog, and I didn’t have one defined. To fix it, rerun bootstrap with a few more options.

Rerun the bootstrap command with the additional parameters to create the identity service and the endpoints that implement it.

Note: I used 127.0.0.1 explicitly elsewhere, so I did that here too, for consistency. You can use localhost if you prefer, or an explicit hostname, so long as it works for you.

keystone-manage bootstrap --bootstrap-password keystone  --bootstrap-service-name keystone --bootstrap-admin-url http://127.0.0.1:35357  --bootstrap-public-url http://127.0.0.1:5000  --bootstrap-internal-url http://127.0.0.1:5000  --bootstrap-region-id RegionOne

Restart Keystone and now:

$ openstack service list
You are not authorized to perform the requested action: identity:list_services (HTTP 403) (Request-ID: req-3dfd0b6e-c4c9-443b-b374-243acdeda75e)

Hmmm. Seems I need a role on a project: add in the following params:

 --bootstrap-project-name admin      --bootstrap-role-name admin

So now my whole command line looks like this:

keystone-manage bootstrap \
--bootstrap-password keystone \
--bootstrap-service-name keystone \
--bootstrap-admin-url http://127.0.0.1:35357 \
--bootstrap-public-url http://127.0.0.1:5000 \
--bootstrap-internal-url http://127.0.0.1:5000 \
--bootstrap-project-name admin      \
--bootstrap-role-name admin \
--bootstrap-region-id RegionOne

Let’s try again:

$ openstack service list
You are not authorized to perform the requested action: identity:list_services (HTTP 403) (Request-ID: req-b225c12a-8769-4322-955f-fb921d0f6834)

What?

OK, let’s see what is in the token. Running:

openstack token issue --debug

will get me a token like this (formatted for legibility):

{
  "token": {
    "is_domain": false,
    "methods": [
      "password"
    ],
    "roles": [
      {
        "id": "0073eb4ee8b044409448168f8ca7fe80",
        "name": "admin"
      }
    ],
    "expires_at": "2016-12-07T00:02:13.000000Z",
    "project": {
      "domain": {
        "id": "default",
        "name": "Default"
      },
      "id": "f84f16ef1f2f45cd80580329ab2c00b0",
      "name": "admin"
    },
    "catalog": [
      {
        "endpoints": [
          {
            "url": "http://127.0.0.1:5000",
            "interface": "internal",
            "region": "RegionOne",
            "region_id": "RegionOne",
            "id": "78b654d00f3845f8a73d23793a2485ed"
          },
          {
            "url": "http://127.0.0.1:35357",
            "interface": "admin",
            "region": "RegionOne",
            "region_id": "RegionOne",
            "id": "81956b9544da41a5873ecddd287fb13b"
          },
          {
            "url": "http://127.0.0.1:5000",
            "interface": "public",
            "region": "RegionOne",
            "region_id": "RegionOne",
            "id": "c3ed6ca53a8b4dcfadf9fb6835905b1e"
          }
        ],
        "type": "identity",
        "id": "b5d4af37070041db969b64bf3a57dcb3",
        "name": "keystone"
      }
    ],
    "user": {
      "domain": {
        "id": "default",
        "name": "Default"
      },
      "password_expires_at": null,
      "name": "admin",
      "id": "bc72530345094d0e9ba53a275d2df9e8"
    },
    "audit_ids": [
      "UQc953wpQvGHa3YokNeNgQ"
    ],
    "issued_at": "2016-12-06T23:02:13.000000Z"
  }
}

So the roles are set correctly. But…maybe the policy is not. There is currently no policy.json in /etc/keystone, and maybe my WSGI app is not finding it.

sudo cp /opt/stack/keystone/etc/policy.json /etc/keystone/

Restart the WSGI applications and…

$ openstack service list
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| b5d4af37070041db969b64bf3a57dcb3 | keystone | identity |
+----------------------------------+----------+----------+
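
As one last sanity check (a hedged aside; both are standard python-openstackclient commands), the catalog and the three endpoints created by bootstrap should now be visible as well:

$ openstack catalog list
$ openstack endpoint list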

Episode 17 - Cyphercon Interview with Korgo

Posted by Open Source Security Podcast on December 06, 2016 02:07 PM
Josh and Kurt talk to Michael Goetzman about Cyphercon.

Download Episode

Show Notes


Keystone development server on Fedora 25

Posted by Adam Young on December 06, 2016 03:42 AM

While unit tests are essential to development, I often want to check the complete flow of a feature against a running Keystone server as well. I recently upgraded to Fedora 25 and had to reset my environment. Here is how I set up for development.

Update: turns out there is more.

The Keystone server is unusual in that it requires no other OpenStack services in order to run. Most other services require a Keystone server, but Keystone itself only requires MySQL. As such, it is not worth the effort (and Python hassle) of running devstack. You can run the Keystone server right out of the source directory in a virtual environment.

The code I need for Keystone has been committed for a while. To start clean, I rebase my local git repository to master and run tox -r to recreate the virtual environment.

I’m going to use that virtual environment along with the directions on the official Keystone development site.

First, I need a database.

 sudo dnf -y  install mariadb-server
 sudo systemctl enable mariadb.service
 sudo systemctl start mariadb.service

Check that the MySQL monitor works.

$ mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.1.19-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Now, configure the database according to the official setup docs.

I want to end up with MySQL using SQLAlchemy via the following configuration:

connection = mysql+pymysql://keystone:keystone@127.0.0.1/keystone
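
For reference, that line lives in the [database] section of keystone.conf; a minimal sketch, using the same values as this post:

[database]
connection = mysql+pymysql://keystone:keystone@127.0.0.1/keystone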

This is what works on F25; it is a little different from the older install guides. I am running as the non-root user `ayoung`.

mysql -u root
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%'
    IDENTIFIED BY 'keystone';

That is not sufficient to connect, as shown by this test:

mysql -h 127.0.0.1 keystone -u keystone --password=keystone

Ensure MySQL is listening on a network socket.

$ getent services mysql
mysql                 3306/tcp
$ telnet 127.0.0.1 3306
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Y

Turns out what I needed was:

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost'    IDENTIFIED BY 'keystone';
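
With that grant in place, the earlier connection test should succeed; a quick, hedged re-check:

mysql -h 127.0.0.1 keystone -u keystone --password=keystone -e 'SELECT 1;'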

Evidently the '%' wildcard did not cover this local connection (the anonymous-user entries in a default MariaDB install are a common culprit, as they take precedence in matching). This is not a production-grade solution, but it should work for development.

Enable the virtual environment:

. .tox/py27/bin/activate

Update /etc/keystone.conf as per the above doc and try the db sync:

keystone-manage db_sync
...
keystone-manage db_version
109

You will need uwsgi to run as the web server. Don’t try to use the system package; on F24, at least, the system one was out of date. Since this is a development setup, let’s match the upstream approach and use pip to install it in the venv.

pip install uwsgi

Now try to run the server:

 uwsgi --http 127.0.0.1:35357 --wsgi-file $(which keystone-wsgi-admin)

And test:

curl localhost:35357
{"versions": {"values": [{"status": "stable", "updated": "2016-10-06T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.7", "links": [{"href": "http://localhost:35357/v3/", "rel": "self"}]}, {"status": "deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://localhost:35357/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}

Now I want to run the bootstrap code to initialize the admin account data:

 keystone-manage bootstrap --bootstrap-password keystone

Remember to run the public-port server in a separate console window (but also in the venv):

. .tox/py27/bin/activate
uwsgi --http 127.0.0.1:5000 --wsgi-file $(which keystone-wsgi-public )

To load the sample data (again, in another venv window):

 pip install python-openstackclient
ADMIN_PASSWORD=keystone tools/sample_data.sh

Here is my keystone.rc file for talking to this server. The OS_IDENTITY_API_VERSION setting bypasses discovery, which is probably not a long-term solution.

unset `env | awk -F= '/OS_/ {print $1}' | xargs`

export OS_USERNAME=admin
export OS_PASSWORD=keystone
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://127.0.0.1:5000/v3

Make sure token issue works:

. ~/devel/openstack/keystone.rc 
openstack token issue
+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2016-12-06T01:09:23+0000                                        |
| id         | gAAAAABYRgGzX_ZixdkZBmS-Ut9uGphBhfSw8rdnTBBar6waqfrghdQWi3PLgjI |
|            | ah6HL9pxGvdmGm8pHCCos7yo4D28LRmROrSRf8Yy1dEE9bMQGcCrFuG4QCe_m2E |
|            | SdqNoB3LMhfCPyCbm3705_Blo_h6f5Cst-fLZuUFyItKkgo4BYZUDpGxk       |
| project_id | f84f16ef1f2f45cd80580329ab2c00b0                                |
| user_id    | bc72530345094d0e9ba53a275d2df9e8                                |
+------------+-----------------------------------------------------------------+

Airports, Goats, Computers, and Users

Posted by Josh Bressers on December 04, 2016 05:29 PM
Last week I had the joy of traveling through airports right after the United States Thanksgiving holiday. Now I don't know how many of you have ever tried to travel the week after Thanksgiving, but it's kind of crazy: there are a lot of people, way more than usual, and a significant number of them have probably never been on an airplane, or if they do travel by air they don't do it very often. The joke I like to tell people is that there are folks at the airport wondering why they can't bring their goat onto the airplane. I’m not going to use this post to discuss the merits of airport security (that’s a whole different conversation); it’s really about coexisting with existing security systems.


Now on this trip I didn't see any goats. I was hoping to see something I could classify as truly bizarre, so this was a disappointment to me. There were two dogs, but they were surprisingly well behaved. However, all the madness I witnessed got me thinking about security in an environment where a substantial number of the users are woefully unaware of the security all around them. The frequent travelers know how things work; they keep it moving smoothly, they’re aware of the security and make sure they stay out of trouble. It’s not about whether something makes you more or less secure, it’s about the goal of getting from the door to the plane as quickly and painlessly as possible. Many of the infrequent travelers aren’t worried about moving through the airport quickly, they’re worried about getting their stuff onto the plane. Some of this stuff shouldn’t be brought through an airport.


Now let’s think about how computer security works for most organizations. You’re not dealing with the frequent travelers, you’re dealing with the holiday horde trying to smuggle a jug of motor oil through security. It’s not that these people are bad or stupid, it’s really just that they don’t worry about how things work; they’re not going to be back in the airport until next Thanksgiving. In a lot of organizations the users aren’t trying to be stupid, they just don’t understand security in a lot of instances. Browsing Facebook on the work computer isn’t seen as a bad idea; it’s their version of smuggling contraband through airport security. They don’t see what it hurts, and they’re not worried about the general flow of things. If their computer gets ransomware it’s not really their problem. We’ve pushed security off to another group nobody really likes.


What does this all mean? I’m not looking to solve this problem, it’s well known that you can’t fix problems until you understand them. I just happened to notice this trend while making my way through the airport, looking for a goat. It’s not that users are stupid, they’re not as clueless as we think either, they’re just not invested in the process. It’s not something they want to care about, it’s something preventing them from doing what they want to. Can we get them invested in the airport process?


If I had to guess, we’re never going to fix users, we have to fix the tools and environment.

Understanding SELinux Roles

Posted by Dan Walsh on December 02, 2016 10:03 PM
I received a container Bugzilla report today from someone who was attempting to assign a container process to the object_r role. Hopefully this blog will help explain how roles work with SELinux.

When we describe SELinux we often concentrate on Type Enforcement, which is the most important and most used feature of SELinux. This is what we describe in the SELinux Coloring Book as Dogs and Cats. We also describe MLS/MCS separation in the coloring book.

Let's look at SELinux labels. An SELinux label consists of four parts: User, Role, Type, and Level. Labels often look something like:

user_u:role_r:type_t:level

One area I have not covered is roles and SELinux users.

The analogy I like to use for users and roles is Russian nesting dolls: the user controls the reachable roles, and the roles control the reachable types.

When we create an SELinux user, we have to specify which roles are reachable within the user. (We also specify which levels are available to the user.)

semanage user -l

SELinux User    Labeling Prefix   MLS/MCS Level   MLS/MCS Range     SELinux Roles

guest_u         user              s0              s0                guest_r
root            user              s0              s0-s0:c0.c1023    staff_r sysadm_r system_r unconfined_r
staff_u         user              s0              s0-s0:c0.c1023    staff_r sysadm_r system_r unconfined_r
sysadm_u        user              s0              s0-s0:c0.c1023    sysadm_r
system_u        user              s0              s0-s0:c0.c1023    system_r unconfined_r
unconfined_u    user              s0              s0-s0:c0.c1023    system_r unconfined_r
user_u          user              s0              s0                user_r
xguest_u        user              s0              s0                xguest_r


In the example above you see that the staff_u user is able to reach the staff_r, sysadm_r, system_r, and unconfined_r roles, and is able to have any level in the MCS range s0-s0:c0.c1023.

Notice also the system_u user, which can reach the system_r and unconfined_r roles as well as the complete MCS range. system_u is the default user for all processes started at boot or started by systemd. SELinux users are inherited by child processes by default.

You can not assign an SELinux user a role that is not listed; the kernel will reject it with a permission denied error.


# runcon staff_u:user_r:user_t:s0 sh
runcon: invalid context: ‘staff_u:user_r:user_t:s0’: Invalid argument
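
By contrast, a context whose role is listed for the user is accepted (a hedged example, assuming the targeted policy and that your current context is permitted to transition; staff_r's default type is staff_t):

# runcon staff_u:staff_r:staff_t:s0 sh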


The next part of the Russian dolls is roles.

Here are the roles available on my Fedora Rawhide machine.


# seinfo -r

Roles: 14
   auditadm_r
   dbadm_r
   guest_r
   staff_r
   user_r
   logadm_r
   object_r
   secadm_r
   sysadm_r
   system_r
   webadm_r
   xguest_r
   nx_server_r
   unconfined_r

Roles control which types are available within that role.

# seinfo -rwebadm_r -x
   webadm_r
      Dominated Roles:
         webadm_r
      Types:
         webadm_t


In the example above, the only valid type available to webadm_r is webadm_t. So if you attempted to transition from webadm_t to a different type, the SELinux kernel would block the access.

The system_r role is the default role for all processes started at boot; most of the other roles are "user" roles.

seinfo -rsystem_r -x | wc -l
968


As you can see, there are over 950 types that can be used with system_r.

Other roles of note:

object_r is not really a role, but more of a placeholder. Roles only make sense for processes, not for files on the file system, yet the SELinux label format requires a role in every label. object_r is the role we use to fill that slot for objects on disk. Changing a process to run as object_r, or trying to assign a different role to a file, will always be denied by the kernel.

RBAC - Role-Based Access Control

SELinux users and roles are mainly used to implement RBAC. The idea is to control what an administrator can do on a system when he is logged in. For certain activities he has to switch roles. For example, he might log in with a normal user role, but needs to switch to an admin role when he needs to administer the system.

Most of the other roles that are assigned above are user roles.  I have written about switching roles in the past when dealing with confined administrators and users.  I usually run as the staff_r but switch to the unconfined_r (through sudo) when I become the administrator of my system.
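
For example, switching roles in a login session can be done with newrole, from the policycoreutils-newrole package (a sketch, assuming a staff_u login on the targeted policy):

$ id -Z
$ newrole -r sysadm_r
$ id -Z

The first id -Z shows the staff_r context; after newrole prompts for your password, the second should show sysadm_r.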

Most people who use SELinux do not deal with RBAC all that often, but hopefully this blog helps remove some of the confusion on the use of Roles.

Episode 16 - Cat and mouse

Posted by Open Source Security Podcast on December 02, 2016 08:50 PM
Josh and Kurt talk about cybercrime and regulation.

Download Episode

Show Notes


SELinux Security Policy – Part2: Labels

Posted by Miroslav Grepl on November 29, 2016 09:28 AM

From the previous blog we know that SELinux policy consists of rules, each of which describes an INTERACTION between processes and system resources.

In the second part of this blog series I will tell you more about LABELS: where SELinux labels are stored and what they look like in reality. Labels are an important part of SELinux since all SELinux decisions are based on them. As my colleague Dan Walsh says:

SELinux is all about labels

Where are SELinux labels stored?

SELinux labels are stored in extended attributes (abbreviated xattr) of file systems that support them – ext2, ext3, ext4 and others.

How can I show labels placed in XATTR?

# getfattr -n security.selinux /etc/passwd
getfattr: Removing leading '/' from absolute path names
# file: etc/passwd
security.selinux="system_u:object_r:passwd_file_t:s0"

Is there another way to show it?

# ls -Z /etc/passwd
system_u:object_r:passwd_file_t:s0 /etc/passwd

The "-Z" option is your friend. In most cases it is related to SELinux and helps us either show SELinux labels or modify them directly. You can check the mv, cp, and ps user commands, for example.
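
Yet another way, as a hedged aside: GNU stat can print just the SELinux context with the %C format:

# stat -c %C /etc/passwd
system_u:object_r:passwd_file_t:s0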

From the above example of an SELinux label you can see that SELinux labels consist of FOUR parts, with the following template:

SELinux user:SELinux role:SELinux type:SELinux category

SELinux user
- Not the same as Linux users.
- Several Linux users can be mapped to a single SELinux user.
- object_u is a placeholder for Linux system resources.
- system_u is a placeholder for Linux processes.
- Can be limited to a set of SELinux roles.

SELinux role
- SELinux users can have multiple roles, but only one can be active.
- object_r is a placeholder for Linux system resources.
- system_r is a placeholder for system processes.
- Can be limited to a set of SELinux types.

SELinux type
- The security model known as TYPE ENFORCEMENT.
- In 99% of cases you care only about TYPES.
- Policy rules describe interactions between types.

SELinux category
- Allows users to mark resources with compartment tags (MCS1, MCS2).
- Used for RHEL virtualization and for container security.
- s0 is the placeholder for the default category.
- s0:c1 can not access s0:c2 (see the short illustration below).
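
As a small, hedged illustration (commands not from the original post; chcon's -l/--range option sets just the level/category part of a label):

# touch /tmp/mcs1 /tmp/mcs2
# chcon -l s0:c1 /tmp/mcs1
# chcon -l s0:c2 /tmp/mcs2
# ls -Z /tmp/mcs1 /tmp/mcs2

A process confined to s0:c1 could then read mcs1, but access to mcs2 would be denied.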

In Fedora we ship the TARGETED SELinux policy, featuring mainly TYPE ENFORCEMENT. It means we mostly care only about TYPES. With this knowledge we can re-define the introductory statement of policy rules from

ALLOW LABEL1 LABEL2:OBJECT_CLASS PERMISSION;

to

ALLOW TYPE1 TYPE2:OBJECT_CLASS PERMISSION;

Where TYPE1 could be the APACHE_T process type for Apache processes and TYPE2 could be the file type for Apache log files. In that case we declare the following SELinux policy rule:

ALLOW APACHE_T APACHE_LOG_T:FILE READ;
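
In the real Fedora policy, the types in this example correspond to httpd_t and httpd_log_t. As a hedged aside, you can list the actual allow rules with sesearch from the setools package:

# sesearch -A -s httpd_t -t httpd_log_t -c file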

Now you know that when we talk about TYPES, we talk about LABELS with respect to the TARGETED policy, which is the default policy used in FEDORA.


Episode 15 - Cyber Black Monday

Posted by Open Source Security Podcast on November 29, 2016 06:12 AM
Josh and Kurt talk about Cyber Monday security tips.

Download Episode

Show Notes


The Economics of stealing a Tesla with a phone

Posted by Josh Bressers on November 28, 2016 12:03 AM
A few days ago there was a story about how to steal a Tesla by installing malware on the owner's phone. If you look at the big-picture view of this problem, it's not all that bad, but our security brains want to make a huge deal out of this. Now I'm not saying that Tesla shouldn't fix this problem, especially since it's going to be a trivial fix. What we want to think about is how all these working parts have to fit together. This is something we're not very good at in the security universe; there can be one single horrible problem, but when we paint the full picture, it's not what it seems.

Firstly, the idea of being able to take full control over a car from a phone sounds terrible. It is terrible, and when a problem like this is found, it should always be fixed. But this also isn't something that's going to affect millions (it probably won't even affect hundreds). This is the sort of problem where you have an attacker targeting you specifically. If someone wants to target you, there are a lot of things they can do; putting a rootkit on your phone to steal your car is one of the least scary things. The reality is that if you're the target of a well funded adversary, you're going to lose, period. So we can ignore that scenario.

Let's move to the car itself. A Tesla, or most any stolen car today, doesn't have a lot of value; the risk vs reward is very low. I suspect a Tesla has so many serial numbers embedded in the equipment you couldn't resell any of the parts. I also bet it has enough gear on board that they can tell you where your car is with a margin of error around three inches. Stealing and then trying to do something with such a vehicle probably has far more risk than any possible reward.

Now if you keep anything of value in your car, and many of us do, that could be a great opportunity for an adversary. But of course now we're back to the earlier point: if you have control over someone's phone, is your goal really to steal something out of their car? Probably not. Additionally, if we think as an adversary, once we break into the car, even if we leave no trace, the record of unlocking the doors is probably logged somewhere. An adversary on this level will want to remain very anonymous, and again, if your target has something of value it would be far less risky to just mug them.

Here is where the security world tends to fall apart from an economics perspective. We like to consider a particular problem or attack in a very narrow context. Gaining total control over a car does sound terrible, and if we only look at it in that context, it's a huge deal. If we look at the big picture, though, it's not all that bad in reality. How many security bugs and misconfigurations have we spent millions dealing with as quickly as possible, when in the big picture they weren't all that big of a deal? Security is one of those things that more often than not is dealt with on an emotional level rather than one of pure logic and economics. Science and reason lead to good decisions; emotion does not.

Leave your comments on Twitter

JSON Home Tests and Keystone API changes

Posted by Adam Young on November 24, 2016 03:31 AM

If you change the public signature of an API, or add a new API in Keystone, there is a good chance the tests that confirm the JSON home layout will break. And that test is fairly unfriendly: it compares one JSON doc with another, and spews out the entirety of both docs without telling you which section breaks. Here is how I deal with it:

First, run the test in a manner that lets you do some querying. I did this:

tox -e py27 -- keystone.tests.unit.test_versions.VersionTestCase.test_json_home_v3 2>&1 | less

That lets me search through the output dynamically. It runs only the one test.

Here is the change I made to the keystone/assignment/routers.py file:

routers.append(
    router.Router(controllers.AccessV3(),
                  'url_patterns',
                  'url_pattern',
                  resource_descriptions=self.v3_resources))

If I run the test and search through the output for the value url_pattern I see that this section is new:

 u'http://docs.openstack.org/api/openstack-identity/3/rel/url_pattern': {u'href-template': u'/url_patterns/{url_pattern_id}',
 u'href-vars': {u'url_pattern_id': u'http://docs.openstack.org/api/openstack-identity/3/param/url_pattern_id'}},
 u'http://docs.openstack.org/api/openstack-identity/3/rel/url_patterns': {u'href': u'/url_patterns'},

Sorry for the formatting: the output has really long lines.
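
A hedged shortcut for future runs: instead of paging, pipe the test output straight through grep (-n numbers the matching lines):

tox -e py27 -- keystone.tests.unit.test_versions.VersionTestCase.test_json_home_v3 2>&1 | grep -n url_pattern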

To start, I modify the test_versions.py file to add something that I think will be comparable:

json_home.build_v3_resource_relation('url_pattern'): {
    'href-template': '/url_pattern/{url_pattern_id}',
    'href-vars': {
        'url_pattern_id': json_home.Parameters.URL_PATTERN_ID}},

Rerunning the test, I now see that it matches earlier; this is in the expected output:

 'http://docs.openstack.org/api/openstack-identity/3/rel/url_pattern': {'hints': {'status': 'experimental'},
 'href-template': '/url_patterns/{url_pattern_id}',
 'href-vars': {'url_pattern_id': 'http://docs.openstack.org/api/openstack-identity/3/param/pattern_id'}},

Which looks good, but I need the second line, with url_patterns.  So I add:

json_home.build_v3_resource_relation('url_patterns'): {
    'href': '/url_patterns'},

Note that it is href, and not href-template.