Fedora Security Planet

Remember kids, if you're going to disclose, disclose responsibly!

Posted by Josh Bressers on March 28, 2017 02:02 AM
If you pay any attention to the security universe, you're aware that Tavis Ormandy is basically on fire right now with his security research. He found the Cloudflare data leak issue a few weeks back and is currently going to town on LastPass. The LastPass crew seems to be dealing with this pretty well: I'm not seeing a lot of complaining, mostly just info and fixes, which is the right way to handle these things.

There are however a bunch of people complaining about how Tavis and Google Project Zero in general tend to disclose the issues. These people are wrong. I've been there, it's not fun, but as crazy as it may seem from the outside, the Project Zero crew knows what they're doing.

Firstly let's get two things out of the way.

1) If nobody is complaining about what you're doing, you're not doing anything interesting (Tavis is clearly doing very interesting things).

2) Disclosure is hard, there isn't a perfect solution, what Project Zero does may seem heartless to some, but it's currently the best way. The alternative is an abusive relationship.

A long time ago I was a vendor receiving security reports from Tavis, and I won't lie, it wasn't fun. I remember complaining and trying to slow things down to a pace I thought was more reasonable. Few of us have any extra time and a new vulnerability disclosure means there's extra work to do. Sometimes a disclosure isn't very detailed or lacks important information. The disclosure date proposed may not line up with product schedules. You could have another more important issue you're working on already. There are lots of reasons to dread dealing with these issues as a vendor.

All that said, it's still OK to complain, and every now and then the criticism is good. We should always be thinking about how we do things, what makes sense today won't make sense tomorrow. The way Google Project Zero does disclosure today was pretty crazy even five years ago. Now it's how things have to work. The world moves very fast now, and as we've seen from various document dumps over the last few years, there are no secrets. If you think you can keep a security issue quiet for a year you are sadly mistaken. It's possible that was once true (I suspect it never was, but that's another conversation). Either way it's not true anymore. If you know about a security flaw it's quite likely someone else does too, and once you start talking to another group about it, the odds of leaking grow at an alarming rate.

The way things used to work is changing rapidly. Anytime there is change, there are always the trailblazers and laggards. We know we can't develop secure software, but we can respond quickly. Spend time where you can make a difference, not chasing the mythical perfect solution.

If your main contribution to society is complaining, you should probably rethink your purpose.

Episode 39 - Flash on your dishwasher

Posted by Open Source Security Podcast on March 28, 2017 01:08 AM
Josh and Kurt discuss certificates, OpenSSL, dishwashers, Flash, and laptop travel bans.

Download Episode

Show Notes



Inverse Law of CVEs

Posted by Josh Bressers on March 23, 2017 11:26 PM
I've started a project to put the CVE data into Elasticsearch and see if there is anything clever we can learn about it. Even if there isn't anything overly clever, it's fun to do. And I get to make pretty graphs, which everyone likes to look at.

I stuck a few of my early results on Twitter because it seemed like a fun thing to do. One of the graphs I put up was comparing the 3 BSDs. The image is below.


You can see that none of these graphs has enough data to really draw any conclusions from; again, I did this for fun. I did get one response claiming NetBSD is the best, because their graph is the smallest. I've actually heard this argument a few times over the past month, so I decided it's time to write about it, especially since I'm sure I'll find many more examples like this while I'm weeding through this mountain of CVE data.

Let's make up a new law, I'll call it the "Inverse Law of CVEs". It goes like this - "The fewer CVE IDs something has, the less secure it is".

That doesn't make sense to most people. If you have something that is bad, fewer bad things is certainly better than more bad things. This is generally true for physical concepts our brains can understand. Less crime is good. Fewer accidents are good. When it comes to something like how many CVE IDs your project or product has, this idea gets turned on its head. Less is probably bad when we think about CVE IDs. There's probably some sort of line somewhere where, if you cross it, things flip back to bad (wait until I get to PHP). We'll call that the security Maginot Line, because bad security decided to sneak in through the north.

If you have something with very very few CVE IDs it doesn't mean it's secure, it means nobody is looking for security issues. It's easy to understand that if something is used by a large diverse set of users, it will get more bug reports (some of which will be security bugs) and it will get more security attention from both good guys and bad guys because it's a bigger target. If something has very few users, it's quite likely there hasn't been a lot of security attention paid to it. I suspect what the above graphs really mean is FreeBSD is more popular than OpenBSD, which is more popular than NetBSD. Random internet searches seem to back this up.

I'm not entirely sure what to do with all this data. Part of the fun is understanding how to classify it all. I'm not a data scientist so there will be much learning. If you have any ideas by all means let me know, I'm quite open to suggestions. Once I have better data I may consider trying to find at what point a project has enough CVE IDs to be considered on the right path, and which have so many they've crossed over to the bad place.

Episode 38 - We Ruin Everything

Posted by Open Source Security Podcast on March 22, 2017 01:34 AM
Josh and Kurt discuss disclosing your password, pwn2own, wikileaks, Back Orifice, HTTPS inspection, and antivirus.

Download Episode

Show Notes


Supporting large key sizes in FreeIPA certificates

Posted by Fraser Tweedale on March 21, 2017 12:59 AM

A couple of issues around key sizes in FreeIPA certificates have come to my attention this week: how to issue certificates for large key sizes, and how to deploy FreeIPA with a 4096-bit key. In this post I’ll discuss the situation with each of these issues. Though related, they are different issues so I’ll address each separately.

Issuing certificates with large key sizes

While researching the second issue I stumbled across issue #6319: ipa cert-request limits key size to 1024,2048,3072,4096 bits. To wit:

ftweedal% ipa cert-request alice-8192.csr --principal alice
ipa: ERROR: Certificate operation cannot be completed:
  Key Parameters 1024,2048,3072,4096 Not Matched

The solution is straightforward. Each certificate profile configures the key types and sizes that will be accepted by that profile. The default profile is configured to allow up to 4096-bit keys, so the certificate request containing an 8192-bit key fails. The profile configuration parameter involved is:

policyset.<name>.<n>.constraint.params.keyParameters=1024,2048,3072,4096

If you append 8192 to that list and update the profile configuration via ipa certprofile-mod (or create a new profile via ipa certprofile-import), then everything will work!
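For example, a hedged sketch of that workflow (the output file name is illustrative, and you may prefer to import a separate profile rather than modify the default):

ftweedal% ipa certprofile-show caIPAserviceCert --out largekey.cfg
  ... edit largekey.cfg, appending 8192 to the keyParameters line, i.e.
  policyset.<name>.<n>.constraint.params.keyParameters=1024,2048,3072,4096,8192
ftweedal% ipa certprofile-mod caIPAserviceCert --file largekey.cfg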

Deploying FreeIPA with IPA CA signing key > 2048-bits

When you deploy FreeIPA today, the IPA CA has a 2048-bit RSA key. There is currently no way to change this, but Dogtag does support configuring the key size when spawning a CA instance, so it should not be hard to support this in FreeIPA. I created issue #6790 to track this.
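For reference, the relevant key parameters in a Dogtag pkispawn deployment configuration look something like this (a hedged sketch from memory; check the pkispawn documentation before relying on exact parameter names or values):

[CA]
pki_ca_signing_key_type=rsa
pki_ca_signing_key_size=4096
pki_ca_signing_key_algorithm=SHA256withRSA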

Looking beyond RSA, there is also issue #3951: ECC Support for the CA, which concerns supporting an elliptic curve signing key in the FreeIPA CA. Once again, Dogtag supports EC signing algorithms, so supporting this in FreeIPA should be a matter of deciding the ipa-server-install(1) options and mechanically adjusting the pkispawn configuration.

If you have use cases for large signing keys and/or NIST ECC keys or other algorithms, please do not hesitate to leave comments in the issues linked above, or get in touch with the FreeIPA team on the freeipa-users@redhat.com mailing list or #freeipa on Freenode.

Installing R Packages in Fedora as a user

Posted by Adam Young on March 16, 2017 01:35 AM

When I was trying to run R code that required additional packages, I got the error message:

Installing packages into ‘/usr/lib64/R/library’
(as ‘lib’ is unspecified)
Warning in install.packages(new.pkg, dependencies = TRUE) :
 'lib = "/usr/lib64/R/library"' is not writable

Summary: if you create the following directory, R will install packages there instead.

~/R/x86_64-redhat-linux-gnu-library/3.3/
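In other words, something like this (the 3.3 component tracks the installed R version):

mkdir -p ~/R/x86_64-redhat-linux-gnu-library/3.3/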

Here are the more detailed steps I took.

To work around this, I tried running just the install command from an interactive R prompt. In this case, the package was "SuperLearner":

> install.packages("SuperLearner")
Installing package into ‘/usr/lib64/R/library’
(as ‘lib’ is unspecified)
Warning in install.packages("SuperLearner") :
  'lib = "/usr/lib64/R/library"' is not writable
Would you like to use a personal library instead?  (y/n) y
Would you like to create a personal library
~/R/x86_64-redhat-linux-gnu-library/3.3
to install packages into?  (y/n) y

After this, a dialog window popped up and had me select a CRAN mirror (I picked the geographically closest one) and it was off and running.

It errored out later with:

bit-ops.c:1:15: fatal error: R.h: No such file or directory
 #include <R.h>

It looks like I am missing the development libraries for R. I'll return to this in a bit.

If I exit out of R and check the directory (ls ~/R/x86_64-redhat-linux-gnu-library/3.3/), I can now see it is populated.

The base R install from Fedora owns the library directory:

 rpmquery -f /usr/lib64/R/library/Matrix/
R-core-3.3.2-3.fc25.x86_64

We don't want to mix the core libraries with user-installed libraries. Let me try a different package, now that the local user's library directory structure has been created:

> install.packages("doRNG")

Similar error…ok, let’s take care of the compile error.

sudo dnf install R-core-devel

Rerunning the install in an R session now completes successfully. I tried on a different machine and had to install the 'ed' command-line tool first.

Security, Consumer Reports, and Failure

Posted by Josh Bressers on March 12, 2017 09:03 PM
Last week there was a story about Consumer Reports doing security testing of products.


As one can imagine, there were a fair number of “they’ll get it wrong” sort of comments. They will get it wrong at first, but that’s not a reason to pick on these guys. They’re quite brave to take this task on; it’s nearly impossible if you think about the state of security (especially consumer security). But this is how things start. There is no industry that has gone from broken to perfect in one step. It’s a long hard road when you have to deal with systemic problems in an industry. Consumer product security problems may be larger and more complex than any other industry has ever had to solve, thanks to things such as globalization and how inexpensive tiny computers have become.

If you think about the auto industry, you’re talking about something that costs thousands of dollars. Safety is easy to justify because its cost is small relative to the overall cost of the vehicle. Now if we think about tiny computing devices, you could be talking about chips that cost less than one dollar. If the cost of security and safety will be more than the initial cost of the computing hardware, it can be impossible to justify that cost. If adding security doubles the cost of something, the manufacturers will try very hard to find ways around having to include such features. There are always bizarre technicalities that can help avoid regulation; groups like Consumer Reports help with accountability.

Here is where Consumer Reports and other testing labs will be incredibly important to this story. Even if there is regulation a manufacturer chooses to ignore, a group like Consumer Reports can still review the product. Consumer Reports will get things very wrong at first, sometimes it will be hilariously wrong. But that’s OK, it’s how everything starts. If you look back at any sort of safety and security in the consumer space, it took a long time, sometimes decades, to get it right. Cybersecurity will be no different, it’s going to take a long time to even understand the problem.

Our default reaction to mistakes is often one of ridicule; this is one of those times we have to be mindful of how dangerous that attitude is. If we see a group trying to do the right thing but getting it wrong, we need to offer advice, not mockery. If we don’t engage in a useful and serious way, nobody will take us seriously. There are a lot of smart security folks out there; we can help make the world a better place this time. Sometimes things can look hopeless and horrible, but things will get better. It’ll take time, it won’t be easy, but things will get better thanks to efforts such as this one.

Episode 37 - Your bathtub is more dangerous than a shark

Posted by Open Source Security Podcast on March 09, 2017 12:40 AM
Josh and Kurt discuss how the Vault 7 leaks show we live in the Neuromancer world, and how this is likely the new normal.

Download Episode

Show Notes


Episode 36 - A Good Enough Podcast

Posted by Open Source Security Podcast on March 05, 2017 06:48 PM
Josh and Kurt discuss an IoT bear, Alexa and Siri, Google's E2Email and S/MIME.

Download Episode

Show Notes


Better Resolution of Kerberos Credential Caches

Posted by Nathaniel McCallum on March 03, 2017 03:45 PM

DevConf is a great time of year. Lots of developers gather in one place and we get to discuss integration issues between projects that may not have a direct relationship. One of those issues this year was the desktop integration of Kerberos authentication.

GNOME Online Accounts has supported the creation of Kerberos accounts since nearly the beginning, thanks to the effort of Debarshi Ray. However, we were made aware of an issue this year that had not come up before. Namely, in a variety of cases GSSAPI would not be able to complete authentication for non-default TGTs.

Roughly, this meant that if you logged into Kerberos using two different accounts GSSAPI would only be able to complete authentication using your default credential cache - meaning the last account you logged into. Users could work around this problem by using kswitch to change their default credential cache. However, since authentication transparently failed, there was no indication to the user that this could work. So the user experience was particularly poor.
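For example, something like this (a sketch; the principal name is illustrative):

klist -l                       # list all credential caches in the collection
kswitch -p alice@EXAMPLE.COM   # make alice's cache the default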

This difficulty became even more noticeable after the Fedora deployment of Kerberos by Patrick Uiterwijk. Many Fedora developers also use Kerberos for other realms, so the pain was spreading.

I am happy to say that we have discovered a cure for this malady!

Matt Rogers worked with upstream to merge this patch which causes GSSAPI to do the RightThing™. Robbie Harwood landed the patch in Fedora (rawhide, 26, 25). So we believe this issue to be resolved.

If you’re a Fedora 25 user, please help us test the fix! There is a pending update for krb5 on Bodhi. The easy way to reproduce this issue is as follows (a command-line sketch appears after the list):

  1. Log in with the Kerberos account you want to use for the test.
  2. Log in with another Kerberos account.
  3. Confirm that the second account is default with klist.
  4. Attempt to log in to a service using the first credential and GSSAPI. The easiest way to do this is probably to go to a Kerberos-protected website using your browser (assuming it is properly configured for GSSAPI).
  5. Before the patch, automatic login should fail. Afterwards, it shouldn’t.
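A rough command-line sketch of those steps (the principals are illustrative):

kinit alice@EXAMPLE.ORG         # first account
kinit bob@FEDORAPROJECT.ORG     # second account; its cache becomes the default
klist                           # confirm the default cache belongs to bob
(now browse to a GSSAPI-protected site that expects alice's credentials)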

Enjoy!

What the Oscars can teach us about security

Posted by Josh Bressers on March 02, 2017 07:05 PM
If you watched the 89th Academy Awards you saw a pretty big mistake at the end of the show. The short story is that Warren Beatty was handed the wrong envelope; he opened it, looked at it, then gave it to Faye Dunaway to read, which she did. The wrong people came on stage and started giving speeches, confused scrambling happened, and the correct winner was brought on stage. No doubt this will be talked about for many years to come as one of the most interesting and exciting events in the history of the awards ceremony.

People make mistakes, and we won’t dwell on how the wrong envelope made it into the announcer’s hands. The details of how this error came to be aren’t what’s important for this discussion. The important lesson for us is to watch Warren Beatty’s behavior. He clearly knew something was wrong; if you watch the video of him, you can tell things aren’t right. But he just kept going, gave the card to Faye Dunaway, and she read the name of the movie on the card. These people aren’t some young amateurs here, these are seasoned actors. It’s not their first rodeo. So why did this happen?

The lesson for us all is to understand that when things start to break down, people will fall back to their instincts. The presenters knew their job was to open the card and read the name. Their job wasn’t to think about it or question what they were handed. As soon as they knew something was wrong, they went on autopilot and did what was expected. This happens with computer security all the time. If people get a scary phishing email, they will often go into autopilot and do things they wouldn’t do if they kept a level head. Most attackers know how this works and they prey on this behavior. It’s really easy to claim you’d never be so stupid as to download that attachment or click on that link, but you’re not under stress. Once you’re under stress, everything changes.

This is why police, firefighters, and soldiers get a lot of training. You want these people to do the right thing when they enter autopilot mode. As soon as a situation starts to get out of hand, training kicks in and these people will do whatever they were trained to do without thinking about it. Training works, there’s a reason they train so much. Most people aren’t trained like this so they generally make poor decisions when under stress.

So what should we take away from all this? The thing we as security professionals need to keep in mind is how this behavior works. If you have a system that isn’t essentially “secure by default”, anytime someone finds themselves under mental stress, they’re going to take the path of least resistance. If that path of least resistance is also something dangerous, you’re not designing for security. Even security experts will have this problem; we don’t have superpowers that let us make good choices in times of high stress. It doesn’t matter how smart you think you are, when you’re under a lot of stress, you will go into autopilot, and you will make bad choices if bad choices are the defaults.

Episode 35 - Crazy Cosmic Accident

Posted by Open Source Security Podcast on February 28, 2017 03:04 AM
Josh and Kurt discuss SHA-1 and cloudbleed. Bug bounties come up, we compare security to the Higgs boson, and IPv6 comes up at the end.

Download Episode

Show Notes



A Farewell To Git

Posted by Robbie Harwood on February 26, 2017 05:00 AM

Last week, by chance, I wrote the Git tutorial I'd been threatening friends with. And I say by chance, of course, because this week the ability to generate SHA-1 collisions was all but dropped on the world.

Which, let's be clear, is horrible news for everyone who now has to move their software off of SHA-1. But this work isn't entirely new: for the most part, it's a practical realization of work published in 2015, which follows on the heels of a string of weakenings dating all the way back to early 2005 (or even earlier, depending on what you're willing to count). During that time there has been growing concern from the security-minded about software that used SHA-1, and even more importantly, serious efforts from industry leaders to futureproof their code.

It's a shame that Git, released in mid-2005, didn't heed the warnings. From John Gilmore, no less. But that was my main point in the previous post: Linus - and Git - abhor abstraction (and politeness) in any form. And it is to their detriment.

Problem?

So in practical terms, to you and me (both relatively normal Git users), what does the ability to generate SHA-1 collisions mean?

To be fair, Linus did think about this. And he's right, as far as I know: Git prefers the local version of a hash, so there is no danger of a remote overwriting it. But the problem space is bigger than that, and therefore not a "non-issue" (as he put it).

The important discussion is this: what if the collided hash isn't in the local repository already?

Of course, there are many hashes (most, even) which are not in any given local repository. We, as hypothetical attackers, need concern ourselves with predicting a hash that the user will download (which they do not already have) that we can substitute a malicious collision for. We also need to do this in such a way that the user will not notice: pulling a commit or branch for purposes of code review will not be sufficient, for instance.

I think the easiest way to ensnare users is to catch emergency re-clones, and similar operations (toot toot), during which the user places more trust than they ought in the server. If everything looks right on the surface, and the old repository is in a semi-destroyed state, are we really going to review the immense amount of code looking for problems? In a project the size of this blog: maybe. Certainly not for the larger projects.

And of course there are a few clever offshoots of this idea that could be exploited (how many people do you think really check that the code they reviewed in the web tool matches the contents of the commit they just merged?), but that interests me less than this attack I thought up this morning.

In last week's post, I mentioned release branches. These are a model I work with a great deal, both as a contributor to established upstream projects and as a maintainer for distributions. A release branch, of course, mostly consists of commits which are the result of cherry-picking the development branch. And in that post, I suggested further the use of a flag - -x - which embeds the hash of the original commit in the release branch's version (in the commit message, specifically).
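For example (a sketch; the branch name and commit hash are illustrative):

git checkout release-1.0
git cherry-pick -x 1a2b3c4
(the new commit's message ends with a line like "(cherry picked from commit 1a2b3c4...)")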

Having the "original" commit hash easily accessible from the release branch is advantageous because we are working around a design decision in Git: that the uniqueness of commits takes into account their parents. More specifically, two commits which make the exact same changes to the exact same files but with different parents are, for purposes of Git, considered different commits. They will have different SHA-1 hashes.

This design decision becomes a design failure when we want to make release branches (which the kernel itself does...) and need to backport (i.e., duplicate changing only the parent) commits from stable branches. So to paper over the issue, we use -x, and then we can (by hand!) extract the original commit hash from the backport's message. It's worth noting that in "patch-based" systems (Darcs, Pijul, and friends), the original commit and the backport are the same commit, just applied on different branches. They do not suffer from this problem.

As each commit contains parent information, Git can normally be quite good at leaving no references unresolved by just downloading all the hashes that are referred to. But the cherry-pick hashes are not references that are visible to Git, and even if we added logic to try to detect them, it would never be good enough. Git also has the misfeature of allowing unique substrings of hashes at any point where full hashes could be used, which, in hindsight, seems designed to enable collision (and is a decision that GPG also made for fingerprints, where it is a problem today).

All that remains at that point is for an unsuspecting user to have a release branch checked out without a full copy of the master branch. Which is surprisingly likely for a distro packager (speaking from experience), or for new users who don't really need years of development and so made a shallow clone (git clone --depth 1 or so) in order to save time/bandwidth.

This practice of embedding SHA-1 hashes in commits is also why I predict that migrating Git off of SHA-1, if it even happens, will require an effort on the scale of migrating off of SVN (which still hasn't happened for many modern projects!).

Final thoughts

I would love it if the ideas we abandoned in the quest for speed uber alles would return. I see projects like Pijul, improving on Darcs, written in a modern language, and boasting performance better than Git, and it gives me hope. BitKeeper has an open source license now, so perhaps we will think about weave merge once again. It doesn't feel like too much of a stretch to imagine a resurrected Monotone pushing the importance of integrity (cryptographic or otherwise) and abstraction, or perhaps large corporate players will continue to forcibly drag Mercurial along with them. Or maybe a new tool that hasn't yet seen attention will steal the show.

Just please, not Git. Not again. Not still. No more.

I am a Cranky, White, Male Feminist

Posted by Stephen Gallagher on February 25, 2017 02:52 AM

Today, I was re-reading a linux.com article from 2014 by Leslie Hawthorne which had been reshared by the Linux Foundation Facebook account yesterday in honor of #GirlDay2017 (which I was regrettably unaware of until it was over). It wasn’t so much the specific content of the article that got me thinking, but rather the level of discourse that it “inspired” in the Facebook thread that pointed me there (I will not link to it as it is unpleasant and reflects poorly on The Linux Foundation, an organization which is in most circumstances largely benevolent).

In the article, Hawthorne describes the difficulties that she faced as a woman in getting involved in technology (including being dissuaded by her own family out of fear for her future social interactions). While in her case, she ultimately ended up involved in the open-source community (albeit through a roundabout journey), she explained the sexism that plagued this entire process, both casual and explicit.

What caught my attention (and drew my ire) was the response to this article. This included such thoughtful responses as “Come to my place baby, I’ll show you my computer” as well as completely tone-deaf assertions that if women really wanted to be involved in tech, they’d stick it out.

Seriously, what is wrong with some people? What could possibly compel you to “well, actually” a post about a person’s own personal experience? That part is bad enough, but to turn the conversation into a deeply creepy sexual innuendo is simply disgusting.

Let me be clear about something: I am a grey-haired, cis-gendered male of Eastern European descent. As Patrick Stewart famously said:

[image: Patrick Stewart quote]

I am also the parent of two young girls, one of whom is celebrating her sixth birthday today. The fact of the timing is part of what has set me off. You see, this daughter of mine is deeply interested in technology and has been since a very early age. She’s a huge fan of Star Wars, LEGOs and point-and-click adventure games. She is going to have a very different experience from Ms. Hawthorne’s growing up, because her family is far more supportive of her interests in “nerdy” pursuits.

But still I worry. No matter how supportive her family is: will this world be willing to accept her when she’s ready to join it? How much pressure is the world at large going to put on her to follow “traditional” female roles? (By “traditional” I basically mean the set of things that were decided on in the 1940s and 1950s and suddenly became the whole history of womanhood…)

So let me make my position perfectly clear.  I am a grey-haired, cis-gendered male of Eastern European descent. I am a feminist, an ally and a human-rights advocate. If I see bigotry, sexism, racism, ageism or any other “-ism” that isn’t humanism in my workplace, around town, on social media or in the news, I will take a stand against it, I will fight it in whatever way is in my power and I will do whatever I can to make a place for women (and any other marginalized group) in the technology world.

Also, let me be absolutely clear about something: if I am interviewing two candidates for a job (any job, at my current employer or otherwise) of similar levels of suitability, I will fall on the side of hiring the woman, ethnic minority or non-cis-gendered person over a Caucasian man. No, this is not “reverse racism” or whatever privileged BS you think it is. Simply put: this is a set of people who have had to work at least twice as hard to get to the same point as their privileged Caucasian male counterpart and I am damned sure that I’m going to hire the person with that determination.

As my last point (and I honestly considered not addressing it), I want to call out the ignorant jerks who claim, quote “Computer science isn’t a social process at all, it’s a completely logical process. People interested in comp. sci. will pursue it in spite of people, not because of it. If you value building relationships more than logical systems, then clearly computer science isn’t for you.” When you say this, you are saying that this business should only permit socially-inept males into the club. So let me use some of your “completely logical process” to counter this – and I use the term extremely liberally – argument.

In computer science, we have an expression: “garbage in, garbage out”. What it essentially means is that when you write a function or program that processes data, if you feed it bad data in, you generally get bad (or worthless… or harmful…) data back out. This is however not limited to code. It is true of any complex system, which includes social and corporate culture. If the only input you have into your system design is that of egocentric, anti-social men, then the only things you can ever produce are those things that can be thought of by egocentric, anti-social men. If you want instead to have a unique, innovative idea, then you have to be willing to listen to ideas that do not fit into the narrow worldview that is currently available to you.

Pushing people away and then making assertions that “if people were pushed away so easily, then they didn’t really belong here” is the most deplorable ego-wank I can think of. You’re simultaneously disregarding someone’s potential new idea while helping to remove all of their future contributions from the available pool while at the same time making yourself feel superior because you think you’re “stronger” than they are.

To those who are reading this and might still feel that way, let me remind you of something: chances are, you were bullied as a child (I know I was). There are two kinds of people who come away from that environment. One is the type who remembers what it was like and tries their best to shield others from similar fates. The other is the type that finds a pond where they can be the big fish and then gets their “revenge” by being a bully themselves to someone else.

If you’re one of those “big fish”, let me be clear: I intend to be an osprey.


SHA-1 is dead, long live SHA-1!

Posted by Josh Bressers on February 24, 2017 01:45 AM
Unless you’ve been living under a rock, you heard that some researchers managed to create a SHA-1 collision. The short story as to why this matters is that the whole purpose of a hashing algorithm is to make it impossible to generate collisions on purpose. Unfortunately, making something truly impossible is usually itself impossible, so in reality we just make sure it’s really, really hard to generate a collision. Thanks to Moore’s Law, hard things don’t stay hard forever. This is why MD5 had to go live on a farm out in the country, and we’re not allowed to see it anymore … because it’s having too much fun. SHA-1 will get to join it soon.

The details about this attack are widely published at this point, but that’s not what I want to discuss, I want to bring things up a level and discuss the problem of algorithm deprecation. SHA-1 was basically on the way out. We knew this day was coming, we just didn’t know when. The attack isn’t super practical yet, but give it a few years and I’m sure there will be some interesting breakthroughs against SHA-1. SHA-2 will be next, which is why SHA-3 is a thing now. At the end of the day though this is why we can’t have nice things.

A long time ago there weren’t a bunch of expired standards. There were mostly just current standards and what we would call “old” standards. We kept them around because it was less work than telling them we didn’t want to be friends anymore. Sure they might show up and eat a few chips now and then, but nobody really cared. Then researchers started to look at these old algorithms and protocols as a way to attack modern systems. That’s when things got crazy.

It’s a bit like someone bribing one of your old annoying friends to sneak the attacker through your back door during a party. The friend knows you don’t really like him anymore, so it won’t really matter if he gets caught. Thus began the long and horrible journey to start marking things as unsafe. Remember how long it took before MD5 wasn’t used anymore? How about SSL 2 or SSHv1? It’s not easy to get rid of widely used standards even if they’re unsafe. Anytime something works it won't be replaced without a good reason. Good reasons are easier to find these days than they were even a few years ago.

This brings us to the recent SHA-1 news. I think it's going better this time, a lot better. The browsers already have plans to deprecate it. There are plenty of good replacements ready to go. Did we ever discuss killing off MD5 before it was clearly dead? Not really. It wasn't until a zero day MD5 attack was made public that it was decided maybe we should stop using it. Everyone knew it was bad for them, but they figured it wasn’t that big of a deal. I feel like everyone understands SHA-1 isn’t a huge deal yet, but it’s time to get rid of it now while there’s still time.

This is the world we live in now. If you can't move quickly you will fail. It's not a competitive advantage, it's a requirement for survival. Old standards no longer ride into the sunset quietly, they get their lunch money stolen, jacket ripped, then hung by a belt loop on the fence.

Episode 34 - Bathing in Ebola Virus

Posted by Open Source Security Podcast on February 22, 2017 09:26 PM
Josh and Kurt discuss RSA, the cryptographer's panel and of course, AI.

Download Episode

Show Notes


Our Bootloader Problem

Posted by Nathaniel McCallum on February 21, 2017 11:05 PM

GRUB, it is time we broke up. It’s not you, it’s me. Okay, it’s you. The last 15+ years have some great (read: painful) memories. But it is time to call it quits.

Red Hat Linux (not RHEL) deprecated LILO for version 9 (PDF; hat tip: Spot). This means that Fedora has used GRUB as its bootloader since the very first release: Fedora Core 1.

GRUB was designed for a world where bootloaders had to locate a Linux kernel on a filesystem. This meant it needed support for all the filesystems anyone might conceivably use. It was also built for a world where dual-booting meant having a bootloader-implemented menu to choose between operating systems.

The UEFI world we live in today looks nothing like this. UEFI requires support for a standard filesystem. This filesystem, which for all intents and purposes duplicates the contents of /boot, is required on every Linux system which boots UEFI. So UEFI loads the bootloader from the UEFI partition and then the bootloader loads the kernel from the /boot partition.

Did you know that UEFI can just boot the kernel directly? It can!
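For example, a kernel built with the EFI stub can be registered as a UEFI boot entry directly, roughly like this (a hedged sketch; the disk, partition, kernel version, and root device are illustrative, and the kernel and initramfs must live on the EFI System Partition):

efibootmgr --create --disk /dev/sda --part 1 --label "Fedora (direct)" \
    --loader '\vmlinuz-4.9.13-200.fc25.x86_64' \
    --unicode 'root=/dev/sda3 ro initrd=\initramfs-4.9.13-200.fc25.x86_64.img'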

The situation, however, is much worse than just duplicated effort. With the exception of Apple hardware, practically all UEFI implementations ship with Secure Boot and a TPM enabled by default. Only appropriately signed UEFI code will be run. This means we now introduce a shim which is signed. This, in turn, loads GRUB from the UEFI partition.

This means that our boot process now looks like this:

  • UEFI filesystem
    1. shim
    2. GRUB
  • /boot filesystem
    1. Linux

It gets worse. Microsoft OEMs are now enabling BitLocker by default. BitLocker seals (encrypts) the Windows partition to the TPM PCRs. This means that if the boot process changes (and you have no backup of the key), you can’t decrypt your data. So remember that great boot menu that GRUB provided so we can dual-boot with Windows? It can never work, cryptographically.

The user experience of this process is particularly painful. Users who manage to get Fedora installed will see a nice GRUB menu entry for Windows. But if they select it, they are immediately greeted with a terrifying message telling them that the boot configuration has changed and their encrypted data is inaccessible.

To recap, where Secure Boot is enabled (pretty much all Intel hardware), we must use the boot menu provided by UEFI. If we don’t, the PCRs of the TPM have unknown hashes and anything sealed to the boot state will fail to decrypt.

The good news is that Intel provides a reference implementation of UEFI, and it includes pretty much everything we’d ever need. This means that most vendors get it pretty much correct as well. OEMs are even using these facilities for their own (hidden) recovery partitions.

So why not just have UEFI boot the kernel directly? There are still some drawbacks to this approach.

First, it requires signing every build of the kernel. This is definitely undesirable since kernels are updated pretty regularly.

Second, every kernel upgrade would mean a write to UEFI NVRAM. There are some concerns about the longevity of the hardware under such frequent UEFI writes.

Third, it exposes kernels as a menu option in UEFI. This menu typically contains operating systems, not individual kernels, which results in a poor user experience. Most users don’t need to care about what kernel they boot. There should be a bootloader which loads the most recently installed kernel and falls back to older kernels if the new kernels fail to boot. All of this can be done without a menu (unless the user presses a key).

Fortunately, systemd already implements precisely such a bootloader. Previously, this bootloader was called gummiboot. But it has since been merged into the systemd repository as systemd-boot.

With systemd-boot, our boot process can look like this:

  • UEFI filesystem
    1. shim
    2. systemd-boot
    3. Linux

It would even be possible (though, not necessarily desirable) to sign systemd-boot directly and get rid of the shim.

In short, we need to stop trying to make GRUB work in our current context and switch to something designed specifically for the needs of our modern systems. We already ship this code in systemd. Further, systemd already ships a tool for managing the bootloader. We just need to enable it in Anaconda and test it.

Who’s with me!?

P.S. - It would be very helpful if we could get some good documentation on manually migrating from GRUB to systemd-boot. This would at least enable the testing of this setup by brave users.
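Until then, a very rough sketch of such a migration might look like the following (hedged and untested; kernel versions, the root device, and the ESP mount point are illustrative, and you should keep GRUB around until you have verified the new entry boots):

bootctl --path=/boot/efi install
cp /boot/vmlinuz-4.9.13-200.fc25.x86_64 /boot/efi/
cp /boot/initramfs-4.9.13-200.fc25.x86_64.img /boot/efi/
cat > /boot/efi/loader/entries/fedora.conf <<'EOF'
title   Fedora 25
linux   /vmlinuz-4.9.13-200.fc25.x86_64
initrd  /initramfs-4.9.13-200.fc25.x86_64.img
options root=/dev/sda3 ro rhgb quiet
EOF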

Wildcard certificates in FreeIPA

Posted by Fraser Tweedale on February 20, 2017 04:55 AM

The FreeIPA team sometimes gets asked about wildcard certificate support. A wildcard certificate is an X.509 certificate where the DNS-ID has a wildcard in it (typically as the most specific domain component, e.g. *.cloudapps.example.com). Most TLS libraries match wildcard domains in the obvious way.

In this blog post we will discuss the state of wildcard certificates in FreeIPA, but before proceeding it is fitting to point out that wildcard certificates are deprecated, and for good reason. While the compromise of any TLS private key is a serious matter, the attacker can only impersonate the entities whose names appear on the certificate (typically one or a handful of DNS addresses). But a wildcard certificate can impersonate any host whose name happens to match the wildcard value.

In time, validation of wildcard domains will be disabled by default and (hopefully) eventually removed from TLS libraries. The emergence of protocols like ACME that allow automated domain validation and certificate issuance mean that there is no real need for wildcard certificates anymore, but a lot of programs are yet to implement ACME or similar; therefore there is still a perceived need for wildcard certificates. In my opinion some of this boils down to lack of awareness of novel solutions like ACME, but there can also be a lack of willingness to spend the time and money to implement them, or a desire to avoid changing deployed systems, or taking a "wait and see" approach when it comes to new, security-related protocols or technologies. So for the time being, some organisations have good reasons to want wildcard certificates.

FreeIPA currently has no special support for wildcard certificates, but with support for custom certificate profiles, we can create and use a profile for issuing wildcard certificates.

Creating a wildcard certificate profile in FreeIPA

This procedure works on FreeIPA 4.2 (RHEL 7.2) and later.

First, kinit admin and export an existing service certificate profile configuration to a file:

ftweedal% ipa certprofile-show caIPAserviceCert --out wildcard.cfg
---------------------------------------------------
Profile configuration stored in file 'wildcard.cfg'
---------------------------------------------------
  Profile ID: caIPAserviceCert
  Profile description: Standard profile for network services
  Store issued certificates: TRUE

Modify the profile; the minimal diff is:

--- wildcard.cfg.bak
+++ wildcard.cfg
@@ -19 +19 @@
-policyset.serverCertSet.1.default.params.name=CN=$request.req_subject_name.cn$, o=EXAMPLE.COM
+policyset.serverCertSet.1.default.params.name=CN=*.$request.req_subject_name.cn$, o=EXAMPLE.COM
@@ -108 +108 @@
-profileId=caIPAserviceCert
+profileId=wildcard

Now import the modified configuration as a new profile called wildcard:

ftweedal% ipa certprofile-import wildcard \
    --file wildcard.cfg \
    --desc 'Wildcard certificates' \
    --store 1
---------------------------
Imported profile "wildcard"
---------------------------
  Profile ID: wildcard
  Profile description: Wildcard certificates
  Store issued certificates: TRUE

Next, set up a CA ACL to allow the wildcard profile to be used with the cloudapps.example.com host:

ftweedal% ipa caacl-add wildcard-hosts
-----------------------------
Added CA ACL "wildcard-hosts"
-----------------------------
  ACL name: wildcard-hosts
  Enabled: TRUE

ftweedal% ipa caacl-add-profile wildcard-hosts --certprofiles wildcard
  ACL name: wildcard-hosts
  Enabled: TRUE
  CAs: ipa
  Profiles: wildcard
-------------------------
Number of members added 1
-------------------------

ftweedal% ipa caacl-add-host wildcard-hosts --hosts cloudapps.example.com
  ACL name: wildcard-hosts
  Enabled: TRUE
  CAs: ipa
  Profiles: wildcard
  Hosts: cloudapps.example.com
-------------------------
Number of members added 1
-------------------------

An additional step is required in FreeIPA 4.4 (RHEL 7.3) and later (it does not apply to FreeIPA < 4.4):

ftweedal% ipa caacl-add-ca wildcard-hosts --cas ipa
  ACL name: wildcard-hosts
  Enabled: TRUE
  CAs: ipa
-------------------------
Number of members added 1
-------------------------

Then create a CSR with subject CN=cloudapps.example.com (details omitted), and issue the certificate:

ftweedal% ipa cert-request my.csr \
    --principal host/cloudapps.example.com \
    --profile wildcard
  Issuing CA: ipa
  Certificate: MIIEJzCCAw+gAwIBAgIBCzANBgkqhkiG9w0BAQsFADBBMR8...
  Subject: CN=*.cloudapps.example.com,O=EXAMPLE.COM
  Issuer: CN=Certificate Authority,O=EXAMPLE.COM
  Not Before: Mon Feb 20 04:21:41 2017 UTC
  Not After: Thu Feb 21 04:21:41 2019 UTC
  Serial number: 11
  Serial number (hex): 0xB

Alternatively, you can use Certmonger to request the certificate:

ftweedal% ipa-getcert request \
  -d /etc/httpd/alias -p /etc/httpd/alias/pwdfile.txt \
  -n wildcardCert \
  -T wildcard

This will request a certificate for the current host. The -T option specifies the profile to use.
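For completeness, one way the CSR itself might be generated with OpenSSL (a sketch; key and file names are illustrative, and any method that produces a CN of cloudapps.example.com will do):

ftweedal% openssl req -new -newkey rsa:2048 -nodes \
    -keyout cloudapps.key -out my.csr \
    -subj '/O=EXAMPLE.COM/CN=cloudapps.example.com'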

Discussion

Observe that the subject common name (CN) in the CSR does not contain the wildcard. FreeIPA requires naming information in the CSR to perfectly match the subject principal. As mentioned in the introduction, FreeIPA has no specific support for wildcard certificates, so if a wildcard were included in the CSR, it would not match the subject principal and the request would be rejected.

When constructing the certificate, Dogtag performs a variable substitution into a subject name string. That string contains the literal wildcard and the period to its right, and the common name (CN) from the CSR gets substituted in after that. The relevant line in the profile configuration is:

policyset.serverCertSet.1.default.params.name=CN=*.$request.req_subject_name.cn$, o=EXAMPLE.COM

When it comes to wildcards in Subject Alternative Name DNS-IDs, it might be possible to configure a Dogtag profile to add this in a similar way to the above, but I do not recommend it, nor am I motivated to work out a reliable way to do this, given that wildcard certificates are deprecated. (By the time TLS libraries eventually remove support for treating the subject CN as a DNS-ID, I will have little sympathy for organisations that still haven’t moved away from wildcard certs).

In conclusion: you shouldn’t use wildcard certificates, and FreeIPA has no special support for them, but if you really need to, you can do it with a custom certificate profile.

Episode 33 - Everybody who went to the circus is in the circus (RSA 2017)

Posted by Open Source Security Podcast on February 15, 2017 06:22 AM
Josh and Kurt are at the same place at the same time! We discuss our RSA sessions and how things went. Talk of CVE IDs, open source libraries, Wordpress, and early morning sessions.

Download Episode

Show Notes


Reality Based Security

Posted by Josh Bressers on February 13, 2017 04:37 AM
If I demand you jump off the roof and fly, and you say no, can I call you a defeatist? What would you think? To a reasonable person it would be insane to associate this attitude with being a defeatist. There are certain expectations that fall within the confines of reality. Expecting things to happen outside of those rules is reckless and can often be dangerous.

Yet in the universe of cybersecurity we do this constantly. Anyone who doesn’t pretend we can fix problems is a defeatist and part of the problem. We just have to work harder and not claim something can’t be done, that’s how we’ll fix everything! After being called a defeatist during a discussion, I decided to write some things down. We spend a lot of time trying to fly off of roofs instead of looking for practical realistic solutions for our security problems.

The way cybersecurity works today, someone will say “this is a problem”. Maybe it’s IoT, or ransomware, or antivirus, secure coding, security vulnerabilities; whatever, pick something, there’s plenty to choose from. It’s rarely in a general context though; it will be sort of specific, for example “we have to teach developers how to stop adding security flaws to software”. Someone else will say “we can’t fix that”, then they get called a defeatist for being negative, and it’s assumed the defeatists are the problem. The real problem is they’re not wrong. It can’t be fixed. We will never see humans write error-free code; there is no amount of training we can give them. Pretending it can is what’s dangerous. Pretending we can fix problems we can’t is lying.

The world isn’t fairy dust and rainbows. We can’t wish for more security and get it. We can’t claim to be working on a problem if we have no clue what it is or how to fix it. I’ll pick on IoT for a moment. How many security IoT “experts” exist now? The number is non-trivial. Does anyone have any ideas how to understand the IoT security problems? Talking about how to fix IoT doesn’t make sense today; we don’t even really understand what’s wrong. Is the problem devices that never get updates? What about poor authentication? Maybe managing the devices is the problem? It’s not one thing, it’s a lot of things put together in a martini shaker, shaken up, then dumped out in a heap. We can’t fix IoT because we don’t know what it even is in many instances. I’m not a defeatist, I’m trying to live in reality and think about the actual problems. It’s a lot easier to focus on solutions for problems you don’t understand. You will find a solution; it just won’t make sense.

So what do we do now? There isn’t a quick answer, there isn’t an easy answer. The first step is to admit you have a problem though. Defeatists are a real thing, there’s no question about it. The trick is to look at the people who might be claiming something can’t be fixed. Are they giving up, or are they trying to reframe the conversation? If you declare them a defeatist, the conversation is now over, you killed it. On the other side of the coin, pretending things are fine is more dangerous than giving up, you’re living in a fantasy. The only correct solution is reality based security. Have honest and real conversations, don’t be afraid to ask hard questions, don’t be afraid to declare something unfixable. An unfixable problem is really just one that needs new ideas.

You can't fly off the roof, but trampolines are pretty awesome.

I'm @joshbressers on Twitter, talk to me.

Episode 32 - Gambling as a Service

Posted by Open Source Security Podcast on February 08, 2017 01:35 AM
Josh and Kurt discuss random numbers, a lot. Also slot machines, gambling, and dice.

Download Episode

Show Notes


A sweet metaphor

Posted by Stephen Gallagher on February 06, 2017 05:17 PM

If you’ve spent any time in the tech world lately, you’ve probably heard about the “Pets vs. Cattle” metaphor for describing system deployments. To recap: the idea is that administrators treat their systems as animals: some they treat very much like a pet; they care for them, monitor them intently and if they get “sick”, nurse them back to health. Other systems are more like livestock: their value is in their ready availability, and if any individual one gets sick, lamed, etc., you simply euthanize it and then go get a replacement.

Leaving aside the dreadfully inaccurate representation of how ranchers treat their cattle, this metaphor is flawed in a number of other ways. It’s constantly trotted out as being representative of “the new way of doing things vs. the old way”. In reality, I cannot think of a realistic environment that would ever be able to move exclusively to the “new way”, with all of their machines being small, easily-replaceable “cattle”.

No matter how much the user-facing services might be replaced with scalable pods, somewhere behind that will always be one or more databases. While databases may have load-balancers, failover and other high-availability and performance options, ultimately they will always be “pets”. You can’t have an infinite number of them, because the replication storm would destroy you, and you can’t kill them off arbitrarily without risking data loss.

The same is true (perhaps doubly so) for storage servers. While it may be possible to treat the interface layer as “cattle”, there’s no way that you would expect to see the actual storage itself being clobbered and overwritten.

The main problem I have with the traditional metaphor is that it doesn’t demonstrate the compatibility of both modes of operation. Yes, there’s a lot of value to moving your front-end services to the high resilience that virtualization and containerization can provide, but that’s not to say that it can continue to function without the help of those low-level pets as well. It would be nice if every part of the system from bottom to top was perfectly interchangeable, but it’s unlikely to happen.

So, I’d like to propose a different metaphor to describe things (in keeping with the animal husbandry theme): beekeeping. Beehives are (to me) a perfect example of how a modern hybrid-mode system is set up. In each hive you have thousands of completely replaceable workers and drones; they gather nectar and support the hive, but the loss of any one (or even dozens) makes no meaningful difference to the hive’s production.

However, each hive also has a queen bee; one entity responsible for controlling the hive and making sure that it continues to function as a coherent whole. If the queen dies or is otherwise removed from the hive, the entire system collapses on itself. I think this is a perfect metaphor for those low-level services like databases, storage and domain control.

This metaphor better represents how the different approaches need to work together. “Pets” don’t provide any obvious benefit to their owners (save companionship), but in the computing world, those systems are fundamental to keeping things running. And with the beekeeping metaphor, we even have a representative for the collaborative output… and it even rhymes with “money”.


There are no militant moderates in security

Posted by Josh Bressers on February 06, 2017 04:16 PM
There are no militant moderates. Moderates never stand out for having a crazy opinion or idea, moderates don’t pick fights with anyone they can. Moderates get the work done. We could look at the current political climate, how many moderate reasonable views get attention? Exactly. I’m not going to talk about politics, that dumpster fire doesn’t need any more attention than it’s already getting. I am however going to discuss a topic I’m calling “security moderates”, or the people who are doing the real security work. They are sane, reasonable, smart, and actually doing things that matter. You might be one, you might know one or two. If I was going to guess, they’re a pretty big group. And they get ignored quite a lot because they're too busy getting work done to put on a show.

I’m going to split existing security talent into some sort of spectrum. There’s nothing more fun than grouping people together in overly generalized ways. I’m going to use three groups. You have the old guard on one side (I dare not mention left or right lest the political types have a fit). This is the crowd I wrote about last week; The people who want to protect their existing empires. On the other side you have a lot of crazy untested ideas, many of which nobody knows if they work or not. Most of them won’t work, at best they're a distraction, at worst they are dangerous.

Then in the middle we have our moderates. This group is the vast majority of security practitioners. The old guard think these people are a bunch of idiots who can’t possibly know as much as they do. After all, 1999 was the high point of security! The new crazy ideas group thinks these people are wasting their time on old ideas, their new hip ideas are the future. Have you actually seen homomorphic end point at rest encryption antivirus? It’s totally the future!

Now here’s the real challenge. How many conferences and journals have papers about reasonable practices that work? None. They want sensational talks about the new and exciting future, or maybe just new and exciting. In a way I don’t blame them, new and exciting is, well, new and exciting. I also think this is doing a disservice to the people getting work done in many ways. Security has never been an industry that has made huge leaps driven by new technology. It’s been an industry that has slowly marched forward (not fast enough, but that’s another topic). Some industries see huge breakthroughs every now and then. Think about how relativity changed physics overnight. I won’t say security will never see such a breakthrough, but I think we would be foolish to hope for one. The reality is our progress is made slowly and methodically. This is why putting a huge focus on crazy new ideas isn’t helping, it’s distracting. How many of those new and crazy ideas from a year ago are even still ideas anymore? Not many.

What do we do about this sad state of affairs? We have to give the silent majority a voice. Anyone reading this has done something interesting and useful. In some way you’ve moved the industry forward, you may not realize it in all cases because it’s not sensational. You may not want to talk about it because you don’t think it’s important, or you don’t like talking, or you’re sick of the fringe players criticizing everything you do. The first thing you should do is think about what you’re doing that works. We all have little tricks we like to use that really make a difference.

Next write it down. This is harder than it sounds, but it’s important. Most of these ideas aren’t going to be full papers, but that’s OK. Industry changing ideas don’t really exist, small incremental change is what we need. It could be something simple like adding an extra step during application deployment or even adding a banned function to your banned.h file. The important part is explaining what you did, why you did it, and what the outcome was (even if it was a failure, sharing things that don’t work has value). Some ideas could be conference talks, but you still need to write things down to get talks accepted. Just writing it down isn’t enough though. If nobody ever sees your writing, you’re not really writing.  Publish your writing somewhere, it’s never been easier to publish your work. Blogs are free, there are plenty of groups to find and interact with (reddit, forums, twitter, facebook). There is literally a security conference every day of the year. Find a venue, tell your story.

There are no militant moderates, this is a good thing. We have enough militants with agendas. What we need more than ever are reasonable and sane moderates with great ideas, making a difference every day. If the sane middle starts to work together, things will get better, and we will see the change we need.

Have an idea how to do this? Let me know: @joshbressers on Twitter

Github, Facebook, And Bad Crypto (oh My!)

Posted by Robbie Harwood on February 05, 2017 05:00 AM

(If you were hoping for me to take potshots at OpenSSL, there will be none of that this week. As a single item, this is worse, and besides, I've been having a very pleasant interaction with the OpenSSL developers. Even if their Contributor License Agreement process is just as irritating as any other CLA process.)

GitHub decided to look at the problem of token recovery, and has settled on an approach. Given the title and lead-in, you can probably tell already that I'm not happy about it. And to their credit: this is a difficult problem, and the mechanics are fiddly. They've set up a bug bounty program for any specification or implementation errors, but I have a bug right here that they will pay me exactly $0 for, won't acknowledge, and won't fix.

The bug is that this specification should never have been written, and this protocol should never be implemented.

Them's fightin' words, alright. I can back this up. Surprise: this is my niche. And speaking of which, it may surprise them to know that centralized accounts, federated identity, and OTP (one-time password) tokens are all solved problems. And they're solved together, too, by the beast whose name I shall now speak: Kerberos. My puppy (at least in RPM-land).

I won't talk too much about Kerberos here, both because I already do that a lot and also because the article at your local library is pretty good. (It's also a bit of a mess.) Suffice it to say: a protocol providing strong cryptographic guarantees about authentication as well as automatic establishment of a secure channel between any two servers without ever passing user keys across the internet in any form. It's older than I am, which makes it unpopular with the startup world.

Tokens

The motivation for this new process, it seems, is that fixing lost tokens is "hard". This is somewhat intentional, it turns out: one is supposed to guard the token, and it is supposed to be difficult to reissue to harden against attackers. And as a user experience, this of course is suboptimal: a user's phone burns out, and then whoops, they can't log in anymore.

To go off on another tangent for a moment: there are three types of two-factor auth token which are relevant here. The first is conceptually the simplest: a time-locked SMS is sent to your registered phone number, and one types the digits into the login form. Fine as far as it goes, but it requires that any potential adversary not also be able to eavesdrop on one's SMS communication at any point.

The second is what we often call "hard tokens". These are devices about the size of my thumb (usually on a keychain) on which there is typically a six-digit display and a button. One pushes the button, and it displays a series of numbers that the user then types into the login prompt. This works because the server has a notion of what the token's state is, and can confirm their agreement. This one is actually pretty good, but the tokens themselves are not cheap. In particular, since most users have smartphones, the hard token is often replaced with an app (if you are using Google Authenticator which is closed source, please switch to the open source FreeOTP). This increases the attack surface by requiring that the device itself be trusted. More on this in a minute.
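(As an aside, the "agreement" is nothing magical: both ends derive the same six digits from a shared secret plus the current time. A quick sketch from a shell, assuming the oath-toolkit package is installed; the base32 secret here is a throwaway example, not anything real:)

# throwaway example secret; a real one is handed out by the service at enrollment
SECRET=JBSWY3DPEHPK3PXP
# print the current TOTP value; the server runs the same math over the same secret
oathtool --totp -b "$SECRET"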

The third is relatively new, and is associated with helpful acronyms like "FIDO" and "U2F". These are physical tokens that you insert into a USB port, push a button, and it performs an entire challenge-response handshake to show existence of the token, and thereby provide a second factor. Nifty, except for the part where the browser is talking to hardware directly now. Also the part where Mozilla Firefox still refuses to implement it, which is especially irritating in the modern web because while there is an addon that provides the functionality, everyone wrote their detection scripts based on User-agent. Shame.

So, three types of tokens. And if I've written this correctly, the intended reaction is that U2F tokens should seem the clear winner, browser gunk notwithstanding. And that's not even wrong. But here's the thing: while writing this article, I realized that I have lost my U2F token. And then I realized that I haven't used this token since a couple weeks after its acquisition, and therefore it isn't tied to any accounts. But that itself is interesting. It turns out that the only times I would use this token are for GitHub logins and social media logins: basically, anything with an account that's tied to my name. All three sites.

The three sites being, of course, GitHub, Facebook, and Twitter. Facebook, while it has what looks like nice 2fa integration, is something I predominantly use on mobile, so I can't enable a u2f token at all, and a software token would get me nowhere because, being on the same device, it's not a second factor at all. Twitter does not have 2fa integration at all as far as I can tell, but I don't care because I also mostly use that on mobile.

And then there's GitHub. See, they noticed that you can't use u2f on mobile, and decided that the best course of action was to require another token be present. This puts the u2f token on an equal power level with the other token. To emphasize: this means that anything you could do with a u2f token, you can also do with the software token (on the same device! See above) or SMS token (these are used in the recovery process, so still relevant). I reported this at launch of u2f integration, and received the nicest "we will not be fixing this and would prefer not to talk to you" email I've gotten in a while.

Back on track

I read through their more in-depth announcement as well as the spec. Predictably, I have some thoughts.

  • Step four in the process (from the announcement) is "Contact GitHub Support.". In particular, "GitHub Support can then use this information as part of a risk-based analysis to decide if proof of account ownership has been established in order to disable two-factor authentication.". So this means that an actual human is involved in this process. Which means the whole thing is vulnerable to social engineering, on top of the other things I'm going to say below.

  • "Forcing users to everywhere use an email address has privacy implications, potentially allowing service providers to collude to track individuals' activity across many domains." (from the spec). This would be a lot more genuine if it were not coming from Facebook, kings of the real names policy (for which, by the way, I still know people who have had difficulty registering under their real name). And um... both GitHub and Facebook require an email address to be associated anyway. Everything about "anti-tracking" coming from Facebook feels like honey through the mouth of a snake.

  • They worry about analytics scripts leaking information in login pages. I have a suggestion. It's really lightweight. In fact, it's negative code. And bear with me a moment, I know this is hard, but: what if you didn't put analytics scripts on login pages?

  • Fundamental misunderstanding of authentication providers, especially federated ones. They seem to believe that such systems came into existence fifteen years ago (I'm twenty-four right now, and you'll remember that Kerberos is older than me), that users are unwilling to disclose their identity to services (how will you do login without this? Also, you're Facebook.), and that caching doesn't exist.

  • This bullet point is just to re-emphasize the previous one. Seriously, we're trusting these companies with our data.

  • icon-152px. This is a required (MUST) field in the JSON messages that are passed around on the wire. It is indicated to be "The URL of a 152x152 pixel PNG file representing the issuer". It is never mentioned again. Its purpose is not explained. The only possible uses I can come up with are showing pretty graphics to the user (eww) or for actual human verification (TERRIFYING). So some other questions: why 152? Why does it need to be square? Why a PNG? What is a "pixel PNG"? What does it mean to be "representing the issuer"? Does it need to be blue and look like the Fedora logo? Also: it's the URL of a page. So there's now a mechanism to point clients at arbitrary URLs, which is fun.

  • It's just OAuth. They even call out their own protocol as a simplification of OAuth.

  • Let me lose any friends I made with the previous point by saying that OAuth is terrible and that passing around bearer tokens as your authentication is just wrong.

  • It is stated that "Facebook only stores a token with an encrypted secret that is associated with a Facebook account and does not become valid until it's used in a recovery." (announcement). This is true, but... Facebook can just initiate a recovery at any time. Sure, there'll be a paper trail, but if it means a compromised tarball gets uploaded to a project, that stops mattering. Especially if they were compelled to not disclose such actions...

  • Potshot: why does the announcement call out SQL injection specifically as something they're worried about in designing a protocol that has nothing to do with SQL? Was there a security buzzword quota?

  • "At no point does GitHub exchange any personally identifiable information with Facebook. Likewise, Facebook does not exchange any personally identifiable data with us." As far as I can tell, this is either a grave misunderstanding or an outright lie. The identity information that they claim worry over is passed around; your identity is verified by Facebook to GitHub.

  • Putting a "privacy-policy" field in your wire protocol does not fix this, and is also perplexing. Is the browser supposed to do something with this?

  • There was explicitly "No public review". You ask us to trust your crypto, but won't let us help design it. Supposedly, it was created by "someone well versed in this area". The specification says Brad Hill of Facebook, who I am sure is a great person, but who has according to the internet never implemented anything cryptographic before. The announcement also states that it was "reviewed by numerous experts in the field". Who are unnamed, and do not appear on said specification.

  • Are they going to make an RFC out of this? It's formatted like they are, but I can't imagine this would get past the IETF through anything other than force of personality. Maybe they just wanted to experience the joys (no) of writing SGML for xml2rfc? If so, I will tell you for free: it is real bad. Don't write XML.

  • So why was there no public review? Since they won't say, I can think of only two reasons. The first is that they honestly didn't know any better, which is somewhat sad, but that's the best I can do in order to keep good faith. Because if I don't assume good faith, we're left with them not wanting outside input. Concerns like: making sure no one else does it first, making sure no one tells you to use existing protocols and technologies before you roll it into production, making sure their implementation is the final authority, hiding artificial weaknesses (Dual EC-DRBG anyone?), and so on. And I have to assume good faith.

  • "GitHub values the security of our users' accounts" I don't believe you. Well, that's not quite true: I think you value maintaining users, and account security is taking a backseat to slick web interfaces and Not-Invented-Here.

  • "We're also planning to support reciprocal Facebook account recovery in the near future." Utterly terrifying.

Final thoughts

Okay. I had queued a post about how everyone keeps making free-software GitHub clones, and what a monumental waste that is, but that's going to have to wait. I haven't changed my mind on it, but I can see much more clearly what people are worried about with a proprietary provider, and it needs to be edited for tone. (Then again, what do I write that doesn't?)

So, this is I guess an open invitation. If your identity model would be improved by using Kerberos, by having a CA, the use of one- and two-way trusts, single sign on, and the like: I will ssh into your box and run `ipa-server-install` for you.

And then you too can have those things. And not have to write more code or worry about doing wrong any of the things that went wrong here. And if you encounter bugs, I will (be part of the group that will) fix them for you. Because that's how software project ecosystems work, done right. Not a vacuum.

Mapping from iSCSI session to device.

Posted by Adam Young on February 03, 2017 07:11 PM

I was monitoring my system, so I knew that /dev/sdb was the new iSCSI target I was trying to turn into a file system. To prove it, I ran:

iscsiadm -m session --print=3

And saw:

...
		scsi4 Channel 00 Id 0 Lun: 0
		scsi4 Channel 00 Id 0 Lun: 1
			Attached scsi disk sdb		State: running

But what did that do? Using strace helped me sort it out a little. I worked backwards.

stat("/sys/subsystem/scsi/devices/4:0:0:1", 0x7ffc3aab0a50) = -1 ENOENT (No such file or directory)
stat("/sys/bus/scsi/devices/4:0:0:1", {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
lstat("/sys/bus/scsi/devices/4:0:0:1/state", {st_mode=S_IFREG|0644, st_size=4096, ...}) = 0
open("/sys/bus/scsi/devices/4:0:0:1/state", O_RDONLY) = 3
read(3, "running\n", 256)               = 8
close(3)                                = 0
write(1, "\t\t\tAttached scsi disk sdb\t\tState"..., 42			Attached scsi disk sdb		State: running

But looking in the file /sys/bus/scsi/devices/4:0:0:1/state I saw only the word "running", so it must have found the device earlier.

Looking at the complete list of files opened is illuminating, although too long to list here. It starts by enumerating through /sys/class/iscsi_session and hits
/sys/class/iscsi_session/session3. Under /sys/class/iscsi_session/session3/device/target4:0:0 I found:

$ ls -la /sys/class/iscsi_session/session3/device/target4:0:0  
total 0
drwxr-xr-x. 5 root root    0 Feb  3 13:04 .
drwxr-xr-x. 6 root root    0 Feb  3 13:04 ..
drwxr-xr-x. 6 root root    0 Feb  3 13:04 4:0:0:0
drwxr-xr-x. 8 root root    0 Feb  3 13:04 4:0:0:1
drwxr-xr-x. 2 root root    0 Feb  3 14:03 power
lrwxrwxrwx. 1 root root    0 Feb  3 13:04 subsystem -> ../../../../../bus/scsi
-rw-r--r--. 1 root root 4096 Feb  3 13:04 uevent

And, following the symlink:

[ansible@dialga ~]$ ls -la /sys/bus/scsi/devices/4:0:0:1/block
total 0
drwxr-xr-x. 3 root root 0 Feb  3 13:35 .
drwxr-xr-x. 8 root root 0 Feb  3 13:04 ..
drwxr-xr-x. 8 root root 0 Feb  3 13:35 sdb

Notice that /sys/bus/scsi/devices/4:0:0:0/ does not have a block subdirectory.

There is probably more to the link than this, but it should be enough to connect the dots; a scripted version of that forward walk is sketched below. Not sure if there is a way to reverse it short of listing the devices under /sys/bus/scsi/devices/ .
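A rough way to script the forward walk, based purely on the paths above (a sketch; the sysfs layout is what my Fedora box shows, so treat the globs as assumptions):

for session in /sys/class/iscsi_session/session*; do
    # each LUN that is a block device exposes its name under .../block/<name>
    for blk in "$session"/device/target*/*/block/*; do
        [ -e "$blk" ] || continue
        echo "$(basename "$session") -> /dev/$(basename "$blk")"
    done
done

For the reverse direction, following the /sys/block/sdb/device symlink back up the tree appears to land on the same 4:0:0:1 entry.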

Installing and Running Ansible on Fedora 25

Posted by Adam Young on February 03, 2017 11:30 AM

I have two machines beyond the Laptop on which I am currently typing this article. I want to manage them from my workstation using Ansible. All three machines are running Fedora 25 Workstation.

The two nodes are called dialga and munchlax. You can guess my kids’ interests. Here is the inventory file (inventory.ini) listing them:

[all]
dialga 
munchlax

Make sure basic Ansible functionality works:

$ ansible -i $PWD/inventory.ini all -m ping
munchlax | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
dialga | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

Some config changes I have to make:

Create a new user and group, both called ansible, on this machine. Change the sudoers file to let the ansible user perform sudo operations without supplying a password. This is a security risk in general, but I will be gating all access via my desktop machine and key-based auth only. I can use my ~ayoung/.ssh directory to pre-populate the ansible user's .ssh directory, as it only has public keys in it.
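Roughly what that looks like on a node (a sketch; the sudoers.d file name is my own choice, and visudo -cf is just there to catch typos):

# create the ansible user; on Fedora, useradd also creates a matching ansible group
sudo useradd ansible
# allow passwordless sudo for that user via a drop-in file
echo 'ansible ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/ansible
sudo chmod 0440 /etc/sudoers.d/ansible
# sanity-check the syntax before logging out
sudo visudo -cf /etc/sudoers.d/ansible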

A cloud-init install on OpenStack would have set this for me, but since we are talking bare metal here, and no Ironic/PXE, I include this to document what was done manually.

$ sudo cp -a ~ayoung/.ssh/ ~ansible/
$ sudo chown -R ansible:ansible ~ansible/.ssh

Get rid of GSSAPI auth for SSH. I am not using it, and, since I have a TGT for my work account, it is slowing down all traffic. Ideally, I would leave GSSAPI enabled, but prioritize key-based auth higher.

$ sudo grep GSSAPI /etc/ssh/sshd_config
# GSSAPI options
#GSSAPIAuthentication yes
GSSAPIAuthentication no
GSSAPICleanupCredentials no
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no
#GSSAPIEnablek5users no

Make sure to restart sshd:

sudo systemctl restart sshd

Ensure that the ansible, python2, and dnf_python2 RPMs are installed. Ansible now runs with local modules. This speeds things up, but requires the nodes to have pre-installed code, which I don’t really like. Don’t want to have to update ansible at the start of all playbooks. I am fairly certain that these can all be installed during the initial install of the machine if you chose the additional ansible dnf group.
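Roughly what that boils down to (a sketch; the package names are my best guess for Fedora 25, so double-check them against your release):

# ansible plus the python2 DNF bindings the dnf module needs;
# the python2 interpreter itself already ships with a Workstation install
sudo dnf install -y ansible python2-dnf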

Episode 31 - XML is never the solution

Posted by Open Source Security Podcast on February 01, 2017 01:37 AM
Josh and Kurt discuss door locks, Ikea, chair testing sounds, electrical safety, autonomous cars, and XML vs JSON.

Download Episode

Show Notes


Barely Functional Keystone Deployment with Docker

Posted by Adam Young on January 31, 2017 04:10 PM

My eventual goal is to deploy Keystone using Kubernetes. However, I want to understand things from the lowest level on up. Since Kubernetes will be driving Docker for my deployment, I wanted to get things working for a single node Docker deployment before I move on to Kubernetes. As such, you’ll notice I took a few short cuts. Mostly, these involve configuration changes. Since I will need to use Kubernetes for deployment and configuration, I’ll postpone doing it right until I get to that layer. With that caveat, let’s begin.

My last post showed how to set up a database and talk to it from a separate container. After I got that working, it stopped, so I am going to back off that a bit, and just focus on the other steps. I do know that the issue was in the setup of the separate Bridge, as when I changed to using the default Bridge network, everything worked fine.

Of the many things I skimped on, the most notable is that I am not doing Fernet tokens, nor am I configuring the Credentials key. These both require outside coordination to have values synchronized between Keystone servers. You would not want the secrets built directly into the Keystone container.

To configure the Keystone database system, I use a single-shot container. This can be thought of as the Command design pattern: package up everything you need to perform an action, and send it to the remote system for execution. In this case, the Dockerfile pulls together the dependencies and calls a shell script to do the configuration. Here is the Dockerfile:

FROM index.docker.io/centos:7
MAINTAINER Adam Young <adam>

RUN yum install -y centos-release-openstack-newton &&\
    yum update -y &&\
    yum -y install openstack-keystone mariadb openstack-utils  &&\
    yum -y clean all

COPY configure_keystone.sh /
COPY keystone-configure.sql /
CMD /configure_keystone.sh

The shell script initializes the database using an external SQL file. I use the echo statements for logging/debugging. Passwords are hard coded, as are host names. These should be extracted out to environment variables in the next iteration.

#!/bin/bash

echo -n Database 
mysql -h 172.17.0.2  -P3306 -uroot --password=my-secret-pw < keystone-configure.sql
echo " [COMPLETE]"

echo -n "configuration "
openstack-config  --set  /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@172.17.0.2/keystone
DATABASE_CONN=`openstack-config  --get  /etc/keystone/keystone.conf database connection `
echo $DATABASE_CONN

echo " [COMPLETE]"

echo -n "db-sync "
keystone-manage db_sync
echo " [COMPLETE]"

echo -n "bootstrap "
keystone-manage bootstrap --bootstrap-password=FreeIPA4All
echo " [COMPLETE]"

The SQL file merely creates the Keystone database and initializes access.

-- Don't drop database keystone;
create database keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone'; 
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone'; 
exit

By not dropping the Keystone database, we keep from destroying data if we accidentally run this container twice. It means that, for iterative development, I have to manually delete the database prior to a run, but that is easily done from the command line. I can check the state of the database using:

docker run -it     -e MYSQL_ROOT_PASSWORD="my-secret-pw"    --rm mariadb sh    -c 'exec mysql -h 172.17.0.2 -P3306 -uroot -p"$MYSQL_ROOT_PASSWORD"  keystone'
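For the delete between iterative runs, that same one-shot client just needs an extra -e flag (a sketch, reusing the address and root password from above):

docker run -it -e MYSQL_ROOT_PASSWORD="my-secret-pw" --rm mariadb sh -c 'exec mysql -h 172.17.0.2 -P3306 -uroot -p"$MYSQL_ROOT_PASSWORD" -e "DROP DATABASE keystone;"'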

With minor variations on that (like the drop sketched above), I can delete the database. Once I can connect and confirm that the database is correctly initialized, I can launch the Keystone container. Here is the Dockerfile:

FROM index.docker.io/centos:7
MAINTAINER Adam Young <adam>

RUN yum install -y centos-release-openstack-newton &&\
    yum update -y &&\
    yum -y install openstack-keystone httpd mariadb openstack-utils mod_wsgi &&\
    yum -y clean all

RUN openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@172.17.0.2/keystone &&\
    cp /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d

ADD run-httpd.sh /run-httpd.sh
RUN chmod -v +x /run-httpd.sh

CMD ["/run-httpd.sh"]

This makes use of the resources inside the openstack-keystone RPM to configure the Apache HTTPD instance.

The run-httpd.sh script is lifted from the HTTPD container.

#!/bin/bash
# Copied from
#https://github.com/CentOS/CentOS-Dockerfiles/blob/master/httpd/centos7/run-httpd.sh
# Make sure we're not confused by old, incompletely-shutdown httpd
# context after restarting the container.  httpd won't start correctly
# if it thinks it is already running.
rm -rf /run/httpd/* /tmp/httpd*
exec /usr/sbin/apachectl -DFOREGROUND

This should probably be done as an additional layer on top of the CentOS-Dockerfiles version.

I can then run the Keystone container using:

docker run -it -d  --name openstack-keystone    openstack-keystone

Both long-running containers are up, as docker ps shows below; the configuration container stopped running after it completed its tasks.

$ docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS               NAMES
6f6afa855cae        openstack-keystone   "/run-httpd.sh"          29 minutes ago      Up 29 minutes                           openstack-keystone
1127467c0b2b        mariadb:latest       "docker-entrypoint.sh"   17 hours ago        Up 17 hours         3306/tcp            some-mariadb

Confirm access:

$ curl http://172.17.0.3:35357/v3
{"version": {"status": "stable", "updated": "2016-10-06T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.7", "links": [{"href": "http://172.17.0.3:35357/v3/", "rel": "self"}]}}
$ curl http://172.17.0.3:5000/v3
{"version": {"status": "stable", "updated": "2016-10-06T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.7", "links": [{"href": "http://172.17.0.3:5000/v3/", "rel": "self"}]}}

Everything you know about security is wrong, stop protecting your empire!

Posted by Josh Bressers on January 30, 2017 12:01 AM
Last week I kept running into old school people trying to justify why something that made sense in the past still makes sense today. Usually I ignore these sorts of statements, but I feel like I’m seeing them often enough it’s time to write something up. We’re in the middle of disruptive change. That means that the way security used to work doesn’t work anymore (some people think it does) and in the near future, it won’t work at all. In some instances it will actually be harmful, if it’s not already.


The real reason I’m writing this up is because there are really two types of leaders. Those who lead to inspire change, and those who build empires. For empire builders, change is their enemy, they don’t welcome the new disrupted future. Here’s a list of the four things I ran into this week that gave me heartburn.


  • You need AV
  • You have to give up usability for security
  • Lock it all down then slowly open things up
  • Firewall everything


Let’s start with AV. A long time ago everyone installed an antivirus application. It’s just what you did, sort of like taking your vitamins. Most people can’t say why, they just know if they didn't do this everyone would think they're weird. Here’s the question for you to think about though: How many times did your AV actually catch something? I bet the answer is very very low, like number of times you’ve seen bigfoot low. And how many times have you seen AV not stop malware? Probably more times than you’ve seen bigfoot. Today malware is big business, they likely outspend the AV companies on R&D. You probably have some control in that phone book sized policy guide that says you need AV. That control is quite literally wasting your time and money. It would be in your best interest to get it changed.


Usability vs security is one of my favorite topics these days. Security lost. It’s not that usability won, it’s that there was never really a battle. Many of us security types don’t realize that though. We believe that there is some eternal struggle between security and usability where we will make reasonable and sound tradeoffs between improving the security of a system and adding a text field here and an extra button there. What really happened was the designers asked to use the bathroom and snuck out through the window. We’re waiting for them to come back and discuss where to add in all our great ideas on security.


Another fan favorite is the best way to improve network security is to lock everything down then start to open it up slowly as devices try to get out. See the above conversation about usability. If you do this, people just work around you. They’ll use their own devices with network access, or just work from home. I’ve seen employees using the open wifi of the coffee shop downstairs. Don’t lock things down, solve problems that matter. If you think this is a neat idea, you’re probably the single biggest security threat your organization has today, so at least identifying the problem won’t take long.


And lastly let’s talk about the old trusty firewall. Firewalls are the friend who shows up to help you move, drinks all your beer instead of helping, then tells you they helped because now you have less stuff to move. I won’t say they have no value, they’re just not great security features anymore. Most network traffic is encrypted (or should be), and the users have their own phones and tablets connecting to who knows what network. Firewalls only work if you can trust your network, you can’t trust your network. Do keep them at the edge though. Zero trust networking doesn’t mean you should purposely build a hostile network.

We’ll leave it there for now. I would encourage you to leave a comment below or tell me how wrong I am on Twitter. I’d love to keep this conversation going. We’re in the middle of a lot of change. I won’t say I’m totally right, but I am trying really hard to understand where things are going, or need to go in some instances. If my silly ramblings above have put you into a murderous rage, you probably need to rethink some life choices, best to do that away from Twitter. I suspect this will be a future podcast topic at some point, these are indeed interesting times.

How wrong am I? Let me know: @joshbressers on Twitter.



Connecting to MariaDB with a network configuration Docker

Posted by Adam Young on January 28, 2017 02:07 AM

Since the “link” directive has been deprecated, I was wondering how to connect to a mariadb instance on a non-default network when both the database and the monitor are running in separate networks. Here is what I got:

First I made sure I could get the link method to work as described on the docker Mariadb site.

Create the network

docker network create --driver bridge maria-bridge

create the database on that network

docker run --network=maria-bridge --name some-mariadb -e \
      MYSQL_ROOT_PASSWORD=my-secret-pw -d mariadb:latest

create the monitor also on that network. Note that none of the env vars that --link would have set are set here. For now, just hard code them:

docker run -it --network maria-bridge \
   -e MYSQL_ROOT_PASSWORD="my-secret-pw" \
   --rm mariadb sh \
   -c 'exec mysql -hsome-mariadb -P3306 -uroot -p"$MYSQL_ROOT_PASSWORD"'
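An aside, not required for the steps above: if you would rather look up the address the database landed on (instead of relying on the name resolution the user-defined bridge gives you), docker can report it:

docker network inspect maria-bridge
# or just the one container's address
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' some-mariadb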

Running GUI Applications in a container with SELinux

Posted by Adam Young on January 26, 2017 05:13 PM

As I work more and more with containers, I find myself wanting to make more use of them to segregate running third party apps. Taking the lead of Jessie Frazelle I figured I would try to run the Minecraft client in a Container on Fedora 25. As expected, it was a learning experience, but I got it. Here’s the summary:

I started with Wakaru Himura’s docker-minecraft-client Dockerfile, which was written for Ubuntu. When it didn’t work for me, I started trying for a Fedora based one. It took a couple iterations.

The error indicated that the container was having trouble connecting to, or communicating with, the Unix domain socket used by the X server. It was returned by the Java code, and here is an abbreviated version of the stack trace.

You can customize the options to run it in the run.sh and rebuilding the image
No protocol specified
Exception in thread "main" java.lang.InternalError: Can't connect to X11 window server using ':1' as the value of the DISPLAY variable.
	at sun.awt.X11GraphicsEnvironment.initDisplay(Native Method)
	at sun.awt.X11GraphicsEnvironment.access$200(X11GraphicsEnvironment.java:65)
	at sun.awt.X11GraphicsEnvironment$1.run(X11GraphicsEnvironment.java:110)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.awt.X11GraphicsEnvironment.(X11GraphicsEnvironment.java:74)
        . . .
	at net.minecraft.bootstrap.Bootstrap.main(Bootstrap.java:378)

Wakaru’s version uses the Oracle JDK. I’ve been running the OpenJDK from Fedora with Minecraft with few problems, so I simplified the Java install.

I ended up doing a lot of trial and error to get the X authorization code to find the right information from the parent. One thing I tried was the creation of a user inside the container, to mirror my account outside the container, using Wakaru’s code, but modifying it for my personal account.

FROM index.docker.io/fedora:25
MAINTAINER Adam Young <adam@younglogic.com>

RUN dnf -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel java-1.8.0-openjdk-headless
RUN dnf -y  install strace  xorg-x11-xauth
RUN  dnf -y clean all
COPY Minecraft.jar ./

RUN export uid=14370 gid=14370 && \
    mkdir -p /home/ayoung && \
    echo "ayoung:x:${uid}:${gid}:ayoung,,,:/home/ayoung:/bin/bash" >> /etc/passwd && \
    echo "ayoung:x:${uid}:" >> /etc/group  && \
    chown ${uid}:${gid} -R /home/ayoung


CMD XAUTHORITY=~/.Xauthority  /usr/bin/java -jar ./Minecraft.jar

Running that still gave the error from the X server connection. The audit log shows that SELinux was denying the connection.

$  sudo tail -f  /var/log/audit/audit.log | grep avc
type=AVC msg=audit(1485401508.386:1616): avc:  denied  { connectto } for  pid=18405 comm="java" path=002F746D702F2E5831312D756E69782F5831 scontext=system_u:system_r:container_t:s0:c151,c769 tcontext=unconfined_u:unconfined_r:xserver_t:s0-s0:c0.c1023 tclass=unix_stream_socket permissive=0

Switch SELinux to permissive mode (for now) to see if we can get a success.

sudo setenforce permissive

And run like this

 docker run -ti --rm -e DISPLAY --user ayoung:ayoung -v /run/user/14370/gdm/Xauthority:/run/user/14370/gdm/Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix -v /home/ayoung/.Xauthority:/home/ayoung/.Xauthority --net=host minecraft

Et voilà!:

OK. Let’s deal with SELinux. First, re-enable, and confirm.

$ sudo setenforce enforcing
$ getenforce 
Enforcing

Let’s break apart the AVC: Gentoo’s page has a good summary of the pieces:

Log part — Meaning
denied { connectto } — The attempt to connect was denied.
for pid=18405 — The ID of the process that triggered the AVC. Not useful now, as the process has since been terminated.
comm="java" — The command exec’ed by the process that triggered the AVC. This comes from the line run in the container: CMD XAUTHORITY=~/.Xauthority /usr/bin/java -jar ./Minecraft.jar
path=002F746D702F2E5831312D756E69782F5831 — path=/tmp/.X11-unix/X1 in hex. Can be deciphered using:
/bin/python3 -c 'print(bytearray.fromhex("'$1'").decode())'
scontext= — Security context of the process. In parts…
— system_u: The system user, since dockerd is running from systemd
— container_t: The container type. Docker-specific resources have this label
— s0: The process runs at sensitivity level 0
— c151,c769: The process’s MCS categories
tcontext= — The security context of the target Unix domain socket. In parts…
— unconfined_u: Unconfined user. Me, since the X server was started from my own login at a user prompt.
— unconfined_r: Unconfined role.
— xserver_t: X server label, to keep all of X’s resources labeled the same way.
— s0-s0: The target’s sensitivity range (level 0)
— c0.c1023: The target’s MCS category range
tclass=unix_stream_socket — The target class shows it is a Unix domain socket.
permissive=0 — SELinux was not running in permissive mode.

More information on the Levels and contexts is available for those who wish to understand them, but I didn’t need them for this. They are used by other access control tools, and we are not going to bother with it for the Fedora desktop system.

When dealing with SELinux problems, we have a couple tools in the toolkit.

  • We can change the context of the caller
  • We can change the labels
  • we can change the policy

Of the three, changing the policy is most common. We don’t want to break existing policy, so we need a new rule that says that containers can talk to the domain sockets for the X server. That policy looks like this:

(allow container_t xserver_t (unix_stream_socket (connectto)))

Elsewhere, we’ve seen that the connection reads the X rules from the Xauthority file, which I have pointing at ~/.Xauthority, so a second rule makes that part happy. Here is my complete mycontainer.cil file:

(allow container_t xserver_t (unix_stream_socket (connectto)))
(allow container_t user_home_t (dir (read)))

Add that to the system's policy with:

sudo  semodule -i mycontainer.cil
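As an aside, audit2allow can draft roughly the same module straight from the denials. I wrote the CIL by hand here, but something like this should also work (assuming the policycoreutils python utilities are installed; the different module name is just to avoid clobbering the CIL one):

sudo grep container_t /var/log/audit/audit.log | audit2allow -M mycontainer2
sudo semodule -i mycontainer2.pp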

Re-enable SELinux enforcing, run the Docker container again, and it all works.

It took a lot of troubleshooting to get to that point. Special thanks to grift in #selinux for helping with the policy.

Below this are my raw notes and logs, mostly kept for my own historical perspective. There were a few commands I used that I will want to look at again. I have the IRC log and output from more commands below, too.

The file in question is:

$ ls -Z /tmp/.X11-unix/
 system_u:object_r:user_tmp_t:s0 X0 unconfined_u:object_r:user_tmp_t:s0 X1

And we’ll relabel X1.  Since  this is a test, we’ll do a temporary relabel.  Worst case, everything locks up and we have to reboot.

First I better save this post.  I might get locked out….

$ sudo chcon -t container_t /tmp/.X11-unix/X1
[sudo] password for ayoung: 
chcon: failed to change context of '/tmp/.X11-unix/X1' to ‘unconfined_u:object_r:container_t:s0’: Permission denied

Hmmm.  In the Audit log:

type=AVC msg=audit(1485434178.292:1753): avc: denied { relabelto } for pid=28244 comm="chcon" name="X1" dev="tmpfs" ino=47315 scontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:container_t:s0 tclass=sock_file permissive=0

To test, I am going to disable SELinux, relabel, then re-enable it.

 

$ chcon -t container_t /tmp/.X11-unix/X1
$ sudo ls -Z /tmp/.X11-unix/
     system_u:object_r:user_tmp_t:s0 X0  unconfined_u:object_r:container_t:s0 X1
$ sudo setenforce enforcing
$ docker run -ti --rm -e DISPLAY --user ayoung:ayoung -v /run/user/14370/gdm/Xauthority:/run/user/14370/gdm/Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix -v /home/ayoung/.Xauthority:/home/ayoung/.Xauthority --net=host minecraft
Exception in thread "main" java.awt.AWTError: Can't connect to X11 window server using ':1' as the value of the DISPLAY variable.

and in the audit log:

type=AVC msg=audit(1485434417.111:1789): avc: denied { connectto } for pid=28511 comm="java" path=002F746D702F2E5831312D756E69782F5831 scontext=system_u:system_r:container_t:s0:c124,c220 tcontext=unconfined_u:unconfined_r:xserver_t:s0-s0:c0.c1023 tclass=unix_stream_socket permissive=0

It still has the xserver_t label. Looking at it via ls:

$ sudo ls -Z /tmp/.X11-unix/
ls: cannot access '/tmp/.X11-unix/X1': Permission denied
system_u:object_r:user_tmp_t:s0 X0 (null) X1

Oops. Probably got lucky there that we didn’t crash. Let’s reset it:

$ sudo setenforce permissive
$ sudo ls -Z /tmp/.X11-unix/
     system_u:object_r:user_tmp_t:s0 X0  unconfined_u:object_r:container_t:s0 X1
$ chcon -t user_tmp_t /tmp/.X11-unix/X1
$ sudo ls -Z /tmp/.X11-unix/
    system_u:object_r:user_tmp_t:s0 X0	unconfined_u:object_r:user_tmp_t:s0 X1
$ sudo setenforce enforcing
$ sudo ls -Z /tmp/.X11-unix/
    system_u:object_r:user_tmp_t:s0 X0	unconfined_u:object_r:user_tmp_t:s0 X1

Since we can’t do ls, it is safe to assume that user launched X processes will also not be able to connect to the socket. But, since the labels don’t match, I am going to assume, also, that we are looking at the wrong file. The target that was denied had a label of xserver_t, and this had user_tmp_t.

Perhaps it was something in /run/user/14370/gdm/Xauthority ? Let’s look.

$ ls -Z /run/user/14370/gdm/Xauthority
unconfined_u:object_r:user_tmp_t:s0 /run/user/14370/gdm/Xauthority

Nope. That path=002F746D702F2E5831312D756E69782F5831 must be pointing somewhere else. Let’s see if it is an inode.

$  find / -inum 002F746D702F2E5831312D756E69782F5831
find: invalid argument `002F746D702F2E5831312D756E69782F5831' to `-inum'

Nope. What is it?

Here is a hint:

$ sudo netstat -xa | grep X11-unix
unix  2      [ ACC ]     STREAM     LISTENING     29431    @/tmp/.X11-unix/X0
unix  2      [ ACC ]     STREAM     LISTENING     47313    @/tmp/.X11-unix/X1
unix  2      [ ACC ]     STREAM     LISTENING     47314    /tmp/.X11-unix/X1
unix  2      [ ACC ]     STREAM     LISTENING     29432    /tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     1612587  @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     40628    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     42628    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     37201    @/tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     30693    @/tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     395122   @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     38481    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     1614636  @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     38798    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     1714036  @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     353557   @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     30688    @/tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     42436    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     36697    @/tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     42430    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     42363    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     37936    @/tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     3713614  @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     55540    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     48315    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     38629    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     3204892  @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     49616    @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     1612589  @/tmp/.X11-unix/X1
unix  3      [ ]         STREAM     CONNECTED     1709056  @/tmp/.X11-unix/X1

Resorting to IRC, got some help in #selinux:

grift: path=/tmp/.X11-unix/X1
       /bin/python3 -c 'print(bytearray.fromhex("'$1'").decode())'
       thats now selinux deals with stream connect
        the "connectto" check is done on the process listening on the socket
        basically stream connect/dgram sendto is a two step thing
        1. step on connectto sendto process listening on the socket respectively
        2. step two writing the actual sock file
        allow container_t xserver_t:unix_stream_socket connectto;
        allow container_t user_tmp_t:sock_file write
        you might want to run semodule -DB before you try it
        fedora is kind of quick to hide events

Here is some of the output from the audit log:

    type=AVC msg=audit(1485447098.800:1983): avc:  denied  { connectto } for  pid=1455 comm="java" path=002F746D702F2E5831312D756E69782F5831 scontext=system_u:system_r:container_t:s0:c250,c486 tcontext=unconfined_u:unconfined_r:xserver_t:s0-s0:c0.c1023 tclass=unix_stream_socket permissive=1
    type=AVC msg=audit(1485447098.800:1984): avc:  denied  { read } for  pid=1455 comm="java" name=".Xauthority" dev="dm-1" ino=393222 scontext=system_u:system_r:container_t:s0:c250,c486 tcontext=system_u:object_r:user_home_t:s0 tclass=dir permissive=1

The convo continued:

grift: echo "(allow container_t xserver_t (unix_stream_socket (connectto)))" > mycontainer.cil
       theres another one that might be related (or not)
       where "java" lists ~/.Xauthority
       echo " (allow container_t user_home_t (dir (read)))" >> mycontainer.cil && semodule -i mycontainer.cil
       run semodule -B to hide it again
       if everything works now for you

Episode 30 - I'm not an expert but I've been yelled at by experts

Posted by Open Source Security Podcast on January 26, 2017 02:02 PM
Josh and Kurt discuss security automation. Machine learning, AI, and a bunch of moral and philosophical boundaries that new future will bring. You've been warned.

Download Episode

Show Notes


Reimagining the Saxophone

Posted by Adam Young on January 25, 2017 06:39 PM

During a trip to Manhattan last winter (Jan 2016 or so) I heard some buskers in Union Square station making sounds that were at once familiar and new.

This is not my video, but this is roughly where they were playing, and this is how Too Many Zooz sounded.

(Video: https://www.youtube.com/watch?v=jMe6Y8GDVEI)

My whole family stayed and watched for a while.  Entranced.

It turns out, there is a lot of new style music played with old instruments.  Gogol Bordello, Golem, the Pogues, the Dropkick Murphys and many others have done a wonderful job of merging Klezmer and Irish music with Punk, and Post Modern Jukebox has managed to make modern Pop work in older styles. What Too Many Zooz is doing is applying the same ethos to techno/trance/dance.  They call it Brasscore.

I’d call it Jazz.

The wonderful thing about the Tenor Sax is that it sits low enough in the range to cover the low end of the male voice and, with practice, you can get a huge range above.  Sometimes, it needs a little help, though, and you can see Moon Hooch get creative to get notes below the low B flat.

(Video: https://www.youtube.com/watch?v=wwBhxBBa7tE)

Both groups use effects that were novelties in older music.  Probably the most notable is the use of overtones, that rising squeal that sounds almost electronic.  It turns out there are a whole body of sounds that the dedicated wind player can get from an instrument.  When put together, and played creatively, it can give the impression of a small ensemble, even when just a single Sax player is producing the music.

Derek Brown has put them together masterfully:

(Video: https://www.youtube.com/watch?v=8LEuTnykXp4)

He has a whole body of tutorials on his site that explains the various techniques.  I’ve been following a few of them.  Right now the two things I am working on are Slap Tonguing and Overtones.  For the Overtones work, I am following the manual of the master:

 

It has been a lot of fun work.  I can hit the second octave B flat pretty consistently, and the C and C sharp with some work.  I can cheat up to the D using the low B flat fingering, by starting with the high D key open.

In doing so, I’ve felt my sound get stronger, especially at the top range.  I’m not where I want to be there yet, though.

The slap tonguing is, in some ways, harder to learn, as all of the technique is internal to the mouth.  I’ve followed a few different tutorials, but the instructions here have proven to be the most helpful.

I’ve also gone through this sequence many times.

Not there yet, but every now and then, I get the sound.

I’ve really been motivated to practice and master these techniques.  I’ll post video when I feel I have them down sufficiently.

 

Return on Risk Investment

Posted by Josh Bressers on January 24, 2017 08:25 PM
I found myself in a discussion earlier this week that worked its way into return on investment topics. Of course nobody could really agree on what the return was which is sort of how these conversations often work out. It’s really hard to decide what the return on investment is for security features and products. It can be hard to even determine cost sometimes, which should be the easy number to figure out.

All this talk got me thinking about something I’m going to call risk investment. The idea here is that you have a risk, which we’ll think about as the cost. You have an investment of some sort, it could be a product, training, maybe staff. This investment in theory reduces your risk in some measurable way. The reduction of the risk is the return on risk investment. We like to think about these things in the context of money, but risk doesn’t exactly work that way. Risk isn’t something that can often be measured easily. Even incredibly risky behaviors can work out fine, and playing it safe can end horribly. Rather than try to equate everything to money, what if we ignored that for the moment and just worried about risk.

 First, how do you measure your risk? There isn’t a nice answer for this. There are plenty of security frameworks you can use. There are plenty of methodologies that exist, threat modeling, attack surface analysis, pen test reports, architecture reviews, automated scanning of products and infrastructure. There’s no single good answer to this question. I can’t tell you what your risk profile is, you have to decide how you’re going to measure this. What are you protecting? If it’s some sort of regulated data, there will be substantial cost in losing it, so this risk measurement is easy. It’s less obvious if you’re not operating in an environment that has direct cost to having an incident. It’s even possible you have systems and applications that pose zero risk (yeah, I said it).

 Assuming we have a way to determine risk, now we wonder how do you measure the return on controlling risk? This is possibly more tricky than deciding on how to measure your risk. You can’t prove a negative in many instances, there’s no way to say your investment is preventing something from happening. Rather than measure how many times you didn’t get hacked, the right way to think about this is if you were doing nothing, how would you measure your level of risk? We can refer back to our risk measurement method for that. Now we think about where we do have certain protections in place, what will an incident look like? How much less trouble will there be? If you can’t answer this you’re probably in trouble. This is the important data point though. When there is an incident, how do you think your counter measures will help mitigate damage? What was your investment in the risk?

 And now this brings us to our Return on Risk Investment, or RORI as I’ll call it, because I can and who doesn’t like acronyms? Here’s the thing to think about if you’re a security leader. If you have risk, which we all do, you must find some way to measure it. If you can’t measure something you don’t understand it. If you can’t measure your risk, you don’t understand your risk. Once you have your method to understand what’s happening, make note of your risk measurement without any sort of security measures in place, your risk with ideal (not perfect, perfect doesn't exist) measures in place, and your risk with existing measures in place. That will give you an idea of how effective what you’re doing is. Here’s the thing to watch for. If your existing measures are close to the risk level for no measures, that’s not a positive return. Those are things you either should fix or stop doing. Sometimes it’s OK to stop doing something that doesn’t really work. Security theater is real, it doesn’t work, and it wastes money. The trick is to find a balance that can show measurable risk reduction without breaking the bank.
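To make that comparison concrete, here is a back-of-the-envelope way to score it (my own sketch, not part of any framework): RORI ≈ (risk with no measures - risk with existing measures) / (risk with no measures - risk with ideal measures). If doing nothing scores 80 on whatever scale your measurement method produces, ideal measures score 20, and your existing measures score 70, that works out to (80 - 70) / (80 - 20), or about 0.17: your current investment is buying back roughly a sixth of the risk you could plausibly remove. A score near zero is exactly the "close to no measures" warning above; a score near one means you're about as covered as you're going to get.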


How do you measure risk? Let me know: @joshbressers on Twitter.


Running SAS University Edition on Fedora 25

Posted by Adam Young on January 24, 2017 03:19 AM

My Wife is a statistician. Over the course of her career, she’s done a lot of work coding in SAS, and, due to the expense of licensing, I’ve never been able to run that code myself. So, when I heard about SAS having a free version, I figured I would download it and have a look, maybe see if I could run something.

Like many companies, SAS went the route of shipping a virtual appliance. They chose to use VirtualBox as the virtualization platform. However, when I tried to install and run the VM in VirtualBox, I found that the build assumptions of the mechanism used to build the VirtualBox-specific module for the Linux kernel were not met, and the VM would not run.

Instead of trying to fix that situation, I investigated the possibility of running the virtual appliance via libvirt on my Fedora system's already installed and configured KVM setup. Turns out it was pretty simple.

To start I went through the registration and download process from here. Once I had a login, I was able to download a file called unvbasicvapp__9411008__ova__en__sp0__1.ova.

What is an ova file? It turns out it is an uncompressed tar file.

$ tar -xf unvbasicvapp__9411008__ova__en__sp0__1.ova
$ ls
SAS_University_Edition.mf  SAS_University_Edition.ovf   SAS_University_Edition.vmdk  unvbasicvapp__9411008__ova__en__sp0__1.ova

Now I had to convert the disk image into something that would work for KVM.

$ qemu-img convert -O qcow2 SAS_University_Edition.vmdk SAS_University_Edition.qcow2

Then, I used the virt-manager GUI to import the VM. To be sure I met the constraints, I looked inside the SAS_University_Edition.ovf file. It turns out they ship a pretty modest VM: 1024 MB of memory and 1 virtual CPU. These are pretty easy constraints to meet, and I might actually up the amount of memory or CPUs in the VM in the future depending on the size of the data sets I end up playing around with. However, for now, this is enough to make things work.
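If you would rather skip the GUI steps that follow, a virt-install one-liner along these lines should do the same import (a sketch; the generic os-variant and the default libvirt network are assumptions on my part):

virt-install --name sas-university-edition --memory 1024 --vcpus 1 \
    --disk path=SAS_University_Edition.qcow2,format=qcow2 \
    --import --os-variant generic --network default --noautoconsole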

Add a new VM from the file menu.

Import the existing image

Use the Browse Local button to browse to the directory where you ran the qemu-img convert command above.

Complete the rest of the VM creation. Defaults should suffice. Run the VM inside VM Manager.

Once the Boot process has completed, you should get enough information from the console to connect to the web UI.

Hitting the Web UI from a browser shows the landing screen.

Click Start… and start coding

Hello, World.

 

Episode 29 - The Security of Rogue One

Posted by Open Source Security Podcast on January 22, 2017 11:00 PM
Josh and Kurt discuss the security of the movie Rogue One! Spoiler: Security in the Star Wars universe is worse than security in our universe.

Download Episode
<iframe frameborder="no" height="150" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/303899056&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;visual=true" width="100%"></iframe>

Show Notes


Mechanical Computer


Episode 28 - RSA Conference 2017

Posted by Open Source Security Podcast on January 19, 2017 02:00 PM
Josh and Kurt discuss their involvement in the upcoming 2017 RSA conference: Open Source, CVEs, and Open Source CVE. Of course IoT and encryption manage to come up as topics.

Download Episode
<iframe frameborder="no" height="150" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/303432626&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;visual=true" width="100%"></iframe>

Show Notes


What does security and USB-C have in common?

Posted by Josh Bressers on January 16, 2017 06:39 PM
I've decided to create yet another security analogy! You can't tell, but I'm very excited to do this. One of my long-standing complaints about security is that there are basically no good analogies that make sense. We always try to talk about auto safety, or food safety, or maybe building security, or even pollution. There's always some existing real-world scenario we try to warp and twist so we can tell a security story that makes sense. So far they've all failed. The analogy always starts out strong, then something happens that makes everything fall apart. I imagine a big part of this is because security is really new, but it's also really hard to understand. It's just not something humans are good at.

The other day this article was sent to me by @kurtseifried
How Volunteer Reviewers Are Saving The World From Crummy—Even Dangerous—USB-C Cables

The TL;DR: the world of USB-C cables is essentially a modern-day wild west. There's no reliable way to tell which ones are good and which ones are bad, so a handful of people test the cables. It's nothing official; they're basically volunteers doing this in their free time. Their feedback is literally the only real way to decide which cables are good and which are bad. That's sort of crazy if you think about it.

This really got me thinking, though: it has a lot in common with our current security problems. We have a bunch of products and technologies. We don't have a good way to tell if something is good or bad. There are some people who try to help with good information. But fundamentally most of our decisions are made with bad or incomplete data.

In the case of the cables, I see two practical ways out of this. The first is some sort of official testing lab: if something doesn't pass testing, it can't be sold. This makes sense; there are plenty of things on the market today that go through similar testing, and if the product fails, it doesn't get sold. In this case the comparable analogies hold up. Auto safety, electrical safety, HDMI; there are plenty of organizations responsible for ensuring the quality and safety of certain products. The cables would be no different.

The second possibility is to make sure every device assumes bad cables exist and deals with that situation in hardware. This would mean devices smart enough not to draw too much power, or not to provide too much power, and able to recognize a failure mode and disconnect. There are a lot of possibilities here, and to be perfectly honest, no device will be able to do this with 100% accuracy. More importantly, no manufacturer will be willing to add this functionality because it would add cost, probably a lot of cost. It's still a remote possibility though, and for the sake of the analogy, we're going to go with it.

The first example twisted to cybersecurity would mean you need a nice way to measure security. There would be a lab or organization that is capable of doing the testing, then giving some sort of stamp of approval. This has proven to be a really hard thing to do in the past. The few attempts to do this have failed. I suspect it’s possible, just very difficult to do right. Today Mudge is doing some of this with the CITL, but other than that I’m not really aware of anything of substance. It’s a really hard problem to solve, but if anyone can do it right, it’s probably Mudge.

This then leads us to the second possibility which is sort of how things work today. There is a certain expectation that an endpoint will handle certain situations correctly. Each endpoint has to basically assume anything talking to it is broken in some way. All data transferred must be verified. Executables must be signed and safely distributed. The networks the data flows across can’t really be trusted. Any connection to the machine could be an attacker and must be treated as such. This is proving to be very hard though and in the context of the cables, it’s basically the crazy solution. Our current model of security is the crazy solution. I doubt anyone will argue with that.

This analogy certainly isn’t perfect, but the more I think about it the more I like it. I’m sure there are problems thinking about this in such a way, but for the moment, it’s something to think about at least. The goal is to tell a story that normal people can understand so we can justify what we want to do and why. Normal people don’t understand security, but they do understand USB cables.


Do you have a better analogy? Let me know @joshbressers on Twitter.

Episode 27 - Prove to me you are human

Posted by Open Source Security Podcast on January 16, 2017 03:45 PM
Josh and Kurt discuss NTP, authentication issues, network security, airplane security, AI, and Minecraft.

Download Episode
<iframe frameborder="no" height="150" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/302981179&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;visual=true" width="100%"></iframe>

Show Notes


Episode 26 - Tell your sister, Stallman was right

Posted by Open Source Security Podcast on January 12, 2017 02:03 PM
Josh and Kurt end up discussing video game speed running, which is really just hacking. We also end up discussing the pitfalls of the modern world where you don't own your software or services. Stallman was right!

Download Episode
<iframe frameborder="no" height="150" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/302260581&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;visual=true" width="100%"></iframe>

Show Notes





Exploring long JSON files with jq

Posted by Adam Young on January 12, 2017 01:39 PM

The JSON file format is used for marshalling data in lots of different applications. If you are new to an application and don't know the data, it can be hard to visually parse the JSON and understand what you are seeing. The jq command line utility can make it easier to zero in on a section of the file. This is a starting point.

Kubelet, the daemon that runs on a Kubernetes node, has a web API for returning stats. To query it from that node:

curl -k https://localhost:10250/stats/

However, the amount of text returned is several thousand lines.  The first few lines look like this:

$ curl -sk https://localhost:10250/stats/ | head 
{
 "name": "/",
 "subcontainers": [
 {
 "name": "/machine.slice"
 },
 {
 "name": "/system.slice"
 },
 {

Since the JSON top level construct is a dictionary, we can use the function keys from jq to enumerate just the keys.

$ curl -sk https://localhost:10250/stats/ | jq keys
[
 "name",
 "spec",
 "stats",
 "subcontainers"
]

To view the subcontainers, use that key:

$ curl -sk https://localhost:10250/stats/ | jq .subcontainers
[
 {
 "name": "/machine.slice"
 },
 {
 "name": "/system.slice"
 },
 {
 "name": "/user.slice"
 }
]

The stats key returns an array:

$ curl -sk https://localhost:10250/stats/ | jq .stats | head
[
 {
 "timestamp": "2017-01-12T13:23:45.301168504Z",
 "cpu": {
 "usage": {
 "total": 420399104294,
 "per_cpu_usage": [
 202178115170,
 218220989124
 ],

How long is it? Use the length function. Note that jq functions are piped one into the next.

$ curl -sk https://localhost:10250/stats/ | jq ".stats | length"
9

Want to see the keys of an element?  Index it as an array:

$ curl -sk https://localhost:10250/stats/ | jq ".stats[0] | keys"
[
 "cpu",
 "diskio",
 "filesystem",
 "memory",
 "network",
 "task_stats",
 "timestamp"
]

To see a subelement, use the pipe format. For example, to see the timestamp of the first element:

$ curl -sk https://localhost:10250/stats/ | jq ".stats[0] | .timestamp"
"2017-01-12T13:29:16.162797308Z"

To see a value for all elements, remove the index from the array. Again, use the pipe notation:

$ curl -sk https://localhost:10250/stats/ | jq ".stats[] | .timestamp"
"2017-01-12T13:32:13.732338602Z"
"2017-01-12T13:32:25.713656307Z"
"2017-01-12T13:32:43.443936137Z"
"2017-01-12T13:33:02.796007138Z"
"2017-01-12T13:33:14.53537449Z"
"2017-01-12T13:33:32.540031699Z"
"2017-01-12T13:33:42.732536856Z"
"2017-01-12T13:33:53.235774027Z"
"2017-01-12T13:34:10.351984713Z"

This shows that the last element of the array is the latest. Use an index of -1 to reference it:

$ curl -sk https://localhost:10250/stats/ | jq ".stats[-1] | .timestamp"
"2017-01-12T13:33:53.235774027Z"

 
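The same negative index can be combined with the nested fields shown earlier. For example, assuming each stats element carries the cpu usage structure from the output above, the total CPU usage counter of the most recent sample would be:

$ curl -sk https://localhost:10250/stats/ | jq ".stats[-1] | .cpu.usage.total"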

Edit: added below.

To find an element of a list based on the value of a key, or the value of a sub-element, use the pipe notation within the parameter list of the call to select. I use a slightly different curl query here; note the summary element at the end. I want to get the pod entry whose name contains a particular substring.

$ curl -sk https://localhost:10250/stats/summary | jq '.pods[] | select(.podRef | .name | contains("virt-launcher-testvm"))'
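
From there, the earlier exploration tricks apply to whatever select matched. For example (a sketch reusing the same hypothetical pod name), piping the result through keys shows which per-pod fields are available:

$ curl -sk https://localhost:10250/stats/summary | jq '.pods[] | select(.podRef | .name | contains("virt-launcher-testvm")) | keys'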

Episode 25 - The future is now

Posted by Open Source Security Podcast on January 10, 2017 02:00 PM
Josh and Kurt end up discussing CES, IoT, WiFi everywhere, and the future.

Download Episode
<iframe frameborder="no" height="150" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/301707567%3Fsecret_token%3Ds-KzKrp&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;visual=true" width="100%"></iframe>

Show Notes


Security Advice: Bad, Terrible, or Awful

Posted by Josh Bressers on January 09, 2017 03:30 PM
As an industry, we suck at giving advice. I don’t mean this in some negative hateful way, it’s just the way it is. It’s human nature really. As a species most of us aren’t very good at giving or receiving advice. There’s always that vision of the wise old person dropping wisdom on the youth like it’s candy. But in reality they don’t like the young people much more than the young people like them. Ever notice the contempt the young and old have for each other? It’s just sort of how things work. If you find someone older and wiser than you who is willing to hand out good advice, stick close to that person. You won’t find many more like that.

Today I’m going to pick on security though. Specifically security advice directed at people who aren’t security geeks. Heck, some of this will probably apply to security geeks too, so let’s just stick to humans as the target audience. Of all our opportunities around advice, I think the favorite is blaming the users for screwing up. It’s never our fault, it’s something they did, or something wasn’t configured correctly, but still probably something they did. How many times have you dealt with someone who clicked a link because they were stupid. Or they opened an attachment because they’re an idiot. Or they typed a password in that web page because they can’t read. The list is long and impressive. Not once did we do anything wrong. Why would we though? It’s not like we made anyone do those things! This is true, but we also didn’t not make them do those things!

Some of the advice we expect people to listen to is good advice. A great example is telling someone to “log out” of their banking site when they’re done. That makes sense, it’s easy enough to understand, and nothing lights on fire if they forget to do this. We also like to tell people things like “check the URL bar”. Why would a normal person do this? They don’t even know what a URL is. They know what a bar is, it’s where they go to calm down after talking to us. What about when we tell people not to open attachments? Even attachments from their Aunt Millie? She promised that cookie recipe months ago, it’s about time cookies.exe showed up!

The real challenge is figuring out which advice genuinely supplements a properly functioning system. Advice and instructions do not replace a proper solution. A lot of the advice we give out really just masks something that's already broken. The fact that we expect users to care about a URL or an attachment is basically nuts. These are failures in the system, not failures of users. We should be investing our resources into solving the root of the problem, not yelling at people for clicking on links. Instead of telling users not to open attachments, just don't allow attachments. Expecting particular behavior from people rarely changes them. At best it creates an environment of shame; more likely it creates an environment of contempt. They don't like you, you don't like them.

As a security practitioner, look for ways to eliminate problems without asking users to intervene. A best-case outcome is maybe 80% user compliance, and the remaining 20% would take more effort to deal with than anyone can handle. If your solution depends on getting people to listen, you need 100% compliance all the time, which is impossible for humans but not for computers.

It’s like the old saying, an ounce of prevention is worth a pound of cure. Or if you’re a fan of the metric system, 28.34 grams of prevention is worth 453.59 grams of cure!

Do you have some bad advice? Lay it on me! @joshbressers on Twitter.

Looks like you have a bad case of embedded libraries

Posted by Josh Bressers on January 03, 2017 03:39 PM
A long time ago pretty much every application and library carried around its own copy of zlib. zlib is a library that does really fast and really good compression and decompression. If you’re storing data or transmitting data, it’s very likely this library is in use. It’s easy to use and is public domain. It’s no surprise it became the industry standard.

Then one day, CVE-2002-0059 happened. CVE-2002-0059 was a security flaw that was easy to trigger and easy to exploit. It affected network listening applications that used zlib (which was most of them). Today if this came out, it would make heartbleed look like a joke. This was long long ago though, most people didn’t know anything about security (or care in many instances). If you look at the updates that came out because of this flaw, they were huge because literally hundreds of software applications and libraries had to be patched. This affected Windows and Linux, which was most everything back then. Today it would affect every device on the planet. This isn’t an exaggeration. Every. Single. Device.

A lot of people learned a valuable lesson from CVE-2002-0059. That lesson was to stop embedding copies of libraries in your applications. Use the libraries already available on the system. zlib is pretty standard now, you can find it most anywhere, there is basically no reason to carry around your own version of this library in your project anymore. Anyone who does this would be seen as a bit nuts. Except this is how containers work.

Containing Containers

If you pay attention at all, you know the future of most everything is moving back in the direction of applications shipping with all the bits they need to run. Linux containers have essentially a full linux distribution inside them (a very small one of course). Now there’s a good reason for needing containers today. A long time ago, things moved very slowly. It wouldn’t have been crazy to run the same operating system for ten years. There weren’t many updates to anything. Even security updates were pretty rare. You know that if you built an application on top of a certain version of Windows, Solaris, or Linux, it would be around for a long time. Those days are long gone. Things move very very quickly today.

I’m not foolish enough to tell anyone they shouldn’t be including embedded copies of things in their containers. This is basically how containers work. Besides everything is fast now, including the operating system. You can’t count on the level of stability that once existed. This is a good thing because it gives us the ability to create faster than ever before, container technology is how we solve the problem of a fast changing operating system.

The problem we have today is our tools aren’t quite ready to deal with a security nightmare like CVE-2002-0059. If we found a serious problem like this (we sort of did with CVE-2015-7547 which affected glibc) how long would it take you to update all your containers? How would you update them? How would you even know if the flaw affected you?
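
Even the "does it affect you" question is harder than it sounds once everything is in a container. A crude first pass, as a rough sketch assuming RPM-based images (Debian-based images would need dpkg instead), is simply to ask every local image which glibc it ships:

$ docker images --format '{{.Repository}}:{{.Tag}}' | while read img; do
    echo -n "$img: "
    docker run --rm "$img" rpm -q glibc 2>/dev/null || echo "(rpm not available)"
  done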

The answer is most people wouldn’t update their containers quickly, some wouldn’t update them ever. This sort of goes against the whole DevOps concept. The right way this should work is if some horrible flaw is found in a library you’re shipping, your CI/CD infrastructure just magically deals with it. You shouldn’t have to really know or care. Humans are slow and make a lot of mistakes. They’re also hard to predict. All of these traits go against DevOps. The less we have humans do, the better. This has to be the future of security updates. There’s no secret option C where we stop embedding libraries this time. We need tools that can deal with security updates in a totally automated manner. We’re getting there, but we have a long way to go.

If you’re using containers today, and you can’t rebuild everything with the push of a button, you’re not really using containers. You’re running a custom Linux distribution. Don’t roll your own crypto, don’t roll your own distro.
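
The "push of a button" part doesn't have to be elaborate. At its simplest it's a CI job that rebuilds against a freshly pulled base image and pushes the result somewhere your deployment tooling can pick it up (a sketch; the image and registry names are placeholders):

$ docker build --pull -t registry.example.com/my-app:latest .
$ docker push registry.example.com/my-app:latest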

Do you roll your own distro? Tell me, @joshbressers on Twitter.

Episode 24 - The 2016 prediction edition! (yeah, that's right, 2016)

Posted by Open Source Security Podcast on January 03, 2017 01:14 PM
Josh and Kurt discuss 2016 predictions in 2017, what they got right, what they got wrong, and a bunch of other random things.

Download Episode
<iframe frameborder="no" height="150" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/300679437&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;visual=true" width="100%"></iframe>

Show Notes


Future Proof Security

Posted by Josh Bressers on January 02, 2017 04:00 PM
If you’ve ever written code, even a few lines of it, you know there is always some sort of tradeoff between doing it “right” and doing it “now”. This is basically the reality of any industry, there is always the right way, and then there’s the way it’s going to get done. If you’ve ever done any sort of home remodeling project you’re well aware of uncovering the sins of the past as soon as that wall gets opened up.


When you’re writing software there are some places you should never try to make this tradeoff though. In the industry we like to call some of these decisions “technical debt”. It’s not called that to be clever, it’s called that because like all debt, someday you have to pay it back, plus interest. Sometimes those loans come with huge interest rates. How many of us have seen entire projects that were thrown out because of the terrible design decisions made way back at the beginning? It’s sadly not uncommon.


Are there times we should never make a tradeoff between “right” and “now”? Yes, yes there are. The single most important is verifying data correctness, especially if you think it's trusted input. Today's trusted input is tomorrow's SQL injection. Let's use a few examples (these are actual examples I saw in the past, with the names of the innocent changed).


Beware the SQL
Once Bob wrote some SQL to return all the names in one of the ‘Users’ tables. It's a simple enough query; the code looks something like this:

def get_clients():
    table_name = "clients"
    # Safe only while table_name is a hard-coded constant; anything
    # user-controlled here needs a prepared (parameterized) statement.
    query = "SELECT * from Users_" + table_name


That's easy enough to understand: for every other ‘get_’ function, you change the table name variable. Someday in the future they let the intern write some code, and he decides it would be way easier if table_name were passed into the function and set from the URL. Now you have a SQL injection, since any remote user can set table_name to anything, including dangerous SQL. If you're ever doing SQL queries, use prepared statements, even if you don't think you need them. It'll save a lot of trouble later.


Images as far as the eye can see!
There is an application that has some internal icons, they’re used for the buttons that get displayed for users to click on, no big deal. The developer took an existing image library they found under the rug. It has some security flaws but who cares, all the images it displays are shipped by the app, they’re trusted, no big deal.


In a few years the intern (that guy again!) decides that it would be awesome to show images off the Internet. There just happens to be an image library already included in the application, which is a huge win. There’s even some example code that can be copied from where the buttons are drawn!


This one is pretty easy to see. You have a known bad library that used to parse only trusted input. Now it’s parsing untrusted input and is a pretty big problem. There isn’t an easy fix for this one unfortunately. It’s rarely wise to ship embedded libraries in your projects, but everyone does it. I won't tell you to stop doing this, but I also understand this is one of the great problems we have to solve now that open source is everywhere.

These two examples have been grossly simplified, but this stuff has and will continue to happen. If you’re a software developer, be careful with your shortcuts. Always ask yourself the question “what happens if this suddenly starts parsing untrusted input?” It’ll save you a lot of trouble down the road. Never forget that the technical debt bill will show up someday. Make sure you can afford it.

Do you have a clever technical debt story? Tell me, @joshbressers on Twitter.

We are (still) not who we are

Posted by Stephen Gallagher on December 31, 2016 06:28 PM

This article is a reprint. It first appeared on my blog on January 24, 2013. Given the recent high-profile hack of Germany’s defense minister, I decided it was time to run this one again.

 

In authentication, we generally talk about three “factors” for determining identity. A “factor” is a broad category for establishing that you are who you claim to be. The three types of authentication factor are:

  • Something you know (a password, a PIN, the answer to a “security question”, etc.)
  • Something you have (an ATM card, a smart card, a one-time-password token, etc.)
  • Something you are (your fingerprint, retinal pattern, DNA)

Historically, most people have used the first of these three forms most commonly. Whenever you’ve logged into Facebook, you’re entering something you know: your username and password. If you’ve ever used Google’s two-factor authentication to log in, you probably used a code stored on your smartphone to do so.

One of the less common, but growing, authentication methods is biometrics. A couple of years ago, a major PC manufacturer ran a number of television commercials advertising laptop models with a fingerprint scanner. The claim was that it was easy and secure to unlock the machine with a swipe of a finger. Similarly, Google introduced a feature to unlock an Android smartphone using facial recognition with the built-in camera.

Pay attention folks, because I’m about to remove the scales from your eyes. Those three factors I listed above? I listed them in decreasing order of security. “But how can that be?” you may ask. “How can my unchangeable physical attributes be less secure than a password? Everyone knows passwords aren’t secure.”

The confusion here is due to subtle but important definitions in the meaning of “security”. Most common passwords these days are considered “insecure” because people tend to use short passwords which by definition have a limited entropy pool (meaning it takes a smaller amount of time to run through all the possible combinations in order to brute-force the password or run through a password dictionary). However, the pure computational complexity of the authentication mechanism is not the only contributor to security.
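
To put rough numbers on that: an eight-character, all-lowercase password has only 26^8 possible values, around 2×10^11 combinations, or a little under 38 bits of entropy. A quick back-of-the-envelope check:

$ awk 'BEGIN { printf "%.0f combinations, %.1f bits\n", 26^8, 8 * log(26) / log(2) }'
208827064576 combinations, 37.6 bits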

The second factor above, “something you have” (known as a token), is almost always of significantly higher entropy than anything you would ever use as a password. This is to eliminate the brute-force vulnerability of passwords. But it comes with a significant downside as well: something you have is also something that can be physically removed from you. Where a well-chosen password can only be removed from you by social engineering (tricking you into giving it to an inappropriate recipient), a token might be slipped off your desk while you are at lunch.

Both passwords and tokens have an important side-effect that most people never think about until an intrusion has been caught: remediation. When someone has successfully learned your password or stolen your token, you can call up your helpdesk and immediately ask them to reset the password or disable the cryptographic seed in the token. Your security is now restored and you can choose a new password and have a new token sent to you.

However, this is not the case with a biometric system. By its very nature, it is dependent upon something that you cannot change. Moreover, the nature of its supposed security derives from this very fact. The problem here is that it’s significantly easier to acquire a copy of someone’s fingerprint, retinal scan or even blood for a DNA test than it is to steal a password or token device and in many cases it can even be done without the victim knowing.

Many consumer retinal scanners can be fooled by a simple reasonably-high-resolution photograph of the person’s eye (which is extremely easy to accomplish with today’s cameras). Some of the more expensive models will also require a moving picture, but today’s high-resolution smartphone cameras and displays can defeat many of these mechanisms as well. It’s well-documented that Android’s face-unlock feature can be beaten by a simple photograph.

These are all technological limitations and as such it’s plausible that they can be overcome over time with more sensitive equipment. However, the real problem with biometric security lies with its inability to replace a compromised authentication device. Once someone has a copy of your ten fingerprints, or a drop of your blood from a stolen blood-sugar test or a close-up video of your eye from a scoped video camera, there is no way to change this data out. You can’t ask helpdesk to send you new fingers, an eyeball or DNA. Therefore, I contend that I lied to you above. There is no full third factor for authentication, because, given a sufficient amount of time, any use of biometrics will eventually degenerate into a non-factor.

Given this serious limitation, one should never under any circumstances use biometrics as the sole form of authentication for any purpose whatsoever.

One other thought: have you ever heard the argument that you should never use the same password on multiple websites because if it’s stolen on one, they have access to the others? Well, the same is true of your retina. If someone sticks malware on your cellphone to copy an image of your eye that you were using for “face unlock”, guess what? They can probably use that to get into your lab too.

The moral of the story is this: biometrics are minimally useful, since they are only viable until the first exposure across all sites where they are used. As a result, if you are considering initiating a biometric-based security model, I encourage you to save your money (those scanners are expensive!) and look into a two-factor solution involving passwords and a token of some kind.


Episode 23 - We can't patch people

Posted by Open Source Security Podcast on December 28, 2016 03:45 PM
Josh and Kurt talk about scareware, malware, and how hard this stuff is to stop, and how the answer isn't fixing people.

Download Episode
<iframe frameborder="no" height="150" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/299913768&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;visual=true" width="100%"></iframe>

Show Notes


The art of cutting edge, Doom 2 vs the modern Security Industry

Posted by Josh Bressers on December 25, 2016 06:05 PM
During the holiday, I started playing Doom 2. I bet I’ve not touched this game in more than ten years. I can't even remember the last time I played it. My home directory was full of garbage and it was time to clean it up when I came across doom2.wad. I’ve been carrying this file around in my home directory for nearly twenty years now. It’s always there like an old friend you know you can call at any time, day or night. I decided it was time to install one of the doom engines and give it a go. I picked prboom, it’s something I used a long time ago and doesn’t have any fancy features like mouselook or jumping. Part of the appeal is to keep the experience close to the original. Plus if you could jump a lot of these levels would be substantially easier. The game depends on not having those features.

This game is a work of art. You don’t see games redefining the industry like this anymore. The original Doom is good, but Doom 2 is like adding color to a black and white picture, it adds a certain quality to it. The game has a story, it’s pretty bad but that's not why we play it. The appeal is the mix of puzzles, action, monsters, and just plain cleverness. I love those areas where you have two crazy huge monsters fighting, you wonder which will win, then start running like crazy when you realize the winner is now coming after you. The games today are good, but it’s not exactly the same. The graphics are great, the stories are great, the gameplay is great, but it’s not something new and exciting. Doom was new and exciting. It created a whole new genre of gaming, it became the bar every game that comes after it reaches for. There are plenty of old games that when played today are terrible, even with the glasses of nostalgia on. Doom has terrible graphics, but that doesn’t matter, the game is still fantastic.

This all got me thinking about how industries mature. Crazy new things stop happening, the existing players find a rhythm that works for them and they settle into it. When was the last time we saw a game that redefined the gaming industry? There aren’t many of these events. This brings us to the security industry. We’re at a point where everyone is waiting for an industry defining event. We know it has to happen but nobody knows what it will be.

I bet this is similar to gaming back in the days of Doom. The 486 had just come out, and it had a ton of horsepower compared to anything that came before it. Anyone paying attention knew there were going to be awesome advancements. We gave smart people awesome new tools. They delivered.

Back to security now. We have tons of awesome new tools. Cloud, DevOps, Artificial Intelligence, Open Source, microservices, containers. The list is huge and we’re ready for the next big thing. We all know the way we do security today doesn’t really work, a lot of our ideas and practices are based on the best 2004 had to offer. What should we be doing in 2017 and beyond? Are there some big ideas we’re not paying attention to but should be?

Do you have thoughts on the next big thing? Or maybe which Doom 2 level is the best (Industrial Zone). Let me know.

Episode 22 - IoT Wild West

Posted by Open Source Security Podcast on December 25, 2016 01:36 PM
Josh and Kurt talk about planned obsolescence and IoT devices. Should manufacturers brick devices? We also have a crazy discussion about the ethics of hacking back.

Download Episode
<iframe frameborder="no" height="150" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/299448186&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;visual=true" width="100%"></iframe>

Show Notes