November 30, 2015

Where is the physical trust boundary?
There's a story of a toothbrush security advisory making the rounds.

This advisory is pretty funny, but it matters. The actual issue with the toothbrush isn't a huge deal; an attacker isn't going to do anything exciting with these problems. What's interesting is that we're at the start of a long line of problems just like this one.

Today some engineers built a clever toothbrush. Tomorrow they're going to build new things, different things. Security will matter for some of them. It won't matter for most of them.

Boundaries of trust

Today when we try to decide if something is a security issue we like to ask the question "Does this cross a trust boundary?" If it does, it's probably a security issue. If no trust boundary is crossed, it's probably not an issue. There are of course lots of corner cases and nuance, but we can generally apply the rule.

Think of it this way. If a user can delete their own files, that's not crossing a trust boundary, that's just doing something silly. If a user can delete someone else's files, that's not good.

This starts to get weird when we think about real things though.

Boundaries of physical trust?

What happens in the physical world? What counts as a trust boundary? In the toothbrush example above, an attacker could gain knowledge of how someone is using a toothbrush. That's technically crossing a trust boundary (an attacker can gain data they're not supposed to have), but let's face it, it's not a big deal. If your credit card number were also included in the data, sure, no question there.

As it stands, we're talking about data that isn't exciting. You could make an argument about tracking data from a user over time and across devices, but let's not go there right now. Let's keep the thinking small and contained.

Where do we draw the line?

If we think about physical devices, what are our lines? The concept of a trust boundary alone doesn't really work here. I can think of three lines, all of which are important, but not equally so.
  1. Safety
  2. Harm
  3. Annoyance


When I say safety I'm thinking about a device that could literally kill a person. This could be something like disabling the brakes on a car, or making a toaster start a fire. Catastrophic events. I don't think anyone would claim this class of issues isn't a problem. They are serious, and I would expect any vendor to treat them that way.


Harm would be where someone or something can be hurt, but nothing catastrophic. Think a small burn or a scrape: making someone fall down when using a scooter, or burn themselves with a device. We could argue about this category for a while; the line between harm and catastrophe will get fuzzy. Some vendors will be less willing to deal with these, but I bet most get fixed quickly.


Annoyance is where things are going to get out of hand. This is where the toothbrush advisory lives. In the case of a toothbrush it's not going to be a huge deal. Should the vendor fix it? Probably. Should you get a new toothbrush over it? Probably not.

The nuance will be deciding which annoying problems deserve fixes and which don't. Some of these problems could cost you money. What if an attacker can turn up your thermostat so your furnace runs constantly? Now we have an issue that can cost real money. What if a problem ruins a spool of filament in your 3D printer? What if the oven burns the Christmas goose?

Where is our trust boundary in the world of annoying problems? You can't just draw the line at money and goods. What happens if an attacker can ring a person's doorbell so they have to keep getting up to check the door? Things start to get really weird.

Do you think a consumer will be willing to spend an extra $10 for "better security"? I doubt it. When a device can harm or kill a person, there are government agencies to step in and stop such products. There are no agencies for leaking data, and even if there were, they would have limited resources. Compare "annoyance security" to all the products sold today that don't actually work: who is policing those?

As of right now our future is going to be one where everything is connected to the Internet, none of it is secure, and nobody cares.

Join the conversation, hit me up on twitter, I'm @joshbressers

November 20, 2015

If your outcome is perfect or nothing, nothing always wins
This tweet

Led to this thread

The short version is that some developers from Red Hat are working on gcc changes to prevent ROP style attacks. More than one person has accused this work of being pointless and a waste of time. It's not; the real waste of time is arguing about why trying new things is dumb.

Here's the important thing security people always screw up.

The only waste of time is if you do nothing and complain about the people who are doing something.

It is possible the ROP work that's being done won't end up preventing anything. If that's true, the absolute worst outcome is a lesson learned. It's all too easy in the security space to act like this: if it's not perfect, you can argue it's bad. It's a common trait of a dysfunctional group.

(The one place this advice does hold is crypto: never invent your own crypto algorithm.)

But in the context of humanity, this is how progress happens. First someone has an idea. It might be a terrible idea, but they work on it, then they get help, the people helping expand and change the idea, and eventually, after people work together, the whole becomes greater than the sum of its parts. Or, if it's a bad idea, it goes nowhere. Failure only exists if you learn nothing.

This isn't how security has worked, it's probably why everything seems so broken. The problem isn't the normal people, it's the security people. Here's how a normal security idea happens:
  1. Idea
  2. Give up
That's madness.

From now on, if someone has an idea and you think it's silly, say nothing. Just sit and watch. If you're right, it will catch fire and you can run around giving high fives. It probably won't though. If someone starts something and others come to help, it's going to grow into something, or they'll fail and learn something. This is how humans learn and get better. It's how open source works; it's why open source won. It's why security is losing.

The current happy ending to the ROP thread is that the work is going to continue; the naysayers seem to have calmed down for now. I was a bit worried for a while, I'll admit. I have no doubt they'll be back though.

Help or shut up. That is all.

Join the conversation, hit me up on twitter, I'm @joshbressers

November 18, 2015

Translating Between RDO/RHOS and Upstream OpenStack releases

There is a straightforward mapping between the version numbers used for RDO and Red Hat Enterprise Linux OpenStack Platform releases and the upstream releases of OpenStack. I can never keep them straight. So, I write code.

UPDATE: missed Juno before…this is why we code review


# Map RHOS release numbers to upstream OpenStack release names.
# RHOS numbering starts at upstream Diablo, hence the offset of 3.
upstream = ['Austin', 'Bexar', 'Cactus', 'Diablo', 'Essex', 'Folsom',
            'Grizzly', 'Havana', 'Icehouse', 'Juno', 'Kilo', 'Liberty',
            'Mitaka', 'N', 'O', 'P', 'Q', 'R', 'S']

for v in range(len(upstream) - 3):
    print("RHOS Version %s = upstream %s" % (v, upstream[v + 3]))

RHOS Version 0 = upstream Diablo
RHOS Version 1 = upstream Essex
RHOS Version 2 = upstream Folsom
RHOS Version 3 = upstream Grizzly
RHOS Version 4 = upstream Havana
RHOS Version 5 = upstream Icehouse
RHOS Version 6 = upstream Juno
RHOS Version 7 = upstream Kilo
RHOS Version 8 = upstream Liberty
RHOS Version 9 = upstream Mitaka
RHOS Version 10 = upstream N
RHOS Version 11 = upstream O
RHOS Version 12 = upstream P
RHOS Version 13 = upstream Q
RHOS Version 14 = upstream R
RHOS Version 15 = upstream S

I’ll update once we have names for N and O.

November 16, 2015

Your containers were built in some guy's barn!
Today containers are a bit like how cars used to work a long long long time ago. You couldn't really buy a car, you had to build it yourself or find someone who could build one for you in their barn. The parts were terrible and things would break all the time. It probably ran on steam or was pulled by a horse.

Containers aren't magic. Well, they are for most people. Almost all technology is basically magic for almost everyone. There are some who understand it, but generally speaking, it's complicated. People know enough to get by, which is fine, but that also means you have to trust your supplier. Your car is probably magic to you. You put gas in a hole in the back, then you press buttons, push pedals, and turn wheels to transport yourself places. I'm sure a lot of people at this point are running through the basics of how cars work in their heads to reassure themselves it's not magic and they know what's going on!

They're magic, unless you own an engine hoist (and know how to use it).

Now let's think about containers in this context. The vast majority of container users get a file from somewhere, full of stuff that doesn't make a lot of sense. Then they run some commands they found on the internet, some magic happens, and they repeat this, twiddling things here and there, until on try 47 they have a working container.

It's easy to say it doesn't matter where the container content came from, or who wrote the dockerfile, or what happens at build time. It's easy because we're still very early in the life of this technology. Most things are still fresh enough that security can squeak by. Most technology is fresh enough you don't have to worry about API or ABI issues. Most technology is new enough it mostly works.

Except, even as new as this technology is, we are starting to see reports of how many security flaws exist in docker images. This will only get worse, not better, if nothing changes. Almost nobody is paying attention; containers mean we don't have to care about this stuff, right!? We're at a point where we have guys building cars in their barns. Would you trust your family in a car built in some guy's barn? No, you want a car built with good parts that has been safety tested. Your containers are being built in some guy's barn.

If nothing changes, imagine what the future will look like. What if we'd had containers in 1995? There would still be people deploying Windows 95 in a container and putting it on the Internet. In 20 years, some of the containers we use today will still be deployed. Imagine still seeing Heartbleed in 20 years; the thought is horrifying.

Of course I'm being a bit overdramatic about all this, but the basic premise is sound. You have to understand what your container bits are. Make sure your supplier can support them. Make sure your supplier knows what they're shipping. Demand containers built with high quality parts, not pieces of old tractors found in some barn. We need secure software supply chains; only a few places are building them today, so start asking questions and paying attention.

Join the conversation, hit me up on twitter, I'm @joshbressers

November 11, 2015

Is the Linux ransomware the first of many?
If you pay any attention to the news, you've no doubt seen the story of the Linux ransomware that's making the rounds. Much has been said about its technical merits, but there are two things I keep wondering.

Is this a singular incident, or the first of many?

You could argue this either way. It might be a one off blip, it might be the first of more to come. We shouldn't start to get worked up just yet. If there's another one of these before the year ends I'm going to stock up on coffee for the impending long nights.

Why now?

Why are we seeing this now? Linux and Apache have been running a lot of web servers for a very long time. Is there something different now that wasn't there before? Unpatched software isn't new. Ransomware is sort of new. Drive-by attacks aren't new. What is new is the amount of attention this thing is getting.

It does help that the author made a mistake, so the technical analysis is more interesting than it would be otherwise. I suspect this wouldn't have been nearly as exciting without that.

If this is the first of many, 2016 could be a long year. Let's hope it's an anomaly.

Join the conversation, hit me up on twitter, I'm @joshbressers

November 10, 2015

You don't have Nixon to kick around any more!
There has been a bit of noise lately about some groups not taking security as seriously as they should. Or maybe it's that the security folks don't think those groups take it as seriously as they should. Someday there is going to be a security mushroom cloud! And when there is, you won't have Nixon Security to kick around anymore!

Does it matter?

I keep thinking about the people who predict the end of the world; there hasn't been one of those predictions in a while now. The joke is always "someday they'll be right".

We're a bit like this when it comes to computer security. The security guys have been saying for a very long time "someday you'll wish you listened to us!" I'm not sure that day will ever come though. There will be localized events of course, but I doubt there will be one singular thing; it'll likely be a long slow burn.

The future won't be packetized.

The world is different now. I don't think there will be some huge changing event, though not for the reason you might expect. Open source won, but that doesn't mean security wins next; it means security never wins.

Will there be a major security event that makes everyone start paying attention? I don't think so. If you look at history, a singular major event can cause a group to quickly change direction and unite. This happened to Microsoft: things like Nimda and Code Red gave them purpose and direction, and their SDL program got created. But Microsoft was a single entity; one person could demand they change direction and everyone had to listen. If you didn't listen, you got a new job.

Imagine what would happen if anyone inside an open source project demanded this, even someone viewed as the "leader". It would be a circus. You would have one group claiming this is great (that's us), one claiming this is dumb (those are the armchair security goofs), and a large group who wouldn't care or change their behavior because there's no incentive.

You can't "hack" open source. A single project can be attacked or have a terrible security record. Individual projects may change how they work, but fundamentally the whole ecosystem won't drastically change. Nobody can attack everything, they can only attack small bits. Now don't think this is necessarily bad. It's how open source works and it is what it is. Open source won I dare not question the methodology.

At the end of the day, the way we start to get security to where we want it will be with a few important ideas. For example, once we have containers that can be secured, some classes of bugs go away. I always say there is no security silver bullet. There isn't one; there will be many. It's the only way any of this will work out. Expecting everyone to be a security expert doesn't work; expecting volunteers to care about security doesn't work.

The future of open source security lies with the integrators, the people who take lots of random projects and put them together. That's where the accountability lives, and it's where it belongs. I don't know what that means yet, but I suspect we'll find out in the near future as security continues to be a hot topic.

It's a shame I'm not musical. Security Mushroom Cloud would be a great band name.

Join the conversation, hit me up on twitter, I'm @joshbressers

November 06, 2015

Leadership in Software Development Part 4

Principle #10 – Build A Team

Principle #11 – Employ Your Team In Accordance With Its Capabilities

No one owns the code. Everyone owns the code. While not everyone has the same capabilities, we all contribute to a common code base, and we all want it to be of as high a quality as possible. There are a handful of tools essential to distributed development: bug trackers, wikis, and etherpads, but so are IRC, email, and, lately, a code review system. These are the tools by which you communicate. Communication is key, and respectful communication is essential.

Your job as a leader is not just to communicate, but to ensure that others are communicating. You need the right technical tools, and the right attitude. To build a team, you need to set a vision of where the project is going, and help people realize how that vision works to their advantage. You need to be willing to get obstacles out of developers’ way. You need to make sure you are not that obstacle.

Inspire cross-team communication.  Keep it light, without letting humor degenerate into something hurtful.  Keep a constant eye out for bottlenecks that will discourage contributors. If two people are working on two different aspects of the same project, put them in communication with each other.  Facilitate the language choices to make sure they have a common solution, even if they have different problems to solve.

Diversity of interests helps build a team. Some people are more detail oriented, and do wonderful code reviews. Some people are dreamers who have grand ideas for project directions. Some people have skills in more esoteric areas like cryptography or databases. The total set of skills you have on a team expands that team’s capabilities. Thus, balance the need for different skills against the need for a commonality of purpose.

At some level, programming is programming. But there is a different skill set in doing user interface work than in doing highly performant multiprocess number crunching. Your community coalesces around your project due to both a shared need and a common skill-set. Make sure that your project stays within its bounds.

But…that isn’t always possible. When I was a team leader for a Java based web application running on JBoss, we were affected by a kernel scheduling issue that caused JBoss to get killed by the Out of Memory (OOM) killer. While it didn’t devolve on me to fix the kernel, it did mean that I had to understand the problem and find a workaround for our company. I had enough of a Linux background at that point that I was able to get us onto a stable kernel until the problem was resolved upstream.

However, I was also aware that I was spread too thin. As a team leader, I picked up all of the tasks that had to be done but were too small to justify distracting a team member who was working on a strategic problem. I was the lead programmer, QA engineer, and system administrator, as well as the guy who had to do all of the management tasks that had nothing to do with coding. Something had to give, and I got my boss to hire a system administrator. Knowing what your team needs in order to succeed, and knowing when you don’t have the skill-set in house, is vital to getting a product built and delivered.

Leadership in Software Development Part 3

Principle #7 – Keep Your Team Informed

Communication is the key to any operation. In the Army, they taught that an Infantry Soldier needs to do three things in order to succeed: shoot, move, and communicate. Well, there should be very little gunfire in open source development, so shooting is less essential. Movement too, since most things happen via the network. But communication is paramount. Tell people what you are going to do. A great decision left uncommunicated is no decision. In the absence of information, people will make assumptions. It is easier to correct mistakes early, and identifying them requires review and communication.

You might not know everything that people want to know. Tell them that. Maybe you don’t have a release schedule. Knowing that there is no fixed release schedule is better than wondering when the release is going to come.

Maybe you haven’t had time to review a patch. Let the submitter know that, and you will get to it when you can. In that exchange, you might learn that it really is not a high priority issue, and you can prioritize down.

Principle #8 – Develop A Sense of Responsibility In Your Team

Tough to do this, but straightforward. Set the example. Show the people that are involved with your project that you believe in it. Some will stick, some will not. The ones that do will do so for varied reasons. But not all will have the bigger picture.

Free and Open Source Software (FOSS) carries with it a built-in sense of ownership, and a corresponding responsibility. Once you realize that you “can” fix something, you often feel you “should.” The trick is to get people to “do.” This means inspiring them to do so. Respectful communication is key. Go back and reread “Schofield’s Definition of Discipline.”

A sense of responsibility means that they learn the full software development life cycle. Finishing a feature is just the starting point; it needs to be tested, documented, and released, and all of these things require effort as well. They are not “fun” and they can be grinding. If a patch conflicts with another patch, it has to be rebased and deconflicted, often manually. This can be frustrating as well. Only if a team member “owns” the patch will they be willing to put in the effort to see it through.

A trickier aspect is getting a developer from an organization with a particular perspective to understand demands from other organizations. We’ve seen this a lot in OpenStack. I work for a company that distributes software that other companies need to install; OpenStack is a product to me, and my customers have a particular set of demands. I work very closely with people from other companies that run large, public clouds. OpenStack is something that they deploy, often right out of the upstream repository, to a limited set of deployments. It is a key part of their revenue model, and if it breaks, they suffer. Both perspectives are important, and can be complementary. Features built for one model can often be critical to the other model… once they are understood. You have to be responsible to all of the downstream deployers of your project, not just the ones that pay your paycheck, or the project suffers.

Principle #9 – Ensure Each Task is Understood, Supervised and Accomplished

Not everyone gets to do the fun stuff. But sometimes, people don’t even realize what needs to be done to get a project done. Writing the code that gets executed at run time is the focus, but there is all the other stuff: packaging and install scripting, startup and cleanup, database management, and automated QA. People need to write unit tests to go along with their features. All of this is important, and as the project leader, you have to make sure it happens. For a small project, you may do it all. However, as some have noted, Linus doesn’t scale. The Linux kernel has a strong delegation culture, and a slew of companies that fill in the gaps to make sure that aspects of the kernel are solid.

A bug tracker is key. I am a fan of the approach where everything goes in the bug tracker, as it minimizes the number of systems to check. Bug trackers are not the best for submitting code patches, though, so linking the patch submission process to bugs is essential. We’ve been using Gerrit on OpenStack and it is a fairly successful approach. FreeIPA does everything on the mailing list, which also works, so long as you periodically comb the list to cross-reference submitted versus merged changes. The larger the project, the more essential it is to automate the tracking process.

Now, not every bug is going to get fixed in every release, and not every feature will be implemented.  Prioritize and select among them, and make sure the most essential efforts get priority.  It is OK to postpone a feature to the next release, and then the one after that if the priority is just not there; often, you find a key feature emerges that obviates an old approach.

The hardest thing to do is to tell someone that the patch that they have put a lot of effort into writing is not acceptable. This is often due to the need for testing, or because the patch is going in the wrong direction. If this happens, make sure you approach the developer with the respect due and explain clearly why you are choosing not to include their patch. Keep an open mind: often, looking back a year from now, today’s “bad approach” turns out to be what you wish you had done. Keep the code around and retrievable, but understand that each submission has its cost. Just adding code to a project may increase the load on testing, docs, and deployment, and you have to justify that effort. If you are going to perform a task, you need to ensure all the specified and implied tasks that surround it are accomplished, too.


Leadership in Software Development Part 2

Principle #6 – Know Your Personnel and Look Out for Their Well Being

In an Open Source software project, who are “your people?” Your people are your community. Whether they are a fellow developer from your own company, the guy that pops in once every couple of months to make a typo fix, or someone that just reports bugs, they are all the people that lead to the success (or lack thereof) of your project.

Since they don’t report to you (normally), you can’t look out for their well being the same way an Army Officer is expected to take care of the Soldiers in the unit. You won’t be checking their feet for frostbite, unless it is after a drunken Meetup on a winter night. Most open source developers will not meet each other face to face.

What you do need to do is be aware of the reasons that the people drawn to your project have for getting involved. The most common reason is that your project is essential to getting their “day job” done. As such, taking care of them means doing right by the project. Probably most important is being responsive to patch submissions. If a user submits a patch, it means that they care about the feature or bug addressed by that patch. It might be essential for putting your product into live deployment, or for shipping their own product. You have to be smart: balance stability against responsiveness. Communicate; don’t let changes sit unanswered.

As with most organizations, there are going to be different viewpoints on topics. As a leader, it is not your job to make every last decision. Part of being a grown-up is letting go of control, especially over the things that you care less about. Take input from many community members on process, code standards, and dependencies, and let consensus grow. Sometimes you need to make the big decisions; just don’t feel the need to make them all the time.

One of the quotes on leadership that has made the deepest impression on me is Schofield’s Definition of Discipline:

The discipline which makes the soldiers of a free country reliable in battle is not to be gained by harsh or tyrannical treatment. On the contrary, such treatment is far more likely to destroy than to make an army. It is possible to impart instruction and to give commands in such a manner and such a tone of voice as to inspire in the soldier no feeling but an intense desire to obey, while the opposite manner and tone of voice cannot fail to excite strong resentment and a desire to disobey. The one mode or the other of dealing with subordinates springs from a corresponding spirit in the breast of the commander. He who feels the respect which is due to others cannot fail to inspire in them regard for himself, while he who feels, and hence manifests, disrespect toward others, especially his inferiors, cannot fail to inspire hatred against himself.

You have to respect the people in your community, especially the ones that you disagree with the most. Your communication should be respectful. It is easy to assume the worst in someone, to get angry, and to lose your head. You will regret it. And nothing disappears from the internet, at least not quickly.


Leadership in Software Development Part 1

I’ve been in and out of leadership roles from High School onward. For the past decade and a half, I’ve been a professional software developer. During that time, I’ve been in a leadership position roughly a third of the time. Recently, I was asked to evaluate my Leadership Philosophy (more on that later). I’ve also had to do the annual counselling that my company formalizes.

One tool we learned in the Army was the list of Leadership Principles. As part of my evaluation, I want to see how I think they apply to what I do: software development in an open source project space. Here’s what I’ve come up with so far:

Principle #1 – Know Yourself and Seek Self Improvement
Principle #2 – Be Technically Proficient

At first blush, these may seem to be the same thing. However, this is leadership focused, and the two points emphasize different aspects of competency. It is impossible to lead in software development without knowing what you are doing technically. But Principle 1 is referring to leadership skills in particular, as well as any aspect of your life outside of coding that can impact your job. Punctuality, cleanliness, clarity of communication, focus, temper, and so forth. You might be the smartest code jockey in history, but it doesn’t mean you have all the necessary skills to lead a team.

That said, you should be able to do the job of everyone under you in your team should the need arise. Probably the most important reason for this is so that you can ensure that what each team member is doing contributes to the overall success of the team. If you cannot read SQL, you won’t be able to understand what your DBA is proposing. Code is code, and you should be comfortable with all programming languages and paradigms. What, you are not? Get studying.

Principle #3 – Seek Responsibility and Take Responsibility for Your Actions
Principle #4 – Make Sound and Timely Decisions

Sometimes a technical lead or managerial position gets thrown in your lap, but that usually happens after you have shown that you can do the job. That comes from solving problems.

Now, “Take Responsibility for Your Actions” might sound like advice to be willing to admit when you are wrong. That is only a small part of it. In reality, development is filled with thousands of tiny decisions, and not all of them are going to be optimal. Yes, sometimes you will make big mistakes, or will have to take the hit for a mistake someone on your team made despite your best efforts. That is the lesser part of taking responsibility.

The greater part is understanding that your job is to get high quality software out to your customers. You need to ensure that all of the links in that chain are covered. You might have the greatest solution to a technical problem, but if the user can’t install it, or if upgrading from a previous version will ruin the uptime of a critical system, you have more work to do. Quality Assurance takes a lot of resources, but it is essential to building quality code. You need to make the decisions that will affect not just the current code base, but the ongoing process of improving code. It is easier to make corrections closer to the decisions than down the road: make decisions, look at the impact they are having, and make continual adjustments.

Probably most important is code review. Fortunately, the programming world has caught on to the fact that code reviews are good. The projects I’ve worked on these past few years have all had mandatory code review policies. However, as a senior person, your responsibility in code review is greater than ever, as you will set the tone for the people on your team. If you do cursory reviews, others will either do cursory reviews as well, or will shame you. Neither is good for your team.

You are also responsible for making sure that the code reviews are checking for certain standards. PEP8 under Python, the Java coding conventions, and so forth are necessary but not sufficient. This is another case where technical proficiency comes in: you need to know the requirements of your code base. For example, if you are using a single-threaded dispatcher like Eventlet, make sure that none of the code you introduce blocks, or you will deadlock your application.
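To make the Eventlet point concrete, here is a minimal sketch of the pitfall (the worker function and timings are invented for illustration, not taken from any real project):

import eventlet
# Patch the stdlib so blocking calls like time.sleep() yield to the
# event hub; without this, one blocking call stalls every green thread.
eventlet.monkey_patch()

import time

def worker(name):
    for i in range(3):
        # Cooperative after monkey_patch(); an unpatched blocking call
        # here (e.g. into a C library) would freeze the whole hub.
        time.sleep(0.1)
        print("%s tick %d" % (name, i))

pool = eventlet.GreenPool()
for name in ("a", "b"):
    pool.spawn(worker, name)
pool.waitall()

With the patch in place the two workers interleave; remove the monkey_patch() call and they run serially, which in a real service shows up as mysterious stalls rather than an obvious error.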

Principle #5 – Set the Example

Leading by example can only occur if you have the skills to do something well yourself. Often, in a review, you need to show a developer an example of how you think they should redo a piece of code. But it also refers to that limitless list from “Know yourself.” If you show up at noon, your developers will show up at noon. If your reviews are lackadaisical, theirs will be as well. If you are rude to members of your team, they will pick up on it, and the cohesion of the unit will suffer. If you communicate poorly, the team will communicate poorly.

Of course, this being the Open Source world, it doesn’t always happen like that. Often, someone else will step up and fill the vacuum. One member of your team who believes in good communication may take it upon themselves to be the “information pump” of the group. Well-mannered project members may be better able to smooth ruffled feathers. But often we also see communities fall apart due to rudeness and poor software development practices. Best not to leave it to chance.

A Work in Progress

(For Robin)

You keep asking me for a Melody
Something sort of Upbeat and in a Major Key
Now you know that I’m not Lazy
And I did not Forget
So, sorry my dear sister, but
Your song ain’t written yet
sorry my dear sister, but
Your song ain’t written yet

Sorry that its had to keep you waiting
The process has become a bit frustrating
the rhymes that I have written
are too awkward to accept
Its a work in progress still
Your song ain’t written yet.
Yeah, its a work in progress still
Your song ain’t written yet.

Lately you’ve been going through some tough times.
Wondering what’s the pattern in your life lines.
You know there’s so much out there
that you want to go and get
Just wait a little longer now
Your song ain’t written yet.
Just a little patience now
Your song ain’t written yet.

Soon you will perceive a brand new melody
Something sort-of upbeat, and in a major key
that will make your heart beat quicken
put that bounce back in your step
Just ’cause you haven’t heard it don’t mean
Your song ain’t written yet.
Its a work in progress still
Your song ain’t written yet.

(Copyright Adam Young, 2014, all rights reserved)

November 04, 2015

Risk report update: April to October 2015

In April 2015 we took a look at a year’s worth of branded vulnerabilities, separating out those that mattered from those that didn’t. Six months have passed, so let’s take this opportunity to update the report with the new vulnerabilities that mattered across all Red Hat products.

ABRT (April 2015) CVE-2015-3315:

ABRT (Automatic Bug Reporting Tool) is a tool to help users to detect defects in applications and to create a bug report. ABRT was vulnerable to multiple race condition and symbolic link flaws. A local attacker could use these flaws to potentially escalate their privileges on an affected system to root.

This issue affected Red Hat Enterprise Linux 7 and updates were made available. A working public exploit is available for this issue. Other products and versions of Enterprise Linux were either not affected or not vulnerable to privilege escalation.

JBoss Operations Network open APIs (April 2015) CVE-2015-0297:

Red Hat JBoss Operations Network is a middleware management solution that provides a single point of control to deploy, manage, and monitor JBoss Enterprise Middleware, applications, and services. The JBoss Operations Network server did not correctly restrict access to certain remote APIs which could allow a remote, unauthenticated attacker to execute arbitrary Java methods. We’re not aware of active exploitation of this issue. Updates were made available.

“Venom” (May 2015) CVE-2015-3456:

Venom was a branded flaw which affected QEMU. A privileged user of a guest virtual machine could use this flaw to crash the guest or, potentially, execute arbitrary code on the host with the privileges of the host’s QEMU process corresponding to the guest.

A number of Red Hat products were affected and updates were released. Red Hat products by default would block arbitrary code execution as SELinux sVirt protection confines each QEMU process.

“LogJam” (May 2015) CVE-2015-4000:

TLS connections using the Diffie-Hellman key exchange protocol were found to be vulnerable to an attack in which a man-in-the-middle attacker could downgrade vulnerable TLS connections to weak cryptography which could then be broken to decrypt the connection.

Like Poodle and Freak, this issue is hard to exploit as it requires a man-in-the-middle attack. We’re not aware of active exploitation of this issue. Various packages providing cryptography were updated.

BIND DoS (July 2015) CVE-2015-5477:

A flaw in the Berkeley Internet Name Domain (BIND) allowed a remote attacker to cause named (functioning as an authoritative DNS server or a DNS resolver) to exit, causing a denial of service against BIND.

This issue affected the versions of BIND shipped with all versions of Red Hat Enterprise Linux. A public exploit exists for this issue. Updates were available the same day as the issue was public.

libuser privilege escalation (July 2015) CVE-2015-3246:

The libuser library implements an interface for manipulating and administering user and group accounts. Flaws in libuser could allow authenticated local users with shell access to escalate their privileges to root.

Red Hat Enterprise Linux 6 and 7 were affected, and updates were available the same day the issue was public. Red Hat Enterprise Linux 5 was affected and a mitigation was published. A public exploit exists for this issue.

Firefox lock file stealing via PDF reader (August 2015) CVE-2015-4495:

A flaw in Mozilla Firefox could allow an attacker to access local files with the permissions of the user running Firefox. Public exploits exist for this issue, including as part of Metasploit, and targeting Linux systems.

This issue affected Firefox shipped with versions of Red Hat Enterprise Linux and updates were available the next day after the issue was public.

Firefox add-on permission warning (August 2015) CVE-2015-4498:

Mozilla Firefox normally warns a user when trying to install an add-on if initiated by a web page.  A flaw allowed this dialog to be bypassed.

This issue affected Firefox shipped with Red Hat Enterprise Linux versions and updates were available the same day as the issue was public.


The issues examined in this report were included because they were meaningful. This includes issues of high severity that are likely to be easily exploited (or that already have a public working exploit), as well as issues that were highly visible or branded (with a name or logo), regardless of their severity.

Between 1 April 2015 and 31 October 2015, across all Red Hat products, there were 39 Critical Red Hat Security Advisories released, addressing 192 Critical vulnerabilities. Aside from the issues in this report which were rated as having Critical security impact, all other issues with a Critical rating were part of Red Hat Enterprise Linux products and were browser-related: Firefox, Chromium, Adobe Flash, and Java (due to the browser plugin).

Our dedicated Product Security team continues to analyse threats and vulnerabilities against all our products every day, and provides relevant advice and updates through the customer portal. Customers can call on this expertise to ensure that they respond quickly to address the issues that matter. Hear more about vulnerability handling in our upcoming virtual event: Secure Foundations for Today and Tomorrow.

CVE-2015-5602 and SELinux?

How is SELinux helpful?

That is one of the most common questions that we get when a new CVE (Common Vulnerabilities and Exposures) appears. We explain SELinux as a technology for process isolation to mitigate attacks via privilege escalation.

A real example of this kind of attack can be seen in CVE-2015-5602, an unauthorized privilege escalation in sudo. Under certain conditions, this security issue allows a user to modify any file on the system, and from there it follows that they can modify the /etc/shadow file, which contains secure user account data. To demonstrate how SELinux can help here, let us recall an SELinux feature called confined users.

SELinux confined users

On Fedora systems, the default targeted security policy confines commonly used applications and services to mitigate attacks on a system. With this policy, Linux users are unconfined by default, which means there are no restrictions on attacks coming from these users; CVE-2015-5602 is such an example. Fortunately, you can configure SELinux to also confine Linux users, as described in Confining users with SELinux in RHEL and Confining Users on Fedora, as part of process isolation for Linux users.

I personally use SELinux confined users by default to take full advantage of process isolation for Linux users on my Fedora system.

In my case, the mgrepl Linux user is mapped to the staff_u SELinux user:

# semanage login -l |grep mgrepl

Login Name          SELinux User        MLS/MCS Range
mgrepl              staff_u             s0-s0:c0.c1023

staff_u is an SELinux user intended for login users with common administrative permissions, and it runs sudo in a dedicated SELinux domain.

type_transition staff_t sudo_exec_t : process staff_sudo_t;

This rule says that when a process in the staff_t domain (our staff_u user) executes sudo, SELinux transitions it to the staff_sudo_t domain. With sudoers configured, we can see:

$ sudo -e ~/test.txt
$ ps -efZ | grep sudo
staff_u:staff_r:staff_sudo_t:s0-s0:c0.c1023 root 5390 4925 0 23:04 pts/3 00:00:00 sudo -e /home/mgrepl/test.txt

CVE-2015-5602 vs. confined SELinux users

Following the steps to reproduce CVE-2015-5602, and with SELinux confinement defined for this Linux user using the semanage utility,

# semanage login -a -s staff_u usr
$ ssh usr@localhost
[usr@localhost ~]$ ln -s /etc/shadow ~/temp/test.txt
[usr@localhost ~]$ id -Z

we can try to edit the ~/temp/test.txt symlink to access /etc/shadow:

[usr@localhost ~]$ sudo -e ~/temp/test.txt
sudoedit: /home/usr/temp/test.txt: Permission denied
[usr@localhost ~]$ getenforce

That’s it.


And the following log event is generated for this denial:

type=AVC msg=audit(1446584115.930:558): avc: denied { read } for pid=3098 comm="sudoedit" name="shadow" dev="dm-1" ino=1049344 scontext=staff_u:staff_r:staff_sudo_t:s0 tcontext=system_u:object_r:shadow_t:s0 tclass=file permissive=0

Are you now thinking about SELinux confined users?

I would like to thank Daniel Kopeček for the heads-up and for co-authoring this post.

FreeIPA PKI: current plans and a future vision

FreeIPA’s X.509 PKI features (based on Dogtag Certificate System) continue to be an area of interest for users and customers. In this post I summarise recently-added PKI features in FreeIPA, work in progress, and what we plan to do in future releases. Then I will outline my personal vision for what the future of PKI in FreeIPA should look like, noting how it will address pain points and limitations of the existing architecture.

Recent changes and work in progress

In the past only a single certificate profile was supported (appropriate for TLS-enabled services) but as of FreeIPA 4.2 multiple certificate profiles are supported (including custom profiles), as are user certificates. CA ACL rules define which profiles can be used to issue certificates to particular principals (users, groups, hosts, hostgroups and/or services). The FreeIPA framework (not Dogtag) enforces CA ACLs.
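To make this concrete, defining such a rule from the FreeIPA 4.2 command line looks roughly like the following sketch (the ACL, profile, and group names here are invented for the example):

# Create a CA ACL, then attach a custom profile and a user group to it.
$ ipa caacl-add smartcard-users --desc="smart card login certificates"
$ ipa caacl-add-profile smartcard-users --certprofiles=smartcardAuth
$ ipa caacl-add-user smartcard-users --groups=smartcard-operators

After this, certificate requests using the smartcardAuth profile are permitted only for members of the smartcard-operators group.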

Support for custom profiles means that the PKI can be used for a huge number of use cases, but it is still up to the user or operator to provide a suitable PKCS #10 certificate signing request (CSR).

I am currently working on implementing support for lightweight sub-CAs in Dogtag and FreeIPA so that sub-CAs can be easily created and used to issue certificates. The CA ACLs concept will be extended to include sub-CAs so that use of certain profiles can be restricted to particular CAs.

Problems with the current architecture

To put this all in context, please study the following crappy diagram of the current FreeIPA PKI architecture:

+----------+
|   User   |
|          |  1. Generate CSR
| +------+ |     (somehow... poor user)
| | krb5 | |
| |ticket| |
+-+--|---+-+
     |                           +-----------+
     | 2. ipa cert-request       |           |
     |    (CSR payload)          |   389DS   |
     v                           |           |
+--------------------+           +-----------+
|  FreeIPA   +-------+                 ^
|            |krb5   |                 |
|            |proxy  <-----------------+
| +-------+  |ticket |   3. Validate CSR
| |RA cert|  +-------+   4. Enforce CA ACLs
| +-------+          |
+-----|--------------+
      |
      | 5. Dogtag cert request
      |    (CSR payload)
      v
+----------+
|  Dogtag  |  6. Issue certificate
+----------+
The Dogtag CA is the entity that actually issues certificates. FreeIPA requests certificates from Dogtag using the RA Agent credential (an X.509 client certificate), which gives the FreeIPA framework authority to issue a certificate using any profile that accepts RA Agent authentication. This is a longstanding violation of an important framework design principle: the framework should only ever operate with the privileges of the authenticated principal.

Another problem is that users are burdened with the responsibility of crafting a CSR that is correct for the profile that will be used. This is a nontrivial task even for common types of certificates – it is downright painful once exotic extensions come into play. There is a lot that a user can get wrong, which may result in an invalid CSR or cause Dogtag to reject a request because it does not contain the data required by the profile. Furthermore it is reasonable to expect that any data that appear on a certificate are (or could be) stored in the directory, and could be populated into a certificate automatically according to the profile rather than by copying the data from the CSR.

On the topic of exotic extensions: although FreeIPA ensures that requested extension values of common extensions are appropriate and correspond to the subject principal’s attributes (e.g. making sure that all Subject Alternative Names are valid), no validation of uncommon extensions is performed. Nor should it be – not in the FreeIPA framework, especially; the complexity of validating extension values does not belong here, and validation is impossible if we have not yet taught FreeIPA about the extension or how to validate it, or if the validation involves custom LDAP schema. This is the problem we have with the IECUserRoles extension which we support with a profile but cannot validate – user self-service must be prohibited for profiles like this and certificate administrators must be trusted to only issue certificates with appropriate extension values.

Planned work to address (some of) these issues

The framework privilege separation issue (or rather, the lack of separation) is tracked in FreeIPA ticket #5011: [RFE] Forward CA requests to Dogtag or helper by GSSAPI. This will remove the RA Agent credential and the CA ACL enforcement logic from FreeIPA. Instead, the framework will obtain a proxy ticket to talk to Dogtag on behalf of the requestor principal, and Dogtag will authenticate the user, consult CA ACLs and (if all is well) continue with the certificate issuance process (which could still fail if the data in the CSR does not satisfy the profile requirements).

Implementation details for this ticket are not yet worked out but it will involve creating a service principal for Dogtag and giving Dogtag access to a keytab, performing GSSAPI authentication (probably in a Java servlet realm implementation) and providing a new profile authorisation class to read and enforce CA ACLs. Tomcat configuration and FreeIPA profile configurations will have to be updated (during upgrade) to use the new classes.

Ticket #4899: [RFE] mechanism to map principal info into certificate requests was filed to improve user experience when creating CSRs for a particular profile. An openssl req configuration file template could be stored for each profile and a command added to fill out the template and return the appropriate config for a given user, host or service. We could go further and supply config templates for other programs, or even create the whole CSR at once. Or even make it part of the cert-request command, bypassing a number of steps! The point is that there is currently a lot of busy-work around requesting certificates that is not necessary, and we can save all certificate users time and pain by improving the process.
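To illustrate the idea, the generated template for a hypothetical user profile might look something like the following openssl req configuration (everything here is invented; the actual format and fields are exactly what the ticket leaves open):

# req.conf - a hypothetical template filled out for the user 'alice'
[ req ]
prompt = no
encrypt_key = no
distinguished_name = dn
req_extensions = exts

[ dn ]
O = EXAMPLE.COM
CN = alice

[ exts ]
subjectAltName = email:alice@example.com

The user would then run something like openssl req -new -config req.conf -key alice.key -out alice.csr and submit the resulting CSR via ipa cert-request.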

With these enhancements, the architecture diagram changes to remove the RA certificate and provide assistance to the user in generating the CSR (which is abstracted as the user reading data from 389DS):

+----------+
|   User   | 1a. Read CSR template / attributes
|          |<--------------------------+
| +------+ |                           |
| | krb5 | |                           |
| |ticket| | 1b. Generate CSR          |
+-+--|---+-+                           |
     |                                 |
     | 2. ipa cert-request             |
     |    (CSR payload)                |
     v                                 |
+-----------+                          |
|  FreeIPA  |                          |
|           |                    +-----------+
|    +------+                    |           |
|    |krb5  |  3. Validate CSR   |   389DS   |
|    |proxy <------------------->|           |
|    |ticket|                    +-----------+
+----+--|---+                          ^
        |                              |
        | 4. Dogtag cert request       |
        |    (CSR payload)             |
        v                              |
+--------------------+                 |
|  Dogtag    +-------+                 |
|            |krb5   |                 |
|            |proxy  <-----------------+
|            |ticket |    5. Enforce CA ACLs
|            +-------+
+--------------------+
  6. Issue certificate

Future of FreeIPA PKI: my vision

There are still a number of issues that the improved architecture does not address. The data in CSRs still have to be just right. There is no way to validate exotic or unknown extension data, limiting use cases or restricting user self-service and burdening certificate issuers with the responsibility of getting it right. There is no way to pull data from custom LDAP schema into certificates, or even to automatically include data that we know is in the directory (e.g. email, KRB5PrincipalName or other kinds of alternative names).

The central concept of my vision for the future of FreeIPA’s PKI is that Dogtag should read from LDAP all the data it needs to produce a certificate according to the nominated profile (except for the subject public key which must be supplied by the requestor). This relieves the FreeIPA framework and Dogtag of most validation requirements, because we would ignore all data submitted except for the subject public key, subject principal, requestor principal and profile ID (CA ACLs would still need to be enforced).

In this architecture the PKCS #10 CSR devolves to a glorified public key format. In fact the planned CSR template feature is completely subsumed! We would undoubtedly continue to support PKCS #10 CSRs, and it would make sense to continue validating aspects of the CSR to catch obvious user errors; but this would be a UX nicety, not an essential security check.

The architecture sketch now becomes:

+----------+
|   User   |
|          | 1. Generate keypair
| +------+ |
| | krb5 | |
| |ticket| |
+-+--|---+-+
     |
     | 2. ipa cert-request
     |    (PUBKEY payload)
     v
+--------------+
|   FreeIPA    |
|              |                 +-----------+
| +----------+ |                 |           |
| |krb5 proxy| |                 |   389DS   |
| |  ticket  | |                 |           |
+-+----|-----+-+                 +-----------+
       |                               ^
       | 3. Dogtag cert request        |
       |    (PUBKEY payload)           |
       v                               |
+--------------------+                 |
|  Dogtag    +-------+                 |
|            |krb5   |                 |
|            |proxy  <-----------------+
|            |ticket |    4. Enforce CA ACLs
|            +-------+    5. Read data to be included on cert
+--------------------+
  6. Issue certificate

Consider the IECUserRoles example under this new architecture and observe the following advantages:

  • The user is relieved of the difficult task of producing a CSR with exotic extension data.
  • The profile reads the needed data (assuming it exists in standard or custom schema), allowing IECUserRoles or other exotic extensions to be easily supported.
  • Because we are not accepting raw extension data that cannot be validated, user self-service can be allowed (appropriate write access controls must still exist for the attributes involved, though) and admins are relieved of crafting or verifying the correct extension values.

In terms of implementation, over and above what was already planned this architecture will require several new Dogtag profile policy modules to be implemented, and these will be more complex (e.g. they will read data from LDAP). Pleasantly, these do not actually have to be implemented in or be formally a part of Dogtag – we can write, maintain and ship these Java classes as part of FreeIPA and easily configure Dogtag to use them.

In return we can remove a lot of validation logic from FreeIPA and profile configurations will be easier to write and understand (decide which extensions you want and trust the corresponding profile policy class to "do the right thing").

Importantly, it becomes possible for administrators to provide their own profile components implementing the relevant Java interface that read custom schema into esoteric or custom X.509 extensions, supporting any use case that we (the FreeIPA developers) don’t know about or can’t justify the effort to implement. Although this is technically possible today, moving to this approach in FreeIPA will simplify the process and provide significant prior art and expertise to help users or customers who want to do this.

Concluding thoughts

There are plans for other FreeIPA PKI features that I have not mentioned in this post, such as Let’s Encrypt / ACME support, or an interactive "profile builder" feature. The proposed architecture changes do not directly impact these features although simplifying profile configuration in any way would make the profile builder a more worthwhile / tractable feature.

The vision I have outlined here is my own at this point – although I have hinted at it over the past few months this post is my first real effort to expound and promote it. It is a significant shift from how we are currently doing things and will be a substantial amount of work but I hope that people will see the value in reducing user and administrator workload and being able to support new X.509 use cases without significant ongoing effort by the FreeIPA or Dogtag development teams.

Feedback on my proposal is strongly encouraged! You can leave comments here, send an email to me or the FreeIPA development mailing list, or continue the discussion on IRC (#freeipa on Freenode).

November 03, 2015

Hack your meetings
I don't think I've ever sat down to a discussion about security that didn't end with a plan to fix every problem ever, which of course means we have a rather impressive plan where failure is the only possible outcome.

Security people are terrible at scoping
I'm not entirely sure why this is, but almost every security discussion spirals out of control; totally unrelated topics always seem to come up, and sometimes dominate the conversation. Part of me suspects it's because there is so much to do, it's hard to know where to start.

I've recently dealt with a few meetings that had drastically different outcomes. The first got stuck on details; oceans would need to be boiled. The second meeting was fast and insanely productive. It took me a while to figure out why that meeting was fantastic. We were all social engineered, and it was glorious.

Meeting #1
The first meeting was a pretty typical security meeting. We have a bunch of problems, no idea where to even start, so we kept getting deeper and deeper, never solving anything. It wasn't a bad group, I don't think less of anyone. I was without a doubt acting just like everyone else. In fact I had more than one of these this week. I'm sure I'll have more next week.

Meeting #2
The meeting I'm calling meeting 2 was a crazy event unlike any I've ever had. We ended with a ton of actions and everyone happy with the results. It took me an hour of reflection to figure out what happened: one of the people on the call managed to social engineer everyone else. I have no idea if he knows this; it doesn't matter, because it was awesome and I'm totally stealing the technique.

A topic would come up, it would get some discussion, we'd know basically what we had to do, then we would hear "We should do X, I'll own the task". After the first ten minutes one person owned almost everything. After a while the other meeting attendees started taking tasks away because one person had too many.

This was brilliant.

Of course I could see this backfire if you have a meeting full of people happy to let you take all the actions, but most groups don't work like this. In almost every setting everyone wants to be an important contributing member.

I'm now eager to try this technique out. I'm sure there is nuance I'm not aware of yet, but that's half the fun in making any new idea your own.

Give it a try, let me know how it goes.

Join the conversation, hit me up on twitter, I'm @joshbressers

October 27, 2015

The Third Group
Anytime you do anything, no matter how small or big, there will always be three groups of people involved. How we interact with these groups can affect the outcome of our decisions and projects. If you don't know they exist it can be detrimental to what you're working on. If you know who they are and how to deal with them, a great deal of pain can be avoided, and you will put yourself in a better position to succeed.

The first group are those who agree with whatever it is you're doing. This group is easy to deal with as they are already in agreement. You don't have to do anything special with this group. We're not going to spend any time talking about them.

The second group is reasonable people who will listen to what you have to say. Some will come to agree with you, some won't. The ones who don't agree with you possibly won't even tell you they disagree. If what you're doing is a good idea, you'll get almost everyone in the second group to support you, provided you don't ignore them. This is the group we tend to ignore the most, but it's where you should put most of your energy.

The third group is filled with unreasonable people. These are people to whom you can prove your point beyond a reasonable doubt and they still won't believe you. There is absolutely nothing you can say to this group that will make a difference. These are the people who deny evidence; you can't understand why they deny the facts, and you will spend most of your time trying to bring them to your side. This group is not only disagreeable, it's dangerous to your cause. You waste your time with the third group while you alienate the second group. This is where most people incorrectly invest almost all their time and energy.

The second group will view the conversations between the first group and the third group and decide they're both insane. Members of the first and third group are generally there for some emotional reason. They're not always using facts or reality to justify their position. You cannot convince someone if they believe they have the moral high ground. So don't try.

Time spent trying to convince the third group is time not spent engaging the second group. Nobody wants to be ignored.

The Example

As always, these concepts are easier to understand with an example. Let's use climate change because the third group is really loud, but not very large.

The first group are the climate scientists. Pretty much all of them. They agree that climate change is real.

The second group is most people. Some have heard about climate change, a lot will believe it's real. Some could be a bit skeptical but with a little coddling they'll come around.

The third group are the deniers. These people are claiming that CO2 is a vegetable. They will never change their minds. No really never. I bet you just thought about how you could convince them just now. See how easy this trap is?

The first group spends huge amounts of time trying to talk to the third group. How often do you hear of debates, rebuttals, or "conversations" between the first and third groups here? How often do you hear about the scientists trying to target the second group? Even if it is happening, it's not interesting, so only first-third interactions get the attention.

The second group will start to think the scientists are just as looney as the third group. Most conversations between group one and three will end in shouting. A reasonable person won't know who to believe. The only way around this is to ignore the third group completely. Any time you spend talking to the third group hurts your relationship with the second group.

What now?

Start to think about the places you see this in your own dealings. Password debates. Closed vs open source. Which language is best. The list could go on forever. How do you usually approach these? Do you focus on the people who disagree with you instead of the people who are in the middle?

The trick with security is we have no idea how to even talk to the second group. And we rather enjoy arguing with the third. While talking to the second group can be tricky, the biggest thing at this point is to just know when you're burning time and good will by engaging with the third group. Walk away, you can't win, failure is the only option if you keep arguing.

Join the conversation, hit me up on twitter, I'm @joshbressers

October 22, 2015

Red Hat CVE Database Revamp

Since 2009, Red Hat has provided details of vulnerabilities with CVE names as part of our mission to provide as much information around vulnerabilities that affect Red Hat products as possible.  These CVE pages distill information from a variety of sources to provide an overview of each flaw, including information like a description of the flaw, CVSSv2 scores, impact, public dates, and any corresponding errata that corrected the flaw in Red Hat products.

Over time this has grown to include more information, such as CWE identifiers, statements, and links to external resources that note the flaw (such as upstream advisories, etc.).  We’re pleased to note that the CVE pages have been improved yet again to provide even more information.

Beyond just a UI refresh, and deeper integration into the Red Hat Customer Portal, the CVE pages now also display specific “mitigation” information on flaws where such information is provided.  This is an area where we highlight certain steps that can be taken to prevent the exploitability of a flaw without requiring a package update. Obviously this is not applicable to all flaws, so it is noted only where it is relevant.

In addition, the CVE pages now display the “affectedness” of certain products in relation to these flaws.  For instance, in the past, you would know that an issue affected a certain product either by seeing that an erratum was available (as noted on the CVE page) or by visiting Bugzilla and trying to sort through comments and other metadata that is not easily consumable.  The CVE pages now display this information directly on the page so it is no longer required that a visitor spend time poking around in Bugzilla to see if something they are interested in is affected (but has not yet had an erratum released).

To further explain how this works, the pages will not show products that would not be affected by the flaw.  For instance, a flaw against the mutt email client would not note that JBoss EAP is unaffected because EAP does not ship, and has never shipped, the mutt email client.  However, if a flaw affected mutt on Red Hat Enterprise Linux 6, but not Red Hat Enterprise Linux 5 or 7, the CVE page might show an erratum for Red Hat Enterprise Linux 6 and show that mutt on Red Hat Enterprise Linux 5 and 7 is unaffected.  Previously, this may have been noted as part of a statement on the page, but that was by no means guaranteed.  You would have to look in Bugzilla to see if any comments or metadata noted this; now it is quite plainly noted on the pages directly.

This section of the page, entitled “Affected Packages State”, is a table that lists the affected platform, package, and a state.  This state can be:

  • “Affected”: this package is affected by this flaw on this platform
  • “Not affected”: this package, which ships on this platform, is not affected by this flaw
  • “Fix deferred”: this package is affected by this flaw on this platform, and may be fixed in the future
  • “Under investigation”: it is currently unknown whether or not this flaw affects this package on this platform, and it is under investigation
  • “Will not fix”: this package is affected by this flaw on this platform, but there is currently no intention to fix it (this would primarily be for flaws that are of Low or Moderate impact that pose no significant risk to customers)

For instance, the page for CVE-2015-5279 would look like this, noting the above affected states:

By being explicit about the state of packages on the CVE pages, visitors will know exactly what is affected by this CVE, without having to jump through hoops and spend time digging into Bugzilla comments.

Other improvements that come with the recent changes include enhanced searching capabilities.  You can now search for CVEs by keyword, so searching for all vulnerabilities that mention “openssl”, “bind”, or “XSS” is now possible.  In addition, you can filter by year and impact rating.

The Red Hat CVE pages are a primary source of vulnerability information for many, a gateway of sorts that collects the most important information that visitors are often interested in, with links to further sources of information that are of interest to the vulnerability researcher.

Red Hat continues to look for ways to provide extra value to our customers.  These enhancements and changes are designed to make your jobs easier, and we believe that they will become an even greater resource for our customers and visitors.  We hope you agree!

October 20, 2015

How do we talk to normal people?
How do we talk to regular people? What's going to motivate them? What matters to them?

You can easily make the case that business is driven by financial rewards, but what can we say or do to get normal people to understand us, to care? Money? Privacy? Donuts?

I'm not saying we're going to turn people into experts, I'm not even suggesting they will reach a point of being slightly competent. Most people can't fix their car, or wire their house, or fix their pipes. Some can, but most can't. People don't need to really know anything about security, they don't want to, so there's no point in us even trying. When we do try, they get confused and scared. So really this comes down to:

Don't talk to normal people

Talking to them really only makes things worse. What we really need is for them to trust the security people. Trust that we'll do our jobs (which we're not currently doing). Trust that the products they buy will be reasonably secure (which they're not currently). Trust that the industry has their best interest in mind (which it doesn't currently). So in summary, we are failing in every way.

Luckily for us most people don't seem to be noticing yet.

It's also important to clarify that some people will never trust us. Look at climate change denial. Ignore these people. Every denier you talk to who is convinced Google sneaks into their house at night and steals one sock is wasted time and effort. Focus on people who will listen. As humans we like to get caught up with this "third" group, thinking we can convince them. We can't, don't try. (The first group is us, the second is reasonable people, we will talk about this some other day)

So back to expectations of normal people.

I'm not sure how to even describe this. I try to think of analogies, or to compare it to existing industries. Nothing fits. Every analogy we use, every existing industry, has a relatively well understood model surrounding it. Safes have a physical proximity requirement, the safety of cars doesn't account for malicious actors, doors really only keep out honest people. None of these work.

We know what some of the problems are, but we don't really have a way to tell people about them. We can't use terms that are even moderately complex. Every time I work through this I keep coming back to trust. We need people to trust us. I hate saying that, blind trust is never a good thing. We have to earn it.

Trust me, I'm an expert!

So let's assume our only solution for the masses at this point is "trust". How will anyone know who to trust? Should I trust the guy in the suit? What about the guy who looks homeless? That person over there uses really big words!

Let's think about some groups that demand a certain amount of trust. You trust your bank enough to hold your money. You have to trust doctors and nurses. You probably trust engineers who build your buildings and roads. You trust your teachers.

The commonality there seems to be education and certification. You're not going to visit a doctor who has no education, nor hire an engineer who failed his certification exam. Would that work for us? We have some certifications, but the situation is bleak at best, and the brightest folks have zero formal qualifications.

Additionally, who is honestly going to make certifications a big deal when everything we need to know changes every 6 months?

As I write this post I find myself getting more and more confused. I wonder if there's any way to fix anything. Let's just start simple. What's important? Building trust, so here's how we're going to do it.
  1. Do not talk, only answer questions (and don't be a pedantic jerk when you do)
  2. Understand your message, know it like the back of your hand
  3. Be able to describe the issue without using any lingo (NONE)
  4. Once you think you understand their challenges, needs, and asks; GOTO 1
I'm not saying this will work, but I'm hopeful that if we start practicing some level of professionalism we can build trust. Nobody ever built real trust by talking; you build trust by listening. Maybe we've spent so much time being right we never noticed we were wrong.

Join the conversation, hit me up on twitter, I'm @joshbressers

October 19, 2015


While I tend to play up bug 968696 for dramatic effect, the reality is we have a logical contradiction on what we mean by ‘admin’ when talking about RBAC.

In early iterations of OpenStack, roles were global. This is reflected in many of the policy checks that only look for the global role. However, prior to the Keystone-Light rewrite, role assignments became scoped to tenants; this shows up in the Keystone git history. As this pattern got established, some people wrote policy checks that assert:

role==admin and tenant_id=resource.tenant_id

This contradicts the global-ness of the admin roles. If I assign

(‘joeuser’, ‘admin’,’mytenant’)

I’ve just granted them the ability to perform all of the admin operations.

Thus, today we have a situation where, unless deployers rewrite the default policy, they have to assign the admin role only to users who are trusted to be admins on the whole deployment.
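To make the contradiction concrete, the two styles of check look roughly like this in oslo.policy JSON syntax (the rule names are illustrative, not taken from any particular project's policy file):

"global_admin": "role:admin",
"project_admin": "role:admin and tenant_id:%(tenant_id)s"

A user holding the assignment above passes the first rule against every resource in the cloud, even if the deployer only ever intended the second.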

We have a few choices.

  1. Remove Admin from the scoping for projects. Admin is a special role reserved only for system admins. Replace project scoped admins with ‘manager’ or some other comparable role. This is actually the easiest solution.
  2. Create a special project for administrative actions. Cloud admin users are assigned to this project. Communicate that project Id to the remote systems. This is what the policy.v3cloudsample.json file recommends (see the sketch after this list).
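In that style, the cloud-wide admin rule looks something like the following (an illustrative paraphrase; the shipped policy.v3cloudsample.json is authoritative and uses a reserved ID token the same way):

"cloud_admin": "role:admin and project_id:admin_project_id",
"identity:create_region": "rule:cloud_admin"

The literal admin_project_id token has to be replaced with the real project's ID at deployment time, which is exactly the templating problem described next.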

However, option 2 is really not practical without some significant engineering. For a new deployment, it would require the following steps.

  1. Every single policy file would have to be “templatized”
  2. The deployment mechanism would have to create the admin project, get its ID, and string-replace it in the policy file.

We could make this happen in Devstack. The same is true of Puppet, OSAD, and Fuel. There would be a lag, and the downstream mechanisms would eventually pick it up, multiple releases down the road.

I went through this logic back when I started proposing the Dynamic Policy approach. If OpenStack managed policy deployment via an internal mechanism, then adding things like the admin_project_id becomes trivial.

While I think Dynamic Policy provides a lot of value, I concede that it is overkill for just substituting in a single value. The real reason I am backing off Dynamic Policy for the moment is that we need to better understand what part of policy should be dynamic and what part should be static; we are only just getting that clear now.

There is an additional dimension to the admin_project_id issue that several developers want solved. In larger deployments, different users should have administrative capabilities on different endpoints. Sometimes this is segregated by service (storage admins vs network admins) and sometimes by region.

Having a special project clearly communicates the intention of RBAC. But even clearer would be to have the role assignment explicitly on the catalog item itself. Which of the following statements do you think is clearer?

  1. Assign Joe the admin role on the project designated to administer endpoint 0816.
  2. Assign Joe the admin role on endpoint 0816.

I think you will agree that it is the latter. Making this happen would not be too difficult on the Keystone side, and it would require fairly simple changes to the policy enforcement of the remote projects. We’ve already discussed “endpoint binding of tokens”, where an endpoint needs to know its own ID. Having a new “scope” in a token that is an endpoint_id would be fairly easy to execute.

One drawback, though, is that all of the client tooling would need to change. Horizon, openstackclient, and keystoneauth would need to handle “endpoint” as a scope. This includes third-party integrations, which we do not control.

All of these constraints drive toward a solution where we link the admin project to the existing endpoint ids.

  1. Make the catalog a separate domain.
  2. Make regions, services, and endpoints projects.
  3. Use the rules of Hierarchical Multitenancy to manage the role assignments for a project.

On the enforcing side, endpoints *must* know their own ID. They would have checks that assert token.project_id == self.endpoint_id, along the lines of the sketch below.
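A minimal sketch of that check as Python WSGI middleware (the class, the way endpoint_id is configured, and the header name are my assumptions for illustration, not an existing OpenStack API):

class EndpointScopeMiddleware(object):
    """Reject requests whose token is not scoped to this endpoint."""

    def __init__(self, app, endpoint_id):
        self.app = app
        # The endpoint must be configured with its own ID.
        self.endpoint_id = endpoint_id

    def __call__(self, environ, start_response):
        # Assume an auth middleware has already validated the token
        # and exposed its scope in the WSGI environ.
        token_project_id = environ.get('HTTP_X_PROJECT_ID')
        if token_project_id != self.endpoint_id:
            start_response('403 Forbidden',
                           [('Content-Type', 'text/plain')])
            return [b'Token is not scoped to this endpoint\n']
        return self.app(environ, start_response)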

This is the “least magic” approach. It reuses existing abstractions without radically altering them. The chance of a collision between an existing project_id and an endpoint_id is vanishingly small, and could be addressed by modifying one or the other accordingly. The biggest effort would be in updating the policy files, but this seems to be within the capability of cross project efforts.

We will be discussing this at the Cross Project session on Global Admin at the summit.

Please read this, process it, and be ready to help come to a proper conclusion of this bug.

“admin”-ness not properly scoped
Original Dynamic Policy Post
Current Dynamic Policy Wiki
Endpoint_ID from URL
Catalog Scoped Roles

October 13, 2015

How do we talk to business?
How many times have you tried to get buy-in for a security idea at work, or with a client, only to have them say "no"? Even though you knew it was really important, they still made the wrong decision.

We've all seen this more times than we can count. We usually walk away grumbling about how sorry they'll be someday. Some of them will be, some won't. The reason is always the same though:

You're bad at talking to the business world

You can easily make the argument that money is a big motivator for a business. For some it's the only motivator. Businesses want to save money, prevent problems, be competitive, and stay off the front page for bad news. The business folks don't care about technical details as much as they worry about running their business. They don't worry about which TLS library is the best. They want to know how something is going to make their lives easier (or harder).

If we can't frame our arguments in this context, we have no argument; we're really just wasting time.

Making their lives easier

We need to answer the question, how can security make lives easier? Don't answer too quickly, it's complicated.

Everything has tradeoffs. If we add a security product or process, what's going to be neglected? If we purchase a security solution, what aren't we purchasing with those funds? Some businesses would compare these choices to buying food or tires. If you're hungry, you can't eat tires.

We actually have two problems to solve.
  1. Is this problem actually important?
  2. How can I show the value?
Deciding whether something is important is always tricky. When you're a security person, lots of things seem important but aren't really. Let's say inside your corporate network someone wants to disable their firewall. Is that important? It could be. Is missing payroll because of the firewall more important? Yes.

First you have to decide how important the thing you have in mind really is. I generally ponder whether I'd be willing to get fired over it. If the answer is "no", it's probably not very important. We'll talk about how to determine what's important in the future (it's really hard to do).

Let's assume we have something that is important.

Now how do we bring this to the people in charge?

Historically I would write extremely long emails or talk to people at length about how smart I am and how great my idea is. This never works.

You should write up a business proposal. Lay out the costs, benefits, requirements, features, all of it. This is the sort of thing business people like to see. It's possible you may even figure out what you're proposing is a terrible idea before you even get it in front of someone who can write a check. Think for a minute what happens when you develop a reputation for only showing up with good well documented ideas? Right.

Here's how this usually works. Someone has an idea, then it gets debated for days or weeks. It's not uncommon to spend more time discussing an idea than it would take to implement the thing. By writing down what's going on, there is no ambiguity, there's no misunderstanding, there's no pointless discussion about ketchup.

I actually did this a while back. There was a discussion about a feature; it had lasted for weeks, nobody had a good answer, and the general idea kept going back and forth. I wrote up a proper business proposal and it actually changed my mind: it was a HORRIBLE idea (I was in favor of it before that). I spent literally less than a single work day and cast our decision in stone. In about 6 hours I managed to negate hundreds of hours of debate. It was awesome.

The language of the business is one of requirements, costs, and benefits. It's not about outsmarting anyone or seeing who knows the biggest word. There's still plenty of nuance here, but for now if you're looking to make the most splash, you need to learn how to write a business plan. I'll leave how you do this as an exercise to the reader, there are plenty of examples.

Join the conversation, hit me up on twitter, I'm @joshbressers

October 06, 2015

What's filling the vacuum?
Anytime there's some sort of vacuum, something will appear to fill the gap. In this context we're going to look at what's filling the vacuum in security. There are a lot of smart people, but we're failing horribly at getting our message out.

The answer to this isn't simple. You have to look at what's getting attention that doesn't deserve it. Just because we know a product, service, or idea is hogwash doesn't mean non-security people know it. They have to find someone to trust, then listen to what that person has to say. Unfortunately, when you're talking about extremely complex and technical problems, they listen to whoever they can understand, as there's no way they can determine who is technically more correct. They're going to follow whoever sounds the smartest.

If you've never seen the musical "The Music Man" you should. This is what we're dealing with.

Rather than dwell on it and try to call out the snake oil, we should put our effort into the messaging. We'll never have a better message than this group, but we really only need to be good enough, not perfect. We always strive for our messages to be perfect, but that's an impossible goal. The goal here is to sound smarter than the con men. This is harder than it sounds unfortunately.

We can use the crypto backdoor conversation as a good example. There are many groups claiming we should have backdoors in our crypto to keep ourselves safer. Security people know this is a bad idea, but here's what the conversation sounds like.


We need crypto backdoors to stop the bad guys, trust us, we're the good guys


<random nonsense>, backdoors don't work
We don't do a good job of telling people why backdoors don't work. Why should they trust us? Why don't backdoors work? Who will keep us safe? Our first instinct would be to frame the discussion like this:

  1. Backdoors never work
  2. Look at the TSA key fiasco
  3. Encryption is hard, there's no way to get this right

This argument won't work. The facts aren't what's important here. You have to think about how you make people feel. We've just confused them, so now they don't like us. Technical details are fine if you're talking to technical people, but any decent technical person probably doesn't need this explained.

We have to think about how we can make people feel bad about encryption backdoors. That's the argument we need. What can we say that gives them the feels?

I don't know if these work, they're just some ideas I have. I've yet to engage anyone on this topic.

What are things people worry about? They do value their privacy. The old "if you have nothing to fear you have nothing to hide" argument only works when it's not your neighbor who has access to your secrets.

Here's what I would ask
Are you OK with your neighbor/wife/parent having access to your secrets?
Then see where the conversation goes. You can't get technical; we have to focus on emotions, which is super hard for most security people. If you try this out, let me know how it goes.

Join the conversation, hit me up on twitter, I'm @joshbressers

September 29, 2015

We're losing the battle for security
The security people are currently losing the battle to win the hearts and minds of the people. The war is far from over but it's not currently looking good for our team.

As with all problems, if there is a vacuum, something or someone ends up filling it. This is happening right now in security. There are a lot of really smart security people out there. We generally know what's wrong, and sometimes even know how to fix it, but the people we need to listen aren't listening. I don't blame them either; we're not telling them what they need to know.

On the other side though, we also think we understand the problems, but we don't really. Everything we know comes from an echo chamber inside a vacuum. We understand our problems, not their problems.

We have to move our conversations into the streets, the board rooms, and the CIO offices. Today all these people think we're just a bunch of nuts ranting about crazy things. The problem isn't that we're all crazy, it's that we're not talking to people correctly, which also means we're not listening either.

We have to stop talking about how nobody knows anything and start talking about how we're going to help people. Security isn't important to them, they have something they want to do, so we have to help them understand how what we do is important and will help them. We have to figure out how to talk about what we do in words they understand and will motivate them.

How many times have you tried to explain to someone why they should use a firewall and even though it should have been completely obvious, they didn't use it?

How many times have you tried to get a security bug fixed but nobody cared?

How many times have you tried to get a security feature, like stack protector, enabled by developers but nobody wanted to listen?

There are literally thousands of examples we could cover. In virtually every example we failed because we weren't telling the right story. We might have thought we were talking about security, but we really were saying "I'm going to cost more money and make your life harder".

It's time we figure out how to tell these stories. I don't have all the answers, but I'm starting to notice some patterns now that I've escaped from the institution.

There are three important things we're going to discuss in the next few posts:

  1. What's filling the vacuum?
  2. How do we talk to the business world?
  3. How do we talk to normal people?
The vacuum is currently being filled by a lot of snake oil. I'm not interested in calling specific people out, you know who they are. We'll talk about what we can learn from this group. They know how to interact with people, they're successfully getting people to buy their broken toys. This group will go away if we learn how to talk about what we do.

Then we'll talk about what motivates a business. They don't really care about security, they care about making their business successful. So how can we ensure security is part of the solution? We know what's going to happen if there's no security involved.

Lastly we'll talk about the normal people, folks like your neighbors or parents, who don't have a clue what's going on and never will. This group is going to be the hardest of all to talk to. I sort of covered this group in a previous post: How can we describe a buffer overflow in common terms? These are people who have to be forced to wear seat belts; it's not going to be pleasant.

If you have any good stories or examples that would make these stories better, be sure to let me know.

Join the conversation, hit me up on twitter, I'm @joshbressers

September 27, 2015


OpenStack is a big distributed system. FreeIPA is designed for security in distributed systems. In order to develop and test each of them, separately or together, I need a distributed system. Virtualization has been a key technology for making this kind of work possible, and OpenStack is great at managing virtualization. Added to that are the benefits found when we “fly our own airplanes.” Thus, I am using OpenStack to develop OpenStack.

Early to Rise: a 757-200 lifts off Rwy 1 at SFO at sunrise. Steve Okay took this while waiting for a flight to LAS. Credit Steve Okay, used with permission.

One of my tasks is to make it possible to easily reproduce my development environment. In the past, I would have done something like this with Bash. However, I have been coding in Python enough over the past 3 years that it is as easy (if not easier) for me to think in Python as in Bash. And again, there are the benefits of actually testing and working with the OpenStack client APIs.

In order to install Nova properly, I need two networks: a public one that connects to the outside world, and a private one that Nova can manage. Each network needs a subnet, and the private subnet needs a router to connect it to the public network.

Horizon Network Topology screen showing multiple successful Ossipee network setups.

For development, I need two virtual machines. One will run as the IPA Server, and one will run as the OpenStack controller (all-in-one install). In future deployments, I will need multiple controllers fronted by HA proxy, so I want a pattern that will extend.

This is a development setup, which means that I will be coding, as I do, via trial and error. I need to be able to return to a clean setup with a minimum of fuss. Not just to wipe everything, but perhaps to tear down and recreate only the hosts, or a single host, or the network.

I realized I wanted a system that ran as a set of tasks. Each would run forward to create, or backward to tear down. I wanted to be able to compose individual tasks into larger tasks, and so forth.

There are many tools out there that I could have used. Ansible 2 OpenStack modules are based on the Shade library. There are other orchestration tools. However, for a small task, I want small code. The whole thing is a single python module, and is understandable in a single viewing. This is personal code, tailored to my exact inseam and sleeve length. It is easy for me to see how to modify it in the future if I want it idempotent or to handle adding random hosts to an existing network.

Well, at least that is how it started. Like most things, it has grown a bit as it is used. My whole team needs the same setups as I have. But still, this is not meant to be a shippable project; this is a software “jig” and will not be maintained once it is no longer needed.

However, the code is worth recording. There are a couple of things I feel it offers. First, it shows how to use the python-nova and python-neutron clients with a Session, getting the configuration from the command line:

    # (logging, sys, ksc_auth and ksc_session are imported at module scope)
    def session(self):
        if not self._session:
            auth_plugin = ksc_auth.load_from_argparse_arguments(self.args)
            try:
                if not auth_plugin.auth_url:
                    logging.error('OS_AUTH_URL not set.  Aborting.')
                    sys.exit(1)
            except AttributeError:
                # Not every auth plugin exposes auth_url; let session
                # construction surface any real configuration problem.
                pass
            self._session = ksc_session.Session.load_from_cli_options(
                self.args, auth=auth_plugin)
        return self._session

It has examples, including the order, of the necessary Neutron commands to build a network and connect it to an external one:
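A condensed sketch of those calls using python-neutronclient (the names, the CIDR, and the public_net_id and session variables are illustrative assumptions, not Ossipee's actual values):

from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(session=session)

# 1. Router, with its gateway set to the external network
router = neutron.create_router(
    {'router': {'name': 'test-router',
                'external_gateway_info': {'network_id': public_net_id}}})

# 2. Private network
network = neutron.create_network({'network': {'name': 'test-net'}})

# 3. Subnet on the private network
subnet = neutron.create_subnet(
    {'subnet': {'network_id': network['network']['id'],
                'ip_version': 4,
                'cidr': ''}})

# 4. Router interface connecting the subnet to the router
neutron.add_interface_router(router['router']['id'],
                             {'subnet_id': subnet['subnet']['id']})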

As you can see here, the order is

  1. Router
  2. Network
  3. SubNet
  4. RouterInterface

and the reversal of the steps to tear it down:

  1. RouterInterface
  2. Subnet
  3. Network
  4. Router
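The teardown calls mirror the creation calls in that reversed order; a sketch, reusing the objects captured at creation time:

# Detach the subnet from the router before deleting anything.
neutron.remove_interface_router(router['router']['id'],
                                {'subnet_id': subnet['subnet']['id']})
neutron.delete_subnet(subnet['subnet']['id'])
neutron.delete_network(network['network']['id'])
neutron.delete_router(router['router']['id'])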

Beyond the virtual infrastructure necessary to run an OpenStack install, Ossipee creates a host entry for each virtual machine, and resets the OpenSSH known_hosts entry by removing old values:['ssh-keygen', '-R', ip_address])

And suppressing the unknown_key check for the first connection:

['ssh',
                 '-o', 'StrictHostKeyChecking=no',
                 '-o', 'PasswordAuthentication=no',
                 '-l', self.plan.profile['cloud_user'],
                 ip_address, 'hostname'])

Finally, it generates an Ansible inventory file that can be used with our playbooks to install IPA and OpenStack. More about those later.
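The inventory is a plain INI file; its shape is roughly as follows (the group names are my guess at the layout, and the host names are the ones used elsewhere in this series, not Ossipee's exact output):

[ipa]
ipa.ayoung.os1.test

[controller]
openstack.ayoung.os1.test

[all:vars]
ansible_ssh_user=cloud-user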

September 25, 2015

Keystone Unit Tests

Running the Keystone unit tests takes a long time.

To start with a blank slate, you want to make sure you have the latest from master and a clean git repository.

cd /opt/stack/keystone
git checkout master
git rebase origin/master

git clean -xdf keystone/
time tox -r
  py27: commands succeeded
ERROR:   py34: commands failed
  pep8: commands succeeded
  docs: commands succeeded
  genconfig: commands succeeded

real	8m17.530s
user	33m1.851s
sys	0m56.828s

The -r option to tox recreates the tox virtual environments. Additional runs should go faster.
time tox

  py27: commands succeeded
ERROR:   py34: commands failed
  pep8: commands succeeded
  docs: commands succeeded
  genconfig: commands succeeded

real	5m52.367s
user	30m57.366s
sys	0m35.403s

To run just the py27 tests:

time tox -e py27

Ran: 5695 tests in 243.0000 sec.
  py27: commands succeeded
  congratulations :)

real	4m18.144s
user	28m51.506s
sys	0m31.286s

Not much faster, so we know where most of the time goes. It also reported the slowest tests:
keystone.tests.unit.token.test_fernet_provider.TestFernetKeyRotation.test_rotation 2.856

So we have 5000+ tests that take 4 minutes to run.

Running just a single test:

time tox -e py27  -- keystone.tests.unit.token.test_fernet_provider.TestFernetKeyRotation.test_rotation

Ran: 1 tests in 4.0000 sec.

  py27: commands succeeded
  congratulations :)

real	0m17.200s
user	0m15.802s
sys	0m1.681s

17 seconds is a little long, considering the test itself only ran for four seconds of it. Once in a while this is not a problem, but if it breaks the flow of thought during coding, it is problematic.

What can we shave off? Let's see if we can avoid the discovery step, run inside the venv, and specify exactly the test we want to run:

. .tox/py27/bin/activate
 time python -m keystone.tests.unit.token.test_fernet_provider.TestFernetKeyRotation.test_rotation
Tests running...

Ran 1 test in 2.770s

real	0m3.137s
user	0m2.708s
sys	0m0.428s

That seems to have had only an overhead of a second.

OK, what about some of the end-to-end tests that set up an HTTP listener and talk to the database, such as those in keystone.tests.unit.test_v3_auth?

time python -m keystone.tests.unit.test_v3_auth
Tests running...
Ran 329 tests in 91.925s

real	1m32.459s
user	1m28.260s
sys	0m4.669s

Fast enough for a pre-commit check, but not for “run after each change.” How about a single test?

time python -m keystone.tests.unit.test_v3_auth.TestAuth.test_disabled_default_project_domain_result_in_unscoped_token
Tests running...

Ran 1 test in 0.965s

real	0m1.382s
user	0m1.308s
sys	0m0.076s

I think it is important to run the tests before you write a line of code, and to run the tests continuously. But if you don’t run the entire body of unit tests, how can you make sure you are exercising the code you wrote? One technique is to put in a break-point.

I want to work on the roles infrastructure. Specifically, I want to make the assignment of one (prior) role imply the assignment of another (inferred) role. I won’t go into the whole design, but I will start with the database structure. Role inference is a many-to-many relationship. As such, I need to implement a table which has two IDs: prior_role_id and inferred_role_id. Let's start with the database migrations for that.
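A minimal sketch of what that sqlalchemy-migrate script could look like (the table name and column types are my working assumptions, not the final schema):

import sqlalchemy as sql

def upgrade(migrate_engine):
    meta = sql.MetaData()
    meta.bind = migrate_engine
    # Each row records that holding the prior role implies holding the
    # inferred role; the composite primary key prevents duplicate pairs.
    implied_role = sql.Table(
        'implied_role', meta,
        sql.Column('prior_role_id', sql.String(64), primary_key=True),
        sql.Column('inferred_role_id', sql.String(64), primary_key=True))
    implied_role.create(migrate_engine, checkfirst=True)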

time python -m keystone.tests.unit.test_sql_upgrade
Tests running...

Ran 30 tests in 3.528s

real	0m3.948s
user	0m3.874s
sys	0m0.075s

OK… full disclosure: I’m writing this because I did too much before writing tests, my tests were hanging, and I want to redo things slower and more controlled to find out what went wrong. I have some placeholders for migrations: a way to keep from changing the migration number for my review as other reviews get merged. They just execute:

def upgrade(migrate_engine):
    pass
So… I’m going to cherry-pick this commit and run the migration test.

migrate.exceptions.ScriptError: You can only have one Python script per version, but you have: /opt/stack/keystone/keystone/common/sql/migrate_repo/versions/ and /opt/stack/keystone/keystone/common/sql/migrate_repo/versions/

Already caught up with me…

$ git mv keystone/common/sql/migrate_repo/versions/ keystone/common/sql/migrate_repo/versions/ 
(py27)[ayoung@ayoung541 keystone]$ time python -m keystone.tests.unit.test_sql_upgrade
Tests running...

Ran 30 tests in 3.576s

real	0m4.028s
user	0m3.951s
sys	0m0.081s

OK… let's see what happens if I put a breakpoint in one of these tests.

def upgrade(migrate_engine):
     import pdb; pdb.set_trace()

And run

(py27)[ayoung@ayoung541 keystone]$ time python -m keystone.tests.unit.test_sql_upgrade
Tests running...
-> import pdb; pdb.set_trace()

Ctrl-C kills the test (or cont keeps it running). This may not always work; some of the more complex tests manipulate the thread libraries, which will keep the breakpoint from interrupting the debugging thread. For these cases, use rpdb and telnet.

More info about running the tests in OpenStack can be found here:
I wrote about using rpdb to debug here:

September 23, 2015

Encryption you don’t control is not a security feature

Catching up on my blog reading this morning led me to an article discussing Apple’s iMessage program and, specifically, the encryption it uses and how it’s implemented.  Go ahead and read the article; I’ll wait.

The TL;DR of that article is this: encryption you don’t control is not a security feature.  It’s great that Apple implemented encryption in their messaging software but since the user has no control over the implementation or the keys (especially the key distribution, management, and trust) users shouldn’t expect this type of encryption system to actually protect them.

For Apple, it’s all about UI and making it easy for the user.  In reality, what they’ve done is dumbed down the entire process and forced users to remain ignorant of their own security.  Many users applaud these types of “just make it work and make it pretty” interfaces but at the same time you end up with an uneducated user who doesn’t even realize that their data is at risk.  Honestly, it’s 2015… if you don’t understand information security… well, to quote my friend Larry “when you’re dumb, you suffer”.

Yes, that’s harsh.  But it’s time for people to wake up and take responsibility for their naked pictures or email messages being publicized.  I’m assuming most everyone makes at least a little effort toward physically securing their homes (e.g. locking doors and windows).  Why should your data be any less protected?

In comparison, I’ll use Pidgin and OTR as an example of a better way to encrypt messaging systems.  OTR doesn’t use outside mechanisms for handling keys, it clearly displays whether a message is simply encrypted (untrusted) or whether you’ve verified the key, and it’s simple to use.

One thing I’ll say about Apple’s iMessage is that it at least starts to fix the problem.  I’d rather have ciphertext being sent across the network than plaintext.  Users just need to understand what the risks are and evaluate whether they are okay with those risks or not.

September 22, 2015

How to build trust
One of the hardest things we have to do is build trust.

It's not hard for everyone, just us specifically. It's not in our nature.

Security people tend not to trust anyone. Everything we do is based on not trusting anyone; it's literally our job. Trust is a two-way street. If you expect someone to trust you, you have to trust them to a certain degree. This is our first problem. We don't trust anybody, often for good reason, but it's a problem. We have to learn how to trust others so we can get them to trust us. This is of course easier said than done. Would you trust someone with your password? I wouldn't, but a lot of people do. This is a place where they won't understand why we don't trust them. Of course sharing a password isn't a great idea, but that's not the point.

I have a recent example that sort of explains the problem. It's not related to security, but the idea is there. A friend does graphic design work and was tasked to create a logo. This is easy enough; he made a few rather nice logos for the client to choose from, but then things went crazy. None were good enough, so they just kept bikeshedding the logos. The designer was of course very upset, as this isn't productive, and honestly the end result always ends up looking almost exactly like one of the first few logos. Furthermore, the people commenting aren't graphics people, so many of the suggestions were just silly. Because they didn't trust the designer, now the designer doesn't trust them.

So how should this scenario have gone down? Ideally you look at what the designer gives you and give some feedback along the lines of what you think, things like "It has too many colors" or "It's not bright enough", not "The second letter A should be 3 pixels to the left". You have to trust that your designer will give you something that does what you need it to do. It won't be perfect, it just has to be good enough. And in time, as trust is built between you and the designer, the results will just keep getting better.

How many times have you sent back a presentation or whitepaper because it wasn't perfect? Or decided to just do something yourself because the writer wasn't doing a good enough job? Those people no longer like you. They think you're a rude inconsiderate jerk. They're probably right.

You can't just show up and demand trust, that never works. You can't demand perfection. Everyone is good at their own things, you have to trust that if you're working with a writer, or designer, or developer, they're going to do a job that's good enough, possibly better than you could ever do, if you let them.

Join the conversation, hit me up on twitter, I'm @joshbressers

September 18, 2015

Using the ipa CLI from an unenrolled workstation

FreeIPA is a useful tool for managing hosts. I find myself wanting to do work on remote systems from my desktop using the ipa CLI. Here’s how I set it up.

I have installed the IPA server on a Red Hat cloud image, so the default user for remote access is cloud-user. For Fedora it would be ‘fedora’ and for CentOS it would be ‘centos’, but the rest is the same.

My remote host has a FQDN of ipa.ayoung.os1.test, a non-routable IPv4 address, and an entry in my /etc/hosts file that references it:

ipa.ayoung.os1.test

I can ssh to the host via:
ssh cloud-user@ipa.ayoung.os1.test

I’ll make a local directory to stash files:

mkdir /tmp/ayoung.os1
scp  cloud-user@openstack.ayoung.os1.test:/etc/krb5.conf /tmp/ayoung.os1
scp  cloud-user@openstack.ayoung.os1.test:/etc/ipa/default.conf /tmp/ayoung.os1/ipa.conf
curl -o /tmp/ayoung.os1/ca.crt http://ipa.ayoung.os1.test/ipa/config/ca.crt

I can get a Kerberos TGT once I’ve set the appropriate Environment variables.

export KRB5CCNAME=/tmp/ayoung.os1/ccache
export KRB5_CONFIG=/tmp/ayoung.os1/krb5.conf
kinit admin@AYOUNG.OS1.TEST
Password for admin@AYOUNG.OS1.TEST: 
[ayoung@ayoung541 ayoung.os1]$ klist
Ticket cache: FILE:/tmp/ayoung.os1/ccache
Default principal: admin@AYOUNG.OS1.TEST

Valid starting       Expires              Service principal
09/18/2015 13:37:23  09/19/2015 13:37:20  krbtgt/AYOUNG.OS1.TEST@AYOUNG.OS1.TEST

IPA uses NSS as its cryptography library, and assumes the certificates are stored in /etc/ipa/nssdb/.
Older versions had it in /etc/pki/nssdb. Since my laptop is not enrolled as an IPA client, I need to make this directory and populate the NSS certificate store.

sudo mkdir  /etc/ipa/nssdb
sudo chown 666 /etc/ipa/nssdb
sudo certutil -N -d /etc/ipa/nssdb
sudo certutil -d /etc/ipa/nssdb -A -n 'IPA CA' -t CT,, -a -i /tmp/ayoung.os1/ca.crt 
sudo chmod 644 /etc/ipa/nssdb/*

Test that the NSS Database works

certutil -d /etc/ipa/nssdb -L

Certificate Nickname                                         Trust Attributes

IPA CA                                                       CT,, 

Run the ipa client like this:

$ ipa -c /tmp/ayoung.os1/ipa.conf user-find
2 users matched
  User login: admin
  Last name: Administrator
  Home directory: /home/admin
  Login shell: /bin/bash
  UID: 733200000
  GID: 733200000
  Account disabled: False
  Password: True
  Kerberos keys available: True

  User login: ayoung
  First name: Adam
  Last name: Young
  Home directory: /home/ayoung
  Login shell: /bin/sh
  Email address: ayoung@ayoung.os1.test
  UID: 733200001
  GID: 733200001
  Account disabled: False
  Password: True
  Kerberos keys available: True
Number of entries returned 2

September 17, 2015

Important security notice regarding signing key and distribution of Red Hat Ceph Storage on Ubuntu and CentOS

Last week, Red Hat investigated an intrusion on the sites of both the Ceph community project ( and Inktank (, which were hosted on a computer system outside of Red Hat infrastructure. provided releases of the Red Hat Ceph product for Ubuntu and CentOS operating systems. Those product versions were signed with an Inktank signing key (id 5438C7019DCEEEAD). provided the upstream packages for the Ceph community versions, signed with a Ceph signing key (id 7EBFDD5D17ED316D). While the investigation into the intrusion is ongoing, our initial focus was on the integrity of the software and distribution channel for both sites.

To date, our investigation has not discovered any compromised code available for download on these sites. However, we cannot fully rule out the possibility that some compromised code was available for download at some point in the past.

For, all builds were verified to match known good builds from a clean system. However, we can no longer trust the integrity of the Inktank signing key, and therefore have re-signed these versions of the Red Hat Ceph Storage products with the standard Red Hat release key. Customers of Red Hat Ceph Storage products should only use versions signed by the Red Hat release key.

For, the Ceph community has created a new signing key (id E84AC2C0460F3994) for verifying their downloads.  See the community announcement for more details.

Customer data was not stored on the compromised system. The system did have usernames and hashes of the fixed passwords we supplied to customers to authenticate downloads.

To reiterate, based on our investigation to date, the customers of the CentOS and Ubuntu versions of Red Hat Ceph Storage should take action as a precautionary measure to download the rebuilt and newly-signed product versions. We have identified and notified those customers directly.

Customers using Red Hat Ceph Storage products for Red Hat Enterprise Linux are not affected by this issue. Other Red Hat products are also not affected.

Customers who have any questions or need help moving to the new builds should contact Red Hat support or their Technical Account Manager.

September 16, 2015

How come MCS Confinement is not working in SELinux even in enforcing mode?
MCS separation is a key feature in sVirt technology.

We currently use it for separation of our virtual machines, using libvirt to launch VMs with different MCS labels.  SELinux sandbox relies on it to separate its sandboxes, OpenShift relies on this technology for separating users, and now Docker uses it to separate containers.

When I discover a hammer, everything looks like a nail.

I recently saw this email.

"I have trouble understanding how MCS labels work, they are not being enforced on my RHEL7 system even though selinux is "enforcing" and the policy used is "targeted". I don't think I should be able to access those files:

$ ls -lZ /tmp/accounts-users /tmp/accounts-admin
-rw-rw-r--. backup backup guest_u:object_r:user_tmp_t:s0:c3
-rw-rw-r--. backup backup guest_u:object_r:user_tmp_t:s0:c99
backup@test ~ $ id
uid=1000(backup) gid=1000(backup) groups=1000(backup)

root@test ~ # getenforce
Enforcing

I can still access them even though they have different labels (c3 and
c99 as opposed to my user having c1).
backup@test ~ $ cat /tmp/accounts-users
domenico balance: -30
backup@test ~ $ cat /tmp/accounts-admin
don't lend money to domenico

Am I missing something?

MCS is different from type enforcement.

We decided not to apply MCS separation to every type.  We only apply it to the types that we plan on running in a multi-tenant way.  Basically it is for objects where we want processes to share the same access to the system, but not to each other.  We introduced an attribute called mcs_constrained_type.

On my Fedora Rawhide box I can look for these types:

seinfo -amcs_constrained_type -x

If you add the mcs_constrained_type attribute to a type, the kernel will start enforcing MCS separation on that type.

Adding a policy like this will MCS-confine guest_t:

# cat myguest.te 
policy_module(mymcs, 1.0)

gen_require(`
    type guest_t;
    attribute mcs_constrained_type;
')

typeattribute guest_t mcs_constrained_type;

# make -f /usr/share/selinux/devel/Makefile
# semodule -i myguest.pp

Now I want to test this out.  First I have to allow the guest_u user to use multiple MCS labels.  You would not
have to do this with non-user types.

# semanage user -m -r s0-s0:c0.c1023 guest_u

Create content to read, and change its MCS label:

# echo Read It > /tmp/test
# chcon -l s0:c1,c2 /tmp/test
# ls -Z /tmp/test
unconfined_u:object_r:user_tmp_t:s0:c1,c2 /tmp/test

Now login as a guest user

# id -Z
# cat /tmp/test
Read It

Now login as a guest user with a different MCS label:

# id -Z
# cat /tmp/test
cat: /tmp/test: Permission denied

September 15, 2015

How can we describe a buffer overflow in common terms?
We can't.

You think you can, but you can't. This reminds me of the Feynman video where he's asked how magnets work and he doesn't explain it; he explains why he can't explain it.

Our problem is we're generally too clever to know when to stop. There are limits to our cleverness unfortunately.

I'm picking on buffer overflows in this case because they're something that's pretty universal throughout the security universe. Most everyone knows what they are, how they work, and we all think we could explain it to our grandma.

There are two problems here.

1) You can't explain away some of the fundamental principles behind computing.

Even if we want to take away as much technical detail as possible, there are some basic ideas that regular people don't know. Computers are magic to most people. When I say most people I mean probably 90% or more of the people. When I say magic, I mean actual magic, not the joking sort of "I really know this isn't magic but I'm being funny". All they know is they push this button and they can pay their bills. They have zero idea what's going on. If someone doesn't understand the difference between a CPU, RAM, and a potato, how on earth will you explain the instruction register to them?

2) They don't care.

Most people just don't genuinely care. Some will pretend to be nice, but a lot won't even do that. Even if we found a nice way to explain this stuff (which we can't), We can't make people care what we're saying. If we're dealing with the likes of a CIO or CEO, they don't care what a buffer overflow is, they don't care how Heartbleed works. They have their goals and while security is important, it's not why they wake up each morning. Some people think they care, but then when we start to talk, they figure out they really don't. Most are nice enough they will let us talk while they're thinking about eating cookies.

So what do we do about it?

The answer is to drive the discussion around the problems. Rather than trying to explain technical details to someone, we have to build trust with them. They need to be able to trust us on some level. If there's a buffer overflow in something, we need to be able to say "here is the patch" or "here is how we can fix this" for example. Then if we've built up trust, we don't have to try to explain exactly what's going on, just that it's something we should care about.

We'll cover how to build trust in the next post.

Join the conversation, hit me up on twitter, I'm @joshbressers

September 13, 2015

libselinux is a liar!!!
On an SELinux-enabled machine, why does getenforce in a Docker container say it is disabled?

SELinux is not namespaced

This means that there is only one SELinux rules base for all containers on a system.  When we attempt to confine containers we want to prevent them from writing to kernel file systems, which might be one mechanism for escape.  One of those file systems is /sys/fs/selinux, and we also want to control their access to things like the /proc/self/attr/* fields.

By default Docker processes run as svirt_lxc_net_t and they are prevented from doing (almost) all SELinux operations.  But processes within containers do not know that they are running within a container.  SELinux aware applications are going to attempt to do SELinux operations, especially if they are running as root.

For example, if you are running yum/dnf/rpm inside of a docker build container and the tool sees that SELinux is enabled, it is going to attempt to set labels on the file system. If SELinux blocks the setting of these file labels, the calls will fail, causing the tool to fail and exit.  Because of this, SELinux-aware applications within containers would mostly fail.

Libselinux is a liar

We obviously do not want these apps failing, so we decided to make libselinux lie to the processes.  libselinux checks whether /sys/fs/selinux is mounted on the system and whether it is mounted read/write.  If /sys/fs/selinux is not mounted read/write, libselinux will report to calling applications that SELinux is disabled.  In containers we don't mount this file system by default, or we mount it read-only, causing libselinux to report that it is disabled.
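You can watch an application fall for the lie via the libselinux Python bindings (a minimal sketch; the printed messages are mine):

import selinux  # libselinux python bindings

# Inside a container without a read/write /sys/fs/selinux, this
# returns false even though the host kernel is enforcing.
if selinux.is_selinux_enabled():
    print("SELinux enabled: labeling operations will be attempted")
else:
    print("SELinux looks disabled: skip labeling operations")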

# getenforce
Enforcing
# docker run --rm fedora id -Z
id: --context (-Z) works only on an SELinux-enabled kernel

# docker run --rm -v /sys/fs/selinux:/sys/fs/selinux:ro fedora id -Z
id: --context (-Z) works only on an SELinux-enabled kernel
# docker run --rm -v /sys/fs/selinux:/sys/fs/selinux fedora id -Z

When SELinux-aware applications like yum/dnf/rpm see SELinux is disabled, they stop trying to do SELinux operations, and succeed within containers.

Applications work well even though SELinux is very much enforcing, and controlling their activity.

I believe that SELinux is the best tool we currently have to make containers actually contain.

In this case SELinux disabled does not make me cry. 

September 09, 2015

SELinux insides – Part2: Neverallow assertions

Usually when we describe how to create a local policy, how to generate a new policy, or how to add additional rules, we talk about ALLOW rules and sometimes about DONTAUDIT rules. But there are two other Access Vector (AV) rule types – AUDITALLOW and NEVERALLOW.

  • ALLOW allows defined rules
  • DONTAUDIT stops the auditing of denial messages
  • AUDITALLOW audits events defined by a rule
  • NEVERALLOW specifies that an allow rule must not be generated for the operation

In this blog post, I would like to describe NEVERALLOW rules in more detail using real examples, and to announce that we have turned them back on in Fedora 23/Rawhide.

But why do we need NEVERALLOW rules/assertions? The answer is pretty easy. We need to be sure that we do not allow any unwanted/insecure/dangerous actions. For example, we do not want to allow ordinary services to access /etc/shadow, and NEVERALLOW assertions give us this ability. In the policy, we declare rules like

neverallow ~can_read_shadow_passwords shadow_t:file read;

It ensures that the policy will not contain any rule allowing a domain without the can_read_shadow_passwords attribute read access to /etc/shadow (otherwise the policy won’t compile).

What does it mean in practice? We can demonstrate it with the following rules on a system where NEVERALLOW assertion checks are enabled.

$ cat neverallow_test.cil
(allow sssd_t shadow_t (file (read)))

$ sudo semodule -i neverallow_test.cil
Neverallow found that matches avrule at line 310 of /var/lib/selinux/targeted/tmp/modules/100/authlogin/cil
Binary policy creation failed at line 1 of /var/lib/selinux/targeted/tmp/modules/400/neverallow_test/cil
Failed to generate binary


$ cat neverallow_test.cil
(typeattributeset can_read_shadow_passwords (sssd_t))
(allow sssd_t shadow_t (file (read)))

$ sudo semodule -i neverallow_test.cil
$ sesearch -A -s sssd_t -t shadow_t
allow sssd_t shadow_t : file read ;

In the first case, we were not able to define the ALLOW rule because of an existing NEVERALLOW rule in the policy. In the second case, we assigned the can_read_shadow_passwords attribute to sssd_t to pass this NEVERALLOW rule.

As I mentioned above, we turned this assertion check back on in Fedora 23/Rawhide with the new 2.4 userspace release, which contains some optimizations in libsepol. Before that, it took a long time to build the Fedora distribution policy with NEVERALLOW checks enabled. That is why

SEMOD_EXP="/usr/bin/semodule_expand -a"

was a part of our Fedora selinux-policy.spec file.

$ man semodule_expand
-a Do not check assertions. This will cause the policy to not check any neverallow rules.

Together with that, we also stopped checking policy assertions during policy load by setting expand-check=0 in /etc/selinux/semanage.conf. This option affects the practical examples above.

So for a long time we did not have this check, and we needed to be really careful with rules which could conflict with assertions defined in the policy. With the latest Fedora SELinux userspace and policy packages, we no longer use the “-a” option in the selinux-policy.spec file, and we have modified semanage.conf to contain expand-check=1.
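
For reference, after this change the relevant line in /etc/selinux/semanage.conf reads:

expand-check=1

With that in place, semodule -i fails, as shown in the examples above, whenever a module violates a NEVERALLOW assertion.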

September 08, 2015

Automatic decryption of TLS private keys with Deo

Deo is a protocol for network-bound encryption which provides for automatic decryption of secrets when a client is on a given network, and an implementation of the protocol. Importantly, it is not a key escrow service.

The original use case for Deo was automatic decryption of encrypted disks, e.g. for servers in datacentres or employee laptops when inside the corporate firewall. This provides convenience and time savings for operators but if disks are not on the secure network (e.g. due to warranty service or theft) they cannot be automatically decrypted. A typical configuration will fall back to password-based decryption, so choosing a secure passphrase is still important.

A high-level description of the protocol and specific details about the disk encryption use case including LUKS integration is found on the FreeIPA wiki. Source code is available at GitHub.

In this post we will explore an alternative use case for Deo: automatic decryption of TLS keys. Before we get to that, let’s review how Deo works.

Deo protocol

The Deo server uses two sets of keys: one for TLS – providing privacy and authentication for the network connection – and the other for encryption and decryption of secrets. All communication between client and server is protected by TLS.

A client who wishes to encrypt a secret first asks the Deo server for its encryption certificate (which may be accompanied by intermediate certificates forming a chain to the trust root). It then uses the public key to encrypt the secret and stores the resulting ciphertext along with some metadata.

To decrypt the secret, the client transmits the stored ciphertext to the Deo server, which decrypts and returns the secret.
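
To make this concrete, a round trip with the deo command line looks something like the following sketch, reusing the server name and CA certificate path from the examples later in this post (the secret value is illustrative):

% echo -n "hunter2" | deo encrypt -a /etc/ipa/ca.pem deo.ipa.local > secret.deo
% deo decrypt < secret.deo
hunter2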

Keen observers will note that the client must trust the server not to store, divulge or misuse the secret, which it learns during the decryption operation. Nathaniel McCallum has made progress on a protocol that does not permit the server or eavesdroppers to learn the secret, strengthening the scheme against offline attacks, but this has not been implemented in Deo yet.

TLS private keys in Deo

Anyone who has deployed TLS or administered web servers knows that it is a nuisance to have to enter the passphrase to decrypt the private key(s) when starting or restarting the server. If a server restarts unexpectedly and no operator is on hand to supply the passphrase, it cannot come up. There are few secure technical solutions to this problem. Disturbingly, it is frequently suggested to store the private key in the clear.

If a server offers the right configuration or interfaces, it should be possible to use Deo to automatically decrypt the secret keys including TLS private keys. In this example we will use Deo to decrypt Apache httpd / mod_ssl keys. The examples assume that a deo-decryptd server is running at deo.ipa.local on the default port (5700).

mod_ssl for Apache provides the SSLPassPhraseDialog directive. The default value builtin causes mod_ssl to prompt for the passphrase, although on Fedora (and perhaps other systemd-based OSes) the standard mod_ssl configuration uses a helper script to acquire the passphrase in a systemd-friendly way:

SSLPassPhraseDialog exec:/usr/libexec/httpd-ssl-pass-dialog

Let’s see it in action:

[f22-4:~] ftweedal% sudo systemctl restart httpd
Enter SSL pass phrase for f22-4.ipa.local:443 (RSA) : ********

If we look inside /usr/libexec/httpd-ssl-pass-dialog we see that the exec:... directive uses command line arguments to indicate the server and key type:

exec /bin/systemd-ask-password "Enter SSL pass phrase for $1 ($2) : "

Apache expects the script to print the passphrase on standard output. We can write a passphrase helper that conforms to this interface but uses Deo to decrypt the passphrase, falling back to prompting if decryption fails or the Deo server is unavailable. Deo ciphertext files will be stored under /etc/httpd/deo.d/ (an arbitrary decision). The complete helper script, which is saved as /usr/libexec/httpd-deo-helper, is:

[ -f "$DEO_FILE" ] && deo decrypt < "$DEO_FILE" && echo && exit
exec /bin/systemd-ask-password "Enter SSL pass phrase for $1 ($2) : "

The behaviour of this script is:

  1. Check for the existence of a file in the deo.d/ directory relating to the server indicated in the first command argument.
  2. If the file exists, attempt to deo decrypt it and exit if successful.
  3. If the file does not exist or if decryption fails, fall back to systemd-ask-password.

We must also update the Apache configuration to use the new helper:

SSLPassPhraseDialog exec:/usr/libexec/httpd-deo-helper

Next we need to create a Deo ciphertext file for each server. The following shell command will read the passphrase (the same one used to encrypt the private key) from standard input, deo encrypt it and write it to the appropriate file in deo.d/:

(stty -echo; read LINE; echo -n "$LINE") \
  | deo encrypt -a /etc/ipa/ca.pem deo.ipa.local \
  > /etc/httpd/deo.d/f22-4.ipa.local:443

Finally, I had to apply appropriate SELinux labels to the httpd-deo-helper script and deo.d/ files and extend the policy to allow processes in the httpd_passwd_t domain to read Apache config files and talk over the network. The labelling commands are:

% semanage fcontext -a -t httpd_passwd_exec_t /usr/libexec/httpd-deo-helper
% restorecon /usr/libexec/httpd-deo-helper
% restorecon -R /etc/httpd

The SELinux type enforcement (TE) module source looks like:

policy_module(httpd_deo, 1.0.0)

require {
        type httpd_passwd_t;
        type httpd_config_t;
        type unreserved_port_t;
        class dir { search };
        class file { read getattr open };
        class tcp_socket { name_connect };
}

allow httpd_passwd_t httpd_config_t:dir search;
allow httpd_passwd_t httpd_config_t:file { read getattr open };
allow httpd_passwd_t unreserved_port_t:tcp_socket name_connect;
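
Assuming the module source is saved as httpd_deo.te and the selinux-policy development files are installed, it can be compiled and loaded in the usual way:

% make -f /usr/share/selinux/devel/Makefile httpd_deo.pp
% semodule -i httpd_deo.pp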

Now that all of this is in place, when the Apache server starts, if the deo-decryptd server is accessible (and its certificates are still valid) the passphrase will be decrypted automatically and used to decrypt the private key; an operator does not need to provide it. Mission accomplished!


The encrypted secret is the same passphrase used to encrypt the key, so a good passphrase must still be used. There is no option to support only Deo decryption (although I guess that password fallback would usually be wanted anyway). Support for using Deo on its own, or in conjunction with non-password-based encryption methods, necessarily results in more complicated designs that are not supported by mod_ssl’s limited configurability in this regard.

Our implementation is based on an ad-hoc design specific to Apache (e.g. the deo.d/ directory and the naming convention of files therein.) The general design may be widely applicable but for other servers the details will differ (if they support the helper paradigm at all; see next section.)

Finally, we have not implemented any plugins for Deo itself, unlike the disk encryption use case where there is a dedicated command (deo cryptsetup) for people to use. In my opinion the design presented in this post is simple enough not to warrant it but if a common configuration layout was adopted by popular server software it might make sense to provide a plugin.

What about { mod_nss , nginx , … }?

The ability to do Deo decryption with mod_ssl hinges on the SSLPassPhraseDialog directive and in particular its ability to execute a helper program and provide it with enough information to distinguish the target key. mod_nss and nginx’s ssl_module have directives to provide the password(s) in a flat file but no support for invoking helper programs.

NSS works well with PKCS #11 modules so it might be possible to implement a module that uses Deo to decrypt key material. This approach would benefit any other programs that use PKCS #11 but I have not yet looked closely at this option.

The nginx code base is modern and clean and if the developers are receptive it would be worthwhile to add behaviour similar to Apache’s SSLPassPhraseDialog.

For other servers, check the documentation. If you wish to implement support for Deo in a program that you work on – either directly or by invoking helper programs – you may find the OpenSSL and NSS documentation on passphrase callbacks useful.

Concluding notes

Deo emerged from disk encryption use cases but the protocol is useful in other contexts, including operator-less decryption of secrets used by network servers. We examined a straightforward implementation of Deo-based automatic TLS private key decryption for Apache with mod_ssl and also saw that current versions of mod_nss (for Apache) and nginx don’t support the underlying design. Supporting Deo decryption in a PKCS #11 module is an area for further investigation.

Future revisions of the Deo protocol may offer better trust characteristics; it could be possible to prevent the server from learning the secret. Use of Deo as a part of a larger escrow protocol is another area being explored.

If you have questions or ideas about other uses for Deo, please start a conversation on the freeipa-users@redhat.com mailing list or in #freeipa on Freenode, or raise an issue on GitHub.

Being a nice security person
Sometimes it's really hard to be nice to someone. This is especially true if you think they're not very smart. Respect is a two way street though. If you think someone's an idiot, they probably think you're an idiot. You're both going to end up right once it's all over.

As an industry we overestimate how much people know about security, which I think is the root of our problem.

I was talking to a peer of mine one day and was complaining about someone not understanding what I thought was an obvious security concept (I don't recall the details anymore, but it's irrelevant). She then said to me words I will never forget "I think you overestimate how much everyone else knows about security".

That statement changed my life. It's why I'm writing this blog now.

I've been paying attention to security for longer than I can remember. It's been at least 20 years, probably more. I was a teenager back when I started this journey. It's easy sometimes to think someone should just know something, it's all so obvious! When they don't, we of course decide they're dumb and we stop respecting them. I remember in my younger days being just brutal to people who didn't know something I did. It was all quite silly really.

The next time there's a clear misunderstanding, here's what you need to do. Stop talking and listen first. See what they're saying. Do they sort of get it? Do they not get it at all? Are they making up nonsense? Listening is easy and you can always start to think about donuts if you get bored. I won't lie, some people are just giant bags of gas, most aren't though.

Now, once you start to understand the other person, try to speak their language. Use words they understand. Terms like buffer overflow, XSS, remote code execution, DoS, APT, these don't matter to most people. They're all "security bugs". We'll talk about language in the future, but for now, just be patient. Your patience will be worth more than anything else you do. Remember that everyone knows something you don't, so while they need your help for security, you need their help for something else, even if you don't know what that is yet.

Some people won't deserve your respect. I'm not suggesting we become whipping posts, but you should probably pay attention to the majority of people. Just slow down long enough to talk to them properly. You'll be amazed what you'll learn.

Join the conversation, hit me up on twitter, I'm @joshbressers

September 03, 2015

Everyone is afraid of us
How many times have you been afraid to say something about security because you knew if you were wrong, you'd be destroyed in public by your peers?

How many times did you try really hard to completely discredit someone who said something wrong about security?

How many times have you been wrong but still argued because you didn't want to admit it?

How many good ideas never saw the light of day because of this?

I think one of the bigger problems the security industry has is a tendency to be overly pedantic. This is true of technical people in general, but in security we turn it up to 11. Now don't get me wrong, sometimes you need this; there's no such thing as crypto that's half right. When we work with normal people though, we can't be so pedantic.

This of course isn't a hard and fast rule. Sometimes we need the details to be correct, sometimes we don't. You have to use your best judgement, but if you're not sure I suggest you lean toward being understanding (rather than overly critical).

Let's go through some examples, just for fun.

Question"Hey guys, I'm trying to understand if this patch is correct for a buffer overflow, could someone give it a review?"
Answer"Actually that bug was a buffer overflow caused by an integer overflow."

We just ensured this person will never ask us for help again. This is a detail they probably don't really care about. Is the patch right? If not, help them understand what's going on. Use small words. If they ask questions, be patient. The right way to answer this would have been to look at the patch and ack it if it works, or offer advice on how to fix it if it's still not done.

Question"Hi everybody, I'm working on adding SSL support to my application. The documentation isn't great though, are there any examples I could look at?"
Answer"SSL is dead, use TLS!"

While that answer is technically correct (which is the best kind of correct), it's still not helpful. When you give someone an answer, you have to try to be helpful. If you're dealing with another security person you can probably be borderline unhelpful, as they should know better, but remember, normal people think we're all crazy; don't support this theory.

Most people call TLS SSL because they don't know the difference; honestly, to most people there is no difference. The differences between TLS and SSL are huge of course, but if someone is looking for help to enable TLS in their application and they decide to call it SSL, it's an opportunity to educate them. They don't need to be experts, but if you're using a crypto library, you need to sort of know what's going on.

And finally.

Question"Hey, I need help with a new XOR encryption algorithm I'm building."
Answer"You're an idiot"

This one is probably OK ;)

If you have any examples to share, I'd love to collect them to use in the future.

Being patient and understanding is how we build trust. You don't build trust by being harsh. We'll never make a difference with most people without trust, so this is important. With some technical people it's the exact opposite; it's the old show me the code argument: it doesn't matter how nice you are, if your code is trash you're not trusted or respected. This doesn't work with regular people though. They don't get warm fuzzies from reading code; they like to talk to people in a civilized manner using words they understand.

It's not easy, but we should all be smart enough to figure it out. Good luck.

Join the conversation, hit me up on twitter, I'm @joshbressers

September 02, 2015

Factoring RSA Keys With TLS Perfect Forward Secrecy

What is being disclosed today?

Back in 1996, Arjen Lenstra described an attack against an optimization (called the Chinese Remainder Theorem optimization, or RSA-CRT for short). If a fault happened during the computation of a signature (using the RSA-CRT optimization), an attacker might be able to recover the private key from the signature (an “RSA-CRT key leak”). At the time, use of cryptography on the Internet was uncommon, and even ten years later, most TLS (or HTTPS) connections were immune to this problem by design because they did not use RSA signatures. This changed gradually, when forward secrecy for TLS was recommended and introduced by many web sites.
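
In outline (a simplified sketch of the mathematics, not taken from the report): with an RSA modulus N = p·q, the CRT optimization computes the signature separately mod p and mod q and combines the two halves. If a fault corrupts exactly one half, the resulting signature σ is correct modulo one prime but not the other, so for the padded message m and public exponent e we have σ^e ≡ m (mod p) but σ^e ≢ m (mod q). Anyone who sees σ can compute

    gcd(σ^e − m mod N, N) = p

which factors N and thus reveals the private key.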

We evaluated the source code of several free software TLS implementations to see if they implement hardening against this particular side-channel attack, and discovered that it is missing in some of these implementations. In addition, we used a TLS crawler to perform TLS handshakes with servers on the Internet, and collected evidence that this kind of hardening is still needed, and missing in some of the server implementations: We saw several RSA-CRT key leaks, where we should not have observed any at all.

The technical report, “Factoring RSA Keys With TLS Perfect Forward Secrecy”, is available in PDF format.

What is the impact of this vulnerability?

An observer of the private key leak can use this information to cryptographically impersonate the server, after redirecting network traffic, conducting a man-in-the-middle attack. Either the client making the TLS handshake can see this leak, or a passive observer capturing network traffic. The key leak also enables decryption of connections which do not use forward secrecy, without the need for a man-in-the-middle attack. However, forward secrecy must be enabled in the server for this kind of key leak to happen in the first place, and with such a server configuration, most clients will use forward secrecy, so an active attack will be required for configurations which can theoretically lead to RSA-CRT key leaks.

Does this break RSA?

No. Lenstra’s attack is a so-called side-channel attack, which means that it does not attack RSA directly. Rather, it exploits unexpected implementation behavior. RSA, and the RSA-CRT optimization with appropriate hardening, is still considered secure.

Are Red Hat products affected?

The short answer is: no.

The longer answer is that some of our products do not implement the recommended hardening that protects against RSA-CRT key leaks. (OpenSSL and NSS already have RSA-CRT hardening.) We will continue to work with upstream projects and help them to implement this additional defense, as we did with Oracle in OpenJDK (which led to the CVE-2015-0478 fix in April this year). None of the key leaks we observed in the wild could be attributed to these open-source projects, and no key leaks showed up in our lab testing, which is why this additional hardening, while certainly desirable to have, does not seem critical at this time.

In the process of this disclosure, we consulted some of our partners and suppliers, particularly those involved in the distribution of RPM packages. They indicated that they already implement RSA-CRT hardening, at least in the configurations we use.

What would an attack look like?

The attack itself is unobservable because the attacker performs an off-line mathematical computation on data extracted from the TLS handshake. The leak itself could be noticed by an intrusion detection system if it checks all TLS handshakes for mathematical correctness.

For the key leaks we have observed, we do not think there is a way for remote attackers to produce key leaks at will, in the sense that an attacker could manipulate the server over the network in such a way that the probability of a key leak in a particular TLS handshake increases. The only thing the attacker can do is to capture as many handshakes as possible, perhaps by initiating many such handshakes themselves.

How difficult is the mathematical computation required to recover the key?

Once the necessary data is collected, the actual computation is marginally more complicated than a regular RSA signature verification. In short, it is quite cheap in terms of computing cost, particularly in comparison to other cryptographic attacks.

Does it make sense to disable forward secrecy, as a precaution?

No. If you expect that a key leak might happen in the future, it could well have happened already. Disabling forward secrecy would enable passive observers of past key leaks to decrypt future TLS sessions, from passively captured network traffic, without having to redirect client connections. This means that disabling forward secrecy generally makes things worse. (Disabling forward secrecy and replacing the server certificate with a new one would work, though.)

How can something called Perfect Forward Secrecy expose servers to additional vulnerabilities?

“Perfect Forward Secrecy” is just a name given to a particular tweak of the TLS protocol. It does not magically turn TLS into a perfect protocol (that is, resistant to all attacks), particularly if the implementation is incorrect or runs on faulty hardware.

Have you notified the affected vendors?

We tried to notify the affected vendors, and several of them engaged in a productive conversation. All browser PKI certificates for which we observed key leaks have been replaced and revoked.

Does this vulnerability have a name?

We think that “RSA-CRT hardening” (for the countermeasure) and “RSA-CRT key leaks” (for a successful side-channel attack) are sufficiently short and descriptive, and no branding is appropriate. We expect that several CVE IDs will be assigned for the underlying vulnerabilities leading to RSA-CRT key leaks. Some vendors may also assign CVE IDs for RSA-CRT hardening, although no key leaks have been seen in practice so far.

You are bad at talking to people
You're probably bad at talking to people. I don't mean the friends you play D&D or Halo or whatever hip game people play now; I mean humans, like the guy who serves you coffee in the morning.

We've all had more than one instance where we said something and ended up with a room full of people staring at us because it wasn't terribly nice or thoughtful. At the time you had no idea anything was wrong; you still might not.

This is the single biggest thing you have to learn not to do. Normal people have extremely thin skin. You can't call them horrible things, they don't like it. If you do it too often, they'll just never talk to you again. We'll get to this at a future date though.

Security people are mostly the sort of introverts who make other introverts look like party animals. When was the last time you talked to someone who, when asked what a buffer overflow is, first asks "heap or stack"? Who wasn't your Mom?

But it's not all bad. I'm going to pick on security people relentlessly on this blog. I'm going to make us look over-the-top silly sometimes, but that's because the target audience isn't the muggles; it's to help us all get better at doing the things that have to happen to secure the world. If we don't do this, nobody will, and things will just keep getting worse. There are problems like none we've ever seen before, so we need solutions like we've never seen before. Our single biggest threat is a suit with swagger pretending to be a security person. We know they can't be trusted, but who will listen to us?

Some of you don't care and are probably going to disagree with everything I say. Some of you have to do this. You know you have to, you don't want to, but that's too bad.

So here's how we're going to look at this. Working with the regular people, we're not trying to be like them, we're going to pull off the greatest social engineering feat of our lives. We're a smart group, nobody will disagree with that, so we're going to use our extreme cleverness to fit in. We'll still go home, put on an old t-shirt, make origami wookies, and drink Mountain Dew. While we're at work though, we're going to be business people. We're going to dress nice, speak nice, and act nice. The only real difference from the actual business folks is that we know we're putting on a show, they don't.

So for now, when you're talking to someone, be mindful of what you say. Listen more than you speak. Be kind. If they get something wrong, don't destroy them, politely suggest the right answer and if they don't agree, move on, you won't convince them any different. Ask questions, good questions. Don't just talk at people, talk with them.

And most importantly remember the person you're talking with is almost certainly a reasonable human trying to do what they think is right. It's when you insult or try to belittle them that they turn into someone out to get you, so don't treat them poorly.

We'll talk about all this stuff more in the future, but for now just try to keep a cool head when you talk to someone, especially if they're wrong.

Join the conversation, hit me up on twitter, I'm @joshbressers
Delegating certificate issuance in FreeIPA

FreeIPA 4.2 brings several certificate management improvements, including custom profiles and user certificates. Along with the explosion in certificate use cases that are now supported comes the question of how to manage certificate issuance, along two dimensions: which entities can be issued what kinds of certificates, and who can actually request a certificate? The first aspect is managed via CA ACLs, which were explained in a previous article. In this post I detail how FreeIPA decides whether a requesting principal is allowed to request a certificate for the subject principal, and how to delegate the authority to issue certificates.

Self-service requests

The simplest scenario is a principal using cert-request to request a certificate for itself as the certificate subject. This action is permitted for user and host principals but the request is still subject to CA ACLs; if no CA ACL permits issuance for the combination of subject principal and certificate profile, the request will fail.

Implementation-wise, self-service works because there are directory server ACIs that permit bound principals to modify their own userCertificate attribute; there is no explicit permission object.
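
For example, a user could request a certificate for herself with something like the following (the CSR file name is hypothetical, and the profile and CA ACL configuration must permit the request):

ftweedal% ipa cert-request alice.csr --principal alice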


Hosts

Hosts may request certificates for any hosts and services that are managed by the requesting host. These relationships are managed via the ipa host-{add,remove}-managedby commands, and a single host or service may be managed by multiple hosts.

This rule is implemented using directory server ACIs that allow hosts to write the userCertificate attribute when the managedby relationship exists, otherwise not. In the IPA framework, we conduct a permission check to see if the bound (requesting) principal can write the subject principal’s attribute. This is nicer (and probably faster) than interpreting the managedby attribute in the FreeIPA framework.

If you are interested, the ACI rules look like this:

dn: cn=services,cn=accounts,$SUFFIX
aci: (targetattr="userCertificate || krbPrincipalKey")(version 3.0;
      acl "Hosts can manage service Certificates and kerberos keys";
      allow(write) userattr = "parent[0,1].managedby#USERDN";)

dn: cn=computers,cn=accounts,$SUFFIX
aci: (targetattr="userCertificate || krbPrincipalKey")(version 3.0;
      acl "Hosts can manage other host Certificates and kerberos keys";
      allow(write) userattr = "parent[0,1].managedby#USERDN";)

As usual, these requests are also subject to CA ACLs.

Finally, subjectAltName dNSName values are matched against hosts (if the subject principal is a host) or services (if it’s a service); they are treated as additional subject principals and the same permission and CA ACL checks are carried out for each.


Users

FreeIPA’s Role Based Access Control (RBAC) system is used to assign certificate issuance permissions to users (or other principal types). There are several permissions related to certificate management:

Request Certificate

The main permission that allows a user to request certificates for other principals.

Request Certificate with SubjectAltName

This permission allows a user (one who already has the Request Certificate permission) to request a certificate with the subjectAltName extension (the check is skipped when the request is self-service or initiated by a host principal). Regardless of this permission we comprehensively validate the SAN extension whenever it is present in a CSR (and always have), so I’m not sure why this exists as a separate permission. I proposed to remove this permission and allow SAN by default, but the conversation died.

Request Certificate ignoring CA ACLs (new in FreeIPA 4.2)

The main use case for this permission is where a certain profile is not appropriate for self-service. For example, if you want to issue certificates bearing some esoteric or custom extension unknown to (and therefore not validatable by) FreeIPA, you can define a profile that copies the extension data verbatim from the CSR. Such a profile ought not be made available for self-service via CA ACLs, but this permission will allow a privileged user to issue the certificates on behalf of others.

System: Manage User Certificates (new in FreeIPA 4.2.1)

Permits writing the userCertificate attribute of user entries.

System: Manage Host Certificates

Permits writing the userCertificate attribute of host entries.

System: Modify Services

Permits writing the userCertificate attribute of service entries.

There are other permissions related to revocation and retrieving certificate information from the Dogtag CA. It might make sense for certificate administrators to have some of these permissions but they are not needed for issuance and I will not detail them here.

The RBAC system is used to group permissions into privileges and privileges into roles. Users, user groups, hosts, host groups and services can then be assigned to a role. Let’s walk through an example: we want members of the user-cert-managers group to be able to issue certificates for users. The SAN extension will be allowed, but CA ACLs may not be bypassed.

It bears mention that there is a default privilege called Certificate Administrators that contains most of the certificate management permissions; for this example we will create a new privilege that contains only the required permissions. We will use the ipa CLI program to implement this scenario, but it can also be done using the web UI. Assuming we have a privileged Kerberos ticket, let’s first create a new privilege and add to it the required permissions:

ftweedal% ipa privilege-add "Issue User Certificate"
Added privilege "Issue User Certificate"
  Privilege name: Issue User Certificate

ftweedal% ipa privilege-add-permission "Issue User Certificate" \
    --permission "Request Certificate" \
    --permission "Request Certificate with SubjectAltName" \
    --permission "System: Manage User Certificates"
  Privilege name: Issue User Certificate
  Permissions: Request Certificate,
               Request Certificate with SubjectAltName,
               System: Manage User Certificates
Number of permissions added 3

Next we create a new role and add the privilege we just created:

ftweedal% ipa role-add "User Certificate Manager"
Added role "User Certificate Manager"
  Role name: User Certificate Manager

ftweedal% ipa role-add-privilege "User Certificate Manager" \
    --privilege "Issue User Certificate"
  Role name: User Certificate Manager
  Privileges: Issue User Certificate
Number of privileges added 1

Finally we add the user-cert-managers group (which we assume already exists) to the role:

ftweedal% ipa role-add-member "User Certificate Manager" \
    --groups user-cert-managers
  Role name: User Certificate Manager
  Member groups: user-cert-managers
  Privileges: Issue User Certificate
Number of members added 1

With that, users who are members of the user-cert-managers group will be able to request certificates for all users.


Conclusion

In addition to self-service, FreeIPA offers a couple of ways to delegate certificate request permissions. For hosts, the managedby relationship grants permission to request certificates for services and other hosts. For users, RBAC can be used to grant permission to manage user, host and service principals, even separately as needs dictate. In all cases except where the RBAC Request Certificate ignoring CA ACLs permission applies, CA ACLs are enforced.

Looking ahead, I can see scope for augmenting or complementing CA ACLs – which currently are concerned with the subject or target principal and care nothing about the requesting principal – with a mechanism to control which principals may issue requests involving a particular profile. But how much this is wanted we will wait and see; it is one of many possible improvements to FreeIPA’s certificate management, and all will have to be judged according to demand and impact.

August 31, 2015

What is Sober Security?
As an industry, security professionals are really bad at speaking to people. I don't just mean speaking to normal humans, I mean speaking to each other, even. We're a group of pedantic, grumpy people. We don't understand how someone can't understand what we do. We're impatient, we don't like to have to explain ourselves, and we hate being wrong (and many of us are right quite a lot).

I work at Red Hat. I used to be part of the group that did all the security updates, but now I've moved on to be a security strategist. That means I mostly speak with non-security people, both inside and outside the company. I've already apologized to a bunch of them; I now see how badly we can treat others. By "defeating" normal people we don't win, they decide we're crazy horrible people and they stop talking to us; we end up losing but we don't even know it. The only reason anyone is paying attention at all right now is because security just can't be ignored anymore. They don't want to talk to us, they just don't have anywhere else to go ... yet. If security professionals don't step up and start working with everyone else, we're going to end up with a lot of weasels pretending to be security people. If you have a fast-talking fraud up against a grouchy security dude, I'll let you guess who everyone is going to listen to.

I've not tried very hard in the past to explain things to anyone really, but that's changed. I now have to explain extremely technical concepts to people who don't know what a buffer overflow is. I can't use acronyms or jargon, they don't mean anything to my audience. I'm probably learning more than they are. For our lot, talking to people is hard, really hard, the hardest thing many of us will ever do, but it's something that has to be done. The whole industry needs to think about this. Part of why everything is so broken is because nobody has any idea what's going on, and that's our fault, not theirs.

How do we fix it?

I won't lie, I don't have any answers. I do however have some great people to work with, a solid background in the industry, and top notch security peers. I'm going to use this blog to talk about what I learn about talking to people. Hopefully there will be some others out there who can benefit from what I learn, and if you have something to share, by all means let me know.

The pioneers get the arrows as they say. Let's hope I don't get too many. Stay tuned for what I expect to be a most interesting adventure.

Join the conversation, hit me up on twitter, I'm @joshbressers

August 27, 2015

nsenter gains SELinux support
nsenter is a program that allows you to run a program within the namespaces of other processes

This tool is often used to enter containers like docker, systemd-nspawn or rocket.  It can be used for debugging or for scripting tools to work inside of containers.  One problem it had was that the process entering the container could potentially be attacked by processes within the container.  From an SELinux point of view, you might be injecting an unconfined_t process into a container that is running as svirt_lxc_net_t.  We wanted a way to change the process context on entering the container to match that of the process whose namespaces you are entering.

As of util-linux-2.27, nsenter now has this support.

man nsenter
       -Z, --follow-context
              Set the SELinux security context used for executing a new process according to an already running process specified by --target PID. (util-linux has to be compiled with SELinux support, otherwise the option is unavailable.)
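
For example, to get a shell inside a running docker container while adopting its SELinux label (the container name mycontainer is hypothetical):

# PID=$(docker inspect --format '{{.State.Pid}}' mycontainer)
# nsenter --target $PID --mount --uts --ipc --net --pid --follow-context /bin/sh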

docker exec

docker exec already did this, but nsenter gives debuggers, testers, and script writers a new tool to use with namespaces and containers.

August 20, 2015

Embedded systems, meet the Internet of Things

An article in Military Embedded Systems magazine discusses the evolution of embedded systems as influenced by the Internet of Things (IoT): Embedded systems, meet the Internet of Things.

The article notes: “In many ways, embedded systems are the progenitor of the Internet of Things (IoT) – and now IoT is changing key aspects of how we design and build military embedded systems. In fact, the new model for embedded systems within IoT might best be described as design, build, maintain, update, extend, and evolve.”

August 19, 2015

Secure distribution of RPM packages

This blog post looks at the final part of creating secure software: shipping it to users in a safe way. It explains how to use transport security and package signatures to achieve this goal.

yum versus rpm

There are two commonly used tools related to RPM package management, yum and rpm. (Recent Fedora versions have replaced yum with dnf, a rewrite with similar functionality.) The yum tool inspects package sources (repositories), downloads RPM packages, and makes sure that required dependencies are installed along with fresh package installations and package updates. yum uses rpm as a library to install packages. yum repositories are defined by .repo files in /etc/yum.repos.d, or by yum plugins for repository management (such as subscription-manager for Red Hat subscription management). rpm is the low-level tool which operates on an explicit set of RPM packages. rpm provides both a set of command-line tools and a library to process RPM packages. In contrast to yum, package dependencies are checked, but violations are not resolved automatically. This means that rpm typically relies on yum to tell it what to do exactly; the recipe for a change to a package set is called a transaction.

Securing package distribution at the yum layer resembles transport layer security. The rpm security mechanism is more like end-to-end security (in fact, rpm uses OpenPGP internally, which has traditionally been used for end-to-end email protection).

Transport security with yum

Transport security is comparatively easy to implement. The web server just needs to serve the package repository metadata (repomd.xml and its descendants) over HTTPS instead of HTTP. On the client, a .repo file in /etc/yum.repos.d has to look like this:

[gnu-hello]
name=gnu-hello for Fedora $releasever
# illustrative URL; the key point is the https:// scheme
baseurl=https://repo.example.com/fedora/$releasever/

$releasever expands to the Fedora version at run time (like “22”). By default, end-to-end security with RPM signatures is enabled (see the next section), but we will focus on transport security first.

yum will verify the cryptographic digests contained in the metadata files, so serving the metadata over HTTPS is sufficient, but offering the .rpm files over HTTPS as well is a sensible precaution. The metadata can instruct yum to download packages from absolute, unrelated URLs, so it is necessary to inspect the metadata to make sure it does not contain such absolute “http://” URLs. However, transport security with a third-party mirror network is quite meaningless, particularly if anyone can join the mirror network (as it is the case with CentOS, Debian, Fedora, and others). Rather than attacking the HTTPS connections directly, an attacker could just become part of the mirror network. There are two fundamentally different approaches to achieve some degree of transport security.

Fedora provides a centralized, non-mirrored, Fedora-run metalink service which provides a list of active mirrors and the expected cryptographic digest of the repomd.xml files. yum uses this information to select a mirror and verify that it serves the up-to-date, untampered repomd.xml. The chain of cryptographic digests is verified from there, eventually leading to verification of the .rpm file contents. This is how the long-standing Fedora bug 998 was eventually fixed.

Red Hat uses a different option to distribute Red Hat Enterprise Linux and its RPM-based products: a content-distribution network, managed by a trusted third party. Furthermore, the repositories provided by Red Hat use a separate public key infrastructure which is managed by Red Hat, so breaches in the browser PKI (that is, compromises of certificate authorities or misissued individual certificates) do not affect the transport security checks yum provides. Organizations that wish to implement something similar can use the sslcacert configuration switch of yum. This is the way Red Hat Satellite 6 implements transport security as well. Transport security has the advantage that it is straightforward to set up (it is not more difficult than to enable HTTPS). It also guards against manipulation at a lower level, and will detect tampering before data is passed to complex file format parsers such as SQLite, RPM, or the XZ decompressor. However, end-to-end security is often more desirable, and we cover that in the next section.
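
For example, a repository definition that pins such a private CA could look like this sketch (names and paths are illustrative):

[internal-products]
name=Internal products
baseurl=https://cdn.internal.example.com/products/$releasever/
sslcacert=/etc/pki/internal-ca/ca-bundle.pem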

End-to-end security with RPM signatures

RPM package signatures can be used to implement cryptographic integrity checks for RPM packages. This approach is end-to-end in the sense that the package build infrastructure at the vendor can use an offline or half-online private key (such as one stored in a hardware security module), and the final system which consumes these packages can directly verify the signatures because they are built into the .rpm package files. Intermediates such as proxies and caches (which are sometimes used to separate production servers from the Internet) cannot tamper with these signatures. In contrast, transport security protections are weakened or lost in such an environment.

Generating RPM signatures

To add an RPM signature to a .rpm file, you need to generate a GnuPG key first, using gpg --gen-key. Let’s assume that this key has the user ID “rpmsign@example.com” (a placeholder for this example). We first export the public key part to a file in a special directory, otherwise rpmsign will not be able to verify the signatures we create, as it uses the RPM database as a source of trusted signing keys (and not the user GnuPG keyring):

$ mkdir $HOME/rpm-signing-keys
$ gpg --export -a rpmsign@example.com > $HOME/rpm-signing-keys/example-com.key

The name of the directory $HOME/rpm-signing-keys does not matter, but the name of the file containing the public key must end in “.key”. On Red Hat Enterprise Linux 7, CentOS 7, and Fedora, you may have to install the rpm-sign package, which contains the rpmsign program. The rpmsign command to create the signature looks like this:

$ rpmsign -D '_gpg_name rpmsign@example.com' --addsign hello-2.10.1-1.el6.x86_64.rpm
Enter pass phrase:
Pass phrase is good.
hello-2.10.1-1.el6.x86_64.rpm:

(On success, there is no output after the file name on the last line, and the shell prompt reappears.) The file hello-2.10.1-1.el6.x86_64.rpm is overwritten in place, with a variant that contains the signature embedded into the RPM header. The presence of a signature can be checked with this command:

$ rpm -Kv -D "_keyringpath $HOME/rpm-signing-keys" hello-2.10.1-1.el6.x86_64.rpm
    Header V4 RSA/SHA1 Signature, key ID de337997: OK
    Header SHA1 digest: OK (b2be54480baf46542bcf395358aef540f596c0b1)
    V4 RSA/SHA1 Signature, key ID de337997: OK
    MD5 digest: OK (6969408a8d61c74877691457e9e297c6)

If the output of this command contains “NOKEY” lines instead, like the following, it means that the public key in the directory $HOME/rpm-signing-keys has not been loaded successfully:

    Header V4 RSA/SHA1 Signature, key ID de337997: NOKEY
    Header SHA1 digest: OK (b2be54480baf46542bcf395358aef540f596c0b1)
    V4 RSA/SHA1 Signature, key ID de337997: NOKEY
    MD5 digest: OK (6969408a8d61c74877691457e9e297c6)

Afterwards, the RPM files can be distributed as usual and served over HTTP or HTTPS, as if they were unsigned.

Consuming RPM signatures

To enable RPM signature checking explicitly, the yum repository file must contain a gpgcheck=1 line, as in:

[gnu-hello]
name=gnu-hello for Fedora $releasever
baseurl=https://repo.example.com/fedora/$releasever/
gpgcheck=1

Once signature checks are enabled in this way, package installation will fail with a NOKEY error until the signing key used by .rpm files in the repository is added to the system RPM database. This can be achieved with a command like this:

$ rpm --import https://repo.example.com/example-com.key

The file needs to be transported over a trusted channel, hence the use of an https:// URL in the example. (It is also possible to instruct the user to download the file from a trusted web site, copy it to the target system, and import it directly from the file system.) Afterwards, package installation works as before.

After a key has been imported, it will appear in the output of the “rpm -qa” command:

$ rpm -qa | grep ^gpg-pubkey-
gpg-pubkey-ab0e12ef-de337997

More information about the key can be obtained with “rpm -qi gpg-pubkey-ab0e12ef-de337997”, and the key can be removed again using “rpm --erase gpg-pubkey-ab0e12ef-de337997”, just as if it were a regular RPM package.

Note: Package signatures are only checked by yum if the package is downloaded from a repository (which has checking enabled). This happens if the package is specified as a name or name-version-release on the yum command line. If the yum command line names a file or URL instead, or the rpm command is used, no signature check is performed in current versions of Red Hat Enterprise Linux, Fedora, or CentOS.

Issues to avoid

When publishing RPM software repositories, the following should be avoided:

  1. The recommended yum repository configuration uses baseurl lines containing http:// URLs.
  2. The recommended yum repository configuration explicitly disables RPM signature checking with gpgcheck=0.
  3. There are optional instructions to import RPM keys, but these instructions do not tell the system administrator to remove the gpgcheck=0 line from the default yum configuration provided by the independent software vendor.
  4. The recommended “rpm --import” command refers to the public key file using an http:// URL.

The first three deficiencies in particular open the system up to a straightforward man-in-the-middle attack on package downloads. An attacker can replace the repository or RPM files while they are downloaded, thus gaining the ability to execute arbitrary commands when they are installed. As outlined in the article on the PKI used by the Red Hat CDN, some enterprise networks perform TLS intercept, and HTTPS downloads will fail. This possibility is not sufficient to justify weakening package authentication for all customers, such as recommending the use of http:// instead of https:// in the yum configuration. Similarly, some customers do not want to perform the extra step involving “rpm --import”, but again, this is not an excuse to disable verification for everyone, as long as RPM signatures are actually available in the repository. (Some software delivery processes make it difficult to create such end-to-end verifiable signatures.)


Recommendations

If you are creating a repository of packages, you should give your users a secure way to consume them. You can do this by following these recommendations:

  • Use https:// URLs everywhere in configuration advice regarding RPM repository setup for yum.
  • Create a signing key and use it to sign RPM packages, as outlined above.
  • Make sure RPM signature checking is enabled in the yum configuration.
  • Use an https:// URL to download the public key in the setup instructions.

We acknowledge that package signing might not be possible for everyone, but software downloads over HTTPS are straightforward to implement and should always be used.

August 15, 2015

Distributing Secrets with Custodia

My last blog post described a crypto library I created named JWCrypto. I built this library as a building block of Custodia, a service that helps share secrets, keys, and passwords in distributed applications like microservice architectures built on containers.

Custodia is itself a building block of a new FreeIPA feature to improve the experience of setting up replicas. In fact, Custodia at the moment is mostly plumbing for this feature, and although the plumbing is all there, it is not very usable outside of the FreeIPA project without some tinkering.

This past week I was at Flock, where I gave a presentation on the problem of distributing secrets securely. It is based on my work and my thinking about the general problem, and on how I applied that thinking to build a generic service which I then specialized for use by FreeIPA. If you are curious, I have posted the slides I used during my talk, and I am assured there will soon be video recordings of all the talks available online.

August 11, 2015

Tokenless Keystone

Keystone Tokens are bearer tokens, and bearer tokens are vulnerable to replay attacks. What if we wanted to get rid of them?

Keystone comes out of the early days of OpenStack. When you have a single monolith, you can have a password table. Once you start having multiple, related services, you need a shared sign on mechanism for them. While the Keystone token mechanism is not elegant, it served the need: to avoid copying the user’s password to multiple services.

Today, the tokens represent two things. First, that the user has authenticated, and second, that the user is performing an operation in a specific scope. The token has a set of roles assigned to it, and those roles are on either a project or a domain.

Cryptographic Authentication

In an enterprise that has a centralized authentication mechanism such as Kerberos or X509 Client Certificates, an application typically performs a cryptographic handshake to authenticate the user, and then performs an additional query against a centralized directory to see what groups the user belongs to in order to perform an access control check.

This leads to the first thing that could replace tokens: reuse existing authentication mechanisms to provide access to the remote services. Putting Nova or Glance behind a web server running mod_auth_kerb (or better yet, mod_auth_gssapi, but I get ahead of myself) for Kerberos, or using HTTPS with client certificates, can both be done across the public internet now. Both mechanisms have their pros and cons. Once the user authenticates, the service could then query the role assignments for the user from Keystone instead of validating a token. The data would be the same.
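
As a sketch of what the Kerberos variant could look like, here is a hypothetical Apache fragment using mod_auth_gssapi in front of a service endpoint (the location and keytab path are made up):

<Location /v2.1>
    AuthType GSSAPI
    AuthName "Kerberos Login"
    GssapiCredStore keytab:/etc/httpd/http.keytab
    Require valid-user
</Location>

After the handshake succeeds, the service would query Keystone for the user's role assignments instead of validating a token.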


Federation

What if those mechanisms are not acceptable? There is still Federation. Keystone today can serve out tokens using either OpenID Connect or SAML. There is no reason these same mechanisms could not be put in front of the other services of OpenStack, with Keystone filling out the role information either in the assertions or via server lookup.

All of these mechanisms have a greater cost in network calls to the service endpoint, although not necessarily to the user, who may not have to make the round trips to Keystone in order to fetch a token (OK, SAML is very chatty). What other options do we have?

Signed Requests

If we don’t care about browser support, and focus on just the CLI, then a few more options open up. Keystone could become a registry for public keys, and users could authenticate by signing the requests that go to Nova. The signature of a request would only be slightly larger than a Fernet token, and the user would be able to greatly reduce the number of web calls. There would be slightly larger overhead due to the asymmetric cryptography.

Unfortunately, this is not a real option from the browser; the browser support for “naked keys” is currently not sufficient to ensure the operations will succeed. Usage of client X509 certificates is still the best way to ensure cryptography from the browser, but it will not necessarily support arbitrary document signing. I expect this to change over time, but I suspect the browser support will be uneven at best for a while.


OAUTH

Both versions of OAUTH are designed to address distributed authorization. Without cryptographic signing, the OAUTH (1) protocol degenerates to bearer tokens. With cryptography, the OAUTH (2) protocol is roughly comparable to SAML; OpenID Connect is based on OAUTH2, so this should be no surprise. So, while Keystone tokens could be replaced with some form of OAUTH, and it would at least be closer to a standard, it wouldn’t radically change the current approach to Keystone tokens. OAUTH either gives us what we have with Keystone tokens today or what we would have with SAML.


Basic Auth

Keystone tokens provide minimal additional security benefits for an all-in-one deployment. Instead of putting the user ID and password into the body of the request, the user could pass them via the standard Basic-Auth mechanism with no change in the degree of security. This provides parity with how Keystone is deployed today. No outside service should be able to access the message broker. Calls between services should be done on internal (localhost) interfaces or domain sockets, should not require passwords, and should trust the authorization context as set by the caller.

One Time Passwords

One time passwords (OTPs), in conjunction with Basic Auth or some other way to carry the data to the server, provide an interesting alternative. In theory, the user could pass the OTP along at the start of the request, the Horizon server would be responsible for timestamping it, and the password could then be used for the duration. This seems impractical, as we are essentially generating a new bearer token. For all-in-one deployments they would work as well as Basic-Auth.

'CVE-2015-4495 and SELinux', Or why doesn't SELinux confine Firefox?
Why don't we confine Firefox with SELinux?

That is one of the most often asked questions, especially after a new CVE like CVE-2015-4495 shows up.  This vulnerability in firefox allows a remote website to grab any files in your home directory.  If you can read the file, then firefox can read it and send it back to the website that infected your browser.

The big problem with confining desktop applications is the way the desktop has been designed.

I wrote about confining the desktop several years ago. 

As I explained then, the problem is that applications are allowed to communicate with each other in lots of different ways. Here are just a few.

*   X Windows.  All apps need full access to the X Server. I tried several years ago to block applications' access to the keyboard settings, in order to block keystroke logging (google xspy).  I was able to get it to work, but a lot of applications started to break.  Other access that you would want to block in X would be screen capture and access to the cut/paste buffer, but blocking these would cause too much breakage on the system.  XAce was an attempt to add MAC controls to X and is used in MLS environments, but I believe it causes too much breakage.
*   File system access.  Users expect firefox to be able to upload and download files anywhere they want on the desktop.  If I was czar of the OS, I could state that uploaded files must go into ~/Upload and downloaded files go into ~/Download, but then users would want to upload photos from ~/Photos, or to create their own random directories.  Blocking access to any particular directory, including ~/.ssh, would be difficult, since someone probably has a web based ssh session or some other tool that uses the ssh public key to authenticate.  (This is the biggest weakness described in CVE-2015-4495.)
*   Dbus communications, as well as gnome shell, shared memory, the kernel keyring, and access to the camera and microphone ...

Everyone expects all of these to just work, so blocking them with MAC tools like SELinux is more likely to lead to "setenforce 0" than to actually add a lot of security.

Helper Applications.

One of the biggest problems with confining a browser is helper applications.  Let's imagine I ran firefox with SELinux type firefox_t.  The user clicks on a .odf file or a .doc file; the browser downloads the file and launches LibreOffice so the user can view the file.  Should LibreOffice run as LibreOffice_t or firefox_t?  If it runs as LibreOffice_t, and the LibreOffice_t app was looking at a different document, the content might be able to subvert the process.  If I run LibreOffice as firefox_t, what happens when the user launches a document off of his desktop?  It will not launch a new LibreOffice; it will just communicate with the running LibreOffice and launch the document, making it accessible to firefox_t.

Confining Plugins.

For several years now we have been confining plugins with SELinux in Firefox and Chrome.  This prevents tools like flashplugin from having much access to the desktop.  But we have had to add booleans to turn off the confinement, since certain plugins end up wanting more access.

mozilla_plugin_bind_unreserved_ports --> off
mozilla_plugin_can_network_connect --> off
mozilla_plugin_use_bluejeans --> off
mozilla_plugin_use_gps --> off
mozilla_plugin_use_spice --> off
unconfined_mozilla_plugin_transition --> on
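
Flipping one of these booleans persistently is a single command; for example, to allow plugins to make network connections:

# setsebool -P mozilla_plugin_can_network_connect on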

SELinux Sandbox

I did introduce the SELinux Sandbox a few years ago.

The SELinux sandbox allows you to confine desktop applications using container technologies, including SELinux.  You can run firefox, LibreOffice, evince ... in their own isolated desktops.  It is quite popular, but users must choose to use it.  It is not enabled by default, and it can cause unexpected breakage; for example, you are not allowed to cut and paste from one window to another.
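
As a sketch, running firefox in its own isolated X desktop looks like this (sandbox_web_t is the type usually used for browsers, though available types can vary between policy versions):

$ sandbox -X -t sandbox_web_t firefox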

Hope on the way.

Alex Larsson is working on a new project to change the way desktop applications run, called Sandboxed Applications.

Alex explains that there are two main goals of his project.

* We want to make it possible for 3rd parties to create and distribute applications that work on multiple distributions.
* We want to run the applications with as little access as possible to the host. (For example user files or network access)

The second goal might allow us to really lock down firefox and friends in a way similar to what Android is able to do on your cell phone (SELinux/SEAndroid blocks lots of access on the web browser.)

Imagine that when a user says he wants to upload a file, he talks to the desktop rather than directly to firefox, and the desktop hands the file to firefox.  Firefox could then be prevented from touching anything in the homedir.  Also, if a user wanted to save a file, firefox would ask the desktop to launch the file browser, which would run in the desktop context.  When the user selected where to save the file, the file browser would give a descriptor to firefox to write the file.

Similar controls could isolate firefox from the camera, microphone, etc.

Wayland, which will eventually replace X Windows, also provides for better isolation of applications.

Needless to say, I am anxiously waiting to see what Alex and friends come up with.

The combination of container technology, including namespaces and SELinux, gives us a chance at controlling the desktop.

August 06, 2015

User certificates and custom profiles with FreeIPA 4.2

The FreeIPA 4.2 release introduces some long-awaited certificate management features: user certificates and custom certificate profiles. In this blog post, we will examine the background and motivations for these features, then carry out a real-world scenario where both these features are used: user S/MIME certificates for email protection.

Custom profiles

FreeIPA uses the Dogtag Certificate System PKI for issuance of X.509 certificates. Although Dogtag ships with many certificate profiles, and could be configured with profiles for almost any conceivable use case, FreeIPA only used a single profile for the issuance of certificates to service and host principals. (The name of this profile was caIPAserviceCert, but it was hardcoded and not user-visible.)

The caIPAserviceCert profile was suitable for the standard TLS server authentication use case, but there are many use cases for which it was not suitable; especially those that require particular Key Usage or Extended Key Usage assertions or esoteric certificate extensions, to say nothing of client-oriented profiles.

It was possible (and remains possible) to use the deployed Dogtag instance directly to accomplish almost any certificate management goal, but Dogtag lacks knowledge of the FreeIPA schema so the burden of validating requests falls entirely on administrators. This runs contrary to FreeIPA’s goal of easy administration and the expectations of users.

The certprofile-import command allows new profiles to be imported into Dogtag, while certprofile-mod, certprofile-del, certprofile-show and certprofile-find do what they say on the label. Only profiles that are shipped as part of FreeIPA (at time of writing only caIPAserviceCert) or added via certprofile-import are visible to FreeIPA.

An important per-profile configuration that affects FreeIPA is the ipaCertprofileStoreIssued attribute, which is exposed on the command line as --store=BOOL. This attribute tells the cert-request command what to do with certificates issued using that profile. If TRUE, certificates are added to the target principal’s userCertificate attribute; if FALSE, the issued certificate is delivered to the client in the command result but nothing is stored in the FreeIPA directory (though the certificate is still stored in Dogtag’s database). The option to not store issued certificates is desirable in use cases that involve the issuance of many short-lived certificates.
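
For example, an existing profile could be switched to not store issued certificates like this (the profile name is hypothetical):

% ipa certprofile-mod webservice-shortlived --store=FALSE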

Finally, cert-request learned the --profile-id option to specify which profile to use. It is optional and defaults to caIPAserviceCert.

User certificates

Prior to FreeIPA 4.2, certificates could only be issued for host and service principals. The same capability now exists for user principals. Although cert-request treats user principals in substantially the same way as host or service principals, there are a few important differences:

  • The subject Common Name in the certificate request must match the FreeIPA user name.
  • The subject email address (if present) must match one of the user’s email addresses.
  • All Subject Alternative Name rfc822Name values must match one of the user’s email addresses.
  • Like services and hosts, KRB5PrincipalName SAN is permitted if it matches the principal.
  • dNSName and other SAN types are prohibited.


With support for custom certificate profiles, there must be a way to control which profiles can be used for issuing certificates to which principals. For example, if there was a profile for Puppet masters, it would be sensible to restrict use of that profile to hosts that are members of some Puppet-related group. This is the purpose of CA ACLs.

CA ACLs are created with the caacl-add command. Users and groups can be added or removed with the caacl-add-user and caacl-remove-user commands. Similarly, caacl-{add,remove}-host for hosts and hostgroups, and caacl-{add,remove}-service.

If you are familiar with FreeIPA’s Host-based Access Control (HBAC) policy feature these commands might remind you of the hbacrule commands. That is no coincidence! The hbacrule commands were my guide for implementing the caacl commands, and the same underlying machinery – libipa_hbac via pyhbac – is used by both plugins to enforce their policies.

Putting it all together

Let’s put these features to use with a realistic scenario. A certain group of users in your organisation must use S/MIME for securing their email communications. To use S/MIME, these users must be issued a certificate with emailProtection asserted in the Extended Key Usage certificate extension. Only the authorised users should be able to have such a certificate issued.

To address this scenario we will:

  1. create a new certificate profile for S/MIME certificates;
  2. create a group for S/MIME users and a CA ACL to allow members of that group access to the new profile;
  3. generate a signing request and issue a cert-request command using the new profile.

Let’s begin.

Creating an S/MIME profile

We export the default profile to use as a starting point for the S/MIME profile:

% ipa certprofile-show --out smime.cfg caIPAserviceCert

Inspecting the profile, we find the Extended Key Usage extension configuration containing a line like the following (reconstructed here from a stock Dogtag profile; the policyset numbering may differ in your copy):
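
policyset.serverCertSet.7.default.params.exKeyUsageOIDs=1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2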


The Extended Key Usage extension is defined in RFC 5280 § The two OIDs in the default profile are for TLS WWW server authentication and TLS WWW client authentication respectively. For S/MIME, we need to assert the Email protection key usage (OID 1.3.6.1.5.5.7.3.4), so we change this line to:
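
policyset.serverCertSet.7.default.params.exKeyUsageOIDs=1.3.6.1.5.5.7.3.4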


We also remove the profileId=caIPAserviceCert line and set appropriate values for the desc and name fields. Now we can import the new profile:

% ipa certprofile-import smime --file smime.cfg \
  --desc "S/MIME certificates" --store TRUE
Imported profile "smime"
Profile ID: smime
Profile description: S/MIME certificates
Store issued certificates: TRUE

Defining the CA ACL

We will define a new group for S/MIME users, and a CA ACL to allow users in that group access to the smime profile:

% ipa group-add smime_users
Added group "smime_users"
  Group name: smime_users
  GID: 1148600006

% ipa caacl-add smime_acl
Added CA ACL "smime_acl"
  ACL name: smime_acl
  Enabled: TRUE

% ipa caacl-add-user smime_acl --group smime_users
  ACL name: smime_acl
  Enabled: TRUE
  User Groups: smime_users
Number of members added 1

% ipa caacl-add-profile smime_acl --certprofile smime
  ACL name: smime_acl
  Enabled: TRUE
  Profiles: smime
  User Groups: smime_users
Number of members added 1

Creating and issuing a cert request

Finally we need to create a PKCS #10 certificate signing request (CSR) and issue a certificate via the cert-request command. We will do this for the user alice. Because this certificate is for email protection, Alice’s email address should be in the Subject Alternative Name (SAN) extension; we must include it in the CSR.

The following OpenSSL config file (saved as alice.conf, the name used below) can be used to generate the certificate request:

[ req ]
prompt = no
encrypt_key = no

distinguished_name = dn
req_extensions = exts

[ dn ]
commonName = "alice"

[ exts ]
subjectAltName = email:alice@ipa.local

We create and then inspect the CSR (the genrsa step can be skipped if you already have a key):

% openssl genrsa -out key.pem 2048
Generating RSA private key, 2048 bit long modulus
e is 65537 (0x10001)
% openssl req -new -key key.pem -out alice.csr -config alice.conf
% openssl req -text < alice.csr
Certificate Request:
        Version: 0 (0x0)
        Subject: CN=alice
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Exponent: 65537 (0x10001)
        Requested Extensions:
            X509v3 Subject Alternative Name: 
                email:alice@ipa.local
    Signature Algorithm: sha1WithRSAEncryption

Observe that the common name is the user’s name alice, and that alice@ipa.local is present as an rfc822Name in the SAN extension.

Now let’s request the certificate:

% ipa cert-request alice.csr --principal alice --profile-id smime
ipa: ERROR: Insufficient access: Principal 'alice' is not
  permitted to use CA '.' with profile 'smime' for certificate
  issuance

Oops! The CA ACL policy prohibited this issuance because we forgot to add alice to the smime_users group. (The not permitted to use CA '.' part is a reference to the upcoming sub-CAs feature). Let’s add the user to the appropriate group and try again:

% ipa group-add-member smime_users --user alice
  Group name: smime_users
  GID: 1148600006
  Member users: alice
Number of members added 1

% ipa cert-request alice.csr --principal alice --profile-id smime
  Certificate: MIIEJzCCAw+gAwIBAgIBEDANBgkqhkiG9w0BAQsFADBBMR...
  Subject: CN=alice,O=IPA.LOCAL 201507271443
  Issuer: CN=Certificate Authority,O=IPA.LOCAL 201507271443
  Not Before: Thu Aug 06 04:09:10 2015 UTC
  Not After: Sun Aug 06 04:09:10 2017 UTC
  Fingerprint (MD5): 9f:8e:e0:a3:c6:37:e0:a4:a5:e4:6b:d9:14:66:67:dd
  Fingerprint (SHA1): 57:6e:d5:07:8f:ef:d6:ac:36:b8:75:e0:6c:d7:4f:7d:f9:6c:ab:22
  Serial number: 16
  Serial number (hex): 0x10

Success! We can see that the certificate was added to the user’s userCertificate attribute. We can also export the certificate to inspect it (parts of the certificate are elided below) or import it into an email program:

% ipa user-show alice
  User login: alice
  First name: Alice
  Last name: Able
  Home directory: /home/alice
  Login shell: /bin/sh
  Email address: alice@ipa.local
  UID: 1148600001
  GID: 1148600001
  Certificate: MIIEJzCCAw+gAwIBAgIBEDANBgkqhkiG9w0BAQsFADBBMR...
  Account disabled: False
  Password: True
  Member of groups: smime_users, ipausers
  Kerberos keys available: True

% ipa cert-show 16 --out alice.pem >/dev/null
% openssl x509 -text < alice.pem
        Version: 3 (0x2)
        Serial Number: 16 (0x10)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: O=IPA.LOCAL 201507271443, CN=Certificate Authority
            Not Before: Aug  6 04:09:10 2015 GMT
            Not After : Aug  6 04:09:10 2017 GMT
        Subject: O=IPA.LOCAL 201507271443, CN=alice
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Authority Key Identifier: 

            Authority Information Access: 
                OCSP - URI:http://ipa-ca.ipa.local/ca/ocsp

            X509v3 Key Usage: critical
                Digital Signature, Non Repudiation, Key Encipherment, Data Encipherment
            X509v3 Extended Key Usage: 
                E-mail Protection
            X509v3 CRL Distribution Points: 

                Full Name:
                CRL Issuer:
                  DirName: O = ipaca, CN = Certificate Authority

            X509v3 Subject Key Identifier: 
            X509v3 Subject Alternative Name: 
    Signature Algorithm: sha256WithRSAEncryption


The ability to define and control access to custom certificate profiles and the extension of FreeIPA’s certificate management features to user principals open the door to many use cases that were previously not supported. Although the certificate management features available in FreeIPA 4.2 are a big step forward, there are still several areas for improvement, outlined below.

First, the Dogtag certificate profile format is obtuse. Documentation will make it bearable, but documentation is no substitute for good UX. An interactive profile builder would be a complex feature to implement but we might go there. Alternatively, a public, curated, searchable (even from FreeIPA’s web UI) repository of profiles for various use cases might be a better use of resources and would allow users and customers to help each other.

Next, the ability to create and use sub-CAs is an oft-requested feature and important for many use cases. Work is ongoing to bring this to FreeIPA soon. See the Sub-CAs design page for details.

Thirdly, the FreeIPA framework currently has authority to perform all kinds of privileged operations on the Dogtag instance. This runs contrary to the framework philosophy which advocates for the framework only having the privileges of the current user, with ACIs (and CA ACLs) enforced in the backends (in this case Dogtag). Ticket #5011 was filed to address this discrepancy.

Finally, the request interface between FreeIPA and Dogtag is quite limited; the only substantive information conveyed is whatever is in the CSR. There is minimal capability for FreeIPA to convey additional data with a request, and any time we (or a user or customer) want to broaden the interface to support new kinds of data (e.g. esoteric certificate extensions containing values from custom attributes), changes would have to be made to both FreeIPA and Dogtag. This approach does not scale.

I have a vision for how to address this final point in a future version of FreeIPA. It will be the subject of future blog posts, talks and eventually – hopefully – design proposals and patches! For now, I hope you have enjoyed this introduction to some of the new certificate management capabilities in FreeIPA 4.2 and find them useful. And remember that feedback, bug reports and help with development are always appreciated!

August 05, 2015

Template for a KeystoneV3.rc

If you are moving from Keystone v2 to v3 calls, you need more variables in your environment. Here is a template for an updated keystone.rc for V3, in Jinja format:

export OS_AUTH_URL=http://{{ keystone_hostname }}:5000/v3
export OS_USERNAME={{ username }}
export OS_PASSWORD={{ password }}
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME={{ project_name }}
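
Once the template is rendered and sourced, a quick smoke test (assuming the python-openstackclient CLI is installed) looks like:

$ source keystone.rc
$ openstack token issue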

MVEL as an attack vector

Java-based expression languages provide significant flexibility when using middleware products such as Business Rules Management System (BRMS). This flexibility comes at a price as there are significant security concerns in their use. In this article MVEL is used in JBoss BRMS to demonstrate some of the problems. Other products might be exposed to the same risk.

MVEL is an expression language, mostly used for making basic logic available in application-specific languages and configuration files, such as XML. It’s not intended for serious object-oriented programming, just simple expressions as in “data.value == 1”. On the surface it doesn’t look like something inherently dangerous.

JBoss BRMS is a middleware product designed to implement Business Rules. The open source counterpart of JBoss BRMS is called drools. The product is intended to allow businesses (especially financial ones) to implement the decision logic used in their organization’s operations. The product contains a rules repository, an execution engine, and some authoring tools. The business rules themselves are written in the drools rules language. An interesting approach was chosen for the implementation of the drools rules language: it is compiled into MVEL for execution, and it allows the use of MVEL expressions directly, where expressions are applicable.

There is however an implementation detail that makes MVEL usage in middleware products a security concern. MVEL is compiled into plain Java and, as such, allows access to any Java objects and methods that are available to the hosting application. It was initially intended as an expression language that allowed simple programmatic expressions in otherwise non-programmatic configuration files, so this was never a concern: configuration files are usually editable only by the site admins anyway, so from a security perspective adding an expression to a config file is not much different from adding a call to a Java class of an application and deploying it. The same was true for BRMS up to version 5: any drools rule would be deployed as a separate file in the repository, so any code in drools rules would be available for deployment only by authorized personnel, usually as part of the company workflow following code review and other such procedures.

This changed in BRMS (and BPMS) 6. A new WYSIWYG tool was introduced that allowed constructing the rules graphically in a browser session and testing them right away, so any person with rule authoring permissions (a role known as “analyst” rather than “admin”) would be able to do this. The drools rules allow writing arbitrary MVEL expressions, which in turn allow calls to any Java classes deployed on the application server without restriction, including the system ones. This means an analyst would be able to write System.exit() in a rule, and testing that rule would shut down the server! Basically, the graphical rule editor allowed authenticated arbitrary code execution for non-admin users.

A similar problem existed in JBoss Fuse Service Works 6. The drools engine that ships with it does not come with any graphical tool to author rules, so rules must be deployed on the server as before; however, it comes with the RTGov component, which has some MVEL interfaces exposed. Sending an RTGov request with an MVEL expression in it would again allow authenticated arbitrary code execution for any user with RTGov permissions.

This behaviour was caught early in the development cycle for BxMS/FSW version 6, and a fix was implemented. The fix involves running the application server with the Java Security Manager (JSM) turned on, and adding extra configuration files for MVEL-only security policies. After the fix was applied, only a limited number of Java classes were allowed to be used inside MVEL expressions, ones safe for use in legitimate drools rules and RTGov interfaces, and the specific RCE vulnerability was considered solved.
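
On JBoss EAP, enabling the JSM is typically a matter of passing the standard JVM flags, for example in standalone.conf (the policy file path here is illustrative):

JAVA_OPTS="$JAVA_OPTS -Djava.security.manager -Djava.security.policy=$JBOSS_HOME/bin/mvel-security.policy"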

Further problems arose when the products went into testing with the fix applied and some regressions were run. It was discovered that it wasn’t a good idea to make the fix with JSM enabled the default setup for production servers, as it caused the servers to run slow. Very slow. Resource consumption was excessive and performance suffered dramatically. It became obvious that making the MVEL/JSM fix the default for high-performance production environments was not an option.

A solution was found after considerable consultation between Development, QE and Project Management. The following proposals were made for any company running BRMS:

  • When deploying BRMS/BPMS on a high-performance production server, it is suggested to disable JSM, but at the same time not to allow any “analyst”-role users to use these systems for rule development. It is recommended to use these servers for running rules and applications developed separately, achieving maximum performance while eliminating the vulnerability by disallowing rule development altogether.
  • When BRMS is deployed on development servers used by rule developers and analysts, it is suggested to run these servers with JSM enabled. Since these are not production servers, they do not require mission-critical performance in processing real-time customer data; they are only used for application and rule development. As such, a small sacrifice in performance on a non mission-critical server is a fair trade-off for a tighter security model.
  • The toughest situation arises when a server is deployed in a “BRMS-as-a-service” configuration, in other words when rule development is exposed to customers over the Web (even through a VPN-protected Extranet). In this case there is no choice but to enable complete JSM protection and accept the consequences of the performance hit. Without it, any customer with minimal “rule writing and testing” privileges can completely take over the server (and any other co-hosted customers’ data as well), a very undesirable result.

Similar solutions are recommended for FSW. Since only RTGov exposes the weakness, it is recommended to run RTGov as a separate server with JSM enabled. For high performance production servers, it is recommended not to install or enable the RTGov component, which eliminates the risk of exposure of MVEL-based attack vectors, making it possible to run them without JSM at full speed.

Other approaches are being considered by the development team for a new implementation of the MVEL fix in future BRMS versions. One such idea was to run a dedicated MVEL-only app server under JSM, separate from the main app server that runs all other parts of the applications, but other proposals were discussed as well. Stay tuned for more information once the decisions are made.

July 31, 2015

CIL – Part2: Module priorities

In my previous blog post, I talked about CIL performance improvements. In this blog post, I would like to introduce another cool feature: module priorities. If you follow the link, you can see a nice blog post published by Petr Lautrbach about this new feature.

With the new SELinux userspace, we are able to use priorities for SELinux policy modules. It means you can ship your own ipa policy module, based on the distribution policy module with additional rules, and load it with a higher priority. No more different names for policy modules; the higher priority wins.
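
For example, loading a rebuilt module at a higher priority is a single command (the module file name is assumed):

# semodule -X 400 -i ipa.pp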

# semodule --list-modules=full | grep ipa
400 ipa pp
100 ipa pp

Of course, you can always say you want to use the distro policy module and just add additional fixes. Yes, that works fine for cases where you add minor fixes which you are not able to get into the distro policy for some reason. You can also package this local policy, as Lukas Vrabec wrote in his blog.

Another way to deal with this is to ship the SELinux policy for your application yourself and not be a part of the distro policy at all. Yes, we do see such cases.

For example

# semodule --list-modules=full | grep docker
400 docker pp

But what are the disadvantages of this approach?

* you need to know how to write SELinux policy
* you need to maintain this policy and reflect the latest distro policy changes
* you need to do “hacks” in your policies if you need interfaces for types shipped by the distro policy
* you need to get your policy upstream and check that there is no conflict with the distribution policy if both merge from the same upstream

From the Fedora/RHEL point of view, it was always a problem how to deal with policies for big projects like Cluster, Gluster, OpenShift and so on. We tried to get these policies out of the distro policy, but it was really hard to do correct rpm handling, and then we faced the above mentioned points.

So is there an easy way to deal with it? Yes, there is. We ship a policy for a project in our distribution policy; the project takes this policy, adds additional fixes, and creates pull requests against the distribution policy; and if the timelines differ, the updated policy is shipped by the project itself. And that’s it! It can be done easily using module priorities.

For example, we have Gluster policy in Fedora by default.

# semodule --list-modules=full | grep gluster
100 glusterd pp

And now, the Gluster team needs to do a new release, but it causes some SELinux issues. Gluster folks can take the distribution policy, add additional rules, and package it.
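
A sketch of that install step (the module file name is assumed):

# semodule -X 400 -i glusterd.pp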

Then we will see something like

# semodule --list-modules=full | grep gluster
100 glusterd pp
400 glusterd pp

In the meantime, Gluster folks can submit pull requests with all changes against the distribution policy while still shipping the same policy. The Gluster policy stays part of the distribution policy, it is easily upstream-able, and moreover it can be disabled in the distribution policy by default.

# semodule --list-modules=full | grep gluster
400 gluster cil
100 glusterd pp disabled

$ matchpathcon /usr/sbin/glusterfsd
/usr/sbin/glusterfsd system_u:object_r:glusterd_exec_t:s0

This model is really fantastic and gives us answers for a lot of issues.

July 30, 2015

Using Ansible to add a NetworkManager connection

The Virtual Machine has two interfaces, but only one is connected to a network. How can I connect the second one?

To check the status of the networks with NetworkManager's Command Line Interface (nmcli), run:

$ sudo nmcli device
DEVICE  TYPE      STATE         CONNECTION
eth0    ethernet  connected     System eth0
eth1    ethernet  disconnected  --
lo      loopback  unmanaged     --

To bring it up manually:

$ sudo nmcli connection add type ethernet ifname eth1  con-name ethernet-eth1
Connection 'ethernet-eth1' (a13aeb2c-630f-4de6-b735-760264927263) successfully added.

To automate the same thing via Ansible, we can use the command module, but that will execute every time unless we check whether the interface already has an IP address. If it does, we want to skip it. We can check that using the predefined facts variables. Each interface has a variable in the form ansible_<interface>, which is a dictionary containing details about the interface. Here is what my host has for the interfaces:

        "ansible_eth0": {
            "active": true,
            "device": "eth0",
            "ipv4": {
                "address": "",
                "netmask": "",
                "network": ""
            "ipv6": [
                    "address": "fe80::f816:3eff:fed0:510f",
                    "prefix": "64",
                    "scope": "link"
            "macaddress": "fa:16:3e:d0:51:0f",
            "module": "virtio_net",
            "mtu": 1500,
            "promisc": false,
            "type": "ether"
        "ansible_eth1": {
            "active": true,
            "device": "eth1",
            "macaddress": "fa:16:3e:38:31:71",
            "module": "virtio_net",
            "mtu": 1500,
            "promisc": false,
            "type": "ether"

You can see that, while eth0 has an ipv4 section, eth1 has no such section. Thus, to gate the playbook task on the presence of the variable, use a when clause.

Here is the completed task:

  - name: Add second ethernet interface
    command: nmcli connection  add type ethernet ifname eth1  con-name ethernet-eth1
    when: ansible_eth1.ipv4 is not defined

Now, there is an Ansible module for NetworkManager, but it is in the 2.0 version of Ansible, which is not yet released. I want this to work with the version of Ansible I (and my team) have installed on Fedora 22. Once 2.0 comes out, many of these “one-offs” will use the core modules.