Inmates running the asylum. The fox guarding the henhouse. You've no doubt heard these phrases before. They're clever ways of highlighting how the wrong people are often in charge of things. It's convenient to think that the wrong people are running the show elsewhere, but have you taken the time to reflect inward and ask how this very dilemma might be affecting your own organization? I see it happening all the time with security assessments. In organizations both large and small, I see improper testing - or no testing at all - of systems that should otherwise be in scope and fair game for assessment. The very people in charge of the security assessments are the ones determining how things are going to turn out.
I see everyone from CIOs to network admins, and all of the IT/security roles in between, setting parameters on what can and cannot be tested. Ditto for how it can be tested. Often, an external security assessment/penetration test is performed, but not everything is being looked at. Sometimes what's skipped is relatively benign, like a marketing website; other times it's actual enterprise applications that are being overlooked (often in the name of "someone else is hosting it and is therefore responsible for it"). Still other times, I hear people say that their auditors or customers aren't asking for certain systems to be tested, so it doesn't really matter.
I think the general consensus among the stakeholders reading the actual assessment reports is that they're getting the current status of everything. But in many situations that's not the case. There's no doubt that people need what they need and nothing more. In fact, legal counsel would likely advise doing the bare minimum, documenting the bare minimum, and sharing the bare minimum. That's just how the world works. At the end of the day, people presumably know what they want and need. I'm just not convinced that the current approach, whereby IT and security staff define which systems get tested and how they get tested (and, therefore, the outcomes), is the best one.
Most security assessment reports have a notes/scope section that outlines what was tested and what was not. However, what's often missing are the details that people don't think about in this context, such as:
- How systems that were not tested can or will impact the systems that were tested if the untested systems have exploitable vulnerabilities
- Whether authenticated testing, unauthenticated testing, or both were performed (it's important to distinguish the two - and to do both)
- Which network security controls, e.g., firewalls and intrusion prevention systems, were disabled or left in place (you have to look past any blocking)
- What level of manual analysis/penetration testing was performed, including how much time was spent and whether social engineering/email phishing was part of the testing (it makes a difference!)
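One way to make these caveats explicit is to attach a structured scope disclosure to every report rather than burying the details in prose. Here's a minimal sketch of what that could look like; the field names and gap checks are purely illustrative, not drawn from any standard or tool:

```python
from dataclasses import dataclass


@dataclass
class ScopeDisclosure:
    """Illustrative record of testing caveats to attach to an assessment report."""
    systems_tested: list        # systems actually assessed
    systems_excluded: list      # untested systems that could still impact tested ones
    authenticated: bool         # credentialed testing performed?
    unauthenticated: bool       # outside-looking-in testing performed?
    controls_left_in_place: list  # e.g., firewall or IPS left enabled during testing
    manual_hours: float         # time spent on manual analysis/penetration testing
    social_engineering: bool    # phishing/social engineering included?

    def gaps(self):
        """Flag disclosure gaps a report reader should question."""
        issues = []
        if self.systems_excluded:
            issues.append(f"{len(self.systems_excluded)} system(s) excluded from testing")
        if not (self.authenticated and self.unauthenticated):
            issues.append("testing was not both authenticated and unauthenticated")
        if self.controls_left_in_place:
            issues.append("blocking controls may have masked findings: "
                          + ", ".join(self.controls_left_in_place))
        if self.manual_hours == 0:
            issues.append("no manual analysis/penetration testing performed")
        if not self.social_engineering:
            issues.append("no social engineering/phishing testing")
        return issues
```

A reader who sees `gaps()` return five items for a "clean" report knows the report reflects what was tested, not the true state of the environment.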
There are so many caveats associated with modern-day security testing that no one really knows if everything has been looked at in the proper ways. So, what do you do? Do you question the validity of existing testing methods and reports? Do you step back and ask tougher questions of those who were doing the testing? Perhaps there needs to be an arm's-length entity involved with defining what gets tested and how it gets tested, including very specific approaches, systems in scope, and tools that are to be used.
This challenge is similar to certain aspects of healthcare - something we can all relate to. When a patient gets an MRI or CAT scan, the results the radiologist has to analyze will be different from those of a more focused X-ray or ultrasound. Perhaps the prescribing doctor thinks the patient just needs an X-ray when, in fact, they actually need a PET scan. That very thing happened to my mother when she was fighting lung cancer. Her doctors focused on her lungs and hardly anything else. Her chemotherapy was working well, and her lungs continued to look good over time. What was missed, however, was the cancer that had spread to other parts of her body. The prescribed diagnostics were focused on what was thought to be important, but they completely missed what was going on in the rest of her body. Unfortunately, given how much time had passed while the cancer spread elsewhere unnoticed, the outcome was not a positive one. Similarly, it's important to remember that any security testing that's assumed to paint the entire picture may not be doing that at all.
Are the inmates running the asylum? Is the fox guarding the henhouse? Perhaps a bit here and there. However, it's not always people in IT and security intentionally "limiting" the scope of security testing by keeping certain systems out of the loop and looking the other way to generate false or misrepresented outcomes. I do know that this is going on in many situations, but the issue is a bit more complicated. I don't think there's a good solution other than closer involvement on the part of management, internal auditors, and the outside parties scoping and performing these assessments. If anything, the main focus should be ensuring that expectations are properly set.
A false sense of security is the enemy of decision makers. Never, ever should someone reading a security assessment report assume that all the proper testing has been done or that the report is a complete and accurate reflection of where things truly stand. Odds are it's not.