October is promoted as Cyber Security Awareness Month in the US and across the European Union. We're all for increasing awareness of security issues and threats, so count us in; that said, we know our average SecurityStreet reader likely works in information security and is already "aware."
Last year we did this through a series of primers designed to help you educate your users on the risks they face daily, and how to protect themselves. If that's of interest, it's not too late to use them to educate your users about phishing, mobile threats, basic password hygiene, avoiding cloud crises, and the value of vigilance.
This year we're focusing on the executive team. Given the number of high-profile breaches in the past year alone, the C-suite and Board are starting to pay attention to cyber security and the potential business risk in terms of liability, loss of reputation, and revenue impact.
Over the next few weeks, we’ll be covering why security matters now; data custodianship; building security into the corporate culture through policies and user education; how organizations can make security into a strength and advantage; and crisis communications and response.
This week, let’s talk about coordinated disclosure and working with the security community.
There are no guarantees in the safety, security, and resilience of technology. Even with airtight security teams, policies, processes, and tools operating at peak efficiency, there will always be newly discovered software vulnerabilities. Your organization simply can't test for every scenario and predict everything that might happen; there will never be enough money or time. Security inevitably becomes a balancing act. If you're too cavalier about it, you're inviting disaster. On the flip side, you can invest as much as possible in as many security audits as you can afford, but sooner or later you still have to ship product to stay in business.
It's usually only a matter of time until your company receives word from a researcher that they've found a security flaw in your product, service, or website. This is an opportunity for you and your business to demonstrate your leadership and commitment to customer experience and care. It all comes down to how you respond to the research. It's not unreasonable to have imperfections in your code, and oddly enough, it's often easier for third parties to spot these issues. This could be for any number of reasons: they're less immersed in the detail; they have different background experience or unique knowledge and skill sets; they identify a user scenario you did not foresee. Some flaws are found accidentally, as was the case with five-year-old Kristoffer Von Hassel, who just wanted to play his father's Xbox games.
However a researcher makes their discovery, it will likely lead to an awkward conversation, and you may feel defensive after having poured resources and hours into development. What happens next, specifically how your organization responds to this kind of feedback, will mark you as either a great company that responds to its imperfections in a healthy, productive way... or one that would rather put its customers and intellectual property at risk by denying the problem exists, or worse, by trying to take legal action against the researcher.
To distinguish your company as one that truly cares about its customers, leverage the research community to make your products the best they can be. Researchers are effectively providing you with advanced product testing, which not only helps you ensure you are not putting your customers at risk, it can also help you drive innovation and demonstrate agility and leadership to your customers, prospects, and the industry as a whole. Building security into your offerings provides a competitive advantage and helps you increase customer loyalty and satisfaction. In other words, you can flip the bad news of a vulnerability disclosure on its head and use it as an opportunity to get customer attention and show that you take their security seriously.
The opposite is also true, of course: if you ignore, or even try to take action against, a researcher who finds a vulnerability in your product, you may be seen as keener to cover up an issue than to actually fix the problem.
Below are some suggestions on how to work effectively with researchers. Being open to the security community and their feedback is a great first step; we recommend you build on this with internal processes that streamline your response and increase the efficacy of your communications. Together, these make up a coordinated disclosure program.
Some questions to get you started:
- Do you have a written policy on your website detailing your organization's position on working with researchers, as well as a description of how your company receives and processes security feedback?
- Do you have an email address where researchers can send you their vulnerability findings? We recommend having a PGP key or some kind of encryption in place so people can submit this sensitive information to you privately and securely.
- Once a researcher sends you vulnerability information, do you have an internal system in place that ensures the right people get this information, and are those people empowered to take action?
- Have you considered a bug bounty program?
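As a concrete illustration of the first two questions, one widely used convention is to publish a `security.txt` file (standardized in RFC 9116) at `/.well-known/security.txt` on your website, pointing researchers at your disclosure policy, your reporting address, and your PGP key. A minimal sketch follows; every value here is a hypothetical placeholder, not a real contact:

```
# Hypothetical /.well-known/security.txt — all values are placeholders
Contact: mailto:security@example.com
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/security-disclosure-policy
Preferred-Languages: en
Expires: 2026-12-31T23:59:59Z
```

Researchers know to check this location by convention, and the `Encryption` field points to the published PGP key so that sensitive vulnerability details can be submitted to you privately and securely.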
You don't have to reinvent the wheel here if you don't have the internal resources to get all of the above steps up and running. There are a number of platforms dedicated to helping your organization work with the security community, such as HackerOne and Bugcrowd, where researchers can submit their findings safely.
As we mentioned in last week's blog post, the key thing to remember is that you are a custodian of your customers' data, not an owner. There are various levels of engagement an organization can have with the security community, and you will need to find the right level to suit your resources and needs. Even at a bare minimum, positive interactions with the security community around vulnerability disclosures demonstrate that you care about your customers, and that reflects well on your organization as a whole.