How Do We De-Criminalize Security Research? AKA What’s Next for the CFAA?

Blog post by jenellis, Jan 26, 2015

Anyone who read my breakdown of the President’s proposal for cybersecurity legislation will know that I’m very concerned that both the current version of the Computer Fraud and Abuse Act (CFAA) and the update recently proposed by the Administration have had, and will continue to have, a strong chilling effect on security research. You will also know that I believe the security community can and must play a central role in changing it for the better. This post is about how we do that.


A quick recap

The CFAA is currently worded so vaguely that it not only creates confusion and doubt for researchers, but also allows a very wide margin of prosecutorial discretion in how the statute is applied. It provides for both criminal and civil actions, with penalties harsh enough to raise the stakes considerably for researchers. Too often, we see the CFAA used as a stick to threaten researchers by companies that are unwilling or unable to face up to their responsibilities, and that don’t want to be publicly embarrassed for failing to do so. These factors have resulted in many researchers deciding not to conduct or disclose research, or being forced into not doing so.


The new proposal is potentially worse. It makes the penalties even harsher and, while it does attempt to create more clarity on what is or is not fair game, it is worded in such a way that a great deal of research activity could still be subject to legal action. For more details, see my earlier breakdown of the proposal.


Still, I believe that opening the CFAA for discussion is A Good Thing. It affords us an opportunity to highlight the issues and propose some solutions. That latter part is where we stumble; we are frequently more comfortable pointing out weaknesses and failures than recommending solutions. We must move beyond this if our industry is to survive, and if we ever hope to create a more secure ecosystem.


While I believe everyone will pay the price if we cannot solve this problem – in the form of an inherently insecure ecosystem that threatens our privacy, our economy, and potentially our safety – the more immediate risk of imprisonment or other penalties is carried by researchers. In other words, no one is going to care more about this issue or be more motivated to fix it than us.


So how do we do that?

I've spent the past year asking and being asked that question, and unfortunately my answer right now is that I don’t know. Most people I know in the community agree with the basic premise of having an anti-hacking law of some kind, and we need to be careful that any effort to decriminalize research does not inadvertently create a backdoor in the law for criminals to abuse.


Finding a solution is tough, but I have faith in the extraordinary perseverance and intelligence of our community, and I believe together we can find one. That sounds like a cheesy cop-out. What I mean is that while I don’t know in technical detail all the possible use cases that will test the law, I know great researchers who live them every day. And while I don’t know how to write law or policy, I know smart, experienced lawyers who do, and who care about this issue. And though I’m still learning how to navigate DC, there are amazing people already well engaged there who recognize the problem and advocate for change. Collaboration, then, is the key.


Getting started

As I said, I've spent a lot of time discussing this problem and potential solutions, and I thought sharing some of that thought process might help kick-start a discussion – not on the problem, but on a potential solution.


What we’re likely looking at here is an exemption, or “carve-out,” to the law for research. Below are some ways we might think of doing that, all of which have problems – and here I am guilty of my own crime of flagging problems without necessarily having solutions. Hopefully, though, this will stimulate discussion that leads to a proposed solution.


A role-based approach

One of the most common suggestions I've heard is that you exempt researchers based expressly on the fact that they ARE researchers. There are a few problems with this. Firstly, when I use the term “researcher” I mean someone who finds and discloses a security flaw. That could be a security professional, but it could just as easily be a student, a child, or Joe Internet User who unintentionally stumbles on an issue while trying to use a website. People reporting findings for the first time have no way of establishing their credibility, and any definition of “researcher” narrow enough to be useful would exclude exactly the people it is meant to protect, which defeats the purpose.


I’ve heard the idea of some kind of registration for researchers being kicked around, and those outside the community will often point to the legal or medical professions, where a governing body within the community sets a mutually agreed bar and polices it. I can feel many shuddering as they read that – ours is not a community that enjoys the concept of conformity or being told what to do. Even if that evolves over time, registration and self-government don’t address the point above that ANYONE can be a researcher in the sense of uncovering a vulnerability.


Then, too, there is the sad fact that some people may work as “white hat” security professionals during the day, but by night they wear a different colored hat altogether (or a balaclava, if you believe the stock imagery strewn across the internet). If they commit a crime they should be punished for it accordingly, and should not have a Get Out of Jail Free card just because they are a security professional by day.


A behavior-based approach

Perhaps the easiest way to recognize research then is through behavior. There may be a set of activities we can point to and say “That is real research; that should be exempt.”


The major challenge with this is that much research behavior may not be distinguishable from the initial stages of an attack. Both a researcher and a criminal may scan the internet to see where a certain vulnerability is present. Both a researcher and a criminal may access sensitive personally identifiable information through a glitch on a website. It seems to me that what they do with that information afterwards is what indicates whether they are a criminal or not, and as an aside, I would have thought most genuinely criminal acts would be covered by other statutes, e.g. theft, destruction of property, fraud. This is not how the law currently works, but perhaps it merits further discussion.
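
To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of version-check probe either party might run. The host addresses, port, and “vulnerable” banner string are all invented placeholders rather than a real advisory; the point is simply that nothing in the code reveals whether the person running it plans to report the exposure or exploit it.

# Hypothetical illustration only: hosts, port, and banner string are made up.
import socket

HOSTS = ["192.0.2.10", "192.0.2.11"]    # placeholder addresses (TEST-NET-1)
PORT = 21                               # e.g. an FTP service
VULNERABLE_BANNER = b"ExampleFTPd 1.0"  # fictitious version string

def grab_banner(host, port, timeout=3.0):
    """Connect, read the service banner, and return it (empty bytes on failure)."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            return sock.recv(1024)
    except OSError:
        return b""

for host in HOSTS:
    if VULNERABLE_BANNER in grab_banner(host, PORT):
        # A researcher tallies this for a disclosure report; an attacker adds
        # the host to a target list. The probe itself is identical either way.
        print(host, "appears to run the vulnerable version")

Either way, the wire-level behavior an observer (or a prosecutor) sees is identical; only the follow-on actions differ.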


A problem with this could be that you would have to consider every possible scenario and set down rules for it, and that’s simply not feasible. Still, I think investigating various scenarios and determining what behavior should be considered “safe” is a worthwhile exercise. If nothing else, it can help to clarify what is risky and what is not under the current statute. Uncertainty over this is one of the main factors chilling research today.


This could potentially be addressed through an effort that creates guidelines for research behavior, allowing for effective differentiation between research and criminal activity. For example, as a community we could agree on thresholds for the number of records touched, the number of systems impacted, or communication timelines (sketched below). There are challenges with this approach too – for one thing, we don’t have much precedent for the community adopting standards like this. Secondly, even if we could see something like this endorsed by the Department of Justice and prosecutors, it would not protect researchers from civil litigation. And then there is the risk that forcing a timeline for self-identification would raise the likelihood of incomplete or inconclusive research, and the probability of “cease and desist” notifications over meaningful and accurate disclosures.
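
Purely as an illustration of what “codifying” such guidelines might look like, here is a hypothetical Python sketch. The threshold values and field names are invented for the example; they are not proposed or agreed-upon standards.

# Hypothetical thresholds only; numbers and field names are invented.
RESEARCH_GUIDELINES = {
    "max_records_accessed": 10,       # sample only enough data to prove the flaw
    "max_systems_touched": 5,         # limit the blast radius of any testing
    "max_days_to_notify_vendor": 30,  # disclose promptly after discovery
}

def within_guidelines(activity, guidelines=RESEARCH_GUIDELINES):
    """Return True if a summary of research activity stays inside every threshold."""
    return (
        activity["records_accessed"] <= guidelines["max_records_accessed"]
        and activity["systems_touched"] <= guidelines["max_systems_touched"]
        and activity["days_to_notify_vendor"] <= guidelines["max_days_to_notify_vendor"]
    )

# A researcher who sampled 3 records on 1 system and notified the vendor in 14 days:
print(within_guidelines({"records_accessed": 3,
                         "systems_touched": 1,
                         "days_to_notify_vendor": 14}))   # True

Even this toy version shows the difficulty: whatever numbers we pick will be wrong for some legitimate research, which is exactly the “every possible scenario” problem raised above.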


A disclosure-based approach

This again is about behavior, but focused exclusively on disclosure. The first challenge with this stems from the fact that anyone can be a researcher. If you stumble on a discovery, can you be expected to know the right way to disclose it? Can you expect students and enthusiasts to know?


Before you get to that, though, there is the matter of agreeing on a “right” way to disclose. Best practices and guidelines abound on this topic, but the community varies hugely in its views on full, coordinated, and private disclosure. Advocates of full disclosure will generally point to the head-in-the-sand or defensive response of the vast majority of vendors – unfortunately, companies with an open attitude to research are still the exception, not the norm. And those companies are not the ones likely to sue you under the CFAA.


This does raise one interesting idea – basing the exemption not just on how the researcher discloses, but also on how the vendor responds. In other words, a vendor could only pursue a researcher if the vendor had itself satisfied various requirements for responding to the disclosure. This would at least spread the accountability so that it isn’t solely on the shoulders of the researcher. Over time it would hopefully engender a more collaborative approach, and we’d see civil litigation against researchers disappear. This is the approach proposed in a recent submission for a security research exemption to the Digital Millennium Copyright Act (DMCA).


An intent-based approach

This brings me to my last suggestion, and the one that I think the Administration tried to lean towards in its latest proposal.


One of the long-standing criticisms of the current CFAA is that it does not consider intent. That’s actually a bit of an oversimplification, since it is always the prosecutor’s job to prove that the defendant was really up to no good. But essentially the statute doesn’t contain any actual provision for intent, or mens rea for those craving a bit of Latin.


This is the point at which I should remind you that I’m not a lawyer (I don’t even play one on TV). However, to the limited degree that I understand this, I do want to flag that the legal concept of intent is NOT the same as the everyday understanding of it. It’s not enough to simply say “I intended X” or “I didn’t intend Y” and expect that to neutralize the outcome of your actions.


Still, I’ve been a fan of an exemption based on intent for a while because, as I’ve already stated: 1) anyone can be a researcher, and 2) some of the activities of research and cybercrime will be the same. So I thought understanding intent was the only viable way to demarcate research from crime. It’s a common legal concept, present in many laws, hence the nice Latin term for it. And in law, precedent seems to carry weight, so I thought intent would be our way in.


Unfortunately the new proposal highlights how hard this is to put into practice. It introduces the notion of acting “willfully”, which it defines as:


“Intentionally to undertake an act that the person knows to be wrongful.”


So now we have a concept of intent. But what does “wrongful” mean? Does it mean I knowingly embarrassed a company through a disclosure, potentially causing negative impact to its revenue and reputation? Does it mean I pressured the company to invest time and resources in patching an issue, again with a potential negative impact to the bottom line? If so, the vast majority of bona fide researchers will meet the criteria set out above for proving bad intent, as will media reporting on research, and anyone sharing the information over social media.


This doesn’t necessarily mean we should abandon the idea of an intent-based approach. The very fact that the Administration introduced intent into its proposal indicates that there may be merit in pursuing it. It could be a question of fine-tuning the language and testing the use cases, rather than giving up on it altogether. We may be able to clarify and codify what criteria demonstrate and document good intent. What do you think?


Next steps

It’s time the security research community came up with its own proposal for improving the CFAA. It won’t be easy; most of us have never done anything like this before, and we probably don’t know enough Latin. But it’s worth the effort. Again, researchers bear the most immediate risk, and researchers are the ones who understand the issues and nuances best. It falls to this community, then, to lead the way in finding a solution.


The above are some initial ideas, but they by no means exhaust the conversation. What would you do? What have I not considered? (A lot, certainly.) How can we move this conversation forward to find OUR solution?


~ @infosecjen