
CISO Series: Budgeting


I have provided a brief overview of the genesis of the CISO series, and now it is time to tackle our first topic: security budgets. Whether you’re the CISO of a large public company or leading security at an early-stage startup, rich in headcount or forced to be tight with the purse strings, reporting into the CIO, COO, or elsewhere in the organization, the fact remains that budget conversations are among the most critical and strategic conversations a security executive can have. Oftentimes, setting a budget plan equates to prioritizing security projects for the business, which gives even more weight to the process.


In this series, we have captured some recommendations for CISOs seeking to use budgetary discussions for career growth; the takeaways often bleed into one another, so don’t be surprised when you see overlap. The crux is that, as a CISO, you must make a case for budget in terms that are easily understood by upper management, while sidestepping the common stigmas that still plague security teams today (getting past that “house of no” reputation). Use empowerment, rather than fear, to your advantage.


Of the many CISOs I’ve spoken to, all demonstrated that they take their role seriously, especially the fiduciary duty to stakeholders, customers, and all aspects of the business ecosystem.


Key Takeaways


1. Whatever you do, don’t under-deliver.


One CISO labeled this the “deadly sin” of budgeting, and for good reason: in nearly all the discussions I had, CISOs agreed that promising the moon to get more budget will come back to bite you. “Do not ask for more budget than you will effectively be able to use,” another underscored. “You need to gain trust, especially if you’re new to the position. Convince the board that you’re effectively running security by not allowing money to be spent without results.”


In the same vein, CISOs have to spend the money that they ask for – so coming in significantly under budget will not win you points either, especially if your company reports to the street. “I’m hyper-aware of forecast versus budget,” one interview subject explained. “Where I work, the budget is mostly guesswork; the forecast is what really matters. I have a weekly meeting with finance to walk through department spend: what’s been delayed, what might not be happening, and where we can pull from to compensate for the fact that some work may not be starting.”


Unsurprisingly, the human element plays a large part in determining how much a security team can reasonably deliver. Projects rarely finish on time, so be fully aware in planning of how other teams impact your ability to execute and deliver. Moreover, security professionals are in high demand but short supply, and some degree of turnover is inevitable – so plan with attrition in mind.


So, in financial conversations, how do you set expectations accordingly? It’s all about delivering value; CISOs who have had successful budget discussions said they focused on efforts that support business initiatives, as these find the most support and help to gain internal champions. “I create a prioritized list of initiatives, and IT often has final say over what’s above or below the line,” says a CISO. “They can sometimes see security as simply a cost center, so I always make a point to schedule a conversation that underscores which parts are crucial to the business.”


“The budget plan you deliver may be carried to higher echelons,” adds another, “so understand how the influence you exert can gain you a seat at the adults’ table.”


The idea that a CISO’s job hinges on influence, rather than command and control, is one that resonated throughout nearly all the interviews (they must be more of a personal trainer than a drill sergeant). To establish clout, one interview subject said, “I try to present my teams as force multipliers. In other words, what can they deliver that will magnify the impact of other key business initiatives? I don’t necessarily mean from a revenue or cost reduction standpoint, more so in the ability of the business to be compliant with contractual obligations that the business is already under.”


2. Budgets are about more than just the cost of technology.


While under-delivering can be a serious setback, it wasn’t the only cardinal sin of budgeting that CISOs underscored. Another common mistake: “starting with the technology – simply looking at the solutions you have in place and not taking external factors into account.”


Why is this a problem, exactly? “The best way to screw up budget is to look at all the different tools and solutions you have,” explains one CISO. “You then say, ‘Oh I need an antimalware solution because I don’t have one, so I’m going to go ahead and budget for that.’ I call this silo budgeting, and it will mess things up. Give other departments the chance to add input. During the discovery process, talk to partner teams to capture their requirements, concerns, and success criteria. Perfect compliance with that guidance won’t be required, but it can help inform your strategy and earn you internal champions. Their participation will help ensure that the business sees value. And when the business sees value, everybody wins.”


Avoiding a myopic, technology-driven view of budget not only ensures a stronger security program, it also helps in conversations with finance. “You will need to justify your decisions,” was something that several CISOs told me. “There’s often a perception that things have been done a certain way in the past, so people will ask, ‘Why do you need more money or more headcount now?’ Have those conversations early, and be patient when having them. [One CISO used a sock puppet analogy here.] And remember, the world has changed and breaches have huge repercussions, which you can use to your advantage.” (We’ll explore this concept more in takeaway #4.)


Another added: “Look at the business plan and let that inform your security strategy. Evaluate the basics – what you need to do to keep the lights on – as well as what you can do to protect and acquire revenue. What revenue streams may be generated, and what controls do you have in place to protect those revenue streams? What risk might be introduced into the organization, based on the direction that the business is going to take?” One CISO factored IoT issues into his strategic plan. “Be aware of what you connect to the Internet,” he advised, alluding to the fact that more Internet connectivity will create more entry points for attackers. 


Headcount is also a key element. Several of the CISOs I spoke to were at high-growth organizations, but even those who weren’t echoed the need to consider the human element in order to maintain or get to scale. One CISO emphasized the importance of the decision to keep work in-house versus hiring an external agency: “Does it make sense for me to hire technicians for my data center, or can I pool the work? Should I outsource this service, which would support the SMB community?” Regardless of whether it’s your team, a partner, or a contractor, hours equal dollars spent. It’s just a question of what makes the most sense from a resource perspective. “I look closely at the scope of effort and say, okay, here’s what I believe the hours will be,” recommended a CISO. “That way I can estimate the amount of money it will require. Once the list is vetted, we start plugging in capital dollars – hardware and software licensing, consulting, special services, and so on – to get a final number.”
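The hours-to-dollars exercise that CISO describes can be sketched in a few lines. This is a hypothetical illustration only – the blended rate, labor hours, and capital line items below are invented for the example, not figures from anyone interviewed here:

```python
# Illustrative budget sketch: hours drive labor cost, capital dollars
# are layered on afterwards. All numbers are hypothetical.

BLENDED_RATE = 150.0  # assumed fully loaded hourly rate, in dollars

labor_hours = {
    "data center hardening": 400,
    "vendor risk reviews": 250,
    "incident response tabletop": 120,
}

capital_items = {
    "hardware refresh": 85_000,
    "software licensing": 40_000,
    "consulting / special services": 60_000,
}

labor_cost = sum(labor_hours.values()) * BLENDED_RATE
capital_cost = sum(capital_items.values())
total = labor_cost + capital_cost

print(f"Labor:   ${labor_cost:,.0f}")
print(f"Capital: ${capital_cost:,.0f}")
print(f"Total:   ${total:,.0f}")
```

The point of the sketch is the order of operations: scope the effort in hours first, convert to dollars, then plug in the capital line items to reach the final number.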


3. Prioritize your budget effectively. Understand what’s “must do” vs. “could do.”


“Some things need to get done. Period.”


Budgeting is an exercise in balancing wants and needs. In nearly every conversation I had, CISOs felt the pain of having to say goodbye to projects that simply didn’t warrant time or money that particular year. The trick is to prioritize accordingly. One CISO shared his team’s strategy, which was highly effective: “My team looks at what we want to do over the next 18 months. It’s not a laundry list, it’s a targeted game plan that we hash out, argue over, and discuss at length. If we don’t think we can complete a particular initiative, then we cut it – we’re not going to ask for the money if we can’t deliver.”


In most cases, the CFO planning group and IT weigh in after priorities have been determined. A strong strategy is to establish a collaborative dialogue in which security can explain the underlying rationale, to gain buy-in from other parties. As one CISO explained, “We draw a line with IT. While projects below the line can still be funded, the understanding is that they simply aren’t a high priority. That’s when we start plugging in numbers.”
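The “draw a line” exercise can be approximated as a greedy pass over a prioritized list. Everything in this sketch – initiative names, priorities, costs, and the budget figure – is hypothetical, meant only to illustrate the mechanics:

```python
# Hypothetical sketch of drawing the above/below-the-line cut:
# walk the list in priority order, funding items until the budget runs out.

initiatives = [
    ("PCI compliance gap remediation", 1, 200_000),
    ("SOC incident management team", 2, 350_000),
    ("Endpoint detection rollout", 3, 150_000),
    ("Legacy system maintenance", 4, 90_000),
]

budget = 600_000

above_line, below_line, committed = [], [], 0
for name, priority, cost in sorted(initiatives, key=lambda i: i[1]):
    if committed + cost <= budget:
        above_line.append(name)
        committed += cost
    else:
        below_line.append(name)  # still fundable later, just not a priority
```

Initiatives that land below the line aren’t rejected outright; as the CISO above notes, they can still be funded – they simply aren’t a high priority.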


“When projects are not well understood, they get cut and security suffers,” adds another. “That’s on me, because it means I didn’t establish the value well enough.” Lower priority activities typically included general maintenance, such as systems nearing end of life and other routine enhancements perceived as taking more time than they are worth.


There is an art to building the case for a higher priority activity. Compliance mandates, unsurprisingly, tend to float to the top. Many of the CISOs I spoke to acknowledged that PCI almost always falls above the line, and one “sprinkles PCI data throughout” his network in order to be strategic about leveraging compliance to his advantage. One freely admitted that “compliance does not equal security, but it certainly helps to lay the groundwork.” Another added, “External clients are excellent motivators – you don’t have to sell the business on something if their biggest client will.” Then there are the CISOs who have high-profile projects, such as building a SOC, in which case it’s less arduous to get stakeholder buy-in: “Adding an incident management team was a big company initiative when we were building the SOC.”


CISOs must inevitably capitulate, to a certain extent. “A lot of what we’re driven to do is to use our enterprise licensing better,” a CISO at a large corporation told me. “That can be counter to good security, so my job is to look at how we can be cost effective while still being focused on more advanced threat detection and response.”

In the News:

Technically Relevant:

Management Interest:

Slightly Less Random


  • Royals crowned kings of improbability and MLB
    • Some of you don’t follow sporty-ball, and I guess that’s okay. The season for America’s favorite pastime (baseball) just ended, as the Kansas City Royals won the World Series in New York against the Mets. It was a great game that ended in extra innings, after an amazing comeback… So if you see blue and white KC stuff around – that’s why.

What is VERIS?

Posted by treyford, Oct 28, 2015

If you'd like to understand more of the nuts and bolts of VERIS, join us for a webcast on November 5, 2015 at 2pm ET: Understanding VERIS: the DBIR's Secret Decoder Ring


Data driven security is all the rage, and laughably few of us encode and analyze our programs… and for good reason. It isn’t easy. This post will talk about VERIS, a framework for describing security incidents in a precise way.


We all have a plan, a security program, compliance regulations, and super busy calendars – but what is working? The answer is hidden in plain sight; it just needs to be analyzed. And this is why we all love the DBIR.


If you aren’t familiar with Verizon’s DBIR (Data Breach Investigations Report), check it out. I (and most of the industry) consider it the seminal report documenting trends in successful attacks and defensive failures.


Sports analogies are unavoidable here, and I won’t apologize for them. The “Monday morning quarterback” is a perfect analogy, and it applies to any sport or activity. When you look back at a performance, just like the coaches do with the quarterback on Monday morning, you discuss more than outcomes; you talk about “what happened” and “why it happened.”


Structured review, a meaningful critique, is based upon objective and accurate data.


Talking about incidents is hard. People take things personally, public statements are carefully tuned by PR, Marketing, and Legal teams, and security professionals provide perspective on the news with very little in the way of facts – and that makes for difficult takeaways for the rest of us.


Incidents happening in-house are often treated in a surprisingly similar fashion: carefully filtered facts get documented in writing, post-mortem reports are often only narrative-based, and the observations and lessons learned are limited to point-in-time assessments, correlated only to recent audit findings, or pinned to a convenient project.


Meaningful analysis across events requires a commitment to pragmatic event recording – this means structured data… which is why I’m excited to discuss VERIS.


VERIS - Vocabulary for Event Recording and Incident Sharing

“VERIS is a set of metrics designed to provide a common language for describing security incidents in a structured and repeatable manner”


The overall goal “is to lay a foundation from which we can constructively and cooperatively learn from our experiences to better measure and manage risk”


By studying what incidents were stopped (near misses) and what path incidents came from, we can objectively evaluate our program strategies… this, in my opinion, is the magic of VERIS.


If our mission as security professionals is to inform the business of risk – ultimately stopping “the big one” – then there is very little appetite for allowing an attack to repeat itself.


The A4

So VERIS describes an event using the four A’s – and it’s pretty simple when you think about it.

Actors take Actions, Assets have Attributes.

Yes. That’s a blinding flash of the obvious.

Taking the obvious even further:

  • Actors often take lots of Actions
  • Assets may have multiple Actions taken against them
  • Assets may have multiple Attributes affected


So it makes sense that this is more of a nested schema than something Excel-friendly…
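To make the nesting concrete, here is a simplified VERIS-style incident record expressed as a Python dictionary. The field names loosely follow the public VERIS schema, but this is an illustrative sketch, not an exact reproduction of the real schema:

```python
# A simplified, hypothetical VERIS-style incident record, showing why the
# A4 model (Actors, Actions, Assets, Attributes) nests rather than
# flattening into spreadsheet rows. Field names are illustrative.

incident = {
    "actor": {
        "external": {"variety": ["Organized crime"], "motive": ["Financial"]},
    },
    "action": {
        "hacking": {"variety": ["Use of stolen creds"], "vector": ["Web application"]},
        "malware": {"variety": ["Export data"]},
    },
    "asset": {
        "assets": [
            {"variety": "S - Web application"},
            {"variety": "S - Database"},
        ],
    },
    "attribute": {
        "confidentiality": {"data": [{"variety": "Payment"}]},
    },
}

# One incident can carry several actions and several affected assets --
# exactly the one-to-many relationships the bullets above describe.
n_actions = len(incident["action"])
n_assets = len(incident["asset"]["assets"])
```

A flat spreadsheet row would need a column for every combination of action, asset, and attribute; the nested form lets a single incident carry several of each.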


Got it. Makes sense. Now what?

Get familiar with the A4 structure. We’ve got some videos here to save you some reading – but you’ll want to read up after the overview.


First up, here are some videos giving an overview of Actors, Actions, Assets, and Attributes:







Today the Library of Congress officially publishes its rule-making for the latest round of exemption requests for the Digital Millennium Copyright Act (DMCA).  The advance notice of its findings revealed some good news for security researchers as the rule-making includes a new exemption to the DMCA for security research:


“(i) Computer programs, where the circumvention is undertaken on a lawfully acquired device or machine on which the computer program operates solely for the purpose of good-faith security research and does not violate any applicable law, including without limitation the Computer Fraud and Abuse Act of 1986, as amended and codified in title 18, United States Code; and provided, however, that, except as to voting machines, such circumvention is initiated no earlier than 12 months after the effective date of this regulation, and the device or machine is one of the following:


(A) A device or machine primarily designed for use by individual consumers (including voting machines);


(B) A motorized land vehicle; or


(C) A medical device designed for whole or partial implantation in patients or a corresponding personal monitoring system, that is not and will not be used by patients or for patient care.


(ii) For purposes of this exemption, “good-faith security research” means accessing a computer program solely for purposes of good-faith testing, investigation and/or correction of a security flaw or vulnerability, where such activity is carried out in a controlled environment designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices or machines on which the computer program operates, or those who use such devices or machines, and is not used or maintained in a manner that facilitates copyright infringement.”


Basically this means that good-faith security research on consumer-centric devices, motor vehicles and implantable medical devices is no longer considered a violation of the DMCA (with caveats detailed below). It’s a significant step forward for security research, reflecting a positive shift in the importance placed on research as a means of protecting consumers from harm.


A brief history of the DMCA


The DMCA was passed in 1998 and criminalizes efforts to circumvent technical controls that are designed to stop copyright infringement. It also criminalizes the production and dissemination of technologies created for the purpose of circumventing these technical controls. That’s an incredibly simplified explanation of what the law does, and this is a good time for me to remind you that I’m not a lawyer.


The statute includes a number of exceptions that relate to security research – one for reverse engineering (section 1201(f)), one for encryption research (section 1201(g)), and one for security testing (section 1201(j)); however, these are very limited in what they allow. Acknowledging that technology moves fast, the statute also includes provisions for a new rule-making every three years, during which requests for new and additional exemptions can be made. These are reviewed through a lengthy process that includes opportunities for support and opposition to the exemptions to be lodged with the Library of Congress. After reviewing these arguments, the Copyright Office makes a recommendation to the Library of Congress, which then issues a rule-making that either approves or rejects the submitted exemptions. Exemptions that are approved will automatically expire at the end of the three-year window (as opposed to the exceptions, which are permanent unless subject to change via legislative reform through Congress).


Today’s rule-making is the product of the latest round of exemption requests. A number of submissions relating to research were filed – a couple for a broad security research exemption, one for medical devices, one for cars, and even something for tractors.  The Library of Congress effectively rolled these into one exemption, which is why it covers consumer-centric devices, automobiles, and implantable medical devices.


What does the new exemption mean for security research?

Well firstly, it’s an important acknowledgement of two things: 1) that research is critical for consumer protection, and 2) that laws like the DMCA can negatively impact research.


This is significant not only in what it allows within the context of the DMCA, but also that it sets a precedent and presents an opportunity for a broader discussion on these two points in the Government arena.


In terms of what is specifically allowed now, users are able to circumvent technical protections to conduct research on consumer-centric devices, automobiles, and implantable medical devices (that are not or will not be used for patient care).


This is not carte blanche, though, and it’s important to understand that. There are a number of limits and questions raised by the language of the exemption:


  • You are allowed to circumvent technical controls to conduct research, but you are NOT allowed to make, sell, or otherwise disseminate tools for circumventing these controls. So you can only conduct research to the extent that it doesn’t require such tools.


  • The exemption won’t come into effect for a year. This is so other relevant agencies can update their policies. In his article on Boing Boing, Cory Doctorow points out that “the power to impose waiting times on exemptions at these hearings is not anywhere in the statute, is without precedent, and has no basis in law.” (Interestingly, the Library of Congress is excluding research on voting machines from the year delay.)


It remains to be seen what the agencies referenced (the Department of Transportation, the Environmental Protection Agency, and the Food and Drug Administration) will do and how that will impact the way this exemption can be applied. It’s probably fair to say the exemption’s dissenters will be actively lobbying them to find a way to limit its impact. It falls to those of us in the security research community to try to engage these agencies to ensure they understand why research is important, and to try to address any concerns they may have.


  • The research must apply to consumer-centric devices (or cars, or implantable medical devices). What does that mean and where do you draw the line?  For example, we regularly hear of research findings in SOHO routers or printers.  These are devices designed for use in both home and work environments.  Do they count as “primarily designed for use by individual consumers?” I really hope these kinds of devices are included in this classification as they do represent a great deal of consumer risk. It's also somewhat strange to me that we're not granting business users the same protections we're giving individual consumer users.


  • The exemption does NOT allow for research on devices relating to critical infrastructure or nuclear power. It’s understandable that these areas raise considerable concern, but at the same time, do we really want flaws in these systems to be left unmitigated?  Doesn’t that create more opportunities for bad actors to attack high value targets, potentially with very serious repercussions?


  • For medical devices, the research cannot be conducted on devices that are being, or will be, used for patient care.  That seems pretty reasonable to me.


Also, it’s important to remember that, as noted above, the rule-making resets every three years, so this exemption will be in effect for a maximum of two years before we have to reapply and go through the entire process again (because of the year delay the Library of Congress has imposed on the exemption).


But is it a positive step?


Yes, despite these qualifiers and limitations, I believe it is a positive step. This is not just because, in and of itself, it enables more research to be conducted without concern of legal action, but also because it may indicate a bigger shift.


Just last week, I wrote a blog about a proposed legislative amendment that was significantly rewritten in response to feedback from the security research community.  The TL;DR of that post is that it seems like a hugely positive step that the amendment’s authors were prepared to engage and listen to the research community, and were concerned about avoiding negative consequences that would chill security research.


Couple that with this exemption being approved, and I continue to have hope that we’re starting to see a shift in the way the Government sector understands and values security research. I’m also seeing a shift in the way the research community engages the Government, and how we’re participating in discussions that will shape our industry.


It’s not a silver bullet; given the complexity of the challenges we’re addressing, we’re not going to solve concerns around the right-to-research overnight. It’s going to be a long path, but every step counts. And today I think we may have taken “one giant leap” for research-kind.


~ @infosecjen

Since joining Rapid7 I’ve gotten to work on some pretty cool projects, the most recent of which is capturing a body of knowledge for the community… by CISOs, for CISOs.


The evolution of the CISO role, of course, is nothing new, and there’s plenty of analysis on it for anyone who’s interested (for example, Forrester has a great report called Evolve To Become The CISO Of 2018 Or Face Extinction). The mission of this working group is to enable CISOs to connect directly with each other, and although the ultimate goal is to produce content from which others might benefit, by no means does this limit the agenda or impair group members’ ability to be forthright and open.


It’s a no-holds-barred discussion, and I love it.


Over the next few weeks, I’ll be recapping a few of the biggest takeaways from some of these meetings, relating some of the experiences of our members (anonymously, unless otherwise specified), and distilling lessons learned, recommended practices, and other pearls of wisdom.


Those involved in this effort have my most sincere thanks. CISOs are notoriously strapped for time and pulled in many different directions, yet the group has been willing to share their knowledge, recount personal experiences, and tackle key issues. I could write a book on the insights gleaned from these discussions (and I just might!) but for now, blogging should suffice.


First up: security budgets.


If CISOs must learn to speak the language of business leaders, budgeting is a logical starting point. And when I raised this with the group, there was no shortage of feedback and recommendations. So stay tuned for my upcoming posts in this series – if you have any questions or comments, I’d love to hear them.



I keep getting asked about what’s happening in the news. Because I’m so efficient—and that’s hacker-speak for lazy—I go to a couple key sources for news. One of my absolute favorites is Patrick Gray’s


Since I'm often sharing links of note and important news, I thought I'd share this information with a broader audience in case it helps you out, too. So for this week, here's a small selection of some recent news:


Breaches you should know about:


Interesting vulnerabilities you should have heard about:


Management related content


Properly Random


Concluding our National Cyber Security Awareness Month webcast series, next week I’ll be joining a discussion around how to develop, nurture, and retain good security staff:


Building an Effective Security Team

Wednesday, October 28th at 11am ET/ 8am PT and 4pm BST

  • Chris Calvert, Senior Strategy Manager, Red Team and Cyber Threat Intelligence at TELUS
  • David Henning, Director of Network Security at Hughes Network Systems
  • Bob Lord, CISO in Residence at Rapid7


It’s widely known that the demand for skilled security professionals far exceeds the supply, which means organizations are working hard to identify and attract strong candidates. Investing in the development of future security professionals will also help to address the skills shortage – the question is, how do we do that?


To achieve long-term success, organizations must have a balanced security team that takes into account the capabilities of each individual. For example, some team members will be technical, while others need to have stronger leadership and managerial skills. Having clear-cut opportunities for professional development is key to attracting job seekers; understand the composition of the team and factor that into their career growth plan – people want to work for a company that demonstrates a commitment to their future. The group as a whole must have a clear understanding of program goals, success criteria, and how their work aligns with key business objectives.


Join us Wednesday to explore this core topic on our webcast. We’ll also talk about how to create a culture of excellence, set developmental targets, decide when to leverage services to fill the talent gap, and nurture the next generation of security professionals.

[NOTE: No post about legislation is complete without a lot of acronyms representing lengthy and forgettable names of bills. There are three main ones that I talk about in this post:


CISA – the Cybersecurity Information Sharing Act of 2015 – Senate bill that will likely go to vote soon. The bill aims to facilitate cybersecurity information sharing and create a framework for private and government participation.


ICPA – the International Cybercrime Prevention Act of 2015 – proposed bill to extend law enforcement authorities and penalties in order to deter cybercrime. Proposed by Senators Graham and Whitehouse of the Senate Committee on the Judiciary.


CFAA – the Computer Fraud and Abuse Act – main “anti-hacking” law in the US.  Passed in 1986 and updated a number of times since then, the law is considered by many to be out-of-date and in dire need of reform.]


[UPDATE - 10.21.15] CISA went to the floor for debate yesterday. Today we heard that the Whitehouse/Graham Amendment will not advance to a vote with CISA. According to The Hill, "Whitehouse has some parliamentary options to bring the amendment to a vote still at his disposal but any attempt would likely be blocked by opponents of the provision."


[ORIGINAL POST - 10.20.15] After several years of debate around a cybersecurity information sharing bill, it looks like we’re getting closer to something passing. Earlier this year, the House passed two information sharing bills with pretty strong support. The Senate side proposal – the Cybersecurity Information Sharing Act of 2015 (CISA) – has been considerably more controversial, but is likely to go to a vote very soon, possibly even in the next few days. A number of concerns have been raised around this bill – mostly around privacy, data handling, and the way the Government can use information that is shared. These issues are hopefully being tackled in a number of amendments; Senate leaders unofficially agreed to a limit of 22 amendments, made up of the manager’s amendment (which is put forward by the bill’s primary sponsors), plus 11 Democratic amendments and 10 Republican amendments. You can see a full list of the proposed amendments here, thanks to New America’s fantastic round-up.


Included among these is Amendment 2713, proposed by Senators Whitehouse and Graham. This amendment has been controversial, and has raised alarm in the civil liberties and security research communities, resulting in a letter requesting that the amendment not go to a vote with CISA. While I was initially one of the most vocal over these concerns, I am now comfortable that the current language of the amendment, attached at the bottom of this post, does not have negative implications for researchers. (Caveat – this language is the latest available as of writing this on October 20, 2015. I reserve the right to change my position if the language changes.)


A brief history of Amendment 2713


The amendment is an abridged version of the International Cybercrime Prevention Act (ICPA) (also known as the Graham/Whitehouse bill). The full bill proposed eight key areas of legislative updates to extend law enforcement authorities in order to reduce cybercrime; the CISA amendment cuts this down to four main areas of focus:


  • Extending authority to prosecute the sale of US financial information overseas
  • Formalizing authority for law enforcement to shut down botnets
  • Increasing the penalties for cyberattacks on critical infrastructure
  • Criminalizing the trafficking of botnets and the sale of exploits to bad actors


These feel like pretty reasonable goals to me. In particular, updating the law to tackle the burgeoning botnet problem makes sense, and I like the idea of a clear legal framework for shutting down botnets. Yet, for a long time I was very worried about both the full bill and the shorter abridged version of the Amendment. In fact, I even testified on the bill to the Senate Judiciary Subcommittee on Crime and Terrorism. For those who want to see the entire hearing, there’s a video here. This is probably the point at which I should remind people that I’m not a lawyer. (NOTE: the hearing was held before the bill was abridged and submitted as an amendment for CISA.)


Most of my testimony focuses on updates to the Computer Fraud and Abuse Act (CFAA), the main anti-hacking law in the US. In the case of the Amendment, three of the four sections – critical infrastructure, shutting down botnets, and trafficking in botnets and exploits – relate to the CFAA. As I’ve written before, this law is a big concern as it chills security research, which, in my opinion, results in more opportunities for cybercriminals. The CFAA is woefully out of date and lacks clear boundaries for what is permissible – a state exacerbated by the law containing both criminal and civil causes of action. The TL;DR of my testimony is that I was concerned that, far from improving the situation with the CFAA, the ICPA would actually make matters worse for researchers.


In relation to the Amendment, my particular concern was the language of Section 4 (though I also provided feedback on Sections 2 and 3). It was not much of a stretch for me to imagine a scenario in which a researcher disclosing a vulnerability finding at a conference might satisfy all the requirements of Section 4 as they were written in the original proposal, resulting in the potential for them to be prosecuted.


So what changed?


In a nutshell, the language changed.  And it changed because the people writing it really LISTENED to our feedback.


Maybe you can sense my surprise here. Those in the security research community who are familiar with the CFAA have felt frustration over its lack of clarity and misuse for years, and in many cases we’ve come to expect the worst of the Government. Recently though, we’ve started to see a change.  We saw the Department of Justice proactively address the research community on the issue of CFAA prosecutions at Black Hat this year. We’ve seen various federal agencies, such as the FDA and FTC, call for researchers to help them update their position on, and understanding of, cybersecurity challenges. We’ve even seen the introduction of legislative proposals designed to support and protect research, such as the Breaking Down Barriers to Innovation Act introduced by Senator Wyden and Representative Polis.  In short, we’ve seen the start of a huge shift in attitudes towards security research in the Government sector.


Don’t get me wrong, it’s not all sunshine and kittens.  A new bill proposal “To provide greater transparency, accountability, and safety authority to the National Highway Traffic Safety Administration” explicitly makes research on cars illegal, which is, at best, short-sighted, and puts consumers at risk. Similarly, the recent proposal for implementation of the Wassenaar Arrangement’s export controls on intrusion software highlighted that we in the research community have some work to do to educate the Government on the value, role, and practices of security research – work which I’m happy to say is now underway.


Yet my experience working with the offices of Senators Whitehouse and Graham highlighted that we are making progress and that there are people in the Government that genuinely value the role of security research in reducing cybercrime. Throughout their collaborative engagement, the staff mentioned numerous times that they wanted to ensure security research would not be harmed by unintended consequences of the language. OK, I know security people can – cough – sometimes be a little cynical – cough, so you’re possibly thinking that words are all well and good, but the proof is in actions.  I agree. So it was probably around the time that I was reviewing redraft number 5 or so that I started to believe they may be serious about this whole “no negative consequences” thing.


Yeah, I know I sound like I drank some funky Kool-Aid while I was on the Hill, but today an updated version of the Amendment – attached at the bottom of this post – was distributed on the Hill. Again, a quick reminder that it might change, and if so, my position may change too, but at the time of writing this, the current language is pretty well crafted. I’ve spent some time vetting it with both lawyers and security researchers, and we really tried to find ways that the language would cause problems for research, but it looks solid. The drafters listened to all the weird examples and edge case anecdotes we threw at them, and even the suspicious slurs against the concept of “prosecutorial discretion,” and they wrote language that holds up against this scrutiny and does not create negative consequences for research – at least as far as I can see (and remember, I’m not a lawyer).


I’d like to pause here to thank the drafters for that diligence, and their grace in the face of some very blunt feedback.


So what does this mean for the security community?


In the short-term it means that a significantly better Amendment will go to the vote with CISA. I do wish it had gone through more public debate – it feels like that process was side-stepped – but if it’s going to go to vote, I’m very glad that the language has been updated as it has. Any debate that takes place around CISA is likely to focus on the core challenges around privacy, the way the Government can use information that is shared, and the controversial issue of “countermeasures.” Resolution on those issues is enormously important, so it’s good that the language of Amendment 2713 is not so concerning that it needs to steal focus from CISA.  As an aside, Rapid7 is supportive of legislation that creates a clear framework for sharing cyber threat information, establishes meaningful protections for Personally Identifiable Information (PII), and clarifies the role and responsibilities of government entities in the process.  We think CISA has work to do to get there, but are hopeful that the discussion around the 22 amendments will result in a better outcome.


Back to Amendment 2713… If it passes, the most significant immediate outcome will be greater law enforcement authority for addressing botnets, which is a good thing, I hope.


To be clear though, this process wasn’t a silver bullet that miraculously fixed the CFAA. If the Amendment passes, the CFAA will gain a new provision ((a)(8) for those that want to get nerdy about it), but it will still lack a definition of its core principle: “authorization.” That means it still lacks a clear line for what constitutes a violation and what does not. And it will still be out of date, and will still contain both civil and criminal causes of action, providing a handy stick that defensive vendors will use to threaten researchers.


The process for addressing these issues in the CFAA will be considerably harder and will take far longer. It will likely be a bloody battle as there are many angry voices on all sides of this debate and it has already raged for a long time. But there may be some possibility for light at the end of the tunnel if we can continue to work with legislators to find common ground. For me, that’s the most significant take away from Amendment 2713 – hope for more productive collaboration, and by extension, hope for better cybersecurity legislation in the future.


~ @infosecjen

We all know, from experience or the Verizon DBIR, that stolen credentials are the most common attack vector. Users still present massive risk to our organizations, yet there’s plenty of debate about the effectiveness of user training. Meanwhile, users are getting all the FUD of breaches in the news, and aren’t yet armed to have constructive conversations about them.


Now, this is not to say there aren’t awesome security teams running security training programs out there – there most definitely are. But no matter how well-crafted the message, one small, very busy security team pushing out security information or training to users gives only one point of contact. That’s just not enough for anything – let alone something as complex as security – to stick. Thankfully the conversation is shifting from security being something for just the “technical folks” to worry about, to security as a shared responsibility in which everyone needs to – and can – be involved.



After all, security doesn’t impact only the security team. Security isn’t important only inside the workplace. Why should conversations about security awareness be?


For people to be truly aware, learn, and take responsibility, there must be conversation, overlap, and multiple points of contact. That’s why this October, for National Cyber Security Awareness Month, Rapid7 has taken security awareness outside the office. We have placed ads on the MBTA in Boston – where Rapid7 has its headquarters and Cambridge office. Commuters can visit the interactive site to test their knowledge of a few low-hanging-fruit security topics, get some quick tips, and educate themselves on why these things are important.


We've also put together three NCSAM email templates, ready to share with your company, family, and friends to encourage them to engage and brush up on security pointers. Lastly, visitors to the interactive site are invited to refer a colleague to test their security chops – increasing touch-points with the content and starting conversations across organizations.


While Rapid7’s focus is and always will be on innovative security software and services, it is important – especially during NCSAM – to look at the big-picture impact on our community, which includes non-security roles.


Let us know what you think! Do you want to see security awareness ads in your city? What other things should we, as an industry, do to get the general public’s attention to help them think more about their own security practices?

In helping to evaluate and recommend areas for security improvement, I frequently consult with boards on the state of their organization’s security program. Having had many of these conversations, I’ve seen board members repeatedly ask some of the same questions; they clearly are concerned about the overall security posture of the business, but lack the deep-rooted technical background of a longtime security practitioner. Which is where I come in…


On October 21st I’ll join several other panelists as we answer common boardroom security questions – the so-called “basics” of security – and discuss how these issues fit into the bigger picture of our evolving digital lives. There have been many major shifts in the security and technology space, and the rate of innovation shows no signs of slowing. The question is, how does this affect business leaders, and what can they do to lay the groundwork for a strong security program?


Your Evolving Digital Life: Security Basics for Business Leaders

Wednesday, October 21st at 11am ET/ 8am PT and 4pm BST

    • Allan Abrams, Director of Governance and Compliance at Teleflora
    • Chad Currier, Technology Infrastructure Director at Cardinal Innovations
    • Bob Lord, CISO in Residence at Rapid7
    • Nicholas Percoco, Vice President of Global Services at Rapid7


Technology has become increasingly more pervasive and connected to the Internet. From wearables, to smart homes, to connected cars, it’s never been more important to secure your digital life. The Internet of Things gives attackers more entry points from which to pivot, and so strong security hygiene is a must. In boardroom discussions, I frequently get asked the following questions:


  • How can I avoid getting my email hacked?
  • What are some best practices for securing my phone?
  • What does a sophisticated phishing attack look like?
  • What are the characteristics of a strong password?


These may seem silly, but they’re actually great questions. Lots of people think they know the answers, when really they’re overlooking the fundamentals. Our moderator, Bob Lord, also has some great insight into the most effective attack techniques, and how they sidestep common security measures to go for the weakest entry point.
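On the password question, one fundamental worth showing is that length beats cleverness. For a randomly generated password, strength can be estimated as bits of entropy: the length times the log (base 2) of the character set size. A minimal, illustrative sketch (the `entropy_bits` helper is hypothetical, and the estimate only holds for truly random passwords, not human-chosen ones):

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Rough entropy estimate for a randomly generated password:
    length * log2(charset_size) bits."""
    return length * math.log2(charset_size)

# An 8-character lowercase-only password vs. a 16-character password
# drawn from all ~94 printable ASCII characters.
weak = entropy_bits(8, 26)
strong = entropy_bits(16, 94)
print(round(weak, 1))    # → 37.6
print(round(strong, 1))  # → 104.9
```

Doubling the length does far more for you than swapping an "a" for an "@" – which is why the answer to "what makes a strong password?" increasingly boils down to "a long, random one from a password manager."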


Feel free to post in the comments if you have any other questions you’d like us to tackle!

In the first of four Cyber Security Awareness Month webcasts, a panel of security experts, including Bob Lord, CISO in Residence at Rapid7, Ed Adams, President and CEO at Security Innovation, Chris Secrest, Information Security Manager at MetaBank, and Josh Feinblum, VP of Information Security at Rapid7, came together to discuss, "How to Make your Workplace Cyber-Safe". They touched upon how to create a security-centric culture, combating common threats targeted at users, characteristics of an effective security awareness program, and best practices for managing passwords and devices. Read on to learn the top 3 takeaways from this webinar:


1. Security should be a reflex – A strong sign that an organization has successfully created a security-centric culture is if secure actions are reflexes for users across the organization. For example – has it become second nature for employees to know how to treat sensitive data, when it’s okay to share information, and how to spot phishing attacks? If employees aren’t sure about something, do they ask security or just click? If users are asking before acting, it’s a pretty good indicator that a security-centric culture has successfully started to spread.


2. It takes two-factor authentication – Every user can be a pathway in. Any given user may not be the most impactful entry point – but they can be the first step to lateral movement within an environment. Be skeptical of all user activity, and use two-factor authentication so that a single stolen credential is not enough on its own. Don’t let one mistake from a risky user impact your organization. A successful hack is substantially more difficult when two-factor authentication is in play, and can make the attack just challenging enough that the attacker moves on to an easier target.
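For the curious, the second factor most authenticator apps generate is a time-based one-time password (TOTP, RFC 6238): an HMAC of the current 30-second interval, keyed with a shared secret, truncated to six digits. A minimal sketch using only the Python standard library, checked against the RFC's published test vector:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = int(for_time) // step                      # 30-second time step
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: shared secret "12345678901234567890" at T=59 seconds
print(totp(b"12345678901234567890", 59))  # → "287082"
```

Because the code changes every 30 seconds and never crosses the wire as a reusable secret, a phished password alone is no longer enough to log in – which is exactly the property the panelists were advocating for.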


3. Security is a Team Sport – Teach users at your organization to be more skeptical. Hiring more security professionals isn’t enough to improve security – you need security-smart eyes and ears all over the organization. Plus, you'll benefit from less hostile, more understanding relationships between security and other business units. Build bridges not walls! Integrate security into your culture, and groups around the organization will start to recognize the need to bring security into projects earlier. Don’t just give users rules to blindly follow – teach them how attackers work and think, and empower users to make decisions when the security team is not around.


To listen to the full discussion: view the on-demand webcast now.


Learn more and register for additional sessions in our Cyber Security Awareness Month Webcast Series.

A few years ago BYOD was one of the biggest trends in information security. Fast forward to present day, and mobile devices in the workplace have become the norm. Yet challenges to securing a mobile workforce still remain.


If anything, the situation has become more complex: as consumer hardware increasingly makes its way into the enterprise, we’re no longer talking about just phones, laptops, and tablets. Greater numbers of remote workers means that home offices also need to be secured, and the rise of the Internet of Things means that “smart” devices like refrigerators, toasters, and even automobiles can be potential entry points on the home network from which an attacker can pivot. I, for one, travel so much for my job that some days I don’t touch a computer, relying primarily on my phone to get work done.


Join us on Wednesday for the second webcast in Rapid7’s Cyber Security Awareness Month webcast series:

Work Anywhere: Securing Your Mobile Workforce

October 14th at 11am ET/ 8am PT and 4pm BST

  • Cameron Chavers, Manager of IT Risk Management and Security Team at Mosaic Sales Solutions
  • Tas Giakouminakis, Co-Founder and CTO at Rapid7
  • Bob Lord, CISO in Residence at Rapid7
  • Jerry McCarthy, Senior Security Engineer at Acosta, Inc.

During the discussion we’ll touch upon how work environments are changing, as well as tips for coping with the shift. Bob Lord will also share his controversial perspectives on the state of BYOD (hint: he thinks it’s fundamentally flawed).


It’s important to remember: Whatever the policy, productivity always wins. People will strive to find the quickest and most efficient means of completing a task, and that path may not always align with the preferences of the security and IT teams.


Nevertheless, there are ways in which businesses can gain confidence amidst so much uncertainty. Creating a security-centric culture to which everyone feels they can contribute is an excellent way of minimizing risk, and was explored in a recent webcast. Additionally, knowing what people are doing with systems and on the network is paramount. A strong security strategy should focus on gaining visibility into user behavior across the entire mobile ecosystem, from the endpoint to the cloud, to quickly detect attacks.


We’ll drill into all this and more next week. If you have other topic suggestions, I’d love to hear them.

Joel Cardella

The End Of The Internet

Posted by Joel Cardella Employee Oct 12, 2015

On Sept 24th, ARIN announced it had finally run out of IPv4 addresses. The open pool of IPv4 addresses is now gone, and the only way to get them now is via a transfer from another party that holds them, or from IP ranges returned to ARIN.


The switch to IPv6 is imminent. Once switched, the number of available public addresses will be roughly 3.4 x 10^38 (2^128). This will be more than capable of handling our need for interconnection for decades.



What Does This Announcement Mean?


This means we have reached the limit of what 32-bit addressing can give us: about 4.2 billion things. Projections for Internet Of Everything devices, however, run to 25 billion by 2020. We will likely be playing a shell game for a while, reusing open IPv4 addresses while trying to build and deploy IoE devices at breakneck speed. But eventually, we hit a wall.
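The arithmetic behind that wall is easy to verify for yourself. A quick back-of-the-envelope sketch (illustrative; the 25 billion figure is the industry projection cited above):

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_total = 2 ** 32    # 4,294,967,296 — about 4.3 billion
ipv6_total = 2 ** 128   # about 3.4 x 10^38

projected_ioe_2020 = 25_000_000_000  # projected IoE devices by 2020

print(f"IPv4 addresses:  {ipv4_total:,}")
print(f"Shortfall vs. 2020 IoE projection: {projected_ioe_2020 - ipv4_total:,}")
print(f"IPv6 addresses:  {ipv6_total:.2e}")
```

Even if every single IPv4 address were usable (they aren't – large blocks are reserved), the projected device count outstrips the space roughly six times over.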


Where Did All The IPv4 Space Go?


By 1995, the last restrictions on using the Internet to carry commercial traffic were lifted, allowing for the emergence of the modern-day internet, which continues to evolve. Likely in anticipation of this, the Department of Defense became the owner of the largest number of IPv4 addresses, with over 218 million usable. This is just over 5% of the total IPv4 addressable space.


The rest of it is used by innovations within an information revolution, which we are currently (still!) in the middle of. Globalization, commercialization, e-business and mobile technologies are the biggest drivers of IP use. Innovations like cloud computing, the Internet Of Everything, and a voracious appetite for data all lead to the inevitability of using up a finite resource.


How Should Organizations Proceed?


The most obvious answer is making the switch to IPv6 as soon as possible. But, the switch to IPv6 is not easy, and many companies have not even started. Adoption is slow and no one is yet ready to “turn off” IPv4. That’s going to take a lot of agreement among a lot of people. Just a few of the issues at risk are software compatibility, replacement costs of hardware, and ease of use/deployment based on current skill and knowledge.


The DoD started working on an IPv6 implementation in 2003. They sought to demonstrate IPv6 viability for all network backbones by 2008, and then implement it. Of course it took until 2012 for the government to decide it had met the initial goals! But the point is, they had proven IPv6 is viable – and then they disabled the functionality. So the DoD backbones still use IPv4, which is more limited, hampers their innovation, and impacts the DoD’s very specialized version of the Internet of Everything.


As software consultants, we will often cite IPv6 as a security concern and suggest that organizations disable IPv6 services. This is not because we think IPv6 is less secure. We understand that organizational maturity is a key factor in securing an enterprise of any size, and complications with understanding and managing IPv6 increase the attack surface available to threat actors. It’s simply too large a risk to manage and secure IPv4 and IPv6 in parallel. Until the entire enterprise is ready to make the switch, which for some could take many years, we will continue to recommend this.


It’s estimated that 15%-20% of allocated IPv4 addresses are not being utilized. That means the shift to larger spaces will likely take some time. It also opens up the possibilities of IPv4 marketplaces, similar to what we have today with domain names. ARIN has already said they will not limit the number of transfer requests as of Sept 24. The DoD might also start releasing unallocated space back if they make the full switch to IPv6. This would breathe some limited life into the kicking corpse of IPv4.


Unfortunately, none of this is optimal, and certainly not desirable. But there is hope. Requirements to be “IPv6 ready” have been in place for all systems being placed on the DoD networks, including commercial products, for quite some time. This has spurred a number of organizations to work towards this goal in order to keep their business relationships with the DoD. In this regard, the DoD actually inspires change and progress to be made. We just need now to lead a charge to make the switch in the commercial industrial space.


The Next Steps


The right thing is to start planning your IPv6 transition now.


  • First and foremost, start documenting your network requirements. In organizations seeking to increase their maturity, we find that a lack of documentation is the biggest factor undermining security posture. Organize your network assets, classify them using a simple classification scheme, and identify which assets are IPv6 ready. For the ones that are not, make sure that your equipment refresh plans include IPv6 support, and ensure your current and future purchase plans do the same.
  • Document your IPv6 allocated and unallocated space, to prepare your addressing plan. This will be instrumental in helping you make the transition when you are ready.
  • Educate yourself on IPv6! The assumptions of IPv4 do not always exist in IPv6.
  • Work with your providers on IPv6 requirements. They need to support it, and they need to support you.
  • Test your plans and assumptions! Monitor for reliability and performance – in both test and production.
  • Advertise in DNS by updating with AAAA records pointing to IPv6 addresses
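For the documentation and addressing-plan steps above, Python's standard `ipaddress` module is a handy starting point. A small sketch, using the RFC 3849 documentation prefix (`2001:db8::/32`) and a hypothetical `/48` site allocation:

```python
import ipaddress

# Hypothetical site allocation inside the IPv6 documentation prefix (RFC 3849).
site = ipaddress.ip_network("2001:db8:abcd::/48")

# A /48 allocation carves into 65,536 standard /64 subnets — one per LAN segment.
subnet_iter = site.subnets(new_prefix=64)
first_subnet = next(subnet_iter)

print(site.num_addresses)  # addresses in the whole /48 (2**80)
print(first_subnet)        # → 2001:db8:abcd::/64

# Check whether a given host address falls inside the documented allocation.
host = ipaddress.ip_address("2001:db8:abcd:1::42")
print(host in site)        # → True
```

Scripting the plan this way – enumerating subnets, checking containment – makes it easy to keep the addressing document and the deployed reality in sync as the transition proceeds.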


For more information on making the switch to IPv6, TeamARIN has some great information and resources for you.


This post was co-authored by Joel Cardella and Cindy Jones. You can reach us on Twitter at @JoelConverses and @SinderzNAshes.


EDIT: Corrected Cindy's twitter handle!

This Thursday, our Cyber Security Awareness Month webcast series kicks off with a look at security in the workplace:


How to Make Your Workplace Cyber-Safe

Thursday, October 8th at 11am ET/ 8am PT and 4pm BST

  • Bob Lord, CISO in Residence at Rapid7
  • Ed Adams, President and CEO at Security Innovation
  • Chris Secrest, Information Security Manager at MetaBank
  • Josh Feinblum, Vice President of Information Security at Rapid7


We won’t just be targeting security practitioners, either – anyone who works in an office can benefit.


You may be thinking, “But I don’t work in security, so why is that my concern?”


While it’s the duty of security and IT to mitigate risk and ensure that the security program adheres to industry best practices, it’s in everyone’s best interest to ensure that your workplace is cyber-safe. Breaches threaten not just customer information but also the PII of employees and partners, and they can create service outages that strain the entire organization and cause long-lasting reputational and brand damage.


The best way to prevent that scenario is to create a security-centric culture to which everyone feels they can contribute. This prevents the that’s-not-my-job mentality that can torpedo even a strong security program. Human error is the most frequently seen security incident pattern, according to the 2015 Verizon Data Breach Investigations Report, and so mitigating user risk in the workplace is a highly effective means of bolstering security across the business.


During the webcast, three panelists will join me for a discussion on how to make a workplace cyber-safe by creating a security-centric culture. Our moderator, Bob Lord, has some excellent ideas about starting with the breach and working backwards to determine how far an attacker can get. For example – if a laptop gets lost or stolen, what’s the severity? Has the hard drive been encrypted? Similarly, assume a user gets phished. What does the kill chain look like in that scenario?


Some of the other topics we’ll cover include:


  • Characteristics of an effective security awareness program: How to make sure that everyone understands risk, and the way in which their footprint impacts the business.
  • Managing passwords and devices: What happens when employees lose a device that’s storing company information? We’ll touch on encryption and also general password hygiene.
  • Common threats targeted at users: Social engineering and phishing are common and effective mechanisms for infiltrating a network. Workers need to realize that things have gotten way more sophisticated than a Nigerian prince asking for money.


- @TheCustos

[ETA: Added in James Lee's excellent State of the Metasploit Framework talk, which I stupidly omitted by accident!]


Once you hang around in infosec for a little while, you learn that each of the major cons has its own reputation, its own mini-scene. This one's got the great parties, that one has the best speakers, that other one is where the fresh research is presented, et cetera. One I kept hearing lots of good things about -- full of great content and really great people -- was Derbycon, a newer con entering its 5th year this year in Louisville, Kentucky.


With these words of praise in mind I went to Louisville last weekend and learned very quickly that Derbycon really does live up to its great reputation. It's a space where not only are n00bs (like me) welcome, but even seasoned pros bask in the positivity and family feel of the space. I don't think I've ever seen quite so many whole families with kids at an infosec con as I did at Derby. Maybe it's the genteel kindness of Louisville that rubs off on the attendees, but at Derby everyone was so friendly and the whole con felt very welcoming. Linecon, barcon, outside-the-front-entrance-smoking-con -- anywhere you went you had a great conversation with someone. (And it really can't be a coincidence that the 2 Black badges up for auction at the closing ceremonies each went for $7000, with all money going to Hackers for Charity. That's really amazing.)


... True enough, the beer and bourbon were flowing a-plenty -- and boy was that bourbon good -- and the community and surrounding company were the best part of the con. No surprise, the talks were top-notch too. I'm embedding a few videos of my favorite sessions below, but admittedly I am not as up on my technical knowledge as most of you. You can peruse the ENTIRE list of Derbycon 5 talks in this playlist:


But for those of you looking for a little taste of it, take a look:

The State of the Metasploit Framework -- by Egypt

What changed this year? What community contributions did we see? What cool new shiny things are in Metasploit Framework that you might have missed? Everyone who uses Metasploit should tune in.


The Opening Keynote - Information Security Today and in the Future -- featuring Ed Skoudis, John Strand, Chris Nickerson, Kevin Johnson & HD Moore

This was a really fascinating keynote -- there was a lot of emphasis on pen testing in this, but it touches on a lot of topics from the importance of relationships with your IT and devops team to educating the workforce. There's a ton in here, give it a listen.


Started from the bottom and now I'm here: How to ruin your life by getting everything you ever wanted - by Chris Nickerson

I missed this one in person and regret it IMMENSELY. Thankfully, Egypt shared it on twitter with a hearty endorsement, and I hugely agree. This isn't a tech talk, but if you work in infosec or with people who work in infosec... you need to see this talk. What happens to "infosec rockstars"? What is the real cost? What is the state of our community today?


Gray Hat PowerShell - by Ben Ten/@Ben0xA

Now this one IS a more technical talk, so if you already grok Powershell this one's for you (not for Powershell newbies). I couldn't get into this one as the line was out the door and around the hotel, so... check it out.


The Metasploit Town Hall -- with todb, Egypt, thelightcosine & busterbcook

Back again to Derbycon, our High Priests of Metasploit give the community an update on what's new in Metasploit and take questions from those in attendance on what they'd like to see or improve.


Developers: Care and Feeding  -- by Bill Sempf

If you work with developers, and feel like you and they are speaking two very different languages and have massively different priorities, you need to hear this talk.


Other random things I learned at Derby:

  1. Some of you guys can drink a lot -- a LOT -- of bourbon and beer. Wow.
  2. It is entirely likely that you will walk into the Hyatt bar on any Derbycon evening and see several Cards Against Humanity games going on concurrently.
  3. I have now righted a GREAT wrong in my life and finally saw the "classic" 90s movie Hackers thanks to the 20th anniversary screening at Derby. (Yes, yes, I know, it's unfathomable that I hadn't seen it before. But now I can shout HACK THE PLANET!!!! with the best of them.)
  4. Judging by the references and fsociety shirts I saw, Mr Robot seems to be pretty popular in our scene -- and I'm glad, because I already can't wait for season 2.
  5. I know it's not the 90s anymore, but The Crystal Method can still really rock the house, and some of you look quite lovely in blinky cyberpunk headgear.
  6. If you are at all a light sleeper, make sure you book a hotel room above the 10th floor. I was on the 4th floor, and the parties were on the 2nd floor and, well, not much sleep was had.
  7. The Meme-Fu from some of the speakers at Derby was just so damn high.

'Til next year, Derbycon.  Let's keep that welcoming feeling going even outside of Louisville.
