
As part of a Rapid7 managed services unit, the Security Advisory Services team at Rapid7 specializes in security assessments for organizations. Using the CIS Critical Security Controls (formerly the SANS 20 Critical Controls) as a baseline, the team assesses and evaluates strengths and gaps, and makes recommendations on closing those gaps.


The interesting thing about the critical security controls is how well they scale to work for organizations of any size, from very small to very large. They are written in easy-to-understand business language, so non-security people can easily grasp what they do. They cover many parts of an organization, including people, processes, and technology. As a subset of the priority 1 items in NIST Special Publication 800-53, they are also highly relevant and complementary to many established frameworks.


The Security Advisory Services team will be posting a blog series on each of the controls. These posts are based on our experience over the last two years of our assessment activity with the controls, and how we feel each control can be approached, implemented and evaluated. If you are interested in learning more about the CIS Critical Controls, stay tuned here as we roll out posts weekly. Thanks for your interest and we look forward to sharing our knowledge with you!


To find the posts, search for the tag "CIS 20".

I’m excited to be shaving my head at Shaves that Save at the RSA Conference US 2017—the second annual event where information security professionals go bald to raise money for the St. Baldrick’s Foundation and its work to fund a cure for childhood cancer.  I hope you can join us for a whole lot of fun—head shaving, a great DJ, a bar to benefit St. Baldrick’s, and an appearance by Stormtroopers and other Star Wars characters from the 501st Legion. And while we’ll have a lot of fun, the bigger goal is to raise money for research that will help save kids’ lives.



The event is on Wednesday, February 15th from 6-7:30 PM in the Viewing Room across from the South Expo hall.  You don’t need to register for the event, but you do need an RSA pass. (An Expo pass is fine.  Don't have one?  You can register for an Expo Pass.)


We already have 12 shavees signed up from across the InfoSec industry!  I’m honored to join Josh Corman (Atlantic Council), Diana Kelley (IBM), Pete Lindstrom (IDC), Ed Moyle (ISACA), Rich Mogull (Securosis), Chris Nickerson (LARES), Michael Nickle (CA), Nick Selby (Secure Ideas Incident Response Team) and others in InfoSec to stand in solidarity with kids who typically lose their hair while undergoing treatment for cancer, and to help fund critical research.


I’ve been supporting St. Baldrick’s for a number of years, and this is the third time I’m shaving my head. I was introduced to the foundation through a corporate partnership with NetApp, which is a large St. Baldrick’s supporter.  Since then, I’ve gotten to know a number of kids and families impacted by cancer, and seen that they deserve better.  I’ve met kids who ultimately lost their battle.  I’ve seen kids who have taken chemo for over 1,000 days in treatment.  Thankfully I’ve seen a bunch where the treatment has worked, but many live in fear of a recurrence or long-term side effects from chemotherapy. These kids just want to be kids, and I’ve learned so much from their amazing attitudes as they persevere through treatment.


Unfortunately for these kids, only 4% of US Federal funding for cancer research is solely dedicated to childhood cancer, and St. Baldrick’s Foundation helps fill the funding gap as the largest non-government funder of childhood cancer research grants.  St. Baldrick’s research has helped more of them survive, and provides hope for a cure for others.  No child should have to fight cancer or suffer the effects of treatment.


How can you help?

  • At the RSA Conference?  Come cheer on the shavees!  We have a number of people shaving their head for the first time, and your energy makes it even better!
  • Donate to the St. Baldrick’s Foundation (a U.S. 501(c)(3) non-profit organization) to support critical research.  You can donate from the event page.
  • Shave with us?  We have space left for a few more people if you want to join us.
  • Promote #ShavesThatSave on social media to help get the word out about the event.


I’d like to thank all the volunteers making this event a success:  Rapid7's Event Management Team for bringing the event to life, DJ Ka'nete for donating his services, MIS Training Institute, Entrust Datacard, the 501st Legion, Golden Gate Garrison, co-organizers Nick Selby and Davin Baker, and all the other volunteers and shavees.

One of the most nerve-wracking things a person can do is give a talk to a group of people. As a matter of fact, approximately 3 out of 4 people suffer from speech anxiety. This is further exacerbated in an industry and community like ours, where many of us are introverts and/or suffer from "imposter syndrome": we think we aren't as smart or good at something as we actually are. We often feel like someone else has done a better job explaining a theory or area in information security than we ever could. We also often feel like we have nothing new or interesting to contribute, but that isn't true!


The people who make up our community have a diverse skill set. Each of us has experiences and a pool of knowledge that are unique to us, even when they may seem similar to someone else's. We each have a unique voice, way of thinking, and ways of processing information. This is why the Security BSides Las Vegas' Proving Grounds track is so near and dear to me.


For those who are unfamiliar with what we do, Proving Grounds gives a platform for folks who have never spoken at a national conference (DEF CON, RSA, DerbyCon) to give their first talk in a "safe" environment. We pair them with a mentor: someone established in the community who has experience presenting. They work together so the first-time speakers can take their submitted outline and abstract and turn it into a well-thought-out talk. The mentors help with everything from how the presentation looks and the flow of the information being shared to presenting tips and tricks. The mentors are there the day of their partner's talk for moral support, and we also offer new presenters a chance to practice in the room they'll present in before the con begins.


For the past four years, I have worked together with SecurityMoey as the co-director of this track. I leapt at the opportunity to work with him because I wished that there were something like this when I was preparing for my first talk. I’m an extremely nervous and anxious presenter—so much so, that I usually spend the 10 minutes or so before my talk in the bathroom trying to calm down and pump myself up. I also had a problem when I first started to submit CFPs where I didn’t know what information was relevant to the review board, what was too much or too little, or how to tailor a talk to an audience. I more or less winged it for a couple of years until I had watched enough talks and gotten enough peer feedback that I felt comfortable with how I wanted to present my information. It was a lot more work than it could have been, which is another benefit of the Proving Grounds track.


I can easily go on about how passionate I am about this program and how important mentoring is to our community. In fact, Moey and I presented on this at DerbyCon a few years ago. What it all boils down to is this: We have an awesome community, and we need to continue to grow by welcoming new people and ideas to our conferences.


The deadline for submissions is February 15th, so submit soon! To submit your talk proposal, go here:


Link to the talk Moey and I did at DerbyCon: mentorship-michael-ortega-magen-wu

Today, we're excited to release Rapid7's latest research paper, Under the Hoodie: Actionable Research from Penetration Testing Engagements, by Bob Rudis, Andrew Whitaker, and Tod Beardsley, with loads of input and help from the entire Rapid7 pentesting team.


This paper covers the often occult art of penetration testing and seeks to demystify the process, techniques, and tools that pentesters use to break into enterprise networks. By drawing on real, qualified data from the real-life engagements of dozens of pentesters in the field, we're able to suss out the most common vulnerabilities that are exploited, the most common network misconfigurations that are leveraged, and the most effective methods we've found to compromise high-value credentials.


Finding: Detection is Everything

Probably the most actionable finding we discovered is that most organizations that conduct penetration testing exercises have a severe lack of usable, reliable intrusion detection capabilities. Over two-thirds of our pentesters completely avoided detection during the engagement. This is especially concerning given that most assessments don't put a premium on stealth; due to constraints in time and scope, pentesters generate an enormous amount of malicious traffic. In an ideal network, this would set off alarm bells everywhere. Most engagements end with recommendations to implement some kind of incident detection and response, regardless of which specific techniques for compromise were used.

Finding: Enterprise Size and Industry Don't Matter

When we started this study, we expected to find quantitative differences between small networks and large networks, and between different industries. After all, you might expect that a large financial-industry enterprise of over 1,000 employees would be better equipped to detect and defend against unwelcome attackers, due to the security resources available and required by various compliance regimes and regulatory requirements. Or, you might believe that a small, online-only retail startup would be more nimble and more familiar with the threats facing their business.


Alas, this isn't the case. As it turns out, the detection and prevention rates are nearly identical between large and small enterprises, and no industry seemed to fare any better or worse when it came to successful compromises.


This is almost certainly due to the fact that IT infrastructure pretty much everywhere is built using the same software and hardware components. Thus, all networks tend to be vulnerable to the same common misconfigurations, and have the same vulnerability profiles when patch management isn't firing at 100%. There are certainly differences in the details, especially when it comes to custom-designed web applications, but even those tend to be powered by the same sorts of frameworks and components.


The Human Touch

Finally, if you're not really into reading a bunch of stats and graphs, we have a number of "Under the Hoodie" sidebar stories, pulled from real-life engagements. For example, while discussing common web application vulnerabilities, we're able to share a story of how a number of otherwise lowish-severity, external web application issues led to the eventual compromise of the entire internal back-end network. Not only are these stories fun to read, they do a pretty great job of illustrating how unrelated issues can conspire on an attacker's behalf to lead to surprising levels of unauthorized access.


I hope you take a moment to download the paper and take a look at our findings; I don't know of any other research out there that explores the nuts and bolts of penetration testing in quite the depth or breadth that this report provides. In addition, we'll be covering the material at our booth at the RSA security conference next week in San Francisco, as well as hosting a number of "Ask a Pentester" sessions. Andrew and I will both be there, and we love nothing more than connecting with people who are interested in Rapid7's research efforts, so definitely stop by.

Note: Rebekah Brown was the astute catalyst for the search for insecure broadcast equipment and the major contributor to this post.


Reports have surfaced recently of local radio station broadcasts being hijacked and used to play anti-Donald Trump songs. The devices that were taken over are Barix Exstream systems, though there are several other brands of broadcasters, including Pkyo, that are configured and set up the same way as these devices and would also be vulnerable to this type of hijacking.


Devices by these manufacturers work in pairs. In the most basic operating mode, one encodes and transmits a stream over an internal network or over the internet and the other receives it and blasts it to speakers or to a transmitter.




Because they work in tandem, if you can gain access to one of these devices, you have information about the other one, including the IP address and port(s) it’s listening on. After seeing the story, we were curious about the extent of the exposure.

The View from the Control Room

We reviewed the January 31, 2017 port 80 scan data set from Rapid7’s Project Sonar to try to identify Barix Instreamer/Exstreamer devices and Pkyo devices based on some key string markers we identified from a cadre of PDF manuals. We found over a thousand of them listening on port 80 and accessible without authentication. They seem to be somewhat popular on almost every continent and are especially popular here in the United States.
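This kind of fingerprinting boils down to matching a handful of strings against each HTTP response in the scan data. The sketch below illustrates the idea in Python; the marker strings here are hypothetical placeholders, not the actual fingerprints we derived from the manuals:

```python
# Illustrative banner matching over HTTP scan data. The markers below
# are hypothetical stand-ins for the real device fingerprints.
DEVICE_MARKERS = {
    "Barix Exstreamer": ["Exstreamer", "Barix AG"],
    "Barix Instreamer": ["Instreamer", "Barix AG"],
}

def classify_banner(body):
    """Return the device names whose marker strings all appear in an
    HTTP response body; an empty list means no match."""
    return [name for name, markers in DEVICE_MARKERS.items()
            if all(marker in body for marker in markers)]
```

In practice you would also restrict matches to responses that lack an authentication challenge, since the point of the survey was devices reachable without credentials.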




Many of these devices have their administration interfaces on something besides port 80, so this is likely just a sample of the scope of the problem.


Because they operate in pairs, once you gain access to one device, you can learn about their counterparts directly from the administration screens:







It’s trivial to reconfigure either the source or destination points to send or receive different streams and it’s likely these devices go untouched for months or even years. It’s also trivial to create a script to push a new configuration to all the devices very quickly (we estimated five minutes or less).


What is truly alarming is that not only are these devices set up to be on the internet without any sort of authentication, but this issue has been brought up several times in the past. The exposure – which in this case is really a misconfiguration issue and not strictly a software vulnerability – was identified as early as April 2016, and this specific hijacking technique emerged shortly after the inauguration.


Coming Out of a Spot

The obvious question is: if this issue was identified nearly a year ago, why are there still susceptible systems on the internet? The answer is that just because an issue is identified does not automatically mean that the individuals responsible for securing the affected systems are aware that they are vulnerable, or of what the impact would be. As much as we as an industry talk about information sharing, often we aren’t sharing the right information with the right people. Station owners and operators do not always have a technical or security background, and may not read the security news or blogs. Even when the mainstream media published information on the impacted model and version, system operators may not know that they are using that particular model for their broadcast, or they may simply miss the brief media exposure.


We cannot and should not assume that people are aware of the issues that are discovered, and therefore we are making a greater effort to inform U.S. station owners by reaching out to them directly in coordination with the National Coordinating Center for Communications (COMM-ISAC) and the National Association of Broadcasters (NAB). We've offered not only to inform these operators that they are vulnerable, but also to help them understand the technical measures that are required to secure their systems, down to walking through how to set a password. What is intuitive to some is not always intuitive to others.


Cross Fade Out

While hijacking a station to play offensive music is certainly not good, the situation could have been — and still can be — much more serious. There are significant political tensions in the U.S. right now, and a coordinated attack against the nearly 300 devices we identified in this country could cause targeted chaos and panic. Considering how easy it is to access and take control of these devices, a coordinated hijacking of these broadcast streams is not such a far-fetched scenario, so it is imperative to secure these systems to reduce the potential impact of future attacks.


You can reach out to for more information about the methodology we used to identify and validate the status of these devices.

NOTE: Tom Sellers, Jon Hart, Derek Abdine and (really) the entire Rapid7 Labs team made this post possible.


On the internet, no one may know if you’re of the canine persuasion, but with a little time and just a few resources they can easily determine whether you’re running an open “devops-ish” server or not. We’re loosely defining devops-ish as:


  • MongoDB
  • CouchDB
  • Elasticsearch


for this post, but we have a much broader definition and more data coming later this year. We use the term “devops” as these technologies tend to be used by individuals or shops that are emulating the development and deployment practices found in the DevOps communities.


Why are we focusing on devops-ish servers? I’m glad you asked!


The Rise of Ransomware


If you follow IT news, you’re likely aware that attackers who are focused on ransomware for revenue generation have taken to the internet searching for easy marks to prey upon. In this case the would-be victims are those running production database servers directly connected to the internet with no authentication.


Here’s a smattering of news articles on the subject:



The core reason why attackers are targeting devops-ish technologies is that most of these servers ship with default configurations that have tended to be wide open (i.e. they listen on all IP addresses and have no authentication) to facilitate easy experimentation and exploration. Such a configuration means you can give a new technology a test on your local workstation to see if you like the features or API, but it also means that, if you’re not careful, you’ll be exposing real data to the world if you deploy them the same way on the internet.
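For MongoDB, for example, closing the wide-open default comes down to two settings. A minimal hardening sketch of a `mongod.conf` looks like the following (check the official MongoDB security checklist for your version before relying on this):

```yaml
# mongod.conf: the two settings that matter most for exposure.
net:
  bindIp: 127.0.0.1        # listen on localhost only, not 0.0.0.0
security:
  authorization: enabled   # require authentication for all clients
```

Elasticsearch and CouchDB have analogous bind-address and authentication settings; the common theme is that the out-of-the-box defaults favor convenience over safety.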


Attackers have been ramping up their scans for these devops-ish services. We’ve seen this activity in our network of honeypots (Project Heisenberg):




We’ll be showing probes for more services, including CouchDB, in an upcoming post/report.


When attackers find targets, they often take advantage of these open configurations by encrypting the contents of the databases and leaving little “love notes” in the form of table names or index names with instructions on where to deposit bitcoins to get the keys back to your data.  In other cases, the contents of the databases are dumped and kept by the attacker but wiped from the target, with a ransom then demanded for the return of the kidnapped data. In still other cases, the data is wiped from the target and not kept by the attackers at all, leaving anyone who gives in to these demands with a double whammy: paying the ransom and getting no data in return.


Not all exposed and/or ransomed services contain real data, but attackers have automated the process of finding and encrypting target systems, so it doesn’t matter to them if they corrupt test databases that will just get deleted; it hasn’t cost them any more time or money to do so. And, because the captive systems are still wide open, there have been cases where multiple attacker groups have encrypted the same systems, so at least the attackers fight amongst themselves as well as attack you.


Herding Servers on the Wide-Open Range Internet


Using Project Sonar, we surveyed the internet for these three devops databases. NOTE: we have a much larger ongoing study that includes a myriad of devops-ish and “big data” technologies, but we’re focusing on these three servers for this post given the timeliness of their respective attacks. We try to be good Netizens, so we have more rules in place when it comes to scanning than others do. For example, if you ask us not to scan your internet subnet, we won’t. We will also never perform scans requiring credentials/authentication. Finally, we’re one of the more prolific telemetry gatherers, which means many subnets choose to block us. I mention this first since many readers will be apt to compare our numbers with the results from their own scans or from other telemetry resources. Scanning the internet is a messy bit of engineering, science, and digital alchemy, so there will be differences between various researchers.


We found:


  • ~56,000 MongoDB servers
  • ~18,000 Elasticsearch servers
  • ~4,500 CouchDB servers



Of those, 50% of MongoDB servers were captive, 58% of Elasticsearch servers were captive, and 10% of CouchDB servers were captive:


A large percentage of each of these devops-ish databases are in “the cloud”:




and several of those listed do provide secure deployment guides, like this one for MongoDB from DigitalOcean: duction-mongodb-server. However, others have no such guides, or have broken links to them, and most do not offer base images that are secure by default when it comes to these services.


Exposed and Unaware


If you do run one of these databases on the internet, it would be wise to check your configuration to ensure that you are not exposing it to the internet, or at the very least to have authentication enabled and rudimentary network security groups configured to limit access. Attackers are continuing to scan for open systems and will continue to encrypt and hold systems for ransom. There’s virtually no risk in it for them, and it’s extremely easy money, since the reconnaissance for and subsequent attacking of exposed instances likely happens from behind anonymization services or from unwitting third-party nodes compromised previously.
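A quick first-pass check is simply whether the service port answers from outside your network at all. The Python sketch below tests only TCP reachability, not whether authentication is enabled, so treat a "reachable" result as a prompt to dig deeper (the port numbers are the common defaults: 27017 for MongoDB, 9200 for Elasticsearch, 5984 for CouchDB):

```python
import socket

# Common default ports for the services discussed in this post.
DEFAULT_PORTS = {"mongodb": 27017, "elasticsearch": 9200, "couchdb": 5984}

def is_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.
    Run this from OUTSIDE your network perimeter; success means the
    port is exposed, but says nothing about whether auth is enabled."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If the port is reachable, the next step is to attempt an unauthenticated query against the service's API; if that succeeds too, you are exactly the kind of target these ransom campaigns are scanning for.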


Leaving the configuration open can cause other issues beyond the exposure of the functionality provided by the service(s) in question. Over 100 of the CouchDB servers are exposing some form of PII (going solely by table/db name) and much larger percentages of MongoDB and Elasticsearch open databases seem to have some interesting data available as well. Yes, we can see your table/database names. If we can, so can anyone who makes a connection attempt to your service.


We (and attackers) can also see configuration information, meaning we know just how out of date your servers, like MongoDB, are:



So, while you’re checking how secure your access configurations are, it may also be a good time to ensure that you are up to date on the latest security patches (the story is similarly sad for CouchDB and Elasticsearch).
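Flagging out-of-date instances is trivial once you have the version string a server reports. A simplified sketch (real-world version strings can carry suffixes like "-rc1" that this does not handle):

```python
def is_outdated(reported, minimum):
    """Numerically compare dotted version strings, e.g. '3.2.11' vs
    '3.4.0'. Returns True when the reported version is older than the
    minimum you consider acceptable."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(reported) < as_tuple(minimum)
```

The point is that attackers can run exactly this comparison against every exposed server they find, which is why hiding version information is no substitute for patching.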

What Can You Do?


Use automation (most of you are deploying in the cloud) and within that automation use secure configurations. Each of the three technologies mentioned have security guides that “come with” them:



It’s also wise to configure your development and testing environments the same way you do production (hey, you’re the one who wanted to play with devops-ish technologies, so why not go the full monty?).


You should also configure your monitoring services and vulnerability management program to identify and alert if your internet-facing systems are exposing an insecure configuration. Even the best shops make deployment mistakes on occasion.


If you are a victim of a captive server, there is little you can do to recover outside of restoring from backups. If you don’t have backups, it’s up to you to decide just how valuable your data is/was before you consider paying a ransom. If you are a business, also consider reporting the issue to the proper authorities in your locale as part of your incident response process.


What’s Next?


We’re adding more devops-ish and data science-ish technologies to our Sonar scans and Heisenberg honeypots and putting together a larger report to help provide context on the state of the exposure of these services and to try to give you some advance notice as to when attackers are preying on new server types. If there are database or server technologies you’d like us to include in our more comprehensive study, drop a note in the comments or to


Burning sky header image by photophilde used CC-BY-SA

2016 kept us on our toes right up to the very end, and its last curveball will have implications lasting well past the beginning of the new year.


Speculation on Russian hacking is nothing new, but it picked up notably with the DNC hack prior to the presidential election and the subsequent release of stolen emails, which the intelligence community later described as an information operation aimed at influencing the election. And then on December 29th we saw the US government's response, the coordinated release of a joint report detailing the hacking efforts attributed to Russian intelligence agencies, economic sanctions, and the expulsion of Russian diplomats.


This blog is not going to discuss the merits – or otherwise – of various political actions, nor whether cyberespionage should warrant different responses than other types of espionage. Instead, I’m going to focus on the lessons we can take away from the Joint Analysis Report (JAR). The report is not perfect, but nonetheless, I believe it can be valuable in helping us, as an industry, improve, so I’m choosing to focus on those points in this post.


The Joint Analysis Report won’t change much for some defenders, while for others it means a reevaluation of their threat model and security posture. But given that the private sector has been tracking these actors for years, it’s difficult to imagine anyone saying that they are truly surprised Russian entities have hacked US entities. Many of the indicators of compromise (IOCs) listed in the JAR have been seen before, either in commercial or open source reporting. That being said, there are still critical takeaways for network defenders.


1) The US government is escalating its response to cyber espionage. The government has only recently begun to publicly attribute cyberattacks to nation states, including attributing the Sony attacks to North Korea, a series of industrial espionage-related attacks to Chinese PLA officers, and a series of attacks against the financial sector to Iran-backed actors. But none of those attack claims came with the expulsion of diplomats or suspected intelligence officers. The most recent case of a diplomat being declared persona non grata (that we could readily find) was in 2013, when three Venezuelan officials were expelled from the US in response to the expulsion of US diplomats from Venezuela. Prior to that was in 2012, when a top Syrian diplomat was expelled from the Washington Embassy in response to the massacre of civilians in the Syrian town of Houla. Clearly, this is not a step that the United States takes lightly.


These actions are more significant to government entities than they are to the private sector, but being able to frame the problem is crucial to understanding how to address it. Information and influence operations have been going on for decades, and the concept that nations use the cyber domain as a means to carry out these information operations is not surprising. This is the first time, however, that the use of the cyber domain has been met with a public response that has previously been reserved for conventional attacks. If this becomes the new normal, then we should expect to see more reports of this nature and should be prepared to act as needed.


2) The motivation of the attackers detailed in the report is significant. We tend to think of cyber operations as fitting into three buckets: cyberespionage, cybercrime, or hacktivism. The actions described in the JAR and in the statement from the President describe influence operations. Not only do the attackers want to steal information, but they are actively trying to influence opinions, which is an area of cyber-based activity we are likely to see increasing. The entities listed in the JAR, which are primarily political organizations (and there are far more political organizations out there than just the two primary parties’ HQs), as well as organizations such as think tanks, should reevaluate their threat models and their security postures. It is not just about protecting credit card information or PII; anything and everything is on the table.


The methods that are being used are not new – spear-phishing, credential harvesting, exploiting known vulnerabilities, etc. – and that fact should tell people how important basic network security is and will remain. There was no mention of zero-days or use of previously undetected malware. Companies need to understand that the basics are just as, or even more, important when dealing with advanced actors.


3) We need to work with what we have – and that doesn’t mean we just plug and play IOCs. It’s up to us to take the next step. So, what is there to do with the IOCs? There are a lot of people who are disappointed about the quality and level of detail of the IOCs in the JAR. It is possible that what has been published is the best the government could give us at the TLP: White level, or that the government analysts who focus on making recommendations to policy makers simply do not know what companies need to defend their networks (hint: it is not a Google IP address). We, as defenders, should never just take a set of IOCs and plug them into our security appliances without reviewing and understanding what they are and how they should be used.


Defenders should not focus on generating alerts directly off the IOCs provided, but should do a more detailed analysis of the behaviors that they signify. In many cases, even after an IOC is no longer valid, it can tell a story about an attacker behavior, allowing defenders to identify signs of those behaviors, rather than the actual indicators that are presented. IOC timing is also important. We know from open source reporting, as well as some of the details in the JAR, that this activity did not happen recently; some of it has been going on for years. That means that if we are able to look back through logs for activity that occurred in the past, the IOCs will be more useful than if we try to use them from this point forward, because once they are public, it is less likely that the attackers will still be employing them in the way they did in the past.
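That retrospective log search can start out as simple as a substring sweep over archived logs. A minimal sketch in Python (a real pipeline would normalize timestamps, defang indicators, and stream rather than hold logs in memory):

```python
def retro_search(log_lines, iocs):
    """Return (indicator, line) pairs for every historical log line
    that contains one of the supplied IOCs (IPs, domains, hashes)."""
    hits = []
    for line in log_lines:
        for ioc in iocs:
            if ioc in line:
                hits.append((ioc, line))
    return hits
```

Even a crude sweep like this is more valuable than forward-looking alerts on published IOCs, because it covers the window when the attackers were actually using them.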


We may not always get all of the details around an IOC, but it’s our job as defenders to do what we can with what we have, especially if we are an organization that fits the targeting profile of a particular actor. Yes, it would be easier if the government could give us all of the information we needed in the format that we needed, but reality dictates that we will still have to do some of our own analysis.


We should not be focusing on any one aspect of the government response, whether it is the lack of published information clearly providing attribution to Russia, or the list of less-than-ideal IOCs. There are still lessons that we, as decision makers and network defenders, can take away. Focusing on those lessons requires an understanding of our own networks, our threat profile, and yes, sometimes even the geo-political aspects of current events, so that we can respond in a way that will help us to identify threats and mitigate risk.

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we’re highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.



You may or may not know this about me, but I am kind of an overly optimistic sunshine and rainbows person, especially when it comes to threat intelligence. I love analysis, I love tackling difficult problems, connecting dots, and finding ways to stop malicious actors from successfully attacking our networks.


Even though 2016 tried to do a number on us (bears, raccoons, whatever...)



I believe that we can come through relatively unscathed, and in 2017 we can make threat intelligence even better by alleviating a lot of confusion and addressing many of the misunderstandings that make it more difficult to integrate threat intelligence into information security operations. In the spirit of the new year, we have compiled a list of Threat Intelligence Resolutions for 2017.


Don’t chase shiny threat intel objects


Intelligence work, especially in the cyber realm, is complex, involved, and often time-consuming. The output isn't always earth-shattering: new rules to detect threats, additional indicators to search for during an investigation, a brief to a CISO on emerging threats, situational awareness for the SOC so they better understand the alerts they respond to. Believe it or not, in this media-frenzied world, that is the way it is supposed to be. Things don't have to be sensationalized to be relevant. In fact, many of the things you discover through analysis won't be sensational, but they are still important. Don't discount these things or ignore them in order to go chase shiny threat intelligence objects – things that look and sound amazing and important but likely have little relevance to you. Be aware that those shiny things exist, but do not let them take away from the things that are relevant to you.



It is also important to note that not everything out there that gets a lot of attention is bad – sometimes something is big because it is a big deal and something you need to focus on. Knowing what is just a shiny object and what is significant comes down to knowing what is important to you and your organization, which brings us to resolution #2.


Identify your threat intelligence requirements



Requirements are the foundation of any intelligence work. Without them you could spend all of your time finding interesting things about threats without actually contributing to the success of your information security program.

There are many types and names for intelligence requirements: national intelligence requirements, standing intelligence requirements, priority intelligence requirements – but they are all a result of a process that identifies what information is important and worth focusing on. As an analyst, you should not be focusing on something that does not directly tie back to an intelligence requirement. If you do not currently have intelligence requirements and are instead going off of some vague guidance like “tell me about bad things on the internet” it is much more likely that you will struggle with resolution #1 and end up chasing the newest and shiniest threat rather than what is important to you and your organization.


There are many different ways to approach threat intelligence requirements – they can be based on business requirements, previous incidents, current events, or a combination of the above. Scott Roberts and Rick Holland have both written posts to help organizations develop intelligence requirements, and they are excellent places to start with this resolution. (They can be found here and here.)


Be picky about your sources


One of the things we collectively struggled with in 2016 was helping people understand the difference between threat intelligence and threat feeds. Threat intelligence is the result of following the intelligence cycle - from developing requirements, through collection and processing, analysis, and dissemination. For a (much) more in depth look into the intelligence cycle read JP 2-0, the publication on Joint Intelligence [PDF].


Threat feeds sit solidly in the collection/processing phase of the intelligence cycle - they are not finished intelligence, but you can't have finished intelligence without collection, and threat feeds can provide the pieces needed to conduct analysis and produce threat intelligence. There are other sources of collection besides feeds, including alerts issued by government agencies or commercial intelligence providers that often contain lists of IOCs. With all of these things it is important to ask questions about the indicators themselves:


  • Where does the information come from? A honeypot? Is it low interaction or high interaction? Does it include scanning data? Are there specific attack types that they are monitoring for? Is it from an incident response investigation? When did that investigation occur? Are the indicators pulled directly from other threat feeds/sources? If so, which ones?
  • What is included in the feed? Is it simply IOCs or is there additional information or context available? Remember, this type of information must still be analyzed and it can be very difficult to do that without additional context.
  • When was the information collected? Some types of information are good for long periods, but some are extremely perishable and it is important to know when the information was collected, not just when you received it. It is also important to know if you should be using indicators to look back through historical logs or generate alerts for future activity.
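The provenance questions above can be captured as metadata on each indicator. Below is a hedged sketch, with entirely hypothetical field names, of recording where an indicator came from and when it was collected, then using an assumed shelf life to decide whether it is still fresh enough for forward-looking alerting or only useful for searching historical logs.

```python
from datetime import datetime, timedelta

# Illustrative feed indicator with the provenance questions from the list
# above captured as fields; all names and values are hypothetical.
indicator = {
    "value": "198.51.100.77",
    "source": "high-interaction honeypot",
    "context": "SSH brute-force, observed credentials",
    "collected_at": datetime(2016, 11, 15),
    "shelf_life_days": 30,   # assumed perishability for this indicator type
}

def recommended_use(ind, now):
    """Decide whether an indicator is fresh enough to generate alerts on
    future activity, or only useful for retro-hunting in historical logs."""
    age = now - ind["collected_at"]
    if age <= timedelta(days=ind["shelf_life_days"]):
        return "alert"        # still fresh: alert on it going forward
    return "retro-hunt"       # stale: search back through historical logs

print(recommended_use(indicator, datetime(2016, 12, 1)))   # within shelf life
print(recommended_use(indicator, datetime(2017, 2, 1)))    # past shelf life
```

The shelf-life number is a judgment call that depends on the indicator type (domains and IPs age quickly; file hashes less so), which is exactly why the "when was this collected?" question matters.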


Tactical indicators have dominated the threat intelligence space, and many organizations employ them without a solid understanding of what threats the feeds convey or where the information comes from, simply because they are assured that they have the "best threat feed" or the "most comprehensive collection" – or maybe the indicators come from a government agency with a fancy logo (although let's be honest, not that fancy). But you should never blindly trust those indicators, or you will end up with a pile of false positives. Or a really bad cup of coffee.


It isn’t always easy to find out what is in threat feeds, but it isn’t impossible. If threat feeds are part of your intelligence program then make it your New Year’s resolution to understand where the data in the feeds comes from, how often it is updated, where you need to go to find out additional information about any of the indicators in the feeds, and whether or not it will support your intelligence requirements. If you can’t find that information out then it may be a good idea to also start looking for feeds that you know more about.



Look OUTSIDE of the echo chamber

It is amazing how many people you can find to agree with your assessment (or agree with your disagreement of someone else's assessment) if you keep looking to the same individuals or the same circles. It is almost as if there are biases at work – wait, we know a thing or two about biases! <This Graphic Explains 20 Cognitive Biases That Affect Your Decision-Making> Confirmation bias, bandwagoning, take your pick. When we only expose ourselves to certain things within the cyber threat intelligence realm, we severely limit our understanding of the problems we are facing and the many different factors that influence them. We also tend to overlook a lot of intelligence literature that can help us understand how we should be addressing those problems. Cyber intelligence is not so new and unique that we cannot learn from traditional intelligence practices.

Here are some good resources on intelligence analysis and research:


Kent Center Occasional Papers — Central Intelligence Agency

The Kent Center, a component of the employee-only Sherman Kent School for Intelligence Analysis at CIA University, strives to promote the theory, doctrine, and practice of intelligence analysis.


Congressional Research Service

The Congressional Research Service, a component of the Library of Congress, conducts research and analysis for Congress on a broad range of national policy issues.


The Council on Foreign Relations

The Council on Foreign Relations (CFR) is an independent, nonpartisan membership organization, think tank, and publisher.


Don’t be a cotton headed ninny muggins

Now this is where the hopeful optimist in me really comes out. One of the things that has bothered me most in 2016 is the needless fighting and arguments over, well, just about everything. Don't get me wrong, we need healthy debate and disagreement in our industry. We need people to challenge our assumptions and help us identify our biases. We need people to fill in any additional details that they may have regarding the analysis in question. What we don't need is people being jerks or discounting analysis without having seen a single piece of information that the analysis was based on. There are a lot of smart people out there, and if someone publishes something you disagree with or question, then there are plenty of ways to get in touch with them or voice your opinion in a way that will make our collective understanding of intelligence analysis better.



This holiday season, eager little hacker girls and boys around the world will be tearing open their new IoT gadgets and geegaws and setting to work on evading tamper-evident seals, proxying communications, and reversing firmware, in search of a Haxmas miracle of 0day. But instead of exploiting these newly discovered vulnerabilities, many will instead notice their hearts growing three sizes larger, and wish to disclose these new vulns in a reasonable and coordinated way in order to bring attention to the problem and ultimately see a fix for the discovered issues.


In the spirit of HaXmas, then, I'd like to take a moment to talk directly to the good-hearted hackers out there about how one might go about disclosing vulnerabilities in a way that maximizes the chances that your finding will get the right kind of attention.


Keep It Secret, Keep it Santa


First and foremost, I'd urge any researcher to consider the upsides of keeping your disclosure confidential for the short term. While it might be tempting to tweet a 140-character summary publicly at the vendor's alias, dropping this kind of bomb on the social media staff of an electronics company is kind of a jerk move, and only encourages an adversarial relationship from there on out. In the best case, the company most able to fix the issue isn't likely to work with you once you've published, and in the worst, you might trigger a defensive reflex where the vendor refuses to acknowledge the bug at all.


Instead, consider writing a probing email to the company's email aliases of security@, secure@, abuse@, support@, and info@, along the lines of, "Hi, I seem to have found a software vulnerability with your product, who can I talk to?" This is likely to get a human response, and you can figure out from there who to talk to about your fresh new vulnerability.


The Spirit of Giving


You could also go a step further, and check the vendor's website to see if they offer a bug bounty for discovered issues, or even peek in on HackerOne's community-curated directory of security contacts and bug bounties. For example, searching for Rapid7 gives a pointer to our disclosure policies, contact information, and PGP key.


However, be careful when deciding to participate in a bug bounty. While the vast majority of bounty programs out there are well-intentioned, some come with an agreement that you will never, ever, ever, in a million years, ever disclose the bug to anyone else, ever -- even if the vendor doesn't deign to acknowledge or fix the issue. This can leave you in a sticky situation, even if you end up getting paid out. If you agree to terms like that, you can limit your options for public disclosure down the line if the fix is non-existent or incomplete.


Because of these kinds of constraints, I tend to avoid bug bounties, and merely offer up the information for free. It's totally okay to ask about a bounty program, of course, but be sure that you're not phrasing your request in a way that can be read as an extortion attempt -- that can be taken as an attack, and again, trigger a negative reaction from the vendor.


No Reindeer Games


In the happy case where you establish communications with the vendor, it's best to be as clear and as direct as possible. If you plan to publish your findings on your blog, say so, and offer exactly what and when you plan to publish. Giving vendors deadlines -- in a friendly, non-threatening, matter-of-fact way -- turns out to be a great motivator for getting your issue prioritized internally there. Be prepared to negotiate around the specifics, of course -- you might not know exactly how to fix the bug or how long that will take, and at the moment you disclose, they probably won't either.


Most importantly, though, try to avoid over-playing your discovery. Consider what an adversary actually has to do to exploit the bug -- maybe they need to be physically close by, or already have an authorized account, or something like that. Being upfront with those details can help frame the risk to other users, and can tamp down irrational fears about the bug.


Finally, try to avoid blaming the vendor too harshly. Bugs happen -- it's inherent in the way we write, assemble, and ship software for general purpose computers. Assume the vendor isn't staffed with incompetents and imbeciles, and that they actually do care about protecting their customers. Treating your vendor with respect will engender a pretty typical honey versus vinegar effect, and you're much more likely to see a fix quickly.


Sing it Loud For All to Hear


Assuming you've hit your clearly-stated disclosure deadline, it's time to publish your findings. Again, you're not trying to shame the vendor with your disclosure -- you're helping other people make better informed decisions about the security of their own devices, giving other researchers a specific, documented case study of a vulnerability discovered in a shipping product, and teaching the general public about How Security Works. Again, effectively communicating the vulnerability is critical. Avoid generalities, and offer specifics -- screenshots, step-by-step instructions on how you found it, and ideally, a Metasploit module to demonstrate the effects of an exploit. Doing this helps other researchers completely understand your findings and perhaps apply your learnings to their own efforts.


Ideally, there's a fix already available and distributed, and if so, you should clearly state that, early on in your disclosure. If there isn't, though, offer up some kind of solution to the problem you've discovered. Nearly always, there is a way to work around the issue through some non-default configuration, or a network-level defense, or something like that. Sometimes, the best advice is to avoid using the product altogether, but that tends to be a last resort.


Happy HaXmas!


Given the recently enacted DMCA research exemption on consumer devices, I do expect to see an uptick in disclosed issues that center around consumer electronics. This is ultimately a good thing -- when people tinker with their own devices, they are more empowered to make better decisions on how a technology can actually affect their lives. The disclosure process, though, can be almost as challenging as the initial hackery of finding and exploiting vulnerabilities in the first place. You're dealing with emotional people who are often unfamiliar with the norms of security research, and you may well be the first security expert they've talked to. Make the most of your newfound status as a security ambassador, and try to be helpful when delivering your bad news.



On the seventh day of Haxmas, the Cyber gave to me: a list of seven Rapid7 comments to government policy proposals! Oh, 'tis a magical season.


It was an active 2016 for Rapid7's policy team. When government agencies and commissions proposed rules or guidelines affecting security, we often submitted formal "comments" advocating for sound cybersecurity policies and greater protection of security researchers. These comments are typically a cross-team effort, reflecting the input of our policy, technical, and industry experts, and are submitted with the goal of helping government better protect users and researchers and advance a strong cybersecurity ecosystem.


Below is an overview of the comments we submitted over the past year. This list does not encompass the entirety of our engagement with government bodies, only the formal written comments we issued in 2016. Without further ado:


1.     Comments to the National Institute for Standards and Technology (NIST), Feb. 23: NIST asked for public feedback on its Cybersecurity Framework for Improving Critical Infrastructure Cybersecurity. The Framework is a great starting point for developing risk-based cybersecurity programs, and Rapid7's comments expressed support for the Framework. Our comments also urged updates to better account for user-based attacks and ransomware, to include vulnerability disclosure and handling policies, and to expand the Framework beyond critical infrastructure. We also urged NIST to encourage greater use of multi-factor authentication and more productive information sharing. Our comments are available here [PDF]: amework-022316.pdf


2.     Comments to the Copyright Office, Mar. 3: The Copyright Office asked for input on its (forthcoming) study of Section 1201 of the DMCA. Teaming up with Bugcrowd and HackerOne, Rapid7 submitted comments that detailed how Section 1201 creates liability for good faith security researchers without protecting copyright, and suggested specific reforms to improve the Copyright Office's process of creating exemptions to Section 1201. Our comments are available here [PDF]: -joint-comments-to-us-copyright-office-s…


3.     Comments to the Food and Drug Administration (FDA), Apr. 25: The FDA requested comments for its postmarket guidance for cybersecurity of medical devices. Rapid7 submitted comments praising the FDA's holistic view of the cybersecurity lifecycle, use of the NIST Framework, and recommendation that companies adopt vulnerability disclosure policies. Rapid7's comments urged FDA guidance to include more objective risk assessment and more robust security update guidelines. Our comments are available here [PDF]: ft-guidance-for-postmarket-management-of…


4.     Comments to the Dept. of Commerce's National Telecommunications and Information Administration (NTIA), Jun. 1: NTIA asked for public comments for its (forthcoming) "green paper" examining a wide range of policy issues related to the Internet of Things. Rapid7's comprehensive comments detailed – among other things – specific technical and policy challenges for IoT security, including insufficient update practices, unclear device ownership, opaque supply chains, the need for security researchers, and the role of strong encryption. Our comments are available here [PDF]: ternet-of-things-rfc-060116.pdf


5.     Comments to the President's Commission on Enhancing National Security (CENC), Sep. 9: The CENC solicited comments as it drafted its comprehensive report on steps the government can take to improve cybersecurity in the next few years. Rapid7's comments urged the government to focus on known vulnerabilities in critical infrastructure, protect strong encryption from mandates to weaken it, leverage independent security researchers as a workforce, encourage adoption of vulnerability disclosure and handling policies, promote multi-factor authentication, and support formal rules for government disclosure of vulnerabilities. Our comments are available here [PDF]: i-090916.pdf


6.     Comments to the Copyright Office, Oct. 28: The Copyright Office asked for additional comments on its (forthcoming) study of Section 1201 reforms. This round of comments focused on recommending specific statutory changes to the DMCA to better protect researchers from liability for good faith security research that does not infringe on copyright. Rapid7 submitted these comments jointly with Bugcrowd, HackerOne, and Luta Security. The comments are available here [PDF]: luta-security-joint-comments-to-copyrigh…


7.     Comments to the National Highway Traffic Safety Administration (NHTSA), Nov. 30: NHTSA asked for comments on its voluntary best practices for vehicle cybersecurity. Rapid7's comments recommended that the best practices prioritize security updating, encourage automakers to be transparent about cybersecurity features, and tie vulnerability disclosure and reporting policies to standards that facilitate positive interaction between researchers and vendors. Our comments are available here [PDF]: ybersecurity-best-practices-for-modern-v…



2017 is shaping up to be an exciting year for cybersecurity policy. The past year made cybersecurity issues even more mainstream, and comments on proposed rules laid a lot of intellectual groundwork for helpful changes that can bolster security and safety. We are looking forward to keeping up the drumbeat for the security community next year. Happy Holidays, and best wishes for a good 2017 to you!

(A Story by Rapid7 Labs)




Happy Holi-data from Rapid7 Labs!

It’s been a big year for the Rapid7 elves Labs team. Our nigh 200-node strong Heisenberg Cloud honeypot network has enabled us to bring posts & reports such as The Attacker’s Dictionary, Cross-Cloud Adversary Analytics and Mirai botnet tracking to the community, while Project Sonar fueled deep dives into National Exposure as well as ClamAV, fuel tanks and tasty, tasty EXTRABACON.


Our final gift of the year is the greatest gift of all: DATA! We’ve sanitized an extract of our November, 2016 cowrie honeypot data from Heisenberg Cloud. While not the complete data set, it should be good for hours of fun over the holiday break. You can e-mail research [at] rapid7 [dot] com if you have any questions or leave a note here in the comments.


While you’re waiting for that to download, please enjoy our little Haxmas tale…


Once upon a Haxmas eve… CISO Scrooge sat sullen in his office. His demeanor was sour as he reviewed the day’s news reports and sifted through his inbox, but his study was soon interrupted by a cheery minion’s “Merry HaXmas, CISO!”.


CISO Scrooge replied, “Bah! Humbug!”


The minion was taken aback. “HaXmas a humbug, CISO?! You surely don’t mean it!”


“I do, indeed…” grumbled Scrooge. “What is there to be merry about? Every day attackers are compromising sites, stealing credentials and bypassing defenses. It’s almost impossible to keep up. What’s more, the business units and app teams here don’t seem to care a bit about security. So, I say it again ‘Merry HaXmas?’ - HUMBUG!”


Scrooge’s minion knew better than to argue and quickly fled to the comforting glow of the pew-pew maps in the security operations center.


As CISO Scrooge returned to his RSS feeds his office lights dimmed and a message popped up on his laptop, accompanied by a disturbing “clank” noise (very disturbing indeed since he had the volume completely muted). No matter how many times he dismissed the popup it returned, clanking all the louder. He finally relented and read the message: “Scrooge, it is required of every CISO that the defender spirit within them should stand firm with resolve in the face of their adversaries. Your spirit is weary and your minions are discouraged. If this continues, all your security plans will be for naught and attackers will run rampant through your defenses. All will be lost.”


Scrooge barely finished uttering, “Hrmph. Nothing but a resourceful security vendor with a crafty marketing message. My ad blocker must be misconfigured and that bulb must have burned out.”


“I AM NO MISCONFIGURATION!” appeared in the message stream, followed by, “Today, you will be visited by three cyber-spirits. Expect their arrivals on the top of each hour. This is your only chance to escape your fate.” Then, the popup disappeared and the office lighting returned to normal. Scrooge went back to his briefing and tried to put the whole thing out of his mind.


The Ghost of HaXmas Past

CISO Scrooge had long finished sifting through news and had moved on to reviewing the first draft of their PCI DSS ROC[i]. His eyes grew heavy as he combed through the tome until he was startled by a bright green light and the appearance of a slender man in a tan plaid 1970’s business suit holding an IBM 3270 keyboard.


“Are you the cyber-spirit, sir, whose coming was foretold to me?”, asked Scrooge.


“I am!”, replied the spirit. “I am the Ghost of Haxmas Past! Come, walk with me!”


As Scrooge stood up they were seemingly transported to a room full of mainframe computers with workers staring drone-like into green-screen terminals.


“Now, this was security, spirit!” exclaimed Scrooge. “No internet…No modems…Granular RACF[ii] access control…” (Scrooge was beaming almost as bright as the spirit!)


“So you had been successful securing your data from attackers?”, asked the spirit.


“Well, yes, but this is when we had control! We had the power to give or deny anyone access to critical resources with a mere series of arcane commands.” As soon as he said this, CISO Scrooge noticed the spirit moving away and motioning him to follow. When he caught up, the scene changed to cubicle-lined floor filled with desktop PCs.


“What about now, were these systems secure?”, inquired the spirit.


“Well, yes. It wasn’t as easy as it was with the mainframe, but as our business tools changed and we started interacting with clients and suppliers on the internet we found solutions that helped us protect our systems and networks and give us visibility into the new attacks that were emerging.”, remarked CISO Scrooge. “It wasn’t easy. In fact, it was much harder than the mainframe, but the business was thriving: growing, diversifying and moving into new markets. If we had stayed in a mainframe mindset we’d have gone out of business.”


The spirit replied, “So, as the business evolved, so did the security challenges, but you had resources to protect your data?”


“Well, yes. But, these were just PCs. No laptops or mobile phones. We still had control!”, noted Scrooge.


“That may be,” noted the spirit, “but if we continued our journey, would this not be the pattern? Technology and business practices change, but there have always been solutions to security problems coming at the same pace?” CISO Scrooge had to admit that as he looked back in his mind, there had always been ways to identify and mitigate threats as they emerged. They may not have always been 100% successful, but the benefits of the “new” to the business were far more substantial than the possible issues that came with it.


The Ghost of Haxmas Present

As CISO Scrooge pondered the spirit’s words he realized he was back at his desk, his screen having locked due to the required inactivity timeout. He gruffed a bit (he couldn’t understand the 15-minute timeout when you’re at your desk any more than you can) and fumbled 3 attempts at his overly-complex password to unlock the screen before he was logged back in. His PCI DSS ROC was minimized and his browser was on a MeTube video (despite the site being blocked on the proxy server). He knew he had no choice but to click “play”. As he did, it seemed to be a live video of the Mooncents coffee shop down the street buzzing with activity. He was seamlessly transported from remote viewer to being right in the shop, next to a young woman in bespoke, authentic, urban attire, sipping a double ristretto venti half-soy nonfat decaf organic chocolate brownie iced vanilla double-shot gingerbread frappuccino. Amongst the patrons were people on laptops, tablets and phones, many of them conducting business for CISO’s company.


“Dude. I am the spirit of Haxmas Present”, she said, softly, as her gaze fixated upon a shadowy figure in the corner. CISO Scrooge turned his own gaze in that direction and noticed a hoodie-clad figure with a sticker-laden laptop. Next to the laptop was a device that looked like a wireless access point and Scrooge could just barely hear the figure chuckling to himself as his fingers danced across the keyboard.


“Is that person doing what I think he’s doing?”, Scrooge asked the spirit.


“Indeed,” she replied. “He’s set up a fake Mooncents access point and is intercepting all the comms of everyone connected to it.”


Scrooge’s eyes got wide as he exclaimed “This is what I mean! These people are just like sheep being led to the shearer. They have no idea what’s happening to them! It’s too easy for attackers to do whatever they want!” As he paused for a breath, the spirit gestured to a woman who had just sat down in the corner and opened her laptop, prompting Scrooge to go look at her screen. The woman did work at CISO’s company and she was in Mooncents on her company device, but — much to the surprise of Scrooge — as soon as she entered her credentials, she immediately fired up the VPN Scrooge’s team had set up, ensuring that her communications would not be spied upon. The woman never once left her laptop alone and seemed to be very aware of what she needed to do to stay as safe as possible.


“Do you see what is happening?”, asked the spirit. “Where and how people work today are not as fixed as they were in the past. You have evolved your corporate defenses to the point that attackers need to go to lengths like this or trick users through phishing to get what they desire.”


“Technology I can secure. But how do I secure people?!”, sighed Scrooge.


“Did not this woman do what she needed to keep her and your company’s data safe?”, asked the spirit.


“Well, yes. But it’s so much more work!”, noted Scrooge. “I can’t install security on users, I have to make them aware of the threats and then make it as easy as possible for them to work securely no matter where they are!”[iii]


As soon as he said this, he realized that this was just the next stage in the evolution of the defenses he and his team had been putting into place. The business-growing power inherent in this new mobility and the solid capabilities of his existing defenses forced attackers to behave differently and he understood that he and his team probably needed to as well.


The spirit gave a wry, ironic grin at seeing Scrooge’s internal revelation. She handed him an infographic titled “Ignorance & Want” that showcased why it was important to make sure employees were well-informed and to also stay in tune with how users want to work and make sure his company’s IT offerings were as easy-to-use and functional as all the shiny “cloud” apps.


The Ghost of Haxmas Future

As Scrooge took hold of the infographic the world around him changed. A dark dystopian scene faded into view. Buildings were in shambles and people were moving in zombie-like fashion in the streets. A third, cloaked spirit appeared next to him and pointed towards a disheveled figure hulking over a fire in a barrel. An “eyes” emoji appeared on the OLED screen where the spirit’s face should have been. CISO Scrooge didn’t even need to move closer to see that it was a future him struggling to keep warm to survive in this horrible wasteland.


“Isn’t this a bit much?”, inquired Scrooge. The spirit shrugged and a “whatever” emoji appeared on the screen. Scrooge continued, “I think I’ve got the message. Business processes will keep evolving and moving faster and will never be tethered and isolated again. I need to stay positive and constantly evolve — relying on psychology, education as well as technology — to address the new methods attackers will be adopting. If I don’t, it’s ‘game over’.”


The spirit’s screen flashed a “thumbs up” emoji and CISO Scrooge found himself back at his desk, infographic strangely still in hand, with his Haxmas spirit fully renewed. He vowed to keep Haxmas all the year through from now on.


[i] Payment Card Industry Data Security Standard Report on Compliance


[iii] Scrooge eventually also realized he could make use of modern tools such as Insight IDR to combine security & threat event data with user behavior analysis to handle the cases where attackers do successfully breach users.

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we’re highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.



Sam the snowman taught me everything I know about reindeer [disclaimer: not actually true], so it only seemed logical that we bring him back to explain the journey of machine learning. Wait, what? You don’t see the correlation between reindeer and machine learning? Think about it, that movie had everything: Yukon Cornelius, the Bumble, and of course, Rudolph himself. And thus, in sticking with the theme of HaXmas 2016, this post is all about the gifts of early SIEM technology, “big data”, and a scientific process.


SIEM and statistical models – Rudolph doesn’t get to play the reindeer games


Just as Rudolph had conformist Donner’s gift of a fake black nose promising to cover his glowing monstrosity [and it truly was impressive to craft this perfect deception technology with hooves], information security had the gift of early SIEM technology promising to analyze every event against known bad activity to spot malware. The banking industry had just seen significant innovation in the use of statistical analysis [a sizable portion of what we now call “analytics”] for understanding the normal in both online banking and payment card activities. Tracking new actions and comparing them to what is typical for the individual takes a great deal of computing power and early returns in replicating fraud prevention’s success were not good.


SIEM had a great deal working against it when everyone suddenly expected a solution designed solely for log centralization to easily start empowering more complex pattern recognition and anomaly detection. After having witnessed, as consumers, the fraud alerts that can come from anomaly detection, executives started expecting the same from their team of SIEM analysts. Except, there were problems:

  • the events within an organization vary a great deal more than the login, transfer, and purchase activities of the banking world,
  • the fraud detection technology was solely dedicated to monitoring events in other, larger systems, and
  • SIEM couldn’t handle both data aggregation and analyzing hundreds of different types of events against established norms.

After all, my favorite lesson from data scientists is that “counting is hard”. Keeping track of the occurrence of every type of event for every individual takes a lot of computing power and understanding of each type of event. When teams attempted to define alerts for transfer size thresholds, port usage, and time-of-day logins, few anticipated that services like Skype using unpredictable ports, or the most privileged users regularly logging in late to resolve issues, would cause a bevy of false positives. This forced most incident response teams to banish advanced statistical analysis to the island of misfit toys, like an elf who wants to be a dentist.


“Big Data” – Yukon Cornelius rescues machine learning from the Island of Misfit Toys

There is probably no better support-group friend for the bizarre hero, Yukon Cornelius, than “big data” technologies. Just as NoSQL databases, like Mongo, and map-reduce technologies, like Hadoop, were marketed as the solution to every conceivable challenge, Yukon proudly announced his heroism to the world. Yukon carried a gun he never used, even when fighting the Bumble, and “big data” technology is similarly varied: each kind needs to be weighed against less expensive options for each problem. When jumping into a solution technology-first, most teams attempting to harness “big data” technology came away with new hardware clusters and mixed results; insufficient data and security experts miscast as data experts still prevent returns on machine learning from happening today.


However, with the right tools and the right training, data scientists and software engineers have used “big data” to rescue machine learning [or statistical analysis, for the old school among you] from its unfit-for-here status. Thanks to “big data”, all of the original goals of statistical analysis in SIEMs are now achievable. This may have led to hundreds of security software companies marketing themselves as “the machine learning silver bullet”, but you just have to decide when to use the gun and when to use the pick axe. If you can cut through the hype to decide when the analytics are right and for which problems machine learning is valuable, you can be a reason that both Hermey, the dentist, and Rudolph, the HaXmas hero, reach their goal.


Visibility - only Santa Claus could get it from a glowing red nose


But just as Rudolph’s red nose didn’t make Santa’s sleigh fly faster, machine learning is not a magic wand you wave at your SIEM to make eggnog pour out. That extra foggy Christmas Eve couldn’t have been foggy all over the globe [or it was more like the ridiculous Day After Tomorrow], but Santa knows how to defy physics to reach the entire planet in a single night, so we can give him the benefit of the doubt and assume he knew when and where he needed a glowing red ball to shine brighter than the world’s best LED headlights. I know that I’ve held a pretty powerful Maglite in the fog and still couldn’t see a thing, so I wouldn’t be able to get around Mt. Washington armed with a glowing reindeer nose.


Similarly, you can’t just hand a machine learning toolkit to any security professional and expect them to start finding the patterns they should care about across those hundreds of data types mentioned above. It takes the right tools, an understanding of the data science process, and enough security domain expertise to apply machine learning to the attacker behaviors well hidden within our chaotically normal environments. Basic anomaly detection and the baselining of users and assets against their peers should be embedded in modern SIEM and EDR solutions to reveal important context and unusual behavior. It’s the more focused problems and data sets that demand the kind of pattern recognition within the characteristics of a website or PHP file only the deliberate development of machine learning algorithms can properly address.
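The per-user baselining mentioned above can be sketched with nothing more than a running mean and standard deviation. The following toy z-score check is illustrative only (thresholds, field choices, and data are made up, not any product’s actual model):

```python
import statistics

def zscore_anomaly(history, new_value, threshold=3.0):
    """Flag new_value if it sits more than `threshold` standard
    deviations from this user's own historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero on flat history
    z = (new_value - mean) / stdev
    return abs(z) > threshold

# A user who usually transfers ~10 MB per day suddenly moves 500 MB:
daily_mb = [8, 12, 9, 11, 10, 13, 9]
print(zscore_anomaly(daily_mb, 500))  # True: far outside the baseline
print(zscore_anomaly(daily_mb, 12))   # False: within normal variation
```

Even a sketch like this hints at why “counting is hard”: the baseline must be tracked per user, per event type, which is exactly the computing burden early SIEMs could not carry.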

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we’re highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.


As we close out the year, I find it important to reflect on the IoT vulnerability research conducted during 2016 and what we learned from it. There were several exciting IoT vulnerability research projects conducted by Rapid7 employees in 2016, covering everything from lighting automation solutions to medical devices. Through this research, we want to give the "gift" of more information about IoT security to the community. In the spirit of celebrating the holidays, let's recap and celebrate each of these projects and some of the more interesting findings.


Comcast XFINITY Home Security System



2016 opened with a research project on the Comcast XFINITY Home Security System, published in January 2016. Phil Bosco, a member of Rapid7’s Global Services pen test team, targeted his XFINITY home security system for evaluation. During testing, Phil discovered that by creating a failure condition in the 2.4 GHz radio frequency band used by the Zigbee communication protocol, the Comcast XFINITY Home Security System would fail open, with the base station failing to recognize or alert on a communications failure with the component sensors. In other words, if communication to the system's sensors was lost, the system failed to recognize the lost communication, and it also failed to alarm when a sensor detected a condition such as an open door or window. This vulnerability allowed anyone capable of interrupting the 2.4 GHz Zigbee communication between sensor and base station to effectively silence the system. Comcast has since fixed this issue.


Osram Sylvania Lightify Automated Lighting


Since automated lighting has become very popular, I decided to examine the OSRAM Sylvania LIGHTIFY automated lighting solution. This research project looked at both the Home and Pro (enterprise) versions, and it revealed a total of nine issues: four in the Home version and five in the Pro. The Pro version had the most interesting results, including persistent Cross-Site Scripting (XSS) and weak default WPA2 pre-shared keys (PSKs). The XSS vulnerabilities we found had two entry points, the most entertaining being an out-of-band injection that used the WiFi Service Set Identifier (SSID) to deliver the XSS attack into the Pro web management interface. A good explanation of this type of attack delivery method is given in a Whiteboard Wednesday video I recorded. The finding I consider most worrisome is the WPA PSK issue. Default passwords have been a scourge of IoT recently. Although in this case the default passwords are different across every Pro device produced, closer examination of the WPA PSKs revealed they were easily cracked. So how did this happen? The PSK was only eight characters long, which is considered very short for a PSK, and it used only lowercase hexadecimal characters (abcdef0123456789). That makes the number of combinations, or key space, much smaller and far easier to brute force, allowing a malicious actor to capture an authentication handshake and crack it offline in only a few hours.
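A bit of arithmetic shows why that key space is so weak. The guess rate below is an illustrative assumption for commodity cracking hardware, not a measured figure:

```python
# 8 characters drawn from lowercase hex (abcdef0123456789) = 16 symbols.
ALPHABET_SIZE = 16
PSK_LENGTH = 8

keyspace = ALPHABET_SIZE ** PSK_LENGTH  # total candidate keys
print(keyspace)  # 4294967296 -- only ~4.3 billion possibilities

# Assuming an illustrative offline rate of 200,000 PSK guesses per second,
# exhausting the entire key space takes:
guesses_per_second = 200_000
hours = keyspace / guesses_per_second / 3600
print(round(hours, 1))  # 6.0 hours, worst case
```

Compare that with a full 8-character printable-ASCII PSK (95 symbols), whose key space is roughly 1.5 million times larger.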


Bluetooth Low Energy (BLE) Trackers



Ever get curious about those little Bluetooth Low Energy (BLE) tracker dongles you can hang on your key chain to help locate your keys if you misplace them? So did I, but my interest went a little further than finding my lost keys. I was interested in how they worked and what, if any, security issues could be associated with their use or misuse. I purchased several different brands and started testing their ecosystems. Yes, ecosystems: all of the services that make an IoT solution function, which often include the hardware, the mobile applications, and the cloud APIs. One of the most fascinating aspects of these devices is the crowd GPS concept. How does that work? Let’s say you attach one of the devices to your bicycle and it gets stolen. Every time that bicycle passes within close proximity to another user of that specific product, their cell phone will detect the dongle on the bicycle and send the GPS location to the cloud, allowing you to identify its location. Kind of neat, and I expect it works well in an area with a good saturation of users, but if you live in a rural area there's less chance of it working as well. During this project we identified several interesting vulnerabilities related to the tracking identifiers and GPS tracking. For example, we found that the devices' tracking IDs were easily identified and, in a couple of cases, were directly related to the BLE physical address. Combining that with some cloud API vulnerabilities, we were able to track a user using the GPS of their device. In a couple of cases we were also able to poison the GPS data for other devices. With weak BLE pairing, we were additionally able to gain access to a couple of the devices, rename them, and set off their location alarms, which drained the devices' small batteries.


Animas OneTouch Ping Insulin Pump



Jay Radcliffe, a member of Rapid7’s Global Services team and a security researcher, found several fascinating vulnerabilities while testing the Animas OneTouch Ping insulin pump. Jay is renowned for his medical device research, which has a personal aspect to it, as he is diabetic. In the case of the Animas OneTouch, Jay found and reported three vulnerabilities: cleartext communication, weak pairing between remote and pump, and a replay attack vulnerability. During this research project it was determined that these vulnerabilities could potentially be used to dispense insulin remotely, which could impact the safety and health of the user. Jay worked closely with the manufacturer to help create effective mitigations for these vulnerabilities, which can be used to reduce or eliminate the risk. Throughout the project there was positive collaboration between Jay, Rapid7 and Johnson & Johnson, and patients were notified prior to disclosure.




Stepping back and taking a holistic look at all of the vulnerabilities found during these research projects, we can notice a pattern of common issues including:


  • Lack of communication encryption
  • Poor device pairing
  • Confidential data (such as passwords) stored within mobile applications
  • Vulnerability to replay attacks
  • Weak default passwords
  • Cloud API and device management web services vulnerable to common web vulnerabilities


These findings are not a surprise; they are issues we commonly encounter when examining an IoT product's ecosystem. What is the solution, then? First, IoT manufacturers can apply some basic testing across these areas to quickly identify and fix vulnerabilities before products go to market. Second, we as end users can change default passwords, such as the standard login password and the WiFi WPA PSK, to protect our devices from many forms of compromise.


It is also important to note that these IoT research projects are just a few examples of the dedication Rapid7 and its employees have to securing the world of IoT. These research projects allow us to continually expand our knowledge of IoT security and vulnerabilities. By working closely with vendors to identify and mitigate issues, we can continue to help those vendors expand their working knowledge of security, which will flow into future products. Our work also allows us to share this knowledge with consumers so they can make better choices and mitigate common risks that are often found within IoT products.

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we’re highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.


“May you have all the data you need to answer your questions – and may half of the values be corrupted!”

- Ancient Yiddish curse


This year, Christmas (and therefore Haxmas) overlaps with the Jewish festival of Chanukah. The festival commemorates the recapture of the Second Temple. As part of the resulting cleaning and sanctification process, candles were required to burn continuously for seven days – but there was only enough oil for one. Thanks to the intervention of God, that one day of oil burned for eight days, and the resulting eight-day festival was proclaimed.


Unfortunately, despite God getting involved in everything from the edibility of owls (it’s in Deuteronomy, look it up) to insufficient stocks of oil, there’s no record of divine intervention to solve for insufficient usable data, and that’s what we’re here to talk about. So pull up a chair, grab yourself a plate of latkes and let’s talk about how you can make data-driven security solutions easier for everyone.


Data-Driven Security


As security problems have grown more complex and widespread, people have attempted to use the growing field of data science to diagnose issues, both on the macro level (industry-wide trends and patterns found in the DBIR) and the micro (responding to individual breaches). The result is Data-Driven Security, covered expertly by our Chief Data Scientist, Bob Rudis, in the book of the same name.


Here at Rapid7, our Data Science team has been working on everything from systems to detect web shells before they run (see our blog post about webshells) to internal projects that improve our customers’ experience. As a result, we’ve seen a lot of data sources from a lot of places, and have some thoughts on what you can do to make data scientists’ problem-solving easier before you even know you have an issue.


Make sure data is available


Chanukah has just kicked off and you want two things: to eat fritters and apply data science to a bug, breach or feature request. Luckily you’ve got a lot of fritters and a lot of data – but how much data do you actually have?


People tend to assume that data science is all about the number of observations. If you’ve got a lot of them, you can do a lot; only a few and you can only do a little. Broadly speaking, that’s true, but the span of time data covers and the format it takes are also vitally important. Seasonal effects are a well-studied phenomenon in human behavior and, by extension, in data (which one way or another, tends to relate to how humans behave). What people do and how much they do it shifts between seasons, between months, even between days of the week. This means that the length of time your data covers can make the difference between a robust answer and an uncertain one – if you’ve only got a Chanukah’s worth, we can draw patterns in broad strokes but we can’t eliminate the impact seasonal changes might have.


The problem with this is that storing a lot of data over a long period of time is hard, potentially expensive and a risk in and of itself – it’s not great for user or customer privacy, and in the event of a breach it’s another juicy thing the attacker can carry off. As a result, people tend to aggregate their raw data, which is fine if you know the questions you’re going to want answering.


If you don’t, though, the same thing that protects aggregated data from malicious attackers will stymie data scientists: it’s very hard, if you’re doing it right, to reverse-engineer aggregation, and so researchers are limited to whatever fields or formats you thought were useful at the time, which may not be the ones they actually need.


One solution to both problems is to keep data over a long span of time, in its raw format, but sample: keep 1 random row out of 1,000, or 1 out of 10,000, or an even higher ratio. That way data scientists can still work with it and avoid seasonality problems, but it becomes a lot harder for attackers to reconstruct the behavior of individual users. It’s still not perfect, but it’s a nice middle ground.
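A minimal sketch of that sampling idea, assuming raw log lines arrive as text. Hashing each line into a bucket (rather than calling a random number generator) is one possible design: it keeps the sample stable if ingestion is ever re-run:

```python
import hashlib

def keep_row(raw_line, rate=1000):
    """Deterministically keep ~1 row in `rate`: hash the line and keep it
    only when the hash lands in one bucket. Unlike random sampling, the
    same input always yields the same sample."""
    digest = hashlib.sha256(raw_line.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % rate == 0

# Sample a synthetic stream of 100,000 raw log lines at ~1-in-1,000.
sampled = [line for line in (f"event-{i}" for i in range(100_000))
           if keep_row(line)]
print(len(sampled))  # on the order of 100 lines survive
```

Note that hashing whole lines still preserves per-line privacy properties only as well as the sampling ratio does; for user-keyed sampling you would hash a user identifier instead, which changes the privacy trade-off.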


Make sure data is clean


It’s the fourth day of Chanukah, you’ve implemented that nice sampled data store, and you even managed to clean up the sufganiyot the dog pulled off the table and joyously trod into the carpet at its excitement to get human food. You’re ready to get to work, you call the data scientists in, and they watch as this elegant sampled datastore collapses into a pile of mud because 3 months ago someone put a tab in a field that shouldn’t have a tab and now everything has to be manually reconstructed.


If you want data to be reliable, it has to be complete and it has to be clean. By complete we mean that if a particular field only has a meaningful value 1/3rd of the time, for whatever reason, it’s going to be difficult to rely on it (particularly in a machine learning context, say). By clean, we mean that there shouldn’t be unexpected values, particularly the sort of unexpected value that breaks data formatting or reading.


In both cases the answer is data validity checks. Just as engineers have tests – tasks that run every so often to ensure changes haven’t unexpectedly broken code – data storage systems and their associated code need validity checks, which run against a new row every so often and make sure that they have all their values, they’re properly formatted, and those values are about what they should be.
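A validity check of this kind can be as simple as a function run against each newly ingested row. The field names and timestamp format below are illustrative, not a standard:

```python
from datetime import datetime

EXPECTED_FIELDS = ("user_id", "event_type", "timestamp")

def validate_row(row):
    """Return a list of problems found in one ingested row (a dict)."""
    problems = []
    # completeness: every expected field has a meaningful value
    for field in EXPECTED_FIELDS:
        if field not in row or row[field] in ("", None):
            problems.append(f"missing value: {field}")
    # cleanliness: no stray tabs/newlines to break downstream parsers
    for field, value in row.items():
        if isinstance(value, str) and ("\t" in value or "\n" in value):
            problems.append(f"control character in: {field}")
    # the timestamp must parse in the documented format
    try:
        datetime.strptime(row.get("timestamp", ""), "%Y-%m-%d %H:%M:%S")
    except ValueError:
        problems.append("bad timestamp format")
    return problems

good = {"user_id": "u1", "event_type": "login",
        "timestamp": "2016-12-10 13:42:45"}
bad = {"user_id": "u2", "event_type": "log\tin", "timestamp": "yesterday"}
print(validate_row(good))  # []
print(validate_row(bad))   # two problems reported
```

Run against a sample of new rows on a schedule, a check like this catches the stray tab three months before it collapses an analysis.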


Make sure data is documented


It’s the last day of Chanukah; you’ve got a sampled data store with decent data, the dreidel has rolled under the couch and you can’t get it out, and you just really, really want to smush your problem and your data together into a solution. The data scientists read in the data, nothing breaks this time… and they are promptly stumped by columns with names like “Date Of Mandatory Optional Delivery Return (DO NOT DELETE, IMPORTANT)” or simply “f”.


Every time you build a new data store and get those validity checks set up, you should also be setting up documentation. Where it lives will vary from company to company, but it should exist somewhere and set out what each field means (“the date/time the request was sent”), an example of what sort of value it contains (“2016-12-10 13:42:45”) and any caveats (“The collectors were misconfigured from 2016-12-03 to 2016-12-04, so any timestamps then are one hour off”). That way, data scientists can hit the ground running, rather than spending half their time working out what the data even means.
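Such a data dictionary can even live next to the ingestion code as a structured object, which makes it hard to forget. The sketch below is entirely hypothetical; the field names, meanings, and caveats are invented for illustration:

```python
# A minimal, illustrative data dictionary kept alongside the data store.
DATA_DICTIONARY = {
    "request_ts": {
        "meaning": "date/time the request was sent (UTC)",
        "example": "2016-12-10 13:42:45",
        "caveats": ("collectors were misconfigured 2016-12-03 to "
                    "2016-12-04; timestamps in that window are one hour off"),
    },
    "f": {
        "meaning": "count of failed login attempts in the session",
        "example": "3",
        "caveats": "legacy name; avoid single-letter names in new fields",
    },
}

def describe(field):
    """Give a data scientist a running start on an unfamiliar column."""
    entry = DATA_DICTIONARY.get(field)
    if entry is None:
        return f"{field}: UNDOCUMENTED -- go find the owner"
    return f"{field}: {entry['meaning']} (e.g. {entry['example']})"

print(describe("request_ts"))
print(describe("g"))
```

Whether it lives in code, a wiki, or a schema registry matters less than it existing at all and being updated with every validity check you add.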


So, as you prepare for Chanukah and 2017, you should be preparing for data science, too. Make sure your data is available (and respectfully collected), make sure your data is clean, and make sure your data is documented. Then you can sit back and eat latkes in peace.


The Twelve Pains of Infosec

Posted by Kirk Hayes Employee Dec 24, 2016

One of my favorite Christmas carols is the 12 Days of Christmas. Back in the ’90s, a satire of the song came out in the form of the 12 Pains of Christmas, which had me rolling on the floor in laughter, and still does. Now that I am in information security, I decided it is time for a new satire (maybe this will start a new tradition), so I present the 12 Pains of Infosec.

The first thing in infosec that's such a pain to me is your password policy


The second thing in infosec that's such a pain to me is default credentials, and your password policy


The third thing in infosec that's such a pain to me is falling for phishing, default credentials, and your password policy


The fourth thing in infosec that's such a pain to me is patch management, falling for phishing, default credentials, and your password policy


The fifth thing in infosec that's such a pain to me is Windows XP, patch management, falling for phishing, default credentials, and your password policy


The sixth thing in infosec that's such a pain to me is lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy


The seventh thing in infosec that's such a pain to me is no monitoring, lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy


The eighth thing in infosec that's such a pain to me is users as local admins, no monitoring, lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy


The ninth thing in infosec that's such a pain to me is lack of management support, users as local admins, no monitoring, lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy


The tenth thing in infosec that's such a pain to me is testing for compliance, lack of management support, users as local admins, no monitoring, lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy


The eleventh thing in infosec that's such a pain to me is no asset management, testing for compliance, lack of management support, users as local admins, no monitoring, lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy


The twelfth thing in infosec that's such a pain to me is trust in antivirus, no asset management, testing for compliance, lack of management support, users as local admins, no monitoring, lack of input filtering, Windows XP, patch management, falling for phishing, default credentials, and your password policy



The first thing in infosec that's such a pain to me is your password policy

When I go into organizations for penetration tests, one of the easiest ways to get in is through password guessing. Most organizations use a password policy of 8 characters, complexity turned on, and a change every 90 days. So, what do the users do? They come up with a simple-to-remember password that will never repeat. Yes, I am talking about the infamous Winter16 and its variations of season and year. If they aren't using that password, then chances are it is something just as simple. Instead, set a longer password requirement (12 characters or more) and blacklist common words, such as "password", seasons, months, and the company name.
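Such a policy is simple to enforce at password-change time. This is a toy check; the word list and the company name ("acmecorp") are placeholders, not a complete blacklist:

```python
# Illustrative policy check: minimum length plus a blacklist of
# predictable words (seasons, "password", the company name).
BLACKLISTED_WORDS = {"password", "winter", "spring", "summer", "autumn",
                     "fall", "acmecorp"}

def password_acceptable(candidate, min_length=12):
    """Reject short passwords and any containing a blacklisted word,
    even with the usual digits bolted on (e.g. "Winter16")."""
    if len(candidate) < min_length:
        return False
    lowered = candidate.lower()
    return not any(word in lowered for word in BLACKLISTED_WORDS)

print(password_acceptable("Winter16"))              # False: too short, seasonal
print(password_acceptable("Winter2016!Padding"))    # False: contains "winter"
print(password_acceptable("correct horse staple"))  # True: long, no banned words
```

A substring match deliberately catches the "Winter16" → "Winter2016!" evolution that simple equality checks miss; month names and keyboard walks ("qwerty") are obvious additions.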


The second thing in infosec that's such a pain to me is default credentials

The next most common finding I see is the failure to change default credentials. It is such a simple mistake, but one that can cost your organization a ton! This doesn’t just go for web apps like JBOSS and Tomcat, but also for embedded devices. Printers and other embedded devices are a great target since the default credentials aren’t usually changed. They often hold credentials to other systems to help gain access or simply can be used as a pivot point to attack other systems.


The third thing in infosec that's such a pain to me is falling for phishing

Malicious actors go for the weakest link, and often that is the users. Sending a carefully crafted phishing email is almost 100% successful; in fact, even many security professionals fall victim to phishing emails. So, what can we do about it? We must train our employees regularly to spot phishing attempts, and encourage and reward them for alerting security to phishing attempts. Once one is reported, add the malicious URL to your blacklist and redirect it to a phishing education page. And for goodness' sake, Security Department, please disable the links and remove any tags in the email before forwarding it off as "education".


The fourth thing in infosec that's such a pain to me is patch management

There are so many systems out there that it can be hard to patch them all, but having a good patch management process is essential. Ensuring our systems are up to date with the latest patches will help protect them from known attacks. And it is not just operating system patches that need to be applied, but patches for any software you have installed as well.


The fifth thing in infosec that's such a pain to me is Windows XP

Oh Windows XP, how I love and hate thee. Even though Windows XP support went the way of the dodo back in 2014, over two and a half years later I still see it being used in corporate environments. While I called out Windows XP, it is not uncommon to see Windows 2000, Windows Server 2003, and other unsupported operating systems. While some of the issues with these operating systems have been mitigated, such as MS08-067, many places have not applied the patches or taken the mitigation steps. That is not to mention the unknown security vulnerabilities that exist and will never be patched. Your best bet is to upgrade to a supported operating system. If you cannot for some reason (required software will not run on newer operating systems), segregate the network to prevent access to the unsupported systems.


The sixth thing in infosec that's such a pain to me is lack of input filtering

We all know and love the OWASP Top 10, and three of its categories are included in this pain. Cross-Site Scripting (XSS), SQL Injection (SQLi), HTML Injection, Command Injection, and HTML redirects are all vulnerabilities that can be solved fully, or at least partially in the case of XSS, with input filtering. Web applications that perform input filtering remove bad characters, allow only known-good characters, and perform the filtering on the server side, not the client side. Adding output encoding/filtering remediates XSS as well.
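The classic illustration of why this matters is SQL injection. This sketch uses an in-memory SQLite database to contrast string concatenation with a bound parameter, which the database treats as data rather than SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "x' OR '1'='1"

# Vulnerable: concatenation lets the payload rewrite the query logic.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(vulnerable)  # [('alice',)] -- the OR clause matched every row

# Safe: a bound parameter is never interpreted as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)  # [] -- no user is literally named "x' OR '1'='1"
```

Parameterized queries fix SQLi outright; for XSS, the analogous server-side step is output encoding on top of input filtering.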


The seventh thing in infosec that's such a pain to me is no monitoring

In 1974, Muhammad Ali said “His hands can't hit what his eyes can't see,” referring to his upcoming fight with George Foreman. The quote holds true in infosec as well: you cannot defend what you cannot see. If you are not performing monitoring in your network, with centralized monitoring so you can correlate the logs, you will miss attacks. As Dr. Eric Cole says, “Prevention is ideal, but detection is critical.” It is also critical to REVIEW the logs, meaning you will need good people who know what they are looking for, not just good monitoring software.


The eighth thing in infosec that's such a pain to me is users as local admins

For years we have been suggesting segregating user privileges, yet on almost every penetration test I perform I find this to be an issue. Limiting accounts to only what is needed to do the job is very hard, but essential to securing your environment. This means not giving local administrator privileges to all users, but also using separate accounts for managing the domain, limiting the number of privileged accounts, and monitoring the use of these accounts and group memberships.


The ninth thing in infosec that's such a pain to me is lack of management support

How often do I run into people who want to make a change or changes in their network, yet they do not get the support needed from upper management? A LOT! Sometimes an eye-opening penetration test works wonders.


The tenth thing in infosec that's such a pain to me is testing for compliance

I get it: certain industries require specific guidelines to show adequate security is in place, but when you test only for compliance's sake, you are doing a disservice to your organization. When you attempt to limit the scope of the testing or firewall off the systems during the test, you are pulling a blinder over your eyes and cannot hope to secure your data. Instead, use the need for testing to meet compliance as a way to get more management support and enact real change in the organization.


The eleventh thing in infosec that's such a pain to me is no asset management

You can’t protect what you don’t know about. In this regard, employ an asset management system. Know what devices you have and where they are located. Know what software you have, and what patch levels they are at. I can’t tell you how many times I have exploited a system and my customer says “What is that? I don’t think that is ours”, only to find out that it was their system, they just didn’t have good asset management in place.


The twelfth thing in infosec that's such a pain to me is trust in antivirus

A few years ago, I read that antivirus software was only about 30% effective, leading to headlines about the death of antivirus, yet it still is around. Does that mean it is effective in stopping infections on your computer? No. I am often asked “What is the best antivirus I should get for my computer?” My answer is usually to grab a free antivirus like Microsoft Security Essentials, but be careful where you surf on the internet and what you click on. Antivirus will catch the known threats, so I believe it still has some merit, especially on home computers for relatives who do not know better, but the best protection is being careful. Turn on “click to play” for Flash and Java (if you can’t remove Java). Be careful what you click on. Turn on an ad blocker. There is no single “silver bullet” in security that is going to save you. It is a layering of technologies and awareness that will.


I hope you enjoyed the song, and who knows, maybe someone will record a video singing it! (not me!) Whatever holiday you celebrate this season, have a great one. Here’s to a more secure 2017 so I don’t have to write a new song next year. Maybe “I’m dreaming of a secure IoT” would be appropriate.
