
Why modern SIEM security solutions can save you from data and cost headaches.


If you want to reliably detect attacks across your organization, you need to see all of the activity happening on your network. More importantly, that activity needs to be filtered and prioritized by risk, across assets and users, to help you report on how the team is measurably chipping away at Risk Mountain™.


Today, the only class of solution capable of flexibly ingesting, correlating, and visualizing data from a sprawling tool stack is the SIEM. SIEMs don’t get a lot of love; some might say their deployments felt like data lake glaciers, where budget dollars flowed in, never to leave.





Advances in SIEM tools are converging with customer pain, as organizations look to cut their losses on stagnant deployments and try a new approach. In this post, let’s cover four misconceptions that no longer hold for today’s nimble, adaptive SIEMs.


Misconception #1: SIEMs are complex, unwieldy tools that take months to deploy, and a large dedicated staff to keep running.

Reality: Cloud architecture makes SIEM deployment quicker and maintenance easier than ever before.


More SIEM security tools today offer cloud deployment as an option, so a large initial hardware investment is no longer necessary. In addition, SIEM providers now ship pre-built analytics in their solutions, so security teams don’t need to spend recurring hours setting up and refining detection rules as analysts comb through more and more data.


The simpler setup of SIEMs running in the cloud, combined with pre-built analytics, means that an organization can get started with SIEM security technology in just a few days instead of months, and that they won't have to continually add staff to keep the SIEM up and running effectively.


When choosing a SIEM, define the use cases you'd like the deployment to tackle and consider a Proof of Concept (POC) before making a purchase; you'll have better expectations for success and see how quickly it can identify threats and risk.


Misconception #2: As SIEMs ingest more data, data processing costs become exorbitant.

Reality: Not all SIEMs come with burdensome cost as deployment size increases.


Traditional SIEM pricing models charge by the quantity of data processed or indexed, but this penalizes exactly the wrong behavior. SIEMs become more effective at detecting attacks as more data sources are added over time, especially sources that can reveal attacker behaviors.


As a result, any pricing model that discourages you from adding data sources could hamstring your SIEM’s efficacy. Work with your SIEM vendor to determine what data sets you need today and may need in the future, so you can scale effectively without getting burned.


Misconception #3: SIEMs aren’t great at detection. They should primarily be used once you know where to look.

Reality: SIEMs with modern analytics can be extremely effective at detecting real-world attack behaviors in today’s complex environments.


Related to misconception #2 above: if you don't ingest data from as many sources as possible, such as endpoints, networks, and cloud services, you are limiting your SIEM’s ability to detect anomalies and attacks in your environment.


In fact, many attacker traces can only be found in the comprehensive data sets fed into a SIEM. Two examples are detecting the use of stolen passwords and lateral movement, extremely common behaviors once an attacker has access to the network. At Rapid7, we detect these by first linking together IP Address > Asset > User data, then using graph mining and entity relationship modeling to track what is “normal” in each environment. Outside of SIEMs and User Behavior Analytics (UBA) solutions, this is incredibly hard to detect.
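The IP Address > Asset > User linkage can be sketched as two lookups chained together. This is a toy illustration with made-up data sources and field names, not InsightIDR's actual attribution pipeline:

```python
# Sketch of user attribution: tying a raw IP address back to a person by
# chaining DHCP lease data (ip -> asset) with authentication logs
# (asset -> user). All names and data here are hypothetical.

dhcp_leases = {"10.0.4.23": "WKSTN-114"}       # from DHCP server logs
interactive_logons = {"WKSTN-114": "jsmith"}   # from endpoint auth logs

def attribute(ip):
    """Resolve an IP seen in network data to the asset and user behind it."""
    asset = dhcp_leases.get(ip)
    user = interactive_logons.get(asset)
    return asset, user

asset, user = attribute("10.0.4.23")  # ("WKSTN-114", "jsmith")
```

Once every event carries an asset and a user instead of just an IP, behaviors like the same credential appearing on many machines (lateral movement) become visible.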


In a nutshell: SIEM security tools need that data to be effective, so if you restrict the data coming in, it won't be as effective. A SIEM with modern analytics will be capable of detecting real-world attack behaviors earlier in the attack chain.


Misconception #4: SIEMs can ingest and centralize log files and network data, but have limited coverage for cloud services and remote workers.

Reality: Today’s SIEMs can and should account for data coming in from cloud and endpoints.


Network-only data sources may be the norm for more traditional SIEMs on the market, but newer SIEMs also pull in data from endpoints and cloud services to make sure you're detecting attacker behavior no matter where it may occur. Just as the perimeter has shifted from the corporate network walls to the individual user, SIEMs have had to adapt to collect more data from everywhere these users work, namely endpoints and cloud services. Make sure any SIEM security solution you're considering can integrate these additional data sources, not just traditional log files and network data.




At Rapid7, we feel strongly that customers shouldn’t have to deal with these past pitfalls, and this mindset is expressed throughout InsightIDR, our solution for incident detection and response. On Gartner’s Peer Insights page, we’ve been recognized by customers for resetting expectations around time to value and ease of use:


“We are able to monitor many sources with a very small security team and provide our clients with the peace of mind usually only achieved with large security departments.”


“[InsightIDR]… on its own, mitigated against 75% of identified threats within our organisation, but with the simplicity of use even my granny could get to grips with.”


Want to try InsightIDR at your organization? Start with our on-demand 20-minute demo, or contact us – we want to learn about your challenges and provide you with answers.

I love math. I am even going to own up to having been a "mathlete" and looking forward to the annual UVM Math Contest in high school. I pursued a degree in engineering, so I can now more accurately say that I love applied mathematics, which has a much different goal than pure mathematics. Taking advanced developments in pure mathematics and applying them to various industries in a meaningful manner often takes years or decades. In this post, I want to provide the context necessary for math to add a great deal of value to security operations, but also explain the limitations and issues that arise when it is relied upon too heavily.


A primer on mathematics-related buzzwords in the security industry

There are always new buzzwords used to describe security solutions with the hope that they will grab your attention, but often the specific detail of what's being offered is lost or missing. Let's start with my least favorite buzzphrase:


  • Big Data Analytics - This term is widely used today, but is imprecise and means different things to different people. It is intended to mean that a system can process and analyze data at a speed and scale that would have been impossible a decade ago. But that too is vague. Given the amount of data generated by security devices today, scale of continually growing networks, and the speed with which attackers move, being able to crunch enormous amounts of data is a valuable capability for your security vendors to have, but that capability tells you very little about the value derived from it. If someone tries to sell you their product because it uses Cassandra or MongoDB or another of the dozens of NoSQL database technologies in combination with Hadoop or another map/reduce technology, your eyes should gloss over because it is more important how these technologies are being used. Dig deeper and ask "so your platform can process X terabytes in Y seconds, but how does that specifically help me improve the security of my organization?"


Next, let me explain a few of the more specific, but still oversold math-related (and data science) buzzwords:

  • Machine Learning is all about defining algorithms flexible and adaptive enough to learn from historical data and adjust to the changes in a given dataset over time. Some people prefer to call it pattern recognition because it uses clusters of like data and advanced statistical comparisons to predict what would happen if the monitored group were to continue behaving in a reasonably close manner to that previously observed. The main benefit of this field toward security is the possibility of distinguishing the signal from the noise when sifting through tons of data, whether using clustering, prediction models, or something else.
  • Baselining is a part of machine learning that is actually quite simple to explain. Given a significant sample of historical data, you can establish various baselines that show a normal level of any given activity or measurement. The value of baselining comes from detecting when a measured value deviates significantly from the established historical baseline. A simple example is credit card purchases: consider an average credit card user is found to spend between $600 and $800 per week. This is the baseline for credit card spending for this person.
  • Anomaly Detection refers to the area of machine learning that identifies the events or other measurements in a dataset which are significantly different from an established pattern. These detected events are called "outliers", like the Malcolm Gladwell book. Finding anomalous behavior on your network does not inherently mean you have found risky activity, just that these events differ from the vast majority of historically seen events in the organization's baseline. To extend the example above: if the credit card user spends $650 one week and $700 the next, that’s in line with previous spending patterns. Even spending $575 or $830 is outside the established baseline, but not much cause for concern. Detecting an anomaly would be to find that the same user spent over $4,000 in a week. That is an uncharacteristic amount to spend, and the purchases that week should probably be reviewed, but it doesn't immediately mean fraud was committed.
  • Artificial Intelligence is not exactly a mathematics term, but it sometimes gets used as a buzzphrase by security vendors as a synonym for "machine learning". Most science fiction movies focus on the potentially negative consequences of creating artificial intelligence, but the goal is to create machines that can learn, reason, and solve problems the way the awesome brains of animals can today. "Deep Blue" and "Watson" showed this field's progress for chess and quiz shows, respectively, but those technologies were applied to games with set rules and still needed significant teams to manage them. If someone uses this phrase to describe their solution, run: every other security vendor would be out of business if this advanced research could be consistently applied to motivated attackers, who play by no such set of rules when trying to steal from you.
  • Peer Group Analysis is actually as simple as choosing very similar actors (peers) that do, or are expected to, act in very similar manners, then using these groups of peers to identify when one outlier begins to exhibit behavior significantly different from its peers. Peer groups can be similar companies, similar assets, individuals with similar job titles, people with historically similar browsing patterns, or pretty much any cluster of entities with a commonality. The power of peer groups is to compare new behavior against the new behavior of similar actors rather than expecting the historical activity of a single actor to continue in perpetuity.
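The baselining and anomaly detection ideas above can be illustrated with nothing fancier than summary statistics. This is a simplified sketch using a z-score over the hypothetical weekly credit card spend from the example, not how any particular product computes risk:

```python
import statistics

# Hypothetical weekly spend history for the credit card example above:
# the user normally spends roughly $600-$800 per week.
history = [640, 720, 680, 750, 610, 790, 700, 660]

mean = statistics.mean(history)    # the baseline
stdev = statistics.stdev(history)  # typical spread around it

def is_anomalous(amount, threshold=3.0):
    """Flag spend more than `threshold` standard deviations off the baseline."""
    return abs(amount - mean) / stdev > threshold

is_anomalous(700)   # in line with the baseline: not flagged
is_anomalous(830)   # outside the $600-$800 range, but still not far enough to flag
is_anomalous(4000)  # wildly uncharacteristic: flagged for review
```

Note how $830 is technically an outlier against the raw range but survives the statistical test, mirroring the point that not every deviation deserves an alert; the threshold is exactly the kind of knob that replaces rule writing.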


Make sure the next time someone starts bombarding you with these terms that they can explain why they are using them and the results that you are going to see.


Mathematics will trigger new alerts, but you could just trade one kind of noise for another

The major benefit that user behavior analytics promises security teams today is the ability to stop relying on the rules and heuristics primarily used for detection in their IPS, SIEM, and other tools. Great! Less work for the security team to maintain and research the latest attack, right? It depends. The time you currently spend writing and editing rules in your monitoring solutions could very well be taken over by training the analytics, adjusting thresholds, tweaking the meaning of "high risk" versus "low risk", and making any number of other modifications that are not technically rule writing.


If you move from rules and heuristics to automated anomaly detection and machine learning, there is no question that you are going to see outliers and risky behaviors you previously did not. Your rules were most likely aimed at identifying patterns your team somehow knows indicate malicious activity, and anomaly detection tools are deliberately not restricted by your team's knowledge. However, leaving out that knowledge means a great deal of the outliers identified will be legitimate activity for your organization, so instead of sifting through thousands of false positives that broke a yes/no rule, you will sift through thousands of false positives on a risk scale from low to high. Here are three examples of the kinds of false positives that occur because human beings are not broadly predictable:


  1. Rare events - Certain events occur in our lives that cause significant changes in behavior, and I don't mean having children. When someone changes roles in your organization, they are most likely going to immediately look strange in comparison to their established peer group. Similarly, if your IT staff stays late to patch servers every time a major vulnerability (with graphics and a buzz-name!) is released, then some of the most critical administrators and systems in the organization are now straying from their established baselines.
  2. Periodic events - Someone taking vacation is unlikely to skew your alerting because the algorithms should be tuned to account for a week without activity, but what about annual audits for security, IT, accounting, etc.? What about the ongoing change in messaging systems and collaboration tools that constantly lead to data moving through different servers?
  3. Rare actors - There are always going to be individuals with no meaningful peer, whether it is a server accessed by nearly every user in the organization (without their knowledge), like an IIS server, or a user doing extremely unique, cutting-edge research, like basically everyone on the Rapid7 Research team. Mathematics has not reached the point where it can determine enough meaningful patterns to predict the behavior of that portion of the organization, which you still need to monitor.


Aside from dealing with a change in the noise, there is the very real risk that by relying too heavily on canned analytics to detect attacks, you can easily leave yourself open to manipulation. If I believe that your organization is using "big data analytics" as most are, I can pre-emptively start to poison the baseline for what is considered normal by triggering events on your network that set off alerts, but appear to be false positives upon closer investigation. Then, having forced this activity into some form of baseline, it can be used as an attack vector. This is the challenge that scientists always run into when observing humans: anyone that knows they are being observed can choose to act differently than they otherwise would and you won't know.


A final note on anomalies is that a great deal of them are going to be stupid behavior. That's right, I guarantee that negligence is a much more common cause of risky activity in your organization than malice, but an unsupervised machine learning approach will not know the difference.


InsightIDR blends mathematics with knowledge of attacker behavior

This post is not meant to say that applied mathematics has no place in incident detection or investigation. On the contrary, the Rapid7 Data Science team is continuously researching data samples for meaningful patterns to use in both areas. We just believe that you need to apply the science behind these buzzwords appropriately. I would summarize our approach in three ways:

  • A blend of techniques: At times, simple alerts are necessary because the activity should either never occur in an organization or occurs so rarely that the security team wants to hear about it - the best example of this is providing someone with domain administrator privileges. Incident response teams always want to know when a new king of the network has been crowned. Some events cannot be assumed good when a solution is baselining or "learning normal", so there should be an extremely easy way for the security team to indicate which activities are permitted to take place in that specific organization.
  • Add domain expertise: Adding security domain knowledge is not unique to Rapid7, but thanks to our research, penetration test, and Metasploit teams, the breadth and depth of our familiarity with the tools attackers use and their stealth techniques is unmatched in the market. We continually use this in our analyses of large datasets to find new indicators of compromise, visualizations, and kinds of data that we will add to InsightIDR. Plus, if we cannot get the new data from your SIEM or existing data source, we will build tools like our endpoint monitor or no-maintenance honeypots to go out there and get the data.
  • Use outliers differently: Almost every user behavior analytics product in the market is using its algorithms to produce an enormous list of events sorted by each one's risk score. We believe in alerting infrequently, so that you can trust it is something worth investigating. Outliers? Anomalies? We are going to expose them and help you to explore the massive amount of data to hopefully discover unwanted activity, but the specific outliers have to pass our own tests for significance and noise level before we will turn them into alerts. Additionally, we will help you look through the data in the context of an investigation because it can often add clarity to traditional "search and compare" methods that your teams are likely using in your SIEM.


So if you want to drop mathematics into your network, flip a switch, and let its artificial intelligence magically save you from the bad guys, we are not the solution for you. Sadly, though, no solution out there is going to fulfill that desire any time soon.


If you want to learn more about the way InsightIDR does what I described here, please check out our on-demand demo. We think you will appreciate our approach.

Every infosec conference is talking about the Attack Chain, a visual mapping of the steps an intruder must take to breach a network. If you can detect traces of an attack earlier, you not only have more time to respond, but can stop the intruder before monetizable data is accessed and exfiltrated.


Even as attackers and pen-testers continue to evolve their techniques, the Attack Chain continues to provide a great baseline framework to map out your security detection program.




Many of today’s detection solutions only alert on breach of critical assets or anomalous data exfiltration. At this point, the attacker is already at Mission Target, and the damage is likely already done. Similarly, it’s dangerous to over-invest in a particular step – many organizations are focused on detecting malware, but once an attacker has internal access to the network, they have multiple ways to move from Infiltration & Persistence to Mission Target without using malware at all.


This is where Deception Technology comes in. Justin Pagano, our information security lead, remarks in our latest Security Nation podcast, “Deception tech is a subset of detection that focuses on creating an illusion for attackers…for something they want, to make it easier for you to detect when they’re going after it.” And that is the most powerful aspect of deception – it can uniquely detect behavior that is otherwise very hard to spot.


Let’s look at four techniques attackers use every day, and how deception can detect these stealthy behaviors.


1. Attacker has internal network access -> fires off a network scan (e.g. Nmap) to find next targets.


One of the rare times an attacker is at a disadvantage is when he/she first lands on the network. This is because the attacker must learn more about the network infrastructure and where to move next.




These methods of gaining information, which range from running a vulnerability or network scan to traffic collection and manipulation, continue to shift, making them increasingly difficult for today's monitoring solutions to detect. Even comprehensive SIEM deployments struggle with early reconnaissance, as it's challenging to identify through log and traffic analysis alone. A countermeasure is to deploy one or more Honeypots across the network: decoy machines or servers with no legitimate function for normal users, which lurk and report if they've been scanned, even if only on a single port.


2. Attacker queries Active Directory to see the full list of users on the network. Tries only 1-2 commonly used passwords (e.g. Fall2016!) across all of those accounts – this is commonly known as password spraying, or a horizontal brute force.


How would you detect this today? In log files, it would appear as just one or two failed authentications per account. There have been cases where an attacker tries a few combinations each week to stay under the radar. This particular attack vector can be detected by creating a dummy user in Active Directory, say, PatchAdmin. This tantalizing user should have no business purpose and be associated with no employee. If you alert on any authentication against this account, you have a great way to detect that someone is up to no good.
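The detection logic for a honey user is intentionally trivial, which is exactly its strength: any touch of the decoy account, failed or successful, fires. A sketch, where the account name follows the PatchAdmin example and the event format is made up:

```python
# Honey-user sketch: alert on ANY authentication attempt against the decoy
# account, success or failure. The event dict shape is hypothetical; in a
# real deployment these fields come from parsed Windows/AD auth logs.

HONEY_USERS = {"patchadmin"}  # decoy accounts with no legitimate purpose

def check_auth_event(event):
    """Return an alert message if the event touches a honey user, else None."""
    if event["user"].lower() in HONEY_USERS:
        return (f"ALERT: honey user {event['user']!r} "
                f"authentication attempt from {event['source_ip']}")
    return None

check_auth_event({"user": "PatchAdmin", "source_ip": "10.0.4.23",
                  "result": "failure"})  # fires even on a failed login
```

Because the account has no business purpose, a single hit is high-fidelity evidence of reconnaissance, which is what makes it resilient against low-and-slow spraying.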


3. Attacker has compromised an employee endpoint. Proceeds to dump credentials / hashes via MimiKatz or other tools. Uses pass-the-hash to continue laterally moving to other machines.


There are a few challenges here. Hash extraction and privilege escalation can be performed using Windows PowerShell, so no outside malware is required for the attack to succeed. That means the behavior can evade anti-virus and anti-malware defenses that rely on identifying “known-bad”.




Further, most SIEM solutions don’t have endpoint visibility, as it’s challenging to set up log forwarding, and doing so can result in a lot of added data processing costs. Our Insight Agent [PDF] automatically injects a set of fake credentials onto each endpoint. If one of these credentials is used anywhere else on the network, you’ll receive an automatic alert. Of course, the fake credentials don’t grant access to any system, so they are safe to use.


4. Attacker has access to confidential materials and wants to move them off the network. Files in the folder get zipped and then copied elsewhere, often to an external drop server or a stolen cloud storage account.


There’s a layer of complexity here, as the attacker might be impersonating a legitimate employee or may be a malicious insider. While data exfiltration is late in the attack chain, it’s still important to detect critical files being copied or modified. Wade Woolwine, director of breach detection and response, notes, “Most of the time, we see command and control actions going over HTTP/HTTPS ports.” This makes exfiltration difficult to detect via firewalls or existing monitoring solutions.


One way to tackle this is to create a dummy file (e.g. Q2-Financials.xls) and place it amongst high-value files. By monitoring all actions taken on this Honey File (opening, editing, copying), you can get file-level visibility without the effort of deploying a standalone File Integrity Monitoring solution. Most importantly, this trap needs to feed into a larger, defense-in-depth detection strategy. It’s not hard to identify unauthorized access of critical assets; the challenge is figuring out the users involved, where else the attacker went, and the entire scope of the attack.


InsightIDR, our incident detection and response solution, comes standard with this growing library of deception technology: Honeypots, Honey Users, Honey Credentials, and Honey Files. This is used in combination with our User Behavior Analytics and endpoint detection to find intruders earlier in the attack chain. To see our deception technology in action, check out the Solution Short below.



Want more? Check out our latest webcast, “Demanding More from Your SIEM,” for a full demo of InsightIDR and to learn the top pain points in SIEM deployments today.

When I speak with prospects and customers about incident detection and response (IDR), I’m almost always discussing the technical pros and cons. Companies look to Rapid7 to combine user behavior analytics (UBA) with endpoint detection and log search to spot malicious behavior in their environment. It’s an effective approach: an analytics engine that triggers based on known attack methods as well as users straying from their normal behavior results in high fidelity detection. Our conversations center on technical features and objections – how can we detect lateral movement, or what does the endpoint agent do, and how can we manage it? That’s the nature of technical sales, I suppose. I’m the sales engineer, and the analysts and engineers that I’m speaking with want to know how our stuff works. The content can be complex at times, but the nature of the conversation is simple.


An important conversation that is not so simple, and that I don’t have often enough, is a discussion on privacy and IDR. Privacy is a sensitive subject in general, and over the last 15 years (or more), the security community has drawn battle lines between privacy and security. I’d like to talk about the very real privacy concerns that organizations have when it comes to the data collection and behavioral analysis that is the backbone of any IDR program.


Let’s start by listing off some of the things that make employers and employees leery about incident detection and response.


  • It requires collecting virtually everything about an environment. That means which systems users access and how often, which links they visit, interconnections between different users and systems, where in the world users log in from – and so forth. For certain solutions, this can extend to recording screen actions and messages between employees.
  • Behavioral analysis means that something is always “watching,” regardless of the activity.
  • A person needs to be able to access this data, and sift through it relatively unrestricted.


I’ve framed these bullets in an intentionally negative light to emphasize the concerns. In each case, the entity that either creates or owns the data does not have total control or doesn’t know what’s happening to the data. These are many of the same concerns privacy advocates have when large-scale government data collection and analysis comes up. Disputes regarding the utility of collection and analysis are rare. The focus is on what else could happen with the data, and the host of potential abuses and misuses available. I do not dispute these concerns – but I contend that they are much more easily managed in a private organization. Let’s recast the bullets above into questions an organization needs to answer.


Which parts of the organization will have access to this system?

Consider first the collection of data from across an enterprise. For an effective IDR program, we want to pull authentication logs (centrally and from endpoints – don’t forget those local users!), DNS logs, DHCP logs, firewall logs, VPN, proxy, and on and on. We use this information to profile “normal” for different users and assets, and then call out the aberrations. If I log into my workstation at 8:05 AM each morning and immediately jump over to ESPN to check on my fantasy baseball team (all strictly hypothetical, of course), we’ll be able to see that in the data we’re collecting.


It’s easy to see how this makes employees uneasy: “Security can see everything we’re doing, and that’s none of their business!” I agree with this sentiment. However, taking a magnifying glass to typical user behavior, such as websites visited or messages sent, isn’t the most useful data for the security team. It might be interesting to a human resources department, but this is where checks and balances need to start. An information security team looking to bring in real IDR capabilities needs to take a long, hard look at its internal policies and decide what to do with information on user behavior. If I were running a program, I would make a big point of keeping this data restricted to security and out of the hands of HR. It’s not personal, HR – there’s just no benefit to allowing witch hunts to happen. It’ll distract from the real job of security and alienate employees.

One of the best alerting mechanisms in every organization isn’t technology, it’s the employees. If they think that every time they report something it’s going to put a magnifying glass on every inane action they take on their computer, they’re likely to stop speaking up when weird stuff happens. Security gets worse when we start using data collected for IDR purposes for non-IDR use cases.

Who specifically will have access, to what information, and how will that be controlled?

What about people needing unfettered access to all of this data? For starters, it’s absolutely true. When Bad Things™ are detected, at some point a human is going to have to get into the data, confirm it, and then start to look at more data to begin the response. Consider the privacy implications, though; what is to stop a person from arbitrarily looking at whatever they want, whenever they want, from this system?


The truth is organizations deal with this sort of thing every day anyway. Controlling access to data is a core function of many security teams already, and it’s not technology that makes these decisions. Security teams, in concert with the many and varied business units they serve, need to decide who has access to all of this data and, more importantly, regularly re-evaluate that level of access. This is a great place for a risk or privacy officer to step in and act as a check as well. I would not treat access into this system any differently than other systems. Build policy, follow it, and amend regularly.


Back to if I were running this program: I would borrow heavily from successful vulnerability management exception handling processes. Let’s say there’s a vulnerability in your environment that you can’t remediate, because a business-critical system relies on it. In this case, we would put in an exception for the vulnerability. We justify the exception with a reason, place a compensating control around it, get management sign-off, and tag an expiration date so it isn’t ignored forever. Treat access into this system as an “exception,” documenting who is getting access and why, and define a period after which access will either be re-evaluated or expire, forcing the conversation again. An authority outside of security, such as a risk or privacy officer, should sign off on the process and on individual access.
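The exception-style access grant described above is essentially a small record with a built-in expiry. A sketch of what such a record might look like (all field names and dates are illustrative, not any product's schema):

```python
from dataclasses import dataclass
from datetime import date

# Sketch of vulnerability-management-style "exception" handling applied to
# IDR data access: every grant carries a justification, an outside approver,
# and an expiration date that forces re-evaluation.

@dataclass
class AccessException:
    analyst: str
    justification: str
    approved_by: str
    expires: date

    def is_active(self, today):
        """Access is valid only until the expiration date passes."""
        return today <= self.expires

grant = AccessException(
    analyst="jsmith",
    justification="Investigating incident ticket IR-1042",
    approved_by="privacy-officer",
    expires=date(2017, 3, 1),
)
grant.is_active(date(2017, 2, 1))  # still within the approved window
grant.is_active(date(2017, 4, 1))  # expired: forces the conversation again
```

The expiry is the important design choice: access that silently persists is exactly the unfettered-access scenario employees worry about, while an expiring grant guarantees the justification gets revisited.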


Under what circumstances will this system be accessed, and what are the consequences for abusing that access?

There need to be well-defined consequences for those who violate the rules and policies set forth around a good incident detection and response system. In the same way that security shouldn’t allow HR to perform witch hunts unrelated to security, the security team shouldn’t go on fishing trips of its own (phishing investigations and threat hunts excepted). Trawls through data need to be justified, for the same reasons as in the HR case. Alienating our users hurts everyone in the long run.


Reasonable people are going to disagree over what is acceptable and what is not, and may even disagree with themselves. One Rapid7 customer I spoke with talked about using an analytics tool to track down a relatively basic financial scam going on in their email system. They were clearly justified in both extracting the data and further investigating that user’s activity inside the company. “In an enterprise,” they said, “I think there should be no reasonable expectation of privacy – so any privacy granted is a gift. Govern yourself accordingly.”


Of course, not every organization will have this attitude. The important thing here is to draw a distinct line for day-to-day use, and note what constitutes justification for crossing that line. That information should be documented and made readily available, not buried in a policy that employees have to accept but never read. Take the time to have the conversation and engage with users. This is a great way to generate goodwill and hear out common objections before a crisis comes up, rather than in the middle of one or after.


Despite the above practitioner’s attitude towards privacy in an enterprise, they were torn. “I don’t like someone else having the ability to look at what I’m doing, simply because they want to.” If we, the security practitioners, have a problem with this, so do our users. Let’s govern ourselves accordingly.

security governance.png

Technology based upon data collection and analysis, like user behavior analytics, is powerful and enables security teams to quickly investigate and act on attackers. The security versus privacy battle lines often get drawn here, but that’s not a new battle and there are plenty of ways to address concerns without going to war. Restrict the use of tools to security, track and control who has access, and make sure the user population understands the purpose and rules that will govern the technology. A security organization that is transparent in its actions and receptive to feedback will find its work to be much easier.

Overcome Nephophobia - Don't be a Shadow IT Ostrich!

Every cloud…

When I was much younger and we only had three TV channels, I used to know a lot of Names of Things. Lack of necessity and general old age mean I’ve now long since forgotten most of them (but thanks to Google, my second brain, I can generally “remember” them again as long as there’s data available). Dinosaurs, trees, wild flowers, and clouds were all amongst the subjects in which my five-year-old self was a bit of an expert. I would point at the sky and wow my parents with my meteorological prowess, all learnt from the pages of a book. Good times. These days I can manage about three cloud names off the top of my head before reaching for the Internet: cirrus, stratus, cumulonimbus (OK, I had to double-check the last one).

Failing memory aside, I still love clouds, and frankly there’s little that beats a decent sunset – which wouldn’t be anywhere near as good without some clouds. So assuming you’re still reading and not googling cloud names (because it can’t *just* be me), I’d like you to think of a cloud please – an actual one, not a digital one. Chances are it’s all fluffy and white, the cumulus (oh yeah) type. Of all the words I could use to describe a cumulus cloud, “scary” isn’t one of them. But did you know that nephophobia – the irrational fear of clouds – is a real condition? Nephophobics struggle to look up into the sky, and in some cases won’t even look at a picture of a cloud. Any phobia by its very nature is debilitating, leaving the sufferer anxious at best, or totally unable to function at worst. I live with a six-foot strapping arachnophobe who is reduced to a gibbering wreck at anything larger than a money spider.


Digital Nephophobia

Nephophobia exists in our digital world too. Use of the cloud is written off and immediately written into policy. “We don’t use the cloud” is something I’ve heard far too frequently. Sometimes “don’t” is really “can’t” (blocked from doing so by government regulation) or “won’t” (we just don’t want to, we don’t trust it), but “do…but don’t know it” is more often the reality. This is where anxiety caused by the cloud is at its most valid – lack of visibility into the cloud services your users are already using (aka Shadow IT) is frankly terrifying for anyone concerned with data privacy or data security. I recently met with an IT Security Manager of a global network, who rightly said “if you’re not providing the services your users need and expect, then whether you like it or not you are probably being exposed to Shadow IT”. Pretending it’s not happening won’t make it go away either, as many a mauled ostrich will merrily testify.


Digital Therapy

Many phobia therapies involve facing the fear head on. Now, I’m not suggesting that the best medicine for digital nephophobia is to burn the “we don’t use the cloud” policy and open up your network to every cloud service available – far from it. First of all, it’s vital to understand what is really happening within your environment now: which cloud services your users have been using without your knowledge. From there you can work out which cloud services you should be formally provisioning, which you should be monitoring, and which you should be locking down. Perform the due diligence – any cloud vendor worth their salt will be able to provide you with the reassurance that their service is secured, with in-depth details of how it is secured, what happens to your data in transit and at rest, how it is segmented from other organisations’ data, who has access, and more.
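As a rough illustration of that first step, here is a minimal sketch that tallies hits to known cloud-service domains per user from web proxy logs. The domain list, the one-line log format, and the `shadow_it_report` function are all invented for illustration – a real programme would use a maintained service catalogue and your actual proxy or DNS log schema:

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative watch list -- in practice, use a maintained cloud-service catalogue
CLOUD_DOMAINS = {"dropbox.com", "drive.google.com", "box.com",
                 "wetransfer.com", "onedrive.live.com"}

def shadow_it_report(proxy_log_lines):
    """Count hits to known cloud services per (user, domain) from proxy
    log lines of the illustrative form: '<user> <url>'."""
    hits = Counter()
    for line in proxy_log_lines:
        user, url = line.split(None, 1)
        host = urlparse(url).hostname or ""
        for domain in CLOUD_DOMAINS:
            if host == domain or host.endswith("." + domain):
                hits[(user, domain)] += 1
    return hits

log = ["alice https://www.dropbox.com/upload",
       "alice https://www.dropbox.com/home",
       "bob https://intranet.example.local/page"]
```

Even a crude tally like this answers the first question – who is using what – before any decision about provisioning, monitoring, or blocking.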


Set yourself free

Once you’ve worked out what you need, and are confident in the service provider’s security processes (which are likely to be on par with, or indeed better than, those in your own network), the weight of digital nephophobia will begin to lift. The benefits of using the cloud are huge – a big reduction in provisioning, administration, and maintenance overheads for a start. The speed with which you can provide new services compared to the old world of doing it all in-house is staggering – how many times have you heard users moan about how long it takes IT to bring in a new service? Speaking of moaning – how about those 79 bajillion helpdesk tickets and IMs and calls that come in because The Server’s Down… Again? Distant memories – uptime is another benefit of embracing cloud services. You’ll be in good company too – organisations from every vertical are using the cloud: financial institutions, governments, healthcare, defense, manufacturing, charities, the list goes on and on.


Tackling Shadow IT is the first step in the journey from Nephophobe to Nephophile

Our aforementioned ostrich friend should be a lesson to you. If you can’t see where your problems are, you can’t begin to do something about them, and if you bury your head in the sand you are at dire risk of becoming lion lunch. Visibility into cloud services, whether they are sanctioned or shadow IT services, is a string that every IT security professional needs to have in their bow. InsightIDR gives you that string (and a whole bunch more too!) – at your fingertips lies a wealth of information on which cloud apps are being accessed, who is using them, when they are being used, and how frequently. And you don’t have to code a bunch of complex queries to access this information – the interactive dashboard has it all:



Want to learn more about how InsightIDR gives organisations insight into cloud services and user behaviour, and accelerates incident investigations by over 20x (told you there were more bow strings available!)? We’d love to show you a demo. And if you would like to know more about our approach to cloud platform security, you can read all about it right here.

If you’ve ever been irritated with endpoint detection being a black box and SIEM detection putting the entire onus on you, don’t think you had unreasonable expectations; we have all wondered why solutions were only built at such extremes. As software has evolved and our base expectations with it, a lot more people have started to wonder why it requires so many hours of training just to make solutions do what they are designed to do. Defining a SIEM rule is the perfect example – crafting the right query and adding it to detection lore can take up to an hour, which is fine if you have nothing else to do all day.


Writing a SIEM rule in legacy systems harms security teams by demanding expertise

The training London black cab drivers endure has been examined a great deal in recent years and for good reason: memorizing the ridiculous layout of London streets for a year before working eliminates “how do we get there?” annoyances and expands a region of the cabby’s brain. However, requiring this level of expertise has been a massive barrier to entry for new drivers, and with the advent of GPS devices [“sat nav” to the angry London taxi drivers], somewhat unnecessary. In the taxi world, this has led to Uber providing rides to consumers at a third of the price.


Similarly, when ArcSight [which defines “simple” differently than I do] and QRadar were first deployed as the “single pane of glass” for the well-staffed organizations that could make them effective, it took more than six months to develop the skills and expertise necessary to translate the foreign language of many logs into meaningful detection rules. Now that cloud solutions and continuous deployment have made collective learning possible, it feels impractical for Splunk or AlienVault experts to first translate the logs into a language the SIEM can understand, and then use this event format to define each and every alert that should trigger. In this case, the negative impact isn’t the cost of your services, but rather a decrease in how quickly your security team can adapt to new attacker techniques.


Understanding a SIEM rule and the corresponding alert makes every non-expert look for the translator

Whenever a customer walks me through the process of triaging and analyzing an alert, it reminds me of the effort to debug the satellite communication terminals I developed at Raytheon in 1999. We were running FORTRAN on 386 chips and breakpoints weren’t a possibility, so the raw assembly code we traced through resembled the chirps and beeps of R2D2 until you had spent a lot of time with it. It wasn’t until you’d acquired this level of expertise that you’d understand how to backtrack from a bizarre message on a tiny screen through five to ten different CALL and GO TO statements until the mistake in the code presented itself.


Just as I was forced to translate an output to the raw machine data and then to the actual code written by someone else, today’s SIEM analysts have to translate an alert’s accompanying data to the actual behavior identified, and then to the reason it warranted a rule back when someone else wrote it. It certainly doesn’t feel like you’ve got the information you need to take action; some digging is required before you gain the necessary insight. C-3PO would be really helpful here to immediately explain every alert in plain English.


If your team is going to get more time to do the important work, you need custom alerts for humans, not machines

InsightIDR comes with dozens of useful alerts for anomalous and attacker-like behavior across log and endpoint events, but this extreme alone is not enough. Switching from the rules-only approach of SIEM to the anomalies-only approach of other User Behavior Analytics (UBA) solutions is too dramatic a shift, and that’s why you need a solution with both. This is why Rapid7 customers can write custom alerts for the events that concern their organizations. If they want to feed this intelligence to our teams, we’re thrilled, and we may add it to the alerts every new customer gets after testing its noise level.

But right now, assign a junior analyst to make sure you have the alerts you need. We’ve made it dramatically easier to capture the alert in the first place. Deciding to alert whenever someone authenticates from North Korea or it looks like someone is streaming The Night Of from HBO Go will feel like you’ve been handed a sat nav on the day you interview to drive in London. My first experience in debugging Java was a dream after the process I had learned with FORTRAN. Even I can write these custom alerts and even I can understand what it means when someone else’s alert triggers.
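To show how simple such a custom alert can be conceptually, here is the “authenticates from North Korea” idea as a plain filter. The event shape, the blocked-country list, the `geo_auth_alerts` function, and the lookup table are all invented for illustration and are not InsightIDR’s API:

```python
# Illustrative policy: alert on logins sourced from these ISO country codes
BLOCKED_COUNTRIES = {"KP", "SY"}

def geo_auth_alerts(auth_events, geoip):
    """Return authentication events whose source IP resolves to a
    blocked country. `geoip` is any callable mapping IP -> country code."""
    return [e for e in auth_events
            if geoip(e["source_ip"]) in BLOCKED_COUNTRIES]

events = [{"user": "bob", "source_ip": "175.45.176.1"},
          {"user": "amy", "source_ip": "8.8.8.8"}]

# Stand-in for a real GeoIP database lookup
geo = {"175.45.176.1": "KP", "8.8.8.8": "US"}.get
```

The point is that the detection logic itself is trivial; the historical pain has been the plumbing around it, not the rule.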


If you want to learn more about the way InsightIDR does what I described here, please check out our on-demand demo. We think you will appreciate our approach.

New detections have been introduced regularly since we first started developing our Incident Detection and Response (IDR) solutions four years ago. In fact, as of today, we have a collection of more than 50 of these running across customer data. But what does that mean? And what are the very latest detections to help your security program? Vendors have fancy names for what is under the covers of their tools: “machine learning,” “advanced analytics,” “autonomous sentient artificial intelligence” – OK, I made up the last one, but I bet you could see a vendor using it in their next press release. Our InsightIDR solution uses a variety of analytics, machine learning, and deception technology, but our customers don’t necessarily worry about what we call it; at the end of the day they want one thing: detection of nefarious behavior.


No matter what you call it, the primary reason that InsightIDR’s detections are so effective is that we rely on Rapid7’s team of attack-minded experts to actually help develop the detection techniques that span the attack chain. Sure, some detections require extensive baselining of all user behavior, some require deception technology, and some require advanced log correlation. And while you can’t detect everything, you must continually prioritize the most effective attacker techniques for various stages of the chain and apply a bit of the scientific method to detect them.


In that spirit, we recently introduced three new detections to InsightIDR that demonstrate how different the technology for each real world attacker technique may be when you’re looking for indicators at each stage of the attack chain. And, since it fits well into National Cybersecurity Awareness month, I'd like to raise awareness about realistic attacker techniques to demonstrate why these new detections are more than just average IOCs. We want to make it uncomfortable for attackers exploring your network, forcing them from a casual jog through the forest to being afraid to take the wrong step like they’ve been dropped into a forest in The Hunger Games.


NetBIOS Name Service Poisoning – know when someone is tricking your Windows machines into sharing credentials.

You probably didn’t see it, since few people did, but Terminator 3 brought a new terminator model, the T-X, with a significant improvement over the terrifying T-1000: the ability to impersonate both a human and a weapon. While both the T-800 and the T-1000 were terrifying, they clearly had their limitations. When the T-800 imitated a [ridiculously fit] human being, it made detecting threats more difficult than looking for shiny metal, but the T-1000 was able to pretend to be any person, including those in authority – and yet, in Arnold’s own words, it couldn’t form “complex machines.” This is what made the next T-X model even more terrifying and hard to detect: it could pose as a human and form complex machines.


Similarly, there are many SIEM implementations capable of detecting brute force attempts to impersonate humans by stealing their accounts and a bevy of new user behavior analytics vendors claiming they can spot the T-1000 impersonating an authority figure to access more critical systems, but neither is able to spot a very common complex machine: protocol poisoning. Today, it is very easy for both simulated and real attackers, once on the network, to listen and respond to requests to resolve host names broadcast over the local network segment with tools like Responder. In his retelling of his Hacking Team hack, Phineas Fisher called it “The most useful tool for attacking windows networks when you have access to the internal network, but no domain user.”


Despite attackers pretending to be these trusted systems within the organization, the vast majority of Rapid7 penetration test clients fail to detect this behavior, even with massive detection investments. This is why the Rapid7 InsightIDR team strove to make it absurdly easy to detect. Since InsightIDR already has a presence on the network, the Insight agents are instructed to issue queries for non-existent host names over NBT-NS (as the most vulnerable systems would) and any received responses will expose the spoofer. It’s a little like asking what’s wrong with “Wolfie” when the real mom would clearly know the dog is named “Max” [and yes, I know that was the T-1000 – I barely remember T3].
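For the curious, the underlying probe idea can be sketched in a few lines. This is an illustrative reimplementation of the concept, not Rapid7’s agent code; the name encoding follows RFC 1001’s first-level NetBIOS encoding, and whether to actually broadcast on a production segment is your call:

```python
import os
import random
import socket
import string

def encode_netbios_name(name: str) -> bytes:
    """First-level NetBIOS name encoding (RFC 1001 section 14.1): pad the
    name to 15 characters, append the 0x00 workstation suffix, then map
    each nibble of each byte onto the letters 'A'..'P'."""
    raw = name.ljust(15).upper().encode() + b"\x00"
    out = bytearray()
    for b in raw:
        out.append((b >> 4) + ord("A"))   # high nibble
        out.append((b & 0x0F) + ord("A")) # low nibble
    return b"\x20" + bytes(out) + b"\x00" # length byte, name, terminator

def probe_for_spoofer(timeout=2.0):
    """Broadcast an NBT-NS query for a random, non-existent host name.
    No legitimate host should answer; any response exposes a poisoner."""
    fake = "".join(random.choices(string.ascii_uppercase, k=8))
    query = (os.urandom(2)            # transaction ID
             + b"\x01\x10"            # flags: broadcast name query
             + b"\x00\x01" + b"\x00\x00" * 3   # one question, no answers
             + encode_netbios_name(fake)
             + b"\x00\x20\x00\x01")   # type NB, class IN
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.settimeout(timeout)
    s.sendto(query, ("255.255.255.255", 137))
    try:
        _, addr = s.recvfrom(1024)
        return addr[0]   # responder's IP -- a likely spoofer
    except socket.timeout:
        return None      # silence is the healthy answer
```

The elegance of the technique is that it inverts the economics: the attacker’s tool must answer to be useful, and answering is exactly what gives it away.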


EMET – install it. Right now. Then, actually monitor what it sees.

In one of Hollywood’s best movies about barroom brawls, Road House, the hero Dalton rented a comfortable room from an unassuming man named “Emmett.” Emmett was not only a source of comic relief when his house was set ablaze, but he turned out to be a valuable asset when the real town proprietors decided to take their town back. We’ll never know whether Dalton should have confided in Emmett more about his challenges at the Double Deuce, but thanks to the magic of hindsight, it seems like he would at least have been a valuable source of information.


One of the biggest failings of the security industry is the fact that Microsoft’s free Enhanced Mitigation Experience Toolkit (EMET – not pronounced the same as Emmett, I think) is not currently installed on every Windows machine in existence. We can examine the reasons why in another blog someday, but if your organization actually does have the EMET agent installed broadly, your biggest question is probably whether anything is being actively blocked by EMET. If you want to find out, you can spend a lot of time managing the Windows Event Collector and building correlation rules, or you can simply deploy InsightIDR and receive these alerts alongside all of the notable behaviors and alerts across the attack chain for each user and asset. This valuable source of information can quickly show you that someone on the network is attempting to exploit systems, and even though those attempts were effectively mitigated, it means you can take action much earlier in the attack chain.
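As a rough sketch of what “actually monitor what EMET sees” means once events are forwarded somewhere central, here is an illustrative filter over collected Application-log events. The dict shape is invented for demonstration; real Windows Event Forwarding records are XML with an EMET provider name:

```python
from collections import defaultdict

def emet_mitigations_by_host(events):
    """Group EMET mitigation blocks by host from forwarded Application-log
    events. The event shape here is an illustrative dict, not a real
    Windows Event Forwarding schema."""
    out = defaultdict(list)
    for e in events:
        if e.get("source") == "EMET" and "blocked" in e.get("message", "").lower():
            out[e["host"]].append(e["message"])
    return dict(out)

# Illustrative event feed: one EMET block, one unrelated application event
feed = [
    {"source": "EMET", "host": "wkstn-12",
     "message": "EMET detected HeapSpray mitigation and blocked iexplore.exe"},
    {"source": "Outlook", "host": "wkstn-12", "message": "profile loaded"},
]
```

Grouping by host matters more than the individual event: a burst of mitigations on one asset is a strong hint that machine is being actively targeted.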


Related: [VIDEO] Why You Should be Using EMET


Honey files – because exfiltration is likely to precede the file opening.

If you learned everything you know about wealthy people from movies, as I did, you obviously know that cat burglars discreetly come into your house, slink around various monitoring systems, and carefully look through all of your valuables until the right one is identified, just like Clint Eastwood’s character in Absolute Power. In this scenario, it may feel like the most effective type of detection is a silent alarm which triggers when each jewelry box is opened, but that could lead to a flood of false alarms [sounds like legacy SIEM]. This is how most people have been forced to detect attacks with file integrity monitoring solutions – by filtering through the thousands of file changes that occur every day and hoping to catch the real problem. It’s akin to the jewelry-box alarms, where a trigger is more likely to come from a pet bumping the box or another family member perusing.



In the cyber-attack world, cat burglars rarely waste their time opening each and every file on a system until they find the right one. “Smash and grab” is the wrong phrase for it, but they quickly zip entire folders, exfiltrate them to a safe drop server, and move on to the next system, because they know their backdoor connection could get severed at any time. This is why the InsightIDR team made it so easy for you to use another form of deception on top of the honey pots, honey users, and honey credentials. If you haven’t managed to stop the attack by the time it reaches the “Mission Target” stage, planting some useless files there and alerting the moment they’re copied guarantees you know when someone has reached the valuables. It’s like planting a necklace of large cubic zirconia in a fancy jewelry box and only sounding that silent alarm when the box is swept into the cat burglar’s oversized sack of stolen goodies.
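The honey file idea is simple enough to sketch: plant a decoy with unique content and keep its hash, so any copy can be recognised wherever it later turns up. The decoy name and both helper functions here are illustrative, not how InsightIDR implements the feature:

```python
import hashlib
import os
import tempfile

def plant_honey_file(directory, name="passwords_backup.txt"):
    """Drop a decoy file with unique random content and return
    (path, sha256). The hash lets you recognise the decoy if a copy
    turns up in an exfiltrated archive or on a drop server."""
    body = b"DECOY-" + os.urandom(16).hex().encode()
    path = os.path.join(directory, name)
    with open(path, "wb") as f:
        f.write(body)
    return path, hashlib.sha256(body).hexdigest()

def matches_honey(blob: bytes, honey_hash: str) -> bool:
    """True if a captured blob is byte-for-byte our planted decoy."""
    return hashlib.sha256(blob).hexdigest() == honey_hash

# Plant one decoy (in the temp directory for this illustration)
path, digest = plant_honey_file(tempfile.gettempdir())
```

Because the content is random per decoy, a hash match is an unambiguous signal: nobody has a legitimate reason to be holding that exact file.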


If you want to learn more about detecting attacks with InsightIDR and the rest of Rapid7’s Incident Detection and Response portfolio, we would be happy to give you a free guided demo.


Related: [VIDEO] Solution Short - New Detections in InsightIDR

Do you suffer from too many vague and un-prioritized incident alerts? What about ballooning SIEM data and deployment costs as your organization expands and ingests more data? You’re not alone. Last week, over a hundred infosec folks joined us live for Demanding More out of Your SIEM.


Content Shared in the Webcast

In Gartner’s Feb 2016, “Security Information and Event Management Architecture and Operational Processes,” Anton Chuvakin and Augusto Barros recommend a “Run-Watch-Tune” model in order to achieve a “SIEM Win”. For those with a Gartner subscription, check out the full report here.


While some SIEM vendors recommend 10 full-time analysts for a 24/7 SIEM deployment, at least three full-time employees should serve as the foundation of your deployment. A breakdown of core Run, Watch, and Tune responsibilities:


Run: Maintain operational status, monitor uptime, optimize application and system performance.

We recommend: Take stock of your existing network and security stack – are there more data sources you should be integrating? From talking to customers and our Incident Detection & Response research, top gaps in SIEM integrations are:

  • DHCP. This integration provides a crucial User-Asset-IP link and powers most User Behavior Analytics solutions today.
  • Endpoint Data. If local authentications aren’t centrally logged, attackers can laterally move between endpoints and go undetected by the SIEM. 5 Ways Attackers can Evade a SIEM.
  • Cloud Services. Leading cloud services such as Office 365, Google Apps, and Salesforce expose APIs with audit data, but many SIEMs don’t take advantage of this data.
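To see why the DHCP gap matters, consider that an IP address in any other log is meaningless without knowing which asset held that lease at that moment. A minimal sketch of lease-history attribution follows; the class, method names, and event shapes are invented for illustration:

```python
from bisect import bisect_right

class LeaseHistory:
    """Map (ip, timestamp) -> hostname using DHCP lease events, so an IP
    seen in any other log can be attributed to the right asset."""
    def __init__(self):
        self._by_ip = {}  # ip -> sorted list of (timestamp, hostname)

    def record_lease(self, ts, ip, hostname):
        self._by_ip.setdefault(ip, []).append((ts, hostname))
        self._by_ip[ip].sort()

    def asset_for(self, ip, ts):
        """Return the hostname that held `ip` at time `ts`, or None."""
        leases = self._by_ip.get(ip, [])
        i = bisect_right(leases, (ts, chr(0x10FFFF)))
        return leases[i - 1][1] if i else None

h = LeaseHistory()
h.record_lease(100, "10.0.0.5", "wkstn-alice")
h.record_lease(200, "10.0.0.5", "wkstn-bob")  # lease churned to a new asset
```

Without this history, a firewall alert on 10.0.0.5 at time 250 would be pinned on whichever machine holds the address today, which may not be the machine that caused it.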


Watch: Using the SIEM for security monitoring and incident investigation.

We recommend: Today’s organizations are getting way too many alerts – here’s a poll taken during the webcast.


SIEM alert fatigue


Most security teams have to jump between multiple tools during investigations, are getting too many alerts, and are struggling to identify stealthy attacks, such as the use of compromised credentials and lateral movement, that don’t require malware to be successful. Most organizations are alerted on unauthorized access to critical assets, but at that point, intruders are already at Mission Target in the Attack Chain.


Using a SIEM to detect compromise across the attack chain


By mapping your detections to the Attack Chain, you can find intruders earlier and kick them out before data exfiltration occurs.


Tune: Customize SIEM content, create rules for specific business use-cases.

We recommend: Building queries requires specialized SIEM skills and experience manipulating large data sets, a scarce skillset that differs from incident investigation & response experience. If you’ve just been handed the reins to an existing SIEM deployment, it’s worth the time to do a rule review. While technology like User Behavior Analytics provides robust detection for today’s top attack vectors behind breaches, custom work is still necessary to meet specific business needs, such as compliance or a company-specific detection.


What I Learned from the Audience

Throughout the talk, we asked a few questions to learn from the audience. 71% currently have a SIEM, 11% don’t, and 18% don’t but are looking to purchase. Current satisfaction with their existing SIEM for Incident Detection and Response was across the board, with answers ranging from 4-8 on a scale of 1-10. The biggest concern was with data costs, the pricing model behind traditional SIEM solutions.


Growing data costs from traditional SIEM solutions


Top questions from our Q&A:

1. What is the best way to detect pass-the-hash techniques over servers?

The key data source is endpoint event logs. Only local authentication logs contain both the source and destination asset. For a full technical breakdown, check out our whitepaper: Why You Need to Detect More than Pass the Hash, with best practices on how to identify the use of compromised credentials.
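As a rough sketch of the kind of correlation endpoint logs enable, here is an illustrative heuristic that flags an account performing NTLM network logons (Windows event 4624, logon type 3) to unusually many destinations from a single source. The field names and threshold are invented; a real detection needs baselining against normal behavior:

```python
from collections import defaultdict

def suspicious_ntlm_fanout(logon_events, threshold=3):
    """Flag (source asset, account) pairs whose NTLM network logons
    fan out to `threshold` or more distinct destinations -- a crude
    lateral-movement signal. Event fields are illustrative."""
    fanout = defaultdict(set)
    for e in logon_events:
        if (e["event_id"] == 4624 and e["logon_type"] == 3
                and e["auth_package"] == "NTLM"):
            fanout[(e["source"], e["account"])].add(e["dest"])
    return [k for k, dests in fanout.items() if len(dests) >= threshold]

# Illustrative: one workstation's account logging on to three servers
events = [{"event_id": 4624, "logon_type": 3, "auth_package": "NTLM",
           "source": "wkstn-7", "account": "svc-backup", "dest": d}
          for d in ("srv-1", "srv-2", "srv-3")]
```

The key point from the whitepaper stands: only the endpoint’s own logon events carry both sides of the connection, so this correlation is impossible from domain controller logs alone.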


2. Is there a way to see all InsightIDR integrations on your website?

Yes – to see the full list, which ranges from network events, endpoint data, existing log aggregators or SIEMs, and more, check out the Insight Platform Supported Event Sources doc here.


3. Is there an [InsightIDR] integration with Nexpose or Metasploit?

Yes! Nexpose, our vulnerability management solution, integrates with InsightIDR to provide visibility and security detection across assets and the users behind them. This provides three key benefits:

  • Put a “face” to your vulnerabilities
  • Automatically place vulnerable assets under greater scrutiny
  • Flag users that use actively exploitable assets


Learn more about the Nexpose-InsightIDR integration here. InsightIDR also integrates with Metasploit to track the success of phishing campaigns on your users.


I Want More from My SIEM Deployment: Why InsightIDR?

InsightIDR works by integrating with your existing network and security stack, including Log Aggregators and SIEMs. The first step is unifying your technology and leveraging SIEM, UBA, and EDR capabilities to leave attackers with nowhere to hide.


InsightIDR can augment or replace your existing SIEM deployment. Organizations that use InsightIDR in sync with their SIEM especially enjoy:

  • User Behavior Analytics: Alerts show the actual users and assets affected, not just an IP address. InsightIDR automatically correlates the millions of events generated every day to the users behind them, highlighting notable behaviors to accelerate incident validation and investigations.
  • Endpoint Detection & Visibility: The blend of the Insight Agent and Endpoint Scan means detection and real-time queries for critical assets and endpoints, even off the corporate network. InsightIDR focuses on detecting intruders earlier in the Attack Chain, meaning you’ll be alerted on local lateral movement, privilege escalation, log deletion, and other suspicious behavior happening on your endpoints.
  • 10x Faster Incident Investigations: The security team can bring real-time user behavior, log search, and endpoint data together in a single visual timeline. No more jumping between disparate log files, retracing user activity across multiple IPs, and requiring physical access to the endpoint to answer questions.


If you’d like to learn more, Demanding More from Your SIEM shows a live InsightIDR demo, complete with Q&A from an engaged audience. Or - contact us for a free guided demo!

This is a guest post by Rapid7 customer Tom Brown. Faced with a possible data breach after customers reported malicious spam appearing to come from his company, Liberty Wines, he called in the experts.


The cyber incident came when I was on a trip to eastern Europe. Staff back at the office said our email had gone into meltdown. They claimed we were under attack – that customers were calling in to report that they were receiving emails from us with an unusual attachment, which turned out to be malicious. In just a short space of time we’d also been bombarded by a backscatter of hundreds of thousands of non-delivery receipts related to the original offending email. We had to be sure an internal breach wasn’t to blame. That’s when I called in the experts at Rapid7.


A bit of background: Liberty Wines is a multi-award-winning, UK-based wine importer and wholesaler headquartered in London. The Desktop Support Engineer and I have around 130 endpoints to look after – a mix of desktops, smartphones, and laptops – as well as hosted email and a mix of around 30 on-premises and hosted servers. With globetrotting Sales and Buying teams logging on to the network from locations all round the world, and a heterogeneous IT estate, there’s plenty to keep us busy.

Liberty Wines Warehouse with caption.png

I had used Rapid7 software in the past and knew of them as a leader in the security space. When I heard that they had released UserInsight [now InsightUBA] I was intrigued. I soon arranged a live demo and was so impressed with it I allocated budget to get it installed the next (this) financial year.


We had previously identified a need for something to help us track user behaviour and logins but couldn’t find anything suitable. Until UserInsight [now InsightUBA] was launched there really wasn’t anything on the market that could easily scale from an SME like us right up to a large Enterprise deployment. The architecture of the InsightIDR system allows it to fit any size organisation while remaining at a realistic “per endpoint” cost for smaller setups like us.


Anyway, the incident had brought matters forward somewhat, and we rapidly purchased and installed InsightIDR to give us the visibility and tools we needed to deal with the crisis at hand. InsightIDR is an expanded version of InsightUBA: an integrated detection and investigation solution that leverages user behaviour and endpoint analytics to spot and contain a compromise quickly and effectively – just what we needed.


Down to business

With time of the essence, the Rapid7 team worked closely with me, across three different time zones, to resolve the issue. After using Rapid7’s Quick Start service to get set up, the product began collecting and analysing data almost straightaway, providing the real-time intelligence we needed to determine whether Liberty Wines had been breached. It scoured our systems looking for traversal, privilege escalation, unusual service account usage, logins from unexpected locations or devices, and so on. We also set Rapid7’s vulnerability management product Nexpose to work identifying any potential security weaknesses in our systems which may have needed urgent attention.


Fortunately, InsightIDR found no suspicious user login or process activity on the network. From analysis of the spoofed email and email logs we worked out that the breach had actually come from a customer. The hackers had cloned a genuine email sent from Liberty Wines to a customer and then mass emailed it out to millions of internet users – some of whom were our customers – with the addition of a malicious JavaScript attachment.


Still, the Rapid7 team reverse engineered and analysed the malware in question to double-check it had not penetrated the network. It was a couple of weeks before we could say we had collected enough data to be absolutely sure that there was no suspicious activity going on internally. I have to say that without InsightIDR there is no way we would have been able to confidently assert that our network was, and continues to be, clean.


With the real-time visibility provided by Rapid7, I was also able to draw up a clear and detailed graphical timeline of events for the Liberty Wines board, and inform customers what had happened.


A lasting confidence

Rapid7 pulled out all the stops to help when the call first came through from us, and together we managed to get InsightIDR set-up in a matter of hours.


It’s a great system. It gives you that warm feeling inside by catching any suspicious behaviour on the network months before you’d otherwise discover it. Most IT managers accept that something will get through – that there will be a hole somewhere. So it’s about finding out where it is quickly and being able to take action and that’s what InsightIDR gives you.


Although there was no sign of a breach, the new user and process visibility it gave us did highlight a few areas where we needed to tighten up – particularly on user account security, which was quickly actioned. It allows me to see if a user is trying to access work emails on an unsanctioned mobile device, for example, or if they’re logging on from a foreign country.


We also used Rapid7 Nexpose, which highlighted a number of areas where our patching was falling short. We found plug-ins in unused browsers that were not being updated and it also resulted in us shutting down some legacy systems we had kept running for reference purposes. The risk they posed internally was greater than the need for quick access to old data. Nexpose allowed us to demonstrate this to the business.


Going forward, we’re embarking on a big website rebuild. We are going to make sure it’s bomb-proof before going live. That’s why I’ve already put Rapid7 pen testing into the budget for next year.

In our previous post on third party breaches, we talked about the risk of public compromised credential leaks providing attackers with another ingress vector. This August, InsightIDR, armed with knowledge from a partner, identified a “Very Large Credentials Dump”. Very large? Over 800 million compromised credentials including usernames, passwords, and password hashes were exposed. This pool includes publicly known credential dumps as well as dumps whose breach source has not been disclosed but which remain available for attackers to re-purpose.


Across our hundreds of customers using InsightIDR to monitor their ecosystems:

  • 177 alerts were generated across our U.S. customers
  • 50 alerts were generated across our EMEA & APAC customers

Many customers have already reached out to us to learn more about the alert and, whenever possible, we can provide the exposed passwords and hashes to your team. Below is an example of the alert in InsightIDR (click to expand):


third party compromised credentials breach

By highlighting this security risk, teams can proactively reset passwords before attackers try their hand. Even better, this is only one of the many detections built into InsightIDR to help you find threats earlier in the attack chain, before intruders breach critical assets.
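In principle, a third party breach alert like this boils down to a membership check: index the leaked dump by account identifier, then test your monitored accounts against it. The sketch below is a hypothetical illustration of that idea, not InsightIDR’s actual implementation, and the dump format and names are assumptions.

```python
# Illustrative sketch (not InsightIDR internals): index a credential dump
# of "email:password_hash" lines, then find which monitored accounts appear.
def build_dump_index(dump_lines):
    """Map lowercased email -> set of exposed password hashes."""
    index = {}
    for line in dump_lines:
        email, pw_hash = line.strip().split(":", 1)
        index.setdefault(email.lower(), set()).add(pw_hash)
    return index

def exposed_accounts(monitored_emails, dump_index):
    """Return the monitored accounts that appear in the dump."""
    return [e for e in monitored_emails if e.lower() in dump_index]

# Hypothetical sample data:
dump = ["alice@corp.example:5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8"]
hits = exposed_accounts(["alice@corp.example", "bob@corp.example"],
                        build_dump_index(dump))
print(hits)
```

Accounts returned by `exposed_accounts` would be the ones to reset and watchlist, per the guidance later in this post.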


Related Resource: [Video] Understanding the Attack Chain to Detect Intruders


InsightIDR detects attacks before they become a problem

If any users are identified as at-risk, one click brings up their user page to see authentications, asset info, cloud services, and more.


InsightIDR tells you the users involved in a breach

Today, our corporate credentials not only log us into network services, but also into cloud services such as Office 365, Salesforce, and Box. As InsightIDR has direct API integrations with those services, you’ll know about any suspicious authentications, whether it be from an unusual location or anomalous admin activity.

By applying User Behavior Analytics to link together IP Addresses, Assets, and Users, InsightIDR detects the top attack vectors behind breaches, including phishing, compromised credentials, and malware.
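One simple way to picture the “unusual location” piece of User Behavior Analytics: baseline where each user normally authenticates from, then flag logins from places outside that baseline. This is a minimal sketch of the general technique, with made-up users and countries, and it assumes a far cruder model than a production UBA engine.

```python
# Minimal UBA-style sketch (assumption, not InsightIDR's model):
# baseline each user's login countries, flag authentications from new ones.
from collections import defaultdict

def build_baselines(auth_log):
    """auth_log: iterable of (user, country) tuples from historical logins."""
    baselines = defaultdict(set)
    for user, country in auth_log:
        baselines[user].add(country)
    return baselines

def anomalous_logins(new_events, baselines):
    """Return (user, country) events outside the user's baseline."""
    return [(u, c) for u, c in new_events if c not in baselines.get(u, set())]

# Hypothetical sample data:
history = [("jsmith", "GB"), ("jsmith", "GB"), ("adoyle", "US")]
events = [("jsmith", "GB"), ("jsmith", "RU")]
print(anomalous_logins(events, build_baselines(history)))
```

A real system would weigh many more signals (device, time of day, asset, peer group) before alerting, which is how vague one-signal alerts get turned into prioritized ones.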

I received this alert. What can I do?

  • For affected accounts, we recommend resetting the account password & adding the user to the InsightIDR Watchlist.
  • If you’d like more on the credential dump, please use the in-app feedback button, which automatically opens an InsightIDR support ticket, or email us. If available, we can share the exact passwords and hashes in the dump upon request.

  • As an added value, if you have other company-owned domains, we can add the domain name to be monitored for future third party breaches.

I want to receive these alerts. What can I do?

The call everyone had been waiting for came in: the shuffleboard table arrived, and was ready to be brought upstairs and constructed! The team had been hard at work all morning in the open-style office space with conference rooms and private offices along the perimeter. The Security Operations Center (SOC) with computers, many monitors and an open layout was behind a PIN activated door. The team wanted something fun in the office to do when they took a break from defending networks.


My office-mates for the week were casually dressed in jeans and either t-shirts or button downs, and they were sweating while laughing and strategizing for how to get a 20-foot shuffleboard table up two flights of stairs and into the office. About five minutes later, the shuffleboard table parts were placed in the open space in the office, and the team was back downstairs figuring out how to dispose of the wood and other protective covering that came with it.  They were calm and happy—the consistent mood throughout the week even when larger puzzles arose. The next morning, the table was fully assembled and there were tests underway for how to straighten the slope.




What does a shuffleboard table have to do with my trip to Alexandria and the team I visited?

The shuffleboard assembly showed me a lot about how some of the best problem solvers work together to get the job done. The team quickly, quietly, and efficiently solves problems regularly, and they have a lot of fun doing so. They work well together—they collaborate together, eat together, smoke together, and joke together. One way they mark their success: you never hear about the incidents they solve; they’re just solved—similar to how they built the shuffleboard table. One minute, there were many parts in a box that needed to be brought up the stairs and constructed. A day later, there was a shuffleboard table set up and the packaging had been recycled. Most of the time, however, this teamwork is put toward solving some of the largest, most complicated cyber security breaches and problems. Everyone on the team has a distinct role, and they rely on each other to creatively problem solve. These are the crime fighters that you don’t see or hear. So, how do they do it?


They divide and conquer. The team is broken up into three smaller teams—there’s an analytic response team, an incident response team, and a threat intelligence team. Their knowledge and collaboration enable quicker threat detection and response and a deep, unparalleled understanding of the threat landscape, user behavior, and attacker behavior.


What are these three different teams and how are they not duplicative?

Analytic Response

The Analytic Response team is a group of people who work in the security operations center and continuously keep an organization’s environment safe. Together, the people and technology of Analytic Response act as “detectors” in the environment. With this team monitoring, detecting, and responding to what’s going on in your environment, when an incident comes up, you gain an understanding of what is happening and how serious it is. There are three tiers of analysts in the SOC, and each has a different role in detecting and responding. They make it possible to detect and respond to threats in hours instead of months. These people eat, sleep, and breathe problem solving, and do so calmly and with ease. Many of these analysts have been coding and participating in hacking events since they were young and have a lot of experience spotting anomalies.


Incident Response

The Incident Response team is another subset of this larger IDR ecosystem. This group helps teams come up with proactive strategies so that they have a breach response program in place. They are also the boots on the ground if there’s an issue; as the team lead put it, “we’re the people you don’t want to see at your organization.” When the Incident Response team is called in unexpectedly, it’s because there’s a cyber-incident that needs to be solved, immediately. They examine and make sense of the virtual crime scene.


Threat Intelligence

The Threat Intelligence team analyzes information on threats and generates intelligence that feeds both analytic and incident response and gives all of the teams situational awareness of emerging and evolving threats. Our leader of the threat intelligence practice is a former Marine Corps network warfare analyst. Threat intelligence helps defenders understand threats and their implications and speeds decision making in the most urgent situations.


The three teams that make up Rapid7’s broader IDR Services all support each other and make it better for the customer. They may seem like three distinct teams, but they all come together to solve problems quickly and create a vast amount of knowledge to be used by all. The analytic response team is made more efficient by threat intelligence, and the incident response team helps customers experiencing major incidents and utilizes the work done by both teams to solve the problems. They are an integrated, fun, quirky team that calmly and easily solves problems… and they also find time for shuffleboard!


Learn more about Analytic Response here.

When we take a look at the last ten years, what’s changed in attacker methodology, and how has it changed our response? Some old-school methods continue to find success - attackers continue to opportunistically exploit old vulnerabilities and use weak/stolen credentials to move around the network. However, the work of the good guys, reliably detecting and responding to threats, has shifted to accommodate an attack surface that now includes mobile devices, cloud services, and a global workforce that expects access to critical information anywhere, anytime.


Today, failure across incident detection to remediation not only results in risk for your critical data, but can result in an attacker overstaying their welcome. We discussed this topic with our incident response teams, who have responded to hundreds of breaches, to develop a new whitepaper that shares how Incident Response has changed and how they prioritize strategic initiatives today. This comes with a framework we use with customers today to measure and improve security programs. Download your copy of A Decade of Incident Response: IDR Evolution & Evaluation here.




Incident Detection & Response, Then and Now

Since 2006, every step in breach response has continued to evolve – this infographic highlights key differences. For example, breach readiness was an afterthought to availability and optimizing the speed of business processes. Previously, there was little chance of falling victim to a sophisticated targeted attack leveraging a combination of vulnerabilities, compromised credentials, and malware.


But today, IT teams are expected to prepare thoroughly in the event of a breach, implementing network defense in depth and organizing and restricting data along least privilege principles. If we look back a decade, it was much easier to retrace how and where an incident occurred and respond accordingly. Today’s IR pros must combine expertise in a growing list of areas from forensics to incident management and ensure breach response covers everything from technical analysis to getting the business back up and running.


On the other hand, containment and recovery have continued to improve over the past decade. Thanks to well-rehearsed programs, combined with system image and data restoration processes, IT can return a user’s machine in just a day. Security teams can contain threats remotely and use technology to provide scrutiny over previously compromised users/assets.


Incident Response Maturity

You can find out more on all of this in the infographic and the new Rapid7 whitepaper: A Decade of Incident Response. Too many security professionals are concerned with how their programs compare to those of their peers. This is the wrong approach. As you evolve your security program, worry only about one thing: how your program measures up against your attackers.


In the paper, you're asked seven questions to determine the maturity of your Incident Detection and Response program. We’ve based this framework on decades of Rapid7 industry experience and we think it’ll provide a great place to start evaluating where you need to make changes. Want to learn more about Rapid7’s technology and services for incident detection and response? Check out InsightIDR, which combines the best capabilities of UBA, SIEM, and EDR to relentlessly detect attacks across your network.





Eric Sun

Earlier this week, we had a great webcast all about User Behavior Analytics (UBA). If you’d like to learn why organizations are benefiting from UBA, including how it works, top use cases, and pitfalls to avoid, along with a demo of Rapid7 InsightIDR, check out the on-demand recording, User Behavior Analytics: As Easy as ABC, or the UBA Buyer's Tool Kit.



During the InsightIDR demo, which showed top SIEM, UBA, and EDR capabilities in a single solution, we had a lot of attendee questions (34!). We grouped the majority of questions into key themes, with seven Q&As listed below. Want more? Leave a comment!


1. Is [InsightIDR] a SIEM?

Yes. We call InsightIDR the SIEM you’ve always wanted, armed with the detection you’ll always need. Built hand-in-hand with incident responders, our focus is to help you reliably find intruders earlier in the attack chain. This is accomplished by integrating with your existing network and security stack, including other log aggregators. However, unlike traditional SIEMs, we require no hardware, come prebuilt with behavior analytics and intruder traps, and monitor endpoints and cloud solutions – all without having to dedicate multiple team members to the project.


2. Is InsightIDR a cloud solution?

Yes. InsightIDR was designed to equip security teams with modern data processing without the significant overhead of managing the infrastructure. Your log data is aggregated on-premise through an Insight Collector, then securely sent to our multi-tenant analytics cloud, hosted on Amazon Web Services. More information on the Insight Platform cloud architecture.


3. Does InsightIDR assist with PCI or SOX compliance, or would I need a different Rapid7 solution?

Not with every requirement, but many, including tricky ones. As InsightIDR helps you detect and investigate attackers on your network, it can help with many unique compliance requirements. The underlying user behavior analytics will save you time retracing user activity (who had what IP?), as well as increase the efficiency of your existing stack (over the past month, which users generated the most IPS alerts?). Most notably, you can aggregate, store, and create dashboards out of your log data to solve tricky requirements like, “Track and Monitor Access to Network Resources and Cardholder Data.” More on how InsightIDR helps with PCI Compliance.
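The “who had what IP?” attribution mentioned above can be pictured as a lookup over timestamped address assignments: given DHCP lease records, find which user held an address at a particular moment. The sketch below is a toy illustration with invented data, not InsightIDR’s actual attribution engine.

```python
# Hypothetical sketch of "who had what IP?": correlate an IP and a timestamp
# against DHCP lease records to attribute activity to a user.
from datetime import datetime

# (ip, user, lease_start, lease_end) -- invented sample leases:
leases = [
    ("10.0.0.5", "jsmith", datetime(2016, 8, 1, 9), datetime(2016, 8, 1, 17)),
    ("10.0.0.5", "adoyle", datetime(2016, 8, 1, 18), datetime(2016, 8, 2, 2)),
]

def who_had_ip(ip, when, leases):
    """Return the user holding `ip` at time `when`, or None if unknown."""
    for lease_ip, user, start, end in leases:
        if lease_ip == ip and start <= when <= end:
            return user
    return None

print(who_had_ip("10.0.0.5", datetime(2016, 8, 1, 20), leases))
```

The same IP attributed to two different users hours apart is exactly why raw firewall or IPS logs alone can mislead an investigation without this kind of correlation.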


4. Is it possible to see all shadow cloud SAAS solutions used by our internal users?

Yes. InsightIDR gets visibility into cloud services in two ways: (1) direct API integrations with leading services, such as Office 365, Salesforce, and Box, and (2) analyzing Firewall, Web Proxy, and DNS traffic. Through the latter, InsightIDR will identify hundreds of cloud services, giving your team visibility into what’s really happening on the network.
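Approach (2) can be sketched as matching observed DNS queries against a catalog of known SaaS domains. The tiny catalog and hostnames below are hypothetical stand-ins; a real catalog would cover hundreds of services, as noted above.

```python
# Illustrative sketch of shadow-cloud discovery from DNS logs (assumed data).
# A real service catalog would hold hundreds of entries.
KNOWN_SAAS = {
    "dropbox.com": "Dropbox",
    "box.com": "Box",
    "salesforce.com": "Salesforce",
}

def cloud_services_seen(dns_queries):
    """dns_queries: iterable of (host, queried_domain).
    Returns {service_name: set of querying hosts}."""
    seen = {}
    for host, domain in dns_queries:
        parts = domain.lower().split(".")
        # Walk up the domain (www.dropbox.com -> dropbox.com) to find a match.
        for i in range(len(parts) - 1):
            root = ".".join(parts[i:])
            if root in KNOWN_SAAS:
                seen.setdefault(KNOWN_SAAS[root], set()).add(host)
                break
    return seen

queries = [("wkstn-12", "www.dropbox.com"), ("wkstn-07", "na3.salesforce.com")]
print(cloud_services_seen(queries))
```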


5. Where does InsightUBA leave off and InsightIDR begin?

InsightIDR includes everything in InsightUBA, along with major developments in three key areas:

  • Fully Searchable Data Set
  • Endpoint Interrogation and Hunting
  • Custom Compliance Dashboards

For a deeper breakdown, check out “What’s the difference between InsightIDR & InsightUBA?”


6. Can we use InsightIDR/UBA with Nexpose?

Yes! Nexpose and InsightIDR integrate to provide visibility and security detection across assets and the users behind them. With this combination, you can see exactly which users have which vulnerabilities, putting a face and context to the vuln. If you dynamically tag assets in Nexpose as critical, such as those in the DMZ or containing a software package unique to domain controllers, those are automatically tagged in InsightIDR as restricted assets. Restricted assets in InsightIDR come with a higher level of scrutiny – you’ll receive an alert for notable behavior like lateral movement, endpoint log deletion, and anomalous admin activity.



7. If endpoint devices are not joined to the domain, can the agents collect endpoint information to send to InsightIDR?

Yes. From working with our pen testers and incident response teams, we realize it’s essential to have coverage for the endpoint. We suggest customers deploy the Endpoint Scan for the main network, which provides incident detection without having to deploy and manage an agent. For remote workers and critical assets not joined to the domain, our Continuous Agent is available, which provides real-time detection, endpoint interrogation, and even a built-in Intruder Trap, Honey Credentials, to detect pass-the-hash and other password attacks.
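The honey credential idea reduces to a simple invariant: a decoy account is never used legitimately, so any authentication attempt against it is an alert by construction. The sketch below illustrates that concept only; the decoy name and event format are invented, and the real agent plants the decoy in endpoint credential stores rather than watching a list.

```python
# Conceptual sketch of a honey-credential trap (not the actual agent).
# A decoy account is seeded where attackers harvest credentials; any
# authentication attempt with it indicates credential theft/replay.
HONEY_USERS = {"svc_backup_adm"}  # hypothetical decoy account name

def honey_credential_alerts(auth_events):
    """auth_events: iterable of dicts with 'user' and 'source' keys.
    Return every event that used a decoy account."""
    return [e for e in auth_events if e["user"] in HONEY_USERS]

events = [
    {"user": "jsmith", "source": "10.0.0.5"},
    {"user": "svc_backup_adm", "source": "10.0.0.99"},  # replayed decoy cred
]
print(honey_credential_alerts(events))
```

Because the decoy has no legitimate use, this class of detection has an effectively zero false-positive rate, which is what makes it useful against pass-the-hash and similar password attacks.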


Huge thanks to everyone that attended the live or on-demand webcast – please share your thoughts below. If you want to discuss if InsightIDR is right for your organization, request a free guided demo here.

The attack surface is growing, and it is critical for enterprises to be able to detect and respond to incidents quickly and thoroughly. We recommend modeling your security program after the Attack Chain, which graphically shows the steps that intruders follow to breach a company.


This applies no matter what type of attack intruders employ, whether it be exploiting a vulnerability, stealing credentials via phishing, or using malware. The steps, in order, are: infiltration and persistence, network exploration, lateral movement, mission target, and maintaining presence. If an attacker is discovered early in the chain, it’s possible to stop the attack -- before they steal valuable data. While many organizations focus their detection on critical assets, nearly all struggle to identify earlier signs of intruder behavior, such as network reconnaissance and lateral movement.




The Verizon Data Breach Investigations Report has continued to list the top three attack vectors behind breaches as compromised credentials, malware and phishing. While organizations are spending more money than ever on Incident Detection and Response, security teams are still plagued with vague, un-prioritized alerts -- many of which are false-positives. More than ever, Incident Detection is challenging and requires the right combination of expertise backed by reliable technology.


To help illustrate why detection isn't working today and how InsightIDR can reveal intruders at every step in the attack chain, we created the infographic, “Disrupt the Attack Chain: Rapid7’s Approach to Incident Detection & Response.” By integrating with your existing network and security stack, InsightIDR applies both user behavior analytics and custom intruder traps to detect intruders quicker and add the context needed to triage the alerts.


Key benefits include: breadth of coverage, speed of detection, and having user context across all of your data. Want to find out the other three? Check out the infographic to see our vision towards Incident Detection and Response.

As the saying goes, ‘there is no such thing as a free lunch.’ In life, including the technology sector, many things are more expensive than they appear. A free game app encourages in-app purchases to enhance the playing experience, while a new phone requires a monthly plan for data, calling, and texting capabilities. In the security industry, one technology that stands out for its hidden costs is Security Information and Event Management (SIEM) tools. During initial deployment, use, and maintenance, SIEMs typically have three costs that will surprise your organization’s security team: growing hardware costs, unpredictable data costs, and data management expenses.


Nearly all SIEM deployments start with the purchase of hardware. It seems like a one-time purchase, but unfortunately it doesn't stop there. Your log data expands as new employees are hired and more systems are brought online, and so too does the data storage needed for those logs. While more users increase the need for hardware, additional SIEM features also significantly impact the hardware load, resulting in surprise cost increases. A ‘keeping the lights on’ cost comes into play, as more budget is required to manage the growing hardware deployment. As your company grows and expands, so does the need for more and more hardware to hold your mountains of security data.


The second hidden cost is the expense associated with processing and indexing your data. As your machine data grows exponentially, so will your vendor bill. Most SIEM vendors charge by data volume (measured in events per second, data indexed, or average data volume processed), and Gartner estimates those volumes are doubling annually, resulting in expanding license costs. As mentioned previously, the goal of most organizations is growth, which means this expansion also brings growth in data and data costs, costs that can become difficult to afford.
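To make the compounding concrete, here is a back-of-envelope sketch of volume-based pricing: if indexed data doubles annually (per the Gartner estimate above) at a flat per-GB/day rate, the license cost doubles with it. The starting volume and rate below are purely hypothetical.

```python
# Back-of-envelope sketch: annual license cost under volume-based pricing
# with data doubling each year. Rates and volumes are hypothetical.
def projected_annual_cost(gb_per_day, rate_per_gb_day, years, growth=2.0):
    """Return a list of projected yearly costs, one entry per year."""
    costs = []
    for year in range(years):
        daily_volume = gb_per_day * (growth ** year)
        costs.append(round(daily_volume * rate_per_gb_day * 365))
    return costs

# 50 GB/day today at a notional $3 per GB/day licensing rate:
print(projected_annual_cost(50, 3.0, 3))
```

Three years in, the same deployment costs four times what was budgeted at purchase, before counting hardware or staffing.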


Finally, the most challenging cost comes when looking for the expertise required to get the most out of your SIEM deployment. SIEM products are difficult. Writing and tuning detection rules, performing incident investigations, and understanding proprietary search languages mean that operators need both security knowledge and specialized SIEM tool expertise. Simply adding ‘manage SIEM’ to an existing employee’s workload isn’t a feasible option for a successful deployment. A survey from eIQnetworks reports that 52% of respondents require two or more full-time employees to manage their current SIEM deployment. This means that deploying a SIEM requires not just one person, but a dedicated team to set it up and maintain it. These additional team members can end up costing a company much more than it initially intended. Dr. Anton Chuvakin, Research Vice President at Gartner, mentions in his blog that this cost, as well as other unexpected ‘hard’ and ‘soft’ costs, can make the total hidden costs of a SIEM project range anywhere from 10% of the SIEM license cost to as much as twenty times the license cost.


Knowing that SIEMs can come with a much larger price tag than they initially appear to might cause someone to ask if there are any alternatives - ones that won’t break the bank. At Rapid7, we understand that security teams are strained, security data management is a pain, and you’re already facing a mountain of stale, un-prioritized alerts. From working hand-in-hand with security teams and incident responders, we’ve built the SIEM you've always wanted - InsightIDR. It’s your fully integrated detection and investigation solution that also tackles these hidden costs head-on. For the challenge of ever-expanding hardware, InsightIDR has been designed to run on our cloud-based Insight platform. You don't have to worry about growing and watering a hardware farm to make your logs fully searchable and safe from modification by attackers. To eliminate surprise data costs, our pricing is based on monitored assets, not data volume processed. Finally, to help solve the challenge of needing a wide range of talent for a successful deployment, InsightIDR comes with prebuilt detections from our penetration testing team, Rapid7 research, and customer collaborations. You can finally get visibility and detection throughout your organization without it becoming a second full-time job.


For more information about InsightIDR, watch our 20-minute demo here.