
This post is the fifth in a series examining the roles of search and analytics in the incident-detection-to-response lifecycle. To read the first four, click here, here, here, and here.


Stating that a SIEM doesn't detect attacks may sound controversial, but it's really just a clarification. SIEM security solutions are designed to aggregate, store, and provide access to your logs. Given these goals, it was only logical (and initially life-altering) to be able to build rules that alert on any identifying text or patterns found in the data you have stored in one place. But enabling teams to build detection on top of their data is not the same as selling them the sort of detection they would get the first week they connect an antivirus [yes, seriously], IPS, or sandboxing solution. If you want effective detection from your SIEM, you have a lot of hiring and investment to do.


Spending your entire monitoring budget on a SIEM leads to a systemic problem

Budget planning is never enjoyable in an organization, but that's especially true when you're not seen as a revenue-generating department, as security is typically classified. Once your management team recognizes the need to invest more seriously in monitoring, it’s critical they understand that just one SIEM product isn’t going to solve all challenges. In fact, budget for just one SIEM solution is a worst-case scenario. It is almost guaranteed to lead down a path where the security team views the SIEM as a failed investment and treats it as shelfware because the team is too small to get value from it, while the management team believes it has provided ample funds to start seeing results. This becomes a larger problem in subsequent budget planning, when new monitoring money is needed to obtain alternative, ready-to-use detection tools for the team. Due to improper expectations, those outside security fall into the sunk cost fallacy and instead demand that any incremental funds go into the SIEM until it is made effective. They underestimate what this would take and are unwilling to accept the team's abandonment of the sizable purchase, so an under-funded team continues to be blamed for failing to make an off-the-shelf solution work with no incremental staff or software developers.


Related Resource: [VIDEO] SIEM tools don't have to be hard, here's how.


Successful security teams have considered the SIEM solution a starting point

Nothing about my initial statement was meant to say SIEMs have no value for detection. Your security team needs a place to quickly access the organization's data. It needs a central place to investigate the alerts coming from the many security devices. It needs a place to build custom rules specific to your organization. You just need to be prepared for the implementation effort, the challenge of keeping all the necessary data flowing in consistently, and the staffing necessary to both build effective detection and retire rules when they become ineffective. If you cannot meet these three criteria, you need to go with either a managed service or an alternative software solution.


Buying a SIEM and sending it data will not improve your detection. You need a team capable of translating the data output from your infrastructure into a common language the SIEM understands. You need a team capable of interpreting the out-of-the-box rules available in the SIEM and tuning them to your organization. You need a team capable of triaging and analyzing every alert to confirm incidents prior to having effective detection. You need a team who can learn the search query language and best ways to pivot across disparate data sets during investigations. You also need a team with the ability to manage hardware, architecture, and the "hot vs. cold" storage balance for your unique capabilities and needs.


Inflexibility has driven many teams to build complex, custom aggregation and to strip a lot of context just to ease data normalization. If you can’t feed your SIEM with data from a newer source, or cannot build ingestion for a type of data you value, you’ve probably switched vendors or built some complex series of forwarders and connectors with a team dedicated to keeping them running. This can become a regular chore to not only manage, but re-architect, when more data sources inevitably come online as your business starts using more and more modern services, some of which will likely be in the cloud. Remaining successful requires creative mechanics who can fix anything with a little love, like my favorite mechanic in [still the best Mad Max movie] Road Warrior. In many of these cases, you and this ever-growing SIEM maintenance team need to identify the important fields (which you hopefully know) and normalize the logs yourselves to ensure they are searchable when it's important.


The team needs to build the value on top with custom code. Whether it is old-school analytics (what most people would call trend charts) or newer analytics (involving supervised machine learning and temporal behavioral analysis), a great deal of security expertise and historical analysis goes into the most reliable detection organizations have built on the data they happen to store in their SIEM. The resource-blessed security team is not operating with default SIEM detection, but rather using its data as the source for complex software which looks for patterns, strings together search queries to improve its chances, and uses the available search capabilities for investigation. This doesn't mean your team is using the SIEM as a database. The best SIEM solutions out there simply allow you to build this custom code on top of their data by providing software development kits. This works extremely well for these exceedingly rare well-staffed teams, but brings none of the effective detection back to the other 95% of businesses who purchase the same SIEM product.
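To make the kind of custom detection described above concrete, here is a minimal, hypothetical sketch in Python: a sliding-window threshold rule that flags accounts with repeated authentication failures. The event tuples, field names, and thresholds are my own assumptions for illustration, not any particular SIEM's schema or SDK.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical normalized auth events: (timestamp, user, result)
events = [
    (datetime(2016, 5, 1, 9, 0, 0), "alice", "failure"),
    (datetime(2016, 5, 1, 9, 0, 10), "alice", "failure"),
    (datetime(2016, 5, 1, 9, 0, 20), "alice", "failure"),
    (datetime(2016, 5, 1, 9, 0, 30), "alice", "failure"),
    (datetime(2016, 5, 1, 9, 0, 40), "alice", "failure"),
    (datetime(2016, 5, 1, 9, 5, 0), "bob", "failure"),
]

def brute_force_alerts(events, threshold=5, window=timedelta(minutes=2)):
    """Flag users reaching `threshold` failures inside a sliding window."""
    recent = defaultdict(deque)  # user -> timestamps of recent failures
    alerts = []
    for ts, user, result in sorted(events):
        if result != "failure":
            continue
        q = recent[user]
        q.append(ts)
        while q and ts - q[0] > window:
            q.popleft()  # drop failures that fell out of the window
        if len(q) >= threshold:
            alerts.append((user, ts))
    return alerts

print(brute_force_alerts(events))  # alice trips the threshold on her fifth failure
```

Even a rule this simple needs someone to pick the threshold, tune the window, and retire it when it stops being useful, which is exactly the staffing point above.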


Search and Analytics products should dramatically reduce the customization workload for your team

If you want to detect attacks without staffing a large team with software development skills, you need to seek alternative solutions. For your incident response team to start spending more time hunting and less time building and updating custom software, you need an analytics solution which automates as much of this detection and data enrichment as possible. So do these teams still need a SIEM today? No, but they do still need to satisfy a lot of the use cases:

  • They still need to collect and store the data in a centralized place.
  • They still need to search the data (hopefully with a faster response than today).
  • They still need to leverage this data for detection purposes.
  • They still need to build some alerts very specific to their organizations.
  • They still need a link between their many data sources and threat intelligence.


They just don't need a "SIEM solution" to meet these needs. You can combine a flexible machine data search solution and expansive security analytics solution to meet all of these needs. You should be looking for a single solution combining the two.
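The threat-intelligence link in the list above is, at its core, indicator matching. A minimal, hypothetical sketch of what that looks like, with made-up indicator values and event fields:

```python
# Hypothetical threat-intel feed: known-bad IPs and domains.
bad_ips = {"203.0.113.7", "198.51.100.23"}
bad_domains = {"evil.example.com"}

# Hypothetical normalized network events.
events = [
    {"src": "10.0.0.4", "dst_ip": "203.0.113.7", "dst_domain": "cdn.example.org"},
    {"src": "10.0.0.9", "dst_ip": "192.0.2.1", "dst_domain": "evil.example.com"},
    {"src": "10.0.0.4", "dst_ip": "192.0.2.8", "dst_domain": "intranet.local"},
]

def intel_matches(events):
    """Return events whose destination matches a threat-intel indicator."""
    return [e for e in events
            if e["dst_ip"] in bad_ips or e["dst_domain"] in bad_domains]

for hit in intel_matches(events):
    print("intel match:", hit["src"], "->", hit["dst_ip"], hit["dst_domain"])
```

The value of a combined solution is that this matching happens continuously against all centralized data, rather than as a manual query someone remembers to run.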


If you want to learn about detection without the need for custom code, check out an InsightIDR demo. If searching through machine data is your current need, you can start a free trial of Logentries here and search your data within seconds.

This is a guest post by Ed Tittel. Ed, a regular contributor, has been writing about information security topics since the mid-1990s. He contributed to the first five editions of the CISSP Study Guide (Sybex, 6e, 2012, ISBN: 978-1-119-31427-3) and to two editions of Computer Forensics JumpStart (Sybex, 2e, 2011, ISBN: 978-0-470-93166-0), and still writes and blogs regularly on security topics for websites including Tom’s IT Pro and various TechTarget outlets. Learn more about or contact Ed through his website.



Working with computer logs is something of an ongoing adventure in discovery. The data from such logs is amenable to many uses and applications, particularly when it comes to monitoring and maintaining security. Even after a security breach or incident has occurred, log data can provide information about how an attack was carried out, the IP address (or addresses) from which it originated, and other packet data from network communications that could be used to identify the source of the attack and possibly also the identity of the attacker. Sometimes that means presenting log data in a court of law as evidence to support specific allegations or accusations. How does that work?


Documentary or Digital Evidence and the Hearsay Rule

In legal matters, a special consideration called the hearsay rule normally applies to evidence that may be admitted in court for a judge or a jury to consider in assessing or disproving the truth of various assertions, or in deciding guilt or innocence for an accused party. The hearsay rule states that “testimony or documents which quote persons not in court are not admissible.” This provision in the law is intended to prevent information provided by third parties who cannot be questioned about their testimony or documents, or whose credibility or veracity can be neither proven nor impeached, from affecting the outcome of a decision of guilt or innocence. For the layperson, it’s clearly tied to the notion that the accused has the right to face and question those who accuse him or her in the courtroom as the legal process works to its final conclusion.

But what about digital evidence, then? Computer logs capture all kinds of information routinely, either at regular intervals or in response to specific events. Because an accused party cannot face or question software in the courtroom, does this mean that logs and other similar computer-generated data are not admissible as evidence? Absolutely not, but there are a few “catches” involved.


The Business Records Exception…

As it happens, there are some kinds of information and documents that are NOT excluded by the hearsay rule, as explained in Federal Rule of Evidence 803(6). Most specifically, “records of regularly conducted activity” fall under this exception. These are defined in the afore-cited publication as “A memorandum, report, record, or data compilation, in any form, of acts, events, conditions, or diagnoses, made at or near the time by, or from information transmitted by, a person with knowledge, if kept in the course of a regularly conducted business activity, and if it was the regular practice of that business activity to make the memorandum, report, record or data compilation, as shown by the testimony of the custodian or other qualified witness, …, unless the source of information or the method or circumstances of preparation indicate lack of trustworthiness. The term ‘business’ as used in this paragraph indicates business, institution, association, profession, occupation, and calling of every kind, whether or not conducted for profit.”


Whew! That’s a lot to digest, but here is what it means: as long as the party that wishes to use log data as evidence can show that it routinely collected log records before (and during) the events or activities captured in those logs, they should be admissible as evidence in court. For that evidence to stand, a responsible person would have to be able to truthfully testify that logging was already in use by that time, and that the log data presented as evidence is a true and faithful (that is, unaltered) copy of the original data logged at the time the alleged events or activities occurred. But because logs are designed to provide a record of events and activities, it will be close to impossible for the other side of the case to argue that such evidence is inadmissible per se. As long as you can produce one or more credible witnesses, with supporting documentation (memos, file dates, and so forth) to show that logging started some time before the alleged events or activities occurred, and can provide records to show that the log data presented in court is identical to what was originally captured and has not been altered since, your logs can indeed tell their story in the courtroom.
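Demonstrating that a log copy is "true and faithful" in practice usually comes down to recording a cryptographic hash at collection time and re-computing it later. This is a minimal sketch (the file names are hypothetical, and this is a technical illustration, not legal advice):

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# At collection time, record the digest alongside the archived log:
#   original_digest = sha256_of("auth.log.2016-05-01")
# Later, before presenting a copy, verify it still matches:
#   assert sha256_of("auth.log.2016-05-01.copy") == original_digest
```

A matching digest, together with documentation of when it was recorded, is the kind of supporting record a custodian can point to when testifying the data has not been altered.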


Note: my thanks to Neil Broom, President of the Technical Resource Center, and a regular forensics examiner and expert witness on digital forensics, and an author of Computer Forensics JumpStart for his clear and helpful guidance in explaining log data as legal evidence in the courtroom.



Logentries by Rapid7 makes it simple to collect, analyze and ensure the security of your log data. Start centralizing your log data today with a free 30-day Logentries trial. Click here to get started.

This post is the final in a series examining the roles of search and analytics in the incident-detection-to-response lifecycle. To read the previous six, click one, two, three, four, five, or six.


As a final discussion topic around search and analytics, it is really important to talk about the different teams involved, who all have different questions to ask of the same overall data. Roles and titles can vary greatly, but for simplicity, I’ll refer to the three primary groups as ‘IT Ops’ [for anyone looking to maintain servers and software required for the employees and partners of the organization], ‘DevOps’ [for anyone focused on the availability and quality of servers and software for external customers], and ‘security’ [for the individuals looking to prevent and detect malicious activities]. Let’s talk about the order in which these teams generally develop in an organization and how that leads to politics between teams and the duplication of efforts, using the now-classic Lethal Weapon series.


IT operations monitored logs first, but not last

For our purposes here, I’m going to consider the IT Ops team the Roger Murtaugh of the company because they were likely there first, so they were around to acquire the necessary tools and lay out “the way things are done”. When the first email servers were stood up and various infrastructure deployed, IT Ops were manually checking the health of each system and receiving floods of automated emails with complex error codes when the slightest problem inevitably occurred. This was never an easy job, but the gradual addition of ticketing systems, helplines, email distribution lists, and pagers to their processes made it possible to handle the day-to-day needs and off-hours emergencies. Centralized log management solutions were then brought in to simplify the entire monitoring and troubleshooting process, and the first automated trend charts around CPU usage, errors, and bandwidth made drastic improvements on initial notifications. Troubleshooting could finally be done from a single place, so IT Ops was happy with the way things were working.


If your organization has custom-developed software, as many do today, at least one DevOps team has gotten sick of similarly SSHing into production systems to verify a new release is functional or track down a bug. For the IT Ops team, realizing this new team is looking to implement a log management solution was probably a similar shock to when Murtaugh first learned he had to work with that loose cannon, Riggs, who did things his own way. Given that change can be scary, providing high-level permissions to access your prized tools and letting another team do things “their way” both feel inherently risky. Even once these teams come to an agreement to either share a solution or work independently, they are typically minimizing very specific risks to the business with little thought to the risks the security team must obsess over.


You're mistaken if you've assumed no one else needs your logs

One of my favorite cognitive biases to consider, because we can rarely avoid it, is the "focusing illusion," best summarized as "nothing in life is as important as you think it is while you are thinking about it." This is relevant here because when you are paid, trained, and thanked for using the tools and data at your disposal for very clear purposes, you are unlikely to realize just how much value is in there for others with very different questions, or that sometimes those other questions could be more important to the company than your own. Whether or not the Murtaugh and Riggs in your organization have become partners, they most likely see security as a pest [like Leo Getz] or like Internal Affairs [Riggs's future wife, Lorna Cole], because reducing the risk of attacks and data leaks often requires very different work than keeping applications functional and available for customers, be they internal or external. Meanwhile, in reality, just as all of these characters had the same goal [of stopping serious criminals], the security team has the same overarching goal as IT Ops and DevOps: eliminating reasons the rest of the company would be unable to accomplish its goals [and yes, I remember that Leo Getz was originally a money-launderer]. If you're in IT Ops or DevOps, your security team wants you to know that they would really like access to your logs. It would help them reduce the risks they worry about, even if you don't completely understand why.


It’s healthier not to differentiate between “logs” and “security logs”

Microsoft may have biased us all by labeling some logs as "security logs" and others for different purposes, but the easiest example of non-security logs with value to security are DHCP logs; if your security team doesn't have access to DHCP lease grants and releases, it can be very difficult to find out which machine is operating at any given IP address in an incident investigation. The same goes for machine data the DevOps team considers only relevant to them - if you can see who made the last change to the production servers before they went down, you can also see whose account might have been stolen and used to change settings preventing outside access to production systems. If you never ask the security team if your logs would be useful, you are likely to make the wrong assumptions. Besides, they should absolutely have the security tools capable of ignoring anything they don't need.
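The DHCP example above is easy to sketch. Assuming the leases have already been parsed into structured records (the record layout and values here are hypothetical), answering "which machine held this IP at that time?" is a simple interval lookup:

```python
from datetime import datetime

# Hypothetical parsed DHCP lease records: (granted, released, ip, hostname)
leases = [
    (datetime(2016, 5, 1, 8, 0), datetime(2016, 5, 1, 12, 0), "10.0.0.42", "alice-laptop"),
    (datetime(2016, 5, 1, 12, 30), datetime(2016, 5, 1, 18, 0), "10.0.0.42", "guest-tablet"),
]

def host_at(ip, when, leases):
    """Return the hostname holding `ip` at time `when`, if any lease covers it."""
    for granted, released, lease_ip, host in leases:
        if lease_ip == ip and granted <= when < released:
            return host
    return None

print(host_at("10.0.0.42", datetime(2016, 5, 1, 9, 15), leases))   # alice-laptop
print(host_at("10.0.0.42", datetime(2016, 5, 1, 13, 0), leases))   # guest-tablet
```

Note that the same IP maps to two different machines on the same day; without the "non-security" DHCP logs, an investigator chasing 10.0.0.42 could blame the wrong device.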


Getting a single solution for all parties to use can save you a lot

Though the security team should have the tools that help them find what they care about in your logs, far too many organizations have three different tools for three different teams when there could be a single one. You don't need pivot tables or fancy finance visualizations to realize that buying a log management solution for IT Ops, a separate search solution for DevOps, and also a SIEM for the security team is a very costly way to solve different problems with the same data. Even if you forget about the three different purchases, failing to take advantage of discounted tiers of pricing, you can see that three different groups have to manage three different solutions focused on search and analytics which only differ by the questions they ask and use cases to which they apply. There may have been technology challenges preventing this before, but you should demand better now. Ask for real-time search and analytics flexible enough to satisfy Murtaugh's, Riggs's, Getz's, and Cole's problems. You all want to analyze your data to remove anything which could possibly hinder your organization's progress.

If you are a part of the IT Ops or DevOps teams, you can start a free trial of Logentries here and search your data within seconds.
If you are on the security side of things, check out an InsightIDR demo.

This post is the penultimate in a series examining the roles of search and analytics in the incident-detection-to-response lifecycle. To read the first five, click here, here, here, here, and here.


As the amount of data within our organizations continues to grow at a nearly exponential rate, we are forced to look for more efficient ways to process it all for security use cases. If we want to keep up as these problems and systems scale, we need to learn from industries much older than computing. For example, gold miners have graduated from just a shovel and pan to automating as much of the process as possible, reducing the manual labor to the areas machines just cannot handle. From the beginning of the data analysis process, there are four pieces of your security data processing which can be automated, and a fifth for which you need human eyes and minds.


Automate the collection from all potentially valuable data deposits

If you were a fan of either Pale Rider in the 80s or White Fang in the 90s, you can probably recall the way gold was mined for centuries. Once someone decided they had a potential gold deposit, pick axes and shovels were popular tools to free the gold particles from the surrounding rock. Using blunt instruments in very specific places is quite similar to the original process to debug software issues and look for malicious activity by logging in to a specific system and searching through the logs. In recent years, over a dozen stages of crushing rock, pumping water, applying pressure, and various other methods of filtration have been introduced to reduce the need for luck when “digging for gold”. Thankfully, for every security team’s sanity, data prospecting from system to system was similarly seen as too laborious to scale around ten years ago and any relevant data is now centralized. You should demand a software solution which offers multiple options for data collection (read: not just a syslog feed) and allows you to monitor your data sources for inactivity or sudden failures in collection.
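Monitoring data sources for inactivity, as demanded above, can be as simple as tracking when each source last sent data and flagging the quiet ones. A minimal sketch with hypothetical source names and timestamps:

```python
from datetime import datetime, timedelta

# Hypothetical record of when each data source last sent an event.
last_seen = {
    "firewall": datetime(2016, 5, 1, 10, 58),
    "dns": datetime(2016, 5, 1, 10, 59),
    "web-proxy": datetime(2016, 5, 1, 7, 12),  # silent for hours
}

def stale_sources(last_seen, now, max_silence=timedelta(minutes=30)):
    """Return the sources that have gone quiet for longer than max_silence."""
    return [source for source, ts in last_seen.items() if now - ts > max_silence]

now = datetime(2016, 5, 1, 11, 0)
print(stale_sources(last_seen, now))  # ['web-proxy']
```

A silent source is often the first sign of a broken forwarder, and a collection gap discovered mid-investigation is far more painful than one caught the same morning.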


Automate the normalization of data

I won’t force the analogy into every section, because reducing large chunks of rock down to just the valuable minerals doesn’t require understanding the rock, or adding more rock, in the way that interpreting data requires more data. In order to simplify the later processing of the raw data, you must address an area less frequently automated by off-the-shelf software solutions: normalization. Many solutions put the onus on the customer to configure connectors, match specific fields in their logs to the “normalized” data field, and maintain this parsing and translation as logs change over time and new devices are deployed. There is no reason this shouldn’t be automated for the vast majority of data sources common to organizations today. Your team shouldn’t have to deal with these manual tasks unless you are working to monitor your own internal systems through custom logs.
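To show what normalization actually involves, here is a minimal, hypothetical sketch: two devices report the same kind of event in different raw formats, and both are mapped onto one shared schema. The log formats and regexes are invented for illustration.

```python
import re

# Hypothetical raw lines from two devices describing similar blocked traffic.
RAW = [
    'May  1 09:00:01 fw1 DENY src=10.0.0.4 dst=203.0.113.7 port=443',
    '2016-05-01T09:00:02Z proxy2 blocked 10.0.0.9 -> evil.example.com',
]

# One pattern per known format, each mapping onto the shared field names.
PATTERNS = [
    (re.compile(r'(?P<host>\S+) DENY src=(?P<src>\S+) dst=(?P<dst>\S+)'), 'firewall_deny'),
    (re.compile(r'(?P<host>\S+) blocked (?P<src>\S+) -> (?P<dst>\S+)'), 'proxy_block'),
]

def normalize(line):
    """Map a raw log line onto a shared schema: {event, host, src, dst}."""
    for pattern, event in PATTERNS:
        m = pattern.search(line)
        if m:
            return {"event": event, **m.groupdict()}
    return {"event": "unparsed", "raw": line}  # keep unknown lines, don't drop them

for line in RAW:
    print(normalize(line))
```

The maintenance burden the post describes is exactly this `PATTERNS` table: every firmware upgrade or new device means another format to map, which is why it belongs in the vendor's automation rather than on your team.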


Automate the internal attribution of activity

After the data is normalized, a massive amount of time is generally spent figuring out which actions were actually taken, and it shouldn’t be so time-consuming. If your automation stops at normalization, it takes a great deal of working knowledge just to correlate the normalized data with other information through the single attribute they share: concurrency. Your team shouldn’t have to first identify a specific log line or other data point as being of interest, and then conduct a few more queries or “pivots” to other data before you can determine which user was responsible for the action and on which asset. This attribution should be automatically performed as the data is ingested. Why would you ever want to look at the data without knowing these details?
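Correlation by concurrency can be sketched very simply: attach the user who was logged in to the asset at the moment the event occurred. The session records and event fields here are hypothetical, and a real pipeline would do this as data is ingested rather than after the fact.

```python
from datetime import datetime

# Hypothetical session records: (user, asset, session start, session end).
sessions = [
    ("alice", "laptop-07", datetime(2016, 5, 1, 8, 0), datetime(2016, 5, 1, 17, 0)),
]

def attribute(event_time, asset, sessions):
    """Attach the user whose session was concurrent with an event on an asset."""
    for user, sess_asset, start, end in sessions:
        if sess_asset == asset and start <= event_time < end:
            return user
    return "unknown"

event = {"time": datetime(2016, 5, 1, 9, 30), "asset": "laptop-07", "action": "config_change"}
event["user"] = attribute(event["time"], event["asset"], sessions)
print(event)  # the config change is now attributed to alice
```

Doing this lookup once at ingestion means no analyst ever has to run the "who was on that box?" pivot by hand during triage.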


Automate the behavioral analysis

The last stage of automation you should demand of your monitoring software is the explanation of the behavior. What did the user actually do on the asset? You shouldn’t have to diligently decipher these details every time you investigate the same kind of activity in your machine data. Without the context of the amount of data transmitted externally or the destination organization, a lot of time can be spent simply to find out your code repository vendor now has a few new public IP addresses. And as long as the previous stage is automated, you can immediately see that a software developer transmitted this sudden increase in data from her primary machine. These details should all be obtainable at a glance. Critical thinking should be reserved for determination of intent and recognition of new, questionable behavior.
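The "sudden increase in data" example above implies a baseline per user. A minimal, hypothetical sketch of that kind of behavioral check, using a simple standard-deviation threshold in place of the more sophisticated analysis a real product would apply:

```python
from statistics import mean, stdev

# Hypothetical daily outbound transfer volumes (MB) for one user over two weeks.
history = [120, 95, 140, 110, 130, 105, 125, 118, 132, 99, 141, 122, 108, 115]
today = 900  # MB transferred so far today

def is_anomalous(history, value, sigmas=3):
    """Flag a value far above this user's own historical baseline."""
    mu, sd = mean(history), stdev(history)
    return value > mu + sigmas * sd

print(is_anomalous(history, today))  # worth a look, though not proof of malice
```

This is exactly where critical thinking takes over from automation: the analytics surface that the developer's transfer is unusual for her, and a human decides whether it is a new code-mirroring job or data theft.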


Click here to learn more about User Behavior Analytics.


Ease the manual analysis with usability, visualizations, and flexible search

Once you have automated these first four stages of data analysis, the security team can spend a lot more of its time deciding whether the activity is malicious and what should be done about it. It's like the process of panning the loose surface sediment in hopes of leaving nothing but the gold and other high specific gravity materials for manual review. It doesn't scale to only perform these actions all day because you don’t know where to look in the data, but it is very effective as a complement to the automated analysis. With the four stages of automation, you can already have enough direction and context to know where you’re looking before taking this last action and planning remediation. By pairing the automation with data visualizations and rapid search capabilities, you can make this final stage as painless and quick as possible for your team to act with confidence.


If your team needs to use the extensive log, and other machine, data in your organization to effectively detect attackers as they laterally move from initial compromise to multiple endpoints and, eventually, the systems containing the most valuable data, you should not be forced to build every stage of the processing yourselves like in the old days.


If you want to learn Rapid7’s approach to automating the first 80%, check out an InsightIDR demo. If the last 20% of manual effort is your challenge, you can start a free trial of Logentries here and search your data within seconds.

This post is the fourth in a series examining the roles of search and analytics in the incident-detection-to-response lifecycle. To read the first three, click here, here, and here.


Nearly a year ago, I likened the incident handling process to continuous flow manufacturing to help focus on the two major bottlenecks: alert triage and incident analysis. At the time of these posts, most incident response teams were just starting to find value amid the buzz of “security analytics,” so I focused on how analytics could help those struggling with search alone. As more teams have brought the two capabilities together, the best practices for balancing them to smooth the process have started to become apparent, so here is what we've heard thus far.


Analytics are meant to replace slow and manual search—but not everywhere

You need to be able to use search as if you’re zooming in to find the supporting details for your initial conclusion. It would be great if we could solve every incident like Ace Ventura deducing that Roger Podacter was murdered by remembering a string of seemingly unrelated data points, but as Don Norman explained in The Design of Everyday Things, the limitations of human working memory demand better-designed tools to balance the trade-off between “knowledge in the head” and “knowledge in the world”. Since investigating an incident involves a great deal of information being gathered, “knowledge in the world” must be maximized through analytics and effective visualizations for the context they bring, minimizing the necessary “knowledge in the head”.


Search cannot be your only tool for alert triage. Search results provide immediate "knowledge in the head", but little else. They are great for answering clear and pointed questions, but there is not enough context in search results alone. Receiving an alert is the beginning of the incident handling process, and if the alert is nothing more than a log line, triage can be a gut feeling based only on previous experience with the specific alert triggered. Sometimes, seeing the log line in the context of the events immediately preceding and following it is enough to understand what occurred, but often, higher-level questions need to be asked, such as “who did it?”, “on which asset?”, and “is this common for that person and system?” None of these questions immediately explains the root cause of the triggered alert, but they make it exponentially easier to triage. If you need to manually search your machine data for the answer to every one of these questions to triage every alert, you need to provide your team with a great deal more “knowledge in the world”. This should include manicured views containing the answers to most of these questions on display next to the alert, or a simple click away.
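One of the cheapest forms of "knowledge in the world" is automatically showing the events surrounding an alert instead of making the analyst query for them. A minimal, hypothetical sketch (the event stream and margin are invented for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical event stream, already sorted by time: (timestamp, user, message).
events = [
    (datetime(2016, 5, 1, 9, 59, 50), "alice", "login laptop-07"),
    (datetime(2016, 5, 1, 10, 0, 0), "alice", "ALERT: rare process started"),
    (datetime(2016, 5, 1, 10, 0, 5), "alice", "outbound connection to 203.0.113.7"),
    (datetime(2016, 5, 1, 11, 30, 0), "bob", "login laptop-02"),
]

def context_window(events, alert_time, margin=timedelta(seconds=30)):
    """Events immediately before and after the alert, for triage at a glance."""
    return [e for e in events
            if abs((e[0] - alert_time).total_seconds()) <= margin.total_seconds()]

for ts, user, message in context_window(events, datetime(2016, 5, 1, 10, 0, 0)):
    print(ts.time(), user, message)
```

Rendering this window next to the alert answers "who did it?" and "what happened around it?" before the analyst has typed a single query.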


If you are analyzing dozens of incidents per day, every repetitive action is building the bottleneck. This is where analytics play a part in the incident handling process, as they only add value if the conclusion they reach is easy to understand. Dozens to hundreds (and hopefully not thousands) of alerts per day require a progression of quick conclusions which others have made in the past before finding a very specific answer to the incident at hand. Easy-to-understand analytics are the answers to the many early questions you need to ask before you understand the root cause of the alert and decide whether it was malicious or an action which should be permitted in the future. If the analyst performing triage and the incident analyst (who may be the same person) have access to these analytics as soon as they receive the potential incident in their queue, it vastly reduces the amount of manual searching through data they must perform, closing the incident faster than previously possible.


It hurts the flow when you cannot immediately access the data you need

In the previous post of this series, I covered the need to collect data outside (and inside) the traditional perimeter. An adjacent problem is the situation in which the data is being collected but not readily made available to the IR team. Needing to wait for the data to be restored or indexed is a bit like if you were playing Clue with your family and every time someone said "It was Colonel Mustard, in the conservatory, with the candlestick", you all took out your phones and caught up on emails for thirty minutes as the player-to-the-left searched through filing cabinets for her cards disproving the suggestion.


Expensive tools and verbose data sources have led too many teams to delay investigation until the data is acquired for indexing. The first common cause of this challenge is the reality of departmental budgets, which forces the majority of teams to pick and choose the data they make unconditionally available to search. Sadly, this leads many teams to choose between firewall, DNS, or web proxy data to keep immediately searchable for incident responders, storing the rest on the device itself or pushing it to cold storage. Then, when an incident is deemed severe enough, the team is waiting hours to restore (or forward) the data and get it indexed for search. This sounds crazy to anyone who hasn't had to stay under a budget, but these three devices are logging so much noise in a typical day that it becomes the only perceived sacrifice the team can make. Your team shouldn't have to worry about exceeding data thresholds or adding more members to your cluster just to have this key data constantly available.


Setting reminders to search the data once it's available severely challenges the "without interruption" aspect of continuous flow. The second guaranteed "data loading..." way to slow your team's response to an alert is using technology that takes up to twenty minutes to make it searchable. If receiving an email alert and copying it to your calendar to remind you to investigate in twenty to thirty minutes doesn't sound like the best use of your time, you are likely to be frustrated by the inability to search at the moment an incident has your attention. Unless you're the one incident responder who never has enough to do, you'll probably be busy looking into something else by that time, meaning an even longer gap before the initial alert is investigated, or worse, it gets forgotten altogether until you or someone else on the team reviews the open incidents hours later.


Everything from the query language to the search results needs to be designed for junior analysts

No matter the maturity of your team, the best search and analytics capabilities offer little value if they don't account for a single truth: you have to hire and train entry-level analysts, just as any technical team does. If every tool at your team's disposal is designed with a focus only on the type of data it obtains and not on usability, your junior analysts have steep learning curves to climb before they can contribute to the efficiency of the entire process. This is not the new hires' fault. If you need to learn a complex query language, become an expert on each data type, or decipher search result screens that perfectly depict "information overload" before you can contribute to the team, you need to get more comfortable blaming the software at your disposal. Going back to Don Norman, we have all become too accustomed to blaming the user when the design is at fault. Every software solution (for search or otherwise) your team uses needs to be designed for an entry-level analyst.


If searching through machine data is your current need, you can start a free trial of Logentries here and search your data within seconds. If you want to learn more about building your incident response capabilities, check out our incident response toolkit.

This post is the third in a series examining the roles of search and analytics in the incident-detection-to-response lifecycle. To read the first two, click here and here.


In the second blog of this series, I touched on the need for solutions more flexible than the traditional SIEM architecture focused primarily on receiving logs from the security appliances in your infrastructure. This wasn’t only a passing comment; in the past five years, the approach to compromising an organization and corresponding work to defend against it have changed significantly as new technologies have emerged for use by both sides.


Attackers get in by a variety of perimeter-blind routes, which means an expansion of the data sets security teams need to interpret

Whether you use the Verizon Data Breach Investigations Report as your source of truth or rely on anecdotal evidence from the details which gradually emerge, it is clear that web application attacks and stolen credentials have comprised the majority of successful entry points in recent years. This is especially frightening for companies using perimeter security devices as their only source of prevention and/or detection. Not only are they not watching the right entry points for most of today's attacks, but they likely have no access to the data which would help them detect these attacks, even if they possess the tools and knowledge to analyze it effectively.


The unfortunate reality for security teams today is that they cannot simply forget about perimeter devices and dedicate months of their lives to overhauling their entire infrastructure while learning the most advanced practices for mitigating these primary attack vectors. Using firewalls, intrusion detection/prevention systems, and two-factor authentication on VPNs should be considered a starting point for security teams. Removing these tools from the equation would mean initial compromise techniques currently with shrinking success rates would reclaim the top of the list. This gradual evolution of intrusion techniques forces the need for a continually expanding breadth of knowledge on the security team and more direct access to other teams' knowledge when an incident warrants it.


Tracking lateral movement is near-impossible if you only have data on centralized servers

Unless you expose your servers with the most sensitive [read: monetizable] information to the internet, attackers are not going to stop after the first compromise. Even in the most poorly secured environments, it would be rare to store the data most critical to the business in a web application on the network's edge. Once the initial compromise is successful, there are many ways to pivot to other hosts from the inside. Harvesting credentials for moving from system to system is a popular method of stealthy reconnaissance as more of the network is discovered, and it can be done with polymorphic malware to evade known-hash detection or with any number of attacker-at-keyboard toolkits. This presents a massive challenge for investigations in the vast majority of organizations, which are unable to monitor the wealth of information isolated on their endpoints.


As if it weren't a large enough challenge to monitor every endpoint on the network, and at your employees' homes, an attacker can also move laterally to one of your organization's cloud environments, whether it hosts employee data, customer data, or other valuable data like source code. Segmenting valuable data across distinct cloud services is a great way for modern enterprises to reduce the impact of a compromise, but they all need to be monitored and secured just as you would any area of your physical network. Otherwise, analyzing an incident without activity from your managed clouds can feel like the scene at the end of Blue Streak when the police are powerless to follow Miles any further because he has crossed the border into Mexico. No portion of your environment, even virtual and hosted infrastructure, should be just beyond the reach of your security team.


You need to analyze machine data from inside and outside the traditional perimeter to effectively investigate today’s attacks

All of this seems like a daunting assignment because it is. If you don't have the benefit of a large team who understands the structured and unstructured data across your environment and the resources to automate the correlation and internal attribution across all of these data sources, your team is going to spend a significant number of hours duplicating manual efforts to analyze the incidents you detect. There will be incidents closed in minutes because of their familiar feel to a senior analyst and there will be incidents which take junior analysts days to effectively investigate, but there will be little time for other duties outside of alert triage or incident analysis.


If you have a different user interface for your perimeter devices, malware detection, endpoint detection, and cloud monitoring, you likely spend a lot of time switching between monitors and browser tabs looking for a string to tie events together. Too much of this work has become a cascade of similar searches across unlike data sets. The ability to search will always be a necessity [as I wrote in the first post of this series], but siloed search is too manual and slow to stand alone as an investigative tool, especially if it means different search capabilities in different places. Search cannot be an afterthought. Each data source can provide valuable context to search results after analytics have been applied.
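That cascade of similar searches can be made concrete with a small sketch. This is a hypothetical illustration, not any vendor's API: the silo names, log lines, and the indicator are all invented, and each silo gets its own repeated search for the same string, which is exactly the manual pivot described above:

```python
import re

# Invented examples of siloed log sources, each with its own console
# in real life. The same indicator must be chased through each one.
SILOS = {
    "firewall": [
        "2016-03-01T10:02:11 DENY src=203.0.113.7 dst=10.0.0.5 port=445",
        "2016-03-01T10:02:14 ALLOW src=10.0.0.9 dst=10.0.0.5 port=443",
    ],
    "web_proxy": [
        "2016-03-01T10:03:02 GET http://203.0.113.7/payload.bin user=jdoe",
    ],
    "endpoint": [
        "2016-03-01T10:03:40 process=powershell.exe parent=winword.exe host=WS-12",
    ],
}

def search_silo(lines, indicator):
    """One search in one silo: return every line mentioning the indicator."""
    pattern = re.compile(re.escape(indicator))
    return [line for line in lines if pattern.search(line)]

def pivot_everywhere(indicator):
    """The manual cascade: repeat the same search in every silo."""
    return {name: search_silo(lines, indicator) for name, lines in SILOS.items()}

hits = pivot_everywhere("203.0.113.7")
```

Here the firewall and proxy silos each hold part of the story while the endpoint silo returns nothing, and a responder only learns that by running the same query three times in three places.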


The paired solution needs to be a single place to pivot through relevant data from your entire environment. This was once dubbed a "single pane of glass," and today it is at the center of what analytics solutions aim to offer incident responders. If the data is structured and understood, analysis can be sped up by automatically classifying it as user or system behavior. Detection and analysis solutions from other security vendors should become a source of additional context, rather than another console for independent analysis. If classification is currently a challenge, allow easy exploration through search. One place to send the breadth of data. One place to access a mix of automatically drawn conclusions and conclusions requiring a human mind to interpret.


Click here to learn more about User Behavior Analytics.


If you want to learn more about building your incident response capabilities, check out our incident response toolkit. If searching through machine data is your current need, you can start a free trial of Logentries here and search your data within seconds.

This post is the second in a series examining the roles of search and analytics in the incident-detection-to-response lifecycle. To read the previous, click here.


Various security vendors have made very public declarations, from claiming "SIEM is dead" to asking if it has merely "lost its magic." Whatever your stance on SIEM, what's important to recognize is that while technologies may fail to solve a problem, this doesn't make the problem any less serious or prevalent.


The debate over SIEM’s demise is a distraction

SIEM’s supposed magical life is unlikely to suddenly end for another decade, given the time it takes for the momentum behind the industry’s largest investment to fade. Despite this impressive market life expectancy, former SIEM vendors are clamoring to pile on and replace the dead SIEM 1.0 with their SIEM 2.0, or even SIEM 3.0. But the only reason these statements have made such an impact is the wide-ranging expectations of those engaged. Teams properly equipped to take the blank slate that often qualifies as SIEM technology and build an effective incident handling process on top of it realize its value. The other 95% of security teams, who invest the majority of their annual budget in simply deploying a SIEM, continue turning it into shelfware to dust off whenever a high-risk audit appears on the calendar.



Unlike Jumanji, where a single toy automagically made jungles and creatures appear, SIEM solutions generate magic not through the technology, but through the people customizing and using it. I’ll skip the obvious Harry Potter reference here to avoid insulting any SIEM engineers who don’t like fantasy films, but if you cannot afford to hire a team of these wizards, you are doomed to carry on with the same low ROI as the other organizations out there that receive hundreds of thousands of alerts per day and spend day-long efforts constructing all of the complex queries necessary to answer enough questions to close an incident investigation.


These important problems are not close to disintegrating

If you ask the wrong people, SIEM solved security in 2006. Realistically, though, it solved a lot of the leading challenges security and IT faced at the time by putting historical data in one place. Centralization of logs made it possible for IT, networking, and security teams to address their biggest problem: accessing the data from their most critical servers and networking devices in a single place to troubleshoot and identify issues quickly. Never before could the teams who investigate outages, software errors, and security incidents, and even those who answer auditors’ questions, all go to a single place and do so within just a couple of days of digging through the data.


However, despite having done more to address these problems than any preceding products, the biggest reason SIEM’s magical death is being so heavily discussed is its failure to consistently solve the most challenging of them: all compromise detection needs in a “single pane of glass.” Again, the wizards out there have managed, with the help of data scientists joining the team, to build some very impressive SIEM-based detection cores for their incident response armies, but the vast majority of organizations bounce from an IPS management console to a SIEM solution for some more details, then obtain more details from an endpoint forensics solution and push them through an unstructured data search solution to answer the final questions necessary to respond appropriately to the incident in question.


SIEM is rapidly losing trust because of its inability to adapt and its aforementioned dependence on internal experts

There is absolutely a wealth of valuable data in even the least-used SIEM solution deployed today, but these largely outdated software solutions were designed to operate in an environment significantly different from today’s. When organizations ran all third-party and internally developed software on physical servers they either housed inside their offices or hosted in a rented space, there was an illusion of control because of the option to pull the physical power plug from the systems. However, thanks to companies like Google and Amazon, no successful company today still operates in that version of reality. Ten-year-old solutions should not be expected to efficiently adapt to the modern need for elastic cloud computing and constant access to the corporate network via mobile devices.



Just as we learned from Days of Thunder, even the most effective driver (or incident responder) cannot perform with technology meant for a different environment. When Cole Trickle first moved from open-wheel vehicles to stock cars, he had to swallow his pride and let Harry teach him how to use the phenomenal car he’d built for him. Now, I’m no stock car racing expert [I’m not even a passing fan], but I do know I’m not insulting SIEM solutions with this analogy. You cannot just make some minor modifications to a stock car and compete in an open-wheel race, just as you cannot simply start piping data from your cloud and mobile management systems into your pre-existing SIEM server(s). You can invest in more hardware, hire a larger team, and even build a complex ETL pipeline to get your data into more modern data stores leveraging Hadoop [buzzword!], but the truth is, as you fall victim to this sunk cost fallacy, you’ll spend insanely large sums of money trying to realize the value you could get by other means for a fraction of the cost.


Modern flexible solutions built for the security problems are needed to address these challenges

If you want to get back to addressing the problems SIEM was meant to solve, but do so in the modern landscape, you need to switch to search and analytics solutions built solely to address those problems: one place to search all of your data, and to detect and investigate attacks in the environment in which we actually live today. You could even reduce the time your team regularly spends maintaining hardware and custom software enough to focus more on the day-to-day questions they need to answer.


Are you building in-house software? That’s a facetious question. Everyone is. If you want to deploy a solution now designed to adapt as technology evolves, you need it to be immensely flexible to the machine data it ingests. If the goal is simple log management and search, you don’t need an overly complex solution bolted onto an existing SIEM. You need flexible machine data search.


If your goal is simplifying how your team automates the alerts from your many complex detection technologies so you can spend more and more time hunting for new threats, you need behavioral analysis designed for the continually growing data generated by the people and systems in your organization. If you need to analyze incidents across your endpoints and managed cloud environments, you need an analytics solution designed solely for today’s incident response teams.


If the problems described here are similar to yours, Rapid7 has a number of Incident Detection & Response solutions that you can read about here. They are continuing to adapt as technology and the attackers do.

This post is the first in a series examining the roles of search and analytics in the incident-detection-to-response lifecycle.


Strong data analytics have recently enabled security teams to simplify and speed incident detection and investigation, but at some point in nearly every incident investigation, a search through machine data is necessary to answer a one-time question before the investigation can be closed. Whether your incident response team is just trying to combat the flood of alerts coming from a series of noisy detection appliances or is a group of seasoned specialists who have automated a sizable amount of data enrichment to focus on manually hunting sophisticated attackers, there are many reasons you need the ability to search through disparately sourced machine data for answers.


Attackers try for easy, then adapt and iterate

For those who mostly hear about massive data breaches via the major media outlets prior to the substantive details being publicized, hindsight bias brings a great deal of thoughts like "they should have known to watch their medical record database more closely" and "how could they have not watched the applications on POS devices better?", but this drastically over-simplifies the attacks themselves. Every attack plays out as the intruders adjust to the preventive controls in place. Stealing partner credentials was likely not the first technique attempted for initially gaining access to the Target network, but it was the first to work. Though the initial compromise typically occurs via either stolen credentials or a web application exploit, the next step will follow the same trial-and-error process, trying basics like harvesting more credentials, scanning for potential new hosts, or reaching for a five-year-old remote exploit, guided by what works in that one moment in time.



If you've watched Braveheart as many times as I have [and you probably haven't], you may see the similarities between its [historically inaccurate] battle tactics and a cyber-attack. Whether it was luring a large number of soldiers into a gully before revealing your army on the cliff above, pretending to have no plan for the English cavalry, or a gentlemen's agreement with the onrushing Irish, every skirmish was started and progressed in a different way. Being crafty doesn't always lead to success, but it leads to a lot more success than quitting after trying the easiest tactic and failing.


This unpredictable flow of every attempted compromise is why organizations so often don't know what data they should be collecting for investigative reasons. There will always be high-value data sources in organizations which provide necessary context for detection and investigation, but they will have limitations when attackers adapt to what they successfully compromise. Analytics alone won't help you explore data which may be relevant to only one incident investigation, one time.


Every organization and corresponding environment is unique

It is frequently underestimated just how organically every network grows, but as evidenced by the recent report on the IRS struggling to upgrade its Windows systems, it is extremely challenging just to identify every endpoint and server with access to your network. Add to that just how easy it is for a software developer to spin up a small cloud environment or a financial analyst to take work home on an iPad, and you start to see the attack surface expand rapidly.


If you are limiting the data you collect for monitoring and investigation to what is easily centralized or on the managed perimeter, your incident response team is forced to close a great deal of incoming alerts and follow-up investigations without the high level of confidence they would prefer. They simply don't have enough of the data they would need to know what happened.


Custom application logs and unstructured evidentiary data are needed for confident incident analysis

Of even greater concern than the large blind spots in the structured data is the unstructured data often unavailable to security teams. If your organization has a great deal of internally developed applications, as most modern businesses do, and you're only monitoring the logs from your firewall, domain controller, and web proxy, you'll be blind to an attacker moving from server to server in your production environment, looking for data to monetize. For those who think this is rare, our Logentries team estimates that thirty percent of the logs we process for our customers are completely custom. In these scenarios, where the data relevant to completing incident analysis could be unstructured and unique to your company, your security team needs the ability to search it quickly for data points which tie it to the broader investigation. Inflexible log management solutions are of no help here.
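What "searching custom, unstructured logs" looks like in practice can be sketched in a few lines. This is a hypothetical illustration with invented log lines, field names, and indicators; the point is that a plain term search works even when the log format is unique to one company:

```python
import re

# Invented examples of custom application log lines -- unstructured
# and unique to one company, so no pre-built parser understands them.
APP_LOG = [
    "svc=billing txn ok acct=48213 took 12ms",
    "svc=billing WARN retry acct=48213 from 10.0.4.22",
    "svc=inventory sync complete items=902",
    "svc=billing ERROR auth failure acct=48213 from 10.0.4.22",
]

def grep(lines, *terms):
    """Return lines containing every search term (case-insensitive)."""
    patterns = [re.compile(re.escape(t), re.IGNORECASE) for t in terms]
    return [line for line in lines if all(p.search(line) for p in patterns)]

# Tie the account from an alert to activity from a suspect host.
suspicious = grep(APP_LOG, "acct=48213", "10.0.4.22")
```

No schema is needed up front; the responder only needs the data to be searchable the moment the question arises.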


Understanding the root cause can take many forms

What incident response teams rarely say, but typically know, is that the major reason incident management is so difficult is that you're working backward: using machine data generated as the result of some actions to determine exactly what an actor, be it human or software, did to cause that data to be generated. Then, you not only have to understand the behavior, but also reach a high enough level of confidence to explain the actor's intentions, so you can either take action (when malicious) or close the investigation (when benign). Every person and team has its own comfort level, but the process to reach it is different for every type of initial behavior, and too often it isn't reached because of insufficient data or the inability to properly explore it in a timely fashion.


If you really want to understand the root cause and close your investigations at a pace to avoid alert fatigue, you need the context around user and endpoint behavior you can only obtain from an analytics solution plus the flexibility and ease of machine data search solutions. You cannot limit yourself to one or the other and maintain that high level of confidence. If this sounds interesting to you, we do have a number of Incident Detection & Response solutions that you can read about here.

The Third Circuit Court of Appeals upheld the Federal Trade Commission’s authority to sue Wyndham Worldwide over at least three data breach incidents that occurred between 2008 and 2010. The incidents exposed more than 600,000 consumer payment card account numbers and led to more than $10 million in fraud loss, according to the FTC complaint. Wyndham Worldwide had challenged the FTC complaint in an appellate court, saying the FTC was over-reaching its authority, but lost the appeal in a 3-0 vote. The unanimous ruling is important because it shows the government is taking bold steps toward holding data custodians accountable for the data in their care, and the courts are agreeing.


The Wall Street Journal blogged about this and put a call out to CIOs to be careful about how they handle data security. “CIO[s] should act defensively to mitigate the company’s exposure to claims by the FTC and other government regulators,” state the authors.


The article mentions several important points:

  • Compliance with the NIST Cyber Security Framework. The National Institute of Standards and Technology Cyber Security Framework is guidance, based on existing standards and good security practices, for better managing and reducing organizational risk. It is becoming a de facto standard for cyber security. The challenge for organizations is determining the relevance of, and how to implement, the more than 350 recommendations in the NIST CSF.
  • Updating of data and privacy policies. Even if your company has data security policies, when were they last reviewed and revised to include defense against the most recent threats? Any organization that handles HIPAA or PCI data is required to conduct ongoing reviews to ensure its security measures are current and compliant, and may be required to demonstrate this to auditors.
  • Report by respected third-party consultant. A security assessment is a key step in understanding your organization’s level of readiness and maturity. It reveals security gaps, the associated risks, and can help organizations factor high-impact investments into their future business plans. Annual security assessments from respected security consultants can help your organization adapt to new threats, increase employee awareness, and assist in the formulation of a strong security strategy.


The government is getting serious about data breaches. The gap between what is required for protecting data and organizations' knowledge of how to implement it is widening. As data continues to grow, and more rules are passed on how it is to be governed, this gap, and the accompanying fines, will become paramount issues for enterprises to manage.

Rapid7’s Global Services organization has experience in all of these areas, and partners with clients to assess organizational security maturity, provide recommendations and advice on how to address gaps in security processes and procedures, and assist in the development of security programs and policy. These engagements help clients reduce their security risk through the delivery of robust, repeatable, and easily governed processes.

I am happy to answer any questions you might have regarding security maturity, cybersecurity frameworks, or a host of other information security services. Please feel free to contact me @JoelConverses on Twitter or Skype. I look forward to chatting with you!

- Joel Cardella