
Disclosure Summary

ManageEngine OpUtils is an enterprise switch port and IP address management system. Rapid7's Deral Heiland discovered a persistent cross-site scripting (XSS) vulnerability, as well as a number of insecure direct object references. The vendor and CERT have been notified of these issues. The version tested was OpUtils 8.0, which was the most recent version at the time of initial disclosure. As of today, the current version offered by ManageEngine is OpUtils 12.0.


R7-2016-02.1: Multiple Persistent XSS Vulnerabilities

While examining ManageEngine OpUtils v8.0, an enterprise switch port and IP address management product, Rapid7 discovered a persistent cross-site scripting (XSS) vulnerability. This vulnerability allows a malicious actor to inject persistent XSS payloads containing JavaScript and HTML code into various fields within the product's Application Program Interface (API) and the old-style User Interface (UI). When this data is viewed within the web console, the code executes within the context of the authenticated user. This can allow a malicious actor to conduct attacks that modify the system's configuration, compromise data, take control of the product, or launch attacks against the authenticated user's host system.


The first series of persistent XSS attacks was delivered to the OpUtils product via the network discovery process. When a network device is configured with SNMP, the SNMP OID object sysDescr can contain HTML or JavaScript code. Because the product does not properly sanitize this input, the code is stored for persistent display and execution. This is similar to the vulnerabilities disclosed in Multiple Disclosures for Multiple Network Management Systems.


The following example shows the results of discovering a network device where the SNMP sysDescr has been set to <SCRIPT>alert("XSS-sysDescr")</SCRIPT>. In this example, when the device is viewed within the OpUtils API UI web console, the JavaScript executes, rendering an alert box within the authenticated user's web browser.
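The underlying flaw is missing output encoding: SNMP-derived strings are rendered into the console's HTML verbatim. As a hedged illustration (not OpUtils' actual code), a display layer that escapes these values before rendering would neutralize the payload:

```python
import html

def render_sysdescr(sysdescr: str) -> str:
    """Escape an SNMP-derived string before embedding it in web UI HTML.

    Encoding at output time neutralizes payloads such as the
    <SCRIPT>alert(...)</SCRIPT> value planted in sysDescr.
    The <td> wrapper is an illustrative stand-in for the real UI markup.
    """
    return "<td>{}</td>".format(html.escape(sysdescr))

malicious = '<SCRIPT>alert("XSS-sysDescr")</SCRIPT>'
safe_cell = render_sysdescr(malicious)
```

With escaping in place, the browser displays the attacker's string as inert text instead of executing it.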



Figure 1: JavaScript Alert Box


After switching version 8.0 from the API UI to the old UI schema, several other XSS injection points were identified. These include persistent XSS attacks, also delivered to the OpUtils old UI via the network discovery process. If a network device is configured with SNMP and the following SNMP OID objects contain HTML or JavaScript code, the code will be delivered to the product for persistent display and execution.






The sysDescr and sysLocation values triggered when viewed within IP History, as shown in Figure 2 and Figure 3.



Figure 2: sysDescr injected XSS



Figure 3: sysLocation injected XSS


In addition, sysDescr, sysLocation, and sysName triggered when viewed within Device History, as shown in Figure 4.



Figure 4: sysName injected XSS


The second method of injection involved SNMP trap messages. By spoofing an SNMP trap message and altering the data within it, a malicious actor can inject HTML and JavaScript code into the product. When the trap information is viewed within the SNMP Trap Receiver, the code executes within the context of the authenticated user. Figure 5 shows an example attack where a trap message containing the HTML code "<embed src=//>" was used to embed Flash content into the Trap Receiver section of the UI.



Figure 5: XSS Via SNMP Trap Injection
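To make the trap-spoofing vector concrete, here is a minimal, standard-library-only sketch of how an SNMPv1 trap carrying an HTML payload in a varbind is BER-encoded on the wire. This is not the tooling used in the research; the community string, enterprise OID, and addresses are illustrative placeholders.

```python
def ber_len(n):
    # BER length octets: short form below 128, long form otherwise.
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def tlv(tag, payload):
    return bytes([tag]) + ber_len(len(payload)) + payload

def encode_oid(dotted):
    # First two arcs pack into one byte; the rest are base-128 encoded.
    arcs = [int(a) for a in dotted.split(".")]
    out = bytearray([40 * arcs[0] + arcs[1]])
    for arc in arcs[2:]:
        chunk = bytearray([arc & 0x7F])
        arc >>= 7
        while arc:
            chunk.insert(0, 0x80 | (arc & 0x7F))
            arc >>= 7
        out += chunk
    return tlv(0x06, bytes(out))

def encode_int(v, tag=0x02):
    # Non-negative integers only, which is all this sketch needs.
    return tlv(tag, v.to_bytes(max(1, (v.bit_length() + 8) // 8), "big"))

def snmpv1_trap(community, enterprise, agent_ip, payload_oid, payload):
    varbind = tlv(0x30, encode_oid(payload_oid) + tlv(0x04, payload))
    pdu = tlv(0xA4,                          # [4] Trap-PDU
        encode_oid(enterprise)
        + tlv(0x40, bytes(int(o) for o in agent_ip.split(".")))  # agent-addr
        + encode_int(6)                      # generic-trap: enterpriseSpecific
        + encode_int(1)                      # specific-trap
        + encode_int(0, tag=0x43)            # time-stamp (TimeTicks)
        + tlv(0x30, varbind))                # variable-bindings
    return tlv(0x30, encode_int(0)           # version: SNMPv1
               + tlv(0x04, community.encode())
               + pdu)

# HTML payload in sysDescr.0 -- exactly the class of value the Trap
# Receiver rendered unescaped. Enterprise OID and address are placeholders.
trap = snmpv1_trap("public", "1.3.6.1.4.1.8691", "192.0.2.50",
                   "1.3.6.1.2.1.1.1.0", b"<embed src=//>")
```

Because SNMPv1 traps ride over UDP with only a community string for "authentication," nothing stops an attacker from forging these bytes and sending them to the receiver's port 162.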



R7-2016-02.2: Multiple Insecure Direct Object References

During testing, it was discovered that URLs ending in .cc are accessible without proper authentication, allowing retrieval of a portion of the web page. The following URLs can be accessed without authentication:










As a result of this direct access without authentication, an attacker is able to view the HTML of the affected page. Here, it was discovered that the product's configured SNMP community string is transmitted in cleartext, as shown in Figure 6.



Figure 6: Information leakage via Insecure Direct Object Reference


Disclosure Timeline

Thu, Jan 14, 2016: Issues discovered by Deral Heiland of Rapid7, Inc.

Fri, Jan 15, 2016: Initial contact to vendor

Mon, Feb 15, 2016: Details disclosed to CERT, tracked as VU#400736

Wed, Mar 9, 2016: Clarification requested by the vendor, via CERT

Thu, Mar 17, 2016: Public disclosure of R7-2016-02

This advisory was written by the discoverer of the NPort issue, Joakim Kennedy of Rapid7, Inc.


Securing legacy hardware is a difficult task, especially when the hardware is being connected in a way that was never initially intended. One way of making legacy hardware more connectable is to use serial servers. The serial server acts as a bridge and allows serial devices to communicate over TCP/IP. The device then appears on the network as a normal network-connected device. This allows for remote administration of, for example, medical devices, industrial automation applications, and point of sale (POS) systems as if they were connected directly to the computer with a serial cable.


Figure 1: Moxa NPort used to connect a glucometer (source).


By connecting these devices to a network, the inherent security of the serial device is, in most scenarios, completely compromised. Many serial devices' security hinges on physical access: if you have physical access to the device, you are authorized to talk to it. When these devices are connected to the internet via a serial server, the physical access model no longer applies, and security depends entirely on what the serial server offers. In most scenarios, these serial servers should NEVER be connected to a public network.


The Devices

In this blog post, we are reporting on internet-exposed serial servers manufactured by Moxa. The serial servers can be configured via multiple interfaces, the most common being a web interface or a terminal over SSH or TELNET. At the time this blog post was written, over 5,000 web servers could be fingerprinted as Moxa devices.
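Fingerprinting of this kind typically keys on the HTTP Server banner the device returns. As a sketch (the "MoxaHttp" banner string is an assumption for illustration, and real surveys match on more than one field):

```python
from typing import Optional

def server_banner(raw_headers: str) -> Optional[str]:
    """Pull the Server header value out of a raw HTTP response header block."""
    for line in raw_headers.splitlines():
        name, _, value = line.partition(":")
        if name.strip().lower() == "server":
            return value.strip()
    return None

def looks_like_moxa(raw_headers: str) -> bool:
    """Heuristic check for a Moxa embedded web server.

    "MoxaHttp" is the banner commonly associated with these devices;
    treat it as an illustrative assumption, not a complete fingerprint.
    """
    banner = server_banner(raw_headers)
    return banner is not None and "moxahttp" in banner.lower()

# Hypothetical response captured from a scanned host.
response = "HTTP/1.0 200 OK\r\nServer: MoxaHttp/1.0\r\nContent-Type: text/html\r\n"
```

Running such a check across banner data collected by a scanner (or a search engine like Shodan) is how the device counts in this post were derived.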


These devices are designed to be as simple as possible to set up, and consequently the server is very permissive about who is allowed to connect. For example, Moxa's NPort series enables a web interface and a TELNET service which can be used to configure the server, neither of which is password protected by default. The consumer is not forced to set a password, and many consumers keep the default, non-password-protected setup.

We have found over 2,200 devices accessible over the internet, of which 46% are not password protected. Most of the internet-connected devices are located in Russia and Taiwan, but many are also located in the USA and Europe.


Figure 2: Geographic location of the 2200 internet connected devices.


Figure 3: Geographic location of the unprotected devices connected to the internet.


Figure 4: Breakdown of the model types connected to the internet.


Figure 5: Breakdown of the model type for the unprotected devices connected to the internet.


The most common connected device models are from the NPort 5100 series, which "are designed to make your industrial serial devices Internet ready instantly, and are well-suited for POS security market applications".


The Vulnerabilities

In 2013, we reported on serial servers connected to the internet and the security implications. The same issues reported then also apply to these devices. When connecting over TELNET to one of these devices that is not password protected, the following menu is presented:



Model name       : xxxxxxx

MAC address      : xx:xx:xx:xx:xx:xx

Serial No        : xxxxxxxx

Firmware version : x.x.xx Build xxxxxxxx

System uptime    : 5 days, 12h:53m:49s


<< Main Menu >>

  (1) Basic settings

  (2) Network settings

  (3) Serial settings

  (4) DIO setting

  (5) Operating settings

  (6) Accessible IP settings

  (7) Auto warning settings

  (8) Monitor

  (9) Ping

  (a) Change password

  (b) Advanced network settings

  (l) Load factory default

  (v) View settings

  (s) Save/Restart

  (q) Quit


Key in your selection:


The TELNET interface allows the same configuration options as the web interface. Both of these interfaces can be protected by setting a password.


The NPort device can operate in multiple modes. One is Real COM mode. In this mode, with COM/TTY drivers provided by the vendor, all serial signals are transmitted intact and the behaviour is identical to plugging the serial device into the COM port. In this mode, up to 4 different hosts can be connected. Connecting to a serial device behind an NPort is very simple: download the Real TTY drivers, install them, enter the IP address to connect to, and the device shows up as if plugged in locally. No authentication is required.


The only ways of restricting who can connect to the device are to use the IP whitelisting option, which restricts the IPs that can connect to the serial device, or to use TCP Client Mode. In TCP Client Mode, the serial server initiates connections to predetermined hosts when serial data arrives.


The serial server does not offer any encryption, so all data is sent in the clear. This makes it possible to eavesdrop on the communication.


The lack of authentication on these devices, and the lack of encryption even when authentication is possible, was reported to CERT, and after some discussion, CVE-2016-1529 was assigned to identify this issue. More generally, CWE-306, Missing Authentication for Critical Function, appears to apply to Moxa NPort devices.



As these serial servers are likely connected to something very sensitive, they should NEVER be directly connected to the internet. If remote access is required, then, since these devices do not offer encrypted traffic, connect the serial servers to a local network that is accessible only via, for example, a VPN. Also, restrict the IPs which can connect to the serial device, and don't forget to password protect the admin consoles.



There is still little awareness of what can happen when you connect devices directly to the internet. With search engines like Shodan, it is very easy to find these devices, making it important to secure them. Securing legacy hardware is still very difficult, and this is how not to do it. Security is being compromised for convenience, and consumers are, in many cases, just using the default settings. The easier you make it for yourself to connect, the easier you make it for the attacker.

Disclosure Timeline

Fri, Jan 15, 2016: Initial contact to the vendor

Mon, Jan 18, 2016: Response received from the vendor and details provided.

Mon, Feb 1, 2016: Details disclosed to CERT as VU#757136

Mon, Feb 1, 2016: CVE-2016-1529 assigned

Thu, Mar 17, 2016: Public disclosure (planned).

On Mar. 3rd, Rapid7, Bugcrowd, and HackerOne submitted joint comments to the Copyright Office urging them to provide additional protections for security researchers. The Copyright Office requested public input as part of a study on Section 1201 of the Digital Millennium Copyright Act (DMCA). Our comments to the Copyright Office focused on reforming Sec. 1201 to enable security research and protect researchers.


Our comments are available here.




Sec. 1201 of the DMCA prohibits circumventing technological protection measures (TPMs) to access copyrighted works, including software, without permission of the owner. That hinders a lot of security research, tinkering, and independent repair. Violations of Sec. 1201 can carry potentially stiff criminal and civil penalties. To temper this broad legal restraint on unlocking copyrighted works, Congress built in two types of exemptions to Sec. 1201: permanent exemptions for specific activities, and temporary exemptions that the Copyright Office can grant every three years. These temporary exemptions automatically expire at the end of the three-year window, and advocates for them must reapply every time the exemption window opens.


Sec. 1201 includes a permanent exception to the prohibition on circumventing TPMs for security testing, but the exception is quite limited – in part because researchers are still required to get prior permission from the software owner, as we describe in more detail below. Because the permanent exemption is limited, many researchers, organizations, and companies (including Rapid7) urged the Copyright Office to use its power to grant a temporary three-year exemption for security testing that would not require researchers to get prior permission. The Copyright Office did so in Oct. 2015, granting an exemption to Sec. 1201 for good faith security research that circumvents TPMs without permission. However, this exemption will expire at the end of the three-year exemption window, after which security researchers will have to start from zero in re-applying for another temporary exemption.


The Copyright Office then announced a public study of Sec. 1201 in Dec. 2015. The Copyright Office undertook this public study, as the Office put it, to assess the operation of Sec. 1201, including the permanent exemptions and the 3-year rulemaking process. This study comes at a time that House Judiciary Committee Chairman Goodlatte is reviewing copyright law with an eye towards possible updates, so the Copyright Office's study may help inform that effort. Rapid7 supports the goal of protecting copyrighted works, but hopes to see legal reforms that reduce the overbreadth of copyright law so that it no longer unnecessarily restrains security research on software.


Overview of Comments


For its study, the Copyright Office asked a series of questions on Sec. 1201 and invited the public to submit answers. Below are some of the questions, and the responses we provided in our comments.


"Please provide any insights or observations regarding the role and effectiveness of the prohibition on circumvention of technological measures in section 1201(a)."


Our comments to the Copyright Office emphasized that Sec. 1201 adversely affects security research by forbidding researchers from unlocking TPMs to analyze software for vulnerabilities. We argued that good faith researchers do not seek to infringe copyright, but rather to evaluate and test software for flaws that could cause harm to individuals and businesses. The risk of harm resulting from exploitation of software vulnerabilities can be quite serious, as Rapid7 Senior Security Consultant Jay Radcliffe described in 2015 comments to the Copyright Office. Society would benefit – and copyright interests would not be weakened – by raising awareness and urging correction of such software vulnerabilities.


"How should section 1201 accommodate interests that are outside of core copyright concerns[?]"


Our comments responded that the Copyright Office should consider non-copyright interests only for scaling back restrictions under Sec. 1201 – for example, the Copyright Office should weigh the chilling effect Sec. 1201 has on security research in determining whether to grant an exemption for research to Sec. 1201. However, we argued that the Copyright Office should not consider non-copyright interests in denying an exemption, because copyright law is not the appropriate means of advancing non-copyright interests at the expense of activity that does not infringe copyright, like security research.


"Should section 1201 be adjusted to provide for presumptive renewal of previously granted exemptions—for example, when there is no meaningful opposition to renewal—or otherwise be modified to streamline the process of continuing an existing exemption?"


Our comments supported this commonsense concept. Currently, the three-year exemptions expire and must be re-applied for, which is a complex and resource-intensive process. We argued that a presumption of renewal should not hinge on a lack of "meaningful opposition," since the opposition to the 2015 security researcher exemption is unlikely to abate – though that opposition is largely based on concerns wholly distinct from copyright, like vehicular safety. Our comments also suggested that any presumption of renewal of exceptions to Sec. 1201 should be overcome only by a strong standard, such as a material change in circumstances.


"Please assess whether the existing categories of permanent exemptions are necessary, relevant, and/or sufficient. How do the permanent exemptions affect the current state of reverse engineering, encryption research, and security testing?"


Our comments said that Sec. 1201(j)'s permanent exemption for security testing was not adequate for several reasons. The security testing exemption requires the testing to be performed for the sole purpose of benefiting the owner or operator of the computer system – meaning research taken for the benefit of software users or the public at large may not qualify. The security testing exemption also requires researchers to obtain authorization of owners or operators of computers prior to circumventing software TPMs – so the owners and operators can dictate the circumstances of any research that takes place, which may chill truly independent research. Finally, the security testing exemption only applies if the research violates no other laws – yet research can implicate many laws with legal uncertainty in different jurisdictions. These and other problems with Sec. 1201's permanent exemptions should give impetus for improvements – such as removing the requirements: 1) that the researcher must obtain authorization before circumventing TPMs, 2) that the security testing must be performed solely for the benefit of the computer owner, and 3) that the research not violate any other laws.



We sincerely appreciate the Copyright Office conducting this public study of Sec. 1201 and providing the opportunity to submit comments. Rapid7 submitted comments with HackerOne and Bugcrowd to demonstrate unity on the importance of reforming Sec. 1201 to enable good faith security research. Although the public comment period for this study is now closed, potential next steps include a second set of comments in response to any of the 60+ organizations and individuals that provided input to the Copyright Office's study, as well as potential legislation or other Congressional action on Sec. 1201. For each next step, we will aim to work with our industry colleagues and other stakeholders to propose reforms that can protect both copyright and independent security research.

This is the third post in a three-part series on threat intelligence foundations, discussing the fundamentals of how threat intelligence can be used in security operations. Here's Part 1 and Part 2.


Intelligence Analysis in Security Operations

In the first two parts of this series we talked about frameworks for understanding and approaching intelligence: the levels of intelligence (strategic, operational, tactical) as well as the different types of intelligence (technical, current, long-term, etc.). Regardless of the level or type of intelligence, the consistent theme was the need for analysis. Analysis is the core of intelligence: it takes data and turns it into intelligence that we can use to help us make informed decisions about complicated issues.


Analysis: The Missing Piece

I recently gave a talk at RSA where I compared the traditional intelligence cycle:




to what the intelligence cycle often looks like in cyber threat intelligence:


We are good at collection and processing, and we are good at dissemination; however, we tend to leave out many of the critical parts of the cycle, which results in overwhelming alerts, excessive false positives, and really, really confused people.


It's easy to joke or complain about, but here is the thing: analysis is hard. Saying that we should do more, better, or more timely analysis is easy. Actually doing it is not, especially in a new and still-developing field like cyber threat intelligence. Models and methods help us understand the process, but even determining which model to use can be difficult. There are multiple approaches, and each works best in different situations.


What is Analysis?

The goal of intelligence analysis is to evaluate and interpret information in order to reduce uncertainty, provide warnings of threats, and help make informed decisions. Colin Powell gave perhaps the most succinct guidelines for intelligence analysis when he said: “Tell me what you know, tell me what you don’t know, tell me what you think. Always distinguish which is which”. This statement sums up intelligence analysis.


Analysts take what is known—usually information that has been collected either by the analyst themselves or by others—identify gaps in the knowledge that might dictate a new collection requirement or may present a bias that needs to be taken into consideration, and then determine what they think that information means.


Before you begin any analysis you should have an idea of what it is that you are trying to figure out. Ideally this would be driven by requirements from leadership, teams you support, or some other form of standing intelligence needs. There are many situations in CTI, however, where those requirements are not as well defined as we might hope. Understanding what it is that the organization needs from threat intelligence is critical. Therefore, step one should always be to understand what problems, concerns, or issues you are trying to address.


Analytic Models

Once you understand what questions you are trying to answer through your analysis, there are various analytic models that can be used to conduct analysis. I have listed some good resources available to help understand some of the more popular models that are often used in threat intelligence.


Different models are used for different purposes. The SWOT method is good for conducting higher-level analysis to understand how your own strengths and weaknesses compare to an adversary's capabilities. F3EAD, the Diamond Model, and the Kill Chain are useful for analyzing specific intrusions and how different incidents or intrusions may be related. Target Centric Intelligence is a lesser-known model, but it can help not only with understanding individual incidents; it also provides a collaborative approach to intelligence, including the decision makers, collectors, and analysts in an iterative process aimed at avoiding the stove-piping and miscommunication that are often present in intelligence operations.



A final note on collection

In many cases, analysis can only be as good as the information it is based on. Intelligence analysts are trained to evaluate the source of information in order to better understand if there are biases or concerns about the reliability that need to be taken into account. In cyber threat intelligence we, by and large, rely on data collected by others and may not have much information on its source, reliability, or applicability. This is one of the reasons that analyzing information from your own network is so important; however, it is also important that we, as a community, are as transparent as possible with the information we are providing to others to be used in their analysis. There are always concerns about revealing sources and methods, so we need to find a balance between protecting those methods and enabling good analysis.

This is the second post in a three-part series on threat intelligence foundations, discussing the fundamentals of how threat intelligence can be used in security operations. Read Part One here.


Tinker, Tailor, Soldier, Spy: Utilizing Multiple Types of Intelligence

Just as there are different operational levels of intelligence—discussed in detail in the first post of this series—there are also different types of intelligence that can be leveraged in an organization to help them better understand, prepare for, and respond to threats facing them.


Don’t laugh—but a great basic resource for understanding the types of intelligence is the CIA’s Kid Zone, where they break intelligence down for the 6-12th graders that we all are at heart (or K-5, no judgement here).


They break intelligence down into several different types:

  • Scientific and Technical – providing information on adversary technologies and capabilities.

  • Current – looking at day-to-day events and their implications.

  • Warning – giving notice of urgent matters that may require immediate attention.

  • Estimative – looking at what might be or what might happen.

  • Research – providing an in-depth study of an issue.


While most organizations may not work with all of these types of intelligence, or do so in the same way that the CIA does (and please don't tell me if you do), it is useful to understand the spectrum and what each type provides. The different types of intelligence require varying levels of human analysis and time. Some, like technical intelligence, are easier to automate and therefore can be produced at a regular cadence, while some, like threat landscape research, will always rely heavily on human analysis.



Technical Intelligence

In information security operations, technical intelligence is used to understand the capabilities and the technologies used by an adversary. It can include details such as IP addresses and domains used in command and control, names and hashes of malicious files, as well as some TTP details such as vulnerabilities that a particular actor targets or a particular callback pattern for a beaconing implant.


Technical intelligence is most often used in machine-to-machine operations, and is therefore automated as much as possible to handle the large volume of information. In many cases, technical intelligence does not contain much context, even if context is available in other places, because machines do not care as much about the context as their humans do. A firewall doesn't need to know why to block traffic to a malicious domain; it just needs to do it. The human on the other end of that firewall change might want to know, however, in case the change ends up triggering a massive number of alerts. Technical intelligence must have been analyzed prior to consumption; otherwise, it is just data, or information at best. For more information, see Robert Lee's post on the data vs. information vs. intelligence debate.
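As a sketch of the machine-versus-human split described above (all indicators and context strings here are hypothetical): the blocking device consumes a bare indicator set, while the analyst-facing context is kept alongside for triage.

```python
# Hypothetical feed entries: each indicator carries analyst-facing context.
feed = [
    {"indicator": "evil.example.com", "type": "domain",
     "context": "C2 for hypothetical implant; beacons every 300s"},
    {"indicator": "198.51.100.7", "type": "ipv4",
     "context": "staging host seen in spearphish campaign"},
]

# The machine only needs the bare block set...
block_set = {entry["indicator"] for entry in feed}

# ...but the human triaging the resulting alerts wants the context back.
context_by_indicator = {e["indicator"]: e["context"] for e in feed}

def should_block(destination: str) -> bool:
    """What the firewall evaluates: membership, nothing more."""
    return destination in block_set
```

The point is that stripping context for the machine is fine only if the context survives somewhere the analyst can reach it when the alert fires.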


If you are not using technical intelligence that you generated yourself, it is critical that you understand the source of the technical intelligence and how it was analyzed, especially if it was analyzed using automated means. I am going out on a limb here by stating that there is a way to analyze and produce threat intelligence in an automated fashion that can be utilized machine-to-machine. Do NOT prove me wrong—do the analysis!


Current Intelligence

Current Intelligence deals with day-to-day events and situations that may require immediate action. I have heard several people say that "news isn't intelligence," and that is a true statement; however, threat information in the public domain, when analyzed for implications to your specific organization, network, or operations, becomes intelligence.


An example of the use of current intelligence is a report that an exploit kit has integrated a vulnerability that was just announced three days ago. If you know that you are on a thirty-day patch cycle, that means (best case) you have twenty-seven days during which you will be vulnerable to these attacks. Understanding how this threat impacts your organization and how to detect and block malicious activity associated with it is an example of current intelligence. Current intelligence can also be generated from information within an organization's networks. Analyzing an intrusion or a spearphishing attack against executives can also generate current intelligence that needs to be acted on quickly.


When you do generate current intelligence from your own network, document it! It can then contribute to threat trending and threat landscape research, which we will discuss shortly. It can also be shared with other organizations.


Threat Trending (Estimation)

All of the intelligence gathered at the tactical level (technical intelligence, current intelligence) can be further analyzed to generate threat trends. Threat trending takes time because of the nature of trending: you are analyzing patterns over time to see how things change and how they stay the same. Threat trending can be an analysis of a particular threat that has impacted your network repeatedly, or it can be an analysis of how an actor group or malware family has evolved over time. The more relevant a threat trend is to your network or organization, the more useful it will be to you.
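A minimal sketch of what threat trending can start from, assuming you have been documenting incidents from your own network (the dates and family names below are made up): bucket your incident records by month and threat family and watch how the counts move.

```python
from collections import Counter
from datetime import date

# Hypothetical incident log: (date, threat family) pairs pulled from
# your own documented current-intelligence records.
incidents = [
    (date(2016, 1, 4), "macro-dropper"),
    (date(2016, 1, 19), "macro-dropper"),
    (date(2016, 2, 2), "exploit-kit"),
    (date(2016, 2, 23), "macro-dropper"),
    (date(2016, 3, 1), "macro-dropper"),
]

# Per-month counts per family show how a threat recurs over time.
trend = Counter((d.strftime("%Y-%m"), family) for d, family in incidents)
```

A rising count for one family across months is the kind of pattern that moves you from reacting to individual events toward estimating what is likely to hit you next.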


Threat trending allows us to move from an analysis of something that we have seen and know is bad towards predicting or estimating future threats.


Threat Landscape Research

Speaking of trending, there has been a long trend in intelligence analysis of focusing on time-sensitive, current intelligence at the expense of longer-term, strategic research. Consider how many tactical-level, technical IOCs we have in the community compared to strategic intelligence resources, and how many new programs are focused on providing "real-time intelligence" versus "deliberate, in-depth analysis." There are legitimate reasons for that: there are not enough analysts as it is, and they are usually focused on the time-sensitive tasks because they are, well, time sensitive. In addition, we don't always have the right data to conduct strategic-level analysis, both because we are not accustomed to collecting it from our own networks and because most people who are willing to share tactical indicators of threats are not as willing to share information on how those threats impacted them.


We need to change this, because you cannot (or should not) make decisions about the future of your security program without a strategy, and you cannot (or should not) have a security strategy without understanding the logic behind it. Threat landscape research—which is a long-term analysis of the threats in your environment, what they target, how they operate, and how you are able to respond to those threats—will drive your strategy. The tactical-level information you have been collecting and analyzing from your network on a daily basis can all contribute to threat landscape research. Current intelligence, both yours and public domain information, can also contribute to threat landscape research. One framework for capturing and analyzing this information is VERIS—the Vocabulary for Event Recording and Incident Sharing, on which the DBIR is based. Just remember, this type of intelligence analysis takes time and effort, but it will be worth it.
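As a hedged illustration of a VERIS-style record (a simplified subset of the schema's fields, not a complete or validated example), an incident might be captured around the four "A"s of actor, action, asset, and attribute, plus a timeline:

```python
import json

# Simplified, VERIS-style incident record. Field names follow the public
# VERIS vocabulary, but this is an illustrative subset, not the full schema.
incident = {
    "actor": {"external": {"variety": ["Organized crime"]}},
    "action": {"hacking": {"variety": ["Use of stolen creds"],
                           "vector": ["Web application"]}},
    "asset": {"assets": [{"variety": "S - Web application"}]},
    "attribute": {"confidentiality": {"data": [{"variety": "Credentials"}]}},
    "timeline": {"incident": {"year": 2016}},
}

# Serialized records like this can be aggregated across incidents
# (and across organizations) to support landscape-level analysis.
record = json.dumps(incident)
```

Recording every incident in a consistent structure like this is what makes the later aggregation and trending possible at all.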


Information Sharing

There is currently an emphasis on sharing IOCs and other technical information, however any of the types of intelligence we have discussed in this post are good candidates for information sharing. Sharing information on best practices and processes is also incredibly beneficial.


Sharing information on what has been seen in an organization's network is a good way to understand new threats as they emerge and increase situational awareness. Information sharing essentially generates intelligence to warn others of threats that may impact them. Information sharing is becoming increasingly automated, which is great for handling higher volumes of information; however, unless there is an additional layer of analysis that focuses on how this information is relevant to or impacts your organization, it will remain information (not intelligence) and will not be as useful as it could be. For more information, see Alex Pinto's presentation on his recent research on measuring the effectiveness of threat intelligence sharing.


Even if you are not yet convinced of the value of generating your own intelligence from your environment, consuming threat intelligence still requires analysis to understand how it is relevant to you and what actions you should take. A solid understanding of the different types of intelligence and how they are used will help guide how you should approach that analysis.

This is the first post in a three-part series on threat intelligence foundations, discussing the fundamentals of how threat intelligence can be used in security operations.


There is a consensus among many in threat intelligence that the way the community has approached threat intelligence in the past (i.e., the “Threat Data → SIEM → Magical Security Rainbows” approach) has left something to be desired, and that something is usually analysis. Rick Holland (@rickhholland) warned us early on that we were on the wrong track with his 2012 post My Threat Intelligence Can Beat Up Your Threat Intelligence, where he wrote “The real story on threat intelligence is your organization’s ability to develop your own."


There are ways that we can take advantage of the threat intelligence that currently exists while learning how to better leverage the threat intelligence in our own networks. Doing this requires an understanding of intelligence fundamentals and how they can be applied in security operations. This series is designed to help those interested in threat intelligence, whether just starting out or re-evaluating their existing programs, understand the underlying fundamentals of threat intelligence and intelligence analysis.


In the first part of this three-part series we will discuss the levels of intelligence and the various ways threat intelligence can be utilized in operations.


Threat Intelligence Levels in Security Operations: Crawl

When an organization is determining how to best integrate threat intelligence into their security operations it is helpful to have a framework detailing the different ways that intelligence can be effectively utilized.


Traditionally, intelligence levels have aligned to the levels of warfare: strategic, operational, and tactical. There are several reasons for this alignment: it can help identify the decision makers at each level; it identifies the purpose of that intelligence, whether it is to inform policy and planning or to help detect or deter an attack; it can help dictate what actions should be taken as a result of receiving that intelligence.


At any level of intelligence it is critical to assess the value to your organization specifically. Please answer this for yourself, your team, and your organization, “How does this information add perspective to our security program? What decisions will this information assist us in making?”


Strategic intelligence

Strategic intelligence is intelligence that informs the board and the business. It helps them understand broader trends that are facing their organizations and other similar organizations in order to assist in the development of a strategy. Strategic Intelligence comes from analyzing longer term trends, and often takes the shape of analytic reports such as the DBIR and Congressional Research Service (CRS) reports. Strategic intelligence assists key decision makers in determining what threats are most impactful to their businesses and future plans, and what long-term efforts they may need to take to mitigate them.


The key to implementing strategic intelligence in your own business is to apply this knowledge in the context of your own priorities, data, and attack surface. No commercial or annual trend report can tell you what is important to your organization or how certain threat trends may impact you specifically.


Strategic intelligence - like all types of intelligence - is a tool that can be used to shape future decisions, but it cannot make those decisions for you.


Operational Intelligence

Operational intelligence provides intelligence about specific attacks that may impact an organization. Operational intelligence is rooted in the concept of military operations - a series of plans or engagements that may take place at different times or locations, but have the same overarching goal. It could include identified campaigns targeting an entire sector, or it could be hacktivist or botnet operations targeting one specific organization through a series of attacks.


Information Sharing and Analysis Centers (ISACs) and Organizations (ISAOs) are good places to find operational intelligence.


Operational intelligence is geared towards higher-level security personnel, but unlike strategic intelligence it dictates actions that need to be taken in the near to mid-term rather than the long term. It can help inform decisions such as whether to increase security awareness training, how to staff a SOC during an identified adversary operation, or whether to temporarily deny requests for exceptions to the firewall policy. Operational intelligence is one of the best candidates for information sharing. If you see something that is going on that may impact others in the near term, *please* share that information. It can help other organizations determine if they need to take action as well.


Operational intelligence is only useful when those receiving the intelligence have the authority to make changes to policies or procedures in order to counter the threats.


Tactical Intelligence

Tactical Intelligence focuses on the “what” (Indicators of Compromise) and the “how” (Tactics, Techniques, and Procedures) of an attacker’s actions, with the intent of using that knowledge to prevent, detect, or respond to incidents. Do attackers tend to use a particular method to gain initial access, such as social engineering or vulnerability exploitation? Do they use a particular tool or set of tools to escalate privilege and move laterally? What indicators of compromise might allow you to detect these activities? For a good list of various sources of tactical intelligence, check out Herman Slatman's list of threat intelligence resources.
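As a toy illustration of putting tactical indicators to work, the sketch below checks log lines against a small indicator set. The indicator values are placeholders (documentation IP and example domain), not real IOCs, and a production pipeline would normalize and index both the logs and the indicators rather than substring-match.

```python
# Hypothetical indicator set; these values are placeholders, not real IOCs.
iocs = {
    "ips": {"203.0.113.42"},
    "domains": {"bad.example.com"},
}

def match_line(line, iocs):
    """Return the indicators from `iocs` that appear in one log line."""
    hits = []
    for ip in iocs["ips"]:
        if ip in line:
            hits.append(ip)
    for domain in iocs["domains"]:
        if domain in line:
            hits.append(domain)
    return hits
```

The point is not the matching itself but the follow-up: every hit should feed the analysis that decides whether the activity matters to your organization.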


Tactical intelligence is geared towards security personnel who are actively monitoring their environment and gathering reports from employees who report anomalous activity or social engineering attempts. Tactical Intelligence can also be used in hunt operations, where we are looking to identify attacker behaviors that vary only slightly from a typical user’s behavior. This type of intelligence requires more advanced resources, such as extensive logging, user behavioral analytics, endpoint visibility, and trained analysts. It also requires a security-conscious workforce, as some indicators may not be captured or alerted on without first being reported by an employee. You will always have more employees than attack sensors…listen to them, train them, gather the information they can provide, analyze it, and then act upon it.


Tactical threat intelligence provides specific, but perishable, information that security personnel can act on.


Understanding how threat intelligence operates at different levels can help an organization understand where it needs to focus its efforts and what it can do with the threat intelligence it has access to. It can also help guide how the organization should approach intelligence in the future. The intelligence you can generate from your own network will always be the most actionable intelligence, regardless of the level.


For more information on the levels of intelligence and the levels of warfare, check out these resources:


What's In A Hostname?

Posted by Deral Heiland Employee Mar 9, 2016

Like the proverbial cat, curiosity can often get me in trouble, but often enough, curiosity helps us create better security. It seems like every time I encounter a product with a web management console, I end up feeding it data that it wasn't expecting.


As an example, while configuring a wireless bridge that had a discovery function that would identify and list all Wi-Fi devices in the radio range, I thought: "I wonder what would happen if I broadcast a service set identifier (SSID) containing format string specifiers?"


I set up a soft AP on my Linux host using airbase-ng and configured the SSID to broadcast %x%x%x. I was shocked when the discovered AP's SSID displayed data from the wireless bridge's process stack as shown in Figure 1:



Figure 1: Format String Injected Via SSID


This data confirmed that this wireless bridge appliance was vulnerable to a format string exploit. This led to the discovery of multiple devices vulnerable to injection attacks within the web management consoles via SSID, including format strings, persistent cross-site scripting (XSS) and cross-site request forgery (CSRF) (more details of these are discussed in a whitepaper I released at Blackhat).


Unfortunately, attacks against web management interfaces don’t stop with SSIDs. So many products inevitably consume data from various resources and then display that data within the web management console without conducting any validation checks of that data first. This often leads to vulnerabilities being exploited via the web management interfaces, and it appears to not be going away any time soon. Recently Matthew Kienow and I released a number of advisories where XSS attacks were injected into web management consoles of Network Management Systems (NMS) using SNMP.


Again, several months back, while on a pen testing engagement, a coworker was running an open source tool used to launch relay-style attacks. This tool captured hostname information from the network and stored it as part of its function, and of course it had a web interface. Sadly, his testing was interfering with my testing, so for fun I changed my Linux system's hostname to “><script>alert(“YOU-HAVE-BEEN-HACKED”)</script>.


Initially I wasn't sure if this XSS attack would work, but soon enough I heard a loud scream come from his corner of the room. Now this brings me around to the purpose of this blog: what would the impact be if everyone changed the name of their host system to contain XSS data, such as “><iframe>? I am scared to even imagine the number of products that use hostname data and display it within their web management interface. Based on all my testing against various applications and embedded devices that use web interfaces for management, I have found roughly 40% of the systems I have tested to be vulnerable to some form of XSS injection attack.
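The fix on the product side is boring but effective: encode any network-sourced string before writing it into the management page. A minimal Python sketch (render_host_row is a hypothetical helper, not code from any particular product):

```python
import html

# Any value harvested from the network (hostname, SSID, SNMP sysDescr)
# should be HTML-encoded before it is rendered in a management console.
def render_host_row(hostname):
    return "<tr><td>%s</td></tr>" % html.escape(hostname)

print(render_host_row('"><script>alert("YOU-HAVE-BEEN-HACKED")</script>'))
```

With escaping in place, the payload is displayed as literal text instead of being executed in the administrator's browser.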


So, I wonder how many administration web consoles have this sort of problem with hostname parsing?


Want to Help Us Find Out?

Now if this idea intrigues you, don’t rush out and start renaming your systems: even a simple XSS such as “><iframe>, which should only create a simple box on the screen (Figure 2), can have a serious impact on the web interface functionality of some products and could easily prevent them from functioning normally.


However, if you want to try this out, first make sure you have permission and that you do it within a controlled environment—not within your production environment. If you end up giving this a try, I ask that you share the results with us at (PGP KeyID: 0x8AD4DB8D) so we can follow-up with the results in a future blog.


Also, I highly recommend that you contact the product vendor for ethical disclosure so they can fix the issues.



Figure 2: <iframe> box



I am looking forward to hearing back on what you find.

Mobile app hacking is nothing new. Many people have performed different assessments and there are even courses all about it. Even so, many penetration testers may still be hesitant about performing these types of assessments, or may not do them well. Mobile application hacking is much like other forms of hacking. You can’t get really good unless you regularly practice. So how can we get experience hacking mobile applications? Well, with over 1.5 million apps in the Google Play store and the Apple App store, there is no shortage of apps to play with. There are also numerous purposely vulnerable mobile apps you can download and test as well.


There are a number of different techniques for analyzing mobile applications. They include:


  • File System Analysis
  • Network Analysis
  • Source Code Analysis
  • Dynamic Analysis


For the purpose of this blog entry, we will be focusing on File System Analysis on Android. We will expand this into a series if there is a demand for it.


To access the file system contents of an app, you need the appropriate permissions. On Android, that usually means root access. During engagements, I have had customers say “Well you have root access. Without that you wouldn’t have gotten to that data, and most people’s devices aren’t rooted.” A point well taken, and since I am in the business of showing true risk to an organization, I figured what better way than to create a tool that would allow access to the file system contents without root access, and thus, backHack was born.


backHack was created over 2 years ago, but I got busy and put the tool on the backburner. Fast forward to a few weeks ago when I found a new game: Alto’s Adventure. The game is awesome for a time killer, and beautifully made. It took a long time to get to the next level and collect coins, and I decided it was time to dust off backHack and see what I could do with the application.


Instead of just telling you what I did, I will show you, and I encourage you to follow along on your own. First, we need to make sure we have Android Studio installed, or at least ADB (Android Debug Bridge) accessible in our PATH. We also need to have debugging enabled on our device. At this point, issue the command ‘adb devices’ and make sure your device is showing as connected.




Now we run backHack. (python




backHack has been designed with a simple menu system that would be easy enough for an infant to use. We first need to select what app we want to “hack”. For that, choose option 1, then select either option 1 to list all apps on the device, option 2 to search for an app, or option 3 to type in the name of the app. For our purposes we are looking at Alto’s Adventure, so I will choose option 2, type in ‘alto’, and find the app name ‘com.noodlecake.altosadventure’. I then copy and paste that name under option 3, returning me to the main menu.




Next, I backup the app by selecting option 2. For this step, we will be prompted to unlock our device and confirm the backup operation.




Once the backup is complete, backHack extracts the backup, placing the file system contents under apps/<APPNAME>. In this case, it is apps/com.noodlecake.altosadventure.
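If you want to poke at the backup yourself rather than rely on the tool, an unencrypted `adb backup` file (.ab) is just four newline-terminated header lines (magic, version, compression flag, encryption method) followed by a zlib-compressed tar stream. A hedged Python sketch of the conversion (ab_to_tar is my own helper name, not part of backHack):

```python
import zlib

def ab_to_tar(ab_bytes):
    """Convert an unencrypted 'adb backup' (.ab) blob to raw tar bytes."""
    rest = ab_bytes
    header = []
    # The container starts with four newline-terminated header lines.
    for _ in range(4):
        line, rest = rest.split(b"\n", 1)
        header.append(line)
    if header[0] != b"ANDROID BACKUP":
        raise ValueError("not an adb backup file")
    if header[3] != b"none":
        raise ValueError("encrypted backup; passphrase-derived key required")
    # The compression flag is "1" when the tar stream is zlib-deflated.
    return zlib.decompress(rest) if header[2] == b"1" else rest
```

The resulting bytes can be handed straight to `tarfile` (or `tar xf`) to get the same apps/<APPNAME> layout backHack produces.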




We then can poke around the file system and see what is there. Some good places to look are under the sp folder (shared_prefs) and the db folder (databases). In the case of Alto’s Adventure, there is an XML file named com.noodlecake.altosadventure.xml.




When we look at this file, we find settings for the app, including coins and level. I find it fun to make changes and see what they do, so let’s do that. We set coins to 999999999 and level to 60. (60 is the highest level currently, and we don’t want to be greedy by going for 1,000,000,000 coins, do we?)
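If you'd rather script the edit than open a text editor, Android shared_prefs files are plain XML and trivially modified. A hedged sketch (set_int_prefs is a hypothetical helper of mine; the key names "coins" and "currentGoalLevel" are the ones observed in this app):

```python
import xml.etree.ElementTree as ET

def set_int_prefs(xml_src, changes):
    """Return shared_prefs XML with the named <int> values replaced."""
    root = ET.fromstring(xml_src)
    for pref in root.findall("int"):
        name = pref.get("name")
        if name in changes:
            pref.set("value", str(changes[name]))
    return ET.tostring(root, encoding="unicode")

# A minimal shared_prefs document shaped like the one in the post.
prefs = """<map>
    <int name="coins" value="0" />
    <int name="currentGoalLevel" value="1" />
</map>"""
print(set_int_prefs(prefs, {"coins": 999999999, "currentGoalLevel": 60}))
```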




After saving the file, we then go back to backHack and select option 3. This will repack the app and restore to your device. Again, you will be prompted to confirm the restore operation on the device.




Now that the app has been restored, we then open the application and see what happened. Boom! 999,999,999 coins, and level 61! (Notice the entry in the XML file was for currentGoalLevel, which we set to 60. The entry actually means “completedGoalLevel”. Also, coins are at 1,000,000,000. Guess they round up?)




While this is a fun way to get extra lives, coins, or level up on a game, the same methodology can be used in any app. For instance, how about modifying your United app to show you have 14,000,000 miles, are Premier 1K, and Star Alliance Gold?




Beyond just modifying how an app behaves, you may also find passwords or other sensitive information stored in the file system, and backHack demonstrates the risk better than a rooted device does, since now ANY device that is unlocked is able to be accessed.




The Attacker's Dictionary

Posted by royhodgman Employee Mar 1, 2016

Rapid7 is publishing a report about the passwords attackers use when they scan the internet indiscriminately. You can pick up a copy at booth #4215 at the RSA Conference this week, or online right here. The following post describes some of what is investigated in the report.


Announcing the Attacker's Dictionary

Rapid7's Project Sonar periodically scans the internet across a variety of ports and protocols, allowing us to study the global exposure to common vulnerabilities as well as trends in software deployment (this analysis of binary executables stems from Project Sonar).


As a complement to Project Sonar, we run another project called Heisenberg which listens for scanning activity. Whereas Project Sonar sends out lots of packets to discover what is running on devices connected to the Internet, Project Heisenberg listens for and records the packets being sent by Project Sonar and other Internet-wide scanning projects.


The datasets collected by Project Heisenberg let us study what other people are trying to examine or exploit. Of particular interest are scanning projects which attempt to use credentials to log into services that we do not provide. We cannot say for sure what the intention is of a device attempting to log into a nonexistent RDP server running on an IP address which has never advertised its presence, but we believe that behavior is suspect and worth analyzing.


How Project Heisenberg Works

Project Heisenberg is a collection of low interaction honeypots deployed around the world. The honeypots run on IP addresses which we have not published, and we expect that the only traffic directed to the honeypots would come from projects or services scanning a wide range of IP addresses. When an unsolicited connection attempt is made to one of our honeypots, we store all the data sent to the honeypot in a central location for further analysis.
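The core of a low-interaction honeypot is tiny: accept a connection, read whatever arrives, and log it with a timestamp and peer address. A minimal Python sketch of just that recording step (this is not Heisenberg's actual code; the real honeypots also speak enough of each protocol to elicit credentials):

```python
import datetime
import socket

def record_attempt(conn, addr, log):
    """Read whatever an unsolicited client sends and append a record."""
    conn.settimeout(2.0)
    try:
        data = conn.recv(4096)
    except socket.timeout:
        data = b""
    log.append({
        "ts": datetime.datetime.utcnow().isoformat() + "Z",
        "peer": addr,
        "payload": data.hex(),   # store raw bytes hex-encoded for analysis
    })
    conn.close()
```

In deployment this handler would run behind a listening socket on each honeypot port, with the log shipped to the central collection point.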


In this post we will explore some of the data we have collected related to Remote Desktop Protocol (RDP) login attempts.


RDP Summary Data

We have collected RDP passwords over a 334-day period, from 2015-03-12 to 2016-02-09.


During that time we have recorded 221,203 different attempts to log in, coming from 5,076 distinct IP addresses across 119 different countries, using 1,806 different usernames and 3,969 different passwords.


Because it wouldn't be a discussion of passwords without a top 10 list, the top 10 passwords that we collected are:

[table: top 10 passwords collected]
And because we have information not only about passwords, but also about the usernames that are being used, here are the top 10 that were collected:

[table: top 10 usernames collected]
We see on average 662.28 login attempts every day, but the actual daily number varies quite a bit. The chart below shows the number of events per day since we started collecting data. Notice the heavy activity in the first four months, which skews the average high.




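The daily average quoted above follows directly from the summary counts (221,203 attempts over 334 days):

```python
# Sanity-checking the quoted daily average against the summary counts.
attempts, days = 221203, 334
print(round(attempts / days, 2))  # -> 662.28
```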
In addition to the username and password being used in the login attempts that we captured, we also collected the IP address of the device making the login attempt. To the best of the ability of the GeoIP database we used, here are the top 15 countries from which the collected login attempts originate:



[table: top 15 source countries by login attempts, including the United States, South Korea, and the United Kingdom]
With the data broken down by country, we can recreate the chart above to show activity by country for the top 5 countries:





RDP Highlights

There is even more information to be found in this data beyond counting passwords, usernames and countries.

We guess that these passwords are selected because whoever is conducting these scans believes that there is a chance they will work. Maybe the scanners have inside knowledge about actual usernames and passwords in use, or maybe they're just using passwords that have been made available from previous security breaches in which account credentials were leaked.


In order to look into this, we compared all the passwords collected by Project Heisenberg to passwords listed in two different collections of leaked passwords. The first is a list of passwords collected from leaked password databases by Crackstation. The second list comes from Mark Burnett.


In the table below we list how many of the top N passwords are found in these password lists:


[table: top password count vs. number found in any leaked-password list]
This means that 8 of the 10 most frequently used passwords were also found in published lists of leaked passwords. But looking back at the top 10 passwords above, they are not very complex and so it is not surprising that they appear in a list of leaked passwords.


This observation prompted us to look at the complexity of the passwords we collected. Just about any time you sign up for a service on the internet – be it a social networking site, an online bank, or a music streaming service – you will be asked to provide a username and password. Many times your chosen password will be evaluated during the signup process and you will be given feedback about how suitable or secure it is.



Password evaluation is a tricky and inexact art that consists of various components. Some of the many aspects that a password evaluator may take into consideration include:


  • length
  • presence of dictionary words
  • runs of characters (aaabbbcddddd)
  • presence of non alphanumeric characters (!@#$%^&*)
  • common substitutions (1 for l [lowercase L], 0 for O [uppercase o])


Different password evaluators will place different values on each of these (and other) characteristics to decide whether a password is "good" or "strong" or "secure". We looked at a few of these password evaluators, and found zxcvbn to be well documented and maintained, so we ran all the passwords through it to compute a complexity score for each one. We then looked at how password complexity is related to finding a password in a list of leaked passwords.
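To make the idea concrete, here is a deliberately simplified scorer in Python. This is NOT zxcvbn; it only sketches how length, character classes, and a known-bad-word list can be combined into a coarse 0-4 score like the one we used.

```python
import math
import string

def complexity_score(password, common=("password", "123456", "admin")):
    """A toy 0-4 complexity score, for illustration only (not zxcvbn)."""
    if password.lower() in common:
        return 0
    # Count which character classes the password draws from.
    classes = sum(
        any(c in charset for c in password)
        for charset in (string.ascii_lowercase, string.ascii_uppercase,
                        string.digits, string.punctuation)
    )
    # Crude entropy estimate: length * log2(approximate alphabet size).
    bits = len(password) * math.log2(max(26 * classes, 2))
    return min(4, int(bits // 28))
```

A real evaluator like zxcvbn also penalizes dictionary words, keyboard walks, and common substitutions, which is why we used it rather than a heuristic like this.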



[table: password complexity level, # passwords, crackstation %, Burnett %, any %, all %]
The above table shows the complexity of the collected passwords, as well as how many were found in different password lists.


For instance, with complexity level 4, there were 352 passwords classified as being that complex, 7 of which were found in the crackstation list, and 4 of which were found in the Burnett list. Furthermore, 8 of the passwords were found in at least one of the password lists, meaning that if you had all the password lists, you would find 2.27% of the passwords classified as having a complexity value of 4. Similarly, looking across all the password lists, you would find 3 (0.85%) passwords present in each of the lists.
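Those percentages follow directly from the row's counts:

```python
# Re-deriving the quoted percentages for the complexity-4 row.
total, found_any, found_all = 352, 8, 3
print(round(100 * found_any / total, 2))  # -> 2.27
print(round(100 * found_all / total, 2))  # -> 0.85
```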


From this we extrapolate that as passwords get more complex, fewer and fewer are found in the lists of leaked passwords. Since we see attackers trying both stupendously simple passwords, like single-character passwords, and much more complex passwords that are typically not found in the usual password lists, we can surmise that these attackers are not tied to these lists in any practical way -- they clearly have other sources for likely credentials to try.


Finally, we wanted to know what the population of possible targets looks like: how many endpoints on the internet have an RDP server running, waiting for connections? Drawing on our experience with Project Sonar, on 2016-02-02 the Rapid7 Labs team ran a Sonar scan to see how many IPs are listening for TCP traffic on port 3389. We found 10,822,679 different IP addresses meeting that criterion, spread out all over the world.


So What?

With this dataset we can learn about how people looking to log into RDP servers operate. We have much more detail in the report, but some of our findings include:

  • We see that many times a day, every day, our honeypots are contacted by a variety of entities.
  • We see that many of these entities try to log into an RDP service which is not there, using a variety of credentials.
  • We see that a majority of the login attempts use simple passwords, most of which are present in collections of leaked passwords.
  • We see that as passwords get more complex, they are less and less likely to be present in collections of leaked passwords.
  • We see that there is a significant population of RDP enabled endpoints connected to the internet.


But wait, there's more!

If this interests you and you would like to learn more, come talk to us at booth #4215 at the RSA Conference.

If you've been involved in patch frenzies for any reasonable amount of time, you might remember last year's hullabaloo around GHOST, a vulnerability in glibc's gethostbyname() function. Well, another year, another resolver bug.


gethostbyname(), meet getaddrinfo()

This time, it's an exploitable vulnerability in glibc's getaddrinfo(). Like GHOST, this will affect loads and loads of Linux client and server applications, and like GHOST, it's pretty difficult to "scan the Internet" for it, since it's a bug in shared library code. Google reports they have a working private exploit, and I know those rascals on the Metasploit team have been poking at the vulnerability today, so do yourself a favor and patch and reboot your affected systems as soon as practical.


The Long Tail of IoT

Unfortunately, as the Ars Technica article points out, there are certainly loads and loads of IoT devices out in the world that aren't likely to see a patch any time soon. So, for all those devices you can't reasonably patch, your network administrator could take a look at the mitigations published by Red Hat and consider the impact of limiting the actual on-the-wire size of DNS replies in your environment. While it may be a heavy-handed strategy, it will buy you time to ferret out all those IoT devices that people have squirrelled away on your network.


Take A Breath

Finally, as with GHOST, there is a valid reason to be concerned, but we don't think this is the end-of-the-internet-as-we-know-it.


The bad news is that an exploit against at least one vector is known to exist, and the impact can be nasty if an attacker can segfault your processes with a malformed DNS response, and worse if they're clever and lucky enough to pop a shell. Plenty of legacy systems will be affected. So that all sounds pretty bad, yes?


But, ultimately, this bug is far more difficult to exploit than many. It's difficult to target (by both bad guys and good guys), and the attacks tend to require client interaction. As for those legacy systems? They tend to have, if not bigger problems, adjacent and better understood problems, like Shellshock and Heartbleed.


The bottom line is that you should patch (as with any CVE-classified bug), but I wouldn't expect the Internet to come crashing down over this.


Are Rapid7's Products Impacted?

We're still investigating which of Rapid7's products are impacted, and will update customers as we know more.  So far, we can confirm that both physical and virtual Nexpose appliances are affected and operating systems for them will need to be updated. Nexpose hosted engines are also affected and are being patched as I type. In both cases, we will reach out to any affected customers to advise on any action that needs to be taken by them.


Nexpose Coverage

Meanwhile, Nexpose picked up the glibc patch update earlier today, and it's going through analysis now; we can expect a check for Nexpose customers shortly, as we're targeting tomorrow's regular release for that. Armed with a Nexpose check, you can get a decent idea of what your threat exposure is to this bug-that-shall-not-be-branded, on the chance that it really does take off in the coming days.

Thanks to everyone who joined our webinar on How to Build Threat Intelligence into your Incident Detection and Response Program. We got so many great questions during the session that we decided to follow up with a post answering them and addressing the trends and themes we continue to see around threat intelligence.

TL/DR for those of you who don't have time to read all of the responses (we got a lot of questions):

  • Threat intelligence is a process, not something you buy. That means you will have to put work in in order to get results.
  • Threat intelligence works best when it is integrated across your security operations and is not viewed as a stand-alone function
  • Strategic, Operational, and Tactical threat intelligence (including technical indicators) are used differently and gathered using different methods.

Do you see threat intelligence as a proactive approach to cyber monitoring or just a better way of responding to cyber threats? If you see it as proactive, how, since the intelligence is based on events and TTPs that have already occurred?


A misconception about threat intelligence is that it is focused exclusively on alerting or monitoring. We talked about indicators of compromise and how to use them for detection and response, but there is a lot more to threat intelligence than IOCs. 


When threat intelligence is properly implemented in a security program it contributes to prevention, detection, and response. Understanding the high level, strategic threats facing your organization helps determine how to improve overall security posture.


All intelligence must be based on facts (i.e., things that have already occurred or that we already know), but those facts allow us to create models that can be used to identify trends and assess what controls should be put in place to prevent attacks.


As prevention comes into alignment, it is important to maintain awareness of new threats by leveraging operational and tactical intelligence, taking actions to protect your organization before those threats are able to impact you.


I can see the usefulness of tactical, operational and technical intelligence. How would you be able to establish strategic intelligence?


Strategic Intelligence is intelligence that informs leadership or decision makers on the overarching threats to the organization or business. Think of this as informing high-level decision making based on evidence: seeing the forest without being distracted by the trees.


Information that contributes to strategic intelligence is gathered and analyzed over a longer period of time than other types of threat intelligence. The key to utilizing strategic intelligence is being able to apply it in the context of your own data and attack surface. An example would be intelligence that financially motivated cyber criminals are targeting third party vendors in order to gain access to retail networks. This information could be used to assess whether a business would be vulnerable to this type of attack and identify longer term changes that need to take place to reduce the risk, such as network segmentation, audits of existing third-party access, and development of policies to limit access.


What is the difference between Strategic and Operational Intelligence?


Strategic intelligence focuses on long-term threats and their implications, while operational intelligence focuses on short-term threats that may need to be mitigated immediately. Implementing strategic and operational intelligence often involves asking the same questions: who and why. With strategic intelligence you are evaluating the attackers, focusing on their tactics and motivations rather than geographical location, to determine how those threats may impact you in the future. With operational intelligence you are evaluating who is actually being targeted and how, so that you can determine if you need to take any immediate actions in response to the threat.

What is positive control and why is it important?

Positive control is the aspirational state of a technical security program. This means that only authorized users and systems are on the network, and that accounts and information are accessed only by approved users. Before you start assessing your network to understand what “normal” looks like, take care that you are not including attacker activity in your baseline.
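As a minimal sketch (hypothetical hostnames, not any product's feature), positive control can be approximated by diffing what is observed on the network against an approved inventory:

```python
# Minimal sketch: compare observed network assets against an approved
# inventory to flag anything that is not under positive control.
# All hostnames here are hypothetical examples.

def unauthorized_assets(observed, approved):
    """Return observed assets that are absent from the approved inventory."""
    return sorted(set(observed) - set(approved))

approved_inventory = {"web01", "db01", "laptop-alice", "laptop-bob"}
observed_on_network = {"web01", "db01", "laptop-alice", "unknown-device-7"}

print(unauthorized_assets(observed_on_network, approved_inventory))
# Anything flagged here warrants investigation before it is folded into a baseline.
```

Anything in the observed set but not in the inventory is exactly the kind of activity you do not want silently absorbed into "normal."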



If you are being targeted by an identified entity, what should you do to build intelligence on possible attacks?

Active and overt attacks fall into the realm of operational intelligence. You can gather intelligence on these attacks from social media, blog posts, or alerts from places like US-CERT, ISACs, ISAOs, or other sharing groups. Some questions you should be asking and answering as you gather information are:

  • Who else is being targeted? Can we share information with them on this attack?
  • How have the attackers operated in the past?
  • What are we seeing now that can help us protect ourselves?


What is done in Tactical Monitoring?

Tactical intelligence tends to focus on mechanisms: the “how” of what an attacker does. Do they tend to use a particular method to gain initial access? A particular tool or set of tools to escalate privilege and move laterally? What social engineering or reconnaissance activities do they typically engage in prior to an attack? Tactical intelligence is geared towards security personnel who are actively monitoring their environment, as well as gathering reports from employees who report strange activities or social engineering attempts. Tactical intelligence can also be used by hunters seeking to identify behavior that may look like normal user activity but is also used by attackers to avoid detection. This type of intelligence requires more advanced resources, such as extensive logging, behavioral analytics, endpoint visibility, and trained analysts. It also requires a security-conscious workforce, as some indicators may not be captured or flagged by logs without first being reported by an employee.


Can you point me to resources where to gather information regarding strategic, tactical and operational intelligence?

Before you start gathering information it is important to have a solid understanding of the different levels of threat intelligence. CPNI released a whitepaper covering four types of threat intelligence that we discussed on the webinar: elligence_whitepaper-2015.pdf


- Or - if you are an intelligence purist and find that four types of threat intelligence is one type too many (or if you’re just feeling rambunctious) you can refer to JP 2-0, Joint Intelligence, for in-depth understanding of the levels of intelligence and their traditional application.


Once you are ready, here are some places to look for specific types of intelligence:


Strategic Intelligence can be gathered through open source trend reports such as the DBIR, DBIR industry snapshots, or other industry specific reports that are frequently released.


Operational intelligence is often time sensitive and can be gathered by monitoring social media, government alerts like US-CERT, or by coordinating with partners in your industry.


Tactical intelligence can be gathered using commercial or open sources, such as blogs, threat feeds, or analytic white papers. Tactical intelligence should tell you how an actor operates, the tools and techniques that they use, and give you an idea of what activities you can monitor for on your own network. At this level, understanding your users and how they normally behave is critical, because threat actors will try to mimic those same behaviors, and being able to identify a deviation, no matter how small, can be extremely significant.


What is open source threat intelligence?

Open Source intelligence (OSINT) is the product of gathering and analyzing data gathered from publicly available sources: the open internet, social media, media, etc.

More here:

For more information on the other types of intelligence collection disciplines:


Open source threat intelligence is OSINT that focuses specifically on threats. In many cases you will be able to gather OSINT but will still have to do the analysis of the potential impact of the threat on your organization.


What are ISACs and ISAOs? Where can I find a list of them?

Most private sector information sharing is conducted through Information Sharing and Analysis Centers (ISACs), organized primarily by sector (usually critical infrastructure). A list is located here:


In the United States, under President Obama’s Executive Order 13691, DHS was directed to improve information sharing between the US government’s National Cybersecurity and Communications Integration Center (NCCIC) and the private sector. This executive order also serves as the platform for Information Sharing and Analysis Organizations (ISAOs), which include those outside the traditional critical infrastructure sectors.


What specific tools are used for threat intelligence?

This is a great question, and I think it underscores a big misunderstanding out there. Threat intelligence is a process, not a product to be bought or a service to be retained. Any tool you use should help augment your processes. There are a few broad classifications of tools out there, including threat intelligence platforms and data analytics tools. The best way to find the right tools is to identify what problem you are trying to solve with threat intelligence, develop a manual process that works for you, and then look for tools that will help make that manual process easier or more efficient.


Can a solution or framework be tailored to support organizations at different levels of cyber security maturity and awareness, or is there a minimum requirement?

There *is* a certain level of awareness that is required to implement a threat intelligence program. Notice that we didn’t say maturity - we feel that a program at any level can benefit from threat intelligence, but there is a lot that goes into an organization being ready to utilize it.


At the very basic level an organization needs to understand what threat intelligence is, what it isn’t, understand the problems that they are trying to solve with threat intel, and have a person or a team who is responsible for threat intel. An organization with this base level of understanding is far ahead of many others.


When discussing the more technical implementations of threat intelligence, such as threat feeds or platforms, there are some barriers to entry. Aside from those situations, nearly any organization can work to better understand the threats facing it and how it should start to posture itself to prevent or respond to those threats. If you understand how threat intelligence works and start to implement it appropriately, you will be better off regardless of what else you are dealing with.


How do you stop an attacker once discovered? ACLs, IPS, etc.?

Scoping the attack is the first stage, which requires both investigation and forensics. The investigation team will identify various attributes used in the attack (tools, tactics, procedures), and then will go back and explore the rest of your systems for those attributes.


As systems get added, the recursive scoping loop continues until no new systems are added.
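That fixed-point process can be sketched as a worklist loop. Here, `related_systems` is a hypothetical stand-in for the investigative step of searching logs and forensic artifacts for the attacker's attributes:

```python
# Sketch of the recursive scoping loop: start from the initially identified
# systems and keep expanding until a pass adds nothing new.
# `related_systems(system)` is a hypothetical callback standing in for
# querying logs/forensics for the attacker's tools, tactics, and procedures.

def scope_incident(initial_systems, related_systems):
    scoped = set(initial_systems)
    frontier = list(initial_systems)
    while frontier:                      # loop until no new systems are added
        system = frontier.pop()
        for candidate in related_systems(system):
            if candidate not in scoped:
                scoped.add(candidate)
                frontier.append(candidate)
    return scoped

# Toy link graph standing in for investigative findings.
links = {"host-a": ["host-b"], "host-b": ["host-c"], "host-c": []}
print(sorted(scope_incident(["host-a"], lambda s: links.get(s, []))))
```

In practice each "pass" is a round of investigation rather than a function call, but the termination condition is the same: stop when a full sweep turns up no new systems.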


Once scoping is done, there are a number of actions to be taken, and the complexity involved in deciding exactly what happens (and when) grows exponentially. A short (and anything but comprehensive) list of considerations includes:

  • Executive briefing and action plan signoff
  • Estimate the business impact of the recovery actions to be executed
  • Isolate compromised systems
  • Lock or change passwords on all compromised accounts with key material in the scoped systems
  • Patch and harden all systems in the organization against vulnerability classes used by the attacker
  • Identify exactly what data was impacted, consult with legal regarding regulatory or contractual required next steps
  • Safely and securely restore impacted services to the business


Obviously there are a lot of variables at play here, and every incident is unique.

This stuff is extremely hard; if it were easy, everyone would be doing it.

Call us if you need help.


When I find a system that has been compromised, can you tell me where it came from?

You’re asking the right question here: getting a sense of the attacker’s motivation and tactics is extremely valuable. Answering “who did this” and “where did they come from” is a lot more difficult than simply pointing at the source IP for the initial point of entry or command and control.


Tactical Intelligence from the investigation will help answer these questions.


What should be the first step after knowing that the host has been compromised by zero day attack?

Run around, scream and shout.

In all seriousness, you won’t start off with the knowledge that a zero-day was used to compromise an asset. Discovering that 0day was used in a compromise, by definition, means that an investigation was performed and the root cause identified at the point of infection was, in fact, 0day. At that point you will hopefully have gathered more information about the incident that you can then analyze to better understand the situation you are facing.

Harley Geiger

I've joined Rapid7!

Posted by Harley Geiger, Feb 10, 2016

Hello! My name is Harley Geiger and I joined Rapid7 as director of public policy, based out of our Washington, DC-area office. I actually joined a little more than a month ago, but there's been a lot going on! I'm excited to be a part of a team dedicated to making our interconnected world a safer place.


Rapid7 has demonstrated a commitment to helping promote legal protections for the security research community. I am a lawyer, not a technologist, and part of the value I hope to add is as a representative of security researchers' interests before government and lawmaking bodies – to help craft policies that recognize the vital role researchers play in strengthening digital products and services, and to help prevent reflexive anti-hacking regulations. I will also work to educate the public and other security researchers about the impact laws and legislation may have on cybersecurity.


Security researchers are on the front lines of dangerous ambiguities in the law. Discovering and patching security vulnerabilities is a highly valuable service – vulnerabilities can put property, safety, and dignity at risk. Yet finding software vulnerabilities often means using the software in ways the original coders do not expect or authorize, which can create legal issues. Unfortunately, many computer crime laws - like the Computer Fraud and Abuse Act (CFAA) - were enacted decades ago and make little distinction between beneficial security research and malicious hacking. And, due to the steady stream of breaches, there is constant pressure on policymakers to expand these laws even further.


I believe the issues currently facing security researchers also have broader societal implications that will grow in importance. Modern life is teeming with computers, but the future will be even more digitized. The laws governing our interactions with computers and software will increasingly control our interactions with everyday objects – including those we supposedly own – potentially chilling cybersecurity research, repair, and innovation when these activities should be broadly encouraged. We, collectively, will need greater freedom to oversee, modify, and secure the code around us than the law presently affords.


That is a major reason why the opportunity to lead Rapid7's public policy activities held a lot of appeal for me. I strongly support Rapid7's mission of making digital products and services safer for all users. In addition, it helped that I got to know Rapid7's leadership team years before joining. I first met Corey Thomas, Lee Weiner, and Jen Ellis while working on "Aaron's Law" for Rep. Zoe Lofgren in the US House of Representatives. After working for Rep. Lofgren, I was Senior Counsel and Advocacy Director at the Center for Democracy & Technology (CDT), where I again collaborated with Rapid7 on cybersecurity legislation. I've been consistently impressed by the team's overall effectiveness and dedication.


Now that I'm part of the team, I look forward to working with all of you to modernize how the law approaches security research and cybersecurity. Please let me know if you have ideas for collaboration or opportunities to spread our message. Thank you!


Harley Geiger

Director of Public Policy



I’ve never been one for New Year’s resolutions. I’ve seen how they tend to exist only for short-term motivation rather than long-term achievement. Resolutions are just not specific enough and there’s no tangible means for accomplishing anything of real value. Just check out your local gym by mid-February. It’s all cleared out. The people who energetically vowed to make changes late last year have simply lost their resolve.


But it’s not just a personal thing. The cycle of resolve-try-forget exists in our professional lives as well. If you manage an information security program or somehow have your hands in the IT risk equation, you have to be careful not to get on that diet-like roller coaster. You need a plan. You need specific steps to take. You have to hold yourself accountable. The very moment you say something high-level that you want to accomplish with your information security program – with no specific details or deadlines – is the very moment you hop on the road of good intentions. We all know where that leads.


For example, let’s say you resolve to do the following for your security program this year:

  • Do more security assessments
  • Follow-up on security assessment results sooner
  • Perform additional security monitoring
  • Send more security awareness emails to users
  • Not get hacked
  • Talk to management about what’s happening on the network


You write these down on a whiteboard in your conference room so everyone can see them. With your staff being exposed to these resolutions during your team’s weekly meetings, they’ll keep them top of mind and things will take care of themselves, right? Absolutely not! Just ask the guy who vowed to eat less and exercise more. He’s not at the gym, so you’ve got a better chance of tracking him down.


Take a look at each of the above resolutions. Notice anything missing? They’re not specific. There are no documented steps that need to be taken to accomplish them. There are no deadlines. They’re mere wishes. Dreams at best. If you want to start accomplishing things in information security, you have to get serious and document actual goals. You then have to “manage” your goals, which means that you revisit them on a periodic and consistent basis (i.e., daily) and take steps every week to make each goal become reality. Goals are not all that different from security metrics that you might have. They’re specific and tangible. They’re also reasonable and attainable.


I’m convinced that if we were to look at the root causes of all the publicly-known breaches, we’d see politics, ignorance, and downright bad luck behind all of them. But odds are excellent that we’d also see that the people in charge had no goals for managing information security or were, at least, mismanaging them.


Take a look at your security program and determine what you want to accomplish this year. It’ll be obvious but it won’t be easy. It’s up to you to make things happen. It takes more than resolve. It takes the proper philosophy and, most importantly, discipline.

Through our recent publication of numerous security issues in Internet-connected baby monitors, we were able to comprehensively raise awareness of the real-world risks facing those devices. Further, we were able to work with a number of vendors to get key security problems resolved, resulting in major increases of security within that particular market space. Today, Rapid7 is continuing this effort in applying security research to the Internet of Things (IoT) with the release of information on two new security research projects that have also improved the safety and privacy of families.


With this most recent research, we have once again been able to work with vendors to resolve serious security issues impacting their platforms and hope that vendors considering related products are able to take note of these findings so that the overall market can improve beyond just these particular instances. We also hope that consumers are able to use these issues as examples of the potential risks of leveraging IoT products within their own family. As usual, relevant vendors were notified and CERT proved instrumental in connecting us with the vendors in question, per our usual disclosure policy.


Fisher-Price Smart Toy®


The Fisher-Price Smart Toy® is an innovative line of digital "stuffed animals" that provide both educational and entertainment options for children ranging in ages from 3-8 years old. While the device is able to function without Internet-connected capabilities, its functionality is enhanced over Wi-Fi through a companion mobile application for parents and updates to device activities. Plus, let's face it, a "smart" toy doesn't really get very smart without some real-time Internet connectivity!


The issues for the Fisher-Price Smart Toy® were disclosed to CERT under vulnerability note VU#745448.


Vulnerability R7-2015-27: Improper Authentication Handling (CVE-2015-8269)

Through analysis of the Fisher-Price Smart Toy® at the hardware, software, and network levels, it was determined that many of the platform's web service (API) calls were not appropriately verifying the "sender" of messages, allowing a would-be attacker to send requests that shouldn't be authorized under ideal operating conditions. The following is a list of APIs that were found at risk due to this lack of proper authorization, and the associated impacts of that vulnerability.


  • Find all customers (via sequential integer IDs), which provides a list of those customers' toy details (toy ID, toy name, toy type, and associated child profile)
  • Find all children's profiles, which provides their name, birthdate, gender, language, and which toys they have played with
  • Create, edit, or delete children's profiles on any customer's account, which will be displayed within a parent's mobile application
  • Alter what toys a customer's account has (e.g. delete toys, add someone else's toy to a different account), effectively allowing an attacker to 'hijack' the device's built-in functionality
  • Find the status of whether a parent is actively using their associated mobile application or if a child is interacting with their toy
  • Read access to miscellaneous data, such as what game packs are attached to a profile, what purchases were made by a customer, and scores for games



Most clearly, the ability for an unauthorized person to gain even basic details about a child (e.g. their name, date of birth, gender, spoken language) is something most parents would be concerned about. While names and birthdays are, in particular, nominally non-secret pieces of data, these could be combined later with a more complete profile of the child in order to facilitate any number of social engineering or other malicious campaigns against either the child or the child's caregivers.


Additionally, because a remote user could hijack the device's functionality and manipulate account data, they could effectively force the toy to perform actions that the child user didn't intend, interfering with normal operation of the device.
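The root cause is a classic insecure direct object reference: the API trusted a caller-supplied sequential ID without verifying ownership. A minimal sketch of the missing server-side check (all names and data are hypothetical, not Fisher-Price's actual code):

```python
# Sketch of the missing authorization step: an API handler should verify
# that the authenticated caller owns the record being requested, rather
# than trusting a caller-supplied sequential integer ID.
# All identifiers and records here are hypothetical.

TOYS = {
    101: {"owner": "alice", "toy_name": "Bear"},
    102: {"owner": "bob", "toy_name": "Panda"},
}

def get_toy(authenticated_user, toy_id):
    record = TOYS.get(toy_id)
    if record is None or record["owner"] != authenticated_user:
        return None          # deny: caller does not own this toy
    return record

print(get_toy("alice", 101))  # owner of the record: returned
print(get_toy("alice", 102))  # not the owner: denied
```

Without the ownership comparison, an attacker can simply iterate `toy_id` from 1 upward and harvest every record, which is precisely the enumeration described above.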


Disclosure Timeline for R7-2015-27

Fri, Nov 13, 2015: Initial research and discovery by Mark Stanislav of Rapid7, Inc.

Mon, Nov 23, 2015: Initial contact to the vendor.

Tue, Dec 08, 2015: Details disclosed to CERT as VU#745448.

Thu, Jan 07, 2016: Disclosure details acknowledged by the vendor.

Tue, Jan 19, 2016: Issues addressed as reported by the vendor.

Tue, Feb 02, 2016: Public disclosure of R7-2015-27.



hereO GPS Platform

The hereO GPS Platform provides family members a connected and integrated means to easily keep track of the location and activity of each other through the use of both a multi-platform mobile application and a cellular-enabled watch that is targeted at use by children ranging in ages from 3-12 years old. Much like a traditional social network, family members can be invited into a group and then have varying levels of access to each other, determined by administrative users. Additional features of this platform include intra-family communication (i.e. messaging), notifications for people coming and/or going from a specific location (i.e. geo fences), and even a panic-alert function.


The issues for the hereO GPS Platform were disclosed to CERT under vulnerability note VU#213384.


Vulnerability R7-2015-24: Authorization Bypass

Through analysis of the hereO GPS Platform at the software and network levels, it was determined that an authorization flaw existed within the platform's web service (API) calls: calls related to account invitations to a family's group were not adequately protected against manipulation. Using a pawn account under their control, an attacker is able to send a request for authorization into the family's group they are targeting and then, by abusing an API vulnerability, have that same pawn account accept the request on the targeted family's behalf. The following diagram shows the attacker's effective workflow used to conduct this attack.




By abusing this vulnerability, an attacker could add their account to any family's group, with minimal notification that anything has gone wrong. These notifications could also be manipulated through clever social engineering, such as by setting the attacker's "real name" to a message like, 'This is only a test, please ignore.'


Once this exploit has been carried out, the attacker would have access to every family member's location and location history, and would be able to abuse other platform features as desired. Because the security issue applies to controlling who is allowed to be a family member, the rest of this functionality performs as intended and is not itself any form of vulnerability.
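The fix amounts to enforcing, server-side, that a pending join request can only be approved by someone already authorized in the target family, never by the requester. A minimal sketch of that control (all names hypothetical, not hereO's actual code):

```python
# Sketch of the control that defeats the workflow above: only an existing
# administrator of the target family may accept a pending join request,
# so the requesting ("pawn") account cannot accept on the family's behalf.
# All names here are hypothetical.

def accept_invite(acting_user, request, family_admins):
    if acting_user == request["requester"]:
        return False                     # requester cannot self-approve
    if acting_user not in family_admins:
        return False                     # only family admins may approve
    return True

request = {"requester": "attacker-pawn", "family": "smith"}
admins = {"parent-1", "parent-2"}
print(accept_invite("attacker-pawn", request, admins))  # attacker: denied
print(accept_invite("parent-1", request, admins))       # admin: allowed
```

The key design point is that the approval decision is bound to the identity of the *acting* user as established by the server, not to any field the requester supplies in the API call.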


Disclosure Timeline for R7-2015-24

Sat, Oct 24, 2015: Issue discovered by Mark Stanislav of Rapid7, Inc.

Thu, Oct 29, 2015: Internal review by Rapid7, Inc.

Mon, Nov 02, 2015: Initial vendor contact.

Tue, Nov 23, 2015: Details disclosed to CERT, VU#213384 assigned.

Tue, Dec 15, 2015: Details disclosed to the vendor.

Tue, Dec 15, 2015: Issue resolved as reported by the vendor.

Tue, Feb 02, 2016: Public disclosure of R7-2015-24.



This research helps to further underline the nascency of the Internet of Things with regard to information security. While many clever & useful ideas are constantly being innovated for market segments that may never have existed before, the speed at which this technology reaches consumers' hands must be delicately weighed against the potential risks of its use.


Still, it's important to be mindful that all technologies contain bugs that can often impact the security of the ecosystem powering a sometimes complex mixture of protocols, standards, and components. While the issues explained here were detrimental to their users' privacy and safety, they were also mistakes that we've seen so many organizations make.


For this reason, it's critical that vendors creating the next generation of IoT products & platforms leverage industry initiatives, such as OTA's IoT Trust Framework, to better the security of these technologies before they enter consumers' hands and homes.


If you're curious about some of the techniques to approach research such as this, please take a look at a previously published primer on IoT hacking that discusses some of the approaches and technologies used to conduct this research.

While looking into the SSH key issue outlined in the ICS-CERT advisory ICSA-15-309-01, it became clear that the Dropbear SSH daemon did not enforce authentication, and a possible backdoor account was discovered in the product. All results are from analyzing and running firmware version 1322_D1.98, which was released in response to the ICS-CERT advisory.

This issue was discovered and disclosed as part of research resulting in Rapid7's disclosure of R7-2015-25, involving a number of known vulnerabilities present in the Advantech firmware. Given that CVE-2015-7938 represents a new vulnerability, however, it was held back until January 2016.


Product Description

The Advantech EKI series products are Modbus gateways used to connect serial devices to TCP/IP networks. They are typically found in industrial control environments. The firmware analyzed is specific to the EKI-1322 GPRS (General Packet Radio Service) IP gateway device, but given the scope of ICSA-15-309-01, it is presumed these issues are present on other EKI products.



This issue was discovered by HD Moore of Rapid7, Inc.



As of version 1.98 of the firmware, the included Dropbear daemon had been heavily modified. As a result, it does not actually enforce authentication: during testing, any user was able to bypass authentication by using any public key and password.


In addition, there may be a backdoor hardcoded into this version of the binary as well, using the username and password of "remote_debug_please:remote_debug_please", as shown in the partial firmware analysis below:


.text:000294F8                 ADD     R0, R0, #0x2C   ; haystack
.text:000294FC                 LDR     R1, =aRemote_debug_p ; "remote_debug_please"
.text:00029500                 LDR     R3, =strstr


Note that it is unconfirmed if this backdoor account is reachable on a production device by an otherwise unauthenticated attacker; its presence was merely noted during binary analysis, and the vendor has not acknowledged the purpose or existence of this account.
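The use of `strstr` in the listing above suggests a substring match rather than an exact credential comparison, which would be weaker still: any input merely *containing* the magic string would pass. The hazard can be illustrated conceptually in Python (this is an illustration of the general anti-pattern, not the device's actual logic):

```python
# Conceptual illustration of why a substring ("strstr"-style) credential
# check is weak: any input containing the magic string passes the check.
# An exact, constant-time comparison is the safer pattern.
import hmac

MAGIC = "remote_debug_please"   # taken from the string in the disassembly

def substring_check(supplied):
    # strstr-style: returns True whenever MAGIC appears anywhere in the input
    return MAGIC in supplied

def exact_check(supplied):
    # exact match using a comparison resistant to timing side channels
    return hmac.compare_digest(supplied.encode(), MAGIC.encode())

print(substring_check("xxremote_debug_pleasexx"))  # True: substring passes
print(exact_check("xxremote_debug_pleasexx"))      # False: exact match fails
```

Even setting aside whether this code path is reachable, matching credentials by substring rather than exact comparison broadens the set of inputs that satisfy the check.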



The authentication bypass issue is resolved in EKI-1322_D2.00_FW, available from the vendor's website as of December 30, 2015. Customers are urged to install this firmware at their earliest opportunity.

In the event that firmware cannot be installed, users of these devices should ensure that sufficient network segmentation is in place, and only trusted users and devices are able to communicate to the EKI-123* device.


Disclosure Timeline

This issue was disclosed via Rapid7's usual disclosure policy.


  • Wed, Nov 11, 2015: Initial contact to vendor
  • Tue, Dec 01, 2015: R7-2015-25.4 disclosed to CERT
  • Tue, Dec 01, 2015: VU#352776 assigned by CERT
  • Wed, Dec 09, 2015: Receipt of VU#352776 confirmed by ICS-CERT
  • Wed, Dec 30, 2015: EKI-1322_D2.00_FW released by the vendor
  • Tue, Jan 05, 2016: Bulletin ICSA-15-344-01 updated by ICS-CERT
  • Fri, Jan 15, 2016: R7-2015-26 publicly disclosed by Rapid7
