
Mirai FAQ: When IoT Attacks

Posted by todb Employee Oct 24, 2016

Unless you've been blessed with some long DNS TTLs, you probably noticed that some name-brand chunks of the Internet seemed to go missing on Friday, October 21, including Twitter, GitHub, and Pandora. Over the weekend, it became clear that this was another (yes, another) IoT-based denial-of-service attack, where many thousands of devices with direct access to the internet participated in a wide-scale attack on DynDNS, unbeknownst to their legitimate owners, as part of a botnet called "Mirai."


What is Mirai?

Mirai is a botnet -- a malicious software application that is designed to gain unauthorized access to Linux-powered devices and conscript them into a distributed infrastructure of clients. Once enlisted, these machines can perform a variety of denial-of-service attacks against a target dictated by the attacker. In the Friday attacks, the target was Dynamic Network Services' managed DNS service (hereafter referred to simply as "Dyn").


How does Mirai work?

In order to gain access to IoT devices (and really, any Linux computer running telnet), Mirai does not exploit any software vulnerabilities. Instead, it simply tries to guess telnet login credentials for computers accessible via telnet from the internet. Some of these username and password combinations are pretty bad choices for anything hanging out on the internet, like "admin / admin" and "root / root," and some are associated with specific video surveillance systems, like "root / juantec" and "root / klv123." The complete list of credentials is published at GitHub, as part of the Mirai source code.
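The credential-guessing approach is simple enough to illustrate. The Python sketch below uses a made-up subset of the published list to show how an administrator might audit a device's own credentials against known Mirai defaults:

```python
# Illustrative subset of the default credential pairs published with the
# Mirai source code; the real list contains dozens of entries.
default_credentials = [
    ("admin", "admin"),
    ("root", "root"),
    ("root", "juantec"),
    ("root", "klv123"),
]

def is_default(username: str, password: str) -> bool:
    """Check whether a credential pair appears on the known-default list."""
    return (username, password) in default_credentials

# Audit your own device's credentials:
print(is_default("root", "root"))      # on the list -- change immediately
print(is_default("root", "hunter2"))   # not on the list
```

Any device whose credentials appear on such a list is trivially conscriptable, with no software exploit required.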


Once a device is compromised, Mirai installs software on it that can kick off a variety of attacks described in the source code, such as UDP or ACK flooding, DNS water torture, HTTP request flooding, and other volume-based attacks.


In the most recent attack, Dyn's services were knocked offline. Since Dyn was the exclusive DNS provider for some major services, that meant we could no longer figure out "where" on the Internet those services lived.


How big is Mirai?

Given the vagaries of internet-wide scanning, it's hard to say how many devices were involved in the Mirai botnet, but the order of magnitude looks to be in the hundreds of thousands range. For a sense of scale, we can look at the recent scans from the National Exposure Index, where we found 15 million apparent telnet servers. We also peeked at a recent Sonar scan of HTTPS certificates, where we found about 315,000 web servers providing a certificate associated with Dahua Technologies, one of the vendors of video surveillance systems that was targeted in the attack. Not all of these telnet servers or video systems are going to be vulnerable, and there are other vendors associated with the attack, but this "hundreds of thousands" figure seems about right.


With all these compromised and compromisable devices, Mirai is capable of sustaining hundreds of gigabits per second of traffic against a chosen target.


What's Being Done to Fix This?

For this immediate issue, it looks like the heroic engineers at Dyn have been busy reconfiguring their routing in order to be able to weather further attacks. At the same time, their downstream customers are implementing more robust fall-back strategies with other DNS providers. This is not a vote of no confidence against Dyn, of course; disasters and outages happen, and it's only prudent for name-brand services to have fall-backs like this in place.


The fundamental problem of having many, many thousands of insecure devices on the internet remains an issue, though. BCP38 describes techniques for filtering traffic at the edge of an Internet Service Provider's network, which helps defend against DoS attack schemes that generate packets with forged source addresses, but this isn't particularly helpful against the threat demonstrated by Mirai.


What Can I Do?

First and foremost, you should not be exposing your telnet ports to the internet. Period. Full stop. End of story.


No matter how much you think you need unfettered access to telnet over the internet, you need to stop. Now. There are much better alternative protocols, such as SSH for shell access and HTTPS for GUI-based control, both of which offer modern security features like encryption and mutual authentication. Don't merely change your telnet access credentials; stop using telnet altogether, and make it impossible for others to control your network bandwidth through it.
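If you're not sure whether one of your devices is exposing telnet at all, a quick connectivity probe can tell you. Here is a minimal Python sketch; the address is a placeholder, and you should of course only point it at equipment you own:

```python
import socket

def telnet_port_open(host: str, port: int = 23, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the given port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with your own device's address:
if telnet_port_open("127.0.0.1"):
    print("Telnet is reachable -- disable it and use SSH instead.")
else:
    print("Telnet port is closed or filtered.")
```

A closed port locally doesn't guarantee the port is closed from the internet; checking from an outside vantage point (or your router's port-forwarding rules) is the more conclusive test.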


If you rely on a cloud service -- and who doesn't, these days -- then you should find out what their redundancy plans are in the event of not only an attack on their infrastructure, but an attack on their upstream providers. Reputable providers are quite forthcoming with this information, and usually publish real-time status pages.


The Post-Mirai Reality

Unfortunately, the cost associated with exposing insecure devices is not just borne by the operators of these devices. While it may be creepy to know that anonymous, internet-based attackers can access your home or office camera feeds, the attacker in this case was not interested in those video streams at all. Instead, the attacker only cared about the processing power and network bandwidth of the vulnerable device.


Solving for externalities like this is extremely difficult, but given our track record, we know that technology professionals are pretty gifted at coming up with novel solutions to seemingly intractable problems. I'm confident we can come up with a solution that protects IoT devices, protects the rest of the network from those IoT devices, and still manages to preserve the open and distributed nature of the internet.

October is National Cyber Security Awareness month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial security research. Throughout the month, we will be sharing content that enhances understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges that researchers face.


This year, NCSAM is also focused on taking steps towards online safety, including how to have more secure accounts. In 2016, just like in most of the last 15 years, we learned new information about recent and not-so-recent data breaches at large organizations, during which sensitive account information was made public. In effect, these breaches have produced a body of data on what puts accounts at higher risk. Putting aside the concerns about non-password account information being made public, one of the factors that determines how bad a data breach is for users is the format of the leaked passwords.


  • Are they plaintext?
    • Plaintext passwords are just the actual password that a user would type.
    • For example, the password "taco" is stored as "taco" and when made public, can be used by an attacker right away.
  • Have they been hashed?
    • Hashed passwords are mathematical one way transformations of the original password, meaning that it is easy to transform the password into a hash, but given a hash, it's very difficult to recover the original password.
    • For example, the password "taco" is stored as "f869ce1c8414a264bb11e14a2c8850ed" when hashed with the MD5 hash algorithm, and the attacker must recover the original password from this hash in order to use it.
  • Have they been salted and hashed?
    • Hashed passwords are good, but there are several tools and methods that can be used to try to reveal the original password.
    • There are even dictionaries that connect hashes back to their original passwords. Submitting "f869ce1c8414a264bb11e14a2c8850ed" to one of these lookup services reveals that the word "taco" was used to generate that hash.
    • Adding a "salt" to a password means adding extra data to it before it gets hashed.
    • For example, the password "taco" is combined with the word "salsa" before being hashed, and the resulting hash is stored as "6b8dc43f9be3051e994cafdabadc2398".
    • Now, an attacker looking up the hash "6b8dc43f9be3051e994cafdabadc2398" in a dictionary won't find anything, and will be forced to build a new dictionary, which is, ideally, time-consuming.
  • Have they been hashed with a well studied unbroken algorithm?
  • Have they been hashed multiple times? Or with a computationally expensive algorithm? Or with a memory expensive algorithm?
    • These and other questions get into the nitty-gritty of how passwords can be stored securely so that they are of little use to an attacker once they are made public.
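To make the hashing-versus-salting distinction concrete, here is a short Python sketch. It uses MD5 only because the examples above do; real systems should use a deliberately slow, salted scheme such as bcrypt, scrypt, or Argon2:

```python
import hashlib
import os

def md5_hex(data: str) -> str:
    """Hex digest of the MD5 hash -- for illustration only, not production use."""
    return hashlib.md5(data.encode()).hexdigest()

password = "taco"

# Unsalted hash: identical passwords always produce identical hashes,
# so precomputed lookup tables can reverse common ones instantly.
unsalted = md5_hex(password)

# Salted hash: random extra data makes each stored hash unique,
# forcing an attacker to build a fresh dictionary per salt.
salt = os.urandom(16).hex()
salted = md5_hex(salt + password)

print(unsalted)
print(salt, salted)
```

Note that the salt is stored alongside the hash; its job is not to be secret, but to make precomputed dictionaries useless.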


Luckily, there are plenty of resources for security engineers to follow in order to make their sites more secure, and in particular, their storage of passwords more secure even if they are disclosed. Dropbox has an interesting post about how they store passwords, and this talk by Alec Muffett from Facebook, which describes their methods for storing passwords, is really interesting. In fact, there is an entire conference dedicated to passwords and the engineering that goes into keeping them secure. This site tracks published details about password storage policies of various sites, and this presentation provides the motivation for doing so.


That's great, but I'm not a security engineer, what do I need to know about passwords?


There is an unending list of articles, blog posts, how-to guides, and comics written about passwords. Passwords are going away. Passwords will eventually go away. Passwords are here to stay. Passwords are insecure. Two-factor authentication will save us all. Biometrics will save us all. Whatever your opinion, you probably have multiple accounts with multiple websites, and ideally you're using multiple passwords. It's a good idea to recognize that whether or not the sites you use are doing a good job of protecting your passwords, you too can take steps to make your password use more secure.


If you take nothing else away from this post, remember to set up a password manager (there are many), actually use it to create different passwords for each account you have, routinely check whether your account information has been leaked recently, and if it has, change the password associated with that account.


What's the big deal?


If you have an account with an online service, like an email provider, a social network, or an ecommerce site, then it is very likely that you have a password associated with that account. It's also likely that you have more than a few accounts, and with so many accounts you have most likely been tempted to use the same or similar usernames and passwords across them.


While there are clear benefits (along with some privacy and tracking drawbacks) to having a consistent identity across services, there are clear drawbacks to using the same password across services: if one of these services is attacked and account information is leaked, your accounts with identical or similar usernames at the other services could be vulnerable to misuse by an attacker.


Ok, but who cares? It's just my (hotmail | twitter | ebay | farmersonly) account.


You should care: these accounts paint a very detailed picture of who you are and what you do. For instance, your email has a record of messages you have sent and received, and from that an attacker can learn a surprising amount about you. With email providers that offer effectively unlimited email storage and provide little incentive for users to erase emails, it's nearly impossible for a user to be sure that nothing useful to an attacker is buried somewhere inside.


Furthermore, your email and social media accounts are effectively an extension of you. When an attacker has control of your account, the emails, tweets, and snaps sent from it are accepted as coming from you, and attackers can take advantage of that assumption and the trust you've built up with your contacts.


For example, consider the Stranded Traveler Scam, in which an attacker sends a message to one or more of your contacts claiming to be in a bad situation somewhere far away, and if they could just wire some money, things would surely work out. There are news reports about these types of scams all the time (2011, 2011, 2012, 2013, 2014, 2015, 2016). Because the email has come from your account and bears your name, your relatives, friends, and coworkers are more likely to believe it is actually you writing the message than a scammer. Similar attacks involve sending malware in attachments, requesting wire transfers for family members or executives, or requesting W-2 forms for employees. None of these attacks requires the takeover of your account, but all are certainly strengthened by it.


Really, how often does this happen? Can't I just deal with it when I hear about it on the news?


You could do that, and it would be better than not doing anything at all, but breaches that leak account information happen surprisingly frequently and they don't always make the news that you read. Sometimes, we don't learn about them for weeks or years after they happen, meaning that passwords you used a long time ago may have been known to attackers for a while before you were made aware of a breach.


Is my password really out there?


Sometimes. Maybe. It's hard to say. Often, sites will hash passwords before they are stored. However, different sites use different hash methods which have different security characteristics, and some methods previously believed to be secure are no longer considered so.


Shouldn't these sites be more secure?


That would be nice, but data security is a difficult and quickly changing field and not every site prioritizes security as highly as you might like.


Fine, what should I do?


You should do a few things:

  1. Use a password manager
  2. Use a different password for every account you have
    • Now that you have a password manager storing all your passwords, there's no need to reuse passwords
  3. Use complex passwords
    • Most password managers can create long random strings of letters, numbers and symbols for you. Since the password manager stores these passwords and you don't have to remember them, there's no need to use simple or short passwords.
  4. Keep an eye on sites that catalog leaked account information.
    • Have a look from time to time at sites that keep track of leaked accounts to see if yours has appeared; the better-known ones are usually kept up to date and easy to use.
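As a rough illustration of item 3, here is how a password manager might generate a long random password, sketched in Python with the standard secrets module (the length and character set are arbitrary choices for the example):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password the way a password manager might."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Because the manager remembers the result for you, there is no human-memorability constraint, which is exactly what makes these passwords strong.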

Whether employees realize it or not, they can wreak havoc on internal and external security protocols. Employees' daily activities (both work and personal) on their work devices (computers, smartphones, and tablets) or on their company’s network can inflict damage. Often called “insider threats,” employees’ actions, both unintentional and intentional, are worth paying heed to whenever possible. Gartner’s Avivah Litan reported on this thoroughly in her “Best Practices for Managing Insider Security Threats, 2016 Update,” where it is made clear that businesses aren’t on their own in regards to responding to this growing, often unpredictable and unmitigated challenge.


Avivah recommends four strong approaches you should consider when developing an insider threat program, which she reinforces with analysis, data, and additional content. In the hopes of expanding this list and providing even more thoughts, Rapid7’s security experts, inspired by the four outlined approaches in the report, got together to provide some additional approaches. Where applicable we have also provided links to resources, both product and research, that can help your team combat insider threats. After all, we’re in this together!


Recommendation #1: “Start insider threat programs with a risk assessment, in order to prioritize efforts. Deploy business processes and technologies that prevent many insider threats in the first place.”


Yes, but be wary of risk assessments not tied to your business objectives and limitations. While a risk assessment is a solid starting point, any miscues could do more harm than good. For example, poor risk assessment will actually overload the security team and create paralysis unless it is relevant, actionable, and sustainable. Some risk assessment services use both people and technology to examine your own technologies, people, and processes. But they are not all created equal and it is important to review the way they will measure risk and what tools they might use.


It’s important to consider two types of risk assessment: a one-off audit done by a third party (like a penetration test or a cybersecurity maturity assessment), and continuous assessment paired with good security hygiene. One thing we often see is risk assessments using tools from the CVSS-only scoring world of legacy vulnerability management players. In this world you are ultimately left with a list of hundreds of ‘critical’ vulns, a list you’ll never get through before you can even start thinking about insider threats. Even many penetration tests and external risk assessments fall into the trap of providing a list of problems without context or focus on what attackers really care about. It’s important for teams to do this risk assessment, but to do it in a way that prioritizes properly beyond CVSS and takes into account the age of the vulns, their exploitability, and more.
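To make "prioritizing beyond CVSS" concrete, here is a toy Python sketch. The weighting scheme is purely illustrative and not any vendor's actual model; the point is simply that exploit availability and age change the ordering that raw CVSS would give:

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    name: str
    cvss: float              # CVSS base score, 0-10
    exploit_available: bool  # public exploit code exists
    age_days: int            # how long the vuln has gone unpatched

def risk_score(v: Vuln) -> float:
    """Toy prioritization: weight CVSS by exploitability and exposure time."""
    score = v.cvss
    if v.exploit_available:
        score *= 1.5                      # weaponized vulns jump the queue
    score += min(v.age_days / 365, 2.0)   # old unpatched vulns drift upward
    return score

vulns = [
    Vuln("CVE-A", cvss=9.8, exploit_available=False, age_days=10),
    Vuln("CVE-B", cvss=7.5, exploit_available=True, age_days=400),
]
prioritized = sorted(vulns, key=risk_score, reverse=True)
print([v.name for v in prioritized])
```

Here the lower-CVSS vuln with a public exploit and a year of exposure outranks the fresh 9.8, which is the kind of reordering a context-aware assessment produces.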


Our solutions can help:

In case you didn’t know, our solutions, Nexpose and Metasploit, let you go beyond “vulnerability assessment” to exactly what Gartner is suggesting – a RISK assessment. Because Nexpose prioritizes vulnerabilities by the ease with which they’d be used in an attack, our risk scores are a true picture of how susceptible a system is to a breach, whether insider or external. Nexpose vulnerability checks include checks for default passwords, and limiting these vulns makes it more difficult for insider threats to access systems they shouldn’t. Metasploit lets you conduct simulated phishing attacks on your employees, and it lets you test their ability to spot not just suspicious links but suspicious requests for information.


Recommendation #2: “Combine technical methods with nontechnical controls, such as security awareness training linked to employee monitoring, for the most successful results in your insider threat program.”


Don’t dehumanize the employee experience. People are certainly the key here, and security awareness training should run the gamut from general education to phishing exercises. It’s critical for businesses to make clear to employees that although there will be monitoring for security purposes, their privacy will continue to be respected.


For a more successful deployment, executive staff and the security team must ensure that employees have transparency and trust into the process. One of the best alerting mechanisms in every organization isn't technology, it's the employees. If users worry about being under the user behavior magnifying glass after they report something, they're likely to stop speaking up when weird stuff happens. A best practice is to have an authority outside of security, such as a risk or privacy officer, explicitly define who has access to the technology, to what information, and ensure that the policy is regularly reviewed and enforced. During security awareness and compliance training, the security team has an opportunity to share the importance of identifying anomalous behavior, since compromised credentials have been a top attack vector behind breaches (Verizon Data Breach Investigations Report) for five years running.


Our solutions can help:

New technology, such as User Behavior Analytics, can correlate the millions of actions taken on the network to the users and assets behind them. This can expose risky user behavior, whether it be unintentional negligence or malicious insider threat. When InsightIDR is first deployed in a customer environment, the technology starts to create a baseline of typical user behavior. Many organizations immediately improve their security posture and validate existing controls by identifying non-expiring passwords, shared accounts, and administrators they otherwise did not know about. From there, InsightIDR highlights notable and anomalous behavior that may be indicative of a compromised or rogue employee account.



Recommendation #3: “(For technical insider threat programs) Start with readily available data, such as directory data and access logs, to achieve quicker results. Increase the types of information ingested over time, recognizing analytics can only be as good as the data they consume.”


Yes and YES. This is something we truly believe in, and we think that you can not only get a lot out of the data you already have, but you can do it more easily than ever before. If you want to identify insider threats, you need to first understand what is normal behavior for your users and the first step is to tie the majority of events back to those users. This requires Active Directory, to provide details on who is logged into each device at any given time, and it requires DHCP, to know which IP address is assigned to these devices. The next big step is to obtain endpoint data, such as local event logs and process details, to increase the types of behavior you can see.
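The user-attribution step described above can be sketched with a few lookup tables. The records below are made up and heavily simplified; a real deployment would parse DHCP server logs and Windows security events, but the join logic is the same:

```python
# Hypothetical, simplified records (all values are illustrative).
dhcp_leases = [
    {"ip": "10.0.0.5", "hostname": "WKSTN-01", "time": "2016-10-20T08:00"},
]
ad_logons = [
    {"user": "alice", "hostname": "WKSTN-01", "time": "2016-10-20T08:05"},
]
firewall_events = [
    {"src_ip": "10.0.0.5", "dest": "203.0.113.9", "time": "2016-10-20T08:30"},
]

# DHCP answers "which host had this IP?"; AD answers "who was on that host?".
ip_to_host = {lease["ip"]: lease["hostname"] for lease in dhcp_leases}
host_to_user = {logon["hostname"]: logon["user"] for logon in ad_logons}

# Attribute each network event to a user instead of a bare IP address.
for event in firewall_events:
    host = ip_to_host.get(event["src_ip"], "unknown-host")
    user = host_to_user.get(host, "unknown-user")
    print(event["time"], user, host, "->", event["dest"])
```

Once events carry a username rather than an IP, building a per-user behavioral baseline becomes possible; endpoint data then widens what you can see per user.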


Ultimately, start with basic data analytics to look for known patterns of malicious behavior, and look for solutions that have automated the collection, unification, and correlation of your data. It is also smart to bring in folks who have done this before in order to get immediate benefit from any security analytics program.


Our solutions can help:

In general, insider threats are a risk to an organization, whether they’re intentional or unintentional. Rapid7 combines technology, with our InsightIDR solution, and a team of incident responders who work constantly to understand which users and assets present the highest risk in the organization. This evaluation is not just a look at the technological systems, but also a deep understanding of the business. This allows the team to be more targeted in their hunting for threats and to put policies in place that minimize insider threat risk. The time spent up front makes it easier to mitigate risk by prioritizing what’s most important to the organization, and the team's deep knowledge of attacker and user behavior helps them better identify insider threats.



Recommendation #4: “Start with basic data analytics to look for known patterns of malicious behavior. Graduate to more advanced analytics like anomaly detection once your organization is able to manage the results.”


Analytics reign supreme, and this is where they need to become more automated. Let’s assume you’ve done all of the above and done it right; the last thing you want to do now is learn how to write or even manage analytics. Focus on automating the analysis as much as possible. This isn’t just about having a library of analytics created with an attacker’s mindset; it also needs to be considered in how log search is performed and visualized. It’s no longer acceptable to spend time data-splunking around when your goal is to find that insider threat before it hurts you.


Our solutions can help:

If you don't have log aggregation in place, this is a great start, as it will save you lots of time and headaches during incident investigations, and is an integral part of meeting today's compliance standards. Most log aggregation tools come with the ability to create custom alerts, which can help solve basic use-cases and provide visibility into the environment. InsightIDR works by creating a User-IP-Asset link from integrating with Active Directory, DHCP, and Endpoint Data, as well as an existing network and security stack. What takes InsightIDR beyond just analyzing logs are Intruder Traps, such as Honey Pots and Honey Credentials, which reliably detect intruders earlier in the attack chain.


Ultimately, there are a lot of methods to consider when developing an insider threat program. But as can be seen above, not all approaches are created equal. It’s imperative to be thoughtful and conscientious about how insider threats are approached and handled.



2016 can be characterized as the year that IoT security research took off, both here at Rapid7 and across the security researcher community. While IoT security research was certainly being conducted in years past, it was often dismissed within our community as "too easy" or simply "stunt hacking." But as the body of research in this space grows along with the prevalence of IoT, it's become obvious that this sort of "easy" hacking is critical not only to the security of these devices in and of themselves, but to the safety of the people near them.


After all, the hallmarks of IoT are the ability to interact with the real, physical space around the devices and the ability to communicate with other devices. Security failures, therefore, can both directly affect the humans who rely on these devices and open attack vectors against nearby devices and the networks they are connected to.


With that in mind, I'd like to take a moment to consider the more noteworthy events in IoT security, and how, together, they make the case that this year is when we stopped considering IoT security "junk hacking" and started taking these things seriously.


IoT Security in 2016


In January, Brian Knopf announced that the "I Am the Cavalry" security research advocacy group will be publishing an open, collaboration-driven cybersecurity rating system for IoT devices, in contrast to Underwriters Laboratories' proprietary standards. This isn't the first announced strategy to comprehensively rate the security of IoT devices, but it's certainly the most open. Transparency is crucial in security research and testing, since it allows for independent verification, reproducibility, and accountability.


In March, researcher Wish Wu of Trend Micro reported a pair of vulnerabilities in a Qualcomm component of the Android operating system which, if exploited, can allow a malicious application to grab root-level permissions. Wu presented these findings and others in May at the Hack in the Box security conference. While these bugs have been patched by Google, these findings remain significant for two reasons. One, Android is rapidly becoming the de facto standard operating system for the Internet of Things -- not just smartphones -- so bugs in Android can have downstream effects in everything from television set-top boxes, to children's toys, to medical devices. Second, and more worrying, many of these devices do not have the capability to get timely security updates or to upgrade their operating systems. Given the lack of moving parts in these machines, they can chug along for years without patches.


In May, researcher Earlence Fernandes and others at the University of Michigan demonstrated several vulnerabilities in Samsung's SmartThings platform for home automation, which centered around abusing and subverting applications to gain privileges, including a remote attack to reset a door lock PIN. Design issues of over-privileged applications are certainly not limited to Samsung's software engineering, but are commonplace in IoT infrastructure. Oftentimes, the services and applications that ship on IoT devices aren't designed with privilege constraints in mind. If the service does what it's supposed to do with administrative or root privileges, there isn't a lot of incentive to design it to work in a less permissive model, especially when designers aren't considering the motives or opportunities of a malicious user. In other words, while most traditional computers have pretty solid user privilege models, IoT devices often ignore this fundamental security concept.


In September, the Michigan Senate Judiciary Committee passed a bill that forbids some forms of "car hacking," but importantly, includes protections for research activities. These exemptions weren't included in the original text of the bill, but thanks to the efforts of Rapid7's Harley Geiger, we've seen a significant change in the way that legislators in Michigan view automotive cyber security and the value of security research there. While this bill is not yet law, the significance of this shift in thinking can't be overstated.


Also in September, some of the early fears of widespread IoT-based insecurity manifested in the highly public IoT-powered DDoS attack against journalist Brian Krebs. This attack was made possible, in part, by the massive population of unpatched, unmanaged, and unsecured home routers, a topic explored way back in January by yours truly. Of course, I'm not saying I called it, but... I kinda called it.


Some closing pictures


Who doesn't love a good Google Trends graph? We can see from the below that interest in the "IoT Security" search term has doubled since the beginning of 2016, and I'd be surprised to see it hit any significant decline in the years to come.


[Figure: Google Trends graph for the "IoT Security" search term]


While much of this interest is pretty doomy and/or gloomy, it's healthy to be considering IoT security today, and I'm glad that IoT appears to be getting the respect and serious attention it deserves in the security research community. It's only through the efforts of individual, focused security researchers that we'll be able to get a handle on the issues that bedevil the IoT-scape. Otherwise, we're looking at a future as envisioned by Joy of Tech:

[Comic: Joy of Tech, "Internet of Ransomware Things"]

Due to a lack of certificate validation with a configured remote Microsoft Exchange server, Nine leaks associated Microsoft Exchange user credentials, mail envelopes and their attachments, mailbox synchronization information, calendar entries and tasks. This issue presents itself regardless of SSL/TLS trust settings within the Nine server settings panel.


October 13, 2016 update: Version 3.1.0 was released by the vendor to address these issues.




Discovered by Derek Abdine of Rapid7, Inc., and disclosed in accordance with Rapid7's disclosure policy.


Product Description


The Nine mobile application for Android is a Microsoft Exchange client that allows users to synchronize their corporate email, tasks, and calendar entries to Android-based devices (phones, tablets, etc.). At the time of writing, Nine is listed in the Google Play store with 500,000 - 1,000,000 installs.




An attacker in a privileged position within the same network as the mobile device running Nine can man-in-the-middle (MitM) traffic to the remote Exchange server (such as in the case of outlook365 corporate email). Attacks can be trivialized in open wireless environments, or by WiFi stalking unsuspecting Nine users with a rogue wireless access point (WAP herein) to trick the mobile device into connecting to an attacker-controlled network.


In one scenario, an attacker may set up a WAP in a backpack broadcasting a well-known SSID, such as "Starbucks," bridged to a 3G/4G mobile data connection. The attacker could funnel HTTPS traffic to mitmproxy, which serves self-signed certificates from an otherwise invalid certificate authority (CA). From that point on, the attacker would merely wait for a Nine user to come within range of the rogue WAP. Communication between Nine and the remote Exchange ActiveSync service may happen when the victim opens his or her phone, when an email is received (and push is enabled), or when the phone polls the remote service. All communication packets contain the victim's credentials in an HTTP basic authentication header.
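The reason those packets are so valuable is that HTTP basic authentication merely base64-encodes the credentials; base64 is an encoding, not encryption, so anyone who can read the traffic can recover them. A short Python sketch with made-up credentials:

```python
import base64

# An HTTP basic Authorization header carries "username:password"
# base64-encoded. The credentials below are fabricated for illustration.
header_value = "Basic " + base64.b64encode(b"jdoe@example.com:Summer2016!").decode()

# Anyone observing the traffic can reverse the encoding trivially:
scheme, encoded = header_value.split(" ", 1)
username, password = base64.b64decode(encoded).decode().split(":", 1)
print(username, password)
```

This is why basic authentication is only as safe as the TLS channel wrapping it; when certificate validation is skipped, as in this vulnerability, that wrapper provides no protection.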


In a variant of the above scenario, an attacker may visit the same open WiFi environment (for example, on an airliner or in a coffee shop) as an unsuspecting user and poison that user's DNS queries for the Exchange server. The rest of the attack would proceed as explained above.


The image below depicts a decrypted capture of MitM'd traffic by mitmproxy, an open source tool. The highlighted area in red contains base64-encoded account credentials.
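The credential leak is easy to see once traffic is intercepted: HTTP basic authentication is base64 encoding, not encryption, so the credentials fall directly out of the captured header. A minimal sketch, using made-up credentials:

```python
import base64

def decode_basic_auth(header_value):
    """Decode an 'Authorization: Basic ...' header value into (user, password)."""
    scheme, _, encoded = header_value.partition(" ")
    if scheme.lower() != "basic":
        raise ValueError("not a Basic auth header")
    user, _, password = base64.b64decode(encoded).decode("utf-8").partition(":")
    return user, password

# Hypothetical credentials for illustration only:
print(decode_basic_auth("Basic YWxpY2U6czNjcjN0"))  # ('alice', 's3cr3t')
```

This is why the MitM position alone is sufficient: no cracking step is needed once the header is captured.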






As mentioned earlier, users can disable push synchronization in Nine and synchronize manually, and only on trusted networks (or over VPN connections to trusted networks).


IT administrators can look for MUA strings prefixed with "Nine-" in their ActiveSync logs, and determine appropriate next steps for those users who are currently using the NineFolders app to access organization data. Customers of Rapid7's InsightIDR can identify Nine clients by simply searching for "where(/Nine-/i)" in the Log Search page.
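As a rough illustration of that log search, the sketch below greps a batch of log lines for Nine user-agent strings. The log line format here is hypothetical; real ActiveSync (IIS) logs carry many more fields.

```python
import re

# Matches user-agent strings beginning with "Nine-", stopping at quotes,
# semicolons, or whitespace.
NINE_UA = re.compile(r'Nine-[^\s";]+', re.IGNORECASE)

def find_nine_clients(lines):
    """Return the Nine user-agent strings found in a batch of log lines."""
    hits = []
    for line in lines:
        m = NINE_UA.search(line)
        if m:
            hits.append(m.group(0))
    return hits

# Hypothetical log lines for illustration:
logs = [
    '2016-10-11 10:01:02 POST /Microsoft-Server-ActiveSync user1 "Nine-SM-G900F/3.0.5"',
    '2016-10-11 10:01:05 POST /Microsoft-Server-ActiveSync user2 "Outlook-iOS/2.0"',
]
print(find_nine_clients(logs))  # ['Nine-SM-G900F/3.0.5']
```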


Disclosure Timeline

This vulnerability is being disclosed in accordance with Rapid7's disclosure policy.


  • Tue, Aug 09, 2016: Attempted contact to the vendor.
  • Thu, Aug 25, 2016: Disclosed details to CERT.
  • Thu, Aug 25, 2016: CVE-2016-6533 assigned by CERT.
  • Tue, Oct 11, 2016: Public disclosure.
  • Wed, Oct 12, 2016: Vendor response with notification of fixed version (release timing TBD).
  • Thu, Oct 13, 2016: Vendor released version 3.1.0 to address these issues.

Being great is, well… great, right? But as we all know it doesn’t happen in a vacuum, it’s an equation:


Greatness = Individual Excellence + Teamwork + Meaningful Customer Relationships


Coincidentally (or not), these items make up three of the five core values we strive towards here at Rapid7. The other two, ‘Disciplined Risk Taking’ and ‘Continuous Learning’, play a role as well, but we all know blog posts need three things – it’s some sort of Internet rule. Now, let’s be honest: public displays of boasting are not what we are about here. But when you witness a tidal wave of public support from your customers on the Gartner Peer Insights portal and, simultaneously, your company comes out on top of the coverage for the SANS Top 20 Security Controls (2016 PDF poster), you have to pause for just a moment to let people know.


This is important, especially during National Cybersecurity Awareness month, because it’s all about our customers and employees working together to create killer solutions and services. And in this world where we all want the benefits of being interconnected but understand the risks, the heroes have become the IT and security teams. Equipping these teams is what drives us each day. Below is more info on each of these accolades, and a big thank you to our entire community for giving us this amazing moment.


Rapid7 Provides the Most Coverage for the SANS Top 20 Critical Security Controls

Many organizations rely on the SANS Top 20 Critical Security Controls (now a joint venture between SANS and the Center for Internet Security) to help them understand what they can do to minimize risk and harden resiliency. The Critical Security Controls run the gamut from asset identification and management to continuous monitoring and secure configurations. How does it work? Well, SANS surveyed industry vendors in March 2016, using the Center for Internet Security (CIS) document “A Measurement Companion to the CIS Critical Security Controls (Version 6)” as the baseline. The “heat map” below shades each area according to the number of measurements a vendor covers divided by the total number of measurements listed for that Critical Control. As you see below, Rapid7 leads the way.


SANS top 20 critical controls vendor rankings

This is a representation of our full portfolio including pen testing (Metasploit), vulnerability management (Nexpose), application security (AppSpider), and SIEM/UBA/EDR (InsightIDR). If you are already using one of our products in one area, we should show you how our solutions work together to get you even more coverage. Ultimately, though, this helps people understand that our solutions provide the quality, usability, and insight that security professionals need to get the job done.


Gartner Peer Insights: Security Product Reviews for Rapid7 at the Top

If you haven’t checked out Gartner Peer Insights yet, it’s a resource fed by the user community itself, with in-depth reviews of products ranging from SIEM and UBA to vulnerability management and application security. We are proud of what our customers say about us, and we are always listening for ways to improve their experience and success using our solutions. Below you'll see where Rapid7 stacks up in terms of overall peer rating on Gartner Peer Insights in the SIEM category:

gartner review for SIEM security solutions

Go take a look at what folks are saying, and then do your own searches for the solutions you need!


And if you have any questions or need to talk to us about any of our solutions just let us know in the comments or contact us page. Now that we’re done celebrating we’re back at work, with all of you, to keep progressing!

Today we are announcing three vulnerabilities in the Animas OneTouch Ping insulin pump system, a popular pump with a blood glucose meter that serves as a remote control via RF communication. Before we get into the technical details, we want to flag that we believe the risk of wide-scale exploitation of these insulin pump vulnerabilities is relatively low, and we don’t believe this is cause for panic. We recommend that users of the devices consult their healthcare providers before making major decisions regarding the use of these devices. More on that further down in this post.


Users should also be receiving notification of this issue, along with details for mitigating it, directly from Animas Corporation, via physical mail. We recommend you pay close attention to this communication.


Product description

The OneTouch Ping is a popular medical device used by diabetic patients to self-administer insulin. According to the vendor's website, it is a "two-part system" that "communicates wirelessly to deliver insulin." The two devices communicate in the 900MHz band using a proprietary management protocol.


Summary of findings

The OneTouch Ping insulin pump system uses cleartext, rather than encrypted, communications in its proprietary wireless management protocol. Due to this lack of encryption, Rapid7 researcher Jay Radcliffe discovered that a remote attacker can spoof the Meter Remote and trigger unauthorized insulin injections.


Due to these vulnerabilities, an adversary within sufficient proximity (which can depend on the radio transmission equipment being used) can remotely harm users of the system, potentially causing them to have a hypoglycemic reaction if the insulin delivery is not cancelled on the pump.


These issues have been reported to the vendor, Animas Corporation, CERT/CC, the FDA and DHS. Animas has been highly responsive and is proactively notifying users of the devices, and recommending mitigations for the risks.


Findings and analysis

Three major findings were discovered during the analysis of the product. For raw, uncommented packet data, please see the addendum at the end of this advisory.


R7-2016-07.1: Communications transmitted in cleartext (CVE-2016-5084)

Packet captures demonstrate that the communications between the remote and the pump are transmitted in the clear. During the normal course of operation, de-identified blood glucose results and insulin dosage data are leaked for eavesdroppers within radio range to receive.


R7-2016-07.2: Weak pairing between remote and pump (CVE-2016-5085)

There is a pairing process, performed during the setup of the pump, that partners the pump with a remote. This prevents the pump from taking commands from other remotes whose transmissions it might accidentally pick up. The pairing is done through a five-packet exchange in the clear, in which the two devices exchange serial numbers and some header information. This is used to generate a CRC32 "key" (for lack of a better term). This key is used by the remote and pump in all future transmissions and is transmitted in the clear. The five packets are identical every time the pairing process is performed between the remote and insulin pump, which eliminates the possibility of the devices using encryption. Animas patent documents do not outline exactly what is used in the CRC generation, but it includes no encryption.


Attackers can trivially sniff the remote/pump key and then spoof being the remote or the pump. This can be done without knowledge of how the key is generated. This vulnerability can be used to remotely dispense insulin and potentially cause the patient to have a hypoglycemic reaction.
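Because the exact CRC inputs are not publicly documented, the following is only a hypothetical reconstruction. It illustrates the core weakness: a CRC32 over values exchanged in the clear is deterministic, so any eavesdropper on the pairing exchange can compute the same "key."

```python
import zlib

def derive_key(pump_serial, remote_serial):
    # Hypothetical: CRC32 over the concatenated serials. The actual inputs
    # to the CRC in the Ping's pairing exchange are not publicly documented.
    return zlib.crc32((pump_serial + remote_serial).encode())

# The "key" is fully determined by values sent in the clear, so an
# eavesdropper derives exactly the same value the devices do.
print(hex(derive_key("PUMP12345", "REMOTE678")))
```

Unlike a real key exchange (e.g., Diffie-Hellman), nothing here is secret: observing the five pairing packets is enough to impersonate either device.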


R7-2016-07.3: Lack of replay attack prevention or transmission assurance (CVE-2016-5086)

Communications between the pump and remote carry no sequence numbers, timestamps, or other defenses against replay attacks. Because of this, attackers can capture remote transmissions and replay them later to perform an insulin bolus without any special knowledge, potentially causing the patient to have a hypoglycemic reaction.


In addition, the protocol the remote meter and pump use to communicate has no elements that guarantee the devices have received the packets in a certain order, or at all. It is believed that this weakness would allow an attacker to perform a spoofed remote attack from a considerable distance from the user/patient: a sufficiently powerful transmitter could send commands to the pump in the blind, without needing to hear the acknowledgement packets.
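The missing protection can be shown with a toy receiver: a protocol that tracks a monotonically increasing sequence number rejects replayed packets, whereas the Ping protocol has no such field. This is an illustrative sketch of the general defense, not the pump's actual protocol.

```python
class Receiver:
    """Toy receiver that rejects replays via a monotonic sequence number."""

    def __init__(self):
        self.last_seq = -1

    def accept(self, seq):
        if seq <= self.last_seq:
            return False  # replayed or out-of-order packet: rejected
        self.last_seq = seq
        return True

r = Receiver()
print(r.accept(1), r.accept(2), r.accept(2))  # True True False
```

Without a field like `seq` (or a timestamp, or a challenge-response nonce), a verbatim copy of an old bolus command is indistinguishable from a fresh one.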


This video demonstrates an attack on the Animas OneTouch Ping.



Attack range

The OneTouch Ping does not communicate on 802.11 WiFi, or otherwise communicate on the internet. However, it is believed these attacks could be performed from one to two kilometers away, if not substantially further, using sufficient elevation and off-the-shelf radio transmission gear available to ham radio hobbyists.


The normal operating range between the remote and pump is approximately 10 meters. In 2011, Barnaby Jack of McAfee, Inc. claimed the ability to perform a 900MHz-band attack from 90 meters away with an external directional antenna (a commercial 3-element Yagi); however, he did not execute this attack against the OneTouch Ping.



Mitigations

Using industry-standard encryption with a unique key pair would mitigate these issues.


Affected users can avoid these issues entirely by disabling the radio (RF) functionality of the device. On the OneTouch Ping Insulin Pump, this is done through the Setup -> Advanced -> Meter/10 screen, and selecting "RF = OFF".


In addition, the vendor has provided other mitigations for these issues, described on their website and in letters being sent to all patients using the pump and health care professionals.


Patients should consult with their own endocrinologist about any aspect of their ongoing medical care.


Researcher's note regarding mitigations

I am Jay Radcliffe, security researcher at Rapid7 and Type I diabetic. Five years ago when I first disclosed security vulnerabilities in an insulin device, I was shocked and overwhelmed by the number of concerned users that came to me looking for advice and help. Here we are again with new research on a medical device. If you are not technical and read the security advisory, you are probably more than worried. I would be too. So let me help clarify and explain some things from a patient perspective. Know that the device I did 90% of my research on was the device I had attached to me for several years; I know how important this device is to a diabetic’s health.


First, know that we take risks every day. We leave the house. We drive a car. We eat a muffin. We guess the amount of carbs. All entail risk. This research uncovers a previously unknown risk. This is similar to saying that there is risk of an asteroid hitting you, a car accident occurring, or miscalculating the amount of insulin for that muffin you ate. Some of those risks are low (asteroid), some are high (insulin). This knowledge of risk allows individuals to make personal decisions. Most people are at limited risk from any of the issues related to this research. These are sophisticated attacks that require being physically close to a pump. Some people will choose to see this as significant, and for that they can turn off the RF/remote features of the pump and eliminate that risk.


Second, always take care of your diabetes first. We all know the dangers of high blood sugar and low blood sugar too. These risks often far outweigh the risks highlighted in this research. If you are concerned, work with your endocrinologist and device vendor to make sure you are making the best choices. Removing an insulin pump from a diabetic over this risk is similar to never taking an airplane because it might crash.


Third, this research is done to make sure the future of our devices is safe. As these devices get more advanced, and eventually connect to the internet (directly or indirectly), the level of risk goes up dramatically. This research highlights why it is so important to wait for vendors, regulators, and researchers to fully work on these highly complex devices. This is not something to be rushed, as there is a patient’s life on the line. We all want the best technology right away, but getting it done in a reckless, haphazard way sets the whole process back for everyone.


I do not pump right now. I take shots manually. Not because of the security risks of insulin pumps, but because that is what my doctor and I have chosen. If any of my children became diabetic and the medical staff recommended putting them on a pump, I would not hesitate to put them on a OneTouch Ping. It is not perfect, but nothing is. In this process I have worked with Animas and its parent company, Johnson & Johnson, and know that they are focused on taking care of the patient and doing what is right.


Finally, please know that neither Animas nor Johnson & Johnson has paid me or Rapid7 for any of the research done on the device described here. This is just the advice of one parent and a person who has spent 17 years counting carbs and taking a risk on how much insulin is right.


Disclosure Timeline

This vulnerability advisory was prepared in accordance with Rapid7's disclosure policy.


  • Thu, Apr 14, 2016: Attempted to contact the vendor at several email aliases at both domains.
  • Thu, Apr 21, 2016: Details disclosed to the vendor (PGP KeyID: 0xEC69B12DFF06A1CA)
  • Mon, Apr 25, 2016: Animas initiated complaint handling process
  • Fri, May 06, 2016: Further clarified details with vendor
  • Mon, May 09, 2016: Details disclosed to CERT
  • Thu, Jun 16, 2016: CVEs assigned by CERT
  • Jul-Sep, 2016: Worked with Animas on validating the reported vulnerabilities
  • Wed, Sep 21, 2016: Mitigations provided by the vendor
  • Tue, Oct 04, 2016: Public disclosure


Addendum: Sample Packet Data

The following describes a sample packet captured between the insulin pump and the remote meter.


 REMOTE: 00 00 00 04 A3 5A 92 B2  4C 00 0E 0F .....Z..L...
 REMOTE: 00 00 00 04 A3 5A 92 B2  4C 00 0E 0F .....Z..L...
 PUMP:   00 00 FF 00 1A D1 81 81 ........
 REMOTE: 20 00 0E 00 BF DB CC 6F ......o
 PUMP:   03 00 F1 04 16 B9 B9 87  2C 01 00 00 ........,...
 REMOTE: 03 00 F8 00 31 FD C9 EE ....1...
 PUMP:   03 00 07 04 88 76 DA DD  2C 01 00 00 .....v..,...
 REMOTE: 03 00 12 00 F0 30 0E FC .....0..
 PUMP:   20 00 ED 12 E7 BC 93 43  01 01 27 05 26 02 8F 00 ......C..'.&...
 PUMP:   57 45 45 4B 44 41 59 00  00 00 WEEKDAY...
 PUMP:   05 00 EA 00 D5 8F 84 B3  

October was my favorite month even before I learned it is also National Cybersecurity Awareness Month (NCSAM) in the US and EU. So much the better – it is more difficult to be aware of cybersecurity in the dead of winter or the blaze of summer. But the seasonal competition with Pumpkin Spice Awareness is fierce.


To do our part each National Cybersecurity Awareness Month, Rapid7 publishes content that aims to inform readers about a particular theme, such as the role of executive leadership, and primers to protect users against common threats. This year, Rapid7 will use NCSAM to celebrate security research – launching blog posts and video content showcasing research and raising issues important to researchers.


Rapid7 strongly supports independent research to identify and assess security vulnerabilities with the goal of correcting flaws. Such research strengthens cybersecurity and helps protect consumers by calling attention to flaws that software vendors may have ignored or missed. There are just too many vulnerabilities in complex code to expect vendors' internal security teams to catch everything. Independent researchers are antibodies in our immune system.


This NCSAM is an extra special one for security researchers for a couple reasons. First, new legal protections for security research kick in under the DMCA later this month. Second, October 2016 is the 30th anniversary of a problematic law that chills beneficial security research – the CFAA.


DMCA exception – copyright gets out of the way (for a while)


This October 29th, a new legal protection for researchers will activate: an exemption from liability under Section 1201 of the Digital Millennium Copyright Act (DMCA). The result of a long regulatory battle, this helpful exemption will only last two years, after which we can apply for renewal.


Sec. 1201 of the DMCA prohibits circumventing a technological protection measure (TPM) that controls access to copyrighted works (including software). [17 USC 1201(a)(1)(A)] A TPM can be anything that controls access to the software, such as weak encryption. Violators can incur civil and criminal penalties. Sec. 1201 can hinder security research by forbidding researchers from unlocking licensed software to probe for vulnerabilities.


This problem prompted security researchers – including Rapid7 – to push the Copyright Office to create a shield for research from liability under Sec. 1201. The Copyright Office ultimately did so last October, issuing a rule that limits liability for circumventing TPMs on lawfully acquired (not stolen) consumer devices, medical devices, or land vehicles solely for the purpose of good faith security testing. The Copyright Office delayed activation of the exception for a year, so it takes effect this month. Rapid7 analyzed the exception in more detail here, and is pushing the Copyright Office for greater researcher protections beyond the exception.


The exception is a positive step for researchers, and another signal that policymakers are becoming more aware of the value that independent research can drive for cybersecurity and consumers. However, there are other laws – without clear exceptions – that create legal problems for good faith researchers.


Happy 30th, CFAA – time to grow up


The Computer Fraud and Abuse Act (CFAA) was enacted on October 16th, 1986 – 30 years ago. The internet was in its infancy in 1986, and platforms like social networking or the Internet of Things simply did not exist. Today, the CFAA is out of step with how technology is used. The CFAA's wide-ranging crimes can sweep in both ordinary internet activity and beneficial research.


For example, as written, the CFAA threatens criminal and civil penalties for violations of a website's terms of service, a licensing agreement, or a workplace computer use agreement. [18 USC 1030(a)(2)(C)] People violate these agreements all the time – when they lie about their name on a social network, run unapproved programs on their work computer, or violate terms of service while conducting security research to test whether a service has accidentally made sensitive information available on the open internet.


Another example: the CFAA threatens criminal and civil penalties for causing any impairment to a computer or information. [18 USC 1030(a)(5)] No harm is required. Any unauthorized change to data, no matter how innocuous, is prohibited. Even slowing down a website by scanning it with commercially available vulnerability scanning tools can violate this law.

Since 1986, virtually no legislation has been introduced to meaningfully address the CFAA's overbreadth – with Aaron's Law, sponsored by Rep. Lofgren and Sen. Wyden, being the only notable exception. Even courts are sharply split on how broad the CFAA should be, creating uncertainty for prosecutors, businesses, researchers, and the public.


So for the CFAA's 30th anniversary, Rapid7 renews our call for sensible reform. Although we recognize the real need for computer crime laws to deter and prosecute malicious acts, Rapid7 believes balancing greater flexibility for researchers and innovators with law enforcement needs is increasingly important. As the world becomes more digital, computer users, innovators, and researchers will need greater freedom to use computers in creative or unexpected ways.


More coming for NCSAM


Rapid7 hopes National Cybersecurity Awareness Month will be used to enhance understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges researchers face. To celebrate research over the coming weeks, Rapid7 will – among other things – make new vulnerability disclosures, publish blog posts showcasing some of the best research of the year, and release videos that detail policy problems affecting research.


Stay tuned, and a cheery NCSAM to you.

[Editor’s Note: This is a sneak peek at what Tim will be presenting at UNITED 2016 in November. Learn more and secure your pass!]


In today’s big data and data science age, you need to think outside the box when it comes to malware and advanced threat protection. For the Analytic Response team at our 24/7 SOC in Alexandria, VA, we use three levels of user behavior analytics to identify and respond to threats. The model is defined as User-Host-Process, or UHP. Using this model and its supporting datasets allows our team to quickly neutralize and protect against advanced threats with a high confidence rate.

What is the User-Host-Process Model?


The UHP model supports our incident response and SOC analysts by adding context to every finding and pinpointing anomalous behavior. At its essence, it asks three main questions:


  • What users are on the network?
  • What hosts are they accessing?
  • What processes are users running on those hosts?

Advanced threat protection with a UHP model


This model also includes several enrichment sources such as operating system anomalies, whitelisting and known evil to help in the decision-making process. Once these datasets are populated, the output from the model can be applied in a variety of different ways.


For example, most modern SIEM solutions alert if a user logs in from a new, foreign-country IP address. If you needed to validate that alert armed only with log files, you’d be hard-pressed to confirm whether the activity is malicious or benign. Our Analytic Response team uses the UHP model to automatically bring in contextual data on users, hosts, and processes to help validate the alert. Here are some example artifacts:


  • User Account Information
    • Account created, Active Directory, accessed hosts, public IPs...
  • Host Information
    • Destination host purpose, location, owner, operating system, service pack, criticality, sensitivity...
  • Process Information
    • Process name, process id, parent process id, path, hashes, arguments, children, parents, execution order, network connections...


With this supporting data, we build a profile for each user or artifact found. Circling back to our example, “user logged in from a new IP address in a foreign country”, we can add this context:


  • Does the user typically log in and behave in this way?
    • Day/time of login, process execution order, duration of login
  • How often does the user run these particular processes?
    • Common, unique, rare
  • How common is this user's authentication onto this system?
  • How often have these processes executed on this system?


Armed with UHP model data, we have a baseline of user activity to aid in threat validation. If this user has never logged in from this remote IP, seldom logs into the destination system, and their process execution chain deviates from historical activity, we know that this alert needs further investigation.
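A toy version of this enrichment might look like the following. The field names, baseline structure, and thresholds are purely illustrative assumptions, not a real product schema.

```python
# Hypothetical per-user baseline built from historical activity.
baseline = {
    "jdoe": {
        "source_countries": {"US"},
        "hosts": {"FILESRV01": 120, "MAIL01": 300},          # login counts
        "processes": {"outlook.exe": 400, "excel.exe": 150},  # execution counts
    },
}

RARE_THRESHOLD = 5  # illustrative cutoff for "seldom seen"

def score_alert(user, country, host, process):
    """Return simple anomaly flags for a 'login from new IP' alert."""
    profile = baseline.get(user, {})
    return {
        "new_country": country not in profile.get("source_countries", set()),
        "rare_host": profile.get("hosts", {}).get(host, 0) < RARE_THRESHOLD,
        "rare_process": profile.get("processes", {}).get(process, 0) < RARE_THRESHOLD,
    }

flags = score_alert("jdoe", "RO", "FILESRV01", "powershell.exe")
print(flags)  # {'new_country': True, 'rare_host': False, 'rare_process': True}
```

An alert where every flag comes back True deviates from the baseline on all three axes and clearly warrants further investigation; mixed results, as here, guide the analyst toward which layer to dig into.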


Analyzing Malware, the UHP Way


Adhering to a UHP model means that for every executable, important metadata and artifacts are collected not only during execution, but also as a static binary. When you’re able to compare binary commonality, arguments, execution frequency and other lower level attributes, you now have additional context to make nuanced decisions about suspected malware.


For example, for the question, “How unique is a process?”, there are several layers to the question. Let’s look at four:


  • Process commonality on a single asset
    • Single host baseline
  • Process commonality at an organizational level
    • Across all of my assets, how many are running this process?
  • Process commonality at an industry/sector level
    • Across organizations in the same vertical, how common is this process?
  • Process commonality for all available datasets.


To be most effective, the User-Host-Process model applies multiple datasets to a specific question to aid in validation. So in the event that the “U” (user) dataset finds no anomalies, the Host layer is applied next, and finally the Process layer is applied to find anomalies.


Use Case: Webshell


Rapid7 was called to assist on an Incident Response engagement involving potential unauthorized access and suspicious activity on a customer’s public facing web server. The customer had deployed a system running Windows Internet Information Services (IIS) to serve static/dynamic content web pages for their clients.


We started the engagement by pulling data around the users in the environment, hosts, and real-time process executions to build up the UHP model. While the User and Host models didn’t detect any initial anomalies in this case, the real-time process tracking, cross-process attributes, baselines, and context models were able to identify suspicious command-line execution spawned by the parent process w3wp.exe, which happens to be the IIS process responsible for running the webserver. Using this data, we pivoted to the weblogs, which identified the suspicious webshell being accessed from a remote IP address. From there we were able to thoroughly remediate the attack.
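A simplified version of that process-lineage check might look like this. The event records and field names are hypothetical; the idea is simply that interactive shells spawned by the IIS worker process (w3wp.exe) are a classic sign of webshell command execution.

```python
# Hypothetical process-creation events, e.g. as collected by an EDR agent.
events = [
    {"parent": "w3wp.exe", "process": "cmd.exe", "args": "/c whoami"},
    {"parent": "explorer.exe", "process": "cmd.exe", "args": "/c dir"},
]

# Children of the IIS worker process that rarely have a legitimate reason
# to appear on a static-content web server.
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe"}

def suspicious(event):
    """Flag shells spawned by the IIS worker process."""
    return (event["parent"].lower() == "w3wp.exe"
            and event["process"].lower() in SUSPICIOUS_CHILDREN)

for e in events:
    if suspicious(e):
        print("possible webshell activity:", e)
```

In the engagement described above, a hit like this is what drove the pivot into the weblogs to locate the webshell itself.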




The Analytic Response team uses models such as UHP to help automate alert validation and add context to findings. Adding datasets from external sources such as VirusTotal, NSRL, and IP-related tools infuses further context into the alerts, increasing analyst confidence and slashing incident investigation times. For each of our Analytic Response customers, we take into account their unique user, host, and process profiles. By applying the UHP model during alert triage, hunting, and incident response, we can identify and protect against advanced threats and malware in your enterprise quickly and accurately.


If you’d like to learn more about Analytic Response, check out our Service Brief [PDF]. If you need Incident Response services, we’re always available: 1-844-RAPID-IR.

Yesterday, the Michigan Senate Judiciary Committee passed a bill – S.B. 0927 – that forbids some forms of vehicle hacking, but includes specific protections for cybersecurity researchers. Rapid7 supports these protections. The bill is not law yet – it has only cleared a Committee in the Senate, but it looks poised to keep advancing in the state legislature. Our background and analysis of the bill is below.


In summary:

  • The amended bill offers legal protections for independent research and repair of vehicle computers. These protections do not exist in current Michigan state law.
  • The amended bill bans some forms of vehicle hacking that damage property or people, but we believe this was already largely prohibited under current Michigan state law.
  • The bill attempts to make penalties for hacking more proportional, but this may not be effective.




Earlier this year, Michigan state Senator Mike Kowall introduced S.B. 0927 to prohibit accessing a motor vehicle's electronic system without authorization. The bill would have punished violations with a potential life sentence. As noted by press reports at the time, the bill's broad language made no distinction between malicious actors, researchers, or harmless access. The original bill is available here.


After S.B. 0927 was introduced, Rapid7 worked with a coalition of cybersecurity researchers and companies to detail concerns that the bill would chill legitimate research. We argued that motorists are safer as a result of independent research efforts that are not necessarily authorized by vehicle manufacturers. For example, in Jul. 2015, researchers found serious security flaws in Jeep software, prompting a recall of 1.4 million vehicles. Blocking independent research to uncover vehicle software flaws would undermine cybersecurity and put motorists at greater risk.


Over a four-month period, Rapid7 worked rather extensively with Sen. Kowall's office and Michigan state executive agencies to minimize the bill's damage to cybersecurity research. We applaud their willingness to consider our concerns and suggestions. The amended bill passed by the Michigan Senate Judiciary Committee, we believe, will help provide researchers with greater legal certainty to independently evaluate and improve vehicle cybersecurity in Michigan.


The Researcher Protections


First, let's examine the bill's protections for researchers – Sec. 5(2)(B); pg. 6, lines 16-21. Explicit protection for cybersecurity researchers does not currently exist in Michigan state law, so we view this provision as a significant step forward.


This provision says researchers do not violate the bill's ban on vehicle hacking if the purpose is to test, refine, or improve the vehicle – and not to damage critical infrastructure, other property, or injure other people. The research must also be performed under safe and controlled conditions. A court would need to interpret what qualifies as "safe and controlled" conditions – hacking a moving vehicle on a highway probably would not qualify, but we would argue that working in one's own garage likely sufficiently limits the risks to other people and property.


The researcher protections do not depend on authorization from the vehicle manufacturer, dealer, or owner. However, because of the inherent safety risks of vehicles, Rapid7 would support a well-crafted requirement that research beyond passive signals monitoring must obtain authorization from the vehicle owner (as distinct from the manufacturer).


The bill offers similar protections for licensed manufacturers, dealers, and mechanics [Sec. 5(2)(A); pg. 6, lines 10-15]. However, neither current state law nor the bill explicitly gives vehicle owners (who are not mechanics, or are not conducting research) the right to access their own vehicle computers without manufacturer authorization. Since Michigan state law does not clearly give owners this ability, the bill is not a step back here. Nonetheless, we would prefer the legislation make clear that it is not a crime for owners to independently access their own vehicle and device software.


The Vehicle Hacking Ban


The amended bill would explicitly prohibit unauthorized access to motor vehicle electronic systems to alter or use vehicle computers, but only if the purpose was to damage the vehicle, injure persons, or damage other property [Sec. 5(1)(c)-(d); pgs. 5-6, lines 23-8]. That is an important limit that should exclude, for example, passive observation of public vehicle signals or attempts to fix (as opposed to damage) a vehicle.


Although the amended bill would introduce a new ban on certain types of vehicle hacking, our take is that this was already illegal under existing Michigan state law. Current Michigan law – at MCL 752.795 – prohibits unauthorized access to "a computer program, computer, computer system, or computer network." The current state definition of "computer" – at MCL 752.792 – is already sweeping enough to encompass vehicle computers and communications systems. Since the law already prohibits unauthorized hacking of vehicle computers, it's difficult to see why this legislation is actually necessary. Although the bill’s definition of "motor vehicle electronic system" is too broad [Sec. 2(11); pgs. 3-4, lines 25-3], its redundancy with current state law makes this legislation less of an expansion than if there were no overlap.


Penalty Changes


The amended bill attempts to create some balance to sentencing under Michigan state computer crime law [Sec. 7(2)(A); pg. 8, line 11]. This provision essentially makes harmless violations of Sec. 5 (which includes the general ban on hacking, including vehicles) a misdemeanor, as opposed to a felony. Current state law – at MCL 752.797(2) – makes all Sec. 5 violations felonies, which is potentially harsh for innocuous offenses. We believe that penalties for unauthorized hacking should be proportionate to the crime, so building additional balance in the law is welcome.


However, this provision is limited and contradictory. The Sec. 7 provision applies only to those who "did not, and did not intend to," acquire/alter/use a computer or data, and if the violation can be "cured without injury or damage." But to violate Sec. 5, the person must have intentionally accessed a computer to acquire/alter/use a computer or data. So the person did not violate Sec. 5 in the first place if the person did not do those things or did not do them intentionally. It’s unclear under what scenario Sec. 7 would kick in and provide a more proportionate sentence – but at least this provision does not appear to cause any harm. We hope this provision can be strengthened and clarified as the bill moves through the Michigan state legislature.




On balance, we think the amended bill is a major improvement on the original, though not perfect. The most important improvements we'd like to see are:

  1. Clarifying the penalty limitation in Sec. 7; 
  2. Narrowing the definition of "motor vehicle electronic system" in Sec. 2; and
  3. Limiting criminal liability for owners that access software on vehicle computers they own.


However, the clear protections for independent researchers are quite helpful, and Rapid7 supports them. To us, the researcher protections further demonstrate that lawmakers are recognizing the benefits of independent research to advance safety, security, and innovation. The attempt at creating proportional sentences is also sorely needed and welcome, if inelegantly executed.


The amended bill is at a relatively early stage in the legislative process. It must still pass through the Michigan Senate and House. Nonetheless, it starts off on much more positive footing than it did originally. We intend to track the bill as it moves through the Michigan legislature and hope to see it improve further. In the meantime, we'd welcome feedback from the community.

Today, we're less than fifty days from the next U.S. presidential election, and over the next couple months, I fully expect to see a lot of speculation over the likelihood of someone "hacking the election." But what does that even mean?


The U.S. election system is a massively complex tangle of technology, and, at first, second, and third glance, it appears to embody the absolute worst practices when it comes to information security. There are cleartext, Internet-based entry points to the voting system. There is an aging installed base of voting machines running proprietary, closed-source code, produced by many vendors. And there is a bizarrely distributed model of authority over the election, where no one actually has the power to enforce a common set of security standards.


Sure, it seems bad. Nightmarish, really. But what are the actual risks here? If an adversary wanted to "hack" the "election," what could that adversary actually accomplish?


Online Voting in the U.S.

According to this PDF report from EPIC, the Verified Voting Foundation, and Common Cause, 32 states have some form of Internet-enabled voting. However, those systems are not the easy, point-and-click interfaces most people imagine when they hear "Internet-enabled." They tend to be systems for distributing ballots that the voter needs to print out on paper (ugh), sign (often with a witness's countersignature), and then email or fax back to the state authority for counting.


Systems like these raise privacy concerns. On a purely technical level, email and fax do not offer any sort of encryption. Ballots cast this way are passed around the public internet "in the clear," and if an attacker is able to compromise any point along the path of transmission, that attacker can intercept these completed ballots. So, not only does this system do away with any notion of a secret ballot, it does so in a way that ignores any modern understanding of cryptographic security.


Clearly, this is a bummer for security professionals. We'd much rather see online voting systems with encryption built in. Web sites use HTTPS, an encrypted protocol, to avoid leaking important things like credit card numbers and passwords over public networks, so we'd like to see at least this level of security for something as critical as a voter's ballot.


That said, actually attacking this system doesn't scale very well for an adversary. First, they would need to target remote, online voters for snooping and interception. These voters are a minority, since most voting in every state happens either in person, or with paper ballots sent in the regular postal mail. Once the vulnerable population is identified, the adversary would then need to either wait for the voters to cast their ballots in order to change those ballots in transit, or vote on behalf of the legitimate voter before she gets a chance to. Active cleartext attacks like this work pretty well against one person or one location, but they are difficult to pull off at the kind of scale needed to tip an election.


Alternatively, the adversary could invent a population of phantom voters, register them to vote remotely, and stuff the ballot box with fake votes. Again, this isn't impossible, but it's also fairly high effort, since voter registration is already somewhat difficult for legitimate voters; automating it at scale just isn't possible in most counties in the U.S.


This leaves the servers that are responsible for collecting online ballots. The easiest thing to do here would be to kick them offline with a standard Denial-of-Service (DoS) attack, so all those emailed ballots would be dropped. This sort of attack would be pretty noticeable to the system maintainers, though, and I would expect people would switch back to paper mail pretty quickly. Remember, these systems aren't intended to be used on election day -- they merely collect absentee ballots, so there is going to be plenty of time to switch to the paper-based backup.


A total compromise of the ballot collection servers could enable attackers to alter or destroy votes in a much sneakier way, and an attack like this could potentially avoid detection until after the election is called. On the bright side, this kind of attack appears possible for only five of the Internet-enabled voting states. Only Alabama, Alaska, Arizona, North Dakota, and Missouri have an "Internet portal." None of these states appear to be battleground states according to FiveThirtyEight's latest projections. So, regardless of their security posture (which isn't known), attacking these portals isn't likely to net a lot of gain for attackers wishing to influence the Presidential election one way or the other. If Florida or Pennsylvania had one of these portals, I'd be a lot more worried.


Hacking Voting Machines

Another common theme of "election hacking" stories involves attacking the voting machine directly, in order to alter the votes cast on site. Now, on the one hand, no electronic voting machine is cyber-bulletproof. I have every expectation that these voting computers have some bugs, and some of those bugs weaken the security of the system. I'd love to see open source, auditable voting machine code. Voting is important, and the machines we trust to tabulate results should be trustworthy.


On the other hand, if the adversary needs to physically visit voting machines in order to fiddle with results, then he'd need a whole lot of bodies in a whole lot of polling places in order to make a real dent in the results of an election. Don't get me wrong, wireless networking is getting ubiquitous, and high-gain antennae are a thing. But even with ideal placement and transmission power, the attacker is going to need to be within sight of a polling place in order to conduct practical attacks on a WiFi-enabled voting machine.


So, while such an attack is remote, it's not sitting-in-another-country remote. More like parked-outside-the-polling-place remote. WiFi voting machines are a terrible idea, but they don't appear to be an existential threat to democracy.


Ancillary Attacks: Voter Information

So, rather than attacking ballot-issuing and ballot-counting systems directly, attackers have much more attractive targets available connected to the election. Voter records, for example, are tempting to cyber criminals, since they contain enough personally identifiable information (PII) to kick off identity theft and identity fraud attacks at scale. Unfortunately, those particular cats are already out of the bag. 191 million voter records were accidentally leaked late in 2015, and the FBI warned in August that some state voter databases have suffered breaches.


Altering voter registration records is a big deal, for sure, since such attacks can help an adversary actually affect voter turnout for targeted voting blocs. While that's not what's being reported today, such an attack could not only nudge election results one way or another, but possibly bring into question the integrity of the democratic process itself. After all, "voter fraud," despite being practically non-existent in any recent election in the U.S., is a hot-button political topic. If an attack were detected that involved altering voter records, it would almost certainly be seen as a smoking gun that implies systematic voter fraud, therefore undermining confidence in the election for a huge chunk of the electorate. For more on likely voter data attacks, and what voter registration officials can do to safeguard that information, take a look at ST-16001, Securing Voter Registration Data from US-CERT.


Perception Matters

Of course, "hacking elections" may not involve actually compromising the balloting or vote counting processes at all.


Imagine that someone decided to take down a couple voter information websites. Would this technically interfere with the election process? Maybe, if some people were trying to find out where their polling place is. The obvious effect, though, would be to create the impression that the election is under cyber-attack... and never mind the fact that voter registration and polling place information websites routinely crash under load on election day, despite the best efforts of the people running those sites.


So What Can We Do To Secure Elections?

Election infrastructure is complex, and there are certain to be bugs in any complex system. While elections, just like nearly everything else, are made safer, more convenient, and more efficient with technology, that same technology is going to introduce new risks that we've never had to deal with before and haven't anticipated. Naturally, there's cause for concern there, even if it doesn't rise to the level of Total Democolypse.


If you're in charge of voting technology in your area, we strongly urge you to test your systems now, ahead of the election. You should be attacking the system to see what's possible, and what mitigations are needed to ensure the election will not be affected by any kinks in the system. If you're not sure where to start, feel free to contact us - we're happy to connect you with security experts (either our own or others from the security community) who will have a chat with you for free. We all have a vested interest in ensuring voting technology is not compromised, so we want to do what we can to help.


If you're a U.S. voter concerned about the integrity of the election process in your district, feel free to get in touch with your local office of elections and ask them what they've done to ensure that the election experience is resilient against cyber threats. If you're a real go-getter, I encourage you to volunteer with your county as a poll worker, and see what's going on behind the scenes, up close. Every county always needs help around election day, and I can attest that my own experience as an election judge was a fun and rewarding way to protect democracy without being particularly partisan.

NOTE: A version of this essay first appeared in CSM Passcode. You can read that version here: Opinion: Think hackers will tip the vote? Read this first.

As you may recall, back in December Rapid7 disclosed six vulnerabilities that affect four different Network Management System (NMS) products, discovered by Deral Heiland of Rapid7 and independent researcher Matthew Kienow. In March, Deral followed up with another pair of vulnerabilities for another NMS. Today, we're releasing a new disclosure that covers 11 issues across four vendors. As is our custom, these were all reported to vendors and CERT for coordinated disclosure.


While this disclosure covers a wide range of vulnerabilities discovered (and fixed), the theme of injecting malicious data via SNMP to ultimately gain control of NMS web console browser windows became overwhelmingly obvious, and deserves a more in-depth look. To that end, today, Rapid7 would like to offer a complete research report on the subject. From Managed to Mangled: SNMP Exploits for Network Management Systems by Deral, Matthew, and yours truly is available for download here, and we'd love to hear your feedback on this technique in the comments below. We'll all be at DerbyCon as well, and since Matthew and Deral will be presenting these findings on Saturday, September 24th, 2016, it will be a fine time to chat about this.


Incidentally, we're quite pleased that every one of these vendors has issued patches to address these issues well before our planned disclosure today. All acted reasonably and responsibly to ensure their customers and users are protected against this technique, and we're confident that going forward, NMSs will do a much better job of inspecting and sanitizing machine-supplied, as well as user-supplied, input.
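The underlying fix pattern is straightforward: treat SNMP-supplied strings exactly like user input and encode them before they reach a browser. A minimal sketch in Python (the function and table markup are our own illustration, not code from any of the affected products):

```python
import html

def render_sysdescr(raw_sysdescr: str) -> str:
    """Escape an SNMP-supplied sysDescr before embedding it in a web page.

    HTML-escaping neutralizes injected markup and script; an NMS web console
    should apply this (or use a templating engine that escapes by default)
    to machine-supplied fields, not just user-supplied ones.
    """
    return "<td>{}</td>".format(html.escape(raw_sysdescr, quote=True))

# A payload like the ones used in these advisories is rendered inert:
payload = '<SCRIPT>alert("XSS-sysDescr")</SCRIPT>'
print(render_sysdescr(payload))
```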


With that, let's get on with the disclosing!


Rapid7 Identifier | CVE Identifier | Class | Vendor | Patched
R7-2016-11.1 | CVE-2016-5073 | XSS | CloudView | Version 2.10a
R7-2016-11.2 | CVE-2016-5073 | XSS | CloudView | Version 2.10a
R7-2016-11.3 | CVE-2016-5074 | Format String | CloudView | Version 2.10a
R7-2016-11.4 | CVE-2016-5075 | XSS | CloudView | Version 2.10a
R7-2016-11.5 | CVE-2016-5076 | Direct Object Access | CloudView | Version 2.10a
R7-2016-14.1 | CVE-2016-5642 | XSS | Opmantek | Version 8.5.12G
R7-2016-14.2 | CVE-2016-5642 | XSS | Opmantek | Versions 8.5.12G, 4.3.7c
R7-2016-14.3 | CVE-2016-5642 | XSS | Opmantek | Versions 8.5.12G, 4.3.7c
R7-2016-14.4 | CVE-2016-6534 | Cmd Injection | Opmantek | Versions 8.5.12G, 4.3.7c


R7-2016-11: Multiple Issues in CloudView NMS

CloudView NMS versions 2.07b and 2.09b are vulnerable to a persistent Cross Site Scripting (XSS) vulnerability over SNMP agent responses and SNMP trap messages, a format string vulnerability in processing SNMP agent responses, a format string vulnerability via telnet login, and an insecure direct object reference issue. These issues were resolved in version 2.10a, available from the vendor. None of these issues require any prior authentication to exploit.


These issues were discovered by Deral Heiland of Rapid7, Inc.


R7-2016-11.1: XSS via SNMP Agent Responses (CVE-2016-5073)

While examining the Network Management System (NMS) software CloudView NMS, it was discovered to be vulnerable to a persistent Cross Site Scripting (XSS) vulnerability. This vulnerability allows a malicious actor to inject persistent JavaScript and HTML code into various fields within CloudView's web management interface. When this data (JavaScript) is viewed within the web console, the code will execute within the context of the authenticated user. This allows a malicious actor to conduct attacks which can be used to modify the system's configuration, compromise data, take control of the product, or launch attacks against the authenticated user's host system.


The first persistent XSS vulnerability is delivered via the network SNMP discovery process. If a device discovered during this process is configured with SNMP, and the SNMP OID object sysDescr contains HTML or JavaScript code, then when the discovered device is imported into the database that code is delivered to the product for persistent display and execution.


The following example shows the results of discovering a network device where the SNMP sysDescr has been set to <SCRIPT>alert("XSS-sysDescr")</SCRIPT>. In this example, when the device is viewed within the web console's "Device List" screen, the JavaScript executes, rendering an alert box within the authenticated user's web browser.
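For lab reproduction, the advertised sysDescr of a Net-SNMP test agent can be set directly in its snmpd.conf (directive names per Net-SNMP's snmpd.conf documentation; the payload string is the example above and the community string is illustrative only):

```
# snmpd.conf for a lab agent: answer sysDescr queries with the XSS payload
sysdescr <SCRIPT>alert("XSS-sysDescr")</SCRIPT>
rocommunity public
```

When the NMS discovers this agent, the payload above is what lands in the sysDescr field.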




R7-2016-11.2: XSS via SNMP Trap Messages (CVE-2016-5073)

The second method of injection involves SNMP trap messages. The CloudView product allows unsolicited traps, which are stored within the logs. A malicious actor can inject HTML and JavaScript code into the product via SNMP trap message. When the SNMP trap message information is viewed the code will execute within the context of the authenticated user. Figure 2 shows an example attack where a trap message was used with the HTML code <embed src=//> to embed flash into the CloudView web console.



R7-2016-11.3: Format String Vulnerability via SNMP (CVE-2016-5074)

CloudView NMS was also discovered to be vulnerable to a format string vulnerability. This vulnerability allows a malicious actor to inject format string specifiers into the product via the SNMP sysDescr field. If successfully exploited, this could allow a malicious actor to execute code or trigger a denial of service condition within the application. The following OllyDbg screenshot (Figure 3) shows a series of %x specifiers that were used within the SNMP sysDescr field of a discovered device to enumerate the stack data from the main process stack and trigger an access violation with %s.



R7-2016-11.4: XSS via Telnet Login (CVE-2016-5075)

A third method was discovered for injecting persistent XSS: the username field of the Remote CLI telnet service on TCP port 3082. A malicious actor with network access to this port could inject JavaScript or HTML code into the event logs using failed login attempts, as shown below:




R7-2016-11.5: Direct Object Access (CVE-2016-5076)

During testing it was also discovered that files within the Windows file system were accessible without proper authentication. This allowed full file system access on the Windows Server 2008 systems running the product. In the following example, a URL was used to retrieve the configuration file "auto.def" from the server without authentication.



Disclosure Timeline

Mon, May 23, 2016 : Initial contact to vendor

Mon, May 23, 2016 : Vendor responded with security contact

Mon, May 23, 2016 : Details provided to vendor security contact

Sun, Jun 05, 2016: Version 2.10a published by the vendor

Thu, Jun 09, 2016 : Disclosed to CERT, tracked as VR-205

Tue, Jun 14, 2016: CVE-2016-5073, CVE-2016-5074, CVE-2016-5075, CVE-2016-5076 assigned by CERT

Wed, Sep 07, 2016: Public disclosure


R7-2016-12: XSS via SNMP Trap Messages in Netikus EventSentry (CVE-2016-5077)

Netikus EventSentry NMS versions,, and are vulnerable to a persistent Cross Site Scripting (XSS) vulnerability. This issue was fixed in version, available from the vendor. This issue does not require any prior authentication to exploit.


This issue was discovered by Deral Heiland of Rapid7, Inc.



While examining the Network Management System (NMS) software EventSentry, it was discovered to be vulnerable to a persistent Cross Site Scripting (XSS) vulnerability. This vulnerability allows a malicious actor to inject persistent JavaScript and HTML code into various fields within EventSentry's web management interface. When this data (JavaScript) is viewed within the web console, the code will execute within the context of the authenticated user. This allows a malicious actor to conduct attacks which can be used to modify the system's configuration, compromise data, take control of the product, or launch attacks against the authenticated user's host system.


This injection was conducted using unsolicited SNMP trap messages, which are stored within the SNMP logs on EventSentry. A malicious actor can inject HTML and JavaScript code into the product via SNMP trap message. When the SNMP trap message information is viewed, the code will execute within the context of the authenticated user. By using the following snmptrap command, it was possible to inject the following HTML code <embed src=// 4.swf> to embed flash into the EventSentry web console SNMP logs:


snmptrap -v 1 -c public '1' '' 6 99 '55' 1 s "<embed src=//>"




Disclosure Timeline

Mon, May 23, 2016 : Initial contact to vendor

Mon, May 23, 2016 : Vendor responded with security contact

Mon, May 23, 2016 : Details provided to vendor security contact

Fri, May 27, 2016: Version published by the vendor

Thu, Jun 09, 2016 : Disclosed to CERT, tracked as VR-205

Tue, Jun 14, 2016: CVE-2016-5077 assigned by CERT

Wed, Sep 07, 2016: Public disclosure


R7-2016-13: XSS via SNMP in Paessler PRTG (CVE-2016-5078)

Paessler PRTG NMS version is vulnerable to a persistent Cross Site Scripting (XSS) vulnerability. This issue does not require any prior authentication to exploit, and was fixed in version, available from the vendor.


This issue was discovered by Deral Heiland of Rapid7, Inc.



While examining the Network Management System (NMS) software PRTG, it was discovered to be vulnerable to a persistent Cross Site Scripting (XSS) vulnerability. This vulnerability allows a malicious actor to inject persistent JavaScript and HTML code into various fields within PRTG’s Network Monitor web management interface. When this data (JavaScript) is viewed within the web console the code will execute within the context of the authenticated user. This will allow a malicious actor to conduct attacks which can be used to modify the system configuration, compromise data, take control of the product or launch attacks against the authenticated user's host system.


The persistent XSS vulnerability is delivered via the network SNMP discovery process of a device. If the discovered network device supplies JavaScript or HTML code in the following SNMP OID objects, then the code will be rendered within the context of the authenticated user who views the "System Information" web page of the discovered device.




The following example shows the results of discovering a network device where the SNMP sysDescr has been set to <embed src=//>. In this example, when a device's "System Information" web page is viewed in the web console, the HTML code will download and render the Flash file in the authenticated user's web browser.



Disclosure Timeline

Mon, May 23, 2016 : Initial contact to vendor

Tue, May 24, 2016 : Vendor responded with security contact

Tue, May 24, 2016 : Details provided to vendor security contact

Mon, Jun 06, 2016: Version released by the vendor

Thu, Jun 09, 2016 : Disclosed to CERT, tracked as VR-205

Tue, Jun 14, 2016: CVE-2016-5078 assigned by CERT

Wed, Sep 07, 2016: Public disclosure


R7-2016-14: Multiple Issues in Opmantek NMIS

Opmantek NMIS NMS versions 8.5.10G and 4.3.6f are vulnerable to a persistent Cross Site Scripting (XSS) vulnerability over SNMP agent responses and SNMP trap messages, a reflected XSS vulnerability over SNMP agent responses, and a command injection vulnerability. These issues were fixed in versions 8.5.12G and 4.3.7c, available from the vendor.


All three of the XSS attack methods allow an unauthenticated adversary to inject malicious content into the user’s browser session. This could cause arbitrary code execution in an authenticated user's browser session and may be leveraged to conduct further attacks. The code has access to the authenticated user's cookies and would be capable of performing privileged operations in the web application as the authenticated user, allowing for a variety of attacks.


These issues were discovered by independent researcher Matthew Kienow. Note that all three XSS vectors have been assigned the same CVE identifier.


R7-2016-14.1, XSS Injection via SNMP Trap Messages (CVE-2016-5642)

First, a stored (AKA Persistent or Type I) server XSS vulnerability exists due to insufficient filtering of SNMP trap-supplied data before the affected software stores and displays the data. Traps that will be processed by NMIS version 8.x depend on the configuration of snmptrapd, the Net-SNMP trap notification receiver. This component may be configured to accept all incoming notifications or may be constrained by defined access control. In the latter case, the adversary must determine the SNMP authorization credentials before launching the attack. Note that NMIS version 4.x does not have the capability of inspecting trap messages, so is unaffected by this issue.


The example configuration for Net-SNMP's snmptrapd <nmisdir>/install/snmptrapd.conf, which ships with NMIS, contains the line "disableAuthorization yes." This directive disables access control checks and accepts all incoming notifications. The affected software is capable of accepting traps from hosts registered or unknown to the system. The stored XSS payload is delivered to the affected software via an object in the malicious SNMP trap. Once the trap is processed it is stored in the SNMP Traps Log. The XSS payload will execute when the user navigates to the SNMP Traps Log widget by clicking on the Service Desk > Logs > Log List menu item, and then clicking the SNMP_Traps link in the List of Available Logs window that appears. The user may also navigate to the non-widget SNMP Traps Log page at http://host:port/cgi-nmis8/ MP_Traps&widget=false.
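A receiver-side mitigation is to drop the permissive "disableAuthorization yes" default and require a known community string before traps are logged or acted upon. A sketch using standard Net-SNMP snmptrapd.conf syntax (the community value here is an example, not from the shipped NMIS configuration):

```
# snmptrapd.conf: replace "disableAuthorization yes" with explicit access control
# so only traps carrying the expected community are logged, executed, or forwarded
authCommunity log,execute,net s3cretTrapCommunity
```

This does not sanitize trap contents, but it raises the bar from "any host can inject" to "only holders of the community string can."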



R7-2016-14.2, XSS Injection via SNMP Agent Responses (CVE-2016-5642)

Second, a stored server XSS vulnerability exists due to insufficient filtering of SNMP agent-supplied data before the affected software stores and displays the data. The stored XSS payload is delivered to the affected software during the SNMP data collect operation performed when adding and updating a node. The malicious node utilizes an SNMP agent to supply the desired XSS payload in response to SNMP GetRequest messages for the sysDescr (, sysContact ( and sysLocation ( object identifiers (OIDs). The XSS payload provided for the sysDescr object will execute when the add and update node operation is complete and the results are displayed.


The XSS payload provided for the sysLocation object will execute when the user clicks the Network Status > Network Metrics and Health menu item and then clicks on the link for the malicious node's group. After the Node List and Status window appears, if the user clicks on the link for the malicious node the XSS payload for the sysLocation, sysContact and sysDescr objects execute before the Node Details window appears. If the user keeps the malicious node's Node Details window open it updates at a set interval causing all three XSS payloads to execute repeatedly.



R7-2016-14.3, Reflected XSS Injection via SNMP Agent Responses (CVE-2016-5642)

Third, a reflected (AKA Non-Persistent or Type II) client XSS vulnerability exists due to insufficient filtering of SNMP agent supplied data before the affected software displays the data. The reflected XSS payload is delivered to the affected software during the SNMP Tool walk operation. Any XSS payloads contained in walked OIDs will execute when the results are displayed. Note, the SNMP Tool is not available in NMIS version 4.3.6f.



R7-2016-14.4, Web Application Command Injection

Finally, a command injection vulnerability in the web application component of Opmantek Network Management Information System (NMIS) exists due to insufficient input validation. In NMIS version 8.5.10G the command injection vulnerability exists in the CGI script via the "node" parameter when the "act" parameter is set to "tool_system_finger". The user must be authenticated and granted the tls_finger permission, which does not appear to be enabled by default. However, the software is vulnerable if the tls_finger permission is granted to the authenticated user in the <NMIS Install Directory>/conf/Access.nmis file. A sample tls_finger permission is defined as follows:


'tls_finger' => {
    'descr' => 'Permit Access tool finger',
    'group' => 'access',
    'level0' => '1',
    'level1' => '0',
    'level2' => '1',
    'level3' => '1',
    'level4' => '1',
    'level5' => '0',
    'name' => 'tls_finger'


In NMIS version 4.3.6f the command injection vulnerability exists in the CGI script via the "node" parameter when the "admin" parameter is set to either "man", "finger", "ping", "trace" or "nslookup". This is exploitable without authentication in the default configuration, since NMIS authentication is not required by default as specified in the <NMIS Install Directory>/conf/nmis.conf file.


#authentication stuff
# set this to true to require authentication (default=false)


NMIS version 8.5.10G Exploitation

An authenticated user that has been granted the tls_finger permission requests the URL http://host:port/cgi-nmis8/ =%3Bcat%20%2Fusr%2Flocal%2Fnmis8%2Fconf%2FConfig.nmis to dump the NMIS configuration file. The config file contains cleartext usernames and passwords for the outgoing notification mail server and NMIS database server, as shown below.



NMIS version 4.3.6f Exploitation

An unauthenticated individual with access to the NMIS server requests the URL http://host:port/cgi-nmis4/ to output the user name associated with the effective user ID of the web server process, as shown below.




Disclosure Timeline

Wed, Jun 01, 2016 : Initial contact to vendor

Thu, Jun 02, 2016 : Vendor responded with security contact

Thu, Jun 02, 2016 : Details provided to vendor security contact

Mon, Jun 13, 2016: Versions 4.3.7c and NMIS 8.5.12G released by the vendor

Wed, Jun 22, 2016 : Disclosed to CERT, tracked as VR-228

Wed, Jun 22, 2016: CVE-2016-5642 assigned by CERT

Tue, Sep 06, 2016: CVE-2016-6534 assigned by CERT

Wed, Sep 07, 2016: Public disclosure


More Information for All Issues

All of these described issues have been fixed by their respective vendors, so users are encouraged to update to the latest versions. For a more in-depth exploration of the SNMP-vectored issues, readers are encouraged to download the accompanying paper, From Managed to Mangled: SNMP Exploits for Network Management Systems.

For the past several years, Rapid7's Project Sonar has been performing studies that explore the exposure of the NetBIOS name service on the public IPv4 Internet.  This post serves to describe the particulars behind the study and provide tools and data for future research in this area.


Protocol Overview

Originally conceived in the early 1980s, NetBIOS is a collection of services that allows applications running on different nodes to communicate over a network.  Over time, NetBIOS was adapted to operate on various network types including IBM's PC Network, token ring, Microsoft's MS-Net, Novell NetWare IPX/SPX, and ultimately TCP/IP.

For purposes of this document, we will be discussing NetBIOS over TCP/IP (NBT), documented in RFC 1001 and RFC 1002.


NBT comprises three services:


  • A name service for name resolution and registration (137/UDP and 137/TCP)
  • A datagram service for connectionless communication (138/UDP)
  • A session service for session-oriented communication (139/TCP)


The UDP variant of the NetBIOS over TCP/IP Name service on 137/UDP, NBNS, sometimes referred to as WINS (Windows Internet Name Service), is the focus of this study.  NBNS provides services related to NetBIOS names for NetBIOS-enabled nodes and applications.  The core functionality of NBNS includes name querying and registration capabilities and is similar in functionality and on-the-wire format to DNS but with several NetBIOS/NBNS specific details.


Although NetBIOS (and, in turn, NBNS) is predominantly spoken by Microsoft Windows systems, it is also very common to find this service active on OS X systems (netbiosd and/or Samba), Linux/UNIX systems (Samba), and all manner of printers, scanners, multi-function devices, storage devices, and the like.  Fire up Wireshark or tcpdump on nearly any network that contains or regularly services Windows systems and you will almost certainly see NBNS traffic everywhere:


(Screenshot: NBNS broadcast traffic as captured in Wireshark)


The history of security issues with NBNS reads much like that of DNS.  Because NBNS is unauthenticated and carried over a connectionless transport, attacks against it include:


  • Information disclosure relating to generally internal/private names and addresses
  • Name spoofing, interception, and cache poisoning.


While not exhaustive, some notable security issues relating to NBNS include:


  • Abusing NBNS to attack the Web Proxy Auto-Discovery (WPAD) feature of Microsoft Windows to perform man-in-the-middle attacks, resulting in MS09-008/CVE-2009-0094.
  • Hot Potato, which leveraged WPAD abuse via NBNS in combination with other techniques to achieve privilege escalation on Windows 7 and above.
  • BadTunnel, which utilized NetBIOS/NBNS in new ways to perform man-in-the-middle attacks against target Windows systems, ultimately resulting in Microsoft issuing MS16-077.
  • Abusing NBNS to perform amplification attacks as seen during DDoS attacks as warned by US-CERT's TA14-017a.



Study Overview

Project Sonar's study of NBNS on 137/UDP has been running for a little over two years as of the publication of this document.  For the first year the study ran once per week, but shortly thereafter it was changed to run once per month along with the other UDP studies in an effort to reduce noise.


The study uses a single, static, 50-byte NetBIOS "node status request" (NBSTAT) probe with a wildcard (*) scope that will return all configured names for the target NetBIOS-enabled node.  A name in this case is in reference to a particular capability a NetBIOS-enabled node has -- for example, this could (and often does) include the configured host name of the system, the workgroup/domain that it belongs to, and more.  In some cases, the presence of a particular type of name can be an indicator of the types of services a node provides.  For a more complete list of the types of names that can be seen in NBNS, see Microsoft's documentation on NetBIOS suffixes.
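The layout of that 50-byte probe can be sketched in Python. This is a minimal reconstruction from RFC 1002, not necessarily the exact bytes Sonar emits; the transaction ID is arbitrary, and `nbstat_probe` is a hypothetical helper name:

```python
import struct

def nbstat_probe(txn_id=0x1337):
    """Build a 50-byte NBSTAT (node status) request per RFC 1002."""
    # DNS-style header: transaction ID, flags=0, 1 question, no other records.
    header = struct.pack(">HHHHHH", txn_id, 0x0000, 1, 0, 0, 0)
    # Wildcard scope: the name "*" NUL-padded to 16 bytes, then "first-level"
    # encoded -- each nibble added to ord('A'), so 0x2A -> "CK", 0x00 -> "AA".
    name = "*".ljust(16, "\x00")
    encoded = "".join(chr((b >> 4) + 0x41) + chr((b & 0x0F) + 0x41)
                      for b in name.encode())
    question = bytes([32]) + encoded.encode() + b"\x00"
    # Question type NBSTAT (0x0021), class IN (0x0001).
    return header + question + struct.pack(">HH", 0x0021, 0x0001)
```

The encoded question name (1 length byte + 32 encoded bytes + 1 terminator) plus the 4-byte type/class trailer and the 12-byte header is what yields the 50-byte total.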


The probe used by Sonar is identical to the probe used by zmap and the probe used by Nmap.  A Wireshark-decoded sample of this probe can be seen below:




This probe is sent to all public IPv4 addresses, excluding any networks that have requested removal from Sonar's studies, leaving ~3.6b possible target addresses for Sonar.  All responses, NBNS or otherwise, are saved.  Responses that appear to be legitimate NBNS responses are decoded for further analysis.


An example response from a Windows 2012 system:




As a bonus for reconnaissance efforts, RFC 1002 also describes a field included at the end of the node status response that includes statistics about the NetBIOS service on the target node, and one field within here, the "Unit ID", frequently contains the ethernet or other MAC address.
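Putting the reply format together, a minimal decoder for the name entries and the Unit ID can be sketched as follows. This is a simplified reconstruction from RFC 1002: `parse_node_status_reply` is a hypothetical helper that assumes a well-formed, uncompressed reply (12-byte header, 34-byte name echo, 10 bytes of TYPE/CLASS/TTL/RDLENGTH, then the RDATA):

```python
import struct

def parse_node_status_reply(pkt):
    """Extract (name, suffix, is_group) tuples and the Unit ID from an
    RFC 1002 node status response. Fixed offsets assume a standard reply."""
    rdata = pkt[56:]                      # skip header + name echo + RR fields
    num_names = rdata[0]
    names, off = [], 1
    for _ in range(num_names):            # each NODE_NAME entry is 18 bytes:
        raw = rdata[off:off + 15]         # 15 name chars (space/NUL padded),
        suffix = rdata[off + 15]          # 1 suffix byte (e.g. 0x20 = file svc),
        flags = struct.unpack(">H", rdata[off + 16:off + 18])[0]  # 2 flag bytes
        names.append((raw.rstrip(b" \x00").decode("ascii", "replace"),
                      suffix, bool(flags & 0x8000)))  # high bit = group name
        off += 18
    unit_id = rdata[off:off + 6]          # first field of the statistics block
    return names, ":".join("%02x" % b for b in unit_id)
```

The Unit ID is simply the first six bytes of the trailing statistics block, which is why it so often carries the node's MAC address.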


NetBIOS, and in particular NBNS, falls into the same bucket that many other services fall into -- they have no sufficiently valid business reason for being exposed live on the public Internet.  Lacking authentication and riding on top of a connectionless protocol, NBNS has a history of vulnerabilities and effective attacks that can put systems and networks exposing/using this service at risk.  Depending on your level of paranoia, the information disclosed by a listening NBNS endpoint may also constitute a risk.


These reasons, combined with a simple, non-intrusive way of identifying NBNS endpoints on the public IPv4 Internet, are why Rapid7's Project Sonar decided to undertake this study.



Data, Tools and Future Research

As part of Rapid7's Project Sonar, all data collected by this NBNS study is shared with the larger security community.  The past two years' worth of the NBNS study's data can be found here with the -netbios-137.csv.gz suffix.  The data is stored as GZIP-compressed CSV, each row of the CSV containing the metadata for the response elicited by the NBNS probe -- timestamp, source and destination IPv4 address, port, IP ID, TTL and, most importantly, the NBNS response (hex encoded).


There are numerous ways one could start analyzing this data, but internally we do much of our first-pass analysis using GNU parallel and Rapid7's dap.  Below is an example command you could run to start your own analysis of this data.  It utilizes dap to parse the CSV, decode the NBNS response and return the data in a more friendly JSON format:


pigz -dc ~/Downloads/20160801-netbios-137.csv.gz | parallel --gnu --pipe "dap csv + select 2 8 + rename 2=ip 8=data + transform data=hexdecode + decode_netbios_status_reply data + remove data + json"


As an example of some of the output you might get from this, anonymized for now:


{"ip":"","data.netbios_names":"MYSTORAGE:00:U WORKGROUP:00:G MYSTORAGE:20:U WORKGROUP:1d:U ","data.netbios_mac":"e5:d8:00:21:10:20","data.netbios_hname":"MYSTORAGE","data.netbios_mac_company":"UNKNOWN","data.netbios_mac_company_name":"UNKNOWN"}
{"ip":"","data.netbios_names":"OFFICE-PC:00:U OFFICE-PC:20:U WORKGROUP:00:G WORKGROUP:1e:G WORKGROUP:1d:U \u0001\u0002__MSBROWSE__\u0002:01:G ","data.netbios_mac":"00:1e:10:1f:8f:ab","data.netbios_hname":"OFFICE-PC","data.netbios_mac_company":"Shenzhen","data.netbios_mac_company_name":"ShenZhen Huawei Communication Technologies Co.,Ltd."}
{"ip":"","data.netbios_names":"DSL_ROUTE:00:U DSL_ROUTE:03:U DSL_ROUTE:20:U \u0001\u0002__MSBROWSE__\u0002:01:G WORKGROUP:1d:U WORKGROUP:1e:G WORKGROUP:00:G ","data.netbios_mac":"00:00:00:00:00:00","data.netbios_hname":"DSL_ROUTE"}


There are also several Metasploit modules for exploring/exploiting NBNS in various ways:


  • auxiliary/scanner/netbios/nbname: performs the same initial probe as the Sonar study against one or more targets, but uses the NetBIOS name of the target to perform a follow-up query that will disclose the IPv4 address(es) of the target.  Useful in situations where the target is behind NAT, multi-homed, etc., and this information can potentially be used in future attacks or reconnaissance.
  • auxiliary/admin/netbios/netbios_spoof: attempts to spoof a given NetBIOS name (such as WPAD) against a specific system
  • auxiliary/spoof/nbns/nbns_response: similar to netbios_spoof, but listens for all NBNS requests broadcast on the local network and will attempt to spoof all names (or just a subset by way of regular expressions)
  • auxiliary/server/netbios_spoof_nat: used to exploit BadTunnel


For a protocol that has been around for over 30 years and has had its fair share of research done against it, one might think that there is no more to be discovered, but the discovery of two high-profile vulnerabilities in NBNS this year (Hot Potato and BadTunnel) shows that there is absolutely more to be found.


If you are curious about NBNS and interested in exploring more, use the data, tools and knowledge provided above.  We'd love to hear your ideas or discoveries either here in the comments or by emailing



by Derek Abdine & Bob Rudis (photo CC-BY-SA Kalle Gustafsson)


Astute readers will no doubt remember the Shadow Brokers leak of the Equation Group exploit kits and hacking tools back in mid-August. More recently, security researchers at SilentSignal noted that it was possible to modify the EXTRABACON exploit from the initial dump to work on newer Cisco ASA (Adaptive Security Appliance) devices, meaning that virtually all ASA devices (8.x to 9.2(4)) are vulnerable. That made it worth digging into the vulnerability a bit more from a different perspective.


Now, "vulnerable" is an interesting word to use since:


  • the ASA device must have SNMP enabled and an attacker must have the ability to reach the device via UDP SNMP (yes, SNMP can run over TCP though it's rare to see it working that way) and know the SNMP community string
  • an attacker must also have telnet or SSH access to the devices


This generally makes the EXTRABACON attack something that would occur within an organization's network, specifically from a network segment that has SNMP and telnet/SSH access to a vulnerable device. So, the world is not ending, the internet is not broken, and even if an attacker had the necessary access, they are just as likely to crash a Cisco ASA device as they are to gain command-line access to one by using the exploit. Even though there's a high probable loss magnitude [1] from a successful exploit, the threat capability [2] and threat event frequency [3] for attacks would most likely be low in the vast majority of organizations that use these devices to secure their environments. Having said that, EXTRABACON is a pretty critical vulnerability in a core network security infrastructure device, and Cisco patches are generally quick and safe to deploy, so it would be prudent for most organizations to deploy the patch as soon as they can obtain and test it.


Cisco did an admirable job responding to the exploit release and has a patch ready for organizations to deploy. We here at Rapid7 Labs wanted to see if it was possible to both identify externally facing Cisco ASA devices and see how many of those devices were still unpatched. Unfortunately, most firewalls aren't going to have their administrative interfaces hanging off the public internet nor are they likely to have telnet, SSH or SNMP enabled from the internet. So, we set our sights on using Project Sonar to identify ASA devices with SSL/IPsec VPN services enabled since:


  • users generally access corporate VPNs over the internet (so we will be able to see them)
  • many organizations deploy SSL VPNs these days versus or in addition to IPsec (or other) VPNs (and, we capture all SSL sites on the internet via Project Sonar)
  • these SSL VPN-enabled Cisco ASAs are easily identified


We found over 50,000 Cisco ASA SSL VPN devices in our most recent SSL scan. Keeping with the spirit of our recent National Exposure research, here's a breakdown of the top 20 countries:


Table 1: Device Counts by Country (partial; surviving entries from the top 20)

  • United States
  • United Kingdom
  • Russian Federation
  • Czech Republic

Because these are SSL VPN devices, we also have access to the certificates that organizations used to ensure confidentiality and integrity of the communications. Most organizations have one or two (higher availability) VPN devices deployed, but many must deploy significantly more devices for geographic coverage or capacity needs:




Table 2: List of organizations with ten or more VPN ASA devices

  • Large Japanese telecom provider
  • Large U.S. multinational technology company
  • Large U.S. health care provider
  • Large Vietnamese financial services company
  • Large Global utilities service provider
  • Large U.K. financial services company
  • Large Canadian university
  • Large Global talent management service provider
  • Large Global consulting company
  • Large French multinational manufacturer
  • Large Brazilian telecom provider
  • Large Swedish technology services company
  • Large U.S. database systems provider
  • Large U.S. health insurance provider
  • Large U.K. government agency


So What?


The above data is somewhat interesting on its own, but what we really wanted to know is how many of these devices had not yet been patched (meaning they are technically vulnerable if an attacker is in the right network position). Remember, it's unlikely these organizations have telnet, SSH and SNMP enabled to the internet, and researchers in most countries, including those of us here in the United States, are not legally allowed to make credentialed scan attempts on these services without permission, even though actually testing for SNMP and telnet/SSH access would have let us identify truly vulnerable systems. After some bantering with the extended team (Brent Cook, Tom Sellers & jhart) and testing against a few known devices, we decided to use hping to determine device uptime from TCP timestamps and see how many devices had been rebooted since the release of the original exploits on (roughly) August 15, 2016. We modified our Sonar environment to enable hping studies and ran the uptime scan across the target device IP list on August 26, 2016, so any system with an uptime greater than 12 days (barring some serious timestamp-masking techniques) had not been rebooted and is technically vulnerable. Also remember that organizations who thought their shiny new ASA devices weren't vulnerable became vulnerable after the August 25, 2016 SilentSignal blog post (meaning that if it had seemed reasonable not to patch and reboot before, it stopped being reasonable on August 25).
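The uptime inference itself is straightforward: TCP timestamps tick at a fixed per-OS rate, and on many network devices the counter starts near zero at boot, so two samples are enough to estimate both the tick rate and the boot time. A minimal sketch of the hping-style calculation follows (`estimate_uptime_days` is a hypothetical helper; real devices may mask or randomize timestamps, in which case this estimate is meaningless):

```python
def estimate_uptime_days(samples):
    """Estimate uptime from two (wallclock_seconds, tcp_timestamp) samples."""
    (t1, ts1), (t2, ts2) = samples
    # Infer the tick rate from the two observations; common values are
    # 100, 250, or 1000 Hz depending on the OS.
    hz = (ts2 - ts1) / (t2 - t1)
    # Assuming the counter started near zero at boot, extrapolate back.
    uptime_seconds = ts2 / hz
    return uptime_seconds / 86400.0
```

Any device reporting more than 12 days by this method on August 26 had not rebooted since the original exploit release.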


So, how many of these organizations patched & rebooted? Well, nearly 12,000 (~24%) of them prevented us from capturing the timestamps. Of the remaining ones, here's how their patch status looks:



We can look at the distribution of uptime in a different way with a histogram, making 6-day buckets (so we can more easily see "Day 12"):




This also shows the weekly patch/reboot cadence that many organizations employ.


Let's go back to our organization list and see what the mean last-reboot time is for them:


Table 3: hping Scan Results (2016-08-26) -- mean uptime (days) for each of the organizations listed in Table 2

Two had no uptime data available and two had rebooted/likely patched since the original exploit release.



We ran the uptime scan after the close of the weekend (organizations may have waited until the weekend to patch/reboot after the latest exploit news) and here's how our list looked:


Table 4: hping Scan Results (2016-08-29) -- mean uptime (days) for each of the organizations listed in Table 2

Only one additional organization from our "top" list rebooted (likely patched) since the previous scan, but an additional 4,667 devices from the full data set were rebooted (likely patched).


This bird's eye view of how organizations have reacted to the initial and updated EXTRABACON exploit releases shows that some appear to have assessed the issue as serious enough to react quickly while others have moved a bit more cautiously. It’s important to stress, once again, that attackers need to have far more than external SSL access to exploit these systems. However, also note that the vulnerability is very real and impacts a wide array of Cisco devices beyond these SSL VPNs. So, while you may have assessed this as a low risk, it should not be forgotten and you may want to ensure you have the most up-to-date inventory of what Cisco ASA devices you are using, where they are located and the security configurations on the network segments with access to them.


We just looked for a small, externally visible fraction of these devices and found that only 38% of them have likely been patched. We're eager to hear how organizations assessed this vulnerability disclosure in order to make the update/no update decision. So, if you're brave, drop a note in the comments or feel free to send a note to (all replies to that e-mail will be kept confidential).


[1], [2], [3] Open FAIR Risk Taxonomy [PDF]

Parameters within a Swagger document are insecurely loaded into browser-based documentation. Persistent XSS occurs when this documentation is hosted on a public site. This issue was resolved in Swagger-UI 2.2.1.


One of the components used to build the interactive documentation portion of the Swagger ecosystem is Swagger-UI. This interface generates dynamic documentation, based on a referenced Swagger document, that can interact with the referenced API.  If the Swagger document itself contains XSS payloads, the Swagger-UI component can be tricked into injecting unescaped content into the DOM.

Product Description

From the README at

"Swagger UI is part of the Swagger project. The Swagger project allows you to produce, visualize and consume your own RESTful services. No proxy or 3rd party services required. Do it your own way.

Swagger UI is a dependency-free collection of HTML, Javascript, and CSS assets that dynamically generate beautiful documentation and sandbox from a Swagger-compliant API. Because Swagger UI has no dependencies, you can host it in any server environment, or on your local machine."

Swagger UI parses a chosen Swagger file and generates dynamic, colorful documentation that enables users to interact with a RESTful API.


Scott Lee Davis, Application Security Researcher, Rapid7



If a Swagger file contains, in its definitions section, a default value carrying an XSS payload, that value can be loaded unescaped into the DOM.



  Type: string
  Description: prints xss
  Default: <script>console.log('000000000000000000dad0000000000000000000');</script>


Sanitization of HTML content should be done by an engine built for the job.  The Swagger-UI team chose to solve this issue with the npm module sanitize-html.
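The actual fix is JavaScript-side (sanitize-html filtering the generated markup), but the underlying principle -- encode untrusted Swagger fields before they reach the DOM -- can be illustrated with a minimal Python sketch. Here `render_default` is a hypothetical stand-in for the template step that emits a parameter's default value:

```python
import html

def render_default(value):
    """Escape a Swagger 'default' value before embedding it in generated
    HTML documentation; without this step, a payload like the one above
    executes in the viewer's browser."""
    return html.escape(str(value), quote=True)
```

With escaping in place, the `<script>` payload from the example definition renders as inert text rather than executing.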

Disclosure Timeline

This vulnerability advisory was prepared in accordance with Rapid7's disclosure policy.

  • Thu, Jun 09, 2016: Discovery by Scott Lee Davis of Rapid7, Inc.
  • Fri, Jun 17, 2016: Attempted to contact the vendor
  • Mon, Jul 11, 2016: Disclosed details to the vendor at
  • Wed, Jul 27, 2016: Disclosed details to CERT as VR-316
  • Tue, Aug 09, 2016: CVE-2016-5682 assigned by CERT
  • Tue, Aug 23, 2016: Fixed in Swagger-UI 2.2.1
  • Fri, Sep 02, 2016: Public disclosure
