The Remote Desktop Protocol, commonly referred to as RDP, is a proprietary protocol developed by Microsoft that provides a graphical means of connecting to a network-connected computer. RDP client and server support has been present in varying capacities in nearly every Windows version since NT. Outside of Microsoft's offerings, RDP clients are available for most other operating systems. If the nitty-gritty of protocols is your thing, Wikipedia's Remote Desktop Protocol article is a good start on your way to a trove of TechNet articles.


RDP is essentially a protocol for dangling your keyboard, mouse and a display for others to use. As you might expect, a juicy protocol like this has a variety of knobs used to control its security capabilities, including controlling user authentication, what encryption is used, and more. The default RDP configuration on older versions of Windows left it vulnerable to several attacks when enabled; however, newer versions have upped the game considerably by requiring Network Level Authentication (NLA) by default. If you are interested in reading more about securing RDP, UC Berkeley has put together a helpful guide, and Tom Sellers, prior to joining Rapid7, wrote about specific risks related to RDP and how to address them.


RDP’s history from a security perspective is varied. Since at least 2002 there have been 20 Microsoft security updates specifically related to RDP and at least 24 separate CVEs:


In more recent times, the Esteemaudit exploit was found as part of the ShadowBrokers leak targeting RDP on Windows 2003 and XP systems, and was perhaps the reason for the most recent RDP vulnerability addressed in CVE-2017-0176.


RDP is disabled by default for all versions of Windows but is very commonly exposed in internal networks for ease of use in a variety of duties like administration and support. I can’t think of a place where I’ve worked where it wasn’t used in some capacity. There is no denying the convenience it provides.


RDP also finds itself exposed on the public internet more often than you might think. Depending on how RDP is configured, exposing it on the public internet ranges from suicidal on the weak end to not-too-unreasonable on the other. It is easy to simply suggest that proper firewall rules or ACLs restricting RDP access to all but trusted IPs is sufficient protection, but all that extra security only gets in the way when Bob-from-Accounting’s IP address changes weekly. Sure, a VPN might be something that RDP could hide behind and be considerably more secure, but you could also argue that a highly secured RDP endpoint on the public internet is comparable security-wise to a VPN.  And when your security-unsavvy family members or friends need help from afar, enabling RDP is definitely an option that is frequently chosen. There have also been reports that scammers have been using RDP as part of their attacks, often convincing unwary users to enable RDP so that “remote support” can be provided.  As you can see and imagine, there are all manner of ways that RDP could end up exposed on the public internet, deliberately or otherwise.


It should come as no surprise, then, to learn that we’ve been doing some poking at the global exposure of RDP on the public IPv4 internet as part of Rapid7 Labs' Project Sonar. Labs first looked at the abuse of RDP from a honeypot’s perspective as part of the Attacker's Dictionary research published last year. Around the same time, in early 2016, Sonar observed 10.8 million supposedly open RDP endpoints. As part of the research for Rapid7’s 2016 National Exposure Index, we observed 9 million and 9.4 million supposedly open RDP endpoints in our two measurements in the second quarter of 2016. More recently, as part of the 2017 National Exposure Index, in the first quarter of 2017, Sonar observed 7.2 million supposedly open RDP endpoints.

Exposing an endpoint is one thing, but actually exposing the protocol in question is where the bulk of the risk comes from. As part of running Sonar, we frequently see a variety of honeypots, tarpits, IPSes or other security devices that will make it appear as if an endpoint is open when it really isn’t—or when it really isn’t speaking the protocol you are expecting. As such, I’m always skeptical of these initial numbers. Surely there aren’t really 7-10 million systems exposing RDP on the public internet. Right?

Recently, we launched a Sonar study in order to shed more light on the number of systems actually exposing RDP on the public internet. We built on the previous RDP studies, which were simple zmap SYN scans, by following up with a full connection to each IP that responded positively and attempting the first in a series of protocol exchanges that occur when an RDP client first contacts an RDP server. This simple, preliminary protocol negotiation mimics what modern RDP clients perform and is similar to what Nmap uses to identify RDP. This 19-byte RDP negotiation request should elicit a response from almost every valid RDP configuration it encounters, from the default (less secure) settings of older RDP versions to the NLA and SSL/TLS requirements of newer defaults:

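For the curious, the probe itself is small enough to sketch in a few lines. The following is a rough illustration based on the publicly documented RDP connection sequence (TPKT header, X.224 Connection Request, and RDP negotiation request); the helper name and the default requested-protocols value are our own, not necessarily the exact bytes Sonar sent:

```python
import struct

def build_rdp_neg_request(requested_protocols: int = 0x00000003) -> bytes:
    """Assemble the 19-byte connection request an RDP client sends first.

    0x00000003 asks for PROTOCOL_SSL | PROTOCOL_HYBRID (CredSSP/NLA).
    """
    # TPKT header: version 3, reserved 0, 16-bit total length (19 bytes)
    tpkt = struct.pack(">BBH", 3, 0, 19)
    # X.224 Connection Request TPDU: length indicator 14, CR code 0xE0,
    # destination/source references 0, class 0
    x224_cr = struct.pack(">BBHHB", 14, 0xE0, 0, 0, 0)
    # RDP_NEG_REQ: type 0x01, flags 0, length 8, requested protocols (LE)
    rdp_neg_req = struct.pack("<BBHI", 0x01, 0x00, 8, requested_protocols)
    return tpkt + x224_cr + rdp_neg_req

probe = build_rdp_neg_request()
assert len(probe) == 19  # the 19-byte negotiation request described above
```

Sending these bytes to 3389/TCP and reading the reply is enough to separate endpoints that merely accept a connection from endpoints that actually speak RDP.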


We analyzed the responses, tallying any that appeared to be from RDP-speaking endpoints, counting both error messages indicating possible client- or server-side configuration issues and success messages.


The study found 11 million open 3389/TCP endpoints, and 4.1 million responded in such a way that they were speaking RDP of some manner or another. This number is shockingly high when you remember that this protocol is effectively a way to expose a keyboard, mouse and ultimately a Windows desktop over the network. Furthermore, any RDP-speaking endpoints discovered by this Sonar study are not applying basic firewall rules or ACLs to protect this service, which brings into question whether or not any of the other basic security practices have been applied to these endpoints.

Given the myriad of ways that RDP could end up exposed on the public Internet as observed in this recent Sonar study, it is hard to say why any one country would have more RDP exposed than another at first glance, but clearly the United States and China have something different going on than everyone else:

Looked at from a different angle, by examining the organizations that own the IPs with exposed RDP endpoints, things start to become much clearer:



The vast majority of these providers are known to be cloud, virtual, or physical hosting providers where remote access to a Windows machine is a frequent necessity; it's no surprise, therefore, that they dominate exposure.

We can draw further conclusions by examining the RDP responses we received. Amazingly, over 83% of the RDP endpoints we identified indicated that they were willing to proceed with CredSSP as the security protocol, implying that the endpoint is willing to use one of the more secure protocols to authenticate and protect the RDP session. A small handful in the few thousand range selected SSL/TLS. Just over 15% indicated that they didn’t support SSL/TLS (despite our also proposing CredSSP…) or that they only supported the legacy “Standard RDP Security”, which is susceptible to man-in-the-middle attacks. Over 80% of exposed endpoints supporting common means for securing RDP sessions is rather impressive. Is this a glimmer of hope for the arguably high number of exposed RDP endpoints?
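A sketch of the tallying logic may help. The structure type and protocol codes below follow the publicly documented negotiation response and failure structures; the function and names are illustrative, not Sonar's actual implementation:

```python
import struct

# selectedProtocol values a server can return in a negotiation response
PROTOCOL_NAMES = {
    0x00000000: "Standard RDP Security",
    0x00000001: "SSL/TLS",
    0x00000002: "CredSSP (NLA)",
}

def classify_rdp_response(data: bytes) -> str:
    """Classify a server's reply to the 19-byte RDP negotiation request."""
    # Expect TPKT (4 bytes) + X.224 Connection Confirm (7) + negotiation (8)
    if len(data) < 19 or data[0] != 3:
        return "not an RDP negotiation response"
    neg_type = data[11]
    value = struct.unpack_from("<I", data, 15)[0]
    if neg_type == 0x02:  # TYPE_RDP_NEG_RSP: server accepted a protocol
        return "selected: " + PROTOCOL_NAMES.get(value, hex(value))
    if neg_type == 0x03:  # TYPE_RDP_NEG_FAILURE, e.g. SSL_NOT_SUPPORTED
        return "negotiation failure, code " + hex(value)
    return "unexpected structure type " + hex(neg_type)
```

Both branches still count as "RDP speaking": an endpoint that answers with a failure code reveals just as much about itself as one that happily selects CredSSP.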

Areas for potential future research could include:


  • Security protocols and supported encryption levels. Nmap has an NSE script that will enumerate the security protocols and encryption levels available for RDP. While 83% of the RDP speaking endpoints support CredSSP, this does not mean that they don’t also support less secure options; it just means that if a client is willing, they can take the more secure route.
  • When TLS/SSL or CredSSP are involved, are organizations following best practices with regard to certificates, including self-signed certificates (perhaps leading to MiTM?), expiration, and weak algorithms?
  • Exploring the functionality of RDP in non-Microsoft client and server implementations
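On the certificate question, the two cheapest checks (self-signed and expired) can be expressed against the dictionary Python's ssl module returns for a peer certificate. This is our own sketch of what such a check might look like, not code from the study:

```python
import ssl
import time

def cert_findings(cert: dict) -> list:
    """Flag the two easiest certificate problems to spot.

    `cert` is the dict shape returned by ssl.SSLSocket.getpeercert().
    """
    findings = []
    # A certificate whose issuer equals its subject is (almost always) self-signed
    if cert.get("issuer") == cert.get("subject"):
        findings.append("self-signed")
    not_after = cert.get("notAfter")
    if not_after and ssl.cert_time_to_seconds(not_after) < time.time():
        findings.append("expired")
    return findings
```

Checking for weak signature algorithms requires parsing the DER certificate itself, which is where a follow-up study would pick up.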


Rapid7’s InsightVM and Metasploit have fingerprinting coverage to identify RDP, and InsightVM has vulnerability coverage for all of the above-mentioned RDP vulnerabilities.


Interested in this RDP research? Have ideas for more? Want to collaborate? We’d love to hear from you, either in the comments below or at

What's Up?

Astute readers may have been following the recent news around "SMBLoris" — a proof-of-concept exploit that takes advantage of a vulnerability in the implementation of SMB services on both Windows and Linux, enabling attackers to "kill you softly" with a clever, low-profile application-level denial of service (DoS). This vulnerability impacts all versions of Windows and Samba (the Linux software that provides SMB services on that platform), and Microsoft has stated that it has no current intention to provide a fix for the issue.


Researchers Sean Dillon (Twitter: @zerosum0x0) and Jenna Magius (Twitter: @jennamagius) found the original vulnerability in June (2017) and noted that it was an apparent bug in SMBv1 (you'll remember that particular string of letters from both WannaCry and "NotPetya" outbreaks this year), and Jenna Magius was one of the researchers who more recently noted that all Windows systems — including Windows 10 — exposing port 445 are vulnerable (i.e. disabling SMBv1 won't stop attacks).


This means that the current situation is that all Windows systems exposing port 445 and the majority of Linux systems exposing port 445 are vulnerable to this application-level denial of service attack. If the attack is successful, the system being attacked will need to be rebooted and will still be vulnerable. Researchers have noted that this vulnerability is similar to one from 2009 — Slowloris — that impacted different types of systems with the same technique. It appears, however, that SMBLoris can have a much faster negative impact even on Windows systems with robust hardware configurations.
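The mechanics reported by the researchers also explain that speed: the NetBIOS Session Service header lets a client promise a large payload up front, which the server buffers in non-paged pool before any data arrives. A little back-of-the-envelope arithmetic (ours, using the roughly 128 KB-per-connection figure the researchers reported) shows why a single attacking host is enough:

```python
# Rough cost of SMBLoris to a target, assuming the ~128 KB per-connection
# non-paged pool allocation reported by the researchers.
BYTES_PER_CONNECTION = 128 * 1024   # promised via the NBSS length field
SOURCE_PORTS = 65535                # one idle connection per ephemeral port

total_bytes = BYTES_PER_CONNECTION * SOURCE_PORTS
print(f"~{total_bytes / 2**30:.1f} GiB pinned from a single source IP")
# prints "~8.0 GiB pinned from a single source IP"
```

Roughly 8 GiB of memory pinned by nothing more than idle connections, before the attacker even considers using multiple source IPs, is more than enough to exhaust non-paged pool on most servers.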


Is The World Ending?

Yes…in approximately 7.5 billion years, when our Sun is estimated to turn into a white dwarf.


However, here are the facts about this vulnerability:


  • It is not, itself, "wormable" as seen with previous SMB-related attacks this year.
  • It is not "ransomware".
  • There is currently no indication of active exploitation of it (we and other researchers are monitoring for this and will provide additional communications & guidance if we discover widespread SMBLoris probes or attacks).
  • It is not any more destructive to a single system than what might happen if you accidentally turned off said system without shutting it down properly.


If you have mobile endpoints (i.e. laptops) that connect to diverse networks or SMB servers exposing port 445 to the internet, then those systems are vulnerable to this SMBLoris exploit and can easily be (temporarily) taken down by attackers.


Your internal systems are also vulnerable to this attack as most organizations do not implement granular controls over port 445 system-to-system communications. This means that an attacker who compromises a system within your network can launch SMBLoris attacks against any assets exposing port 445.


So, while the world is generally safe, there is room for reasoned caution.


What Can We Do?

If you own one or more of the ~4 million internet endpoints exposing this vulnerable protocol on port 445 (as noted in our 2017 Q2 Threat Report) then you should take steps to remove those systems from the internet (it's never a good idea to expose this service directly to the internet anyway).


If you have an active, mobile user base, then those devices should be configured to block access to port 445 when not on the corporate network. Even then, it's a good idea to have well-crafted host firewall rules to restrict access on this port.


You should also be monitoring both your operations logs/alerts and help desk tickets for unusual reports of random system crashes and reboots and handling them through your standard incident response processes.


What Might Attackers Do With SMBLoris?

Denial of service and distributed denial of service (DDoS) attacks are generally used to disable services for:


  • "fun"/retaliation/ideology
  • financial gain (e.g. extortion), and
  • distraction (i.e. keeping operations teams and incident responders busy to cover the tracks for other malicious behaviour)


The CVE Details site shows over 60,000 application/operating system DoS vulnerabilities spread across hundreds of vendor products. It is highly likely that you have many of these other DoS vulnerabilities present on both your internet-facing systems and intranet systems. In other words, attackers have a plethora of targets to choose from when deciding to use application- or OS-level DoS attacks against an organization.


What makes SMBLoris a bit more insidious and a potential go-to vulnerability for attackers is that it makes it easy to perform nigh-guaranteed widespread DDoS attacks against homogeneous networks exposing port 445. So, while you should not be panicking, you should be working to understand your exposure, creating and deploying configurations to mitigate said exposure, and performing the monitoring outlined above.


If you do not have a threat exercise scenario for application-level DDoS attacks or do not have an incident response plan for such an attack, now would be a great time to work on that. You can use this run book by the Financial and Banking Information Infrastructure Committee (FBIIC) as a starting point or reference.


As stated earlier, we are on the lookout for adversarial activity surrounding SMBLoris, and will update the community if and when we have more information.


(Banner image by Jonas Eklundh)

We cannot believe that we're already into August! Time really flies when the internet is constantly on fire. When it came time to analyze data for our Q2 Threat Report and pull out threat trends and landscape changes, there was plenty to work with. Q2 kept defenders on their toes—from the Shadow Brokers' leaks at the beginning of April (was it really just four months ago?) to the Petya/NotPetya/but-something-crazy-is-definitely-going-on attacks in the final days of the quarter. There were quite a few significant lessons learned in Q2, both about the threat landscape and how defenders can adapt to changes. Some of our key takeaways from Q2:


  • We can’t respond based on how exciting or novel something seems. Many of the exploits leaked by the Shadow Brokers were old, and nearly all of them had patches available. They targeted services that are tried-and-true attack vectors—and we thought that we knew better than to have them exposed. Our initial response to the leaks was lackluster. Many of us moved on once the vulnerabilities were identified, because it seemed so obvious that we should have been protected. It turned out that many people were not, and attackers took advantage of that—though not full advantage, mind you, because there are plenty of exploits in the dump that haven’t been leveraged yet, and our research with Project Sonar indicates that there is plenty of additional opportunity for attackers.
  • Other attacks don’t stop when there is a high-profile security event in the news. Multiple high-profile attacks occupied much of defender time this quarter, but the majority of incidents defenders responded to during that time were not related to the high-profile events. Understanding how to prioritize these breaking news events while still focusing on the threats impacting your organization was a key lesson we highlighted in the Q2 Threat Report.
  • Understanding the factors that impact your threat profile will help make sure that you are focusing on the right threats. The industry you are in may dictate the types of attackers who target you and the tools that they are likely to use, but there are other factors as well. While we saw specific trends emerge on a per-industry basis, we also saw a handful of tactics that were used across all sectors. In addition, we identified key differences in attacker tactics against large organizations and small organizations.


The full report is available here, with all of the data we used in our analysis and some amazing visualizations. If you want even more Q2 threat report goodness, sign up for the webcast Bob Rudis, Tod Beardsley and I are hosting on August 15th.

It's summer in the northern hemisphere and many folks are working their way through carefully crafted reading lists, rounding out each evening exploring fictional lands or investigating engrossing biographies. I'm hoping that by the end of this post, you'll be adding another item to your "must read" list — a tome whose pages are bursting with exploits carried out by crafty, international adversaries and stories of modern day sleuths on the hunt for those who would seek their fortunes by preying on the innocent and ill-prepared. "What work is this?!" you ask (no doubt with eager anticipation). Why, it's none other than the Cisco 2017 Midyear Cybersecurity Report (MCR)!


This year, Rapid7—along with nine other organizations—contributed content for Cisco's mid-year threat landscape review, and it truly is a treasure trove of information with data, intelligence and guidance from a wide range of focus areas across the many disciplines of cybersecurity.


Avid readers of the R7 blog likely remember our foray into "DevOps" server-ransomware earlier this year. We've been using Project Sonar to monitor MongoDB, CouchDB, Elasticsearch and Docker—what we're calling "DevOps" servers—since January, and we've provided a deep-dive into the state of these services in the Cisco 2017 MCR.


You should read the "DevOps" section within the context of the entire report as other sections provide both reinforcement and additional adversary context to our findings, but we wanted to show a small extension of the MongoDB server status here since we've performed a few more scans since we provided the research results in the MCR:


There are two main takeaways from the current state of MongoDB (and the other "DevOps" servers).


First: Good show! While there are still thousands of MongoDB (and CouchDB, and Elasticsearch, and Docker, etc.) instances exposed to the internet without authentication, the numbers are generally decreasing or holding steady (per-scan discrepancies are expected since mining the internet for data is notoriously fraught with technical peril), and it seems attackers have realized that the remaining instances out there are likely non-production, forgotten or abandoned systems. It would be great if the owners of these systems yanked them off of the internet, but the issue appears to have been (at least temporarily) abated.


Second: Be vigilant! Attackers have had ample opportunity to fine-tune their "DevOps" discovery kits as well as their ransom/destruction kits. We've all witnessed just how easy it is for our adversaries to gain an internal foothold when they believe it will be beneficial (ref: WannaCry and Petya-not-Petya). It truly is only a matter of time before the techniques they've perfected on the open internet make their way into our gilded internal network cages. It would be very prudent to take this summer lull to scan for open or weak-credentialed "DevOps" servers in your own environments and make sure they're properly secured before you find yourself standing there feeding bills into a Bitcoin ATM when you should be basking in the sun on the beach.


The Cisco 2017 MCR is an absolute "must add" to the reading lists of IT and cybersecurity professionals, and we hope you take some time to digest it over the coming days and weeks.

On Jun. 22, the US Copyright Office released its long-awaited study on Sec. 1201 of the Digital Millennium Copyright Act (DMCA), and it has important implications for independent cybersecurity researchers. Mostly the news is very positive. Rapid7 advocated extensively for researcher protections to be built into this report, submitting two sets of detailed comments—see here and here—to the Copyright Office with Bugcrowd, HackerOne, and Luta Security, as well as participating in official roundtable discussions. Here we break down why this matters for researchers, what the Copyright Office's study concluded, and how it matches up to Rapid7's recommendations.


What is DMCA Sec. 1201 and why does it matter to researchers?

Sec. 1201 of the DMCA prohibits circumventing technological protection measures (TPMs, like encryption, authentication requirements, region coding, user agents) to access copyrighted works, including software, without permission of the owner. That creates criminal penalties and civil liability for independent security research that does not obtain authorization for each TPM circumvention from the copyright holders of software. This hampers researchers' independence and flexibility. While the Computer Fraud and Abuse Act (CFAA) is more famous and feared by researchers, liability for DMCA Sec. 1201 is arguably broader because it applies to accessing software on devices you may own yourself while CFAA generally applies to accessing computers owned by other people.


To temper this broad legal restraint on unlocking copyrighted works, Congress built in two types of exemptions to Sec. 1201: permanent exemptions for specific activities, and temporary exemptions that the Copyright Office can grant every three years in its "triennial rulemaking" process. The permanent exemption to the prohibition on circumventing TPMs for security testing is quite limited – in part because researchers are still required to get prior permission from the software owner. The temporary exemptions go beyond the permanent exemptions.


In Oct. 2015 the Copyright Office granted a very helpful exemption to Sec. 1201 for good faith security testing that circumvents TPMs without permission. However, this temporary exemption will expire at the end of the three-year exemption window. In the past, once a temporary exemption expired, advocates had to start from scratch in re-applying for another temporary exemption. The temporary exemption is set to expire in Oct. 2018, and the renewal process will ramp up in the fall of this year.


Copyright Office study and Rapid7's recommendations

The Copyright Office announced a public study of Sec. 1201 in Dec. 2015. The Copyright Office undertook this public study to weigh legislative and procedural reforms to Sec. 1201, including the permanent exemptions and the three-year rulemaking process. The Copyright Office solicited two sets of public comments and held a roundtable discussion to obtain feedback and recommendations for the study. At each stage, Rapid7 provided recommendations on reforms to empower good faith security researchers while preserving copyright protection against infringement – though, it should be noted, there were several commenters opposed to reforms for researchers on IP protection grounds.


Broadly speaking, the conclusions reached in the Copyright Office's study are quite positive for researchers and largely tracked the recommendations of Rapid7 and other proponents of security research. Here are four key highlights:


  1. Authorization requirement: As noted above, the permanent exemption for security testing under Sec. 1201(j) is limited because it still requires researchers to obtain authorization to circumvent TPMs. Rapid7's recommendation is to remove this requirement entirely because good faith security research does not infringe copyright, yet an authorization requirement compromises independence and speed of research. The Copyright Office's study recommended [at pg. 76] that Congress make this requirement more flexible or remove it entirely. This is arguably the study's most important recommendation for researchers.

  2. Multi-factor test: The permanent exemption for security testing under Sec. 1201(j) also partially conditions liability protection on researchers when the results are used "solely" to promote the security of the computer owner, and when the results are not used in a manner that violates copyright or any other law. Rapid7's recommendations are to remove "solely" (since research can be performed for the security of users or the public at large, not just the computer owner), and not to penalize researchers if their research results are used by unaffiliated third parties to infringe copyright or violate laws. The Copyright Office's study recommended [at pg. 79] that Congress remove the "solely" language, and either clarify or remove the provision penalizing researchers when research results are used by third parties to violate laws or infringe copyright.

  3. Compliance with all other laws: The permanent exemption for security testing only applies if the research does not violate any other law. Rapid7's recommendation is to remove this caveat, since research may implicate obscure or wholly unrelated federal/state/local regulations, those other laws have their own enforcement mechanisms to pursue violators, and removing liability protection under Sec. 1201 would only have the effect of compounding the penalties. Unfortunately, the Copyright Office took a different approach, tersely noting [at pg. 80] that it is unclear whether the requirement to comply with all other laws impedes legitimate security research. The Copyright Office stated they welcome further discussion during the next triennial rulemaking, and Rapid7 may revisit this issue then.

  4. Streamlined renewal for temporary exemptions: As noted above, temporary exemptions expire after three years. In the past, proponents had to start from scratch to renew the temporary exemption – a process that involves structured petitions, multiple rounds of comments to the Copyright Office, and countering the arguments of opponents to the exemption. For researchers that want to renew the temporary security testing exemption, but that lack resources and regulatory expertise, this is a burdensome process. Rapid7's recommendation is for the Copyright Office to presume renewal of previously granted temporary exemptions unless there is a material change in circumstances that no longer justifies granting the exemption. In its study, the Copyright Office committed [at pg. 143] to streamlining the paperwork required to renew already granted temporary exemptions. Specifically, the Copyright Office will ask parties requesting renewal to submit a short declaration of the continuing need for an exemption, and whether there has been any material change in circumstances voiding the need for the exemption, and then the Copyright Office will consider renewal based on the evidentiary record and comments from the rulemaking in which the temporary exemption was originally granted. Opponents of renewing exemptions, however, must start from scratch in submitting evidence that a temporary exemption harms the exercise of copyright.


Conclusion—what's next?

In the world of policy, change typically occurs over time in small (often hard-won) increments before becoming enshrined in law. The Copyright Office's study is one such increment. For the most part, the study is making recommendations to Congress, and it will ultimately be up to Congress (which has its own politics, processes, and advocacy opportunities) to adopt or decline these recommendations. The Copyright Office's study comes at a time when the House Judiciary Committee is broadly reviewing copyright law with an eye towards possible updates. However, copyright is a complex and far-reaching field, and it is unclear when Congress will actually take action. Nonetheless, the Copyright Office's opinion on these issues will carry significant weight in Congress' deliberations, so it would have been a heavy blow if the Copyright Office’s study had instead called for tighter restrictions on security research.


Importantly, the Copyright Office's new, streamlined process for renewing already granted temporary exemptions will take effect without Congress' intervention. The streamlined process will be in place for the next "triennial rulemaking," which begins in late 2017 and concludes in 2018, and which will consider whether to renew the temporary exemption for security research. This is a positive, concrete development that will reduce the administrative burden of applying for renewal and increase the likelihood of continued protections for researchers.


The Copyright Office's study noted that "Independent security test[ing] appears to be an important component of current cybersecurity practices". This recognition and subsequent policy shifts on the part of the Copyright Office are very encouraging. Rapid7 believes that removing legal barriers to good faith independent research will ultimately strengthen cybersecurity and innovation, and we hope to soon see legislative reforms that better balance copyright protection with legitimate security testing.


Petya-like Ransomware Explained

Posted by todb, Jun 27, 2017

TL;DR summary (June 28 and beyond): A major ransomware attack started in Ukraine yesterday and has spread around the world. The ransomware, which was initially thought to be a modified Petya variant, encrypts files on infected machines and uses multiple mechanisms to both gain entry to target networks and to spread laterally. We have confirmed that once victims’ disks are encrypted, they cannot be decrypted. For this reason, there’s dissent on whether the Petya-like attack should be called ransomware at all. Whatever you call it, our advice is the same: Back up, patch against MS17-010 vulnerabilities (mitigation against internal spread), and block TCP/445 traffic. Don’t pay the ransom, since decryption by the attacker is impossible.


Read on for further information on infection vectors, IOCs, and additional Rapid7 resources.


Initial Analysis:


In the early morning hours of June 27, 2017, UTC+3 time, ransomware that appears to be an updated variant belonging to the Petya family surfaced in Eastern Europe (read a sample summary here). Incident detection and response professionals around the world immediately started connecting this Petya-like ransomware with the same EternalBlue exploits used by the WannaCry ransomware. Since the attack was so widespread, collecting a sample was pretty straightforward, and Rapid7's incident response team is currently analyzing what is actually going on here.


This blog post will be updated throughout the day with what we know about the ransomware, as well as what Rapid7 customers can do to prevent, detect, and respond to it. In the meantime, organizations are strongly advised to take the following actions:


  • Ensure that all Windows systems have been patched against MS17-010 vulnerabilities.
  • Employ network and host-based firewalls to block TCP/445 traffic from untrusted systems. If possible, block 445 inbound to all internet-facing Windows systems.
  • Ensure critical systems and files have up-to-date backups. Backups are the only full mitigation against data loss due to ransomware.
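A quick way to find exposed assets before an attacker does is a plain TCP connect check against 445 across your address space. A minimal sketch (the helper name and timeout are our own):

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Handy for a quick audit of which hosts still expose SMB before
    applying firewall rules like those described above.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run against internal ranges, anything that comes back True deserves a second look at its host firewall configuration.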


Rapid7 has a ransomware resources page available here. For those already hit by this ransomware, our best guidance right now is to work with law enforcement and incident response experts. Our own incident responders are available 24/7 on the hotline: +1-844-RAPID-IR.


Unfortunately - though we really hate to say so - the bottom line here is that if you don't have thorough and timely backups, your encrypted data is likely unrecoverable; paying the ransom will not get your files back. See the 14:30 EDT update for details.


Update 13:45 EDT:


We've confirmed that this ransomworm achieves its initial infection via a malicious document attached to a phishing email, requiring a victim to download and open it (Update: Shortly after this analysis, it became clear that the true initial vector was a malicious application update, not a phishing attack. An unrelated phishing campaign that occurred simultaneously led to some confusion about the initial vector and related IOCs). After the initial attack, it does indeed use the EternalBlue and DoublePulsar exploits to spread laterally. Unlike WannaCry, though, it is currently using these mechanisms to spread only on internal networks.


While this is bad news for compromised organizations, the good news is that the spread directly across the internet is rather limited.


The worse news is that there is still plenty of SMB on the internet to go after. Here's a map of the exposed SMB we've generated from some fresh Sonar data:



Malware rarely stays static for long, so it's only a matter of time before a variant of this malware is released that uses SMB to spread directly across the internet, just like WannaCry.


Update 14:30 EDT:


Victims of this attack are directed to contact an email address once they’ve paid the ransom; however, the email account in question has been disabled by the German company that hosts it. Therefore, victims who pay the ransom are reportedly unable to recover their files. More details here.


Update 15:30 EDT:


We've identified the IP addresses,,, and as fine candidates to watch for at your firewall. If you see connection attempts to them sourced from your internal network, either someone is infected, or someone is monkeying around with live malware samples. Jon Hart goes into more detail on these, and their associated domain names, in this gist.


Update 16:50 EDT:


There have been some reports of Petya-like infections occurring in networks that seemed to lack the initial infection vector. While this might not appear to be possible, there are scenarios where this can seem to happen. First, recall that infected computers actively search their local network for targets vulnerable to the issues addressed in MS17-010. Second, some of these devices are quite mobile, and hop around networks. If my laptop gets popped by this ransomware on the corporate network at FooCorp, then I take it to my local coffee shop's wifi and infect someone from BarCom, when that BarCom employee goes back to the office, their incident response people are going to see this race around their network without the phishing email kicking everything off.


This is one scenario where the phishing component would not be immediately obvious. There may be more to this malware, though, and our own IR engineers are still running through static and dynamic analysis, so we may have more on how this thing vectors around in the coming hours.


Update 18:00 EDT:


We've confirmed that this ransomware uses a lightly modified version of mimikatz to extract credentials from memory for use in its psexec and WMI vectors for spreading. Mimikatz is a widely-used open source security tool used primarily by security researchers to understand how credential handling is performed in Windows environments. (Thanks, Tim and Mike!)


Update 20:15 EDT:

If you're a Rapid7 customer, you should be aware that we've already pushed the unique Indicators of Compromise (IOCs) out to all our InsightIDR users, and we've just published a handy HOWTO for InsightVM folks on scanning for MS17-010, which covers the exploit vector being leveraged in this attack. In the meantime, this is a fine time to review your own backup and restore capabilities -- especially the restore part.


It seems unlikely we'll have any more updates through the night, but we're still pursuing analysis work. Once we learn anything new, we'll be updating here.


The Workspaces component of Biscom Secure File Transfer (SFT) version 5.1.1015 is vulnerable to stored cross-site scripting in two fields. An attacker would need to have the ability to create a Workspace and entice a victim to visit the malicious page in order to run malicious Javascript in the context of the victim's browser. Since the victim is necessarily authenticated, this can allow the attacker to perform actions on the Biscom Secure File Transfer instance on the victim's behalf.


Product Description

Biscom Secure File Transfer (SFT) is a web-based solution that replaces insecure FTP and email, allowing end users to easily send and share files without IT involvement. More about the product is available on the vendor's web site.



These issues were discovered by Orlando Barrera II of Rapid7, Inc, and disclosed in accordance with Rapid7's vulnerability disclosure policy.



After authenticating to the Biscom Secure File Transfer web application, an attacker can alter the "Name" and "Description" fields of a Workspace, as shown in Figures 1, 2, and 3.



Figure 1: Adding XSS to Name and Description


Figure 2: Navigating to the Workspace fires the Name field XSS


Figure 3: Navigation to the Workspace fires the Description field XSS


In addition, the File Details component within a Workspace is also vulnerable to stored XSS injection. An attacker can save arbitrary Javascript in the "Description" field of the File Details pane of a file stored in a Workspace.


Figure 4: Editing file details


Figure 5: Navigating to the Workspace fires the File Details Description XSS



As of version 5.1.1025, the issue has been resolved. Absent an update, a web application firewall (WAF) may prevent attackers from entering the malicious XSS, and/or protect users by stripping offending XSS.
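The specifics of Biscom's fix aren't public, but the standard remediation for stored XSS of this kind is to encode user-supplied fields on output. A minimal sketch using Python's standard library (the function name is illustrative, not part of any Biscom API):

```python
import html

def render_field(value: str) -> str:
    """HTML-escape a user-supplied Workspace field before rendering it."""
    return html.escape(value, quote=True)

payload = '<script>alert(1)</script>'
safe = render_field(payload)
print(safe)  # &lt;script&gt;alert(1)&lt;/script&gt;
```

Because the angle brackets arrive in the browser as entities, the stored payload renders as inert text instead of executing in the victim's session.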


Vendor Response

Security is the top priority for Biscom and we appreciate Rapid7 identifying this issue and bringing it to our attention.  Once we were informed, our team moved quickly to release a patch within a week to address the issue. Biscom is not aware of any customers being impacted by this issue and Biscom sees the likelihood as low since an authenticated user is required. Biscom values the sharing of security information and we thank Rapid7 in evaluating our application's security.


Disclosure Timeline

Biscom's response to this issue was exemplary, taking less than 30 days from private notification to a public fix, as seen in the timeline below.

  • Thu, Mar 30, 2017: Discovered and reported by Orlando Barrera II
  • Tue, Apr 04, 2017: Initial contact to the vendor
  • Fri, Apr 07, 2017: Details disclosed to the vendor
  • Thu, Apr 20, 2017: Disclosed to CERT/CC (VRF#17-04-VTJCJ)
  • Wed, May 03, 2017: Fixed version 5.1.1025 provided by the vendor
  • Wed, May 03, 2017: CVE-2017-5241 reserved by Rapid7
  • Tue, Jun 27, 2017: Public disclosure

Senator Ed Markey (D-MA) is poised to introduce legislation to develop a voluntary cybersecurity standards program for the Internet of Things (IoT). The legislation, called the Cyber Shield Act, would enable IoT products that comply with the standards to display a label indicating a strong level of security to consumers – like an Energy Star rating for IoT. Rapid7 supports this legislation and believes greater transparency in the marketplace will enhance cybersecurity and protect consumers.


The burgeoning IoT marketplace holds a great deal of promise for consumers. But as the Mirai botnet made starkly clear, many IoT devices – especially at the inexpensive range of the market – are as secure as leaky boats. Rapid7's Deral Heiland and Tod Beardsley, among many others, have written extensively about IoT's security problems from a technical perspective.


Policymakers have recognized the issue as well and are taking action. Numerous federal agencies (such as FDA and NHTSA) have set forth guidance on IoT security as it relates to their area of authority, and others (such as NIST) have long pushed for consistent company adoption of baseline security frameworks. In addition to these important efforts, we are encouraged that Congress is also actively exploring market-based means to bring information about the security of IoT products to the attention of consumers.


Sen. Markey's Cyber Shield Act would require the Dept. of Commerce to convene public and private sector experts to establish security benchmarks for select connected products. The working group would be encouraged to incorporate existing standards rather than create new ones, and the benchmark would change over time to keep pace with evolving threats and expectations. The process, like that which produced the NIST Cybersecurity Framework, would be open for public review and comment. Manufacturers may voluntarily apply "Cyber Shield" labels to IoT products that meet the security benchmarks (as certified by an accredited testing entity).


Rapid7 supports voluntary initiatives to raise consumer awareness of product security, like the one proposed in the Cyber Shield Act. Consumers are often not able to easily determine the level of security in products they purchase, so an accessible and reliable system is needed to help inform purchase decisions. As consumers evaluate and value IoT security more, competing manufacturers will respond by prioritizing secure design, risk management practices, and security processes. Consumers and the IoT industry can both benefit from this approach.


The legislation is not without its challenges, of course. To be effective, the security benchmarks envisioned by the bill must be clear and focused, rather than generally applicable to all connected devices. The program would need buy-in from security experts and responsible manufacturers, and consumers would need to learn how to spot and interpret Cyber Shield labels. But overall, Rapid7 believes Sen. Markey's Cyber Shield legislation could encourage greater transparency and security for IoT. Strengthening the IoT ecosystem will require a multi-pronged approach from policymakers, and we think lawmakers should consider incorporating this concept as part of the plan.

Deral Heiland

In Fear of IoT Security

Posted by Deral Heiland Employee Jun 21, 2017

I wish I had a dime for every time I have heard someone say “With so many vulnerabilities being reported in the Internet of Things, I just don’t trust that technology, so I avoid using any of it." I am left scratching my head because these same people seem to have no issues running a Windows operating system.


As a researcher focused on the Internet of Things (IoT), I regularly release new vulnerabilities around IoT product ecosystems, which can include hardware, mobile applications, cloud web services, and APIs. The main goal when doing this work is better security. The hope is that the knowledge gained and shared in the course of this research will help manufacturers build better products and help those using IoT make better decisions around the deployment and management of IoT technology. Unfortunately, any vulnerability my peers and I discover during research can be—and often is—made out to be worse than it really is. Now don’t get me wrong: I want the work I do to be taken seriously, but we still need to take a complete risk model into consideration when evaluating IoT issues. We can refer back to one of the most succinct risk formulas below:


Risk = Probability x Impact
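As a toy illustration of the formula above (the scales and numbers here are made up for the example, not taken from any real assessment):

```python
def risk(probability: float, impact: float) -> float:
    """Risk = Probability x Impact, on whatever scales you choose."""
    return probability * impact

# Hypothetical scales: probability 0-1, impact 1-10.
print(risk(0.001, 9))  # everyday user being tracked: 0.009, effectively negligible
print(risk(0.4, 9))    # high-profile target: 3.6, worth mitigating
```

Note how a severe impact still yields a tiny risk score when the probability is near zero, which is the point made in the tracking example below.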


Unfortunately, I think we have a tendency to forget about this formula. Every time I discuss my IoT findings with reporters, customers, or the general public, I like to discuss the associated risk, because it varies quite a bit based on “how” and “by whom” the affected IoT technology is being used.


A good example of this is the IoT tracking device research I conducted in the past year. If I told you a malicious actor could identify and use a simple low-energy Bluetooth dongle hanging on my keychain “used to find lost keys” to track me anywhere I go via Global Positioning System (GPS) data, what would you say about that?  Our initial response would most likely be that this is horrible and scary, and that response is 100% understandable...but if we think through this using the risk formula, it can easily become both clearer and less dire.


First, what is the probability of someone's taking the time and effort to identify such a flaw in a product so they could track us? This leads to several new questions:

  • Would anyone need or want to track me?
  • Am I a very high-profile person, such as a politician or celebrity?
  • Do I have any reason to believe someone is currently interested in stalking or tracking me?

If the answer is “no,” then the probability part of the equation quickly drops to nearly zero, or “very unlikely.” Remember basic math: If zero is multiplied by anything, it still equals zero—meaning the risk of this occurring is very low. Now if you do fall within the narrow category of users who may be at higher risk, then you might want to consider the benefit versus risk of using such technology, but for the most part few of us fall into that category.


If we talk through the risk of using IoT and apply the basic risk formula, we can better identify the true risk and, armed with that knowledge, focus on properly mitigating and/or reducing it.


When thinking about purchasing and deploying IoT at home or in business, there are more questions we can ask that can help us understand other aspects of the risks associated with IoT:

  • Does the product support over-the-air security patching? Having the ability to quickly patch a product when vulnerabilities are found helps to reduce risk and is a key indicator of a more mature security program on the vendor's part.
  • Has the vendor had an independent security review done on their product? Vendors who proactively test their products prior to going to market often deliver a more secure product; this is also another key indicator of a mature security process.


And, when we deploy our newly purchased IoT technology, what can we do as users/consumers to reduce risk?

  • We should change default passwords and set complex passwords on all accounts and services (and choose devices/services that support this or even stronger authentication mechanisms).
  • We should strongly consider deploying the solution within an isolated VLAN environment to protect our core business or home networks from any malicious traffic aimed at or coming from an IoT component.


I strongly encourage both businesses and consumers to embrace IoT and all the benefits that come with IoT solutions, but to do so with eyes wide open using common-sense risk evaluation, along with deployment and management best practices. This way, you can securely take advantage of all IoT has to offer.


Rapid7 can help you test the security of your IoT devices. Learn more about our IoT security services.

In April 2017, President Trump issued an executive order directing a review of all trade agreements. This process is now underway: The United States Trade Representative (USTR) – the nation's lead trade agreement negotiator – formally requested public input on objectives for the renegotiation of the North American Free Trade Agreement (NAFTA). NAFTA is a trade agreement between the US, Canada, and Mexico that covers a huge range of topics, from agriculture to healthcare.


Rapid7 submitted comments in response, focusing on 1) preventing data localization, 2) alignment of cybersecurity risk management frameworks, 3) protecting strong encryption, and 4) protecting independent security research.


Rapid7's full comments on the renegotiation of NAFTA are available here.


1) Preserving global free flow of information – preventing data localization

Digital goods and services are increasingly critical to the US economy. By leveraging cloud computing, digital commerce offers significant opportunities to scale globally for individuals and companies of all sizes – not just for large companies or tech companies, but for any transnational company that stores customer data.


However, regulations abroad that disrupt the free flow of information, such as "data localization" (requirements that data be stored in a particular jurisdiction), impede both trade and innovation. Data localization erodes the capabilities and cost savings that cloud computing can provide, while adding the significant costs and technical burdens of segregating data collected from particular countries, maintaining servers locally in those countries, and navigating complex geography-based laws. The resulting fragmentation also undermines the fundamental concept of a unified and open global internet.


Rapid7's comments [pages 2-3] recommended that NAFTA should 1) Prevent compulsory localization of data, and 2) Include an express presumption that governments would minimize disruptions to the flow of commercial data across borders.


2) Promote international alignment of cybersecurity risk management frameworks

When NAFTA was originally negotiated, cybersecurity was not the central concern that it is today. Cybersecurity is presently a global affair, and the consequences of malicious cyberattack or accidental breach are not constrained by national borders.


Flexible, comprehensive security standards are important for organizations seeking to protect their systems and data. International interoperability and alignment of cybersecurity practices would benefit companies by enabling them to better assess global risks, make more informed decisions about security, hold international partners and service providers to a consistent standard, and ultimately better protect global customers and constituents. Stronger security abroad will also help limit the spread of malware contagion to the US.


We support the approach taken by the National Institute of Standards and Technology (NIST) in developing the Cybersecurity Framework for Critical Infrastructure. The process was open, transparent, and carefully considered the input of experts from the public and private sector. The NIST Cybersecurity Framework is now seeing impressive adoption among a wide range of organizations, companies, and government agencies – including some critical infrastructure operators in Canada and Mexico.


Rapid7's comments [pages 3-4] recommended that NAFTA should 1) recognize the importance of international alignment of cybersecurity standards, and 2) require the Parties to develop a flexible, comprehensive cybersecurity risk management framework through a transparent and open process.


3) Protect strong encryption

Reducing opportunities for attackers and identifying security vulnerabilities are core to cybersecurity. The use of encryption and security testing are key practices in accomplishing these tasks. International regulations that require weakening of encryption or prevent independent security testing ultimately undermine cybersecurity.


Encryption is a fundamental means of protecting data from unauthorized access or use, and Rapid7 believes companies and innovators should be able to use the encryption protocols that best protect their customers and fit their service model – whether that protocol is end-to-end encryption or some other system. Market access rules requiring weakened encryption would create technical barriers to trade and put products with weakened encryption at a competitive disadvantage with uncompromised products. Requirements to weaken encryption would impose significant security risks on US companies by creating diverse new attack surfaces for bad actors, including cybercriminals and unfriendly international governments.


Rapid7's comments [page 5] recommended that NAFTA forbid Parties from conditioning market access for cryptography in commercial applications on the transfer of decryption keys or alteration of the encryption design specifications.


4) Protect independent security research

Good faith security researchers access software and computers to identify and assess security vulnerabilities. To perform security testing effectively, researchers often need to circumvent technological protection measures (TPMs) – such as encryption, login requirements, region coding, user agents, etc. – controlling access to software (a copyrighted work). However, this activity can be chilled by Sec. 1201 of the Digital Millennium Copyright Act (DMCA) of 1998, which forbids circumvention of TPMs without the authorization of the copyright holder.


Good faith security researchers do not seek to infringe copyright, or to interfere with a rightsholder's normal exploitation of protected works. The US Copyright Office recently affirmed that security research is fair use and granted this activity, through its triennial rulemaking process, a temporary exemption from the DMCA's requirement to obtain authorization from the rightsholder before circumventing a TPM to safely conduct security testing on lawfully acquired (i.e., not stolen or "borrowed") consumer products.


Some previous trade agreements have closely mirrored the DMCA's prohibitions on unauthorized circumvention of TPMs in copyrighted works. This approach replicates internationally the overbroad restrictions on independent security testing that the US is now scaling back. Newly negotiated trade agreements should aim to strike a more modern and evenhanded balance between copyright protection and good faith cybersecurity research.


Rapid7's comments [page 6] recommended that any anti-circumvention provisions of NAFTA should be accompanied by provisions exempting security testing of lawfully acquired copyrighted works.


Better trade agreements for the Digital Age?

Data storage and cybersecurity have undergone considerable evolution since NAFTA was negotiated nearly a quarter century ago. To the extent that renegotiation may better address trade issues related to digital goods and services, we view the modernization of NAFTA and other agreements as potentially positive. The comments Rapid7 submitted regarding NAFTA will likely apply to other international trade agreements as they come up for renegotiation. We hope the renegotiations result in a broadly equitable and beneficial trade regime that reflects the new realities of the digital economy.

Patrick Laverty

About User Enumeration

Posted by Patrick Laverty Employee Jun 15, 2017

User enumeration is when a malicious actor can use brute-force to either guess or confirm valid users in a system. User enumeration is often a web application vulnerability, though it can also be found in any system that requires user authentication. Two of the most common areas where user enumeration occurs are a site's login page and its ‘Forgot Password’ functionality.


The malicious actor is looking for differences in the server’s response based on the validity of submitted credentials. The Login form is a common location for this type of behavior. When the user enters an invalid username and password, the server returns a response saying that user ‘rapid7’ does not exist. A malicious actor would know that the problem is not with the password, but that this username does not exist in the system, as shown in Figure 1:



Figure 1: User Does Not Exist


On the other hand, if the user enters a valid username with an invalid password, and the server returns a different response that indicates that the password is incorrect, the malicious actor can then infer that the username is valid, as shown in Figure 2:



Figure 2: Username is Valid


At this point, the malicious actor knows how the server will respond to ‘known good’ and ‘known bad’ input. So, the malicious actor can then perform a brute-force attack with common usernames, or may use census data of common last names and append each letter of the alphabet to generate valid username lists.


Once a list of validated usernames is created, the malicious actor can then perform another round of brute-force testing, but this time against the passwords until access is finally gained.
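The list-generation step described above can be sketched in a few lines. The surnames here are stand-ins for real census data, and the exact username format varies by organization:

```python
import string

# Stand-ins for census data of common last names.
common_surnames = ["smith", "johnson", "williams"]

def candidate_usernames(surnames):
    """Append each letter of the alphabet to each surname, as described
    above. (Prepending a first initial, e.g. 'jsmith', is a common
    variant of the same idea.)"""
    return [surname + letter
            for surname in surnames
            for letter in string.ascii_lowercase]

candidates = candidate_usernames(common_surnames)
print(len(candidates))  # 3 surnames x 26 letters = 78 candidates
```

Each candidate is then submitted to the login or password-reset form, and the differing responses sort the list into valid and invalid usernames.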


An effective remediation would be to have the server respond with a generic message that does not indicate which field is incorrect. When the response does not indicate whether the username or the password is incorrect, the malicious actor cannot infer whether usernames are valid. Figure 3 shows an example of a generic error response:



Figure 3: Generic Error Response
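The generic-response remediation can be sketched as follows. The user store and messages here are hypothetical, and a real system would verify a salted password hash rather than compare plaintext:

```python
# Hypothetical in-memory user store for illustration only; a real
# system would check a salted password hash, never plaintext.
USERS = {"rapid7": "correct-horse-battery-staple"}

GENERIC_ERROR = "The username or password is incorrect."

def login_response(username: str, password: str) -> str:
    if USERS.get(username) == password:
        return "Login successful."
    # Identical message for an unknown user and for a bad password,
    # so the response leaks nothing about username validity.
    return GENERIC_ERROR

print(login_response("nosuchuser", "x") == login_response("rapid7", "wrong"))  # True
```

Because both failure paths produce byte-identical responses, the attacker can no longer sort usernames into valid and invalid lists from the message alone.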

The application’s Forgot Password page can also be vulnerable to this kind of attack. Normally, when a user forgets their password, they enter a username in the field and the system sends an email with instructions to reset their password. A vulnerable system will also reveal that the username does not exist, as shown in Figure 4:



Figure 4: Username Does Not Exist


Again, the response from the server should be generic and simply tell the user that, if the username is valid, the system will send an instructional email to the address on record. Figure 5 shows an example of a message that a server could use in its response:


Figure 5: Example Message


Sometimes, user enumeration is not as simple as a server responding with text on the screen. It can also be based on how long it takes a server to respond. A server may take one amount of time to respond for a valid username and a very different (usually longer) amount of time for an invalid username. For example, Outlook Web Access (OWA) often displays this type of behavior. Figure 6 shows this type of attack, using a Metasploit login module.



Figure 6: Invalid and Valid User


In this example, the ‘FAILED LOGIN’ for the user 'RAPID7LAB\admin' took more than 30 seconds to respond and it resulted in a redirect. However, the user 'RAPID7LAB\administrator' got the response ‘FAILED LOGIN, BUT USERNAME IS VALID’ in a fraction of a second. When the response includes ‘BUT USERNAME IS VALID’, this indicates that the username does exist, but the password was incorrect. Due to the explicit notification about the username, we know that the other response, ‘FAILED LOGIN’, is for a username that is not known to the system.


How would you remediate this? One way could be to have the application pad its responses with a random amount of time, masking the noticeable difference. This might require some additional coding in the application, and may not be possible with a proprietary application.
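One way to implement the padding idea is a wrapper that holds every response to a minimum floor; padding to a constant floor is a common alternative to purely random padding, and the threshold here is arbitrary:

```python
import time

MIN_RESPONSE_SECONDS = 0.5  # arbitrary floor for this sketch

def padded(handler):
    """Ensure a handler always takes at least MIN_RESPONSE_SECONDS,
    hiding fast-path timing differences between valid and invalid users."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = handler(*args, **kwargs)
        elapsed = time.perf_counter() - start
        if elapsed < MIN_RESPONSE_SECONDS:
            time.sleep(MIN_RESPONSE_SECONDS - elapsed)
        return result
    return wrapper

@padded
def check_login(username, password):
    # A fast "no such user" path would otherwise leak username validity.
    return "FAILED LOGIN"
```

With the wrapper in place, both the quick invalid-user path and the slower valid-user path complete in roughly the same wall-clock time, removing the signal shown in Figure 6.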


Alternatively, you could require Two-Factor Authentication (2FA). While the application may still be vulnerable to user enumeration, the malicious actor would have more trouble reaching their end goal of getting valid sets of credentials. Even if a malicious actor can generate user lists and correctly guess credentials, the SMS token may become an unbeatable obstacle that forces the malicious actor to seek easier targets.


One other way to block user enumeration is with a Web Application Firewall (WAF). To perform user enumeration, the malicious actor needs to submit many different usernames; a legitimate user should never need to send hundreds or thousands of them. A good WAF will detect and block a single IP address making many of these requests. Some WAFs will drop these requests entirely, while others will issue a negative response regardless of whether the request is valid.
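The WAF behavior just described can be approximated with a simple per-IP counter. The threshold is arbitrary, and a real WAF would use a sliding time window rather than a lifetime count:

```python
from collections import Counter

ATTEMPT_LIMIT = 50  # arbitrary threshold for this sketch
_attempts = Counter()

def allow_request(ip: str) -> bool:
    """Return False once an IP exceeds the login-attempt threshold."""
    _attempts[ip] += 1
    return _attempts[ip] <= ATTEMPT_LIMIT

# An IP hammering the login form with many usernames gets cut off.
results = [allow_request("198.51.100.7") for _ in range(60)]
print(results.count(True))  # 50 allowed, the remaining 10 blocked
```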


We recommend testing any part of the web application where user accounts are checked by a server for validity, and looking for different types of responses from the server. A different response can be as obvious as an error message or the amount of time a server takes to respond, or as subtle as an extra line of code in a response or a different file being included. Adding 2FA or padding the response time can prevent these types of attacks, since any of the differences discussed here could tip off a malicious actor as to whether a username is valid.


You can read about Rapid7's web application security testing solutions here.


National Exposure Index 2017

Posted by todb Employee Jun 14, 2017

Today, Rapid7 is releasing the second National Exposure Index, our effort to quantify the exposure that nations are taking on by offering public services on the internet—not just the webservers (like the one hosting this blog), but also unencrypted POP3, IMAPv4, telnet, database servers, SMB, and all the rest. By mapping the virtual space of the internet to the physical space where the machines hosting these services reside, we can provide greater understanding of each nation’s internet exposure to both active attack and passive monitoring. Even better, we can point to specific regions of the world where we can make real progress on reducing overall risk to critical internet-connected infrastructure.


Measuring Exposure

When we first embarked on this project in 2016, we set out to answer some fundamental questions about the composition of the myriad services being offered on the internet. While everyone knows that good old HTTP dominates internet traffic, we knew that there are plenty of other services being offered that have no business being on the modern internet. Telnet, for example, is a decades-old remote administration service that offers nothing in the way of encryption and is often configured with default settings, a fact exploited by the devastating Mirai botnet attacks of last October. But, as security professionals and network engineers, we couldn’t say just how many telnet servers were out there. So we counted them.


Doing Something About It

We know today that there are about 10 million apparent telnet servers on the internet, but that fact alone doesn’t do us a lot of good. Sure, it’s down 5 million from last year—a 33% drop that can be attributed almost entirely to the Mirai attacks—but this was the result of a disaster that caused significant disruption, not a planned phase-out of an old protocol.


So, instead of just reporting that there are millions of exposed, insecure services on the global internet, we can point to specific countries where these services reside. This is far more useful, since it helps the technical leadership in those specific countries get a handle on what their exposure is so they can do something about it.


By releasing the National Exposure Index on an annual basis, we hope to track the evolving internet, encourage the wide-scale deployment of more modern, secure, appropriate services, and enable those people in positions of regional authority to better understand their existing, legacy exposure.


Mapping Exposure

We’re pretty pleased with how the report turned out, and encourage you to get a hold of it here. We have also created an interactive, global map so you can cut to the statistics that are most important for you and your region. In addition, we’re releasing the data that backs the report—which we gathered using Rapid7's Project Sonar—in case you’re the sort who wants to do your own investigation. Scanning the entire internet takes a fair amount of effort, and we want to encourage a more open dialogue about the data we've gathered. You’re welcome to head on over to and pick up our raw scanning data, as well as our GitHub repo of the summary data that went into our analysis. If you’d like to collaborate on cutting this data in new and interesting ways, feel free to drop us a line and we’ll be happy to nerd out on all things National Exposure with you.

I’ve never been a big fan of – or a believer in the value of – security policies. Sure, they’re necessary for setting expectations, and auditors want to see them. They can also serve as a sort of insurance policy to fall back on when an unexpected security “event” occurs. But, at the end of the day, security policies often contribute minimal value to the overall information security function. As I’ve seen time and time again: many organizations have great paperwork, but their security program stinks.


If you’re reading this blog, you’re no doubt somehow responsible for information security in your business. You may not be responsible for everything security-related, but odds are good that the things you’re doing, such as vulnerability scanning, penetration testing, and incident response, are an integral part of your overall security program. That’s all good, but there’s an elephant in the room: your security policies. You know the ones – those documents scattered about that make your information security program look better than it actually is.


Don’t get me wrong. I’m not trying to be critical of or shun your efforts. I know what you’re up against, but I also see what the research shows and what happens when people have a false sense of security. When it comes to policies, one of the best things you can do, starting today, is to stop relying on them to fix your security problems. And tell your peers and management to do the same thing. You can tell them I said so. I’ve heard it a thousand times: “We have a policy for that.” The assumption is since there’s a policy, then all is well in IT. That’s hardly the case.


Looking at areas across your information security program—from security assessments to patch management to mobile computing—you may have a policy mandating this or that. Even a policy as basic as acceptable computer usage that’s known by everyone is likely in place. They’re probably all very well-written and would satisfy the requirements of many a gullible auditor. But how is it working for you? Probably not as well as you had hoped—or as well as management believes it is.


The following scenarios describe very common misconceptions about enterprise security:

  • We have a password policy. Yet passwords across your systems are nowhere near compliant. Executives are often exempt. Some passwords are unenforceable. Numerous risks go unaddressed because strong passwords are assumed to be in place, but they’re really not.
  • We have an acceptable usage policy. But users don’t know about it. There’s no tracking or accountability for violations. There’s little to no integration with user awareness and training initiatives. Why even state what the rules are when they’re ignored?
  • We have a data backup policy. However, desktops, mobile devices, and even IoT and cloud data are being overlooked. Shadow IT-borne systems are completely out of the loop. The lack of backups of some of the most critical data is unknown until a ransomware infection hits.
  • We have a patch management policy. Still, patches are not being applied (especially third-party patches for Java and Adobe products) and malware and related exploits are still occurring.


You see where I’m going with this.


The interesting thing is that most of the IT and security pros I work with understand the reality of the policy vs. practice disconnect. It’s often people outside of those circles who assume that all is well in security because, after all, it’s documented. I’m convinced that if you’re going to have a truly resilient security environment, you’re going to have to move beyond the paperwork. See this disconnect for what it is. Bring it up in security discussions. Use it as leverage to get more support for your security initiatives. Ask for help. Whatever you do, don’t assume that policies will keep your environment locked down. Assumptions about these types of security challenges make fools of us all.

This is a continuation of our CIS critical security controls blog series.


Workstations form the biggest threat surface in any organization. The CIS Critical Security Controls include workstation and user-focused endpoint security in several of the controls, but Control 8 (Malware Defenses) is the only control to strictly focus on antivirus and malware across the organization. It's pretty important, too: Malware networks are often run by organized criminals who profit from both the stolen identities of end users and access to the extensive computing and network resources that malware is designed to exploit. To a lesser extent, malware is also used for corporate and nation-state espionage, acts of vandalism, strategic attacks on infrastructure, and just about any circumstance where an attacker wants to compromise multiple hosts with minimal effort. Simply put: Malware is a big problem, and it puts everyone at risk. Much like disease prevention, malware prevention relies on a combination of "computing hygiene" and herd immunity; if it's done right, we all have a part in reducing the impact of malware and the risks associated with it.


What this control covers

Control 8 covers malware and antivirus protection at system, network, and organizational levels. It isn’t limited to workstations, since even servers that don't run Windows are regularly targeted (and affected) by malware. Control 8 should be used to assess infrastructure, IoT, mobile devices, and anything else that can become a target for malicious software—not just endpoints.


This control has been specifically included in version 6 of the CIS Critical Controls in a way that focuses on preventing the spread of worms and other self-propagating malicious code, but it's important to note that the Malware Defenses control is actually just a small subset of a good malware protection program. Following Control 8 will significantly improve any kind of incident response program you're developing, and it'll also help with the "top five" CIS controls, since it's dependent on them for effective implementation.



Antivirus not dead, sun still rises: a note on terminology

The term “malware” can be a little misleading, because it's often used to only describe viruses, or a specific subset of all of the malicious software used to attack information systems. The generally accepted definition of malware includes viruses, worms, ransomware, and anything that is purpose-built to be malicious software; that is what I'm going to stick with here. The nice part of this, though, is that many of the controls and mitigation techniques for viruses also cover typical malware. Another added bonus is that any decent antivirus software still scans for most malware signatures and malicious behavior. Despite claims to the contrary, antivirus is not dead; it just grew up.


How to implement it

Centralize, automate, and configure

The first step is a pretty big one, but the good news is that it’s also fairly easy if you’ve even partially implemented the “big five” critical controls. Asset configuration and management tools, as well as continual patching and careful system configuration, can go a long way in stopping most malware infections. This includes ransomware like CryptoLocker, WannaCrypt, and others, since they rely on poorly-configured or unpatched systems to spread.

Simply put: You probably already have a good start if you’re using any centrally managed antivirus service and managing your workstation and endpoint configuration. Central management of antivirus and antimalware clients is pretty important, since the logs generated by these systems can be used to aid in the incident detection process and generally help with cleanup and response. It’s also important for the obvious reasons: Antivirus still protects against viruses, and central management means you can control precisely how.


Log your incidents, and track them over time

As Cindy Jones discussed in an earlier post on logging, tracking and reporting incident information at a log level is pretty important. It also acts as a good indicator of network health and security. Enterprise-level antivirus and antimalware solutions usually have some form of logging facility, and this—in concert with other logs from firewalls, network instruments, and critical systems—will give the security team a clear picture of what’s going on inside the network. Logging both detection and response information from your antivirus is a good way to help color in that picture. Aside from detection statistics, it is critical to log what was done with the malware when it was detected, and where it came from. Unfortunately, reviewing individual incidents can be like drinking from a fire hose, so rather than relying on alerts for every incident, track the rate of change and the types of infection until you need to look at individual alerts or systems. If you don’t already have one, you will need to build a service that can monitor the number of infected and damaged machines in order to give you a clear picture of where the malware is.
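Tracking the rate of change rather than individual alerts can be sketched with a small aggregation script. This is a minimal illustration, assuming a hypothetical log format of (timestamp, hostname, threat family, action); real enterprise antivirus exports vary, so the parsing would need to match your product.

```python
from collections import Counter
from datetime import datetime

# Hypothetical AV log records: (timestamp, hostname, threat_family, action).
events = [
    ("2017-06-01T09:14:00", "ws-041", "CryptoLocker", "quarantined"),
    ("2017-06-01T11:02:00", "ws-107", "CryptoLocker", "quarantined"),
    ("2017-06-02T08:30:00", "ws-041", "Adware.Gen", "cleaned"),
]

def weekly_counts(events):
    """Bucket detections by ISO-style week so trends stand out over single alerts."""
    counts = Counter()
    for ts, host, family, action in events:
        week = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S").strftime("%Y-W%W")
        counts[(week, family)] += 1
    return counts

def infected_hosts(events):
    """Distinct machines with at least one detection: a rough 'where is it' view."""
    return {host for _, host, _, _ in events}
```

Comparing `weekly_counts` output week over week gives the rate-of-change signal described above, while `infected_hosts` feeds the count of infected machines.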


Antimalware everything, all the time

As I mentioned at the top of this post, network devices and other "non-computer" elements of your organization's information systems are vulnerable to malware. At an organization and policy level, you should be making it clear that everything on your network needs to have an antivirus installed, and anything that is run by your IT team should have an enterprise antivirus client that reports back to you. This is helpful for a few reasons: you will have visibility into the systems with the antivirus or endpoint protection client running, and you also can ensure that you’re not granting network access to devices that may be carrying malware. While it may be tempting to ignore some systems, and some vendors don't make clients for some OSes, it's a good idea to aim for as much antivirus/antimalware coverage as possible.


OS-level malware, removable media, installation and tampering detection

Malware can show up from nearly anywhere, and removable media is a major source of infection. It’s critical that you set up your antivirus policy to scan removable media before it’s allowed on anything, and limit who can install software. Removing local administrator (root) privileges also reduces the risk of user-installed software or malware attacking critical system objects or exploiting administrator rights. I’ve personally responded to ransomware cases where the only thing that limited the damage to the organization (and the end user) was the lack of local administrative privilege on the system that got infected. While this isn’t mentioned in Control 8, it’s brought up in Controls 14 and 5.


Watch your edges

Network-level scanning is definitely helpful, especially if you have the capacity to spot command and control traffic, malicious DNS and URL requests, and other indicators. It’s less helpful if you’re just replicating the work that your antivirus clients are already doing. In this case, IDS and logging are also going to play a huge role—specifically, log session lengths, DNS requests, and traffic patterns to look for communication with command and control networks used by known malware. Session length logging can also give hints about data exfiltration, and looking at things like failed attempts to authenticate on services may also act as a virus or worm attack indicator. Looking at inbound and outbound network traffic from unusual IP addresses, or known bad actor addresses, will also help in identifying malware patterns as they emerge and localizing any response activities.
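The DNS and session-length checks above can be sketched in a few lines. This is an illustrative fragment, not a product feature: the blocklisted domains and the session threshold are made-up placeholders, which in practice would come from threat intelligence feeds and your own traffic baselines.

```python
# Hypothetical blocklist and threshold for illustration only.
KNOWN_C2_DOMAINS = {"evil-c2.example", "beacon.example"}
MAX_NORMAL_SESSION_SECONDS = 3600

def flag_dns_query(domain):
    # Match the queried name or any parent domain against the blocklist,
    # so "update.beacon.example" still trips on "beacon.example".
    parts = domain.lower().split(".")
    return any(".".join(parts[i:]) in KNOWN_C2_DOMAINS for i in range(len(parts)))

def flag_session(duration_seconds):
    # Unusually long sessions can indicate beaconing or slow exfiltration.
    return duration_seconds > MAX_NORMAL_SESSION_SECONDS
```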


One last word on malware prevention

While the CIS doesn't include this in the top five controls, I think Control 8 is still one of the most important. Good malware prevention actually does as much to help other people as it does for you and your network; you're cutting down the rate of transmission and infection and helping reduce the threat created by the people who use malware to commit crimes. Robust malware prevention techniques and programs actively reduce the threat to legacy systems and "high risk" networks that can't patch their systems for one reason or another. Fighting malware requires that we treat it like measles or smallpox: vaccinate against it, clean up infections, and monitor populations at risk. While it's often inconvenient or difficult, the end result is safer computing for everyone.






Banner photo courtesy of the author- Safety notice from Angus Railyards (now a grocery store), Montreal.
Flu virus TEM image from Wikimedia Commons courtesy of the CDC's fantastic Public Health Image Library (PHIL)

Forestry Swing machine (and logs) in Kaibab National Forest, AZ also from Wikimedia commons.

Inoculation picture courtesy of The University of Victoria's Flickr feed, originally from The Montreal Star.

This post describes a vulnerability in Yopify (a plugin for various popular e-commerce platforms), as well as remediation steps that have been taken. Yopify leaks the first name, last initial, city, and recent purchase data of customers, all without user authorization. This poses a significant privacy risk for customers. This vulnerability is characterized as: CWE-213 (Intentional Information Disclosure).


Product Description

Yopify is a notification plugin for a variety of e-commerce platforms, including BigCommerce, WooCommerce, Shopify, and LemonStand. The product is produced by Centire, an IT company based in India. Yopify provides a feature where website visitors are, every few seconds, presented with a popup containing information about one of the last 50 purchases from the site, such as the one shown here:


Screen Shot 2017-03-21 at 10.02.28 PM.jpeg


The purported use case is that by demonstrating existing customer activity, it replicates the public bustle and hubbub of a real-world shop, and makes potential customers more engaged. As you can see, the example above contains personal information about the customer (first name, last initial, city).


The various plugin sites for major e-commerce platforms show over 300 reviews of Yopify, which suggests that the number of exploitable sites is at least in the hundreds, and perhaps thousands.



This vulnerability was discovered by Oliver Keyes, a Rapid7, Inc. senior data scientist. This advisory was prepared in accordance with Rapid7's vulnerability disclosure policy.



Yopify works by having the e-commerce site load a JavaScript widget from the Yopify servers, which contains both the code to generate the UI element and the data used to populate it, stored as JSON. This widget does not require any authorization beyond a site-specific API key, which is embedded in the e-commerce site's source code, and is easily extractable with a regular expression.
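Extracting the key really is a one-regex job. The snippet below is a sketch of that claim, not the actual attack: the page markup, the `api_key` parameter name, and the widget URL are all hypothetical, with only the 32-character hex key format taken from the example ID shown later in this post.

```python
import re

# Hypothetical page source; the real markup varies by shop platform.
page_source = (
    '<script src="https://widget.example/loader.js'
    '?api_key=3edb675e08e9c7fe22d243e44d184cdf"></script>'
)

# Assume the site key is a 32-character lowercase hex string.
match = re.search(r'api_key=([0-9a-f]{32})', page_source)
if match:
    api_key = match.group(1)
```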


The result is that by scraping a customer site to grab the API key and then simply running something like:


curl ''


where `3edb675e08e9c7fe22d243e44d184cdf` is the site ID and `t` is a cache buster, someone can remotely grab the data pertaining to the last 50 customers. This is updated as purchases are made. Thus an attacker can poll every few hours for a few days/weeks/months and build up a database of an e-commerce site's customer set and associated purchasers.
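The polling-and-accumulation step is equally simple. This is a minimal sketch under stated assumptions: `WIDGET_URL` is a placeholder standing in for the widget endpoint (deliberately not reproduced here), and the payload is assumed to be a JSON list of purchase records keyed by customer name.

```python
import json
import time
import urllib.request

# Placeholder; the real endpoint takes the scraped site key and a cache buster.
WIDGET_URL = "https://widget.example/data?key=SITE_KEY&t={cache_buster}"

# name -> purchase records, accumulated across polls into a customer database.
seen_customers = {}

def ingest(records):
    for record in records:
        seen_customers.setdefault(record["name"], []).append(record)

def poll_once():
    url = WIDGET_URL.format(cache_buster=int(time.time()))
    with urllib.request.urlopen(url) as resp:
        ingest(json.load(resp))  # widget payload: the last 50 purchases
```

Run `poll_once` on a schedule and, since only the last 50 purchases are exposed at a time, the accumulated `seen_customers` map grows into exactly the customer database described above.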


The data exposed to this polling was, however, far more extensive than the data displayed. While the pop-up only provides first name and last initial, the JSON blob originally contained first and last names in their entirety, along with city-level geolocation. While the casual online customer wouldn't have seen that, a malicious technical user could have trivially gained enough information to potentially target specific users of specific niche e-commerce sites.


Vendor Statement

We were unaware of this security threat when we created the app, however have taken the necessary steps alongside Shopify and Rapid7 to come to a satisfactory resolution where users privacy exposure is reduced.



Minimally, we believe users should be asked if they want their name, location, and purchase information shown to complete strangers. Sending and display of the information should be off by default to protect customer privacy and safety.

On May 18, Yopify responded to the bug report by Base64-encoding the JSON blob. This obfuscation may stop some unsophisticated users from finding the information, but it is in no way encrypted. However, by May 20 Yopify did make an additional update and the client no longer receives the full last name even in the JSON, but only the first letter of the last name. This is an improvement as it does limit the exposure of user data, and makes identifying individuals more difficult.
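To see why Base64 encoding is obfuscation rather than protection: it reverses in a single standard-library call. The record below is mocked up to resemble the kind of data exposed, not actual Yopify output.

```python
import base64
import json

# Mocked-up example of the kind of record exposed (not real customer data),
# encoded the way the May 18 change would have served it.
obfuscated = base64.b64encode(json.dumps(
    [{"first_name": "Jane", "last_initial": "D", "city": "Toronto"}]
).encode())

# "Decryption" is one line; Base64 offers no confidentiality.
recovered = json.loads(base64.b64decode(obfuscated))
```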

However, as personal information is still sent and displayed, we recommend that until Yopify adds opt-in functionality, users with privacy concerns avoid using e-commerce sites with Yopify installed. Users will know it is installed given the frequent pop-ups displaying other customers' information when they browse an affected site. In addition, we encourage e-commerce store owners to request Yopify add this functionality, or remove the widget in order to protect their customers' privacy.


Disclosure Timeline

This vulnerability advisory was prepared in accordance with Rapid7's vulnerability disclosure policy.

  • Tue, Feb 28, 2017: Discovered by Rapid7's Oliver Keyes
  • Wed, Mar 22, 2017: Initial vendor contact (Yopify)
  • Thu, Mar 30, 2017: Additional vendor contact (Shopify), as no response received from Yopify yet
  • Thu, Mar 30, 2017: Response from Shopify
  • Fri, Mar 31, 2017: Details sent to, who forwarded information to Yopify
  • Wed, Apr 5, 2017: Additional vendor contact (Centire). Sent reminder to Yopify, and clarification/update to Shopify.
  • Fri, Apr 7, 2017: Additional disclosure to
  • Fri, Apr 7, 2017: Disclosed to CERT/CC
  • Fri, Apr 14, 2017: Reached back out to, no response
  • Wed, Apr 26, 2017: CVE-2017-3211 assigned by CERT/CC
  • Wed, May 17, 2017: Reached back out to Yopify and Centire
  • Thu, May 18, 2017: First response from Yopify; JSON Base64 encoding change deployed
  • Sat, May 20, 2017: Additional patch deployed by Yopify: full last name no longer sent to client, only first letter
  • Wed, May 31, 2017: Public disclosure
