On Jun. 22, the US Copyright Office released its long-awaited study on Sec. 1201 of the Digital Millennium Copyright Act (DMCA), and it has important implications for independent cybersecurity researchers. Mostly the news is very positive. Rapid7 advocated extensively for researcher protections to be built into this report, submitting two sets of detailed comments—see here and here—to the Copyright Office with Bugcrowd, HackerOne, and Luta Security, as well as participating in official roundtable discussions. Here we break down why this matters for researchers, what the Copyright Office's study concluded, and how it matches up to Rapid7's recommendations.

 

What is DMCA Sec. 1201 and why does it matter to researchers?

Sec. 1201 of the DMCA prohibits circumventing technological protection measures (TPMs, like encryption, authentication requirements, region coding, user agents) to access copyrighted works, including software, without permission of the owner. That creates criminal penalties and civil liability for independent security research that does not obtain authorization for each TPM circumvention from the copyright holders of software. This hampers researchers' independence and flexibility. While the Computer Fraud and Abuse Act (CFAA) is more famous and feared by researchers, liability for DMCA Sec. 1201 is arguably broader because it applies to accessing software on devices you may own yourself while CFAA generally applies to accessing computers owned by other people.

 

To temper this broad legal restraint on unlocking copyrighted works, Congress built in two types of exemptions to Sec. 1201: permanent exemptions for specific activities, and temporary exemptions that the Copyright Office can grant every three years in its "triennial rulemaking" process. The permanent exemption to the prohibition on circumventing TPMs for security testing is quite limited – in part because researchers are still required to get prior permission from the software owner. The temporary exemptions go beyond the permanent exemptions.

 

In Oct. 2015 the Copyright Office granted a very helpful exemption to Sec. 1201 for good faith security testing that circumvents TPMs without permission. However, this temporary exemption will expire at the end of the three-year exemption window. In the past, once a temporary exemption expired, advocates had to start from scratch in re-applying for another temporary exemption. The current exemption is set to expire in Oct. 2018, and the renewal process will ramp up in the fall of this year.

 

Copyright Office study and Rapid7's recommendations

The Copyright Office announced a public study of Sec. 1201 in Dec. 2015, undertaking it to weigh legislative and procedural reforms to Sec. 1201, including the permanent exemptions and the three-year rulemaking process. The Copyright Office solicited two sets of public comments and held a roundtable discussion to obtain feedback and recommendations for the study. At each stage, Rapid7 provided recommendations on reforms to empower good faith security researchers while preserving copyright protection against infringement – though, it should be noted, there were several commenters opposed to reforms for researchers on IP protection grounds.

 

Broadly speaking, the conclusions reached in the Copyright Office's study are quite positive for researchers and largely tracked the recommendations of Rapid7 and other proponents of security research. Here are four key highlights:

 

  1. Authorization requirement: As noted above, the permanent exemption for security testing under Sec. 1201(j) is limited because it still requires researchers to obtain authorization to circumvent TPMs. Rapid7's recommendation is to remove this requirement entirely because good faith security research does not infringe copyright, yet an authorization requirement compromises independence and speed of research. The Copyright Office's study recommended [at pg. 76] that Congress make this requirement more flexible or remove it entirely. This is arguably the study's most important recommendation for researchers.


  2. Multi-factor test: The permanent exemption for security testing under Sec. 1201(j) also partially conditions liability protection on researchers when the results are used "solely" to promote the security of the computer owner, and when the results are not used in a manner that violates copyright or any other law. Rapid7's recommendations are to remove "solely" (since research can be performed for the security of users or the public at large, not just the computer owner), and not to penalize researchers if their research results are used by unaffiliated third parties to infringe copyright or violate laws. The Copyright Office's study recommended [at pg. 79] that Congress remove the "solely" language, and either clarify or remove the provision penalizing researchers when research results are used by third parties to violate laws or infringe copyright.


  3. Compliance with all other laws: The permanent exemption for security testing only applies if the research does not violate any other law. Rapid7's recommendation is to remove this caveat, since research may implicate obscure or wholly unrelated federal/state/local regulations, those other laws have their own enforcement mechanisms to pursue violators, and removing liability protection under Sec. 1201 would only have the effect of compounding the penalties. Unfortunately, the Copyright Office took a different approach, tersely noting [at pg. 80] that it is unclear whether the requirement to comply with all other laws impedes legitimate security research. The Copyright Office stated they welcome further discussion during the next triennial rulemaking, and Rapid7 may revisit this issue then.


  4. Streamlined renewal for temporary exemptions: As noted above, temporary exemptions expire after three years. In the past, proponents have had to start from scratch to renew the temporary exemption – a process that involves structured petitions, multiple rounds of comments to the Copyright Office, and countering the arguments of opponents to the exemption. For researchers who want to renew the temporary security testing exemption but lack resources and regulatory expertise, this is a burdensome process. Rapid7's recommendation is for the Copyright Office to presume renewal of previously granted temporary exemptions unless there is a material change in circumstances that no longer justifies granting the exemption. In its study, the Copyright Office committed [at pg. 143] to streamlining the paperwork required to renew already granted temporary exemptions. Specifically, the Copyright Office will ask parties requesting renewal to submit a short declaration attesting to the continuing need for an exemption and to whether there has been any material change in circumstances voiding that need; the Copyright Office will then consider renewal based on the evidentiary record and comments from the rulemaking in which the temporary exemption was originally granted. Opponents of renewing exemptions, however, must start from scratch in submitting evidence that a temporary exemption harms the exercise of copyright. 


 

Conclusion—what's next?

In the world of policy, change typically occurs over time in small (often hard-won) increments before becoming enshrined in law. The Copyright Office's study is one such increment. For the most part, the study is making recommendations to Congress, and it will ultimately be up to Congress (which has its own politics, processes, and advocacy opportunities) to adopt or decline these recommendations. The Copyright Office's study comes at a time when the House Judiciary Committee is broadly reviewing copyright law with an eye towards possible updates. However, copyright is a complex and far-reaching field, and it is unclear when Congress will actually take action. Nonetheless, the Copyright Office's opinion on these issues will carry significant weight in Congress' deliberations, so it would have been a heavy blow if the Copyright Office’s study had instead called for tighter restrictions on security research.

 

Importantly, the Copyright Office's new, streamlined process for renewing already granted temporary exemptions will take effect without Congress' intervention. The streamlined process will be in place for the next "triennial rulemaking," which begins in late 2017 and concludes in 2018, and which will consider whether to renew the temporary exemption for security research. This is a positive, concrete development that will reduce the administrative burden of applying for renewal and increase the likelihood of continued protections for researchers.

 

The Copyright Office's study noted that "Independent security test[ing] appears to be an important component of current cybersecurity practices". This recognition and subsequent policy shifts on the part of the Copyright Office are very encouraging. Rapid7 believes that removing legal barriers to good faith independent research will ultimately strengthen cybersecurity and innovation, and we hope to soon see legislative reforms that better balance copyright protection with legitimate security testing.

In April 2017, President Trump issued an executive order directing a review of all trade agreements. This process is now underway: The United States Trade Representative (USTR) – the nation's lead trade agreement negotiator – formally requested public input on objectives for the renegotiation of the North American Free Trade Agreement (NAFTA). NAFTA is a trade agreement between the US, Canada, and Mexico that covers a huge range of topics, from agriculture to healthcare.

 

Rapid7 submitted comments in response, focusing on 1) preventing data localization, 2) alignment of cybersecurity risk management frameworks, 3) protecting strong encryption, and 4) protecting independent security research.

 

Rapid7's full comments on the renegotiation of NAFTA are available here.

 

1) Preserving global free flow of information – preventing data localization

Digital goods and services are increasingly critical to the US economy. By leveraging cloud computing, digital commerce offers significant opportunities to scale globally for individuals and companies of all sizes – not just for large companies or tech companies, but for any transnational company that stores customer data.

 

However, regulations abroad that disrupt the free flow of information, such as "data localization" (requirements that data be stored in a particular jurisdiction), impede both trade and innovation. Data localization erodes the capabilities and cost savings that cloud computing can provide, while adding the significant costs and technical burdens of segregating data collected from particular countries, maintaining servers locally in those countries, and navigating complex geography-based laws. The resulting fragmentation also undermines the fundamental concept of a unified and open global internet.

 

Rapid7's comments [pages 2-3] recommended that NAFTA should 1) Prevent compulsory localization of data, and 2) Include an express presumption that governments would minimize disruptions to the flow of commercial data across borders.

 

2) Promote international alignment of cybersecurity risk management frameworks

When NAFTA was originally negotiated, cybersecurity was not the central concern that it is today. Cybersecurity is presently a global affair, and the consequences of malicious cyberattack or accidental breach are not constrained by national borders.

 

Flexible, comprehensive security standards are important for organizations seeking to protect their systems and data. International interoperability and alignment of cybersecurity practices would benefit companies by enabling them to better assess global risks, make more informed decisions about security, hold international partners and service providers to a consistent standard, and ultimately better protect global customers and constituents. Stronger security abroad will also help limit the spread of malware contagion to the US.

 

We support the approach taken by the National Institute of Standards and Technology (NIST) in developing the Cybersecurity Framework for Critical Infrastructure. The process was open, transparent, and carefully considered the input of experts from the public and private sector. The NIST Cybersecurity Framework is now seeing impressive adoption among a wide range of organizations, companies, and government agencies – including some critical infrastructure operators in Canada and Mexico.

 

Rapid7's comments [pages 3-4] recommended that NAFTA should 1) recognize the importance of international alignment of cybersecurity standards, and 2) require the Parties to develop a flexible, comprehensive cybersecurity risk management framework through a transparent and open process.

 

3) Protect strong encryption

Reducing opportunities for attackers and identifying security vulnerabilities are core to cybersecurity. The use of encryption and security testing are key practices in accomplishing these tasks. International regulations that require weakening of encryption or prevent independent security testing ultimately undermine cybersecurity.

 

Encryption is a fundamental means of protecting data from unauthorized access or use, and Rapid7 believes companies and innovators should be able to use the encryption protocols that best protect their customers and fit their service model – whether that protocol is end-to-end encryption or some other system. Market access rules requiring weakened encryption would create technical barriers to trade and put products with weakened encryption at a competitive disadvantage with uncompromised products. Requirements to weaken encryption would impose significant security risks on US companies by creating diverse new attack surfaces for bad actors, including cybercriminals and unfriendly international governments.

 

Rapid7's comments [page 5] recommended that NAFTA forbid Parties from conditioning market access for cryptography in commercial applications on the transfer of decryption keys or alteration of the encryption design specifications.

 

4) Protect independent security research

Good faith security researchers access software and computers to identify and assess security vulnerabilities. To perform security testing effectively, researchers often need to circumvent technological protection measures (TPMs) – such as encryption, login requirements, region coding, user agents, etc. – controlling access to software (a copyrighted work). However, this activity can be chilled by Sec. 1201 of the Digital Millennium Copyright Act (DMCA) of 1998, which forbids circumvention of TPMs without the authorization of the copyright holder.

 

Good faith security researchers do not seek to infringe copyright, or to interfere with a rightsholder's normal exploitation of protected works. The US Copyright Office recently affirmed that security research is fair use and granted this activity, through its triennial rulemaking process, a temporary exemption from the DMCA's requirement to obtain authorization from the rightsholder before circumventing a TPM to safely conduct security testing on lawfully acquired (i.e., not stolen or "borrowed") consumer products.

 

Some previous trade agreements have closely mirrored the Digital Millennium Copyright Act's (DMCA) prohibitions on unauthorized circumvention of TPMs in copyrighted works. This approach replicates internationally the overbroad restrictions on independent security testing that the US is now scaling back. Newly negotiated trade agreements should aim to strike a more modern and evenhanded balance between copyright protection and good faith cybersecurity research.

 

Rapid7's comments [page 6] recommended that any anti-circumvention provisions of NAFTA should be accompanied by provisions exempting security testing of lawfully acquired copyrighted works.

 

Better trade agreements for the Digital Age?

Data storage and cybersecurity have undergone considerable evolution since NAFTA was negotiated more than a quarter century ago. To the extent that renegotiation may better address trade issues related to digital goods and services, we view the modernization of NAFTA and other agreements as potentially positive. The comments Rapid7 submitted regarding NAFTA will likely apply to other international trade agreements as they come up for renegotiation. We hope the renegotiations result in a broadly equitable and beneficial trade regime that reflects the new realities of the digital economy.

WannaCry Overview

Last week the WannaCry ransomware worm, also known as Wanna Decryptor, Wanna Decryptor 2.0, WNCRY, and WannaCrypt, started spreading around the world, holding computers for ransom at hospitals, government offices, and businesses. To recap: WannaCry exploits a vulnerability in the Windows Server Message Block (SMB) file sharing protocol. It spreads to unpatched devices directly connected to the internet and, once inside an organization, to those machines and devices behind the firewall as well. For full details, check out the blog post: Wanna Decryptor (WannaCry) Ransomware Explained.

 

Since last Friday morning (May 12), there have been several other interesting posts about WannaCry from around the security community. Microsoft provided specific guidance to customers on protecting themselves from WannaCry. MalwareTech wrote about how registering a specific domain name triggered a kill switch in the malware, stopping it from spreading. Recorded Future provided a very detailed analysis of the malware’s code.

 

However, the majority of reporting about WannaCry in the general news has been that while MalwareTech’s domain registration has helped slow the spread of WannaCry, a new version that avoids that kill switch will be released soon (or is already here) and that this massive cyberattack will continue unabated as people return to work this week.

 

In order to understand these claims and monitor what has been happening with WannaCry, we have used data collected by Project Sonar and Project Heisenberg to measure the population of SMB hosts directly connected to the internet, and to learn about how devices are scanning for SMB hosts.

 

Part 1: In which Rapid7 uses Sonar to measure the internet

Project Sonar regularly scans the internet on a variety of TCP and UDP ports; the data collected by those scans is available for you to download and analyze at scans.io. WannaCry exploits a vulnerability in devices running Windows with SMB enabled, which typically listens on port 445. Using our most recent Sonar scan data for port 445 and the recog fingerprinting system, we have been able to measure the deployment of SMB servers on the internet, differentiating between those running Samba (an open-source implementation of the SMB protocol common on Linux and other Unix-like systems) and actual Windows devices running vulnerable versions of SMB.
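
To make that fingerprinting step a bit more concrete, here is a minimal sketch of how scan records could be bucketed into Samba versus Windows SMB implementations. The file name, record fields, and regexes are illustrative assumptions; the real pipeline matches banners against Rapid7's recog fingerprint database rather than these toy patterns.

```python
# A minimal sketch of the classification step described above, assuming the
# scan results have been exported as newline-delimited JSON with "ip" and
# "banner" fields (both assumptions about the export format).
import json
import re
from collections import Counter

SAMBA_RE = re.compile(r"samba", re.IGNORECASE)
WINDOWS_RE = re.compile(r"windows", re.IGNORECASE)

def classify(banner: str) -> str:
    """Bucket an SMB banner string into samba / windows / unknown."""
    if SAMBA_RE.search(banner):
        return "samba"
    if WINDOWS_RE.search(banner):
        return "windows"
    return "unknown"

counts = Counter()
with open("sonar_smb_445.json") as fh:   # hypothetical export from scans.io
    for line in fh:
        record = json.loads(line)
        counts[classify(record.get("banner", ""))] += 1

print(counts)
```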

 

We find that there are over 1 million internet-connected devices that expose SMB on port 445. Of those, over 800,000 run Windows, and — given that these are nodes running on the internet exposing SMB — it is likely that a large percentage of these are vulnerable versions of Windows with SMBv1 still enabled (other researchers estimate up to 30% of these systems are confirmed vulnerable, but that number could be higher).

 

We can look at the geographic distribution of these hosts using the following treemap (ISO3C labels provided where legible):

[ Figure 1: Treemap of internet-exposed SMB hosts by country ]

 

The United States, Asia, and Europe have large pockets of Windows systems directly exposed to the internet while others have managed to be less exposed (even when compared to their overall IPv4 blocks allocation).

 

We can also look at the various versions of Windows on these hosts:

[ Figure 2: Windows versions observed on internet-exposed SMB hosts ]

 

The vast majority of these are server-based Windows operating systems, but there is also an unhealthy mix of Windows desktop operating systems in there, some quite old. The operating system version levels also run the gamut of the Windows release history timeline:

[ Figure 3: Operating system versions across the Windows release timeline ]

 

Using Sonar, we can get a sense for what is out there on the internet offering SMB services. Some of these devices are honeypots run by researchers (like us), and some are other research tools, but the vast majority represent actual devices configured to run SMB on the public internet. We can see them with our light-touch Sonar scanning, and other researchers with more invasive scanning techniques have been able to positively identify that infection rates are hovering around 2%.

 

Part 2: In which Rapid7 uses Heisenberg to listen to the internet

While Project Sonar scans the internet to learn about what is out there, Project Heisenberg is almost the inverse: it listens to the internet to learn about scanning activity. Since SMB typically runs on port 445, and the WannaCry malware scans port 445 for potential targets, if we look at incoming connection attempts on port 445 to Heisenberg nodes as shown in Figure 4, we can see that scanning activity spiked briefly on 2017-05-10 and 2017-05-11, then increased quite a bit on 2017-05-12, and has stayed at elevated levels since.

 

[ Figure 4: Incoming connection attempts on port 445 to Heisenberg nodes ]

 

Not all traffic to Heisenberg on port 445 is an attempt to exploit the SMB vulnerability that WannaCry targets (MS17-010). There is always scanning traffic on port 445 (just look at the activity from 2017-05-01 through 2017-05-09), but a majority of the traffic captured between 2017-05-12 and 2017-05-14 was attempting to exploit MS17-010 and likely came from devices infected with the WannaCry malware. To determine this we matched the raw packets captured by Heisenberg on port 445 against sample packets known to exploit MS17-010.
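
For readers curious what that matching looks like in practice, the sketch below flags captured payloads that resemble SMB1 exploit traffic. It is illustrative only: the byte patterns are placeholders, not the signatures we actually used, and the real comparison was against sample packets known to exploit MS17-010.

```python
# A rough sketch of the packet-matching step described above. The byte
# patterns are illustrative placeholders, not the actual signatures.
SMB_MAGIC = b"\xffSMB"          # start of an SMB1 header
SUSPECT_PATTERNS = [
    b"\xffSMB\x32",             # SMB1 Trans2 request (command 0x32)
]

def looks_like_ms17_010(payload: bytes) -> bool:
    """Flag raw port 445 payloads that resemble the exploit traffic."""
    if SMB_MAGIC not in payload:
        return False
    return any(pattern in payload for pattern in SUSPECT_PATTERNS)

# Example usage against a captured payload dumped to disk:
with open("heisenberg_445_sample.bin", "rb") as fh:   # hypothetical capture file
    print(looks_like_ms17_010(fh.read()))
```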

 

Figure 5 shows the number of unique IP addresses scanning for port 445, grouped by hour between 2017-05-10 and 2017-05-16. The black line shows that at the same time that the number of incoming connections increases (2017-05-12 through 2017-05-14), the number of unique IP addresses scanning for port 445 also increases. Furthermore, the orange line shows the number of new, never-before-seen IPs scanning for port 445. From this we can see that a majority of the IPs scanning for port 445 between 2017-05-12 and 2017-05-14 were new scanners.

 

[ Figure 5: Unique and never-before-seen source IPs scanning port 445, by hour ]
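
If you want to reproduce this style of counting on your own logs, here is a small sketch of the unique-versus-new IP tally behind Figure 5, assuming a time-ordered CSV export of connection attempts. The file and column names are assumptions; the real analysis ran over Heisenberg's full connection dataset.

```python
# Sketch of the counting behind Figure 5, assuming a CSV export with
# "timestamp" (ISO 8601) and "src_ip" columns, ordered by timestamp.
import csv
from collections import defaultdict

hourly_unique = defaultdict(set)   # hour -> set of source IPs seen that hour
hourly_new = defaultdict(int)      # hour -> count of never-before-seen IPs
seen_ever = set()

with open("heisenberg_port445.csv") as fh:   # hypothetical export
    for row in csv.DictReader(fh):
        hour = row["timestamp"][:13]         # e.g. "2017-05-12T14"
        ip = row["src_ip"]
        hourly_unique[hour].add(ip)
        if ip not in seen_ever:
            hourly_new[hour] += 1
            seen_ever.add(ip)

for hour in sorted(hourly_unique):
    print(hour, len(hourly_unique[hour]), hourly_new[hour])
```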

 

Finally, we see scanning activity from 157 different countries in the month of May, and scanning activity from 133 countries between 2017-05-12 and 2017-05-14. Figure 6 shows the top 20 countries from which we have seen scanning activity, ordered by the number of unique IPs from those countries.

 

[ Figure 6: Top 20 countries by number of unique scanning IPs ]

 

While we have seen the volume of scans on port 445 increase compared to historical levels, it appears that the surge in scanning activity seen between 2017-05-12 and 2017-05-14 has started to tail off.

 

So what?

Using data collected by Project Sonar we have been able to measure the deployment of vulnerable devices across the internet, and we can see that there are many of them out there. Using data collected by Project Heisenberg, we have seen that while scanning for devices that expose port 445 has been observed for quite some time, the volume of scans on port 445 has increased since 2017-05-12, and a majority of those scans are specifically looking to exploit MS17-010, the SMB vulnerability that the WannaCry malware looks to exploit.

 

MS17-010 will continue to be a vector used by attackers, whether from the WannaCry malware or from something else. Please, follow Microsoft’s advice and patch your systems. If you are a Rapid7 InsightVM or Nexpose customer, or you are running a free 30-day trial, here is a step-by-step guide on how you can scan your network to find all of the assets in your organization that are potentially at risk.

 

Coming Soon

If this sort of information about internet wide measurements and analysis is interesting to you, stay tuned for the National Exposure Index 2017. Last year, we used Sonar scans to evaluate the security exposure of all the countries of the world based on the services they exposed on the internet. This year, we have run our studies again, we have improved our methodology and infrastructure, and we have new findings to share.

 


Today, we're excited to release Rapid7's latest research paper, Under the Hoodie: Actionable Research from Penetration Testing Engagements, by Bob Rudis, Andrew Whitaker, and Tod Beardsley, with loads of input and help from the entire Rapid7 pentesting team.

 

This paper covers the often occult art of penetration testing, and seeks to demystify the process, techniques, and tools that pentesters use to break into enterprise networks. By drawing on real, qualified data from the field experiences of dozens of pentesters, we're able to suss out the most common vulnerabilities that are exploited, the most common network misconfigurations that are leveraged, and the most effective methods we've found to compromise high-value credentials.

 

Finding: Detection is Everything

Probably the most actionable finding we discovered is that most organizations that conduct penetration testing exercises have a severe lack of usable, reliable intrusion detection capabilities. Over two-thirds of our pentesters completely avoided detection during the engagement. This is especially concerning given that most assessments don't put a premium on stealth; due to constraints in time and scope, pentesters generate an enormous amount of malicious traffic. In an ideal network, these would be setting off alarm bells everywhere. Most engagements end with recommendations to implement some kind of incident detection and response, regardless of which specific techniques for compromise were used.

Finding: Enterprise Size and Industry Doesn't Matter

When we started this study, we expected to find quantitative differences between small networks and large networks, and between different industries. After all, you might expect a large, financial industry enterprise of over 1,000 employees would be better equipped to detect and defend against unwelcome attackers due to the security resources available and required by various compliance regimes and regulatory requirements. Or, you might believe that a small, online-only retail startup would be more nimble and more familiar with the threats facing their business.

 

Alas, this isn't the case. As it turns out, the detection and prevention rates are nearly identical between large and small enterprises, and no industry seemed to fare any better or worse when it came to successful compromises.

 

This is almost certainly due to the fact that IT infrastructure pretty much everywhere is built using the same software and hardware components. Thus, all networks tend to be vulnerable to the same common misconfigurations that have the same vulnerability profiles when patch management isn't firing at 100%. There are certainly differences in the details -- especially when it comes to custom-designed web applications -- but even those tend to have the same sorts of frameworks and components that power them.

 

The Human Touch

Finally, if you're not really into reading a bunch of stats and graphs, we have a number of "Under the Hoodie" sidebar stories, pulled from real-life engagements. For example, while discussing common web application vulnerabilities, we're able to share a story of how a number of otherwise lowish-severity external web application issues led to the eventual compromise of the entire internal back-end network. Not only are these stories fun to read, they do a pretty great job of illustrating how unrelated issues can conspire on an attacker's behalf to lead to surprising levels of unauthorized access.

 

I hope you take a moment to download the paper and take a look at our findings; I don't know of any other research out there that explores the nuts and bolts of penetration testing in quite the depth or breadth that this report provides. In addition, we'll be covering the material at our booth at the RSA security conference next week in San Francisco, as well as hosting a number of "Ask a Pentester" sessions. Andrew and I will both be there, and we love nothing more than connecting with people who are interested in Rapid7's research efforts, so definitely stop by.

When cybersecurity researchers find a bug in product software, what’s the best way for the researchers to disclose the bug to the maker of that software? How should the software vendor receive and respond to researchers’ disclosure? Questions like these are becoming increasingly important as more software-enabled goods - and the cybersecurity vulnerabilities they carry - enter the marketplace. But more data is needed on how these issues are being dealt with in practice.

 

Today we helped publish a research report [PDF] that investigates attitudes and approaches to vulnerability disclosure and handling. The report is the result of two surveys – one for security researchers, and one for technology providers and operators – launched as part of a National Telecommunications and Information Administration (NTIA) “multistakeholder process” on vulnerability disclosure. The process split into three working groups: one focused on building norms/best practices for multi-party complex disclosure scenarios; one focused on building best practices and guidance for disclosure relating to “cyber safety” issues, and one focused on driving awareness and adoption of vulnerability disclosure and handling best practices. It is this last group, the “Awareness and Adoption Working Group” that devised and issued the surveys in order to understand what researchers and technology providers are doing on this topic today, and why. Rapid7 - along with several other companies, organizations, and individuals - participated in the project (in full disclosure, I am co-chair of the working group) as part of our ongoing focus on supporting security research and promoting collaboration between the security community and technology manufacturers.

 

The surveys, issued in April, investigated the reality around awareness and adoption of vulnerability disclosure best practices. I blogged at the time about why the surveys were important: in a nutshell, while the topic of vulnerability disclosure is not new, adoption of recommended practices is still seen as relatively low. The relationship between researchers and technology providers/operators is often characterized as adversarial, with friction arising from a lack of mutual understanding. The surveys were designed to uncover whether these perceptions are exaggerated, outdated, or truly indicative of what’s really happening. In the latter instance, we wanted to understand the needs or concerns driving behavior.

 

The survey questions focused on past or current behavior for reporting or responding to cybersecurity vulnerabilities, and processes that worked or could be improved. One quick note – our research efforts were somewhat imperfect because, as my data scientist friend Bob Rudis is fond of telling me, we effectively surveyed the internet (sorry Bob!). This was really the only pragmatic option open to us; however, it did result in a certain amount of selection bias in who took the surveys. We made a great deal of effort to promote the surveys as far and wide as possible, particularly through vertical sector alliances and information sharing groups, but we expect respondents have likely dealt with vulnerability disclosure in some way in the past. Nonetheless, we believe the data is valuable, and we’re pleased with the number and quality of responses.

 

There were 285 responses to the vendor survey and 414 to the researcher survey. View the infographic here [PDF].

 

Key findings

Researcher survey


  • The vast majority of researchers (92%) generally engage in some form of coordinated vulnerability disclosure.
  • When they have gone a different route (e.g., public disclosure) it has generally been because of frustrated expectations, mostly around communication.
  • The threat of legal action was cited by 60% of researchers as a reason they might not work with a vendor to disclose.
  • Only 15% of researchers expected a bounty in return for disclosure, but 70% expected regular communication about the bug.

 

Vendor survey

  • Vendor responses were generally separable into “more mature” and “less mature” categories. Most of the more mature vendors (between 60 and 80%) used all the processes described in the survey.
  • Most “more mature” technology providers and operators (76%) look internally to develop vulnerability handling procedures, with smaller proportions looking at their peers or at international standards for guidance.
  • More mature vendors reported that a sense of corporate responsibility or the desires of their customers were the reasons they had a disclosure policy.
  • Only one in three surveyed companies considered and/or required suppliers to have their own vulnerability handling procedures.

 

Building on the data for a brighter future

With the rise of the Internet of Things we are seeing unprecedented levels of complexity and connectivity for technology, introducing cybersecurity risk in all sorts of new areas of our lives. Adopting robust mechanisms for identifying and reporting vulnerabilities, and building productive models for collaboration between researchers and technology providers/operators has never been so critical.

 

It is our hope that this data can help guide future efforts to increase awareness and adoption of recommended disclosure and handling practices. We have already seen some very significant evolutions in the vulnerability disclosure landscape – for example, the DMCA exemption for security research; the FDA post-market guidance; and proposed vulnerability disclosure guidance from NHTSA. Additionally, in the past year, we have seen notable names in defense, aviation, automotive, and medical device manufacturing and operating all launch high profile vulnerability disclosure and handling programs. These steps are indicative of an increased level of awareness and appreciation of the value of vulnerability disclosure, and each paves the way for yet more widespread adoption of best practices.

 

The survey data itself offers a hopeful message in this regard - many of the respondents indicated that they clearly understand and appreciate the benefits of a coordinated approach to vulnerability disclosure and handling. Importantly, both researchers and more mature technology providers indicated a willingness to invest time and resources into collaborating so they can create more positive outcomes for technology consumers.

 

Yet, there is still a way to go. The data also indicates that to some extent, there are still perception and communication challenges between researchers and technology providers/operators, the most worrying of which is that 60% of researchers indicated concern over legal threats. Responding to these challenges, the report advises that:

 

“Efforts to improve communication between researchers and vendors should encourage more coordinated, rather than straight-to-public, disclosure. Removing legal barriers, whether through changes in law or clear vulnerability handling policies that indemnify researchers, can also help. Both mature and less mature companies should be urged to look at external standards, such as ISOs, and further explanation of the cost-savings across the software development lifecycle from implementation of vulnerability handling processes may help to do so.”

 

The bottom line is that more work needs to be done to drive continued adoption of vulnerability disclosure and handling best practices. If you are an advocate of coordinated disclosure – great! – keep spreading the word. If you have not previously considered it, now is the perfect time to start investigating it. ISO 29147 is a great starting point, or take a look at some of the example policies such as the Department of Defense or Johnson and Johnson. If you have questions, feel free to post them here in the comments or contact community [at] rapid7 [dot] com.

 

As a final thought, I would like to thank everyone that provided input and feedback on the surveys and the resulting data analysis - there were a lot of you and many of you were very generous with your time. And I would also like to thank everyone that filled in the surveys - thank you for lending us a little insight into your experiences and expectations.

 

~ @infosecjen

Introducing Project Heisenberg Cloud

Project Heisenberg Cloud is a Rapid7 Labs research project with a singular purpose: understand what attackers, researchers and organizations are doing in, across and against cloud environments. This research is based on data collected from a new, Rapid7-developed honeypot framework called Heisenberg along with internet reconnaissance data from Rapid7's Project Sonar.

 

Internet-scale reconnaissance with cloud-inspired automation

Heisenberg honeypots are a modern take on the seminal attacker detection tool. Each Heisenberg node is a lightweight, highly configurable agent that is centrally deployed using well-tested tools, such as terraform, and controlled from a central administration portal. Virtually any honeypot code can be deployed to Heisenberg agents and all agents send back full packet captures for post-interaction analysis.

 

One of the main goals of Heisenberg is to understand attacker methodology. All interaction and packet capture data is synchronized to a central collector and all real-time logs are fed directly into Rapid7's Logentries for live monitoring and historical data mining.
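
To give a feel for the agent model described above, here is a deliberately small, illustrative sketch of a honeypot agent that accepts connections on one port, records the first bytes of each attempt, and ships a JSON event to a central collector. It is not Heisenberg's actual agent code, and the collector endpoint and event fields are assumptions.

```python
# Toy honeypot agent sketch: listen, record, report. Endpoint and event
# schema are assumptions, not Heisenberg internals.
import datetime
import json
import socket
import urllib.request

COLLECTOR_URL = "https://collector.example.com/ingest"   # hypothetical endpoint

def report(event: dict) -> None:
    """POST one JSON event to the central collector."""
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 445))   # listening on 445 typically requires root
listener.listen(5)

while True:
    conn, (src_ip, src_port) = listener.accept()
    first_bytes = conn.recv(1024)             # grab the initial payload
    conn.close()
    report({
        "ts": datetime.datetime.utcnow().isoformat() + "Z",
        "src_ip": src_ip,
        "src_port": src_port,
        "dst_port": 445,
        "payload_len": len(first_bytes),
    })
```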

 

Insights into cloud configs and attacker methodology

Rapid7 and Microsoft deployed multiple Heisenberg honeypots in every "zone" of six major cloud providers: Amazon, Azure, Digital Ocean, Rackspace, Google and Softlayer, and examined the service diversity in each of these environments and the types of connections attackers, researchers, and organizations are initiating within, against, and across them.

 

To paint a picture of the services offered in each cloud provider, the research teams used Sonar data collected during Rapid7's 2016 National Exposure study. Some highlights include:

 

  • The six cloud providers in our study make up nearly 15% of available IPv4 addresses on the internet.
  • 22% of Softlayer nodes expose database services (MySQL & SQL Server) directly to the internet.
  • Web services are prolific, with 53-80% of nodes in each provider exposing some type of web service.
  • Digital Ocean and Google nodes expose shell (Telnet & SSH) services at a much higher rate - 86% and 74%, respectively - than the other four cloud providers in this study.
  • A wide range of attacks were detected, including ShellShock, SQL injection, PHP webshell injection, and credential attacks against SSH, Telnet, and remote framebuffer services (e.g. VNC, RDP & Citrix).
  • Our honeypots caught "data mashup" businesses attempting to use the cloud to mask illegal content scraping activity.

 

Read More

For more detail on our initial findings with Heisenberg Cloud, please click here to download our report or here for slides from our recent UNITED conference presentation.

 

Acknowledgements

We would like to thank Microsoft and Amazon for engaging with us through the initial stages of this research effort, and as indicated above, we hope they, and other cloud hosting providers will continue to do so as we move forward with the project.

This is a guest post from Art Manion, Technical Manager of the Vulnerability Analysis Team at the CERT Coordination Center (CERT/CC). CERT/CC is part of the Software Engineering Institute at Carnegie Mellon University.

 

October is National Cyber Security Awareness month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial security research. Throughout the month, we will be sharing content that enhances understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges that researchers face.

 

The CERT/CC has been doing coordinated vulnerability disclosure (CVD) since 1988. From the position of coordinator, we see the good, bad, and ugly from vendors, security researchers and other stakeholders involved in CVD. In this post, I'm eventually going to give some advice to security researchers. But first, some background discussion about sociotechnical systems, the internet of things, and chilling effects.

 

While there are obvious technological aspects of the creation, discovery, and defense of vulnerabilities, I think of cybersecurity and CVD as sociotechnical systems. Measurable improvements will depend as much on effective social institutions ("stable, valued, recurring patterns of [human] behavior") as they will on technological advances. The basic CVD process itself -- discover, report, wait, publish -- is an institution concerned with socially optimal [PDF] protective behavior. This means humans making decisions, individually and in groups, with different information, incentives, beliefs, and norms. Varying opinions about "optimal" explain why, despite three decades of debate and both offensive and defensive technological advances, vulnerability disclosure remains a controversial topic.

 

To add further complication, the rapid expansion of the internet of things has changed the dynamics of CVD and cybersecurity in general. Too many "things" have been designed with the same disregard to security associated with internet-connected systems of the 1980s, combined with the real potential to cause physical harm. The Mirai botnet, which set DDoS bandwidth records in 2016, used an order of magnitude fewer username and password guesses than the Morris worm did in 1988. Remote control attacks on cars and implantable medical devices have been demonstrated. The stakes involved in software security and CVD are no longer limited to theft of credit cards, account credentials, personal information, and trade or national secrets.

 

In pursuit of socially optimal CVD and with consideration for the new dynamics of IoT, I've become involved in two policy areas: defending beneficial security research and defining CVD process maturity. These two areas intersect when researchers choose CVD as part of their work, and that choice is not without risk to the researcher.

 

The security research community has valid and serious concerns about the chilling effects, real and perceived, of legal barriers and other disincentives [PDF] to performing research and disclosing results. On the other hand, there is a public policy desire to differentiate legitimate and beneficial security research from criminal activity. The confluence of these two forces leads to the following conundrum: If rules for security researchers are codified -- even with broad agreement from researchers, whose opinions differ -- any research activity that falls out of bounds could be considered unethical or even criminal.

 

Codified rules could reduce the grey area created by "exceeds authorized access" and the steady supply of new vulnerabilities (often discovered by accessing things in interesting and unexpected ways). But with CVD, and most complex interactions, the exception is the rule and CVD is full of grey. Honest mistakes, competing interests, language, time zone, and cultural barriers, disagreements and other forms of miscommunication are commonplace. There's still too little information and too many moving parts to codify CVD rules in a fair and effective way.

 

Nonetheless, I see value in improving the quality of vulnerability reports and CVD as an institution, so here is some guidance for security researchers who choose the CVD option. Most of this advice is based on my experience at the CERT/CC for the last 15 years, which is bound to include some personal opinion, so caveat lector.

 

  • Be aware. In three decades of debate, a lot has been written about vulnerability disclosure. Read up, talk to your peers. If you're subject to U.S. law and read only one reference, it should be the EFF's Coders' Rights Project Vulnerability Reporting FAQ.

 

  • Be humble. Your vulnerability is not likely the most important out of 14,000+ public disclosures this year. Please think carefully before registering a domain for your vulnerability. You might fully understand the system you're researching, or you might not, particularly if the system has significant, non-traditional-compute context, like implantable medical devices.

 

  • Be confident. If your vulnerability is legit, then you'll be able to demonstrate it, and others will be able to reproduce it. You're also allowed to develop your reputation and brand, just not at a disproportionate cost to others.

 

  • Be responsible. CVD is full of externalities. Admit when you're wrong, make clarifications and corrections.

 

  • Be concise. A long, rambling report or advisory costs readers time and mental effort and is usually an indication of a weak or invalid report. If you've got a real vulnerability, demonstrate it clearly and concisely.

 

  • PoC||GTFO. This one goes hand in hand with being concise. Actual PoC may not be necessary, but provide clear evidence of the vulnerable behavior you're reporting and steps for others to reproduce. Videos might help, but they don't cut it by themselves.

 

  • Be clear. Both with your intentions and your communications. You won't reach agreement with everyone, particularly about when to publish, but try to avoid surprises. Use ISO 8601 date/time formats. Simple phrasing, avoid idiom.

 

  • Be professional. Professionals balance humility, confidence, candor, and caution. Professionals don't need to brag. Professionals get results. Let your work speak for itself, don't exaggerate.

 

  • Be empathetic. Vendors and the others you're dealing with have their own perspectives and constraints. Take some extra care with those who are new to CVD.

 

  • Minimize harm. Public disclosure is harmful. It increases risk to affected users (in the short term at least) and costs vendors and defenders time and money. The theory behind CVD is that in the long run, public disclosure is universally better than no disclosure. I'm not generally a fan of analogies in cybersecurity, but harm reduction in public health is one I find useful (informed by, but different than this take). If your research reveals sensitive information, stop at the earliest point of proving the vulnerability, don't share the information, and don't keep it. Use dummy accounts when possible.

 

In a society increasingly dependent on complex systems, security research is important work, and the behavior of researchers matters. At the CERT/CC, much of our CVD effort is focused on helping others build capability and improving institutions, thus, the advice above. We do offer a public CVD service, so if you've reached an impasse, we may be able to help.

 

Art Manion is the Technical Manager of the Vulnerability Analysis team in the CERT Coordination Center (CERT/CC), part of the Software Engineering Institute at Carnegie Mellon University. He has studied vulnerabilities and coordinated responsible disclosure efforts since joining CERT in 2001. After gaining mild notoriety for saying "Don't use Internet Explorer" in a conference presentation, Manion now focuses on policy, advocacy, and rational tinkering approaches to software security, including standards development in ISO/IEC JTC 1 SC 27 Security techniques. Follow Art at @zmanion and CERT at @CERT_Division.

Yesterday, the Michigan Senate Judiciary Committee passed a bill – S.B. 0927 – that forbids some forms of vehicle hacking, but includes specific protections for cybersecurity researchers. Rapid7 supports these protections. The bill is not law yet – it has only cleared a Committee in the Senate, but it looks poised to keep advancing in the state legislature. Our background and analysis of the bill is below.

 

In summary:

  • The amended bill offers legal protections for independent research and repair of vehicle computers. These protections do not exist in current Michigan state law.
  • The amended bill bans some forms of vehicle hacking that damage property or people, but we believe this was already largely prohibited under current Michigan state law.
  • The bill attempts to make penalties for hacking more proportional, but this may not be effective.

 

Background

 

Earlier this year, Michigan state Senator Mike Kowall introduced S.B. 0927 to prohibit accessing a motor vehicle's electronic system without authorization. The bill would have punished violations with a potential life sentence. As noted by press reports at the time, the bill's broad language made no distinction between malicious actors, researchers, or harmless access. The original bill is available here.

 

After S.B. 0927 was introduced, Rapid7 worked with a coalition of cybersecurity researchers and companies to detail concerns that the bill would chill legitimate research. We argued that motorists are safer as a result of independent research efforts that are not necessarily authorized by vehicle manufacturers. For example, in Jul. 2015, researchers found serious security flaws in Jeep software, prompting a recall of 1.4 million vehicles. Blocking independent research to uncover vehicle software flaws would undermine cybersecurity and put motorists at greater risk.

 

Over a four-month period, Rapid7 worked rather extensively with Sen. Kowall's office and Michigan state executive agencies to minimize the bill's damage to cybersecurity research. We applaud their willingness to consider our concerns and suggestions. The amended bill passed by the Michigan Senate Judiciary Committee, we believe, will help provide researchers with greater legal certainty to independently evaluate and improve vehicle cybersecurity in Michigan.

 

The Researcher Protections

 

First, let's examine the bill's protections for researchers – Sec. 5(2)(B); pg. 6, lines 16-21. Explicit protection for cybersecurity researchers does not currently exist in Michigan state law, so we view this provision as a significant step forward.

 

This provision says researchers do not violate the bill's ban on vehicle hacking if the purpose is to test, refine, or improve the vehicle – and not to damage critical infrastructure, other property, or injure other people. The research must also be performed under safe and controlled conditions. A court would need to interpret what qualifies as "safe and controlled" conditions – hacking a moving vehicle on a highway probably would not qualify, but we would argue that working in one's own garage likely sufficiently limits the risks to other people and property.

 

The researcher protections do not depend on authorization from the vehicle manufacturer, dealer, or owner. However, because of the inherent safety risks of vehicles, Rapid7 would support a well-crafted requirement that research beyond passive signals monitoring must obtain authorization from the vehicle owner (as distinct from the manufacturer).

 

The bill offers similar protections for licensed manufacturers, dealers, and mechanics [Sec. 5(2)(A); pg. 6, lines 10-15]. However, neither current state law nor the bill explicitly gives vehicle owners (who are not mechanics and are not conducting research) the right to access their own vehicle computers without manufacturer authorization. Since Michigan state law does not clearly give owners this ability today, the bill is not a step back here. Nonetheless, we would prefer the legislation make clear that it is not a crime for owners to independently access their own vehicle and device software.

 

The Vehicle Hacking Ban

 

The amended bill would explicitly prohibit unauthorized access to motor vehicle electronic systems to alter or use vehicle computers, but only if the purpose was to damage the vehicle, injure persons, or damage other property [Sec. 5(1)(c)-(d); pgs. 5-6, lines 23-8]. That is an important limit that should exclude, for example, passive observation of public vehicle signals or attempts to fix (as opposed to damage) a vehicle.

 

Although the amended bill would introduce a new ban on certain types of vehicle hacking, our take is that this was already illegal under existing Michigan state law. Current Michigan law – at MCL 752.795 – prohibits unauthorized access to "a computer program, computer, computer system, or computer network." The current state definition of "computer" – at MCL 752.792 – is already sweeping enough to encompass vehicle computers and communications systems. Since the law already prohibits unauthorized hacking of vehicle computers, it's difficult to see why this legislation is actually necessary. Although the bill’s definition of "motor vehicle electronic system" is too broad [Sec. 2(11); pgs. 3-4, lines 25-3], its redundancy with current state law makes this legislation less of an expansion than if there were no overlap.

 

Penalty Changes

 

The amended bill attempts to create some balance to sentencing under Michigan state computer crime law [Sec. 7(2)(A); pg. 8, line 11]. This provision essentially makes harmless violations of Sec. 5 (which includes the general ban on hacking, including vehicles) a misdemeanor, as opposed to a felony. Current state law – at MCL 752.797(2) – makes all Sec. 5 violations felonies, which is potentially harsh for innocuous offenses. We believe that penalties for unauthorized hacking should be proportionate to the crime, so building additional balance in the law is welcome.

 

However, this provision is limited and contradictory. The Sec. 7 provision applies only to those who "did not, and did not intend to," acquire/alter/use a computer or data, and if the violation can be "cured without injury or damage." But to violate Sec. 5, the person must have intentionally accessed a computer to acquire/alter/use a computer or data. So the person did not violate Sec. 5 in the first place if the person did not do those things or did not do them intentionally. It’s unclear under what scenario Sec. 7 would kick in and provide a more proportionate sentence – but at least this provision does not appear to cause any harm. We hope this provision can be strengthened and clarified as the bill moves through the Michigan state legislature.

 

Conclusion

 

On balance, we think the amended bill is a major improvement on the original, though not perfect. The most important improvements we'd like to see are

  1. Clarifying the penalty limitation in Sec. 7; 
  2. Narrowing the definition of "motor vehicle electronic system" in Sec. 2; and
  3. Limiting criminal liability for owners that access software on vehicle computers they own.

 

However, the clear protections for independent researchers are quite helpful, and Rapid7 supports them. To us, the researcher protections further demonstrate that lawmakers are recognizing the benefits of independent research to advance safety, security, and innovation. The attempt at creating proportional sentences is also sorely needed and welcome, if inelegantly executed.

 

The amended bill is at a relatively early stage in the legislative process. It must still pass through the Michigan Senate and House. Nonetheless, it starts off on much more positive footing than it did originally. We intend to track the bill as it moves through the Michigan legislature and hope to see it improve further. In the meantime, we'd welcome feedback from the community.

Today, I'm happy to announce the latest research paper from Rapid7, National Exposure Index: Inferring Internet Security Posture by Country through Port Scanning, by Bob Rudis, Jon Hart, and me, Tod Beardsley. This research takes a look at one of the most foundational components of the internet: the millions and millions of individual services that live on the public IP network.

 

When people think about "the internet," they tend to think only of the one or two protocols that the World Wide Web runs on, HTTP and HTTPS. Of course, there are loads of other services, but which are actually in use, and at what rate? How much telnet, SSH, FTP, SMTP, or any of the other protocols that run on TCP/IP is actually in use today, where are they all located, and how much of it is inherently insecure due to running over non-encrypted, cleartext channels?

 

While projects like CAIDA and Shodan perform ongoing telemetry that covers important aspects of the internet, we here at Rapid7 are unaware of any ongoing effort to gauge the general deployment of services on public networks. So, we built our own, using Project Sonar, and we now have the tooling not only to answer these fundamental questions about the nature of the internet, but also to come up with more precise questions for specific lines of inquiry.
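If you want to poke at this kind of question yourself, the tally itself is conceptually simple. Below is a minimal Python sketch that counts the most common open TCP ports from a scan export; it assumes a hypothetical CSV with "ip" and "port" columns, which is not the actual Sonar data format.

import csv
from collections import Counter

def top_ports(path, n=10):
    # Tally how many responding endpoints were seen on each port.
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[int(row["port"])] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for port, count in top_ports("scan_results.csv"):
        print(f"port {port}: {count} endpoints")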

 

Can you name the top ten TCP protocols offered on the internet? You probably can guess the top two, but did you know that #7 is telnet? Yep, there are 15 million good old, reliable, usually unencrypted telnet servers out there, offering shells to anyone who cares to peek in on the cleartext password as it's being used.

 

We found some weird things on the national level, too. For instance, about 75% of the servers offering SMB/CIFS services - a (usually) Microsoft service for file sharing and remote administration for Windows machines -  reside in just six countries: the United States, China, Hong Kong, Belgium, Australia and Poland.

 

It's facts like these that made us realize that we have a fundamental gap in our awareness of the services deployed on the public side of firewalls the world over. This gap, in turn, makes it hard to truly understand what the internet is. So, the paper and the associated data we collected (and will continue to collect) can help us all get an understanding of what makes up one of the most significant technologies in use on Planet Earth.


So, you can score a copy of the paper, full of exciting graphs (and absolutely zero pie charts!) here. Or, if you're of a mind to dig into the data behind those graphs, you can score the summary data here and let us know what is lurking in there that you found surprising, shocking, or sobering.

Recently I transitioned from a Principal Consultant role into a new role at Rapid7, as Research Lead with a focus on IoT technology, and it has been a fascinating challenge. Although I have been conducting research for a number of years, covering everything from format string and buffer overflow research on Windows applications to exploring embedded appliances and hacking multifunction printers (MFPs), conducting research within the IoT world is truly exciting and amazing and has taught me to be even more open-minded.

 

That is, open-minded to the fact that there are people out there attaching technology to everything and anything. (Even toothbrushes.)

 

[Image: Oral-B Bluetooth-connected toothbrush]

 

As a security consultant, over the last eight years I have focused most of my research on operational-style attacks, which I have developed and used to compromise systems and data during penetration testing. An operational attack uses the operational features of a device against itself.

 

As an example, if you know how to ask nicely, MFPs will often give up Active Directory credentials, or as recent research has disclosed, network management systems openly consume SNMP data without questioning its content or where it came from.

 

IoT research is even cooler because now I get the chance to expand my experience into a number of new avenues. Historically I have prided myself on the ability to define risk around my research and communicate it well. With IoT, I initially shuddered at the question: “How do I define risk?"

 

IoT Risk

In the past, it has been fairly simple to define and explain risk as it relates to operational style attacks within an enterprise environment, but with IoT technology I initially struggled with the concept of risk. This was mainly driven by the fact that most IoT technologies appear to be consumer-grade products. So if someone hacks my toothbrush they may learn how often I brush my teeth. What is the risk there, and how do I measure that risk?

 

The truth is, the deeper I head down this rabbit hole called IoT, the more my understanding of risk grows. A prime example of defining such risk was pointed out by Tod Beardsley in his blog “The Business Impact of Hacked Baby Monitors”. At first look, we might easily jump to the conclusion that there may not be any serious risk to an enterprise business. But on a second look, if a malicious actor can use some innocuous IoT technology to gain a foothold on the home network of one of your employees, they could then potentially pivot onto the corporate network via remote access, such as a VPN. This is a valid risk that can be communicated and should be seriously considered.

 

IoT Research

To better define risk, we need to ensure our research involves all aspects of IoT technology. Often when researching and testing IoT, researchers can get a form of tunnel vision where they focus on the technology from a single point of reference, for example the device itself.

 

While working and discussing IoT technology with my peers at Rapid7, I have grown to appreciate the complexity of IoT and its ecosystem. Yes, ecosystem—this is where we consider the entire security picture of IoT, and not just one facet of the technology. This includes the following three categories and how each one interacts with and impacts the others. We cannot test one without the others and consider that testing effective. We must test each one and also test how they affect each other.

 

[Figure: the IoT ecosystem and its three interacting categories]

 

With IoT quickly becoming more than just consumer-grade products, we are starting to see more IoT-based technologies migrating into the enterprise environment. If we are ever going to build a secure IoT world, it is critical during our research that all aspects of the ecosystem are addressed.

 

The knowledge we learn from this research can help enterprises better cope with the new security risk, make better decisions on technology purchases, and help employees stay safe within their home environment—which leads to better security for our enterprises. Thorough research can also deliver valuable knowledge back to the vendors, making it possible to improve product security during the design, creation, and manufacturing of IoT technology, so new vendors and new products are not recreating the same issues over and over.

 

So, as we continue down the road of IoT research, let us focus our efforts on the entire ecosystem. That way we can ensure that our efforts lead to a complete picture and culminate in security improvements within the IoT industry.

by Suchin Gururangan & Bob Rudis

 

At Rapid7, we are committed to engaging in research to help defenders understand, detect and defeat attackers. We conduct internet-scale research to gain insight into the volatile threat landscape and share data with the community via initiatives like Project Sonar [1] and Heisenberg [2]. As we crunch this data, we have a better idea of the global exposure to common vulnerabilities and can see emerging patterns in offensive attacks.

 

We also use this data to add intelligence to our products and services. We’re developing machine learning models that use this daily internet telemetry to identify phishing sites and to find and classify devices through their certificate and site configurations.

 

We have recently focused our research on how these tools can work together to provide unique insight on the state of the internet. Looking at the internet as a whole can help researchers identify stable, macro-level trends in the individual attacks between IP addresses. In this post, we’ll give you a window into these explorations.

 

IPv4 Topology

First, a quick primer on IPv4, the fourth version of the Internet Protocol. The topology of IPv4 is characterized by three levels of hierarchy, from smallest to largest: IP addresses, subnets, and autonomous systems (ASes). IP addresses on IPv4 are 32-bit sequences that identify hosts or network interfaces. Subnets are groups of IP addresses, and ASes are blocks of subnets managed by public institutions and private enterprises. IPv4 is divided into about 65,000 ASes, at least 30M subnets, and 2^32 IP addresses.
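As a concrete illustration of that hierarchy, here is a small Python sketch using only the standard library; the AS number and prefix shown are made-up examples, not real allocations.

import ipaddress

total_addresses = 2 ** 32                          # the full IPv4 address space: 4,294,967,296
subnet = ipaddress.ip_network("203.0.113.0/24")    # a subnet: a contiguous block of addresses
host = ipaddress.ip_address("203.0.113.7")         # a single host address
example_as = {"asn": 64512, "prefixes": [subnet]}  # an AS: a set of announced prefixes (hypothetical)

print(total_addresses)       # 4294967296
print(subnet.num_addresses)  # 256 addresses in this /24
print(host in subnet)        # True: the host falls inside the subnet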

 

Malicious ASes

There has been a great deal of academic and industry focus on identifying malicious activity in and across autonomous systems [3,4,5,6], and for good reason. Well over 50% of “good” internet traffic comes from a small subset of large, well-defined, ocean-like ASes pushing content from Netflix, Google, Facebook, Apple and Amazon. Despite this centralization of “cloud” content, we’ll show that the internet has become substantially more fragmented over time, enabling those with malicious intent to stake their claim in less friendly waters. In fact, our longitudinal data on phishing activity across IPv4 presented an interesting trend: a small subset of autonomous systems have regularly hosted a disproportionate amount of malicious activity. In particular, 200 ASes hosted 70% of phishing activity from 2007 to 2015 (data: cleanmx archives [7]). We wanted to understand what makes some autonomous systems more likely to host malicious activity.

 

[Figure: overall density of phishing activity across ASes]

[Figure: top 20 ASes by phishing activity]

 

IPv4 Fragmentation

We gathered historical data on the mapping between IP addresses and ASes from 2007 to 2015 to generate a longitudinal map of IPv4. This map clearly suggested IPv4 has been fragmenting. In fact, the total number of ASes has grown 60% in the past decade. During the same period, there has been a rise in the number of small ASes and a decline in the number of large ones. These results make sense given that the IPv4 address space has been exhausted: growth in IPv4 access requires the reallocation of existing address space into smaller and smaller independent blocks.

 


 

 

AS Fragmentation

Digging deeper into the Internet hierarchy, we analyzed the composition, size, and fragmentation of malicious ASes.

ARIN, one of the primary registrars of ASes, categorizes subnets based on the number of IP addresses they contain. We found that the smallest subnets available made up on average 56±3.0 percent of a malicious AS.

We inferred the size of an AS by calculating its maximum amount of addressable space. Malicious ASes were in the 80-90th percentile in size across IPv4.

 

To compute fragmentation, subnets observed in ASes over time were organized into trees based on parent-child relationships (Figure 3). We then calculated the ratio of the number of root subnets, which have no parents, to the number of subsequent child subnets across the lifetime of the AS. We found that malicious ASes were 10-20% more fragmented than other ASes in IPv4.
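As a rough illustration of that measure (not the exact pipeline used for the study), here is a minimal Python sketch that, given the subnets observed for a single AS over its lifetime, marks each subnet as a root or a child and computes the root-to-child ratio.

import ipaddress

def fragmentation_ratio(prefixes):
    nets = [ipaddress.ip_network(p) for p in prefixes]
    roots, children = 0, 0
    for net in nets:
        # A subnet is a child if any other observed subnet contains it.
        has_parent = any(net != other and net.subnet_of(other) for other in nets)
        if has_parent:
            children += 1
        else:
            roots += 1
    return roots / children if children else float("inf")

# Two roots (10.0.0.0/16 and 192.0.2.0/24) and two children -> ratio 1.0
print(fragmentation_ratio(["10.0.0.0/16", "10.0.1.0/24", "10.0.2.0/24", "192.0.2.0/24"]))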

 

[Figure 3: subnets organized into parent-child trees]

 

These results suggest that malicious ASes are large and deeply fragmented into small subnets. ARIN fee schedules [8] show that smaller subnets are significantly less expensive to purchase, and the inexpensive nature of small subnets may allow malicious registrants to purchase many IP blocks for traffic redirection or to host proxy servers to better float under the radar.

 

 

Future Work

Further work is required to characterize the exact cost structure of buying subnets, registering IP blocks, and setting up infrastructure in malicious ASes.

 

We'd also like to understand the network and system characteristics that cause attackers to choose to co-opt a specific autonomous system over another. For example, we used Sonar’s historical forward DNS service and our phishing detection algorithms to characterize all domains that have mapped to these ASes in the past two years. Domains hosted in malicious ASes had features that suggested deliberate use of specific infrastructure. For example, 'wordpress' sites were over-represented in some malicious ASes (like AS4808), and GoDaddy was by far the most popular registrar for malicious sites across the board.

 

We can also use our SSL Certificate classifier to understand the distribution of devices hosted in ASes across IPv4, as seen in the chart below:

 

[Figure: distribution of device counts per AS, by device type]

 

Each square above shows the probability distribution (a fancier, prettier histogram) of device counts of a particular type. Most ASes host fewer than 100 devices across a majority of categories. Are there skews in the presence of specific devices to propagate phishing attacks from these malicious ASes?

 

Conclusion

Our research presents the following results:

 

  1. A small subset of ASes continue to host a disproportionate amount of malicious activity.
  2. Smaller subnets and ASes are becoming more ubiquitous in IPv4.
  3. Malicious ASes are deeply fragmented.
  4. There is a concentrated use of specific infrastructure in malicious ASes.
  5. Attackers both co-opt existing devices and stand up their own infrastructure within ASes (a gut check would suggest this is obvious, but having data to back it up also makes it science).

 

Further work is required to characterize the exact cost structure of buying subnets, registering IP blocks, and setting up infrastructure in malicious ASes along with what network and system characteristics cause attackers to choose to co-opt one device in one autonomous system over another.

 

This research represents an example of how Internet-scale data science can provide valuable insight on the threat landscape. We hope similar macro level research is inspired by these explorations and will be bringing you more insights from Project Sonar & Heisenberg over the coming year.


  1. Sonar intro

  2. Heisenberg intro

  3. G. C. M. Moura, R. Sadre and A. Pras, “Internet Bad Neighborhoods: The spam case,” Network and Service Management (CNSM), 2011 7th International Conference on, Paris, 2011, pp. 1-8.

  4. B. Stone-Gross, C. Kruegel, K. Almeroth, A. Moser and E. Kirda, “FIRE: FInding Rogue nEtworks,” doi: 10.1109/ACSAC.2009.29

  5. C. A. Shue, A. J. Kalafut and M. Gupta, “Abnormally Malicious Autonomous Systems and Their Internet Connectivity,” doi: 10.1109/TNET.2011.2157699

  6. A. J. Kalafut, C. A. Shue and M. Gupta, “Malicious Hubs: Detecting Abnormally Malicious Autonomous Systems,” doi: 10.1109/INFCOM.2010.5462220

  7. Cleanmx archive

  8. ARIN Fee Schedule


The Attacker's Dictionary

Posted by royhodgman Employee Mar 1, 2016

Rapid7 is publishing a report about the passwords attackers use when they scan the internet indiscriminately. You can pick up a copy at booth #4215 at the RSA Conference this week, or online right here. The following post describes some of what is investigated in the report.

 

Announcing the Attacker's Dictionary

Rapid7's Project Sonar periodically scans the internet across a variety of ports and protocols, allowing us to study the global exposure to common vulnerabilities as well as trends in software deployment (this analysis of binary executables stems from Project Sonar).

 

As a complement to Project Sonar, we run another project called Heisenberg which listens for scanning activity. Whereas Project Sonar sends out lots of packets to discover what is running on devices connected to the Internet, Project Heisenberg listens for and records the packets being sent by Project Sonar and other Internet-wide scanning projects.

 

The datasets collected by Project Heisenberg let us study what other people are trying to examine or exploit. Of particular interest are scanning projects which attempt to use credentials to log into services that we do not provide. We cannot say for sure what the intention is of a device attempting to log into a nonexistent RDP server running on an IP address which has never advertised its presence, but we believe that behavior is suspect and worth analyzing.

 

How Project Heisenberg Works

Project Heisenberg is a collection of low interaction honeypots deployed around the world. The honeypots run on IP addresses which we have not published, and we expect that the only traffic directed to the honeypots would come from projects or services scanning a wide range of IP addresses. When an unsolicited connection attempt is made to one of our honeypots, we store all the data sent to the honeypot in a central location for further analysis.

 

In this post we will explore some of the data we have collected related to Remote Desktop Protocol (RDP) login attempts.

 

RDP Summary Data

We have collected RDP passwords over a 334-day period, from 2015-03-12 to 2016-02-09.

 

During that time we have recorded 221,203 different attempts to log in, coming from 5,076 distinct IP addresses across 119 different countries, using 1,806 different usernames and 3,969 different passwords.
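Computing these summary numbers is straightforward once the honeypot events are exported. Here is a minimal Python sketch, assuming a hypothetical CSV with src_ip, username, and password columns (not the actual Heisenberg schema).

import csv
from collections import Counter

passwords, usernames, ips = Counter(), Counter(), set()
with open("rdp_attempts.csv", newline="") as f:
    for row in csv.DictReader(f):
        passwords[row["password"]] += 1
        usernames[row["username"]] += 1
        ips.add(row["src_ip"])

total = sum(passwords.values())
print(f"{total} attempts from {len(ips)} distinct IPs")
for pw, n in passwords.most_common(10):
    print(f"{pw!r}: {n} ({n / total:.2%})")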

 

Because it wouldn't be a discussion of passwords without a top 10 list, the top 10 passwords that we collected are:

 

password        count   percent
x               11865   5.36%
Zz              10591   4.79%
St@rt123         8014   3.62%
1                5679   2.57%
P@ssw0rd         5630   2.55%
bl4ck4ndwhite    5128   2.32%
admin            4810   2.17%
alex             4032   1.82%
.......          2672   1.21%
administrator    2243   1.01%

 

And because we have information not only about passwords, but also about the usernames that are being used, here are the top 10 that were collected:

 

username        count   percent
administrator   77125   34.87%
Administrator   53427   24.15%
user1            8575   3.88%
admin            4935   2.23%
alex             4051   1.83%
pos              2321   1.05%
demo             1920   0.87%
db2admin         1654   0.75%
Admin            1378   0.62%
sql              1354   0.61%

 

We see on average 662.28 login attempts every day, but the actual daily number varies quite a bit. The chart below shows the number of events per day since we started collecting data. Notice the heavy activity in the first four months, which skews the average high.

 

[Figure: RDP login attempts per day]

 

In addition to the username and password being used in the login attempts that we captured, we also collected the IP address of the device making the login attempt. To the best of the ability of the GeoIP database we used, here are the top 10 countries from which the collected login attempts originate:

 

country          country code   count   percent
China            CN             88227   39.89%
United States    US             54977   24.85%
South Korea      KR             13182   5.96%
Netherlands      NL             10808   4.89%
Vietnam          VN              6565   2.97%
United Kingdom   GB              3983   1.80%
Taiwan           TW              3808   1.72%
France           FR              3709   1.68%
Germany          DE              2488   1.12%
Canada           CA              2349   1.06%

 

With the data broken down by country, we can recreate the chart above to show activity by country for the top 5 countries:

 

 

[Figure: login attempts per day for the top 5 countries]

 

RDP Highlights

There is even more information to be found in this data beyond counting passwords, usernames and countries.

We guess that these passwords are selected because whoever is conducting these scans believes that there is a chance they will work. Maybe the scanners have inside knowledge about actual usernames and passwords in use, or maybe they're just using passwords that have been made available from previous security breaches in which account credentials were leaked.

 

In order to look into this, we compared all the passwords collected by Project Heisenberg to passwords listed in two different collections of leaked passwords. The first is a list of passwords collected from leaked password databases by Crackstation. The second list comes from Mark Burnett.
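The comparison itself is just set membership. A minimal Python sketch is below; the file names are placeholders, since the Crackstation and Burnett lists ship in their own formats, and the top-password list here is abbreviated from the table above.

def load_list(path):
    # Load one password per line into a set for fast membership checks.
    with open(path, encoding="utf-8", errors="ignore") as f:
        return {line.rstrip("\n") for line in f}

def top_n_in_lists(top_passwords, leaked_sets, n):
    top = top_passwords[:n]
    hits = sum(1 for pw in top if any(pw in s for s in leaked_sets))
    return hits, hits / n

leaked = [load_list("crackstation.txt"), load_list("burnett.txt")]
top_passwords = ["x", "Zz", "St@rt123", "1", "P@ssw0rd"]  # first five from the table above
for n in (1, 2, 3, 4, 5):
    hits, pct = top_n_in_lists(top_passwords, leaked, n)
    print(f"top {n}: {hits} found in any list ({pct:.2%})")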

 

In the table below we list how many of the top N passwords are found in these password lists:

 

top password count   num in any list   percent
1                    1                 100.00%
2                    2                 100.00%
3                    2                 66.67%
4                    3                 75.00%
5                    4                 80.00%
10                   8                 80.00%
50                   28                56.00%
100                  55                55.00%
1000                 430               43.00%
3969                 1782              44.90%

 

This means that 8 of the 10 most frequently used passwords were also found in published lists of leaked passwords. But looking back at the top 10 passwords above, they are not very complex and so it is not surprising that they appear in a list of leaked passwords.

 

This observation prompted us to look at the complexity of the passwords we collected. Just about any time you sign up for a service on the internet – be it a social networking site, an online bank, or a music streaming service – you will be asked to provide a username and password. Many times your chosen password will be evaluated during the signup process and you will be given feedback about how suitable or secure it is.

 

 

Password evaluation is a tricky and inexact art that consists of various components. Some of the many aspects that a password evaluator may take into consideration include:

 

  • length
  • presence of dictionary words
  • runs of characters (aaabbbcddddd)
  • presence of non alphanumeric characters (!@#$%^&*)
  • common substitutions (1 for l [lowercase L], 0 for O [uppercase o])

 

Different password evaluators will place different values on each of these (and other) characteristics to decide whether a password is "good" or "strong" or "secure". We looked at a few of these password evaluators, and found zxcvbn to be well documented and maintained, so we ran all the passwords through it to compute a complexity score for each one. We then looked at how password complexity is related to finding a password in a list of leaked passwords.
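Scoring a password with zxcvbn takes only a couple of lines. The sketch below uses the Python port of the library (pip install zxcvbn); the original analysis may have used a different port of zxcvbn, so exact scores could differ slightly.

from collections import Counter
from zxcvbn import zxcvbn

observed = ["x", "Zz", "St@rt123", "P@ssw0rd", "bl4ck4ndwhite", "admin"]  # sample from the data above
by_complexity = Counter(zxcvbn(pw)["score"] for pw in observed)  # score ranges from 0 to 4

for score in range(5):
    print(f"complexity {score}: {by_complexity.get(score, 0)} passwords")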

 

complexity   # passwords   %       crackstation   crackstation %   Burnett   Burnett %   any   any %   all   all %
0            803           20.23   726            90.41            564       70.24       728   90.66   562   69.99
1            1512          38.10   898            59.39            634       41.93       939   62.10   593   39.22
2            735           18.52   87             11.84            37        5.03        94    12.79   30    4.08
3            567           14.29   13             2.29             5         0.88        13    2.29    5     0.88
4            352           8.87    7              1.99             4         1.14        8     2.27    3     0.85

 

The above table shows the complexity of the collected passwords, as well as how many were found in different password lists.

 

For instance, with complexity level 4, there were 352 passwords classified as being that complex, 7 of which were found in the crackstation list, and 4 of which were found in the Burnett list. Furthermore, 8 of the passwords were found in at least one of the password lists, meaning that if you had all the password lists, you would find 2.27% of the passwords classified as having a complexity value of 4. Similarly, looking across all the password lists, you would find 3 (0.85%) passwords present in each of the lists.

 

From this we extrapolate that as passwords get more complex, fewer and fewer are found in the lists of leaked passwords. Since we see that attackers try passwords that are stupendously simple, like single character passwords, and much more complex passwords that are typically not found in the usual password lists, we can surmise that these attackers are not tied to these lists in any practical way -- they clearly have other sources for likely credentials to try.

 

Finally, we wanted to know what the population of possible targets looks like. How many endpoints on the internet have an RDP server running, waiting for connections? Drawing on our experience with Project Sonar, on 2016-02-02 the Rapid7 Labs team ran a Sonar scan to see how many IPs have port 3389 open and listening for TCP traffic. We found that 10,822,679 different IP addresses meet that criterion, spread out all over the world.
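Sonar does this at internet scale with purpose-built tooling, but the underlying check is simple. A minimal single-host sketch in Python, for hosts you own or are authorized to test, looks like this.

import socket

def rdp_port_open(ip, port=3389, timeout=2.0):
    """Return True if the host accepts TCP connections on the RDP port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

print(rdp_port_open("192.0.2.10"))  # documentation address; substitute a host you are allowed to scan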

 

So What?

With this dataset we can learn about how people looking to log into RDP servers operate. We have much more detail in the report, but some of our findings include:

  • We see that many times a day, every day, our honeypots are contacted by a variety of entities.
  • We see that many of these entities try to log into an RDP service which is not there, using a variety of credentials.
  • We see that a majority of the login attempts use simple passwords, most of which are present in collections of leaked passwords.
  • We see that as passwords get more complex, they are less and less likely to be present in collections of leaked passwords.
  • We see that there is a significant population of RDP enabled endpoints connected to the internet.

 

But wait, there's more!

If this interests you and you would like to learn more, come talk to us at booth #4215 at the RSA Conference.

By now, you’ve probably caught wind of Mark Stanislav’s ten newly disclosed vulnerabilities last week, or seen our whitepaper on baby monitor security – if not, head on over to the IoTSec resources page.

 

You may also have noticed that Rapid7 isn’t really a Consumer Reports-style testing house for consumer gear. We’re much more of an enterprise security services and products company, so what’s the deal with the baby monitors? Why spend time and effort on this?

 

The Decline of Human Dominance

Well, this whole “Internet of Things” is in the midst of really taking off, which I’m sure doesn’t come as news. According to Gartner, we’re on track to see 25 billion-with-a-B of these Things operating in just five years, or something around three to four Things for every human on Earth.

 

Pretty much every electronic appliance in your home is getting a network stack, an operating system kernel, and a cloud-backed service, and it’s not like they have their own network full of routers and endpoints and frequencies to do all this on. They’re using the home’s WiFi network, hopping out to the Internet, and talking to you via your most convenient screen.

 

Pwned From Home

In the meantime, telecommuting increasingly blurs the lines between the “work” network and the “home” network. From my home WiFi, I check my work e-mail, hop on video conferences, commit code to GitHub (both public and private), and interact with Rapid7’s assets directly or via a cloud service pretty much every day. I know I’m not alone on this. The imaginary line between the “internal” corporate network and the “external” network has been a convenient fiction for a while, and it’s getting more and more porous as traversing that boundary makes more and more business sense. After all, I’m crazy productive when I’m not in the office, thanks largely to my trusty 2FA, SSO, and VPN.

 

So, we’re looking at a situation where you have a network full of Things that haven’t been IT-approved (as if that stopped anyone before) all chattering away, while we’re trying to do sensitive work, like accessing and writing proprietary company data, on the very same network.

 

Oh, and if the aftermarket testing we’ve seen (and performed) is to be believed, these devices haven’t had a whole lot of security rigor applied.

 

Compromising a network starts with knocking over that low-hanging fruit, that one device that hasn’t seen a patch in forever, that doesn’t keep logs, that has a silly password on an administrator-level account – pretty much, a device that has all of the classic misfeatures common to video baby monitors and every other early market IoT device.

 

Let’s Get Hacking

Independent research is critical in getting the point across that this IoT revolution is not just nifty and useful. It needs to be handled with care. Otherwise, the IoT space will represent a mountain of shells, pre-built vulnerable platforms, usable by bad guys to get footholds in every home and office network on Earth.

 

If you’re responsible for IT security, maybe it’s time to take a survey of your user base and see if you can get a feel for how many IoT devices are one hop away from your critical assets. Perhaps you can start an education program on password management that goes beyond the local Active Directory, and gets people to take all these passwords seriously. Heck, teach your users how to check and change defaults on their new gadgets, and how to document their changes for when things go south.

 

In the meantime, check out our webinar tomorrow for the technical details of Mark’s research on video baby monitors, and join us over on Reddit and “Ask Me Anything” about IoT security and what we can do to get ahead of these problems.

Usually, these disclosure notices contain one, maybe two vulnerabilities on one product. Not so for this one; we’ve got ten new vulnerabilities to disclose today.

 

If you were out at DEF CON 23, you may have caught Mark Stanislav’s workshop, “The Hand that Rocks the Cradle: Hacking IoT Baby Monitors.” You may have also noticed some light redaction in the slides, since during the course of that research, Mark uncovered a number of new vulnerabilities across several video baby monitors.

 

Vendors were notified, CERT/CC was contacted, and CVEs have all been assigned, per the usual disclosure policy, which brings us to the public disclosure piece, here.

 

For more background and details on the IoT research we've performed here at Rapid7, we've put together a collection of resources on this IoT security research. There, you can find the whitepaper covering many more aspects of IoT security, some frequently asked questions around the research, and a registration link for next week's live webinar with Mark Stanislav and Tod Beardsley.

 

Summary

 

CVE             Access              R7 ID           Vulnerability                   Device
CVE-2015-2886   Remote              R7-2015-11.1    Predictable Information Leak    iBaby M6
CVE-2015-2887   Local Net, Device   R7-2015-11.2    Backdoor Credentials            iBaby M3S
CVE-2015-2882   Local Net, Device   R7-2015-12.1    Backdoor Credentials            Philips In.Sight B120/37
CVE-2015-2883   Remote              R7-2015-12.2    Reflective, Stored XSS          Philips In.Sight B120/37
CVE-2015-2884   Remote              R7-2015-12.3    Direct Browsing                 Philips In.Sight B120/37
CVE-2015-2888   Remote              R7-2015-13.1    Authentication Bypass           Summer Baby Zoom Wifi Monitor & Internet Viewing System
CVE-2015-2889   Remote              R7-2015-13.2    Privilege Escalation            Summer Baby Zoom Wifi Monitor & Internet Viewing System
CVE-2015-2885   Local Net, Device   R7-2015-14      Backdoor Credentials            Lens Peek-a-View
CVE-2015-2881   Local Net           R7-2015-15      Backdoor Credentials            Gynoii
CVE-2015-2880   Device              R7-2015-16      Backdoor Credentials            TRENDnet WiFi Baby Cam TV-IP743SIC

 

 

Disclosure Details

 

Vendor: iBaby Labs, Inc.

The issues for the iBaby devices were disclosed to CERT under vulnerability note VU#745448.

 

Device: iBaby M6

The vendor's product site for the device assessed is https://ibabylabs.com/ibaby-monitor-m6


Vulnerability R7-2015-11.1: Predictable public information leak (CVE-2015-2886)

The ibabycloud.com web site has a direct object reference vulnerability by which any authenticated user of the ibabycloud.com service is able to view camera details for any other user, including video recording details.

 

The object ID parameter is eight hexadecimal characters, corresponding with the serial number for the device. This small object ID space enables a trivial enumeration attack, where attackers can quickly brute force the object IDs of all cameras.

 

Once an attacker is able to view an account's details, broken links provide a filename that is intended to show available "alert" videos that the camera recorded. Using a generic AWS CloudFront endpoint found by sniffing the iOS app's traffic, the harvested filename can be appended to that URL to access data from the account. This effectively allows anyone to view videos that were created from that camera and stored on the ibabycloud.com service, until those videos are deleted, without any further authentication.

 

Relevant URLs

 

Additional Details

The ibabycloud.com authentication procedure has been non-functional since at least June 2015, continuing through the publication of this paper in September 2015. These errors started after testing was conducted for this research and, as of today, do not allow logins to the cloud service. That noted, it may be possible to still get a valid session via the API and subsequently leverage the site and API to gain these details.

 

Mitigations

Today, this attack is more difficult without prior knowledge of the camera's serial number, as all logins are disabled on the ibabycloud.com website. Attackers must, therefore, acquire specific object IDs by other means, such as sniffing local network traffic.

 

In order to avoid local network traffic cleartext exposure, customers should inquire with the vendor about a firmware update, or cease using the device.

 

 

Device: iBaby M3S

The vendor's product site for the device assessed is https://ibabylabs.com/ibaby-monitor-m3s

 

Vulnerability R7-2015-11.2, Backdoor Credentials (CVE-2015-2887)

The device ships with hardcoded credentials, accessible from a telnet login prompt and a UART interface, which grants access to the underlying operating system. Those credentials are detailed below.

 

Operating System (via Telnet or UART)

  • Username: admin
  • Password: admin

 

Mitigations

In order to disable these credentials, customers should inquire with the vendor about a firmware update. UART access can be limited by not allowing untrusted parties physical access to the device. A vendor-provided patch should disable local administrative logins, and in the meantime, end-users should secure the device’s housing with tamper-evident labels.

 

Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Mon, Jul 06, 2015: Vendor reply, requesting details for ticket #4085

Tue, Jul 07, 2015: Disclosure to vendor

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Wed, Sep 02, 2015: Public disclosure

 

 

Vendor: Philips Electronics N.V.

The issue for the Philips device was disclosed to CERT under vulnerability note VU#569536.

 

Device: Philips In.Sight B120/37

The vendor's product site for the device assessed is http://www.usa.philips.com/c-p/B120_37/in.sight-wireless-hd-baby-monitor

 

Vulnerability R7-2015-12.1, Backdoor Credentials (CVE-2015-2882)

The device ships with hardcoded and statically generated credentials which can grant access to both the local web server and operating system.

The operating system "admin" and "mg3500" account passwords are present due to the stock firmware used by this camera, which is used by other cameras on the market today.

The web service "admin" statically-generated password was first documented by Paul Price at his blog[1].

In addition, while the telnet service may be disabled by default on the most recent firmware, it can be re-enabled via an issue detailed below.

 

Operating System (via Telnet or UART)

  • Username: root
  • Password: b120root

 

Operating System (via Telnet or UART)

  • Username: admin
  • Password: /ADMIN/

 

Operating System (via Telnet or UART)

  • Username: mg3500
  • Password: merlin

 

Local Web Server

Reachable via http://{device_ip}/cgi-bin/{script_path}

  • Username: user
  • Password: M100-4674448

 

Local Web Server

Reachable via http://{device_ip}/cgi-bin/{script_path}

  • Username: admin
  • Password: M100-4674448
  • A recent update changes this password, but the new password is simply the letter 'i' prefixing the first ten characters of the MD5 hash of the device's MAC address (a derivation sketch follows below).
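For illustration, here is a minimal Python sketch of that derivation. The exact MAC formatting the firmware hashes (case, separators) was not confirmed here and is an assumption.

import hashlib

def derived_admin_password(mac):
    # 'i' + first ten hex characters of the MD5 of the MAC address string
    digest = hashlib.md5(mac.encode("ascii")).hexdigest()
    return "i" + digest[:10]

print(derived_admin_password("00:11:22:33:44:55"))  # hypothetical MAC address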

 

Vulnerability R7-2015-12.2, Reflective and Stored XSS (CVE-2015-2883)

A web service used on the backend of Philips' cloud service to create remote streaming sessions is vulnerable to reflective and stored XSS. Subsequently, session hijacking is possible due to a lack of an HttpOnly flag.

When accessing the Weaved cloud web service[2] as an authenticated user, multiple pages have a mixture of reflective and stored XSS in them, allowing for potential session hijacking. With this access, a valid streaming session could be generated and eavesdropped upon by an attacker. Two such examples are:

 

Vulnerability R7-2015-12.3, Direct Browsing via Insecure Streaming (CVE-2015-2884)

The method for allowing remote viewing uses an insecure transport, does not offer secure streams protected from attackers, and does not offer sufficient protection for the camera's internal web applications.

Once a remote viewing stream has been requested, a proxy connection to the camera's internal web service via the cloud provider Yoics[3] is bound to a public hostname and port number. These port numbers appear to range from port 32,000 to 39,000, as determined from testing. This bound port is tied to a hostname with the pattern of proxy[1,3-14].yoics.net, limiting the potential number of port and host combinations to an enumerable level. Given this manageable attack space, attackers can test for an HTTP 200 response in a reasonably short amount of time.
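To put a number on that attack space: with 13 candidate proxy hosts (proxy1 and proxy3 through proxy14) and roughly 7,000 candidate ports, the search is on the order of 90,000 combinations. A short Python sketch of the count, assuming the port range is inclusive, follows.

hosts = [f"proxy{i}.yoics.net" for i in [1] + list(range(3, 15))]  # proxy1, proxy3..proxy14
ports = range(32000, 39001)                                        # assumed inclusive range

print(f"{len(hosts)} hosts x {len(ports)} ports = {len(hosts) * len(ports)} combinations")
# 13 hosts x 7001 ports = 91013 combinations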

 

Once found, administrative privilege is available without authentication of any kind to the web scripts available on the device. Further, by accessing a Unicode-enabled streaming URL (known as an "m3u8" URL), a live video/audio stream from the camera will be accessible and appears to stay open for up to 1 hour on that host/port combination. There is no blacklist or whitelist restriction on which IP addresses can access these URLs, as revealed in testing.

 

Relevant URLs

  • Open audio/video stream of a camera: http://proxy{1,3-14}.yoics.net:{32000-39000}/tmp/stream2/stream.m3u8 [no authentication required]
  • Enable Telnet service on camera remotely: http://proxy{1,3-14}.yoics.net:{32000-39000}/cgi-bin/cam_service_enable.cgi [no authentication required]

 

Mitigations

In order to disable the hard-coded credentials, customers should inquire with the vendor about a firmware update. UART access can be limited by not allowing untrusted parties physical access to the device. A vendor-provided patch should disable local administrative logins, and in the meantime, end-users should secure the device’s housing with tamper-evident labels. In order to avoid the XSS and cleartext streaming issues with Philips' cloud service, customers should avoid using the remote streaming functionality of the device and inquire with the vendor about the status of a cloud service update.

 

Additional Information

Prior to publication of this report, Philips confirmed with Rapid7 that the tested device was discontinued by Philips in 2013, and that the current manufacturer and distributor is Gibson Innovations. Gibson has developed a solution for the identified vulnerabilities, and expects to make updates available by September 4, 2015.

 

Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Mon, Jul 06, 2015: Vendor reply, requesting details

Tue, Jul 07, 2015: Philips Responsible Disclosure ticket number 15191319 assigned

Tue, Jul 17, 2015: Phone conference with vendor to discuss issues

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Thu, Aug 27, 2015: Contacted by Weaved to validate R7-2015-12.2

Tue, Sep 01, 2015: Contacted by Philips regarding the role of Gibson Innovations

Wed, Sep 02, 2015: Public disclosure

 

 

Vendor: Summer Infant

The issues for the Summer Infant device were disclosed to CERT under vulnerability note VU#837936.

 

Device: Summer Baby Zoom WiFi Monitor & Internet Viewing System

The vendor's product site for the device assessed is http://www.summerinfant.com/monitoring/internet/babyzoomwifi.

 

Vulnerability R7-2015-13.1, Authentication Bypass (CVE-2015-2888)

An authentication bypass allows for the addition of an arbitrary account to any camera, without authentication.

The web service MySnapCam[4] is used to support the camera's functionality, including account management for access. A URL retrievable via an HTTP GET request can be used to add a new user to the camera. This URL does not require any of the camera's administrators to have a valid session to execute this request, allowing anyone requesting the URL with their details against any camera ID to have access added to that device.

After a new user is successfully added, an e-mail will then be sent to an e-mail address provided by the attacker with authentication details for the MySnapCam web site and mobile application. Camera administrators are not notified of the new account.

 

Relevant URL

 

Vulnerability R7-2015-13.2, Privilege Escalation (CVE-2015-2889)

An authenticated, regular user can access an administrative interface that fails to check for privileges, leading to privilege escalation.

 

A "Settings" interface exists for the camera's cloud service administrative user and appears as a link in their interface when they login. If a non-administrative user is logged in to that camera and manually enters that URL, they are able to see the same administrative actions and carry them out as if they had administrative privilege. This allows an unprivileged user to elevate account privileges arbitrarily.

 

Relevant URL

 

Mitigations

In order to avoid exposure to the authentication bypass and privilege escalation, customers should use the device in a local network only mode, and use egress firewall rules to block the camera from the Internet. If Internet access is desired, customers should inquire about an update to Summer Infant's cloud services.

 

Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Tue, Sep 01, 2015: Confirmed receipt by vendor

Wed, Sep 02, 2015: Public disclosure

 

 

Vendor: Lens Laboratories(f)

The issues for the Lens Laboratories(f) device were disclosed to CERT under vulnerability note VU#931216.

 

Device: Lens Peek-a-View

The vendor's product site for the device assessed is http://www.amazon.com/Peek---view-Resolution-Wireless-Monitor/dp/B00N5AVMQI/

 

Of special note, it has proven difficult to find a registered domain for this vendor. All references to the vendor point at Amazon directly, but Amazon does not appear to be the manufacturer or vendor.

 

Vulnerability R7-2015-14, Backdoor Credentials (CVE-2015-2885)

The device ships with hardcoded credentials, accessible from a UART interface, which grants access to the underlying operating system, and via the local web service, giving local application access via the web UI.

Due to weak filesystem permissions, the local OS ‘admin’ account has effective ‘root’ privileges.

 

Operating System (via UART)

  • Username: admin
  • Password: 2601hx

 

Local Web Server

Site: http://{device_ip}/web/

  • Username: user
  • Password: user

 

Local Web Server

Site: http://{device_ip}/web/

  • Username: guest
  • Password: guest

 

Mitigations

In order to disable these credentials, customers should inquire with the vendor about a firmware update. UART access can be limited by not allowing untrusted parties physical access to the device. A vendor-provided patch should disable local administrative logins, and in the meantime, end-users should secure the device’s housing with tamper-evident labels.

 

Disclosure Timeline

Sat, Jul 04, 2015: Attempted to find vendor contact

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Wed, Sep 02, 2015: Public disclosure

 

 

Vendor: Gynoii, Inc.

The issues for the Gynoii devices were disclosed to CERT under vulnerability note VU#738848.

 

Device: Gynoii

The vendor's product site for the device assessed is http://www.gynoii.com/product.html

 

Vulnerability R7-2015-15, Backdoor Credentials (CVE-2015-2881)

The device ships with hardcoded credentials, accessible via the local web service, giving local application access via the web UI.

 

Local Web Server

Site: http://{device_ip}/admin/

  • Username: guest
  • Password: guest

 

Local Web Server

Site: http://{device_ip}/admin/

  • Username: admin
  • Password: 12345

 

Mitigations

In order to disable these credentials, customers should inquire with the vendor about a firmware update.

 

Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Wed, Sep 02, 2015: Public disclosure

 

 

Vendor: TRENDnet

The issues for the TRENDnet device were disclosed to CERT under vulnerability note VU#136207.

 

Device: TRENDnet WiFi Baby Cam TV-IP743SIC

The vendor's product site for the device under test is http://www.trendnet.com/products/proddetail.asp?prod=235_TV-IP743SIC

 

Vulnerability R7-2015-16: Backdoor Credentials (CVE-2015-2880)

The device ships with hardcoded credentials, accessible via a UART interface, giving local, root-level operating system access.

 

Operating System (via UART)

  • Username: root
  • Password: admin

 

Mitigations

In order to disable these credentials, customers should inquire with the vendor about a firmware update. UART access can be limited by not allowing untrusted parties physical access to the device. A vendor-provided patch should disable local administrative logins, and in the meantime, end-users should secure the device’s housing with tamper-evident labels.

 

Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Mon, Jul 06, 2015: Vendor reply, details disclosed to vendor

Sun, Jul 16, 2015: Clarification sought by vendor

Mon, Jul 20, 2015: Clarification provided to vendor

Tue, Jul 21, 2015: Disclosure to CERT

Wed, Sep 02, 2015: Public disclosure

 

 

Not Just Baby Monitors

As you can see, there were several new findings across a range of vendors, all operating in the same space. Here at Rapid7, we believe this is not unique to the video baby monitor industry in particular, but is indicative of a larger, systemic problem with IoT in general. We've put together a collection of IoT resources, including a whitepaper and a FAQ, covering these issues, which should fill you in on where we're at on this IoT security journey. Join us next week for a live webinar where Mark Stanislav and Tod Beardsley will discuss these issues further, or just use the #IotSec hashtag on Twitter to catch our attention with a question or comment.

 

In the meantime, keep an eye on those things keeping an eye on your infants and toddlers.

 

Update (Sep 02, 2015): Gynoii acknowledged the above research shortly after publication and are assessing appropriate patch strategies.

Update (Sep 02, 2015): iBaby Labs communicated that access token expiration and secure communication channels have been implemented.

Update (Sep 02, 2015): Summer Infant tweeted that all reported issues have been resolved.

Update (Sep 03, 2015): TRENDnet reports updated firmware available here (version 1.0.3), released on Sep 02, 2015.

 


[1] http://www.ifc0nfig.com/a-close-look-at-the-philips-in-sight-ip-camera-range/

[2] http://www.weaved.com/

[3] https://www.yoics.net

[4] http://www.mysnapcam.com/

Over the past couple of years I've dived into Internet of Things (IoT) security research and found it to be a rather fun (and sometimes terrifying) mixture of technologies, [in]delicately woven together to provide for some pretty useful, and not so useful, devices. It's a very exciting time right now as technologists navigate the nascent waters to determine the best-of-breed platforms, protocols, and languages that will drive IoT for years to come. We're at the HD-DVD v. Blu-Ray phase in a lot of ways, except there aren't just two players in the space, but thousands of organizations all trying to get their special new technology to be the cornerstone of a revolution.

 

Recently, I was invited to speak for the Plug and Play Tech Center as part of my mentorship for their Plug and Play Accelerator portfolio companies. In my presentation, I conveyed to the very people who are creating the newest generation of innovation what real-world security research has uncovered for those early participants in IoT. By explaining the security missteps of vendors and guiding the appropriate ways to avoid those mistakes in the future, my hope is that the audience left with actionable guidance to create a more safe and beneficial contribution to this world.

 

In this blog post I'd like to give a high-level sense of what IoT security research often entails. This is definitely not a comprehensive post, as I'd have to write a book and then get about a dozen co-authors to accomplish that. This post is intended for the casual security researcher, or even IoT vendor, who wants to know what this research looks like, and where to get started. I hope it spurs additional research into this space, because there's plenty of ground to cover.

 

IoT Has Many Avenues... Where to Start?

There's a long-winded excitement to IoT security research because in many cases there are a number of technology components involved, including mobile applications, Bluetooth, Wi-Fi, device firmware, services, APIs, and a variety of network protocols. Each of these has the potential to provide a researcher with new insight into the operation of the device or become a critical foothold to take a next step in the research process. Where you start with research covering this many areas is certainly a personal preference, but I happen to like to begin with the mobile application.

 

Mobile Applications

Because a mobile application is often central to the function of many IoT devices, the contents of the application are likely to provide clues about how the device operates and could even leak hardcoded credentials, protocol details, and keys. Since I am an iOS person, I often use Clutch on a jailbroken iPhone to provide me with an unencrypted version of the device's app. Once you have this, simple CLI tools like strings and otool are a good start to get a sense of what the application is up to.

 

Of course, more advanced application analysis is possible with tools like Hopper or IDA, but you'd be terrified what you can find without much trouble in these applications. Also, I highly recommend taking a look at Dan Mayer's awesome tool, idb, which will help streamline your iOS efforts with a nice GUI and integrated tooling to accomplish tasks more easily.

 

Inside the application, try and look for interesting classes, hardcoded strings, and other details that may provide privilege over the device never intended by the developer. I've found hardcoded root credentials, API keys for Amazon Web Services, URLs never meant to be known to end-users, and manufacturing network configurations. There can be a treasure trove of details if you look intently enough and a lot of flexibility if you instrument against a live application and see how it operates.

 

If you're interested in learning beyond these basics, check out Android Hacker's Handbook and/or the iOS Hacker's Handbook for in-depth knowledge from some of the best minds on each of their respective platforms.

 


Network Traffic

Once I've gotten into the mobile application and have seen what I can see, I like to start using the mobile application and begin interacting with the IoT device. This includes every step that I can witness, from setup to registration to normal usage. Using network analysis standbys such as mitmproxy, Wireshark, Burp Suite, and Ettercap, you can view connections from the mobile application to the device, and the device to the Internet. In many cases, you'll even be able to intercept SSL/TLS connections with little trouble using these types of tools. Some applications (and hopefully more in the future) utilize certificate pinning to thwart MITM attempts of encrypted connections, but of course there's always a way around that.
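As one small example of that workflow, mitmproxy lets you script the interception with a Python addon. The sketch below just logs every request the app makes while you exercise the device (run it with mitmdump -s log_iot.py); which hostnames and parameters turn out to be interesting will vary per device.

from mitmproxy import http

class IoTLogger:
    def request(self, flow: http.HTTPFlow) -> None:
        # Record the endpoint and any form parameters the mobile app sends.
        print(flow.request.method, flow.request.pretty_url)
        if flow.request.urlencoded_form:
            print("  form:", dict(flow.request.urlencoded_form))

addons = [IoTLogger()]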

 

By combining knowledge you gained through looking at the mobile application with live traffic, you can quickly understand how this device functions at a network level and start diving deeper into how it performs authentication, transport security, what inputs the components take, how outputs are rendered, platforms & frameworks being leveraged, and even see data that the developer perhaps never intended you to see. Whether web or mobile, applications often leak data unintentionally by developers who failed to scope a database lookup tightly enough, or send data over before it should have been sent. Whether by design or mistake, API calls and the resulting outputs can lead to some interesting findings.

 

Beyond simple network traffic, interception can actually lead to entire firmware images for the device. While older technologies may have required users to plug in a USB device with a firmware image loaded, IoT is heavily driven on over-the-air upgrades, often via Wi-Fi or cellular networks. Due to this, vendors aren't as likely to widely publish their firmware on FTP sites like you may have been used to with your older home routers. By observing network traffic at the right moment (e.g. user-requested firmware checks, vendor-pushed updates), analysis can lead to either downloading the entire firmware image while it's in transit, or a URL that contains firmware images when your device checks for updates.

 

Embedded Hardware


Investigating an IoT device's actual hardware can be quite beneficial to research, especially if you were unable to acquire the firmware via other methods. Further, physical access can quite often lead to administrative console access, giving a direct method to explore the operating system, applications, and configuration in a running environment. While an occasional device may provide network services like SSH or Telnet to connect to it, the credentials to login are rarely provided to end users. Thus, hardware may be one of the only ways to really gain hands-on access to the environment powering your research project's device. There are a number of standards related to hardware debugging that can lead to physical interaction with a device, such as JTAG and UART. Well known hardware hacking tools such as the Bus Pirate open up a new world of IoT research possibilities using these protocols and more.


In the case of JTAG, finding pins could provide deep access to the hardware, including capabilities to dump memory or halt the chipset. Because analyzing test points of hardware to find the appropriate pins can be time consuming, Joe Grand created a device called the JTAGulator, which automates this process for both JTAG and UART. While JTAGulator is great at identifying JTAG pins, it doesn't really do much with JTAG beyond basic sanity checks. I personally have a SEGGER J-Link that has numerous tools to interact with JTAG on IoT devices, allowing for tasks such as dumping memory via GDB.

 

While JTAG is made for deep, powerful control of hardware components, UART often provides a simple serial console connection to the device's operating environment. With either your existing JTAGulator, or a simple FTDI USB serial cable, software like minicom or screen is all that's needed to interact with a UART connection. Once connected via UART, a researcher can investigate the device from the inside-out, executing applications locally, exploring configuration files, and looking for new avenues of exploitation. While being able to download firmware, or dump memory, or analyze network traffic is hugely important, being able to work in the actual device's operating environment can be a major foothold in understanding how the device is working.


Lastly, if you're still struggling to find firmware images, you may be lucky enough to have a device where the flash can be physically accessed with a testing clip for SOIC packages. A tool such as flashrom can easily leverage a device that speaks SPI to pull the firmware directly off of the flash while still attached to the device. I've personally had a great experience with Xipiter's custom device called the Shikra to pull firmware using flashrom when other tools had failed to get a clean dump of the image. Even better, Shikra also speaks UART and JTAG, too!


Wireless Communications


One reason why IoT is so capable to amaze us is through a distinct lack of network tethering. Without the need for thousands of feet of CAT-6 or serial cables laying around, IoT is able to exist in the form of everything from IP cameras to tiny water sensors. Today, standards like Wi-Fi, Bluetooth Low Energy (BLE) and ZigBee have changed what is possible over the air, enabling IoT innovators to continue to realize their creativity. Much like the other areas I've discussed, a special world of technologies exist to enable researchers to better understand the usage of these wireless protocols as the glue holding IoT function together.


Long-time Wi-Fi hackers are probably quite familiar with software like Kismet, but they may not realize that Kismet can do more than just Wi-Fi. In fact, using hardware such as an Ubertooth One, Kismet can provide Bluetooth capabilities to researchers. Alternatively, there are a number of Ubertooth tools capable of providing insight into Bluetooth traffic. If, however, you're more interested in ZigBee, Joshua Wright's KillerBee framework can be combined with an Atmel Raven USB to give you a powerful assessment toolkit in a small form factor. Lastly, if you're really not playing around, a HackRF provides software defined radio (SDR) functionality that won't let you down.


IoT is a Fabric of Technologies

If you've been reading this post, you may be thinking, "...this just seems like a lot of existing technologies all bundled together..." -- bingo! You're all-but-guaranteed to have a broad experience in technologies, protocols, and platforms by diving into an IoT security research project. Due to the nascency of IoT, there are very few standardized aspects of a device you'll encounter, ensuring a fairly unique and exciting experience. Perhaps more intriguing is the reality that the research you perform could have a lasting impact on the direction that IoT moves by guiding vendors to make more secure choices.


One such initiative to help guide IoT down a safer path is called BuildItSecure.ly. By creating opt-in relationships with IoT vendors, small and large, the researchers at BuildItSecure.ly can help to inform and influence the decisions being made by the very innovators you're waiting to release their next product. Vendors such as Dropcam, Pinoccio, Belkin, and Wink have all joined the initiative to work with world-class security researchers to make sure their products are more secure. While I am biased due to being a co-founder of the year-old initiative, I have to say that this sort of effort is exactly the kind of goodwill relationship that the research community needs more of and I'm proud to see it resonating with those familiar.

 

I hope that this blog post has helped to inform you of the burgeoning world of IoT security research and that you'll decide to make one of the many devices out there a focus for your next project.
