
Information Security


Inmates running the asylum. The fox guarding the henhouse. You've no doubt heard these terms before. They're clever phrases that highlight how the wrong people are often in charge of things. It's convenient to think that the wrong people are running the show elsewhere, but have you taken the time to reflect inward and determine how this very dilemma might be affecting your own organization? I see this happening all the time with security assessments. In organizations both large and small, I see improper testing - or no testing at all - of the systems that should otherwise be in scope and fair game for assessment. The very people in charge of security assessments are the ones determining how things are going to turn out.

I see everyone from CIOs to network admins and all of the IT/security roles in between setting parameters on what can and cannot be tested. Ditto for how it can be tested. Oftentimes, an external security assessment/penetration test is performed but not everything is being looked at. Sometimes it's something relatively benign like a marketing website, but other times it's actual enterprise applications that are being overlooked (often in the name of "someone else is hosting it and is therefore responsible for it"). Other times, I hear people state that their auditors or customers aren't asking for certain systems to be tested and that, therefore, it doesn't really matter.

 

I think the general consensus among stakeholders who read the actual assessment reports is that they are getting the current status of everything. But that's not the case in many situations. There's no doubt that people need what they need and nothing more. In fact, legal counsel would likely advise doing the bare minimum, documenting the bare minimum, and sharing the bare minimum. That's just how the world works. At the end of the day, people presumably know what they want/need. I'm just not convinced that the current approach, whereby IT and security staff define the systems to be tested and how they need to be tested (and, therefore, the outcomes), is the best one.

 

Most security assessment reports have a notes/scope section that outlines what was tested and what was not. However, what's often missing is all of the details regarding other things that people don't often think about in this context such as:

  • How the systems that may not have been tested can/will impact those that were tested if they have exploitable vulnerabilities
  • Whether or not authentication was used (it's important to distinguish between authenticated and unauthenticated testing – and to do both)
  • What network security controls, i.e. firewall and IPS, were disabled or left in place (you have to look past any blocking)
  • What level of manual analysis/penetration testing was performed, including how much time was spent and whether or not social engineering/email phishing were a part of the testing (it makes a difference!)

 

There are so many caveats associated with modern-day security testing that no one really knows if everything has been looked at in the proper way. So, what do you do? Do you question the validity of existing testing methods and reports? Do you step back and ask tougher questions of those who were doing the testing? Perhaps there needs to be an arm's-length entity involved with defining what gets tested and how it gets tested, including very specific approaches, systems in scope, and tools that are to be used.

 

A similar challenge exists in healthcare – something we can all relate to. Take, for instance, when a patient gets an MRI or CAT scan: the results for the radiologist to analyze will be much different than those of a more focused X-ray or ultrasound. Perhaps the prescribing doctor thinks the patient just needs an X-ray when, in fact, they need a PET scan. That very thing happened to my mother when she was fighting lung cancer. Her doctors focused on her lungs and hardly anything else. Her chemotherapy was working well and her lungs continued to look good over time. What was missed, however, was the cancer that had spread to other parts of her body. The prescribed diagnostics were focused on what was thought to be important but completely overlooked what was going on in the rest of her body. Unfortunately, given how much time had passed while the cancer spread elsewhere (and was overlooked), the outcome was not a positive one. Similarly, it's important to remember that any security testing that's assumed to paint the entire picture may not be doing that at all.

 

Are the inmates running the asylum? Is the fox guarding the henhouse? Perhaps a bit here and there. However, it's not always people in IT and security intentionally limiting the scope of security testing by keeping certain systems out of the loop and looking the other way to generate false or misrepresented outcomes. I do know that is going on in certain situations, but this issue is a bit more complicated. I don't think there's a good solution other than closer involvement on the part of management, internal auditors, and outside parties in scoping and performing these assessments. If anything, the main focus should be ensuring that expectations are properly set.

 

A false sense of security is the enemy of decision makers. Never, ever should someone reading a security assessment report assume that all the proper testing has been done or that the report is a complete and accurate reflection of where things truly stand. Odds are that it's not.

Recently we have all found ourselves talking about the risk and impact of poorly secured IoT technology and who is responsible. The fact is, there is enough blame to go around for everyone, but let's not go there. Let's start focusing instead on solutions that can help secure IoT technology.

 

Usability has been an issue that has plagued us since the beginning of time. As an example, I just have to go back to my youth and my parents' VCR flashing 12:00 all the time. We laugh at that because it showed their lack of understanding of how the technology worked, and of course it was not a real risk to anything beyond not knowing the time or being able to preset and record shows. Today, the inability to understand or configure our technology is a much bigger risk than a flashing 12:00 on our parents' VCR. Misconfigured IoT devices can lead to various compromises of our information or allow our technology to be used in attacks against others. Currently, we often find IoT devices working out of the box with every feature enabled and using default passwords, and this approach has come back to haunt us in a number of cases.

 

I am sure we can all agree that the days of every feature being enabled and default passwords out of the box need to end. Don't get me wrong: I still think IoT technology should be easy to deploy -- but with security built in. So what should that look like? Let me break it down into a few basic items that I think are paramount to getting to that point.

 

  • No default passwords enabled. It is easy enough during deployment of a product to have the user set a strong password. In cases where each device has a unique default password, those should also be changed; if not, the user should be warned that the password has not been changed and forced to acknowledge that through a multi-step process each time they use the product. Check out this new tool built by the Metasploit team at Rapid7 called IoTSeeker, which scans your network for IoT devices and lets you know if the default password is being used.

 

  • Initial installation should only enable the services needed for functionality. Extra services should only be configurable after initial setup. A checkbox list of features during initial setup is not the way to go; it will only lead to users selecting every one of them just to make sure the installation works. I know, because in a past life I watched coworkers do exactly that during product installations.

 

 

  • Good documentation is critical for walking a user through a secure setup. Besides covering standard setups (which may include an intuitive web wizard), the documentation should include guidance on enabling expanded features or specific services that have security implications the user should be made aware of during setup. With the ever-expanding list of capabilities and features associated with IoT, it's imperative that the end user is given guidance to help select and implement the most secure configuration during setup of their device.

 

  • Automated firmware patching should be the default. If not, the user should be prompted whenever they use the product and firmware updates are available. Patching allows us to fix security issues within products moving forward. We are always going to have problems, and having the ability to correct them on the fly is important.

 

 

 

This simple list points out items that create a solid foundation from which we can continue building on IoT security while maintaining a reasonable degree of usability; however, I expect it will still take a while before we see these items become commonplace within all new IoT products -- and I am looking forward to that day.

 

If you are looking for a second opinion on how you should be securing the IoT devices used within your environment, check out our IoT security services.

by Tod Beardsley and Bob Rudis

 

What's Going On?

 

Early in November, a vulnerability was disclosed affecting Zyxel DSL modems, which are rebranded and distributed to many DSL broadband customers across Europe. Approximately 19 days later, this vulnerability was leveraged in widespread attacks across the Internet, apparently connected with a new round of Mirai botnet activity.

 

If you are a DSL broadband customer, you can check to see if your external TCP port 7547 is accessible to the Internet by using popular public portscanning services provided by WhatsMyIP, SpeedGuide, or your own favorite public portscanning service. If it is, your ISP should be able to confirm if your modem is vulnerable to this attack.

 

Vulnerability Details

 

On November 7, "Kenzo" disclosed two vulnerabilities affecting the Zyxel D1000 DSL modem on the Reverse Engineering blog, here. This DSL modem is used by residential DSL subscribers in Ireland, and appears to be distributed by the Irish ISP, Eir. It's unknown if Kenzo disclosed these issues to either Zyxel or Eir prior to public disclosure.

 

Two issues were identified, both involving the TR-064 SOAP service on the DSL modem, running on TCP port 7547. The first is a command injection vulnerability in the way the device parses new NTP server configurations, where an attacker can enclose an arbitrary shell command in backticks when setting the NewNTPServer1 parameter. The second is an information leak vulnerability where an attacker can access the GetSecurityKeys command to learn the device's WiFi and administrative password.

 

Kenzo provided a proof-of-concept Metasploit module to exercise these vulnerabilities to expose the administrative web service on the Internet-facing side of the modem and to extract the administrative password to that admin web service.

 

On November 26th, the command injection issue was being actively exploited in the wild, apparently as part of another wave of Mirai-style IoT botnet activity. In particular, DSL modems provided to Deutsche Telekom customers in Germany and Austria, under the brandname "Speedport," appeared to be vulnerable. As a result of this attack, many Telekom subscribers were knocked offline on November 27th, and DT has since updated the Speedport firmware.

 

Today, on November 29th, the Metasploit open source project has started work on converting Kenzo's original, special purpose proof-of-concept exploit to a more generally useful Metasploit module that can be used to test the vulnerability in a safer and more controlled way. That work continues on Pull Request #7626.

 

Exploit Activity Details

 

[Figure: timeline3.png]

 

Rapid7’s Heisenberg Cloud started picking up malicious SOAP HTTP POST requests to port 7547 on November 26th. We were able to pick up these requests due to the “spray and pray” nature of the bots searching for vulnerable targets. To-date, we’ve seen over 63,000 unique source IP addresses associated with these attempts to take over the routers, peaking at over 35,000 unique attempts per day on November 27th.
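
As a rough illustration of how counts like these can be pulled out of honeypot logs, here is a minimal sketch that tallies unique source IPs per day for port 7547 probes. The log format (a CSV with timestamp, src_ip, and dst_port columns) is a hypothetical stand-in, not Heisenberg Cloud's actual schema.

# Tally unique source IPs per day for port 7547 probes.
# Assumes a hypothetical CSV log with columns: timestamp, src_ip, dst_port.
import csv
from collections import defaultdict
from datetime import datetime

def unique_sources_per_day(log_path):
    per_day = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["dst_port"] != "7547":
                continue
            day = datetime.fromisoformat(row["timestamp"]).date()
            per_day[day].add(row["src_ip"])
    return {day: len(ips) for day, ips in sorted(per_day.items())}

if __name__ == "__main__":
    for day, count in unique_sources_per_day("heisenberg_7547.csv").items():
        print(day, count)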

 

As the map below shows, the bots attempting to take over the routers are geographically dispersed.

 

[Figure: map.png – geographic distribution of attacking bots]

As the below temporal flow diagram shows, different regions are more prevalent as sources of this activity on different days (the source regions are on the left, the days they were active are on the right).

 

[Figure: region.png – temporal flow of source regions by day]

There was little change in the top 10 source countries (by unique node count) over the attack period, but some definitely stood out more than others (like Brazil and the U.K.).

 

We’ve also seen all malicious payload types, though not all of them appeared on all days as seen in the third chart.

 

[Figure: cc.png – top source countries by unique node count]

[Figure: payloads.png – payload types observed by day]

Not all payloads were evenly distributed across all countries:

 

[Figure: heatmap.png – payload distribution by country]

 

What Can You Do to Protect Yourself?

 

The vulnerabilities being exploited in these attacks are present in Zyxel DSL modems, which are commonly used in European and South American consumer markets, and may be rebranded by regional ISPs, as they are for Eir and Deutsche Telekom. The vulnerabilities described do not appear to affect the cable modems commonly used in the United States.

 

If you are a DSL customer and concerned that you may be vulnerable, you can use popular portscanning services provided by WhatsMyIP, SpeedGuide, or others to assess if your external TCP port 7547 is accessible from the Internet. If the port times out or is otherwise not reachable, you're in the clear. If it is accessible, you should contact your ISP to see if a) this can be restricted, and b) if you are running a vulnerable device.
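
If you would rather script the check yourself from a host outside your own network (a cloud VM, for instance), a minimal TCP reachability test along the following lines also works. This is only a sketch: an open port tells you the service is exposed, not that the device is vulnerable, and the address shown is a placeholder for your own external IP.

# Minimal TCP reachability check for port 7547.
# Run this from a host OUTSIDE the network being tested; replace the placeholder IP.
import socket

def port_open(host, port=7547, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    external_ip = "203.0.113.10"  # placeholder: your modem's public IP
    if port_open(external_ip):
        print("TCP 7547 is reachable -- ask your ISP whether your modem is vulnerable.")
    else:
        print("TCP 7547 appears closed or filtered.")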

 

For ISPs, it is A Bad Idea to expose either TR-069 or TR-064 services to arbitrary sources on the Internet; while ISPs may need access to this port to perform routine configuration maintenance on customer equipment, it should be possible for local and edge firewall rules to restrict access to this port to only IP addresses that originate from the ISP's management network.

 

Meanwhile, penetration testers should follow the progress of converting Kenzo's original proof-of-concept to a fully functional Metasploit module over at Pull Request #7626 on Metasploit's GitHub repository. If you are an open source security software developer, we'd love to have your input there.

 

Update (2016-Nov-30): This blog post originally referred to the vulnerable service as TR-069, but it's since become clear this issue is in a related service, TR-064.

Last week, an important new development in the way the US government deals with hackers was unveiled: the first ever vulnerability disclosure policy of the Department of Defense. Touted by Secretary Ash Carter as a 'see something, say something' policy for the digital domain, it not only provides guidelines for reporting security holes in DoD websites, it also marks the first time since hacking became a felony offense over 30 years ago that helpful hackers who want to secure DoD websites have a legal, safe channel to do so without fear of prosecution.

 

This is historic.

 

We don’t often see this type of outreach to the hacker community from most private organizations, let alone the US government. In a survey of the Forbes Global 2000 companies last year, only 6% had a public method to report a security vulnerability. That means 94% of well-funded companies that spend millions on security today, buying all kinds of security products and conducting “industry best practice” security measures and scans, have not yet established a clear public way to tell them about the inevitable: that their networks or products still have security holes, despite their best efforts.

 

The simple fact that all networks and all software can be hacked isn’t a surprise to anyone, especially not attackers. Breach after breach is reported in the news. Yet the software and services upon which the Internet is built have never had the broad and consistent benefit of friendly hacker eyes reporting security holes to get them fixed. Holes go unplugged, breaches continue.

 

This is because, instead of creating open-door policies for vulnerability disclosure, too many organizations would rather postpone having to deal with it, often until it's too late. Meanwhile, helpful hackers who "see something" often don't "say something" because they are afraid it might land them in jail.

 

As a hacker myself, I have in the past observed and not reported vulnerabilities that I stumbled upon outside of a paid penetration test, because of that very real fear. I've built vulnerability disclosure programs at two major vendors (Symantec and Microsoft), in part to shield employees of those companies who found other organizations' vulnerabilities from having to face an angry bug recipient alone.

 

Even then, wearing the mighty cape of a mega corporation, I and others trying to disclose security holes to other organizations encountered the same range of reactions that independent researchers face: from ignoring the report, to full on legal threats, and one voicemail that I wish I’d saved because I learned new swear words from it. For me, that is rare. But what wasn’t rare was the fear that often fuels those negative reactions from organizations that haven’t had a lot of experience dealing with vulnerability disclosure.

 

Fear, as they say in Dune, is the mind-killer. Organizations must not fear the friendly hacker, lest they let the little death bring total oblivion.

 

There is no excuse for organizations letting fear of working with hackers prevent them from doing so for defense. There is no excuse for lacking a vulnerability disclosure policy, in any organization, private or public sector.

 

The only barrier is building the capability to handle what can be a daunting prospect: facing the world of hackers. Big companies like Google, Apple, and Microsoft have had to deal with this issue for a very long time, and have worked out systems that work for them. But what about smaller organizations? What about other industries outside of the tech sector? What about IoT? And what about governments, who must walk the line of getting the help they need from the hacker community without accidentally giving free license to nation-states through an overly permissive policy?

 

There are guidelines for this process in the public domain – too many to list. 2017 will mark my ninth year attending ISO standards meetings, where I've dedicated myself to helping create the standards ISO 29147 (Vulnerability disclosure) and ISO 30111 (Vulnerability handling processes). Neither of these standards was available for free until April of 2016. Now the essential one to start with, ISO 29147, is available for download from ISO at no cost. Most people don't even know it exists, let alone that it's now free. Both standards act as a guide to best practices, not a roadmap for an organization to build its vulnerability disclosure program bit by bit.

 

Enter the first Vulnerability Coordination Maturity Model – a description of five capability areas in vulnerability disclosure that I designed to help organizations gauge their readiness to handle incoming vulnerability reports. These capabilities go beyond engineering and look at an organization's overall strengths in executive support, communication, analytics, and incentives.

The VCMM provides an initial baseline, and a way forward for any organization, small or large, public or private, that wants to confront their fear of working with friendly hackers, in defense against very unfriendly attackers.

 

The model was built from my years of vulnerability disclosure experience, on all sides of that equation: in open source and closed source companies, as the hacker finding the vulnerability, as the vendor responding to incoming vulnerabilities, and as the coordinator between multiple vendors on issues that affected shared libraries, many years before Heartbleed was a common term heard around the dinner table.

 

I was fortunate to be able to present this Vulnerability Coordination Maturity Model at the Rapid7 UNITED Summit a few weeks ago, and my company was honored to work directly with the Department of Defense on this latest bit of Internet history. And though I'm known for creating Microsoft's first ever bug bounty programs and for advising the Pentagon on its first ever bug bounty program, my work now focuses more heavily on core vulnerability disclosure capability-building, and on helping organizations overcome their fears in dealing with hackers.

 

The way I see it, if 94% of Forbes Global 2000 is still lagging behind the US government in its outreach to helpful hackers, my work is best done far earlier in an organization’s life than when they are ready to create cash incentives for bugs. In fact, not everyone is ready for bug bounties, not public ones anyway, unless they have the fundamentals of vulnerability disclosure ready. But that’s a topic for another day.

 

Today, as we bear witness to a significant positive shift in the US government’s public work with hackers, I’m filled with hope. Hope that the DoD’s new practice of vulnerability disclosure programs and bounties will expand as a successful model to the rest of the US government, hope that other governments will start doing this too, hope that the rest of the Forbes top 2000 will catch up, and hope for every interconnected device looming on the Internet of Things to come.

 

Today, we have no time to fear our friends, no matter where in the world or on the Internet they come from. There is no room for xenophobia when it comes to defending the Internet. Together, we must act as defenders without borders, without fear.

We live in an interesting time for research related to Internet scanning.

 

There is a wealth of data and services to aid in research. Scanning-related initiatives like Rapid7's Project Sonar, Censys, Shodan, Shadowserver, or any number of other public/semi-public projects have been around for years, collecting massive troves of data. The data, and the services built around it, have been used for all manner of research.

 

In cases where existing scanning services and data cannot answer burning security research questions, it is not unreasonable to slap together some minimal infrastructure to perform Internet-wide scans. Mix the appropriate amounts of zmap or masscan with some carefully selected hosting/cloud providers, a dash of automation, and a crash course in the legal complexities related to "scanning," and the questions you ponder over morning coffee can have answers by day's end.

 

So, from one perspective, there is an abundance of signal.  Data is readily available.

 

There is, unfortunately, a significant amount of noise that must be dealt with.

 

Dig even slightly deeper into almost any data produced by these scanning initiatives and you'll have a variety of problems to contend with that can waylay researchers. For example, there are a variety of snags related to the collection of the scan data that could influence the results of research:

 

  • Natural fluctuation of IPs and endpoint reachability due to things like DHCP, mobile devices, or misconfiguration.
  • When blacklists or opt-out lists are utilized to allow IP "owners" to opt-out from a given project's scanning efforts, how big is this blacklist?  What IPs are in it?  How has it changed since the last scan?
  • Are there design issues/bugs in the system used to collect the scan data in the first place that influenced the scan results?
  • During a given study, were there routing or other connectivity issues that affected the reachability of targets?
  • Has this data already been collected?  If so, can that data be used instead of performing an additional scan?

 

Worse, even in the absence of any problems related to the collection of the scan data, the data itself is often problematic:

 

  • Size. Scans of even just a single port and protocol can result in a massive amount of data to be dealt with. For example, a simple HTTP GET request to every 80/TCP IPv4 endpoint currently results in a compressed archive of over 75G. Perform deeper HTTP/1.1 vhost scans and you'll quickly have to contend with a terabyte or more. Data of this size requires special consideration when it comes to storage, transfer, and processing (see the sketch after this list).
  • Variety. Internet-connected bovine milking machines, toasters, *** toys, appliances, and an increasingly large number of "smart" or "intelligent" devices are being connected to the Internet, exposing services in places you might not expect them. For example, pick any TCP port and you can guarantee that some non-trivial number of the responses will be from HTTP services of one type or another. These potentially unexpected responses may need to be carefully handled during data analysis.
  • Oddities.  There is not a single TCP or UDP port that wouldn't yield a few thousand responses, regardless of how seemingly random the port may be.  12345/TCP?  1337/UDP?  65535/TCP?  Sure.  You can believe that there will be something out there responding on that port in some way.  Oftentimes these responses are the result of some security device between the scan source and destination.  For example, there is a large ISP that responds to any probe on any UDP port with an HTTP 404 response over UDP.  There is a vendor with products and services used to combat DDoS that does something similar, responding to any inbound TCP connection with HTTP responses.
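
As noted in the first item above, data at that scale generally has to be processed as a stream rather than loaded whole. Here is one minimal way to do that, assuming a gzip-compressed, JSON-lines scan archive; the field names are hypothetical, and the actual formats vary by study.

# Stream a gzip-compressed, JSON-lines scan archive without loading it into memory.
# Field names ("ip", "data") are hypothetical; adjust them to the study's actual schema.
import gzip
import json

def iter_records(path):
    with gzip.open(path, "rt", encoding="utf-8", errors="replace") as f:
        for line in f:
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                continue  # skip corrupt lines rather than aborting a multi-gigabyte pass

def count_http_like(path):
    # Rough signal/noise check: how many responses look like HTTP, whatever the port?
    total = http_like = 0
    for rec in iter_records(path):
        total += 1
        if str(rec.get("data", "")).startswith("HTTP/"):
            http_like += 1
    return total, http_like

if __name__ == "__main__":
    total, http_like = count_http_like("scan_results.json.gz")
    print(f"{http_like} of {total} responses look like HTTP")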

 

Lastly, there is the issue of focus. It is very easy for researchers working with Internet scanning data to quickly venture off course and become distracted. There is seemingly no end to the strange things, connected in strange ways to the public IP space, that will tempt the typically curious researcher.

 

Be careful out there!

In my last blog post, we reviewed the most prevalent detection strategies and how we can best implement them. This post dives into understanding how to catch what our other systems missed, using attacker behavior analytics and anomaly detection to improve detection.

 

Understand Your Adversary – Attack Methodology Detection

Contextual intelligence feeds introduce higher fidelity and the details needed to gain insight into patterns of attacker behavior. Attackers frequently rotate tools and infrastructure to avoid detection, but when it comes to tactics and techniques, they often stick with what works. The methods they use to deliver malware, perform reconnaissance, and move laterally in a network do not change significantly.

 

A thorough understanding of attacker methodology leads to the creation and refinement of methodology-based detection techniques. Knowledge of applications targeted by attackers enables more focused monitoring of those applications for suspicious behaviors, thus optimizing the effectiveness and efficiency of an organization’s detection program. An example of application anomaly-based detection is webshells on IIS systems:

 

[Figure: w3wp.PNG – IIS worker process (w3wp.exe) spawning cmd.exe with reconnaissance commands]

 

It is anomalous for IIS to spawn a command prompt, and the execution of "whoami.exe" and "net.exe" indicates likely reconnaissance activity. By understanding the methods employed by attackers, we generate detections that will identify activity without relying on static indicators such as hashes or IPs. In this case we are using the low likelihood of IIS running CMD, and the rare occurrence of CMD executing 'whoami' and 'net [command]', to drive our detection of potential attacker activity.
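
A bare-bones sketch of that logic might look like the following: flag any process-creation event where an IIS worker process spawns a shell, and check the child command line for common reconnaissance utilities. The event field names are hypothetical placeholders for whatever your endpoint telemetry actually provides.

# Sketch: flag IIS worker processes spawning shells or reconnaissance utilities.
# Event field names (parent_name, child_name, command_line) are hypothetical.
RECON_TOOLS = {"whoami.exe", "net.exe", "ipconfig.exe", "dsquery.exe"}

def is_suspicious_iis_child(event):
    parent = event.get("parent_name", "").lower()
    child = event.get("child_name", "").lower()
    cmdline = event.get("command_line", "").lower()
    if parent != "w3wp.exe":
        return False
    if child in ("cmd.exe", "powershell.exe"):
        return True
    return any(tool in cmdline for tool in RECON_TOOLS)

if __name__ == "__main__":
    event = {
        "parent_name": "w3wp.exe",
        "child_name": "cmd.exe",
        "command_line": "cmd.exe /c whoami & net localgroup administrators",
    }
    print(is_suspicious_iis_child(event))  # True -- likely webshell activity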

 

Additionally, attackers must reconnoiter networks both internally and externally to identify target systems and users. Reviewing logs for unusual user-to-system authentication events and suspicious processes (for example, 'dsquery', 'net dom', 'ping -n 1', and 'whoami'), especially over abbreviated time periods, can provide investigative leads to identify internal reconnaissance.

 

Even without a constant stream of real-time data from endpoints, we can model behavior and identify anomalies based upon the frequency of an item across a group of endpoints. By gathering data on persistent executables across a network, for example, we can perform frequency analysis and identify rare or unique entries for further analysis. Simple techniques like frequency analysis will often reveal investigative leads from prior (or even current) compromises, and can be applied to all manner of network and endpoint data.
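
A stripped-down version of that frequency analysis could look like the sketch below, which counts how many endpoints each persistent executable appears on and surfaces the rarest entries. The input format (hostname/executable pairs) is an assumption about how the persistence data was collected.

# Frequency analysis of persistent executables across endpoints.
# Input: an iterable of (hostname, executable) pairs; the format is an assumption.
from collections import defaultdict

def rare_persistence(records, max_hosts=3):
    hosts_per_exe = defaultdict(set)
    for hostname, executable in records:
        hosts_per_exe[executable.lower()].add(hostname)
    # Executables present on only a handful of endpoints deserve a closer look.
    return sorted(
        (len(hosts), exe) for exe, hosts in hosts_per_exe.items() if len(hosts) <= max_hosts
    )

if __name__ == "__main__":
    records = [
        ("host1", r"C:\Windows\System32\svchost.exe"),
        ("host2", r"C:\Windows\System32\svchost.exe"),
        ("host2", r"C:\ProgramData\iexplore.exe"),  # rare: seen on a single endpoint
    ]
    for count, exe in rare_persistence(records):
        print(count, exe)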

 

Expanding beyond a reliance primarily on traditional static indicator-based detection and adding a focus on attacker behavior increases the likelihood of detecting previously unknown malware and skilled attackers. A combination of multiple detection strategies is necessary for full coverage and visibility: proactive detection technology blocks known-bad activity, contextual intelligence assists in identifying less common malware and attackers, and methodology-based evidence gathered from thorough observation provides insight into potential indicators of compromise.

 

 

Use the Knowledge You Have

IT and security staff know their organization's users and systems better than anyone else. They work diligently on their networks every day ensuring uptime of critical components, enablement of user actions, and expedient resolution of problems. Their inherent knowledge of the environment provides incredible depth of detection capability. In fact, IT support staff are frequently the first to know something is amiss, regardless of whether the problem is caused by attacker activity. Compromised systems often exhibit unusual symptoms and become problematic for users, who report the problems to their IT support staff.

 

Environment-specific threat detection is Rapid7’s specialty. Our InsightIDR platform continuously monitors user activity, authentication patterns, and process activity to spot suspicious behavior. By tracking user authentication history, we can identify when a user authenticates to a new system, over a new protocol, and from a new IP. By tracking the processes executed on each system we can identify if a user is running applications that deviate from their normal patterns or if they are running suspicious commands (based on our knowledge of attacker methodology). Combining user authentication with process execution history ensures that even if an attacker accesses a legitimate account, his tools and reconnaissance techniques will give him away. Lastly, by combining this data with threat intelligence from previous findings, industry feeds, and attacker profiles we ensure that we prioritize high-fidelity investigative leads and reduce overall detection time, enabling faster and more effective response.

 

Let's walk through an example in which Bob's account is compromised internally:

 

After compromising the system, an attacker would execute reconnaissance commands that are historically dissimilar to Bob’s normal activity. Bob does not typically run ‘whoami’ on the command line or execute psexec, nor has Bob ever executed a powershell command – those behaviors are investigative elements that individually are not significant enough to alert on, but in aggregate present a trail of suspicious behavior that warrants an investigation.
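
One way to picture how those weak signals can be aggregated is sketched below: keep a simple per-user baseline of previously seen commands, count how many new, recon-flavored commands show up, and only open an investigation once the tally crosses a threshold. The baseline structure, command list, and threshold are illustrative assumptions, not InsightIDR's actual model.

# Sketch: aggregate individually weak signals against a per-user command baseline.
# The baseline, suspicious-command list, and threshold are illustrative assumptions.
SUSPICIOUS = {"whoami", "psexec", "powershell", "net", "dsquery"}

def investigation_score(user_baseline, recent_commands):
    # Count recent commands that are both new for this user and recon-flavored.
    score = 0
    for cmd in recent_commands:
        name = cmd.split()[0].lower()
        if name in SUSPICIOUS and name not in user_baseline:
            score += 1
    return score

if __name__ == "__main__":
    bob_baseline = {"outlook", "excel", "chrome"}  # what Bob normally runs
    recent = ["whoami", 'net group "domain admins" /domain', "powershell -enc ..."]
    score = investigation_score(bob_baseline, recent)
    print("open an investigation" if score >= 3 else "keep watching", score)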

 

Knowledge of your environment and what is statistically ‘normal’ per user and per system enables a ‘signature-less’ addition to your detection strategy. Regardless of the noisy and easily bypassed malware hashes, domains, IPs, IDS alerts, firewall blocks, and proxy activity your traditional detection technology provides, you can identify attacker activity and ensure that you are not missing events due to stale or inaccurate intel.

 

Once you have identified an attack based on user and system anomaly detection, extract useful indicator data from your investigation and build your own ‘known-bad’ threat feed. By adding internal indicators to your traditional detection systems, you have greater intel context and you can simplify the detection of attacker activity throughout your environment. Properly combining detection strategies dramatically increases the likelihood of attack detection and provides you with the context you need to differentiate between ‘weird’, ‘bad’, and ‘there goes my weekend’.
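
In its simplest form, that internal feed can just be a curated file of indicators extracted from your own investigations, matched against events as they arrive. The sketch below assumes a flat list of indicators and a stream of log lines; in a real deployment these indicators would be pushed into your SIEM, IDS, or proxy instead.

# Sketch: keep a simple internal "known-bad" feed and match it against log lines.
# The flat-file feed and log format are assumptions; real feeds live in a SIEM or TIP.
def load_feed(path):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip() and not line.startswith("#")}

def match_logs(log_lines, indicators):
    for line in log_lines:
        lowered = line.lower()
        hits = [ioc for ioc in indicators if ioc in lowered]
        if hits:
            yield line, hits

if __name__ == "__main__":
    indicators = {"supercoolengineeringconference.com", "203.0.113.66"}  # from past incidents
    logs = ["2016-11-20 dns query supercoolengineeringconference.com from 10.1.2.3"]
    for line, hits in match_logs(logs, indicators):
        print("ALERT:", hits, "in", line)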

In early 2015, HD Moore published some of the first publicly accessible research related to Internet-connected gas station tank gauges, The Internet of Gas Station Tank Gauges.

 

Later that same year, I did a follow-up study that probed a little deeper in The Internet of Gas Station Tank Gauges -- Take #2. As part of that study, we were attempting to see if the exposure of these devices changed in the ~10 months since our initial study as well as probe a little bit deeper to see if there were affected devices that we missed in the initial study due to the study's primitive inspection capabilities at the time. Somewhat unsurprisingly, the answer was no, things hadn't really changed, and even with the additional inspection capabilities we didn't see a wild swing that would be any cause for alarm.

 

Recently, we decided to blow the dust off this study and re-run it for old times' sake, in case things had taken a wild swing in either direction or other interesting patterns could be derived. Again, we found that very little had changed.

 

Not-ATGs and the Signal to Noise Ratio

 

What is often overlooked in studies like this is the signal to noise ratio seen in the results, the "signal" being protocol responses you expect to see and the "noise" being responses that are a bit unexpected.  For example, finding SSH servers running on HTTP ports, typically TCP-only services being crudely crammed over UDP, and gobs of unknown, intriguing responses that will keep researchers busy chasing down explanations for years.

 

These ATG studies were no exception.

 

[Image: not_tanks.jpg]

 

In the most recent zmap TCP SYN scan against port 10001, run on November 3, 2016, we found nearly 3.4 million endpoints responding as open. Of those, we had difficulty sending our ATG-specific probes to over 2.8 million endpoints -- some encountered socket-level errors, others simply returned no response. It is likely that a large portion of these responses, or lack thereof, are due to devices such as tar pits, IDS/IPS, etc. The majority of the remaining endpoints appear to be a smattering of HTTP, FTP, SSH, and other common services run on odd ports for one reason or another. And last but not least are a measly couple of thousand ATGs.
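
For reference, the ATG-specific probe in these studies boils down to sending a TLS-350-style in-tank inventory request over TCP 10001 and seeing whether anything tank-like comes back. The sketch below is a heavily simplified version of that idea; the I20100 function code is the commonly documented inventory request, the details should be treated as assumptions, and you should only probe devices you own or are authorized to test.

# Simplified sketch of an ATG in-tank inventory probe over TCP 10001.
# The I20100 function code is the commonly documented TLS-350 inventory request;
# only probe devices you own or are authorized to test.
import socket

def probe_atg(host, port=10001, timeout=10.0):
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"\x01I20100\n")
            response = s.recv(4096)
    except OSError as exc:
        return f"socket error: {exc}"
    text = response.decode("ascii", errors="replace")
    # A real ATG response typically echoes the function code and lists tank data;
    # anything else (HTTP banners, FTP greetings, empty reads) is the "noise".
    return text if "I20100" in text else f"non-ATG response: {text[:60]!r}"

if __name__ == "__main__":
    print(probe_atg("192.0.2.10"))  # placeholder address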

 

I hope to explore the signal and noise related problems related to Internet Scanning in a future post.

 

Future ATG Research and Scan Data

 

We believe it is important to be transparent with as much of our research as possible. Even if a particular research path ends at a dead, boring end, publishing our process and what we did (or didn't) find might help a future wayward researcher who ends up in this particular neck of the woods navigate accordingly.

 

With that said, this is likely to be our last post related to ATGs unless new research/ideas arise or interesting swings in results occur in future studies. Is there more to be done here?  Absolutely!  Possible areas for future research include:

 

  • Are there additional commands that exposed ATGs might support that provide data that is worth researching, for security or otherwise?
  • Are there other services exposed on these ATGs?  What are the security implications?
  • Are there advancements to be made on the offensive or defensive side relating to ATGs and related technologies?

 

We have published the raw zmap TCP scan results for all of the ATG studies we've done to date here.  We have also started conducting these studies on a monthly basis and these runs will automatically upload to scans.io when complete.

 

As usual, thanks for reading, and we welcome your feedback, comments, criticisms, ideas or collaboration requests here as a comment or by reaching out to us at research@rapid7.com.

 

Enjoy!

In the age of user behavior analytics, next-gen attacks, polymorphic malware, and reticulating anomalies, is there a time and place for threat intelligence? Of course there is! But – and it seems there is always a ‘but’ with threat intelligence – it needs to be carefully applied and managed so that it truly adds value and not just noise. In short, it needs to actually be intelligence, not just data, in order to be valuable to an information security strategy.

 

We used to have the problem of not having enough information. Now we have an information overload. It is possible to gather data on just about anything you can think of, and while that can be a great thing (if you have a team of data scientists on standby), most organizations simply find themselves facing an influx of information that is overwhelming at best and contradictory at worst. Threat intelligence can help solve that problem.

 

What is Threat Intelligence?

As Rick Holland and I mentioned in our talk at UNITED Summit 2016, there are a variety of definitions and explanations for threat intelligence, ranging in size from a paragraph to a field manual. Here’s the distilled definition:

 

“Threat Intelligence helps you make decisions about how to prevent, detect, and respond to attacks.”

 

That’s pretty simple, isn’t it? But it covers a lot of ground. The traditional role of intelligence is to inform policy makers. It doesn’t dictate a particular decision, but informs them with what they need to make critical decisions. The same concept applies to threat intelligence in information security, and it can benefit everyone from a CISO to a vulnerability management engineer to a SOC analyst. All of those individuals have decisions to make about the information security program and threat intelligence arms them with relevant, timely information that will help them make those decisions.

 

Image from dilbert.com

 

If intelligence is making it harder for you to make decisions, then it is not intelligence.

 

When Threat Intelligence Fails

Threat intelligence can be a polarizing topic – you either hate it or you love it. Chances are that if you hate it, you've been burned by threat feeds containing millions of indicators from who-knows-where, had to spend hours tracking down information from a vendor report with absolutely no relevance to your network, or are simply fed up with the clouds of buzzwords that distract from the actual job of network defense. If you love it, you probably haven't been burned, and we want to keep it that way.

 


Threat Intelligence fails for a variety of reasons, but the number one reason is irrelevance. Threat feeds with millions of indicators of uncertain origin are not likely to be relevant. Sensationalized threat actor reports with little detail but lots of fear, uncertainty, and doubt (FUD) are not likely to be relevant. Stay away from these, or the likelihood that you end up crying under your desk increases.

 

So how DO you find what is relevant? That starts with understanding your organization and what you are protecting, and then seeking out threat intelligence about attacks and attackers related to those things. This could mean focusing on attackers that target your vertical or the types of data you are protecting. It could mean researching previously successful attacks on the systems or software that you use. By taking the time to understand more about the source and context behind your threat intelligence, you’ll save a ton of pain later in the process.

 

The Time and Place for Threat Intelligence

Two of the most critical factors for threat intel are just that – time and place. If you’re adding hundreds of thousands of indicators with no context and no expiration date, that will result in waves of false positives that dilute any legitimate alerts that are generated. With cloud architectures today, vendors have the ability to anonymously collect feedback from customers, including whether alerts generated by the intel are false positives or not. This crowdsourcing can serve as a feedback loop to continuously improve the quality of intelligence.

[Screenshot: community threat feed usage and alert statistics]

For example, with this list, 16 organizations are using it, 252 alerts have been generated across the community, and none have been marked as false positives. The description also contains enough context to help defenders know how to respond to any alerts generated. This has served as valuable threat intelligence.

 

The second half is place – different intelligence should be applied differently in your organization. Strategic intelligence, such as annual trend reports or warnings of targeted threats to your industry, is meant to help inform decision makers. More technical intelligence, such as network-based indicators, can be used in firewall rules to prevent threats from impacting your network. Host-based indicators, especially those from your own incidents or from organizations similar to yours, can be used to detect malicious activity on your network. This is why you need to know exactly where your intelligence comes from; without that knowledge, proper application is a serious challenge. Your own incident experience is one of the best sources of relevant intelligence – don't let it go to waste!

 

To learn about how you can add threat intelligence into InsightIDR, check out the Solution Short below.

 

 

Threat intelligence isn’t as easy as plugging a threat feed into your SIEM. Integrating threat intelligence into your information security program involves (1) understanding your threat profile, (2) selecting appropriate intelligence sources, and (3) contextually applying it to your environment. However, once completed, threat intelligence will serve a very valuable role in protecting your network. Intelligence helps us understand the threats we face – not only with identifying them as they happen, but to understand the implications of those threats and respond accordingly. Intelligence enables us to become persistent and motivated defenders, learning and adapting each step of the way.

To provide insight into the methods devised by Rapid7, we’ll need to revisit the detection methods implemented across InfoSec products and services and how we apply data differently. Rapid7 gathers volumes of threat intelligence on a daily basis - from new penetration testing tools, tactics, and procedures in Metasploit, vulnerability detections in Nexpose, and user behavior anomalies in InsightIDR. By continuously generating, refining and applying threat intelligence, we enable more robust detection strategies to identify adversaries wherever they may hide.

 

Slicing Through the Noise

There are many possible combinations of detection strategies deployed in enterprise environments, with varying levels of efficacy. At a minimum, most organizations have deployed Anti-Virus (AV) software and firewalls, and mature organizations may have web proxies, email scanners, and intrusion detection systems (IDS). These "traditional" detection technologies are suitable for blocking "known-bad" activity, but they provide little insight into the origin, purpose, and intent of detections. Additionally, many of these techniques falter against uncommon threats due to a lack of applicable rulesets or detection context.

 

Consider an AV detection for Mimikatz, a well-known credential dumper: Mimikatz may be detected by AV; however, standard AV detection alerts do not provide the background information required to accurately understand or prioritize the threat. The critical context in this scenario is that the presence of Mimikatz typically indicates an active, human attacker rather than an automated commodity malware infection. Additionally, a Mimikatz detection indicates that an attacker has already circumvented perimeter defenses, has the administrator rights required to dump credentials, and is moving laterally through your environment.

 

Without a thorough understanding or explanation of the samples your detection technologies identify as malicious, you do not have the information required to understand the severity of detections. Responders who are not armed with appropriate context cannot differentiate or prioritize low, medium, and high severity events, and they often resort to chasing commodity malware and low severity alerts.

 

Adding Context – Intelligence Implementation

Many organizations integrate ‘threat feeds’ into their existing technology to compensate for the lack of context and to increase detections for less common threats. Threat feeds come in many forms, from open source community-driven lists to paid private feeds. The effectiveness of these feeds strongly depends on a number of factors:

 

  • Intel type (hash, IP, domain, contextual, strategic)
  • Implementation
  • Indicator age
  • Intelligence source

 

When consuming intelligence feeds, context remains the critical element – feeds containing only hashes, domains, and IPs are the least effective form of threat intelligence due to the ease with which an attacker can modify infrastructure and tools. It is important to understand why a particular indicator has been associated with attacker activity, how old the intelligence is (as domains, IPs, tools are often rotated by attackers), and how widely the intelligence has been disseminated (does the attacker know that we know?).
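
A crude way to operationalize those questions is to score or filter indicators before they ever reach your detection stack. The sketch below drops atomic indicators that are too old or carry no context; the field names and the 90-day cutoff are arbitrary, illustrative assumptions.

# Sketch: filter a threat feed by indicator age and context before deployment.
# Field names and the 90-day cutoff are arbitrary, illustrative assumptions.
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=90)
ATOMIC_TYPES = {"hash", "ip", "domain"}  # easiest for attackers to rotate

def keep_indicator(indicator, now=None):
    now = now or datetime.utcnow()
    age = now - datetime.fromisoformat(indicator["first_seen"])
    if indicator["type"] not in ATOMIC_TYPES:
        return True   # contextual or strategic intel ages more slowly
    if not indicator.get("context"):
        return False  # bare atomic indicator with no explanation
    return age <= MAX_AGE  # stale atomic indicators breed false positives

if __name__ == "__main__":
    feed = [
        {"type": "ip", "value": "198.51.100.7", "first_seen": "2016-01-02", "context": ""},
        {"type": "domain", "value": "SuperCoolEngineeringConference.com",
         "first_seen": "2016-11-01", "context": "Fuzzy Koala Backdoor C2 over DNS"},
    ]
    print([i["value"] for i in feed if keep_indicator(i, now=datetime(2016, 11, 29))])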

 

We routinely work in environments wherein the customers have enabled every open source threat intel feed and every IDS rule available in their detection products, and they chase thousands of false positives daily. Effective threat intelligence application requires diligence, review, and active research into the origin, age, and type of indicators coming in through threat feeds.

 

Contextual intelligence feeds provide customers not only with indicators of compromise but also a thorough explanation of the attacker use of infrastructure, tools, and particular methodologies. Feeds containing contextual information are far more effective for successful threat detection, for example:

 

MALWARE DETECTED: FUZZY KOALA BACKDOOR

 

The ‘Fuzzy Koala Backdoor’ is a fully-functional remote access utility that communicates to legitimate, compromised servers over DNS using a custom binary protocol. This backdoor provides file upload, file download, command execution, and VNC-type capabilities. The ‘Fuzzy Koala Backdoor’ is typically delivered via spearphishing emails containing Office documents with malicious macros, and is sent via the ‘EvilSpam’ mail utility.

 

Files Created:

%systemdrive%\programdata\iexplore.exe

%systemdrive%\programdata\[a-z]{6}%UUID%.dll

 

Persistence:

HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon

Shell=explorer.exe,%systemdrive%\programdata\iexplore.exe

 

Network Indicators:

Domains:

SuperCoolEngineeringConference.com

 

With that context, a successful detection team can:

  • Look for other anomalous DNS traffic matching the attacker’s protocol to catch additional domains
  • Look for unusual emails containing documents with macros
    • Including header data provided by the attacker’s mail client
  • Identify systems on which Office applications spawned child processes
  • Identify file-based and registry-based indicators of compromise (see the sketch after this list)
  • Monitor for traffic to the legitimate compromised domain
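
For the registry-based indicator above, a point-in-time check might look like the following sketch: read the Winlogon Shell value and flag anything beyond the default explorer.exe. It uses Python's standard winreg module, only examines the current user's hive, and is an illustration rather than a substitute for real endpoint telemetry.

# Sketch: flag a non-default Winlogon Shell value (the persistence used above).
# Windows-only; checks just the current user's hive via the standard winreg module.
import winreg

def winlogon_shell_modified():
    key_path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path) as key:
            shell, _ = winreg.QueryValueEx(key, "Shell")
    except FileNotFoundError:
        return None  # value not set for this user -- the system default applies
    # Anything other than the bare default shell warrants a closer look.
    return None if shell.strip().lower() == "explorer.exe" else shell

if __name__ == "__main__":
    suspicious = winlogon_shell_modified()
    if suspicious:
        print("Suspicious Winlogon Shell value:", suspicious)
    else:
        print("Winlogon Shell looks like the default.")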

 

Similarly, a successful incident detection and response team will build additional strategies to identify underlying attacker techniques and cycle out stale static indicators to minimize false positives.

 

Traditional detection mechanisms, including contextual intelligence feeds, provide security teams the ability to identify and respond to threats in the wild. In our next blog post we’ll discuss approaches for finding previously-unseen malware and attacker activity using hunting and anomaly detection.

Data Science is more than just math. A successful Data Science team and successful Data Science projects require relationships with outside teams, clear communication, as well as good decision making, problem solving and critical thinking abilities. Thus, when we talk about Data Science at Rapid7, we talk about the Data Science Process our teams use to take a Data Science project from inception to completion, where math and analysis are important, but not the only aspects of the project.

 

What are we talking about here?

 

To be clear about terms, we consider Data Science to be composed of several related disciplines, such as statistics, machine learning, visualization and to some extent, data engineering and management.

 

Additionally, Data Science (and machine learning in particular) is the right tool to address some problems. However, it is important to distinguish between machine learning applied to problems for which it is the right tool to solve them, and machine learning applied to problems for the sake of being able to market “more machine learning.”

 

Why does Rapid7 use Data Science?

 

Rapid7’s motivation for using Data Science is to solve complex problems to make our products more effective and our customers happier. Specifically, it is not to be able to market the inclusion of machine learning capabilities in our products. When a problem we’re working on can be best addressed by machine learning, we will try to use machine learning techniques to address that problem, otherwise, we will find another more appropriate way to address the problem, possibly even a simple rule.

 

Ok, walk me through this process

 

When people or groups within Rapid7 (problem owners) have a problem they think can be solved or addressed by the Data Team, they come to us to figure out if we can help. Throughout the entire process described below, the Data Team and the problem owner are in contact to make sure the assumptions we make about the problem, the data, and the solution are correct.

 

There are six steps involved in the Data Science Process:

 

  1. Understand the problem
  2. Identify relevant data
  3. Determine viability
  4. Research research research
  5. Report
  6. Hand off

 

Understand the problem

 

The first step is to get a deep understanding of the problem to be solved. This involves a few rounds of questions and answers with the problem owner to make sure they are convinced that the people from the Data Team really do understand the problem. Without this deep understanding of the problem (Why is it a problem in the first place? Why does it need to be solved? What does a good solution look like?), it is easy to end up with a solution that isn’t what the problem owner really needs.

 

Identify relevant data

 

The next step is to figure out if there is data that can help solve the problem, and if there is, where that data lives. Do we already have it? Do we need to gather it? Buy it? Generate it? Working together with the problem owner, as much as they are able to, we make a determination together about whether or not the data we plan to work with is appropriate for the problem, or if there is other data we should be using.

Determine viability

 

This is the squishiest step of them all, but is crucial. In this step, the Data Team will determine whether or not the relevant data is sufficient to try to solve the problem, or at least start research. This isn’t an attempt to rate the quality of the available data from a mathematical or statistical point of view, but rather to take a step back from the work that’s been done thus far and take stock of whether or not there is enough available data to do anything worthwhile.

 

For instance, if the available data consists of two data sets from two different sources that share a common field that can link the two, but the intersection of these data sets is a very small proportion of either, it's worth taking time here to evaluate whether there's more data that can be gathered to make that intersection larger, or whether there's a different way to join the data.
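
A quick, concrete version of that sanity check is sketched below: compute the overlap between the join keys of the two data sets before committing to the research. The column name and the 10% threshold are illustrative assumptions.

# Sketch: check how well two data sets actually join on a shared key.
# The column name and the 10% threshold are illustrative assumptions.
import csv

def key_overlap(path_a, path_b, key="device_id"):
    def keys(path):
        with open(path, newline="") as f:
            return {row[key] for row in csv.DictReader(f) if row.get(key)}
    a, b = keys(path_a), keys(path_b)
    shared = a & b
    return len(shared), len(a), len(b)

if __name__ == "__main__":
    shared, total_a, total_b = key_overlap("scans.csv", "asset_inventory.csv")
    coverage = shared / max(min(total_a, total_b), 1)
    print(f"{shared} shared keys ({coverage:.1%} of the smaller data set)")
    if coverage < 0.10:
        print("Intersection is tiny -- gather more data or find another way to join.")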

 

Research research research

 

This is the stage where the Data Science team will do the work they probably consider the most interesting and fun. They will investigate whether different Data Science techniques are appropriate to solve the problem. If machine learning looks viable, they will identify which machine learning algorithms are appropriate and see if they can make an interesting model. Ideally, this is the stage where the problem is solved, or if it can’t be solved, it becomes clear why that is.

 

It is also crucial that through this phase the Data Team and the problem owner consult with each other frequently about the progress being made and the issues that come up. In most cases the problem owner will have more domain knowledge about the problem and the data at hand than the Data Team does, so being able to check in frequently and run new findings by domain experts will help the Data Team quickly readjust their efforts when necessary.

 

Report

 

The reporting stage is an opportunity for the Data Team to communicate back to the problem owner everything they’ve done in the “research research research” phase. This is the inverse of the first stage, in that now the problem owner should be making sure they understand as much as they can about the outcome of the research. The Data Team will explain data they used, why it was useful, the methods they used to transform and analyze that data, and what they’ve come up with as a solution to the problem. Additionally, they will present an overview (or as much detail as the problem owner wants) about specifically what did not work.

 

This stage is crucial to the Data Science Process. It allows both the Data Team and the problem owner to come back together and fully evaluate whether or not the outcome of the research addresses the problem, and if the Data Team has achieved its goal from the first stage when they identified what a good solution would look like.

 

Hand off

 

The final stage of the process is to work with the problem owner to hand off the research output and make it useful. In this stage, both the Data Team and the problem owner need to figure out:

 

  • What will an embodiment of the solution look like? (A command line tool? A hosted service? A shared library? An additional API? A serialized model?)
  • Who will update and maintain that embodiment?
  • Where does it live? And who gets the call at 3:00 a.m. when something breaks?

 

Without a clear plan for how to incorporate the solution into a system that can make use of it, it can easily wither and not be adopted.

 

Great, this seems familiar, how is this specific to Data Science?

 

Except for the details of the “research research research” phase, very little of this is specific to Data Science. This process is derived from and can be applied to other disciplines, but just spelling it out and going through the process has been immensely useful to our Data Team.

 

We have had projects where we didn’t fully understand the problem and found ourselves at the end of a project presenting something back to the problem owner that they already knew or knew how to do.

 

We have had projects where we didn’t fully explore the available data, or determine whether that data was viable, and we worked on incomplete data sets and ended up with a solution that didn’t give the problem owners much confidence in the solution’s utility.

 

We have had projects where we didn’t communicate our findings well or thoroughly to the problem owner and as a result, our research output was not adopted due to incorrect conclusions drawn about it.

 

And we have certainly had projects where everything works well up until the point where we try to figure out where the solution lives, and it sits in limbo instead of being useful.

 

The Data Science Process emphasizes and requires communication throughout, and relies on teams to establish and maintain relationships instead of working in isolation. With a prescribed set of steps that keep the Data Science team and the problem owner in sync, from establishing expectations about the problem and solution at the outset, to working together to find a home for the solution, the Data Science Process has enabled our Data Team to be more successful and useful to the Rapid7 community, internally and externally. It allows us to make our engineers, our consultants, our products and ultimately you, our customers, happier, more efficient, and more secure.

Stored server cross-site scripting (XSS) vulnerabilities exist in the web application component of OpenNMS and are exploitable via the Simple Network Management Protocol (SNMP). Authentication is not required to exploit them.

 

Credit

This issue was discovered by independent researcher Matthew Kienow, and reported by Rapid7.

 

Products Affected

The following versions were tested and successfully exploited:

 

  • OpenNMS version 18.0.0
  • OpenNMS version 18.0.1

 

OpenNMS version 18.0.2-1, released September 20, 2016, corrects the issues.

 

Description

Two cross-site scripting (XSS) vulnerabilities were identified in the web application component of OpenNMS via the Simple Network Management Protocol (SNMP).

 

These vulnerabilities can allow an unauthenticated adversary to inject malicious content into the OpenNMS user’s browser session. This could cause arbitrary code execution in an authenticated user's browser session and may be leveraged to conduct further attacks. The code has access to the authenticated user's cookies and would be capable of performing privileged operations in the web application as the authenticated user, allowing for a variety of attacks.

 

R7-2016-24.1, XSS via SNMP Trap Alerts

First, a stored (AKA Persistent or Type I) server XSS vulnerability exists due to insufficient filtering of SNMP trap supplied data before the affected software stores and displays the data. The stored XSS payload is delivered to the affected software via an object in a malicious SNMP trap. Once the trap is processed it is stored as an event. OpenNMS's Trapd service processes SNMP trap data and accepts traps with any SNMP v1 or v2c community string. The affected software is capable of accepting traps from hosts registered or unknown to the system. Traps containing XSS payloads from hosts unknown to the system will execute when the user navigates to the events list page (http://host:8980/opennms/event/list).

 

R7-2016-24.2, XSS via SNMP Agent Data

Second, a stored server XSS vulnerability exists due to insufficient filtering of SNMP agent supplied data before the affected software stores and displays the data. The stored XSS payload is delivered to the affected software during the SNMP data collection operation performed during a discovery scan. The malicious node utilizes an SNMP agent to supply the desired XSS payload in response to SNMP GetRequest messages for the sysName (1.3.6.1.2.1.1.5) and sysContact (1.3.6.1.2.1.1.4) object identifiers (OIDs). The XSS payload provided for both the sysName and sysContact objects will execute when the user navigates to the page for the malicious node (http://host:8980/opennms/element/node.jsp?node=<ID>, where ID is the malicious node ID).

 

Exploitation

XSS payloads can be injected into the OpenNMS web application via both SNMP traps and the SNMP agent.

 

SNMP Trap

The trap OID 1.3.6.1.4.1.43555 was used to send an SNMPv2 trap in which the trap variables contain the single object sysName (1.3.6.1.2.1.1.5) set to the XSS payload <IMG SRC=/ onerror="alert('SNMP Trap Test')"></IMG>. The attack trap was sent using the Net-SNMP snmptrap tool as follows:

 

snmptrap -v2c -c public OpenNMS_Host '' 1.3.6.1.4.1.43555 SNMPv2-MIB::sysName \
s "<IMG SRC=/ onerror=\"alert('SNMP Trap Test')\"></IMG>"

 

When the user navigates to the events list page, the XSS payload is returned in a response to the user’s browser session and executed. An alert box is displayed that contains the string "SNMP Trap Test", as shown below.

 

fig1-OpenNMS-SNMPTrapsXSS.png

Figure 1: Events List SNMP Trap XSS

 

 

SNMP Agent

A malicious node is operating an SNMP agent that returns the XSS payload <script>alert("sysNameTest");</script> for the sysName (1.3.6.1.2.1.1.5) OID and <IMG SRC=/ onerror=alert(/sysContactTest/) /> for the sysContact (1.3.6.1.2.1.1.4) OID. Once the discovery scan locates and scans the malicious node, the user clicks the Info > Nodes menu item and then clicks the link for the name of the malicious node <script>alert("sysNameTest");</script>. When the node page loads, the XSS payloads are returned in a response to the user’s browser session and the code is executed. An alert box is displayed that contains the string "sysNameTest", as shown below, followed by an alert box that contains the string "/sysContactTest/".

 

fig2-OpenNMS-Node-XSS.png

Figure 2: Node XSS

 

Mitigations

Users should update to version 18.0.2-1 to avoid these issues. Absent this fixed version, there is no practical way to use the SNMP functionality of the product in a safe and secure way. SNMP services should be disabled or blocked until this patch can be applied.

 

Disclosure Timeline

This vulnerability advisory was prepared in accordance with Rapid7's disclosure policy.

 

  • Sun, Aug 14, 2016: Discovered by Matthew Kienow
  • Wed, Sep 07, 2016: Disclosed by the discoverer to Rapid7
  • Thu, Sep 08, 2016: Disclosed to vendor by Rapid7 at security@opennms.org
  • Thu, Sep 08, 2016: Vendor acknowledged the issue as NMS-8722
  • Wed, Sep 14, 2016: Patch committed as PR#1019
  • Tue, Sep 20, 2016: Version 18.0.2-1 released
  • Fri, Sep 23, 2016: Disclosed to CERT/CC
  • Tue, Sep 27, 2016: CVE-2016-6555 and CVE-2016-6556 assigned by CERT/CC
  • Tue, Nov 15, 2016: Disclosed to the public

Let me tell you a story….

…a few months ago, I was heading home from the airport in an Uber with my wife. We had recently bought a house, were planning some renovation work, and were discussing a few ideas on the way. The very next day, I received a call from an unknown number. The caller said, “Hello Mr. Dutta, I am [caller’s name] calling from [company name]. I would love to discuss the home renovation project you are planning to undertake in your home.” At this point his words started blurring as my mind raced in a different direction: how did this guy know all these details? Was it the city that informed them? Was it Uber? The timing was crazy! That got me thinking…

 

Security and usability

Everyone loves Uber. It’s quite easy to hail a ride with a tap. But can Uber be a privacy risk? Are we saying that, for better experience and usability, my privacy can be compromised?

 

I started digging deeper into this relationship. I realized that when websites add security measures like CAPTCHA, they become more secure, but their conversion rates drop significantly because usability is reduced.

 

Looking at health care systems, in certain types of insulin pumps, a physician has all the vital information, including the patient’s blood glucose level, the moment a patient steps into the clinic. To enable this, the insulin pump has an always-on Bluetooth sensor. This convenience comes at the cost of a high security risk: it’s possible to tamper with the device remotely, with serious consequences.

 

Finding a balance

Such examples make us believe that security and usability are two antagonistic goals in system design. Simson Garfinkel, in his doctoral thesis at MIT, argued that there are many instances in which security and usability can be synergistically improved. This is possible by revising the way that specific functionality is implemented in many of today’s operating systems and applications. Garfinkel further explains that in every case considered, the perceived antagonism of security and usability can be scaled back or eliminated by revising the underlying designs on which modern systems are built. Errors in system design, computer user interfaces, and interaction design can lead to common errors in secure operation.

 

By identifying and correcting these errors, users can naturally and automatically experience more secure operation. For instance, Internet of Things (IoT) devices are an emerging area that could benefit hugely from an established set of patterns, rules, and frameworks optimized for secure operation.

 

Widespread attacks

In September 2016, we saw a record-breaking Distributed Denial of Service (DDoS) attack against the France-based hosting provider OVH. That attack reached over one terabit per second (1 Tbps) and was carried out via a botnet of roughly 150,000 infected IoT devices. Less than a month later, a massive and sustained Internet attack caused outages and network congestion for a large number of web sites. This attack was launched with the help of hacked IoT devices, such as CCTV video cameras and digital video recorders, and impacted websites from high-profile organizations, including Twitter, Amazon, Tumblr, Reddit, Spotify and Netflix.

 

“Mirai”, the malware used to hijack the connected IoT devices, exploited default usernames and passwords set at the factory before the devices are shipped to customers. Mirai is capable of launching HTTP floods as well as various network DDoS attacks, including DNS floods, UDP floods, SYN and ACK floods, GRE IP and GRE ETH floods, and STOMP (Simple Text Oriented Message Protocol) flood attacks.

 

Securing usable IoT and making secure IoT usable

While such incidents are scary, IoT devices are awesome! They make our lives easier, and the potential for IoT is limitless. However, while security is a real risk, we cannot afford to pass up the opportunity to exploit IoT capabilities to their fullest. What we need is discipline: some kind of governance or rule book on how to securely use these products.

 

Garfinkel refers to these as simple patterns. Developers and the organizations that employ them must analyze their risks, the cost of proposed security measures, and the anticipated benefits. Be it security or usability, neither should be added to a system as an afterthought. Instead, security and usability must be designed into systems from the beginning. By providing pre-packaged solutions to common design problems, patterns can address this deficit.

 

Great examples of usability patterns are “Copy and Paste” and “Drag and Drop”, which have dramatically changed the usability of computer systems. Similarly, security patterns, such as using the Secure Socket Layer (SSL) to “wrap” cleartext protocols and Email-Based Identification and Authentication for resetting passwords, have allowed developers untrained in security to increase the security of their systems. Patterns that align the security and usability of IoT devices can create that much-needed rule book for IoT developers.
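To make the “wrap a cleartext protocol” pattern a bit more concrete, here is a minimal sketch (my own illustration, not something from Garfinkel's thesis) that takes a plain TCP echo service and serves it over TLS using Python's standard ssl module. The certificate and key file names are hypothetical placeholders.

import socket
import ssl

# Hypothetical certificate/key paths; generate and manage your own in practice.
CERT_FILE = "server.crt"
KEY_FILE = "server.key"

# A TLS context that "wraps" an otherwise cleartext protocol.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)

with socket.create_server(("0.0.0.0", 9999)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()   # TLS handshake happens here
        with conn:
            data = conn.recv(1024)           # application data is now encrypted in transit
            conn.sendall(data)               # a simple echo, but over TLS

The application logic is untouched; the security is layered on by the wrapper, which is exactly what makes the pattern usable by developers who are not security specialists.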

 

I want to leave you with the thought that IoT systems must be viewed as socio-technical systems that depend on the social context in which they are embedded to function correctly. Security mechanisms will only provide the intended protection when people actually understand them and are able to use them correctly.

 

You can read further about independent IoT research compiled by Tod Beardsley here, and read about the danger of default passwords in IoT by Deral Heiland here.

 

Thank you for reading!

 

Saurabh Dutta,
Director of UX, Rapid7

In the security industry, as in much of life, a problem we often face is that of balance. We are challenged with finding the balance between an organization’s operational needs and the level of security that can be implemented. In many situations an acceptable, if less than optimal, solution can be found, but there are cases where this balance cannot be achieved. I recently saw a case of this on the mailing list of the IETF TLS Working Group, where Andrew Kennedy, a representative of a finance industry group, asked for changes to be made to the draft proposal of the next version of TLS. In the eyes of many in the security community, honoring the request would undermine one of the goals of the new standard and enable the continuation of real-time and after-the-fact decryption of TLS traffic. From Mr. Kennedy’s perspective, the TLS draft changes break existing security controls for the industry as well as prevent organizations from meeting governmental regulations and requirements. In other words, they actually increase risk for the industry that he represents.


Background on how RSA certificates are used

Before I continue I’ll need to give a little background in order to provide context for Mr. Kennedy’s request. At a very high level, an RSA certificate is built around a pair of mathematically linked encryption keys. One of the keys, the "private" key, must be protected and known only to the server. The other key, the "public" key, is expected to be shared with any party trying to communicate with the server. Within the SSL/TLS protocols, RSA certificates can be used for two purposes that are relevant to this discussion: proof of identity and key exchange.

 

Proof of identity

The use of RSA certificates that most people are familiar with is proof of identity. When a client makes a TLS connection to a service, the server must present a valid certificate matching the hostname the client sent the request to. The certificate must be signed by, or be in a chain of signed certificates that are signed by, a certificate that the client trusts. This proves that the server is actually the one that the client intended to communicate with and not some malicious actor.

 

Key exchange

A lesser known use for the RSA certificate is encrypting secret data used during the key exchange portion of TLS session negotiation. Asymmetric encryption, such as that performed with RSA keys, isn’t actually what is used for encrypting the application traffic in a TLS session. This is due to multiple factors, but the most significant is speed: encrypting all data using asymmetric encryption would be so slow as to be unusable. Application traffic is instead encrypted with a symmetric cipher such as AES. For this to work, both sides of the conversation need to know a session-specific secret key. When RSA key exchange is used, the information used to create the session key is generated by the client. That data is then encrypted using the server's RSA public key and transmitted to the server.

 

One downside of using RSA certificates to perform key exchange is that if the server’s RSA private key is compromised, then all TLS communications to that server can be decrypted. The private key of the server’s RSA certificate can be used to decrypt the TLS session setup phase and extract the session key. At that point all traffic for that particular session can be decrypted. What makes this particularly dangerous is that the technique works for any sessions that have been captured in the past. If a malicious party captured traffic to a server for three years and then compromised the RSA certificate used for key exchange during that time, ALL of that traffic could be decrypted and its contents compromised.
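As a rough illustration of why this matters, the sketch below (my own, using the third-party cryptography package and a freshly generated key pair rather than a real certificate) shows that whoever holds the server's RSA private key can recover a recorded session's pre-master secret, and from it the session keys, long after the traffic was captured.

import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Stand-in for the server's long-lived certificate key pair.
server_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public_key = server_private_key.public_key()

# The client generates the pre-master secret and encrypts it to the server's
# public key; TLS RSA key exchange uses PKCS#1 v1.5 padding.
pre_master_secret = os.urandom(48)
captured_ciphertext = server_public_key.encrypt(pre_master_secret, padding.PKCS1v15())

# Years later: anyone who has obtained the private key can decrypt the recorded
# handshake and derive the session keys for all of the captured traffic.
recovered = server_private_key.decrypt(captured_ciphertext, padding.PKCS1v15())
assert recovered == pre_master_secret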

 

To address this risk, endpoints can be configured to use other methods of key exchange that don’t rely on long-lived keys like RSA certificates. This provides Forward Secrecy, where compromise of long-term keys does not compromise past session keys. The TLS Working Group and contributors considered Forward Secrecy important enough that they removed support for RSA static key exchange from the TLS 1.3 draft specification in mid-2014. This change does fantastic things for the security of an endpoint's communications, but it severely breaks the ability of authorized parties to monitor traffic.
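Until TLS 1.3 arrives, operators who want forward secrecy can get it by restricting endpoints to ephemeral key-exchange suites. Here is a hedged sketch of the idea using Python's ssl module; the cipher string and target host are illustrative assumptions, not a recommendation for any particular deployment.

import socket
import ssl

HOST = "example.com"  # illustrative endpoint; substitute your own service

# Allow only ECDHE-based suites, i.e. exclude static RSA key exchange, so each
# session uses ephemeral keys and gains forward secrecy.
context = ssl.create_default_context()
context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # cipher() reports the negotiated suite. With the restriction above, a
        # pre-1.3 handshake will report an "ECDHE-..." suite (or fail outright);
        # TLS 1.3 suites are forward secret by construction.
        print(tls.cipher())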

 

 

The request

This brings us back to Mr. Kennedy’s request. On September 22, 2016, Mr. Kennedy, who is with the Financial Services Roundtable BITS Cybersecurity team, sent an email to the TLS Working Group mailing list requesting that RSA key exchange remain in the TLS 1.3 specification. His reasons for requesting this essentially boil down to:

 

  • His industry is generally required by regulation and contract to implement security technologies aligning with best practices such as IPS, DLP, malware detection, etc.

 

  • His industry is often required to provide an audit trail of all actions taken by certain employees and systems

 

  • Network and application troubleshooting often require inspection of traffic contents

 

  • Removing RSA key exchange from TLS 1.3 breaks capability to decrypt TLS traffic in real time or retroactively

 

  • At some point regulations, contractual obligations, or technology requirements will force the implementation of TLS 1.3

 

  • Using Man in the Middle (MitM) techniques adds overhead, latency, and complexity

 

  • Capturing the required data on the endpoint is subject to failure, adds complexity, and requires control of all endpoints

 

 

He has a point

Having spent some time as an InfoSec professional in the finance industry, I can see the value in his arguments and, whether you agree with him or not, those are legitimate business needs and drivers. From a practical risk perspective, for the average financial institution the current TLS specification is “good enough” for general usage because of the extremely low likelihood of attacks against the TLS session itself. On the other hand, the ability to detect attacks against infrastructure in real time is critical. Additionally, real risk can come in the form of financial penalties and sanctions due to non-compliance with regulations and rules. Non-state actors are unlikely to decrypt an organization's internal communications, but failure to meet U.S. Securities and Exchange Commission (SEC) requirements can have real and measurable impacts in the form of lost revenue due to the inability to trade. Further, changes such as the one discussed here can reduce or remove the incentive to adopt new technologies and the benefits they represent when the opportunity arises. This is particularly true in industries where change itself causes risk.

 

 

But there are problems

All of this being said, I think the train has left the station on this one. One of the goals of the current TLS 1.3 draft specification was to remove weak/broken key exchange and lock in forward secrecy. To a large degree this stance is in response to increased awareness of state-level surveillance and the potential impacts of compromised server keys. The efforts to harden TLS are also being driven by the large number of TLS and general cryptography related security issues that have been discovered over the last decade. The IETF mailing list responses to Mr. Kennedy generally trend towards ‘no’, ‘where were you a dozen drafts ago?’, ‘you are doing security wrong’, and ‘that particular technique for surveillance/monitoring will be dead soon, deservedly so’. Obviously there is a lot of nuance in both positions that can’t be captured in a short blog post.

 

 

Unbalanced exchange

One point that I don’t feel was made strongly enough is that what Mr. Kennedy has asked for has limited, local benefits. Using the RSA certificate to decrypt traffic only works when you have the certificate from the server side of the conversation. The tools will be unable to decrypt and inspect traffic to 3rd parties, malicious hosts and services within the network, and traffic to endpoints where forward secrecy is a requirement implemented via technical configuration. This does provide value when monitoring for attacks against your web servers, but it is going to have limited value when deploying Data Loss Prevention (DLP) to detect information exfiltration. It also won’t detect when an organization’s stock brokers are handling trades via Facebook or when malware is calling out to command and control servers. While not ideal, these problems are more completely solved in the form of constrained networks and hosts, endpoint solutions, and logging. In short, the requested change to the TLS draft adds risk to others but doesn’t truly solve the industry’s problems.

 

 

Looking into the future

I think there are a couple of take-aways from this situation:

 

  • Implementing TLS 1.3 as it currently stands WILL break, BY DESIGN, certain methods of traffic surveillance/monitoring.

 

  • Organizations with requirements to monitor traffic will need to either MitM or implement endpoint monitoring (traffic logging, inspection, session key retention).

 

  • Organizations will have to rely on MitM + network egress filtering to deal with traffic between endpoints they don't control. This is not a change to how things are done today.

 

  • Organizations will have many years to deal with this for assets they control. You will still find SSLv2 within enterprises and SSLv3 is still a thing on the Internet.

 

  • Regulations and the speed of technology adaptation to the new protocol will slow the progress of implementation. That being said, I expect TLS 1.3 to be deployed much faster than its predecessors.

 

 

At the end of the day

I think Mr. Kennedy’s request to the IETF TLS Working Group was well formulated, appropriately targeted, and based on legitimate business drivers. Despite this, I don’t think that he will find the balance between security and business needs that he is looking for. One could argue that the request came too late in the process, but I honestly think the outcome would not have been different had it been sent during the first draft of the TLS 1.3 specification. On the other hand, increased awareness and involvement at that time would have allowed more time to review options and work with regulators and vendors to meet the industry’s needs.

Earlier this month Kyle Flaherty wrote a post on the Rapid7 Community Blog about how Rapid7 came out on top for coverage of the Center for Internet Security (CIS) Top 20 Security Controls. In light of recent DDoS events I’d like to take a little time to discuss at a high level what the controls are, how they would help, and what organizations can do to improve their posture in these areas.

What are the Critical Security Controls?

Here is how the CIS describes the Top 20 Critical Security Controls:

The CIS Critical Security Controls (CIS Controls) are a concise, prioritized set of cyber practices created to stop today’s most pervasive and dangerous cyber attacks. The CIS Controls are developed, refined, and validated by a community of leading experts from around the world. Organizations that apply just the first five CIS Controls can reduce their risk of cyberattack by around 85 percent. Implementing all 20 CIS Controls increases the risk reduction to around 94 percent.

 

Each CIS control is made up of a high level concept and contains multiple sub-controls that support this concept. The controls are prioritized and efforts to implement a given control will support and enable the implementation of lower priority controls. Progression through the controls also serves as a measure of security program maturity.

Why do they matter?

You don’t have to be tech-savvy to be aware of the impact that inadequately secured devices can have on organizations and the general Internet. In the last few weeks, record-breaking DDoS attacks have originated from Internet of Things (IoT) devices. News of these events made the general non-tech press when Brian Krebs was targeted. They gained an all new level of public awareness when they were used to DDoS Dyn’s DNS services last week and impacted Twitter, Spotify, Reddit, GitHub, and others. While the public sees the impacts to Twitter, organizations feel the impact when services like GitHub and Okta aren’t available.

 

These attacks have been tied to the Mirai malware, which spreads by logging into Internet-accessible Telnet services using a list of factory default credentials. Reports of the botnet’s size vary widely depending on the source and their access to data. Level3 blogged that they have found over 490,000 members of Mirai family botnets. Dyn stated that they saw "10s of millions of IP addresses" during the attack on them. One would hope that a protocol as insecure as Telnet would not continue to be prevalent, but recent scans of the Internet by Censys.io reveal over 5.3 million devices that returned a Telnet banner on port 23/TCP. Since Mirai kills the Telnet, SSH, and HTTP services on devices it infects, any devices that were infected at the time of the scan would not be represented.

 

A device doesn’t have to be compromised to be used in a DDoS. Jon Hart, a fellow researcher on the Rapid7 Labs team, recently wrote a blog post describing how public access to certain UDP services can enable Distributed Reflected Denial of Service (DRDoS) attacks. These attacks allow the attacker to hide the source of the attack, often while amplifying its size. He provided some great data about services that could be used by attackers and provided pointers to the datasets that Rapid7 makes publicly available via Project Sonar. These datasets are the results of Internet IPv4 scanning and provide insight into the prevalence of certain services and potential amplification metrics.

 

I’d like to expand on Jon’s post a bit by talking about two services in particular. As Jon pointed out, 1,768,634 hosts responded to a NetBIOS name service probe on port 137/UDP. If you dig into the data that he linked, you will find that 1,657,431 (93.7%) responded with a NetBIOS hostname and, in many cases, a domain name. There is another UDP study that Project Sonar performs that I think is relevant as well. We scan on 1434/UDP for the Microsoft SQL Browser Service. This service provides information about the Microsoft SQL Server, which databases it hosts, and on what ports or endpoints they can be found. If you look at the dataset from 10/03/2016 and process it using Rapid7’s open source DAP and Recog tools, you will find 149,344 responses that provided instance and/or server names as well as server version information. Both of these services not only lend themselves to being used in DRDoS attacks, they also leak potentially sensitive data. It's unlikely that the services exposed by these hosts were intended to be Internet accessible. Their presence on the Internet presents a risk not only to the Internet in general but to the device owners as well.
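To see the kind of information the SQL Browser service hands out, here is a small sketch of the probe itself: a single-byte request to 1434/UDP, the same sort of query scanners send. The target address is a placeholder, this is not the Sonar tooling, and you should only probe hosts you own or are authorized to test.

import socket

TARGET = "192.0.2.10"  # placeholder address (TEST-NET-1); substitute a host you are authorized to probe

# MC-SQLR "client unicast" request: a single 0x03 byte sent to 1434/UDP asks the
# SQL Browser service to describe the SQL Server instances on the host.
probe = b"\x03"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(3)
    sock.sendto(probe, (TARGET, 1434))
    try:
        data, _ = sock.recvfrom(4096)
    except socket.timeout:
        print("no response (service not exposed, filtered, or host down)")
    else:
        # Responses begin with a small header (0x05 plus a two-byte length) followed
        # by a semicolon-delimited string of instance names, versions, and TCP ports.
        print(data[3:].decode("ascii", errors="replace"))

If that semicolon-delimited string comes back from the open Internet, the host is advertising its SQL Server instance names, versions, and ports to anyone who asks.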

How do the Critical Security Controls help?

Adoption of the CIS Controls can significantly reduce risk and greatly improve an organization’s ability to respond to security incidents. For example, here are 5 of the 20 CIS Controls that, if followed, would reduce an organization's likelihood of being a source of traffic in a DDoS:

 

  1. Inventory of Authorized and Unauthorized Devices

  2. Inventory of Authorized and Unauthorized Software

  4. Continuous Vulnerability Assessment and Remediation

  9. Limitations and Control of Network Ports, Protocols, and Services

 11. Secure Configurations for Network Devices such as Firewalls, Routers, and Switches

 

Yes, those are all obvious security measures. The value here is that the CIS Controls provide prioritization of effort. For example, implementing #9 or #11 above without #1 or #2 is doomed to failure in any complex environment. Additionally, each high-level control has between 4 and 14 more tactical sub-controls that support it. Here are extracts from a couple of selected example controls:

 

1.1 Deploy an automated asset inventory discovery tool and use it to build a preliminary inventory of systems connected to an organization’s public and private network(s)…

 

1.2 If the organization is dynamically assigning addresses using DHCP, then deploy dynamic host configuration protocol (DHCP) server logging, and use this information to improve the asset inventory and help detect unknown systems.

 

4.1 Run automated vulnerability scanning tools against all systems on the network on a weekly or more frequent basis and deliver prioritized lists of the most critical vulnerabilities to each responsible system administrator

 

9.1 Ensure that only ports, protocols, and services with validated business needs are running on each system.

 

9.4 Verify any server that is visible from the Internet or an untrusted network, and if it is not required for business purposes, move it to an internal VLAN and give it a private address.

 

Each of the sub-controls helps build capability and awareness as well as enables the implementation of later controls. When these controls are baked into an organization’s operational processes, security becomes an intrinsic attribute of the environment, not an on-demand effort that interrupts business processes when an event occurs. An organization that had implemented these controls would be aware of the services that were exposed to the Internet and the risks that they present. In the case of a previously unknown vulnerability, it would have the information required to quickly respond and mitigate the risk.

Next Steps

Here are some steps that you can take to learn about the CIS Controls as well as reduce the likelihood that devices in your environment are used in DDoS attacks.

  • Go to the CIS website and learn about the CIS Controls. They provide high level overviews, FAQs, and the ability to download the CIS Controls for free.

  • If your organization is a service provider or a company with assigned ASNs, you can sign up for free Shadowserver reports. The Shadowserver Foundation scans the Internet for certain services of concern, such as those that could be used in DDoS, and will provide regular reports on these to network owners.

  • Use an external service, such as the Rapid7 Perimeter Scanning Service, or an externally hosted scan engine to perform scans of your Internet accessible IP space. This will provide a more accurate picture of what your organization is exposing to the Internet than that provided by an internally hosted scanner.

  • Use the data provided by Rapid7 Project Sonar, the Censys team, and others on the Scans.IO website.  You can download datasets individually or use the Censys team’s search engine.

 

 

Good Luck!

by Bob Rudis, todb, Derek Abdine & Rapid7 Labs Team

 

What do I need to know?

 

Over the last several days, the traffic generated by the Mirai family of botnets has changed. We’ve been tracking the ramp-up and draw-down patterns of Mirai botnet members and have seen the peaks associated with each reported large-scale and micro attack since the DDoS attack against Dyn, Inc. We’ve tracked over 360,000 unique IPv4 addresses associated with Mirai traffic since October 8, 2016 and have been monitoring another ramp-up in activity that started around November 4, 2016:

 

hcloud-mirai-01-1.png

By mid-day on November 8, 2016, the traffic volume was already as high as for the entire day of November 6, 2016, with all indications pointing to a probable significant increase in botnet node accumulation by the end of the day.

 

We’ve also been tracking the countries of origin for the Mirai family traffic. Specifically, we’ve been monitoring the 10 countries with the largest number of daily Mirai nodes. This list has been surprisingly consistent since October 8, 2016.

 

hcloud-mirai-02-1.png

However, on November 6, 2016 the U.S. dropped out of the top 10 originating countries. As we dug into the data, we noticed a significant and sustained drop-off of Mirai nodes from two internet service providers:

 

hcloud-mirai-03-1.png

There are no known impacts from this recent build-up, but we are continuing to monitor the Mirai botnet family patterns for any sign of significant change.

 

What is affected?

The Mirai botnet was initially associated with various components of the “internet of things”, specifically internet-enabled cameras, DVRs and other devices not generally associated with malicious traffic or malware infections. There are also indications that variants of Mirai may be associated with traditional computing environments, such as Windows PCs.

 

As we’ve examined the daily Mirai data, a large percentage of connections in each country come from autonomous systems (large blocks of internet addresses owned by the provider of network services for those blocks) associated with residential or small-business internet service provider networks.

 

How serious is this?

 

Regardless of the changes we've seen in the Mirai botnet over the last several days, we still do not expect Mirai, or any other online threat, to have an impact on the 2016 United States Presidential Election. The ballot and voting systems in use today are overwhelmingly offline, conducted over approximately 3,000 counties and parishes across the country. Mounting an effective, coordinated, remote attack on these systems is nigh impossible.

 

The most realistic worst-case scenarios we envision for cyber-hijinks this election day are website denial of service attacks, which can impact how people get information about the election. These attacks may (or may not) be executed against voting and election information websites operated by election officials, local and national news organizations, or individual campaigns.

 

If early voting reports are any indication, we expect to see more online interest in this election than the last presidential election, and correspondingly high levels of engagement with election-related websites. Therefore, even if an attack were to occur, it may be difficult for website users to distinguish it from a normal outage due to volume. For more information on election hacking, read this post.

 

How did we find this?

 

We used our collection of Heisenberg Cloud honeypots to capture telnet session data associated with the behavior of the Mirai botnet family. Heisenberg Cloud consists of 136 honeypot nodes spread across every region/zone of six major cloud providers. The honeypot nodes only track connections and basic behavior within those connections. They are not configured to respond to or decode/interpret Mirai commands.
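Heisenberg's internals aren't covered in this post, but a listener that only tracks connections and basic behavior can be conceptually quite small. The sketch below is a simplified illustration of that idea, not Heisenberg's actual code: it logs the source address, a timestamp, and the first bytes sent (such as Mirai's credential guesses) without emulating a real telnet service.

import asyncio
import datetime

LISTEN_PORT = 2323  # unprivileged port for the sketch; real telnet is 23/TCP

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    peer = writer.get_extra_info("peername")
    try:
        # Record only connection metadata and the first bytes sent; do not emulate a shell.
        first_bytes = await asyncio.wait_for(reader.read(128), timeout=10)
    except asyncio.TimeoutError:
        first_bytes = b""
    print(f"{datetime.datetime.utcnow().isoformat()} {peer} {first_bytes!r}")
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    server = await asyncio.start_server(handle, "0.0.0.0", LISTEN_PORT)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())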

 

 

What was the timeline?

 

The overall Mirai tracking period covers October 8, 2016 through today, November 8, 2016. All data and charts provided in this report use an extract of data from October 30, 2016 through November 8, 2016.
