
The Rapid7 Security Advisory Service relies heavily on the CIS top 20 critical controls as a framework for security program analysis because they are universally applicable to information security and IT governance. Correct implementation of all 20 of the critical controls greatly reduces security risk, lowers operational costs, and significantly improves any organization’s defensive posture.  The 20 critical controls are divided into System, Network, and Application families, and each control can be subdivided into sections in order to facilitate implementation and analysis.

 

The first of the 20 controls, “Inventory of Authorized and Unauthorized Devices,” is split into six focused sections relating to network access control, automation, and asset management. The control specifically addresses the need for awareness of what’s connected to your network (Tip: Don't forget to scan your network for IoT devices), as well as the need for proper internal inventory management and management automation. Implementing inventory control is probably the least glamorous way to improve a security program, but done right it reduces insider-threat and loss risks, cleans up the IT environment, and improves the other 19 controls.

What it is:

The Inventory of Authorized and Unauthorized Devices is part of the “systems” group of the CIS top 20 critical security controls. It specifically addresses the need for awareness of what is on your network, as well as awareness of what shouldn’t be. Sections 1.1, 1.3, and 1.4 address the need for automated tracking and inventory, while 1.2, 1.5, and 1.6 relate to device-level network access control and management. The theme of the control is fairly simple: you should be able to see what is on your network, know which systems belong to whom, and use this information to prevent unauthorized users from connecting to the network. High-maturity organizations often address the automation and management sections of this control well, but Rapid7 often sees gaps in inventory-based network access control due to the perceived complexity of implementing NAC.

 

How to implement it:

There are numerous effective ways to implement the Inventory of Authorized and Unauthorized Devices control. Many of them will also significantly improve the implementation of other controls relating to network access, asset configuration, and system management. Successful implementations often focus on bridging existing system inventory or configuration management services and device-based network access control. The inventory management portion is usually based on software or endpoint management services such as SCCM, while access control can leverage existing network technology to limit device access to networks.

Robust implementation of DHCP logging and management will effectively address sections 1.1, 1.2, and 1.4 of Critical Control #1. Deploying DHCP logging and using the outputs to establish awareness of what is currently connected to the network is an extremely good first step to full implementation. Tracking DHCP activity has an additional impact on the IT support and management side of the organization, as well; it serves as a sort of “early warning” system for network misconfiguration and management issues. For organizations with a SIEM solution or centralized audit repository, ingested DHCP logs can allow correlation with other security and network events. Correlating the logs against additional system information from tools like SCCM or event monitoring services can also assist with inventory tracking and automated inventory management, which has added benefits on the financial and operations management side of the shop, as well.
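As a sketch of the idea, assuming ISC dhcpd-style syslog output (the log lines, MAC addresses, and IPs below are made up; adjust the pattern for your DHCP server), DHCPACK lines can be folded into a basic device inventory:

```python
import re

# Assumed ISC dhcpd syslog format; other DHCP servers need a different pattern.
LEASE_RE = re.compile(
    r"DHCPACK on (?P<ip>\d+\.\d+\.\d+\.\d+) to (?P<mac>(?:[0-9a-f]{2}:){5}[0-9a-f]{2})"
)

def build_inventory(log_lines):
    """Map each MAC address to its most recently acknowledged IP."""
    inventory = {}
    for line in log_lines:
        match = LEASE_RE.search(line)
        if match:
            inventory[match.group("mac")] = match.group("ip")
    return inventory

logs = [
    "Feb 21 09:14:02 dhcp1 dhcpd: DHCPACK on 10.0.1.23 to 00:1a:2b:3c:4d:5e via eth0",
    "Feb 21 09:14:40 dhcp1 dhcpd: DHCPACK on 10.0.1.24 to aa:bb:cc:dd:ee:ff via eth0",
    "Feb 21 09:15:10 dhcp1 dhcpd: DHCPACK on 10.0.1.25 to 00:1a:2b:3c:4d:5e via eth0",
]
inventory = build_inventory(logs)
print(inventory)  # the repeated MAC keeps only its latest lease
```

A real deployment would feed this from a SIEM or log collector rather than a static list, and enrich each MAC with owner and asset data from an inventory tool.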

 

Admin Tips:

  • For DHCP-enabled network segments that have lower change rates (non-workstation segments) consider adding a detective control such as a notification of a new DHCP lease. Backup, VOIP, or network device management networks are often effective conduits for an attacker’s lateral movement efforts, and usually don’t have a high amount of device churn, so increasing detective controls there may create little administrative overhead and increase the possibility of detecting indicators of compromise.
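The detective control above can be sketched as a simple check of each new lease against a baseline of known MACs (the addresses and alert format here are hypothetical):

```python
# Hypothetical baseline of devices expected on a low-churn segment
# (e.g. a VOIP or backup network).
KNOWN_MACS = {"00:1a:2b:3c:4d:5e", "aa:bb:cc:dd:ee:ff"}

def new_lease_alerts(leases, known_macs):
    """Return an alert string for each lease whose MAC is not in the baseline."""
    alerts = []
    for mac, ip in leases:
        if mac not in known_macs:
            alerts.append(f"ALERT: unknown device {mac} leased {ip}")
    return alerts

observed = [
    ("00:1a:2b:3c:4d:5e", "10.9.0.10"),  # known device, no alert
    ("de:ad:be:ef:00:01", "10.9.0.99"),  # never seen before
]
for alert in new_lease_alerts(observed, KNOWN_MACS):
    print(alert)
```

Because these segments change rarely, the baseline stays small and the alert volume stays low, which is what makes the control cheap to operate.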

 

  • The Inventory of Authorized and Unauthorized Devices control also recommends the use of automated inventory tools that scan the network to discover new systems or devices, as well as tracking the changes made to existing systems. While DHCP logging is an effective basic measure, tools such as SCCM, KACE, Munki, and SolarWinds effectively lower the effort and time surrounding inventory management, asset configuration, and system management. 

 

  • Many customers with existing Microsoft Enterprise Agreements may already have licenses available for SCCM. When combined with Certificate Authorities, Group Policies, and some creativity with PowerShell, a handful of administrators can maintain awareness and control of authorized devices to address many aspects of this foundational critical control.

 

 

Even if you don’t use SCCM or Nexpose, most agent-based system discovery and configuration management tools will allow organizations to address this control and other governance requirements. Effective implementation of inventory-based access control lets the organization see and manage what is connecting to its network, which is critical for any good security program. While management tools often require time and effort to deploy, the cost benefit is significant: they allow smaller IT teams to have a major impact on their network quickly, and they assist with patching, situational awareness, and malware defense.

Hat tip and thanks to Jason Beatty and Magen Wu for application-specific info and editorial suggestions.

 

Related: The CIS Critical Security Controls Explained – Control 2: Inventory of Authorized and Unauthorized Software

Today, we'd like to announce eight vulnerabilities that affect four Rapid7 products, as described in the table below. While all of these issues are relatively low severity, we want to make sure that our customers have all the information they need to make informed security decisions regarding their networks. If you are a Rapid7 customer who has any questions about these issues, please don't hesitate to contact your customer success manager (CSM), our support team, or leave a comment below.

 

For all of these vulnerabilities, the likelihood of exploitation is low, due to an array of mitigating circumstances, as explained below.

 

Rapid7 would like to thank Noah Beddome, Justin Lemay, Ben Lincoln (all of NCC Group); Justin Steven; and Callum Carney - the independent researchers who discovered and reported these vulnerabilities, and worked with us on pursuing fixes and mitigations.

 

| Rapid7 ID | CVE | Product | Vulnerability | Status |
|-----------|-----|---------|---------------|--------|
| NEX-49834 | CVE-2017-5230 | Nexpose | Hard-Coded Keystore Password | Mitigations available |
| MS-2417 | CVE-2017-5228 | Metasploit | stdapi Dir.download() Directory Traversal | Fixed (4.13.0-2017020701) |
| MS-2417 | CVE-2017-5229 | Metasploit | extapi Clipboard.parse_dump() Directory Traversal | Fixed (4.13.0-2017020701) |
| MS-2417 | CVE-2017-5231 | Metasploit | stdapi CommandDispatcher.cmd_download() Globbing Directory Traversal | Fixed (4.13.0-2017020701) |
| PD-9462 | CVE-2017-5232 | Nexpose | DLL Preloading | Fixed (6.4.24) |
| PD-9462 | CVE-2017-5233 | AppSpider Pro | DLL Preloading | Fix in progress (6.14.053) |
| PD-9462 | CVE-2017-5234 | Insight Collector | DLL Preloading | Fixed (1.0.16) |
| PD-9462 | CVE-2017-5235 | Metasploit Pro | DLL Preloading | Fixed (4.13.0-2017022101) |


CVE-2017-5230: Rapid7 Nexpose Static Java Keystore Passphrase

Cybersecurity firm NCC Group discovered a design issue in Rapid7's Nexpose vulnerability management solution, and has released an advisory with the relevant details here. This section briefly summarizes NCC Group's findings, explains the conditions that would need to be met in order to successfully exploit this issue, and offers mitigation advice for Nexpose users.

 

Conditions Required to Exploit

One feature of Nexpose, as with all other vulnerability management products, is the ability to configure a central repository of service account credentials so that a VM solution can log in to networked assets and perform a comprehensive, authenticated scan for exposed and patched vulnerabilities. Of course, these credentials tend to be sensitive, since they tend to have broad reach across an organization's network, and care must be taken to store them safely.

 

The issue identified by NCC Group revolves around our Java keystore for storing these credentials, which is encrypted with a static, vendor-provided password, "r@p1d7k3y5t0r3." If a malicious actor were to get hold of this keystore, that person could use this password to decrypt and expose all stored scan credentials. While this password is not prominently documented, it is often known to Nexpose customers and Rapid7 support engineers, since it's used in some backup recovery scenarios.

 

This vulnerability is not likely to offer an attacker much of an advantage, however, since they would need to already have extraordinary control over your Nexpose installation in order to exercise it: high-level privileges are required to get hold of the keystore that contains the stored credentials. In order to obtain and decrypt this file, an attacker would need to already have at least root/administrator privileges on the server running the Nexpose console, OR have a Nexpose console "Global Administrator" account, OR have access to a backup of a Nexpose console configuration.

 

If the attacker already has root on the Nexpose console, the jig is up; customers are already advised to restrict access to Nexpose servers through normal operating system and network controls. This level of access would already represent a serious security incident, since the attacker would have complete control over the Nexpose services and could leverage one of any number of techniques to extend privileges to other network assets, such as conducting local man-in-the-middle network monitoring, local memory profiling, or other, more creative techniques to increase access.

 

Similarly, Global Administrator access to the Nexpose console would, at minimum, allow an attacker to obtain a list of every vulnerable system in scope, alter or skip scheduled scans, and create new and malicious custom scan templates.

 

That leaves Nexpose console backups, which we believe represent the most likely attack vector. Sometimes, backups of critical configurations are stored in networked locations that aren't as secure as the backed-up system itself. We advise against this, for obvious reasons; if backups are not secured at least as well as the Nexpose server itself, it is straightforward to restore the backup to a machine under the attacker's control (where they would have root/administrator) and proceed to leverage that local privilege as above.

 

Designing a Fix

While encrypting these credentials at rest is clearly important for safety's sake, eventually these credentials do have to be decrypted, and the key to that decryption has to be stored somewhere. After all, the whole point of a scheduled, authenticated scan is to automate logins. Storing that key offline, in an operator's head, means having to deal with a password prompt anytime a scan kicks off. This would be a significant change in how the product works, and would be a change for the worse.

 

Designing a workable fix to this exposure is challenging. The simple solution is to enable users to pick their own passwords for this keystore, or generate one per installation. This would at least force attackers who have gained access to critical network infrastructure to do the work of either cracking the saved keystore, or do the slightly more complicated work of stepping through the decryption process as it executes.

 

Unfortunately, this approach would immediately render existing backups of the Nexpose console unusable -- a fact that tends to only be important at the least opportune time, after a disaster has taken out the hosting server. Given the privilege requirements of the attack, this trade-off, in our opinion, isn't worth the future disaster of unrestorable backups.

 

While we do expect to implement a new strategy for encrypting stored credentials in a future release, care will need to be taken to ensure both that the customer experience with disaster recovery remains the same and that support costs aren't unreasonably impacted by the change.

 

Mitigations for CVE-2017-5230

Until an updated version is available, Nexpose customers who use authenticated scans are advised to take care in securing their Nexpose backups, as well as their Nexpose consoles, to avoid this and other exposures.

 

CVE-2017-5228, CVE-2017-5229, CVE-2017-5231: Metasploit Meterpreter Multiple Directory Traversal Issues

Metasploit Framework contributor and independent security researcher Justin Steven reported three issues in the way Metasploit Meterpreter handles certain directory structures on victim machines, which can ultimately lead to a directory traversal issue on the Meterpreter client. Justin reported his findings in an advisory, here.

 

Conditions Required to Exploit

Before discussing how this issue is exploited, we need to be careful about the terms "attacker" and "victim." In most cases, a user who is loading and launching Meterpreter on a remote computer is the "attacker," and that remote computer is the "victim." After all, few people actually want Meterpreter running on their machine, since it's normally delivered as a payload to an exploit.

 

However, this vulnerability flips these roles around. If a computer acts as a honeypot, and lures an attacker into loading and running Meterpreter on it, that honeypot machine has a unique opportunity to "hack back" at the original Metasploit user by exploiting these vulnerabilities.

 

So, in order for an attack to be successful, the attacker, in this case, must entice a victim into establishing a Meterpreter session to a computer under the attacker's control. Usually, this will be the direct result of an exploit attempt from a Metasploit user.

 

Designing a Fix

Justin worked closely with the Metasploit Framework team to develop fixes for all three issues. The fixes themselves can be inspected in the open source Metasploit Framework repository, at Pull Requests 7930, 7931, and 7932; they ensure that data from Meterpreter sessions is properly inspected, since that data may be attacker-controlled. Huge thanks to Justin for his continued contributions to Metasploit!

 

Mitigations for CVE-2017-5228, CVE-2017-5229, CVE-2017-5231

In addition to updating Metasploit to at least version 4.13.0-2017020701, Metasploit users can help protect themselves from the consequences of interacting with a purposefully malicious host with the use of Meterpreter's "Paranoid Mode," which can significantly reduce the threat of this and other undiscovered issues involving malicious Meterpreter sessions.

 

CVE-2017-5232, CVE-2017-5233, CVE-2017-5234, CVE-2017-5235: DLL Preloading

Independent security researcher Callum Carney reported to Rapid7 that the Nexpose and AppSpider installers ship with a DLL Preloading vulnerability, wherein an attacker could trick a user into running malicious code when installing Nexpose for the first time. Further investigation from Rapid7 Platform Delivery teams revealed that the installation applications for Metasploit Pro and the Insight Collector exhibit the same vulnerability.

 

Conditions Required to Exploit

DLL preloading vulnerabilities are well described by Microsoft, but in short, they occur when a program fails to specify an exact path to a DLL it loads; instead, the program searches for that DLL in a number of default system locations, as well as the current directory.

 

In the case of an installation program, that current directory may be a general "Downloads" folder, which can contain binaries downloaded from all sorts of places.

 

If an attacker can convince a victim to download a malicious DLL, store it in the same location as one of the Rapid7 installers identified above, and then install one of those applications, the victim can trigger the vulnerability. In practice, DLL preloading vulnerabilities occur more often on shared workstations, where the attacker specifically poisons the Downloads directory with a malicious DLL and waits for the victim to download and install an application susceptible to this preloading attack. It is also sometimes possible to exercise a browser vulnerability to download (but not execute) an arbitrary file, and again, wait for the user to run an installer later. In all cases, the attacker must already have write permissions to a directory that contains the Rapid7 product installer.
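To illustrate the exposure, here is a hypothetical helper (not a Rapid7 tool) of the kind an administrator might run to spot DLLs sitting alongside installers in a download directory:

```python
import os
import tempfile

def stray_dlls(directory):
    """List DLL files in a directory -- each is a candidate an attacker
    could have planted ahead of an installer run."""
    return sorted(
        name for name in os.listdir(directory)
        if name.lower().endswith(".dll")
    )

# Demo against a throwaway stand-in for a "Downloads" folder.
with tempfile.TemporaryDirectory() as downloads:
    for name in ("setup.exe", "version.dll", "notes.txt"):
        open(os.path.join(downloads, name), "w").close()
    print(stray_dlls(downloads))  # ['version.dll']
```

A DLL next to an installer is not proof of compromise, but on a shared workstation an unexpected, unsigned DLL in the Downloads folder deserves a closer look before any installer is run from that directory.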

 

Usually, people install each Rapid7 product only once per machine, so the window of exploitation is also severely limited.

 

Designing a Fix

In the case of Metasploit Pro, Nexpose, and the Insight Collector, the product installers were updated to define exactly where system DLLs are located, and they no longer rely on dynamic searching for missing DLLs. An updated installer for AppSpider Pro will be made available once testing is completed.

 

Mitigations for CVE-2017-5232, CVE-2017-5233, CVE-2017-5234, CVE-2017-5235

In all cases, users are advised to routinely clean out their "Downloads" folder, as this issue tends to crop up in installer packages in general. Of course, users should be aware of where they are downloading and running executable software, and Microsoft Windows executables support a robust, certificate-based signing procedure that can ensure that Windows binaries are, in fact, what they purport to be.

 

Users who keep historical versions of installers for backup and downgradability purposes should be careful to only launch those installation applications from empty directories, or at least, directories that do not contain unknown, unsigned, and possibly malicious DLLs.

 

Coordinated Disclosure Done Right

NCC Group, Justin Steven, and Callum Carney all approached Rapid7 with these issues privately, and have proven to be excellent and accommodating partners in reporting these vulnerabilities to us. As a publisher of vulnerability information ourselves, Rapid7 knows that this kind of work can at times be combative, unpleasant, and frustrating. Thankfully, that was not the case with these researchers, and we greatly appreciate their willingness to work with us and lend us their expertise.

 

If you're a Rapid7 customer who has any questions about this advisory, please don't hesitate to contact your regular support channel, or leave a comment below.

TL;DR

This week a vulnerability was disclosed, which could result in sensitive data being leaked from websites using Cloudflare's proxy services. The vulnerability - referred to as "Cloudbleed" - does not affect Rapid7's solutions/services.

 

This is a serious security issue, but it’s not a catastrophe. Out of an abundance of caution, we recommend you reset your passwords, starting with your most important accounts (especially admin accounts). A reasonable dose of skepticism and prudence will go a long way in effectively responding to this issue.

 

What’s the story on this Cloudflare vulnerability?

On February 18, 2017 Tavis Ormandy, a vulnerability researcher with Google’s Project Zero, uncovered sensitive data leaking from websites using Cloudflare’s proxy services, which are used for their content delivery network (CDN) and distributed denial-of-service (DDoS) mitigation services. Cloudflare provides a variety of services to a lot of websites - a few million, in fact. Tavis notified Cloudflare immediately. A few features in Cloudflare’s proxy services had been using a flawed HTML parser that leaked uninitialized memory from Cloudflare’s edge servers in some of their HTTP responses. Vulnerable features in Cloudflare’s service were disabled within hours of receiving Tavis’ disclosure, and their services were fully patched with all vulnerable features fully re-enabled within three days. Cloudflare has a detailed write-up about Cloudbleed's underlying issue and their response to it - check it out!

 

This Cloudflare memory leak issue is certainly serious, and it's great to see that Cloudflare acted responsibly and rapidly after receiving a disclosure of Google’s findings on a Friday night. Most companies require several weeks to respond to vulnerability disclosures, but Cloudflare mitigated the vulnerability within hours and appears to have done the majority of the work required to fully remediate the issue in well under a week, starting on a weekend, which itself is impressive.

 

Why should I care?

Your information may have been leaked. Any vendor’s website using Cloudflare’s proxy service could have exposed your passwords, session cookies, keys, tokens, and other sensitive data. If your organization used this Cloudflare proxy service between September 22, 2016 and February 18, 2017, your data and your customers’ data could have been leaked and cached by search engines. As Ryan Lackey notes, “Regardless, unless it can be shown conclusively that your data was NOT compromised, it would be prudent to act as if it were.”

 

Who is affected by the Cloudflare vulnerability?

Before Tavis' disclosure, data had been leaking for months. It’s too soon to know the full scope of the data that was leaked and the sites and services that were affected (although we're off to a decent start). There is currently a fair amount of confusion and misalignment on the status of various services. For example, Tavis claims to have recovered cached 1Password API data, while 1Password claims users’ password data could not be exposed by this bug.

 

How bad is it, really?

One of the most important things to consider right now is that understanding the full impact of this Cloudflare bug will take some time; it’s too soon to know exactly how deep this goes. However, if we’re using Heartbleed as our de facto “security bug severity measuring stick”, it looks at this point like the Cloudflare bug is not as disastrous.

 

For starters, the Cloudflare bug was centralized in one place (i.e. Cloudflare’s proxy service). While search engines like Google, Bing, and Yahoo cached leaked data from Cloudflare, they were quick to purge these caches with Cloudflare’s help. Cloudflare stopped the bleeding and worked with Google and others to mop up the remaining mess very quickly. As of now, the scope of affected data seems relatively limited. According to Cloudflare, “The greatest period of impact was from February 13 and February 18 with around 1 in every 3,300,000 HTTP requests through Cloudflare potentially resulting in memory leakage.”

 

On the other hand, Heartbleed existed for two years before it was disclosed. It also needed to be patched everywhere it existed - it was decentralized - and there are still systems vulnerable to Heartbleed today. There are known instances of attackers using Heartbleed to steal millions of records, months after a patch was released. At this point in time, there’s no evidence of attackers exploiting Cloudbleed.

 

Think about the “best case scenario” for users protecting themselves against the Cloudflare vulnerability vs. Heartbleed. To protect against Cloudbleed, users need to follow a few steps (which we’ve outlined below). To protect themselves from Heartbleed, users had to follow all of these same steps, reroll SSL/TLS certificates, and patch OpenSSL on all of their vulnerable systems.

 

What do I do now?

There are several steps you can take to protect yourself:

  1. Log out and log back into your accounts to invalidate your accounts' sessions, especially for sites/services that are known to have been impacted by this (e.g. Uber).
  2. Clear your browser cookies and cache.
  3. This is a great time to change your passwords, keys, and other potentially affected credentials - something you should be doing regularly anyway! While there was some talk of password manager data being exposed, this shouldn’t scare you away from using these tools. For the vast majority of us, it is the most practical way to ensure we’re using strong, unique passwords on every site with the ability to more easily update those passwords on a regular basis.
  4. Set up two-factor authentication on every one of your accounts that supports it, especially your password manager.
  5. If your website or services used services affected by the Cloudflare vulnerability during the time window mentioned above, force your users to reset all of their authentication credentials (passwords, OAuth tokens, API keys, etc.). Also reset credentials used for system and service accounts.
  6. Keep an eye out for notifications from your vendors, check their websites and blogs, and proactively contact them - especially those that handle your critical and sensitive data - about whether or not they were affected by this bug and how you can continue using their services securely if they were.
  7. If you’re not sure if you’re using an affected site or service, check out this tool: Does it use Cloudflare?
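For a rough programmatic version of that last check: Cloudflare-fronted sites typically return a "Server: cloudflare" response header and a "CF-RAY" request ID (both are real Cloudflare headers, but treat any match as a hint, not proof). A minimal heuristic sketch:

```python
def looks_like_cloudflare(headers):
    """Guess from HTTP response headers whether a site sits behind Cloudflare.

    Heuristic only: header values can be altered or stripped by other proxies.
    """
    normalized = {k.lower(): v.lower() for k, v in headers.items()}
    return (
        normalized.get("server") == "cloudflare"
        or "cf-ray" in normalized
    )

# Example headers (made up) as a client library might return them.
print(looks_like_cloudflare({"Server": "cloudflare", "CF-RAY": "33f8d5c0a1b2-EWR"}))
print(looks_like_cloudflare({"Server": "nginx"}))
```

Note that a site using Cloudflare's DNS but not its proxy service was not affected by this bug, so even a positive result only means "worth investigating."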

 

Big thanks to my teammate Katie Ledoux for writing this post with me!

As I mentioned in our last post, the 20 critical controls are divided into System, Network, and Application families in order to simplify analysis and implementation. This also allows partial implementation of the controls by security program developers who aren't building a program from scratch, but want to apply all 20 of the controls. The first two controls of the Center for Internet Security's (CIS) Critical Controls are based around inventory; in my experience, they're also often overlooked by most security teams at the level that the CIS and NIST address them. Knowledge and control of inventory is an essential security architecture need - done properly, it gives the security team very strong awareness of the organization's network and personnel environment, and significantly improves detection and response aspects of any security program.

 

The second control, “Inventory of Authorized and Unauthorized Software,” is split into four sections, each dealing with a different aspect of software management. Much like Control 1, “Inventory of Authorized and Unauthorized Devices,” this control addresses the need for awareness of what’s running on your systems and network, as well as the need for proper internal inventory management. The CIS placed these controls as the "top 2" in much the same way that NIST addresses them as "priority 1" controls in the 800-53 framework; inventory and endpoint-level network awareness are critical to decent incident response, protection, and defense.


What it is:

The Inventory of Authorized and Unauthorized Software is part of the “systems” group of the 20 critical controls. The theme of the control is fairly simple: you should be able to see what software is on your systems, who installed it, and what it does, and you should be able to use this information to prevent unauthorized software from being installed on endpoints. The control is well outlined in NIST Special Publication 800-167, and relates back to NIST 800-53 and Cybersecurity Framework recommendations. High-maturity organizations often address the automation and management sections of this control well, but Rapid7 sees gaps around inventory-based software configuration control due to the perceived complexity of implementing software inventory management systems or endpoint management clients.

 

How to implement it:

Many of the methods used to implement the Inventory of Authorized and Unauthorized Software will also significantly improve the implementation of other controls relating to network access, asset configuration, and system management (Controls 1, 6, 10, 14, 15, 17, and 19). Specifically, local administrator access and install rights should not be granted to most users. This limitation also assists with other critical controls that deal with access and authentication. Limiting who can install software also limits who can click “OK” on installations that include malware, adware, and other unwanted code. The added bonus of successfully removing admin rights is a smaller shadow-IT footprint in most organizations, contributing to better internal communication and security awareness.


Once installation rights have been limited, any whitelisting or blacklisting processes should be done in stages, typically starting with a list of unauthorized applications (a blacklist) and finishing with a list of authorized applications that make up the whitelist. This can be rolled out as an authorized-software policy first, and followed up with scanning, removal, and then central inventory control. Successful implementations of software inventory control often focus on bridging system configuration management services and software blacklisting and whitelisting. The inventory management portion is usually based on a software inventory tool or endpoint management services such as SCCM or Footprints, or GPO and local policy controls on Windows.
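The staged rollout above can be sketched as a simple comparison of an endpoint's installed software against the two lists (the software names and list contents here are hypothetical):

```python
# Hypothetical stage-one blacklist and stage-two whitelist.
BLACKLIST = {"torrent-client", "coinminer"}
WHITELIST = {"office-suite", "pdf-reader", "vpn-client"}

def review_inventory(installed, blacklist, whitelist=None):
    """Return (banned, unapproved) software found on an endpoint.

    Pass whitelist=None during the early, blacklist-only stage;
    supply the whitelist once it is mature enough to enforce.
    """
    banned = sorted(installed & blacklist)
    unapproved = sorted(installed - whitelist) if whitelist else []
    return banned, unapproved

installed = {"office-suite", "torrent-client", "legacy-tool"}
banned, unapproved = review_inventory(installed, BLACKLIST, WHITELIST)
print(banned)      # ['torrent-client']
print(unapproved)  # ['legacy-tool', 'torrent-client']
```

The point of the staging is visible in the output: "legacy-tool" is not banned, only unapproved, so it can be triaged into the whitelist (or scheduled for removal) before enforcement begins.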

 

Beyond limiting administrator and installation rights and blacklisting, some form of integrity checking and management should be set up. This is possible using only OS-based tools in most cases, and Microsoft includes integrity management tools in Windows 10. Typically, OS-level integrity management tools rely on limiting installation based on a list of trusted actors (installers, sources, etc.). In more comprehensive cases, such as some endpoint protection services, there are heuristic and behavior-based tools that monitor critical application libraries and paths for change. Since integrity management is intrinsically tied to malware prevention and data protection, implementing this section of the control actually assists with Controls 8, 9, and 14: browser and e-mail configuration, malware defenses, and data protection.
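At its simplest, integrity checking means recording a cryptographic digest of each critical file and re-checking it later. A minimal sketch (the file names are hypothetical; real tools also watch permissions, signatures, and load paths):

```python
import hashlib
import os
import tempfile

def file_digest(path):
    """SHA-256 of a file's contents, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def changed_files(baseline, paths):
    """Return paths whose current digest no longer matches the baseline."""
    return [path for path in paths if file_digest(path) != baseline.get(path)]

# Demo: baseline two files, then tamper with one.
with tempfile.TemporaryDirectory() as root:
    lib = os.path.join(root, "app.dll")
    cfg = os.path.join(root, "app.cfg")
    for path in (lib, cfg):
        with open(path, "w") as handle:
            handle.write("original contents")
    baseline = {path: file_digest(path) for path in (lib, cfg)}
    with open(lib, "w") as handle:
        handle.write("tampered")
    print(changed_files(baseline, [lib, cfg]))  # only app.dll is flagged
```

Production integrity monitors add scheduling, a protected baseline store, and alert routing on top of exactly this comparison.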

 

Admin Tips:

  • Aside from AppLocker, Microsoft allows GPO-based whitelisting for supported versions of Windows. These can be edited locally using “secpol.msc” on everything but the “Home” versions of Windows. Organizations with domain controllers or centrally managed Group Policy Objects can use the same process by accessing “software restriction policies” and adjusting the “designated file types” object to include authorized software. This method is effective for workstations with limited software needs, and single-purpose systems such as application servers or virtual machines that run dedicated software. Apple’s OS X and most flavors of Linux have similar features, although they may be a little harder to access.

 

  • Most endpoint protection suites have some form of integrity protection included as an add-on. Your mileage may vary, since it can be tricky to tune the alerts from these services, but they're a helpful addition to the software integrity side of things, and can serve as a primary means of integrity control in cases where a good inventory is already in place.

 

  • For more general-purpose workstations, a number of client based solutions exist, ranging from antivirus and endpoint protection suites that limit software from a central console to tools like Carbon Black, Power Broker, and the Authority Management Suite integrated into Dell’s KACE.

 

Software inventory management is an important enough topic in security that the National Institute of Standards and Technology has published a guide to implementing software whitelisting which covers most of Control 2. It's part of their cybersecurity series, and is available for free on the NIST website as a PDF or by searching the site for publication 800-167. As I mentioned above, this control and the device inventory control are critical to having a responsive security program; getting the inventory side of the office in order will cut down on the amount of work needed when an incident arises, and will make policy development and enforcement far easier.

If you’re investing beyond malware detection, you’ve probably come across User Behavior Analytics (aka UBA, UEBA, SUBA). Why are organizations deploying UBA, and are they finding value in it? In this primer, let’s cover what’s being seen in the industry, and then a bit on how we’re approaching the problem here at Rapid7.

 

What Are Organizations Looking For?

According to the 2016 Verizon DBIR, 63% of data breaches involved weak, default, or compromised credentials. Companies have solid coverage for known malware and their network perimeter, but teams now need visibility into normal and anomalous user behavior. Largely, the response has been to deploy SIEM technology to monitor for these threats. While the tech is helping with log aggregation and correlation, teams aren’t consistently detecting the stealthy behavior real-world attackers are using to breach networks.
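To make the idea concrete, here is a toy illustration of the baseline-and-deviation logic that UBA tools build on: learn a user's typical login hours, then flag logins far outside them. Real products model many more signals (assets, geography, peer groups); the numbers and threshold here are purely illustrative.

```python
# Toy illustration of baseline-and-deviation detection: learn each user's
# typical login hour, then flag logins far outside that baseline.
from statistics import mean, pstdev

def is_anomalous(history_hours, new_hour, threshold=2.0):
    """Flag a login hour more than `threshold` std devs from the user's mean."""
    mu, sigma = mean(history_hours), pstdev(history_hours)
    if sigma == 0:
        return new_hour != mu
    return abs(new_hour - mu) / sigma > threshold

alice = [9, 9, 10, 8, 9, 10, 9]   # habitual 9am-ish logins
print(is_anomalous(alice, 3))     # True - a 3am login is unusual
print(is_anomalous(alice, 10))    # False
```

A compromised credential often looks exactly like this: a valid login, just at an odd hour, from an odd place, or to an odd asset.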

 

What Are the Analysts Saying About UBA?

Gartner: In their most recent Market Guide for User and Entity Behavior Analytics, they agree that UEBA vendors can help threat detection across a variety of use cases. However, they don’t make it easy by listing 29 vendors in the report, so be careful with selection – perhaps the most striking prediction is that “by 2020, less than five stand-alone UEBA solutions will remain in the market, with other vendors focusing on specific use cases and outcomes.”

 

Forrester: In the July 2016 Forrester report, Vendor Landscape: Security User Behavior Analytics (SUBA), a key takeaway is to “require a SUBA demonstration with your own data.” Something everyone is agreeing on is the need for user behavior analytics to be a part of a larger analytics suite, aptly named Security Analytics, which extends beyond SIEM to include network analysis and visibility, endpoint visibility, behavioral analysis, and forensic investigative tools. For more on this shift, we hosted guest speaker, Forrester senior analyst Joseph Blankenship, on the webcast, “The Struggle of SIEM”.

 

451 Research: In addition to rallying behind the need to go beyond SIEM with Security Analytics, there’s agreement that even in 2017, there will be a shakeout in the UBA space. That doesn’t just mean life or death for startup vendors, but also the challenge for large SIEM vendors to incorporate UBA into existing legacy platforms.

 

IDG: The suggested approach is under a security operations and analytics platform architecture (SOAPA). While SIEM technology still plays at the core, SOAPA also includes endpoint detection and response, an incident response platform, network security analytics, UBA, vulnerability management, anti-malware sandboxes, and threat intelligence. While that’s certainly a mouthful, the important takeaway is that UBA is only one of the technologies that should work together to detect threats across the entire attack chain.

 

Questions to Consider

  • If you’re looking at User Behavior Analytics, you’ve likely already experienced pain with an existing SIEM. Will you have enough resources to maintain both the SIEM deployment and a separate UBA tool?
  • Can you put the technology to the test? If you don’t have an internal red team, a great time to POC a UBA vendor is when considering a penetration test.
  • For more, check out our evaluation brief: A Matchmaker's Guide to UBA Solutions.

 

And, for added context on the go, we just released a new episode all about UBA on the Security Nation podcast:

Security-Nation-UBA-Podcast.PNG

 

The Rapid7 Take

Since the first GA date of our UBA technology in early 2014, we’re proud to be both a first mover and have hundreds of customers using UBA to monitor their environments. However, we found that UBA technology alone still leaves gaps in detection coverage, forcing teams to jump between portals during every incident investigation. For that reason, InsightIDR, our solution for incident detection and response, combines SIEM, UBA, and Endpoint Detection capabilities, without the traditional burdens involved in deploying each of these technologies independently.

 

User-Behavior-Analytics-Primer-Rapid7.PNG

 

In addition to the UBA detecting stealthy behavior, InsightIDR also analyzes real-time endpoint data and uses Deception Technology to reveal behavior unseen by log analysis. Through a robust data search and visualization platform, security teams can bring together log search, user activity, and endpoint data for investigations without jumping between multiple tools. Of course, this is a bold claim – if you’d like to learn more, watch the 3-minute Solution Overview below or check out our webcast, User Behavior Analytics, as easy as ABC.

 

Sam Humphries

Preparing for GDPR

Posted by Sam Humphries Employee Feb 23, 2017

GDPR is coming…

If your organisation does business with Europe, or more specifically does anything with the Personal Data of EU Citizens who aren’t dead (i.e. Natural Persons), then, just like us, you’re going to be in the process of living the dream that is Preparing for the General Data Protection Regulation. For many organisations this is going to be a gigantic exercise, as even if you have implemented processes and technologies to meet current regulations, there is still additional work to be done.

 

Penalties for infringements of GDPR can be incredibly hefty; they are designed to be dissuasive. Depending on the type of infringement, the fine can be €20 million or 4% of your worldwide annual turnover, whichever is the higher amount. Compliance is not optional, unless you fancy being fined eye-watering amounts of money, or you really don’t have any personal data of EU citizens within your control.
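The "whichever is higher" rule is worth working through: the 4% figure only overtakes the flat €20 million once worldwide annual turnover exceeds €500 million. A two-line sketch:

```python
# The upper fine ceiling: EUR 20 million or 4% of worldwide annual
# turnover, whichever is higher. Turnover figures are illustrative.

def max_fine(annual_turnover_eur):
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(max_fine(100_000_000))    # 20000000 - the flat EUR 20m cap applies
print(max_fine(1_000_000_000))  # 40000000.0 - 4% of turnover is higher
```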

 

The Regulation applies from May 25th 2018. That’s the day from which organisations will be held accountable, and depending on which news website you choose to read, many organisations are far from ready at the time of writing this blog. Preparing for GDPR is likely to be a cross-functional exercise, as Legal, Risk & Compliance, IT, and Security all have a part to play. It’s not a small amount of regulation (are they ever?) to read and understand either – there are 99 Articles and 173 Recitals.

 

I expect if you’re reading this, it’s because you’re hunting for solutions, services, and guidance to help you prepare. Whilst no single software or services vendor can act as a magic bullet for GDPR, Rapid7 can certainly help you cover some of the major security aspects of protecting Personal Data. In addition to having solutions to help you detect attackers earlier in the attack chain, and service offerings that can help you proactively test your security measures, we can also jump into the fray if you do find yourselves under attack.

 

Processes, procedures, and training, in addition to technology and services, all have a part to play in GDPR. Having a good channel partner to work with during this time is vital, as many will be able to provide the majority of what you need. For some organisations, changes to roles and responsibilities are required too – such as appointing a Data Protection Officer, and nominating representatives within the EU to act as points of contact.

So what do I need to do?

If you’re just beginning your GDPR compliance quest, I’d recommend you take a look at this guide, which will get you started in your considerations. Additionally, having folks attend training so that they can understand and learn how to implement GDPR is highly recommended – spending a few pounds/euros/dollars, etc. on training now can save you from the costly infringement fines later on down the line. There are many courses available – in the UK I recently took this foundation course, but do hunt around to find the best classroom or virtual courses that make sense for your location and teams.

 

Understanding where Personal Data physically resides, the categories of Personal Data you control and/or process, how and by whom it is accessed, and how it is secured are all areas that you have to deal with when complying with GDPR. Completing Privacy Impact Assessments is a good step here. Processes for access control, incident detection and response, breach notification and more will also need review or implementation. Being hit with a €20million+ fine is not something any organisation will want to be subject to. Depending on the size of your organisation, a fine of this magnitude could easily be a terminal moment.

 

There is some good news: demonstrating compliance, mitigating risk, and ensuring a high level of security are all factors that are considered if you are unfortunate enough to experience a data breach. But ideally, not being breached in the first place is best, as I’m sure you‘d agree, so this is where your security posture comes in.

 

Article 5, which lists the six principles of processing personal data, states that personal data must be processed in an appropriate manner so as to maintain security. This principle is covered in more detail by Article 32, which you can read more about here.

 

Ten Recommendations for Securing Your Environment

  1. Encrypt data – both at rest and in transit. If you are breached, but the Personal Data is rendered unintelligible to the attacker, then you do not have to notify the Data Subjects (see Article 34 for more on this). There are lots of solutions on the market today – have a chat with your channel partner to see what options are best for you.
  2. Have a solid vulnerability management process in place, across the entire ecosystem. If you’re looking for best practices recommendations, do take a look at this post. Ensuring ongoing confidentiality, integrity and availability of systems is part of Article 32 – if you read Microsoft’s definition of a software vulnerability it talks to these three aspects.
  3. Backups. Backups. Backups. Please make backups. Not just in case of a dreaded ransomware attack; they're good housekeeping anyway in case of things like storage failure, asset loss, natural disaster, even a full cup of coffee over the laptop. If you don’t currently have a backup vendor in place, Code42 have some great offerings for endpoints, and there are a plethora of server and database options available on the market today. Disaster recovery should always be high on your list regardless of which regulations you are required to meet.
  4. Secure your web applications. Privacy-by-design needs to be built in to processes and systems – if you’re collecting Personal Data via a web app and still using http/clear text then you’re already going to have a problem.
  5. Pen tests are your friend. Attacking your systems and environment to understand your weak spots will tell you where you need to focus, and it’s better to go through this exercise as a real-world scenario now than wait for a ‘real’ attacker to get in to your systems. You could do this internally using tools like Metasploit Pro, and you could employ a professional team to perform regular external tests too. Article 32 says that you need to have a process for regularly testing, assessing, & evaluating the effectiveness of security measures. Read more about Penetration testing in this toolkit.
  6. Detect attackers quickly and early. Finding out that you’ve been breached ~5 months after it first happened is an all too common scenario (current stats from Mandiant say that the average is 146 days after the event). Almost two-thirds of organisations told us that they have no way of detecting compromised credentials, which has topped the list of leading attack vectors in the Verizon DBIR for the last few years. User Behaviour Analytics provides you with the capabilities to detect anomalous user account activity within your environment, so you can investigate and remediate fast.
  7. Lay traps. Deploying deception technologies, like honey pots and honey credentials, are a proven way to spot attackers as they start to poke around in your environment and look for methods to access valuable Personal Data. 
  8. Don’t forget about cloud-based applications. You might have some approved cloud services deployed already, and unless you’ve switched off the internet it’s highly likely that there is a degree of shadow IT (a.k.a. unsanctioned services) happening too. Making sure you have visibility across sanctioned and unsanctioned services is a vital step to securing them, and the data contained within them.
  9. Know how to prioritise and respond to the myriad of alerts your security products generate on a daily basis. If you have a SIEM in place that’s great, providing you’re not getting swamped by alerts from the SIEM, and that you have the capability to respond 24x7 (attackers work evenings and weekends too). If you don’t have a current SIEM (or the time or budget to take on a traditional SIEM deployment project), or you are finding it hard to keep up with the number of alerts you’re currently getting, take a look at InsightIDR – it covers a multitude of bases (SIEM, UBA and EDR), is up and running quickly, and generates alert volumes that are reasonable for even the smallest teams to handle. Alternatively, if you want 24x7 coverage, we also have a Managed Detection and Response offering which takes the burden away, and is your eyes and ears regardless of the time of day or night.
  10. Engage with an incident response team immediately if you think you are in the midst of an attack. Accelerating containment and limiting damage requires fast action. Rapid7 can have an incident response engagement manager on the phone with you within an hour.
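Some of these checks can be automated cheaply. For example, recommendation 4 (no Personal Data collection over clear-text HTTP) can be spot-checked against a list of your web endpoints; the URLs below are illustrative placeholders:

```python
# Quick hygiene check for recommendation 4: flag any data-collection
# endpoints still configured with plain-text http. URLs are illustrative.
from urllib.parse import urlparse

endpoints = [
    "https://signup.example.com/register",
    "http://legacy.example.com/contact",   # clear-text: a GDPR problem
]

def insecure(urls):
    """Return the URLs whose scheme is anything other than https."""
    return [u for u in urls if urlparse(u).scheme != "https"]

print(insecure(endpoints))  # ['http://legacy.example.com/contact']
```

Running something like this against an inventory of forms and APIs is a cheap first pass before the heavier privacy-by-design review.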

 

Security is just one aspect of the GDPR, for sure, but it’s very much key to compliance. Rapid7 can help you ready your organisation – please don’t hesitate to contact us or one of our partners if you are interested in learning more about our solutions and services.

 

GDPR doesn’t have to be GDP-argh!

Deral Heiland

IoT: Friend or Foe?

Posted by Deral Heiland Employee Feb 21, 2017

Since IoT can serve as an enabler, I prefer to consider it a friend.  However, the rise of recent widespread attacks leveraging botnets of IoT devices has called the trust placed in these devices into question. The massive DDoS attacks have quieted down for now, but I do not expect the silence to last long. Since many of the devices used in recent attacks may still be online and many new IoT vulnerabilities continue to be identified, I expect what comes next will look similar or the same as past attacks.

 

While we're enjoying a lull before that happens, I figured it’s time for another good heart-to-heart discussion about the state of IoT security, including what it means to use IoT wisely and how to keep ourselves and each other safe from harm.

 

First I would like to level set: security vulnerabilities are not unique to IoT. We have been dealing with them for decades, and I expect we will have them with us for decades to come. We need to focus on building a healthy understanding and respect for the use and security of IoT technologies. This will help us make better-informed decisions in relation to their associated risk and deployment.

 

So why do IoT security vulnerabilities appear to have become such a threat lately? I think the answer to that question has four parts.

  1. The mass quantity of currently deployed devices. Unfortunately we cannot fix this issue, as these devices are already in place and deployment growth is expected to skyrocket by the end of the decade. Further, I don’t think we should want to fix this issue; there's nothing worse than avoiding new technology solely out of fear.
  2. Common vulnerabilities. IoT technology has taken a beating (rightly so) over the last year or two because of all of the simple vulnerabilities that are being discovered. Simple issues such as weak encryption, unauthenticated access to services, and default passwords hardcoded in the firmware are commonplace and just a small sample of core, basic issues within these devices.
  3. Ease of use. We are living in a plug-and-play generation. As a manufacturer, if your product doesn’t just work out of the box, it is unlikely anyone will buy it. So, sadly, we continue to trade security for usability.
  4. Exposure through unfettered access. Your plug-and-play IoT technology is exposed to any anonymous entity on the Internet. This is analogous to giving your car keys and a bottle of whiskey to not just your sixteen-year-old, but all possible sixteen-year-olds around the world. Nothing good will come of it.

 

Since we are not going to abandon IoT, the first item is effectively unfixable. With that said, expect billions more IoT devices to enter our environment over the coming years. This makes the remaining three items all the more critical. So let us discuss those items next and look at possible solutions and steps moving forward.

 

Common vulnerabilities:

We are never going to solve this issue overnight, but it's not like we can just throw up our hands and give up.  In our current IoT world we have dozens of new startups producing new products constantly, as well as dozens of established companies — that have never produced IoT products before — releasing new and “enhanced” products every month. To address these issues it would be great to see these companies implement a security program to facilitate security best practices in the design of their products. For these companies, contacting and partnering with non-profit organizations focused on the public good (like our friends at builditsecure.ly, or I Am The Cavalry) can help them during the design phase. Last but not least, every manufacturer of IoT needs to develop a sound process for handling discovered security issues within their products, including an effective security patching process.

 

Ease of Use:

Everyone likes a product that is easy to deploy and operate, but we need to consider security as part of that deployment. Default password issues have been haunting us for years, and it's time we exorcise that demon. We need to make setting a password part of the deployment process for all IoT technology, including consumer-grade solutions. Passwords are not the only issue, though. Another common problem is products shipping with every function and service enabled, whether they are being used or not. Only the services needed for basic operation should be enabled by default; all other features should be enabled as needed. Of course, this will require vendors to put more attention into documentation and into making their product management consoles more intuitive. In the end, with a little work, we can expect to see "ease-of-use" also become "ease-of-security" in our IoT products.

 

Exposure:

In the case of exposure issues, these are often just unplanned deployments made without consideration of the impact or risk. Exposing IoT management services such as Telnet, SSH, and even web consoles directly to the Internet should be avoided, unless you truly need the whole internet knocking at your IoT door. If remote management is vital to the operation of a product, it is best practice to place those services behind a VPN, require two-factor authentication, or both (depending on the nature of the IoT solution being deployed). Another solution is to leverage basic firewall configurations to restrict administrative access to a specific IP address on the host device. Also, do not forget that it is very common for IoT technology to have management and control services that do not conform to standard port numbers; I have seen telnet on a number of ports besides TCP port 23. So it is important to understand the product you’re deploying in detail – this will help you avoid accidental exposures. As added food for thought on deploying IoT technology, consider taking a look at a blog we created several months ago on IoT best practices: getting-a-handle-on-the-internet-of-things-in-the-enterprise.
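Finding telnet on non-standard ports is straightforward to check for on your own devices. A telnet daemon usually opens with IAC (0xff) option-negotiation bytes or a login prompt, so a small banner grab can spot it; the host and ports below are illustrative, and you should only probe equipment you are authorized to test:

```python
# Sketch: check a host you own for telnet-style services on non-standard
# ports. Host and port values are illustrative.
import socket

def looks_like_telnet(banner: bytes) -> bool:
    """Heuristic: IAC negotiation bytes or a login prompt in the banner."""
    return banner.startswith(b"\xff") or b"login" in banner.lower()

def probe(host, ports, timeout=2.0):
    """Return {port: banner} for ports that answer like telnet."""
    findings = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.settimeout(timeout)
                banner = s.recv(64)
        except OSError:
            continue
        if looks_like_telnet(banner):
            findings[port] = banner
    return findings

# Example (against a device you are authorized to test):
# probe("192.0.2.10", [23, 2323, 8023])
```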

 

So in conclusion, in the debate around the trustworthiness of IoT, we need to turn our attention away from fear, uncertainty, and doubt, and focus on working together to resolve the three issues I have pointed out here. With some diligence and cooperation, I am sure we can better manage the risk associated with the use and deployment of IoT technology. With the growth of IoT expected to skyrocket over the next several years, we pretty much have no choice.

I’m excited to be shaving my head at Shaves that Save at the RSA Conference US 2017—the second annual event where information security professionals go bald to raise money for the St Baldrick’s Foundation to fund a cure for childhood cancer. I hope you can join us for a whole lot of fun—head shaving, a great DJ, a bar to benefit St Baldrick’s, and an appearance by Stormtroopers and other Star Wars characters from the 501st Legion. And while we’ll have a lot of fun, the bigger goal is to raise money for research that will help save kids’ lives.

IMG_8065.jpg

 

The event is on Wednesday, February 15th from 6-7:30 PM in the Viewing Room across from the South Expo hall. You don’t need to register for the event, but you do need an RSA pass. (An expo pass is fine. Don't have one? You can register for an Expo Pass.)

 

We already have 12 shavees signed up from across the InfoSec industry!  I’m honored to join Josh Corman (Atlantic Council), Diana Kelley (IBM), Pete Lindstrom (IDC), Ed Moyle (ISACA), Rich Mogull (Securosis), Chris Nickerson (LARES), Michael Nickle (CA), Nick Selby (Secure Ideas Incident Response Team) and others in InfoSec to stand in solidarity with kids who typically lose their hair while undergoing treatment for cancer, and to help fund critical research.

 

I’ve been supporting St Baldrick’s for a number of years, and this is the third time I’m shaving my head. I was introduced to the foundation through a corporate partnership with NetApp, which is a large St Baldrick’s supporter. Since then, I’ve gotten to know a number of kids and families impacted by cancer, and seen that they deserve better. I’ve met kids who ultimately lost their battle. I’ve seen kids who have taken chemo for over 1,000 days in treatment. Thankfully I’ve seen a bunch where the treatment has worked, but many live in fear of a recurrence or long-term side effects from chemotherapy. These kids just want to be kids, and I’ve learned so much from their amazing attitudes as they persevere through treatment.

 

Unfortunately for these kids, only 4% of US Federal funding for cancer research is solely dedicated to childhood cancer, and St. Baldrick’s Foundation helps fill the funding gap as the largest non-government funder of childhood cancer research grants.  St. Baldrick’s research has helped more of them survive, and provides hope for a cure for others.  No child should have to fight cancer or suffer the effects of treatment.

 

How can you help?

  • At the RSA Conference?  Come cheer on the shavees!  We have a number of people shaving their head for the first time, and your energy makes it even better!
  • Donate to the St Baldrick’s Foundation (a U.S. non-profit 501 (c)3 organization) to support critical research.  You can donate from the event page.
  • Shave with us?  We have space left for a few more people if you want to join us.
  • Promote #ShavesThatSave on social media to help get the word out about the event.

 

I’d like to thank all the volunteers who are making this event a success: Rapid7's Event Management Team for bringing the event to life, DJ Ka'nete for donating his services, MIS Training Institute, Entrust Datacard, the 501st Legion, Golden Gate Garrison, co-organizers Nick Selby and Davin Baker, and all the other volunteers and shavees.

One of the most nerve-racking things a person can do is give a talk to a group of people. As a matter of fact, approximately 3 out of 4 people suffer from speech anxiety. This is further exacerbated in an industry and community like ours, where many of us are introverts and/or suffer from "imposter syndrome". We think we aren't as smart or good at something as we actually are. We often feel like someone else has done a better job explaining a theory or area of information security than we ever could. We also often feel like we have nothing new or interesting to contribute, but that isn't true!

 

The people who make up our community have a diverse skill set. Each of us has experiences and a pool of knowledge that are unique to us, even when they may seem similar to someone else's. We each have a unique voice, way of thinking, and ways of processing information. This is why the Security BSides Las Vegas' Proving Grounds track is so near and dear to me.

 

For those who are unfamiliar with what we do, Proving Grounds gives a platform to folks who have never spoken at a national conference (DEF CON, RSA, DerbyCon) to give their first talk in a "safe" environment. We pair them with a mentor: someone established in the community who has experience presenting. They work together so the first-time speakers can take their submitted outline and abstract and turn it into a well thought-out talk. The mentors help with everything from how the presentation looks and the flow of the information being shared, to presenting tips and tricks. The mentors are there the day of their partner's talk for moral support, and we also offer new presenters a chance to practice in the room they'll present in before the days of the con.

 

For the past four years, I have worked together with SecurityMoey as the co-director of this track. I leapt at the opportunity to work with him because I wished there had been something like this when I was preparing for my first talk. I’m an extremely nervous and anxious presenter—so much so, that I usually spend the 10 minutes or so before my talk in the bathroom trying to calm down and pump myself up. I also had a problem when I first started to submit CFPs where I didn’t know what information was relevant to the review board, what was too much or too little, or how to tailor a talk to an audience. I more or less winged it for a couple of years until I had watched enough talks and gotten enough peer feedback that I felt comfortable with how I wanted to present my information. It was a lot more work than it could have been, which is another benefit of the Proving Grounds track.

 

I can easily go on about how passionate I am about this program and how important mentoring is to our community. In fact, Moey and I presented on this at DerbyCon a few years ago. What it all boils down to is this: We have an awesome community, and we need to continue to grow by welcoming new people and ideas to our conferences.

 

Deadline for submission is February 15th, so submit soon! To submit your talk proposal, go here: https://bsideslv.org/openconf/openconf.php

 

Link to the talk Moey and I did at DerbyCon: http://www.irongeek.com/i.php?page=videos/derbycon5/teach-me04-learning-through-mentorship-michael-ortega-magen-wu

Today, we're excited to release Rapid7's latest research paper, Under the Hoodie: Actionable Research from Penetration Testing Engagements, by Bob Rudis, Andrew Whitaker, and Tod Beardsley, with loads of input and help from the entire Rapid7 pentesting team.

 

This paper covers the often occult art of penetration testing and seeks to demystify the process, techniques, and tools that pentesters use to break into enterprise networks. By drawing on qualified data from the real-life experiences of dozens of pentesters in the field, we're able to suss out the most commonly exploited vulnerabilities, the most commonly leveraged network misconfigurations, and the most effective methods we've found to compromise high-value credentials.

 

Finding: Detection is Everything

Probably the most actionable finding we discovered is that most organizations that conduct penetration testing exercises have a severe lack of usable, reliable intrusion detection capabilities. Over two-thirds of our pentesters completely avoided detection during the engagement. This is especially concerning given that most assessments don't put a premium on stealth; due to constraints in time and scope, pentesters generate an enormous amount of malicious traffic. In an ideal network, these activities would be setting off alarm bells everywhere. Most engagements end with recommendations to implement some kind of incident detection and response, regardless of which specific techniques were used for compromise.

Finding: Enterprise Size and Industry Doesn't Matter

When we started this study, we expected to find quantitative differences between small networks and large networks, and between different industries. After all, you might expect a large, financial industry enterprise of over 1,000 employees would be better equipped to detect and defend against unwelcome attackers due to the security resources available and required by various compliance regimes and regulatory requirements. Or, you might believe that a small, online-only retail startup would be more nimble and more familiar with the threats facing their business.

 

Alas, this isn't the case. As it turns out, the detection and prevention rates are nearly identical between large and small enterprises, and no industry seemed to fare any better or worse when it came to successful compromises.

 

This is almost certainly due to the fact that IT infrastructure pretty much everywhere is built using the same software and hardware components. Thus, all networks tend to be vulnerable to the same common misconfigurations that have the same vulnerability profiles when patch management isn't firing at 100%. There are certainly differences in the details -- especially when it comes to custom-designed web applications -- but even those tend to have the same sorts of frameworks and components that power them.

 

The Human Touch

Finally, if you're not really into reading a bunch of stats and graphs, we have a number of "Under the Hoodie" sidebar stories, pulled from real-life engagements. For example, while discussing common web application vulnerabilities, we share a story of how a number of otherwise lowish-severity, external web application issues led to the eventual compromise of the entire internal back-end network. Not only are these stories fun to read, they do a pretty great job of illustrating how unrelated issues can conspire on an attacker's behalf to lead to surprising levels of unauthorized access.

 

I hope you take a moment to download the paper and take a look at our findings; I don't know of any other research out there that explores the nuts and bolts of penetration testing in quite the depth or breadth that this report provides. In addition, we'll be covering the material at our booth at the RSA security conference next week in San Francisco, as well as hosting a number of "Ask a Pentester" sessions. Andrew and I will both be there, and we love nothing more than connecting with people who are interested in Rapid7's research efforts, so definitely stop by.

Note: Rebekah Brown was the astute catalyst for the search for insecure broadcast equipment and the major contributor to this post.

 

Reports have surfaced recently of local radio station broadcasts being hijacked and used to play anti-Donald Trump songs (https://www.rt.com/viral/375935-trump-song-hacked-radio/). The devices that were taken over are Barix Exstreamer systems, though there are several other brands of broadcasters, including Pyko, that are configured and set up the same way as these devices and would also be vulnerable to this type of hijacking.

 

Devices by these manufacturers work in pairs. In the most basic operating mode, one encodes and transmits a stream over an internal network or over the internet and the other receives it and blasts it to speakers or to a transmitter.

 

how_it_works.png

 

Because they work in tandem, if you can gain access to one of these devices, you have information about the other one, including the IP address and port(s) it’s listening on. After seeing the story, we were curious about the extent of the exposure.

The View from the Control Room

We reviewed the January 31, 2017 port 80 scan data set from Rapid7’s Project Sonar to try to identify Barix Instreamer/Exstreamer devices and Pyko devices, based on key string markers we identified from a collection of PDF manuals. We found over a thousand of them listening on port 80 and accessible without authentication. They are somewhat popular on almost every continent and especially popular here in the United States.
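The identification step can be sketched in a few lines. This is a hypothetical reconstruction, not the actual Sonar pipeline, and the marker strings below are illustrative stand-ins for the ones pulled from the manuals:

```python
# Hypothetical reconstruction of the fingerprinting step; MARKERS values
# are illustrative stand-ins, not the actual strings from the study.
MARKERS = {
    "barix": ("Barix", "Exstreamer", "Instreamer"),
    "pyko": ("Pyko",),
}

def identify_device(body):
    """Return the vendor name if any marker string appears in an HTTP body."""
    for vendor, markers in MARKERS.items():
        if any(marker in body for marker in markers):
            return vendor
    return None

print(identify_device("<title>Barix Exstreamer 100</title>"))  # barix
print(identify_device("<title>Apache2 Default Page</title>"))  # None
```

In practice you would run a function like this over each stored port-80 response body from the scan data set and tally the matches.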

 

map.png

 

Many of these devices have their administration interfaces on something besides port 80, so this is likely just a sample of the scope of the problem.

 

Because they operate in pairs, once you gain access to one device, you can learn about their counterparts directly from the administration screens:

 

pyko-green.png

 

barix_yellow.png

 

barix_red.png

It’s trivial to reconfigure either the source or destination points to send or receive different streams and it’s likely these devices go untouched for months or even years. It’s also trivial to create a script to push a new configuration to all the devices very quickly (we estimated five minutes or less).
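For operators who want to check their own gear, a minimal exposure check follows. It assumes the admin interface lives at the web root and that an unauthenticated deployment answers with HTTP 200 rather than a 401/403 challenge; only run it against devices you are authorized to test:

```python
# Defensive sketch: does a device's web interface answer without auth?
# Assumes the admin UI is served at "/" and that a protected device
# responds with a 401/403 challenge. Test only hosts you own.
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

def classify_status(status):
    """Map an HTTP status code to an exposure verdict."""
    if status in (401, 403):
        return "protected"
    return "open" if status == 200 else "unknown"

def check_device(host, port=80):
    """Fetch the device's root page and classify the response."""
    try:
        with urlopen(f"http://{host}:{port}/", timeout=5) as resp:
            return classify_status(resp.status)
    except HTTPError as err:   # 4xx/5xx responses raise HTTPError
        return classify_status(err.code)
    except URLError:           # host down or unreachable
        return "unreachable"
```

If `check_device("your-exstreamer.local")` comes back "open", setting a password should be the next stop.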

 

What is truly alarming is not only are these devices set up to be on the internet without any sort of authentication, but that this issue has been brought up several times in the past. The exposure – which in this case, is really a misconfiguration issue and not strictly a software vulnerability – was identified as early as April 2016, and this specific hijacking technique emerged shortly after the inauguration.

 

Coming Out of a Spot

The obvious question is: if this issue was identified nearly a year ago, why are susceptible systems still on the internet? The answer is that just because an issue is identified does not automatically mean that the individuals responsible for securing those systems are aware that they are vulnerable, or of what the impact would be. As much as we as an industry talk about information sharing, we often aren’t sharing the right information with the right people. Station owners and operators do not always have a technical or security background, and may not read the security news or blogs. Even when the mainstream media published information on the impacted model and version, system operators may not have known that they were using that particular model for their broadcast, or they may simply have missed the brief media exposure.

 

We cannot and should not assume that people are aware of the issues that are discovered, and therefore we are making a greater effort to inform U.S. station owners by reaching out to them directly in coordination with the National Coordinating Center for Communications (COMM-ISAC) and the National Association of Broadcasters (NAB). We've offered not only to inform these operators that they are vulnerable, but also to help them understand the technical measures that are required to secure their systems, down to walking through how to set a password. What is intuitive to some is not always intuitive to others.

 

Cross Fade Out

While hijacking a station to play offensive music is certainly not good, the situation could have been — and still can be — much more serious. There are significant political tensions in the U.S. right now, and a coordinated attack against the nearly 300 devices we identified in this country could cause targeted chaos and panic. Considering how easy it is to access and take control of these devices, a coordinated hijacking of these broadcast streams is not such a far-fetched scenario, so it is imperative to secure these systems to reduce the potential impact of future attacks.

 

You can reach out to research@rapid7.com for more information about the methodology we used to identify and validate the status of these devices.

NOTE: Tom Sellers, Jon Hart, Derek Abdine and (really) the entire Rapid7 Labs team made this post possible.

 

On the internet, no one may know if you’re of the canine persuasion, but with a little time and just a few resources they can easily determine whether you’re running an open “devops-ish” server or not. We’re loosely defining devops-ish as:

 

  • MongoDB
  • CouchDB
  • Elasticsearch

 

for this post, but we have a much broader definition and more data coming later this year. We use the term “devops” as these technologies tend to be used by individuals or shops emulating the development and deployment practices found in the “DevOps” (https://en.wikipedia.org/wiki/DevOps) communities.

 

Why are we focusing on devops-ish servers? I’m glad you asked!

 

The Rise of Ransomware

 

If you follow IT news, you’re likely aware that attackers who are focused on ransomware for revenue generation have taken to the internet searching for easy marks to prey upon. In this case the would-be victims are those running production database servers directly connected to the internet with no authentication.

 

Here’s a smattering of news articles on the subject:

 

 

The core reason attackers are targeting devops-ish technologies is that most of these servers ship with default configurations that have tended to be wide open (i.e. they listen on all IP addresses and have no authentication) to facilitate easy experimentation and exploration. That configuration means you can give a new technology a test run on your local workstation to see if you like the features or API, but it also means that, if you’re not careful, you’ll be exposing real data to the world if you deploy the same way on the internet.
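As a rough illustration, an audit for the open-by-default pattern might look like the sketch below. It assumes a flat mongod.conf-style "key: value" layout (real MongoDB configs are nested YAML), and the defaults it falls back to are assumptions made for the example:

```python
# Minimal sketch: flag the two defaults that make a test install
# internet-exposed. Assumes a flat "key: value" config; real MongoDB
# configuration files are nested YAML, so treat this as illustrative.
def audit_conf(text):
    """Flag open-by-default settings in a flat 'key: value' config."""
    settings = {}
    for line in text.splitlines():
        if ":" in line and not line.strip().startswith("#"):
            key, _, value = line.partition(":")
            settings[key.strip()] = value.strip()
    findings = []
    # Assumed defaults for the example: bind to all interfaces, no auth.
    if settings.get("bindIp", "0.0.0.0") in ("0.0.0.0", "::"):
        findings.append("listens on all interfaces")
    if settings.get("authorization", "disabled") != "enabled":
        findings.append("authentication not enabled")
    return findings

print(audit_conf("bindIp: 0.0.0.0"))
# ['listens on all interfaces', 'authentication not enabled']
```

A hardened config ("bindIp: 127.0.0.1" plus "authorization: enabled") comes back with an empty findings list.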

 

Attackers have been ramping up their scans for these devops-ish services. We’ve seen this activity in our network of honeypots (Project Heisenberg):

 

probes.png

 

We’ll be showing probes for more services, including CouchDB, in an upcoming post/report.

 

When attackers find targets, they often take advantage of these open configurations by encrypting the contents of the databases and leaving little “love notes” in the form of table or index names with instructions on where to deposit bitcoins to get the keys back to your data. In other cases, the contents of the databases are dumped and kept by the attacker but wiped from the target, with the attacker then demanding a ransom for the return of the kidnapped data. In still other cases, the data is wiped from the target but not kept by the attackers, leaving anyone who gives in to these demands with a double whammy: paying the ransom and getting no data in return.
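Spotting a captive instance is often as simple as scanning database or table names for the note. A hedged sketch, with an illustrative (not exhaustive) marker list:

```python
# Illustrative marker list; real ransom-note names vary widely.
RANSOM_MARKERS = ("READ_ME", "PLEASE_READ", "WARNING", "BITCOIN", "RESTORE")

def looks_ransomed(names):
    """True if any database/table name contains a known ransom marker."""
    return any(marker in name.upper()
               for name in names
               for marker in RANSOM_MARKERS)

print(looks_ransomed(["PLEASE_READ_ME_XMG"]))  # True
print(looks_ransomed(["users", "orders"]))     # False
```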

 

Not all exposed and/or ransomed services contain real data, but attackers have automated the process of finding and encrypting target systems, so it doesn’t matter to them if they corrupt a test database that would just have been deleted anyway; it costs them no extra time or money. And, because the captive systems remain wide open, there have been cases where multiple attacker groups have encrypted the same systems; at least they fight amongst themselves as well as attack you.

 

Herding Servers on the Wide-Open Range Internet

 

Using Project Sonar (http://sonar.labs.rapid7.com), we surveyed the internet for these three devops databases. NOTE: we have a much larger ongoing study that includes a myriad of devops-ish and “big data” technologies, but we’re focusing on these three servers for this post given the timeliness of their respective attacks. We try to be good Netizens, so we have more rules in place when it comes to scanning than others do. For example, if you ask us not to scan your internet subnet, we won’t. We will also never perform scans requiring credentials/authentication. Finally, we’re one of the more prolific telemetry gatherers, which means many subnets choose to block us. I mention this first since many readers will be apt to compare our numbers with the results of their own scans or other telemetry resources. Scanning the internet is a messy bit of engineering, science, and digital alchemy, so there will be differences between various researchers' results.

 

We found:

 

  • ~56,000 MongoDB servers
  • ~18,000 Elasticsearch servers
  • ~4,500 CouchDB servers

 

 

Of those, 50% of MongoDB servers were captive, 58% of Elasticsearch servers were captive, and 10% of CouchDB servers were captive:

elasticsearch_vis-1.pngmongodb_vis-1.pngcouch_vis-1.png

A large percentage of each of these devops-ish databases are in “the cloud”:

 

top_10_providers-1.png

 

and several of those listed do provide secure deployment guides, like this one for MongoDB from DigitalOcean: https://www.digitalocean.com/community/tutorials/how-to-securely-configure-a-production-mongodb-server. However, others have no such guides, or have broken links to them, and most do not offer base images that are secure by default when it comes to these services.

 

Exposed and Unaware

 

If you do run one of these databases on the internet, it would be wise to check your configuration to ensure that you are not exposing it to the internet, or at the very least that you have authentication enabled and rudimentary network security groups configured to limit access. Attackers are continuing to scan for open systems and will continue to encrypt and hold systems for ransom. There’s virtually no risk in it for them and it’s extremely easy money, since the reconnaissance for and subsequent attacking of exposed instances likely happens from behind anonymization services or from unwitting third-party nodes compromised previously.
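A quick self-audit along these lines is just a TCP reachability check against the services' default ports from an external vantage point. The sketch below uses only the standard library; only point it at hosts you own or are authorized to test:

```python
# Reachability sketch: which of the three services' default ports accept
# a TCP connection from the outside? Run only against your own hosts.
import socket

# Default listening ports for the three services surveyed.
DEFAULT_PORTS = {"mongodb": 27017, "elasticsearch": 9200, "couchdb": 5984}

def exposed_services(host, timeout=2.0):
    """Return the services whose default TCP port accepts a connection."""
    found = []
    for service, port in sorted(DEFAULT_PORTS.items()):
        with socket.socket() as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means connected
                found.append(service)
    return found
```

An empty result from an external network is what you want to see; anything else means the port is reachable and the configuration deserves a closer look.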

 

Leaving the configuration open can cause other issues beyond the exposure of the functionality provided by the service(s) in question. Over 100 of the CouchDB servers are exposing some form of PII (going solely by table/db name) and much larger percentages of MongoDB and Elasticsearch open databases seem to have some interesting data available as well. Yes, we can see your table/database names. If we can, so can anyone who makes a connection attempt to your service.

 

We (and attackers) can also see configuration information, meaning we know just how out of date your servers, like MongoDB, are:

mongo_versions-1.png

 

So, while you’re checking how secure your access configurations are, it may also be a good time to ensure that you are up to date on the latest security patches (the story is similarly sad for CouchDB and Elasticsearch).

What Can You Do?

 

Use automation (most of you are deploying in the cloud) and within that automation use secure configurations. Each of the three technologies mentioned has security guides that “come with” it:

 

 

It’s also wise to configure your development and testing environments the same way you do production (hey, you’re the one who wanted to play with devops-ian technologies so why not go full monty?).

 

You should also configure your monitoring services and vulnerability management program to identify and alert if your internet-facing systems are exposing an insecure configuration. Even the best shops make deployment mistakes on occasion.

 

If you are a victim of a captive server, there is little you can do to recover outside of restoring from backups. If you don’t have backups, it’s up to you to decide just how valuable your data is/was before you consider paying a ransom. If you are a business, also consider reporting the issue to the proper authorities in your locale as part of your incident response process.

 

What’s Next?

 

We’re adding more devops-ish and data science-ish technologies to our Sonar scans and Heisenberg honeypots and putting together a larger report to help provide context on the state of the exposure of these services and to try to give you some advance notice as to when attackers are preying on new server types. If there are database or server technologies you’d like us to include in our more comprehensive study, drop a note in the comments or to research@rapid7.com.

 

Burning sky header image by photophilde used CC-BY-SA

2016 kept us on our toes right up to the very end - and its last curveball will have implications lasting well past the beginning of the new year.

 

Speculation on Russian hacking is nothing new, but it picked up notably with the DNC hack prior to the presidential election and the subsequent release of stolen emails, which the intelligence community later described as an information operation aimed at influencing the election. And then on December 29th we saw the US government's response, the coordinated release of a joint report detailing the hacking efforts attributed to Russian intelligence agencies, economic sanctions, and the expulsion of Russian diplomats.

 

This blog is not going to discuss the merits (or otherwise) of various political actions, nor whether cyberespionage should warrant different responses than other types of espionage. Instead, I’m going to focus on the lessons we can take away from the Joint Analysis Report (JAR). The report is not perfect, but nonetheless, I believe it can be valuable in helping us, as an industry, improve, so I’m choosing to focus on those points in this post.

 

The Joint Analysis Report won’t change much for some defenders, while for others it means a reevaluation of their threat model and security posture. But given that the private sector has been tracking these actors for years, it’s difficult to imagine anyone saying that they are truly surprised Russian entities have hacked US entities. Many of the indicators of compromise (IOCs) listed in the JAR have been seen before -- either in commercial or open source reporting. That being said, there are still critical takeaways for network defenders.

 

1) The US government is escalating its response to cyber espionage. The government has only recently begun to publicly attribute cyberattacks to nation states, including attributing the Sony attacks to North Korea, a series of industrial espionage-related attacks to Chinese PLA officers, and a series of attacks against the financial sector to Iran-backed actors. But none of those attack claims came with the expulsion of diplomats or suspected intelligence officers. The most recent case of a diplomat being declared persona non grata (that we could readily find) was in 2013, when three Venezuelan officials were expelled from the US in response to the expulsion of US diplomats from Venezuela. Before that, in 2012, a top Syrian diplomat was expelled from the Washington Embassy in response to the massacre of civilians in the Syrian town of Houla. Clearly, this is not a step that the United States takes lightly.

 

These actions are more significant to government entities than they are to the private sector, but being able to frame the problem is crucial to understanding how to address it. Information and influence operations have been going on for decades, and the concept that nations use the cyber domain as a means to carry out these information operations is not surprising. This is the first time, however, that the use of the cyber domain has been met with a public response previously reserved for conventional attacks. If this becomes the new normal, then we should expect to see more reports of this nature and should be prepared to act as needed.

 

2) The motivation of the attackers detailed in the report is significant. We tend to think of cyber operations as fitting into three buckets: cyberespionage, cybercrime, or hacktivism. The actions described in the JAR and in the statement from the President describe influence operations. Not only do the attackers want to steal information, but they are actively trying to influence opinions, which is an area of cyber-based activity we are likely to see increasing. The entities listed in the JAR, which are primarily political organizations (and there are far more political organizations out there than just the two primary parties’ HQs), as well as organizations such as think tanks, should reevaluate their threat models and their security postures. It is not just about protecting credit card information or PII; anything and everything is on the table.

 

The methods that are being used are not new – spear-phishing, credential harvesting, exploiting known vulnerabilities, etc. – and that fact should tell people how important basic network security is and will remain. There was no mention of zero-days or use of previously undetected malware. Companies need to understand that the basics are just as, or even more, important when dealing with advanced actors.

 

3) We need to work with what we have – and that doesn’t mean we just plug and play IOCs. It’s up to us to take the next step. So, what is there to do with the IOCs? A lot of people are disappointed with the quality and level of detail of the IOCs in the JAR. It is possible that what has been published is the best the government could give us at the TLP:White level, or that the government analysts who focus on making recommendations to policy makers simply do not know what companies need to defend their networks (hint: it is not a Google IP address). We, as defenders, should never just take a set of IOCs and plug them into our security appliances without reviewing and understanding what they are and how they should be used.

 

Defenders should not focus on generating alerts directly off the IOCs provided, but should do a more detailed analysis of the behaviors they signify. In many cases, even after an IOC is no longer valid, it can tell a story about an attacker behavior, allowing defenders to identify signs of those behaviors rather than the actual indicators presented. IOC timing is also important. We know from open source reporting, as well as some of the details in the JAR, that this activity did not happen recently; some of it has been going on for years. That means the IOCs will be more useful for looking back through logs for past activity than for alerting from this point forward, because once they are public it is less likely that the attackers will still be employing them in the way they did in the past.
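That retrospective use of IOCs can be sketched as a simple sweep over historical logs. The log format and indicator values here are invented for illustration (they use documentation-reserved addresses and names):

```python
# Retro-hunting sketch: match published IOCs against historical log lines
# instead of only alerting on future traffic. Format and IOCs are invented.
def find_ioc_hits(log_lines, iocs):
    """Return (timestamp, indicator) pairs for every historical match."""
    hits = []
    for line in log_lines:
        timestamp, _, rest = line.partition(" ")
        for ioc in iocs:
            if ioc in rest:
                hits.append((timestamp, ioc))
    return hits

logs = [
    "2016-03-19T09:12:44Z GET http://198.51.100.7/login",
    "2016-06-02T17:03:10Z DNS lookup example-phish.test",
]
print(find_ioc_hits(logs, ["198.51.100.7"]))
# [('2016-03-19T09:12:44Z', '198.51.100.7')]
```

Real SIEM queries do the same thing at scale; the point is that the hit's timestamp, not today's alert, is what tells the story.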

 

We may not always get all of the details around an IOC, but it’s our job as defenders to do what we can with what we have, especially if we are an organization who fits the targeting profile of a particular actor. Yes, it would be easier if the government could give us all of the information we needed in the format that we needed, but reality dictates that we will still have to do some of our own analysis.

 

We should not be focusing on any one aspect of the government response, whether it is the lack of published information clearly providing attribution to Russia, or the list of less-than-ideal IOCs. There are still lessons that we, as decision makers and network defenders, can take away. Focusing on those lessons requires an understanding of our own networks, our threat profile, and yes, sometimes even the geopolitical aspects of current events, so that we can respond in a way that will help us to identify threats and mitigate risk.

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we’re highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

 

Screen Shot 2016-12-26 at 7.20.22 PM.png

You may or may not know this about me, but I am kind of an overly optimistic sunshine and rainbows person, especially when it comes to threat intelligence. I love analysis, I love tackling difficult problems, connecting dots, and finding ways to stop malicious actors from successfully attacking our networks.

 

Even though 2016 tried to do a number on us (bears, raccoons, whatever...)

 

Screen Shot 2016-12-30 at 7.51.22 PM.png

I believe that we can come through relatively unscathed, and in 2017 we can make threat intelligence even better by alleviating a lot of confusion and addressing many of the misunderstandings that make it more difficult to integrate threat intelligence into information security operations. In the spirit of the new year, we have compiled a list of Threat Intelligence Resolutions for 2017.

 

Don’t chase shiny threat intel objects

 

Intelligence work, especially in the cyber realm, is complex, involved, and often time-consuming. The output isn’t always earth-shattering: new rules to detect threats, additional indicators to search for during an investigation, a brief to a CISO on emerging threats, situational awareness for the SOC so they better understand the alerts they respond to. Believe it or not, in this media-frenzied world, that is the way it is supposed to be. Things don’t have to be sensationalized to be relevant. In fact, many of the things that you will discover through analysis won’t be sensational, but they are still important.

giphy.gif

Don’t discount these things or ignore them in order to go chase shiny threat intelligence objects – things that look and sound amazing and important but likely have little relevance to you. Be aware that those shiny things exist, but do not let them take away from the things that are relevant to you.

 

 

It is also important to note that not everything out there that gets a lot of attention is bad – sometimes something is big because it is a big deal and something you need to focus on. Knowing what is just a shiny object and what is significant comes down to knowing what is important to you and your organization, which brings us to resolution #2.

 

Identify your threat intelligence requirements

 

Screen Shot 2016-12-26 at 8.17.47 PM.png

Requirements are the foundation of any intelligence work. Without them you could spend all of your time finding interesting things about threats without actually contributing to the success of your information security program.

There are many types and names for intelligence requirements: national intelligence requirements, standing intelligence requirements, priority intelligence requirements – but they are all a result of a process that identifies what information is important and worth focusing on. As an analyst, you should not be focusing on something that does not directly tie back to an intelligence requirement. If you do not currently have intelligence requirements and are instead going off of some vague guidance like “tell me about bad things on the internet” it is much more likely that you will struggle with resolution #1 and end up chasing the newest and shiniest threat rather than what is important to you and your organization.

 

There are many different ways to approach threat intelligence requirements – they can be based off of business requirements, previous incidents, current events, or a combination of the above. Scott Roberts and Rick Holland have both written posts to help organizations develop intelligence requirements, and they are excellent places to start with this resolution. (They can be found here and here.)

 

Be picky about your sources

 

One of the things we collectively struggled with in 2016 was helping people understand the difference between threat intelligence and threat feeds. Threat intelligence is the result of following the intelligence cycle - from developing requirements, through collection and processing, analysis, and dissemination. For a (much) more in depth look into the intelligence cycle read JP 2-0, the publication on Joint Intelligence [PDF].

 

Threat feeds sit solidly in the collection/processing phase of the intelligence cycle - they are not finished intelligence, but you can't have finished intelligence without collection, and threat feeds can provide the pieces needed to conduct analysis and produce threat intelligence. There are other sources of collection besides feeds, including alerts issued by government agencies or commercial intelligence providers that often contain lists of IOCs. With all of these things it is important to ask questions about the indicators themselves:

 

  • Where does the information come from? A honeypot? Is it low interaction or high interaction? Does it include scanning data? Are there specific attack types that they are monitoring for? Is it from an incident response investigation? When did that investigation occur? Are the indicators pulled directly from other threat feeds/sources? If so, which ones?
  • What is included in the feed? Is it simply IOCs or is there additional information or context available? Remember, this type of information must still be analyzed and it can be very difficult to do that without additional context.
  • When was the information collected? Some types of information are good for long periods, but some are extremely perishable and it is important to know when the information was collected, not just when you received it. It is also important to know if you should be using indicators to look back through historical logs or generate alerts for future activity.
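The perishability question lends itself to a simple rule of thumb: compare an indicator's collection date against a per-type time-to-live. The TTL values in this sketch are illustrative assumptions, not doctrine:

```python
# Perishability sketch: an indicator's useful life for forward-looking
# alerting depends on its type. TTL values below are illustrative only.
from datetime import date, timedelta

TTL_DAYS = {"ip": 30, "domain": 90, "file_hash": 365}

def is_stale(ioc_type, collected, today):
    """True when an indicator has outlived its assumed useful life for
    forward-looking alerting (it may still be fine for retro-hunting)."""
    ttl = TTL_DAYS.get(ioc_type, 30)
    return today - collected > timedelta(days=ttl)

print(is_stale("ip", date(2016, 1, 1), date(2016, 3, 1)))         # True
print(is_stale("file_hash", date(2016, 1, 1), date(2016, 3, 1)))  # False
```

Note that the clock starts at collection time, not at the time you received the feed entry, which is exactly why the collection date matters.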

 

Tactical indicators have dominated the threat intelligence space, and many organizations employ them without a solid understanding of what threats are being conveyed in the feeds or where the information comes from, simply because they are assured that they have the "best threat feed" or the "most comprehensive collection," or maybe the feeds come from a government agency with a fancy logo (although, let's be honest, not that fancy).

elf gif.gif

But you should never blindly trust those indicators, or you will end up with a pile of false positives. Or a really bad cup of coffee.

 

It isn’t always easy to find out what is in threat feeds, but it isn’t impossible. If threat feeds are part of your intelligence program then make it your New Year’s resolution to understand where the data in the feeds comes from, how often it is updated, where you need to go to find out additional information about any of the indicators in the feeds, and whether or not it will support your intelligence requirements. If you can’t find that information out then it may be a good idea to also start looking for feeds that you know more about.

 

 

Look OUTSIDE of the echo chamber

It is amazing how many people you can find to agree with your assessment (or agree with your disagreement of someone else's assessment) if you continue to look to the same individuals or the same circles. It is almost as if there are biases at work - wait, we know a thing or two about biases! <This Graphic Explains 20 Cognitive Biases That Affect Your Decision-Making>  Confirmation bias, bandwagoning, take your pick. When we only expose ourselves to certain things within the cyber threat intelligence realm, we severely limit our understanding of the problems that we are facing and the many different factors that influence them. We also tend to overlook a lot of intelligence literature that can help us understand how we should be addressing those problems. Cyber intelligence is not so new and unique that we cannot learn from traditional intelligence practices.

Here are some good resources on intelligence analysis and research:

 

Kent Center Occasional Papers — Central Intelligence Agency

The Kent Center, a component of the employee-only Sherman Kent School for Intelligence Analysis at CIA University, strives to promote the theory, doctrine, and practice of intelligence analysis.

 

Congressional Research Service

The Congressional Research Service, a component of the Library of Congress, conducts research and analysis for Congress on a broad range of national policy issues.

 

The Council on Foreign Relations

The Council on Foreign Relations (CFR) is an independent, nonpartisan membership organization, think tank, and publisher.

 

Don’t be a cotton headed ninny muggins

Now this is where the hopeful optimist in me really comes out. One of the things that has bothered me most in 2016 is the needless fighting and arguments over, well, just about everything. Don't get me wrong, we need healthy debate and disagreement in our industry. We need people to challenge our assumptions and help us identify our biases. We need people to fill in any additional details that they may have regarding the analysis in question. What we don't need is people being jerks or discounting analysis without having seen a single piece of information that the analysis was based on. There are a lot of smart people out there, and if someone publishes something you disagree with or question, then there are plenty of ways to get in touch with them or voice your opinion in a way that will make our collective understanding of intelligence analysis better.

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we’re highlighting some of the “gifts” we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

 

This holiday season, eager little hacker girls and boys around the world will be tearing open their new IoT gadgets and geegaws, and setting to work on evading tamper-evident seals, proxying communications, and reversing firmware, in search of a HaXmas miracle of 0day. But instead of exploiting these newly discovered vulnerabilities, many will instead notice their hearts growing three sizes larger, and wish to disclose these new vulns in a reasonable and coordinated way in order to bring attention to the problem and ultimately see a fix for the discovered issues.

 

In the spirit of HaXmas, then, I'd like to take a moment to talk directly to the good-hearted hackers out there about how one might go about disclosing vulnerabilities in a way that maximizes the chances that your finding will get the right kind of attention.

 

Keep It Secret, Keep it Santa

 

First and foremost, I'd urge any researcher to consider the upsides of keeping your disclosure confidential for the short term. While it might be tempting to tweet a 140-character summary publicly to the vendor's alias, dropping this kind of bomb on the social media staff of an electronics company is kind of a jerk move, and only encourages an adversarial relationship from there on out. In the best case, the company most able to fix the issue isn't likely to work with you once you've published, and in the worst, you might trigger a defensive reflex where the vendor refuses to acknowledge the bug at all.

 

Instead, consider writing a probing email to the company's email aliases of security@, secure@, abuse@, support@, and info@, along the lines of, "Hi, I seem to have found a software vulnerability with your product, who can I talk to?" This is likely to get a human response, and you can figure out from there who to talk to about your fresh new vulnerability.
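If you like to script even the small steps, generating that list of conventional aliases is a one-liner. The alias set below mirrors the ones mentioned above:

```python
# Build the conventional security-contact aliases for a vendor's domain.
def contact_candidates(domain):
    """Return the usual first-contact addresses to try, in order."""
    aliases = ("security", "secure", "abuse", "support", "info")
    return [f"{alias}@{domain}" for alias in aliases]

print(contact_candidates("example.com"))
```

Whichever alias answers first is usually your best route to the engineering team.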

 

The Spirit of Giving

 

You could also go a step further, and check the vendor's website to see if they offer a bug bounty for discovered issues, or even peek in on HackerOne's community-curated directory of security contacts and bug bounties. For example, searching for Rapid7 gives a pointer to our disclosure policies, contact information, and PGP key.

 

However, be careful when deciding to participate in a bug bounty. While the vast majority of bounty programs out there are well-intentioned, some come with an agreement that you will never, ever, ever, in a million years, ever disclose the bug to anyone else, ever -- even if the vendor doesn't deign to acknowledge or fix the issue. This can leave you in a sticky situation, even if you end up getting paid out. If you agree to terms like that, you can limit your options for public disclosure down the line if the fix is non-existent or incomplete.

 

Because of these kinds of constraints, I tend to avoid bug bounties, and merely offer up the information for free. It's totally okay to ask about a bounty program, of course, but be sure that you're not phrasing your request in a way that can be read as an extortion attempt -- that can be taken as an attack and, again, trigger a negative reaction from the vendor.

 

No Reindeer Games

In the happy case where you establish communications with the vendor, it's best to be as clear and as direct as possible. If you plan to publish your findings on your blog, say so, and offer exactly what and when you plan to publish. Giving vendors deadlines -- in a friendly, non-threatening, matter-of-fact way -- turns out to be a great motivator for getting your issue prioritized internally. Be prepared to negotiate around the specifics, of course -- you might not know exactly how to fix the bug or how long a fix will take, and at the moment you disclose, the vendor probably doesn't, either.
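To make a deadline concrete, here's a trivial sketch (the 60-day window is only an example; use whatever window you actually communicated to the vendor):

```python
from datetime import date, timedelta

# Example only: 60 days is a common coordinated-disclosure window,
# but the right number is whatever you actually told the vendor.
WINDOW_DAYS = 60

def disclosure_date(reported_on, window_days=WINDOW_DAYS):
    """The date after which you've said you'll publish your findings."""
    return reported_on + timedelta(days=window_days)

print(disclosure_date(date(2016, 12, 25)))  # 2017-02-23
```

Stating the resulting calendar date in your very first email removes any ambiguity about when the clock started.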

 

Most importantly, though, try to avoid over-playing your discovery. Consider what an adversary actually has to do to exploit the bug -- maybe they need to be physically close by, or already have an authorized account, or something like that. Being upfront with those details can help frame the risk to other users, and can tamp down irrational fears about the bug.

Finally, try to avoid blaming the vendor too harshly. Bugs happen -- it's inherent in the way we write, assemble, and ship software for general purpose computers. Assume the vendor isn't staffed with incompetents and imbeciles, and that they actually do care about protecting their customers. Treating your vendor with respect will engender a pretty typical honey versus vinegar effect, and you're much more likely to see a fix quickly.

Sing it Loud For All to Hear

Assuming you've hit your clearly-stated disclosure deadline, it's time to publish your findings. Again, you're not trying to shame the vendor with your disclosure -- you're helping other people make better informed decisions about the security of their own devices, giving other researchers a specific, documented case study of a vulnerability discovered in a shipping product, and teaching the general public about How Security Works. Again, effectively communicating the vulnerability is critical. Avoid generalities, and offer specifics -- screenshots, step-by-step instructions on how you found it, and ideally, a Metasploit module to demonstrate the effects of an exploit. Doing this helps other researchers completely understand your findings, and perhaps apply your learnings to their own efforts.

Ideally, there's a fix already available and distributed, and if so, you should clearly state that early on in your disclosure. If there isn't, though, offer up some kind of solution to the problem you've discovered. Nearly always, there is a way to work around the issue through some non-default configuration, or a network-level defense, or something like that. Sometimes, the best advice is to avoid using the product altogether, but that tends to be the advice of last resort.

Happy HaXmas!

Given the recently enacted DMCA research exemption on consumer devices, I do expect to see an uptick in disclosed issues that center on consumer electronics. This is ultimately a good thing -- when people tinker with their own devices, they are more empowered to make better decisions on how a technology can actually affect their lives. The disclosure process, though, can be almost as challenging as the initial hackery of finding and exploiting the vulnerability in the first place. You're dealing with emotional people who are often unfamiliar with the norms of security research, and you may well be the first security expert they've talked to. Make the most of your newfound status as a security ambassador, and try to be helpful when delivering your bad news.
