
Recently, a number of Rapid7's customers have been evaluating the risks posed by the swift rise of ransomware as an attack vector. Today, I'd like to address some of the more common concerns.

 

What is Ransomware?

Cryptowall and Cryptolocker are among the best known ransomware criminal malware packages today. In most cases, users are afflicted by ransomware by clicking on a phishing link or visiting a website that is either compromised or is hosting a compromised advertising network. While ransomware is usually associated with Windows PCs and laptops, there have been recent reports of new ransomware on Apple OS X called KeRanger.

 

Ransomware works by encrypting files that the user has access to, which usually means their local documents. However, some ransomware variants can target and encrypt files on mapped SMB drives as well. Once the files are encrypted, the user is presented with instructions on how to obtain the recovery key, typically for the price of $300-$500 equivalent in Bitcoin. Some attacks, however, are enterprise-centric and demand much more; the Hollywood Presbyterian Medical Center reportedly paid over $17,000 to a criminal enterprise to recover its encrypted data.

 

How Can I Avoid Ransomware?

Ransomware attacks happen similarly to other malware-based attacks. User education is the first line of defense -- people should not click suspicious links or visit websites that are known carriers of malvertising networks. In the event a user encounters a live link to a ransomware download, web-based threat prevention, email-based threat prevention, and application sandboxing can all help avoid infection.

 

In addition, enterprises can harden their user-based infrastructure preemptively by following some baseline cyber hygiene as described in Jason Beatty's blog post. Of special interest is the enforcement of role-based access control; all too often, organizations accrue "access cruft," where users inherit permission sets that are far too broad for their normal job functions as temporary access grants accidentally become permanent access grants. By limiting user access across network resources, the damage incurred by the compromise of a single user can be effectively contained.

 

I've Been Hit! How Can I Recover?

In the event a user or enterprise falls victim to a ransomware attack, the best solution is to treat the event as any other disaster: restore the lost data from backups, conduct an investigation into how the disaster occurred, and educate the users involved on how to avoid this disaster in the future. As of today, there is no known method for recovering lost data without cooperating with the criminals responsible for the ransomware.

 

Of course, backing up valuable data before an attack is critical in order to recover from this kind of attack. Backup schedules can vary widely between people and enterprises, many backup plans are implemented but remain untested, and the appearance of ransomware seems to have dramatically increased the chances of a data loss disaster. IT administrators who are concerned about ransomware affecting their users should investigate the relevance and reliability of their existing backup solutions, and weigh the costs of a sudden loss of data against the cost of more robust and frequent backup plans.
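One practical way to act on that advice is to periodically spot-check that restored copies actually match the originals. The sketch below (Python, with hypothetical paths and SHA-256 as an arbitrary choice of hash) is only an illustration of that kind of check, not a substitute for a full restore exercise:

import hashlib
from pathlib import Path

def sha256(path):
    # Hash a file in chunks so large documents don't exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir, restore_dir):
    # Compare every file under source_dir against its restored counterpart.
    mismatches = []
    for src in Path(source_dir).rglob("*"):
        if src.is_file():
            restored = Path(restore_dir) / src.relative_to(source_dir)
            if not restored.is_file() or sha256(src) != sha256(restored):
                mismatches.append(str(src))
    return mismatches

# Hypothetical paths: a sample of live data and a test restore of last night's backup.
print(verify_restore("/data/finance", "/mnt/restore-test/finance"))

Even a small sample verified on a schedule will surface a silently failing backup job long before a ransomware event does.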

 

That Didn't Work. Should I Pay?

In most areas of crime, paying blackmail or ransom demands is counterproductive. It funds criminal enterprise directly and encourages more blackmail and ransom activity for both the original victim and future victims.

 

However, even the United States FBI seems to be advising people that, given no other disaster recovery alternative, victims may want to consider paying for recovery. In October of 2015, Joseph Bonavolonta of the FBI admitted, "To be honest, we often advise people just to pay the ransom." The FBI later clarified this position: victims should only consider paying when there is no other recourse, such as recovering from backups.

 

The criminal enterprises running ransomware campaigns today are remarkably organized, and can even be considered helpful when it comes to getting their victims in a position to pay the ransom, nearly always via Bitcoin transactions. There is significant "victim support" built into these campaigns, walking users through the process of acquiring Bitcoin and ensuring that recovery is actually possible once the criminals are paid. That said, these organizations are criminal, after all, and operate across international borders. They appear to be making good on their offers to decrypt the data held hostage, but there is absolutely no guarantee that they will continue to do so.

 

Conclusions

While ransomware represents the latest trend in drive-by, opportunistic malware, it is avoidable and containable by following fundamental security and disaster recovery best practices. Encouraging secure habits in an enterprise's user base is the cornerstone of avoiding the problem in the first place. Enterprises struck by ransomware are urged to treat the event as they would any local disk disaster: restore from backups, conduct a post-mortem investigation into how the disaster happened, and take the lessons learned to become more resilient in the event of future disasters.

Building a reliable security team is tough; there is no defined approach nor silver bullet.  The people we are defending against are intelligent, dedicated, and have a distinct asymmetrical advantage, with nearly unlimited time to find the one thing we miss.  This past decade has taught us that what we have been doing is not working very well.

 

I've been lucky to have latitude for creativity when building the security team at Rapid7.  So when Joan Goodchild asked me to join her for CSO Online's first edition of "security sessions" it felt like the perfect time to start socializing how we've approached building our team.

 

Rapid7, like many high-growth technology companies, has introduced a significant set of SaaS offerings over the past few years. With the introduction of these offerings, we needed to build a platform we believed our customers could trust. Given the current status quo, we didn't feel like blindly following failed 'best practices' was the right path, so we decided to forge our own.

 

Head over to CSO to get a glimpse into how we tackle building our team and program.  During this CSO Security Session, I spend several minutes discussing with Joan who we hire, how we hire, my views on certifications, higher education, technology (and its stagnation), and how we measure the progress of our security organization.

 

I hope our discussion stimulates some meaningful conversations for you, and I encourage you to think about the following five items:

 

  1. Have you done the fundamentals? Two-factor authentication, network segmentation, and patch management are all far more tactically important than nearly anything else your program could do.
  2. Do you need that security engineer with 7-10 years of experience? What about a more junior engineer who can write code, automate, and solve problems (not just identify them)?
  3. Do you measure success with practical indicators? Don't try to fit into someone else's mold of 'metrics.' Take a look at what areas of your program you want to focus on, and use something like CMMI to measure the maturity (as opposed to effectiveness) of those operations. You can take a look at something like BSIMM to see how this can be done effectively in some security verticals.
  4. Should the lack of a college degree or a security certification disqualify a candidate? If you let your HR system automatically weed out people who don't have certifications or degrees, you are going to miss out on great resources.
  5. Do you understand what makes your company tick? If you can’t become part of the success of your business, you will always be viewed as a problem.

 

The landscape we deal with is constantly changing and we need to adapt with it.  While I don’t presume anything we’ve done is the silver bullet, the more we all push the envelope and approach our challenges creatively, the more likely we are to start shifting that asymmetrical balance into a more reasonable equilibrium.

 

I’d be interested to hear your thoughts on building out an effective security team. Share them in the comments or on Twitter -- I’m @TheCustos.

The U.S. Departments of Commerce and State will renegotiate an international agreement – called the Wassenaar Arrangement – that would place broad new export controls on cybersecurity-related software. An immediate question is how the Arrangement should be revised. Rapid7 drafted some initial revisions to the Arrangement language – described below and attached as a .pdf to this blog post. We welcome feedback on these suggestions, and we would be glad to see other proposals that are even more effective.

 

Background

 

When the U.S. Departments of Commerce and State agreed – with 40 other nations – to export controls related to "intrusion software" in 2013, their end goal was a noble one: to prevent malware and cyberweapons from falling into the hands of bad actors and repressive governments. As a result of the 2013 addition, the Wassenaar Arrangement requires restrictions on exports for "technology," "software," and "systems" that develop or operate "intrusion software." These items were added to the Wassenaar Arrangement's control list of "dual use" technologies – technologies that can be used maliciously or for legitimate purposes.

 

Yet the Arrangement's new cyber controls would impose burdensome new restrictions on much legitimate cybersecurity activity. Researchers and companies routinely develop proofs of concept to demonstrate a cybersecurity vulnerability, use software to refine and test exploits, and use penetration testing software – such as Rapid7's Metasploit Pro software – to root out flaws by mimicking attackers. The Wassenaar Arrangement could (depending how each country implements it) either require new licenses for each international export of such software, or prohibit international export altogether. This would create significant unintended negative consequences for cybersecurity since cybersecurity is a global enterprise that routinely requires cross-border collaboration. 

 

Rapid7 submitted detailed comments to the Dept. of Commerce describing this problem in July 2015, as did many other stakeholders. The Wassenaar Arrangement was also the subject of a Congressional hearing in January 2016. [For additional info, check out Rapid7's FAQ on the Wassenaar Arrangement – available here.]

 

Revising the Wassenaar Arrangement

 

To their credit, the Depts. of Commerce and State recognize the overbreadth of the Arrangement and are motivated to negotiate modifications to the core text. The agencies recently submitted agenda items for the next Wassenaar meeting – specifically, removal of the "technology" control, and then placeholders for other controls. A big question now is what should happen under those placeholders – a placeholder does not necessarily mean that the agencies will ultimately renegotiate those items.

                                                                  

To help address this problem, Rapid7 drafted initial suggestions on how to revise the Wassenaar Arrangement, incorporating feedback from numerous partners. Rapid7's proposal builds on the good work of Mara Tam of HackerOne and her colleagues, as well as that of Sergey Bratus; one of the most important contributions of that work was to emphasize that authorization is a distinguishing feature of legitimate – as opposed to malicious – use of cybersecurity tools.

 

Our suggested revisions can be broken down into three categories:

 

1) Exceptions to the Wassenaar Arrangement controls on "systems," "software," and "technology." These are the items on which the Wassenaar Arrangement puts export restrictions. We suggest creating exceptions for software and systems designed to be installed by administrators or users for security enhancement purposes. These changes should help exclude many cybersecurity products from the Arrangement's controls, since such products are typically used only with authorization for the purpose of enhancing security – as compared with (for example) FinFisher, which is not designed for cybersecurity protection. It's worth noting that our language is not based solely on the intent of the exporter, since the proposed language requires the software to be designed for security purposes, which is a more objective and technical measure than intent alone. In addition, we agree with the Depts. of State and Commerce that the control on "technology" should be removed because it is especially overbroad.

 

Here is the Wassenaar Arrangement text with our suggested revisions in red and strikethrough:

4.A.5.   Systems, equipment, and components therefor, specially designed or modified for the generation, operation or delivery of, or communication with, "intrusion software".

Note:  4.A.5 does not apply to systems, equipment, or components specially designed to be installed or used with authorization by administrators, owners, or users for the purposes of asset protection, asset tracking, asset recovery, or ‘ICT security testing’.

 

4.D.4.  "Software" specially designed or modified for the generation, operation or delivery of, or communication with, "intrusion software".

Note:  4.D.4 does not apply to "software" specially designed to be installed or used with authorization by administrators, owners, or users for the purposes of asset protection, asset tracking, asset recovery, or ‘ICT security testing’. “Software” shall be deemed "specially designed" where it incorporates one or more features designed to confirm that the product is used for security enhancement purposes. Examples of such features include, but are not limited to:

a. A disabling mechanism that permits an administrator or software creator to prevent an account from receiving updates; or

b. The use of extensive logging within the product to ensure that significant actions taken by the user can be audited and verified at a later date, and a means to protect the integrity of the logs.

 

4.E.1.a. "Technology" [...] for the "development," "production" or "use" of equipment or "software" specified by 4.A. or 4.D.

 

4.E.1.c. "Technology" for the "development" of "intrusion software".

 

2) Redefining "intrusion software." Although the Wassenaar Arrangement does not directly control "intrusion software," the "intrusion software" definition underpins the Arrangement's controls on software, systems, and technology that operate or communicate with "intrusion software." Our goal here is to help narrow the definition of "intrusion software" to code that can be used for malicious purposes. To do this, we suggest redefining "intrusion software" as software that is specially designed to be run or installed without the authorization of the owner or administrator and that extracts, modifies, or denies access to a system or data without authorization.

 

Here is the Wassenaar Arrangement text with our suggested revisions in red and strikethrough:

Cat 4 "Intrusion software"
1. "Software"

a. specially designed or modified to avoid detection by 'monitoring tools', or to defeat 'protective countermeasures', or to be run or installed without the authorization of the user, owner, or ‘administrator’ of a computer or network-capable device, and

b. performing any of the following:

a.1. The unauthorized extraction of or denial of access to data or information from a computer or network-capable device, or the modification of system or user data; or

b.2. The unauthorized modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions system or user data to facilitate access to data stored on a computer or network-capable device by parties other than parties authorized by the owner, user, or ‘administrator’ of the computer or network-capable device.

 

3) Exceptions to the definition of "intrusion software." The above modification to the Arrangement's definition of "intrusion software" is not adequate on its own because exploits – which are routinely shared for cybersecurity purposes – are designed to be used without authorization. Therefore, we suggest creating two exceptions to the definition of "intrusion software." The first is to confirm that "intrusion software" does not include software designed to be installed or used with authorization for security enhancement. The second is to exclude software that is distributed, for the purpose of preventing its unauthorized execution, to particular end users. Those end users include 1) organizations conducting research, education, or security testing, 2) computer emergency response teams (CERTs), 3) creators or owners of products vulnerable to unauthorized execution of the software, or 4) an entity's subsidiaries or affiliates. So, an example: a German researcher discovers a vulnerability in a consumer software product, and she shares a proof-of-concept with 2) a CERT, and 3) a UK company that owns the flawed product; the UK company then shares the proof-of-concept with 4) its Ireland-based subsidiary, and 1) a cybersecurity testing firm. The beneficial and commonsense information sharing outlined in this scenario would not require export licenses under our proposed language.

 

Here is the Wassenaar Arrangement text with our suggested revisions in red and strikethrough:

 

Notes
1. "Intrusion software" does not include any of the following:

a. Hypervisors, debuggers or Software Reverse Engineering (SRE) tools;
b. Digital Rights Management (DRM) "software"; or
c. "Software" designed to be installed or used with authorization by manufacturers, administrators, owners, or users, for the purposes of asset protection, asset tracking, or asset recovery., or ‘ICT security testing’; or

d. “Software” that is distributed, for the purposes of helping detect or prevent its unauthorized execution, 1) To organizations conducting or facilitating research, education, or 'ICT security testing', 2) To Computer Emergency Response Teams, 3) To the creators or owners of products vulnerable to unauthorized execution of the software, or 4) Among and between an entity's domestic and foreign affiliates or subsidiaries.


Technical Notes

1. 'Monitoring tools': "software" or hardware devices that monitor system behaviours or processes running on a device. This includes antivirus (AV) products, end point security products, Personal Security Products (PSP), Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS) or firewalls.

2. 'Protective countermeasures': techniques designed to ensure the safe execution of code, such as Data Execution Prevention (DEP), Address Space Layout Randomisation (ASLR) or sandboxing.
3. ‘Authorization’ means the affirmative or implied consent of the owner, user, or administrator of the computer or network-capable device.

4. ‘Administrator’ means an owner-authorized agent or user of a network, computer, or network-capable device.

5. 'Information and Communications Technology (ICT) security testing’ means discovery and assessment of static or dynamic risk, vulnerability, error, or weakness affecting “software”, networks, computers, network-capable devices, and components or dependencies therefor, for the demonstrated purpose of mitigating factors detrimental to safe and secure operation, use, or deployment.

 

 

This is a complex issue on several fronts. For one, it is always difficult to clearly distinguish between software and code used for legitimately beneficial versus malicious purposes. For another, the Wassenaar Arrangement itself is a convoluted international legal document with its own language, style, and processes. Our suggestions are a work in progress, and we may ultimately throw our support behind other, more effective language. We don't presume these suggestions are foolproof, and constructive feedback is certainly welcome.

 

Time is relatively short, however, as meetings concerning the renegotiation of the Wassenaar Arrangement will begin again during the week of April 11th. It's also worth bearing in mind that even if many cybersecurity companies, researchers, and other stakeholders come to agreement on revisions, any final decisions will be made with the consensus of the 41 nations party to the Arrangement. Still, we hope suggesting this language helps inform the discussion. As written, the Arrangement could cause significant damage to legitimate cybersecurity activities, and it would be very unfortunate if that were not corrected.

Disclosure Summary

ManageEngine OpUtils is an enterprise switch port and IP address management system. Rapid7's Deral Heiland discovered a persistent cross-site scripting (XSS) vulnerability, as well as a number of insecure direct object references. The vendor and CERT have been notified of these issues. The version tested was OpUtils 8.0, which was the most recent version at the time of initial disclosure. As of today, the current version offered by ManageEngine is OpUtils 12.0.

 

R7-2016-02.1: Multiple Persistent XSS Vulnerabilities

While examining ManageEngine OpUtils v8.0, an enterprise switch port and IP address management product, it was discovered to be vulnerable to persistent cross-site scripting (XSS). This vulnerability allows a malicious actor to inject persistent XSS containing JavaScript and HTML code into various fields within the product's Application Programming Interface (API) and the old-style User Interface (UI). When this data is viewed within the web console, the code will execute within the context of the authenticated user. This can allow a malicious actor to conduct attacks that modify the system's configuration, compromise data, take control of the product, or launch attacks against the authenticated user's host system.

 

The first series of persistent XSS attacks was delivered to the OpUtils product via the network discovery process. When a network device is configured with SNMP, the SNMP OID object sysDescr (1.3.6.1.2.1.1.1) can contain HTML or JavaScript code. The code will be delivered to the product for persistent display and execution without proper input sanitization. This is similar to the vulnerabilities disclosed in Multiple Disclosures for Multiple Network Management Systems.

 

The following example shows the results of discovering a network device where the SNMP sysDescr has been set to <SCRIPT>alert(“XSS-sysDescr”)</SCRIPT>. In this example, when the device is viewed within the OpUtils API UI web console, the JavaScript executes, rendering an alert box within the authenticated user's web browser.

 


Figure 1: JavaScript Alert Box

 

After switching version 8.0 from the API UI to the old UI schema, several other XSS injection points were identified. These include persistent XSS attacks that were also delivered to the OpUtils old UI via the network discovery process. If the network device is configured with SNMP and the following SNMP OID objects contain HTML or JavaScript code, the code will be delivered to the product for persistent display and execution.

 

sysDescr        1.3.6.1.2.1.1.1

sysLocation   1.3.6.1.2.1.1.6.0

sysName        1.3.6.1.2.1.1.5.0

 

sysDescr and sysLocation triggered when viewed within IP History as shown in Figure 2 and Figure 3.

 


Figure 2: sysDescr injected XSS

 


Figure 3: sysLocation injected XSS

 

In addition, sysDescr, sysLocation, and sysName triggered when viewed within device history, as shown in Figure 4.

 


Figure 4: sysName injected XSS
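Administrators can catch this class of tainted data before a management console ever renders it by polling the system-group values their devices report and flagging anything that contains markup. The sketch below uses the pysnmp library with a read-only community string; the target address and community value are placeholders, and the check itself (looking for angle brackets) is deliberately crude:

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

SYSTEM_OIDS = {
    "sysDescr": "1.3.6.1.2.1.1.1.0",
    "sysName": "1.3.6.1.2.1.1.5.0",
    "sysLocation": "1.3.6.1.2.1.1.6.0",
}

def audit(host, community="public"):
    # Fetch the system-group values from a device and flag embedded markup.
    for name, oid in SYSTEM_OIDS.items():
        errInd, errStat, _, varBinds = next(getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),
            UdpTransportTarget((host, 161), timeout=2, retries=1),
            ContextData(),
            ObjectType(ObjectIdentity(oid))))
        if errInd or errStat:
            continue
        value = str(varBinds[0][1])
        if "<" in value or ">" in value:
            print(f"{host} {name} contains markup: {value!r}")

audit("192.0.2.25")  # placeholder device address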

 

The second method of injection involved SNMP trap messages. By spoofing an SNMP trap message and altering the data within that trap message, a malicious actor can inject HTML and JavaScript code into the product. When the trap information is viewed within the SNMP Trap Receiver, the code executes within the context of the authenticated user. Figure 5 shows an example attack where a trap message containing the HTML code “<embed src=//ld1.us/4.swf>” was used to embed Flash content into the Trap Receiver section of the UI.

 


Figure 5: XSS Via SNMP Trap Injection
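Whether a given trap receiver renders attacker-controlled fields can be confirmed with a harmless marker string sent from a test host you are authorized to use. This sketch uses pysnmp to send an SNMPv2c trap whose sysDescr varbind carries a benign HTML tag; the receiver address, community string, and choice of varbind are assumptions:

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, OctetString, NotificationType,
                          ObjectIdentity, sendNotification)

# Send a coldStart trap with a marker value in the sysDescr varbind.
errInd, errStat, errIdx, varBinds = next(sendNotification(
    SnmpEngine(),
    CommunityData("public", mpModel=1),
    UdpTransportTarget(("192.0.2.50", 162)),   # placeholder trap receiver
    ContextData(),
    "trap",
    NotificationType(ObjectIdentity("1.3.6.1.6.3.1.1.5.1")).addVarBinds(
        ("1.3.6.1.2.1.1.1.0", OctetString("<b>trap-injection-test</b>")))))

print(errInd or "trap sent")

If the marker renders as bold text rather than literal characters in the Trap Receiver view, the console is not encoding trap data before display.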

 

 

R7-2016-02.2: Multiple Insecure Direct Object References

During testing, it was discovered that URLs ending in .cc are accessible without proper authentication, which allows retrieval of a portion of the web page. The following URLs can be accessed without authentication:

 

http://IP-Address:7080/SystemExplorer.cc

http://IP-Address:7080/UserView.cc

http://IP-Address:7080/AuditView.cc

http://IP-Address:7080/AuditViewRogue.cc

http://IP-Address:7080/IPAMReport.cc

http://IP-Address:7080/ipAddressManager.cc

http://IP-Address:7080/ipAddressManagerInputPage.cc

 

As a result of this direct access without authentication, an attacker is able to view the HTML of the web page “SystemExplorer.cc.” Here, it was discovered that the product's configured SNMP community string is transmitted in clear text as shown in Figure 6.

 


Figure 6: Information leakage via Insecure Direct Object Reference
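A quick way to check an OpUtils installation for this exposure is to request the affected pages without a session and see whether content comes back. The sketch below uses the Python requests library; the host, port, and the simple status/length heuristic are assumptions that may need adjusting for a given deployment:

import requests

PAGES = ["SystemExplorer.cc", "UserView.cc", "AuditView.cc", "AuditViewRogue.cc",
         "IPAMReport.cc", "ipAddressManager.cc", "ipAddressManagerInputPage.cc"]

def check(host, port=7080):
    # Request each page anonymously and report any that return content.
    for page in PAGES:
        url = f"http://{host}:{port}/{page}"
        try:
            r = requests.get(url, timeout=5, allow_redirects=False)
        except requests.RequestException as exc:
            print(f"{url} error: {exc}")
            continue
        if r.status_code == 200 and r.text:
            print(f"{url} returned {len(r.text)} bytes without authentication")

check("192.0.2.30")  # placeholder OpUtils host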

 

Disclosure Timeline

Thu, Jan 14, 2016: Issues discovered by Deral Heiland of Rapid7, Inc.

Fri, Jan 15, 2016: Initial contact to vendor

Mon, Feb 15, 2016: Details disclosed to CERT, tracked as VU#400736

Wed, Mar 9, 2016: Clarification requested by the vendor, via CERT

Thu, Mar 17, 2016: Public disclosure of R7-2016-02

This advisory was written by the discoverer of the NPort issue, Joakim Kennedy of Rapid7, Inc.

 

Securing legacy hardware is a difficult task, especially when the hardware is being connected in a way that was never initially intended. One way of making legacy hardware more connectable is to use serial servers. The serial server acts as a bridge and allows serial devices to communicate over TCP/IP. The device then appears on the network as a normal network-connected device. This allows for remote administration of, for example, medical devices, industrial automation applications, and point of sale (POS) systems as if they were connected directly to the computer with a serial cable.

 

Fig1: Moxa NPort used to connect a glucometer (source).

 

By connecting these devices to a network, the inherent security of the serial device is, in most scenarios, completely compromised. Many serial devices’ security hinges on physical access. If you have physical access to the devices, you are authorized to talk to the device. When these devices are connected to the internet via a serial server, the physical access model does not apply anymore, and the security is entirely dependent on the security offered by the serial server.  In most scenarios, these serial servers should NEVER be connected to a public network.

 

The Devices

In this blog post, we are reporting serial servers exposed on the internet which are manufactured by Moxa. The serial servers can be configured via multiple interfaces, the most common being a web interface or a terminal over SSH or TELNET. At the time this blog post was written, over 5000 web servers could be fingerprinted as Moxa devices.
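Counts like this are straightforward to reproduce with the Shodan API, though the exact fingerprint used for the figure above is not disclosed here; the query string in the sketch below is an illustrative assumption and SHODAN_API_KEY is a placeholder:

import shodan

api = shodan.Shodan("SHODAN_API_KEY")      # placeholder API key
results = api.search('"Moxa Nport"')       # illustrative query; real fingerprints may differ
print(f"Devices matching the query: {results['total']}")
for match in results["matches"][:5]:
    print(match["ip_str"], match.get("location", {}).get("country_name"))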

 

These devices are designed to be as simple as possible to set up, and consequently the server is very permissive about who is allowed to connect. For example, Moxa's NPort series enables a web interface and a TELNET interface which can be used to configure the server, neither of which is password protected by default. The consumer is not forced to set a password, and many consumers are using the default, non-password-protected setup.

We have found over 2200 devices accessible over the internet, of which 46% are not password protected. Most of the internet-connected devices are located in Russia and Taiwan, but many devices are also located in the USA and Europe.

 

Figure 2: Geographic location of the 2200 internet connected devices.

 

Figure 3: Geographic location of the unprotected devices connected to the internet.

 

Figure 4: Breakdown of the model types connected to the internet.

 

Figure 5: Breakdown of the model type for the unprotected devices connected to the internet.

 

The most common connected device models are from the NPort 5100 series. The NPort 5100 series "are designed to make your industrial serial devices Internet ready instantly, and are well-suited for POS security market applications".

 

The Vulnerabilities

In 2013, we reported on serial servers connected to the internet and the security implications. The same issues that were reported then also apply to these devices. When connecting over TELNET to one of these devices that is not password protected, the following menu is presented:

 

-----------------------------------------------------------------------------

Model name       : xxxxxxx

MAC address      : xx:xx:xx:xx:xx:xx

Serial No        : xxxxxxxx

Firmware version : x.x.xx Build xxxxxxxx

System uptime    : 5 days, 12h:53m:49s

-----------------------------------------------------------------------------

<< Main Menu >>

  (1) Basic settings

  (2) Network settings

  (3) Serial settings

  (4) DIO setting

  (5) Operating settings

  (6) Accessible IP settings

  (7) Auto warning settings

  (8) Monitor

  (9) Ping

  (a) Change password

  (b) Advanced network settings

  (l) Load factory default

  (v) View settings

  (s) Save/Restart

  (q) Quit

 

Key in your selection:

 

The TELNET interface allows the same configuration options as the web interface. Both of these interfaces can be protected by setting a password.

 

The NPort device can operate in multiple modes. One is Real COM mode. In this mode, with COM/TTY drivers provided by the vendor, all serial signals are transmitted intact and the behaviour is identical to plugging the serial device into the COM port. In this mode, up to 4 different hosts can be connected. Connecting to a serial device attached to an NPort is very simple: one simply has to download the Real TTY drivers, install them, and enter the IP address to connect to, and the device shows up as being plugged in. No authentication is required.
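The same absence of authentication applies when the serial data stream is reached directly over TCP rather than through the vendor drivers. As a rough illustration, the sketch below opens a raw socket to a serial server's data port (4001 is a common NPort default, though the port and the bytes sent are assumptions that depend entirely on the attached serial device) and prints whatever comes back:

import socket

HOST, PORT = "192.0.2.40", 4001    # placeholder serial server and data port

with socket.create_connection((HOST, PORT), timeout=5) as s:
    s.sendall(b"\r\n")             # nudge the attached serial device
    s.settimeout(3)
    try:
        data = s.recv(4096)
        print(f"Received {len(data)} bytes: {data!r}")
    except socket.timeout:
        print("No response; the attached device may expect a specific protocol")

Anything that can reach that port can talk to the serial device as if it were plugged in locally, which is exactly why these servers should not face the internet.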

 

The only way of restricting who can connect to the device is to use the IP whitelisting option to restrict the IPs that can connect to the serial device, or to use TCP Client Mode. In TCP Client Mode, the serial server initiates connections to predetermined hosts when serial data arrives.

 

The serial server does not offer any encryption, so all data is sent in the clear. This makes it possible to eavesdrop on the communication.

 

The lack of authentication on these devices, and the lack of encryption even when authentication is possible, was reported to CERT, and after some discussion, CVE-2016-1529 was assigned to identify this issue. More generally, CWE-306, Missing Authentication for Critical Function, appears to apply to Moxa NPort devices.

 

Remediation

As these serial servers are likely connected to something very sensitive, they should NEVER be directly connected to the internet. If remote access is required, and since these devices do not offer encrypted traffic, connect the serial servers to a local network that is only accessible via, for example, a VPN. Also, restrict the IPs that can connect to the serial device, and don't forget to password protect the admin consoles.

 

Conclusions

There is still little awareness of what can happen if you connect devices directly to the internet. With search engines like Shodan, it is very easy to find these devices, making it important to secure them. Securing legacy hardware is still very difficult, and this is how not to do it. Security is being compromised for convenience, and consumers are, in many cases, just using the default settings. The easier you make it for yourself to connect, the easier you make it for the attacker.

Disclosure Timeline

Fri, Jan 15, 2016: Initial contact to the vendor

Mon, Jan 18, 2016: Response received from the vendor and details provided.

Mon, Feb 1, 2016: Details disclosed to CERT as VU#757136

Mon, Feb 1, 2016: CVE-2016-1529 assigned

Thu, Mar 17, 2016: Public disclosure (planned).

On Mar. 3rd, Rapid7, Bugcrowd, and HackerOne submitted joint comments to the Copyright Office urging them to provide additional protections for security researchers. The Copyright Office requested public input as part of a study on Section 1201 of the Digital Millennium Copyright Act (DMCA). Our comments to the Copyright Office focused on reforming Sec. 1201 to enable security research and protect researchers.

 

Our comments are available here.

 

Background

 

Sec. 1201 of the DMCA prohibits circumventing technological protection measures (TPMs) to access copyrighted works, including software, without permission of the owner. That hinders a lot of security research, tinkering, and independent repair. Violations of Sec. 1201 can carry potentially stiff criminal and civil penalties. To temper this broad legal restraint on unlocking copyrighted works, Congress built in two types of exemptions to Sec. 1201: permanent exemptions for specific activities, and temporary exemptions that the Copyright Office can grant every three years. These temporary exemptions automatically expire at the end of the three-year window, and advocates for them must reapply every time the exemption window opens.

 

Sec. 1201 includes a permanent exception to the prohibition on circumventing TPMs for security testing, but the exception is quite limited – in part because researchers are still required to get prior permission from the software owner, as we describe in more detail below. Because the permanent exemption is limited, many researchers, organizations, and companies (including Rapid7) urged the Copyright Office to use its power to grant a temporary three-year exemption for security testing that would not require researchers to get prior permission. The Copyright Office did so in Oct. 2015, granting an exemption to Sec. 1201 for good faith security research that circumvents TPMs without permission. However, this exemption will expire at the end of the three-year exemption window, after which security researchers will have to start from zero in re-applying for another temporary exemption.

 

The Copyright Office then announced a public study of Sec. 1201 in Dec. 2015. The Copyright Office undertook this public study, as the Office put it, to assess the operation of Sec. 1201, including the permanent exemptions and the 3-year rulemaking process. This study comes at a time when House Judiciary Committee Chairman Goodlatte is reviewing copyright law with an eye towards possible updates, so the Copyright Office's study may help inform that effort. Rapid7 supports the goal of protecting copyrighted works, but hopes to see legal reforms that reduce the overbreadth of copyright law so that it no longer unnecessarily restrains security research on software.

 

Overview of Comments

 

For its study, the Copyright Office asked a series of questions on Sec. 1201 and invited the public to submit answers. Below are some of the questions, and the responses we provided in our comments.

 

"Please provide any insights or observations regarding the role and effectiveness of the prohibition on circumvention of technological measures in section 1201(a)."

 

Our comments to the Copyright Office emphasized that Sec. 1201 adversely affects security research by forbidding researchers from unlocking TPMs to analyze software for vulnerabilities. We argued that good faith researchers do not seek to infringe copyright, but rather to evaluate and test software for flaws that could cause harm to individuals and businesses. The risk of harm resulting from exploitation of software vulnerabilities can be quite serious, as Rapid7 Senior Security Consultant Jay Radcliffe described in 2015 comments to the Copyright Office. Society would benefit – and copyright interests would not be weakened – by raising awareness and urging correction of such software vulnerabilities.

 

"How should section 1201 accommodate interests that are outside of core copyright concerns[?]"

 

Our comments responded that the Copyright Office should consider non-copyright interests only for scaling back restrictions under Sec. 1201 – for example, the Copyright Office should weigh the chilling effect Sec. 1201 has on security research in determining whether to grant an exemption for research to Sec. 1201. However, we argued that the Copyright Office should not consider non-copyright interests in denying an exemption, because copyright law is not the appropriate means of advancing non-copyright interests at the expense of activity that does not infringe copyright, like security research.

 

"Should section 1201 be adjusted to provide for presumptive renewal of previously granted exemptions—for example, when there is no meaningful opposition to renewal—or otherwise be modified to streamline the process of continuing an existing exemption?"

 

Our comments supported this commonsense concept. Currently, the three-year exemptions expire and must be re-applied for, which is a complex and resource-intensive process. We argued that a presumption of renewal should not hinge on a lack of "meaningful opposition," since the opposition to the 2015 security researcher exemption is unlikely to abate – though that opposition is largely based on concerns wholly distinct from copyright, like vehicular safety. Our comments also suggested that any presumption of renewal of exceptions to Sec. 1201 should be overcome only by a strong standard, such as a material change in circumstances.

 

"Please assess whether the existing categories of permanent exemptions are necessary, relevant, and/or sufficient. How do the permanent exemptions affect the current state of reverse engineering, encryption research, and security testing?"

 

Our comments said that Sec. 1201(j)'s permanent exemption for security testing was not adequate for several reasons. The security testing exemption requires the testing to be performed for the sole purpose of benefiting the owner or operator of the computer system – meaning research undertaken for the benefit of software users or the public at large may not qualify. The security testing exemption also requires researchers to obtain authorization from owners or operators of computers prior to circumventing software TPMs – so the owners and operators can dictate the circumstances of any research that takes place, which may chill truly independent research. Finally, the security testing exemption only applies if the research violates no other laws – yet research can implicate many laws with legal uncertainty in different jurisdictions. These and other problems with Sec. 1201's permanent exemptions should give impetus for improvements – such as removing the requirements 1) that the researcher must obtain authorization before circumventing TPMs, 2) that the security testing must be performed solely for the benefit of the computer owner, and 3) that the research not violate any other laws.

 

 

We sincerely appreciate the Copyright Office conducting this public study of Sec. 1201 and providing the opportunity to submit comments. Rapid7 submitted comments with HackerOne and Bugcrowd to demonstrate unity on the importance of reforming Sec. 1201 to enable good faith security research. Although the public comment period for this study is now closed, potential next steps include a second set of comments in response to any of the 60+ organizations and individuals that provided input to the Copyright Office's study, as well as potential legislation or other Congressional action on Sec. 1201. For each next step, we will aim to work with our industry colleagues and other stakeholders to propose reforms that can protect both copyright and independent security research.

This is the third post in a three-part series on threat intelligence foundations, discussing the fundamentals of how threat intelligence can be used in security operations. Here's Part 1 and Part 2.

 

Intelligence Analysis in Security Operations

In the first two parts of this series we talked about frameworks for understanding and approaching intelligence: the levels of intelligence (strategic, operational, tactical) as well as the different types of intelligence (technical, current, long-term, etc.). Regardless of the level or type of intelligence, the consistent theme was the need for analysis. Analysis is the core of intelligence: it takes data and turns it into intelligence that we can use to help us make informed decisions about complicated issues.

 

Analysis: The Missing Piece

I recently gave a talk at RSA where I compared the traditional intelligence cycle to what the intelligence cycle often looks like in cyber threat intelligence.

 

We are good at collection and processing, and we are good at dissemination; however, we tend to leave out many of the critical parts of the cycle, which results in overwhelming alerts, excessive false positives, and really, really confused people.

 

It’s easy to joke about or complain about, but here is the thing: analysis is hard. Saying that we should do more/better/more timely analysis is easy. Actually doing it is not, especially in a new and still-developing field like cyber threat intelligence. Models and methods help us understand the process, but even determining which model to use can be difficult. There are multiple approaches; some work better in certain situations and others work best in others.

 

What is Analysis?

The goal of intelligence analysis is to evaluate and interpret information in order to reduce uncertainty, provide warnings of threats, and help make informed decisions. Colin Powell gave perhaps the most succinct guidelines for intelligence analysis when he said: “Tell me what you know, tell me what you don’t know, tell me what you think. Always distinguish which is which”. This statement sums up intelligence analysis.

 

Analysts take what is known—usually information that has been collected either by the analyst themselves or by others—identify gaps in the knowledge that might dictate a new collection requirement or may present a bias that needs to be taken into consideration, and then determine what they think that information means.

 

Before you begin any analysis you should have an idea of what it is that you are trying to figure out. Ideally this would be driven by requirements from leadership, teams you support, or some other form of standing intelligence needs. There are many situations in CTI, however, where those requirements are not as well defined as we might hope. Understanding what it is that the organization needs from threat intelligence is critical. Therefore, step one should always be to understand what problems, concerns, or issues you are trying to address.

 

Analytic Models

Once you understand what questions you are trying to answer through your analysis, there are various analytic models that can be used to conduct analysis. I have listed some good resources available to help understand some of the more popular models that are often used in threat intelligence.

 

Different models are used for different purposes. The SWOT method is good for conducting higher-level analysis to understand how your own strengths and weaknesses compare to an adversary’s capabilities. F3EAD, the Diamond Model, and the Kill Chain are useful for analyzing specific intrusions and how different incidents or intrusions may be related. Target Centric Intelligence is a lesser known model, but it helps not only with understanding individual incidents; it also provides a collaborative approach to intelligence, including the decision makers, collectors, and analysts in an iterative process aimed at avoiding the stove-piping and miscommunications that are often present in intelligence operations.

 

 

A final note on collection

In many cases, analysis can only be as good as the information it is based on. Intelligence analysts are trained to evaluate the source of information in order to better understand whether there are biases or reliability concerns that need to be taken into account. In cyber threat intelligence we, by and large, rely on data collected by others and may not have much information on its source, reliability, or applicability. This is one of the reasons that analyzing information from your own network is so important; however, it is also important that we, as a community, are as transparent as possible with the information we provide to others for use in their analysis. There are always concerns about revealing sources and methods, so we need to find a balance between protecting those methods and enabling good analysis.

This is the second post in a three-part series on threat intelligence foundations, discussing the fundamentals of how threat intelligence can be used in security operations. Read Part One here.

 

Tinker, Tailor, Soldier, Spy: Utilizing Multiple Types of Intelligence

Just as there are different operational levels of intelligence—discussed in detail in the first post of this series—there are also different types of intelligence that can be leveraged in an organization to help them better understand, prepare for, and respond to threats facing them.

 

Don’t laugh—but a great basic resource for understanding the types of intelligence is the CIA’s Kid Zone, where they break intelligence down for the 6-12th graders that we all are at heart (or K-5, no judgement here).

 

They break intelligence down into several different types:

  • Scientific and Technical – providing information on adversary technologies and capabilities.

  • Current – looking at day-to-day events and their implications.

  • Warning – giving notice of urgent matters that may require immediate attention.

  • Estimative – looking at what might be or what might happen.

  • Research – providing an in-depth study of an issue.

 

While most organizations may not work with all of these types of intelligence, or do so in the same way that the CIA does (and please don't tell me if you do), it is useful to understand the spectrum and what each type provides. The different types of intelligence require varying levels of human analysis and time. Some, like technical intelligence, are easier to automate and therefore can be produced at a regular cadence, while some, like threat landscape research, will always rely heavily on human analysis.

 


Technical Intelligence

In information security operations, technical intelligence is used to understand the capabilities and the technologies used by an adversary. It can include details such as IP addresses and domains used in command and control, names and hashes of malicious files, as well as some TTP details such as vulnerabilities that a particular actor targets or a particular callback pattern for a beaconing implant.

 

Technical intelligence is most often used in machine-to-machine operations, and is therefore automated as much as possible to handle the large volume of information. In many cases, technical intelligence does not contain much context, even if context is available in other places, because machines do not care as much about the context as their humans do. A firewall doesn’t need to know why to block traffic to a malicious domain, it just needs to do it. The human on the other end of that firewall change might want to know, however, in case the change ends up triggering a massive amount of alerts. Technical intelligence must have been analyzed prior to consumption, otherwise it is just data or information at best. For more information see Robert Lee’s post on the data vs information vs intelligence debate.
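As a concrete illustration of that machine-to-machine flow, the sketch below turns a feed of malicious domains (one per line, a format assumed here for simplicity) into a deduplicated blocklist that a proxy or firewall could consume as an external list; the file paths are placeholders:

from pathlib import Path

def build_blocklist(feed_path, out_path):
    # Turn a one-domain-per-line feed into a sorted, deduplicated blocklist.
    domains = set()
    for line in Path(feed_path).read_text().splitlines():
        line = line.strip().lower()
        if line and not line.startswith("#"):
            domains.add(line)
    Path(out_path).write_text("\n".join(sorted(domains)) + "\n")
    return len(domains)

count = build_blocklist("malicious_domains.txt", "edl_blocklist.txt")  # placeholder files
print(f"Wrote {count} domains to the blocklist")

The enforcement point never sees the context behind each entry; the context lives with the analysts who vetted the feed in the first place.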

 

If you are not using technical intelligence that you generated yourself, it is critical that you understand the source of the technical intelligence and how it was analyzed, especially if it was analyzed using automated means. I am going out on a limb here by stating that there is a way to analyze and produce threat intelligence in an automated fashion that can be utilized machine-to-machine. Do NOT prove me wrong—do the analysis!

 

Current Intelligence

Current Intelligence deals with day-to-day events and situations that may require immediate action. I have heard several people say that, “news isn’t intelligence,” and that is a true statement; however, threat information in the public domain, when analyzed for implications to your specific organization, network, or operations, becomes intelligence.

 

An example of the use of current intelligence is a report that an exploit kit has integrated a vulnerability that was just announced three days ago. If you know that you are on a thirty-day patch cycle that means (best case) you have twenty-seven days where you will be vulnerable to these attacks. Understanding how this threat impacts your organization and how to detect and block malicious activity associated with it is an example of current intelligence. Current intelligence can also be generated from information within an organization’s networks. Analyzing an intrusion or a spearphishing attack against executives can also generate current intelligence that needs to be acted on quickly.

 

When you do generate current intelligence from your own network, document it! It can then contribute to threat trending and threat landscape research, which we will discuss shortly. It can also be shared with other organizations.

 

Threat Trending (Estimation)

All of the intelligence gathered at the tactical level (technical intelligence, current intelligence) can be further analyzed to generate threat trends. Threat trending takes time because of the nature of trending: you are analyzing patterns over time to see how things change and how they stay the same. Threat trending can be an analysis of a particular threat that has impacted your network repeatedly, or it can be an analysis of how an actor group or malware family has evolved over time. The more relevant a threat trend is to your network or organization, the more useful it will be to you.

 

Threat trending allows us to move from an analysis of something that we have seen and know is bad towards predicting or estimating future threats.

 

Threat Landscape Research

Speaking of trending, there has been a long trend in intelligence analysis of focusing on time-sensitive, current intelligence at the expense of longer term, strategic research. Consider how many tactical-level, technical IOCs we have in the community compared to strategic intelligence resources, or how many new programs are focused on providing “real-time intelligence” versus “deliberate, in-depth analysis.” There are legitimate reasons for that: there are not enough analysts as it is, and they are usually focused on the time-sensitive tasks because they are, well, time sensitive. In addition, we don’t always have the right data to conduct strategic-level analysis, both because we are not accustomed to collecting it from our own networks and because most people who are willing to share tactical indicators of threats are not as willing to share information on how those threats impacted them.

 

We need to change this, because you cannot (or should not) make decisions about the future of your security program without a strategy, and you cannot (or should not) have a security strategy without understanding the logic behind it. Threat landscape research—which is a long-term analysis of the threats in your environment, what they target, how they operate, and how you are able to respond to those threats—will drive your strategy. The tactical-level information you have been collecting and analyzing from your network on a daily basis can all contribute to threat landscape research. Current intelligence, both yours and public domain information, can also contribute. One framework for capturing and analyzing this information is VERIS—the Vocabulary for Event Recording and Incident Sharing—on which the DBIR is based. Just remember, this type of intelligence analysis takes time and effort, but it will be worth it.
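Capturing incidents in a consistent structure is what makes that later trending possible. The sketch below records a single incident as JSON, loosely following the four A's that VERIS organizes incidents around (actor, action, asset, attribute); the field values are invented and the structure is deliberately simplified rather than the full VERIS schema:

import json
from datetime import date

incident = {
    "incident_id": "2016-0042",                       # hypothetical identifier
    "timeline": {"discovered": str(date(2016, 3, 1))},
    "actor": {"external": {"variety": "organized crime"}},
    "action": {"social": {"variety": "phishing", "vector": "email"}},
    "asset": {"variety": "user desktop"},
    "attribute": {"confidentiality": {"data": "credentials"}},
    "discovery_method": "user report",
}

with open("incidents.jsonl", "a") as f:               # placeholder incident store
    f.write(json.dumps(incident) + "\n")

A year of records like this, even imperfect ones, is far easier to trend than a folder of free-form incident write-ups.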

 

Information Sharing

There is currently an emphasis on sharing IOCs and other technical information; however, any of the types of intelligence we have discussed in this post are good candidates for information sharing. Sharing information on best practices and processes is also incredibly beneficial.

 

Sharing information on what has been seen in an organization’s network is a good way to understand new threats as they emerge and increase situational awareness. Information sharing essentially generates intelligence to warn others of threats that may impact them. Information sharing is becoming increasingly automated, which is great for handling higher volumes of information; however, unless there is an additional layer of analysis focused on how this information is relevant to or impacts your organization, it will stay information (not intelligence) and will not be as useful as it could be. For more information, see Alex Pinto’s presentation on his recent research on measuring the effectiveness of threat intelligence sharing.

 

Even if you are not yet convinced of the value of generating your own intelligence from your environment, consuming threat intelligence still requires analysis to understand how it is relevant to you and what actions you should take. A solid understanding of the different types of intelligence and how they are used will help guide how you should approach that analysis.

This is the first post in a three-part series on threat intelligence foundations, discussing the fundamentals of how threat intelligence can be used in security operations.

 

There is a consensus among many in threat intelligence that the way the community has approached threat intelligence in the past - i.e., the “Threat Data → SIEM → Magical Security Rainbows” approach - has left something to be desired, and that something is usually analysis. Rick Holland (@rickhholland) warned us early on that we were on the wrong track with his 2012 post My Threat Intelligence Can Beat Up Your Threat Intelligence, where he wrote, “The real story on threat intelligence is your organization’s ability to develop your own."

 

There are ways that we can take advantage of the threat intelligence that currently exists while learning how to better leverage the threat intelligence in our own networks. Doing this requires an understanding of intelligence fundamentals and how they can be applied in security operations. This series is designed to help those interested in threat intelligence - whether just starting out or re-evaluating their existing programs - understand the underlying fundamentals of threat intelligence and intelligence analysis.

 

In the first part of this three-part series we will discuss the levels of intelligence and the various ways threat intelligence can be utilized in operations.

 

Threat Intelligence Levels in Security Operations: Crawl

When an organization is determining how to best integrate threat intelligence into their security operations it is helpful to have a framework detailing the different ways that intelligence can be effectively utilized.

 

Traditionally, intelligence levels have aligned to the levels of warfare: strategic, operational, and tactical. There are several reasons for this alignment: it can help identify the decision makers at each level; it identifies the purpose of that intelligence, whether it is to inform policy and planning or to help detect or deter an attack; it can help dictate what actions should be taken as a result of receiving that intelligence.

 

At any level of intelligence it is critical to assess the value to your organization specifically. Please answer this for yourself, your team, and your organization: “How does this information add perspective to our security program? What decisions will this information assist us in making?”

 

Strategic intelligence

Strategic intelligence is intelligence that informs the board and the business. It helps them understand broader trends that are facing their organizations and other similar organizations in order to assist in the development of a strategy. Strategic Intelligence comes from analyzing longer term trends, and often takes the shape of analytic reports such as the DBIR and Congressional Research Service (CRS) reports. Strategic intelligence assists key decision makers in determining what threats are most impactful to their businesses and future plans, and what long-term efforts they may need to take to mitigate them.

 

The key to implementing strategic intelligence in your own business is to apply this knowledge in the context of your own priorities, data, and attack surface. No commercial or annual trend report can tell you what is important to your organization or how certain threat trends may impact you specifically.

 

Strategic intelligence - like all types of intelligence - is a tool that can be used to shape future decisions, but it cannot make those decisions for you.

 

Operational Intelligence

Operational intelligence provides intelligence about specific attacks that may impact an organization. Operational intelligence is rooted in the concept of military operations - a series of plans or engagements that may take place at different times or locations, but have the same overarching goal. It could include identified campaigns targeting an entire sector, or it could be hacktivist or botnet operations targeting one specific organization through a series of attacks.

 

Information Sharing and Analysis Centers (ISACs) and Organizations (ISAOs) are good places to find operational intelligence.

 

Operational intelligence is geared towards higher-level security personnel, but unlike strategic intelligence it dictates actions that need to be taken in the near to mid-term rather than the long term. It can help inform decisions such as whether to increase security awareness training, how to staff a SOC during an identified adversary operation, or whether to temporarily deny requests for exceptions to the firewall policy. Operational intelligence is one of the best candidates for information sharing. If you see something that is going on that may impact others in the near term, *please* share that information. It can help other organizations determine if they need to take action as well.

 

Operational intelligence is only useful when those receiving the intelligence have the authority to make changes to policies or procedures in order to counter the threats.

 

Tactical Intelligence

Tactical Intelligence focuses on the “what” (Indicators of Compromise) and the “how” (Tactics, Techniques, and Procedures) of an attacker’s actions, with the intent of using that knowledge to prevent, detect, or respond to incidents. Do attackers tend to use a particular method to gain initial access, such as social engineering or vulnerability exploitation? Do they use a particular tool or set of tools to escalate privilege and move laterally? What indicators of compromise might allow you to detect these activities? For a good list of various sources of tactical intelligence, check out Herman Slatman's list of threat intelligence resources.

 

Tactical intelligence is geared towards security personnel who are actively monitoring their environment and gathering reports from employees who report anomalous activity or social engineering attempts. Tactical Intelligence can also be used in hunt operations, where we are looking to identify attacker behaviors that vary only slightly from a typical user’s behavior. This type of intelligence requires more advanced resources, such as extensive logging, user behavioral analytics, endpoint visibility, and trained analysts. It also requires a security-conscious workforce, as some indicators may not be captured or alerted on without first being reported by an employee. You will always have more employees than attack sensors…listen to them, train them, gather the information they can provide, analyze it, and then act upon it.

 

Tactical threat intelligence provides specific, but perishable, information that security personnel can act on.

 

Understanding how threat intelligence operates at different levels can help an organization understand where it needs to focus its efforts and what it can do with the threat intelligence it has access to. It can also help guide how the organization should approach intelligence in the future. The intelligence you can generate from your own network will always be the most actionable intelligence, regardless of the level.

 

For more information on the levels of intelligence and the levels of warfare, check out these resources:

Deral Heiland

What's In A Hostname?

Posted by Deral Heiland Employee Mar 9, 2016

Like the proverbial cat, curiosity can often get me in trouble, but often enough, curiosity helps us create better security. It seems like every time I encounter a product with a web management console, I end up feeding it data that it wasn't expecting.

 

As an example, while configuring a wireless bridge that had a discovery function that would identify and list all Wi-Fi devices in the radio range, I thought: "I wonder what would happen if I broadcast a service set identifier (SSID) containing format string specifiers?"

 

I set up a soft AP on my Linux host using airbase-ng and configured the SSID to broadcast %x%x%x. I was shocked when the discovered AP's SSID displayed data from the wireless bridge's process stack, as shown in Figure 1:
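
For anyone who wants to reproduce this kind of test in a lab, the soft AP setup can be scripted. This is only a minimal sketch: it assumes the aircrack-ng suite is installed, that a wireless interface named wlan0mon is already in monitor mode, and that the channel is arbitrary.

```python
# Broadcast a soft AP whose SSID contains format string specifiers, so that any
# nearby device which displays discovered SSIDs without sanitizing them may
# leak process memory. Lab use only; interface name and channel are assumptions.
import subprocess

MALICIOUS_SSID = "%x%x%x"   # the same probe described above
INTERFACE = "wlan0mon"      # assumed monitor-mode interface
CHANNEL = "6"

subprocess.run(
    ["airbase-ng", "-e", MALICIOUS_SSID, "-c", CHANNEL, INTERFACE],
    check=True,
)
```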

 

fsv.png

Figure 1: Format String Injected Via SSID

 

This data confirmed that this wireless bridge appliance was vulnerable to a format string exploit. This led to the discovery of multiple devices whose web management consoles were vulnerable to injection attacks via SSID, including format strings, persistent cross-site scripting (XSS), and cross-site request forgery (CSRF) (more details of these are discussed in a whitepaper I released at Blackhat).

 

Unfortunately, attacks against web management interfaces don’t stop with SSIDs. So many products inevitably consume data from various resources and then display that data within the web management console without conducting any validation checks on that data first. This often leads to vulnerabilities being exploited via the web management interfaces, and it does not appear to be going away any time soon. Recently, Matthew Kienow and I released a number of advisories in which XSS attacks were injected into the web management consoles of Network Management Systems (NMS) using SNMP.

 

Again, several months back, while on a pen testing engagement, a coworker was running an open source tool used to launch relay-style attacks. This tool captured hostname information from the network and stored it as part of its function, and of course it had a web interface. Sadly, his testing was interfering with my testing, so for fun I changed my Linux system's hostname to “><script>alert(“YOU-HAVE-BEEN-HACKED”)</script>.

 

Initially I wasn't sure if this XSS attack would work, but soon enough I heard a loud scream come from his corner of the room. This brings me around to the purpose of this blog: what would the impact be if everyone changed the name of their host system to contain XSS data, such as “><iframe>? I am scared to even imagine the number of products that use hostname data and display it within their web management interface. Based on all my testing against various applications and embedded devices that use web interfaces for management, I have found roughly 40% of the systems I have tested to be vulnerable to some form of XSS injection attack.

 

So, I wonder how many administration web consoles have this sort of problem with hostname parsing?

 

Want to Help Us Find Out?

Now, if this idea intrigues you, don’t rush out and start renaming your systems: even a simple XSS payload such as “><iframe>, which should just create an empty box on the screen (Figure 2), can have a serious impact on the web interface functionality of some products and could easily prevent them from functioning normally.

 

However, if you want to try this out, first make sure you have permission and that you do it within a controlled environment, not within your production environment. If you end up giving this a try, I ask that you share the results with us at security@rapid7.com (PGP KeyID: 0x8AD4DB8D) so we can follow up with the results in a future blog.
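
If you do give it a try, a throwaway script along these lines keeps the change temporary. It is only a sketch: socket.sethostname() requires root on a Linux host (ideally a disposable lab VM), and the payload string is just the simple probe mentioned above.

```python
# Temporarily rename a lab machine to an XSS probe string, then restore the
# original hostname no matter what happens. Requires root and Python 3.3+.
import socket

PAYLOAD = '"><iframe>'            # simple probe; adjust for your own testing
original = socket.gethostname()   # remember the real hostname

try:
    socket.sethostname(PAYLOAD)   # lab VM only -- see the warnings above
    input("Check your management consoles, then press Enter to restore... ")
finally:
    socket.sethostname(original)  # always put the name back
```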

 

Also, I highly recommend that you contact the product vendor for ethical disclosure so they can fix the issues.

 

box.png

Figure 2: <iframe> box

 

 

I am looking forward to hearing back on what you find.

Mobile app hacking is nothing new. Many people have performed these types of assessments, and there are even courses all about it. Even so, many penetration testers may still be hesitant about performing them, or may not do them well. Mobile application hacking is much like other forms of hacking: you can't get really good unless you practice regularly. So how can we get experience hacking mobile applications? Well, with over 1.5 million apps in the Google Play store and the Apple App Store, there is no shortage of apps to play with. There are also numerous purposely vulnerable mobile apps you can download and test as well.

 

There are a number of different techniques for analyzing mobile applications. They include:

 

  • File System Analysis
  • Network Analysis
  • Source Code Analysis
  • Dynamic Analysis

 

For the purpose of this blog entry, we will be focusing on File System Analysis on Android. We will expand this into a series if there is a demand for it.

 

To access the file system contents of an app, you need the appropriate permissions. On Android, that usually means root access. During engagements, I have had customers say “Well you have root access. Without that you wouldn’t have gotten to that data, and most people’s devices aren’t rooted.” A point well taken, and since I am in the business of showing true risk to an organization, I figured what better way than to create a tool that would allow access to the file system contents without root access, and thus, backHack was born.

 

backHack was created over 2 years ago, but I got busy and put the tool on the backburner. Fast forward to a few weeks ago when I found a new game: Alto’s Adventure. The game is awesome for a time killer, and beautifully made. It took a long time to get to the next level and collect coins, and I decided it was time to dust off backHack and see what I could do with the application.

 

Instead of just telling you what I did, I will show you, and I encourage you to follow along on your own. First, we need to make sure we have Android Studio installed, or at least ADB (Android Debug Bridge) accessible in our PATH. We also need to have debugging enabled on our device. At this point, issue the command ‘adb devices’ and make sure your device is showing as connected.
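
If you prefer to script that sanity check, something like the following works. It only assumes adb is on your PATH; the parsing matches the standard 'adb devices' output of one serial number per line followed by the word "device".

```python
# Confirm ADB can see the connected device before running backHack.
import subprocess

out = subprocess.run(["adb", "devices"], capture_output=True, text=True).stdout
devices = [line.split("\t")[0] for line in out.splitlines()[1:] if "\tdevice" in line]

if devices:
    print("Connected device(s):", ", ".join(devices))
else:
    print("No device detected -- check the cable and USB debugging settings.")
```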

 

adbDevices.png

 

Now we run backHack. (python backHack.py)

 

backHackASCII.png

 

backHack has been designed with a simple menu system that would be easy enough for an infant to use. We first need to select what app we want to “hack”. For that, choose option 1, then select either option 1 to list all apps on the device, option 2 to search for an app, or option 3 to type in the name of the app. For our purposes we are looking at Alto's Adventure, so I will choose option 2, type in ‘alto’, and find the app name of ‘com.noodlecake.altosadventure’. I then copy and paste that name under option 3, returning me to the main menu.
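
If you would rather find the package name outside the menu, the device's package manager can be queried directly. This is an illustration of the idea, not necessarily how backHack implements its own search option.

```python
# List installed packages via adb and filter by keyword.
import subprocess

def find_packages(keyword):
    out = subprocess.run(["adb", "shell", "pm", "list", "packages"],
                         capture_output=True, text=True).stdout
    # Each line looks like "package:com.noodlecake.altosadventure"
    return [line.split(":", 1)[1] for line in out.splitlines()
            if keyword in line.lower()]

print(find_packages("alto"))   # e.g. ['com.noodlecake.altosadventure']
```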

 

backHackMenu1.png backHackMenu2.png

backHackMenu-AppSelection.png

Next, I backup the app by selecting option 2. For this step, we will be prompted to unlock our device and confirm the backup operation.

 

backHackMenu-Extract.png

 

Once the backup is complete, backHack extracts the backup, placing the file system contents under apps/<APPNAME>. In this case, it is apps/com.noodlecake.altosadventure.
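
Under the hood, an unencrypted adb backup is straightforward to unpack yourself: the .ab file is a short text header followed by a zlib-compressed tar stream. The sketch below assumes a backup taken without a password; I have not verified that this is exactly how backHack does it internally.

```python
# Convert an unencrypted backup.ab into extracted files. The 24-byte header is
# "ANDROID BACKUP" plus version, compression flag, and encryption marker ("none").
import io, tarfile, zlib

def ab_extract(ab_path, dest_dir="."):
    with open(ab_path, "rb") as f:
        header = f.read(24)
        assert header.startswith(b"ANDROID BACKUP"), "not an adb backup file"
        tar_bytes = zlib.decompress(f.read())
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        tar.extractall(dest_dir)   # contents land under apps/<APPNAME>/

ab_extract("backup.ab")
```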

 

backHackMenu-appFolder1.png

 

We can then poke around the file system and see what is there. Some good places to look are under the sp folder (shared_prefs) and the db folder (databases). In the case of Alto’s Adventure, there is an XML file named com.noodlecake.altosadventure.xml.

 

backHackMenu-appFolder2.png

 

When we look at this file, we find settings for the app, including coins and level. I find it fun to make changes and see what happens, so let’s do that. We set coins to 999999999 and level to 60. (60 is currently the highest level, and we don’t want to be greedy by going for 1,000,000,000 coins, do we?)
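
The same edit can be scripted against the standard Android SharedPreferences layout (&lt;int name="..." value="..."/&gt; entries). The key names and path below are assumptions based on this particular file and on backHack's apps/&lt;APPNAME&gt;/sp layout; adjust them to whatever your copy shows.

```python
# Bump the coin count and goal level in the extracted shared preferences file.
import xml.etree.ElementTree as ET

prefs = "apps/com.noodlecake.altosadventure/sp/com.noodlecake.altosadventure.xml"
tree = ET.parse(prefs)

for entry in tree.getroot():
    if entry.get("name") == "coins":               # assumed key name
        entry.set("value", "999999999")
    elif entry.get("name") == "currentGoalLevel":  # assumed key name
        entry.set("value", "60")

tree.write(prefs, encoding="utf-8", xml_declaration=True)
```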

 

backHackMenu-prefs1.png

 

After saving the file, we then go back to backHack and select option 3. This will repack the app and restore to your device. Again, you will be prompted to confirm the restore operation on the device.
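
Conceptually, the repack step is the reverse of the extract step: tar the modified directory, zlib-compress it, prepend the backup header, and hand the result to adb restore. Treat this as a conceptual sketch rather than a drop-in replacement; Android is picky about the ordering of entries inside the tar (each app's _manifest must come first), which is part of what backHack takes care of for you.

```python
# Naive repack of an unencrypted, version 1 adb backup; see the ordering caveat above.
import io, subprocess, tarfile, zlib

def ab_pack(src_dir, ab_path):
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add(src_dir)   # e.g. "apps/com.noodlecake.altosadventure"
    with open(ab_path, "wb") as f:
        f.write(b"ANDROID BACKUP\n1\n1\nnone\n")   # magic, version, compressed, no encryption
        f.write(zlib.compress(buf.getvalue()))

ab_pack("apps/com.noodlecake.altosadventure", "restore.ab")
subprocess.run(["adb", "restore", "restore.ab"], check=True)
```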

 

backHackMenu-restore1.png

 

Now that the app has been restored, we open the application and see what happened. Boom! 999,999,999 coins and level 61! (Notice the entry in the XML file was for currentGoalLevel, which we set to 60; the entry actually means “completedGoalLevel”. Also, coins are shown as 1,000,000,000. Guess they round up?)

 

Screenshot_20160303-231610.png Screenshot_20160303-231627.png

 

While this is a fun way to get extra lives, coins, or level up on a game, the same methodology can be used in any app. For instance, how about modifying your United app to show you have 14,000,000 miles, are Premier 1K, and Star Alliance Gold?

 

United.png

 

Often, beyond just modifying how an app behaves, you may find passwords or other sensitive information stored in the file system. backHack demonstrates that risk better than a rooted device does, since any device that is unlocked can be accessed this way.

 

Caveman.png

royhodgman

The Attacker's Dictionary

Posted by royhodgman Employee Mar 1, 2016

Rapid7 is publishing a report about the passwords attackers use when they scan the internet indiscriminately. You can pick up a copy at booth #4215 at the RSA Conference this week, or online right here. The following post describes some of what is investigated in the report.

 

Announcing the Attacker's Dictionary

Rapid7's Project Sonar periodically scans the internet across a variety of ports and protocols, allowing us to study the global exposure to common vulnerabilities as well as trends in software deployment (this analysis of binary executables stems from Project Sonar).

 

As a complement to Project Sonar, we run another project called Heisenberg which listens for scanning activity. Whereas Project Sonar sends out lots of packets to discover what is running on devices connected to the Internet, Project Heisenberg listens for and records the packets being sent by Project Sonar and other Internet-wide scanning projects.

 

The datasets collected by Project Heisenberg let us study what other people are trying to examine or exploit. Of particular interest are scanning projects which attempt to use credentials to log into services that we do not provide. We cannot say for sure what the intention is of a device attempting to log into a nonexistent RDP server running on an IP address which has never advertised its presence, but we believe that behavior is suspect and worth analyzing.

 

How Project Heisenberg Works

Project Heisenberg is a collection of low interaction honeypots deployed around the world. The honeypots run on IP addresses which we have not published, and we expect that the only traffic directed to the honeypots would come from projects or services scanning a wide range of IP addresses. When an unsolicited connection attempt is made to one of our honeypots, we store all the data sent to the honeypot in a central location for further analysis.
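
To make the idea concrete (this is not the Heisenberg code itself), a low-interaction listener can be as simple as a socket that accepts connections on a port it has no business serving and records whatever arrives. The port and log format below are just placeholders.

```python
# Minimal low-interaction honeypot: accept connections on 3389/tcp and log the
# source address plus the first bytes the scanner sends.
import datetime, socket

def listen_and_log(port=3389, logfile="honeypot.log"):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, (ip, src_port) = srv.accept()
        data = conn.recv(4096)   # whatever the scanner sends first
        with open(logfile, "a") as log:
            log.write(f"{datetime.datetime.utcnow().isoformat()} "
                      f"{ip}:{src_port} {data!r}\n")
        conn.close()

listen_and_log()
```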

 

In this post we will explore some of the data we have collected related to Remote Desktop Protocol (RDP) login attempts.

 

RDP Summary Data

We have collected RDP passwords over a 334-day period, from 2015-03-12 to 2016-02-09.

 

During that time we have recorded 221,203 different attempts to log in, coming from 5,076 distinct IP addresses across 119 different countries, using 1,806 different usernames and 3,969 different passwords.
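
For anyone who wants to produce the same kind of summary from their own honeypot logs, the counting is simple. The CSV column names here are hypothetical and just stand in for whatever format your collector writes.

```python
# Count total attempts and distinct source IPs, usernames, and passwords.
import csv

ips, usernames, passwords, attempts = set(), set(), set(), 0
with open("rdp_attempts.csv", newline="") as f:
    for row in csv.DictReader(f):
        attempts += 1
        ips.add(row["src_ip"])
        usernames.add(row["username"])
        passwords.add(row["password"])

print(attempts, "attempts from", len(ips), "IPs,",
      len(usernames), "usernames,", len(passwords), "passwords")
```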

 

Because it wouldn't be a discussion of passwords without a top 10 list, the top 10 passwords that we collected are:

 

password         count    percent
x                11865    5.36%
Zz               10591    4.79%
St@rt123          8014    3.62%
1                 5679    2.57%
P@ssw0rd          5630    2.55%
bl4ck4ndwhite     5128    2.32%
admin             4810    2.17%
alex              4032    1.82%
.......           2672    1.21%
administrator     2243    1.01%

 

And because we have information not only about passwords, but also about the usernames being used, here are the top 10 usernames that were collected:

 

username         count    percent
administrator    77125    34.87%
Administrator    53427    24.15%
user1             8575    3.88%
admin             4935    2.23%
alex              4051    1.83%
pos               2321    1.05%
demo              1920    0.87%
db2admin          1654    0.75%
Admin             1378    0.62%
sql               1354    0.61%

 

We see on average 662.28 login attempts every day, but the actual daily number varies quite a bit. The chart below shows the number of events per day since we started collecting data. Notice the heavy activity in the first four months, which skews the average high.

 

all_events_by_day.png

 

In addition to the username and password used in each login attempt we captured, we also collected the IP address of the device making the attempt. To the best of the ability of the GeoIP database we used, here are the top 10 countries from which the collected login attempts originate:

 

country           country code    count    percent
China             CN              88227    39.89%
United States     US              54977    24.85%
South Korea       KR              13182    5.96%
Netherlands       NL              10808    4.89%
Vietnam           VN               6565    2.97%
United Kingdom    GB               3983    1.80%
Taiwan            TW               3808    1.72%
France            FR               3709    1.68%
Germany           DE               2488    1.12%
Canada            CA               2349    1.06%
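
The country attribution above comes from looking up each source IP in a GeoIP database. Here is a minimal sketch of that step, assuming MaxMind's geoip2 library and a GeoLite2 country database (the report does not depend on this particular tooling):

```python
# Map a source IP to a country code; unknown addresses are reported as such.
import geoip2.database, geoip2.errors

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

def country_of(ip):
    try:
        return reader.country(ip).country.iso_code   # e.g. "CN", "US"
    except geoip2.errors.AddressNotFoundError:
        return "unknown"

print(country_of("8.8.8.8"))
```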

 

With the data broken down by country, we can recreate the chart above to show activity by country for the top 5 countries:

 

 

events_per_day_by_country.png

 

RDP Highlights

There is even more information to be found in this data beyond counting passwords, usernames and countries.

We guess that these passwords are selected because whoever is conducting these scans believes that there is a chance they will work. Maybe the scanners have inside knowledge about actual usernames and passwords in use, or maybe they're just using passwords that have been made available from previous security breaches in which account credentials were leaked.

 

In order to look into this, we compared all the passwords collected by Project Heisenberg to passwords listed in two different collections of leaked passwords. The first is a list of passwords collected from leaked password databases by Crackstation. The second list comes from Mark Burnett.
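
Mechanically, the comparison is just set membership: load each leaked-password corpus into a set and count how many of the collected passwords appear in any of them. The file names below are placeholders for the Crackstation and Burnett lists.

```python
# How many of the top-N collected passwords appear in any leaked-password list?
def load_list(path):
    with open(path, encoding="utf-8", errors="ignore") as f:
        return {line.rstrip("\n") for line in f}

leaked = load_list("crackstation.txt") | load_list("burnett.txt")

def coverage(top_passwords):
    hits = sum(1 for p in top_passwords if p in leaked)
    return hits, 100.0 * hits / len(top_passwords)

# e.g. coverage(collected_passwords[:10]) -> (8, 80.0) for our data
```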

 

In the table below we list how many of the top N passwords are found in these password lists:

 

top password count    num in any list    percent
1                     1                  100.00%
2                     2                  100.00%
3                     2                  66.67%
4                     3                  75.00%
5                     4                  80.00%
10                    8                  80.00%
50                    28                 56.00%
100                   55                 55.00%
1000                  430                43.00%
3969                  1782               44.90%

 

This means that 8 of the 10 most frequently used passwords were also found in published lists of leaked passwords. But looking back at the top 10 passwords above, they are not very complex and so it is not surprising that they appear in a list of leaked passwords.

 

This observation prompted us to look at the complexity of the passwords we collected. Just about any time you sign up for a service on the internet – be it a social networking site, an online bank, or a music streaming service – you will be asked to provide a username and password. Many times your chosen password will be evaluated during the signup process and you will be given feedback about how suitable or secure it is.

 

 

Password evaluation is a tricky and inexact art that consists of various components. Some of the many aspects that a password evaluator may take into consideration include:

 

  • length
  • presence of dictionary words
  • runs of characters (aaabbbcddddd)
  • presence of non alphanumeric characters (!@#$%^&*)
  • common substitutions (1 for l [lowercase L], 0 for O [uppercase o])

 

Different password evaluators will place different values on each of these (and other) characteristics to decide whether a password is "good" or "strong" or "secure". We looked at a few of these password evaluators, and found zxcvbn to be well documented and maintained, so we ran all the passwords through it to compute a complexity score for each one. We then looked at how password complexity is related to finding a password in a list of leaked passwords.
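
For reference, this is roughly what the scoring step looks like with the Python port of zxcvbn (pip install zxcvbn); zxcvbn reports a score from 0 (weakest) to 4 (strongest), which lines up with the complexity buckets in the table below.

```python
# Bucket a list of passwords by zxcvbn complexity score.
from collections import Counter
from zxcvbn import zxcvbn

def complexity_histogram(passwords):
    return Counter(zxcvbn(p)["score"] for p in passwords)

print(complexity_histogram(["x", "St@rt123", "bl4ck4ndwhite", "P@ssw0rd"]))
```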

 

complexity    # passwords    %        crackstation    crackstation %    Burnett    Burnett %    any    any %    all    all %
0             803            20.23    726             90.41             564        70.24        728    90.66    562    69.99
1             1512           38.10    898             59.39             634        41.93        939    62.10    593    39.22
2             735            18.52    87              11.84             37         5.03         94     12.79    30     4.08
3             567            14.29    13              2.29              5          0.88         13     2.29     5      0.88
4             352            8.87     7               1.99              4          1.14         8      2.27     3      0.85

 

The above table shows the complexity of the collected passwords, as well as how many were found in different password lists.

 

For instance, with complexity level 4, there were 352 passwords classified as being that complex, 7 of which were found in the crackstation list, and 4 of which were found in the Burnett list. Furthermore, 8 of the passwords were found in at least one of the password lists, meaning that if you had all the password lists, you would find 2.27% of the passwords classified as having a complexity value of 4. Similarly, looking across all the password lists, you would find 3 (0.85%) passwords present in each of the lists.

 

From this we extrapolate that as passwords get more complex, fewer and fewer are found in the lists of leaked passwords. Since we see that attackers try passwords that are stupendously simple, like single character passwords, and much more complex passwords that are typically not found in the usual password lists, we can surmise that these attackers are not tied to these lists in any practical way -- they clearly have other sources for likely credentials to try.

 

Finally, we wanted to know what the population of possible targets looks like. How many endpoints on the internet have an RDP server running, waiting for connections? Drawing on our experience with Project Sonar, on 2016-02-02 the Rapid7 Labs team ran a Sonar scan to see how many IPs have port 3389 open and listening for TCP traffic. We found that 10,822,679 different IP addresses meet that criterion, spread out all over the world.

 

So What?

With this dataset we can learn about how people looking to log into RDP servers operate. We have much more detail in the report, but some of our findings include:

  • We see that many times a day, every day, our honeypots are contacted by a variety of entities.
  • We see that many of these entities try to log into an RDP service which is not there, using a variety of credentials.
  • We see that a majority of the login attempts use simple passwords, most of which are present in collections of leaked passwords.
  • We see that as passwords get more complex, they are less and less likely to be present in collections of leaked passwords.
  • We see that there is a significant population of RDP enabled endpoints connected to the internet.

 

But wait, there's more!

If this interests you and you would like to learn more, come talk to us at booth #4215 at the RSA Conference.

If you've been involved in patch frenzies for any reasonable amount of time, you might remember last year's hullabaloo around GHOST, a vulnerability in glibc's gethostbyname() function. Well, another year, another resolver bug.

 

gethostbyname(), meet getaddrinfo()

This time, it's an exploitable vulnerability in glibc's getaddrinfo(). Like GHOST, this will affect loads and loads of Linux client and server applications, and like GHOST, it's pretty difficult to "scan the Internet" for it, since it's a bug in shared library code. Google reports they have a working private exploit, and I know those rascals on the Metasploit team have been poking at the vulnerability today, so do yourself a favor and patch and reboot your affected systems as soon as practical.
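
If you want a quick way to see which glibc a Linux host is actually running before and after patching, you can ask the library itself. Keep in mind that distributions backport fixes, so the raw version number is only a hint, not proof of exposure one way or the other.

```python
# Print the running glibc version via ctypes (Linux only).
import ctypes

libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print("glibc version:", libc.gnu_get_libc_version().decode())
```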

 

The Long Tail of IoT

Unfortunately, as the Ars Technica article points out, there are certainly loads and loads of IoT devices out in the world that aren't likely to see a patch any time soon. So, for all those devices you can't reasonably patch, your network administrator could take a look at the mitigations published by RedHat, and consider the impact of limiting the actual on-the-wire size of DNS replies in your environment. While it may be a heavy-handed strategy, it will buy you time to ferret out all those IoT devices that people have squirrelled away on your network.

 

Take A Breath

Finally, as with GHOST, there is a valid reason to be concerned, but we don't think this is the end-of-the-internet-as-we-know-it.

 

The bad news is that an exploit against at least one vector is known to exist, and the impact can be nasty if an attacker can segfault your processes with a malformed DNS response, and worse if they're clever and lucky enough to pop a shell. Plenty of legacy systems will be affected. So that all sounds pretty bad, yes?

 

But, ultimately, this bug is far more difficult to exploit than many. It's difficult to target (by both bad guys and good guys), and the attacks tend to require client interaction. As for those legacy systems? They tend to have, if not bigger problems, adjacent and better understood problems, like Shellshock and Heartbleed.

 

The bottom line is that you should patch (as with any CVE-classified bug), but I wouldn't expect the Internet to come crashing down over this.

 

Are Rapid7's Products Impacted?

We're still investigating which of Rapid7's products are impacted, and will update customers as we know more.  So far, we can confirm that both physical and virtual Nexpose appliances are affected and operating systems for them will need to be updated. Nexpose hosted engines are also affected and are being patched as I type. In both cases, we will reach out to any affected customers to advise on any action that needs to be taken by them.

 

Nexpose Coverage

Meanwhile, Nexpose picked up the glibc patch update earlier today, and it's going through analysis now; we can expect a check for Nexpose customers shortly, as we're targeting tomorrow's regular release for that. Armed with a Nexpose check, you can get a decent idea of what your threat exposure is to this bug-that-shall-not-be-branded, on the chance that it really does take off in the coming days.

Thanks to everyone who joined our webinar on How to Build Threat Intelligence into your Incident Detection and Response Program. We got so many great questions during the session that we decided to follow up with a post answering them and addressing the trends and themes we continue to see around threat intelligence.

TL/DR for those of you who don't have time to read all of the responses (we got a lot of questions):

  • Threat intelligence is a process, not something you buy. That means you will have to put work in in order to get results.
  • Threat intelligence works best when it is integrated across your security operations and is not viewed as a stand-alone function.
  • Strategic, Operational, and Tactical threat intelligence (including technical indicators) are used differently and gathered using different methods.

Do you see threat intelligence as a proactive approach to cyber monitoring or just a better way of responding to cyber threats? If you see it as proactive, how, since the intelligence is based on events and TTPs that have already occurred?

 

A misconception about threat intelligence is that it is focused exclusively on alerting or monitoring. We talked about indicators of compromise and how to use them for detection and response, but there is a lot more to threat intelligence than IOCs. 

 

When threat intelligence is properly implemented in a security program it contributes to prevention, detection, and response. Understanding the high level, strategic threats facing your organization helps determine how to improve overall security posture.

 

All intelligence must be based on facts (i.e., things that have already occurred or that we already know), but those facts allow us to create models that can be used to identify trends and assess what controls should be put in place to prevent attacks.

 

As prevention comes into alignment, it is important to maintain awareness of new threats by leveraging operational and tactical intelligence, taking action to protect your organization before those threats are able to impact you.

 

I can see the usefulness of tactical, operational and technical intelligence. How would you be able to establish strategic intelligence?

 

Strategic Intelligence is intelligence that informs leadership or decision makers on the overarching threats to the organization or business. Think of this as informing high-level decision making based on evidence: seeing the forest without being distracted by the trees.

 

Information that contributes to strategic intelligence is gathered and analyzed over a longer period of time than other types of threat intelligence. The key to utilizing strategic intelligence is being able to apply it in the context of your own data and attack surface. An example would be intelligence that financially motivated cyber criminals are targeting third party vendors in order to gain access to retail networks. This information could be used to assess whether a business would be vulnerable to this type of attack and identify longer term changes that need to take place to reduce the risk, such as network segmentation, audits of existing third-party access, and development of policies to limit access.

 

What is the difference between Strategic and Operational Intelligence?

 

Strategic intelligence focuses on long-term threats and their implications, while operational intelligence focuses on short-term threats that may need to be mitigated immediately. Implementing strategic and operational intelligence often involves asking the same questions: who and why. With strategic intelligence you are evaluating the attackers - focusing on their tactics and motivations rather than geographical location - to determine how those threats may impact you in the future. With operational intelligence you are evaluating who is actually being targeted and how, so that you can determine if you need to take any immediate actions in response to the threat.



What is positive control and why is it important?

Positive control is the aspirational state of a technical security program. This means that only authorized users and systems are on the network, and that accounts and information are accessed only by approved users. Before you start assessing your network to understand what “normal” looks like, take care to be sure that you are not including attacker activity in your baseline.

 

 

If you are being targeted by an identified entity, what should you do to build intelligence on possible attacks?

Active and overt attacks fall into the realm of operational intelligence. You can gather intelligence on these attacks from social media, blog posts, or alerts from places like US-CERT, ISACs, ISAOs, and other sharing groups. Some questions you should be asking and answering as you gather information are:

  • Who else is being targeted? Can we share information with them on this attack?
  • How have the attackers operated in the past?
  • What are we seeing now that can help us protect ourselves?

 

What is done in Tactical Monitoring?

Tactical Intelligence tends to focus on mechanisms - the “how” of what an attacker does. Do they tend to use a particular method to gain initial access? A particular tool or set of tools to escalate privilege and move laterally? What social engineering or reconnaissance activities do they typically engage in prior to an attack? Tactical intelligence is geared towards security personnel who are actively monitoring their environment as well as gathering reports from employees who report strange activities or social engineering attempts. Tactical Intelligence can also be used by hunters who are seeking to identify activity that looks like normal user behavior but is actually an attacker trying to avoid detection. This type of intelligence requires more advanced resources, such as extensive logging, behavioral analytics, endpoint visibility, and trained analysts. It also requires a security-conscious workforce, as some indicators may not be captured or flagged by logs without first being reported by an employee.

 

Can you point me to resources where to gather information regarding strategic, tactical and operational intelligence?

Before you start gathering information it is important to have a solid understanding of the different levels of threat intelligence. CPNI released a whitepaper covering four types of threat intelligence that we discussed on the webinar: https://www.cpni.gov.uk/Documents/Publications/2015/23-March-2015-MWR_Threat_Intelligence_whitepaper-2015.pdf

 

- Or - if you are an intelligence purist and find that four types of threat intelligence is one type too many (or if you’re just feeling rambunctious) you can refer to JP 2-0, Joint Intelligence, for in-depth understanding of the levels of intelligence and their traditional application. http://www.dtic.mil/doctrine/new_pubs/jp2_0.pdf

 

Once you are ready, here are some places to look for specific types of intelligence:

 

Strategic Intelligence can be gathered through open source trend reports such as the DBIR, DBIR industry snapshots, or other industry specific reports that are frequently released.

 

Operational Intelligence is often time sensitive and can be gathered by monitoring social media, government alerts like those from US-CERT, or by coordinating with partners in your industry.

 

Tactical Intelligence can be gathered using commercial or open sources, such as blogs, threat feeds, or analytic white papers. Tactical Intelligence should tell you how an actor operates, the tools and techniques that they use, and give you an idea of what activities you can monitor for on your own network. At this level, understanding your users and how they normally behave is critical, because threat actors will try to mimic those same behaviors, and being able to identify a deviation, no matter how small, can be extremely significant.

 

What is open source threat intelligence?

Open Source intelligence (OSINT) is the product of gathering and analyzing data gathered from publicly available sources: the open internet, social media, media, etc.

More here: https://en.wikipedia.org/wiki/Open-source_intelligence

For more information on the other types of intelligence collection disciplines: https://www.fbi.gov/about-us/intelligence/disciplines

 

Open source threat intelligence is OSINT that focuses specifically on threats. In many cases you will be able to gather OSINT but will still have to do the analysis of the potential impact of the threat on your organization.

 

What are ISACs and ISAOs? Where can I find a list of them?

Most private sector information sharing is conducted through Information Sharing and Analysis Centers (ISACs), organized primarily by sector (usually critical infrastructure); a list is located here: http://www.isaccouncil.org/memberisacs.html.

 

In the United States, under President Obama’s Executive Order 13691, DHS was directed to improve information sharing between the US government’s National Cybersecurity and Communications Integration Center (NCCIC) and the private sector. This executive order serves as the platform for including those outside the traditional critical infrastructure sectors through Information Sharing and Analysis Organizations (ISAOs).

 

What specific tools are used for threat intelligence?

This is a great question, and I think it underscores a big misunderstanding out there. Threat intelligence is a process, not a product you buy or a service you retain. Any tool you use should help augment your processes. There are a few broad classifications of tools out there, including threat intelligence platforms and data analytics tools. The best way to find the right tools is to identify what problem you are trying to solve with threat intelligence, develop a manual process that works for you, and then look for tools that will help make that manual process easier or more efficient.

 

Can a solution or framework be tailored to support organizations at different levels of cyber security maturity and awareness, or is there a minimum requirement?

There *is* a certain level of awareness that is required to implement a threat intelligence program. Notice that we didn’t say maturity - we feel that a program at any level can benefit from threat intelligence, but there is a lot that goes into an organization being ready to utilize it.

 

At the very basic level an organization needs to understand what threat intelligence is and what it isn’t, understand the problems that they are trying to solve with threat intel, and have a person or a team who is responsible for threat intel. An organization with this baseline understanding is far ahead of many others.

 

When discussing the more technical implementations of threat intelligence, such as threat feeds or platforms, there are some barriers to entry. Aside from those situations, nearly any organization can work to better understand the threats facing them and how they should posture themselves to prevent or respond to those threats. Wherever you are, if you understand how threat intelligence works and start to implement it appropriately, you will be better off regardless of what else you are dealing with.

 

How do you stop an attacker once discovered? ACLs, IPS, etc.?

Scoping the attack is the first stage, which requires both investigation and forensics. The investigation team will identify various attributes used in the attack (tools, tactics, procedures), and then will go back and explore the rest of your systems for those attributes.

 

As systems get added, the recursive scoping loop continues until no new systems are added.

 

Once scoping is done, there are a number of actions to be taken, and the complexity involved in deciding exactly what happens (and when) grows exponentially. A short (and anything but comprehensive) list of considerations includes:

  • Executive briefing and action plan signoff
  • Estimate the business impact of the recovery actions to be executed
  • Isolate compromised systems
  • Lock or change passwords on all compromised accounts with key material in the scoped systems
  • Patch and harden all systems in the organization against vulnerability classes used by the attacker
  • Identify exactly what data was impacted, consult with legal regarding regulatory or contractual required next steps
  • Safely and securely restore impacted services to the business

 

Obviously there are a lot of variables at play here, and every incident is unique.

This stuff is extremely hard; if it were easy, everyone would be doing it.

Call us if you need help.

 

When I find a system that has been compromised, can you tell me where it came from?

You’re asking the right question here - getting a sense of the attacker’s motivation and tactics is extremely valuable. Answering “who did this” and “where did they come from” is a lot more difficult than simply pointing at the source IP for the initial point of entry or command and control.

 

Tactical Intelligence from the investigation will help answer these questions.

 

What should be the first step after learning that a host has been compromised by a zero-day attack?

Run around, scream and shout.

In all seriousness, you won’t start off knowing that a zero-day was used to compromise an asset. Discovering that a zero-day was used in a compromise, by definition, means that an investigation was performed and that the root cause identified at the point of infection was, in fact, a zero-day. At that point you will hopefully have gathered more information about the incident that you can then analyze to better understand the situation you are facing.

Harley Geiger

I've joined Rapid7!

Posted by Harley Geiger Employee Feb 10, 2016

Hello! My name is Harley Geiger and I joined Rapid7 as director of public policy, based out of our Washington, DC-area office. I actually joined a little more than a month ago, but there's been a lot going on! I'm excited to be a part of a team dedicated to making our interconnected world a safer place.

 

Rapid7 has demonstrated a commitment to helping promote legal protections for the security research community. I am a lawyer, not a technologist, and part of the value I hope to add is as a representative of security researchers' interests before government and lawmaking bodies – to help craft policies that recognize the vital role researchers play in strengthening digital products and services, and to help prevent reflexive anti-hacking regulations. I will also work to educate the public and other security researchers about the impact laws and legislation may have on cybersecurity.

 

Security researchers are on the front lines of dangerous ambiguities in the law. Discovering and patching security vulnerabilities is a highly valuable service – vulnerabilities can put property, safety, and dignity at risk. Yet finding software vulnerabilities often means using the software in ways the original coders do not expect or authorize, which can create legal issues. Unfortunately, many computer crime laws - like the Computer Fraud and Abuse Act (CFAA) - were enacted decades ago and make little distinction between beneficial security research and malicious hacking. And, due to the steady stream of breaches, there is constant pressure on policymakers to expand these laws even further.

 

I believe the issues currently facing security researchers also have broader societal implications that will grow in importance. Modern life is teeming with computers, but the future will be even more digitized. The laws governing our interactions with computers and software will increasingly control our interactions with everyday objects – including those we supposedly own – potentially chilling cybersecurity research, repair, and innovation when these activities should be broadly encouraged. We, collectively, will need greater freedom to oversee, modify, and secure the code around us than the law presently affords.

 

That is a major reason why the opportunity to lead Rapid7's public policy activities held a lot of appeal for me. I strongly support Rapid7's mission of making digital products and services safer for all users. In addition, it helped that I got to know Rapid7's leadership team years before joining. I first met Corey Thomas, Lee Weiner, and Jen Ellis while working on "Aaron's Law" for Rep. Zoe Lofgren in the US House of Representatives. After working for Rep. Lofgren, I was Senior Counsel and Advocacy Director at the Center for Democracy & Technology (CDT), where I again collaborated with Rapid7 on cybersecurity legislation. I've been consistently impressed by the team's overall effectiveness and dedication.

 

Now that I'm part of the team, I look forward to working with all of you to modernize how the law approaches security research and cybersecurity. Please let me know if you have ideas for collaboration or opportunities to spread our message. Thank you!

 

Harley Geiger

Director of Public Policy

Rapid7

@HarleyGeiger
