Verizon has released the 2016 edition of their annual Data Breach Investigations Report (DBIR). Their crack team of researchers has, once again, produced one of the most respected, data-driven reports in cyber security, sifting through submissions from 67 contributors and taking a deep dive into 64,000+ incidents—and nearly 2,300 breaches—to help provide insight on what our adversaries are up to and how successful they've been.

 

The DBIR is a highly anticipated research project and has valuable information for many groups. Policy makers use it to defend legislation; pundits and media use it to crank out scary articles; other researchers and academics take the insights in the report and identify new avenues to explore; and vendors quickly identify product and services areas that are aligned with the major findings. Yet the data in the report is of paramount import to defenders. With over 80 pages to wade through, we thought it might be helpful to provide some waypoints that you can use to navigate this year's breach and incident map.

 

Bigger is…Better?

 

There are a couple "gotchas" with data submitted to the DBIR team. The first is that a big chunk of data comes from the U.S. public sector where there are mandatory reporting laws, regulations, and requirements. The second is the YUGE number of Unknowns. The DBIR acknowledges this, and it's still valuable to look at the data when there are "knowns" even with this grey (okay, ours is green below) blob of uncertainty in the mix. You can easily find your industry in DBIR Tables 1 & 2 (pages 3 & 4) and if we pivot on that data we can see the distribution of the percentage of incidents that are breaches:

 

2016-verizon-data-breach-report-fig-1-1.png

We've removed the "Public (92)" industry from this set to get a better sense of what's happening across general industries. For the DBIR, there were more submissions of incidents with confirmed data disclosure for smaller organizations than large (i.e. be careful out there SMBs), but there's also a big pile of Unknowns:

 

2016-verizon-data-breach-report-fig-2-1.png

We can also take another, discrete view of this by industry:

 

2016-verizon-data-breach-report-fig-3-1.png

 

(Of note: it seems even the Verizon Data Breach Report has "Unknown Unknowns")

 

As defenders, you should be reading the report with an eye for your industry, size, and other characteristics to help build up your threat profiles and benchmark your security program. Take your incident-to-breach ratio (you are using VERIS to record and track everything from anti-virus hits to full-on breaches, right?) and compare it to the corresponding industry/size; a quick sketch of that comparison follows.
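As a quick illustration (every count below is hypothetical, not taken from the report), the comparison itself is only a few lines once you have your numbers:

```python
# Sketch: compare your incident-to-breach ratio against an industry baseline
# read off DBIR Tables 1 & 2. All counts here are invented for illustration.
my_incidents = 1557        # everything you logged, from AV hits to full-on breaches
my_breaches = 4            # incidents with confirmed data disclosure

industry_incidents = 2707  # hypothetical figures for your industry/size band
industry_breaches = 112

mine = my_breaches / my_incidents
baseline = industry_breaches / industry_incidents
print(f"our ratio: {mine:.2%} vs industry: {baseline:.2%}")
if mine > baseline:
    print("more incidents converting to breaches than peers: dig into detection/response")
```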

 

The Single Most Valuable Chart In The World! (for defenders)

 

When it comes right down to it, you're usually fighting an economic battle with your adversaries. This year's report, Figure 3 (page 7) shows that the motivations are still primarily financial and that Hacking, Malware and Social are the weapons of choice for attackers. We'll dive into that in a bit, but we need to introduce our take on DBIR Figure 8 (page 10) before continuing:

 

2016-verizon-data-breach-report-fig-4-1.png

We smoothed out the rough edges of the 2016 Verizon Data Breach Report figure to paint a somewhat clearer picture of the overall trends, and used a complex statistical transformation (i.e. subtraction) to focus on just the smoothed gap:

 

2016-verizon-data-breach-report-fig-5-1.png
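If you want to reproduce that smoothing-and-subtraction step at home, here's a minimal sketch (the series values below are invented placeholders, not the DBIR's):

```python
# Sketch: smooth the "time to compromise" and "time to discover" series with a
# centered moving average, then subtract to get the detection deficit.
import numpy as np

years = np.arange(2008, 2016)
compromise = np.array([67, 62, 61, 75, 84, 78, 90, 84], dtype=float)  # % in days or less (invented)
discovery  = np.array([55, 38, 40, 55, 62, 40, 30, 25], dtype=float)  # % in days or less (invented)

def smooth(y, k=3):
    """Centered moving average of width k; the edges keep their raw values."""
    out = y.copy()
    pad = k // 2
    for i in range(pad, len(y) - pad):
        out[i] = y[i - pad:i + pad + 1].mean()
    return out

deficit = smooth(compromise) - smooth(discovery)  # the "complex transformation"
for yr, d in zip(years, deficit):
    print(yr, f"{d:+.1f}")
```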

 

Remember, the DBIR data is a biased sample from the overall population of cyber security incidents and breaches that occur and every statistical transformation introduces more uncertainty along the way. That means your takeaway from "Part Deux" should be "we're not getting any better" vs "THE DETECTION DEFICIT TOPPED 75% FOR THE FIRST TIME IN HISTORY!"

 

So, our adversaries are accomplishing their goals in days or less at an ever-quickening success rate while defenders are just not keeping up at all. Before we can understand what we need to do to reverse these trends, we need to see what the attackers are doing. We took the data from DBIR Figure 6 (page 9) and pulled out the top threat actions for each year, then filtered the result to the areas that match both the major threat action categories and the areas of concern that Rapid7 customers have a keen focus on:

 

2016-verizon-data-breach-report-fig-6-1.png

Some key takeaways:

  • Malware and hacking events that drop C2 (command and control) implants are up
  • Keyloggers are making a comeback (this may be an artifact of the heavy influence of Dridex in the DBIR data set this year)
  • Malware-based exfiltration is back to previously seen levels
  • Phishing is pretty much holding steady, which is most likely supporting the use of compromised credentials (which is trending up)

 

Endpoint monitoring, kicking up your awareness programs, and watching out for wonky user account behavior would be wise things to prioritize based on this data.

 

Not all Cut-and-Dridex

The Verizon Data Breach Report mentions Dridex 13 times, and the team was very up front about the bias it introduced into the report. So, how can you interpret the data with "DrideRx" prescription lenses? Rapid7's Analytic Response Team notes that Dridex campaigns involve:

 

  • Phishing
  • Endpoint malware drops
  • Establishment of command and control (C2) on the endpoint
  • Harvesting credentials and shipping them back to the C2 servers

 

This means that, at a minimum, Dridex shaped the data behind Data Breach Investigations Report Figures 6-8 & 15-22 and thus the overall findings, and Verizon itself warns about broad interpretations of the Web App Attacks category:

 

"Hundreds of breaches involving social attacks on customers, followed by the Dridex malware and subsequent use of credentials captured by keyloggers, dominate the actions."

 

So, when interpreting the results, keep an eye out for the above components and factor in the Dridex component before tweaking your security program too much in one direction or another.

 

Who has your back?

 

When reading any report, one should always check to make sure the data presented doesn't conflict with itself. One way to validate the detection deficit above is to look at DBIR Figure 9 (page 11), which shows (when known) how breaches were discovered over time. We can simplify this view as well:

 

2016-verizon-data-breach-report-fig-7-1.png

In the significant majority of cases, defenders have law enforcement agencies (like the FBI in the United States) and other external parties to "thank" for letting them know they've been pwnd. As our figure shows, we stopped being able to watch our own backs half a decade ago and have yet to recover. This should be a wake-up call to defenders to focus on identifying how attackers are getting into their organizations and instrumenting better ways to detect their actions.

 

Are you:

 

  • Identifying critical assets and access points?
  • Monitoring the right things (or anything) on your endpoints?
  • Getting the right logs into the right places for analysis and action?
  • Deploying honeypots to catch activity that should not be happening? (A minimal sketch of the idea follows this list.)
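For that last item, here's a minimal sketch of the core idea, assuming an arbitrary unused port: nothing legitimate should ever connect to it, so any hit is a high-signal alert.

```python
# Minimal honeypot sketch: listen on a port nothing should touch and log
# every connection attempt. Port 4450 is arbitrary; pick anything unused.
import socket
from datetime import datetime

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("0.0.0.0", 4450))
    s.listen()
    while True:
        conn, addr = s.accept()
        # Any connection here is suspicious by definition -- alert on it.
        print(f"{datetime.now().isoformat()} ALERT: connection from {addr[0]}:{addr[1]}")
        conn.close()
```

A real deployment would feed these events into your SIEM rather than stdout, but the principle is the same: zero legitimate traffic means zero false positives.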

 

If not, these may be things you need to re-prioritize in order to force the attackers to invest more time and resources to accomplish their goals (remember, this is a battle of economics).

 

Are You Feeling Vulnerable?

 

Attackers are continuing to use stolen credentials at an alarming rate and they obtain these credentials through both social engineering and the exploitation of vulnerabilities. Similarly, lateral movement within an organization also relies—in part—on exploiting vulnerabilities. DBIR Figure 13 (page 16) shows that as a group, defenders are staying on top of current and year-minus-one vulnerabilities fairly well:

 

dbirfig13.png

We're still having issues patching or mitigating older vulnerabilities, many of which have tried-and-true exploits that will work juuuust fine. Leaving these attack points exposed is not helping your economic battle with your adversaries, as letting them rely on past R&D means they have more time and opportunity. How can you get the upper-hand?

 

  • Maintain situational awareness when it comes to vulnerabilities (i.e. scan with a plan)
  • Develop a patching strategy with a holistic focus rather than just reacting to "Patch Tuesday" (a prioritization sketch follows this list)
  • Don't dismiss mitigation. There are legitimate technical and logistical reasons that can make patching difficult. Work on developing a playbook of mitigation strategies you can rely on when these types of vulnerabilities arise.
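Here's one way to make "scan with a plan" concrete: a toy prioritization that weights CVSS by public exploit availability and age, so old, weaponized bugs can't hide behind this month's patches. The weights below are invented; tune them to your environment.

```python
# Toy prioritization sketch: rank open vulnerabilities by severity, public
# exploit availability, and age. Weights are arbitrary illustrations.
from datetime import date

vulns = [  # (cve, cvss, published, public_exploit) -- sample data
    ("CVE-2008-4250", 10.0, date(2008, 10, 23), True),
    ("CVE-2015-1635", 10.0, date(2015, 4, 14), True),
    ("CVE-2016-0099", 7.2, date(2016, 3, 8), False),
]

def priority(v):
    cve, cvss, published, exploit = v
    age_years = (date(2016, 4, 1) - published).days / 365
    # Public exploit code multiplies urgency; age adds a capped boost.
    return cvss * (1.5 if exploit else 1.0) + min(age_years, 5)

for v in sorted(vulns, key=priority, reverse=True):
    print(f"{v[0]}: priority {priority(v):.1f}")
```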

 

"Threat intelligence" was a noticeably absent topic in the 2016 DBIR, but we feel that it can play a key role when it comes to defending your organization when vulnerabilities are present. Your vuln management, server/app management, and security operations teams should be working in tandem to know where vulnerabilities still exist and to monitor and block malicious activity that is associated with targets that are still vulnerable. This is one of the best ways to utilize all those threat intel feeds you have gathering dust in your SIEM.

 

There and Back Again

 

This post outlined just a few of the interesting markers on your path through the Verizon Data Breach Report. Keep a watchful eye on the Rapid7 Community for more insight into other critical areas of the report and where we can help you address the key issues facing your organization.


(Many thanks to Rapid7's Roy Hodgman and Rebekah Brown for their contributions to this post.)

This is a guest post from our frequent contributor Kevin Beaver. You can read all of his previous guest posts here.

 

I'm often asked by friends and colleagues: Why do I have to change my password every 30 or 60 days? My response is always the same: Odds are good that it’s because that's the way that it's always been done. Or, these people might have a super strict IT manager who likes to show - on paper - that his or her environment is "locked down." Occasionally I will get feedback that auditors require such stringent settings. The funny thing is, there's never really a good business reason behind such short-term password changes.

 

In fact, if you dig in further, in many cases there are numerous other issues that are a much higher risk than passwords that are not changed often. I often see weak password requirements – i.e. complexity not being enforced or 6-character minimum lengths. I often see this combined with super weak endpoint security such as minimal Windows patching, no third-party software patching, no full disk encryption, and network monitoring/alerting that is reactive at best.

So, why is it that we go with the 30, 60, or 90-day password change requirements? I don't think it's malicious, but I do believe that people just aren't taking the time to think about what they're doing. In fact, that's sort of the essence of many security challenges that businesses face today. People just aren't thinking about what they're actually doing. They're going through the motions with their "policies" and they have these fancy technologies deployed but, in reality, the implementation of everything stinks. At the end of the day, management assumes that all is well because of all of the money and effort being spent on these issues (including those pesky password changes), and yet they still get hit with breaches and no one can figure out why.

 

I think many seasoned IT and security professionals would agree with me that quick turnarounds on password changes are actually bad for security. We always joke about how users will write down their passwords on sticky notes – and it's true! But it goes deeper than the humor. There's a strong political factor at the root of much of the password nonsense. Users don't want to have to create and remember long passwords.

 

After all, odds are they’ve never been taught/guided to use passphrases that are super simple to create and remember yet impossible to crack. Furthermore, management doesn't want to hear about it so IT doesn't press the issue. Thus the ignorant cycle of if we can't make them use strong passphrases, we can at least require quick password changes. The madness continues and it’s bad for business.
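The math backs this up. Here's a rough sketch of the entropy comparison, under the usual assumption that each element is chosen uniformly at random (real human-chosen passwords fare far worse):

```python
# Sketch: entropy of random passwords vs Diceware-style passphrases.
import math

def entropy_bits(pool_size, length):
    # length independent choices from a pool -> length * log2(pool) bits
    return length * math.log2(pool_size)

print(f"8-char password, 94 printable chars: {entropy_bits(94, 8):.0f} bits")
print(f"4-word passphrase, 7776-word list:   {entropy_bits(7776, 4):.0f} bits")
print(f"6-word passphrase, 7776-word list:   {entropy_bits(7776, 6):.0f} bits")
```

A six-word passphrase is both easier to remember than an 8-character jumble of symbols and vastly harder to brute force.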

 

Anytime you create complexity – in this case, requiring users to continually change their passwords whether or not they're suspected to have been compromised – you create more problems than you solve in most cases. There are always exceptions, and compensating controls such as intruder lockout, two-factor authentication, and proactive system monitoring can thwart most attacks on user accounts. It's time to look past the nonsense and capitalize on opportunities such as this to get people on our side rather than continue ticking them off.

This is a guest post by Ismail Guneydas. Ismail Guneydas is a senior technical leader with over ten years of experience in vulnerability management, digital forensics, e-crime investigations, and teaching. Currently he is a senior vulnerability manager at Kimberly-Clark and an adjunct faculty member at Texas A&M. He holds an M.S. in computer science and an MBA.

 

2015 is in the past, so now is as good a time as any to get some numbers together from the year that was and analyze them.  For this blog post, we're going to use the numbers from the National Vulnerability Database and take a look at what trends these numbers reveal.

 

Why the National Vulnerability Database (NVD)?  To paraphrase Wikipedia for a moment, it's a repository of vulnerability management data, assembled by the U.S. Government, represented using the Security Content Automation Protocol (SCAP). Most relevant to our exercise here, the NVD includes databases of security-related software flaws, misconfigurations, product names, impact metrics—amongst other data fields.

By poring over the NVD data from the last 5 years, we're looking to answer the following questions (a rough sketch of how to pull these numbers yourself follows the list):

  • What are the vulnerability trends of the last 5 years, and do vulnerability numbers indicate anything specific?
  • What are the severities of vulnerabilities? Do we have more critical vulnerabilities or fewer?
  • Which vendors create the most vulnerable products?
  • Which products are most vulnerable?
    • Which OS? Windows, OSX, a Linux distro?
    • Which mobile OS? iOS, Android, Windows?
    • Which web browser? Safari, Internet Explorer, Firefox?
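Here's that sketch. It assumes the NVD JSON data feeds and their published layout (one gzipped file per year with a top-level CVE_Items array); adjust the URL pattern and keys if NVD reorganizes the feeds.

```python
# Sketch: count CVEs per year and bucket CVSS v2 base scores from NVD feeds.
import collections, gzip, json, urllib.request

counts = collections.Counter()
severity = collections.Counter()
for year in range(2011, 2016):
    url = f"https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-{year}.json.gz"
    with urllib.request.urlopen(url) as resp:
        data = json.loads(gzip.decompress(resp.read()))
    for item in data["CVE_Items"]:
        counts[year] += 1
        score = (item.get("impact", {}).get("baseMetricV2", {})
                 .get("cvssV2", {}).get("baseScore"))
        if score is not None:
            severity[int(score)] += 1

print("per year:", dict(counts))
print("per CVSS bucket:", dict(sorted(severity.items())))
```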

 

Vulnerabilities Per Year

VulnerabilitiesPerYear.jpg

 

That is correct! Believe it or not, there was a 20% drop in the number of vulnerabilities in 2015 compared to 2014. However, if you look at the overall trending growth of the last 5 years, the 2015 number seems consistent with the overall growth rate. The abnormality here was the 53% increase in 2014. If we compare 2015's numbers with 2013's, we see a 24% increase.

 

All in all, though, this doesn't mean 2015 wasn't an especially bad year like 2014 (the trend shows us we will have more vulnerabilities in the next few years as well). That's because when we look closely at the critical vulnerabilities, we see something interesting: there were more critical vulnerabilities in 2015 than in 2014. In 2014 we had more vulnerabilities with CVSS scores of 4, 5, and 6; however, 2015 had more vulnerabilities with CVSS scores of 7, 8, 9, and 10!

 

Vulnerability-Distro-By-CVSS-Score.jpg

As you see above, there were 3376 critical vulnerabilities in 2015, whereas there were only 2887 critical vulnerabilities in 2014. (That is a 17% increase.)

 

In other words, the proportion of critical vulnerabilities is increasing overall. That means we need to pay close attention to our vulnerability management programs and make sure they are effective (fewer false positives and negatives), up to date with recent vulnerabilities, and faster, with shorter scan times.

 

Severity of Vulnerabilities

This chart shows the distribution of 2015 vulnerabilities by CVSS score. As (hopefully) most of you know, 10 is the highest/most critical level, whereas 1 is the least critical.

CVSS-Score-.jpg

 

There are many vulnerabilities with CVSS scores of 9 and 10. The following graph gives a clearer picture:

 

Weighted-Average.jpg

This means 36% of the vulnerabilities were critical (CVSS >= 7). The average CVSS score is 6.8, right at the boundary of criticality.

 

The severity of vulns is increasing, but this isn't to say it's all bad. In fact, it really exposes a crucial point: you have to be deploying a vulnerability management program that separates the wheat from the chaff. An effective vulnerability management program will help you find and then remediate vulnerabilities in your environment.

 

Vulnerability Numbers Per Vendor

Let's analyze the National Vulnerability Database numbers by checking vendors' vulnerabilities. The shifting tide in vulnerabilities doesn't stop for any company, including Apple. The fact is there are always vulnerabilities; the key is detecting them before they are exploited.

 

Apple had the highest number of vulnerabilities in 2015. Of course, with many iOS and OSX vulnerabilities out there in general, it's no surprise this number went up.

 

Here is the full list:

mostvulnerablevendors.jpg

 

Apple jumped from being number 5 in 2014. Microsoft was number 3 and Cisco was number 4. Surprisingly, Oracle (owner of Java) did well this year and took 4th place (they were number 2 last year). Congratulations (?) to Canonical and Novell, as they were not in the top 10 list last year (they were 13th and 15th). So in terms of prioritization, with Apple making a big jump last year, if you have a lot of iOS in your environment, it's definitely time to make sure you've prioritized those assets accordingly.

 

Here's a comparison chart showing the number of vulnerabilities per vendor for 2014 and 2015.

2014vs2015-Vendors.jpg

 

Vulnerabilities Per OS

In 2015, according to the NVD, OSX had the most vulnerabilities, followed by Windows 2012 and Ubuntu Linux.

MostVulnerableOS.jpg

 

Here, the most vulnerable Linux distro is Ubuntu, with openSUSE the runner-up, followed by Debian. Interestingly, Windows 7, the most popular desktop operating system based on usage, is reported to be less vulnerable than Ubuntu. (That may surprise a few people!)

 

Vulnerabilities Per Mobile OS

Picture12.png

 

iPhone OS had the highest number of vulnerabilities published in 2015, with Windows and Android following. 2014 was no different: iPhone OS had the highest number of vulnerabilities, followed by Windows RT and Android.

 

Vulnerabilities Per Application

Picture10.png

Vulnerabilities Per Browser

Picture1.png

IE had the highest number of vulnerabilities in 2015. In 2014, the order of products with the highest number of vulnerabilities was exactly the same (IE, Chrome, Firefox, Safari).

 

Summary

Given the trends over the past few years reported via the NVD, we should expect more vulnerabilities with higher CVSS scores to be published this year. Moreover, I predict that mobile OSes will be a hot area for security: as more mobile security professionals find and report mobile OS vulnerabilities, we'll see an increase in mobile OS vulnerabilities as well.

 

It’s all about priorities. We only have so many hours in the day and resources available to us to remediate what we can. But if you take intel from something like the NVD and layer it over the visibility you have into your own environment, you can use this information to build a to-do list driven by priorities, not fear.

The FBI this week posted an alert showing that "business email compromise" wire transfer scams bled $2.3 billion from businesses between October 2013 and February 2016. A couple of news outlets picked this up, including Brian Krebs.

 

When I was the head of security at a multi-national corporation, this was an issue that came up regularly. There were instances of very aggressive behavior, such as someone calling the call center pretending to be the CEO of one of the countries we operated in and demanding a $1 million transfer. That was a very bold and very obvious fraud that the call center was able to handle. However, very often these requests came through email, just like the FBI reported.

 

When this happens, the scammer normally uses either a forged email domain very similar to the corporate one, i.e. rnicrosoft.com vs microsoft.com (look closely; if your user's mail client doesn't render addresses in a fixed-width font, they might be tricked into seeing the domain as legitimate), or a subdomain that looks very similar, i.e. yourcom.panyname.com. Then the header is simply forged. In simple mail clients, like Gmail, you have to take extra steps to see the actual sender domain.
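Catching these lookalikes programmatically is mostly a matter of folding common homoglyph tricks back to their targets and fuzzy-matching. A minimal sketch (the homoglyph table and threshold are illustrative, not exhaustive):

```python
# Sketch: flag sender domains that look like, but are not, the legitimate one.
import difflib

LEGIT = "microsoft.com"
HOMOGLYPHS = {"rn": "m", "vv": "w", "0": "o", "1": "l"}  # partial list

def normalize(domain):
    d = domain.lower()
    for fake, real in HOMOGLYPHS.items():
        d = d.replace(fake, real)
    return d

def suspicious(sender_domain):
    if sender_domain == LEGIT:
        return False
    ratio = difflib.SequenceMatcher(None, normalize(sender_domain), LEGIT).ratio()
    return ratio > 0.85  # near-identical after folding -> likely spoof

for d in ["microsoft.com", "rnicrosoft.com", "micros0ft.com", "example.org"]:
    print(d, "SUSPICIOUS" if suspicious(d) else "ok")
```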

 

 

The emails are usually pretty short and lacking detail, such as:

 

“I need you to immediately produce a wire transfer for $13,000 and sent to the bank listed. I will follow up with you later.

 

Regards,

CEO NAME”

 

And you might have a pdf attachment with banking details. Oddly enough, the PDFs I encountered were never malicious. They had legitimate account details so the wire transfers could be received.

 

Now you might think this is too simple and shouldn’t work. But obviously, it does, to the tune of $2.3 billion. You might ask yourself why, and if you aren’t, I’ll ask it for you. Self, why does this work?

 

Well, consider that you might have a multibillion dollar corporation located in many countries. If you do business in certain countries, wire transfers are the norm, so wire transfers become part of a normal process for that company. And for a company that posts $4.3 billion in revenue, a request for $13,000, or even as much as $75,000, doesn't even merit a blink.

 

Scammers do a little recon, ask for an amount that is small to the company, and it gets processed. Little risk, high reward.

How would you protect against this?

 

The simplest method is verification of the request. The FBI suggests that a telephone call be placed to verify the request, which is a good practice. They also suggest two-factor authentication for email, and limiting social media activity, as scammers will do reconnaissance and determine whether CEOs are traveling.

 

Krebs points out that some experts rely on technological controls such as DKIM and SPF. While these are things we recommend in our consultancy, they are complex for low maturity organizations and do require some effort and support. At the end of the day, they don’t actually solve the problem, because we are socially engineering human beings.
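Checking whether a domain even publishes an SPF record is easy to automate. A minimal sketch using the third-party dnspython package (assumed installed via pip install dnspython):

```python
# Sketch: look up a domain's TXT records and return its SPF policy, if any.
import dns.resolver

def spf_record(domain):
    try:
        for rr in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rr.strings).decode()
            if txt.startswith("v=spf1"):
                return txt
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        pass
    return None

print(spf_record("example.com") or "no SPF record published")
```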

 

While all of these technology controls are good, we are dealing with humans. The best way to prevent this fraud from occurring is creating simple business processes that are enforced. In security terms, we would call this segregation of duties.

 

The simplest security

 

Simply put, segregation of duties says that no one person or one role should be allowed to execute a business process from start to finish. In the case of wire transfer fraud, for example, one person/role should not be able to create the wire transfer, approve it and execute it. Dividing these duties between two or more persons/roles means more eyes on the situation, and a potential to catch the fraud. A simple process map might look like:

[Process map: Role A creates the wire transfer request → Role B independently verifies and approves → the approved transfer is executed]

 

 

Ensure that Role A and Role B have proper documentation (evidence) for each step of the request and approval, and you now have a specific security control that easily integrates into a business process. The key to enforcement: making sure every single request follows the chain every single time. No exceptions.

 

Now let me tell you about the one that almost made it.

 

There was one instance I dealt with which was one mouse click away from being executed.

 

An email (very similar to the example above) was sent to a director of finance, purportedly from the CEO. The director was busy that day, and filed the email away for processing later. By 4:55 pm or so, she realized she had not acted on the request. As it was almost end of day, and wire transfers are not processed by most banks after banking hours, she hurriedly forwarded the email to the wire transfer processor, marked with urgency, and made a call to ensure it was processed immediately. By the time it was picked up and put into the process, banks were closed. So they agreed it would execute first thing the next morning.

 

That evening, a series of emails went back and forth between the approver, who was a simple finance analyst who held very firm to the process, and the requester. Though it had urgency, and people were shouting that it was a request from the CEO, the process prevailed.

 

All this time, no one thought to actually verify the request; verification was not part of the process at that time. But because the approver was uncooperative with the request, it was escalated to the CFO (the CEO was traveling). The CFO suspected it was fraudulent and contacted me. We determined almost immediately it was fake, just by looking at email headers. There were other indicators too.

 

I immediately praised everyone involved, and bought them gifts for sticking to the process. The director might have felt ashamed, but I went to her as well and explained that these scams are successful because they count on stress and distraction to occur. These are normal human behaviors, and they sometimes cause us to act erratically. But because we had a firm process that was adhered to, all we lost was time.

 

There’s actually much more to this story, but I’ll save that for future posts.

 

Regardless of your organization's size or structure, you too can put this in place. If you are unsure whether these processes exist, start asking around. Begin with your controllers or comptrollers, or anyone in finance. Ask if you have a process for wire transfers, and if so, what the process is. Get involved; understand how your business does business. This will benefit you in many ways.

 

Other things you can do:

 

  • Join InfraGard, the alliance between the FBI and the civilian sector, which will get you in-depth resources and information. You can also report fraud to the IC3, the Internet Crime Complaint Center.
  • Ensure you have a separation of duties policy that is enforced.
  • Periodically train and update awareness of these issues with the people involved.

 

All these are free, requiring only a time investment, and will go a long way toward avoiding the kind of wire transfer fraud scam the FBI is warning about.

Today is Badlock Day

badlock-not-really.JPG

You may recall that the folks over at badlock.org stated about 20 days ago that April 12 would see patches for "Badlock," a serious vulnerability in the SMB/CIFS protocol that affects both Microsoft Windows and any server running Samba, an open source workalike for SMB/CIFS services. We talked about it back in our Getting Ahead of Badlock post, and hopefully, IT administrators have taken advantage of the pre-release warning to clear their schedules for today's patching activities.

 

For Microsoft shops, this should have been straightforward, since today is also Microsoft Patch Tuesday. Applying critical Microsoft patches is, after all, a pretty predictable event.

 

For administrators of servers that run other operating systems that also happen to offer Samba, we've all had a rough couple of years of (usually) coordinated disclosures and updates around core system libraries, so this event can piggyback on those established procedures.

 

How worried should I be?

While we do recommend you roll out the patches as soon as possible - as we generally do for everything - we don't think Badlock is the Bug To End All Bugs[TM]. In reality, an attacker has to already be in a position to do harm in order to use this, and if they are, there are probably other, worse (or better depending on your point of view) attacks they may leverage.

 

Badlock describes a Man-in-the-Middle (MitM) vulnerability affecting both Samba's implementation of SMB/CIFS (as CVE-2016-2118) and Microsoft's (as CVE-2016-0128). This is NOT a straightforward remote code execution (RCE) vulnerability, so it is unlike MS08-067 or any of the historical RCE issues against SMB/CIFS. More details about Badlock and the related issues can be found over at badlock.org.

 

The most likely attack scenario is an internal user who is in the position of intercepting and modifying network traffic in transit to gain privileges equivalent to the intercepted user. While some SMB/CIFS servers exist on the Internet, this is generally considered poor practice, and should be avoided anyway.

 

What's next?

For Samba administrators, the easy advice is to just patch up now. If you're absolutely sure you're not offering CIFS/SMB over the Internet with Samba, check again (a quick check is sketched below). Unintentionally exposed services are the bane of IT security, after all, given the porous nature of network perimeters.
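A quick, minimal way to spot-check for exposed SMB from outside your perimeter (the host list is hypothetical; only scan systems you own):

```python
# Sketch: check whether hosts accept connections on TCP 445 (SMB/CIFS).
import socket

def smb_open(host, timeout=2):
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

for host in ["192.0.2.10", "192.0.2.11"]:
    print(host, "SMB exposed!" if smb_open(host) else "closed/filtered")
```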

 

While you're checking, go ahead and patch, since both private and public exploits will surface eventually. You can bet that exploit developers around the world are poring over the Samba patches now. In fact, you can track public progress over at the Metasploit Pull Request queue, but please keep your comments technically relevant and helpful if you care to pitch in.

 

For Microsoft Windows administrators, Badlock is apparently fixed in MS16-047. While Microsoft merely rates this as "Important," there are plenty of other critically rated issues released today, so IT organizations are advised to use their already-negotiated change windows to test and apply this latest round of patches.

 

Rapid7 will be publishing both Metasploit exploits and Nexpose checks just as soon as we can, and this post will be updated when those are available. These should help IT security practitioners to identify their organizations' threat exposure on both systems that are routinely kept up to date, as well as those systems that are IT's responsibility but are, for whatever reason, outside of IT's direct control.

 

Are any Rapid7 products affected?

No Rapid7 products are affected by this vulnerability.

Maybe I’m being cynical, but I feel like that may well be the thought that a lot of people have when they hear about two surveys posted online this week to investigate perspectives on vulnerability disclosure and handling. Yet despite my natural cynicism, I believe these surveys are a valuable and important step towards understanding the real status quo around vulnerability disclosure and handling so the actions taken to drive adoption of best practices will be more likely to have impact.

 

Hopefully this blog will explain why I feel this way. Before we get into it, here are the surveys:

 

A little bit of background…

 

In March 2015, the National Telecommunications and Information Administration (NTIA) issued a request for comment to “identify substantive cybersecurity issues that affect the digital ecosystem and digital economic growth where broad consensus, coordinated action, and the development of best practices could substantially improve security for organizations and consumers.” Based on the responses they received, they then announced that they were convening a “multistakeholder process concerning collaboration between security researchers and software and system developers and owners to address security vulnerability disclosure.”

 

This announcement was met by the deafening sound of groaning from the security community, many of whom have already participated in countless multistakeholder processes on this topic. The debate around vulnerability disclosure and handling is not new, and it has a tendency to veer towards the religious, with security researchers on one side, and technology providers on the other. Despite this, there have been a number of good faith efforts to develop best practices so researchers and technology providers can work more productively together, reducing the risk on both sides, as well as for end-users. This work has even resulted in two ISO standards (ISO 29147 & ISO 30111) providing vulnerability disclosure and handling best practices for technology providers and operators. So why did the NTIA receive comments proposing this topic?  And of all the things proposed, why did they pick this as their first topic?

 

In my opinion, it’s for two main, connected reasons.

 

Firstly, despite all the phenomenal work that has gone into developing best practices for vulnerability disclosure and handling, adoption of these practices is still very limited. Rapid7 conducts quite a lot of vulnerability disclosures, either for our own researchers, or on occasion for researchers in the Metasploit community that don’t want to deal with the hassle.  Anecdotally, we reckon we receive a response to these disclosures maybe 20% of the time. The rest of the time, it’s crickets. In fact, at the first meeting of the NTIA process in Berkeley, Art Manion of the CERT Coordination Center commented that they’ve taken to sending registered snail mail as it’s the only way they can be sure a disclosure has been received.  It was hard to tell if that’s a joke or true facts.

 

So adoption still seems to be a challenge, and maybe some people (like me) hope this process can help. Of course, the efforts that went before tried to drive adoption, so why should this one be any different?

 

This brings me to the second of my reasons for this project, namely that the times have changed, and with them the context. In the past five years, we’ve seen a staggering number of breaches reported in the news; we’ve seen high-profile branded vulnerability disclosures dominate headlines and put security on the executive team’s radar. We’ve seen bug bounties starting to be adopted by the more security-minded companies. And importantly, we’ve seen the Government start to pay attention to security research – we’ve seen that in the DMCA exemption recently approved, the FDA post-market guidance being proposed, the FTC’s presence at DEF CON, the Department of Defense’s bug bounty, and of course, in the very fact that the NTIA picked this topic. None of these factors alone creates a turn of the tide, but combined, they just might provide an opportunity for us to take a step forward.

 

And that’s what we’re talking about here – steps. It’s important to remember that complex problems are almost never solved overnight. The work done in this NTIA process builds on work conducted before: for example the development of best practices; the disclosure of vulnerability research; efforts to address or fix those bugs; the adoption of bug bounties. All of these pieces make up a picture that reveals a gradual shift in the culture around vulnerability disclosure and handling. Our efforts, should they yield results, will also not be a panacea, but we hope they will pave the way for other steps forward in the future.

 

OK, but why do we need surveys?

 

As I said above, discussions around this tend to become a little heated, and there’s not always a lot of empathy between the two sides, which doesn’t make for great potential for finding resolution. A lot of this dialogue is fueled by assumptions.

My experience and resulting perspective on this topic stems from having worked on both sides of the fence – first as a reputation manager for tech companies (where my reaction to a vulnerability disclosure would have been to try to kill it with fire); and then more recently I have partnered with researchers to get the word out about vulnerabilities, or have coordinated Rapid7’s efforts to respond to major disclosures in the community. At different points I have responded with indignation on behalf of my tech company client, who I saw as being threatened by those Shady Researcher Types, and then later on behalf of my researcher friends, who I have seen threatened by those Evil Corporation Types. I say that somewhat tongue-in-cheek, but I do often hear that kind of dialogue coming from the different groups involved, and much worse besides. There are a lot of stereotypes and assumptions in this discussion, and I find they are rarely all that true.

 

I thought my experience gave me a pretty good handle on the debate and the various points of view I would encounter. I thought I knew the reality behind the hyperbolic discourse, yet I find I am still surprised by the things I hear.

 

For example, it turns out a lot of technology providers (both big and small) don’t think of themselves as such and so they are in the “don’t know what they don’t know” bucket. It also turns out a lot of technology operators are terrified of being extorted by researchers. I’ve been told that a few times, but had initially dismissed it as hyperbole, until an incredibly stressed security professional working at a non-profit and trying to figure out how to interpret an inbound from a researcher contacted me asking for help. When I looked at the communication from the researcher, I could absolutely understand his concern.

 

On the researcher side, I’ve been saddened by the number of people that tell me they don’t want to disclose findings because they’re afraid of legal threats from the vendor. Yet more have told me they see no point in disclosing to vendors because they never respond.  As I said above, we can relate to that point of view! At the same time, we recently disclosed a vulnerability to Xfinity, and missed disclosing through their preferred reporting route (we disclosed to Xfinity addresses, and their recommendation is to use abuse@comcast.net).  When we went public, they pointed this out, and were actually very responsive and engaged regarding the disclosure. We realized that we’ve become so used to a lack of response from vendors that we stopped pushing ourselves to do everything we can to get one. If we care about reaching the right outcome to improve security – and we do – we can’t allow ourselves to become defeatist.

 

My point here is that assumptions may be based on past experience, but that doesn’t mean they are always correct, or even still correct in the current context. Assumptions, particularly erroneous ones, undermine our ability to understand the heart of the problem, which reduces our chances of proposing solutions that will work. Assumptions and stereotypes are also clear signs of a lack of empathy. How will we ever achieve any kind of productive collaboration, compromise, or cultural evolution if we aren’t able or willing to empathize with each other? I rarely find that anyone is motivated by purely nefarious motives, and understanding what actually does motivate them and why is the key to informing and influencing behavior to effect positive change. Even if in some instances it means that it’s your own behavior that might change :)

 

So, about those surveys…

 

The group that developed the surveys – the Awareness and Adoption Group participating in the NTIA process (not NTIA itself) – is comprised of a mixture of security researchers, technology providers, civil liberties advocates, policy makers, and vulnerability disclosure veterans and participants. It’s a pretty mixed group and it’s unlikely we all have the same goals or priorities in participating, but I’ve been very impressed and grateful that everyone has made a real effort to listen to each other and understand each other’s points of view. Our goal with the surveys is to do that on a far bigger scale so we can really understand a lot more about how people think about this topic. Ideally we will see responses from technology providers and operators, and security researchers that would not normally participate in something like the NTIA process as they are the vast majority and we want to understand their (your?!) perspectives. We’re hoping you can help us defeat any assumptions we may have - the only hypothesis we hope to prove out here is that we don’t know everything and can still learn.

 

So please do take the survey that relates to you, and please do share them and encourage others to do likewise:

 

Thank you!

@infosecjen

Recently I transitioned from a Principal Consultant role into a new role at Rapid7, as Research Lead with a focus on IoT technology, and it has been a fascinating challenge. Although I have been conducting research for a number of years, covering everything from format string and buffer overflow research on Windows applications to exploring embedded appliances and hacking multifunction printers (MFPs), conducting research within the IoT world is truly exciting and has taught me to be even more open-minded.

 

That is, open-minded to the fact that there are people out there attaching technology to everything and anything. (Even toothbrushes.)

 

oralb-bluetooth.png

 

As a security consultant, over the last eight years I have focused most of my research on operational style attacks, which I have developed and used to compromise systems and data during penetration testing. An operational attack uses the operational features of a device against itself.

 

As an example, if you know how to ask nicely, MFPs will often give up Active Directory credentials, or as recent research has disclosed, network management systems openly consume SNMP data without questioning its content or where it came from.

 

IoT research is even cooler because now I get the chance to expand my experience into a number of new avenues. Historically I have prided myself on the ability to define risk around my research and communicate it well. With IoT, I initially shuddered at the question: “How do I define Risk?"

 

IoT Risk

In the past, it has been fairly simple to define and explain risk as it relates to operational style attacks within an enterprise environment, but with IoT technology I initially struggled with the concept of risk. This was mainly driven by the fact that most IoT technologies appear to be consumer-grade products. So if someone hacks my toothbrush they may learn how often I brush my teeth. What is the risk there, and how do I measure that risk?

 

The truth is, the deeper I head down this rabbit hole called IoT, the better my understanding of risk grows. A prime example of defining such risk was pointed out by Tod Beardsley in his blog “The Business Impact of Hacked Baby Monitors”. At first look, we might easily jump to the conclusion that there isn't any serious risk to an enterprise business. But on second take, if a malicious actor can use some innocuous IoT technology to gain a foothold in the home network of one of your employees, they could then potentially pivot onto the corporate network via remote access, such as a VPN. This is a valid risk that can be communicated and should be seriously considered.

 

IoT Research

To better define risk, we need to ensure our research involves all aspects of IoT technology. Often when researching and testing IoT, researchers can get a form of tunnel vision where they focus on the technology from a single point of reference, for example, the device itself.

 

While working on and discussing IoT technology with my peers at Rapid7, I have grown to appreciate the complexity of IoT and its ecosystem. Yes, ecosystem—this is where we consider the entire security picture of IoT, and not just one facet of the technology. This includes the three following categories and how each of them interacts with and impacts the others. We cannot test one without the others and consider that testing effective. We must test each one, and also test how they affect each other.

 

ecosystem.png

 

With IoT quickly becoming more than just consumer-grade products, we are starting to see more IoT-based technologies migrating into the enterprise environment. If we are ever going to build a secure IoT world, it is critical during our research that all aspects of the ecosystem are addressed.

 

The knowledge we learn from this research can help enterprises better cope with the new security risk, make better decisions on technology purchases, and help employees stay safe within their home environment—which leads to better security for our enterprises. Thorough research can also deliver valuable knowledge back to the vendors, making it possible to improve product security during the design, creation, and manufacturing of IoT technology, so new vendors and new products are not recreating the same issues over and over.

 

So, as we continue down the road of IoT research, let us focus our efforts on the entire ecosystem. That way we can assure that our efforts lead to a complete picture and culminate in security improvements within the IoT industry.

This is a guest post from our frequent contributor Kevin Beaver. You can read all of his previous guest posts here.

 

2016 marks the 15th year that I have been working for myself as an independent information security consultant. People who are interested in working for themselves often ask for my thoughts on what it takes to go out - and stay out - on your own. Early on, I thought it was about business cards and marketing slicks. In fact, I spent so much time, effort, and money on company tchotchkes that I'm confident I could have earned twice as much money in my first year alone had I focused on what was truly important. I soon found out that starting my information security consulting practice wasn't about "things". Instead, I saw the value of networking and surrounding myself with successful people – people from whom I could learn not only about information security but, more importantly, about what it takes to be successful in business.

 

In what ways does this apply to your career in IT and information security? Every way! If you look at the essence of what it takes to be successful in our field, it's not about being a master of the technical stuff. Anyone can learn those things. Sure, some are better than others, but at the end of the day, the technical challenges are not our real challenges. Instead, it's about being able to master emotional intelligence including, among other things, the relationships we have with people who are in a position to both help us and hurt us. The relationships you have with others have an enormous impact on how effective you can be in your job and how far you can go in your career.

 

You certainly don’t have to work for yourself to benefit from this. Whether you work for a large corporation, a small startup, a government agency or a nonprofit, think about who you currently know and who you should get to know that can have a positive influence on your IT/security career. It might be a current executive in your own organization. It might be a fellow IT pro, auditor, or entrepreneur you meet at a security conference. It might be the parent of your child’s friend who’s an attorney or a doctor. It might be someone else in the information security field who you could reach out to on LinkedIn to start having a dialog with. There are a lot of people – many of whom you probably haven’t thought about – who can help you out in tremendous ways. Not to make money off of, but to learn from and collaborate with. This leads me to an important point: whenever you are reaching out and meeting new people, make sure that you are also giving to the other person in some capacity. The last thing anyone wants is someone who uses the relationship and gives nothing in return.

 

Looking back, I should have spent the first few years of my business surrounding myself with people in and around IT, as well as those who were in a position to coach and mentor me into becoming a better business person. That would've created more opportunities for me early on than anything else. As recently as a few weeks ago, I interacted with a young salesman who was more concerned about whether I had a marketing brochure than about getting to know me and understanding how I might be able to help him with his information security needs (he was hoping to sell my services to his clients). This is a common approach to one’s career: have a beautiful marketing slick or website and they will come, and buy. If it were that simple, countless people would be super successful in every field. Instead, it takes persistence, year after year. Work on building and maintaining your relationships both inside and outside of your organization, as that’s what will help you succeed the most in your IT and security endeavors long-term.

The following issues affect ExaGrid storage devices running firmware prior to version 4.8 P26:

 

CVE-2016-1560: The web interface ships with default credentials of 'support:support'. This credential confers full control of the device, including running commands as root. In addition, SSH is enabled by default and remote root login is allowed with a default password of 'inflection'.

 

CVE-2016-1561: Two keys are listed in the root user's .ssh/authorized_keys file: one labeled "ExaGrid support key" and one "exagrid-manufacturing-key-20070604". A copy of the private key for the latter authorized key ships on the device in /usr/share/exagrid-keyring/ssh/manufacturing.

 

These issues have been rectified in firmware version 4.8 P26, available from the vendor.

 

Credit

Discovered by James @egyp7 Lee of Rapid7, Inc., and disclosed to the vendor and CERT per Rapid7's disclosure policy.

 

Product Description

ExaGrid provides a series of disk backup appliances based on Linux. The vendor's website states, "ExaGrid's appliances are deduplication storage targets for all industry leading backup applications." In addition, ExaGrid provides several hundred customer testimonials, demonstrating its popularity as a backup solution across several vertical markets.

 

Exploitation

Exploiting these issues requires nothing more than a standard SSH client for the two SSH issues, and a standard web browser for the default web interface credentials.

 

The SSH private key, which is common to every shipping device, is located on the device at /usr/share/exagrid-keyring/ssh/manufacturing, available to anyone who owns a device or anyone who can download and extract the firmware.

 

In order to facilitate detection of this exposure, the private key is provided below.

 

Fingerprints

MD5: 22:c8:a9:c3:01:a0:17:31:a5:43:f2:70:4a:1c:55:f6

SHA1: 1szdeYNwqO2Jom6rby+RTybD9cA

 

Public Key

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIBnZQ+6nhlPX/JnX5i5hXpljJ89bSnnrsSs51hSPuoJGmoKowBddISK7s10AIpO0xAWGcr8PUr2FOjEBbDHqlRxoXF0Ocms9xv3ql9EYUQ5+U+M6BymWhNTFPOs6gFHUl8Bw3t6c+SRKBpfRFB0yzBj9d093gSdfTAFoz+yLo4vRw==

 

Private Key

-----BEGIN RSA PRIVATE KEY-----
MIICWAIBAAKBgGdlD7qeGU9f8mdfmLmFemWMnz1tKeeuxKznWFI+6gkaagqjAF10
hIruzXQAik7TEBYZyvw9SvYU6MQFsMeqVHGhcXQ5yaz3G/eqX0RhRDn5T4zoHKZa
E1MU86zqAUdSXwHDe3pz5JEoGl9EUHTLMGP13T3eBJ19MAWjP7Iuji9HAgElAoGA
GSZrnBieX2pdjsQ55/AJA/HF3oJWTRysYWi0nmJUmm41eDV8oRxXl2qFAIqCgeBQ
BWA4SzGA77/ll3cBfKzkG1Q3OiVG/YJPOYLp7127zh337hhHZyzTiSjMPFVcanrg
AciYw3X0z2GP9ymWGOnIbOsucdhnbHPuSORASPOUOn0CQQC07Acq53rf3iQIkJ9Y
iYZd6xnZeZugaX51gQzKgN1QJ1y2sfTfLV6AwsPnieo7+vw2yk+Hl1i5uG9+XkTs
Ry45AkEAkk0MPL5YxqLKwH6wh2FHytr1jmENOkQu97k2TsuX0CzzDQApIY/eFkCj
QAgkI282MRsaTosxkYeG7ErsA5BJfwJAMOXYbHXp26PSYy4BjYzz4ggwf/dafmGz
ebQs+HXa8xGOreroPFFzfL8Eg8Ro0fDOi1lF7Ut/w330nrGxw1GCHQJAYtodBnLG
XLMvDHFG2AN1spPyBkGTUOH2OK2TZawoTmOPd3ymK28LriuskwxrceNb96qHZYCk
86DC8q8p2OTzYwJANXzRM0SGTqSDMnnid7PGlivaQqfpPOx8MiFR/cGr2dT1HD7y
x6f/85mMeTqamSxjTJqALHeKPYWyzeSnUrp+Eg==
-----END RSA PRIVATE KEY-----
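For administrators who want to verify their own appliances, here's a minimal detection sketch using the third-party paramiko library (assumed installed; the target address is hypothetical, and the key above is assumed saved to a local file). Successful authentication means the device still trusts the backdoor key.

```python
# Sketch: try root SSH auth with the leaked manufacturing key saved locally.
import paramiko

def has_backdoor_key(host, keyfile="exagrid-manufacturing-key.pem"):
    key = paramiko.RSAKey.from_private_key_file(keyfile)
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, username="root", pkey=key, timeout=5,
                       allow_agent=False, look_for_keys=False)
        return True   # key accepted: device is vulnerable
    except paramiko.AuthenticationException:
        return False  # key rejected: backdoor key has been removed
    finally:
        client.close()

print(has_backdoor_key("192.0.2.50"))
```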

 

Mitigations

Removing the two backdoor keys from /root/.ssh/authorized_keys and /root/.ssh/authorized_keys2 files and changing the root user's password will prevent exploitation of the first vulnerability.

 

As for the web UI exposure, it appears to be possible to change the password for the 'support' account through the web interface. However, this is likely to break software updates as the update process uses that account with a hard coded password.

 

Vendor Response

The vendor has fixed the reported vulnerabilities in firmware version 4.8 P26. Customers are urged to contact their support representative to acquire this firmware update.

 

"ExaGrid prides itself on meeting customer requirements," said Bill Andrews, CEO of ExaGrid. "Security is without question a top priority, and we take any such issues very seriously. When we were informed by Rapid7 of a potential security weakness, we addressed it immediately. We value Rapid7's involvement in identifying security risks since strong security will always be a key customer requirement."

 

Disclosure Timeline

This vulnerability advisory was prepared and released in accordance with Rapid7's disclosure policy.

 

  • Tue, Jan 26, 2016: Initial discovery by James Lee of Rapid7
  • Fri, Jan 29, 2016: Initial contact to vendor
  • Mon, Feb 01, 2016: Response from vendor and details disclosed
  • Tue, Feb 23, 2016: Disclosure to CERT
  • Tue, Mar 08, 2016: Vendor commits to a patch release in March
  • Thu, Mar 24, 2016: Vendor provides an updated firmware image
  • Thu, Apr 07, 2016: Public disclosure and Metasploit module published

A major area of focus in the current cybersecurity policy discussion is how growing adoption of encryption impacts law enforcement and national security, and whether new policies should be developed in response. This post briefly evaluates several potential outcomes of the debate, and provides Rapid7's current position on each.

 

Background

 

Rapid7 has great respect for the work of our law enforcement and intelligence agencies. As a cybersecurity company that constantly strives to protect our clients from cybercrime and industrial espionage, we appreciate law enforcement's role in deterring and prosecuting wrongdoers. We also recognize the critical need for effective technical tools to counter the serious and growing threats to our networks and personal devices. Encryption is one such tool.

 

Encryption is a fundamental means of protecting data from unauthorized access or use. Commerce, government, and individual internet users depend on strong security for our communications. For example, encryption helps prevent unauthorized parties from reading sensitive communications – like banking or health information – traveling over the internet. Another example: encryption underpins certificates that demonstrate authenticity (am I who I say I am?), so that we can have high confidence that a digital communication – such as a computer software security update – is coming from the right source and not a man-in-the-middle attacker. The growing adoption of encryption for features like these has made users much more safe than we would be without it. Rapid7 believes companies and technology innovators should be able to use the encryption protocols that best protect their customers and fit their service model – whether that protocol is end-to-end encryption or some other system.

 

However, we also recognize this increased data security creates a security trade-off. Law enforcement will at times encounter encryption that it cannot break by brute force and for which only the user – not the software vendor – has the key, and this will hinder lawful searches. The FBI's recently concluded efforts to access the cell phone belonging to deceased terrorist Syed Farook of San Bernardino, California, was a case study in this very issue. Although the prevalence of systems currently secured with end-to-end encryption with no other means of access should not be overstated, law enforcement search attempts may be thwarted more often as communications evolve to use unbreakable encryption with greater frequency. This prospect has tempted government agencies to seek novel ways around encryption. While we do not find fault with law enforcement agencies attempting to execute valid search or surveillance orders, several of the options under debate for circumventing encryption pose broad negative implications for cybersecurity.

 

Weakening encryption

 

One option under discussion is a legal requirement that companies weaken encryption by creating a means of "exceptional access" to software and communications services that government agencies can use to unlock encrypted data. This option could take two forms – one in which the government agencies hold the decryption keys (unmediated access), and one in which the software creator or another third party holds the decryption keys (mediated access). Both models would impose significant security risks for the underlying software or service by creating attack surfaces for bad actors, including cybercriminals and unfriendly international governments. For this reason, Rapid7 does not support a legal requirement for companies or developers to undermine encryption for facilitating government access to encrypted data.

 

The huge diversity of modern communications platforms and software architectures makes a one-size-fits-all encryption backdoor impossible to implement. Instead, to comply with a hypothetical mandate to weaken encryption, different companies would likely build different types of exceptional access. Some backdoors would be inherently more secure than others, depending on technical design, the resources available to defend the backdoor against insider and external threats, the attractiveness of the client data to bad actors, and other factors. The resulting environment would most likely be highly complex, vulnerable to misuse, and burdensome to businesses and innovators.

 

Rapid7 also shares concerns that requiring US companies to provide exceptional access to encrypted communications for US government agencies would lead to sustained pressure from many jurisdictions – both local and worldwide – for similar access. Companies or oversight bodies may face significant challenges in accurately tracking when, by whom, and under what circumstances client data is accessed – especially if governments have unmediated access to decryption keys. If US products are designed to be inherently insecure and "surveillance-ready," then US companies will face a considerable competitive disadvantage in international markets where more secure products are available.

 

Legal mandates to weaken encryption are unlikely to keep unbreakable encryption out of the hands of well-resourced criminals and terrorists. Open source software is commonly "forked," and it should be expected that developers will modify open source software to remove an encryption backdoor. Jurisdictions without an exceptional access requirement could still distribute closed source software with unbreakable encryption. As a result, the cybersecurity risks of weakened encryption are especially likely to fall on users who are not already security-conscious enough to seek out these workarounds.

 

Intentionally weakening encryption or other technical protections ultimately undermines the security of end users, businesses, and governments. That said, if companies or software creators voluntarily choose to build exceptional access mechanisms into their encryption, Rapid7 believes it is their right to do so. However, we would not recommend it, and we believe companies and creators should be as transparent as possible with their users about any such feature.

 

"Technical assistance" – compelled malware

 

Another option under debate is whether the government can force developers to build custom software that removes security features of the developers' products. This prospect arose in connection with the FBI's now-concluded bid to unlock Farook's encrypted iPhone to retrieve evidence for its terrorism investigation. In that case, a magistrate judge ordered Apple to develop and sign a custom build of iOS that would disable several security features that prevented the FBI from quickly brute-forcing the phone's passcode by electronic means. This custom version of iOS would have been deployed like a firmware update only to the deceased terrorist's iPhone, and Apple would have maintained control of both the iPhone and the custom iOS. However, the FBI ultimately cracked the iPhone without Apple's assistance – with help, according to some reports, from a third party company – and asked the court to vacate its order against Apple. Still, it's possible that law enforcement agencies could again attempt to legally compel companies to hack their own products.
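
A rough back-of-the-envelope calculation shows why those security features matter. The figure below (roughly 80 milliseconds of hardware-enforced key derivation per passcode guess) is a commonly cited approximation, not an Apple specification, but it illustrates how the economics of brute force change once attempt limits and escalating delays are stripped away:

# Hypothetical worst-case brute-force times, assuming ~80 ms per guess
# enforced by the device's key-derivation hardware (an approximation).
ATTEMPT_COST_S = 0.08

for name, combos in [("4-digit passcode", 10**4), ("6-digit passcode", 10**6)]:
    worst_case_s = combos * ATTEMPT_COST_S
    print(f"{name}: {combos:,} combinations ~ {worst_case_s / 3600:.1f} hours worst case")

# 4-digit passcode: 10,000 combinations ~ 0.2 hours worst case
# 6-digit passcode: 1,000,000 combinations ~ 22.2 hours worst case

With the escalating lockout delays and optional ten-attempt wipe left intact, those worst-case times stretch from hours toward practical impossibility – which is precisely why the order sought to disable them.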

 

In the Farook case, the government had good reason to examine the contents of the iPhone, and it clearly took steps to help prevent the custom software from escaping into the wild. This was not a backdoor or exceptional access to encryption as traditionally conceived, and it was not entirely dissimilar to the cooperation Apple has offered law enforcement in the past for older, unencrypted versions of iOS. Nonetheless, the legal precedent set if a court compels a company or developer to create malware to weaken its own software could have broad implications that are harmful to cybersecurity.

 

FBI Director James Comey confirmed in testimony before Congress that if the government succeeded in court against Apple, law enforcement agencies would likely use the precedent as justification to demand companies create custom software in the future. It's possible the precedent could be applied to a prolonged wiretap of users of an encrypted messaging service like WhatsApp, or a range of other circumstances. Establishing the limits of this authority would be quite important.

 

If the government consistently compelled companies to create custom software to undermine the security of their own products, the effect could be a proliferation of company-created malware. Companies would need to defend that malware from misuse by both insiders and external threats while potentially deploying it to comply with many government demands worldwide – which, like defending an encryption backdoor, would be considerably burdensome for companies. This outcome could reduce user trust in the security of vendor-issued software updates, even though keeping software as up to date as possible is generally critical for cybersecurity. Companies may also design their products to be less secure from the outset, in anticipation of future legal orders to circumvent their own security.

 

These scenarios raise difficult questions for cybersecurity researchers and firms like Rapid7. Government search and surveillance demands are frequently paired with gag orders that forbid the recipient (such as the individual user or a third party service provider) from discussing the demands. Could this practice impact public disclosure or company acknowledgment of a vulnerability when researchers discover a security flaw or threat signature originating from software a company is compelled to create for law enforcement? When would a company be free to fix its government-ordered vulnerability? Would cybersecurity firms be able to wholeheartedly recommend clients accept vendor software updates?

 

Rapid7 does not support legal requirements – whether via legislation or court order – compelling companies to create custom software to degrade security. Creating secure software is very difficult under the best of circumstances, and forcing companies to actively undermine their own security features would undo decades of security learning and practice. If the government were to compel a company to provide access to its products, Rapid7 believes it would be preferable to use tools already available to the company (such as those Apple offered prior to iOS 8) in limited circumstances that do not put non-targeted users at risk. If a company has no existing means to crack its products, the government should not compel it to create custom software that undermines its products' security features. Software developers should also be free to develop patches or introduce more secure versions of their products to fix vulnerabilities at any time.

 

Government hacking and forensics

 

Finally, there is the option of the government deploying its own tools to hack products and services to obtain information. End-to-end encryption provides limited protection when one of the endpoints is compromised, so if government agencies do not compel companies to weaken their own products, they can exploit existing vulnerabilities themselves. As noted above, this was the eventual outcome of the FBI's effort to compel Apple to provide access to Farook's iPhone. Government agencies turned to hacking and implanting malware in other contexts well before the Farook case.

 

In many ways, this activity is to be expected. It is not an irrational priority for law enforcement agencies to modernize their computer penetration capabilities to be commensurate with savvy adversaries. A higher level of hacking and digital forensic expertise for law enforcement agencies should improve their ability to combat cybercriminals more generally. However, this approach raises its own set of important questions related to transparency and due process.

 

Upgrading the technological expertise of law enforcement agencies will take time, education, and resources. It will also require thoughtful policy discussions on what the appropriate rules for government hacking should be – there are few clear and publicly available standards for government use of malware. One potentially negative outcome would be government stockpiling of zero day vulnerabilities for use in investigations, without disclosing the vulnerabilities to vendors or the public. The picture is clouded further when the government partners with third party organizations to hack on its behalf, as may have occurred in the case of Farook's iPhone – if the third party owns a software exploit, could IP or licensing agreements prevent the government from disclosing the vulnerability to the vendor? White House Cybersecurity Coordinator Michael Daniel noted there were "few hard and fast rules" for disclosing vulnerabilities, but pointed out that zero day stockpiles put internet users at risk and would not be in the interests of national security. We agree and appreciate the default of vulnerability disclosure, but clear rules on transparency and due process in the context of government hacking are becoming increasingly important.

 

No easy answers

 

We view the complex issue of encryption and law enforcement access as security versus security. To us, the best path forward is the one that provides the best security for the greatest number of individuals. To that end, Rapid7 believes that we should embrace the use of strong encryption without compelling companies to create software that undermines their products' security features. We want the government to help prevent crime by working with the private sector to make communications services, commercial products, and critical infrastructure trustworthy and resilient. That foundation of greater cybersecurity will benefit us all in the future.

 

 

Harley Geiger

Director of Public Policy, Rapid7


Getting Ahead of Badlock

Posted by todb Employee Mar 30, 2016

badlock-tay-selly.jpg

While we are keeping abreast of the news about the foretold Badlock vulnerability, we don't know much more than anyone else right now. We're currently speculating that the issue has to do with the fundamentals of the SMB/CIFS protocol, since the vulnerability is reported to be present in both Microsoft's and Samba's implementations. Beyond that, we're expecting the details from Microsoft as part of their regularly scheduled Patch Tuesday.

 

How Bad Is It?

Microsoft and the Samba project both clearly believe this is a more critical than usual problem, but in the end, it's almost certainly limited to SMB/CIFS, much like MS08-067 was. This comparison should be alternately comforting and troubling. While the SMB world isn't the same as it was in late 2008, MS08-067 remains a solid, bread-and-butter vulnerability exploited by internal penetration testers. We are very concerned about the population of chronically unpatched SMB/CIFS servers that lurk in the dusty corners of nearly every major IT enterprise.

 

What Can I Do Now?

Any large organization with a significant install base of Windows servers should use this time to clear patch and reboot schedules for production SMB/CIFS servers through their usual Patch Tuesday change control processes. Assuming the bug is even remotely as bad as the discoverers are making it out to be, this is a patch you will want in production about as fast as your change control processes allow. Given the high visibility of this particular issue, it would be wise to treat it as a mostly predictable emergency.

 

If you're confident you're set up for rapid patch deployment, this is also a pretty great time to conduct an assessment of both your intentional and accidental SMB/CIFS footprint. While Windows machines today ship with an operating system-level firewall enabled by default, all too often users will "temporarily" disable these protections to get some specific file sharing task done, and there's really nothing more permanent in an IT environment than a temporary workaround.
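
If you want a quick first pass before bringing in a real scanner, a few lines of Python can enumerate which hosts in a given range answer on TCP port 445. This is a minimal sketch – the subnet below is a documentation placeholder, and any scanning should of course be authorized:

import socket
from ipaddress import ip_network

SMB_PORT = 445

def smb_open(host, timeout=0.5):
    """Return True if the host accepts a TCP connection on port 445."""
    try:
        with socket.create_connection((host, SMB_PORT), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

for addr in ip_network("192.0.2.0/28").hosts():  # placeholder range
    if smb_open(str(addr)):
        print(f"{addr}: SMB/CIFS exposed")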

 

In short, our advice is take advantage of the hype around this bug, and buy some time from your management to get some legwork done in advance of next Patch Tuesday. You might be surprised with what you find, but it's better to discover those rogue SMB/CIFS endpoints now, in a measured way, than during a panic-fueled crisis. And if you haven't exercised your emergency patch procedures in a while, well, now you have every excuse you could ask for, short of an actual, unplanned emergency.

by Suchin Gururangan & Bob Rudis

 

At Rapid7, we are committed to engaging in research to help defenders understand, detect, and defeat attackers. We conduct internet-scale research to gain insight into the volatile threat landscape and share data with the community via initiatives like Project Sonar [1] and Heisenberg [2]. As we crunch this data, we gain a better idea of the global exposure to common vulnerabilities and can see emerging patterns in offensive attacks.

 

We also use this data to add intelligence to our products and services. We're developing machine learning models that use this daily internet telemetry to identify phishing sites and to find and classify devices by their certificate and site configurations.

 

We have recently focused our research on how these tools can work together to provide unique insight into the state of the internet. Looking at the internet as a whole can help researchers identify stable, macro-level trends underlying the individual attacks between IP addresses. In this post, we'll give you a window into these explorations.

 

IPv4 Topology

First, a quick primer on IPv4, the fourth version of the Internet Protocol. The topology of IPv4 is characterized by three levels of hierarchy, from smallest to largest: IP addresses, subnets, and autonomous systems (ASes). IP addresses on IPv4 are 32-bit sequences that identify hosts or network interfaces. Subnets are groups of IP addresses, and ASes are blocks of subnets managed by public institutions and private enterprises. IPv4 is divided into about 65,000 ASes, at least 30M subnets, and 2^32 (roughly 4.3 billion) IP addresses.
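
For readers who like to see the hierarchy in code, here's a small sketch using Python's standard ipaddress module. The AS-to-subnet mapping shown is an illustrative assumption – real mappings come from BGP routing tables or registry data:

from ipaddress import ip_address, ip_network

# Hypothetical AS announcing two subnets (documentation prefixes).
as_table = {"AS64496": [ip_network("198.51.100.0/24"), ip_network("203.0.113.0/25")]}

host = ip_address("198.51.100.27")  # one of the 2^32 IPv4 addresses
for asn, subnets in as_table.items():
    for subnet in subnets:
        if host in subnet:
            print(f"{host} -> subnet {subnet} ({subnet.num_addresses} addresses) -> {asn}")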

 

Malicious ASes

There has been a great deal of academic and industry focus on identifying malicious activity in and across autonomous systems [3,4,5,6], and for good reason. Well over 50% of "good" internet traffic comes from a small subset of large, well-defined, ocean-like ASes pushing content from Netflix, Google, Facebook, Apple, and Amazon. Despite this centralization of "cloud" content, we'll show that the internet has become substantially more fragmented over time, enabling those with malicious intent to stake their claim in less friendly waters. In fact, our longitudinal data on phishing activity across IPv4 presented an interesting trend: a small subset of autonomous systems have regularly hosted a disproportionate amount of malicious activity. In particular, 200 ASes hosted 70% of phishing activity from 2007 to 2015 (data: cleanmx archives [7]). We wanted to understand what makes some autonomous systems more likely to host malicious activity.
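
The concentration measurement itself is straightforward bookkeeping. Here is a minimal sketch of the computation – the counts below are stand-in data, not the cleanmx archive:

from collections import Counter

# (ASN -> phishing incident count); illustrative stand-in data only.
counts = Counter({"AS1": 120, "AS2": 90, "AS3": 15, "AS4": 5})
total = sum(counts.values())

top_n = 2
top_share = sum(c for _, c in counts.most_common(top_n)) / total
print(f"Top {top_n} ASes host {top_share:.0%} of observed phishing")  # -> 91%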

 

overall_density-1.png

top_20-1.png

 

IPv4 Fragmentation

We gathered historical data on the mapping between IP addresses and ASes from 2007 to 2015 to generate a longitudinal map of IPv4. This map clearly suggests that IPv4 has been fragmenting: the total number of ASes has grown 60% in the past decade, and over the same period the number of small ASes has risen while the number of large ones has declined. These results make sense given that IPv4 address space has been exhausted, which means growth in IPv4 access requires reallocating existing address space into smaller and smaller independent blocks.
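
The bookkeeping behind the longitudinal map is simple: for each yearly snapshot of the subnet-to-AS mapping, count distinct ASes and subnets. A toy sketch with stand-in data:

# Yearly snapshots of (subnet -> ASN); illustrative stand-in data only.
snapshots = {
    2007: {"198.51.100.0/24": "AS64496", "203.0.113.0/24": "AS64497"},
    2015: {"198.51.100.0/25": "AS64496", "198.51.100.128/25": "AS64511",
           "203.0.113.0/24": "AS64497"},
}

for year, mapping in sorted(snapshots.items()):
    print(f"{year}: {len(set(mapping.values()))} ASes across {len(mapping)} subnets")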

 

RStudioScreenSnapz024.png

 

 

AS Fragmentation

Digging deeper into the Internet hierarchy, we analyzed the composition, size, and fragmentation of malicious ASes.

ARIN, one of the regional internet registries that allocate AS numbers and address space, categorizes subnets based on the number of IP addresses they contain. We found that the smallest subnets available made up, on average, 56±3.0 percent of a malicious AS.

We inferred the size of an AS by calculating its maximum amount of addressable space. Malicious ASes were in the 80-90th percentile in size across IPv4.

 

To compute fragmentation, subnets observed in ASes over time were organized into trees based on parent-child relationships (Figure 3). We then calculated the ratio of the number of root subnets, which have no parents, to the number of child subnets across the lifetime of the AS. We found that malicious ASes were 10-20% more fragmented than other ASes in IPv4.
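
For the curious, that ratio can be computed with the standard ipaddress module: a subnet is a "child" if another observed subnet strictly contains it, and everything else is a root. A minimal sketch with illustrative subnets:

from ipaddress import ip_network

observed = [
    ip_network("203.0.113.0/24"),   # root
    ip_network("203.0.113.0/26"),   # child of the /24
    ip_network("203.0.113.64/26"),  # child of the /24
    ip_network("198.51.100.0/24"),  # root with no observed children
]

def is_child(net, others):
    """A subnet is a child if any other observed subnet strictly contains it."""
    return any(net != o and net.subnet_of(o) for o in others)

children = [n for n in observed if is_child(n, observed)]
roots = [n for n in observed if not is_child(n, observed)]
print(f"{len(roots)} roots / {len(children)} children "
      f"-> ratio {len(roots) / max(len(children), 1):.2f}")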

 

RStudioScreenSnapz024.png

 

These results suggest that malicious ASes are large and deeply fragmented into small subnets. ARIN fee schedules [8] show that smaller subnets are significantly less expensive to purchase, and the inexpensive nature of small subnets may allow malicious registrants to buy many IP blocks for traffic redirection or to host proxy servers that better fly under the radar.

 

 

Future Work

Further work is required to characterize the exact cost structure of buying subnets, registering IP blocks, and setting up infrastructure in malicious ASes.

 

We'd also like to understand the network and system characteristics that cause attackers to co-opt one autonomous system over another. As a first step, we used Sonar's historical forward DNS service and our phishing detection algorithms to characterize all domains that have mapped to these ASes in the past two years. Domains hosted in malicious ASes had features suggesting deliberate use of specific infrastructure: for example, 'wordpress' sites were over-represented in some malicious ASes (like AS4808), and GoDaddy was by far the most popular registrar for malicious sites across the board.

 

We can also use our SSL Certificate classifier to understand the distribution of devices hosted in ASes across IPv4, as seen in the chart below:

 

overall_device_density-1.png

 

Each square above shows the probability distribution (a fancier, prettier histogram) of device counts for a particular device type. Most ASes host fewer than 100 devices across a majority of categories. Are specific device types over-represented in malicious ASes and used to propagate phishing attacks?

 

Conclusion

Our research presents the following results:

 

  1. A small subset of ASes continue to host a disproportionate amount of malicious activity.

  2. Smaller subnets and ASes are becoming more ubiquitous in IPv4.

  3. Malicious ASes are deeply fragmented.

  4. There is a concentrated use of specific infrastructure in malicious ASes.

  5. Attackers both co-opt existing devices and stand up their own infrastructure within ASes (a gut-check would suggest this is obvious, but having data to back it up also makes it science).

 

Further work is required to characterize the exact cost structure of buying subnets, registering IP blocks, and setting up infrastructure in malicious ASes along with what network and system characteristics cause attackers to choose to co-opt one device in one autonomous system over another.

 

This research represents an example of how internet-scale data science can provide valuable insight into the threat landscape. We hope these explorations inspire similar macro-level research, and we'll be bringing you more insights from Project Sonar & Heisenberg over the coming year.


  1. Sonar intro

  2. Heisenberg intro

  3. G. C. M. Moura, R. Sadre and A. Pras, "Internet Bad Neighborhoods: The spam case," Network and Service Management (CNSM), 2011 7th International Conference on, Paris, 2011, pp. 1-8.

  4. B. Stone-Gross, C. Kruegel, K. Almeroth, A. Moser and E. Kirda, "FIRE: FInding Rogue nEtworks," doi: 10.1109/ACSAC.2009.29

  5. C. A. Shue, A. J. Kalafut and M. Gupta, "Abnormally Malicious Autonomous Systems and Their Internet Connectivity," doi: 10.1109/TNET.2011.2157699

  6. A. J. Kalafut, C. A. Shue and M. Gupta, "Malicious Hubs: Detecting Abnormally Malicious Autonomous Systems," doi: 10.1109/INFCOM.2010.5462220

  7. Cleanmx archive

  8. ARIN Fee Schedule

Recently, a number of Rapid7's customers have been evaluating the risks posed by the swift rise of ransomware as an attack vector. Today, I'd like to address some of the more common concerns.

 

What is Ransomware?

Cryptowall and Cryptolocker are among the best known ransomware criminal malware packages today. In most cases, users are hit by ransomware after clicking a phishing link or visiting a website that is either compromised or hosting a compromised advertising network. While ransomware is usually associated with Windows PCs and laptops, there have been recent reports of new ransomware on Apple OS X called KeRanger.

 

Ransomware works by encrypting files that the user has access to, usually their local documents; some variants can target and encrypt files on mapped SMB drives as well. Once the files are encrypted, the user is presented with instructions on how to obtain the recovery key, typically for the equivalent of $300-$500 in Bitcoin. Some attacks, however, are enterprise-centric and demand much more; the Hollywood Presbyterian Medical Center reportedly paid over $17,000 to a criminal enterprise to recover its encrypted data.

 

How Can I Avoid Ransomware?

Ransomware attacks happen much like other malware-based attacks. User education is the first line of defense: people should not click suspicious links or visit websites that are known carriers of malvertising. In the event a user encounters a live link to a ransomware download, web-based threat prevention, email-based threat prevention, and application sandboxing can all help avoid infection.

 

In addition, enterprises can preemptively harden their user-facing infrastructure by following the baseline cyber hygiene described in Jason Beatty's blog post. Of special interest is the enforcement of role-based access control: all too often, organizations accrue "access cruft," where users inherit permission sets far too broad for their normal job functions as temporary access grants quietly become permanent ones. By limiting user access across network resources, the damage incurred by the compromise of a single user can be effectively contained, as the sketch below illustrates.
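
Here is a toy Python illustration of why explicit, minimal role definitions bound the damage – the roles and permissions are invented for the example:

# Each role carries an explicit, minimal permission set.
ROLE_PERMISSIONS = {
    "sales":    {"crm:read"},
    "engineer": {"repo:read", "repo:write"},
    "it_admin": {"repo:read", "crm:read", "user:manage"},
}

def can(role, permission):
    """Allow an action only if the role explicitly grants it."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Ransomware running as a phished salesperson cannot write to the code repo,
# so its blast radius stays bounded by that one role's grants.
print(can("sales", "repo:write"))  # False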

 

I've Been Hit! How Can I Recover?

In the event a user or enterprise falls victim to a ransomware attack, the best solution is to treat the event as any other disaster: restore the lost data from backups, conduct an investigation into how the disaster occurred, and educate the users involved on how to avoid this disaster in the future. As of today, there is no known method for recovering lost data without cooperating with the criminals responsible for the ransomware.

 

Of course, backing up valuable data before an attack is critical to recovering from this kind of attack. Backup schedules vary widely across people and enterprises, many backup plans are implemented but never tested, and the rise of ransomware has dramatically increased the odds of a data-loss disaster. IT administrators who are concerned about ransomware affecting their users should investigate the relevance and reliability of their existing backup solutions, and weigh the cost of a sudden loss of data against the cost of more robust and frequent backups.
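
One backup sanity check that is easy to automate is flagging backup sets whose newest snapshot is older than the recovery point you can tolerate. A minimal sketch – the paths and the one-day objective are illustrative assumptions:

import os
import time

BACKUP_DIRS = ["/backups/fileserver", "/backups/homedirs"]  # placeholders
MAX_AGE_S = 24 * 3600  # recovery point objective: one day

now = time.time()
for d in BACKUP_DIRS:
    try:
        newest = max(os.path.getmtime(os.path.join(d, f)) for f in os.listdir(d))
    except (OSError, ValueError):  # missing directory, or no files at all
        print(f"{d}: no readable backups!")
        continue
    age_h = (now - newest) / 3600
    print(f"{d}: newest backup {age_h:.1f}h old " +
          ("[OK]" if now - newest <= MAX_AGE_S else "[STALE]"))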

 

That Didn't Work. Should I Pay?

In most areas of crime, paying blackmail or ransom demands is counterproductive. It funds criminal enterprise directly and encourages more blackmail and ransom activity for both the original victim and future victims.

 

However, even the United States FBI seems to be advising people that, given no other disaster recovery alternative, victims may want to consider paying for recovery. In October of 2015, Joseph Bonavolonta of the FBI admitted, "To be honest, we often advise people just to pay the ransom." The FBI later clarified this position: victims should consider paying only when there is no other recourse, such as recovering from backups.

 

The criminal enterprises running ransomware campaigns today are remarkably organized, and can even be considered helpful when it comes to getting their victims in a position to pay the ransom, nearly always via Bitcoin. These campaigns build in significant "victim support" that walks users through acquiring Bitcoin and aims to ensure that recovery is actually possible once the ransom is paid. That said, these organizations are criminal, after all, and operate across international borders. They appear to be making good on their offers to decrypt the data held hostage, but there is absolutely no guarantee they will continue to do so.

 

Conclusions

While ransomware represents the latest trend in drive-by, opportunistic malware, it is avoidable and containable by following fundamental security and disaster recovery best practices. Encouraging secure habits in an enterprise's user base is the cornerstone of avoiding the problem in the first place. Enterprises struck by ransomware are urged to treat the event as they would any local disk disaster: restore from backups, conduct a post-mortem investigation into how the disaster happened, and take the lessons learned to become more resilient in the event of future disasters.

Building a reliable security team is tough; there is no defined approach and no silver bullet.  The people we are defending against are intelligent, dedicated, and have a distinct asymmetrical advantage, with nearly unlimited time to find the one thing we miss.  This past decade has taught us that what we have been doing is not working very well.

 

I've been lucky to have latitude for creativity when building the security team at Rapid7.  So when Joan Goodchild asked me to join her for CSO Online's first edition of "security sessions" it felt like the perfect time to start socializing how we've approached building our team.

 

Rapid7, like many high-growth technology companies, has introduced a significant set of SaaS offerings over the past few years. With the introduction of these offerings, we needed to build a platform we believed our customers could trust. Given the current status quo, we didn't feel that blindly following failed 'best practices' was the right path, so we decided to forge our own.

 

Head over to CSO to get a glimpse into how we tackle building our team and program.  During this CSO Security Session, I spend several minutes discussing with Joan who we hire, how we hire, my views on certifications, higher education, technology (and its stagnation), and how we measure the progress of our security organization.

 

I hope our discussion stimulates some meaningful conversations for you, and I encourage you to think about the following five items:

 

  1. Have you done the fundamentals? Two-factor authentication, network segmentation, and patch management are all far more tactically important than nearly anything else your program could do.
  2. Do you need that security engineer with 7-10 years of experience? What about a more junior engineer who can write code, automate, and solve problems (not just identify them)?
  3. Do you measure success with practical indicators? Don't try to fit into someone else's mold of 'metrics.' Look at the areas of your program you want to focus on, and use something like CMMI to measure the maturity (as opposed to effectiveness) of those operations.  You can look at something like BSIMM to see how this can be done effectively in some security verticals.
  4. Should the lack of a college degree or a security certification disqualify a candidate?  If you let your HR system automatically weed out people who don't have certifications or degrees, you are going to miss out on great resources.
  5. Do you understand what makes your company tick? If you can’t become part of the success of your business, you will always be viewed as a problem.

 

The landscape we deal with is constantly changing and we need to adapt with it.  While I don’t presume anything we’ve done is the silver bullet, the more we all push the envelope and approach our challenges creatively, the more likely we are to start shifting that asymmetrical balance into a more reasonable equilibrium.

 

I’d be interested to hear your thoughts on building out an effective security team. Share them in the comments or on Twitter -- I’m @TheCustos.

The U.S. Departments of Commerce and State will renegotiate an international agreement – called the Wassenaar Arrangement – that would place broad new export controls on cybersecurity-related software. An immediate question is how the Arrangement should be revised. Rapid7 drafted some initial revisions to the Arrangement language – described below and attached as a .pdf to this blog post. We welcome feedback on these suggestions, and we would be glad to see other proposals that are even more effective.

 

Background

 

When the U.S. Departments of Commerce and State agreed – with 40 other nations – to export controls related to "intrusion software" in 2013, their end goal was a noble one: to prevent malware and cyberweapons from falling into the hands of bad actors and repressive governments. As a result of the 2013 addition, the Wassenaar Arrangement requires restrictions on exports for "technology," "software," and "systems" that develop or operate "intrusion software." These items were added to the Wassenaar Arrangement's control list of "dual use" technologies – technologies that can be used maliciously or for legitimate purposes.

 

Yet the Arrangement's new cyber controls would impose burdensome new restrictions on much legitimate cybersecurity activity. Researchers and companies routinely develop proofs of concept to demonstrate a cybersecurity vulnerability, use software to refine and test exploits, and use penetration testing software – such as Rapid7's Metasploit Pro – to root out flaws by mimicking attackers. Depending on how each country implements it, the Wassenaar Arrangement could either require new licenses for each international export of such software or prohibit international export altogether. This would create significant unintended negative consequences, since cybersecurity is a global enterprise that routinely requires cross-border collaboration.

 

Rapid7 submitted detailed comments to the Dept. of Commerce describing this problem in July 2015, as did many other stakeholders. The Wassenaar Arrangement was also the subject of a Congressional hearing in January 2016. [For additional info, check out Rapid7's FAQ on the Wassenaar Arrangement – available here.]

 

Revising the Wassenaar Arrangement

 

To their credit, the Depts. of Commerce and State recognize the overbreadth of the Arrangement and are motivated to negotiate modifications to the core text. The agencies recently submitted agenda items for the next Wassenaar meeting – specifically, removal of the "technology" control, and then placeholders for other controls. A big question now is what should happen under those placeholders – a placeholder does not necessarily mean that the agencies will ultimately renegotiate those items.

                                                                  

To help address this problem, Rapid7 drafted initial suggestions on how to revise the Wassenaar Arrangement, incorporating feedback from numerous partners. Rapid7's proposal builds on the good work of Mara Tam of HackerOne and her colleagues, as well as that of Sergey Bratus; one of their most important contributions was to emphasize that authorization is a distinguishing feature of legitimate – as opposed to malicious – use of cybersecurity tools.

 

Our suggested revisions can be broken down into three categories:

 

1) Exceptions to the Wassenaar Arrangement controls on "systems," "software," and "technology." These are the items on which the Wassenaar Arrangement puts export restrictions. We suggest creating exceptions for software and systems designed to be installed by administrators or users for security enhancement purposes. These changes should help exclude many cybersecurity products from the Arrangement's controls, since such products are typically used only with authorization for the purpose of enhancing security – as compared with (for example) FinFisher, which is not designed for cybersecurity protection. It's worth noting that our language is not based solely on the intent of the exporter, since the proposed language requires the software to be designed for security purposes, which is a more objective and technical measure than intent alone. In addition, we agree with the Depts. of State and Commerce that the control on "technology" should be removed because it is especially overbroad.

 

Here is the Wassenaar Arrangement text with our suggested revisions in red and strikethrough:

4.A.5.   Systems, equipment, and components therefor, specially designed or modified for the generation, operation or delivery of, or communication with, "intrusion software".

Note:  4.A.5 does not apply to systems, equipment, or components specially designed to be installed or used with authorization by administrators, owners, or users for the purposes of asset protection, asset tracking, asset recovery, or ‘ICT security testing’.

 

4.D.4.  "Software" specially designed or modified for the generation, operation or delivery of, or communication with, "intrusion software".

Note:  4.D.4 does not apply to "software" specially designed to be installed or used with authorization by administrators, owners, or users for the purposes of asset protection, asset tracking, asset recovery, or ‘ICT security testing’. “Software” shall be deemed "specially designed" where it incorporates one or more features designed to confirm that the product is used for security enhancement purposes. Examples of such features include, but are not limited to:

a. A disabling mechanism that permits an administrator or software creator to prevent an account from receiving updates; or

b. The use of extensive logging within the product to ensure that significant actions taken by the user can be audited and verified at a later date, and a means to protect the integrity of the logs.

 

4.E.1.a. "Technology" [...] for the "development," "production" or "use" of equipment or "software" specified by 4.A. or 4.D.

 

4.E.1.c. "Technology" for the "development" of "intrusion software".

 

2) Redefining "intrusion software." Although the Wassenaar Arrangement does not directly control "intrusion software," the "intrusion software" definition underpins the Arrangement's controls on software, systems, and technology that operate or communicate with "intrusion software." Our goal here is to narrow the definition of "intrusion software" to code that can be used for malicious purposes. To do this, we suggest redefining "intrusion software" as software specially designed to be run or installed without the authorization of the owner or administrator, and which extracts, modifies, or denies access to a system or data without authorization.

 

Here is the Wassenaar Arrangement text with our suggested revisions in red and strikethrough:

Cat 4 "Intrusion software"
1. "Software"

a. specially designed or modified to avoid detection by 'monitoring tools', or to defeat 'protective countermeasures', or to be run or installed without the authorization of the user, owner, or ‘administrator’ of a computer or network-capable device, and

b. performing any of the following:

a.1. The unauthorized extraction of or denial of access to data or information from a computer or network-capable device, or the modification of system or user data; or

b.2. The unauthorized modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions system or user data to facilitate access to data stored on a computer or network-capable device by parties other than parties authorized by the owner, user, or ‘administrator’ of the computer or network-capable device.

 

3) Exceptions to the definition of "intrusion software." The above modification to the Arrangement's definition of "intrusion software" is not adequate on its own because exploits – which are routinely shared for cybersecurity purposes – are designed to be used without authorization. Therefore, we suggest creating two exceptions to the definition of "intrusion software." The first is to confirm that "intrusion software" does not include software designed to be installed or used with authorization for security enhancement. The second is to exclude software that is distributed, for the purpose of preventing its unauthorized execution, to particular end users. Those end users include 1) organizations conducting research, education, or security testing, 2) computer emergency response teams (CERTs), 3) creators or owners of products vulnerable to unauthorized execution of the software, or 4) an entity's subsidiaries or affiliates. So, an example: A German researcher discovers a vulnerability in a consumer software product, and she shares a proof-of-concept with 2) CERT and 3) a UK company that owns the flawed product; the UK company then shares the proof-of-concept with 4) its Ireland-based subsidiary and 1) a cybersecurity testing firm. The beneficial and commonsense information sharing outlined in this scenario would not require export licenses under our proposed language.

 

Here is the Wassenaar Arrangement text with our suggested revisions in red and strikethrough:

 

Notes
1. "Intrusion software" does not include any of the following:

a. Hypervisors, debuggers or Software Reverse Engineering (SRE) tools;
b. Digital Rights Management (DRM) "software"; or
c. "Software" designed to be installed or used with authorization by manufacturers, administrators, owners, or users, for the purposes of asset protection, asset tracking, or asset recovery., or ‘ICT security testing’; or

d. “Software” that is distributed, for the purposes of helping detect or prevent its unauthorized execution, 1) To organizations conducting or facilitating research, education, or 'ICT security testing', 2) To Computer Emergency Response Teams, 3) To the creators or owners of products vulnerable to unauthorized execution of the software, or 4) Among and between an entity's domestic and foreign affiliates or subsidiaries.


Technical Notes

1. 'Monitoring tools': "software" or hardware devices that monitor system behaviours or processes running on a device. This includes antivirus (AV) products, end point security products, Personal Security Products (PSP), Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS) or firewalls.

2. 'Protective countermeasures': techniques designed to ensure the safe execution of code, such as Data Execution Prevention (DEP), Address Space Layout Randomisation (ASLR) or sandboxing.
3. ‘Authorization’ means the affirmative or implied consent of the owner, user, or administrator of the computer or network-capable device.

4. ‘Administrator’ means an owner-authorized agent or user of a network, computer, or network-capable device.

5. 'Information and Communications Technology (ICT) security testing’ means discovery and assessment of static or dynamic risk, vulnerability, error, or weakness affecting “software”, networks, computers, network-capable devices, and components or dependencies therefor, for the demonstrated purpose of mitigating factors detrimental to safe and secure operation, use, or deployment.

 

 

This is a complex issue on several fronts. For one, it is always difficult to clearly distinguish between software and code used for legitimately beneficial versus malicious purposes. For another, the Wassenaar Arrangement itself is a convoluted international legal document with its own language, style, and processes. Our suggestions are a work in progress, and we may ultimately throw our support behind other, more effective language. We don't presume these suggestions are foolproof, and constructive feedback is certainly welcome.

 

Time is relatively short, however, as meetings concerning the renegotiation of the Wassenaar Arrangement will begin again during the week of April 11th. It's also worth bearing in mind that even if many cybersecurity companies, researchers, and other stakeholders come to agreement on revisions, any final decisions will be made with the consensus of the 41 nations party to the Arrangement. Still, we hope suggesting this language helps inform the discussion. As written, the Arrangement could cause significant damage to legitimate cybersecurity activities, and it would be very unfortunate if that were not corrected.
