
Information Security


This Thursday, our Cyber Security Awareness Month webcast series kicks off with a look at security in the workplace:


How to Make Your Workplace Cyber-Safe

Thursday, October 8th at 11am ET/ 8am PT and 4pm BST

  • Bob Lord, CISO in Residence at Rapid7
  • Ed Adams, President and CEO at Security Innovation
  • Chris Secrest, Information Security Manager at MetaBank
  • Josh Feinblum, Vice President of Information Security at Rapid7


We won’t just be targeting security practitioners, either – anyone who works in an office can benefit.


You may be thinking, “But I don’t work in security, so why is that my concern?”


While it’s the duty of security and IT to mitigate risk and ensure that the security program adheres to industry best practices, it’s in everyone’s best interest to ensure that your workplace is cyber-safe. Breaches threaten not just customer information but also the PII of employees and partners, and they can create service outages that strain the entire organization and cause reputational and brand damage with long-lasting effects.


The best way to prevent that scenario is to create a security-centric culture to which everyone feels they can contribute. This prevents the that’s-not-my-job mentality that can torpedo even a strong security program. Human error is the most frequently seen security incident pattern, according to the 2015 Verizon Data Breach Investigations Report, so mitigating user risk in the workplace is a highly effective means of bolstering security across the business.


During the webcast, three panelists will join me for a discussion on how to make a workplace cyber-safe by creating a security-centric culture. Our moderator, Bob Lord, has some excellent ideas about starting with the breach and working backwards to determine how far an attacker can get. For example – if a laptop gets lost or stolen, what’s the severity? Has the hard drive been encrypted? Similarly, assume a user gets phished. What does the kill chain look like in that scenario?


Some of the other topics we’ll cover include:


  • Characteristics of an effective security awareness program: How to make sure that everyone understands risk and how their footprint impacts the business.
  • Managing passwords and devices: What happens when employees lose a device that’s storing company information? We’ll touch on encryption and also general password hygiene.
  • Common threats targeted at users: Social engineering and phishing are common and effective mechanisms for infiltrating a network. Workers need to realize that things have gotten way more sophisticated than a Nigerian prince asking for money.


- @TheCustos

[ETA: Added in James Lee's excellent State of the Metasploit Framework talk, which I stupidly omitted by accident!]


Once you hang around in infosec for a little while, you learn that each of the major cons has its own reputation, its own mini-scene. This one's got the great parties, that one has the best speakers, that other one is where the fresh research is presented, et cetera. One I kept hearing lots of good things about -- full of great content and really great people -- was Derbycon, a newer con entering its 5th year this year in Louisville, Kentucky.


With these words of praise in mind I went to Louisville last weekend and learned very quickly that Derbycon really does live up to its great reputation. It's a space where not only are n00bs (like me) welcome, but even seasoned pros bask in the positivity and family feel of the space. I don't think I've ever seen quite so many whole families with kids at an infosec con as I did at Derby. Maybe it's the genteel kindness of Louisville that rubs off on the attendees, but at Derby everyone was so friendly and the whole con felt very welcoming. Linecon, barcon, outside-the-front-entrance-smoking-con -- anywhere you went you had a great conversation with someone. (And it really can't be a coincidence that the 2 Black badges up for auction at the closing ceremonies each went for $7000, with all money going to Hackers for Charity. That's really amazing.)


... True enough, the beer and bourbon were flowing a-plenty -- and boy was that bourbon good -- and the community and surrounding company were the best part of the con. No surprise, the talks were top-notch too. I'm embedding a few videos of my favorite sessions below, but admittedly I am not as up on my technical knowledge as most of you. You can peruse the ENTIRE list of Derbycon 5 talks in this playlist: https://www.youtube.com/playlist?list=PLNhlcxQZJSm8cr3iBN27VZ4Rm11Erbae-


But for those of you looking for a little taste of it, take a look:

The State of the Metasploit Framework -- by Egypt

What changed this year? What community contributions did we see? What cool new shiny things are in the Metasploit Framework that you might have missed? Everyone who uses Metasploit should tune in.


The Opening Keynote - Information Security Today and in the Future -- featuring Ed Skoudis, John Strand, Chris Nickerson, Kevin Johnson & HD Moore

This was a really fascinating keynote -- there was a lot of emphasis on pen testing, but it touches on a lot of topics, from the importance of relationships with your IT and devops teams to educating the workforce. There's a ton in here, give it a listen.


Started from the bottom and now I'm here: How to ruin your life by getting everything you ever wanted - by Chris Nickerson

I missed this one in person and regret it IMMENSELY. Thankfully, Egypt shared it on twitter with a hearty endorsement, and I hugely agree. This isn't a tech talk, but if you work in infosec or with people who work in infosec... you need to see this talk. What happens to "infosec rockstars"? What is the real cost? What is the state of our community today?


Gray Hat PowerShell - by Ben Ten/@Ben0xA

Now this one IS a more technical talk, so if you already grok PowerShell this one's for you (not for PowerShell newbies). I couldn't get into this one as the line was out the door and around the hotel so... check it out.


The Metasploit Town Hall -- with todb, Egypt, thelightcosine & busterbcook

Back again to Derbycon, our High Priests of Metasploit give the community an update on what's new in Metasploit and take questions from those in attendance on what they'd like to see or improve.


Developers: Care and Feeding  -- by Bill Sempf

If you work with developers, and feel like you and they are speaking two very different languages and have massively different priorities, you need to hear this talk.


Other random things I learned at Derby:

  1. Some of you guys can drink a lot -- a LOT -- of bourbon and beer. Wow.
  2. It is entirely likely that you will walk into the Hyatt bar on any Derbycon evening and see several Cards Against Humanity games going on concurrently.
  3. I have now righted a GREAT wrong in my life and finally saw the "classic" 90s movie Hackers thanks to the 20th anniversary screening at Derby. (Yes, yes, I know, it's unfathomable that I hadn't seen it before. But now I can shout HACK THE PLANET!!!! with the best of them.)
  4. Judging by the references and fsociety shirts I saw, Mr Robot seems to be pretty popular in our scene -- and I'm glad, because I already can't wait for season 2.
  5. I know it's not the 90s anymore, but The Crystal Method can still really rock the house, and some of you look quite lovely in blinky cyberpunk headgear.
  6. If you are at all a light sleeper, make sure you book a hotel room above the 10th floor. I was on the 4th floor, and the parties were on the 2nd floor and, well, not much sleep was had.
  7. The Meme-Fu from some of the speakers at Derby was just so damn high.

'Til next year, Derbycon.  Let's keep that welcoming feeling going even outside of Louisville.


All Red Team, All the Time

Posted by boblord, Sep 24, 2015

In last week’s blog (which you should read now if you have not), I said:


The core problem with security today isn’t about technology. It’s about misaligned incentives. We are trying to push security onto people, teams, and processes that just don’t want it.


To be clear, it’s not that people don’t care. They say they want security, and I believe them. Or more precisely, part of their brain wants security. People who want to break a bad habit, or to lose weight, or to stop smoking, all want to achieve their goals, but other parts of their brain are in charge. What matters are their actions and behaviors. Outsiders will judge you by the results, not your efforts, goals, and intentions.


How do we bridge the gap between people and organizations wanting to be more secure, and actually being more secure? Thinking about the long-term effects, how do we get from where we are now to a world in which breaches are rare?


As I dreamed in the previous blog post, it’s all about incentives that move responsibility from people with “security” in their title to people everywhere in the organization.


I don’t have any great answers (other than my “All red team, all the time” dream), but I will offer a few  characteristics of an organization that will be more likely to be secure by design.


Product teams have to care more about the security of the data they collect than the security team does. To use an analogy (and I admit analogies are always fraught with peril), I simply must care about my own health more than my personal trainer and doctor do, because when something goes wrong I’m the one who has the heart attack, not them. Today the incentives don’t line up that way in infosec. Teams regularly ignore or override the advice of their security “doctor.” And when the incident happens, the security teams often bear the brunt of the incident response process. Everyone with Product Manager as a title should be well versed in the attacker lifecycle, the black market value of the data they collect, and the legal impact of a breach, and should have a written runbook for when all that private data is dumped on torrents (among other scenarios). When product teams are equal partners in securing the data, security is more likely.


The CEO is the implicit Chief Security Officer. She has to set the tone for everyone. She has to brag to her VPs about how she’s tightening up her personal security. She needs to be the first to update her laptop OS, experiment with a new secure instant messaging system, and request security report cards for the various teams. She has to require each VP and Director to formally explain what they are doing to improve security in their areas (as opposed to putting the sole burden on the security team). She should ask them to explain why they are collecting and holding customer and employee data for so long. What really matters is not what the CEO says to the security team about security, it’s what the CEO says to everyone else about security when the security team isn’t present. Small, continuous reinforcement is stronger than a single bold pronouncement.


Everyone thinks like an attacker. You are up against dedicated, human adversaries. After you make a move to improve security, your adversaries will decide what to do, if anything. When you start to think this way consistently, it gives you new perspective. Your company does a lot of work to pass the audits, build ISO or NIST controls, train people, roll out a new IDS, refactor networks, implement an SDL, and a lot of other hard, painful, expensive things. But when you view your results through the lens of an attacker, you may find that it’s not enough; that it’s necessary, but not sufficient. Or that you over-invested in one area like Protect at the expense of Detect, or Respond, or Recover. If you knew for a fact that you were going to be attacked tonight, what would you do differently? If you knew you had an intruder in your networks right now, what would you do? Thinking like an attacker doesn’t devalue all those hard things you do to defend. It gives you perspective to know if it’s enough and balanced. Thinking like an attacker will let you know if you’ve changed the incentives and economics for the adversary.


Those are a few characteristics that will lead to a more secure organization. I’m sure there are others.


Let me be blunt. Until those things happen, compromises and breaches are inevitable because the incentives are misaligned.


Have a story or a dream for me about incentives that worked? Or went awry? Drop me a line on Twitter at @boblord. If you want to tell me confidentially, send me a DM! My settings allow you to DM me even if I don’t follow you.


Yesterday, PerimeterX disclosed an issue in the venerable Bugzilla bug tracker, which can allow an untrusted attacker to gain access to privileged bug reports. This includes, of course, privately reported, but still unfixed, security vulnerabilities. Operators of Bugzilla bug trackers that use e-mail-based permissions are strongly advised to patch today. This would be a good place to insert a "yo dawg" joke about bugs in bugs, but I trust you've filled in the rest yourself by the end of this sentence.

(Image: Bugzilla-ayb.png by the Mozilla Foundation, licensed under MPL 1.1 via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:Bugzilla-ayb.png)

Bugzilla is one of the oldest bug trackers around, first published in 1998 and written in Perl. According to the Bugzilla project, now run by the Mozilla Foundation, there are 136 organizations running public Bugzilla instances. There are undoubtedly many, many more in internal network environments.


New Perl-based web applications are a rarity today, with most people favoring more modern web application frameworks written in languages such as Ruby, JavaScript, or PHP. The current investigation by PerimeterX is focused on Perl applications for an upcoming sequel to the Chaos Communication Congress presentation, "The Perl Jam." Today, many applications on the Internet still running Perl could be considered both "legacy" and "mission critical."


In some long-running IT environments, especially closed environments, it is easy to fall into an "it was like that when I got here" mindset, which is one of the reasons penetration testers are nearly always able to find older, unsupported, and unpatched systems running in internal target environments. While open source is usually quick to embrace new technologies for developing applications, such as Ember, React, and Angular, often at dizzying speed, there are still plenty of mature open source projects which rely on established technologies. Upgrading and rewriting software in new languages and frameworks is a fairly major undertaking, and thus is usually reserved for new projects where technical debt is negligible to non-existent.


As in many other industries, newer web application technologies are generally safer. By analogy, old cars lacked crumple zones, anti-lock brakes, and airbags, omissions that would be unthinkable in today's automotive industry. Similarly, more modern web application frameworks tend to never expose problems like stack-based buffer overflows or format string vulnerabilities, and actively guard against common cross-site scripting and SQL injection bugs. Web applications can and do ship bugs, for sure, but typical usage of modern frameworks comes with some assurance that the developer won't be able to immediately shoot himself in the foot.


Push vs Pull Security

Posted by boblord, Sep 17, 2015

I woke up from a dream this morning. Maybe you can help me figure out what it means.


Your company hired me to build a security program. They had in mind a number of typical things. Build a secure software development lifecycle so app developers didn’t code up XSS vulnerabilities. Improve network security with new firewalls and roll out IDS sensors. Set up training so people would be less likely to get phished. Implement a compliance program like NIST or ISO. And you wanted all of that rolled out in a way that didn’t disrupt the business or upset employees, or slow down the business people signing partnership deals and doing M&A work.


I didn’t do that. Instead, I hired a bunch of red team attackers. I bought them Metasploit Pro, WiFi attack hardware, and literally anything else they thought would be cool.


I also convinced the CEO to do something odd. (You know I’m an actual hypnotist, right? Perhaps you find yourself smirking just a little at that thought. Feels good, doesn’t it?) I convinced her to let me pay the red team minimum wage. And I convinced her that if the red team was able to acquire certain key bits of intellectual property, source code, customer data, marketing plans, or financial data, that they’d be able to claim a bounty. That bounty would be the quarterly bonus of their victims. If the red team can get those flags, they stand to win big. They stand to claim quarterly bounties measured in the millions.


I announced this plan at the quarterly all hands. I heard some murmuring in the audience. When people went to the microphone to ask questions they seemed stunned, and asked for clarification: was I really serious, and had the CEO approved? It wasn’t until the next morning that I faced the angry mobs. People from across the company were lined up outside my office.


The head of HR was first. He pointed to printouts of the graphs the security team gave him each month. These graphs showed that his team was the most likely team in the company to get phished. By a lot. And most likely to not use a password manager (meaning they were almost certainly using weak passwords, and reusing them in multiple corporate and external sites). And that they were least likely in the company to keep their systems patched and free of personal applications. He pointed out that HR in general, but Recruiting in particular, were sitting ducks. After all, he reminded me, what do the recruiters do all day? They receive emails from people they don’t know with attachments purporting to be resumes. They click on all those purported PDFs and Word documents without question. And they click on any links that might either help them understand a candidate’s history, or lead to a phishing or malware site.


He said the team was concerned that under those conditions, there was no hope for them keeping their bonus. If just one recruiter slipped up, everyone in recruiting would have to explain to their spouses why they were going to be getting less money this quarter. He started to raise his voice at me. “Have you seen the stats showing how bad companies are at patching? Have you seen the stats on the percentage of companies that are compromised by phishing? Have you even read the Verizon DBIR, Bob?!”


I reminded him of some recent decisions that he had approved. I reminded him that he had several important people on his team who had previously demanded to bring their own laptop from home and to connect it to the corporate network. I reminded him that he had personally overridden the security team concerns about malware, data loss, and other security issues. He was furious. “This is SERIOUS, Bob!” (I almost asked, but did not, “Wasn’t it serious when it was just customer data on the line? What about you personally losing money made it more important? Never mind, I just answered my own question.” )


I asked him what he suggested.


“First, I don’t know how you can sit there in good conscience with so little control over the laptops. Given the example of recruiting, why on earth would you let IT give them a general purpose computing device with admin rights? That’s insane! Take those away and give them Chromebooks with hardware backed certificates, and lock down the network so they can’t bring other machines. And don’t let me catch you rolling these out without a full time VPN so Infosec has complete pcaps for inspection even if they are off the corporate network!”


I stammered a little. “W-What if something goes wrong with their laptop, or it’s lost or stolen? You can’t have people out of work over technology failure!”


“Haven’t you been listening to me Bob? It’s far more important that we do everything we can to protect our IP and customer data than it is to guarantee 100% uptime for every employee 100% of the time. If someone is out sick and didn’t bring their laptop home, they won’t work and we’ll deal with that. If a remote employee has a laptop stolen, they can wait 24 hours until the new one gets shipped out. Bob, the risks of not doing these things are small. But they add up. And in ways that are hard to predict. Those small advantages are how the red team will get in. Either through clever hacking, or social engineering, that’s how they’ll get in. And I’m not going to have my team members lose income because you and IT gave them technology that was insecure by design.”


With that he stormed out of my office. But they weren’t done with me yet. Next up was an engineer I barely recognized.


As he sat down he immediately said “Looks like you’re going to be having a good day! Here’s the deal. I do all the right things from a security perspective, but I’m pretty much alone in that regard on my little team. I have two problems. First, the way we do builds. Second, the way we do appsec. For builds, as you know, all developers have the entire code repo on their laptops. Now if source code is one of the red team targets, we’re doomed. Bob, I make an OK living as a junior dev, but not so well that I can do without that bonus. Here’s my pitch: You need to move development into the datacenter. Make it so there is never any source code outside of a secure enclave. It’s simply got to be easier to manage the security of a central system with known inputs and outputs, right? Plus, get this: The other devs are going to like it. Why? Because they get to do their builds on a $30,000 machine rather than a $2,000 machine. And that machine is on a 10G network rather than on wifi, so pulling a fresh copy of the tree takes seconds. Now, if source code is stolen, there’s no way you can take my bonus from me. It simply cannot be my fault. It’s going to be the security team that loses their bonus! Sorry, but I care more about me getting my bonus than you getting yours. True fact.”


“OK,” I said. “What was your other concern? Appsec, was it?”


“Yeah. We say we do application security here, but you and I both know that’s a joke. Some engineers take security seriously and do really great code reviews. But most have no idea what to look for. They didn’t learn any security basics in college and we’ve done almost nothing to remedy that sad fact. So here’s what I’m going to ask the VP of Engineering, so I’m asking you also. We need 2 full quarters to ramp up. We need to stop what we’re doing and start security boot camp.”


“But, we have that,” I protested.


“What, the 1 day class that half the engineers opt out of? That’s not what I mean. I mean a real boot camp where we not only learn secure coding practices, but spend time in labs learning to attack code. And you don’t get permission to check in any more code until you pass both the offensive and defensive tests. Most won’t pass the first time, and that’s OK. Learning to think like a hacker is a major mindset change. Bob, you can’t really learn to write code securely unless you’ve spent time acting as the attacker. And once you get into the groove, it’s actually a ton of fun.”


“But that’s not going to take 2 quarters. Maybe a month…”


“Yeah, but here’s the problem. If any of our existing non-secure code is implicated in the attack path, some team, maybe MINE, will lose their bonus because of code we wrote over 2 years ago. That’s not fair! If we’re going to make things fair, we need not only time to become modestly proficient in secure coding, but we need to review any code that the red team might use against us. We need to make sure everyone understands static and dynamic code analysis, and how to do a proper code review. We have to find critical code modules that should only be touched by gurus. We need to assign each critical module to an appropriate owner. We need tooling that will prevent obviously broken code from going into production. And the list goes on. And all that is going to take time. I’m confident that we can make the code close to bulletproof, but it’s a long way from there now and we need the time.”


“I’m not sure how I’m going to convince the Product teams to let you go dark for 2 quarters…” I warned.


“Those guys?!” he blurted out with snark that almost rivaled infosec engineer snark. “How long do you think it will take the red team to completely own them? I think they’ll get religion in the next few days. Or maybe by lunch.”


Up next was a Director in the Finance team. He wasted no time. “So what are you going to do about the Finance people who move millions of corporate dollars around on the same machine they use for Facebook? Huh?! They all use old versions of Excel and Word on an old version of Windows. They love IE rather than Chrome or Firefox. Have you seen how many IE toolbars they have? How can those machines NOT be compromised?! It’s a miracle we haven’t sent millions of dollars to some overseas crime syndicate. And take a look at their workspaces. They have yellow stickies with bank passwords on them under keyboards. Does the red team get to come in late at night and go through people’s desks? If so, we’re going to lose our bonuses tonight! I have the team huddling today to come up with a list of improvements they need to make to secure employee and company data and money. I expect your full support in reducing my team’s attack surface and in improving security training. Oh, and if you don’t migrate us from this weak password-based authentication to something like a PKI-based hardware token, there’s going to be hell to pay!” And off he went.


One of the IT managers came in and sat down. “Let’s dump all our corporate machines.”


“Excuse me?”


“Yeah, we have all these machines. Some are in the building, some in our colo. Let’s get rid of all of them.”


“And do what? No mail, no wiki, no HR apps…”


“We need to move to the cloud. We shouldn’t have any internally hosted services. Move all our apps to cloud providers.”


“You told me once that the cloud wasn’t secure…”


“That’s before I knew we were actually going to be attacked!”


“You mean you didn’t think the company was a valid target by disgruntled ex-employees, competitors, pranksters, hacktivists, crime syndicates, or nation states? That NO ONE would want to break in?”


“Well I guess it was a possibility, but now it’s a certainty and we need to take action Bob! This is serious! Plus, think about it this way: I have exactly zero people looking at mail logs, applying patches, or anything else that might help further security. And remember when that cloud provider admitted last month that they had a breach based on their internal security tools? The way I figure it, that shows that they are actually doing security right. They had actually put effort into Detection, not just Prevention. And they clearly had also invested in the Respond and Recover functions of the NIST framework. And even if they need to do better, what they have is already much more than I’ll ever be staffed to do. If we move to the cloud, we’ll get continuous upgrades, better security, and my team can focus on much more strategic projects. So please assign someone from your team to make sure my team can make this change quickly and securely.”


Finally, the CEO came in. She confided in me that she just realized that her own bonus was on the line, and that it was a considerable sum of money. “Do you think the red team will come after me?” I told her that I honestly didn’t know, but now that she mentioned it, probably. “It won’t be hard for them, will it?” she asked. “Um, well, probably not”, I replied quietly.


“They won’t come after my personal accounts. Right? Bob???”


I took a deep breath and explained that the red team was going to do what the bad guys do, and that it’s common for the bad guys to do extensive research on targets, often lasting months, and to use personal information in furtherance of their attack on the company.


“That’s hardly sporting of them!”


“Are you talking about the red team, or the criminals who want the data for which you are the top custodian?” She ignored the question.


“I heard you have a document on how to secure personal accounts so you never get hacked. Please forward a copy to me. Looks like I’m going to be up late tonight.”  And with that, she left.


And then I woke up.


That was the dream I had. Or maybe you can classify it as a nightmare. Either way, it’s a useful thought exercise. I like this thought exercise because it fixes, in one stroke, the underlying problem we have today in security.


The core problem with security today isn’t about technology. It’s about misaligned incentives. We are trying to push security onto people, teams, and processes that just don’t want it. The push model of security hasn’t worked yet. If we want security, we need a pull model of security.


We need to align incentives so that everyone demands security from the start, and we need to give them systems and networks that are secure by design. We need to have serious conversations about the relative priorities of customer data, employee preference, and perceptions of employee productivity. We need to be open about the hard economics and soft costs (like reputation) around the cost of a breach. And if it’s cheaper to clean up after a breach than to prevent it, let’s say so.


Short of the extreme “all red team, all the time” thought exercise, I don’t have any easy answers. But I do have some suggestions that might help nudge the incentives in the right direction so maybe we can get a little more “pull”, allowing us to “push” a little less. I’ll describe some of those thoughts in an upcoming blog post.


Have a story or a dream for me about incentives that worked? Or went awry? Drop me a line on Twitter at @boblord. If you want to tell me confidentially, send me a DM! My settings allow you to DM me even if I don’t follow you.

Attack Surface Analyzer, a tool made by Microsoft and recommended in their Security Development Lifecycle Design Phase, is meant primarily for software developers to understand the additional attack surface their products add to Windows systems.


For defenders, this tool can be very useful.


The tool is meant to identify changes on a system that can have an impact on security, such as the creation of new services, modification of firewall rules, and more.


This data is very valuable to you as a defender: it'll allow you to understand the new software package you're analyzing, see if hardening measures are required, and see whether the software will work under your current policies, for example if you have strict firewall rules deployed to your regular workstations and servers.


Here's how it works and how to use the results.



Baseline System

Before you can analyze the attack surface changes caused by a certain software package, you need to scan a baseline system. Ideally, the baseline system will be as clean as possible. Using a system that has been configured with the corporate standards can be useful, to see if the installer would modify settings, but use a baseline with as little 3rd party software installed as possible.


Ideally, a clean, vanilla Windows installation should be used.

As of September 14, 2015, Microsoft ASA seemed to exhibit compatibility issues with Windows 10. Windows 8.1 was used for this article.


Baseline Scan

First, install the Attack Surface Analyzer.


The tool has command line options, and can be integrated into various testing and deployment processes, but for the purposes of this post, we'll use the GUI.


Make sure the "Run new scan" option is selected, then click "Run Scan".




The tool will then begin collecting data, which is saved to the CAB file specified on the previous window.



Once it has completed, it'll go back to the previous screen.




Product Scan

Installing Software


Install the software package you want to analyze, enabling all features.


If this software requires a reboot and/or needs to be run at least once before being completely set up, make sure this is done before running the product scan in the Attack Surface Analyzer.


In this example, we'll use the version of iTunes currently available on Apple.com:


Scanning the Product

The product scan is simply another scan, performed after the product installation, reboots, and first-run setup have been completed.


Run ASA, and select "Run a new scan". Make sure the filename is clear, then hit "Run scan".


Again, once the scan completes, we're brought back to the main window of ASA.


The Report

Now that we have two scans, we need to compare them and generate the report for the product we installed.



Select "Generate standard attack surface report" and make sure you select the right baseline and product files, then click "Generate".

Note: reports can be generated from any computer with the Attack Surface Analyzer installed, provided the scan files are available to it.


An HTML report will be generated, containing a summary, issues identified by the tool, as well as the attack surface report.


Using the Results

The results can be used to understand the software package and its potential issues, and also to identify potential hardening steps to be undertaken after the deployment of this software.

While interpreting results, be careful not to assume every single change is related to the software package. There could be other reasons why changes happened at the time the scans were run. This is why using a clean machine is important, as it'll reduce the amount of noise significantly.

Using iTunes as an example, in the Attack Surface section of the report, we can see multiple changes made to the system which you might not want to allow in a corporate environment.

Firewall rules were added. A potential mitigation for this would be to control Windows Firewall rules centrally using GPOs, ensuring that these rules do not persist on systems where iTunes is deployed.
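
If you want a quick look at exactly what the installer did to the local firewall before codifying anything in a GPO, a few lines of script are enough. Below is a minimal Python sketch that shells out to netsh and prints any Windows Firewall rule mentioning iTunes or Bonjour; the keyword filters are assumptions based on typical iTunes rule names, so adjust them to whatever your ASA report actually shows.

    import subprocess

    # Dump every Windows Firewall rule as text; netsh separates rules with blank lines.
    output = subprocess.run(
        ["netsh", "advfirewall", "firewall", "show", "rule", "name=all"],
        capture_output=True, text=True, check=True
    ).stdout

    # Print only the rules the installer is likely to have added.
    # The keywords are assumptions -- swap in whatever the report lists.
    for rule in output.split("\n\n"):
        if "iTunes" in rule or "Bonjour" in rule:
            print(rule)
            print("-" * 60)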


You can also see that iTunes registered multiple services, which are executed as Local System. Two of them are started automatically, and one is on demand. You can then use this information to create a GPO to deny the launch of these services, if they're not necessary to the particular use cases for this software on corporate workstations.
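
The same information can drive a quick interim remediation while the GPO is being prepared. The Python sketch below assumes the service names a typical iTunes install registers (they are illustrative, not taken from the report), shows each service's configuration with sc, and then sets it to disabled.

    import subprocess

    # Assumed service names for a typical iTunes install -- confirm against the ASA report.
    services = ["Apple Mobile Device Service", "Bonjour Service", "iPod Service"]

    for svc in services:
        # Show the current start type and the account the service runs under.
        subprocess.run(["sc", "qc", svc])
        # Stop the service from launching; a GPO is the durable way to enforce this.
        subprocess.run(["sc", "config", svc, "start=", "disabled"])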


If the installer had created local accounts, they would also be identified in this report. Here, we see that the previously mentioned services are linked to the NT Service account.


Someone who enjoys puns could say we're only scratching the surface of the information this tool can provide.


By running it on software packages that are to be deployed, you'll be able to assess threats, leverage the information in the report to update threat models for corporate assets as well as design hardening methodologies to mitigate some of the potential issues discovered.

In environments with large amounts of third-party software, the command line version of Microsoft ASA, combined with your favorite software deployment tools, can provide an easy, mostly automated way for software packaging teams to generate these reports for further analysis.
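
As a rough sketch of what such a pipeline could look like, the Python below chains a baseline scan, a silent install, a product scan, and report generation. The asa.exe path and its arguments are placeholders rather than the tool's documented flags; check the Attack Surface Analyzer help output and substitute the real options, along with your packaging format's silent-install switch.

    import subprocess
    import sys

    ASA = r"C:\Program Files\Attack Surface Analyzer\asa.exe"  # assumed install path

    def run(args):
        print(">", " ".join(args))
        subprocess.run(args, check=True)

    installer = sys.argv[1]                    # the package your deployment tool stages
    baseline, product = "baseline.cab", "product.cab"

    run([ASA, "/collect", baseline])           # placeholder flag: baseline scan
    run([installer, "/quiet"])                 # placeholder flag: silent install
    run([ASA, "/collect", product])            # placeholder flag: product scan
    run([ASA, "/report", baseline, product])   # placeholder flag: diff the scans into HTML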


Grab the installers to the most common software you have in your environment and get started!




In my work performing vulnerability assessments and penetration tests, I’m often confronted with the dilemma of dealing with a pesky intrusion prevention system (IPS) or web application firewall (WAF). Sometimes we know they’re there. Other times, they rear their ugly heads and force a days-long change management process for a whitelist request. Or, when testing web sites/applications, it’s not uncommon to find out that I can just connect via SSL/TLS and carry out my tests that would’ve otherwise been blocked over HTTP.
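
A quick way to sanity-check that behavior is to send the same harmless probe over both protocols and compare what comes back. Here is a minimal Python sketch using the requests library; the host and probe string are placeholders, and it should only ever be pointed at systems you are authorized to test.

    import requests

    HOST = "www.example.com"                  # placeholder -- an in-scope test target
    PROBE = "/?q=%27%20OR%20%271%27%3D%271"   # harmless SQLi-looking string an IPS may flag

    for scheme in ("http", "https"):
        url = f"{scheme}://{HOST}{PROBE}"
        try:
            r = requests.get(url, timeout=10)
            print(f"{scheme.upper():5} -> HTTP {r.status_code}, {len(r.content)} bytes")
        except requests.RequestException as exc:
            # A reset or timeout on HTTP but not HTTPS suggests cleartext-only inspection.
            print(f"{scheme.upper():5} -> blocked or failed: {exc}")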


When performing your scans and tests, I think it pays to consider the following – well in advance – to save yourself or others some hassle and (especially) a false sense of security:

  • Which technical control(s) might end up creating challenges? Perhaps it’s an IPS or a WAF. Maybe it’s a firewall or even an endpoint security technology running on the system(s) being tested. Maybe it’s a third-party monitoring system that places suspect behavior on a blacklist. Strangely enough, many people aren’t familiar with which controls they have in place, often because a third party is “handling everything”. These are considerations that need to be mastered rather than assumed.
  • What happens when scans are blocked? Do you stop there? Some people do, assuming that all’s well with security...penetration thwarted! That’s what the bad guys would encounter, no? Not really. The thing is, not every exploit originates from an automated scan. The savvy hacker (and security professional) knows that a lot can be done on the down-low. Such manual analysis/exploitation often flies under the radar of security controls. How are you going to address that? If you’re doing this work in terms of PCI DSS compliance, the Security Standards Council addresses the very issue of blocked scans/tests and the importance of whitelisting when it’s needed. It's good to see that this approach is becoming an industry-wide recommended practice.
  • How do you document your findings? If you uncover weaknesses only by manipulating the traditional, scoped assessment approach, is it still reality? Is a blacklisted, whitelisted, or “skirt the situation altogether” approach a true reflection of the security posture of the systems being tested? My take is that if it’s out there for others to play with, it should be fair game for all types of testing. If not, perhaps your online footprint needs to be reduced. Still, I see people requiring that the testing only be done in certain ways to, perhaps, have a more favorable outcome for the auditors and management to see. That’s a slippery slope, but it happens all the time.


In the end, only you will know what’s required, what’s best for your organization, and what your lawyers are willing to defend. This issue underscores the importance of properly scoping your security assessments. Leaving your security defenses up all the time doesn’t mean that something cannot fail and get you into a bind. Don’t let finely-tuned testing parameters end up creating a false sense of security that facilitates a breach or is otherwise held against you in the future.

In a previous blog post, I referenced some research on how people plan for, or rather how they fail to plan for, natural disasters like floods. At the end of the blog post I mentioned that people who have poor mental models about disasters fail to prepare fully. I keep coming back to the idea of mental models because it starts to explain why we have such a gap between security practitioners and senior executives.


I asked one CISO how he talks to other executives and the company’s board about a recent breach in the news. He told me that the CEO doesn’t have much time for security, so he uses a shorthand. He talks to the CEO in analogies. He explains that they’ve already put metaphorical locks on the front door, but to be sure that they don’t make the same mistakes as the latest company in the news, they’ll need to put locks on the back door.


This approach isn’t uncommon, but it has a few flaws. First, it doesn’t take much time to show that this analogy doesn’t work well. The way attacks work today, the attackers will not be prevented from breaking into this metaphorical house. Instead, they’ll get a ladder from the garage and climb in the upstairs bedroom window. Of course, you can put more locks on those windows, but again, the attackers are going to find a way in if your security strategy is based solely on locks (prevention). In this analogy, where are other defender activities like identification, detection, response, and recovery?


The second reason the lock analogy fails is that it tends to create a problem/solution dynamic. If it’s a bug, go fix it. But again, that’s not how the attackers work. In other spaces this approach can work. For example, if your web site is experiencing performance problems, you can assign an appropriate engineer to fix the problem. After some analysis, she’ll come back with recommendations. Maybe she’ll propose buying more machines/instances, or maybe there’s a bottleneck in the code that can be refactored given the new website load patterns. But in general, she’ll be able to fix the problem and it will stay fixed. That’s not how security works. When the defenders make a change that improves security, the attackers get to decide if the cost of the attack is worth continuing or not. Or perhaps they’re already in the network so far that the improved security doesn’t affect them. In many cases, they’ll modify their approach or tools to get past these changes, and the security improvements will be little more than a short-lived setback.


If you are an executive who views security decisions through the “problem/solution” lens, you’ll be tempted to offer the security team budget or headcount to “fix” the problem. Someone presented you with a problem, and you gave them a solution. Implicit in this transaction is a shift of the responsibility and accountability back to the security team. They asked for money for more locks, and you gave it to them. If there is a breach, the security team will be accountable, not you.


The metaphor of locks on doors isn’t the only one you’ve heard. Others include outrunning the next guy rather than the bear, hard crunchy exterior/soft chewy interior, seat belts, guard rails, airbags, and so on. Bruce Schneier also talked about the problems of metaphors:


It's an old song by now, one we heard after the 9/11 attacks in 2001 and after the Underwear Bomber's failed attack in 2009. The problem is that connecting the dots is a bad metaphor, and focusing on it makes us more likely to implement useless reforms.


Trying to communicate using the wrong mental models leads to real problems for security practitioners and the data they are trying to protect. So what are the right mental models?


The single biggest improvement in your mental models you can make is to understand that you are up against dedicated, human adversaries.


Until defenders, executives, and stakeholders in an organization internalize this fact, we will continue to see them miscommunicate and then plan and execute poorly. And the results will be security by tool rather than security by strategy. And that will lead to more breaches in the news (and many that never make the news!).


The key words to ponder are “dedicated” and “human”. In some cases, the attackers have a job, and they are being paid to attack you. Or maybe they feel a moral purpose in attacking you. Some work alone, some in teams with different specializations. But they are dedicated. And of course we know that they are human. But that has implications that most executives (and many security teams) haven’t pondered. It means they read about your latest acquisition and begin to probe the target company as a way into yours. They can correlate previous breach data with your employees to find a likely candidate for a spear phishing attack. They look for your technical debt. They find systems orphaned by a recent reorg or layoff. Humans can be creative, patient, and insightful.


As an aside, all of this makes security unlike any other part of your organization. No other part of your organization has the sort of dedicated, human adversaries that seek to benefit from the value of your data in the way security attackers will. What about the legal team, you may ask? Don’t they have dedicated and human adversaries? Yup. But let’s walk through the steps in a legal “attack”. First, the adversary notifies you that you are under attack. While there have been some high-profile announcements that a company’s networks and systems were under attack, it’s not common. As a reminder, the average time between intrusion and detection is measured in months and quarters. During that time, the attack takes place without anyone knowing. Next, both the attacker and defender play by roughly the same rules, and those rules are enforced by a neutral referee who decides whether both sides are abiding by them. You get the idea. The legal analogy isn’t even close to what infosec defenders deal with.


There’s a common saying in the CISO world that “security practitioners need to learn to speak the language of the business”. That’s absolutely true. There’s no doubt in my mind. We need to continue to learn how the business works, and we need to get better at saying “yes” while at the same time reducing risk. But that is necessary, not sufficient, for us to close the gap between security people and senior decision makers. The other major factor will be those senior decision makers breaking free of simplistic metaphors and faulty mental models. It’s never really been a communication gap. It’s been a mental model gap. Without shared mental models, communication will always be faulty.


Getting all levels of an organization aligned on the right mental models is clearly not an easy task. What will work in one organization isn’t what will work for another. Not all stakeholders understand the importance of spending time to learn how attacks work. However, I would propose a few things. If you are a security practitioner, don’t shy away from teaching others how attacks work. You should be looking at your security program through the lens of a kill-chain or attacker lifecycle model. When you present, teach people how you think. Explain why this next budget request will address a specific concern, but that others remain. Explain what you think your adversaries will do next. Resist the temptation to reduce those complex dynamics down to locks on doors. Focus your conversations on models, not metaphors. That’s true for all your communications, reports, quarterly plans, and elevator chats.


If you are a senior decision maker and don’t come from a security or intelligence background, you may find it challenging and time consuming to learn to think more like an attacker. Resist the urge to say “I don’t need to be a subject matter expert in security; that’s why I have a security team”. While that statement is true, just by saying it you prevent yourself from learning just enough to make smart decisions. You are already expert-enough in numerous other domains. Security and privacy awareness will be critical skills for your success in the coming years. Think ahead to the inevitable (yes, inevitable!) breach where outsiders will hold you accountable in potentially unexpected ways.  Assess your organization’s culture of security objectively rather than the way you hope it is. And make sure your actions match your words.


Have a story for me about mental models gone wrong? Drop me a line on Twitter at @boblord. If you want to tell me confidentially, send me a DM! My Twitter settings allow you to DM me even if I don’t follow you.

By now, you’ve probably caught wind of Mark Stanislav’s ten newly disclosed vulnerabilities last week, or seen our whitepaper on baby monitor security – if not, head on over to the IoTSec resources page.


You may also have noticed that Rapid7 isn’t really a Consumer Reports-style testing house for consumer gear. We’re much more of an enterprise security services and products company, so what’s the deal with the baby monitors? Why spend time and effort on this?


The Decline of Human Dominance

Well, this whole “Internet of Things” is in the midst of really taking off, which I’m sure doesn’t come as news. According to Gartner, we’re on track to see 25 billion-with-a-B of these Things operating in just five years, or something around three to four Things for every human on Earth.


Pretty much every electronic appliance in your home is getting a network stack, an operating system kernel, and a cloud-backed service, and it’s not like they have their own network full of routers and endpoints and frequencies to do all this on. They’re using the home’s WiFi network, hopping out to the Internet, and talking to you via your most convenient screen.


Pwned From Home

In the meantime, telecommuting increasingly blurs the lines between the “work” network and the “home” network. From my home WiFi, I check my work e-mail, hop on video conferences, commit code to GitHub (both public and private), and interact with Rapid7’s assets directly or via a cloud service pretty much every day. I know I’m not alone on this. The imaginary line between the “internal” corporate network and the “external” network has been a convenient fiction for a while, and it’s getting more and more porous as traversing that boundary makes more and more business sense. After all, I’m crazy productive when I’m not in the office, thanks largely to my trusty 2FA, SSO, and VPN.


So, we’re looking at a situation where you have a network full of Things that haven’t been IT-approved (as if that stopped anyone before) all chattering away, while we’re trying to do sensitive work like accessing and writing proprietary company data on the very same network.


Oh, and if the aftermarket testing we’ve seen (and performed) is to be believed, these devices haven’t had a whole lot of security rigor applied.


Compromising a network starts with knocking over that low-hanging fruit, that one device that hasn’t seen a patch in forever, that doesn’t keep logs, that has a silly password on an administrator-level account – pretty much, a device that has all of the classic misfeatures common to video baby monitors and every other early market IoT device.


Let’s Get Hacking

Independent research is critical in getting the point across that this IoT revolution is not just nifty and useful. It needs to be handled with care. Otherwise, the IoT space will represent a mountain of shells, pre-built vulnerable platforms, usable by bad guys to get footholds in every home and office network on Earth.


If you’re responsible for IT security, maybe it’s time to take a survey of your user base and see if you can get a feel for how many IoT devices are one hop away from your critical assets. Perhaps you can start an education program on password management that goes beyond the local Active Directory, and gets people to take all these passwords seriously. Heck, teach your users how to check and change defaults on their new gadgets, and how to document their changes for when things go south.
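
If you want a rough headcount before rolling out a formal program, even a small script gives you a feel for how much gear is chattering on a network. Here is a minimal Python sketch that sweeps a /24 for ports commonly exposed by consumer IoT devices; the subnet and port list are assumptions, and it should only be run on networks you own or are authorized to scan.

    import ipaddress
    import socket

    SUBNET = "192.168.1.0/24"        # assumed home/office subnet -- change to yours
    PORTS = [80, 443, 554, 8080]     # web UIs, TLS, RTSP camera streams, alternate HTTP

    def is_open(ip, port, timeout=0.3):
        """Return True if a TCP connect to (ip, port) succeeds within the timeout."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((str(ip), port)) == 0

    for ip in ipaddress.ip_network(SUBNET).hosts():
        hits = [p for p in PORTS if is_open(ip, p)]
        if hits:
            print(f"{ip}: listening on ports {hits}")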


In the meantime, check out our webinar tomorrow for the technical details of Mark’s research on video baby monitors, and join us over on Reddit and “Ask Me Anything” about IoT security and what we can do to get ahead of these problems.

[update 3pm EST Sept 9] This AMA is now live! The direct link is here: https://www.reddit.com/r/IAmA/comments/3ka38q/we_are_professional_iot_hackers_and_researchers/

Join us and ask your questions!



Following up on their research on IoT baby monitor vulns, Mark Stanislav & Tod Beardsley will be doing an Ask Me Anything (AMA) on Reddit in r/IAMA this Wednesday, September 9, at 3:30pm EST.


They'll be answering any/all of your questions on Internet of Things (IoT) security, as well as their baby monitor security research findings. Make sure to join us in the subreddit right here: https://www.reddit.com/r/iama/


Need proof? Here you go:

“Those fools. They didn’t even bother to do X. And everyone knows you have to do X.”


If you’ve been in Infosec for even a short time, you’ve seen this sort of statement, whether explicit or implicit, about something in the news. It comes up often after a company has suffered a breach. And it’s often true. The company should have done X. Everyone knows you need to do X. Even my dad knows that. But then again, the security people making these comments often work at companies that really should be doing Y and Z, but may not be. What is it that makes people feel they can criticize others while not having their own house in order?


I’ve also noticed a few examples of executives being grilled by government officials. The line of questioning is harsh, with no wiggle room. The questioners only want to know about the state of security, and are not interested in hearing comments about how difficult it is to make things secure, or how programs are now in place to address the issues.


This gap between the viewpoints of insiders and outsiders is hugely important, and will be more so in the future.


Outsiders might be members of the press, commentators, Twitter accounts, regulators, or opposing counsel during a shareholder lawsuit. They may be schooled in security matters, and may even be security experts. Or they may be non-technical people who reuse passwords, but will point fingers at you when you failed to implement a robust PKI key management scheme on your complex multi-national WAN.


But it doesn’t matter. Their position is what it is. And it’s hard for the defenders on the inside to see eye to eye with the outsiders. Here are a few of the differences I’ve noted.


Insiders measure effort. Outsiders measure outcomes.


Insiders understand that they must find a way to do more with less. Outsiders look at how the attacker succeeded.


Insiders prioritize against many possible outcomes, breach scenarios, and black swan events. The actual crisis may not look like the ones theorized. Outsiders have 20/20 hindsight and ask incredulously how it could have happened.


Insiders focus on the many things that went well. Outsiders focus on the one thing that went wrong.


Insiders know it’s hard to hire security talent, and hard to change an organization’s business practices to be more secure. Outsiders look at a failure of management to make security a priority. Period.


Insiders know it’s hard to get management attention without sounding alarmist and like fear mongers. Outsiders wonder why the security team didn’t escalate on a daily basis.


Insiders know that security budgets are not limitless, and the security team has limited influence whenever people feel they are slowing the business down. Outsiders know that without trust of your customers, you have no business.


Insiders know you can’t log everything. Not every packet, netflow record, OS log, and app log. That would be too expensive. Outsiders point out that disk is cheap, even at scale. They point out that your lack of forensics data increased the time to respond and recover after the breach was detected. And that there are still lingering questions about how the attack happened. And that it would have been cheaper to buy the disks to support the logs than to deal with uncertainty.


Insiders know it’s nearly impossible to keep tabs on all the machines at the company. Outsiders point to the one orphaned machine that was attacked and think you should have known.


Outsiders will ask what went wrong. Then they will ask what you are doing to prevent this from happening again. When you tell them, they will respond by saying “Well, then that’s what you should have been doing all along”. This is a key element of the insider/outside friction.


Insiders look for a root cause analysis. Outsiders will look at superficial symptoms. When they give guidance to the insiders, it can generate more friction if the insiders think they are being tasked with fixing symptoms rather than causes.


Insiders know that you have to allow employees the right to bring their own devices onto the corporate network to be productive. Outsiders know that your breach was caused by an unmanaged employee device, and that it was completely foreseeable and preventable.


As in many of the examples above, the divide between insiders and outsiders is shown after a breach. This “before and after” analysis offers the most striking examples of this divide.


If you are caught in a line of questioning from an outsider, how will you answer their pointed and loaded questions? Some questions can be all rhetoric, and even attempting to answer can cause more issues. Questions might include variations on the following.


  • Why was this group of users exempt from 2FA policy for so long?
  • Was it a policy to ignore warnings from your SIEM?
  • Why did you allow source code on laptops, knowing that’s a common exfiltration path for criminals?
  • Why did you prioritize the convenience of your employees over safeguarding the trust given to you by your customers?
  • If the data had no immediate value to the company, and only had value to criminals, why did you retain it rather than deleting it?
  • Given that the excessive access rights for this one user were abused by the attackers, what process had you gone through to prevent this exact attack path? How exactly did you fail?
  • As a security professional, you know that most attacks involve the exploitation of known vulnerabilities. Please explain again why you failed to guarantee software updates on so many systems. After all, that’s job #1 for reducing risk. Do you disagree?
  • You said you prioritized the desire of the sales/business development team over the concerns of the security team. How are your sales/growth numbers looking since the breach?
  • Knowing that executives are prime targets for cyber criminals, why did you allow them to opt out of software updates and phishing training?
  • Explain why you allowed a network design to include unsecured connections to remote offices that failed to meet your own written security standards.
  • Given what we know about how valuable customer and employee data are, and given all the other similar attacks in the news, can you explain why you failed to encrypt the customer and employee data?
  • You admitted that the machines in the attack path were no longer used for much; that they were orphaned, not managed, and unpatched. And yet you failed to remove this obvious risk from your network. Why?
  • Why was this not raised to the CEO? (or if you’re the CEO: Why did you not take appropriate action?)
  • Why was this decision made at the Manager level rather than the VP level or by the CEO?
  • You claim you spend millions of dollars on your information security program. Do you think you got your money’s worth?


Some questions are much more pointed or slanted than others. But if you were asked these questions, perhaps in public or under oath, would you feel good about your answers? Would you feel confident in not just your efforts, but in the results? Would outsiders feel you did the best you could, or would they feel that you hadn’t taken security seriously?


Have a story for me about being judged by outsiders? Drop me a line on Twitter at @boblord. If you want to remain anonymous, send me a DM.  My settings allow you to DM me even if I don’t follow you.

Usually, these disclosure notices contain one, maybe two vulnerabilities on one product. Not so for this one; we’ve got ten new vulnerabilities to disclose today.


If you were out at DEF CON 23, you may have caught Mark Stanislav’s workshop, “The Hand that Rocks the Cradle: Hacking IoT Baby Monitors.” You may have also noticed some light redaction in the slides, since during the course of that research, Mark uncovered a number of new vulnerabilities across several video baby monitors.


Vendors were notified, CERT/CC was contacted, and CVEs have all been assigned, per the usual disclosure policy, which brings us to the public disclosure here.


For more background and details on the IoT research we've performed here at Rapid7, we've put together a collection of resources on this IoT security research. There, you can find the whitepaper covering many more aspects of IoT security, some frequently asked questions around the research, and a registration link for next week's live webinar with Mark Stanislav and Tod Beardsley.







Summary of the vulnerabilities disclosed (vulnerability: device, with the access vector in parentheses where it could be recovered):

  • Predictable Information Leak: iBaby M6
  • Backdoor Credentials: iBaby M3S (Local Net, Device)
  • Backdoor Credentials: Philips In.Sight B120/37 (Local Net, Device)
  • Reflective, Stored XSS: Philips In.Sight B120/37
  • Direct Browsing: Philips In.Sight B120/37
  • Authentication Bypass: Summer Baby Zoom Wifi Monitor & Internet Viewing System
  • Privilege Escalation: Summer Baby Zoom Wifi Monitor & Internet Viewing System
  • Backdoor Credentials: Lens Peek-a-View (Local Net, Device)
  • Backdoor Credentials: Gynoii (Local Net)
  • Backdoor Credentials: TRENDnet WiFi Baby Cam TV-IP743SIC

Disclosure Details


Vendor: iBaby Labs, Inc.

The issues for the iBaby devices were disclosed to CERT under vulnerability note VU#745448.


Device: iBaby M6

The vendor's product site for the device assessed is https://ibabylabs.com/ibaby-monitor-m6

Vulnerability R7-2015-11.1: Predictable public information leak (CVE-2015-2886)

The web site ibabycloud.com suffers from a direct object reference vulnerability: any authenticated user of the ibabycloud.com service is able to view camera details for any other user, including video recording details.


The object ID parameter is eight hexadecimal characters, corresponding with the serial number for the device. This small object ID space enables a trivial enumeration attack, where attackers can quickly brute force the object IDs of all cameras.


Once an attacker is able to view an account's details, broken links provide a filename that is intended to show available "alert" videos that the camera recorded. Using a generic AWS CloudFront endpoint (found by sniffing the iOS app's functionality), the attacker can append the harvested filename to that URL and access the data from the account. This effectively allows anyone to view videos created by that camera and stored on the ibabycloud.com service, without any further authentication, until those videos are deleted.


Relevant URLs


Additional Details

The ibabycloud.com authentication procedure has been non-functional since at least June 2015, continuing through the publication of this report in September 2015. These errors began after testing for this research was conducted, and they currently prevent logins to the cloud service. That noted, it may still be possible to obtain a valid session via the API and subsequently leverage the site and API to gain these details.



Today, this attack is more difficult without prior knowledge of the camera's serial number, as all logins are disabled on the ibabycloud.com website. Attackers must, therefore, acquire specific object IDs by other means, such as sniffing local network traffic.


In order to avoid local network traffic cleartext exposure, customers should inquire with the vendor about a firmware update, or cease using the device.



Device: iBaby M3S

The vendor's product site for the device assessed is https://ibabylabs.com/ibaby-monitor-m3s


Vulnerability R7-2015-11.2, Backdoor Credentials (CVE-2015-2887)

The device ships with hardcoded credentials, accessible from a telnet login prompt and a UART interface, which grants access to the underlying operating system. Those credentials are detailed below.


Operating System (via Telnet or UART)

  • Username: admin
  • Password: admin



In order to disable these credentials, customers should inquire with the vendor about a firmware update. UART access can be limited by not allowing untrusted parties physical access to the device. A vendor-provided patch should disable local administrative logins, and in the meantime, end-users should secure the device’s housing with tamper-evident labels.


Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Mon, Jul 06, 2015: Vendor reply, requesting details for ticket #4085

Tue, Jul 07, 2015: Disclosure to vendor

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Wed, Sep 02, 2015: Public disclosure



Vendor: Philips Electronics N.V.

The issue for the Philips device was disclosed to CERT under vulnerability note VU#569536.


Device: Philips In.Sight B120/37

The vendor's product site for the device assessed is http://www.usa.philips.com/c-p/B120_37/in.sight-wireless-hd-baby-monitor


Vulnerability R7-2015-12.1, Backdoor Credentials (CVE-2015-2882)

The device ships with hardcoded and statically generated credentials which can grant access to both the local web server and operating system.

The operating system "admin" and "mg3500" account passwords are present due to the stock firmware used by this camera, which is also used by other cameras on the market today.

The web service "admin" statically-generated password was first documented by Paul Price at his blog[1].

In addition, while the telnet service may be disabled by default on the most recent firmware, it can be re-enabled via an issue detailed below.


Operating System (via Telnet or UART)

  • Username: root
  • Password: b120root


Operating System (via Telnet or UART)

  • Username: admin
  • Password: /ADMIN/


Operating System (via Telnet or UART)

  • Username: mg3500
  • Password: merlin


Local Web Server

Reachable via http://{device_ip}/cgi-bin/{script_path}

  • Username: user
  • Password: M100-4674448


Local Web Server

Reachable via http://{device_ip}/cgi-bin/{script_path}

  • Username: admin
  • Password: M100-4674448
  • A recent update changes this password, but the new password is simply the letter 'i' prefixing the first ten characters of the MD5 hash of the device's MAC address (see the sketch below).
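
For illustration, here is a minimal sketch of that derivation in Python. The MAC address shown is a placeholder, and the exact normalization of the MAC (with or without separators, upper or lower case) is an assumption that would need to be verified against a real device.

import hashlib

def derive_admin_web_password(mac_address: str) -> str:
    # Assumed normalization: lowercase hex with ':' and '-' separators stripped.
    normalized = mac_address.lower().replace(":", "").replace("-", "")
    digest = hashlib.md5(normalized.encode("ascii")).hexdigest()
    # The letter 'i' followed by the first ten characters of the MD5 hash,
    # per the behavior described above.
    return "i" + digest[:10]

# Placeholder MAC address for illustration only.
print(derive_admin_web_password("00:11:22:33:44:55"))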


Vulnerability R7-2015-12.2, Reflective and Stored XSS (CVE-2015-2883)

A web service used on the backend of Philips' cloud service to create remote streaming sessions is vulnerable to reflective and stored XSS. Subsequently, session hijacking is possible due to the lack of an HttpOnly flag on the session cookie.

When accessing the Weaved cloud web service[2] as an authenticated user, multiple pages have a mixture of reflective and stored XSS in them, allowing for potential session hijacking. With this access, a valid streaming session could be generated and eavesdropped upon by an attacker. Two such examples are:


Vulnerability R7-2015-12.3, Direct Browsing via Insecure Streaming (CVE-2015-2884)

The method for allowing remote viewing uses an insecure transport, does not offer secure streams protected from attackers, and does not offer sufficient protection for the camera's internal web applications.

Once a remote viewing stream has been requested, a proxy connection to the camera's internal web service via the cloud provider Yoics[3] is bound to a public hostname and port number. Testing showed these port numbers ranging from roughly 32,000 to 39,000, and the bound port is tied to a hostname matching the pattern proxy[1,3-14].yoics.net, limiting the potential number of host and port combinations to an enumerable level. Given this manageable attack space, attackers can test for an HTTP 200 response in a reasonably short amount of time.
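
To make "enumerable" concrete, here is a quick back-of-envelope calculation based on the host and port ranges observed above; the probe rate is an assumption used only for illustration.

# Hosts follow the pattern proxy{1,3-14}.yoics.net; ports were observed
# in roughly the 32,000-39,000 range.
hosts = [f"proxy{n}.yoics.net" for n in [1] + list(range(3, 15))]
ports = range(32000, 39001)

total = len(hosts) * len(ports)
print(f"{len(hosts)} hosts x {len(ports)} ports = {total:,} combinations")

# At an assumed 50 probes per second, a full sweep of the space takes roughly:
print(f"~{total / 50 / 60:.0f} minutes")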


Once found, administrative privilege is available without authentication of any kind to the web scripts available on the device. Further, by accessing a streaming playlist URL (a UTF-8 M3U playlist, known as an "m3u8" URL), an attacker can reach a live video/audio stream from the camera, which appears to stay open for up to one hour on that host/port combination. Testing revealed no blacklist or whitelist restriction on which IP addresses can access these URLs.


Relevant URLs

  • Open audio/video stream of a camera: http://proxy{1,3-14}.yoics.net:{32000-39000}/tmp/stream2/stream.m3u8 [no authentication required]
  • Enable Telnet service on camera remotely: http://proxy{1,3-14}.yoics.net:{32000-39000}/cgi-bin/cam_service_enable.cgi [no authentication required]



In order to disable the hard-coded credentials, customers should inquire with the vendor about a firmware update. UART access can be limited by not allowing untrusted parties physical access to the device. A vendor-provided patch should disable local administrative logins, and in the meantime, end-users should secure the device’s housing with tamper-evident labels. In order to avoid the XSS and cleartext streaming issues with Philips' cloud service, customers should avoid using the remote streaming functionality of the device and inquire with the vendor about the status of a cloud service update.


Additional Information

Prior to publication of this report, Philips confirmed with Rapid7 that the tested device was discontinued by Philips in 2013, and that the current manufacturer and distributor is Gibson Innovations. Gibson has developed a solution for the identified vulnerabilities, and expects to make updates available by September 4, 2015.


Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Mon, Jul 06, 2015: Vendor reply, requesting details

Tue, Jul 07, 2015: Philips Responsible Disclosure ticket number 15191319 assigned

Fri, Jul 17, 2015: Phone conference with vendor to discuss issues

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Thu, Aug 27, 2015: Contacted by Weaved to validate R7-2015-12.2

Tue, Sep 01, 2015: Contacted by Philips regarding the role of Gibson Innovations

Wed, Sep 02, 2015: Public disclosure



Vendor: Summer Infant

The issues for the Summer Infant device were disclosed to CERT under vulnerability note VU#837936.


Device: Summer Baby Zoom WiFi Monitor & Internet Viewing System

The vendor's product site for the device assessed is http://www.summerinfant.com/monitoring/internet/babyzoomwifi.


Vulnerability R7-2015-13.1, Authentication Bypass (CVE-2015-2888)

An authentication bypass allows for the addition of an arbitrary account to any camera, without authentication.

The web service MySnapCam[4] is used to support the camera's functionality, including account management for access. A URL retrievable via an HTTP GET request can be used to add a new user to the camera. The request does not require a valid session from any of the camera's administrators, so anyone who requests the URL with their own details against any camera ID can have access added to that device.

After a new user is successfully added, an e-mail will then be sent to an e-mail address provided by the attacker with authentication details for the MySnapCam web site and mobile application. Camera administrators are not notified of the new account.


Relevant URL


Vulnerability R7-2015-13.2, Privilege Escalation (CVE-2015-2889)

An authenticated, regular user can access an administrative interface that fails to check for privileges, leading to privilege escalation.


A "Settings" interface exists for the camera's cloud service administrative user and appears as a link in their interface when they log in. If a non-administrative user is logged in to that camera and manually enters that URL, they are able to see the same administrative actions and carry them out as if they had administrative privilege. This allows an unprivileged user to elevate account privileges arbitrarily.


Relevant URL



In order to avoid exposure to the authentication bypass and privilege escalation, customers should use the device in a local network only mode, and use egress firewall rules to block the camera from the Internet. If Internet access is desired, customers should inquire about an update to Summer Infant's cloud services.


Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Tue, Sep 01, 2015: Confirmed receipt by vendor

Wed, Sep 02, 2015: Public disclosure



Vendor: Lens Laboratories(f)

The issues for the Lens Laboratories(f) device were disclosed to CERT under vulnerability note VU#931216.


Device: Lens Peek-a-View

The vendor's product site for the device assessed is http://www.amazon.com/Peek---view-Resolution-Wireless-Monitor/dp/B00N5AVMQI/


Of special note, it has proven difficult to find a registered domain for this vendor. All references to the vendor point at Amazon directly, but Amazon does not appear to be the manufacturer or vendor.


Vulnerability R7-2015-14, Backdoor Credentials (CVE-2015-2885)

The device ships with hardcoded credentials, accessible from a UART interface (which grants access to the underlying operating system) and via the local web service (which grants application access through the web UI).

Due to weak filesystem permissions, the local OS ‘admin’ account has effective ‘root’ privileges.


Operating System (via UART)

  • Username: admin
  • Password: 2601hx


Local Web Server

Site: http://{device_ip}/web/

  • Username: user
  • Password: user


Local Web Server

Site: via http://{device_ip}/web/

  • Username: guest
  • Password: guest



In order to disable these credentials, customers should inquire with the vendor about a firmware update. UART access can be limited by not allowing untrusted parties physical access to the device. A vendor-provided patch should disable local administrative logins, and in the meantime, end-users should secure the device’s housing with tamper-evident labels.


Disclosure Timeline

Sat, Jul 04, 2015: Attempted to find vendor contact

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Wed, Sep 02, 2015: Public disclosure



Vendor: Gynoii, Inc.

The issues for the Gynoii devices were disclosed to CERT under vulnerability note VU#738848.


Device: Gynoii

The vendor's product site for the device assessed is http://www.gynoii.com/product.html


Vulnerability R7-2015-15, Backdoor Credentials (CVE-2015-2881)

The device ships with hardcoded credentials, accessible via the local web service, giving local application access via the web UI.


Local Web Server

Site: http://{device_ip}/admin/

  • Username: guest
  • Password: guest


Local Web Server

Site: http://{device_ip}/admin/

  • Username: admin
  • Password: 12345



In order to disable these credentials, customers should inquire with the vendor about a firmware update.


Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Tue, Jul 21, 2015: Disclosure to CERT

Fri, Jul 24, 2015: Confirmed receipt by CERT

Wed, Sep 02, 2015: Public disclosure



Vendor: TRENDnet

The issues for the TRENDnet device were disclosed to CERT under vulnerability note VU#136207.


Device: TRENDnet WiFi Baby Cam TV-IP743SIC

The vendor's product site for the device under test is http://www.trendnet.com/products/proddetail.asp?prod=235_TV-IP743SIC


Vulnerability R7-2015-16: Backdoor Credentials (CVE-2015-2880)

The device ships with hardcoded credentials, accessible via a UART interface, giving local, root-level operating system access.


Operating System (via UART)

  • Username: root
  • Password: admin



In order to disable these credentials, customers should inquire with the vendor about a firmware update. UART access can be limited by not allowing untrusted parties physical access to the device. A vendor-provided patch should disable local administrative logins, and in the meantime, end-users should secure the device’s housing with tamper-evident labels.


Disclosure Timeline

Sat, Jul 04, 2015: Initial contact to vendor

Mon, Jul 06, 2015: Vendor reply, details disclosed to vendor

Thu, Jul 16, 2015: Clarification sought by vendor

Mon, Jul 20, 2015: Clarification provided to vendor

Tue, Jul 21, 2015: Disclosure to CERT

Wed, Sep 02, 2015: Public disclosure



Not Just Baby Monitors

As you can see, there were several new findings across a range of vendors, all operating in the same space. Here at Rapid7, we believe this is not unique to the video baby monitor industry in particular, but is indicative of a larger, systemic problem with IoT in general. We've put together a collection of IoT resources, including a whitepaper and a FAQ, covering these issues, which should fill you in on where we're at on this IoT security journey. Join us next week for a live webinar where Mark Stanislav and Tod Beardsley will discuss these issues further, or just use the #IotSec hashtag on Twitter to catch our attention with a question or comment.


In the meantime, keep an eye on those things keeping an eye on your infants and toddlers.


Update (Sep 02, 2015): Gynoii acknowledged the above research shortly after publication and is assessing appropriate patch strategies.

Update (Sep 02, 2015): iBaby Labs communicated that access token expiration and secure communication channels have been implemented.

Update (Sep 02, 2015): Summer Infant tweeted that all reported issues have been resolved.

Update (Sep 03, 2015): TRENDnet reports updated firmware available here (version 1.0.3), released on Sep 02, 2015.


[1] http://www.ifc0nfig.com/a-close-look-at-the-philips-in-sight-ip-camera-range/

[2] http://www.weaved.com/

[3] https://www.yoics.net

[4] http://www.mysnapcam.com/

Prior to RStudio version 0.99.473, the RStudio integrated toolset for Windows was installed and updated in an insecure manner. A remote attacker could run arbitrary code in the context of the system Administrator by leveraging two particular flaws in the update process, and as the RStudio user via the third update process flaw. This advisory will discuss all three issues.

Since these issues were reported, RStudio version 0.99.473 has been released. This version addresses all of the concerns below. End-users and distributors of RStudio are encouraged to update to this latest version. In addition, R version 3.2.2 was released and now uses HTTPS internally for package updates by default.

RStudio, Inc. has shown it is committed to security and continues to be remarkably responsive to this set of vulnerability disclosures, both by fixing its own update implementations and by working with the larger R community to address related issues.

R7-2015-10.1: Cleartext-only Initial Installation of RStudio

The initial, recommended installation procedure is conducted over HTTP, rather than HTTPS, and the download source page is delivered over HTTP as well, with no HTTPS equivalent. By poisoning the initial installation, the attacker can gain control of the target with Administrator privileges.

R7-2015-10.2: Cleartext-only Installation of R Packages

The procedure for installing common R packages (programming libraries) also uses cleartext HTTP, and there appears to be no way to specify an HTTPS or other cryptographically secure source. Upon using a poisoned R package, an attacker can gain control of the target with the privileges of the R user.

In most cases, the first and second vulnerabilities can only be leveraged by an attacker who has access to the local LAN environment, or who has some level of control over the upstream network. Attacks would involve either performing man-in-the-middle network attacks or DNS poisoning at the time of installation of the affected components.

R7-2015-10.3: Cleartext-only Update Procedure

As with library installations, in-product library and application updates are also conducted over cleartext HTTP. As with initial installation, cryptographic hashes are provided in order to validate updates, but those hashes are themselves delivered in the clear.
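
To see why a hash delivered over the same cleartext channel adds little protection, consider the following minimal sketch (a generic illustration, not RStudio's actual update code): an attacker who can rewrite the HTTP response simply recomputes the hash over the substituted payload, and the client-side check still passes.

import hashlib

def attacker_rewrites(malicious_payload: bytes):
    # The attacker controls the cleartext channel, so they can substitute the
    # payload and recompute the advertised hash to match it.
    return malicious_payload, hashlib.sha256(malicious_payload).hexdigest()

def client_integrity_check(payload: bytes, advertised_hash: str) -> bool:
    # The client compares against whatever hash it received over that same channel.
    return hashlib.sha256(payload).hexdigest() == advertised_hash

payload, advertised = attacker_rewrites(b"malicious update")
print(client_integrity_check(payload, advertised))  # True: tampering goes undetected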

Product Description

RStudio is a development environment for R. R is a programming language used largely for developing software in the fields of statistical analysis, machine learning, data science, and similar domains. RStudio is maintained by RStudio, Inc., and is an affiliated project with the Foundation for Open Access Statistics (FOAS).

R (the language) and the Comprehensive R Archive Network (CRAN) are maintained by the R Foundation.

While it is difficult to gauge how many installations are affected, R is the third most popular software package mentioned in academic papers, and the fifth most cited analytics software package in current job listings, according to The Popularity of Data Analysis Software by Robert A. Muenchen.

Installing and updating RStudio requires Administrator privileges.


These vulnerabilities were reported by Tod Beardsley of Rapid7, Inc.


An unauthenticated, remote attacker may exploit these vulnerabilities under certain circumstances. While local machine access is not required, local or upstream network access is. An attacker would need the ability to alter the cleartext communication between the target and the intended source of software -- www.rstudio.org and download1.rstudio.org in the case of R7-2015-10.1 and R7-2015-10.3, or several package-hosting domains including cran.rstudio.org and www.stats.ox.ac.uk in the case of R7-2015-10.2. The easiest way to achieve this would be through a DNS hijacking attack on these domains. This would require an attacker capable of either altering DNS responses in flight by posing as a legitimate DNS server, or poisoning a cached response on the DNS server.

A more complex attack would involve a Man-in-the-Middle (MITM) attack, intercepting the cleartext TCP/IP packets or cleartext HTTP responses and rewriting legitimate responses to deliver attacker-supplied data.

Both the DNS Hijacking and MITM attacks would require the attacker to either have some control over the local network, the network providing DNS or HTTP responses, or a hop in between. In the case of public, shared Wifi, this is easily accomplished using standard Wireless Access Point spoofing techniques.

Note that in the case of R7-2015-10.3 and R7-2015-10.2 for package installation, package update, and application updates, the existing mechanism for obtaining packages and updates relies on providing a trusted URL over an untrusted channel. Altering this response is all that is needed in order to successfully exploit the target.

To illustrate, a request for an update, which happens automatically upon startup, can be simulated as:

$ curl -iL "http://www.rstudio.org/links/check_for_update?version=0.1.1&os=windows&format=kvp"
HTTP/1.1 200 OK
Date: Tue, 30 Jun 2015 14:08:46 GMT
Server: Apache/2.2.29
Content-Length: 190
Content-Type: text/html; charset=UTF-8


Patches and Mitigations

Updating to the latest version of RStudio effectively remedies all of the issues described here. Updates can be obtained at https://www.rstudio.com/products/rstudio/download/. Note the HTTPS transport.

In addition, the core R language has also been updated to prefer secure package retrieval. This is detailed in the R Release notes.

What follows is advice for securing environments in the event patching production systems is delayed. These strategies should be considered short-term, partial solutions until updated versions are obtained and installed.

In the absence of a patch, one practical mitigation for the user is to ensure that the local network is, in fact, trusted, and that only trusted users and machines are permitted to connect to it during the installation and update of RStudio and R packages. While these attacks may occur upstream from the target's local network, such an attack would take significantly more control and planning to execute.


One way to mitigate this issue today is to obtain a copy of the RStudio installer from a trusted party over a reasonably secure mechanism, such as an HTTPS website or a signed PGP package delivery system. Debian's apt-based distribution, for example, provides a normally highly secure package installation platform.
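
As a sketch of that kind of out-of-band check, the snippet below compares a locally computed SHA-256 digest of a downloaded installer against a digest obtained over a trusted channel. The file name and expected digest are placeholders, not values published by RStudio.

import hashlib

def sha256_of(path: str) -> str:
    # Compute the SHA-256 digest of a file, reading it in 1 MB chunks.
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values for illustration; obtain the expected digest over a
# trusted channel (an HTTPS page, a signed release note, or a known-good copy).
expected = "0000000000000000000000000000000000000000000000000000000000000000"
actual = sha256_of("RStudio-0.99.473.exe")
print("OK" if actual == expected else "Digest mismatch: do not install")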


RStudio does provide a mechanism to install packages locally, given a local path, which can be used to mitigate the direct effects of R7-2015-10.2. However, this depends on the user having some external, trusted mechanism to validate the packages to be installed, and it would prevent the user from using the built-in install and update packages functionality.


While package updates do not appear to occur automatically, application update checks happen with every start. This can be avoided by unchecking the "Automatically notify me of updates to RStudio" option under "Tools: Global Options: General," and not checking for updates in-product.

Further Reading

JJ Allaire published an in-depth blog post entitled, Secure Package Downloads for R, at http://blog.rstudio.org/2015/08/17/secure-https-connections-for-r/

Tal Galili highlighted the switch from HTTP to HTTPS as a default mechanism for package updates in his blog post, http://www.r-statistics.com/2015/08/r-3-2-2-is-released/

The R Consortium has produced a backwards-compatible best practices guide for using R securely, at https://www.r-consortium.org/news/blogs/2015/08/best-practices-using-r-securely .

Disclosure Timeline

This vulnerability advisory was prepared in accordance with Rapid7's disclosure policy. Initial contact was attempted through the R Project (r-project.org), and then successfully established with the vendor directly.

Thu, Jun 11, 2015: Initial discovery by Tod Beardsley, Rapid7, Inc.
Thu, Jun 11, 2015: Attempt to contact security@r-project.org (failed)
Mon, Jun 29, 2015: Attempt to contact security@rstudio.com
Tue, Jun 30, 2015: Response from the vendor and details provided
Mon, Jul 13, 2015: Analysis and updates by the vendor
Tue, Jul 14, 2015: Disclose details to CERT/CC (cert@cert.org), VU#734892
Wed, Aug 12, 2015: RStudio 0.99.473 is released, addressing all issues.
Fri, Aug 14, 2015: R 3.2.2 is released, supporting default HTTPS updates
Wed, Aug 19, 2015: Status updates are provided by the vendor
Fri, Aug 28, 2015: Public disclosure

The timeline for disclosure and the associated blog posts referenced above show that not only has RStudio addressed these issues quickly and effectively, but that they worked closely with the larger R community to ensure that the commercial and academic user bases for R and RStudio can operate in a safe and secure manner. RStudio, Inc. is a model Internet citizen when it comes to vulnerability handling and coordinated disclosure.


Culture of Security

Posted by boblord, Aug 27, 2015

I sometimes talk to executives about how employees and their fellow executives at the company view security, and about cultural issues around security. They often tell me that, generally speaking, people are on board with the necessary work to keep things safe. I’ve yet to hear someone tell me that people pride themselves on working around security teams and programs so they can run the business more efficiently. I’ve heard a number of stories that include facts about their compliance efforts, how they train employees, and sometimes about how they use metrics to improve their security posture.


Those are all great things, and in most cases I learn something. But here’s what I’m up to: I’m asking a direct set of questions. In every case I can remember, I get a positive view of how the company thinks about security. Sure, there are staffing issues, and many problems that keep people up at night, but the general answer I get is “we have a culture of security at our company.”


Then I switch gears and start asking indirect questions, questions that may reveal a different slant on the issue of security culture. For example, I ask “Tell me about a time when your IT and Security teams needed to make a controversial and unpopular change,” and “Tell me about how management supported them,” and “What was the outcome months later?” Sometimes I get a good story showing leadership from the IT/Security teams, and backing from management. But I’ve also gotten responses like “Bob, I’m proud to say that our IT and Security teams have never ruffled any feathers. They’ve always found solutions that improve security without upsetting anyone or adding any friction to the business.” I don’t know about you, but that sort of answer does not ring true. It’s more likely that the security team isn’t pushing the company to evolve to be more secure, or isn’t capable of doing so. Other indirect questions also tend to back up this conclusion.


The culture of security in an organization is defined more by the way it encourages or discourages hard decisions than by any other factor. Talking about metrics is good, training is great, and so on. Everyone has to do those things and they’re hard. But on the topic of culture, we should be asking ourselves about stories. Stories are the way real cultures are made. Stories tell of love and fortunes won, and lost. They give an emotional energy to work that no policies and procedures manual can. (Though if you are good, you’ll write those otherwise boring and lifeless documents in a way that will engage people.)


Stories also are the way people communicate the real incentives in an organization. Sure, there are performance reviews, bonus plans, and team charters. But those corporate tools will often struggle to keep up with the way people share stories at the water cooler and over drinks.


Consider this story: The CEO is traveling overseas and breaks her phone. She needs to log into her mail account and the system is asking for her 2-factor code. Since the 2-factor code is generated on her phone, she’s unable to get access to mail. She has to buy a new phone, and spend time on the phone with the security team (who are pulled in to make sure it’s really the CEO rather than a social engineering attack/test) to get it reinstated. Time zone differences and getting the right people involved result in her losing over a day of productivity. She’s clearly upset.


How would this story play out in your organization?

(a) The CEO cools down after a day and gets over her frustration. At the next company all hands, she tells the complete story. She explains she was upset at the downtime while traveling on business. Then she explains why the security of the customer data is more important than the personal productivity of any one person, even the CEO. She praises the cross-functional teams that came together to design, implement, and maintain these security systems. In her story, she puts the focus on the customers and the trust they give the company every day, and how it’s critical in these dangerous times to have a bias towards security.

(b) She gets over her frustration, and eventually gets back to work. But she tells no story. Perhaps she asks the security team to research alternatives for future cases where executives are traveling.

(c) She orders the security team to disable 2FA for mail company-wide until they come up with a better plan for handling escalations like this. It’s not reasonable to have people out of work when they break their phone.


I wonder how many of you reading this post will answer (c). How an organization responds to these conflicts is the definition of its culture of security. What matters for culture is how you react when people are uncomfortable, unable to work, forced to change their processes, and need to take on new and routine work. What core principles win? Do you have written and exhibited patterns that put the safety of your customer data first? What about when it costs an employee some productivity? What about when it’s the CEO? What about when no one at the company can send email or use the VPN because the 2FA server/service is down for a day?


These stories telegraph to all employees how the organization actually values security, regardless of slogans, emails, and policy documents. When the security team asks other teams to do work to improve security, these stories will remind people how valued security really is. The real incentive structure will be communicated by these stories.


What security stories does your organization have? I honestly want to know! Drop me a line on Twitter at @boblord. If you want to remain anonymous, send me a DM.

Try this experiment. Go to your favorite search engine and type this:

+”no evidence” security compromise


(Other variations are also interesting, including adding words like “breach”)


There is something about the phrase “no evidence” that troubles me. You may have noticed the same thing. On a regular basis, organizations say that there is no evidence of compromise, and no evidence that attackers gained access to user/customer/employee data. They write these phrases to soften what is surely a very hard time internally for the responders. They want to lessen the blow and be reassuring to those of us who worry if we’ve been impacted.


For me, it does the reverse. It makes me worry more because it tells me about the state of preparedness of that organization. The phrase “no evidence” could mean everything from “we have tons of evidence and we’re sifting through it, but the probability of the attackers accessing your data is very low,” all the way to “we don’t collect data for use by incident responders, so who knows?”


Simply put, absence of evidence is not evidence of absence. In fact, the term used in debates is “argument from ignorance." That’s not very reassuring.


What phrases should we be reading instead? What would make us feel better about the investigation? We should be seeing statements like “We have evidence that there was no compromise.” Variations might include “We keep detailed logs of network, system, and user activity. Although the results are preliminary, we conclude that the attackers were not able to access your data.” Or maybe “We were able to trace the attackers’ activities over the past year, and understand the attack. They never had access to your data."


Now if you’re reading this blog, and you know how attacks work, you know that those phrases are unlikely. More likely phrases by teams armed with evidence would be “We know which users were affected and are taking appropriate steps to notify them,” or “Attackers were able to access only the following data…” Even though those phrases would indicate that the attackers accessed confidential data, it would be reassuring because it would allow appropriate action by all parties. Incident response teams would know how to re-assess their risk, and adjust technologies and processes. It would give customers/users the information needed to better protect themselves.


How does an organization get to the point where it can confidently and honestly say it had evidence? The bottom line is you have to assume you’ll be breached. When you assume you will be breached, you’ll behave differently than if you assume your defenses will be sufficient.


Collecting the evidence required to make strong statements after a breach (or a suspected one!) and storing it for months or even years can be a challenge. How long should you keep pcaps, netflow data, and OS/application logs? How much would it cost? What about the security of all that data? I’ve talked to numerous teams who strongly assert it would be too expensive. But few can show me the spreadsheets mapping out the costs, the assumptions, and a creative look at the task of gathering and storing data. And worse, it implies that they haven’t given the executive stakeholders the opportunity to make a business decision on the subject.


My suggestion is to have the debate. Don’t look to show that it’s not cost effective, but rather how you could define the problem statement to make it cost effective. What trade-offs would start to make the problem look solvable? What assumptions can you change to make it more feasible? For example, what if you didn’t store SSL pcaps, but just the netflow data? What if you store netflow data much longer than full pcaps?
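
As a starting point for that debate, here is a minimal back-of-envelope sketch. Every figure in it (daily capture volume, the netflow-to-pcap size ratio, retention windows, and storage cost) is an assumption for illustration, meant to be replaced with your own measurements.

# All numbers below are assumptions for illustration, not measured values.
avg_traffic_gb_per_day = 500        # assumed volume of traffic you could capture
netflow_fraction_of_pcap = 0.005    # assumed: flow records ~0.5% the size of full pcap
cost_per_gb_month = 0.03            # assumed storage cost in USD per GB-month

def monthly_cost(gb_per_day: float, retention_days: int) -> float:
    # Approximate steady-state monthly cost of a rolling retention window.
    return gb_per_day * retention_days * cost_per_gb_month

# One possible trade-off: full pcaps for 30 days, netflow for two years.
pcap_cost = monthly_cost(avg_traffic_gb_per_day, 30)
flow_cost = monthly_cost(avg_traffic_gb_per_day * netflow_fraction_of_pcap, 730)
print(f"pcap, 30 days:    ${pcap_cost:,.0f}/month")
print(f"netflow, 2 years: ${flow_cost:,.0f}/month")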


As you think about how the NIST Cybersecurity Framework considers the continuous functions of Identify, Protect, Detect, Respond, Recover, are you giving enough consideration to the latter functions? Are you building a cross-functional team to write breach runbooks, and to dry-run test them when you read about breaches in the press? Are you testing your detection capabilities?


In short, I’m hoping to see fewer blog posts that assume a lack of data is acceptable. It’s not.


Have thoughts? Drop me a line on Twitter at @boblord.
