
AppSpider scans can detect exploitable vulnerabilities in your applications, but once these vulnerabilities are detected, how long does it take your development teams to create code fixes for them? In some cases it can take days or even weeks before a fix or patch can be deployed, and during that time someone could be actively exploiting the issue in your application. AppSpider Defend, which is now integrated into AppSpider Pro, helps protect your applications until a fix for the identified vulnerabilities is deployed.

 

Defend allows you to easily create custom defenses for Web Application Firewalls (WAFs), Intrusion Prevention Systems (IPS), and Intrusion Detection Systems (IDS), based on the results of vulnerability scans conducted with AppSpider.

 

Using innovative automated rule generation, Defend, part of AppSpider Pro, helps security professionals to patch web application vulnerabilities with custom rules in a matter of minutes, instead of the days or weeks it can take by hand.

 

Because there is no longer a need to hand-build a custom rule for a WAF or IPS, or to rush out a source code patch, Defend gives developers time to identify the root cause of the problem and fix it in the code.

 

 

When you are ready to generate Defend rules, simply:

  1. Click on the Load Findings icon.
  2. Select the vulnerability summary XML file from a completed AppSpider scan.
  3. Determine which of the discovered vulnerabilities you would like to generate Defend rules for.
  4. Select the WAF/IDS/IPS that you want to configure with Defend. The currently supported solutions are: ModSecurity, SourceFire/Snort, Nitro/Snort, Imperva, Secui/Snort, Akamai, Barracuda, F5, and DenyAll.
  5. Then click on the Export Rules icon to generate a Defend rules file which can be uploaded into your WAF/IDS/IPS solution.

 

With these five easy steps you can generate a set of Defend rules that, together with your existing WAF/IDS/IPS solution, can help protect against exploitation of the vulnerabilities discovered by AppSpider.
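To make the idea of automated rule generation concrete, here is a minimal, hypothetical sketch of how a single scan finding might be translated into a ModSecurity virtual-patch rule. The finding shape, rule ID, and blocking pattern are illustrative assumptions, not the format that AppSpider Defend actually exports.

```typescript
// Hypothetical sketch only: the Finding shape and the generated rule are illustrative,
// not AppSpider Defend's actual export format.
interface Finding {
  url: string;        // vulnerable endpoint, e.g. "/account/login"
  parameter: string;  // vulnerable parameter, e.g. "username"
  attackType: string; // e.g. "SQL Injection"
}

// Build a simple ModSecurity virtual-patch rule that blocks suspicious input
// to the vulnerable parameter on the affected endpoint.
function toModSecurityRule(finding: Finding, ruleId: number): string {
  return [
    `SecRule REQUEST_URI "@beginsWith ${finding.url}" \\`,
    `    "id:${ruleId},phase:2,deny,status:403,log,msg:'Virtual patch: ${finding.attackType} on ${finding.parameter}',chain"`,
    `    SecRule ARGS:${finding.parameter} "@rx ('|--|;)"`,
  ].join("\n");
}

console.log(
  toModSecurityRule(
    { url: "/account/login", parameter: "username", attackType: "SQL Injection" },
    900001,
  ),
);
```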

 

Once you have loaded the Defend rule set into your WAF/IDS/IPS solution, you can verify that the protection is enabled by clicking the Defend Scan icon. This launches a Defend Quick scan that replays the attacks AppSpider used to discover the vulnerabilities and confirms that those attacks no longer succeed now that the Defend rules are deployed.

 

 

For more information on how the Defend functionality works, review the AppSpider Pro User Guide.

"Words matter” is something that comes out of my mouth nearly each day. At work it matters how we communicate with each other and the words we use might be the difference between collaboration or confrontation. The same happens with the security world, especially when we communicate with folks in IT or within the devops methodology. Last week this became highly apparent sitting with folks attending OWASP’s annual AppSec USA, where they discussed the difference between a fix or fail.

 

The problem, in our world, often stems from the fact that security is oftentimes a scary concept, conjuring up thoughts of clowns lurking in the woods on the walk to school (my therapist told me to express my fears outwardly). Security means something is at risk and that if it doesn't get fixed immediately the world may come to a frantic halt. The truth, however, is that not all security threats are created equal, and in most cases the need to prioritize fixes can eliminate the panic. The challenge is actually how security threats or vulnerabilities are presented to those outside of security. Imagine what a "security vulnerability report" does to the devops folks working on the app your business uses to bill customers.

 

For years we've focused on finding all the vulnerabilities, prioritizing them based on business and threat context, and then ultimately throwing them over the wall to IT or devops. But security has been learning how to create a more effective remediation workflow. In some cases this means true management of the workflow, analytics that tell you whether a vuln has been patched, or dashboards fed from live data so decisions are made at the point of impact. All that stuff is great, but what if I said you also need to reframe what the term "security vulnerability" actually means?

 

Security Vuln or JIRA Ticket?

Back to my time with the OWASP crew in DC. I'll be fully transparent: this idea came to me as I spoke with Rapid7 application security customers (check out AppSpider for more info). We talked a long time about the importance of collecting all the right application data for scanning and then prioritizing the vulns found. But the part of the conversation that really turned around my thinking was when we got to remediation. The functionality these customers liked the most was the ability to skip the 2,300-page stale report (true story!) and instead translate found vulnerabilities directly into the devops ticketing system.

 

In this case it was a simple matter of taking what was found via application security testing and placing that, with context, into JIRA. All of a sudden the devops team had a list of high-priority bug fixes, which they valued and would get to quickly, rather than a big security report that seemed more blame game than helpful.
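As a rough illustration of that workflow, the sketch below files a single finding as a JIRA issue over JIRA's REST API. The host, project key, credentials, and finding fields are placeholder assumptions; this is not AppSpider's built-in JIRA integration, just the general shape of the idea.

```typescript
// Illustrative sketch: file one scan finding as a JIRA bug via the JIRA REST API.
// The host, project key, and credentials below are placeholders.
interface Finding {
  name: string;        // e.g. "Reflected XSS"
  url: string;         // affected page
  severity: string;    // e.g. "High"
  description: string; // evidence and reproduction details
}

async function fileJiraTicket(finding: Finding): Promise<void> {
  const response = await fetch("https://jira.example.com/rest/api/2/issue", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Basic auth with a service account; a real integration would use an API token.
      Authorization: "Basic " + Buffer.from("svc-appsec:secret").toString("base64"),
    },
    body: JSON.stringify({
      fields: {
        project: { key: "WEB" },
        issuetype: { name: "Bug" },
        summary: `[Security] ${finding.severity}: ${finding.name} at ${finding.url}`,
        description: finding.description,
        labels: ["security", "dast"],
      },
    }),
  });
  if (!response.ok) {
    throw new Error(`JIRA rejected the issue: ${response.status}`);
  }
}
```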

 

Words matter in security, as does intent. It's an important thing to consider as you build out your security program and discover the points of contact with IT and devops.

In recent years, more and more applications are being built on popular new JavaScript frameworks like ReactJS and AngularJS. As is often the case with new application technologies, these frameworks have created an innovation gap for most application security scanning solutions and an acute set of challenges for those of us who focus on web application security. It is imperative that our application security testing approaches keep pace with evolving technology. When we fail to keep up, portions of our applications go untested, leaving unknown risk.

Related resource: [VIDEO] Securing Single Page Applications Built on JavaScript Frameworks


 

So, let’s look at some of the key things we need to think about when testing these modern web applications.

 

1. Dynamic clients. Today's complex web applications have highly dynamic clients built on JavaScript platforms like AngularJS and ReactJS. Single Page Application (SPA) frameworks fundamentally change the browser communication that security experts have long understood: they use custom event names instead of the traditional browser events we know ('onclick', 'onsubmit', etc.). Evaluate whether your dynamic application security testing solution is capable of translating these custom events into the traditional browser events it knows how to exercise.

 

Related resource: [WEBCAST] Best Practices for Reducing Risk with a Dynamic App Security Program

 

2. RESTful APIs (back end). Today's modern applications are powered by complex back-end APIs. Most organizations are currently testing RESTful APIs manually or not testing them at all. Your dynamic application security solution should be able to automatically discover and test a RESTful API while crawling both AJAX applications and SPAs. Because APIs are proliferating so rapidly, they take a long time to test. Your dynamic application security solution should enable your expert pen testers to focus on the problems that can't be automated, like business logic testing.

 

Related resource: [Whitepaper] The Top Ten Business Logic Attack Vectors

 

3. Interconnected applications. As security experts, it's imperative that we understand today's interconnected world. We are seeing interconnected applications at work and at home. For example, the Yahoo homepage shows news from many sites and includes your Twitter feed, and Amazon offers up products from eBay. We are used to thinking about testing an individual application, but now we must go beyond that. Many applications expose open APIs so that other applications can connect to them, or consume the APIs of third-party applications. These applications are becoming increasingly interconnected and interdependent. Your DAST solution should help you address this interconnectivity by testing the APIs that power these applications.


Dynamic application security testing solutions are evolving rapidly. We encourage you to expect more from your solution. AppSpider enables you to keep up with the changing application landscape so that you can be confident your application has been effectively tested. AppSpider goes where no scanner has gone before - to the deep and dark crevices of your modern applications. By using AppSpider for Dynamic Application Security Testing (DAST), you can keep up with application evolutions from the dynamic clients of Single Page Applications (SPAs) to the complex backend APIs. Learn more about AppSpider and how it scans Single Page Applications that are built on JavaScript frameworks.

Today, Rapid7 is pleased to announce an AppSpider (application security scanning) update that includes enhanced support for JavaScript Single Page Applications (SPAs) built with ReactJS. This release is significant because SPAs are proliferating rapidly and increasingly creating challenges for security teams. Some of the key challenges with securing SPAs are:

  1. Diverse frameworks - The diversity and number of JavaScript frameworks contributes to the complexity in finding adequate scan coverage against all modern Single Page Applications.
  2. Custom events - These frameworks implement non-standard or custom event binding mechanisms, and in the case of ReactJS, the framework creates a so-called "Virtual DOM" that provides an internal representation of events outside of the real browser DOM. It is important to discover the type and location of every actionable component on the page. Tracking the event bindings on a real DOM is relatively straightforward: shim EventTarget.prototype.addEventListener and record the event type and the node it is bound to.

For example:
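A minimal sketch of such a shim (illustrative only, not AppSpider's actual instrumentation) might look like this; it simply logs what a crawler would want to know about each binding:

```typescript
// Wrap the native addEventListener so every DOM event binding is recorded.
const nativeAddEventListener = EventTarget.prototype.addEventListener;

EventTarget.prototype.addEventListener = function (
  this: EventTarget,
  type: string,
  listener: EventListenerOrEventListenerObject | null,
  options?: boolean | AddEventListenerOptions,
) {
  // Record which event type is being bound and to which node, then delegate.
  console.log(`bound "${type}" on`, this);
  return nativeAddEventListener.call(this, type, listener, options);
};
```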

However, in cases where a framework manages its own event delegation (such as in the ReactJS Virtual DOM) it becomes more efficient to hook into the framework, effectively providing a query language into the framework for its events (instead of listening for them).

 

According to the ReactJS documentation:

Event delegation

  • React doesn't actually attach event handlers to the nodes themselves.
  • When React starts up, it starts listening for all events at the top level using a single event listener.
  • When a component is mounted or unmounted, the event handlers are simply added or removed from an internal mapping.
  • When an event occurs, React knows how to dispatch it using this mapping.

AppSpider has now created a generalized lightweight framework hooking structure that can be used to effectively crawl/discover/scan frameworks that do things ‘their own way.’  Look for an upcoming announcement on how you can incorporate and contribute your own custom framework scanning hooks with AppSpider.
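While the details of AppSpider's hooking structure aren't public, the general idea can be sketched as an interface that asks the framework for its bound events rather than listening on the DOM. The shape below is purely illustrative, not AppSpider's actual plugin API:

```typescript
// Purely illustrative sketch of what a framework-specific crawling hook could look like.
// The interface and names are assumptions for the sake of discussion.
interface BoundEvent {
  selector: string;   // CSS selector locating the actionable node
  eventType: string;  // e.g. "click", "submit", or a framework-specific name
}

interface FrameworkHook {
  name: string;
  // Returns true if this framework is present on the page being crawled.
  detect(window: Window): boolean;
  // Queries the framework's own event registry instead of listening on the DOM.
  getBoundEvents(window: Window): BoundEvent[];
}
```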

 

What’s New?

So what is AppSpider doing with ReactJS now? AppSpider leverages Facebook's open source developer tools (react-devtools), wrapped in a generalized framework hook, to crawl ReactJS applications exhaustively. Event binding systems that 'do it their own way' (such as the ReactJS Virtual DOM) are now recognized and exercised. Frameworks such as AngularJS, Backbone, jQuery, and Knockout are still supported right out of the box without tuning; only where needed do we add framework-specific support with custom techniques.

 

Why is this important? Web application security scanners struggle to understand these more complex systems that don't rely on fat clients and slow processes. Scanners were built around standard event names, relying on those ever-present hooks to interact with the web application. Without them, a traditional scanner no longer has the building blocks necessary to correctly interact with the web applications it is scanning. Scanners have also used the ever-present DOM structure to better understand the application and assist with crawling. This becomes difficult, if not impossible, for a traditional scanner when it has to deal with applications that process information on the server side instead of the client side. If this creates such an issue, why are people flocking to these frameworks? There are several reasons:

 

  1. Two-way data bindings which allow a better back and forth between client side and server side interactions
  2. Processing information on the server side which increases performance and gives a better user experience
  3. Flexibility to break away from the fat client side frameworks

 

These capabilities can make a dramatic difference to developers and end users but they also introduce unique issues to security teams.

 

Security teams and their tools are used to standard event names like OnClick or OnSubmit. These events drive how we interact with a web application, allowing our standard tools to crawl through the application and interact with it. By using these standard events we have been able to automate the manual task of making the application think we interacted with it. This becomes much more complicated when we introduce custom event names. How do you automate interacting with something that changes from application to application, or, even worse, whenever you refresh the same application? AppSpider answers that question by connecting directly into the framework and having the framework report what those custom events are before the crawl and attack phases even begin.

 

Security experts have relied upon the DOM to know what needs to be tested in an application and have monitored this interaction to understand potential weaknesses. Server-side processing complicates this: the work happens on the server, away from the eyes and tools of the security expert, with only the end results displayed. With AppSpider, you can now handle applications that utilize server-side processes because we are not dependent on what is shown to us; we already know what is there.

 

Currently, the only way for pen testers to conduct web application tests on applications using ReactJS and other modern frameworks is to manually attack them, working their way one by one through each option. This is a time-consuming and tedious task. Pen testers lack the tools to quickly and dynamically scan a web application built on these SPA frameworks, identify potential attack points, and narrow down where they would like to do further manual testing. AppSpider now allows them to quickly and efficiently scan these applications, saving them time and allowing them to focus their efforts where they will be most effective.

 

How can you tell whether your scanner supports these custom event names? Answering this question can be difficult because you have to know your application to truly understand what is being missed. You can typically see it quickly when you start to analyze the results of your scans: areas of your application are missed entirely, or parameters never show up in your logs as having been tested.

 

 

Know your weaknesses. Can your scanner effectively scan this basic ReactJS application and find all of the events? http://webscantest.com/react/ Test it and find out! Is your scanner able to see past the DOM? AppSpider can.

Integrating Application Security with Rapid Delivery

Any development shop worth its salt has been honing their chops on DevOps tools and technologies lately, either sharpening an already practiced skill set or brushing up on new tips, tricks, and best practices. In this blog, we’ll examine how the rise of DevOps and DevSecOps have helped to speed application development while simultaneously enabling teams to embed application security earlier into the software development lifecycle in automatic ways that don’t delay development timeframes or require major time investments from developers and QA teams.

What is DevOps?

DevOps is a set of methodologies (people, process, and tools) that enable teams to ship better code – faster.  DevOps enables cross-team collaboration that is designed to support the automation of software delivery and decrease the cost of deployment. The DevOps movement has established a culture of collaboration and an agile relationship that unites the Development, Quality Engineering, and Operations teams with a set of processes that fosters high-levels of communication and collaboration.

 

Collaboration between these three groups is critical because of an inherent conflict: development organizations are pressured to ship new features faster, while operations groups are encouraged to slow things down to make sure performance and security are up to snuff.

 


DevSecOps and Application Security

Getting new code out to production faster is a great goal that often drives new business; however, in today's world, that goal needs to be balanced with addressing security.

 

DevSecOps is really an extension of the DevOps concept. According to DevSecOps.org, "It builds on the mindset that 'everyone is responsible for security' with the goal of safely distributing security decisions at speed and scale to those who hold the highest level of context without sacrificing the safety required."

 

Web application attacks continue to be the most common breach pattern, confirming what we have known for some time: web applications are a preferred vector for malicious actors, and they are difficult to protect and secure. According to the 2016 Verizon Data Breach Report, 40% of the breaches analyzed for the 2016 DBIR were web app attacks. Today's web and mobile applications pose risks to organizational security that must be addressed. There are several well-known classes of vulnerabilities that can be present in applications: SQL Injection, Cross-Site Scripting, Cross-Site Request Forgery, and Remote Code Execution are some of the most common.

 

Why are Applications a Primary Target?

Applications have become a primary target for attackers for the following reasons:

 

1.  They are open for business and easily accessible: Companies rely on firewalls and network segmentation to protect critical assets, but applications must be exposed to the internet in order to be used by customers. They are therefore easy to reach compared to other critical infrastructure, and malicious traffic can masquerade as legitimate, desired traffic.

 

2.  They hold the keys to the data kingdom: Web applications frequently communicate with databases, file shares, and other critical information stores. Because applications sit so close to this data, compromising them makes the data easier to reach, and it is often some of the most valuable: credit card numbers, PII, SSNs, and proprietary information can be just a few steps away from the application.

 

3.  Penetrating applications is relatively easy. There are tools available to attackers that allow them to point-and-shoot at a web application to discover exploitable vulnerabilities.



Embed Application Security Early in the SDLC - A Strategic Approach

So, we know that securing applications is critical. We also know that most application vulnerabilities are found in the source code. So, it stands to reason that application vulnerabilities are really just application defects and should be treated as such.

 

Dynamic Application Security Testing (DAST) is one of the primary methods for scanning web applications in their running state to find vulnerabilities, which are usually security defects that require remediation in the source code. These DAST scans help developers identify real, exploitable risks and improve security.

 

Typically, speed and punctiliousness don't go hand in hand, so why mix two things that might be thought of as natural opposites? There are several reasons that implementing a web application scan early in the SDLC as part of DevOps can be beneficial, and there are ways to do it that don't take additional time from developers or testers; it can be baked into your SDLC and your Continuous Integration process.

 

When dynamic application security testing first became popular, security experts often conducted the tests at the end of the software development lifecycle. That only served to frustrate developers, increase costs and delay timelines. We have known for some time now that the best solution is to drive application security testing early into the lifecycle along with secure coding training.

 

Microsoft was one of the early pioneers of this with its introduction of the Security Development Lifecycle (SDL), one of the first well-known programs to explicitly state that security must be baked into the software development lifecycle early and at every stage of development, not bolted on at the end.

 

The benefits of embedding application security earlier into the SDLC are well understood. If you treat security vulnerabilities like any other software defect, you save money and time by finding them earlier when developers and testers are working on the release.

 

  • Reduced Risk Exposure - The faster you find and fix vulnerabilities in your web applications, the less you are exposed to risk. If you can find a vulnerability before it hits production, you've prevented a potential disaster, and the faster you remove vulnerabilities from production, the less exposure you face.

 

  • Reduced Remediation Effort - If a vulnerability is found earlier in the SDLC, it is going to be easier and less expensive to fix, for several reasons. The code is fresh; the developer is familiar with it and can jump in and fix it without having to dig up old skeletons in the code. There is less context switching (context switching is bad) when we find security defects during the development process. Additionally, if a vulnerability is found early, it is much less likely that other code relies on it, so it can be changed more safely. Finally, new code is less likely to be burdened with tech debt and is therefore easier to fix.

 

  • Reduced Schedule Delays - Security experts are well aware that development teams don't want to be slowed down. By embedding application security earlier in the SDLC, we can avoid the time delays that come with testing during later stages.


These factors should help explain why incorporating application security into a DevOps mentality makes sense.  So how can a security-focused IT staff member help the developers get excited about this?

Adopting a DevSecOps Mindset for Application Security - 8 Best Practices

Build a Partnership

Partnership and collaboration are what DevOps is all about. Sit down with your development team and explain that you aren't trying to slow them down; you simply want to help them secure the awesome stuff they are building. Help them learn by explaining the risk. The ubiquitous "ALERT(XSS)" pop-up doesn't do a good enough job of conveying the significance of a cross-site scripting vulnerability, so talk your developers through the real-world impact and risks.


Conduct Secure Code Training

Schedule some "Lunch-n-Learn" or similar sessions to explain how these vulnerabilities can emerge in code. Discuss parameterization and data sanitization so developers are familiar with these topics. The more aware of secure coding practices the developers are, the less likely they are to introduce vulnerabilities into the application's code base.


Know the Applications

It helps when the security expert understands the code base. Try to work with your developers to learn the code base so you can help highlight serious vulnerabilities and can clearly capture risk levels.

 

Security Test Early, Fail Fast.

Failure isn’t typically a good word, but failing fast and early is an agile development mindset that is applicable to application security. If you test early and often you can find and fix vulnerabilities faster and easier. The earlier new code is tested for security vulnerabilities the easier it is to fix.

 

Security Test Frequently

Test your code when new changes are introduced so that critical risks don’t make it past staging.  Fixing issues is easier when they are fresh. Scan new code in staging before it hits production to reduce risk and speed remediation of issues.

 

Integrate Security with Existing Tools

Find opportunities to embed dynamic security testing early in your software development lifecycle by integrating with your existing tools. Seamlessly integrating security into the development lifecycle will make it easier to adopt. Here are some of the most effective ways of integrating security testing into the SDLC:

 

  • Continuous Integration - Many organizations achieve early SDLC security testing by integrating their DAST solution into their Continuous Integration server (Hudson, Jenkins, etc.) to ensure security testing is conducted easily and automatically before the application goes into production. This requires an application security scanner that works well in "point and shoot" mode and includes an open API for running scans (see the sketch after this list). Ask your vendor how their scanner would fit into your CI environment.

 

  • Issue Tracking - Another effective strategy for building application security early into the SDLC is ensuring your application security solution automatically sends security defects to the issue tracking solution, like Jira, that is used by your development and QA teams.

 

  • Test Automation - Many QA teams are having success leveraging their pre-built automated functional tests to help drive security testing and make it even more effective. This can be done through browser automation solutions like Selenium.
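As a rough sketch of the Continuous Integration pattern above, the snippet below starts a scan against a staging deployment, polls for completion, and fails the build if high-severity findings come back. The scanner endpoints and response fields are invented for illustration; they are not AppSpider's actual API, so substitute whatever calls your own scanner exposes.

```typescript
// Hypothetical CI gate: kick off a DAST scan and fail the build on high-severity findings.
// The scanner base URL, endpoints, and response shapes are assumptions for illustration.
interface ScanSummary {
  status: "running" | "complete";
  highSeverityFindings: number;
}

const SCANNER = "https://dast.example.com/api";

async function gateBuild(targetUrl: string): Promise<void> {
  // Start a scan against the freshly deployed staging environment.
  const start = await fetch(`${SCANNER}/scans`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ target: targetUrl }),
  });
  const { scanId } = (await start.json()) as { scanId: string };

  // Poll until the scan completes.
  let summary: ScanSummary;
  do {
    await new Promise((resolve) => setTimeout(resolve, 30_000));
    summary = (await (await fetch(`${SCANNER}/scans/${scanId}`)).json()) as ScanSummary;
  } while (summary.status !== "complete");

  // Fail the pipeline if anything high-severity was found.
  if (summary.highSeverityFindings > 0) {
    console.error(`${summary.highSeverityFindings} high-severity findings; failing build.`);
    process.exit(1);
  }
}

gateBuild("https://staging.example.com").catch((err) => {
  console.error(err);
  process.exit(1);
});
```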

 

Rapid7’s AppSpider is built with this in mind and includes a broad range of integrations to suit your team’s needs. Learn more about how AppSpider helps drive application security earlier into the SDLC in the following video.

 

http://www.rapid7.com/resources/videos/driving-appsec-earlier-into-the-sdlc.jsp

 

AppSpider is a DAST solution designed to help application security folks test applications both as part of DevOps and as part of a scheduled scanning program.  Thanks for reading and have a great day.

AppSpider’s Interactive Reports Go Chrome

 

We are thrilled to announce a significant reporting enhancement to AppSpider, Rapid7's dynamic application security scanner. AppSpider now has a Chrome plug-in that enables users to open any report in Chrome and use the real-time vulnerability validation feature without needing Java or having to zip up the folder and send it off. This makes reporting and troubleshooting even easier!

 

Enabling Security - Developer Collaboration to Speed Remediation

AppSpider is a dynamic application security scanning solution that finds vulnerabilities from the outside-in, just as a hacker would. Our customers tell us that AppSpider not only makes it easier to collaborate with developers, but also speeds remediation efforts. Unlike other application security scanning solutions, we don’t just report security weaknesses for security teams to ‘send’ to developers. Our solution includes an interactive component that enables developers to quickly and easily review a vulnerability and replay the attack in real-time. This enables them to see how the vulnerability works all from their desktop without having direct access to AppSpider itself - and without learning how to become a hacker.

 

Related Content [VIDEO] Why it's important to drive application security earlier in the software development lifecycle.

 

Developers can then use AppSpider's interactive reports to confirm that their fixes have resolved the vulnerability and are actually protecting the application from the weaknesses found. Developers don't need AppSpider installed in their environment to leverage this functionality; with just the report and a connection to the application they are testing, they're good to go.

 

Related Content [VIDEO] Watch AppSpider interactive reports in action.

 

AppSpider Interactive Reports - How it Works

Pretty cool, huh? Well, here’s how and why it works...

 

For those who work in application security, we know all too well that many, if not most, of the application security vulnerabilities we deal with exist in the source code of the custom applications we are responsible for - often in the form of unvalidated inputs. As security professionals, we aren't able to resolve these vulnerabilities (or defects) with a simple patch. We need to work with the developers to resolve security defects, implement secure coding best practices, and then re-release the new code into production.

 

At Rapid7, we have understood this for a long time and we have been helping security teams and development teams to collaborate more effectively through AppSpider.

 

There are many reasons why effective DevSecOps collaboration is difficult. Developers aren't security professionals, and reporting security defects to them is easier said than done. We have the logistical issues of emailing around spreadsheets or PDFs, and then we have the communication issues of us speaking security and them speaking developer. Not to mention the pain of going back and forth re-testing their "fixes" to see if they are still vulnerable, 'cause let's face it, most developers wouldn't know a SQL Injection from a Cross-Site Request Forgery (CSRF), let alone know how to actually attack their code to see if it's vulnerable to these attack types.

 

This is an area in which we have always shined; however, until today, AppSpider required the security professional and the developer to use a Java applet to accomplish this within our reports. Now that Chrome and Firefox have disabled Java support, some teams weren't able to leverage this awesome functionality.

 

Are you looking to upgrade your dynamic application security scanner? Want to see AppSpider in action? Check out this on-demand demo of our web application security solution!

AppSpider’s got even more Swagger now!

As you may remember, we first launched improved RESTful web services security testing last year. Since then, you have been able to automatically test REST APIs that have a Swagger definition file, without capturing proxy traffic. Now, we have expanded that functionality so that AppSpider can automatically discover Swagger definition files as part of the application crawl phase. You no longer have to import the Swagger definition file, delivering an even easier and more automatic approach to security testing RESTful web services (APIs, microservices, and web APIs). This is a huge timesaver and another evolution in AppSpider's long history of handling modern applications better than other application security scanning solutions.

Challenges with Security Testing RESTful APIs

When it comes to RESTful web services, most application scanning solutions have been stuck in the traditional web application dark ages. As APIs have proliferated, security teams have been forced to manually crawl each API call, relying on what little - if any - documentation is available and knowledge of the application. With a manual process like that, the best we can hope for is to not miss any path or verb (GET, PUT, POST, DELETE) within the API. In this scenario, you also have to figure out how to stay current with a manually documented API. The introduction of documentation formats such as Swagger, API Blueprint, and RAML helped, but testing it was still a manual process fraught with errors.
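To make the contrast concrete, a Swagger (now OpenAPI) definition already enumerates every path and verb, so a tester or scanner can build the full operation inventory instead of guessing. The sketch below is a minimal illustration of that idea; the definition URL is a placeholder, and this is not AppSpider's internal implementation:

```typescript
// Sketch: enumerate every path and HTTP verb from a Swagger 2.0 definition.
// This inventory is what a scanner needs in order to exercise the whole API.
const HTTP_VERBS = ["get", "put", "post", "delete", "patch", "head", "options"];

interface SwaggerDoc {
  basePath?: string;
  paths: Record<string, Record<string, unknown>>;
}

async function listOperations(definitionUrl: string): Promise<string[]> {
  const doc = (await (await fetch(definitionUrl)).json()) as SwaggerDoc;
  const operations: string[] = [];
  for (const [path, verbs] of Object.entries(doc.paths)) {
    for (const verb of Object.keys(verbs)) {
      if (HTTP_VERBS.includes(verb)) {
        operations.push(`${verb.toUpperCase()} ${doc.basePath ?? ""}${path}`);
      }
    }
  }
  return operations;
}

// Example usage with a placeholder URL:
// listOperations("https://api.example.com/swagger.json").then(console.log);
```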

RESTful Web Services: Security Testing Made Easy

Enter Rapid7.

 

At the end of 2015, we released a revolutionary capability for testing your REST APIs with the introduction of Swagger-based DAST scanning. This ability for AppSpider to learn how to test an API by consuming a Swagger definition (.json) file changed the way DAST solutions handle API security testing and allowed our customers, for the first time, to easily scan their APIs without a lot of manual work. Now, we are taking it up another notch by making REST API security testing even easier.

 

What’s New?

This is no trivial task, as it's not just parsing data out. When our engineers started this task, the first thing they thought about was how customers would use this feature. We quickly realized that, just like everything else in application security, scanning new technologies in the web application ecosphere raises the same challenges we encountered when learning to effectively scan traditional web applications. So, here are three of the latest enhancements we have made to speed REST security testing.

 

  1. Automated Discovery of Swagger Definitions - Instead of feeding your Swagger definition file into AppSpider, you can simply point AppSpider to the URL that contains your Swagger definition and AppSpider will automatically ingest it and begin to take action.

 

  2. Parameter Identification and Testing with Expected Results - Application security testing solutions always have the challenge of knowing what the parameters are and what data they are expecting. Web applications can have many different parameters, some of which may be unique to just that API. We knew that if this was going to be effective we needed to be able to account for these unique types of parameters. This led us to expand our capability so that you can give AppSpider guidance on what these parameters mean to your application. Your guidance allows AppSpider to improve the comprehensiveness of the testing. AppSpider remembers your guidance and uses it in subsequent tests.

 

Quick tip: Regardless of which application security testing solution or experts you use, be sure that your scanner or testers are using expected results (a date for ‘date’, a name for ‘last name’ and a valid credit card number for ‘ccn’). Without expected results, the test is largely ineffective.

 

 

  3. Scan Restrictions - Just like any other area of a web application, APIs have sensitive portions that you may not want to scan; a good example is an HTTP verb like DELETE. Many teams have effectively documented ALL of their REST API. This is great and is really where you should be, but we need to be able to avoid testing certain sections. We are already very good at letting you customize your web application scanning to make it the best it can be, and we have now extended this capability to the handling of APIs. You can leverage AppSpider's scan restrictions capability and exclude any parameter or HTTP verb you do not want to exercise.
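Conceptually, a scan restriction is just a filter applied to the operation inventory before any requests are sent. The sketch below is a purely illustrative example of the idea; the restriction shape and values are assumptions, not AppSpider's configuration format:

```typescript
// Sketch: apply scan restrictions by filtering the operation inventory before testing.
// The restricted verbs and path patterns below are illustrative choices.
interface Restrictions {
  verbs: string[];        // HTTP verbs never to exercise, e.g. ["DELETE"]
  pathPatterns: RegExp[]; // endpoints to leave alone, e.g. [/\/admin\//]
}

function applyRestrictions(operations: string[], restrictions: Restrictions): string[] {
  return operations.filter((op) => {
    const [verb, path] = op.split(" ");
    if (restrictions.verbs.includes(verb)) return false;
    return !restrictions.pathPatterns.some((pattern) => pattern.test(path));
  });
}

// Example: never send DELETEs, and skip anything under /admin/.
const toTest = applyRestrictions(
  ["GET /pets", "POST /pets", "DELETE /pets/{id}", "GET /admin/users"],
  { verbs: ["DELETE"], pathPatterns: [/\/admin\//] },
);
console.log(toTest); // ["GET /pets", "POST /pets"]
```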

 

By leveraging AppSpider's automated testing of RESTful web services, which combines parameter training with scan restrictions, you have an unparalleled opportunity to test the security of your REST APIs quickly and frequently. We know you thought this was out of reach, but it's not!

 

So keep this in mind next time you are having a discussion on how to efficiently scan and understand the security weaknesses in your APIs. If you're stuck in a manual process, it might be time to look at how to automate it using something like Swagger. (Note: Swagger has since been renamed the OpenAPI Specification.) If you are already well automated, then we can give you the answer you've always wanted: we can automate your API scanning like never before.

 

You may also be interested in:

AppSpider Release Notes

Blog: AppSpider’s Got Swagger: The first end-to-end security testing for REST APIs

We spent last week hearing from experts around the globe discussing what web application security insights we have gotten from Verizon's 2016 Data Breach Investigations Report. Thank you, Verizon, and all of your partners for giving us a lot to think about!

 

We also polled our robust Rapid7 Community asking them what they have learned from the 2016 DBIR. We wanted to share some of their comments as well:

 

Quick Insights from the Rapid7 Community

"I find that the Verizon Data Breach Investigation Report is a good indication of the current environment when it comes to the threat climate - I use it to prioritize what areas and scenarios I spend the most time focusing resources upon. For my environment, the continued shrinking of time between vulnerability disclosure and exploit is very important. For offices like mine with a small staff, identifying and applying patches in an ever more strategic manner is key. I think vendors who successfully market intelligent heterogeneous automated patching systems will start to see big gains in sales. And those that can tie it to scanning/compliance/reporting/attack suites are going to be even better positioned in the market."

              - Scott Meyer, Sr. Systems Engineer at United States Coast Guard

"The internet is evolving, and greater complexity creates greater risk by introducing new potential attack vectors. Attackers aren't always after data when targeting a web application. Frequently sites are re-purposed to host malware or as a platform for a phishing campaign. Website defacements are still prevalent, accounting for roughly half of the reported incidents."

              - Steven Maske, Sr. Security Engineer

"Train, train, and retrain your users. Use proper coding. Really, we still fall victim to SQLi? Two factor authentication is still king. Limit download to x to prevent complete data exfiltration"

              - Jack Voth, Sr. Director of Information Technology at Algenol Biotech

Lessons Learned from the 2016 Verizon Data Breach Report

 

Learnings from the DBIR, with strategies to implement for each:

1. Web application attacks are a primary vector.

  • Start security testing your applications today.
2. No industry is immune, but some are more affected than others.
  • Focus on the attack patterns that your industry is experiencing.
  • Know your enemy's motivation.
3. Unvalidated inputs continue to plague our web applications.
4. Web applications are evolving and so should your application security program.
5. Different industries have different enemies.
  • Know who and what you are defending against. Grudge or Money?
6. There are so many free and fabulous resources. Use them!
  • Get involved with OWASP today!

 

How Rapid7 Can Help

Rapid7's AppSpider, a Dynamic Application Security Testing (DAST) solution, finds real-world vulnerabilities in your applications from the outside in, just as an attacker would. AppSpider goes beyond basic testing by enabling you to build a truly scalable web application security program. You can watch an on-demand demo of AppSpider here if you are interested in learning more.

 

Deeper application coverage

The AppSpider development team keeps up with evolving web application technologies so that you don't have to. From AJAX and REST APIs to Single Page Applications, we're committed to making sure that AppSpider assesses as much of your applications as possible, so that you can rely on AppSpider to find unvalidated inputs and a host of other vulnerabilities in your modern web applications. View our quick video below to learn how to achieve deeper web application coverage with your web app scanner.

 


 

Breadth of web app attack types

From unvalidated inputs to information disclosure, with more than 50 different attack types, we've got you covered. AppSpider goes way beyond the OWASP Top 10 attack types, including SQL Injection and Cross-Site Scripting (XSS); we test for every attack pattern that can be tested by software. This leaves your team more time and budget to test the attack types that require human business logic testing.

 

Application security program scalability

AppSpider is designed to help you scale your application security testing program so that you can conduct regular testing across hundreds or thousands of applications throughout the software development lifecycle.

 

Dynamic Application Security Testing (DAST) earlier in the SDLC

AppSpider comes with a host of integrations that enable you to drive application security earlier into the SDLC through Continuous Integration (like Jenkins), issue tracking (like Jira) and browser integration testing (like Selenium). Our customers are successfully collaborating with their developers and building dynamic application security testing earlier into the SDLC.

 

You may also be interested in these blog posts that offer additional perspective on the 2016 Verizon DBIR:

This is a guest post from Shay Chen, an Information Security Researcher, Analyst, Tool Author and Speaker, and the guy behind the TECAPI, WAVSEP, and WAFEP benchmarks.

 

Are social attacks that much easier to use, or is it the technology gap of exploitation engines that make social attacks more appealing?

 

While reading through the latest Verizon Data Breach Investigations Report, I naturally took note of the Web App Hacking section, and noticed the diversity of attacks presented under that category. One of the most notable elements was how prominent the use of stolen credentials and social vectors in general turned out to be, in comparison to "traditional" web attacks. Even SQL Injection (SQLi), probably the most widely known (by humans) and best supported (by tools) attack vector, is far behind, and numerous application-level attack vectors are not even represented in the charts.

 

Although it’s obvious that in 2016 there are many additional attack vectors that can have a dire impact, attacks tied to the social element are still much more prominent, and the “traditional” web attacks being used all seem to be attacks supported out-of-the-box by the various scan engines out there.

 

It might be interesting to investigate a theory around the subject: are attackers limited to attacks supported by commonly available tools? Are they further limited by engines that have not caught up with recent technology complexity?

 

With the recent advancements and changes in web technologies - single page applications, applications referencing multiple domains, exotic and complicated input vectors, scan barriers such as anti-CSRF mechanisms and CAPTCHA variations - even enterprise-scale scanners have a hard time scanning modern applications in a point-and-shoot scenario, and the typical single page application may require scan policy optimization to get it to work properly, let alone get the most out of the scan.

 

Running phishing campaigns still requires a level of investment/effort from the attacker, at least as much as the configuration and use of capable, automated exploitation tools. Attackers appear to be choosing the former and that’s a signal that presently there is a better ROI for these types of attacks.

 

If the exploitation engines that attackers are using face the same challenges as vulnerability scanner vendors - catching up with technology - then perhaps the technology complexity for automated exploitation engines is the real barrier that makes the social elements more appealing, and not only the availability of credentials and the success ratio of social attacks.

 

How about testing it for yourself?

 

If you have a modern single-page application in your organization (Angular, React, etc), and some method of monitoring attacks (WAF, logs, etc), note:

 

  • Which attacks are being executed on your apps?
  • Which pages/methods and parameters are getting attacked on a regular basis, and which pages/methods are not?
  • Are the pages being spared technologically complex to crawl, activate, or identify?

 

Maybe complexity isn’t the enemy of security after all.

This is a guest post from Tom Brennan, Owner of ProactiveRISK, who serves on the Global Board of Directors of the OWASP Foundation.

 

In reading this year's Verizon Data Breach Investigations Report, one thing came to mind: we need to get back to the basics. Here are my takeaways from the DBIR.

 

1. Remain Vigilant

 

Recently, data relating to 1.5 million customers of Verizon Enterprise was available for sale. Some would say this is ironic, but what it means to me is that everyone is HUMAN. SEC_RITY requires "U" to be vigilant in all aspects of its operations, from creation to deployment to use of technology. I was very happy to see the work of the Center for Internet Security (CIS) Top 20 Security Controls referenced. These are important proactive steps in operating ANY business, and I'm proud to be one of the collaborators on this important project.

 

CIS Top 20 Security Controls

1: Inventory of Authorized and Unauthorized Devices

2: Inventory of Authorized and Unauthorized Software

3: Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers

4: Continuous Vulnerability Assessment and Remediation

5: Controlled Use of Administrative Privileges

6: Maintenance, Monitoring, and Analysis of Audit Logs

7: Email and Web Browser Protections

8: Malware Defenses

9: Limitation and Control of Network Ports, Protocols, and Services

10: Data Recovery Capability

11: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches

12: Boundary Defense

13: Data Protection

14: Controlled Access Based on the Need to Know

15: Wireless Access Control

16: Account Monitoring and Control

17: Security Skills Assessment and Appropriate Training to Fill Gaps

18: Application Software Security

19: Incident Response and Management

20: Penetration Tests and Red Team Exercises

 

2. All software security issues are software quality issues.

 

Unfortunately, finding fault is what some humans do best; having adequate controls is what IT defense is actually about. The sections in the Verizon report that discussed attack vectors should remind everyone that not all software quality issues are security issues, but all software security issues are software quality issues. Currently, one of the greatest risks to software is third-party software components.

 

3. What type of Attacker are you Defending Against?

 

What has not changed since 1989, when I first used ATDT,,, to wardial by modem off an 8-bit for the first time, is that it's STILL people behind the keyboards. People on a wide ethical spectrum are still using keyboards to harm, steal, deface, intimidate, and wage cyber attacks/wars, and ALL a criminal needs is means, motive, and opportunity.

 

Every organization needs to ask what TYPE of attacker it is defending against (threat modeling). For example: "My business relies on the internet for selling widgets; the adversary is an indiscriminate bot/worm, a random individual with skills, or a group of skilled and motivated attackers." This is where OWASP's Threat Risk Modeling workflow can really help, especially when proactively combined with OWASP's Incident Response Guidelines.

 

Modern and resilient businesses should conduct mock training exercises to educate and prepare the team. Business is about taking risks, and not all businesses survive. Some lack the number of customers they need, others struggle to move enough product, and now, for many, the possibility that the business could be hacked and unable to recover is a concern, whether you run a small business or sit on the Board of Directors of a Fortune 50 organization. Consider insider threats, outsider attacks, and third-party vendor risks; each is different, and decisions need to be made based on your tolerance threshold.

 

4. OWASP - Get Involved! It's free and it’s helpful!

 

As the 2016 Verizon Data Breach Investigations Report shows, web applications remain a primary vector of successful breaches. I encourage everyone to get involved with the OWASP Foundation where I spend a great deal of time. OWASP operates as a non-profit and is not affiliated with any technology company, which means it is in a unique position to provide impartial, practical information about AppSec to individuals, corporations, universities, government agencies and other organizations worldwide. Operating as a community of like-minded professionals, OWASP issues software tools and documentation on application security. All of its articles, methodologies and technologies are made available free of charge to the public. OWASP maintains roughly 250 local chapters in 110 countries and counts tens of thousands of individual members. The OWASP Foundation uses several licenses to distribute software, documentation, and other materials. I encourage everyone to review this OPEN resource and ADD to the knowledge tree.

 

I really enjoyed the 2016 Verizon DBIR for the data. The perspective in this report is based on a wide array of customer engagements and data from nearly 70 partners. The average reader who uses a credit card at a hotel, casino, or retail store may feel uneasy about the risk of trusting others with their data. If your business deals with confidential data, you should be concerned and proactive about the risks you take.

 

If you haven't already, take a look at the Defender's Perspective of this year's DBIR, written by Bob Rudis.

The 2016 Verizon Data Breach Report was a great read. As I spend my days exploring web application security, the report provided a lot of great insight into the space I frequent. Lately, I have been researching out-of-band and second-order vulnerabilities, as well as how Single Page Applications are affecting application security programs. The following three takeaways are my gut-reaction thoughts on the 2016 DBIR from a web app sec-ian perspective:

1. Assess Your Web Applications Today

 

Not tomorrow, not next week, today. I don’t want to see talented geeks jump on board a hot startup and hear, “Oh, we don’t have a security program.”

 

I look at this report and the huge increase in web application attacks wondering how ANYONE could still not be taking their web application security program seriously. Seriously? Let’s get serious for a slim second.

 

There has been a dramatic rise in web application attack patterns across all industry verticals, as covered in the research, though three industries (entertainment, finance, and information) have seen a larger jump. Web application attacks make up 50% or more of the total breaches, with a notable jump in the finance industry from 31% to 82% in 2016. However, it is suggested that this jump is due to sampling errors introduced by the overwhelming number of data points linked to Dridex.

 

2. Fun, Ideology, or Grudge drove most incidents. Money motivated most theft. Few spies were caught. 

F/I/G is shorthand for Fun, Ideology, or Grudge.

       

Although at first eye-numbing stare, it appears that all the web application hacking motives of 2015 came from grudge-wielding, whistle-blowing people, with no real secret-agent spying going on, though admittedly with a sizable criminal element.

 

When this same data is filtered through ‘confirmed data disclosure,' 95% of the resultant cases appear to be financially motivated, and it becomes much more apparent that data disclosure is all about the money.

3. “I value your input, I just don’t trust it.” (p. 30)

 

Unvalidated input continues to be one of the most fundamental software problems that lead to web application breaches.  From the dawn of client/server software to the now modern Single Page Application framework, we have been releasing applications with partially validated inputs despite the fact that we have known about validating inputs for decades. Unfortunately, this fundamental cultural development flaw will likely not be leaving us anytime soon. Please, if you learn anything from the DBIR, make sure to validate input, folks!


In terms of the top 10 threat varieties of 2015, SQL Injection (#7) and Remote File Inclusion (#9) are ever present and are a direct result of trusting input in an unsafe manner.



The 'Recommended Controls' for Web App Attacks section in the DBIR states, "validate inputs, whether it is ensuring that the image upload functionality makes sure that it is actually an image and not a web shell, or that users can't pass commands to the database via the customer name field." This is not to say that validating output is unimportant. Rather, input is where most of the initial damage can occur, while output validation reduces the information an attacker can gather about the target.
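To make that recommendation concrete, here is a minimal sketch of the two controls in question: allow-list validation of a customer name field, and a parameterized query so the field can never carry commands to the database. It uses the node-postgres driver purely as an example; the table and column names are made up.

```typescript
import { Client } from "pg"; // node-postgres, used here purely as an example driver

// Allow-list validation: accept only characters a real customer name can contain.
function isValidCustomerName(input: string): boolean {
  return /^[\p{L}\s'.-]{1,100}$/u.test(input);
}

// Parameterized query: the driver sends the value separately from the SQL text,
// so the name field cannot smuggle commands to the database.
async function findCustomer(client: Client, name: string) {
  if (!isValidCustomerName(name)) {
    throw new Error("Rejected customer name input");
  }
  const result = await client.query(
    "SELECT id, name FROM customers WHERE name = $1",
    [name],
  );
  return result.rows;
}
```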

 

That's it for my take on the 2016 Verizon Data Breach Investigations Report. Be sure to check out the Defender's Perspective, written by Bob Rudis.

 

 

The 2016 Verizon Data Breach Investigations Report (DBIR) is out and everyone is poring over the report to see what new insights we can take from last year's incidents and breaches. We have not only created this post to look at some primary application security takeaways, but we also have gathered guest posts from industry experts. Keep checking back this week to hear from people living at the front lines of web application security, as well as commentary from several of our customers who provided some quick takeaways that can help you and your team.

 

Let’s dive into four key takeaways from this year's DBIR, from an application security point of view.

 

1. Protect Your Web Applications

Web app attacks remain the most common breach pattern, underscoring what we already know: web applications are a preferred vector for malicious attackers, and they are difficult to protect and secure. The figure below shows that 40% of the breaches analyzed for the 2016 DBIR were web app attacks.

 

 

2. Stop Auditing Like It’s 1999

 

We’ve said this before and we’ll say it again. Applications are evolving at a rapid pace and they are becoming more complex and more dynamic with each passing year. From web APIs to Single Page Applications, it’s critical that your application security experts not only understand the technologies used in your applications, but also find tools that are able to handle these modern applications.

 

As we pay our respects to the dearly beloved, Prince, please, stop testing like it’s 1999. Update your application security testing techniques, sharpen your skills, and make sure your tools understand modern applications.

3. No Industry is Immune

 

No industry is exempt from web app attacks, but some are seeing more breaches than others. For the finance, entertainment, and information industries, web app attacks are the primary attack pattern in reported breaches. For the financial industry, web app attacks account for a whopping 82% of breaches. These industries, in particular, should be assessing and gearing up their web application security programs to ensure optimal investment and attention.

4. Validate Your Inputs

 

As an industry, we have been talking about unvalidated inputs forever. It feels like we are fighting an uphill battle. We strive to train our developers on secure coding, the importance of input validation, and how to prevent SQL Injection, XSS, buffer overflows, and other attacks that stem from unvalidated and unsanitized inputs. Unfortunately, too many application inputs continue to be vulnerable, and we are swimming against a steady stream of new applications written by developers who continue to repeat the same mistakes.

 

That’s our take on the 2016 Verizon Data Breach Investigations Report. We would love to hear your thoughts in the comments! Please check back throughout the week to hear what some of our favorite web application security experts have to share about their key takeaways and reactions from this year’s DBIR.

 

For more perspective on this year's DBIR through an application security lens, check out the rest of the blogs in this series: http://community.rapid7.com/community/appspider/blog/2016/05/03/3-web-app-sec-ian-takeaways-from-the-2016-dbir

 

Be sure to check out The 2016 Data Breach Investigations Report Summary (DBIR) - The Defender's Perspective, by Bob Rudis (@hrbrmstr).

Is your Dynamic Application Security Testing (DAST) solution leaving you exposed?

 

We all know the story of the Emperor’s New Clothes. A dapper Emperor is convinced by a tailor that he has the most incredible set of clothes that are only visible to the wise. The emperor purchases them, but cannot see them because it is just a ruse. There are no clothes. Unwilling to admit that he doesn’t see the clothes, he wanders out in public in front of all of his subjects, proclaiming the clothes’ beauty until a child screams out that the Emperor is naked.

 

Evolving Applications

If there is one thing we know for sure in application security, it's that applications continue to evolve. This evolution continues at such a rapid pace that both security teams and vendors have trouble keeping up.

 

Over the last several years, there have been a few major evolutions in how applications are being built. For several years now, we have been security testing multi-page, AJAX-driven applications powered by APIs, and now we're seeing more and more Single Page Applications (SPAs). This is happening across all industries and at organizations of all sizes.

 

Take Gmail, for example. In the image below, you can see one of the original versions of Gmail compared to a more recent version. Today's Gmail is a classic example of a modern application.

 

   

 

 

So, as security professionals, we have built our programs around automated solutions, like DAST, but how are DAST solutions keeping up with these changes?

 

DAST Solutions - The Widening Coverage Gap

Unfortunately, most application security scanners have failed to keep up with these relatively recent evolutions. Web scanners were originally architected in the days of classic web applications, when applications were static and relatively simple HTML pages. While scanners have never covered, and will never cover, an entire web application, they should cover as much as possible. Unfortunately, the coverage gap has widened in recent years, forcing security teams to conduct even more manual testing, particularly of APIs and Single Page Applications. But with over-burdened and under-resourced application security teams, testing by hand just doesn't cut it.

 

Closing the Gap with Rapid7 AppSpider

Of course, we don't think manual testing is an acceptable solution. Application security teams and application scanners can and should close this coverage gap with automation to improve both the efficiency (reduce manual efforts) and effectiveness (find more vulnerabilities) of security efforts.

 

If this is something that interests you, you have come to the right place!

 

Keeping up with application technology is one of our specialties. The application security research team at Rapid7 has been committed to maximum coverage since AppSpider was created. Our customers rely on us to keep up with the latest application technologies and attack techniques so that they can leverage the power of automation to deliver a more effective application security program.

 

[Image: Rapid7 AppSpider closing the coverage gap]

 

If your solution isn't effectively addressing your applications and you are looking for a way to test APIs, dynamic clients, and SPAs more automatically, download a Free Trial of AppSpider today!

 

To learn more, visit www.rapid7.com.

 

For more information on how to reduce your application security exposure, check out these resources:

We are thrilled to announce a major new innovation in application security testing. AppSpider is the first Dynamic Application Security Testing (DAST) solution capable of testing Swagger-enabled APIs. Swagger is one of the most popular frameworks for building APIs, and the ability to test Swagger-enabled APIs not only saves application security testing experts a huge amount of time, but also enables Rapid7 customers to more rapidly reduce risk.

 

Why does this matter?

Modern applications make liberal use of APIs. APIs are powering mobile apps like Twitter and Facebook and they’re providing rich client experiences like Gmail. They are also powering the Internet of Things (IoT) – APIs are what connect the billions of IoT devices to the cloud where the data they collect is processed, crunched and made useful.

 

APIs have enabled the complex web of applications that exists today in almost every corporate and government environment and, at the same time, have quickly become one of the most difficult challenges for security teams because most DAST solutions are blind to them.

 

These modern problems, like API security testing, require modern solutions. AppSpider is a modern DAST solution designed for today’s connected world. DAST solutions must be relevant for today’s environment.

Remaining relevant in today’s inter-connected world

In today’s connected world, security professionals are challenged with securing exploding digital ecosystems that touch every facet of their business, from customers and shareholders to employees and partners. These digital ecosystems have become a complex tapestry of old and new web applications, web services, and APIs that are highly connected to each other. Adding to the complexity, the Internet of Things (IoT) is now driving tremendous innovation by connecting our physical world to our digital one. This inter-connected network of applications is constantly accessing, sharing, and updating critical sensitive data.

 

Your company’s data is one of your most precious assets and we know that securing that data is what keeps you up at night.

 

It keeps us up at night too.

 

We look at today’s application ecosystems as having three pillars:

 

  1. Web applications and web services
  2. Internet of Things (IoT)
  3. Connected applications (connected by RESTful APIs)

 

We at Rapid7 are dedicated to bringing you solutions that are relevant to today’s ecosystem. We are committed to delivering solutions that can help you be effective at securing your organization’s data even in this highly connected and complex world we are in.

 

AppSpider: Modern DAST for a Connected World

Understanding how AppSpider addresses today’s connected, modern technologies requires a little history about DAST. Legacy DAST solutions communicate with applications through the web front end in order to identify potential security vulnerabilities and architectural weaknesses in the web application. Most DAST solutions first perform a “crawl” of the client interface to understand the application and then conduct an “attack” or “audit” to find the vulnerabilities.
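
To make the "crawl, then attack" model concrete, here is a deliberately simplified, hypothetical sketch in Python (not AppSpider's implementation) of what a legacy scanner does: walk the links it can see, then replay each discovered URL with a test payload in its query parameters.

  from html.parser import HTMLParser
  from urllib.parse import (parse_qsl, urlencode, urljoin, urlparse,
                            urlsplit, urlunsplit)
  from urllib.request import urlopen

  class LinkExtractor(HTMLParser):
      # Collects href targets from anchor tags on a crawled page.
      def __init__(self):
          super().__init__()
          self.links = []

      def handle_starttag(self, tag, attrs):
          if tag == "a":
              for name, value in attrs:
                  if name == "href" and value:
                      self.links.append(value)

  def crawl(seed, limit=50):
      # Phase 1, crawl: breadth-first walk of same-site links.
      seen, queue, site = set(), [seed], urlparse(seed).netloc
      while queue and len(seen) < limit:
          url = queue.pop(0)
          if url in seen or urlparse(url).netloc != site:
              continue
          seen.add(url)
          try:
              body = urlopen(url, timeout=5).read().decode("utf-8", "replace")
          except OSError:
              continue
          parser = LinkExtractor()
          parser.feed(body)
          queue.extend(urljoin(url, link) for link in parser.links)
      return seen

  def attack(urls, payload="' OR '1'='1"):
      # Phase 2, attack: replay each URL with a test payload injected
      # into one query parameter at a time.
      for url in urls:
          scheme, netloc, path, query, fragment = urlsplit(url)
          params = parse_qsl(query)
          for i in range(len(params)):
              mutated = list(params)
              mutated[i] = (mutated[i][0], payload)
              target = urlunsplit((scheme, netloc, path, urlencode(mutated), fragment))
              print("would request:", target)  # a real scanner inspects the response

The whole model assumes that the interesting requests can be reached by following links, which is exactly the assumption that rich clients and APIs break.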

 

But with these newer applications that have rich clients and RESTful APIs, less and less of the application can effectively be crawled. Applications are no longer just HTML and JavaScript, which are relatively easy to crawl. In dynamic application security testing, we’re looking for a high application coverage rate, but many security teams have found that their coverage has eroded in recent years as their applications have been modernized and their legacy DAST solution has not kept pace.

 

Today’s applications – think about Amazon and Google – have rich clients with mini-applications nested inside of them and APIs on the back end checking and updating other data. These applications cannot be crawled using the legacy crawl-and-attack DAST approach. There are many parts of the application with no visible interface, deep below the surface, that have to be analyzed by a scanner in a different way.

 

Traditional crawling only works for the first pillar described above, web applications. A modern DAST solution must be relevant for all three pillars.

 

Let us be the first to say that legacy DAST is dead. It's time for modern DAST solutions.

 

AppSpider has moved beyond the crawl-and-attack framework and is able to analyze these modern applications, even the portions that it can’t crawl. It’s capable of understanding IoT and interconnected applications because it can now analyze and test a Swagger-enabled REST API.

 

APIs: A Source of Pain for Application Security Experts

Unfortunately, APIs carry the exact same security risks that we have been fighting in web applications for years. APIs enable traffic to pass through normal corporate defenses like network firewalls, and, just like web applications, they are vulnerable to SQL Injection, XSS, and many of the attacks we’re used to, because they access sensitive corporate data and pass it back and forth to all kinds of applications.

 

Today’s APIs have newer architectures and names, like RESTful interfaces, microservices, or just "APIs," and they have enabled developers to rapidly deliver highly scalable solutions that are easy to modify and extend.

 

As great as APIs are for developers and for end users, they have created some very serious challenges for security experts, and all too often APIs go completely untested, leaving vulnerabilities undiscovered and resulting in security risk.

 

Until now, most teams haven’t had the ability to security test APIs because doing so has required manual testing. We spoke to one customer who currently has about eight APIs. Each API takes about two hours to test manually, and they want to test each one every time there is a new build, but many security teams aren’t staffed for that level of manual testing.

 

And, to make matters worse, security experts often don’t know the functionality of APIs because they aren’t documented in such a way that security teams can easily get up to speed. What you end up with is already-strapped-for-time security experts faced with a manual testing effort for functionality they first need to learn about.

 

Swagger-enabled APIs

Enter Swagger (and, of course, AppSpider) to save the day! Swagger, an open source solution, is one of the most popular API frameworks. It defines a standard interface to REST APIs that is agnostic to the programming language. A Swagger-enabled API enables both humans and computers to discover and understand the capabilities of the service.

 

Because APIs are being delivered so quickly, many APIs and microservices haven’t been well documented (or are documented only in a Word doc that sits with the development team). With Swagger, teams have improved the documentation for their APIs and have also increased their interoperability with other solutions. It’s this machine-readable documentation that enables tools, like modern DAST solutions, to discover and analyze Swagger-enabled APIs.
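
As an illustration of what that machine-readable documentation looks like, here is a toy Swagger 2.0-style definition expressed as a Python dict (the host, path, and parameters are made up), along with the kind of request enumeration it makes possible for any tool, scanner or otherwise.

  # A toy Swagger 2.0-style definition; the service and fields are hypothetical.
  swagger_spec = {
      "swagger": "2.0",
      "host": "api.example.com",
      "basePath": "/v1",
      "paths": {
          "/users/{id}": {
              "get": {
                  "parameters": [
                      {"name": "id", "in": "path", "type": "integer"},
                      {"name": "verbose", "in": "query", "type": "boolean"},
                  ],
              },
          },
      },
  }

  def request_templates(spec):
      # Walk paths -> methods -> parameters to list everything a tester could exercise.
      base = "https://" + spec["host"] + spec.get("basePath", "")
      for path, methods in spec["paths"].items():
          for method, details in methods.items():
              params = [p["name"] for p in details.get("parameters", [])]
              yield method.upper(), base + path, params

  for method, url, params in request_templates(swagger_spec):
      print(method, url, "parameters:", params)
      # GET https://api.example.com/v1/users/{id} parameters: ['id', 'verbose']

The point is simply that once the endpoints, verbs, and parameters are declared in a standard format, no human has to reverse engineer them before testing can begin.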

 

How AppSpider Tests Swagger-Enabled APIs

AppSpider has two major innovations that enable it to fully test Swagger APIs. The first is AppSpider’s Universal Translator and the second is the ability to analyze these Swagger files.

 

Let’s first look at AppSpider’s Universal Translator. The Universal Translator was built to enable AppSpider to analyze the parts of the application that can’t be crawled, like APIs. The Universal Translator analyzes traffic captured with a proxy like Burp or Paros. Now, AppSpider’s Universal Translator is also able to analyze a Swagger file, eliminating the need to capture proxy traffic when testing a Swagger-enabled RESTful API. The Universal Translator then normalizes the traffic and attacks the application.


[Diagram: how the Universal Translator works]

 

 

The diagram above shows how the Universal Translator works. It consumes data from three sources: a traditional crawl, recorded HTTP traffic, and now Swagger files. It then normalizes that data into a standard format and completes the attack phase of the application security test.
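
As a rough, hypothetical sketch of that normalize-then-attack idea (not AppSpider's internals), imagine each input source being reduced to the same simple record of method, URL, and parameters before the attack phase runs:

  from urllib.parse import parse_qsl, urlsplit

  def from_crawl(url):
      # Normalize a crawled URL into a generic request record.
      parts = urlsplit(url)
      return {"method": "GET",
              "url": parts.scheme + "://" + parts.netloc + parts.path,
              "params": dict(parse_qsl(parts.query))}

  def from_recorded_traffic(request_line):
      # Normalize a "METHOD URL" line captured with a proxy like Burp or Paros.
      method, url = request_line.split(" ", 1)
      return {**from_crawl(url), "method": method}

  def from_swagger(base_url, path, method, param_names):
      # Normalize one operation pulled out of a Swagger file (see the sketch above).
      return {"method": method.upper(),
              "url": base_url + path,
              "params": {name: "test" for name in param_names}}

  def attack_phase(requests, payload="<script>alert(1)</script>"):
      # The attack phase no longer cares where a request came from:
      # it injects a payload into each parameter of each normalized record.
      for req in requests:
          for name in req["params"]:
              print(req["method"], req["url"], name, "->", payload)

Because every source funnels into the same record shape, adding a new source later means writing one more small translator rather than a new scanner.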

 

We like to call the Universal Translator “future-proof” because it’s designed to adapt to this rapidly changing digital environment - we can easily extend it as new technologies, like Swagger, become available, which in turn enables further innovation.

 

 

What can you do to improve API security testing on your team?

There are many things you can do to begin testing APIs more effectively.

  1. Learn about them: Regardless of whether you test an API manually or automatically, it’s important to understand the functionality. Security testers should invest the time to learn the API's functionality and then plan an appropriate test. Go to your developers and ask them how they are documenting APIs and how you can learn about them.
  2. Does your team have Swagger? Find out if your developers are using Swagger. If they aren’t, encourage them to check it out. You can add Swagger files to existing APIs to make them machine readable and enable automated testing with AppSpider.
  3. Does your DAST have Swagger? Consider using a DAST solution that further automates testing of APIs. AppSpider is able to test Swagger APIs from end to end and it also automates much of the testing process for other APIs.
  4. Test APIs with AppSpider: If your team is interested in further automating API testing, download AppSpider to evaluate if it’s a fit for your team and if it can help you address more of your attack surface automatically.

 

Learn More:

“Laws are like sausages. It’s better not to see them being made.” – Otto von Bismarck

I'm not sure how many of you have kids or how diligent they are with their homework, but I’m sure you’ve heard stories of parents marveling that their kids finished their homework in a remarkably short period of time.  However, upon investigation, the parents quickly discover that their child has only finished half of the homework.

Sadly, this state of affairs can also be true for SaaS providers offering web application security assessment services.  Only half of the work gets done, resulting in rapid but inaccurate scans and potentially vulnerable websites that are given clean bills of health by the scanning company.

Taking shortcuts

In order to find all SQL Injection and other important vulnerabilities, it’s critical to test every single parameter or input in an application. Properly configured web application vulnerability scanners should test parameters by locating all of the parameters on a page and then attacking one parameter at a time.  So if there are 10 parameters, you attack the first parameter and enter acceptable test data values into the other nine parameters to successfully complete the form request.

Why can’t you just attack all 10 at once?  Well, let’s say that parameter one is vulnerable and parameters two through ten have good filters or validation. If you attack parameter one with an attack that works (i.e., the application does not recognize it) and parameter two with an attack that trips the application's filter, the application will quite likely appear not to be vulnerable.
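
Here is a hypothetical sketch, in Python, of what attacking one parameter at a time looks like in practice: every field except the target gets plausible "good" data so the form still passes the other filters, and only the target carries the payload.

  PAYLOAD = "' OR '1'='1"

  def benign(field_type):
      # Plausible good data so the rest of the form still validates.
      return {"date": "2016-05-01", "number": "42"}.get(field_type, "hello")

  def attack_requests(form_fields):
      # form_fields maps field name -> field type, e.g. {"email": "text", "dob": "date"}.
      # Yields one submission per field, attacking only that field.
      for target in form_fields:
          submission = {name: benign(ftype) for name, ftype in form_fields.items()}
          submission[target] = PAYLOAD  # only one field carries the attack
          yield submission

  for body in attack_requests({"email": "text", "dob": "date", "qty": "number"}):
      print(body)

Multiply that by the dozens of payloads in each attack class and you arrive at the request counts described below.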

Now the problem is that if you are testing various attack classes (SQL Injection, Blind SQL Injection, Cross-Site Scripting, etc.), you will have dozens of attacks of each class against each parameter.  Your total attacks per parameter will exceed 100, and if you have 10 parameters on a page (which you will likely have in a signup form, for example), you will have over a thousand attacks for that page. On top of that, some of these attacks, like Blind SQL Injection, require multiple requests per attack.

Performance vs comprehensiveness

Many SaaS vendors want to complete scans fast to make them look more impressive. The problem is that in order to accomplish this, you have to cheat.

To speed up a scan, you might only test the first parameter or the first three or whatever and then skip testing the rest of the parameters.  If the customer doesn’t test the site and doesn’t get hacked, no one is the wiser if those untested parameters are vulnerable.

Does this matter?  Is it possible that one of parameters 4-10 is vulnerable if 1-3 are not?  In a word, yes.  Different parameter types (dates, text fields, numerical values, etc.) will have different filters.  Just because a developer got one right doesn’t mean they got them all right.  We’ve seen numerous cases where one parameter is 100% clean and others are full of holes.  You have to thoroughly test every parameter.

Letting those POSTs get away with murder

Since dealing with forms on web pages can be difficult, and there is a possibility that attacks could modify data in the database behind the web application, some SaaS solutions don’t even attack them. This means the inputs from those forms never get tested.

On many of the sites we have tested over the last decade, the form inputs sent over POST have been some of the most critical attack points, with some of the worst vulns, and often the most important areas to test on a website. Not testing them is the same as locking your doors but leaving your windows wide open.

How can you assess your vendor?

Ask your vendor the hard questions, such as:

1. How many parameters do they attack per page? Are there limits they impose?

2. Ask them to demonstrate that only one parameter at a time gets attacked while the other fields have good data. Heck, ask them to put these answers in the Statement of Work (SOW).

3. Confirm that they attack forms and POST data. Ask them to demonstrate it or test it yourself with a trial.