Web Application Penetration Testing Methodology


Most of our Penetration Testing engagements include one or more web applications. The procedures outlined below describe our standard approach to testing web applications and web services; when special circumstances warrant, we modify these methods accordingly.

Web Application Pen Test Tools.

The primary tools we use for Web Application Penetration Testing are:

  • Web Browsers
  • Burp Suite Professional
  • SoapUI
  • Custom Perl Scripts

This is not a complete list, but these are the major tools. We look for simple, powerful, flexible and proven tools.

Web Browsers. We use many different web browsers depending on circumstances, but the two we use most are Firefox (or derivatives) and Google Chrome (or derivatives). For Firefox, the developer tools are important to us; with Chrome, we often use a penetration-testing-specific browser derived from it, called SandCat. Whichever browser we are using at a given time, we use it for manual inspection and analysis.

Burp Suite Professional. Burp Suite is a penetration testing platform that integrates several important testing tools, including a web application scanner, spidering tools, an intercepting proxy, entropy analysis tools for session tokens and other (presumably) random tokens, and tools for crafting and testing many kinds of attack payloads. The creators at Portswigger.net do a great job of keeping the suite current with the latest exploits and continually incorporate improvements. For our purposes, the factors that make it a great tool are:

  • Flexibility. Because it is a platform intended for manual penetration testing of web applications, it allows great flexibility in the use of its automated scanner and other automated tools. We decide whether to scan, what to scan for, when to scan, and how to scan.
  • Manual tools. The suite offers a surprisingly robust framework for crafting and testing custom attacks through its Repeater and Intruder tools, and allows for real-time interception and manipulation of traffic between the client and the server.
  • Extensibility. When we encounter anything we want to do that Burp Suite doesn't already handle, the suite allows us to write and incorporate our own plugins.
  • Standard Logging. The suite allows us to capture and log every request and response, in sequence, and in formats parsable by other tools (see the parsing sketch below).

Unlike ordinary application scanners, this is a penetration testing suite. The emphasis is on fine-grained control for penetration testers and robust support for manual testing methods, not just push-button automation. That makes it a near-perfect tool for our purposes.
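
As an illustration of that last point about logging, here is a minimal sketch of pulling the URL and request line out of every item in a Burp XML export. It assumes Burp's 'Save items' format with base64-encoded request bodies; element names may vary between versions, and the file name is just an example.

    #!/usr/bin/perl
    # Sketch: list the URL and request line of every item in a Burp XML export.
    # Assumes Burp's 'Save items' format with base64-encoded <request> bodies.
    use strict;
    use warnings;
    use XML::LibXML;
    use MIME::Base64 qw(decode_base64);

    my $doc = XML::LibXML->load_xml(location => $ARGV[0] // 'burp-items.xml');

    for my $item ($doc->findnodes('//item')) {
        my $url   = $item->findvalue('./url');
        my ($req) = $item->findnodes('./request');
        next unless $req;
        my $raw = $req->getAttribute('base64')
            ? decode_base64($req->textContent)
            : $req->textContent;
        my ($request_line) = split /\r?\n/, $raw;   # e.g. "GET /path HTTP/1.1"
        print "$url\t$request_line\n";
    }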

SoapUI is a tool designed for functional testing of SOAP and, more recently, REST web services. It is not intended as a penetration testing tool, but we find it very useful for its ability to rapidly create functional test cases for web services. Those test cases can then be used with our other tools that are intended for penetration testing.
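
For example, once SoapUI has helped us establish a working request for a service, we can replay that request through an intercepting proxy where our penetration testing tools can capture and manipulate it. The sketch below shows the idea; the endpoint, SOAPAction, envelope and proxy address are hypothetical placeholders, and proxying HTTPS with LWP assumes LWP::Protocol::https is installed.

    #!/usr/bin/perl
    # Sketch: replay a SOAP test case through a local intercepting proxy so a
    # penetration testing tool can capture and manipulate it. Endpoint,
    # SOAPAction and envelope are hypothetical placeholders.
    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTTP::Request;

    my $envelope = <<~'XML';    # indented heredoc requires Perl 5.26+
        <?xml version="1.0" encoding="UTF-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
          <soapenv:Body>
            <GetAccount><AccountId>1001</AccountId></GetAccount>
          </soapenv:Body>
        </soapenv:Envelope>
        XML

    my $ua = LWP::UserAgent->new;
    $ua->proxy(['http', 'https'], 'http://127.0.0.1:8080/');   # e.g. Burp's default listener
    $ua->ssl_opts(verify_hostname => 0, SSL_verify_mode => 0); # lab only: proxy CA untrusted

    my $req = HTTP::Request->new(POST => 'https://app.example.com/ws/AccountService');
    $req->header('Content-Type' => 'text/xml; charset=utf-8');
    $req->header('SOAPAction'   => '"GetAccount"');
    $req->content($envelope);

    print $ua->request($req)->status_line, "\n";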

Perl is our scripting language of choice. We use Perl for day-to-day, on-the-fly scripting for all kinds of penetration testing tasks. You never know when you will need to do something special with a web application, and we can write what we need with Perl.
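
As a small, typical example, the sketch below fires a unique marker at a query parameter and reports whether it comes back unencoded, a cheap first hint of reflected cross-site scripting. The URL and parameter are hypothetical.

    #!/usr/bin/perl
    # Sketch: a throwaway check for whether a query parameter is reflected
    # unencoded in the response. The target URL and parameter are hypothetical.
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $marker = 'hbsec' . int(rand(1_000_000));
    my $url    = "https://app.example.com/search?q=$marker";

    my $ua  = LWP::UserAgent->new(timeout => 15);
    my $res = $ua->get($url);

    if ($res->is_success && $res->decoded_content =~ /\Q$marker\E/) {
        print "Marker reflected; worth a closer manual look.\n";
    } else {
        print "No reflection observed (", $res->status_line, ").\n";
    }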

Methods and sequence.

Manual Application Review. There are two primary reasons for starting our actual testing with a manual review of your application. The first is safety and stability: we want to know if there are any factors that could result in unintended consequences before we configure and launch any automated tools. The second reason is quality. A brief examination of the alternative approach will help to illuminate some of our reasons for performing a manual review as a first step.

If a manual review is not the first step, then it usually means that automated scanning comes first, and that means that the scanner has to be configured. One common approach to configuring an automated scanner is to provide the scanner with an initial URL, along with any scope limitations, and then allow the scanner to spider the approved target. In other words, the tool is allowed to follow all of the links it can find, and perform a security scan of all of the parameters it can identify, for all of the links it finds, as long as the URL is in scope. This is the cheapest and (often) fastest way to scan a web application, and for vendors who do not actually perform a manual review, it is the only approach. It also has a host of potential problems. Here are a few:

  • Pre-Knowledge. The spidering approach requires that you know all of the potential application-specific problems a scanner might cause or encounter, without even looking at the application itself. You can't configure the scanner to avoid something you are not aware of, so your only option is to anticipate as best you can, cross your fingers and pull the trigger.

  • Spiders in the weeds. Spiders have trouble with certain application structures, such as dynamically generated URLs. In some circumstances, applications generate URLs in a very lengthy, or even logically infinite, sequence. A human tester can quickly recognize a dynamically generated URL and discern that every subsequent URL of the same form is essentially identical, exposing the same functionality. Those judgments are required to decide whether to continue scanning URLs of the same form, and in spite of many attempts to code a solution, they still require a human. Spiders simply can't reliably make that determination, so they often end up 'off in the weeds', stuck in what is essentially a dead-end loop, or they skip important content because of a misjudgment. When scanners end up in a loop, inexperienced testers may let them run for days, finally stop them near the end of the testing window, and assume the application was 'adequately' scanned. In fact, how much of the application was scanned depends on the point at which the spider took its trip into the weeds. (A sketch of how part of this judgment can be codified appears after this list.)

  • Noise. Spiders generate a lot of traffic, and when combined with scanning activity, they are normally about as stealthy as a circus setting up in the town square. The activity will often trip Web Application Firewalls or other Intrusion Prevention Systems, and that can cause avoidable complications in subsequent manual testing. For vendors who actually intend to perform any manual testing, it makes sense to do it quietly, behaving as much as possible like a normal user, and to do as much of it as possible before setting off the fireworks show.

  • Lock outs, email forms and databases. If you think of an application scanner as a machine gun, with a machine brain, on a school playground, you will have an accurate if dramatized picture of what can happen when scanners fail to make adjustments for login pages, contact forms and database input forms. Account lockouts, thousands of emails directed at sales and support staff, and database corruption are all problems that scanners can, and routinely do, cause when left to define their own targets.
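
To make the 'same form' judgment concrete, here is a sketch of one way part of it can be codified: URLs that differ only in numeric or long hexadecimal identifiers are collapsed into a single form and emitted once. The normalization rules are illustrative only; in practice this decision still needs a human.

    #!/usr/bin/perl
    # Sketch: collapse URLs that differ only in numeric or long hex identifiers
    # into one 'form', so endlessly generated links are emitted only once.
    # The normalization rules are illustrative, not exhaustive.
    use strict;
    use warnings;
    use URI;

    my %seen_form;
    while (my $line = <STDIN>) {               # one URL per line
        chomp $line;
        my $uri = URI->new($line);
        next unless ($uri->scheme // '') =~ /^https?$/;
        my $path = $uri->path;
        $path =~ s{/\d+}{/{id}}g;              # /orders/48213 -> /orders/{id}
        $path =~ s{/[0-9a-f]{16,}}{/{token}}gi;
        my @params = defined $uri->query
            ? sort map { (split /=/, $_, 2)[0] } split /[&;]/, $uri->query
            : ();
        my $form = join '', $uri->host, $path, '?', join(',', @params);
        print "$line\n" unless $seen_form{$form}++;   # first URL of each form
    }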

So, if allowing the scanner to find its own targets through spidering is not a good idea, what is? Before getting to our approach, there is one intuitive solution that needs to be addressed: defining all of the targets up front. While this seems reasonable, it is entirely impractical to do directly. You might be able to list all of the known web pages, but modern web applications make dozens, sometimes hundreds, of subsidiary requests (JavaScript, style sheets, web service calls, images and the like) for each address that actually appears in your browser's address bar. While listing them all is theoretically possible, the chance of missing something important is quite high, and so is the amount of time required. The notion is headed in the right direction, though, and there is a practical way to develop such a list: a careful manual review using a proxy, which gets to our approach.

All of the problems enumerated above can be effectively mitigated if you conduct a manual review before scanning. Here is what our manual application review includes:

  • Full Exercise with Proxy Capture. The first thing our testers do is set up the testing browser to use a proxy tool that captures every web request (a minimal logging proxy of this kind is sketched after this list). From that point on, the tester fully exercises the application while taking notes about what they see. The main purpose here is to exercise the application fully, capture all of the traffic, and gain a complete understanding of the application.

  • Scope Checks. One of the things our testers look for in this phase is any indication of scope problems that require clarification, such as mixed protocol schemes (https is in scope, but http is not mentioned and we find both), or the application making requests to different hosts than expected, where we need to know whether your intent differed from what ended up in our scope documents. We try to ensure that scope is accurate before we even begin, but it is important for testers to be very 'scope aware', to identify any such issues as early as possible, and to get clarification.

  • Safety and Stability Checks. Our testers are taking notes about many things, but none are more important than identifying potential safety and stability factors. Any login forms, email forms, database forms, or other potential problem areas are identified. Again, we try to identify and address problem areas in our scope documents before testing even begins, but our testers are trained to look for anything that might have been missed in planning.

  • Side Channel Vulnerabilities. While exercising the application, we look at any email, text messaging or other out-of-band communication the web application sends to us. This is something that scanners can't do very well, and we look closely at everything the application does.

  • Logical Faults. Some issues can be fully documented into finding reports immediately, without performing any testing that would raise flags with an application firewall or IPS, and when that is possible, we do it. This often involves so-called 'logical faults', which we frequently find in login pages, password reset pages, and account registration pages. For findings that require further intrusive testing, we take notes for later follow-up.
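
For the proxy capture step above, we normally rely on Burp's intercepting proxy, but the sketch below shows the minimal idea using the CPAN HTTP::Proxy module: every request line that passes through is appended to a capture file. Note that a plain proxy like this tunnels HTTPS CONNECT traffic without decrypting it, one reason a dedicated testing proxy is preferable.

    #!/usr/bin/perl
    # Sketch: a minimal request-logging proxy, assuming the CPAN HTTP::Proxy
    # module. Point the testing browser at 127.0.0.1:8080; every request line
    # is appended to capture.log for later review and target-list building.
    use strict;
    use warnings;
    use IO::Handle;
    use HTTP::Proxy;
    use HTTP::Proxy::HeaderFilter::simple;

    open my $log, '>>', 'capture.log' or die "capture.log: $!";
    $log->autoflush(1);

    my $proxy = HTTP::Proxy->new(port => 8080);
    $proxy->push_filter(
        request => HTTP::Proxy::HeaderFilter::simple->new(sub {
            my ($self, $headers, $message) = @_;
            print {$log} $message->method, ' ', $message->uri, "\n";
        }),
    );
    $proxy->start;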

Confirmation.

At this point our initial manual review is complete. Ideally, the tester has no scope questions or safety and stability concerns beyond what our scope documents already address; if any remain, testing pauses here until they are clarified.

Manual Testing.

After the initial review, we perform any manual intrusive testing indicated by our review of the application. We do it at this point, as quietly as possible, before launching automated scans.

Automated Scans.

Automated scanning is critical to ensure full testing coverage, but scanners need careful attention. During manual review, our tester has developed a full proxy record of every page request, along with every subsequent request generated by exercising the application. All of the JavaScript, style sheet, web service and image requests that would have been nearly impossible to list have been captured. Our tester has also made rational, informed human decisions about what not to do, like pursuing dynamic URLs forever. That resource list now becomes the scanner target list. After checking to ensure that the scanner is configured as prescribed in our scope documents, including any scope or safety clarifications we may have received, we launch automated scans.
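
A sketch of that conversion: assuming the 'METHOD URL' capture format from the proxy sketch earlier, the script below filters the record down to in-scope hosts and deduplicates it into a scanner target list. The in-scope host names are hypothetical examples.

    #!/usr/bin/perl
    # Sketch: turn a capture log ("METHOD URL" per line) into a deduplicated,
    # in-scope scanner target list. The in-scope host list is hypothetical.
    use strict;
    use warnings;
    use URI;

    my %in_scope = map { $_ => 1 } qw(app.example.com api.example.com);
    my %seen;

    while (<STDIN>) {
        my (undef, $url) = split ' ', $_, 2;
        next unless defined $url;
        chomp $url;
        my $uri = URI->new($url);
        next unless ($uri->scheme // '') =~ /^https?$/;
        next unless $in_scope{ lc $uri->host };
        print "$url\n" unless $seen{$url}++;
    }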

After automated scans have completed, which may require adjustments for IPS or WAF evasion, we will have a scanner report. We look very closely at the vulnerability scan results, sorting identified vulnerabilities into two buckets: those that require further validation, and those that are reliable and need no further validation. We are not just looking for vulnerabilities the scanner identified, though. We look at vulnerability scans differently than most. For us, the results are a record of tens or hundreds of thousands of interactions with your application, and we look for anything in those results that seems out of the ordinary at all. It is surprising how often you can find hints that lead to really serious vulnerabilities when you combine knowledgeable, informed human intuition with scanner output. We look hard.
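
A first automated pass over such a record might look like the sketch below, which assumes a hypothetical tab-separated export of URL, status code and response length, and flags unusual statuses and outsized responses for human review. The thresholds are illustrative; the real work is human judgment about what 'out of the ordinary' means for your application.

    #!/usr/bin/perl
    # Sketch: a first anomaly pass over scan results, assuming a hypothetical
    # tab-separated export of "url status response_length". Flags odd status
    # codes and response lengths far above the median; thresholds illustrative.
    use strict;
    use warnings;

    my (@rows, @lengths);
    while (<STDIN>) {
        chomp;
        my ($url, $status, $len) = split /\t/;
        next unless defined $len && $len =~ /^\d+$/;
        push @rows,    [$url, $status, $len];
        push @lengths, $len;
    }
    my @sorted = sort { $a <=> $b } @lengths;
    my $median = @sorted ? $sorted[int(@sorted / 2)] : 0;

    for my $r (@rows) {
        my ($url, $status, $len) = @$r;
        print "ODD STATUS $status  $url\n" if $status !~ /^(?:200|301|302|304|404)$/;
        print "ODD LENGTH $len  $url\n"    if $median && $len > 5 * $median;
    }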

Manual Testing Again.

Finally, we look at everything that has been identified for further testing. At this point it is impossible to list tools or methods, because there is simply too much potential ground to cover, but very often we will use Burp Suite Intruder and/or Repeater (an Intruder-style loop is sketched after the list below). In general, vulnerabilities fall into three categories at this point:

  • Vulnerabilities that were identified by automation and are reliable. A finding report is prepared, along with any validating evidence from the automated tool.
  • Vulnerabilities that were identified by automation but are not reliable until validated. These are validated using whatever tools or methods are appropriate. Screen captures and other evidence are collected, and a finding report is created.
  • Possible vulnerabilities or simple suspicions identified manually. These are all tested, one way or another, until we are convinced that we know what we are seeing, and can either dismiss them or report them.
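
For the second and third categories, validation often amounts to replaying a request with a series of payloads and watching how the responses differ. The sketch below is an Intruder-style loop of that kind; the URL, parameter and payload list are hypothetical, and of course nothing like this should be run against systems you are not authorized to test.

    #!/usr/bin/perl
    # Sketch: an Intruder-style loop for validating a suspected injection
    # point outside of Burp. The URL, parameter and payloads are hypothetical;
    # only run something like this against systems you are authorized to test.
    use strict;
    use warnings;
    use LWP::UserAgent;
    use URI::Escape qw(uri_escape);

    my @payloads = ("'", "''", "' OR '1'='1", '"><hbsec>');
    my $ua = LWP::UserAgent->new(timeout => 15);

    for my $p (@payloads) {
        my $url = 'https://app.example.com/item?id=' . uri_escape($p);
        my $res = $ua->get($url);
        printf "%-16s -> %s, %d bytes\n",
            $p, $res->status_line, length($res->decoded_content // '');
    }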

Web Application Penetration Testing Summary.

This is our standard methodology for a standard testing approach. We adjust as necessary for different testing objectives. Our standard approach is neither fully automated, nor fully manual. It is our opinion that one cannot expect full breadth of coverage without the use of at least some automation, nor can automation be expected to exhibit human intuition and experience.

At High Bit Security, we employ an approach that balances depth and breadth. We use carefully configured automated tools to aid in breadth of testing. We also use extensive manual effort to dig deep into potential security faults, but we stop pursuing depth when we have proven the fault, and have documented the finding in detail. This balanced approach allows for thorough breadth of coverage, sufficient depth, detailed documentation, and above all, safer testing.

Ask us for a free, quick, no-hassle quote using our contact form.