Web Security - Application Penetration Testing Methodology


Most of our penetration testing engagements include web security testing, usually of one or more web applications or web services. The procedures outlined below are our standard approach to testing web applications and web services; when an engagement presents special circumstances, we modify the methods accordingly.

Before getting into the tools and methods we use, it is worth discussing a more general question: Is there any difference between manual web security testing and automated web security testing, and if there is, how do you, the client, know what you are getting?

Manual, Automated, or Both?

Both of these approaches to penetration testing have value, and we use both. Automation is necessary for full testing coverage, and in some cases is actually better than manual testing. Automation by itself, however, is entirely incapable of identifying, let alone validating, some of the most important security flaws found in web applications.

A number of vendors currently offer 'penetration testing services' at advertised prices of $895, or even $700 or less. We do not object to this, as long as the 'service' is accurately described. Our objection is to vendors who devote an entire web page to the quality of their testing, complete with statements about the manual effort they employ and the qualifications of their 'testers', and then disclose in the fine print at the bottom of the page that the testing is automated.

It's not the testing methods we object to, it's the dishonesty. If you know what you are buying, and it meets your objectives, then we have no objection to purely automated testing. If you are told, however, that automated testing is sufficient by itself, and thorough web security testing is an important consideration for you, then we encourage you to take a look at the table below.

Web Security, Testing Coverage

OWASP (the Open Web Application Security Project) is perhaps the most respected organization in the world on the subject of web application security. The table below comes directly from the table of contents of the OWASP Testing Guide v4, with one column added: the Method column, which identifies testing that requires manual methods (M), automated methods (A), or both (MA).

Testing Method Index:

M - Manual. The OWASP comments indicate that at least some manual testing is needed. Where only an M appears, we could find no indication from OWASP or from our own experience that automation alone can be used even to consistently identify these faults.

A - Automated. Either OWASP or we see at least some value in automation. Where only an A appears, either OWASP or we believe that automation alone can accurately identify the faults, though not necessarily validate them.

MA - Both. Both manual testing and automation are necessary for full coverage.


We should mention one more thing about the table that follows: our interpretation of the OWASP comments, and the resulting method classification, is of course our own. That is why we include the link to the OWASP testing guide, so you can review the OWASP comments yourself. There may be some difference of opinion in some areas, but we think most informed professionals will agree with our overall conclusions. Even if you are not an informed professional, some of the OWASP comments make it obvious that human intuition and persistence are necessary components.

Source: OWASP Testing Guide v4, Table of Contents

4.2 Information Gathering

Method | Ref. No. | Category     | Test Name
M      | 4.2.1    | OTG-INFO-001 | Conduct Search Engine Discovery and Reconnaissance for Information Leakage
A      | 4.2.2    | OTG-INFO-002 | Fingerprint Web Server
A      | 4.2.3    | OTG-INFO-003 | Review Webserver Metafiles for Information Leakage
MA     | 4.2.4    | OTG-INFO-004 | Enumerate Applications on Webserver
MA     | 4.2.5    | OTG-INFO-005 | Review Webpage Comments and Metadata for Information Leakage
MA     | 4.2.6    | OTG-INFO-006 | Identify application entry points
MA     | 4.2.7    | OTG-INFO-007 | Map execution paths through application
A      | 4.2.8    | OTG-INFO-008 | Fingerprint Web Application Framework
A      | 4.2.9    | OTG-INFO-009 | Fingerprint Web Application
MA     | 4.2.10   | OTG-INFO-010 | Map Application Architecture

4.3 Configuration and Deployment Management Testing

Method | Ref. No. | Category       | Test Name
MA     | 4.3.1    | OTG-CONFIG-001 | Test Network/Infrastructure Configuration
MA     | 4.3.2    | OTG-CONFIG-002 | Test Application Platform Configuration
A      | 4.3.3    | OTG-CONFIG-003 | Test File Extensions Handling for Sensitive Information
MA     | 4.3.4    | OTG-CONFIG-004 | Backup and Unreferenced Files for Sensitive Information
MA     | 4.3.5    | OTG-CONFIG-005 | Enumerate Infrastructure and Application Admin Interfaces
A      | 4.3.6    | OTG-CONFIG-006 | Test HTTP Methods
A      | 4.3.7    | OTG-CONFIG-007 | Test HTTP Strict Transport Security
A      | 4.3.8    | OTG-CONFIG-008 | Test RIA cross domain policy

4.4 Identity Management Testing

Method | Ref. No. | Category      | Test Name
M      | 4.4.1    | OTG-IDENT-001 | Test Role Definitions
M      | 4.4.2    | OTG-IDENT-002 | Test User Registration Process
M      | 4.4.3    | OTG-IDENT-003 | Test Account Provisioning Process
M      | 4.4.4    | OTG-IDENT-004 | Testing for Account Enumeration and Guessable User Account
M      | 4.4.5    | OTG-IDENT-005 | Testing for Weak or unenforced username policy
M      | 4.4.6    | OTG-IDENT-006 | Test Permissions of Guest/Training Accounts (no current OWASP content)
M      | 4.4.7    | OTG-IDENT-007 | Test Account Suspension/Resumption Process (no current OWASP content)

4.5 Authentication Testing

Method | Ref. No. | Category      | Test Name
A      | 4.5.1    | OTG-AUTHN-001 | Testing for Credentials Transported over an Encrypted Channel
A      | 4.5.2    | OTG-AUTHN-002 | Testing for default credentials
M      | 4.5.3    | OTG-AUTHN-003 | Testing for Weak lock out mechanism
MA     | 4.5.4    | OTG-AUTHN-004 | Testing for bypassing authentication schema
MA     | 4.5.5    | OTG-AUTHN-005 | Test remember password functionality
MA     | 4.5.6    | OTG-AUTHN-006 | Testing for Browser cache weakness
M      | 4.5.7    | OTG-AUTHN-007 | Testing for Weak password policy
M      | 4.5.8    | OTG-AUTHN-008 | Testing for Weak security question/answer
M      | 4.5.9    | OTG-AUTHN-009 | Testing for weak password change or reset functionalities
M      | 4.5.10   | OTG-AUTHN-010 | Testing for Weaker authentication in alternative channel

4.6 Authorization Testing

Method | Ref. No. | Category      | Test Name
MA     | 4.6.1    | OTG-AUTHZ-001 | Testing Directory traversal/file include
MA     | 4.6.2    | OTG-AUTHZ-002 | Testing for bypassing authorization schema
MA     | 4.6.3    | OTG-AUTHZ-003 | Testing for Privilege Escalation
M      | 4.6.4    | OTG-AUTHZ-004 | Testing for Insecure Direct Object References

4.7 Session Management Testing

Method | Ref. No. | Category     | Test Name
MA     | 4.7.1    | OTG-SESS-001 | Testing for Bypassing Session Management Schema
MA     | 4.7.2    | OTG-SESS-002 | Testing for Cookies attributes
MA     | 4.7.3    | OTG-SESS-003 | Testing for Session Fixation
A      | 4.7.4    | OTG-SESS-004 | Testing for Exposed Session Variables
MA     | 4.7.5    | OTG-SESS-005 | Testing for Cross Site Request Forgery
M      | 4.7.6    | OTG-SESS-006 | Testing for logout functionality
M      | 4.7.7    | OTG-SESS-007 | Test Session Timeout
MA     | 4.7.8    | OTG-SESS-008 | Testing for Session puzzling

4.8 Data Validation Testing

Method | Ref. No.  | Category       | Test Name
MA     | 4.8.1     | OTG-INPVAL-001 | Testing for Reflected Cross Site Scripting
MA     | 4.8.2     | OTG-INPVAL-002 | Testing for Stored Cross Site Scripting
MA     | 4.8.3     | OTG-INPVAL-003 | Testing for HTTP Verb Tampering
MA     | 4.8.4     | OTG-INPVAL-004 | Testing for HTTP Parameter pollution
MA     | 4.8.5     | OTG-INPVAL-005 | Testing for SQL Injection
MA     | 4.8.6     | OTG-INPVAL-006 | Testing for LDAP Injection
MA     | 4.8.7     | OTG-INPVAL-007 | Testing for ORM Injection
MA     | 4.8.8     | OTG-INPVAL-008 | Testing for XML Injection
MA     | 4.8.9     | OTG-INPVAL-009 | Testing for SSI Injection
MA     | 4.8.10    | OTG-INPVAL-010 | Testing for XPath Injection
MA     | 4.8.11    | OTG-INPVAL-011 | IMAP/SMTP Injection
MA     | 4.8.12    | OTG-INPVAL-012 | Testing for Code Injection
MA     | 4.8.12.1  |                | Testing for Local File Inclusion
MA     | 4.8.12.2  |                | Testing for Remote File Inclusion
MA     | 4.8.13    | OTG-INPVAL-013 | Testing for Command Injection
MA     | 4.8.14    | OTG-INPVAL-014 | Testing for Buffer overflow
MA     | 4.8.14.1  |                | Testing for Heap overflow
MA     | 4.8.14.2  |                | Testing for Stack overflow
MA     | 4.8.14.3  |                | Testing for Format string
MA     | 4.8.15    | OTG-INPVAL-015 | Testing for incubated vulnerabilities
MA     | 4.8.16    | OTG-INPVAL-016 | Testing for HTTP Splitting/Smuggling

4.9 Error Handling

Method | Ref. No. | Category    | Test Name
MA     | 4.9.1    | OTG-ERR-001 | Analysis of Error Codes
MA     | 4.9.2    | OTG-ERR-002 | Analysis of Stack Traces

4.10 Cryptography

Method | Ref. No. | Category       | Test Name
MA     | 4.10.1   | OTG-CRYPST-001 | Testing for Weak SSL/TLS Ciphers, Insufficient Transport Layer Protection
MA     | 4.10.2   | OTG-CRYPST-002 | Testing for Padding Oracle
MA     | 4.10.3   | OTG-CRYPST-003 | Testing for Sensitive information sent via unencrypted channels

4.11 Business Logic Testing

Method | Ref. No. | Category         | Test Name
M      | 4.11.1   | OTG-BUSLOGIC-001 | Test Business Logic Data Validation
M      | 4.11.2   | OTG-BUSLOGIC-002 | Test Ability to Forge Requests
M      | 4.11.3   | OTG-BUSLOGIC-003 | Test Integrity Checks
MA     | 4.11.4   | OTG-BUSLOGIC-004 | Test for Process Timing
M      | 4.11.5   | OTG-BUSLOGIC-005 | Test Number of Times a Function Can be Used Limits
M      | 4.11.6   | OTG-BUSLOGIC-006 | Testing for the Circumvention of Work Flows
M      | 4.11.7   | OTG-BUSLOGIC-007 | Test Defenses Against Application Mis-use
MA     | 4.11.8   | OTG-BUSLOGIC-008 | Test Upload of Unexpected File Types
M      | 4.11.9   | OTG-BUSLOGIC-009 | Test Upload of Malicious Files

4.12 Client Side Testing

Method | Ref. No. | Category       | Test Name
MA     | 4.12.1   | OTG-CLIENT-001 | Testing for DOM based Cross Site Scripting
MA     | 4.12.2   | OTG-CLIENT-002 | Testing for JavaScript Execution
MA     | 4.12.3   | OTG-CLIENT-003 | Testing for HTML Injection
MA     | 4.12.4   | OTG-CLIENT-004 | Testing for Client Side URL Redirect
MA     | 4.12.5   | OTG-CLIENT-005 | Testing for CSS Injection
M      | 4.12.6   | OTG-CLIENT-006 | Testing for Client Side Resource Manipulation
MA     | 4.12.7   | OTG-CLIENT-007 | Test Cross Origin Resource Sharing
MA     | 4.12.8   | OTG-CLIENT-008 | Testing for Cross Site Flashing
MA     | 4.12.9   | OTG-CLIENT-009 | Testing for Clickjacking
MA     | 4.12.10  | OTG-CLIENT-010 | Testing WebSockets
M      | 4.12.11  | OTG-CLIENT-011 | Test Web Messaging

Web Security, Testing Coverage Summary

The number of testing points that require manual testing is obviously a substantial part of the total, so it's no wonder that many of the vendors who rely solely on automated scanning do their best to at least appear to be performing manual testing. To what degree they actually do so, and to what degree you need it, we leave to your judgment. If you suspect you are being told one thing and sold another, we can give you a few pointers on what to ask:

- Ask who will actually perform the testing, and what their qualifications and certifications are.
- Ask how much of the engagement is manual effort, and which test categories are covered manually.
- Ask whether findings are validated by a human tester, or reported straight from the scanner.
- Ask how the engagement handles the testing points, such as business logic flaws, that automation alone cannot identify.

So, that's our current guidance on how to ensure that you are actually getting at least some manual testing. It's worth repeating that we are not opposed to testing that is entirely automated. Automated testing alone will not be thorough, but there are cases where it is appropriate. Our objection is to vendors who try to sell a simple automated scan to clients who are expecting significant manual effort.

Our view is that no penetration test can be considered reasonably thorough without both automated scanning and substantial manual testing. We are not alone in that opinion: PCI DSS (as of version 3), a growing number of software purchasing departments, and the OWASP testing guide all support that view.

Our web application testing covers the entire OWASP testing guide, not just the top 10 or top 25, and makes extensive use of both automation and manual testing by qualified, certified testers. The remainder of this section deals with the tools and methods we use to achieve this coverage.


Web Security, Application Pen Test Tools.

The primary tools we use for Web Application Penetration Testing are described below. This is not a complete list, but these are the major ones; we look for simple, powerful, flexible, and proven tools.

Web Browsers. We use many different web browsers depending on circumstances, but the two we use most are Firefox (or derivatives) and Google Chrome (or derivatives). Whichever browser we are using at any given time, we use it for manual inspection and analysis.

Burp Suite Professional. Burp Suite is a penetration testing platform that integrates several important testing tools, including a web application scanner, spidering tools, an intercepting proxy, entropy analysis tools for session tokens and other (presumably) random tokens, and tools for crafting and testing many kinds of attack payloads. The creators at Portswigger.net do a great job of keeping the suite current with the latest exploits, and they continually incorporate improvements. For our purposes, what makes it a great tool is this:


Unlike ordinary application scanners, this is a penetration testing suite. The emphasis is on fine-grained control for penetration testers and robust support for manual testing methods, not just push-button automation. That makes it a near perfect tool for our purposes.
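To make the entropy analysis idea concrete, here is a minimal sketch of one statistic you might compute over a captured sample of session tokens: character-frequency Shannon entropy. This is not Burp Sequencer's actual methodology (Sequencer runs a much larger battery of statistical tests), and the token values below are hypothetical placeholders.

```perl
#!/usr/bin/perl
# Back-of-envelope check on session token randomness: character-frequency
# Shannon entropy over a sample of captured tokens. This is NOT Burp
# Sequencer's methodology, just a crude first look.
use strict;
use warnings;

# Hypothetical captured session tokens; substitute real samples.
my @tokens = qw(a3f9c21b 7e40d8aa 91c5b3f0 0d6e2a47);

my ( %freq, $total );
for my $token (@tokens) {
    for my $char ( split //, $token ) {
        $freq{$char}++;
        $total++;
    }
}

my $entropy = 0;
for my $count ( values %freq ) {
    my $p = $count / $total;
    $entropy -= $p * log($p) / log(2);   # Shannon entropy, bits per character
}

printf "%.2f bits/char observed (max %.2f for %d distinct characters)\n",
    $entropy, log( scalar keys %freq ) / log(2), scalar keys %freq;
```

A token set that scores far below the maximum for its alphabet is a hint, not proof, of weak generation; that is exactly the kind of lead we hand to the suite's proper analysis tools.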

SoapUI is a tool designed for functional testing of SOAP and, more recently, REST web services. It is not intended as a penetration testing tool, but we find it very useful for its ability to rapidly create functional test cases for web services. Those test cases can then be used with our other tools that are intended for penetration testing.
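As an illustration of how a functional test case can be carried over to other tooling, here is a minimal sketch that replays a captured SOAP request by hand. The endpoint, SOAPAction value, and envelope body are hypothetical placeholders, not a real service.

```perl
#!/usr/bin/perl
# Sketch: replay a captured SOAP request outside SoapUI so the same test
# case can be fed through other tooling (a proxy, a fuzzer, etc.).
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

my $endpoint = 'https://app.example.com/ws/AccountService';   # hypothetical
my $envelope = <<'XML';
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <GetAccount><AccountId>1001</AccountId></GetAccount>
  </soapenv:Body>
</soapenv:Envelope>
XML

my $req = HTTP::Request->new( POST => $endpoint );
$req->header( 'Content-Type' => 'text/xml; charset=utf-8' );
$req->header( 'SOAPAction'   => '"GetAccount"' );             # hypothetical action
$req->content($envelope);

my $res = LWP::UserAgent->new( timeout => 10 )->request($req);
print $res->status_line, "\n", $res->decoded_content, "\n";
```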

Perl is our scripting language of choice. We use Perl for day-to-day, on-the-fly scripting for all kinds of penetration testing tasks. You never know when you will need to do something special with a web application, and with Perl we can quickly write whatever we need.
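As a small example of the kind of on-the-fly script we mean, here is a sketch that checks how a server responds to each HTTP method, the sort of quick check behind OTG-CONFIG-006 in the table above. The target URL is a hypothetical placeholder; point it only at systems you are authorized to test.

```perl
#!/usr/bin/perl
# Quick check: how does the target respond to each HTTP method?
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

my $target = 'https://app.example.com/';   # hypothetical in-scope URL
my $ua = LWP::UserAgent->new( timeout => 10 );

for my $method (qw(GET HEAD POST OPTIONS PUT DELETE TRACE)) {
    my $res = $ua->request( HTTP::Request->new( $method => $target ) );
    printf "%-8s %s\n", $method, $res->status_line;   # e.g. "405 Method Not Allowed"
}
```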

Methods and Sequence.

Manual Application Review. There are two primary reasons for starting our actual testing with a manual review of your application. The first is safety and stability: we want to know if there are any factors that could result in unintended consequences before we configure and launch any automated tools. The second reason is quality. A brief examination of the alternative approach will help to illuminate some of our reasons for performing a manual review as a first step.

The alternative to a manual review as the first step is to perform automated scanning first, and that means the scanner has to be configured. One common approach is to provide the scanner with an initial URL, along with any scope limitations, and then allow it to spider the approved target. In other words, the tool follows all of the links it can find, and scans all of the parameters it can identify, for every link it finds, as long as the URL is in scope. This is the cheapest and (often) fastest way to scan a web application, and for vendors who do not actually perform a manual review, it is the only approach. It also has a host of potential problems. Here are a few (a naive spider sketch follows the list):

- Spiders can trigger unintended, and potentially damaging, application functions before any human has assessed whether they are safe to exercise.
- Spiders can chase dynamically generated URLs (calendars, sort and filter parameters, and the like) indefinitely, wasting scan time or never finishing.
- Spiders only see what is linked; requests generated by JavaScript, style sheets, and web services are easily missed, and the resulting coverage gaps go unnoticed.
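To make the fragility concrete, here is a deliberately naive spider sketch; the start URL and scope pattern are hypothetical placeholders. Notice that it can only queue what appears in an href attribute, so anything generated by JavaScript is invisible to it, and nothing prevents dynamically generated links from growing the queue without bound except the arbitrary brake near the end.

```perl
#!/usr/bin/perl
# Deliberately naive spider, illustrating the approach described above
# and its failure modes. Do not point this at anything out of scope.
use strict;
use warnings;
use LWP::UserAgent;

my $scope = qr{^https://app\.example\.com/};   # hypothetical scope limit
my @queue = ('https://app.example.com/');      # hypothetical start URL
my ( %seen, $fetched );
my $ua = LWP::UserAgent->new( timeout => 10 );

while ( my $url = shift @queue ) {
    next if $seen{$url}++;                     # skip anything already visited
    my $res = $ua->get($url);
    next unless $res->is_success;
    $fetched++;

    # Crude href extraction; real spiders parse HTML properly, and even
    # they cannot see requests that only exist in JavaScript.
    my $body = $res->decoded_content || '';
    while ( $body =~ /href=["']([^"']+)["']/g ) {
        push @queue, $1 if $1 =~ $scope;
    }

    # A human-imposed brake; the naive approach has no natural stopping
    # point when URLs are generated dynamically.
    last if $fetched >= 500;
}
print "Fetched ", $fetched || 0, " pages; ", scalar @queue, " URLs still queued\n";
```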


So, if allowing the scanner to find its own targets through spidering is not a good idea, what is? Before getting to our approach, there is one intuitive solution that needs to be addressed: defining all of the targets in advance. While this seems like a reasonable approach, it is entirely impractical to do directly. You might be able to list all of the known web pages, but modern web applications make dozens, sometimes hundreds, of additional requests, through JavaScript, style sheets, web services, and the like, for each address that actually appears in your browser's address bar. While listing them all is theoretically possible, the chance of missing something important is quite high, and so is the amount of time required. This notion is headed in the right general direction, though, and there is a practical way to develop such a list. It involves a careful manual review using a proxy, and that gets to our approach.

All of the problems enumerated above can be effectively mitigated if you conduct a manual review before scanning. Our manual application review means walking through the entire application from behind an intercepting proxy, exercising each function the way a real user would, building a complete record of every request the application generates along the way, and noting any scope questions or safety and stability concerns as they arise.

Confirmation.

At this point our initial manual review is complete. If our tester has scope questions or safety and stability concerns that have not been addressed in our scope documents, this is the point at which testing pauses until they are clarified.

Manual Testing.

After the initial review we perform any manual intrusive testing indicated by our review of the application. We do it at this point, as quietly as possible, and before launching automated scans.

Automated Scans.

Automated scanning is critical to ensure full testing coverage, but scanners need careful attention. During the manual review, our tester developed a full proxy record of every page request, and of every subsequent request generated as a result of exercising the application. All of the JavaScript, style sheet, web service, and image requests that would have been nearly impossible to list by hand have been captured. Our tester has also made rational, informed human decisions about what not to do, like pursuing dynamic URLs forever. That resource list now becomes the scanner target list. After checking to ensure that the scanner is configured as prescribed in our scope documents, including any scope or safety clarifications we may have received, we launch the automated scans.
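A minimal sketch of that last step, assuming the proxy record has been exported as a plain text file with one requested URL per line (the file names and export format here are assumptions; actual proxy export formats vary):

```perl
#!/usr/bin/perl
# Turn a proxy capture into a deduplicated, in-scope scanner target list.
use strict;
use warnings;

my $scope = qr{^https://app\.example\.com/};   # hypothetical scope limit
my %seen;

open my $in,  '<', 'proxy_requests.txt' or die "proxy_requests.txt: $!";
open my $out, '>', 'scan_targets.txt'   or die "scan_targets.txt: $!";

while ( my $url = <$in> ) {
    chomp $url;
    $url =~ s/#.*$//;                # drop fragments
    next unless $url =~ $scope;      # enforce the documented scope
    next if $seen{$url}++;           # deduplicate
    print {$out} "$url\n";
}
close $in;
close $out;
print scalar( keys %seen ), " unique in-scope URLs written\n";
```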

After the automated scans have completed, which may require adjustments for IPS or WAF evasion, we have a scanner report, and we look very closely at the results. We take note of any identified vulnerabilities and start sorting them into two buckets: those that require further validation, and those that are reliable and need no further validation. We are not just looking for the vulnerabilities the scanner identified, though. We look at vulnerability scans differently than most: for us, the results are a record of tens or hundreds of thousands of interactions with your application, and we look for anything in those results that seems out of the ordinary at all. It is surprising how often you can find hints that lead to really serious vulnerabilities when you combine knowledgeable, informed human intuition with scanner output. We look hard.
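As a sketch of the two-bucket sort, assuming the scanner report has been exported to a simple CSV of severity, confidence, URL, and issue name (a hypothetical format; real scanner exports vary and include far more detail):

```perl
#!/usr/bin/perl
# Split scanner findings into "reliable as reported" and "needs manual
# validation" buckets, keyed on the scanner's own confidence rating.
use strict;
use warnings;

my ( @validate, @reliable );

open my $in, '<', 'scanner_findings.csv' or die "scanner_findings.csv: $!";
while (<$in>) {
    chomp;
    my ( $severity, $confidence, $url, $issue ) = split /,/, $_, 4;
    next unless defined $issue;
    if ( lc($confidence) eq 'certain' ) {
        push @reliable, "$issue at $url";    # no further validation needed
    }
    else {
        push @validate, "$issue at $url";    # queue for manual validation
    }
}
close $in;

print "Needs manual validation:\n", map { "  $_\n" } @validate;
print "Reliable as reported:\n",    map { "  $_\n" } @reliable;
```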

Manual Testing Again.

Finally, we look at everything that has been identified for further testing. At this point it is impossible to list specific tools or methods, because there is simply too much potential ground to cover, but very often we will use Burp Suite Intruder and/or Repeater. In general, vulnerabilities will fall into three categories at this point:

Web Application Penetration Testing Summary.

This is our standard methodology; we adjust as necessary for different testing objectives. Our standard approach is neither fully automated nor fully manual. In our opinion, one cannot expect full breadth of coverage without at least some automation, nor can automation be expected to exhibit human intuition and experience.

At High Bit Security, we employ an approach that balances depth and breadth. We use carefully configured automated tools to aid in breadth of testing. We also use extensive manual effort to dig deep into potential security faults, but we stop pursuing depth when we have proven the fault, and have documented the finding in detail. This balanced approach allows for thorough breadth of coverage, sufficient depth, detailed documentation, and above all, safer testing.

Ask us for a free, quick, no-hassle quote.