Web Security - Application Penetration Testing Methodology
Web Security - Web Application Penetration Testing
Most of our penetration testing engagements include web security testing, usually of one or more web applications or web services. The procedures outlined below describe our standard approach to testing web applications and web services; where an engagement involves special circumstances, we adjust the methods accordingly.
Before getting into the tools and methods we use, it is worth discussing a more general question: Is there any difference between manual web security testing and automated web security testing, and if there is, how do you, the client, know what you are getting?
Manual, Automated, or Both?
Both of these approaches to penetration testing have value, and we use both. Automation is necessary for full testing coverage, and in some cases is actually better than manual testing. Automation by itself, however, is entirely incapable of identifying, let alone validating, some of the most important security flaws found in web applications.
There are a number of vendors currently offering 'penetration testing services' for advertised prices of $895, or even $700 or less. We do not object to this, as long as the 'service' is accurately described. Our objection is with vendors who devote an entire web page to the quality of their testing, including statements about the manual effort they employ and the qualifications of their 'testers', and then disclose in the fine print at the bottom of the page that the testing is automated.
It's not the testing methods we object to, it's the lack of honesty. If you know what you are buying, and it meets your objectives, then we have no objection to purely automated testing. If you are told, however, that automated testing is sufficient by itself, and if thorough web security testing is an important consideration for you, then we encourage you to take a look at the table we present below.
Web Security, Testing Coverage
OWASP (the Open Web Application Security Project) is perhaps the most respected organization in the world on the subject of web application security. The table below comes directly from the table of contents of the OWASP Testing Guide v4, but we've taken the liberty of adding one column: the Method column, which identifies whether each test requires manual methods (M), automated methods (A), or both (MA).
Testing Method Index:
M | Manual - If an M appears, the OWASP comments indicate that at least some manual testing is needed. If only an M is present, neither the OWASP comments nor our own experience suggests that automation alone can consistently even identify these faults. |
A | Automated - If an A is present, either OWASP or we see at least some value in automation. If only an A is present, either OWASP or we believe that automation alone can accurately identify these faults, though not necessarily validate them. |
MA | Both - If A and M are both designated, then both manual testing and automation are necessary for full coverage. |
We should mention one more thing about the table that follows: Our interpretation of the OWASP comments and the resulting method classification is, of course, our own. That is why we are including the link to the OWASP testing guide, so you can review the OWASP comments yourself. There may be some difference of opinion in some areas, but we think most informed professionals will agree with our overall conclusions. Even if you are not an informed professional, some of the OWASP comments make it obvious that human intuition and persistence are necessary components.
Source: OWASP Testing Guide v4, Table of Contents
4.2 Information Gathering | |||
Method | Ref. No. | Category | Test Name |
M | 4.2.1 | OTG-INFO-001 | Conduct Search Engine Discovery and Reconnaissance for Information Leakage |
A | 4.2.2 | OTG-INFO-002 | Fingerprint Web Server |
A | 4.2.3 | OTG-INFO-003 | Review Webserver Metafiles for Information Leakage |
MA | 4.2.4 | OTG-INFO-004 | Enumerate Applications on Webserver |
MA | 4.2.5 | OTG-INFO-005 | Review Webpage Comments and Metadata for Information Leakage |
MA | 4.2.6 | OTG-INFO-006 | Identify application entry points |
MA | 4.2.7 | OTG-INFO-007 | Map execution paths through application |
A | 4.2.8 | OTG-INFO-008 | Fingerprint Web Application Framework |
A | 4.2.9 | OTG-INFO-009 | Fingerprint Web Application |
MA | 4.2.10 | OTG-INFO-010 | Map Application Architecture |
4.3 Configuration and Deployment Management Testing | |||
Method | Ref. No. | Category | Test Name |
MA | 4.3.1 | OTG-CONFIG-001 | Test Network/Infrastructure Configuration |
MA | 4.3.2 | OTG-CONFIG-002 | Test Application Platform Configuration |
A | 4.3.3 | OTG-CONFIG-003 | Test File Extensions Handling for Sensitive Information |
MA | 4.3.4 | OTG-CONFIG-004 | Review Old, Backup and Unreferenced Files for Sensitive Information |
MA | 4.3.5 | OTG-CONFIG-005 | Enumerate Infrastructure and Application Admin Interfaces |
A | 4.3.6 | OTG-CONFIG-006 | Test HTTP Methods |
A | 4.3.7 | OTG-CONFIG-007 | Test HTTP Strict Transport Security |
A | 4.3.8 | OTG-CONFIG-008 | Test RIA cross domain policy |
4.4 Identity Management Testing | |||
Method | Ref. No. | Category | Test Name |
M | 4.4.1 | OTG-IDENT-001 | Test Role Definitions |
M | 4.4.2 | OTG-IDENT-002 | Test User Registration Process |
M | 4.4.3 | OTG-IDENT-003 | Test Account Provisioning Process |
M | 4.4.4 | OTG-IDENT-004 | Testing for Account Enumeration and Guessable User Account |
M | 4.4.5 | OTG-IDENT-005 | Testing for Weak or unenforced username policy |
M | 4.4.6 | OTG-IDENT-006 | Test Permissions of Guest/Training Accounts (no current OWASP content) |
M | 4.4.7 | OTG-IDENT-007 | Test Account Suspension/Resumption Process (no current OWASP content) |
4.5 Authentication Testing | |||
Method | Ref. No. | Category | Test Name |
A | 4.5.1 | OTG-AUTHN-001 | Testing for Credentials Transported over an Encrypted Channel |
A | 4.5.2 | OTG-AUTHN-002 | Testing for default credentials |
M | 4.5.3 | OTG-AUTHN-003 | Testing for Weak lock out mechanism |
MA | 4.5.4 | OTG-AUTHN-004 | Testing for bypassing authentication schema |
MA | 4.5.5 | OTG-AUTHN-005 | Test remember password functionality |
MA | 4.5.6 | OTG-AUTHN-006 | Testing for Browser cache weakness |
M | 4.5.7 | OTG-AUTHN-007 | Testing for Weak password policy |
M | 4.5.8 | OTG-AUTHN-008 | Testing for Weak security question/answer |
M | 4.5.9 | OTG-AUTHN-009 | Testing for weak password change or reset functionalities |
M | 4.5.10 | OTG-AUTHN-010 | Testing for Weaker authentication in alternative channel |
4.6 Authorization Testing | |||
Method | Ref. No. | Category | Test Name |
MA | 4.6.1 | OTG-AUTHZ-001 | Testing Directory traversal/file include |
MA | 4.6.2 | OTG-AUTHZ-002 | Testing for bypassing authorization schema |
MA | 4.6.3 | OTG-AUTHZ-003 | Testing for Privilege Escalation |
M | 4.6.4 | OTG-AUTHZ-004 | Testing for Insecure Direct Object References |
4.7 Session Management Testing | |||
Method | Ref. No. | Category | Test Name |
MA | 4.7.1 | OTG-SESS-001 | Testing for Bypassing Session Management Schema |
MA | 4.7.2 | OTG-SESS-002 | Testing for Cookies attributes |
MA | 4.7.3 | OTG-SESS-003 | Testing for Session Fixation |
A | 4.7.4 | OTG-SESS-004 | Testing for Exposed Session Variables |
MA | 4.7.5 | OTG-SESS-005 | Testing for Cross Site Request Forgery |
M | 4.7.6 | OTG-SESS-006 | Testing for logout functionality |
M | 4.7.7 | OTG-SESS-007 | Test Session Timeout |
MA | 4.7.8 | OTG-SESS-008 | Testing for Session puzzling |
4.8 Data Validation Testing | |||
Method | Ref. No. | Category | Test Name |
MA | 4.8.1 | OTG-INPVAL-001 | Testing for Reflected Cross Site Scripting |
MA | 4.8.2 | OTG-INPVAL-002 | Testing for Stored Cross Site Scripting |
MA | 4.8.3 | OTG-INPVAL-003 | Testing for HTTP Verb Tampering |
MA | 4.8.4 | OTG-INPVAL-004 | Testing for HTTP Parameter pollution |
MA | 4.8.5 | OTG-INPVAL-005 | Testing for SQL Injection |
MA | 4.8.6 | OTG-INPVAL-006 | Testing for LDAP Injection |
MA | 4.8.7 | OTG-INPVAL-007 | Testing for ORM Injection |
MA | 4.8.8 | OTG-INPVAL-008 | Testing for XML Injection |
MA | 4.8.9 | OTG-INPVAL-009 | Testing for SSI Injection |
MA | 4.8.10 | OTG-INPVAL-010 | Testing for XPath Injection |
MA | 4.8.11 | OTG-INPVAL-011 | IMAP/SMTP Injection |
MA | 4.8.12 | OTG-INPVAL-012 | Testing for Code Injection |
MA | 4.8.12.1 | | Testing for Local File Inclusion |
MA | 4.8.12.2 | | Testing for Remote File Inclusion |
MA | 4.8.13 | OTG-INPVAL-013 | Testing for Command Injection |
MA | 4.8.14 | OTG-INPVAL-014 | Testing for Buffer overflow |
MA | 4.8.14.1 | | Testing for Heap overflow |
MA | 4.8.14.2 | | Testing for Stack overflow |
MA | 4.8.14.3 | | Testing for Format string |
MA | 4.8.15 | OTG-INPVAL-015 | Testing for incubated vulnerabilities |
MA | 4.8.16 | OTG-INPVAL-016 | Testing for HTTP Splitting/Smuggling |
4.9 Error Handling | |||
Method | Ref. No. | Category | Test Name |
MA | 4.9.1 | OTG-ERR-001 | Analysis of Error Codes |
MA | 4.9.2 | OTG-ERR-002 | Analysis of Stack Traces |
4.10 Cryptography | |||
Method | Ref. No. | Category | Test Name |
MA | 4.10.1 | OTG-CRYPST-001 | Testing for Weak SSL/TLS Ciphers, Insufficient Transport Layer Protection |
MA | 4.10.2 | OTG-CRYPST-002 | Testing for Padding Oracle |
MA | 4.10.3 | OTG-CRYPST-003 | Testing for Sensitive information sent via unencrypted channels |
4.11 Business Logic Testing | |||
Method | Ref. No. | Category | Test Name |
M | 4.11.1 | OTG-BUSLOGIC-001 | Test Business Logic Data Validation |
M | 4.11.2 | OTG-BUSLOGIC-002 | Test Ability to Forge Requests |
M | 4.11.3 | OTG-BUSLOGIC-003 | Test Integrity Checks |
MA | 4.11.4 | OTG-BUSLOGIC-004 | Test for Process Timing |
M | 4.11.5 | OTG-BUSLOGIC-005 | Test Number of Times a Function Can be Used Limits |
M | 4.11.6 | OTG-BUSLOGIC-006 | Testing for the Circumvention of Work Flows |
M | 4.11.7 | OTG-BUSLOGIC-007 | Test Defenses Against Application Mis-use |
MA | 4.11.8 | OTG-BUSLOGIC-008 | Test Upload of Unexpected File Types |
M | 4.11.9 | OTG-BUSLOGIC-009 | Test Upload of Malicious Files |
4.12 Client Side Testing | |||
Method | Ref. No. | Category | Test Name |
MA | 4.12.1 | OTG-CLIENT-001 | Testing for DOM based Cross Site Scripting |
MA | 4.12.2 | OTG-CLIENT-002 | Testing for JavaScript Execution |
MA | 4.12.3 | OTG-CLIENT-003 | Testing for HTML Injection |
MA | 4.12.4 | OTG-CLIENT-004 | Testing for Client Side URL Redirect |
MA | 4.12.5 | OTG-CLIENT-005 | Testing for CSS Injection |
M | 4.12.6 | OTG-CLIENT-006 | Testing for Client Side Resource Manipulation |
MA | 4.12.7 | OTG-CLIENT-007 | Test Cross Origin Resource Sharing |
MA | 4.12.8 | OTG-CLIENT-008 | Testing for Cross Site Flashing |
MA | 4.12.9 | OTG-CLIENT-009 | Testing for Clickjacking |
MA | 4.12.10 | OTG-CLIENT-010 | Testing WebSockets |
M | 4.12.11 | OTG-CLIENT-011 | Test Web Messaging |
Web Security, Testing Coverage Summary
The number of testing points that require manual testing is obviously a substantial part of the total, so it's no wonder that many of the vendors who rely solely on automated scanning do their best to at least appear to be performing manual testing. To what degree they actually do so, and to what degree you need it, we leave to your judgment. If you suspect you are being told one thing and sold another, we can give you a few pointers on what to ask:
- Ask the vendor to provide a copy of the automated scan report as a separate deliverable, along with your penetration test report.
- Ask for and check references.
- Ask the vendor if they will provide a manual activity report along with the penetration testing report.
If they claim their scanner doesn't produce a report, ask them to zip up whatever automated scanner output their testers look at and send you that.
If they claim they don't do any automated scans, then you might wonder how they are going to check hundreds of thousands of payload/parameter combinations manually, without using a scanner at all, and still cover the rest of the OWASP table above, for the price they quoted.
In our opinion, there is no legitimate reason to resist this request. The automated scanner report is, after all, a report on the security posture of your systems, paid for by you, generated at your request, and should be part of any thorough web security test. If you can think of a legitimate reason for a company to refuse to provide it to you, we'd like to hear it.
If they resist, it's probably because they don't want you comparing their 'pen test' report with the automated scan report. You can probably figure out why.
When you call or email their references, make sure you ask if they felt comfortable that the vendor used substantial manual effort.
This may be one of the most important questions to ask. Any vendor who is actually performing manual penetration testing is sure to be keeping records that will demonstrate it and should be glad to have the chance to prove it to you.
If they claim that this will take too much time and will increase the price, tell them it doesn't have to be pretty and you're not expecting to pay for it.
In our case, we are happy to provide our manual testing notes, which are full of half-formed suspicions, false starts, dead ends, on-the-fly vulnerability research, screen captures taken for later use, typos, and yes, even mistakes. In other words, exactly the kind of thing you would expect to see from live human beings trying to figure out what is going on with your web application. When you see it, you will know three things: It ain't pretty, it wasn't generated by a scanner, and we actually did the work.
One word of caution about evidence: Be wary of any vendor who tells you that you will see the evidence in your logs. It is very difficult to tell the difference between scanner traffic and manual effort, especially if the scanner is configured to randomize timing. Some scanners even have configuration options intended to mimic human behavior.
So, that's our current guidance on how to ensure that you are actually getting at least some manual testing. It's worth repeating that we are not opposed to testing that is entirely automated. Automated testing alone will not be thorough but there may be cases where it is appropriate. Our objection is with vendors who try to sell a simple automated scan to clients who are expecting significant manual effort.
Our view is that no penetration test can be considered reasonably thorough without both automated scanning and substantial manual testing. We are not alone in that opinion. The PCI Security Standards Council (as of PCI DSS v3), a growing number of software purchasing departments, and the OWASP Testing Guide all support that view.
Our web application testing covers the entire OWASP testing guide, not just the top 10 or top 25, and makes extensive use of manual testing using qualified, certified testers as well as automation. The remainder of this section deals with the tools and methods we use to achieve this coverage.
Web Security, Application Pen Test Tools.
The primary tools we use for Web Application Penetration Testing are:
- Web Browsers
- Burp Suite Professional
- SoapUI
- Custom Perl Scripts
This is not a complete list, but these are the major tools. We look for simple, powerful, flexible and proven tools.
Web Browsers. We use many different web browsers depending on circumstances, but the two we use the most are Firefox (or derivatives) and Google Chrome (or derivatives). Whichever browser we use at any given time, we use it for manual inspection and analysis.
Burp Suite Professional. Burp Suite is a penetration testing platform that integrates several important testing tools, including a web application scanner, spidering tools, an intercepting proxy, entropy analysis tools for session tokens and other (presumably) random tokens, and tools for crafting and testing many kinds of attack payloads. The creators at Portswigger.net do a great job of keeping the suite current with the latest exploits and continually incorporate improvements. For our purposes, the factors that make it a great tool are:
- Flexibility. Because it is a platform intended for manual penetration testing of web applications, it allows great flexibility in the use of its manual and automated tools. The automation that it provides is under our tight control. We decide whether to scan, what to scan for, when to scan, and how to scan.
- Manual tools. The suite offers a surprisingly robust framework for crafting and testing custom attacks through its Repeater and Intruder tools, and allows for real-time interception and manipulation of traffic between the client and the server.
- Extensibility. When we encounter anything we want to do that Burp Suite doesn't already handle, the suite allows us to write and incorporate our own plugins.
- Standard Logging. The suite allows us to capture and log every request and response, in sequence, and in formats parsable by other tools.
Unlike ordinary application scanners, this is a penetration testing suite. The emphasis is on fine-grained control for penetration testers and robust support for manual testing methods, not just push-button automation. That makes it a near-perfect tool for our purposes.
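To give a concrete (if simplified) idea of what 'formats parsable by other tools' means in practice, here is a minimal Perl sketch that reads a Burp 'Save items' XML export and prints the status code, URL and request line for each captured item. The element names and the base64 encoding reflect our understanding of that export format, so treat them as assumptions and adjust for your Burp version.

```perl
#!/usr/bin/perl
# Minimal sketch: pull URLs, status codes and decoded requests out of a
# Burp Suite "Save items" XML export so other tools can consume them.
# Assumption: the export uses <item> elements with <url>, <status> and a
# base64-encoded <request> child; adjust to match your Burp version.
use strict;
use warnings;
use XML::LibXML;
use MIME::Base64 qw(decode_base64);

my $file = shift @ARGV or die "usage: $0 burp-items.xml\n";
my $doc  = XML::LibXML->load_xml(location => $file);

for my $item ($doc->findnodes('//item')) {
    my $url    = $item->findvalue('./url');
    my $status = $item->findvalue('./status') || '-';
    my $req    = decode_base64($item->findvalue('./request'));
    my ($request_line) = split /\r?\n/, $req;   # e.g. "GET /path HTTP/1.1"
    $request_line //= '';
    print "$status  $url  ($request_line)\n";
}
```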
SoapUI is a tool designed for functional testing of SOAP and, more recently, REST web services. It is not intended as a penetration testing tool, but we find it very useful for its ability to rapidly create functional test cases for web services. Those test cases can then be used with our other tools that are intended for penetration testing.
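As a rough illustration of how a SoapUI-built test case can be reused with penetration testing tools, the following Perl sketch replays a SOAP request through a local intercepting proxy. The endpoint, SOAPAction header, envelope and proxy address are illustrative placeholders (the proxy address assumes a default Burp listener on 127.0.0.1:8080), not details of any real service.

```perl
#!/usr/bin/perl
# Minimal sketch: replay a SOAP request (e.g. one first built and verified
# in SoapUI) through an intercepting proxy so it can be manipulated with
# penetration testing tools. All endpoint details are placeholders.
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

my $endpoint = 'https://test.example.com/ws/AccountService';   # placeholder
my $envelope = <<'XML';
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <GetAccount><AccountId>1001</AccountId></GetAccount>
  </soapenv:Body>
</soapenv:Envelope>
XML

my $ua = LWP::UserAgent->new;
# Route traffic through the local intercepting proxy (assumed 127.0.0.1:8080).
$ua->proxy(['http', 'https'], 'http://127.0.0.1:8080');
$ua->ssl_opts(verify_hostname => 0);   # the proxy presents its own certificate

my $req = HTTP::Request->new(POST => $endpoint);
$req->header('Content-Type' => 'text/xml; charset=utf-8');
$req->header('SOAPAction'   => '"urn:GetAccount"');            # placeholder
$req->content($envelope);

my $res = $ua->request($req);
print $res->status_line, "\n";
print $res->decoded_content // '', "\n";
```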
Perl is our scripting language of choice. We use Perl for day-to-day, on-the-fly scripting for all kinds of penetration testing tasks. You never know when you will need to do something special with a web application, and we can write what we need with Perl.
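For a sense of what we mean by on-the-fly scripting, here is the kind of quick, throwaway Perl sketch a tester might write during an engagement: it sends a handful of payloads to a single parameter and compares status codes and response lengths. The URL and parameter name are illustrative placeholders, and anything like this is only ever run against in-scope targets.

```perl
#!/usr/bin/perl
# Throwaway sketch: probe one parameter with a few payloads and compare
# status codes and response lengths to spot anything that behaves
# differently. The URL and parameter name are placeholders.
use strict;
use warnings;
use LWP::UserAgent;
use URI;

my $base  = 'https://test.example.com/search';   # placeholder, in-scope only
my @tests = ('normal', "'", '1 OR 1=1', '../../etc/passwd', '<script>x</script>');

my $ua = LWP::UserAgent->new(timeout => 15);
for my $payload (@tests) {
    my $uri = URI->new($base);
    $uri->query_form(q => $payload);   # 'q' is a placeholder parameter name
    my $res = $ua->get($uri);
    printf "%-25s  %s  %d bytes\n",
        $payload, $res->code, length($res->decoded_content // '');
}
```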
Methods and sequence.
Manual Application Review. There are two primary reasons for starting our actual testing with a manual review of your application. The first is safety and stability: we want to know if there are any factors that could result in unintended consequences before we configure and launch any automated tools. The second reason is quality. A brief examination of the alternative approach will help to illuminate some of our reasons for performing a manual review as a first step.
The alternative to a manual review as the first step is to perform automated scanning first, and that means that the scanner has to be configured. One common approach to configuring an automated scanner is to provide the scanner with an initial URL, along with any scope limitations, and then allow the scanner to spider the approved target. In other words, the tool is allowed to follow all of the links it can find, and perform a security scan of all of the parameters it can identify, for all of the links it finds, as long as the URL is in scope. This is the cheapest and (often) fastest way to scan a web application, and for vendors who do not actually perform a manual review, it is the only approach. It also has a host of potential problems. Here are a few:
- Pre-Knowledge. The spidering approach requires that you know all of the potential application specific problems that a scanner might cause or encounter, without ever looking at the application itself. You can't configure the scanner to avoid something that you are not even aware of, so your only option is to do the best you can to anticipate, cross your fingers and pull the trigger.
- Spiders in the weeds. Spiders have trouble with certain application structures, such as dynamically generated URLs. In some circumstances, applications can generate URLs in a nearly infinite sequence. A human tester can quickly recognize a dynamically generated URL, discern that every subsequent URL of the same form is essentially identical, and conclude that the functionality exposed is always the same. All of those judgments are required in order to make the right decision about whether to continue scanning URLs of the same form. In spite of many industry attempts to code a solution to the problem, it still requires human judgment (a simple illustration follows this list). Spiders simply can't reliably make that determination, and as a result they often end up 'off in the weeds', stuck in what is essentially a dead-end loop, or skipping important content because of a badly coded workaround for this issue. When scanners end up in a loop, inexperienced testers may let them run for days, finally stop them near the end of the testing window, and assume that the application was 'adequately' scanned. In fact, how much of the application was scanned depends on the point at which the spider took its trip into the weeds.
- Noise. Spiders generate a lot of traffic, and when combined with scanning activity, they are normally about as stealthy as a circus setting up in the town square. The activity will often trip Web Application Firewalls or other Intrusion Prevention Systems, and that can cause avoidable complications in subsequent manual testing. For vendors who actually intend to perform any manual testing, it makes sense to do it quietly, behaving as much as possible like a normal user, and doing as much of it as possible before setting off the fireworks show.
- Lock outs, email forms and databases. If you think of an application scanner as a machine gun, with a machine brain, on a school playground, you will have an accurate if dramatized picture of what can happen when scanners fail to make adjustments for login pages, contact forms and database input forms. Account lockouts, thousands of emails directed at sales and support staff, and database corruption are all problems that scanners can, and routinely do, cause when left to define their own targets.
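To illustrate the dynamic URL problem described in the 'Spiders in the weeds' item above, here is a minimal Perl sketch that collapses a list of captured URLs into structural patterns, so a human can see when thousands of 'different' URLs are really one generated family. The normalization rules are deliberately crude assumptions; in practice a tester adjusts them to the application at hand.

```perl
#!/usr/bin/perl
# Minimal sketch: group captured URLs (one per line on stdin) into
# structural families so a human can decide when "one representative is
# enough". The normalization rules below are crude, illustrative guesses.
use strict;
use warnings;
use URI;

my %families;
while (my $line = <STDIN>) {
    chomp $line;
    next unless $line =~ m{^https?://}i;          # only absolute http(s) URLs
    my $uri  = URI->new($line);
    my $path = $uri->path;
    $path =~ s{/\d+(?=/|$)}{/{n}}g;               # numeric IDs become a token
    $path =~ s{/[0-9a-f]{16,}(?=/|$)}{/{hex}}gi;  # long hex values (hashes, tokens)
    my $pattern = lc($uri->scheme) . '://' . lc($uri->host) . $path;
    $families{$pattern}++;
}

# Families with very large counts are the ones a spider tends to get lost in.
for my $pattern (sort { $families{$b} <=> $families{$a} } keys %families) {
    printf "%6d  %s\n", $families{$pattern}, $pattern;
}
```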
So, if allowing the scanner to find its own targets through spidering is not a good idea, what is? Before getting to our approach, there is one intuitive solution that needs to be addressed, and that is defining all of the targets in advance. While this seems like a reasonable approach, it is entirely impractical to do directly. You might be able to list all of the known web pages, but modern web applications make dozens, sometimes hundreds, of requests through JavaScript, style sheets, web services and the like for each address that actually appears in your browser's address bar. While theoretically possible, the chance of missing something important is quite high, and so is the amount of time required. This notion is headed in the right general direction, though, and there is a practical way to develop such a list. It involves a careful manual review using a proxy, and that gets to our approach.
All of the problems enumerated above can be effectively mitigated if you conduct a manual review before scanning. Here is what our manual application review includes:
- Full Exercise with Proxy Capture. The first thing our testers do is set up the testing browser to use a proxy tool that captures every web request. From that point on, the tester fully exercises the application while taking notes about what they see. The main purpose here is to exercise the application completely, capture all of the traffic, and gain a full understanding of how the application works.
- Scope Checks. One of the things our testers look for in this phase is any indication of scope problems that require clarification, such as mixed protocol schemes (https is in scope, but http is not mentioned and we find both) or requests going to hosts other than those we expected; in either case we need to know whether your intent differs from what actually ended up in our scope documents (a simple illustration follows this list). We try to ensure that scope is accurate before we even begin, but it is important for testers to be very 'scope aware', to identify any such issues as early as possible, and to get clarification.
- Safety and Stability Checks. Our testers are taking notes about many things, but none are more important than identifying potential safety and stability factors. Any login forms, email forms, database forms, or other potential problem areas are identified. Again, we try to identify and address problem areas in our scope documents before testing even begins, but our testers are trained to look for anything that might have been missed in planning.
- Side Channel Vulnerabilities. While exercising the application, we look at any email, text messages or other out-of-band communication sent to us by the web application. This is something that scanners can't do very well, and we look closely at everything the application does.
- Logical Faults. There may be some issues that can be fully documented in finding reports immediately, without performing any testing that would raise flags with an application firewall or IPS, and if so, we do it. This often involves so-called 'logical faults', and we find them often in login pages, password reset pages, and account registration pages. For findings that require further intrusive testing, we just take notes for later follow up.
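As the simple illustration promised in the 'Scope Checks' item above, here is a hedged Perl sketch that compares the origins seen in the proxy capture against an agreed scope list and flags anything that needs clarification. The scope entries are placeholders; the real list comes from the scope documents for the engagement.

```perl
#!/usr/bin/perl
# Minimal sketch: flag captured origins (scheme://host) that were not in
# the agreed scope, e.g. plain-http requests when only https was scoped,
# or unexpected hosts. Scope entries below are illustrative placeholders.
use strict;
use warnings;
use URI;

my %in_scope = map { $_ => 1 } qw(
    https://www.example.com
    https://api.example.com
);

my %flagged;
while (my $line = <STDIN>) {                     # one captured URL per line
    chomp $line;
    next unless $line =~ m{^https?://}i;
    my $uri    = URI->new($line);
    my $origin = lc($uri->scheme) . '://' . lc($uri->host);
    $flagged{$origin}++ unless $in_scope{$origin};
}

print "Origins seen in the capture but not in the agreed scope:\n";
printf "  %6d requests  %s\n", $flagged{$_}, $_ for sort keys %flagged;
```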
Confirmation.
At this point our initial manual review is complete. Hopefully our tester has no scope questions or safety and stability concerns that have not already been addressed in our scope documents; if they do, this is the point at which testing pauses until those questions are clarified.
Manual Testing.
After the initial review we perform any manual intrusive testing that may be indicated from our manual review of the application. We do it at this point, as quietly as possible, and before launching automated scans.
Automated Scans.
Automated scanning is critical to ensure full testing coverage, but scanners need careful attention. During manual review, our tester has developed a full proxy record of every page request, and of every subsequent request generated by exercising the application. All of the JavaScript, style sheet, web service and image requests that would have been nearly impossible to list have been captured. Our tester has also made rational, informed human decisions about what not to do, like continuing to pursue dynamic URLs forever. That resource list now becomes the scanner target list. After checking to ensure that the scanner is configured as prescribed in our scope documents, including any scope or safety clarifications we may have received, we launch automated scans.
After automated scans have completed, which may require adjustments for IPS or WAF evasion, we will have a scanner report. We look very closely at the vulnerability scan results. We take note of any identified vulnerabilities and start sorting them into two buckets - those that require further validation and those that are reliable and need no further validation. We are not just looking for vulnerabilities that the scanner identified, though. We look at vulnerability scans differently than most. For us, the results are a record of tens or hundreds of thousands of interactions with your application, and we look for anything in those results that seems out of the ordinary at all. It is surprising how often you can find hints that lead to really serious vulnerabilities when you combine knowledgeable, informed human intuition with scanner output. We look hard.
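As a simplified example of what looking for 'anything out of the ordinary' can mean in practice, here is a Perl sketch that summarizes a large log of scan interactions by tallying status codes and flagging paths whose response sizes vary widely. The tab-separated input format (status, response length, URL) is an assumption made for illustration; most scanners and proxies can export something equivalent.

```perl
#!/usr/bin/perl
# Minimal sketch: summarize a large scan log so a human can hunt for
# anomalies. Assumed input (one line per request, tab-separated):
#   status <TAB> response-length <TAB> url
use strict;
use warnings;

my (%status_count, %sizes_by_path);
while (my $line = <STDIN>) {
    chomp $line;
    my ($status, $length, $url) = split /\t/, $line;
    next unless defined $url and defined $length and $length =~ /^\d+$/;
    $status_count{$status}++;
    my ($path) = $url =~ m{^https?://[^/]+([^?]*)};   # path without query string
    push @{ $sizes_by_path{$path // $url} }, $length;
}

print "Status code distribution:\n";
printf "  %s: %d\n", $_, $status_count{$_} for sort keys %status_count;

print "\nPaths with widely varying response sizes (worth a manual look):\n";
for my $path (sort keys %sizes_by_path) {
    my @sizes = sort { $a <=> $b } @{ $sizes_by_path{$path} };
    my ($min, $max) = ($sizes[0], $sizes[-1]);
    printf "  %-60s  min %d  max %d\n", $path, $min, $max
        if $max > 0 and $min < $max / 2;              # crude "big spread" heuristic
}
```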
Manual Testing Again.
Finally, we look at everything that has been identified for further testing. This is the point at which it is impossible to list tools or methods because there is simply too much potential ground to cover, but very often we will use Burp Suite Intruder and/or Repeater. In general, vulnerabilities will fall into three categories at this point:
- Vulnerabilities that were identified by automation and are reliable. A finding report is prepared, along with any validating evidence from the automated tool.
- Vulnerabilities that were identified by automation but are not reliable until validated. These are validated using whatever tools or methods are appropriate. Screen captures and other evidence are collected and a finding report is created.
- Possible vulnerabilities or simple suspicions identified manually. These are all tested, one way or another, until we are convinced that we know what we are seeing, and can either dismiss them or report them.
Web Application Penetration Testing Summary.
This is our standard methodology for a standard testing approach. We adjust as necessary for different testing objectives. Our standard approach is neither fully automated, nor fully manual. It is our opinion that one cannot expect full breadth of coverage without the use of at least some automation, nor can automation be expected to exhibit human intuition and experience.
At High Bit Security, we employ an approach that balances depth and breadth. We use carefully configured automated tools to aid in breadth of testing. We also use extensive manual effort to dig deep into potential security faults, but we stop pursuing depth when we have proven the fault, and have documented the finding in detail. This balanced approach allows for thorough breadth of coverage, sufficient depth, detailed documentation, and above all, safer testing.
Ask us for a free, quick, no hassle quote using the contact form above.