Network Penetration Testing Methodology



For us, the Network Penetration Test is the basic package. Most of our engagements include this testing, though not all, and there are cases where the procedures outlined below are changed. Under special circumstances we may modify our methods, but what follows is the standard case.

Network Pen Test Tools.

The primary tools we use for Network Penetration Testing are:

  • NMap
  • OpenVAS
  • AMap
  • Custom Perl Scripts

This is not a complete list, but these are the major tools. You may wonder why we don't use more sophisticated software. The answer is that we get better results from knowledgeable people using these tools than we do from sophisticated, purpose-built software with its inevitable limitations. We look for simple, powerful, flexible and proven tools, and that's what all of the tools on the list above have in common.

NMap is one of the most powerful and versatile port scanners ever written, and most of the 'improvements' we have seen in other tools are simply improvements in user interface, simplified usability, or specialized function. Once you strip the eye candy and hype away, most of them can't do a fraction of what can be done with NMap, or if they can, they aren't scriptable. The ability to script NMap, coupled with its myriad switches, means we can do just about anything related to port scanning with it, and customize it to our own needs. All of our penetration testers are coders first, so wrapping script around NMap is second nature for them.
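
Wrapping script around NMap usually starts with parsing its machine-readable output. The sketch below is illustrative Python rather than the Perl we actually use, and parses a sample host line in NMap's grepable (-oG) format; the sample host and ports are invented for the example:

```python
def parse_grepable(output: str) -> dict:
    """Parse NMap grepable (-oG) host lines into {host: [(port, state, service), ...]}."""
    results = {}
    for line in output.splitlines():
        if "Ports:" not in line:
            continue
        host = line.split()[1]
        entries = []
        for entry in line.split("Ports:")[1].split(","):
            # grepable field layout: port/state/protocol/owner/service/rpc-info/version/
            fields = entry.strip().split("/")
            entries.append((int(fields[0]), fields[1], fields[4]))
        results[host] = entries
    return results

# One host line in NMap's grepable output format (hypothetical host)
sample = "Host: 10.0.0.5 ()\tPorts: 22/open/tcp//ssh///, 80/open/tcp//http///"
print(parse_grepable(sample))
```

Once the output is in a data structure like this, sorting, filtering and comparing scans becomes trivial, which is the whole point of scripting the scanner.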

OpenVAS is the open source descendant of the original Nessus network vulnerability scanner. It runs on all Linux systems, has up-to-date vulnerability feeds, is highly customizable, and can be modified at the source code level. Both the scanner itself and its output are readily scriptable, and you can write your own plugins for it. Simple, powerful, proven and flexible. It meets all of our criteria, and has one more advantage: because it is open source, it has found a user base that is highly knowledgeable and willing to deal with a few false positives, and that's perfect for our needs. A highly polished commercial scanner that generates a lot of false positives would not do well with typical end users, but eliminating those false positives often means missing real issues (false negatives). It's part of our job to sort through the output and eliminate the false positives. We want, and need, a thorough, flexible scanner that misses little, and we get that with OpenVAS.

AMap is a lesser-known but very useful tool for identifying services based on banners. AMap is particularly useful to us because it is simple, scriptable, and has a customizable database of signature profiles that we can add to or modify.
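
The core idea behind banner-based identification is simple: match what a service says first against a database of known patterns. The following is a deliberately simplified Python sketch of that idea, not AMap's actual signature format; the patterns and banners are illustrative only:

```python
import re

# Tiny, illustrative signature database -- the real value of a tool like
# AMap is that this list is large, maintained, and user-extensible.
SIGNATURES = [
    ("ssh",  re.compile(r"^SSH-\d\.\d")),
    ("smtp", re.compile(r"^220 .*SMTP", re.IGNORECASE)),
    ("http", re.compile(r"^HTTP/\d\.\d")),
]

def identify(banner: str) -> str:
    """Return the first service whose signature matches the banner."""
    for service, pattern in SIGNATURES:
        if pattern.search(banner):
            return service
    return "unknown"

print(identify("SSH-2.0-OpenSSH_8.9"))
print(identify("220 mail.example.com ESMTP ready"))
```

Because the database is just data, adding a signature for an in-house or unusual service is a one-line change, which is exactly the kind of flexibility we look for.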

Perl is our scripting language of choice. We could probably do just as well with Python or some other language, but Perl is what we use for the vast majority of our needs. The real question is not what language you use, but what you are able to accomplish with it. We use Perl for day-to-day, on-the-fly scripting during penetration testing, but we also use it to generate several of the backbone reports we use internally during a penetration test. These internally generated reports are absolutely critical to an efficient and thorough penetration test.

A penetration tester with a class C network to test could easily have hundreds of thousands of individual data points that require analysis, and without the ability to sort, organize, correlate and otherwise manage those data points, the task would be simply overwhelming. With Perl, we not only run the primary tools listed above, but correlate the output in any way we need in order to make sense of it. That is not a small deal. A penetration tester has a much harder job than finding a needle in a haystack, because he's looking for the unusual, and it may or may not even be a needle. When you are dealing with that much data, and don't know what you might be looking for in that data, the ability to manage it is everything.
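
The correlation step is conceptually just a pivot: flatten every observation from every tool into records, then regroup them around the thing the tester cares about (a host and port). Here is a minimal Python sketch of that idea; the record shape and tool names are invented for illustration, and our production scripts are Perl:

```python
from collections import defaultdict

# Hypothetical flattened records: (host, port, source_tool, observation)
records = [
    ("10.0.0.5", 22,  "nmap-syn", "open"),
    ("10.0.0.5", 22,  "nmap-fin", "open|filtered"),
    ("10.0.0.5", 80,  "openvas",  "outdated server banner"),
    ("10.0.0.9", 443, "nmap-syn", "open"),
]

# Pivot the flat records so every observation about one host:port
# lands in a single bucket a tester can review at a glance.
by_service = defaultdict(list)
for host, port, tool, note in records:
    by_service[(host, port)].append((tool, note))

for (host, port), notes in sorted(by_service.items()):
    print(f"{host}:{port} -> {notes}")
```

However many tools contributed data, the tester now reads one consolidated view per service instead of hunting through separate report files.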

Methods and sequence.

Whois lookups. For external engagements, the first step in our network testing methodology comes before any testing even begins. We perform whois and network lookups on all IP addresses and ranges, and if there is any question about ownership of any of the systems, we bring it to your attention. It may seem unnecessary, but we have found surprises before, and it's much better for us and our clients than getting a call from the FBI or someone else's legal counsel.

Canary Scans. Routers, switches, and other critical systems can sometimes be crashed by network vulnerability scanners, or even simple port scans, and systems can also be overwhelmed by too much testing volume. We coined the term 'Canary Scan' for a method we use to address this issue. It is simply a trial run, using our standard port scanner and vulnerability scanner configuration, on safer targets. It is not always possible, but if you have non-production systems that are configured similarly to production systems, we can launch automated tools first at these less important targets and then, if there are no negative consequences, proceed to use the tools on the actual targets of the engagement. Most of the time there is no impact at all, but when there is, this technique saves important production systems from exposure to potentially dangerous scanner activity. We will ask you about potential 'Canary Hosts' in our pre-engagement interview, and if we have the targets, we will do it here.

Port Scan Configuration. If the Canary Scans above were used, and if any configuration changes were indicated, we will make them here. Otherwise we will run our standard port scan battery of tests. This includes not one, but ten different port scans with different configurations.

Vulnerability Scanner Configuration. If the Canary Scans above were used, and if any configuration changes were indicated, we will make them here. Otherwise we will run our standard vulnerability scanner configuration. Our network vulnerability scanners use a standard, customized configuration designed to avoid denial of service tests and unsafe memory corruption testing, and are also configured to use low thread counts to avoid overwhelming target systems or network devices.

Data Correlation. Once the port scans and vulnerability scans have finished, our Perl scripts go to work on the data. Among other things, we check for evidence of common firewall misconfiguration, which can be indicated by differing responses from the various port scans. Remember the needle in the haystack? Imagine just 10 hosts, each with potentially 65,535 TCP ports and 65,535 UDP ports, with 10 different port scans reporting 2 or 3 possible port states for each port, and checking your firewall configuration depends on knowing whether you have missed just one rule for one port. Getting to that answer is one of the reasons we run so many port scans. Sifting through the data quickly is the reason we use Perl.

Firewall configuration is not the only thing we are looking for at this stage. All of the data related to what is commonly understood as footprinting is also organized, any unusual ports or unusual responses are flagged, known web ports are identified, AMap is run on all of those ports, and that data is captured. If it is an external test, we also run zone transfer tests at this time to check your DNS server configuration for any domain names we have logged, and that data is captured.

All of that was just for the port scans. We also capture and organize all of the output from the vulnerability scanner and get it ready for a manual review. That does not mean looking at the pretty PDF reports produced by the scanner. It means reducing it to line-by-line output that can be sorted and grouped however we need to see it, including comparing it to port scan results.
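
The cross-scan comparison can be sketched very simply: a port whose reported state varies between otherwise comparable scans gets flagged for manual review. The scan names, port states, and the idea that a probe from source port 53 slips past a weak rule are all invented for this illustration (and again, our production scripts are Perl):

```python
# Hypothetical results from three SYN scans run with different timing
# and source ports against the same host.
scans = {
    "syn-fast":  {22: "open", 25: "filtered", 80: "open"},
    "syn-slow":  {22: "open", 25: "filtered", 80: "open"},
    "syn-src53": {22: "open", 25: "open",     80: "open"},  # probe from source port 53
}

# Flag any port whose state is not identical across all scans -- that
# disagreement can hint at a state- or source-dependent firewall rule.
all_ports = sorted({p for result in scans.values() for p in result})
flagged = [p for p in all_ports
           if len({result.get(p) for result in scans.values()}) > 1]
print(flagged)
```

Here port 25 answers differently when probed from source port 53, which is exactly the kind of single-rule oversight the comparison is designed to surface.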

Manual Testing. Once we have rolled up all of the data from the automated tools, we start the real work. First, we check for any evidence of IPS activity, and if we aren't confident that we have good information for both port scans and vulnerability scans, we start thinking about switching our source IP, reconfiguring our scans for a more stealthy approach and doing it again. We'll repeat that process using increasingly stealthy tactics until we can confidently report that your IPS is effective, or report on full or partial success in evading it.

Next, we look at the data for any anomalies that might indicate a problem in any of your firewall configurations. This will typically include some false positives, and we check them all.

We then look very closely at the vulnerability scan results. We take note of any identified vulnerabilities and start sorting them into two buckets: those that require further validation, and those that are reliable and need no further validation. We are not just looking for vulnerabilities that the scanner identified, though. We look at vulnerability scans differently than most. For us, the results are a record of tens or hundreds of thousands of interactions with your systems, and we look for anything in those results that seems out of the ordinary at all. It is surprising how often you can find hints that lead you to really serious vulnerabilities when you combine knowledgeable, informed human intuition with scanner output. We look hard.
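
The two-bucket triage itself is mechanical and easy to script. The sketch below is illustrative Python; the finding records and the 'confidence' field are hypothetical stand-ins for whatever reliability indicator the scanner output actually provides:

```python
# Hypothetical scanner findings; 'confidence' is an illustrative field
# standing in for the scanner's own reliability indicator.
findings = [
    {"host": "10.0.0.5", "id": "CVE-2021-0001", "confidence": "confirmed"},
    {"host": "10.0.0.5", "id": "CVE-2021-0002", "confidence": "remote_probe"},
    {"host": "10.0.0.9", "id": "CVE-2021-0003", "confidence": "banner_only"},
]

# Bucket 1: reliable as reported -- goes straight to a finding report.
reliable = [f for f in findings if f["confidence"] == "confirmed"]

# Bucket 2: everything else must be validated by hand before reporting.
to_verify = [f for f in findings if f["confidence"] != "confirmed"]

print(len(reliable), len(to_verify))
```

The value is not in the filter itself but in guaranteeing that nothing the scanner reported falls through the cracks between the two buckets.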

After squeezing all we can get out of the vulnerability scan, we take a hard look at open ports, whether the vulnerability scan reported anything for the port or not. We look for known services running on non-standard ports, unknown services running on any port, any banners returned from services, and in general, anything that causes our tester to take notice. Anything that does not look perfectly normal is identified for further testing. Any service that we think might not have received adequate scanner coverage due to IPS activity is also identified for further testing.
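
Much of that manual look at open ports starts with simply connecting and reading whatever the service volunteers. Here is a self-contained Python sketch of a banner grab; it talks to a throwaway local listener standing in for a real target, since in practice the host and port would come from the port scan results:

```python
import socket
import threading

def banner_grab(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect to a port and read whatever the service sends first."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""  # service said nothing; probe it another way

# Throwaway local listener playing the role of a target service.
server = socket.socket()
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def fake_service():
    conn, _ = server.accept()
    conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")
    conn.close()

threading.Thread(target=fake_service, daemon=True).start()
banner = banner_grab("127.0.0.1", port)
print(banner)
server.close()
```

A banner on an unexpected port, or no banner where one is expected, is precisely the sort of thing that "causes our tester to take notice."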

If we are on an internal engagement, we will spend some time sniffing the wire for any packets that we should not be able to see on a properly switched network. Anything unusual is identified for further validation.

Finally, we look at everything that has been identified for further testing. This is the point at which it is impossible to list tools or methods because there is simply too much potential ground to cover. In general, vulnerabilities will fall into three categories at this point:

  • Vulnerabilities that were identified by automation and are reliable. A finding report is prepared, along with any validating evidence from the automated tool.
  • Vulnerabilities that were identified by automation but are not reliable until validated. These are validated using whatever tools or methods are appropriate. Screen captures and other evidence are collected, and a finding report is created.
  • Possible vulnerabilities or simple suspicions identified manually. These are all tested, one way or another, until we are convinced that we know what we are seeing, and can either dismiss them or report them.

Network Penetration Testing Summary.

This is our standard methodology for a standard testing approach. We adjust as necessary for different testing objectives. If you want a stealth approach from the beginning, or want denial of service testing performed, the approach will of course be different.

Most vendors opt for one of two approaches to penetration testing. The Breadth First approach emphasizes thorough coverage, and practitioners tend to rely heavily on automated tools to accomplish all or much of the testing. This means a heavy reliance on the most dangerous tools: automated scanners. While generally cheaper, this approach is dangerous and can miss important findings, depending on the amount of manual testing employed. The Depth First approach emphasizes undetected penetration in depth, and practitioners of this approach tend to avoid automated tools because those tools can draw attention. Depth First practitioners use far more manual effort and often consider an engagement unsuccessful if they are not able to fully compromise a system or expose deeper systems. This approach often results in inadequate breadth of coverage, is usually more expensive, and can expose deeper systems to unnecessary risk.

Our standard approach is neither fully automated nor fully manual. It is our opinion that you cannot expect full breadth of coverage without the use of at least some automation, and full breadth of coverage is of critical importance. If you imagine trying to test just one host without automation, you will see why. Sending 65,535 requests manually from a terminal would cover only half of the potential ports on that one host (65,535 TCP plus 65,535 UDP), assuming you did all of them correctly, forgot none, and logged everything properly. At one request per second, covering all 131,070 ports would take about 36 hours just to perform a single port scan of one host, and about 15 days working 24 hours per day to run our standard ten port scans on that one host.
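
The arithmetic behind those figures is easy to check:

```python
tcp_ports = 65_535
udp_ports = 65_535
seconds_per_request = 1

one_scan_seconds = (tcp_ports + udp_ports) * seconds_per_request
one_scan_hours = one_scan_seconds / 3600      # ~36.4 hours for one full scan
ten_scans_days = one_scan_hours * 10 / 24     # ~15.2 days for ten scans, nonstop

print(round(one_scan_hours, 1), round(ten_scans_days, 1))
```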

At High Bit Security, we employ an approach that balances depth and breadth. We use carefully configured automated tools to aid in breadth of testing, substituting manual testing methods in cases where automation is unsafe.

We also use extensive manual effort to dig deep into potential security faults, but we stop pursuing depth when we have proven the fault, and have documented the finding in detail. This balanced approach allows for thorough breadth of coverage, sufficient depth, detailed documentation, and above all, safer testing because it reduces reliance on automated tools and does not expose deeper systems to needless risk.

Ask us for a free, quick, no hassle quote.