How To Tell A Good Security Test Report From A Bad One

Submitted by alla on Mon, 01/11/2010 - 19:32

Suppose you had a penetration test, a vulnerability assessment, a security test, whatever it was called. (Different people use different names for different kinds of tests.) Now you have a report, and, apart from having to sort out the problems that were discovered, you want to know whether the testers have done a good job.

This post should help you with that last one.

First of all, it is important to distinguish between fully automatic tests and what I'll call, for lack of a better term, "manual" tests. Fully automatic tests basically consist of running some vulnerability scanner (such as Nessus or Retina) and giving the report produced by the scanner to the customer. The final report has little or no human input. "Manual" tests may also use automated scanners for some part of the work. However, the findings produced by the scanner are validated by a human and used as input for further testing or exploitation.

The two kinds of tests differ in price: automatic tests are cheap, manual tests are expensive. Because of that, comparing the two does not make much sense. Below I will be talking about a "manual" test.

A test report will always contain a section with test findings and recommendations. This is, of course, the most important part of the report, because it lists the discovered security problems and the suggestions for fixing them. Another interesting section is the test details or test log. It should describe the actual tests that have been performed and their outcome. Not all testers will include it in the report. I personally believe that this is the part of the report that best allows you to distinguish a good job from a lousy one.

So, let's start with findings and recommendations. What indicates a good report:

  • The findings are aggregated. That is, you don't have 10 different findings for missing Windows updates; you have one finding that says "There are a number of missing updates, here is a list of all affected hosts and the missing updates". Why does it make a difference? A vulnerability scanner will output a separate finding for each missing patch. For the customer it is important to know that there are unpatched systems (and which they are) and whether anything important is missing. The fact that findings are aggregated indicates that a human processed the results and made some sense out of them (a minimal sketch of this kind of aggregation follows this list).
  • The descriptions and recommendations are specific to your situation. If your web application is written in ASP.NET and the description gives an example in PHP, you can tell it was copy-pasted without much thought.
  • The descriptions give both background information (for example, what SQL injection is in general) and the information specific to your particular case - what exactly is vulnerable and how. The background information is not always necessary - if a tester has been working with this customer for a while, he or she may know that the customer has a good idea of what SQL injection is and does not need to explain it.
  • The severity of the findings takes into account the specifics of the environment. A reflected cross-site scripting vulnerability may be a very important problem in one case (a single sign-on system) and not particularly important in another (an application that does not store any session-specific information).
  • The description includes a proof-of-concept exploit, or a screenshot or log of successful exploitation. This is not always strictly necessary, but if the tester says that something is really a severe problem, I believe he or she has to back up this statement with some proof. At the very least, the report has to state whether the tester managed to exploit the problem and to what extent.
  • Again, not strictly necessary, but always a good sign - a description of how to reproduce the problem. This information can be used by the customer to validate the fixes, or by the testers if a re-test is required. It may not be included in findings and recommendations, but should at least be present in the test log or test details (see below).
  • The findings include a list of references that give background information about this type of problem and the ways to fix it. This allows a customer to get a better idea of the problem.
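
For illustration, here is a minimal sketch of the kind of aggregation mentioned in the first point above - grouping raw per-host scanner output into one consolidated finding per issue. The CSV file name and its columns are made-up assumptions for this example, not any particular scanner's export format.

    #!/usr/bin/env python
    # Minimal sketch: group per-host scanner findings into one consolidated
    # finding per issue. The CSV layout (host, finding, severity) is a
    # hypothetical example, not a real scanner export format.
    import csv
    from collections import defaultdict

    def aggregate(path):
        hosts_by_finding = defaultdict(set)   # finding title -> affected hosts
        severity = {}                         # finding title -> severity label
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                hosts_by_finding[row["finding"]].add(row["host"])
                severity[row["finding"]] = row["severity"]
        return hosts_by_finding, severity

    if __name__ == "__main__":
        findings, severity = aggregate("scanner_export.csv")  # hypothetical file
        for title, hosts in sorted(findings.items()):
            print("%s [%s] - %d affected hosts: %s"
                  % (title, severity[title], len(hosts), ", ".join(sorted(hosts))))

A human still has to read the grouped output and decide what actually matters; the script only removes the repetition that makes raw scanner reports hard to digest.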

Now let's talk about the test details section. The first good sign is that it is present in the report at all. It allows the customer to see what exactly has been done, and also how it has been done. It also allows the tester to go back to it if a re-test is required or some claim in the report has to be validated.

A "manual" test can use automated scanners. The test details section should state clearly what software has been used, what findings it has produced and why they are considered valid or invalid. In the case of a large test (an internal company network, for example) only the valid findings may be mentioned to save time and space.

For any security problem listed in findings and recommendations, the test details section should show the way to detect and/or exploit the vulnerability. If a number of similar problems have been found, it is usually enough to demonstrate the exploit for one case. Say, if there are 200 workstations missing Windows updates, hacking into each one of them may be a waste of time, but showing that you can get a shell on one of them is fine.

I think that proof-of-concept exploits are important. First of all, actually exploiting a problem may give the tester a better idea of its impact. There may be conditions that make it not exploitable, or that increase or decrease the impact. Secondly, it makes it easier for the customer's security officer, or whoever commissioned the test, to make a point. The first thing a person who raises a security problem with the people responsible for it usually hears is "Yes, but it is not possible, because..." Saying that while looking at the evidence of successful exploitation is slightly more difficult.

The test details section should also include the tests that produced no evidence of vulnerability. This is important for the peace of mind of both the tester and the customer, as it gives evidence that the tester was sufficiently thorough. Also, if at some later point a serious vulnerability is discovered that was not mentioned in the test report (a pen-tester's nightmare), the test details section should explain why it was missed. (For example: yes, we tested it, and at the time of the test the system behaved differently from what we see now - this is what we got then.)

The detailed description of how the vulnerability was identified or exploited is also important for re-tests. A re-test may be performed by the customer themselves, in which case it is very important to know what exactly to do to reproduce the problem. It may be performed by another tester; again, knowing what has previously been done saves time and improves reliability. (The exploit seems to be blocked by the IDS. Did the original tester have to use some trick to get around IDS filtering, or is this how the vulnerability is fixed now?) Even if it is the original tester who does the re-test, having the details helps a lot. Remembering the exact syntax of a particular SQL injection six months later is a lot to ask of human memory.
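
As an illustration of why keeping the exact reproduction steps pays off, here is a minimal sketch of a replay script a tester could attach to the test details. The target URL, parameter name and payload are hypothetical placeholders; the point is only that the exact request used during the test is recorded and can be re-run later.

    #!/usr/bin/env python
    # Minimal sketch of a reproduction script kept with the test details:
    # it replays the exact request that demonstrated a (hypothetical) SQL
    # injection and checks whether the tell-tale response is still there.
    # The target URL, parameter name and payload are placeholders.
    import urllib.parse
    import urllib.request

    TARGET = "http://app.example.com/item"    # hypothetical application
    PAYLOAD = {"id": "1' AND '1'='1"}         # exact payload used during the test
    MARKER = "SQL syntax"                     # error string seen in the vulnerable response

    def still_reproducible():
        url = TARGET + "?" + urllib.parse.urlencode(PAYLOAD)
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        return MARKER in body

    if __name__ == "__main__":
        print("still reproducible" if still_reproducible() else "not reproducible any more")

Six months later nobody has to remember the payload - it is right there with the report.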

Another sign of a good report is appendices containing custom scripts and tools. Not every test requires the development of custom test tools. However, if a tester has done so, it usually shows general clue and capability.
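
Such appendices do not have to contain anything elaborate. Even a throwaway helper like the following sketch - a simple banner grabber with made-up example targets - is useful, because it lets the customer re-run exactly what produced the results quoted in the test log.

    #!/usr/bin/env python
    # Minimal sketch of a throwaway helper a tester might attach to a report
    # appendix: grab service banners from a list of host:port pairs so the
    # results quoted in the test log can be reproduced. Targets are examples.
    import socket

    def grab_banner(host, port, timeout=5.0):
        # Connect and wait briefly for whatever the service sends first.
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                return s.recv(1024).decode("ascii", errors="replace").strip()
            except socket.timeout:
                return ""

    if __name__ == "__main__":
        for host, port in [("192.0.2.10", 22), ("192.0.2.11", 25)]:  # example targets
            try:
                print("%s:%d  %s" % (host, port, grab_banner(host, port)))
            except OSError as e:
                print("%s:%d  error: %s" % (host, port, e))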
