This blog isn't about the results of his assessment, nor, to a certain extent, the merits of his findings. Rather, it's food for thought on how we, the potential consumers of these tools, could collectively work together to provide an unbiased, "just the facts" view of what really works. So here goes...
How about a real evaluation of the various tools? A more truthful evaluation, conducted by a collective of independent security professionals, each performing the same assessment of each tool against the same vulnerable web application.
A bake-off consisting of a web application with every vulnerability predetermined and identified, serving as the baseline. The deliberately vulnerable web application will of course not be one of the various vendors' test sites; the aim is to keep this unbiased and independent. A possible candidate could be Damn Vulnerable Web Application (DVWA - http://dvwa.co.uk).
The list of tools to be evaluated can consist of both open source and commercially available tools. However, for the commercially available tools, any vendor wishing to participate in the evaluation must grant each assessor an unrestricted license tied to one IP address outside of the vendor's controlled environment. Each assessor must use the same version of the application and signature database.
Then take each tool to be assessed and test how many vulnerabilities it can successfully identify. Duh, right? The difference is that the first assessment will be completed without any credentials supplied to the tool. The second run will be completed with a non-privileged user account. And the third will be completed using an administrator's account.
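To make that concrete, here is a minimal sketch of how an assessor might drive those three runs. It assumes a hypothetical command-line scanner (the binary name, flags, and credentials below are placeholders, not any real tool's interface) and simply produces one report per credential level:

```python
import subprocess

# Hypothetical scanner CLI and target; substitute the actual tool under test.
SCANNER = "examplescan"              # placeholder binary name, not a real tool
TARGET = "http://192.0.2.10/dvwa"    # the single licensed IP, per the ground rules

# The three credential levels described above.
RUNS = {
    "no-credentials": [],
    "user": ["--username", "dvwa_user", "--password", "user_pass"],
    "admin": ["--username", "dvwa_admin", "--password", "admin_pass"],
}

for label, cred_args in RUNS.items():
    # One report per run so results can be compared across credential levels.
    cmd = [SCANNER, "--target", TARGET, "--report", f"report_{label}.xml"] + cred_args
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```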
The test plan should be based on something like the OWASP Testing Guide or some other industry-accepted, collaboratively developed framework. This test plan should be something that anyone can refer to and that has been vetted by industry experts. The idea here is that the test plan is based on real-world scenarios and common issues experienced by everyone. (Hence the OWASP, NIST, SANS Top 10...)
During the assessment, anything that is identified as a potential vulnerability but isn't exploitable or usable for additional or cascading attacks will be marked as a type 1 error (false positive). Anything in the baseline that the tool misses will be categorized as a type 2 error (false negative). And if another verifiable vulnerability is newly discovered outside of what has already been identified, it will be classed simply as a positive find, but discarded from the scoring.
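As a rough illustration of that scoring, here is a minimal sketch that compares one tool's reported findings against the agreed baseline. The finding identifiers and sample data are made up for illustration; anything a tool reports outside the baseline still needs manual review to separate true type 1 errors from genuinely new (but discarded) positive finds:

```python
# Baseline: every known vulnerability in the target app, agreed on up front.
baseline = {
    "sql-injection:/vulnerabilities/sqli/",
    "xss-reflected:/vulnerabilities/xss_r/",
    "command-injection:/vulnerabilities/exec/",
}

# Findings reported by one tool during one run (hypothetical data).
reported = {
    "sql-injection:/vulnerabilities/sqli/",
    "xss-reflected:/vulnerabilities/xss_r/",
    "directory-listing:/images/",            # not exploitable -> likely type 1
}

true_positives = reported & baseline       # confirmed findings in the baseline
unmatched = reported - baseline            # manual review: type 1 vs. new-but-discarded
type2_false_negatives = baseline - reported  # real vulnerabilities the tool missed

print(f"True positives:        {len(true_positives)}")
print(f"Unmatched (review):    {len(unmatched)}")
print(f"Type 2 (false negs.):  {len(type2_false_negatives)}")
```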
Once each evaluation for each tool is completed, the reports are submitted for correlation, analysis and reporting.
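For the correlation step, something as simple as rolling the per-tool, per-run counts into one comparison table would do. A minimal sketch follows, assuming (purely for illustration) that each assessor submits a CSV with columns tool, credential_level, true_pos, type1, type2:

```python
import csv
from collections import defaultdict

# Assumed submission format: tool,credential_level,true_pos,type1,type2
totals = defaultdict(lambda: {"true_pos": 0, "type1": 0, "type2": 0, "runs": 0})

with open("submissions.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        key = (row["tool"], row["credential_level"])
        totals[key]["true_pos"] += int(row["true_pos"])
        totals[key]["type1"] += int(row["type1"])
        totals[key]["type2"] += int(row["type2"])
        totals[key]["runs"] += 1

# Average across assessors so no single evaluation skews the comparison.
for (tool, level), t in sorted(totals.items()):
    runs = t["runs"]
    print(f"{tool:20} {level:15} "
          f"TP={t['true_pos']/runs:5.1f}  FP={t['type1']/runs:5.1f}  FN={t['type2']/runs:5.1f}")
```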
The evaluation performed by "Larry" doesn't do a particularly good job of stating the baseline for the assessment. Rather, it just states which tool discovers X number of vulnerabilities in comparison to like tools, against a vendor's vulnerable web application, without the use of credentials.
If you are still interested, you can review the reports yourself. Larry Suto's latest report is available here:
http://ha.ckers.org/files/Accuracy_and_Time_Costs_of_Web_App_Scanners.pdf
And his previous report can be downloaded from:
http://ha.ckers.org/files/CoverageOfWebAppScanners.zip
The vendors' responses posted online:
From Acunetix:
http://www.acunetix.com/blog/news/latest-comparison-report-from-larry-suto/
From HP: http://www.communities.hp.com/securitysoftware/blogs/spilabs/archive/2010/02/08/on-web-application-scanner-comparisons.aspx
For the references to the testing methodologies:
OWASP - http://www.owasp.org/index.php/Category:OWASP_Testing_Project
PortSwigger - http://portswigger.net/wahh/tasks.html