In Vulnerability Assessment, accuracy is vital
The primary requirement for a Vulnerability Assessment solution is accurate testing. Ease of use and clear reports are important, but if accuracy isn't there then little else matters.
Poor accuracy in Vulnerability Assessment produces two kinds of testing error. Overlooking a vulnerability (a false negative) leaves a security flaw you don't know about. Reporting a vulnerability as present when none in fact exists (a false positive) sends you looking for something that can't be found. Obviously you want neither.
Clearly it's important for a solution to find the vulnerabilities. But an inaccurate report can be more trouble than it's worth.
If the first 4 vulnerabilities reported by your solution turn out, on close examination, not to exist, it becomes pretty difficult to take the 5th vulnerability seriously. 'Crying wolf' creates complacency. A VA report that claims dozens of serious security issues when there are really only 2 is more distraction than assistance. Also, how valuable is your time? Your security budget doesn't grow just because your VA system says there *may be* dozens or hundreds of vulnerabilities on your network. The hidden cost of an inaccurate VA system is the man-hours it takes to chase false positives, prove that they are false and check them off the list. The total cost of ownership of a VA system with a 5 to 8% false positive rate doubles once the time to verify and eliminate false positives is included. Even a 2% error rate can be a headache.
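The arithmetic behind that hidden cost is easy to sketch. Every figure below - the number of findings, the false positive rate, the hours to dismiss each one - is an invented assumption for illustration, not a measurement of any particular product:

```python
# Hypothetical cost model -- all figures are illustrative assumptions,
# not measurements from any particular VA product.
def triage_hours(findings, false_positive_rate, hours_per_false_positive):
    """Hours spent proving false positives false and checking them off."""
    return findings * false_positive_rate * hours_per_false_positive

# A scan reporting 1000 findings at a 5% false positive rate,
# assuming 2 man-hours to verify and dismiss each false positive:
print(triage_hours(1000, 0.05, 2))  # 100.0 hours chasing ghosts
```

Even under mild assumptions, the verification labor scales linearly with the false positive rate, which is why a few percentage points of inaccuracy can dominate the real cost of the system.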
Nearly all VA solutions depend upon version checking as their primary method of assessing the relative vulnerability of network hardware or software. VA solutions typically look at the response header and from the version data there they deduce whether the hardware or software is vulnerable. If an old version is known to have 5 vulnerabilities and the header says that the old version is in use, then it is assumed that all 5 of those vulnerabilities exist.
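The version-checking approach described above can be sketched in a few lines. The header string, product name and vulnerability table here are all illustrative assumptions, not data from a real scanner:

```python
import re

# Minimal sketch of version-based checking. The product name, header
# format and vulnerability table are invented for illustration.
KNOWN_VULNS = {
    # product: (last vulnerable version, advisory IDs)
    "ExampleHTTPd": ((2, 4, 1), ["VULN-001", "VULN-002", "VULN-003"]),
}

def version_check(server_header):
    """Flag every catalogued issue for a version at or below the cutoff."""
    m = re.match(r"(\w+)/(\d+)\.(\d+)\.(\d+)", server_header)
    if not m:
        return []
    product = m.group(1)
    version = tuple(int(x) for x in m.groups()[1:])
    if product in KNOWN_VULNS:
        cutoff, vulns = KNOWN_VULNS[product]
        if version <= cutoff:
            return vulns   # all 3 reported, none actually verified
    return []

print(version_check("ExampleHTTPd/2.3.0"))  # ['VULN-001', 'VULN-002', 'VULN-003']
print(version_check("ExampleHTTPd/2.5.0"))  # []
```

Note that the only evidence consulted is the banner string: nothing in this logic ever confirms that any of the three reported issues is actually exploitable on the host.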
Version checking has many advantages for the vendor and one key disadvantage for the customer. It is easy to program and claim '45,000 tests'. Also, a version analysis scan that finds an old version can produce a long and impressive list of vulnerabilities. This makes the solution look good.
The disadvantage: poor accuracy misses real problems and lists dozens, if not hundreds, of vulnerabilities that don't actually exist. The version information contained in a header does not reflect the presence or absence of a security issue with high accuracy.
The fundamental indicator of a vulnerability is 'unwanted response to a query'. Vulnerabilities can be exactly and accurately identified by how the host responds when given a special query.
AVDS is alone in the field of Vulnerability Assessment solutions in using specially crafted queries, and the resulting behavior of network components and web applications, as its primary indicator of whether a specific vulnerability exists. This strategy requires a great deal more effort in programming the vulnerability tests, but it produces so few false positives that most of our customers never experience one.
The version number reported in a header is only a general indicator of vulnerability. It is not accurate enough for mission-critical use in Vulnerability Assessment.
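A common way the two approaches diverge is a backported security fix: the vendor patches the flaw but leaves the version string unchanged. The banner, probe and responses below are invented placeholders that illustrate the contrast:

```python
# Illustrative contrast between the two approaches. The banner, probe
# and replies are invented; the scenario is a backported fix, where the
# version string stays old but the flaw itself is gone.
def version_verdict(banner):
    # old banner -> assumed vulnerable, sight unseen
    return "vulnerable" if banner == "ExampleHTTPd/2.3.0" else "ok"

def behavior_verdict(reply_to_probe):
    # an unwanted response to the crafted query is the real indicator
    return "vulnerable" if b"root:" in reply_to_probe else "ok"

banner = "ExampleHTTPd/2.3.0"          # old version string kept after patching
reply  = b"HTTP/1.0 400 Bad Request"   # but the crafted probe is now rejected

print(version_verdict(banner))         # vulnerable  (a false positive)
print(behavior_verdict(reply))         # ok
```

The version check cries wolf; the behavioral check reaches the correct verdict because it looks at what the host actually does.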
Examples of false negatives (missed vulnerabilities):
Examples of false positives (no actual vulnerability):
False negatives are clearly a catastrophic failure in VA. All vendors recognize this, and the broadly accepted solution is to declare every possible issue a vulnerability and let the network administrator try to prove otherwise. This, along with the race to offer the most tests and report the most vulnerabilities, has made the false positive endemic to Vulnerability Assessment.
A 5% false positive rate may not be a problem for small networks, depending on what the admin's time is worth. Five false positives in a network of 300 IPs may not seem like a big deal. But if all 5 are also flagged as critical, you have just doubled the time you spend chasing ghosts.
What if you have 1000 IPs with 25 high-risk false positives? It may take weeks to sort them out.
Nearly all VA solutions depend primarily on the version number to determine whether an application is vulnerable. Verifying that each reported problem actually exists therefore takes additional manual labor, and at least one vendor recommends that you buy and run an additional tool to do just that.
AVDS doesn't care what the application version number says. It automatically does the 'manual labor' needed to prove that the vulnerability exists.
A Real Life Vulnerability Test:
The SOAP interface to the eMBox module in Novell eDirectory 8.7.3.9 and earlier, and 8.8.x before 8.8.2, relies on poorly executed client-side authentication. This allows remote attackers to bypass authentication via requests for /SOAP URIs, which can cause a denial of service (daemon shutdown) or allow arbitrary files to be read. NOTE: it was later reported that 8.7.3.10 (aka 8.7.3 SP10) is also affected.
How version-dependent tools test:
1) Check the version of eMBox. Is it 8.7.3.9 or earlier?
2) If yes, then report it as vulnerable
How AVDS tests:
1) Confirm it's an HttpStk server by sending it a request that triggers a pre-defined error page (basically an invalid HTTP request)
2) Then HTTP POST this to the server:
3) If it returns:
AVDS knows it is talking to the right type of server
4) Send a followup request with:
5) If it returns:
AVDS knows it's secure.
Any other response indicates the host is vulnerable regardless of what version number the header provides. The test itself makes no change to the host and doesn't interfere with any other traffic.
Testing the behavior of hosts and applications is harder to program than simply asking for the version number, but it results in accurate testing, conclusive and actionable reports, and a dramatic reduction in the time it takes to clean up network vulnerabilities.