#133: Dangerous Knowledge, Diagnostic Evidence & the Cone of Uncertainty
3 Ideas in 2 Minutes on Knowledge Discovery
I. Dangerous Knowledge
The most Dangerous Knowledge is not the kind you might expect. It’s the assumptions we make every day. Senior lecturer in intelligence and security studies Charles Vandepeer explains:
Assumptions have been described as the most dangerous form of knowledge. Why? Because an assumption carries with it unconsidered information, knowledge that is not subject to thought or critique. However, assumptions are a fact of life; we all have them and we all rely on them. Within intelligence analysis, assumptions are critical because of their potential consequences. The best that we can do is identify them and make them explicit.
—Charles Vandepeer, Applied Thinking for Intelligence Analysis
II. Diagnosticity of Evidence
According to former intel analyst Richards Heuer, the Diagnosticity of Evidence is an often underrated feature of evidence:
It refers to the extent to which any item of evidence helps the analyst determine the relative likelihood of alternative hypotheses.
To illustrate, a high temperature may have great value in telling a doctor that a patient is sick, but relatively little value in determining which illness the patient is suffering from. Because a high temperature is consistent with so many possible hypotheses about a patient’s illness, it has limited diagnostic value in determining which illness (hypothesis) is the more likely one.
Evidence is diagnostic when it influences an analyst’s judgment on the relative likelihood of the various hypotheses. If an item of evidence seems consistent with all the hypotheses, it may have no diagnostic value at all. It is a common experience to discover that most available evidence really is not very helpful, as it can be reconciled with all the hypotheses.
—Richards Heuer, Psychology of Intelligence Analysis
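Heuer’s point has a natural Bayesian reading: evidence is diagnostic to the extent that it is much more likely under one hypothesis than another. Here’s a minimal sketch of that idea with made-up numbers for his doctor example (the symptoms and probabilities are illustrative assumptions, not from Heuer):

```python
def update(priors, likelihoods):
    """Bayes' rule: posterior is proportional to prior * likelihood, normalized."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Two competing hypotheses about a patient, equally likely at the start.
priors = {"flu": 0.5, "food_poisoning": 0.5}

# A high temperature is nearly as likely under both hypotheses,
# so it barely moves the posterior: low diagnostic value.
fever = {"flu": 0.9, "food_poisoning": 0.8}
print(update(priors, fever))     # roughly 0.53 vs. 0.47

# A symptom far likelier under one hypothesis shifts the posterior
# strongly: high diagnostic value.
vomiting = {"flu": 0.1, "food_poisoning": 0.7}
print(update(priors, vomiting))  # roughly 0.13 vs. 0.87
```

The takeaway matches the quote: when a piece of evidence is consistent with every hypothesis, its likelihood ratio is close to 1 and the posterior barely changes, no matter how dramatic the evidence feels.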
III. Cone of Uncertainty
You may have noticed that predicting the future is pretty difficult, which is why forecasting is the trickiest business in intel analysis. Still, there are gradations of “difficult,” represented by the Cone of Uncertainty.
Picture a funnel: a timeline on the x-axis and a measure of variance in your estimates on the y-axis. In the beginning, there will be lots of variance in your forecasting of what’s going to happen: the fat end of the cone. That changes the more time passes. Predictions become increasingly accurate, albeit still imperfect.
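One way to picture the funnel in code (a toy sketch of my own, with illustrative numbers): treat the cone as an error band around a forecast that narrows as time passes and more information comes in.

```python
def cone_width(initial_spread, t, horizon):
    """Linear narrowing: full spread at t=0, zero spread at the horizon."""
    return initial_spread * (1 - t / horizon)

target = 100  # the value we are trying to forecast
for t in range(0, 11, 2):
    w = cone_width(40, t, 10)
    # The band [target - w, target + w] is the cone at time t.
    print(f"t={t:2d}: estimate in [{target - w:.0f}, {target + w:.0f}]")
```

Early on the band is wide (60 to 140 here); by the horizon it collapses onto the actual outcome. Real cones rarely shrink this neatly, but the shape is the point.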
There’s another application of the Cone of Uncertainty, as former covert officer Andrew Bustamante explains:
The Cone of Uncertainty also works in reverse. When you do something in anonymity, there’s the fat end of the cone. You can hide inside the fat end of the cone because you have anonymity. But then as you continue to do similar things, you kind of work yourself into a place where you can hide in fewer and fewer places.
🐘
Have a great week,
Chris
themindcollection.com