
Thursday, February 24, 2011

Evidence

John D. Cook has a very nice post up about evidence in science:

Though it is not proof, absence of evidence is unusually strong evidence due to a subtle statistical result. Compare the following two scenarios.

Scenario 1: You’ve sequenced the DNA of a large number of prostate tumors and found that not one had a particular genetic mutation. How confident can you be that prostate tumors never have this mutation?

Scenario 2: You’ve found that 40% of prostate tumors in your sample have a particular mutation. How confident can you be that 40% of all prostate tumors have this mutation?

It turns out you can have more confidence in the first scenario than the second. If you’ve tested N subjects and not found the mutation, the length of your confidence interval around zero is proportional to 1/N. But if you’ve tested N subjects and found the mutation in 40% of subjects, the length of your confidence interval around 0.40 is proportional to 1/√N. So, for example, if N = 10,000 then the former interval has length on the order of 1/10,000 while the latter interval has length on the order of 1/100. This is known as the rule of three. You can find both a frequentist and a Bayesian justification of the rule here.

Absence of evidence is unusually strong evidence that something is at least rare, though it’s not proof. Sometimes you catch a coelacanth.


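A quick numerical sketch (my own, in Python, not from Cook's post) makes the contrast concrete: with zero events the rule-of-three interval [0, 3/N] shrinks like 1/N, while the usual normal-approximation interval around an observed 40% shrinks only like 1/√N.

import math

# Illustrative sketch: compare confidence-interval widths for the two scenarios.
N = 10_000  # hypothetical sample size

# Rule of three: with 0 events in N subjects, an approximate 95% upper
# bound on the true rate is 3/N, so the interval [0, 3/N] has width ~ 1/N.
upper_zero = 3 / N

# Normal approximation: with observed proportion 0.40, the 95% interval is
# p_hat +/- 1.96 * sqrt(p_hat * (1 - p_hat) / N), so its width shrinks like 1/sqrt(N).
p_hat = 0.40
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / N)

print(f"Zero events:  interval [0, {upper_zero:.5f}]  (width ~ 1/N)")
print(f"40% observed: interval [{p_hat - half_width:.3f}, {p_hat + half_width:.3f}]  (width ~ 1/sqrt(N))")

With N = 10,000 the first interval has width of about 0.0003 and the second about 0.02, which is roughly the 1/N versus 1/√N scaling Cook describes.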
Now it is true that this approach can be carried too far. The comments section has a really good discussion of the limitations of this type of reasoning (it doesn't handle sudden change points well, for example).

But it is worth noting that a failure to find evidence (despite one's best attempts) does tell you something about the distribution. So, for example, the failure to find a strong benefit of Vitamin C on mortality, despite a number of large randomized trials, makes the idea that this supplement is actually helpful somewhat less likely. It is true that we could look in just one more population and find an important effect, or that it is only useful in certain physiological states (like the process of getting a cold) which are hard to capture in a population-based study.

But failing to find evidence of the association isn't bad evidence, in and of itself, that the association is unlikely.

P.S. For those who can't read the journal article, the association between Vitamin C and mortality is a relative risk of 0.97 (95% confidence interval: 0.88-1.06), N = 70,456 participants (this includes all of the trials).
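As a rough illustration (my own arithmetic, not from the paper), the reported interval can be turned back into a test against "no effect" by recovering the standard error of log(RR) from the CI endpoints:

import math

# Back-of-the-envelope check using only the figures quoted above.
rr, lo, hi = 0.97, 0.88, 1.06   # reported relative risk and 95% CI

se_log_rr = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE on the log scale
z = math.log(rr) / se_log_rr                            # z-score against RR = 1

# Two-sided p-value from the normal distribution.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"SE(log RR) ~ {se_log_rr:.4f}, z ~ {z:.2f}, p ~ {p_value:.2f}")
# The interval comfortably includes 1, and even the most optimistic bound
# (0.88) allows at most about a 12% relative reduction in mortality.

On these numbers the point estimate is well within what chance alone would produce (p around 0.5), which is the sense in which the large trials count as evidence against a meaningful mortality benefit.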