Confirmation and the Relevance Quotient: A Proof

The critical principle under consideration in this post is:

(1) Pr(H|E) > Pr(H) ↔ Pr(E|H) > Pr(E).

Alternatively:

(1)* Pr(H|E)/Pr(H) > 1 ↔ Pr(E|H)/Pr(E) > 1

Let it be that some proposition or observation statement, E, is evidence for some hypothesis, H, iff E increases the probability of H. Formally put, E is evidence for H iff

Pr(H|E) > Pr(H)

There are different ways of telling when E counts as evidence for H, as well as different implications about the relationship between E and H when it is known that E is evidence for H. For example, J.L. Mackie's well-known relevance criterion gives us the following:

Pr(H|E) > Pr(H) ↔ Pr(E|H) > Pr(E|~H).

What makes (1) interesting and a bit different from the relevance criterion is that it trades on what is sometimes called the relevance quotient in the Bayesian literature. The relevance quotient -- Pr(E|H)/Pr(E) -- is a useful heuristic for thinking about explanatory power [1]. Say we come across some empirical phenomenon, expressed by 'E.' If the expectedness of E -- Pr(E) -- is low but E becomes more likely when H is introduced, we can say that H has some explanatory power vis-a-vis E. The further Pr(E|H)/Pr(E) exceeds 1, the greater the explanatory power of H [2]. According to (1), then, if H has any explanatory power at all vis-a-vis E, E is evidence for H. Now on to the proof.

For the first part of the proof, we start with the following inequality:

Pr(H|E) > Pr(H)

Expanding Pr(H|E) by Bayes's theorem (assuming Pr(E) > 0), we get:

[Pr(H)Pr(E|H) ÷ Pr(E)] > Pr(H)

The above is equivalent to (this step is for perspicuity):

[Pr(H) × Pr(E|H)/Pr(E)] > Pr(H)

Assuming Pr(H) > 0, we divide both sides of the inequality by Pr(H) and get:

Pr(E|H)/Pr(E) > 1

Finally, multiplying both sides by Pr(E) gives:

Pr(E|H) > Pr(E)
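Before moving on, here is a quick numerical illustration of the forward direction (a sketch of my own; the probability values are invented for the example):

```python
# Toy numbers: Pr(H) = 0.3, Pr(E|H) = 0.9, Pr(E|~H) = 0.2.
p_h = 0.3              # prior probability of the hypothesis
p_e_given_h = 0.9      # likelihood of the evidence under H
p_e_given_not_h = 0.2  # likelihood of the evidence under ~H

# Law of total probability: Pr(E) = Pr(H)Pr(E|H) + Pr(~H)Pr(E|~H)
p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h  # 0.27 + 0.14 = 0.41

relevance_quotient = p_e_given_h / p_e  # Pr(E|H)/Pr(E), roughly 2.2
p_h_given_e = p_h * p_e_given_h / p_e   # Bayes's theorem, roughly 0.66

print(p_h_given_e > p_h)         # True: E confirms H
print(relevance_quotient > 1)    # True: so Pr(E|H) > Pr(E), as derived
```

Both inequalities come out true together, just as the derivation says they must.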

So far, all I have proved is the conditional, 'Pr(H|E) > Pr(H) → Pr(E|H) > Pr(E).' To prove the bi-conditional claim of (1), I start with 'Pr(E|H) > Pr(E)' and work towards 'Pr(H|E) > Pr(H).' Here's the proof, with the justification for each step in brackets:

1. Pr(E|H) > Pr(E) [premise]
2. Pr(E|H) > Pr(H)Pr(E|H) + Pr(~H)Pr(E|~H) [from 1, expanding Pr(E) by the law of total probability]
3. Pr(E)Pr(H|E)/Pr(H) > Pr(H)Pr(E|H) + Pr(~H)Pr(E|~H) [from 2, since Pr(E|H) = Pr(E)Pr(H|E)/Pr(H) by Bayes's theorem]
4. Pr(E)Pr(H|E) > [Pr(H)Pr(E|H) + Pr(~H)Pr(E|~H)] × Pr(H) [from 3, multiplying both sides by Pr(H)]
5. Pr(E)Pr(H|E) ÷ [Pr(H)Pr(E|H) + Pr(~H)Pr(E|~H)] > Pr(H) [from 4, dividing both sides by the bracketed sum]
6. Pr(H)Pr(E|H) ÷ [Pr(H)Pr(E|H) + Pr(~H)Pr(E|~H)] > Pr(H) [from 5, since Pr(E)Pr(H|E) = Pr(E) × Pr(H)Pr(E|H)/Pr(E) = Pr(H)Pr(E|H)]
7. Pr(H|E) > Pr(H) [from 6, since the denominator is just Pr(E), and Pr(H)Pr(E|H)/Pr(E) = Pr(H|E) by Bayes's theorem]

From the two proofs above, we have: 

Pr(H|E) > Pr(H) → Pr(E|H) > Pr(E)

and

Pr(E|H) > Pr(E) → Pr(H|E) > Pr(H).

And since '(Φ → Ψ) & (Ψ → Φ)' is logically equivalent to '(Φ ↔ Ψ)', it follows that

Pr(H|E) > Pr(H) ↔ Pr(E|H) > Pr(E).
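The biconditional can also be checked numerically. The following sketch (my own, not part of the proof) samples random values for Pr(H), Pr(E|H), and Pr(E|~H) and confirms that the two sides of (1) always agree:

```python
# Monte Carlo sanity check of the biconditional:
# Pr(H|E) > Pr(H) holds exactly when Pr(E|H) > Pr(E).
import random

random.seed(0)
for _ in range(100_000):
    p_h = random.uniform(0.01, 0.99)
    p_e_given_h = random.uniform(0.01, 0.99)
    p_e_given_not_h = random.uniform(0.01, 0.99)

    # Law of total probability and Bayes's theorem.
    p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
    p_h_given_e = p_h * p_e_given_h / p_e

    # The two sides of (1) must always agree.
    assert (p_h_given_e > p_h) == (p_e_given_h > p_e)

print("biconditional held in all 100,000 trials")
```

No counterexample turns up, as the proof guarantees.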

_______________________________________________________________
Footnotes:
[1] See McGrew (2003), Confirmation, Heuristics, and Explanatory Reasoning.
[2] Explanatory power can also be understood comparatively, by using the relevance quotient to compare two competing explanations -- H1 and H2 -- for E. H1 is said to have better explanatory power than H2 vis-a-vis E iff Pr(E|H1)/Pr(E) > Pr(E|H2)/Pr(E). See McGrew (2003) for more on this. Also, it occurred to me that thinking of explanatory power along the lines of the relevance quotient has gains over the simpler view of explanatory power as merely the likelihood itself -- Pr(E|H). Pr(E|H) might be very high and yet H might not make E any more likely than it already was (because Pr(E) is high as well). Moreover, it might be that Pr(E|H1) > Pr(E|H2), tempting one to say that H1 has more explanatory power than H2, when in fact neither relevance quotient is above 1. Hence, neither H1 nor H2 has any explanatory power for E at all. By using the relevance quotient as our heuristic for explanatory power, we avoid these problems.
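The pitfall described in footnote [2] can be made concrete with a few lines of Python (the numbers are invented purely for illustration):

```python
# H1 "beats" H2 on raw likelihood, yet neither hypothesis has any
# explanatory power, because both relevance quotients sit below 1.
p_e = 0.8           # E was already highly expected
p_e_given_h1 = 0.6  # likelihood of E under H1
p_e_given_h2 = 0.4  # likelihood of E under H2

rq1 = p_e_given_h1 / p_e  # relevance quotient of H1: 0.75
rq2 = p_e_given_h2 / p_e  # relevance quotient of H2: 0.5

print(p_e_given_h1 > p_e_given_h2)  # True: H1 has the higher likelihood
print(rq1 > 1 or rq2 > 1)           # False: neither explains E at all
```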
