Probabilities, Popper, and Theory Testing

One method for theory confirmation and disconfirmation is known as hypothetico-deductivism (H-D). According to H-D, a theory is confirmed by the observation of its entailments/predictions (its observation consequences), and it is disconfirmed when those observation consequences fail to obtain. Let 'H' represent some hypothesis, and let 'O' be an observation statement describing an event or feature of the world predicted by H (the observation consequence). On H-D, O is entailed by H. The approach looks something like this (for more on H-D, see my posts "Method for Confirmation" and "Method for Disconfirmation"):

1. If H, then O.
2. O.
___________
3. Therefore, (probably) H.

Here, to say that O confirms H is to say that O, a statement about some observed phenomenon, raises the probability of H. H predicts O, we find that O is true, and, hence, we conclude that H is confirmed by O. The schema for disconfirmation, according to H-D, goes as follows:

1. If H, then O.
2. ~O.
___________
3. Therefore, ~H.
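
To make the probability-raising reading of confirmation concrete, here is a minimal sketch using Bayes' theorem. The prior and likelihoods are invented purely for illustration; nothing hangs on the particular numbers.

```python
# Minimal sketch of confirmation as probability-raising, via Bayes' theorem.
# All numbers are invented for illustration.

prior_H = 0.3          # P(H): prior probability of the hypothesis
p_O_given_H = 1.0      # P(O | H): H entails O, so O is certain if H is true
p_O_given_not_H = 0.4  # P(O | ~H): O might occur even if H is false

# Total probability of O
p_O = p_O_given_H * prior_H + p_O_given_not_H * (1 - prior_H)

# Bayes' theorem: P(H | O) = P(O | H) * P(H) / P(O)
posterior_H = p_O_given_H * prior_H / p_O

print(f"P(H) = {prior_H:.2f}")          # 0.30
print(f"P(H | O) = {posterior_H:.2f}")  # 0.52
# Since P(H | O) > P(H), observing O confirms (raises the probability of) H.
```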

Corroboration and Refutation

The philosopher of science Karl Popper argued that, while theories can be corroborated by the discovery of their entailments, they cannot be confirmed by the discovery of those entailments. To say that O corroborates H simply means that O is consistent with H. This was part of Popper's solution to the problem of induction: if the reliability of induction cannot be demonstrated in a non-circular manner, then we must dispense with induction altogether, speaking instead of a theory's being corroborated (see my post on the problem of induction). This led Popper to place a good deal of weight on the falsifiability of a theory. According to Popper, the discovery of some phenomenon predicted by H does not raise the probability of H; it only shows that, thus far, H has been resilient to refutation. And if the predictions of a theory do not bear out in observation/experimentation, then that theory is refuted.

Problems with Popper's Hypothetico-Deductivism

Colin Howson and Peter Urbach note at least three problems with this account.

First, many scientific theories do not, by themselves, have deducible consequences that can be tested. Newton's laws, for example, do not on their own predict any empirical consequences (what empirical consequence can be deduced solely from the statement that "for every action there is an equal and opposite reaction"?). Rather, various initial conditions and auxiliary assumptions must be added before these laws imply any empirical consequences. To test Newton's laws, we must add conditions and qualifications about the mass, distance, location, velocity, and so on, of particular objects, and then see whether those objects obey Newton's laws. Let 'A' and 'I' represent auxiliary assumptions and initial conditions, respectively. Corroboration would then look like this:

1. If H & A & I, then O.
2. O.
__________________
3. Therefore, H is a corroborated theory.

Notice, however, that on this emendation it is no longer a logical consequence of H itself that is being tested. O is a consequence of the conjunction (H & A & I). Hence, O does not corroborate H alone; it corroborates the entire conjunction. Now, imagine that O had not occurred. Schematically, we have:

1. If H & A & I, then O.
2. ~O.
__________________
3. Therefore, ~H or ~A or ~I.

From ~O alone, we cannot conclude that H itself is refuted, since the problem might lie with one of our auxiliary assumptions or initial conditions. The falsifying evidence underdetermines which part of the overall theory is at fault, making a straightforward application of Popper's view of falsification difficult. We can always reject one or more auxiliary assumptions or initial conditions and hold onto the core of our theory, H. Popper's naive view of falsification and his commitment to the H-D method for disconfirmation do not rationally require a rejection of H itself.
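
To see the underdetermination point concretely, the following sketch (purely illustrative) enumerates every truth-value assignment to H, A, and I that remains consistent with the premises "if H & A & I, then O" and "~O":

```python
# Which assignments to H, A, I survive the failed prediction? (Illustrative sketch.)
from itertools import product

surviving = []
for H, A, I in product([True, False], repeat=3):
    O = False  # we observed ~O
    # The premise "(H and A and I) -> O" must hold
    if (not (H and A and I)) or O:
        surviving.append((H, A, I))

for H, A, I in surviving:
    print(f"H={H}, A={A}, I={I}")
# Seven of the eight assignments survive; only H=A=I=True is ruled out.
# In particular, H=True with A=False (or I=False) is still open, so ~O
# refutes the conjunction, not H itself.
```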

Secondly, rather than yielding deductive consequences, many theories only tell us what is likely to happen. Howson and Urbach cite Mendel's theory of inheritance as an example. Hence, theories can (and often do) make predictions that are probabilistic in nature, packaged as frequencies and likelihoods rather than entailments.
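
As a rough illustration of this kind of prediction (a sketch only, with a hypothetical sample size), consider the textbook monohybrid cross, where Mendel's theory leads us to expect a 3:1 dominant-to-recessive ratio as a frequency, not a guaranteed count in any particular batch of offspring:

```python
# Sketch of a probabilistic prediction: a monohybrid cross predicts recessive
# offspring with probability 1/4, so particular counts are only more or less likely.
# The sample size below is hypothetical.
from math import comb

def binomial_prob(k, n, p):
    """Probability of exactly k recessive offspring out of n, with P(recessive) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n = 20
p_recessive = 0.25

for k in [3, 5, 7]:
    print(f"P({k} recessive out of {n}) = {binomial_prob(k, n, p_recessive):.3f}")
# The theory entails no exact count; it assigns each possible count a probability.
```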

Lastly, even observation consequences that are entailed by a theory, and which do in fact obtain, may be known only with some degree of probability rather than certainty. This is because our methods of data acquisition and observation may not be able to guarantee that some event has in fact occurred. Experiments, and the devices we use in them, are often imperfect, leaving us with margins of experimental error and less than certain confidence in what our instruments tell or show us. Howson and Urbach write, "Thus for many deterministic theories, what may appear to be the checking of logical consequences actually involves the examination of experimental effects which are predicted only with a certain probability" (emphasis mine) [1].
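
Here is a small sketch of that point, with every number invented for illustration: even when the predicted value is exactly right, a noisy instrument gives a reading within a chosen tolerance only with some probability.

```python
# Sketch of observation under experimental error: the instrument adds noise,
# so the predicted reading is obtained only with a certain probability.
# All values are invented for illustration.
import random

random.seed(0)

true_value = 9.81        # value predicted by the theory plus auxiliaries
instrument_noise = 0.05  # standard deviation of the instrument's error
tolerance = 0.08         # how close a reading must be to count as observing O

trials = 10_000
hits = sum(
    abs(random.gauss(true_value, instrument_noise) - true_value) <= tolerance
    for _ in range(trials)
)

print(f"Estimated P(reading within tolerance | prediction true) = {hits / trials:.2f}")
# Checking a 'logical consequence' here is really checking an effect that the
# theory (with its auxiliaries) predicts only with a certain probability.
```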

The simple point to draw is that both H-D and Popper's view of falsification are lacking.

_____________________________________________________________________________
footnotes:

[1] Howson, Colin, and Peter Urbach. Scientific Reasoning: The Bayesian Approach. La Salle: Open Court, n.d. N. pag. Print.
[2] Howson and Urbach note that Popper tries to give an account of statistical falsification, but, they argue, there are "insuperable difficulties" with such an approach. See Chapter 5 of their book (cited above) to hear their grounds for this claim.
