Continuing my series of posts following my reading of Popper’s ‘Logic of Scientific Discovery’, I think I have just finished the more difficult part of the book. Popper devotes a large section to his distinctive theory of probability, and I have to admit that many of its nuances, owing to my limited background reading, are lost on me. A few things are worth saying:
1. There has been a broad consensus among the philosophers I’ve known that a logical theory of probability is lacking in various ways compared with more conventional mathematical treatments of statistical functions; interestingly, among the mathematicians I’ve known this conviction is not shared. For this reason I suspect that this approach to probability, at least among Popper’s contemporaries, was not in the majority. How such an approach would fare now, however, is a more complicated question: formal approaches are in fashion in many areas of contemporary philosophy.
2. Popper should be read with a point of comparison and contrast: the probability accounts of Carnap, von Mises and Keynes (that is, J.M. Keynes, the legend of economics). Each of these thinkers had a particular aim for his thought on probability. Carnap integrates probability into a logic-of-science approach, while von Mises and Keynes worked in the wider context of applied numerical science, as theoreticians and practitioners: physics in the one case, economics in the other. Construing probability in such a wide light, and before an audience of philosophical methodology, shows the real eclecticism and interdisciplinarity of the period.
3. Popper moves away from talk of truth. Having started the book with a discussion of the limits of science, namely falsification and demarcation, Popper then moves on to consider how to make positive claims in science. A common reply to a discussion of falsification runs to the effect of: if our method concerns what should not be admissible as a scientific claim, what can we say is an admissible one?
Here is where the probability account comes in. Instead of a bivalent pair of values determining whether a claim is true or false, we are led to a notion of degrees of credence expressed in probability. Perhaps we can never speak of appropriately ‘verifying’ a theory, but we can speak in terms of falsification and a positive notion of what he calls ‘corroboration’. A claim is suitably corroborated in terms of its instances and a calculus that applies a certain set of purely statistical assumptions. The game of science is then moved away from talk of truth to talk of corroborated estimates of what we may deem to be factual. It is this notion of corroboration that answers the negation of verification.
4. Contemporary science, it should be said, relies on a great deal of statistical work. Everything from sociology and economics to chemistry and climate science involves gathering statistics to establish predictions and models. Any theory of science worth its salt needs to acknowledge the common contemporaneous practice of science, and the 20th-century turn to statistics in the methodological literature shows exactly this sensitivity. I would go further and say it is a desideratum of any such theory that it acknowledge the statistical machinery that the practice of science now uses.
5. One thought I advance: to what extent are the standards of rigour for corroboration internal to a discourse and its practitioners, or are they sufficiently generic to account for all discourses? I suspect that the notions of statistical accuracy and range involve a good deal of pragmatism, depending on what is being measured or predicted. This discussion is briefly addressed when Popper relates his notion of probability to the (then emerging) quantum mechanics.