A pile in the swamp

I’ve been reading Karl Popper’s ‘The Logic of Scientific Discovery’ very slowly over the past few months. I feel like I’ve made some headway, so I’ll make some notes. Firstly I’ll consider how people normally think of Popper, and then how I’ve read him. I find a certain disconnection, or incompatibility, between the two.

The putative Popper

Popper is often understood as advocating a thesis of ‘falsification’. Falsificationism is often treated as a footnote to mention after talking about ‘Verificationism’. Verificationism itself is a philosophical bastardry: Verificationism (as ‘championed’ by Ayer) is the illegitimate son of what the Vienna Circle became associated with under the so-called label ‘Logical Positivism’. I think a lot of damage is done by upper-school and degree lecturers in the ‘textbook’ depictions of so-called ‘Logical Positivism’, which is mischaracterised as being, firstly, a thesis that the only meaningful statements are analytic or synthetic propositions. This not only ignores the fact that the analytic/synthetic distinction is suspect, but also implicitly and falsely puts forward that it is strictly defined. Analyticity, and the classification of non-empirical synthetic statements, became a thorny issue toward the late 20thC.

Characterising Verificationism as coming from the Vienna group of philosophers undermines their complexity and the great achievements they made. There is a lot of good historical scholarship working past this misunderstanding, and with the birth of movements such as Experimental Philosophy (advocated by the likes of Knobe et al.) or Formal Philosophy (epistemology, philosophy of action, probability), advocated by people such as Hendricks, we can see the true wealth that there really is in the ideas and projects of the Vienna group. In this light, Popper seems an afterthought.

Falsification is seen as a thesis which is in some sense a reversal of verification. In a very superficial way this makes sense. Verificationism is an empiricist thesis which asserts that if a claim cannot be verified in some way then it is meaningless and not a proper object of (scientific) enquiry. Taken as a universal thesis beyond the domain of science, it renders metaphysics, ethics, religious language and the supernatural nonsensical domains of thought. Admittedly, the onus is then on such a view to explain ethics and religion despite the fact that they don’t refer to anything. This leads to views such as expressivism in meta-ethics, which is an interesting discussion in itself.

Falsification is almost seen as a fork to the knife of verification. Putatively, falsification is read in verificationist terms: something is meaningless if it cannot be falsified. Verification has its own problems, so if one turns it into a negative criterion, one could seemingly avoid the paradoxes and issues that the former entails. Showing that something is so vague that it cannot possibly be falsified makes it meaningless because it does not pertain to anything relevant enough. While this explanation seems true enough, it does not really go into the very complex character of Popper’s ‘Logic of Science’. Popper is apparently a philosopher who is read by actual scientists and is praised outside of philosophy. I wonder, however, how much these non-philosophers really understand the so-called ‘falsificationist’ philosophy of Popper, at least as it comes from ‘The Logic of Scientific Discovery’. A further read beyond the simple falsification dictum shows a deeply metaphysical and rationalist character, more akin to Carnap and, dare I even say, Kant’s philosophy of science [a blog named Noumenal Realm should expect every post to be on Kant].

Popper’s Falsification beyond ‘falsification’

Popper’s ‘Logic of Scientific Discovery’ (henceforth Logik) is a work that I must admit I hardly understand; there are historical issues pertaining to the scientists and philosophers that Popper refers to which are themselves topics for wider discussion. It should be said, though, that his agenda sits in a wider context that defined a large part of the work of the Viennese philosophers and the philosophically oriented scientists of the day (if there is a distinction between the two). Popper’s Logik pushes my grasp of formal logic to the limit, and a lot of his work involves formal niceties that deserve careful analysis.

Most people (rightly) point out that the initial step in Popper’s process is to distinguish between science and non-science. Demarcation is essential in characterising what is the proper object of analysis. Popper holds that stratifying claims in terms of higher and lower dimensionality, or extensions of reference, is a means to separate the specificity and generality of claims. This is a highly important part of his system, as it is a way of establishing falsification.

An example Popper gives of this ‘stratification’ is a set of claims about orbits. We can start off with pre-Kepler observations about heavenly bodies going around the sun, and then make more specific (and ‘weaker’) claims about the nature of the orbit. When we make claims of greater specificity, we go a dimension higher than the initial vague terminology of pre-Kepler or prescientific/folk observations, so that they become more refined. There is an epistemic weighting to this ordering system. In order to falsify a claim (x), the falsifying thesis (y) must pertain to a dimensionality appropriate to refer to (x) in this aforementioned schematisation. The more specific a claim is, the more difficult it is to falsify; on the other hand, there are different levels at which a claim may be relevant. As a proposition gets ‘higher’ in the dimensionality so described, its domain of reference changes. Initially we may speak of how we see planets orbit, but as claims become more advanced, we may talk about the nature of the orbit and, say, go to the level of the geometry of conic sections as we say that an orbit is elliptical rather than circular.

Perhaps in our contemporary context this is played out in how specialised areas of physics have become. Particle physics, for instance, has gone to a domain where observation is difficult and speculation takes place on the level of mathematical entities. This is not to say that it is not possible to experiment or observe empirically, but it requires a much more sophisticated level of equipment and technique. As a friend once said in a different context: science these days has gone far beyond boiling one’s own piss.

Popper intricately describes the formal relations between the higher and lower levels. A claim can only be disproven by a proposition of the same or greater specificity: the specific claim of an elliptical orbit, for instance, falsifies the vaguer claim of a ‘circular’ or ‘round’ orbit, while the vague claim cannot do the reverse. Perhaps the story of physics from the late 19thC into the 20thC is a story of how a more specific level of explanation trumps the past theory. Often this takes place by way of an experiment that shows the limits of how far a particular dimensionality can explain, or shows a level at which it falls apart.
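A minimal way to schematise this ordering (my own notation, not Popper’s): write d(p) for the level of specificity of a proposition p, and restrict the falsifying relation so that it only runs from claims of equal or greater specificity:

\[
q \ \text{may falsify} \ p \quad \text{only if} \quad d(q) \geq d(p).
\]

On this reading the ellipse claim, sitting at a greater d, has purchase on the vague ‘round orbit’ claim, but the vague claim has none on the refined one.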

It should be said that this notion of ‘dimensionality’, where propositions are ordered into a hierarchy of specificity, is much like Kant’s systematicity proposal, where ‘concepts’ are organised in terms of propositions of greater and lesser generality. There are differences, however. I am not entirely clear whether the dimensionality in Popper’s thesis is a distinction between ‘better’ and ‘worse’ explanations (where general means an undefined, vague or broad reference to concepts like a ‘round orbit’ as opposed to a precisely defined ellipse), or whether there is a ‘final theory’ where propositions range from observations, which are specific instances, to higher, generalised claims which account for a family of propositions. As propositions become more generalised they also become more formal and mathematical. So, in Kant’s dimensionality we presume a ‘final theory’ (to use Hawking’s terminology) where specificity refers to empirical instances and generality refers to the mathematical/formal world. In Popper’s dimensionality, generality refers to the empirical level, which is, in varying degrees, correct or incorrect. Specificity accounts for the increasingly formal nature of a theoretical proposition, as well as the ground on which it can be falsified and the probability of its truth.

Something should be said about probability at this point. I found probability a hard topic to understand, partly because I’m aware Carnap has a hand in Popper’s thought here. Popper engages, as did Carnap, with a notion of logical probability, where we allocate a probability to a statement in terms of a degree of credence. The calculus by which such a probability is constructed I’ve not yet reached in the book (I’m not sure if it will be addressed, or if I’d understand it). Many people have shied away from the notion of logical probability for various reasons: seeming too close to Logicism would be a start, and a discussion of the foundations of mathematics would be another way into this issue. Those issues are beyond my comprehension, but I will say that Popper addresses logical probability in terms of falsification and the dimensionality thesis. The degree of probability depends on the degree of falsification potential of a thesis; in addition, the dimensionality of a proposition also has a bearing on its logical probability. In this way Popper has a certain richness that bears comparison with Kant’s philosophy of science.
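One relation that does come through, and which a rough formula may help fix (a sketch in a notation of my own choosing, not a quotation of Popper’s calculus): the content of a hypothesis, and with it its degree of falsifiability, varies inversely with its logical probability,

\[
\mathrm{Ct}(h) = 1 - p(h),
\]

so the bolder, more falsifiable hypothesis is also the logically less probable one.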

Introducing logical probability makes Popper’s notion of dimensionality (or what I might call systematicity, in a sense) pertain to how to decide on a theory’s rightness or wrongness, how to distinguish between science and non-science, and how to determine the falsification potential of a thesis. For Kant, dimensionality does not include logical probability but rather the ‘as if’ presumption of a final theory. Popper has no such pretension. The idealism of a final theory is truly gone in the age of Popper, but that is not to say that presuming a final theory is to assert that ours is the final theory. Comparing Kant and Popper has much potential in this regard.

Michael

Social psychology

There are quite a few philosophers who draw from empirical research these days:

1. Neuroscience/neuropsychology
2. Economics/game theory
3. Social psychology
(among others)

I think I’ve changed my mind about this a little over the past few months. I used to outright reject any insight from such disciplines (with the possible exception of non-empirical game theory); but I now deem that there are some important provisos that should be fulfilled before considering them as having philosophical implications. And, oddly enough, these are non-philosophical considerations.

i. Are the variables sound?
ii. Are the variables sufficiently able to be mathematically constructed?
iii. Are the findings empirically repeatable?
iv. Has a pilot study been conducted to deem the methodology effective?
v. Is the study ethical?

Let me consider the last point. Why should we care that a study is ethical? There are various reasons, some of which you may not have considered. The obvious one is that unethical studies cause harm to the research subject. Less obvious implications: the reputation of the researcher, their group, their funding agency, their university or institution, and the discipline as a whole is put in jeopardy. This means people will not trust researchers if they are unethical, and for good reason too if they were known to cause harm. Some of you might be more flippant and say something like: okay, the research has already been done, so there is still some import to the study, right?

Not necessarily. Unethical studies are difficult to repeat: one, for ethical reasons; two, because often the variables are too different to repeat in exactly the same way. Studies that cannot, or will not, be repeated are too difficult to verify, but you are, if you are innovative enough, able to falsify them (by testing aspects of the operational design process). Unethical studies tend to stand as singularities: very few studies bear resemblance to them, so there is no context; further, the researcher-subject relationship, due to its oppressive and coercive nature, is difficult to reconstruct. Further, studies like Milgram’s social psychology experiment are difficult to interpret given certain presuppositions that must be addressed: nature or nurture? What is the structure of explanation?

Sinistre*

The antinomies of the foundations

There is a distinct contradiction, and yet agreement, between the following two propositions:

P1. Mathematics cannot be shown to be complete.
P2. We cannot but conceive of mathematics, properly construed, as ideally composed of a set of axioms: a common simple system to which any system of mathematics can be reduced, and which shows a common genus to all of mathematics.
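P1, I take it, gestures at Gödel’s first incompleteness theorem. One standard gloss (my paraphrase, not Gödel’s wording): for any consistent, recursively axiomatisable theory T containing enough arithmetic, there is a sentence G_T such that

\[
T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T ,
\]

so T cannot decide G_T, and no such theory captures all mathematical truth. P2 is the regulative demand that we nonetheless conceive of mathematics as if such a complete common system were there to be found.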

This view, I maintain, is a Kantian view of mathematics. Kant’s constraint upon the proper conduct of science is that there ultimately originates a primary concept; but whether this concept is knowable, discoverable, or even actual is not relevant, nor should we be too concerned if we never find it.

For science to be proper, Kant says, it must fit an ideal of knowledge, but such an ideal is projected (this entails the ideality of natural kinds) and not real. Such an ideal also seems to suggest that we use a bit of ellipsis in our explanations and descriptions of science. A Kantian view of science would also set as a desideratum a formalisability/mathematicisation constraint on anything that is to count as proper science at all.

The ideal is a projection, an “as if it were real” constraint (that is the ellipsis of which I speak). Because it is a projection, our kinds and entities and laws within the scientific framework not only can be subject to change, but are desirably changeable, for scientific theories can always change and are not rigidly set.

Rigidity is still present in the Kantian conception of science, however, in the desideratum of the constructability of the formal languages in which we describe our phenomena. Consider the difference between ‘water’ (H2O) and water (that stuff we drink). Most, if not all, of the water we come across is not ‘water’; perhaps in some ways ‘water’ does not exist. However, water necessarily presupposes ‘water’, in virtue of its ideality. For what makes water1 the same as water2 other than H2O? Nothing.

H2O is criterial of water, but in a way its pure form is never to be found in water; only ‘water’, which projects onto all things called water, makes sense of our empirical concept in such a way as to be science. But, because ‘water’ is a priori regulatively ideal, it is also subject to change. The contradiction, then, is this: how is water necessarily H2O, yet only indexical to our scientific understanding?

The answer to this lies in the conception of necessity. Necessity, here, is defined as a criterial relation. Therefore, to say that “2 is a number” is necessarily true is to state a criterion. Necessity is criterial. But then, is not necessity similar to possibility? For a criterion presupposes conditions, and conditions are construed in the Kantian system as possibility. It would seem, then, that necessity can only take place as a concept where possibility is first defined, such that, in a sense, necessity is only possible if possibility allows, and this is necessarily so.

Destre (and Michael)