"Philosophy of Science: A Very Short Introduction", by Samir Okasha
Okasha gives a good overview of what science is and how it works.
Okasha explains the difference between deductive and inductive reasoning. In a deductive argument, the conclusion follows necessarily from the premises: if the premises are true, the conclusion must be true. An inductive argument goes beyond its premises, generalizing from observed cases to unobserved ones, so its conclusion can be false even when all its premises are true. "Every emerald examined so far has been green; therefore all emeralds are green" is inductive: no amount of observed green emeralds logically guarantees the next one.
At the root of Hume’s problem is the fact that the premisses of an inductive inference do not guarantee the truth of its conclusion.
Philosophers have responded to Hume’s problem in dozens of different ways; it remains an active area of research today.
For inductive reasoning to help us make predictions about the future, we need a further assumption: the uniformity of nature. We have to take as given that, in the relevant respects, the future will resemble the past.
This assumption may seem obvious, but as philosophers we want to question it. Why assume that future repetitions of the experiment will yield the same result? How do we know this is true?
A good model is careful about which aspects of nature it assumes to be constant. Assuming that business cycles mechanically recur every seven or so years, for example, is a fairly crude version of that assumption.
Karl Popper thought that scientists should reason only deductively. We all know Popper, and we cite him whenever we say that theories have to be falsifiable. But philosophy of science didn’t stop with Popper. In particular, his account of scientific progress doesn’t capture what actually happens:
In general, scientists do not just abandon their theories whenever they conflict with the observational data. […] Obviously if a theory persistently conflicts with more and more data, and no plausible ways of explaining away the conflict are found, it will eventually have to be rejected. But little progress would be made if scientists simply abandoned their theories at the first sign of trouble.
Most philosophers think it’s obvious that science relies heavily on inductive reasoning, indeed so obvious that it hardly needs arguing for. But, remarkably, this was denied by the philosopher Karl Popper, […]. Popper claimed that scientists only need to use deductive inferences.
The weakness of Popper’s argument is obvious. For scientists are not only interested in showing that certain theories are false.
In contrast, Thomas Kuhn describes science in terms of paradigms and paradigm shifts:
In short, a paradigm is an entire scientific outlook – a constellation of shared assumptions, beliefs, and values that unite a scientific community and allow normal science to take place.
But over time anomalies are discovered – phenomena that simply cannot be reconciled with the theoretical assumptions of the paradigm, however hard normal scientists try. When anomalies are few in number they tend to just get ignored. But as more and more anomalies accumulate, a burgeoning sense of crisis envelops the scientific community. Confidence in the existing paradigm breaks down, and the process of normal science temporarily grinds to a halt.
In Kuhn’s words, ‘each paradigm will be shown to satisfy the criteria that it dictates for itself and to fall short of a few of those dictated by its opponent’.
Popper’s account is normative, asking “How should science be done?”, while Kuhn’s is descriptive, asking “How is science actually done?”
In rebutting the charge that he had portrayed paradigm shifts as non-rational, Kuhn made the famous claim that there is ‘no algorithm’ for theory choice in science. […] Kuhn’s insistence that there is no algorithm for theory choice in science is almost certainly correct.
The moral of his story is not that paradigm shifts are irrational, but rather that a more relaxed, non-algorithmic concept of rationality is required to make sense of them.
Kuhn’s idea of the “theory-ladenness” of data is interesting. He argues that it makes comparisons between rival theories difficult or even impossible. That is probably exaggerated, but it has real force in economics: many of the things we measure there, like GDP, are abstract concepts, and theory guides how we measure them.