‘Scientific evidence’ part 1 – old versus new ‘science’

“Science is built up of facts as a house is with stones. But a collection of facts is no more a science than a heap of stones is a house.” (Henri Poincaré, 1904)

In 1963, B.K. Forscher wrote a letter titled “Chaos in the Brickyard” to the editor of the esteemed journal Science. The letter expressed his concerns about the proliferation of meaningless studies that failed to adhere to the sound theory-building ideals of the scientific method. The situation has not improved. Indeed, with the advent of the so-called ‘evidence-based model’ for judging the value of ‘facts’, it has worsened dramatically.

Old approach to ‘science’

Before the term ‘scientist’ came into being, ‘natural philosophy’ described the study of nature: attempts to understand and explain natural phenomena through observation, theory formation, and evaluation of each theory’s predictions against nature, using inductive reasoning to decide whether the predictions, and the parent theory, were supported. Popper (1980) advanced this approach with his ‘falsification’ ideal, allowing deductive reasoning to decide whether a theory had been falsified in light of the data, or had survived to be possibly disproven another day (see the confirmation bias post). Both falsifications and survivals were published and presented to other natural philosophers, who attempted to replicate the survivals in particular. If a positive finding was repeated many times and never once falsified, the theory was accepted as relatively trustworthy ‘evidence’. Theories that survived many years of such practice were elevated to the status of ‘natural laws’. These ‘natural laws’ were used to drive further theory formation, and to act as ‘validity checks’ on findings from new investigations, i.e. if a new ‘fact’ violated an existing and undisputed law, its value was questionable. Furthermore, if a new finding could not be replicated, it was deemed a fluke and so of little value to the body of trusted ‘evidence’.

New approach to ‘science’

Forscher (1963) recognised the deterioration of the rigorous ‘old-science’ approach many decades ago, highlighting that modern ‘scientists’ were inclined simply to churn out investigations that were not based on careful theory building and were not evaluated against accepted, trustworthy ‘evidence’. Faith in ‘science’ was apparently restored by the development of the ‘evidence-based practice model’, which has since spawned organisations like the Cochrane Collaboration and journals built on its principles (e.g. Evidence-Based Medicine). Rather than judging the value of investigations against undisputed laws, as in natural philosophy, the evidence-based model assesses trustworthiness and value based on the rigour of the research design and experimental approach, trusting that statistical principles and the elimination of confounding factors will ensure the validity of new ‘facts’ (more about the problems with this in part 2). Old-fashioned replication is discouraged by journal editors who refuse to publish work that is not ‘novel or original’. Negative / falsifying findings are also unlikely to be seen by others, as they are deemed of little interest compared with ‘positive’ findings which, without replication, could simply be flukes. Indeed, a recent, high-profile investigation of the replicability of published psychology studies found that of 100 previously published ‘positive’ findings, only 36% could be reproduced (Nosek et al., 2015). The failure to judge the value of new ‘evidence’ against established ‘laws’ leads precisely to the problems raised by Forscher (1963) and by Poincaré over a century ago, i.e. a collection of useless ‘facts’ without the context that alone makes for useful ‘evidence’.
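
To make the ‘fluke’ point concrete, here is a minimal sketch (not from the original post, and using purely illustrative assumptions for the proportion of true hypotheses, statistical power, and the conventional 0.05 significance threshold) of how a literature built on one-off ‘positive’ findings can contain a large share of flukes:

```python
# A minimal sketch of why unreplicated 'positive' findings can simply be
# flukes: simulate many independent studies in which only a fraction of the
# tested hypotheses are actually true, and count how many of the resulting
# 'significant' results come from false hypotheses.
import random

random.seed(1)

N_STUDIES = 10_000       # hypothetical number of independent studies
TRUE_FRACTION = 0.10     # assume only 10% of tested hypotheses are really true
ALPHA = 0.05             # conventional false-positive (significance) rate
POWER = 0.50             # assumed statistical power of each study

true_positives = false_positives = 0
for _ in range(N_STUDIES):
    hypothesis_is_true = random.random() < TRUE_FRACTION
    if hypothesis_is_true:
        # a real effect is detected with probability equal to the power
        if random.random() < POWER:
            true_positives += 1
    else:
        # a null effect still yields a 'positive' with probability alpha
        if random.random() < ALPHA:
            false_positives += 1

positives = true_positives + false_positives
print(f"'Positive' findings: {positives}")
print(f"Share that are flukes: {false_positives / positives:.0%}")
# With these assumed numbers, roughly half of the 'positive' findings are
# false positives -- and flukes, by definition, tend not to replicate.
```

Under these assumed numbers, nearly half of the ‘positive’ findings are flukes; only replication, or a check against established ‘laws’, can separate them from real effects.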


Criticisms of the evidence-based model

Many areas of modern ‘science’ are characterised by a lack of consensus and equivocal findings, with the latest ‘understanding’ often changing on a weekly basis as each new piece of research appears. The causes and cures of running injuries, and dietary recommendations, are two prime examples. The lack of context in the evidence-based model, and its failure to dismiss shiny new ‘facts’ and fads that cannot be replicated and / or make no sense against undisputed laws, explain this. It is for these (and other) reasons that evidence-based practice has been widely criticised, with well-established arguments that include demonstrations of nonsensical findings (Britton et al., 1998; Straus and McAlister, 2000; Mullen and Streiner, 2004). The nonsensical findings and the lack of consensus characteristic of modern ‘evidence’ are highlighted beautifully by the old joke about a couple who visit their rabbi for marital advice:

The husband delivers a long list of complaints about his wife, to which the rabbi replies, “You’re right, you’re right.”

The wife then gives her long list of complaints about her husband, to which the rabbi replies, “You’re right, you’re right.”

About to leave, the wife yells at the rabbi, “How can you tell us both that we’re right? One of us must be wrong!”, to which the rabbi replies, “You’re right, you’re right.”

How to avoid being buried alive in the pile of equivocal ‘facts’

The majority of the most significant discoveries in the history of science (the ‘laws’) were made long before evidence-based practice reared its ugly head. They were made using the old methods of natural philosophy, based on solid theory and guided by natural laws. We should heed the words of Poincaré and Forscher and choose our stones / bricks carefully from the pile / brickyard. Only then can we build something that looks and functions like a house. It is from such carefully selected bricks that the foundations of BTR and the walls of its wisdom are built.

References

Poincaré, H. (1904). Science and Hypothesis. Dover: Walter Scott Publishing Co.

Forscher, B.K. (1963). Chaos in the brickyard. Science, 142, 339.

Popper, K. (1980). The Logic of Scientific Discovery (10th ed.). London: Hutchinson.

Nosek, B.A. et al. (2015). Estimating the reproducibility of psychological science. Science, 349 (6251), aac4716.

Britton, B.J., Evan, J.G. and Potter, J.M. (1998). Does the fly matter? The CRACKPOT study in evidence based trout fishing. British Medical Journal, 317, 1678-1680.

Straus, S.E. and McAlister, F.A. (2000). Evidence-based medicine: A commentary on common criticisms. Canadian Medical Association Journal, 163, 837-841.

Mullen, E.J. and Streiner, D.L. (2004). Evidence for and against evidence-based practice. Brief Treatment and Crisis Intervention, 4, 111-121.