What is truth? How do people reach conclusions and evaluate facts? What counts as knowledge, and how do we know?
Bear with me before you give up on this post, which I realize might seem to be veering into the kind of heady esotericism humanists are sometimes criticized for.
For Edouard Machery, director of the Center for Philosophy of Science at the University of Pittsburgh, these questions about how people understand what it means to know something, and how people make knowledge, come down to very real-world issues, including the replication crisis that has caused hand-wringing among scientists for the past several years. Scientists generally acknowledge that the causes of the so-called crisis lie in entrenched publishing incentives, but they disagree about how to correct it. Machery spoke to the University of Pittsburgh’s Information Ecosystems Sawyer Seminar on Friday, Feb. 21, having presented a public talk entitled “Why are Good Data so Hard to Get? Lessons from the Replication Crisis” the previous day.
For his part, Machery was one of dozens of researchers who co-authored a Comment piece in Nature Human Behaviour in January of 2018 calling for a change to the threshold for “statistical significance,” the point below which a study’s results are considered unlikely to have arisen by chance alone. The conventional threshold is P < 0.05, but the article, “Redefine Statistical Significance,” argued it should be lowered to P < 0.005. This change, the authors argue, “would immediately improve the reproducibility of scientific research in many fields.”
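To make the stakes of that threshold concrete, here is a minimal simulation sketch (my own illustration, not drawn from the paper): when the null hypothesis is actually true, a P < 0.05 cutoff will declare a false positive roughly 5% of the time, while P < 0.005 drops that rate to roughly 0.5%.

```python
# Toy simulation: run many "studies" in which the true effect is zero
# and count how often each significance threshold cries "effect!"
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_group = 10_000, 30

false_pos_05 = 0
false_pos_005 = 0
for _ in range(n_studies):
    # Both groups are drawn from the same distribution,
    # so any apparent "effect" is pure noise.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    p = stats.ttest_ind(a, b).pvalue
    false_pos_05 += p < 0.05
    false_pos_005 += p < 0.005

print(f"P < 0.05  : {false_pos_05 / n_studies:.3%} false positives")
print(f"P < 0.005 : {false_pos_005 / n_studies:.3%} false positives")
# Expect roughly 5% and 0.5%, respectively.
```

Of course, a stricter threshold also makes true effects harder to detect without larger samples, which is part of why the proposal drew pushback (more on that below).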
The replication crisis has real-world implications: this is not a case of cloistered academics splitting hairs about semantics while puffing on their pipes and sipping La Croix. The crisis touches not only studies like the one proposing that humans have a finite amount of willpower, resulting in a phenomenon called “ego depletion,” but also studies about the effectiveness of cancer treatments.
I have to admit that the replication crisis has given me pause, not because I’m a scientist, but because, as a frequent instructor of first-year college writing classes, I’m tasked with teaching students how to search library databases and find relevant sources for their research papers. Often, when we teach the difference between popular and scholarly sources, we discuss them in terms of trustworthiness. If a scholarly source is written by a trained researcher, peer-reviewed, and published in a journal with rigorous standards, it should be a source we can trust. But the replication crisis has taught us to be more skeptical of our sources and more careful about what we call a “fact.”
Which brings me back to the questions that opened this post. The replication crisis reminds us all that good data are hard to get. In an attempt to get some good data on what constitutes knowledge across cultures, Machery and dozens of colleagues across the globe have worked to amass it. For example, some of Machery’s recent work concerns what he and his colleagues call “core folk epistemology.” In a 2017 paper entitled “Gettier Across Cultures,” published in the journal Noûs, they argue that people living in Brazil, India, Japan, and the United States arrive at similar intuitive judgments about what counts as knowledge. Another 2017 paper, “The Gettier Intuition from South America to Asia,” published in the Journal of Indian Council of Philosophical Research, performs a similar test, demonstrating an attempt by Machery and his colleagues to scale up the work. (This particular work reminded me of a recent episode of Freakonomics Radio, which looked at the difficulty of scaling up small studies into policy.)
The nature of knowledge and what constitutes a fact, then, is less heady than we might first think. Proposed solutions to the replication crisis, including the adjustment to the threshold of statistical significance that Machery and others propose, have proliferated. The 2018 Nature Human Behaviour piece elicited pushback, with other researchers proposing alternatives. Machery defends the proposal in a forthcoming article in the Review of Philosophy and Psychology. Solutions aside, several have argued (including here and here) that the crisis is actually strengthening the sciences because it is forcing researchers to reevaluate their methods and the culture of publication within their fields. A handful of journals have cropped up that publish only studies that fail to replicate or that report negative results, and there are plenty of calls for all science journals to rethink their policies on publishing negative results.
For all that, the replication crisis is by no means solved, and even when it is (if there is ever a moment of “solution”), we should all remember to be skeptical. Maybe the reason the questions I began with seem so heady and esoteric is that the answers continually elude us.
Briana Wipf is a second-year PhD student in the English department at the University of Pittsburgh, where she studies medieval literature and the digital humanities. As she begins her comprehensive reading and exams, she expects to question truth and knowledge constantly. Follow her on Twitter @briana_wipf.