I’ve found a very strange article on the Scientific American blog Literally Psyched. Maria Konnikova, a doctoral candidate in psychology at Columbia University, claims that “Humanities aren’t a science. Stop treating them like one.”
Her point appears to be that the humanities and social sciences (“social sciences” aren’t mentioned in the title)—including history, literature, political science, linguistics, and psychology—shouldn’t be treated like “hard sciences.” What does she mean by that? And that’s the problem, for her criterion for “hard science” appears to be the use of mathematics and statistics:
I don’t mean to pick on this single paper. It’s simply a timely illustration of a far deeper trend, a tendency that is strong in almost all humanities and social sciences, from literature to psychology, history to political science. Every softer discipline these days seems to feel inadequate unless it becomes harder, more quantifiable, more scientific, more precise. That, it seems, would confer some sort of missing legitimacy in our computerized, digitized, number-happy world. But does it really? Or is it actually undermining the very heart of each discipline that falls into the trap of data, numbers, statistics, and charts? Because here’s the truth: most of these disciplines aren’t quantifiable, scientific, or precise. They are messy and complicated. And when you try to straighten out the tangle, you may find that you lose far more than you gain.
. . .over and over, with alarming frequency, researchers and scholars have felt the need to take clear-cut, scientific-seeming approaches to disciplines that have, until recent memory, been far from any notions of precise quantifiability. And the trend is an alarming one.
. . . It’s one of the things that irked me about political science and that irks me about psychology—the reliance, insistence, even, on increasingly fancy statistics and data sets to prove any given point, whether it lends itself to that kind of proof or not.
Konnikova gives a few examples of what she considers misguided attempts to apply mathematical models or statistics to “humanities,” including assessment of the factuality of stories like Beowulf or The Iliad using likelihood analysis, and forensic linguistics, the application of linguistic methods to legal jargon and practice.
Now I don’t know anything about either of these fields, and perhaps Konnikova is right here. But where she goes wrong is in concluding two things: that all “hard scientific” study must involve math or statistics, and that there are other “nonscientific” ways of knowing involved in what she calls the “humanities.”
First of all, not all science involves math or statistics. Granted, much of it does, but much is simply observational, especially in biology. One example is the observation of mimicry, like the fly-mimicking beetle I posted the other day. And there is not a single equation in On the Origin of Species, the greatest and most influential biology book of all time. Finding a transitional fossil (like Tiktaalik) in the right sediments is science, for it gives substantial evidence for what early tetrapods were like (yes, I know some math is involved in dating strata).
The point is that the methods of science do not absolutely require statistics or mathematics. Those methods rely on replicated observation, eliminating alternative hypotheses, generating new and testable hypotheses, and constant doubt. That’s not so different from the methods used by archaeologists, historians, linguists, psychologists, and, yes, Biblical scholars. Is Konnikova unaware of the gazillions of psychology experiments that use statistics, including the recent flap about whether our “decisions” are made before we’re conscious of them?
Second, Konnikova fails to make the case that the “humanities” can tell us something real about the world without using the methods of science outlined above. Instead, she just throws sand in the reader’s eye. For example:
It’s one of the things that irked me about political science and that irks me about psychology—the reliance, insistence, even, on increasingly fancy statistics and data sets to prove any given point, whether it lends itself to that kind of proof or not. I’m not alone in thinking that such a blanket approach ruins the basic nature of the inquiry. Just consider this review of Jerome Kagan’s new book, Psychology’s Ghosts, by the social psychologist Carol Tavris. “Many researchers fail to consider that their measurements of brains, behavior and self-reported experience are profoundly influenced by their subjects’ culture, class and experience, as well as by the situation in which the research is conducted,” Tavris writes. “This is not a new concern, but it takes on a special urgency in this era of high-tech inspired biological reductionism.” The tools of hard science have a part to play, but they are far from the whole story. Forget the qualitative, unquantifiable and irreducible elements, and you are left with so much junk.
Well, how does one go about finding out whether self-reported experience is influenced by culture, class, experience, and a particular research situation? You do a scientific test! And that often involves statistics. One researcher, for example (I can’t recall the paper), did an analysis of genetic studies of IQ, and discovered that the political leanings, upbringing, and education of the researchers were strongly correlated with whether or not those researchers found significant genetic differences in IQ between races. (The differences were in the expected direction.) In other words, the methods of hard science uncovered a possible observer bias.
And I disagree profoundly with this statement:
Sometimes, there is no easy approach to studying the intricate vagaries that are the human mind and human behavior. Sometimes, we have to be okay with qualitative questions and approaches that, while reliable and valid and experimentally sound, do not lend themselves to an easy linear narrative—or a narrative that has a base in hard science or concrete math and statistics. Psychology is not a natural science. It’s a social science. And it shouldn’t try to be what it’s not.
I’m not sure what Konnikova means by “qualitative approaches”—I hope it’s not just storytelling—for she gives no examples. But how do you find out if a “qualitative approach” is valid and experimentally sound without a) proper controls and b) replication? That is, without the methods of science. (The “linear narrative” thing is just postmodern obfuscation.) And of course you can hardly open an experimental psychology journal without finding statistics!
Here’s what she says about history:
To be of equal use, each quantitative analysis must rely on comparable data – but historical records are spotty and available proxies differ from event to event, issues that don’t plague something like a plane crash. What’s more, each conclusion, each analysis, each input and output must be justified and qualified (same root as qualitative; coincidence?) by a historian who knows—really knows—what he’s doing. But can’t you just see the models taking on a life of their own, being used to make political statements and flashy headlines? It’s happened before. Time and time again. And what does history do, according to the cliodynamists, if not repeat itself?
To hear Konnikova tell it, one can’t really learn anything about history, because it’s all mushy and fuzzy. I agree that history doesn’t often use statistics (although read Steve Pinker’s The Better Angels of Our Nature to see how he deploys fancy statistics to argue that societies are getting better); but historians can still make hypotheses and do sleuthing, as well as interview people and cross-check their statements. That’s how we know that the Holocaust really happened despite the claims of denialists.
Finally, she excludes nearly everything in humanities and social science as being outside the domain of “hard science”:
It’s tempting to want things to be nice and neat. To rely on an important-seeming analysis instead of drowning in the quagmire of nuance and incomplete information. To think black and white instead of grey. But at the end, no matter how meticulous you’ve been, history is not a hard science. Nor is literature. Or political science. Or ethics. Or linguistics. Or psychology. Or any other number of disciplines. They don’t care about your highly involved quantitative analysis. They behave by their own rules. And you know what? Whether you agree with me or not, what you think—and what I think—matters not a jot to them.
Well, if she defines “hard science” as “science that involves mathematics and statistics,” then her claim is true by definition—except that, as I’ve noted above, some hard science doesn’t use math or stats, and some social sciences do. But what she fails to recognize is that for many of these disciplines (I exclude literature), one finds out what is true by using the same methods of rational inquiry that undergird the “hard” sciences. Archaeologists and historians try to cross-check facts and authenticate documents and dates. Linguists do indeed use quantitative analysis, and reconstruct the history of languages in ways similar to those used by biologists to reconstruct the history of life.
And as for ethics, well, yes, you can’t determine what is right by the methods of science, but science can certainly inform moral decisions and teach us about morality. One example is Pyysiäinen and Hauser’s moral-situation study, which showed (statistically) that atheists and believers resolve novel moral dilemmas in the same way. If your stand on abortion or animal rights depends on whether fetuses or animals seem to feel pain, those questions are also subject to scientific study.
In the end, I’m not quite sure why Konnikova goes off on the incursion of “hard-science” methods into social science and the humanities. The underlying principles of finding truth are the same in all of these areas, regardless of whether one uses math or not.
It may be uncharitable of me, but I suspect Konnikova’s trying to tout the humanities and social sciences as “other ways of knowing.” But that won’t work. As the social sciences and humanities mature, they come to realize that their criteria for finding truth are the same as those used in biology and physics. Indeed, they even become more mathematical. It may be harder to suss out what’s true about humans than about, say, ants, but that reflects our more complex culture, not different ways of knowing about different organisms. (This does not apply, of course, to things like literature, where the notion of “truth” is itself slippery.)
There is only one way of finding out what is true, and that doesn’t involve revelation or making up stories.