The Atlantic has a review of Siddhartha Mukherjee’s new book on genetics; the review is by Nathaniel Comfort, a professor at the Institute of the History of Medicine at Johns Hopkins, and carries the provocative title “Genes are overrated.”
I haven’t yet read Mukherjee’s book, so I won’t comment on its content except to say that the reviews have been generally positive but mixed, as Comfort’s is. I want instead to concentrate briefly on Comfort’s attitude towards science and genes.
One of the criticisms Comfort levels at Mukherjee is that he holds a “whiggish” view of genetics; that is, he sees the history of genetics as one of progressive understanding. To Comfort, that’s a misleading way of describing science, which, in his view, doesn’t progress toward a deeper understanding of reality, building an ever-taller edifice of knowledge, but acts simply as a bulldozer, plowing under theories that are shown to be wrong. Some quotes (my emphasis):
The antidote to such Whig history is a Darwinian approach. Darwin’s great insight was that while species do change, they do not progress toward a predetermined goal: Organisms adapt to local conditions, using the tools available at the time. So too with science. What counts as an interesting or soluble scientific problem varies with time and place; today’s truth is tomorrow’s null hypothesis—and next year’s error.
. . . The point is not that this [a complex view of how genes work; see below] is the correct way to understand the genome. The point is that science is not a march toward truth. Rather, as the author John McPhee wrote in 1967, “science erases what was previously true.” Every generation of scientists mulches under yesterday’s facts to fertilize those of tomorrow.
“There is grandeur in this view of life,” insisted Darwin, despite its allowing no purpose, no goal, no chance of perfection. There is grandeur in a Darwinian view of science, too. The gene is not a Platonic ideal. It is a human idea, ever changing and always rooted in time and place. To echo Darwin himself, while this planet has gone cycling on according to the laws laid down by Copernicus, Kepler, and Newton, endless interpretations of heredity have been, and are being, evolved.
Comfort is correct that science never knows when it’s reached the absolute, never-to-be-changed truth: no bell goes off in our heads saying “ding ding ding: you’re there, and need go no further.” And a truly Whiggish view of history, one implying an inevitable and unswerving path from error to truth, without any dead ends, mistakes, or roadblocks, is also a distortion, one that Matthew Cobb likewise criticized in his review of Mukherjee’s book in Nature.
But this doesn’t mean Comfort is right in arguing that everything we think we know will inevitably be demolished by future research. Some things are simply so unlikely to be falsified that we can see them not merely as provisional truths, but as nearly absolute ones. A normal water molecule, for instance, has two hydrogen atoms and one oxygen atom. The Earth is about 4.6 billion years old, and life evolved on it, with all tetrapods descending from ancestral fish. Bodies attract each other with a force proportional to the product of their masses and inversely proportional to the square of the distance between them. DNA is the purveyor of heredity, and in most organisms is a double helix. AIDS is caused by infection with a virus that attacks our immune system. You can all think of a gazillion more such “truths”: assertions that you’d bet your house on.
Yes, science refines our understanding, and some theories, like Newton’s laws, are found to be special cases of deeper theories, like quantum mechanics. But to say that science is not a march toward truth, but a simple erasure of the false, is not only simplistic, but even a bit tautological: if we keep eliminating what doesn’t stand up, and keep adumbrating new theories, we will usually arrive at a more correct understanding of nature. For example, smallpox was once thought to be due to the wrath of gods. That theory was plowed under by the view that the disease spread from person to person, which led in turn to the notion that one could prevent it via inoculation. That, in turn, led to the recognition that the disease was caused by a virus, and then to the preparation of effective vaccines using live, attenuated viruses. The result: we understand fully how to get rid of the disease, and it’s been eliminated from our planet. In what sense is this not due to progressive homing in on the truth? We can use the laws of physics to land probes on comets. In what sense is that not due to a better understanding of how bodies move and interact, and not just a dispelling of what is false?
I see this kind of postmodernism infecting a lot of scientific writing, and it’s misguided; no, it’s simply wrong.
Comfort also errs, I think, in claiming (as Evelyn Fox Keller did in her 2000 book The Century of the Gene) that the gene is now pretty much a useless concept, both in definition and in action. (I critically reviewed that book in Nature; pdf available on request.) Comfort:
This handful of errors, drawn from a sackful of options, illustrates a larger point. The Whig interpretation of genetics is not merely ahistorical, it’s anti-scientific. If Copernicus displaced the Earth from the center of the universe and Darwin displaced humanity from the pinnacle of the organic world, a Whig history of the gene puts a kind of god back into our explanation of nature. It turns the gene into an eternal, essential thing awaiting elucidation by humans, instead of a living idea with ancestors, a development and maturation—and perhaps ultimately a death.
. . . Ironically, the more we study the genome, the more “the gene” recedes. A genome was initially defined as an organism’s complete set of genes. When I was in college, in the 1980s, humans had 100,000; today, only about 20,000 protein-coding genes are recognized. Those that remain are modular, repurposed, mixed and matched. They overlap and interleave. Some can be read forward or backward. The number of diseases understood to be caused by a single gene is shrinking; most genes’ effects on any given disease are small. Only about 1 percent of our genome encodes proteins. The rest is DNA dark matter. It is still incompletely understood, but some of it involves regulation of the genome itself. Some scientists who study non-protein-coding DNA are even moving away from the gene as a physical thing. They think of it as a “higher-order concept” or a “framework” that shifts with the needs of the cell. The old genome was a linear set of instructions, interspersed with junk; the new genome is a dynamic, three-dimensional body—as the geneticist Barbara McClintock called it, presciently, in 1983, a “sensitive organ of the cell.”
Yes, gene action is complicated, but the notion of a “gene” is not only not near death, but still extremely useful. Even if many diseases are caused by many different genes, they’re still genes, which I’ll define as “a segment of DNA that codes for a protein or an RNA molecule that regulates protein-coding genes.” In fact, there are many diseases and conditions—Landsteiner blood type, Rh type, Tay-Sachs disease, Huntington’s disease, sickle-cell anemia, color-blindness, and so on—that are caused by mutations in single genes, and can be effectively understood (and used in genetic counseling) by considering them as “single gene traits.” These are said to number over 10,000.
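The genetic-counseling use of single-gene traits mentioned above is just Mendelian bookkeeping: each parent passes one of two alleles with equal probability, and the offspring genotype frequencies fall out of the Punnett square. Here is a minimal sketch of that calculation (the function name and genotype notation are mine, purely for illustration):

```python
from itertools import product
from collections import Counter
from fractions import Fraction

def offspring_probabilities(parent1, parent2):
    """Enumerate the Punnett square for a single-locus trait.

    Each parent is a two-character genotype string like 'Aa';
    each allele is transmitted with probability 1/2.
    """
    counts = Counter(''.join(sorted(pair))
                     for pair in product(parent1, parent2))
    total = sum(counts.values())
    return {g: Fraction(n, total) for g, n in counts.items()}

# Two unaffected carriers of a recessive disease allele 'a'
# (the classic Tay-Sachs counseling scenario):
probs = offspring_probabilities('Aa', 'Aa')
for genotype, p in sorted(probs.items()):
    print(genotype, p)
# probs['aa'] is the chance of an affected child: Fraction(1, 4)
```

This is exactly why the single-gene framing remains useful in the clinic: for a trait governed by one locus, a carrier-by-carrier cross gives a one-in-four risk of an affected child, regardless of how complicated gene regulation is elsewhere in the genome.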
I’ve put at the bottom a discussion from Matthew’s book, Life’s Greatest Secret, about the notion of “gene” and how it was questioned and then widely accepted.
And why the modern concept of a gene turns it into a “kind of god” baffles me. The notion of genes, and of DNA as the molecule that carries them, has been immensely useful, and “true” in the scientific sense. Does that make them into “gods”? Only to a postmodernist who resents the hegemony of scientific truth.
As for genes being a “higher order concept”, a “shifting framework” or a “three-dimensional body,” well, that’s not something that I, as a geneticist, am familiar with. Perhaps those concepts are adumbrated in the “science studies” departments—the same places where truths are seen as relative and privileged.
Let me add that most of Comfort’s review is okay, but then at the end he veers off into pomo la-la land. The usefulness of the idea of “genes” will survive: it survived Keller’s attack and will survive Comfort’s. But what I see as damaging is the notion that science doesn’t progress towards some kind of truth, or greater understanding of reality. It mystifies me how anyone familiar with the history of science can say that.
And if genes are overrated, it’s news to me. They are the bearers of heredity, the switches of development, and the coders of bodies. Without the notion of genes, and of the genetic code described so well in Matthew’s latest book, we’d be back in the days before 1900.
APPENDIX (!): Excerpts from Life’s Greatest Secret:
For much of the 1950s, scientists had felt uncomfortable about the word ‘gene’. In 1952, the Glasgow-based Italian geneticist Guido Pontecorvo highlighted the existence of four different definitions of the word that were regularly employed by scientists and which were sometimes mutually contradictory. A gene could refer to a self-replicating part of a chromosome, the smallest part of a chromosome that can show a mutation, the unit of physiological activity or, finally, the earliest definition of a gene – the unit of hereditary transmission. Pontecorvo questioned whether the gene could any longer be seen as a delimited part of a chromosome, and suggested instead that it was better seen as a process and that the word gene should therefore be used solely to describe the unit of physiological action.
. . . Although Pontecorvo’s suggestion was not taken up, scientists recognised the problem. The debate over words and concepts continued at the Johns Hopkins University symposium on ‘The Chemical Basis of Heredity’, which was held in June 1956. By this time it was generally accepted as a working hypothesis that all genes in all organisms were made of DNA and that the Watson–Crick double helix structure was also correct. Joshua Lederberg, a stickler for terminology, declared audaciously that ‘“gene” is no longer a useful term in exact discourse’. He would no doubt be surprised to learn that it is still being used, more than half a century later.
. . . The multiple roles of nucleic acids have expanded far beyond the initial definition of a gene as the fundamental unit of inheritance and show the inadequacy of Beadle and Tatum’s 1941 suggestion that each gene encodes an enzyme. As a consequence, some philosophers and scientists have suggested that we need a new definition of ‘gene’, and have come up with various complex alternatives. Most biologists have ignored these suggestions, just as they passed over the argument by Pontecorvo and Lederberg in the 1950s that the term ‘gene’ was obsolete.
In 2006, a group of scientists came up with a cumbersome definition of ‘gene’ that sought to cover most of the meanings: ‘A locatable region of genomic sequence, corresponding to a unit of inheritance, which is associated with regulatory regions, transcribed regions and/or other functional sequence regions.’ In reality, definitions such as ‘a stretch of DNA that is transcribed into RNA’, or ‘a DNA segment that contributes to phenotype/function’, seem to work in most circumstances. There are exceptions, but biologists are used to exceptions, which are found in every area of the study of life. The chaotic varieties of elements in our genome resist simple definitions because they have evolved over billions of years and have been continually sieved by natural selection. This explains why nucleic acids and the cellular systems that are required for them to function do not have the same strictly definable nature as the fundamental units of physics or chemistry.