Brooks goes for the jugular

I’m not a huge fan of New York Times columnist David Brooks, but in college he honed his writing skills producing satire, and it shows in his latest column (click on screenshot below). It’s about the one-day government shutdown—not the finest hour for either party, but certainly not a shining moment for us Democrats:

Brooks’s analysis of the Democrats’ “five-part plan” to screw up the shutdown negotiations is at once sad, true, and funny. I’ll leave you to enjoy it, but here’s a snippet:

Second, the Democrats focused all their energies on those all-important Michel Foucault swing voters. When Democrats get all excited, they go into a hypnotic trance and think the entire country is the Middlebury College faculty lounge. The American story is a story of systemic oppression. Since the cultural discourse that privileges white hegemony is the world’s single most important problem, of course it’s worth shutting down the entire government to take a stand on DACA.

It’s not that people don’t like DACA. They do. It’s that they just don’t recognize themselves in a party that thinks it’s worth closing the government, destabilizing the economy and straining the military for it.

Third, Democrats devised a brilliant Tao Te Ching messaging strategy. The ancient Chinese master informs us, “Being and not being create each other. … Before and after follow each other.” In this way, he teaches the paradoxical infinity of ultimate truth.

The Democrats captured this same paradoxical profundity with their superb messaging over the weekend: We bravely shut down the government to save the Dreamers even though Donald Trump is responsible for shutting down the government.

The ancient Chinese master bows in respect.

Spot the leopard!

(The title could refer to the name of a cat, but it doesn’t.)

This is part of a series by famed wildlife photographer Art Wolfe, who has produced many photos of hard-to-see wildlife (I’m not gonna tell you where the animal is, as that would spoil future fun).

Can you spot the leopard (Panthera pardus)? This is not too hard, so I won’t give a reveal.  It was sent by reader Dorsa, who said it took her a while to find the beast.  This is, of course, what prey see—if their vision is nearly the same as ours.

Don’t look at the comments before you have a good search, as someone’s revealed the location.

“Sea change”

My whole life I’ve heard the phrase above (it’s sometimes given as “sea-change”), but never knew what it meant until I looked it up yesterday. It turns out that it simply means a big change. And, like so many other common phrases, it came from Shakespeare. Here’s the Oxford English Dictionary’s definition:

Since I just learned it, I thought maybe other people might not know of its meaning or origin, either.

That said, I don’t plan to use the phrase, as to my ears it sounds a bit pompous, and I’d rather say “big change”. I wonder if those who use it—I heard it somewhere the other day on the news—know what it means, or use it in the proper sense.

As long as I’m writing this, here, from Shakespeare Online, is a list of words invented by the man, along with the following note (go to the link to click on the individual words):

The English language owes a great debt to Shakespeare. He invented over 1700 of our common words by changing nouns into verbs, changing verbs into adjectives, connecting words never before used together, adding prefixes and suffixes, and devising words wholly original. Below is a list of a few of the words Shakespeare coined, hyperlinked to the play and scene from which it comes. When the word appears in multiple plays, the link will take you to the play in which it first appears. For a more in-depth look at Shakespeare’s coined words, please click here.

Like many others, I am baffled by Shakespeare’s immense eloquence and fertility of thought and language. They seem to have come out of nowhere, but since we know very little of the man, the mystery is even deeper. What a treat it would be to have dinner with him! I know of nobody who’s a better writer in English.

“History making” hijabi model steps down from L’Oreal campaign after her Twitter comments come to light

The other day HuffPo put up one of its usual hijabi-extolling posts, noting that model Amena Khan “made history” by being in a campaign for L’Oreal hair products—while wearing a hijab. I wasn’t going to post about it, as there’s not much new here beyond the usual “hijabi-is-a-hero” palaver, but developments yesterday changed that (see below). Click on the screenshot to go to the article:

As the article notes, “A blogger, model and co-founder of Ardere Cosmetics, Khan has called the new collaboration ‘game changing.’ She is the first woman who wears a hijab to be featured in a major mainstream hair ad.”

Well, you might wonder why L’Oreal would want to use a woman who covers her hair to advertise shampoo and conditioner. Khan explains it in the HuffPo video below:

Okay, fair enough. And, as Maajid Nawaz explains in this short video, although the decision to use a woman who covers her hair to advertise hair products seems weird, it’s based on financial calculations.

If L’Oreal wants to do this, fine. But what bothers me is the usual tactic of making a hijabi into some kind of hero. In this case, though, it’s a bit hypocritical. After all, why do Muslims wear the hijab? As I’ve discussed before, and as you can see on “Rules related to covering”—an Islamic website that mandates codes of dress—by and large the hijab is worn as a religiously mandated sign of modesty: to hide a woman’s hair. The premise is that the sight of hair will arouse uncontrollable lust in men, and then bad things will ensue. The Muslim rules, which are patriarchal, deem it the woman’s responsibility to avoid exciting men by looking attractive.

But it’s not just the hair that should be covered: women must avoid any adornment or beautification that calls attention to them:

Their face and hands must not have any kind of beautification (zinat) on them.

Well, Khan wears so much makeup—including lipstick, eye shadow, eyeliner, blush, nail polish (also forbidden), and other products I’m not aware of—that it looks as if it’s been laid on with a trowel. (See other photos of her on her Twitter account.) She also shapes her eyebrows, another forbidden enhancement. Have a look:

At the same time that she’s adhering to Muslim custom and covering her hair out of modesty, she’s doing all she can to call attention to her beauty: to her face and nails and body. Well, she’s a model, and that’s what models do. But isn’t it a bit hypocritical to wear a garment whose purpose is to avoid exciting lust, while doing the exact opposite with your face, hands, and feet? (Khan often wears sandals, a display of feet that is prohibited by the same dictates that prohibit showing hair.)

I’ve said all this before, and felt no need yesterday to call out this dichotomy again, but then it was discovered that Khan has a rather dubious history of posting anti-Israel messages on Twitter. These are not just criticisms of Israel occupying the West Bank or the like, but contentions that Israel has no right to exist—a sentiment that, I think, borders on anti-Semitism. Because of these, Khan pulled out of the campaign (it’s not clear to me whether she was actually fired). You can see reports on her background and withdrawal at the BBC as well as Israelly Cool.

What did Amena Khan say on Twitter? Well, she’s deleted her tweets, but some were captured by the Daily Wire:



I won’t get into who is the deliberate murderer of children or whether Israel is an “illegal state”, but let’s just agree these tweets are clearly “anti-Israel”, and pretty much state that Israel has no right to exist.

When these tweets were revealed, Khan “withdrew” from the campaign, offering a weird apology that said she didn’t really mean what she said about Israel:

L’Oreal, whether out of a dislike for Khan’s views or simple business acumen, was not reluctant to accept her “withdrawal.” From the BBC:

A spokesperson for L’Oreal Paris told Newsbeat: “We have recently been made aware of a series of tweets posted in 2014 by Amena Khan, who was featured in a UK advertising campaign.

“We appreciate that Amena has since apologised for the content of these tweets and the offence they have caused.

“L’Oreal Paris is committed to tolerance and respect towards all people. We agree with her decision to step down from the campaign.”

I have to admit that there’s a bit of Schadenfreude here: while HuffPo and L’Oreal (and other places) were extolling this woman as a pathbreaker, a history maker, and even a kind of hero, at the same time she had a background of espousing hatred verging on the anti-Semitic. And to extol her “Muslim-ness” for wearing the hijab, while ignoring her attempts to call as much attention as possible to her beauty, smacks of either ignorance or hypocrisy.

I put a comment on the HuffPo site last night saying they should update their report, but of course they haven’t done it despite widespread reporting about Khan’s withdrawal from the beauty campaign. (Curiously, they’ve removed her Instagram posts from the site.)  Nor has HuffPo US posted any report of her withdrawal, although HuffPo UK has. But even HuffPo UK’s report is bizarre, putting scare quotes around Khan’s “anti-Israel” tweets:

A model who became the first woman in a hijab to feature in advertising for hair brand L’Oreal has stepped down from the “game changing” campaign after a series of “anti-Israel” tweets emerged.

Amena Khan, who announced her recruitment to the initiative just six days ago, said she decided to step down “because the current conversations surrounding it detract from the positive and inclusive sentiment that it set out to deliver”.

She wrote on Instagram of her regret over tweets dating from 2014, which had prompted accusations she held “anti-Israel” views.

Why the scare quotes around “anti-Israel”? Does that mean it’s questionable whether the tweets shown above really were against Israel? That’s the only reason I can imagine for the quotes, and it’s shameful. There’s no question about what those tweets say!


Meanwhile, over at LBC Radio (“Leading Britain’s Conversation”), broadcaster James O’Brien, who appears to be an anti-Brexit liberal, makes clear to a Muslim mother why she shouldn’t force her eight-year-old daughter to wear the hijab. Click on the screenshot to get to the article and the 4.5-minute video.  Remember that while women in Iran, Afghanistan, and Saudi Arabia have no choice about wearing the hijab (and, in Iran, demonstrated in the streets when the theocracy forced veiling in 1979), the issue of “choice” in Western countries, where girls are veiled very young, is often problematic.

My final remarks simply echo the sentiments of Alishba Zarmeen, a feminist activist from Pakistan:

One possible counterargument for people like Khan is that some women veil not out of modesty, but simply as a symbol of their religious faith. Fair enough, but, given the above, that’s like saying that some people waving the Confederate flag are only doing so as a symbol of their “Southern heritage.” Remember the “fucking history and traditional use of that symbol”!

h/t: Heather, Orli

Matthew’s lecture on “What makes great biology?”

I’m out of the office this morning, so Readers’ Wildlife will take a one-day hiatus. But we do have a nice half-hour video lecture from Matthew on “What makes great biology?”.  It’s largely based, as the YouTube notes say, on his interviewing or knowing personally several of the people who have done “great biology”, including Sydney Brenner. That required Matthew to fly to Singapore, where the aging Brenner has retired. What a pair he and Crick were!

As Matthew explained, he gave this talk to an audience, but a recording glitch meant that he had to re-record it in his office. The lecture lays out five characteristics that he sees in great biological research, and he gives examples of each characteristic. Those examples are eclectic, reflecting Matthew’s own interests in genetics and evolution—especially human evolution—as well as his own work on the biology of olfaction.

All the biologists used as examples are still alive save Francis Crick. Eve Marder’s work was new to me, and so I learned something—as you will if you watch it. As you see, Matthew is a lively and engaging speaker.

Note his collection of Stegosaurus toys on the windowsill.

Tuesday: Hili dialogue

Good morning: it’s January 23, 2018, and National Pie Day! Eat some pie—the breakfast of champions. It’s also World Freedom Day in Taiwan and South Korea, celebrating the repatriation of prisoners captured during the Korean War. Meanwhile, the North and South Koreans are practicing various Winter Olympic sports at a resort in the North, and I can’t help but feel, as I’ve said before, that the South has been duped. What kind of “peace” will it get from this if the North refuses to negotiate over its nuclear program? But at least the U.S. government is operating today—for at least a couple of weeks.

On January 23, 1556, what is thought to be the deadliest earthquake in world history, the Shaanxi earthquake, occurred in the province of that name in China. As many as 830,000 people may have been killed. Why so many? As Wikipedia notes, “Most of the population in the area at the time lived in yaodongs, artificial caves in loess cliffs, many of which collapsed with catastrophic loss of life.” On this day in 1719, the principality of Liechtenstein was created. And on January 23, 1849, the U.S.’s first woman doctor, Elizabeth Blackwell, got her M.D. at Geneva Medical College in Geneva, New York. In 1957, Walter Frederick Morrison sold his invention of a “flying disc” to the Wham-O toy company, which renamed it the “Frisbee.” On this day in 1973, Richard Nixon announced that a peace agreement had been reached in Vietnam. In 1986, the first members of the Rock and Roll Hall of Fame were inducted: Chuck Berry, James Brown, Little Richard, Ray Charles, Sam Cooke, Fats Domino, The Everly Brothers, Buddy Holly, Jerry Lee Lewis, and Elvis Presley. That seems a worthy list; was anybody omitted? Finally, on January 23, 1997, Madeleine Albright became the first woman to serve as U.S. Secretary of State.

Notables born on this day include Stendhal (1783), Édouard Manet (1832), David Hilbert (1862), Django Reinhardt (1910), Ernie Kovacs (1919), and Jeanne Moreau (1928).

Below is Manet’s famous painting “Olympia”, created in 1863 and exhibited in 1865, causing a huge scandal. Note that there’s a black cat visible on the bed to the right. Some background from Wikipedia (my emphasis):

What shocked contemporary audiences was not Olympia’s nudity, nor the presence of her fully clothed maid, but her confrontational gaze and a number of details identifying her as a demi-mondaine or prostitute. These include the orchid in her hair, her bracelet, pearl earrings and the oriental shawl on which she lies, symbols of wealth and sensuality. The black ribbon around her neck, in stark contrast with her pale flesh, and her cast-off slipper underline the voluptuous atmosphere. “Olympia” was a name associated with prostitutes in 1860s Paris.

The painting is modelled after Titian’s Venus of Urbino (1538) [see it here]. Whereas the left hand of Titian’s Venus is curled and appears to entice, Olympia’s left hand appears to block, which has been interpreted as symbolic of her sexual independence from men and her role as a prostitute, granting or restricting access to her body in return for payment. Manet replaced the little dog (symbol of fidelity) in Titian’s painting with a black cat, which traditionally symbolized prostitution. Olympia disdainfully ignores the flowers presented to her by her servant, probably a gift from a client. Some have suggested that she is looking in the direction of the door, as her client barges in unannounced.

Here’s an enlargement of the cat, substantiating my theory (which is mine) that many otherwise good painters couldn’t portray cats very well:

Those who died on this day include Arthur Guinness (1803; founded the beer and the brand), Gustave Doré (1883), Edvard Munch (1944), Paul Robeson (1976), photographer Helmut Newton (2004), Johnny Carson (2005; heavy smoker), Jack LaLanne (2011), and “Mr. Cub”, Ernie Banks (2015).

I love Doré’s woodcuts, particularly those he produced for Dante’s Inferno. But here’s his rendition of Puss in Boots:

Here’s Helmut Newton’s photograph of Twiggy with a cat (1967):

Meanwhile in Dobrzyn, Hili is still balking at going out in the snow:

Cyrus: Let’s go to the river.
Hili: Not a chance.
In Polish:
Cyrus: Chodź, idziemy nad rzekę.
Hili: Mowy nie ma.

Here are three tweets from Grania. First, a strawberry cat!

This is hilarious, but Malgorzata says the dogs are being maternal towards the kittens. Alternative hypothesis: the dogs want to nom them.

Turkish cats:

And two from Matthew. Be sure you click on the original tweet to see the news headline:

And a trenchant cartoon:

You won’t believe this weird spider!

LOL! I used a clickbait headline again! But you really should pay attention to this bizarre arachnid: the pelican spider (also called the “assassin” spider). Note the eyes at the top, the bizarrely formed cephalothorax, and the very long chelicerae (jaws):

Photo: Hannah Wood

The specimen above, pictured in an article at the Mother Nature Network (MNN), appears to have had its legs removed, but here’s an intact one:

Source: Synapse

Why do they look like this? MNN explains:

Pelican spiders were introduced to science in 1854, when one of the bizarre-looking creatures was found preserved in 50 million-year-old amber. With a long neck-like structure and mouthparts protruding like an angled “beak,” the comparisons to a pelican were probably inevitable. Scientists initially thought pelican spiders were extinct, but then live specimens were found a few decades later — and that’s when the purpose behind their pelican-esque appearance became clear.

Pelican spiders, aka assassin spiders, evolved to look like this for good reason: They eat other spiders, and need a way to subdue their potentially dangerous prey from a safe distance. They’re active hunters, skulking through the night in search of silk draglines created by other spiders. When they find one, they follow the silk to its source, sometimes plucking on the spider’s web to trick it into coming closer. And once the unsuspecting prey is within range, a pelican spider will impale it with her long, fang-tipped “jaws” (actually appendages called chelicerae), as the Smithsonian Institution explains. She then uses her chelicerae to hold the prey away from her body, keeping herself safe from potential counterattacks until the captured spider dies (see photo below).

Spider at work:

Photo: Nikolai Scharff

These spiders are especially abundant in Madagascar, but are also found in Australasia and southern Africa. There’s a new paper reviewing the Madagascar group (see below, free access), but it’s nearly 100 pages long, full of morphological detail useful for taxonomy but not for us, and I haven’t read it. If you’re an arachnophile, have a go.

I couldn’t find a video of these things preying on other spiders, but this YouTube video gives a longer explanation and an animation of how they catch other spiders:

h/t: Su

Wood, H. M. and N. Scharff. 2018. A review of the Madagascan pelican spiders of the genera Eriauchenius O. Pickard-Cambridge, 1881, and Madagascarchaea gen. n. (Araneae, Archaeidae). ZooKeys 727:1–96.

An open letter to Charlotte Allen, an ignorant, evolution-dissing writer

Dear Ms. Allen,

I have become aware of your recent article, “St. Charles Darwin”, in First Things (“America’s most influential journal of religion and public life”). The point of your article appears to be twofold: first, to defend A. N. Wilson’s execrable hit-piece that masquerades as a book-length biography of Darwin (I reviewed his book here), and, second, to question the truth of evolution itself. But, by your own admission, you have no expertise to do either of these things.

First, you admit that you know nothing about Darwin’s life:

I have no idea myself whether Charles Darwin was a “self-effacing” and “endearing” beetlemaniac—a Mahatma Gandhi of biology, so to speak—as his fans claim, or a cat-killing, digestive tract-obsessed egotist and plagiarist, as Wilson seems to think.

Perhaps you should read some of the Darwin scholarship by historians of science, like Janet Browne, and then you might get an idea of what the man was really like. And if you did that, you’d see that the critiques of Wilson’s biography by me, John van Wyhe, and Adrian Woolfson—critiques that you find “hilarious”—rest on Wilson’s blatant misrepresentation of the biographical facts. Wilson simply distorted and lied about Darwin’s life (did you see that I caught him in a blatant lie about Darwin’s supposed plagiarism?).

Your claim that our criticisms of Wilson’s book stem from the fact that he is an atheist turned Christian, and that his religiosity is why his book has “gotten under the skin of people who make at least part of their living promoting Darwin”, is ludicrous. The book would remain dreadful even if Wilson had remained an atheist.

After I read Wilson’s book, I was puzzled that an apparently smart man could do such a terrible job criticizing not just Darwin, but his theory of evolution. It was then that I realized that Wilson was probably a creationist, or at least acted like one, and that suggested a plausible motivation for his execrable scholarship. But his scholarship remains bad regardless of his religion.

Further, you clearly know almost nothing about evolution, either, as seen in this paragraph:

It’s not surprising that Wilson, in his Darwin biography, finds the master’s theories wanting. Evolution, particularly evolutionary psychology, can be a useful heuristic in reminding us how similar we are to other animals, our kin, but when you go hunting through the fossil record for hard evolutionary evidence, you always come up . . . a little short. Yes, there seem to have been dinosaurs with feathers (presumably bird ancestors), but paleontologists continue to classify the extinct creatures as reptiles. There’s a “transitional” fish from the Devonian period, which artists like to draw with little legs like on the atheist bumper sticker—but the actual fossils, recovered in Nunavut, Canada, in 2004, are only of the fish’s head, whose bone structure seemed adapted to taking in air on shallow mud flats.

“Useful heuristic”? Do you know anything about evolution beyond what you’ve taken from Wilson’s book or the creationist literature? No, the fossil record does not come up short. Those dinosaurs with feathers are exactly what we expect for transitional forms: they have a largely dinosaurian skeleton but birdlike feathers, and, moreover, appear well after theropod dinosaurs (the presumed ancestor) were already around—but before modern birds appeared. Further, the fossils become less dinosaurian and more birdlike as one gets to more recent strata. Whether one calls these “birds” or “dinosaurs” is a matter of taste; the important fact is that they are exactly the transitional forms we expect, and they appear at exactly the time they should have if dinosaurs evolved into birds.

And surely you know that the truth of evolution doesn’t rest solely on fossils—in fact, there was not much of a fossil record in Darwin’s time. His evidence for evolution derived from other areas like embryology, comparative morphology, vestigial organs, and biogeography. Now, of course, we do have fossil records of many transitional forms: not just those from reptiles to birds, but from reptiles to mammals, amphibians to reptiles, terrestrial mammals to whales, and—brace yourself, as you’re going to hate this!—from early hominins that had small brains, big teeth, and lived in the trees to the more cerebral and gracile species of Homo. All of these, and newer evidence from genetics as well, attest to the truth of evolution.

Your comment about Tiktaalik shows your further ignorance. It’s not just the fish’s head that we have, for crying out loud, but a substantial part of the postcranial skeleton, including its shoulder and front fins. Let me remind you by showing you the photos of the fossil:

The bony fins that might have evolved into legs:

And, as Greg Mayer reported on this site four years ago, we also have a pelvis and a partial hindlimb.

To see the significance of this fossil as the kind of “fish” that could have evolved into tetrapod amphibians, I suggest that you read Neil Shubin’s Your Inner Fish. Did you have a look at it? I didn’t think so.

At the end, I wondered how you—even more ignorant about Darwin and evolution than was Wilson—could do such a terrible job in your article. And I conclude that, like Wilson, you have been conditioned by your religious beliefs to attack Darwin and his ideas. I implore you to do some reading about science before you further mislead the readers of First Things. For, without doing your journalistic homework, you’ll do nothing to keep that magazine “an influential journal of religion and public life.”

Do you really want to cast in your lot with creationists? Enlightened believers accepted evolution a long time ago.  Surely you don’t want First Things to become Worst Things!

Yours sincerely,
Jerry Coyne


I’ve posted the link to this piece as a comment after Allen’s piece. We’ll see if it appears.

Why do intellectuals avoid discussing free will and determinism?

One thing that’s struck me while interacting with various Scholars of Repute is how uncomfortable many get when they have to discuss free will. I’m not talking about Dan Dennett here, as of course he’s a compatibilist and is glad to cross swords with anybody—while admitting sotto voce that yes, we could not have chosen otherwise. And I’m not talking about Sam Harris, who has spoken out eloquently about our lack of free will in his eponymous book. (And, of course, Dan had a go at Sam when reviewing that book, to which Sam replied.)

No, I’m talking about other prominent thinkers, and I’ll use Richard Dawkins as an example. When I told him in Washington, D.C., that I would ask him about free will in our onstage conversation, he became visibly uncomfortable. But I didn’t back off, and when I reported on our discussion, I said this:

 . . . I did pin Richard down to saying something about free will (in the dualistic sense), as in his upcoming book of essays, Science in the Soul (recommended), he’d written this:

“After my public speeches I have come to dread the inevitable ‘do you believe in free will’ question and sometimes resort to quoting Christopher Hitchens’s characteristically witty answer, “I have no choice”.

Well, that’s glib, but also a non-answer, so I wanted to ask him if he accepted that all our actions are predetermined except for possible quantum events in the brain. And he did admit that, but added that he doesn’t really understand compatibilism and other attempts to give us free will. I didn’t get into those issues, and we briefly discussed the implications of pure determinism for society and the justice system.

As you see, Hitchens also avoided the question. Perhaps Steve Pinker discusses the issue in extenso somewhere in his works, but I don’t know where, and I’ve never directly asked him his opinion.

I’ve seen similar “avoidance behavior” from other scholars, too, but won’t name them here.

It’s my impression, then, that with the exception of vociferous compatibilists like Dennett, people who really are determinists often try to avoid discussing it in public. And by “it”, I don’t mean just free will, but mostly the fact that we are not able, after performing a given act, to argue that we could have done otherwise. That is, people don’t like to talk about determinism. This bothers me because, as I’ve said before, I think fully grasping the determinism of human behavior has enormous practical implications for how we punish and reward people, particularly in our broken judicial system.

Why this avoidance of determinism? I’ve thought about it a lot, and the only conclusion I can arrive at is this: espousing the notion of determinism, and emphasizing its consequences, makes people uncomfortable, and they take that out on the determinist. For instance, suppose someone said, discussing the recent case of David Allen Turpin and Louise Anna Turpin, who held their 13 children captive under horrendous circumstances in their California home (chaining them to beds, starving them, etc.): “Yes, the Turpins did a bad thing, but they had no choice. They were simply acting on the behavioral imperatives dictated by their genes and environment, and they couldn’t have done otherwise.”

If you said that, most people would think you a monster—a person without morals who was intent on excusing their behavior. But that statement about the Turpins is true!

Now how the Turpins are treated by the law is different from saying that they had no choice in their behavior: causes and social consequences are not the same issue. As I’ve argued many times, saying that people had no choice in committing a crime is a statement about “is”s, not “oughts”, and there are very good reasons to incarcerate criminals, though in a way different from what we do now. But grasping determinism, as I, Sam, and people like Robert Sapolsky believe, would lead to recommending a complete overhaul of our justice system. Philosophers who spend their time confecting definitions of free will that still accept determinism could better spend their time working on such an overhaul. Their lucubrations on compatibilism are, I contend, a semantic endeavor that’s largely a waste of time. Why bother with semantics when you could fix severe problems in society? As Marx said, “Philosophers have hitherto only interpreted the world in various ways; the point is to change it.”

Now you could argue that the notion of determinism of human behavior is complicated and hard to understand, and that’s why Big Thinkers avoid it. I don’t believe that. Certainly people like Dawkins, Hitchens, and Pinker have the neuronal wherewithal not only to understand determinism, but to work out its ramifications for how we treat people. It’s not rocket science. I am neuronally challenged compared to those people, but the fact is clear to me, and the ramifications seem obvious.

You could also say that some people avoid discussing determinism because they misunderstand its implications. Determinism does not, as I said, imply that criminals should go free. It does not imply that, if we grasp it, we’ll become nihilists who lie abed in an existential stupor. It does not say that we can’t change people’s minds by arguing with them. Yes, many people have such misunderstandings, but I can’t believe that the people I’ve named would share those misunderstandings.

Here’s another possible reason why the Brainy Ones avoid determinism. They may think—as Dennett has said explicitly several times—that if people believe they’re puppets controlled by the strings of their genes and environments (which they are), it will rip society asunder, for our feeling of agency, which we need to somehow confirm as real, is a potent social glue.

But for decades people said the same thing about religion: “We can’t disabuse people of their belief in God, for society would fall apart.” As we know from Scandinavia, that’s simply not true. And I really do believe that if people intellectually grasped determinism, society wouldn’t fall apart, either. For one thing, our feeling of agency is so strong that grasping determinism wouldn’t turn us into do-nothing nihilists. Although it’s an illusion, so is the notion of the “I” in our brain. Life will go on when we believe in determinism but still have our evolved feeling of agency.

That is what I have to say this morning, and I throw this out for readers’ discussion. I really don’t want to engage again in the endless fracas about whether we have “free will” or argue fruitlessly about whether we have a kind of “compatibilist” free will that is “the only type of free will worth wanting.” No, I assume that most readers here accept determinism of human behavior, with the possible exception of truly indeterminate quantum-mechanical phenomena that may affect our behavior but still don’t give us agency.  What I want to know is why many intellectuals avoid discussing determinism, which I see as one of the most important issues of our time.   

Now some readers may say that there are no practical consequences to accepting behavioral determinism. I disagree, as do people like Harris and Sapolsky, and most other determinists who aren’t at the same time compatibilists. Those people who say there are no consequences could argue, “Well, if there are no consequences, why should I bother to discuss it?” If that’s your reason for avoiding determinism, so be it. But I think you’re wrong.

Readers’ wildlife photos (and science post!)

Bruce Lyon, our professor of ecology and evolution at the University of California at Santa Cruz, has sent another installment in his continuing and engrossing series on coots. This counts not only as a photography post, but as a science post, so be sure to read his text (indented), which describes experiments to figure out why the chicks look so damn weird!


Coot soap opera VI: Tacky colorful coot chicks

Here is one final installment about our work with American coots, Fulica americana (previous posts are here, here, here, here, and here). This installment is about the bizarrely ornamented chicks.

The following scene should be familiar to anybody who watches nature shows on TV: a colorful, highly ornamented bird bows and displays to some duller individuals watching intently. Typically this would be a mating display, with males displaying to attract females. However, the scene also describes family life in coots, where highly ornamented chicks display to their rather plain parents.

My first encounter with a baby coot was rather shocking, because baby birds are not supposed to look like these do. While working on a duck research project in central British Columbia near the town of 150 Mile House, I heard sounds I did not recognize coming from the reeds in a small wetland, so I waded in to find the source. The noise was coming from a baby American coot floating in the water—an orange monstrosity unlike any baby bird I had ever seen. The chick, just out of the egg, was covered in fluorescent orange plumes, with a naked orange crown, blue eye shadow, and bright red nubbins around the eyes.

Below: a highly ornamented coot chick displays to its parent.

Below: A photo of three baby coots lined up on a rock waiting to be tagged. The word extraordinary is often overused, but I think it applies to these chicks.
Below: Left: a close-up of a baby coot's face—note the funky red nubbins near the beak. Right: similar facial nubbins are found on male ruffs (Calidris pugnax), which is interesting because this sandpiper has a lek mating system. A lek is a mating arena where groups of males gather and display to attract females. Lek mating systems often involve extreme sexual selection and highly ornamented males, like ruffs or birds of paradise. We expect extreme ornaments on males in lekking species, not on baby birds.
Below: These two herring gull (Larus argentatus) chicks show what self-respecting baby birds are supposed to look like—these chicks are doing their best to resemble a rock. Baby birds are tasty and helpless, so camouflage makes sense. 

At least to the human eye, coot chicks are anything but cryptic with their crazy ornamentation and doodads. Ornamental plumage in birds is typically associated with sexual selection; in fact bird ornamentation played a key role in motivating Darwin’s sexual selection theory (three chapters of his sexual selection book were devoted to birds). He proposed that the ornamental traits are evolutionarily favored because they increase mating success (typically but not always in males), either because females prefer ornamented males or because the ornamentation helps males win fights over access to females or resources females like. Clearly this explanation does not apply to a baby coot fresh out of the egg.

Right after my first coot chick encounter, I described the bizarre chick to my friend John Eadie (now a professor at the University of California Davis) over dinner at the field house. As nerds often do, we couldn't help brainstorming about possible explanations for the bizarre plumage. John suddenly remembered an idea by Mary Jane West-Eberhard that seemed like a perfect explanation: parental choice theory. This idea was part of West-Eberhard's broader social selection theory, a framework that sought to extend aspects of Darwin's sexual selection ideas to contexts beyond mating. West-Eberhard, an evolutionary biologist who specializes in social insects, noted that mechanisms of choice and social competition occur in a variety of contexts beyond mating, and these mechanisms could in theory lead to the same types of traits produced by sexual selection. Her parental choice idea is analogous to mate choice, except that the choosy individuals are parents rather than individuals seeking mates, and the parents' preference is for characteristics of offspring, not mates. The idea is simple: in species where parental food is essential for offspring survival, ornamented offspring might evolve if parents control food allocation among their offspring and happen to prefer to feed more ornamented offspring over less ornamented offspring. The assumptions of this theory apply to coots perfectly: as discussed in previous posts, parent coots use aggression to control which chicks are fed, and food is so critical to chick survival that many chicks die because they are not fed enough. Eadie and I did not get around to testing these ideas until after I had done my PhD work on the coots, which was good because my PhD work allowed me to figure out the field methods needed for studying coot parental behavior.

Below: Ornamental coot chick feathers up close. The feather structure enabled an easy way to experimentally alter chick plumage: simply trim off the orange feather tips. The ornamented body feathers have two parts: a black base with the normal downy structure that keeps the chick warm, and a naked orange shaft that extends beyond the down. Cutting the tip removes the color but not the fluff. The little red blobs in the upper left are the red facial nubbins—these are highly modified, have no down, and are too small to be altered by trimming.

Below. It’s “haircut” time!  To test if parent coots have a preference for ornamented chicks we experimentally dulled half of the chicks in each brood by trimming the orange tips from their body feathers.  (In a later study we used electric trimmers, much faster than scissors.) The facial nubbins were left as is.
Below: In each brood, we alternated trimmed and untrimmed chicks with hatching order, because hatching order is such a strong determinant of survival within broods. A coin toss determined whether the first chick to hatch would be trimmed, and we then alternated the two treatments as the chicks hatched. As this photo of an untrimmed and a trimmed chick shows, trimming dramatically reduces a chick's ornamentation. Trimmed chicks are, however, still cute and fuzzy!
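For readers who like to see a design written out, the within-brood assignment just described is a simple alternating scheme. Here is a minimal sketch in Python (hypothetical code, not from the actual study; the function name and labels are mine):

```python
import random

def assign_treatments(n_chicks, rng=random):
    """Alternate 'trimmed'/'untrimmed' with hatching order; a coin toss
    decides which treatment the first-hatched chick receives."""
    first_trimmed = rng.random() < 0.5  # the coin toss
    return ["trimmed" if (i % 2 == 0) == first_trimmed else "untrimmed"
            for i in range(n_chicks)]
```

Because hatching order so strongly predicts survival, alternating the treatments guarantees that trimmed and untrimmed chicks are spread evenly across the hatching sequence.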
Below: Interestingly, most rails, including some species of coots, have plain black chicks, so our experiment created an appearance that is not completely novel for coots or for rails generally. Moreover, a rail phylogeny shows that chick ornamentation is a derived trait (i.e., it evolved relatively recently), so trimming chicks recreates an ancestral appearance. Photo of a water rail (Rallus aquaticus) chick from Europe (photo Gunnar Pettersson):
Below: A cartoon that I use in seminars shows the key results of the experiment. Within broods, the ornamented chicks were fed far more, grew more rapidly, and had much higher survival than their non-ornamented siblings. Our experiment confirmed a key aspect of West-Eberhard's idea: coot parents have a strong preference for highly ornamented chicks and, in an experimental setting at least, this preference has a big impact on the survival of the ornamented chicks. Ethical note: on average, 50% of the chicks in natural broods die of starvation; our experiments did not increase chick mortality but simply altered which chicks lived or died.

As an aside, ornithologists normally capture chicks and weigh them to obtain growth-rate data. We could not do that because, within a day of hatching, coot chicks are too damn good at hiding and are almost impossible to catch. I therefore developed a method of photographing recognizable individual chicks (identified by their tags) from a floating blind while simultaneously measuring the distance between chick and camera (I turned my telephoto lens into a range finder). When the photo distance is known, one can convert an object's relative size in a photo to its actual size, using conversion factors obtained from reference photographs of known-sized objects taken at known distances. I was able to test the accuracy of this method because I raised a bunch of coot chicks in captivity and photographed chicks of known mass while they swam in a kiddie pool. The photo method turns out to be very accurate, and we use it in all of our coot studies.
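The arithmetic behind the photo method is straightforward. As a rough illustration (a hypothetical sketch with made-up numbers, not the study's actual calibration): under a simple pinhole-camera assumption, an object's size in pixels is proportional to its actual size divided by its distance from the camera, so one constant calibrated from reference photos converts pixel measurements to real sizes:

```python
def calibrate(reference_pixel_size, reference_actual_size, reference_distance):
    """Derive the conversion constant from a photo of a known-sized object.
    pixels = k * actual_size / distance  =>  k = pixels * distance / size"""
    return reference_pixel_size * reference_distance / reference_actual_size

def estimate_size(pixel_size, distance, k):
    """Convert a measured pixel size at a known distance to actual size."""
    return pixel_size * distance / k

# Example: a 10 cm reference object at 5 m spans 200 pixels.
k = calibrate(200.0, 10.0, 5.0)
# A chick spanning 150 pixels at 4 m is then estimated at 6 cm.
print(estimate_size(150.0, 4.0, k))
```

This assumes a fixed focal length; in practice one would build a table of conversion factors from many reference photos, as the text describes.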

Below: Astute readers may realize that our experimental results could be explained by something other than a parental preference for chick ornamentation. Maybe parent coots did not recognize the trimmed chicks as their own, or even as coot chicks at all. Or perhaps, counter to what I claimed above, trimming actually does affect chick survival by reducing the chicks' ability to keep warm or dry. Fortunately, a chance discussion with a colleague raised this concern before we did the study and prompted me to design a sham control treatment to test for the effects of trimming itself. We created two types of control broods: broods where all chicks were trimmed and broods where none were. There were no differences between these two brood types in anything we measured: average feeding rates, chick growth, or survival. This shows that trimming feathers per se does not affect chick success; it matters only when parents have both ornamented and unornamented chicks to choose between. The photo below shows an all-trimmed control brood: note that none of these chicks have much color, despite being very young.

Our experiment showed that parent coots prefer ornamented chicks, but this raises the question of why parents have this preference in the first place. This question is much harder to answer than simply showing that the preference exists, but more recent studies of natural color variation (done with Dai Shizuka, my PhD student at the time) suggest some possibilities. First, we looked at whether brood-parasitic chicks are more colorful than non-parasitic chicks—in fact they are less colorful. Second, in addition to their colorful plumage, coots have bald heads that can change fairly quickly from dull to bright, and this skin color may convey useful information to the parents, such as reliable information about chick hunger or body temperature. If parents evolved a preference for feeding chicks with particularly colorful naked heads, this preference could then favor the evolution of chick traits that amplify the head signal: perhaps plumage color is a dazzle trait that mimics the brightest color of the naked head. Finally, another possibility is suggested by our discovery that within broods, later-hatched chicks tend to be more colorful (redder) than early-hatched ones. As I described in a previous post, when parents eventually take control of food allocation, they divide the brood and each parent specializes in feeding the youngest chicks still alive. Perhaps the orange coloration is an honest signal of chick age or size that helps the parents identify the smallest or youngest surviving chicks.

Below. A composite photo showing the plumage color and pattern of chicks of four different ages. The fluorescent orange plumage could be an honest signal of size (or age) because once the chicks hatch, they do not grow any new orange feathers. Given this, their coloration must dull as the chick grows and the orange feathers become more thinly spread over an increasingly larger surface area. Big chicks cannot lie about their size! It also seems that the feathers dull and fall out with age.