More Pit Bulls, Less Crime?

I’m just catching up on Steve Sailer’s latest dissection of Malcolm Gladwell’s most recent spiel of sophistry, in which the over-hyped, afro-coiffed Blink-author-turned-seminar-guru attempts to fashion a critique of racial profiling around the facile analogy of pit-bull attacks.

Sailer’s analysis is incisive as usual, but even as he makes mincemeat of Gladwell’s faulty logic and specious statistical assumptions, his treatment of the relevant dog-attack data leaves me wondering if there could be something that’s being overlooked in the pit bull controversy.

Here is Sailer’s summary of some key points from a statistical report published in the Journal of Veterinary Medicine:

…if you go look up the data on people killed by dogs, you find that of the 238 deaths from 1979-1998 for which the breed of dog is known, 66 were due to pit bull-type breeds (along with 10 people killed by part-pit bull mixed breed dogs). Pure-bred Rottweilers were far back in second place with 39 kills and pure-bred German Shepherds in third with 17. Unfortunately, we don’t have terribly good data on the number of dogs by breed, but certainly the Labrador retriever is vastly more common than all the various pit bull breeds combined, yet only one person in those two decades was killed by a pure bred Labrador (and four by part Labradors).

Moreover, the danger to children (who comprise about 70% of dog fatalities) from pit bulls relative to Labradors is even worse than these numbers suggest because sensible dog owners buy dog breeds based on likely exposure to children. If you have a small child, you are much more likely to buy a Labrador to be his pet rather than a pit bull.

Beyond the undeniable fact that feared breeds are implicated in a far greater percentage of all fatal attacks, the point that screams out from these numbers is that fatal dog attacks are still pretty damn rare. The specter of sanguinary beasts may play into the public’s appetite for sensational news coverage, but in terms of real-world risk, children are vastly more likely to die from drowning in backyard swimming pools or even in bicycle accidents than from being mauled by a canine, regardless of breed.

In highlighting the discrepancy between perceived and actual risk relating to the comparably emotional subject of "children and guns," Independent Institute policy analyst David Kopel provides some sound statistical perspective:

If any object which is associated with about 236 accidental childhood deaths a year [this refers to the number of children under the age of 15 who died from accidental firearm discharges in 1990, according to statistics compiled by the National Safety Council] should be outlawed, then it would be logical to call for the prohibition of bicycles (over 400 child deaths a year). An even larger number of children are killed by motor vehicles (3,263). Four hundred and thirty-two children die annually in fires caused by adults who fall asleep while smoking; the 432 deaths would, by the handgun-banning logic, make a persuasive case for outlawing tobacco.

If the focus is on children under age 5, then outlawing swimming pools and bathtubs (350 drowning deaths) or cigarette lighters (90 deaths) would save many more children under 5 from accidental deaths than would a gun ban (34 deaths).

Thus, the "if it saves one life" anti-accident logic applies with much greater force to bicycles, automobiles, bathtubs, swimming pools, tobacco, and cigarette lighters than to guns. Unlike gunowners, owners of these other objects have no specific Constitutional right of possession. Thus, there would [be] little Constitutional objection to a ban on future production of these items. And while bicycles, bathtubs, and cigarette lighters make life more convenient, these objects do not save lives or prevent injury.

Kopel’s implicit point about the life-saving utility of firearms is especially important in considering the breed-specific dog attack data. Until University of Chicago economist John Lott (notwithstanding his questionable scruples) dramatically re-framed the gun control debate with his econometric analyses arguing that liberalized gun laws correlate with criminal deterrence, it remained an article of faith among most social scientists that private gun possession was a net liability for society, at least in terms of risk analysis. By failing to consider the effects of gun ownership within a two-tailed research paradigm, sociologists had overlooked the protective and deterrent value that was, to whatever arguable degree, always part of reality.

Given that the incidence of fatality associated with ostensibly vicious dog breeds (132 total deaths over a two-decade period attributed to "pit bull-type breeds" and their mixes, pure-bred Rottweilers, and German Shepherds combined) is profoundly smaller than that associated with firearms (or cigarette lighters, bathtubs, automobiles, etc.), it wouldn’t take much of a crime-deterring counter-effect to offset the headline-grabbing horror stories.

There is no question that many people choose to keep notorious dog breeds precisely because of the protective benefits they imagine such dogs will provide.  And as with guns, this may be especially true for people who live in urban areas where the risks and realities of crime are more immediate.  The question that necessarily arises is whether this belief is supported by empirical evidence; and if so, to what extent?

It may be that for every toddler who is tragically mauled by a snarling pit bull, there are two toddlers whose lives are saved when the same type of dog snarls and threatens to maul a would-be intruder.  My cursory Googling on the subject hasn’t turned up any relevant research, but as municipalities rush to draft ordinances banning ill-reputed dog breeds, the question seems worth taking seriously. 

If anyone knows of any data on this subject, please drop me a note.

WWNWD?

Back in the early 90s, when  Clarence Thomas was being grilled about his predilection for Long Dong Silver porn loops and political correctness was casting its absurd pall over American campus life, I remember Naomi Wolf’s Foucault-derived feminist fantasies being the subject of fawning media attention. Charlie Rose and Larry King listened politely as this drop-dead gorgeous scholar-upstart spun her lurid tale about how modern women were being subjugated under the yoke of what she called The Beauty Myth, which, according to her extensive scholarly investigations, could be definitively traced to the insidious machinations of deep-rooted patriarchal ideation. Back then, few critics bothered to question Ms. Wolf’s biology-free worldview, or her grasp of real-world sexual politics. And if you were disposed to doubt, the best bet was to grumble quietly and wait for the silliness to run its course.

As the insights of evolutionary psychology slowly and stealthily wore away at the liberal imagination, the PC hypno-germs began to lose their potency and Ms. Wolf quietly dropped her post-structuralist gender hegemony shtick in favor of ever increasing fits of flakiness. Her discursions shifted from the warmed-over semiotic palaver of academe to focus on more personal — and conspicuously self-important — cultural terrain. The same commonplace realizations that occur to most people as they learn from the trials of life had a way of playing out as grandiose revelations to Ms. Wolf, who soon fashioned a career out of repackaging folk wisdom as rarefied social insight.  If Ms. Wolf discovered that professional life could be difficult, she wrote a book about it.  If she was given to wax nostalgic over her sexual awakening, she wrote a book about it.  If she got knocked up, well, time to hit the lecture circuit.  The world must know.

Seemingly oblivious to her privileged status, Ms. Wolf propped up her forays into glorified self-help prosaicism with a pretense of sagacity that a less sensitive soul might chalk up to the sheltered pedigree of a Jewish princess.  But her obliviousness wasn’t without a certain endearing charm. And at some point, I stopped scoffing and began to chuckle.  What would be next?  A prolonged meditation on Wolf’s discovery of innate sex differences? A treatise on animal psychology based on her experience as a pet owner? A cookbook?

I suppose I should have guessed that the next chapter in The World According to Naomi would have a spiritual ring, but even I wasn’t prepared for the Jesus bomb.  Whatever could have prompted a smart Jewish neo-feminist wordsmith to profess her faith in Our Lord and Savior? Much less to serve up her newfound faith with such bordering-on-Shirley-MacLaine adolescent-boy-channeling weirdness?  Did it have something to do with being traumatized by Ali G?  Could it be a final rebuff directed at her one-time mentor Harold Bloom, whom she famously — if dubiously — accused of sexual harassment?

Or just maybe, could it be that Ms. Wolf — Naomi — is being quite sincere?  And I am being mean.   

True to form, the culturati have begun to pile on, and I expect there will be more to follow.  It won’t be long, I imagine, before Camille Paglia chimes in with a measure of polished spleen.  And while I’m sure I’ll check in on the din, I think I will refrain from wallowing.  Whatever is to be made of Naomi Wolf’s apostasy, or of her penchant for self promotion, or of her over-hyped career as a shape-shifting feminist gadfly, she has become a curious fixture in the rhythm of American middlebrow culture.  I wish her well.

And while I like to think my deeply considered atheism leaves me pretty well inoculated against the Jesus bug that caught Naomi, I also recall a time when I claimed I wasn’t a "cat person."  Yet here I sit with a little gray-furred fucker purring on my desk, obscuring the monitor. And the love that wells up is as real as indigestion.


First They Came for the Holocaust Deniers…

Not that anyone cares, but David Irving is still behind bars.  Nearly 80 days and counting. 

With its churlish tone and requisite turns of adjectival spleen, Malte Herwig’s front-line report for The Guardian is well worth reading. Here’s a snippet:

‘My little daughter,’ [Irving] adds with a sheepish grin, ‘of course thinks it’s cool that daddy is in prison’; and somehow one cannot help feeling that daddy himself relishes having another big fight on his hands. Irving loves to cast himself as an innocent maverick single-handedly taking on powerful governments, the mighty press and influential lobby organisations.

Yeah, I’m sure it’s a real ego-trip, stomping around the grounds of that Austrian hoosegow for months.  Who cares if the big boorish martyr has trouble securing ink for a pen with which to scribble at his memoirs?  He’s having the time of his life, after all.

But Herwig isn’t finished:

Why did [Irving] risk going on a journey that he knew might get him into trouble? ‘I’m from a family of officers, and I’m an Englishman. We march toward the gunfire,’ he snarls into the receiver. Now that he is doing his rounds in a prison yard, however, he finds that he didn’t pack the right marching equipment. ‘I have very expensive shoes,’ he sighs, ‘but they are coming apart from walking outside in the yard.’

It’s a lurid portrait, this unhinged patriot-cum-psychotic, snarling in those tread-worn expensive shoes. On second thought, maybe it wasn’t such a good idea to trust him with a pen after all.  Who knows, he might pull some crazy-ass Hannibal Lecter shit on one of the guards.

In the course of his interrogation, Herwig goes on to treat us to a cheeky measure of Anglo-baiting speculation as to what might lie behind this remorseless expensive-shoe-wearing thought criminal’s untamed penchant for provocation:   

[Irving’s] desire to cause outrage seems rooted in the sort of reckless arrogance you find in some public school boys who think the world belongs to them.

OK, let’s be very clear.  At present the only "world" that belongs to David Irving is a Viennese prison populated with hardened criminals.  And after all the blame and snide psychoanalytic quips are out of the way, the banal reality is that the "reckless arrogance" for which he awaits trial amounts to nothing more than speaking and writing outside the bounds of officially sanctioned discourse.

Herwig sums up his report with a few obligatory remarks about the "debate on Holocaust denial and free speech," but at no point in his animadversion does he evince any real concern over the fact that a 67-year-old historian has been incarcerated for nothing more than speaking his mind.  Irving is simply depicted as a smug controversialist with sinister motives. Who got what was coming to him.

Somehow, I doubt this is what Bill Ryan had in mind when he talked about Blaming the Victim.  But the shoe fits.

The shame of it is, Herwig’s attitude is pretty much par for the course.  Civic-minded people who are quick to drum up the pre-scripted censorship-screaming pother every time Heather Has Two Mommies is "banned" from some backwoods school reading list tend to be conspicuously unconcerned over the very real criminal sanctions visited upon expositors of dissident Holocaust history.

And if you think Irving’s is an isolated case, well, you have some catching up to do.

Ernst Zundel sat in a Canadian prison cell for over a year before being extradited to his native Germany where he now sits in another cell awaiting his day in court for the "crime" of  "defaming the dead." 

After being harassed by authorities for years, the Flemish publisher, Siegfried Verbeke, was recently arrested under a German court order for similar transgressions.

Robert Faurisson has taken his shots as well. As if being savagely beaten by thugs weren’t bad enough, France’s professor laureate of forbidden history has been repeatedly fined and arrested for failing to keep his French Anne-Frank-doubting mouth shut.

And then there’s Germar Rudolf, who, with the full cooperation of U.S. courts, was recently deported from his home in Chicago, where he lived with his American wife and child, to face his German inquisitors.  A mild-mannered chemist who fled European censors to run a small book imprint in the Land of the Free, Rudolf now sits in yet another cage where he waits in queue for yet another sham tribunal.

I won’t bother going through the similar experiences of George Theil. Or Fred Leuchter. Or Adam Gmurczyk. Or any number of other cases of unambiguous censorious persecution attending this, and only this, issue. And there are many others.  As I’m sure there will be more.  The beat goes on.   

In case you’re wondering, I never got around to forming a strong opinion as to the merits of the arguments advanced by Holocaust revisionists.  I’ve never been much of a "history guy," and arriving at an informed opinion would entail a hell of a lot of research that, frankly, I just don’t feel like doing. I do know that I’ve yet to come across even one of these guys who actually "denies" that the Holocaust happened.  They may deal in rhetorical hyperbole from time to time, but the brass tacks always come down to distinctions over intentionality, or relativist points about the comparative monstrosity of Allied and Soviet atrocities.  And I certainly will admit that some of the forensic issues raised by the better crop of dissident scholars — about Zyklon stains and gas chambers, for example — seem plausible enough to be debated. But what do I know?

And more importantly, why should it matter what I think?  The simple, shameful fact is that people are being persecuted and imprisoned for writing books and handing out pamphlets. This is a blight on the fundament of Western Civilization, and the paucity of outrage among intellectuals is despicable. 

As David Irving snarled, "it’s like having a law that prohibits wearing yellow collars." Or expensive shoes, perhaps.

Judith Rich Harris Interview

Don’t miss the latest installment of Gene Expression’s outstanding "10 Questions" series, in which the plain-spoken scholar-provocateur, Judith Rich Harris, chews over some difficult issues raised by her shibboleth-shattering research on child development and the surprisingly negligible role of parental influence in shaping children’s destiny.

Her reply to the throw-away tenth question turns out to be especially revealing:

Interviewer: If you could have your full genome sequenced for $1000, would you do it? (assume privacy concerns are obviated)

Harris: I’d jump at the chance, and I wouldn’t give a damn about privacy concerns – I’d want the information to be made freely available. My father spent his adult life crippled by an autoimmune disorder called ankylosing spondylitis. His father died young of an autoimmune disorder called pernicious anemia. And I have been ill most of my adult life with an autoimmune disorder that has launched attacks on several different body systems. So I think my genes might have something interesting to tell medical researchers.

If you have yet to catch up on Harris’s heresy, The Nurture Assumption, which has held up pretty well in the face of voluminous and intense criticism, is a must-read. For a good summary of her "dangerous idea," check out the short essay she filed for John Brockman’s latest "Edge Annual Question" symposium.

Harris’s new book, No Two Alike, promises a deeper exploration of the Darwinian dynamics underlying individual differences. Should be good fun.

You, Me, and The Bell Curve – Part Three: Forbidden Grounds

Editor’s note: what follows is the penultimate post in my series on the Bell Curve controversy. If you want to  catch up, go to Part One and Part Two.
_________________________________________________________

OK, then.  Having thus disposed of good Naureckas’s pet gadfly, it is well that we should now turn our attention more directly to the verboten meat of the controversy — to the ominous racially-charged thought-bombs that we all mull in private while avoiding in public.

In irresponsibly broad strokes, the racial-cognitive trichotomy breaks down so that East Asians get top billing with an average IQ of around 105. Then you have the whites weighing in at the occidentally-normed goldilocks median of 100, drawing inevitable attention to the significantly lower average score of about 85 for westernized blacks (native African blacks hover around 70, a subject of ongoing study and controversy that we shall ignore for present purposes).  This is the crudely stated statistical picture that emerges time and time again, and it is no longer controversial among experts. Again, the sticking point is not whether, but why.

Before digging in, it might be a good idea to recite the by-now-familiar caveats – because the caveats are important.

First, it should be kept in mind that when we talk about group differences, we make generalizations perforce. Race may not be “socially constructed” as fashionable rhetoric would have it, but racial categories are fuzzy at the edges, and even if some measure of genetic influence can be conclusively linked to observed ethnic differences in mental traits, the world will still be populated with more than enough black geniuses, middling Asians, and dirt stupid white folk to keep things interesting for the ride.  And regardless of the underlying causes of aggregate differences, there will always be more variation within groups than between them. As Charles Murray reminds us, “a few minutes of conversation with individuals you meet will tell you much more about them than their group membership does.” 

But the hard questions still need to be asked. And answered.  If the root causes of significant racial gaps in important mental skills are essentially due to cultural or environmental forces, however elusive, then the trick will always be to find and fix them.  Whatever it takes.  This has been the working assumption for the past fifty years, and it’s still the only possibility that you can discuss in polite company. 

In considering the notion that genes or other intractable biological factors are at work, the admirable human urge to find and fix is called into question.  At the sociopolitical crossroads of the debate, the plausibility of what has been dubbed the “hereditarian hypothesis” begs us to reevaluate our best efforts from the ground up.  When underlying beliefs about human nature collide with brute realities, mistakes invariably follow. We invest in the wrong resources.  We erect false expectations. We assign ill-deserved blame and foment unrealistic hope.  The truth may not set us free, but it just may keep us from fucking things up.

Getting to the Ultima Thule of the race-genetic conundrum, however, is difficult business, beset by logical pitfalls and empirical loopholes. Scientists may be able to serve up an overwhelming amount of evidence that is consistent with the idea that genes contribute strongly to racial differences in mental ability, but definitive conclusions have thus far proven elusive.

Harvard geneticist Richard Lewontin, a tenacious critic of hereditarian ideas, famously illustrated the fundamental problem by invoking a memorable analogy – originally formulated by Charles H. Cooley and restated in The Bell Curve – in which handfuls of seed corn of identical genetic stock are dispersed in radically different environments. Even with the genetic variance set at zero, the seeds finding purchase in more arable environs will yield an observably larger crop than those deposited in arid desert soil. Lewontin’s analogy reminds us that even when the role of genes in determining  within-group variation can be established with relative certainty, the cause of between-group variation may yet yield to wholly environmental explanations.  As operationalized vis-a-vis the IQ controversy, this means there is always the possibility that the answer lurks somewhere within the socio-cultural morass that shapes our subjective racial experiences.            
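For readers who prefer to see the logic in working form, here is a minimal simulation of the seed-corn point. Every number in it is made up; it merely shows that a trait can be highly heritable within each group while the entire gap between groups is produced by a uniform environmental handicap:

```python
# A minimal sketch of Lewontin's seed-corn analogy (hypothetical numbers
# throughout). Within each group, phenotypic variation is driven largely by
# "genes"; the gap BETWEEN the groups comes entirely from a uniform
# environmental handicap applied to group B.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def grow(environment_shift):
    genes = rng.normal(0, 10, n)          # same genetic distribution in both groups
    environment = rng.normal(0, 5, n)     # individual environmental noise
    return genes, genes + environment + environment_shift

genes_a, pheno_a = grow(environment_shift=0.0)    # "fertile soil"
genes_b, pheno_b = grow(environment_shift=-15.0)  # "arid soil": uniform handicap

within_h2_a = np.corrcoef(genes_a, pheno_a)[0, 1] ** 2
within_h2_b = np.corrcoef(genes_b, pheno_b)[0, 1] ** 2
print(f"variance explained by genes within groups: {within_h2_a:.2f}, {within_h2_b:.2f}")
print(f"gap between group means: {pheno_a.mean() - pheno_b.mean():.1f}")
```

None of the particulars matter; the sketch only illustrates why the within-group figure, taken alone, says nothing about the source of the between-group gap.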

If at first glance Lewontin’s analogy seems to render the whole affair an epistemological dead end, at least as a practical matter, we should note that it would indeed be possible to design a definitive experiment; all you would have to do is take representative groups of infants of different racial backgrounds and raise them up in artificially controlled environments without access to any cultural cues that might sour the results.  Maybe they could be tended over by robots in geodesic domes or something. I don’t know.  The point is that if the environments could be objectively equalized, any psychometric differences that remained between groups could then be safely chalked up to biology. And if there were no differences, well, then the null hypothesis would be confirmed. Simple as that.  The problem, needless to point out, is that such an experiment would be, um, ethically unconscionable.

Within the bounds of permissible inquiry, then, the question will hinge, with necessary imprecision, on whether there are environmental influences confounded with the idea of race that are sufficient to explain the differences. The trouble is that efforts to identify such variables have not proven fruitful.

Let’s look at how some of the most plausibly imagined culprits have fared under scientific scrutiny.

During my lackluster life as a college student, I frequently encountered the claim that IQ tests were simply rigged against black subjects. The case for internal cultural test bias was sometimes illustrated with something called the “chitling test” where students (read: white students) would be asked to answer a battery of questions centered around African-American folkways and culturally distinctive terminology, the implication being that standardized IQ tests were similarly imbued with Euro-centric cues that would be inaccessible to black subjects. We were also referred to discrete SAT and IQ test questions where references to regattas or holiday customs or suchlike were said to stack the odds against minority test-takers who, it was implied, couldn’t reasonably be expected to have cultural exposure to such arcane knowledge.

I remember thinking the whole business had a dubious odor, and my suspicion turned out to be well founded. Even if the idea remains entrenched in the popular imagination, internal cultural test bias – at least in the simple guise implied by the chitlings and regattas – has been rigorously examined and unequivocally rejected by scholars for decades. Contrary to what a culture bias model would predict, it turns out that black test subjects, regardless of their socioeconomic background, typically score relatively higher on those test items that are rated as being culturally sensitive. Over time and across cultures, the racial disparities are more acute on tests and test items that draw upon abstract nonverbal reasoning.

The classically observed racial rank order has been consistently documented, for example, using Raven’s Standard Progressive Matrices, a well-recognized nonverbal psychometric battery that relies purely on abstract problem solving.  Moreover, a consistent finding is that those tests that are more highly correlated with the general factor underlying all measures of cognitive ability are the ones that yield the starkest racial differences.  This is true even with tests of simple “reaction time” and “inspection time,” which cannot reasonably be argued to involve any cultural component.

Of course, sniffing out culturally suspect test items may make for good undergraduate fun, but the more statistically relevant question concerns external or "predictive" bias. If IQ tests are intrinsically biased against some groups, it should follow that their value as tools with which to predict scholastic and vocational achievement would reflect the difference. This is not the case. In their summary of extant (and extensive) research on predictive test bias, the previously mentioned APA task force made clear that “[c]onsidered as predictors of future performance, the tests do not seem to be biased against African Americans.” If anything, some research shows that IQ scores slightly over-predict real world performance for blacks, which is another can of worms we shall save for another day.
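For the curious, predictive bias is usually tested as a regression exercise (the so-called Cleary model): fit one line from test score to criterion, then ask whether group-specific intercepts or slopes improve the fit. A toy sketch, built on entirely fabricated data and hypothetical variable names, might look like this:

```python
# Toy illustration (made-up data) of a Cleary-style predictive bias check:
# regress the criterion (e.g., GPA or job ratings) on test scores, then ask
# whether adding group-specific intercept and slope terms adds anything. If
# the common regression line fits both groups, the test is not predictively
# biased in this sense.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)                 # hypothetical 0/1 group indicator
score = rng.normal(100, 15, n)                # test score
outcome = 0.05 * score + rng.normal(0, 1, n)  # criterion generated WITHOUT group effects

X_common = sm.add_constant(np.column_stack([score]))
X_grouped = sm.add_constant(np.column_stack([score, group, score * group]))

fit_common = sm.OLS(outcome, X_common).fit()
fit_grouped = sm.OLS(outcome, X_grouped).fit()

# F-test: do group-specific intercept/slope terms improve prediction?
print(fit_grouped.compare_f_test(fit_common))  # (F statistic, p-value, df_diff)
```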

If the weight of evidence forces them to grudgingly acknowledge that the tests aren’t biased, many IQ critics will hang their hopes on the seemingly plausible idea that familial discord and socioeconomic disadvantage must be to blame. I imagine this is the explanation that first occurs to most people, and at a cursory glance a review of the data might seem to support common sense. Once again, however, problems arise as soon as the confounding influence of within-group genetic factors is taken into proper account.

In glaring contrast to received opinion, behavior geneticists – notably Robert Plomin and the late David C. Rowe – have established that within-family factors such as illness, birth-order, parental favoritism, peer-relations, and any number of idiosyncratic life experiences  account for a far greater share of all environmentally-rooted behavioral variation than such classically suspect between-family factors as educational opportunity, parental income, and  social class. 

More importantly, since behavior-genetic twin research shows no discernible contrast in how this surprising "gene-environment architecture" adds up in black versus white families, the question thus arises as to how it is possible that between-group environmental differences, while accounting for so little of the observed variation within families, should nevertheless be expected to account for the lion’s share of IQ variation among races.

Family environment-centered theories are more conspicuously challenged by the inconvenient fact that racial achievement gaps persist even after the imputed environmental variables are equalized. Contrary to what would be expected if racial differences in IQ were an artifact of socioeconomic factors, it turns out that disparities in black and white test scores are actually more pronounced in affluent communities where black and white students share similar advantages. In an addendum to their exhaustive review of research on race and intelligence, Arthur Jensen and J. Philippe Rushton pointedly ask how critics can explain “the fact that Black students from families with incomes of $80,000 to $100,000 score considerably lower on the SAT than White students from families with $20,000 to $30,000 incomes?”

It’s a good question.  Any ideas? I’m sure you can come up with something.

Environmentally-bound explanations are further undermined by their failure to fashion a coherent response to the formidable body of evidence showing that a wide range of physiological traits, including brain size (as measured by sophisticated neuroimaging technology), correlates with the racial rank orders documented in intelligence tests. The proposition that brain size correlates with IQ — and more specifically, general intelligence — is no longer a subject of serious controversy. A good summary of the state of the evidence was recently set out in Jeremy Gray and Paul Thompson’s paper for the journal Neuroscience, which states:

Correlations between intelligence and total brain volume or grey matter volume have been replicated in magnetic resonance imaging (MRI) studies, to the extent that intelligence is now commonly used as a confounding variable in morphometric studies of disease. MRI-based studies estimate a moderate correlation between brain size and intelligence of 0.40 to 0.51.

This being the case, it might follow that a genetically weighted interpretation of the observed racial differences in IQ would  predict a racial rank order in brain volume.  Well, guess what:

Overall, MRI studies show that brain size is related to IQ differences within race. Moreover, the three-way pattern of group differences in average brain size is detectable at birth. By adulthood, East Asians average 1 cubic inch more cranial capacity than Whites, and Whites average 5 cubic inches more cranial capacity than Blacks. These findings on group differences in average brain size have been replicated using MRI, endocranial volume from empty skulls, wet brain weight at autopsy, and external head size measures. They were acknowledged by Ulric Neisser, Chair of the APA’s Task Force on intelligence, who noted that, with respect to “racial differences in the mean measured sizes of skulls and brains (with East Asians having the largest, followed by Whites and then Blacks) . . . there is indeed a small overall trend”.

That’s from Arthur Jensen and J. Philippe Rushton’s article "Thirty Years of Research on Race Differences in Cognitive Ability," published in the June 2005 issue of the APA journal, Psychology, Public Policy, and Law, which is probably the most exhaustive one-stop synthesis of the empirical and theoretical state of the debate currently available.

In presenting their formidable case for moderate to strong hereditarianism, Jensen and Rushton triangulate from far-ranging sources of evidence. In addition to the brain size stats, their paper looks at worldwide racial patterns in IQ distribution, and considers the accumulated burden of evidence gathered from behavior genetics.  They emphasize that race differences are most profound in tests of the general factor (usually referred to as g — more on which later) latent in all tests of cognitive ability, and they survey the growing body of evidence from trans-racial adoption studies, racial admixture studies, evolutionary psychology, physical anthropology and other disciplines. 

In every case, Rushton and Jensen argue, the weight of the evidence is more consistent with what would be expected if genes play a strong (but not exclusive) role in determining racial differences in intelligence.  If you’re up to the task, the Jensen/Rushton report is, quite simply, a must-read. If you begin to feel the earth shift beneath your feet, don’t worry; you can always seek salvific refuge in one or more of the rejoinders which were published in the same issue.  Just be sure not to read R&J’s response to said rejoinders.  Wouldn’t want to spoil a happy ending.

At a loss to explain the surfeit of data, those who cling to strictly environmental explanations are left to speculate about some unknown factor that might yet quell our worst suspicions. “The real problem,” as summarized by a contributor to the invaluable Gene Expression forum, “is how to test the factor X theory, that the black-white IQ gap is due to something unique to the black environment that affects all blacks equally but is completely absent from the white environment in a way that could evade all detection thus far.” This, then, is where the rest of the chips must fall. It must be “stereotype threat” or hypertension, or academic disengagement. Or something else, goddammit.

Yet with every failed attempt to identify a magic factor X, the unthinkable hereditarian hypothesis lurks nearby reminding us of its prima facie plausibility.         

In searching for environmental sources to explain the resilient differences between races, the fact that IQ is in large measure heritable may not provide a strictly deductive bridge, but if you consider the broader  hereditarian reality in combination with the impotent track record of such strictly cultural-environmental factors as have been tried and tested, and if you are honest, you will be hard-pressed not to assign greater plausibility to the idea that genes play a significant role. When you consider the physiological correlates, the dimming hope for the cultural-environmental equivalent of Lewontin’s nutrient-deficient soil assumes the cadence of a desperate prayer.            

Reasonable people may disagree as to whether some unquantifiable constellation of factors might yet account for the gap, but it is reason, I fear, that is too often lost in the din of disputation. Just because something is theoretically possible doesn’t mean it is plausible. This is easily recognized when the political stakes are lowered.

Consider, for example, the correlation between smoking and lung cancer. As absurd as such an argument might seem on its face, it is conceivably possible that the imputed causal link between tobacco consumption and cancer risk is entirely illusory. Rather than playing an independent role as typically assumed, the act of smoking theoretically could turn out to mask a host of confounding behavioral or carcinogenically predisposing factors that are simply more common among smokers than non-smokers.  Scientists don’t waste time and resources pursuing the elusive equivalent of Lewontin’s desert soil in the smoking population, however, because to do so would be a fool’s errand and a public disservice; common sense and the weight of evidence lead us to favor a causal link as the default hypothesis. Smoking causes cancer, even if in the strictest sense it remains unproven.

I think a disinterested appraisal of the evidence similarly favors the hereditarian hypothesis with respect to observed racial differences in IQ. Period.  Where racial differences are at issue, Occam’s Razor has been ignored for too long. Our intentions may have been noble, but it is surely more noble to confront unpleasant possibilities.

But quite regardless of our best intentions and worst suspicions, we may soon have to deal with a more definitive body of evidence.  Notwithstanding the unconscionable experiment mentioned earlier, recent advances in genetic analysis provide a more direct means of testing the hereditarian hypothesis. 

I turn again to that scoundrel Charles Murray, who just can’t seem to keep his mouth shut.  As summarized in his important essay, “The Inequality Taboo,” here is how a more definitive answer could be extracted:

Take a large sample of racially diverse people, give them a good IQ test, and then use genetic markers to create a variable that no longer classifies people as “white” or “black,” but along a continuum. Analyze the variation in IQ scores according to that continuum. The results would be close to dispositive.

Anticipating the usual criticism from the social constructionist gang, Murray goes on:

The results of such a study would be especially powerful if the study also characterized variables like skin color, making it possible to compare the results for subjects for whom genetic heritage and appearance are discrepant. For example, suppose it were found that light-skinned blacks do better in IQ tests than dark-skinned blacks even when their degree of African genetic heritage is the same. This would constitute convincing evidence that social constructions about race, not the genetics of race, influence the development of IQ. Given a well-designed study, many such hypotheses about the conflation of social and biological effects could be examined.
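In statistical terms, Murray is describing a fairly ordinary regression design. The sketch below is built on entirely fabricated variables (an "admixture" estimate from genetic markers and a rated "skin_tone") and is only meant to show the shape of the analysis, not to stand in for the study he proposes:

```python
# Rough sketch of the analysis Murray describes, on fabricated data.
# "admixture" stands in for a marker-based ancestry estimate on a 0-1
# continuum, "skin_tone" for rated appearance. Regressing IQ on both lets you
# ask which variable carries the predictive weight. No effect is built into
# the simulated IQ scores; nothing here is real data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
admixture = rng.uniform(0, 1, n)                     # hypothetical ancestry estimate
skin_tone = 0.7 * admixture + rng.normal(0, 0.2, n)  # correlated with ancestry, imperfectly
iq = 100 + rng.normal(0, 15, n)                      # placeholder outcome

X = sm.add_constant(np.column_stack([admixture, skin_tone]))
result = sm.OLS(iq, X).fit()
print(result.summary())
```

Under a purely social-constructionist account, the appearance variable rather than the ancestry variable should pick up the coefficient; under the hereditarian account, the reverse.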

So the cards are on the table, and proof is in the offing.  Place your bets. 

The Fallacy of the Fallacy of Reification

Beset and beleaguered by mounting evidence, a vocal minority of critics seek refuge in the hermeneutically parsed denial of the very idea of intelligence. Radical strains of such criticism find succinct expression in the oft-repeated mantra that IQ scores are simply "what IQ tests measure," which is a bit like pointing out that temperature is merely what  thermometers measure – it may be tautologically accurate, but it contributes nothing to our understanding of reality.

There is something very nearly absurd about this tack.  By dint of  predisposition, scholars and intellectuals are people who trade in the practical realities and deeper vicissitudes of applied intelligence as a matter of course; they spend their professional lives thinking and arguing in high-stakes cognitive competition with colleagues, yet when the subject of intelligence is raised in an empirical context, these selfsame intelligence-obsessed people suddenly trip into paroxysms of denial. It’s like a devoted baseball fan who refuses to accept the relevance of batting averages. Intellectuals are obsessed with intelligence. When they play at denying its reality,  it’s just hard to take them seriously. And when the IQ deniers sneer over George Bush’s middling SAT scores or scream injustice when some borderline retarded murderer is primed for the chair, well, the irony-bordering-on-hypocrisy almost begs for a punch line.       

I haven’t come up with one.  And even when I can summon the requisite patience to suspend my incredulity and chew over the rehearsed palaver about social construction and reification, however cleverly it may be formulated, I am always left with the same stubborn questions, which invariably distill into one question: What about the fucking data?

The inescapable fact is that IQ scores, whatever they represent, effectively if imprecisely predict human destiny across a host of measures, probably better than any known variable in the social sciences.  This is true across racial lines.  This is true after socioeconomic factors are taken into account. This is true whether test subjects are male or female. This, quite simply, is true.

So if you want to play Foucauldian parlor games with intelligence, go ahead and spin your wheels. But in doing so, just remember that you forfeit your rational grounds to whip up the bluster over “class” or “culture” or whatever nominative categories may be custom-fitted to your preferred worldview. You can’t have it both ways, smart guys.  Either explain the data or cop to the sociological nihilism at the core of your high-minded casuistry.  As Andre Marrou used to say, it really is that simple.

Even if there are aspects of intelligence that remain impervious to empirical reduction – and it would be silly to argue otherwise – we are left with the range of the spectrum that we can measure, and measure with relative consistency.  And at some point we must resign ourselves to facing the implacable reality that agglomerated test scores predict important aspects of human destiny with consistency and accuracy over time. The map should not be confused with the territory, but you can’t travel far without one.  The practical value of a map is that it gets you closer to your destination. Same with IQ.  The proof is in its independent predictive power.

What does IQ predict? Linda Gottfredson of the University of Delaware has studied the predictive utility of cognitive tests in relation to a broad range of functionally important social dimensions.  Her investigations demonstrate, inter alia, that general intelligence as measured by IQ tests “can be said to be the most powerful single predictor of overall job performance,” meaning that in most cases a simple mental snapshot will tell us more about how a person is likely to do on the job than their formal education, or their personality, or even their past experience. Not only does IQ predict job performance at practically every level of complexity, it also predicts health and longevity, even after socioeconomic status is taken into account. IQ even predicts what kind of jokes are more likely to amuse us.

The simplest explanation, of course, is that IQ transcends the parameters of the test.  Rather than being a narrow scholastic measure or an artifact of self-contained psychometric games as the reification crowd would have it, objectively measured intelligence provides a roughly accurate read on how people process information and make decisions in the real world.

The causal link between IQ and practical decision making is perhaps nowhere better illustrated than in military life, where basic mechanical proficiency and the ability to make snap judgments take on crucial importance.  In her 1998 article for Scientific American, Gottfredson highlights a 1969 study commissioned by the U.S. Army finding that “enlistees in the bottom fifth of the ability distribution required two to six times as many teaching trials and prompts as did their higher-ability peers to attain minimal proficiency in rifle assembly, monitoring signals, combat plotting and other basic military tasks.” 

And as the indefatigable Steve Sailer recently noted:

One of the least known but most decisive facts in the pseudo-controversy over the validity of IQ tests is that the U.S. military, after 88 years of intensively studied experience with giving IQ tests to tens of millions of potential recruits, remains utterly committed to using them. Indeed, since 1992, when the end of the Cold War and the destruction of Iraq, reduced the need for a giant standing army, only about one percent of all new enlisted personnel have gotten in with scores under the 30th percentile nationally on the military’s entrance test.               

When resources are limited and the stakes are high, you stick with what works. And IQ works. No matter how desperately the deniers try to huff and puff it away.   

There is nothing at all wrong with formalizing nebulous phenomena for the purpose of measurement and meaningful analysis. We do this all the time, with weather forecasts and price indexes, with compass points and movie ratings. Numerical distinctions, however arbitrarily assigned, can help us to apprehend phenomena that would otherwise defy coherent analysis. 

The real question goes to whether the metric yields more information about the thing being described. And whatever it is, this socially reified phantom-quantum that IQ tests measure, it happens to tell us a lot about how people are likely to behave in this big beautiful world.  IQ isn’t the only explanation, but it is a great explainer.   

______________________________________________________

This horse ain’t dead yet. Be sure to check back for the final installment, in which The Hog appraises recent developments in genomic research and dashes all remaining hope. 


Shocked, Shocked, SHOCKED…

The third part in my series on the Bell Curve bugaboo should be up in a day or two, but if you suspect I’ve been wasting my energy in focusing on Jim Naureckas’s relatively inconsequential smear job for Fairness and Accuracy in Reporting, you might want to read up on the ongoing efforts of another self-described media monitoring group to publicly chastise NBC News for having the brazen audacity to air a segment — about Hollywood, no less — featuring a 15 second clip of The American Conservative‘s house film critic, Steve Sailer, without disclosing said Sailer’s unspeakably nefarious links to the usual SPLC-fingered purveyors of thoughtcrime. 

It’s the same pathetic neo-McCarthyite MO as that favored by Naureckas and kin: trot out the gallery of "racist" rogues, play up the tangential associations, and proceed to tarnish anyone who deigns to provide a forum for the persona non grata in the cross-hairs.  The article even mentions the Pioneer Fund, for fuck’s sake.  And David Irving, naturally.  It’s like déjà vu all over again.   

For his part, Sailer seems to be taking the whole affair with his usual good humor, speculating along the way about some curious intersections between his career and that of Media Matters CEO, David Brock, whom I pegged as a shameless apparatchik hack long before he was Blinded by the Right.

Stay tuned.    

Please Do Not Disturb Professor Dennett

Is it just me, or is Daniel Dennett becoming a crotchety old fart?  First he signs on for that dead-end campaign to rename atheists "Brights," which may have been the most obnoxiously flaky lost cause since metric time. Then, in the course of stumping for his new book, Breaking the Spell: Religion as a Natural Phenomenon, disbelieving Dan files this testy exchange with  The New York Times Magazine.

An excerpt:

Interviewer: I take it you do not subscribe to the idea of an everlasting soul, which is part of almost every religion.

Dennett: Ugh. I certainly don’t believe in the soul as an enduring entity. Our brains are made of neurons, and nothing else. Nerve cells are very complicated mechanical systems. You take enough of those, and you put them together, and you get a soul.

Interviewer: That strikes me as a very reductive and uninteresting approach to religious feeling.

Dennett: Love can be studied scientifically, too.

Interviewer: But what’s the point of that? Wouldn’t it be more worthwhile to spend your time and research money looking for a cure for AIDS?

Dennett: How about if we study hatred and fear? Don’t you think that would be worthwhile?

Ugh, indeed.  It’s almost as though the famed philosopher of deep Darwinism couldn’t be bothered with the interviewer’s perfectly reasonable (if predictable) questions on the very subject of his purported expertise.

Maybe he was just having a bad day. And I suppose Dennett’s air of glib dismissiveness may  read as "refreshing" among a certain segment of his already convinced audience. Yet I can’t help being reminded of the caricature of neuroscientific determinism from Tom Wolfe’s entertaining essay, "Sorry, But Your Soul Just Died":   

I have heard neuroscientists theorize that, given computers of sufficient power and sophistication, it would be possible to predict the course of any human being’s life moment by moment, including the fact that the poor devil was about to shake his head over the very idea.

I don’t know.  A certain elitist posture is probably inevitable in these matters.  There is a profound difference, however, between the kind of respectfully cultivated elitism that follows from the humility of a scientific worldview, and the kind of vaguely hostile condescension that trades in contempt for people who may be less intelligent or less inclined to shed their supernatural comforts.  Even as Dennett deigns to turn his scientific gaze toward the superstitious predispositions afflicting so many billions of hapless human brains, his simmering impatience with the spell-enchanted rabble is ever more palpable. And telling.
 
For good measure, I suggest contrasting Dennett’s flippancy with the deft and respectful — yet no less uncompromising — exposition of neo-Darwinian verities proffered by the conservative writer, John Derbyshire, notably in his recent debate with Tom Bethell over the increasingly noisy subject of intelligent design, and more substantially in his important essay, "The Specter of Difference," which touches upon some of the more discomfiting sociobiological undercurrents of our post-genomic era.

In the former exchange, Derbyshire serves up an engaging analogy to delineate the epistemological parameters of scientific inquiry:

…yes, material causes only are admitted in science, because science is the attempt to find material explanations for observed phenomena.  Likewise, only hollow balls 2.5 inches in diameter are allowed in tennis, because tennis is a contest played with 2.5 inch diameter hollow balls.  Whether other kinds of balls exist is a matter of opinion among tennis players and fans, I suppose; though if a player were to come on court and attempt to serve a basketball across the net, the rest of us would walk away in disgust.

Which, I hope and suspect, is the sort of caveat that a scientifically curious laity would be better served to understand, especially with all this tiresome ID flummery still making the rounds.

As it happens, I tend to agree, at least in theory, with Dennett’s materialist-to-the-core account of even the most ineffable and deeply-felt pseudo-spiritual pining, and despite my grumbling, I remain something of a fan.  Dennett’s studies on the vagaries of consciousness are out of my depth, but Darwin’s Dangerous Idea was a hoot, and I like to point out that Dennett — along with Peter Singer, Richard Rorty and David Stove — ranks among the few contemporary philosophers who can be read and understood without the benefit of an academic background, which, in case you’re wondering, is a compliment.

Still, with high stature public intellectuals in precipitous decline, it seems reasonable that we might expect a little more from one of the best.  So while I’m sure I’ll look into Dennett’s disquisition on the god goblins once it hits the remainder bin, for now a little Derb will do.   

Peter Sotos Interview

Editor’s note: If you came here looking for the Hoover Hog interview with Peter Sotos, click here.

____________________________________

If you’re inclined to let your mind out on a different kind of tether, The Fanzine has just posted Brandon Stosuy’s fascinating interview with outlaw litterateur, Peter Sotos (an edited version of which recently appeared in the Prague Literary Review). 

More than twenty years since his life-defining run-in with the authorities, Sotos’ work continues to advance a uniquely insightful — and disquieting — perspective on the nature of sexual deviance and the human condition. Whatever you may think of him, Peter is a born writer whose literary output has yet to receive its due critical attention.

Here is Sotos commenting on the subjective nature of pornography:

It’s impossible to apply grand definitions to pornography because the intense precision of individual taste is central. This is why laws and the necessary text on pornography are so loud and popular. It allows different sides to establish self-serving moral grounds but never a concrete or unified answer. One defines pornography for oneself only. The act of masturbation wouldn’t, obviously, qualify everything as pornography but rather what one is looking for in pornography. Just that these objects are capable of being used as pornography. I’ve seen far too much of this so-called "transgressive" pornography that is completely defined by the ridiculous arguments of those who seek to vilify printed words and pictures. The ones that bask in their naked freedom and flaunted spirituality are just as ugly, just as obscene, as the ones who constantly beg you to watch out for their children’s future.

…on his place in the world of letters:

I know where others say they see me fitting in. But, honestly, I don’t think in those terms at all. I don’t see anyone else doing what I do. Which sounds terrible, I know. But I don’t feel much kinship with contemporary writers, especially those who create fiction. My interest is in completely the other direction. There are writers whose work I love, of course, and it’s nice when some people make certain smallish comparisons. Sade, Dworkin… But nothing in terms of an ongoing tradition.

…and on his kinship (sorry) with the late feminist writer, Andrea Dworkin:

I think Andrea Dworkin cared very deeply about her words being more than that – just words. I’m certain that I do, as well. But we don’t see the frustrating impossibilities of that action in the same context or towards the same result.

Intrigued?  Read the whole thing.  My review of Peter Sotos’ recent book, Selfish, Little: The Annotated Lesley Ann Downey will appear in the next issue of PLR.

You, Me, and The Bell Curve – Part Two: Behavior Genetics and Gouldian Knots

Editor’s note: this is the second post in a series on The Bell Curve controversy. To catch up on Part One, click here.

______________________________________________________

Sociology and Behavior Genetics: A Failure to Communicate

In one of his less bombastic turns, Naureckas refers us to work conducted by sociologist Jane Mercer, which he claims “has shown that supposed racial differences in IQ vanish if one controls for a variety of socio-economic variables.” Mercer’s research, which is discussed in The Bell Curve, actually looked at a group of 180 Latino and 180 white elementary school children and found that by controlling for the mother’s civic participation, place of residence, language, socioeconomic status, education, urbanization, values, home ownership, and family cohesion, the significance of IQ can be reduced to a negligible factor.

Claims of this ilk abound in the sociological literature and were prominently discussed in the most valuable collection of Bell Curve criticism to date, Intelligence, Genes, and Success. Closer analysis, however, typically reveals a kind of post-hoc data-massaging that would be laughable in other disciplines. While such efforts have the merit of relying on solid sociological methods, they are invariably bedeviled by the possibility of confusing causal relationships.   As Murray and Herrnstein emphasize in their discussion of Mercer’s research, the difficulty is that her method “broadens the scope of the control variables to such an extent that the process becomes meaningless.” “[T]he obvious possibility,” they note, “is that Mercer has demonstrated only that parents matched in IQ will produce children with similar IQs – not a startling finding.”

Of course, even if such studies are at odds with Occam’s Razor, they could nevertheless provide a sound refutation of the hypothesis that IQ plays a primary role in human destiny. The trouble is that once a researcher has zeroed in on some magic constellation of variables to make the IQ differences "vanish," the burden remains to explain why the custom-fitted factors do not themselves result from native ability. It’s a classic chicken or the egg problem. 

Fortunately it is possible to disentangle the confusion.

As it turns out, there is little controversy about the genetic basis for the variance of intellectual traits within human population groups. The cumulative weight of numerous twin, adoption, and sibling studies shows that genes account for anywhere from 50% to 80% of the variance in IQ, making intelligence – or whatever it is that IQ tests measure – one of the most heritable traits ever documented in the social sciences. It is equally true that IQ, to some considerable extent, influences educational and vocational success.
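Where do figures like 50% to 80% come from? The crudest version is Falconer’s formula, which backs heritability out of the gap between identical- and fraternal-twin correlations. The correlations plugged in below are illustrative round numbers of the sort reported in the twin literature, not results from any particular study:

```python
# Back-of-the-envelope heritability from twin correlations (Falconer's
# formula). The correlations below are illustrative round numbers, not
# figures taken from any single study.
def falconer_h2(r_mz, r_dz):
    """h^2 ~= 2 * (r_MZ - r_DZ); shared environment c^2 ~= r_MZ - h^2."""
    h2 = 2 * (r_mz - r_dz)
    c2 = r_mz - h2
    return h2, c2

h2, c2 = falconer_h2(r_mz=0.85, r_dz=0.60)
print(f"heritability ~ {h2:.2f}, shared environment ~ {c2:.2f}")
# -> heritability ~ 0.50, shared environment ~ 0.35
```

Real behavior-genetic estimates come from model-fitting across many kinship designs, but the intuition is the same: the more closely related people are, the more alike they test.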

Taken together, these facts delineate the crux of the problem with Mercer’s method and conclusions. Richard Herrnstein famously expressed it in the form of a syllogism, which goes a little something like this:

· If differences in mental ability are inherited, and

· If success requires those abilities, and

· If earnings and prestige depend upon success

· Then social standing (which reflects earnings and prestige) will be based, to some extent, on inherited differences among people

Obviously, the trick is to come up with a way of sorting out the independent effect of IQ in comparison with the environmental complexes identified by critics. To this task, the field of behavioral genetics has stepped up with the necessary insights and methodological tools.

Behavior geneticists are known for their comparative studies of twins and adopted children, which have radically transformed our understanding of the classic debates over nature and nurture. But another straightforward approach is to look at siblings raised in the same family.  By definition, brothers and sisters from intact families share roughly the same home environment, the same socio-economic background, and the same educational opportunities, so they provide a useful way of gaining perspective on what’s really happening.

When researchers approach the problem this way, the independent role of IQ re-emerges in force. For example, Charles Murray’s own post-Bell Curve analysis of approximately 3000 sibling pairs from the NLSY database found that even when they are raised in the same home and matched in their socioeconomic background, blood-related children with different IQs show markedly divergent patterns in achievement across a host of measurable indices. Specifically, he found that even when “both children had attended elementary and secondary schools for the same number of years, only 18 percent of the siblings with ‘normal’ IQs (in the 90 to 109 range) got bachelor’s degrees, while 83 percent of their brothers or sisters in the very bright category (IQ of 125 or above) did so.”
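
The logic of the design is simple enough to sketch in a few lines of code. The records below are invented for illustration (two siblings per family); this is not Murray’s NLSY extraction:

    # A minimal sketch of the sibling-comparison logic. Whatever outcome gap appears
    # here cannot be blamed on home, neighborhood, or parental SES, because those
    # are shared within each family.
    from collections import defaultdict

    records = [  # (family_id, iq, earned_bachelors) -- hypothetical
        (1, 128, True),  (1, 96, False),
        (2, 131, True),  (2, 104, True),
        (3, 125, False), (3, 92, False),
        (4, 134, True),  (4, 101, False),
    ]

    families = defaultdict(list)
    for fam, iq, degree in records:
        families[fam].append((iq, degree))

    only_higher = 0
    for sibs in families.values():
        sibs.sort()                                  # lower-IQ sibling first
        (lo_iq, lo_deg), (hi_iq, hi_deg) = sibs      # assumes two siblings per family
        only_higher += int(hi_deg and not lo_deg)

    print(f"{only_higher} of {len(families)} families: only the higher-IQ sibling got a degree")

With the real NLSY pairs, the same within-family comparison is what produces the 18 percent versus 83 percent split quoted above.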

Murray’s sibling analysis further demonstrates that IQ plays a similarly strong role in explaining occupational achievement and economic success, even when socioeconomic factors are shared. In the 1998 monograph Income Inequality and IQ, he divides the sibling groups into five cognitive strata based on IQ. Comparing the sibling cohorts with the full NLSY sample yields a revealing snapshot of the limits of family influence.

To wit:

Cognitive Class        Full sample    Sibling sample

Very Bright (125+)     $36,000        $33,500
Bright (110-124)       $27,000        $26,500
Normal (90-109)        $20,000        $20,000
Dull (75-89)           $12,400        $14,000
Very Dull (< 75)       $5,000         $7,500

(Figures are median 1992 earnings.)

“In 1992,” Murray explains, “the median earnings for the Normals was $20,000. Their Very Bright siblings were already averaging $33,500 while their Very Dull siblings were making only $7,500. Once again, the Brights and Dulls each fell about halfway between ($26,500 and $14,000 respectively).”

In highlighting the significance of such data, Murray invokes a simple yet eloquent thought experiment. “These are large differences,” he writes:

Think of them in terms of a family reunion in 1992, with one sibling from each cognitive class, sitting around the dinner table, all in their late twenties to mid thirties, comparing their radically different courses in the world of work. Very few families have five siblings so arranged, of course, but the imaginative exercise serves to emphasize that we are not comparing apples and oranges here–not suburban white children with inner-city black children, not the sons of lawyers with the sons of ditch diggers–but siblings, children of the same parents, who spent their childhoods under the same roof. They just differed in their scores on a paper-and-pencil mental test.

In the vast and growing literature of behavior genetics, innumerable studies attest to the strong heritability of intelligence and the comparatively modest role of shared environmental influences. There are tweaks and peaks in the topography; for example, twin and adoption studies often show that modest gains in childhood IQ can be chalked up to family influence – and the effect is indeed more pronounced at the socioeconomic extremes – but such gains invariably trail off over time as the primacy of family environment gives way to the social environment that people create by dint of choice and innate disposition. Indeed, one of the most paradoxical and revolutionary discoveries in the field is that the genetic component of behavioral traits becomes more pronounced as people grow older.

When confronted with the findings of behavior genetics, sociologists often respond by plugging some newly devised concatenation of factors into the mix until the picture morphs into something more consonant with their preferred reality. This is always an option, but once the classically suspect socioeconomic culprits have been considered and controlled for, you have to be extra careful not to impute causal significance to factors that are more parsimoniously understood to be the result of IQ.

As Steve Sailer pithily puts it, “You can make all sorts of things disappear by ‘controlling for’ variables that are closer to symptoms than causes. For instance, you can make the average height gap between the Dutch and the Japanese disappear by ‘controlling for’ length of the pants hanging in their closets.” But reality doesn’t yield to statistical caprice. 

Life is full of disappointments. That’s why there’s popcorn and binge-drinking.

Say it Five Times Before Breakfast: Stephen Jay Gould did not refute The Bell Curve

Of course, documenting obdurate group differences is a task far different from determining wherefore such differences arise and persist. It is this why question that constitutes the meat and pornography of the debate, even if, truth be told, it was never a major part of The Bell Curve’s central thesis. Approaching the thorny question of what causes group differences in intelligence to be so resilient is tricky business, as we shall see, and until more sophisticated methods are available (and they’re coming sooner than you think), it remains largely a game of elimination, triangulation, and speculation.

Considering that Naureckas expends so much effort trying to convince us that the basic claims about the existence of group differences are without merit, he isn’t in much of a position to engage this issue. To the extent that he tangentially broaches the etiological borderlands of the debate at all, he seems content to pluck from a smattering of critical sources until predictably turning over the mike to Stephen Jay Gould, whom he claims “thoroughly debunked” the whole sordid business with his “classic work on the pseudo-science behind eugenics, The Mismeasure of Man.”

At this point, we should all take a deep breath. Because I know what you’ve been told. And I know what you’ve read. I know all about those eloquent New Yorker essays and the pop-references to spandrels and evolutionary contingency, and I saw him on Nightline, too. Oh, and let’s not forget that Simpsons episode. Yeah, he was good. Always a chuckle in his voice. A real scientist for the people. A national treasure whose absence from the current scene is duly and respectfully noted. He had that special knack for bridging the gap between the two cultures.

A rare bird indeed.

But you can keep all that. Because despite his long-cultivated media-enabled cachet as an intellectual dragon slayer, the banal truth is that Stephen Jay Gould was neither a coherent nor respected authority in the IQ controversy. Sure, The Mismeasure of Man continues to enjoy glowing affirmations in the popular press, and it doesn’t look like anyone’s going to take away that National Book Award any time soon, but notwithstanding his quasi-saintly reputation among the learned laity, Professor Gould’s work – whether as an IQ debunker or as an evolutionary theorist – was never the subject of such credulous praise in the expert literature. In fact, he was scarcely taken seriously.

You don’t believe me, I know. But maybe your trusted compatriot Paul Krugman can jar you out of that trance. Speaking before the European Association for Evolutionary Political Economy in 1996, the respected economist cum Bush-bashing pundit provided a sober reality check on Gould’s imagined stature as an evolutionary theorist. “It is not very hard to find out, if you spend a little while reading in evolution, that Gould is the John Kenneth Galbraith of his subject,” said Krugman:

That is, he is a wonderful writer who is beloved by literary intellectuals and lionized by the media because he does not use algebra or difficult jargon. Unfortunately, it appears that he avoids these sins not because he has transcended his colleagues but because he does not seem to understand what they have to say; and his own descriptions of what the field is about – not just the answers, but even the questions – are consistently misleading. His impressive literary and historical erudition makes his work seem profound to most readers, but informed readers eventually conclude that there’s no there there…

Adding:

…if you think that Gould’s ideas represent the cutting edge of evolutionary theory (as I myself did until about a year and a half ago), you have an almost completely misguided view of where the field is and even of what the issues are.

And if Gould’s reputation as a top-flight evolutionary thinker can be understood as resulting from savvy self-promotion and culturati-hustling panache, the same MO largely accounts for his firmly entrenched public reputation as an IQ debunker, though in this role, as we shall see, he is on even shakier ground.

Surprised? Yeah, I was too. Yet the conspicuous contrast between Gould’s reception in the popular versus the academic literature was initially highlighted by the late Harvard geneticist Bernard Davis back in 1983. In his important essay “Neo-Lysenkoism, IQ and the Press,” first published in The Public Interest, Davis observed that whereas “the nonscientific reviews of The Mismeasure of Man were almost uniformly laudatory, the reviews in the scientific journals were almost all highly critical.” Davis took Gould to task for his selective account of the evidence, for his misuse of mathematical models, and, significantly, for the crypto-Marxist subtext of his arguments.

On that occasion, Gould responded with characteristic aplomb, but as the high-prestige rejoinders piled up, he wisely adopted the tack of ignoring his critics altogether. Why enter the fray when you’re riding high with the bourgeoisie?

Thus Arthur Jensen’s devastating review of The Mismeasure of Man remains unanswered. And Gould certainly couldn’t have been troubled to compose a response to big bad Phil Rushton’s thoroughgoing takedown of the 1996 reissue of his celebrated classic. A national treasure has better things to do than wrangle with such unseemly ruffians of academe. To paraphrase evolutionary psychologist Robert Wright’s riff in his acerbic Slate essay “Homo Deceptus: Never Trust Stephen Jay Gould,” a “savvy alpha male” stands to gain nothing from getting into a gutter brawl with scrawny, marginal primates.

Gould’s preferred method, on full display in his award-winning polemic, was to dredge up century-old science of ostensibly dubious merit and to have at it with a twentieth-century laser, debunking, discrediting, bashing, and belittling ad nauseam – all the while implying that the current lot of research can be just as readily dismissed since it is presumptively fraught with the same “philosophical errors.” It’s a clever bit of intellectual sleight of hand; by switching the deck, Gould invariably leaves the impression of having done a lot more debunking than has actually been the case.

But even within the terms of his retro-skeptical rigging, Gould’s efforts appear to be of diminishing merit. It has been pointed out that when Mismeasure was reissued in 1996, Gould conspicuously neglected to update his sources to account for important scientific developments that would have proven difficult if not impossible to reconcile with his central arguments. For example, a good portion of Mismeasure is devoted to discrediting the findings of nineteenth-century craniometry. Gould takes great pains to depict the skull-measuring scientists of bygone days as dupes of their own Eurocentric prejudices; eminent scholars such as Paul Broca, Samuel George Morton, and the great Sir Francis Galton, who purported to document a racial rank order in cranial capacity, were, Gould assures us, employing careless methods and crude metrics, their conclusions falsely derived to conform to their own deep-rooted racialist biases.

Yet Gould never addressed more recent re-analyses that examined the literal bones of contention with more objective tools. To do so would be to concede that the racial taxonomies documented by the long-deceased and much-defamed skull collectors in his rogues’ gallery were largely accurate. Nor did he deign to evaluate the numerous modern studies that document essentially the same racial differences using sophisticated neuroimaging technology, studies which consistently support moderate correlations between brain size and the general factor underlying intelligence tests.

As a critic of psychometrics broadly applied, Gould was prone to at once radical and almost certainly disingenuous pronouncements. The idea that a “general factor” might underlie the data gleaned from IQ tests he famously discounted as the “rotten core” of the whole sordid business. Yet his understanding of factor analysis has been roundly rejected by recognized experts. He consistently ignored evidence that would have been ill-suited to the hero’s narrative, steadfastly refusing to evaluate recent evidence showing that mental test scores correlate with a host of objective physiological and psychological measures such as reaction time, nerve conduction, and glucose metabolism.
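
For anyone unsure what “a general factor” even means operationally, here is a minimal illustration; the correlation matrix is hypothetical, chosen only to mimic the positive manifold that real test batteries display:

    # When every test in a battery correlates positively with every other (the
    # "positive manifold"), the first principal component of the correlation matrix
    # soaks up a large share of the variance; that dominant component is what the
    # general factor refers to. The matrix below is invented for illustration only.
    import numpy as np

    # Hypothetical correlations among four made-up tests
    # (vocabulary, arithmetic, matrices, digit span).
    R = np.array([
        [1.00, 0.55, 0.50, 0.40],
        [0.55, 1.00, 0.60, 0.45],
        [0.50, 0.60, 1.00, 0.50],
        [0.40, 0.45, 0.50, 1.00],
    ])

    eigenvalues = np.linalg.eigvalsh(R)[::-1]   # largest first
    print(eigenvalues)
    print(f"share of variance on the first factor: {eigenvalues[0] / eigenvalues.sum():.0%}")

Whether that dominant component reflects something real or is a mere statistical artifact is precisely the argument Gould picked, and the verdict of the psychometric literature did not go his way.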

I don’t know Gould’s preferred definition of pseudoscience, but ignoring evidence contrary to one’s favored conclusions doesn’t seem the way of dispassionate inquiry. That kind of chutzpah may pay off in the short term, but science, alas, is not a popularity contest.

_____________________________________________________________

Stay tuned for Part Three, in which the editor takes on the Chitling Test and expounds upon the fallacy of "the fallacy of reification."  To catch up on Part One, click here.

 

Marek Kohn on Race and Science: “It’s No Longer Black and White”

I’ll soon be posting the second part in my series on the Bell Curve controversy.  In the meantime, be sure to check out Marek Kohn’s surprisingly even-tempered article in The Guardian, "This Racist Undercurrent in the Tide of Genetic Research."  Despite the "racist" hook in the headline, Kohn plays it straight as he contrasts the torrent of outrage that greeted Murray & Herrnstein’s tome with the "undertone of complacency" that has hovered around more recent taboo-busting bombshells, such as last year’s widely publicized paper, "The Natural History of Ashkenazi Intelligence," which posited a genetic explanation for higher average IQs among Jews of Ashkenazi descent. 

Despite some signs of residual hostility (I’m not sure what to make of those "hardcore" race scientists),  Kohn lays out a pretty fair and balanced round-up of recent developments that may or may not portend a paradigm shift in the consensus thinking over matters of race and  biology:

…race has raised its head in public several times in the past year, and the reaction – or lack of it – has been notable. Murray restated his case, more magisterially than ever, in the magazine Commentary. The British biologist Armand Marie Leroi argued in the New York Times that race was a scientifically meaningful and medically valuable concept. His case has the implicit support of the US Food and Drug Administration, which has approved a heart drug, BiDil, that is intended specifically for black people. Discredited by association with the Third Reich, and discarded by mainstream science thereafter, racial science is pushing for rehabilitation on a range of fronts.

More significantly, Kohn’s analysis shows a refreshing degree of candor in rejecting the warmed-over rhetoric that’s too often trotted out in reflexive counterpoint to inconvenient facts:

In the past it was easy: an ominous reference to the Nazis and a snippet of scientific reassurance – such as the observation that there is more variation within so-called races than between them – would do the trick. But the hardcore advocates of race science have spent years working out answers to the standard rebuttals. And you cannot refute a scientific claim by referring to its historical baggage.

Kohn worries that an emerging acceptance of the new and improved race research may be coming a bit too easily for a public whose understanding of human biodiversity is still very much influenced by latent prejudice and "old fashioned racial notions." Personally, I suspect his read on the zeitgeist may be a shade premature, but he does point up one of the central ironies of forbidden subjects:

Over the years, the denial of race became almost absolute. Differences were only skin-deep, it was said – despite the common knowledge that certain groups had higher incidences of genetically influenced diseases. It became a taboo, and as the taboo starts to appear outdated or untenable, the danger is that unreflective denial will be replaced by equally uncritical acceptance.

The underlying point here is one I’ve been making for some time: when you cultivate a downward glance and hang every hope on the insistence that the sky doesn’t exist, looking up can be dangerous business. But the sky isn’t falling. And I remain optimistic that the average person can be trusted to separate the racial wheat from the racist chaff, no matter where the science leads.

Kohn advises that we proceed with due caution, which is fine.  The real lesson, however, goes to the inherent danger of taboo.  When all the cards are on the table, the notion that ideas can be dangerous may be the most dangerous idea of them all.