Editor’s note: this is the second post in a series on The Bell Curve controversy. To catch up on Part One, click here.
Sociology and Behavior Genetics: A Failure to Communicate
In one of his less bombastic turns, Naureckas refers us to work conducted by sociologist Jane Mercer, which he claims “has shown that supposed racial differences in IQ vanish if one controls for a variety of socio-economic variables.” Mercer’s research, which is discussed in The Bell Curve, actually looked at a group of 180 Latino and 180 white elementary school children and found that by controlling for the mother’s civic participation, place of residence, language, socioeconomic status, education, urbanization, values, home ownership, and family cohesion, the IQ difference between the two groups could be reduced to statistical insignificance.
Claims of this ilk abound in the sociological literature and were prominently discussed in the most valuable collection of Bell Curve criticism to date, Intelligence, Genes, and Success. Closer analysis, however, typically reveals a kind of post-hoc data-massaging that would be laughable in other disciplines. While such efforts have the merit of relying on solid sociological methods, they are invariably bedeviled by the possibility of confusing cause and effect. As Murray and Herrnstein emphasize in their discussion of Mercer’s research, the difficulty is that her method “broadens the scope of the control variables to such an extent that the process becomes meaningless.” “[T]he obvious possibility,” they note, “is that Mercer has demonstrated only that parents matched in IQ will produce children with similar IQs – not a startling finding.”
Of course, even if such studies are at odds with Occam’s Razor, they could nevertheless provide a sound refutation of the hypothesis that IQ plays a primary role in human destiny. The trouble is that once a researcher has zeroed in on some magic constellation of variables to make the IQ differences “vanish,” the burden remains to explain why the custom-fitted factors do not themselves result from native ability. It’s a classic chicken or the egg problem.
Fortunately it is possible to disentangle the confusion.
As it turns out, there is little controversy about the genetic basis for the variance of intellectual traits within human population groups. The cumulative weight of numerous twin, adoption, and sibling studies shows that genes account for anywhere from 50% to 80% of the variance in IQ, making intelligence – or whatever it is that IQ tests measure – one of the most heritable traits ever documented in the social sciences. It is equally true that IQ, to some considerable extent, influences educational and vocational success.
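The twin-study logic behind such heritability estimates can be sketched in a few lines. The classical shorthand is Falconer’s formula, h² = 2(r_MZ − r_DZ), which compares identical-twin and fraternal-twin correlations under a simple additive model. The correlations below are illustrative values in the range commonly reported for adult IQ, not figures from any particular study.

```python
# Back-of-the-envelope twin-study arithmetic (Falconer's formula).
# Under a simple ACE model: identical twins share ~100% of genes,
# fraternal twins ~50%, and both share the family environment.

def falconer_h2(r_mz, r_dz):
    """Heritability estimate: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

def falconer_c2(r_mz, r_dz):
    """Shared-environment estimate under the same model: c^2 = 2*r_DZ - r_MZ."""
    return 2 * r_dz - r_mz

# Illustrative twin correlations (not from any specific dataset).
h2 = falconer_h2(0.85, 0.55)   # ~0.60, i.e. ~60% of variance genetic
c2 = falconer_c2(0.85, 0.55)   # ~0.25 attributable to shared environment
print(h2, c2)
```

Plugging in correlations toward the higher or lower end of the published range is what produces the 50%-to-80% spread cited above.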
Taken together, these facts delineate the crux of the problem with Mercer’s method and conclusions. Richard Herrnstein famously expressed it in the form of a syllogism, which goes a little something like this:
· If differences in mental ability are inherited, and
· If success requires those abilities, and
· If earnings and prestige depend upon success
· Then social standing (which reflects earnings and prestige) will be based, to some extent, on inherited differences among people
Obviously, the trick is to come up with a way of sorting out the independent effect of IQ in comparison with the environmental complexes identified by critics. The field of behavior genetics has stepped up to this task with the necessary insights and methodological tools.
Behavior geneticists are known for their comparative studies of twins and adopted children, which have radically transformed our understanding of the classic debates over nature and nurture. But another straightforward approach is to look at siblings raised in the same family. By definition, brothers and sisters from intact families share roughly the same home environment, the same socio-economic background, and the same educational opportunities, so they provide a useful way of gaining perspective on what’s really happening.
When researchers approach the problem this way, the independent role of IQ re-emerges in force. For example, Charles Murray’s own post-Bell Curve analysis of approximately 3000 sibling pairs from the NLSY database found that even when they are raised in the same home and matched in their socioeconomic background, blood-related children with different IQs show markedly divergent patterns in achievement across a host of measurable indices. Specifically, he found that even when “both children had attended elementary and secondary schools for the same number of years, only 18 percent of the siblings with ‘normal’ IQs (in the 90 to 109 range) got bachelor’s degrees, while 83 percent of their brothers or sisters in the very bright category (IQ of 125 or above) did so.”
Murray’s sibling analysis further demonstrates that IQ plays a similarly strong role in explaining occupational achievement and economic success, even when socioeconomic factors are shared. In the 1998 monograph Income Inequality and IQ, he divides the sibling groups into five cognitive strata based on IQ. By comparing the sibling cohorts with the full NLSY sample, you get a revealing snapshot of the limits of family influence.
Median 1992 earnings by cognitive class:

Cognitive Class       Full sample   Sibling sample
Very Bright (125+)    $36,000       $33,500
Bright (110-124)      $27,000       $26,500
Normal (90-109)       $20,000       $20,000
Dull (75-89)          $12,400       $14,000
Very Dull (< 75)      $5,000        $7,500
“In 1992,” Murray explains, “the median earnings for the Normals was $20,000. Their Very Bright siblings were already averaging $33,500 while their Very Dull siblings were making only $7,500. Once again, the Brights and Dulls each fell about halfway between ($26,500 and $14,000 respectively).”
In highlighting the significance of such data, Murray invokes a simple yet eloquent thought experiment. “These are large differences,” he writes:
Think of them in terms of a family reunion in 1992, with one sibling from each cognitive class, sitting around the dinner table, all in their late twenties to mid thirties, comparing their radically different courses in the world of work. Very few families have five siblings so arranged, of course, but the imaginative exercise serves to emphasize that we are not comparing apples and oranges here–not suburban white children with inner-city black children, not the sons of lawyers with the sons of ditch diggers–but siblings, children of the same parents, who spent their childhoods under the same roof. They just differed in their scores on a paper-and-pencil mental test.
In the vast and growing literature of behavior genetics, innumerable studies attest to the strong heritability of intelligence and the relatively insignificant role of shared and non-shared environmental influences. There are tweaks and peaks in the topography; for example, twin and adoption studies often show that modest gains in childhood IQ can be chalked up to family influence – and the effect is indeed more pronounced at the socioeconomic extremes – but such gains invariably trail off in time as the primacy of family environment gives way to the social environment that people create by dint of choice and innate disposition. Indeed, one of the most paradoxical and revolutionary discoveries in the field is that the genetic component of behavioral traits becomes more pronounced as people grow older.
When confronted with the findings of behavior genetics, sociologists often respond by plugging some newly devised concatenation of factors into the mix until the picture morphs into something more consonant with their preferred reality. This is always an option, but once the classically suspect socioeconomic culprits have been considered and controlled, you have to be extra careful not to impart causal significance to factors that are more parsimoniously understood to be the result of IQ.
As Steve Sailer pithily puts it, “You can make all sorts of things disappear by ‘controlling for’ variables that are closer to symptoms than causes. For instance, you can make the average height gap between the Dutch and the Japanese disappear by ‘controlling for’ length of the pants hanging in their closets.” But reality doesn’t yield to statistical caprice.
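Sailer’s pants example can be made concrete with a toy simulation. The heights, group labels, and pants lengths below are all invented; the point is only that a regression which “controls for” a variable lying downstream of the cause will erase a perfectly real difference.

```python
import random
from statistics import mean

# Two groups with a real ~11 cm mean height difference. Pants length is
# caused by height (a symptom, not a cause). "Controlling for" pants
# makes the real height gap all but vanish. All numbers are invented.

random.seed(0)
n = 5000
group = [0] * n + [1] * n                        # 0 = taller group, 1 = shorter
height = [random.gauss(183, 7) for _ in range(n)] + \
         [random.gauss(172, 7) for _ in range(n)]
pants = [h - 30 + random.gauss(0, 2) for h in height]   # downstream of height

def slope(x, y):
    """OLS slope of y on x: cov(x, y) / var(x)."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# Raw group gap in height.
raw_gap = mean(height[:n]) - mean(height[n:])

# "Control for" pants via the Frisch-Waugh trick: residualize both height
# and group on pants, then regress residual on residual. The resulting
# coefficient is the group effect in the multivariate regression.
b_hp, b_gp = slope(pants, height), slope(pants, group)
mh, mp, mg = mean(height), mean(pants), mean(group)
res_h = [h - (mh + b_hp * (p - mp)) for h, p in zip(height, pants)]
res_g = [g - (mg + b_gp * (p - mp)) for g, p in zip(group, pants)]
controlled_gap = -slope(res_g, res_h)   # sign flipped: positive = group 0 taller

print(f"raw gap: {raw_gap:.1f} cm; gap 'controlling for' pants: {controlled_gap:.1f} cm")
```

The raw gap comes out around eleven centimeters; after “controlling for” pants length it shrinks to under one centimeter, even though the height difference is built directly into the simulation. That is the statistical shape of the objection to Mercer-style variable stacking.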
Life is full of disappointments. That’s why there’s popcorn and binge-drinking.
Say it Five Times Before Breakfast: Stephen Jay Gould did not refute The Bell Curve
Of course, documenting obdurate group differences is a task far different from determining why such differences arise and persist. It is this why question that constitutes the meat and pornography of the debate, even if, truth be told, it was never a major part of The Bell Curve’s central thesis. Approaching the thorny question of what causes group differences in intelligence to be so resilient is tricky business, as we shall see, and until more sophisticated methods are available (and they’re coming sooner than you think), it remains largely a game of elimination, triangulation, and speculation.
Considering that Naureckas expends so much effort trying to convince us that the basic claims about the existence of group differences are without merit, he isn’t in much of a position to engage this issue. To the extent that he tangentially broaches the etiological borderlands of the debate at all, he seems content to pluck from a smattering of critical sources until predictably turning over the mike to Stephen Jay Gould, whom he claims “thoroughly debunked” the whole sordid business with his “classic work on the pseudo-science behind eugenics, The Mismeasure of Man.”
At this point, we should all take a deep breath. Because I know what you’ve been told. And I know what you’ve read. I know all about those eloquent New Yorker essays and the pop-references to spandrels and evolutionary contingency, and I saw him on Nightline, too. Oh, and let’s not forget that Simpsons episode. Yeah, he was good. Always a chuckle in his voice. A real scientist for the people. A national treasure whose absence from the current scene is duly and respectfully noted. He had that special knack for bridging the gap between the two cultures.
A rare bird indeed.
But you can keep all that. Because despite his long-cultivated media-enabled cachet as an intellectual dragon slayer, the banal truth is that Stephen Jay Gould was neither a coherent nor respected authority in the IQ controversy. Sure, The Mismeasure of Man continues to enjoy glowing affirmations in the popular press, and it doesn’t look like anyone’s going to take away that National Book Award any time soon, but notwithstanding his quasi-saintly reputation among the learned laity, Professor Gould’s work – whether as an IQ debunker or as an evolutionary theorist – was never the subject of such credulous praise in the expert literature. In fact, he was scarcely taken seriously.
You don’t believe me, I know. But maybe your trusted compatriot Paul Krugman can jar you out of that trance. Speaking before the European Association for Evolutionary Political Economy in 1996, the respected economist cum Bush-bashing pundit provided a sober reality check on Gould’s imagined stature as an evolutionary theorist. “It is not very hard to find out, if you spend a little while reading in evolution, that Gould is the John Kenneth Galbraith of his subject,” said Krugman:
That is, he is a wonderful writer who is beloved by literary intellectuals and lionized by the media because he does not use algebra or difficult jargon. Unfortunately, it appears that he avoids these sins not because he has transcended his colleagues but because he does not seem to understand what they have to say; and his own descriptions of what the field is about – not just the answers, but even the questions – are consistently misleading. His impressive literary and historical erudition makes his work seem profound to most readers, but informed readers eventually conclude that there’s no there there…
…if you think that Gould’s ideas represent the cutting edge of evolutionary theory (as I myself did until about a year and a half ago), you have an almost completely misguided view of where the field is and even of what the issues are.
And if Gould’s reputation as a top-flight evolutionary thinker can be understood as resulting from savvy self-promotion and culturati-hustling panache, the same MO largely accounts for his firmly entrenched public reputation as an IQ debunker, though in this role, as we shall see, he is on even shakier ground.
Surprised? Yeah, I was too. Yet the conspicuous contrast between Gould’s reception in the popular versus academic literature was initially highlighted by the late Harvard geneticist Bernard Davis back in 1983. In his important essay “Neo-Lysenkoism, IQ and the Press,” first published in The Public Interest, Davis observed that whereas “the nonscientific reviews of The Mismeasure of Man were almost uniformly laudatory, the reviews in the scientific journals were almost all highly critical.” Davis took Gould to task for his selective account of the evidence, for his misuse of mathematical models, and, significantly, for the crypto-Marxist subtext of his arguments.
On that occasion, Gould responded with characteristic aplomb, but as the high-prestige rejoinders filed in, he wisely adopted the tack of ignoring his critics altogether. Why enter the fray when you’re riding high with the bourgeoisie?
Thus Arthur Jensen’s devastating review of The Mismeasure of Man remains unanswered. And Gould certainly couldn’t have been troubled to compose a response to big bad Phil Rushton’s thoroughgoing takedown of the 1996 reissue of his celebrated classic. A national treasure has better things to do than wrangle with such unseemly ruffians of academe. To paraphrase evolutionary psychologist Robert Wright’s riff in his acerbic Slate essay “Homo Deceptus: Never Trust Stephen Jay Gould,” a “savvy alpha male” stands to gain nothing from getting into a gutter brawl with scrawny, marginal primates.
Gould’s preferred method, on full display in his award-winning polemic, was to dredge up century-old science of ostensibly dubious merit, and to have at it with a twentieth-century laser, debunking, discrediting, bashing, and belittling ad nauseam – all the while implying that the current lot of research can be just as readily dismissed since it is presumptively fraught with the same “philosophical errors.” It’s a clever bit of intellectual sleight of hand; by switching the deck, Gould invariably leaves the impression of having done a lot more debunking than has actually been the case.
But even within the terms of his retro-skeptical rigging, Gould’s efforts appear to be of diminishing merit. When Mismeasure was reissued in 1996, critics pointed out that Gould had conspicuously neglected to update his sources to account for important scientific developments that would have proven difficult if not impossible to reconcile with his central arguments. For example, a good portion of Mismeasure is devoted to discrediting the findings of nineteenth-century craniometry. Gould takes great pains to depict skull-measuring scientists of bygone days as dupes of their own Eurocentric prejudices; eminent scholars such as Paul Broca, Samuel George Morton, and the great Sir Francis Galton, who purported to document a racial rank order in cranial capacity, Gould assures us, were employing careless methods and crude metrics, their conclusions being falsely derived to conform to their own deep-rooted racialist biases.
Yet Gould never addressed more recent re-analyses that examined the literal bones of contention at issue with more objective tools. To do so would be to concede that the racial taxonomies documented by the long deceased and much defamed skull collectors in his rogues’ gallery were largely accurate. Nor did he deign to evaluate the numerous modern studies that document essentially the same racial differences using sophisticated neuroimaging technology, which consistently support moderate correlations between brain size and the general factor underlying intelligence tests.
As a critic of psychometrics broadly applied, Gould was prone to pronouncements at once radical and almost certainly disingenuous. The idea that a “general factor” might underlie data gleaned from IQ tests he famously discounted as the “rotten core” of the whole sordid business. Yet his understanding of factor analysis has been roundly rejected by recognized experts. He consistently ignored evidence that would have been ill-suited to the hero’s narrative, steadfastly refusing to evaluate recent evidence showing that mental test scores correlate with a host of objective physiological and psychological measures such as reaction time, nerve conduction, and glucose metabolism.
I don’t know Gould’s preferred definition of pseudoscience, but ignoring evidence contrary to one’s favored conclusions doesn’t seem the way of dispassionate inquiry. That kind of chutzpah may pay off in the short term, but science, alas, is not a popularity contest.
Stay tuned for Part Three, in which the editor takes on the Chitling Test and expounds upon the fallacy of "the fallacy of reification." To catch up on Part One, click here.