Editor’s note: what follows is the penultimate post in my series on the Bell Curve controversy. If you want to catch up, go to Part One and Part Two.
_________________________________________________________
OK, then. Having thus disposed of good Naureckas’s pet gadfly, it is well that we should now turn our attention more directly to the verboten meat of the controversy — to the ominous racially-charged thought-bombs that we all mull in private while avoiding in public.
In irresponsibly broad strokes, the racial-cognitive trichotomy breaks down so that East Asians get top billing with an average IQ of around 105. Then you have the whites weighing in at the occidentally-normed goldilocks median of 100, drawing inevitable attention to the significantly lower average score of about 85 for westernized blacks (native African blacks hover around 70, a subject of ongoing study and controversy that we shall ignore for present purposes). This is the crudely stated statistical picture that emerges time and time again, and it is no longer controversial among experts. Again, the sticking point is not whether, but why.
Before digging in, it might be a good idea to recite the by-now-familiar caveats – because the caveats are important.
First, it should be kept in mind that when we talk about group differences, we make generalizations perforce. Race may not be “socially constructed” as fashionable rhetoric would have it, but racial categories are fuzzy at the edges, and even if some measure of genetic influence can be conclusively linked to observed ethnic differences in mental traits, the world will still be populated with more than enough black geniuses, middling Asians, and dirt stupid white folk to keep things interesting for the ride. And regardless of the underlying causes of aggregate differences, there will always be more variation within groups than between them. As Charles Murray reminds us, “a few minutes of conversation with individuals you meet will tell you much more about them than their group membership does.”
But the hard questions still need to be asked. And answered. If the root causes of significant racial gaps in important mental skills are essentially due to cultural or environmental forces, however elusive, then the trick will always be to find and fix them. Whatever it takes. This has been the working assumption for the past fifty years, and it’s still the only possibility that you can discuss in polite company.
Once we consider the notion that genes or other intractable biological factors are at work, the admirable human urge to find and fix is called into question. At the sociopolitical crossroads of the debate, the plausibility of what has been dubbed the “hereditarian hypothesis” compels us to reevaluate our best efforts from the ground up. When underlying beliefs about human nature collide with brute realities, mistakes invariably follow. We pour resources into the wrong remedies. We erect false expectations. We assign ill-deserved blame and foment unrealistic hope. The truth may not set us free, but it just may keep us from fucking things up.
Getting to the Ultima Thule of the race-genetic conundrum, however, is difficult business, beset by logical pitfalls and empirical loopholes. Scientists may be able to serve up an overwhelming amount of evidence that is consistent with the idea that genes contribute strongly to racial differences in mental ability, but definitive conclusions have thus far proven elusive.
Harvard geneticist Richard Lewontin, a tenacious critic of hereditarian ideas, famously illustrated the fundamental problem by invoking a memorable analogy – originally formulated by Charles H. Cooley and restated in The Bell Curve – in which handfuls of seed corn of identical genetic stock are dispersed in radically different environments. Even with the genetic variance set at zero, the seeds finding purchase in more arable environs will yield an observably larger crop than those deposited in arid desert soil. Lewontin’s analogy reminds us that even when the role of genes in determining within-group variation can be established with relative certainty, the cause of between-group variation may yet yield to wholly environmental explanations. As operationalized vis-a-vis the IQ controversy, this means there is always the possibility that the answer lurks somewhere within the socio-cultural morass that shapes our subjective racial experiences.
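(For readers who prefer numbers to corn, here is a minimal back-of-the-envelope simulation of Lewontin’s point – written in Python with made-up figures, not anyone’s actual data. Even when yield within each plot is almost entirely a function of the seeds’ genetic potential, a uniform environmental handicap imposed on one plot can account for the entire gap between plots.)

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # seeds per plot

# Two handfuls drawn from the SAME genetic stock (identical distributions).
genes_a = rng.normal(100, 15, n)
genes_b = rng.normal(100, 15, n)

# Within each plot, only a little random, non-genetic noise...
yield_a = genes_a + rng.normal(0, 5, n)
# ...but plot B also suffers a uniform environmental handicap: the arid soil.
yield_b = genes_b + rng.normal(0, 5, n) - 15

# Heritability WITHIN each plot: share of yield variance traceable to genes.
h2_a = np.var(genes_a) / np.var(yield_a)
h2_b = np.var(genes_b) / np.var(yield_b)

print(f"within-plot heritability: A = {h2_a:.2f}, B = {h2_b:.2f}")   # ~0.90 for both
print(f"gap between plot means: {yield_a.mean() - yield_b.mean():.1f}")  # ~15.0, all soil
```

The within-plot heritability comes out around 0.90 in both plots, while the fifteen-point gap between plots owes nothing to genes – which is precisely why within-group heritability, however high, cannot by itself settle the between-group question.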
If at first glance Lewontin’s analogy seems to render the whole affair an epistemological dead end, at least as a practical matter, we should note that it would indeed be possible to design a definitive experiment; all you would have to do is take representative groups of infants of different racial backgrounds and raise them in artificially controlled environments without access to any cultural cues that might skew the results. Maybe they could be tended by robots in geodesic domes or something. I don’t know. The point is that if the environments could be objectively equalized, any psychometric differences that remained between groups could then be safely chalked up to biology. And if there were no differences, well, then the null hypothesis would be confirmed. Simple as that. The problem, needless to point out, is that such an experiment would be, um, ethically unconscionable.
Within the bounds of permissible inquiry, then, the question will hinge, with necessary imprecision, on whether there are environmental influences confounded with the idea of race that are sufficient to explain the differences. The trouble is that efforts to identify such variables have not proven fruitful.
Let’s look at how some of the most plausibly imagined culprits have fared under scientific scrutiny.
During my lackluster life as a college student, I frequently encountered the claim that IQ tests were simply rigged against black subjects. The case for internal cultural test bias was sometimes illustrated with something called the “chitling test” where students (read: white students) would be asked to answer a battery of questions centered around African-American folkways and culturally distinctive terminology, the implication being that standardized IQ tests were similarly imbued with Euro-centric cues that would be inaccessible to black subjects. We were also referred to discrete SAT and IQ test questions where references to regattas or holiday customs or suchlike were said to stack the odds against minority test-takers who, it was implied, couldn’t reasonably be expected to have cultural exposure to such arcane knowledge.
I remember thinking the whole business had a dubious odor, and my suspicion was well founded. Even if the idea remains entrenched in the popular imagination, internal cultural test bias – at least in the simple guise implied by the chitlings and regattas – has been rigorously examined and unequivocally rejected by scholars for decades. Contrary to what a culture-bias model would predict, it turns out that black test subjects, regardless of their socioeconomic background, typically fare relatively better on the test items rated as most culturally loaded. Over time and across cultures, the racial disparities are more acute on tests and test items that draw upon abstract nonverbal reasoning.
The classically observed racial rank order has been consistently documented, for example, using Raven’s Standard Progressive Matrices, a well-recognized nonverbal psychometric battery that relies purely on abstract problem solving. Moreover, a consistent finding is that those tests that are more highly correlated with the general factor underlying all measures of cognitive ability are the ones that yield the starkest racial differences. This is true even of tests of simple “reaction time” and “inspection time,” which cannot reasonably be argued to carry any cultural component.
Of course sniffing out culturally suspect test items may make for good undergraduate fun, but the more statistically relevant question concerns external or "predictive" bias. If IQ tests were intrinsically biased against some groups, their value as tools for predicting scholastic and vocational achievement should differ accordingly across those groups. This is not the case. In their summary of extant (and extensive) research on predictive test bias, the previously mentioned APA task force made clear that “[c]onsidered as predictors of future performance, the tests do not seem to be biased against African Americans.” If anything, some research shows that IQ scores slightly over-predict real-world performance for blacks, which is another can of worms we shall save for another day.
If the weight of evidence grudgingly obliges them to acknowledge that the tests aren’t biased, many IQ critics will hang their hopes on the seemingly plausible idea that familial discord and socioeconomic disadvantage must be to blame. I imagine this is the explanation that first occurs to most people, and a cursory glance at the data might seem to support common sense. Once again, however, problems arise as soon as the confounding influence of within-group genetic factors is taken into proper account.
In glaring contrast to received opinion, behavior geneticists – notably Robert Plomin and the late David C. Rowe – have established that within-family factors such as illness, birth-order, parental favoritism, peer-relations, and any number of idiosyncratic life experiences account for a far greater share of all environmentally-rooted behavioral variation than such classically suspect between-family factors as educational opportunity, parental income, and social class.
More importantly, since behavior-genetic twin research shows no discernible contrast in how this surprising "gene-environment architecture" adds up in black versus white families, the question arises: how can between-family environmental differences, which account for so little of the observed IQ variation within each racial group, nevertheless be expected to account for the lion’s share of the IQ variation between races?
Family-environment-centered theories are more conspicuously challenged by the inconvenient fact that racial achievement gaps persist even after the imputed environmental variables are equalized. Contrary to what would be expected if racial differences in IQ were an artifact of socioeconomic factors, it turns out that disparities in black and white test scores are actually more pronounced in affluent communities where black and white students share similar advantages. In an addendum to their exhaustive review of research on race and intelligence, Arthur Jensen and J. Philippe Rushton pointedly ask how critics can explain “the fact that Black students from families with incomes of $80,000 to $100,000 score considerably lower on the SAT than White students from families with $20,000 to $30,000 incomes?”
It’s a good question. Any ideas? I’m sure you can come up with something.
Environmentally bound explanations are further undermined by their failure to fashion a coherent response to the formidable body of evidence showing that a wide range of physiological traits, including brain size (as measured by sophisticated neuroimaging technology), correlate with the racial rank orders documented in intelligence tests. The proposition that brain size correlates with IQ — and more specifically, general intelligence — is no longer a subject of serious controversy. A good summary of the state of the evidence was recently set out in Jeremy Gray and Paul Thompson’s paper for the journal Nature Reviews Neuroscience, which states:
Correlations between intelligence and total brain volume or grey matter volume have been replicated in magnetic resonance imaging (MRI) studies, to the extent that intelligence is now commonly used as a confounding variable in morphometric studies of disease. MRI-based studies estimate a moderate correlation between brain size and intelligence of 0.40 to 0.51.
This being the case, it might follow that a genetically weighted interpretation of the observed racial differences in IQ would predict a racial rank order in brain volume. Well, guess what:
Overall, MRI studies show that brain size is related to IQ differences within race. Moreover, the three-way pattern of group differences in average brain size is detectable at birth. By adulthood, East Asians average 1 cubic inch more cranial capacity than Whites, and Whites average 5 cubic inches more cranial capacity than Blacks. These findings on group differences in average brain size have been replicated using MRI, endocranial volume from empty skulls, wet brain weight at autopsy, and external head size measures. They were acknowledged by Ulric Neisser, Chair of the APA’s Task Force on intelligence, who noted that, with respect to “racial differences in the mean measured sizes of skulls and brains (with East Asians having the largest, followed by Whites and then Blacks) . . . there is indeed a small overall trend”.
That’s from Arthur Jensen and J. Philippe Rushton’s article "Thirty Years of Research on Race Differences in Cognitive Ability," published in the June 2005 issue of the APA journal, Psychology, Public Policy, and Law, which is probably the most exhaustive one-stop synthesis of the empirical and theoretical state of the debate currently available.
In presenting their formidable case for moderate to strong hereditarianism, Jensen and Rushton triangulate from far-ranging sources of evidence. In addition to the brain size stats, their paper looks at worldwide racial patterns in IQ distribution, and considers the accumulated burden of evidence gathered from behavior genetics. They emphasize that race differences are most profound in tests of the general factor (usually referred to as g — more on which later) latent in all tests of cognitive ability, and they survey the growing body of evidence from trans-racial adoption studies, racial admixture studies, evolutionary psychology, physical anthropology and other disciplines.
In every case, Rushton and Jensen argue, the weight of the evidence is more consistent with what would be expected if genes play a strong (but not exclusive) role in determining racial differences in intelligence. If you’re up to the task, the Jensen/Rushton report is, quite simply, a must-read. If you begin to feel the earth shift beneath your feet, don’t worry; you can always seek salvific refuge in one or more of the rejoinders which were published in the same issue. Just be sure not to read R&J’s response to said rejoinders. Wouldn’t want to spoil a happy ending.
At a loss to explain the surfeit of data, those who cling to strictly environmental explanations are left to speculate about some unknown factor that might yet quell our worst suspicions. “The real problem,” as summarized by a contributor to the invaluable Gene Expression forum, “is how to test the factor X theory, that the black-white IQ gap is due to something unique to the black environment that affects all blacks equally but is completely absent from the white environment in a way that could evade all detection thus far.” This, then, is where the rest of the chips must fall. It must be “stereotype threat” or hypertension, or academic disengagement. Or something else, goddammit.
Yet with every failed attempt to identify a magic factor X, the unthinkable hereditarian hypothesis lurks nearby reminding us of its prima facie plausibility.
The fact that IQ is in large measure heritable may not provide a strictly deductive bridge to explaining the persistent differences between races. But consider that broader hereditarian reality alongside the impotent track record of the strictly cultural-environmental factors that have been tried and tested, and, if you are honest, you will be hard-pressed not to assign greater plausibility to the idea that genes play a significant role. Factor in the physiological correlates, and the dimming hope for a cultural-environmental equivalent of Lewontin’s nutrient-deficient soil assumes the cadence of a desperate prayer.
Reasonable people may disagree as to whether some unquantifiable constellation of factors might yet account for the gap, but it is reason, I fear, that is too often lost in the din of disputation. Just because something is theoretically possible doesn’t mean it is plausible. This is easily recognized when the political stakes are lowered.
Consider, for example, the correlation between smoking and lung cancer. As absurd as such an argument might seem on its face, it is conceivably possible that the imputed causal link between tobacco consumption and cancer risk is entirely illusory. Rather than playing an independent role as typically assumed, the act of smoking could theoretically turn out to mask a host of confounding behavioral or cancer-predisposing factors that are simply more common among smokers than non-smokers. Scientists don’t waste time and resources pursuing the elusive equivalent of Lewontin’s desert soil in the smoking population, however, because to do so would be a fool’s errand and a public disservice; common sense and the weight of evidence lead us to favor a causal link as the default hypothesis. Smoking causes cancer, even if in the strictest sense it remains unproven.
I think a disinterested appraisal of the evidence similarly favors the hereditarian hypothesis with respect to observed racial differences in IQ. Period. Where racial differences are at issue, Occam’s Razor has been ignored for too long. Our intentions may have been noble, but it is surely more noble to confront unpleasant possibilities.
But quite regardless of our best intentions and worst suspicions, we may soon have to deal with a more definitive body of evidence. Notwithstanding the unconscionable experiment mentioned earlier, recent advances in genetic analysis provide a more direct means of testing the hereditarian hypothesis.
I turn again to that scoundrel Charles Murray, who just can’t seem to keep his mouth shut. As summarized in his important essay, “The Inequality Taboo,” here is how a more definitive answer could be extracted:
Take a large sample of racially diverse people, give them a good IQ test, and then use genetic markers to create a variable that no longer classifies people as “white” or “black,” but along a continuum. Analyze the variation in IQ scores according to that continuum. The results would be close to dispositive.
Anticipating the usual criticism from the social constructionist gang, Murray goes on:
The results of such a study would be especially powerful if the study also characterized variables like skin color, making it possible to compare the results for subjects for whom genetic heritage and appearance are discrepant. For example, suppose it were found that light-skinned blacks do better in IQ tests than dark-skinned blacks even when their degree of African genetic heritage is the same. This would constitute convincing evidence that social constructions about race, not the genetics of race, influence the development of IQ. Given a well-designed study, many such hypotheses about the conflation of social and biological effects could be examined.
So the cards are on the table, and proof is in the offing. Place your bets.
The Fallacy of the Fallacy of Reification
Beset and beleaguered by mounting evidence, a vocal minority of critics seek refuge in the hermeneutically parsed denial of the very idea of intelligence. Radical strains of such criticism find succinct expression in the oft-repeated mantra that IQ scores are simply "what IQ tests measure," which is a bit like pointing out that temperature is merely what thermometers measure – it may be tautologically accurate, but it contributes nothing to our understanding of reality.
There is something very nearly absurd about this tack. By dint of predisposition, scholars and intellectuals are people who trade in the practical realities and deeper vicissitudes of applied intelligence as a matter of course; they spend their professional lives thinking and arguing in high-stakes cognitive competition with colleagues, yet when the subject of intelligence is raised in an empirical context, these selfsame intelligence-obsessed people suddenly trip into paroxysms of denial. It’s like a devoted baseball fan who refuses to accept the relevance of batting averages. Intellectuals are obsessed with intelligence. When they play at denying its reality, it’s just hard to take them seriously. And when the IQ deniers sneer over George Bush’s middling SAT scores or scream injustice when some borderline retarded murderer is primed for the chair, well, the irony-bordering-on-hypocrisy almost begs for a punch line.
I haven’t come up with one. And even when I can summon the requisite patience to suspend my incredulity and chew over the rehearsed palaver about social construction and reification, however cleverly it may be formulated, I am always left with the same stubborn questions, which invariably distill into one question: What about the fucking data?
The inescapable fact is that IQ scores, whatever they represent, effectively if imprecisely predict human destiny in a host of measures, probably better than any known variable in the social sciences. This is true across racial lines. This is true after socioeconomic factors are taken into account. This is true whether test subjects are male or female. This, quite simply, is true.
So if you want to play Foucauldian parlor games with intelligence, go ahead and spin your wheels. But in doing so, just remember that you forfeit your rational grounds for whipping up bluster over “class” or “culture” or whatever nominative categories may be custom-fitted to your preferred worldview. You can’t have it both ways, smart guys. Either explain the data or cop to the sociological nihilism at the core of your high-minded casuistry. As Andre Marrou used to say, it really is that simple.
Even if there are aspects of intelligence that remain impervious to empirical reduction – and it would be silly to argue otherwise – we are left with the range of the spectrum that we can measure, and measure with relative consistency. And at some point we must resign ourselves to the implacable reality that agglomerated test scores predict important aspects of human destiny with consistency and accuracy over time. The map should not be confused with the territory, but you can’t travel far without one. The practical value of a map is that it gets you closer to your destination. Same with IQ. The proof is in its independent predictive power.
What does IQ predict? Linda Gottfredson of the University of Delaware has studied the predictive utility of cognitive tests in relation to a broad range of functionally important social dimensions. Her investigations demonstrate, inter alia, that general intelligence as measured by IQ tests “can be said to be the most powerful single predictor of overall job performance,” meaning that in most cases a simple mental snapshot will tell us more about how a person is likely to do on the job than their formal education, or their personality, or even their past experience. Not only does IQ predict job performance at practically every level of complexity, it also predicts health and longevity, even after socioeconomic status is taken into account. IQ even predicts what kind of jokes are more likely to amuse us.
The simplest explanation, of course, is that IQ transcends the parameters of the test. Rather than being a narrow scholastic measure or an artifact of self-contained psychometric games as the reification crowd would have it, objectively measured intelligence provides a roughly accurate read on how people process information and make decisions in the real world.
The causal link between IQ and practical decision making is perhaps nowhere better illustrated than in military life, where basic mechanical proficiency and the ability to make snap judgments take on crucial importance. In her 1998 article for Scientific American, Gottfredson highlights a 1969 study commissioned by the U.S. Army finding that “enlistees in the bottom fifth of the ability distribution required two to six times as many teaching trials and prompts as did their higher-ability peers to attain minimal proficiency in rifle assembly, monitoring signals, combat plotting and other basic military tasks.”
And as the indefatigable Steve Sailer recently noted:
One of the least known but most decisive facts in the pseudo-controversy over the validity of IQ tests is that the U.S. military, after 88 years of intensively studied experience with giving IQ tests to tens of millions of potential recruits, remains utterly committed to using them. Indeed, since 1992, when the end of the Cold War and the destruction of Iraq reduced the need for a giant standing army, only about one percent of all new enlisted personnel have gotten in with scores under the 30th percentile nationally on the military’s entrance test.
When resources are limited and the stakes are high, you stick with what works. And IQ works. No matter how desperately the deniers try to huff and puff it away.
There is nothing at all wrong with formalizing nebulous phenomena for the purpose of measurement and meaningful analysis. We do this all the time, with weather forecasts and price indexes, with compass points and movie ratings. Numerical distinctions, however arbitrarily assigned, can help us to apprehend phenomena that would otherwise defy coherent analysis.
The real question goes to whether the metric yields more information about the thing being described. And whatever it is, this socially reified phantom-quantum that IQ tests measure, it happens to tell us a lot about how people are likely to behave in this big beautiful world. IQ isn’t the only explanation, but it is a great explainer.
______________________________________________________
This horse ain’t dead yet. Be sure to check back for the final installment, in which The Hog appraises recent developments in genomic research and dashes all remaining hope.