“Am I just thick,” asked a thirteen-year-old girl, “or am I only good at those things that really interest me?”  Her friends grinned: “She’s very good at annoying the boys.”  And then, “Does my brain get better through use?”, quizzed a third.  “What sort of use?”, interjected a fourth1.


What makes us the thinkers we are?  Maybe even to ask such a question implies that you are intelligent.  One psychologist defined intelligence as “the ability to carry on abstract thinking”, while another said it was “the capacity to acquire capacity”2.  Whatever intelligence is, it is not easy to define, but those young girls most certainly had it.  Psychologists in the mid-twentieth century were anxious to answer such questions, because politicians needed to know whether we are who we are simply because of our inheritance, or whether schools and social policy actually make a difference.  With good money spent on him, could a board school boy have eventually become as good a prime minister as a public school boy, if not a better one?


How to separate the influence of inherited factors from those acquired from the environment frustrates psychologists to this day.  It was the French psychologist Alfred Binet3 and his colleague Theodore Simon who first developed a test for intelligence, in 1905, based on a limited range of factors: vocabulary, comprehension and verbal relationships.  From these simple beginnings, tests have become steadily better predictors of a child’s subsequent school performance.  To Binet, intelligence was not simply a fixed and inborn quality: “In any meaningful sense of the word, it can be augmented by good education”.  It was far too complex to be detected entirely by a single test, described by a simple number, or used to predict outcomes beyond the limited domains defined by the testers.  Intelligence is partly culturally specific, Binet argued: “it is judgment, otherwise known as common or practical sense, initiative or the faculty of adjusting oneself”.  As simple, and as true, as saying that it ain’t what you think, it’s the way that you think it.


That was far too subjective to be of any use in guiding government policy, concluded the young English psychologist Cyril Burt4, born in 1883.  Burt established an early reputation for the meticulous collection and analysis of data concerning delinquency, from which he came to the opposite of Binet’s conclusion, namely that mental capabilities are largely dictated by inheritance.  He spent his life trying to assess those capabilities in ways uncontaminated by the effects of upbringing, experience or education.  Through the study of identical twins, he sought to determine the relative contributions of genetic and environmental factors to intellectual functioning.  The logic was straightforward: if environment dominated, the intelligence test scores of unrelated adopted children reared together should be strongly related, while those of identical twins reared apart in very different environments should be only weakly related.  Burt expected, and claimed to find, the reverse.
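
The arithmetic behind such claims is simply a correlation of paired scores.  The sketch below (in Python; the twin scores are invented purely for illustration) shows the calculation, together with the classical reading of it: under a simple additive model, the score correlation for identical twins reared apart is itself taken as an estimate of heritability.

    # A minimal sketch of the correlational logic of a twin study.
    # The paired scores are hypothetical, invented only to illustrate the method.
    from statistics import correlation  # Pearson's r; Python 3.10+

    # Hypothetical I.Q. scores for ten pairs of identical twins reared apart.
    twin_a = [96, 104, 88, 115, 101, 92, 110, 99, 85, 107]
    twin_b = [94, 108, 91, 112, 98, 95, 106, 103, 89, 104]

    # Under the classical additive model this correlation is read directly
    # as an estimate of heritability (Burt's claimed figure was around 0.8).
    print(round(correlation(twin_a, twin_b), 2))  # 0.94 for these made-up pairs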


Burt claimed that his studies suggested that approximately 80% of intelligence is determined by genetic factors.  If individual differences have such a strong genetic component, and superior intelligence is associated with higher social groups, then it became easy for Burt to argue that the social divisions in early twentieth-century England were entirely natural, the result of perfectly logical evolutionary processes operating over long periods of time.  In policy terms this led Burt to recommend tailoring different kinds of education to youngsters of different intellectual capabilities5.  Unlike Binet, Burt believed in the possibility of developing highly accurate, predictive tests for the kind of ‘general’ intelligence that ultimately defined each individual’s intellectual capability.  He was convinced that the Stanford-Binet6 tests could calculate a person’s Intelligence Quotient (I.Q.) by dividing his or her mental age, as defined by the level of questions they could answer, by their chronological age and multiplying by one hundred.  Using an ever-expanding database to define average mental ability, it was then possible to assign a person of “average” intelligence an I.Q. of 100.  People with I.Q.s of between 120 and 130 were classified as intellectually superior, while those with ratings below 70 were regarded as mentally retarded.  This became the basis for the bell curve7 describing the normal distribution of intellectual ability: sixty-eight percent of a normal population would have an I.Q. of between 85 and 115, and only 0.13% an I.Q. of between 145 and 160, or, symmetrically, of between 40 and 55.  Pupils of grammar school ability were thought to have I.Q.s of 110 and above.
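
To make the numbers concrete, here is a minimal sketch (in Python, assuming the mean of 100 and standard deviation of 15 implied by the figures above) of the ratio I.Q. and of the bell-curve fractions just quoted.

    from math import erf, sqrt

    def phi(z):
        # Standard normal cumulative distribution function.
        return 0.5 * (1 + erf(z / sqrt(2)))

    def band(lo, hi, mean=100, sd=15):
        # Fraction of a normal population scoring between lo and hi.
        return phi((hi - mean) / sd) - phi((lo - mean) / sd)

    def ratio_iq(mental_age, chronological_age):
        # Classical ratio I.Q.: mental age over chronological age, times 100.
        return 100 * mental_age / chronological_age

    print(ratio_iq(12, 10))   # 120.0: a ten-year-old answering like a twelve-year-old
    print(band(85, 115))      # ~0.6827, the sixty-eight percent quoted above
    print(band(145, 160))     # ~0.0013, the 0.13% quoted above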


To the psychometricians this was all gloriously neat, and the numbers gave it the apparent precision of scientific objectivity.  That appealed to politicians as well.  That different social and ethnic groups produced very different distributions of scores was all too easily assumed, in the atmosphere of the time (Hitler was using just such data to ‘prove’ why it was right to eliminate the Jews), to be due to genetics rather than to any cultural ‘skewing’ in the tests.  It now seems that Cyril Burt, carried away by his own rhetoric and his early popularity, started in his old age to massage the data.  Much of his data on twin studies has subsequently been shown to be fraudulent.  Yet Burt proceeded with apparent total confidence, advising government that it would be a waste of resources to spend as much money on ‘the less able’ as on the most gifted.  He went on to advise that the measurement of intelligence had become so accurate that a single test, administered at the age of eleven, would be highly predictive of future intellectual performance8.  It was not, and a whole generation was to be scarred by the mistake.


Thesis 61:     24th August 2006