The Changing Face Of Intelligence

(From The Prometheus Society's Journal, Gift of Fire Issue No. 59-A, May 1993)

Intelligence, A Measure of Appropriate Behavior: That individuals differ in mental ability has been recognized since time immemorial. Nonetheless, even today, the questions "what is intelligence?" and the related "what mental faculties should be tested in evaluating intelligence?" continue to defy an easy answer. All know that some persons learn slowly, others quickly, and that mental superiority is a desirable trait; "intelligence," however (like a number of abstract terms), resists clear definition.

Some psychologists (as Spearman) claim that an all-embracing mental ability (called the "g" factor) determines a person's ability to perform any type of cognitive task: Those possessing substantial quantities of this underlying "mind stuff" (or "g" factor) would thus score extremely well on both verbal tests and tests of spatial conceptualization; likewise, easily administered tests of vocabulary or analogies can be interpreted as valid indicators of more generalized intelligence. Other investigators (as Thorndike) have directly opposed Spearman—contending that only isolated aptitudes (such as memory or verbal fluency), but not "general intelligence," can be measured; likewise, a linguistically gifted individual could conceivably perform quite poorly on visual tasks. Most modern psychologists (perhaps settling for a "happy medium") believe that overall intelligence is truly measurable but that the testing of specific component aptitudes is useful in the diagnosis of learning disabilities (as dyslexia), in career counseling, and in the detection of intellectual impairment due to cultural deprivation. Meanwhile, many laymen (perhaps in rationalizing a poor performance) contend that the "right answers to many of the problems appearing on intelligence tests are nothing more than those which conform to the ideas of the person who made up the test;" they question whether such tests truly measure mental ability or merely the degree to which a subject's style of reasoning accords with that of the test-writer. Let us examine whether this layman's criticism addresses a valid problem.

The 2 basic premises, which I will first attempt to substantiate and will then use in furthering my argument, are:

1) Innate intelligence is unmeasurable; only overtly manifested ability can be evaluated—making intelligence tests, in many ways, akin to achievement tests.

2) The definition of "intelligence," or of what component abilities should be most emphasized on an exam, is socially determined and thus may change in accordance with the needs of an era or culture.

Let us imagine a potential genius who is, unfortunately, mute and quadriplegic: He understands the profoundest discussions with lightning rapidity and, while lying flaccidly in bed, contrives elaborate and penetrating scientific theories. Nonetheless, because he cannot communicate by speech or writing, cannot draw or build fine models—cannot, in short, show the world that he is adept at language, arithmetic, or any other currently valued mental skill—he may be deemed an imbecile. His enormous innate capacity, his complex and insightful (but wholly internalized) thoughts, would not matter—for to the pragmatically oriented world, he has demonstrated little intelligence. What a man is capable of achieving is measured largely by what he has achieved—the extent of accomplishment being implied by a behavior, by an outward demonstration of what he can do. The mute and quadriplegic "potential" genius might, thus, readily be deemed an idiot.

In both "performance" and "verbal'' tests, only overt accomplishment can be measured. Because crude "performance'' tests may fail to adequately distinguish higher levels of intelligence from the average and do not evaluate those abilities (linguistic and analytical) which correlate best with academic and professional success, most IQ tests examine primarily verbal and numerical reasoning. Thus, to score highly, a man must perform well at tasks deemed important by his particular society.Performance depends upon his previous mastery of vocabulary and various spatial and numerical concepts; he must demonstrate proficiency in skills emphasized by the current educational system and by prevailing custom.

How much we have learned is, of course, largely determined by our innate capacity for learning; thus, because the English language and arithmetic are taught in all schools and are used frequently in practical affairs, the mastery of vocabulary and facility at manipulating numbers can be interpreted as valid indicators of general intelligence. Conversely, what we have already learned in turn affects our capacity for future learning; knowledge is often cumulative—new facts being understood only in the context of the old, new theories and arguments being grasped only when the starting premises are fully comprehended. Ignorance hinders future understanding; what we can learn is, in many ways, inseparable from what we have learned. Present knowledge and skill, in addition to being the only testable parameters, seem valid predictors of future accomplishment, indices of the capacity for greater understanding; in assessing intelligence, we inevitably assess mainly what "has been learned."

What should the bright, receptive "student" have learned? He should have learned what was explicitly and implicitly taught to him—the mores and language of his society, the styles of thinking which have been collectively accepted as correct and most useful. "Intelligence" may be defined as the set of those abilities which most help an individual to thrive within his particular society, and a society to prosper at a given time. Likewise, those mental skills deemed most valuable in manipulating and building upon the knowledge prized by a particular culture—having been maximally stressed in education and practiced most frequently in everyday life—will be emphasized in evaluating intellectual ability. The criteria for assessing intelligence will thus change with the needs of the circumstance—with the demands of the culture or historical period.

Likewise, the "bright" aborigine would be expected to track prey skillfully and to identify, at a glance, the footprints of diverse animals. The "jury of his peers" would not evaluate him according to such criteria as linguistic aptitude or reasoning ability—an extensive vocabulary and skill at manipulating the complex rules of logic being totally superfluous to survival in the wild.

Similarly, according to the thesis developed in Daniel Boorstin's The Discoverers, the emphasis placed on rote memorization (as a prized mental skill) has markedly decreased over the centuries. In the earliest days of civilization, before the advent of writing (or when papyri were very few in number), information was imparted orally and seldom, if ever, recorded in permanent (written) form; thus ancient scholars, being the sole keepers of knowledge in an era when few documents existed as reference sources, were by necessity men of prodigious retentive powers—capable of remembering even the most disjointed facts which had been told to them just once. With time, documents decreased in size from unwieldy scrolls to easily handled manuscripts, easier both to inscribe and to read; written information became somewhat more accessible and the importance of rote memorization correspondingly lessened. The eventual invention of the printing press, permitting wide dissemination of books in large quantities, allowed the emphasis on memory to decrease even further; readily available books, containing the needed information in a permanently recorded form, could be consulted by anyone whenever a fact was forgotten. Scholars in this later era—when the rules of logic had been systematized, intricate tools for investigating the environment had been designed, and written information was easily accessible—were men with a gift for analytical thinking and experimentation, not for rote recall; reasoning power supplanted memory as the main criterion by which higher intelligence was judged.

The "classical" education highly valued by the erstwhile British aristocracy emphasized linguistic skills almost exclusively; youths were taught predominantly literature, philosophy, ancient Greek and history. In a more technologically oriented age, some of the most respected scholars of that time might have seemed profoundly inept or "one sided;" conversely, many modern scientists—esteemed for their acute mathematical and spatial reasoning—might have been judged "ill fit for higher learning" or "unintelligent" by an educational system which focused entirely on literary accomplishment.

In our own era, the ability to reason analytically is deemed vital to the advancement of technology and, thus, is stressed on many intelligence tests. Because all information is communicated through the verbal modes of speech and writing, vocabulary and linguistic reasoning are considered important indicators of "mental capacity." The ability to "rote memorize"—vital to the retention of knowledge in a pre-literate era and important in the learning of complicated ecclesiastical rituals in the medieval period—has been de-emphasized; moreover, the focus on technologically useful forms of thinking has reduced the value placed on linguistic aptitude in isolation.

Thus, the mental abilities deserving emphasis—the criteria for assessing intelligence—may change with the times; men of extreme but one-sided talent, deemed "brilliant" in one era, might be considered unremarkable in another. In evaluating intelligence, we measure how well an individual has assimilated the knowledge valued by his culture, how well he has learned to reason in conformity with the current styles of thinking, and how well he can adapt (on a cognitive level) to the conventions of his time. The particular knowledge and mental skills valued by a given society, extensively taught in school and used frequently in practical daily affairs, are indispensable to academic or professional success and should be mastered quickly and in depth by any "bright," receptive individual. Assessing what has been learned as an index of what can be learned, and attempting to predict future scholastic achievement, intelligence tests will measure the extent to which an individual's thinking conforms to the pattern currently deemed most "desirable." A man talented at rote memorization but inept at deductive reasoning would not be expected to score highly on intelligence tests, or even to appear bright before his peers, in an era which de-emphasizes retentive powers and stresses "scientific" thinking.

Having seen that "intelligence," as that set of mental abilities most prized by a particular culture, may change in meaning with the times, we can now address a) whether the layman's criticism (that intelligence tests are invalid as measurements of "mental ability" because they merely evaluate the extent to which the examinee's style of reasoning accords with that of the test-writer) is justified, and b) whether any ongoing changes in our society are working to alter the criteria by which we should judge intelligence.

a) The layman's criticism: The validity of IQ tests is bolstered by three facts: 1) such tests are not accepted for general usage until they have been standardized and all confusing questions have been "weeded out" during preliminary laboratory trials; 2) the test-writer himself is usually highly "intelligent" and thus has demonstrated that his own style of thinking conforms to the pattern most valued by the times; and 3) an "intelligent" person (who, by definition, knows how to reason in the manner encouraged by his society) should be able to recognize which of several possible answers would most likely be derived by using conventional, culturally fostered patterns of thinking.

Indirectly, such tests also measure "adaptability." Just as, in debate, the most persuasive (and winning) argument is tailored to appeal to "common sense" and is built upon widely accepted ("sane") fundamental premises, replies to test questions must be made to conform with the expectations (and cognitive style) of one's culture. At times, 2 or more answers might seem correct; the task, then, is to determine which of these would most reflect a pattern of reasoning valued and encouraged by society. If the question is a verbal analogy, what are the denotations and connotations ordinarily ascribed to the given words? If the question is numerical, how might a computer (programmed to "think" in the analytical style valued by our technological culture) be expected to answer? In an era which stresses "reasoning by analogy," a man might approach numerical sequence problems by first looking for similarities in the shapes or "symbolic" meanings of the figures; in a more quantitatively oriented era, he would look first for the possibility of the numbers being products, square roots or sums of one another. His answers, being perfectly adapted to the demands and expectations of his society, will be correct.
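
To make this contrast concrete, the "quantitative" approach to a numerical sequence problem can be sketched as a mechanical procedure. The short Python fragment below is only an illustration: the particular relations it checks (constant differences, constant ratios, squares, and pairwise sums) are assumptions of mine, standing in for the "products, square roots or sums" mentioned above.

    # A toy model of the "quantitatively oriented" solver: test a sequence
    # against a few conventional relations and report the first that fits.
    # The set of relations checked here is an illustrative assumption.

    def describe_sequence(seq):
        """Report which simple quantitative pattern, if any, the sequence fits."""
        pairs = list(zip(seq, seq[1:]))
        if all(b - a == seq[1] - seq[0] for a, b in pairs):
            return "constant difference (arithmetic progression)"
        if seq[0] != 0 and all(b * seq[0] == a * seq[1] for a, b in pairs):
            return "constant ratio (geometric progression)"
        if all(b == a * a for a, b in pairs):
            return "each term is the square of its predecessor"
        if all(c == a + b for a, b, c in zip(seq, seq[1:], seq[2:])):
            return "each term is the sum of the two before it"
        return "no conventional pattern found"

    print(describe_sequence([2, 4, 8, 16]))    # constant ratio (geometric progression)
    print(describe_sequence([1, 1, 2, 3, 5]))  # each term is the sum of the two before it

A solver of a more "analogical" era would instead begin, as noted above, by comparing the shapes or symbolic associations of the figures; the procedure here encodes only the habits that our own quantitative culture rewards.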

The test-taker must "conform"—but knowing how and when to conform is part of "intelligence;" adaptability is subsumed under the general definition of intelligence as "the capacity for understanding (the needs of a situation) and for reasoning (one's way through them)." Likewise the "right answers" to many problems on intelligence tests are those which conform to the ideas of the test-writer and, in carefully standardized exams, to those of society as a whole. Such tests measure the ability and willingness to use conventionally accepted styles of reasoning and to realize that such patterns of thinking, being expected and most readily understood by others, should be employed first in all situations requiring communication and in almost all academic settings. Intelligence, measurable only in its outwardly demonstrated form, differs little from other forms of behavior; "successful" (acceptable) behavior invariably conforms to the demands of one's environment.

b) Definitions in Transition. At present, the ability to reason analytically, to comprehend and use one's native tongue well and (to a very minimal extent) to remember large quantities of information are considered the marks of high intelligence; we might now ask whether any ongoing technological or sociological changes are working to alter the criteria by which we will soon come to evaluate mental capacity. A brief examination of the current, and potential, effects of computers indicates that the answer is "yes;" several examples of these effects, both possible and already realized, can be found in fiction and in "real life" experience:

In one of his short stories, Isaac Asimov intimates (with characteristic foresight) that few, in the society of the future, will even be able to add and subtract: In his story, an elderly man (born before computers came into widespread usage) astonishes his colleagues by performing a seemingly "miraculous" feat; albeit slowly, he is able to add several numbers on paper—a calculation which computers, but no living humans of his time, can do.

"Real life" example also illustrates how arithmetical skill and the comprehension of mathematical concepts may soon become de-emphasized in the work place and, because many modern schools tailor their courses to practical demands, eventually in education as well: Graduate students (in the biological sciences) have traditionally been required to take advanced statistics courses wherein a theoretical understanding of the subject matter, the ability to derive the various key equations, and facility at solving diverse quantitative problems are required; such a comprehensive understanding has, hitherto, been necessary for the proper analysis of research data. Some laboratories, however, (realizing that statistical computations are tedious and time-consuming) have recently installed desk-top computers programmed to perform every type of analysis—a "student's T test," a "qui square," and so forth—in less than 5 seconds. Moreover, the program has been simplified to meet the needs of those who understand almost nothing about statistics: a series of questions appear on the screen and, from the answers given, the computer determines what "test" should most appropriately be employed. Supplied with such a program, even the most dedicated researcher needs little understanding of statistical theory; at most, he needs to know the definitions of several key terms. Graduate schools, responding to this decreasing need for sophisticated understanding, may soon begin to offer their students simplified statistics courses; only a passing familiarity with definitions, but not the ability to perform quantitative calculations, will be required.

Because a computer's memory can store vast quantities of readily accessible information, the emphasis on prodigious retentive ability as a prerequisite for academic or professional success and as a criterion for judging "intelligence" can be expected to decline even further. Because even the desk-top computer can perform the most complex calculations in just a fraction of a second, proficiency at arithmetic and even the understanding of mathematical concepts are rapidly becoming superfluous abilities; to solve a quantitative problem, all (excepting programmers and the most innovative theoretical scientists) can merely "push a button." Many critics of modern society, likewise, worry that the computer will engender a general decline in the species' cognitive ability; they foresee an illiterate, innumerate future generation which, like that portrayed by Isaac Asimov, is virtually incapable of analytical reasoning. Despite such pessimistic projections, however, the effects of the computer parallel those of Gutenberg's printing press: The latter, making reference texts cheaply available to all, gave rise to a marked decrease in the stress placed on memory; in its place, "scientific reasoning" achieved ascendancy as the most important "cognitive" function. Likewise, we must ask whether a "new" form of "intelligence" will soon supplant the currently emphasized "speed of calculation" and "numerical reasoning." Several factors indicate that it probably will, and that many of our standard IQ tests actually measure "obsolescent" forms of intelligence:

1) Intelligence tests, to have "external validity," should accurately predict academic and professional success; a high score should pragmatically imply a greater potential for high achievement in the "real world." Most modern intelligence tests, being "timed," stress speed of performance; when a series of numerical problems is given, the final score depends as much upon how quickly the examinee can add or subtract as upon how well he understands the conceptual relations between the quantities. Memory is emphasized indirectly: He who forgets the formula for computing the area of a triangle may be unable to answer a given problem, despite possessing an excellent understanding of the spatial and analytical concepts involved; he who momentarily forgets the meaning of an abstruse word will be unable to complete an analogy, despite his innate gift for discerning even the obscurest similarities.

How quickly a man can add or subtract becomes meaningless in a practical sense when both the accountant and the engineer will inevitably use a calculator or desk-top computer which performs arithmetic operations a million times faster than any human. Furthermore, a rapidly expanding technology continually introduces new words (at least temporarily obscure even to the brightest) into the general vocabulary; dictionaries and instruction manuals, to be consulted whenever the meaning of a specific term is unclear, are available in every home and office. How well a man remembers the definitions of esoteric terms is thus less important than whether he knows how to "look them up," knows which reference sources to consult to have their meanings clarified.

2) The ability of computers to store and retrieve data and to perform swift calculations seems truly miraculous. When the quantity of stored information is vast, however, a new problem arises—analogous to that encountered when consulting the index of a comprehensive textbook: the desired information may be classified under any one of many, seemingly equally appropriate, headings or categorized in any one of numerous "files." The greater the amount of stored data, the more complex the classification system will become and the more difficult it will be to hazard a guess under which heading a given item may be categorized. In a short book, for example, we might find everything on, say, B. F. Skinner (one paragraph in its entirety) indexed under the man's name; in a very long and thorough text, however, we might need to consult such headings as "behaviorism," "20th-century psychologists," and "conditioning" before finding the needed facts... If we wish to learn how monthly mortgage payments are determined, should we consult a computer program which discusses real estate in general, or one which focuses on the calculation of interest rates of all types? Where we should look to find the information we seek is frequently unclear.

Likewise, a new mental ability becomes important—the ability to find needed data rapidly in a world where super-abundant, and ever increasing, information is often "stored" in a complex, ambiguous, or even seemingly arbitrary manner. Often, we must think "divergently"—consider many, ostensibly unrelated, possible headings at once—to find the index category or computer "file" under which information of multi-disciplinary or multipurpose interest might be classified. We must make a mental listing beforehand of all the categories, likely and unlikely, under which the subject of interest could be subsumed—then consult each of these possibilities successively until, after much "trial and error," we hit upon the proper heading. Gaining access to specific data, when large amounts of information are recorded and stored in a complex fashion, requires ingenuity. Likewise, "resourcefulness," or the ability to simultaneously conceive of diverse solutions to a single problem (here, the problem of how information is indexed), may become a new criterion by which "intelligence" is judged.
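
In programming terms, this "divergent" strategy amounts to trying a breadth of candidate keys against an index. The Python sketch below is a toy illustration; the index contents and the candidate headings are invented for the purpose.

    # A toy model of "divergent" lookup: brainstorm every plausible heading
    # first, then consult each against the index until one hits. The index
    # contents and headings below are invented for illustration.

    index = {
        "behaviorism": ["B. F. Skinner", "operant conditioning"],
        "conditioning": ["classical conditioning", "operant conditioning"],
        "20th-century psychologists": ["B. F. Skinner", "Jean Piaget"],
    }

    def divergent_lookup(index, candidate_headings):
        """Try candidate headings in turn; return the first that is indexed."""
        for heading in candidate_headings:
            if heading in index:                  # a hit, after trial and error
                return heading, index[heading]
        return None, []                           # nothing found under any guess

    # List likely and unlikely headings beforehand, then consult each in turn.
    print(divergent_lookup(index, ["Skinner, B. F.", "behaviorism", "psychology"]))

The resourceful searcher, on this model, is simply the one whose candidate list is longest and least conventional.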

3) Already, many jobs hitherto requiring manual labor have become automated while computers are performing most workplace calculations. Meanwhile, a growing number of people now work at "gaining access" to large quantities of (sometimes ambiguously) stored information or at managing organizations or departments of increasing internal complexity; although initially trained as engineers, nurses, or biologists, they are currently employed as administrators. Abilities similar to those required in accessing data are needed in administrative work: One must think "divergently" of many possible solutions to a given, often ill-defined, problem; the relative scarcity or abundance of various resources, present and projected individual or group needs, the various agencies which might be consulted for financial or legal assistance, the impact of even subtle changes in policy on employee morale, and so forth—must all be considered simultaneously. "Resourcefulness" and the ability to think "broadly" (or "divergently"), to foresee how numerous factors might interact and to envision multiple possible solutions to any given problem, take priority. In an era when computers perform more and more of technology's "analytical" work and when increasing numbers of people assume managerial roles, the incisive and narrowly-focused reasoning which considers data sequentially and ignores all ostensibly extraneous information may be superseded by the ability to consider heterogeneous pieces of information simultaneously.

The body of modern knowledge is enormous—too huge for one individual to master in even 5 lifetimes; continual advancement, especially in the technologies, assures that every man will always be "slightly ignorant" (even regarding the developments in his own specialty) and that, inevitably, he will often need to consult references for an explanation of new discoveries. The efficient use of such reference sources, necessary for adaptation to an ever-changing society, is of vital practical importance; gaining access to the facts of interest, when (abundant) information is stored in a complex manner, is facilitated by a divergent type of thinking called "resourcefulness." This "resourcefulness," as a key determinant of success in the modern world, may be a valid criterion by which to evaluate adult intelligence.

An intelligence test, adapted to the needs of modern society, should assess the examinee's ability to use reference sources (resourcefulness) more than his speed of performing arithmetic computations or his memory for esoteric definitions. Test-takers might actually be handed dictionaries and encyclopedias upon entering the examination hall, and many problems, requiring a comprehension of obscure terms, would really measure how well the examinee uses the reference sources at his disposal: the question, for example, "What does a physician mean by 'succussion splash'?" might be easily answered by anyone who thought to consult the medical dictionary handed to him at the start of the test. Because the ability to add and subtract quickly is of little practical importance in today's computerized society, all examinees would be permitted to use calculators to solve the quantitative problems. A flair for learning Latin roots and a knack for performing sums at lightning speed, like a talent for rote memorizing ecclesiastical rituals, are forms of mental ability irrelevant to modern society; in education and in testing, far more emphasis should be placed on "resourcefulness"—currently a much more pertinent ability.
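
A single item from such a test might be modeled as in the Python sketch below. It is only a schematic: the reference entries are paraphrased stand-ins, and the scoring principle (credit for finding the entry, none for prior recall) is my own illustrative assumption.

    # A schematic "open reference" test item: the examinee is scored on
    # consulting the supplied sources, not on memorization. The reference
    # entries here are paraphrased stand-ins, not quoted from any dictionary.

    references = {
        "medical dictionary": {
            "succussion splash": "a splashing sound heard on shaking a patient "
                                 "whose stomach or chest holds both fluid and air",
        },
        "desk encyclopedia": {},
    }

    def resourceful_answer(term):
        """Consult each supplied reference in turn, as a resourceful examinee would."""
        for source, entries in references.items():
            if term in entries:
                return f"{entries[term]} (found in the {source})"
        return "no entry found; try another heading or source"

    print(resourceful_answer("succussion splash"))

Scoring such items rewards knowing where to look, which is exactly the ability this section argues has become paramount.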

Sociological conditions and technological innovations (such as the development of writing, the invention of the printing press, the advent of the computer) determine what particular mental abilities are most valued during a given period; these prized abilities are considered, by themselves, to be the marks of high intelligence. In short, the definition of intelligence changes with the times; so, too, should our means of evaluating it.