ChatGPT – Without experience, there is no fullness of meaning
However, Jonathan Swift (1667-1745) perhaps anticipated this situation in his classic Gulliver's Travels, published in 1726.
While some youngsters today may remember reading excerpts from this tale, in which Gulliver ends up in countries where he is either a giant or a tiny dwarf, few will recall reading of his trip to the city of Lagado, where he visits the Academy of Projectors and watches them use a machine called The Engine.
This excerpt reads: “It was twenty feet square, placed in the middle of the room. The superficies was composed of several bits of wood, about the bigness of a die, but some larger than others. They were all linked together by slender wires. These bits of wood were covered, on every square, with paper pasted on them; and on these papers were written all the words of their language, in their several moods, tenses, and declensions; but without any order.
“The pupils, at his command, took each of them hold of an iron handle, whereof there were forty fixed round the edges of the frame; and giving them a sudden turn, the whole disposition of the words was entirely changed.
“He then commanded six-and-thirty of the lads, to read the several lines softly, as they appeared upon the frame; and where they found three or four words together that might make part of a sentence, they dictated to the four remaining boys, who were scribes.”
Questioning the role of language in thinking
The purpose of The Engine was eventually to compile all of the ways that known words could be put together, so that the total of all possible knowledge in the world could supposedly be known and written down.
While this was long before the development of the mechanical or electrical computation devices we would call a ‘computer’, this 300-year-old story caused readers to stop and question the role of language in our thinking.
Is our expansion in knowledge over time just a matter of rearranging words?
In 1980, philosopher John Searle – now retired from the University of California, Berkeley – detailed this problem in “Minds, Brains, and Programs”, published in the journal Behavioral and Brain Sciences.
He proposed the ‘Chinese Room’ argument, in which a computer responds to questions in Chinese using a programme that constructs Chinese answers from programmed instructions. This would convince the questioner that they were indeed talking to a Chinese speaker, thus passing the test proposed by Alan Turing in 1950 as an indicator of intelligent behaviour equivalent to a human’s.
But Searle then imagines himself in the room, applying the same programme rules to the Chinese and producing the answers without understanding a word of the language himself, to show that this is not human ‘understanding’ at all; he relegates such processing to ‘weak AI’.
For several decades now, students have been able to input a word and their computer or smartphone will bring up a series of next-most-likely words from which they can select without typing them out.
That process has now been expanded to incorporate huge amounts of online verbiage, following rules of association that can exhume – without any true understanding – the most common of past human expression.
This can include the most inspirational and benevolent of past online writing, but also the most hateful and racist of archived language.
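The word-association process described above can be illustrated with a toy sketch. The snippet below (a minimal illustration, not any real predictive-text system; the corpus and function names are hypothetical) counts which word follows which in a body of text, then suggests the most frequent successors. Nothing about meaning is involved, only frequencies of past association.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: tally which word follows which in a corpus,
# then suggest the most frequent successors. No understanding is involved,
# only counts of past association.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def suggest(word, n=3):
    """Return up to n words most often seen after `word` in the corpus."""
    return [w for w, _ in successors[word].most_common(n)]

print(suggest("the"))  # words most often seen after "the" in this tiny corpus
```

Scaled up from a dozen words to billions of online documents, the same frequency-driven idea yields fluent suggestions, yet the mechanism still consults only what was written before, not what any of it means.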
Experience is missing
What is missing is experience. If there is no experience with a concept, there is no fullness in meaning.
A newborn infant suspended in a pool that cancels all sensory input can learn nothing. The few atomic scientists and military personnel who personally witnessed an atomic bomb blast are nearly all gone, and our only substitute for their direct experience is our woefully inadequate experience of relatively minor explosions, if anything at all. And the visual images of historical footage carry little to no emotional impact.
The phrase ‘no experience, no meaning’ thus underlies our growing dissociation from real-world phenomena and accelerates the ability of political actors to distort our shallow understanding through mediated messages.
‘Semantics’ is the study of the relationships between words and meaning. A basic tenet of semantics is that meaning rests on common experiences between speaker and listener. Direct experience with the topic being discussed, where possible, therefore underlies all meaningful communication and all teaching.
Today our scientific terminology derived from Latin and Greek is massive, far beyond those languages’ limited common usage several millennia ago, and it will continue to grow. New scientific terms are added, defined by new experiences detailed in the ‘methods’ sections of science articles.
Dictionaries are history books, not law books, as I suspect Jonathan Swift understood. Artificial intelligence remains artificial as long as machines cannot experience. And that includes the most recent proposals to integrate computation systems with actual neural tissues.
Last month, the journal Frontiers in Science published a proposal by a multidisciplinary group of researchers from Johns Hopkins and other universities to alter organoids – small clumps of human brain cells – to perform advanced computational tasks.
They write that the field of ‘organoid intelligence’ would combine brain cells with brain-machine interface technologies to develop biological computing, or biocomputing – raising a range of ethical issues. But still, organoids are not harbourers of experience.
Therefore, the answer to the problem posed by future developments of ChatGPT and other word-manipulation systems is to restore genuine real-world interactions for our younger generations: field trips, hands-on labs, travel, interactions with ‘others’, multilingual education and more.
Reality and real-world experience may be dismissed in this digital age, but the real world remains the best place to get an education.
John Richard Schrock is a Roe R Cross Distinguished Professor and biology professor emeritus at Emporia State University in the United States.