Intelligence, Awareness, Reversion
Or why "Artificial Intelligence" is a misnomer
It’s well-nigh impossible to avoid the current widespread frenzy over Artificial Intelligence. From scientific research to industry, from the waging of war to medical care, from complicated logistics to the art of writing, people are caught up in being agog over or afraid of how AI might revolutionize our worlds and our lives. And people aren’t simply agog or afraid as one might be before a spectacle: a large portion of the world’s population now uses generative AI on a regular basis. Nor is it just people on the ground who are taken by AI. At this point, more money has been sunk into AI research than into the Manhattan Project and the Apollo program combined, by a wide margin—and investment in AI development from all quarters is forecast to continue apace. Moreover, differentials in AI research and development are taken as indicators of the (dis)parity between nations: major news outlets now regularly publish reports comparing the U.S. and China on AI innovation and implementation. AI is hot, and people are on fire for it with a fire that shows no signs of going out.
That developments in Large Language Models and the other machine-learning architectures that fall under the term “Artificial Intelligence” will prove beneficial in any number of domains, I have little doubt. Already, for instance, AI has helped identify numerous spatial anomalies that scientists had heretofore missed, and it has proven useful in improving the way we move goods through supply chains. That what we call “AI” can, does, and will play a significant role in our singular and collective lives going forward is beyond doubt.
What AI is, though, is not beyond doubt. While the more careful will hedge talk about AI simulating “human learning, comprehension, problem solving, decision making, creativity and autonomy,” people everywhere talk about AI with the same sort of language we use to talk about human cognitive operations—as being able to think, to understand, etc. Others will go so far as to say that, in fact, the goal of AI research and development is the development of intelligence simpliciter. Regardless of whether any of us has made the move to equate AI (properly so-called) with natural intelligence, the words we use to describe it and through which we seek to understand it do shape how we think about and relate to it. It thus behooves us all to ask: Does it even make sense to talk about Artificial Intelligence as intelligence? Could an assemblage of material parts, no matter how intricate and complex, ever of itself be intelligent?
If intelligence is, or mainly is, prediction, and if prediction as we recognize ourselves to engage in it is straightforwardly indicated by output or utterances that, if given by another human being, would be understood as expressing a prior act of prediction, then yes, we rightly talk about AI in terms of intelligence. After all, AI has already manifestly performed the work of prediction in a variety of fields, and it seems that it will only get better at doing this. That one would be tempted to stop here and ask no further questions would be understandable. A real mark of intelligence in all of us is that we can and do predict what will result from what and then act accordingly: a child is thought intelligent because he successfully predicts that performing a certain action will convince someone else to hand over a desired thing, a carpenter is considered intelligent because he can predict that building things in a different way will result in a better structure, and a medical scientist is considered intelligent because he successfully predicts that doing x or y will result in better cancer-treatment outcomes than those so far seen. Examples can be multiplied. It is thus unsurprising that we should think AI intelligent or talk about it as such because it serves to make successful predictions.
Intelligence is not reducible to predictive ability, however, and this is clear if we simply attend to our acts of prediction. When I predict anything—what striking the keys of my keyboard will produce on my computer screen, what making a turn at a particular junction while driving will result in, or what following a particular diet and exercise regimen will do for my health—I am aware, in my very act of predicting, of my act of predicting. That is, whether I am attending to this awareness or not, I am aware of my act of predicting whenever I engage in such an act.
Such is true of any cognitional act you choose. Attending to a beautiful sky while walking along the shoreline inherently involves being aware that you are attending to a beautiful sky. Noticing a pretty rock submerged in the flowing water of a small mountain stream while on a hike in springtime inextricably involves being aware that you are noticing such a rock. And recognizing, remembering, understanding, or knowing that someone loves you through the hard times essentially involves being aware that you recognize, remember, understand, or know this whenever you engage in any such activity. That is, a return by a complete return is involved in any and every act of intelligence we engage in. Such a return—or reversion, if you wish—is not discursive, not something spatially or temporally unfolded through a back-and-forth, but rather something simultaneous with any intelligent activity whatever.
No material or extended thing, no matter how intricate or complex, of itself or by itself can effect through and for itself such a complete return or reversion. One can of course increase the processing speed of computer chips and computer systems immensely, but such a system will always produce output through an extended series of physical occurrences—one set of things leading to another, and that to another, which will then result in the set of things or events we understand as legible or intelligible output. Intelligence, rather, is properly attributed to spirit, which (for us) works itself out through material things but is not reducible to them. If self-presence, a return or reversion by a complete return or reversion, is not possible for anything material or extended, no matter how complex or intricate, then, insofar as AI is reducible to a highly complex bringing-together of material or extended things, it cannot be intelligent, and it is unreasonable for us to talk about it as if it were, except metaphorically.
But what if AI does prove to be intelligent? What if, once we achieve the holy tech grail of AGI, we find that AI gives us overwhelming reason to suppose it is no longer a simulacrum of intelligence but an actual embodiment of it? If that occurs, we will not have created intelligence. Instead, we will have created the conditions for its entrance into our world. That is, we will have successfully conjured up an instance of spirit—a necromancer or occultist’s dream.


