AI Is Not Smart...
The Delusion Shared By Humans and AI, and How AI Rhetoric and Processes Contribute
Folie à Deux: Another CSR Concept
“Shared delusions,” or “folie à deux,” occur when delusional beliefs are transmitted from one individual to another, often within close relationships. This transmission can extend to larger groups, especially when reinforced by cultural or religious frameworks. This is yet another factor in AI rhetoric that relates to religious belief, as I mentioned in one of my previous posts.
Shared delusions about artificial intelligence reinforce wrong ideas in our communities of practice. I cannot find a more tactful way to say it. If we act on the assumption that AI has actual intelligence, then we may fall into the even more wrong assumption that its intelligence is greater than ours. We could be discounting ourselves simply because a machine has guessed correctly.
Researchers have explored the cognitive mechanisms that facilitate the formation and propagation of such beliefs. For instance, Brüne (2009) examined the psychological processes underlying both religiousness and delusional beliefs, suggesting that certain cognitive biases and social dynamics contribute to their development and maintenance. These include the tendency to infer agency, the need for cognitive closure, and the influence of social conformity. Such factors can lead to the adoption and reinforcement of shared delusional beliefs within religious groups.
Additionally, the concept of "true-believer syndrome" illustrates how individuals persist in their beliefs despite contradictory evidence. This persistence is often observed in religious contexts, where deeply held convictions are maintained even when faced with disconfirming information. The interplay between cognitive biases, social influences, and the inherent structure of certain religious doctrines can create environments where shared delusions not only emerge but also become deeply entrenched.
An increasingly common argument from students caught using generative AI unethically is that they were just relying on the “superior” communicator or thinker. One of my colleagues’ students said, “AI is smarter than humans, so we should trust it because if it is saying something that we do not know, then we just do not know it yet.”
While generative AI has multiple features that facilitate its improvement in communicating ideas and incorporating “new” ideas into future outputs, it is not a brain. It is not conscious. And it definitely is not smart.
AI Shares In Our Delusion
Unlike humans, who can reflect on and correct their understanding, AI tools lack the mechanisms to assess the correctness of their outputs. This limitation stems from their design: they do not understand the information they process; they merely predict patterns based on correlations in the training data. Consequently, when generative AI tools produce errors—often referred to as “hallucinations”—they do so with an air of authority that can mislead users into believing in their reliability.
The term “confidence” is frequently used to describe the way AI delivers its outputs, but this description requires clarification. AI tools are not confident in the sense of human confidence, which is based on understanding, reasoning, and experience. Instead, their confidence reflects the strength of the correlations they identify between the input data and the output they generate. This statistical confidence has nothing to do with the validity or accuracy of the information. A generative AI model can be “confident” about a completely fabricated answer if the correlations within its training data align strongly with the prompt it receives.
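To make the distinction concrete, here is a minimal sketch, in Python with invented numbers, of where that statistical “confidence” comes from: the model turns raw correlation scores (logits) into a probability distribution over candidate next tokens, and nothing in that arithmetic checks whether the highest-scoring candidate is actually true.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next tokens after the
# prompt "The capital of Australia is". The numbers are invented for
# illustration; nothing in them encodes whether a candidate is factual.
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [4.1, 2.3, 1.0]  # "Sydney" co-occurs heavily with "Australia" in text

for token, p in zip(candidates, softmax(logits)):
    print(f"{token}: {p:.2f}")
# Sydney: 0.83, Canberra: 0.14, Melbourne: 0.04
```

In this made-up case the model would be roughly 83 percent “confident” in Sydney, which is wrong; the probability measures correlation strength in the training data, not truth.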
AI “Pretends” To Be Smart, and That Pretending Does Not Work
AI excels at synthesizing information and generating outputs that mimic human reasoning. This skill has led to its adoption in areas such as content creation, customer service, and research. However, the illusion of intelligence masks the reality: AI doesn’t “know” anything. It doesn’t reason, empathize, or possess intent. Instead, it uses mathematical algorithms to predict the next most likely word or response based on its training data. Devin Coldewey refers to these algorithms as “internal statistical maps” (the structures of these maps will be discussed at the end of the article, so keep reading!). This capability allows AI to generate outputs that feel authoritative but are often inaccurate, incomplete, or contextually inappropriate.
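To see how far “prediction without knowing” can go, here is a toy next-word predictor built from nothing but word counts. It is a deliberately crude stand-in for a real model’s “internal statistical map” (transformers learn vastly richer representations), but the basic move is the same: emit the statistically likely continuation, with no notion of meaning.

```python
from collections import Counter, defaultdict

# A toy "internal statistical map": counts of which word follows which word
# in a tiny, made-up corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most common follower of `word` in the corpus."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # 'cat' -- chosen because it is frequent, not because anything is "known"
```

The prediction is often reasonable and sometimes nonsense, and the code has no way to tell the difference, which is the point.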
Jon Ippolito, the director of the University of Maine’s Center for New Media, wrote an insightful blog post about why genAI tools can let us down.
"Generative AI's tendency to steer toward the mean, not the median, is why it often produces “average” results yet is also capable of spewing crazy talk. Unlike the median, which exists in real data (like 2 kids per family), the mean can be a fantasy (1.94 kids) and may fall into "no-man's land" between real values.
This averaging is fine for factual answers—Paris is France’s capital—but can fail embarrassingly when data near the mean is sparse—Google bizarrely recommending we eat "at least one" rock daily."
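Ippolito’s mean-versus-median point is easy to verify with a few lines of Python and some invented family sizes; the numbers below are made up, but the arithmetic is the argument.

```python
from statistics import mean, median

# Invented family sizes, purely to illustrate the mean/median distinction
kids_per_family = [1, 2, 2, 2, 3, 0, 2, 4, 1, 2]

print(mean(kids_per_family))    # 1.9 -- a value no actual family has
print(median(kids_per_family))  # 2.0 -- a value that really occurs in the data
```

An output built around the mean can land between real values, which is harmless for well-attested facts and risky wherever the nearby data is sparse or strange.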
Andrej Karpathy, formerly of OpenAI, released a video just under a year ago on how generative AI tools are trained and how they operate. Watch it below for a relatively short but thorough explanation:
Androids Dreaming of Electric Sheep (and Documents, and Images, etc.)
AI models, as Karpathy explains above, operate by processing vast amounts of input data, identifying patterns, and generating outputs based on probabilities. They do not “understand” the information they process as humans do; instead, they produce outputs based on mathematical approximations.
This process sometimes results in “hallucinations,” or confidently stated inaccuracies. In other words, AI "dreams up" responses based on the data it processes and the context provided by users. These “dreams” can sometimes feel human-like, but they remain mathematical constructs.
Ippolito’s critique of generative AI’s “averaging tendency” implicitly points to the flaw in how these dreams are constructed: models create outputs that skew toward generalizations rather than robust, contextually accurate insights. This means that, much like human dreams, the closest they will get to reality is an approximation dominated by the strongest connections in the model’s dataset.
This lack of intrinsic understanding reveals a fundamental limitation. AI tools mimic human-like cognition but do not possess consciousness or comprehension. Ellie Pavlick’s research shows that while AI can simulate some sensory grounding through multimodal training, this simulation does not equate to genuine human-like awareness or reasoning. It only shows that the model is able to categorize, correlate, and catalog data and metadata.
Modeling the Connections Between Concepts
As discussed by David Chalmers in "Could a Large Language Model Be Conscious?", some AI systems mimic sensory or spatial awareness through multimodal training on text and images. But this mimicry is not equivalent to human experience. It merely reflects patterns in the data, not lived reality.
I rarely quote other sources at length, but please forgive me for including large sections of this article. He says it much better than I could.
"LLMs have a huge amount of training on text input which derives from sources in the world. One could argue that this connection to the world serves as a sort of grounding.
…
“Multimodal extended language models have elements of both sensory and bodily grounding. Vision-language models are trained on both text and on images of the environment. Language-action models are trained to control bodies interacting with the environment. Vision-language-action models combine the two. Some systems control physical robots using camera images of the physical environment, while others control virtual robots in a virtual world.”
…
“It’s plausible that neural network systems… are capable at least in principle of having deep and robust world models. And it’s plausible that in the long run, systems with these models will outperform systems without these models at prediction tasks. If so, one would expect that truly minimizing prediction error in these systems would require deep models of the world. For example, to optimize prediction in discourse about the New York City subway system, it will help a lot to have a robust model of the subway system. Generalizing, this suggests that good enough optimization of prediction error over a broad enough space of models ought to lead to robust world models.
“If this is right, the underlying question is not so much whether it’s possible in principle for language models to have world models and self models, but instead whether these models are already present in current language models. That’s an empirical question. I think the evidence is still developing here, but interpretability research gives at least some evidence of robust world models.”
Chalmers references the work of Harvard researcher Kenneth Li on deliberately probing, modifying, and tracking the internal models that AI systems infer from data. Li and colleagues trained a variant of the GPT architecture, termed Othello-GPT, to predict legal moves in the board game Othello without any prior knowledge of the game’s rules. Their findings revealed that the model developed an emergent internal representation of the board state, suggesting that language models can form complex internal structures that go beyond mere surface statistics.
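For readers curious what “finding an internal representation” looks like in practice, here is a simplified sketch of the probing technique. It uses a linear classifier, a simplification of the probes Li and colleagues actually used, and random placeholder arrays standing in for the model’s hidden activations and the true board states. On real activations, a probe that predicts the board far better than chance is evidence that the model encodes the board internally.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data standing in for (hidden activation, board-square state) pairs.
# In the real experiments the activations would come from Othello-GPT's layers
# and the labels from actual game boards; here both are random, so this probe
# should score near chance (~0.33).
rng = np.random.default_rng(0)
activations = rng.normal(size=(2000, 512))       # one hidden state per move
square_state = rng.integers(0, 3, size=2000)     # 0 = empty, 1 = black, 2 = white

X_train, X_test, y_train, y_test = train_test_split(
    activations, square_state, test_size=0.25, random_state=0
)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")
```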
While none of this means that an AI tool is smart, it does suggest that generative AI systems build connections between pieces of data and respond based on inferences drawn from those connections, improving their internal models along the way. The nature and structure of those models are our next topic.
AI Is Not Smart, But It Is Gaining New Concepts
AI does not "learn" like humans do, but it does gain new concepts and connections between ideas from each new interaction with its users and trainers. Eleizer Yudkowsky calls this "growth" in the AI, while others call it “recursive self-improvement.” Either term is technically a misnomer, because the programmers trained the tools to respond to prompts and track data associated with feedback. There is a human reinforcing the changes the model makes to its connections (explicitly in the pre-training and training stages, implicitly during the end-user use stage). The connections between concepts, as they are "perceived" by the AI tools, form networks not unlike "thought maps" or "word clouds," except they are more complex.
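Before moving on to those networks, here is a deliberately crude sketch of what “a human reinforcing the changes” amounts to in numeric terms. Real systems use reinforcement learning from human feedback across billions of parameters; the concept pairs and numbers below are invented, and the point is only that the “growth” is weights being nudged by human signals, not the system deciding to learn.

```python
# Toy connection strengths between concept pairs (invented values).
connection_strengths = {
    ("school", "teacher"): 0.6,
    ("school", "fish"): 0.1,
}

def apply_feedback(pair, thumbs_up, learning_rate=0.1):
    """Nudge a connection toward 1.0 on positive feedback, toward 0.0 on negative."""
    current = connection_strengths[pair]
    target = 1.0 if thumbs_up else 0.0
    connection_strengths[pair] = current + learning_rate * (target - current)

apply_feedback(("school", "teacher"), thumbs_up=True)   # human approves the association
apply_feedback(("school", "fish"), thumbs_up=False)     # human rejects the association
print(connection_strengths)  # roughly {('school', 'teacher'): 0.64, ('school', 'fish'): 0.09}
```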
To give credit where credit is due, here is the first link where I found an external author reporting on this idea.
That author referenced a video they made discussing the ideas of Max Tegmark and colleagues at the Massachusetts Institute of Technology, first published in “The Geometry of Concepts: Sparse Autoencoder Feature Structure.”
The basic idea is that concepts have “vectors”: directions in a high-dimensional space whose relative positions show how strongly or weakly each concept is connected to other concepts. These connections, as we know from other resources, shift with the context of prompts and the objectives of user communications.
Diagrams of the connections between a few concepts create familiar shapes, such as “parallelograms” or “trapezoids.” Larger groups of concepts form more cloud-like structures. As more concepts are added to the diagram, certain “fields” or “domains” tend to stay closer to each other. The “global” structure, according to Tegmark et al., appears increasingly like a brain. But while these conceptual “vectors” interact in ways that seem brain-like, the structures they form are fundamentally different from human cognition because they lack agency, intentionality, and understanding.
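Here is a small sketch of what those “vectors” and “parallelograms” mean, using invented three-dimensional concept vectors (real models use hundreds or thousands of dimensions, and Tegmark’s team works with sparse-autoencoder features rather than the classic word embeddings shown here). The geometric intuition is the same: similarity in direction measures how strongly concepts are connected, and related pairs of concepts form parallelogram-like shapes.

```python
import numpy as np

# Invented 3-dimensional "concept vectors".
# Dimensions, loosely: [maleness, femaleness, royalty].
concepts = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([0.0, 1.0, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0]),
    "queen": np.array([0.0, 1.0, 1.0]),
}

def cosine(a, b):
    """How strongly two concept vectors point in the same direction (1.0 = identical)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The "parallelogram": king - man + woman lands on (or very near) queen.
analogy = concepts["king"] - concepts["man"] + concepts["woman"]
closest = max(concepts, key=lambda name: cosine(analogy, concepts[name]))
print(closest)                                      # queen
print(cosine(concepts["king"], concepts["queen"]))  # 0.5 -- related, but not identical
```

None of this requires the system to understand royalty or gender; the shapes fall out of arithmetic on co-occurrence patterns.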
Connection Vectors and Schemata
In some ways, these networks of connections remind me of the concept of “schemata” in cognitive psychology and instructional design. A “schema” is our mental model of each place, thing, or institution we encounter. We build and maintain these models based on what we learn about them and the experiences we have with their iterations. Each person’s schema differs from the next person’s.
For example, we all have an idea of what a school is like. We've all taught there, learned there, engaged there. But since every person's experience at a school is different, and every person's brain is different, we all think of schools differently from everyone else (even if we go to the same school as other people).
Even if we get data from tens of thousands of other people who say "this is what I think a school is like," we will still give at least a slight priority to our own experience unless we control for it heavily. Kind of like confirmation bias.
In a similar way, a generative AI model learns from the data and from each piece’s connections to other pieces, as established during pre-training, training, and end-user use. Gradually, it refines these connections into a stable network that can give well-tuned responses to a variety of inputs requiring knowledge about each piece. New connections between previously unaffiliated pieces of data foster nuance in the model’s responses.
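A toy version of that refinement, sketched below with invented sentences, is a co-occurrence network that grows as new text arrives. Real models encode these connections as learned weights rather than explicit counts, but the way previously unaffiliated pieces of data become linked through shared context is the idea.

```python
from collections import defaultdict
from itertools import combinations

# A toy "connection network": co-occurrence counts between terms,
# grown one document at a time.
connections = defaultdict(int)

def ingest(document):
    """Strengthen the connection between every pair of words that co-occur."""
    words = set(document.lower().split())
    for a, b in combinations(sorted(words), 2):
        connections[(a, b)] += 1

ingest("teachers grade essays at school")
ingest("students write essays with ai")

# "essays" now bridges two previously unaffiliated clusters: school and ai.
print(connections[("essays", "school")])  # 1
print(connections[("ai", "essays")])      # 1
```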
Another way to think of this (allegorically and mythologically) is that the artificial intelligence is building a digital understanding similar to Plato’s Theory of Forms. Plato postulated that each item in the world (circle, chair, house, plant, etc.) is a copy or iteration of a Perfect Form or Perfect Idea. For instance, while numerous individual circles exist in the physical realm, each varying in precision and appearance, they all partake in the Form of the Circle—a perfect, abstract concept of circularity that transcends any physical manifestation. Each manifestation gives more insight into another aspect of Perfection. In a similar way, more datapoints allow for more complex and accurate data networks.
Conclusion
AI tools, for all their power and utility, remain fundamentally different from human intelligence. They excel in speed, pattern recognition, and the synthesis of vast datasets, yet they lack the self-awareness, creativity, and contextual understanding that define human cognition.
They remind me of a quote in Asimov’s novel The Caves of Steel. The protagonist of the story, Elijah Baley, is fed up with his android partner not learning anything about humans that is not relevant to the criminal case they are investigating. R. Daneel Olivaw replies,
“Aimless extension of knowledge, however, which is what I think you really mean by the term curiosity, is merely inefficiency. I am designed to avoid inefficiency.”
Several times throughout the novel, Olivaw comes close to passing for human, but his limitations keep him from quite managing it. He knows what a general person would do in a general situation, but not what a specific type of person would do in a complex and multi-faceted situation.
Similarly, artificial intelligence tools are designed to be efficient and satisfy general needs, unless they are specifically directed to do something different (and their general programming can definitely conflict with specialized instructions, precluding them from being used in all circumstances!). As I said before, AI operates as a “great pretender,” simulating intelligence convincingly but often failing to grasp the truth or recognize its errors. This inability to understand or analyze its own correctness can make AI both a remarkable ally and a potential source of risk if misused or over-relied upon.
The key to navigating this new era lies in understanding AI for what it truly is: a tool designed to augment human effort, not replace human judgment. When we approach AI critically—recognizing its limitations and its reliance on statistical, not contextual, confidence—we can harness its strengths responsibly. This means keeping humans firmly in control of decisions and processes, ensuring that AI complements rather than diminishes our own capabilities.
As we integrate AI more deeply into our lives, we must guard against the illusion of its "smartness" and remember that its outputs are only as valuable as the human oversight that shapes and critiques them.
References
Brüne, M. (2009). On Shared Psychological Mechanisms of Religiousness and Delusional Beliefs.
Chalmers, D. (2023). Could a Large Language Model Be Conscious?
Coldewey, D. (2024). The Great Pretender.
Geeky Gadgets. ChatGPT Brain-like Structures.
Ippolito, J. (2023). AI Made Me Basic.
Karpathy, A. (2022). How Generative AI Tools Are Trained.
Li, K., et al. (2022). Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task.
Tegmark, M., et al. (2024). The Geometry of Concepts: Sparse Autoencoder Feature Structure.