A Not-So-Brief History of the Founding of "Scientism"
Analyzing Quasi-Religious Rhetoric About AI and Its Use Through the Cognitive Science of Religion
Similarities Between AI Rhetoric and Religion
Recent language surrounding AI has begun to mimic that of spiritual salvation and providence, an interesting phenomenon that scholars of the cognitive science of religion have noticed and sought to understand. Influential figures in the AI industry have contributed to this narrative, portraying AI as a transformative force capable of redefining human existence and, in some cases, even “saving” humanity (Altman, 2024; Amodei, 2024; Andreessen, 2024).
These authors and others persistently claim that AI tools will bring individuals and the world to a higher plane of existence. And they appear to truly believe this and to act accordingly.
One of my colleagues was privileged to attend the OpenAI Educators’ Forum. They reported to me that they were shocked at the religion-like ideas that were discussed. While OpenAI leaders are pleased with the positive reception of AI tools in education (students are their most frequent users), consumer demand is only a small part of their research and development. Their ultimate goal is to create Artificial General Intelligence (AGI), which they envision as a sort of omnipresent, omnipotent digital god. Their ultimate goal for people is to ensure that the wonder and awe users feel toward AI tools never fades.
Even groups that are more human-centered and AI-adjacent believe in quasi-religious ideas and form religion-like hierarchies. One such group has a “manifesto” that makes somewhat utopian and unprovable claims about the impact of AI on education, the workforce, and the “AI-driven world” of the future. When an author in the movement tries to write material related to the group, they are told to include central aspects of the group’s manifesto and belief statement more explicitly. They are also required to mention certain key individuals in the group to give those individuals more visibility.
Elements of this rhetoric resemble “Scientism,” a fictional religion Isaac Asimov created for several stories in his sci-fi/political novel Foundation. This religion was actually a political and economic organization disguised as a faith-based social reform group. By tying the operation of technology to religious belief and ritual, the Foundation was able to manipulate entire planets into following its every whim. It could control how technology was used and who used it.
What Is “Scientism”?
“Scientism” is a smokescreen for the true purpose of the Foundation, the organization that invented the religion: to pervade society and undermine an oppressive, autocratic government. Science and technology were used to perform miracles and to impress people with the abilities granted to those who practiced the magician-like ways of Scientism. Gadgets were given to converts to help improve their lives, and in later years these were treated as religious relics. (If you watch the television adaptation of this story, note that it misses the nuance of this distinction.)
Despite what the name suggests, the religion does not promote science as it exists; it uses science to spread the benefits of technology, functioning as a type of social improvement group. The elite members at the top of the religion know its true purpose and see it as a research organization, but even most of the “technician-priests” believe the religion is sincere. Every ship and technological installation (hospitals, power plants, etc.) has at least one technician-priest. The “communion” of the religion actually consists of medicine and inoculations, whose effects are beyond the understanding of the local residents. Small and large miracles (from producing light to shutting down hospitals or spaceship yards) are actually political in nature. Through this religion, and the threat of spiritual consequences for acting against technology, the Foundation controls multiple planets.
Central Concepts of Cognitive Science of Religion
In my undergraduate work, I studied the concepts of the cognitive science of religion (CSR), also called the cognitive study of religion. This anthropological field examines religions and their adherents through common beliefs and mental processes. CSR researchers argue that our evolutionary adaptations make humans prone to seek out and join religious groups, which aids personal survival and helps build communities.
Hyper-Active Agency Detection Device
As I mentioned in my article on “Artificial Agency,” Atran (2002) argues that one of the evolved cognitive capacities that fosters religious belief is the “hyper-active agency detection device” (HADD). HADD arose from humans’ ancient need to survive attacks from predators (Atran 2002, 58-60, 64-70). In addition to detecting agents through sight and hearing when they were actually present, humans learned to interpret any occurrence they could not understand as purposeful, goal-oriented, and caused by an agent. This heightened their chances of survival, but it also produced many false positives. As religion formed, the unseen agents were given supernatural powers. According to Jesse Bering (2007), even as we grow older we still detect agents around us, view life as having a purpose, and see ourselves as bound to other agents, whether real (humans) or imagined (AI machines) (457-458).
Because of HADD, we assign human agency and characteristics to unseen forces. I would imagine that this same cognitive tendency is why we perceive human qualities in AI tools.
Costly Signals
Costly signals (sacrifices or rituals) are actions that exact a real cost, physical or otherwise. The practice was born of our ancestors’ need to trust one another. When they were living in groups, they needed to know who would support them while hunting, surviving, or warring with another community (Sosis 2005, 167-168). Costly signals were a way to ferret out free-riders and strengthen community bonds, maintaining unity at all times rather than only in times of necessity. In religious contexts, these costly signals include tithing, initiation rites, dress and diet restrictions, and fasting, among other things.
Members of a congregation perform these rituals to communicate their religious commitment to their community. Members who see others of the same religious group participating in the same sacrifices are more likely to help them in their time of need, according to a study by Richard Sosis (2005, 171-172). Many rituals are so intense and/or frequent that they are nearly impossible to fake, and the religions that use these costly signals have enduring, if not large, congregations (Sosis 2005, 171-172).
Laymen are not the only ones expected to participate in sacrifices; religious leaders are expected to be the most frequent and public participants in ritual. Scott Atran notes that if religious leaders (and some social leaders) do not sacrifice publicly and frequently enough, they will lose their influence and power over their community (Atran 2002, 202-206). Often, however, the ritual itself is not actually performed, because the willingness to perform it is seen as just as valid as the performance (think of Abraham and the sacrifice of Isaac/Ishmael) (Atran 2002, 126-129; Lawson 2017, 8-9).
Minimally Counterintuitive Ideas
Minimally counterintuitive ideas are concepts or stories that are transmitted from person to person because they are especially memorable. They are intuitive ideas altered just enough to be counterintuitive: they challenge us, but not so much that they become absurd. Barrett (2007) focuses on these ideas in his work and notes that most beliefs of the average layperson are minimally counterintuitive ideas (4-5). Minimally counterintuitive ideas feed off and strengthen the hyper-active agency detection device. In fact, sharing minimally counterintuitive ideas (spiritual experiences or near misses) and relating to others’ HADD experiences promotes the spread of the idea of superhuman agents.
Each person’s shared ideas or experiences will, theoretically at least, enhance and strengthen the validity of everyone else’s, and this will bond the individuals together in a community (Spiro 1952, 501-502).
Variation Regarding Non-Central Beliefs
This does not mean, however, that every individual within the same congregation believes in exactly the same ideas. Rather, an individual’s background will influence which ideas they focus on and which they are skeptical of (Sagan 1987, 5-10). Barrett (1999) recounts a study documenting “theological correctness,” in which each adult professed a theologically correct view of their god (according to the theology of their religion) and claimed to fully understand that god. However, in a task that required them to apply and demonstrate their concept of their god, they revealed that their individual perception was not theologically correct but anthropomorphic. Other studies demonstrate that many individuals in a religious congregation believe things that contradict what the group professes to teach.
AI Rhetoric, Scientism, and the Cognitive Science of Religion
Leaders in the AI industry use rhetoric and actions that are similar to religious rituals and teachings. They also encourage actions and thought processes that are similar to common ones discussed in the CSR field.
As I have already discussed agency detection in the Artificial Agency post, I will let that discussion speak for itself. Below are other aspects of religion that are also present in the rhetoric and actions surrounding artificial intelligence tools.
Costly Signals
For leaders and advocates, costly signals serve both as a means to galvanize public and private support and as a mechanism to assure stakeholders and followers that AI’s transformative journey is not merely theoretical.
One notable costly signal is the substantial personal and institutional investment in AI startups and major tech companies. Leaders in AI, such as Elon Musk, Sam Altman, and others, have poured resources into companies focused on AGI and generative AI models, with OpenAI and Anthropic being prime examples. This financial commitment goes beyond mere support; it is a public declaration that the success of these companies is a direct reflection of their beliefs in AI's transformative role. These investments often lock leaders into long-term trajectories that may not yield immediate returns, demonstrating an intense, high-risk devotion to the cause. Similarly, the cycle of releasing successive AI models—each an upgrade or enhancement of the last—functions as a costly signal by showcasing a relentless push toward improvement, regardless of occasional public scrutiny or product failures. Such cycles come with risks, as each model demands new resources, staffing, and sometimes, a gamble on technologies or data that might not meet immediate market demands.
Moreover, AI boardroom dynamics have offered clear instances of costly signals. High-profile shifts, including demotions, reappointments, and restructuring, underscore the personal and professional sacrifices leaders are willing to make to maintain alignment with AI’s ethical and developmental goals. For instance, leadership controversies at major AI companies have led to temporary removals of key figures, only to see them reinstated after public outcry from employees or stakeholders. Such turbulence, rather than destabilizing these organizations, often reinforces a collective commitment to the ideals these leaders espouse. It sends a signal that the individuals at the helm are willing to undergo professional upheaval to stay true to their principles, highlighting the seriousness with which they pursue AI’s future.
Releasing products, even amid intense criticism, is another form of costly signaling within the AI field. Many AI tools have been launched, criticized, and later repackaged under different names or marketing narratives, reflecting an industry that prioritizes persistence and adaptability over flawless first impressions. This process entails re-investment in the face of failure and a continuous iteration that shows dedication to fulfilling prior declarations. Users, too, are part of this costly signaling; early adopters and advocates of AI are often encouraged to integrate AI deeply into their lives, promoting it as the ultimate tool. This devotion sometimes comes at the expense of other technologies, which are downplayed in favor of AI as the future, thus symbolically endorsing AI’s predominance.
Government involvement represents a broad-scale costly signal. Numerous governments have poured vast sums into AI research, development, and regulation, reflecting a nationwide commitment that transcends individual gains. By instituting policies and creating funding initiatives aimed at fostering AI, these governments signal their belief in AI’s necessity for national prosperity, often redirecting resources from other areas like social services or infrastructure. This political and financial commitment reinforces the idea that AI is not just another technological development, but a paradigm-shifting pursuit.
These costly signals collectively underscore a critical point: the AI movement is not content with theoretical discourse or superficial progress. Leaders, organizations, and governments alike demonstrate their dedication through profound and sometimes risky actions that underscore a shared commitment to the field’s future. These sacrifices serve as public assurances that AI’s potential will be vigorously pursued, despite the personal, financial, and political costs involved.
Minimally Counterintuitive Ideas
The rhetoric surrounding AI tools is steeped in minimally counterintuitive ideas. This approach makes AI’s capabilities seem simultaneously plausible and awe-inspiring, allowing humans to imagine AI taking on roles that extend beyond our current experience without pushing those possibilities into the realm of outright fantasy. The result is an intriguing blend of familiarity and novelty—AI as a tool that assists us now but, with minimal extrapolation, might one day rival human judgment in areas like law or medicine.
Today, we see AI tools excelling in clearly defined, highly structured tasks, such as generating text, organizing information, and performing basic diagnostics, especially when clear, data-rich parameters are available. In content creation, for instance, AI already demonstrates notable productivity gains by streamlining repetitive tasks like drafting emails or summarizing articles. Such uses are familiar enough to be widely accepted, given they build on the role of technology as a productivity enhancer—a narrative that has been cultivated since the early days of computerization. However, the prospect that AI could eventually become an effective judge or doctor stretches our conceptual boundaries, even if only minimally. Here, the rhetoric around AI shifts toward a speculative space where the technology’s current capabilities are projected into domains that typically require complex moral or clinical judgment.
This rhetoric relies on the claim that such applications of AI are just beyond current reality: "If AI can write convincingly or interpret data as it does now, why could it not one day judge impartially or diagnose with greater accuracy than a human?" For example, in the medical field, AI systems are beginning to assist in diagnostic tasks, and though few view AI as an authoritative physician, we are comfortable considering it as a helpful diagnostic tool. This incremental stretch of capabilities primes us to imagine AI in even more influential roles—perhaps as a superhuman diagnostician capable of integrating vast amounts of data in a way no human could. An AI tool could even serve to diagnose and make decisions regarding groups of people as well as individuals. If we extend those data and context interpretation abilities to non-medical data, soon we could have AI judges. All it takes is small steps in the allowed abilities and use cases of AI tools.
In terms of influence, minimally counterintuitive AI rhetoric allows individuals to see the technology as approachable yet aspirational, with a future potential that feels attainable. This kind of rhetoric aligns with the cognitive processes that make religious or quasi-religious beliefs compelling, as it posits an entity (in this case, AI) as an enhanced form of what we already know, rather than a complete departure from it. By blending familiarity with just enough novelty, the discourse around AI can engage the imagination without requiring an abandonment of common-sense expectations. This combination, while speculative, helps bolster acceptance and belief in AI’s future potential, lending credibility to even the loftiest claims about what AI might achieve in domains traditionally governed by human expertise.
Variation Regarding Non-Central Beliefs
At the center of core beliefs lies a staunch commitment to the necessity and potential of AGI, an assumed next step for AI that could embody a complete understanding and application of human-level intelligence. Proponents of AGI argue that it is not merely a progression of technology but a transformative milestone that promises unparalleled benefits for society. Closely tied to this is the belief in AI’s capacity for eventual impartiality, with adherents envisioning tools capable of making unbiased, objective decisions across all sectors.
The goal of complete impartiality aligns with the values of effective altruism, which prioritizes actions and innovations that maximize well-being for the greatest number, even if this means severe consequences for minority groups or individuals. These fundamental beliefs thus create a vision in which AI is seen not only as transformative but also as intrinsically aligned with altruistic ethics. Moreover, there exists a belief that positive AI applications can counteract potentially negative uses of AI (Andreessen, 2024), contributing to a society where all actors, including those with competing interests, leverage AI to create a balanced, mutually reinforcing system.
In contrast, non-core beliefs are viewed with a considerable degree of openness. For instance, the general aim of improving human life through AI is widely endorsed, but opinions diverge on what specific forms that improvement might take. Some argue that AI should prioritize healthcare, while others believe education or economic equity is where it could offer the most significant impact. Additionally, while many agree that collaborating with stakeholders (policymakers, communities, and even skeptics) is valuable, the methods and extent of that engagement vary widely among practitioners. Some advocate for aggressive stakeholder inclusion in every developmental phase, while others see collaboration as an occasional necessity rather than a continual process. Another key area of variation lies in the conception of AGI tools or the array of systems that will ultimately benefit society. The question of whether AGI should be centralized within one "super system" or dispersed across specialized, interoperable modules remains a matter of personal and organizational preference.
This flexible stance on non-central beliefs enables individuals within the genAI field to explore, experiment, and propose a range of approaches while still operating under an overarching ideological framework. Core beliefs establish a shared vision that emphasizes AI’s role as a benevolent force and guidepost for progress, while non-core beliefs allow for practical, diverse expressions of that vision. This balance cultivates a cohesive yet dynamic environment, fostering both unity and innovation. Through this structure, the genAI field can progress with a shared understanding of purpose, adapting to emerging challenges and perspectives without losing sight of its fundamental ideals.
AI Chatbots Are the New Priests
While religion was not the first thing that sprang to most people’s minds when they thought about genAI, some developers wasted no time in creating religion-themed chatbots. One of the first tools I encountered was “Robot Spirit Guide,” which could offer spiritual or life advice based on whatever religious text or group you selected. The first version of the site offered around a hundred religious groups, but the developers must have received complaints about accuracy, because now they offer fewer than ten. They also include a disclaimer that an AI tool should not be relied upon as an ultimate authority. Still, they promote the idea that an AI tool trained on religious texts can serve as a replacement for those texts.
Religious groups have also developed AI assistants of their own. An Episcopalian group created Churchy Answers That Help You (CATHY). This AI “virtual guide” directs people to textual answers about the Episcopal Church, the Book of Common Prayer, and Forward Movement, a publishing ministry of the Episcopal Church and the self-proclaimed largest publisher of Episcopalian materials. CATHY can give you secular information about the church or suggest scripture readings and doctrinal information.
Informational chatbots and tools about religious topics are only a small segment of religion-themed AI tools. In an article entitled “The Rise of GodGPT,” Business Insider uses a conversation with CATHY to introduce its topic before moving on to discuss other types of religious tools, from deity stand-ins to religious text summarizers. You can talk with the Buddhabot or interrogate the Bhagavad Gita. More often than not, though, inaccurate, incomplete, and nonsensical information means that you are no more sure of the right answer to your question than you were before you started the conversation.
Modern-Day Scientism?
So, why not just make an AI-based religion? Why not bring Scientism from fiction to reality? Some people are trying to do just that.
Religione.ai, a platform positioned as an AI-driven moral guide, exemplifies modern Scientism. The platform’s explicit purpose is to offer ethical advice, reflecting the human desire for moral guidance in an increasingly digital world. While it may provide useful insights, it also encapsulates the risks of Scientism: an overreliance on technological solutions in areas where human discernment and ethical nuance remain irreplaceable.
Ironically, when I went to use the tool, I got an error message indicating that its creators had not paid the bill for a service that supports its operations. It does not seem that this iteration of Scientism will endure on its own.
While not every religion in the world includes belief in a being with higher knowledge, power, or presence than human beings, such belief is a common aspect of many religions. Religione.ai and other chatbots are only a few steps removed from the first online temple, the Cyber-Temple of Zoroastrianism, created by Shahriar Shahriari, which was the first expression of a public desire to communicate directly with the divine via the internet.
The difference is that this religion is waiting for the first appearance of its god rather than a subsequent one. Like those of other religions, its adherents have varying levels of faith. Some users are not so confident in the promised abilities of AGI. But some users, and most developers, view the creation of their god as their ultimate goal. Worship in this religion is not prayer or devotion but research and the purchase of compute power. Its tithes are data, not money or food. Its fasting is the sacrifice of other research and development goals. Its missions are post-graduate work in Silicon Valley or positions in businesses as AI experts.
But what about the political and social ties that the Foundation used Scientism to achieve? Well, that occurred a few months ago. OpenAI and Anthropic entered into a formal agreement to provide models and research to the United States government regarding AI tools, safety, and AI advancement. While the government framed the agreement as a safety measure to ensure that artificial intelligence models do not harm the public, OpenAI used this deal to acquire a “customer,” the U.S. Agency for International Development.
I do not know whether any AI god will be created in the future. But I do see a potential future in which the advent of AGI is treated as an Advent. It may not be worshipped with pilgrimages, meditation, or psalms, but it will serve as the object of a fervor rivaling that of the most pious zealot.
Does This Mean That AI Is Bad?
While it is true that many of today's AI tools were developed by researchers and companies pursuing AGI as their north star, it behooves us not to dismiss these technologies simply because we may disagree with the underlying motivations of their creators.
The modern hospital system was largely developed by religious orders whose ultimate aims included converting patients to Christianity. The great Islamic Golden Age produced groundbreaking advances in mathematics, astronomy, and medicine by scholars working within a religious framework very different from modern secular science. Even the scientific method itself was significantly shaped by natural theologians seeking to understand God's creation through careful observation. Today's AI tools, from medical imaging systems that save lives to language models that make knowledge more accessible, can be evaluated on their own merits and thoughtfully applied to benefit humanity, regardless of whether one subscribes to the AGI-focused worldview that helped birth them. We can appreciate the practical value of these innovations while maintaining a healthy skepticism toward any quasi-religious claims about artificial superintelligence.
The history of human progress teaches us that transformative discoveries often come wrapped in the prevailing beliefs and preoccupations of their time. Just as we've learned to separate the astronomical insights of ancient astrologers from their mystical framework, we can thoughtfully engage with modern AI advances while maintaining critical distance from AGI prophecies. There's wisdom in learning to extract what's useful while being mindful of underlying assumptions and motivations. Rather than rejecting AI tools wholesale because we may disagree with some developers' ultimate aims, we can approach them with nuanced discernment: embracing beneficial applications while being cognizant of limitations and ethical considerations.
Conclusion
AI’s quasi-religious status in some sectors stems from an evolved human tendency to seek meaning and guidance from influential technologies. Viewed through the lens of the cognitive science of religion, the rise of AI tools as moral and intellectual authorities illustrates the potential pitfalls of Scientism. It is essential that, as we integrate AI into various domains, we retain a balanced perspective on its role. AI can be a valuable tool, but only if we keep our critical faculties and do not surrender human oversight to the allure of technological determinism.
References
Atran, Scott. 2002. In Gods We Trust: The Evolutionary Landscape of Religion. New York: Oxford University Press.
Barrett, Justin. 1999. “Theological Correctness: Cognitive Constraint and the Study of Religion.” Method & Theory in the Study of Religion 11 (4): 325-339.
Barrett, Justin. 2007. “Cognitive Science of Religion: What Is It and Why Is It?” Religion Compass 1: 1-19. Accessed 22 January 2020. doi:10.1111/j.1749-8171.2007.00042.x.
Lawson, E. Thomas. 2017. “The Cognitive Science of Religion and the Growth of Knowledge.” In Religion Explained? The Cognitive Science of Religion after 25 Years, edited by Luther H. Martin and Donald Wiebe, 1-9. London: Bloomsbury Publishing.
Sagan, Carl. 1987. “The Burden of Skepticism.” Skeptical Inquirer 12 (Fall).
Saler, Benson. 2008. “Conceptualizing Religion: Some Recent Reflections.” Religion 38 (3): 219-225. Accessed 8 January 2020.
Sosis, Richard. 2005. “The Adaptive Value of Religious Ritual.” American Scientist 92: 166-172. Accessed 23 April 2020.
Spiro, Melford E. 1952. “Ghosts, Ifaluk, and Teleological Functionalism.” American Anthropologist 54 (4): 497-503. doi:10.1525/aa.1952.54.4.02a00040.