Multiple Funny Things Happened On the Way to the Singularity
Are Current Company AI-Human Workflows Really Productive? How Are Companies Really Interacting with AI Tools?
The Hype and Its Origins
So many institutions, whether AI companies themselves or organizations adopting AI technologies into their workflows and processes willy-nilly, are focused on the possibility of a "singularity" or the creation of "Artificial General Intelligence." OpenAI is so confident in the importance of AI to the future workforce that it held an Educators' Forum last week to discuss the impact of generative AI, particularly OpenAI's own products, on education. Both Liza Long and Marc Watkins, whom I encourage you to follow, participated in this forum. The links on their names take you to their summaries of and reflections on the forum.
Shortly before this forum, Sam Altman released a manifesto of sorts on his personal blog, in which he tied his hypotheses and hopes for generative AI to his own personal ambitions. In my reading, at least, success in creating artificial general intelligence would serve him as a form of personal validation.
OpenAI is not the only institution that has become more public and bold about its estimations of the effects of generative AI on our society. Dario Amodei, CEO of Anthropic, the creators of Claude, published a blog post entitled "Machines of Loving Grace." While the implications of this title are somewhat alarming (it evokes religious imagery), I am not so sure that Dario meant all of those implications. He probably just wanted to quote an evocative poem by Richard Brautigan. However, as I tell anyone with whom I discuss literature, the author's intent is only part of a work's meaning; how the work is received by the public is just as valid.
Another pro-hype organization, and one I was more closely connected to before I moved away from it, is the Human Intelligence Movement. It has its own "manifesto," although it is much shorter than the other two. It consists mostly of predictions about the future and an assertion that, if those predictions come true, all students in a "traditional" education system will be left behind because they will not be as equipped to navigate an "AI world" and "AI-driven workforce."
All of these manifestos, and the communications and communities around them, have quasi-religious aspects, which I will discuss in a later post (I have a lot of ideas about that, believe me). Their ultimate goal, at least for now, is to ensure that people do not lose the wonder they feel when they consider generative AI tools. Followers must not write or do anything that contravenes the talking points of the hype. They must focus on the future and what could improve, rather than on what is wrong. And there are figures one must look up to, revere, and treat as ultimate authorities.
When one steps out of these groups and their writings, or at least steps back and views them critically, one discovers that there are, as one author wrote, "glaring blind spots" in their reasoning and goals. Often, when manifestos and grand publications or events are released, their goal is to engage "believers" or "followers" in reinforcing certain ideas. This primes people for potential changes or drastic occurrences.
Drastic Impact of Businesses Embracing the Hype
Not all businesses are buying into the hype, but the stories of those that are embracing it are quite alarming. About a month ago, Klarna decided to cut its workforce in half and replace those who were let go with AI tools. This caused a massive uproar in the business world, especially when the company's spokespersons addressed the issue in purely logistical terms. Profits increased greatly, they said, once AI took over certain tasks; the company also operated more efficiently and without deviation from its procedures. In customer service, for instance, the AI bot ostensibly performs the workload of seven hundred people.
I would have thought such impacts were years in the future, but they occurred less than two years after ChatGPT became public. Granted, artificial intelligence has been explored for around seventy years. But the tools that are currently replacing human labor and products have been in the mainstream public eye for only a very short time.
I wrote in a previous post about the dangers of perceiving and responding to AI tools as if they were human. Well, these companies have taken this a step further and declared that certain functions of AI tools are so similar to those of humans that AI can replace humans. If we take this further still, we could commit the reverse process and de-humanize humans, or "machinopomorphize" them (I made that word up; it's probably linguistically incorrect).
Further Notes on the "Personification" of AI Machines
This image is the cover of the second novel in Asimov's Robot series, The Naked Sun. The character on the left, R. Daneel Olivaw, reveals that he is a robot in front of a more primitive robot model, which had been fooled into thinking Daneel was human. In the previous installment, The Caves of Steel, Daneel had passed convincingly as human among the people of Earth.
But What is Really Happening?
How many predictions of these institutions, and of the individuals who make them up, are being realized? How many will be realized only under certain (and unlikely) circumstances? And how many will never come to pass?
One of the most common arguments AI institutions use to validate their perspective on AI in education is that "AI is going to be widespread in all fields and workplaces, so students should be taught best practices in as many courses as possible." This is tied to the idea (whether intentional or not) that AI use will itself be best practice in the field. Just because something can be done does not mean that it should be done. I will return to this in a future post about principles of productive AI use.
In the meantime, it is time that everyone realizes two things:
1. The minds of AI developers are, more often than not, at odds with the preferences of their users, mostly because the developers are not trying to make usable tools. They are trying to approach the AI "singularity" and build a massive machine that is as close to a human brain as possible.
2. Users, whether educators, students, or practitioners seeking to ethically implement new technologies, are responsible for ethical use and productive patterns. Developers are more concerned, at least at the moment, with the abilities of their tools than with the ethical implications of those tools.
This means that the hype generated by AI companies, and by those applying their tools without critical thinking, is probably not going to be completely realized. And for the developers, that is perfectly fine. They get more money when they have more users, whether or not those users are cognizant of their true goals.
What happens, though, when the hype is over? When businesses and individuals discover that AI tools do not work the way that they first believed?
There are several main reactions to post-hype disillusionment. First, companies can abandon all AI-related projects and move on from generative AI, writing it off as a waste of the organization's time. Second, organizations can discourage, or outright ban, personal AI use for work tasks. Third, an organization can hide its AI use from consumers and regulators. Finally, the most balanced and measured response to the hype's false promises is a commitment to foster and encourage ethical, productive use of AI tools, whether as an entire organization, in groups, or as individuals integrating them into personal workflows.
There are companies and individuals that can serve as examples of each of these reactions, but the real impacts of these strategies cannot be known right now. We can hypothesize about some of them, but I will not do that here.
Abandonment of Projects
The government of Australia has recommended that organizations abandon generative AI projects after a study it undertook determined that generative AI is absolutely awful at important business processes, including summarizing information and recommending action points. It was also notoriously unreliable at citing information and providing accurate references to where in a document it found that information.
While this approach will certainly result in fewer mistakes and more human production, it misses the opportunities offered by informed, ethical implementations of AI tools. Looking at the study's methodology, it appears the researchers simply instructed the AI to summarize the documents it examined. That methodology does not reflect a truly informed understanding of how AI tools work.
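To make that contrast concrete, here is a minimal sketch in Python of the difference between a naive summarization request and a more informed one. The llm() function is a hypothetical stand-in for whatever model API you use, and the chunk size and instructions are illustrative assumptions on my part, not the study's actual method:

```python
# Hypothetical stand-in for a real model API call; wire this to your
# provider's SDK (OpenAI, Anthropic, a local model, etc.).
def llm(prompt: str) -> str:
    raise NotImplementedError("replace with a call to your model provider")

# Naive approach (roughly what the study appears to have done):
# one vague instruction, the whole document, no structure or traceability.
def naive_summary(document: str) -> str:
    return llm(f"Summarize this document:\n\n{document}")

# A more informed approach: chunk the document so it fits the model's
# context comfortably, state the audience and format explicitly, and
# ask the model to label each point with its source section.
def informed_summary(document: str, chunk_size: int = 4000) -> str:
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    notes = []
    for n, chunk in enumerate(chunks, start=1):
        notes.append(llm(
            "You are summarizing one section of a longer business document. "
            "List the key claims and action items in this section, labeling "
            f"each as coming from section {n}. If a claim is ambiguous, say "
            "so rather than guessing.\n\n"
            f"{chunk}"
        ))
    # Merge the per-section notes, keeping the labels so every statement
    # in the final summary can be checked against the original document.
    return llm(
        "Combine these section notes into a single summary. Keep the "
        "section labels so each point remains traceable:\n\n"
        + "\n\n".join(notes)
    )
```

Neither approach guarantees accuracy, but the second at least treats the model as a tool with known limitations (a finite context window, a tendency to please, no built-in traceability) rather than as an oracle.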
Simple interactions like the naive one above, mired in uninformed and inefficient use, could be the downfall of many integration projects. According to Gartner, around a third of all AI-related business projects will be abandoned within a year and a half (by the end of 2025), as disillusionment sets in when the alleged super-abilities of LLMs as all-in-one tools are not realized. Gartner cites "poor data quality, inadequate risk controls, escalating costs or unclear business value" as the main reasons for this abandonment, and hypothesizes that many of these applications center on large language models expected to perform a massive number of functions, essentially serving as a Swiss Army Knife. As I have written before, that is really not possible.
Hiding AI Use
Another answer to discovering that the hype's promises will not be fulfilled is covering one's AI tracks. According to the Australian government (which, evidently, is very active in the world of AI and business), an alarming one-third of businesses using AI are actively hiding that fact from employees, consumers, or both. This has drastic consequences, not only for content liability but also for data privacy and confidentiality. The creation of artificial pornography alone has skyrocketed, and this is, according to the Australian government, partially a result of undisclosed business use of artificial intelligence. The retail and hospitality sectors appear to be the worst offenders in this regard.
Discouragement of Employee AI Use
Some companies, including those that never indulged in pro-AI hype, are forbidding, or at least actively discouraging, the use of AI tools in official business. Understandably, they do not want to be held liable for the misuse of these tools and/or the mistakes they can make.
Employees know this, but many go ahead and use AI anyway. In fact, around eighty percent of the general workforce actively use AI and keep it a secret from their coworkers and employers. Since my work is intricately associated with AI, I do not have to keep any secrets, but if it were not so prominent in my job, I might not want to be as forthcoming either. I understand where they are coming from: as long as they fulfill their objectives and produce their outputs, who cares what tools they use?
Secret employee use of AI may seem like a boost to productivity, but only if the users are engaging in ethical practices and exercising AI literacy. That may not be the case. Around 35 percent of workers acknowledge that AI may be biased or untrustworthy. As those of us who have looked into this know, there is no question that off-the-cuff AI answers are not honest. They are intended to give you whatever result will make you happy; the factuality of the answer itself is immaterial, a happy accident.
To make matters worse, unless employees discover the nature and purpose of generative AI tools (to please them) on their own, they may not be told for quite some time. Only around 20 percent of organizations whose employees are secretly using AI tools have actually created AI policies.
The result is unregulated, surreptitious, non-standardized, and non-sanctioned AI use. Depending on the sector in which one works, this could cause major privacy, legal, and liability issues for the organization.
Corralling Employees Into the AI Sheepfold
A quite alarming opinion piece by Forbes writer David Morel notes that, just like the pro-AI hype, the anti-AI hype about potential drastic negative worldwide consequences is probably not accurate either. He suggests a course of action that, while the exact opposite of the Australian government's, has many of the same potential impacts.
Morel suggests creating a workforce team to help implement AI into existing workflows. His arguments center on the idea that AI is a new technology and that businesses need to adopt it to survive. His most troubling idea comes about two-thirds of the way into the article, when he suggests that the team should represent as many departments in the organization as possible so that "resistance" will be "reduced" and "employees will feel like they have buy-in [power]."
A similar thought was expressed in a horrifying LinkedIn and Facebook post comparing the AI-in-education conversation to a church whose purpose was to "save the faithful" over "converting non-believers." On that post, I stated, "I have to say that I find this disconcerting, somewhat dangerous, and a false equivalency. Invoking religious elements in use with technology is how one develops elements akin to the Scientism religion of the Foundation or the Bene Gesserit Matriarchs of Dune." Again, I will talk more about the comparisons between AI (especially hype) communities and religious groups in another post.
This strategy seems just as dangerous and predatory as those of the AI-developing institutions and organizations that promote their products without transparently communicating what those products were designed to do. Just try it, or force it on the workers, and a use case will emerge from the crucible.
Commitment to Discover Ethical Productivity
In the same study about withholding information about AI use from stakeholders, the Australian government determined that half of all businesses using AI have either no AI policy or a weak ("maturing") one. At least the latter institutions are trying to improve their implementations.
Across all of the studies referenced above, it appears that some organizations had bought into the hype and were seeking all-in-one solutions, while others were interested in exploring case-by-case, isolated implementations of AI where it would be most productive. The latter is the type of AI integration I try to discuss on this blog, and I hope this post has served as an illustration of the different ways one can proceed once one has realized the falsities of the hype.
As Singularity Is To Hype, So Integration Is To Reality
We have talked about the drastic promises of AI developers, their administrators, and businesses that categorically accept AI tools. We have also talked about various responses to the realization that the AI hype is overblown. I hope you can tell that I support the last approach.
The hype is not reality. The singularity, at least the general singularity, will not happen. In one of the first episodes of “This Week in AI,” Steve and I talked about whether or not we thought the Singularity would ever occur.
I said that I don't think it will happen, but I do believe there may be some smaller "singularities," in which AI tools surpass human abilities in specific areas. Even these will take at least fifty years to occur, and they would only occur if human progress stagnates while AI progress continues at its current pace. Another type of small-s "singularity," and one that is much more likely, is when AI becomes such a defining force in a particular field that one or more of that field's norms are changed from that point on to incorporate AI tools.
In either of these small-s "singularity" scenarios, or even if neither occurs, businesses and organizations need training on ethical and professional productivity with generative AI tools. Access to AI tools; education and training regarding their nature, abilities, and use; and permission to experiment with and utilize AI tools according to set policies and guidelines will help workers use AI in a way that is literate and not dependent on hype. As my ideological forebear Theodore Schellenberg stated, "the use of modern gadgetry cannot supplant the use of proper techniques and principles."
Thinking critically about these technologies means not taking the promises of AI tools, their creators, or hype adherents at face value. Discover what works for us, and which tools and which of their abilities we really need. Examine the functions, data practices, and policies of tools, and decide whether passive support of corporate tools is worth having one's usage data tracked. Compare corporate and open AI tools and see which can be utilized more effectively in training and education environments.
In other words, if you do something, and do it to the best of your ability, you will know what the truth is. Act on what you have heard, or on your own ideas, and you will learn whether they are viable. The time for listening to pro- and anti-hype is over. The time for action has come.
Where Should I Go For Balanced Perspectives?
If you are wondering "how can I implement AI tools ethically and productively in my work?" there are multiple resources. I encourage you to soak up as much as you can from LinkedIn and Substack writers, as well as academics and practitioners, for free. There is a wealth of information. I have talked about Steve, Liza, Marc and Mark, and David on my blog before. I hope that this is a similar blog. The common denominator in our content is that we want to communicate ideas, not line our pockets. This blog is free, and all content is available to all users. You only pay me if you like the content I write enough to justify a payment.
For commercial offerings (non-hype) from myself and some collaborators, look below:
The Plug
I feel a little bad about putting these plugs into my blog posts. But hey, at least I label the plug as such instead of sneaking it into the paragraphs of the regular post, right? That way you can get my ideas without having to sift through the info about my offerings.
Steve Hargadon and I (now joined by Crystal Trice) have created multiple webinars and offerings regarding AI use and implementation in multiple contexts (mostly libraries and education, but also the workplace). We also created a workshop on Custom AI tool creation. I offer consulting, AI implementation, and presentation services (you can find out about those on LinkedIn).