What Do We Mean When We Talk About AI Automation?
One of the terms most commonly associated with generative AI tools, especially on LinkedIn and in professional-centric groups, is “automation.”
“Learn how to Automate Your Work with AI.”
“I saved 80 percent of my time by using this AI-based scheduler!”
“I have automated emails. I never have to follow up with anyone ever again!”
When you look at these products in practice, one of two things seems to be true (at least in my experience):
the automation process only includes AI in a very small part, and it is no different from the small AI integrations that have been available for ten years (like Grammarly). This means either that only a small amount of new AI was integrated, or that the company has been using AI all along and has decided to emphasize that aspect because of the current hype.
the automation process uses AI so much that there are significant risks of error in the resulting products, interactions, and processes, because there is extremely little oversight throughout. The end result could be just what the user needs. Or it could be a mess, or it could be just slightly off enough to pass a cursory inspection but cause significant problems for the consumer or recipient (e.g., the mass email sent out by Vanderbilt University after a violent shooting that missed the mark and turned out to be written by ChatGPT).
The first definition of “AI automation” is just false advertising: it does not incorporate AI nearly as much as is implied, and so it avoids AI’s major hazards. This is problematic, but it is not the type of AI automation I will discuss in this post. The second one is far more sinister, problematic, and, in my view, dangerous.
What Does Complete AI Automation Look Like in Theory?
When business professionals, and their employers, think of automation, they usually think of an increase in efficiency and a loss of quality. The old argument of “quantity vs. quality” resounds in our minds when we think of automation.
However, when we think of “AI automation,” that dichotomy somehow slips our minds. If something has AI and is automated, then the AI must oversee the automation and keep it effective and accurate, right? We forget that in most manual workflows, a human must be involved to oversee the AI tool. Despite the incredible ineptness of Google’s AI Overview tool, for example, we assume that any AI output in an automated process will be pristine.
In the library world, AI-provided automation has been expected to revolutionize how librarians, academics, and others conduct research. One tool, Research Rabbit, uses automation and data to provide research suggestions as fast as possible. Research Rabbit looks at citations, keywords, authors, and previous user engagement with groups of papers. Then, based on your queries and the papers you have previously selected, it suggests groups of related papers.
The idea of Research Rabbit is that you have a massive literature review corpus at your disposal without doing any manual research. Essentially, it automates the pre-research or “initial research” process. The tool assumes that you can skip the “boring” parts of research because the AI has already done them for you.
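To make the limits of this kind of metadata-driven suggestion concrete, here is a minimal, purely hypothetical sketch in Python. It is not Research Rabbit’s actual algorithm; every name in it is invented for illustration. The point is that a recommender built only on overlap counts never sees the argument inside the papers it connects.

```python
# Hypothetical illustration only -- NOT the algorithm of any real tool.
# A metadata-only recommender "sees" keyword and citation overlap,
# but nothing about whether a citation supports or attacks a claim.
from dataclasses import dataclass, field


@dataclass
class Paper:
    title: str
    keywords: set[str]
    references: set[str] = field(default_factory=set)  # IDs of cited papers


def overlap_score(seed: Paper, candidate: Paper) -> int:
    """Score a candidate purely on shared keywords and shared references."""
    shared_keywords = len(seed.keywords & candidate.keywords)
    shared_references = len(seed.references & candidate.references)
    return shared_keywords + 2 * shared_references  # arbitrary weights for the sketch


def recommend(seed: Paper, corpus: list[Paper], top_n: int = 3) -> list[Paper]:
    """Return the candidates with the highest metadata overlap with the seed."""
    ranked = sorted(corpus, key=lambda p: overlap_score(seed, p), reverse=True)
    return ranked[:top_n]


if __name__ == "__main__":
    seed = Paper("AI literacy in libraries", {"ai", "libraries", "literacy"}, {"p1", "p2"})
    corpus = [
        Paper("A critique of automated search", {"ai", "search"}, {"p1"}),
        Paper("Bird migration patterns", {"ecology"}, set()),
    ]
    for paper in recommend(seed, corpus):
        print(paper.title)
```

Notice that the critique paper ranks highly here even though it may argue against the seed paper; overlap alone cannot tell the difference, which is exactly the inaccuracy hazard discussed below.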
Other specialized research genAI tools include Elicit, Consensus, and Scite.ai. Similar tools include Scholar GPT and Scholar AI, both of which are Custom GPTs. These tools automate and simplify the literature review even more completely. The user provides a research question, and in 11 steps (for Scite.ai, anyway) has a fully cited literature review with links to materials. You then have options to use only more recent articles, increase the word count, or tell the tool to use only abstracts. Whatever you want to research, these tools claim they can accurately carry out the entire process behind the scenes.
What Does Complete AI Automation Look Like in Practice?
For an exhausted academic who is teaching five courses a semester and just wants to go camping in the Rocky Mountains, a completely AI-automated research tool sounds very nice. But what do we give up? What is the downside of AI automation? There are multiple hazards to actually working according to the philosophy that undergirds consistent use of these tools.
Inaccuracy: AI tools often look only at data and metadata, and their only examination of context is whatever they have been programmed to acknowledge. For instance, if one paper is heavily cited in another, that could be because the second author values that work or because they are using it as a counterargument. Acting on conclusions that ignore these nuances can lead to extremely inaccurate interpretations of subjects, incorrect connections between papers, a biased argument, and other errors.
Privacy and Confidentiality: With whatever AI tool we use, we should always be careful to safeguard our own privacy and confidentiality, as well as that of those whom we serve and for whom we work. The ease of automation, and the appeal of having a significant portion of our work done for us, can cause us to let our guard down. For example, how many of us really read the Terms and Conditions of the tools we use?
Diminution of Research Skills: This is more of a long-term hazard, but the negative effects can begin after only two or three interactions with a tool. If we offload our skills to an AI tool, they will atrophy.
Lack of Critical Information Literacy: This is similar to the last point. If we trust an AI tool to completely automate any significant part of our work, we become more prone to AI dependency. The threat to information literacy matters to more fields than just research, however. AI automation means that the end result is multiple steps removed from the tool’s first response to a query. If we consume the output without looking at the data, the tool’s processes, and the implications of the output, we are not using AI ethically. We are not engaging in quality control, which is important in all fields.
For Critical Workers, More Work in the Long Run: If a long process is automated by an AI tool, then those who do want to review every step, quality-check the output, and revise it for any reason may take more time to do all of that than if they had simply done the original tasks manually. I call this concept “AI Feasibility,” and it appears in my AI Literacy Framework post.
Why Can Complete Automation in Practice Be Dangerous?
The main problem with automated AI tools like Research Rabbit, Scite.ai, Scholar GPT, and the like is that they offer people a chance to offload the heavy and necessary work. In other words, they automate the wrong part of the process: coming up with ideas. What we should be automating is the process of turning our own ideas and research insights into communication products, and even then we should still review the results.
In other words, the combined effects listed in the last section result in the loss of the human element. We lose the critical perspective that our experience has given us. The end result of many people engaging in complete automation is the hypothetical future Ethan Mollick describes in Co-Intelligence, in which AI-generated products are reviewed by AI supervisors, explained in presentations generated by AI tools, and so on, all without human interaction and at 10x the speed.
In some ways, this resembles a digital version of the massive factories of the Industrial Revolution, in which humans lived in fear of the machines (not because the machines were sentient, but because they controlled the means and nature of production according to predetermined programming). In the same way, a workplace dominated by AI-controlled automation could leave humans answerable to the machines instead of the other way around.

What Do We Need Instead of Automation?
You already know the conclusion I am going to reach, because it is the name of this blog. Collaborating with AI tools and interacting with their products ensures that materials are correct, thorough, and ethically created. Furthermore, collaboration helps humans remember that they are responsible for the quality of the product. No matter how much training they put a tool through or how sophisticated their system prompt, they will still have to work on the output.
I was going to refer to “small-sized automation” or “limited automation,” but that would mean having to define exactly what that limit is, or “what things are too big to be automated” under my definition. I much prefer to simply refer to “collaboration” in all projects. In this way, any project can involve AI as long as the human remains in control.
The contrast between complete AI automation and collaboration is demonstrated by a recent LinkedIn post I saw. The author, whom I do not wish to disparage, noted that he was building “workflows.” Anyone who knows me knows that I like this word when it comes to AI tools and products. To my mind, the term means “working through a multi-step process with a multimodal tool or multiple tools.” In this post, however, “workflow” was defined as “a series of machines that go through multiple creative and iterative processes before providing a response to the user.” The human was present only at the beginning and the end, nowhere in between. The author stated that their ultimate goal was to “mimic” human behavior and processes to provide a “2x-10x” increase in productivity.
While the quantity of product these processes provide will no doubt be many times more than a human alone could produce, I think that if we collaborate with AI, we can be 5x as productive while still having personalized and authentic materials.
In terms of academic research that is supported by AI in a collaborative rather than controlled fashion, I much prefer Semantic Scholar. This free AI research tool functions like a database.
While this tool originally examined only the “hard sciences” and “earth sciences,” its library of humanities and social science resources is gradually growing. It includes only open-access resources, and it is completely free to all users.
One thing you should know about Semantic Scholar is that it was AI-powered well before the “AI hype” that began in 2022. It operates first as a database that you can query with regular Boolean keyword searches. It uses AI to expand the query with possible synonyms and then provides search results. Two more major AI features kick in when you select some of those results:
1. It can analyze the metadata in an article, examine the key arguments of the paper, and recommend which sections to use for certain parts of your own paper (based on the position, content, and purpose of the paper).
2. It can provide recommendations on other resources to examine based on the keywords in the paper and other people's use of the paper.
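If you want to stay in the driver’s seat even further, Semantic Scholar also exposes a public Academic Graph API that you can query directly. Below is a minimal sketch in Python using the requests library; the endpoint URL and field names reflect the public documentation as I understand it and may change, so treat this as an illustration of the collaborative pattern (the tool retrieves, the human reads and judges) rather than a tested integration.

```python
# A minimal sketch of querying Semantic Scholar's Academic Graph API directly.
# Endpoint and field names are assumptions based on the public docs and may change.
import requests

API_URL = "https://api.semanticscholar.org/graph/v1/paper/search"


def search_papers(query: str, limit: int = 10) -> list[dict]:
    """Run a plain keyword query and return basic metadata for manual review."""
    params = {
        "query": query,
        "limit": limit,
        "fields": "title,year,abstract,url",
    }
    resp = requests.get(API_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("data", [])


if __name__ == "__main__":
    for paper in search_papers("AI literacy academic libraries"):
        # The tool retrieves candidates; the researcher still reads and evaluates each one.
        print(f"{paper.get('year')}  {paper['title']}\n  {paper.get('url')}\n")
```

The design choice matters here: the script stops at retrieval and printing, leaving the reading, connecting, and arguing to the human, which is exactly the division of labor I am advocating.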
For more discussion of these ideas, feel free to register for the recordings of the three-part ChatGPT and Libraries Bootcamp by Library 2.0. This bootcamp is hosted by Steve Hargadon, and I will be there to answer questions anyone may have. The session for this upcoming Friday is entitled “Enhancing Research and Information Literacy.”
Go read Marc Watkins’ take on this same topic.