Do We "Bring the Human" to Human-AI Machine Interactions?
The World's CURRENT Top 10 AI Use Cases, How They Have Changed, and Why Those Changes Are Important
I came across the above image in an email sent to me from Harvard Business Review, but it originated in this blog post on the HBR website. Lest we think that HBR is plagiarizing, the author of the post is also the founder of filtered.com. In this post, I am going to combine insights from my own experience with the results of the HBR survey. Then, we will discuss the trends emerging in the Mountain West, at least in academia.
How My Use of GenAI Has Changed in the Past 2.5 Years
The first time that I heard of ChatGPT was in a faculty senate meeting at my full-time job, and we were primarily concerned about its efficacy in writing essays (you must forgive us, this was the first conversation many of us had about this tool!). Now, however, around two and a half years later, I have found a plethora of uses for around a dozen tools.
2023
My first uses of AI revolved around writing and incorporating these tools into my written communication. I had AI check my writing, I had it create outlines, and I examined the weaknesses in its writing styles and used the garden-variety “fact-check and critique AI outputs” to introduce myself to basic AI literacy. Usually, I used ChatGPT for this. However, occasionally I used Claude and Perplexity for some variation.
In around the second half of 2023, I started using ChatGPT to develop ideas and use it as a collaborator in expressing my thoughts. I used ChatGPT text in some of my final outputs (like in one specific chapter of my OER textbook on library science). I debated with it to come up with some of the frameworks I have used in my webinars. I also used it to come up with some images beginning in November 2023. The culmination of this work was my presentation at AECT 2023, “Effectively Using ChatGPT in Education According to the TCoP Model.” The YouTube video below is of the AECT convention presentation.
2024
The next year brought four or five presentations on various AI-related topics, including privacy, copyright, instructional design, productivity, and human-first integration. I also, as you know, started this blog in the Spring of that year.
My work with AI tended to go back to ChatGPT for text, Ideogram for images, and Perplexity (and then SearchGPT) for web searching. I committed to developing a system or pattern for incorporating information literacy, privacy-conscious prompting, and objective-based AI use. This resulted in several presentations at the 2024 AECT International Convention, including the development of my 14-part AI Literacy Framework. This was actually much worse-received than I thought it would be.
A (Working) Framework for AI Literacy
NOTE: This is by no means a perfect framework, but it is a list of the central skills, concepts, and ideas that I feel are necessary for one to be considered “AI-literat…
Even if people did not like the framework, it did help me develop the idea of “AI Feasibility,” which is “the mental consideration of factors that leads one to decide if an AI tool really needs to be used, or if a human can do it faster and more efficiently.” This idea needed some connection to existing ideas and concepts, which is why I focused on “professional productivity” as a collaborative effort in future offerings. Still, the concept of “AI feasibility” was not a resounding success.
In any case, I did not let this one post and its reception define my professional life. I also developed my ideas concerning AI presentation to educators and copyright and AI, and I wrote those ideas down in multiple spaces. I wrote a chapter, “Navigating Benefits and Concerns When Discussing GenAI Tools with Faculty and Staff,” for a book on higher education and AI published by Utah State University. I also authored and edited a textbook on AI and ethics in higher education with professionals from 12 institutions and three or four countries. One of my chapters formed the backbone of my post series on AI and copyright.
AI and Copyright: A New Type of User?
After four months of being gone, I am back! This actually marks one year of the CollaborAItion blog, and it is a milestone that I am happy to celebrate! I am glad that so many of you have had opportunities to examine information literacy, AI, and copyright with me (and archives, for some of the posts!).
Also during this year, I developed a fruitful and highly beneficial partnership with Steve Hargadon of Library 2.0. I have done around ten or twelve bootcamps, webinars, workshops, and conference sessions with Steve.
To bolster the webinars, I started the practice of recording brief (usually 5-minute max) tutorials with demonstrations of skills, practices, and products. I think that I have created around 70 tutorials to date, whether in my side jobs or in my presentations for my full-time job.
Developing these presentations has helped me to continuously refine and think critically about how I use genAI. Yes, I was collaborating with AI, but was I earnestly considering what “human-AI hybrid product” meant? Was I focusing on the objectives of my work? Not only did my activity with AI moderately decrease in this time period, but the nature of my prompts and projects changed. I mostly used AI for products that required some back-and-forth iteration of ideas.
When that was not the case, but I still used AI, I used it to fulfill tasks that I could have completed manually (writing HTML code, for example) but could do much faster with AI. I then used my knowledge to fine-tune and edit the outputs into finished products. One of these detailed projects began as a passion project to honor the Frasier reboot: I created a “Legislature” game that attempted to fulfill David Crane’s description of “C-SPAN in a box.” Another example is the massive proposal that Cathy Leavitt and I developed for the AECT NATO 2024 Competition, Tactical Training Tools.
2025
In addition to a change in my most-frequented tool, 2025 brought my twins into the picture. I will share pictures of my twins upon request, but I will not publish one here (no matter how much I want to!).
I switched from ChatGPT and Ideogram to BoodleBox as my AI tool of choice this year, as can be seen in the last four posts. While I started using BoodleBox in January 2025, paternity leave and other responsibilities delayed my posting about it until the summer. I was won over by their privacy and confidentiality policies and their commitment to sound practices related to developing AI tools and using API contracts.
Harvard Business Review: Changes In GenAI Use In the Past Year
I was not the only person, obviously, to alter my use of AI tools in the past few years. Harvard Business Review published a list of changes between the 2023-2024 academic year and the 2024-2025 academic year (roughly). I put these into BoodleBox and asked it to develop a write-up, which I then edited.
2023-2024
Initially, many users approached these tools with a mixture of curiosity and caution, often employing them for relatively straightforward tasks such as generating ideas, editing text, or conducting specific searches. This period represented what might be called the "exploration phase" of generative AI adoption, characterized by users testing the capabilities and limitations of these new technologies.
During this time, many professionals and educators found themselves divided into distinct camps regarding AI use. Some were confident about their own use while harboring concerns about student or colleague use, others were fearful about the potential undermining of educational fundamentals, and still others viewed AI as simply another educational technology tool to be integrated thoughtfully. The important observation is that these positions were not mutually exclusive—many individuals held seemingly contradictory views simultaneously. This explains why, as I wrote in my “Navigating Benefits and Concerns” chapter during this time, some educators were supremely confident in their own abilities to use AI but strongly prohibited their students from ever exploring the tool in their course.
2024-2025
Looking at the data for 2024-2025, we observe a notable shift from primarily technical applications toward more emotional and personal uses. While the top use cases remain relatively stable in their ranking, the underlying trends suggest users are increasingly turning to AI for therapy, companionship, and personal development. This indicates a deepening relationship between humans and AI tools, moving beyond mere utility toward something resembling collaboration.
This evolution aligns with what I have termed "CollaborAItion"—the concept of consistently and meaningfully interacting with AI tools rather than simply integrating them into workflows. The data suggests users are recognizing that effective AI use requires active participation rather than passive consumption, with humans remaining firmly "in the loop" of decision-making processes, as Ethan Mollick promoted in his book Co-Intelligence.
From 2023 to 2025, the general trend seems to be toward personalized learning and personal development with AI. People have started to use AI for more intrinsic, individually-motivated purposes rather than more external, delegated projects. For example, in 2024 the only personal use case was “therapy/companionship.” For more information on this, see my posts on Artificial Agency.
Now, therapeutic use is the single most frequent use, and the next two choices are also personal uses: organizing one’s life and finding a purpose in life. Even the next choice, “enhanced learning,” is probably more personally directed than it was the previous year. I gave a webinar on this topic earlier this year, and I will be facilitating a “class” version of this webinar later this year or in early 2026!
What Do These Changes Mean?
These shifts in AI usage patterns reflect a maturing understanding of how these tools function most effectively. Users are discovering that generative AI is not simply about automation or replacement of human tasks, but rather about augmentation of human capabilities, desires, and objectives through deliberate collaboration. The “AI Revolution” is not manifesting as many predicted, through complete automation, but rather through collaborative partnerships between professionals and AI systems. Another way of saying this is that “the Singularity is not coming, but small-s singularities are.” I will be facilitating an AI Summit at the OLC Accelerate 2025 Convention, and “Preparing for Small-s Singularities” will be my keynote topic.
Conclusion: No Matter How We Use GenAI, We Need to “Bring the Human”
My approach to generative AI emphasizes what I call "deliberate creation"—a process that begins with careful consideration of purpose, audience, and desired outcomes before engaging with AI tools. Rather than simply asking a question and accepting whatever response emerges, I engage in structured conversations guided by frameworks such as COSTAR (Context, Objective, Style, Tone, Audience, Response) or the Rhetorical Shot Framework.
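To make the COSTAR structure concrete, here is a minimal sketch of how such a prompt might be assembled programmatically. The function name, layout, and sample values are my own illustration, not part of any published COSTAR implementation; only the six field names come from the acronym itself.

```python
def costar_prompt(context, objective, style, tone, audience, response):
    """Assemble a COSTAR-structured prompt string for a chat model.

    Each section is labeled so the model (and the human reviewer)
    can see exactly which constraint governs which part of the task.
    """
    sections = [
        ("CONTEXT", context),
        ("OBJECTIVE", objective),
        ("STYLE", style),
        ("TONE", tone),
        ("AUDIENCE", audience),
        ("RESPONSE", response),
    ]
    return "\n\n".join(f"# {label}\n{text}" for label, text in sections)


prompt = costar_prompt(
    context="I am drafting a library newsletter item about AI search tools.",
    objective="Summarize three benefits and three drawbacks of AI search.",
    style="Plain, concrete prose with short paragraphs.",
    tone="Friendly and professional.",
    audience="Academic librarians with little AI background.",
    response="A 200-word summary followed by a bulleted list.",
)
```

The resulting string can be pasted into any chat interface; the point is not the code but the discipline of deciding each field deliberately before engaging the tool.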
I view AI tools not as autonomous creators but as instruments—similar to how a pianist uses a piano. The piano produces sound, but the musician determines how the piece is played, which notes receive emphasis, and where improvisation might enhance the performance. Similarly, while AI generates content, I maintain responsibility for the quality, accuracy, and ethical dimensions of the final product.
When using AI for research or information gathering, I maintain a healthy skepticism, recognizing that these tools can hallucinate or present inaccurate information as fact. As my colleague Steve Hargadon aptly states, AI is for "research, not conclusions." This approach aligns with Carl Sagan's principles of scientific skepticism—maintaining critical thinking while exploring new technological frontiers.
Regardless of specific applications, the most crucial element of effective AI use is maintaining human-centered considerations throughout the process. This includes developing clear content strategies, defining specific task goals, establishing communication preferences, implementing thoughtful data strategies, and planning for continuous refinement of results.
The future of AI lies not in automation but in collaboration—not with AI assistants that generate perfect outputs independently, but with future-ready professionals who know how to use AI productively and creatively. The most successful AI users will be those who approach these tools as "deliberate creators" who understand how to collaborate effectively with AI while maintaining human judgment, ethical considerations, and critical thinking.
As we continue to navigate this evolving landscape, we would do well to remember the Three Laws of Human-AI Machine Collaboration:
The First Law: A human may not use generative AI to injure a human being or, through inaction, allow a human being to come to harm.
The Second Law: A human may use generative AI tools for any objective they desire, except where such objectives would conflict with the First Law.
The Third Law: A human must engage in good-faith, literate collaboration with AI tools in every interaction as long as such collaboration does not conflict with the First or Second Law.
The Zeroth Law: A human may not use generative AI to injure humanity or, through inaction, allow humanity to come to harm.
Future Event: AI SEARCH
While you are ruminating on the best ways to incorporate AI into your writing and other forms of creativity and productivity, you may be interested in how you can incorporate it into your research.
This webinar, presented with Library 2.0, explores the benefits and drawbacks of AI Search capabilities by placing them in context of the “history of research.” Previous methods of navigating the information ecosystem have passed down some of their benefits and drawbacks to later methods and tools. AI Search is the latest “search tool,” and it has added benefits and drawbacks of its own.
This 90-minute webinar explores the historical evolution of research methodologies and information access, from traditional library catalogs to modern AI-powered tools. It examines how each technological advancement has transformed the research landscape, reshaping how scholars, students, and information professionals interact with knowledge. Participants will trace the development of information retrieval systems through pivotal innovations, including card catalogs, digital databases, search engines, and contemporary AI research assistants.
The session addresses both the democratizing potential and inherent limitations of each technological shift, focusing on aspects such as information accessibility, critical evaluation, and responsible use.
$129/person.