Beginnings are hard.
The heading above is the standard one that Substack recommends to all authors. After thinking about other headings I could write, I decided that this heading fits. In fact, it serves as a fundamental reason to ask the questions posed in the title and subtitle.
CollaborAItion
This concept was fueled by the convergence of three things within a relatively short time frame.
From the beginning of my professional work, I have been surrounded by the impact of the work of Dr. David Wiley. David has been a massive influence in the world of education, from teaching about equitable learning and putting it into practice through Lumen Learning, to initiating a dialogue on open access and coining the phrase “open content,” to encouraging a best-practices approach to working with technology in education.
In his most recent course for BYU, which I was fortunate to audit, David taught about applying AI to education. As he always does, David put what he was teaching into practice: for this course, he built what he termed “review prompts,” which essentially turned an AI text generator into a quasi-tutor.
Recently, David asked me to provide a snippet of insight into what these prompts and the ensuing experiences did for me. This is what I said:
“I think the most important aspect was that we were forced to think about our topic in a critical way and also had to engage in information literacy skills. It was on us to verify the information given to us by the AI. Another benefit was that the AI was very creative in explaining things to us (especially if we played with temperature and top-p values), so we could come up with excellent metaphors to enhance our perspective on course topics.”
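For readers curious about the temperature and top-p values mentioned in that quote, here is a minimal sketch, in Python with the OpenAI SDK, of how those sampling settings can be adjusted when asking a model for a metaphor. The model name, prompt, and helper function are placeholders of my own, not David’s actual review prompts; the general pattern is that higher temperature and broader top-p tend to yield more varied, creative phrasing.

```python
# A minimal sketch (not David's actual review prompts) showing how
# temperature and top-p influence a model's responses. Assumes the
# OpenAI Python SDK (>= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask_for_metaphor(topic: str, temperature: float, top_p: float) -> str:
    """Request a metaphor for a course topic with explicit sampling settings."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a tutor who explains ideas through metaphor."},
            {"role": "user", "content": f"Give me one metaphor that explains {topic}."},
        ],
        temperature=temperature,  # higher values -> more varied word choice
        top_p=top_p,              # smaller values -> sample only from the most likely tokens
    )
    return response.choices[0].message.content

# Conservative settings tend to produce safe, predictable phrasing...
print(ask_for_metaphor("information literacy", temperature=0.2, top_p=0.5))
# ...while looser settings tend to produce the more creative metaphors mentioned above.
print(ask_for_metaphor("information literacy", temperature=1.1, top_p=0.95))
```

Adjusting those two dials, while still verifying everything the model returns, is a small, concrete example of the collaborative stance described in the rest of this post.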
Over the past few months, I have been collaborating with Steve Hargadon and his organization, Learning Revolution. We have worked on three conferences, two bootcamps, and a podcast/interview series. In fact, the first episode of our podcast, “This Week in AI,” was released last Friday. You can see that episode below.

During this video, Steve and I talked about the various benefits of AI tools but also discussed the concerns users can (and should) have about their interactions. I mentioned the SIFT method and information literacy, programmed rapport, and proper use. But the most fundamental aspect of our conversation was the idea that proper human-AI interactions are collaborative sessions rather than simple text-transfer sessions.
I have recently been asked by multiple institutions to create materials regarding AI, copyright, and information literacy. I have also been asked by individuals about the proper way to use AI in educational and work contexts. I end up repeatedly dispelling the notion that a person can simply ask a question of (or assign a task to) an AI tool, sit back, and expect it to get everything right the first time. Working through the process takes time. The tool will make mistakes. Even if it fulfills every basic thing you want it to do, its responses can be improved upon.
The answer to my most-asked question, “How should I use AI in my work?” is “collaborate,” NOT “integrate.”
What is CollaborAItion?
CollaborAItion is the concept of consistently and meaningfully interacting with AI tools and their outputs. This approach involves understanding the potential biases inherent in these tools and critically examining the outputs based on those biases. Rather than passively accepting AI-generated content, CollaborAItion encourages users to actively engage with the material, question its validity, and integrate it thoughtfully into their work.
To ensure the legitimacy of AI outputs, one effective method is the SIFT Method by Michael Caulfield: Stop, Investigate the source, Find better coverage, and Trace claims to the original context. This process helps users verify the accuracy of information and identify potential misinformation. By adopting CollaborAItion, educators, instructional designers, librarians, and other professionals can harness the power of AI while maintaining a critical eye, ensuring that the technology serves as a valuable tool rather than a replacement for human insight.
Why CollaborAIte When I Can AutomAIte?
The idea of automating tasks with AI tools is undoubtedly appealing due to the promise of efficiency and convenience. However, the true value of AI in educational and professional settings lies in collaborating with the tool.
Often, when people are excited about automating work processes (and I include educational processes as work processes), it is because one of two things is true:
They do not understand the nature of AI and how an optimal interaction with an AI tool progresses.
They do not care about doing these processes and want to offload a seemingly insignificant part of their work so they do not have to deal with it.
Neither of these mindsets is based in a sound consideration of the nature of an AI tool. First, an AI tool left to its own devices and bereft of human oversight and interaction will not produce anything that is truly usable. Second, a generative AI tool is designed to give you what it thinks you want. It does not give you what you are asking for unless you ask for it very explicitly, deliberately, and in a particular way. Even then, it will only approximate the desired product, which will require you to iterate. Both mindsets lead to chagrin when their holders realize that using AI is much more work than they expected, especially while they are still learning basic AI literacy skills.
CollaborAItion involves using AI as a partner that augments human capabilities rather than replacing them. This approach requires users to engage actively with AI outputs, critically evaluate them, and integrate them into their workflows in a meaningful way. CollaborAItion is a much more fulfilling and beneficial process than simple automation.
Furthermore, mere task AutomAItion is actually detrimental to human users. Automating tasks can lead to over-reliance on AI, which may perpetuate the biases and inaccuracies inherent in these tools. By contrast, CollaborAItion emphasizes the importance of human judgment and critical thinking in interpreting AI-generated content. This collaborative approach not only enhances the quality of the outputs but also fosters a deeper understanding of the material, promotes information literacy, and encourages a more thoughtful integration of technology into professional practices.
Conclusion: Keeping Humans in the Loop
When reading these last two sections, you undoubtedly noticed the subtle switch between the passages I wrote myself and those I used ChatGPT to create. While I utilized an AI tool for this post, the central ideas of the post were mine. I take full responsibility for the content of these paragraphs, because I am claiming myself as the author. I made sure the tool mentioned the SIFT method by Caulfield, and I inserted his name into the text when it did not attribute him. I also removed incorrect references. Furthermore, I made sure that its language was appropriate for the context of this medium.
You will see just below this paragraph that I include myself as an author in the chat reference. This is because I believe that I am an author. As I tell my attendees and fellow educators, “if you cite yourself as an author of a conversation with an AI tool, you had better act like one.” As Ethan Mollick suggests in his book Co-Intelligence and his Substack, this is the essence of collaboration: partnering with the AI tool at the most, treating it like an assistant at the least. It is not a hammer, and you are not meant to function like a copy machine. Both you and the AI have significant contributions, but you are the one who determines how effectively those contributions are acknowledged and leveraged.

References
Hepler, R., and OpenAI. “Writing Style Analysis,” conversation with GPT-4o [Large Language Model], May 21, 2024. https://chatgpt.com/share/14b63c5b-b677-47d1-b4b5-d84cfca267cc.