Cultivating Consistency from the Course to the Company
Technology used in education/training should be the same (or close to the same) as technology used persistently in the field...
My First Presentation at the AECT 2024 International Convention
When I first started this post about a month ago, I was not planning on connecting this to a presentation at the AECT International Convention. I also wasn’t planning on starting an ISU AECT affiliate chapter or applying for a PhD program, but lots of things are happening that were not planned. It just so happens that my first presentation meshed well with this concept.
I talked about "Effectively and Ethically Integrating Multimodal GenAI Tools and Workflows In Education."
Say that three times fast!
The first part of my presentation, which was not explicitly about ethics, elicited a LOT of discussion about use cases, procedures, student supervision, and citation. We didn't, in fact, get to the ethical parts, but that didn't mean we didn't talk about ethics.
Specifically, the bulk of the conversation was about citing or acknowledging AI, because that was what people wanted to talk about after I showed my adapted citation style.
I included myself as a co-author (really the lead author) alongside the AI tools. Some thought this unintentionally claimed credit for something I didn't do, while others said it was a good way to hold users responsible for human-AI products. There was some spirited arguing about this, and then someone said, "I really think this is semantics."
THAT also set people talking, as did another participant who brought up the idea that "If we're citing a tool as an author, then we're saying that it's a human."
None of the people holding the other three positions would concede that point.
Then there was another person who said, "AI isn't a tool, we should be developing a relationship with it, like a synthetic person."
For those of you keeping track, that's five arguments going around. I was so satisfied that this quintessentially dull topic was being discussed so vigorously.
Then, yet another argument started over whether the statements should be called "citation" or "acknowledgement."
I appreciated the conversation/discussion style of this type of session and look forward to two more sessions of this type tomorrow, plus an engaging and sure-to-be thought-provoking panel on Copyright and AI (I'll report on that tomorrow).
I wrote a previous blog post about the importance of citation and acknowledgment of genAI tool use, and it just so happens to be one of my most popular posts.
Importance of Citing or Acknowledging AI
The Increasing Volume of Arguments Against Acknowledging AI Use
Citations, Communication, and Consistency
Citations are just one aspect of AI use about which educators should communicate consistently to students. As one of the participants said, "citations" could be considered "acknowledgments." But the fact remains that users should acknowledge AI tool use. In the workplace, supervisors and administrators will expect you to proactively acknowledge the tools you use, whether they are genAI or not.
Communication and consistency regarding AI best practices in the classroom and other educational contexts can give future workers a leg up on colleagues who have not had exposure to ethical and high-quality genAI tools. This is not to say that genAI tools will absolutely be used in every workplace context; that viewpoint is a hallmark of the pro-hype discussed in my last few posts.
At the same time, though, knowing how to use genAI when it can be applied effectively and ethically will be a significant competitive advantage. This means that students should learn which goals are reasonable for human-AI collaboration and which are not realistic.
Communications between educators and students regarding all educational technologies should not be pro- or anti-hype. They should be realistic about the capabilities of the tools and the issues and dangers surrounding them.
I am a firm believer in the idea that the tools used in educational contexts should be the same, or at least as similar as possible to, the tools that are used in the workplace. There is no lasting benefit to using Padlet or H5P in class to facilitate frequent activities or create assets if professionals do not use those tools in the field. That would just result in learning a completely new technology for no purpose other than convenience for the instructor.
Canvas, D2L, and Moodle have functions and interfaces similar to forums, social media platforms, ticket-submission systems, and internal communication tools. Whether intentional or not, this helps students become accustomed to working with digital asset management and human resource management systems. The most effective discussion assessments are not the ones that require students to "reply to two of your classmates or else you're lame." They encourage students to "reply only to those comments that intrigue or interest you, but reply substantially." This mimics typical workplace communication patterns.
So, how would you apply this concept to generative AI? Simply model effective use according to the TCoP Model that I talked about in "Creating a 50-50 Digital Alloy."
Creating A 50-50 Digital Alloy
***NOTE: In my mind, the words "teacher," "trainer," and "instructor" are interchangeable, as are the words "student," "learner," and "trainee." The concepts in this post are applicable in formal and informal education, in online and in-person courses, and in educational, corporate, institutional, and public instruction.
“The process of the TCoP Model (pronounced “teacup”) is simple: How do we as educators become confident enough to teach our students about AI? We gain experience ourselves. As we consume best-practices-content from this tool created by others, we learn how to use it to make our own products. Then, we model these best practices to our students and encourage them to make their own products. Hopefully, at least some of these students will teach others what they have learned and share their own products. Do you see where this is going? Eventually, a massive network of practitioners who have been taught through active consumption and practice will develop.”
The conversations, objectives, and outputs we model to our students should be things that they will encounter frequently in their chosen field. Instructors are commonly subject-matter experts (SMEs), so they should know which things are most important and significant in the workplace. I would recommend that educators model the Rhetorical Framework when they model human-AI interactions. Students will learn how to conscientiously and repeatedly consider all aspects of their interactions with generative AI tools.
As educators consistently model the most effective, ethical, and equitable ways to use technologies, students will develop patterns of use and standardized ways of coming up with prompts. Notice I say “ways of coming up with prompts,” not “prompts.”
Prompts for similar materials, even in the exact same format, do not have to be exactly the same every time. In fact, integrating new knowledge about objectives, best practices, content, and other factors can help students continuously improve their prompts. Providing “prompt libraries” and “templates” could trap students in a non-critical pattern of rote prompting.
Including Workforce Stakeholders In Education
One of the most effective ways to understand which genAI-enhanced processes and outputs are used most frequently in the workplace is to ask the people who work there (there's a shocker!). If your students are employees (if you are a workplace trainer), that is ridiculously easy, and you probably already do it anyway. If you are a formal educator or adult educator, this may be outside of your normal processes.
In addition to creating a "nimble pipeline" between the classroom and the workforce through internships and other opportunities, collaborating with practitioners can help make your assessments and objectives more relevant to the goals students will pursue once they graduate.
Multiple resources (although they conflict a bit with my last post about the pro-hype) note that "employers want new grads with AI experience," whether or not those employers themselves know the realities of human-AI collaboration. Employers want these recent graduates to come to their companies ready to teach their coworkers.
It may be useful to have these stakeholders and exemplars come and speak to your students. This could be a more general discussion in which generative AI is a minor topic, or it could be a conversation centered on generative AI. Your students will have questions that may not have come into their minds otherwise. I have talked to three student groups, and the educators for each of them told me that the questions their students asked of me were much more complex than those they asked in typical sessions. I know how I talk and present. Those questions were in spite of me, not because of me.
What Impact Will Consistent and Realistic Education Have on the Workplace?
Most proposed AI-enhanced projects in the workforce, at least the ones I have heard of, are not realistic unless one has a dedicated contracting company managing a massive ChatGPT-like service. And for most companies, this is simply not reasonable.
Consistent and realistic work with AI tools will teach students, who will become professionals, what can be done with AI in a typical workday, what needs a little more work, and what is simply not realistic. Employees will know what not to promise and what objectives they can offer their supervisors. Overpromising because one does not understand the technology one is working with does not excuse failing to deliver on those promises; in fact, it can damage one's value in the eyes of an employer. Clear and pragmatic knowledge of AI's capabilities can keep employees from overinflating the importance of AI in a company's workflow.
Generalized Examples of the Value of Accurate Knowledge and Training
I have dealt with incomplete knowledge about generative AI tools in two proposed consulting projects. They both turned out essentially the same way.
Businesses asked me to create solutions for them. I discussed the various ethical issues and realistic capabilities of AI tools. I proposed alternative solutions to the problems they were facing (consistent with David Wiley's counsel to "commit to the problem, not to your solution"), but they persisted in wanting their original solution. I told them I would consider working on their proposed solution but warned them that it would probably not be realistic.
A short time later, someone else told them in an anti-hype way about all of the issues with generative AI tools. Only then did they believe that the hype was wrong and that they needed to be more realistic. Unfortunately, they also acted as though I had never spoken to them pragmatically. It seemed a little like this classic British sketch.
Conclusion
The point of this post is not to shame educators away from teaching with generative AI tools. My point is that instructional designers, trainers, and educators should be more deliberate in how they teach about, train about, and model generative AI use. No matter what citation style, AI literacy framework, or use pattern we model, we should be clear and consistent to avoid confusing students. When we cultivate this pattern, our students will carry it on into their work. They will be able to set realistic goals and carry out extraordinary projects with the aid of AI without being carried away.