***NOTE: In my mind, the words “teacher,” “trainer,” and “instructor” are interchangeable, as are the words “student,” “learner,” and “trainee.” The concepts in this post are applicable in formal and informal education, in online and in-person courses, and in educational, corporate, institutional, and public instruction.***
David Wiley has created yet another excellent post on his blog, in which he discusses the idea of “the symmetrical power of AI.” His brief post (even for him, this is brief) simply notes that any task an instructor uses AI to create, a student can use AI to complete.
This might seem like an obvious thing to note at first glance, but it is an important truth for educators to consider. At conferences and while giving presentations or trainings, I have connected with faculty who are at times concerned or fearful about AI tools. I have learned that they largely fall into two of the following four camps:
Fearful that students will primarily use AI to commit unethical practices for course assessments (“cheating,” “plagiarism,” etc.)
Confident that students can use AI productively, ethically, and proactively (and that they will choose to do this most of the time)
Fearful that the advent of generative AI will unavoidably undermine the basics of education, especially online education
Confident that generative AI tools can be used in similar ways as other EdTech tools
The important thing to note about these camps is that they are not mutually exclusive. You would think that someone who is confident about students would also be confident about their own use, and likewise for the fearful camps. However, this is not true. Many educators who use AI quite comfortably in their own work are fearful that the default use for students is to “cheat” or “plagiarize.” I hope that you can easily see the hypocrisy of this viewpoint.
“But students don’t know how to use AI; they do not have experience!” This is not a valid argument in the slightest. No educator (except for those teaching a high-level course about AI) has significantly more experience with these tools than the average student. Furthermore, students will never understand how to use these tools if they are never exposed to their use. The workplace will expect them to use these tools; “banning” them is a misinformed practice and a great disservice to students’ professional development. The answer lies in actively and publicly modeling ethical and effective genAI tool use, following what is called the Technology Consumer or Producer (TCoP) Model. Drs. John Curry, Sean Jackson, and worked together to develop this model of technology integration in educational contexts.

TCoP Model
The process of the TCoP Model (pronounced “teacup”) is simple. How do we as educators become confident enough to teach our students about AI? We gain experience ourselves. As we consume best-practice content created by others with these tools, we learn how to use them to make our own products. Then we model these best practices for our students and encourage them to make their own products. Hopefully, at least some of these students will teach others what they have learned and share their own products. Do you see where this is going? Eventually, a massive network of practitioners develops, each taught through active consumption and practice.
I prefer this model for multiple reasons, but mostly because I love experiential learning, especially with technology. I enjoy project-based assignments (even writing assignments) much more than standardized exams or quizzes. To read about my journey of applying this model, and the ethical and best-practice considerations I encountered along the way, read my chapter in the recently published book from Utah State University, Teaching and Generative AI: Pedagogical Possibilities and Productive Tensions.
Putting it Together (like Sondheim)
Whenever I have a summative, concluding module in a course or paper, I always want to reference this song from Sunday in the Park with George. Now that I am writing my own blog, I can do whatever I want to. Ha.
How do teacups align with symmetry? Well, in the TCoP Model, the educator is being transparent about their AI use with their students. In fact, they are inviting their students to critique their workflow and adapt their example into a process for their own creations. In other words, they are encouraging their students to engage in this symmetry.
Is “AI Symmetry” Really Asymmetrical?
An interesting thought about this symmetry is that it might not be so symmetrical after all. For example, David notes that “instructors can use AI to write [various] assignments” and that students can use AI to fulfill them. The first assignment types he mentions are standardized; he then moves to writing assignments, and finally to non-text, project-based assignments. What he did not mention (but probably thought of; let’s give him credit, he’s at SUNY CIT) is that the level of student engagement, the length of the workflow, and the number of AI tools involved are much greater for the later assignment types than for the earlier ones.
Furthermore, the level and complexity of AI use on the part of the student will be much higher than that of the instructor. You can come up with an objective, an assessment description, presentation prompts, a rubric, and a reflection prompt within minutes if you have the right level of complexity, the right type of prompt, and the right AI tool. In fact, I created an Instructional Design Assistant with GPT-4 to assist in this endeavor.
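To illustrate how little machinery the instructor side requires, here is a minimal sketch of that kind of single-prompt generation, assuming the OpenAI Python SDK. The prompt wording, topic, and model choice are my own illustration, not a reproduction of my actual Instructional Design Assistant:

```python
# A minimal sketch: generating a full assessment package in one call.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. All prompt text here is illustrative.
from openai import OpenAI

client = OpenAI()

topic = "information literacy: evaluating online sources"  # hypothetical topic

prompt = f"""You are an instructional design assistant.
For a college course module on {topic}, produce:
1. One measurable learning objective
2. A short, student-facing assessment description
3. Three presentation prompts students can choose from
4. An analytic rubric (criteria, performance levels, descriptors)
5. One end-of-module reflection prompt
Give each section a clear heading."""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

One well-structured prompt, one call, and the instructor-side artifacts exist in draft form; the student-side workflow has no equivalent shortcut.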
Students who are trying to create videos and other multimedia outputs will have to use a multimodal tool such as GPT-4 or GPT-4o, or will have to create a workflow using multiple AI tools. They will also have to engage in much more oversight, intervention, rearrangement, and other actions with each of the tools or Custom GPTs.
This is as it should be. However, when we model ethical AI use for our students, we should show them how we have fulfilled similar objectives with our own products. In other words, show them the entire creative process, from text ideas in ChatGPT, to creating images with Ideogram, to putting it all together in CapCut. If we only show them the easy, quick text generation, they will not know about all of the creative enhancements they can and should use. They could come away from our demonstrations thinking they only need to put forth a small amount of effort.
In my mind, the difference between an intentionally curated final product and a passively generated (on the part of the human) AI output that is passed off as a final product is like the difference between an item made of a 50/50 alloy and one made of a 90/10 alloy. A copper-tin alloy that is half copper and half tin holds together, so the metaphor goes, while one that is 90 percent tin is extremely brittle. This is not to say that brittle alloys are unusable in certain situations (much like you can probably use ChatGPT to create a one-off email), but they should not be used for very important projects.
Here is an example of a brittle alloy. I created a brief video about lemonade using ChatGPT (most notably, the “Vendy the Friendly Vending Machine” tool), Udio, T-t-S Online, and CapCut. It is good for entertainment, but I would need to do more with the individual elements to make it usable in a meaningful way:

As for a 50/50 metal alloy, I would humbly suggest that my OER InfoLit course on Canvas Commons is a candidate. I spent hours curating, developing, and formulating objectives, assessments, content, and other materials for this course.
How Should We Model Alloy Creation?
Some suggestions to this end include:
walk them through a (live) process of creating a multimedia product
if you are teaching text prompts, teach them simple and complex, multi-level prompting styles, such as the COSTAR Framework (Context, Objective, Style, Tone, Audience, Response; see the sketch after this list)
if you are teaching about image generation or audio generation, teach students about the necessity of prompt and output iteration
when AI makes a mistake, acknowledge that the fault lies with you (not necessarily a real fault, but “I could have been more detailed in my prompt,” or “let’s try going at it from a different perspective”)
show the complexity of multimodal tools by calling multiple tools in one conversation (a voice generator, a data analyzer, a sound generator, for example) and show how these can build off of each other
Other ideas (let me know in the comments!)
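To make the COSTAR suggestion above concrete, here is one possible way to assemble such a prompt. The framework’s six fields are real (Context, Objective, Style, Tone, Audience, Response), but the scenario and wording below are my own invented example:

```python
# One illustrative way to build a COSTAR-framework prompt.
# The six field names come from the framework; their contents here are
# invented for demonstration and should be replaced with your own scenario.
costar = {
    "Context": "I teach a first-year information literacy course.",
    "Objective": "Draft a one-page handout on spotting AI-generated images.",
    "Style": "A practical checklist with a brief explanation for each item.",
    "Tone": "Encouraging and non-alarmist.",
    "Audience": "First-year students with no technical background.",
    "Response": "Markdown, under 400 words, with a title and 5-7 checklist items.",
}

# Join the fields into a single structured prompt.
prompt = "\n\n".join(f"# {field.upper()}\n{text}" for field, text in costar.items())
print(prompt)  # paste into a chat tool, or send through an API call
```

Walking students through each field, and showing how changing one field changes the output, demonstrates multi-level prompting far better than a one-line request ever could.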
When we model complex, ethical, and thorough AI use and oversight, students will be inspired to put forth a large amount of effort in the creative process. They will end up creating a 50/50 alloy product, a true melding of AI power and human effort, rather than simply a 90/10 alloy, with 90 percent generated by AI and trivial effort (if any) from the human.