We Haven't Gone As Far As We Can Go
Camping, "Kansas City," Copyright, Caution, and Commercial Events
Let’s see how many /k/ sounds we can get in there, shall we?
As you might be able to tell from the title and subtitle of this post, I got to see Oklahoma! last week. My family went camping in Island Park, ID for the week of Independence Day, and we took the forty-some-odd-minute drive up to West Yellowstone to see the Playmill Theatre’s excellent production. The actress playing Ado Annie was the epitome of the character. You could put her in the Hugh Jackman production and she would fit perfectly. The whole production was excellent, but she was undoubtedly my highlight.
While watching it, I remembered that the AECT 2024 International Convention was going to be held in Kansas City in October. (Just finished registering and paying for everything associated with that yesterday, as a matter of fact!)
My first day back at work, I took the time to draft this spoof talking about the upcoming convention:
"I got to Kansas City on a Friday,
By Saturday I learnt a thing er two,
I never knew ed-tech could be so mighty,
It changes everything we thought we knew.
I counted twenty people sharing their LinkedIns,
Almost every time I took a walk,
And then I went right into the vendor's ballroom floor,
And an AI avatar started to talk.
Whut next!
Yeah, what!
Whut next?!
Everything's up-to-date in Kansas City,
They've guwn about as fur as they kin go.
With panels, presentations, talks all from AECT,
Without any contention that we know,
(yet)
The journal has been led by [Dr.] John H. Curry,
With all the papers from people in the know
We've been a-waiting all year long for new people to meet.
With every kind o' framework that they think is really neat,
Until the leading expert says they should scrap it complete!
They've guwn about as fur as they kin go.
Yes, sir!
They've guwn about as fur as they kin go.”
Feel free to add your own stanzas regarding AI, copyright, ethics, equity, accessibility, open access, what have you…
AECT 2024 International Convention
To register for the in-person convention, which runs from October 19th through the 23rd, visit this web page. Registration also includes access to the virtual convention from October 1st to the 3rd. You could sign up for virtual access only, but come on, that’s just lame! Come and meet all sorts of people (and be one of the twenty people I see sharing LinkedIns on my walks).
If you do go, I will be presenting several times and am very excited to be collaborating in at least one of those presentations: a six-person panel on Copyright and AI tools.
(opencontent.org), Valerie Hartman, Michael Ahlf, Laura Gray, and Sara McNeil will join me to discuss the impact of AI on copyright law, and vice versa, from the standpoint of instructional designers, educators, and librarians.
While We’re On Copyright…
Copyright (and its implications) seems to be an issue that many people (practitioners, educators, and others) forget about when it comes to best practices regarding generative AI tools. In my conversations with students, faculty, and practitioners in the workplace, “Let’s just put it through ChatGPT” seems to be the prevailing answer. Even though many people know about the issues (of which copyright is only one) surrounding AI use, they do not take the time to think through their uses and carefully develop workflows that focus on Collaboration rather than Automation.
Take, for instance, this post by Sara Simms Bracco. She is a professional writer, just one of many whom I know to have been inundated with insensitive discussions regarding ChatGPT. Writers especially seem to get messages and comments suggesting that generative AI can simply replace them or copy their tone completely. As someone who has given it my fair share of personal writing copy and had it mimic my voice, I can tell you that this is not true. Just because ChatGPT can write something does not mean that it will write it well.
One reason for this is users’ simplistic and shallow prompting. But there is also a legal reason: under current U.S. Copyright Office guidance, the AI-generated portions of a work are not copyrightable. Ethically speaking (and legally speaking, for now), the human must be in control and should be the major source of the product. AI can only be used for refinements and editing (and that is a vague distinction).
Even if a human produced 60 percent of the work, the AI-generated portions of the product are not eligible for copyright protection. That may be fine for non-profits and those of us in the open-access field. However, in the workplace, where ideas need to be copyrighted, patented, and trademarked, this poses a significant problem.
Why Do We Need to Be Cautious?
Apart from the need for obeying copyright law, what other issues related to generative AI exist? Or do we only need to worry about our ability to protect output?
Well, I’m glad you asked. I have spoken in multiple presentations on this topic, and I will refer you to the best one of these, the Closing Keynote for the Library 2.0 AI and Libraries II Mini-Conference.
In this video, I talk about the importance of data privacy, data confidentiality, quality control, citations, and other things that we must consider well before we begin earnestly integrating genAI tools into our workflows.
But Ms. Bracco’s insightful post raised another issue related to AI Literacy and use that I would like to stress. She said that “unless you are writing a textbook, personally written words win every time.” She then talked about the facts, vocabulary, and sentence structure provided by ChatGPT. I am sure it was unintentional, but she perpetuated a dangerous idea in this paragraph.
Speaking from an information literacy standpoint, AI tools are some of the worst places one can go for information. This is especially true if you are learning about a new topic or skill. I have made multiple tutorials and trainings on using AI tools for learning, but in none of them have I encouraged simply copying what ChatGPT has written into an educational resource without editing, fact-checking, and heavily revising the output before it is released as a product. I strongly encourage you to use the SIFT Method, which I discussed in my first post on this blog.
What is CollaborAItion?
Beginnings are hard. The heading above is the standard one that Substack recommends to all authors. After thinking about other headings I could write, I decided that this heading fits. In fact, it serves as a fundamental reason to ask the questions posed in the title and subtitle.
Even if you are writing a textbook, please write it in your own words. Use genAI to help you come up with the structure, the wording of the learning objectives, even discussion questions (that, again, you edit). But please do not rely on it alone for information about any topic or skill. You will only be putting yourself at a disadvantage, especially if you are required to work with others who have learned the skill or topic through traditional means and structures. I am not talking about bias or prejudice against personalized learning (that is a conversation for another day). I am talking about the validity and usability of the information given by GPT-4o when it is not connected to the internet or given structured and heavily revised prompts.
Two Examples Of the Need for Caution
While I was writing this post (why does this always happen?), I received two emails that demonstrated the need for caution when reading, analyzing, and acting upon AI tool outputs. One came from a vendor, and the other was a news article about the British government. I looked a bit for more examples, but these two come from contexts different enough to demonstrate my point: caution and circumspection should be used everywhere, with every tool.
Example One: Tony Blair Institute
The Tony Blair Institute released a report that claimed that around a fifth of the working time of Britain’s workforce could be saved by AI. Around forty percent, it claimed, of all tasks could be completely automated using AI. These are quite drastic and impressive numbers. The only issue is, there is no data to back them up because the statistics were generated by GPT-4.
More than a handful of reporters, both mainstream and alternative, commented on this flippant use of AI. The Institute responded to this criticism by explaining that it used GPT-4 because “making a prediction based on interviews with experts would be too hard.”
Anyone who has read this blog’s earlier posts knows that I value the human in any interaction with an AI tool. If you are not doing at least 50 percent of the work (and the important, data-related work), you are doing the work wrong.
Creating A 50-50 Digital Alloy
***NOTE: In my mind, the words “teacher,” “trainer,” and “instructor” are interchangeable, as are the words “student,” “learner,” and “trainee.” The concepts in this post are applicable in formal and informal education, in online and in-person courses, and in educational, corporate, institutional, and public instruction.
In the case of the Tony Blair Institute, they were doing it wrong. They simply pulled a massive dataset and had GPT-4 analyze it in two different conversations. Then, they used GPT-4 to communicate the “findings” and put them into their report. They had the AI tool separate the jobs in Great Britain into categories (which they also had it determine) and then make broad statements. Then, they took these at face value.
I do not wish to place a massive amount of blame on the TBI. In fact, I think that they are doing what many people are doing. But that does not diminish the dangerous precedent they are setting for other professionals.
Example Two: Elicit “Checkmarks”
Elicit recently added a feature that is designed to promote rubber-stamping AI analyses when organizing your literature reviews. Whenever Elicit is 90 percent confident (not accurate, confident) in its answer, it will show a checkmark. This is apparently a sign that you do not have to waste time checking its work. After all, the AI said that the AI was accurate. What could be wrong with that?
Here is how the feature works: Elicit uses an AI to compare the analyses of multiple AI models on the same section of a paper. If all of them agree with each other, that is taken as a good indicator that the analysis is accurate. Elicit then presents you with a summary of this work and gives the checkmark.
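Elicit has not published its exact method, so the details here are my assumption, but the agreement-as-confidence logic can be sketched in a few lines. All function names and the 90-percent threshold are hypothetical, used only to illustrate the key weakness: the score measures whether the models agree with each other, not whether any of them is right.

```python
from collections import Counter

def agreement_confidence(answers):
    """Fraction of model answers that match the most common answer.

    Note: this measures inter-model agreement, not accuracy --
    every model could agree on the same wrong answer.
    """
    if not answers:
        return 0.0
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / len(answers)

def show_checkmark(answers, threshold=0.9):
    """Display a 'verified' checkmark only when agreement meets the
    (hypothetical) 90 percent threshold."""
    return agreement_confidence(answers) >= threshold

# Three models agree, one dissents: 75 percent agreement, no checkmark.
show_checkmark(["2019", "2019", "2019", "2021"])  # False
```

Notice that nothing in this scheme ever consults the paper itself a second time; the "verification" is just a vote among models that may share the same blind spots.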
But wait, there’s more. You have to be a paid subscriber to “high accuracy mode,” because apparently accuracy is something you have to pay for, and once you pay for it, it’s guaranteed.
What seems to be implied here is that in many assessments of papers, Elicit is wrong. At least, it is wrong often enough that it does not have 90 percent confidence in its answers. Therefore, unless you pay for a premium subscription, the tool is giving you information it cannot vouch for (which you could still be paying for under a non-Plus subscription).
To my mind, you are better off putting the document through GPT-4o, which you can do for free. Furthermore, you can connect it to the internet and compare the document to others. And, last but not least, you know that it is a machine that makes mistakes. You will not have a checkmark biasing your analysis of its decisions. You will be using a generalized tool for specific purposes, in a conversation that you are steering.
Conversation Steering vs. Prompt Engineering
On Thursday I had the excellent experience of guest lecturing for Dr. Jeanne Beatrix Law, Director of Composition at Kennesaw State University. Her course on Prompt Engineering, a section of a Professional Writing course, covers the most beneficial, effective, and ethical uses of generative AI tools in communication and education contexts.
We Haven’t Gone As Far As We Can Go When It Comes to Being Circumspect
I would come up with another spoof of “Kansas City,” but I don’t want to be redundant. So let me just end this post by saying that we all (me included) need to be much more careful about how we use AI, what we use it for, and how quickly we go from retrieving an AI-generated output to releasing a product based on that output. We need to be productive without being beholden to the “machine god,” as Ethan Mollick describes AI in the popular imagination.
The Plug (Again)
As this is a free blog, and as I run a consulting business, I assume you will forgive me for briefly advertising a commercial event.
This is happening on the next three Fridays! In coordination with
of Learning Revolution, I am hosting a bootcamp on professional productivity with ChatGPT and other AI tools! You can register without feeling like you need to attend all three sessions, because recordings will be available forever afterward.
$149/person, $599-999/institutional license
For more information, scan the QR code below or go to this link:
https://www.learningrevolution.com/professional-productivity