Human-Machine Interactions Lead to Human-Machine Products
Humans Manipulate Machines, and Machines (Seem To) Produce
This image was created by Ideogram on August 8, 2024.
Wow, That’s A Lot Of “Human”…
Yes, it is! Thank you for noticing! About two weeks ago, I joined the Human Intelligence Movement. The group caught my attention because it focuses on the importance of the user in human-machine interactions.
From its name, I assumed that it would be an anti-AI, “save-the-humans” group. However, it promotes the productive, ethical, and effective use of AI by humans, especially in educational contexts. I am an instructional designer as well as an academic librarian, so I see education everywhere: in formal educational institutions, in workforce training, and in lifelong public learning programs. Therefore, I view the Human Intelligence Movement as something that can, and should, shape every workflow that involves AI.
The stated purpose of the movement is to enable all students to “thrive and succeed in an AI world.” We pursue this by encouraging users to expand and develop their innate human abilities, qualities, and skills. In my own words, users need to bring the “human” to “human-machine interactions.”
For more information on this movement, check out this video by Brooke Morehead!
What Are Human-Machine Interactions?
In the 1970s, researchers in business productivity used the term “human-computer interaction” to discuss the similarities between how humans work with computers and how they work with each other. In the 1980s, they began to examine how these interactions impacted the psychological processes of the humans involved.
Human-computer interactions are also called “human-machine interactions” (HMI), which is an increasingly appropriate term. Generative AI systems run across networks of computing machines rather than on a single computer, which makes HMI the more realistic description of how users work with AI tools.
Any action taken with a computer, or any other machine, falls under the umbrella of “human-machine interaction.” The field analyzes all aspects of these interactions, including the machines’ interfaces and functions. It also examines the users themselves. All of these elements can impact the final product of human-machine interactions.
Instruments as Machines
The benefit of referring to “human-machine interactions” is that all machines are covered, not just computers. The generative AI family of machines is an obvious inclusion. However, I think that instruments, such as pianos, are also machines that can be analyzed through the lens of human-machine interaction.
With that in mind, I am going to discuss the similarities between the interactions and products of genAI tools and musical instruments. This viewpoint yields some useful insights.
Pianos are machines made up of multiple tools that interact with each other while the human interacts with the keys. One might call the keys the piano’s “interface,” much like the screen of a computing device provides access to its interface.
When you press the keys of a piano, they move levers, which move hammers, which strike strings; the strings are the part of the piano that actually produces sound. This is comparable to the “text-to-x” genAI tools. When you interact with these tools, it seems as though you are giving instructions directly to the generator. In reality, you are interacting with an LLM, which then passes instructions to the model that generates the other medium.
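If it helps to see that layering spelled out, here is a minimal, purely illustrative sketch of the flow. The function names are stand-ins I made up for this post, not any particular tool’s API:

```python
# A purely illustrative sketch of the layered "text-to-x" flow described above.
# rewrite_with_llm, generate_image, and text_to_image are hypothetical names,
# not any real library's functions.

def rewrite_with_llm(user_prompt: str) -> str:
    """The LLM layer: the 'levers and hammers' that translate the user's
    words into detailed instructions for the downstream generator."""
    return f"Detailed rendering instructions derived from: {user_prompt}"

def generate_image(instructions: str) -> bytes:
    """The generator layer: the 'strings' that actually produce the output."""
    return instructions.encode("utf-8")  # placeholder for real image data

def text_to_image(user_prompt: str) -> bytes:
    # The user presses the "keys" (types a prompt), the LLM relays it,
    # and the generator produces the final medium.
    instructions = rewrite_with_llm(user_prompt)
    return generate_image(instructions)

image_bytes = text_to_image("a grand piano made of morning light")
```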
Prompt elements and settings that constrain genAI tools are comparable to pedals, or to different styles of performing on an instrument. The same user can play a piece in different ways on the exact same instrument depending on whether they use the pedals or play with a particular heaviness or lightness.
The Common Element is the Human
One of my colleagues in the Human Intelligence Movement, Mark Loundy, said, “My personal perspective is that I don’t focus on the tools themselves but on the personal experience of the piano player. [To me,] the parallels are between the AI generating music and the human brain making decisions on what fingers to press on the piano keyboard. The piano itself is almost irrelevant.”
Loundy’s point brings to mind the purpose of the COSTAR and Rhetorical frameworks I’ve talked about in previous posts. The purpose of these frameworks, in my mind, is more to prepare the human user than to control or direct the AI tool.
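To make that concrete, here is a small, hypothetical sketch of a COSTAR-style prompt as I might prepare one. The field names follow my reading of the framework (Context, Objective, Style, Tone, Audience, Response), and the example values are invented:

```python
# A hypothetical COSTAR-style prompt. The point is that filling in these
# fields is work the *human* does before the machine is ever involved.
costar_prompt = {
    "Context":   "I am an academic librarian drafting a workshop handout.",
    "Objective": "Summarize human-machine interaction for first-year students.",
    "Style":     "Plain, concrete language with one musical analogy.",
    "Tone":      "Encouraging and practical.",
    "Audience":  "Undergraduates with no AI background.",
    "Response":  "A one-page handout ending with a short checklist.",
}

# Assemble the prompt text only after the human thinking is done.
prompt_text = "\n".join(f"{field}: {value}" for field, value in costar_prompt.items())
print(prompt_text)
```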
When musical performers and computer users remember that they are responsible for the outcome of their interactions with machines, they act differently. In my experience, students or trainees who work according to human-first principles take time to think through the purpose, impact, effects, and audience of their interactions. They alter their own actions accordingly, which impacts the product of the machines.
Human-Machine Products
The result of all human-machine interactions, whether with LLMs or instruments, is a “creative work.” Since the common element in every interaction is the user, the human, the owner of those works is the human. The piano does not own the music created with it. The LLM is not sentient and does not consider itself an owner. A word processing program does not claim responsibility for the works its users create.
Human-machine products may be produced via a machine, but they are shaped by how the human manipulates the machine. Even derivative works (“new” things based on older things) are affected by the preferences of the direct user.
If you will indulge me, I would like to present the video below as an example of human-machine products. This is an excerpt of “Peacherine Rag” by Scott Joplin, which is in the public domain, so my playing it is okay. The original work is represented by the sheet music on my tablet (technically, I’m interacting with two machines at once, or three if you count my phone camera).
I am playing this piece on the piano according to the tempo I think is best. I make some notes staccato and others legato due to my preferences. Toward the end of the excerpt, I improvise the right hand a bit. I end nowhere near the end of the piece. These are all things that I have done. The piano had no impact on my preferences in playing this piece. It was simply the thing that produced noise. I take responsibility for how this piece sounds. I hope that you enjoy!
The important thing to understand is that the piano did not do this. The piano did not produce the sound on its own. Even player pianos, which were created to seemingly do this, are told what to do by sheets of paper that were altered by the composer or transcriber. No one ever congratulates a piano after a brilliant performance of Clara Schumann’s Quatre pièces fugitives. They contemplate her genius and thank the performer. In a similar way, while we prefer certain AI tools for specific tasks and have our own workflows, we need to remember that, ultimately, we are the ones responsible for the product.