If you are shocked by the phrase in the subtitle, and that I would ever use such a phrase, I was shocked too when I first heard it in a YouTube video produced by the Stanford Center for Human-Centered Artificial Intelligence (Stanford HAI). As I have mentioned in previous posts, I have taken to listening to Stanford HAI videos while I work in my shed and mow the lawn. I had to stop mowing when I heard the phrase.
You can watch it below; the phrase first appears in the last three minutes:
To be fair, the phrase was first used in a question posed to a faculty member and researcher at Stanford HAI. After an extraordinary podcast panel discussion with professionals in Early Childhood Education, one of the attendees asked about the ethical guardrails the developers and researchers were using to set limits for their “babies in the basement.”
Those words were the first part of the question the panelists addressed. They clarified that they were not creating “babies in a basement.” “I want to make that very clear,” stated Nick Haber, a professor in the Stanford Graduate School of Education. He explained that they were examining applications of AI that could help educators of children improve their teaching practices and resources. They were not examining how to create machine facsimiles of toddlers.
What Are We Doing (Or Trying To Do)?
However, the phrase lingered in my mind as I worked with several individuals and groups over the rest of the day. Another episode in the same podcast series discussed the importance of expectations and of managing them.
The moderators asked their collaborators about the importance or necessity of benchmarks as signs of “progress” and “beneficial development.” Jessica Riskin, a historian, recommended a multidisciplinary approach to AI development rather than developing only according to narrowly defined benchmarks of data quantities and industry-specific tests. James Landay, a professor of human-computer interaction at Stanford, agreed somewhat with Riskin but proposed a slightly different approach. He mentioned Dr. Geoffrey Hinton’s 2016 prediction that by 2021 radiologists would be completely replaced by AI and other software tools. Similar impacts, he implied, would occur for other healthcare professionals.
Hinton’s prediction was obviously incorrect. The focus of AI in healthcare shifted from “how can we replace human workers?” to “how can we help healthcare workers become more efficient, accurate, and knowledgeable?” Their goals changed because their reasons changed.
Landay then mentioned another recent AI-related goal: to create AI tools that were completely indistinguishable from humans. This goal also lost its importance and support, in part because of its ethical implications.
Why Are We Trying To Do It?
Landay’s takeaway from these predictions and altered goals, which he related to Riskin’s argument, was that while benchmarks are essential, we need to ask ourselves why we have established the benchmarks and goals we have.
In other words, we need to ask ourselves what we are trying to create and why we are trying to create it. Is our goal really to improve our own work, or to help others? Or are we just doing something because it appears to be the next great thing? We need to critically examine both the tools we use and the products of our work. This is as true for those of us in the middle, chat tool creators and end users, as it is for the base model creators.
I cannot say that I always follow these ideas, but they have shaped many iterations of my work. For example, when I advise custom GPT tool creators, I always tell them that they need a specific output style or goal in mind. Their tool or conversation should not stray from that goal. As they create the tool, they should always keep various ethical guidelines and caveats in mind.
“Babies in the Basement?”
The entire concept behind the initial question about the “babies in the basement” sent me into a cognitive conundrum. Why would the attendee assume so confidently that the purpose behind AI research in ECE was to create AI-infused toddler replicants? Why would they ask such a question in a public forum?
This question surfaced more directly in my life a few weeks ago. I was in my hometown for a relatively large barbecue that my parents throw every year, catching up with my old neighbors about how our work and lives were progressing. After the usual pleasantries, they asked what I did and what projects I was working on. Every family but one (one of whom was involved in the military and had doubtless seen technology intersect with his work in many ways) walked away with sour expressions the minute I mentioned that I was working with AI, even though I emphasized that I was focused on ethical and meaningful use.
The answer lies in the disparity between what practitioners and creators commonly know and what the general public is presented with and understands. Even if our goals are realistic, ethical, and called for (helping radiologists instead of replacing them, for example), the public may not understand that. When they hear about research projects and their products, they can only guess at the desired outcomes. Hype and other misinformation can lead people to erroneously conclude that researchers are attempting to replace babies, or radiologists, or librarians, or educators, and so on.
Not only, therefore, must we have concrete goals (the whats) and theoretical justifications or problems needing to be solved (the whys); we must also be transparent about these goals and make sure the public can understand these aspects of our work. Then we will be able to help the public see how properly utilized AI can be a boon rather than an existential threat.
Examples of Conscientious and Transparent Whats and Whys
If you would like to see some of my recent work, you can visit libraryrobot.org, which I made with
at Learning Revolution, Future of AI, and Library 2.0. These tools are free to the public, and they include a Book Finder, a Book Summarizer, and a Boolean-Keyword Search Query Optimizer for databases and search engines alike. Another tool (and the one I am most proud of) is my Instructional Design Assistant.
Another excellent tool is the one created by Hannah Chavez Moutran and explained in her blog post “Enhancing Library Services with Conversational AI.” See the goal of her tool? It’s right in the title. And she explains the outcomes related to her goal in her blog post. Experiment with her Library Assistant Chatbot on Voiceflow.
Another excellent tool, created by
, focuses on fostering Instructional Design founded on best practices, viewed through the lens of ethics. This tool has a specific goal right in its title and adheres closely to its instructions, reminding the user of ethical best practices.