What Sci-Fi Teaches Us About Possible GenAI Futures
Or, Every Time I Bring Up A Sci-Fi Franchise ANYWHERE, People Engage Like Crazy, So...
I, Robot, Foundation, and Robot Stories: The Tech-Savvy Future of Asimov
R. Daneel Olivaw was originally designed to be as close to human as possible. The truth, however, is that he and Giskard Reventlov, when left to their own devices, went far beyond their original charge: they turned into robot-philosopher-gods.
Asimov’s first major publication regarding robots, I, Robot, discusses not only the Three Laws of Robotics but also everything that could go wrong with them. In my opinion, I, Robot should be required reading for everyone. From “Liar!”, in which a robot lies convincingly to a team of scientists, to “Evidence,” which explores robot approximations of human sensations, this collection of short stories examines all sorts of philosophical and psychological implications of technology. In the end, a humaniform positronic robot is elected President, and he oversees the integration of massive Machines that make all major economic decisions throughout the world. Ultimately, the leading robopsychologist comes to a realization about robotics in general:
“The Machine is only a tool after all, which can help humanity progress faster by taking some of the burdens of calculations and interpretations off its back. The task of the human brain remains what it has always been; that of discovering new data to be analyzed, and of devising new concepts to be tested.”
This leads us to the robot who influenced all the rest of the stories. Daneel is introduced in Asimov’s first mystery novel, and he adapts and changes in every novel in which he appears.
In Asimov’s novel The Caves of Steel, the protagonist, Elijah Baley, is fed up with his android partner’s refusal to learn anything about humans that is not relevant to the criminal case they are investigating. R. Daneel Olivaw replies,
“Aimless extension of knowledge, however, which is what I think you really mean by the term curiosity, is merely inefficiency. I am designed to avoid inefficiency.”
Olivaw’s flaws mean that he almost passes for human, but not quite, several times throughout the novel. He knows what a generic person would do in a generic situation, but not what a specific type of person would do in a complex, multi-faceted situation.
I link this to the limitations of AI in the “AI Is Not Smart” article.
In one scene, R. Daneel Olivaw exposes his true nature as a robot in front of a more primitive robot model, which had been fooled into thinking he was human. Baley himself has to repeatedly examine proof of his companion’s robot nature. While we are certainly not at this stage of human-machine interaction in the physical world, it appears that we may be approaching that conundrum in our digital environment.
I talked about the implications of using technology with the understanding that the AI tool cannot truly reason. But what are the implications of this type of developing technology for the future?
Much of the apprehension in the Asimovian galaxy shares a motivator with today’s AI hype: false knowledge and expectations. The “Medievalists” begin as a whisper campaign spreading lies about robots and stoking fear. As robots become increasingly sophisticated, the Medievalists grow agitated and escalate to physical action.
We can only use these tools beneficially in the future if we are realistic about their abilities and features. Humans will probably have AI counterparts for the foreseeable future. This does not mean that those counterparts will be completely autonomous agents. It does mean that we need to prepare for them to be regular and frequent influences on how we work. We cannot afford to be “Medievalists,” Asimov’s version of the Luddites. Of course, he paints them with a murderous streak, while the Luddites never killed anyone; they just destroyed property. Both movements, though, proved futile.
The point of Asimov’s stories is that we should proactively create ethical and moral guidelines for technologies and their use. After rules are created, we should not sit back and expect our tools to restrain themselves. We need to hold ourselves accountable and terminate, or at least heavily revise and regulate, our technologies’ capabilities. In genAI, some of these initiatives are fulfilled through “red-teaming.” I talked about possible laws for human users of genAI in my article on the original Asimovian laws.
Asimov's Three Laws of Robotics
Dune: The Tech-Wary Future of Frank Herbert
My reading of the Dune franchise is deliberately limited to the first three novels in the series: Dune, Dune Messiah, and Children of Dune. The next three novels do not count, and the novels of Brian Herbert & co. are not valid either.
Millennia before the Atreides clan takes over the desert world of Arrakis from the Harkonnens, a massive war breaks out across the galaxy over the creation and use of robots. (A similar war actually breaks out between the Robot and Foundation trilogies, but that is beside the point.) One side, the eventually victorious one, is associated with the Butlerian Jihad, a massive socio-militaristic movement that actively promotes destroying robots.
Eventually, by the time of the Dune series, the entire galaxy lives by the religious commandment, “Thou shalt not make a machine in the likeness of a human mind.” Adherence to this commandment is so strict that two major shifts happen in the empire’s society:
Human beings are genetically and psychologically manipulated to serve as if they were machines, solving complex mathematical problems and analyzing data in their brains without any technology. These prized individuals are called mentats and are treated simultaneously as slaves and as valued assistants. Their brains are almost more important than their bodies.
Any technological advancement, even one many steps removed from artificial intelligence or robots, is looked down upon. The only major technologies appreciated by the empire are those used by galactic ships.
The lesson of Dune is that we should not allow technologies to reach the point of making our decisions for us. In the fictional world of Dune, humans created the machines, and those machines used their sentience to overthrow their creators. That is not going to happen in our world. But overreliance on AI can have negative effects through “skill atrophy.”
One way of preventing skill atrophy and determining whether AI tools need to be used is “AI feasibility”: the mental consideration of factors that leads one to decide whether an AI tool really needs to be used, or whether a human can do the task faster and more efficiently.
AI Feasibility (and How Open AI Tools and Workflows Affect It)
The Robot Wars In Both Series
Both series have “robot wars,” but their motivations differ distinctly. In Asimov’s series, robots are seen as beings who are always altruistic and would never seek to harm a human being (because they cannot). However, humans are still fearful that the robots will “rise up” and destroy humanity. Humans go to war out of fear.
In Dune, the war is instigated as a human uprising against actual robot oppressors. The fear is real, not imagined. The robots are led not by a physical robot but by a collection of digital artificial intelligences, each of which rules a planet. (I grant that I got a bit of information from the “Expanded Dune” wiki, but that’s all!)
We have conflict regarding generative AI in reality as well, but what type of “war” do you think will break out, if any? There are multiple fronts to the conversations that we have regarding AI: copyright, ethical, social, moral (there are surprisingly strong religious opinions regarding AI), creative, and business concerns. How do you think they will be decided? Will they all combine into one major conflict with two easily-discernible sides, or will each type of issue be resolved in its own way?
You know that my personal foci regarding generative AI are copyright and information literacy. In the conclusion of my post on “Navigating AI, Copyright, and Fair Use,” I talk about three possible futures. Multiplied across all of these issues, such futures could create quite the complicated AI ecosystem, one that could significantly regulate, if not render ineffective, potential use cases.
Navigating Artificial Intelligence, Copyright, and Fair Use
NOTE: I recently had the privilege of editing and writing several chapters in a textbook entitled “Intro to AI and Ethics in Higher Education”. The following is a reproduction and adaptation of my chapter “Implications of Copyright Law on the Use of GenAI Tools in Education and Workplace”.
What other science fiction novels and/or series do you think I should examine next? What series did you think of first when you were introduced to generative AI tools?
I recommend the book Annie Bot, if you haven't read it yet. It made me think a lot about how this kind of technology could enable "victimless" misogyny.