I sometimes feel like stories and movies are a suspension of reality—an escape, an exploration. When it comes to AI, I was particularly convinced it was all ridiculous fantasy. Until I heard a mythologist explain how the lore we've developed about AI has shaped, and will continue to shape, the designers of this technology.
Movies, particularly, have wide reach across fields. We're fooling ourselves if we assume we can disregard the outlandish sci-fi plots, warnings, and design features of artificial intelligence.
After all, Sam Altman, CEO of OpenAI (creators of ChatGPT), admitted that the movie Her "certainly more than a little bit inspired us." We can speculate that the legal action Scarlett Johansson pursued against OpenAI, over a ChatGPT voice that sure sounded like hers when the feature first launched, is potentially a nod to that "inspiration."
To explore this, I'm going to send out a CYBORG_ issue every once in a while that examines an AI movie and how its themes impact us right here and now. We'll kick things off with a brand-new 2024 film, Subservience. I'll do my best not to include any spoilers, and if you haven't seen the movie, you should still be able to follow along!
Breakdown
One of the most prominent storytelling choices about AI in this movie was to make the AI robot extremely human-like, complete with a UV light to kill bacteria in the robot's mouth so it doesn't have to brush its teeth. In other words, the robot is housed in flesh-like material that would actually need its bacteria addressed...
Obviously, the actress, Megan Fox, is already human, so she leans on previously established tropes and mannerisms to appear more robotic: unsynchronized head and body turns, stiff posture, a smaller range of vocal inflection, etc.
What we might miss, though, is the implied design choice made by the robot manufacturers in this world. That choice matters a lot, and it has potentially devastating impacts beyond the cliché "bugs in the machine" plot.
Natural interactions
One positive feature of this design is that you already know what to do with a human. There's no need for onboarding or an interface. Compare that to an Alexa device: it's about as minimal a design as it gets, but if you don't already know what it is and how to use it, you still need a little bit of training, because it isn't obvious what to do with it.
Family / home tasks
This is probably the best reason a robot would look like a human: taking care of children. I would guess it's more natural for very young children to understand and accept a caretaker that appears human, leaning on a baby's or child's instincts to overcome the potential barriers to trusting the robot.
Corporate motivations
There's an awkward plot point where an entire team of construction workers is replaced by bots. These aren't special bots with built-in tools or designs optimized for construction. They look just like humans.
This is a slap in the face because it reveals the company's motivation: cost reduction. It's not about safety, speed, or quality. If it were, they'd opt for a truly radical upgrade: robots specialized for the unique needs of construction work. Instead, the company chooses the less effective strategy of cutting the easy costs.
No more providing for human needs like food, breaks, and decent working conditions. AND no need to buy new tools like drills and screwdrivers. Simply replace the operator.
This is where we find the most nefarious part of the AI-housed-in-a-human-body design choice, and in my opinion it reflects the real concern we should have about the introduction of AI into the workplace: greed. Yes, some jobs are literally threatened by AI, and that hurts. Even worse is a board of executives deciding that the people they previously employed are mere objects that can be replaced by other, slightly upgraded objects.
Slavery
Perhaps I was primed by the title of the movie; maybe it was the overt sexual interactions between owner and machine; or maybe it was the image of the robot standing in the corner while the family eats dinner at the table. Either way, slavery was the dominant theme that colored everything I saw in the movie.
I don't see a world where AI truly becomes self-aware and sentient—as I've explained in a previous CYBORG_, we have a cognitive bias that would erroneously ascribe life to the inanimate. Despite our biases and cognitive traps, I'm still repulsed by the idea that we could become comfortable with this style of robotic slavery.
Again, not because it has anything to do with the robot itself, but because we would opt for a human-like servant to be at our beck and call. What would it do to us to train ourselves that it's okay to treat human-looking creatures like servants? How would our behavior toward the obviously humanoid robot spill over to real people we don't like? How would we start treating real humans that we see as beneath us?
Would slavery and servitude return as the only option for unskilled or manual labor? For the people who can't afford a human-like robot, maybe it would become acceptable to grab another human. Maybe a child.
Human history clearly shows that humans can't be trusted. We're too easily tempted by power and greed to allow a complex intelligence to be housed in the skin of a human without it becoming a training tool for the exploitation of actual humans.
Creating robots that can serve as a sex object, a cook, a babysitter, and an assistant for any number of other tasks is wildly irresponsible, and it is ultimately a future that is within the realm of possibility right now. We'll have to raise our voices for better legislation and for thoughtful restrictions and standards for AI researchers and developers. It is possible to change the future here in the present.
Real-World Robots
In 2014, Savioke Labs was trying to finish Relay, a robot to help with hotel delivery services like bringing towels and snacks to guests.
They were worried the robot would seem creepy to guests, so they went through a rigorous week of research, prototyping, and testing. The book Sprint, by Jake Knapp, describes their process and includes this concern from Steve Cousins, CEO and founder of Savioke:
"We're all spoiled by C-3PO and WALL-E. We expect robots to have feelings and plans, hopes and dreams. Our robot is just not that sophisticated. If guests talk to it, it's not going to talk back. And if we disappoint people, we're sunk."
Interestingly enough, the prototypes and tests in that final "Sprint" week produced great feedback for giving the robot just a tiny bit of personality: a digital "face" and a few actions that appeared to express happiness, like a little dance or some chirping noises.
"Guest after guest responded the same way. They were enthusiastic when they first saw the robot...People wanted to call the robot back to make a second delivery, just so they could see it again...But no one, not one person, tried to engage the robot in any conversation."
Even though this was ten years ago, we can see a few principles that explain the difference in how humans interact with it. The Savioke robot is not at all humanoid; it's more like an elegant box that glides across the floor like a Roomba. The people who interacted with it had no expectation of treating it like a human by talking to it. They didn't even have many options for things they could do with it.
Contrast that with the highly advanced but human-looking models in Subservience: we see humans enraged by robots (such as the waiters in the film's diners and bars), we see the main character overcome by sexual desire despite it being against his values, and we see human-to-human interactions deteriorate, especially when loyalty is challenged.
Cyborg
There is space for robots to enrich our lives, but the design of those robots will lead to drastically different outcomes.
As we watch technology progress, we need to be even more vigilant in living by our values and choosing carefully what technology is allowed to touch in our lives.
My wife frequently reminds me not to yell at Alexa, because I don't want to be the kind of person who yells at something or someone (as much as I get frustrated by Amazon attempting to upsell me, bio-hack me, or order something on my behalf). I'm trying to use this primitive robot intelligence to train myself to extend respect to all things, human and otherwise, while still drawing boundaries around what I allow Alexa access to and what I choose to offload to it.
We also need to examine our own psychology. We can't let our proclivity to animate soulless objects distract us from the real and impactful issues at hand. Spend less time wondering whether a robot has feelings (it doesn't, for now) and more time scrutinizing the companies that are getting exceptionally wealthy off your information and your attention.