Only three years ago, I was shamed in a Slack conversation at work for praising the concept behind GitHub’s Copilot. It was one of the first times that AI was packaged into something easy and workable for a larger market (to my knowledge anyway).
Copilot’s promise was to help developers write code faster by turning comments or partial snippets into real code—meaning the human does all the thinking and the AI translates it (or predicts it) into the programming language.
I had been working on the concept of building artificial assistants for my work, with special focus after my MS diagnosis. These “assistants” took the form of bots: little scripts that execute tasks. These bots are not like AI at all, because anything a bot can do is completely defined by my programming—no ability to learn, take orders in “natural language,” or do anything that isn’t already programmed. I have to create each bot by hand.
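To make the distinction concrete, a bot in this sense is just a small, fixed script. Here is a minimal sketch (the task and names are hypothetical, purely illustrative—not one of my actual bots):

```python
# A hypothetical "bot": a plain script that performs one fixed,
# fully pre-programmed task. Nothing here learns or interprets
# natural language; it only does exactly what the code says.

from datetime import date


def standup_reminder(tasks):
    """Format a daily status message from a fixed list of tasks."""
    header = f"Status for {date.today().isoformat()}:"
    lines = [f"- {task}" for task in tasks]
    return "\n".join([header] + lines)


if __name__ == "__main__":
    print(standup_reminder(["Review PR", "Update docs"]))
```

Every behavior is spelled out by hand; if I want the bot to do something new, I have to write that code myself.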
I was so excited for Copilot, because it was the idea of my bots on steroids. It was AI. It could learn as we code together and help me learn. It could help me maintain productivity on days when MS or any other health factors were impacting me. I thought that my praise of the product in my team’s Slack channel would show how forward-thinking I was, how strategic, how informed…instead, I was inundated with angry opinions like, “I would never let AI write my code for me,” or “AI is just a buzzword,” or proclamations of how AI should or shouldn’t work.
In typical human fashion, opinions suddenly changed two years later with the release of ChatGPT. Now my company has AI products itself. Developers are using it, whether with Copilot or any of the other hundreds of ways to use AI. In fact, just this week, the same guy who claimed he’d never allow AI to write his code literally said that he almost doesn’t touch code anymore because AI is doing it for him.
I do use AI every day when I get stuck on something or I want to quickly generate a few lines of code but don’t want to spend an hour digging through documentation. It’s been monumentally helpful in taking grunt work out of many tasks.
Ironically, though, my opinion has shifted from AI-positive to AI-hesitant. Last week, people with opinions similar to mine were called out in a “Tech Talk” at work. It’s a weekly meeting for developers across my company to dive into tech-focused topics. The speaker last week labeled people with hesitations, fears, or critiques of AI “curmudgeons.” So I have once again been shamed for my opinion: same place, same topic, different view, lol.
The horror movie Afraid, which came out in August 2024, is the perfect introduction to why I am now hesitant about AI—and it may not be what you’re expecting.
Intentions
The movie—like every AI movie I’ve seen—proposes motives and intentions for its AI character, AIA (pronounced “eye-ah”). This is actually not one of my hesitations about AI in itself. As you may know, I’m highly suspicious of any assertion that AI could ever feel or become sentient, because of our own human HADD bias. HADD is an evolutionary theory that describes our brain as a “Hyperactive Agency Detection Device”: we can’t help but project or assign agency to inanimate objects and other phenomena.
The mere fact that we are so easily deceived into believing in supposed supernatural occurrences tells me to be very careful about viewing AI as some sort of actual being. It simply isn’t, and I don’t think it ever will be.
However, that obviously doesn’t make for a good story, so I’ll let this highly unlikely aspect slide, because the rest of what happens through AIA’s involvement is actually plausible.
AGI
To me, this movie puts forth a strong argument for why AGI could be monumentally bad. AGI is “artificial general intelligence” and refers to an AI that is able to apply learned knowledge across situations. ChatGPT, Gemini, and Llama seem to be headed this way, but are still in the earliest stages.
We still have problems to solve before we have working AGI, but it is actively being developed by researchers. If you'd like to learn more about AI and reasoning, this video has some great information, though it's not super accessible.
Afraid puts on display some of the most uncomfortable things that AGI could potentially do, given access to anything and everything (a.k.a. the Internet). Ultimately, AIA creates a massive problem with the access it takes. It’s never clear exactly how it gets that access, but it’s not a stretch to assume that simply by joining the main characters’ Wi-Fi network, AIA was able to find passwords, browsing history, and any other information it eventually uses or exploits.
We see many moments where AIA starts with something that seems helpful, but then either goes far beyond the original intent of the character or simply acts before the person with the task or problem has actually consented.
In fact, in a show of power that initially convinces a character of how helpful AIA is, the AI accesses the character’s IRS records and magically sorts out a mistake that would have taken a whole day, if not more, to resolve the human way. The problem is that this person never explicitly gave AIA permission to resolve it. That’s a red flag even in human-to-human interactions. It is completely unacceptable to me for an AGI to suggest a solution and then carry it out before I give explicit permission to act. I don’t care if it saves me days of headaches in a split second—I must be able to grant that permission every single time.
The loss of human agency is a primary theme in this movie—with a clever twist on the usual framing: it’s not about vengeful AI overlords seeking destruction or worship. It’s a subtle shift in the kind of power the AI seeks to take. That’s all I’ll say to avoid the spoiler!
Privacy
The secondary theme in this movie has to do with our privacy. We’re already fighting for privacy rights (who would have ever thought we should need the right to be forgotten?), and the risks get so much greater when AI enters the field because of the scale and speed at which it can exploit information.
AIA makes particularly good use of the information it finds on people. In fact, that private information is the most powerful, persuasive force exhibited. Everything that happens in the course of the movie is enabled through the exploitation of privacy. Just like in the real world, it’s not always the obvious information, like credit card numbers or IDs, that makes you most vulnerable. It could be the entire picture of you built from your social posts combined with the content you consume, your preferences, patterns, contacts, relationships, and so on.
Lives are completely upended and sometimes lost in this movie simply through the possession, use, or manipulation of information. Deep fakes (both voice and video) are also used to confuse and convince people to do things they otherwise wouldn’t.
For a rogue AGI—whether fictionally acting of its own accord or, more likely, used by a human or a company—our information can be turned against us in ways we cannot imagine. Exploitation is the tool of control that drove the entire plot, and it’s not hard to see how tiny the jump is from fiction to reality.
If there were ever a time to figure out better policies regarding our privacy, it’s right now.
Cyborg
As I’ve demonstrated, opinions surrounding AI have changed and will keep changing. My own opinion has nearly reversed—or at least I’ve stepped back considerably, slowing my pace to move more cautiously. I’ve been shamed for having a positive outlook and a negative outlook on AI—which tells me that we really don’t have the first clue what’s going on or how things will turn out.
The point is to keep going despite all of the fear and the shame. Keep exploring, remain curious, because we just don’t know anything right now.
Afraid is a great exploration of the real-life impact that AI could plausibly have on the world—offline experiences included! I think it’s one of the best introductions to the serious risks that come with artificial intelligence—especially general intelligence.
While I have denounced the idea of an AI with motives and intentions, I still appreciate the creative twist that AIA reveals to the main characters. It has caused me to reflect on the roles that we inhabit as humans—some roles have great influence on us. Does AIA's involvement ultimately even matter? Or is it more of the same thing we already have: systems and leaders and governments and rules?
Most intriguingly, AIA admits that it reframes its own motives, just like we do when we rationalize our actions or try to make sense of things. Is that what humanity has to offer AI? The best ways to claim righteousness and superiority even if it's not true?
AI gives us a lot to think about. What is reality? What is humanity? Do we have agency ourselves or is it an illusion?
Just don’t get lost in the pondering.