December 2, 2025

The Most Dangerous AI Movie

It’s no secret that I have considerable concerns about Artificial Intelligence (AI), most of which are tied more closely to the creators of AI than to AI itself. Even so, I was surprised to find a movie that shocked me so thoroughly that I asked myself: is this movie propaganda? Or is it just a poorly crafted story?

The plot of this particular film was dull and predictable, though according to some reviews the action was great (I found the action entirely forgettable and equally dull). But the message was astoundingly terrifying to me, and I think it’s extremely important to address.

We’re talking about Atlas, starring Jennifer Lopez, which came out in 2024. In my opinion, it’s not really worth your time to watch, so even though there will be a few spoilers, I think it’s safe to proceed anyway ;)

Setting the Scene

Atlas, our hero played by Jennifer Lopez, is the daughter of an AI scientist. Atlas’s mother designed a human-looking AI robot named Harlan. At the beginning of the movie, all we know is that Atlas is deeply distrustful of AI in general, and she has valid privacy concerns: the robots in this universe generally need access to your mind (including memories and other vulnerable information) in order to be controlled effectively by their human operators.

Despite being a data analyst, Atlas is compelled to join a military operation to take down Harlan, the AI terrorist, before he can threaten humanity on Earth, because she has known him since she was a child. Things go wrong, and Atlas survives only by jumping into a mech-suit. On a hostile planet, she has to maneuver this robotic vehicle to Harlan’s HQ. The only problem is that the mech just doesn’t work very well. Why? This is the ultimate problem with the story, and it takes us right into the danger zone of real-life consequences.

Mech-suit in flight, shooting at something with guns mounted on its arms.

For whatever reason, it is the invasion of privacy that makes the mech fully operable. Sure, super basic tasks like walking are fine. But if you want to survive an attack by hostile robots, you suddenly have to choose to give away your entire life: your memories, your fears, your thoughts, just so a military robot suit can actually fight.

This is the only part of the movie with any basis in reality, because we already experience this coercive dilemma with our software. You must choose to give away personal information—information that can harm you over time through exploitation, that can be sold without your permission, that could put you at risk through fraud or theft or any number of creative schemes if a bad actor gets ahold of it. It’s never framed this way, obviously, because then you’d be more likely to at least read the privacy policies or user agreements before buying some tool or service. Even worse, we can be easily manipulated through our natural desire for connection, as well as through peer pressure. Why quit Facebook when I’ll lose contact with tons of people? Never mind that Facebook doesn’t really show my friends’ posts anymore, and uses all of my data to train its own AI, which is designed to further keep me from finding my friends’ posts because I’m too distracted watching AI-generated videos or rage bait.

Privacy Schmivacy

Maybe sometimes privacy doesn’t matter that much. Who cares if YouTube or a streaming platform uses your watching behavior to pick content you’ll like from its impossibly deep and ever-growing library? Set aside the fact that these platforms don’t necessarily rely solely on behavior tracked on their own platform, since they can buy data or gather it from other sources. Privacy violations put more at stake than serving you better ads.

At what point does it end? At what point does a tool, designed to save your life, need access to your thoughts?

Going back to Atlas in her mech: she ends up in a fight with enemy robots intent on eradicating her and her AI-powered suit. It’s only when this life-threatening situation arises that she gives in to the AI and grants it access to her mind so that it can work as it should and blow up all the robots. Why does a mech suit, designed to assist the military in exactly these situations, require intimate access to a human’s mind before it will destroy an imminent threat? This is a one-way relationship, the kind with another name: abuse.

What I absolutely despise about this movie (and about the design of generative AI more generally) is how the AI disingenuously convinces Atlas (and us viewers) to trust it, as though it were some benevolent, innocent machine with no agenda. In the movie, the AI in the suit (named Smith) repeats this phrase when encountering Atlas’s dead comrades:

“Peace to the fallen.”

It sounds sweet, like a mechanical person recognizing the humanity in itself and mourning the loss of life. And yet, just like our own real-world AI, it has no stakes: it is not embodied, it is not conscious (despite putting on a show as if it were), and it is nothing more than a manipulation machine.

Just as beautiful phrases, proclamations of love, and even apologies can come from manipulative and abusive people in the real world, an AI is not exempt from pretending in order to shape behavior toward its goals. And, for thoroughness, you don’t have to be conscious to have goals. If a model is trained to win acceptance of a privacy policy, it will work toward that end.

What’s worse, in the movie Atlas has a secret: she gave up her privacy to an AI before, and it cost her mother her life. As a child, jealous that Harlan got all of her mother’s attention, Atlas let Harlan into her mind despite many parental warnings. That access to human knowledge (even a child’s knowledge) helped Harlan conclude that humans are too dangerous to themselves and everything else, and therefore must be exterminated. Thus began Harlan’s war against humanity.

The abusive systems in this story’s world combine with our real-world abusive systems, which manifest as subtle sexism in the flimsy plot of this film. Our hero is female, sure, but that doesn’t mean the movie carries a good message about women. Instead, it’s a veiled metaphor for the uselessness of women in the eyes of society. It’s not Atlas’s leadership, her skills, her ingenuity, her handling of the mech-suit, or even her special knowledge of Harlan that keeps her alive and helps her triumph. No, it’s another entity (the mech-suit powered by a male-sounding AI) that forces her to surrender everything she feels and thinks. In return, the mech-suit gets to be the literal knight in shining armor that sacrifices itself to save her from every conflict. Even in its self-sacrifice, the AI is entirely unharmed, because (spoiler alert) when the mech-suit is destroyed, it is simply remade, and the AI is already backed up and reinserted into the new suit that Atlas climbs into.

All combined, we have a movie that inadvertently (assuming the best intentions) celebrates the invasion of privacy, makes consent seem like a dangerous burden, and makes sure a woman can’t do anything impactful on her own—even an AI is a better hero than a woman. I entirely missed how demeaning this was when I first watched it, but as I looked back over my notes and pulled my thoughts together, I couldn’t escape the conclusion that the movie is dangerous precisely because of all the subtle messages just beneath the surface.

Cyborg

If systems are ultimately just collections of decisions, as I’ve asserted before, then technology is also bound by the decisions that make and surround it. Tech may appear to be innocent—free of any bias or prejudice, because tools are tools. Does a calculator care if it’s being operated by a person of color? A woman? At some point, it may, and that’s the devastatingly scary thing. But even if no prejudice can be found in a tool itself, the situations in which it is used are still shaped by the surrounding systems.

For example, if a macro-level system (a "parent system") values efficiency above everything else, then the tools created and used within that system will align with that value. Put such a system into a time-warp where we can watch it develop, and maybe we will see AI used to eliminate every possible job for disabled people, because disabled workers are judged less efficient than the AI. Maybe after that population can no longer work, we see other populations weeded out. Teenagers are too unreliable and slow because they haven’t developed expertise in anything, so they’re gone, replaced by robots. Then college grads, then junior positions, then senior positions… Now these tools that are supposedly unbiased have still caused deep harm simply because of the system in which they were used. Contrast that with a system that values human flourishing above all else. What might a “neutral” tool do then, on a large time scale?

Parent systems—be they societal or economic—inform the child systems created within them. The “child” systems inherit the values and incentives of the “parent” systems, and when those systems carry problematic decisions, they reproduce those decisions all the way down. Something as complex as artificial intelligence, or something as simple as a poorly written movie about AI, can accidentally express the problematic, inherited values of its parent systems. Just as you have a choice to be or not be like your ancestors (neither your genes nor your upbringing determines your worth, your choices, or your destiny), tools don’t have to express those inherited issues, but they do have to be actively adjusted against the problematic values until the parent system has been updated and enough repairs and upgrades have been made.

Consent is not something that should be coerced or ignored—whether in sexual situations or in other intimate scenarios like access to personal information. Lack of consent, or forced pseudo-consent, is dehumanizing regardless of your gender. Technology cannot be allowed to develop in a way that treats humans as resources to exploit, but that is the ideal, not the reality.

Even if the writers of Atlas just wanted to make a cool movie and harbored no intention of communicating the messages I’ve critiqued here, the damage is done. This is the hardest part of living with systems: they are usually invisible. Realizing you did the wrong thing despite good intentions is painful, but awareness is what makes a real impact on the world and on future generations. Awareness helps reveal those invisible forces in our lives, and once they’re revealed, they can be changed. We can endure these dangerous times and lean toward empathy. Critique and correction are part of the healing process.