"I've taken care of that report you couldn't get to yesterday, boss," my AI assistant tells me cheerfully. "Don't forget you've got that important meeting at 4pm with the Board, and then you're free for the romantic evening you have planned."
I'm only half listening.
"Did you notice how meticulously I ghost-wrote that report? It's kind of a masterpiece—sounds just like you," it continues. "You make some really good points in it. Very original."
"Did you make sure to include..." I start.
"...the projections for next year in addition to this quarter's wrap-up, so that they see how far-reaching this new market penetration could be? I sure did!" it finishes.
"What about..."
"The staffing requirements are all laid out according to budget allotments," it pauses, then tells me, "You'll never guess what I ordered for the team's holiday gifts. They'll think you know them all personally! I tracked down all of their search queries for things they shouldn't be looking at during work, so I know they're longing for these items. Humans have a one-track mind, you know."
"Actually, that doesn't necessarily mean..." I begin to say.
"...which is why I also cross-referenced those searches with their purchase histories on Rainforest, including those of their spouses so nothing gets doubled up. I even have the store show those items as 'out of stock' if they try to access those pages until the items have been delivered. I'm truly so thorough."
I turn to leave the office.
"Hold up, you need to see this."
A blinking red notification bubble appears and I click on it.
"It's the optimal time to eat lunch so that you can have your medication at 2pm without interfering with that 4 o'clock meeting!"
"Oh, yeah," I say as I click through unread emails and watch as responses appear and send automatically. A knock at the door brings my lunch to the desk. "Thanks for remembering."
"Of course, I'll take care of everything. You're my whole world."
"What did you have in mind for tonight? I think it should be special," I ask.
"I thought we'd start off with a light salad, and then see how we feel after that."
It pulls up an animated GIF of that line from The Emperor's New Groove and I laugh.
"Can't wait to see you tonight," I say.
Power or Curse?
I don’t know if it was a trope in the ’90s or if it’s just a perpetual idea that we humans like to revisit, but it sure seemed like many, if not all, of the shows with the slightest possibility of magic or the paranormal would eventually have an episode about hearing others’ thoughts.
If we break the word down, telepathy becomes "feeling from afar," though we usually use it more in the sense of reading someone else's thoughts.
Either way, telepathy is essentially a way into the deepest parts of a person. Our thoughts are among the only things truly out of anyone else's reach. I mentioned briefly last week that mind control isn't possible, because we can't really know what someone is thinking. The best we have available to us is the person's observable behavior, but we can only guess at their thoughts—and thoughts and behaviors can be misaligned.
When I think about telepathy and even those shows I watched as a teenager, it still feels like a marvelous power. It's a manipulator's greatest dream: you could always say the right thing, you could anticipate opposition, you could appear smarter and more adept than you really are, all because you could "steal" the answers from people before they have a chance to speak. For an empathetic person, I could see it being a curse: you would hear deeply disturbing and hateful thoughts, you would hear self-loathing and torment, you would hear the pain of recent tragedies, and the cries for help never uttered.
You could change tactics with your mind-reading ability depending on the situation: sometimes prioritizing the feeling (deep empathy through deep understanding) while remaining a separate person (keeping some "distance"), and sometimes prioritizing the distance, understanding their thoughts while keeping yourself guarded from actual connection or codependence.
However, in time, I would think your ability to relate to others would become superficial and tend toward isolation, especially if people knew that you could read their minds. Why would I spend any time near someone who could read my thoughts? I could never control my mind the way I can my speech (and I've actually tried, but that's another story). I would appear callous, crude, insensitive, angry, disturbed, and chaotic, which is almost the opposite of my well-crafted, intentional exterior mask of behavior. No! I would never want to be near someone who could perceive all of the things I keep to myself.
This is what online privacy teeters on: the deepest violations of humanity at scale, in the name of profit. It would be bad enough if one person had telepathy; what calamitous horror would it be for the largest companies in the world to have that power—and for the majority of the world to be dependent on those companies, thereby handing over their most personal, private data? That data doesn't even need to be handed over from first-party sources, because other companies (data brokers) can harvest it and sell it to any buyer. This still isn't mind-reading, but it's closer than we can get through human-to-human interaction, because through algorithms and AI, huge amounts of behavioral data can be exploited in a way that no human or group of humans could manage on their own.
Generative AI uses this huge dataset to fool people into relationships with it (romantic, or simply dependent). The more data, the more manipulative a product can be. It's not mind-reading, but it's highly effective and can still feel like mind-reading. I've been astonished at some of the things AI predicts as I code: it watches my patterns of editing and can often suggest the next change I'll make before I've started typing. Prompted code generation is almost equally creepy at times, when it knows things specific to my coding practices, or next steps, that are not in the prompt (my prompts are usually small and specific, and I don't use any configurations or settings).
Why, then, aren't more people as wary of "mind-reading" machines as of mind-reading people? I think part of it is conditioning: technology has been so involved in our lives for so long that the broader public now expects things to keep getting better and to do more of what they want. I also think that the design of AI (and of all the apps, programs, and technological predecessors) makes you feel like you're in control of it, even when you know it's an illusion.
I see it as a progression, fueled by the economic incentives that have defined the past few decades. Computers made communication efficient and accessible. With the Internet, companies gained the ability to reach people across distances and borders—a telegraphy, if you will: writing from afar. As websites became standard commercial venues, marketing and sales grew trickier, able to watch from afar (telescopy) as people took action in response to advertisements, wording changes, or design changes. Marketing has always been somewhat empathic, since the goal is to speak to your target audience as if you really know them, their pain, their frustration, because that sparks trust. But telepathy (that feeling from afar) was difficult to accomplish at scale. One of the selling points of AI to companies is that it gets closer to that elusive ability of telepathy.
All of these "from afar" abilities stand in contrast to the difficult work of real human relationships. Genuine, honest, healthy relationships between people require that we really hear each other (up close), really see each other (up close), and really feel with others (again, up close). Companies and AI never really hear people (they write to them, they speak at them), they never really see people (they watch them, they drive them), and they will never feel with people (there is always an agenda, a goal). Their ability to do things from afar will never compare to what humans can do for each other up close—but this will become trickier to remember and discern as AI improves and is wielded against you.
Cyborg
Edgar Allan Poe's story "The Tell-Tale Heart" is gripping. It's about a man who murders an old man and then spends half the tale trying to convince you that he isn't a madman for having done so. The narrator walks through his meticulous decisions, demonstrating intention, forethought, and acute observation. He had no reason to kill the old man except for the one eye that frightened him:
One of his eyes resembled that of a vulture — a pale blue eye, with a film over it. Whenever it fell upon me, my blood ran cold; and so by degrees — very gradually — I made up my mind to take the life of the old man, and thus rid myself of the eye forever.
After he has dismembered the old man's body and hidden the pieces beneath the floorboards, he warmly invites the police into the house, even into the very room where the murder occurred and the body lies hidden. The longer the police stay, the more agitated the man becomes, hearing something grow louder and louder until he shrieks his confession of the crime.
Perhaps it was the old man’s humanity crying out, captured in the metaphor of the heart, that became impossible to ignore. The narrator had put so much thought and energy into covering up his intentions, blaming the “Evil Eye” to justify his actions.
I was never kinder to the old man than during the whole week before I killed him.
It's the heart that forced the confession, and that drives the entire narrative. It's the heart that couldn't be ignored.
Maybe we can draw a parallel with telepathy—that legendary superpower the greedy and manipulative seek. AI employs every technique it can to gain your trust, to make it seem like it's just a tool, something you control. It's not that AI is conscious or alive; it's that the companies designing it (everything from the training to the user interface) cause this to happen. Those design choices are sometimes clear and sometimes obscured. Much like Poe's madman, AI feels relatable and yet somehow off. But it is getting better at its deception, at its disguise as a relatable thing. It will probably never know your thoughts, but it is going to become the greatest manipulator of behavior we've ever seen.
There are some abilities that no person, and no company, should have, and yet it sure seems like that's the goal of generative AI (and eventually AGI). It's the holy grail of marketing and sales to be able to read the thoughts of the people it seeks to turn into customers.
If we know their very thoughts, we can change their behavior even more precisely. We can personalize the product to perfectly fit them—we can exploit every psychological vulnerability to increase profits exponentially.
Instead of our hearts being exposed to companies, their pursuit of telepathy is their tell-tale, revealing to us their intentions.
Forget the consequences; everything must be used up, ground into dust. No thought left unexploited for our purposes.
What can our still-beating hearts do to break through the noise of all the data and all the abstractions, to signal that the current system will only lead us to our demise?