August 19, 2025

What You Don't Feel Might Hurt You

In January this year, after the current U.S. president had been sworn in, I looked outside at the sky. I was very upset, because all of the warning signs in my brain were painting a picture of intense suffering and destruction across the nation and even the entire world. Trying to imagine the future of the U.S. was horrifying, and there was nothing I could do to stop it.

"It's bizarre! I look outside and everything looks fine—normal. It doesn't match," I told my wife. I couldn't wrap my head around the serene view in front of me in such sharp contrast to what I knew was coming.

Note that it is absolute privilege and luck that I was still able to look at the sky at all—I recognize some people faced immediate threats, dangers, and life-altering situations the moment the new president was sworn in.

My point here, however, is to examine a small but dangerous trap being laid around us: desensitization. The more I hear about inhumane and deranged policies or practices perpetrated by those in government without being directly affected, the more I notice the numbing. There has been so much going on in the world that my feeble mind and body cannot grasp everything—cannot feel everything.

It is the same with artificial intelligence. I mentioned the predictive document, AI 2027, last week, and there was a subtlety in it that reminds me of exactly what is happening in politics. We have a set of companies doing AI research and training that are currently doing everything possible to speed up: increase revenue, increase AI capability, increase market domination. Once ChatGPT was introduced to the public, everything changed. For many of us (the privileged ones, once again), the AI chatbots and image generators were just a neat toy—we didn't feel much different. We could say, "AI still can't do XYZ," or we could pronounce that everyone else just needed to "catch up and learn the new skill of AI management." We assumed that because we didn't feel a threat, there wasn't a threat.

That is perhaps the biggest concern I see in AI 2027: how we are all underestimating what AI is capable of, and how we don't feel the urgency to put restrictions and regulations in place. We don't feel threatened, and by the time we do finally feel it, it will be too late.

The public conversation is confused and chaotic. Hypesters are doing victory laps. Skeptics are still pointing out the things [AI] can’t do. Everyone knows something big is happening but no one agrees on what it is.

—AI 2027, under heading: July 2027

Cyborg

I'm stopping short here (rather literally), because I'm a bit confounded. I know that we must change the conversation around AI. I know that many people are already in the desensitized state where the threat doesn't feel real or doesn't feel applicable to them. But how do we change the conversation?

If I've learned anything from engaging in the political sphere, it's that showing up consistently matters more than doing one huge thing once in a while. Maybe I'm too insignificant by myself, but the idea of "death by a thousand paper cuts" goes both ways: I'm being attacked from all sides, but I can attack back in a similar way. Consistent action over time on one thing can make a difference, especially as more people join.

Perhaps the take-away today is to reflect on how you can change the conversation with the people you know. How might the usual AI hype at work be tempered? How might we remind people not to dismiss or devalue "AI doomsday-ers"? How might we introduce nuance, humanity, and emotion—those things that seem to be absent from the conversation?