> Most of the humans at OpenBrain can’t usefully contribute anymore. Some don’t realize this and harmfully micromanage their AI teams. Others sit at their computer screens, watching performance crawl up, and up, and up. The best human AI researchers are still adding value...
>
> These researchers go to bed every night and wake up to another week's worth of progress made mostly by the AIs. They work increasingly long hours and take shifts around the clock just to keep up with progress—the AIs never sleep or rest. They are burning themselves out, but they know that these are the last few months that their labor matters.
>
> — AI 2027
I recently attended a 30-minute "lecture" by a Ph.D. who was trying to make sure we were all on the same page with AI. He started by saying something to the effect of:
"You've probably felt worried about your jobs; that they might go away at some point. You need to get over it."
Admittedly, I used to share the same confident bravado. I thought that AI was the answer to all of our problems. I thought it would be the enabler of the disabled, the bringer of wealth to the impoverished, the revealer of truth amidst the false. I thought that people were really the problem: if they just had more technical skills, then it would work out just fine. AI was going to be a rising tide that lifted all boats, which meant that even if your job "went away" you would just get a new job—a better one!
I'm not saying this particular lecturer is wrong or that their statement matches my own misguided thinking. I do think AI is a strong threat to our way of life in our current economy, and I don't know how to imagine a concrete future in different circumstances or in different systems. I'm at a loss for how to move forward besides following the same rut that I already inhabit.
However...
I don't appreciate how this lecturer dismissed the fears and seeming inevitability with the advice to "just get over it." You're offering us our current world but where no one can work to provide for themselves? That's the future you just painted, and that's the future we're supposed to just take?
This is not the world I want, nor the future I will accept.
"It's inevitable."

"We're inevitably marching into a fascist future."
There it is: "inevitable."
What if it wasn’t inevitable? Why do I just have to “get over it”? What if our path doesn’t take that route? What if what we do right now—not what we watch, but what we do—could alter that supposedly inevitable destination?
As I understand it, the very act of questioning inevitability is exactly the thing that undermines it. For example, in his book The Nazi Mind, Laurence Rees describes how various people were "converted" to the ultimate cause of the Nazi movement, including Joseph Goebbels, who later became one of Hitler's closest confidants:
> Above all, it was the certainty with which Hitler expressed his vision that was the basis of his successful subjugation of Goebbels. Rudolf Hess, a leading Nazi, realized how important this quality was for the leader of the Nazi movement. He recognized that Hitler must not weigh up the pros and cons like an academic, he must never leave his listeners the freedom to think something else is right. The great popular leader is similar to the great founder of a religion: he must communicate to his listeners an apodictic faith.
If fascism and authoritarianism want compliance and obedience, then questions interrupt that uniform line of compliance. If the system requires that we be powerless, then it must make us feel powerless, because that's the only way to exert control over huge collections of people. We are our own best prison wardens, because we believe the stories we tell ourselves.
Is it inevitable? Yes and no at the same time.
If powerlessness and compliance are the ingredients to inevitability, then we have to ask the questions and make the subtle changes that will grind the efficient march to a slow and sloppy motion.
Cyborg
"What if?" is our easiest question, but I like to take that and reframe it towards imaginative action, rather than just imagination. My favorite question starts with: "How might we...?" I stole this question-starter from, ironically, the book Sprint by Jake Knapp, which is about prototyping and designing for successful products. Perhaps by using this action-first prompt, we can prototype a future that is preferable to the "inevitable" one we're told is coming.
- How might we elevate human artists over AI slop?
- How might I change the feeling of powerlessness for me and my closest friends?
- How might my family and I communicate without relying on social media apps?
- How might I reduce the amount of money I give to businesses that support groups that are in direct conflict with my values?
- ...insert your question here.
While the lecturer may have had a point that fear and despair are not especially helpful, "just get over it" is equally unhelpful. AI companies, fascism, and all other abusive systems want the same thing from you: to stop thinking. Fear does not lend itself to higher-order thinking and rationality, so that state is indeed unhelpful and furthers the goals and agendas of abuse. Likewise, distraction and endless, mindless entertainment prevent us from thinking critically; I personally feel that content feeds subtly drain my energy, removing the motivation to take time to think and to question.
We don't need a single savior to lead us out of the inevitable rubble. We need all of us nobodies to shake the dust from the systems and sing out for humanity.
We will not lose our agency.
Resources to Explore
- Your Undivided Attention Podcast: The Crisis That United Humanity—and Why It Matters for AI. "In 1985, scientists in Antarctica discovered a hole in the ozone layer that posed a catastrophic threat to life on earth if we didn’t do something about it. Then, something amazing happened: humanity rallied together to solve the problem."
- AI 2027: AI forecast featuring a predictive timeline of events through the year 2027 (we're already almost halfway through...). This is based on experts' analysis and opinions in an attempt to help us recognize warning signals and ways that we might be able to steer AI towards a future that we do want to inhabit.
- The Nazi Mind: Twelve Warnings from History, by Laurence Rees. A historical examination of how the Nazi movement gained power. This is an excellent read for context and the psychological factors that influenced the infamous and horrifying regime. As Rees asserts early on, we don't learn "lessons" from history, because there's never an exact replica of circumstances, but there are warnings from history that prove to be exceptionally relevant.