I'm a creative problem-solver.
—every designer's, marketer's, and slightly-tasteful-developer's personal website headline.
There's nothing that's not upsetting about Hitler's rise to power, but as I've been working through Laurence Rees's book, The Nazi Mind, there is something I found surprisingly upsetting: many smart, capable people helped. It makes me shudder to hear about brilliant academics like Joseph Goebbels, who became the mastermind behind Nazi propaganda, and the ways they contributed to the regime. As I've listened to the various people and circumstances that came together to produce the horrendous takeover, an observation started to haunt me: sometimes problem-solvers are solving terrible problems.
Consider this snippet of history where Paul von Hindenburg, President of Germany at the time, finally decides to appoint Hitler as Chancellor of Germany. Rees summarizes:
Hindenburg was finally persuaded and he made Hitler Chancellor on the 30th of January, 1933. And while it's true that he had concerns about the man he was appointing, the bigger picture was that he had at last achieved his aim: a government that would hopefully bring stability, even if the cost was the destruction of democracy.
Already, we have Hindenburg—a war hero with considerable leadership experience from his service in the military as well as in politics—willing to lose democracy for the right outcomes.
There was undoubtedly a tendency to group-think in Hindenburg and the elite that surrounded him. This psychological phenomenon occurs when members of a group convince themselves that they have reached the correct solution to a problem, even though they haven't properly considered all the negative consequences and potential alternatives. It is particularly likely to occur when decisions are made under stress and when there is a lack of diversity among members of the decision-making group. That was certainly the case here. Hindenburg and his cronies—all of whom came from the same elite background—failed to think about the consequences of appointing Hitler as Chancellor. Instead, they conned themselves into thinking that they could control him once he was in office.
Smart and experienced people made decisions that ultimately contributed to one of the most heinous, violent regimes in recent history.
The Problem-Solver
It's hard for me to critique this, because I built a career on problem-solving. I take great pride in being able to take a set of conflicting goals and wishes from marketers and find a way to make it all happen in code or design. And yet, there are all kinds of problems in the world. Some are pro-social to solve, or even to pursue, like the problem of curing cancer. Some are anti-social, like the problem of getting antisemitic policies put into place with public approval in 1930s Germany.
One aspect that muddies the waters here is that many people are abstracted from their impact. For instance, sometimes we just have a job because we need a job. Maybe this job doesn't directly harm anyone, but it furthers goals that are ultimately destructive. Maybe we don't even know about those goals because the company is so large that we may never learn it is donating to lobbyists who work directly against our personal views or freedoms.
Because this is a tough one, I'll borrow a phrase I heard from a panelist on an episode of the Mormon Stories Podcast: "Go easy on people, go hard on systems."
The word "problem" carries with it some negative connotation. It's something that needs to be fixed; solved. It's likely frustrating or painful. If there wasn't a problem, we wouldn't need to worry about it, so the ideal state of a problem is when it is removed—and we love the people who remove the problem! Even when the problem is not necessarily bad, like a math problem in school (ok, maybe still bad), we still approach it with the intention to resolve it.
For many of us Westerners, it's very similar to the concept of a question. Watch the next time someone asks someone else a question, regardless of the situation. We can't leave a question unanswered—even if our answer is poor. Both a question and a problem are functionally the same: they encourage us to change their state from unresolved to resolved.
We may identify problems in any and every aspect of life, and we prefer to set people loose on these problems to at least work on them, even if they can't fully solve them. The problem itself, however, is amoral—it is neither good nor bad, just some kind of unresolved state—but its impact, and especially how it is resolved, can be of moral concern. The loss of democracy is an anti-social deficit to the majority and a benefit to a tiny group of people. A problem for oppressors, solved by the oppressors, is not a good problem to have solved.
My point here is that we must not take our problem-solving out of context. It's never enough to solve problems—anyone and everyone does that. We have to bring the wisdom of application into these issues and choose when to pursue resolution and when to change the problem itself.
Cyborg
In terms of technology, specifically, I think there is a link between our AI use and the problem-solver problem. Rees briefly explained group-think as the psychological phenomenon of reaching bad conclusions despite having many brains working on something together, and I'm concerned that our use of AI may, in some instances, be group-think at scale.
This is just an exploration of the concept, but if LLMs/generative AI are ultimately prediction machines that provide a likely accepted answer to a prompt, where is the diversity of thought that is required to combat group-think? A tech bro might insist that the diversity of thought comes from the huge set of training data, except that despite this large set, AI is still biased against race and gender. What if these models are ultimately group-think models, because of their large training set?
The vision of tech optimists is that AI will become superhuman in knowledge and power, but that is contradicted in a lot of ways by the fact that it needs data to provide relevant answers. It technically can't come up with something new, because it tends toward convergence—the most likely accepted answer. Humans may tend toward the same thing, but we also have a "data set" that is well beyond the reach of AI: human connections.
Just a few hundred years ago, human slavery was an acceptable, mainstream practice (in many places, anyway). The rhetoric and conceptualization of slavery as acceptable would be convergent—AI in that setting would advocate for slavery. But we changed. We broke out of the convergence and instead chose to diverge into a different conceptualization of the world and of human rights. What computer program could ever do that? How could AI ever "realize" it was trapped in group-think or harmful patterns of thought? By its very nature, I would argue, it is impossible.
I think questioning generative AI is the best thing we can do given that businesses are scrambling to shove AI into every product, every experience, and every message. Convergence can be a strength and it can also be a weakness—it depends. We can be problem-solvers that judiciously select which problems we choose to work on. AI cannot do this; it may have safety constraints that make it say, "I probably shouldn't help you with that," but it cannot access actual wisdom. AI is a context-unaware problem-solver.
Given what atrocities can be achieved by context-aware human problem-solvers like Joseph Goebbels and many, many more in Nazi Germany, what might AI do as it gleefully helps solve any problem someone puts in front of it?