When I was first diagnosed with Multiple Sclerosis (MS), it took almost six months for me to start to recover from my MS attack. During that time I became acquainted with a question that still looms over me: Where is the worth of a person?
That's the more finalized version of the question. It started out with a lot more judgment and ableism. My prior model of the world involved measuring my worth by my productivity and achievement. I was only worth as much as I could learn, do, and produce—and it wasn't enough just in comparison with myself, it was all about how much better I was in comparison with everyone else.
When the MS attack took away my ability to think clearly, to move in a coordinated way, to do anything that I was normally able to do, I could only see myself as worthless. The question I had was much more like, When am I too worthless to be here? Eventually, however, I started challenging that first iteration of the question because it was harming me and I was desperate to find a better way—a healthier way—of thinking about my newfound condition. Many conversations with many people led me to adopt an assumption first, and then ask my question to see where it would go.
The assumption was: humans have inherent value. If that were true, then how could you have any worth if you couldn't do anything? I was still thinking in terms of ableism and productivity, but the question was at least pointed the other way: instead of trying to determine how you lose your value, I was searching for how (or whether) you maintain value, even when you can no longer contribute.
This was where I landed for more than five years (and I still find myself falling back to this version of the question when times are tough). I get stuck on "contribution" as a function of value: the more you contribute, the more valuable you are. But that doesn't explain why people love and care for others who are extremely disabled: bedridden, paralyzed, even comatose. I felt like I had to understand the extreme cases to console myself and find human worth.
Recently, however, I noticed that my question was still flawed—influenced by a system that does not prioritize our humanity, but attempts to suppress it.
The AI Question
I've seen so many headlines and assumptions that say that Artificial Intelligence (AI) challenges what it means to be human. There has been unprecedented progress in generative AI's abilities to produce good work. We, somewhat collectively, do feel the threat of replacement looming as tech bros proclaim that AI will replace the majority of knowledge workers in just a few years.
But is AI really challenging what it means to be human?
I readily admit that I tend to be contrarian—highly suspicious of collective "wisdom" or conclusions that are shared on viral videos or memes. So when I see that the assumption behind this question, as put forth by journalists and creators, is that humanity is as simple as our productivity at work, I start to wonder if we've got the wrong question in our minds.
Just like my journey with MS and my struggle to understand my worth, I got stuck on a poor question. It did not propel me forward; it kept me confounded. The AI question and my second question are the same: is the worth of a human tied to their productivity?
That question puts us on a path that tries to insert humanity into that which is absent of humanity by nature. We've discussed HADD Theory before, and I still find it the most compelling theory to explain why we squabble about whether these chatbots are conscious—our brains are designed through evolution to notice and assign agency to things even if it's all imagined. I bring this up because I think that the question of AI consciousness is a dangerous distraction: a question that gets us stuck when there are real, pressing problems that should be addressed before we try to figure out if AI is alive.
If we get stuck on determining whether AI is a kind of human, then we will be on the lookout for evidence. We will compare our humanity to what can be done on a computer. If AI can do it, and can sometimes do it better—then where does humanity fit? The question is faulty because, again, we're assuming that our productivity is what makes us human; what makes us valuable. AI replaces human productivity; therefore, AI is worth the same as or more than humanity.
But our worth is not in our productivity. Our worth is not comparable to AI in any way; it is wholly independent of any form of AI, sentient or not.
Cyborg
We have to be careful about the questions we ask, because they influence what we are able to uncover.
I do recognize that the AI question is also one of practicality: if we get replaced by AI, how do we continue to provide for ourselves in our current system? If everyone gets laid off, then how do we support anything? How do we survive?
The end of that question must be answered with policy and regulation. We must do something to protect laborers' (including knowledge workers') rights and remove power from the richest who are poised to exploit and profit the most. I don't know any other way right now to coexist with AI as a competitor in the workplace.
That said, I think we also need to move on to the more important version of the question of human worth. Maybe the next economic and social systems can only be constructed by focusing on it. For now, at least, it is how I find progress amidst uncertainty, distress, and fear. Keeping the question "where does my value come from?" in mind forces me to accept the assumption that I do have value. Divorced from productivity or skill or achievement or status or money or possessions or any other thing, there is only one explanation for where the worth comes from:
I exist.