I want to write something both short and slightly personal. This is more like a blog post than my previous writing.
Correlated Risks
The world has changed. Our individual risks continue to drop, while our correlated risks grow unbounded. Our risk of dying from sweating fever, from a random infection, in childbirth, or in some field to prevent the spread of communism among Vietnamese farmers, has dropped. In exchange we need to worry about the horrors of rogue AIs, pandemics, and nuclear war.
The trade-off has been advantageous for most of us so far; but that’s only so far. Terrors continue to spill out of the fact that we keep becoming better at manipulating matter, while continuing to inhabit the same fragile bodies. It’s simply true that the mechanisms necessary to cause global catastrophe, or even the end of humanity entirely, have ramped up significantly.
Ten years ago, if you had asked me whether we might see AGI risk in my lifetime, when contemporary software could barely handle putting a linear regression into a production system, I'd have been optimistically doubtful. Even now, as fast as AI is moving, the pain and sluggishness of building and maintaining basic software infrastructure is almost comical. That's how it goes, though; it's why people in the weeds of almost anything underestimate the risk. Each day is a painful slog when building anything new.
AI Safety and the Limits of Computation
How worried should we be about AI and our future? I've existed alongside communities that have debated AI safety for around a decade. Since back when cutting-edge AI was:
```python
from sklearn.ensemble import RandomForestClassifier
```
I was never acutely worried, but concern over AI risk seemed like a reasonable thing to anticipate. I’m not sure I ever thought there was that much work to do in what was still the early days of compute and machine learning, but out of a planet of 7 billion or so, having one or two organizations in San Francisco dedicated to it seemed like a reasonable marginal allocation of labor.
My belief that AI safety would someday be incredibly important came from my own philosophy of empiricism. We humans are organic computers in some substrate of reality. While the upper limits of what is possible through computation, energy, and the manipulation of matter seem beyond our ability to fully conceive, why shouldn't it be possible that a superintelligent AI bends the entire world to its will? Regardless of when it happens, it never seemed surprising to me that it might.
Simulation
While I wasn't worrying, I was reflecting a lot on my own life, and my own humanity. A world that could create an AI superintelligence is a world in which the philosophical fabric we all take for granted and share would be different from the mainstream one we have now.
When Nietzsche wrote "God is dead," he was referring to the disintegration of the common knowledge structure Europe shared: Christian morality. He saw that the most primitive assumptions upon which society was constructed were beginning to rot. As I came to understand more deeply the implications of where modern computation was going, my illusion that the human brain is special began to disintegrate as well.
Even with the death of God, there still remained an unjustified belief, or cope, that there was something special about us. What if it’s all just bits? As we delved deeper into the world of computational complexity, a new idea emerged. The concept that our reality and experiences themselves could constitute a computer began to feel like a reasonable way of thinking about the world.
(As an aside: when my wife and I were spending our first night in the hospital with our newborn daughter, some of my first words to her were, "You're probably wondering why you're here, and what the meaning of it all is. Unfortunately, we have no idea.")
The philosophy aside, for me personally it was an aesthetic shift in how I viewed my own life. Everything decomposed itself into information theory. I didn’t see a fundamental difference between myself and the computers we have built. The more we have pushed into silicon, the more natural this view continued to feel.
My life was easy during this time. So while this was all intellectually interesting I didn’t spend too much time contemplating what it should mean for my own outlook.
Around five years ago, things shifted when my younger brother was diagnosed with cancer. In the early days I remember we were waiting to get his prognosis in the oncologist’s office (the prognosis was excellent and remains excellent).
I have this image of a tree burned into my memory, as I watched it through the window while the oncologist quoted us survival statistics. I kept considering that tree. Was it real? Was it simulated? Was I embedded in a world of an indifferent god? Or was it a computer simulation and I was an agent who happened to feel pain? Was I programmed to feel pain? Or was it an emergent property?
The fact that I made it to age 28 without any of the true trauma of human existence is uncommon across human history. Suffering has always been central to our collective struggle. How many of us don’t have familial tragedy from the horrors of the 20th century? Looking through the history of our shared ancestors, it was a life of disease, war, and pain.
The story of our species, our ongoing story, is still one of suffering.
Doomer Realism
There is a doomer realism that has seized my generation. Depending on your cluster, you probably care more about some of these so-called existential threats than others. The commonality is the same, though: a neurotic anxiety combined with an abandonment of the fundamental human experiences.
Like all my ancestors, like all our ancestors, I'm having kids and building forward into a deeply uncertain future. I have a daughter, and I have no real clue what world she's going to inherit. I don't know that we won't all die in five years. The strange and warm American opioid glow following the Cold War felt like a promise of safety, one that put my generation into a sleepy haze. Awakened from it, they've found themselves disoriented and upset, retreating either to fentanyl or to IPAs and child-free barcades.
We also don't seem to have the tools to deal with grand existential risk. Any number of terrible things could happen to any of us, yet we don't dwell on those; instead we fixate on some hard-to-specify climate risk decades from now.
Sitting around in relative comfort and waiting for the terrible things to happen will chip away at your life. No one can promise you those terrible things won't happen; they might. It's the same way in which all of your ancestors faced a far more brutal unknown than you do, and they kept moving forward.
Things are different now, worse, from the perspective of humanity. Our shared risks now mean a bad roll of the dice isn't just the end for you and yours, but for everyone. But things are better from your perspective. Would you really trade places with any one of your ancestors? I don't have to go too far back to find great-great-uncles who died in WW1. I'm sure you don't either. Or ancestors who died in childbirth, or from simple infections.
I could go on, but you get the point. You wouldn’t. Our risk as humans is simultaneously worse and better than it’s ever been. Your ancestors looked forward into the future and pushed themselves down through your lineage while facing down far more imminent and dangerous risks than we’re currently facing.
Take it a year at a time, have kids, build a new generation to work on stranger and harder problems, and enjoy it while you’re here.
Calm Down
This post was nominally about AI, but it’s not. It’s about living under uncertainty and risk. If you find yourself making different decisions, or losing your cool due to AI, I’m not going to say you’re necessarily wrong. That would require trying to prove your probability estimates for risk are poorly calibrated, and I don’t think I could convince you of that even if I wanted to. But might I suggest that this risk is not so much greater than others we are all living with already?
For my family, the monthly blood tests and the yearly CT scans have a way of making explicit that lurching fear that's really continuous, the one we all live with every second of every day. The first six months were the worst. My dad texted me that the scan was clear. We didn't have to worry about it for a few months, at least.
I thought maybe I’d tell my coworker or something, but I didn’t. I walked down to South Lake Union. I thought maybe I’d go down there and cry, but I didn’t do that either. I just sat on the embankment and looked out onto the lake. It was warm that day, early June in 2018.
I thought: I know this is an indifferent world, I know it's some strange simulation beyond my conceiving. I don't care, though; all I know is that I like being here with the people in this world who make it worth living, worth the suffering and the risk.
Got to keep on going like it ain't the end
Got to change like your life is depending on it
It's a long time coming and we're taking it in
What a wild ruse
Thanks so much for sharing this. I often suspect that our anxieties about some future extinction of humanity (e.g., Foom, climate change, demographics) are really a form of psychological displacement of our mourning the ongoing "abolition of man". As C.S. Lewis prophesied, turning the lens of science inward on humanity itself, and on those things which made humanity unique (the use of reason and the capacity for creativity), has caused us to believe that, objectively speaking, nothing matters. Nick Bostrom described a weird and terrifying future world:
> We could imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.
We are often tempted to believe that we are already living in such a world.