The Mega-Diverse Coral Reef Future of Humanity
Interview with Anders Sandberg
Interview by Lydia Laurenson
Image credit: Adobe Stock
Published September 20, 2020
Anders Sandberg has a background in computational neuroscience and works at Oxford’s Future of Humanity Institute, where it’s his job to think about the future of humanity. Obviously, I had to talk to him about that.
LYDIA LAURENSON: So what’s your ideal future?
ANDERS SANDBERG: I dream of a Cambrian explosion of diversity. Lots of posthuman offshoots becoming their own thing and interacting. I want a future like a coral reef: Colorful fishes, some enormous inert systems that are just sitting there, some things big as whales and others small as nematode worms, weird symbiosis, and competition.
The standard picture of the future is in “far mode.” It's the bluish light cast by a screen. It's all glass and streamlined because we have a hard time coming up with details. If you look at the actual world: It's cluttered; it's complex; it’s horrifying and amazing and beautiful.
I'm currently working on what I call “grand futures.” A lot of the research we do at the Future of Humanity Institute is about existential risk, stuff that can go wrong. I want humanity to avoid tremendous damage, of course, but what also fills me with dread is that we could create attractor states that drag us into something eternally homogeneous, painful, or bad.
That’s what my coral reef vision is about. I want the opposite of that: something totally open-ended and endlessly changing.
LYDIA: Have you seen any art that expresses this Cambrian explosion idea? The closest I can think of is maybe Charles Stross’s science fiction novel Accelerando (2005), where brain scans of lobsters get uploaded into the network and eventually become an intelligence that collaborates with future forms of humanity.
ANDERS: I was on the email thread that led to Stross’s lobsters! It was back in the late ‘90s. We were on the same email list, and someone was complaining that the news was so weird this week: Europe getting unified without a dictator, and people uploading lobsters. I read Stross’s book with great delight, because I feel I'm one of those unnamed characters at the party standing around.
Again, though, the problem is that as you get further into the future it becomes harder to describe the details. Bruce Sterling's Schismatrix (1985) is a classic, though. Another interesting novel is David Zindell's Neverness (1988). I have mixed feelings about Zindell's preachiness, but it’s another universe where humanity branches off: Some people deliberately turn themselves into Neanderthals to live close to nature, for example, and the book doesn't shy away from the fact that that's horrible when it's a harsh winter. Other people are moon-sized computers doing weird poetry. In between, you have everything from nanotechnological dolphin-humans, to people making sinister modifications to their own motivation systems.
I love that diversity. A coral reef will have moray eels, and nasty things coming out from the depths during the night. There will be weird ecological balances that are never perfect, and too much history. I don't think that's avoidable.
I'm taking, maybe, a relatively pessimistic view: we won’t avoid suffering and bad things. I have friends who think we should be fixing this. I don't think we can actually fix all of it; I just think the future is worth it anyway. I've been working on a little paper, making a bet on it.
LYDIA: Is this for the Long Now Foundation’s Long Bets program, where they encourage people to make ultra long-term bets?
ANDERS: Yes. I have several bets. One is about ethics. By the time humanity has spread out so far that we start to lose causal connection to each other because we're so far apart in space, will we still argue about ethics? Or will we figure out that there are, say, 17 fundamental ethical systems that are self-consistent?
LYDIA: Arguably, we already ran this experiment historically: We started as a small tribe somewhere and then we became lots of tribes with lots of cultures.
ANDERS: But we aren’t that diverse. There are human universals you find everywhere, and they’ve stayed the same for 50,000 years.
LYDIA: So your hypothesis is that, in the future, we'll be profoundly different in ways that affect our ethics, and then there’ll be a new type of ethical question.
ANDERS: Exactly. You can imagine artificial intelligences programmed to have beliefs. That's uninteresting compared to beings that can modify their own beliefs, or experience things that are impossible for us now. We can't experience the experiences other people have, for example — but if you could, it would have profound effects on how we relate to each other.
Also, we often think being conscious is important in terms of ethics and values. What if there’s something like “Consciousness Prime,” that we don't have, but maybe the right kind of biotechnological product or AI program could have? And they could tell us, “It's very sad that you humans don't have Consciousness Prime, because it’s what makes life worth living?”
Liberal democracy today is based on the idea that people are really different, but that we have things in common, and so we set up systems to handle that. Which may be an interesting hint that there is resilience in the messiness we have. It's just that it takes so much effort and friction. We want to avoid that, because we're lazy... until the day somebody invents an enhancement that makes us non-lazy, and then I have no idea what society will be like.
This was written by Lydia Laurenson, editor in chief of The New Modality. It was not fact-checked. There's more about our process at The New Modality's truth and transparency page.