Are Futurists Just Scared of Death?

By: Justin Nobel | Date: Thu, January 7th, 2016

When immortal AI creatures finally take over the planet, what will be their disposition? Probably, this blogger thinks, something resembling the humans who created them.

Perhaps it’s the turning of the seasons, but something strange is in the air. A growing number of prominent academics appear to be convinced that human beings shall soon be immortal. And yet as I’ve watched this trend develop I’ve often wondered: Are these people just scared of death?

A New Yorker article last November about the dangers of artificial intelligence helped cement my suspicions.

“His interest in science was a natural outgrowth of his understandable desire to live forever,” a British philosopher said about Nick Bostrom, Director of the University of Oxford’s Future of Humanity Institute and one of the main subjects of the article.

Bostrom’s concern these days is that artificial intelligence, or AI, will soon be developed, with unknown and potentially horrifying ramifications for humanity. He is so focused on contemplating and preparing for the coming AI end-times that he only has time to eat food in smoothie form. Raffi Khatchadourian, writer of the New Yorker article, watches him mix lettuce, carrots, cauliflower, broccoli, blueberries, turmeric, vanilla, oat milk and whey powder in a powerful blender. It’s a diet intended to extend the lifespan of Bostrom’s mind and, one would presume, ensure it remains alive to see the coming AI takeover.

Bostrom is so adamant about living forever that he wears a thin leather band around his ankle with contact information for Alcor, a cryonics facility in Arizona. Upon death, his body is to be immediately transferred to the facility, where it shall be maintained in a giant steel bottle filled with liquid nitrogen. His hope is that one day technology will be able to revive him.

It is understandable that a scientifically minded human being would recognize the expiration date of their own kind and want to use their intelligence to somehow overcome it and live forever. But to what extent is it rational and worthwhile to spend time trying to attain that goal? How many resources should mankind put forth in the effort to make eternal life possible? And is such an unabashedly ego-driven quest for immortality not a bit crass in a world rife with so many other troubles?

“Put more simply,” New Yorker writer Raffi Khatchadourian says of Bostrom, “he believes that his work could dwarf the moral importance of anything else.”

How exactly would humankind even attain the sort of immortality Bostrom is envisioning?

“You must seize the biochemical processes in your body in order to vanquish, by and by, illness and senescence,” Bostrom states in a 2008 essay. “In time, you will discover ways to move your mind to more durable media.” I can already see the movie tagline:

In effort to synch human and machine minds and prevent an AI takeover, an immortality-obsessed Oxford scientist achieves human-machine mind synch and ensures AI takeover.

And as if to confirm the idea that denying a thing can in fact bring it about, a few pages further along in the New Yorker article Khatchadourian gives the example of physicist Ernest Rutherford, who discovered that heavy elements produced radiation by atomic decay, confirming that vast reservoirs of energy were stored in the atom. Yet in 1933 Rutherford proclaimed: “Anyone who expects a source of power from the transformation of these atoms is talking moonshine.”

Here’s the scary part. “The next day,” explains Khatchadourian, “a former student of Einstein’s named Leo Szilard read the comment in the papers. Irritated, he took a walk, and the idea of a nuclear chain reaction occurred to him…A decade later, Szilard’s insight was used to build the bomb.”

So I am skeptical of this Future of Humanity Institute, because frankly, I think they are more in bed with the AI creatures than anyone. And come the AI revolution, one could easily see the institute emerge not as an ember of counter-revolutionary action, but a vanguard group of yes-men giddily complicit in whatever Draconian ideology the AI creatures decide to impose on the humans.

It is hard to believe that Bostrom is not thirsting for the AI takeover. And in exchange for his ever-so-desired AI-synched eternal life, he wouldn’t be the first person to join the AI guard and spy on and terrorize his own former species-mates.

Of course, Nick Bostrom is not the only human giddy over the idea of eternal life, and convinced he is on a holy mission. Futurist Ray Kurzweil, in his talks on the essentialness and inevitability of AI, can seem oddly soporific and robotic, as if the AI creatures had already staunched his blood flow and were preparing to infuse his body with the gelatin that will allow the AI synch-up.

There is an obsessiveness and narrowness of vision in all of these people that is frighteningly childish, which is precisely why their nightmare-scapes may prove a reality. That is, because the AI creatures are being invented by egotistical, immortality-obsessed humans whose brains may have impressive computational powers but whose moral and spiritual capabilities are primordial, the form they will take is worrisome. More simply put: selfish, immature, soulless human creators mean selfish, immature, soulless AI spawn.

What if Buddhist monks were the ones leading the AI charge? I would feel a lot safer about the AI outcome. And there would be no need for puffed-up institutions like the Future of Humanity Institute. In fact, as proven by the Middle Ages, monks, and those who devote their lives to asceticism, peace, and scholarly worship, are the true Future of Humanity Institutes. But, naturally, the monks are not drawn to AI, and egotists and childish credit-seekers are.

At the end of the New Yorker article, Bostrom speaks with Geoffrey Hinton, a Google employee working on AI and regarded as one of the most prominent figures in his field. Hinton says he fears AI will be used by “political systems…to terrorize people.”

“Then why are you doing the research?” asks Bostrom.

“I could give you the usual arguments,” Hinton says. “But the truth is that the prospect of discovery is too sweet.” (Robert Oppenheimer said something similar concerning why he was working on the atomic bomb.)

Well there you go. And the future is in their hands.

