You Should Be Afraid of Artificial Intelligence

“Nuclear fission was announced to the world at Hiroshima.” James Barrat is the author of Our Final Invention: Artificial Intelligence and the End of the Human Era, which offers a thorough account of the chief players in the AI space, along with an arresting sense of where we’re headed with machine learning: a world we can’t yet define.

For our interview, he cited the Manhattan Project and the development of nuclear fission as a precedent for how we should consider the present state of AI research:

We need to develop a science for understanding advanced Artificial Intelligence before we develop it further. It’s just common sense. Nuclear fission is now a reliable energy source. In the 1930s, the focus of that technology was initially on energy production, but the same research led directly to Hiroshima. We’re at a similar turning point in history, especially regarding weaponized machine learning. But with AI, we can’t survive a fully realized human-level intelligence that arrives as abruptly as Hiroshima did.

Barrat also pointed out the danger of anthropomorphizing AI. It’s tempting to project human values onto machines, but by definition they’re silicon, not carbon.

“Intelligent machines won’t love you any more than your toaster does,” he says. “As for enhancing human intelligence, a percentage of our population is psychopathic. Giving people a device that enhances intelligence may not be a terrific idea.”

Sometimes, when things get hectic, it’s important to step away from the rat race for a minute. Take a deep breath, stretch out your limbs, clear your mind, and let an appreciation for the sheer number of terrible endings we’ve engineered for ourselves gently wash back and forth over your soul.