Saturday, 10 October 2015

Stephen Hawking: 'The real risk with AI isn't malice but competence'


Artificial intelligence was one of the biggest topics during Stephen Hawking's Reddit AMA (Ask Me Anything) earlier this year. So it's not too surprising that Hawking devoted a significant portion of his answers to that Q&A session, released by Reddit yesterday, to clarifying his stance on dangerous artificial intelligence. "The real risk with AI isn't malice but competence," he wrote to a teacher who's tired of having "The Terminator Conversation" with his students -- that is, explaining away the notion that evil, killer robots will be the main danger with AI. "A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble." Hawking has previously warned that AI could "spell the end of the human race," and he joined Elon Musk and other notable technologists in calling for a ban on autonomous weapons.

While it's a bit less exciting than robots bent on destroying humanity, Hawking's reasoning is no less worrisome. The idea that the equivalent of an AI software bug could eventually have world-changing implications isn't exactly reassuring.

"You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants," Hawking added. "Let's not place humanity in the position of those ants."

Responding to another question about when AI will reach human levels of intelligence, Hawking stressed that we don't really know when that will happen. But, he noted, "When it eventually does occur, it's likely to be either the best or worst thing ever to happen to humanity, so there's huge value in getting it right." To that end, he called for being more careful about how we develop AI: rather than just exploring "pure undirected artificial intelligence," we should focus on creating "beneficial intelligence."

Hawking also noted that an evolved AI could develop drives or goals similar to those of living organisms. But where living creatures focus on surviving and reproducing, an AI, he said, citing scientist Steve Omohundro, could be driven to acquire ever more resources to fulfill its goals. And once again, that could spell trouble if it ends up taking resources away from humans.

Pointing to a slightly more pressing issue, one Redditor asked Hawking about his thoughts on technological unemployment -- especially the idea that we might one day reach a point where most tasks are automated and most humans are out of work. Hawking described two commonly discussed scenarios: one where most people can live a slightly more luxurious life, if the resources produced by the machines are shared, and another where most people end up "miserably poor" while the rich people who own those machines consolidate their wealth. At this point, Hawking sees things trending toward the second outcome.

On the lighter end of things, we also learned that Hawking's favorite movie is Truffaut's Jules and Jim, and that he somehow finds The Big Bang Theory funny. Perhaps the funniest takeaway: When one Redditor asked if Hawking remembered briefly watching Wayne's World 2 at a Cambridge video store, Hawking replied with a resounding, "NO."

[Photo credit: Desiree Martin/AFP/Getty Images]

Source: Reddit



Read the full article here by Engadget
