Artificial intelligence

Artificial intelligence has come a long way in recent years. Most people would be lost without their GPS systems, robots are already navigating battlefields and doing the housework, and drones may soon be delivering packages for Amazon. What’s more, Siri can handle many of the questions that crop up in everyday life.

But all of these advances depend on a user giving the instructions. What would happen if these systems decided they could do better on their own? That’s the question posed by author James Barrat in his book, Our Final Invention: Artificial Intelligence and the End of the Human Era. The author and documentary filmmaker forecasts that artificial intelligence systems—from Siri to drones and data-mining tools—will stop looking to humans for upgrades and start seeking improvements on their own.

And they won’t necessarily be friendly, he said recently, adding: “In this century, scientists will create machines with intelligence that equals and then surpasses our own. But before we share the planet with super-intelligent machines, we must develop a science for understanding them. Otherwise, they’ll take control. And no, this isn’t science fiction.

“Scientists have already created machines that are better than humans at chess, Jeopardy!, navigation, data mining, search, theorem proving and countless other tasks. Eventually, machines will be created that are better than humans at A.I. research.

“At that point, they will be able to improve their own capabilities very quickly. These self-improving machines will pursue the goals they’re created with, whether they be space exploration, playing chess or picking stocks. To succeed, they’ll seek and expend resources, be it energy or money. They’ll seek to avoid failure modes, like being switched off or unplugged. In short, they’ll develop drives, including self-protection and resource acquisition—drives much like our own. They won’t hesitate to beg, borrow, steal and worse to get what they need.

“Advanced artificial intelligence is a dual-use technology, like nuclear fission, capable of great good or great harm.

“The NSA privacy scandal came about because the NSA developed very sophisticated data-mining tools. The agency used its power to plumb the metadata of millions of phone calls and the entirety of the Internet—critically, all email. Seduced by the power of data-mining A.I., an agency entrusted to protect the Constitution instead abused it. They developed tools too powerful for them to use responsibly.

“Today, another ethical battle is brewing about making fully autonomous killer drones and battlefield robots powered by advanced A.I.—human-killers without humans in the loop.

“In the longer term, A.I. approaching human-level intelligence won’t be easily controlled; unfortunately, super-intelligence doesn’t imply benevolence.”

He adds: “We humans steer the future not because we’re the fastest or the strongest creatures on the planet, but because we’re the smartest. When we share the planet with creatures smarter than ourselves, they’ll steer the future. When I understood this idea, I felt I was writing about the most important question of our time.

“Everyone on the planet has much to fear from the unregulated development of super-intelligent machines.

“Imagine: in as little as a decade, a half-dozen companies and nations field computers that rival or surpass human intelligence. Imagine what happens when those computers become expert at programming smart computers. Soon we’ll be sharing the planet with machines thousands or millions of times more intelligent than we are. And, all the while, each generation of this technology will be weaponised. Unregulated, it will be catastrophic.”