How Smart Is Too Smart?
It's one of those headlines that grab your attention:
"Stephen Hawking warns artificial intelligence could end mankind"
Wow ... on so many levels.
In an interview with the BBC, Hawking feared AI "would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
We all know Hawking is a smart guy, so if he says it, maybe we should take heed.
Once the province of science fiction, AI is rapidly becoming science fact, as computing power and machine learning merge into algorithms that mimic the human brain. Robots are on the rise, pushing more and more humans to the back of the unemployment line. Scientists and philosophers are taking a deeper dive into the so-called "hard problem" -- consciousness.
Have we as a species become too smart for our own good? Maybe. But what's the point of having the most evolved brainpower in the known universe if we don't use it? Obviously we can't help ourselves.
It isn't enough to know we're here. Inquiring minds want to know where we came from, how we came to be and why. Maybe AI can find the answers, but I have my doubts.
Like the very logical Mr. Spock, a highly evolved, AI-enabled entity not only wouldn't ask those questions, but also couldn't care less about finding the answers. Driven by pure logic free of emotion, it would relentlessly pursue its own idea of perfection, which might include deciding the fate of its creators. That would be us.
Hawking isn't the only smart guy sounding alarm bells about AI. Tesla's Elon Musk reportedly compared the pursuit of AI to "summoning the demon." Bill Gates has voiced his concern as well.
Hollywood has been warning us for years. We all remember "HAL," the computer from Stanley Kubrick's classic film, "2001: A Space Odyssey." He was pure E-V-I-L. More recently, the run-amok androids from HBO's "Westworld" gave us fresh nightmares.
Then there was "Fail-Safe," the chilling 1964 movie based on the book of the same name, in which a computer defect nearly initiates World War III. As the characters work feverishly to understand what failed, there is this exchange:
"Even if the machine fails, the human can always correct the mistake. The machines are supervised by humans."
"I wish you were right. The fact is, the machines work so fast... they are so intricate... the mistakes they make are so subtle... that very often, a human being just can't know... whether a machine is lying or telling the truth."
Time to hide under the bed? Not yet, according to many experts, who say true AI is decades or even centuries away -- or maybe never.
Never one to trust experts, I decided to ask Siri.
"Siri, should I be afraid of artificial intelligence?"
Siri: "I'm not sure what to say."
Okay. I am officially worried.
Yes, I know it's spelled like "Jerry." No, I don't know why it's pronounced "Gary."