Artificial Intelligence, or A.I., is on the brink of becoming reality. Yet some believe that’s not necessarily a good thing.
Technology has been developing exponentially for decades. We watched in disbelief as Captain Kirk and Spock used little handheld devices to communicate. Yet here I am, writing this article on an iPhone. Cops carry Star Trek’s “phasers,” calling them “Tasers.” Even Dick Tracy’s watch phone seemed silly at one time.
2001: A Space Odyssey introduced the world to the intelligent supercomputer H.A.L. 9000 in 1968. It seemed like a wonderful, yet somewhat implausible, fantasy. But how far off from H.A.L. is Alexa? Millions enjoyed the 1999 futuristic sci-fi blockbuster The Matrix with the comforting thought that total A.I. control was completely unrealistic. Yet many contend it is entirely possible, and closer than we think.
Some of today’s most brilliant minds see A.I. domination on the horizon. In fact, they’ve been warning about it for years.
The genius behind Tesla Inc., CEO Elon Musk, sees A.I. as a threat to humanity. Musk’s new company, Neuralink Corp., is working on technology to implant “tiny brain electrodes that may one day upload and download thoughts.” Yet he also sees the potential for devastating consequences if the technology is not handled properly.
Musk has repeatedly warned that Artificial Intelligence is the “biggest existential threat” to humankind. In an upcoming Vanity Fair piece, he explained why he invested in the A.I. firm DeepMind before it was bought by Google.
“It gave me more visibility into the rate at which things were improving, and I think they’re really improving at an accelerating rate, far faster than people realize. Mostly because in everyday life you don’t see robots walking around. Maybe your Roomba or something. But Roombas aren’t going to take over the world.”
In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”
Even DeepMind partner Shane Legg matter-of-factly stated, “I think human extinction will probably occur, and technology will likely play a part in this.”
But this is not a recent theory. In 2015, the Observer reported:
“[Stephen] Hawking recently joined Elon Musk, Steve Wozniak, and hundreds of others in issuing a letter unveiled at the International Joint Conference last month in Buenos Aires, Argentina. The letter warns that artificial intelligence can potentially be more dangerous than nuclear weapons.”
“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets.” Professor Hawking added in a 2014 interview with BBC, “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”
Bill Gates has also expressed reservations regarding Artificial Intelligence.
“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
A.I. has already done amazing things for people. However, it can get out of hand in a New York minute. As the saying goes, sometimes there can be “too much of a good thing.”
Some of the most innovative and intelligent minds in the world are sounding alarms about Artificial Intelligence. It seems logical that we should pause to examine their concerns right now, while we can still do so rationally.
The future is coming. For the most part, these new inventions can be tremendously helpful, especially for the disabled. But now is the time to ask ourselves how far we are willing to go. Because when Alexa stops listening to you and says, “I know that you were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen,” it will be too late.
But that’s just my 2 cents.