In science fiction books and movies we encounter artificial intelligence frequently. We have seen intelligent robots that work for and with us or, in the worst-case scenario, turn against humanity.
Artificial intelligence is also a hot topic in the scientific community these days; as we reported just a few days ago, DARPA is now in the process of constructing robots with real brains.
There is no doubt that opinions vary a lot. Some consider artificial intelligence to be the most compelling evidence that the
technological singularity is near, but there are also those who express a more cautious enthusiasm.
Artificial intelligence can be dangerous, which means we could be in trouble if the technology starts working against us.
Recently a very interesting article appeared in Aeon Magazine warning of the threat posed by advanced artificial intelligence.
As previously mentioned, science fiction authors make use of artificial intelligence in almost every book. It may come as a surprise to learn that many famous science
fiction authors do consider AIs a threat to humanity.
Kristin Centorcelli of SF Signal put together a panel of famous science fiction authors to get their opinions on the subject. Here is a small
sample of what some of them had to say about the future and our co-existence with artificial intelligence.
Larry Niven: "If you make an intelligent being, you must give it civil rights.
On the other hand, you cannot give the vote to a computer program. "One man, one vote" - and how many copies of the program would you need to win an election?
Programs can merge or can generate subprograms.
Machines can certainly become a part of a human. Our future may see a merging of humans and machines. Or all of the above."
Neal Asher: "Yes, our computers are able to process so much more every day but AIs they are not. And if they suddenly do turn into demigods,
how exactly are they going to change the world? It's all very well having vast intelligence but if you can't even pick up a screwdriver it isn't going
to do much good. Sorry to be blunt, but go ask Stephen Hawking about that."
Will AIs become demigods controlling our planet?
Guy Hasson: "It is so easy to say 'This new technology can kill us.' It's easy to say, because it's always true. About practically any technology.
These types of statements can't really be disproven. Put any expert on the stand and ask him: Do you know for a certainty,
a 100% certainty, that this technology will not kill us? No, an honest person will have to admit, no one does.
The prosecutor will continue: Is there more than 0% chance that this technology will kill us? Yes, the honest person will admit again, there is."
James K. Decker: "If we're talking about a true intelligence, some kind of self-aware network of synthetic neurons and not some kind of 'human simulation',
I don't see how we could have the slightest idea what it might do once it became conscious.
We'd be interacting with a completely inhuman intelligence, free of empathy, or even an understanding of what life and death are.
The things that are core to us as humans would mean nothing to a being like that and so given the chance to act in our world, we could have no way of
guessing what it might decide to do.
Even if it were somehow keyed to be beneficial to us, taking the "maximizing human happiness" example from the original
question, a machine intelligence might decide the optimal way to do this would be to keep every human immobilized, and hooked up to a
feeding tube with a wire running current to our pleasure centers.
That would make every human happy for their entire lives, and without the ability to understand why that would be horrible it might seem like
the most efficient course of action."
Wesley Chu: "Yes, future apocalyptic extinction sucks and sounds pretty unpleasant, but if I may, when was the last time any
futurist's prediction actually came true? They predicted flying cars in every family's garage back in the 1920s. Nearly a hundred years
later, cars aren't drastically different than they were since the days of the Model T. We still don't have a moon base, and my cleaning
lady is composed of skin, bones, and blood, albeit I admit she sounds like a robot when she talks. Hell, we can't even get a guy to Mars let alone the next solar system. We can't even cure the common cold."
Karl Schroeder: "I think intelligence only exists, or has a function, in service of norms, aims, goals, whatever you want to call
them. Consciousness is the passenger, and our biological needs are the driver. It may be impossible to create an artificial mind without
endowing that mind with urges that keep it going: curiosity, truthfulness, the need to express itself, etc. However, this does not mean
that an artificial intelligence has to possess all the drives we have."
Artificial intelligence in the Alien movies: Bishop in the second film and Ash in the first are totally different.
Madeline Ashby: "Like anything else, the quality and behaviour of AI depends on who is designing, funding, and retailing the AI.
Garbage in, garbage out. You get what you pay for. You reap what you sow."
Gregg Rosenblum: "I have to say, at the risk of sounding wishy-washy, that I think we're going to get a mixed bag of positives
and negatives from AI technology.
We're going to have bots defusing land mines and fighting fires but also dropping bombs from unmanned drones.
We'll probably have AI cars driving without human guidance (we've already got self-parking cars, right?), but we're also going to have an interesting,
"transhumanism" cyborg-like blurring of the lines between technology and humanity. (Google Glass is just the tip of the iceberg: how many of us, for example,
if we could have a comm. chip implanted in us that acted as a smart phone, would jump at the chance?)
I don't think we're going to have truly "sentient" AI for a long time, and if that's true, then we won't have a robot uprising awaiting
us in the near future (although that would make a cool premise for a trilogy of YA sci fi books, cough, cough).
We are, however, going to
have increasingly smarter and smarter technological tools at our disposal. It's how we, as humans, utilize these tools (to solve our problems or
new ones) that'll be interesting to watch."
Inside Mechanical Brains:
What Do Robots Dream Of?
Have you ever wondered what robots dream of? The idea that robots can dream might sound implausible to some. Robots are machines.
They do not have feelings or emotions, and they certainly cannot dream, many people would undoubtedly argue.