MessageToEagle.com

‘Pandora’s Box’ Moment - Some Highly Advanced Technologies May Pose A Serious Threat To Our Species - Scientists Say

17 September, 2013

MessageToEagle.com - Certain technologies may pose “extinction-level” risks to our species, from biotechnology to artificial intelligence, according to a team of scientists who propose a new centre at Cambridge to address the development of these technologies.

Many scientists are concerned that developments in human technology may soon pose new risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change.

The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake; Huw Price and Jaan Tallinn have made this case in a recent online article.

Scientists at Cambridge University have now come together to draw up a doomsday list of "existential risks" that threaten the planet.

Rampant climate change, bioterrorism and intelligent computers are some of the dangers being investigated by the group, which includes Astronomer Royal Lord Rees and physicist Prof Stephen Hawking.

In a speech to the British Science Festival, Lord Rees said: "In future decades, events with low probability but catastrophic consequences may loom high on the political agenda. That's why some of us in Cambridge - both natural and social scientists - plan, with colleagues at Oxford and elsewhere, to inaugurate a research programme to compile a more complete register of these existential risks and to assess how to enhance resilience against the more credible ones."

Speaking at the University of Newcastle, Lord Rees described some of the issues of most concern to him and his colleagues.

They included out-of-control climate change, runaway technologies in areas such as artificial intelligence and synthetic biology, and cyber or bioterrorism.

"We fret too much about minor hazards of everyday life: improbable air crashes, carcinogens in food, low radiation doses, and so forth," said Lord Rees.

"But the wide public is in denial about two kinds of threats: those that we're causing collectively to the biosphere, and those that stem from the greater vulnerability of our interconnected world to error or terror induced by individuals or small groups.

"To survive this century, we'll need the idealistic and effective efforts of natural scientists, environmentalists, social scientists and humanists.

"They must be guided by the insights that 21st century science will offer, but inspired by values that science itself can't provide."



In 1965, Irving John ‘Jack’ Good sat down and wrote a paper for New Scientist called Speculations concerning the first ultra-intelligent machine. Good, a Cambridge-trained mathematician, Bletchley Park cryptographer, pioneering computer scientist and friend of Alan Turing, wrote that in the near future an ultra-intelligent machine would be built.

This machine, he continued, would be the “last invention” that mankind would ever make, leading to an “intelligence explosion” – an exponential increase in self-generating machine intelligence. For Good, who went on to advise Stanley Kubrick on 2001: A Space Odyssey, the “survival of man” depended on the construction of this ultra-intelligent machine.

Fast forward almost 50 years and the world looks very different. Computers dominate modern life across vast swathes of the planet, underpinning key functions of global governance and economics, increasing precision in healthcare, monitoring identity and facilitating most forms of communication – from the paradigm shifting to the most personally intimate.

Technology advances for the most part unchecked and unabated.

While few would deny the benefits humanity has received as a result of its engineering genius – from longer life to global networks – some are starting to question whether the acceleration of human technologies will secure the survival of man, as Good contended, or whether it is in fact the very thing that will end us.

Now a philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge, the Centre for the Study of Existential Risk (CSER), to address these cases – from developments in bio and nanotechnology to extreme climate change and even artificial intelligence – in which technology might pose “extinction-level” risks to our species.

“At some point, this century or next, we may well be facing one of the major shifts in human history – perhaps even cosmic history – when intelligence escapes the constraints of biology,” says Huw Price, the Bertrand Russell Professor of Philosophy and one of CSER’s three founders, speaking about the possible impact of Good’s ultra-intelligent machine, or artificial general intelligence (AGI) as we call it today.

“Nature didn’t anticipate us, and we in our turn shouldn’t take AGI for granted. We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous. I don’t mean that we can predict this with certainty; no one is presently in a position to do that. But that’s the point: with so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.”



Price’s interest in AGI risk stems from a chance meeting with Jaan Tallinn, a former software engineer who was one of the founders of Skype, which – like Google and Facebook – has become a digital cornerstone. In recent years Tallinn has become an evangelist for the serious discussion of ethical and safety aspects of AI and AGI, and Price was intrigued by his view:

“He (Tallinn) said that in his pessimistic moments he felt he was more likely to die from an AI accident than from cancer or heart disease. I was intrigued that someone with his feet so firmly on the ground in the industry should see it as such a serious issue, and impressed by his commitment to do something about it.”

We Homo sapiens have, for Tallinn, become the optimisers – in the sense that we now control the future, having grabbed the reins from 4 billion years of natural evolution. Our technological progress has by and large replaced evolution as the dominant, future-shaping force.



We move faster, live longer, and can destroy at a ferocious rate. And we use our technology to do it. AI geared to specific tasks continues its rapid development – from financial trading to face recognition – and the power of computing chips doubles roughly every two years in accordance with Moore’s law, set out by Intel co-founder Gordon Moore in the same year that Good predicted the ultra-intelligent machine.
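A rough sense of what “doubling every two years” implies can be had with a few lines of arithmetic. The sketch below is illustrative only: the 1965 and 2013 dates come from the article itself, while the function name and the assumption of a strict two-year doubling period are introduced here purely for the example.

```python
# Minimal sketch of the arithmetic behind "doubling every two years".
# Illustrative only; assumes a constant two-year doubling period.

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth after `years`, with one doubling per `doubling_period` years."""
    return 2 ** (years / doubling_period)

if __name__ == "__main__":
    span = 2013 - 1965  # from Moore's and Good's 1965 papers to this article
    print(f"{span} years -> {span // 2} doublings -> roughly {growth_factor(span):,.0f}x")
    # 48 years = 24 doublings = 2**24, about a 16.8-million-fold increase
```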

We know that ‘dumb matter’ can think, say Price and Tallinn – biology has already solved that problem, in a container the size of our skulls. That’s a fixed cap on the level of complexity required, and it seems irresponsible, they argue, to assume that the rising curve of computing complexity will not reach and even exceed that bar in the future. The critical point might come if computers reach the human capacity to write computer programs and develop their own technologies. This, Good’s “intelligence explosion”, might be the point at which we are left behind – permanently – by a future-defining AGI.
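The shape of that argument can be made concrete with a deliberately toy numerical sketch. Nothing below comes from the article or from Good's paper; the functions, parameters and numbers are arbitrary assumptions, used only to contrast steady improvement applied from outside with improvement that feeds back on itself.

```python
# Toy contrast between externally driven improvement and recursive
# self-improvement (Good's "intelligence explosion"). Illustrative only;
# all parameters are arbitrary assumptions, not figures from the article.

def fixed_improver(capability: float, steps: int, gain: float = 0.2) -> float:
    """Outside engineers add a fixed fraction of capability each design cycle:
    ordinary exponential growth."""
    for _ in range(steps):
        capability *= 1 + gain
    return capability

def self_improver(capability: float, steps: int, k: float = 0.2) -> float:
    """The system improves itself, and a more capable system makes a larger
    improvement each cycle: the growth rate itself grows, and the curve
    runs away far faster than the fixed-gain case."""
    for _ in range(steps):
        capability *= 1 + k * capability
    return capability

if __name__ == "__main__":
    for steps in (5, 10, 15):
        print(f"after {steps:2d} cycles: fixed-gain {fixed_improver(1.0, steps):.3g}, "
              f"self-improving {self_improver(1.0, steps):.3g}")
```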

“Think how it might be to compete for resources with the dominant species,” says Price. “Take gorillas for example – the reason they are going extinct is not because humans are actively hostile towards them, but because we control the environments in ways that suit us, but are detrimental to their survival.”

Price and Tallinn stress the uncertainties in these projections, but point out that this simply underlines the need to know more about AGI and other kinds of technological risk.

In Cambridge, Price introduced Tallinn to Lord Martin Rees, former Master of Trinity College and President of the Royal Society, whose own work on catastrophic risk includes his books Our Final Century (2003) and From Here to Infinity: Scientific Horizons (2011). The three formed an alliance, aiming to establish CSER.

With luminaries in science, policy, law, risk and computing from across the University and beyond signing up to become advisors, the project is, even in its earliest days, gathering momentum. “The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price. “We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”

Price acknowledges that some of these ideas can seem far-fetched, the stuff of science fiction, but insists that that’s part of the point. “To the extent – presently poorly understood – that there are significant risks, it’s an additional danger if they remain for these sociological reasons outside the scope of ‘serious’ investigation.”

“What better place than Cambridge, one of the oldest of the world’s great scientific universities, to give these issues the prominence and academic respectability that they deserve?” he adds. “We hope that CSER will be a place where world class minds from a variety of disciplines can collaborate in exploring technological risks in both the near and far future.

“Cambridge recently celebrated its 800th anniversary – our aim is to reduce the risk that we might not be around to celebrate its millennium.”

MessageToEagle.com.

See also: ET Machines, Cyborgs Or Humans - Who Can Explore Space Best?
