The Cambridge Project for Existential Risk


December 5th, 02012 by Alex Mensing


Human technology is undoubtedly getting more powerful every year, and our destructive potential is no exception. The Cold War notion of ‘mutually assured destruction’ was unthinkable for most of human history, as was the ability to fundamentally alter the climate of the planet on which we rely. As the capabilities of our technologies continue to grow, in what ways are we becoming increasingly able to bring about our own demise as a species?

Martin Rees and Huw Price of the University of Cambridge and Skype co-founder Jaan Tallinn teamed up to investigate and mitigate that very possibility. In founding the Centre for the Study of Existential Risk at Cambridge University, they explain their motivation:

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake.
Rees, Price and Tallinn agree that scientists need to pay more attention to this issue. The long-term future of humanity is at stake, and we need to understand more clearly the power that we wield in the modern world, and how to avoid using it destructively. The issue can, in fact, be extended beyond our own species. As Stewart Brand concluded in his summary of co-founder Martin Rees’ SALT talk: “Now that we are stewards of this planet, we are responsible for maintaining life’s possibilities in this cosmic neighborhood.”
This entry was posted on Wednesday, December 5th, 02012 at 10:02 am and is filed under Futures, Long Term Thinking, Technology.