John Markoff’s article in The New York Times today, The Coming Superbrain, looks at the potential and risks of developing “self-aware and superhuman” machines. What ensues is the usual debate between those who believe that this type of artificial intelligence (A.I.) is the answer to our technological prayers and those who caution that creating machines more intelligent than ourselves will inevitably lead to our own destruction. Battlestar Galactica, anyone?
One of the biggest proponents of A.I. development is Dr. Raymond Kurzweil, a co-founder of Singularity University, a school whose mission is to “assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity’s grand challenges.” (Apparently Dr. Kurzweil is also the subject of a new documentary, Transcendent Man.) Dr. Kurzweil seems especially interested in the possibility of achieving immortality through what he calls “uploading,” which, generally speaking, is the process of transferring the content and processes of our brains into a “computing environment.”
The fact that some of our brightest minds are focused on the goal of immortality always strikes me as odd. It also seems indicative of a GIANT denial of impermanence…and maybe an unhealthy attachment/clinging to ego? The idea of immortality, though sometimes fun to contemplate, has always struck me as more creepy and unnatural than alluring. But I suppose creepy and unnatural is my take on self-aware A.I. in general. Maybe I’ve been exposed to too much dystopian-style science fiction, but I’m skeptical of the idea that superhuman computers and human/computer hybrids are the answers to our problems.
What do others think? Anyone planning on combating impermanence by having themselves cryogenically preserved?