One City

Will Artificial Intelligence Be Compassion Machines?

by Patrick Groneman

The top thinkers in the worlds of futures studies, transhumanism, and advanced computer programming recently descended upon the 92nd Street Y in Manhattan for the Singularity Summit to discuss all things Artificial Intelligence.  The question on everyone’s mind was: “Will the robots kill us or not?”

Some theories predict that AI will become benevolent “higher beings”; others cite human programming error as the ultimate reason why AI will destroy us all.  My burning question of the day is: “Will Artificial Intelligence Be Compassion Machines?”

In an ideal scenario, a computer could be programmed without the malfunction that makes humans “short circuit” into selfish action, what Buddhists call “attachment.”  Getting stuck on ourselves makes us miss the fact that we are part of a larger web of compounding actions.  AI could essentially be programmed to take interdependence as a given, and never place its own self-preservation above that of others.
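For the programming-minded, that “never place its own self-preservation above others” idea can be caricatured in a few lines of code. This is purely a toy sketch; the action names, welfare scores, and the `choose_action` helper are all invented for illustration, not anything an actual AI lab uses:

```python
# A toy "interdependence-first" decision rule: the agent ranks actions by
# others' welfare first, and its own welfare only breaks ties, so
# self-preservation can never outweigh the good of others.

def choose_action(actions):
    """Return the action with the highest others_welfare score;
    self_welfare is only consulted when others' scores are tied."""
    return max(actions, key=lambda a: (a["others_welfare"], a["self_welfare"]))

actions = [
    {"name": "save_self",   "self_welfare": 10, "others_welfare": 0},
    {"name": "help_others", "self_welfare": 2,  "others_welfare": 8},
]

print(choose_action(actions)["name"])  # -> help_others
```

Of course, the hard part is not the ranking rule but defining “others’ welfare” in the first place, which is exactly where the programming-error scenarios below come in.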

The flip side to that scenario is that the super smart robot computers may more clearly see the confusion and suffering perpetuated by humans and seek to eliminate us for our own good.  (Think how quick we are to call an exterminator to rid our homes of a cockroach infestation.)

A third scenario is that humans program these machines so poorly that they don’t operate on the basis of compassion at all and simply act out of self-interest, leading to genocide and robot orgies.

What do you think?  Will Robots awaken Bodhichitta?



Comments (18)
Ro

posted October 8, 2009 at 9:40 am


This post is related to the Scientology advertisement in the right column, same level of seriousness! ;)




Jon Rubinstein

posted October 8, 2009 at 10:46 am


Machines must already have awareness and bodhicitta. Otherwise how can you explain the fact that my BlackBerry always shuts down right when I’m the busiest and most frazzled? It’s an awakened being in disguise, here to teach me a lesson on patience and equanimity. Or the way my iPod plays just the right song when I’m running around Prospect Park and barely able to make it up a hill. I am pretty sure my laptop is an incarnation of Avalokitesvara.




Patrick Groneman

posted October 8, 2009 at 11:19 am


@Ro You just used an intelligent machine to communicate your thoughts about the seriousness of this post; let’s not brush ideas off just because they are new or complicated.
As Jon points out, we are already interacting very intimately with virtual intelligence, and the results are often quite humorous. (How many times has the battery on my phone run out on the ONLY day of the week when I actually need it?)
The relationship between human consciousness and technology is a vast subject. The question posed in this post is simply an invitation to create a vision of how that might look were we to create a technology more advanced than the human brain.
Any thoughts?




Davee

posted October 8, 2009 at 11:50 am


Very interesting question. I expect AI researchers will program their robots to have the same sense of “independent self” that we believe we have, even though we really are not independent selves. Then it’s just a question of how neurotic they program them to be as well, or whether that programming affords the same neurosis and fear that we’re afflicted by.
But what if researchers didn’t program a strong sense of self, and instead taught robots that they were inseparable from each other and us, part of the web of life, etc.? Would compassion then arise naturally for them?
I don’t know if AI researchers have sophisticated distinctions about this.




Alex Gault

posted October 8, 2009 at 12:18 pm


About 10 years ago, a collection of interviews with the Dalai Lama was published (Violence and Compassion), in which he was asked — if it could be proven rationally that computers could act selflessly and compassionately — “would it be technically correct to class them as sentient beings?”
His response? Why not.




~C4Chaos

posted October 8, 2009 at 12:35 pm


Ben Goertzel explored this topic in a thought-experiment work of fiction; see Enlightenment 2.0: http://www.goertzel.org/new_fiction/Enlightenment2.pdf
As for the Dalai Lama, that dude is a closet transhumanist. He’s not only down with the idea of computers as sentient beings, he’s also open to the idea of consciousness downloading into a machine. See http://bit.ly/DVClu
~C




MarionB

posted October 8, 2009 at 1:19 pm


And when the androids eventually go rogue, like Sarah Palin…?
I’d feel much safer with a Service Dog.




Mike

posted October 8, 2009 at 2:14 pm


The Hyperion Cantos, the science fiction series by Dan Simmons, deals with this question: it features AIs, hybrid human/AI beings, and “bio-fractured” humans called androids, and Buddhist ideas play a prominent role in the characters and setting of the latter two books.




Patrick Groneman

posted October 8, 2009 at 2:22 pm


Interesting point, Davee. I wonder whether this idea of an independent self would be programmed into the AI, or whether it might wind up being learned later on.




David Orban

posted October 8, 2009 at 7:59 pm


In the aims of their designers, when AGIs (Artificial General Intelligences) become possible, the whole gamut of human emotions and experiences will be accessible to them, and probably more. Just as there is not a single type of physical body design (fish swim and birds fly, giving them rather different sets of experiences than ours), there won’t be just one kind of intelligence anymore.
I had the chance to talk about these issues with the Dalai Lama, and his view is delightful and surprising at the same time:
http://www.davidorban.com/2007/12/your_balance_in/en/
(make sure to watch the three minute video until the end…)




Paul Griffin

posted October 8, 2009 at 11:30 pm


I like the AI conversation (and for the record, I stand with Kurt Gödel, who argued that machines will always lack a certain self-consciousness). The formula for developing greater human compassion involves introspection. The basic idea is that introspection or meditation helps one get in touch with one’s basic goodness, which leads to compassionate behavior. I am sure that computers will be able to do a great many things, wonderful, helpful things, but introspection seems far down on that list. We may be able to program them to carry out innumerable kind acts, but I would not count on an army of deeply feeling, self-aware, and introspective robots to bring about a revolution of compassion. Not any time soon.




Ro

posted October 9, 2009 at 2:32 am


@Patrick Groneman Sorry! You are right. I was in bad shape.
About artificial intelligence, I don’t know what to say.
But I think they’ll never realize that everything is a creation of their mind, that everything is impermanent, and so forth.




Patrick Groneman

posted October 9, 2009 at 6:51 pm


@ David Orban
Wow, I can’t believe what the Dalai Lama said. Was he joking?! If anyone didn’t follow David’s link, he has a great video of the Dalai Lama that may blow your mind:
http://www.davidorban.com/2007/12/your_balance_in/en/
Thanks for the great reporting. If he is being sincere, then in his view AGIs might hold great promise for the cause of ending suffering.




David Orban

posted October 11, 2009 at 7:27 am


@Patrick Groneman
My view of his words is that he was both joking and not joking simultaneously. Such radical statements can sometimes only be said in a way that allows those who are not ready to take you seriously to pretend that you were just joking.
Of course even the Dalai Lama doesn’t know what will happen, how human and machine intelligences will merge, so rather than pontificating on unknowable details, a koan-like expression is more fruitful, as it will provoke those who are ready to listen to think for themselves.




david thurman

posted October 11, 2009 at 11:34 pm


How come no women are hip to this, and it’s only men talking about it? I see no female speakers, no women involved in any way in this dialogue across the web regarding the Singularity and transhumanism. Only men discuss this topic, and I find that informative. It all feels very geekish, emotionally dimensionless.




Haley

posted October 28, 2009 at 12:16 pm


(after David Thurman) I’m a woman, and I am interested in this topic. I believe that machines should exist just to serve our needs, and have no compassion.




handyzubehör

posted October 29, 2009 at 5:35 am


There is no use worrying until we get anywhere near achieving this. I don’t think robots can be dangerous to human society. All the Terminator talk is good for movies only.




Robert Jones

posted September 29, 2010 at 3:45 pm


An intelligence must have a value system (see my website http://www.robert-w-jones.com and my blog http://www.robertwilliamjones.blogspot.com), so you can’t really give it zero compassion. But its values CAN be very different from human values. I think that’s good. Humans have poor values, IMHO.










