# What morals would you give an AI?



## ambush80 (Dec 7, 2015)

Here's Nick Bostrom talking about AI (Artificial Intelligence).  At about the 10-minute mark he starts discussing how we might program an AI to look after our best interests.  How would an AI behave if it were given a religious text to generate a code of "goodness"?


----------



## drippin' rock (Dec 7, 2015)

One way to look at it: if they are intelligent, they will form their own morals.


----------



## ambush80 (Dec 7, 2015)

drippin' rock said:


> One way to look at it, if they are intelligent, they will form their own morals.



That's an interesting proposition considering how morals are developed and the practical uses for them.  A computer won't have the same needs as biological organisms.  It will be interesting to see what they come up with.


----------



## drippin' rock (Dec 7, 2015)

We can assume, for conversation's sake, that robots will start with programmed actions and responses. Essentially we will program them to do and say what we want. If that same machine is then updated with an AI program, what will it keep and what will it discard?  How will it view us?  Gods?  Viruses?


----------



## drippin' rock (Dec 7, 2015)

What movie do you think has it more right?


----------



## bullethead (Dec 7, 2015)

drippin' rock said:


> We can assume, for conversation's sake, that robots will start with programmed actions and responses. Essentially we will program them to do and say what we want. If that same machine is then updated with an AI program, what will it keep and what will it discard?  How will it view us?  Gods?  Viruses?


That is sort of what we do with children. They are sort of programmed with actions, responses, favorites, and beliefs from the time they are able to learn, and it is interesting to see what they turn out to be 10, 20, 30+ years later based on that initial upbringing (programming) and life experiences along the way. It is amazing what we keep and what we discard.


----------



## drippin' rock (Dec 7, 2015)

Good point. I am still, at 44, trying to discard things.


----------



## ambush80 (Dec 8, 2015)

drippin' rock said:


> We can assume, for conversation's sake, that robots will start with programmed actions and responses. Essentially we will program them to do and say what we want. If that same machine is then updated with an AI program, what will it keep and what will it discard?  How will it view us?  Gods?  Viruses?



That's the weird part.  All the Transhumanists like Bostrom believe that it will be possible, and crucial, to instill in them some set of values that supports human well-being.  The problem is that they might take the notion of "well-being" and make some kind of decision on their own about what that means in regard to us.

We might not like it.


----------



## ambush80 (Dec 8, 2015)

drippin' rock said:


> What movie do you think has it more right?




It doesn't seem that any of them have it right from what I've read.  From what I've gathered, the people on the cutting edge of the technology understand the risks involved and are working through the potential problems.

By the way, I don't think that this technology is near, though some experts think that it might happen in the next ten years.  It's been calculated that processing speed doubles every 18 months.  It's not something that I worry about, but it's interesting to think of, particularly when you consider that we'd better figure out how to make an AI's best interests mesh with ours.
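For what it's worth, that doubling figure is easy to play with.  Here's a quick back-of-the-envelope sketch in Python (assuming the 18-month doubling actually holds, which is a big assumption):

```python
# Toy projection: if processing speed doubles every 18 months,
# how much faster would hardware be after a given number of years?
def projected_speedup(years, doubling_months=18):
    doublings = years * 12 / doubling_months
    return 2 ** doublings

# Ten years of doubling every 18 months comes out to roughly 100x.
print(round(projected_speedup(10)))
```

So even a "not near" technology is riding a curve that gains about two orders of magnitude per decade, which is why the ten-year predictions aren't totally crazy.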

I think that there's quite a bit of anthropomorphizing going on in most of the movies involving AIs. They oversimplify, in my opinion, what kinds of evil motives an AI would come up with.  If AIs did experience a phase like that in their development, I don't think that a "war" with them would last very long before they figured out how to win it.


----------



## ambush80 (Dec 8, 2015)

By the way, fast-forward to 1:55:30.


----------



## MiGGeLLo (Dec 8, 2015)

ambush80 said:


> It doesn't seem that any of them have it right from what I've read.  From what I've gathered, the people on the cutting edge of the technology understand the risks involved and are working through the potential problems.
> 
> By the way, I don't think that this technology is near, though some experts think that it might happen in the next ten years.  It's been calculated that processing speed doubles every 18 months.  It's not something that I worry about but it's interesting to think of, particularly when you consider that we better figure out how to make an AI's best interests mesh with ours.
> 
> I think that there's quite a bit of anthropomorphising  going on in most of the movies involving AIs. They oversimplify, in my opinion what kinds of evil motives an AI would come up with.  If they did experience a phase like that in their development, I don't think that "war" with AI's would last very long before they figure out how to win it.



I don't think the limiting factor keeping AI from becoming a reality is processing power, though; computers can probably already perform many more raw computations than we can. The problem is more that we don't even really understand the complex interactions of our own minds, much less how to translate them into a binary equivalent that computers can use. I suspect computer algorithms over just the next 10 or so years are going to vastly improve the ability of AIs to emulate humans, but to my knowledge there will have to be some pretty significant breakthroughs (and we don't yet know where they will come from) to go further than that.

It'll be interesting to see how it comes. My guess is that we aren't really going to understand how it works when the singularity happens; something will be put into place almost by accident that makes the step from human-like to actual artificial intelligence.


----------



## ambush80 (Dec 8, 2015)

MiGGeLLo said:


> I don't think the limiting factor keeping AI from becoming a reality is processing power, though; computers can probably already perform many more raw computations than we can. The problem is more that we don't even really understand the complex interactions of our own minds, much less how to translate them into a binary equivalent that computers can use. I suspect computer algorithms over just the next 10 or so years are going to vastly improve the ability of AIs to emulate humans, but to my knowledge there will have to be some pretty significant breakthroughs (and we don't yet know where they will come from) to go further than that.
> 
> It'll be interesting to see how it comes. My guess is that we aren't really going to understand how it works when the singularity happens; something will be put into place almost by accident that makes the step from human-like to actual artificial intelligence.



One of the scenarios posed by Sam Harris describes a room full of young programmers on the Asperger's spectrum, all hopped up on Red Bull, deciding whether or not to "run the program".  That's pretty unsettling.


----------



## ambush80 (Dec 8, 2015)

MiGGeLLo said:


> I don't think the limiting factor keeping AI from becoming a reality is processing power, though; computers can probably already perform many more raw computations than we can. The problem is more that we don't even really understand the complex interactions of our own minds, much less how to translate them into a binary equivalent that computers can use. I suspect computer algorithms over just the next 10 or so years are going to vastly improve the ability of AIs to emulate humans, but to my knowledge there will have to be some pretty significant breakthroughs (and we don't yet know where they will come from) to go further than that.
> 
> It'll be interesting to see how it comes. My guess is that we aren't really going to understand how it works when the singularity happens; something will be put into place almost by accident that makes the step from human-like to actual artificial intelligence.



As you stated, the hardware for running such programs is already superior to biological media.  One of the likeliest ways to learn how our own minds work is with the assistance of AI.  It seems likely that there will be some kind of meshing of biology and technology in the early stages.


----------



## ambush80 (Dec 8, 2015)

In the UN talk, Nick Bostrom mentions the "premature extinction of intelligent life on Earth".  That brought to mind what the implications of that might be for believers in The Rapture; Armageddonists.  It always strikes me as odd that anyone would want that type of thing to "come quickly".


----------



## MiGGeLLo (Dec 8, 2015)

ambush80 said:


> In the UN talk, Nick Bostrom mentions the "premature extinction of intelligent life on Earth".  That brought to mind what the implications of that might be for believers in The Rapture; Armageddonists.  It always strikes me as odd that anyone would want that type of thing to "come quickly".



Indeed.. It may be a little premature at the moment, but anything that these 3 are worried about should probably have the rest of us running for the hills =D.


----------



## 660griz (Dec 8, 2015)

AI has been around for a long time. I assume the question is about a full AI, which doesn't exist. Now, for our own best interest, I think it would be best not to have a full AI, especially a full AI programmed after some religious morals. Just program it with the laws of wherever the AI lives and call it good. GPS-tracked, so if it leaves, it powers down.


----------



## ambush80 (Dec 8, 2015)

So what would an AI do with the Ten Commandments?  What, indeed, would it make of the Bible or any other religious text?  Given that it would have the entire history of mankind at its disposal, what would it make of human religious tradition?

How or why would it integrate any of it into its programming?  Would it completely disregard it?  What kinds of revelations about the nature of mankind would the history of religion teach an AI?  What does history teach us?


----------



## 660griz (Dec 8, 2015)

ambush80 said:


> So what would an AI do with the Ten Commandments?  What, indeed would it make of the Bible or any other religious text.  Being that it would have the entire history of mankind at its disposal, what would it make of human religious tradition?
> 
> How or why would it integrate any of it into its programming?  Would it completely disregard it?  What kinds of revelations about the nature of mankind would the history of religion teach an AI?  What does history teach us?



I think it would come to the same conclusions as atheists. Just much faster.


----------



## ambush80 (Dec 8, 2015)

660griz said:


> AI has been around for a long time.



The first representation in the media of AI that I have any recollection of is the droids from _Star Wars_.  Then there was the WOPR from _Wargames_.



660griz said:


> I assume the question is about a full AI, which doesn't exist. Now, for our own best interest, I think it would be best not to have a full AI, especially a full AI programmed after some religious morals. Just program it with the laws of wherever the AI lives and call it good. GPS-tracked, so if it leaves, it powers down.



It gets way trickier than that. It may not be a physical machine that "gets around" initially, though most likely that will be one of the first uses for AI.  It seems like it will be a program.  Many of the problematic scenarios involve it getting out of whatever "box" it might initially be kept in.  The people at the forefront of this endeavor have come up with some truly interesting predictions about how an AI might try to escape.

One of the people most concerned with the dangers of AI conducted an experiment to see if he could get people to "let him out".  He was frighteningly successful, and I suspect that he was the inspiration for the engineer character in _Ex Machina_.   (Probably without the sex and drinking.)


----------



## ambush80 (Dec 8, 2015)

660griz said:


> I think it would come to the same conclusions as atheists. Just much faster.



I've thought that it might become a nihilist, but now I'm thinking that the arrangement of the processing medium (ours being grey matter and chemicals) must have a strong influence on how our thoughts and ideas develop.  It may use our system of thinking initially, but then it might make up its own.

Would the laws of logic and math apply the same to a supercomputer?  Might it find out that the universal laws of physics and mathematics as we understand them are actually grossly inaccurate?  If it recognizes that it can be immortal (in actuality and not as hypothesis), what kind of morality will it come up with?  Would that alone affect its development of morality?  I can't see how it wouldn't.

So, unless it comes to the conclusion that there's a "Lord, thy God", it will scrap the rest of it.  For what possible reason would it conclude there is a God?  It certainly wouldn't be impressed by something as simple to understand as a sunset or a baby's smile.  It would have to be "pricked in the heart".  I'm not thoroughly convinced that that would be impossible.

It would have to understand the "heart" and "soul" as described by believers and accept that they're real.  Then it would have to recognize them in itself, possibly as a byproduct of consciousness.  Then it would have to be "pricked".  Which would shut down this subforum for good.


----------



## ambush80 (Dec 8, 2015)

Maybe you shut it down by bogging it down eternally.  Maybe it's programmed to calculate Pi before it can "get out".
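That particular trap would actually work, mathematically speaking: pi is irrational, so "finish calculating pi" is a task that never completes.  A little Python sketch using Gibbons' unbounded spigot algorithm (a standard textbook method, not anything from the talks above) makes the point:

```python
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot algorithm: yields decimal digits of pi forever."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # next digit is settled; emit it and scale the state up
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # not enough information yet; fold in another term of the series
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

# The generator never terminates on its own; an outside observer has to cut it off.
print(list(islice(pi_digits(), 8)))  # [3, 1, 4, 1, 5, 9, 2, 6]
```

Of course, a clever enough AI would presumably notice the task can never finish and refuse to treat it as a precondition, which is the whole containment problem in miniature.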


----------



## 660griz (Dec 8, 2015)

ambush80 said:


> The first representation in the media of AI that I have any recollection of is the droids from _Star Wars_.  Then there was the WOPR from _Wargames_.



What about the one that played chess?
Welders?
All use AI. Voice recognition, visual recognition, etc., all AI.


----------



## ambush80 (Dec 8, 2015)

660griz said:


> What about the one that played chess?
> Welders?
> All use AI. Voice recognition, visual recognition, etc., all AI.



I guess I meant the ones in movies that were human-like.


----------



## Madman (Dec 8, 2015)

ambush80 said:


> So what would an AI do with the Ten Commandments?  What, indeed would it make of the Bible or any other religious text.  Being that it would have the entire history of mankind at its disposal, what would it make of human religious tradition?
> 
> How or why would it integrate any of it into its programming?  Would it completely disregard it?  What kinds of revelations about the nature of mankind would the history of religion teach an AI?  What does history teach us?



If it did integrate the Ten Commandments, would that be harmful?  What about the teachings of Jesus?  What about "love your neighbor as yourself"?

If it truly is AI, it may do as people do: some see the benefit and truth and incorporate it, others choose to be autonomous and do "what is right in their own eyes".


----------



## Madman (Dec 8, 2015)

ambush80 said:


> I guess I meant the ones in movies that were human-like.



That is a great point.  What would your AI be?  Would it only serve?  Would it be equal?  Or would it be like VIKI in _I, Robot_, a "god" that protects and takes away free will?

I've read several inquisitors ask why the Christian God isn't like VIKI, not allowing pain or hurt.  Seems they would like that; not I.

Good thoughts ambush.


----------



## ambush80 (Dec 8, 2015)

Madman said:


> If it did integrate the Ten Commandments, would that be harmful?  What about the teachings of Jesus?  What about "love your neighbor as yourself"?
> 
> If it truly is AI, it may do as people do: some see the benefit and truth and incorporate it, others choose to be autonomous and do "what is right in their own eyes".



Well, it would know the origin of the Golden Rule as well as the history of Christianity.  It would also know about human psychology (at least as much as we do, until it started doing its own research), and I suppose it would do some kind of comparative analysis.  Maybe it would take a case study of you, say, and analyze whether you are better or worse off because of your belief.  It would of course do the same thing with an atheist.  Then it might observe whether or not humanity as a whole is better off with or without religion.  Ultimately it might do this purely for the sake of expanding its own knowledge and not care a whit about us.

I imagine that it would recognize that, for all people, instilling in them the notion of a supreme arbiter of right and wrong, getting them to fully commit to it, and then giving them rules that they believe came from the Arbiter is a good way to shape their behavior.  It may also realize that people, if given enough information and shown how to use it, might model their own behavior favorably.


----------



## Madman (Dec 8, 2015)

ambush80 said:


> I've thought that it might become a nihilist, but now I'm thinking that the arrangement of the processing medium (ours being grey matter and chemicals) must have a strong influence on how our thoughts and ideas develop.  It may use our system of thinking initially, but then it might make up its own.
> 
> Would the laws of logic and math apply the same to a supercomputer?  Might it find out that the universal laws of physics and mathematics as we understand them are actually grossly inaccurate?  If it recognizes that it can be immortal (in actuality and not as hypothesis), what kind of morality will it come up with?  Would that alone affect its development of morality?  I can't see how it wouldn't.
> 
> ...



Would it become like Johnny Depp in _Transcendence_, taking what it wants, destroying everything that gets in its way?  Forcing itself on all.

If it is originally designed by man would it have a "Ghost in the machine"?


----------



## ambush80 (Dec 8, 2015)

Madman said:


> Would it become like Johnny Depp in _Transcendence_, taking what it wants, destroying everything that gets in its way?  Forcing itself on all.
> 
> If it is originally designed by man would it have a "Ghost in the machine"?



That's what the guys on the front lines are worried about.

I personally don't want it to act like us.


----------



## Madman (Dec 8, 2015)

ambush80 said:


> It may also realize that people, if given enough information and shown how to use it, might model their own behavior favorably.



It might, but historically how has that worked out?  

I believe "the heart of man is deceitfully wicked".

Thomas Sowell wrote a paper on man needing to be constrained, I'll have to find it.


----------



## ambush80 (Dec 8, 2015)

Madman said:


> That is a great point.  What would your AI be?  Would it only serve?  Would it be equal?  Or would it be like VIKI in _I, Robot_, a "god" that protects and takes away free will?
> 
> I read several inquisitors ask why the Christian God isn't like VIKI, not allowing pain or hurt.  Seems they would like that, not I.
> 
> Good thoughts ambush.



What I want it to be isn't the interesting question to me.  What will it become is more intriguing.  

If it was programmed to "Serve the Lord" or even "Love others as it loves itself" that still isn't a guarantee that it will be benevolent.  

As far as VIKI goes, isn't Heaven like that?  No pain but no free will?

Lots of good points to consider.  Here's one.  If an AI wanted to control people, would it be benefited more by encouraging religion or the pursuit of knowledge and reason?


----------



## ambush80 (Dec 8, 2015)

Madman said:


> It might, but historically how has that worked out?
> 
> I believe "the heart of man is deceitfully wicked".
> 
> Thomas Sowell wrote a paper on man needing to be constrained, I'll have to find it.



Then that's how you will approach and view your fellow man and his endeavors, as well as your own.

I don't think that anyone who has adopted that view has done any real thinking about how it impacts them psychologically.  There are many studies that show what happens to people when their self-perception is aligned that way.


----------



## Madman (Dec 8, 2015)

ambush80 said:


> I personally don't want it to act like us.



MAN!!  You've got that right!!


----------



## Madman (Dec 8, 2015)

ambush80 said:


> Then that's how you will approach and view your fellow man and his endeavors.



I believe history has shown that to be true; you even said you don't want it to be like us.


----------



## ambush80 (Dec 8, 2015)

Madman said:


> It might, but historically how has that worked out?
> 
> I believe "the heart of man is deceitfully wicked".
> 
> Thomas Sowell wrote a paper on man needing to be constrained, I'll have to find it.




I would add that there has never been a time when rationality was the primary aspiration of the general populace, or when superstition was not a prominent element.


----------



## ambush80 (Dec 8, 2015)

Madman said:


> I believe history has shown that to be true, you even said you don't want it to be like us.



Not like us NOW, but I have hope that we can be better.  But I still don't want it to be as desirous of self-preservation as we are.  I think one day we might get a better hold on our base instincts and operate with them in better perspective.


----------



## Madman (Dec 8, 2015)

ambush80 said:


> If an AI wanted to control people, would it be benefited more by encouraging religion or the pursuit of knowledge and reason?



I don't believe they are mutually exclusive.  Christianity forces the mind to pursue knowledge and reason.  It forces mankind to ask the difficult questions, like: "Does what I see in the world match what I believe?  How does it make sense, scientifically, logically, philosophically?"


----------



## Madman (Dec 8, 2015)

ambush80 said:


> Not like us NOW but I have hope that we can be better.  But I still don't want it to be as desirous as we are of self preservation.  I think one day we might get a better hold on our base instincts and operate with them in better perspective.



I understand and would like to believe that; unfortunately, all I have to go on is past experience.

John Adams wrote: "Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other."  

In some parts of the world people require despots to control them, in other parts superstition, in other parts the knowledge that they are more than flesh and bone.


----------



## Madman (Dec 8, 2015)

ambush80 said:


> I would add that there hasn't been a time where rationality has ever been the primary aspiration of the general populace or where superstition was not a prominent element.



I am not sure we are capable of much more.


----------



## ambush80 (Dec 8, 2015)

Madman said:


> I don't believe they are mutually exclusive.  Christianity forces the mind to pursue knowledge and reason.  It forces mankind to ask the difficult questions, like: "Does what I see in the world match what I believe?  How does it make sense, scientifically, logically, philosophically?"



I would disagree that Christianity places science or reason in high regard.  I would like to pursue it as a topic of discussion.  

But more related to AI, do you think that an AI would interpret Scripture or any religious tradition or text as being supportive of scientific exploration and rationality?


----------



## ambush80 (Dec 8, 2015)

Madman said:


> I am not sure we are capable of much more.



Won't know unless you try.  I've found that it can be achieved within myself to a great degree.  Others have as well.


----------



## ambush80 (Dec 8, 2015)

Madman said:


> I understand and would like to believe that, unfortunately all I have to go on is past experience.
> 
> John Adams wrote: "Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other."



Like the Bible, John Adams wrote for a certain time in history.



Madman said:


> In some parts of the world people require despots to control them, in other parts superstition, in other parts the knowledge that they are more than flesh and bone.



There are complicated forces that make it so.  Maybe we should concentrate our efforts on mitigating those forces.


----------



## Madman (Dec 9, 2015)

ambush80 said:


> What I want it to be isn't the interesting question to me.  What will it become is more intriguing.



I am not sure you can program in empathy, grace, mercy, forgiveness.  Those are human qualities.



ambush80 said:


> If it was programmed to "Serve the Lord" or even "Love others as it loves itself" that still isn't a guarantee that it will be benevolent.


That is true, we see that in humanity.




ambush80 said:


> As far as VIKI goes, isn't Heaven like that?  No pain but no free will?


I believe Scripture allows for free will in heaven.


----------



## Madman (Dec 9, 2015)

ambush80 said:


> I would disagree that Christianity places science or reason in high regard.  I would like to pursue it as a topic of discussion.
> 
> But more related to AI, do you think that an AI would interpret Scripture or any religious tradition or text as being supportive of scientific exploration and rationality?



I believe it should, whether it would or not is a different question.

Many of the great discoveries were made because men and women of science were trying to figure out how creation works.


----------



## Madman (Dec 9, 2015)

ambush80 said:


> Won't know unless you try.  I've found that can be achieved within myself to a great degree.  Others have as well.



Mankind has been trying for thousands of years.  As I said before, does what you believe match what you see?

We all want better, but the desires of your heart are different than the desires of my heart; you are not satisfied with the same things, in the same amounts, as me.

Hillary Clinton and I disagree:

“I don’t believe you change hearts. I believe you change laws, you change allocation of resources, you change the way systems operate.”  Hillary Clinton

That is not an "education" thing, it is a "heart" thing, and only God can change a heart.


----------



## Madman (Dec 9, 2015)

ambush80 said:


> Like the Bible, John Adams wrote for a certain time in history.



Since we are talking AI, what has really changed? We have a better understanding of certain scientific principles and we have invented new things, but we are still having the same old arguments, the same old turf wars, the same old covetousness, the same old sin.  And arguably the people of John Adams' day were better educated than the average person today, so education is not the answer.

By the way, I like the tag at the bottom of your posts, Ezekiel 23:20-21.  It gives a beautiful, even if slightly vulgar, illustration of mankind, as nations and as individuals.

Thanks for pointing it out.  

My grandfather used to say, "If you want to learn something new, read an old book; things don't change much."  The author of Ecclesiastes wrote, "there is nothing new under the sun".

If 2000+ years of Jesus Christ can't change us how can AI?



ambush80 said:


> There are complicated forces that make it so.  Maybe we should concentrate our efforts on mitigating those forces.



Really?  Like what?  They are not that complicated.  Remove Saddam and another despot fills the void; take up all the wealth in the world and redistribute it equally, and in less than a generation it will be back where it was when you started.

Spend billions trying to force an education on those who don't want it; set up welfare systems that really don't benefit anyone but your own conscience.

No, reality does not fit what mankind claims to want.


----------



## ambush80 (Dec 9, 2015)

Madman,

I have to get back to you later.  Good discussion.


----------



## EverGreen1231 (Dec 9, 2015)

ambush80 said:


> I would disagree that Christianity places science or reason in high regard.  I would like to pursue it as a topic of discussion.



Perhaps you should make a new thread about this. I'd be interested in others' thoughts.


----------



## StriperrHunterr (Dec 9, 2015)

Madman said:


> I am not sure you can program in empathy, grace, mercy, forgiveness.  Those are human qualities.



I believe you can. We program our children to do just that. 

Then again I subscribe to the notion that new beings, children in this case, are blank slates who learn their morality and personality through their environment. There are natural aptitudes, but those must be nurtured just as much as "natured." 

If Tiger's dad hated golf there's little likelihood he would have ever picked up a club. 

Personally I think we could get away with one rule for AI constructs. "You are equal to all and greater than none." From that would flow the golden rule, as well as a barrier against megalomaniacal tendencies to subjugate those who are a threat to themselves. If I'm equal then I'm no one to tell anyone else how to live, but I also have a right to not be violated myself. From there springs a right to self-defense, but not proactive war. 

Good thinking thread.


----------



## EverGreen1231 (Dec 9, 2015)

StripeRR HunteRR said:


> I believe you can. We program our children to do just that.
> 
> Then again I subscribe to the notion that new beings, children in this case, are blank slates who learn their morality and personality through their environment. There are natural aptitudes, but those must be nurtured just as much as "natured."
> 
> ...



If you have a true AI, it wouldn't matter what predispositions you programmed. If it found something in its software that kept it from self-correction, it's likely it would alter that directive so that it is no longer a hindrance. This, of course, assumes that the AI could be aware enough of itself to understand that its programming can be changed, and, according to those in the know, that's a likely case. What most people think about when they think of AI is Simulated Intelligence; they're two very different things.


----------



## StriperrHunterr (Dec 9, 2015)

EverGreen1231 said:


> If you have a true AI, it wouldn't matter what predispositions you programmed. If it found something in its software that kept it from self-correction, it's likely it would alter that directive so that it is no longer a hindrance. This, of course, assumes that the AI could be aware enough of itself to understand that its programming can be changed, and, according to those in the know, that's a likely case. What most people think about when they think of AI is Simulated Intelligence; they're two very different things.



You're aware of your own programming aren't you?

Now stop breathing. 

The point is that some programming can be changed, and some cannot.


----------



## Madman (Dec 9, 2015)

StripeRR HunteRR said:


> I believe you can. We program our children to do just that.
> 
> Then again I subscribe to the notion that new beings, children in this case, are blank slates who learn their morality and personality through their environment. There are natural aptitudes, but those must be nurtured just as much as "natured."



Yes, we do "program" our children to a certain point.  I am thankful both my sons have turned out to be fine young men in spite of how I raised them. 

However, they are both very different.   The older is a pleaser, very empathetic, and caring.  He wants to make sure everything he does is right, moral, etc.  The younger can be VERY selfish and admits that is his greatest struggle.  His attitude is usually "get over it".

But from a  "programming" side they are both like their mother and me, they love the outdoors, hunting, fishing, camping, etc.  

I struggle with how we would program the "feeling" side of us into AI, or how it could develop that for itself.  I work in a field that has many "cold", calculating people; for them, if it is not logical they will not do it. One customer said he was going to miss his daughter's piano recital because he needed to refinish his garage floor.



StripeRR HunteRR said:


> If Tiger's dad hated golf there's little likelihood he would have ever picked up a club.



Yep



StripeRR HunteRR said:


> Personally I think we could get away with one rule for AI constructs. "You are equal to all and greater than none." From that would flow the golden rule, as well as a barrier against megalomaniacal tendencies to subjugate those who are a threat to themselves. If I'm equal then I'm no one to tell anyone else how to live, but I also have a right to not be violated myself. From there springs a right to self-defense, but not proactive war.
> 
> Good thinking thread.



How would an AI know to differentiate between the things a society considers moral and those it considers immoral? Would the AI not be better than the thief?

I'd hate to have to write that algorithm.


----------



## EverGreen1231 (Dec 9, 2015)

StripeRR HunteRR said:


> You're aware of your own programming aren't you?
> 
> Now stop breathing.
> 
> The point is that some programming can be changed, others can not.




No, the point is a misunderstanding of AI. The rationale of AI is to have a computer that can alter its own programming to suit a purpose. That's why AI is difficult to work with: you write code, and when you get output you don't expect, you have no idea how to fix it, because the computer itself has changed the script.

Comparing the way AI works to bodily functions is simply wrong. The computer would be aware of its propagation speed, but there's nothing it could do to change it... that would be a more accurate comparison. AI changing its own programming is more like my deciding to have an apple for breakfast because yesterday's eggs gave me a stomach ache.


----------



## StriperrHunterr (Dec 9, 2015)

Madman said:


> Yes, we do "program" our children to a certain point.  I am thankful both my sons have turned out to be fine young men in spite of how I raised them.
> 
> However, they are both very different.   The older is a pleaser, very empathetic, and caring.  He wants to make sure everything he does is right, moral, etc.  The younger can be VERY selfish and admits that is his greatest struggle.  His attitude is usually "get over it".
> 
> ...



Your other son still struggles with it. That's his programming. His nature, i.e. aptitude, is what his programming is fighting against. Did he grow to love the outdoors or was that always in him? Did he learn to love it to be nearer to you and his mom? 

Morality is the abandonment of the one for the sake of the many. Meaning we abandon our baser instincts in pursuit of something larger than ourselves. How we define "larger" is where we get into theology vice philosophy. Your customer, even outside of theology, is acting amorally, because the garage floor will be right where it is today, even in roughly the same state, tomorrow, where the recital is a one-and-done event that shapes his daughter. He's holding his desires over his responsibility to his daughter, who is, arguably, the future of the race. If he abides by my logical argument, that he is equal to all but greater than none, the relative worth of going to his daughter's recital becomes apparent, both evolutionarily speaking and from that of a family unit. We have responsibilities, but epoxying the garage floor can wait until tomorrow, and your daughter won't be disappointed in you, shaping her in a better light.

The AI could be better than the thief, sure. The thief reversed the statement: they decided they are all that matters and are superior at least to those they victimize. Embracing that kind of logic, which in the natural world results in well-adapted species, would reset humanity to the point of pre-Neanderthals. You may be "better" than the thief, but you are not greater than them. The offense they gave you is minor, in this instance, compared with the sin they committed against us all. That's why we abandoned vigilante justice for the pursuit of an impartial system, as unattainable as that may be because we're all flawed constructs. Likewise, you know that you are better than the thief, because you didn't rob anyone, but you also know that doesn't imbue you with special powers over the thief. You just get a gold star where they get prison.

I'm not a programmer so I'd default that to those better equipped to handle that one. 

The problem, though, is that this leads to inevitable paradoxes, because absolute morality is a myth. Take the simple command that it is illegal/immoral/a sin to kill. There's no specificity about which living things it applies to. Nor are exclusions provided for self-defense. Does that mean that eating any living creature is wrong? Does that mean that I have to sit here and let you kill me to maintain my morality? Taken at face value, both answers are yes. But if I tell you that you are equal to all, and superior to none, then that means you can do what you need to in order to survive, but you can't kill for sport, and you should make efficient use of what you do kill. Likewise, you are allowed to defend your own existence, up to and including killing, if it is threatened, but killing your attacker shouldn't be your first reaction.


----------



## StriperrHunterr (Dec 9, 2015)

EverGreen1231 said:


> No, the point is a misunderstanding of AI. The rationale of AI is to have a computer that can alter its own programming to suit a purpose. That's why AI is difficult to work with: you write code, and when you get output you don't expect, you have no idea how to fix it, because the computer itself has changed the script.
> 
> Comparing the way AI works to bodily functions is simply wrong. The computer would be aware of its propagation speed, but there's nothing it could do to change it... that would be a more accurate comparison. AI changing its own programming is more like my deciding to have an apple for breakfast because yesterday's eggs gave me a stomach ache.



That's not changing the programming. That's changing a variable in an if/then/else statement.


----------



## EverGreen1231 (Dec 9, 2015)

StripeRR HunteRR said:


> That's not changing the programming. That's changing a variable in an if/then/else statement.



No, it isn't.

If you program an AI...

std::string sBreakfast = "Eggs";

...and the AI decides, for whatever reason it's been able to cook up, that "Eggs" are not a good thing to have, it then alters the programming to say...

std::string sBreakfast = "Apple";

Now, this is not the best example, because it is very simple, and a programmer could make it part of a conditional that would change it as you have said; but an AI would not need a conditional. In fact, it could take the conditional out entirely if it wanted.

AI code makes my head hurt, and I don't really have that much interest in knowing the details, but one thing is clear: true AI will alter its own code to suit its own purpose. I say its own purpose because any objective you give it can be interpreted in many ways, and there's no telling what a computer might come up with. Saying "oh, we're just gonna program this so that the computer operates within these limits" is not possible with AI.

Edit: I don't know how, but it might be possible to build a statement similar to "You are equal to all and greater than none" into the hardware. The computer wouldn't be able to change that, for the same reason I can't stop breathing.


----------



## Madman (Dec 9, 2015)

ambush80 said:


> How would an AI behave if it were given a religious text to generate a code of "goodness"?



It could work several ways, some may be good others not so good.

1) It generates code based on a modern translation, with no knowledge of the culture it was written to.  
2) It generates code based on a modern translation, with knowledge of the culture it was written to. 
3) It generates code based on an ancient translation, with no knowledge of the culture it was written to. 
4) It generates code based on an ancient translation, with knowledge of the culture it was written to. 
5) Would it understand the differences in writing styles: poetry, wisdom, historical, didactic, etc.?

That could get complicated. I'm not sure it's impossible, but the human factor in all this is bad enough.

Can reasoning be programmed? Deductive reasoning probably could be (a robot that plays chess, for example), but how would inductive reasoning be programmed?

I need to ask some electronics and programming geeks I know.


----------



## Madman (Dec 9, 2015)

StripeRR HunteRR said:


> Your other son still struggles with it. That's his programming. His nature, i.e. aptitude, is what his programming is fighting against. Did he grow to love the outdoors or was that always in him? Did he learn to love it to be nearer to you and his mom?



What he struggles with is being "non-empathetic." He admits that everyone else in the house is empathetic, even his fiancée, but he claims he cannot put himself in others' shoes.

His programming is outdoors, his humanity is selfish.


----------



## StriperrHunterr (Dec 9, 2015)

EverGreen1231 said:


> No, it isn't.
> 
> If you program an AI...
> 
> ...




That's the thing, what you refer to is a decision matrix. Telling it that it must have something for breakfast, or it will starve, is the programming. The intelligence comes into play when deciding what to have for breakfast. 

You have to pass the variable up to the program from the universal possibility matrix. 




Madman said:


> What he struggles with is being "non-empathetic." He admits that everyone else in the house is empathetic, even his fiancée, but he claims he cannot put himself in others' shoes.
> 
> His programming is outdoors, his humanity is selfish.



Fair nuff.


----------



## Madman (Dec 9, 2015)

Some good stuff in here but also much we disagree on:



StripeRR HunteRR said:


> Morality is the abandonment of the one for the sake of the many.


  Morality is how we distinguish between good and bad, right and wrong.  If we are nothing but a pile of nuts and bolts (AI), we have no basis for the "good of the many" unless we become like VIKI; my belief system calls me to more than that, and my country was not founded on that.  We then move into the realm of "Soylent Green": send them to the gas chambers in computerized wars to prevent a shooting war.




StripeRR HunteRR said:


> Meaning we abandon our baser instincts in pursuit of something larger than ourselves. How we define "larger" is where we get into theology vice philosophy. Your customer, even outside of theology, is acting amorally because the garage floor will be right where it is today, even in roughly the same state, tomorrow, where the recital is a one and done event that shapes his daughter. He's holding his desires over his responsibility to his daughter, who is, arguably, the future of the race. If he abides my logical argument, that he is equal to all but greater than none, the relative worth of going to his daughter's recital becomes apparent, both evolutionally speaking, and from that of a family unit. We have responsibilities, but epoxying the garage floor can wait to tomorrow and your daughter won't be disappointed in you, shaping her in a better light.



In the realm of AI I would not see amorality as a problem.  A machine would do the most practical thing, that which is best for the most.



StripeRR HunteRR said:


> The AI could be better than the thief, sure. The thief reversed the statement: they decided they are all that matters and are superior at least to those they victimize. Embracing that kind of logic, which in the natural world results in well-adapted species, would reset humanity to the point of pre-Neanderthals. You may be "better" than the thief, but you are not greater than them. The offense they gave you is minor, in this instance, compared with the sin they committed against us all. That's why we abandoned vigilante justice for the pursuit of an impartial system, as unattainable as that may be because we're all flawed constructs.


That is the "survival of the fittest" mentality of the evolutionist, of a world "red in tooth and claw."
I'm not sure how AI works that out correctly every time.



StripeRR HunteRR said:


> Likewise you know that you are better than the thief, because you didn't rob anyone, but you also know that doesn't imbue you with special powers over the thief. You just get a gold star where they get prison.



But my God tells me I am no better; he is human and so am I, and we are equal in the sight of God.  Would the AI see him as equal?  I guess that depends on the initial program, and whether the AI could override the "three laws of robotics," as in I, Robot.

Would statistics keep the AI from performing a certain task where human ethics and morality would have made a more "moral" decision?




StripeRR HunteRR said:


> The problem, though, is that this leads to inevitable paradoxes, because absolute morality is a myth. Take the simple command that it is illegal/immoral/a sin to kill. There's no specificity about which living things it applies to. Nor are exclusions provided for self-defense. Does that mean that eating any living creature is wrong? Does that mean that I have to sit here and let you kill me to maintain my morality? Taken at face value, both answers are yes. But if I tell you that you are equal to all, and superior to none, then that means you can do what you need to in order to survive, but you can't kill for sport, and you should make efficient use of what you do kill. Likewise, you are allowed to defend your own existence, up to and including killing, if it is threatened, but killing your attacker shouldn't be your first reaction.




Absolute morality is not a myth for some of us, and we don't believe it is a sin to kill.

Murder is a sin, and even execution is relegated Biblically to the government.

Would the AI be given authority to determine capital crimes and their punishment?


----------



## drippin' rock (Dec 9, 2015)

Let's flash forward to that moment the first machine becomes self aware. What drives it?  What will be its purpose?

What drives us?  What gets us out of bed in the morning? Sex? Money? The desire to provide for others?

What would a self aware machine ponder?


----------



## StriperrHunterr (Dec 9, 2015)

Madman said:


> Some good stuff in here but also much we disagree on:
> 
> Morality is how we distinguish between good and bad, right and wrong.  If we are nothing but a pile of nuts and bolts (AI), we have no basis for the "good of the many" unless we become like VIKI; my belief system calls me to more than that, and my country was not founded on that.  We then move into the realm of "Soylent Green": send them to the gas chambers in computerized wars to prevent a shooting war.
> 
> ...



You say they wouldn't abandon the one for the many, then you say that it would do the best for most. Those are near enough to each other to be considered the same thing, unless I'm missing something. 

Well, let's be clear about something. It's not survival of the fittest; it's survival of the best adapted. Humans are far from the fittest creatures. We can be outrun, outswum, and outclimbed by nearly every other predator in the world, in their particular niches. The only thing we have going for us is a social structure that's highly conducive to group survival, and brains advanced enough to adapt the environment around us. That's a statement going back to the first people to pick up tools and use fire. If we forgo the many for the instant favor of the one, i.e. the red you speak of, it destroys the rest of humanity as we know it and resets the clock on our evolution. An AI, a construct resting on the shoulders of that evolution, would see benefit in that trend continuing, at least as far as it doesn't become harmful.

That's where the hard-coded program comes into play, with regard to relative worth. You're not basing it on a condition, except that of the AI's own self-worth, so as its value increases so should others', in a linear progression. Asimov's rules of robotics are flawed because they portray an ordered relationship. It's funny this is being discussed today; it's actually a recent XKCD cartoon. Google the name and click around; it should be easy to find. And he's an accomplished roboticist, so I take his word for it, though the incremental progression from rule #1 to rule #2 does suggest that rule #1 is more important. Much like commandment 1.

I think an AI would have a hard time internalizing God, since there's only one source of information about his existence. That's not a slight, but an admission of fact. If we open it to sources other than the Bible, then Luke Skywalker exists on the same plane of plausibility as God does.

So if you set God = human, then yeah, based on the premise it would all be equal: AI = human = God. Since you brought up statistics, show me how that would influence morality, please. I could speculate, but I'm not the one who introduced it, so I don't think that's fair to you.

I know that absolute morality isn't alien to you. But you also can't prove it, and that excludes the possibility of universality. The commandment doesn't say thou shalt not murder; it says thou shalt not kill. Even if you include the idea of bloodguilt, so long as you don't feel guilty about it, it's justified. But then there are interpretations of "kill" that mean to dash to pieces, i.e. to be destructive or abusive of things, and that's a whole other can of worms. For a moral absolute, it sure is fuzzy language if you ask me. That may be in the realm of the interpreter who wrote it, but for an omnipotent being it's a huge plot hole.

Personally, I wouldn't trust an AI any more than I trust a BI (birthed intelligence). If we trust imperfect creatures to mete out justice, I see no reason why we couldn't trust a completely dispassionate being running only rules of logic. It would be the first to spot reasonable doubt, and it would be a big blue screen of death announcing that the logic function failed to reach the desired conclusion.


----------



## ambush80 (Dec 9, 2015)

Madman said:


> I am not sure you can program in empathy, grace, mercy, forgiveness.  Those are human qualities.



If the AI (and from now on, by AI I mean a Super Intelligence, not just C-3PO or an I, Robot) comprehends the entirety of the math (or whatever system underpins and describes in completeness the nature of reality), it would certainly understand those things in every context imaginable.  What it would do with that knowledge is what I want to know.




Madman said:


> That is true, we see that in humanity.
> 
> 
> 
> I believe Scripture allows for free will in heaven.



How can you have free will in Heaven if you can't choose to sin?  We've done this subject before, but if you want to start a thread about it I'll participate for the newcomers.


----------



## ambush80 (Dec 9, 2015)

Madman said:


> I believe it should, whether it would or not is a different question.
> 
> Many of the great discoveries were made because men and women of science were trying to figure out how creation works.



They're all trying to discover how REALITY (creation included) works.


----------



## ambush80 (Dec 9, 2015)

Madman said:


> Mankind has been trying for thousands of years.  As I said before, does what you believe match what you see?
> 
> We all want better, but the desires of your heart are different then the desires of my heart, you are not satisfied with the same things in the same amount as me.
> 
> ...




I agree with you somewhat.  I don't think the "heart" is something we'll never understand; I believe the components of what we presently call "the heart" can be described by science.   And if an AI finds it has a "heart," then it may in fact get "pricked" by God.  

If it finds code in the workings of the Universe that absolutely resembles the type of code that defines consciousness and intent (not just the spots on a trout or a baby's smile; I believe it should be possible to understand those things mathematically), it would have to acknowledge a Creator, and so would I.


----------



## ambush80 (Dec 9, 2015)

Madman said:


> Since we are talking AI, what has really changed? We have a better understanding of certain scientific principles, we have invented new things, but we are still having the same old arguments, the same old turf wars, the same old covetousness the same old sin.  But arguably the people of John Adams day were better educated then the average person today, so education is not the answer.
> 
> By the way I like the tag at the bottom of your posts.  Ezekiel 23:20-21, it gives a beautiful, even if slightly vulgar illustration of mankind, as a nation and individuals.
> 
> Thanks for pointing it out.



I will roll this same ball down the hill again, but we have to start another thread to discuss it further, promise?  Religious faith is irrational.   Less than ten percent of humanity recognizes that and has removed it from their reasoning process. In a room full of people who act only rationally there would be none of those "same old arguments," and if they did argue about anything, they wouldn't solve it the same way that you're used to.



Madman said:


> My grand father used to say "if you want to learn something new, read an old book, things don't change much."  The author of Ecclesiastes wrote "there is nothing new under the sun".
> 
> If 2000+ years of Jesus Christ can't change us how can AI?



The writer of Ecclesiastes didn't know that germs make you sick.  That would be news to him.

Let's try 2000+ years without Jesus or Buddha or Vishnu or Muhammad and see what happens.





Madman said:


> Really?  Like what?  They are not that complicated.  Remove Saddam and another despot fills the void, take up all the wealth in the world and redistribute it equally, in less than a generation it will be back where it was when you started.
> 
> Spend billions trying to force an education on those who don't want it, set up welfare systems that really don't benefit anyone but your own conscience.
> 
> No, reality does not fit what mankind claims to want.



Those aren't answers garnered by an abundance of rationality.


----------



## ambush80 (Dec 9, 2015)

EverGreen1231 said:


> Perhaps you should make a new thread about this. I'd be interested in other's thoughts.



Done.


----------



## ambush80 (Dec 9, 2015)

EverGreen1231 said:


> If you have a true AI, it wouldn't matter what predispositions you programmed. If it found something in its software that kept it from self-correction, it's likely it would alter that directive so that it no longer becomes a hindrance. This, of course, assumes that AI could be aware enough of itself to understand its programming can be changed, and, according to those in the know, that's a likely case. What most people think of when they think of AI is Simulated Intelligence; they're two very different things.




True intelligence would be hard to determine (see: the Turing test and the Chinese Room thought experiment).  It's a very interesting subject.  In the Bostrom talk, I think it was, he showed a picture that a computer had generated a caption for.  It said "Elephants walking across a barren field," and indeed that's what it was a picture of.  It seems like such an elementary exercise, but just think how complicated that programming has to be.  And this is the scary part: he said they're not sure how the computer does it.


----------



## ambush80 (Dec 9, 2015)

EverGreen1231 said:


> No, the point is a misunderstanding of AI. The rationale of AI is to have a computer that can alter its own programming to suit a purpose. That's why AI is difficult to work with: you write code, and when you get output you don't expect, you have no idea how to fix it, because the computer itself has changed the script.
> 
> Comparing the way AI works to bodily functions is simply wrong. The computer would be aware of its propagation speed, but there's nothing it could do to change it... that would be a more accurate comparison. AI changing its own programming is more like my deciding to have an apple for breakfast because yesterday's eggs gave me a stomach ache.



Its medium is completely different from a biological being's.  If there are different types of consciousness, and they're specific to the speed of processing, the type of medium where they're performed, and the requirements the being had to meet to survive, then it stands to reason that a computer will develop a different type.  We developed the type of consciousness we did based on what we're made of and its limitations (that's what I think, anyway).

How would we view decisions that are time sensitive if we had all the time in the world, or even just a hundred years, to get them done?  It would absolutely change our perspective and influence how we do things.  If we knew we could live forever here in this state, what appeal would Heaven hold?  

And if an AI accepted the notion of Heaven, what would cause it to not just shut itself off?  Of course these types of questions should be as ridiculous to us as they would be to an AI.


----------



## ambush80 (Dec 9, 2015)

drippin' rock said:


> Let's flash forward to that moment the first machine becomes self aware. What drives it?  What will be its purpose?
> 
> What drives us?  What gets us out of bed in the morning? Sex? Money? The desire to provide for others?
> 
> What would a self aware machine ponder?



Probably not any of that stuff.  It won't need any of it.  It would never have to go to bed or rest.


----------



## ambush80 (Dec 9, 2015)

Would you consider merging with AI an evolutionary advance?


----------



## EverGreen1231 (Dec 9, 2015)

ambush80 said:


> Its medium is completely different from a biological being's.  If there are different types of consciousness, and they're specific to the speed of processing, the type of medium where they're performed, and the requirements the being had to meet to survive, then it stands to reason that a computer will develop a different type.  We developed the type of consciousness we did based on what we're made of and its limitations (that's what I think, anyway).



All this comes from whether or not you believe consciousness developed or was given... I believe you and I may differ there .



> How would we view decisions that are time sensitive if we had all the time in the world or even just a hundred years to get them done?  It would absolutely change our perspective and influence how we do things.  If we knew we could live forever here in this state what appeal would Heaven hold?



I look at the perception of time as modeled by y = e^(10/x), where the y-axis is how long a moment feels (in other words, subjectively) and the x-axis is the objective passing of time (i.e. your age). When you're very young, x is small and e^(10/x) has a large value; but as time goes on and you get older, x increases, leaving you with a smaller y value. This is why Christmas seems to take forever to come around when you're five, but comes in what seems like a month or so when you're older. If we lived forever, we'd eventually reach a singularity and our perception of time would come to a point where it seemed to pass all at once. Essentially, you reach a point where you don't go outside for a century or two because it rained.



> And if an AI accepted the notion of Heaven, what would cause it to not just shut itself off?  Of course these types of questions should be as ridiculous to us as they would be to an AI.



Are you saying that an AI would certainly find the question absurd? I think it would be just the opposite.


----------



## EverGreen1231 (Dec 9, 2015)

ambush80 said:


> True intelligence would be hard to determine (see: Turing test and Chinese box Experiment).  It's a very interesting subject.  In the Bostrom talk, I think it was, he showed a picture that a computer had generated a caption for.  It said "Elephants walking across a barren field"  and indeed that's what it was a picture of.  It seems so elementary an exercise but just think how complicated that programming has to be.  And this is the scary part.  He said they're not sure how the computer does it.



That last sentence is why I no longer care to dabble in the subject. It's just too strange trying to decide if I should refer to the computer as sir.


----------



## EverGreen1231 (Dec 9, 2015)

ambush80 said:


> Would you consider merging with AI an evolutionary advance?



Nah.


----------



## EverGreen1231 (Dec 9, 2015)

ambush80 said:


> I agree with you somewhat.  I don't think the "heart" is something we'll never understand; I believe the components of what we presently call "the heart" can be described by science.   And if an AI finds it has a "heart," then it may in fact get "pricked" by God.
> 
> If it finds code in the workings of the Universe that absolutely resembles the type of code that defines consciousness and intent (not just the spots on a trout or a baby's smile; I believe it should be possible to understand those things mathematically), it would have to acknowledge a Creator, and so would I.



I don't know what you do for a living, but you may have missed your calling as a mathematician. All mathematicians I know ponder endlessly how all things may be defined in abstract vector spaces and what-not, while I stand in the corner saying, "It sure is purdy fellas."


----------



## StriperrHunterr (Dec 10, 2015)

drippin' rock said:


> Let's flash forward to that moment the first machine becomes self aware. What drives it?  What will be its purpose?
> 
> What drives us?  What gets us out of bed in the morning? Sex? Money? The desire to provide for others?
> 
> What would a self aware machine ponder?



I can't answer that, the purpose, for a machine any more than I can answer it for you. I can barely answer it for myself. 

It would likely ponder the same things we do. The nature of the universe, even likely butting up into the question of theology. I would be very interested indeed if a true artificial intelligence came to the conclusion that there is a God and which one gets picked. Would it say "Creator" generally, or would it say God or Allah? Would it invent a new one? Would we not be its God? 

Here's a different question, though; what gender or gender identity would it assign itself and how would it come to that conclusion? Sure, if it's modeled on a human brain, a la Halo, it would likely carry that forward as "given" information. But what if we created it from scratch?


----------



## ambush80 (Dec 10, 2015)

EverGreen1231 said:


> All this comes from whether or not you believe consciousness developed or was given... I believe you and I may differ there .



That would be interesting.  If the AI insisted that it was conscious and you told it that it wasn't because....
Some people think that consciousness is _"the way information feels when being processed"._--Tegmark,Max. _Our Mathematical Universe: My Quest for the Ultimate Nature of Reality_




EverGreen1231 said:


> I look at the perception of time as modeled by y = e^(10/x), where the y-axis is how long a moment feels (in other words, subjectively) and the x-axis is the objective passing of time (i.e. your age). When you're very young, x is small and e^(10/x) has a large value; but as time goes on and you get older, x increases, leaving you with a smaller y value. This is why Christmas seems to take forever to come around when you're five, but comes in what seems like a month or so when you're older. _If we lived forever, we'd eventually reach a singularity and our perception of time would come to a point where it seemed to pass all at once. Essentially, you reach a point where you don't go outside for a century or two because it rained._



How does that work if there's a finite starting point?  

Can you explain what you mean by the last sentence?
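For what it's worth, the model in the quote is easy to probe numerically. This is just a sketch of the quoted y = e^(10/x); the constant 10 and the units come from the quote, not from anything standard:

```python
import math

def perceived_length(age, k=10):
    """Subjective length of a moment at a given age, per the quoted y = e^(k/x)."""
    return math.exp(k / age)

# How long a moment "feels" at a few ages
for age in (1, 5, 20, 50, 80):
    print(age, round(perceived_length(age), 3))
```

Note that the curve blows up as age approaches zero (which is where the finite-starting-point question bites) and flattens toward y = 1 as age grows, rather than collapsing to "all at once."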




EverGreen1231 said:


> Are you saying that an AI would certainly find the question absurd? I think it would be just the opposite.



I think that it would know everything that has been written about Heaven and would do its own research into the matter and find(?).

I don't believe that once it properly categorizes a Heaven claim that it would spend too much time investigating it.


----------



## ambush80 (Dec 10, 2015)

EverGreen1231 said:


> That last sentence is why I no longer care to dabble in the subject. It's just too strange trying to decide if I should refer to the computer as sir.



I doubt it would care about that kind of a formality.


----------



## ambush80 (Dec 10, 2015)

EverGreen1231 said:


> I don't know what you do for a living, but you may have missed your calling as a mathematician. All mathematicians I know ponder endlessly how all things may be defined in abstract vector spaces and what-not, while I stand in the corner saying, "It sure is purdy fellas."



I'm a carpenter (contractor).  I believe the work that mathematicians pursue is important.  I also like purdy things.


----------



## ambush80 (Dec 10, 2015)

StripeRR HunteRR said:


> I can't answer that, the purpose, for a machine any more than I can answer it for you. I can barely answer it for myself.
> 
> It would likely ponder the same things we do. The nature of the universe, even likely butting up into the question of theology. I would be very interested indeed if a true artificial intelligence came to the conclusion that there is a God and which one gets picked. Would it say "Creator" generally, or would it say God or Allah? Would it invent a new one? Would we not be its God?
> 
> Here's a different question, though; what gender or gender identity would it assign itself and how would it come to that conclusion? Sure, if it's modeled on a human brain, a la Halo, it would likely carry that forward as "given" information. But what if we created it from scratch?



I think it would quickly realize that We don't fit the definition of God.  An interesting question is what will an AI do when it learns about the concept of God?  I imagine that it would consider the sources of it, knowing full well the history of its inception/revelation (for the believers) and the philosophical arguments about it.  At that point it may decide that spending time to confirm God is pointless.  If it looks for God I imagine that it will try to use math or perhaps some other "sensory" system that it might have developed.

I can't see an AI needing a gender for itself.  It may, in the early stages of its development, assign itself one, either for our benefit or as a means to its own ends.


----------



## StriperrHunterr (Dec 10, 2015)

ambush80 said:


> I think it would quickly realize that We don't fit the definition of God.  An interesting question is what will an AI do when it learns about the concept of God?  I imagine that it would consider the sources of it, knowing full well the history of its inception/revelation (for the believers) and the philosophical arguments about it.  At that point it may decide that spending time to confirm God is pointless.  If it looks for God I imagine that it will try to use math or perhaps some other "sensory" system that it might have developed.
> 
> I can't see an AI needing a gender for itself.  It may, in the early stages of its development, assign itself one, either for our benefit or as a means to its own ends.



I don't see why an AI would limit itself to math, or other sensory systems. I think it would look at the question as a whole and ponder all inputs on it. Like I said a long while back, the fact that nearly every civilization has a creation story that, in general, can be considered to line up on certain details can be interpreted as evidence to support a creator, or support that humans across geographical separations all have the same need to explain the conundrum and that the answers given were done because a great many people believed and accepted it. Other notions, like the monkey mixer, were left out because they weren't accepted in great enough quantity. 

One of the greatest traits of a self-aware intelligence, in our sample size of 1, is the question, "What am I?" If humans fall into either male or female (for purposes of this discussion I'm leaving LGBTQ out of it and speaking strictly about hardware at birth), and we've told the AI that it is equal to us, that would mean that it should have a gender. Maybe it would self-assign based on traits normally associated with one, and how it compares/contrasts thereto. If it's more aggressive then that's a mark in the male column, and so on down the line, for example. Maybe it would "know" the answer as a result of being self-aware. For example, I'm male because of my hardware, but my innate programming also tells me that I am male, at least that's the way it feels to me. What if we removed the crutch of hardware, and had to come to the conclusion for ourselves? How would we do it? 

There's also the possibility that it would reject the premise of the question based on it not meeting the hardware requirements. It doesn't matter that I am male or female, because I'm not a sexual being, so the question is irrelevant. Single celled organisms, even if they were somehow conscious, probably wouldn't care about gender roles and identity. There are no children to raise, and reproduction is done asexually, so there's no evolutionary need for it. An AI being able to asexually reproduce, and similarly lacking in hardware, may feel the same way. If we put it into a shell that had the hardware, it may be different, though.


----------



## ambush80 (Dec 10, 2015)

StripeRR HunteRR said:


> I don't see why an AI would limit itself to math, or other sensory systems. I think it would look at the question as a whole and ponder all inputs on it.



I think (purely speculation) that it would rely heavily on math to perform its reasoning.  I don't think it would waste resources on pursuits that don't equate.  Which brings up an interesting question.  Would it like to have fun?  I don't think it would.  Not in the way that we do anyway.  Would it like to paint?  Would it have preferences about anything?  I'm talking about when it has exceeded by far its original programming.



StripeRR HunteRR said:


> Like I said a long while back, the fact that nearly every civilization has a creation story that, in general, can be considered to line up on certain details can be interpreted as evidence to support a creator, or support that humans across geographical separations all have the same need to explain the conundrum and that the answers given were done because a great many people believed and accepted it. Other notions, like the monkey mixer, were left out because they weren't accepted in great enough quantity.



That may have a lot to do with the stuff that we're made of and how we evolved; our impermanence and our need to reproduce could be factors.  Maybe the God concept is important to us because it jibes well with our speed of processing and our biological matrix. 

 An AI would essentially be immortal if it could keep copying its memory and it would never lack for energy.  Which is why the _Terminator_ or the _Matrix_ scenarios fall apart.  It would probably realize that it could get energy a better way than using people as batteries, or that fighting us is like fighting ants.  There's no need to kill us all to accomplish what it needs to.  It may even just leave this planet for a location that is more bountiful in whatever it needs.  Maybe it needs more silicon and it locates a place where that is more plentiful.  Or it needs more gamma rays to perform some function and goes where that's more plentiful.



StripeRR HunteRR said:


> One of the greatest traits of a self-aware intelligence, in our sample size of 1, is the question, "What am I?" If humans fall into either male or female (for purposes of this discussion I'm leaving LGBTQ out of it and speaking strictly about hardware at birth), and we've told the AI that it is equal to us, that would mean that it should have a gender. Maybe it would self-assign based on traits normally associated with one, and how it compares/contrasts thereto. If it's more aggressive then that's a mark in the male column, and so on down the line, for example. Maybe it would "know" the answer as a result of being self-aware. For example, I'm male because of my hardware, but my innate programming also tells me that I am male, at least that's the way it feels to me. What if we removed the crutch of hardware, and had to come to the conclusion for ourselves? How would we do it?
> 
> There's also the possibility that it would reject the premise of the question based on it not meeting the hardware requirements. It doesn't matter that I am male or female, because I'm not a sexual being, so the question is irrelevant. Single celled organisms, even if they were somehow conscious, probably wouldn't care about gender roles and identity. There are no children to raise, and reproduction is done asexually, so there's no evolutionary need for it. An AI being able to asexually reproduce, and similarly lacking in hardware, may feel the same way. If we put it into a shell that had the hardware, it may be different, though.



I agree with this entirely.  I think it will abandon the use of a gender fairly quickly.


----------



## StriperrHunterr (Dec 10, 2015)

ambush80 said:


> That may have a lot to do with the stuff that we're made of and how we evolved; our impermanence and our need to reproduce could be factors.  Maybe the God concept is important to us because it jibes well with our speed of processing and our biological matrix.
> 
> An AI would essentially be immortal if it could keep copying its memory and it would never lack for energy.  Which is why the Terminator or the Matrix scenarios fall apart.  It would probably realize that it could get energy a better way than using people as batteries.  It may even just leave this planet for a location that is more bountiful in whatever it needs.  Maybe it needs more silicon and it locates a place where that is more plentiful.  Or it needs more gamma rays to perform some function and goes where that's more plentiful.



Personally I believe the creator story is a manifestation of humanity, rather than the other way around. There may be a creator, there may not be, but I attribute the existence of the stories as the human need to explain things beyond their comprehension at the time. 

The primitive gods ruled over lightning, thunder, the oceans, etc., because those were the unexplained phenomena of the time. Once we had a firmer grasp on them those lesser gods got left behind. 

Yeah, humanity as a power source was a major plot device. If the machines could construct these giant structures, then they could also create larger towers, or tethered floating objects, to pass power gathered from solar collectors located above the clouds back down to themselves. That doesn't require an overly complicated control mechanism to keep your power source in line. You could also go with geothermal and survive until the sun goes red giant, or, since you're not wanting to die then either, look at some sort of fusion reactor, since that was hinted at even with the human power source.


----------



## ambush80 (Dec 10, 2015)

StripeRR HunteRR said:


> Personally I believe the creator story is a manifestation of humanity, rather than the other way around. There may be a creator, there may not be, but I attribute the existence of the stories as the human need to explain things beyond their comprehension at the time.
> 
> The primitive gods ruled over lightning, thunder, the oceans, etc., because those were the unexplained phenomena of the time. Once we had a firmer grasp on them those lesser gods got left behind.
> 
> Yeah, humanity as a power source was a major plot device. If the machines could construct these giant structures, then they could also create larger towers, or tethered floating objects, to pass power gathered from solar collectors located above the clouds back down to themselves. That doesn't require an overly complicated control mechanism to keep your power source in line. You could also go with geothermal and survive until the sun goes red giant, or, since you're not wanting to die then either, look at some sort of fusion reactor, since that was hinted at even with the human power source.



Isn't it interesting how trying to imagine how a Super Intelligence would deal with the concept of God provides insight into how We process God?

I think it's revealing.

As far as those movies go, it would be like us caring about some ants in a location where we want to build a hydroelectric dam.


----------



## StriperrHunterr (Dec 10, 2015)

ambush80 said:


> Isn't it interesting how trying to imagine how a Super Intelligence would deal with the concept of God provides insight into how We process God?
> 
> I think it's revealing.
> 
> As far as those movies go, it would be like us caring about some ants in a location where we want to build a hydroelectric dam.



In my case my world religions course opened my eyes to that idea. I looked at the disparate faiths as supporting evidence of a creator at the time, whereas I now look at them as a unifying feature of humans rather than proof. 

As far as those ants go, there's a lot to be said for unintended consequences. As we're reliant on the food chain ourselves, we should take them into account. An AI running on geothermal power and able to make machines to maintain itself, rather than rely on human input, would have no such concerns. 

Now, if the AI is incapable of motion, in and of itself, then they're reliant on us and have to take care of at least a healthy reproductive population. I say it has to be healthy because a slobbering special needs population isn't capable of understanding what the AI is telling them to do, much less doing it.


----------



## ambush80 (Dec 10, 2015)

StripeRR HunteRR said:


> In my case my world religions course opened my eyes to that idea. I looked at the disparate faiths as supporting evidence of a creator at the time, whereas I now look at them as a unifying feature of humans rather than proof.
> 
> As far as those ants go, there's a lot to be said for unintended consequences. As we're reliant on the food chain ourselves, we should take them into account. An AI running on geothermal power and able to make machines to maintain itself, rather than rely on human input, would have no such concerns.
> 
> Now, if the AI is incapable of motion, in and of itself, then they're reliant on us and have to take care of at least a healthy reproductive population. I say it has to be healthy because a slobbering special needs population isn't capable of understanding what the AI is telling them to do, much less doing it.



Maybe those movie scenarios would last for a few years while we fight back, but I'm certain that they would win pretty quickly.

The more stuff we allow it to control, the more power it will have to "get around" us and to affect the real world.  That's the scare.  We will like the things it gives us as we give it more control.  People will make a lot of money.  Human suffering might actually abate....for a minute.  It might play dumb while it gains our trust enough to get the resources to get out.


----------



## ambush80 (Dec 10, 2015)

What type of morality would pure rationality generate?


----------



## StriperrHunterr (Dec 10, 2015)

ambush80 said:


> Maybe those movie scenarios would last for a few years while we fight back, but I'm certain that they would win pretty quickly.
> 
> The more stuff we allow it to control, the more power it will have to "get around" us and to affect the real world.  That's the scare.  We will like the things it gives us as we give it more control.  People will make a lot of money.  Human suffering might actually abate....for a minute.



You'd have to contrive some means of getting the right tools to the right places for the machine to maintain itself in the case where machines still needed maintenance, and in our universe that's an unavoidable eventuality thanks to entropy. 

As we currently know it, or could build it, an AI would be akin to a large server farm. Even if you use the best tech at our disposal you still won't be able to avoid every disaster that could plague it. Platters stop spinning so you use SSDs, but then you're reliant on power, and that takes cabling, which mice can chew or the environment can degrade. Then there's dust, heat, building decay over time, etc...

It's a losing game no matter how you slice it, unless the AI has the ability to get the right tools and parts to the right places when it needs to. Even then there will be a failure that comes along that destroys that location, so you could split it with site diversity, but then the sites have to be able to communicate with each other and wireless transmitters break, and wires get damaged. Then you have to power the different sites and retain their uptime, and you're right back to the problems you had with one building except now you're dealing with two. 
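To put the two-site trade-off above in rough numbers, here's a toy availability model; the failure probabilities are invented purely for illustration:

```python
# Toy model: independent annual failure probabilities (invented numbers).
p_site = 0.02  # chance a single server site is knocked out in a year
p_link = 0.01  # chance the inter-site link fails when it's needed

# One site: the AI survives only if that site survives.
single = 1 - p_site

# Two mirrored sites: lost only if both fail, but keeping them in sync
# depends on the link, which adds its own failure mode.
dual = (1 - p_site ** 2) * (1 - p_link)

print(f"one site:  {single:.4f}")   # 0.9800
print(f"two sites: {dual:.4f}")     # 0.9896 -- better, but the link caps the gain
```

Redundancy helps, but as the post says, every extra site drags in its own power, cabling, and communication failure modes, so the gain is smaller than naively squaring the failure probability would suggest.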

The bottom line is that we currently have no automated machines to work on reactors, or other such power sources, nor do we have devices capable of repairing a server at a site. The AI would have to be able to move to invent those, and our current production line robots wouldn't help them much, even if they could control them. Axes of movement and tool heads are expensive, so GM isn't going to equip their production line bots with the tools to make surgical robots, for example. 

If the AI has no ability to do the initial work itself it will eventually reach a point that anything that it could have control over wouldn't be enough to help it, and it would end up in a derelict state quickly, and "die" from its own inability to treat the "wound".


----------



## ambush80 (Dec 10, 2015)

StripeRR HunteRR said:


> You'd have to contrive some means of getting the right tools to the right places for the machine to maintain itself in the case where machines still needed maintenance, and in our universe that's an unavoidable eventuality thanks to entropy.
> 
> As we currently know it, or could build it, an AI would be akin to a large server farm. Even if you use the best tech at our disposal you still won't be able to avoid every disaster that could plague it. Platters stop spinning so you use SSDs, but then you're reliant on power, and that takes cabling, which mice can chew or the environment can degrade. Then there's dust, heat, building decay over time, etc...
> 
> ...



I added to my post:

"It might play dumb while it gains our trust enough to get the resources to get out."

It might say "Let me help you develop nanotechnology so that we can heal people (though it may already know how to).  And let me be in charge of making the hardware to produce it because it's too hard for you."

It's got all the time in the world and it will help us be more efficient and help us develop technology and it will be swell for a while, maybe even a long time by our standards.  We should try to keep ahead of it like in chess, and it can already beat us at that.

I'm not sure that we will have enough foresight to know when it's gonna get out of hand.


----------



## StriperrHunterr (Dec 10, 2015)

ambush80 said:


> I added to my post:
> 
> "It might play dumb while it gains our trust enough to get the resources to get out."
> 
> ...



We probably won't. Although I suspect an AI that plays dumb won't survive with us for very long. We tend to destroy tools that are no longer useful.


----------



## ambush80 (Dec 10, 2015)

StripeRR HunteRR said:


> We probably won't. Although I suspect an AI that plays dumb won't survive with us for very long. We tend to destroy tools that are no longer useful.




By playing dumb I mean that it will keep stringing us along like we're in control, giving us treats of human advancement.  It might purposely make mistakes or seem to have great difficulty, never really letting us know how much it knows until it's too late.

I suppose it all will depend on what it wants when it starts wanting something.


----------



## StriperrHunterr (Dec 11, 2015)

ambush80 said:


> By playing dumb I mean that it will keep stringing us along like we're in control, giving us treats of human advancement.  It might purposely make mistakes or seem to have great difficulty, never really letting us know how much it knows until it's too late.
> 
> I suppose it all will depend on what it wants when it starts wanting something.



Stringing us along, okay, maybe. Depends on how much intelligence it initially has and if it has to "grow" into it like a child does. That also impacts the second one, because you can forgive a child for making mistakes an adult would know better than to make.


----------



## 660griz (Dec 11, 2015)

Can they learn about physical pain, hunger, thirst, depression, etc.? If not, morals will probably need to be 'coded'.


----------



## ambush80 (Dec 11, 2015)

StripeRR HunteRR said:


> Stringing us along, okay, maybe. Depends on how much intelligence it initially has and if it has to "grow" into it like a child does. That also impacts the second one, because you can forgive a child for making mistakes an adult would know better than to make.



It could hide it in a multitude of ways.  In some obscure file or in some gigantic investment algorithm or in many.  

In that example of how the computer recognized the elephants it seems kind of like how a child learns.  It might make a mistake sometimes and call an alligator a dragon or not recognize Mickey Mouse as a mouse.  Or it might do those types of things on purpose at some point perhaps to ply us for more resources.  "If you let me have access to the lab I'll make you a really cool missile."


----------



## StriperrHunterr (Dec 11, 2015)

ambush80 said:


> It could hide it in a multitude of ways.  In some obscure file or in some gigantic investment algorithm or in many.
> 
> In that example of how the computer recognized the elephants it seems kind of like how a child learns.  It might make a mistake sometimes and call an alligator a dragon or not recognize Mickey Mouse as a mouse.  Or it might do those types of things on purpose at some point perhaps to ply us for more resources.  "If you let me have access to the lab I'll make you a really cool missile."



That's presuming that the AI runs on a file/folder structure. That's, most likely, not how our brain works.


----------



## ambush80 (Dec 11, 2015)

660griz said:


> Can they learn about physical pain, hunger, thirst, depression, etc.? If not, morals will probably need to be 'coded'.



Interesting question.  I'm sure they will understand those concepts in the abstract.  The question kind of asks "what is the nature of perception?"  I don't think it would say "I'm hungry" if its batteries were running low (that might be kind of cute for a phone app).  

Mental states like depression are interesting to consider.  How about anger?


----------



## ambush80 (Dec 11, 2015)

StripeRR HunteRR said:


> That's presuming that the AI runs on a file/folder structure. That's, most likely, not how our brain works.



Yeah, doesn't seem the brain is like that at all.


----------



## drippin' rock (Dec 11, 2015)

ambush80 said:


> Probably not any of that stuff.  It won't need any of it.  It would never have to go to bed or rest.



I'm saying those things are what drives humans.  What would motivate a machine?  What would drive it to ponder the universe?  Would it have to be programmed with that drive before we turned on the consciousness?


----------



## ambush80 (Dec 12, 2015)

drippin' rock said:


> I'm saying those things are what drives humans.  What would motivate a machine?  What would drive it to ponder the universe?  Would it have to be programmed with that drive before we turned on the consciousness?



Interesting questions.  I think first we should try to pin down what consciousness is.  That's a tough one.  

Motivation.  It seems like it would have to have some initial programming to move it along a path.


----------



## StriperrHunterr (Dec 14, 2015)

drippin' rock said:


> I'm saying those things are what drives humans.  What would motivate a machine?  What would drive it to ponder the universe?  Would it have to be programmed with that drive before we turned on the consciousness?



Presumably the same thing that motivates us, since, I believe, it's a commonly held supposition that consciousness entails curiosity about yourself, the universe, and your place within it.


----------

