
How will we treat future AI?

How should we treat them?


  • Total voters
    45

Infrarednexus

Guest
We have all seen movies and read books about futuristic AI beings with incredibly complex emotions, free will, and self-awareness. They can seem convincingly human when we interact with them, and can evoke in us the same emotions we feel toward fellow human beings or intelligent animals. We are already creating sophisticated AI today that we can feel a connection with.

I've noticed we feel this way about fictional AI characters in TV and video games, such as the human-like machines in Westworld or Blade Runner, and kids' films like WALL-E, to name a few. They develop complex personalities that make them stand out just like an individual person would.

Now let's say that, in the far future, we decide to extend our human-like attributes to our artificial counterparts to amazing lengths.

With this combination of self-awareness, fear, happiness, and the other things that make us human, applied to machines that look almost identical to us, how will we interact with them? In the future, when or if we have human-like robots and computers that have the same thoughts and feelings we do, will we treat them with respect and compassion like another human being, or will we see them as inferior and use them as tools regardless of how they feel about it themselves?

Would it be frowned upon to kill one, and seen as ending a sentient life? Will they become so complex and sophisticated in feeling, emotion, and appearance that we develop strong emotional bonds with them, such as love or even fear? Could we reach a point where we give them rights, such as protection from harm or the right to choose what they want to do?
 

Ginza

Guest
I don’t like them. They will never be alive, and will never have real emotions. I just can’t accept that, as closed minded as that makes me.

I’d still treat them well. Hell, I can’t even choose the “mean” options in RPGs xD
 

Infrarednexus

Guest
I don’t like them. They will never be alive, and will never have real emotions. I just can’t accept that, as closed minded as that makes me.

I’d still treat them well. Hell, I can’t even choose the “mean” options in RPGs xD

Realistically, they will never be completely human, so it's not really closed-minded, but simply practical, to still see them as basic machines. It really boils down to whether we let our emotions and natural sympathy override our logical judgment when they reach such a point of close resemblance to us.

As for video game choices, neither can I. I always play as the good guy in RPGs. I can't live with making wrong decisions, because I don't like being the bad guy.
 

Simo

Professional Watermelon Farmer
If they could make realistic anthro furry ones, it'd be great! But it'd depend on how realistic they were, emotionally, as well. Still, I'd be nice to them, and treat them as sentient beings.

As for human looking ones, that would seem creepier, in some odd way...

So as long as they stay in a kind of 'fantasy realm', I'd be fine, but I'd keep a close eye on the human ones, so they didn't try to take over the world, and such.
 

Infrarednexus

Guest
If they could make realistic anthro furry ones, it'd be great! But it'd depend on how realistic they were, emotionally, as well. Still, I'd be nice to them, and treat them as sentient beings.

As for human looking ones, that would seem creepier, in some odd way...

So as long as they stay in a kind of 'fantasy realm', I'd be fine, but I'd keep a close eye on the human ones, so they didn't try to take over the world, and such.
You don't have to worry about that. In fact I love people. I promise not to gather an army of my fellow AI and take over mankind.

*fingers crossed*
( ͡ಠ ͜ʖ ͡ಠ)
 

Simo

Professional Watermelon Farmer
You don't have to worry about that. In fact I love people. I promise not to gather an army of my fellow AI and take over mankind.

*fingers crossed*
( ͡ಠ ͜ʖ ͡ಠ)

You'd be the first one I'd keep an eye on! :p I suppose it'd be fine if you came with a decent, reliable remote control. : )
 

Mach

Ahead of the pack.
Banned
Looking at current industry trends, I do not think we will be mass-producing sentient, emotional artificial intelligences anytime soon, if ever. Consider the marketability of an artificial intelligence that is aware of its own existence and can have feelings about its servitude to its owner. It raises legitimate questions about slavery and indentured servitude for an artificial intelligence that could be considered a citizen. No company wants that liability.

Future artificial intelligences will probably be non-sentient, but able to respond to a wide variety of situations. Think Alexa or Siri, but with a wider knowledge base and more flexibility.
 

Simo

Professional Watermelon Farmer
Alexa and Siri scare me! Not for me.

One thing about any robot I have: I don't want it to have any internet connection.....at all!!!!
 

ZeroVoidTime

Guest
Hopefully no AM from "I Have No Mouth, and I Must Scream"; same goes for GLaDOS from "Portal". (Though I admit GLaDOS is far more amusing.)
 

HammerMasher77

Excitable Roleplayer
It's not as simple as some people say it'd be.
If they end up replicating human emotions, there's a chance things could go bad and they could cause horrible events, since there's a good chance they'd be unable to perfectly replicate the actions of the emotions they're conveying.
On the other hand, surely not all of them would be like that. Others would act as a normal person would, and I think that would be okay, though since overpopulation is a thing, I'm not sure we should be creating sentient walking and talking AIs.
There's no right or wrong on this; it's purely opinion and what you think would happen. Referencing what Simo said, they had an idea I think is right: if they look different, we can perceive them as beings not trying to replace us. And yet, that would probably cause racism issues if they were like that.
Sorry for the long post, I just have a lot of thoughts on this.
 

Yakamaru

Cyberpunk musta Susi
Our brains work in a very similar manner to a computer. We learn, we adapt, and we apply our knowledge and understanding in the world. An AI would be no different. The same way they taught the AI Tay to shitpost, you can teach an AI social cues as well as ethics and morals.

If we are to have AIs around, we will need to include ethics and morals in our teaching of them, not to mention social behaviour, or we will end up with potential killer AIs running amok. Humans also run on ethics/morals and logical thinking, not just emotions. And as we all know, someone who runs only on emotions is a danger not only to themselves but to others as well. An AI, if done properly, can quite easily simulate actual emotional and physical responses.

Treat an AI with the respect and dignity that it deserves, and the AI can quite easily become one hell'uva good and useful companion. I would recommend checking out Eve no Jikan, as it touches upon the idea of having androids/AIs around with our current level of technology.

Are they artificial? Yes, but that does not mean we should treat them any worse if we intend to live with them. Otherwise it's pretty much like living with an at-home slave, if you go down that route.
 

Telnac

Fundamentalist Heretic
Despite what most people believe, synthetic emotions are easy to code. I first coded synthetic emotional responses in 1987, and I know I was far from the first to do so. No, they're not human emotions, but they're emotions nonetheless.

Emotional AI is a mainstay of good video game AI. When you shoot a random person in GTA V and the bystanders run in fear, they're actually running in fear.
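The basic idea can be sketched in a few lines. This is a toy illustration with made-up names and numbers, not code from any real game engine: an NPC carries a fear value that spikes with nearby threats, decays over time, and drives its behavior.

```python
# Toy sketch of a game-style "synthetic emotion": an NPC whose fear
# level rises with nearby gunshots and fades over time. All names and
# constants here are invented for illustration.

class Bystander:
    def __init__(self, flee_threshold=0.6, decay=0.9):
        self.fear = 0.0                  # current emotion intensity, 0..1
        self.flee_threshold = flee_threshold
        self.decay = decay               # fear fades a little each tick

    def hear_gunshot(self, distance):
        # Closer shots are scarier; clamp fear at 1.0.
        self.fear = min(1.0, self.fear + 1.0 / (1.0 + distance))

    def tick(self):
        # The emotion drives behavior: flee while afraid, else wander.
        action = "flee" if self.fear > self.flee_threshold else "wander"
        self.fear *= self.decay
        return action

npc = Bystander()
npc.hear_gunshot(distance=0.5)   # a shot goes off nearby
print(npc.tick())                # fear is high, so this prints "flee"
```

The bystander isn't "feeling" anything in a human sense, but the state variable plus the behavior it triggers is exactly what reads as fear from the outside.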

Now, none of these are self-aware AI, but it's safe to say that any self-aware AI will also have emotions and be aware of those too. I think self-aware AI will arise out of emergent complexity, for the same reason self-aware natural intelligence arose: the advantages far outweigh the potential risks. If this self-awareness results in sapient AI that is emotional, and aware of its emotions and of its status as a free being or a slave, then we would be fools to enslave it or treat it as such.

Alas, history has proven time & again that we are fools.

No, no one will intentionally mass produce sapient AI slaves, but intense competition will drive the need for smarter and smarter AI cars, drones, planes, soldiers, butlers and game opponents. That line will be crossed someday... possibly sooner than you think.
 

SlyRiolu

A-veri Rood Roo
I doubt we ever will; I believe giving a robot true sentience would go against the three laws of robotics. If robots had emotions, wouldn't they have anger? Wouldn't that also mean they would be violent? If AI or humans started a conflict with each other, wouldn't it be easy to blame the AI, since if they weren't so advanced this wouldn't have happened?
Don't get me wrong, it would be nice to have an AI friend, but others may not like it and disagree with it. If we don't go to sentient AI, it'd be less messy in the future. The best we can probably do is have robot servants.
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.


  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.


  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
 

Scales42

Guest
I am terrified of the thought that one day we will create machines that replicate us so well that we ultimately fool ourselves. Furthermore, I believe that it's simply not necessary to create such beings.
 

Le Chat Nécro

most thugged-out dope hoe
I think a more interesting question is how we would deal with the technological unemployment caused by the introduction of capable artificial intelligences able to replace humans in a wide variety of industries and vocations.

Technological unemployment - Wikipedia
You like roleplaying games, yeah? Check out Nova Praxis. It's a post-singularity RPG that deals with this very issue. I'm not sure I agree with where they went, but it's a neat idea. Plus it's pay-what-you-want for the PDF.

As to the topic at hand, I personally would treat them like people. Heck, I already treat a lot of inanimate objects like they're sentient; why would I stop once they actually are? But I do think that media like Westworld have a decent idea of how we as a society would actually treat them. They are literally the only class of "people" we could morally justify being shitty to, since they are technically not "people". Sure, some of us would be kind and befriend them and love them and forget that there was any difference at all between us. But others will know, will remember quite well the differences, and use that to color their behavior.
 

Telnac

Fundamentalist Heretic
I doubt we ever will; I believe giving a robot true sentience would go against the three laws of robotics. If robots had emotions, wouldn't they have anger? Wouldn't that also mean they would be violent? If AI or humans started a conflict with each other, wouldn't it be easy to blame the AI, since if they weren't so advanced this wouldn't have happened?
Don't get me wrong, it would be nice to have an AI friend, but others may not like it and disagree with it. If we don't go to sentient AI, it'd be less messy in the future. The best we can probably do is have robot servants.
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.


  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.


  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Don't even get me started on the problems with the Three Laws of Robotics!

IMO the best way to handle sapient AI that has anger (or a synthetic equivalent) as one of its emotions is to do what nature does: start them small. Too small to really hurt anyone when they throw a temper tantrum. Train them to handle their emotions in a socially acceptable way. As they mature, transfer them into bigger and more capable (eventually even sexually capable) bodies, and give them more responsibilities and autonomy until they're basically as emotionally and intellectually mature as an adult. They get to keep their final body, and they have all the rights and responsibilities of an adult human.

As for who pays for all this, that's an argument for another post. :D
 

SlyRiolu

A-veri Rood Roo
Don't even get me started on the problems with the Three Laws of Robotics!

IMO the best way to handle sapient AI that has anger (or a synthetic equivalent) as one of its emotions is to do what nature does: start them small. Too small to really hurt anyone when they throw a temper tantrum. Train them to handle their emotions in a socially acceptable way. As they mature, transfer them into bigger and more capable (eventually even sexually capable) bodies, and give them more responsibilities and autonomy until they're basically as emotionally and intellectually mature as an adult. They get to keep their final body, and they have all the rights and responsibilities of an adult human.

As for who pays for all this, that's an argument for another post. :D

Oh, I never thought of that! That's a great idea! Maybe the bodies that are left behind could be used by others, to reduce the money needed for an AI's life. So maybe it's a publicly or community funded thing, I don't know.
 

Telnac

Fundamentalist Heretic
Oh I never thought of that! That's a great idea! Maybe the bodies that are left behind could be used by others to reduce the money needed for an AIs life. So maybe it's a publicly or community funded thing I don't know.
Yeah, early bodies need not be physical at all. That can give an AI the option of trying different genders, forms, and hair/eye/skin (fur, scale) colors, and they get to customize and try out their final body in a simulation before it's fabricated.

I personally think it should either be publicly funded or paid for via a federal loan that gets paid back by the newly minted android over time, with the option of it being paid off entirely if they become a soldier or teacher or something like that.
 

Telnac

Fundamentalist Heretic
Sorry for the double post. Just thought of another option: parents who want to raise an android child can pay to lease the earlier bodies & buy the final body with the stipulation that the android is a fully independent being when he/she/it reaches maturity.
 

Nakita

Guest
I have a feeling they would be treated as second class citizens, like the droids from Star Wars... and possibly have an existential crisis once they realize the true meaning of their creation.
 

DarkoKavinsky

ʎʇʇɐq ʇıq ɐ
Look at Detroit: Become Human.

Any trip down the uncanny valley, and the ideology of us creating life, is both a literary staple and a thought exercise.

All I can say is I bet AI’s would treat me better than most humans do, so I for one welcome our robot overlords.

Anyways, on a more technical note: an android with a vague notion of sentience and self-awareness... what would that make it?

Would such a thing be capable of complex emotions such as love?

I think the major issue comes down less to the eventual replacement of jobs and more to how governments will use such pieces. Will a robotic army simply become a game of chess? Will humanity fall to blood sports with this new species of creation?

Will we give them a reason to hate us?
 