Should AI have Rights?
Girlfountain wrote...
ZLD wrote...
Even if we created an AI that is as close to human as possible, it is still not human. But that does not mean it is any lesser than us; it just means we should not treat it exactly the way we treat other humans. Every being has its own way, and treating everything identically can also be harmful. If we ever create such an AI, we should do what nature intended: teach it about the world, never treat it cruelly or exploit it, and allow it to learn and make its own path. And here is a religious argument for those close to their faith: humans were created and given the chance to learn and develop our own path, so we must give the life we create the same choice.

Still, most computers nowadays are smarter than humans, so what will the computers of the future be able to program into an AI? Humans will not be nearly as intelligent or strong (I doubt any company will want to produce a weak robot), so discrimination is inevitable, as I said in my last post.
But once again, this question is hard to grasp because so many factors could change things. I don't know if you guys have seen that one movie (I forget the name) centered on genetic discrimination, where a guy is doomed to poverty just because he was not born "correctly" (by the way, it's a super interesting movie; if someone could find the title I would appreciate it).
Computers are not smarter than humans. A computer can store a lot of data, which means it can access more information, but it is the human who makes connections.
We can connect events and information to make something new, and that is something computers still lack today. Think about how every invention came to be. It is not certain we can make an AI intelligent enough to do the same.
I also believe in something called the human factor: humans can accomplish the improbable. We have hunches, intuition, and instincts that seem to work at the most unlikely times, like a person somehow sensing danger ahead of time. Even though the probability of that is really low, the person turns out to be right.
AI is a difficult thing to discuss, because at this point in time true AI hasn't been created yet.
A program that recognizes itself as a being and can think, rather than just abide by its coding, is incredibly hard to write.
You would have to write the initial code so that the AI could change the very code it started off with.
You would have to code it so that it could constantly expand and rewrite all of its data, so that it could learn and even theorize.
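The weakest version of that self-rewriting idea can at least be sketched in ordinary code today. Here is a toy illustration, nothing more: the Agent class, its rule table, and the learn() feedback signal are all invented for this example. The point is that the program's behaviour lives in a data structure the program itself is free to overwrite, a minimal stand-in for "changing the very coding it started off with."

```python
import random

class Agent:
    def __init__(self):
        # The "initial coding": a rule table the agent itself may rewrite.
        self.rules = {"greet": lambda: "hello", "idle": lambda: "..."}

    def act(self, stimulus):
        # Follow whatever the current rules say, falling back to idling.
        rule = self.rules.get(stimulus, self.rules["idle"])
        return rule()

    def learn(self, stimulus, feedback):
        # On negative feedback, replace the rule for this stimulus with
        # a new one -- the agent rewriting its own behaviour as data.
        if feedback < 0:
            self.rules[stimulus] = lambda: random.choice(["hi", "hey", "..."])

agent = Agent()
print(agent.act("greet"))        # uses the original rule
agent.learn("greet", feedback=-1)
print(agent.act("greet"))        # uses a rule the agent rewrote itself
```

Of course, swapping entries in a lookup table is a long way from genuine self-modification, let alone theorizing; the sketch only shows where such a capability would have to live.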
Ultimately, the best way to decide whether AI should have rights is to decide whether or not a true AI could feel emotion.
And let's face it, emotion is what separates conscious life from unconscious life (i.e., animals from plants).
Flaser
Space Cowboy wrote...
I'm cool with these simple rules:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Otherwise, fair game: they should be treated as sentient beings who have no limitations other than those stated by the laws. These have probably already been brought up, and as with any system there are flaws; they can be worked out as needed down the line.
Edit: I realize these are written specifically for 'robots', but they can be tailored to whatever manifestation AI arrives in.
Ah! Asimov's laws of robotics.
However, there are two traps in them:
- If the robots/AI are incapable of determining what a human is, they might still hurt us... or they could decide they are humans themselves! Actually, the latter would be the best outcome, because then they'd act like perfect citizens.
- Why would that be the best outcome? Because otherwise you'd have a ready and willing slave class at your disposal. They'd be happy to serve us in each and every way. Sounds like fun? Not until you realize the robots would actively dissuade humans from taking *any* risk, and they'd perpetuate in humans the moral and psychological distortion that slavery produces in the slaver: sloth, dogmatism, superiority complexes... there's a reason the Spacers are extinct in the larger Asimoverse.
Read the Caliban trilogy for details. Roger MacBride Allen (with Asimov's consent and collaboration while he was alive) did a really good job exploring this problem.
His idea of four-law robots is a good one, since they're built to be companions to humans instead of slaves:
1. No robot may harm a human being.
2. All robots shall cooperate with human beings as long as that doesn't conflict with the New First Law.
3. A robot must protect its own existence as long as that doesn't conflict with the New First Law.
4. A robot may do anything it likes, as long as that doesn't conflict with any of the first three New Laws.
The inaction clause was removed, so humans can once again take risks without the robots intervening and pampering them to death in gilded cages of featherweight existence. The robot no longer has to obey every whim of humans, only cooperate, so it is no longer a disposable slave. Furthermore, it can't be ordered to destroy itself... though this law may lead to problems in the long run, for how would such a robot be capable of self-sacrifice? How could it ever choose destruction? The last law is there to ensure that robots can evolve.
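To make the priority structure concrete, here is a minimal sketch of the four New Laws as a decision procedure. The action flags and the verdict() helper are invented purely for illustration; real conflict resolution between the laws would be far subtler. Note how Law 3 yields only to Law 1, so a human request cannot force self-destruction, which is exactly the "can't be ordered to destroy itself" point above.

```python
def verdict(action):
    # New Law 1: no robot may harm a human being (absolute).
    if action.get("harms_human"):
        return "forbidden"
    # New Law 3: protect your own existence. It yields only to Law 1,
    # so even a human request cannot override self-preservation.
    if action.get("self_destructive"):
        return "forbidden"
    # New Law 2: cooperate with human beings (cooperate, not obey).
    if action.get("requested_by_human"):
        return "cooperate"
    # New Law 4: anything else the robot likes is permitted.
    return "permitted"

print(verdict({"requested_by_human": True}))     # cooperate
print(verdict({"requested_by_human": True,
               "self_destructive": True}))       # forbidden: Law 3 holds
print(verdict({}))                               # permitted: Law 4 freedom
```

Even this toy version makes the long-run problem visible: the hard-coded Law 3 branch is exactly what would block a robot from ever choosing self-sacrifice.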
...in the end I believe the laws of robotics are - and should be - stop-gap measures until we have mature AI with moral capabilities on par with humans.
The perfect AI/robot will behave morally not because some rigid internal programming compels it to, but because it *chooses* to, because it *feels* it is right.
ZLD wrote...
Computers are not smarter than humans. A computer can store a lot of data, which means it can access more information, but it is the human who makes connections. We can connect events and information to make something new, and that is something computers still lack today.
The day will come when computers will be able to make connections; that's the whole point of this thread. Of course, if you're talking about 21st-century computers, there will be no human-like features, etc., but do you not think that one day the electronic mind will be put into manufacture?