The term most commonly used to define a robot is “machine.” That puts a robot in the same category as a smartphone, a remote control, a vacuum, or a drill. Or does it? Robots are currently being developed with all kinds of bells and whistles, making them more and more human-like. However, being human-like is not the same as being human. Or is it?

The idea of “robot cruelty” involving murder, torture, or other mistreatment presupposes, at the very minimum, a sentient entity. In fact, the legal definition of murder requires the killing of a human being, while the concept of torture covers both humans and animals, though when the victim is an animal we typically refer to the act as “cruelty.” With the sentient-being/machine distinction in mind, would the military actually terminate a test involving a maimed, struggling robot because it was “inhumane” to continue? Yes, it did. Would people refuse to harm a robot dinosaur because they became too emotionally involved? Yes, they did.

While most robots are not sentient beings, it appears to be only a matter of time before “feeling” robots are the norm. Once that happens, should they be given certain legal rights? Without legal protection, robots could be subject to all kinds of abuse, allowing people to commit horrific acts, without sanction, against people-like machines. We may also imagine some people failing to distinguish between people-like robots and persons, resulting in transference-like behavior. For instance, some people may rationalize that if it is acceptable to engage in inappropriate acts with a child robot, it is also acceptable to do so with a human child.

What is the solution? Do we limit robots’ characteristics so they are less “human”? Give them human-like rights? Do nothing? What do you think?
Who is Who or What is What?