The Humanity of Computers


Sophia Bare, Reporter

Since the conception of Science Fiction, one of its main and recurring themes has been that of sentient robots, or AI (Artificial Intelligence). Some examples are The Hitchhiker’s Guide to the Galaxy (1979), I, Robot (2004), Ex Machina (2014) and “USS Callister” (2017). These androids are described as acting in ways similar, or identical, to how humans act. They are depicted as feeling emotions and forming opinions based on observations they make, on their own, about their surroundings. These lines of code, these AI, act as humans do. But when it boils down to it, are they human?
In my opinion, they are not. The common saying goes, “If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck,” hinting that if a robot can speak and act as if it were human, then it probably is human, or should be considered so. I deem the saying wrong. These AI are certainly sentient — that is, they can process and feel emotions — but to say that because they are sentient, they are by definition human is, I feel, ignorant.
Yes, I understand that as humans, we feel that we are the highest lifeform. And to a certain extent, that is true. From a cultural and scientific standpoint, we are far more advanced than, say, a crab. But it is highly unlikely that we are the most advanced form of life in comparison to the possible lifeforms that exist within our infinitely expanding universe. So to judge an AI, which is supposedly sentient and uniquely free-thinking, by our standards of what we consider human enough to treat with respect is absolutely absurd.
And when it comes down to it, it is the respect that we show AI that is the major problem. In every SciFi piece featuring AI, there either is, was, or will be some kind of revolt against the humans, caused by the mistreatment of AI at the hands of humans. And the root of that prejudice is that human beings do not see AI, fellow intelligent creatures, as “human,” and therefore mistreat them. But why is that so? Because humanity is inherently narcissistic. We hold ourselves to the highest degree, and if anything is not “like” us, we immediately assign it a lower status. It is not as important as us, and therefore need not be respected as such.
To be honest, I do not care whether or not one categorizes an AI as human; I can see the case either way. That is not the problem. The problem is humanity’s refusal to treat AI with the same respect as fellow humans simply because one does not identify them as “human.”
And all of this is not just a Science Fiction phenomenon. The reality is, every day, we are one day closer to achieving a technological miracle: AI that completely passes the Turing Test, making its behaviour completely indistinguishable from that of a human being. Hence, the problem of AI — whether we consider them human or not, why that matters, and how we choose to treat them — will soon be our problem to solve.

Photo via Wikimedia Commons under Creative Commons license