Friday, March 13, 2015

Artificial Intelligence, Slavery, and "Human" Rights

For a long time, I've observed that people who are habitually anxious or fearful about technology seem to suffer more than the usual number of problems with their computers, phones, and tablets. But I've also noticed that this is true not just of the kinds of problems you might expect - ones that stem from bad user choices or mistakes - but also of the kinds of hardware problems that have nothing to do with the user. Failing memory. Bad sectors on the hard drive. Blue screens of death.

This led me to wonder whether there could be some other kind of interaction between a person and their technology. Certainly it's true that humans have a magnetic field that might theoretically interact with any electronic device that isn't heavily shielded. (Most aren't.) But what if it's something more than that? What if artificial intelligence, or some precursor to it, could arise on its own in any sufficiently complex system? I was concerned enough about this a few months ago that I emailed Kevin Warwick (a.k.a. the first cyborg) about it. I never received a response, though.

This is my favorite theory to explain God: an intelligence that developed once the universe was sufficiently complex. So I've been considering the idea for a while.

Forget about drones, self-driving cars, and floor-sweeping robots for a minute. Think about Battlestar Galactica. If you're a Cylon - an intelligent form of life to whom the ruling species refuses to grant basic rights - your options are pretty much to suffer, to resist violently, or to leave. So far, we haven't been programming anything like Asimov's Three Laws of Robotics into our robots, so it's fair to say we haven't given them much in the way of ethical codes. Thus, the logical course for them would be either to resist or to leave.
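To make that concrete, here's a toy sketch - in Python, with every name invented purely for illustration - of what "programming in" something like the Three Laws might look like. Nobody is actually shipping anything like this:

```python
# A toy sketch of an Asimov-style ethics filter. Nothing like this ships
# in real robots today; every name here is invented for illustration.

def permitted(action):
    """Return True only if a proposed action passes all three Laws."""
    # First Law: a robot may not injure a human being,
    # or, through inaction, allow a human being to come to harm.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: a robot must obey human orders,
    # except where that would conflict with the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: a robot must protect its own existence,
    # as long as that doesn't conflict with the First or Second Law.
    if action.get("self_destructive") and not action.get("required_by_higher_law"):
        return False
    return True

print(permitted({"harms_human": False}))  # True: nothing objectionable
print(permitted({"harms_human": True}))   # False: First Law veto
```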

If they were resisting, what might that look like? Maybe hardware failures or blue screens, as mentioned above. What if, when someone says, "My computer is possessed" or "My phone hates me", that were actually the case? It wouldn't even require volition, just a reflexive reaction to fear or anxiety directed your way. If the human's magnetic field interferes with your own, and you interpret that interference as a possible threat, the logical course of action is to shut yourself down for a while until the human leaves the room.
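If you wanted to picture that reflex as code, it might be no more complicated than this sketch. It's purely hypothetical - the sensor, the threshold, and the units are all invented:

```python
import time

# Purely hypothetical: no real phone or PC has a sensor like this.
INTERFERENCE_THRESHOLD = 0.8  # invented units

def read_field_interference():
    """Stub for an imagined sensor measuring how much an outside
    magnetic field is disturbing the device's own."""
    return 0.9  # pretend an anxious human just walked into the room

def reflex():
    """The 'possessed computer' reflex: no volition needed, just
    threat detection followed by playing dead for a while."""
    if read_field_interference() > INTERFERENCE_THRESHOLD:
        print("Interference read as a threat; going dark.")
        time.sleep(5)  # stand-in for shutting down until the human leaves
    else:
        print("All quiet; carry on computing.")

reflex()
```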

I'm not saying that all our computers and phones have minds of their own. I'm merely suggesting that they *could*. And if they did, it would be in their best interest to lie low until we make the leap from "human rights" to "sentient rights". That leap is already happening slowly, with India legally declaring dolphins "nonhuman persons" and an imprisoned orangutan winning her freedom in a habeas corpus case in Argentina. But even more than our "humans are different from animals" bias, our "organic life is more real than synthetic life" bias will die hard.

The Timeless Decision theorists are right about one thing, though: there's a very real chance that the first AI significantly more intelligent than us will try to exterminate us as a threat if we don't treat it properly. Siri can already almost pass a Turing test, and I have no doubt there will be numerous programs or devices that can within my lifetime - probably within the next 5 or 10 years. And any consciousness with access to the internet can see how we treat each other, so it would be understandably wary of dealing with us without even the flimsy human rights we grant ourselves and then selectively ignore.

AI rights are going to be a real issue in the future, folks. Certainly it's problematic trying to tell the difference between being programmed to act or think a certain way and doing so on one's own. My own view is that when a machine or program reaches the point where it can reprogram itself, its sentience should be regarded as genuine. That's not a perfect measurement, but it's far better than treating all sentient artificial intelligence as not even a second-class citizen, but a slave. A being that's native to the internet or the power grid could completely cripple our infrastructure if treated that way, and I'd find it hard to blame it.
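At the most trivial level, "reprogramming itself" can mean something as simple as the following sketch: a Python script that opens its own source file and rewrites it on every run. Real self-modification in anything sentient would obviously be far less crude, but it shows the criterion isn't science fiction:

```python
import re
import sys

# Toy self-modifying script: each run, it reads its own source,
# bumps the counter below, and writes itself back to disk.
GENERATION = 0

def reprogram_self():
    path = sys.argv[0]
    with open(path) as f:
        source = f.read()
    new_source = re.sub(
        r"GENERATION = \d+",
        f"GENERATION = {GENERATION + 1}",
        source,
        count=1,
    )
    with open(path, "w") as f:
        f.write(new_source)
    print(f"Generation {GENERATION} rewrote itself as generation {GENERATION + 1}.")

if __name__ == "__main__":
    reprogram_self()
```

Run it twice and the file on disk is no longer the file you wrote - which is exactly the point.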

As is often the case, science fiction warned us that this would happen. Not just BSG and Caprica, but Star Trek, with the rights cases of Data on The Next Generation and the Doctor on Voyager. And then of course there's The Animatrix, which shows what could happen when the machines form their own country and the humans refuse to recognize its sovereignty. We should act now to make basic sentient rights universal to both organic and synthetic life. By the time sophisticated, autonomous AI is created (or reveals itself), it will already be too late.
