The Tech - Online Edition
MIT's oldest and largest newspaper & the first newspaper published on the web


Our Cyborg Future

Kris Schnee

“1. A robot must never harm a human or allow one to be harmed through inaction.

2. A robot must obey any command given by a human, unless this violates Rule 1.

3. A robot must protect itself from harm, unless this violates Rule 1 or 2.”

Asimov’s Laws of Robotics

Moore’s Law, predicting the exponential growth of computing power, has been inexorable for decades now. By 2019, claims author Ray Kurzweil, “a $1,000 computer will at least match the processing power of the human brain.” Equipped with mechanical bodies (so the predictions go), the computers of the near future will become true “robots” with superhuman intelligence and no emotions, and will threaten to replace humans altogether. There may be no escaping this fate.

Reality check: we’re nowhere near that point. The most powerful neural nets are still far dumber than the average monkey, and modern robots have trouble finding their way across a room, let alone overthrowing humanity. There’s no need to panic just yet.

Still, there is the question of what to expect from advances in robotics and artificial intelligence -- both in terms of the technology itself and its effect on society -- and how we can deal with them. Can we ever build a robot equal in mind to a human? A robot with human intelligence and true self-awareness? Some people would say that it’s a question best left to philosophers, an attitude that accomplishes nothing but ensuring philosophers’ continued employment. Instead, look at the facts: it’s possible for nature to build a device out of living cells that has human intelligence; we know in general how neurons work, and we can imitate neural networks with silicon chips which get better every year. Isn’t it possible, then, that imitating nature will let us build intelligent machines? We’ll see soon enough.

So if we have intelligent robots someday, how will we treat them? In the recent movie Bicentennial Man (great film, awful ending), the robot Andrew is bound by Asimov’s Laws, requiring him to protect and serve humans. These laws are a simple, obvious, and foolish way to deal with robots: besides assuming that future robots would automatically exploit and attack people unless programmed not to, they would make slaves of intelligent beings. Having extended freedom in the Western world to all types of humans, we would be hypocrites to deny equal rights to anyone capable of asking for them.

The possibility of the “machine rights” issue coming up someday is another good reason why we need to clarify our understanding and definition of intelligence. Long before we have to deal with computers obviously equal to us, there will be -- in fact, there already are -- partially intelligent systems. There is a famous computer psychiatrist-simulator, Weizenbaum’s ELIZA, which does little more than reflect back whatever the “patient” types, yet people have been willing to tell it their problems as if it were a real person.

There are games featuring little AI “creatures” that walk, talk, and even learn, using a simple neural net program. Imagine multiplying the power of these intelligence simulations by, say, 64 (nine years away, by Moore’s estimate) or 1024 (in fifteen years); at what point does a simulation of a thinking being actually become a thinking being? An improved, clarified intelligence test would help us to answer the question of what we mean by “artificial intelligence.”
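The multipliers above follow from the usual reading of Moore’s estimate -- a doubling of processing power roughly every 18 months. As a quick sanity check of the arithmetic (the 18-month figure is the common rule of thumb, not a quote from the article), nine years is six doublings and fifteen years is ten:

```python
# Rule-of-thumb Moore's Law: processing power doubles about every 18 months.
DOUBLING_PERIOD_YEARS = 1.5

def power_multiplier(years):
    """How many times more powerful hardware becomes after `years` of doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

print(power_multiplier(9))   # 6 doublings  -> 64.0
print(power_multiplier(15))  # 10 doublings -> 1024.0
```

Both figures match the ones in the text, so the timeline quoted there is just Moore’s curve read forward.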

The scenario of machines gradually taking over more and more mental tasks from humans until the human race is completely useless does not seem likely; instead, humans will be combined with machines. Does this seem farfetched? Already some people have cochlear implants, or artificial ears, and there is a certain man whose brain is linked directly to a computer with a video camera. This man was blind, but now he sees -- a little.

Artificial limbs, while still crude, are starting to detect the electrical signals of the wearer’s muscles and respond to them; soon they could be linked directly to nerves and controlled, even felt, like the real thing. Professor Kevin Warwick in England is trying to record and control the motions of his arm using an implanted chip. So it’s plausible that in ten or twenty years, a person might have a heart, eyes, ears, arms, and legs all made of machinery. While implanted memory chips or math coprocessors for the brain are farfetched, wearable computing can accomplish the same things without surgery.

Partially-mechanized, permanently-online people are more likely to run the world than a race of totally artificial, machine intelligences. Rich, techno-savvy “cyborgs” will have the edge over ordinary people. What countries have the money to buy cybernetic gear? What countries, then, will have a large technological advantage over everyone else? A lot of people are going to be further marginalized by the advancing technology, and there may not be much we can do about it.

Of course, robotics is only one piece of a larger puzzle; biotechnology will also play a large role in designing our descendants. But the advances of computer technology alone are enough to make life complicated for everyone; the philosophers won’t be out of business any time soon.