Media Lab Project Creates Interactive Environment
By Damian Isla
First in a series profiling research projects at MIT.
Researchers at the Media Lab are creating a computer system that replicates a user's body in a computer environment where it can interact with other inhabitants of the virtual world, including a hamster and a dog that understands spoken commands.
The Artificial Life Interactive Video Environment system is also being used to develop a virtual aerobics instructor, which, when completed, will be able to provide accurate, instantaneous feedback on body position and movement for simple aerobic exercises.
ALIVE uses the latest in pattern recognition and artificial intelligence technology to immerse the user in an augmented reality that combines both real and virtual elements.
"Computers, as they are, are deaf, dumb and blind," explained Associate Professor of Media Arts and Sciences Alex P. Pentland, who heads the Vision and Modeling group, one of two Media Lab groups working on the project.
Humans and computers "live in separate worlds. ALIVE brings those two worlds closer together, by allowing computers to understand human input on a more human level," Pentland said.
Using only the flat image provided by a normal video camera, ALIVE can detect the position and movements of a person using the system. The program can then add other subjects, with which the user may interact, to the computer environment.
System recognizes hands, feet
The system recognizes a person by isolating and analyzing their outline. Then, based on prior information about basic human shape and anatomy, it finds their head, hands, and feet.
Assuming the user stays on the floor, ALIVE can construct a complete 3D model of the scene from the camera's single flat image.
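The floor assumption is what makes depth recoverable from one camera: if the feet touch the floor, their image position pins down their distance. A minimal sketch of the idea (the camera geometry and intrinsic values here are illustrative assumptions, not the actual ALIVE code):

```python
def feet_to_3d(u, v, f=500.0, cx=320.0, cy=240.0, cam_height=2.0):
    """Map the image position (u, v) of the user's feet to a 3D point.

    Assumes a pinhole camera mounted `cam_height` metres above the floor
    with a horizontal optical axis; f, cx, cy are illustrative intrinsics
    (focal length in pixels and principal point).
    """
    if v <= cy:
        raise ValueError("feet must project below the horizon line")
    z = f * cam_height / (v - cy)   # depth: nearer feet appear lower in the image
    x = (u - cx) * z / f            # lateral offset, scaled by depth
    return (x, 0.0, z)              # feet lie on the floor plane (y = 0)
```

For example, feet imaged at the bottom-center of the frame come out close to the camera, while feet near the horizon line come out far away.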
ALIVE also uses two microphones located at the base of the screen to locate the user by sound and to respond to verbal commands as well as general noises such as clapping or shouting.
"There's only so much data you can get from one microphone, especially when there's a crowd standing around watching," said Sumit Basu G, who is working on the project. "By having two, we can get more specific, localized information from the user."
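With two microphones, the delay between the signals reveals which direction a sound came from. A hedged sketch of that standard two-microphone technique (the sample rate, microphone spacing, and function name are assumptions for illustration, not details of the ALIVE system):

```python
import numpy as np

def direction_of_arrival(left, right, fs=16000, mic_dist=0.3, c=343.0):
    """Estimate the bearing of a sound from the delay between two mics.

    Cross-correlate the two signals to find the lag (in samples) at which
    they best align, convert that to a time delay, then to an angle.
    Parameter values (sample rate fs, mic spacing, speed of sound c)
    are illustrative.
    """
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # lag in samples; sign gives side
    tau = lag / fs                             # delay in seconds
    # clamp to the physically possible range before taking arcsin
    s = max(-1.0, min(1.0, c * tau / mic_dist))
    return float(np.degrees(np.arcsin(s)))
```

A sound arriving at both microphones simultaneously yields a bearing of zero (straight ahead); a sound that reaches the left microphone first yields a bearing off to that side.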
Second group creates animals
The Autonomous Agents group at the Media Lab uses this data to determine how the virtual inhabitants of the world will react to the user and to each other.
These inhabitants currently include a hamster, a predator, a puppet, and a dog.
The dog, the most sophisticated of the agents, responds to the user's spoken commands and gestures. The user can also throw the dog, named Silas, a virtual ball to play with, pet it, or feed it.
The dog's program wrestles with several basic needs, including the need for human attention, hunger, and the need for sleep.
The hamster's actions are similarly determined by the varying intensities of its desire for food, which it can beg from the user; its desire to be petted or have its belly scratched; and its fear of the predator, which the user can let loose.
At the same time, the predator is torn between its desire to catch the hamster, and its fear of the user, whom it regards as a predator itself.
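One simple way to picture how such competing drives could determine behavior is a winner-take-all selection: each drive carries an intensity, and the strongest one wins. This toy sketch is an assumption for illustration, not the Autonomous Agents group's actual implementation:

```python
# Each behavior is paired with the current intensity of the drive
# behind it; the creature acts on whichever drive is strongest.

def select_behavior(drives):
    """Pick the behavior whose drive intensity is currently highest."""
    return max(drives, key=drives.get)

# Hypothetical snapshot of Silas the dog's drives: attention, hunger, sleep.
silas = {"seek_attention": 0.6, "eat": 0.8, "sleep": 0.2}
print(select_behavior(silas))   # -> eat
```

In this picture, feeding the dog would lower its hunger intensity, so a different drive, such as the need for attention, would take over on the next selection.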
Goals focus on interaction
The groups hope that the ALIVE system will, in the long run, help to bring about new and freer means of interacting with computers. Involving no wires, gloves, or goggles, ALIVE allows for more unrestricted movement. This could help open up computing to more non-technical individuals, especially children and persons with disabilities.
"What's missing from computers is not the networking or the power or the speed," Pentland said. "What's missing is that our computers don't live in the same sensory world as we do. In a sense, our computers don't know who we are."