Simulated characters that learn to use their limbs
Researchers in Leipzig have demonstrated software designed for robots that allows them to "learn" to move through trial and error.
The software mimics the interconnected sensing and processing of a brain in a so-called "neural network".
Armed with such a network, the simulated creatures start to explore.
In video demonstrations, a simulated dog learns to jump over a fence, and a humanoid learns how to get upright, as well as do back flips.
Ralf Der at the Max Planck Institute for Mathematics in the Sciences has also applied the software to simulated animals and humans.
The only input to the network is the types of motion that the robot can achieve; in the case of a humanoid, there are 15 joints and the angles through which they can move. No information about the robot's environment is given.
The network then sends out signals to move in a particular way, and predicts where it should end up, based on that movement.
If it encounters an obstacle such as itself, a wall or the floor, the prediction is wrong, and the robot tries different moves, learning about itself and its environment as it does so.
"In the beginning, we just drop a robot into a space. But they don't know anything, so they don't do anything," Professor Der said. The neural network eventually picks up on electronic noise, which causes small motions.
It eventually tries larger motions as it learns about its range of movement. "It's like a newborn baby—it doesn't know anything but tries motions that are natural for its body. Half an hour later, it's rolling and jumping," Professor Der said.
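The learning loop the article describes — send a motor command, predict the resulting joint angles, compare against what actually happened, and adapt, with small noise bootstrapping the first motions — can be sketched in a few lines. This is an illustrative toy, not Der's actual homeokinesis algorithm; the linear one-weight-per-joint model and the noise level are assumptions made purely for demonstration.

```python
import random

class PredictiveController:
    """Toy sketch of a prediction-driven motor loop.

    Each joint gets a single weight: the model's guess at how a motor
    command maps to the next sensed joint angle. This linear model is
    an illustrative assumption, far simpler than a real neural network.
    """

    def __init__(self, n_joints):
        self.weights = [0.0] * n_joints
        self.lr = 0.1  # learning rate (assumed value)

    def step(self, angles, world):
        # Motor command: model output plus small noise. As in the
        # article, the noise is what bootstraps motion when the
        # robot starts out doing nothing at all.
        commands = [w * a + random.gauss(0, 0.01)
                    for w, a in zip(self.weights, angles)]
        # Predict where each joint should end up...
        predicted = [w * c for w, c in zip(self.weights, commands)]
        # ...then let the environment respond. `world` stands in for
        # the physics simulation (walls, floor, the body itself).
        actual = world(commands)
        # Adapt the model to shrink the prediction error.
        for i, (p, a, c) in enumerate(zip(predicted, actual, commands)):
            self.weights[i] += self.lr * (a - p) * c
        return actual
```

With a simple linear "physics" stand-in such as `world = lambda cmds: [0.5 * c for c in cmds]`, repeatedly calling `step` nudges each weight toward the world's true response, mirroring how the simulated creatures refine their self-model through trial and error.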
This approach is far more flexible than traditional programming, in which movements are painstakingly planned out in a well-defined space. As conditions change, so can the robot's behaviour.
Moreover, the software can be used with any kind of robot, and Professor Der has tried the system on simple wheeled systems. "I call it a plug-and-play brain," he said.
"The classic thing in robotics is 'bring this' or 'play this chess game and win'—the task is given," says Daniel Polani of the University of Hertfordshire. "Ralf Der's system is only defined by what it perceives and does, but there's no goal. It's a very good approach."
For now, the network learns behaviours such as how to stand up, but promptly forgets them. Der and his colleagues are working to create a long-term memory, so that when the robot finds itself in similar situations, it knows what to do.
He will present the video demonstrations at the Artificial Life XI conference in Winchester this week.