
Robot, Know Thyself: Engineers Build A Robotic Arm That Can Imagine Its Own Self-Image


Robot self-awareness is one step closer today after engineers at Columbia University built a robotic arm that can imagine itself.

The ability to recognise the self and imagine that self in different scenarios is unique to human beings and a key factor in how advanced we’ve become. Imagining ourselves in future, as-yet-unrealised scenarios helps us work towards them. And revisiting past experiences for analysis, something some animals can also do, helps us continually learn the best response to a given situation.

Our self-image isn’t static either. We adapt and change that image over time, as our circumstances and experiences change.

But that’s not generally how robots learn. They’re fed knowledge through human-designed simulations and models, or by time-consuming trial and error – which is why what the Columbia engineers have done is so interesting.


They built a robot arm that had no knowledge of physics, geometry or motor dynamics and no idea if it was a spider, a monkey, an arm or a random geometric shape. But after a day of “babbling” to itself (translation: intensive computing), it had created a self-simulation. The arm then used that self-simulation to adapt to different situations and tasks.

“If we want robots to become independent, to adapt quickly to scenarios unforeseen by their creators, then it’s essential that they learn to simulate themselves,” says Hod Lipson, professor of mechanical engineering and director of the Creative Machines Lab, where the research was done, in a statement.

Lipson and his PhD student Robert Kwiatkowski created a fairly simple robot: an articulated arm with four degrees of freedom. When it was first switched on, the arm simply flopped about, moving through around a thousand random trajectories. It then used deep learning to create its first self-model – which was wrong. But after 35 hours of training, that self-model had been refined until it was consistent with the robot’s physical body to within about four centimetres.
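To make that idea concrete: the self-model is essentially a learned forward model – given a set of joint angles, predict where the hand ends up – trained on nothing but the data gathered during the babbling phase. The sketch below shows the general technique on a toy two-dimensional four-joint arm; the link lengths, network size and training details are illustrative assumptions, not the Columbia team’s actual code.

```
# Illustrative sketch only: learn a "self-model" (forward model) for a toy
# 4-joint planar arm from random motor babbling. The real robot and training
# setup differ; link lengths, network size, etc. are assumptions.
import numpy as np
import torch
import torch.nn as nn

LINK_LENGTHS = np.array([0.3, 0.25, 0.2, 0.15])  # metres, assumed

def true_end_effector(joint_angles):
    """Ground-truth forward kinematics of the toy planar arm (a stand-in for
    the physical robot that the self-model is trying to imitate)."""
    x = y = total = 0.0
    for length, angle in zip(LINK_LENGTHS, joint_angles):
        total += angle
        x += length * np.cos(total)
        y += length * np.sin(total)
    return np.array([x, y])

# --- "Babbling": sample random joint configurations and record the outcomes ---
rng = np.random.default_rng(0)
angles = rng.uniform(-np.pi, np.pi, size=(1000, 4))           # ~1,000 random poses
positions = np.array([true_end_effector(a) for a in angles])  # observed effects

# --- Self-model: a small network mapping joint angles -> end-effector position ---
self_model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
optimiser = torch.optim.Adam(self_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.tensor(angles, dtype=torch.float32)
Y = torch.tensor(positions, dtype=torch.float32)

for epoch in range(2000):
    optimiser.zero_grad()
    loss = loss_fn(self_model(X), Y)
    loss.backward()
    optimiser.step()

# The trained network is now a crude "self-image": it predicts where the arm
# will end up without touching the (toy) physical arm at all.
test = rng.uniform(-np.pi, np.pi, size=4)
pred = self_model(torch.tensor(test, dtype=torch.float32)).detach().numpy()
print("true:", true_end_effector(test), "predicted:", pred)
```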

When asked to pick something up and place it back down, the robot was pretty successful. In a closed-loop system – one that allows the robot to recalibrate its position between each step along the trajectory, based on its internal self-model – the arm could grasp objects on the ground and place them in a bin with 100% success.

In an open-loop system, where there’s no external feedback for recalibration – just the internal self-model alone – the success rate was 44%.

“That’s like trying to pick up a glass of water with your eyes closed, a process difficult even for humans,” said lead author Kwiatkowski.
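In code terms, the closed-loop/open-loop distinction is whether the controller gets to observe the arm’s real position between steps and correct for the self-model’s errors. The sketch below continues the toy example above, reusing self_model and true_end_effector from it; plan_step and the feedback correction are simplified illustrations I’ve assumed for the sake of the example, not the controller described in the paper.

```
# Illustrative sketch: open-loop vs closed-loop use of a learned self-model.
# `self_model` and `true_end_effector` come from the previous sketch.
import numpy as np
import torch

def plan_step(self_model, joints, goal, lr=0.1, iters=50):
    """Use the self-model 'in imagination' to nudge the joint angles so the
    predicted end effector moves toward the goal."""
    q = torch.tensor(joints, dtype=torch.float32, requires_grad=True)
    tgt = torch.tensor(goal, dtype=torch.float32)
    for _ in range(iters):
        err = ((self_model(q) - tgt) ** 2).sum()
        err.backward()
        with torch.no_grad():
            q -= lr * q.grad
            q.grad.zero_()
    return q.detach().numpy()

def reach(self_model, target, steps=10, closed_loop=True):
    joints = np.zeros(4)
    goal = np.array(target, dtype=float)
    for _ in range(steps):
        joints = plan_step(self_model, joints, goal)
        if closed_loop:
            # Closed loop: observe where the arm *really* ended up and shift the
            # goal given to the planner to compensate for the self-model's error.
            actual = true_end_effector(joints)
            goal = goal + (np.array(target) - actual)
        # Open loop: never look at the real arm, just trust the self-model.
    return true_end_effector(joints)

target = np.array([0.5, 0.3])
print("closed-loop reach:", reach(self_model, target, closed_loop=True))
print("open-loop reach:  ", reach(self_model, target, closed_loop=False))
```

With feedback, the residual error of the learned self-model gets corrected away step by step; without it, the final accuracy is capped by however good the self-model happens to be – which is the gap the 100% versus 44% figures illustrate.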

To test the robot’s adaptability, the engineers simulated damage by swapping in a deformed part; the robot was able to alter and retrain its self-model and return to picking up objects.
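In the same toy framing, handling damage just means gathering a fresh round of babbling data from the altered body and fine-tuning the existing self-model on it. This again continues the earlier sketch (reusing LINK_LENGTHS, rng, self_model, optimiser and loss_fn); the particular “deformation” below is an arbitrary assumption.

```
# Illustrative sketch: adapting the self-model after simulated "damage".
# Here the damage is an assumed bend and shortening of the third link.
def damaged_end_effector(joint_angles):
    bent = np.array(joint_angles, dtype=float)
    bent[2] += 0.4                 # the deformed part adds a fixed bend...
    lengths = LINK_LENGTHS.copy()
    lengths[2] *= 0.5              # ...and shortens that link
    x = y = total = 0.0
    for length, angle in zip(lengths, bent):
        total += angle
        x += length * np.cos(total)
        y += length * np.sin(total)
    return np.array([x, y])

# Babble again with the damaged body and fine-tune the existing self-model.
angles2 = rng.uniform(-np.pi, np.pi, size=(1000, 4))
positions2 = np.array([damaged_end_effector(a) for a in angles2])
X2 = torch.tensor(angles2, dtype=torch.float32)
Y2 = torch.tensor(positions2, dtype=torch.float32)
for epoch in range(1000):
    optimiser.zero_grad()
    loss = loss_fn(self_model(X2), Y2)
    loss.backward()
    optimiser.step()
# The updated self-model now reflects the altered body, so the same planning
# code from the earlier sketch can be reused unchanged.
```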

“This is perhaps what a newborn child does in its crib, as it learns what it is,” said Lipson. “We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot’s ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness.”
