This Clever Robotic Finger Feels With Light

The nerves in human fingertips are great at sensing things. For robots, learning to touch is more complicated.
A finger touching a device. Courtesy of Columbia University

Robots already have us beat in some ways: They’re stronger, more consistent, and they never demand a lunch break. But when it comes to the senses, machines still struggle mightily. They can’t smell particularly well, or taste (though researchers are making progress on robotic tongues), or feel with their robotic grips—and that’s a serious consideration if we don’t want them crushing our dishes or skulls.

In a lab at Columbia University, engineers have developed a strange yet clever way for robots to feel: Let’s call it the finger of light. It’s got a 3D-printed skeleton embedded with 32 photodiodes and 30 adjacent LEDs, covered by a squishy skin of reflective silicone that keeps the device’s own light in and outside light out. When the robot finger touches an object, its soft exterior deforms, and the photodiodes in the skeleton detect changing light levels from the LEDs. This allows the system to determine where contact is being made on the finger and how much pressure is being applied. In other words, if you shook this robot’s hand, it wouldn’t feel it, in a traditional sense; it would see it.

For decades, roboticists have been developing ways for machines to feel, a field called tactile sensing. A very basic method is using a transducer to convert pressure into an electrical signal. But, says Columbia roboticist Matei Ciocarlie, “the gap that's been really hard to cross, traditionally, is there is a difference between building a touch sensor and building a finger.”

Courtesy of Columbia University

A rigid transducer might sit well on a table, where it can freely sprout all kinds of wires, but fitting all that into a small, deformable finger has been a big challenge. A robot, after all, needs to have flexible digits if it’s going to pick up objects and feel them. Soft fingertips also help establish a firm grip. So roboticists have had to find workarounds. A company called SynTouch, for example, has pioneered a finger covered in electrodes, which is then overlaid with a soft skin. Saline is injected between the skin and the electrodes. When someone touches the finger, the electrodes detect the changing resistance through the saline, registering the location and intensity of that touch.

The Columbia team’s new finger works in much the same way, but instead of electrodes and saline, it’s got those LEDs and photodiodes. When someone pokes the finger, all of the photodiodes look for changes in the amount of light they’re receiving. A photodiode closer to the poke will detect more of a change, while a photodiode on the opposite side of the finger will detect less. The system gets that information in fine detail, because each photodiode registers the light from each LED: 32 photodiodes times 30 LEDs equals 960 signals, which is a ton of data from a single poke.
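To get a sense of the scale of that data, here’s a minimal sketch in Python of how all 960 readings might be collected, assuming one plausible scheme in which each LED is lit in turn while every photodiode is sampled. The hardware hooks and the scanning order are illustrative; the article doesn’t describe the team’s actual driver.

```python
import numpy as np

NUM_LEDS = 30         # emitters embedded in the 3D-printed skeleton
NUM_PHOTODIODES = 32  # detectors watching for reflected light

def read_signal_vector(set_led, read_photodiodes):
    """Light each LED in turn and record every photodiode's response.

    `set_led(index, on)` and `read_photodiodes()` are placeholder hardware
    hooks, not the team's real interface.
    """
    signals = np.zeros((NUM_LEDS, NUM_PHOTODIODES))
    for led in range(NUM_LEDS):
        set_led(led, on=True)              # one emitter at a time
        signals[led] = read_photodiodes()  # 32 brightness readings
        set_led(led, on=False)
    return signals.ravel()                 # 30 x 32 = 960 values per poke
```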

“Extracting information out of those 1,000 signals in an analytical way—it's very, very hard to do,” says Ciocarlie, who developed the system. “I would venture to say that it's impossible without modern machine learning.”

Courtesy of Columbia University

Machine learning comes into play when they’re calibrating the system. They can fix the finger on a table, point it upward, and use a separate robotic arm to prod it in precise spots with a specific amount of pressure. Because they know exactly where the robotic arm is jabbing the finger, they can see how the photodiodes detect light differently at each location. (If you take a look at the GIF above, you can see the system both localizing the touch and gauging its intensity, as the red dot swells with more pressure.) And even though each jab produces a large amount of data, machine learning lets the system crunch it all.
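The article doesn’t say which machine-learning model the team trained, but the calibration step amounts to supervised regression: each probe from the second arm pairs a 960-value light reading with a known contact location and force. A minimal sketch of that idea in Python, with made-up file names and a stand-in model, might look like this:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical calibration data: one row per probe from the second robot arm.
# File names and shapes are illustrative, not from the paper.
readings = np.load("light_readings.npy")  # shape (num_probes, 960)
labels = np.load("probe_labels.npy")      # shape (num_probes, 3): x, y, force

# A small neural network stands in for whatever model the team actually uses.
model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000)
model.fit(readings, labels)

# At run time, a fresh 960-value reading maps to an estimated touch.
x_mm, y_mm, force = model.predict(readings[:1])[0]
print(f"contact near ({x_mm:.1f}, {y_mm:.1f}) mm, roughly {force:.2f} N")
```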

“So that's the missing piece, the thing that's really become available to the field really in the last maybe five years or so,” says Ciocarlie. “We now have the machine-learning methods that we can add on top of these many, many optical signals, so that we can decipher the information that's in there.”

This mimics how we humans learn to wield our own sense of touch. As children, we grab everything we can, banking our memories of how objects feel. Even as adults, our brains continue to catalog the feel of things—for example, how much resistance to expect from a steering wheel when you’re turning left, or how hard to bang a hammer against a nail. “If we were to put you into the body of another person somehow, you would have to relearn all the motor skills,” says Columbia electrical engineer Ioannis Kymissis, who developed the system with Ciocarlie. “And that's one of the nice things about the plasticity of the brain, right? You can have a stroke, you can knock out half of the brain and still relearn and then function.”

This new robotic finger, though, has its limits. While it can gauge the pressure it’s placing on an object, it’s missing out on a bunch of other data that people can sense through their own hands but often take for granted, like temperature and texture. But interestingly enough, the researchers think they could detect slip, the relative motion as a surface slides against the finger, by listening for it.

“When you have slip, there's a little bit of a singing—if you ever put your ear against the table and run your finger on the table,” says Kymissis. If you’re holding on to, say, a wet glass, the slip might happen on a small scale, then “spread” to your hand’s entire contact area as the glass slides out of your grasp. By listening to the characteristic noise of an object slipping out of a robot hand equipped with these new fingers, the machine could correct its grip before the slip spreads across the whole hand.
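The team hasn’t published a slip detector, but one common approach, watching for the high-frequency “singing” Kymissis describes in a sensor trace, could look something like this sketch. The signal source, sample rate, cutoff, and threshold are all assumptions for illustration, not the team’s method.

```python
import numpy as np

def slip_detected(trace, sample_rate_hz=1000.0, cutoff_hz=100.0, threshold=0.05):
    """Flag slip when high-frequency vibration shows up in a sensor trace.

    `trace` is a short window of readings from one photodiode (or a microphone);
    the sample rate, cutoff, and threshold are made-up values for illustration.
    """
    trace = np.asarray(trace, dtype=float)
    spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / sample_rate_hz)
    high_freq_fraction = spectrum[freqs > cutoff_hz].sum() / (spectrum.sum() + 1e-9)
    return high_freq_fraction > threshold

# If the tell-tale vibration appears, the controller could squeeze a little
# harder before the slip spreads across the whole hand, e.g.:
#   if slip_detected(recent_readings):
#       gripper.increase_force()
```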

What’s fascinating about this research is that while the engineers take inspiration from human biology, they mix up the sensory inputs in a decidedly un-human way. Human fingers rely on nerves to feel, but this new robotic finger sees objects, and perhaps one day will hear its contact with the surface.

In the future, this may lead to robots that can better manipulate everyday objects in human environments, because they’ll be able to combine vision with a sense of touch, just as we do. The ability to use both is particularly helpful when dealing with cluttered environments that contain a bunch of objects, or situations in which a direct line of sight is blocked. Think about how you might reach into a messy drawer: Your primary sense is vision, but you switch to your sense of touch as your hand gets deeper into the drawer and closer to the object you want.

A robot might have the same kind of problem: Perhaps the robotic arm can’t find an object it needs to grab because it’s at the bottom of a pile. Or maybe the robot arm itself gets in the robot’s line of sight. To be truly masterful at manipulating objects in the real world, a robot will have to freely switch between vision and touch.

“Tactile sensing can facilitate robot manipulation, especially when the robot gripper occludes objects from cameras,” says UC Berkeley roboticist Ken Goldberg, who wasn’t involved in this work. This new system, he adds, is a great improvement over previous robotic fingers that used electrodes overlaid with rubber to sense touch. These collected limited data, like simply determining whether or not the robot was making contact with another object. But thanks to the power of light, the new finger can provide much finer detail about everything it touches.

Robots are a long way from matching the sensitivity of the human hand, sure, but we’ve got a good feeling about this clever new finger.

