Turning a New Leaf on Face Recognition

Peter N. Belhumeur
Professor of Computer Science
This profile is included in the publication Excellentia, which features current research of Columbia Engineering faculty members.
Photo by Eileen Barroso

Could centuries-old techniques used to classify species hold the key to computerized face recognition? Peter Belhumeur certainly thinks so. Face recognition has many potential uses, from verifying financial transactions to recognizing criminals. Today’s systems work by superimposing a subject’s face over images in a database. If they align, the computer samples pixels from each image to see if they match.
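To illustrate that pixel-based approach in the simplest possible terms (a generic sketch, not Belhumeur's system, and with an arbitrary threshold), a matcher might compare two pre-aligned grayscale images pixel by pixel:

import numpy as np

def pixel_match(face_a, face_b, threshold=10.0):
    # Naive pixel-based matcher: assumes both images are already aligned,
    # grayscale, and the same size; declares a match when the mean
    # per-pixel difference falls below the chosen threshold.
    if face_a.shape != face_b.shape:
        raise ValueError("images must be aligned to the same dimensions")
    mean_diff = np.mean(np.abs(face_a.astype(float) - face_b.astype(float)))
    return mean_diff < threshold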

The process is not very reliable. “Recognition algorithms make mistakes that they should never make, like confusing men with women, or one ethnicity with another,” Belhumeur said. Belhumeur was working on improving those algorithms when Smithsonian Institution taxonomists asked for help developing software to classify plant species from photos of their leaves.

Instead of superimposing images or matching pixels, Belhumeur drew on the wisdom of taxonomists dating back centuries. They classified plants by asking a series of questions whose yes-or-no answers narrowed the choices until they came to the right plant.
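The approach works like a botanical dichotomous key: each yes-or-no answer eliminates the candidates that do not fit. A toy sketch of that narrowing process, with invented species and traits purely for illustration:

# Toy dichotomous key: each answer removes species that lack the trait.
species_traits = {
    "sugar maple":  {"lobed_leaf": True,  "toothed_edge": True,  "compound_leaf": False},
    "white oak":    {"lobed_leaf": True,  "toothed_edge": False, "compound_leaf": False},
    "black walnut": {"lobed_leaf": False, "toothed_edge": True,  "compound_leaf": True},
}

def narrow(candidates, answers):
    # Keep only the species whose traits agree with every yes/no answer so far.
    return [name for name, traits in candidates.items()
            if all(traits.get(question) == answer for question, answer in answers.items())]

# "Is the leaf lobed?" yes; "Are the edges toothed?" yes
print(narrow(species_traits, {"lobed_leaf": True, "toothed_edge": True}))  # ['sugar maple']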

To this end, Belhumeur has developed LeafSnap, a new mobile application available for the iPhone and iPad. The free app allows users to photograph a leaf, upload it, and see a list of possible matches within seconds. LeafSnap’s database covers New York City’s Central Park trees and the 160 species in Washington, D.C.’s Rock Creek Park. Belhumeur, who co-developed the software with colleagues at the University of Maryland and the Smithsonian, hopes to eventually map species across the United States and give users the ability to add their own images to the database.

The way this technology works “is exactly the opposite of how computerized object recognition is done,” said Belhumeur. “Instead of pixels, we are comparing visual attributes.”

Belhumeur wondered if he could use a similar strategy to recognize faces. “Could we develop software that made qualitative decisions about each image? Is it a male or female? Young or old? Broad or pointy nose? … If we could build reliable classifiers to answer these questions,” he said, “we could search for pictures based on their attributes.”

Belhumeur’s system uses roughly 100 labels, ranging from eye and nose shape to hair color and gender. In tests that compare a photo to a known image, like an identity card, it outperforms pixel-based technologies.

It also makes it possible to search for pictures with words that describe visual attributes. “We could search through a database based on a victim’s description of an assailant, or use it to search one’s seemingly endless collection of digital photos,” he concluded.
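To make the idea concrete, here is a hedged sketch of how such an attribute-based search might be organized: each photo is reduced to labels produced by the classifiers, and a descriptive query simply filters on those labels. The attribute names and entries below are invented for illustration.

# Each photo is summarized by classifier outputs over visual attributes.
photo_attributes = {
    "photo_001.jpg": {"gender": "male",   "age": "young", "nose": "broad",  "hair_color": "brown"},
    "photo_002.jpg": {"gender": "female", "age": "old",   "nose": "pointy", "hair_color": "gray"},
    "photo_003.jpg": {"gender": "male",   "age": "old",   "nose": "pointy", "hair_color": "black"},
}

def search_by_attributes(database, **query):
    # Return the photos whose attribute labels satisfy every term in the query.
    return [photo for photo, attrs in database.items()
            if all(attrs.get(key) == value for key, value in query.items())]

# e.g., a witness describes an older man with a pointy nose
print(search_by_attributes(photo_attributes, gender="male", age="old", nose="pointy"))  # ['photo_003.jpg']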

B.S., Brown, 1985; M.S., Harvard, 1991; Ph.D., Harvard, 1993

500 W. 120th St., Mudd 510, New York, NY 10027    212-854-2993