Pushing the Boundaries of Augmented Reality

In an April 2002 article in Scientific American, Steve Feiner predicted that wearable augmented reality (AR) glasses would be commonplace within 10 years. They might not be exactly commonplace yet, but precisely a decade later, Sergey Brin, cofounder of Google, was photographed wearing a prototype of Google Glass, and the product came to market in 2013.

Feiner, professor of computer science and director of Columbia Engineering’s Computer Graphics and User Interfaces (CGUI) Lab, has long been at the center of AR’s evolution. This April, Feiner received the SIGCHI 2018 Lifetime Research Award. Last October, the IEEE International Symposium on Mixed and Augmented Reality recognized Feiner’s contributions to the field with the Career Impact Award, and recently he was named an IEEE Fellow. In fact, Feiner has been working to develop augmented and virtual reality systems for nearly 30 years.

Throughout his career, Feiner has understood how AR could be harnessed in the service of humanity. He foresaw that it could allow firefighters to retrieve the layout of a burning building and thus avoid otherwise invisible hazards; tourists, to glance down a street and access reviews of restaurants in view; or soldiers, to pinpoint the positions of enemy snipers who had been spotted by drones.

Medical student Shirin Sadri works on an AR system that allows surgeons to view and interact with patient-specific anatomical models. (Photo by Timothy Lee Photographers)

Such scenarios are all within reach, as is another promising one now under development in Feiner's lab. There, Gabrielle Loeb and Shirin Sadri, medical students at Columbia's Vagelos College of Physicians and Surgeons, are working with Feiner to create a system using commercially available AR eyewear to guide physicians performing endovascular surgery. A surgeon outfitted with the wearable display can view and interact with patient-specific anatomical models as he or she operates.

Feiner continues to innovate in VR as well. Last fall, he and his team bested 150 entries to win the grand prize in the NYC Media Lab’s annual demo competition, demonstrating a wearable VR user interface that allows users to precisely and efficiently teleport themselves within a world-in-miniature virtual city environment by preorienting an avatar.

In one remote collaborative task assistance project, sensors register an object’s pose in space to create a 3D virtual copy (shown here on the screen), which can be manipulated by a remote user, as demonstrated by Carmine Elvezio, a researcher in Feiner’s lab. (Photo by Timothy Lee Photographers)

However, creating seamless and effective AR for users working alone and together, indoors and outdoors, remains Feiner’s holy grail. Unlike the self-contained virtual world of VR, AR involves integrating a virtual world with the real world, which Feiner considers an even greater challenge in that it requires geometrically aligning virtual content with physical objects, both stationary and moving. As Feiner puts it, the virtual content needs to be designed and laid out in a manner “respectful” of the user’s live physical environment, so that it complements rather than interferes. For example, a less important virtual object shouldn’t obscure a key object, whether physical or virtual.

Feiner and his team created the first outdoor mobile AR system in 1996, years before smartphones and commercially available Wi-Fi. Their so-called "Touring Machine" superimposed information about campus buildings and departments on an optical see-through display; later, they worked with colleagues in the Journalism School to overlay 3D models, images, text, and audio to tell the story of the 1968 student strike. In its initial iterations, users wore an external-frame backpack containing a computer and an error-corrected GPS system with antennas to track the user's position, while an orientation sensor mounted on the headset relayed the orientation of the user's head to the computer interface.

The march of technology has made mobile AR vastly more convenient and affordable. Today, Feiner’s lab is employing mobile innovations to devise a collaborative AR authoring system for Columbia’s Making and Knowing Project, a research and pedagogical initiative directed by Pamela H. Smith, Seth Low Professor of History, in the Center for Science and Society at Columbia University. AR technology will complement a digital, critical edition of an important, anonymous 16th-century French artisanal and technical manuscript, enabling project members to document physical experiments inspired by the manuscript.

As display and tracking hardware and software continually improve, the whole arena of task assistance could yet be transformed through AR technology. In collaboration with Barbara Tversky, a cognitive scientist at Teachers College, Feiner’s lab is currently creating and evaluating distributed AR systems that make it possible for a remote expert (human or virtual) to advise a local technician performing spatial tasks. The expert can manipulate and annotate virtual copies of the physical objects with which the technician is working, demonstrating what to do in 3D and within the technician’s environment. By interactively visualizing complex tasks in place, a step at a time, one day soon these AR systems “may be able to help someone perform a task better than an expert standing next to her,” Feiner said.

By Marilyn Harris