Changxi Zheng | Are We Hearing This For Real?
Assistant Professor of Computer Science
Computational advances in graphics and animation have moved cartoons far beyond movies that entertain kids. Computer graphics research has had a stunning impact on the synthesis of realistic visual effects like photorealistic images and motion, as in Pixar films like Toy Story and Brave. But, says Computer Science Assistant Professor Changxi Zheng, many major challenges remain: “simulating large-scale complex phenomena is still extremely time-consuming and impractical, and, for many situations, it’s unclear how to produce synchronized multi-sensory data including both motions and sounds.”
Zheng’s research focuses on creating immersive virtual realities via a computer, which, he says, “requires generating a full spectrum of human senses, from visual effects to audible sounds.” He is addressing the challenges of creating these senses both computationally and practically, observing that “successful computational methods will significantly enhance computer-generated virtual realities and lead to useful applications in entertainment, medicine, training, design, and more.”
One area Zheng is working on is synthesizing realistic virtual sounds that are automatically synchronized with simulated motions. Consider pouring a cup of water into a container: while fluid dynamics lets you predict the motion of the water, can you also generate the resulting bubbling sounds? “This question is interesting,” he says, “because sound, an important human sense like vision, can dramatically enhance the realism of virtual environments.”
For years, people have recorded sounds and then played them back when certain events occur. But "canned" sound cannot be synchronized with simulated motions, says Zheng, and you end up with annoyingly repetitive sound effects, like the same clip you hear over and over while playing a video game.
Zheng studies the physical principles of sound phenomena, builds mathematical models, makes necessary approximations, and develops practical computational algorithms to synthesize the sounds. Sounds have been difficult to simulate the way graphics are, he says, especially because it is impractical to wait hundreds of hours to simulate just a few seconds of motion. Fluid sound synthesis, for example, is hard to achieve because you need to track a large number of air bubbles, resolve the very fast motions and vibrations that produce their sounds, and then work out how the sound propagates through the fluid’s changing geometry and into the air.
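The physical starting point for this kind of bubble sound is well known: a small air bubble in water rings like a tiny bell at its resonant frequency, given by the classical Minnaert formula, with larger bubbles producing lower pitches. As a minimal illustration (this is a textbook toy model, not Zheng's actual method, and the damping constant below is an arbitrary assumption chosen just to shape the "plink"):

```python
import math

def minnaert_frequency(radius_m, gamma=1.4, p0=101325.0, rho=998.0):
    """Resonant frequency (Hz) of a spherical air bubble in water,
    per Minnaert's formula: f0 = (1 / 2*pi*r) * sqrt(3*gamma*p0 / rho).
    gamma: heat capacity ratio of air, p0: ambient pressure (Pa),
    rho: water density (kg/m^3)."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

def bubble_impulse(radius_m, duration_s=0.05, sample_rate=44100, damping=40.0):
    """An exponentially decaying sinusoid at the bubble's resonant
    frequency -- a crude stand-in for the 'plink' a bubble makes.
    The damping value here is an illustrative assumption."""
    f0 = minnaert_frequency(radius_m)
    n = int(duration_s * sample_rate)
    return [math.exp(-damping * i / sample_rate)
            * math.sin(2.0 * math.pi * f0 * i / sample_rate)
            for i in range(n)]

# A 1 mm bubble rings at roughly 3 kHz; a 5 mm bubble well under 1 kHz.
print(round(minnaert_frequency(0.001)))
print(round(minnaert_frequency(0.005)))
```

Real fluid sound synthesis must go far beyond this single-oscillator sketch: it has to track thousands of interacting bubbles and, as described next, model how the surrounding fluid shapes the radiated sound.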
As Zheng explains it, sound propagation through fluid interfaces is largely shaped by the fluid's geometry. “Imagine,” he says, “the fluid geometry as a loudspeaker that makes the sound louder along certain directions and weaker along others. As the fluid flows, its geometry changes, so the loudspeaker effect, which we call sound scattering, is time-varying and coupled with the complex fluid shape. All of this makes the synthesis of fluid sound quite difficult.”
While Zheng’s main research focus is on sound, he is also working on building fast, cost-effective algorithms to create realistic virtual environments—both visible and audible—for all kinds of applications, from computer graphics to robotics.
“Physics and mathematics boil down complex physical phenomena into elegant equations,” he says. “I'd like to use them as seeds, planting them into a computer, fertilizing them with computational algorithms, and growing them into realistic virtual worlds. For me, this is always fascinating.”
BS, Shanghai Jiaotong University (China), 2005; PhD, Cornell University, 2012