Matthias Zwicker, Wojciech Matusik, Fredo Durand, and Hanspeter Pfister

Automultiscopic 3D displays

source: s2015.siggraph.org

Automultiscopic 3D displays allow a large number of viewers to experience 3D content simultaneously without the hassle of special glasses or head gear. This display uses a dense array of 216 video projectors to generate images with high angular density over a wide field of view. As users move around the display, their eyes smoothly transition from one view to the next. The display is ideal for displaying life-size human subjects, as it allows for natural personal interactions with 3D cues such as eye-gaze and spatial hand gestures.

The installation presents “time-offset” interactions with recorded 3D human subjects. A large set of video statements was recorded for each subject, and users access these statements through natural conversation that mimics face-to-face interaction. Conversational reactions to user questions are retrieved through speech recognition and a statistical classifier that finds the best video response for a given question. Recordings of answers, listening, and idle behaviors are linked together to create a persistent visual image of the person throughout the interaction. This type of time-offset interaction can support a wide range of applications, from creating entertaining performances to recording historical figures for education.
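The response-retrieval step described above can be illustrated with a minimal sketch. This is not the team's actual classifier (which the text only describes as "a statistical classifier"); it is a hypothetical bag-of-words cosine-similarity ranker over illustrative clip names and transcripts, showing how a transcribed question could be matched to the best pre-recorded video response.

```python
# Hypothetical sketch of classifier-based response retrieval: rank
# pre-recorded clips by bag-of-words cosine similarity between the
# user's transcribed question and each clip's transcript.
import math
from collections import Counter

def vectorize(text):
    """Lowercased bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_response(question, responses):
    """Return the clip whose transcript best matches the question."""
    q = vectorize(question)
    return max(responses, key=lambda r: cosine(q, vectorize(r["transcript"])))

# Illustrative data only; clip names and transcripts are invented.
responses = [
    {"clip": "a017.mp4", "transcript": "I was born in a small town in Poland"},
    {"clip": "b042.mp4", "transcript": "we traveled by train for three days"},
]
print(best_response("where were you born", responses)["clip"])  # a017.mp4
```

A production system would use speech recognition output as the question and a trained classifier rather than raw term overlap, but the retrieval structure (score every recorded statement, play the top one) is the same.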
source: gizmodo

Viewing 3D content without glasses or goggles has proved to be one of the toughest things for interface designers to achieve—it never really looks right. At this year’s SIGGRAPH, a group of researchers presented a display that creates a 3D human in stunning detail using a cluster of 216 projectors.

A team from USC’s Institute for Creative Technologies has built an automultiscopic 3D display that essentially constructs a 3D model of the person out of video. After capturing video of a person using 30 cameras in intensely bright light, the images are divided among the 216 projectors. The projectors are arranged in a semicircle around a large screen, so as viewers walk around the screen their eyes smoothly transition from one projection to the next. The result feels like crystal-clear depth and detail.
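The division of 30 camera views among 216 projectors can be sketched in a simple form. The even angular spacing, the 135-degree field of view, and the nearest-two-cameras blending below are assumptions for illustration, not the team's actual calibration; only the counts (30 cameras, 216 projectors) come from the article.

```python
# Illustrative sketch: spread 216 projectors evenly across an assumed
# field of view and, for each projector, pick the two nearest of the
# 30 captured camera views plus a blend weight between them.
NUM_PROJECTORS = 216
NUM_CAMERAS = 30
FIELD_OF_VIEW_DEG = 135.0  # hypothetical field of view

def projector_angle(p):
    """Angle of projector p, evenly spaced across the field of view."""
    return p * FIELD_OF_VIEW_DEG / (NUM_PROJECTORS - 1)

def source_views(p):
    """(left camera, right camera, blend weight) for projector p."""
    # Fractional camera index at this projector's angle.
    t = projector_angle(p) / FIELD_OF_VIEW_DEG * (NUM_CAMERAS - 1)
    left = min(int(t), NUM_CAMERAS - 2)
    return left, left + 1, t - left

print(source_views(0))                    # (0, 1, 0.0)
print(source_views(NUM_PROJECTORS - 1))   # (28, 29, 1.0)
```

Because far more projectors than cameras cover the same arc, most projectors show an interpolated view between two adjacent cameras, which is what makes the transition between views appear smooth as a viewer walks past the screen.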

Since it’s so realistic, the tech is being used to create full-scale “digital humans” which could be used in a museum or educational context. Speech recognition helps cue up answers to questions so it feels interactive even if it’s not. And because the humans are so realistic, you feel like the person is actually making eye contact with you and listening closely as you’re talking to them.

When I saw this in action at SIGGRAPH, it was playing the engrossing memories of a Holocaust survivor. Unlike so many other attempts at holograms or innovative VR experiences that let you “talk” to people, this one felt the most real. And true to the description, as you walked from one side of the screen to the other, you were able to see new details in his face and clothing.