As an important part of The Walt Disney Company, Disney Research has a mission to deliver value to the company through scientific and technological innovation, creating 'the science behind the magic.' Far more than singing, dancing Mousketeers, its talented researchers conduct basic and application-driven research in areas including animation, robotics, computer graphics, and human-computer interaction. Disney Research has also used 3D printing technology many times, most recently to design bendable machines and bring animated characters to life; the researchers even developed their own 3D copier last summer.
Now Disney Research is using advanced technology to let groups of people enjoy augmented reality (AR) together. Unlike the shared-reality HoloViveObserver system, you don't have to hold a device or wear a bulky, head-mounted display. The Magic Bench, a custom software and hardware platform, is a combined augmented and mixed reality (MR) experience that allows users not only to see and hear animated, virtual characters, but to feel them as well.
“One of the compelling ideas behind mixed reality is this ability to share the same space as an animated character,” said Moshe ‘Mo’ Mahler, principal digital artist at Disney Research. “And this is where an exciting new form of storytelling begins.”
Want to hang out with your friends and get stuck in the rain with a giraffe, or have an elephant hand you a glowing, gold orb? The Magic Bench makes it possible! Instead of instrumenting an individual user, the platform instruments the environment itself, which lets multiple people enjoy the 'walk up and play' experience together.
“This platform creates a multi-sensory immersive experience in which a group can interact directly with an animated character. Our mantra for this project was: hear a character coming, see them enter the space, and feel them sit next to you,” said Mahler.
Anyone who sits on the Magic Bench sees a mirrored image of themselves on a large display in front of them, creating a third-person point of view. A depth sensor then reconstructs the environment so that, instead of one video feed being superimposed onto another, the participants can occupy the same 3D space as a computer-generated object or character.
“The bench itself plays a critical role. Not only does it contain haptic actuators, but it constrains several issues for us in an elegant way,” Mahler explained. “We know the location and the number of participants, and can infer their gaze. It creates a stage with a foreground and a background, with the seated participants in the middle ground. It even serves as a controller; the mixed reality experience doesn’t begin until someone sits down, and different formations of people seated create different types of experiences.”
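The bench-as-controller idea described above can be illustrated with a minimal sketch. The seat indices, vignette names, and formation rules here are hypothetical stand-ins, not Disney's actual content logic:

```python
def choose_vignette(occupied_seats):
    """The bench acts as the controller: the experience starts only
    when someone sits down, and the seating formation selects the scene.

    `occupied_seats` is a set of seat indices reported by the bench.
    The vignette names below are illustrative placeholders.
    """
    if not occupied_seats:
        return None  # nobody seated: experience stays idle
    if len(occupied_seats) == 1:
        return "solo_greeting"
    # Contiguous group of seats vs. a gap between participants.
    lo, hi = min(occupied_seats), max(occupied_seats)
    if occupied_seats == set(range(lo, hi + 1)):
        return "group_scene"
    return "split_scene"
```

This framing mirrors the quote: an empty bench means no experience, while different formations map to different experiences.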
While the experience does require the physical bench, the color camera and depth sensor of a Microsoft Kinect are used to create an HD-video-textured, real-time 3D reconstruction of the bench, the participants, and the environment. An algorithm reconstructs the scene by aligning the depth sensor information with the RGB camera information.
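Aligning a depth sensor with an RGB camera is a standard camera-registration step: each depth pixel is back-projected to a 3D point, transformed into the color camera's frame, and re-projected. A minimal sketch, assuming a simple pinhole model; the intrinsics and depth-to-color transform below are made-up placeholder values, not the Kinect's real calibration:

```python
import numpy as np

# Placeholder calibration values (NOT real Kinect parameters).
DEPTH_K = np.array([[365.0,   0.0, 256.0],
                    [  0.0, 365.0, 212.0],
                    [  0.0,   0.0,   1.0]])  # depth camera intrinsics
COLOR_K = np.array([[1081.0,    0.0, 960.0],
                    [   0.0, 1081.0, 540.0],
                    [   0.0,    0.0,   1.0]])  # color camera intrinsics
# Rigid depth-to-color transform: identity rotation, small baseline (meters).
R = np.eye(3)
t = np.array([0.052, 0.0, 0.0])

def depth_pixel_to_color_pixel(u, v, depth_m):
    """Map a depth-image pixel (u, v) at range depth_m (meters)
    to its corresponding pixel in the color image."""
    # Back-project: pixel -> 3D point in the depth camera's frame.
    p_depth = depth_m * (np.linalg.inv(DEPTH_K) @ np.array([u, v, 1.0]))
    # Transform into the color camera's frame.
    p_color = R @ p_depth + t
    # Re-project with the color camera's intrinsics.
    uvw = COLOR_K @ p_color
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

Running this mapping over every valid depth pixel is what lets the video texture from the RGB camera be draped over the reconstructed 3D geometry.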
To make the result look as natural as possible, a modified algorithm creates a 2D backdrop, which eliminates depth shadows in areas where the depth sensor has no corresponding line of sight with the camera. Both the 2D and 3D reconstructions are then positioned in the virtual space and populated with 3D effects and characters, creating a composite capable of interacting with virtual light and shadows.
A large team of scientists from Disney Research Pittsburgh wrote a paper on their work, titled “Magic Bench – A Multi-User & Multi-Sensory AR/MR Platform.” Co-authors include Mahler, Ali Israr, James Krahe, Shawn Lawson, Jake Marsico, John Mars, Jim McCann, Kyna McIntosh, and Alexander Rivera.
The abstract reads, “Many MR interactions are generated around a first-person Point of View (POV). In these cases, the user is directed to the environment, which is digitally displayed either through a head-mounted display or a handheld computing device. One drawback of such conventional AR/MR platforms is that the experience is user-specific. Moreover, these platforms require the user to wear and/or hold an expensive device, which can be cumbersome and alter interaction techniques. We create a solution for multi-user interactions in AR/MR, where a group can share the same augmented environment with any computer generated (CG) asset and interact in a shared story sequence through a third-person POV. Our approach is to instrument the environment leaving the user unburdened of any equipment, creating a seamless walk-up-and-play experience. We demonstrate this technology in a series of vignettes featuring humanoid animals. Participants can not only see and hear these characters, they can also feel them on the bench through haptic feedback. Many of the characters also interact with users directly, either through speech or touch.”