During a disaster, first responders benefit from one thing above all else: accurate information about the environment they are about to enter. Foreknowledge of a building's layout, or of the locations of impassable obstacles, fires or chemical spills, can be the difference between life and death for anyone trapped inside. Currently, first responders must rely on their own experience and observations, or perhaps on a drone sent in ahead of them that sends back an unreliable 2D video feed. Neither option is ideal, and sadly many victims of a disaster perish before they are discovered or the area is deemed safe enough to enter.
But a team at the Defense Advanced Research Projects Agency (DARPA) has developed technology that offers first responders a way to explore a disaster area without putting themselves at risk. Virtual Eye is a software system that captures video feeds and converts them into a real-time 3D virtual reality experience. It is made possible by combining cutting-edge 3D imaging software, powerful mobile graphics processing units (GPUs) and the video feeds from two cameras, any two cameras. This gives first responders — soldiers, firefighters or anyone else — the ability to virtually walk through a real environment, such as a room, bunker or other enclosed area, without needing to physically enter it.
“The question for us is, can we do more with the information we have? Can we extract more information from the cameras we’re using today? Understanding what we see is critical to making the right decisions in the battlefield. We can create a 3D image by looking at the differences between two images, understanding the differences and fusing them together,” explained Trung Tran, the program manager leading Virtual Eye’s development at DARPA’s Microsystems Technology Office.
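The "differences between two images" Tran describes is what stereo vision calls disparity: a point close to the cameras shifts farther between the two views than a distant one. DARPA has not published Virtual Eye's internals, but under a textbook rectified pinhole stereo model (an assumption here, not a detail from the source) depth follows directly from that shift. A minimal sketch; the function name and parameter values are illustrative:

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Triangulate depth from stereo disparity (standard pinhole stereo model).

    disparity_px: horizontal pixel shift of a feature between the two views
    focal_px:     camera focal length expressed in pixels
    baseline_m:   distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("feature must shift between views to triangulate")
    # Z = f * B / d: larger shift means the point is closer to the cameras
    return focal_px * baseline_m / disparity_px

# A feature that shifts 50 px between cameras 0.5 m apart (f = 1000 px)
# triangulates to 10 m; halving the shift doubles the estimated depth.
```

This inverse relationship is why two widely spaced viewpoints, as described below, recover depth that a single 2D feed cannot.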
Users of Virtual Eye would be able to note the layout, visualize any hazards, identify optimal entry paths or potentially locate survivors, completely risk-free. Two drones or robots, each outfitted with a camera, would be inserted into the questionable environment, and the cameras would be strategically positioned at different points in the room with opposing viewpoints. The Virtual Eye software would then fuse both video feeds into a 3D view, extrapolating any missing data with its 3D imaging software so that the real-time virtual reality feed is complete.
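The source does not say how Virtual Eye matches the two feeds, but the classic baseline for this fusion step is block matching: for each pixel in one image, slide a small patch along the corresponding row of the other image and take the offset with the best match as that pixel's disparity. A naive NumPy sketch under that assumption (real systems use far faster GPU variants):

```python
import numpy as np

def block_match_disparity(left, right, patch=3, max_disp=16):
    """Naive stereo block matching on rectified grayscale images.

    For each pixel in `left`, compare a (2*patch+1)-square window against
    windows shifted left by 0..max_disp pixels in `right`, scoring each by
    sum of absolute differences (SAD). The best-scoring shift is the
    disparity. Returns an integer disparity map (0 where not computed).
    """
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(patch, h - patch):
        for x in range(patch + max_disp, w - patch):
            ref = left[y - patch:y + patch + 1, x - patch:x + patch + 1]
            best_d, best_cost = 0, np.inf
            for d in range(max_disp + 1):
                cand = right[y - patch:y + patch + 1,
                             x - d - patch:x - d + patch + 1]
                cost = np.abs(ref.astype(np.int64) - cand.astype(np.int64)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Combined with the depth-from-disparity relationship, a dense disparity map like this is enough to place every matched pixel in 3D, which is the raw material for the virtual walkthrough.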
Here is some video of the Virtual Eye system in action:
The Virtual Eye system runs on NVIDIA mobile Quadro and GeForce GTX GPUs, which are small enough to be portable yet powerful enough to generate the virtual reality view. The NVIDIA GPUs were chosen specifically because they have the muscle to accurately stitch the two video feeds together and extrapolate the 3D data in real time while still fitting inside a laptop. Currently the Virtual Eye system can only combine data from two cameras, but Tran expects that to change soon: the DARPA team hopes to have a new demo version capable of combining up to five different camera feeds by next year.
While the system was created specifically for military, emergency and battlefield applications, as with most technology developed by DARPA it has plenty of potential civilian applications as well. The technology could be used to broadcast sporting events or live performances in streaming 3D virtual reality with only a handful of cameras. It would also allow users to visit locations anywhere in the world, from museums to Mount Everest, without leaving their homes. Discuss this software over in the DARPA 3D Imaging Visual Eye forum at 3DPB.com. [Source: DARPA]