Creating 3D maps and worlds can be extremely labor-intensive and time-consuming, and the final result still might not survive the close scrutiny of those expecting faithful real-world reproductions. A new technique developed by scientists at The University of Manchester's School of Computer Science and Dolby Canada, however, might make capturing depth and textures for 3D surfaces as simple as shooting two pictures with a digital camera: one with flash and one without.
For a high-level description of the technique, here is the abstract from a presentation given about it during the "Perception & Hallucination" session earlier this month:
"A Perceptually Validated Model for Surface Depth Hallucination
Capturing depth to represent detailed surface geometry normally requires expensive, specialized equipment and/or collection of a large amount of data. By trading accuracy for ease of capture, the authors of this paper aim to recover surfaces that can be plausibly relit and viewed from any angle under any lighting. This multiscale shape-from-shading method takes diffuse-lit and flash-lit image pairs, and produces an albedo map and textured height field. Using two lighting conditions enables subtraction of one from the other to estimate albedo. Experimental validation shows that the method works for a broad range of textured surfaces, and users are frequently unable to identify the results as synthetic in a randomized presentation."
[Image credit: Maya Skies]
First, an image of a surface is captured without flash. Portions of the surface that are higher appear brighter, and portions that are deeper appear darker. The problem is that the different colors of a surface also reflect light differently, making it difficult to determine whether a brightness difference is a function of depth or of color. By taking a second photo with flash, however, the accurate colors of all visible portions of the surface can be captured. The two captured images essentially become a reflectance map (albedo) and a depth map (height field).
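The decomposition described above might look roughly like the following NumPy sketch. The subtraction-then-division scheme and the function name `decompose` are my own simplification of the idea, not code from the paper:

```python
import numpy as np

def decompose(diffuse, flash, eps=1e-6):
    """Split a no-flash / flash image pair into albedo and shading.

    Simplified illustration: subtracting the no-flash frame from the
    flash frame removes the ambient light, leaving the surface lit only
    by the (roughly uniform) flash, which stands in for albedo. Dividing
    the diffuse frame by that albedo leaves per-pixel shading: the
    brightness that comes from depth rather than color.
    """
    albedo = np.clip(flash - diffuse, 0.0, None)  # flash-only contribution
    shading = diffuse / (albedo + eps)            # color divided out
    return albedo, shading
```

With this split, a bright pixel in the shading image reads as a raised part of the surface regardless of whether its underlying color is light or dark.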
"Software then compares the brightness of every matching pair of pixels in the two images and calculates how much of a pixel's brightness is down to its position, and how much is due to its colour. That information is used to produce a realistic rendering of a surface's texture. By altering the direction of illumination on the virtual surface the system can generate realistic shadow effects."
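The relighting step quoted above can be sketched with a simple Lambertian model. The `relight` function and its normals-from-gradients scheme are my illustration, not the paper's actual renderer:

```python
import numpy as np

def relight(height, albedo, light_dir):
    """Relight a textured height field from a new light direction.

    Minimal Lambertian sketch: surface normals are derived from the
    height field's gradients, and each pixel's brightness is its albedo
    times the cosine of the angle between the normal and the light.
    """
    gy, gx = np.gradient(height)                  # surface slopes in y and x
    normals = np.dstack([-gx, -gy, np.ones_like(height)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    lambert = np.clip(normals @ light, 0.0, None)  # facets facing away go dark
    return albedo * lambert
```

Moving `light_dir` from overhead toward a grazing angle deepens the shadows cast by the recovered bumps, which is the shadow effect the quote describes.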
This technique is already being utilized to capture 3D textures of the surfaces of Mayan ruins. The rendered images are being incorporated into the "Maya Skies" project, which the Chabot Space & Science Center says is a "bi-lingual full-dome digital planetarium show featuring the scientific achievements, and the cosmology, of the Maya." The show is scheduled to begin screening in select planetariums in the summer of 2009.
The technique is still in development. For instance, researchers are still working out how to capture an image that incorporates more than one surface field, such as vines growing up a brick wall. Because the technique extracts a single height field, it is not possible to "represent the two separate distinct bits of geometry," according to researcher Mashhuda Glencross.
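To make concrete what extracting a height field means here, a crude multiscale sketch (my assumption, loosely inspired by the abstract's "multiscale shape-from-shading", not the paper's actual model) treats darker shading as deeper surface and sums blurred copies of the log-shading at several scales:

```python
import numpy as np

def _box_blur(img, r):
    """Separable box blur of radius r with edge padding (pure NumPy)."""
    pad = np.pad(img, r, mode="edge")
    kernel = np.ones(2 * r + 1) / (2 * r + 1)
    out = np.apply_along_axis(np.convolve, 0, pad, kernel, mode="valid")
    return np.apply_along_axis(np.convolve, 1, out, kernel, mode="valid")

def hallucinate_height(shading, radii=(1, 2, 4), weights=(1.0, 0.5, 0.25)):
    """Crude multiscale height-from-shading: darker shading reads as deeper.

    The radii and weights are illustrative guesses, not values from the
    paper; summing band-limited copies of the log-shading gives a
    plausible, though not metrically accurate, height field.
    """
    log_s = np.log(np.clip(shading, 1e-6, None))
    return sum(w * _box_blur(log_s, r) for r, w in zip(radii, weights))
```

Because everything collapses into this single height map, two overlapping layers of geometry, such as vines in front of a wall, cannot be represented separately, which is exactly the limitation Glencross describes.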
Preliminary tests show that people could not tell the difference between images captured using this technique and images captured using the more expensive and time-consuming approach of laser scanning. And while the technique is currently being used to capture 3D surfaces of real-world objects, it is possible that aspects of it can be incorporated into easier, quicker, and less expensive ways to generate 3D surfaces and textures for virtual worlds, such as games.