Keith Fife and his colleagues at Stanford University have come up with what they call a "multi-aperture image sensor," which lets digital pictures capture and store depth information. Unlike other 3D camera setups, the data is gathered at the chip level, so no expensive and complicated multiple-lens rigs are required.
Instead of devoting the entire sensor to one big representation of the image, Fife's 3-megapixel prototype breaks the scene up into many small, slightly overlapping 16x16-pixel patches called subarrays. Each subarray has its own tiny lens to view the world--hence the term multi-aperture.
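To make the tiling concrete, here is a minimal sketch of how a sensor might be carved into slightly overlapping 16x16 subarrays. The sensor dimensions and the 2-pixel overlap are illustrative assumptions, not figures from the prototype.

```python
# Hypothetical dimensions for a ~3-megapixel sensor; the prototype's
# exact resolution and overlap are assumptions for illustration.
SENSOR_W, SENSOR_H = 2048, 1536
PATCH = 16    # each subarray is 16x16 pixels
STRIDE = 14   # assumed stride, giving a 2-pixel overlap between neighbors

def subarray_origins(width, height, patch=PATCH, stride=STRIDE):
    """Top-left corners of the slightly overlapping subarrays."""
    xs = range(0, width - patch + 1, stride)
    ys = range(0, height - patch + 1, stride)
    return [(x, y) for y in ys for x in xs]

origins = subarray_origins(SENSOR_W, SENSOR_H)
```

Because the stride (14) is smaller than the patch size (16), each subarray shares a thin border of pixels with its neighbors, which is what makes the later cross-patch comparison possible.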
After a photo is taken, image-processing software analyzes the slight location differences of the same element as it appears in different patches--for example, where a spot on a subject's shirt sits relative to the wallpaper behind it. These shifts from one subarray to the next can be used to deduce the distance of the shirt and of the wall.
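The underlying math is classic triangulation: the bigger the shift (disparity) of a feature between two apertures, the closer it is. A minimal sketch, with purely illustrative numbers for the focal length and the spacing between neighboring lenses:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulation: z = f * b / d.
    focal_px: lens focal length expressed in pixels (assumed value);
    baseline_m: spacing between neighboring subarray lenses in meters
    (assumed value); disparity_px: shift of the same feature between
    two subarrays, in pixels."""
    if disparity_px <= 0:
        return float("inf")  # no shift -> effectively at infinity
    return focal_px * baseline_m / disparity_px

# The spot on the shirt shifts more between patches than the wallpaper does,
# so it must be closer to the camera.
shirt_z = depth_from_disparity(focal_px=500, baseline_m=0.001, disparity_px=4.0)
wall_z = depth_from_disparity(focal_px=500, baseline_m=0.001, disparity_px=1.0)
```

Larger disparity means smaller depth, which is exactly why comparing the shirt's shift against the wallpaper's shift separates foreground from background.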
Because the image information is redundant, visual "noise" can also be reduced by comparing the overlapping views with one another. The file format is yet to be determined, but for now the result can be stored as a JPEG with an additional metadata attachment. Very cool.
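That noise-reduction idea can be demonstrated with a toy simulation: several subarrays see (nearly) the same scene region, each with its own sensor noise, and averaging the redundant views cuts the noise roughly by a factor of the square root of the number of views. This is a generic illustration of the principle, not the chip's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 16x16 "true" scene patch, plus nine noisy observations of it,
# standing in for overlapping subarray views (noise level is made up).
true_patch = rng.uniform(0, 255, size=(16, 16))
views = [true_patch + rng.normal(0, 10, size=(16, 16)) for _ in range(9)]

# Averaging the redundant views suppresses the independent noise.
denoised = np.mean(views, axis=0)

noisy_err = np.abs(views[0] - true_patch).mean()
clean_err = np.abs(denoised - true_patch).mean()
```

With nine independent views, the averaged error should come out around a third of the single-view error.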