What is 3DSANDBOX?
3DSANDBOX is an interactive, realtime 3D art experience by James Anderson, Clay Burton and Shaurjya Banerjee. 3DSANDBOX uses an experimental 3D capture technique to extract a realtime anaglyph video from a single camera that is constantly in motion.
EXTRACTING 3D from a SINGLE (MOVING) CAMERA
Our single-camera 3D capture system was inspired by Moving Still, a piece by Santiago Caicedo in which he used video recorded out of the window of a moving train, applying a time delay between the frames displayed to each eye to create a stereoscopic image.
Since the train moves at a relatively consistent speed, we can assume that some number of frames before the current frame, the camera imaged the scene from some distance back along the track. This allows a time displacement to stand in for the spatial displacement between two eyes, yielding a stereoscopic image. By increasing the frame delay between the two eyes, you can exaggerate the sense of depth, exactly as if increasing the inter-ocular separation.
Watch MOVING STILL by Santiago Caicedo
We adapted this idea of moving a camera at a fixed speed from the domain of linear motion to that of rotational motion, spinning the camera at a constant speed in place instead of moving it along a track. We used a simple adaptation of the APVCam, a camera motion control device that I originally designed for APVFeedBakE, a video feedback sculpture.
The image processing component of 3DSANDBOX was written using openFrameworks. The oF application stores and processes frames from a USB camera to create the "stereoscopic delay" effect: the output mixes the newest video frame with a frame delayed in time, each colorized red or cyan. When the motion control unit signals the completion of a full arc, the colorization of the two frames is swapped to match the new direction of motion.
Incoming video frames are stored in ofTexture objects arranged in a circular buffer. The most recently updated frame and the frame at some offset back in the buffer are passed as inputs to a GLSL fragment shader, where each is converted to a luma-valued image and assigned to either the red channel or to both the green and blue channels.