Stereo Photography for the Digital Age

05 Mar 2015 22:19


[I]f your inclination is to champion lost causes, the case of stereo photography is ready-made for you. This lost pup, with us since the very beginnings of photography, continues to occupy a third-rate position in the photographic scheme of things. However, its intrinsic beauty, coupled with the fact that no other process can approach it for sheer realism, makes stereo a perennial favorite that will never quite fully leave the photographic scene; but it will never attain the pinnacle of acceptance that other photo processes have, and that's the pity of it all—it really is one beautiful way to present a photograph.

Excerpt from “3-D Updated”, Paul Farber, U.S. Camera & Travel…January 1966. Time for another update.

How Depth Perception Works

“The camera is an instrument that teaches people how to see without a camera.”—Dorothea Lange

Before worrying about equipment, back to basics: how can “depth” be added to a picture? Many of these techniques have nothing to do with a special camera, and work great with “ordinary flat” photos, too.

[Image: Fujifilm “Go 3D!” quick-start guide to the W3 camera]
Also bear in mind: how can depth add to a photo? An extra dimension of realism opens up lots of artistic opportunities and can turn “looks like being there” into “feels like being there”.1 But abstraction can have its own value. Photography is an art of exclusion; sometimes even black-and-white frees the mind to “focus” on a critical element.2

Binocular (two-eye) cues

Stereopsis: Near objects shift farther than far ones between the eyes' horizontally displaced perspectives.

  • Key to most 3D photography—practically any involving multiple cameras or lenses.
  • Striking and precise.
  • Reduce the separation (“stereo base”) for close-up “macros” to keep the perspectives' “disparity” within the brain's ability to “fuse” them into a single picture, and to avoid exaggerating depth past natural “roundness”.
  • Increase the stereo base well past eye-width (“hyperstereo”) for distant scenes without any close elements, for a more visceral appreciation of their form.
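A common rule of thumb among stereographers, the “1/30 rule” (a general convention, not something stated in this article), sets the stereo base to roughly 1/30 of the distance to the nearest object in the scene:

```python
def stereo_base_m(nearest_distance_m, ratio=1 / 30):
    """Rule-of-thumb stereo base: about 1/30 of the nearest-object distance.
    A smaller ratio tames macro disparity; a larger one gives hyperstereo."""
    return nearest_distance_m * ratio

# Nearest object ~2 m away: base close to typical eye spacing (~65 mm)
print(f"{stereo_base_m(2.0) * 1000:.0f} mm")   # 67 mm
# Macro at 0.3 m: a much narrower base
print(f"{stereo_base_m(0.3) * 1000:.0f} mm")   # 10 mm
# Hyperstereo of a ridge 3 km away: a base of about 100 m
print(f"{stereo_base_m(3000):.0f} m")          # 100 m
```

The 1/30 figure assumes roughly normal lenses and ordinary viewing; longer lenses or big displays call for a smaller ratio.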

Convergence: Eyes turn inward to fixate on very near objects, which sit toward the inner edges of their respective fields of view.

  • Mismatches their views of distant objects, “focusing” attention on the subject.
  • A little goes a long way: close-ups with too wide a stereo base are hard to appreciate as a whole, and can hurt.

Monocular (one-eye) cues

Motion parallax: Near objects shift farther than far ones as a single viewpoint moves.

  • Aerial 3D photography
  • “Wiggle” animations that flip or, better, smoothly transition between perspectives. Can work well with eye problems—including “eye” being singular.

Motion parallax depends on multiple views over time rather than views from multiple positions, so it has little application to still photography. Panning blur relatedly sets off moving subjects in a single shot, but differentiating depth as such would require lateral “trucking”, or at least sweeping the camera in a wide arc, and also tends to leave distant objects fairly sharp and thus distracting unless they are also defocused. Motion parallax is, however, an important technique in video.

Defocus: Objects appear progressively blurrier the nearer or further they are in relation to the plane of focus.

  • For shallow depth of field, zoom in, get close, and use a camera with a larger image sensor—in that order; this can work with even a simple camera, so long as it's not fixed-focus.
  • Defocus primarily concerns the amount of blur, not its character. “Bokeh”, from a Japanese word for fogginess or confusion (which can mean “idiot” when overstressed), is essentially irrelevant; pathological cases where defocus becomes its own complex visual pseudo-detail include mirror lenses' donuts and the large pentagons of five-bladed apertures slightly stopped down.
  • However, Canon's aperture may be electronically adjustable during an exposure to emulate an apodization filter. (Use with flash could get complicated.)
  • Blurred foreground items can distract as they catch attention but frustrate examination. Keep them small and at the margins.
  • Easy to simulate from a depth map, which can be generated from a stereo pair or sequence—each step being much more flexible than perspective-shift lenses. How about a deep or non-flat sharp area, with perfectly soft Gaussian-blur “bokeh”?
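A minimal sketch of that simulation, assuming NumPy is available (a toy grayscale model with a box blur standing in for a proper lens kernel, not a production depth-of-field renderer): split the frame into depth layers and blur each in proportion to its distance from the chosen focus depth.

```python
import numpy as np

def box_blur(image, radius):
    """Simple box blur (stands in for a Gaussian in this sketch)."""
    if radius == 0:
        return image.astype(float)
    k = 2 * radius + 1
    padded = np.pad(image.astype(float), radius, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def simulate_defocus(image, depth, focus, max_radius=6, layers=4):
    """Crude depth-of-field from a depth map: split the frame into depth
    layers and blur each in proportion to its distance from the focus depth."""
    spread = np.abs(depth - focus)
    spread = spread / max(spread.max(), 1e-9)          # normalize to 0..1
    out = np.zeros(image.shape, dtype=float)
    edges = np.linspace(0.0, 1.0, layers + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        radius = round(max_radius * (lo + hi) / 2)     # blur for this layer
        blurred = box_blur(image, radius)
        mask = (spread >= lo) & (spread <= hi)
        out[mask] = blurred[mask]
    return out

# Demo: a noise image with depth increasing left to right, focused at the left.
img = np.random.RandomState(0).rand(32, 32)
depth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
out = simulate_defocus(img, depth, focus=0.0)
# The "far" right side comes out much smoother than the in-focus left side.
```

Real renderers also handle occlusion edges and colored, shaped blur kernels; this only illustrates the depth-map idea.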

Shading: Shadows from and on objects can indicate their and others' three-dimensional shapes.

  • Slightly diffused light often works best for curves' subtleties. Harsh light turns forms into lines.
  • Fill flash can soften otherwise harsh shadows on the subject.

Lighting's tone and color can complement shading. Consider the hazy glow of sunrise, and the setting sun's oblique yellow spotlight against the opposite sky's deep blue curtain. Understanding lighting is one of the most important parts of photography, and off-camera lights some of the most important accessories.

Size: Familiar-sized objects that look big must, intuitively, be close.

  • Wide-angle lenses let you get very, very close, exaggerating the disproportionately shrunken background's distance. (They're very popular for making real estate look enormous in ads.)
  • But don't get too close in stereo, to avoid excessive depth effect (with actual close-up objects, one's eyes readjust as they look around; a stereo view is typically a fixed pair of projections at any given moment).
  • Balance subjects' size, color and contrast across a scene.

Perspective: Converging lines and tapering-off of familiar shapes indicate distance.

Color: “Cool” colors toward blue appear to recede, while “warm” colors toward red “approach”.

  • “Chromostereopsis” has long been recognized in art.
  • It's thought to result from blue, like distant objects, focusing more closely than red.
  • Also, the blue sky is distant, while objects bearing or illuminated in other colors are typically nearer and comfortingly warmer than the open air.
  • Some theorize red cars warn others better of their approach. “Relationship Between Car Color and Car Accident on the Basis of Chromatic Aberration”, S. Shin et al., Department of Computer Information Engineering, Kunsan National University, Korea, 2013.3

Accommodation: Eye muscles reshape the lens to focus at different distances, working hardest up close. 2D and most 3D pictures are displayed on simple flat surfaces, so accommodation cannot contribute to perceiving depth in them (aside from the color illusion). A hologram, volumetric, or light-field display might use it.

Taking 3D Pictures

There are several ways for a camera to record depth in a scene—some similar to our eyes', some very different. A key difference in the camera's typical job is the need to capture the entire scene at once so that we can look around as we please later. Any of these techniques will have its limitations and pathological cases, so they can be combined for technical applications.

  • Strange patterns, hard flash, and laser light aren't the most flattering. Methods that bring their own light can be kept from affecting the pictures meant for viewing by firing immediately before or after them (or, in video, interleaved between frames, either as quick pulses or trailing a rolling shutter's scans), or by using light that can be filtered out, such as infrared, ultraviolet, or light polarized a certain way.


Stereo pairs

The simple and pretty technique of taking separate pictures from viewpoints a few inches apart, one for each eye to look at, is pretty much the industry standard. Because it leaves depth to be inferred from comparison of the two views, it does not collect complete data for uniform surfaces on its own. For technical purposes it can be assisted, for instance by projecting a pattern onto the scene for the image pair, or by projecting a pattern from an offset position and capturing a single image.
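For the inference itself, the pinhole-camera version of stereopsis is one line of arithmetic (an illustration with made-up numbers, not a calibration guide):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: depth = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- stereo base (distance between the viewpoints), meters
    disparity_px -- horizontal shift of a feature between the two images
    """
    if disparity_px <= 0:
        raise ValueError("feature at infinity or mismatched correspondence")
    return focal_px * baseline_m / disparity_px

# e.g. 1000 px focal length, 65 mm base, 13 px disparity
print(f"{depth_from_disparity(1000, 0.065, 13):.2f} m")  # 5.00 m
```

Uniform surfaces yield no matchable features, hence no disparity to measure, which is exactly the gap the projected patterns fill.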

Coded aperture

The different viewpoints for stereoscopic imaging need not be in separate cameras, and the depth computation need not be done in one's head. A broad, strangely-shaped aperture can distort “bokeh” distinctively by subject depth and defocus direction, enabling a computer to pick out a sharp image and its gradations of depth from the blur. The Vivitar “Q-DOS” lens takes the simpler approach of colored sub-apertures to produce stereo pairs. The effect is modest and depends on a little defocus; the color could mostly be computed back from the complementary-colored separations (with perhaps some filling-in at marginal occlusions).

Light field photography

A specialized sensor can capture the color and intensity of incoming rays focusing from particular angles onto the image plane. Measuring the light's angle in fine increments competes with capturing detail in the overall image.
Consumer light field cameras exist, but they provide relatively low resolution and sensitivity, due in part to capturing many more perspectives than necessary to determine objects' depths by stereopsis and fill in most occlusions of a two-image view. The data files may be of interest for tinkering.

Time-of-flight imaging

A specialized camera can measure depth directly by the time its specialized flash's beam takes to arrive back at each part of the sensor.
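The round-trip arithmetic is simple (a sketch; real time-of-flight sensors typically measure phase shifts of a modulated beam rather than raw nanoseconds):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_seconds):
    """Depth from a time-of-flight measurement: light covers the distance twice."""
    return C * round_trip_seconds / 2.0

# A 20 ns round trip corresponds to about 3 m
print(f"{tof_depth(20e-9):.2f} m")  # 3.00 m
```

The tiny times involved are why these cameras need specialized, tightly synchronized flash and sensor hardware.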


Holography

With a beam of coherent light, one needn't use something like a lens to pick out a particular picture, or two, or three: the light's entire interaction with the object can be recorded as an interference pattern. These patterns can be digitized, for practical color video.4

  • Models are already accustomed to photographers' assistants carrying big reflectors, and flash-transmitters' red focus-assist lights: swap the crinkly foil for a half-silvered mirror, the light for a pulsed LED laser, and…don't forget…the designer-brand sunglasses for some laser-proof goggles?

Viewing 3D Pictures

How can each eye get its own separate view now that these have been recorded, considering they face pretty much the same direction and can't focus super-close? Carving, sculpting, or building up an actual 3D model for each to actually take in from its own perspective isn't generally practical!


Freeviewing

Just look at them! Training each eye on its own member of a side-by-side pair of photos takes some getting used to—in particular, it requires separating convergence from focus, as the pair will be moderately close but must be looked at “straight through” for each eye to see its own side, or nose-gazingly cross-eyed for each eye to see the opposite side. (Contrary to a famous movie critic, the various kinds of 3D work fine after some practice. Like an old flickery black-and-white movie, a TV, or an Imax screen, they're incomplete representations of the real world that need only detract not-too-much from especially interesting happenings worth preserving and sharing.)

  • Does not require special equipment or lose color, resolution, or framerate.
  • Generally works best with a viewing position almost directly in front of a pair of pictures so that each will appear at a similar size and shape in one's field of view.
  • Sometimes prescribed as therapeutic.


Cross-eyed viewing

Allows a medium-wide image angle: eyes can naturally cross quite far to examine close-up objects, but cannot “wall” (turn outward) at all. Still, each picture can occupy at most half of the normal field of view in front of one's face, and extreme perspective toward a too-far-out edge can make the views hard for the brain to “fuse”.

  • Easier than parallel for some individuals.
  • They won't stick that way!


Parallel viewing

Easier than cross-eyed for some individuals.
Limited image angle: because eyes don't naturally turn outward, the pictures' centers have to sit side-by-side pretty much directly in front of each eye. At not-overly-close viewing distances, this doesn't allow much image width.


Stereoscopes

Gadgets to facilitate viewing of flat stereo pairs. They typically incorporate lenses to adapt the eyes' distance accommodation to an actually-close view, enabling a wider angle of view by letting a small parallel-view pair be held very close, and making parallel viewing feel more natural.
A common now-antique variety was invented by Oliver Wendell Holmes, Sr., father of the famous judge. It has a cushioned forehead rest, a set of magnifying lenses, and a rail to hold a standard “stereo card” with the pictures set as far apart as the average pair of eyes at the correct distance for the lenses.

  • A strong pair of reading glasses could be a readily-available, cheaper, and smaller substitute, but requires holding the card and/or positioning one's head at a comfortable distance.
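The reading-glasses substitution follows from the thin-lens relation: a +D diopter lens lets a relaxed, infinity-focused eye view a card held at about 1/D meters (a back-of-envelope sketch that ignores the eye-to-lens distance):

```python
def card_distance_m(diopters):
    """Comfortable card distance for a relaxed eye behind a +D lens:
    the lens's focal length, 1/D meters (thin-lens approximation)."""
    return 1.0 / diopters

# +3.5 drugstore readers: hold the stereo card about 29 cm away
print(f"{card_distance_m(3.5) * 100:.0f} cm")  # 29 cm
# +2.0 readers: about 50 cm
print(f"{card_distance_m(2.0) * 100:.0f} cm")  # 50 cm
```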

Some modern stereoscopes like the Berezin Pocket 3Dvu, which clips to the head, incorporate mirrors like a pair of sideways periscopes to substitute for eye-crossing or “walling” across a variety of sizes and distances. A device with supplemental lenses (or possibly a simple pair of reading glasses) would be best for the smaller cards, because it lets them be held closer to look bigger; the mirror variety is more versatile. Ones with both lenses and mirrors, or variable-diopter lenses, do not appear to be commercially available. That's OK – there are plenty of fancier, more expensive options.
The “Viewmaster” system is a very impressive, inexpensive stereoscope, typically sold through toy stores, that uses backlit cardboard discs holding miniature stereoscopic pairs of slides. The reels typically come ready-made, but there are cameras and mounting tools to prepare your own – mostly antiques from when film stereography was more popular.

Autostereoscopic Displays

Generically refers to display devices that provide for 3D viewing without supplemental equipment or special viewing techniques. Common varieties (as of 2015) use a series of louvers or a cylindrical lenticular array to direct alternating sets of pixels from a fairly standard 2D display panel to each eye. Very easy to use, but typically limited to a viewing position directly in front of the screen, so that each eye sees its intended interleaved section. Picture quality is only moderate: each eye gets only its share of the pixels, and doesn't view the whole screen from the same angle (the divisions seem to typically be uniform, not optimized for off-axis viewing toward the edges).

Filtered Glasses

Filtered glasses can pull out for each eye one of multiple images displayed by the same surface.

  • Get your own, especially for the fancy close-fitting sunglasses style. Pinkeye is not a way to see red-blue 3D—it just hurts. For a group, try disposable (but reusable, if kept carefully) cardboard-framed glasses or a UV-sterilization safety-goggle cabinet with plastic ones.


Anaglyph

Glasses with complementary-colored lenses, used with correspondingly colored, overlaid separations of the two images on an otherwise ordinary flat color display. Works best with scenes and color pairs whose separations contain some detail for each eye to drive stereopsis: grayscale works well, while pure colors matching either lens do not, since they appear in only one eye's view. Red-cyan is the most popular pair of colors, and works best with images containing few strong red and blue areas.
Desaturating and lightening colors generally provides a better 3D effect by giving both eyes some detail in all areas; “Dubois” anaglyphs remap colors that would otherwise appear in only one view.
An adjusted amber-blue process under the trade name “Colorcode” is optimized for 2D viewing without glasses by making one eye's image a light blue, to which the eyes are not particularly sensitive. The eye behind the deep blue lens adapts to the dimness and sees that image well enough for stereopsis.
Cheap and easy, but the differently tinted images give inaccurate color and tiring viewing.
Common for inexpensive 3D books and movies.
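The basic red-cyan separation is trivial to build from a registered stereo pair (a NumPy sketch; real tools like the Dubois remapping do much more color management):

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Combine a stereo pair into a red-cyan anaglyph:
    the left image supplies the red channel, the right the green and blue."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

# Tiny synthetic pair: left all-white, right all-black -> pure red result
left = np.ones((2, 2, 3), dtype=np.uint8) * 255
right = np.zeros((2, 2, 3), dtype=np.uint8)
print(red_cyan_anaglyph(left, right)[0, 0])  # pure red pixels
```

Viewed through red-cyan glasses, the red channel reaches only the left eye and the green/blue channels only the right.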


Polarized glasses

Glasses with lenses polarized linearly in perpendicular directions, or circularly with opposite handedness, coupled with correspondingly polarized images for each eye (typically produced by separate filtered projection onto a screen that maintains polarization). Commonly used with specially arranged digital projectors (LCDs typically use variable polarization to filter light, and project polarized light), or with filtered slide or movie projectors.5

3D “Wiggles”

Typically simple GIF back-and-forth animations presenting a stereo pair through motion parallax rather than stereopsis.
Smooth transitions between each side's view from multiple actual perspectives, “morph” transitions interpolating the individual subjects' positions (which can be created with GAP), or simpler interpolated or interleaved intermediate frames can appear more natural or at least more pleasing.
Work well with eye problems (including just having one!)
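The in-between frames can be sketched in NumPy (simple cross-fades between the two views, a stand-in for the true morphs mentioned above, which actually move pixels):

```python
import numpy as np

def wiggle_frames(left, right, steps=4):
    """Build a back-and-forth frame sequence for a 'wiggle' animation,
    cross-fading between the two views instead of hard-flipping.
    (A true morph would move pixels; this only blends them.)"""
    forward = [
        left * (1 - t) + right * t
        for t in np.linspace(0.0, 1.0, steps)
    ]
    return forward + forward[-2:0:-1]  # ping-pong back, skipping duplicates

left = np.zeros((2, 2))
right = np.ones((2, 2))
frames = wiggle_frames(left, right, steps=4)
print(len(frames))  # 6 frames per loop
```

The frame list could then be written out as a GIF with an imaging library such as Pillow.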

Lenticular prints

Lenticular 3D prints typically use a series of image slices running parallel to the cylindrical lenses affixed in front. Each eye gets a slightly offset view according to its angle relative to each of the lenses. The multitude of image slices that can fit behind the lenses enables multiple viewing positions and smooth sideways transitions. As with 3D “wiggles”, intermediate frames can be made with multiple cameras or morphing software like GAP.
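The slicing can be sketched with NumPy: take columns cyclically from each view, so every lenticule covers one slice of every image (a toy illustration; a real lenticular workflow must match the printed column pitch to the lens pitch precisely):

```python
import numpy as np

def interleave_columns(views):
    """Interleave n equally sized views into one image n times as wide:
    output column j is column j // n of view j % n, so each group of n
    adjacent output columns (one lenticule) holds a slice of every view."""
    n = len(views)
    h, w = views[0].shape
    out = np.empty((h, w * n), dtype=views[0].dtype)
    for j in range(w * n):
        out[:, j] = views[j % n][:, j // n]
    return out

a = np.zeros((2, 3), dtype=int)  # "left" view
b = np.ones((2, 3), dtype=int)   # "right" view
print(interleave_columns([a, b])[0])  # alternating 0 1 0 1 0 1
```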

Shutter Glasses

Glasses whose liquid-crystal lenses alternately black out each eye in sync with a display that rapidly alternates the two views, so each eye sees only its own frames. They preserve full color and resolution, but need power and a sync signal, and halve the effective frame rate.

Head-Mounted Displays

Little screens or projectors can be worn directly on the head. Some simply show the artificial display, with optics to make it appear at a comfortable distance. These used to be called “virtual reality”. (Wide-field eyepieces might provide an exceptionally immersive view.) Others combine it with the real world, for instance via a partially-reflective surface, for “augmented reality”. Sets for this include Google Glass, a rumored Google Glass II, and Microsoft HoloLens.

Volumetric Displays

These are futuristic devices to actually lay out an image in a three-dimensional space, as by sweeping a volume rapidly with an array of lights or inciting fluorescence at the intersection of laser beams in special materials.

Light Field Displays

A special array can project the light forming an image not only in different colors and from different places, but at different angles to come into focus at different virtual depths. This could combine with a head-mounted display for more realistic virtual reality.


Generally costly individual prints, but coming soon to TVs?

Make Your Own Fancy 3D Camera


  • For the typical qualitative advantages of fancy cameras:
    • Focus speed and tracking, especially for SLRs—due not so much to the viewing mirror from which they get their name as to a little secondary one that diverts a portion of the incoming light to a dedicated “phase-detect” autofocus system before exposure.
    • Ultrawide and telephoto lenses
    • Low-light sensitivity
    • Recognition – not always a good thing. It can provide credibility (credulity?), conversation-starting camera-derie, or prompt people to try to help you with something that seems important, but it might also get you robbed or obstructed as a “professional”.
  • Better picture quality, although that's not as important as one might think. It is great for large 2D prints. The twinned mid-range pocket camera components that seem to be inside a good 3D compact do have some unsharpness and noise, but the depth effect is so impressive you'll hardly notice and viewing the two images simultaneously distracts from the fuzz. Modern exposure and color-balance will make the pictures very pretty aside from being a little soft.
    • The additional precision may be more important in computational applications.
  • Inventory control: if your 3D camera works great for 2D, no need to maintain or carry a separate set of 2D gear. But 3D can be a little fiddly, so it doesn't hurt to bring along a compact for emerging action.
    • A dual-camera 3D setup can work quickly if set to autofocus and autoexposure, the lenses are left at a medium-wide zoom, and the sleep timer is set long enough—15 or 30 minutes for instance—that the cameras don't turn off during periods of regular use. But leaving an electronic viewfinder on will drain batteries fast.
  • To have the “best”. “Knowing” you have the “best” may free your mind to pursue an artistic vision. Or it may just not quite work at what it's not quite needed for in the first place, fail to satisfy or motivate working out the kinks, and feed GAS.
  • Bragging rights – all the better if you made it yourself :)


Generally, you do it by combining two fancy regular 2D cameras to work as a stereoscopic camera with state-of-the-art speed and quality. Light field cameras are expensive and low-resolution, have a very small stereo base (the size of their lens), achieve focus in a limited number and range of zones, and reportedly compute fake bokeh that might instead be computed from a simpler stereoscopic sequence of images, leaving more pixels for detail rather than duplicative viewing angles. Holography requires very controlled conditions.
A pair of typical market-leading DSLRs will work great. Or even mirrorless “EVIL” cameras. Many companies make good cameras, but the smaller and newer makes may not have as robust peripheral capabilities; those matter here, because they're used to synchronize the cameras and must work consistently and fast to make accessories such as flashes useful to both.
The cameras and lenses don't even have to match perfectly, although it's recommended. Matching cameras should give the best shutter-sync compatibility and timing, and color matching without special adjustments; pre-matched resolutions may also be required for certain workflows. The lenses' focal lengths should match, and matching the lenses themselves best matches aberrations and distortions that are likely harmless in themselves, but distracting if mismatched. In particular, a cheap old Canon can work well alongside a newer one – although EF-S compatibility is important to enable use of inexpensive wide-angles. If one camera will control both lenses' focus, make it the newer one: autofocus systems are a key area of improvement over time, and the Canons with on-sensor phase detection drive stepping-motor lenses much more quickly and smoothly in live view. The lesser camera's weaknesses can be minimized by arranging for its output to be viewed by the non-dominant eye (typically the left) and/or the eye that, through anaglyph glasses, views the colors to which the eyes are less sensitive.
Mount them together securely, with routes for any electrical and mechanical couplings for adjusting the cameras in use, aligned precisely, and arranged to distribute strain away from leverage on stress points like tripod sockets and cable ports. And keep the system manageable—maybe even elegant?
Add some electronics to synchronize the shutters and focusing. Don't worry: the “electronics” can be as simple as a headphone cable. Shutter synchronization is critical for general use: if it's off by even a tiny fraction of a second (enough that motion would smear objects into a mushy or streaky blur were that the exposure time), their sharp images will misalign enough to distort their perceived depth, whether or not they're the primary subject.6 Synchronizing focus is also important, both to catch quick subjects and because adjusting it individually for pretty much every picture would be unwieldy. Focusing also happens to be part of most cameras' shot-preparation sequence, so its timing is easy to synchronize externally in the process of synchronizing the shutters. What exactly the cameras focus on is generally less critical: autofocus cameras typically do a good job of choosing something relatively close, big, and central, which is likely to be the primary subject; some can even seek out faces. Or you can choose particular focus points on each camera to match your composition, or focus-recompose (the center points in particular tend to cover a fairly wide area and will likely overcome the cameras' parallax to overlap on all but the smallest and closest subjects). Stopping down will take care of little mismatches while still allowing noticeable background blur, and taking a few extra pictures should compensate for the few misses. Linking the lenses' focusing is, however, sometimes practical, and especially helpful for video: they'll not only reach the same subject, but at the same time and by the same guidable path.
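To put rough numbers on the sync tolerance (an illustrative model, not from the article): treat the allowable sync error like an exposure time during which the subject's motion must stay under a chosen blur limit on the sensor.

```python
def max_sync_error_s(subject_speed_ms, distance_m, focal_mm,
                     pixel_pitch_um, max_blur_px=1.0):
    """Rough upper bound on shutter-sync error: the time it takes a subject
    moving sideways to shift by max_blur_px pixels on the sensor.

    image-plane speed ~ subject speed * focal length / subject distance
    """
    image_speed_um_per_s = subject_speed_ms * 1e6 * (focal_mm / 1000) / distance_m
    return max_blur_px * pixel_pitch_um / image_speed_um_per_s

# A person walking (1.5 m/s) 5 m away, 35 mm lens, 5 um pixels:
t = max_sync_error_s(1.5, 5.0, 35, 5.0)
print(f"{t * 1000:.2f} ms")  # 0.48 ms
```

Sub-millisecond for even a walking subject, which is why a shared electrical release beats pressing two buttons.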
Next, match up the other settings, such as exposure, which don't have to be adjusted right as you take each shot. These should be the same on each camera, for equal viewing angle, depth of field, and brightness. But little mismatches in magnification and brightness are easy to correct later. So you can just set each camera by hand to reasonable defaults for a scene: exposure and sensitivity checked when you move to a new area, and a zoom setting a little wider than generally needed for a sequence (you can crop, but you can't “paste on”). An extra zoom scale is easy to make and stick onto one or both lenses to match that setting by hand; a mechanical hand- or power-driven coupling can be added for video. Exposure, like focus, can be automated, but some modes work better than others with two cameras looking at slightly different things and, depending on the mounting, one upside-down.
Flash can be tricky: on modern cameras it commonly delays the shutter to fire a test “pre-flash” and possibly perform other preparations. Full flash sync speed may not be available across both cameras even when they otherwise synchronize well. Pro makers' systems, more powerful external units, less-automated modes, and “high-speed” sync's extended illumination can help in marginal cases. Indoors, fancy multi-flash setups for artsy lighting can work great!


There are several mounting options. Commercially available ones include neat “Z-bars” (with the middle snaking between the cameras, or lying behind them with baseplates wrapping forward) for certain compact cameras; simple flat “stereo bars”, sometimes with precise but bulky Arca-Swiss double clamps to hold cameras with custom contoured baseplates pointed precisely forward, or elongated for “hyperstereo” wide-base photos of distant objects; and vertical mounting bars to hold the cameras closer together, almost baseplate-to-baseplate, anchored to a common horizontal member. Custom “twin rigs”, as they're known, typically mount compact cameras to Z-brackets assembled from two hardware-store metal corner braces, use simple side-by-side twin bars, or use homemade vertical mounting bars, perhaps coupled to one another at the top and bottom. (There are also big front-surface partially-silvered “mirror rigs” for high-quality 3D video from a tripod, with an adjustable effective stereo base.) The systems generally available so far may work, but they can be made smaller, easier to handle, and more elegant (thus less obtrusive on location). And cheaper—but cost should not be a primary concern: the cameras are valuable, mounting problems could damage or drop them, and carrying two spares may not be practical.7 The time that goes into making and using the system will be worth much more than most of the supplies, and wrong or bad tools can be unsafe. So, here are some rigs you can make at home, and a couple of ideas for more advanced hobbyist projects.

Stereo Bar

Simple flat metal “stereo bars” with slots and tripod screws for mounting cameras side-by-side are very easy to use. Some, such as this fancy one from Really Right Stuff or similar less expensive but good ones from Sunwayfoto, accept “Arca-Swiss” standard tripod mounting plates which, when custom-fitted to the cameras' bases or equipped with anti-rotation pins for compatible cameras, keep the cameras pointed straight forward. Others like these even take more elaborate, but photographer- and even bar-bendingly heavy, finely adjustable panorama heads and leveling bases. (Headless screws with smooth cylindrical tips are known as “dog point”: these could make good cheap anti-rotation pins to add directly where needed. Screw them in from the back side rather than looking for a screw with an appropriately sized cylindrical head to use as the pin, because their larger body will be stronger and easier to drill and tap for.) But stereo bars typically hold the cameras at least their own width apart, for at least a mild hyperstereo effect. Narrow cameras improve the situation (but tend to be simpler compacts). There are also “vertical bars” to mount the cameras their typically lesser height apart on the stereo bar.

Inversion of one camera via hotshoe

A good cheap way to mount relatively light, short cameras with one narrow side and a relatively flat top close together on a stereo bar can be to mount one inverted by its hotshoe to the bar. Get a strong steel hotshoe to tripod screw (not socket) adapter. If your stereo bar has a narrow slot to retain tripod screws that narrow to a thread-depth unthreaded part near their heads, file off the threads near the hotshoe plate of the adapter on the sides that will face the channel, protecting the rest of the threads and the hotshoe plate with tape. (Clean the filings off the plate and yourself before re-approaching your other camera stuff!) Mount one camera upside down to the bar so that the narrow sides of the cameras face each other. Fill in the space between the top plate of the upside-down camera and the bar, if needed, so that it is not exposed to sideways leverage (not tightening the hotshoe retaining nut all the way will let the top of the camera, rather than the hotshoe, take more of the strain as the camera shifts). The cameras chosen should have roughly the same distance from the lens mount to the top plate and the bottom plate, but if one camera sits with its lens a little higher or lower, shim the space under it.

Setscrew (“Grub” screw)

A simple headless setscrew, adorably called a “grub screw” (like a bug) in British English, can be an incredible photo accessory: it costs next to nothing, weighs next to nothing, and, with a few strips of Scotch tape and a shutter-release cable for synchronization (more on that later), can literally add another dimension to each and every picture. You needn't make any kind of “rig”, but instead screw the cameras into each other. The key limitation is that the pictures will all be in “portrait” format, which reduces lenses' horizontal angle of view (introducing an additional “crop factor” of 1.5) and loses resolution in cropping to “landscape” (although a DSLR has plenty of actual resolution even with relatively few megapixels, due to their high quality).
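The 1.5 comes straight from the frame geometry (a quick arithmetic check, assuming a standard 3:2 DSLR sensor):

```python
def portrait_crop_factor(aspect_w=3, aspect_h=2):
    """A 3:2 sensor held in portrait is only 2 units wide versus 3 in
    landscape, so a landscape-format crop taken from the portrait frame
    is smaller by a linear factor of 3/2 = 1.5."""
    return aspect_w / aspect_h

print(portrait_crop_factor())  # 1.5
```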
The cameras should match (mismatched ones may sort-of work) and should have:

  • Tripod socket directly under the lens (not off to one side), so that the two pictures offset only in the horizontal axis when the image margins are square. This is pretty standard, to facilitate panning without lateral motion.
  • Tripod socket made of metal, and better yet, mounted in a metal body for sturdiness. The metal construction is pretty standard on fancy cameras. Metal bodies are not, and typically very expensive new, but can be inexpensive used—if one is willing to accept a generation or so older model. (All DSLRs since at least the Canon 10D and, apparently, Nikon D100 have excellent speed and much better image quality than all small-sensor compact cameras to date.)
  • Tripod socket close to the underside of the lens, not separated from it by a large grip or other projection to overly increase the gap between the cameras. This is pretty standard; big integrated grips are common only on super-expensive top-of-the-line “professional” cameras.
  • Broad, flat baseplate with tripod socket relatively centered front-to-back. This will be the bearing surface between the cameras, and should not create excessive leverage of the socket with a close-in edge.
  • Battery and other critical ports beginning no closer inboard of the tripod socket than the opposite edge of the camera projects from it, so that they can be opened without separating the cameras. This can be tricky. Check high-resolution baseplate pictures. The enthusiast Canon x0D series should work; the Canon Rebel / Kiss (xx0D) series will not.8
  • A means for synchronizing the shutters as discussed later.

The setscrew should have the standard ¼”-20 tripod thread (or 3/8”-16, if you have a large or non-US-market camera with that bigger socket), a socketed drive (for uniformity and strength end-to-end), a length of ½ inch, a “cup” or other threaded, non-stabby tip, and nicely finished stainless steel construction. This may seem like a lot of requirements, but it describes a setscrew that is fairly common in every regard and readily available from hardware stores' assortments. The choice of material is important: steel will not easily bend, break outright, or fatigue, and will not promote corrosion of the cameras. Stainless also will not itself degrade readily, though it is not the best material for projects that involve metalworking: it is less “machinable” and several times more expensive, which for small parts is merely frustrating even if each good piece takes several errors.
Screw it finger-tight into one camera and then screw the other onto it until both face in the same direction and are as close together as they will get without pushing hard. (If the screw bottoms out holding them apart, get a shorter screw.) There will probably be a narrow gap, around which the cameras will wobble.
A shim will take care of the gap, and an easy one is cellophane tape, preferably the “frosted” variety to enhance surface grip for the glue that will eventually lock them together. Apply strips of it at the front and back edges of the cameras' mating surfaces by trial and error, unscrewing them between attempts. Repeated attempts may damage the tape; if that happens, start again with about the right number of layers already stuck together. Estimate the cameras' rotational alignment by setting their lenses front-down (focused and zoomed to the minimum physical extension) on a hard, flat surface9, then finalize it, and check their toe-in or -out, by attaching a basic synchronization apparatus (such as a cable) and taking some pictures, most accurately with a telephoto lens. The top edges of each picture should align (not critical right now), and the left and right edges should be off by the distance between the lenses for the parallel alignment suitable for general use at medium-close to infinity distances. If the cameras are toed-in, remove one or more layers of tape from the back edge of the mating surface and add them at the front edge.10
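To get a feel for how sensitive the shimming is, a rough small-angle estimate can relate a convergence error measured in test shots to the number of tape layers to move from one edge to the other. All numbers below (tape thickness, baseplate depth, pixel pitch) are illustrative assumptions, not measurements from this article; check them against your own cameras and tape.

```python
# Back-of-the-envelope: tape layers needed at one edge of the mating
# surface to cancel a toe-in error measured in pixels at a distant target.
# Assumed values are hypothetical; tweak for your own gear.

def shim_layers(offset_px, focal_mm, pixel_um, plate_depth_mm, tape_mm=0.06):
    """Layers of tape (of thickness tape_mm) to tilt one camera enough to
    cancel an alignment error of offset_px pixels in test shots."""
    angle_rad = offset_px * (pixel_um / 1000.0) / focal_mm  # small-angle approx.
    shim_mm = angle_rad * plate_depth_mm  # lever arm = front-to-back plate depth
    return shim_mm / tape_mm

# Example: a 50-pixel convergence error seen through a 100 mm test lens,
# 4.3 um pixels, 40 mm of baseplate depth, ~0.06 mm tape:
print(round(shim_layers(offset_px=50, focal_mm=100, pixel_um=4.3,
                        plate_depth_mm=40), 1))  # ~1.4 layers, i.e. one or two strips
```

The takeaway is that single tape layers make usefully small corrections; a long test lens magnifies the error enough to dial them in.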
When the toe-in is correct, unscrew the cameras a part of a turn, apply a little white glue to the tape's top surface, screw them back together, re-adjust the rotational alignment, and allow them to dry overnight. Resting them face-down on their lens ends should prevent creep.
Attaching the strap through only one camera's lugs, with the other hanging freely from the first, rather than with one end on each camera, will reduce unscrewing forces between the two. Because this mounting method's basic implementation occupies the tripod holes, use a beanbag to steady the cameras if needed. There are hot-shoe tripod adapters, but using one hotshoe to hold two cameras sideways would invite damage to the hotshoe and even a drop of the cameras.
White glue and Scotch tape form a temporary mounting method: an abrupt twist will break their bond and unscrew the cameras to free them for other uses or even resale. (One camera can be used on its own with the other attached to its bottom but not doing anything for occasional 2D needs.) The bond may soften over time and in damp conditions, so bring the glue and tape with the cameras to re-glue them overnight on travel.

Common Baseplate

A more sophisticated compact mechanical interface between the cameras' baseplates could take up the fraction-of-a-thread slack in the connecting screw11 and hold them tight against unscrewing more conveniently than with tape and glue.
One way to make this involves converting the mostly-metal base-gripping assemblies from two quality battery grips, which can be inexpensive for older camera models or if the grips and their electronics have become damaged or unsightly. The Canon BG-ED3, for the EOS D30, D60, and 10D, works well: it has a freely-rotating tripod screw driven by a thumbwheel projecting beyond the camera's front and back edges, and an anti-rotation pin matching a socket under the camera. Remove the small screws12 holding the base-gripping assemblies to the remainder of each grip. Align the faces that run parallel to the sensor plane (probably the rear ones) and the tripod-socket screws (not necessarily the thumbwheels, which may connect to the tripod-socket screws by gears). Measure and mark where each assembly would extend past its counterpart, and cut this end off (precisely perpendicular to the sensor-plane face, for neatness) if needed to prevent obstructing the battery door. Then attach the two assemblies to each other, each facing forward, with something that withstands tension and extends as little above the surface as possible. “Chicago screws”, a.k.a. “sex bolts” (really!), set into holes through the entire assemblies (or, if possible, through the metal plates on the non-camera side of the gears, which will be the middle of the interface assembly) work well; put some close to the center as well as around the edges to prevent the two from bowing apart. Tiny flathead screws into threaded sleeves could reuse the base-gripping assemblies' original fastening holes as much as possible, and allow complete camera-to-baseplate flushness and anti-rotation-pin snugness.
Double-sided Arca-Swiss tripod-plate clamps are also available and could be used as-is on camera baseplates (or even on L-brackets, for a bulky Z-bar-like mount). Look for one whose tightening hardware does not extend out on both sides (so that it does not protrude back toward the user) and which will work with safety stops on your camera plates to keep either camera from sliding out one side and dropping.
Cameras' baseplates are often discrete, removable parts,13 and tripod mounting plates and L-brackets are common, simple, customarily expensive accessories. “Arca-Swiss” seems to be the predominant high-end standard plate style, but there are larger, and rounder polygonal, styles too. Try laying out your own replacement camera baseplate or tripod plate for integral base-to-base coupling, or L-bracket for Z-bracket-style coupling.14 The baseplate would not need to be customized to a particular camera if it follows a common anti-rotation-pin layout, although doing so could give it a better bearing surface. To hold the lenses side-by-side in a Z-bracket combination, L-brackets would need a camera- (or type-) specific base-to-coupling-center distance; adjustability could weaken a given design, but swappable shims could easily adjust it within a range. The coupling should have a positive lock and ideally be symmetrical, so as not to require separate designs or permit mismatches. It should retract, remove, or have a low profile avoiding contact points, for tripod mounting of the separated camera and to prevent snags. (An L-bracket's unused base would allow mounting a pair of cameras off-center while its arm, and the other's, held them together.) One option would be symmetrical keyhole- or foot-style fasteners on one side of each camera's plate that would insert into the other side's with the lenses off-center, then lock as the cameras slid home. Projections (such as rails) on the right side of a baseplate and matching indentations (such as grooves) on the left, or projections on the top side of an L-bracket's arm and indentations on the bottom, would hold the entire mating surfaces in proper orientation and reduce stress on the fasteners.15 An oversized L-bracket arm (perhaps forming an unusual tripod coupling) with an open middle could admit the side of the camera into the space normally comprising the center of the plate, in order to bring the cameras closer together.
In lieu of a quick-release coupling, screws into holes, ideally tapped on only one side of each interface (so that slack could be taken up), could couple the cameras tightly and strongly.
Extra attachment points on the bracket could admit holders for (or combine to grab directly onto) other accessories, such as a “tablet” for advanced control, processing, and networking.


Z-bar

A “Z-bar” is basically a bar in the shape of a stretched-out “Z”, with the middle part vertical or near-vertical and the top and bottom legs parallel but separated in height, used to hold two cameras together with their lenses vertically aligned and their shorter sides facing inward so that landscape-format pictures can be taken with a reduced stereo base. Although simple in design, Z-bars can be deceptively difficult for an average do-it-yourselfer to make: they must be stiff, tough, fatigue-resistant, preferably light and compact, and very precise.16 Steel meets these requirements pretty well, but the strength of even the basic “mild” variety, considered one of the easiest to work, can be a challenge for a typical do-it-yourselfer used to little home-improvement projects. Here's how to do it.
Choose the cameras. A simple Z-bar's height (between the inside faces of the legs) and tripod-hole placement will vary from camera to camera, so select the cameras before you select the bar. The cameras must be compatible with any desired control couplings and have one side, typically the left, that extends little from the lens. (If the cameras can have either side extend little from the lens, great: no need for a Z-bar, as a regular flat one will do!) It is desirable for the cameras' baseplates to be configured for an anti-rotation mechanism such as a pin, to avoid the imprecision, tightening strain, and stress transferable to more delicate parts from a friction fit on the baseplate, or the complication and weight of a contoured cradle. It is also desirable for the cameras not to have any frequently-needed access ports, such as a battery door, close to or “inboard” toward the narrow side from the tripod socket, so that a broad bearing area in each direction can prevent harsh leverage. And it is best for any needed connectors, such as for shutter release, to be out of the way of the Z-bar's path along the bottom and narrow side, to be away from the path's edge so as not to require breaks in the strength-critical edge of the bar, and/or to have placement conducive to plugging the cameras “into each other” with a short, relatively straight connector. A touchscreen interface can be handy for configuring each camera from the back while one is upside down.
Canon SL1s work very well. They are small, keeping bar stiffness and, thus, weight requirements down; they are light, being easy to carry themselves as well as not requiring a stronger bar; everything is easily configurable through a touchscreen; and the seemingly unique placement of their radially-symmetric 2.5mm “submini” shutter-release connector on the left (narrow) side, vertically centered to the lens and roughly centered front-to-back, permits the only other connection between the cameras to be a simple double-ended male adapter ensconced in a medium-sized hole through the middle of the Z-bar.17 (On Canons, this connection synchronizes prefocus and shutter release by simply plugging the cameras into each other.) The port-cover flaps can be removed without damage, to keep them out of the way, by temporarily taking off the little plastic cover next to them with a JIS screwdriver.
Choose the material. Mild steel works great. Steel is much stiffer than other common metals such as aluminum, and much more ductile cold than typical tempered aluminum alloys. (Either could be heat-bent, but this is inconvenient and potentially dangerous to do by hand, and the aluminum would need to be retempered to bring it back to anywhere near the steel's strength.) The stiffness is especially important for a simple solid bar, as opposed to, for instance, an I-beam or box girder, whose width requires a greater proportion of stretching to bend the element and so is much stiffer for its weight. The solid bar presents a flat bearing surface on each side, does not need special bending techniques to keep it from collapsing, and does not need a complex shape such as a recurve to form inside corners into which the cameras' typically almost-sharp edges can fit. A bar stiff enough to keep the cameras from flopping about the central bar (and any connectors there) or twisting with long lenses will be much stronger than the cameras and probably the user. Because, as a rule, every kind of steel is equally stiff within its elastic range, the extra-hard or extra-strong kinds will just be unnecessarily difficult, and possibly less capable, to bend. Stainless steel wouldn't rust, but it is much more difficult to cut and requires polishing for the familiar pretty shiny or satin finish; mild steel doesn't rust rapidly in conditions that are suitable for electronics either. Finally, mild steel costs little, which matters more as you start out and spoil a few pieces. “Hot rolled” is cheapest, often noticeably bowed across its width, and has a dull, hard, slightly rough “scale” coating. “Cold rolled” is much prettier: flat, bright, and square. Cold rolled leads straight to a more elegant and precise-looking result, but the finishing adds stress before you begin to bend: if it noticeably weakens (gets softer), cracks, or even breaks as you bend it, scrap the damaged piece and try hot rolled.
1/8 inch thick works well for compact DSLRs; perhaps 3/16 for mid-sized ones, and as little as 3/32 for compacts. Stiffness increases with the cube of thickness, more or less, and sharp bends relatedly strain thicker bars much more than thinner ones, because the outside must stretch and the inside compress more around the bend radius; so a thicker bar would serve little purpose. A wider bar resists twisting about its own length much better, but one about the width of the cameras' own baseplates (pretty narrow for a digicam, an inch or inch and a half for most DSLRs, or about two inches for one open in the middle to accommodate cameras directly side-by-side) is typically fine and most compatible with the cameras' original ergonomics. As you order material, watch out for your bender's width capacity, often just less than 2”.
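The cube law above is worth seeing in numbers: for a solid rectangular bar, the second moment of area is I = w·t³/12, so the thickness sizes named in this section differ in stiffness far more than their fractions suggest. A quick sketch:

```python
# Bending stiffness of a solid rectangular bar scales with the cube of
# its thickness (I = w * t**3 / 12); width and length cancel out of the
# ratio, so only the thickness fractions matter here.

def relative_stiffness(t, t_ref):
    """Stiffness of a bar of thickness t relative to one of thickness
    t_ref, same width, length, and material."""
    return (t / t_ref) ** 3

print(relative_stiffness(3/16, 1/8))  # 3.375: a 3/16" bar is ~3.4x stiffer than 1/8"
print(relative_stiffness(3/32, 1/8))  # 0.421875: a 3/32" bar is less than half as stiff
```

This is why a modest step up in thickness quickly becomes overkill, while the outer-fiber stretch at a sharp bend grows in proportion to thickness, making the thicker bar much harder to form.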
Design the bar. Typically, a basic Z-bar will run the flat widths of the bottoms of the cameras from their narrow sides, past the tripod mounts, up to a battery and/or memory-card door on each one's wide side. If this length would end very near one side of the tripod mount, the cameras may still be usable and adequately braced with a notch for the door and an extension to bear soundly on the rest of their bases on those sides. Removing the cameras to change their batteries is not recommended if it will disturb delicate electrical-connection areas. The vertical part of the bar may tilt (like a “Z”, but less acutely) to brace the cameras at their upper corners as well as their bases and keep them as close together as possible, if these tilt inward, as is common on cameras with contoured grips. Attachments to the narrow, typically left, sides of the cameras, such as a shutter-release interconnect for SL1s or plugs into other kinds of cameras, may require increased spacing or cutouts (don't cut the edges, primarily responsible for stiffness), as may strap lugs. An extra-wide bar with a large cutout in the vertical for the near sides of the two cameras to touch (or nearly so) would minimize the stereo base, which tends to be larger than optimal for full-size cameras. Holes will be needed for the tripod screws (and are typically threaded to retain the special tip-threaded style) and are highly desirable for anti-rotation pins, which “full dog” cylinder-tipped setscrews can serve as.18 Other desirable additions include a strap, which can couple to a pair of holes on the horizontal element designated as the “top”, near the vertical (even precisely over the center of the setup, if there's much bend to the “Z”), and a tripod hole (the Joby Micro is especially elegant; use a washer to keep the tripod screw from penetrating a thin bar far enough to damage the cameras).
Measure, lay out, and cut the bar. Basic hand precision metalworking techniques work well to lay out the bar: measure and mark with a fine ruler, divider, and scribe, into layout fluid (or a Sharpie). Use a perfectly flat reference surface, such as a “surface plate”. Start holes with a center punch to reduce wandering—the automatic type, carbide-tipped, can be very handy. Start with the bar a little too long and mark out succeeding elements after you bend and drill preceding ones to compensate for or at least spread out the impact of inaccuracy. The vertical can be bent further sideways to take up a little extra length before the tripod holes are drilled, and shims can always take up a little slack.
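Starting a little too long wastes less bar if you estimate each bend's material consumption first. Sheet-metal practice figures the length consumed around a bend along the neutral axis, which sits a “K-factor” fraction into the thickness; K ≈ 0.44 is a commonly cited assumption for mild steel, and the flat lengths below are made up for illustration. Verify on scrap before cutting good stock.

```python
import math

# Rough blank-length estimate for a bent bar: straight flats plus a
# bend allowance per bend. K-factor and dimensions are assumptions,
# not measurements from the article; calibrate with a test bend.

def bend_allowance(angle_deg, inside_radius_mm, thickness_mm, k=0.44):
    """Arc length of material consumed by one bend, measured along the
    neutral axis (inside radius plus K-factor fraction of thickness)."""
    return math.radians(angle_deg) * (inside_radius_mm + k * thickness_mm)

def blank_length(flats_mm, bends):
    """Total bar length to cut: `flats_mm` is a list of straight
    sections; `bends` is a list of (angle_deg, radius_mm, thickness_mm)."""
    return sum(flats_mm) + sum(bend_allowance(*b) for b in bends)

# Example: two 90-degree bends in 1/8" (3.175 mm) bar around a 3 mm
# inside radius, joining hypothetical flats of 60, 45, and 60 mm:
print(round(blank_length([60, 45, 60], [(90, 3, 3.175), (90, 3, 3.175)]), 1))
# ~178.8 mm of bar before bending
```

Even a rough allowance like this narrows the “too long” margin to a few millimeters, which the tripod holes' drilled-after-bending order and shims can absorb.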
Refine the bar's bends as you make them. Inexpensive benders such as the Harbor Freight “compact bender”, like, presumably, the Hossfeld bender of which it is a small version, are not very precise. Sighting against the right-angle adapter in a consistent way and measuring afterwards, determine where the bend is made in relation to a mark. Check whether the bend is truly square-across, or if it's crooked with the end tilting to one side, as you bend in small increments and take out the bar periodically. If it's crooked, shift its bending position slightly to reverse the problem. Be careful where you put your fingers! A length of steel pipe over each end of the part can straighten out bowing—wrap it in heavy tape to reduce surface marring. A huge adjustable wrench sideways over each end can eliminate twists.
Drill the holes with a press—center-drill/countersink bits are very precise. Tap them as needed with a standard hand kit.

Universal Z-bar

The iShoot Universal L-Bracket and Improved Universal L Bracket suggest a route to a universal Z-bar. The baseplates' adjustable sliding tripod mount points and flanges already accommodate a range of horizontal camera positions and work well.19 Anti-rotation pins would more securely prevent sideways twisting, and shortening on the right side (or a design that maintains integrity, including safety catches to not slide completely apart, when much of that is cut off) would better permit battery access. The vertical arms could couple opposite each other, perhaps by dovetailing, with a long coupling area to prevent up-and-down twisting (the mating surfaces could be arranged so that each has its component “in front of” the other's when its baseplate is down, optionally with complete dovetails parallel to the image plane, rather than one component fitting simply “inside” the other, so that the two parts could be identical), or both baseplates could adjustably mount to a common vertical plate or rails. If the vertical portion is perpendicular to the horizontal portions, rather than bent into a “Z”, each side's parts could be a standardized L-bracket, simplifying the number of parts to make for various tripod accessories. A large opening in the vertical portion would allow fitting the cameras close side-by-side and provide clearance for electronic connectors between them, although it might have to be bigger than an Arca-Swiss plate's outline for most cameras.
Tripod-mount accessories are typically aluminum, and L-brackets' “elbows” are often thin—stronger would be better for a Z-bracket to be carried around with a camera cantilevered off the other end. Besides this, you could make a universal Z-bar by fixing two of the current models' verticals together—but consider a cord between the tripod screws to keep from dropping a camera if it breaks.


A large box, such as a tough plastic suitcase-style case with an optical window, with a custom-contoured inside (perhaps based on temporarily fixing non-extending lenses against the window, and making a pick-and-pluck or foam-in-place cradle for the cameras) could provide a strong, simple, protective structure to hold cameras in position for 3D picture-taking. GoPro sells such a box ready-made for the rugged compact video cameras it makes. Limited physical access to the cameras requires a remote; built-in wireless would enable a very secure shell.20
A large box-style case, perhaps rounded for strength, could work well for a one-off underwater rig (test extensively first!). A “zero-pressure” case would incorporate a collapsible portion in the watertight section (perhaps a tough, normally-full air bladder attached to the box, sheltered within its own enclosure) to match ambient pressure, neatly counterbalancing leaking and crushing forces.
The rigidity of a large, semi-open “box” frame could work well for complex multi-camera or extremely accessorized setups.

Conformal stereo / accessory mount

A fancy “Z-bar” could be milled to conform precisely to grip the bottom and sides of the cameras (with a nonskid, non-scratching surface) or even incorporate a sculpted grippy form like a pair of intertwined battery grips. It could incorporate conveniences such as an integral quick-release mount, protections such as a crash bar, and extra capabilities like a mount for a tablet running a standard open operating system for annotation, processing, and sharing.
Stereography could generate the 3D models to turn into the contoured mating surfaces: a series of images from different perspectives to eliminate occlusion, and from similar perspectives for higher resolution, whether by handheld scanning (perhaps as detected by an acceleration sensor), by precisely “wobulating” the sensor, taking lens, and/or projection equipment for a light array with stabilization-style adjusters, or by using superresolution techniques. Moving a sensor relative to a micro-lens array could allow sampling a light field's directionality as well as its colors in very high resolution. The speed of a mechanical low-pass filter suggests this scanning could be done within timeframes suitable for modest action photography.

Lens-mount connection

Cameras already set up for compatibility with longer-back-focus lenses, such as the EOS M, micro-four-thirds, and other SLR makers' “mirrorless” lines, provide an obvious strong, firmly-aligned space for a stereo mount: in place of the adapters. A common body for them, which may need internally-swivelling bayonet mounts to attach to both cameras close together, could not only hold the bodies but route lens-connection signals entirely internally. It could communicate its presence back to the cameras (or to one of them, which could tell the other another way) via typical lens-to-camera communication or otherwise, so that they configure themselves for 3D. The space behind the lenses could accommodate focal reducers, teleconverters, electronically-adjustable synchronized filters, diaphragms, and zooming groups. Mirrors and/or prisms to adjust convergence could also go in the optical path. At the cost of strength and lightness, the two sides could expand apart for increased stereo base, again under automation if desired.
A shim-based, mechanically adjustable, or electronically compensated focus adjustment and a centering adjustment between the two would optimize sharpness and out-of-the-camera alignment.


Single-camera single-shot 3D techniques obtain perfect shutter synchronization, often at the expense of field of view and image quality. An optical relay typically conveys two viewpoints through the usual lens to side-by-side positions on the sensor. Makes include Pentax, Stereotach, Miida, and Kula.
Binocular-like prism assemblies might work better or be more durable than the common simple outward-facing periscope-like mirror assemblies, or convey images from a pair of wider-angle lenses. Autofocus-coupled, synchronized lenses might work, at least in contrast-detect mode: their rays for each side likely wouldn't be oriented as phase-detection sensors would expect, at least in the horizontal axis along which the lenses are doubled up off-center.
An anamorphic lens could change the camera's semi-wide view to super-wide, accommodating two normal landscape-format views side-by-side.


For close-ups, a convenient handheld option would be a dual-lens 3D attachment with a very short stereo base, such as the Loreo 3D Macro Lens in a Cap, or even side-by-side apertures (sequential, colored, or filtered to different parts of a sensor, or to separate sensors to which the common view is transferred, with a color-independent mechanism like polarization). Cyclopital 3D makes a base-reducing macro adapter for simple purpose-built 3D cameras.


  • For slightly wide-base hyperstereo pictures, try a wide purpose-built stereo bar, a tool cleverly adapted to the purpose such as “DrT's HyperBar” (far cheaper and probably tougher than most camera stuff!), or a stiff square tube or channel, available pre-painted and holed for making wall-mount shelves and racks.
  • For very wide-base hyperstereo, work with a friend—much more engaging than a tripod and prevents camera theft. Adjust the cameras, explain the zoom setting and composition (easily done by aligning a key feature with a viewfinder marked focus point or grid), and trigger them with a single wireless remote.
  • A laser rangefinder can measure distances to faraway scenes to suggest stereo bases. Out of its range, try a map showing topographical features—but the longer interocular distances required for much depth effect would tend to put the cameras out of usual shouting or basic radio-trigger distance.
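For turning a measured distance into a starting stereo base, a widely quoted rule of thumb (not specific to this article, and worth calibrating against your own results) is roughly 1/30 of the distance to the nearest important subject:

```python
# The "1/30 rule" of thumb for stereo base: base ~ nearest subject
# distance / 30. A common community heuristic, not a law; deep scenes
# or strong telephoto framing may call for a different ratio.

def suggested_base_m(nearest_m, ratio=30):
    """Rough stereo base (meters) for a scene whose nearest important
    subject is nearest_m away."""
    return nearest_m / ratio

print(suggested_base_m(2))     # ~0.067 m: close to natural eye spacing
print(suggested_base_m(1500))  # 50 m for a distant mountainside: two-person territory
```

The second example shows why very wide bases push past shouting and basic radio-trigger range: distant landscapes want bases measured in tens of meters.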

Adjustable base

  • Unlike simple side-by-side mounts, a “mirror rig” provides a variable stereo base adjustable down to zero. It could be dynamically adjusted with a servo motor like the lenses, perhaps even tracking the scene—for instance, relieving excessive convergence but not totally eliminating the in-your-face stereoscopic effect of a tight zoom.

Shutter Sync

Tripping the cameras' shutters at “exactly” the same time ensures that the differences between their views are due only to perspective – not potentially distracting changes in details or depth-distorting mass changes in position. The need for precision varies by application from nonexistent for still lifes (they'll hold still while you even move the same camera for the second picture) and low for relatively static scenes (pushing two shutter buttons at the same time may do fine; at worst some leaves will turn in the wind or a person in the background may move a bit), to medium for moving subjects including people walking around (most wired setups with prefocusing to ready the cameras will work) and high for fast action and flash (detailed settings tweaks can be important).
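The precision tiers above can be made concrete by estimating how far a moving subject shifts in the image during the gap between the two exposures. The numbers below (walking speed, distance, lens, pixel pitch, and the gap durations) are illustrative assumptions:

```python
# Image-space error introduced by imperfect shutter sync: a subject
# moving laterally shifts between the two exposures by an amount that
# depends on the sync gap, distance, and focal length.

def sync_error_px(speed_mps, gap_s, distance_m, focal_mm, pixel_um):
    """Pixels a laterally moving subject shifts between the two frames."""
    shift_on_sensor_mm = focal_mm * (speed_mps * gap_s) / distance_m
    return shift_on_sensor_mm * 1000.0 / pixel_um

# A person walking at 1.5 m/s, 5 m away, 35 mm lens, 4.3 um pixels:
print(round(sync_error_px(1.5, 1/60, 5, 35, 4.3)))     # 41 px with a sloppy 1/60 s gap
print(round(sync_error_px(1.5, 0.001, 5, 35, 4.3), 1))  # 2.4 px with a 1 ms gap
```

Tens of pixels of false disparity is plainly visible and depth-distorting, while a millisecond-class wired link keeps the error down near the noise floor, which is why the wired and in-camera methods below matter for anything faster than a still life.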
The best simple way to synchronize the shutters is to simply plug the cameras' shutter-release ports into one another, if the cameras support two-stage shutter release over this link: a half-press to wake up, prefocus and otherwise ready the cameras, and a full press to take the pictures. Canon SLRs do, with a submini audio cable or double-ended “C3” cable. This, or a connector between dissimilar-connectored but electronically compatible cameras like an inexpensive and a costly Canon SLR, can be made by splicing corresponding wires from two cables (check pinout) or connecting the other ends (typically submini-plug) of two cables designed for a standardized remote-release interface into a coupler (whose outside and entering plugs can be surrounded by heatshrink tubing for protection and securement). Olympuses' USB remote terminals seem incompatible with this shortcut; Nikons can be plugged into each other – at least by the 10-pin terminal on expensive models.
The next step up in complexity is to plug both cameras into a remote. (Cameras' wireless interfaces typically aren't set up to support two-stage remote signaling, although they may support simultaneous shutter release by a delayed remote procedure and focusing during the fixed, ample delay.21 An infrared signal can be routed to the front receivers of multiple cameras with fiber optics.) This can be a purpose-built wired remote release fitted to both cameras with a Y-connector or a wireless receiver with a shutter output and a button for optional manual triggering like the Yongnuo RF-603. A simple Y-adapter into which to plug the standardized remote ends, typically submini stereo plugs, typically works well. The timing may be more reliable when triggered by an electronically-generated signal or electronically-triggered completed circuit rather than a simple switch coupling, as mechanical connections take time to fully complete and the two cameras (like multiple wireless receivers) could have different sensitivities. Check reports for reliably consistent timing.
The “LANC Shepherd” is a custom remote triggering, adjustable-delay device, with flash support, for certain Sony cameras. A related device is called “ste-fra LANC”. The term LANC refers not to a manufacturer but a camcorder command system.
“Finally”, accessory two-stage wireless remotes with a common transmitter can be used. Like an opto-coupler, this eliminates any possible electrical crosstalk between the cameras. It should also eliminate uncertain timing as mechanical switches complete their work, and should work with pretty much any set of cameras. Elaborate sets like the Yongnuo 622-C-TX and PocketWizard MultiMAX enable adjustable delays (the PocketWizards, at least, for multiple cameras) but tend not to be small or cheap. In addition to universal compatibility (with cameras having release ports), they're great for hyperstereo: set the cameras as far apart as you like, within reason, and have no cords to trip over.
Electronic connectors can obstruct close side-by-side placement of cameras or just dangle out to snag things. Connectors purchased as components can be adapted to low-profile right-angle versions by routing the cable sharply and perpendicularly outward (typically, in a range of directions; choose the one most convenient for the application) and reinforcing its small connection with a strong material such as epoxy. A connector body that would extend from its socket further than needed for this can be truncated. Frans van de Kamp demonstrates as part of his StereoDataMaker (SDM) trigger.
Canon SL1s' placement of their shutter-release port at the vertical center of the lens, on the narrow side of the camera, offers an even more elegant connection option, and the simple symmetrical round shape of the connector simplifies construction. Just make a double-ended male adapter (which is only inappropriate to use for power transmission, as it could create a dangerous electrified spike.)

  • Get two 2.5mm submini “jack” clamp-and-solder-on connectors (actually more than two, as several will be wasted; these are inexpensive direct from China); heatshrink tubing that will go around the connectors' bodies (Harbor Freight hot-glue-coated “marine” type works well; a candle can shrink it); basic soldering equipment including some alligator clips and a “helping hands”; needlenose pliers and a diagonal cutter; and flue tape for conformal heatsinking.
    • The connectors' plastic jackets are not used.
  • Wrap squares of flue tape around the connectors' male ends (“prongs”). This will prevent melting their delicate plastic insulator components and scratching them generally. (Connectors with heat-resistant internal insulation would be even easier to use.)
  • Orient the connectors so that their outer tabs are on opposite sides. Bend these tabs outward a little so that they can wrap against each others' metal main bodies, so that the prongs are exactly opposite one another.
  • Gently pivot the connectors' middle-radius tabs (which seem to be crimp-fit) to be on the same side, coming together, when the connectors are thus oriented.
  • On each connector, bend the central terminal's tab out, perpendicular to the prong. Apply a blob of solder to its central stump. Cut off one connector's tab—leave the other's in place.
  • Test-fit the connectors together with the central terminals' stumps touching. Trim the ends of the middle-radius terminals' tabs so they do not keep the connectors further apart. Trim the ends of the outer terminals' tabs so that they come only to the back of the metal plate from which the prong extends, to brace and stick against it and create a neat camera bearing surface.
  • Solder the middle-radius terminals' tabs together. This can be easier if they are held together with a little loop or spiral of uninsulated solid wire, but keep that neat enough to not short anything else out.
  • Use the remaining central terminal tab, together with a little more solder if needed, to apply heat to the central stumps' solder so that these fuse together. Afterward, cut off the tabs' protrusion past the assembly's overall radius.
  • Solder the outer tabs to each opposing connector's metal main body.
  • Apply heatshrink tubing around the connectors' tabs and bodies. Shrink it, then trim off the excess parallel to the metal plate at the plate from which the prong extends.
  • Remove the flue tape and clean off its adhesive.

For the option of remote-triggering both of the cameras, wire in a cable extension to the conjoined terminals before sealing them.
Some camera-settings tweaks for more precise synchronization are described below under flash use, for which precise timing is particularly important.
The best synchronization possible would probably be achieved with the aid of the cameras' internal computers. SDM (StereoData Maker), for compact Canons, makes a start by readying all internal functions except the final, standard taking of the exposure. (One needs to build a special trigger. Frans van de Kamp generously shared instructions and will build one for you at a very reasonable price; if you'd rather do it yourself, a tiny, inexpensive Harbor Freight flashlight readily available in the US can be adapted following instructions available online.) Perhaps Magic Lantern could do the same, and more, since the much quicker DSLRs wouldn't need a significant setup delay. Canon 7D Mark IIs are already set up to precisely synchronize their shutters and framerate to an external flickering light22; why not to an internal or external signal?
Or…don't try very hard to synchronize the shutters at all. A single-camera setup shares a single shutter, with some limits (or, for macro, advantages) to stereo base and optical capabilities. For ultra-high speeds: mechanical shutters can stop a horse, strobes can stop a bullet, and electronic shutters can stop an explosion; some humble DSLRs' sensors even implement high-speed, essentially global, electronic “shutter” techniques of their own. Both cameras' shutters could be opened as the action approaches, then a common flash (with a neutral density filter to dim out the sun) or a high-speed electronic shutter (or a fully electronically timed pair) can expose both pictures, perhaps triggered by an electronic accessory known as a “lightning trigger” (sold for taking pictures of lightning). Panning the cameras (or a mirror into which they look, perhaps better suited to rapid rotation) against a dark background as a strobe-lit scene develops in front of them would even capture a sequence as parts of a single exposure.
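To gauge how strong a neutral density filter the open-shutter-plus-flash trick needs, here is a rough back-of-envelope sketch (the function name and the three-stop safety margin are my own assumptions, not a standard):

```python
import math

def nd_stops_needed(ambient_shutter_s, open_window_s, margin_stops=3.0):
    """Stops of neutral density needed so ambient light gathered during a long
    open-shutter window stays 'margin_stops' below a normal ambient exposure.

    ambient_shutter_s: shutter speed that would correctly expose the ambient
                       scene at the chosen aperture/ISO (e.g. 1/250 s in sun).
    open_window_s:     how long the shutters will actually stay open waiting
                       for the flash (e.g. 1.0 s).
    """
    # Extra light gathered, in stops, by holding the shutter open longer:
    extra_stops = math.log2(open_window_s / ambient_shutter_s)
    # Add the safety margin so ambient barely registers:
    return max(0.0, extra_stops + margin_stops)

# Example: sunny day (1/250 s ambient exposure), shutters held open 1 s,
# ambient kept 3 stops under a normal exposure:
stops = nd_stops_needed(1 / 250, 1.0)  # roughly log2(250) + 3, about 11 stops
```

In other words, "dimming out the sun" for a one-second window on a bright day takes a very strong (roughly 10-11 stop) filter; a darker setting or shorter window relaxes this quickly.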

Video Sync (“Genlock”)

To avoid the distraction and even depth-distortion of one eye seeing a slightly lagging view of a moving scene—motion being the entire point of using video—it's important to record both cameras' frames at exactly the same time. Cameras' ongoing communication for this purpose is called “genlock” (generator locking). But the feature is typically not provided on cheaper cameras, so it's up to you to improvise. Here are some options:

  • Use a purpose-built 3D camera like the Fuji W3. Almost everything will be taken care of for you; video is typically displayed at lower resolution anyway, and motion helps obscure low-resolution blur.
  • Some report good results using StereoData Maker to trigger the beginning of a recording with Canon compacts.
  • Synchronize the cameras at least to the frame, at least at the beginning of a recording, with a sharp signal like a clapperboard or flash in each recording. Similar signals after each clip could measure drift over time. High framerates allow more granularity in dropping a frame here or there to keep synchronization close, but collect an otherwise unnecessary volume of data and can conflict with high-resolution recording.
  • Program genlock or a similar “soft sync” against some signal correlated with framerate into the camera oneself. (Magic Lantern has discussed this, but seems not to provide it yet.)
  • On some cameras, trigger the shutters via the same electronic remote release while recording video to simultaneously “reset” each one's recording operation. If the cameras will be connected by their remote releases for mostly-still use, consider building a connector for an external release into this connection (routed so that it won't strain the cameras' delicate, hard-to-fix internal connectors).
  • On some cameras, use an infrared remote to start video recording simultaneously. The one built into the Canon 270EX II flash (which conveniently won't be needed on the camera for its usual purpose when recording) can work nicely.23 Just move it out of the way once recording is started, reflect it off of something close up, direct its output to the camera's sensors from behind with a fiber optic cable, or build a set of synchronized opto-couplers for the cameras.
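The clapperboard-or-flash approach lends itself to a little arithmetic: given the frame indices where the same sync events appear in each clip, the starting offset and the drift fall right out. A rough sketch (the function and variable names are hypothetical, not from any tool mentioned here):

```python
def sync_offset_and_drift(marks_a, marks_b, fps):
    """Estimate inter-camera offset and drift from shared sync events.

    marks_a, marks_b: frame indices at which the same events (clap, flash)
                      appear in camera A's and camera B's recordings.
    fps:              nominal frame rate of both cameras.
    Returns (offset_seconds_at_first_event, drift_seconds_per_minute).
    """
    assert len(marks_a) == len(marks_b) >= 2
    # Apparent offset of camera B relative to camera A at each event:
    offsets = [(b - a) / fps for a, b in zip(marks_a, marks_b)]
    # Time elapsed between the first and last event, per camera A's clock:
    elapsed = (marks_a[-1] - marks_a[0]) / fps
    drift_per_min = (offsets[-1] - offsets[0]) / elapsed * 60.0
    return offsets[0], drift_per_min

# Example: claps seen at frames 100 and 18100 in camera A, and at
# frames 102 and 18105 in camera B, both nominally 30 fps:
offset, drift = sync_offset_and_drift([100, 18100], [102, 18105], 30.0)
```

A drift figure like this tells you how often a frame would need to be dropped (or duplicated) to hold the pair within a frame of each other.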

If the cameras use rolling “shutters” for video, and one is inverted, it's best to invert that one's scan pattern so the beginnings and ends of each scan line up, too.
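The cost of not inverting the flipped camera's scan can be quantified with a quick sketch (names are my own; the linear row-timing model is a simplification of real rolling readout):

```python
def row_skew_s(rows, readout_s, invert_scan_on_flipped=False):
    """Worst-case timing skew between matching scene rows of two cameras,
    one mounted upside down, both using rolling 'shutters'.

    With the flipped camera scanning in its default direction, scene row r
    is captured at about t = r/rows * readout_s by one camera but at
    t = (rows-1-r)/rows * readout_s by the other; inverting the flipped
    camera's scan direction makes the times match.
    """
    if invert_scan_on_flipped:
        return 0.0
    # Skew is largest at the top and bottom scene rows:
    return (rows - 1) / rows * readout_s

# Example: 1080 rows read out in 30 ms; without inverting the flipped
# camera's scan, the top and bottom of the scene differ by nearly 30 ms.
skew = row_skew_s(1080, 0.030)
```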


Synchronizing cameras' shutter timing for flash can be trickier than synchronizing them without flash, mostly because modern digital cameras introduce a delay for a “preflash” with the shutter closed—or many preflashes, with multi-flash setups—in order to measure the flash power needed for the exposure.24 Electronic flash is extremely quick, so little variations in timing will give stark dark patches rather than subtle disparities from motion between the two images. Repeated trials of the cameras' workings as viewed by a high-speed video camera, or, more cheaply, examination of their high-speed pictures of a cathode-ray-tube TV or monitor's scanning beam, would help pin down the timing differences, their variability, and their causes. There are a few quick fixes:

  • Avoid the few highest shutter speeds with built-in flash, at least if it can't be set to manual mode. Its pre-flashing seems to be slower than a big external flash's. If the standard flash sync speed is 1/200 second, try reducing the shutter speed to, say, 1/60.
  • Avoid the highest and near-highest shutter speeds with automatic flash. Dialing back to, say, 1/100 is likely to work fine. But, for all of these setups, experiment, including with moving subjects that may introduce focus tracking delays.
  • Avoid the very highest shutter speeds with manual flash, including big standalone units (for which fuller-featured modern flashes will provide a distance computer on their rear LCDs for given power settings).25 Dialing back to, say, 1/160 should reliably work well, although the highest speeds may work too.
  • Complex multi-flash automation is likely to require a substantially reduced shutter speed, to, say, 1/30, for the second camera to accommodate the first's numerous preflashes. (It may also capture the preflashes, but in practice this seems to be OK because they're pretty weak—the main flashes will form the vast majority of both images.) A non-automated multi-flash setup simply triggered by a basic remote transmitter should not introduce delay, however.
  • Second-curtain flash also works only at substantially reduced shutter speeds, especially with automation. This is not actually much of a problem, because the point of second-curtain sync is to capture a trail of blur leading up to the flash. Since automation fires its preflash at the beginning of the exposure while the main burst comes at the end, the non-flash camera would seem at risk of missing the burst entirely, but at slower shutter speeds this rarely happens: the burst seems not to be tightly aligned to the very end of the exposure.
  • Delay the second camera when the first one is using flash.
    • Simply putting a flash on each camera might cause automation to interfere—both could be set to manual mode, or one fired into a box to duplicate automation delays (with the drawback of much wasted power and recycle time).
    • Switching the non-flash camera to perform additional or slower tasks before its picture, such as using silent (slower auxiliary mechanics) mode or continuous autofocus, can delay it for a few moments which may align its timing better with the flash camera's for manual, automatic, first-curtain, or second-curtain flash, or all of them. Details depend on the particular camera.
    • Extending the non-flash camera's shutter speed by a small fraction of a stop might make only a negligible difference in its non-flash tone (flash brightness being primarily aperture-driven at typical sync speeds), but ensure catching the other's light.
    • High-speed sync mode seems to provide a light supply generously longer than the sync speed, sufficient to illuminate both cameras' views (perhaps to accommodate low-sync-speed focal-plane shutters that take relatively long to move across the sensor, at the expense of some wasted power?) Because it's a series of pulses, not an instantaneous burst, the fact that it works for each camera doesn't mean the synchronization is accurate even to the sync speed, let alone to the instantaneous speed across any given patch of the sensor. But even this can satisfactorily reduce blur in action, possibly leaving acceptable mis-sync, or weaken daylight's contribution enough for powerful flash to outshine it in portraits without a neutral density filter.
    • Programming the cameras to communicate and work together for 3D (to match other settings, drive 3D-specific adjustments like stereo base or convergence, share their lens, flash, GPS, and other taking data, and even process the pictures) could include delaying one camera for the other's chosen flash mode's automation to work. StereoData Maker, supplemental free-software firmware for compact Canons, enables the cameras to get “as ready as possible” together for an external signal and, it seems, even to calibrate their video timing to optimize synchronization (as well as they can without referencing an external signal); this could be the starting point of super-precise synchronization with much fuller-featured SLRs.
    • A fancy adjustable-delay remote such as the Yongnuo 622-C-TX or the PocketWizard MultiMax may enable setting compensation for each particular kind of flash or other delay. They also enable “Super Sync” matching of focal-plane shutters' tracking to the brightest part of a relatively long, bright flash burst for potentially greater efficiency than special high-speed-sync modes, but using this with multiple cameras would require much closer synchronization than for simply enabling each to capture a single burst within a wider window of time, and would probably require that each camera's shutter move in the same direction (normally not the case with inverted cameras, but reversible with electronic rolling shutter, or, of course, a non-issue with an electronic global shutter.)
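Once you've measured each camera's trigger-to-exposure latency (say, from high-speed video or CRT-scan photos as suggested above), the delay to program into an adjustable remote is just the difference. A trivial sketch, with hypothetical names:

```python
def delay_compensation(latency_flash_cam_s, latency_plain_cam_s):
    """How long to delay the non-flash camera's trigger so both shutters
    open together, given each camera's measured trigger-to-exposure latency
    (e.g. averaged over repeated high-speed-video trials)."""
    delta = latency_flash_cam_s - latency_plain_cam_s
    if delta < 0:
        raise ValueError("flash camera fires first; delay that one instead")
    return delta

# Example: flash camera needs 120 ms (preflash metering), plain camera 60 ms,
# so the remote should hold the plain camera back by 60 ms:
extra = delay_compensation(0.120, 0.060)
```

Remember that the latencies vary from shot to shot, so measure several trials and check that the spread, not just the average, fits within your flash-sync tolerance.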

Focus Sync

Getting the two cameras to focus on the same thing, and fast, is perhaps the most difficult essential challenge in stereo photography. Exposure and zoom can generally be manually adjusted for a given scene and tweaked in post-processing. Shutter timing needs to match, but triggering it is a simple, standard operation. Focusing, by contrast, has to be done for pretty much every picture, and involves not only a choice of what to focus on but also precise, often iterative, even ongoing adjustment to achieve it. For video, matching the paths and timing of focus matters, too. Here are a few ways to do it.

  • Manually focus each camera separately. Works with pretty much any camera, but slow. Good for portraits. A two-stage shutter release or other means of keeping the cameras awake and fully ready to take a picture on signal may still be necessary for precise shutter synchronization, especially important with flash.
  • Scale or zone-focus. Also works with pretty much any camera, and fast (no per-picture delay), but conflicts with shallow depth of field. The two-stage shutter release may again be needed to keep the cameras ready for precise shutter synchronization.
  • Let each camera autofocus itself simultaneously. A two-stage remote shutter release with connectors (including matching wireless receivers) to each camera, or, with some cameras, coupling them to each other (such as Canons via a submini stereo audio-type connection or spliced 3-pin connector, as the particular camera may have, or Nikons via the 10-pin cable some take), will wake up and focus the two cameras simultaneously with a half-press, keep them focusing in continuous AF mode, and ready them to trip their shutters simultaneously with a full press.
    • Most phase-detect sensors that are not accompanied by face recognition (most don't have the feature) will by default focus on something big, near, and close to the middle. They'll typically find the same general thing by default. But working around foreground obstructions or reaching small targets with a cheaper camera's sparse array of focus points can be unreliable when not confirmed directly.
    • For increased certainty that both will focus on what you want, select a particular autofocus point or area and focus-recompose in single autofocus mode. Generally the center point is most sensitive; if using off-center points on cameras rotated with respect to one another, pick points covering corresponding parts of the image rather than corresponding positions on each camera body. For close-ups, “toe in” the autofocus points for your particular working distance (this might be programmable to happen automatically), or prefocus each side separately.
    • Continuous autofocus running on both cameras can reduce shutter-sync accuracy as each may continue doing something focusing-related for a tiny but inconsistent period before tripping its shutter. Some cameras can be set to prioritize release over focusing more than usual to ameliorate this, a function that may remain relevant where one camera is alternatively configured to run continuous autofocus to drive both cameras' lenses, with the other camera not having to focus anything. Ultrasonic or stepping motors rather than basic micromotors also seem to speed both to take their respective pictures.
    • Live view is a direct, precise, but typically slow method, with versatile independent touch-to-focus and face recognition (which, like evaluative metering, sometimes doesn't work properly upside down)26; it may be great for the common portrait technique of focusing on the nearest eye and little else with shallow depth of field. It's faster with some cameras' special phase-detecting image sensors.
    • A camera with a flash or flash controller attached may delay a fraction of a second for extra steps with that before tripping its shutter.
  • Make one lens track the other. This can be done by hand by matching focus scales, mechanically by connecting two lenses' focus rings (provided that they're fixed firmly to the focus workings, as with Nikon body-motor autofocus lenses; rods may be more precise than belts, but have more limited throws especially without unwieldy extension arms; and, moreover, added resistance may wear the camera) and other adjustments, or electro-mechanically by rotary encoders on the master lens's settings and a servo on the slave's (the approach that seems to be described for a commercially-available 3D broadcast kit). But why? The next option is easier and better!
  • Have one camera autofocus both lenses. This works well with Canon stepper-motor (STM) lenses: the digital signals convey identically to each lens with ease, and the lenses' quantized motors translate them identically into focusing movement. Canon cameras designed for some kind of on-sensor phase detection, such as Hybrid AF, can drive them much more smoothly during video (the better versions just eliminate some last-minute back-and-forth focus tweaking). Sigma DN “linear” motors are reportedly also a kind of stepper: they could likewise share a camera's signal, or an interpretation of its ultimate focusing instructions adapted to their needs, potentially even better avoiding or correcting lens-specific drift (the company's move to user-upgradeable firmware potentially speeding progress).
    • A cheap and easy-to-try way is to couple one STM kit lens's camera-connection wiring to the other's.27 With a detachable coupling, the lenses can be used normally as before, too. This generally works smoothly, and when it does, it works very accurately: if something isn't right, it's typically noticeably wrong, not subtly off lurking to be discovered later.28 Due to Canons' all-electronic lens communication, the apertures will be synchronized, too—with fine-grained adjustments from the camera, they'll “pull” together on both lenses and so provide exactly matching shutter-priority autoexposure.29
      • A simple way to perform the modification is as follows, and this gallery illustrates. Most of it is identical for each lens. Wear (and ground) an anti-static strap.
        • Remove the screws holding the plastic mount to the back of the lens.30 Remove the plastic mount.
        • Remove the screws holding the printed circuit board on the back of the lens. Free the rear circuit board: use a soft probe such as a toothpick to loosen the clamp connectors, engaging the holes on the stiff tabs that hold the flex leads within the lens.
        • Set the rest of the lens aside, and flip the circuit board over to access a set of solder points opposite the camera contact assembly. You will splice a set of wires to each of these contacts except the DLC pin.31 Silver-plated, PTFE (“Teflon”)-insulated stranded wire is ultra-durable and easy to use, as the jacket won't melt; 22 or even 26 gauge is flexible and has ample mechanical strength and conductivity for a camera project.32
        • Prepare each wire: strip off a little of the insulation, twist the strands together, then pinch the bundle to flatten it out. “Tin” it with a generous coating of solder (but not a great round blob), then clip off the end and flared sides to make a little square contact pad at the end of a well-sheathed wire that can bend against others without shorting.33
        • Decide where the wires will head out of the lens (or to a connector leading out of the lens). An area on the lens barrel on the D-GND pin side, far enough from the mount to clear the circuit board's thickness, requires cutting only one layer of plastic, is easy to reach with the wires, and is generally unobtrusive in operation, although a prototype reveals potential interference with zooming the lens to its very widest setting. Consider other positions (or different, inward-facing positions on the two lenses), especially if you'll add connectors.
        • Affix a wire to the solder point for each pin (no more solder should be needed; just heat the wire's pad until the solder blobs unite) except, in the basic implementation, the DLC pin.34 Pre-orient each wire toward its exit from the lens (or connector installation point), starting from the end (if any) whose wires will go on the bottom of a neatly overlapping bundle.
        • Use the soldering iron to melt the connection hole in the lens body, avoiding the shower of shavings or cracks that forceful cutting could create. Trim off the melted “lip” that may form on each side of the hole with a small, very sharp knife. Remove any loose debris from the lens.
        • Return the circuit board to its original orientation with respect to the lens body. Attach the wires to a connector (such as a computer-type connector with enough contacts) mounted into or against the non-rotating part of the lens barrel, or run them out the hole, labeling each one with its contact if the wires aren't already coded (for instance, by color).35 One could run the wires from one lens directly to a second's board, but modifying each lens separately and coupling the lenses afterwards (via built-in connectors and a cable, or at least by splicing or mating plugs from each lens's wires) will be easier to handle.
        • Reconnect and reattach the circuit board to the lens body, then reattach the mount.
        • Put some tape over the “slave” lens's contacts to insulate them from its camera, preventing conflicting operation instructions or triggered error conditions. Prefer something slicker, stronger, and more stiffly adhesive than vinyl “electrical tape”, which tends to snag on the pins, even if it's not as insulating.
        • If something doesn't work, don't worry: unless you zapped the electronics (hence the strap), cooked them (unlikely, as this doesn't involve soldering directly to any chips), or badly broke something (which takes force), it's probably fixable. Just work backwards and see what went wrong.
      • Connecting the lenses' camera connections won't automatically connect one lens's manual-focus encoder to the other's, although further work to do this (or to construct an external manual-focus signal input) might be worthwhile, considering the smoothness of focus-by-wire and the precision of stepper synchronization.
      • The lenses' stabilization simply works if it's turned on. Good luck finding that in an expensive “cinema” lens! Try it with a damped suspension harness to smooth out bigger motions.
    • A more elegant and long-term economical way would be to modify the cameras to support focusing coordination. Again, a few options:
      • Tap one camera's focus contacts to communicate with the other's lens. The master camera's conductors could be soldered, silver-epoxied, or spring-fit against one camera's pins (for instance, by a set of tense holes through which the pins would pass), and led through a gap in the body (which could be easily created in the non-lens-facing side of a mounting ring taken from an extension tube—some of which are easy-to-work plastic—or obtained as a spare, or made through a slim mount-extension for lenses that are able to take up the extra flange focal distance by themselves adjusting “past infinity”). The slave camera's conductors could be insulated from its lens, and instead used, if at all, for their springiness to hold the similarly-routed input contacts (which could have solder-blob contacts, and be fixed laterally in relation to their own mounting ring) against their lens. A DIY flexible printed circuit could easily provide thinness and lateral stiffness.
      • A frame nestled at the camera's opening as for supplemental antialiasing and small camera-based versions of costly astronomical filters (but preferably not interfering with the mirror) could hold the supplemental contacts in place, and even electronics to support wireless communication (thus eliminating the need for any physical camera and lens modification), interpretation and/or recording of the focus position signal.
      • The camera's own wired or wireless communication capabilities currently used for tethered operation could be programmed to synchronize focusing. With the “slave” camera thus converted into a partner, it could use its own knowledge of the distance, area, and object being focused on to perfect its own focus. Compared to this ongoing, quick, detailed communication, having one signal the other to match settings like its aperture, shutter speed, sensitivity, and shutter-release timing (distinct from flash and focus-lock delay), and follow or accept a common source for video-frame timing, should be easy.
      • Synchronized autofocus, aperture, and even power-zoom could work through a focal reducer, teleconverter, or other optical coupler to work with many existing lenses, including cheap ones from old manual-focus systems. All of the added parts could fit within the space they create between the existing cameras and lenses, and their single body would mechanically couple the cameras (with fixed, adjustable, or power-driven interocular distance which could be keyed to focus depth) as well as optically synchronizing them. The gadget could work on its own, but programming the cameras to recognize it and turn to matching up other details on their own would be very convenient.
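The close-up “toe-in” of autofocus points mentioned above is easy to estimate: a subject on the rig's centerline sits off-axis from each camera by half the stereo base, so its image lands off-center by roughly f·(b/2)/d. A sketch under that thin-lens approximation (the function name is my own):

```python
import math

def af_point_offset_mm(stereo_base_mm, subject_distance_mm, focal_length_mm):
    """Lateral sensor offset at which each camera's AF point should sit so
    both cameras focus on a subject centered between them ("toe-in").

    Thin-lens model: a subject on the rig's centerline appears off-axis by
    half the stereo base, so its image shifts about f * (b/2) / d from the
    sensor center (toward the rig's midline on each side).
    """
    half_angle = math.atan2(stereo_base_mm / 2.0, subject_distance_mm)
    return focal_length_mm * math.tan(half_angle)

# Example: 40 mm stereo base, 300 mm working distance, 50 mm lens:
# each AF point sits a few millimeters inboard of the sensor center.
off = af_point_offset_mm(40, 300, 50)
```

At normal distances the offset collapses toward zero, which is why toe-in only matters for close-up work.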

Zoom Sync

For still photography, the cameras' lenses need only be periodically adjusted to approximately desired, approximately matching zoom settings. Loose framing (with particular attention to closeups, for which a few inches' parallax can take up a significant part of an image) can prevent cutting off part of the picture on one side or the other, and post-processing can adjust for slight mismatches in magnification. Slight mismatches in depth of field from equal f/ratios but different aperture sizes at slightly different effective focal lengths are typically not noticeable. Simply setting the lenses to approximately equal positions on their built-in zoom scales can work fine, while improvements would add convenience and marginal extra fast-action capability. For movies, post-processing to match and smooth transitions would be more complex, so it's more important to actually synchronize zooming, preferably in a way compatible with gradual, steady adjustment. Here are some options:

  • Set the lenses to match by hand. This is very simple.
    • Turning the cameras to see one zoom scale or the other can be eliminated by adding a duplicate scale. One easy, reversible way to do this is to precisely measure the distances between the zoom markings (on center, using a precise instrument such as calipers, but be very careful and/or cover sharp points to prevent scratching) and make a replacement with a label-maker, either printing from a computer or measuring the character and space lengths and proceeding by trial and error at a print height of choice (perhaps economized by printing multiple lines high). Brother P-touch extra-strength-adhesive tape, clear with black or white markings to match the lens's, works well.
  • Mechanically synchronize the lenses. This requires basic hobbyist skills. One way is a simple tangential connecting rod affixed to collars, but that is restricted to fairly limited zoom throws, especially without connecting arms protruding far from the lenses. Another is “timing belts” (a name for precise toothed belts generally, not just the sometimes fail-unsafe, expensive-to-swap camshaft connection unfortunately found in many car engines). Some are sold for follow-focus purposes by “Swedish Chameleon”. They could be used to connect two lenses to one another, most conveniently with wide-range zooms that don't need to be swapped often, and most suitably with cameras or lenses firmly braced against rotation, as by pins or connected tripod feet. These could even be connected to a follow-focus, too. Some common sizes match some lens grips' teeth well: the Swedish Chameleon belts mesh well with the Tamron 16-300mm's zoom ring (a lens unfortunately showing substantial superzoom performance compromises). They would still need to be fitted, spliced if needed (which weakens the belts' embedded stretch-resistant filaments), and tensioned, as with miniature springs, to remove residual slop. Follow-focus gear rims and friction collars are sold for lenses; a more precise solution might be a matching-pitch follow-focus belt affixed inside out around the lens, perhaps with temporary adhesive or a gap-filler that would mold itself into the lens's own grip.
  • Electronically synchronize the lenses' zooms. This would require more sophisticated skills, but would be potentially far smoother, more precise, and more reliable. Rotary encoders using position tracking36 or direct electronic stepping positioning as for focus, could finely and smoothly synchronize zoom, aperture (including within a shot for apodization effects), neutral density, polarization, and any other rotation-configured settings.
  • There is a very old Canon EF 35-80mm power zoom lens, not otherwise much appreciated. Its setting might be easily electronically synchronized, albeit likely with some drift, as with non-stepper focus motors.
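For the hand-matched-scales approach, a little interpolation can also place extra markings the factory scale lacks. This sketch assumes the ring is roughly linear in log focal length between measured marks, which is only an approximation to check against your particular lens; the names and sample measurements are hypothetical:

```python
import math

def mark_position(f_mm, known_f, known_pos):
    """Interpolate where an unmarked focal length falls on the zoom ring,
    assuming the ring is roughly linear in log(focal length) between the
    measured markings (a rough approximation over short spans).

    known_f:   marked focal lengths, ascending (e.g. [18, 24, 35, 55]).
    known_pos: measured marking positions in mm from the first mark.
    """
    for (f0, f1), (p0, p1) in zip(zip(known_f, known_f[1:]),
                                  zip(known_pos, known_pos[1:])):
        if f0 <= f_mm <= f1:
            t = (math.log(f_mm) - math.log(f0)) / (math.log(f1) - math.log(f0))
            return p0 + t * (p1 - p0)
    raise ValueError("focal length outside the measured range")

# Example (made-up measurements): where does 28 mm fall between the
# 24 mm mark at 6.5 mm and the 35 mm mark at 13.0 mm?
pos = mark_position(28, [18, 24, 35, 55], [0.0, 6.5, 13.0, 20.0])
```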

Applications & Extensions


  • Dual-lens camera: Focus and other lens-setting synchronization techniques for dual-camera rigs could easily enable synchronization of two lenses' positions on a single camera body. A single body would be more compact, more rigid, reduce interocular distance, allow lower-level, more precise timing control, and eliminate the need to configure ongoing communication between two devices. Semi-permanently installed wide-range zoom lenses could accommodate a robust, firmly attached but less convenient to remove sync belt. 75mm spacing would allow perfect interleaved spacing of 35mm film frames.
  • Smartphone, etc. synchronization
    • A smartphone with a camera at each end (or two or more along the top edge) would be very handy for 3D, and synthetic bokeh could even mimic larger cameras' apparent depth of field.
    • Programs such as “Synchrocam” can couple multiple handheld computing devices for 3D photography. Their near-top-edge camera positioning and small size would enable short stereo bases for closeups. Maybe wireless networking could even “genlock” them!
    • The “Fire” phone appears to integrate quad stereoscopic cameras, but, ostensibly, for the bizarrely limited purpose of tracking the user's gaze. If these can be accessed, corrective lenses applied if they're designed for close-focus only, and the overall system determined to be secure, it could make a nice 3D camera.
  • Smartphone image-splitter 3D attachment: a thick case could accommodate an image-splitting mirror 3D attachment with a short stereo base especially suitable for selfies, small groups, macro, object identification and other common phone uses. The space near the battery area could accommodate an extra-large or supplemental battery.
  • Handheld device 3D mount: A pair of compact, high quality “lens style” camera units could securely and accurately mount to the back of a standard or accessory case for a smartphone or tablet which would guide their operation in 3D.
  • “Flash” projectors: A regular or LED flash could be equipped with a pattern projector to improve automated depthmapping.
  • Left-handed grip: Techniques for attaching an entire other camera could also be used to affix a contoured grip and dual-stage remote release (or even control wheels, perhaps salvaged from a cheap or otherwise obsolete but electrically-compatible grip) beside a typical camera's stubby left side.
  • Polarizer orientation. One or more cameras' polarizers, especially if they need to be power-synchronized anyway, may as well be automatically kept in the proper orientation with respect to the sun by reference to location, direction, and time, which a fancy camera (or smartphone) generally knows anyway.
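Several of the items above lean on choosing a sensible stereo base; the classic rule of thumb is about 1/30 of the distance to the nearest object in the scene, shrunk further for macro to keep disparity fusible, as noted at the top of this article. As a one-liner (the function name is my own):

```python
def stereo_base_mm(nearest_mm, divisor=30.0):
    """Classic rule-of-thumb stereo base: roughly 1/30 of the distance to
    the nearest object in the scene. Use a larger divisor for macro work,
    where disparity must be kept small enough for the brain to fuse."""
    return nearest_mm / divisor

# Example: nearest subject 2 m away suggests a base of roughly 67 mm,
# close to typical human interocular distance.
base = stereo_base_mm(2000)
```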

Photography techniques

  • It seems worth investigating whether image pairs from a dual-pixel autofocus sensor could provide very short-base stereo images directly. If, even with all available processing techniques (including ones too slow for autofocus guiding), a sensor-wide autofocus array turns out to be better at determining when something is in focus after an iterative process than at determining precisely how far it is out of focus, then racking focus and checking sharpness at each incremental distance would make a more exact depthmap than a single shot of data from the array.
  • View-duplicating techniques: A beam splitter such as a mirror rig (or, for that matter, a prism array of the kind used for color separations) could serve non-stereoscopic applications by dividing an identical image to be sensed differently, as for one-shot HDR or capture across multiple wavelengths. Infrared flash might be invisible to animals and improve resolution, to be colorized from low-resolution ambient-lit visible images. Where precise alignment is not important, stereo settings-synchronizing techniques and simpler mounts could support applications such as interleaving mid-range cameras' shutters, mechanical or (depending on roll time) electronic, with a simple alternating multi-output trigger for extreme framerates in sports: with two or three basic action SLRs, you can truly just capture the whole moment and decide later. Video could similarly be interleaved with offset genlocks.
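The focus-racking depthmap idea above can be sketched in a few lines: rack focus, record a frame per step, and at each pixel keep the distance whose frame is locally sharpest. A crude sketch using squared gradient magnitude as the sharpness measure (assumes numpy; the names are my own, and a real implementation would smooth the sharpness maps):

```python
import numpy as np

def depth_from_focus(stack, distances):
    """Build a crude depthmap from a focus-racked stack: at each pixel,
    pick the focus distance whose frame is locally sharpest.

    stack:     array-like (n_frames, H, W) of grayscale frames, one per
               focus step.
    distances: focus distance for each frame (same length as stack).
    """
    stack = np.asarray(stack, dtype=float)
    # Local sharpness proxy: squared gradient magnitude of each frame.
    gy, gx = np.gradient(stack, axis=(1, 2))
    sharp = gx**2 + gy**2
    best = np.argmax(sharp, axis=0)       # sharpest frame index per pixel
    return np.asarray(distances)[best]    # map frame indices to distances
```

Feeding this the output of a motorized focus rack (or a lens driven over its STM coupling, per the Focus Sync section) would yield a dense depthmap without any second camera.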

Enhanced viewfinder

  • Depth information, or coded defocus information (blur computed electronically, or visible optically via an equivalent aperture giving the same blur as the taking lenses'), together with parallax-corrected frame lines showing the area to be captured, and the “stereo window” within it, over an extra-wide field of view, would be informative for videography. It could be overlaid on a basic viewfinder image from a “master” camera (or subsystem) used as an autofocus-assist unit rather than to take the pictures directly. A wider field of view would require a different focal-length setting; if it comes from a different, not-actually-being-focused, or non-parfocal lens, simple duplication of focusing instructions would not suffice, and computations would be needed to drive the taking lenses.
    • If the autofocus array is better at determining when something is in focus after an iterative process than at determining precisely how far it is out of focus, the AF-guidance unit will actually have to rack focus. A coded aperture, such as a color-coded aperture, might work better for determining how far out of focus the various parts of the focus-guiding camera's view are, and its strange visual effects would be irrelevant there.
    • An optical through-the-lens viewfinder with many split-image sections (which might be colored to easily distinguish the sides) could show the amount and direction of defocus across a scene at a glance. While not pretty, it would be convenient for visual focus-pulling.
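
The parallax correction for such frame lines is plain stereo geometry: for parallel cameras, a point at depth Z appears displaced on the two sensors by f·B/Z. A minimal sketch, with units as noted (the function name is my own):

```python
def disparity_px(focal_mm, base_mm, depth_mm, pixel_pitch_um):
    """On-sensor disparity, in pixels, of a point at depth_mm for two
    parallel cameras with lens separation base_mm and focal length
    focal_mm."""
    disparity_mm = focal_mm * base_mm / depth_mm
    return disparity_mm / (pixel_pitch_um / 1000.0)
```

The overlay would shift each camera's frame lines horizontally by the disparity at the intended stereo-window distance; anything with larger disparity than that sits in front of the window.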

Focus pulling

  • A camera with superior phase-detect and stepping-motor autofocus could act as a basic electronic “first assistant camera” focus-puller for other cameras chosen for other critical attributes:
    • Genlock
    • High resolution
    • High dynamic range
    • High speed
    • Retro. Amaze your film school with the focus pulls and sharpness ancient gear can manage wearing a Canon kit lens or basic wide angle. Try a phase-detect sensor behind a beamsplitter to automate a custom movie camera. At the rate 35mm eats film, you might want to do things right with a real focus puller, but this could help them snap into focus like Autotune.
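
However the commands are conveyed, each taking lens and adapter combination would need its own calibration. One simple approach is a lookup table measured once on a test chart and interpolated at run time; a sketch, assuming hypothetical encoder units for the slave lens's focus motor:

```python
from bisect import bisect_left

def make_focus_mapper(calibration_points):
    """Build an interpolator from sorted (guide-camera distance in mm,
    slave focus-encoder position) pairs measured once per lens/adapter
    combination. Encoder units are hypothetical."""
    xs = [d for d, _ in calibration_points]
    ys = [e for _, e in calibration_points]

    def mapper(distance_mm):
        # Clamp outside the calibrated range rather than extrapolate.
        if distance_mm <= xs[0]:
            return ys[0]
        if distance_mm >= xs[-1]:
            return ys[-1]
        i = bisect_left(xs, distance_mm)
        t = (distance_mm - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + t * (ys[i] - ys[i - 1])

    return mapper
```

More calibration points where the encoder-to-distance curve bends (typically close up) would keep the linear interpolation honest.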

Adapters should be checked for focus matching. Some with optical elements may be internally adjustable; others could be machined or shimmed. Electronic compensation applied to the focus signal should also be possible, though it would require advanced skills to set up. If the guide camera is using its regular phase-detect array, which suits the technique of tracking the subject with the center of a pivotable master camera while an assistant frames the view with the taking cameras, AFMA may function.

  • Eye-control as in the EOS 3 (which unfortunately doesn't track eye movement continuously as-is) would be an intriguing way to select autofocus points over time, although it may be too easy to get distracted for continuous use in long videos. The eye's own accommodation, in a viewfinder that preserves the original scene's depth differences, might also help guide the cameras.
  • For pixel-perfect focus more directly, the cameras could share the area and depth (and thus parallax), or the recognized object, upon which to focus, each then making its own direct refinements.
  • Object-identification focus: A camera with wide, uninterrupted direction-detecting autofocus coverage could track an object identified by special illumination, perhaps with a pattern indicating the precise point to focus on (or the area within which to look for an element such as a face to focus on if possible). This could be pulsed between the main cameras' exposures, or differentiated by an attribute such as color, and the master camera could follow a program to look out for it, or simply be blinded to others, as with IR-only filtration (with focus compensation for the other cameras to the extent their lenses are not sufficiently achromatic).
  • Jonathan Yi's irreverent “Canon Cine Primes vs. L Series review” notwithstanding, for most of us it is manual focus that is “bull” and “breaks often”! Use the modern miracle of autofocus!
  • Touchscreen remote control: if a touchscreen's inputs cannot be duplicated by a typical wired or wireless remote, a mechanical finger or array of nubs might be set to push on it, or electrical circuitry actuated above it, for remote input driven by other systems such as eye tracking or simultaneous focus-target instruction for multiple cameras.
  • Manual-focus lenses: Economical manual-focus SLR lenses or super-fancy rangefinder ones could be autofocused by mounting to a big helical, with a moveable sensor (perhaps itself in a big helical), or with a gripping mount like a “Top-Off” jar opener (perhaps extended to multiple padded claws or tightening cams in a big polygonal layout) to drive floating elements. On-sensor phase detection would allow minimal flange focal distance, and stepping-motor precision would eliminate back-and-forth not suited to their stiff, precise mounts. A tiny Leica lens collared by a STM focusing unit might slip right into a mirror-up Canon. On the other end of the size spectrum, a big linear stepper could focus a press-style camera, with a computer perhaps recognizing each lens by coded markings on its board.
  • Zoom: The focal-length communication ability of modern zoom lenses could serve as repeatable breakpoints for precisely automatable power zooms.
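
For driving a manual-focus lens on a big helical, the computation under a thin-lens model is simple: focusing on a subject u from the lens needs extension f²/(u − f) beyond the infinity position, convertible to stepper counts from the helical's pitch. A sketch (names and units are my own; real lenses with floating elements will deviate from the thin-lens figure):

```python
def helical_extension_mm(focal_mm, subject_mm):
    """Extension beyond the infinity position needed to focus a thin-lens
    model lens on a subject subject_mm from the lens:
    1/f = 1/u + 1/v gives v = f*u/(u - f), so v - f = f^2 / (u - f)."""
    if subject_mm <= focal_mm:
        raise ValueError("subject at or inside the focal length cannot be focused")
    return focal_mm ** 2 / (subject_mm - focal_mm)

def extension_steps(extension_mm, helical_mm_per_rev, motor_steps_per_rev):
    """Stepper-motor steps to produce that extension on a helical of the
    given pitch (direct drive; gearing would scale this)."""
    return round(extension_mm / helical_mm_per_rev * motor_steps_per_rev)
```

A 50mm lens at a 1m subject needs about 2.6mm of draw, which is why a modest helical covers most of the range and the last few steps matter so much close up.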

Integral processing

  • Tethered computer: A standardized open device such as a tablet PC or smartphone could provide a stable, extensible platform for configuring and possibly synchronizing both cameras and for processing the 3D pictures into output formats as they are taken. Current single-camera open-source projects include digiCamControl and pktrigger.
  • Onboard: The cameras' own computers might likewise support some automatic configuration-matching and pre-processing of 3D images and video as they are taken. Canons are at least accessible with Magic Lantern; Samsung's Linux might support reliably portable programming developed as part of a package also suitable for tethering to a fuller computer.
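
As a sketch of the synchronization half of the tethering job, one portable approach is to block one thread per camera on a barrier and release them together; the trigger callables below are stand-ins for whatever the tethering tool actually exposes (for instance, a wrapper around digiCamControl's command-line interface):

```python
import threading

def fire_all(trigger_fns):
    """Release several tethered cameras as nearly simultaneously as
    threads allow: each worker blocks on a barrier, then calls its
    camera's trigger function. trigger_fns are hypothetical stand-ins
    for the tethering tool's release commands."""
    barrier = threading.Barrier(len(trigger_fns))

    def worker(fn):
        barrier.wait()  # all threads arrive here before any camera fires
        fn()

    threads = [threading.Thread(target=worker, args=(fn,)) for fn in trigger_fns]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

USB latency will still dominate the jitter; for frame-accurate work a hardware release signal remains the safer path, with the tether handling settings and downloads.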

Image editing

  • Artificial bokeh: 3D rendered “bokeh” can not only provide a “perfect” smooth Gaussian blur or any other shape desired, but need not develop linearly with depth. A subject's entire face can be in focus. The plane of focus can become more of a drop-cloth.
  • Perspective correction: depth detection could enable automatically warping the picture to place the camera “parallel” (or at any other angle desired) to the scene, or to flatten out a curved subject such as a sheet of paper.
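
The nonlinear-blur point can be made concrete with a depth-to-blur mapping that, unlike real defocus, holds a whole band around the subject perfectly sharp and then ramps up smoothly; the actual rendering would apply a per-pixel blur of this radius. Parameter names and the ramp shape are my own choices:

```python
def blur_radius_px(depth_mm, focus_mm, max_radius_px, sharp_band_mm=300.0):
    """Synthetic-bokeh blur radius: zero across a chosen depth band
    around the subject (so an entire face stays sharp), then a smooth
    quadratic ramp, capped at max_radius_px; real optical defocus, by
    contrast, grows from the focal plane immediately."""
    miss_mm = abs(depth_mm - focus_mm)
    if miss_mm <= sharp_band_mm / 2.0:
        return 0.0
    t = (miss_mm - sharp_band_mm / 2.0) / 1000.0  # ramp over roughly 1 m
    return min(max_radius_px, max_radius_px * t * t)
```

Any monotonic curve would do; the point is that the relationship between depth and blur becomes an artistic choice rather than an optical constraint.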

Computational photography

  • Depthmapping. Given the distance between the cameras, or calibration against known distances, absolute as well as relative depth could be determined.
  • Focus settings other than simply “on the subject” could be cloned, for instance as pre-programmed sequences.
  • Multi-camera: An array of cameras could be controlled much as a stereo pair for applications such as wraparound 3D modeling. To focus on a common point a different distance from each would require redetermination of focus depth for the various cameras and signaling them (or letting them signal themselves) rather than blind copying of the first's.
  • Color recognition: A depthmap (even a rough one gathered from autofocus points) coupled with knowledge of a flash's or other standard light's power could determine a scene's absolute tone and incident-light color. This could improve white balance and flash-balancing, for instance with multiple units or electronic color filtering or selective reflection.
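
The absolute-depth computation is just the disparity relation for parallel cameras inverted, Z = f·B/d, with everything brought into common units. A minimal sketch (the function name and units are my own):

```python
def depth_from_disparity(focal_mm, base_mm, disparity_px, pixel_pitch_um):
    """Absolute depth (mm) of a point from its measured stereo disparity,
    for parallel cameras with known base and focal length: Z = f * B / d.
    Calibrated rather than nominal values for f and B refine the result."""
    disparity_mm = disparity_px * pixel_pitch_um / 1000.0
    return focal_mm * base_mm / disparity_mm
```

Since depth varies as 1/disparity, precision falls off quickly with distance, which is why a longer stereo base helps for far subjects and a shorter one for macros.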

Commercial applications

  • Item identification for shopping and selling. Selling secondhand goods (and purchasing new ones, although deliberate showrooming is unethical and may ultimately kill its host) can be much easier if the items can be automatically identified and descriptions and values pulled together from Internet sources. 3D size and shape recognition could greatly narrow down a computer's task. Floppy items like clothes could be arranged in standard positions. Over time, the system would learn to refine itself from the choices made by its users, who would do well to insist on free availability of the database lest they simply build up a market advantage against themselves.
  • Photo organization. 3D models of places and people, including size, would be much easier to recognize and sort out than flat pictures alone. Time and location data could narrow the possibilities further.
  • Appearance classification for online dating. Users could accurately search one another's 3D pictures, or models built up from one or more users' 2D pictures, by fine-grained appearance classifications (not just dimensions), broad correlation, or even by reference to an unavailable third person. Simply “liking” or not is old-school: how about a big game of “Guess Who?”


Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License