When most people look at a scene, they’re able to discern subtle shading differences from the brightest areas to the darkest. If we’re out on a sunny day and enter a dark building, it might take our eyes a few moments to adjust to the new range of light. If we then look out the door we entered through, the outdoors might be a bright glare that makes details hard to pick out. We might wish to see details from the shadows and the highlights simultaneously. The difference between the brightest and darkest areas of a scene is what is known as the "dynamic range" of a particular lighting situation. The human eye is pretty good at resolving scenes that include a large dynamic range, though even our vision has limits.
The limitation on dynamic range is exacerbated by photography, as film has a dynamic range narrower than human eyesight. Most digital camera sensors, or the formats they save their pictures in, are even more constrained. Most consumer digital cameras save their pictures as 8-bit JPEG files. That is, they use 8 bits to represent the amount of light in each pixel, meaning they can only represent 256 different light intensities. For capturing scenes with a wide dynamic range, that just doesn’t work.
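As a back-of-the-envelope sketch of those numbers (the function names here are my own, purely for illustration, and this ignores gamma encoding, which changes the effective range):

```python
# Rough arithmetic only: how many intensity levels an n-bit file can store,
# and that span expressed in photographic stops (each stop doubles the light).
import math

def intensity_levels(bits):
    # An n-bit value can take 2**n distinct values.
    return 2 ** bits

def range_in_stops(levels):
    # Ratio of brightest to darkest representable value, as powers of two.
    return math.log2(levels)

print(intensity_levels(8))   # 256 distinct intensities in an 8-bit channel
print(range_in_stops(256))   # 8.0 stops, in purely linear terms
```

Eight stops sounds like a lot, but a sunlit scene with deep shadows can easily span more than that.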
Traditional Dynamic Range Solutions
The traditional way to deal with this in photography is to adjust the exposure. One typically sets the exposure to capture a segment of the dynamic range, so that the portion of the picture one is most interested in is well represented. This might mean, though, that some portions of the picture will be blown out (overexposed) or too dark to resolve details (underexposed). Most cameras have an exposure compensation adjustment that lets one shift the range by plus or minus some amount, typically up to 2 EV. This doesn’t capture any more dynamic range in a single picture, but rather sets where the captured dynamic range falls to best represent the subject of the scene.
If you’re shooting pictures of your friends on a ski slope, for example, you might adjust the exposure so that their faces are well exposed and the snow behind them is blown out, since the detail of the snow isn’t your subject. On the other hand, if you want a picture for a brochure touting the snow grooming of a ski resort, you might adjust the exposure so that details in the snow are well represented, while the skiers are just dark shadows.
Introducing HDR Photography
Wouldn’t it be nice if we could have a single image that covered the high dynamic range of such situations so that we had clarity from the brightest portions to the darkest? With High Dynamic Range (HDR) photos (sometimes called Dynamic Range Increased or DRI photos), we can. Further, we can do this without waiting for a breakthrough in photosensors or film, and without even necessarily replacing our current cameras. The trick is that we take multiple pictures at different exposures, so that we cover the area of the dynamic range we’re interested in, and then we blend them together to form a composite image that includes more information than any one image represents.
You may well be familiar with doing this with panorama shots to capture a field of view that’s grander than your camera will capture in any one shot. There, you might take a shot, move the camera so that your next shot was only partially overlapping the previous, and then merge them, using the overlap as a reference, so that the new image was wider and/or taller than you could capture in a single frame. With HDR photography, we’re doing the same thing, only to increase dynamic range, rather than field of view. Here, our shots will typically represent the same field of view, just at overlapping exposures such that each picture captures a portion of the dynamic range. With a series of images about 2 EV apart, you can capture details from the darkest to the lightest portion of your dynamic range. How many images? That depends on how wide the dynamic range you’re trying to capture is. For most outdoor daylight shots, I find three images is sufficient, and I’ve sometimes gotten away with just two. Let’s now look at some details of how we might shoot and combine them.
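The arithmetic behind such a bracket is simple: each step of +1 EV doubles the exposure time, and each step of −1 EV halves it. A hypothetical helper (the name and centring convention are mine, not any camera’s API) might look like:

```python
# Hypothetical sketch: shutter speeds for a bracket of shots spaced a fixed
# number of EV apart, centred on a base exposure. Each +1 EV doubles the time.
def bracket_shutter_speeds(base_seconds, step_ev=2.0, shots=3):
    # e.g. shots=3, step_ev=2 gives offsets of -2, 0 and +2 EV.
    offsets = [step_ev * (i - (shots - 1) / 2) for i in range(shots)]
    return [base_seconds * 2 ** ev for ev in offsets]

# Bracketing around 1/60 s at +/-2 EV gives roughly 1/240, 1/60 and 1/15 s.
speeds = bracket_shutter_speeds(1 / 60)
```

In practice you’d vary shutter speed rather than aperture between shots, so that depth of field stays constant across the bracket.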
Composing the Scene
The first consideration is what types of scenes are or aren’t suitable for HDR photography. Since we have to take multiple shots of the same scene, often with a noticeable gap in time as we make adjustments to the exposure, static scenes are best. A crowded street scene, with many moving people or objects, is not going to work well. Similarly, things that change from one shot to the next aren’t going to be good for your shot. A night shot of a deserted street that includes a traffic light will only work if the light is showing the same colour in all exposures. Since the images should be exactly aligned, shooting them from a tripod is ideal, so a location where you can set one up is desirable as well.
How you capture your different exposures will depend, somewhat, on the features of your camera. Some cameras have a feature called Automatic Exposure Bracketing (AEB). This will take a picture at a given exposure, followed as closely as possible by a shot that’s underexposed and one that’s overexposed by a user-set amount, all with one push of the shutter button. If you’re fortunate enough to have such a camera, setting your AEB step to ±2 EV might give you all you need for many HDR shots. Many consumer-level digital cameras lack an AEB function, but that shouldn’t be a significant barrier. All of my HDR photography has been done with consumer-level digital cameras that don’t have AEB.
Another feature that helps is the ability to set focus and exposure locks independently of each other, and to have them remain in place even when you aren’t holding the shutter button halfway down. In an ideal setup, I’ll place my tripod, with my camera on a quick release plate. Using the LCD screen, I’ll compose the scene to include the elements I want, and then clamp down the tripod adjustments. Then, I take note of the areas of different light intensity in the scene. Pay attention to the darkest and lightest parts, as well as any significant sections in between. At this point, I’ll remove the camera from the tripod via the quick release plate, and aim the camera at the darkest part of the scene. Using spot metering, I’ll lock the exposure at that setting, and then set the camera back in the tripod to capture that exposure. I then repeat this for progressively lighter portions of the scene, and may finish with an overall reference using average metering.
How many of my shots are taken using this ideal process? Precious few, alas. One of the main reasons I use a compact digital camera rather than a DSLR is that it’s small and light enough to keep with me at all times. My camera lives in my purse, so that it’s handy whenever I wish to take a picture. Often, when I see a scene that might make a good HDR shot, my first thought is that it’s too bad that my tripod is back at home. That doesn’t keep me from trying to capture the scene in a series of images that might work for HDR, though. The process is the same as if using a tripod, except there’s no tripod, so I try to pay more attention to alignment issues, and look for referents that I can place in certain portions of the viewfinder to minimize the amount of shifting from shot to shot. Realistically, shots from this method will never be as good as those taken on a tripod, but I have gotten some nice, very usable results this way.
The biggest hassle with doing this is aligning the images for HDR generation, and some of the software options make this easier than others. Some people try to avoid the alignment issue by generating multiple exposure JPEGs from a single RAW image. While this can give nice results for a narrow dynamic range, it really doesn’t produce HDR images, since the amount of light that hit the photoreceptor is limited to what it was for that one RAW image.
Choosing an Approach
So, now we have our collection of photos covering the dynamic range of the scene. What do we do with them? First, we have to decide how we’re going to process them. One method is to stack the images as layers in a program such as Photoshop or Photoline, and then blend them together using combinations of intensity-based masks or curves, with various blending modes between the layers. I’ve seen some fantastic results, particularly of night shots, done with masks this way. Some of the curves-based images have been less impressive to me. In any event, doing this well requires more familiarity and practice with these techniques than I have.
That leaves me with the second approach, which is to use one of the many programs that are emerging for producing HDR images. These generally process the images into an HDR file which is said to be scene-referenced, rather than output-referenced. That is, the HDR file isn’t intended to be printed, or even viewed directly, as it stores the light values as they are relative to each other in the scene, rather than what can easily be printed or displayed. From this, one applies a process called tone mapping, which produces a more conventional image, usually in JPEG format, for printing or display. Depending on the algorithm and settings used, one can get very different tone-mapped results from the same HDR file. Most of these programs also include some tools to help you align your images.
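To make those two stages concrete, here’s a toy numpy sketch of the concept. The function names and the simple mid-tone weighting are my own invention for illustration; real HDR software uses far more sophisticated methods (camera response recovery, local tone mapping operators, and so on). It assumes linear, already-aligned images as arrays in the range 0 to 1, with EV values relative to the middle shot.

```python
# Toy sketch of the two stages: merge exposures into a scene-referred
# radiance map, then tone map it back into a displayable range.
import numpy as np

def merge_to_radiance(images, evs):
    # Weighted average of the exposures, each scaled back onto a common
    # light scale. Pixels near mid-grey are trusted most; pixels near the
    # clipped ends (blown out or crushed black) get almost no weight.
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, ev in zip(images, evs):
        weight = 1.0 - np.abs(img - 0.5) * 2.0  # hat function peaking at 0.5
        num += weight * img / 2.0 ** ev         # undo the exposure difference
        den += weight
    return num / np.maximum(den, 1e-6)

def tone_map(radiance):
    # Simple global operator, L / (1 + L), squeezing scene-referred values
    # back into 0-1 for display. Real tools offer many fancier choices.
    return radiance / (1.0 + radiance)
```

With three frames shot at −2, 0 and +2 EV, you’d call `merge_to_radiance([dark, mid, light], [-2, 0, 2])` and then `tone_map` the result.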
It seems Photomatix (Mac and Windows) is the first such program many people come across, as it was for me. It provides basic tools to help with alignment and ghost images (e.g. people walking through a scene who are in some of your shots but not others), though I found it difficult to align handheld shots with this. The open source Qtpfsgui is free and available on multiple platforms, but has no alignment tools and assumes a familiarity with the details of various algorithms that is beyond that of a novice. A recent entry is the Mac-only Hydra, which has very good alignment tools, but really just automates the layer-curves blending technique described above, which gives you limited results.
Very good alignment tools are also what first drew me to Dynamic Photo HDR, despite it being Windows-only (I use a Mac, primarily). I’ve had the best luck aligning handheld shots in this program. It also has a simple and effective tool for removing ghosts from images. The tone mapping algorithms offer a range of styles, each with many options you can adjust via sliders to preview your results. Since I have a virtual Windows system on my Mac, I’m able to run it, and it’s become my primary HDR tool. All of the programs I’ve mentioned have free trials, so it’s worthwhile checking them all out for yourself and finding what you’re comfortable with.
Bringing the Shots Together
With the process now laid out, let’s look at an example using the Dynamic Photo HDR software. In this picture, representing a normal (that is, non-HDR) exposure, the window on the steeple of the church is blown out by sunlight reflecting on the glass panes.
Details of the windows and the woodwork on the white portion of the steeple are lost to overexposure. Conversely, the foreground and most sections that aren’t in direct sunlight are too dark to perceive much in the way of detail. We can take a lighter exposure to see what’s in the shadows, and a darker exposure, to see the details that were blown out of the original.
While each of these provides details that were absent in the original picture, neither makes a satisfying portrayal of the scene by itself.
Almost all of the packages mentioned above use the same basic workflow. After starting the software, you specify the input files for the scene; I used all three of these files. You usually also have to specify the relative EV values of the different pictures. Dynamic Photo HDR has an option to analyze and guess at that, and I’ve found no reason to second-guess its estimates.
Next, you adjust the alignment so the images are directly overlapping. Even though these images were shot on a tripod, it was still necessary to shift things by a few pixels, because the camera came on and off the tripod a few times between shots, causing the tripod to be jostled slightly. If there’s something in one frame not in the others, or that moves within a frame, you might use the ghost masking feature of your software to choose which (if any) to include. Though it’s not always possible, I simply waited between shots so that there weren’t people in any of them.
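Conceptually, the automatic alignment is a search for the offset at which the frames best overlap. Here’s a naive sketch of that idea (integer pixel shifts only, and `best_shift` is a name I’ve made up; real tools also handle rotation and sub-pixel movement):

```python
# Naive alignment sketch: try every small integer pixel shift and keep the
# one that makes the image best match a reference frame.
import numpy as np

def best_shift(reference, image, max_shift=5):
    # Search within +/- max_shift pixels in each direction for the (dy, dx)
    # offset minimising mean squared error. np.roll wraps at the edges,
    # which is fine for a sketch but not for production alignment.
    best = (0, 0)
    best_err = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
            err = np.mean((shifted - reference) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

A few pixels of tripod jostle, as in the shots above, is exactly the kind of offset such a search recovers.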
Generating the Final Image
At this point, you’re ready to generate the HDR image, and this might take a few minutes, depending on the speed of your computer. The result you get might not be all that pretty at first blush. Remember, this is scene-referenced, rather than output-referenced, so it’s not meant to look nice on your screen at this stage. Making it look nicer is what tone mapping is for, and to do that, you typically select from a list of methods, each of which will allow you a number of options you can tweak via sliders. There’s generally a preview function, so you can play with the choices until you find a result that’s pleasing to you, before rendering the final result.
If you’re not sure, you might render multiple versions. Here we have two results from Dynamic Photo HDR – the first produced via its "Eye Catching" method, and the second by its "Photographic" method. Other packages give you a similar ability to change your options during tone mapping to get results that look pleasing to you.
"Eye Catching" (view large image)
"Photographic" (view large image)
As you can see in either result, the details in both the shadows and the blown-out portions of the steeple are now visible. Saturation has been boosted to a different extent in each image as well, but that’s an option that can be turned down in tone mapping, if you prefer. You have the power to render the scene as it is most pleasing to you, while showing details that would be lost in any single shot with today’s digital cameras.
Hopefully, this has inspired you to look at your pictures in a new light, and to notice where details might be lost at the high and low ends of the lighting. Further, you should now have some idea of where and how to go about recovering those details. Good luck with your experimentation in the fun new world of HDR photography!