For any non-geeks who have stumbled across this, HDR refers to an image with a High Dynamic Range. That just means that there are more than 256 tones per color channel (red, green, blue). Your eye can see by the light of a starry sky. It can also see on a sunny day. It’s sensitive to a lot more than 256 tones. The camera can record more than 256 tones too, and if you make multiple images of the same scene at different exposures, the camera will record any range of tones you like. Then all you need to do is figure out how to combine those images in order to print them, or put them on a display screen.
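To put rough numbers on that idea, here is a small sketch. The stop counts below are commonly quoted ballpark figures, not measurements of any particular camera or eye; they just show how bit depth and bracketing relate.

```python
# Illustrative arithmetic: tones vs. bits, and how bracketing extends range.
# The specific stop counts are assumptions for illustration only.

def tones(bits):
    """Number of distinct tonal values representable with the given bit depth."""
    return 2 ** bits

print(tones(8))   # an 8-bit JPEG channel: 256 tones
print(tones(14))  # a typical 14-bit RAW channel: 16384 tones

# Bracketing at -2, 0, and +2 EV shifts the captured range by two stops
# in each direction, adding about 4 stops to whatever one frame covers.
single_frame_stops = 12          # assumed dynamic range of a single exposure
bracket_spread_stops = 2 - (-2)  # from -2 EV up to +2 EV
print(single_frame_stops + bracket_spread_stops)  # combined coverage in stops
```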
There are some excellent tools for working with digital images. Unfortunately for me, I don’t have access to most of them. You see, I’ve had some bad experiences with a certain ubiquitous software company from Redmond, Washington, and I choose not to use their products whenever possible. I use Linux on my personal computer, phone and tablet. If I could figure out how, I’d probably run it on my refrigerator and toaster oven too. A consequence of my intransigence is that I can’t run the popular Adobe products for photo manipulation. They don’t run in Linux. Yes, I know, they run on Apple computers, but I don’t happen to have one of those, and, being relatively impoverished at the moment, am not likely to get one.
There, now that’s out of the way.
There are two levels of conversion needed to combine multiple digital images, whose combined tones span a high dynamic range, into a single LDR (low dynamic range) image, i.e. one that can be displayed with 256 tonal values for each color.
First, the images need to be converted from the format produced by the camera (called RAW) to a suitable common graphic file format, such as JPEG, TIFF or PNG. That RAW image conversion requires some pretty specialized software, because in addition to knowing about the graphical file format, it needs to know all about the camera and the lens. There are more than a few of those, so it gets complicated. I use a program called DarkTable for that. It’s open source software from a project at http://www.darktable.org/. I use it all the time, and it works great. In the case of HDR-LDR conversion, I only need a small fraction of its many great features.
My camera lens (and all others too, as far as I know) has a number of flaws. One of them is called chromatic aberration. This causes one edge of linear objects in an image (like telephone poles, trees, buildings, etc.) to have a bluish cast, and the other edge to be reddish. Photo editors hate chromatic aberration, because it becomes really ugly when you enlarge the image. The magic that needs to take place to get rid of chromatic aberration is different for each lens, and the program user (me, for instance) doesn’t have the faintest idea what the parameters need to be for it to work properly. The program already knows, thanks to the intelligence built into it by its authors. I think I mentioned that it’s a great program.
So, my first step (after making three images of the same thing at three different exposures) is to remove the chromatic aberration from all three images with DarkTable. This doesn’t change the RAW image. It generates a new file with the defect removed. I usually tell DarkTable to generate TIFF files for this step, because they are lossless (you can reproduce the original image exactly from the generated file). This is not true of file formats like JPEG. Repeatedly loading, modifying and re-saving a JPEG file will degrade the image a little each time you do it, because a JPEG is an approximation of the image it contains, while a TIFF file is an exact representation. The TIFF file automatically carries many handy pieces of metadata (information about the image), among which is the exposure information, which is important in the next step.
I load my three TIFF files into a program called Luminance HDR. It will take the several image files (three is the customary number) and combine them into a single LDR image that contains all of the tonal information from the input images, compressed into one of 256 tonal values per color. Luminance HDR is also open source, and available at https://github.com/LuminanceHDR/LuminanceHDR.
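The basic idea behind that merge step can be sketched in a few lines. This is not Luminance HDR’s actual algorithm, just a minimal weighted-average radiance merge of the kind described in the HDR literature; the triangle weighting and the linear-sensor assumption are both simplifications I’ve chosen for illustration.

```python
import numpy as np

# Minimal sketch of merging bracketed exposures into one HDR radiance map.
# Assumptions (not Luminance HDR's implementation): pixel values are linear
# in light, and the frames differ only in exposure time.

def merge_hdr(images, exposure_times):
    """Weighted average of (pixel / exposure_time), favoring mid-tones."""
    images = [np.asarray(im, dtype=float) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        # Triangle weight: trust mid-tones, distrust near-black and near-white.
        w = 1.0 - np.abs(im / 255.0 - 0.5) * 2.0
        num += w * (im / t)
        den += w
    return num / np.maximum(den, 1e-6)

# Three fake 2x2 "exposures" of the same scene at 1/4x, 1x, and 4x time.
scene = np.array([[10.0, 40.0], [100.0, 200.0]])  # true radiance
times = [0.25, 1.0, 4.0]
frames = [np.clip(scene * t, 0, 255) for t in times]
radiance = merge_hdr(frames, times)
print(radiance)
```

Clipped highlights in the long exposure get zero weight, so the merged result recovers the full tonal range from whichever frames captured each pixel cleanly.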
Luminance is very good, but it’s still a work in progress, so it’s necessary to work around a few little warts in order to make it work. For instance, I’m sure that one day it will be able to remove chromatic aberration from raw input files. (It can already read RAW files and process them just fine.)
That aside, Luminance HDR does a lot. I don’t mean to disparage it. However, I spend a lot of time experimenting. There are many, many options that need to be chosen. Very little guidance is available, and the default options aren’t necessarily helpful. If you’re patient, though, it will do the job very well. It can do some really cool stuff once you’ve figured out how.
The Process Steps
First, of course, make the photos. I did that one evening recently. I produced three RAW photos of the setting sun from a camera mounted on a tripod. They were properly exposed at -2, 0, and +2 exposure values (each EV step being one photographic stop).
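For anyone unsure what those EV numbers mean in practice, here is the arithmetic. The shutter speed below is a hypothetical example, not the one I actually used; the point is just that each EV step doubles or halves the light.

```python
# Each exposure value (EV) step halves or doubles the light reaching the
# sensor, so a -2/0/+2 EV bracket captures 1/4x, 1x, and 4x exposures.
# The base shutter speed is an assumed example, not my actual setting.

base_shutter = 1 / 125  # hypothetical metered exposure time, in seconds

for ev in (-2, 0, 2):
    factor = 2.0 ** ev
    print(f"{ev:+d} EV -> {factor}x light, shutter ~ {base_shutter * factor:.5f} s")
```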
I removed the chromatic aberration from them with DarkTable as described above, and produced three TIFF files.
I loaded the TIFF files into Luminance HDR. After specifying the images to load (that’s the intuitive part), I ignored all of the other options in the input dialog window, because I already know that the rest of the options in the window are not applicable for images that are made on an isolated tripod-mounted camera. They deal with aligning images, and getting rid of ghost visual artifacts that can occur if, for instance, someone walks in front of the camera while one of the images is being recorded.
I clicked the “Next” button, and was asked to choose a profile for the HDR creation model. There are six predefined options. You can also build your own custom option if you’re really trying to drive yourself insane. The documentation advises you to use the first option, and if that doesn’t work, to try a different one. Although that’s not terribly informative, it’s good advice. I chose the first option. It produced an image with some ugly red splotches in one of the shadow areas. I tried again with option number two and got a good image.
After the images were loaded and merged into one HDR file, I played let’s-experiment for a while to determine which tonemap operator to use. A tonemap operator is the procedure that creates the LDR output file from the merged HDR data. Each operator has its own set of parameters, and it’s up to you to discover the appropriate values for a particular image. Since I have been using the program for some time, I knew I wanted to use either the Mantiuk ’08, Fattal, or Drago tonemap operators. I eventually settled on Fattal, and a set of parameters that I had used previously for outdoor shots. Since there is no information about what any of the parameters do, you get to figure that out yourself. Fortunately, you can try as many times as needed until you have a result you’re happy with; the result is displayed on the screen. I can give you one tip, though: always generate the output image at the final size you want for your file. The tonemap process runs much more quickly at a smaller size, but if you change the output size after you get a result you like, the output might be completely different, in very unpleasant ways.
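Fattal’s operator works in the gradient domain and is far too involved to sketch here, but the general shape of any tonemap (unbounded HDR luminance in, 0-255 out) can be shown with a simple global curve. The L / (1 + L) mapping below is a classic Reinhard-style operator, chosen purely for illustration; it is not what Luminance HDR computes for Fattal, Drago, or Mantiuk ’08.

```python
import numpy as np

# A minimal global tonemap: compress unbounded HDR luminance into [0, 255]
# with the classic L / (1 + L) curve (Reinhard-style). An illustrative
# stand-in, not the Fattal operator that Luminance HDR actually applies.

def tonemap(radiance, exposure=1.0):
    """Map HDR values (any positive range) to 8-bit LDR values."""
    L = np.asarray(radiance, dtype=float) * exposure
    ldr = L / (1.0 + L)  # compresses highlights hard, leaves shadows nearly linear
    return np.round(ldr * 255).astype(np.uint8)

hdr = np.array([0.01, 0.1, 1.0, 10.0, 1000.0])  # five orders of magnitude
print(tonemap(hdr))
```

The `exposure` parameter plays the same role as the brightness-style knobs in a tonemapping dialog: it slides the scene up or down the curve before compression.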
The final image is pleasing, I think; it looks much as the scene appeared to my eyes. In my opinion, that’s the whole purpose of the exercise. Without the HDR-LDR mapping process, the result would either have a blown-out sky, or everything in the foreground would be in silhouette.