DSLR Raw Editing: Dark Frames and Flat Fields
So I know I’ve already covered the basics of raw photos, but there are a few things worth elaborating on that I think deserve a separate post, so here we are.
There are three problems that can be easily solved with raw photos, two of which are obnoxious and annoying, and the other is… less so. These are dead pixels, noise, and lens vignetting, solved by two tools: dark frames and flat fields.
Disclaimer: this is even more down the path of “Only professionals or hardcore nerds would ever have to do this.”
The Problems
Bad (sensor) pixels can create some genuinely obnoxious discolored spots on a photo, and they’re usually amplified by the processing pipeline between the camera’s sensor and the final image. Even if they’re only a few pixels in size, they can stick out like a sore thumb if treated the wrong way by the processor. Noise, on the other hand, is also annoying, but if you don’t zoom in or don’t have a giant region of flat color, it’s likely going to be hidden by the natural complexity of the scene itself.
And on the other side, there’s lens vignetting, which… okay, almost nobody will ever notice it unless they stare, or you take a picture of a literal blank white screen.
Dead Pixels
A camera sensor is, like many things in the digital graphics space, made of pixels. These pixels can, for one reason or another, fail: either they’re stuck at one constant value regardless of what you point the camera at, or they produce no output at all and read as black. The problem is that when the sensor output is processed, a bad pixel creates a disturbance in the final image that’s definitely larger than a single pixel, and depending on how it failed, it might be noticeable without much searching and staring. Worse, outside of the ‘completely black’ case, bad pixels are hard to automatically detect and filter out. It can be done, but it won’t be perfect; a reference image, or a human to say “this is the bad one,” will produce better results.
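To make that concrete, here’s a minimal sketch of the reference-image approach, assuming a dark frame (covered below) already loaded as a 2D NumPy array; the threshold logic here is my own illustration, not what any particular raw processor does:

```python
import numpy as np

def find_stuck_pixels(dark, sigmas=8.0):
    """Flag pixels whose dark-frame value sits far above the noise floor.

    `dark` is a 2D array of raw sensor values from a lens-cap shot.
    Anything more than `sigmas` deviations above the median is
    probably a stuck pixel rather than ordinary noise.
    """
    median = np.median(dark)
    # Median absolute deviation is more robust to the outliers we're
    # hunting (the stuck pixels themselves) than a plain std deviation.
    mad = np.median(np.abs(dark - median))
    threshold = median + sigmas * 1.4826 * mad  # 1.4826 scales MAD to sigma
    return np.argwhere(dark > threshold)  # (row, col) of each suspect pixel
```

Completely dead (all-black) pixels are even easier: they’re the ones that never read above zero in any frame.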
Noise
Every photograph is going to have noise, and there are two types: chroma (color) and luma (brightness) noise. It can come from almost anywhere: your ISO setting, the random arrival of photons, the temperature of the sensor producing false readings… Chroma noise is the more obnoxious of the two, but it’s pretty specific to digital photography and can be filtered out rather easily; luma noise is usually less of a problem.
Luma noise, on the other hand, also goes by another name: film grain. While it won’t look exactly the same as a film shot, luma noise has always been there, and low-intensity luma noise is, depending on who you ask, either enjoyable (that old film look) or just not noticeable. It’s also the harder of the two to filter out without blurring the entire photograph. Chroma? Click a button. Luma? You’ll need to sit there deciding for yourself where the appropriate point is between ‘noisy’ and ‘blurry.’
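For illustration, here’s roughly what the ‘click a button’ chroma filter does, as a sketch assuming OpenCV and an 8-bit RGB image; real raw editors use considerably smarter filters, but the idea is the same: smooth the color, leave the brightness alone.

```python
import cv2

def denoise_chroma(rgb, blur_size=9):
    """Blur only the color channels, leaving brightness (luma) untouched.

    Converts to YCrCb, smooths the two chroma planes, converts back.
    Luma noise survives, which is exactly the hard part described above.
    """
    ycrcb = cv2.cvtColor(rgb, cv2.COLOR_RGB2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    cr = cv2.GaussianBlur(cr, (blur_size, blur_size), 0)
    cb = cv2.GaussianBlur(cb, (blur_size, blur_size), 0)
    return cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2RGB)
```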
Lens Vignetting
Camera lenses are some amazing pieces of technology, though they tend to have one issue: lenses are round, and sensors are rectangular. The edges and corners of the sensor get slightly less light than the center, meaning the corners of a photo are going to be slightly darker, an effect known as vignetting. Since this kind is caused by the lens, it’s called… lens vignetting.
Now again, this is almost unnoticeable, especially in photographs of actual scenes, but if for some reason you absolutely cannot deal with it, there is a fix.
The Fixes
Both of these fixes prefer raw files, so they can work closer to the sensor output and produce better results.
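For example, in Python, the rawpy library will hand you the sensor data before any of that processing happens (the filename here is just a placeholder):

```python
import rawpy

# Open a raw file and grab the unprocessed sensor values.
# raw_image_visible is the raw mosaic with masked border pixels cropped off.
with rawpy.imread("IMG_0001.CR2") as raw:
    sensor = raw.raw_image_visible.copy()  # copy: the buffer dies with `raw`

print(sensor.shape, sensor.dtype)  # e.g. (4000, 6000) uint16
```

The sketches below all assume you’re working on arrays like this one.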
Dark Frames
A dark frame is a shot taken with the lens cap on, or in other words, a photo where there is no light. In an ideal world, this would be 100% black, nothing across the frame except completely black pixels. Naturally, this is not an ideal world, so a dark frame will not contain pure black. Instead, it’ll show the noise the camera captured against a pure black background, plus any pixels stuck at a certain value.
Dark frames need to be taken at the same ISO and shutter speed as the photo you want to correct, and ideally on location, so that the environmental conditions match. Additionally, the more you take, the better, since the software can average the results… to a point. After a small handful, extra frames stop helping much.
Yes, they’re the more annoying of the two.
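A rough sketch of how the averaging and the correction itself work, assuming the dark frames and the shot are already loaded as NumPy arrays of raw sensor values:

```python
import numpy as np

def apply_master_dark(light, dark_frames):
    """Average several dark frames, then subtract the result from the shot.

    `light` is the raw frame to correct; `dark_frames` is a list of raw
    dark frames taken at the same ISO and shutter speed.
    """
    # Averaging suppresses the random noise within each dark frame,
    # leaving the fixed pattern (stuck pixels, thermal signal) to remove.
    master_dark = np.mean(np.stack(dark_frames).astype(np.float32), axis=0)
    corrected = light.astype(np.float32) - master_dark
    return np.clip(corrected, 0, None)  # sensor values can't go negative
```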
Flat Fields
A flat field is the polar opposite of a dark frame: it’s a photo taken through a diffuser of some sort, pointed at a bright white light, making a final picture that’s as close to pure white as possible. Ideally it’s all one flat color; it doesn’t have to be perfectly white.
Flat fields only need to be taken once, though you’ll need quite a few: one for every shutter speed, aperture, and zoom position (if applicable), for each lens you’ll be using. Now that you have your giant collection, if you look closely, you’ll notice that the edges are darker. This is the lens vignette, now highlighted clearly against a bright background. Once you have them, they won’t change, so you can just apply them during processing (instructions differ per software, obviously), and you’ll have a perfectly flat image with no darkening in the corners! For whatever use that will be.
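The correction itself is just a per-pixel division; here’s a minimal sketch with NumPy, assuming a flat field shot with the same lens, aperture, and zoom as the photo:

```python
import numpy as np

def apply_flat_field(light, flat):
    """Divide out the vignette recorded in a flat field.

    Pixels the lens darkened are darkened in `flat` too, so dividing
    by the normalized flat brightens them back to where they belong.
    """
    flat = flat.astype(np.float32)
    # Normalize so the flat averages 1.0: overall exposure is preserved
    # and only the relative falloff toward the corners gets corrected.
    gain = flat / flat.mean()
    return light.astype(np.float32) / np.maximum(gain, 1e-6)
```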
Note that when I say “every zoom,” I just mean ‘the most common ones and/or the ones you’re most likely to use.’ My lens has about four or five marked positions just to give you an idea of where you are, and a photo at each one of those is good enough for the software to interpolate between. However, the more you take, the (slightly) more accurate it will be, up to the granularity at which your camera can record its focal length, usually 1mm.
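The interpolation can be as simple as a weighted blend; here’s a sketch (my own guess at the simplest possible version, not any particular program’s method) assuming two flats taken at focal lengths bracketing the one you actually shot at:

```python
import numpy as np

def interpolate_flat(flat_a, focal_a, flat_b, focal_b, focal_shot):
    """Linearly blend two flat fields to approximate one at `focal_shot`.

    `flat_a` and `flat_b` were taken at focal lengths `focal_a` and
    `focal_b`, and `focal_shot` must fall between the two.
    """
    w = (focal_shot - focal_a) / (focal_b - focal_a)  # 0.0 at a, 1.0 at b
    return (1 - w) * flat_a.astype(np.float32) + w * flat_b.astype(np.float32)
```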
And one final fun use: flat fields can spot dirt on the lens or on the sensor and remove that too. The catch is that you’ll need to take flat fields for, on top of everything else, each focus setting, because changing focus actually zooms the image very slightly, which shifts the position of the dirt and throws off the algorithms used to remove it. In my opinion, if you see stuff on the lens, it’s easier to either just deal with it, or clean it and keep going (retaking what you need to), unless it’s really bad and you have no choice but to remove it in post. It’s less effort to clean the lens, or even the sensor, than it is to find a suitable light source (most electric lights won’t work, because they go dark for a fraction of a second and that will throw everything off), configure everything the exact same way, take the picture, hope it’s right, keep going until it is, and finally go home and take 30 seconds to remove the tiny speck of dust that you can’t even see once the image is scaled down to the resolution it’s going to be used at.