What’s a Stop?
We’re photographers so we all understand the concept of a stop. In terms of exposure, it simply refers to doubling or halving of the amount of light striking the sensor by changing the shutter speed or the lens’ aperture. With digital, we can also change the sensitivity of the sensor – also measured in stops – by doubling or halving the ISO value. Simple, right?
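The bookkeeping is just powers of two, which a couple of lines make concrete (a sketch; `stops_between` is an illustrative helper, not a standard function):

```python
import math

def stops_between(shutter_a, shutter_b):
    """Stops of exposure difference between two shutter speeds (in seconds)."""
    return math.log2(shutter_a / shutter_b)

# 1/4 s admits one stop more light than 1/8 s; ISO and aperture area
# work the same way, each stop doubling or halving the result.
print(stops_between(1/4, 1/8))     # -> 1.0
print(stops_between(1/4, 1/1000))  # about 8 stops
```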
But what happens when we get our images onto the computer? What’s a stop of brightness in an RGB or Lab color space? Recently I got sucked down the rabbit hole of exposure and grayscales to sort out a few answers for myself. What I found was revealing and a bit surprising, at least to me.
It started when I was watching an online video by a well-known photographer and author. He displayed a grayscale like the one above. He explained that in the Lab color model 50% is considered middle gray and then defined a “zone system” using zones 0 through X as shown. All of that made sense and I’m sure we’ve all seen similar grayscales.
The decision to create 11 steps or zones was arbitrary, but how does that relate to plus or minus a stop of exposure? It’s not unusual to see a zone system grayscale like this with the steps also labeled as plus or minus one stop increments. I’ve generally found that a printed grayscale using my own printer and preferred paper can only generate about a seven or eight stop range from blackest black to the brightest white, but that’s the output. Modern cameras are frequently described as achieving 12 to 14 stops of dynamic range, but that’s input.
So, how does a stop of exposure input translate to a stop of output?
Welcome to my rabbit hole.
I know from dabbling in photographic reproduction of artwork that what you photograph isn’t necessarily what you get. It’s actually very difficult to photograph something and then faithfully reproduce its colors and tones precisely. If you photograph a grayscale and then process the image, you generally won’t get the same grayscale back without special processing. Digital cameras are designed to create pleasing tones and colors, not necessarily accurate tones and colors. So I knew I needed something other than a simple grayscale.
My approach was to set up a gray screen on my iPad and then photograph it out of focus at a wide variety of shutter speeds. My plan was to find a starting point that would let me cycle from pure black to pure white in one-stop increments. The camera I used was a Sony a7r2. I ended up with the camera set to ISO 100 at f/8.0 and used shutter speeds ranging from 30 seconds all the way to 1/8000th of a second. The “correct” exposure was very close to 1/4 second, since that’s what gave a result very close to middle gray.
I then applied only default processing in Lightroom, set the white balance, cropped each frame into a small square and loaded them into Photoshop. By spreading them out from darkest to lightest I created a grayscale that truly represents my camera’s one-stop increments.
The chart above shows the result. The grayscale at the top is the result from the camera. The one at the bottom is a straight linear grayscale generated in Photoshop to have 14 brightness levels to match the one from the camera. The percent values are the Lab luminosity. The number below is the sRGB value (assuming red, green and blue are equal).
All of the samples were created with default processing. Of course, we have numerous tools in Lightroom to shape the tones of an image, but my goal was to see what happens before I start tweaking the photo.
First some basic housekeeping. I’m choosing to describe the gray levels two ways, the Lab luminosity percent and as an 8-bit sRGB grayscale value. In either case, we only have a limited range to work from. Nothing is blacker than pure black (zero) and nothing is whiter than pure white (100% or 255 in Lab or RGB respectively). In 8-bit nomenclature, we only have 256 levels available to describe all the tones between the two endpoints. A 16-bit image still has pure black and white endpoints, there are just more possible values between the two ends of the range.
I said earlier that 50% luminosity in Lab was middle gray. Notice that the equivalent sRGB value is NOT 128; it’s actually 119. It’s also 119 in the Adobe RGB (1998) color space, but only 100 in ProPhoto RGB. That may or may not be important to you in processing your images, but it is worth knowing.
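That 119 figure can be checked from the published formulas: Lab L* converts to linear luminance through the CIE inverse function, and linear luminance converts to an encoded level through each color space’s transfer curve. A sketch (ignoring ProPhoto RGB’s tiny linear segment near black, which doesn’t matter at middle gray):

```python
def lab_L_to_linear(L):
    """CIE L* (0-100) to linear relative luminance Y (0-1)."""
    f = (L + 16) / 116
    return f ** 3 if f ** 3 > 0.008856 else (f - 16 / 116) / 7.787

def srgb_encode(y):
    """Linear luminance through sRGB's piecewise transfer curve (0-1)."""
    return 12.92 * y if y <= 0.0031308 else 1.055 * y ** (1 / 2.4) - 0.055

y = lab_L_to_linear(50)                 # middle gray, Y is about 0.184
print(round(srgb_encode(y) * 255))      # -> 119
print(round(y ** (1 / 1.8) * 255))      # ProPhoto's 1.8 gamma -> 100
```

So middle gray lands at 119 in sRGB and 100 in ProPhoto RGB purely because of the different encoding curves, not because the underlying luminance differs.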
I immediately made some observations. If you look at the change in Lab luminosity from stop to stop you’ll see that my camera plus default processing gives me very non-linear results. The first stop of change on either side of middle gray is 20 points of difference. The second stop is only about 15 points difference. The next stop of difference is even less.
What that means (at least to me) is that once the RAW image is processed, the majority of the available levels of brightness are clustered around the center of the scale. In fact, 70 percent of the available tones (from 44 to 224) fall within plus or minus two stops of proper exposure. As the exposure level gets farther from middle gray, the number of tones becomes smaller and smaller, so you’re more likely to have problems with banding or other processing issues.
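That 70 percent figure is simply the share of the 256 available 8-bit levels that falls between the measured endpoints:

```python
# Using the measured +/-2-stop endpoints of sRGB 44 and 224 from the chart.
levels_in_range = 224 - 44 + 1          # 181 of the 256 levels
fraction = levels_in_range / 256
print(f"{fraction:.1%}")                # -> 70.7%
```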
That result really surprised me. I expected the tones at one stop intervals to be more evenly distributed with uniform steps between stops, more like the linear grayscale shown for reference. Instead, most of the information is clustered around the middle of the range.
If you chart the values shown for Lightroom’s default processing you find the familiar ‘S’ curve used to add contrast to mid-tones.
If you print your work then you also know that you’re going to lose detail at both ends of the tonal range. For my purposes, I’ve defined my usable range as from sRGB 11 to sRGB 252. Anything beyond those will probably be seen as pure black or pure white in a print. If you’re counting, that gives me nine stops of latitude to work with around middle gray even though the camera captures a wider range in the RAW file.
The preponderance of tones is clustered around a middle gray exposure, but what about colors? Intuitively, it would seem that if most of the gray tones sit in the middle of the scale, then most of the available colors will also be centered rather than at the endpoints.
To answer that question I found an image online at Bruce Lindbloom’s website (www.brucelindbloom.com/index.html?RGB16Million.html) that contains exactly one pixel for each possible color combination from (0,0,0) to (255,255,255). I downloaded a copy of the image and inspected it to evaluate the number of unique colors for each level of brightness.
The chart above is a luminance histogram showing the number of unique colors that correspond to each level of brightness. Again, notice that the vast majority of available colors fall in the middle. If you again choose endpoints of 44 and 224 it appears that over 90% of the colors live in that range of plus or minus two stops from middle gray.
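The same count can be reproduced without downloading the image, since every (R,G,B) triplet occurs exactly once. This sketch assumes Rec. 709 luma weights as the measure of brightness, which is close to, though not necessarily identical to, the luminance measure used for the chart:

```python
import numpy as np

v = np.arange(256, dtype=np.float32)
# Luma of every possible 8-bit RGB triplet via broadcasting (Rec. 709
# weights -- an assumption about how "brightness" was measured).
# The intermediate array holds 256**3 values, roughly 67 MB as float32.
luma = (0.2126 * v[:, None, None]
        + 0.7152 * v[None, :, None]
        + 0.0722 * v[None, None, :])
# Count colors per rounded brightness level, 0 through 255.
counts = np.bincount(np.rint(luma).astype(np.int64).ravel(), minlength=256)

in_middle = counts[44:225].sum() / counts.sum()
print(f"{in_middle:.1%} of all colors fall between levels 44 and 224")
```

Run it and roughly nine out of ten of the 16.7 million colors land between levels 44 and 224, which matches the shape of the histogram above.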
So, if it’s important that most of your image lives in the mid-range where most of the tone values lie, then it’s even more important for colors.
An obvious question, then, is what does this mean for the Expose to the Right (ETTR) methodology? ETTR advises using the brightest exposure that still protects the highlights, on the grounds that the RAW file captures the most information in the brightest tonal areas of an image because of the doubling and halving of values. In contrast, the resulting processed image contains the most information in the mid-tones clustered around middle gray.
To explore the idea further I added three more rows of information to create the chart above. First I inspected the RAW files directly using RawDigger (www.rawdigger.com) to see the actual values in the file. My camera yields four channels, Red, Green, Blue & Green2 (there are twice as many green sites as there are red and blue). Values for each site range from zero to 15,680. The values shown are an unscientific estimate of the average luminance in each of the files. Notice that each step of brightness is double the previous with the exception of the last two steps. (Could it be that the sensor and processor are attempting to protect the highlights?)
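The halving pattern in the raw values implies a very uneven number of levels per stop. Assuming a 14-bit linear file (an assumption; the Sony’s 15,680 maximum is close to the 16,383 ceiling of 14 bits), counting the distinct values in each successive stop down from clipping looks like this:

```python
FULL_SCALE = 16383                     # 14-bit maximum (assumption)

for stop in range(8):
    hi = FULL_SCALE >> stop            # top of this stop
    lo = FULL_SCALE >> (stop + 1)      # one stop (half the light) below
    print(f"stop -{stop}: {hi - lo} levels")
# The brightest stop alone holds 8192 levels; each stop down holds half
# as many, so deep shadows are described by only a handful of values.
```

This is the core of the ETTR argument: in a linear file, the top stop contains half of all the available tonal resolution.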
You can certainly see the logic behind ETTR. For instance, the step from +2 to +3 exposure is a difference of 4000 units, but the step from -2 to -3 is a difference of only 125. With 4000 tone levels to work with you should be able to better capture subtle differences in tone and color. ETTR aligns the abundance of values in the bright portion of the RAW file with the detail-rich mid-tone range of the processed image file.
But ETTR also involves reducing the exposure level in Lightroom. I’ve also added two additional rows of grayscale samples, one where the Lightroom exposure slider was moved to minus one stop (such as you might do when using ETTR) and one where the slider was moved to plus one stop.
Predictably, moving the exposure slider plus or minus moved the middle gray patch plus or minus one stop as well. It also retained the central pattern where plus or minus a stop is represented by plus or minus 20 points of Lab luminance.
Other aspects of the results surprised me, though. The minus adjustment didn’t gain any detail in the 8-second exposure; it simply compressed the tones in the top few stops. Even though there were potentially thousands of steps of information, those steps were compressed into very small steps in the processed file. It also tended to crush the shadows more than I would have expected. The end result appears to be less dynamic range.
Moving to the plus one-stop exposure adjustment did not seem to have a similar effect. In fact, it seemed to protect the highlights to some degree yet effectively lifted the mid-tones and shadows. The result appears to be the same if not more dynamic range than the default processing.
I’m not entirely sure what conclusions to draw but it does make me question the value of ETTR with modern sensors. There is lots of data available in the RAW file highlights but there is also more than enough in the mid-tones. However, once you get three to four stops below middle gray the RAW data gets pretty sparse so if those tones are important you probably want to try to place them closer to middle gray.
From this set of tests, it appears that a correct exposure that protects the highlights will give the best results, even if it means underexposing the scene overall. Of course, ETTR always advised protecting the highlights so maybe that conclusion isn’t that much different. But protecting highlights seems to be the most important aspect.
Some other questions came to mind as well:
Is my camera meter accurate? Yes, it said 1/4 second was the correct exposure and indeed that translated into very close to 50% luminance.
Is the Lightroom exposure scale accurate (in stops)? Yes, if you adjust exposure up or down in one stop increments the tones move according to the top scale.
Is the Capture One exposure scale accurate (in stops)? No. When you adjust exposure up or down in full increments, it does not change the luminance values in full-stop steps. That’s usually not important but might be interesting to know.
What happens when you photograph a grayscale? Lightroom and Capture One tend to add contrast to the RAW image to make pleasing tones and colors. The result is that if you perfectly expose for middle gray, the adjacent gray values will differ from the original, with the darks rendered darker and the lighter tones rendered lighter.
Capture One does give you the ability to set base characteristics to use a linear curve so that tone values match more closely; that’s especially important for art reproduction work. It lets you choose between pleasing tones and colors or accurate tones and colors. I don’t know of a corresponding setting in Lightroom that gives you that control.
Color Checker values: Patch 22 (bottom row, third from the right) is the closest to middle gray with a Lab luminance of 51 percent. However, when you photograph the Color Checker with a proper exposure the resulting Lab values won’t match published values with default processing. Patch 21 has a published value of 67 percent compared to a photographed value of 69 percent. Patch 23 has a published value of 36 percent compared to a photographed value of 32 percent. Close, but not exact. Also, creating a custom profile does not seem to make an appreciable change in the gray values.
My conclusions? After all of this, I’m not sure I’m really any smarter or better prepared to make great photos. Perhaps the most interesting finding is seeing how RAW converters tend to prioritize the tones surrounding middle gray in the center of the histogram while giving less “bit depth” to the ends of the tone distribution. It shows the importance of doing as much processing as you can with the RAW file in Lightroom or Capture One before moving to Photoshop to fine-tune the image. (Once you move the image to Photoshop, you have the distribution of tones “baked into” the PSD or TIFF file.) Lastly, the results highlight the flexibility of modern sensors and reinforce the importance of protecting highlights.
Also, I’d be curious to see the results if anyone duplicates this process with their own camera.