Digital Zone System

August 22, 2014 ·

Christopher Schneiter

THE LONG ROAD TO EXPRESSION

The term “Digital Zone System” has been bandied about since the advent of digital imaging. To many, “Zone System” became a buzzword. The Zone System was bulletproof, synonymous with fine images, and people have seemed desperate to give their own work the kind of importance the Zone System seemed to confer. Many have used the term, but it hasn’t helped, because they were trying to impose a film workflow on a digital paradigm, and it didn’t work. So they conclude that digital output is inferior. It’s not. In many ways, I see it as superior. It’s just different.

MAKING A CASE FOR DIGITAL IMAGING

Since the advent of digital technology, there has been talk of a “Digital Zone System.” All too often, though, the talk has centered on trying to make it work much the same way the Zone System did with film. I’ve encountered many people who insist on using the spot meters they spent so much money on, measuring tonal range as if somehow the old techniques would magically transform their images.

There is a nostalgia about shooting film, with many espousing its virtues and superiority over digital capture. I personally see that in many ways a digital file is far superior to film, especially now that resolution has caught up. The first distinction is the malleability of a digital file. Images can be easily and cleanly retouched and repaired, something that could never be done on film without a great deal of work.

Figure 1

This image was destroyed in a darkroom accident when chemistry was splashed on it. It was cleaned as thoroughly as possible, but the damage could not be repaired conventionally. I carried it around for 25 years, until it was finally scanned and was retouched digitally. It carries the character of the original print.

Digital files also have the potential to convey much more information than a silver-gelatin print could. Tonal ranges are precisely and quantitatively controllable, allowing the photographer to control areas that in the past were limited by conventional materials.

Figure 2

Again, I’ll use an image from a scanned negative. In addition to the repair of numerous processing flaws (another drawback of film), this image exhibits a much longer tonal range, allowing the viewer to see all the detail on the water. To reveal this amount of detail conventionally, it would have been necessary to lower the overall contrast of the image, causing the print to go flat. A digital application allows control of small areas of the tonal range without affecting the overall contrast.

Another advantage of digital files is that much more control is possible in the rendering of images. In the darkroom, dodging and burning was imprecise at best. Even in the hands of masters, it is possible to see huge variations from print to print. I once looked up Wynn Bullock’s “Stark Tree” image and was shocked at the variation among the collector’s prints available. Wynn was no slouch, yet his prints were inconsistent. Along with the difficulty of dodging and burning, images would “dry down” differently depending on the paper they were printed on, the time and temperature of processing, and the chemistry used. This led to infinite variations in density.

Figure 3

This image was always problematic in a few ways. First, the densities are very delicate. If the highlights were too light, there would be no detail or substance. If they were too dark, the shadows would block up and the whole image would look muddy. The density variations could be caused by exposure, by development, or, as any darkroom-savvy person knows, by print dry-down. As the print dried, it would grow darker, and this varied with the paper it was printed on and even the drying temperature. Ansel Adams was a very early practitioner of using a microwave to dry his test prints! Many of the tones in this image lie at the thresholds of the tonal range. More on this later.

Another problem I always had with this image was that I wanted the little moon to be just a tiny bit lighter. Try as I might, I never could dodge it precisely enough to avoid getting a halo around it. And I was pretty good in the darkroom, too. When I was finally able to scan this negative after many years, not only could I nail the exposure without the variability of conventional processes, but I could set the proper density of my little moon and repeat it in every print, whether I print it today or in 10 years.

So, where does this leave us? How does this tie into the Zone System? I have intentionally begun this article with images that were created using the Zone System, to show how digitization improved them. So how can Zone System concepts be used for digital capture?

At its root, there is nothing exotic or mysterious about the Zone System. It is, and always was, basic, proper technique for exposure and contrast control. Let’s break down its key elements.

ON TO THE ZONE SYSTEM

Preparing to use the Zone System requires testing. In addition to compensating for variability between lenses, exposure, and processing habits, this testing gave us data on how our specific combination of equipment and materials performed. That data allowed us to produce an exposure- and contrast-controlled negative that would reproduce its tonal range on a specific paper. Each time we used a different film, paper, or even a different camera or lens, it was best to retest to account for variations.

The Zone System is based upon keeping densities within specific tonal parameters, or within the material thresholds. Working within these thresholds gives us a negative that will print faithfully on a certain paper. There are several tests.

The first Zone System test determines the threshold of the paper we will print on. We make a test strip at different exposures, at a specific enlarger height on a dedicated enlarger, and process it at a specific time and temperature to ensure consistency. After the print is washed and dried, it is evaluated: choose the shortest exposure that produces maximum black, the step after which no further darkening is visible. This is the paper threshold, and this exposure time, development, and paper combination is what we will use in our testing. It becomes our standard print time.

Next, we determine the proper film speed for that film: the film density threshold, the density at which we can see a very subtle tonal change when printed at our standard print time. I always found that I usually needed at least a full stop more exposure than the rated speed (ISO 200 instead of 400 for Tri-X).

The final test determines the proper highlight threshold for the film speed we have established. We expose film four stops over middle gray, then vary the development time; the longer the development, the more contrast and density increase. By trying different development times and printing the negatives at the standard print time, the best highlight threshold density is found.

After testing, the second challenge is learning how to use the data. Our testing has set up our negative to operate within a specific tonal range. So what happens if we have a very high-contrast or low-contrast scene? We must first determine how wide or narrow the contrast range is. To do this we used very expensive spot meters, and once we had figured it out, we varied our exposure and development time to change the contrast. In my case, determining my ideal “expansion” or “compaction” development times called for more testing! Once we find that development, we can alter our tonal range to give us a good, printable, faithful image with our chosen film, paper, and development combination.

Ansel often referred to the negative as being the score, and the print being the performance of that score. I had the preconception that if I could just produce a perfect negative, then my images would also be perfect. I learned the hard way that it doesn’t matter how good your data is; you still have to learn how to use it. You still have to finish the image. You still have to make a performance of it!

This is how I lived for many, many years. I loved the darkroom, and was very good at it. But remembering all of the testing makes me feel exhausted. Don’t get me wrong. I love testing (as you’ll find out later), but I’ve spent a whole lot of time at it!

Ultimately, the goal in using the Zone System was to make the tonal range in our negative fit the tonal range that our printing techniques could provide. Information is locked into the negative, and there isn’t much that can be done to alter it after processing.

AND INTO THE NEW MILLENNIUM

And now we live in a digital world. The perception is that “it’s easy,” so now everybody is a photographer. This may seem true at face value, but there is still a lot that most people ignore. Fortunately for us, many of the same controls we used with film are still valid, although we access them in different ways.

With film, it was necessary to have the entire tonal range under control in the negative. The tonal range and contrast had to be fit to the paper it was to be printed on, and this all had to be predetermined. Luckily for us, when we work digitally, all of the significant controls happen after exposure.

The basics are still the same. You must still start with a good exposure that fits the tonal range of your sensor. Yes, we can do a lot of manipulation of the tonal range, but if the information isn’t there, it can’t be replaced. With film, we placed zones using our meter readings to determine the scene’s tonal range and how we needed to develop the film to fit a certain tonal range. Digitally, our goal in exposure is to make a correct exposure that fits the tonal range our sensor is capable of, with no loss (clipping) at either the highlight or shadow end. This represents good raw material for post-processing. For maximum quality, it is critical that exposure is correct so that we have usable information.

ENTER THE HISTOGRAM

One of the most incredible tools that came with digital capture, and possibly one of the most ignored, is the histogram. The histogram is built into most cameras, yet it is all too often turned off, with the photographer using the LCD to judge exposure. Many people never take the time to learn how to use it properly.

Rule #1: LCDs LIE! The view of an LCD is corrupted by the ambient light it is viewed in. If the ambient light is bright, the LCD will look dark; if the viewing light is dim, the LCD will look bright. The LCD is not an accurate means of judging exposure.

In film days, we spent huge amounts of money on light meters. With practice, we could become very accurate in their use, and because film exposure was so critical, they were necessary. They allowed us to measure tonal range and helped determine how we would treat our film. The drawback to meters, aside from people not learning to use them properly, was that they could be thrown off by stray light, by variability, and by flaws in our interpretation of a scene. I’ll never forget my agony at “over-metering” scenes, trying to include every possible tone in an image, which caused me to overexpose and underdevelop my negatives until I discovered that this was impractical. As I said, meters take practice and understanding.

Finally, with the advent of digital capture came the histogram. Rule #2: HISTOGRAMS NEVER LIE! A histogram is a graphic depiction of the actual scene, based upon actual captured data. It is immune to interpretation, or to variables such as stray light.

The histogram is constructed with the shadows on the left and the highlights on the right. The height of the “peaks” in the histogram shows how much of the image falls at each point in the tonal range. A good histogram starts just inside the left upright and ends just before the right. When you have a good histogram, that’s all you can ask for. All of your information is there.
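If it helps to see the idea in code, here is a minimal sketch of what a histogram is, assuming an 8-bit rendering of a capture saved as “capture.jpg” (the file name, and the use of Python with NumPy and Pillow, are my illustration only, not anything built into a camera):

```python
import numpy as np
from PIL import Image

# Load the capture and reduce it to a single luminance channel.
img = np.asarray(Image.open("capture.jpg").convert("L"))

# One bin per 8-bit level: bin 0 is pure black (the left upright),
# bin 255 is pure white (the right upright).
counts, _ = np.histogram(img, bins=256, range=(0, 256))

# A "good" histogram starts just inside the left upright and ends
# just before the right one, so the extreme bins are nearly empty.
print("pixels at pure black:", counts[0])
print("pixels at pure white:", counts[255])
```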

Like this: this image had a lot of green, but notice that the information starts at the left and finishes on the right, with no clipping.


Figure 4 & 5


Sometimes there will be an image that doesn’t exactly fit this ideal histogram. This image has a lot of dark, low key tones, and no really bright areas, but I knew that going in, so this is an appropriate histogram for this type of image.


Figure 6 & 7


And then there are high key images in which all of the tones in the histogram are shifted to the right.


Figure 8 & 9

The commonality among all three of these images and their histograms is that the tones remain within the uprights except where planned otherwise. Any tones that clip, or “climb the wall,” at either end will have no tonal gradation, and therefore no chance of detail. A histogram that is shifted to the left when it is not planned to be is underexposed and can show significant noise. A histogram that is shifted to the right without clipping may be overexposed, but can be darkened for use. Some use the term “expose to the right,” meaning expose so that the histogram sits as far right as it can without clipping. The important thing is to have all the information within the two ends of the histogram. This way, you have the maximum information to work with in the image. That is all you need: an image whose tonal range fits the capacity of your sensor. No zones, no complicated calculations, just observation of your histogram.
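For the numerically inclined, here is a rough, hedged version of that check in code: it estimates how much of the frame is climbing either wall and how much highlight headroom is left. The 0.1% cutoff and the file name are arbitrary choices for illustration, not a universal rule:

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("capture.jpg").convert("L"))
counts, _ = np.histogram(img, bins=256, range=(0, 256))
total = counts.sum()

clipped_shadows = counts[0] / total        # fraction climbing the left wall
clipped_highlights = counts[255] / total   # fraction climbing the right wall

if clipped_highlights > 0.001:
    print("Highlights clipped -- reduce exposure.")
elif clipped_shadows > 0.001:
    print("Shadows clipped -- add exposure, watching the right wall.")
else:
    brightest = int(np.max(np.nonzero(counts)))
    print(f"No clipping; about {255 - brightest} levels of highlight headroom "
          "remain if you want to expose further to the right.")
```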

SO WHAT IF MY SCENE DOESN’T FIT MY HISTOGRAM?

The nice thing about shooting digitally is its malleability. If we have a flat scene, it’s an easy matter to increase contrast, locally or globally. But what if it’s too contrasty?

When we shot film, to reduce contrast in a scene we would reduce development, making a flatter negative. We called this compaction, and it worked reasonably well. What many don’t consider is that we have a very good digital technique for making a tonal range fit: HDR. Many think of HDR as a way to make dramatic images, which it will do, but in reality it was designed to make a long tonal range shorter, so that it can be used in a conventional way. The key is to use HDR responsibly! Really, you don’t have to have scary skies in your images!
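As one concrete, hedged illustration of squeezing a long scene range into a usable file, here is a sketch using OpenCV’s Mertens exposure fusion. It is only one of several HDR-style approaches, the three bracket file names are hypothetical, and the work of taste still happens afterward:

```python
import cv2

# A three-frame bracket of the same scene (hypothetical file names).
bracket = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens fusion weights each frame by contrast, saturation and exposedness,
# folding the long scene range into a single displayable image.
fusion = cv2.createMergeMertens().process(bracket)   # float image, roughly 0..1

cv2.imwrite("fused.jpg", (fusion * 255).clip(0, 255).astype("uint8"))
```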

HDR works best with scenes that have moderate amounts of extra contrast. This image is pretty extreme, but shows what can be done.

Figure 10 is a single file. The highlight has some detail, but the shadows are totally clipped; as you can see in the histogram, there is no detail in the shadows.


Figure 10 & 11

Figures 12 & 13 show the results of shooting the scene as an HDR. The image had an extreme tonal range, so it was shot with a nine-stop bracket. The lighting is still extreme, but there is now dark yet visible detail in the shadows, as well as detail in the highlights. As you can see in the histogram, there is no clipping in the highlights, and only magenta-channel clipping in the deepest shadows.


Figure 12 & 13

HDR can be a very trying discipline, and all too often the temptation is to go all “horror movie” with it. With practice, though, it is capable of producing very evocative images. There are many books written on the subject and many tutorials online. My favorite practitioner of HDR is Trey Ratcliff; on his site “Stuck in Customs” he has a great HDR tutorial. In my first Luminous Landscape article, “Do You Need an HDR Intervention?”, I go over my techniques. Ultimately, it’s up to you to develop a workflow that gives you the look you want.

OTHER CAPTURE CONSIDERATIONS

A few other considerations we have to look at are capture format, colorspace, and working bit depth. Very often these items are passed over in the planning of images. Not choosing wisely can create a “garbage in, garbage out” situation.

First, capture format. JPEG is capable of holding only a very limited tonal range. New cameras are better than they once were, but conventional thought holds that JPEG can hold only about a five- to seven-stop range, which is fine for most scenes but in higher-contrast situations can lose either the shadows or the highlights, or both. JPEG also bakes a lot of processing into the format: color is adjusted and the image is sharpened. In addition, JPEG is less flexible and cannot be adjusted as extensively as RAW without showing artifacts. In general, JPEG is a compressed format, riddled with compromises, with a shorter tonal range that does not respond well to editing, and it should not be used for serious output. If an image is destined for a JPEG-suitable output such as the web, it can be saved as JPEG at the end.

RAW format, by contrast, is very similar to a negative. All of the potential information is preserved in the file, and it can be twisted and changed without much degradation. RAW is also capable of a much longer tonal range, easily ten stops or more with most cameras. RAW is a blank slate, with no inherent colorspace, bit depth, sharpening, or color correction. You have ultimate control of the image from the outset.

Fortunately, when shooting RAW, you can choose whatever colorspace and bit depth you desire during processing in Camera Raw. When capturing in JPEG, however, the image will always be an 8-bit image (less information).
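Outside of Camera Raw, the same idea can be sketched with the open-source rawpy library: develop the RAW file at 16 bits per channel in a wide gamut so nothing is thrown away before editing. The file names and the choice of library are my assumptions for illustration, not part of the author’s workflow:

```python
import rawpy
import imageio.v3 as iio

with rawpy.imread("IMG_0001.CR2") as raw:
    # 16 bits per channel, ProPhoto primaries: the full range the sensor
    # recorded is carried forward into editing.
    rgb16 = raw.postprocess(output_bps=16,
                            output_color=rawpy.ColorSpace.ProPhoto)

iio.imwrite("IMG_0001_prophoto16.tif", rgb16)
```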

One thing many people neglect to set in camera is colorspace. Most cameras offer only two choices: sRGB and Adobe RGB. Camera manufacturers default to sRGB out of an overly simplistic view of camera use. Like JPEG, sRGB is a very narrow colorspace. Monitors usually default to sRGB too, and sRGB looks fine on them, as well as on the web. Many, many people shoot sRGB JPEGs and are very happy with what they see on their monitors, but are heartbroken when their images are output on a non-sRGB device (such as an inkjet printer) because their colors just seem to disappear. sRGB holds a far smaller range of colors (gamut) and, like JPEG, should not be used for capture or processing. If you must shoot JPEG, set the camera to Adobe RGB. When the camera is set to sRGB, the files it writes are confined to that narrow gamut; the only way an sRGB file can expand its gamut is through interpolation.

Since the monitor default is also sRGB, it’s important to set up your working space deliberately, so that you see your colors correctly and don’t inadvertently convert your files back to sRGB. I recommend ProPhoto RGB, or at least Adobe RGB.

If capture is done in RAW, the colorspace can be set later and can actually be made wider than Adobe RGB. I export my RAW files as ProPhoto RGB, which has a wider gamut than even your printer or monitor can reproduce. The ProPhoto file is then converted to Adobe RGB for printing.

Bit depth is extremely important in properly processing an image. Eight bit holds much less information than sixteen bit. When an image is processed, it is, in effect, “ripped apart.” This is what causes problems like banding in transitional areas of a sky. If an image is processed in 16 bit, the tonal range is still ripped apart, but if it is then made 8 bit for output, the banding goes away. If you start in 8 bit, there is nothing you can do about it. Set up your Camera Raw interface at the widest colorspace you can, and export in 16 bit. This gives you the most material to work with.
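Here is a small numeric illustration of that point (the levels values are arbitrary, not a recipe): the same aggressive contrast move leaves far fewer distinct tones when the math is done on 8-bit data than when it is done in 16 bit and only reduced to 8 bit at the end.

```python
import numpy as np

# A smooth gradient, like a clear sky, in 16 bit and in 8 bit.
gradient16 = np.linspace(0, 65535, 1024).astype(np.uint16)
gradient8 = (gradient16 // 257).astype(np.uint8)

def stretch(x, lo, hi, top):
    # A crude Levels move: expand the range lo..hi to fill 0..top.
    return np.clip((x.astype(np.float64) - lo) / (hi - lo) * top, 0, top)

edited16 = stretch(gradient16, 16384, 49151, 65535)
edited8 = stretch(gradient8, 64, 191, 255)

# Only at output is the 16-bit edit reduced to 8 bit.
out_from_16 = (edited16 / 257).astype(np.uint8)
out_from_8 = edited8.astype(np.uint8)

print("distinct levels, edited in 16 bit:", len(np.unique(out_from_16)))
print("distinct levels, edited in 8 bit: ", len(np.unique(out_from_8)))
# Fewer distinct levels in the 8-bit edit is exactly what shows up as banding.
```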

SO, WHAT’S NEXT?

At this point, we have a properly exposed image with no clipping. In Camera Raw, we are set to export the image in 16 bit and in the ProPhoto colorspace. Processing is up to you, although I find it useful to follow a personal workflow protocol. Personally, I prefer to do any exposure tweaking from the middle of the tonal range, then deal with the highlights and shadows. I like adding a little clarity, but I usually don’t have to do much if my exposure is correct. Most of my ACR work is tweaking and is entirely subject to personal taste.

A few things that I find make a huge difference are pre-sharpening and making sure that I get rid of any chromatic aberration. Aberration and noise can be removed in the lens-correction tab. It’s good to at least check this when setting up your raw file, so it doesn’t cause problems later. I learned this when preparing a show: I was actually hanging the show of 24×36″ prints when I looked closely at an image and saw incredible aberration! Check it at the very beginning.

Pre-sharpening is something I’d never thought about before, but it makes sense. Our cameras have anti-aliasing filters to reduce moiré. These work by softening the image, so we are actually starting out with a soft image. After capture, we can go back in and selectively sharpen the image and control noise in the same tab. This brings the image back to maximum sharpness at the very beginning. Because this is done in ACR, it doesn’t seem to have the aliasing problems that output sharpening can have.
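The capture-sharpening step itself lives in Camera Raw; for readers who want to experiment with the idea outside of it, here is a minimal stand-in using Pillow’s unsharp mask. The radius and amount are only illustrative, and the file names are hypothetical:

```python
from PIL import Image, ImageFilter

img = Image.open("capture.jpg")

# Small radius, modest amount: just enough to counter the anti-alias
# softening, not an output sharpen.
presharpened = img.filter(
    ImageFilter.UnsharpMask(radius=1, percent=60, threshold=2))
presharpened.save("capture_presharpened.tif")
```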

After Camera Raw, open your 16-bit, wide-gamut image and finish it to suit. Be sure to maintain colorspace protocol: do all work in the same working space, and do major tonal editing in Camera Raw. It is in Photoshop that the image can get “ripped apart.”

WHERE THE MAGIC HAPPENS. OUTPUT.

I’ve become a firm believer that when working digitally, the print is where the magic happens. There are two major concerns that once addressed will make your prints stand out.

The first concern is a knowledge of the highlight and shadow thresholds of your materials. In my last article, “Beyond Calibration 2.0” ( /beyond-calibration-2/ ), there was a discussion of print thresholds. These are what will give you a feeling of delicacy and liveliness in your images; I call it “air.” Take care in your image processing to observe tonal thresholds. Your print threshold is the point at which tonal gradation begins. This is exactly what determined the difference between a proper Zone System image and one that wasn’t: the Zone System image had an open, airy feel, and the tones in the image fell where they should. When you place your important tones in the right place, the print looks livelier and has what I call a “relaxed” feeling. “Beyond Calibration 2.0” includes full instructions on how to determine your thresholds.
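As a quick, hedged sketch of the idea in code: you can generate a near-white step wedge like the one below, print it through your normal workflow, and note the lightest patch still distinguishable from paper white; that value is your highlight threshold. The 240–255 range and patch size are arbitrary choices, and this is only a rough stand-in for the full procedure:

```python
import numpy as np
from PIL import Image

levels = list(range(240, 256))          # sixteen near-white patches, 240..255
patch = 200                             # patch size in pixels

row = np.hstack([np.full((patch, patch), v, dtype=np.uint8) for v in levels])
Image.fromarray(row).convert("RGB").save("highlight_threshold_wedge.tif")
# Print the wedge and find the lightest patch still separate from paper white.
```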

The second concern incorporates your print thresholds and allows you to give attention to every area of the tonal range. This is soft proofing, and it is very important. Again, “Beyond Calibration 2.0” has full instructions.

The confusion many have is this: if you have a good RGB file and a good output profile for your printer, ink, and paper combination, you should be able to just push the button and get a good print, correct? That’s what I thought too, but let’s look at what we’re actually trying to do.

The image we see on our monitor is rear-illuminated and consists only of red, green, and blue; in RGB, black is simply the absence of illumination. Our prints, on the other hand, are viewed by reflected light. The colors are made from cyan, magenta, and yellow, and black is printed with black ink. While cyan, magenta, and yellow together approximate black, the result is actually an ugly dark brown, which is why black is needed as a “key” color, the “K” in CMYK. These two ways of rendering color are as different as dogs and cats. When we print an RGB image, we use a profile as a “translator” so that it appears relatively correct when printed and viewed as CMYK. But a profile can only get close to the RGB it converts, and several very predictable changes happen to the image. The conversion happens behind the scenes, but in the end the printer is not printing RGB; it is printing CMYK.

When a profile changes an image from RGB to CMYK, three things happen. First, the image will become a little darker as a result of the surface of the paper. Second, contrast drops, and third, there is a drop in saturation. You would think that some genius would come up with an action that would compensate for these changes, but unfortunately, every image is different. Some images change very little, and some change a lot, so an action won’t work. It’s up to you. If you want really fine prints, you have to learn how to soft proof. Again, I have full threshold and soft proofing instructions in my last article “Beyond Calibration 2.0”.
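Photoshop’s View > Proof Colors is the usual way to do this; purely as an illustration of what a soft proof is doing under the hood, here is a sketch using Pillow’s ImageCms, with hypothetical profile paths for the working space, the monitor, and the printer/paper combination:

```python
from PIL import Image, ImageCms

img = Image.open("finished_rgb.tif")      # the edited RGB master (hypothetical)

# Simulate how the printer/paper profile will render the file on screen, so
# the darkening, contrast loss and saturation drop become visible now.
# (Soft-proofing is the default behavior of buildProofTransform.)
proof = ImageCms.buildProofTransform(
    inputProfile="ProPhoto.icm",            # working space of the file
    outputProfile="monitor_sRGB.icm",       # the display profile
    proofProfile="MyPrinter_MyPaper.icc",   # printer + paper combination
    inMode="RGB", outMode="RGB",
    renderingIntent=ImageCms.INTENT_PERCEPTUAL)

on_screen_preview = ImageCms.applyTransform(img, proof)
on_screen_preview.show()
```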

It is important to remember that when we print, we are trying to make a CMYK image look just like its RGB cousin. Because they are only “cousins,” they will never look exactly alike; each has its own unique “genetic” material, so the two will always differ a little. You can use these differences to make your prints even more expressive than the original RGB. A common misconception is that the input/output curve of the print tracks linearly with the curve of the RGB file. This is absolutely not the case. Often, subtle areas such as striations in clouds simply die: the rest of the print looks fine, but subtle tonalities blend into other tones and are no longer visible. Soft proofing is where those tones are brought out.

Figure 14

I think as a reaction to the miserable winter we’ve been having, I’ve been working on a series of images that ride right along the highlight threshold. There are no blacks; all the tones are different shades of white. In threshold areas, tones tend to compress, and these are the types of images where separation simply evaporates. In this image, the subtle horizon tones tended to blend together.

To manage an image like this, a deep knowledge of the RGB scale comes into play. Sometimes you can’t even see the separations on the monitor, but you can see subtle differences by watching your RGB numbers. This is where Zone System knowledge helps. In this image, all tonalities are above Zone VII (the light gray band in the middle of the scale). Through use, I’ve come to know Zone VII as about 240 RGB. All tonalities in this image therefore fall between 240 and 250 RGB, except for the snow directly in front of the horizon line. It is important to recognize that even this snow is not pure white. It falls at 250, which reads as white but is actually a light, light gray, and gives what I call “substance” to the tonality. If this area were allowed to go to 255, or pure white, it would lack substance and be empty. The highlight threshold for this paper is 253, so 250 is very close to pure white, yet it makes the snow look material instead of disappearing into white.

Because this image is mostly white, the edges were burned slightly, to about 245 RGB, to separate the image from the white border of the paper in the print. It is very important to remember that in fine printing, the image will not present itself the same way it does on the monitor. Issues such as edge density and accurate tonalities can make or break an image.
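Reading the numbers rather than trusting the screen can be done with any eyedropper tool; as a small hedged sketch, this samples the mean value of a region and checks that it sits between Zone VII (about 240 here) and this paper’s 253 highlight threshold. The file name and coordinates are of course hypothetical:

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("snow_horizon.tif").convert("RGB"))

# Sample the snow just in front of the horizon (coordinates are image-specific).
sample = img[1200:1260, 800:1000]
mean_value = sample.mean()

print(f"mean RGB of sampled snow: {mean_value:.1f}")
print("sits between Zone VII (240) and the 253 threshold:",
      240 <= mean_value <= 253)
```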

Figure 15

A soft proofing curve doesn’t usually look anything like other curves you may have used. Above is the soft proof curve for the snow image. Because this is a totally high-key image, all of the corrections fall in the top three points. Soft proof curves are often very subtle; some bump up and down. The point to remember is that with the curve, you can address every point in the tonal range.

The first, and very important, step I take when making a soft proof curve is to lock it down with points at the quarter tone (bottom third; there is no point there on this curve because the image is high key), the midtone, and the three-quarter tone. I do this because once you start moving points around, the rest of the curve can shift. You don’t want that.

Let’s start at the very top point. Notice that it is dragged down from the top of the curve just a tiny bit, to 253, my highlight threshold. This gives my highlights “substance,” or the “air” I spoke of in reference to Zone System images. Again, the threshold is the point at which tonality begins; above it, everything is pure white.

The next point is the 250 value in the snow in front of the horizon. Just as in Zone System use, I have “placed” that point to make sure it is exactly what I want. Again, I have found that no matter how well calibrated your monitor is, subtle tonalities like these often cannot be seen. You must rely on the numbers, and you must know what those numbers mean.

The third point is the trees at the horizon. This area sits near 240-245 RGB: Zone VII. Even though it is actually light gray, it translates as textured white. This is the type of tone that falls apart in a printed image as opposed to an RGB file; there is such a subtle tonal difference from the white of the snow that it tends to fade into nothing. Therefore, it’s important to increase the contrast between that point and the one above it. To do this, I use what I call “mini-curves.” Note that the 240 point is just a little lower on the curve and is therefore a little darker. This “places” the point in the low 240s, but it does one other thing. Those who have used curves know that the steeper the curve, the higher the contrast; this is why in an “S” curve all the contrast lies in the midtones while the toe and shoulder flatten out. The same is true for these two highlight points. In addition to precisely placing my lower point, the increased slope between the points increases the contrast between them. So even though we are dealing with very subtle highlight detail, we can keep the tonalities separate; a smoother curve will not do that. This is especially helpful in, say, a landscape, where subtle striations in clouds tend to blend together.
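Numerically, a mini-curve is just a piecewise-linear mapping with a steeper segment between two placed points. The sketch below uses values in the spirit of those discussed above (the trees near 240, the snow at 250, 253 as the highlight threshold); the lower locking points are generic placeholders:

```python
import numpy as np

# (input, output) pairs: locked quarter/mid/three-quarter tones, then the
# highlight "mini-curve" -- 238 pulled slightly down, 250 held, 255 -> 253.
points_in = [0, 64, 128, 192, 238, 250, 255]
points_out = [0, 64, 128, 192, 236, 250, 253]

def apply_curve(values):
    # Piecewise-linear interpolation, like dragging points in a Curves dialog.
    return np.interp(values, points_in, points_out)

print(apply_curve(np.array([240, 245, 250, 255])))
# The 240 trees drop slightly, the 250 snow stays put, and the slope between
# them is steeper than 1:1, so their separation increases instead of fading.
```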

A final aspect of output is saturation. This varies from image to image, but often, when you apply your soft proof profile, you will see a bluish cast descend over the image. This is your saturation dropping. My last step is often to add a Hue/Saturation layer and give the image just enough added Master saturation to counteract the desaturation.

As a last step, we must take into account the limits of our materials and equipment. Even though I have been working in 16 bit and the ProPhoto colorspace, I have to remember that my printer is really designed to work from 8-bit, Adobe RGB files. I am handing it far more information than it can reproduce, so I must remember to reduce bit depth and gamut for use. I have found that because of the huge amount of information in a 16-bit image, high-key specular highlights tend to blow out; reducing the image to 8 bit for printing smooths out tonalities such as subtle gradations in the sky of a landscape. Likewise, although ProPhoto allows me to maintain maximum gamut throughout my workflow, ProPhoto cannot be accurately reproduced. The only way for the printing algorithms to handle all that gamut is to compress or clip it (rendering intents). ProPhoto is a wider colorspace than even your monitor can display, but working in it lets you maintain as full a gamut as you want; you simply must remember to convert to a smaller gamut for use. Inkjet printers are designed to reproduce Adobe RGB, and many digital printers can only handle sRGB. Be sure to CONVERT your files for output. Converting rearranges the numbers in the file without appreciable visual change; your results will be more controlled, and you won’t suffer the degradation that can happen with gamut compression or clipping.
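In Photoshop this final step is Edit > Convert to Profile plus Image > Mode > 8 Bits/Channel; as a hedged code sketch of the same idea, Pillow’s ImageCms can convert the wide-gamut master to Adobe RGB (the profile paths are assumptions, and Pillow’s “RGB” mode is already 8 bits per channel, which covers the bit-depth reduction):

```python
from PIL import Image, ImageCms

master = Image.open("finished_rgb.tif")   # hypothetical ProPhoto master

# CONVERT (not assign): the numbers are rearranged so the appearance is
# preserved as closely as the smaller Adobe RGB gamut allows.
print_file = ImageCms.profileToProfile(
    master, "ProPhoto.icm", "AdobeRGB1998.icc",
    renderingIntent=ImageCms.INTENT_PERCEPTUAL, outputMode="RGB")

print_file.save("for_print_adobergb_8bit.tif")
```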

IT’S IN YOUR HANDS

As I have developed my capture, editing, and soft-proofing skills, I’ve realized that each step is incredibly important to the final output. I have grown to see the RGB file as the weak cousin of the print, and often I find my prints much more pleasing than what I see on the monitor. I see the RGB file as raw material that lives only in the electronic environment of my computer. A print is an actual thing. Current thinking holds that printing is purely a push-button endeavor. Nothing could be farther from the truth.

Ansel Adams used to say (relating printing to music performance) that “The Negative is the score, and the Print is the performance of the score”. This is still very true in digital printing. There are many, many additional steps that need to be taken to give your print the life that it deserves. I have found over and over, that usually, my prints are more pleasing than my RGBs.

All too often there has been reference to a “Digital Zone System,” which in many ways is a mischaracterization. Yes, there are similarities to the Zone System in these techniques, but with the Zone System the stage was set in the negative. That is still true of the RGB file, but now our expression happens at the output end. There is still too much talk of an image “coming out” nicely; in reality, only YOU control your output, and it takes practice and a lot of time, just as it did in the darkroom. I believe that digital production is far superior to film, and that thinking otherwise is Luddite thinking. Through knowledge of your tonal range, the RGB scale, and how output really works, you will be able to produce truly superior images.

WHERE TO NOW?

I’ve put a lot of thought into this subject. I’ve studied, processed images, and have long been a “color geek”. I have a long background in conventional chemically based imaging, and now have been doing digital imaging for a long enough period that I can draw correlations between old technology and new. In the spring, I will be beginning a series of workshops on digital output and many other subjects. If you are interested in refining your photography, from capture to output, making it better than you could ever do with film, drop me an email ( ces@christopherschneiter.com ), and I’ll put you on my mailing list. As the time gets closer, I’ll keep you updated. Also, if there are any other subjects you’d like to learn in a workshop, let me know. The plans are to begin workshops here in Michigan, and possibly expand into location specific workshops. Keep in touch!

Christopher Schneiter

Adjunct Associate Professor of Photography

Lansing Community College

Lansing Michigan

Chris’s Website

©Christopher Schneiter 2014






Born December 30, 1954. Began photographing in 1969 in Kalamazoo, Michigan. Bachelor of Fine Arts, Photographic Illustration, Rochester Institute of Technology, 1978. Adjunct Associate Professor of Photography, Lansing Community College, Lansing, MI.
