Five Technologies We Want (But Won’t Get)

The Time for True Innovation Has Come

With the fifth generation of top-level DSLRs about to drop, it’s time to pause and consider what these cameras truly could be. Not what they likely will be, that is, but what heights they could achieve if the camera makers really put their minds to it. For certain, the new models widely expected to be announced by Sony and Nikon this month, with Canon never far behind, will up the pixel count and supposedly usable ISO ranges to new Olympian heights. But will they really improve the camera as a tool for the serious image maker?

This essay is an anticipatory lament for the five technologies I really wish we were about to see. While each covers a slightly different aspect of the photographic process, they share two unifying themes: better integrating the camera with the photographer, and leveraging the tablet computing revolution to improve the capture process in the field. Digital cameras today realize a very high percentage of their ultimate image quality potential. What they do not yet offer are ways to move the photographer into a new paradigm of man/machine relationships, nor to really capitalize on the tremendous computing power we can now carry into the field with us to improve the image capture/creation process. This shortcoming is ripe for change.

The State of Play

It used to be that any imaginative soul with a CNC machine and a few million dollars could make a camera. They were relatively simple, if precise, mechanical instruments that required a bit of design imagination and mid-level manufacturing skills to produce. But that day is long past. Today, producing a camera involves designing and building not only the physical box, but also an entire computer system to operate it. Moreover, that computer requires an operating system and a UI to make it usable. This is not the stuff of basement entrepreneurs, and the rapid contraction of the photographic industry, in terms of participants, is a direct reflection of the staggering cost and complexity of bringing a modern digital camera to market.

So then why do they mostly still suck to use? Why has no one come up with much of anything useful in the way of control-system advancements in the decade that digital technology has been around? Why can’t Canon even master the art of mirror lock-up? But I digress.

In truth, manufacturers have been locked in a decade-long race for their lives as the camera product cycle shrank from years to months and the race to ever-greater numbers of megapixels dominated the development and consumption subcultures. But now we find ourselves at a plateau of sorts. The megapixel race is largely over. Cameras are sticking around on the market long enough for people to actually wear them out. Perhaps the paradigm is shifting back to the development pace we used to see with film. Perhaps it’s time for some real innovation to re-enter the equation.

So now is the perfect time to consider some real advances in camera design. New technologies that will make users go “wow” and actually improve the experience of photography. 

Herewith follow my own humble suggestions, in no specific order, on what camera makers should be working on today. All of these technologies have one thing in common: they are all possible now. In fact, some of them have been possible since before digital, but no one has had the foresight or creative power to port them into photographic applications.

While I am an optimist by nature, it seems unlikely that any of the major players will have thought outside the box sufficiently to realize the advances I propose below. That is sad because all are within reach. However, in the spirit of innovation, I present this list, secretly hoping someone will make me eat crow.

Church Raven, Newfoundland

Nikon D3, 70-200mm f2.8

The Technologies

#1 Full Voice Control

This is a ‘no brainer’. Voice recognition technology has been around since a 20-35mm lens was considered exotic. I can talk to my car, my word processor and even my $100 cell phone. It only makes sense that I should be able to control my camera simply by speaking to it. Nothing would overcome the ergonomic mediocrity of most modern cameras faster than a single button, cleverly placed where one’s right thumb wraps around the back of the camera, with which one could key the tiny microphone just beneath the rear LCD and tell the camera what to do. Photojournalists already use the built-in mic to record notes about their images, the names of subjects, and so forth. The hardware for voice control is all there already.

“Aperture Priority.  f8.  ISO 50.”

“Self timer. Mirror up. Three shot bracket. “

“Program. Continuous AF. High speed.”

It would be so brilliant. And so easy.  Most high-end cameras already have a built-in microphone which should be suitable to the task, so little would be needed in the way of re-engineering.

“Ah yes,” the doubters will say, “but the American century is over. Most of our key customers don’t speak English, at least not  as their first language. No one could bear the cost and complexity of programming every camera in every language! And how can you justify the insult of leaving a language out?”

Well, my friends, the answer is easy. Voice recognition is actually sound recognition. Which language it is doesn’t matter. Indeed, most cars with voice-controlled navigation are speaker-independent and offer multi-language support.

In any event, a camera is used by one user. With one voice, and one language. By my best count, it would take between 100 and 200 words to make every single major function on the camera controllable, including one-third stop increments of ISO, aperture and shutter speed. The answer, therefore, is to create software which “learns”, the way most good voice recognition software does. Have the user connect the camera to a computer via the ubiquitous USB connector, and have him or her repeat the key words into the camera mike, with the camera at eye level, as the words display on the computer’s monitor. This process could create a highly effective vocabulary in a very short period of time.
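To make the idea concrete, here is a minimal sketch (purely illustrative) of the camera’s side of the job once the speech engine has turned an utterance into text. With a vocabulary this small, mapping recognized phrases onto settings is little more than a lookup table; the phrase list, setting names and values below are my own assumptions, not any manufacturer’s API.

```python
import re

# Hypothetical phrase -> (setting, value) table; a real camera would
# carry one entry per controllable function (100-200 words in all).
COMMANDS = {
    "aperture priority": ("mode", "A"),
    "shutter priority": ("mode", "S"),
    "program": ("mode", "P"),
    "self timer": ("drive", "timer"),
    "high speed": ("drive", "continuous high"),
    "mirror up": ("mirror_lockup", True),
    "continuous af": ("af_mode", "AF-C"),
}

def parse_command(utterance):
    """Map a spoken phrase like 'Aperture Priority. f8. ISO 50.'
    onto a dict of camera settings."""
    settings = {}
    text = utterance.lower()
    # Fixed phrases first...
    for phrase, (key, value) in COMMANDS.items():
        if phrase in text:
            settings[key] = value
    # ...then numeric parameters: 'f8' -> aperture, 'iso 50' -> ISO.
    m = re.search(r"\bf\s*([\d.]+)", text)
    if m:
        settings["aperture"] = float(m.group(1))
    m = re.search(r"\biso\s*(\d+)", text)
    if m:
        settings["iso"] = int(m.group(1))
    return settings
```

The hard part – robust recognition against wind and crowd noise – stays in the trained speech engine; the camera-side logic is this simple.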

With such a small vocabulary, and the microphone literally at the user’s lips, I can’t imagine that it would take long to ‘train’ the camera to understand the user’s commands, irrespective of language. The experience of controlling a camera by voice is the closest we will ever get to controlling it simply by thought. It would be a quantum advance in camera technology and would revolutionize the field. Easy and amazing. Just imagine for a moment the level of integration with your camera that this advancement would achieve. Do a little experiment. Next time you’re out shooting, press the button closest to your thumb and tell your camera what you want it to do. Then, stew in frustration while you compare how quick and intuitive that was compared to the morass of modal menus you’re about to wade into to actually make the camera do what you finished saying a minute ago, when the light was still good.

This is the #1 new technology we need to see.

See… Nikon already put the mic right there for us!

#2 Live View Focus Masking

The new truth of super-resolution sensors is that focus matters like never before. Good olde autofocus just ain’t cutting it. The answer is simple: display a live-view image of the frame with all in-focus areas masked out with a colour overlay. This one is almost a reality in a few instances. Sony recently added focus masking to their NEX cameras via a firmware update, and video cameras have had this feature for years. Phase One’s focus masking on the superb IQ backs also does this, but alas only on a taken frame – a limitation of the CCD architecture of all present MF backs. For many applications, this is good enough, but for the focus mask to be LIVE and zoomable – up to 100% – would be the ultimate.
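The trick underneath is no deeper than edge detection: in-focus areas carry high local contrast, so a simple gradient threshold finds them. A toy sketch, assuming an 8-bit luminance array and a threshold I picked arbitrarily:

```python
def focus_mask(luma, threshold=30):
    """luma: 2D list of 0-255 luminance values.
    Returns a same-sized 2D list of booleans; True marks a pixel whose
    local contrast (difference to right/lower neighbour) exceeds the
    threshold -- i.e. a candidate 'in focus' pixel for the overlay."""
    h, w = len(luma), len(luma[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            dx = abs(luma[y][x + 1] - luma[y][x])   # horizontal gradient
            dy = abs(luma[y + 1][x] - luma[y][x])   # vertical gradient
            if max(dx, dy) >= threshold:
                mask[y][x] = True
    return mask
```

A real implementation would run this on the GPU at live-view frame rates and tune the threshold per lens and aperture, but the principle is just this – which is why video cameras have managed it for years.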

The promise of digital was that we would never have to wait to know we got the shot. Thus was born ‘chimping’. Alas, that promise has proven untrue. Certainly there are fewer exposure disappointments these days (though still too many for lack of technology #4…), but personally I get as many, and probably more, focus disappointments now than I did with film. It takes so little error to reduce 24 or 40 megapixels to an effective 10 or 20. Live view focus masking would solve this problem for all static subjects (i.e.: landscape work). If Phase One and Sony can do it, so can the billion-dollar big boys. This one is close, really close. Let’s hope someone gets it right this next go-round.

Dewy Reeds, Newfoundland

Nikon D3, 70-200mm f2.8

#3 Expose to the Right Exposure Mode

Ok, Reichmann beat me to the punch on this one with his comprehensive article on the need for, and potential implementation of, an Expose To The Right (ETTR) mode. In terms of UI, it could be made even better by being coupled with live view and displayed in Technicolor on the LCD. Blown highlights flash; blocked shadows get overlaid in yellow or red. An ideal implementation of this technology would then let the user make live adjustments to exposure, even in the smallest increments, with the touch of a button, or, wait for it… with a voice command (hey, why didn’t I think of that before?!). Too easy. Too excellent. Too much to ask?
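The arithmetic behind an ETTR mode is equally simple: find the brightest populated level in the raw histogram, and compute the positive shift, in stops, that pushes it just under the clip point. A hedged sketch (the 12-bit clip level and the headroom figure are illustrative, not taken from any real sensor):

```python
import math

def ettr_shift(histogram, clip_level=4095, headroom=0.98):
    """histogram: list where histogram[v] = pixel count at raw value v.
    Returns the EV adjustment (in stops) that places the brightest
    populated level just below the clip point."""
    populated = [v for v, count in enumerate(histogram) if count > 0]
    if not populated or max(populated) == 0:
        return 0.0  # empty or all-black frame: nothing to push right
    # Each stop doubles the raw value, hence the log base 2.
    return math.log2(clip_level * headroom / max(populated))
```

For example, a frame whose brightest data sits two stops below clipping would come back as a +2 EV recommendation – exactly the number the camera could dial in, or flash on the LCD for the user to confirm.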

#4 Now… give it all to me on my iPad

Indulge the fantasy: we now have live view focus masking and live ETTR.  The next step is to take this all up a notch from the back of the camera to a huge, high resolution screen. But where would we find one of those nowadays? After all, it’s not like everyone has a luscious 8×10 backlit LCD touch-screen computer with them in their pocket. What do you think this is, 2009? 

Ok, sarcasm aside, the next logical step is to send all of this live control data wirelessly and in real time to an iPad (or equivalent device). A 3-inch, million-pixel LCD is nice. An iPad is nicer. For critical landscape work, it would be an easy addition to the field kit. With a clever attachment bracket or hotshoe mount it could sit comfortably atop my camera, acting as a giant LIVE ground-glass.

The user-control experience of driving a camera completely from an iPad would be a dream come true for landscape photographers. Studio shooters in the still-life realm have had a version of this for years through the tethering process. Moving this technology into the field would be an epiphany for landscape photographers.  Many of us who have been in photography since the days of film know the pleasure of working with a view camera and the ultimate level of control and image-interaction it allows in the creation process.  However, when transposed into the modern technological paradigm, the pain that was an essential part of the view camera experience can largely be erased.  

Perfect framing, perfect focus, perfect exposure; all in 8×10 glossy and up to 100% pixel-view: dare we dream? If the processing power is not quite there yet in the tablet segment of the market, it soon will be. The smoldering horsepower under the hood of the new top-end MacBook Air suggests that the super-compact form factor no longer spells anemic processing.

This proposed technology also offers an unparalleled opportunity for one of Apple’s rivals in the tablet market to gain some cross-marketing traction. The iPad rules. That’s not likely to change. Apple’s obsessively tight controls over I/O and iPad developers, however, are a major turn-off. What better opportunity for an innovative entrant into the tablet market to create software, running on a rival platform, which makes their tablets the perfect companion for a particularly advanced brand of camera? Who wouldn’t go for the $500 up-sell to a $7,000 camera in exchange for tablet-based live view? This could be a leading innovation in camera technology.

#5 – Touch-Pad Based Zone System  

While all the technologies I have enumerated thus far are easily within reach today, with what would be relatively modest innovation, I’m going to indulge one that’s a little outside the box.  Not less do-able, mind you, just a little further out.

This technology involves going even further with exposure control than ETTR, and fusing the image processing step (the one everyone hates) more closely with the capture step (the one everybody loves). Here’s the idea: my gorgeously composed, perfectly focused image is displayed on my touch-screen device. Now, let’s go live with the image itself. Let me zoom in a bit, touch an area, and have the device select all contiguous and adjacent areas with the same colour and luminance, just like we’ve been able to do with the magic wand tool in Photoshop for the last decade… Then, let me adjust that selection until I have exactly the desired area selected. At this point, let me select which Zone (i.e., a tonal value on the scale from 0 to X described in the Zone System) I want to place that part of the image in. Then, let me do the same for as much of the rest of the image as I want, area by area, until I have something that looks like one of Ansel’s work prints, with Zone system annotations all over it.

Then, it’s time for a little computational magic. Have the camera take as many bracketed frames as it needs to get good tonal data for every zone, and meta-tag the image group with my Zone selections on the tablet. Then, when the files are imported into image processing software, allow me to apply a basic capture-sharpen to the whole batch and put HDR blending to work on the rest of the images to create a single master TIFF with rich and correct data everywhere I’ve asked for it, with the tonal relationships I chose at the time of image capture. This would work as well in colour as in B&W.
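The Zone math underneath all this is classic and compact. Zone V is the meter’s middle grey and adjacent Zones sit one stop apart, so placing a metered region in Zone N means shifting its exposure by N − 5 stops; the distinct shifts across all placed regions define the bracket set. A sketch (region names and zone choices are illustrative):

```python
def placement_shifts(placements):
    """placements: dict of region name -> desired Zone (0-10, i.e. 0-X).
    Zone V (5) is metered middle grey and each zone is one stop, so the
    EV shift that renders a region in Zone N is N - 5 stops."""
    return {region: zone - 5 for region, zone in placements.items()}

def bracket_plan(placements):
    """The distinct exposure offsets the camera must bracket so that
    every requested Zone placement has well-exposed data somewhere in
    the stack, ready for the HDR merge."""
    return sorted({zone - 5 for zone in placements.values()})
```

So a sky placed in Zone VII, rock in Zone III and water in Zone V would need three frames at −2, 0 and +2 EV – exactly the bracket the camera would fire off and meta-tag for the later merge.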

Of course this could be done now with Photoshop. But that would involve using Photoshop. Worse, it would involve work in the sterile misery of the computer room, not in the field, at the scene which inspired the image in the first place. So this last proposed technology is really a cri de coeur from a hater of computers and a lover of photography: leverage our modern technologies and create an artistically maximized photographic experience.

Misty Isle – Bonavista, Newfoundland

Nikon D3, 70-200mm f2.8


It is somewhat frustrating that, as photographers, we are hostage to computer engineers to give us the products we really want. Of course, there are some truly gifted engineers who also happen to be gifted photographers, but they are few and far between. Or perhaps the marketing and design branches of the big manufacturers lack a spirit of creative innovation, because what we have gotten for the last five years is a string of unimaginative products from a creatively stunted industry.

One of the reasons I personally cheer so hard for Phase One to thrive and survive is that, as a small design shop run by real photographers, they get it. At a small, but vastly ambitious scale for a company of their size, they are pushing the frontiers, truly innovating in photography, with technologies such as the touch screen on the IQ series, focus masking and their sort-of live-view from CCD (a truly daring feat, however successfully you deem its implementation). 

But someone needs to bring real innovation to the mass market.  The ideas for how to make cameras better by quantum leaps, and not mere evolutionary baby-steps, are neither complicated nor hard to come by.  They’re available right on the Internet, in places like this article, entirely free for the taking.

Gentlemen of the Megapixel, the ball is in your court.  Blow us away, please.

August 2011