Not So Fast in Dismissing Moore’s Law

August 27, 2012

Michael Reichmann

Ray Maxwell has written an interesting article on diffraction and the future of Moore's Law in digital photography. While I agree with some of his conclusions, I think it is a bit premature to dismiss Moore's Law and its effect on us.

Moore's Law covers the number of transistors that one can produce for a fixed amount of money (see http://en.wikipedia.org/wiki/Moore's_law for more details). Transistor count is important because almost all digital circuits use transistors as building blocks. Pixels on digital sensors are made using combinations of transistors. In fact the devices that Moore used to illustrate his point back in 1965 were CCD devices that were primitive ancestors of today's digital camera sensors. Moore argued that the cost per transistor would drop by a factor of two every 12 to 24 months. Over time Moore's Law has been generalized a bit to be more than just transistor count – most people in the technology world use the term "Moore's Law" to mean the rapid increase in computing power at the same or lower cost. Transistor count is the primary driver of computing power, but clock speed, bandwidth, latency, and parallelism also contribute, and they too have been growing at a rapid pace, doubling the power of computing devices on roughly a 12 to 24 month time scale.
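To make that pace concrete, here is a rough illustrative calculation of compound doubling. The 18-month doubling period is just an assumption picked from the 12 to 24 month range above:

```python
# Illustrative only: compound growth under an assumed 18-month doubling period.
def relative_power(years, doubling_months=18):
    """Computing power (or transistors per dollar) relative to today."""
    return 2 ** (years * 12 / doubling_months)

for years in (5, 10, 20):
    print(f"after {years:2d} years: about {relative_power(years):,.0f}x")
# after  5 years: about 10x
# after 10 years: about 102x
# after 20 years: about 10,321x
```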

Every year since Moore’s first discussion of the law in 1965, somebody in the computer business says that Moore’s Law is about to end, because of problems in semiconductor manufacturing. Moore’s “Law” is just an observation about economics of the computer business; it isn’t a law of nature. So it is certainly possible that one day it will stop. However, tens of thousands of solid state physicists and semiconductor engineers wake up every morning dedicated to finding a way to keep the pace of the industry going. So far their combined inventiveness has beaten back every challenge.

Ray Maxwell’s article argues something different – he is not concerned about whether Moore’s law keeps going for computers. Instead he says that for digital cameras, the effects of diffraction will stop further improvements in resolution and pixel count.

Diffraction is a fundamental and inescapable part of optics, and it has some quite profound effects. This issue was first raised on Luminous Landscape back in 2007 in a reply I made to an article by Charles Johnson. This became hugely controversial because it challenged the conventional wisdom that the sharpest pictures come from stopping down to f/22 or even f/32. In fact, if you stop down to f/22 on a 35mm sized sensor you are effectively working with a 2 megapixel sensor – because diffraction robs you of resolution. Various threads and comments on LL and other sites doubted the points I raised (to put it mildly), but since then diffraction has been treated on LL several times, including by Rubén Osuna and Efraín García in "Do Sensors 'Outresolve' Lenses?" and in "Understanding Lens Diffraction" in the Understanding Series. You can see the results for yourself; f/22 has much less resolution than f/8 on a modern DSLR. Indeed, for maximum sharpness most DSLRs cannot be stopped down beyond f/11, or even f/8, without a large cost in resolution. The excellent Cambridge in Color site gives a handy calculator of diffraction effects.
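To see roughly where the 2 megapixel figure comes from, here is a sketch of the Airy disk arithmetic, in Python, purely for illustration. How many sensor pixels you allow per Airy disk diameter is a judgment call, so the sketch brackets the answer with two criteria; either way, f/22 on full frame lands in the low single-digit megapixels:

```python
# Rough sketch: how much detail survives diffraction on a 36 x 24 mm sensor.
# The "pixels per Airy disk" criterion is an assumption made for illustration.
SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0
WAVELENGTH_MM = 550e-6            # green light, 550 nm

def airy_diameter_mm(f_number):
    """Diameter of the Airy disk (to its first zero) at the given f-number."""
    return 2.44 * WAVELENGTH_MM * f_number

def effective_megapixels(f_number, pixels_per_airy=1.0):
    """Pixel count whose pitch matches the diffraction-limited spot size."""
    pitch = airy_diameter_mm(f_number) / pixels_per_airy
    return (SENSOR_W_MM / pitch) * (SENSOR_H_MM / pitch) / 1e6

for f in (8, 11, 16, 22):
    strict = effective_megapixels(f, pixels_per_airy=1.0)
    loose = effective_megapixels(f, pixels_per_airy=2.0)
    print(f"f/{f:>2}: roughly {strict:.0f}-{loose:.0f} MP of usable detail")
# f/22 works out to roughly 1-4 MP, no matter how many pixels the sensor has.
```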

Ray Maxwell feels that diffraction means that there will be an end to Moore’s Law having an impact on digital cameras, and specifically on sensor resolution. He suggests that the “megapixel wars are over” and that 35mm full frame sensors won’t get any larger. He concludes by discussing a hypothetical 100 megapixel camera saying “Marketing people may produce one of these cameras, but no physics expert will buy one.”

Well, I have a PhD in physics, and think of myself as something of an expert on the topic. But I expect that I will buy a 100 megapixel sensor camera some day (although likely not for a few years).

Here is why. Diffraction is a problem, but there are a bunch of reasons that pixel counts will continue to rise.

The first is that we all have fast lenses. The diffraction limited aperture for a 100 megapixel 35mm full frame sensor is f/4.6, and for 50 megapixels it is f/6.4. There are plenty of lenses where a 50 or 100 megapixel sensor will show some benefit. Indeed most macro lenses work well wide open. So do many super-telephotos like a 300mm f/2.8 or 600mm f/4 – they are typically used wide open, where there would be no problem with a 50 to 100 megapixel sensor.
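The f/4.6 and f/6.4 figures can be approximated by running the same Airy disk arithmetic the other way: find the f-stop at which the Airy disk starts to span more than about two pixels. The exact criterion (here roughly 2.1 pixel widths) and the 550 nm wavelength are assumptions chosen to land near the numbers quoted above, not a definitive derivation:

```python
import math

SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0
WAVELENGTH_MM = 550e-6   # green light, 550 nm

def pixel_pitch_mm(megapixels):
    """Pixel pitch of a 36 x 24 mm sensor with the given pixel count."""
    return math.sqrt(SENSOR_W_MM * SENSOR_H_MM / (megapixels * 1e6))

def diffraction_limited_fstop(megapixels, pixels_per_airy=2.1):
    """f-stop at which the Airy disk grows to about pixels_per_airy pixel widths."""
    return pixels_per_airy * pixel_pitch_mm(megapixels) / (2.44 * WAVELENGTH_MM)

for mp in (25, 50, 100):
    print(f"{mp:3d} MP full frame: diffraction-limited near f/{diffraction_limited_fstop(mp):.1f}")
# ~f/9.2 at 25 MP, ~f/6.5 at 50 MP, ~f/4.6 at 100 MP
```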

So a hypothetical 100 megapixel camera would not get its full benefit at f/11, but it would at wider apertures, which could be much of the time for many photographers. A 50 megapixel camera would have an even wider range of applicability.

A second reason for more resolution, ironically, is that most lenses are not diffraction limited – they are much worse. The diffraction limit is the best you can do, and most real lenses fall short of it, with distortion and aberrations such as coma, chromatic aberration and others. These problems can to some extent be corrected in software – simple versions of this exist in DxO, Camera Raw and other programs today. However, all of those adjustments in effect cost you resolution – or, conversely, you can do a better job with higher resolution sampling. One of the best things you can do with a 100 megapixel sensor is use it to create a much better 50 or 25 megapixel image through intelligent downsampling.

Indeed it is likely that this will be an automatic option on the camera. We already have the option on most DSLRs to choose what resolution JPEG file is produced. On Canon's latest cameras, you can even choose between raw and "small raw". If you are shooting at an aperture wider than the diffraction limit, or under some other conditions, you will automatically get the full resolution; otherwise the camera bins the pixels and produces a lower resolution file. Besides saving space and time, binning like this greatly reduces noise.
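Here is a minimal sketch of why binning reduces noise: averaging each 2 x 2 block of pixels cuts random noise roughly in half (the square root of four) while halving linear resolution. The image size and noise level below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor data: a flat grey scene plus random noise (illustrative values).
signal, noise_sigma = 100.0, 10.0
raw = signal + rng.normal(0.0, noise_sigma, size=(2000, 3000))

# 2 x 2 binning: average each block of four pixels into one output pixel.
binned = raw.reshape(1000, 2, 1500, 2).mean(axis=(1, 3))

print(f"noise before binning: {raw.std():.1f}")     # ~10
print(f"noise after  binning: {binned.std():.1f}")  # ~5, a sqrt(4) = 2x improvement
```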

A third point about diffraction is that it depends on color. Remember that our pixels are not really R, G, B at every position but exist in a Bayer filter array (http://en.wikipedia.org/wiki/Bayer_filter). Software in the camera, or during raw-file processing, reconstructs R, G, B values for each point using demosaicing algorithms to approximate the true values at every point.
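As a concrete (and deliberately simplified) illustration of the idea, the sketch below does bilinear interpolation of just the green channel from an RGGB Bayer mosaic. This is not how any particular camera or raw converter actually works; real demosaicing algorithms are far more sophisticated and edge-aware:

```python
import numpy as np

def interpolate_green(bayer):
    """Fill in green at the red/blue sites of an RGGB mosaic by averaging the
    four green neighbours. A minimal bilinear sketch, not a production demosaicer."""
    h, w = bayer.shape
    green = np.zeros((h, w), dtype=float)
    # In an RGGB layout, green samples sit at (even row, odd col) and (odd row, even col).
    green_mask = np.zeros((h, w), dtype=bool)
    green_mask[0::2, 1::2] = True
    green_mask[1::2, 0::2] = True
    green[green_mask] = bayer[green_mask]
    # Every red or blue site has green neighbours directly above, below, left and right.
    padded = np.pad(green, 1, mode="reflect")
    neighbour_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    green[~green_mask] = neighbour_mean[~green_mask]
    return green

# Tiny usage example on a synthetic 6 x 6 RGGB mosaic.
mosaic = np.arange(36, dtype=float).reshape(6, 6)
print(interpolate_green(mosaic))
```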

Demosaicing works in part because our eyes are much more sensitive to resolution in monochrome (luminosity) and not very sensitive to spatial resolution in color. Yet the wavelength of light is different for red, green and blue light. The Cambridge in Color diffraction calculator assumes the middle value – green light (550 nanometer wavelength) – and the color layout used in a Bayer array.

Blue light supports much higher resolution – about 1.5 times higher linearly (2.25x in number of pixels) than green light. Engineers designing a 100 megapixel sensor would almost certainly consider using a different approach than the Bayer array. With a different array, advanced demosaicing algorithms could, in effect, extract the maximum resolution from the shortest (blue) wavelengths, which are not diffraction limited as early, and use it to improve the image. Using this approach the diffraction limited aperture occurs at a higher f-stop than given by the Cambridge in Color diffraction calculator discussed above. I don't know exactly how much extra resolution can be extracted this way, but it is surely more than the naive Airy disk calculation for green light and today's Bayer array would suggest.
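A rough way to see the wavelength effect is to rerun the naive Airy disk arithmetic at a blue wavelength. The 450 nm figure for blue, the 2.94 micron pitch of a hypothetical 100 megapixel full frame sensor, and the two-pixel criterion are all assumptions for illustration:

```python
def diffraction_limited_fstop(pitch_mm, wavelength_mm, pixels_per_airy=2.0):
    """f-stop at which the Airy disk grows to about pixels_per_airy pixel widths."""
    return pixels_per_airy * pitch_mm / (2.44 * wavelength_mm)

PITCH_100MP_MM = 2.94e-3   # ~2.94 micron pitch of a hypothetical 100 MP full-frame sensor
for name, wavelength in (("green, 550 nm", 550e-6), ("blue, 450 nm", 450e-6)):
    limit = diffraction_limited_fstop(PITCH_100MP_MM, wavelength)
    print(f"{name}: diffraction-limited near f/{limit:.1f}")
# green: ~f/4.4, blue: ~f/5.4 -- the blue channel tolerates stopping down a bit
# over half a stop further before diffraction sets in.
```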

All of these effects mean that camera purchasers will see some improvement from increasing pixel counts. It's true that the gains won't be as dramatic as we have seen in the last decade, when we went from 6 to 25 megapixels. But I expect that there will be noticeable improvements in image quality up to 50 megapixels, and likely eventually to 100 megapixels (for a 35mm full frame sensor). The benefits in resolution will not be huge, but remember that with Moore's Law operating, the extra pixels ultimately get very cheap. So while I agree that the difference between 6 and 25 megapixels is a much bigger deal than between 25 and 100, there is still likely to be a value proposition that makes sense.

Eventually there will be a limit to the number of pixels for 35mm full frame sensors, because diffraction will finally defeat any increase in resolution. At 275 megapixels the diffraction limited aperture for 35mm full frame is f/2.8 (again, for green light and a Bayer array). Will we see a 35mm full frame sensor with that many pixels? Possibly not, but it seems a bit extreme to think that we will stay at 25 megapixels forever.

However, Moore's Law is about more than just the NUMBER of pixels! It is also about making pixels deeper (more bits per sample), or making pixels cheaper; today's 25 megapixel sensors are nice, but the cameras are still expensive. Over time those sensors will get much cheaper and that will drop camera prices. I think that is a pretty valuable thing for photographers. A second effect is that Moore's Law also makes physically larger sensors cheaper. Once the 35mm sensor size runs out of gas, the next step is medium format. The 6 x 4.5 cm format can easily take 100 megapixels even with simple techniques, and multiples of that with the techniques discussed above. Those larger sensors are affordable (albeit barely) only because of Moore's Law.
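For a rough sense of the headroom a larger sensor gives, the sketch below compares pixel pitch at 100 megapixels on full frame and on 645 medium format. The 56 x 41.5 mm figure is the nominal 645 film-gate size, used here only as an assumption (digital backs are somewhat smaller), and the diffraction criterion is the same rough one used earlier:

```python
import math

WAVELENGTH_MM = 550e-6   # green light, 550 nm

def pitch_mm(width_mm, height_mm, megapixels):
    """Pixel pitch for the given sensor dimensions and pixel count."""
    return math.sqrt(width_mm * height_mm / (megapixels * 1e6))

def diffraction_limited_fstop(pitch, pixels_per_airy=2.1):
    return pixels_per_airy * pitch / (2.44 * WAVELENGTH_MM)

for name, w, h in (("35mm full frame  ", 36.0, 24.0),
                   ("645 medium format", 56.0, 41.5)):  # nominal film-gate size (assumption)
    p = pitch_mm(w, h, 100)   # 100 megapixels
    print(f"{name}: {p * 1000:.1f} um pitch, diffraction-limited near f/{diffraction_limited_fstop(p):.1f}")
# Full frame: ~2.9 um and ~f/4.6; 645: ~4.8 um and ~f/7.5 -- the larger sensor
# reaches 100 MP with roughly the pixel pitch of a 37 MP full-frame chip.
```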

Moore's Law is not just about sensors either. The chips in the camera that handle noise reduction and give you the speed and bandwidth for high frame rates and other features all derive their power from Moore's Law and related effects. We absolutely will benefit from those.

So, I think it is a bit premature to bury Moore’s Law in digital photography. It has dominated the drive to quality we have all enjoyed, and it will continue to improve our cameras going forward.

July, 2009

___________________________________________________________________________________

Nathan Myhrvold was formerly Chief Technology Officer at Microsoft, and is co-founder of Intellectual Ventures.

