In early July 2009 I published an essay by Ray Maxwell titled “Why Moore’s Law Does Not Apply to Digital Photography”. This was followed up by “Not So Fast in Dismissing Moore’s Law” by Nathan Myhrvold. Few essays published on this site have generated as much heat and follow-up email.
In an attempt to provide a voice to the many perspectives offered, the following is a response by Ray to Nathan’s reply and also some of the more informed and articulate emails that I’ve received. As for me – my brain hurts, and I’m simply heading out to do some photography. Case closed.
By Ray Maxwell
I have read Nathan Myhrvold’s essay “Not So Fast in Dismissing Moore’s Law”.
I agree completely with his analysis in every way. However, I will quote some lines from his essay and add my comments.
The first is that we all have fast lenses. The diffraction limited aperture for 100 megapixels on a 35mm full-frame sensor is f/4.6; for 50 megapixels it is f/6.4. There are plenty of lenses where a 50 or 100 megapixel sensor will show some benefit. Indeed, most macro lenses work well wide open. So do many super-telephotos like the 300mm f/2.8 or 600mm f/4 – they are typically used wide open, where there would be no problem with a 50 to 100 megapixel sensor.
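These f-numbers can be sanity-checked with a quick back-of-the-envelope calculation. A common rule of thumb (the exact crossover convention varies between sources) treats a sensor as diffraction limited once the Airy disk diameter, roughly 2.44·λ·N, spans about two pixel pitches; the sketch below uses that convention with green light (0.55 µm) and lands close to the quoted f/4.6 and f/6.4:

```python
import math

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Pixel pitch in microns for a given sensor size and pixel count."""
    area_um2 = width_mm * height_mm * 1e6
    return math.sqrt(area_um2 / (megapixels * 1e6))

def diffraction_limited_fstop(pitch_um, wavelength_um=0.55):
    """f-number at which the Airy disk diameter (2.44 * lambda * N)
    spans roughly two pixel pitches -- one common crossover convention."""
    return 2 * pitch_um / (2.44 * wavelength_um)

for mp in (50, 100):
    pitch = pixel_pitch_um(36, 24, mp)
    fstop = diffraction_limited_fstop(pitch)
    print(f"{mp} MP full frame: {pitch:.2f} um pixels, "
          f"diffraction limited near f/{fstop:.1f}")
```

Different authors use slightly different crossover criteria (one pixel, two pixels, the Bayer pitch), which is why published diffraction-limited apertures for the same sensor can differ by half a stop or so.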
I have two lenses for my Canon 5D MkII. They are both L lenses. One lens is the 24-70 f/2.8 L. The other lens is the 70-200 f/2.8 IS L. I also like to stop down to f/8 or f/11 for more depth of field for most of my images. Therefore, for the typical set of lenses and shooting that I do, a full-frame sensor with more pixels will not create sharper images for me. I suggest that my needs are typical for a large part of the photo community. I would guess that the lenses you refer to, which can benefit from a higher pixel density, are purchased by 5% or less of the market.
The other point we have not discussed is that as the pixels get smaller we lose dynamic range and signal-to-noise ratio. See the discussion of these trade-offs at cambridgeincolour.com.
So…if you want to move into this very high end of photography, you will need to move into medium format or larger format for the sensor to keep the pixel size large enough that you do not sacrifice these other parameters. Life is always a compromise.
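The pixel-size trade-off can be sketched numerically. Assuming, purely for illustration, that full-well capacity scales with photosite area and that the read-noise floor stays fixed (both are simplifications, and the constants below are invented for the example), halving the pixel pitch costs about two stops of per-pixel dynamic range:

```python
import math

def per_pixel_dr_stops(pitch_um, well_per_um2=1500.0, read_noise_e=5.0):
    """Per-pixel dynamic range in stops, assuming full-well capacity scales
    with photosite area (well_per_um2 electrons per square micron) and a
    fixed read-noise floor.  Both constants are illustrative, not measured."""
    full_well_e = well_per_um2 * pitch_um ** 2
    return math.log2(full_well_e / read_noise_e)

for pitch in (8.0, 6.0, 4.0, 2.0):
    print(f"{pitch:.0f} um pixel: ~{per_pixel_dr_stops(pitch):.1f} stops")
```

Real sensors recover some of this through better microlenses and lower read noise, but the direction of the trade-off is as Ray describes.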
For an even more extreme opinion on this topic see Ctein’s article “Why 80 Megapixels Just Won’t Be Enough…”: http://theonlinephotographer.typepad.com/the_online_photographer/2009/02/why-80-megapixels-just-wont-be-enough.html
Ctein suggests that for the highest quality 8” x 10” prints you will need 400 megapixels. I will leave it up to Michael Reichmann and Brooks Jensen to rebut this claim since they have both done extensive tests with 8”x10” prints from different cameras with different sensor sizes and densities.
No Brick Wall Ahead: Why Moore’s Law does apply to Digital Photography
By Harold M. Merklinger
Let me say at the outset that I am not contesting any of the arguments set out by Ray Maxwell or the information at cambridgeincolour.com to which he refers. All the arguments made have technical merit. Where we differ is with respect to one basic assumption. As is the case in almost any discussion, the assumptions can influence the conclusions. I offer an alternate view of photography’s future.
Nathan Myhrvold has countered Ray’s position with a number of valid arguments, many of which address issues beyond those I will discuss here. But even he, in my humble opinion, may be on the conservative side of the issue.
The diffraction of light is indeed a real physical phenomenon which does influence the resolution we observe with our current imaging systems. Ray’s implied assumption – which I challenge – is that there is nothing we can do to mitigate the problem. Nathan seems to accept this also. There are things we can do, and at least one of those things is something most of us already do.
The "diffraction limit" is not a brick wall that cannot be penetrated. By analogy, it is rather more like the sound barrier. Since about 1947 we have known that airplanes can fly faster than the speed of sound, but the rules of aerodynamics are somewhat different from those that apply in the sub-sonic region. Bullets and whips had broken the sound barrier long before 1947 in any case.
With wave phenomena – such as light and sound – there is diffraction, and there is a "diffraction limit". But, just as there is supersonic flight, there is also "super-resolution". My own technical background is in underwater sound with some experience in radio and radar. In these fields we routinely exploit modest degrees of super-resolution to achieve our design objectives. If you own a cellphone (mobile phone) you probably already possess a super-directive microphone. That is, you have a microphone that has a stronger directional sensitivity characteristic than it "should" based upon the normal rules of diffraction. This "super-resolution" is achieved by adding, subtracting, phase-shifting and re-combining the sound pressures received at two or three small, closely-spaced holes. In sound, (as well as radio and radar) it is in principle possible to achieve any desired directional characteristic (degree of resolution) using an appropriate number of closely-spaced receivers and suitably processing their signals. The processes involved are somewhat intolerant of noise and physical errors, so there are practical limits to what can be achieved economically.
As light is a form of electro-magnetic energy very similar to radio waves, I extrapolate that the same principles should also work for optics. Thus ultimate resolution is more about practical, affordable technology than it is about any hard physical limits. Let’s get back to photography. The original "diffraction limit" is just a definition proposed by John William Strutt (Lord Rayleigh) in the context of the ability of a person using a telescope to resolve two equally-bright stars. He somewhat arbitrarily decided that the two stars could be declared resolved if the brightness between the two brightness peaks fell 26.5% from the level of the peaks themselves. (Rayleigh actually expressed the criterion in a different way, but this is the net result.) What happens if the two stars are not equally bright? Or what if a particular observer requires a 50% drop in brightness? What if we can detect a 1% drop in brightness? What if we can devise some processing system that will reliably detect a 0.05% drop in brightness? The simple answer is that the result we get depends upon our technological abilities and our actual needs. There’s lots of room for debate in defining where the diffraction limit actually lies in the first place. But that really just begs the question. Is there anything we can do about it?
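Rayleigh’s 26.5% figure is easy to reproduce. The Airy pattern of a point source has intensity (2·J1(x)/x)², and the Rayleigh criterion places the second source on the first zero of J1 (x ≈ 3.8317); evaluating the summed intensity at the midpoint between the two peaks gives the dip. A small sketch using only the standard library, with J1 computed from its integral form:

```python
import math

def j1(x, n=2000):
    """Bessel function J1 from its integral form, by a midpoint sum:
    J1(x) = (1/pi) * integral_0^pi cos(theta - x*sin(theta)) d(theta)."""
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * math.pi / n
        total += math.cos(theta - x * math.sin(theta))
    return total / n

def airy(x):
    """Normalized Airy-pattern intensity; equals 1 at x = 0."""
    return 1.0 if x == 0 else (2.0 * j1(x) / x) ** 2

rayleigh = 3.8317                  # first zero of J1: Rayleigh separation
peak = airy(0) + airy(rayleigh)    # one star sits on the other's null
mid = 2.0 * airy(rayleigh / 2)     # both stars contribute at the midpoint
print(f"dip between peaks: {100 * (1 - mid / peak):.1f}%")  # ~26.5%
```

Changing the separation in this little model shows exactly what the text argues: accept a shallower detectable dip and the two sources can sit closer together.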
Well, yes there is something we can do about it, and most people reading this probably already do. It’s called "unsharp mask". The sharpening process done by an unsharp mask filter in Photoshop or other image-processing program will magnify that dip between the two brightness peaks in Lord Rayleigh’s resolution problem. We can probably get at least a 50% drop in brightness between the peaks. Alternatively, we can see a 26.5% drop between peaks that are closer together than required previously. This is super-resolution. There is a penalty for undertaking the process: the noise in our image is increased. If we had complete freedom from noise in the first place, we would be able to achieve any degree of resolution we desire. Image quality will suffer unless the image is as perfect as physically possible to begin with. As no real camera will ever deliver a perfect image, there will always be some limit on achievable resolution. Where that limit lies depends upon the state of our imaging technology.
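A one-dimensional toy model shows the mechanism. The numbers below are invented for illustration: two peaks with a Rayleigh-like dip of about 26.5% between them, sharpened with the classic unsharp-mask formula (original plus some amount of the difference between the original and a blurred copy):

```python
def box_blur(signal, radius=2):
    """Crude moving-average blur standing in for the Gaussian blur
    used by a real unsharp mask filter."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(signal, amount=1.5):
    """sharpened = original + amount * (original - blurred)"""
    blurred = box_blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

# Two peaks of height 1.0 with a ~26.5% dip between them.
signal = [0.1, 0.4, 1.0, 0.9, 0.735, 0.9, 1.0, 0.4, 0.1]
sharp = unsharp_mask(signal)

dip_before = 1 - signal[4] / max(signal)
dip_after = 1 - sharp[4] / max(sharp)
print(f"dip before: {dip_before:.2f}, after sharpening: {dip_after:.2f}")
```

The relative dip deepens well past the Rayleigh threshold, at the cost of amplified noise and overshoot at the edges of the peaks, which is the penalty described above.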
Here’s a real-world photographic example of what can be achieved. I photographed a china flower arrangement using a Canon 50D with 100mm f/2.8 Canon Macro lens at whole stops from f/2.8 to f/32. We’ll examine what we can do with images at f/16 and f/32 compared to the result at f/5.6. Recall the Canon 50D should be diffraction limited somewhere around f/7. In the first example, we compare the unaltered camera JPEG image at f/16 directly against the f/5.6 image. It is clearly degraded. The third image is the f/16 image after applying the unsharp mask filter (1.0 pixels, 95%, 0 threshold) plus a 10% increase in overall contrast. I think you will agree that there is not much difference between the processed f/16 image and the f/5.6 image.
Next I tried working with the f/32 image. Using the unsharp mask filter with settings of 2.3 pixels, 122%, 0 threshold, I could get back to about the f/16 image with one or two notable exceptions. The specular highlights are "too big". The problem here is that the camera clipped the specular highlights, and there is thus no information left within these highlights to re-shape into the ‘proper’ image. It may also be noted that thin white lines against a darker background are not narrowed or thinned as they properly ought to be. This simply shows that the unsharp mask filter is not really the optimal filter for the job. There are at least two other existing optical processing techniques that can be called upon to do a better job: de-convolution and spatial filtering. We won’t discuss these here; my only point is that there are known image processing techniques that could be used by camera manufacturers (and maybe are being used) to recover image degradation caused by diffraction. The optimum filter will depend upon both the actual lens being used and the aperture to which it is set. The success of the process also depends upon the original resolution of the image; we cannot extract detail on a level that was not recorded in the first place. There are smart interpolation algorithms, but these really guess at the missing information based on certain other assumptions about the subject of the photograph.
We need not stop with image processing. It should be possible to design super-resolution lenses. There will be a penalty: there may be light loss as well as loss of contrast. Still, there is nothing about wave physics that prevents super-resolution lenses from being achieved in principle. I would not be at all surprised if it has been done already. Ultimately, it may be possible to produce sensor arrays that enable direct measurement of the actual electro-magnetic field, as for radio and radar. If that should become possible, we will even be able to dispense with the physical lens! Our lenses will be in the form of software. (Glass lenses might still be useful, but they will not be necessary.)
The point of this article is to demonstrate that there is still plenty of scope for technological innovation that should enable the creation of images that defy the "diffraction limit". Diffraction is simply something we need to learn to deal with. Moore’s Law, as it might apply to photography, has a ways to go yet!
I personally expect 35 mm sized cameras to reach the 100 megapixel level if not beyond. But there are other technological limitations that are relevant to digital photography also. I long ago scanned one of my 5 by 7 inch negatives, which I estimate to contain about 200 megapixels of information, 155 megapixels in the final cropped image. I should in future be able to retake that same image using a 200 megapixel 35 mm format camera equipped with a high quality, diffraction-limited 45 mm f/4 lens (instead of the 240 mm lens at f/22 on the view camera). Printing that image has posed another problem. At the present time the image must be printed at a size of at least 20 by 34 inches in order to display all the information. At smaller digital print sizes, much of the detail, especially highlight detail, is simply erased. But the detail is there in a 5 by 7 inch conventional contact print! One just needs a 5x (or better) loupe to see it.
Current technology permits single image capture at the gigapixel level (see: http://www.gigapxl.org/) and I see no reason why future consumer technology should not permit economic capture of images at the level of say 200 to 400 megapixels. I expect this will be possible with affordable equipment that can be hand-held. Perhaps I am being too conservative. On the other hand, the practical limit on image size may be set by what is pictorially useful, rather than what is technologically possible.
Many great photographs have been – and are being – made with diffraction-limited lenses on view cameras. A 50 mm lens at f/10 and a 300 mm lens at f/64 produce optical images containing the same amount of diffraction-limited detail, as well as the same depth-of-field characteristics. How many pixels are on that piece of 8 by 10 inch film? Remember, too, that sharpening is nothing new; it’s one of the things that intermittent agitation in chemical development does for us.
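The equivalence just described follows from geometry: for the same field of view the image scales with focal length while the Airy spot scales with f-number, so the count of resolvable spots across the frame tracks the aperture diameter (focal length divided by f-number), and 50/10 ≈ 300/64. A rough sketch, with frame widths scaled by the focal-length ratio (illustrative numbers):

```python
def spots_across_frame(frame_mm, f_number, wavelength_um=0.55):
    """Rough count of diffraction-limited Airy spots across a frame,
    using the Airy disk diameter 2.44 * lambda * N as the spot size."""
    spot_um = 2.44 * wavelength_um * f_number
    return frame_mm * 1000.0 / spot_um

small = spots_across_frame(36, 10)             # 50mm lens at f/10, 36mm frame
large = spots_across_frame(36 * 300 / 50, 64)  # 300mm lens at f/64, 6x frame
print(f"50mm @ f/10: {small:.0f} spots; 300mm @ f/64: {large:.0f} spots")
```

The two counts come out within a few percent of each other, which is the point: the view camera at f/64 records no more diffraction-limited detail than a 35 mm camera at f/10, only a larger, less demanding piece of film.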
Moore’s Law is all about technology – learning how to use physics – not about physics itself. We must be careful not to limit our projections by assuming that all there is to know is what we currently know. I am reminded of another technological forecast made about aircraft in 1911. The writer projected that while Morse code radio communication from aircraft to ground should be possible, speech-modulated communication would never be. Wind noise in the microphone would surely defeat any such attempt. (Even super-resolution at the microphone would not help that problem!)
By Dan Seligson
One of the interesting and surprising developments in microlithography for semiconductor manufacturing is that Rayleigh’s theory of diffraction limits has been found to be wanting. In the bad old days, let’s say 1985, it was taken as fact that the diffraction limit was practically the wavelength of the illumination source. Higher numerical aperture (that is, faster lenses) would help you, but would only help you get to that limit. So, optical wavelengths led to UV wavelengths, and then to deep-UV of approximately 250nm, and then on to 193nm where we are today. If Rayleigh were right, the chips of today would have minimum features of about 200nm. In fact, they’re closer to 40nm.
Rayleigh’s law is not a law, it’s an approximation. By designing very sophisticated cameras (so-called steppers), light sources, and masks, highly motivated people figured out how to do better than this rule of thumb suggested. No "laws of physics" have been broken. Approximately 25x more information per unit area is being recorded than was thought possible.
So, like Nathan Myhrvold, I too have a PhD in physics, and I am optimistic about the application of Moore’s Law to digital photography. I worked at Intel on these lithography issues, and other things, for 17 years.
By Dan Wells
Here’s a third perspective that may fit into your very interesting series on Moore’s Law and photography. I’ve been a photographer since high school – some 20 years – and have been following the technology industry for the same length of time (briefly as an IT professional, but mostly from the sidelines). I have come to wonder whether it is the limits of our applications, rather than the limits of the possible, that determine what we need. While I certainly cannot claim the industry credentials of Nathan Myhrvold, an outside perspective may be useful here…
One question that the recent Luminous Landscape commentators on Moore’s Law have not raised is how much utility we get out of our increasing compute power (or pixel count, data storage, network speed, etc…). I would argue that for most purposes, a few odd applications such as simulation modeling notwithstanding, the utility we get from increased performance is plateauing. The megapixel race is a perfect example of this – as camera resolution increases, it takes a larger and larger print to see the difference, and the requirements for ancillary hardware from lenses to tripods get tougher. For many uses, a 10 MP DSLR is perfectly acceptable, and it is a rare use indeed where a 24 MP DSLR still does not provide sufficient detail.
As the type of work one does matters immensely in what technologies are of real benefit, it is important to describe my own biases. I am a landscape photographer, working on every scale from macro to the occasional grand vista, and I prefer to display my work as large prints. Many of my subjects require great dynamic range to capture, although perhaps not quite as much as some Western landscapes, because I work primarily in the Eastern US, where there are more trees to cut and diffuse the light. My primary camera is the Nikon D3x, which I chose for its rugged construction and field worthiness while providing image quality sufficient for a 24×36 inch print. I have been extremely satisfied with the D3x after 10,000 exposures over the last six months, and regularly print very large images with overall image quality that I have not seen from any previous camera I have used. The image quality is so good that I cannot see how to improve the prints with a simple improvement in technology. With any previous camera, I would sometimes look at an image and say "more resolution would have allowed me to make a larger print" or "I would have gotten that shot if only I had more dynamic range, or more color depth, or some other technological fix". The D3x is good enough that I no longer do that with any regularity. My maximum print size is now limited by the largest printer I am willing to live with (in my case 24 inches), rather than by insufficient resolution. I do not have extensive experience with any other DSLR over 20 MP, and I suspect that there might be another camera or two on the market (even without going to much more delicate medium format digital) that also meets my personal standard of "good enough".
The differences between early generations of most electronic products, including DSLRs, are very obvious, apparent to every user, and make the upgrade worthwhile even for casual use. In contrast, the differences between later generations of a maturing product are important only in increasingly specialized uses. ANY reasonably modern DSLR is so good that most people don’t need any more. A good DSLR from 2004 will make an 8×10 print of many subjects that is nearly indistinguishable from a similarly-sized print from a D3x. The D3x’s superiority is identifiable in large prints, or in high-contrast situations where its dynamic range is important. The average casual photographer taking photographs of their family is well-served by most DSLRs introduced since the Nikon D70 of 2004, and increasingly serious photographers find that a camera suitable to their work has been introduced at various times between the D70 and the D3x. While the very recent (and expensive) D3x does offer extremely high image quality, it requires a 24-inch printer to see all the detail the camera can resolve. Any increment above the D3x will probably be visible primarily on prints exceeding 24×36 inches! The maximum detail resolvable from a D3x is available only using very good lenses, and with the camera on a sturdy tripod (preferably carbon fiber to damp mirror shake). The commitment in space (both for the floor-standing printer and to display the prints), expense and time needed to fully utilize a D3x is considerable, and any future DSLR offering even higher image quality will have similar or higher ancillary requirements. Few photographers need, or can utilize, 24 MP – for those who can, the D3x is an incredible tool, but its full potential is apparent only if it is used in a way that appeals to only a small percentage of photographers (although to a much higher percentage of Luminous Landscape readers).
3 MP generation (Canon EOS D30, Nikon D1) Introduced in 2000 – produced very good prints up to 5×7, acceptable to 8×10. Notably problematic to use for many applications, due to limited dynamic range (5 stops), odd color spaces and high noise above ISO 200. These early DSLRs showed the promise of digital photography, but were tricky to use and produced their best results in a limited range of circumstances. Skill was required to overcome equipment limitations, especially in dealing with dynamic range issues and color spaces with significant gaps. These cameras would feel awkward and experimental to anyone who tried to use one today, although their resolution is actually sufficient for many uses, especially for the many people who never print above 4×6 inches. While technically easy, it would be difficult to make a "modern" 3 MP DSLR that made economic sense, because it really wouldn’t be any cheaper to make than a higher-resolution camera, due to the cost of the large sensor, which is somewhat resolution-independent.
Mature 6 MP generation (Nikon D70, Canon EOS 10D) Introduced 2003-2004 – Radically improved over the early DSLRs, much easier for almost anyone to use (and image quality is now superior to 35mm film in most respects). Dynamic range has improved to 6.5 – 7 stops, and the color spaces are free of glaring gaps. These cameras will produce a very good 8×10 print with ease, and can be pushed as far as 11×17. Resolution is probably broadly similar to ISO 200 35mm print film, although the noise from these sensors is less than the grain of the film. These are also much better cameras (ignoring the digital side) than the first generation. Nikon introduced their iTTL flash technology at this generation, and Canon jumped from a very crude 3 point autofocus system to a 7 point system.
In a very real sense, this generation marks the point where digital photography has matured to the point that the average amateur who never makes large prints needs little more (although dynamic range was still significantly less than print film, requiring more care with exposure). A truly modern 6 MP DSLR, with the dynamic range of a more recent camera, would serve the needs of 90% of amateur photographers (although a significantly smaller percentage of Luminous Landscape readers, many of whom are at least part-time professionals, and who tend to make large prints of high-detail subjects).
10 MP generation (Nikon D200, Canon 40D, etc…) Introduced 2005-2006 – More of an incremental improvement over the already adequate 6 MP DSLRs, offering more resolution with slightly improved color and dynamic range. There is little doubt that any decent camera of this generation can produce a color print that exceeds the quality possible from any reasonable 35mm camera, lens and film (i.e. not Ektar 25 exposed through a favorite Leica lens and hand-printed with care). B+W printers may still argue in favor of Tech Pan and other very slow films. These cameras are easily capable of producing an 11×14 inch print, and can be pushed above that.
One interesting wrinkle in this generation is that the Nikon D200 marks the first time that any DSLR other than the pro series (EOS-1 or Nikon D1, D2, D3 series) used a more upscale camera body. The $3000 EOS-D30 and its successors used a body that was basically a $300 EOS Elan, with some features actually moved from less expensive models (the Elans had far better autofocus than the D30 or D60) – from the 10D (with improved AF) on, the digital EOS was basically a $300 Elan with a $1000+ digital side. Nikon had done the same thing, using a $300 N80 body, until the D200. The D200’s improved AF, metering and weather sealing were more reminiscent of a modernized $1000 F100 film camera than an N80.
Apart from fairly specialized applications, any amateur photographer who has been using a 35mm film camera should be happy with a D200 – quality certainly deteriorates as ISO increases, but high ISOs are more usable than any 35mm film of the same ISO. Further improvements require large prints or unusual subjects to see. In the right conditions, these cameras can even approach the image quality of low-end medium format film (645).
12+ MP generation (crop-frame) (Nikon D300, Canon 50D, etc…) Introduced 2007-2008 – Even more incremental than the jump from 6 to 10 MP, with modest improvements in resolution, dynamic range and color space. Ideal print size 12×18 inches, although 16×24 is certainly possible from a good image. Perhaps the most significant improvement is in high-ISO quality. This generation of cameras has high ISO handling that no film ever came close to. Usable ISO 3200 settings are commonplace, and they look like ISO 800 film. At base ISO, the image quality of these cameras is roughly comparable to 645 print film, perhaps with slightly lower resolution, but compensated for by the low noise. There is a wide range of cameras in this class available, ranging from $600 models with body features suitable for casual photographers up to the $1500+ Nikon D300 with professional-grade AF and metering systems and a weather sealed body.
Low-resolution full-frame (Canon 5D, Nikon D700, etc…) Introduced 2004-2008 – Resolution no better than the most modern crop-frame cameras, but the full-frame sensor has additional advantages. Not only does the larger sensor allow for better use of wide-angle lenses (albeit while reducing the power of telephotos), depth of field control is better as well. In part due to their relatively low pixel density, even older models have very good image quality per pixel (high dynamic range, excellent color rendition). Perhaps the prototypical camera in this class is Canon’s 5D from 2005 (although it was introduced after the 1DsII). Many low resolution full-frame cameras have additional features in the body, sensor or both (superb autofocus, extreme high ISO performance, etc…). Image quality is well into medium format film territory, with an ideal print size at least approaching 16×24.
20+ MP full-frame (Nikon D3x, Sony Alpha 900, Canon 5DII and 1DsIII) Introduced 2007-2009 – Combine the pixel density of a modern crop-frame camera with the sensor size of a full-frame model, and these cameras are the result. At least on the D3x, everything from the AF system to the anti-aliasing filter is the best Nikon can make. Low ISO image quality is remarkable (not just the resolution, but 11 stops of dynamic range), with an ideal print size of 24×36 inches. Print size is probably effectively unlimited, although I have not used a printer larger than 24 inches, because very large prints are always viewed from a distance. Drawbacks (in addition to the cost) include sensitivity to everything from shutter speed (hard to handhold) to depth of field. Image quality, properly handled with the right lens, is higher than any "normal" roll film camera ever built (exclusive of 6×17 cm cameras and some aerial cameras that used roll film as large as 8 inches wide) – it takes medium format digital or a view camera to exceed the image quality of these cameras. For maximum image quality, extreme care is required – not only is a tripod nearly essential, but a remote release is very desirable.
Given what is on the market today, is a straight Moore’s Law improvement to a 40 MP DSLR – which is almost certainly possible, because the densest crop-frame DSLRs (such as the Canon 50D) have pixel densities that would equate to nearly 40 MP on a full-frame camera – the most desirable direction to go? A 40 MP DSLR would generate approximately a 60 MB raw file, might well out-resolve all existing zoom lenses in the corners, leaving full utility only with a few primes, and would almost certainly be so sensitive to camera shake that it would achieve its full resolution only on a sturdy tripod (even a 24 MP DSLR is almost tripod-dedicated; the hand-holding speed on the D3x seems to be around 1/250 second). Furthermore, the print size needed to actually see all the detail it was capable of would be somewhere around 30×45 inches. A 44-inch printer is an ugly piece of furniture approximately the size of an upright piano, and a 30×45 inch print requires oversize mat board and a very large wall space.
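The 40 MP figure follows directly from scaling the 50D’s pixel density up to a full-frame sensor (15.1 MP on APS-C with a 1.6× crop factor), and the raw-file estimate from 14-bit samples; actual raw files come out somewhat smaller thanks to lossless compression:

```python
# Scale the Canon 50D's pixel density (15.1 MP, APS-C, 1.6x crop factor)
# up to a full-frame sensor, then estimate uncompressed 14-bit raw size.
crop_mp = 15.1
crop_factor = 1.6
full_frame_mp = crop_mp * crop_factor ** 2    # area scales as crop factor squared
raw_mb = full_frame_mp * 1e6 * 14 / 8 / 1e6   # megabytes before compression

print(f"full-frame equivalent: {full_frame_mp:.1f} MP")
print(f"uncompressed 14-bit raw: ~{raw_mb:.0f} MB")
```

This lands just under 40 MP, and the uncompressed raw estimate is consistent with the roughly 60 MB file size quoted above once compression is taken into account.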
Instead of reaching ever higher in resolution, would it not be more productive to design a wide range of cameras between 10 MP APS and 24 MP full-frame, but with a wider range of features? More options like the new Olympus E-P1, cameras that redefine the boundary between compact and DSLR, would be welcome. A good 12 MP APS sensor rangefinder would be a logical complement to a bulky professional DSLR for many photographers – better yet if it takes a new line of compact lenses plus SLR lenses with an adapter. I wouldn’t give my D3x up for a "Nikon E-P1", but I would certainly consider adding one to my bag for the times I don’t want, or can’t have, the big camera. A DSLR specially made for photo students would be an interesting product – reasonable size, reasonable price, but full access to manual controls. The Rebels and D40xs of the world are made for snapshooters, and are difficult to control manually (no second dial, among other issues) – a camera with the same features, but different controls, would be ideal for students. Smaller but significant features to add to today’s cameras could include built-in wireless flash control (why can the D300 do something the D3x can’t?), built-in GPS (Nikon makes a $300 GPS-enabled compact, but I had to pay $250 and add a cord to add GPS to my $8000 DSLR?), or a wireless remote release (again, why can a $300 compact do this, while a professional DSLR uses a $75 cable release?). In a mirrorless design like the E-P1, it would even be possible to design a camera with a tilting sensor, redefining depth of field!
Similar arguments apply to many other electronic products – maximum performance in computers at the moment is being driven almost exclusively by video games. For non-gamers (who are also not nuclear physicists), a computer a couple of years old is almost certainly adequate, and there is even a significant subset of computer users (those who use only word processing and e-mail) who could use a 20 year old Mac IIci very happily. Even the most demanding still photographic applications depend much more on RAM and disk space than they do on the ultimate in processor power. Would it be more productive for the computer industry to focus on better ways of using computers, ranging from easier to read, higher density screens to voice command, and on computers that require less maintenance, than on increasing raw power that is used almost exclusively by gamers? Apple is on the right track here – Macs fit into lives and homes, don’t get viruses, and (perhaps not coincidentally) often lag in pure Moore’s Law specifications.

With the number of photos being shared electronically, would truly automatic color calibration be too much to ask? Serious photographers calibrate their monitors (and grouse and grumble about the time they spend doing it); nobody else does (and they get some very odd colors). I’d love to see a self-calibrating monitor at a reasonable price!

Another new feature that would make a lot of sense is hassle-free automatic backup, for which Apple’s brilliant Time Capsule is the first step. Basically a router with a hard drive, the Time Capsule not only backs up any desktop Macs on your network, it also listens for your laptop connecting wirelessly and backs it up when it finds it. Its capacity is limited, and it won’t grab really large photo libraries like most Luminous Landscape readers have, but it works for many people. A next step would be a "smarter" Time Capsule that also knew how to back up Windows PCs, cell phones and even WiFi enabled cameras and iPods.
It wouldn’t be a huge step to no longer have to worry about syncing any device – they would all back up automatically to your home network when you walked in the door. More intelligent power management (and avoidance of "overkill" computers – don’t use a big computer where a little one will do the job) could go a very long way towards reducing our carbon emissions and dependence on foreign oil – surely a more important goal than the Moore’s Law goal of increasing the frame rate of some (usually violent) video game. The word processing and e-mail user who could get by with an old Mac IIci could, with modern technology, use a computer that consumes two or three watts instead of two or three hundred. Most users, apart from a few professionals in graphically intensive fields, could use a computer that consumes less than 50 watts.
Perhaps the most important goal of all is designing electronics that last. The equipment we have performs so well that we should rarely have to replace it for performance reasons. However, the build quality of most electronic devices is so poor, and they are designed to be replaced rather than repaired, that they only last a couple of years. The industry knows how to build better equipment – look at any professional camera for an example. Nobody would ever throw out a D3x or a 1DsIII because it broke down – it is beautifully built to break as infrequently as possible, and when it does, it is designed to be repaired. Now that most of the electronics we use are good enough to do their jobs well for many years from a technical standpoint, why don’t we start building them to last? Nothing annoys me more than walking into an electronics store and seeing rack after rack of cameras, computers, televisions and everything else that are made to be thrown away, when we know perfectly well how to build the same things to last. Yes, it will cost more initially, but the long-run economic, environmental and social (really good equipment can only be made by reasonably treated workers) return will come when we don’t have to keep replacing equipment that still does its job. In many types of equipment, Moore’s Law has had its day – the leaps in performance no longer bring leaps in utility – and it is time to build the mature products that will last for as many years as that beloved old Leica M3, Nikon FM2 or Pentax K1000.