On the computer side, Apple’s new chips could be incredibly significant, by far the big computer story of the year for photographers – if they can replicate their results with low-power chips as they scale up. Apple has never built a high-power chip, and the fact that they have some extremely impressive results from what are, in effect, iPad processors doesn’t mean that they can build a higher power core that is as impressive. It also doesn’t mean that they can’t. The three new machines are a MacBook Air, a 13” MacBook Pro and a Mac Mini, all of which had generally used low-power Intel processors in the 15 watt range, although the Mac Mini occasionally got higher power options.
We know three things about the new CPUs, and there are a couple of important ones we don’t. The first important thing we do know is that they are very, very fast – almost shockingly fast for what they are. Early benchmarks are showing that, with software optimized for Apple Silicon, the initial, very low power Apple Silicon Macs are about as fast as a 16” MacBook Pro, which is the opposite of a low-power laptop. The 16” is a performance-optimized mobile workstation that can feed the CPU alone over 65 watts under certain circumstances, and it can easily draw over 100 watts as a full system (my primary computer is a 16” MacBook Pro, and I’ve seen it use 140 watts in short bursts). An Apple Silicon Mac using a 15 watt CPU matching that kind of machine is beyond impressive. Since the two laptops have much longer battery life than their Intel equivalents, they are almost certainly running the CPU well BELOW 15 watts most of the time. Non-optimized software seems to cost Apple Silicon about 25% or a little more, which means that they’re still faster than any other low-power notebook.
The second thing we know is that these machines are incredibly tightly integrated – much more like an iPad than a conventional computer. Even when Apple soldered the RAM onto an Intel MacBook Pro, it was still connected conventionally on an electronic level. Building a machine with more RAM was simply a matter of adding more chips or using higher density chips (and it was easy to build iMacs and Mac Pros with upgradeable RAM – simply solder slots on instead of chips). The RAM in the new Apple Silicon machines is within the System on a Chip (SoC) – technically, it’s not on the same die with the CPU, but it’s extremely tightly coupled.
There’s no way to upgrade these without replacing the motherboard, which has been true of Apple laptops (but not the Mac Mini) for years – but this takes it two important steps farther. First of all, it may or may not be trivial for Apple to build similar machines with more RAM. All three of the initial models can only be configured with 8 GB or 16 GB. If a 16” MacBook Pro shows up with very tight RAM limits, that’s probably an architectural weakness (16” Intel MacBook Pros come with up to 64 GB). If the maximum RAM configuration on the forthcoming 16” Apple Silicon MacBook Pro is 64 GB (or more), that’s a good sign. If it’s 16 GB, that means that the very fast RAM with tightly coupled access to the CPU and GPU is also limiting memory capacity in a way that will be very destructive on higher-end Macs. If it’s 32 GB, that means Apple has some room, but also some serious constraints.
Second, I’m not at all sure this design CAN accept upgradeable RAM. If we see 27” iMacs with soldered RAM, that’s a good indicator that what had been a production choice on some Macs has become an inherent design feature. What does this mean for the Mac Pro, where very high RAM limits and upgradeable RAM are fundamental features of the design? Maybe it’s not inherent – it’s a design that makes sense for these low-end machines, where 16 GB RAM limits have been common, but a similar CPU can accept discrete memory in a higher end design.
The GPUs on these machines are also extremely tightly coupled to the CPU, and they share the RAM. Since none of these machines would normally have had a discrete GPU, we can’t tell whether that’s an inherent feature of Apple Silicon, or simply the way they designed it in some low-power Macs. The fact that the CPU and GPU share RAM makes access incredibly fast, but it also means that you lose a gigabyte or more of already limited RAM to the GPU – much more with high-resolution monitors or in 3D applications. If they make sufficiently fast GPUs, and the RAM limits are much higher in higher-end Macs, an inherently integrated GPU might be a problem that only affects the Mac Pro, where many users use multiple very high-end GPUs. If RAM limits stay low, this GPU design will only be acceptable at the low end.
The third thing we are coming to know is that porting software is going quite well. iPhone and iPad apps run natively, and a lot of Mac software, including photographic applications, is showing up in Apple Silicon versions. Kudos to my friends at DxO for the first high-end raw converter to be Apple Silicon native. As of right now, the only version of Lightroom or Photoshop that runs natively is the cloud version, but there is a beta of Photoshop running natively, and native support IS planned for Lightroom Classic, which is good news. Both Lightroom Classic and Capture One are running using Rosetta 2 emulation, with the 25% performance penalty. Capture One has issues with tethering, according to their tech support forum, which Phase One is fixing. I don’t have an Apple Silicon Mac here, so I can’t check what works and what doesn’t – I’d be very concerned about drivers for older hardware (any film scanner…).
The known incompatibility that IS NOT going to be fixed is any application that tries to run Windows software (Parallels, VMWare, etc.). The reason those programs (and Boot Camp) work at all is because a Mac was basically a PC – these new Macs ARE NOT PCs. All that Parallels and VMWare did was create a virtual machine that was just like the actual hardware, but let you run two operating systems at once (one directly and one virtual). There are two possible workarounds. One is that someone could write a (much slower and more complex) piece of software that fully emulates a PC, instead of merely creating a virtual machine that is similar to the actual hardware. These programs existed for PowerPC Macs, which also weren’t PCs, but they were slow and buggy.
The second workaround is Windows for ARM. These Macs are relatively closely related to (rare) ARM-based Windows computers – close enough that a Parallels approach might work. Right now, Windows for ARM isn’t available as a standalone product. People in the Windows Insider beta program CAN get it, and it seems to work with a beta version of Parallels. If Microsoft decides to release Windows for ARM as a consumer product, it should work – with limitations that only certain Windows applications work on ARM. Many office applications do work, most games don’t as far as I know, and professional graphics applications are very hit-or-miss. If we start seeing more ARM-based Windows computers, and a greater diversity of them – right now, apart from a few servers, they’re all ultralight laptops – we will probably see more software.
The big question is how these chips will scale. Right now, they’re basically iPad processors, allowed to run a little faster with better cooling. The M1 processor in the Macs has four “big” cores and four “little” cores, the same as the iPad Pro. Right now, the iPad Pros are on an older core generation, while the latest iPhone and iPad Air use the same cores as the Macs, but only have two “big” cores and four “little” cores. The next iPad Pro will probably have the same core configuration as the Macs, and be only a tiny bit less powerful, entirely due to cooling (and perhaps a GPU difference).
To build a faster chip for more powerful Macs, Apple has two options. They could either use a LOT of “big” cores that are nearly identical to the fastest cores in an iPhone or iPad (the same design at a higher clock speed due to cooling) or they could design a “jumbo” core that doesn’t show up in phones. A 16” MacBook Pro has the power budget for about 12 or 16 “big” cores, a 27” iMac for something like 24 or 32 of them, and a Mac Pro for 128 or some huge number. The advantage of this design is that it uses the existing iPhone/iPad cores, which Apple designs every year, and which are excellent performers for their power consumption. The disadvantage is that a lot of applications want a few fast cores, rather than a lot of slower cores. A “jumbo” core that kept the fastest laptops in the 4-8 core range, the iMac around 8-16 cores, and the Mac Pro around 32 cores, would make developers’ jobs a lot easier than dealing with a big workstation that is basically 64 iPhones tied together. Since Apple hasn’t built a jumbo core, we don’t know how it would perform.
Similarly, the iPhone/iPad type architecture that ties the CPU, memory and GPU so tightly together could present serious issues for higher end computers. On the initial models, the storage is also soldered in place. Soldered storage is found on all recent Apple laptops – it doesn’t appear to be any more electronically integrated here. Apple could offer larger capacities or even upgrade slots easily, unlike with the RAM and GPU, which seem to be deeply integrated.
It’s not a major issue for a MacBook Air (which has never had, and rarely needed, options above 16 GB of RAM or 1 TB of storage), while both the Mini and the 13” MacBook Pro have actually lost memory options – 32 GB on a 13” MacBook Pro (a factory-installed option) is a rare configuration, because a very high-end 13” is not much cheaper than the much more capable 16” model. The Mac Mini, many models of which have been user-upgradeable, is far more likely to have 32 or even 64 GB of RAM – options the Apple Silicon version doesn’t have at any price, let alone as the cheap DIY options available on some Intel models. All three of these models have always had integrated GPUs, so there’s nothing really different with the tighter integration – the built-in GPU is a good performer. If this design really is much more tightly integrated, Apple will run into trouble with machines that have historically had discrete GPUs, and now cannot.
We really need to see higher-end Apple Silicon Macs to get an idea how serious the issues are, and even whether they exist at all. If we see a 16” MacBook Pro with 8 jumbo cores (plus 4 little ones), a maximum RAM capacity of at least 64 GB and GPUs that outperform the Mobile Radeons in the Intel-based models, we’re doing fine on the portables. If the 16” MBP is a 16-core machine using nothing but iPhone cores and has a maximum RAM capacity of 16 GB, there are serious architectural weaknesses. Similarly, a family-room oriented iMac might be able to get away with soldered RAM and a maximum of 32 GB (unless it’s a very low end machine, 16 GB wouldn’t cut it on a desktop bundled into an expensive display), but a machine aimed at creative pros would need configurations with 64 or 128 GB, preferably with expansion options.
The 27” iMac has an important photography and video market, and really risks losing that market if it gets low RAM limits and soldered memory. If a 27” iMac ends up with 32 iPhone-type processor cores, Apple had better have REALLY good developer tools that help software take advantage of the architecture – right now, a lot of software has trouble using anything more than 4 cores or so, and “now exports your photos at 1/8 of possible speed – performs like a really good phone” is not a great tagline…
This (and the even worse problems the Mac Pro faces if the limits turn out to be inherent to the architecture) is not inevitable. It is quite possible that Apple has a plan for this, and we’ll see 8-core MacBook Pros and 32-core Mac Pros running at record speeds with plenty of memory, storage and graphics. Apple may well have gotten this right – they have a good track record with similar transitions. A few power-optimized machines that leverage iPad processors don’t tell us anything either way, except that Apple seems to have done a very good job with the software.
On the PC side, the big news is AMD processors, and Intel’s response to AMD’s threat. For the last several years, AMD Ryzens, Threadrippers and Epycs have had a performance and price/performance advantage on desktop computers over stalled Intel chips. In 2020, we saw highly credible Ryzens for laptops as well – AMD mobile chips had been limited to value-oriented notebooks well into 2020, neither performing well enough for fast notebooks nor drawing little enough power for very slim ones.
As 2020 rolled on, we began to see a few gaming laptops using new 8-core Ryzens that can give the top-end Intel chips a run for their money. What we still haven’t seen are the real creative-pro versions of these machines – anything with 4K displays, large RAM and SSD capacities, etc. These are probably coming, and Intel has also released new chips that might offer larger performance increases than we have seen in the past few years.
Will we see more ARM based PCs, especially in light of the very strong performance lead Apple seems to have taken in ultralight machines? Apple is making ultralight laptops that perform like mobile workstations in many ways – even as it remains to be seen whether they can scale that to bigger machines, it’s a challenge for other manufacturers. Will someone respond with high-performance ARM machines running Windows?
Going farther, if Apple DOES manage to scale their ARM performance, will Windows manufacturers introduce larger ARM laptops or desktops? If so, what does that mean for software releases – will we start to see more high-end software for Windows on ARM? Right now, of Adobe’s Creative Cloud apps, Lightroom (the cloud version) is native, Photoshop exists in a beta version, and the remainder are incompatible, including Lightroom Classic. This situation will almost certainly improve if Windows on ARM becomes more mainstream.
While less interesting to photographers, Intel has taken note of Apple’s successes with big and small cores, which reduce power consumption during simple tasks while allowing higher performance for complex ones. Most phones, and many tablets, have been doing this for years – and it makes a great deal of sense there, where the battery is small and many tasks don’t need much performance. The Apple Silicon Macs are the first full-scale computers I am aware of to use this type of chip design, but Intel just released a raft of similar processors, mostly for ultraportable laptops.
The only advantage of such a chip to a photographer is when they aren’t working on photos – any type of editing will automatically trigger the big cores. It will provide a substantial power savings when, for example, working on e-mail on your editing laptop. It’s more important for ultralight laptops that photographers rarely buy, because a big laptop with a big battery will already have very good battery life at low loads (the 16” MacBook Pro is the champ here, lasting 10 hours or so writing e-mail). An ultralight with a small battery will need to save every last watt, and low-power cores can help. An important consideration for photographers looking at any laptop with a hybrid CPU (otherwise known as a big.LITTLE architecture) with two types of cores is that only the big cores really count for what we do. If you’re looking at a machine with four big and four little cores, whether it’s a Mac or a PC, that’s really a quad-core computer. Many iPads are especially insidious in this regard, because they can have more little cores than big ones.
Moving from computers to displays, will this be the year where 8K, OLED or both become mainstream? LG has just teased a 31.5” 4K OLED monitor, notable because almost all OLED monitors to date are either much smaller (smartphones, a few laptops, a few portable monitors for high-end video) or much larger (55” and larger TVs, the very occasional gaming monitor modified from a TV). There are very rare professional video reference monitors in the 15-25” range, but they’re well over $10,000. Dell once had a 30-inch OLED for $3500, but it is no longer available. If LG actually releases a mainstream OLED monitor in a relatively standard size, that is a very interesting item indeed.
There are several important questions to ask. First, is the benefit illusory from a printing viewpoint? LG claims a million-to-one contrast ratio, which a photographer would express as 20 stops of dynamic range. What on earth captures 20 stops, and what can output 20 stops? The very best sensors are giving us something a little under 15 stops, if you include some noisy deep shadows, and in the rare case of a true 16-bit file – most cameras output 14-bit raw files, which lose a bit of very noisy information in the deep shadows. The GFX 100 is an exception, and it has allowed us to see what’s in the bottom stop of a modern sensor – not much!
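Since each stop is a doubling of luminance, converting a contrast ratio to stops is just a base-2 logarithm. A quick sketch, using the claimed OLED ratio and the typical LCD ratio discussed here:

```python
import math

def contrast_to_stops(ratio: float) -> float:
    """Convert a contrast ratio (e.g. 1_000_000 for 1,000,000:1) to stops.

    Each stop doubles the luminance, so stops = log2(ratio).
    """
    return math.log2(ratio)

# LG's claimed OLED contrast, 1,000,000:1 -> just under 20 stops
print(round(contrast_to_stops(1_000_000), 2))  # ~19.93

# A very good LED-backlit LCD, roughly 1000:1 -> about 10 stops
print(round(contrast_to_stops(1_000), 2))      # ~9.97
```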
The best performance in deep shadows I’m aware of is the D850/Z7 sensor at ISO 64, with other modern Sony designs very close behind, and the EOS-R5 sensor also in the running (again, I haven’t used the EOS-R5 – Canon, if you’re listening…). In terms of realistic dynamic range you might want to print from, they’re all in the 12+ stop range, even though they’re approaching 15 stops from an engineering standpoint (this is why the 14-bit raw files don’t really matter – that lost half stop or so is unprintable). The best printers on a good glossy paper are in the neighborhood of nine stops, so the art in processing your photos is largely in how to get a 12+ stop image of what might have been a 20 stop scene (especially if the sun is in the frame) onto a nine stop print. This is what all the techniques of the Zone System and related crafts are about.
If your goal is a nine stop print, a 20 stop monitor doesn’t actually help. Really good LED-backlit LCD monitors have a real contrast range somewhere slightly under 1000:1 – that’s the range my EIZO CS2740 (considered a true photographic reference monitor) measures in, and it is similar to other top-end monitors. That’s approaching ten stops – it can’t show everything that’s on the sensor, but it can show everything that is going to print. When a blocked shadow is tamed enough not to be black on the print, or when a highlight is going to be moved from paper white to detail, it begins to show up on a good monitor. Non-OLED monitors that show higher contrast do it by varying the backlight across the screen – useful for gaming and watching movies, but misleading for color-critical photography (and very difficult to calibrate). For photography, you want an even backlight.
If you had a monitor that displayed 20 stops instead of ten (an OLED), and was almost perfectly even (a very, very good OLED), what would it give you? It would show all of the detail the sensor captured, including detail that wasn’t going to print. Since the monitor is no longer similar to the print, proofing becomes much more important, just as it is when making a print from the nine-stop CS2740 to five-stop Japanese Washi paper. Only certain images work on Moab Moenkopi Washi (a beautiful paper), and proofing often reveals an image that looks great on screen, but just won’t print on that paper. With an OLED, that experience is potentially there for EVERY paper – you’re editing the image in the monitor’s full gamut and don’t realize it’s unprintable. If the OLED is as good as a top-end EIZO ColorEdge or NEC SpectraView LCD in ways other than contrast, but has 20 stops of dynamic range, it allows us to proof any possible situation – very useful. Will the first photographer-oriented OLEDs meet that standard? Will they have reflectivity similar to paper? Every OLED I’ve seen so far is ultra-glossy, while serious photographic displays are matte and used with anti-glare hoods.
ViewSonic is showing a photographer-focused 8K display (the 32” VP3286-8K), another possibly useful item. An 8K monitor can show the image from a 24 MP sensor at 100%, with some room to spare for palettes. It can’t quite show the full image from a 45 MP+ pixel monster at 100%, for a couple of reasons. First, the only one of the pixel monsters to be truly 8K (the others are all 8K plus a little), the EOS-R5, resolves cinema 8K (8192 pixels across), while the ViewSonic is 8K UHD (7680 pixels across). Second, most cameras use a 3:2 aspect ratio, while the monitor is 16:9 – there are extra pixels above and below the 16:9 image. Even so, an 8K monitor can show most of the image from a high-resolution camera at actual pixels, a useful characteristic. It’s also extremely sharp for a desktop monitor, at 280 pixels per inch (ppi). That’s significantly higher pixel density than a 5K iMac Retina Display or even Apple’s expensive Pro Display XDR (218 ppi), and in the range of a 4K laptop or a Retina iPad, both of which are viewed from much closer.
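The pixel-density and 100%-view claims above are simple arithmetic. A sketch (assuming the “32-inch” monitor uses a 31.5” 16:9 panel, which is how the round 280 ppi figure works out):

```python
import math

def ppi(diagonal_in: float, h_px: int, v_px: int) -> float:
    """Pixel density from the diagonal size and resolution of a display."""
    return math.hypot(h_px, v_px) / diagonal_in

def fits_at_100_percent(img_w: int, img_h: int, disp_w: int, disp_h: int) -> bool:
    """Can the whole image be shown at one image pixel per display pixel?"""
    return img_w <= disp_w and img_h <= disp_h

# 8K UHD (7680x4320) on an assumed 31.5" panel
print(round(ppi(31.5, 7680, 4320)))  # ~280 ppi

# A 24 MP frame (6000x4000) fits with room to spare for palettes...
print(fits_at_100_percent(6000, 4000, 7680, 4320))  # True

# ...but a full EOS-R5 frame (8192x5464) does not quite fit
print(fits_at_100_percent(8192, 5464, 7680, 4320))  # False
```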
There are three possible disadvantages, even before having actually seen one. The first is that it is very difficult to drive, both in terms of the graphics card requirements and what kind of cable it might need. It has a Thunderbolt 3 interface as well as DisplayPort, but, unless ViewSonic has done something nonstandard, 8K over Thunderbolt 3 is only 30 Hz, which could lead to flicker. Theoretically, they could be using two Thunderbolt 3 interfaces (two ports, which have to be on separate buses) to get 60 Hz – but that would be confusing, because not every pair of Thunderbolt 3 ports would work. For example, the 16” MacBook Pro has four ports, but only two Thunderbolt buses. The two ports on the left side of the computer share a bus, as do the two on the right side. To connect the monitor for full performance would require using one port on each side of the computer, and would substantially reduce the Thunderbolt bandwidth available for storage and other uses. All iMacs as of January 2021, except the iMac Pro, have only ONE Thunderbolt bus – using two ports on the same bus doesn’t help, so an iMac can’t drive the monitor at full resolution above 30 Hz. Recent Mac Pros have plenty of Thunderbolt bandwidth, and should be able to drive this monster with ease. Most PC laptops won’t drive it at all, either through Thunderbolt or DisplayPort, but most desktop PCs with a recent high-end graphics card should have an 8K-ready version of DisplayPort. It might be using Display Stream Compression, supported on some newer graphics cards, to reduce bandwidth and alleviate some of these issues – but at what cost to image quality?
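The 30 Hz limit falls out of bandwidth arithmetic. A rough sketch, using the nominal DisplayPort 1.4 payload of about 25.9 Gbit/s (the rate a single Thunderbolt 3 tunnel carries) and ignoring blanking overhead, which only makes things worse:

```python
def raw_video_gbps(h_px: int, v_px: int, hz: int,
                   bits_per_channel: int = 8, channels: int = 3) -> float:
    """Uncompressed video payload in Gbit/s, ignoring blanking overhead."""
    return h_px * v_px * hz * bits_per_channel * channels / 1e9

# Usable data rate of a four-lane DisplayPort 1.4 (HBR3) link
DP_HBR3_PAYLOAD_GBPS = 25.92

rate_60 = raw_video_gbps(7680, 4320, 60)  # ~47.8 Gbit/s - far over one link
rate_30 = raw_video_gbps(7680, 4320, 30)  # ~23.9 Gbit/s - just squeezes in

print(round(rate_60, 1), rate_60 > DP_HBR3_PAYLOAD_GBPS)
print(round(rate_30, 1), rate_30 > DP_HBR3_PAYLOAD_GBPS)
```

This is why 8K at 60 Hz needs either two cables or Display Stream Compression, while 30 Hz fits on one.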
The second disadvantage is uncertain – how stable is it? It’s using an enormous amount of bandwidth, as detailed above, and the slightest imperfection in cables or processing could very easily lead to flicker, dropouts or no image at all. Even 4K monitors can be very picky about cable and graphics card quality – this is pushing four times as much data. In addition to any issues caused by the panel itself and the connection, how good is the backlight? ViewSonic’s best monitors have traditionally been quite good, but not in the same class as EIZO or NEC’s best. It’s not easy to engineer a perfectly even (both in illumination and color) backlight across a 32” screen, and that’s part of what you get with the best monitors. I haven’t yet seen a ViewSonic I’d put in the EIZO class. Of course, they could have gotten it right this time, and they should have on a $5000 monitor. The final disadvantage is that it IS a $5000 monitor!
Neither 8K nor OLED monitors are likely to become mainstream in 2021, but both are not too far over the horizon. If either of these two displays, or a competitor to either, makes an impact this year, we should expect to see more like them in future years. Dell introduced a desktop OLED monitor very similar to the new LG in 2016, and an 8K display in 2017… Surprised? Neither of them made much of an impact at all, and the OLED UP3017QW is discontinued, while the 8K UP3218K remains in the line as a hard-to-drive curiosity. Both were very expensive, and Dell is not a top-end professional monitor supplier. Very few people were willing to pay $3500 for OLED or $5000 for 8K, and those few who were wanted an EIZO or NEC monitor. ViewSonic and LG are in the Dell category, maybe a small step above in terms of reputation, but they are not the places most of us would look for an ultra high-end monitor, although LG does know OLED panels better than anyone, from years of TV experience. We don’t know the price of LG’s UltraFine OLED Pro, and we know that ViewSonic’s VP3286-8K will list for $5000.
It’s interesting to look briefly at where TVs and projectors are headed, primarily in their role in displaying, rather than editing, our work. 8K TVs have been around for a few years, but have always been expensive, top-end options from the higher-end makers. As of January 12, 2021, B&H lists 18 8K TVs from Samsung, LG and Sony, ranging in screen size from 55” to 88”, and in price from $2300 to $30,000. 4K TVs from the same makers start at less than ¼ the price for the same size, and anything over ½ the price of an equivalent 8K set is either OLED or a special design. AT CES 2021, value-priced TV maker TCL announced an 8K line. There are no prices attached as yet, but the history of these lower-end brands is that, when they enter a market segment, the big brands can no longer charge as high a premium for the same feature. Will TCL’s entry mean that we see 8K around $1000 by the end of the year? Might we even see 8K from a higher-end brand in that range?
One of TCL’s new 8K TVs – how much cheaper will they be than other 8K sets?
Even if we start seeing reasonably priced 8K TVs, it will be a few years before 8K photo display becomes common. Most importantly, there are very few ways to get an 8K image onto an 8K TV. One that is actually useful to still photographers is to put the image on a USB stick. Many TVs have a built-in slideshow function, and 8K TVs should do that in 8K. Apart from that, there are effectively no 8K content players (no cable boxes, no DVD/BluRay players and either very few or no streaming devices). About the only thing that will send an 8K signal is a computer with a new enough, fancy enough graphics card. Additionally, TVs are infrequently replaced. Even once the majority of TVs sold are 8K, the majority we will encounter will be 4K or even HD for years to come. The only actual utility of 8K TV as a display device to photographers in 2021 (and probably until 2024 or later) is the completely captive option. Put some images on a USB stick and bring the TV with you to where you’re displaying the images…
OLED TVs are becoming more and more common, and there are limited circumstances where it is worth editing images with them in mind – especially if you are displaying in a situation like a high-end business lobby. It’s worth asking if the screen they have in mind is OLED if you’re working on a show for those circumstances. If it is, HDR techniques may come into their own – the challenge is that you won’t be able to see the results on your non-OLED monitor.
Projection is certainly changing from where it was a few years ago. It wasn’t all that long ago when VGA or SVGA digital projectors with wildly inaccurate colors were the norm. I remember hauling a 50 lb Christie installation projector (borrowed from a local university where I taught and knew the head of AV Services) to one presentation in 2015 or so – there was no other decent projector to be found! At this point, every photographer who presents on a projector regularly should own (or share) a 1080p or 4K projector whose colors they are familiar with and calibrate. They are no longer horribly expensive, and, as long as the room’s not too big, they aren’t especially heavy, either. Unless you present in high-end venues, don’t trust the projector in the room, since SVGA is still far too common, and many of them are going to be wildly mis-calibrated. 4K projectors in a reasonable price range are still “fake 4K”, using pixel-shifting to get a resolution somewhere between 1080p and full 4K. True 4K still starts over $5000. Is that something that’s going to change in the next year or two? Might we see Epson’s 3LCD projectors, historically a good choice for photography, in a 4K version?
There are two big software stories for 2021. One is that Lightroom Classic seems to have escaped the ax for at least another year. It is going to be native on Apple Silicon and possibly Windows on ARM, and its fall 2020 release was the most significant we have seen in a few years. Whether this is because Adobe has pulled back from their “all cloud, all the time” rhetoric of a few years ago or because they haven’t been able to add features to Lightroom CC fast enough to replace Lightroom Classic is unclear to me. At the earliest, Adobe could announce a merger of the two versions into a cloud-centric product at Adobe MAX this fall, and it would probably take another six months to a year to complete. It may not happen then – I am fairly clear that it was once in Adobe’s timeline, but I am less clear that it still is. Maybe they got too much backlash from people with large libraries, or from people who wanted the features Lightroom CC was slow to add more than they wanted mobile integration.
The second big story is AI coming to most photo editing software. Just about everyone is making a big deal out of AI and neural network based features. This is what smartphones have been doing for years, calling it computational photography. Approaches vary, from DxO’s DeepPrime noise reduction and sharpening tool to Luminar’s sky replacement and other dramatic image changes. In DxO’s case, the AI feature merely improves an existing tool, and is under the photographer’s control. Luminar’s tools run much closer to the boundary or gray area between photography and digital art, and produce an image that is much less close to what actually existed, and with much less control. Photographers have been debating what’s a fair edit (and who does it – minilabs with auto-enhance features caused quite a bit of debate in their day) for years. There’s a whole article to be written on AI, its effects and its ethics, but it’s certainly an area to watch.
The last piece of our winter roundup is what’s happening to photography in the world. Social media has changed photography in some enormous ways over the last few years, and many of us aren’t terribly happy with the result. I have to admit that I have never had any social media account, nor any interest in one – it’s just not a piece of society that has interested me. More photographs are being taken, and fewer taken seriously, than at any time in history. The influencer-driven aesthetic of the big platforms is controversial, as is the fact that there is almost no way for artists to make money – the license agreements we are required to click on in order to participate assign far too many rights to Silicon Valley billionaires. We need better ways to get our work out there, platforms that are photographer-driven, curated in some way, whether by juries, community vote or another method, and platforms that let us benefit from our own work and set the terms of how it can be used. We need a greater diversity of platforms, so we can each participate in those that make sense for us. Right now, the ability to reach any significant audience largely comes with expectations of turning over both our privacy and our copyright to Facebook or another tech giant. They can run ads for products that a photographer may or may not approve of, and they reap all of the profits. They don’t help us market our work, nor even help us give it to those we want to have it.
For reasons almost entirely unrelated to photography, the big social media platforms have become pariahs. One of the few issues that gets bipartisan support in the US Congress is serious regulation of Big Tech. Both liberals and conservatives feel that the big platforms have far too much power to determine who gets to say what, and that they use that power almost exclusively to boost “engagement”, a metric that, in the simplest terms, means increasing the number of advertisements people see. There is absolutely no sense of responsibility on the part of tech company leaders – no sense of using their power for a better world, a world more meaningful than making a pair of shoes follow you around the Web. Most of the decisions aren’t even made by humans – machines decide what we see online, and the only way they know whether they’re doing it right is by whether we see more ads. Newspaper editors and TV journalists may disagree with each other, as they should – but there is a broadly shared sense of being in it for society, not just to increase advertising.
The European Union, the state of California and other governments have already passed laws protecting privacy and increasing tech companies’ responsibility for what users are fed online. I suspect that 2021 is the year the US Congress acts in some important way, and I suspect that it will be bipartisan. I have no idea what the new online world will look like, and I strongly suspect that this is a story that will take more than a year to play out. Photographers are bit players, mice trying not to be stepped on as the elephants dance, but we might well benefit from a greater diversity of platforms. We should make sure to make our voices heard in the debate, and to be active in promoting platforms that serve our needs better. We may end up paying for spaces to show our work that have previously been free – and we should be willing to pay, but we should demand something for our money. We should demand better presentation, not littered with ads that we don’t necessarily approve of. We should demand the rights to our own work – if we license a work to a social platform, we do it for one specific use, with clear expectations. We should demand a share in whatever profits our work generates – it is reasonable to pay for a place to sell images, but not to have our images used to sell something that is not ours. We should demand choice in how our data are used – if we want to participate in something where Big Data matches us with what a computer thinks we should see, fine – but we should always have the option of saying “no”.
On an optimistic note, COVID vaccines are coming, and 2021, especially the second half of the year, may be a great time to step back out into the world to start seeing through our cameras again. Personally, I have a major hike on the Pacific Crest Trail planned for the summer, and there are a lot of mountain landscapes waiting to be captured, edited and printed. I have a variety of bird photography trips in the planning stages as well as other landscape journeys. As it becomes safe to do so, this is a great time to plan what you want to do next with your images. By the summer or fall, it should be safe to take a trip, or a workshop, or to see people and take photographs of our joyous reunions. 2021 will be a photographically exciting year, and may it be both personally and artistically fulfilling for all of our readers.