Conflating User Issues

I see a common theme among many photographers since Sony pushed their top pro camera to 50mp: “I don’t want more pixels in a sports camera.” If not sports, then event. If not event, then wedding. Simply put, there’s a relatively large group claiming that fewer pixels would be better for them.

You have to ask the question: why would that be?

When you do, many will immediately say that “smaller pixels mean noisier results.” Nope, not exactly. Those folks don’t understand pragmatic equivalency. If you’re outputting to a double-page spread in a magazine, that’s a fixed output size, and both a 24mp and 50mp camera will have basically the same noise profile because they collected the same number of photons over the same area and are now outputting them over the same area. There’s no user issue with noise here. None. Not for any sports, event, or wedding photographer I know. Nothing’s changed in lighting, capture size, or output size. The results will have the same quantum shot noise within a small margin of error. Best case there may be some margin issues where one approach is better than the other, but again, pragmatically, the results are essentially the same.
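The equivalency argument is easy to verify numerically. Here’s a minimal sketch (illustrative photon counts on a scaled-down “sensor,” not actual camera data): simulate Poisson shot noise for the same scene captured at 24mp and 50mp pixel densities, then resample both to the same fixed output size and compare signal-to-noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# Scaled-down "sensors" so this runs instantly; the ratios are what matter.
photons_total = 10_000_000   # photons hitting the whole sensor (illustrative)
output_pixels = 10_000       # fixed output size (think: a double-page spread)

snr_at_output = {}
for megapixels in (24, 50):
    pixels = megapixels * 10_000
    # Photon arrival is a Poisson process: per-pixel noise = sqrt(signal)
    capture = rng.poisson(photons_total / pixels, size=pixels)
    # Resample to the fixed output size by summing neighboring pixels
    out = capture.reshape(output_pixels, -1).sum(axis=1)
    snr_at_output[megapixels] = out.mean() / out.std()
    print(f"{megapixels}mp capture -> output SNR ~ {snr_at_output[megapixels]:.1f}")
```

Both prints land on essentially the same SNR: the same photons spread over more, smaller pixels recombine to the same result at a fixed output size.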

So you ask again, and you get this answer: “I don’t want to deal with large file sizes.”

Bingo. We have about twice as much data to collect, so our file sizes should about double. That means you need larger cards, faster communication from card to permanent storage, and more permanent storage. This is actually a bit of a user-perceived pain point (and has been for some time).

But is it really?

Kryder’s Law (similar to Moore’s Law, but for disk density) used to net us a doubling of drive capacity every year, though manufacturing complexities have slowed that to something more like 15% additional capacity a year. As we mostly transition from physical platters to solid state storage these days, though, Moore’s Law comes back into play.

Certainly we have no problem getting fast, large-capacity cards. Sony skipped right to a 128GB minimum card size for CFexpress Type A cards. And Lexar makes a 128GB UHS-II card that works just fine in that Sony A1 and sells for US$50. You should be replacing your cards on a regular basis anyway—particularly if you’re a pro and must have ultimate reliability—so the card isn’t really the pain point.

So when was the last time you bought a hard drive, and how big was it? Costs per gigabyte of permanent storage used to go down fairly substantially each year, but as Kryder’s Law began to erode, that’s become somewhat less true. Still, the cost per gigabyte is not particularly high. A 10TB drive might set you back US$400, and that would store nearly a hundred thousand Sony A1 images shot in uncompressed raw. So permanent storage isn’t that much of a pain point.
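The arithmetic is quick to check, assuming roughly 102MB per uncompressed A1 raw and the US$400/10TB drive price used as an example here:

```python
# Back-of-envelope storage math, assuming ~102MB per uncompressed A1 raw
drive_tb = 10
drive_cost_usd = 400
image_mb = 102

images_per_drive = drive_tb * 1_000_000 // image_mb   # TB -> MB (decimal units)
cost_per_image = drive_cost_usd / images_per_drive

print(f"~{images_per_drive:,} images per drive")       # ~98,039 images
print(f"~${cost_per_image * 100:.2f} per 100 images")  # ~$0.41 per 100 images
```

Call it a hundred thousand images per drive, at well under a penny apiece.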

The real pain point is in moving that data, which can be as high as 102MB per image. Moving it to clients via wireless communication, or moving it from camera to permanent storage. You can solve the latter by buying state of the art cards and a state of the art card reader and using that on a state of the art computer (basically CFe and USB 3.2 minimum). And not using Lightroom ;~). Seriously, most of us sports photographers use Photo Mechanic for the task of ingesting images because it’s just faster to get to results, including captioning.
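To see why the card-to-computer path is where the time goes, compare offload times for a big session at a few throughput levels. The MB/s figures below are rough real-world assumptions, not spec-sheet maximums:

```python
# Time to offload a 1,000-shot session of ~102MB raws at assumed
# real-world throughputs (illustrative numbers, not spec-sheet claims)
shoot_mb = 1000 * 102

readers = {
    "UHS-I reader (~90 MB/s)": 90,
    "UHS-II reader (~250 MB/s)": 250,
    "CFexpress + USB 3.2 (~800 MB/s)": 800,
}

for name, mb_per_s in readers.items():
    minutes = shoot_mb / mb_per_s / 60
    print(f"{name}: ~{minutes:.0f} minutes")
```

That’s the difference between grabbing coffee while the ingest runs and captioning the first images before the scrum at the locker room door.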

That leaves us one pain point: getting 50mp from the camera to a client via wireless. Actually, that’s mostly a workflow pain point, and Sony has done a number of things to help you with that (most of us on the sidelines use one particular workflow that Sony enabled). You can also have the camera create 2mp proxy images and push them out, and sometimes I do that (and have done that on my Nikon cameras with far smaller pixel counts). 

Finally, our poor exasperated customer who doesn’t want 50mp will tell you: “I just don’t need that many pixels.” 

That’s probably true. I have to wonder if those folks are buying the right camera in the first place. A Sony A7 Mark III or Nikon Z6 II might be the right camera for them. But FOMO and bragging rights get in the way, and they opt for the more expensive camera that does things that they don’t need done.

But even here I’d be cautious. I’ve heard the “don’t need that many pixels” complaint at 12mp, 16mp, 24mp, 26mp, 32mp, 36mp, 45mp, and these days at 50, 61, and 100mp. Those folk should think back just a bit. They said they didn’t need digital HD TV, that the old NTSC analog system was fine. They said that they didn’t need 4K, that HD was fine. They also were saying those things when they had 30” screens. So just how good does that old NTSC VHS tape now look on your 60” 4K TV? 

I’ve written this before and I still believe it will become true: we’re going to get to wall-sized displays in your house (and office). If you want to see/show your photos on that wall, expanding them to full size is going to reveal how many pixels you had when you took the image ;~). Of course, we might not get there for a few years, but that doesn’t change the issue, just delays it. Are there images in my files I wish I had been able to take with a higher resolution camera? Yep. Even in the last five years I can say that’s true for a few photos. (We now have AI-based programs that can upsize better than before, but I've yet to see one that doesn't trigger odd artifacts that become visible at larger output sizes.)

So what’s the real problem with 50mp? None that I can see that would make me say no to a 45mp Z9 to replace my 20mp D6. 

I read a conjecture recently that the number of technology improvements we’ve seen in the past 50 years will be matched by those we see in the next 10. I’m not sure I agree with that, as we’re getting closer to some physical boundaries with a lot of tech and would need some as-yet-unknown “breakthrough” innovations to keep on a faster track. I’m not sure yet what those breakthroughs are going to be or when they’ll show up. But there are plenty of folk out there looking for them. It’s not that we won’t have progress; it’s just difficult to predict right now when we’ll get it.

10 years from now I suspect we’ll be arguing about much different things in photography than we are today. I was reminded of that when I took a photo of a procedure at a medical facility recently with my iPhone. Only I accidentally had my phone set to record a “live moment,” not just a single still image. Not an issue; I can still get the image I want out of the data set. But I have more information than I needed for that photo. That’s exactly where I think photography is headed next: more information. Pixel count is only part of that.

People talk about computational photography a lot these days. Those of us in the digital business were talking about computational photography in the ’90s and ’00s; it has only recently shown up clearly in consumer products. I think the next step is database-collected photography. By that, I mean that you’re going to have multiple devices collecting image (and other) data for longer than a moment in time, from which you can re-create a moment in time from an observational position. (We can already do that with some after-the-fact brute force. Researchers at Microsoft and others have “built” new photos from all the photos of an object that appear on the Internet. What I’m talking about here is you making your own “photos” from your own data, easily and quickly.)

Meanwhile, 50mp image sensors are just fine for my uses. They should be for yours, too.

all text and original images © 2024 Thom Hogan
portions Copyright 1999-2023 Thom Hogan
All Rights Reserved — the contents of this site, including but not limited to its text, illustrations, and concepts,
may not be utilized, directly or indirectly, to inform, train, or improve any artificial intelligence program or system.