Pixel Density Redux

A new angst seems to have taken hold in some quarters due to Fujifilm's 40mp APS-C camera. I see all kinds of questions being posed—ones that probably should have been posed when Canon went to 33mp in APS-C—about how all this works in terms of image quality.

We're back to pixel density, folks. 

Fujifilm's 40mp APS-C is about the same pixel density per capture millimeter as 90mp full frame (40mp times the 1.5x crop factor squared). Canon's 33mp APS-C, with its slightly smaller 1.6x crop sensor, is the same density as roughly 85mp full frame.
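The same-density equivalence is just the megapixel count multiplied by the crop factor squared. A quick sketch (the function name is my own):

```python
def full_frame_equivalent(mp_crop: float, crop_factor: float) -> float:
    """Full-frame megapixels at the same pixel density as a cropped sensor."""
    return mp_crop * crop_factor ** 2

print(full_frame_equivalent(40, 1.5))   # Fujifilm APS-C -> 90.0
print(full_frame_equivalent(33, 1.6))   # Canon APS-C -> ~84.5
```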

The problem starts showing up when people compare 40mp APS-C to 24mp, 33mp, or 45mp full frame. All kinds of math raise their heads and trip people up.

Linear resolution—something we want as much of as possible, all else equal—is proportional to the square root of the pixel count. Oh dear. Square roots. That's the jiggly thing that goes over numbers that you never quite figured out how to calculate in junior high. So, just to be clear, at the same frame size (updated: I'm going to detail this in pixels on one dimension, assuming the same sized sensor, rounded to the nearest percent to be more consistent and clear; none of the changed numbers changes my comments, however):

  • 12mp (4288 pixels) — 43% more linear resolution than 6mp (3008 pixels)
  • 24mp (6048 pixels) — 41% more linear resolution than 12mp
  • 33mp (7008 pixels) — 16% more linear resolution than 24mp
  • 45mp (8256 pixels) — 18% more linear resolution than 33mp, 37% more than 24mp
  • 61mp (9504 pixels) — 15% more linear resolution than 45mp, 57% more than 24mp
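Since linear resolution scales with the square root of pixel count, those percentages can be recovered directly from the one-dimension pixel widths; a quick check:

```python
# One-dimension pixel widths (horizontal) for each megapixel count above.
widths = {6: 3008, 12: 4288, 24: 6048, 33: 7008, 45: 8256, 61: 9504}

def gain(mp_hi: int, mp_lo: int) -> int:
    """Percent increase in linear resolution, rounded to the nearest percent."""
    return round((widths[mp_hi] / widths[mp_lo] - 1) * 100)

print(gain(12, 6))    # 43
print(gain(24, 12))   # 41
print(gain(61, 24))   # 57
```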

As I've written before, a 15% increase in linear resolution is about the threshold at which most people can detect any change. Note that I didn't write "significant change." Just that the majority of viewers will see an image with 15% more linear resolution as starting to be better in some vague way. How much better is a good question, one unanswered by science at the moment. 

But there's that tricky bit to think about: pixel density. 40mp APS-C happens across a distance that is two-thirds that of full frame in each dimension. So suddenly 40mp APS-C (7728 pixels over 23.5mm) has about 95% more linear resolution on an object framed the same size as with a 24mp full frame camera (e.g. same angle of view; 6048 pixels over 35.9mm). Technically, that's a lot, and perhaps enough for even some 24mp full frame users to consider switching to 40mp APS-C. (Hold that thought...)  
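That same-framing comparison reduces to pixels per millimeter on the sensor; a quick sketch using the widths above:

```python
# Pixels per millimeter on the sensor; with the subject framed identically,
# this is proportional to the pixels placed across the subject.
apsc_density = 7728 / 23.5   # Fujifilm 40mp APS-C, horizontal
ff_density = 6048 / 35.9     # 24mp full frame, horizontal

gain = (apsc_density / ff_density - 1) * 100
print(round(gain))   # roughly 95% more linear resolution on the subject
```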

Of course, lenses have something to say about what ends up in your data. A poor lens—as measured in contrast at line pairs per millimeter—on a high pixel count sensor might just resolve that poorness better. And then there's diffraction, which absolutely would be in play at 40mp APS-C at even f/2.8 on a desktop inkjet print (19") judged at arm's length distance. 
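To see why diffraction enters the picture so early, compare the pixel pitch of a 40mp APS-C sensor to the Airy disk diameter. This is a toy calculation of my own (first-null diameter 2.44 × wavelength × f-number, green light assumed), not a full print-viewing model:

```python
LAMBDA_UM = 0.55                    # mid-spectrum (green) wavelength, microns

pitch_um = 23.5 / 7728 * 1000       # pixel pitch of 40mp APS-C, microns

def airy_diameter_um(f_number: float) -> float:
    """First-null Airy disk diameter in microns: 2.44 * wavelength * N."""
    return 2.44 * LAMBDA_UM * f_number

print(round(pitch_um, 2))                 # ~3.04 micron pitch
print(round(airy_diameter_um(2.8), 2))    # ~3.76: already larger than a pixel
print(round(airy_diameter_um(8.0), 2))    # ~10.74: several pixels wide
```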

But things are trickier than even just applying the basic math. When I bought my Sony A7R Mark IV (61mp), I thought that it would be my new landscape camera of choice (over the 45mp Z7 II or D850). Heck, the Sony 20mm f/1.8 lens even performed ever so slightly better than the Nikon 20mm f/1.8 S lens in some of my chart tests (both are excellent landscape lens choices). Thus, more resolution and a slightly better lens should show up as better pixel results, right? Didn't happen. There's something about the way Nikon obtains and places their DNs (digital numbers) on the underlying photon-to-electron conversion that makes the shadows clearly better for me. And I tend to expose for the highlights and re-work the shadows in my landscape work. Thus I eventually sold the Sony.

As I've written many times before, we're in an era now where we get small gains when we get them, and these gains don't always come without a cost. For instance, Canon is still using anti-aliasing filters on some of its 33mp cameras. For some, that filter would be a cost, while for others not having such a filter might be considered a liability (e.g. moire). 

I wrote way back in 2003 that 24mp was the end of the sweet spot for APS-C. Beyond that pixel count we'd see other impacts—diffraction, for instance—start to become issues we'd have to deal with, and we wouldn't see as much improvement as the increase in number might suggest. My statement back then was based upon a lot of low-level math, and I haven't seen much change that would alter that work. One change that might impact that math is the removal of the anti-aliasing filter, but as I just noted, that doesn't come without a downside.

Information theory also comes into play. I won't get into the formulas, but basically the pixel information you ultimately collect is dependent upon both resolution and signal-to-noise, and any decrease in signal-to-noise produces a non-linear decrease in information. A 40mp APS-C sensor has a lower signal-to-noise ratio than a 40mp full frame one, all else equal. And we haven't gotten to camera or subject motion, either, which also would impact the pixel math in terms of ultimate acuity.
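A toy Shannon-style sketch of that argument (my own illustration, not Thom's underlying math): per-pixel information goes as log2(1 + SNR), and if smaller pixels are shot-noise limited, SNR scales with the linear pixel size. So a 40mp APS-C sensor carries less total information than a 40mp full frame one, and the loss is non-linear in SNR:

```python
import math

def total_info_bits(pixels_mp: float, snr: float) -> float:
    """Toy model: Shannon bits per pixel (log2(1+SNR)) times pixel count."""
    return pixels_mp * 1e6 * math.log2(1 + snr)

SNR_FF = 100.0           # assumed full-frame per-pixel SNR at some exposure
SNR_APSC = SNR_FF / 1.5  # shot-noise limited: SNR scales with linear pixel size

ff = total_info_bits(40, SNR_FF)
apsc = total_info_bits(40, SNR_APSC)
print(apsc / ff)   # < 1: same pixel count, less information
```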

So be careful in getting too excited about "big numbers" when it comes to image sensors. Yes, they keep going up—and will continue to do so—but the direct improvements are going down in the amount of change they suggest. 

That said, what I've also written in the past still applies: all else equal, more sampling is always better in digital constructs. (Note the "all else equal.") Why? Because if the data is accurate, it gives you more discrete information about the thing being sampled. It's only the amount of "more" that is changing. So yes, higher pixel counts are better, but each increase is now relatively smaller than the one before. Put another way: we continue to get higher pixel densities, but they're giving us less new benefit each time.

I'm still picking up my 20mp and 24mp APS-C cameras and being very happy with what they produce. I'm still picking up my 24mp and 36mp full frame cameras and being extremely happy with what they produce. You should be, too. Of course, my main cameras are 45mp and 50mp these days, so why listen to me? ;~) 

If I were doing more landscape work, I'm sure I'd have a medium format 100mp camera, as the increase in capture size (higher signal to noise) coupled with increased resolution should mean significantly more information content captured. (Note the "should"; my experience with the Fujifilm GFX100 was that the lens I used and the mount alignment were holding it back somewhat.)

Update: re-rationalized math to pixels/mm across a single dimension (because we have different sized sensors). Changes percentages, but doesn't change conclusions. Also, fixed wording in last paragraph.

Looking for gear-specific information? Check out our other Web sites:
DSLRS: dslrbodies.com | mirrorless: sansmirror.com | Z System: zsystemuser.com | film SLR: filmbodies.com

bythom.com: all text and original images © 2023 Thom Hogan
portions Copyright 1999-2022 Thom Hogan
All Rights Reserved — the contents of this site, including but not limited to its text, illustrations, and concepts,
may not be utilized, directly or indirectly, to inform, train, or improve any artificial intelligence program or system.