What's Easy and What's Difficult for Camera Makers

One thing that continues to astound me is that camera makers keep picking "difficult" problems to solve while often failing to solve the "easy" ones. Moreover, the difficult problems they pick may not be ones that most of their customers want solved.

So let's just break down some new camera design needs for a moment:

  • Body changes — Easy. Most things involved here—size, grip, controls, buttons, LCD, sealing, etc.—are testing and iteration problems, not invention problems. Yes, the grip size is sometimes dependent upon the battery, and the battery size is dependent upon the power requirements of the electronics, but these are communication and balancing issues within engineering. Controls and buttons require cabling within the body, for which there is sometimes minimal room. But these are common problems, and should be easily dealt with. It surprises me how often camera companies make changes in the body and get them wrong. Moreover, if you're really trying to build a customer base that will upgrade over time, it's important to get this right early and then stay as consistent as possible.
  • Menus and feedback — Easy, but time-consuming. Building out the UX—which includes some body changes (above)—is a well-understood process and should have a well-understood goal. You know you've got this right when users don't need a manual to understand anything (or perhaps better: most things). Nikon's "This option is not available at current settings or in the camera's current state" message is a really good example of getting this wrong. The camera knows exactly what is causing the conflict, but doesn't report that to the user (the first sketch after this list shows what reporting the actual cause might look like). This is equivalent to the old "Error -45" messages some operating systems would throw up. Okay, Houston, we've got a problem, but what exactly is that problem? The reason this aspect of a camera is time-consuming to get right is that it requires a full, complete process involving customer interaction, and it has to work in every language. Too often camera makers just pile on to what they've already done, and we end up with sprawling, incomplete, contradictory, or confusing UX. Corollary: it's okay to do a complete redesign—as Sony just did with its menu system—as long as it makes things easier, adds logic and organization, and is clear progress from the earlier design. It's not okay to change things just to change things.
  • Rear LCD/Viewfinder — Easy, but nuanced. Let's be clear: a poor viewfinder makes for a poor user experience, and should always be avoided. Cutting costs and performance in the viewfinder is a recipe for dissatisfied customers (and yet that's exactly what happened with low-end DSLRs, and was probably a small part of their demise). With EVFs, the minimum bar today is 3.7m dots, with good optics and positioning behind them. Blackout and lag times need to come down (the Sony A1/A9 excepted). Meanwhile, the rear LCD is either fixed, tilting, or fully rotating. Cases can be made for each for different uses, so it's important to fully understand the likely customer's use case first and foremost before making a decision here. Moreover, responsive touch is now required, and that includes focus control. All of these things are again common issues that a good engineering team should get right from the beginning, yet I still see too many compromises made due to economics or schedule. Getting these things right is integral to the user experience, so they should have a higher priority than they seem to get with some products.
  • Wired connectivity — Easy. A camera has to live in an environment of other devices, and standards apply to connectivity. USB-C and HDMI are widely used standards defined outside the camera industry, yet the camera makers tend to be well behind the current versions in their support. The problem here is that the bean counters don't give a lot of credence to connectivity—it's just a camera, Jim—and thus cost reduction comes into play: use last decade's part if you can, because it's cheaper. Moreover, the internal structure of the camera de-prioritizes external communication, so even with state-of-the-art connections you get far lower transfer speeds than expected (the second sketch after this list shows what that costs in practice). There needs to be a mode where the camera floods the connection pipeline at full speed (and yes, the camera doesn't have to be operative as a camera in that mode). We used to have that, and then Microsoft decided MTP/PTP wasn't something that should be supported. The camera makers quickly followed (and didn't invent something new to keep transfer speed maximized) because it let them concentrate on something else.
  • Wireless connectivity — Difficult. It's not the hardware side that's difficult, as things like Bluetooth LE and Wi-Fi 6 are highly standardized and easy to obtain parts for (and even code stacks for). It's the logical side of wireless that makes for the big problem, mostly because the Japanese companies don't really control many aspects of it (Sony finally seems to have figured out that they can make Alpha cameras and Xperia phones talk more productively, but why that's only happening 13 years after they should have figured it out is a mystery, and they've really only done it at the high end). The real issue here is that camera companies don't think much about what happens to an image after it is taken. This derives partly from the film era, when the camera companies didn't really have any involvement with film processing and printing. Thus, even though electronics would allow them to have involvement in what happens to a photo, they don't prioritize it. Thing is, the customer prioritizes it, because workflow and presentation are now the bulk of the customer's work once we have auto exposure, autofocus, and auto everything. And yes, I know that Facebook, et al., keep changing APIs and the way photos work on their services—which is another reason why this bit is difficult—but that's just an excuse not to do what's necessary, not a real reason not to do it.
  • Image storage/cards — Easy, but fast moving. Funny thing is, the camera companies have been on top of this one since forever. If you think about it, Secure Digital improvements, CompactFlash, CFast, XQD, and CFexpress all tended to be pioneered and standardized in the consumer world by cameras. To a large degree, the computer and mobile device worlds—and cameras are mobile devices—have merged around various technologies over time, today around forms of PCIe. Canon, Nikon, and Sony can all point to pioneering use of a format (CFast, XQD, and CFexpress Type A, respectively). Unfortunately, once that card is removed from the camera, the camera companies don't care about it any more. Moreover, they seem not to have understood that since they speak PCIe, they could offer other solutions as well, including fast internal storage or wired external storage (I'd point out that Apple has a similar issue with iOS, which still tries to avoid having a user-accessible file system).
  • Mechanical shutter — Getting more difficult, with less payoff. We have plenty of shutters now that can achieve 10 fps (and even faster). Going beyond that is certainly possible, as some have already proven, but costly to do given the speeds involved and the need to minimize other things, like vibration. Of course, global electronic shutter is now the goal, as it would eliminate one of the last remaining complex mechanical mechanisms from ILC devices. So we're sort of on the verge of something changing here. Thus, no one really wants to invest too much in making mechanical shutters perform even better than they currently do.
  • Exposure meters — Easy, but can be made difficult. The basic job is simple—my group once did a relatively good job of this by just examining a 30 fps stream of four pixels we were controlling (the secret was in knowing which pixels to use; the third sketch after this list shows the kind of feedback loop involved)—but it can be made quite complex and difficult. This is one of those "how difficult was the dive" issues. Some camera makers are attempting only a 2.1 degree of difficulty (3m forward pike, 2 somersaults) while others keep going for 4.0 (1m forward pike, 4 somersaults). Users seem to be happy with 1.0. I see putting work into this part of a camera as more a matter of honor than of need these days. Most customers are happy with repeatable results plus some sort of real-time indicator as a hand-hold (which is why everyone insists that cameras have to have exposure compensation dials and on-screen histograms).
  • Focus systems — Moderate, but time-consuming. As machine learning (ML) and artificial intelligence (AI) get more and more involved with autofocus systems, the testing and improvement cycles tend to lengthen. Not that the work itself tends to be difficult, but it is resource-intensive. Moreover, this work tends to be driven by two of the more complex tasks (below), the image sensor and the image processor. Of all the areas I list here, focus is the one that's showing the most growth in capability recently. Yet for some reason I see each maker managing to miss on small details that are important, so we're not quite there yet. That's actually good news, as it means that more investment in this task will have functional payoffs in future products.
  • Color reproduction — Difficult, because of subjectivity (and some other things). I almost don't even want to type anything here. A camera could have "perfect" color reproduction and someone would complain about it. None of the camera defaults I've seen are accurate; they're subjectively tailored to the company's audience testing. Some camera companies have huge teams working on creating the color for their images, and even that color won't match a ColorChecker chart with anything close to complete accuracy (the last sketch after this list shows how such accuracy gets quantified). And then some raw converter will put its own interpretation on the data and cause further arguments.
  • Image sensor — Difficult. The heart of all cameras now comes down to two parts, the most important one being the image sensor. Basically, how well the image sensor works impacts pretty much everything downstream. We probably hit a reasonable ability to accurately resolve photon randomness with wide-enough dynamic range six to eight years ago (in full frame). A Nikon D610 image still looks remarkably good today compared to those from the latest 24mp full frame sensors. Canon, Nikon, and Sony have all proven that they can keep that intact while upping pixel counts (at least at most ISO values you'd use, and up to the maximum implied print size). Sure, we've had small image benefits along the way, but perhaps not enough to justify a customer upgrading for that alone. What's happening now is that really advanced techniques are slowly working their way up from smartphone-type image sensors. First was BSI (backside illumination), then stacked sensors, and eventually we'll probably see deep trench isolation and other techniques. The strange thing here is that the costs of this work are the highest (and take the longest time), but the benefits now are very small. Yet this is where Canon, Nikon, and Sony, in particular, are spending most of their engineering time.
  • Image processor — Very difficult, but they have help. Finally, we get to the heart of what makes the modern camera tick. The smartphones have led the way here, as the huge volume of phone sales has allowed Apple, Qualcomm, and Samsung to push more technologies and performance into a single chip. Virtually all of the "smarts" in phones and cameras these days are driven by ARM cores with dedicated memory and GPUs. That said, the camera makers have lagged behind the phones and are playing catch-up. We're just starting to see ML cores and more trickle into camera image processors. The good news is that all the camera makers have an external engineering organization helping them with their processor in some way; Nikon uses Socionext, for instance, to build out EXPEED chips. Still, there's a tight critical path with multiple entities feeding the image processor work. I've seen instances where something has been left out of these processors because it just didn't meet the timelines. In Nikon's case, they also claimed that their then-current EXPEED didn't perform well enough for the needs of the DL cameras (which likely involved image sensor pipeline speeds). If there were one place I'd concentrate my engineering resources, it's the image processor. I can point to things with every camera maker that haven't been done because they're not supported by the current image processor.
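
First, that menu-feedback point, made concrete: here's a minimal sketch in C of what cause-specific error reporting might look like. Everything in it—the conflict codes, the messages, the function names—is hypothetical and invented for illustration, not anything from actual camera firmware. The point is simply that the firmware already knows which setting caused the conflict (it had to, in order to gray out the option) and could just say so:

```c
#include <stdio.h>

/* Hypothetical conflict codes; real firmware already tracks something
   like these internally in order to disable the menu item at all. */
typedef enum {
    CONFLICT_NONE,
    CONFLICT_ESHUTTER_FLASH,   /* electronic shutter blocks flash        */
    CONFLICT_RAW_SMALL_JPEG,   /* image size locked while raw+JPEG is on */
    CONFLICT_MOVIE_MODE,       /* option unavailable while recording     */
} conflict_t;

/* Map each known conflict to a message that names the actual cause,
   instead of a generic "not available at current settings". */
static const char *conflict_message(conflict_t c)
{
    switch (c) {
    case CONFLICT_ESHUTTER_FLASH:
        return "Flash cannot be used: Silent (electronic) shutter is On. "
               "Turn off Silent shutter to enable flash.";
    case CONFLICT_RAW_SMALL_JPEG:
        return "Image size is fixed while RAW+JPEG is selected. "
               "Change Image quality to adjust size.";
    case CONFLICT_MOVIE_MODE:
        return "This option cannot be changed while recording video.";
    default:
        return "This option is not available at current settings.";
    }
}

int main(void)
{
    /* Simulate the user trying to enable flash with e-shutter active. */
    puts(conflict_message(CONFLICT_ESHUTTER_FLASH));
    return 0;
}
```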
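Second, the back-of-the-envelope math behind my wired connectivity complaint. The card size and throughput numbers below are illustrative assumptions, not measurements of any particular camera, but they show why a flood-the-pipe transfer mode matters:

```c
#include <stdio.h>

/* Rough transfer-time math for offloading a full card over USB.
   All numbers here are illustrative assumptions, not measurements. */
int main(void)
{
    const double card_gb    = 128.0;   /* assumed card size                 */
    const double mtp_rate   = 100.0;   /* MB/s: typical tethered camera     */
    const double flood_rate = 900.0;   /* MB/s: realistic on a ~10 Gbps link */

    double total_mb = card_gb * 1024.0;
    printf("Camera over MTP (~%.0f MB/s): %.1f minutes\n",
           mtp_rate, total_mb / mtp_rate / 60.0);
    printf("Same link flooded (~%.0f MB/s): %.1f minutes\n",
           flood_rate, total_mb / flood_rate / 60.0);
    return 0;
}
```

Under those assumptions, the same physical connection is the difference between roughly twenty minutes and a bit over two.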
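Third, exposure metering: the sketch below shows the kind of simple feedback loop I described—sample a few pixels, compare against a target, nudge the exposure. The sample_luma() and set_exposure() functions are hypothetical stand-ins for real sensor and firmware hooks, and the constants are assumptions chosen just to make the loop converge:

```c
#include <stdio.h>

/* Minimal auto-exposure feedback loop, in the spirit of metering from a
   small stream of sampled pixels at 30 fps. */

#define TARGET_LUMA 118.0  /* ~18% gray on an 8-bit scale (assumption)   */
#define GAIN        0.002  /* loop gain: small steps avoid oscillation   */

static double g_ev = 0.0;  /* current exposure offset, in stops */

/* Stand-in for reading the few pixels we meter from (0..255);
   pretends the scene brightens linearly as exposure increases. */
static double sample_luma(void)
{
    return 60.0 * (1.0 + g_ev);
}

static void set_exposure(double ev) { g_ev = ev; }

int main(void)
{
    /* Run the loop as if sampling at 30 fps for one second. */
    for (int frame = 0; frame < 30; frame++) {
        double luma  = sample_luma();
        double error = TARGET_LUMA - luma;   /* too dark: positive */
        set_exposure(g_ev + GAIN * error);
        printf("frame %2d: luma %6.1f  ev %+.2f\n", frame, luma, g_ev);
    }
    return 0;
}
```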
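And last, color reproduction: "accuracy" against a ColorChecker chart is usually quantified as a delta-E distance in Lab color space. The sketch below uses the simple 1976 formula; the patch values in it are placeholders, not real ColorChecker reference data:

```c
#include <math.h>   /* link with -lm */
#include <stdio.h>

/* CIE Lab color and the 1976 delta-E metric: the usual way
   "how far off is this patch?" gets quantified against a chart. */
typedef struct { double L, a, b; } lab_t;

static double delta_e76(lab_t x, lab_t y)
{
    double dL = x.L - y.L, da = x.a - y.a, db = x.b - y.b;
    return sqrt(dL * dL + da * da + db * db);
}

int main(void)
{
    /* Placeholder numbers, not real ColorChecker reference data: */
    lab_t reference = { 50.0, 40.0, 30.0 };  /* chart patch (assumed) */
    lab_t measured  = { 52.0, 36.5, 32.0 };  /* camera's rendering    */

    double de = delta_e76(reference, measured);
    printf("delta-E(1976) = %.2f %s\n", de,
           de < 1.0 ? "(invisible)" : de < 3.0 ? "(barely visible)"
                                               : "(clearly visible)");
    return 0;
}
```

A delta-E under about 1 is generally considered invisible; no camera default I've seen holds that across a whole chart, which is exactly the subjectivity argument above.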

That's not a complete list, by any means. Power distribution and consumption are becoming bigger and more important tasks, for instance, and the camera makers were mostly late in supporting the USB Power Delivery standards (though they've now mostly caught up).

With each new camera introduction I look to see which of the above tasks the company spent their engineering resources on. The Nikon Zfc, for instance, is mostly about Body Changes. Easy to do, and not a lot of resources needed to do it. The upcoming Z9 will be a different story, as virtually all of the more difficult tasks are being tackled, and apparently both deeply and broadly.
