How Far Will Cameras Go?

This is the companion article to “How Far Have Cameras Come?”

At the turn of the century when I was first using and writing about the Nikon D1, I was invited to give a talk about the future of imaging at one of the big yearly conferences. I won’t go into the details of that talk here, but it’s been interesting to see most of my guesses showing up in some form or another in eventual cameras. 

Many of my thoughts back then had to do with the pre-computational and post-computational aspects of dealing with digital data. A number of smartphones these days are doing variations of both, though the “pre” is mostly extra data gathering while “post” is where the actual computational bits come into play. 

The dedicated camera makers have been slower at adopting many of the things that were/are possible, but if you go back in time you can see many companies experimenting with their options along the way. Closer examination shows that most of that experimentation happened in the consumer compact cameras. There’s a reason for that: those cameras sold in huge volumes, so the R&D expense could be spread over far more sales. 

For example, fairly early on Nikon introduced BSS (Best Shot Selector) in its Coolpix line. The camera took a burst of images, and then selected the one that it thought was best (in some iterations, you could override that choice). I believe Nikon was mostly looking at subject motion and focus to determine “best,” but as you get more computational ability (and now AI), you could add other factors into making such determinations. 
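
To make “best” concrete, here’s a minimal sketch of that kind of selection, scoring only for sharpness. The burst file names are placeholders, and Nikon never published its actual BSS logic, so treat this as illustrative only:

    # Toy "best shot selection": score each frame of a burst by sharpness
    # (variance of the Laplacian) and keep the sharpest one. A sketch of
    # the general idea, not Nikon's actual (unpublished) BSS algorithm.
    import cv2

    def sharpness(path):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        return cv2.Laplacian(gray, cv2.CV_64F).var()  # higher value = sharper frame

    burst = ["burst_01.jpg", "burst_02.jpg", "burst_03.jpg"]  # hypothetical burst files
    best = max(burst, key=sharpness)
    print("keeping", best)

A production version would also weigh subject motion, face/eye state, and exposure; the point is simply that “best” is a score the camera can compute.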

The problem with dedicated cameras today and in the foreseeable future is that the volume (roughly 6m ILC units/year) means that you have to spread R&D return on investment over time, not units. So “big” changes will likely happen less often than “small” ones. 

Still, there’s a lot that can and should be done for future cameras. 

I’ve been hammering on “user programmability” for 15 years now. The camera companies have begrudgingly given us more “control customization” instead. I believe this is an architectural thought lock on their part: they simply either don’t want to spend the time and money to accommodate it, or they don’t have the expertise to achieve it (it’s a software issue, not a hardware problem). 

I come from Silicon Valley (literally). I’m used to blank slate ideas, and then trying to achieve them. In 1968 Alan Kay first described the KiddiComp, which a few years later had turned into the Dynabook. His idea became a Holy Grail in Silicon Valley: a flat tablet device with the power to do almost anything, which could be programmed by children. I put that last bit in italics, because most of the companies pursuing the Dynabook idea focused on just the first clause in that sentence: flat tablet device with the power to do almost anything. Like the GO Penpoint device I worked on and evangelized beginning in 1989. Left out of most Dynabook wannabes—including the current champion, the Apple iPad—was Kay’s original notion of child-level programmability. 

I still believe that the camera companies eventually have to add programmability to their devices. They’re failing to see what DJI is doing with drones—both automatic and learned programming—and what the sophisticated camera user wants. All the third-party gadgets for more finely controlling time-lapse, stop-motion, light/sound triggers, and more, are plenty of evidence that the camera companies aren’t providing something the customer wants. Those are “programs.” I rest my case.
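
To be clear about what I mean by a “program,” here’s a purely hypothetical sketch. The camera API in it is invented for illustration—no manufacturer currently exposes anything like it:

    # Hypothetical user-written camera "program": a time-lapse that ramps
    # shutter speed as the light falls (a classic third-party-gadget job).
    # The camera object/API is invented for illustration only.
    import time

    def ramped_timelapse(camera, frames=240, interval_s=5,
                         start_shutter=1/250, end_shutter=1/30):
        for i in range(frames):
            t = i / (frames - 1)                                   # 0.0 -> 1.0 across the sequence
            camera.set_shutter(start_shutter + t * (end_shutter - start_shutter))
            camera.trigger()                                       # expose one frame
            time.sleep(interval_s)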

Okay, I don’t rest my case. Because we have something else we need to talk about that intersects this: paid features or subscriptions. As I’ve pointed out in earlier articles, that’s coming to cameras. We saw Sony take a whack at this with the now-defunct PlayMemories (“buy this; oh sorry, we cancelled that and kept your money”). Sony is now back with “for a fee” grid updates. Since a grid line is just a pixel overlay, you could let the user program that with a PNG file, but… 
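
How trivial is that overlay? Here’s the whole operation done with Pillow on a desktop, using placeholder file names; the camera-side version would be the same composite applied to the live view buffer:

    # A custom grid is just an alpha composite of a user-supplied PNG over
    # the frame. File names are placeholders; this desktop sketch only
    # illustrates how little work the camera would actually have to do.
    from PIL import Image

    frame = Image.open("live_view_frame.jpg").convert("RGBA")   # stand-in for the live view
    grid = Image.open("my_grid.png").convert("RGBA")            # user-designed grid with transparency
    grid = grid.resize(frame.size)
    Image.alpha_composite(frame, grid).convert("RGB").save("with_grid.jpg")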

Programmability requires ongoing support. Support costs money. Costs mean fees. So I’d say that somewhere in the future, we’ll probably get what I’ve been asking for these past 15 years, but only if we pony up more moola (more on that later in the article). 

AI, of course, has been a theme for a while now (and not just in cameras). Most of the things labeled AI today are trained. That means that you feed source material to the engine so that it learns something. Right now, most of that is not done in real time, which means that the first AI things we’ve seen in cameras have been the camera company training an aspect of the camera, typically exposure and/or focus. 

You can see how poor the current training actually is by traveling to Africa ;~). Apparently the focus algorithms were never trained on things with spots, on many mammals such as elephants/hippos/rhinos, or on reptiles of any kind, and, where they recognize insects at all, only on ones with wings. Some of this problem probably was due to the fact that a lot of the training happened heading into and during the pandemic. Some of it was not understanding what the customer wanted.

It’s a pretty easy thing to predict that those weakly-trained focus systems will get better trained. But here’s where things will eventually go: real-world, real-time training. What if I could train my focus system to recognize my sports teams’ uniforms and helmets? Yeah, I want that, and I think we’ll likely get some variant of that (however, note my caveat below). 

But the big “win” for AI will eventually be “style.” I’m going to go back to Ansel Adams for this one. While Adams would go out and create an optimal negative, when he got to the darkroom to create a print, he would manipulate different sections of the negative to give it different tonality in the final result. Over time, Adams developed a style. If you look at his early prints, they have less of this manipulation and tend to avoid pure black, while the later versions had more extreme changes. 

With the recent changes to selection in most post processing tools, I’m no longer processing the full image, I’m processing regions, much like Adams did, and creating my own style from that. To a large degree, all professional photographers have to create and maintain a unique style in order to stand out. However, AI will basically make that harder and harder for pros to do, and easier and easier for amateurs to say “make this image look like a Thom Hogan one.” Yuck. But it’s coming. Absolutely. Fortunately, Nikon hasn’t yet noticed me enough to train their future AI on me ;~). 

Much like focus, style should be real-world trainable in the future, meaning that I can establish it as I photograph. To do this, the camera will have to start being able to recognize and mask at least as well as Photoshop does today (which is a lot to achieve). When I can tell the camera in the field to “lower the sky exposure and push it slightly more towards blue,” you’ll know we’ve gotten there. It will happen, but when is another story, and those pesky mobile devices may get there first.
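
For a sense of what that instruction reduces to once a mask exists, here’s a crude desktop stand-in. The photo and sky mask files are assumed inputs (the mask coming from whatever segmentation you trust), and the adjustments are done naively on gamma-encoded data:

    # "Lower the sky exposure and push it slightly more towards blue,"
    # given a sky mask (white = sky). The mask is assumed to come from a
    # segmentation step; the adjustment itself is per-region arithmetic,
    # done crudely here on gamma-encoded JPEG data.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("photo.jpg"), dtype=np.float32)                   # placeholder image
    sky = np.asarray(Image.open("sky_mask.png").convert("L"), dtype=np.float32) / 255.0
    sky = sky[..., None]                                                          # broadcast over RGB

    out = img * (1.0 - 0.25 * sky)                # darken the sky region (~ -0.4 stop)
    out[..., 2] *= 1.0 + 0.10 * sky[..., 0]       # nudge the blue channel up in the sky

    Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save("styled.jpg")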

Indeed, that last clause is an important one. Mobile phone sales have climbed the hockey stick and are now on a plateau. For Apple, Samsung, et al. to continue to make money, they need to either grow unit sales or push list prices up. Both of those can be done by paying a lot of attention to the things that I just wrote about. In other words, next year’s iPhone always has to be better and do more than this year’s. We’re running out of physical attributes to improve, so the computational and machine learning aspects will ramp up more. So if you want to know what your dedicated camera will have in 10 years, look at tomorrow’s phones.

I mentioned that we’ll be asked to send more money to the camera companies in the future. Thing is, the actual hardware cost will go down in inflation-adjusted terms. What we pay US$2000 for today will be priced equivalent to an inflation-adjusted US$1000 not too far in the future. Why? Because it can. The reduction of physical costs (fewer mechanical parts, fewer chips, fewer everything) pretty much guarantees that. The Japanese cost reduction programs for consumer electronics actually run faster than inflation. They will be incented to give some of that gain back to the customer in order to try to keep volumes up. 
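
The arithmetic behind that, using rates that are my own illustrative assumptions rather than anyone’s actual figures: if nominal prices drift down a few percent a year while inflation runs a few percent a year in the other direction, the real price halves faster than you might guess.

    # Illustrative arithmetic only: how long until the inflation-adjusted
    # price of a camera halves, given assumed rates (not actual figures).
    import math

    inflation = 0.03        # assumed 3%/year inflation
    price_drop = 0.05       # assumed 5%/year decline in nominal price
    real_multiplier = (1 - price_drop) / (1 + inflation)    # real price change per year

    years_to_halve = math.log(0.5) / math.log(real_multiplier)
    print(f"real price halves in about {years_to_halve:.1f} years")   # ~8.6 years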

However, many of the things that might be added to today’s cameras can have high (and sometimes ongoing) R&D and support costs. The business trend is to charge for what it costs you. So if you want all those possible future features, you may find that you have to pay for the ones that go beyond today’s capabilities, particularly if the feature is mostly software-enabled.


Caveat: we’ve been moving from a stills-oriented world to a motion (video)-oriented one. That starts to change how the camera makers should focus their technology attentions, even if your goal, as the customer, is extracting stills. For example: “follow this person/player and select the best images to extract.” We already have real-time—but expensive and complex—video systems that can isolate and highlight a single player, including “making them bigger” compared to the others, or putting an exposure highlight on them, or providing a track of where they went. Eventually, expensive and complex things get “chipped” and become cheap and automatic. Mirrorless cameras are essentially full-time video systems by default (the image sensor is “always on”), so you have to watch what’s happening in video to see what will likely trickle down to stills.
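
The “exposure highlight on a player” effect, for instance, is simple per-pixel math once something upstream has located the subject. A crude sketch, with a bounding box assumed to be handed over by a detector or tracker:

    # Crude "spotlight the player" effect: dim everything outside a subject
    # box. The box coordinates are hypothetical and assumed to come from an
    # upstream detector/tracker; broadcast-grade systems do far more.
    import numpy as np
    from PIL import Image

    frame = np.asarray(Image.open("video_frame.jpg"), dtype=np.float32)   # placeholder frame
    x0, y0, x1, y1 = 400, 200, 560, 520                                    # hypothetical subject box

    mask = np.zeros(frame.shape[:2], dtype=np.float32)
    mask[y0:y1, x0:x1] = 1.0
    out = frame * (0.4 + 0.6 * mask[..., None])                            # background at 40% brightness

    Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save("highlighted.jpg")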
