How Far Do We Look Ahead?

In doing some cleanup on my systems, I came across my old "100 ideas in 20 minutes" presentation about photography tech possibilities, particularly computational photography, which I first created for an imaging symposium at the turn of the century (and last updated in 2005).

While some of my future imaging suggestions will seem familiar—pixel shift and what some of the computational multi-sampling smartphones are currently doing, for example—others are still not being done by anyone, which surprised me a bit. If I could think of things that could be done that still aren't available, how many other folks out there can come up with their own set of still undone ideas?

Of course, I shouldn't exactly be surprised. After all, I asked for stacked semiconductors back in 1980 at a big tech conference with all the usual suspects, and was told clearly by IBM, Intel, and other semiconductor makers that this was not remotely possible. One thing I learned even before that, though, was "never say it isn't possible." So I just ignored them and kept asking.

Why was I asking for stacked semiconductors? Well, it struck me from all the Cray marketing propaganda that speed was defined by the shortest distance a signal has to travel, so why would you restrict your layout to two dimensions? (Implicit answer: "because that's how we designed our equipment to be able to build things." Also: "we believe we wouldn't be able to dissipate the heat.")

Many engineers tend to be linear thinkers. They see how X is done today, so they immediately tackle a small, incremental improvement to X using known iteration techniques. I'd call that somewhat lazy engineering in high tech, as Moore's Law tells us that incremental improvements cascade constantly from it with only modest additional effort. Process size shrinks? Chip complexity and/or speed increases. Speed increases? Data access speeds up. The list goes on and on. A lot of tech engineering tends to be just trying to keep up with Moore's Law, not taking new advantage of it or changing something else about how you're using it. The change from CISC (Complex Instruction Set Computer) to RISC (Reduced Instruction Set Computer), for instance, was a big one that's still having trickle-down effects, particularly in the GPU and NPU arenas. But such dramatic shifts tend to happen much more rarely than iteration.
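Aside for the numerically inclined: here's a minimal sketch of that compounding, assuming the classic rule-of-thumb of density doubling roughly every two years (the two-year period and the helper function are my own illustration, not anything from my old presentation):

    # Rough illustration of Moore's Law compounding (assumed two-year doubling).
    def moores_law_gain(years, doubling_period_years=2.0):
        """Relative density/complexity gain after a given number of years."""
        return 2 ** (years / doubling_period_years)

    for years in (2, 5, 10):
        print(f"{years} years -> roughly {moores_law_gain(years):.0f}x the transistors")
    # Prints roughly 2x after 2 years, 6x after 5 years, 32x after 10 years.

That's the gain you get just by riding the process curve; the more interesting question is what you do differently with it.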

Many of you know that Science Fiction has foretold many a technical advancement we eventually ended up creating. Arthur C. Clarke, for instance, was telling us what we could do with satellites long before we had any. Science Fiction writers tend to be non-linear thinkers. They jump-start themselves into a future where anything is possible and are not constrained by current capabilities, ideas, or limitations.

Artificial intelligence has long been one of those Sci Fi possibilities. 

People ask me all the time whether I think Artificial Intelligence is going to destroy photography. My answer tends to be no. One reason came to mind as I was looking at my old presentation. While I hadn't organized it this way in that presentation, I now see that there were some logical and distinct "categories" my turn-of-the-century ideas all fit into: abstraction, enhancement, and optimization, for instance. AI is mostly targeting the enhancement category. As most of you know, I'm an optimalist—as opposed to optimist ;~)—and thus I'm not overly excited about generative photo AI. But if AI can take an as-optimal-as-possible data set and find a way to enhance it without adding contradictory information, I'm all for that.

The painting world used to be about realism. And then it wasn't. So what we're seeing with generative AI and photography these days is often something similar: a camera-taken photo is "real" but a "generative AI" "photo" isn't. I don't have any issues with both existing, but I do want to know which one I'm looking at ;~).

Aside: a lot of what photo AI is doing is actually because you asked for it ;~). You didn't control your background while taking the photo and now have a distracting element you want to remove. But you also don't want to learn and spend time doing proper masking, let alone use restoration techniques on the affected area. Too much trouble, too much time, too much to learn. What you really wanted was a button to press to do all that automatically. Well, you got it. The reason why it has to be AI-driven is that there are quite a few decisions that have to be made, and you also don't want a button that, when pressed, then asks you a long series of questions. I call this "shortcutting." The reason why AI is getting so popular now in so many aspects of photography is that it relieves you of having to make so many decisions. 

Returning to the presentation I found: it was prompted by a request for a paper on how imaging might be done differently in the 21st century. Specifically, digital imaging. After all, I had been doing some form of digital imaging since the early 1980s.

But it really doesn't matter when you create such a forward-looking assignment; at any given moment in time, what is possible 10 years from that moment is clearly different from what is being done at that moment, and that difference is increasing, not decreasing. The real question is whether you're open to and looking for truly new ideas or not.

Most of the people I talk to about this who spend their time looking to the future seem to think that AI will be the primary driver for the next 10 years. I worry about that, but probably not for the reason you think. Machine Learning (ML) and Large Language Models (LLMs) are the big AI pushes right now, but both of them tend to be backward-looking, not creatively imaginative or prone to making unexpected forward leaps. (Yes, I know about hallucinations in LLMs, but those aren't necessarily forward leaps, they're incorrect backward analysis.) Moreover, the trend in AI has been to replace or augment a real-time capture with something conjured from existing work. Where are the future technologies that improve real-time capture abilities?

We're actually in a period with photography similar to what happened in other artistic disciplines. For example, music. Music, up until the multi-track recorder came along, tended to be 100% real time. Even a recording was just a capture of a real-time performance. But then we started creating overdubs, and then sampling and sequencing, and then auto-tune and much more. The result was Kraftwerk and Depeche Mode—both of whose music I like, by the way—but this was pushing music away from real-time performance to tech-augmented compilation. It's ironic that musicians have all found that they need to return to real-time performance to make money, but to me this is exciting because it also means that they are having to play with real-time things to stand out. The Grateful Dead had it right all along ;~).

So the question I'm pondering today after my short trip down memory lane is this: as a photographer in the field in the year 2034, what will I be using and how will it make the experience of taking a photo different? Yes, I'm sure new technologies will be part of that, but note that I'm asking this from the photographer perspective, not the technology perspective. That's important, and I think it's the thing that often gets missed when companies try to come up with and execute a future product plan. What benefit does the user really get?

To finish off, I'm going to hint at one thing I believe will happen: Plogging. Not blogging, not vlogging, but plogging. 

I'll leave you to figure out what I think plogging is ;~). (No, it's not photography logging.)
