May I ask that you start your photographic-related shopping by clicking on any of the B&H links on this site. B&H is this site's exclusive advertiser. Starting a purchase from any B&H link on this site helps support this site.
This page of the site contains the latest 10 articles to appear on bythom, followed by links to the archives.
Ricoh Upgrades Your Shirt Pocket?
I'll be right up front about this: I have mixed feelings about Ricoh's update to the GR, the GR IV. While I've not had the chance to use one for photography yet, just going through the details via press release and other Ricoh-provided information makes me feel like there's a bit of contradiction now going on.
Specifically, price versus design (and by implication, use).
The original GR in 2013 was one of the early APS-C compacts, and sold for US$800 against a plethora of smaller-sensor compacts that were morphing in all directions (simple auto, breadth of features, more pixels/lens, etc.). As a basic, shirt pocket camera with considerable 16mp image quality, it became a carry-everywhere and street photography staple. The GR II in 2015 added Wi-Fi at no price increase. The GR III in 2018 added 24mp and sensor-IS, but lost the flash, with the price bumping up to US$900.
Today Ricoh officially introduced the GR IV, with a 26mp sensor, improved IS, a redesigned 28mm (equivalent) f/2.8 lens, a better image processor, 53GB of internal storage, and a microSD slot (instead of SD), all for...wait for it...US$1500. We're now nearing Fujifilm X100VI territory, but frankly, the control structure, lack of an EVF, fixed rear LCD, lack of weather sealing, and poor video specifications start to raise some real questions in my mind at that price point.
Yes, the GR IV is pocketable and half the weight of the X100VI. That's probably its key selling attribute. But the GR IV, unlike the Fujifilm, still seems very compact camera-ish to me: small, fiddly physical controls that haven't changed in over a decade, and challenging battery life (despite a bigger battery that's not compatible with earlier GRs), for example.
Despite what I just wrote, the GR IV's real competition isn't the Fujifilm X100VI or Sony RX100 VII, it's your mobile phone. So the real question here is whether you're getting enough value to substitute a dedicated camera for the mobile phone already in your pocket. And that's exactly where my mixed feelings come in. Yes, I'd prefer Sony's 26mp APS-C sensor used in the GR IV to the trio of little beasts in my current iPhone. But that advantage really only exists at 28mm ;~). Once you start using the Ricoh's "crop" focal lengths, things start to even up some.
But here's the real issue: even Ricoh's press release doesn't provide enough distinction about why you'd want this compact, and it uses the word "snapshot" to describe the camera (e.g. "...excellent choice for snapshot photography, even for professional photographers."). Most professional photographers I know are using their phones for so-called snapshots.
Am I glad Ricoh continues to iterate the GR? Sure. But I get the distinct impression that they're tackling it with a fairly narrow mind now. At the new price point, I expect better (and more). The basic UX hasn't changed a lot since the old GXR modular camera design, though the GR seems to have a cheaper feel than the GXR did.
Another Subscription is Announced
Canon today announced EOS Webcam Utility Pro and will discontinue the basic EOS Webcam Utility approach on August 25th. EOS Webcam Utility now uses a freemium model that requires an account with Canon. For free you get a dirt-basic connection. For US$50/year, you get a host of features that, in my mind, don't add up to the price for 99% of those doing Cam-ing on the Web.
Normally you'd expect that when you purchase a camera, it enables you to do everything the camera can do. But just as automakers have poked at SaS (service as subscription) as a way of adding function-for-pay, the camera makers have dabbled in it from time to time (e.g. Sony's custom gridlines for a fee).
The thing that concerns me is that no camera maker has proven to be good at software, and when you start trying to sell subscriptions for function, you're in the software business.
This latest poke at SaS that comes from Canon has a list of the paid features that reads a lot like "duplicates functions I'm getting from other existing products."
Why would Canon be doing this? Mostly because they don't want to be muscled out of an area that they thought they should own. You need a camera to Zoom conference, for instance, but much of what the video stream is doing once received by the computer is now controlled by Zoom and others. The camera is just a widget you need that you connect with a cable. Once you've bought the widget, the camera maker is out of the picture (pardon the pun).
The most revealing feature of the new EOS Webcam Utility Pro subscription option that suggests this displeasure at being muscled out is this: "change camera settings via app."
I've written it for decades now: if you're actually a "systems camera" maker, you need to enable the ecosystem that surrounds your product. Not partially, but fully. Canon is trying to not just control but own the entire EOS ecosystem. Witness the non-licensing and apparent suit-threatening surrounding the RF mount, for instance. I'd tend to say that Canon's approach here is contradictory to one of their long-stated goals: own the majority of the camera market in volume.
I have no problem paying for good software that's maintained, upgraded, and added to regularly and which also has excellent support. Do I get any hint that I'd be getting that from Canon's new offering? No.
The danger is that one of these pay-for-function attempts by a camera company gets real traction. Given that four companies own over 90% of the market, once the dam breaks, we'll be flooded with SaS, with no indication that the companies actually care much about what customers need or think. Customer service and support at all the camera companies pretty much sucks right now, and I don't see it getting better any time soon.
Tariff Update
tl;dr Prices won't be going down, only up
It's been a while since I wrote about the tariff issue. It's time for an update.
TACO (Trump Announces, Chickens Out) doesn't actually describe the tariff situation, as some in the media seem to think. It's more like TICK, TICK, TICK (Trump Incites, Changes Kick). It's a classic bullying tactic being applied to key financial relationships between countries.
Take Japan, for instance. The established tariffs on photo goods—which the US doesn't make and isn't likely to make—were in the single digits, as low as 2% on many items. Trump announced a 35% tariff to be applied starting in August, then negotiated a 15% tariff. Incite, then change the kick (back away, in this case).
However, 15% is still a double-digit tariff and much higher than the previously existing ones. And this pattern is replicating across multiple countries where "agreements have been announced." (I put that last bit in quotes because an agreement is a finished document, and we haven't exactly been seeing those.) Vietnam was going to be 40%, now will be 20%. China was 145%, now is temporarily 30%. Thailand is currently at 20%, though Trump's August 1st deadline originally promised a 35% tariff.
All four countries I just mentioned produce significant amounts of the cameras and lenses we buy here in the US. Products from all four will be more expensive soon, if not already.
Here's the thing that doesn't get mentioned in most tariff discussions: the effect of tariffs is time-lagged. Fujifilm raised their prices on August 1st, but that wasn't a reaction to the August 1st tariff changes; it was trying to catch up with the initial tariff increases, particularly on China. Likewise, Nikon raised prices on China-produced products, and is about to raise prices (September 1st, I hear) on their non-China built products, but those new prices are triggered by the initial tariff increases, not the new ones, such as the 20% tariff on Thailand products.
You can see the slow-roll issue in the financial statements of foreign companies (and even US ones). Pretty much no one changed their pricing in Q2 2025, but their financial statements say they took a hit from the tariffs by not doing so. Most have been making price changes in Q3 2025—which is where we currently are—and some of those changes are still in progress and won't really hit until September price lists go out to dealers. By the time we reach the fourth quarter of 2025—the holiday buying season—higher prices will be the norm, though it remains to be seen how holiday sales and promotions might impact final pricing. No matter what, those sales will be starting from a higher price point, so it's still effectively a price increase.
This is similar to the supply chain problem that the COVID pandemic caused. Initially, everyone had inventory of both finished items in stores and some just-in-time production inventory. As the just-in-time capability dried up, so did production. Eventually that led to product shortages. Shocks like tariffs and supply chain shortages take time to progress through the system before becoming obvious, often taking six months or more to reach maximum impact.
We're probably only halfway through the tariff-induced price changes here in the US, and that's assuming that there are no new changes after the September ones take effect. Trump keeps finding new "invisible demons" to tax as he tries to make other countries do what he wants them to (prediction: they won't, particularly long-term).
The problem I see brewing is this: if the camera companies have to make another round of tariff adjustments here in the US, the holiday market, when the most photographic gear is sold, gets completely messed up. I'm starting to think that the holiday discounts this year won't even match previous list prices. That's going to create rougher times for camera dealers, even big ones.
The takeaway here is simple: tariffs have a delayed impact. The full force of a tariff isn't known until all existing inventory in the chain is cleared and all items moving through the economy are being taxed at the new rates. And even then, given the whimsical nature of the TTT (Today's Trump Tariff), we still might be seeing price changes far into the future.
Overall, for all hard good products in the US, tariffs averaged about 2.5% in 2024. Here in 2025 they're currently averaging 17% (source: Jefferies). That's a significant change.
Which brings us to "who pays the tariff?" Trump has insisted that Americans won't pay it. While I've seen some evidence that some producers lowered their price a bit at the port, the bulk of the evidence says that most of the tariffs are being paid at the port by the importer. (There's also a temporary impound option, where an item sits in a US warehouse and doesn't get tariffed until it's removed from that warehouse; some of the camera companies have used this in anticipation of the initial tariff being lowered at some point, which is what has happened for Japanese imports.) So the tariffs are mostly being paid by the importing company (typically the camera maker's subsidiary). Whether they pass them directly on to customers is the question.
To date, that's been a half-and-half answer. The camera companies are afraid of losing customers, so they've moderated their price changes and rolled them out over time. The Nikon Z5, for example, was on sale at US$1000, and is now available at US$1100, a 10% increase. But the list price is still the same US$1400 (and I believe it's going to go up US$50 on September 1st).
While that 10% increase at first looks the same as the originally announced 10% Thailand tariff (now 20% under TTT), another factor comes into play: the tariff is assessed on the distribution price, not the list price. With cameras, that price can be half the sale price, so NikonUSA could be paying 10% on US$550 (US$55) while you're paying 10% more on US$1000 (US$100). Thing is, the actual tariffs have bounced around so much, so fast, that I'd not want to be the one trying to keep track of everything. Nikon clearly raised the prices of everything that's coming in from China last month, but there's no indication that the China deal is a "final deal" (if there is any such thing in the Trump world; he has a reputation for trying to renegotiate after a contract is signed or a job finished).
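To make that distribution-price-versus-street-price arithmetic concrete, here's a small sketch using the Z5 numbers above. The "distribution price is half of street" assumption and the 10% rate come straight from this paragraph; they're illustrative, not actual Nikon or customs figures.

```python
# Illustrative tariff arithmetic using the Z5 example above.
# The distribution (wholesale) price and the rate are assumptions for
# illustration only; actual Nikon pricing and tariff rates vary.

def tariff_paid_by_importer(distribution_price: float, tariff_rate: float) -> float:
    """Tariff is assessed on the price at import, not the retail price."""
    return distribution_price * tariff_rate

def extra_paid_by_customer(old_street_price: float, new_street_price: float) -> float:
    """What the buyer actually sees is the change in street price."""
    return new_street_price - old_street_price

list_price = 1400.00           # US list price (unchanged so far)
old_street = 1000.00           # previous sale price
new_street = 1100.00           # current sale price
distribution = old_street / 2  # assume distribution price is ~half of street
rate = 0.10                    # the originally announced 10% Thailand tariff

print(tariff_paid_by_importer(distribution, rate))    # -> about US$55 paid at the port
print(extra_paid_by_customer(old_street, new_street)) # -> US$100 paid by the buyer
```

The gap between those two numbers is why the consumer-facing increase can be larger than what the importer actually hands over at the port.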
Mainstream media has been wondering why we haven't seen inflation budge much with all these new costs. Don't worry, it will. Soon. It didn't budge in Q2 2025 when the tariffs started because they were initially absorbed as losses at most companies. But with most making price adjustments in Q3 2025, these will now show up in later inflation figures.
If you look at the port import volumes, there was a significant uptick in late 2024 and early 2025, which would seem to indicate inventory buildups, particularly of goods from China. This was followed by a huge dip in shipping when the original tariff amounts were announced, but bounced back temporarily to historically high levels again when the first TICK happened. That said, the current forward projection for the Los Angeles port—which is where most of our photo gear comes through—is for a double-digit drop starting in August through the holiday season (source: National Retail Federation).
Putting on my economist's hat—my PhD work was in New Technology Economics and Management—pretty much all the models I have available for forecasting say the same thing: prices on anything imported are going to go up and likely stay up. How much is the question no one can answer for sure. Macroeconomic theory says something has to give, but so many variables are involved it's easy to get a detail wrong in a prediction. For example, three distinct possibilities for cameras/lenses exist: (1) volume goes down due to higher prices; (2) volume stays relatively the same for cameras but other products bought with disposable income have to go down (e.g. fewer lenses and accessories bought); or (3) camera companies lower prices (in other words, absorb more of the tariff) in order to keep volume intact. But note in #3, that's lowering prices from a higher price point (post tariff) to start with.
Going back to that Z5: will it hit US$1000 again during the holidays? Maybe (assuming no new tariff changes come along). That would require Nikon to discount it US$450, which is on the high side of what they've done before. I believe NikonUSA was ready to price it at US$900 for Christmas 2025 as it was, given that it's an older camera that has paid back its R&D costs manyfold. Still, the tariffs now make it difficult for the <US$1000 full frame camera to exist.
The bigger problem is that what I wrote about above is happening (slowly) across all imported products in the US, from food to parts for making US autos and homes. The cumulative impact of the implied inflation from that is what economists are worried about. And given the size of the US economy, if it starts to tank (e.g. recession), that will impact the rest of the world, as well. As I've written before, this is a play we've seen before, and it doesn't have an uplifting ending. Well, prices get uplifted, but that's not what I meant ;~).
Progress Update
Yes, I've been quiet with my sites recently. As I noted in the Spring, I'm working on revisiting site design and information. It continues to be a work in progress, as the tool I'm using isn't really ready for prime time production yet (Real Soon Now).
However, along the way in rethinking my Web presence and diddling with prototype designs, new concepts, and more, I discovered something: The Internet as you knew it is dead.
You may have heard the term SEO before. That stands for Search Engine Optimization, something pretty much every site owner and maintainer has been dealing with for decades now.
SEO is really a euphemism for "get Google's attention." With Google holding 90% of the search market these days (and a majority pretty much since its inception), whatever Google wants Web sites to do has to be paid attention to. At the onset, Google used "experts" to tag sites that should be prioritized in search results, but humans are expensive and Google has lots of computers, so that method of prioritizing sites quickly gave way to algorithms.
Worse still, as Google's revenue became more reliant upon selling ads, the old search mechanisms started breaking in favor of what made Google more money. Try this as an illustration: type "Kodak Portra" into a Google search bar. Did that bring up the company that actually makes that film, or better yet point you to the data sheet for Portra? Nope. Instead, today I get this stack of things (in this order):
- five sponsored ads
- Amazon!
- An AI Q&A response
- A Wikipedia page
- An fstoppers.com review (good on you and your SEO, fstoppers)
- eight "popular products" (more ads)
- more of the above
Nowhere on this first page of search results is the producer of the product, who might actually have the best answer to your question(s). Of course, if all you wanted to do is buy some Portra, well, your results were sold to the highest bidders.
This is no longer useful search, and it's getting worse. Even DuckDuckGo has a similar problem with the search term I just suggested, though they at least claim that they protect your private information. In April, the US District Court held that Google violated antitrust law, specifically because the Google search methodology has been abused. What will happen from that decision is still unknown, but I don't think it makes a lot of difference.
Follow the money. Always follow the money. The reason why Google wants you to use an Android device, the Chrome browser, and their Search engine is simple: that gives them full control over you, and they can track and promote to their heart's content. But you might have noticed that third thing in the stack of things I got from a Google search: an AI response. And this leads me to my original bold-faced assertion, above: in the coming year or two, search as we know it will disappear and be replaced by AI interaction.
This introduces all kinds of interesting topics, including where did AI get its information from? My site currently blocks some of the AI engines from scraping it, but that quickly became a futile game, as the only one that seems to respect .htaccess file restrictions is ChatGPT. I blocked another AI engine—you might guess which one—only to find that it simply resorted to accessing the site from a different server.
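For readers curious what "blocking" even means here: the general idea is refusing requests whose User-Agent matches a known crawler. The sketch below is my own Python illustration of that idea, not the site's actual .htaccess configuration; the bot names are just examples, and as the paragraph notes, a crawler that changes its user agent or its server walks right past this kind of check.

```python
# Minimal WSGI-style sketch of user-agent based blocking (the general idea
# behind the .htaccess restrictions mentioned above). Bot names are examples
# only; a scraper can trivially change its user agent or switch servers,
# which is exactly why this is a losing game.

BLOCKED_AGENTS = ("GPTBot", "CCBot", "SomeOtherBot")  # illustrative list

def app(environ, start_response):
    user_agent = environ.get("HTTP_USER_AGENT", "")
    if any(bot.lower() in user_agent.lower() for bot in BLOCKED_AGENTS):
        # Refuse requests that identify themselves as a blocked crawler.
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"Access denied."]
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<html><body>Site content</body></html>"]
```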
I'm writing the above because what's happening is going to broadly impact your use of the Internet soon, if it hasn't already, and what you're reading right now is an Internet site. Search no longer points to "reliable data sources." AI engines are pattern matchers, not knowledge bases, so they hallucinate answers that are incorrect, misleading, or use poor language. I recently responded to someone's "is this AI summary correct" query with a pretty scathing breakdown of all the things that were wrong with what the AI had written: sloppy wording, loose language, incorrect assertions, inconsistent information, and more. Remember, most so-called AI engines are really just pattern matchers. They use a huge collection of previously posted information to form their response, and create that response in similar patterns. If someone with a much-scraped Web site once wrote "diffraction is ignorable," then guess what answer you might get in the future?
I've tried—to the best of my ability, as I'm the only one populating my sites, books, articles, and presentations—to provide useful, detailed, accurate, and helpful information, and I've been doing that on the Internet for over 30 years (yes, 30+). My project this summer was to try to figure out how to do that better, and redesign my sites for the future Internet I can foresee.
But Google, Meta, Microsoft and others want a future whereby they scrape all information from others, then are the sole ones presenting it. That is dangerous in more ways than you might first imagine. You might have noticed more and more of the best information providers moving behind paywalls. I'm bemused to find that there's an Ayn Rand irony happening here. One of her assertions was that uncompromising free-market capitalists were fighting against so-called "second-handers" who attempt to live through others. This became a primary theme in her novel Atlas Shrugged, where the creators, inventors, and scientists all retreated from the second-handers. Hmm, Google has become quite the second-hander, hasn't it?
Why did I write "irony" in the previous paragraph? Because the current administration and large corporations may say they're following or approve of Rand's ideas and are free market believers, but they don't seem to realize that they've become the ones Rand was railing against. While I don't really agree with Rand's ideas—she oversimplifies and ignores a great deal—it appears I'm now talking much like her heroes do, thus another irony ;~).
As I've noted before, I'm at a point in my life where I don't need to do this anymore. However, I enjoy what essentially is a new form of teaching.
All the above has got me rethinking and recalibrating, though. I'm not interested in helping Google serve you while they extract a constant toll (disclosure: I own some Google stock, both directly and in ETFs). Which may mean that I need to put my best and most timely information behind a paywall. What I'm now exploring in my summer Web redesign are ways of doing that while still leaving exposed a large amount of useful, detailed, accurate, and helpful information. There are plenty of others who've shown that this can be done. For instance, MacMost, a site you Mac users should be aware of.
So if you got this far, don't expect major site refreshes as I emerge from my summer pontifications. I'll continue to do what I've been doing for a while before eventually deploying byThom 4.0. And I won't deploy at all unless I can guarantee that what I'm going to do is better than what has come before.
___________________
Is there a photographic implication in the above? Sure is. At the simplest level: you should reinvent and experiment rather than blatantly copy. I've written before that photography tends to be faddish, with some in-the-moment trend becoming how everyone composes and processes until that eventually becomes blasé and the drum majors up front start down a different path. But if you want to really stand out, you need to explore your own path and ignore what everyone else is doing.
Is That All There Is? (Image Sensors, That Is)
Sensor evolution isn't stuck; it's been diverted.
Could we have 100mp full frame cameras today? Sure. But except for those of us obsessed with information in data capture, what would we be using such a camera for? Diffraction and other attributes start to raise issues you can't ignore. 100mp implies a 38" print, and exactly who's making such things these days? A 96mp pixel shift (e.g. from a 24mp sensor) deBayers the data, reduces aliasing, and increases dynamic range, which would be bigger benefits for most people.
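To put rough numbers behind that 38-inch figure, here's a quick calculation. The 3:2 aspect ratio and the 320 ppi print resolution are my assumptions for illustration; at 300 ppi the same math gives roughly 41 inches on the long side.

```python
# Rough print-size arithmetic behind the "100mp implies a 38-inch print" claim.
# Assumptions (mine): a 3:2 sensor and a 320 ppi print resolution.
import math

def sensor_dimensions(megapixels: float, aspect: float = 3 / 2) -> tuple[int, int]:
    """Return (width_px, height_px) for a given pixel count and aspect ratio."""
    height = math.sqrt(megapixels * 1e6 / aspect)
    return round(height * aspect), round(height)

def print_size_inches(width_px: int, height_px: int, ppi: int = 320) -> tuple[float, float]:
    """Print dimensions in inches at a given pixels-per-inch."""
    return width_px / ppi, height_px / ppi

w, h = sensor_dimensions(100)                     # ~12247 x 8165 pixels
print(print_size_inches(w, h))                    # ~(38.3, 25.5) inches at 320 ppi
print(print_size_inches(*sensor_dimensions(45)))  # ~(25.7, 17.1) inches for 45mp
```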
Could we have more dynamic range today? Not really. The current sensors are extremely good at rendering the randomness of photons without adding any significant electronic noise. Any dynamic range gain would likely only come through getting rid of the Bayer filtration, which is an area that is being constantly explored, but because of likely costs, we'll probably see it in phones first. Even then, it might be only a stop or so.
So what is actually happening with image sensors?
Simple: they're getting faster at what they do. In particular, that comes in two basic areas: (1) analog to digital conversion; and (2) data offloading speed (both internally on the chip as well as externally from the chip). Speed has given us better focusing, higher frame rates, and blackout free viewfinders, to name three major improvements. Most of us would rather have those three things than more than 45mp or even another stop of dynamic range.
The next step is likely intelligence on sensor. We've already seen benefits from stacking electronics behind the image sensor, but so far that's mostly a one-way process aimed at increasing data speed. Nikon has a patent (and prototype sensor) that demonstrates what can happen when you apply two-way processing in the stack. That example is mostly centered on treating exposure differently in small blocks of photosites (16x16), which is another way of getting to a dynamic range improvement. But I can imagine other two-way intelligence that would impact focus systems, as well. Heck, I remember how we did white balance on the original color QuickCam, and I could see that being applied, as well.
Speed and intelligence in the stack leads to all kinds of camera improvements. Do you really need more pixels? Do you really need to more accurately record the randomness of photons? I think not, and from what I can tell, neither do the camera engineering groups designing tomorrow's cameras. Oh, don't worry, the marketing departments think the numbers game works to sell cameras, so we'll see some pixel count gains and small dynamic range boosts, because those will produce better "numbers" than yesterday's cameras. But as we've already seen with Sony's 61mp full frame image sensor, the benefits are minimal enough to be ignored by most. Again, for landscape work, I'd rather have that 24mp pixel-shifting camera than a Sony 61mp camera.
The Great Manipulator
Here’s something I don’t understand the moral logic of: a rumor site that posts watermarked images clearly originating from another rumor site, who themselves watermark the posted images they obtained from elsewhere. It appears they expect no one to scrape and republish their scraped images while they don’t mind scraping and republishing from others. The best I can come up with is “see, I had it first.”
One of the things I learned in business school was not at all directly about business. It’s that everyone has moral positions that are often either (a) not thought through; or (b) based upon an unchallengeable first premise. You can’t talk about ethical business practices until you wade through those two things, and they turned out to be immense walls that almost everyone in the class had built for themselves and was hiding behind.
Now, you might not think this has anything really to do with photography, but it does. One of the challenges of the day is to accurately describe what exactly is a photograph and what is completely an artist’s creation. This problem raised its head early, with the first version of Photoshop and other similar tools. Indeed, there became a clear separation between the capture process (film), which was tough to fake, and the output process (digital scan and processing), which wasn’t.
Probably the most “real” photos ever were contact sheets. You took a photo, you developed the film, you directly duplicated that to paper (you might have to reverse values along the way, but that was a simple direct reversal). A contact sheet, particularly one annotated by the photographer themselves, is about as close as it comes to “I saw this, pressed the button, and this is what was captured.”
You probably saw right through that last sentence, though. The photographer could have manipulated the scene, altered the exposure and tonal values (think filters), or done something in the processing that wasn’t the norm (think cross processing). Still, a contact sheet with a clear forensic trail is about as close as I’ll ever get to “this happened, this is what it looked like, and here’s the proof.”
Today we’re in an age where anything is possible, and often happens.
Photoshop sort of pioneered the whole image manipulation idea, and with the Generative AI additions Adobe has recently made, trying to tie a pixel to something in the real world is now impossible. Moreover, startling things are happening.
Recently I was processing an older image a bit differently, including an area I had cropped out before because there wasn’t a way to “fix” it reliably (part of a vehicle was out of focus and blocking part of the frame). This time I used the whole frame and asked Photoshop to render something in the area I had cropped out before. What surprised me is that what Photoshop put in that area was nearly exactly what the vehicle was blocking. Mopane trees often have distinct bark distortions, usually caused by elephants. That particular tree had what I thought was a unique elephant mark. After all, I could see that mark clearly in some of my other images taken that day.
However, Photoshop’s Generative AI managed to put a near duplicate of the tree into position for me, complete with what seemed to be the same mark! I actually spent several minutes comparing the fake tree trunk with the real one, and couldn’t find any clear difference. What should we make of that?
Long ago my mentor Galen Rowell and I talked at length about photographer ethics and morality. Every photographer has them (whether they know it or not). Most never think about it or even realize what their practices actually are. Some, like those in my business class, live behind walls they’ve constructed that on close examination are much like the Emperor’s New Clothes.
NANPA (North American Nature Photography Association), of which I was once a charter member, developed a Principles of Ethical Field Practices back in 1996. They’ve since expanded what was once a small card you could carry into a full book. Meanwhile, I was running Backpacker, and we practiced things like Leave No Trace. Today we also have a host of other similar well-meaning organizations and statements of practices that try to dot every i and cross every t. Yeah, that’s not what we need, either, though I support those efforts for the most part.
What you need is a small set of clear, reasoned, and practiced first premises. You then examine all your decisions based upon that.
I’ve written about this before. I once was flabbergasted to see a well-known professional photographer pull up a blooming, living plant at a World Heritage site, move it to where he wanted it, take his photo, and leave the dying plant where he had placed it. Apparently his morality was “whatever makes my image better.”
While I’ve been known to move rocks for an image, I also move them back. In fact, upon lifting a rock, I make sure that it doesn’t have a clear, active ecosystem under it before moving it, otherwise I put it right back down. Yes, I want my photo to look better, and that rock didn’t help (or would help more over there), but I also didn’t want to disturb the natural sense of things any more than I would by walking through that environment.
You can probably guess where I’m going with this. I get asked a lot these days about whether or not people should use any of the AI tools we now have available to us. My answer is yes, as long as you have an ethical stance and can treat AI like I do rocks.
For example, a common situation that happens in wildlife photography is that if you expose for the underside of a flying bird (which is usually in shadow), you’ll probably blow out the sky. Well, that’s not what we see in that situation (bird detail, no sky detail). Our eye/brain constructs the scene so we remember the beautiful bird flying against the (often interesting) sky. Well, that’s the photo I want to capture, too.
How do you do that within my “move/replace the rock” ethic? Simple: 1. Expose for the bird and take as many BIF photos as you’d like. 2. Go back and expose for the sky (with no bird in it) on the same track. 3. In Photoshop, do a Sky Replacement using the well-exposed sky with the well-exposed bird layer. That’s what I would have seen and remembered, so why does pressing a camera button mean I have to give up one aspect of what I saw?
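For those who like to see the underlying idea spelled out in code rather than in Photoshop: the sketch below simply replaces blown-out sky pixels in the bird frame with pixels from the separately exposed sky frame. This is my own illustration, not Photoshop’s Sky Replacement; the file names and threshold are hypothetical, and a real composite needs alignment, mask refinement, and edge treatment that this ignores.

```python
# Crude illustration of step 3's idea: pixels that are blown out in the
# bird frame are taken from the separately exposed sky frame instead.
# Frames are assumed to be the same size and already aligned.
import numpy as np
from PIL import Image

bird = np.asarray(Image.open("bird_exposed_for_shadow.jpg"), dtype=np.float32)
sky = np.asarray(Image.open("sky_exposed_for_highlights.jpg"), dtype=np.float32)

# Treat near-white pixels in the bird frame as blown sky (threshold assumed).
blown = (bird.mean(axis=2) > 250)[..., np.newaxis]

# Take sky pixels where the bird frame is blown, bird pixels everywhere else.
composite = np.where(blown, sky, bird).astype(np.uint8)
Image.fromarray(composite).save("composite.jpg")
```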
Of course, we now have smartphones that sort of do that on the fly. Adobe’s Project Indigo camera app essentially takes multiple exposures with my iPhone and processes them into a finished image much the same way (actually better, as it also stacks and aligns the matching detail to reduce noise).
What’s not cool is not understanding what your morals are and what ethical practices they enable (or destroy). Blindly using AI without thinking about what’s being done, how, and for what reason is lazy-ass ethics, in my book. Whatever you do needs to be defensible, and defensible across repeated actions without having to add “but if X, then Y” constructs.
I want you to see the world I experience in my photos. That’s my job as a photographer. But my other job as a photographer is to make sure I don’t start sneaking away from that and start inventing things that I didn’t experience. At the point where I cross that line I’m no longer making photos, I’m creating art. In my Backpacker days, we labeled that “photo illustration.”
The thing you can never do with a camera is “capture reality.” You make a lot of decisions without realizing it. For instance, which lens is on the camera, which direction the camera is pointed, where you’ve focused, and when you press the shutter release (or when the camera does that for you). If you’re creating JPEGs, you’ve picked color, contrast, saturation, and sharpening aspects of the final pixels. Thing is, the photographer standing right next to you is making different decisions. So is his photo unreal and yours real? What’s just out of frame can be as important to the meaning of the photo as what’s in the frame, after all. Using a longer lens is a form of editing out things if you think about it.
My goal in taking a photo is “this is what I want you to see.” Generally, that’s what I saw and felt when I was there, all done without changing anything from the way it is or could be (those temporarily moved rocks ;~). Moreover, when I process an image, I’m going to process it in a way where I control how your eye takes it in so that you’re taking in the same thing I took in, and not wandering off paying more attention to the unimportant things in my photo.
Painters start with a blank canvas and add. Photographers start with a full canvas and subtract. We both seek the same thing: visual and emotional impact from what’s on the canvas.
______________
Bonus: The interesting thing is that I’m more manipulative of the things in my frame when I photograph Events and Weddings. In fact, I’m darned manipulative, because most human subjects go into what I call the “frozen stick” pose the minute they see a camera pointed at them. A few go into a “crazy pose” they’ve developed or learned. Neither looks natural, so I have to intervene. The great thing about my current cameras is that they don’t make a sound when I take a photo and can hold focus when I’m not looking through the viewfinder, so my subjects often are just interacting with me and not noticing that I’m taking photos. Great. That’s what I want: natural interaction. Or simplified: natural.
With sports, I can’t manipulate others, only me. I have to think like coaches and players and know the plays. Anything less and I get random images that may or may not capture the right moment. But if I have a strong feeling the next play is coming to a specific spot, guess where I am? I want my sports photos to feel like you’re right there at the point of action, so that’s where I need to get. I manipulate me.
But note the word I used: manipulate. When you take a photo, you’re making seen and unseen manipulations. The real trick to becoming a great photographer is taking control of those manipulations. Just make sure you have an ethical stance of what you can/should/will manipulate and why.
Out of Okavango
Yes, I'm back.
Sort of.
As noted last month, I'm taking time between now and about Labor Day to get a number of behind the scenes projects done. But my total quiet for the last three weeks was because I was deep in the Okavango without Internet service. You'll see more from that trip later this year; let's just say it was seriously productive (45 different lions).
I'm back to covering big news, but don't expect much in the way of new articles for the next two months. If I have time, I'll pop one of the hundreds of new articles I've been working on onto the site, but for the most part, I'll be on low volume this summer.
First byThom Minimum, Later byThom Max
As I noted earlier this year, I'm taking a couple of long breaks from posting this year so that I can work on new products you'll want, including site rethinks and redesigns. You just saw one of my planned breaks as I worked on updating the Mastering Nikon JPEGs book and created the new Mastering Nikon Customization book. From the response so far it seems I was correct in anticipating demand for more information in these areas. With Nikon updating firmware and a Z5II book still to complete, there's plenty to do just on the book side of the business.
Since I haven't perfected cloning yet, when I deep end on these projects, I don't typically have the bandwidth to keep all the sites updated with new material. Given that I'm in the middle of site re-designs, I also don't want to add much new material when I'll just have to redesign it. Moreover, the less interrupted I am, the faster these things get done.
I'm about to enter another planned quiet period. Look for new site material in late July, after which I'll take another short break.
My goal is to come out of 2025 with sites, books, and more that are better than ever, and far exceed what you've seen in the past. It'll just take a little alone time for me to get ahead of you again.
How is it Better?
I think we've officially come to the point where every new product announcement needs the first paragraph to clearly enunciate how it is better.
With everyone now iterating full lineups (and Chinese lens makers trying to elbow in), each new announcement from a camera or lens maker tends to be something that existed before (or existed in someone else's lineup and is now being copied). Coupled with a slower development and release pace, a truly new product tends to happen fairly rarely. In terms of cameras this year, that meant only three of sixteen were something noticeably new (Canon R50V, Fujifilm X half, and Fujifilm GFX100RF, and you might argue that the first is just a different body for an existing camera and the last is mostly an existing camera with a bigger sensor).
Lenses haven't really fared any better than cameras. Of the 25 announced in the first half of this year, yes, I see some "stretching" being done—Sigma 300-600mm and 400-800mm and all the f/1.2 optics—but nothing has struck me as "wow, that's new."
For everyone in the camera industry, including you, the customer, your primary thinking about any product announcement now really has to consist of the headline question: how is it better?
The just announced OM-5 II is a good example to consider. The shape of the hand grip changed, you finally have the menu system that OMDS has been using since the OM-1 several years ago, a button has changed function, plus two new minor video features and two minor still features. Oh, and there's now a brown panda version. That's not enough "better" to get me to pay US$300 more for it (today's pricing; the OM-5 is currently on sale).
Compare that to the Fujifilm X-E5 or Nikon Z5II. Both those cameras got considerable feature/function upgrades from their previous models. If you were considering the previous model, you should be more impressed with the new one, as you can clearly tick off a substantial set of "better" items. On the other hand, the Nikon Z5II sells for US$700 more, so it had better be better.
But I mentioned "everyone" in the camera industry, so let's take a peek at how that looks for a few others:
- Camera maker — "Better" for them comes in one of two ways (or both): cheaper to make, or will sell more. I don't get much of that from the OM-5 II, but do from the Fujifilm X-E5 (the Nikon Z5II is tending to steal customers from the Z6III).
- Camera dealer — It's only "better" if it sells more. Dealers live off of inventory turns, and they only get that through sales volume. Dealers complain when a new product doesn't tweak the turn bar up, and strongly complain when the turns instead become boxes sitting on shelves.
- Rumors site — The more surprises—particularly ones that can be revealed in the run-up, something that Fujifilm's supported leaks do well—the "better," but they need a series of those, not just one big surprise at the end. The slower development schedules and the meh releases are hurting these sites, as there's not a lot of reason to visit them if they're not constantly giving you "sneak" peeks.
- News/Forum sites — When your news is meh, nobody pays much attention, and when your fora are discussing how meh a new product is, the cathartic post effect wears off very quickly. You can almost figure out the "better" product announcements by how many "should I update" posts there are, which these days are already well down from their peaks.
- Professional photographer — Starting to not care. Realistically, the top-tier products from every camera maker are more than sufficient to carry them through the next few years, and nothing in the lower-end products is making amateurs snipe any more seriously at our tails. Lenses, on the other hand, do get our attention if they fill a hole we either knew we had or didn't know we had. But both those things require more than a press release that says "we added a new lens."
- Amateur photographer — I'd argue that even the entry bodies and lenses from all the camera companies are good enough to take pretty incredible photos. Plus, with apps like Adobe's Project Indigo now showing that smartphones can do even more than most thought, as long as you're not going larger than, oh, 11x14" in output I'd say that you don't need more, and even "better" cameras and lenses aren't going to give you any benefit unless you actually learn to use them for that benefit.
We're sort of in one of those lulls the auto industry kept getting into: "yes, we have new cars this year, but some of those are just new grills and a couple of other visible things that we think make it look snazzy."
Are You Better Than Average?
Just a reminder: average means you're right in the middle. About half the folk are better than you, and about half are worse than you.
I was struck by a few of the results when I came upon the thanaverage Web site. In particular:
- 71% of participants think they are more perceptive than average.
- 58% of participants think they are more creative than average.
Perception and creativity are two crucial aspects to taking good photos.
You really have to perceive something in order to take a photo at all. Anyone standing in a certain area of Paris can perceive that there's a very tall metal structure jutting up in the air. Most will take a photo of it (the Eiffel Tower, if you're one of that 21% who think they're not smarter than average). Creativity after the perception is where great photos lie, though.
Galen Rowell used to often ask me what I was feeling as we wandered landscapes together. The fact that we were using our feet to explore meant that we were almost certain to see (perceive) things that others wouldn't, as we were well off the beaten path most of the time. Even when we were exploring a beaten path, we weren't on the path itself all that much. "Perception" wasn't our problem.
Meanwhile, Galen's "feeling" question spoke to the creative process. If I felt something I was perceiving, the critical question was "how do I get those that view my resulting image to have that same feeling?" Often those feelings were clearly emotional: isolation, smallness, being surrounded. When was the last time your photography instructor told you to capture a feeling or emotion like that?
I often see students caught up in the "I perceive a thing" construct. They then proceed to point their camera at said thing, press a button, and voila, they're done. When they post those photos, I can pretty much guarantee that the viewer will quickly flick to the next one (or flick left!). Sitting in airports and planes I get a lot of time to watch people doom scrolling (yes, I snoop; I'm a curious guy). Flick, flick, flick, pause, flick, flick, flick.
Wait, what made them pause? Typically something they've never seen before (or perhaps not in this way), but more often than not I see they've stopped on an image that has a real story in it. It's not a photo of a thing, it's a photo containing a story in which the thing is a key player.
Perhaps it's all the filmmaking training I had early on that makes me go directly to story: films (and TV shows, and streamed shows) are pretty boring without a good story, right? Moreover, the director of photography—why isn't it director of filmmaking?—works hard with the director and actors to emphasize and embellish the story. For instance, they're more likely to light a drama in noir (dramatic lighting) than just turning on a bunch of big lighting panels.
Let me repeat something I've written before. There's a progression in photographic thinking: noun, adjective noun, adjective noun verb/adverb, complete sentence. Noun isn't much of a photo (elephant). Adjective noun is better (large elephant). Adjective noun verb starts to get interesting (large elephant trumpeting). Complete sentence or story is what I really want, though (large elephant trumpeting at a predator that's making it uneasy). Curiously, the two extremes (noun and sentence/story) are the easiest to photograph: noun just means the thing needs to be in frame, while story means you have to show everything in the story. I'd argue that if the story isn't there, the photograph isn't there.
While I'd like you to come up with your own story, if you're just starting out on this journey into better photography, go ahead and cop a placeholder summary from Hollywood: adventure, drama, comedy, buddy, fantasy. When that elephant comes along, are you making an adventure, drama, comedy, buddy, or fantasy photo story?
The implication of the thanaverage results is that you're a perceptive, creative individual (okay, you're really just above average). My question to you is how is that showing up in your photos? Is it even showing up in your photos? What are you perceiving that others aren't, and how are you creatively tackling that to create a photo that's unique? If there's a photographer you admire, how are these two things showing up in their work?
Galen would travel to a new place with a bunch of pre-conceptualized stories based upon extensive research (fantasy photos that needed to be realized), but he also was very open to his perceptions when he was there, which always opened up entirely new stories. His best images were some of the former, some of the latter. None were simply a noun.
If you made it this far, I'm going to suggest something outlandish, but instructive. Go to amazon.com. Search for "Dick and Jane level 1." There's a good chance, if you're older as I am, that you encountered these books as a kid. Buy one. Take a close look at the illustrations. For instance, the book I'm looking at right now has a great potential photographic story from the get-go in "Away we go." William S. Gray would have been a great photographer, and the vocabulary he was using was as basic as it gets, plus his sentences are tight and succinct. You don't need a huge vocabulary, incredible creativity, or psychopathic perception to make an image that does great work. But you have to tell a story, even a simple one.
You're above average. Now show it through your photography.
Looking for older News and Opinion stories? Click here.