This page of the site contains the latest 10 articles to appear on bythom, followed by links to the archives.
Taking a Break
A major family emergency has arisen that requires my attention, plus I’ve got some client obligations this month, so I’ve decided to take May off from posting. I’ll be back in early June with, I’m sure, a lot of thoughts and new information. Maybe even with the redesigned site, though that will depend upon how distracted I get by the family situation.
The good news is that things are moving slowly in the photo industry right now, so I doubt I’ll miss any major announcements.
As many of you know, I take periodic one-month breaks from the Internet every year, so this isn’t a new thing, and I almost always come back energized. So enjoy the spring flowers, follow my advice on zsystemuser about getting to know your lenses better, and we’ll both have more to talk about when I get back.
April 20 to 26 News/Commentary
With NAB behind us, we move into the Father’s Day/Graduation buying period still with virtually nothing new to add to the gift list. We’re definitely in a drought state when it comes to anything other than Chinese lenses.
Here’s another of my economic-side warnings: the supply chain just took another hit with the Iran conflict. That might not be evident to you yet, but it is to all my buddies still building things. That's because they know the sequence that starts tripping the dominoes, and they’ve seen some dominoes already falling. You need helium for semiconductor production, and you need oil for plastics production. Both are starting to be hoarded and rationed in SE Asia in anticipation of not being able to replenish the existing supplies soon. I’m starting to hear backchatter that even Apple is now considering postponing product launches. The worst thing in the tech world is to announce a Great New Widget and not be able to deliver it to meet demand. Doing that gives competitors time to respond, and, of course, you’re not getting gonzo dollars from the buzz you create.
The traditional camera makers, with their overall volume of about 8 million units a year, are going to have a difficult time getting the volume of parts, particularly new parts, needed to keep up any semblance of pace in product launches. Even a company such as GoPro sells more cameras these days than Nikon, OMDS, Panasonic, and maybe even Sony do. Volume gives you purchasing power in tight situations such as the one we’re about to encounter.
My advice continues to be about the same as it has been for quite some time now: if you’re waiting for something, I hope you like waiting. If you see something at a reasonable price that you want, best pick it up while it’s available. We’ve gone from GAS (Gear Acquisition Syndrome) to BRAKE (Be Reasonable About Keeping Extending).
——————————
News
DxO Nik Collection 9
DxO continues to work on expanding and extending the Nik Collection. Version 9 now adds AI masking (both depth and general/object), a new color grading tool with a unique single wheel controller, blending modes, and three new specific tools: chromatic shift, glass effect, and halation. Other new features include hover preview for presets, an update to the local adjustments palette, the ability to copy and paste local adjustments, the ability to move masks between the plug-ins, color masks, and simplified export and return-to-Photoshop options.
But wait, there’s more. Photoshop masks can be pulled into the plug-in, effect layers are already live in Photoshop while Nik is still running, the U-point interface has been expanded to include elliptical and polygonal control points, and much more.
DxO is now claiming that plug-ins open 30% faster and that all code in the Nik Collection is now authored by DxO (i.e. no legacy Nik or Google code).
The above is what we were all hoping would happen with DxO’s ownership of the long-lived and well-liked Nik Effects. Some product acquisitions die on the vine, as Nik did when Google bought the company. Others eventually spread new, better wings, and that seems to be what’s happening with the Nik Collection. Nik has remained one of the few tools I leave installed in my photo processing suite, mostly because there are things you can do with it that are very tough to do without the plug-in.
Nik Collection 9 is available now and is US$180 for new users, as little as US$100 for those upgrading (depends upon which version you’re upgrading from).
——————————
Commentary
Seven Types of Lenses?
I mentioned it in passing in the NAB Show recap, but it requires more comment: Viltrox now claims seven lines of lenses (Air, EPIC, EVO, LAB, Pro, Raze, and unnamed). Now, not all of those types are always available at every focal length, but enough are that we’re starting to see at least three variants in a mount (typically some combination of Air, EVO, LAB, and Pro).
One wonders whether extending this strategy any further has a real payoff. While Canon and Nikon traditionally have had a small range of choices at each focal length, that’s typically topped out at three. For example, for a prime, f/1.4, f/1.8 or f/2, f/2.8, or some variation on that. With zooms the overlap tends to be fast aperture, mid-aperture, variable aperture. And even then, the “multiple choice” is limited to mostly the 24-200mm range, where you have buyers at low consumer, mid-consumer, and prosumer/pro levels.
Both Canon and Nikon are highly analytical when it comes to their sales data. They have a strong tendency towards “what sells.”
So I guess my first question for Viltrox is “are all these named variants really selling in enough quantity to justify themselves?” My followup question is “why this naming scheme? Are you sure that customers understand it well enough to home in on the right lens for them?”
The Product Line Marketing Manager in me keeps trying to build out a simple, understandable matrix, but Viltrox’s matrix just looks messy and sprawling. For lenses, I think the definitions typically should fit a three by three grid:
| Tier | Aperture | Emphasis |
|---|---|---|
| consumer | slow aperture | convenience |
| prosumer | moderate aperture | competence |
| pro | fast aperture | quality, extras |
Anything beyond this starts to become a complex marketing problem; too complex to maximize revenue with lower-cost goods (Viltrox’s lenses tend to be affordable).
As I’ve been handling and using various Viltrox lenses at the same focal length, I’ve found that there often is very little optical nuance to pull out for my eventual review. Viltrox seems to be keeping sharpness up across all the lines, with whatever differences that do show up via the optical design appearing in vignetting, distortion, focus breathing, and other parameters that don’t tend to be on most people’s decision-making tree.
I get Viltrox’s rapid and constant product launches. Sometimes you need to appear active to customers, and you learn things by doing a steady stream of products. But I’m not sure what Viltrox is learning. They appear to be treating lens production as a commodity business, but it is not: the volume is too low to ultimately be commodity driven. The Japanese are very good at realizing when they have to move up in value to protect the business long-term; the CIPA companies are selling far fewer products these days, for instance, but ultimately taking in about the same number of dollars. There’s a strong ceiling on what you can accomplish in the ILC (interchangeable lens camera) market, and no one has found a way to dramatically raise that ceiling.
The question I have about Viltrox is this: once they've established their ultimate market penetration with lenses, then what? I have to think that they’d expand to building a camera, but that’s not the only possibility. Moreover, a huge, full line of lenses with only one camera would be weird, so would we end up with a messy line of cameras, too?
In tech, you’re always running. You’re either running to catch up, you’re running with the pack, or you’re running ahead to the new goal you’ve found that the others haven’t yet. It feels to me like Viltrox ran to catch up, found themselves in the pack, and then just started dancing while they ran with the pack in order to call attention to themselves.
——————————
Wrapping Up
And in other news
▶︎ Another One Bites the Dust? The Coolpix P950 has apparently been discontinued in Japan. It’s difficult to interpret that, as the camera is still available in other markets. It’s possible this is just a prelude to a minor change (a la the P1000 to P1100) due to parts changes, which trigger recertification across markets.
▶︎ The Last Roadmap. Panasonic now seems to be the last camera maker providing a lens road map, and lo and behold, it has been updated to include a wide prime in the same line as the just-announced 40mm f/2, and a large-aperture telephoto zoom. That will make 22 lenses in the L-mount from Panasonic. My question to Panasonic remains: where are the L-mount professional video cameras?
NAB Show News
When you're on the press list at NAB Show, your email inbox piles up an impressive number of press releases and interview offers (currently over 100, so I’m going to stop counting). This year, by my math, about 70% of that first batch had the abbreviation AI somewhere in the headline or lede paragraph. PR fluff wording also abounds ("...turns post-production into a strategic lever...").
In alphabetical order, here are the new offerings that I believe have some implication for my readership. Note that I will update the following list if I find new things to report (updated or added items start with a red triangle).
▶︎ Atomos acquires Flanders Scientific. This represents an extension of Atomos into post production, as Flanders makes reference monitor systems. The goal here is providing known color monitoring from on-camera (capture) to monitoring (wireless review) to grading suites (post production). The Flanders name will remain as a sub-brand, but this also means that these high-end monitors will now get a world-wide distribution system.
▶︎ Blackmagic Design Resolves to Steal Lightroom Users. A bit surprisingly, Blackmagic Design's DaVinci Resolve 21 launches with a new still photography tool set, including organizing, rating, raw processing (Canon, Fuji, Nikon, and Sony), and even tethering options (Canon and Sony). I use the words “bit surprisingly" in the previous sentence because DaVinci Resolve was already the ultimate everything-but-the-kitchen-sink software program. Now Resolve has the kitchen sink, too. Since this was a beta announcement prior to NAB Show, there's still some lack of clarity as to exactly what will be in the free version of Resolve versus the paid version (US$300 one time fee) when it releases.
The big plus with these new still abilities is that they bring Resolve's extensive and Hollywood-tested color grading and node management (control over when something is applied in the processing chain) to still photos. The big minus is that, to take full advantage of that, you'll be learning an entirely new post processing skill set. Hollywood (plus Bollywood and all the other woods) will almost certainly embrace this, as it means that you can now match the still photos taken on set to the color, tonality, and processing of a final film using the same set of tools and workflow. Coupled with the products Blackmagic Design is making that enable cloud and group work, this brings more users into shared processing for pools of images and videos.
One final bit: DaVinci Resolve doesn't just run on Mac and Windows machines; it runs on Linux, as well. This new version will quickly become the most sophisticated still-photo processing available on Linux.
▶︎ Canon detecting photons. It’s not really an option for those of you reading this site—though some well-financed wildlife endeavors such as the BBC will certainly be interested—but Canon has introduced a new box-type camera, the MS-510. This 1” sensor camera needs only 0.0006 lux to capture a full-color image. To put that in perspective, that’s -7.4EV at ISO 100 (a nighttime landscape with almost no moon). It’s just a 3.2mp camera, though, and requires an external power supply. It will capture FullHD at up to 60fps, which is why I mentioned the BBC. At US$22,800 just for the camera—you’d still need one of Canon’s broadcast-style lenses in the B4 mount and a power source—it may very well be worth adding to the BBC crews’ gear for some not-seen-at-this-level-before footage in the deep, dark jungles of the world.
I’ve written for a long time that I believe that photon-detecting cameras—as opposed to the photon-accumulating ones we currently use—are coming, and Canon has been plumbing this technology for a while. The MS-510 is now starting to put it in very usable form for some niche markets. Canon uses SPAD (Single-Photon Avalanche Diode) sensors in their designs, and as you might guess, the real issue raised with photon detection is internal bandwidth within the sensor to acquire each photon and move that data to where it can be used. When Dr. Fossum at Dartmouth first described his own photon-detection invention, the Quanta Image Sensor, he envisioned hundreds of millions of individual detectors running at as much as 1000 times a second in order to build a structure of when and where each photon came from.
You might be surprised to find that single photon detection dates back to the 1960s. The problem has never been that you can’t detect a photon; it’s that if you want to deal with more than one, as we do in photography and video, and create a visible file, you quickly generate huge data sets. Canon has now shown that they can handle FullHD at 60fps, so we’re over the threshold at which photon detection becomes useful. The issue now is bringing costs, processing, and power requirements down.
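To make the bandwidth problem concrete, here's a hedged back-of-envelope calculation in Python. The detector counts, field rates, and bit depths are illustrative assumptions drawn from the paragraphs above, not published Canon or Fossum specifications:

```python
# Rough data-rate estimates for photon-counting sensors.
# All figures are illustrative assumptions, not manufacturer specs.

def raw_bits_per_second(detectors: int, fields_per_second: int,
                        bits_per_detector: int = 1) -> int:
    """Bits/second if every detector reports a value on every field readout."""
    return detectors * fields_per_second * bits_per_detector

# Fossum-style QIS scenario: ~300 million binary detectors read 1000x/second
qis = raw_bits_per_second(detectors=300_000_000, fields_per_second=1000)
print(f"QIS raw readout: {qis / 8 / 1e9:.1f} GB/s")  # prints 37.5 GB/s

# MS-510-class output: 3.2mp frames at 60fps, assuming 12 bits/pixel
ms = raw_bits_per_second(detectors=3_200_000, fields_per_second=60,
                         bits_per_detector=12)
print(f"3.2mp at 60fps: {ms / 8 / 1e6:.0f} MB/s")  # prints 288 MB/s
```

Even the smaller number approaches the sustained-write ceiling of fast memory cards, which suggests why costs, processing, and power remain the gating factors.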
▶︎ DJI Launches Osmo Pocket 4. The Pocket 3 is something I've been using extensively as I prepare to go MAX, so I'm intimately familiar with what it can do (and it can do a lot). The new version at first doesn't seem different, as it uses the same image sensor. However, the new model can produce 4K/240P, there's a new 10-bit D-Log, it has 107GB of internal storage, it gets a bit more battery life, and it adds two new buttons in landscape mode (2x/4x crop, and a customizable button). While these things seem like subtle changes, for some they will make big differences. Plus, there's been a price reduction with the new model. There’s bad news, though: at the moment, it’s unclear whether DJI will get permission to sell the Pocket 4 in the US.
Rumor has it that a Pocket 4 Pro is next, to be introduced in a month or so, and which features optical zoom via a second lens/sensor, much like smartphones do. The wide angle view is still derived from a 1” sensor, and a 3x lens alongside the wide angle one likely goes to a smaller sensor.
▶︎ Insta360 Is Going m4/3. Behind the scenes, Insta360 is showing off their new m4/3 camera, an EVF-less camera that reminds me a lot of some of Samsung’s old lower-end APS-C offerings. Once again we get the 20mp sensor. Other known features are an articulated Rear LCD, front and rear Command dials, and a reasonable hand grip. But are they really going to try to call it a Z1? I’d think Nikon would have immense trademark issues with that.
▶︎ Glyph Introduces CFexpress Cards. Glyph Technologies has long been known for its higher end storage drives. Now it’s entered the CFexpress market with 256GB, 512GB, 1TB, 2TB, and 4TB CFexpress Type B 4.0 cards. While these cards are all labeled with 3700MB/s read speeds, note that the sustained write speed is 550MB/s for the 256GB card, 1100MB/s for the 512GB, and 2100MB/s for the 1TB. None of the cards I saw are labeled with the CFA’s certified testing logo, but Glyph is saying they’re all compatible with the Z8/Z9 (though they recommend at least the 512GB version). The initial pricing is relatively low (US$240 for 256GB) given the recent NAND pricing hikes. I’ve used a number of Glyph products over the years, and most have proven to be highly reliable. The exception is an 18TB hard drive that just likes to unmount and remount from time to time on macOS Tahoe.
▶︎ GoPro's Next Generation. While it looks somewhat like previous models—the small blocky brand shape identification is retained—GoPro's new Mission 1 series of cameras is pretty much all new. Starting with the larger 4:3 (Type 1) 50mp image sensor and backed by the new GP3 processor (5nm process, with 2x the speed plus a neural core), the net result is shallower depth of field, better low light capability, faster frame rates, less heat buildup, enhanced stabilization, and better battery life. 32-bit float for audio recording from the four built-in mics is also supported. The Rear LCD is 14% larger, the buttons easier to find, the cameras can take stills in raw mode, and there's even a clip-on "make it a compact camera with grip" option. Three models are being launched:
- The Mission 1 is the basic camera with a max of 8K/30P (at 16:9). Maximum 4K speed is 120fps.
- The Mission 1 Pro provides 8K/60P, including Open Gate, and runs a max of 240fps in 4K.
- The Mission 1 ILS is the same as the Pro, except instead of the built-in lens, it features a (rather large looking for the box size) m4/3 lens mount. Sadly, there’s no autofocus support.
GoPro once again will launch with a ton of SKUs (e.g. Creator kit, filters, housings, wireless mic system, LED lighting, and more), but pricing on everything is up in the air until the product releases at the end of May or early June.
I believe it was with the launch of the Hero3 (GoPro is now on Hero13, and this new system would be 14) back in 2012 that I first wrote that GoPro needed to create an interchangeable lens mount version. The demand was there for a C-mount version, and has been ever since. I’m not sure why they’ve chosen m4/3 (probably because of consumer lens availability over C- or B4-mounts), but that’s too big a mount for the camera size, I believe. And without autofocus support, I’m not sure what the real gain was in adopting m4/3.
▶︎ Nikon Z Cinema Gets a Lens. It’s well known that Nikon will create a line of lenses specifically for the video side, called Nikkor Z Cinema. On Sunday Nikon launched a short teaser video that didn’t say anything (okay, we learned that they have Focus Lock and standard gearing, plus an A/M focus switch, and that there might be a total of nine lenses planned).
▶︎ Panasonic Gets a Lens. A compact 40mm f/2 L-mount lens that looks suspiciously like the Nikon 40mm f/2, but isn’t, as a couple of key specifications differ in the optical design. This is a lens that the Panasonic S9 has been looking for: small, light, competent.
▶︎ SanDisk’s New Cards. SanDisk announced new CFexpress 4.0 Type B and SD UHS-II cards. The new SanDisk Extreme Pro CFexpress cards are marked with 800 and 400 certified markings, and come in 128GB, 256GB, 400GB, 1TB, 2TB, and 4TB sizes. The SD Extreme Pro cards are either V60 or V90, and come in 128GB, 256GB, 512GB, 1TB, or 2TB sizes (the V60 version also comes in 64GB size).
▶︎ Viltrox Adds EVO Lenses. Viltrox added autofocus 35mm f/1.8 and 55mm f/1.8 EVO lenses in multiple mounts (including Z-mount), joining the 85mm f/2 EVO to form a core trio. These lenses are priced at US$395. These new lenses form Viltrox’s seventh series of lenses, and slot between the Pro and the Air series in both performance and size/weight. Let’s see: I think the primary Viltrox series now go LAB, Pro, EVO, and Air, in that declining order.
Reader Questions Answered
Let's mix it up a bit, since camera news ain't happening until NAB next week. This week I'll answer a few reader questions that have popped up recently.
"Does the 1/focal length rule still apply to shutter speed in the age of image stabilization?"
It wasn’t a “rule.” It was a rule of thumb. What one person could achieve in terms of a good image of a static subject was (and remains) different than what another person could achieve.
As to image stabilization: lens stabilization will achieve different results than sensor stabilization. Sensor stabilization has issues with telephoto lenses (certainly by 200mm), for instance, because it is moving the focal plane two-dimensionally, not rotating the image three-dimensionally at the optical center. Overall, however, any form of stabilization should let someone use a slower shutter speed on a static subject than without stabilization. The question is “how much slower?” And that depends upon you.
However, you’ll note that I keep writing (and emphasized) “static subject.” In my Nikon Field Guide I pointed out that subject motion requires very specific shutter speeds in order to capture edges accurately. For a person walking across the frame, that would be 1/125. Image stabilization doesn’t “improve” that requirement.
Distance to the subject and the subject speed are the key variables; a secondary variable would be motion direction vis-à-vis the camera. Something moving at 5mph across the frame at 12 feet requires 1/500 second while at 100 feet 1/60 second is probably fine. For subjects moving towards the camera those numbers probably drop two stops (1/125 and 1/15).
So this notion that “stabilization lets you use lower shutter speeds” is an overstatement, at best. And when it does help, how much lower will depend a lot on your handling discipline. You’re far better off considering subject motion when you choose a shutter speed, regardless of your stabilization setting.
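The distance/speed relationship behind the numbers above can be sketched in a few lines of Python. The blur-angle budget here is a constant I back-solved from the 5mph/12-foot/1/500 example; it's an assumption for illustration, not a published formula:

```python
# Sketch of how subject speed and distance drive the needed shutter speed.
# max_blur_angle is back-solved from the 5mph/12ft/1/500 example; an assumption.

MPH_TO_FPS = 5280 / 3600  # miles/hour -> feet/second

def max_shutter_seconds(speed_mph: float, distance_ft: float,
                        max_blur_angle: float = 0.001222) -> float:
    """Longest shutter (seconds) keeping cross-frame motion under the blur budget."""
    angular_rate = (speed_mph * MPH_TO_FPS) / distance_ft  # radians/second
    return max_blur_angle / angular_rate

print(f"5 mph at 12 ft:  1/{1 / max_shutter_seconds(5, 12):.0f}s")   # prints 1/500s
print(f"5 mph at 100 ft: 1/{1 / max_shutter_seconds(5, 100):.0f}s")  # prints 1/60s
```

Note that the required shutter speed scales directly with distance: move the subject eight times farther away and you can use a shutter about eight times longer, which is exactly the 1/500-to-1/60 jump in the example.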
"I was panning while photographing the takeoff of a plane and was completely parallel to it, but it appears that the front of the plane is slightly out of focus and the back is not. What would cause this?"
Most answers you’d get from the Interweb will say this was due to decentered lens elements or possibly heat waves. But the more likely culprit is image stabilization. You’re likely panning with something going 120 to 150mph. Stabilization capabilities are higher in the center of the frame than the edges, often by two stops or more.
With sensor-based stabilization, there’s also a limit to what can be done at the far edges of the frame. The stabilizer can only move the platform so far, plus shifting the sensor doesn’t fully correct for situations where a tilt or rotation was needed. Even though they claim 5-axis stabilization, many mirrorless systems really assess and correct only to the center of the frame, and CIPA numbers are usually reported for the center axis only. On some cameras, Nikon now allows stabilization calculations to be done at the location of the focus sensor that’s used, which may be off-center. So if the camera was focusing on the tail of the plane near one edge of the frame, the front of the plane at the other edge may have less (or wrong) correction.
With lens-only stabilization on older systems (e.g. DSLRs), there was also an interaction between shutter speed and stabilization motor speed that caused unwanted aliasing (which I’ve written about for two decades now, and for which I got a lot of initial pushback, but I’ve basically been proven correct; my information came directly from engineers who designed the system, after all).
I’ve been writing the following since image stabilization first appeared: in situations where you don’t need it—e.g. when you’ve got a shutter speed that should freeze even that fast moving plane—stabilization is best turned off. While best case is that stabilization corrects all problems, the worst case is that it introduces new flaws. A likely case is that you get some edge aliasing.
Here’s the rub: cameras come with stabilization turned on by default these days, and that does stabilize the viewfinder for camera handling and framing, so there’s an immediate perceived advantage. With really long telephoto lenses and no stabilization active, most users would have trouble keeping their camera aligned with moving subjects. So another perceived advantage. But don’t let this fool you into thinking that stabilization is perfect or will always do the right thing and not introduce any image changes.
Stabilization is a tool that has consequences. Use it 100% of the time and you’ll eventually find images that show those consequences. It's impossible to know for sure whether that was what happened in your case, but it's my first suspect.
"What’s the heaviest lens you’d use when carrying the camera using a neckstrap?"
Simple rule of thumb: if the mass of the lens exceeds the mass of the body, you should be carrying the combo by the lens. The more the mismatch and the more the lens mass is centered away from the mount, the more important it is that you follow this rule.
The lens mount is the point of weakness between body and lens, and the mount is designed to break (on both body and lens) once stressed past a certain force level. The reason for that is that repairing a mount is far cheaper than repairing damage to the body or lens structure.
Beyond that, camera+lens these days tends to exceed three pounds even on the simplest of systems. That’s a lot of force on your neck, too. Almost none of us who carry cameras all the time use traditional neck strap carrying. At a minimum, we use shoulder straps, but this is where Cotton Carriers, various sling belts, and harnesses come into play. Even then you need to be careful, as just hanging a camera+lens off a carrier doesn’t isolate g-forces on the mount unless there are multiple points of contact.
"How do you tell if your lens mount is off?"
Let’s start with some basics: in the modern Z System cameras the image sensor is adjustable using a threaded screw system. In the DSLR era, sensor planes were adjusted using small shims, and because shims were made in only certain thicknesses, very small mount deviations were common. The current system is not only much more adjustable, but stays locked in place. At the factory, after the lens mount is positioned on the body, an automated alignment step makes sure that the lens mount and image sensor are parallel, and then everything is locked into place with a small dab of locking solution. I’ve rarely seen a Z camera out of alignment from the factory, and haven’t seen even a single out-of-alignment camera in the Z9 generation models.
So what would make a lens mount be off? Basically two things: (1) impact damage to the mount, typically via offset force on the lens (e.g. when the camera falls and lands lens-first); and (2) a broken VR sled under the image sensor. Both of these things need fixing by Nikon repair; they’re not things you can deal with yourself.
Coupled with that is whether the lens itself is manufactured correctly. In particular, any tilted lens element can cause the image plane to not be parallel to the mount and sensor. In practice, this looks like the lens mount being off, but isn’t.
So, how do you distinguish between lens element positioning and lens mount positioning problems? Well, the first thing is to see if you have any issue at all. That’s done by taking images of something on a single plane (and making sure your camera is absolutely parallel to that plane) and looking for corner differences in acuity. One corner or side blurred more than the others probably indicates internal lens element alignment issues. A full side of an image different from the other could suggest potential mount issues and needs to be investigated further.
However, many people performing such tests do this on charts up close. The problem here is that it is very easy to be out of alignment with the chart, which will look like a mount alignment problem but is really a chart alignment problem. If you want to be sure, put a mirror dead center in the frame on the wall/chart you’re photographing and verify that you see the center of the lens in the center of the mirror. You don’t want to be off by even a little.
If getting things parallel isn’t something you want to take the time to do, infinity is your friend. If you can get yourself reasonably square to things very distant (e.g. buildings near the horizon), the amount you could be off from parallel is so small it shouldn’t show in your images.
Next: is this a one-lens problem or an all-lenses problem? A one-lens problem is certainly a lens problem, not a mount one. An all-lenses problem can indicate a mount problem.
One common complaint from people is “but I never dropped my camera.” Thus when Nikon repair claims “impact damage” they object and think they shouldn’t pay for the repair. But note that I wrote that I’ve rarely (and more recently, never) seen a mount issue from the factory. Nikon’s QA procedures have tightened considerably over the years, and the method by which alignment is done has been improved and made very reliable.
However, mounts are designed to break. On the camera side, incredibly shallow screws are used to hold the mount in place. This is because it is far cheaper to repair mount alignment/damage than it is to repair structural damage in the lens or camera body. Constant vibration, constant jamming, and accidental impact forces (even bumping into someone else with your lens) can degrade the mount alignment (sometimes slowly, via repeated incidents). I’ve watched students in a hurry jam their lens into the mount when changing lenses, not always correctly aligned, and then all but force the lens into its mounted position.
The vibration/impact reasons are why I never travel with lenses mounted on camera bodies. Have you ever seen how people treat your carry-ons in the overhead? I’ve even had a goat fall down a cliff with some of my gear strapped to his side (long story). You want no forces to be applied to the mount whenever possible. So you keep body and lens separate whenever you’re not using the gear. And when you are using the gear, you stay aware of any and all impacts, as well as oblique and gravitational forces that might impact the mount. You’ll note in the above question I don’t tend to carry cameras by neck straps. That’s gravity I’m trying to deny, as a front heavy lens is constantly putting a force on the mount, particularly as I walk with it.
"When someone writes or talks about how a lens “renders”, what do they mean?"
An excellent question. Often in photography we have terms that are bandied about that aren’t defined, or for which there is an assumed (but not necessarily agreed upon) definition.
The dictionary definition of “render” has multiple possibilities: (a) to reproduce or represent something; (b) to produce a copy; (c) to cause something to be.
None of those definitions quite tell us enough for what is meant by using the word render when it applies to lenses. Here’s my take: the minute someone starts talking about how a lens renders, I believe them to be speaking about how the optics impact the depiction of a three-dimensional world into a two-dimensional capture.
Some people will say that, no, what they mean is how well the lens mimics what our eyes do. Unfortunately, we have multiple problems trying to use that definition: (1) our eyes are not the same; (2) most eyes have defects; (3) our eyes don’t resolve to a two-dimensional capture (our retinas are curved); and most importantly (4) it’s actually the brain that does most of the work, because it interprets signals and can do so in different ways (e.g. synesthesia).
Layered on top of this is viewer experience. All of us have decades of experience looking at images that were captured with simpler lenses. Much like 24fps makes motion capture more “film-like” than 60fps (which is more TV-like), central sharpness that falls off into unmanaged optical characteristics as you move towards the corner of the frame is something we’ve been conditioned to expect.
So let’s get back to how a lens renders. Using my definition—three-dimensional to two-dimensional reproduction—you can start putting measurements and values on how accurately a lens is doing that. One that doesn’t get talked about much with still photography but which we talk about all the time in film and video is cats eye bokeh. It’s pretty common to encounter small bright lights or highlights behind a subject, particularly at night. When they defocus (they’re behind the focused subject), light points should become larger, round, out-of-focus blur circles, with no new characteristics. That’s generally true for almost all lenses on the central axis, but as you get to the corners you start to see round turn into elliptical or eye shapes (cats eye), often because of where light baffles are located in a lens (they’re eclipsing part of the out of focus blur).
I could dissect every parameter of a lens and apply this same criterion—that the three dimensions get correctly captured in two dimensions—and for the most part, I do that with my reviews. Pretty much everything in the performance section of one of my lens reviews is dealing with this definition of “rendering.”
The tricky part these days is that lens designers added a tool to their optical designs in this century: lens corrections done via software processing on the image data. Three easier-to-post-correct rendering flaws are often ignored (or not fully minimized) in current lens designs: vignetting, linear distortion, and lateral chromatic aberration. I have no problem with this where we’re not talking about massive eventual pixel changes (e.g. >2EV vignetting, >2% distortion, >2 pixel CA suppression). The likelihood that modest pixel changes have real visual consequences is small enough to be ignorable, and the corrected result is arguably better than the uncorrected one.
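To make the vignetting case concrete, here’s a minimal sketch (a hypothetical quadratic falloff model with made-up numbers, not any real lens profile) of why that >2EV threshold matters: undoing an n-stop falloff in software multiplies corner pixel values, and their noise, by 2^n.

```python
def vignette_gain(r, corner_ev):
    # Hypothetical model: falloff grows with the square of normalized radius r
    # (0 = image center, 1 = extreme corner), reaching corner_ev stops at r = 1.
    # The software gain needed to undo that falloff is 2^(stops of loss).
    return 2.0 ** (corner_ev * r * r)

print(vignette_gain(0.0, 2.0))  # center: 1.0 (nothing to correct)
print(vignette_gain(1.0, 2.0))  # corner: 4.0 (2EV falloff needs 4x gain -- and 4x the noise)
```

A 1EV vignette only needs 2x gain in the extreme corners, which is why modest corrections are visually ignorable while large ones start to show.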
Moving these optical problems to post correction allows lens designers to better concentrate on the real thing that can distinguish a lens: contrast. MTF is the way we usually measure that, but some will talk about acuity, sharpness, resolving power, and other related stand-ins. Before aspherical lens polishing came onto the scene, lenses pretty much had their highest MTF in the center, and that fell off as you moved further from the center. Aspherical lens elements started to change that, and modern lens designs can be quite good at corner-to-corner contrast.
That last bit is what typically leads to lens rendering debates. Technically, our eye-brain is highly center oriented: we see “sharp” in a narrow area controlled by where we point our eyes. Our brain deprioritizes data off that narrow field, which can feel like lack of focus. That’s similar to lens designs of the previous century: sharp central area falls off outside of it. It’s dissimilar to the way many of us view most of our photos now: on a flat screen fully (or almost fully in the case of a 27” monitor) within our central vision predilection. My prediction is that Millennials and later are not going to have this same debate down the road: they’re being conditioned to seeing edge to edge where they want it sharp (or rather, as sharp as the DOF would define).
Personally, I put a stake in the ground a long, long time ago: I want optimal data capture. That would imply a perfect lens with no optical liabilities. I can easily process in corner blur effects later, but it’s really difficult to process in corner sharpness when it was never there in the first place.
Your mileage will vary. But I have to ask: how do you know what that is? Are you even thinking about this? Or is your reaction to lens rendering always “I know it when I see it?”
April 1 to 6 News/Commentary
LEDE ON
Had you listened to me when I first warned about the coming storage shortage late last year and just bought stock in the key storage companies, you would have doubled your money twice with SanDisk, doubled it once with Micron and Western Digital, and almost doubled it with Seagate. That's despite the fact that the war with Iran now has everyone betting against semiconductor and tech companies, on the theory that a helium shortage will keep them from maintaining production. Here's my update: storage constraints are likely to continue all this year and next, and that assumes the semiconductor plants can still get gassed up.
That said, don't invest in the storage companies at this point. While they still have upside growth potential, they also are now riskier than before. The new bet—assuming you're a gambler—is on the short side against companies that rely upon buying storage (the exception to this is a couple of players such as Apple, which has been buying up long-term commitments). You can't, for example, tell someone to buy a US$1000 camera and put a US$1000 card in it, or to get a new computer that suddenly costs twice as much as it used to because of storage component costs (or it stays the same price and reduces RAM and storage capacities).
Bottom line: SSD, card, RAM, and even hard drive supplies are short and getting shorter. Do not delay purchasing these if you can find stock at a good price. However, the shortage will ultimately create another problem: counterfeiting. The guy on the corner who says "psst, dude, you need some memory?" as you pass is not your friend. Let's be careful out there...
——————————
News
Nikon in space
The crew of Artemis II was given extra photographic instruction by National Geographic, and is using Nikon cameras as they make their mission to circle the moon. Unfortunately, most of the images coming out of NASA that you see on other sites have gone through processing (Lightroom Classic tags appear in the EXIF, and there are a few small changes when compared to the originals, though NASA does not appear to have used Dehaze or noise reduction; see pixels view, below), so be careful about assessing them via what a Web site posts. Apparently, NatGeo's "training" didn't have the astronauts populating IPTC data, either, as I'm seeing none on the originals. Maybe if NASA had actually asked a Nikon expert for help... ;~)
Almost all of the first images I've seen from the mission were taken with a Nikon D5 (DSLR) and Nikkor 14-24mm f/2.8G (F-mount). The aurora image shown above, for example, was shot with the D5 at 22mm, f/4, 1/4 second, ISO 51200, manual exposure, matrix metering, with +1EV exposure compensation.
I took the original and ran some simple processing on it to emphasize the "little blue marble" idea, and came up with this:
The crew has other Nikon gear with it, including a Z9. If you want to see the images from the mission, you can do so at images.nasa.gov. When you go there, you'll see a Show EXIF Data button underneath each individually viewed image that allows you to see a fair amount of the camera data. As I write this, all of the images have been either from a Nikon D5 or a GoPro Hero 4 (too bad Nikon dropped the KeyMission, right?).
——————————
Tip
Staying on top of security
Measured as a percentage, more photographers use Macs than the general public does. The myth that the Mac, because of its lower market share, was less targeted by malicious hackers and thus more secure than a PC is just that: a myth. The entire Apple ecosystem is the reason: macOS (Mac), iOS (iPhone), iPadOS (iPad), and the rest comprise a huge user base while sharing much of the same code base, so targeting Macs is pretty much the norm now. When I update this site, I'll be updating some of my Mac security advice (current advice is here).
Basically, the change to my advice is this: you need to be on the most recent version of macOS now. That, coupled with a firewall/virus protector package, and using something like SilentKnight to verify that all the security updates are getting installed and run is your best protection if your computer connects to the Internet. If you don't connect to the Internet, how are you reading this? ;~)
Recent events, most notably the DarkSword exploit, are making what I wrote in the last paragraph the safest way to run a Mac now. That's because the current OS versions get fixes first (it took over a week, and Apple reversing a previous policy, before some older versions got fixes). To put it simply, DarkSword lives off of vulnerabilities in a number of key areas from the kernel through all of the app layers, and just visiting a malicious Web site can lead to full device compromise that you'll never see and that requires no action on your part. We now know that DarkSword has been in the wild since November 2025, which is one reason why you need a security package that can scan your system for it (Apple's XProtect, in theory, also does that, but I've seen XProtect fail to scan for days at a time).
The problem, of course, is that Apple is aggressive about deprecating older things as they update their operating systems. This means that some of your older software and devices won't work with the new macOS as it launches. The way around this is to have a machine with enough RAM and internal storage so that you can virtualize an older version of macOS within the new one. For instance, I'm running macOS Monterey inside of macOS Tahoe (I even put the Monterey dock on the left and the Tahoe dock on the right so I know which one I'm using). This is not a trivial process, nor is it particularly complex. It's a bunch of simple steps using a virtual host (such as Parallels). But I make the point about RAM and storage because you're probably going to consume double the RAM and storage to do this right, perhaps more depending upon your older apps and data.
I wish Apple did this right (they have a virtualizer that developers use so that they can test across multiple versions of the macOS). At the point where a completely new named version of macOS comes out, when it installs, it should offer to install your older setup intact in a new virtual machine. Until then, you and I have to do this manually, and it's something that can and should be automated, not waste our time learning the steps and then carefully following them.
——————————
Commentary
Behind the scenes can be brutal
One thing you might have noticed about my sites recently is that they load faster. My first redesign, filmbodies.com, essentially snaps onto your display (assuming you've got a reasonably fast Internet connection). Even though I'm now using larger images, the average load time is down to 1.2 seconds for a first time (no cache) hit. I've seen it go as low as 906ms, and that was with a Google font API lookup that I still need to resolve and remove; otherwise it would have been less than a second. The site I use for this analysis grades that as a B (by contrast, the photography site I show an example of below gets a D). I'd like to get that to an A, but I still have some work to do on filmbodies. (By contrast, bythom currently is graded as A, despite loading twice as slowly, so it isn't all about speed.)
The more background services you use (Google fonts, analytics, ad tracking, user tracking, database calls, etc.), the more a site is likely to sometimes load slowly or seem to sputter as it loads, as the demands on each of those services running behind the scenes can produce slower responses at times. Here's an example of how much is going on in the background for a site you probably know and how one weak link in the chain can make it sputter into ridiculous load times:
One of the things I've noticed since the US attacked Iran is that Web servers are clearly getting attacked more frequently. Which makes for a higher chance that a service a site is using responds more slowly. While I positioned the removal of ads and tracking mechanisms from my sites as a change from making you the product to making my sites' information the product, one thing I was clearly thinking about as I started the redesign process was bringing as much as possible back to the server I control and whose performance I can monitor and maintain.
——————————
Commentary
Overextending
Yes, I know that the dearth of new cameras has got all the photography Web sites in a tizzy, but a headline of "Two Legends Return" to describe what executives said at CP+ about considering development of an LX100III or OM Pen F was just pure clickbait. I can add six characters to their headline and keep it clickbaity but more accurate: "Will Two Legends Return?" And the answer to that question is that we still don't know, but at least now the two companies in question, OM Digital Solutions and Panasonic, have expressed that they're clearly considering it.
While we're at it, I'll tell you the likely reason that the kimono is being opened slightly about future developments (typically the Japanese executive response is always something along the lines of "we don't talk about potential future products"). It's resource planning, basically. With the supply chain cutting off access to so many parts and delaying introductions of cameras already in progress—and that will worsen over the coming year—having a better idea of what resonates most with customers is going to be absolutely necessary to stay in business.
For instance, OMDS's R&D has enough resources to do one major release in the foreseeable future, so should it be OM-10 or OM-Pen? It really should be both, as they fill different needs in an overall lineup, but I don't think OMDS has the capital and resources to do both near simultaneously, let alone the ability to market and sell two lower level models simultaneously. So by saying they're considering making a Pen replacement, they can better gauge fan response to that. If I'm reading between the lines correctly, they either already have an OM-10 ready and are trying to figure out the next model after that, or they're unsure whether an OM-10 is the right "next camera" for them.
The way Panasonic is responding to the "will there be an LX100III" question is more amusing than functional in my assessment. It's as if they're waking up to the "sudden" popularity of compact cameras. I put "sudden" in quotes because it was clear to me that the camera companies were cutting off compact production arbitrarily starting in 2018 (and later to deal with parts shortages and supply chain problems due to the pandemic), not because people didn't want to buy them. I believe the Japanese camera industry 100% missed the mark by aggressively eradicating compacts from their lineups. Okay, they saw that as a way of raising per-unit value, which is a form of optimizing component acquisition to produce better gross margin. The pandemic simply exacerbated that. But customer demand for compacts remained at a much higher level than the Japanese were delivering, and only recently do we see them acknowledging that.
I'd have to knock Nikon for this, too. The camera they should have introduced by now is the Coolpix Z, essentially their update of the Coolpix A to compete with the likes of the Fujifilm X100VI and the Ricoh GRIV. This would fit well with their stated (but not always followed) attempt to cater to prosumer and pro users. To this day, the A is a usable, competent camera that produces excellent imagery. Imagine if it had been updated to Z System levels and included those Flexible Picture Controls at the press of a button. However, Nikon basically didn't know how to market the Coolpix A when it was introduced (together with the forgettable P330), particularly since they also had Nikon 1 models they were trying to push at the time. The A was just one of 12 Nikon compacts (and 3 Nikon 1's) introduced that year, and Nikon marketing had trouble keeping up with them all. Meanwhile, the D610 that year was an emergency response to the shutter splatter problem, and the Df got all the rest of the marketing attention. Nikon thinks the Coolpix A didn't succeed because no one wanted it. Well, of course no one wanted it: Nikon never told anyone why they'd want it. Shimatta!
One problem with everyone giving up on compacts for so long is that we're once again on the cusp of smartphones making another advance up the lower end of photographic capability. That said, all three potential cameras I just mentioned (Coolpix Z, OM Pen, and LX100III) would likely be high enough in capability and status that they could be hits. But have they returned? Masaka.
-------------------
Wrapping Up
And in other news
▶︎ Atomos Shinobi controls ZR. Firmware update 11.07.00 to AtomOS now allows a Shinobi II to control the ZR with current firmware, including doing touch focus on the remote monitor.
March 24 to 31 News/Commentary
LEDE ON
The camera company executive interviews with the media at the CP+ show in Japan are slowly hitting the Interwebs as PR at each company signs off on them. One thing that seems to have happened is that the kimono widened a little and we got some talk about what many companies are working on (the OM Pen F isn't dead yet, though Generalissimo Franco still is). I suspect that's because they're embarrassed that they didn't introduce anything new in months, and want customers to realize that they haven't all left the engineering buildings for a long hike up Mt. Fuji never to return. Just as soon as someone tells them where the new parts all went and that they're ready to use, they'll get right back to work.
——————————
New Compact
Panasonic muddles on
The new Panasonic TZ300 strikes me as a "must do something" launch. Except that what they did is take out the viewfinder from a TZ200 and not add anything particularly significant. But at least Panasonic has a compact superzoom again.
Let's start with the sameness between the TZ200 and the new TZ300: same 20mp 1" image sensor (though it's now backlit), same 24-340mm (equivalent) f/3.3-8 lens, same fixed Rear LCD. There's nothing really wrong with the image sensor; it's fine for its size, and I'm sure the BSI version will deliver maybe a third of a stop more dynamic range. I found the lens wanting on the TZ200, though, and nothing seems to have changed. Sony's RX100VII lens, which wasn't exactly given accolades, beats the Leica-designed 24-340mm (equivalent) f/3.3-8 lens on the TZ200 I tested. Maybe I had a bad sample? I seriously doubt that, as others complained about it, too. I suspect diffraction is a limiting factor on that lens, and it had quality control issues the last go around.
Meanwhile, we still get DFD (Didn't Focus Dead-on; no, wait, Depth From Defocus) with all its issues, and a video burst for continuous action (4K Photo). The 4K video, unfortunately, is cropped, so you basically end up with a 36-540mm lens when 4K is invoked in any way. Yes, we now get a (Europe-required) USB-C port for communication and power, but we get a micro (Type D) HDMI port when a microphone port probably would have been more useful.
The sad thing is this: eight years ago the TZ200 had very good control and handling for a compact superzoom, but it left a lot to be desired in the quality of the stills and video it captured. This new TZ300 is pretty much identical in control and handling to its predecessor, but Panasonic simply didn't address any of the shortcomings we reviewers noted eight years ago, and added a new one: no viewfinder.
So what we have here is that a Camera Company sees that compacts are selling again, so dusts off something they stopped making many years ago and puts no effort into improving it. Missing from the TZ300 are phase detect autofocus, Panasonic LUTs (!), a tilting and better Rear LCD, and a bunch more things you'd expect these days. It's as if Toyota resurrected the Scion thinking that its time had finally come.
When companies take the wraps off something they had mothballed and put no effort into the iteration, to me that shows not just a lack of imagination and creativity, but almost a sense of desperation.
Which brings me to this: does Panasonic actually have a coherent brand strategy? Across both stills and video, I'd say no, they don't. Not when you can now buy an eight-year old design that's been downgraded.
——————————
Legal
Did we win or lose?
The judgment in GoPro's lawsuit against Insta360 has been published, and both sides are declaring victory. On the utility patent side (the technical stuff), Insta360 was found not to have infringed the stabilization, distortion correction, aspect ratio, and leveling patents. On the design patent side (how it looks), Insta360 was found to have infringed the Hero camera design, at least in Insta360's original models (they've since changed their design). Insta360 claims it spent at least US$10m defending against GoPro's suits. It's unclear how much GoPro spent, as the costs appear to be buried in General and Administrative expenses, though their 10-K for 2025 did describe legal expenses as "substantial."
As to why GoPro sued in the first place, it's easy to see the management motivation: sales to Asia and Pacific (APAC) in 2023: US$245m. In 2025: US$77m. GoPro's making noise about the new camera(s?) they'll launch at NAB later this month. Let's hope that gets them back to innovating rather than protecting.
Meanwhile, DJI has now sued Insta360 for patent infringement on drones. It seems like all the action camera companies are now drowning in lawyers.
——————————
Tip
macOS updates require free drive space
I've seen this pop up on a number of sites, so I wanted to warn you so that it doesn't happen to you. When you update from an earlier macOS to the current macOS Tahoe (26.4), if you have older external drives mounted that are not APFS GUID formatted, the update may try to convert those drives to the current Apple standard (and you want that, because APFS has safeguards that the old Mac OS Extended (Journaled) format doesn't).
The problem happens when you don't have enough free storage space left on a drive. The process of changing formats requires a great deal of free space (Apple says 40GB). Apparently the update process doesn't check whether it has enough space (or perhaps generates an unknown need for space and can't calculate whether you have it). Drives that fail the conversion become unusable, as they're left stranded partway between formats.
So: don't keep external drives attached to your Mac when making macOS updates that might involve drive format updates.
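Before you run the update, it's also worth sanity-checking headroom yourself. A minimal sketch (the 40GB figure is Apple's quoted requirement from above; the list of volume paths is something you'd supply yourself, since externals normally mount under /Volumes on a Mac):

```python
import shutil

REQUIRED_GB = 40  # free-space figure Apple cites for the format conversion

def drives_too_full(mount_points, required_gb=REQUIRED_GB):
    # Return (path, free_gb) for any volume without enough free space.
    short = []
    for path in mount_points:
        free_gb = shutil.disk_usage(path).free / 1e9
        if free_gb < required_gb:
            short.append((path, round(free_gb, 1)))
    return short

# Anything this returns should be cleaned up or unmounted before updating.
print(drives_too_full(["/"]))
```

If the boot volume itself shows up in that list, go back and re-read the paragraph about backups before doing anything else.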
Moreover, you should always have a complete backup of your internal drive, as if you ever encounter this problem when trying to update the internal drive, it can end up unbootable and you'll have to perform a complete reinstall to get it running. I've warned about not accepting Apple's minimal SSD sizes for a number of reasons, and this is one of them: those of you with 128GB internal SSDs can get to a storage threshold that makes the system not updatable (though you would have also noticed and ignored performance drops before you got there).
This is one of the trickier aspects of high tech: deprecation of older technologies can result in loss of data if you're not paying attention. But we need deprecations to happen to make progress in features and performance. Additionally, in today's environment, you really can't afford to just "freeze" your system and forgo security updates, so you need to be updating.
-------------------
Wrapping Up
And in other news
▶︎ Apple discovers ticoRAW. With version 26.4 of macOS (Tahoe) Apple finally has built in support for Nikon's High efficiency* and High efficiency raw formats. Apple becomes the last major player dealing with raw files to enable this support. Apple's support page, however, still lists that only Lossless compressed is supported, but that page was last updated in February 2026.
▶︎ AI gobbles up Sony cards. Just in time for World Backup Day, Sony is no longer supplying CFexpress cards (Type A or B), or Tough SD cards to dealers and customers. Referencing the shortage of NAND chips, Sony made the decision to at least temporarily halt distribution of cards, as they cannot guarantee supply. This is a bit disturbing as Sony is a fairly major player in storage cards (though their Nextorage spinout may now be bigger).
▶︎ Nikon divests Mark Roberts Motion Control. When Nikon acquired the British-based MRMC back in 2016, it raised a lot of eyebrows. Yes, the MRMC robotic arms were mounting Nikon DSLRs for unattended placement at big sporting events at the time, but the big use of remote-controlled arms really happens at a level and in areas where Nikon has lost ground (Olympic events, Hollywood, etc.). Curiously, now that Nikon has elbowed back into Hollywood with RED and has one of the best mirrorless cameras for sports in the Z9, Nikon is in the process of selling MRMC. I don't think the recent write-downs at the MRMC division played well in Tokyo, and I had been expecting MRMC to play more closely with Nikon Imaging, but MRMC seems to have a mind of its own. While their robot arms have been used in ways that caught viral attention recently (Apple TV+'s Severance and the Netflix Ed Sheeran one-shot production in New York), I've noticed that more and more I'm seeing non-Nikon cameras on Mark Roberts arms. Since I'm planning on attending the upcoming NAB Show, I had already noticed that MRMC has a separate booth—though across the aisle—from Nikon this year. Now I think we know why (RED is in Nikon's booth).
▶︎ DNG is now an ISO Standard raw image format. Twenty years later, Adobe's attempt to create a standard for raw files is now officially recognized by the International Organization for Standardization (as ISO 12234-4). Back in the early days of the Fred Miranda and dpreview forums, many discussions raged about the then-splintering raw format definitions (all camera maker raw formats actually utilize a form of TIFF under the covers). It was a heated topic, particularly when companies such as Nikon introduced encrypted white balance information into their raw files. The issue then, and still present to this day, was archival longevity of raw files, as the formats for Canon, Fujifilm, Nikon, Olympus, Panasonic, and Sony are all proprietary and not documented in a way that would allow someone to write a 100% accurate processing tool in the future (or today, for that matter). At the moment, Leica, Ricoh, and Sigma save raw files as DNG. Whether the standardization process will get any of the major makers to convert future products to DNG is unknown, but for a company like Nikon that embraces ISO standards throughout their organization, I wonder if this might be the tipping point for the future.
March 16 to 23 News/Commentary
LEDE ON
Things continue to be quiet in the photography scene. We did have a bunch of new rumors pop up, including a Chinese zoom lens, another Chinese autofocus lens entrant, hints that GoPro may expand beyond the wide angle Action camera, and more. But in terms of news? Yawn (wake me when something happens again).
——————————
Commentary
Gerald undoes the color world
Gerald Undone, a well-known and respected YouTuber who mostly caters to the sophisticated videographer crowd, seems to have upset a few people who only noted his "grades" for color. As in Nikon gets an A, while Fujifilm's overall report card showed a D. Nikon's Flat Picture Control gets an even better grade than A for photo accuracy.
Yes, we're back into Color Science feuds. Basically, accuracy of color versus colors people like. I should point out that I come from the video world (dating back to the early 1970's), and because video always has to worry about what's happening downstream of the capture, accurate color at capture has always been a priority. It's my priority with stills, too. I think I first wrote in 2004 that if you baked something into your original capture files (e.g. JPEG), it made the job of changing things later much more difficult. The more you baked, the more difficult later changes became, up until the point where you "burnt" your original and would never be able to recover usable data from the burned sections.
The doom-scroll side of the world doesn't mind baking. Indeed, they count on some baking to set their images off from others in the scroll. Brighter, contrastier, more colorful, unusual color palette, and more. That's one of the things that happen when you're taking the same composition as others ;~). Pleasing color is one of the reasons why Fujifilm resonates with the content creators: picking a different film simulation gives you instant baking. Nikon's more recent foray into Recipes is similar, and even the wording speaks to "baking."
I think everyone really needs to understand—after accounting for color blindness and cataracts—just what world they want to live in: accurate or pleasing color. I'll point out that this was the case even back when I picked up my first film camera in the 1960's. You're in one camp or the other, though these days with digital, you can be in the accurate camp and migrate any time you want to the pleasing camp. Pretty difficult to do the opposite, though.
Which brings me to LUTs in the video world. I should point out that one of Gerald's businesses is selling accurate LUTs, which is one reason why he's doing all this technical color analysis in the first place. In N-RAW or N-Log video from my Z9-generation cameras, I like his LUTs a bit better than Nikon's free LUTs. His LUTs do a better job of getting you to a "broadcast accurate" color in your base material. If you want to grade looks into that after the fact, you're working from a better original data set using his LUTs.
——————————
Commentary
Better than raw
A couple of questions, a few posts, and now dpreview's interview with Camera Intelligence's Caira camera designers remind me that we did the right thing in the beginning at Connectix in 1994 when we designed the QuickCam. It's why Apple copied what we did when they got around to building cameras into laptops, tablets, and phones. And it's a really simple idea.
Let's start, however, with what current cameras do: data from the image sensor is instantly (and sometimes within the sensor) managed into a complex physical image processing pipeline. That includes "gain" control, "correction" of raw data (in Nikon's case White Balance Preconditioning among other things), as well as lens corrections, among other things. Many of these things happen before the camera actually creates a raw data file, let alone processes the raw data into a JPEG, HEIF, or TIFF image.
What we did at Connectix was dirt simple: as few parts as possible to get truly raw image sensor data captured in real time into the CPU of a Macintosh (and eventually Windows PCs). The stream of the data was more important than getting each data packet "corrected." Particularly when we and others eventually combined the stream of image data with streams of other data, such as gyroscopes.
I'll take a really simple example to illustrate. Photons are random. So when you capture a snapshot of them via a shutter (electronic or mechanical) you freeze whatever random photons you've managed to capture. The randomness of photons is our primary source of visual "noise" these days. So what if you captured the moment via one frame of image capture but captured the data stream before and after? You could look at the pixel data for two, four, eight, or sixteen images in the stream and, where there's not motion, use methods to "fix" the photon randomness (I published an article back in 2011 about how to do this; I talked about it at a graphics conference a decade earlier). With the QuickCam we were even trickier: we used the stream to constantly evaluate both exposure and color change (e.g. someone turns on a light), as well as to produce images.
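The core of that idea can be sketched in a few lines. This is a toy stand-in, not the QuickCam's or anyone's actual pipeline: simulate shot noise with a Poisson draw, then average the stream per pixel wherever the stack looks static, leaving "moving" pixels untouched. The motion threshold and frame counts here are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(1)

def temporal_denoise(frames, motion_threshold):
    # Average each pixel across the frame stack, but only where the stack is
    # "static" (low variance across frames); elsewhere keep the middle frame
    # unchanged. Real pipelines also align frames and weight per pixel.
    stack = np.asarray(frames, dtype=np.float64)
    middle = stack[len(stack) // 2]
    static = stack.std(axis=0) < motion_threshold
    return np.where(static, stack.mean(axis=0), middle)

# Simulate photon randomness: 16 captures of a flat patch averaging 100 photons/pixel.
truth = np.full((64, 64), 100.0)
frames = [rng.poisson(truth).astype(np.float64) for _ in range(16)]

single = np.abs(frames[8] - truth).mean()
stacked = np.abs(temporal_denoise(frames, motion_threshold=50.0) - truth).mean()
print(stacked < single)  # averaging 16 static frames cuts shot noise roughly 4x
```

The sqrt(N) improvement from averaging N static frames is exactly the "fix the photon randomness" idea: the moment you captured is one frame, but the stream around it tells you what the static pixels should have been.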
To a large degree, the smartphone cameras have gotten better at final results because they're doing some of these stream related things (as does the Caira), particularly in low light. Meanwhile, the camera makers have gotten worse. That's because they rely upon a physical image pipeline that does pieces of the work all along the way, plus the sensor-to-ASIC path goes through a limited amount of memory and not always at the speeds that the smartphones work at with their direct to compute cores approach.
Which brings me to another point: we always knew processors would get faster. Moore's law itself predicts that. So keeping the connection between image sensor and CPU as short and unmanaged as possible was always going to provide the ability to do more later at the CPU side. Today, that includes a lot of machine-learned things in the smartphones (AI, if you will). The dedicated cameras are doing that on limited (or no) real-time streams well away from the image sensor, and with process size cores that are far bigger, more power hungry, and slower than the mobile devices.
So what's better than raw data? All data as it is produced, and the full stream of it, not a slice of it every now and again. Which leads me to Professor Eric Fossum's work. He was in on the original CCD and CMOS image sensor development at the NASA Jet Propulsion Laboratory (JPL), but at Dartmouth he helped develop what he originally called the jot-based image sensor (now known as the Quanta Image Sensor), which basically produces a stream of data that tells you when and where every photon arrived at the image sensor. After introducing a 41mp 2.2-micron sensor in 2022, the company he and his students started has gone 100% silent. I can't tell whether it was absorbed by someone else or what, but I can see that the quanta image sensor is still getting development activity.
So you wanted to know what's better than raw. It's knowing when every photon hit the focal plane and from where. Couple that stream of data with all the other image processing things that are going on in today's very sophisticated and fast computing cores, and I believe you'll get even better results than we obtain today.
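For the curious, here's a hedged sketch of what working with such a photon stream might look like: once you have per-photon (x, y, t) data, "exposure" becomes a choice you make after capture by binning any time slice of the stream into pixels. The event stream below is synthetic and uniform; a real quanta sensor's output format would certainly differ:

```python
import numpy as np

rng = np.random.default_rng(1)

W, H = 32, 32
# Hypothetical photon-event stream: each event records where and when a
# photon hit the focal plane, as described above.
n_events = 50_000
events_x = rng.integers(0, W, n_events)
events_y = rng.integers(0, H, n_events)
events_t = rng.uniform(0.0, 1.0, n_events)   # arrival times in seconds

def expose(t0, t1):
    """Build an image from any time slice of the stream, i.e. pick the
    'shutter speed' after the fact by binning events into pixels."""
    keep = (events_t >= t0) & (events_t < t1)
    img, _, _ = np.histogram2d(events_y[keep], events_x[keep],
                               bins=(H, W), range=((0, H), (0, W)))
    return img

short = expose(0.0, 0.1)   # a 1/10-second "exposure"
full = expose(0.0, 1.0)    # the whole stream
print(full.sum() == n_events)
```

The interesting part is that nothing is thrown away at capture time; denoising, motion handling, and exposure decisions all move into the compute stage.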
——————————
Reader Question
Carry on my wayward friends
One of the things I've been trying to build in the background is a huge database of reader questions with my answers. I've been doing "reader questions" off and on here at byThom for decades, but it's time to start dialing that in a bit more. So expect a reader question coming with each future News/Views. Today's question: "What’s the heaviest lens you’d use when carrying the camera using a neckstrap?"
Answer: I have a simple rule of thumb for this: if the mass of the lens exceeds the mass of the body, you should be carrying the combination by the lens (e.g. strap attached to the tripod collar). The greater the mismatch, and the farther the lens's mass sits from the mount, the more important it is that you follow this rule. That's because the lens mount is the point of weakness between body and lens, and the mount on both is designed to break once stressed past a certain force level. The reason for that is that repairing a mount is far cheaper than repairing structural damage to the body or lens itself.
Beyond that, camera+lens these days tends to exceed three pounds even on the simplest of systems. That’s a lot of force on your neck, too. Almost none of us who carry cameras all the time use traditional neck carries. At a minimum, we use shoulder straps, but this is where Cotton Carriers and various sling belts and harnesses come into play. Even then you need to be careful, as just hanging a camera+lens off a carrier doesn’t isolate g-forces on the mount unless there are multiple points of contact.
——————————
Now Arriving at Tab 2
Only three months late
This week I published my first completely redesigned Web site: filmbodies.com. I've started with that site because it's the smallest of my sites and the one that gets the fewest updates and additions, and I wanted to make sure things work as I want them to in this new style before committing to the bigger bythom.com and zsystemuser.com sites, which would need a lot of extra work if I got something wrong enough that I needed to abandon the style and start over.
It's been a long, winding path to where I ended up. I have so many different prototypes of sites now that I'm going to have to build an archive drive of all my ideas and testing. In the end, though, I decided to keep things closer to what I've been doing rather than stray further. That said, there are a ton of behind-the-scenes changes, some of which I coded myself, some of which were coded by others. But filmbodies is now my first site that 100% respects screen size, plus it also respects Night/Day settings on your devices. A 404 page, redone SEO, and other previously missing elements are now built in, though I haven't yet hooked up the contact page, the 404, and the redirects. Overall, filmbodies should be leaner and faster. I can also update it more quickly, and site backups are now automatic.
Even more exciting is that I rewrote (or at least re-edited) every key article on the old filmbodies site, then added a couple of new ones, because it's difficult to stop me at anything once I get started ;~). So much so that there's even a brand new, free book available to film SLR users who visit the site (though you can enter a donation price, should you care to). The letters on my keyboards seem to be wearing off. I've also cleaned up a lot of images, because on the original site many of them were only 384 pixels wide! That tells you how far back things go, as I originally used a two-column site design with a maximum column width of 500 pixels. But now, all will be new again. New text, new photos, new everything.
Now that filmbodies is live, it's time for me to start knocking down other site dominoes...
-------------------
Wrapping Up
And in other news
▶︎ DxO PhotoLab 9.6. This new update adds the DeepPrime XD3 noise reduction (for Bayer sensors), adds diffusion on AI masking to make the edges more natural, and adds a new high-fidelity DNG compression routine.
▶︎ Affinity gets bug fixes. The new combined Affinity application was updated to version 3.1, claiming to fix over 200 bugs. Canva added a new light interface for those who objected to the dark one, a new Convert to Curves function that changes pixel selections into editable vector curves, a new Live Tone Blend Groups function, and some other minor bits. Affinity seems to generate a love/hate response from users, mostly due to its combined Illustrator/Photoshop/InDesign-mimicking UI, but frankly, it's a free (near) Photoshop clone that works, and I'm not sure how you can hate that.
▶︎ GIMP gets an update. Speaking of Photoshop alternatives, GIMP (GNU Image Manipulation Program) just updated to version 3.2, which finally adds non-destructive layers. In fact, layers got a lot of additional attention in this release, though not for the sort of layers we tend to use with photos. Instead, the new bits have to do with text layers, linked layers, and vector layers, which are more graphic-design oriented. The MyPaint Brush tool was updated, as was the Text Editor. The UI got a lot of touchup and adjustment, though it still has a geeky, old-school Unix flavor, and JPEG 2000 and AVCI images are now also supported. Curves now supports Presets. There's even a new Cornish language version of the UI, which brings to 86 the number of languages GIMP supports.
▶︎ Another Viltrox Vintage Flash. Viltrox introduced the Vintage V2, a US$37 basic flash unit with a rechargeable lithium battery. While there are TTL-compatible versions for Canon, Fujifilm, Nikon, and Sony, the only controls on the flash are really Automatic plus 1/2 to 1/16 power. With a guide number of 6 in feet (about 2 in meters), it's not very powerful. The best application for this flash would be situations where minimal fill is useful.
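If you want to sanity-check just how weak a guide number of 6 (feet) is, the standard guide number relation (GN = f-number x distance, at base ISO) makes it easy:

```python
# Guide number arithmetic: GN = f-number * distance (at base ISO), so the
# full-exposure reach of a flash at a given aperture is GN / f-number.
def max_distance(guide_number, f_number):
    """Distance (same units as the GN) at which the flash fully exposes."""
    return guide_number / f_number

gn_feet = 6.0                            # the Vintage V2's GN, in feet
print(max_distance(gn_feet, 2.0))        # 3.0 feet at f/2
print(max_distance(gn_feet, 5.6))        # about 1.1 feet at f/5.6
```

At f/5.6 you're down to roughly a foot of reach, which is why a flash like this is really only useful as close-in fill.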
News and Commentary March 9 to 15
LEDE ON
It's tempting to say "nothing happened this week." Everyone in Tokyo is taking a breather after a successful CP+ show, after all. But of course a lot did happen (e.g. war is now ongoing, more tariff gyrations, fear of economic slowdown, yen appreciation against the dollar while falling against the Chinese yuan). For Fujifilm, Nikon, Panasonic, and Sony, the fiscal year ends at the end of March, and the last month hasn't quite worked out exactly as each expected. When the full-year financial results are reported in late April and early May, I expect that to be followed by some new short- and mid-term management plans that were at least partly triggered by recent events. Nothing earth-shaking is in store as far as I can tell, but I'm hearing a lot of micromanagement bits starting to leak in Tokyo, and even here in the US subsidiaries.
——————————
Tip
Reducing Noise on Input
Bill Ferris, an Arizona wildlife photographer who's active on dpreview, recently wrote a short post that echoes what I've been teaching for some time now. To reduce noise in your input data:
- Use the widest aperture that provides acceptable depth of field.
- Use the slowest shutter speed that stops subject (and camera) motion.
- Fill the frame with your composition.
The first two are about optimizing exposure. Exposure is LIGHT filtered by APERTURE filtered by SHUTTER SPEED. When you use too small an aperture or too fast a shutter speed you're capturing fewer photons, and because photon arrival is random, fewer photons means more visible randomness in your image, which is the primary source of "noise" these days.
The third is more about the visibility of noise. If you have to crop your original data to get your final composition, at the same output size you increase the visibility of whatever noise was captured. So if your goal is a 24" print and you cropped your original 2x and output to that size, you'll get a significant increase in the visibility of the noise that is in the image versus having enough full-frame pixels in the first place. This part is trickier than you think. One reason why I stopped using m4/3 is the 4:3 aspect ratio. Almost all my output is 16:9 these days, so in using 4:3 for the capture I'm always significantly cropping my final image. This makes the captured area that's used in the final image even smaller than what I obtain with full frame, thus amplifying the visibility of photon randomness. (Note that choosing sensor size is all about trade-offs. For me, the trade-offs no longer work great. For you, they might. See next.)
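The arithmetic behind this is simple shot-noise math: signal-to-noise ratio rises with the square root of the photons collected, so a 2x linear crop viewed at the same output size halves your SNR. A quick sketch (the photon count is an arbitrary illustrative number):

```python
import math

# Shot noise: SNR scales with the square root of photons collected.
def snr(photons):
    return photons / math.sqrt(photons)   # equals sqrt(photons)

full_frame = snr(40_000)       # say 40,000 photons behind each output pixel
cropped = snr(40_000 / 4)      # a 2x linear crop leaves 1/4 the photons

print(full_frame / cropped)    # 2.0: the crop doubles the visible noise
```

Same scene, same exposure settings; the only thing that changed is how much sensor area feeds each output pixel.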
Yes, noise reduction software can help, but that's dealing with things you've already captured. Remember, my mantra is optimal data capture, optimal data processing. What I'm talking about in this tip is optimal data capture. Where noise reduction software comes into play is with optimal data processing.
After I launch byThom MAX I'll eventually serve up my three-part seminar on noise: (1) sources of noise; (2) minimizing noise during capture; and (3) dealing with noise after the capture. One of these (long) presentations is already done. The other two are in progress. Until then, enjoy this brief hint at a key component of presentation #2.
——————————
Commentary
The case for high-end APS-C
In this week's tip, I mentioned aspect ratio, capture area, and tradeoffs. I'm lucky enough to be able to fill the full frame with my subjects, both on the sidelines at sporting events and in Africa taking wildlife photos. While I "suffer" size, weight, and price penalties doing so, matching top full frame bodies with the best possible lenses is still my primary choice. I'll put what I create up against anyone's, as I'm using optimal gear with optimal capture techniques.
But realistically, most folks on safari aren't getting the animal approaches or carrying the really long lenses I and other pros do. Particularly in Kenya and Tanzania, there are times when you simply can't fill the frame with your composition, even at 600mm. The nice thing about the Nikon Z8/Z9 is that they're also convenient APS-C cameras, simply by flipping a setting (which can be automated onto a button). Sure, you only get a 19mp crop that way, but if that 19mp is fully used and you've set exposure properly, the results can still be stunning up to about 24" on the long axis.
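The ~19mp figure follows directly from crop arithmetic. A quick sketch (the 5392-pixel DX width is Nikon's published figure for the Z8/Z9; the 24" print claim assumes you'll accept roughly 225 ppi):

```python
# How the DX-crop pixel count follows from a 45.7mp full frame sensor:
# a 1.5x linear crop keeps 1 / 1.5^2 of the pixels.
full_frame_mp = 45.7
crop_factor = 1.5                       # nominal DX linear crop

dx_mp = full_frame_mp / crop_factor**2  # area shrinks by crop_factor squared
print(round(dx_mp, 1))                  # ~20.3 nominal; the actual DX readout
                                        # is slightly tighter, about 19.4mp

dx_long_axis_px = 5392                  # Z8/Z9 DX image width in pixels
print(round(dx_long_axis_px / 24))      # ~225 ppi on a 24" long-axis print
```

225 ppi is below the usual 300 ppi print ideal, but at normal viewing distances for a 24" print it holds up.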
One reason the Nikon D500 and Canon 7D DSLRs were so popular is that they gave you much of the top-end body capability and performance in a smaller package where you could use somewhat shorter optics, plus you saved a ton of money by equipping that way. I know of one professional sports photographer who has never given up on his D500s because he gets what he needs from a lighter, smaller outfit. Newspaper and Web sports photos simply don't need massive file sizes, and photos are used at sizes that don't reveal the noise gain from the smaller capture area.
There's no accepted definition of consumer, prosumer, or professional when it comes to cameras. However, I'd tend to point to the D500 and 7D (and now R7) bodies as truly prosumer, as they have professional body attributes that are reduced slightly by some consumer-type approaches. For instance, sensor size, which has a big impact on cost, all else equal.
Nikon got burnt by overextending in DX (APS-C) during the DSLR era. As sales volumes came down, Nikon found that they hadn't cleared the previous generation of cameras when they were launching new ones, so at one point Nikon was selling anywhere from eight to twelve (!) cameras in a very tight spacing. After the D500, Nikon moved radically away from proliferating APS-C cameras.
Sony has done something different. They, too, were over-proliferating APS-C early in the mirrorless era (NEX and A#### cameras) and ran into the same inventory pile-up. Now they've moved to four spaced-out models that don't iterate very often (if at all).
Canon seems to be the only one iterating mirrorless much like they iterated DSLRs, with models all over the spectrum, including the arena this comment is about: high-end APS-C. The Canon R7 and Fujifilm X-H2S are really the only two remaining high-end APS-C cameras that fit the prosumer definition I prefer to use (pro features and performance with some consumer limitations). Apparently, Canon will update the R7 sometime this summer, pushing it even closer in pro features and performance.
But where's Nikon? They were the ones that really established this market (with the D100 originally, and particularly with the D300 and D500 later on). Nikon even has the right lens set in the Z System for the sports/wildlife DX user (e.g. the 400mm f/4.5 VR S would be a great lens to pair with a Z90 prosumer body, as well as the 70-180mm f/2.8 for a second body).
And where's Panasonic? Their m4/3 start seems to have them afraid of even trying APS-C.
I'd argue that top-end APS-C bodies should be present in any mirrorless lineup that wants to be complete and win more customers. Canon looks like they'll have that with the upcoming R7 II. Fujifilm still needs to up their game with autofocus performance on the X-H2S. Nikon needs something far better than the current top of their APS-C line, the Z50II. Panasonic needs to stop watching pitches go by and swing the bat. Sony thinks the A6700 is top-end enough, but I'd disagree on a number of levels, starting with the position of the viewfinder, which impacts rotational stability on fast action.
In particular, Nikon and Sony seem to be iterating less often and concentrating mostly on rationalizing a modest number of full frame bodies. This is a little like an auto maker deciding that they don't need to make a full range of products, just a quiver of mid-size and large SUVs. We know what happens when you do that: you get smaller. I wonder how small a camera maker can let themselves get before they become irrelevant. Perhaps Sony doesn't worry about that because they have a history of casting off businesses that become irrelevant. But Nikon doesn't have that option, as Imaging (cameras and lenses) is near 50% of their overall business, and the only group that is consistently profitable enough to keep expanding the overall company.
So while I'll continue to use my full frame Nikons, I'll be the first to congratulate Nikon when (if) they come out with a high-end APS-C model. It would be the right camera for so many customers that I just don't know why it doesn't already exist.
-------------------
Wrapping Up
And in other news
▶︎ Photoshop gets an AI Agent. Adobe's latest Photoshop public beta includes a new AI Agent, which you can chat with to process your image. Short form, you tell it something and it uses the built-in Photoshop tools to accomplish that. For instance, you might type "reduce the highlights and boost the shadows and also remove distractions." This works a bit like an Action, in that each step will be executed individually and thus leave an entry in the History panel, but it works slower than an Action so that you can see the steps as they are performed. You can also use the chat to ask the AI how to do something, and it will just describe the steps it would take instead of actually performing them.
My take on this is that it's useful if you don't know how to do something specific, such as apply a blend mode only to the highlights, but this new AI feature is not something I'd tend to use myself. For instance, this AI Agent doesn't agree with me on cropping, let alone what constitutes a highlight or shadow. Thus it's a more brute-force approach than the way I tend to use Photoshop. Great Photoshop processing never reveals what was done and requires subtlety in decision making. I don't believe the Agent is up to that level at this time.
Pending Reviews
I'm between a rock and a hard place at the moment. I've continued to work on reviews and get them ready to publish, but I'm prioritizing getting the new site designs readied and rewriting everything I've published before on filmbodies, bythom, and zsystemuser, while also building out the product for byThom MAX. I apologize for the lack of reviews lately, but that logjam will break, probably early this summer.
To wit:
- Ricoh GRIV review will appear with the refresh of the bythom site.
- Nikon ZR review will appear with my ZR book sometime in the next month or two on current site.
- Additional Z-mount lens reviews will likely wait until the site refresh this summer:
- Tamron 16-30mm f/2.8
- Nikon 16-50mm f/2.8 VR DX
- Nikon 24-70mm f/2.8 S II
- Nikon 24-105mm f/4-7.1
- Nikon 35mm f/1.4
- Nikon 35mm f/1.7
- Tamron 35-100mm f/2.8
- Nikon 50mm f/1.4
- Nikon 58mm f/0.95 NOCT
- Tamron 70-180mm f/2.8 G2
- Nikon 70-200mm f/2.8 VR S II
- Tamron 150-500mm f/5-6.7 VC
Most of those reviews have completed testing and are fully written, but are awaiting time for me to do the editing and provide examples, charts, and peripheral materials. So there will be a spurt of reviews hitting once the sites are refreshed.
At the moment I have six Chinese Z-mount lenses here in the test queue, but I'm waiting to see what happens with mount licensing before I go further with them. I also haven't acquired a Nikon 28-135mm f/4 PZ, as I don't know how much interest there is in that lens. Likewise, the 35mm f/1.2 S is a bit out of my wheelhouse, so I don't have one of those currently in test. I will, however, eventually get around to those two Nikkors, as I want the new zsystemuser.com to be "complete."
If you have a specific question about any of these products, by all means ask me via email and I'll try to give you a quick answer.
Weekly News and Commentary March 1 to 8
LEDE ON
By this point last year we had four significant camera introductions. This year, none. Every company I talk to quietly confirms that their original plans for this year have all been pushed later and later, to the point where some off-the-record-but-honest responses tend towards “we don’t know when our next big product intro is going to appear; later this year is our best guess.”
The AI elephants in the room have got all the mice scrambling for floor space. When I talk to my supply-side friends, they all tell me “100% of what we’re shipping is going to AI demands.” That starts with chemicals and wafers, and ends with completed chips. I’ve been told that even Apple is scrambling to keep their supply chains going full steam. So to continue the analogy: even the cape buffalo is finding the room crowded and difficult to navigate.
This isn’t going to let up in 2026, as the current stated plans of the AI elephants indicate that they'll want even more of the room in 2027. Meanwhile, we’re in the midst of key top management planning time in Tokyo, as the end of the fiscal year for all but Canon at the end of this month triggers final consideration of how to make the coming year better. Through the grapevine I’ve heard a couple of the ideas being considered in Godzilla’s stomping grounds. Those won’t work. Godzilla will still stomp them (oops, my keyboard seems to have substituted a new metaphor!). The ideas that might work at keeping sales numbers up are (a) more significant firmware updates; (b) better marketing, particularly in customer education; (c) deeper targeted discounts that aren’t constantly repeated but are linked to (b) and (d); (d) event-driven customer experiences; (e) bundle pricing (cameras plus lenses producing discounts); and (f) making a louder splash when you do have a significant new product to introduce.
But a new great camera with a new sensor, new ASIC, lots of updated features and performance, that will make you run to the store to see it? We’ll eventually get a couple of those this year, but they’ll be later than expected, and probably available in lower volume than demand. I doubt those few new key cameras will move sales numbers much.
——————————
Report
WPPI 2026
Once again I attended the wedding- and portrait-oriented show for professional photographers that occurs each year in Las Vegas. I went mostly to keep in touch with what’s happening in an area of photography that I don’t generally practice, but also because I’m working on a book that has a section on lighting, and lighting and posing are the two headline ingredients at the show. It’s not that I don’t know light, having lit my first studio in 1973; it’s that running full lighting setups is not something I do every day, and I wanted to make sure that I’m not forgetting something as I write new material.
I’m not going to get too into the WPPI details here—if you have a specific question drop me an email and I’ll try to answer it—but I did have a number of observations as I attended sessions, took photo walks with instructors, and browsed the large trade show of gear and services that runs alongside the educational components:
- The back of the camera is the primary compositional choice. I mentioned this last time I wrote about WPPI: most photographers here simply compose with the Rear LCD. There were a couple of holdouts, mostly old-school names you’ve heard before who started with SLRs, used DSLRs, and now treat their mirrorless systems the same way (see Joe being Joe, below). But the younger crowd would be quite happy with the viewfinderless ZR, and I saw quite a few of those around the show. As I’ve noted before, when you’re dealing both with your camera and someone posing in front of it, hiding behind the viewfinder tends to break the connection between you and your subject. Connecting with your subject is how you interact with them and adjust their pose.
- CaptureOne is the tether choice. Virtually every live demonstration (of which there were hundreds) had the camera tethered into a MacBook running CaptureOne. A number of presenters even went so far as to outright claim “CaptureOne is the only reliable choice" for tethered work. I never saw a hitch with a CaptureOne tether in any session, though there were lighting and audio hitches, so maybe there’s some there there to that claim. Also: almost everyone was using some of Tether Tools’ bits and pieces (e.g. cabling).
- Creativity is not dead. On a Fujifilm walk with a monster combo of GFX100II and 20-35mm f/4 lens in hand, instructor Chris Berry told me to “break the camera.” Not literally, of course. But rather stop doing things the way I usually do. It’s true, as a raw user, I don’t tend to break outside of long-considered exposure norms. So I grabbed Elvis, cranked up exposure compensation to +3EV, dialed in a filtered Astia+G, and took the image at the head of this section. Not. What. I’d. Normally. Do. And that was sort of the point Chris was making: the creativity you need to stay afloat in this business comes from pushing boundaries, trying new settings, and “breaking” things. Indeed, that’s one of the places that “personal style” comes from. When Insta was new, running a filter on your capture was the thing that set an image apart and possibly stopped a doom scroll (to some degree, still does). But if in going outside the usual box you start finding something you like and that tests positive when you show your images to others, then you’re doing the right thing. Ultimately, photos are to be looked at, examined, maybe even studied and analyzed. If your image looks like every other one out of a Canikony at default settings, it would be incredibly difficult to sell your photography to others as a service. WPPI has thousands of attendees who are making money off of photography. One reason they do is that they keep tuned in to what’s happening on the “creative” side. (And yes, that image is +3EV in exposure. At the end of this news story, I’ll show you how I processed it to get closer to what I was thinking. But it’s pretty incredible what the Medium Format image sensors can hold in terms of dynamic range, and the level of detail at 100mp is pretty amazing. The above image in its original form would print 37” on the long side, by the way.)
- But don’t forget to do what sells. Another speaker went through the process of “breaking” something to demonstrate images that catch attention, but he runs a production studio and keeps careful track of his actual sales. He reports this: the creative images pull people into the studio, but customers still end up choosing to buy the more traditional ones. So make sure that you’re still giving them that option. This plays into something I learned when doing editorial work for magazines: (1) the magazine was attracted to me because of a style/look I produced; (2) they asked for images specifically that fit their style/look and tended to run those over (1). But every now and then, my style/look won the call, which triggered other magazines to ask about my availability. Put another way: do what sells, but also do more. That more should be unique to you and the result of your creative process going further than what you were asked for.
- Speedlights are too old school to consider. Virtually no one was using camera-company flash units. The standard in most demonstrations was Profoto A2 and B30 strobes. A few used the lower-cost Godox (or Adorama Flashpoint) similar lights. The first reason the on-camera flash units get no love is output: you get more light from the big rigs. The second is ease of “modification.” Soft boxes, grids, gels, and more all work very simply in the current Profoto designs, and are extremely fast to set up and/or swap out. When you really want to control the light as the portrait and wedding photographers do, you need a lot of power, up close, and softened. Of course, a full three-light Profoto rig with modifiers is going to set you back US$6000+, and you still need stands to put the lights and modifiers on. So it’s a tough sell. But instructor after instructor kept making it look easy, so I’m sure Profoto made quite a few sales.
——————————
Report
Joe Being Joe
I’m always up for watching Joe McNally in action. He tends to start out from a tame starting point that isn’t a lot different from the rest of the pack, but as he starts working a scene his special powers start heating up and the next thing you know you’ve got a juiced-up Full Joe running around the room. A good case in point was his presentation at the Lighting Summit, where, like most of the other instructors, he began with “a model and three lights” and did the usual things with them. Here’s Regular Joe making soup:
You see Joe at the left, his model and some of the lighting in the middle, plus the result of the image he just took via the CaptureOne tether on the far right. (Pardon the exposure here; there’s virtually no ambient light in the room, and I have no control over the stage light or the projector. If you’re wondering: Z50II with the 16-50mm f/2.8 lens at ISO 6400 to 12800 on these images, as I’m using a slowish shutter speed to keep the frequency-based ambient lighting from throwing color bands and drifts.)
At some point as he’s working, some secret chemical starts getting dispersed into Joe's bloodstream and he begins building his superpowers. Before long he wonders whether he really wants to just make soup any more. Maybe a spicy salsa, stew, or curry instead? I mentioned the “creative” side above, and one aspect of that is noticing things; another is not self-editing. These are things that the superpowered Full Joe does in spades.
Eventually Full Joe jumped off stage with his model and one light, started interacting more with his model asking her to do nontraditional things (acting!), raised the light over her head, engaged the audience in what he was thinking, and ended up with this setup:
And here’s what came out of his camera into CaptureOne:
The strange ceiling light and some of the backlight color comes from the fact that Joe added two Profoto A2s with colored gels behind the audience and pointed them straight at the camera (you can see them in the previously taken images on the far right, before he repositioned himself and the model so that the lights didn’t appear directly in frame). Again, this is me taking a photo of a screen showing the CaptureOne tether, so I have no control over the color and contrast. The actual image looked far better on Joe’s laptop, obviously.
Joe being Joe, he also spouted something new I hadn’t heard him say before, and that’s about aspect ratio. Basically Joe says he’s mostly using 1:1 as his aspect ratio now because then he never has to turn the camera on its side. So much for vertical grips.
One of the things I’ve been planning to do with byThom MAX is cover conferences and trade shows more extensively, including new things such as video, interviews, live streaming, and more. But the infrastructure for byThom MAX isn’t done yet, so you’ll have to settle for today’s abbreviated text and photo sample. I want to “always be teaching” in MAX form, so expect me to venture into that in new ways. Who knows, maybe I’ll become the world’s oldest TikToker.
——————————
Commentary
Z Versus China
It seems clear that Nikon has gone completely cease-and-desist on Chinese Z-mount lenses recently. The Viltrox suit coupled with letters from Nikon lawyers makes that perfectly clear. Many of those Chinese lens makers (or at least their US distributors) were at WPPI, so I spent time there asking the obvious questions.
No one would talk much in the way of specifics, but the generality was pretty much always “we’ll still be selling Z-mount lenses in the future.” I did note that the ones that seemed the most nervous and vague about what’s going to happen were those with “performance” autofocus lenses (e.g. Viltrox, but not Laowa).
Which brings me to a hypothesis. Nikon has a dozen or so patents surrounding the Z-mount. Some of those are size and physical attributes, which almost certainly wouldn’t hold up to scrutiny in court and can be easily reverse engineered in a clean room. Others are extensions of the F-mount protocols, which have been known and used by third party lens makers without Nikon’s permission for decades now. It would be difficult for Nikon to assert those F-mount patent specifics now, as they’ve not done anything I can see to protect that information in the past. The FTZ adapter pretty much proves that Z System cameras still understand and can use F-mount protocols, when necessary, as the FTZ turns out to be nothing more than a signal pass-through for the most part.
Which brings us to the Z part of the Z-mount communications. For instance, there’s a second synchronous serial data stream now, and that appears to be there to manage better focus performance. Nikon is just starting to unleash that themselves, and I believe that one of the Z9II’s new features will be related to that new channel, as well. You have to ask yourself, for instance, why the 24-70mm f/2.8 S II and 70-200mm f/2.8 VR S II focus faster. Yes, some of that’s the new Silky Swift VCM focus motor system. But I believe some of the faster focusing is due to specific Z-mount protocols, and that will become clearer with the next generation of cameras from Nikon.
Viltrox is also entering the “focuses faster” realm now with their PRO lenses, and my hypothesis is that Viltrox has picked up on something beyond the base F-mount protocols and is implementing it in their lenses. Couple that with Nikon completely missing their lens attachment rate goals (2:1 was the stated goal; they’re getting less than 1.5:1, partly because of Chinese lenses), and it seems that Nikon sees the problem as a financial one, which is the reason it now appears they’re asking for mount licensing fees.
Coupled with this is the entrepreneurial nature of the Chinese lens market. There’s a lot of cross-licensing, design sharing, and engineer movement going on between the various Chinese lens companies, so knowledge about what works and what doesn’t is getting shared very rapidly among the Chinese lens producers. This suggests a couple of things: Viltrox is the first to be sued because they’re the ones showing the most impressive growth and are now starting to nibble into the arena where Nikon’s S lenses play, but the entire Chinese lens market is learning from Viltrox, thus the warning salvo of cease-and-desist letters.
I’m not against Nikon asking companies to sign mount licenses and even pay FRAND (fair, reasonable, and non-discriminatory) royalties. However, the way Nikon is going about things, and the fact that they haven’t communicated to Nikon customers what they’re doing and why, is clearly a bad business decision getting badder by the day. All this in a year when there won’t be a lot of new camera announcements from them, too.
I’m starting to see this as a “make or break” year for Nikon. Either they remove these frictions, launch a successful Z9II and some key lenses of their own, plus figure out how to move more units through dealers, or they don’t, don’t, and don’t. In the “make” case, Nikon will see sales growth and some market share increase to continue the positive path they’ve been on. In the “break” case, Nikon will lose some customers and find themselves in a deeper market share war with Fujifilm for third place, with the potential for dropping to #4.
It’s probable that top management and the legal team in Tokyo don’t believe any customers see any of this legal action. They’d be 100% wrong. It’s possible that top management thinks that customers won’t be concerned about any of this legal action if they did know about it. That would also be 100% wrong. Nikon needs to resolve the mount license situation quickly, and the basic way to do that is FRAND patent licensing coupled with making a clear announcement about that.
-------------------
Wrapping Up
And in other news
▶︎ Apple adds Neo, updates other MacBooks. It was a big week for Apple in terms of updates and discontinuations. I won’t be updating my full Mac advice articles on this site until I deploy the new site design, so we’ve got a lot to cover here in short form:
- Discontinued: MacBook Air M4 (13” and 15”), MacBook Pro M5 (13” with 512GB SSD), MacBook Pro M4 (14” and 16”), Mac Studio M3 Ultra with 512GB memory, Studio Display A13, and Pro Display XDR. Many of these are great products for photographers and are seeing discounting as retailers try to clear their inventories. Don’t be afraid to buy those that I recommend.
- Added: MacBook Neo, MacBook Air M5 (13” and 15”), MacBook Pro M5 (15”) with expanded SSD, MacBook Pro M5 (14” and 16”) with Pro and Max chip options, and two new Studio Display options (regular and XDR).
There’s a lot to unpack. First, generational speed gain is about 15% from M4 to M5, all else equal. While that sounds modest, as you move into the advanced versions (Pro and Max) some of the core changes may make more of a difference, if utilized. That said, most of you won’t notice the difference between M4 and M5 performance, all else equal.
Let’s start with the MacBook Neo. It’s very tempting, as it has a low starting price and doesn’t give up a lot to get there. But I won’t be recommending it to photographers because the memory and storage limitations (max 8GB RAM, max 512GB SSD) are too constraining. It’s not that the Neo can’t run Lightroom/Photoshop, it’s that you’re almost immediately into RAM Doubler-like compression and swapping as you work on images, even 12MP ones. As you start to do more, you’ll wait more. I just don’t see a Neo being able to grow with you as you start taking and processing a lot of images. If anything, you’ll chew through the life of the small SSD fairly rapidly with all the swapping going on. The more megapixels your camera has, the more you’ll feel that.
On the other hand, Apple did something I 100% approve of with the new MacBook Pros: 1TB is now the minimum SSD size. For a prolific photographer I’d still recommend storing your images on an external drive, but this larger built-in drive is no longer likely to get in the way of storing your catalog, cache, and swap files. A base level 14” meets all of my basic requirements for a photographer’s laptop now and lists at US$1699. I’d still consider upping the RAM to 32GB, as RAM=performance in the Apple Silicon world. That extra RAM will add US$400 to the cost, though. But the result would be a wickedly good machine that travels well and should last you for some time. You can still go crazy upgrading a 14” MacBook Pro (or 16”), getting you all the way to US$7049 (M5 Max, 40-core GPU, 128GB RAM, 8TB SSD, nano-texture display). But at that extreme you’re deep into “competes with highest-end desktops” performance, and I’m not sure why most of you would need that in a portable. (Disclosure: some of us do. I have a MacBook Pro M4 with 64GB RAM and 8TB SSD, but that’s because I need my desktop when I travel.) What I’d suggest to all those considering the MacBook Pro is to look at sales on the M4 models, but definitely don’t be afraid of the lowest level M5 models.
The MacBook Air is somewhere between the Neo (not a great photographer tool) and the MacBook Pros (superb photographer tools). We now have 13” and 15” Airs with M5, 16GB RAM, and 512GB SSD starting at US$1099 and US$1299, respectively. If you want a really excellent option, pick the 15” with M5, 32GB RAM, and 1TB SSD at US$1899. That’s not going to have issues with Lightroom or Photoshop, will give you some growth space, and will provide excellent performance for virtually anything you might do.
Meanwhile, the Apple external display choices need to be noted: the basic US$1599 Studio Display didn’t really change, it mostly got an updated A19 chip at its core (the original was A13). The more expensive Studio Display XDR is where the changes came: downgraded from 32” and 6K to 27” and 5K, and updated to an A19 Pro chip with a built-in camera. The XDR version is now “only” US$3299, though. Note that neither of the new displays works with Intel-based Macs; we’re in an Apple Silicon world now.
All in all, Apple is pushing the Mac forward very rapidly in capability and performance, now providing workstation-level ability in a highly portable form. Apple’s not done with the Mac for the year, but this first salvo is pretty darn awesome. Somehow Apple has kept a lid on its pricing (with a little juggling) while continuing to push performance at a high pace. The MacBook Neo’s an insane machine for its US$599 price if you’re just looking for a computer to do Web, email, office productivity, and other basic chores. But it’s not quite enough for an active photographer on the road; the base MacBook Air would be the better choice.
———————————
As promised, here is the start of one of my “processed” versions of Elvis. The primary change I’ve applied so far is an anamorphic flare. What I was thinking about as I was “breaking” the camera was “Elvis wears white and he’s in heaven and I’m at the museum where all the old Vegas neon goes when it dies,” so I want an all-white, almost heavenly look.
So what I’m doing here is breaking Photoshop. It needs a lot more breaking to get to what I was imagining, but now you might be able to see where I’m headed. I need to get home and work on this some more on the big computer to do what I want. The flare needs to go behind Elvis, and I need more flare effects. It’s difficult making these decisions on a laptop.
News/Views
- April 2026
- March 2026
- February 2026
- January 2026
- December 2025
- November 2025
- September 2025
- August 2025
- July 2025
- June 2025
- May 2025
- April 2025
- March 2025
- February 2025
- January 2025
- December 2024
- November 2024
- October 2024
- September 2024
- August 2024
- July 2024
- June 2024
- May 2024
- April 2024
- March 2024
- February 2024
- January 2024
- December 2023
- December 2022
- December 2021
Looking for older News and Opinion stories? Click here.