Let's mix it up a bit, since camera news ain't happening until NAB next week. This week I'll answer a few reader questions that have popped up recently.
"Does the 1/focal length rule still apply to shutter speed in the age of image stabilization?"
It wasn’t a “rule.” It was a rule of thumb. What one person could achieve in terms of a good image of a static subject was (and remains) different than what another person could achieve.
As to image stabilization: lens stabilization will achieve different results than sensor stabilization. Sensor stabilization has issues with telephoto lenses (certainly by 200mm), for instance, because it is moving the focal plane two-dimensionally, not rotating the image three-dimensionally at the optical center. Overall, however, any form of stabilization should let someone use a slower shutter speed on a static subject than without stabilization. The question is “how much slower?” And that depends upon you.
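The classic rule of thumb and the "how much slower?" question can be sketched as a toy calculation. Everything here is an assumption for illustration (the function name, the idea that each claimed stop doubles usable exposure time); your own handling discipline determines what you actually achieve.

```python
def handheld_shutter_s(focal_mm, crop_factor=1.0, stab_stops=0.0):
    """Classic 1/focal-length rule of thumb, returned in seconds.

    stab_stops is the *claimed* stabilization benefit; each stop
    nominally doubles the usable exposure time. Treat the result as
    a starting point for a static subject, not a guarantee.
    """
    base = 1.0 / (focal_mm * crop_factor)   # e.g. 200mm -> 1/200s
    return base * (2.0 ** stab_stops)       # each stop doubles the time
```

For example, a 200mm lens on an FX body with a claimed 3 stops of stabilization works out to 1/200 × 8 = 1/25 second. Whether you can actually hold 1/25 at 200mm is the "that depends upon you" part.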
However, you’ll note that I keep writing (and emphasized) “static subject.” In my Nikon Field Guide I pointed out that subject motion requires very specific shutter speeds in order to capture edges accurately. For a person walking across the frame, that would be 1/125. Image stabilization doesn’t “improve” that requirement.
Distance to the subject and the subject's speed are the key variables; a secondary variable would be motion direction vis-à-vis the camera. Something moving at 5mph across the frame at 12 feet requires 1/500 second, while at 100 feet 1/60 second is probably fine. For subjects moving towards the camera those numbers probably drop two stops (1/125 and 1/15).
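The geometry behind those numbers: blur at the sensor is roughly subject speed × exposure time × (focal length ÷ distance), so the allowable shutter time scales linearly with distance, which is exactly the 1/500-at-12-feet versus 1/60-at-100-feet relationship above. Here's a hypothetical sketch; the pixel pitch and blur tolerance defaults are my assumptions, not fixed values, so only the scaling should be trusted, not the absolute numbers.

```python
def max_shutter_for_motion_s(speed_ft_s, distance_ft, focal_mm,
                             pixel_um=4.35, max_blur_px=2.0):
    """Longest shutter time (seconds) that keeps subject motion blur
    under max_blur_px pixels, for motion across the frame."""
    # For a distant subject, magnification is roughly focal/distance,
    # so image-space speed (mm/s) = subject speed * focal / distance.
    distance_mm = distance_ft * 304.8
    image_speed_mm_s = (speed_ft_s * 304.8) * focal_mm / distance_mm
    allowed_blur_mm = max_blur_px * pixel_um / 1000.0
    return allowed_blur_mm / image_speed_mm_s
```

Note that doubling the distance doubles the result, and doubling the focal length or the subject speed halves it; motion toward the camera changes the magnification more slowly, which is why those numbers can drop a couple of stops.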
So this notion that “stabilization lets you use lower shutter speeds” is an overstatement, at best. And when it does help, how much lower you can go depends a lot on your handling discipline. You’re far better off considering subject motion when you choose a shutter speed, regardless of your stabilization setting.
"I was panning while photographing the takeoff of a plane and was completely parallel to it, but it appears that the front of the plane is slightly out of focus and the back is not. What would cause this?"
Most answers you’d get from the Interweb will say this was due to decentered lens elements or possibly heat waves. But the more likely culprit is image stabilization. You’re likely panning with something going 120 to 150mph. Stabilization capabilities are higher in the center of the frame than at the edges, often by two stops or more.
With sensor-based stabilization, there’s also a limit to what can be done at the far edges of the frame. The stabilizer can only move the platform so far, plus shifting the sensor doesn’t fully correct for situations where a tilt or rotation was needed. Even though they claim 5-axis stabilization, many mirrorless systems really assess and correct only at the center of the frame, and CIPA numbers are usually reported for the center axis only. On some cameras, Nikon now allows stabilization calculations to be done at the location of the focus sensor that’s used, which may be off-center. So if the camera was focusing on the tail of the plane near one edge of the frame, the front of the plane at the other edge may have less (or wrong) correction.
With lens-only stabilization on older systems (e.g. DSLRs) there was also an interaction between shutter speed and stabilization motor speed that could cause unwanted aliasing (which I’ve written about for two decades now, and for which I got a lot of initial pushback, but on which I’ve basically been proven correct; my information came directly from engineers who designed the system, after all).
I’ve been writing the following since image stabilization first appeared: in situations where you don’t need it—e.g. when you’ve got a shutter speed that should freeze even that fast-moving plane—stabilization is best turned off. While the best case is that stabilization corrects all problems, the worst case is that it introduces new flaws. A likely case is that you get some edge aliasing.
Here’s the rub: cameras come with stabilization turned on by default these days, and that does stabilize the viewfinder for camera handling and framing, so there’s an immediate perceived advantage. With really long telephoto lenses and no stabilization active, most users would have trouble keeping their camera aligned with moving subjects. So another perceived advantage. But don’t let this fool you into thinking that stabilization is perfect or will always do the right thing and not introduce any image changes.
Stabilization is a tool that has consequences. Use it 100% of the time and you’ll eventually find images that show those consequences. It's impossible to know for sure whether that was what happened in your case, but it's my first suspect.
"What’s the heaviest lens you’d use when carrying the camera using a neckstrap?"
Simple rule of thumb: if the mass of the lens exceeds the mass of the body, you should be carrying the combo by the lens. The more the mismatch and the more the lens mass is centered away from the mount, the more important it is that you follow this rule.
The lens mount is the point of weakness between body and lens, and the mount is designed to break (on both body and lens) once stressed past a certain force level. The reason for that is that repairing a mount is far cheaper than repairing damage to the body or lens structure.
Beyond that, camera+lens these days tends to exceed three pounds even on the simplest of systems. That’s a lot of force on your neck, too. Almost none of us who carry cameras all the time use traditional neck strap carrying. At a minimum, we use shoulder straps, but this is where Cotton Carriers, various sling belts, and harnesses come into play. Even then you need to be careful, as just hanging a camera+lens off a carrier doesn’t isolate g-forces on the mount unless there are multiple points of contact.
"How do you tell if your lens mount is off?"
Let’s start with some basics: in the modern Z System cameras the image sensor is adjustable using a threaded screw system. In the DSLR era, sensor planes were adjusted using small shims, and since shims came in only certain thicknesses, very small mount deviations were common. The current system is not only much more adjustable, it stays locked in place. At the factory, after the lens mount is positioned on the body, an automated alignment step makes sure that the lens mount and image sensor are parallel, then everything is locked into place with a small dab of locking solution. I’ve rarely seen a Z camera out of alignment from the factory, and haven’t seen even a single out-of-alignment camera in the Z9 generation models.
So what would make a lens mount be off? Basically two things: (1) impact damage to the mount, typically via offset force on the lens (e.g. when the camera falls and hits lens-first); and (2) a broken VR sled under the image sensor. Both of these need fixing by Nikon repair; they’re not things you can deal with yourself.
Coupled with that is whether the lens itself is manufactured correctly. In particular, any tilted lens element can cause the image plane to not be parallel to the mount and sensor. In practice, this looks like the lens mount being off, but isn’t.
So, how do you distinguish between lens element positioning and lens mount positioning problems? Well, the first thing is to see if you have any issue at all. That’s done by taking images of something on a single plane (and making sure your camera is absolutely parallel to that plane) and looking for differences in corner acuity. One corner blurred more than the others probably indicates an internal lens element alignment issue. A full side of the image differing from the other could suggest a mount issue, and needs to be investigated further.
However, many people performing such tests do so on charts up close. The problem here is that it is very easy to be out of alignment with the chart, which will look like a mount alignment problem but is really a chart alignment problem. If you want to be sure, put a mirror dead center in the frame on the wall/chart you’re photographing and verify that you see the center of the lens in the center of the mirror. You don’t want to be off by even a little.
If getting things parallel isn’t something you want to take the time to do, infinity is your friend. If you can get yourself reasonably square to things very distant (e.g. buildings near the horizon), the amount you could be off from parallel is so small it shouldn’t show in your images.
Next: is this a one-lens problem or an all-lens problem? A one-lens problem is certainly a lens problem, not a mount one. An all-lens problem can indicate a mount problem.
One common complaint from people is “but I never dropped my camera.” Thus when Nikon repair claims “impact damage” they object and think they shouldn’t pay for the repair. But note that I wrote that I’ve rarely (and more recently, never) seen a mount issue from the factory. Nikon’s QA procedures have tightened considerably over the years, and the method by which alignment is done has been improved and made very reliable.
However, mounts are designed to break. On the camera side, incredibly shallow screws are used to hold the mount in place. This is because it is far cheaper to repair mount alignment/damage than it is to repair structural damage in the lens or camera body. Constant vibration, repeated jamming, and accidental impact forces (even bumping into someone else with your lens) can degrade the mount alignment, sometimes slowly via repeated incidents. I’ve watched students in a hurry jam a lens into the mount incorrectly when changing lenses, then practically force the lens into place.
The vibration/impact reasons are why I never travel with lenses mounted on camera bodies. Have you ever seen how people treat your carry-ons in the overhead? I’ve even had a goat fall down a cliff with some of my gear strapped to his side (long story). You want no forces to be applied to the mount whenever possible. So you keep body and lens separate whenever you’re not using the gear. And when you are using the gear, you stay aware of any and all impacts, as well as oblique and gravitational forces that might impact the mount. You’ll note in the above question I don’t tend to carry cameras by neck straps. That’s gravity I’m trying to deny, as a front heavy lens is constantly putting a force on the mount, particularly as I walk with it.
"When someone writes or talks about how a lens “renders”, what do they mean?"
An excellent question. Often in photography we have terms that are bandied about that aren’t defined, or for which there is an assumed (but not necessarily agreed upon) definition.
The dictionary definition of “render” has multiple possibilities: (a) to reproduce or represent something; (b) to produce a copy; (c) to cause something to be.
None of those definitions quite tell us enough for what is meant by using the word render when it applies to lenses. Here’s my take: the minute someone starts talking about how a lens renders, I believe them to be speaking about how the optics impact the depiction of a three-dimensional world into a two-dimensional capture.
Some people will say that, no, what they mean is how well the lens mimics what our eyes do. Unfortunately, we have multiple problems trying to use that definition: (1) our eyes are not the same; (2) most eyes have defects; (3) our eyes don’t resolve to a two-dimensional capture (our retinas are curved); and most importantly (4) it’s actually the brain that does most of the work, because it interprets signals and can do so in different ways (e.g. synesthesia).
Layered on top of this is viewer experience. All of us have decades of experience looking at images that were captured with simpler lenses. Much like 24fps makes motion capture more “film-like” than 60fps (more TV-like), central sharpness with unmanaged optical falloff as you move towards the corners of the capture frame is something we’ve been conditioned to.
So let’s get back to how a lens renders. Using my definition—three-dimensional to two-dimensional reproduction—you can start putting measurements and values on how accurately a lens is doing that. One that doesn’t get talked about much with still photography but which we talk about all the time in film and video is cat’s eye bokeh. It’s pretty common to encounter small bright lights or highlights behind a subject, particularly at night. When they defocus (they’re behind the focused subject), light points should become larger, round, out-of-focus blur circles, with no new characteristics. That’s generally true for almost all lenses on the central axis, but as you get to the corners you start to see round turn into elliptical or eye shapes (cat’s eye), often because of where light baffles are located in a lens (they’re eclipsing part of the out-of-focus blur).
I could dissect every parameter of a lens and apply this same criterion—the three dimensions get correctly captured in two dimensions—and for the most part, I do that with my reviews. Pretty much everything in the performance section of one of my lens reviews deals with this definition of “rendering.”
The tricky part these days is that lens designers added a tool to their optical designs in this century: lens corrections done via software processing on the image data. Three easier-to-post-correct rendering flaws are often ignored (or not fully minimized) in current lens designs: vignetting, linear distortion, and lateral chromatic aberration. I have no problem with this where we’re not talking about massive eventual pixel changes (e.g. >2EV vignetting, >2% distortion, >2 pixel CA suppression). The likelihood that modest pixel changes have real visual consequences is small enough to be ignorable, and the correction is arguably better than leaving things uncorrected.
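To make concrete what a software vignetting correction does, here's a toy sketch. This is not how any camera maker actually implements profiles (real corrections use measured per-lens data, not a simple model); the function name, the r² falloff assumption, and the single-channel input are all mine for illustration.

```python
import numpy as np

def correct_vignette(img, falloff_ev):
    """Apply a radial gain map undoing an assumed r^2 light falloff.

    img:        2D (single-channel) float image array
    falloff_ev: stops of light lost at the extreme corner (assumed)
    """
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # radial distance from image center, normalized so corners = 1.0
    r = np.hypot(xx - (w - 1) / 2.0, yy - (h - 1) / 2.0)
    r = r / r.max()
    # center gets gain 1.0; the extreme corner gets 2**falloff_ev
    gain = 2.0 ** (falloff_ev * r ** 2)
    return img * gain
```

The relevant point for rendering: a 2EV correction quadruples corner values (boosting noise there too), which is why large corrections are worth worrying about while modest ones are ignorable.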
Moving these optical problems to post correction allows lens designers to better concentrate on the real thing that can distinguish a lens: contrast. MTF is the way we usually measure that, but some will talk about acuity, sharpness, resolving power, and other related stand-ins. Before aspherical lens polishing came on the scene, lenses pretty much had their highest MTF in the center, and that fell off as you moved further from the center. Aspherical lens elements started to change that, and modern lens designs can be quite good at corner-to-corner contrast.
That last bit is what typically leads to lens rendering debates. Technically, our eye-brain system is highly center-oriented: we see “sharp” in a narrow area controlled by where we point our eyes. Our brain deprioritizes data off that narrow field, which can feel like lack of focus. That’s similar to lens designs of the previous century: a sharp central area falls off outside of it. It’s dissimilar to the way many of us view most of our photos now: on a flat screen fully (or almost fully in the case of a 27” monitor) within our central vision predilection. My prediction is that Millennials and later are not going to have this same debate down the road: they’re being conditioned to seeing edge to edge where they want it sharp (or rather, as sharp as the DOF would define).
Personally, I put a stake in the ground a long, long time ago: I want optimal data capture. That would imply a perfect lens with no optical liabilities. I can easily process in corner blur effects later, but it’s really difficult to process in corner sharpness when it was never there in the first place.
Your mileage will vary. But I have to ask: how do you know what that is? Are you even thinking about this? Or is your reaction to lens rendering always “I know it when I see it?”