Software Week Recap



HDR
July 26 (commentary)--I've been mostly silent on software for some time now. There's a simple reason for this: if I can't keep up with the hardware reviews, I don't have a chance with software. It changes faster and has more entrants vying for attention.

[image: the five-exposure bracket merged with HDRExpose. Copyright 2010 Thom Hogan]

The state of software keeps changing. The image just above is an example. This is a five-image HDR stack. To date, these five images have posed a problem to every HDR program that's come around (read on), and you can see some of that in today's image. But wait, this is close to the best I've been able to do so far from those images with an HDR program!

Let's step back a bit. HDR stands for high dynamic range. Some scenes, and this is one of those, far exceed our camera's abilities to record. If I set my exposure for the sunlight hitting the very peak of Denali, the foreground disappears into the noise floor:

[image: exposed for the sunlit peak of Denali; the foreground drops into the noise floor. Copyright 2010 Thom Hogan]

But if I expose for the foreground, Denali fills the electron wells to saturation and disappears in an over-abundance of white:

[image: exposed for the foreground; Denali blows out to white. Copyright 2010 Thom Hogan]

According to the EXIF, those exposures are the extremes of a five stop bracket. The problem with HDR is this: paper is decidedly LDR (low dynamic range). Even great direct displays aren't exactly HDR. We have to compress our captured tonal range into our output (even for "normal" exposures), and this causes problems. If the image above looks a little flat to you, that's because we've got 13 stops squeezed into maybe half that. If you thought post processing was difficult, post processing HDR complicates the problem.
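To put some rough numbers on that squeeze, here's a quick back-of-the-envelope sketch (Python, just arithmetic; the stop counts are the approximations from the paragraph above):

```python
# Back-of-the-envelope numbers for the squeeze described above. Each stop doubles
# the luminance ratio, so N stops of dynamic range is a 2**N : 1 contrast ratio.
scene_stops = 13      # roughly what this scene spans
output_stops = 6.5    # roughly what a print can hold ("maybe half that")

scene_ratio = 2 ** scene_stops    # about 8192:1
output_ratio = 2 ** output_stops  # about 91:1

print(f"scene contrast  ~{scene_ratio:,.0f}:1")
print(f"output contrast ~{output_ratio:,.0f}:1")
print(f"tones must be compressed by a factor of ~{scene_ratio / output_ratio:.0f}")
```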

The way I did "HDR" back when I took this image was to essentially take a mountain exposure and a foreground exposure and layer them in Photoshop. Since we've got a pretty clean break between the two areas, it's easy enough to create a layer mask that does this. You can see an example of the same image done quickly that way at the bottom of this paragraph. Looks a little better, doesn't it? That's because I independently compressed the highlight (mountain) values, then did the same for the foreground values. I treated the two extremes as separate pictures and separate processing problems. Fortunately, both areas kind of fade into midtones, so I was able to make those midtone values mostly meet up in the middle. But the top and bottom of the image are definitely highly processed to get tones where I want them.

[image: a mountain exposure and a foreground exposure layered by hand in Photoshop. Copyright 2010 Thom Hogan]
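For those who'd rather see the idea in code than in layers, here's a minimal sketch of that do-it-yourself blend. It assumes Python with numpy and tifffile, two already-registered frames developed to 16-bit TIFFs, and a simple gradient standing in for the hand-painted layer mask (the filenames are hypothetical):

```python
# Minimal sketch of the do-it-yourself approach: blend a mountain exposure and a
# foreground exposure through a soft mask. Assumes the two frames are registered
# and already developed to 16-bit TIFFs; in practice the mask is hand-painted in
# Photoshop rather than a simple gradient. Filenames are hypothetical.
import numpy as np
import tifffile

mountain = tifffile.imread("denali_for_peak.tif").astype(np.float64)          # exposed for the peak
foreground = tifffile.imread("denali_for_foreground.tif").astype(np.float64)  # exposed for the tundra

h = mountain.shape[0]

# Soft vertical gradient: 1.0 (take the mountain frame) at the top, 0.0 (take the
# foreground frame) at the bottom, with a feathered transition through the midtones.
mask = np.clip(np.linspace(1.5, -0.5, h), 0.0, 1.0)[:, None, None]

blended = mountain * mask + foreground * (1.0 - mask)
tifffile.imwrite("denali_blend.tif", np.clip(blended, 0, 65535).astype(np.uint16))
```

A real mask follows the horizon rather than a straight gradient, of course; the point is simply that each exposure contributes only the tonal range it captured well.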

A few years back programs started appearing to give photographers more control over melding an exposure bracket series into an "HDR image." Probably the best known of those is Photomatix (that's MATT-IX, not MAY-TRICKS as many mispronounce it). Photoshop itself now has a Merge-to-HDR function. And recently we've gotten some new players, most notably HDRExpose, which is what I used (quickly) to do the top picture. If you look closely you can see some haloing and other artifacts of the HDR process in that image. Still, I've found that HDRExpose gets me closer to what I want more quickly than the other alternatives I've tried. (To be fair, I should point out that the images I'm showing here are some of the toughest I can throw at converters. There's smoke in the air, which is distorting colors. The red channel is blowing out, even in some of the foreground midtones. The pixel data is just all over the place and needs a lot of moving. This isn't your everyday sunny shot at the equator.)
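If you're curious what a merge-to-HDR program is doing under the hood, here's a rough sketch using OpenCV's Debevec merge and Reinhard tone mapper. It illustrates the general technique, not what Photomatix or HDRExpose actually implement, and the filenames and exposure times are placeholders:

```python
# Rough sketch of what a merge-to-HDR program does: combine a bracket into a
# floating-point radiance map, then tone map it back down to a displayable range.
# Uses OpenCV's Debevec merge and Reinhard operator; files and times are placeholders.
import cv2
import numpy as np

files = [f"bracket_{i}.jpg" for i in range(5)]                        # five-frame bracket
times = np.array([1/500, 1/125, 1/30, 1/8, 1/2], dtype=np.float32)    # exposure times (s)

images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge into a 32-bit radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

# Tone map the radiance map back into 8-bit output; this compression step is
# where the halos and flatness creep in.
tonemap = cv2.createTonemapReinhard(gamma=2.2)
ldr = tonemap.process(hdr)
cv2.imwrite("merged_tonemapped.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```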

Since I know you're going to ask, I'd rate things this way for HDR, from weakest to strongest: Photoshop Merge-to-HDR, Photomatix, HDRExpose, do-it-yourself.


But within that progression is hope. Software keeps getting better. After years of prodding to do better, the Adobe raw converter profiles are now in the realm of good for Nikon DSLRs, and I get regular emails from the team working on them asking if they're better. Yes, they are. Much. They can be even better ;~).

One of the things I've noticed in revisiting some older images is that I'm getting a better final product from them today. Some of that is my own knowledge, which has obviously progressed over the years. But some of it is that the software tools have gotten better. It's almost as if we can go back into the darkroom and run our film through a different chemical set. It's a strong reason to shoot raw (or at least raw+JPEG).

This is one of the reasons why I say that the act of photographing is "the capture of optimal data." If you get the data capture right, as software gets better you're going to find that there's more in your pixels than you thought. But this is where we start: optimal data capture. If you goof up on that--and I did on this shot--you're going to have a ton of problems down the line. So, next, we'll deal with that. Later in the week, we'll get back to software products.

Optimal Capture
July 27 (commentary)--The act of photographing should be "the capture of optimal data." If you get the data capture right, as software gets better you're going to find that there's more in your pixels than you thought.

Today's top image (as was yesterday's, now on the software week recap page) is an example of non-optimal data capture. Today's problem is also dynamic range: to keep the upper half of the image within the sensor's capture range, the shadow side of the elephants was recorded at too low a level. I can't bring up more information there because it's buried deep in the bits along with noise and rounding errors.

At the top of our sensor's range, we worry about electron well saturation (255 in 8-bit values), and we worry about it in all four channels (we have two green channels that are slightly independent). To keep us from going over the top of the sensor's capacity, we use UniWB techniques to evaluate the channel histograms. That's unfortunately the only way to get a reasonably accurate assessment of the highlight data.
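UniWB itself is an in-camera trick that makes the JPEG histogram approximately track the raw channels. If you want to check the channels directly after the fact, here's a sketch using the rawpy library (just one way to read the raw data, not a tool mentioned above; the filename is hypothetical):

```python
# Sketch of checking the four raw channels (R, G1, B, G2) against sensor saturation
# after the fact, using rawpy. UniWB approximates this check in-camera by making the
# JPEG histogram track the raw channels.
import numpy as np
import rawpy

with rawpy.imread("DSC_0001.NEF") as raw:            # hypothetical filename
    data = raw.raw_image_visible.astype(np.int32)
    colors = raw.raw_colors_visible                   # 0=R, 1=G1, 2=B, 3=G2
    white = raw.white_level                           # saturation value for this body

    for idx, name in enumerate(["R", "G1", "B", "G2"]):
        channel = data[colors == idx]
        clipped = np.mean(channel >= white) * 100
        print(f"{name}: max {channel.max()} of {white}, {clipped:.2f}% at saturation")
```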

On the other end, the dynamic range of our camera plays a part. All else equal, I'll pick up the body with the largest dynamic range (that would be my D3x at base ISO, my D3s at high ISO values). But beyond that, we want accurate information in the shadows, which is why we use 14-bit capture if we have it available. On a D3x, for instance, this reduces frame rate but adds just a slight amount of precision to the lowest data values. More bits also mean fewer rounding errors introduced in the data.
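If you want to see why those extra bits matter mostly in the shadows, here's a little arithmetic sketch: raw data is roughly linear, so each stop down from saturation gets half the remaining levels.

```python
# Why extra bits matter mostly in the shadows: raw values are (roughly) linear, so
# each stop down from saturation has half as many levels as the one above it.
# Count the levels available in each of the bottom-most stops.
for bits in (12, 14):
    top = 2 ** bits
    print(f"{bits}-bit capture ({top} total levels):")
    for stop in range(1, 7):
        levels = top // (2 ** stop)       # levels within this stop
        print(f"  stop {stop} below saturation: {levels} levels")
```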

But there's much, much more to getting optimal data than just getting the exposure right. I once asked a well-known perfectionist photographer to list all the things he could think of in terms of optimal data, from capture to processing. Here's a condensed list (yes condensed, highly so):

  • Cleanliness: clean camera chamber, clean sensor, clean lens, clean lens contacts.
  • Blocking: correctly sized lens hood with matte surface, lens with correct internal light baffling, viewfinder eyepiece closed.
  • Exposure: UniWB histogram checked, sensor cooled (likewise battery cooled).
  • Lens: carefully chosen (probably) prime that has been verified accurate, Live View focus verification, aperture chosen in the optimal range (e.g., from a stop down to the diffraction-limited aperture).
  • Settings: VR off, mirror-up, UniWB, flat contrast slope, no sharpening.
  • Support: heavy tripod, top-notch head, remote wireless release.
  • Post process: large color space (e.g., ProPhotoRGB), 32-bit processing of data, hardware gamut matching.

A few things should be obvious from that list. First, we can't usually do it all (e.g. I can't usually cool my sensor on safari). Many of the things require attention to equipment (lens, support). Some involve lots of pre-shot work (UniWB channel histogram check, focus verification, etc.). But the kicker is that some require settings that don't give you a usable JPEG (UniWB, zeroed out Picture Control settings). It's a pity that the camera makers don't understand that, because it makes our work so much more difficult in the field.

But note what happens in post processing: large tuned gamut with 32-bit floating point math processing. Guess what? We don't have that in software yet. Well, we have bits and pieces of the puzzle from some specialty software, but if you're using, say, Lightroom, you've got a slightly mismatched gamut (ProPhotoRGB) and mostly 16-bit math (and math that sometimes rounds when we don't want it to).
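Here's a toy illustration of the kind of rounding I'm talking about--not how Lightroom or any particular converter actually does its math, just what happens when a darken-then-brighten chain passes through an integer intermediate instead of 32-bit float:

```python
# Toy illustration of the rounding issue: darken an image three stops, store the
# intermediate result, then brighten it back. With a 16-bit integer intermediate the
# distinct shadow values collapse (posterize); with 32-bit-style float math they
# survive. A simplified model, not how any particular converter is implemented.
import numpy as np

shadows = np.arange(1000, 1010, dtype=np.float64)    # ten adjacent shadow values

# Integer pipeline: the intermediate store forces a round to whole numbers.
intermediate = np.round(shadows / 8).astype(np.uint16)   # darken 3 stops, store
restored_int = intermediate.astype(np.float64) * 8       # brighten 3 stops

# Float pipeline: same math, no intermediate rounding.
restored_float = (shadows / 8) * 8

print("original:      ", shadows.astype(int))
print("integer store: ", restored_int.astype(int))    # values collapse to 1000 or 1008
print("float store:   ", restored_float.astype(int))  # all ten values preserved
```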

Is that bad? Not at all. What we have for raw conversion and image editing today is far, far better than what we had ten years ago. If we record absolutely best-case data in the field, we're better able to extract that last little bit with today's software than we were with yesterday's.

So why did I take this diversion today into a realm outside the software itself? Because many of you aren't recording optimal data. It's the old computer axiom: garbage in, garbage out. When I talk about getting a more accurate conversion out of product X versus product Y, I'm doing so when assessing optimal data. If I throw non-optimal data at the converters, I get a different result. As a matter of fact, if you shoot non-optimal data I'm not sure that you can complain about the job any converter does, nor can you accurately assess differences between them.

So keep that in mind as I write more about the software we have available to us today. Give it mush and you get mush in the output. Give it precision and we're now starting to get precision in our output.

Conversion
July 28 (commentary)--Let's get the dirty work out of the way right up front: I no longer recommend Capture NX2 as the best NEF raw file converter [ducks head to avoid 70-200mm thrown at him by Nikon management].

Back in the days of the D1 and D100, Nikon Capture was about the only game in town that could really get you to the highest quality rendering of your optimal capture (see yesterday's installment, now on the recap page). With each passing generation of Nikon DSLRs, the gap between what Capture could do and what you could get out of other raw converters got smaller and smaller.

What didn't get smaller were Capture's quirks and problems. If anything, they've grown. The big Capture killer seems to be OS updates, sometimes even minor ones. Capture gets updated late and still doesn't support many things that have been mainstream for a while now (64-bit or reliable multicore support, for example). The oddball UI remains oddball yet incomplete. Nikon's idea of workflow is antiquated and not photographer-centric. And there seems to be a steady stream of installation and crash problems being reported. In short, Capture feels more and more like a neglected child and less like the power tool it once was.

Meanwhile, other converters just get better and better. Not just at conversion, but in adding other useful features and increasing their conversion performance speeds. If you want to see fast batch processing on a multicore machine, try BibblePro. If you want to see plenty of control and features, the Adobe converters come to mind. If you need carefully crafted camera profiles, the PhaseOne converter (Capture One) does a nice job out of the box. If you want every last ounce of pixel goodness out of your NEF and are a geek at heart, try something like RPP (unfortunately Mac only).

Simply put, there's just not enough on the positive side of the slate to recommend Capture NX2 any more. Thus, I'm officially dropping my recommendation of Capture NX2 [dodges multiple D3000's thrown at him].

So what do I recommend? Well, I've been using Adobe Raw Converter most of the time lately (with dips into RPP for tough images). It fits into standard workflows better, it works fast on my current machines (in 64-bit, multicore), and with some gentle nudging and tweaking and use of custom camera profiles, it delivers conversions that look just fine and that I'll put up against any Capture NX2 conversion. Conversions that are linear distortion corrected, lateral chromatic aberration corrected, vignetting corrected, filtered to take out some slight idiosyncrasies with Nikon's Bayer filtration, and more.

Not that ACR (and Lightroom) are perfect. They are not. They still come up with some odd white balance values on their own (but that's what reference images are for with UniWB: just balance to the reference and apply to all the images--oh, wait, even that's easier done in the Adobe apps than in Capture NX2 ;~). I've long complained about ACR being a bit too "orange" on the reds. Adobe's latest profiles are very close, but I still find myself in the HSL controls doing tweaking. But I found myself doing similar tweaking with Capture NX2, too, usually to try to remove Nikon's slight magenta bias in skin tones.

As I wrote earlier this spring when I wrote about workflow, I prefer working in Photoshop over Lightroom for converting and editing images. Lightroom is where my images end up, not where they start. So conversion means for me: 16-bit Smart Object with lens corrections applied, a white balance usually from reference, and some slight HSL tweaking. I do not use Adobe noise reduction or sharpening (more on those things later in the series).

What I don't like about the current Adobe solution is dealing with multiple images (HDR, panos, stacking, etc.). I'm finding that I tend to go back to what I was doing with Capture NX2: convert files individually into 16-bit TIFFs, then deal with them together. Adobe still hasn't quite got the right photographer-centric workflow for the multi-image user. I want to select images to group, apply conversion and review it on each individual image (e.g. flipping through the stack as I apply conversion changes), process the conversions into a new group (probably of PSDs to preserve the Smart Objectness, but also possibly TIFFs), and then apply the multi-image technique of choice on them (HDR, pano, stack, etc.). And that last bit may not be Adobe's ;~). Perhaps if I knew Bridge better I could do some of what I'm asking for, but I still think I'd find it falls a bit short of the workflow I want.
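As an aside, the "convert each frame to a 16-bit TIFF, then combine" step is easy enough to script if you step outside the Adobe tools. Here's a sketch using rawpy and tifffile--an analogy to the approach described above, not the actual ACR/Capture NX2 workflow, and the paths are placeholders:

```python
# Sketch of the "convert each frame to a 16-bit TIFF, then combine" step, using
# rawpy and tifffile rather than ACR or Capture NX2 (an analogy to the workflow
# described above, not a description of it). Paths are placeholders.
import glob
import rawpy
import tifffile

for path in sorted(glob.glob("bracket/*.NEF")):
    with rawpy.imread(path) as raw:
        rgb = raw.postprocess(
            output_bps=16,            # 16-bit output
            use_camera_wb=True,       # or substitute a reference white balance
            no_auto_bright=True,      # keep the bracket's exposure spacing intact
            gamma=(1, 1),             # linear; hand the tone work off downstream
        )
    tifffile.imwrite(path.replace(".NEF", ".tif"), rgb)
    print(f"wrote {path.replace('.NEF', '.tif')}")
```

From there the 16-bit TIFFs go into whatever multi-image tool you prefer for the HDR, pano, or stacking step.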

Still, what I'm saying is that Adobe's conversion is perfectly acceptable and a better choice for me now than Nikon's. If I have some real tough issue to solve or really am trying to extract maximal detail, I'll use RPP on an image, but I do that fairly rarely.

That said, I've looked at several other converters lately, and if Adobe's solutions didn't exist, I'd be using them instead of Capture NX2. CaptureOne seems to be slightly easier to get excellent white balance and color out of than Adobe. Its conversions are quite good, too. BibblePro, as I noted, really maxes out the resources on my system (12GB RAM, SSD scratch drive, fast RAIDed data drives, quad cores) and gobbles through batch conversions faster than anything else I've tested. It, too, is capable of excellent conversions.

But let me put things a different way. From which converters have I felt I could extract really high quality conversions of a well-made D3x image? Adobe Raw Converter, Aperture 3, BibblePro, CaptureOne, Lightroom 3, Capture NX2, RPP, and Silkypix. I'm sure there are more, but that's the extent of my testing at the moment. In other words, every converter I've tested can be tweaked to get critically good conversions out of current NEF files.

Aside: if you're still using a D1x or D100 or another very early Nikon DSLR, things aren't quite so clear. I've noticed, for example, that the 10.4mp conversions for D1x files on some modern raw converters aren't doing as well as you can get out of Nikon Capture. At least one has dropped 10.4mp conversions completely.

So it isn't the conversion itself that's the big issue any more. The big issues are three: (1) How does the converter fit into your workflow? (2) How fast is the converter? (3) How long does it take you to "tweak up" a NEF to its fullest capability in the converter? Rating each converter on those three points, in that order:

  • Adobe Raw: Great, Fast Enough, Fast
  • Aperture 3: Great, Fast, Fast
  • BibblePro: Good, Wicked Fast, Fast
  • CaptureOne: Good, Fast Enough, Very Fast
  • Lightroom: Great, Fast Enough, Very Fast
  • Capture NX2: Fair to Good, Slow, Very Fast
  • RPP: Fair, Slow, Slow
  • Silkypix: Good, Fast Enough, Fast

You can see why I only use RPP on challenging images: it doesn't really fit into my workflow all that well and it takes time to extract that last extra bit of information from the NEF.

What's really happened with converters is that all the other players have gotten better memory management, started utilizing multiple cores, and tuned their software better to both the underlying OS and the photographer's workflow. Nikon hasn't. So as the advantage Nikon had with understanding the spectral aspects of their sensors and applying demosaic techniques to the data has slowly eroded, it's exposed other weaknesses in the software.

Put simply: Capture NX2 needs a rewrite from the foundation up. It needs more resources devoted to it. Nikon needs to commit to keeping up with the Macintosh and Windows platform changes. Nikon needs to deliver 64-bit, multicore processing. Nikon needs to integrate better into differing photographer workflows. That's a lot of work to do. And that's not even addressing the funky user interface and non-standard controls, let alone bugs and crashes. But the need to do all of those things--when the competitors already have--is the reason why I'm withdrawing my recommendation of Capture NX2 [moves aside so that D3 thrown at him by Nikon employee doesn't hit him].

My new converter recommendation? Try the latest version of ACR or Lightroom. If for some reason those don't work for you, try Aperture, BibblePro, or CaptureOne. If none of those work for you, I'd sure as heck like to see your NEF files, because there's something in yours that isn't in mine.


 
