The compilation of April 2010 front page essays in one spot.
Originals: appeared 4/5/2010 through 4/9/2010
I've been surfing the Web a lot since I returned home, trying to catch up on the things I missed while Internet-deprived. One thing I'd previously noticed struck me even more forcefully: there's been a modest but important shift in many people's attitudes about cameras, and it's now showing up in camera sales numbers. Some of the camera makers seem to get it (or were just lucky in their design changes), while others haven't yet figured it out. This week I'm going to be doing a series of articles on the future of cameras. Today, we'll start with some basic observations.
Up until a year or two ago the forums across the Internet were pretty much completely filled with "we need more" (pixels, lenses, high ISO, video, you name it). The general mood was that while current cameras were "good," future cameras would be "better." Better in tangible ways that would show up in images. And that showed up in pretty much every survey I know of, too: people were buying new generations of digital cameras to replace older generations and the decision was often centered on purely technical aspects (more pixels, better image quality, more features).
Today, things are a little different. True, there is still a group that is asking for more. But I'm noticing more and more people backtracking and looking at the convenience of the product as a primary decision point. Instead of pushing forward and buying the More Pixels, More Expensive Camera line, I see more and more people finding things like the m4/3 bodies or the latest round of all-in-ones or even low-end DSLRs or high-end compacts and declaring "that's enough camera for me." Coupled with that comment is usually another: "and I'm enjoying photography more."
In short, more and more people are getting back to just shooting photos, and fewer are obsessing over the technology involved.
This is a normal technology phase: the technology backlash. Early adopters are absolutely technology-driven, as technology always enables things that weren't possible before. Late adopters are convenience-driven, as the technology is by then mature enough that the kinks of using it have been worked out and the designs have been commoditized for the masses. Certainly for compact cameras we're well into the late adopters, and I suspect that we're already there for DSLRs, too.
A couple of weeks ago I gave a presentation to Nikon executives and designers in Japan. One of the things I based my presentation on was that DSLR sales have peaked and that market growth will never return to the levels we've seen, if growth returns at all. This, of course, presents a problem to the camera makers. Camera phones are quickly gobbling up the low end of the compact camera market as they become "competent enough." Mirrorless interchangeable lens cameras are quickly gobbling up the low end of the DSLR marketplace because they deliver 90% of the capability in a smaller, lighter, and more convenient package.
This means that the already declining compact market and the flat DSLR market are actually about to collapse faster. In collapsing markets you have two choices: win market share over competitors (usually via price), or reinvent the product and start a new market.
My choice, and the crux of my presentation, was to reinvent the product. Yes, mature products can be and are reinvented from time to time. Indeed, the DSLR is a reinvention of the SLR. Everyone who had a film SLR at some point decided that a DSLR offered advantages worth switching to. So the question becomes: what is the camera that a DSLR user would want to switch to because the advantages of it are so clear and useful? And how does a "camera" get reinvented in the first place?
Well, you have to think about how cameras fit into the world around them. To date, cameras have been designed for linear processes: you take a picture and move that picture to the "post processing world" where the camera is not involved any more.
But anyone who's used the camera on the iPhone intuitively understands that this isn't the optimal way to look at "imaging." A modern camera should live at the center of your imaging world, with outputs flowing in every direction (this is the highly simplified version of the outputs from the camera of the future ;~).
When you start thinking about the camera that way you quickly start thinking of it differently than just an "image acquisition device." For the time being, I'll leave it to you to figure out how such a camera departs from our current designs, but the answer, once found, is obvious and compelling.
Video or Not?
Before we get to today's installment, let me catch you up with my In Box. Basically, you've flooded it with "I agree" emails to the point where I wasn't able to respond to most of them. It seems a lot of serious photographers are finding joy in "lesser" cameras these days. But there was one additional point I didn't make in my essay that also showed up in these emails: high-end cameras are too heavy. By the time you slap a competent lens on a D3 series body, you've probably got five pounds hanging off your neck. That, too, is one of the reasons why you see some backing up a bit and going downscale. The DX and m4/3 bodies are competent enough for most types of photography, after all. Memo to Nikon: if the D4 comes out as heavy or heavier than the D3, you've missed a large opportunity.
When the D90 appeared and provided the first glimpse of DSLR-video, we instantly heard conflicting choruses of shouts from two different camps. One side didn't want video in their still camera, the other side saw a new tool that didn't exist before and demanded more.
Before we get too far, I need to clear up something. The common belief seems to be that I'm completely against adding video to still cameras. Some who haven't actually looked at my resume also seem to think I'm not only anti-video but have no video experience at all. Neither is true. I actually have more training and professional experience in video than I do in stills. It's only been in the last decade that I've spent most of my time on the still side of things (though I do still dabble in video by helping others with their productions).
Yes, I've written strongly about how I don't want camera makers to divert engineering efforts from still features to video features, but that doesn't mean I'm against video. The key word is "divert."
Diversion is happening in several different areas of engineering. It starts at the sensor. The D3s has two key differences in its sensor from the D3. One is useful to still photographers and videographers alike: extra light efficiency. The other is useful only to videographers: video is enabled, and Nikon has spent time reducing the rolling shutter effects caused by CMOS sensors recording motion. Beyond the sensor, we also have feature and UI changes. There, the answer is the same: there are changes that benefit both still and video, and others that benefit only video.
But look at the updates to the 5DII in comparison. As far as I can tell, all of the updates so far relate to video. In essence, the Canon engineers on that project have diverted all resources to video. The 5DII has become a better video camera since its introduction, but the still side remains unchanged. This is not the approach I want to see camera makers take, as it means our still cameras get frozen where they are today, and there's still more that those of us who use them would like to see done.
Still, one argument that usually gets made is that "video has to catch up to stills on these cameras." Implicit in that argument is that the cameras are used equally for stills and videos. To date, I haven't seen such a user in real life, though I'm sure they exist. What I see is that still users mostly use the still features, plus there's now a raft of videographers who have picked up a DSLR and are using it for video productions. The so-called "hybrid user"--who has professional-level still and video needs--seems to be mostly missing when I look around, though I know I'll get emails from about a half dozen folks who will claim to be that user after they read this short essay ;~).
But let's back up a bit, because several key points need to be considered:
- Video in still cameras already existed before DSLRs. That's partly because the small sensors already supported it. There's a difference between enabling something that's already there and designing new abilities. But the fact that the video world didn't embrace Coolpix-like cameras with video brings us to:
- The real interest is large sensor video. What was missing in the video product market was the middle. Plenty of video cameras existed prior to DSLR video. Dozens of companies making hundreds of products. But the bottom line is this: they were basically all small sensor products. The real interest is simply the large sensor of the DSLR cameras, which enables a look you can't get with the existing small sensor video equipment. Which means:
- Video equipment makers didn't see the need or demand of their target market. I find it ironic that the Canon XL1 was one of the premier "mid-level" video cameras only a few years back. Now it's a Canon 5DII. While at least this kept the dollars in the Canon coffers, if you were CEO of Canon what would you be saying to your video division right now? "Way to go?" No. "How the heck did you miss that?" is more like it.
Unfortunately, video in DSLRs came at the same time as lagging sales. Remember what I wrote in the last installment: still camera users are no longer flocking to every technology upgrade. Thus, anything that might goose still camera sales has gotten a look. And in the case of the Canon 5DII and Panasonic GH1, a lot of videographers found something they liked. This looked like "additional sales" to DSLR makers. Of course they'll divert their engineering efforts to video: (a) the still users aren't as interested in updates any more; and (b) video users will flock to the right set of features. IF A AND B THEN C: (c) concentrate development on the added video.
Those of you with long memories will remember what my position has been for some time now: Nikon should have made a large sensor, F-mount endowed, dedicated video camera, which in my most recent commentaries I've dubbed the Nikon V1. Such a video camera should share as much as possible with the still cameras: same batteries, chargers, lenses, cards, you name it. But it should also be optimized for video, which means different ergonomics, consistent but video-centric UI, and quite a few specialized features (balanced XLR inputs, channel mixing, raw video output, and a host of other things). Should it be able to take stills? Yes, but it should be optimized for video. Should the still camera (D3s) be able to do video? Yes, but it should be optimized for stills.
Or here's another possibility: the D4h, D4x, and D4v. Each optimized for a different task but centered on the same basic internal technologies (USB 3.0, UDMAII, new sensor, new AF, and so on). If you don't optimize a product for a task but instead try to make it a jack-of-all-trades, you end up pushing design toward the center ("good enough") and not towards the edge ("state of the art"). Remember, these are pro bodies we're talking about. If we were talking about the replacement for the D3000, things might be a little different: there "good enough" and "jack of all trades" are reasonable design goals. But at the high end? Not a chance.
Unfortunately, the camera makers tend to all follow one another. Whenever any individual company has a success at something that takes sales away from the others, the others all follow exactly down that same path. Ultimately, this doesn't change much in the camera market: you get short-term share changes that don't tend to hold or dislodge the big players. As I've already advocated: I'm in favor of changing the whole landscape: reinvent what a camera is, what it can do, and how we work with it. But we're still not quite to the point where I want to discuss that yet.
Technology Keeps Marching On
One thing ought to be clear to everyone by now, but apparently isn't, so we need to get it out of the way before we get to future camera designs: technology, once initiated, keeps moving forward. If it didn't, all those engineers at all those companies would be out of a job, right?
I'm reminded of my first computer: 2K of RAM, which I painstakingly soldered to the board (and 2K of RAM in 1975 was considered a lot). Today I'm using 16GB of RAM in my desktop system. This, of course, was propelled by Moore's Law (not really a law and usually misstated), which enabled a predictable reduction in size of transistors over time.
Technology is like that. Once you discover how to do something, large numbers of engineers begin the process of improving it. Their jobs depend upon that. The personal computer of today is still mostly recognizable next to the personal computers of the mid-70s, but basically everything has been tweaked and pimped out to the nth degree by constant engineering.
Digital cameras are no different. Since the invention of the CCD for imaging, there's been a constant stream of engineering efforts dedicated to perfecting it. When you've got a multi-billion dollar a year market to serve, those engineering efforts get pretty intense, and there are an awful lot of engineers putting their brains to work at solving bottlenecks and problems.
And so it is that some things are given: future cameras will have more pixels and higher efficiency in their sensors. That's so clear to me I'm tempted to call it a fact. Indeed, as if to prove that point, Panasonic's small sensor roadmap just crossed my desk this morning as I was writing this. In the 1/2.3 sensor size (LX3-type cameras), 10mp came out in early 2008, 12mp in early 2009, 14mp at the start of 2010, 16mp is scheduled for the start of 2011, 18mp for the start of 2012, and a whopping 20mp is scheduled for 2013. Technology marches on.
Curiously, Panasonic has also added two unspecified sensors to their 1/1.8 line: "High Sensitivity I" and "High Sensitivity II." So it's clear that the other Holy Grail of sensordom is being sought, too: more efficiency. Technology marches on.
Once started, it's difficult to stop the technology march. Remember, the fabs producing these products cost billions of dollars and need new products coming to keep them efficient, and the engineering staff dedicated to these endeavors is numbered in the tens of thousands and costs many more billions to maintain. Somewhere in a high tech lab you'll find someone working on incrementally small things that impact sensor technology, like purity of the silicon base. In the office next to her, there's another engineer who's working on elimination of photosite crosstalk.
Meanwhile, in other labs you have the dreamers. These are the folks that think up things like flipping the sensor over (BSI) or eliminating the Bayer filtration (Foveon) or changing the photon detection completely (Quantum Dots). Wins here have to be big to be profitable, but a big win moves technology forward in leaps instead of steps.
None of these things are going to stop. Silicon will get better. Indeed, it's fairly easy to imagine a 3-layer sensor that's more efficient than the current D3s sensor some time in the future, so those of us using state-of-the-art cameras will find that they're not so state-of-the-art some day. Maybe not tomorrow. Maybe not in 2011. But eventually. We simply aren't close enough to any physical boundaries that would stop any meaningful gains in the core hardware component of our cameras.
Meanwhile, there's another kind of lab attacking the images themselves. It started with the NRO and other spook agencies around the world who needed ways to "enhance" images taken from planes, trains and satellites. Many of the things we take for granted--noise reduction, edge enhancement, stitching, etc.--had their genesis in spy imaging. Here, too, we've not come close to pursuing all paths or extracting all possible information out of the bits we've already captured.
A few years back I was asked to present a "what hasn't been done but will be" talk for a conference of imaging professionals. I just looked at that paper. Less than half the things I mentioned have made it to commercial products (it's difficult for me to judge what's actually being developed as patent issues sometimes keep things invisible until they're ready for release to the public).
One of the items that is currently being explored commercially is using the accelerometer in a device to correct motion after the fact. No, I don't mean VR, which is a real-time correction system: I mean keeping a high-frequency time record of the device's motion with notation of where image acquisition begins and ends, then doing some very high level math to try to remove motion from the recorded pixels. This, too, derives from spy tech: those satellites aren't always stationary relative to the object they're photographing.
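As a toy illustration of what that high-frequency motion record might look like, here's a minimal Python sketch. Everything here (the `MotionLog` class, the sample timing, the values) is invented for illustration; real firmware would be reading a hardware gyro at kilohertz rates and feeding the result to a deconvolution step, not a list comprehension.

```python
from dataclasses import dataclass, field

@dataclass
class MotionLog:
    samples: list = field(default_factory=list)   # (t_ms, wx, wy, wz) angular rates
    exposure: tuple = None                        # (start_ms, end_ms)

    def record(self, t_ms, wx, wy, wz):
        # Called continuously by the (hypothetical) IMU driver.
        self.samples.append((t_ms, wx, wy, wz))

    def mark_exposure(self, start_ms, end_ms):
        # Notation of where image acquisition begins and ends.
        self.exposure = (start_ms, end_ms)

    def motion_during_exposure(self):
        # Only the samples inside the exposure window matter to the
        # after-the-fact deblur math.
        start, end = self.exposure
        return [s for s in self.samples if start <= s[0] <= end]

log = MotionLog()
for t in range(10):                    # ten samples, 1 ms apart
    log.record(t, 0.0, 0.01 * t, 0.0)
log.mark_exposure(2, 6)                # shutter open from t=2 ms to t=6 ms
inside = log.motion_during_exposure()  # the five samples a deblur step would use
```

The interesting (and hard) part is the "very high level math" that turns that sample list into a blur kernel; the record-keeping itself is trivial, which is exactly why it's plausible in a camera.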
So in image processing, too, technology marches on. Even simple things--like using more precise math--have small impacts on what we can pull out of our pixels, and now that we have multi-core CPUs, we can afford to throw some computer horsepower at precision math.
Thus, no matter what else happens with future cameras, we can count on two things: the technology to accurately record photons (sensors) will progress, and the technology to interpret the data (image processing) will progress. Given that cameras also use computer chips (CPUs, ASICs, memory), the ability internal to the camera to do image processing will increase, as well.
So, if nothing else happened, Panasonic would probably be giving you an LX3-sized sensor with 20mp that you could extract better images out of by 2013. Is that good enough to sell you a new camera?
My answer is no. Indeed, the recent intro of the iPad is one reason why, just as HD on your television is another: the devices on which we will likely consume images and videos don't need 20mp. Or 16mp. Or even 12mp. Thus, the advantage of technology giving us "more" continues to erode. We saw this with CPU clock speeds in computers, for instance. As I write this on a 2.66GHz computer with multiple applications running simultaneously, it turns out that I'm using less than 20% of the CPU processing horsepower at peak. Would a faster CPU help me much? No.
So just as the computer industry hit several points where continued CPU engineering produced faster chips that really didn't do enough to compel users to buy a new computer, we're hitting one of those speed bumps with cameras, too. Technology marches on, but it's not giving us much bang for the bucks we spend on it.
My conclusion from this is that the rest of the camera has to get better and more compelling for sales to regain any upward momentum.
What's Wrong With Our Cameras?
Before we begin today's episode, let me again address what I've been seeing in my In Box. The #1 comment has been something along the lines of "I hope your reinvented camera is smaller and lighter than the beasts we have today." Well, so do I. But I'm mostly addressing function, not form, in this set of essays. As we saw with the D3/D700 combo, you can have virtually the same function in two different forms. I do think that the camera makers haven't been listening closely enough on the weight/size aspects, but that's a subject for another essay (and if there are any camera designers reading this who are scratching their heads at this point, I invite them to read Weighty Advice).
Okay, I lied. Before we can start reinventing the camera we need to discuss what is wrong with our current cameras.
As many of you know, I keep a list of things that people want in their cameras, some of which can be found in a summary page on this site. Currently, the full list has over 300 items on it, and I'm sure that's far from complete.
Such lists seem a little strange, though, as current DSLRs are the product of 50+ years of SLR evolution and have a ton of features in them.
A D3 series body, for instance, has 24 items on its SHOOTING menu, 24 on the SETUP menu, and about 50 Custom Settings. And the SHOOTING menu and Custom Settings can have four iterations (banks). My D3s has 25 buttons, five dials, and three switches. I long ago lost count of the combinations and permutations in which a Nikon DSLR can be set. The number is in the millions.
Fortunately, Nikon hasn't made a total mess of the organization (though much is still to be desired; many things need some tender attention). But still, to a newcomer there's an almost overwhelming array of choices in how to "set up" the camera. Yet virtually every photographer I know has a long laundry list of things they want added or changed.
There's obviously something wrong with current camera designs if we have millions of ways of configuring the camera but we all still ask for more. Let's explore some of the issues:
- There's no master set. Despite four banks in two of the menus, there's no way to completely reconfigure a Nikon DSLR from one shooting setup to another in one step. True, some other cameras, including even Nikon's own Coolpix P6000, have the ability to do this. But even where this feature is present, the feature itself is limited to one or two setups. I don't know about you, but I have a half-dozen or more common setups I use. Having to do this via individual settings slows me down. Oh, and Nikon? Changing settings files with firmware revisions is a pain, too; have you not heard of tagged file formats?
- Preset limitations. Ever want to set a self timer to one minute? Or embed a JPEG Fine in the NEF? Or set the Lock button to something you actually use? Or set the Interval timer to more than 999 images? These are all requests I've fielded from professionals (the last one from someone shooting sequences for a major network television show). Basically, engineers and product managers in Japan sit at their desks figuring out what any group of "useful" settings might be. They do a pretty good job of seeing the major portion of the bell curve, but anyone who has an even slightly outlying request is usually going to find what they want missing.
- Missing info. Related to the preset limitations is the notion of preset displays. For a while we had WB in the viewfinder (D2 series), but it's gone again. Again it's the bell curve thing: the camera designs serve 80% of the users, but the 20% of us who have other desires go completely unserved.
- Old-school communication. The 10-pin connector dates back 20+ years; the 5-pin flash connector dates back as far and has been overloaded with variations to the point where even Nikon doesn't support them all any more; the USB connector is a brute-force pipe that requires Nikon's SDK to communicate with. There's not much modern about the way the camera communicates with other things.
- Connections to nowhere. I spent a great deal of time recently in Japan looking for something useful that could plug into the HDMI port that seems to be present on all cameras these days. Apparently no one in Japan has thought of the obvious things we want, but instead think that this is for connecting to a big-screen TV. Guess none of the engineers have done any low-level macro work lately ;~). (Disclosure: I've been helping someone come up with something that would be useful connected to the HDMI connector.)
- One size fits all. Need the rugged pro camera? Well, DX shooters are out of luck. The big battery, built-in grip, heavy-duty weather sealing, and full set of buttons are only available on the D3s and D3x. Want D3s or D3x performance in a lighter, smaller body? Out of luck again.
- Niche be damned. For almost ten years I've been surveying serious photographers (D90 and up, basically). For almost ten years the surveys have shown the same thing: a large subset of these photographers would buy a dedicated black and white camera. Not a Bayer camera that converts to black and white, but a dedicated black and white camera done right. I suspect that because Kodak's dedicated black and white DSLR never sold very well, the camera makers are afraid such a product will bomb. But perhaps Kodak's camera didn't fail because it was black and white. Basically, the camera makers never liked exploring the niches, and now that they sell millions of cameras a year, they are hooked on "mass market." They only want the highest volume products, even at the pro level.
- Follow the leader. Innovation is difficult, I know. But the Japanese companies are mostly iterators. They find something that works and iterate it to death. If a competitor thinks of a feature they didn't, they latch onto that and iterate that to death (mirrorless and video being the current fads). This comes at the expense of doing something about the other issues I've already mentioned.
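The "tagged file formats" jab in the master-set point above deserves a moment of concreteness, since it's the kind of fix that costs a firmware team almost nothing. The idea: store settings as named tags rather than a rigid positional blob, so any firmware revision can read any settings file, keep the tags it understands, and skip the rest. Here's a minimal sketch; the tag names are invented for illustration.

```python
# This firmware revision's vocabulary (hypothetical tag names).
KNOWN_TAGS = {"iso", "white_balance", "image_quality"}

def load_settings(text):
    # Parse "tag=value" lines, silently skipping tags this firmware doesn't
    # recognize -- e.g. tags added or dropped in a different revision.
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        tag, value = line.split("=", 1)
        if tag in KNOWN_TAGS:
            settings[tag] = value
    return settings

# A settings file written by some other (newer or older) firmware revision:
old_file = """iso=200
white_balance=daylight
future_tag=whatever
image_quality=NEF+JPEG"""

settings = load_settings(old_file)   # future_tag is ignored; nothing breaks
```

Real tagged formats (TIFF and Exif among them, which camera makers already ship) work on exactly this principle, which is what makes the brittle settings-file behavior so puzzling.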
I've picked on Nikon here because this is a Nikon-oriented site, but variations of the same issues apply to virtually every camera maker. The bottom line is that every amateur and every pro runs into limitations in their current cameras that bother them. Every amateur and every pro can point to missing features or products that they'd prefer.
The question, of course, is whether you can ever satisfy all demands. And the answer is no, you can't. But you can satisfy more of the demands than the camera makers have done so far. Basically, any small success by any camera maker has them all rushing to "get to the middle" along with the others. And so we get small iterations that are copied and copied and copied.
What we need is a big iteration that solves many of the problems I list above but still plays to the camera makers' strengths. Can it be done? Yes, I believe it can.
The Camera Redefined
Before we begin today's episode, let me again address what I've been seeing in my In Box. It seems that some of you have been reading (or guessing) ahead. Some of that is due to comments I've made elsewhere recently, some is because there is a logical conclusion to be made and it doesn't take an astrophysicist to see it. One reader, Greg Hopkins, a designer at a Cincinnati-based CPG company, even went so far as to illustrate my "redefined camera" as he was imagining it:
Greg is the first one to admit his illustration is tongue-in-cheek, but he's not terribly far from the truth, as you're about to see.
So it's time to pull things together and start describing our future camera. If you haven't read the previous portions of this extended essay, please do so before continuing. I will be building on them today.
One simple observation that ought to be obvious to everyone is this: today's cameras are pretty much computers. They have CPUs, memory, and even a USB port. Yet despite all those things, they don't act much like the computers we are used to these days: they aren't very configurable, they aren't extensible, they're insanely proprietary, and ironically they don't do all that much with their CPU or communication ability. I'm reminded of the old Wang word processors: pretty darned good at a dedicated purpose, but you had to take what Wang gave you.
Back in Silicon Valley--where I spent most of my career--I often talked about two things: sine waves and customization. Tech products like computers go through sine waves of progress as they're pulled back and forth between two extremes with conflicting goals. Mainframes centralized. Minicomputers decentralized. Networks centralized. Personal computers decentralized. The Internet centralizes. (And the original Palm decentralized, the iPhone centralizes, so guess why Palm needed to reinvent itself?) In other words, the sine wave of progress here was the tension between user control and organizational control.
Customization is easier to understand: all else equal, the user ultimately will want to customize their tools. That's because the user knows best what they need, and the maker can never give them everything they need.
Funny thing is, cameras never went through much of any of the above. It's as if Wang just kept making word processors with slightly faster CPUs and a few more features for generations and generations and everyone else just copied that. Not a single camera company has really broken the mold and tried venturing into uncharted (but highly game changing and profitable) territory. Talk about conservative design.
So what should a camera maker do? Simple: decentralize and make a camera that puts the user in control of everything. Just as this was a radical concept to the existing computer companies when personal computers hit the scene in the mid-seventies, it'll seem like a radical concept to today's camera companies, too. It's uncharted territory. But it's also a huge ocean of opportunity.
Let me cut right to the chase. Here's how I'd redefine the camera:
- Modular. Remember that non-stop technology march? Well, we can completely junk our equipment every time a sensor generation comes down the pike, or we can just replace the sensor module. Which would you prefer?
- Programmable. This doesn't necessarily mean that you, the user, have to write programs. It means that there's a known API to the underlying hardware (and modules!) and a way to take advantage of it. Whatever you need to do, there should be an app for that, not a dedicated feature with restrictive parameters.
- Communicating. The camera sits in the middle of so many processes and initiates most of them. But right now we're using Sneaker Net to communicate (that's a reference to the old practice of taking a disk out of your computer and going down the hall to put it in another one in order to move files). But here's another thing: cameras should be able to communicate with other cameras, other camera accessories, and things outside the camera world, all simultaneously. Right now most of the communication that is done by our cameras is proprietary, highly restricted, and often sequential.
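To make the "Programmable" idea a little more concrete, here's a toy sketch of what a camera app API might look like. Everything in it (`Camera`, `on_capture`, the auto-upload "app") is hypothetical; no such SDK exists. The point is simply that behavior lives in replaceable user code talking to a published API, rather than in a fixed firmware menu tree.

```python
class Camera:
    """Stand-in for the camera's published API surface (hypothetical)."""
    def __init__(self):
        self._handlers = []

    def on_capture(self, handler):
        # An "app" registers a callback that runs for every frame captured.
        self._handlers.append(handler)

    def capture(self, frame):
        # Firmware would call this when the shutter fires.
        for handler in self._handlers:
            handler(frame)

uploaded = []

def auto_upload(frame):
    # A user-installed "app": push each frame straight out of the camera.
    # A list append stands in for the network transfer here.
    uploaded.append(frame["name"])

cam = Camera()
cam.on_capture(auto_upload)
cam.capture({"name": "DSC_0001.NEF"})
cam.capture({"name": "DSC_0002.NEF"})
```

Swap `auto_upload` for a shot-list checker, an IPTC stamper, or a slide-show feeder and you have most of the wedding scenario described below, without the camera maker having anticipated any of it.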
Some of you will balk at all three aspects of this future camera and say we don't need any of these things. I disagree. We've been fighting all three areas for a long, long time. Let me give you a single example. You're a wedding photographer. A good one. One who tries to keep up with the latest options.
How do those three things I just outlined help our wedding photographer? Let's look:
- Modularity. Wedding photography goes through fads. Many brides not only want all aspects of their wedding fully covered, but also want pictures that stand out from those of their friends' weddings. Thus, the style of imagery tends to follow fads. For a year or two, near-IR wedding work was the in thing. Every now and then, B&W returns as the "in style." We could buy new cameras every time styles like this happen, or we could just swap out the sensor module. Wedding party photos were outside in the sun but the reception is in the most dank and dark venue you've ever encountered? Swap out the Megapixels module for the SeeInTheDark module when walking from one to the other. Sensor modules that should be available:
- High resolution (e.g. D3x-like)
- Low light (e.g. D3s-like)
- DX (e.g. D300s or current top-of-the-line DX sensor)
- B&W (probably should be high resolution, as it'll be better at low light than a color version)
- Square (e.g. maximum image circle capture)
- near IR sensitive (perhaps with special Bayer filtration)
- no block sensor (low light sensor with no UV or IR blocking, a la Fujifilm's UVIR cameras)
- video only (sensor maximized for video production needs, e.g. no rolling shutter)
- Communications. (I'm swapping order for a reason you'll see in a moment.) Our wedding photographer has an assistant covering the event, too. She wants all images to be named and stamped in the order shot, regardless of which camera they came from. How do they do that? The cameras communicate and synchronize clocks. But the photographer also doesn't want to be downloading dozens of cards after the event, so images are being communicated as they are taken to a central portable server she brought with her. Again, some form of communication is needed. And the lights she has set up should serve both photographers. That, too, requires communication. Perhaps the videographer wants to know how to sync stills up with his video. Hmm, now the still and video cameras need to communicate.
- Programmability. Each photographer can have shot lists on the camera to check off. Each camera can check to see if matching settings are being made with the other (if you're shooting JPEG, this can become important). Each camera can be reporting which wedding this is (writing IPTC data up front) so that all the images from all weekend weddings are already correctly tagged and bagged at the receiving end. Since the images are being synced and served, we can do more. Each photographer can mark images as they're taken as "put in real-time slide show" and once the server has them they can be displayed for guests. Heck, we can go further. Say a guest comes up to the photographer and asks if they can get a copy of "that shot you just took." With the right programming, we can automate entering an email address on the camera and kicking off a process that sends a watermarked sample with an offer to purchase.
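The clock-sync-and-naming problem in the Communications item above is mostly simple software. Here's a minimal sketch in plain Python of how merged sequential naming could work once the cameras have exchanged clock offsets over a wireless link; the field names, job tag, and offset values are purely illustrative, not any real camera's API:

```python
from dataclasses import dataclass

@dataclass
class Shot:
    camera: str        # camera ID ("A" = photographer, "B" = assistant)
    local_time: float  # capture time in seconds on that camera's own clock
    filename: str      # the name the camera assigned on its own card

def merge_and_rename(shots, clock_offsets, job="wedding01"):
    """Order shots from multiple cameras on one shared timeline and
    assign sequential, job-tagged names."""
    # Correct each shot's timestamp by its camera's known clock offset,
    # then sort everything into true capture order.
    corrected = sorted(shots, key=lambda s: s.local_time + clock_offsets[s.camera])
    return [f"{job}_{i:04d}_{s.camera}.nef" for i, s in enumerate(corrected, 1)]

# Two cameras; camera B's clock runs 2.5 seconds ahead of A's.
shots = [
    Shot("A", 100.0, "DSC_0001.NEF"),
    Shot("B", 103.0, "DSC_0001.NEF"),  # actually taken at 100.5 on A's timeline
    Shot("A", 101.0, "DSC_0002.NEF"),
]
offsets = {"A": 0.0, "B": -2.5}
print(merge_and_rename(shots, offsets))
```

The point isn't the twenty lines of code; it's that the hard part (two cameras agreeing on a timeline) only requires that they communicate, and the rest is bookkeeping that the camera could do itself instead of pushing it downstream to a computer.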
I've intentionally kept my example somewhat simple here, as I don't want to get bogged down in specifics about what a wedding photographer does and doesn't want; I just want to illustrate a few bottlenecks that get solved. Indeed, that's part of the issue with our current cameras. As I outlined in the previous part of the essay (What's Wrong with Our Cameras?), despite having millions of setting possibilities, we still hit limitations that frustrate us. Even something simple like file naming can cause us to add extra steps downstream to our workflow.
Workflow starts at the camera, and we need to be able to control it right there, not downstream when we finally get an image over to a computer. Unfortunately, we have almost no workflow control on our current over-featured cameras. None. We can't even really rename files. Much of the "ingest step" is currently done at the computer when it could be done better at the camera as "outgest." Moreover, images destined for a specific place (Facebook, Web gallery, email, etc.) should probably just go straight there, not require a separate post-shooting work step at a computer (there are tons of variations here: all automatic, only upload images selected on camera, move but don't post until verification, etc.).
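The "outgest" idea can be boiled down to a simple routing table the photographer sets up once. A sketch, again in plain Python; every name and destination here is hypothetical, meant only to show how little logic a camera would need to route images at capture time:

```python
# Hypothetical in-camera routing table: tag an image at capture time,
# and the camera renames it and queues it for the right destination.
ROUTES = {
    "client":   {"rename": "smithwedding_{n:04d}", "send_to": "studio-server"},
    "social":   {"rename": "preview_{n:04d}",      "send_to": "web-gallery"},
    "personal": {"rename": "DSC_{n:04d}",          "send_to": None},  # stays on card
}

def route(tag, n):
    """Return (new filename, destination) for the n-th image with a given tag."""
    r = ROUTES[tag]
    return r["rename"].format(n=n), r["send_to"]

print(route("social", 12))  # ('preview_0012', 'web-gallery')
```

Notice that nothing here requires a computer after the shoot: the renaming and the destination decision both happen where the workflow actually starts, at the camera.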
The irony of the current situation is that one company has managed to do much of what we want: Apple. The iPhone shows just what happens when you put programmability and communication together with a camera, though I'm surprised Apple hasn't figured out the connection between iPhone and iPhoto yet. My iPhone can tell me where I am, tell me where the sun is and where the moon will be, put watermarks on my images, stitch panos, apply tilt-and-shift-like effects, email images or send them directly to places I want them, and much, much more. My US$7999 D3x can't do any of those things. Doesn't anyone else find something wrong with this picture?
One common complaint I hear when I speak about an open, modular, programmable camera is this: the camera companies are terrible at software. Yes, they are. Even worse at it than you think. Obviously, at the root of my redefined camera is software, basically an Open Kamera OS (the OK System, if you will). This indeed may be one of the reasons we don't have what I hope for: the Japanese companies don't know how to do operating systems very well. If you've ever tried to program a PS3 you'll know what I mean. (The core OS in your Nikon DSLR is a Fujitsu-written DOS variant, by the way.)
This doesn't mean that the camera companies shouldn't attempt creating an OK System, though; it means that they need to get better at software, and fast. And a certain type of software, at that. We don't need the camera companies to become experts at DAM products or Photoshop plug-ins, we need them to be experts at building a solid core foundation upon which features and applications are built.
I'll conclude my essay at this point. Obviously, I've thought about this a lot more than may show here. I've got dozens of real-world examples of how things would change with a modular, programmable, communicating camera. I've got a long list of features and applications that could be made that give us capabilities we're a long way from today. But if I tried to roll all those into one essay, you'd still be reading until sometime a few weeks after Hell freezes over.
Hopefully I've presented the case for Open Kamera well enough for you to understand. And I hope you join me in asking for some camera maker, any camera maker, to take the leap and bring the idea to fruition. If not, we'll just have to wait for Apple to add a big imaging sensor to the iPad ;~).
Apr 12 (commentary)--As you might expect, I received plenty of reaction to my "Camera Redefined" essays last week.
First up, noted Nikon Pro and celebrity photographer John Ricard wrote to tell me "when I want to send a D3 image to Twitter while I'm on a shoot, I use my iPhone to photograph the D3 screen. Then I use Photoshop Mobile to adjust, PhotoMarkr to add a watermark, and I send the iPhone image directly to Twitter with another app. It's insane that I can't do these things with my D3--especially when you consider that the iPhone has one button, but the D3 has over 20." John gets it. I guess I can add him to the wait list for my redefined camera along with dozens of other pros. Note that some may think this is a simplistic or silly thing to want to tweet from a pro camera. It isn't. Today's pro needs to stay connected and market themselves constantly. By keeping his clients aware of what he's working on today, John is keeping himself at the forefront of their minds, not waiting for them to call with work. John's example is but one of hundreds I'm collecting (by all means, let me hear your idea of what a modular, programmable, communicating camera would do for you).
A few more comments about your comments:
- Complexity. Some of you worried that modularity, programmability, and communications adds complexity to the product we're using today. Yes and no. DSLRs by definition are already modular (but not in all the ways we want them to be), programmability doesn't necessarily mean that a user needs to program it (though I would advocate that we should be able to if we want to), and setting up communications can be a real chore (witness the WT-4). In my career in high tech, I took such things as challenges to solve, not hurdles to stop me from trying. Indeed, as part of the process I went through prior to writing these essays, I took the D3000 user interface and rebuilt it to accommodate my proposed future camera changes. I'd argue that my design is actually more direct and simpler than the existing D3000. Simply put (pardon the pun), complexity is only there if you put it there.
- Price. Some of you worried that prices for a modular system would be higher than a non-modular one. Yes and no. First, modularity itself shouldn't add any tangible cost to the higher end products (though it might to low-end ones). Connectors and packaging are inexpensive components compared to sensors and electronics, though a sensor/digital module would have to be precise to keep alignments correct. But consider the following: a D3s for US$5300 versus a D3m body for US$2400 and an S sensor module for US$2900. (Also consider that you might have a DX sensor module for the camera, too, which would be great for wildlife shooters.) Suddenly, body backups or upgrades seem less expensive and more desirable. (If you wonder where the body price comes from, check the price of the F6.) Basically, modularity gives the user the choice of when to upgrade basic camera capability versus image quality capability.
- Viability. A few of you questioned what was in it for the camera makers (a good question, actually, because if there's nothing in it for them they won't do it). This one's difficult to answer without a very complicated spreadsheet and a raft of survey results to refer to. But there's one simple answer, I think: camera prices are headed down. I know it doesn't seem like that from the price of the D3x, but if you look at CIPA's average revenue per unit figures over the past few years you see it immediately. The normal Japanese company reaction to declining revenue per unit is to engineer out cost. An alternative approach--and a better one in my book--is to find a way to increase unit volume and add incremental sales based upon that volume. In other words, instead of selling a widget for US$100, you sell a widget for US$80 and widget accessories for US$40. Modularity doesn't work exactly like that, but it's a relative of that strategy. Done right, you end up with more revenue and more profit on the same basic camera unit sales volume.
- Capability. As I noted, the Japanese camera companies haven't exactly distinguished themselves in software, and what I proposed requires a clearly competent software capability. I don't know how to answer this one other than to say that perhaps the Japanese companies need to look more outward for software expertise. The Nikon-Fujitsu firmware venture is just "more of the same." A Nikon-Silicon Valley venture might not be. Put another way: the software expertise required is out there, it just isn't currently where we need it to be.
- It's not really open. "Open" comes in many forms. I'm not advocating open source, however. I simply don't think the camera companies would even consider it. Moreover, since we're dealing with a real-time OS at the base of the system here (think multitasking), it's important for the manufacturer to have full control over what is going on at the hardware level. What I mean by Open Kamera is a system that is programmable and extendable, and that those functions are well documented, well supported, and available to third parties. We can talk about open source later. Right now we need a big step towards a basic kind of "open." Without it, the camera makers are going to find themselves playing in smaller and smaller ponds.
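To put numbers on the Price point above, here's the upgrade arithmetic over two product generations, using the illustrative prices from that bullet (the two-generation scenario itself is my own assumption):

```python
# A photographer who upgrades image quality every generation, two generations.
non_modular = 2 * 5300          # buy a whole new D3s-class body each time
modular     = 2400 + 2 * 2900   # one D3m body, a new S sensor module each time
print(non_modular, modular)     # 10600 vs 8200
```

The modular photographer spends US$2400 less over the two cycles, and the maker still books two sensor-module sales on a body it only had to build once--which is exactly the "widget plus widget accessories" strategy described under Viability.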
Three Year Plan
Apr 13 (commentary)--Someone asked me a good question yesterday: "So, if you were in charge of Nikon, what would your three-year plan be?" I've got just enough ego to think I can answer that, but not enough to be certain I'm right.
Most Japanese companies keep long-term roadmaps of what they plan on offering. With Nikon's cameras, their plans tend to revolve around the four-year design cycle of the pro bodies. Every four years we see a clear technology boost (new technologies, improved technologies, additional technologies) in the basic pro platform. This then trickles into the lower models and is refined and iterated in the upper models over the course of the next four-year period.
(If you're wondering about what technologies might be in the D4 generation, here's my current expectations: new battery technology, a surprising new AF system, USB 3.0, next generation UDMA, and of course significant sensor/imaging upgrades.)
The problem with a four-year cycle is that user expectations sometimes move faster than that, and fads like mirrorless and video develop quicker than a company might be prepared to deal with them. This leads to frantic and fractured development in the sub-cycles. The good thing about a four-year cycle is that a company can spend the time to fully investigate and develop leading edge technologies and make sure that its base platform is state of the art at each iteration. Real innovation and progress requires time: you can't do it in the typical 18-month product cycles we're now seeing for many of the cameras.
I wouldn't change Nikon's four-year cycle. But I first started writing about modularity prior to the start of the current cycle (which should produce the D4 in 2011), and programmability early in the cycle. Communications would have been an obvious technology area to be exploring, as well. Given Nikon's repeated WiFi offerings (e.g. WT-4), they've been paying attention to communications. Thus, had I been in charge of Nikon, a D4 introduced in 2011 would likely be a modular, programmable, communication-equipped platform from the get-go.
The real question is how you respond mid-cycle, which is where we are today. (Note that once you're modular and programmable, you can use modules and programs to respond quicker to market changes, another reason to change.) This is a tougher question. Sometimes you get caught up in tectonic plate shifts of market demand when you're not ready. This generally means that you have to start gap-plugging development teams and push them as fast and as hard as you can. Mirrorless is one of those areas that is triggering that kind of development right now (Sony's camera will be introduced in May; not yet sure when Nikon's will be introduced). The good news is that the camera companies all have the ingredients to produce what Olympus has produced so far: you just take your low-end DSLR, remove the mirror and viewfinder, plug in your compact focusing algorithms, and package it all back up. The only big decision comes with lenses, where you need to think about the future of the mount and what options users really want (Sony is making a 24mm equivalent prime: congratulations, they got one sophisticated user want right, though they still need a 50mm equivalent prime, too). Mirrorless isn't really any new technology or earth-shattering change to the basic notion of what a DSLR is: it's just revising and repackaging of things you already have in the toolbox.
My basic answer to the "in charge" question is this: the secret sauce is software. Nikon has all the hardware and technology pieces pretty much in control. But Nikon--like all the Japanese camera companies--has yet to understand the full role of software in modern devices. The notion of firmware is as outdated as the notion of BIOS. Yes, we still need these things, but they're not the secret sauce, they're just glue. Users don't care about glue, but they can sure distinguish and appreciate the tastes of different secret sauces.
Thus, my priorities at this point would be ordered like this: (1) programmability (getting the software right); (2) communications; and (3) modularity. The interesting thing about #1 is that it can be done in parallel to whatever is happening right now in development. It's not difficult to imagine a completely different firmware running on even the current cameras and enabling capabilities that weren't there before. Communications can be done external to the current camera designs (witness WT-4) and integrated later as you tackle modularity.
Entirely missing from Nikon's game is video. Yes, I realize they've got DSLR-enabled video, but I mean: where are the video cameras? Nikon should have bought Flip before Cisco did, and it should have introduced a dedicated F-mount video camera by now. Video will indeed get more important in the future, but hybrid solutions are not really the answer. Just as in still photography, lenses will become one of the big differentiators in video, and that plays to one of Nikon's core strengths. But video requires different lens designs, and so far only Panasonic seems to be exploring that. So my "three-year plan" (which is really four years) would also include "establish Nikon as a video provider."
Speaking of lenses, there's not enough urgency in Nikon's lens group. Gaps need to be filled, production needs to be goosed for a number of lenses (try ordering a 500mm f/4), and new technologies need to be pushed harder. Lenses are an important module of modularity, after all.
I'll have more to say about Coolpix soon, as I've been testing Nikon's latest offerings. I'll just say that Nikon is basically keeping pace in many areas, but not excelling, and it has yet to pick up on key differentiators, like ruggedness. One of my goals as Nikon's leader would be to excel across all products, not just keep pace. So Coolpix would need work, too.
Some nascent programmability has appeared, most notably:
And here's one of the reasons why "open cameras" start to be very, very interesting: