In many forums, we've come to apply the term "generation" to each successive replacement of a sensor technology. It's a term that implies a significant leap in technology, and we've been trained by the camera manufacturers to think that's what is occurring. Obviously it is in their interest to have us believe that our current camera is hopelessly obsolete, in need of replacement by the next camera. Are they correct?
Before answering that question, one might ask, "Do I actually need a better sensor than the one I have now? How often do I miss a shot or fail to get the image I want because of a lack of dynamic range or poor signal-to-noise characteristics at a desired ISO?" It's the old "I never needed to use any film faster than ISO 100" argument, and there may be some validity to that. Let's assume, for the sake of this post, that I do in fact need a better sensor than the one I have right now. The next question is, "How much better is that 'next generation' sensor?"
That's where the manufacturers get tricky with us. The other day, I was listening to the podcast This Week in Tech, and there was an interesting discussion about how, while hardware advances in computing occur exponentially (Moore's Law), software advances are far less impressive. The example given was that smartphone hardware has become incredibly advanced, while smartphone operating systems remain, by comparison, immature.
As is the case with smartphones, the on-board processing chips in cameras see exponential gains in processing power over time. Those ever-more-powerful processors can drive powerful in-camera software. Such software can be used to increase the apparent dynamic range of JPEG files through "underexpose and push" processes (e.g., Canon Highlight Tone Priority, Olympus Shadow Adjustment Technology), which are sufficient to fool the dynamic range testing protocols of some of the more prominent technical review sites. That processing muscle, coupled with refined software, can also be used to apply increasingly sophisticated noise reduction and sharpening algorithms in camera, with resulting advances in the signal-to-noise performance at any given ISO.
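To make the "underexpose and push" idea concrete, here's a minimal sketch in Python/NumPy. The numbers and the square-root push curve are my own inventions for illustration, not any manufacturer's actual pipeline:

```python
import numpy as np

# Hypothetical linear scene tones; 1.0 is sensor saturation, so the
# 1.30 highlight would clip at a "correct" exposure.
scene = np.array([0.02, 0.10, 0.40, 0.90, 1.30])

# Normal exposure: the 1.30 highlight clips to 1.00 and is lost.
normal = np.clip(scene, 0.0, 1.0)

# Underexpose by one stop (half the light): the highlight now fits
# on the sensor with room to spare.
captured = np.clip(scene * 0.5, 0.0, 1.0)

# Push back up in software with a curve that lifts shadows and
# midtones but rolls off smoothly near the top instead of clipping.
# (The square-root curve is just a stand-in for whatever proprietary
# tone curve the manufacturer actually applies.)
pushed = np.sqrt(captured)

print("normal exposure: ", normal)  # 1.30 -> 1.00 (clipped)
print("underexpose+push:", pushed)  # 0.90 and 1.30 remain distinct
```

The catch is that the shadow tones were captured with half the light and then amplified, which is exactly where the extra noise shows up. The sensor itself hasn't gained anything, yet a naive dynamic range measurement improves.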
To what extent do these JPEG processing advances constitute the "generational" gains promised by camera companies? Here's a typical example:
Canon, comparing the S90/G11 to the G10, said: "Canon's new Dual Anti-Noise System combines a high sensitivity 10.0 Megapixel image sensor with Canon's enhanced DIGIC 4 image processing technology to increase image quality and greatly improve noise performance by up to 2 stops."
Similar statements have been made by just about all of the camera manufacturers.
When confronted with such a claim, the first thing to consider is that it could mean just about anything. Even a poor processor with poor software can slather on noise reduction at the expense of detail. Sadly, this is often exactly what a company means by "improved noise performance." Let's assume, though, that the manufacturer is truly speaking of a significant gain in detail relative to noise; taken literally, a 2-stop improvement would mean ISO 1600 output as clean as the predecessor's ISO 400. Where does such an advance occur? Is it the sensor? The processing? To take this particular Canon example, how much of the advance owes to that "high sensitivity 10.0 Megapixel image sensor" and how much to that "enhanced DIGIC 4 image processing technology"?
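As a rough illustration of why the claim could mean just about anything, here's another Python/NumPy sketch. The 5-sample box blur and the 0.3/0.7 test texture are invented stand-ins, not anything a real DIGIC pipeline does, but they show how crude smoothing improves a measured noise figure while destroying detail:

```python
import numpy as np

rng = np.random.default_rng(0)
kernel = np.ones(5) / 5  # crude 5-sample box blur standing in for "NR"

# Measured noise on a flat gray patch drops by roughly sqrt(5)...
flat = rng.normal(0.5, 0.05, 10_000)
print("patch noise before NR:", flat.std())                               # ~0.050
print("patch noise after NR: ", np.convolve(flat, kernel, "same").std())  # ~0.022

# ...which a spec sheet could trumpet as greatly improved noise
# performance. But the same blur crushes fine detail: a pixel-level
# texture alternating between 0.3 and 0.7 is averaged nearly flat.
texture = np.tile([0.3, 0.7], 50)
blurred = np.convolve(texture, kernel, "same")
print("detail amplitude before NR:", texture.std())   # 0.20
print("detail amplitude after NR: ", blurred.std())   # ~0.05
```

Note what happens: the flat-patch noise figure improves by better than a factor of two, while the fine texture loses a factor of four, so detail relative to noise actually got worse. That is precisely the distinction a single "noise performance" number hides.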