Bit Depth in Depth
Now that we have established that what is called True Color for still photography is actually the same thing film and video shooters call 8-bit, let's move on to why and where more information is better, and when you need to shed the extra info. And yes, there will be math.
With 8 bits per channel encompassing most of human vision's capabilities, why do we really need our cameras to acquire more? Some of the answers lie in how we see versus what a camera can see. The human eye can accommodate vast ranges of color and density. Ever notice that everything seems sharper and crisper to your eye on a bright sunny day, or that when you're walking around at night the colors are far more muted, into differing levels of grey? Our vision accomplishes this by shifting between the two kinds of receptors in your eyes, the rods and the cones. The cones see color better, while the rods deliver better contrast, especially at low light levels, where the cones have a harder time translating color.
Exposure Latitude—not nearly what you can see
OK, let's get to the guts of the issue: how much can I record and still get a good image?
For most uses the existing 8-bit acquisition is just fine. The latitude of most cameras of this type is around 10 stops of exposure from white to black, while that sliding scale our brain controls is closer to 14+ stops of latitude in daylight, or more in very low light, thanks to its ability to adjust dynamically. Many photographers do something similar by exposing under the rules of High Dynamic Range, or HDR, where multiple exposures, one each for shadows, midtones and highlights, are combined into a single image.
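The bracketing idea can be sketched numerically. A toy merge of under-, normal- and over-exposed captures of the same scene might, for each pixel, keep the unclipped sample closest to mid-grey. The pixel values, the three-bracket setup and the mid-grey target below are illustrative assumptions, not a production HDR merge:

```python
# Toy exposure-bracket merge: for each pixel, pick the bracket whose
# value sits closest to mid-grey (0.5), i.e. is best exposed.
# Pixel values are normalized to the range 0.0-1.0.

def merge_brackets(brackets):
    """brackets: list of equal-length lists of pixel values in [0, 1]."""
    merged = []
    for pixel_values in zip(*brackets):
        # Prefer samples that are neither crushed (0.0) nor clipped (1.0).
        usable = [v for v in pixel_values if 0.0 < v < 1.0] or list(pixel_values)
        merged.append(min(usable, key=lambda v: abs(v - 0.5)))
    return merged

under  = [0.02, 0.10, 0.45]   # -2 stops: shadows crushed
normal = [0.08, 0.40, 1.00]   #  0 stops: brightest pixel clipped
over   = [0.32, 1.00, 1.00]   # +2 stops: mostly blown out

print(merge_brackets([under, normal, over]))  # → [0.32, 0.4, 0.45]
```

Each output pixel comes from whichever bracket exposed it best, which is the essence of what HDR combining does, minus alignment, weighting and tone mapping.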
A camera sensor does not have the ability to adjust its exposure or sensitivity in the same manner as our eyes do. By increasing the amount of information the sensor records, the acquisition of both the color and luma levels is quantitatively greater. So increasing the bit depth of the color information the camera can record actually mimics the separate capture of shadows, midtones and highlights used in HDR photography, without the issue of combining multiple images into one.
The numbers don’t lie.
Bit depth is about the numbers, and here more is better, up to a point. 8-bit color, with 256 levels of grey per channel (2^8), is dwarfed by comparison when recording at 10-bit (1,024 levels), 12-bit (4,096) or 16-bit (65,536) levels of grey per channel, where trillions of possible colors can be recorded. (Note: 16-bit has the same number of colors in each color channel as in the entire CMYK color spectrum.) Until recently, such a large volume of information was prohibitive on all but the most powerful workstations, higher-quality compressed codecs like Apple's ProRes, or the Raw workflows common with today's cameras from Red, Canon and Sony.
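The per-depth figures above follow directly from the arithmetic: levels per channel are 2 raised to the bit depth, and total colors are that count cubed across the three RGB channels. A quick sketch:

```python
# Levels of grey per channel at each bit depth, and total RGB colors.
for bits in (8, 10, 12, 16):
    levels = 2 ** bits      # e.g. 2**8 = 256 levels per channel
    colors = levels ** 3    # three channels: R, G, B
    print(f"{bits:2d}-bit: {levels:>6,} levels/channel, {colors:,} colors")

# 16-bit works out to 65,536 levels per channel and
# 281,474,976,710,656 colors -- the "trillions" figure.
```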
A 16-bit file carries 256 times as many levels per channel as an 8-bit one. That means for every level of grey in 8-bit, the 16-bit file offers users an additional 255 levels of information. That can translate into huge gains in image quality and control over luminance in the master file. With all this extra information buried within the camera master, one sees that all of the work behind HDR is being incorporated behind the lens in the newest cameras coming onto the market, so that the next generation of digital cameras will record contrast and luminance ranges beyond what the best film ever could.
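The 256× and 255-extra-levels figures are two views of the same division. Each 8-bit code value corresponds to a block of 16-bit code values:

```python
# Each 8-bit code value maps onto a block of 16-bit code values.
levels_8  = 2 ** 8    # 256 levels per channel
levels_16 = 2 ** 16   # 65,536 levels per channel

per_step = levels_16 // levels_8
print(per_step)       # → 256 16-bit codes per 8-bit step
print(per_step - 1)   # → 255 *additional* in-between levels per step
```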
All of this just to throw some of it away?
That is the best part about working with this much information: being able to move the color around to best accommodate the differences between formats when converting the files for various display or viewing types. We have all seen banding in our images at one time or another; those broken steps of density happen when there is not enough information to fill in the gaps. With all that extra color information, most of our existing tools can handle conversions that offer the viewer the highest quality without sacrifice.
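Banding is easy to reproduce in miniature: quantize a smooth grey ramp to a given bit depth and count how many distinct steps survive. The ramp length and the depths chosen below are arbitrary demo values; the point is simply that fewer bits means fewer, coarser steps, which the eye reads as bands:

```python
# Banding in miniature: quantize a smooth 0-1 grey ramp to a given
# bit depth and count how many distinct steps remain across the ramp.

def quantize(value, bits):
    levels = 2 ** bits - 1          # highest code value at this depth
    return round(value * levels) / levels

ramp = [i / 999 for i in range(1000)]   # smooth gradient, 1000 samples

for bits in (4, 6, 8):
    steps = len({quantize(v, bits) for v in ramp})
    print(f"{bits}-bit ramp: {steps} distinct steps")
# 4-bit → 16 steps, 6-bit → 64 steps, 8-bit → 256 steps
```

Converting through a higher-bit-depth intermediate keeps those gaps filled until the final delivery format, which is why the extra information pays off at conversion time.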