TrueColor vs. 8-bit part 2

Bit Depth in Depth

Now that we have established that what still photographers call True Color is the same as what film and video shooters call 8-bit, let's move on to why and where more information is better, and when you need to shed the extra info. And yes, there will be math.

With 8 bits per channel encompassing most of human vision's capabilities, why do our cameras really need to acquire more? Some of the answers lie in how we see versus what a camera can see. The human eye can accommodate vast ranges of color and density. Ever notice that everything seems sharper and crisper on a bright sunny day, or that when you walk around at night the colors are muted into differing levels of grey? Our vision accomplishes this by shifting between the rods and cones in your eyes: the cones render better color, while the rods deliver better contrast, especially in low light, where the cones have a harder time translating color.


Showing a visual difference between 8-bit and 10-bit on a gradient file.

Exposure Latitude—not nearly what you can see

Ok, let's get to the guts of the issue: how much can I record and still get a good image?

For most uses the existing 8-bit acquisition is just fine. The latitude of most cameras of this type is around 10 stops of exposure from white to black, while that sliding scale our brain controls is closer to 14+ stops of latitude in daylight, or more in very low light, thanks to its ability to adjust dynamically. Many photographers do something similar by shooting for High Dynamic Range (HDR), capturing multiple exposures, one each for the shadows, midtones and highlights.
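Since each stop of latitude doubles the amount of light recorded, the stop counts above translate directly into contrast ratios. A quick sketch (illustrative only; the function name is mine, not from the article):

```python
# Each stop of exposure latitude doubles the light level, so a
# latitude of N stops corresponds to a white-to-black contrast
# ratio of 2**N : 1.
def contrast_ratio(stops):
    """Contrast ratio (white:black) for a given latitude in stops."""
    return 2 ** stops

print(contrast_ratio(10))  # typical 8-bit camera: 1024:1
print(contrast_ratio(14))  # daylight human vision: 16384:1
```

The gap between 1,024:1 and 16,384:1 is why a scene that looks fine to the eye can clip to white or crush to black on camera.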

A camera sensor cannot adjust its exposure or sensitivity in the same manner as our eyes do. By increasing the amount of information the sensor records, the acquisition of both the color and luma levels is quantitatively greater. Increasing the bit depth the camera can record thus mimics the separate capture of shadows, midtones and highlights used in HDR photography, without the issue of combining multiple images into one.


The numbers don’t lie. 

Bit depth is about the numbers, and here more is better, up to a point. 8-bit color, with 256 levels of grey per channel (2^8), is dwarfed by comparison when recording at 10-bit (1,024 levels), 12-bit (4,096) or 16-bit (65,536) levels of grey, where trillions of possible colors can be recorded. (Note that a 16-bit file carries as many levels in each color channel as there are colors in the entire CMYK color spectrum.) Until recently, such a large volume of information was prohibitive on all but the most powerful workstations; higher-quality compressed codecs like Apple's ProRes, and the Raw workflows common with today's cameras from Red, Canon and Sony, have changed that.
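The per-channel counts above follow directly from 2^bits. A small sketch (my own illustration, assuming three RGB channels per pixel) reproduces them along with the total color counts:

```python
# Levels of grey per channel at each bit depth, and the total
# number of colors across three channels (levels ** 3).
for bits in (8, 10, 12, 16):
    levels = 2 ** bits
    colors = levels ** 3
    print(f"{bits}-bit: {levels:,} levels/channel, {colors:,} colors")
```

At 16 bits that works out to 281,474,976,710,656 possible colors, which is where the "trillions" figure comes from.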

A 16-bit color space holds 256 times as much information as is captured when recording 8-bit video. That means for every level of grey in 8-bit, the 16-bit file offers users an additional 255 levels of information, which can translate into huge gains in image quality and control over luminance in the master file. With all this extra information buried within the camera master, one sees that the work behind HDR is now being incorporated behind the lens: the newest cameras coming to market will soon allow the next generation of digital cameras to record contrast and luminance ranges beyond what the best film ever could.
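The 256x figure is simply the ratio of the two per-channel level counts (a one-line check of my own, not from the article):

```python
# 16-bit offers 65,536 levels per channel vs. 256 for 8-bit, so
# every 8-bit code value maps onto a block of 256 16-bit values:
# the matching level plus 255 additional intermediate levels.
ratio = (2 ** 16) // (2 ** 8)
print(ratio)      # 256 times as many levels
print(ratio - 1)  # 255 extra levels per 8-bit step
```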

All of this just to throw some of it away?

That is the best part about working with this much information: being able to move the color around to best accommodate the differences between formats when converting files for various display or viewing types. We have all seen banding in our images at one time or another; those broken steps of density happen when there is not enough information to fill in the gaps. With all that extra color info, most of our existing tools can handle conversions that offer the viewer the highest quality without sacrifice.
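Banding can be sketched numerically: quantize the same smooth gradient at 8 and 10 bits and count how many distinct steps survive inside a narrow tonal range, such as a sky occupying the bottom 5% of the brightness scale. (The function and the 5% slice are my own illustrative choices.)

```python
# Count the distinct quantized code values a smooth gradient
# produces within a narrow slice of the brightness scale.
def distinct_steps(bit_depth, lo=0.0, hi=0.05, samples=10_000):
    top = 2 ** bit_depth - 1  # highest code value at this depth
    return len({round((lo + (hi - lo) * i / samples) * top)
                for i in range(samples + 1)})

print(distinct_steps(8))   # 14 steps -> visible banding
print(distinct_steps(10))  # 52 steps -> far smoother
```

Fourteen steps stretched across a large area of sky is exactly the staircasing the article describes; four times as many steps at 10-bit pushes the jumps below what the eye can pick out.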


About the author

Contributor Gary Adcock specializes in creating and streamlining workflows for episodic and feature production and post-production. Gary's extensive knowledge of camera technology, image acquisition and on-set data workflows, with an emphasis on how camera and production technologies relate to post-production and delivery, makes him regarded by many as one of the most knowledgeable resources in the industry. Gary assisted in writing the Data Handling Procedures and Best Practices documentation for IATSE Local 600 (Cinematographers Guild) as part of an ongoing education initiative project. Gary also regularly contributes to Macworld Magazine, as well as now being a regular contributor on, he is also a moderator and blogger at. Gary's client list includes Adobe, Aja Video, Apple, Arri, Autodesk, Sony, Panasonic and JVC; media outlets like CNN, MSNBC, Discovery Networks, MTV, WGBH (Antiques Roadshow), FOX, National Geographic Inc. and the Nat Geo Networks worldwide; and commercial clients including such prestigious brands as McDonalds, Taco Bell, HBO, MLB, NASA, Citibank, NBA Entertainment and NFL Films. Recent production work has included Transformers: Dark of the Moon (2011) (3rd unit), Just Like a Woman (2012) (data and dailies colorist), NBC's Playboy Club (camera tech) and ABC's Detroit 1-8-7 (remote recording technologies). Gary can be followed on Twitter @garyadcock or on his new WordPress blog about technology and the Chicago food scene at
