An SLM color space
AFAIK, every existing color space has its strengths and weaknesses with respect to how much of the range of colors the human eye can actually distinguish it can represent. And translating from one color space to another requires either a dedicated algorithm for each pair of color spaces, or an intermediate color space, which means *two* steps where loss of information can occur.
The solution:
The human eye physically perceives color in three dimensions: a 420-nm resonance, a 564-nm resonance, and a 534-nm resonance (the S, L and M cone types). My proposed color space would simply store a color in terms of how much it excites those three respective cone cell types, thus covering human vision's entire color space (sorry to the tetrachromats) in only three values, just like HSB, Lab and RGB. I believe the reason this color space hasn't been used so far is that you can't use it to directly reproduce (display) a given color--e.g., if you shined a 564-nm light (even a monochromatic one) at the given intensity, it would also excite the 534-nm receptors to some degree. However, given a specific display device's color space, you could translate a color from this color space into that one for display or for other format conversions. Thus this color space could serve as an intermediate/universal color space when translating between color spaces, or for digital photographic manipulation. AFAIK, even Lab and HSB aren't used directly for displays or pigment combinations anyway.
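To make that "translate for display" step concrete, here is a rough Python sketch. It borrows the Hunt-Pointer-Estevez XYZ-to-LMS matrix and the standard XYZ-to-linear-sRGB matrix as stand-ins for "this color space" and "a specific display device's color space"; a real implementation would plug in whatever cone fundamentals and display primaries it actually targets, and handle out-of-gamut colors less crudely than clipping.

```python
# Sketch: convert a stored (S, L, M) cone-excitation triple to sRGB for display.
import numpy as np

XYZ_TO_LMS = np.array([            # Hunt-Pointer-Estevez, D65-normalized
    [ 0.4002,  0.7076, -0.0808],
    [-0.2263,  1.1653,  0.0457],
    [ 0.0,     0.0,     0.9182],
])

XYZ_TO_RGB = np.array([            # CIE XYZ -> linear sRGB (D65)
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def slm_to_srgb(s, l, m):
    """Map cone excitations (S, L, M) to gamma-encoded sRGB in [0, 1]."""
    lms = np.array([l, m, s])              # the matrix convention orders rows L, M, S
    xyz = np.linalg.inv(XYZ_TO_LMS) @ lms  # back to XYZ...
    rgb = XYZ_TO_RGB @ xyz                 # ...then into the display's primaries
    rgb = np.clip(rgb, 0.0, 1.0)           # out-of-gamut colors simply get clipped here
    # sRGB transfer function (gamma encoding)
    return tuple(12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
                 for c in rgb)

print(slm_to_srgb(0.05, 0.7, 0.6))         # some arbitrary cone-excitation triple
```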
Also, digital cameras/video cameras should be made with sensors tuned to these respective resonant frequencies instead of the typical red, green and blue, and should save in that format. (You may not be able to *display* colors in SLM primaries, but if the eye can perceive at those resonant frequencies, a camera should be able to as well.) This would capture images in a more true-to-form way, which means images could be reproduced optimally on *any* display device or printer, and photo-manipulation algorithms would incur less loss of information. Of course, the camera's PC software should be able to transparently convert those files to the currently widely used standards, or the camera itself should optionally save in either or both formats. This may require special filters whose spectral envelopes match those of the cone types' sensitivities.
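For the capture side, the computation the sensor (or its raw-conversion software) would effectively be doing is: weight the incoming spectral power distribution by each cone type's sensitivity curve and integrate over wavelength. The curves in this sketch are crude Gaussian placeholders peaked near the wavelengths named above, not real cone fundamentals (those exist only as measured tables), and `capture_slm` plus the flat example spectrum are made up purely for illustration.

```python
# Sketch of the capture step: S/L/M = spectrum weighted by each sensitivity
# curve, summed over wavelength.  Placeholder curves, not real cone data.
import numpy as np

wavelengths_nm = np.arange(400, 701, 10)       # 400-700 nm in 10 nm steps

def placeholder_curve(peak_nm, width_nm=45.0):
    """Stand-in 'cone sensitivity': a Gaussian bump around the peak wavelength."""
    return np.exp(-0.5 * ((wavelengths_nm - peak_nm) / width_nm) ** 2)

S_CURVE = placeholder_curve(420)
M_CURVE = placeholder_curve(534)
L_CURVE = placeholder_curve(564)

def capture_slm(spectrum):
    """Integrate a spectral power distribution against each sensitivity curve."""
    spectrum = np.asarray(spectrum, dtype=float)
    return tuple(np.trapz(curve * spectrum, wavelengths_nm)
                 for curve in (S_CURVE, L_CURVE, M_CURVE))

# Example: a flat (equal-energy) spectrum excites all three cone types.
flat = np.ones_like(wavelengths_nm, dtype=float)
print(capture_slm(flat))
```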
Tuesday, July 29, 2008
1 comment:
I've since found out that one reason the absolute color space models in use, such as CIE, are used is that they're easily computable. SLM wouldn't be easily computable because the cone response curves aren't governed by equations so much; they're just measured curves, so it would take a lot more CPU.
The thing is that "a lot more CPU" is a lot cheaper nowadays (*at least* on PCs, if not on devices).