Why is YUV/YIQ often more convenient to work with than RGB? Where is it used?
As far as I understand, color models like YIQ, YUV, YCbCr, etc. are
just "rotated" versions of RGB. That is, they are all RGB multiplied
by some matrix (an affine transform in the case of YCbCr, because of
the offset added to the chroma components).
I assume the big advantage of these models is that you have the
luminance ("brightness") information in one single component (Y) and
the chrominance (color) information in the other two, while in RGB
these characteristics are implicitly spread across all three
components.
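To make concrete what I mean by "rotated RGB", here is a quick sketch: both conversions below are just a 3x3 matrix applied to an RGB vector, and full-range YCbCr adds an offset to the chroma components, which is why I called it affine. The coefficients are the BT.601 / JPEG ones I found, so take the exact numbers as an assumption on my part.

```python
import numpy as np

# YIQ (NTSC): a pure 3x3 matrix; the first row is the luma weights
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],
    [0.596, -0.274, -0.322],
    [0.211, -0.523,  0.312],
])

# Full-range YCbCr (as used by JPEG): same luma row, chroma gets +128
RGB_TO_YCBCR = np.array([
    [ 0.299,     0.587,     0.114],
    [-0.168736, -0.331264,  0.5],
    [ 0.5,      -0.418688, -0.081312],
])
YCBCR_OFFSET = np.array([0.0, 128.0, 128.0])

rgb = np.array([200.0, 120.0, 50.0])          # an arbitrary pixel
yiq   = RGB_TO_YIQ   @ rgb                    # matrix only
ycbcr = RGB_TO_YCBCR @ rgb + YCBCR_OFFSET     # matrix + offset = affine
print(yiq[0], ycbcr[0])   # both Y values are 0.299*R + 0.587*G + 0.114*B
```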
I can imagine that having them separated is useful for image
processing, compression (because this way you can easily keep more
luminance than chrominance info, since the human eye is more sensitive
to luminance differences), or displaying on black & white TVs (just
display the Y channel). This is also true for the HSV (or HSI, or HSL,
or whatever you want to call it) model, which also has one luminance
and two color components, although the HSV model is of course not an
RGB-rotated variant like the rest.
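The "keep more luma than chroma" idea is what I picture roughly like the toy version below (in the spirit of 4:2:0 subsampling in JPEG/MPEG): Y stays at full resolution while Cb/Cr are stored at half resolution in each direction. The function names are mine, not from any particular library.

```python
import numpy as np

def subsample_chroma(ycbcr_image):
    """ycbcr_image: float array of shape (H, W, 3), H and W even."""
    y  = ycbcr_image[:, :, 0]        # luma kept at full resolution
    cb = ycbcr_image[::2, ::2, 1]    # chroma keeps 1/4 of the samples
    cr = ycbcr_image[::2, ::2, 2]
    return y, cb, cr

def upsample_chroma(y, cb, cr):
    """Rebuild an (H, W, 3) image by repeating each chroma sample 2x2."""
    cb_full = np.repeat(np.repeat(cb, 2, axis=0), 2, axis=1)
    cr_full = np.repeat(np.repeat(cr, 2, axis=0), 2, axis=1)
    return np.stack([y, cb_full, cr_full], axis=-1)
```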
Is this right so far, or did I completely miss the actual purpose of
those models? :)
Now, what I wonder is: are there any essential differences between these
color models? I noticed YCbCr uses somewhat different RGB weights for
the luminance than YIQ and YUV. But beyond that, what's the point of
all these different models? And how about CIE XYZ, Lab, or other
models, are they also based on one intensity + two color channels?
What are generally their advantages in comparison with the rest?
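About the different luma weights I noticed: as far as I can tell they come from which standard defines the conversion (BT.601, which YIQ/YUV and JPEG YCbCr share, versus BT.709 for HDTV YCbCr) rather than from YIQ vs. YCbCr as such, but correct me if I'm wrong. A quick comparison of what I mean:

```python
import numpy as np

LUMA_BT601 = np.array([0.299,  0.587,  0.114])    # YIQ, YUV, JPEG YCbCr
LUMA_BT709 = np.array([0.2126, 0.7152, 0.0722])   # HDTV (BT.709) YCbCr

rgb = np.array([200.0, 120.0, 50.0])
print(LUMA_BT601 @ rgb, LUMA_BT709 @ rgb)   # slightly different Y values
```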
More specifically, I'm working on some image processing software that
mostly depends on getting accurate luminance information (for example
for edge detection), so I thought using the right color model might
make a lot of difference.
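For context, this is the kind of thing I'm doing: convert to a luma channel first, then run the edge detector only on Y. A minimal sketch of my current approach; the BT.601 weights are just the ones I happen to use, not a requirement.

```python
import numpy as np
from scipy import ndimage

def luma_edges(rgb_image):
    """rgb_image: float array of shape (H, W, 3) in [0, 255]."""
    y  = rgb_image @ np.array([0.299, 0.587, 0.114])  # (H, W) luma plane
    gx = ndimage.sobel(y, axis=1)                      # horizontal gradient
    gy = ndimage.sobel(y, axis=0)                      # vertical gradient
    return np.hypot(gx, gy)                            # edge magnitude
```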
just "rotated" versions of RGB. That is, they are all RGB multiplied
by some matrix (an affine matrix in case of YCbCr).
I assume the big advantage of these models is that you have the
luminance ("brighness") information in one single component (Y) and
the chrominance (color) information in the other two, while in RGB
these characteristisc are implicitly represented in all three
components.
I can imagine that having them seperated is useful for image
processing, compression (cause this way you can easily keep more
luminance than chrominance info, since the human eye is more sensitive
to luminance differences), or displaying on black & white TV's (just
display the Y channel). This is also true for the HSV (or HSI, or HSL,
or whatever you want to call it) model, which also has one luminance
and two color components, although the HSV model is ofcourse not an
RGB-rotated variant like the rest.
Is this right so far, or did I completely miss the actual purpose of
those models? :)
Now, what I wonder, are there any essential differences between these
color models? I noticed YCbCr uses somewhat difference RGB-weights for
the luminance than YIQ and YUV. But for the rest, what's the big fun
of these different models? And how about CIE xyZ, Lab, or other
models, are they all also based on one intensity + two color channels?
What are generally their advantages in comparison with the rest?
More specifically, I'm working on some image processing software that
mostly depends on getting accurate luminance information (for example
for edge detection), so I thought using the right color model might
make a lot of difference.