>>> import numpy as np
>>> from colour.difference import delta_E_CIE1994, delta_E_CIE2000
>>> Lab_1 = np.array([100.00000000, 21.57210357, 272.22819350])
>>> Lab_2 = np.array([100.00000000, 21.57210357, 269.22819350])
>>> delta_E_CIE1994(Lab_1, Lab_2)
0.22984929951933436
>>> delta_E_CIE1994(Lab_2, Lab_1)
0.23218794545964785
>>> delta_E_CIE2000(Lab_1, Lab_2)
0.23669273745958291
>>> delta_E_CIE2000(Lab_2, Lab_1)
0.23669273745958291
The first consideration here is whether your camera data is properly linearised, i.e. it is devoid of artistic tweaks (e.g. tone curves, LUTs) and has been decoded to linear RGB values. Only then can you do an RGB to Lab conversion that is meaningful.
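For illustration, a minimal sketch of that conversion with the colour library, assuming sRGB-encoded camera data and a D50 white point; the RGB value is a hypothetical placeholder and the exact attribute names (e.g. CCS_ILLUMINANTS) and parameter defaults vary between colour versions:

import numpy as np
import colour

# Hypothetical camera RGB value for one chart patch, normalised to [0, 1].
RGB = np.array([0.45, 0.31, 0.26])

# D50 white point chromaticity coordinates.
D50 = colour.CCS_ILLUMINANTS["CIE 1931 2 Degree Standard Observer"]["D50"]

# Decode the sRGB values to linear light, convert to CIE XYZ adapted to D50,
# then to CIE L*a*b* relative to the same white point.
XYZ = colour.sRGB_to_XYZ(RGB, illuminant=D50)
Lab = colour.XYZ_to_Lab(XYZ, illuminant=D50)
print(Lab)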
Question 1) A spectrometer is the way to go, indeed.
Question 2) In order to remove the illuminant from the problem/"equation", you would need to measure both your chart and the illumination source spectrally, compute the chart values under the measured illuminant, and then compare those with the camera-captured values of the chart under the same illuminant.
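As a rough sketch of that computation with colour, where a flat reflectance and a built-in D50 distribution stand in for the spectrally measured patch and light source (attribute names such as SDS_ILLUMINANTS and MSDS_CMFS depend on the colour version):

import colour

# Stand-ins for measured spectral data: a flat 60 % reflectance for one chart
# patch and a built-in D50 distribution for the light box illuminant.
patch_sd = colour.SpectralDistribution(
    {wl: 0.6 for wl in range(380, 781, 10)}, name="patch"
)
illuminant_sd = colour.SDS_ILLUMINANTS["D50"]
cmfs = colour.MSDS_CMFS["CIE 1931 2 Degree Standard Observer"]

# Tristimulus values of the patch under the measured illuminant ...
XYZ = colour.sd_to_XYZ(patch_sd, cmfs, illuminant_sd) / 100

# ... converted to CIE L*a*b* relative to that illuminant's own white point.
white_xy = colour.XYZ_to_xy(colour.sd_to_XYZ(illuminant_sd, cmfs))
Lab = colour.XYZ_to_Lab(XYZ, illuminant=white_xy)
print(Lab)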
Ok. So everything you are saying would allow me to isolate color differences caused purely by the camera, eliminating differences due to the illuminant from the problem. That all makes sense to me. I'm still digesting some of what you are saying, so bear with me here. A few things:
So, from what I understand of your response, is it completely incorrect and not meaningful to convert the RGB data I get from the camera to L*a*b* space without knowing the chromaticity of my illuminant? Or could I just use D50 as my illuminant and assume that the DeltaE I get between the X-Rite published values and my measured values is due to a combination of the camera and the illuminant? Does that approach make any sense at all?
So I recognize there are a few sources of error here:
1. Physical differences between my individual color checker and the X-Rite published values.
2. The camera itself.
3. The illuminant.
I'm willing to neglect 1 as small compared to 2 and 3 and ignore it outright. 2 and 3 are the issues I'm interested in, and I'm wondering if I can measure the DeltaE due to both of those effects simultaneously. So, if I assume my illuminant is D50 (which it's not), and convert to L*a*b* I will get some numbers. I guess my fundamental question is this: Does it make any sense to compare the resulting values of my conversion to the published values at all? Would that give me a quantitative measure of my color difference due to both the camera and my illuminant?
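For a single patch, that comparison would boil down to something like the following, where both L*a*b* triplets are hypothetical placeholder values (the published X-Rite reference, and the value computed from the camera RGB under the assumed D50 illuminant):

import numpy as np
import colour

# Hypothetical values for one patch: the published X-Rite reference and the
# value computed from the camera RGB assuming a D50 illuminant.
Lab_published = np.array([37.99, 13.56, 14.06])
Lab_measured = np.array([39.20, 15.10, 12.80])

# Colour difference between the two, here with the CIEDE2000 formula.
print(colour.delta_E(Lab_published, Lab_measured, method="CIE 2000"))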
Are you trying to achieve a particular goal?
Yes. Basically, I built two versions of a light box, with a different lighting configuration and LED model in each. The same camera is used in both light boxes. Pictures taken in the first version have very inaccurate color rendering. The second one, to my eye, looks much better. I'm trying to prove to others, in a quantitative, non-subjective way, that the second version with the new LEDs drastically improves color rendering, rather than just showing them pictures from the second one and letting them decide whether it's an improvement or not. Additionally, I also want to have some quantitative way of measuring the improvement in color rendering relative to the first version as we make adjustments in the second version.
Yes, but this would be hard to use meaningfully.
What do you mean by "use meaningfully"? I'm mainly using this computation as a measure of relative improvement in color accuracy between two photographic environments, holding all other things fixed except the shape of the light box and the illuminant used. Is my computation meaningful when used to measure a relative improvement in color accuracy between two versions of my light box? I guess I'm still not grasping how the calculation I'm proposing can't be used in a general sense. The way I'm looking at this problem, there are two points in L*a*b* space I'm dealing with:
1. The published X-Rite value for a given patch.
2. The value I compute from the camera's RGB capture of that patch, assuming a D50 illuminant.
Ignoring actual physical differences in the color checker, in my mind the DeltaE distance between these two points is a cumulative measure of all error sources in my system, is it not? Does that not have meaning in some sense? Am I missing some critical piece of understanding about the science and math of this problem space? I'm fully willing to accept my own naivety, as everything I know about color science I learned piecemeal while working on this project, collecting and reading sources online as I needed them.
Additionally, I also want to have some quantitative way of measuring the improvement in color rendering relative to the first version as we make adjustments in the second version.
Irrespective of the camera, you can measure the quality of the light itself. Assuming you have its spectral distribution, you could use metrics such as CQS or CRI to do so; we have TM-30-18 and CFI17 in a feature branch, but they are not merged yet.
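As a small sketch with colour, where a built-in fluorescent distribution stands in for the spectrometer measurement of the LEDs (the SDS_ILLUMINANTS key naming depends on the colour version):

import colour

# A built-in distribution stands in for the measured LED spectrum; in practice
# this would be the spectral distribution read from the spectrometer.
led_sd = colour.SDS_ILLUMINANTS["FL2"]

# Colour Rendering Index and Colour Quality Scale of the light source.
print(colour.colour_rendering_index(led_sd))
print(colour.colour_quality_scale(led_sd))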