Thomas Mansencal
@KelSolaar
The functions will not be able to fully fit the target and source spaces, so some differences are expected. Worth noting that polynomial methods might be subject to "explosions" outside the domain they are fitted on. Put another way, they are very good at interpolating the colours defined by the target and source spaces, but extrapolation outside that domain is bound to behave very unexpectedly.
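To illustrate (a minimal sketch with made-up swatch data, using the 'Vandermonde' polynomial method as an example; the source and target arrays are purely illustrative):

import numpy as np
import colour

np.random.seed(4)
source = np.random.random((24, 3))  # made-up "camera" swatches
target = np.random.random((24, 3))  # made-up "reference" swatches

# Inside the fitted domain, the degree 3 polynomial behaves sensibly...
print(colour.colour_correction(
    [0.5, 0.5, 0.5], source, target, method='Vandermonde', degree=3))
# ...but far outside the swatches it was fitted on, it can "explode".
print(colour.colour_correction(
    [4.0, 4.0, 4.0], source, target, method='Vandermonde', degree=3))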
Dani
@otivedani
Hi @KelSolaar, thank you for helping me to understand! I will get to learn more from the thread too. In other words, the functions take 3 -> 3 (as in 3D, rather than 1D for each channel), right?
What I mean about creating a 3D LUT is: I have an idea to apply the correction to a neutral 3D LUT image rather than to the source image itself, like this: https://streamshark.io/obs-guide/converting-cube-3dl-lut-to-image , so the adjustment could be reused. Do you think that is going to work?
Thomas Mansencal
@KelSolaar
Yep, that would work! Note that Colour can also write 3D LUTs, e.g. my_lut = colour.LUT3D(size=33); colour.io.write_LUT(my_lut, 'my_lut.cube')
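For the neutral 3D LUT idea specifically, something along these lines should work (a sketch, untested against your data; the measured and reference arrays are placeholders for your swatch values):

import numpy as np
import colour

measured = np.random.random((24, 3))   # placeholder measured swatches
reference = np.random.random((24, 3))  # placeholder reference swatches

# Start from an identity 3D LUT and bake the correction into its table.
my_lut = colour.LUT3D(size=33)
my_lut.table = colour.colour_correction(
    my_lut.table, measured, reference, method='Finlayson 2015')

colour.io.write_LUT(my_lut, 'my_lut.cube')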
Dani
@otivedani
Great! Thank you, this lib is awesome!
Thomas Mansencal
@KelSolaar
You are welcome! :)
Geetansh Saxena
@SGeetansh
Hey @KelSolaar,
I wanted to work on #796 and just required a little guidance on what information and output format are needed.
Marianna Smidth Buschle
@msb.qtec_gitlab
Hi, I need some help regarding using "colour_correction".
I have a x-rite color chart with 24 colors, and I have found these values online as the reference:
reference_colors = [[115,82,68],[194,150,130],[98,122,157],[87,108,67],[133,128,177],[103,189,170],
[214,126,44],[80,91,166],[193,90,99],[94,60,108],[157,188,64],[224,163,46],
[56,61,150],[70,148,73],[175,54,60],[231,199,31],[187,86,149],[8,133,161],
[243,243,242],[200,200,200],[160,160,160],[121,122,121],[85,85,85],[52,52,52]]
and I have read the values from a raw image from my custom camera:
extracted_colors = [[60,54,59],[131,93,86],[50,60,83],[40,44,41],[82,76,108],[88,134,147],
[158,95,68],[40,49,90],[128,57,54],[41,31,46],[99,117,71],[172,120,72],
[35,43,80],[45,71,58],[113,43,39],[188,147,72],[136,65,84],[44,80,118],
[213,211,212],[146,146,150],[96,96,100],[58,58,63],[38,38,45],[28,30,39]]
and I want to use it like this:
corr_color = colour.colour_correction(extracted_colors[k], extracted_colors, reference_colors, method='Finlayson 2015')
I am, however, in doubt about the need to normalize them from the [0, 255] to the [0, 1] range, and about the need to linearize.
From what I understand, the reference values I found online should be sRGB, so I expect I need to linearize them?
And is that with "colour.models.eotf_inverse_sRGB()" or "colour.models.eotf_sRGB()"?
Also, I believe my image is pure raw and doesn't have any gamma encoding.
Thomas Mansencal
@KelSolaar
Hi @msb.qtec_gitlab!
So it looks like you are using 8-bit sRGB values here, so you must convert the array to float, divide by 255 and then apply eotf_sRGB to decode:
>>> colour.models.eotf_sRGB(colour.utilities.as_float_array([122, 122, 122]) / 255)
array([ 0.19461783,  0.19461783,  0.19461783])
Marianna Smidth Buschle
@msb.qtec_gitlab

Thanks, yeah, I also came to that conclusion after seeing examples and comments in different places.
I ended up just using the reference colors from Colour, since I could see my X-Rite chart was from January 2014:

import colour

# Target whitepoint for the reference swatches.
D65 = colour.CCS_ILLUMINANTS['CIE 1931 2 Degree Standard Observer']['D65']
# A chart manufactured in January 2014 matches the pre-November 2014 data.
REFERENCE_COLOUR_CHECKER = colour.CCS_COLOURCHECKERS['ColorChecker24 - Before November 2014']
# Convert the xyY swatch values to XYZ, then to linear sRGB under D65.
REFERENCE_SWATCHES = colour.XYZ_to_RGB(
    colour.xyY_to_XYZ(list(REFERENCE_COLOUR_CHECKER.data.values())),
    REFERENCE_COLOUR_CHECKER.illuminant, D65,
    colour.RGB_COLOURSPACES['sRGB'].matrix_XYZ_to_RGB)

Based on the example from https://github.com/colour-science/colour-checker-detection/blob/develop/colour_checker_detection/examples/examples_detection.ipynb
I also tried converting the reference I previously had from X-Rite by doing what you said (I had also tried cctf_decoding()/cctf_encoding() as in the example above), and I could see that the values were quite close but not really an exact match...
Is that because of all the colorspace conversions (xyY -> XYZ -> RGB)?

I can also see that the CCM really improves my images, even though it is not perfect.
The images are white balanced in advance, and the gain and exposure time have been adjusted to maximize the dynamic range while avoiding clipping.
Are there any more steps I should apply before the correction that could improve the results further?
The biggest improvement from applying the CCM seems to be in the color saturation.
Marianna Smidth Buschle
@msb.qtec_gitlab
Lastly, I have the option to apply the correction to either RGB or YCrCb images; is there a colorspace that is better or worse for that?
Thomas Mansencal
@KelSolaar
It might be better to apply the correction in a perceptually uniform space, but it really depends on what you are trying to achieve, e.g. are you trying to minimize errors between different cameras, or between a camera and the standard observer?
You could use Finlayson (2015), but it is highly conditioned by the input data, and here a ColorChecker 24 is often not enough.
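As a rough sketch of what fitting in a perceptually uniform space could look like (assuming display-referred sRGB swatches and CIE Lab; the camera and reference arrays are placeholders):

import numpy as np
import colour

def sRGB_to_Lab(RGB):
    # Decode sRGB and convert to CIE Lab, D65 assumed throughout.
    return colour.XYZ_to_Lab(colour.sRGB_to_XYZ(RGB))

camera = np.random.random((24, 3))     # placeholder camera swatches
reference = np.random.random((24, 3))  # placeholder reference swatches

# Fit and apply the correction on Lab coordinates instead of RGB.
corrected_Lab = colour.colour_correction(
    sRGB_to_Lab(camera), sRGB_to_Lab(camera), sRGB_to_Lab(reference))
corrected_RGB = colour.XYZ_to_sRGB(colour.Lab_to_XYZ(corrected_Lab))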
Geetansh Saxena
@SGeetansh
@KelSolaar Hey Thomas, not to disturb you, but you were about to send a sample file, right? Also, do you want me to layer the two sets and send you a mini report?
Thomas Mansencal
@KelSolaar
@SGeetansh : Unfortunately, I did not manage to get a hold of some of the UPRTek data yet, it is coming though! It would be great to have the data layered indeed, we can do that wherever you want!
Quicker might be to cull the spectral data, save it as two different CSV files and load them with Colour directly.
Geetansh Saxena
@SGeetansh
@KelSolaar Thanks!
Geetansh Saxena
@SGeetansh
@KelSolaar I am a little confused about where the code for this would go. I see we currently have a function read_spectral_data_from_csv_file. Are there parsers for any instrument types other than Sekonic and UPRTek already implemented that I can have a look at? If not, where should I fit the new parser?
Thomas Mansencal
@KelSolaar
No, we don't really have any equivalent so far!
We would probably add two new parsers, one for Sekonic and one for UPRTek, the latter sharing most of its code with the former as they are, for practical purposes, the same.
Can we move this discussion to the issue though? It will be easier to track down later.
Geetansh Saxena
@SGeetansh
Sure. My bad.
Marianna Smidth Buschle
@msb.qtec_gitlab
@KelSolaar We have a custom machine vision camera which is being used for live sports streaming (ice hockey), and the goal is basically to make the picture look good, with truer, more vibrant colors (we actually have several cameras, so they also have to look similar).
We can definitely see that we are missing out on color saturation, and that is mostly what the CCM I tried generating seems to correct.
Because of the live requirement, going to CIE Lab or similar is a bit too costly, so it would be preferable to stay in the native RGB, or in the YUV we transform to before encoding to H.264.
I had the impression that with degree=1 all methods were pretty much equivalent, and that Finlayson differed when going to higher orders?
And because it is indoor sports the illumination won't change, so I expected that doing the calibration once (like I already do for white balance and exposure) would work...
Thomas Mansencal
@KelSolaar

I had the impression that with degree=1 all methods were pretty much equivalent, and that Finlayson differed when going to higher orders?

Correct!

it is indoor sports the illumination won't change, so I expected that doing the calibration once (like I already do for white balance and exposure) would work...

Yes! This is almost the ideal scenario: ideally you would sample the illumination with a spectrometer, e.g. a Sekonic C7000, and measure the chart reflectances with another one, e.g. an X-Rite I1 Pro. From there you could generate the reference chart values under the illumination of the location and calibrate the camera against that.

Marianna Smidth Buschle
@msb.qtec_gitlab
@KelSolaar So if I want the camera to be able to reproduce what a spectator in the stadium is seeing, I would have to measure the reference chart values under that illumination with an X-Rite I1 Pro, and preferably do the calibration and correction in a perceptually uniform space like CIE Lab.
But I don't get why I also need to "sample the illumination with a spectrometer, e.g. Sekonic C7000"? (Does the "X-Rite I1 Pro" require those values?)
And, if like now I use the reference values for D65, I would be "distorting" the picture to look like it was taken under such illumination? That might not be a very big problem for me if the picture is pretty, even though it might not be exactly the same color a person would see live. Or is there more I should be aware of?
I have read the blog post you mentioned, but as my camera is a custom one it hasn't been "carefully color calibrated"; this is what I am trying to achieve in some sense ;)
Thomas Mansencal
@KelSolaar
So measuring the reflectances makes them illuminant-independent, so that you can then integrate them again with an illuminant of your choice!
In this case it would be to generate reference chart values with the illumination that you have measured in the indoor hockey location.
Then, with the chart captured by the cameras under that illumination, you can correct much more precisely, because the reference values are for the actual location, not an unrelated illuminant.
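A sketch of that integration step (the FL2 illuminant and the N Ohta reflectance dataset stand in here for your own measurements):

import numpy as np
import colour

# Placeholder for the illumination measured on location.
location_sd = colour.SDS_ILLUMINANTS['FL2']
# Placeholder for the measured chart reflectances.
swatch_sds = colour.SDS_COLOURCHECKERS['ColorChecker N Ohta']

cmfs = colour.MSDS_CMFS['CIE 1931 2 Degree Standard Observer']

# Integrate every swatch reflectance under the location illumination.
reference_XYZ = np.array([
    colour.sd_to_XYZ(sd, cmfs, location_sd) / 100
    for sd in swatch_sds.values()
])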
Marianna Smidth Buschle
@msb.qtec_gitlab
So the "X-Rite I1 Pro" can't directly measure the reference values by just illuminating it with the light source in question?
It needs information which I need to obtain by sampling the illumination with a spectrometer, e.g. Sekonic C7000?
Thomas Mansencal
@KelSolaar
So you need to measure reflectance and irradiance separately; the idea is to measure reflectance once for your chart, and then measure irradiance on location.
You can measure ambient light with an i1 Pro, but it might not be the best tool for the job; it might work though, there is a head for that: http://cdn.northlight-images.co.uk/wp-content/uploads/2018/12/ambient-head.jpg
Volker Jaenisch
@volkerjaenisch_gitlab
Hi! To introduce myself: I am working on scientific projects with my small company, inqbus.de. We are mostly into open-source development for our customers.
Volker Jaenisch
@volkerjaenisch_gitlab
I have a stupid question, stupid in that I have no real background in spectroscopy besides a degree in physics. I have the spectral intensity of the LED illuminator (measured by a DOAS spectrometer). I also have the spectral reflectivity of the X-Rite checker card from the literature. And I have camera RAW images of a probe and of the checker card, both illuminated by the LED. I would like to do a manual white point calibration of the camera data based on the checker card.
  • In what color space should I plug the camera raw data into Colour?
  • In which color space shall I determine the transformation matrix for the probe photo?
Thomas Mansencal
@KelSolaar
Hi @volkerjaenisch_gitlab!
So you could integrate the reference reflectances of the X-Rite ColorChecker with the spectral distribution of the LED, which gives you the reference values; then you would really only need to compute a colour correction matrix between the camera RGB values and those of the ColorChecker under the LED light.
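A sketch of that last step (placeholder arrays; reference_led_rgb would come from integrating the chart reflectances with your LED spectrum, as discussed above):

import numpy as np
import colour

camera_rgb = np.random.random((24, 3))         # placeholder camera swatches
reference_led_rgb = np.random.random((24, 3))  # placeholder LED references

# The 3x3 matrix mapping camera RGB onto the reference values...
CCM = colour.characterisation.matrix_colour_correction(
    camera_rgb, reference_led_rgb)
# ...which colour_correction fits and applies in one go.
corrected = colour.colour_correction(camera_rgb, camera_rgb, reference_led_rgb)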
Volker Jaenisch
@volkerjaenisch_gitlab
@KelSolaar Thank you for the fast response. What is the best practice for deriving the CCM? The CCM I obtain depends on the color space I do the fitting in. Am I correct that if I do the fitting in the Lab color space, the corrected image will be more "authentic for humans"? Or, the other way around: which color space is "best" for getting the most "authentic" representation?
Thomas Mansencal
@KelSolaar
I guess it depends on the context: if you are using repeatable, fixed lighting conditions and your prime usage is producing images for human consumption, it is fine to use a perceptually uniform colourspace.
The risk, though, is that the process is not exposure-invariant at that point, and thus hues will twist as you change the exposure of the image.
Volker Jaenisch
@volkerjaenisch_gitlab
@KelSolaar Thank you for this fine answer. In our project (taking archive photos of filter probes) the context is indeed fixed by design: exposure, gain and optics do not change. In a next step, a spectrometer will even be included in the device to regularly measure the aging of the illuminating LEDs.
Thomas Mansencal
@KelSolaar
Sounds like a good case to try a perceptual space, yeah.
You could try a few and see which one reduces the Delta E.
I would also try a regular RGB one for verification.
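Something like this would do for that verification loop (placeholder swatches; assumes linear sRGB values and CIE 2000 for the Delta E):

import numpy as np
import colour

camera = np.random.random((24, 3))     # placeholder camera swatches
reference = np.random.random((24, 3))  # placeholder reference swatches

def to_Lab(RGB):
    # Linear sRGB swatches to CIE Lab for the Delta E computation.
    return colour.XYZ_to_Lab(colour.sRGB_to_XYZ(RGB, apply_cctf_decoding=False))

corrected = colour.colour_correction(camera, camera, reference)
print(np.mean(colour.delta_E(to_Lab(corrected), to_Lab(reference), method='CIE 2000')))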
Ravi Yadav
@raviy0807
Hello everyone, has anyone tried RAW to sRGB conversion of a DNG file using the Colour library? I am getting an issue where the image is produced with very high values, i.e. the image is supposed to be in the [0, 1] range, but after the color space conversion from cam_to_xyz_srgb, the range of the image is [-0.2, 3.4].
The complete description is written here: colour-science/colour-hdri#19