Thomas Mansencal
@KelSolaar
You will also need imageio
We have a reasonably complete guide here: https://www.colour-science.org/installation-guide/
And another for contributing there: https://www.colour-science.org/contributing/
Geetansh Saxena
@SGeetansh

https://github.com/colour-science/colour/runs/1862191569?check_suite_focus=true

I just checked the CI and, apparently, it isn't running any tests for Windows Python 3.8.

Installation was successful. Thanks!

That might be where the problem is coming from then. Can you check that matplotlib is installed in the same environment as colour and working?

Saransh Chopra
@Saransh-cpp
I got busy and was not able to work on the PR further. @KelSolaar, I tried both of the suggestions you left in the comments, but the tests are still failing. Can you please go through the PR once again whenever you get the time?
Thomas Mansencal
@KelSolaar
@SGeetansh : yes, we are aware that the tests are not running for Python 3.8 on Windows; we haven't had time to figure out why, as there is no obvious cause.
Geetansh Saxena
@SGeetansh
They are not running locally either.
Geetansh Saxena
@SGeetansh
@KelSolaar I think I got the solution. I'll run it on the CI of my fork and open a PR.
Thomas Mansencal
@KelSolaar
Thanks, I saw the PR; I would be keen to understand why they don't run on that specific matrix index.
Will continue the discussion there!
@Saransh-cpp : Yes will do!
Geetansh Saxena
@SGeetansh
@KelSolaar Couldn't find the exact reason but:
Geetansh Saxena
@SGeetansh
Oh, sorry for mentioning it here again.
Dani
@otivedani
Hi, I am Dani, nice to meet you all. I am just beginning to try colour-science for the first time. In particular, I'm interested in the colour.colour_correction() function and wanted to ask for some help. Concluding from the documentation, does that function take samples of colours and try to interpolate between them to create a 3D LUT?
Dani
@otivedani
As a proof of concept, I took a normal photo of a ColourChecker and the same image edited to be 'cooler' (as in colour temperature), and extracted the colors from each square of both images. Then I tried to generate a LUT mapping my low-temperature image back to the original using colour_correction(). The results were 'close' to the original, but not exact. Is this expected (since I am comparing image to image rather than to the actual RGB colors of the chart)?
If anybody wants to check, this is my source: https://gist.github.com/otivedani/81d50b0ab3a959a02e7b60d2f6f43941 (Colab link available). Thank you in advance
Thomas Mansencal
@KelSolaar

Hi Dani!

Concluding from the documentation, does that function take samples of colours and try to interpolate between them to create a 3D LUT?

Not quite: depending on the variant, they either perform a linear transformation via a 3x3 matrix that will rotate and scale the colours to the best fit, or do something similar using a 3xn polynomial to distort the space to the best fit.

Thus, there is no data discretisation/3D LUT involved, only functions.
The functions will not be able to fully fit the target and source spaces, so some differences are expected. It is worth noting that polynomial methods may be subject to "explosions" outside the domain they are fitted on. Put another way, they are very good at interpolating the colours defined by the target and source spaces, but extrapolating outside that is bound to behave very unexpectedly.
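The linear variant described here (a 3x3 matrix fitted so source swatches best map onto target swatches) can be sketched in plain NumPy; this is an illustrative least-squares fit, not the exact implementation behind colour.colour_correction, and the matrix and swatch data below are made up for the demonstration.

```python
import numpy as np

def fit_colour_matrix(source, target):
    """Least-squares 3x3 matrix M such that source @ M.T ~= target.

    Equivalent in spirit to a degree-1 colour correction: each corrected
    colour is a rotated/scaled mix of the source channels.
    """
    source = np.asarray(source, dtype=float)  # shape (n, 3)
    target = np.asarray(target, dtype=float)  # shape (n, 3)
    # Solve source @ M.T = target in the least-squares sense.
    M_T, *_ = np.linalg.lstsq(source, target, rcond=None)
    return M_T.T

# Toy check: recover a known matrix from exactly-related data.
rng = np.random.default_rng(42)
M_true = np.array([[1.2, -0.1, 0.0],
                   [0.05, 0.9, 0.05],
                   [0.0, -0.2, 1.1]])
swatches = rng.random((24, 3))       # 24 "measured" swatches
reference = swatches @ M_true.T      # exact linear relationship
M_fit = fit_colour_matrix(swatches, reference)
print(np.allclose(M_fit, M_true))    # → True
```

Because real source and target swatches are never exactly linearly related, the fitted matrix only minimises the residual error, which is why some difference from the original always remains.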
Dani
@otivedani
Hi @KelSolaar, thank you for helping me understand! I will learn more from the thread too. In other words, the functions map 3 -> 3 (as in 3D, rather than 1D per channel), right?
What I mean by creating a 3D LUT is: I have an idea to apply the correction to a neutral 3D LUT image rather than to the source image itself, like this: https://streamshark.io/obs-guide/converting-cube-3dl-lut-to-image , so the adjustment could be reused. Do you think that would work?
Thomas Mansencal
@KelSolaar
Yep, that would work! Note that Colour can also write 3D LUTs, e.g. my_lut = colour.LUT3D(size=33); colour.io.write_LUT(my_lut, 'my_lut.cube')
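Baking a correction into a neutral LUT, as suggested above, can be sketched with a plain NumPy identity grid; the 3x3 matrix here is a made-up placeholder standing in for whatever colour.colour_correction would produce on the flattened grid nodes.

```python
import numpy as np

SIZE = 33  # same grid size as colour.LUT3D(size=33)

# Identity 3D LUT table: every node holds its own (R, G, B) coordinate
# in [0, 1], shaped (SIZE, SIZE, SIZE, 3).
axis = np.linspace(0, 1, SIZE)
r, g, b = np.meshgrid(axis, axis, axis, indexing='ij')
table = np.stack([r, g, b], axis=-1)

# Placeholder correction: any per-colour transform could go here, e.g.
# the output of colour.colour_correction() applied to the flattened nodes.
M = np.array([[1.1, -0.05, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, -0.1, 1.05]])
corrected = np.clip(table.reshape(-1, 3) @ M.T, 0, 1).reshape(table.shape)

# With Colour itself, one would assign the corrected table back and write it:
#   lut = colour.LUT3D(size=33); lut.table = corrected
#   colour.io.write_LUT(lut, 'my_lut.cube')
print(corrected.shape)  # (33, 33, 33, 3)
```

The resulting .cube file can then be applied to any footage, which is exactly why pre-baking the adjustment into a LUT makes it reusable.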
Dani
@otivedani
Great! Thank you, this lib is awesome!
Thomas Mansencal
@KelSolaar
You are welcome! :)
Geetansh Saxena
@SGeetansh
Hey @KelSolaar,
I wanted to work on #796 and just need a little guidance on what information and output format are needed.
Marianna Smidth Buschle
@msb.qtec_gitlab
Hi, I need some help regarding using colour_correction.
I have an X-Rite color chart with 24 colors, and I have found these values online as the reference:
reference_colors = [[115,82,68],[194,150,130],[98,122,157],[87,108,67],[133,128,177],[103,189,170],
[214,126,44],[80,91,166],[193,90,99],[94,60,108],[157,188,64],[224,163,46],
[56,61,150],[70,148,73],[175,54,60],[231,199,31],[187,86,149],[8,133,161],
[243,243,242],[200,200,200],[160,160,160],[121,122,121],[85,85,85],[52,52,52]]
and I have read the values from a raw image from my custom camera:
extracted_colors = [[60,54,59],[131,93,86],[50,60,83],[40,44,41],[82,76,108],[88,134,147],
[158,95,68],[40,49,90],[128,57,54],[41,31,46],[99,117,71],[172,120,72],
[35,43,80],[45,71,58],[113,43,39],[188,147,72],[136,65,84],[44,80,118],
[213,211,212],[146,146,150],[96,96,100],[58,58,63],[38,38,45],[28,30,39]]
and I want to use it like this:
corr_color = colour.colour_correction(extracted_colors[k], extracted_colors, reference_colors, method='Finlayson 2015')
I am, however, in doubt about the need to normalize them from the [0, 255] to the [0, 1] range, and about the need to linearize.
From what I understand, the reference values I found online should be sRGB, so I expect I need to linearize them?
And is that done with colour.models.eotf_inverse_sRGB() or colour.models.eotf_sRGB()?
And I believe my image is pure raw and doesn't have any gamma encoding.
Thomas Mansencal
@KelSolaar
Hi @msb.qtec_gitlab!
So it looks like you are using sRGB 8-bit values here, so you must convert the array to float, divide by 255, and then apply eotf_sRGB to decode:
>>> colour.models.eotf_sRGB(colour.utilities.as_float_array([122, 122, 122]) / 255)
array([ 0.19461783,  0.19461783,  0.19461783])
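For reference, the decoding that eotf_sRGB performs is the piecewise sRGB EOTF, which can be written out in plain NumPy; applying the same normalisation to the whole swatch arrays before colour_correction would look roughly as sketched below (the variable names in the comments are hypothetical).

```python
import numpy as np

def eotf_srgb(V):
    """sRGB EOTF: decode non-linear sRGB values in [0, 1] to linear light."""
    V = np.asarray(V, dtype=float)
    return np.where(V <= 0.04045, V / 12.92, ((V + 0.055) / 1.055) ** 2.4)

# 8-bit sRGB value -> float -> linear, matching the eotf_sRGB example above.
print(eotf_srgb(np.array([122, 122, 122]) / 255))
# ≈ 0.19461783 per channel, matching the eotf_sRGB output above.

# The same pattern would apply to whole swatch arrays before calling
# colour.colour_correction(), e.g. (hypothetical variable names):
#   reference_linear = eotf_srgb(np.asarray(reference_colors) / 255)
#   extracted_linear = np.asarray(extracted_colors) / 255  # raw: no decoding
```

Note the asymmetry: the sRGB reference values need decoding, while a genuinely raw capture only needs the division by 255 to reach the [0, 1] range.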
Marianna Smidth Buschle
@msb.qtec_gitlab

thanks,
yeah, I also got to that conclusion after seeing examples and comments from different places.
I ended up just using the reference colors from colour, since I could see my X-Rite chart was from January 2014:

D65 = colour.CCS_ILLUMINANTS['CIE 1931 2 Degree Standard Observer']['D65']
REFERENCE_COLOUR_CHECKER = colour.CCS_COLOURCHECKERS['ColorChecker24 - Before November 2014']
REFERENCE_SWATCHES = colour.XYZ_to_RGB(
        colour.xyY_to_XYZ(list(REFERENCE_COLOUR_CHECKER.data.values())),
        REFERENCE_COLOUR_CHECKER.illuminant, D65,
        colour.RGB_COLOURSPACES['sRGB'].matrix_XYZ_to_RGB)

Based on the example from https://github.com/colour-science/colour-checker-detection/blob/develop/colour_checker_detection/examples/examples_detection.ipynb
I also tried converting the reference I previously had from X-Rite by doing what you said (I had also tried cctf_decoding()/cctf_encoding() as in the example above), and I could see that the values were quite close but not really an exact match...
Is that because of all the colorspace conversions (xyY -> XYZ -> RGB)?

I can also see that the CCM really improves my images, even though it is not perfect
The images are white balanced in advance, and the gain and exposure time have been adjusted to maximize the dynamic range while avoiding clipping.
Are there any more steps I should apply before the correction which can improve the results further?
I can see that the biggest improvement to my images by applying the CCM is in the color saturation
Marianna Smidth Buschle
@msb.qtec_gitlab
Lastly, I have the option to apply the correction to either RGB or YCrCb images; is there a colorspace which is better or worse for that?
Thomas Mansencal
@KelSolaar
It might be better to apply the correction in a perceptually-uniform space, but it really depends on what you are trying to achieve, e.g. are you trying to minimise errors between different cameras, or between a camera and the standard observer?
You could use Finlayson (2015), but it is highly conditioned by the input data, and here a ColorChecker 24 is often not enough.
Geetansh Saxena
@SGeetansh
@KelSolaar Hey Thomas, not to disturb you but you were about to send a sample file, right? Also, do you want me to layer the two sets and send you a mini report?
Thomas Mansencal
@KelSolaar
@SGeetansh : Unfortunately, I did not manage to get hold of some of the UPRTek data yet, it is coming though! It would be great to have the data layered indeed, can do that wherever you want!
Quicker might be to cull the spectral data, save it as two different CSV files, and load them with colour directly.
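Loading culled spectral CSV data along those lines could be sketched as below; the one-measurement-per-row "wavelength,value" layout is an assumption about the exported files, and colour.SpectralDistribution is the Colour class that would consume the resulting dict.

```python
import csv
import io

def read_spectral_csv(stream):
    """Parse 'wavelength,value' rows into a {wavelength: value} dict,
    the mapping shape that colour.SpectralDistribution accepts."""
    reader = csv.reader(stream)
    return {float(wl): float(v) for wl, v in reader}

# Hypothetical snippet of culled spectrometer data:
sample = io.StringIO("380,0.048\n385,0.051\n390,0.055\n")
data = read_spectral_csv(sample)
print(data)  # {380.0: 0.048, 385.0: 0.051, 390.0: 0.055}

# With Colour installed, this would become:
#   sd = colour.SpectralDistribution(data, name='UPRTek sample')
```

A real Sekonic/UPRTek export also carries header metadata, which is what the dedicated parsers discussed below would handle.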
Geetansh Saxena
@SGeetansh
@KelSolaar Thanks!
Geetansh Saxena
@SGeetansh
@KelSolaar I am a little confused about where the code for this would go. I see we currently have a function read_spectral_data_from_csv_file. Are there any types other than Sekonic and UPRTek already implemented that I can have a look at? If not, where should I fit the new parser?
Thomas Mansencal
@KelSolaar
No, we don't really have any equivalent so far!
We would probably add two new parsers, one for Sekonic and one for UPRTek, the latter sharing most code from the former as they are for practical purposes the same.
Can we move this discussion to the issue though, it will be easier to track down later?
Geetansh Saxena
@SGeetansh
Sure. My bad.
Marianna Smidth Buschle
@msb.qtec_gitlab
@KelSolaar we have a custom camera (machine vision camera) which is being used for live sports streaming (ice hockey), and the goal is basically to make the picture look good, with truer, more vibrant colors (we actually have several cameras, so they also have to look similar).
We can definitely see that we are lacking in color saturation, and that is mostly what the CCM I tried generating seems to correct.
Because of the live requirement, going to CIE Lab or something similar is a bit too costly, so it would be preferable to stay in the native RGB, or the YUV we transform to before encoding to H.264.
I had the impression that with degree=1 all methods were pretty much equivalent, and that Finlayson only differed at higher orders?
And because it is an indoor sport, the illumination won't change, so I expected that doing the calibration once (like I already do for white balance and exposure) would work...
Thomas Mansencal
@KelSolaar

I had the impression that with degree=1 all methods were pretty much equivalent, and that Finlayson only differed at higher orders?

Correct!

because it is an indoor sport, the illumination won't change, so I expected that doing the calibration once (like I already do for white balance and exposure) would work...

Yes! This is almost the ideal scenario. Ideally you would sample the illumination with a spectrometer, e.g. a Sekonic C7000, and measure the chart reflectances with another one, e.g. an X-Rite i1 Pro; from there you could generate the reference chart values under the illumination of the location and calibrate the camera against that.
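Re-expressing reference values under the location's illumination hinges on chromatic adaptation; a minimal von Kries-style sketch using the well-known Bradford cone response matrix is shown below. The source/destination whites (CIE Illuminant A and D65) are illustrative stand-ins for whites that would in practice come from the spectrometer measurement, and Colour's own colour.adaptation API would normally do this.

```python
import numpy as np

# Bradford cone response matrix, as used by the standard Bradford CAT.
M_BRADFORD = np.array([[0.8951, 0.2664, -0.1614],
                       [-0.7502, 1.7135, 0.0367],
                       [0.0389, -0.0685, 1.0296]])

def von_kries_cat(XYZ_w_src, XYZ_w_dst):
    """3x3 matrix adapting XYZ from a source white to a destination white."""
    lms_src = M_BRADFORD @ XYZ_w_src
    lms_dst = M_BRADFORD @ XYZ_w_dst
    scale = np.diag(lms_dst / lms_src)  # per-cone gain, von Kries style
    return np.linalg.inv(M_BRADFORD) @ scale @ M_BRADFORD

# Illustrative whites: CIE Illuminant A (tungsten-like) -> D65.
WHITE_A = np.array([1.09850, 1.0, 0.35585])
WHITE_D65 = np.array([0.95047, 1.0, 1.08883])
M_cat = von_kries_cat(WHITE_A, WHITE_D65)

# By construction, the source white maps exactly onto the destination white:
print(np.allclose(M_cat @ WHITE_A, WHITE_D65))  # True
```

Applying M_cat to each reference swatch's XYZ would yield chart values "as seen" under the rink lighting, which is the data the camera calibration would then target.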