Saransh Chopra
@Saransh-cpp
fatal error C1083: Cannot open include file: 'graphviz/cgraph.h': No such file or directory
Can anyone here help me out? Thanks!
Saransh Chopra
@Saransh-cpp
I found a way around it; where can I start with the learning?
Saransh Chopra
@Saransh-cpp
Hi there :), I am a complete noob in colour-science and I have been exploring the codebase and topics like colourspaces, LUTs, etc. @KelSolaar, I tried the code you mentioned in colour-science/colour#564 by giving the function a list/array of RGB values, but it returns some negative values. HCL values, as I read, should be positive; am I going wrong somewhere? Also, what is the major difference between HCL and HSL? Both use a cylindrical architecture to describe colours; is it the gamma factor, or are they the same? Thank you (the doubts might be too simple).
Thomas Mansencal
@KelSolaar
Hey @Saransh-cpp, HCL and HSL are not quite the same here. I would need to check the paper, but HCL has 4 specific quadrants as far as I remember. Negative values should not really be an issue provided the model round-trips.
Saransh Chopra
@Saransh-cpp
Okay, thanks! I will start working on colour-science/colour#564 with the code you mentioned in the issue's comments. I guess the code would go in cylindrical.py under the rgb directory in models; I will follow the format of the other conversions in the same file and post doubts/issues here if I am not able to solve any. Thanks again!
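For reference, the existing cylindrical conversions mentioned above follow a well-known pattern; here is a minimal pure-Python RGB -> HSL sketch of that pattern (a textbook transcription for illustration, not colour's actual vectorised code):

```python
def rgb_to_hsl(RGB):
    """Textbook RGB -> HSL conversion for a single [0, 1] triplet."""
    R, G, B = (float(c) for c in RGB)
    maximum, minimum = max(R, G, B), min(R, G, B)
    delta = maximum - minimum
    L = (maximum + minimum) / 2
    # Saturation is undefined for greys (delta == 0); use 0 by convention.
    S = 0.0 if delta == 0 else delta / (1 - abs(2 * L - 1))
    if delta == 0:
        H = 0.0
    elif maximum == R:
        H = ((G - B) / delta) % 6
    elif maximum == G:
        H = (B - R) / delta + 2
    else:
        H = (R - G) / delta + 4
    return H / 6, S, L  # hue normalised to [0, 1]

print(rgb_to_hsl([1.0, 0.0, 0.0]))  # (0.0, 1.0, 0.5): pure red
```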
Geetansh Saxena
@SGeetansh
Hey! I am new here. I just forked the colour repo and ran the tests after the setup: 5 errors and 2 failures. Is it supposed to be like this?
michael.mauderer
@michael.mauderer:matrix.org
It looks like you are missing some dependencies, e.g., matplotlib. How did you set up the repo?
Geetansh Saxena
@SGeetansh
I was facing some errors and had to manually install some of the dependencies (matplotlib, spacy, scipy). Also, the specific required versions weren't getting loaded for some reason.
I installed these three with pip.
michael.mauderer
@michael.mauderer:matrix.org
That might be where the problem is coming from then. Can you check that matplotlib is installed in the same environment as colour and working?
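One quick way to check is a snippet like the following, run with the same interpreter that was used to install colour (the package names listed are just the usual suspects; adjust as needed):

```python
# The printed paths reveal whether everything lives in one environment.
import sys

print("interpreter:", sys.executable)
for name in ("matplotlib", "scipy", "imageio"):
    try:
        module = __import__(name)
        print(name, getattr(module, "__version__", "unknown"), "->", module.__file__)
    except ImportError:
        print(name, "is NOT importable from this environment")
```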
Thomas Mansencal
@KelSolaar
You will also need imageio
We have a reasonably complete guide here: https://www.colour-science.org/installation-guide/
And another for contributing there: https://www.colour-science.org/contributing/
Geetansh Saxena
@SGeetansh

That might be where the problem is coming from then. Can you check that matplotlib is installed in the same environment as colour and working?

Installation was successful. Thanks!

I just checked the CI, and apparently it isn't running any tests for Windows / Python 3.8:
https://github.com/colour-science/colour/runs/1862191569?check_suite_focus=true
Saransh Chopra
@Saransh-cpp
I got busy and was not able to work on the PR further. @KelSolaar, I tried both of the suggestions you left in the comments, but the tests are still failing. Can you please go through the PR again whenever you get the time?
Thomas Mansencal
@KelSolaar
@SGeetansh: yes, we are aware that the tests are not running for Python 3.8 on Windows; we haven't had time to figure it out, as there is no obvious reason why.
Geetansh Saxena
@SGeetansh
They are not running locally either.
Geetansh Saxena
@SGeetansh
@KelSolaar I think I got the solution. I'll run it on the CI of my fork and open a PR.
Thomas Mansencal
@KelSolaar
Thanks, I saw the PR. I would be keen to understand why they don't run specifically on that matrix index.
Will continue the discussion there!
@Saransh-cpp : Yes will do!
Geetansh Saxena
@SGeetansh
@KelSolaar Couldn't find the exact reason but:
Geetansh Saxena
@SGeetansh
Oh, sorry for mentioning it here again.
Dani
@otivedani
Hi, I am Dani, nice to meet you all. I am just beginning to try colour-science for the first time. Particularly, I'm interested in the colour.colour_correction() function and wanted to ask for some help. From the documentation, does that function take samples of colours and try to interpolate them to create a 3D LUT?
Dani
@otivedani
As a proof of concept, I took a normal photo of a ColourChecker and the same image edited to make it 'cooler' (as in colour temperature), and extracted the colours from each square of both images. Then I tried to generate a correction from my low-temperature image back to the original using colour_correction(). The results were 'close' to the original, but not exact. Is this expected (since I am comparing image to image, rather than comparing to the actual RGB colours of the chart)?
If anybody wants to check, this is my source: https://gist.github.com/otivedani/81d50b0ab3a959a02e7b60d2f6f43941 (Colab link available). Thank you in advance.
Thomas Mansencal
@KelSolaar

Hi Dani!

From the documentation, does that function take samples of colours and try to interpolate them to create a 3D LUT?

Not quite. Depending on the variant, they either do a linear transformation via a 3x3 matrix that will rotate and scale the colours to the best fit, or do something similar but using a 3xn polynomial to distort the space to the best fit.

Thus, there is no data discretization/3D LUT involved, only functions.
The functions will not be able to fully fit the target and source spaces, so it is expected to have some differences. Worth noting that polynomial methods might be subject to "explosions" outside the domain they are fitted on. Put another way, they are very good at interpolating the colours defined by the target and source spaces, but extrapolation outside that is bound to behave very unexpectedly.
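For the linear (3x3 matrix) variant described above, the fit boils down to a least-squares solve; a numpy-only sketch with synthetic data (not colour's actual implementation):

```python
import numpy as np

# Fit a 3x3 colour correction matrix M such that source @ M.T ~= target.
np.random.seed(4)
source = np.random.random((24, 3))   # e.g. swatches measured by a camera
M_true = np.array([[1.2, -0.1, 0.0],
                   [0.05, 0.9, 0.05],
                   [0.0, -0.2, 1.1]])
target = source @ M_true.T           # synthetic "reference" swatches

# lstsq solves source @ X = target in the least-squares sense; X.T is the CCM.
X, *_ = np.linalg.lstsq(source, target, rcond=None)
corrected = source @ X
print(np.allclose(corrected, target))  # True: the relationship is exactly linear
```

With real camera data the relationship is not exactly linear, so `corrected` would only approximate `target`, which is the residual difference discussed above.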
Dani
@otivedani
Hi @KelSolaar, thank you for helping me understand! I will learn more from the thread too. In other words, the functions map 3 -> 3 (as in 3D, rather than 1D per channel), right?
What I mean by creating a 3D LUT is: I have an idea to apply the correction to a neutral 3D LUT image rather than to the source image itself, like this: https://streamshark.io/obs-guide/converting-cube-3dl-lut-to-image , so the adjustment could be reused. Do you think it is going to work?
Thomas Mansencal
@KelSolaar
Yep, that would work! Note that Colour can write 3D LUTs also, e.g. my_lut = colour.LUT3D(size=33); colour.io.write_LUT(my_lut, 'my_lut.cube')
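The "bake it into a neutral LUT" idea looks roughly like this in plain numpy (the correction function below is a made-up stand-in for a fitted colour_correction):

```python
import numpy as np

# Build the identity 3D LUT lattice: table[i, j, k] == [r_i, g_j, b_k].
size = 33
axis = np.linspace(0, 1, size)
table = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)

def correction(RGB):
    # Stand-in for the fitted correction, e.g. colour.colour_correction(...).
    return np.clip(RGB * np.array([1.05, 1.0, 0.95]), 0, 1)

# Push every lattice node through the correction to get the baked LUT table.
baked = correction(table.reshape(-1, 3)).reshape(table.shape)
print(baked.shape)  # (33, 33, 33, 3)
```

The baked table should then be usable as the table of a colour.LUT3D and written out with colour.io.write_LUT, making the adjustment reusable without the source image.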
Dani
@otivedani
Great! Thank you, this lib is awesome!
Thomas Mansencal
@KelSolaar
You are welcome! :)
Geetansh Saxena
@SGeetansh
Hey @KelSolaar,
I wanted to work on #796 and just need a little guidance on what information and output format are needed.
Marianna Smidth Buschle
@msb.qtec_gitlab
Hi, I need some help with using "colour_correction".
I have an X-Rite colour chart with 24 colours, and I have found these values online as the reference:
reference_colors = [[115,82,68],[194,150,130],[98,122,157],[87,108,67],[133,128,177],[103,189,170],
[214,126,44],[80,91,166],[193,90,99],[94,60,108],[157,188,64],[224,163,46],
[56,61,150],[70,148,73],[175,54,60],[231,199,31],[187,86,149],[8,133,161],
[243,243,242],[200,200,200],[160,160,160],[121,122,121],[85,85,85],[52,52,52]]
and I have read the values from a raw image from my custom camera:
extracted_colors = [[60,54,59],[131,93,86],[50,60,83],[40,44,41],[82,76,108],[88,134,147],
[158,95,68],[40,49,90],[128,57,54],[41,31,46],[99,117,71],[172,120,72],
[35,43,80],[45,71,58],[113,43,39],[188,147,72],[136,65,84],[44,80,118],
[213,211,212],[146,146,150],[96,96,100],[58,58,63],[38,38,45],[28,30,39]]
and I want to use it like this:
corr_color = colour.colour_correction(extracted_colors[k], extracted_colors, reference_colors, method='Finlayson 2015')
I am, however, in doubt about the need to normalize them from the [0, 255] to the [0, 1] range, and about the need to linearize.
From what I understand, the reference values I found online should be sRGB, so I expect I need to linearize them?
And is that with "colour.models.eotf_inverse_sRGB()" or "colour.models.eotf_sRGB()"?
I believe my image is pure raw and doesn't have any gamma encoding.
Thomas Mansencal
@KelSolaar
Hi @msb.qtec_gitlab!
So it looks like you are using sRGB 8-bit values here, so you must convert the array to float, divide by 255, and then apply eotf_sRGB to decode:
>>> colour.models.eotf_sRGB(colour.utilities.as_float_array([122, 122, 122]) / 255)
array([ 0.19461783,  0.19461783,  0.19461783])
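For anyone reading along without colour installed, the sRGB EOTF being applied there is straightforward to write out (a transcription of the piecewise definition from IEC 61966-2-1, not colour's code):

```python
import numpy as np

def srgb_eotf(V):
    """sRGB EOTF: decode a non-linear signal in [0, 1] to linear light."""
    V = np.asarray(V, dtype=float)
    return np.where(V <= 0.04045, V / 12.92, ((V + 0.055) / 1.055) ** 2.4)

print(srgb_eotf(np.array([122, 122, 122]) / 255))
# ~[0.19461783 0.19461783 0.19461783], matching the output above
```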
Marianna Smidth Buschle
@msb.qtec_gitlab

Thanks, yeah, I also came to that conclusion after seeing examples and comments from different places.
I ended up just using the reference colours from 'colour', since I could see my X-Rite chart was from January 2014:

D65 = colour.CCS_ILLUMINANTS['CIE 1931 2 Degree Standard Observer']['D65']
REFERENCE_COLOUR_CHECKER = colour.CCS_COLOURCHECKERS['ColorChecker24 - Before November 2014']
REFERENCE_SWATCHES = colour.XYZ_to_RGB(
        colour.xyY_to_XYZ(list(REFERENCE_COLOUR_CHECKER.data.values())),
        REFERENCE_COLOUR_CHECKER.illuminant, D65,
        colour.RGB_COLOURSPACES['sRGB'].matrix_XYZ_to_RGB)

Based on the example from https://github.com/colour-science/colour-checker-detection/blob/develop/colour_checker_detection/examples/examples_detection.ipynb
I also tried converting the reference I previously had from X-Rite by doing what you said (I had also tried cctf_decoding()/cctf_encoding() as in the example above), and I could see that the values were quite close but not really an exact match...
Is that because of all the colourspace conversions (xyY -> XYZ -> RGB)?

I can also see that the CCM really improves my images, even though it is not perfect.
The images are white-balanced in advance, and the gain and exposure time have been adjusted to maximize the dynamic range while avoiding clipping.
Are there any more steps I should apply before the correction that could improve the results further?
I can see that the biggest improvement from applying the CCM is in the colour saturation.
Marianna Smidth Buschle
@msb.qtec_gitlab
Lastly, I have the option to apply the correction to either RGB or YCrCb images; is there a colourspace which is better or worse for that?
Thomas Mansencal
@KelSolaar
It might be better to apply the correction in a perceptually-uniform space, but it really depends on what you are trying to achieve, e.g. are you trying to minimize errors between different cameras, or between a camera and the standard observer?
You could use Finlayson (2015), but it is really highly conditioned by the input data, and here a ColorChecker 24 is often not enough.
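To make the conditioning concern concrete: Finlayson (2015) fits against a root-polynomial expansion of the RGB triplets, so the coefficient count grows quickly relative to the 24 swatches. A numpy-only sketch of the degree-2 expansion (my own transcription of the paper's term list, not colour's implementation):

```python
import numpy as np

def root_polynomial_degree2(RGB):
    # Degree-2 root-polynomial terms: R, G, B, sqrt(RG), sqrt(GB), sqrt(RB).
    R, G, B = np.asarray(RGB, dtype=float).T
    return np.stack(
        [R, G, B, np.sqrt(R * G), np.sqrt(G * B), np.sqrt(R * B)], axis=-1
    )

np.random.seed(4)
swatches = np.random.random((24, 3))  # made-up stand-in for chart swatches
expanded = root_polynomial_degree2(swatches)
print(expanded.shape)  # (24, 6): 6 terms, i.e. 18 coefficients, from 24 samples

# Key property: root polynomials are degree-1 homogeneous, so scaling the
# exposure scales the expansion linearly instead of distorting it.
assert np.allclose(root_polynomial_degree2(2 * swatches), 2 * expanded)
```

Higher degrees add many more terms per channel, which is why a single ColorChecker 24 often under-constrains the fit.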