Thomas Mansencal
@KelSolaar
that being said, you could hot-swap the functions you need with more optimised ones as required.
Let us know what you find out and we can improve things if it is too painful.
TuckerD
@tjdcs
Yeah, I don't have my Ohno implementation vectorized either (MATLAB). It's a pain. Regarding numpy, yes, I'm aware. One thing that gave me concern in the CAM16 code was the calls to other functions for every computational step. Function calls are cheap but not free, and I've found that these can add up in well-vectorized code. But it's definitely something that needs testing and examination before I really commit to that assumption. As they say, premature optimization...
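The call-overhead concern above is easy to measure directly. A minimal, hypothetical sketch (the helper names and iteration counts are illustrative, not from the CAM16 code) timing the same vectorised arithmetic routed through small Python-level calls versus inlined:

```python
import timeit

import numpy as np

x = np.linspace(0.0, 1.0, 1000)

def step(a):
    # A tiny helper, standing in for the per-step function calls discussed above.
    return a * 2.0

def chained(a):
    # Same maths, routed through many small Python-level calls.
    for _ in range(100):
        a = step(a)
    return a

def inlined(a):
    # Identical arithmetic with the calls flattened out.
    for _ in range(100):
        a = a * 2.0
    return a

# Both produce identical results; only the Python call overhead differs.
t_chained = timeit.timeit(lambda: chained(x), number=200)
t_inlined = timeit.timeit(lambda: inlined(x), number=200)
```

For small arrays the per-call overhead is a larger fraction of the total, which is where it "adds up" in otherwise well-vectorized code.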
The other thing I'm trying to figure out, which is just a quality-of-life issue, is why, when I run the tests, any figure-window tests block and have to be manually interacted with (closed) on Windows before the next test can run.
Also wondering if I can configure tests to run in parallel.
(these are not necessarily pycolor issues, just me trying to get used to this dev ecosystem I'm running)
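For reference, one common setup for the two issues above (a general suggestion, not confirmed as the fix for this project): run matplotlib with a non-interactive backend so figure tests never open blocking windows, and use the pytest-xdist plugin to run tests in parallel.

```shell
# Non-interactive matplotlib backend: figures render off-screen, nothing blocks.
export MPLBACKEND=Agg

# pytest-xdist distributes tests across CPU cores.
pip install pytest-xdist
pytest -n auto
```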
TuckerD
@tjdcs
Sadly, I have other work to do today so I don't have as much time to toy with color. But my intention is to adopt it as the basis of my work and move away from Matlab. Mathworks licensing scheme recently really disappointed me and I can't continue to build any of my tools on their ecosystem. Too much risk in the licensing for commercial use.
Thomas Mansencal
@KelSolaar
I have vectorised most of our Ohno (2013) implementation and made some optimisations, e.g. caching, etc...: colour-science/colour#951
  • colour.temperature.uv_to_CCT_Ohno2013 definition is ~100x faster.
  • colour.temperature.CCT_to_uv_Ohno2013 definition is ~425x faster.
It is merged in develop.
Aurélien Pierre
@aurelienpierre:matrix.org
[m]
nice
Aurélien Pierre
@aurelienpierre:matrix.org
[m]
pandas will let you manipulate the data layout as you wish to output it the way you want
otherwise it's kind of a virtual spreadsheet
Thomas Mansencal
@KelSolaar
Thank you!
Aurélien Pierre
@aurelienpierre:matrix.org
[m]
@KelSolaar: I added more today
# 2. FAIRCHILD, Mark D. and PIRROTTA, Elizabeth.
# Predicting the lightness of chromatic object colors using CIELAB. 
# Color Research & Application, 1991, vol. 16, no 6, p. 385-393.

# sample | Munsell notation | L* | C* | h* | Observed L*

raw_data_2 = """
1 "5R3/2" 30.58 13.68 25.46 38.1
2 "5R3/6" 31.19 33.33 25.34 39.1
3 "5R3/10" 30.89 53.40 24.60 38.8
4 "5R5/2" 51.64 9.95 26.19 55.2
5 "5R5/8" 51.41 40.99 27.15 56.7
6 "5R5/14" 52.00 71.23 29.40 62.9
7 "5R7/2" 72.49 8.72 28.37 71.4
8 "5R7/6" 71.56 29.51 28.29 72.1
9 "5R7/10" 72.60 48.64 27.81 75.0
10 "5Y3/1" 30.34 8.30 90.99 36.3
11 "5Y3/2" 30.30 13.97 89.36 36.4
12 "5Y3/4" 30.58 28.50 87.52 35.4
13 "5Y5/2" 51.52 15.87 93.54 57.0
14 "5Y5/6" 51.43 47.03 89.96 48.6
15 "5Y5/8" 51.59 59.71 88.42 50.9
16 "5Y7/2" 71.77 16.67 95.07 71.4
17 "5Y7/8" 71.82 59.83 91.06 65.5
18 "5Y7/12" 71.69 86.70 89.59 66.1
19 "2.5G3/2" 30.22 10.95 151.02 36.4
20 "2.5G3/6" 30.81 34.31 155.69 39.2
21 "2.5G3/10" 30.91 55.44 156.91 40.9
22 "2.5G5/2" 51.20 12.60 149.41 56.0
23 "2.5G5/8" 52.11 48.61 152.67 59.5
24 "2.5G5/12" 50.81 65.89 154.59 61.5
25 "2.5G7/2" 72.79 13.55 148.52 71.3
26 "2.5G7/6" 72.16 37.75 150.32 73.4
27 "2.5G7/10" 71.32 58.28 152.88 70.0
28 "5PB3/2" 30.27 9.99 275.89 38.4
29 "5PB3/6" 30.41 27.82 274.26 44.4
30 "5PB3/10" 30.42 44.05 276.56 48.6
31 "5PB5/2" 51.82 7.20 273.99 57.1
32 "5PB5/8" 52.05 31.87 272.63 61.4
33 "5PB5/12" 51.14 46.26 270.92 62.8
34 "5PB7/2" 72.08 6.05 266.63 75.4
35 "5PB7/6" 71.87 22.02 267.04 73.3
36 "5PB7/8" 71.17 28.68 266.47 74.7
"""
BL_data_2 = pd.read_csv(StringIO(raw_data_2), sep=" ", index_col=0, header=None, names=["sample", "Munsell", "L", "C", "h", "Ln"])

# Convert to xyY to match the other dataset
Lab = colour.LCHab_to_Lab(np.vstack([BL_data_2["L"].to_numpy(), BL_data_2["C"].to_numpy(), BL_data_2["h"].to_numpy()]).T)
XYZ = colour.Lab_to_XYZ(Lab)
xyY = colour.XYZ_to_xyY(XYZ)
BL_data_2["Y"] = xyY[:, 2] * 100
BL_data_2["x"] = xyY[:, 0]
BL_data_2["y"] = xyY[:, 1]

Labn = colour.LCHab_to_Lab(np.vstack([BL_data_2["Ln"].to_numpy(), BL_data_2["C"].to_numpy(), np.zeros_like(BL_data_2["h"].to_numpy())]).T)
XYZn = colour.Lab_to_XYZ(Labn)
BL_data_2["Yn"] = XYZn[:, 1] * 100

# The weight is the number of observers in the study: n = 11
BL_data_2["w"] = 11

BL_data_2
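For clarity, the LCHab → Lab step used above is just a polar-to-Cartesian conversion (a* = C* cos h, b* = C* sin h, with h in degrees). A minimal NumPy sketch of it (the helper below is illustrative; the real API used above is colour.LCHab_to_Lab):

```python
import numpy as np

def lchab_to_lab(LCH):
    # Hypothetical re-derivation of the LCHab -> Lab conversion:
    # a* = C* * cos(h), b* = C* * sin(h), hue angle h given in degrees.
    LCH = np.asarray(LCH, dtype=float)
    L, C, h = LCH[..., 0], LCH[..., 1], LCH[..., 2]
    h = np.radians(h)
    return np.stack([L, C * np.cos(h), C * np.sin(h)], axis=-1)

# First sample of the Fairchild & Pirrotta table: L* = 30.58, C* = 13.68, h = 25.46
Lab = lchab_to_lab([30.58, 13.68, 25.46])
```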
# 3. WITHOUCK, Martijn, SMET, Kevin, RYCKAERT, W., POINTER, M.,
# DECONINCK, Geert, KOENDERINK, Jan and HANSELAER, Peter.
# Brightness perception of unrelated self-luminous colors.
# Journal of the Optical Society of America A, 2013, vol. 30, no 6, p. 1248-1255.
# doi:10.1364/JOSAA.30.001248.

# 9 observers, reference radiometric luminance = 51 cd/m²
# Results given in CIE LUV 1976 10° observer

# Columns : huv;10 | suv;10 | Qgeom
raw_data_3 = """
291.82 0.53 54.65
289.04 0.86 59.82
287.04 1.23 62.50
286.63 1.71 69.19
62.70 0.21 48.71
69.75 0.40 46.27
78.04 0.65 48.81
79.02 0.94 58.39
77.21 1.12 58.46
17.62 0.21 50.57
346.85 0.57 55.12
343.30 0.89 55.54
341.18 1.26 68.07
339.18 1.83 73.26
83.22 0.37 49.30
92.23 0.58 44.79
97.46 0.79 50.24
97.34 0.95 52.34
96.73 1.14 61.49
223.45 0.63 52.63
223.89 0.78 57.00
224.03 1.02 58.24
224.11 1.34 62.56
225.92 1.66 65.87
36.05 0.46 48.19
31.29 0.88 51.98
29.33 1.23 46.60
29.24 1.55 52.18
28.24 1.98 64.98
153.58 0.55 50.58
155.98 0.62 50.73
159.48 0.92 55.17
161.38 1.18 60.36
162.01 1.44 61.88
101.66 0.19 47.57
139.55 0.75 47.70
143.33 1.14 60.10
145.02 1.54 68.18
139.03 1.97 76.04
44.00 0.16 49.72
15.68 0.78 50.88
13.12 1.30 58.11
11.99 1.99 66.35
11.40 2.74 64.15
12.06 3.61 79.12
264.89 0.45 51.64
258.16 1.43 61.18
257.34 2.05 61.69
256.10 2.69 71.59
252.28 3.04 76.95
195.84 0.29 51.56
201.09 0.60 50.85
202.26 0.84 60.38
203.30 1.30 66.12
49.52 0.13 48.28
49.45 0.54 47.58
97.94 0.00 50.00
41.79 1.02 48.49
"""

BL_data_3 = pd.read_csv(StringIO(raw_data_3), sep=" ", index_col=None, header=None, names=["h", "s", "Yn"])
BL_data_3["Y"] = 0.51 * 100

BL_data_3["H"] = BL_data_3["h"] / 180 * np.pi

# Convert s, h to UV in CIE LUV 10°
BL_data_3["U"] = -1 / 13 * BL_data_3["s"] * np.abs(np.cos(BL_data_3["H"]))
BL_data_3["V"] = -np.abs(1 / 13 * BL_data_3["s"] * np.abs(np.cos(BL_data_3["H"])) * np.tan(BL_data_3["H"]))

# Because tan(h) = V / U == -V / -U, arctan-based recovery always lands in (-pi/2, pi/2):
# if hue is in [180, 270]°, U and V should both be negative, so we rotate the result by 180°.
# Note that .where() replaces values where the condition is False, so the logic is reversed.
U_positive = BL_data_3['h'].between(90, 270, inclusive="both")
BL_data_3["U"].where(U_positive, -BL_data_3["U"], inplace=True)

V_positive = BL_data_3['h'].between(180, 360, inclusive="both")
BL_data_3["V"].where(V_positive, -BL_data_3["V"], inplace=True)

# Get u'v' from UV
D65_10 = colour.colorimetry.CCS_ILLUMINANTS['CIE 1964 10 Degree Standard Observer']['D65']
D65_10_uv = colour.xy_to_Luv_uv(D65_10)
BL_data_3["u"] = BL_data_3["U"] + D65_10_uv[0]
BL_data_3["v"] = BL_data_3["V"] + D65_10_uv[1]

# Convert to xy
uv = np.vstack([BL_data_3["u"], BL_data_3["v"]]).T
xy = colour.Luv_uv_to_xy(uv)
BL_data_3["x"] = xy[:,0]
BL_data_3["y"] = xy[:,1]

# The weight is the number of observers in the study: n = 9
BL_data_3["w"] = 9

BL_data_3
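As an aside, the quadrant fix-ups in the cell above can be avoided entirely: assuming the CIE 1976 relation s_uv = 13·√(Δu'² + Δv'²) with hue measured anticlockwise from the +u' axis, the signed cosine and sine give Δu' and Δv' directly. A hypothetical equivalent sketch:

```python
import numpy as np

def huv_suv_to_duv(h_deg, s):
    # Signed trig handles all four quadrants, so no .where() sign flips are needed:
    # du' = (s / 13) * cos(h), dv' = (s / 13) * sin(h), h in degrees.
    h = np.radians(np.asarray(h_deg, dtype=float))
    s = np.asarray(s, dtype=float)
    du = s / 13 * np.cos(h)
    dv = s / 13 * np.sin(h)
    return du, dv

# First rows of each hue family in the dataset above.
du, dv = huv_suv_to_duv([291.82, 62.70, 223.45], [0.53, 0.21, 0.63])
```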
@KelSolaar: is there a way to convert CIE XYZ 1964 10° to CIE XYZ 1931 2° ?
because the last one and the next one use CIE 1964 10°
Thomas Mansencal
@KelSolaar
There is no defined transform; an approach is to generate a set of spectra that covers both sensitivity spaces and find a mapping between them.
This definition can define the outer surface for example: https://github.com/colour-science/colour/blob/develop/colour/volume/spectrum.py#L384
Then you would need to produce spectra inside that volume, and finally generate the transform between the two datasets.
I had started some work around that a while ago, might be some stuff to dig in that Colab Notebook: https://colab.research.google.com/drive/1YO6kfohVxjdGm4t6I3JMifff00BB2SuM?usp=sharing
There is also the paper from Richard Kirk at Filmlight that might be relevant here, where they map to CIE 2006.
Aurélien Pierre
@aurelienpierre:matrix.org
[m]
rough 3×3 matrix it is
Kirk got fair-enough results with that
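Such a rough 3×3 can be fitted by ordinary least squares over paired tristimulus datasets. A hypothetical sketch on synthetic data (the real fit would use XYZ pairs computed from the spectra discussed above):

```python
import numpy as np

def fit_3x3(XYZ_src, XYZ_dst):
    # Solve XYZ_src @ M.T ≈ XYZ_dst in the least-squares sense;
    # lstsq returns the transposed matrix, so transpose it back.
    M_T, *_ = np.linalg.lstsq(XYZ_src, XYZ_dst, rcond=None)
    return M_T.T

# Synthetic stand-in data: a known near-identity matrix to recover.
rng = np.random.default_rng(0)
XYZ_1964 = rng.random((500, 3))
M_true = np.eye(3) + 0.05 * rng.standard_normal((3, 3))
XYZ_1931 = XYZ_1964 @ M_true.T

M = fit_3x3(XYZ_1964, XYZ_1931)
```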
Thomas Mansencal
@KelSolaar
yeah, it works relatively well observer-to-observer, a bit less so for cameras!
Aurélien Pierre
@aurelienpierre:matrix.org
[m]
life is a bitch
hmm, now I wonder how Hellwig, Fairchild and Nayatani can compare the B/L datasets if some of them are defined with chromaticities in 2° 1931 and others in 10° 1964
so far, I have just used xy regardless of the declaration space
Thomas Mansencal
@KelSolaar
That is an excellent question! Mark tends to reply to email, might be worth asking him.
Aurélien Pierre
@aurelienpierre:matrix.org
[m]
These are the 3 datasets above, x is predicted brightness with HKE, y is experimental data
Thomas Mansencal
@KelSolaar
Fitting between the two observers?
Oh sorry, I just saw your message
Aurélien Pierre
@aurelienpierre:matrix.org
[m]
the 2 observers are treated the same
Thomas Mansencal
@KelSolaar
👍
Aurélien Pierre
@aurelienpierre:matrix.org
[m]
@KelSolaar: I will update my Colab with the fittings and datasets when I'm done, and you will be able to grab the data from there as you want
Thomas Mansencal
@KelSolaar
Thank you! I will probably drop it in colour-datasets!
Aurélien Pierre
@aurelienpierre:matrix.org
[m]
that's a total of something like 125 observers and 180 data points
Thomas Mansencal
@KelSolaar
🙌
Aurélien Pierre
@aurelienpierre:matrix.org
[m]
@KelSolaar: I don't know if you followed, but MacAdam moments actually do a great job at predicting HKE by treating brightness as the inverse of saturation
Thomas Mansencal
@KelSolaar
I haven't looked too much yet but I saw some interesting stuff in MacAdam (1938), e.g. https://imgur.com/iqbHme4
Aurélien Pierre
@aurelienpierre:matrix.org
[m]
@KelSolaar: hmm you changed your API
def normalise_multi_signal(multi_signal):
    multi_signal = multi_signal.copy()

    f_n = colour.sd_to_XYZ(colour.sd_ones(), cmfs=multi_signal) / 100
    print('"{0}": {1}'.format(multi_signal.name, f_n))
    multi_signal.values = multi_signal.values / f_n

    return multi_signal

CMFS_1_NAME = 'CIE 1931 2 Degree Standard Observer'
CMFS_1 = normalise_multi_signal(colour.colorimetry.MSDS_CMFS[CMFS_1_NAME].copy().align(SHAPE))

# Generate a LUT of test XYZ vectors
XYZ_1931 = []
for X in np.arange(0, 1, 0.05):
  for Y in np.arange(0, 1, 0.05):
    for Z in np.arange(0, 1, 0.05):
      XYZ_1931.append(np.array([X, Y, Z]))

# Convert XYZ 1931 vectors to spectra
SD = []
for XYZ in XYZ_1931:
    SD.append(colour.XYZ_to_sd(
                XYZ, cmfs=CMFS_1, method='Jakob 2019', optimisation_parameters={'options': {
                    'ftol': 1e-5
                }}))

# Convert back spectra to XYZ 1931 2°
XYZ_1931 = colour.msds_to_XYZ(np.array(SD), method='Integration', cmfs=CMFS_1, shape=SHAPE) / 100
yields AttributeError: "SpectralDistribution.__iter__" object has been removed from the API.
Aurélien Pierre
@aurelienpierre:matrix.org
[m]
ah, forgot a .values, nevermind
Aurélien Pierre
@aurelienpierre:matrix.org
[m]
so the RMSE between 1964 and 1931 observers for arbitrary spectra is 0.00050

converting XYZ from 1964 to 1931 with a matrix

array([[ 1.06756581,  0.07134588, -0.13992265],
       [-0.06910109,  0.988538  ,  0.13924861],
       [ 0.00140968, -0.05954396,  0.99820504]])

drops it to 0.00016
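For context, the RMSE figures quoted here are presumably the usual root-mean-square error over the paired tristimulus values; a trivial sketch:

```python
import numpy as np

def rmse(a, b):
    # Root-mean-square error over all elements of two same-shaped arrays.
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))
```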

Aurélien Pierre
@aurelienpierre:matrix.org
[m]
with the XYZ grid, I upsampled the spectra with Jakob & Hanika 2019, then converted the spectra to XYZ 1931 and 1964