vissyst is an internal object and isn't meant to be called outside of pavo's internals. If you want to get/visualise the data used by vismodel(), you can use the sensdata() function: http://pavo.colrverse.com/reference/sensdata.html :smiley:
Hey @leonosper_melissa:matrix.org! Fair question - I probably need to tweak the docs a bit, to be honest. coldist() and bootcoldist() do report the kinds of contrasts being calculated though, which should help. It's a bit space-specific, since different models demand different kinds of contrast. The receptor-noise model does its own noise-weighted thing, for example, while 'generic' colourspaces (di-, tri-, tetrachromatic) use Weber contrast as standard.
Since you mention "green" contrast I'm guessing you might be using the hexagon? It returns simple luminance contrast which, if you specify the long-wavelength receptor as the 'achromatic' receptor and use the bee phenotype, does indeed correspond to "green contrast" as per all the literature, like you say. So the dL values returned in the below example are 'green contrast':
### Hexagon example
# Load data
data(flowers)
# Calculate qcatch w/ long receptor for achromatic
vis.flowers <- vismodel(flowers,
visual = "apis", qcatch = "Ei", relative = FALSE,
vonkries = TRUE, achromatic = "l", bkg = "green"
)
# Into the hexagon
flowers.hex <- colspace(vis.flowers, space = "hexagon")
# Distances/contrasts. Here 'simple' luminance contrast therefore = green receptor contrast
coldist(flowers.hex)
Argh, apologies - you're right. I'm a bit hex-rusty, and I think I may have spotted a small nearby bug too. But anyway, it's actually simpler than all that. If I'm now understanding right, for Spaethe et al. (2001)'s green contrast (as in "...the degree to which a stimulus generates an excitation value different from 0.5 in the green receptor...") you can just read off the 'l' column in the output of the call to vismodel() - no coldist() or achromatic argument required. If you're green-adapted, as in the example above and in many papers that use the hexagon, then the 'l' receptor column gives what you're after (or you can adapt to any background you like, of course). E.g.
# Load data
data(flowers)
# Calculate qcatch w/ long receptor for achromatic
vis.flowers <- vismodel(flowers,
visual = "apis", qcatch = "Ei", relative = FALSE,
vonkries = TRUE, bkg = "green"
)
head(vis.flowers)
gives
> head(vis.flowers)
s m l lum
Goodenia_heterophylla 0.56572097 0.8237141 0.7053057 NA
Goodenia_geniculata 0.35761236 0.8176153 0.8670134 NA
Goodenia_gracilis 0.01888788 0.1622766 0.7810589 NA
Xyris_operculata 0.21080752 0.7345122 0.6796464 NA
Eucalyptus_sp 0.55622758 0.8515289 0.8208038 NA
Faradaya_splendida 0.45855056 0.7828905 0.8565895 NA
These show fairly strong green contrast (l values outside the 0.4-0.6 'low-contrast' range), as you'd generally expect of a flower.
As another example, if you use the green background as both the stimulus and adapting background, you'd expect no contrast (so 0.5):
vis.green <- vismodel(sensdata(bkg = 'green'),
visual = "apis", qcatch = "Ei", relative = FALSE,
vonkries = TRUE, bkg = "green"
)
head(vis.green)
which is what you get
> head(vis.green)
s m l lum
green 0.5 0.5 0.5 NA
You can swap in other built-in or user-defined receptor sets via the visual argument for the same idea. So no problem in practical terms, but conceptually it'd need some careful interpretation w/r/t your question(s). The hyperbolic transform is fairly bee-specific as far as I know, and I'm not sure birds or flies care about 'green' contrast per se (? flies mostly use R1-6 for achromatic tasks), or at least there's not nearly as much evidence for them as for bees (to my knowledge - could be corrected!), etc. etc. The usual stuff to think about.
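If it helps, here's a rough (untested) sketch of what swapping in a different built-in viewer via visual might look like, using the fly 'musca' phenotype purely for illustration:
# Same transform/adaptation as before, just a different receptor set (illustrative only)
vis.fly <- vismodel(flowers,
  visual = "musca", qcatch = "Ei", relative = FALSE,
  vonkries = TRUE, bkg = "green"
)
head(vis.fly)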
No, not picky at all - I haven't thought about this in a while. Good questions. It's a matter of terminology jumping around a bit. Spaethe et al. does say "Because excitation can range from 0 to 1, the maximum green contrast is 0.5.", which is right. Your quantum catches range from 0 to 1, and if "no contrast" is 0.5, then the maximum contrast is abs(1 - 0.5) or abs(0 - 0.5) = 0.5, right? So the thing to keep in mind is that green contrast is the difference from 0.5, not the absolute green-receptor stimulation per se (range 0 - 1). For the examples of mine above, Goodenia_heterophylla offers a green contrast of ~0.205 (0.705 - 0.5), while the green background offers 0 (0.5 - 0.5). Does that make sense?
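And if you want those differences directly, a quick sketch using the vis.flowers object from the example above:
# Green contrast as the absolute difference of the 'l' excitation from 0.5
green.contrast <- abs(vis.flowers$l - 0.5)
head(green.contrast)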
To add to the confusion, it looks to me (at a quick glance) like people quietly slip between the two formulations of green contrast - either the absolute receptor stimulation or the difference from 0.5. They carry the same information of course, but you'll need to keep an eye out when interpreting specific numerical results and keep track of which one a given paper is using.
As another example, Bukovac et al. 2017a (Why background colour matters... Dyer's group) says: "...In particular, we use the definition given in Bukovac et al. (2016), and deem low green contrast to be where E(G) ∈ [0.4, 0.6]. Adapted to the ALG background, all three stimuli have high green contrast values E(G) ≥ 0.7....". So they're reporting absolute values there.
But then in Bukovac et al. 2017b (Assessing the ecological significance...) they use the other form in Table 1 - "Receptor contrasts given are excitation difference from 0.5..."
Heya. The quantum catch is calculated however you like in vismodel(), then all colspace() is doing for the categorical model is calculating the opponent channels Troje hypothesises (e.g. Fig. 5), i.e. x = R7y - R8y, y = R7p - R8p (or vice versa, I can't recall). It's a very straightforward model!
The output also returns typical 'continuous' measures of hue (h.theta), saturation (r.vec) etc. for the space because they can be useful, and in light of more recent evidence questioning the categorical nature of it all.
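For what it's worth, a rough sketch of the whole pipeline (assuming that axis orientation - swap x/y if I've remembered it backwards):
# Quantum catches for the fly, then the categorical (Troje) space
vis.cat <- vismodel(flowers, visual = "musca", achromatic = "md.r1", relative = TRUE)
cat.flowers <- colspace(vis.cat, space = "categorical")
# x and y are just the pairwise receptor differences described above
head(cat.flowers[, c("x", "y")])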
Ah okay, cool cool. Well I'd have to have a poke through An et al. sometime (what's the paper, sorry? Can't seem to find it at a glance), but I'm a little snowed under atm, so I'll have a good look when I can. Otherwise feel free to have a wander through vismodel() itself if you like: https://github.com/rmaia/pavo/blob/master/R/vismodel.R. It's reasonably well commented & navigable.
The hyperbolic transform is just Q/(Q + 1) though, with Q calculated as usual (illuminant x receptor sensitivity x stimulus, or background). The input spectra are an obvious source of variation, but I'm guessing you're using the same set for the comparison. Otherwise yep, I'll have a look at their spreadsheet internals sometime ~soon.
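If it helps with checking against their spreadsheet, here's a quick sanity-check sketch of that relationship (using the built-in flowers data as a stand-in, and assuming I'm remembering the order of operations right):
# Von Kries-corrected catches vs. their hyperbolic transform
vis.Q <- vismodel(flowers, visual = "apis", qcatch = "Qi", relative = FALSE,
  vonkries = TRUE, bkg = "green")
vis.E <- vismodel(flowers, visual = "apis", qcatch = "Ei", relative = FALSE,
  vonkries = TRUE, bkg = "green")
Q <- as.matrix(vis.Q[, c("s", "m", "l")])
E <- as.matrix(vis.E[, c("s", "m", "l")])
# E = Q / (Q + 1), so this should come back TRUE (up to numerical precision)
all.equal(E, Q / (Q + 1))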
Hi! I have another question, this time about the function "colspace". Here is some context: I am using pavo to model the visual system of the orchid bee Euglossa dilemma. I uploaded my photoreceptor measurements and lens transmission measurements and used "vismodel" to create a visual model for my species. Things seem to work well, but when I use "colspace" I get a warning message, and I think that is messing things up when I try to plot my data using the color hexagon.
Here is my code:
edilemma_model <- vismodel(rspecdata = edil_spectra, visual = edilemma, trans = edil_transmission, relative = FALSE, vonkries = TRUE)
But when I use
edil_colorspace <- colspace(edilemma_model, space = "hexagon")
I get this error: 1: Quantum catches are not hyperbolically transformed, as required for the hexagon model. This may produce unexpected results.
And I am not sure what it means exactly. Any thoughts or suggestions are highly appreciated!
You just need to set qcatch = 'Ei' in your call to vismodel(). That's the 'hyperbolic transform' of receptor catches, which the hexagon model uses as part-and-parcel of its calculations (have a look at the help docs for a sense of what it's doing, or the original reference of course for a slightly longer explanation). I'll tweak that warning message to make it more explicit too, so it directs you to the specific solution if desired - sorry about that!
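So, taking your snippet from above and just adding that argument:
edilemma_model <- vismodel(
  rspecdata = edil_spectra, visual = edilemma, trans = edil_transmission,
  qcatch = "Ei", relative = FALSE, vonkries = TRUE
)
edil_colorspace <- colspace(edilemma_model, space = "hexagon")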
Hi @EliF777, just adding a guess to @Bisaloo's suggestion: it might also just be a little bug arising from the fact that spec2rgb() uses vismodel() internally, with some built-in illuminant and viewer sensitivity (CIE) data. But those are both 300-700 nm, so it will be unhappy when you feed it a 400-700 spec. It's just one of those edge cases we didn't think to test for.
To check, just interpolate your specs to 300-700 using as.rspec(your_specs, lim = c(300, 700)) and see if it works then. I'll look at making spec2rgb() a little more flexible too, and/or at least a bit more informative when failing. Thanks!
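i.e. something roughly like this, with your_specs standing in for your actual rspec object:
# Interpolate to the full 300-700 nm range expected internally by spec2rgb()
your_specs_full <- as.rspec(your_specs, lim = c(300, 700))
spec2rgb(your_specs_full)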
My main problem right now is being able to obtain JNDs (with confidence intervals) between every pair of angles, although based exclusively on within-individual distances. The problem is that I don't know how to code the "by=" argument in order for the bootcoldist function to do what I want:
boot <- bootcoldist(vis, by=pred$angle, n=c(1,1,1,4), weber=0.05, weber.achro=0.05)
Thank you!
Hi @abalosaurus_twitter :wave:, yes, you should be able to get them with the sensdata() function. For the specific cases you mention, you can use:
sensdata(visual = "cie2")
sensdata(visual = "avg.v")
You're not the first person to ask this question, so I've added a small hint in the documentation of vismodel(): http://pavo.colrverse.com/reference/vismodel.html#see-also-1. It'll hopefully point people in the right direction from now on. But please let us know if you have other suggestions on how we could make it easier to discover.
For aggplot(), as documented in the relevant section of the pavo handbook, you can display the legend simply by setting the legend argument to TRUE.
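A quick sketch, using the built-in sicalis data and grouping every three consecutive spectra (assuming a reasonably recent pavo):
# Aggregate plot of mean spectra with an automatic legend
data(sicalis)
aggplot(sicalis, by = 3, legend = TRUE)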
For hexplot()/triplot(), I don't think we currently have an integrated way to do this. Your best bet is probably to use the legend() base R function, or add the legend in post-production.
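Something along these lines, as a rough sketch building on the flowers.hex example from earlier (colour/label are just placeholders):
# Plot the hexagon, then overlay a standard base-R legend
plot(flowers.hex, pch = 19, col = "forestgreen")
legend("topright", legend = "flowers", pch = 19, col = "forestgreen", bty = "n")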