These are chat archives for freeman-lab/zebra

9th
Feb 2015
Davis Bennett
@d-v-b
Feb 09 2015 17:11
@freeman-lab path to tuning arrays: '/tier2/ahrens/davis/data/spark/4dpf_gfap_gc6f_tdT_huc_h2b_gc6f_ori_1_20150130_190839/mat/tmp.npz'
code for loading:
    import numpy as np
    # arrays were saved with np.savez, so they come back under the default keys arr_0, arr_1, arr_2
    tmp = np.load(matDirs[curExp] + 'tmp.npz')
    betaMat = tmp['arr_0']
    statsMat = tmp['arr_1']
    tuneMat = tmp['arr_2']
Davis Bennett
@d-v-b
Feb 09 2015 19:21

ah yes and substitute

'/tier2/ahrens/davis/data/spark/4dpf_gfap_gc6f_tdT_huc_h2b_gc6f_ori_1_20150130_190839/mat/tmp.npz'

for

matDirs[curExp] + 'tmp.npz'
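Putting those two messages together, a minimal sketch of the load with the path written out explicitly (same arrays as above):

    import numpy as np
    path = '/tier2/ahrens/davis/data/spark/4dpf_gfap_gc6f_tdT_huc_h2b_gc6f_ori_1_20150130_190839/mat/tmp.npz'
    tmp = np.load(path)           # stands in for matDirs[curExp] + 'tmp.npz'
    betaMat = tmp['arr_0']
    statsMat = tmp['arr_1']
    tuneMat = tmp['arr_2']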
Davis Bennett
@d-v-b
Feb 09 2015 19:49
@freeman-lab any idea how the output of series.dims could be negative?
calling this on the text data you directed me to returns funny values for dims:
imDat = tsc.loadSeries(serDirs[curExp],inputFormat='Text')
imDat.dims
...
Dimensions(values=[(1, 1, 1, -3730, -4403, -3634, -3646, -3999, -4443, -3641, -3523, -2914, -3205, -4161, -3521, -4404, -3483, -3486, -3344, -3628, -3343, -3282, -3180, -3438, -3148, -4133, -3497, -3681, -3452, -3553, -3197, -3150, -3562, -3364, -4025, -3119, -3772,... (many more)
Jeremy Freeman
@freeman-lab
Feb 09 2015 19:51
oh, so first of all, for text input set nKeys=3
it's not binary so you need to specify that manually
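A sketch of the corrected call, following the argument name as written in the chat (the exact spelling may differ by thunder version):

    # text input has no header, so tell loadSeries how many leading columns are keys
    imDat = tsc.loadSeries(serDirs[curExp], inputFormat='Text', nKeys=3)
    imDat.dims   # should now report the true x/y/z extents instead of data values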
Davis Bennett
@d-v-b
Feb 09 2015 19:52
aha
Jeremy Freeman
@freeman-lab
Feb 09 2015 19:52
as for the values... i think this may be dff after some weird transformation
some rescaling or something
Davis Bennett
@d-v-b
Feb 09 2015 19:52
ohh I see, those are data values that got in there
Jeremy Freeman
@freeman-lab
Feb 09 2015 19:52
if you compute the mean of a single plane, is it at all interpretable?
oh yeah, exactly
without the nKeys, the values will be the keys
Davis Bennett
@d-v-b
Feb 09 2015 20:06
yeah I'm looking at a single plane now, much much much sharper than what I normally collect
I took the time average
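For reference, one way the time average might be pulled out with the Series methods of that era; seriesMean/pack and the axis ordering here are assumptions, not the exact notebook code:

    from matplotlib import pyplot as plt
    # mean over time for every record, packed back into a local array keyed by (x, y, z)
    meanVol = imDat.seriesMean().pack()
    plane = meanVol[..., 24]      # one plane (axis order depends on how the keys were written)
    plt.imshow(plane)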
Jeremy Freeman
@freeman-lab
Feb 09 2015 20:06
oh and that worked?
average of dff should be 0?
or maybe it's raw f?
Davis Bennett
@d-v-b
Feb 09 2015 20:07
time average of dff shouldn't be 0...
Jeremy Freeman
@freeman-lab
Feb 09 2015 20:07
depends how it's computed!
if it's with a percentile, yes
agreed
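To make that concrete, a quick numpy illustration on a synthetic trace: with a mean baseline the time average of dF/F is ~0 by construction, with a percentile baseline it isn't (the 20th percentile here is just an arbitrary choice):

    import numpy as np

    f = 100 + 5 * np.random.randn(1000)        # synthetic raw fluorescence trace

    dff_mean = (f - f.mean()) / f.mean()       # baseline = mean
    print(dff_mean.mean())                     # ~0

    f0 = np.percentile(f, 20)                  # baseline = low percentile
    dff_pct = (f - f0) / f0
    print(dff_pct.mean())                      # > 0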
Davis Bennett
@d-v-b
Feb 09 2015 20:08
can I drop an image in here, or does it have to be linked?
but yes, much sharper, however these values were computed
Jeremy Freeman
@freeman-lab
Feb 09 2015 20:10
you can drop one in
yup, that's what i would have expected
ok, cool to know that can still be loaded!
that was the first data set we ever ingested into spark =)
Davis Bennett
@d-v-b
Feb 09 2015 20:12
:-)
how old was this fish?
Jeremy Freeman
@freeman-lab
Feb 09 2015 20:13
dunno
misha would know
Davis Bennett
@d-v-b
Feb 09 2015 20:14
hmm i can't copy images into here
Jeremy Freeman
@freeman-lab
Feb 09 2015 20:15
if you have a png on disk somewhere you should be able to drop it in
Davis Bennett
@d-v-b
Feb 09 2015 20:15
but the notebook will be in my notebooks folder titled view-series.npb
Jeremy Freeman
@freeman-lab
Feb 09 2015 20:15
not sure about other formats
k cool
Davis Bennett
@d-v-b
Feb 09 2015 20:15
yeah it is striking how much better this looks than what I collect
tmp.png
Jeremy Freeman
@freeman-lab
Feb 09 2015 20:17
you can see all the cells!
Davis Bennett
@d-v-b
Feb 09 2015 20:17
so this is with two light sheets, so to properly compare to our system you gotta only look at half
but each half has uniform image quality across the volume
Jeremy Freeman
@freeman-lab
Feb 09 2015 20:18
wait this is with two?
i thought it was just the side
Davis Bennett
@d-v-b
Feb 09 2015 20:18
whoa really, that would be even more incredible
Jeremy Freeman
@freeman-lab
Feb 09 2015 20:19
dunno, i guess whatever was in the 2013 nat methods paper
Davis Bennett
@d-v-b
Feb 09 2015 20:19
ah but there were probably two side lasers
Davis Bennett
@d-v-b
Feb 09 2015 20:33
@nvladimus do you have data from our scope that look as sharp as the image above? That's an average of 1000 time points from plane 25/41 from the fish used in the Ahrens Keller nature methods paper
Jason Wittenbach
@jwittenbach
Feb 09 2015 20:35
@freeman-lab I have Illustrator up and running on my machine at the moment. Want to see if you can open it as well?
Jeremy Freeman
@freeman-lab
Feb 09 2015 20:36
@d-v-b seems fine!
sorry, @jwittenbach seems fine!
Jason Wittenbach
@jwittenbach
Feb 09 2015 20:39
most excellent
Jason Wittenbach
@jwittenbach
Feb 09 2015 20:47
So I thought I might share some results from a new analysis that are hot off the press:
the idea is to replace the mean activity of each neuron with the mean activity of that neuron as well as its k nearest neighbors
Here is correlating mean neural activity to mean swimming strength across multiple closed-loop trials of OMR (red = positive correlation, blue = negative correlation, only correlations with absolute value > 0.3 shown)
withoutNN.png
Now here's the same thing, with the only difference being the nearest-neighbor replacement mentioned above (k=30)
withNN.png
I'm digging it :smiley_cat:
Jeremy Freeman
@freeman-lab
Feb 09 2015 20:58
seems cool! so i understand, every circle represents the correlation between x and y, where x is the average of that neuron and its k-nearest neighbors and y is the behavior
and this is different from just computing the correlation and then averaging, because correlation is non-linear
i.e. the map below is not a blurred version of the map above
Jason Wittenbach
@jwittenbach
Feb 09 2015 21:03
Let me think about this...
1: I actually started by replacing each neuron's dF/F time series with the average dF/F of its kNN. Then I averaged over time. But averaging across neurons and then over time (what I did) should be equivalent to averaging over time and then over neurons (what you describe)
Jeremy Freeman
@freeman-lab
Feb 09 2015 21:04
ah, you're not correlating, you're just averaging over different blocks
Jason Wittenbach
@jwittenbach
Feb 09 2015 21:04
2: Is computing the correlation non-linear? If so, then yes, that makes it different from some form of simple blurring (only defined over neighbors, rather than space)
no, I am correlating
I do what I just described in each trial
and then correlate activity with behavior, using the different trials as the data
I should be more clear:
Step 1: replace each neuron's time series with the average time series of its kNN
Step 2: for each of these new series, average the activity within each trial
Step 3: compute the correlation between that average activity and average swimming strength, across trials
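Roughly, those three steps might look like the following; the array names (dff, coords, trials, swim) and the scikit-learn neighbor lookup are stand-ins, not the actual analysis code:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    # dff: (nNeurons, nTime) dF/F; coords: (nNeurons, 3) cell positions
    # trials: list of (start, stop) frame ranges; swim: (nTrials,) mean swim strength
    k = 30

    # Step 1: average each neuron's trace with itself and its k nearest neighbors
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(coords)
    _, idx = nbrs.kneighbors(coords)              # idx[i] includes neuron i itself
    dff_nn = dff[idx].mean(axis=1)                # (nNeurons, nTime)

    # Step 2: average activity within each trial
    trial_act = np.column_stack([dff_nn[:, a:b].mean(axis=1) for a, b in trials])

    # Step 3: correlate trial-averaged activity with trial-averaged swimming
    act_z = (trial_act - trial_act.mean(1, keepdims=True)) / trial_act.std(1, keepdims=True)
    swim_z = (swim - swim.mean()) / swim.std()
    corr = (act_z * swim_z).mean(axis=1)          # one correlation per neuron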
Jeremy Freeman
@freeman-lab
Feb 09 2015 21:08
ok, so if you just did 1 + 2, it'd be the same as blurring the raw data and computing trial-averaged activity
blurring in space
but i'm pretty sure step 3 is where it matters
correlation is normalized by variance, which is very non-linear
if you beat down the noise by averaging, and then compute correlations, they can be cleaner than if you compute a bunch of noisy correlations and average them
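A toy numpy illustration of that last point, with a shared signal buried in independent per-trace noise (nothing to do with the real data):

    import numpy as np

    rng = np.random.RandomState(0)
    signal = rng.randn(20)                               # 20 "trials" of behavior
    traces = signal[None, :] + 3 * rng.randn(30, 20)     # 30 noisy neural copies

    corr = lambda x, y: np.corrcoef(x, y)[0, 1]

    # compute a bunch of noisy correlations, then average them
    corr_then_avg = np.mean([corr(t, signal) for t in traces])

    # beat down the noise by averaging first, then correlate once
    avg_then_corr = corr(traces.mean(axis=0), signal)

    print(corr_then_avg, avg_then_corr)                  # the second comes out larger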
Jason Wittenbach
@jwittenbach
Feb 09 2015 21:09
Non-linearity comes from 3 possible sources:
1: it's blurring in "neighbor space" rather than euclidean space
2: computing corr coef is non-linear
3: thresholding before vs after averaging is some kind of non-linearity
Jeremy Freeman
@freeman-lab
Feb 09 2015 21:09
i think it's probably 2 and 3
Jason Wittenbach
@jwittenbach
Feb 09 2015 21:09
yeah, that's exactly it!
you trade spatial acuity (in 'neighbor-space') for SNR via averaging before correlating
Jeremy Freeman
@freeman-lab
Feb 09 2015 21:09
exactly
and acuity in neighbor-space might preserve the topology better than in euclidean space
though there are advantages to both
cool!
Jason Wittenbach
@jwittenbach
Feb 09 2015 21:10
agreed
I plan to implement the euclidean version too
Jeremy Freeman
@freeman-lab
Feb 09 2015 21:10
awesome
Jason Wittenbach
@jwittenbach
Feb 09 2015 21:13
so this is actually data from the second half of the initial high-gain training phase in Takashi's experiment
correlating the activity during that phase with the swimming during that phase
Raphe shows up rather clearly
and there's nothing memory-related about this methinks
Jeremy Freeman
@freeman-lab
Feb 09 2015 21:14
right, it's the same negatively correlated
*negative correlation
seems reasonable to me
did you grab its time series?
Jason Wittenbach
@jwittenbach
Feb 09 2015 21:15
Oops, sorry, actually low-gain phase. but still, same idea
Ah nope, not yet
I should do that
I can just pull out the coordinates from this map pretty easily and use Jascha's new functions
Side note: There are a pair of bilateral regions that are also negatively correlated with swimming that this analysis also pulls out fairly strongly. They're super lateral and about at the same ros/caud level as Raphe
Jason Wittenbach
@jwittenbach
Feb 09 2015 21:20
Any idea what they might be? @d-v-b ? @nvladimus?
Jeremy Freeman
@freeman-lab
Feb 09 2015 21:20
yeah exactly (re: pulling the coordinates)
Davis Bennett
@d-v-b
Feb 09 2015 21:32
@jwittenbach just asked tk about this, he says that he interprets those cells to be neurons in the ventral part of the optic tectum, i.e. neurons involved in vision
Jason Wittenbach
@jwittenbach
Feb 09 2015 21:43
Cool, thanks!
During OMR, I guess the stimulus and the swimming are pretty tightly linked. Which would explain why visual cells show up.
Though, during closed loop, the interpretation is a little unclear, since the fish ostensibly swims to cancel the external motion.
Davis Bennett
@d-v-b
Feb 09 2015 21:45
yeah it's hard to say as a rule what the fish is seeing during closed loop, although we do save the stimulus velocity to disk...
Jason Wittenbach
@jwittenbach
Feb 09 2015 21:45
I guess it's probably unclear during open loop as well, as any efference copy from the motor system might still be sent to the visual system.
Jason Wittenbach
@jwittenbach
Feb 09 2015 23:00
@freeman-lab : so it's not entirely clear that this analysis is doing a lot of nonlinear processing
here is what you get if you average the neural activity by neighborhood and then calculate the correlation with the swimming (showing only the negative correlations to make it easier to see)
before.png
And this is what you get if you compute the correlations first, and then subsequently average over the neighborhoods:
after.png
Jeremy Freeman
@freeman-lab
Feb 09 2015 23:02
ah, i see
Jason Wittenbach
@jwittenbach
Feb 09 2015 23:03
One thing to note is that these colors are scaled to the maximum correlation
The corr coeffs are, on average, smaller when you do the "blurring"
which makes sense
it might be that doing the neighborhood analysis first gets a little "spatial" contrast boost
i.e. you can tell which of the neurons are really contributing to the local signal
which does play a role if you take a threshold after doing one of these analyses
Jeremy Freeman
@freeman-lab
Feb 09 2015 23:10
right, my take is that we're not gaining a huge amount here, but very well could for more complex analyses
and clearly we're gaining something
what's the difference in the max?
the max correlation
Jason Wittenbach
@jwittenbach
Feb 09 2015 23:11
computing them now...
Jason Wittenbach
@jwittenbach
Feb 09 2015 23:19
This is only for the negative correlations, so it matches what you see in those plots

Average then correlate

holy smokes
forgot this interprets Markdown
Jeremy Freeman
@freeman-lab
Feb 09 2015 23:20
haha
Jason Wittenbach
@jwittenbach
Feb 09 2015 23:20
correlate then average:
max(corr) = 0.36, mean(corr) = 0.07
average then correlate:
max(corr) = 0.61, mean(corr) = 0.14
Jeremy Freeman
@freeman-lab
Feb 09 2015 23:22
gotcha
ok, so clearly it makes a difference
which is great
means the SNR is doing the right thing
but doesn't dramatically change the spatial structure
Jason Wittenbach
@jwittenbach
Feb 09 2015 23:23
right
though I think one thing it would change would be the results after thresholding
if you compare those two plots
and ask yourself which pieces would remain after thresholding at a certain color
in one case Raphe comes out pretty much by itself
(that case being correlate then average)
but in the other case
(average then correlate)
other pieces would come out at the same "level"