Capsaicin2
@Capsaicin2

Warning: Number of input vectors, 3, did not match the input matrix's number of dimensions, 2. 1 trailing
singleton input vectors were removed.

In mat2cell (line 73)
In apply_shifts (line 189)
In drift_revphase (line 98)
Error using mat2cell (line 97)
Input arguments, D1 through D4, must sum to each dimension of the input matrix size, [4 4 1 2].

Error in apply_shifts (line 211)
shifts_cell = mat2cell(shifts_up,ones(length(xx_uf),1),ones(length(yy_uf),1),ones(length(zz_uf),1),nd);

Error in drift_revphase (line 98)
phasemap1 = apply_shifts(single(phasemap1),shifts1,options_nonrigid);

eftychios pnevmatikakis
@epnev
@Capsaicin2 I’m not sure why you get this error, assuming you downloaded the code after Aug 30. If you want, send me the relevant images and variables in private and I can take a look.
Capsaicin2
@Capsaicin2
@epnev Sent you an email. Thanks for the help!
eftychios pnevmatikakis
@epnev
@Capsaicin2 The issue here is that the algorithm detected a relative shift between odd and even lines (probably due to bidirectional scanning) for the original file lummask1, but not for the phasemap1 file. As a result, different methods for applying the shifts were used in the two cases, causing a conflict. The way to fix this is to get the options struct as an output when you apply the normcorre function, so that this relative shift is passed on to the apply_shifts function. Alternatively, if you don’t do bidirectional scanning, you can set options_nonrigid.correct_bidir = false and this will not be used at all.
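For reference, a minimal MATLAB sketch of both fixes (the four-output normcorre signature is taken from the NoRMCorre MATLAB API; variable names follow the traceback above):

```matlab
% Fix 1: let normcorre return the options struct, so the detected
% bidirectional (odd/even line) offset is carried into apply_shifts.
[lummask1_corr, shifts1, ~, options_nonrigid] = normcorre(single(lummask1), options_nonrigid);
phasemap1 = apply_shifts(single(phasemap1), shifts1, options_nonrigid);

% Fix 2: if the data were not acquired with bidirectional scanning,
% disable the correction entirely.
options_nonrigid.correct_bidir = false;
```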
Capsaicin2
@Capsaicin2
@epnev Worked like a charm. Thanks!!
eftychios pnevmatikakis
@epnev
@Capsaicin2 you’re welcome
Long Lab
@LongLabGit

Hi @epnev,

I am using the matlab implementation of your code to motion correct 2-p calcium imaging data (512 x 512 px, FOV edge length 750 um, 30 Hz, green (GCaMP6) and red (red microspheres) channels). I usually do not have a lot of motion (around 1-2 pixels between frames, maybe up to 8 px across frames).

I manually generate a template (ImageJ) from a subset of red channel frames to feed into 'normcorre_batch' and motion correct the red channel. Then I apply the shifts to the green channel using 'apply_shifts'.

The motion correction itself works very well(!); unfortunately, it seems to generate two types of artifacts (see 'https://drive.google.com/file/d/1QVuISIUVAGXnoDzo7VUKEHZPwgQePm-o/view' for a PDF with example images):

  1. Many bright spots in the motion corrected red channel (and also the brightest spots in the green channel) cause stripy artefacts in the x and y direction (see white arrows in PDF).

  2. The green channel, and to a lesser extent the red, shows a raster of tiling artefacts (see white arrow in PDF).

I would be very happy about any suggestions that could resolve these issues.

I thought the 'stripy artefacts' issue could be due to conversion problems, but after trying a lot of things this seems unlikely to me. This is how I handle the data (a sketch follows the list):

  • load the tif stack using 'read_file'
  • convert the data to single (the template is also converted to single)
  • deinterleave the channels
  • run 'normcorre_batch' on the red channel
  • run 'apply_shifts' on the green channel
  • convert the motion-corrected data to uint16 and write tifs using imwrite ('compression','none')
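A minimal MATLAB sketch of this pipeline, assuming the NoRMCorre function names above; the file names, channel order, and grid_size are placeholders:

```matlab
Y = single(read_file('stack.tif'));              % interleaved two-channel stack
R = Y(:,:,1:2:end);                              % red channel (microspheres)
G = Y(:,:,2:2:end);                              % green channel (GCaMP6)
template = single(read_file('template.tif'));    % ImageJ-generated template

options = NoRMCorreSetParms('d1',size(R,1),'d2',size(R,2),'grid_size',[128,128]);
[R_corr,shifts] = normcorre_batch(R,options,template);  % register the red channel
G_corr = apply_shifts(G,shifts,options);                % apply the same shifts to green

for t = 1:size(G_corr,3)                         % write uncompressed tifs
    imwrite(uint16(G_corr(:,:,t)),'green_corr.tif', ...
        'WriteMode','append','Compression','none');
end
```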

Thanks so much for your help!

eftychios pnevmatikakis
@epnev
Hi @LongLabGit I’m traveling these days but will take a look.
@LongLabGit do the stripes remain if you use options.shifts_method = 'cubic'?
Also, the tiling artifacts in the green channel suggest to me a small grid_size. For the FOV you have, I’d suggest grid_size = [128, 128]. It’ll also be much faster.
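In MATLAB terms, a sketch of these two suggestions, set directly on an existing NoRMCorre options struct (such fields are normally set via NoRMCorreSetParms):

```matlab
options.shifts_method = 'cubic';   % apply shifts by interpolation instead of FFT
options.grid_size = [128,128];     % larger patches for a 512 x 512 FOV; also faster
```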
Long Lab
@LongLabGit
Hi @epnev, thanks so much - both of my problems seem to be resolved!
eftychios pnevmatikakis
@epnev
@LongLabGit great!
@LongLabGit btw, I believe you should be able to fully automate the procedure and bypass the use of ImageJ. When no template is provided, the algorithm first focuses on a few frames to get a template and then starts the procedure.
Long Lab
@LongLabGit
@epnev, I am planning to do that eventually. I started by providing a single ImageJ-generated template so I can use the same template across trials (this is of course also possible if I just keep using the automatically generated template from the first trial for all the following trials).
gittangjun
@gittangjun
Hi @epnev
I have a question about how to derive the expression for t_max in line 184 of the file cont_ca_sampler.
eftychios pnevmatikakis
@epnev
Hi @gittangjun the impulse response function in continuous time is given by:
h(t) = (1 - exp(-t/τ_rise)) exp(-t/τ_decay) for t > 0 (and h(t) = 0 for t < 0)
t_max is the time when it attains its maximum. This page might also be useful.
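Setting the derivative of this h(t) to zero gives the closed form below (a derivation sketch; the code may use a rescaled or discretized parameterization of the same kernel):

```latex
h'(t) = e^{-t/\tau_{\mathrm{decay}}}\left[\frac{1}{\tau_{\mathrm{rise}}}\,e^{-t/\tau_{\mathrm{rise}}}
      - \frac{1}{\tau_{\mathrm{decay}}}\left(1 - e^{-t/\tau_{\mathrm{rise}}}\right)\right] = 0
\quad\Longrightarrow\quad
t_{\max} = \tau_{\mathrm{rise}}\,\log\!\left(1 + \frac{\tau_{\mathrm{decay}}}{\tau_{\mathrm{rise}}}\right)
```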
gittangjun
@gittangjun
@epnev Thanks for the help!
Nick Hardy
@nhardy01
Hi @epnev, I'm running deconvolveCa.m to deconvolve some calcium traces directly in the CaImAn-Matlab package. Using the 'constrained' option and 'ar2' the algorithm runs fine, but I would like to test out the first-order kernel using 'ar1'. Unfortunately, when I run this the denoised calcium trace ends up having imaginary values. Is this intended? If not, any insights on what may cause this?
eftychios pnevmatikakis
@epnev
Hi @nhardy01
no, in general this should not be happening. Unfortunately, without taking a look at the trace I cannot really tell what might be wrong. Do you also get the same error with the 'thresholded' method?
Nick Hardy
@nhardy01
@epnev No, the 'thresholded' option produces the denoised calcium trace as normal, for both 'ar1' and 'ar2' kernels. If you have time to take a quick look, I can send over the data and code I'm using.
p.s. @epnev Thanks for the help, it's a great package.
eftychios pnevmatikakis
@epnev
@nhardy01 I can take a quick look at some point. Send me in private.
gittangjun
@gittangjun
@epnev Hi, I have some questions about the file cont_ca_sampler: why is it divided by diff(gr) in line 193, and what is G1sp in line 200?
eftychios pnevmatikakis
@epnev
@gittangjun It’s required to make the math work when transitioning from the continuous to the discrete time representation. Check this page for some additional info (the part where I say: c[n]*(r_1 - r_2) = h(nΔt) for n = 0, 1, 2, …)
gittangjun
@gittangjun
@epnev Thanks so much for your help!
Jakob Voigts
@jvoigts
I am getting occasional 'Index exceeds matrix dimensions' errors in normcorre_batch: Error in shift_reconstruct (line 48), I = remove_boundaries(I,shifts,method,add_value); - has anyone seen this? I haven't managed to cleanly reproduce it - it occurs in some stacks but not others.
eftychios pnevmatikakis
@epnev
@jvoigts What values of grid_size, max_shift, overlap, and max_dev are you using? I think I’ve seen this error when the patches are small and the shifts are large, causing the patch to move out of the window.
Jakob Voigts
@jvoigts
ah yes, maybe that could explain it - I was using 'grid_size',[64,64],'init_batch',400,'overlap_pre',64,'mot_uf',4,'bin_width',200,'max_shift',40,'max_dev',20 - I'll try aligning the max shift with the window size better. If this is the issue, it should also be easy to simply clamp the max motion to the largest legal value, right?
eftychios pnevmatikakis
@epnev
@jvoigts Yes, or simply increase the grid_size. Although it’s specified in pixels, which can correspond to arbitrary resolution, a grid_size of 128 usually works pretty well in the datasets I’ve seen (and it’s faster). Let me know if that deals with the issue.
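A hedged sketch of the parameter interplay being discussed (parameter names from NoRMCorre; the values are illustrative, the point is to keep the maximum shift small relative to the patch extent):

```matlab
% Keep max_shift comfortably below the patch extent (grid_size + overlap)
% so a shifted patch cannot move out of its window.
options = NoRMCorreSetParms('d1',d1,'d2',d2, ...
    'grid_size',[128,128],'overlap_pre',[32,32],'overlap_post',[32,32], ...
    'mot_uf',4,'max_shift',20,'max_dev',[8,8]);
```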
Madhavi Tippani
@madhavitippani

I have large datasets (1044 x 1344 x 2000), so I break them down into 20 to 50 frame segments wherever I see movement. When I run the demo script with the NoRMCorre options "T,'bin_width',10,'iter',3,'fr',4,'shifts_method','FFT','phase_flag',true,'grid_size',[128,128]*2" and gSig 10, gSiz 20, I get this:

"Error using mat2cell (line 89)
Input arguments, D1 through D4, must sum to each dimension of the input matrix size, [4 6 1 2].

Error in apply_shifts (line 199)
parfor ii = 1:lY"

It works fine when I use the whole dataset.
eftychios pnevmatikakis
@epnev
@madhavitippani It might be that this is caused by the small number of frames, but I’m not sure. Does it work when you set options.shifts_method = 'cubic'?
jacobbaron
@jacobbaron
Hi @epnev, I've been using NoRMCorre for alignment of volumetric 1p data and I've been running into a few problems:
  • It seems to me like increasing the number of iterations should be the same as running the alignment multiple times; however, I've found that increasing the number of iterations doesn't seem to do much of anything, while running the alignment multiple times iteratively can do a lot. Is there something I'm missing here?
  • I find that the code will sometimes crash unexpectedly on some frames depending on the parameters I set. I have been getting errors like 'Index exceeds matrix dimensions' in 'remove_boundaries.m' after some number of frames. I'm not sure which parameters are in conflict here.
eftychios pnevmatikakis
@epnev
Hi @jacobbaron
  • Increasing the number of iterations is not the same as running the alignment multiple times. If you increase the number of iterations, what happens is that in each iteration a new template is used, but the data that gets registered is always the original raw data. I admit that this practice has become somewhat redundant, since now, even with one iteration, the first few frames are registered twice just to get a good template.
  • The error in remove_boundaries.m might be due to some interplay between the parameters grid_size, max_shift, overlap, and max_dev. I think I’ve seen this error when the patches are small but the shifts are large, causing the patch to move out of the window.
  • Are you registering the whole volume at once, or plane by plane? Volumetric registration can be very memory intensive. You might want to take a look at this small note. A sketch of the plane-by-plane approach follows this list.
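A minimal sketch of plane-by-plane non-rigid registration, assuming an x-by-y-by-z-by-t volume V that has already been rigidly aligned in z (variable names and parameter values are placeholders):

```matlab
[d1,d2,d3,T] = size(V);
opts = NoRMCorreSetParms('d1',d1,'d2',d2, ...
    'grid_size',[64,64],'overlap_pre',[16,16],'overlap_post',[16,16], ...
    'max_shift',15,'max_dev',[4,4]);
for z = 1:d3
    Mz = normcorre_batch(squeeze(V(:,:,z,:)), opts);   % register one plane over time
    V(:,:,z,:) = reshape(Mz, d1, d2, 1, T);            % put the plane back
end
```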
jacobbaron
@jacobbaron
Thanks @epnev! That clarifies the iterations, and the explanation of the error makes sense. I am currently rigidly registering the whole volume first to align the z-planes (there is a bit of movement in z) and then non-rigidly aligning each plane independently using a grid_size of [64,64], max_shift 15, overlap_pre [16,16], overlap_post [8,8], max_dev [4,4]. The recording size varies but is around 200 x 200. I've tried varying these a bit; reducing max_shift from 20 -> 15 helped some, and increasing overlap_post to [16,16] also seemed to help. Is there a good rule of thumb to help choose these?
eftychios pnevmatikakis
@epnev
@jacobbaron When doing non-rigid correction I usually look at the spread of the shifts of the different patches; see lines 52-72 in the demo file. If the spread is small, it means non-rigid motion is low and you can increase the patch size. An increased patch size brings more robustness and is faster, at the expense of being less flexible when strong non-rigid motion is present. Similarly, increasing the overlap is good practice in that it forces the patches to have more similar motion, but it can also be restricting. I always use an overlap of at least [16,16] for both pre and post, typically [32,32] for the standard 512 x 512 FOV with a pixel corresponding to roughly 1 um^2.
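Along the lines of the demo-file snippet referenced above, a hedged sketch of inspecting the per-patch shift spread (the layout of the shifts struct array returned by normcorre_batch is assumed from the NoRMCorre demo):

```matlab
% Collect the non-rigid shifts into a patches-by-dims-by-frames array.
T = numel(shifts);
shifts_nr = cat(ndims(shifts(1).shifts)+1, shifts(:).shifts);
shifts_nr = reshape(shifts_nr, [], 2, T);      % 2D data: two shift components
shifts_d1 = squeeze(shifts_nr(:,1,:));         % first spatial dimension, patches x frames
shifts_d2 = squeeze(shifts_nr(:,2,:));         % second spatial dimension

% Per-frame spread across patches; a small spread suggests larger patches are fine.
spread_d1 = max(shifts_d1,[],1) - min(shifts_d1,[],1);
spread_d2 = max(shifts_d2,[],1) - min(shifts_d2,[],1);
fprintf('median spread: %.2f px, %.2f px\n', median(spread_d1), median(spread_d2));
```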
Georgia Pierce
@gpierce5
Hi @epnev, I'm using CaImAn to segment 2D 2p data and it's working well so far, thanks! I see that in detrend_df_f you are using a Kronecker tensor product in the calculation of DF/F. Are there any intuitive explanations of what this is doing that you could point me toward? I am unfamiliar with it.
eftychios pnevmatikakis
@epnev
@gpierce5 It’s probably to match the dimensionality of the baseline to the dimensionality of the data, but I can’t find exactly what you’re referring to. Are you on matlab or python?
Georgia Pierce
@gpierce5
@epnev On matlab, function detrend_df_f line 36, prctfilt line 43
eftychios pnevmatikakis
@epnev
@gpierce5 Yes, that’s the reason. What I’m doing is computing the percentile in different parts of the data, then linearly interpolating between them to match the dimensionality of the data. You might also want to try out this more automated DF/F method that I just uploaded, which determines the percentile level automatically.
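A toy illustration of how kron matches the baseline's dimensionality to the data's (made-up values, not the actual detrend_df_f code):

```matlab
F0_blocks = [10 12 11];                  % one baseline value per block of frames
F0_full = kron(F0_blocks, ones(1,5));    % repeat each value across 5 frames:
                                         % [10 10 10 10 10 12 ... 11 11]
```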
Mao Yeh
@maoyeh
Hi @epnev, can NoRMCorre be used for 10 um stack spine imaging? Thank you so much.
eftychios pnevmatikakis
@epnev
@maoyeh I’m not sure I understand your setup, but you can use normcorre for spine imaging data. Since the resolution there is much finer, motion in pixel units tends to be much larger, so you’ll probably need to set the max_shift parameter to a higher value than the default.
conorheins
@conorheins
@epnev I think line 49 in detrend_df_f_auto should read:
F0(i,:) = prctfilt(B(i,:),cdf_level,options.df_window,[],0) + (F(i,:)-Fd);
instead of (F - Fd) as the last term, for dimension matching.
eftychios pnevmatikakis
@epnev
@conorheins You’re right - corrected. Thanks!
Jonnathan Singh
@Jonna_Singh__twitter
hey @epnev, thanks for this great tool. I'm trying to use this on concatenated 1p videos (~6k frames made up of 10 individual 600-frame videos) but I run out of memory. Are there any recommendations you would have for dealing with larger data? As a second question - would there be issues with small instantaneous movements that come in between each individual video, where the FOV might be off by a small amount?
eftychios pnevmatikakis
@epnev
Hi @Jonna_Singh__twitter please check this entry in the wiki for dealing with large datasets. As for your second question, to deal with slight changes in the FOV you can use the run_pipeline.m framework, or you can simply ignore them (if you think they are too small) and let the motion correction step deal with them. A sketch of the large-data route follows.
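A hedged sketch of the memory-limited route (NoRMCorre can register directly from a file on disk and stream the output to disk; the option names output_type and h5_filename are taken from NoRMCorreSetParms, but check the wiki entry above for the recommended settings):

```matlab
options = NoRMCorreSetParms('d1',d1,'d2',d2,'grid_size',[128,128], ...
    'output_type','hdf5','h5_filename','M_corrected.h5');   % write result to disk
M = normcorre_batch('concatenated_1p.tif', options);        % read input from file
```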
Jonnathan Singh
@Jonna_Singh__twitter
ah, I didn't see that, thanks for the advice!
Jonnathan Singh
@Jonna_Singh__twitter
@epnev sorry I didn't specify - I'm talking about using normcorre to motion correct a large dataset - is there a similar way to apply normcorre to large datasets?