These are chat archives for freeman-lab/zebra

27th Feb 2015
Nikita Vladimirov
@nvladimus
Feb 27 2015 01:18
yes, guys, this actually speeds up registration by 12x :fire: My registration for a 15-min experiment now runs in 2.4 min instead of 28 min :) Same results, accurate to within 1 pixel of shift. Try it!
I took an ROI of 300x300 pixels in the middle of each plane, instead of taking the whole plane
datROI = dat.crop((662, 362, 0), (1174, 874, 41))  # take sub-stack, (length, width, z)
regROI = Registration('planarcrosscorr').prepare(datROI, startIdx=refR[0], stopIdx=refR[1])
print('Done taking reference')
regParamsROI = regROI.fit(datROI.medianFilter(3))
my planes are 2048 x 1024
:fish: :fish: :fish: :fish: :fish: :fish: :fish: :fish: :fish: :fish: :fish: :fish: (12 times)
Nikita Vladimirov
@nvladimus
Feb 27 2015 02:02
I could further reduce the registration time by 2x :fish: :fish: by taking only 10 planes (15-25) and extrapolating the stack motion from those. So the run time for the 15-min dataset went from 28 min to 1.4 min :)
I think it's worth adding to real-time processing, to speed things up.
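The subsample-and-extrapolate scheme can be sketched in plain NumPy (array names, shapes, and the random shifts below are hypothetical stand-ins, not thunder's API):

```python
import numpy as np

# Hypothetical setup: shifts were estimated only for planes zStart..zEnd-1
numFrames, nPlanes, zStart, zEnd = 100, 41, 15, 25
rng = np.random.default_rng(0)
shiftsCrop = rng.normal(size=(numFrames, zEnd - zStart, 2))  # (dx, dy) per frame, per plane

# Build shifts for the full stack: copy the measured planes, and fill the
# remaining planes with each frame's mean shift over the measured ones
shiftsFull = np.empty((numFrames, nPlanes, 2))
shiftsFull[:, zStart:zEnd, :] = shiftsCrop
meanShift = shiftsCrop.mean(axis=1, keepdims=True)  # shape (numFrames, 1, 2)
shiftsFull[:, :zStart, :] = meanShift
shiftsFull[:, zEnd:, :] = meanShift
```

Only the measured planes go through cross-correlation; the rest inherit each frame's average motion, which is a reasonable approximation when the whole stack moves together.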
:wine_glass:
:sparkles:
Jeremy Freeman
@freeman-lab
Feb 27 2015 02:41
hey nikita, that's cool!
:clap:
definitely worth knowing about
would be great if @d-v-b tried it on a couple data sets as well to see if it behaves similarly across a range of data sets
and thanks for updating those data!
we're very close now on doing the full scale test on these data
so can hopefully do some live testing next week?
Nikita Vladimirov
@nvladimus
Feb 27 2015 03:41
sure, I am ready for the test, I have reservations basically every day next week - let me know when you guys are ready!
Jeremy Freeman
@freeman-lab
Feb 27 2015 03:43
ok cool! we'll be doing proper testing on your example data in the meantime
Nikita Vladimirov
@nvladimus
Feb 27 2015 03:51
OK!
Nikita Vladimirov
@nvladimus
Feb 27 2015 15:50
Is there any way to convert a registration model obtained from cropped data into a registration model for the full data?
regCrop = Registration('planarcrosscorr').prepare(datCrop, startIdx=refR[0], stopIdx=refR[1])
regParamsCrop = regCrop.fit(datCrop.medianFilter(3))
I want to create regParamsFull for the full data, knowing the data's size, and populate it with values extrapolated from regParamsCrop
this would let me skip registering the full data: only the cropped data gets registered, and the resulting model is then applied to the full data
Nikita Vladimirov
@nvladimus
Feb 27 2015 15:57
I tried doing everything with regParamsCrop only, and extended the size of its transformations (regParamsCrop.transformations[i].delta) to match the full data's z-range. But the altered regParamsCrop throws an error when I try to save it, apparently because the other dimensions no longer match.
Nikita Vladimirov
@nvladimus
Feb 27 2015 16:06
full code
    zStart = 15   # starting plane of registration ROI
    zEnd = 25     # ending plane
    window = 300  # width and height of the registration ROI
    datCrop = dat.crop((dims[0]/2 - window/2, dims[1]/2 - window/2, zStart), \
                       (dims[0]/2 + window/2, dims[1]/2 + window/2, zEnd))  # take sub-stack, (length, width, z)
    regCrop = Registration('planarcrosscorr').prepare(datCrop, startIdx=refR[0], stopIdx=refR[1])
    print('Done taking reference (cropped data)')
    regParamsCrop = regCrop.fit(datCrop.medianFilter(3))

    # extrapolate all missing planes from the known registration shifts
    xysCrop = np.array([t.delta for t in regParamsCrop.transformations])
    xysNew = np.zeros((numFrames + 1, dims[2], 2))
    for i in range(zStart):  # extrapolate below the ROI
        xysNew[:, i, :] = xysCrop.mean(axis=1)
    for i in range(zStart, zEnd):  # populate with real values
        xysNew[:, i, :] = xysCrop[:, i - zStart, :]
    for i in range(zEnd, dims[2]):  # extrapolate above the ROI
        xysNew[:, i, :] = xysCrop.mean(axis=1)

    # apply registration to the full data
    start_time = time.time()
    for x in range(xysNew.shape[0]):
        regParamsCrop.transformations[x].delta = list(xysNew[x, :, :])
    dat = regParamsCrop.transform(dat)
    regParamsCrop.save(regDir + 'regParamsCr')

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-26-7bbcd75ce622> in <module>()
      1 if doRegistration:
----> 2     regParamsCrop.save(regDir + 'regParamsCr')

/scratch/spark/tmp/spark-b6ff66f4-f033-4360-8488-2df5686813f2/spark-a72c596e-b866-42b0-9510-d81564f74bf0/thunder_python-0.5.0_dev-py2.7.egg/thunder/imgprocessing/registration.pyc in save(self, file)
    193         else:
    194             f = open(file, 'w')
--> 195         output = json.dumps(self, default=lambda v: v.__dict__)
    196         f.write(output)
    197         f.close()

/usr/local/python-2.7.6/lib/python2.7/json/__init__.pyc in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, encoding, default, sort_keys, **kw)
    248         check_circular=check_circular, allow_nan=allow_nan, indent=indent,
    249         separators=separators, encoding=encoding, default=default,
--> 250         sort_keys=sort_keys, **kw).encode(obj)
    251 
    252 

/usr/local/python-2.7.6/lib/python2.7/json/encoder.pyc in encode(self, o)
    205         # exceptions aren't as detailed.  The list call should be roughly
    206         # equivalent to the PySequence_Fast that ''.join() would do.
--> 207         chunks = self.iterencode(o, _one_shot=True)
    208         if not isinstance(chunks, (list, tuple)):
    209             chunks = list(chunks)

/usr/local/python-2.7.6/lib/python2.7/json/encoder.pyc in iterencode(self, o, _one_shot)
    268                 self.key_separator, self.item_separator, self.sort_keys,
    269                 self.skipkeys, _one_shot)
--> 270         return _iterencode(o, 0)
    271 
    272 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,

/scratch/spark/tmp/spark-b6ff66f4-f033-4360-8488-2df5686813f2/spark-a72c596e-b866-42b0-9510-d81564f74bf0/thunder_python-0.5.0_dev-py2.7.egg/thunder/imgprocessing/registration.pyc in <lambda>(v)
    193         else:
    194             f = open(file, 'w')
--> 195         output = json.dumps(self, default=lambda v: v.__dict__)
    196         f.write(output)
    197         f.close()

AttributeError: 'numpy.ndarray' object has no attribute '__dict__'
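For what it's worth, the traceback shows `json.dumps(self, default=lambda v: v.__dict__)` eventually reaching a NumPy array (the deltas were replaced with ndarray rows by the loop above), and ndarrays have no `__dict__`. A possible workaround, sketched with a hypothetical stand-in for thunder's transformation class, is a default handler that converts arrays to lists first:

```python
import json
import numpy as np

class Transformation(object):
    """Hypothetical stand-in for thunder's per-frame transformation."""
    def __init__(self, delta):
        self.delta = delta

def to_serializable(v):
    # ndarrays have no __dict__, so convert them to plain lists first
    if isinstance(v, np.ndarray):
        return v.tolist()
    return v.__dict__

t = Transformation(np.array([[1.0, 2.0], [3.0, 4.0]]))
print(json.dumps(t, default=to_serializable))  # {"delta": [[1.0, 2.0], [3.0, 4.0]]}
```

Keeping the deltas as plain Python lists of lists (rather than ndarray rows) before calling save would likely avoid the error as well.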
Nikita Vladimirov
@nvladimus
Feb 27 2015 16:19
:worried:
Jeremy Freeman
@freeman-lab
Feb 27 2015 16:39
hmmm i'm fairly certain this would work if you only crop in x and y
did you try that first?
cropping in z is more complicated because, as you say, it's changing the size of the delta
i think we could make it work, but need to look at it more
Nikita Vladimirov
@nvladimus
Feb 27 2015 16:48
I also cropped in z
Jeremy Freeman
@freeman-lab
Feb 27 2015 16:48
right, so i would recommend just cropping in x and y and see if that works
Nikita Vladimirov
@nvladimus
Feb 27 2015 16:48
ok, never mind, I'll skip saving the reg. params for now
Jeremy Freeman
@freeman-lab
Feb 27 2015 16:48
oh, so it's applying fine, and the problem is only in saving?
Nikita Vladimirov
@nvladimus
Feb 27 2015 16:57
yes, right
transformation is applied fine
Jeremy Freeman
@freeman-lab
Feb 27 2015 16:59
ok great! then i'll specifically look into the saving issue, good to know that's the only problem
Nikita Vladimirov
@nvladimus
Feb 27 2015 17:55
thanks, but again, there's no burning need for this right now
Nikita Vladimirov
@nvladimus
Feb 27 2015 18:10
I was only wondering if there was a quick fix for this, since I'm not very familiar with the guts of these classes