Jeremy Freeman
@freeman-lab
not totally sure
Jasmine Stone
@syncrostone
according to the documentation, it's about 3 times faster with GPU toggled on
Davis Bennett
@d-v-b
is that for source extraction or registration?
Jasmine Stone
@syncrostone
Unclear. I didn't run registration because the neurofinder data has already been registered, and running registration a second time was giving weird results
Marius Pachitariu
@marius10p
The cell detection part in Suite2P is actually much faster than the rest of the pipeline, and we never bothered to accelerate it on a GPU.
I expect most of the run time you got was data reads and writes, as well as the SVD decomposition; you should lower the SVD parameters because the datasets are very short.
The number of clusters will also increase the run time of the cell detection part in Suite2P. The defaults assume several hundred (active) ROIs and are robust in that range; these datasets have an order of magnitude fewer ROIs.
*active ROIs :)
Marius Pachitariu
@marius10p
also, cell detection in Suite2P is invariant to the duration of the recording (except for data reads/writes), but the NMF methods scale at least linearly with it. Here you have recordings that are literally 100 times shorter than realistic datasets.
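A minimal sketch of the parameter scaling described above, using a generic truncated SVD rather than Suite2P's actual options (the helper name and the frames_per_component heuristic are illustrative assumptions, not part of any package):

```python
# Generic illustration only: scale the temporal SVD rank with recording length
# so short neurofinder movies don't pay for components sized for hour-long data.
import numpy as np
from sklearn.decomposition import TruncatedSVD

def temporal_svd(movie, frames_per_component=30, max_rank=500):
    """movie: array of shape (n_frames, height, width)."""
    n_frames = movie.shape[0]
    rank = min(max_rank, max(10, n_frames // frames_per_component))
    flat = movie.reshape(n_frames, -1)              # frames x pixels
    svd = TruncatedSVD(n_components=rank)
    temporal = svd.fit_transform(flat)              # (n_frames, rank)
    spatial = svd.components_.reshape(rank, *movie.shape[1:])
    return temporal, spatial

# A ~3000-frame neurofinder movie gets rank ~100 here, instead of the several
# hundred components one might use on a much longer recording.
```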
Nicholas Sofroniew
@sofroniewn
@marius10p most of the data sets are around 5 to 10 minutes long at 8 Hz (so ~3000 frames), which is around 10x shorter than typical experiments reported in the literature but still represents a meaningful snapshot of neural activity. Personally I often gather such datasets as I'm exploring different parts of a field of view, so having algorithms that work well under these conditions would be very useful to me.
In general I think it would be useful to know the minimum duration / amount of data required for algorithms to perform well, and we can add longer data sets to try to facilitate this
Jeremy Freeman
@freeman-lab
tried to capture some of these thoughts here codeneuro/neurofinder#17
Jasmine Stone
@syncrostone
@marius10p Can you help me figure out what parameters to use on the expanded but downsampled datasets that we are working on getting up?
Marius Pachitariu
@marius10p
@syncrostone Sure thing, I'll add this information to one of my issues. With everything at 5-10 minutes, I think I'm beginning to like neurofinder much more!
@sofroniewn I thought they were all 1-2 minutes, but just some of them are. I do agree the algorithms have to be able to work on 10 minute recordings.
Nicholas Sofroniew
@sofroniewn
@marius10p great, downsampling the 30Hz ones and getting 5-10 min data sounds good
Jeremy Freeman
@freeman-lab
fyi everyone we replaced a couple of the datasets with longer versions, as discussed here codeneuro/neurofinder#17
all are now 7-15min and around 8Hz
versions linked to on the website should all be the latest
Marius Pachitariu
@marius10p
great, thanks Jeremy.
also, I know this might be very annoying, but... the 00 series has terrible registration artifacts. Can the line-by-line registration be replaced with full-frame cross-correlation? see codeneuro/neurofinder#22
this might partially explain why all the scores for the 00 series are currently so low
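For reference, a hedged sketch of the full-frame cross-correlation being suggested: a single rigid (dy, dx) shift per frame estimated by FFT cross-correlation against a reference image. This is only an illustration of the idea, not the pipeline used to prepare the datasets.

```python
import numpy as np

def estimate_shift(frame, reference):
    """Return the integer (dy, dx) shift of `frame` relative to `reference`;
    np.roll(frame, (-dy, -dx), axis=(0, 1)) undoes it."""
    xcorr = np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(reference))).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # shifts past half the image size wrap around to negative values
    if dy > frame.shape[0] // 2:
        dy -= frame.shape[0]
    if dx > frame.shape[1] // 2:
        dx -= frame.shape[1]
    return int(dy), int(dx)
```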
Andrea Giovannucci
@agiovann
@freeman-lab Hello Jeremy, for the new datasets, is the ground truth the same or has it changed?
Jeremy Freeman
@freeman-lab
@agiovann ground truth is identical, only thing that's changed is the raw images, for a couple of the datasets
StylianosPapaioannou
@stelmed_twitter
Hi there! Great initiative guys, well done! Is there a planned deadline for the Neurofinder Calcium Imaging data contest?
Jasmine Stone
@syncrostone
Hi @stelmed_twitter
Right now there is no deadline in place, but we may announce one soon.
Jasmine Stone
@syncrostone
@all
We just posted a new issue discussing ground truth definitions for the neurofinder datasets, input welcome! See codeneuro/neurofinder#24
Kyle
@kr-hansen
FYI, the example Matlab scripts don't work when loading the Neurofinder 3.00 and Neurofinder 1.00 datasets. They hit an error on line 33 with the sub2ind function. I haven't looked into it in detail, but thought I'd mention it
AndrewMicallef
@AndrewMicallef
Are any of the algorithms posted so far able to capture dendrites as well?
Shannon
@magsol
@freeman-lab Working on using this for my class, unfortunately I can't seem to get the JSON format for submissions right. I copied/pasted the python code here https://github.com/codeneuro/neurofinder#submission-format , then tried to read it back using neurofinder.load, and got KeyError: 'coordinates' on line 17 in main.py. Any suggestions?
Shannon
@magsol
Looks like, given a dictionary/JSON file in the format of the example, I need to iterate through the outer list and dereference the "regions" key in order for the neurofinder.load() function to work--is this as intended?
i.e., neurofinder.load(results[0]["regions"])
Jeremy Freeman
@freeman-lab
@magsol ah yeah you're on the right track, so the python library only works on one dataset at a time, whereas the submission format is for all the datasets
the python library expects something that looks like this https://github.com/codeneuro/neurofinder-python/blob/master/a.json
which is the regions field of one element of the results array, just as you figured out!
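For anyone hitting the same KeyError, a small sketch of the split described above (the file name is a placeholder, and this assumes, as @magsol found, that neurofinder.load accepts one dataset's regions directly):

```python
import json
import neurofinder

# submission.json holds every dataset; the library scores one at a time
with open("submission.json") as f:
    results = json.load(f)

for entry in results:
    regions = neurofinder.load(entry["regions"])  # just this dataset's regions
    print(entry["dataset"], "loaded")
```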
Andrea Giovannucci
@agiovann
@freeman-lab Hello Jeremy. File 00.04 is wrongly motion corrected around frames 2650-2730. Take care.
Jeremy Freeman
@freeman-lab
lots of new results posted! http://neurofinder.codeneuro.org/
this one appears to be doing the best and it's also really nicely documented https://github.com/iamshang1/Projects/tree/master/Advanced_ML/Neuron_Detection
Kyle
@kr-hansen

@freeman-lab
I had a question about the code you use for scoring and the metrics in neurofinder-python/neurofinder/, both in main.py and evaluate.py.

From what I can tell, centers(a,b) loops through each element in a, finds the closest centroid in b within a threshold, and removes that value from b and moves on to the next region in a. My question is what threshold do you typically use?

Also, why exactly do you do the pairing this way? From what I can tell, depending on the threshold, the final score could depend heavily on the initial order of the regions in a. A region later in a might be a better match for a region in b that was already assigned to an earlier region in a and removed, so by the time the loop reaches the more appropriate region in a, its best match is already gone and the final score may not be accurate. This would be especially important for cases where cells have overlapping regions, as some of the ROIs do in the ground truth of the shared datasets.

It seems it would be more appropriate to use a pairing scheme that either maximizes a score parameter, such as inclusion, or computes pairwise centroid distances between every region in a and b and then assigns pairs globally by shortest distance (see the sketch below).
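A rough sketch of that kind of global pairing, using pairwise centroid distances and an optimal assignment rather than a greedy, order-dependent match. The threshold value is an arbitrary placeholder, and this is not the neurofinder scoring code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_centers(centers_a, centers_b, threshold=5.0):
    """centers_a, centers_b: (n, 2) arrays of ROI centroids in pixels."""
    dist = cdist(centers_a, centers_b)             # all pairwise distances
    rows, cols = linear_sum_assignment(dist)       # globally optimal pairing
    keep = dist[rows, cols] < threshold            # drop far-apart pairs
    return list(zip(rows[keep], cols[keep]))
```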

Kyle
@kr-hansen

Another comment related to the submission of .json files.

According to the instructions for submitting .json files here (https://github.com/codeneuro/neurofinder#step-2-submit-your-algorithm), it might be worth noting the difference between indexing in Matlab and Python. As written, someone who has only used Matlab and isn't aware of the difference will likely be at a disadvantage, because Matlab is 1-indexed while Python is 0-indexed. For your example, converting a Matlab mask to the json file would likely never produce a coordinate [0,1]; it would probably be [1,2] in Matlab, so their coordinates will end up shifted. I'm assuming you aren't doing any Matlab/Python corrections on the back-end.

This could also be a problem with the ground truth submissions if the contributing labs primarily use Matlab over Python or vice versa, though that might not actually be a problem. Just something I'd seen come up when working with colleagues on these data, so I thought I'd bring it up.
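To illustrate the off-by-one issue (the helper name and values below are made up for the example): coordinates produced by 1-indexed Matlab code need to be shifted down by one before being written into the 0-indexed submission json.

```python
import json

def matlab_coords_to_submission(coords_1indexed):
    """Shift [row, col] pairs from Matlab's 1-indexing to Python's 0-indexing."""
    return [[r - 1, c - 1] for r, c in coords_1indexed]

regions = [{"coordinates": matlab_coords_to_submission([[1, 2], [1, 3]])}]
print(json.dumps(regions))  # [{"coordinates": [[0, 1], [0, 2]]}]
```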

Alex Klibisz
@alexklibisz
Hi, not sure how many people visit this room, but I have a question. Is there any way for us to overwrite previous submissions? I'd like to try out several methods but I'd prefer to not "pollute" the results page. Or is there maybe an API endpoint I could hit that would not save the results? Thanks!
Alex Klibisz
@alexklibisz
Never mind - I figured out that if you submit under the same algorithm name, it overwrites the previous one
Dario Ringach
@darioringach
I will give this a try... Quick questions: (a) are the images already registered? (b) is there Matlab code that computes the scores? (c) starting to look at the first dataset -- what are the black, horizontal lines in frame #74 (matlab index) of 000.000?
Alex Klibisz
@alexklibisz
@kr-hansen I also had doubts about the ordering of regions affecting the score. I randomly shuffled the order of regions for a single submission and re-submitted it several times and ended up getting the exact same scores each time.
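For anyone who wants to repeat that check, a minimal sketch (file names are placeholders): shuffle the region order in a submission file and resubmit the rewritten copy.

```python
import json
import random

with open("submission.json") as f:
    results = json.load(f)

for entry in results:
    random.shuffle(entry["regions"])   # permute region order only

with open("submission_shuffled.json", "w") as f:
    json.dump(results, f)
```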
Kyle
@kr-hansen
@alexklibisz that's good to know. Thanks for sharing the results of your experiment.
Justin Kiggins
@neuromusic
Anyone want more calcium movies? 543 hours of calcium movies from the Allen Institute on AWS, to be specific? https://github.com/AllenInstitute/AllenSDK/wiki/Use-the-Allen-Brain-Observatory-%E2%80%93-Visual-Coding-on-AWS
Andrea Giovannucci
@agiovann
Dear all, the website does not seem to be working anymore. http://neurofinder.codeneuro.org/
Is anybody having the same issue?
Just be sure not to use the cached version of the webpage
Andrea Giovannucci
@agiovann
@freeman-lab is the website down forever?