Jeremy Freeman
@freeman-lab
as for regions and frames, frames represent different time points (it's a movie), regions represent individual neurons (which have stable identities over time)
does that help?
@jwittenbach (and anyone else) new dataset posted from @mjlm (in the harvey lab)
would be great if anyone checks if this download works https://s3.amazonaws.com/neuro.datasets/challenges/neurofinder/neurofinder.04.00.zip
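One way to run the requested download check is to verify the archive is a well-formed zip. A minimal sketch (the download itself is left out; in practice you would fetch the URL above with `urllib.request` and pass the bytes to this checker — the toy file name inside the demo archive is purely illustrative):

```python
import io
import zipfile

def check_zip_bytes(data):
    """Return True if `data` is a well-formed zip with no corrupt members."""
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            return zf.testzip() is None  # testzip() -> first bad member, or None
    except zipfile.BadZipFile:
        return False

# Self-contained demo: build a small zip in memory instead of downloading.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("images/image00000.tiff", b"fake tiff bytes")  # hypothetical member

print(check_zip_bytes(buf.getvalue()))  # True for the toy archive
```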
Frederic Ren
@RFrederic
ahh, so the different coordinates of each region are where that particular neuron is through time. Can I assume the set of coordinates for each region represents one (not many) continuous time interval in which that neuron is firing??
mjlm
@mjlm
No, the coordinates represent where the neuron is located, and this location is assumed to be constant throughout the movie. The data structure is as follows: You have a movie with, for example, 512x512 pixels per frame. The field of view of the movie is constant throughout all movie frames. Within this field of view, you have many neurons, whose location is also assumed to be constant. The goal is to figure out where the neurons are, i.e. to map pixels to neurons. The coordinates in the regions.json provide training data for this, i.e. for each region (=neuron), they list the coordinates of the pixels that were determined to be part of that region. So these coordinates index into the 512x512 spatial pixels of the movie.
To figure out when a neuron is firing, you might calculate, for each frame separately, the average intensity in all pixels belonging to that neuron. This will give you one datapoint per frame, i.e. a timeseries of fluorescence values. The brighter the fluorescence, the more the neuron is firing at that point in time.
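The trace calculation described above can be sketched in a few lines. This assumes the movie has been loaded as a `(frames, height, width)` NumPy array and that each region is a dict with a `"coordinates"` list of pixel indices, as in regions.json; the toy movie and region below are placeholders for illustration:

```python
import numpy as np

# Placeholder inputs: a random 100-frame 512x512 "movie" and one toy region.
movie = np.random.rand(100, 512, 512)
regions = [{"coordinates": [[10, 12], [10, 13], [11, 12]]}]

traces = []
for region in regions:
    coords = np.array(region["coordinates"])   # pixel indices of one neuron
    rows, cols = coords[:, 0], coords[:, 1]
    trace = movie[:, rows, cols].mean(axis=1)  # mean intensity per frame
    traces.append(trace)

traces = np.array(traces)  # shape: (num_neurons, num_frames)
```

Each row of `traces` is the fluorescence timeseries for one neuron, one value per frame.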
Frederic Ren
@RFrederic
ah! Of course each neuron consists of more than one pixel... thank you for the detailed explanation :)
Alex Eusman
@Aeusman
@freeman-lab Hi! http://datasets.codeneuro.org/ is down for me and I'm trying to understand the exact dimensions of the data. As I understand it, some codeneuro challenge data are multi-slice time series and others are single-slice time series, where can I find which are which? (and also how many slices are in each)
Jeremy Freeman
@freeman-lab
@Aeusman hey! thanks for flagging that, will look into it
the structure of the neurofinder data is quite simple and they're all the same
each dataset is just a collection of 2d TIFF images
the only thing that differs is the exact xy dimensions of the images, and the number of images
that info is available in the info.json file that comes with each dataset when you download it
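Reading those dimensions out of info.json is straightforward. A sketch, with hypothetical file contents inlined as a string (the field names and values here are illustrative, not a guaranteed schema — check the info.json in your own download):

```python
import json

# Illustrative info.json contents for one dataset (field names assumed).
info = json.loads("""
{
  "dimensions": [512, 512],
  "frames": 3024,
  "rate-hz": 7.0
}
""")

height, width = info["dimensions"]
print(f"{info['frames']} frames of {height}x{width} pixels at {info['rate-hz']} Hz")
```

In practice you would replace the inline string with `json.load(open("info.json"))` on the file that ships with each dataset.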
Alex Eusman
@Aeusman
Gotcha, so all the datasets are a single slice over time?
Jeremy Freeman
@freeman-lab
yup, exactly
Alex Eusman
@Aeusman
Thanks!
Jeremy Freeman
@freeman-lab
just updated in the README here https://github.com/codeneuro/neurofinder
also FYI we'll be posting two new datasets later today
and at that point the core neurofinder datasets will be finalized
Jeremy Freeman
@freeman-lab
i'm gonna remove the algorithms already submitted cause they were just test cases anyway, and would require updating for the new data
@jwittenbach / steve neurwin once i'm done maybe you can rerun yours to make sure everything is functioning with the new data
Jason Wittenbach
@jwittenbach
sure thing! Steve Neurwin is always at the ready ;)
Davis Bennett
@d-v-b
Spikey!
joshua vogelstein
@jovo
@freeman-lab will you let us know if any 3D gets deposited into the datasets. i thought some data (not in the challenges perhaps) was already 3D? in the meantime, we'll work on supporting 2D images (we built the 3D image code based off your first light sheet data with misha)
Jeremy Freeman
@freeman-lab
ok the new Harvey lab data are posted, thanks to @mjlm and @Selmaan!
datasets 04.00 and 04.01
reran the "demo submissions" on them and everything went through fine
would be great if someone downloaded the actual data to double check formatting and stuff
@jovo there's no 3d data in the neurofinder datasets and we probably won't add any
from this point forward the datasets are more or less final, at least for this challenge
so people can start submitting algorithms!
joshua vogelstein
@jovo
@freeman-lab ok, thanks, good luck with the challenge!
Kyle
@kr-hansen
@freeman-lab I just downloaded the all.test zip folder. FYI, it doesn't include the 04.00.test or 04.01.test. I had to download those separately.
Jeremy Freeman
@freeman-lab
@kkcthans ah thanks for catching that! will fix
Jeremy Freeman
@freeman-lab
@/all @syncrostone has now posted a bunch of algorithm results, pretty interesting so far!
the "Suite2P" algorithm from @marius10p is doing the best, but isn't dramatically different from the NMF-based approaches
Davis Bennett
@d-v-b
great work! it would be helpful to see computational costs for each method, maybe starting simply with execution time?
Jasmine Stone
@syncrostone
@d-v-b Suite2P was about 20 min per dataset on a single machine, but it required a lot of memory (doesn't work on a laptop).
@d-v-b nmf and cnmf were run on the cluster and only took a couple minutes per dataset running on approximately 10 nodes.
Davis Bennett
@d-v-b
cool!
Jeremy Freeman
@freeman-lab
pretty sure Suite2P can be made much faster using GPUs, and the others will of course depend on the number of nodes, but should scale pretty well
Davis Bennett
@d-v-b
is it just an implementation detail that makes Suite2p GPU-dependent?
or is there something about the algorithm that crucially leverages GPUs
Jeremy Freeman
@freeman-lab
not totally sure
Jasmine Stone
@syncrostone
according to the documentation, it's about 3 times faster with GPU toggled on
Davis Bennett
@d-v-b
is that for source extraction or registration?
Jasmine Stone
@syncrostone
Unclear. I didn't run registration because neurofinder data has registration already run, and the double registration was giving weird results
Marius Pachitariu
@marius10p
The cell detection part in Suite2P is actually much faster than the rest of the pipeline, and we never bothered to accelerate it on a GPU.