JacquesStout
@JacquesStout
Hello everyone.
I had a bit of a strange issue. I have been using the function load_trk for a while to load and manipulate tractography files; however, recently it has been unable to load the files properly.
These files are about 2 GB, and tractography files created in exactly the same way but with 1/100th of the file size load just fine. I therefore suspect some sort of memory issue, but it doesn't give me any specifics: it just starts running the function and then stalls forever.
Is there any recommendation as to what to try in order to solve this particular problem?
Chandana Kodiweera
@kodiweera
@arokem Voxel size vs. seed density: would 1 mm iso with seed density=2 vs. 2 mm iso with seed density=1 produce the same sort of tracts? Is there a rule of thumb for choosing seed density relative to voxel size for whole-brain tractography? Thank you.
Chandana Kodiweera
@kodiweera

@arokem Voxel size vs. seed density: would 1 mm iso with seed density=2 vs. 2 mm iso with seed density=1 produce the same sort of tracts? Is there a rule of thumb for choosing seed density relative to voxel size for whole-brain tractography? Thank you.

@arokem I know now it's a silly question after thinking about it. Seed density=2 places 8 seeds per voxel, arranged on a [2, 2, 2] grid. That's 8 seeds per 1 mm^3 (1 seed per 1/8 mm^3) if the voxel size is 1 mm iso. Got it!
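(For reference, a minimal sketch of how that density setting maps onto DIPY's seed generator; the seed_mask and affine variables and the exact signature below are assumptions based on recent DIPY versions.)

from dipy.tracking import utils

# density=[2, 2, 2] places a 2 x 2 x 2 grid of seeds in every voxel of seed_mask,
# so the resulting seeds-per-mm^3 depends on the voxel size encoded in affine.
seeds = utils.seeds_from_mask(seed_mask, affine, density=[2, 2, 2])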

Chandana Kodiweera
@kodiweera
Can you please recommend a good Altas for whole-brain connectivity analysis? Thank you.
Chandana Kodiweera
@kodiweera
Hi @ShreyasFadnavis, how can I obtain a copy of 'cudipy_environment.yml' mentioned in the tutorial given by Gregory Lee on the last day of the DIPY workshop? Thank you.
Chandana Kodiweera
@kodiweera
Hi @arokem I did not know this exists : https://neurohackademy.org/ and https://github.com/neurohackademy/nh2020-curriculum. Posting here for others. When will be the next course? I am glad I found this! Thank you.
1 reply
Shreyas Fadnavis
@ShreyasFadnavis
@kodiweera You can use this to get the requirements: https://github.com/dipy/cudipy/blob/2ef103c9b58b4dbc73ddd781ed385f8da673d493/requirements.txt for cudipy!
I don't think they differ -- Please correct me if I am wrong @grlee77!

PS E:\dipy> pip show cvxpy
Name: cvxpy
Version: 1.1.11

This is strange! Can you please try using CVXPY 1.0.31? That should resolve the issue. From CVXPY 1.1.x onwards, a lot of changes were made to their codebase.
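(If it helps, one way to pin that exact version in a pip-based environment; a sketch, adjust to your own setup:)

pip install cvxpy==1.0.31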

Shreyas Fadnavis
@ShreyasFadnavis

Hello everyone.
I had a bit of a strange issue. I have been using the function load_trk for a while to load and manipulate tractography files; however, recently it has been unable to load the files properly.
These files are about 2 GB, and tractography files created in exactly the same way but with 1/100th of the file size load just fine. I therefore suspect some sort of memory issue, but it doesn't give me any specifics: it just starts running the function and then stalls forever.
Is there any recommendation as to what to try in order to solve this particular problem?

@frheault this may be a question for you!

Gregory R. Lee
@grlee77

Hi @ShreyasFadnavis, how can I obtain a copy of 'cudipy_environment.yml' mentioned in the tutorial given by Gregory Lee on the last day of the DIPY workshop? Thank you.

See below for the contents of the file. I think I had also put it within a .zip file with the example notebooks I uploaded to Google drive along with the lecture slides. The version pasted below includes CuPy from the release channel as well, so you would not need to run the separate CuPy install command from the lecture slides. After CuPy 9 is officially released in late April, we can remove conda-forge/label/cupy_rc from the channels list in that file.

name: cudipy_demo
channels:
  - conda-forge/label/cupy_rc
  - conda-forge
dependencies:
  - python=3.9
  - numpy>=1.20
  - scipy>=1.6
  - scikit-image>=0.18
  - cupy>=9.0.0b3
  - matplotlib
  - cython
  - h5py
  - pytest
  - tqdm
  - ipython
  - jupyterlab
  - nibabel
  - packaging
  - dask
  - fastrlock
  - pip
  - pyparsing
  - pip:
    - dipy
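
(For reference, a typical way to create and activate an environment from that file, assuming it is saved locally as cudipy_environment.yml:)

conda env create -f cudipy_environment.yml
conda activate cudipy_demo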
Chandana Kodiweera
@kodiweera
Hi @grlee77 . Thank you!
Chandana Kodiweera
@kodiweera
Hi @Garyfallidis @RafaelNH @arokem @gabknight Simulated data: after the workshop, I am building a couple of pipelines. I would like to test them out with some simulated data. Are there simulated data (a simulated phantom) publicly available to test DKI metrics / MSMT-CSD PFT-based tractography, or any recommended dataset to test them on? Any suggestion on how to validate my pipelines is appreciated. Thank you!
2 replies
Chandana Kodiweera
@kodiweera
Hi @ShreyasFadnavis as some of the processes can be run using cuda via cudipy, are you going to incorporate this capability in qsiprep?
Shreyas Fadnavis
@ShreyasFadnavis
@kodiweera: I am guessing this is a question for @mattcieslak -- but in any case, it will take time to do this on our end!
Chandana Kodiweera
@kodiweera
Hi @Garyfallidis and @arokem, I am fitting the DIPY msmt-csd model to my data; it has now been running for 2 days on 16 CPUs and is still not done. I wonder if I have to set a special flag within the fitting function to parallelize it. Does it automatically parallelize across the available CPUs, or do I have to specify that somehow? Can the msmt-csd model currently be fit on GPUs? Thank you.
Matt Cieslak
@mattcieslak
@kodiweera we don't currently have any dipy tractography in qsiprep but we definitely want to add it
the qsiprep container comes with CUDA enabled, so if it's accessible through the tractography implementation it will be usable on any CUDA-enabled computer
Chandana Kodiweera
@kodiweera

the qsiprep container comes with CUDA enabled, so if it's accessible through the tractography implementation it will be usable on any CUDA-enabled computer

Thank you. I am also looking into bedpostx_gpu. Wouldn't be a bad idea to add it too.

Chandana Kodiweera
@kodiweera

@arokem This is what I'm fitting:

# msmt-csd model and its fitting
from dipy.reconst.mcsd import MultiShellDeconvModel

mcsd_model = MultiShellDeconvModel(gtab, response_mcsd)
mcsd_fit = mcsd_model.fit(dwi_preprocessed_data)

Does that use all available cpus by default?
Chandana Kodiweera
@kodiweera
Hi @mattcieslak @ShreyasFadnavis Usability of data: are there threshold values of certain metrics for accepting/rejecting data after qsiprep? Is there a general consensus on the usability of dwi data? Thank you.
Conor Owens-Walton
@ConorOW

Hey everyone. Sorry for the basic question - getting the following error when I try to do some fiber tracking:

ModuleNotFoundError: No module named 'dipy.tracking.local'.

All the other modules and packages for this task seem to import fine (dipy.reconst.dti; dipy.reconst.csdeconv etc)

Grateful for any advice!

Bramsh Q Chandio
@BramshQamar
I think it's dipy.tracking.local_tracking @ConorOW
1 reply
Chandana Kodiweera
@kodiweera

Hey everyone. Sorry for the basic question - getting the following error when I try to do some fiber tracking:

ModuleNotFoundError: No module named 'dipy.tracking.local'.

All the other modules and packages for this task seem to import fine (dipy.reconst.dti; dipy.reconst.csdeconv etc)

Grateful for any advice!
from dipy.tracking.local_tracking import LocalTracking

Chandana Kodiweera
@kodiweera

Hi @Garyfallidis and @arokem, I am fitting the DIPY msmt-csd model to my data; it has now been running for 2 days on 16 CPUs and is still not done. I wonder if I have to set a special flag within the fitting function to parallelize it. Does it automatically parallelize across the available CPUs, or do I have to specify that somehow? Can the msmt-csd model currently be fit on GPUs? Thank you.

I used the multiprocess package to parallelize voxels across the available CPUs for the msmt-csd model.

Shreyas Fadnavis
@ShreyasFadnavis
@kodiweera I'm curious -- did multiprocess work for you?
Also, the issue is CVXPY and not the code
Please use CVXPY versions 1.0.x for MSMT CSD
1.1.x of CVXPY seems to have undergone some changes that cause this issue
We are investigating it on our end! @karanphil -- please can you also see this?
Chandana Kodiweera
@kodiweera

1.1.x of CVXPY seems to have undergone some changes that cause this issue

@ShreyasFadnavis Which problem are you referring to? The fitting issue caused by CVXPY, or the multiprocessing one?

Yashvardhan Jain
@J-Yash
Hello, I have a registration workflow that is quite compute intensive. I was wondering if DIPY has the capability to use GPUs for image registration? I currently use Advanced Normalization Tools (ANTsPy) for my workflow but it lacks GPU support.
Chandana Kodiweera
@kodiweera

@ShreyasFadnavis My first multiprocessing attempt failed with a recursion error. It did not resolve even after increasing the recursion limit to 10000.

import numpy as np
from multiprocessing import Pool

# Flatten the 4D volume into a list of single-voxel signals.
vol_shape = dwi.shape[:-1]
n_voxels = np.prod(vol_shape)
voxel_by_dir = dwi.reshape(n_voxels, dwi.shape[-1])
voxel_array = [voxel_by_dir[i:i + 1, :] for i in range(voxel_by_dir.shape[0])]

def msmt_csd_pool(vox):
    return mcsd_model.fit(vox)

if __name__ == '__main__':
    with Pool() as p:
        mcsd_fit = p.map(msmt_csd_pool, voxel_array)
        p.close()
        p.join()

@ShreyasFadnavis Now the second solution I'm trying is simple. I chunked the volume into 20 parts and am processing them separately. I'm hoping it is possible to merge the outputs from the msmt-csd fitting.
Shreyas Fadnavis
@ShreyasFadnavis
@kodiweera Yes -- it should work, as long as you don't chunk along the 4th dimension, i.e. the gradient directions!
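(A minimal sketch of that chunk-and-merge idea, assuming the chunks are taken along a spatial axis and that the fit objects expose shm_coeff as in dipy.reconst.mcsd; the variable names are illustrative.)

import numpy as np

# Split along the first spatial axis only -- never along the gradient axis.
chunks = np.array_split(dwi_preprocessed_data, 20, axis=0)
fits = [mcsd_model.fit(chunk) for chunk in chunks]

# Re-assemble the per-chunk outputs, e.g. the SH coefficients, into one volume.
shm_coeff = np.concatenate([f.shm_coeff for f in fits], axis=0)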
Chandana Kodiweera
@kodiweera
@kodiweera Yes -- it should work, as long as you don't chunk along the 4th dimension, i.e. the gradient directions!
:)
Chandana Kodiweera
@kodiweera
@ShreyasFadnavis Futures (concurrent.futures) is the best way to distribute the slices across CPUs:

from concurrent.futures import ProcessPoolExecutor

def msmt_csd_pool(vol_slice):
    return mcsd_model.fit(vol_slice)

with ProcessPoolExecutor() as executor:
    results = list(executor.map(msmt_csd_pool, slice_array))

Trying it now. I'll let you know.
Chandana Kodiweera
@kodiweera
@ShreyasFadnavis It's a kind of happy moment. I could multiprocess voxels using the pathos multiprocessing package. As mcsd_model.fit is a class method, it's a little tricky to pickle.
But pathos uses dill and serializes it nicely.
It took about 1 hour per slice before; now it takes only six to seven minutes on 6 CPUs.
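(A sketch of that pathos-based setup; ProcessingPool comes from pathos.multiprocessing and relies on dill for serialization. slice_array and mcsd_model are assumed to exist as above.)

from pathos.multiprocessing import ProcessingPool

def msmt_csd_pool(vol_slice):
    return mcsd_model.fit(vol_slice)

# dill (used by pathos) can serialize the bound mcsd_model.fit call,
# which the standard library pickle often cannot.
pool = ProcessingPool()
results = pool.map(msmt_csd_pool, slice_array)
pool.close()
pool.join()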
Shreyas Fadnavis
@ShreyasFadnavis
That's good news indeed @kodiweera -- Feel free to explain what worked for you in the issue : dipy/dipy#2336
So that everyone who is using MSMT-CSD can benefit :)
Chandana Kodiweera
@kodiweera
pathos can be used to multiprocess any model's fit function (class method or not), not just msmt-csd.
Eleftherios Garyfallidis
@Garyfallidis
Nice one @kodiweera
Chandana Kodiweera
@kodiweera
Given the mask, what's the best tool to extract the region covered by the mask?
Ariel Rokem
@arokem
@kodiweera : if mask is a binary 3d array (True/False for within/outside the region of interest) and data is a 4D array containing diffusion data, then data[np.where(mask)] should be a 2D array with each row being a voxel and each column being a direction of measurement.
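(A small illustration of that indexing pattern; mask and data are placeholders for your own arrays.)

import numpy as np

mask = mask.astype(bool)             # 3D boolean array, True inside the region of interest
roi_signals = data[np.where(mask)]   # equivalent to data[mask]
# roi_signals.shape == (number of voxels inside the mask, number of measurements)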