Chandana Kodiweera
@kodiweera
Great.
Can't we save the whole object for later use, rather than just individual fields?
Shreyas Fadnavis
@ShreyasFadnavis
Hi @kodiweera ! An object is more like a blueprint of all the things it can contain. I understand you may want to save multiple attributes in the same variable. You can do that using a Python dictionary :)
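For example, a minimal sketch with pickle (the attribute names below are placeholders, not the actual DIPY API):

import pickle

# gather whichever outputs you want to keep in one dictionary
results = {
    "model_params": fit_model_params,  # placeholder variable
    "odf": fit_odf,                    # placeholder variable
}

with open("csd_results.pkl", "wb") as f:
    pickle.dump(results, f)

# reload in a later session
with open("csd_results.pkl", "rb") as f:
    results = pickle.load(f)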
Ariel Rokem
@arokem
You can "reconstitute your" CSDFit object from the model_parameters:
model = ConstrainedSphericalDeconvModel(gtab, response, sh_order=sh_order)
fit = ConstrainedSphericalDeconvFit(model, model_params)
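A minimal sketch of saving the parameters to disk and rebuilding the fit later, following the two lines above (gtab, response, sh_order and model_params are assumed to be defined as in that snippet):

import numpy as np
from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel

# persist the fitted parameters
np.save("model_params.npy", model_params)

# later: reload and reconstitute the fit as suggested above
model_params = np.load("model_params.npy")
model = ConstrainedSphericalDeconvModel(gtab, response, sh_order=sh_order)
fit = ConstrainedSphericalDeconvFit(model, model_params)  # class name taken from the message above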
Chandana Kodiweera
@kodiweera
Nice!
Serge Koudoro
@skoudoro
Hello all,
Due to an Electrical Outage on our server, the DIPY Website and the DIPY Workshop website will NOT be reachable tomorrow (03/23/2021) from 8 am - 5 pm EST.
Sorry about the inconvenience
Chandana Kodiweera
@kodiweera
save_trk(sft, "tractogram_pft.trk") : ERROR:StatefulTractogram:Voxel space values higher than dimensions. Why does this error occur?
Chandana Kodiweera
@kodiweera
I used: save_trk(sft, "tractogram_pft.trk", bbox_valid_check=False)
Now how can I remove invalid streamlines? Thanks in advance.
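One option, as a sketch: StatefulTractogram in recent DIPY versions exposes a remove_invalid_streamlines method, so something along these lines may work:

# drop streamlines whose coordinates fall outside the volume, then save with the check enabled
sft.remove_invalid_streamlines()
save_trk(sft, "tractogram_pft.trk")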
Chandana Kodiweera
@kodiweera

Hi @kodiweera ! An object is more like a blueprint of all the things it can contain. I understand you may want to save multiple attributes in the same variable. You can do that using a Python dictionary :)

I can store the whole object in a Jupyter notebook with the magic command '%store'.
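For example (IPython's %store magic; mcsd_fit is just an example variable name):

%store mcsd_fit      # persist the object in IPython's store
%store -r mcsd_fit   # restore it in a later session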

Chandana Kodiweera
@kodiweera
Solver Error: What is this error? It says the solution may be inaccurate. Should I worry?
mcsd_model = MultiShellDeconvModel(gtab, response_mcsd)
mcsd_fit = mcsd_model.fit(dwi_preprocessed_data[:,:,99:100,:])
C:\Users\user\anaconda3\lib\site-packages\cvxpy\problems\problem.py:1245: UserWarning: Solution may be inaccurate. Try another solver, adjusting the solver settings, or solve with verbose=True for more information.
warnings.warn(
Chandana Kodiweera
@kodiweera
PS E:\dipy> pip show cvxpy
Name: cvxpy
Version: 1.1.11
Steven Meisler
@smeisler
Hi all, I am wondering if anyone has a good explanation for why some software packages, such as AFQ, often have trouble reconstructing the right AF? Is it more physiological (the right AF not being as developed as the left due to left-sided language dominance) or methodological (e.g. crossing fibers or other factors lowering FA or CSD power; endpoint ROIs not well defined, etc.)? Really curious, because I have seen this phenomenon mentioned in papers, but to my knowledge there has not been a publication revolving around this issue.
JacquesStout
@JacquesStout
Hello everyone.
I had a bit of a strange issue. I have been using the function load_trk for a while to load and manipulate tractography files, but recently it has been unable to load the files properly.
These files are about 2 GB, and tractography files created in exactly the same way but with 1/100th of the file size load just fine. I therefore suspect some sort of memory issue, but it doesn't give me any specifics; it just begins to run the function and then stalls forever.
Is there any recommendation as to what to try in order to solve this particular problem?
Chandana Kodiweera
@kodiweera
@arokem Voxel size vs seed density: would 1 mm iso and seed density=2 vs. 2mm iso and seed density =1 produce the same sort of tracts? Is there a rule of thumb in choosing seed density with voxel size for the whole brain tractography? Thank you.
Chandana Kodiweera
@kodiweera

@arokem Voxel size vs seed density: would 1 mm iso and seed density=2 vs. 2mm iso and seed density =1 produce the same sort of tracts? Is there a rule of thumb in choosing seed density with voxel size for the whole brain tractography? Thank you.

@arokem I know now it's a silly question after thinking about it. Seed density=2 will have 8 seeds in the voxel of [2,2,2] grid. That's 8 seeds per 1 mm^3 ( 1 seed per (1/8) mm^3) if the voxel size is 1 mm iso. Got it!
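For reference, a minimal sketch of how seed density is usually specified (seeds_from_mask's exact signature varies between DIPY versions; seed_mask and affine are assumed to be defined):

from dipy.tracking.utils import seeds_from_mask

# density=2 places 2 seeds per axis per voxel, i.e. 2 x 2 x 2 = 8 seeds in every seed-mask voxel
seeds = seeds_from_mask(seed_mask, affine, density=2)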

Chandana Kodiweera
@kodiweera
Can you please recommend a good atlas for whole-brain connectivity analysis? Thank you.
Chandana Kodiweera
@kodiweera
Hi @ShreyasFadnavis How can I obtain a copy of 'cudipy_environment.yml' mentioned in the tutorial given by Gregory Lee on the last day of the DIPY workshop? Thank you.
Chandana Kodiweera
@kodiweera
Hi @arokem I did not know these existed: https://neurohackademy.org/ and https://github.com/neurohackademy/nh2020-curriculum. Posting here for others. When will the next course be? I am glad I found this! Thank you.
Shreyas Fadnavis
@ShreyasFadnavis
@kodiweera You can use this to get the requirements: https://github.com/dipy/cudipy/blob/2ef103c9b58b4dbc73ddd781ed385f8da673d493/requirements.txt for cudipy!
I don't think they differ -- Please correct me if I am wrong @grlee77!

PS E:\dipy> pip show cvxpy
Name: cvxpy
Version: 1.1.11

This is strange! Can you please try using CVXPY 1.0.31? That should resolve the issue. From CVXPY 1.1.x onwards, a lot of changes were made to their codebase.
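For example, downgrading with pip:

pip install "cvxpy==1.0.31"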

Shreyas Fadnavis
@ShreyasFadnavis

Hello everyone.
I had a bit of a strange issue. I have been using the function load_trk for a while to load and manipulate tractography files, but recently it has been unable to load the files properly.
These files are about 2 GB, and tractography files created in exactly the same way but with 1/100th of the file size load just fine. I therefore suspect some sort of memory issue, but it doesn't give me any specifics; it just begins to run the function and then stalls forever.
Is there any recommendation as to what to try in order to solve this particular problem?

@frheault this maybe a question for you!

Gregory R. Lee
@grlee77

Hi @ShreyasFadnavis How can I obtain a copy of 'cudipy_environment.yml' mentioned in the tutorial given by Gregory Lee on the last day of the DIPY workshop? Thank you.

See below for the contents of the file. I think I had also put it within a .zip file with the example notebooks I uploaded to Google drive along with the lecture slides. The version pasted below includes CuPy from the release channel as well, so you would not need to run the separate CuPy install command from the lecture slides. After CuPy 9 is officially released in late April, we can remove conda-forge/label/cupy_rc from the channels list in that file.

name: cudipy_demo
channels:
  - conda-forge/label/cupy_rc
  - conda-forge
dependencies:
  - python=3.9
  - numpy>=1.20
  - scipy>=1.6
  - scikit-image>=0.18
  - cupy>=9.0.0b3
  - matplotlib
  - cython
  - h5py
  - pytest
  - tqdm
  - ipython
  - jupyterlab
  - nibabel
  - packaging
  - dask
  - fastrlock
  - pip
  - pyparsing
  - pip:
    - dipy
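To create and activate the environment from that file (standard conda commands; the environment name comes from the name: field above):

conda env create -f cudipy_environment.yml
conda activate cudipy_demo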
Chandana Kodiweera
@kodiweera
Hi @grlee77 . Thank you!
Chandana Kodiweera
@kodiweera
Hi @Garyfallidis @RafaelNH @arokem @gabknight Simulated data: after the workshop, I am building a couple of pipelines. I would like to test them out with some simulated data. Are there simulated data (a simulated phantom) publicly available to test DKI metrics / msmt-csd PFT-based tractography, or any recommended dataset to test them on? Any suggestion on how to validate my pipelines is appreciated. Thank you!
Chandana Kodiweera
@kodiweera
Hi @ShreyasFadnavis as some of the processes can be run using cuda via cudipy, are you going to incorporate this capability in qsiprep?
Shreyas Fadnavis
@ShreyasFadnavis
@kodiweera : I am guessing this is a question for @mattcieslak -- But I guess, in any case, it will take time to do this on our end!
Chandana Kodiweera
@kodiweera
Hi @Garyfallidis and @arokem I am fitting the DIPY msmt-csd model to my data and it has now been running for 2 days on 16 CPUs and is still not done. I wonder if I have to set a special flag within the fitting function to parallelize it. Does it automatically parallelize across the available CPUs, or do I have to specify that somehow? Can the msmt-csd model currently be fit on GPUs? Thank you.
Matt Cieslak
@mattcieslak
@kodiweera we don't currently have any DIPY tractography in qsiprep but we definitely want to add it
the qsiprep container comes with CUDA enabled, so if it's accessible through the tractography implementation it will be usable on any CUDA-enabled computer
Chandana Kodiweera
@kodiweera

the qsiprep container comes with CUDA enabled, so if it's accessible through the tractography implementation it will be usable on any CUDA-enabled computer

Thank you. I am also looking into bedpostx_gpu. It wouldn't be a bad idea to add that too.

Chandana Kodiweera
@kodiweera

@arokem This is what I'm fitting:

# msmt-csd model and its fitting
mcsd_model = MultiShellDeconvModel(gtab, response_mcsd)
mcsd_fit = mcsd_model.fit(dwi_preprocessed_data)

Does that use all available cpus by default?
Chandana Kodiweera
@kodiweera
Hi @mattcieslak @ShreyasFadnavis Usability of data: are there threshold values for some metrics to accept/reject data after qsiprep? Is there a general consensus on the usability of dwi data? Thank you.
Conor Owens-Walton
@ConorOW

Hey everyone. Sorry for the basic question - getting the following error when I try to do some fiber tracking:

ModuleNotFoundError: No module named 'dipy.tracking.local'.

All the other modules and packages for this task seem to import fine (dipy.reconst.dti; dipy.reconst.csdeconv etc)

Grateful for any advice!

Bramsh Q Chandio
@BramshQamar
I think it's dipy.tracking.local_tracking @ConorOW
Chandana Kodiweera
@kodiweera

Hey everyone. Sorry for the basic question - getting the following error when I try to do some fiber tracking:

ModuleNotFoundError: No module named 'dipy.tracking.local'.

All the other modules and packages for this task seem to import fine (dipy.reconst.dti; dipy.reconst.csdeconv etc)

Grateful for any advice!
from dipy.tracking.local_tracking import LocalTracking

Chandana Kodiweera
@kodiweera

Hi @Garyfallidis and @arokem I am fitting the DIPY msmt-csd model to my data and it has now been running for 2 days on 16 CPUs and is still not done. I wonder if I have to set a special flag within the fitting function to parallelize it. Does it automatically parallelize across the available CPUs, or do I have to specify that somehow? Can the msmt-csd model currently be fit on GPUs? Thank you.

I used the multiprocess package to parallelize voxel fitting across the available CPUs for the msmt-csd model.

Shreyas Fadnavis
@ShreyasFadnavis
@kodiweera I'm curious -- did multiprocess work for you?
Also, the issue is CVXPY and not the code
Please use CVXPY versions 1.0.x for MSMT CSD
1.1.x of CVXPY seem to have undergone some changes that cause this issue
We are investigating it on our end! @karanphil -- please can you also see this?
Chandana Kodiweera
@kodiweera

1.1.x of CVXPY seem to have undergone some changes that cause this issue

@ShreyasFadnavis What problem are you referring to? The fitting issue caused by CVXPY, or the multiprocessing one?

Yashvardhan Jain
@J-Yash
Hello, I have a registration workflow that is quite compute intensive. I was wondering if DIPY has the capability to use GPUs for image registration? I currently use Advanced Normalization Tools (ANTsPy) for my workflow but it lacks GPU support.
Chandana Kodiweera
@kodiweera

@ShreyasFadnavis My first attempt at multiprocessing failed with a recursion error. It did not go away even after increasing the recursion limit to 10000.
import numpy as np
from multiprocessing import Pool  # or "from multiprocess import Pool" if using the multiprocess fork

# reshape the 4D volume into a list of single-voxel signals
vol_shape = dwi.shape[:-1]
n_voxels = np.prod(vol_shape)
voxel_by_dir = dwi.reshape(n_voxels, dwi.shape[-1])
voxel_array = [voxel_by_dir[i:i + 1, :] for i in range(voxel_by_dir.shape[0])]

def msmt_csd_pool(vox):
    return mcsd_model.fit(vox)

if __name__ == '__main__':
    with Pool() as p:
        mcsd_fit = p.map(msmt_csd_pool, voxel_array)
        p.close()
        p.join()
@ShreyasFadnavis The second solution I'm trying is simpler. I chunked the volume into 20 parts and am processing them separately. I'm hoping it is possible to merge the outputs from the msmt-csd fitting.
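A minimal sketch of that chunking approach (it assumes the per-chunk fit objects expose a per-voxel array, such as SH coefficients, that can be concatenated; shm_coeff is an assumption here):

import numpy as np

# split the volume into 20 chunks along the z-axis and fit each chunk separately
chunks = np.array_split(dwi_preprocessed_data, 20, axis=2)
chunk_fits = [mcsd_model.fit(chunk) for chunk in chunks]

# merge a per-voxel output back into a single volume
merged = np.concatenate([f.shm_coeff for f in chunk_fits], axis=2)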