Conor Owens-Walton
@ConorOW

Hey everyone. Sorry for the basic question - getting the following error when I try to do some fiber tracking:

ModuleNotFoundError: No module named 'dipy.tracking.local'.

All the other modules and packages for this task seem to import fine (dipy.reconst.dti; dipy.reconst.csdeconv etc)

Grateful for any advice!

Bramsh Q Chandio
@BramshQamar
I think it's dipy.tracking.local_tracking @ConorOW
Chandana Kodiweera
@kodiweera

Hey everyone. Sorry for the basic question - getting the following error when I try to do some fiber tracking:

ModuleNotFoundError: No module named 'dipy.tracking.local'.

All the other modules and packages for this task seem to import fine (dipy.reconst.dti; dipy.reconst.csdeconv etc)

Grateful for any advice!
from dipy.tracking.local_tracking import LocalTracking
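For context, dipy.tracking.local was split up in DIPY 1.0+; a rough sketch of the replacement imports (exact names may vary slightly by version):

from dipy.tracking.local_tracking import LocalTracking
from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion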

Chandana Kodiweera
@kodiweera

Hi @Garyfallidis and @arokem, I am fitting the DIPY msmt-csd model to my data and it has now been running for 2 days on 16 CPUs and is still not done. I wonder if I have to set a special flag within the fitting function to parallelize it. Does it automatically parallelize across the available CPUs, or do I have to specify that somehow? Can the msmt-csd model currently be fit on GPUs? Thank you.

I used the multiprocess package to parallelize over voxels across the available CPUs for the msmt-csd model.

Shreyas Fadnavis
@ShreyasFadnavis
@kodiweera I'm curious -- did multiprocess work for you?
Also, the issue is CVXPY and not the code
Please use CVXPY versions 1.0.x for MSMT CSD
1.1.x of CVXPY seems to have undergone some changes that cause this issue
We are investigating it on our end! @karanphil -- can you please also take a look?
Chandana Kodiweera
@kodiweera

1.1.x of CVXPY seems to have undergone some changes that cause this issue

@ShreyasFadnavis Which problem are you referring to? The fitting issue caused by CVXPY, or the multiprocessing one?

Yashvardhan Jain
@J-Yash
Hello, I have a registration workflow that is quite compute intensive. I was wondering if DIPY has the capability to use GPUs for image registration? I currently use Advanced Normalization Tools (ANTsPy) for my workflow but it lacks GPU support.
Chandana Kodiweera
@kodiweera

@ShreyasFadnavis My first attempt with multiprocessing failed with a recursion error. It is not resolved even after increasing the recursion limit to 10000.
import numpy as np
from multiprocessing import Pool

# reshape the 4D DWI volume into one row per voxel (columns = directions)
vol_shape = dwi.shape[:-1]
n_voxels = np.prod(vol_shape)
voxel_by_dir = dwi.reshape(n_voxels, dwi.shape[-1])
voxel_array = [voxel_by_dir[i:i + 1, :] for i in range(voxel_by_dir.shape[0])]

def msmt_csd_pool(vox):
    return mcsd_model.fit(vox)

if __name__ == '__main__':
    with Pool() as p:
        mcsd_fit = p.map(msmt_csd_pool, voxel_array)
        p.close()
        p.join()

@ShreyasFadnavis The second approach I'm trying is simpler: I chunked the volume into 20 parts and am processing them separately. I'm hoping it is possible to merge the outputs from the msmt-csd fitting.
Shreyas Fadnavis
@ShreyasFadnavis
@kodiweera Yes -- it should work, as long as you don't chunk along the 4th dimension, i.e. the gradient directions!
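A rough sketch of that chunk-and-merge idea (assuming the dwi array and mcsd_model from the snippets above, and that the fit object exposes per-voxel outputs such as volume_fractions):

import numpy as np

# split along a spatial axis (here z); never along the last, gradient axis
chunks = np.array_split(dwi, 20, axis=2)
fits = [mcsd_model.fit(chunk) for chunk in chunks]

# merge a per-voxel map back into a single volume along the same axis
volume_fractions = np.concatenate([f.volume_fractions for f in fits], axis=2)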
Chandana Kodiweera
@kodiweera
@kodiweera Yes -- it should work, as long as you don't chunk along the 4th dimension, i.e. the gradient directions!
:)
Chandana Kodiweera
@kodiweera
@ShreyasFadnavis Futures3 is the best way to distribute the slices across CPUs:

from concurrent.futures import ProcessPoolExecutor

def msmt_csd_pool(sl):
    # fit one slice with the previously built MSMT-CSD model
    return mcsd_model.fit(sl)

with ProcessPoolExecutor() as executor:
    results = list(executor.map(msmt_csd_pool, slice_array))

Trying it now. I'll let you know.
Chandana Kodiweera
@kodiweera
@ShreyasFadnavis It's kind of a happy moment: I could multiprocess voxels using the pathos multiprocessing package. As mcsd_model.fit is a class method, it's a little tricky to pickle.
But pathos uses dill and serializes it nicely.
Before, it took about 1 hour for one slice; now it takes only six to seven minutes on 6 CPUs.
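A minimal sketch of that pathos approach (assuming mcsd_model from above and slice_array as a list of per-slice data arrays; the pool size is an example value):

from pathos.multiprocessing import ProcessingPool

def fit_slice(sl):
    # dill (used by pathos) can serialize the bound fit method
    return mcsd_model.fit(sl)

pool = ProcessingPool(nodes=6)
slice_fits = pool.map(fit_slice, slice_array)
pool.close()
pool.join()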
Shreyas Fadnavis
@ShreyasFadnavis
That's good news indeed @kodiweera -- Feel free to explain what worked for you in the issue : dipy/dipy#2336
So that everyone who is using MSMT-CSD can benefit :)
Chandana Kodiweera
@kodiweera
pathos can be used to multiprocess any model function (class method or not), not just msmt-csd.
Eleftherios Garyfallidis
@Garyfallidis
Nice one @kodiweera
Chandana Kodiweera
@kodiweera
Given the mask, what's the best tool to extract the region covered by the mask?
Ariel Rokem
@arokem
@kodiweera : if mask is a binary 3d array (True/False for within/outside the region of interest) and data is a 4D array containing diffusion data, then data[np.where(mask)] should be a 2D array with each row being a voxel and each column being a direction of measurement.
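A minimal sketch of that indexing (assuming mask is a boolean 3D array and data the 4D diffusion volume):

import numpy as np

roi_data = data[np.where(mask)]
# roi_data.shape == (number_of_voxels_in_mask, number_of_directions)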
Chandana Kodiweera
@kodiweera

@kodiweera : if mask is a binary 3d array (True/False for within/outside the region of interest) and data is a 4D array containing diffusion data, then data[np.where(mask)] should be a 2D array with each row being a voxel and each column being a direction of measurement.

@arokem Thanks. Also, for 3D: fslmaths <input> -mas <mask> <output>

cjwbeyond
@cjwbeyond
Hello, why is the performance lower on Linux than on macOS?
Any ideas on how to avoid the performance issue?
Shreyas Fadnavis
@ShreyasFadnavis
Hi @cjwbeyond! Can you explain a little more? Which algorithm/step is slower on Linux compared to macOS?
rosella1234
@rosella1234
Hi. I would like to know whether DIPY also has the RESTORE method for diffusion kurtosis fitting, since I can only find it for DTI in the example documentation. Thanks!
Serge Koudoro
@skoudoro

Hello @/all ,

Dr. Leevi Kerkelä (UCL) will discuss today (Thursday 15 April at 1pm EST / 7pm CET / 10am PT) his project on integrating Q-Space Trajectory Imaging in DIPY.

The zoom link for the open meeting is https://iu.zoom.us/j/84926066336.

We hope to see you online!

Cheers !

Chandana Kodiweera
@kodiweera
Hi all, are you aware of any software that can do automated DWI QC? We are looking for something to bypass visual inspection, as it's a very large dataset. Thank you!
Conor Owens-Walton
@ConorOW

Hi DIPY team. I wanted to ask about a memory error I am getting when running dipy_track on a Linux server:

File "~/cowenswalton/conda-envs/envs/buan_env/lib/python3.8/site-packages/dipy/tracking/utils.py", line 881, in transform_tracking_output
    yield np.dot(sl, lin_T) + offset
MemoryError

Super grateful for any advice, or for pointers to where I can find some.

Conor Owens-Walton
@ConorOW
OK I am trying to run dipy_track again on our grid but requesting a bit more memory. Hopefully this works out.
Ariel Rokem
@arokem
@ShreyasFadnavis : what do you think about running patch2self on data that has already been denoised using another method (mp-pca in this case)?
Shreyas Fadnavis
@ShreyasFadnavis
@arokem That is a great question! It is definitely possible and should work. MPPCA, for instance, does a local low-rank approximation, which does not violate the assumption that Patch2Self relies on. In fact, P2S can also be applied to data that has undergone other types of pre-processing, such as eddy-current, motion, or susceptibility correction. As long as the pre-processing step does not start adding correlation artefacts across volumes, Patch2Self can be applied afterwards!
Also, from a denoising standpoint, Patch2Self (standalone) should do as well as Patch2Self + MPPCA. Any noise that persists after MPPCA will be suppressed by P2S anyway.
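A minimal sketch of chaining the two denoisers in DIPY (assuming data and bvals are already loaded; the parameter values are illustrative, not recommendations):

from dipy.denoise.localpca import mppca
from dipy.denoise.patch2self import patch2self

# MP-PCA first (local low-rank), then Patch2Self on its output
denoised_mppca = mppca(data, patch_radius=2)
denoised_p2s = patch2self(denoised_mppca, bvals, model='ols')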
Ariel Rokem
@arokem
Thanks @ShreyasFadnavis!
Gemmavdv
@Gemmavdv
Hello all,
Does anyone have experience with tracking on DSI-reconstructed data? I have a dataset of 145x174x145, so I can't fit all the data at once since that uses too much RAM. I tried using a memmap, but this still needs too much memory (134 GB).
I can construct the model and fit the data to it using nindex and saving to a memmap, but using that memmap in, e.g., the ClosestPeakDirectionGetter causes a memory error. Does anyone know a solution to this? Thank you very much in advance!
rosella1234
@rosella1234
Hi all,
I have a theoretical question about the minimum number of independent directions for DKI. We are acquiring neonatal spinal cord diffusion images at the hospital, but we are limited by acquisition time constraints: the clinical protocol is already quite long, and we have to put the spine DKI sequence at the end since it is for research. We set up a protocol of 6 b=0, 13 b=700 and 13 b=2100 volumes, with a duration of 4 min 30 s thanks to the multiband technique. I know this is low for a DKI protocol, but increasing to 15 directions (the theoretical minimum required for DKT estimation) adds 1 minute, and it often happens that the neonate wakes up from the light sedation right at the end of that sequence, so the clinicians opted for the 13-direction sequence.
Now I would like a suggestion about which choice to make. I have read about protocols using just 9 directions per b-value for fast DKI. If I use 13 directions, would the DKI measures I compute in DIPY still be reliable? Or should I use another estimation method with so few directions?
Thank you very much in advance.
Matt Cieslak
@mattcieslak
Hi @Gemmavdv I've been using DSI for a long time, have you tried DSI Studio?
Ariel Rokem
@arokem
@rosella1234 : theoretically, I believe that this should be enough to estimate DKI. I would recommend using the MSDKI method, which should generally be more robust in low-SNR situations: https://dipy.org/documentation/1.0.0./examples_built/reconst_msdki/
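A minimal sketch of the MSDKI fit mentioned above (assuming gtab, data and mask are already in place; attribute names follow the linked tutorial):

import dipy.reconst.msdki as msdki

msdki_model = msdki.MeanDiffusionKurtosisModel(gtab)
msdki_fit = msdki_model.fit(data, mask=mask)

msk = msdki_fit.msk   # mean signal kurtosis map
msd = msdki_fit.msd   # mean signal diffusivity map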
Eleftherios Garyfallidis
@Garyfallidis
@Gemmavdv something is unexpected; there should be no issues in tracking with DSI data. Can you explain in more detail what you are doing here? Also, the shape you report is 3D, but your data should be 4D.
rosella1234
@rosella1234
@arokem thank you! So do you suggest extracting just MSDKI as a DKI metric, and not the standard ones (MK, AK, RK, KFA, ...)?
Gemmavdv
@Gemmavdv
Thank you @mattcieslak and @Garyfallidis . I solved the problem, I didn't actually need to use memmaps. Sorry to waste your time!
Serge Koudoro
@skoudoro

Hello @/all ,

Quick reminder for the meeting today at 1pm EST / 7pm CET / 10am PT!

@gabknight will talk about the new Tractography competition.
We will have a brief talk about our Tracking framework
@skoudoro will give a quick overview of the upcoming release. Let us know if there is any request.
Cheers!

ps: same link as usual: https://iu.zoom.us/j/84926066336

Steven Meisler
@smeisler
I am trying to run Recobundles on an MrTrix tractogram (.tck) using only command line tools. I am able to use SLR to get it to MNI space as a .trk. Then I segment that into bundles (recobundles), but when I try to move it back to native space (labelbundles) I get an error since my original tck file does not have a proper header like a trk file. Is there a way to include a reference image in the labelbundles command line function? If so, should I use the diffusion image, t1 image, or something else? Thanks.
Bramsh Q Chandio
@BramshQamar

Hi @smeisler
Currently, dipy_labelsbundles does not take a reference file as input. However, you can write a simple Python script to bring bundles into native space. Here's the bundle segmentation tutorial: https://dipy.org/documentation/1.4.0./examples_built/bundle_extraction/#example-bundle-extraction
At the end of this tutorial, after extracting the bundle, we save the bundle in the native space. Something like this:

reco_af_l = StatefulTractogram(target[af_l_labels], target_header,
                               Space.RASMM)
save_trk(reco_af_l, "AF_L.trk", bbox_valid_check=False)

You can use the labels.npy file saved by the RecoBundles command line.
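A rough sketch of such a script (file names here are placeholders/assumptions; the .tck needs a reference image, e.g. the diffusion NIfTI, to be loaded):

import numpy as np
from dipy.io.stateful_tractogram import StatefulTractogram, Space
from dipy.io.streamline import load_tractogram, save_trk

labels = np.load("AF_L_labels.npy")                        # indices saved by dipy_recobundles
native = load_tractogram("whole_brain.tck", "dwi.nii.gz",  # native-space tractogram + reference
                         bbox_valid_check=False)
bundle_native = StatefulTractogram(native.streamlines[labels],
                                   native, Space.RASMM)
save_trk(bundle_native, "AF_L_native.trk", bbox_valid_check=False)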