Ariel Rokem
@rosella1234 : theoretically, I believe that this should be enough to estimate DKI. I would recommend using the msdki method, which should be generally more robust to low SNR situations https://dipy.org/documentation/1.0.0./examples_built/reconst_msdki/
Eleftherios Garyfallidis
@Gemmavdv something is unexpected. There should be no issues in tracking with DSI data. But can you explain more what you do here? Also the shape of the data is 3D but your data should be 4D.
@arokem thank you! So do you suggest extracting just MSDKI as a DKI metric, and not the standard ones (MK, AK, RK, KFA, ...)?
Thank you @mattcieslak and @Garyfallidis . I solved the problem, I didn't actually need to use memmaps. Sorry to waste your time!
Serge Koudoro

Hello @/all ,

Quick reminder for the meeting today at 1pm EST / 7pm CET / 10am PT!

@gabknight will talk about the new Tractography competition.
We will have a brief talk about our Tracking framework
@skoudoro will make a quick overview of the future Release. Let us know if there is any request.

ps: same link as usual: https://iu.zoom.us/j/84926066336

Steven Meisler
I am trying to run RecoBundles on an MRtrix tractogram (.tck) using only command-line tools. I am able to use SLR to get it to MNI space as a .trk. Then I segment that into bundles (recobundles), but when I try to move it back to native space (labelbundles) I get an error, since my original .tck file does not have a proper header like a .trk file. Is there a way to include a reference image in the labelbundles command-line function? If so, should I use the diffusion image, the T1 image, or something else? Thanks.
Bramsh Q Chandio

Hi @smeisler
Currently, dipy_labelsbundles does not take reference files as input. However, you can write a simple Python script to bring bundles into native space. Here's the bundle segmentation tutorial https://dipy.org/documentation/1.4.0./examples_built/bundle_extraction/#example-bundle-extraction
At the end of this tutorial, after extracting the bundle, we save it in native space. Something like this:

reco_af_l = StatefulTractogram(target[af_l_labels], target_header,
                               Space.RASMM)
save_trk(reco_af_l, "AF_L.trk", bbox_valid_check=False)

You can use the labels.npy file saved by RecoBundles command line.
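For instance, applying the saved labels to the native-space streamlines can be a one-liner. A minimal sketch (the streamline arrays and the labels values here are hypothetical stand-ins for your own tractogram and the .npy file saved by the RecoBundles command line):

```python
import numpy as np

# Hypothetical stand-ins: three native-space streamlines and the label
# indices that the RecoBundles command line saves to an .npy file.
target_streamlines = [np.zeros((10, 3)), np.ones((10, 3)), np.full((10, 3), 2.0)]
labels = np.array([0, 2])  # in practice: np.load('labels.npy')

# Index the native-space tractogram with the labels to get the bundle.
native_bundle = [target_streamlines[i] for i in labels]
```

The extracted bundle can then be wrapped in a StatefulTractogram and saved, as in the tutorial snippet above.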

Steven Meisler
@BramshQamar got it, thanks! Ended up using a nibabel implementation https://github.com/nipy/nibabel/blob/master/nibabel/cmdline/tck2trk.py

When trying to apply cross-validation for the DKI model on 7T adult HCP data, the following error arises:

(base) synapsi@charlie:/media/synapsi/dkeTest/new_version/code$ python goodness_of_fit.py
/home/synapsi/anaconda3/lib/python3.8/site-packages/dipy/core/gradients.py:295: UserWarning: b0_threshold (value: 50) is too low, increase your b0_threshold. It should be higher than the lowest b0 value (55.0).
warn("b0_threshold (value: {0}) is too low, increase your \
Data & Mask Loaded!
Traceback (most recent call last):
File "goodness_of_fit.py", line 21, in <module>
dki_cc = xval.kfold_xval(dki_model, cc_vox, 2)
File "/home/synapsi/anaconda3/lib/python3.8/site-packages/dipy/reconst/cross_validation.py", line 107, in kfold_xval
raise ValueError(msg)
ValueError: np.mod(143, 2) is 1
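That ValueError is kfold_xval's way of saying that the number of diffusion-weighted volumes (143 here) must be divisible by the number of folds. A quick check of which small fold counts are valid (a sketch; since 143 = 11 x 13, only 11 works among small fold counts):

```python
n_dwi = 143  # diffusion-weighted volumes in this acquisition
# Fold counts that divide n_dwi evenly avoid the np.mod ValueError.
valid_folds = [k for k in range(2, 12) if n_dwi % k == 0]
```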

Hi all, is there a way to visualize the Diffusion Kurtosis Tensor 3D geometry in a single voxel in dipy? Thank you
Hi, I sent this question as a mail but it bounced. Does anyone know how to do this? Thanks!

I'm computing affine transforms for co-registering anatomical MRI scans.

Sometimes I would like to re-use those transforms, so in a case similar to the example here

my final transform is the result of this command:
rigid = affreg.optimize(static, moving, transform, params0,
                        static_affine, moving_affine,
                        starting_affine=translation.affine)
final = rigid
after first having aligned the centres of gravity.

After this command I can call
resampled = final.transform(moving)
and then I get the desired result.

What I don't understand is how to store the transform and re-use it later.
I have tried to just save the object 'final' with
np.savez('transformation.npz', final)
and then load it later with
with np.load('transformation.npz', allow_pickle=True) as npzfile:
    final = npzfile['arr_0']
and then re-run
resampled = final.transform(moving)
but that returns the following error:
resampled = final.transform(moving)
AttributeError: 'numpy.ndarray' object has no attribute 'transform'

So what I understand from this is that the output 'rigid' from the affreg.optimize() command has a 'transform' attribute, but after saving and reloading it, it no longer does.

Is there a(nother) way to save and (re)load transforms?


Hi all. I have some problems when fitting the free-water component. Does anyone know how to fix this? Thanks a lot!

I have been using fwdtimodel and fwdtifit to map the free-water compartment.

The test data was downloaded from the HCP, with 288 directions and 2 b-values.
The code is as follows:
fwdtimodel = fwdti.FreeWaterTensorModel(gtab)
fwdtifit = fwdtimodel.fit(data, mask=mask)
fwvolume = fwdtifit.f

But I get some warnings:
/usr/local/lib64/python3.6/site-packages/scipy/optimize/minpack.py:475: RuntimeWarning: Number of calls to function has reached maxfev = 1800.
warnings.warn(errors[info][0], RuntimeWarning)
/usr/local/lib64/python3.6/site-packages/dipy/reconst/fwdti.py:311: RuntimeWarning: overflow encountered in exp
SIpred = (1-FS)*np.exp(np.dot(W, all_new_params)) + FS*S0*SFW.T

/usr/local/lib64/python3.6/site-packages/dipy/reconst/fwdti.py:312: RuntimeWarning: overflow encountered in square
F2 = np.sum(np.square(SI - SIpred), axis=0)

/usr/local/lib64/python3.6/site-packages/dipy/reconst/fwdti.py:458: RuntimeWarning: overflow encountered in exp
y = (1-f) * np.exp(np.dot(design_matrix, tensor[:7])) + \

I wonder whether these warnings affect the output images, and whether there is any way to fix them?

Many thanks for your help!

David Romero-Bascones
Hi @amwink

To save/load the transformation object you can try using pickle instead of numpy:
import pickle

with open('transform.obj', 'wb') as filehandler:
    pickle.dump(final, filehandler)

with open('transform.obj', 'rb') as filehandler:
    final = pickle.load(filehandler)


Hi @drombas, thanks very much! pickle does the trick :)

I thought that np.save with allow_pickle would work, assuming that it would do the same as pickle (it doesn't), or that using JSON would be possible -- unfortunately it isn't (I tried to do the nested serialisation but failed). The reason I tried JSON is that many people have reservations about pickle.

For me it does the job perfectly.

Serge Koudoro
Also, @amwink, you can look at the functions write_mapping and read_mapping in https://github.com/dipy/dipy/blob/master/dipy/align/_public.py#L218
from dipy.align import write_mapping, read_mapping. We use the NIfTI format to save this.
Thanks @skoudoro ! I guess you can save the results of a rigid or affine registration as a mapping (though it would be a bit of overkill to store them in 3 3D images when 6 or 16 floats would do :). The versions of read_mapping and write_mapping for rigid and affine transforms would just need to store 3 translations + 3 rotations or a homogeneous coordinate transformation, respectively?
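Since a rigid or affine result is fully described by its 4x4 homogeneous matrix, one lightweight alternative is to save just final.affine. A sketch (the identity matrix stands in for the fitted result, and the file name is arbitrary):

```python
import os
import tempfile
import numpy as np

# Stand-in for final.affine, the 4x4 homogeneous matrix that an
# AffineMap returned by affreg.optimize exposes.
affine = np.eye(4)

path = os.path.join(tempfile.mkdtemp(), 'rigid_affine.npy')
np.save(path, affine)   # 16 floats on disk instead of three 3D volumes
loaded = np.load(path)

# Later, the map can be rebuilt (sketch, not executed here):
# from dipy.align.imaffine import AffineMap
# final = AffineMap(loaded, static.shape, static_affine,
#                   moving.shape, moving_affine)
```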
Hi, is DBSI fitting available in dipy? and, if not, are there any plans to include it? Thanks!
Dear all, I want to calculate the volume of a fiber bundle in each hemisphere separately. Is it possible to do that in DIPY? If yes, how can I do it?
I will appreciate any help and any ideas.
Thank you very much.
Priscilla Galinié
Hello everyone! I am currently trying to use dipy to process DTI data, I would like to do fiber tracking. Because it is DTI data and not HARDI data, I want to use the FACT method. I read in previous messages that I can use LocalTracking(..., fixedstep = False) and then provide the DTI directions as a direction getter. And there is my issue... I tried using tenfit.directions or even tenfit.evecs but it doesn't work. Maybe it is simple but I have been looking for a while and I don't find a solution.
I would really appreciate your help, thank you in advance for your time.
David Romero-Bascones
Hi @Troudi-Abir ,
As a simple approach, assuming you have already reconstructed the streamlines, you could try:
  • Use density_map to build a density map of each bundle in voxel space
  • Binarize the density map to obtain a bundle mask (set all voxels != 0 to 1)
  • Compute the bundle volume as the product between the number of voxels in the mask and the volume of a single voxel
hello @drombas thank you very much for your help.
I want to know how to calculate the volume in each hemisphere; in other words,
how can I divide the brain to calculate the volume of the bundle in each hemisphere separately?
Hello again, could you please provide me with an example of calculating mean_curvature with DIPY? I would appreciate your help. Thank you
Yun Wang
Hi All, I wonder if the slides in the workshop are available ?
Serge Koudoro
Hi @wangyuncolumbia, no, they are not available, but you can always ask the speaker directly by sending an email.
Hi @Troudi-Abir, I recommend looking at the documentation for mean_curvature; it should be straightforward: https://dipy.org/documentation/1.4.0./reference/dipy.tracking/#mean-curvature
Dear @skoudoro thank you very much for the information, It's well appreciated.
Ariel Rokem
Is there a way to save png files of a particular view from horizon? I think it used to be possible to save out a png with the s key, but perhaps that feature was deprecated (?). Also, is there any way to control the resolution of the output?
I come back to the mean_curvature function. I want to calculate the mean curvature of a bundle.tck, so I wrote this command: MC = mean_curvature(bundle),
and I get an error saying the shapes are not compatible. I could not apply this function; could you help me please?
Dear All,
Bruno Moretti
Hi all! I don’t work with MRI images but with confocal microscopy 3D images. I have images that I need to align in 3D, and I think that an approach such as the one used in DIPY affine registration should work well. However, I’m struggling to use DIPY with my images, since the format of my images is different (they're just regular 3D (x,y,z) numpy arrays). In particular, when I try to call functions such as transform_centers_of_mass, I’m lacking the static_grid2world and moving_grid2world parameters. Is there any way to obtain the grid2world parameters from the raw, non-MRI images? Thanks a lot!
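If the arrays carry no scanner geometry, the grid2world matrices can simply be built by hand: the identity if you are happy working in voxel coordinates, or a diagonal affine encoding the (possibly anisotropic) voxel spacing. A sketch with a hypothetical confocal spacing:

```python
import numpy as np

# Hypothetical voxel spacing of the confocal stack, e.g. in microns.
spacing = (0.5, 0.5, 1.0)

# grid2world maps voxel indices to physical coordinates; a diagonal
# affine is enough when there is no rotation or shear.
static_grid2world = np.diag(list(spacing) + [1.0])
moving_grid2world = static_grid2world.copy()
```

These can then be passed wherever DIPY's registration functions expect static_grid2world / moving_grid2world.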
@venkatbits Dear All, I am unable to find a tutorial or documentation page for reconstructing fiber tracts with a deterministic approach from a simple 12-direction diffusion 'tensor' fit. Can anyone kindly give me the steps for deterministic tractography after the 'tensor fit' to the DWI images?
Hello everyone,
I just started exploring the dipy library, starting from the tutorial.
So far I understand every step that is explained, except for the white matter mask part.
As I tried to use my own data to perform the tractography, I cannot use the labels and the white matter mask provided by the example in the tutorial (the one from Stanford), as the dimensions are very different.
I have also tried just using a whole-brain mask produced by the median_otsu step, but the resulting figure was not satisfactory for a healthy participant (mostly only fibers in the right hemisphere were tracked). I suspect this mask could be the problem, but I'm aware that these two problems could be independent of one another, so any help would be much appreciated!
Thank you
Dear all, I'm trying to develop a pipeline that uses dipy functions (which have been very useful so far). But I am a bit confused by the adc function in dti.py, since it uses the sphere362 in the data folder instead of the acquisition b-values and vector directions. Is there any way to get the ADC values from a monoexponential model fit as in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4709925/ , using the bvecs provided during acquisition instead of sphere directions (and without b-values converted to 1 as in the function), and thus end up with an Xvoxels x Yvoxels x Slices x NumberOfAcquisitionDirections NIfTI file instead of Xvoxels x Yvoxels x Slices x 362, as happens by default because of the sphere? Thanks in advance, and I hope the question isn't confusing. I was just surprised, since the MD, FA etc. parametric maps in the dti tutorial do use the bval/bvec from your scan, unlike the adc function.
Ariel Rokem

@jarcosh : yes! Instead of the 362-vertex sphere, you can create your own sphere:

from dipy.core.sphere import Sphere
my_sphere = Sphere(xyz=gtab.bvecs[gtab.b0s_mask])

Where gtab is a gradient table based on your data.

@arokem first off, thanks for the very quick response. I hope my reply isn't overly long, or any remaining problems down to the fact that it's been a few years since I last routinely did linear algebra :-)

I did what you advised. Strangely enough, the code exactly as you sent it left me with a gtab object with the True and False in b0s_mask inverted (so the sphere generated was a nonsensical 0 0 due to the two basal images), but once I reverted that it was indeed a sphere with the xyz taking our directions. apparent_diffusion_coef(tenfit.quadratic_form, my_sphere) now produces far more reasonable voxel values than my clumsy attempt yesterday to "fix" the function to force our gtab in.

However, the output is still Xvoxels x Yvoxels x Slices x 14 (since we do preclinical research, it's two b-values for each of our 7 directions) instead of Xvoxels x Yvoxels x Slices x 7 directions, so I'm afraid the information we get from using two b-values is lost.

Additionally, I can't help but notice that apparent_diffusion_coef does away with the original bvals, using np.ones to get an all-ones vector. Trying to document myself, I see that a design matrix equals a column full of 1s plus xn multiplied by intercept and regression-line slope, but return D or return D.T gives the column/row of 1s in the 'wrong' place (for a last column or row), and in any case doing design_matrix(gtab)[:, :6] means the 1s row/column is dropped anyway before doing np.dot. Am I doing something wrong, or misunderstanding how diffusion analysis is usually done or how the dipy functions work? I'm just still a bit puzzled at b-values being converted to 1s and then 'dropped'.
lastly, now I also get a "Vertices are not on the unit sphere." warning, but maybe that's normal due to our handpicked values and 7 vectors not being much of a 'sphere'
Hi everyone! I have an issue when trying to perform cross-validation for the DKI model on 7T data from the HCP release. Specifically, the following error appears: File "goodness_of_fit.py", line 46, in <module>
dki_r2 = stats.pearsonr(data_slice[i, j, k, :], dki_slice[i, j, k, :])[0]**2
File "/home/rosella/anaconda3/lib/python3.8/site-packages/scipy/stats/stats.py", line 3868, in pearsonr
normym = linalg.norm(ym)
File "/home/rosella/anaconda3/lib/python3.8/site-packages/scipy/linalg/misc.py", line 140, in norm
a = np.asarray_chkfinite(a)
File "/home/rosella/anaconda3/lib/python3.8/site-packages/numpy/lib/function_base.py", line 485, in asarray_chkfinite
raise ValueError(
ValueError: array must not contain infs or NaNs
I do not know how to solve this, or why it happens only for 7T data, unlike 3T. Thanks!
Eleftherios Garyfallidis
@willi3by DBSI fitting is not available. However, if it is clear that DBSI is more advantageous than other implemented models for specific reasons, it makes sense to implement it. We may need to contact the authors. Feel free to start a discussion topic here: https://github.com/dipy/dipy/discussions Please start the discussion by providing a summary of the reasons that justify the time commitment.
Ariel Rokem
@rosella1234 : it looks like there are some NaNs in that data. You might want to filter out voxels with NaNs before performing your correlation analysis.
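Filtering the NaNs before the correlation can be as simple as a finite-value mask. A sketch with toy arrays (stand-ins for the data and model-prediction vectors of one voxel):

```python
import numpy as np

# Toy signal vectors standing in for data_slice[i, j, k, :] and the
# model prediction; one entry in each is corrupted.
x = np.array([1.0, 2.0, np.nan, 4.0])
y = np.array([2.0, np.nan, 3.0, 8.0])

valid = np.isfinite(x) & np.isfinite(y)  # keep only pairs where both are finite
x_clean, y_clean = x[valid], y[valid]
```

The cleaned vectors can then be passed to stats.pearsonr without triggering the "array must not contain infs or NaNs" error.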
@jarcosh : oh yes, that should have been my_sphere = Sphere(xyz=gtab.bvecs[~gtab.b0s_mask])
But it looks like you've solved that already. Let me see if I can address the other issues:
Regarding the 7 vs. 14 directions: don't you get the same ADC value for the two vectors representing the same direction?
I think that should be the case (and it should incorporate information from both b-values, although you should be careful with that, especially if the higher b-value is above b=1,000, because the signal decay is probably not Gaussian in many places).
Ariel Rokem
I am not sure that I follow the question about the design matrix. In case it helps, take a look at equations 6 - 9 in this paper: https://www.sciencedirect.com/science/article/pii/S1053811906007403.
@arokem I feel silly now: you are completely correct that the ADC values for the two vectors representing the same direction are exactly the same. Being happy that the voxel values now seemed reasonable, annoyed that the output still seemed to have the wrong dimensions, and having other issues to solve, I didn't compare ROIs as thoroughly as I usually do, so I didn't notice.