Bramsh Q Chandio
@smeisler Projecting FA values on a tract and calculating mean per segment:
import numpy as np
from dipy.tracking.streamline import transform_streamlines
from dipy.stats.analysis import assignment_map
from scipy.ndimage import map_coordinates
from dipy.io.image import load_nifti

bundle = ...  # load your bundle here
FA, affine = load_nifti(metric_file_name)

n = 100  # total number of segments; this does not change the number of points per streamline
indx = assignment_map(bundle, bundle, n)
indx = np.array(indx)

affine_r = np.linalg.inv(affine)
transformed_bundle = transform_streamlines(bundle, affine_r)

values = map_coordinates(FA, transformed_bundle._data.T, order=1)

# indx holds the segment number/label per point of all streamlines
# values holds the FA value per point of all streamlines

# average the FA values according to the segment each point belongs to
fa_mean = [0] * n
for i in range(n):
    fa_mean[i] = np.mean(values[indx == i])

# plot the mean FA profile
import matplotlib.pyplot as plt
plt.plot(list(range(n)), fa_mean)
plt.title("FA mean profile")
plt.xlabel("Segment number")

# you can also visualize your bundle with n segments
colors = [np.random.rand(3) for si in range(n)]

# one color per point, picked by the segment that point belongs to
disks_color = []
for i in range(len(indx)):
    disks_color.append(tuple(colors[indx[i]]))

from dipy.viz import window, actor
scene = window.Scene()
scene.SetBackground(1, 1, 1)
scene.add(actor.line(bundle, fake_tube=True, colors=disks_color,
                     linewidth=6))
window.show(scene)
Hi @Troudi-Abir
What kind of bundle is it? Is it a whole-brain tractogram?
Steven Meisler
Hi @BramshQamar I don't think I was clear when explaining my question: I would like a single mean FA value across the tract. Is there a way to do that without creating the tract profiles first? Or rather, is the average across all segments of a bundle a good way to find the average FA of the bundle? So, in the code above, np.mean(fa_mean).
Bramsh Q Chandio
@smeisler yes, you can take a mean of the entire bundle profile like this: np.mean(fa_mean). You can also give more weight to some segments and less to others (e.g., more weight to mid-segments). Bundle shape and FA values change along the length of the bundle, so I am not sure it's a good idea to have just one FA value represent the entire bundle.
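The weighting idea above can be sketched with plain numpy. The Gaussian emphasis on mid-segments and the helper name `weighted_profile_mean` are illustrative assumptions, not a DIPY function:

```python
import numpy as np

def weighted_profile_mean(fa_mean, sigma=10.0):
    """Collapse a bundle profile to one value, weighting mid-segments
    more heavily (hypothetical Gaussian weighting, not DIPY API)."""
    fa_mean = np.asarray(fa_mean, dtype=float)
    n = len(fa_mean)
    x = np.arange(n)
    # Gaussian weights centered on the middle segment
    w = np.exp(-0.5 * ((x - (n - 1) / 2.0) / sigma) ** 2)
    return float(np.sum(w * fa_mean) / np.sum(w))
```

Compared with a plain `np.mean(fa_mean)`, this down-weights the bundle ends, where segment shape and partial-volume effects tend to be noisiest.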
Behzad Golshaei
@skoudoro Thank you so much for your advice. I have done the implementation and found the displacement field.
Chandana Kodiweera
Started building a wrapper package to fit diffusion models conveniently. https://github.com/kodiweera/difit
Paolo Avesani
In the module dipy.tracking.utils, the function 'target' filters streamlines crossing a volumetric ROI mask; does the current implementation (DIPY 1.4.0) perform the filtering based on the points or on the segments crossing a voxel? This detail affects the result meaningfully when using streamline compression.
Eleftherios Garyfallidis
@Paolopost target uses points, target_line_based uses segments (for compressed streamlines).


Links at the bottom of the page https://dipy.org/documentation/1.4.1./documentation/ ("index" and "search page") are broken! Thanks!

(if you want I can add an issue on Github but I wasn't sure how you manage your doc)
Serge Koudoro
Thank you @EmmaRenauld. There is already an issue for that. We will fix it asap.

Hello everyone,
I am trying to run the following code:

from dipy.denoise.localpca import localpca
from dipy.denoise.pca_noise_estimate import pca_noise_estimate
from dipy.core.gradients import gradient_table

gtab = gradient_table(bvals, bvecs)
sigma = pca_noise_estimate(data, gtab, correct_bias=True, smooth=3)
denoised_arr = localpca(data, sigma, tau_factor=2.3, patch_radius=2)

using a dataset that is (100, 128, 3, 21) with the following bvals setup:
array([ 0., 1000., 1000., 1000., 1000., 1000., 1000., 0., 1000.,
1000., 1000., 1000., 1000., 1000., 0., 1000., 1000., 1000.,
1000., 1000., 1000.])
I keep receiving this error when I run it, though:

.../opt/anaconda3/lib/python3.7/site-packages/dipy/denoise/localpca.py:246: RuntimeWarning: invalid value encountered in true_divide
denoised_arr = thetax / theta

The resulting array is all np.nan's.
Any idea on how to go about this?

Patch2Self and Non-Local Means work on the same data. Local PCA via empirical thresholds and the Marcenko-Pastur PCA algorithm both throw the same error on this dataset.
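For anyone debugging the same warning: the division thetax / theta points at a zero denominator, which could stem from a zero noise estimate. This little check is a first diagnostic, not DIPY API; `nan_report` is a made-up helper and the zero-sigma cause is a guess:

```python
import numpy as np

def nan_report(denoised_arr, sigma):
    """Summarize how widespread the NaNs are and whether the noise
    estimate contains zeros (a plausible cause of the zero division)."""
    return {
        "nan_fraction": float(np.mean(np.isnan(denoised_arr))),
        "zero_sigma_voxels": int(np.sum(sigma == 0)),
    }
```

If zero_sigma_voxels is large, inspecting the raw data for all-zero volumes or slices would be the next step.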
Shreyas Fadnavis
Hey @tecork! Thanks for reaching out :) I think I know why this error is occurring, although I have never encountered it personally. Can you share some data that produces it?
Serge Koudoro
Also, it will be good to create an issue concerning this warning/error. thanks @tecork
@ShreyasFadnavis Here is the dataset I was working with. Not a very impressive dataset, but I wanted to work out the denoising tools for a project I'm working on!
@skoudoro I'll create an issue as well. I just wanted to run it through the community first because I suspected it could very well be a user error.
@skoudoro @Garyfallidis: Two questions:
  1. Is there a way to use the tensor output from dipy_fit_dti or dipy_fit_dki to create scalar maps in a subsequent step after reorienting to a T1w image?
  2. Can I fit the RESTORE model using the CLI?
Serge Koudoro
Hi @araikes, concerning your question number 2: you cannot use the RESTORE model via the CLI. However, it is easy to add. Can you create an issue? We should be able to add it before the release on November 7-8.
Concerning your question 1, I will let @Garyfallidis answer
@skoudoro Issue added.
@Garyfallidis: I'll clarify my question since, rereading it, it isn't obvious what I was thinking. If I use ANTs to register the b0 to a T1w or T2w image (imaging a lot of rodents...) and then reorient the tensors (https://github.com/ANTsX/ANTs/wiki/Warp-and-reorient-a-diffusion-tensor-image), is there a DIPY way to then get the scalar maps from those reoriented tensors?
Eleftherios Garyfallidis
@araikes in what form are the reoriented tensors saved?
@Garyfallidis They're a NIFTI. DIPY's tensor is NIFTI-1, so it feeds directly into ANTs ReorientTensor without any manipulation.
Eleftherios Garyfallidis
Okay, can you load them back into DIPY? If yes, then you can use our dipy.reconst.dti module in a predictive way.
The question is what you want to do next.
Do you want to use these tensors to generate metrics such as FA?
That's the plan (at the moment)
Eleftherios Garyfallidis
Here is how to decompose the tensor into eigenvalues and eigenvectors: https://github.com/dipy/dipy/blob/master/dipy/reconst/dti.py#L1960
Then you can use those to create FA etc. See the function to use here: https://github.com/dipy/dipy/blob/master/dipy/reconst/dti.py#L54
Makes sense. I'll see if I can get something up and working. Thanks
Eleftherios Garyfallidis
You are welcome.
@Garyfallidis It looks like I can't load them back into DIPY. If I use load_nifti (even on the unmodified tensor image produced by dipy_fit_dti) and then dti.decompose_tensor, I get: LinAlgError: Last 2 dimensions of the array must be square
data, affine = load_nifti('tensors.nii.gz')

test = dti.from_lower_triangular(data)

evals, evecs = dti.decompose_tensor(test)
fa = dti.fractional_anisotropy(evals)
That produces an FA map of 0s. The tensor data, when read back in, is a 64x128x64x1x6 array of 0s.
Eleftherios Garyfallidis
Not sure if that will work, but can you remove the extra dimension? Use np.squeeze to go to 64x128x64x6?
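The squeeze step suggested above, sketched with a dummy array standing in for the loaded tensor image (shapes taken from the thread); the result would then go to dti.from_lower_triangular:

```python
import numpy as np

# dummy stand-in for the (64, 128, 64, 1, 6) tensor volume from the thread
data = np.zeros((64, 128, 64, 1, 6))
data = np.squeeze(data)  # drops the singleton axis -> (64, 128, 64, 6)
```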

Hi all, I am working with a script that generates peaks using DIPY's peaks_from_model() to track white matter pathways. I am attempting to save these peaks to a NIfTI from the .pam5 file they are saved in using save_peaks(). My PeaksAndMetrics object does not have the affine attribute; even if I specify it in save_peaks() as the docs suggest, it still fails to recognize it and gives the error: AttributeError: 'PeaksAndMetrics' object has no attribute 'affine'. I am using dipy 0.15, python 2.7. Here is my code, any advice is helpful since I am still fairly new to DIPY:

csapeaks = peaks_from_model(model=csa_model,
                            min_separation_angle=40, mask=mask,
                            return_odf=True, normalize_peaks=True)

print('csa_peaks generated')
pam = save_peaks(os.path.join(Diffusion, 'peaks.pam5'), csapeaks, affine=np.eye(4))

peaks_to_niftis(pam, Diffusion + '/' + PIDN + 'peaksSH.nii',
                Diffusion + '/' + PIDN + 'GFA.nii', reshape_dirs=False)

Serge Koudoro
Hi @licataae, sorry for the late answer. I would recommend switching to Python 3 and a recent version of DIPY; Python 2.7 is deprecated, and this issue was fixed quite a long time ago. However, if you really have no choice, I recommend looking at the current codebase, where we updated the save_peaks function: https://github.com/dipy/dipy/blob/master/dipy/io/peaks.py. It might help you a lot to rewrite the save function.
Thank you very much! Yes I must update my python/dipy versions... I greatly appreciate your help.

Hi, a question for the community: I need to cluster streamline pairs rather than individual streamlines, whilst still using QuickBundles. By this I mean that I have one streamline pair [A B], where A and B are individual streamlines, and another pair [C D], where C and D are individual streamlines. I want to calculate the distance between A and C and between B and D, and cluster based on the total distance between the pairs: totalDistance = distanceBetween(A, C) + distanceBetween(B, D).

If anyone has any idea of how to achieve this I would be very thankful to hear it!
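The pair distance described above can be sketched in plain numpy, assuming both members of each pair are resampled to the same number of points first (e.g. with dipy.tracking.streamline.set_number_of_points). `pair_distance` and `mdf` here are illustrative helpers, not existing DIPY functions; to use them with QuickBundles you would wrap the logic in a custom Metric subclass:

```python
import numpy as np

def mdf(s, t):
    """Mean point-wise Euclidean distance between two streamlines,
    taking the better of direct and flipped orderings (in the spirit
    of DIPY's minimum-average-direct-flip distance)."""
    direct = np.mean(np.linalg.norm(s - t, axis=1))
    flipped = np.mean(np.linalg.norm(s - t[::-1], axis=1))
    return min(direct, flipped)

def pair_distance(pair1, pair2):
    """totalDistance = distanceBetween(A, C) + distanceBetween(B, D)."""
    (a, b), (c, d) = pair1, pair2
    return mdf(a, c) + mdf(b, d)
```

One practical trick: stack each pair [A B] into a single 2N-point array, so each "streamline" QuickBundles sees is really a pair, and let the custom metric split the halves when computing the distance.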

Hello, I'm still fairly new to DIPY and coding in general. How would one go about creating an average whole-brain tractogram in DIPY that combines all subjects' datasets in a group? My current idea is to combine all bvec and bval files, transform all raw DTI data to standard space, merge the DTI data, and then put the resulting file through my DIPY pipeline as if it were a single subject's data. However, this will create a massive file, and I don't think my computer can handle it. Any ideas on a better way to do this?
Basically, you will need to design your feature and metric distance properly. But I think what you want to do is possible.
@Garyfallidis thank you specifically for this input and generally for your great work, much appreciated. I will work with the material you suggested.
Elie Abi Aoun
Hello everyone,
has anyone tried DIPY for fibre tracking in a fibre-reinforced composite? If yes, was it successful?
Thanks in advance,
Surendra Maharjan
Hello, Thank you very much for designing this awesome package.
Do we have FFT in DIPY?
Serge Koudoro
hi @irfnt, there is no FFT in DIPY; we usually use the scipy or numpy FFT implementations
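For completeness, the numpy route mentioned above, shown as a simple forward/inverse roundtrip on a 1D signal:

```python
import numpy as np

signal = np.random.default_rng(0).standard_normal(64)
spectrum = np.fft.fft(signal)           # forward FFT (numpy, not DIPY)
recovered = np.fft.ifft(spectrum).real  # inverse; imaginary part is ~0 for real input
```

scipy.fft offers the same interface with some extra backends and real-input specializations (rfft/irfft).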
Serge Koudoro

Dear DIPY Community (@/all),

Do you need help to add a new feature on DIPY?

Do you want to share ideas or a moment with DIPY users and developers? Or do you have any questions about your DIPY code?

Feel free to join us on December 16th-17th during our Brainhack event! (https://brainhack.luddy.indiana.edu/)

Whatever your background or level of expertise is, you are encouraged to join us and participate: propose and discuss ideas, showcase demos, or contribute to activities initiated by others.

This free online event will be a good opportunity to connect with each other (registration is mandatory).
Some users have already proposed projects that you might be interested in:

  • Add new features to image registration
  • Sphinx Gallery Integration in DIPY
  • Improve Patch2self
  • Sprint to fix issues on DIPY
  • Implementation of the generalized volumetric atlas-based method for tractogram segmentation
  • Convert python 2 dipy script to python 3
  • Your project!

You will find all the information (schedule, registration, ...) at https://brainhack.luddy.indiana.edu/