Bramsh Q Chandio
@BramshQamar
here indx has segment number/label per point
Bramsh Q Chandio
@BramshQamar
@smeisler Projecting FA values on a tract and calculating mean per segment:
import numpy as np
from dipy.tracking.streamline import transform_streamlines
from dipy.stats.analysis import assignment_map
from scipy.ndimage import map_coordinates

n = 100  # total number of segments; this does not change the number of points per streamline
indx = assignment_map(bundle, bundle, n)
indx = np.array(indx)

affine_r = np.linalg.inv(affine)
transformed_bundle = transform_streamlines(bundle, affine_r)

values = map_coordinates(FA,  transformed_bundle._data.T, order=1)

# here  indx has segment number/label per point of all streamlines
# values has FA value per point of all streamlines

#you can take an average of FA values based on which segment its corresponding point belongs to.

fa_mean = [0]*n

for i in range(n):
    fa_mean[i] = np.mean(values[indx == i])

#plot the mean FA profile

import matplotlib.pyplot as plt
plt.plot(list(range(n)), fa_mean)
plt.title("FA mean profile")
plt.xlabel("Segment number")
plt.ylim([0,1])
plt.ylabel("FA")

# you can also visualize your bundle with n segments

colors = [np.random.rand(3) for si in range(n)]

disks_color = []
for i in range(len(indx)):
    disks_color.append(tuple(colors[indx[i]]))

from dipy.viz import window, actor
scene = window.Scene()
scene.SetBackground(1, 1, 1)
scene.add(actor.line(bundle, colors=disks_color, linewidth=6))

window.show(scene)
Hi @Troudi-Abir
What kind of bundle is it? Is it a whole-brain tractogram?
Steven Meisler
@smeisler
Hi @BramshQamar I don't think I was clear when explaining my question. I would like a single mean FA value across the tract. I was wondering if there was a way to do that without creating the tract profiles first. Or rather, is the average across all segments of a bundle a good way to find the average FA of the bundle? So in the code above, np.mean(fa_mean)
Bramsh Q Chandio
@BramshQamar
@smeisler yes, you can take a mean of the entire bundle profile like this: np.mean(fa_mean). Or you can give more weight to some segments and less to others (e.g., more weight to the mid-segments). Bundle shape and FA values change along the length of the bundle, so I am not sure it's a good idea to have just one FA value represent the entire bundle.
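The weighted variant described above can be sketched with plain NumPy. The Gaussian weighting centered on the middle segment is only an illustrative assumption, not a DIPY recipe:

```python
import numpy as np

def weighted_profile_mean(fa_mean, sigma=0.15):
    """Average a per-segment FA profile, weighting mid-segments more.

    The Gaussian weighting centered on the middle segment is an
    illustrative choice; any non-negative weights that sum to 1 work.
    """
    fa_mean = np.asarray(fa_mean, dtype=float)
    n = len(fa_mean)
    x = np.linspace(0.0, 1.0, n)            # segment positions along the bundle
    w = np.exp(-0.5 * ((x - 0.5) / sigma) ** 2)
    w /= w.sum()                            # normalize weights to sum to 1
    return float(np.sum(w * fa_mean))

profile = np.linspace(0.3, 0.6, 100)        # toy FA profile for 100 segments
print(weighted_profile_mean(profile))       # symmetric weights on a linear profile -> 0.45
```

With uniform weights this reduces to np.mean(fa_mean), so the two answers discussed above are the two ends of the same idea.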
@bgolshaei
@skoudoro Thank you so much for your advice. I have done the implementation and found the displacement field.
Chandana Kodiweera
@kodiweera
Started building a wrapper package to fit diffusion models conveniently. https://github.com/kodiweera/difit
Paolo Avesani
@Paolopost
In the module dipy.tracking.utils, the function 'target' allows filtering of streamlines crossing a volumetric ROI mask. Does the current implementation (dipy 1.4.0) compute the filtering considering the points or the segments crossing a voxel? This detail affects the result in a meaningful way when using streamline compression.
Eleftherios Garyfallidis
@Garyfallidis
@Paolopost target uses points, target_line_based uses segments (for compressed streamlines).
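The practical difference can be seen with a toy example in plain NumPy (this is an illustration of the concept, not the DIPY implementation): a compressed streamline stores only the endpoints of a straight run, so a point-based check misses the voxels the segment passes through.

```python
import numpy as np

# A "compressed" straight streamline: only the two endpoints are stored.
streamline = np.array([[0.0, 0.0, 0.0],
                       [4.0, 0.0, 0.0]])

roi_voxel = (2, 0, 0)  # the mask voxel we test against

# Point-based check (conceptually what `target` does):
point_voxels = {tuple(v) for v in np.floor(streamline).astype(int)}
hits_by_points = roi_voxel in point_voxels

# Segment-based check (conceptually what `target_line_based` does):
# densely resample the segment and collect every voxel it traverses.
t = np.linspace(0.0, 1.0, 100)[:, None]
dense = streamline[0] + t * (streamline[1] - streamline[0])
segment_voxels = {tuple(v) for v in np.floor(dense).astype(int)}
hits_by_segment = roi_voxel in segment_voxels

print(hits_by_points)   # False: voxel (2, 0, 0) contains no stored point
print(hits_by_segment)  # True: the segment passes through it
```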
Emmanuelle
@EmmaRenauld

Hi!

Links at the bottom of page https://dipy.org/documentation/1.4.1./documentation/ ("index" and "search page") are broken! Thanks!

(if you want I can add an issue on Github but I wasn't sure how you manage your doc)
Serge Koudoro
@skoudoro
Thank you @EmmaRenauld. There is already an issue for that; we will fix it ASAP.
tecork
@tecork

Hello everyone,
I am trying to run the following code:

from dipy.denoise.localpca import localpca
from dipy.denoise.pca_noise_estimate import pca_noise_estimate

sigma = pca_noise_estimate(data, gtab, correct_bias=True, smooth=3)
denoised_arr = localpca(data, sigma, tau_factor=2.3, patch_radius=2)

using a dataset that is (100, 128, 3, 21) with the following bvals setup:
array([ 0., 1000., 1000., 1000., 1000., 1000., 1000., 0., 1000.,
1000., 1000., 1000., 1000., 1000., 0., 1000., 1000., 1000.,
1000., 1000., 1000.])
I keep receiving this error when I run it, though:

.../opt/anaconda3/lib/python3.7/site-packages/dipy/denoise/localpca.py:246: RuntimeWarning: invalid value encountered in true_divide
denoised_arr = thetax / theta

The resulting array is all np.nan's.
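That RuntimeWarning fires when the denominator `theta` contains zeros, which turns the corresponding voxels into NaNs. As a generic illustration (not the DIPY code itself), a division like this can be guarded with np.where:

```python
import numpy as np

thetax = np.array([1.0, 2.0, 0.0])
theta = np.array([2.0, 0.0, 0.0])   # zeros, e.g. background voxels

# Unguarded division emits "invalid value encountered" and produces inf/NaN:
with np.errstate(divide="ignore", invalid="ignore"):
    raw = thetax / theta            # -> [0.5, inf, nan]

# Guarded division substitutes 0 wherever the denominator is 0:
safe = np.where(theta > 0, thetax / np.where(theta > 0, theta, 1.0), 0.0)
print(safe)                         # -> [0.5, 0.0, 0.0]
```

If the whole output is NaN, it suggests the estimated sigma (the denominator's source) is zero or invalid everywhere, which points at the noise estimation step rather than the division itself.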

tecork
@tecork
Patch2Self and Non-Local Means techniques work on the same data. Local PCA via empirical thresholds and Marcenko-Pastur PCA algorithm techniques throw the same error for the said dataset.
Shreyas Fadnavis
@ShreyasFadnavis
Hey @tecork! Thanks for reaching out :) I think I know why this error is occurring, though I have never encountered it personally. Can you share some of the data you are getting this error with?
Serge Koudoro
@skoudoro
Also, it will be good to create an issue concerning this warning/error. thanks @tecork
tecork
@tecork
@ShreyasFadnavis Here is the dataset I was working with. Not a very impressive dataset, but I wanted to workout the denoising tools for a project I'm working on!
@skoudoro I'll create an issue as well. I just wanted to run it by the community first because I suspected it could very well be a user error.
araikes
@araikes
@skoudoro @Garyfallidis: Two questions:
1. Is there a way to use the tensor output from dipy_fit_dti or dipy_fit_dki to create scalar maps in a subsequent step after reorienting to a T1w image?
2. Can I fit the RESTORE model using the CLI?
Serge Koudoro
@skoudoro
Hi @araikes, concerning your question number 2: you cannot use the RESTORE model via the CLI. However, this is easy to add. Can you create an issue? We should be able to add it before the release on November 7-8.
araikes
@araikes
@Garyfallidis: I'll clarify my question since it isn't as obvious what I'm thinking now that I'm re-reading it. If I use ANTs to register the b0 to a T1w or T2w image (imaging a lot of rodents...) and then reorient the tensors (https://github.com/ANTsX/ANTs/wiki/Warp-and-reorient-a-diffusion-tensor-image), is there a DIPY way to then get the scalar maps from those reoriented tensors?
Eleftherios Garyfallidis
@Garyfallidis
@araikes in what form are the reoriented tensors saved?
araikes
@araikes
@Garyfallidis They're a NIFTI. DIPY's tensor is NIFTI-1, so it feeds directly into ANTs ReorientTensor without any manipulation.
Eleftherios Garyfallidis
@Garyfallidis
Okay, can you load them back into DIPY? If so, you can use our dipy.reconst.dti module in a predictive way.
The question is what you want to do next.
Do you want to use these tensors to generate metrics such as FA etc?
araikes
@araikes
That's the plan (at the moment)
Eleftherios Garyfallidis
@Garyfallidis
Here is how to decompose the tensor into eigenvalues and eigenvectors: https://github.com/dipy/dipy/blob/master/dipy/reconst/dti.py#L1960
Then you can use those to create FA etc. See the function to use here: https://github.com/dipy/dipy/blob/master/dipy/reconst/dti.py#L54
araikes
@araikes
Makes sense. I'll see if I can get something up and working. Thanks
Eleftherios Garyfallidis
@Garyfallidis
You are welcome.
araikes
@araikes
@Garyfallidis It looks like I can't load them back in DIPY. If I use load_nifti (even on the unmodified tensor image produced by dipy_fit_dti) and then dti.decompose_tensor I get: LinAlgError: Last 2 dimensions of the array must be square
araikes
@araikes
data, affine = load_nifti('tensors.nii.gz')
test = dti.from_lower_triangular(data)
evals, evecs = dti.decompose_tensor(test)
fa = dti.fractional_anisotropy(evals)
That produces an FA map of 0s. The tensor data, when read back in, is a 64x128x64x1x6 array of 0s.
Eleftherios Garyfallidis
@Garyfallidis
Not sure if that will work, but can you remove the extra dimension with np.squeeze, to go to 64x128x64x6?
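The suggested fix and the tensor-to-FA math can be sketched in plain NumPy (this mirrors what dti.from_lower_triangular, dti.decompose_tensor, and dti.fractional_anisotropy compute, assuming DIPY's lower-triangular component order Dxx, Dxy, Dyy, Dxz, Dyz, Dzz):

```python
import numpy as np

def fa_from_lower_triangular(tensor_img):
    """FA from a (..., 1, 6) or (..., 6) lower-triangular tensor volume.

    NumPy-only sketch of the DIPY pipeline; the component order
    (Dxx, Dxy, Dyy, Dxz, Dyz, Dzz) is an assumption taken from DIPY.
    """
    d = np.squeeze(tensor_img)                      # drop the singleton axis
    dxx, dxy, dyy, dxz, dyz, dzz = np.moveaxis(d, -1, 0)
    # Rebuild the full symmetric 3x3 tensor in each voxel.
    t = np.stack([np.stack([dxx, dxy, dxz], -1),
                  np.stack([dxy, dyy, dyz], -1),
                  np.stack([dxz, dyz, dzz], -1)], -2)
    evals = np.linalg.eigvalsh(t)                   # ascending eigenvalues
    l1, l2, l3 = evals[..., 2], evals[..., 1], evals[..., 0]
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l1 - l3) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return np.sqrt(0.5 * num / np.maximum(den, 1e-20))  # guard zero voxels

# Toy (2, 2, 1, 6) volume of isotropic tensors: Dxx = Dyy = Dzz = 1.
vol = np.zeros((2, 2, 1, 6))
vol[..., 0, [0, 2, 5]] = 1.0
fa = fa_from_lower_triangular(vol)
print(fa)                                           # isotropic -> FA of 0 everywhere
```

An all-zero FA map, as reported above, therefore means the tensor data itself is all zeros before the decomposition, not that the decomposition failed.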
licataae
@licataae

Hi all, I am working with a script that generates peaks using DIPY's peaks_from_model() to track white matter pathways. I am attempting to save these peaks to a NIfTI from the .PAM5 file they are saved in using save_peaks(). My PeaksAndMetrics object does not have the affine attribute; however, even if I specify it in save_peaks() as the docs suggest, it still fails to recognize it and gives the error: AttributeError: 'PeaksAndMetrics' object has no attribute 'affine'. I am using dipy 0.15, python 2.7. Here is my code; any advice is helpful since I am still fairly new to DIPY:
csapeaks = peaks_from_model(model=csa_model, data=data,
                            sphere=sphere,
                            relative_peak_threshold=.25,
                            return_odf=True, normalize_peaks=True)

print('csa_peaks generated')
pam = save_peaks(os.path.join(Diffusion, 'peaks.pam5'), csapeaks,
                 affine=np.eye(4))

peaks_to_niftis(pam,
                Diffusion + '/' + PIDN + 'peaksSH.nii',
                Diffusion + '/' + PIDN + 'peaksdirections.nii',
                Diffusion + '/' + PIDN + 'peaksindices.nii',
                Diffusion + '/' + PIDN + 'peaksvalues.nii',
                Diffusion + '/' + PIDN + 'GFA.nii',
                reshape_dirs=False)

Serge Koudoro
@skoudoro
Hi @licataae, sorry for the late answer. I would recommend switching to Python 3 and a recent version of DIPY: Python 2.7 is deprecated, and this issue was fixed quite a long time ago. However, if you really have no choice, I would recommend looking at the current codebase, where we updated the save_peaks function: https://github.com/dipy/dipy/blob/master/dipy/io/peaks.py. It might help you a lot to rewrite the save function.
licataae
@licataae
Thank you very much! Yes I must update my python/dipy versions... I greatly appreciate your help.
kenebene
@kenebene

Hi, a question for the community: I would need to cluster streamline pairs rather than streamlines whilst still using QuickBundles. By this I mean that I have one streamline pair [A B], where A and B are individual streamlines, and one streamline pair [C D], where C and D are individual streamlines. What I want to do is calculate the distance between A and C, and B and D and cluster based on the total distance between the pairs as: totalDistance = distanceBetween(A, C) +distanceBetween(B, D).

If anyone has any idea of how to achieve this I would be very thankful to hear it!

erickirby12
@erickirby12
Hello, I'm still new to DIPY and coding in general. How would one go about creating an average whole-brain tractogram in DIPY that is a combination of all subjects' datasets in a group? My current idea is to combine all bvec and bval files, transform all raw DTI data to standard space, merge all DTI data, then put the resulting file through my DIPY pipeline as if it were a single subject's data. However, this will create a massive file and I don't think my computer can handle it. Any ideas on a better way to do this?
Eleftherios Garyfallidis
@Garyfallidis
Basically, you will need to properly design your feature and metric distance. But I think what you want to do is possible.
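The pair distance described above can be sketched in plain NumPy. This uses only the direct MDF-style distance, without the flipped-orientation check or the QuickBundles Feature/Metric plumbing that a real integration would need:

```python
import numpy as np

def mdf_direct(s1, s2):
    """Mean distance between corresponding points of two streamlines.

    Both streamlines must already be resampled to the same number of
    points; the real MDF also considers the flipped point ordering.
    """
    return float(np.mean(np.linalg.norm(s1 - s2, axis=1)))

def pair_distance(pair1, pair2):
    """Distance between streamline pairs [A, B] and [C, D]:
    distanceBetween(A, C) + distanceBetween(B, D)."""
    (A, B), (C, D) = pair1, pair2
    return mdf_direct(A, C) + mdf_direct(B, D)

# Toy pairs: three points per streamline.
A = np.zeros((3, 3))
B = np.ones((3, 3))
C = A + [1.0, 0.0, 0.0]               # C is A shifted by 1 along x
D = B.copy()                          # D equals B
print(pair_distance((A, B), (C, D)))  # -> 1.0 (1.0 from A-C, 0.0 from B-D)
```

One way to feed this into QuickBundles would be to represent each pair [A, B] as a single stacked array and write a custom Metric whose dist() splits the array back into A and B; the feature and threshold then have to be designed around that representation.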
kenebene
@kenebene
@Garyfallidis thank you specifically for this input and generally for your great work, much appreciated. I will work with the material you suggested.
Elie Abi Aoun
@Elie-AAA
Hello everyone,
has anyone tried DIPY for fibre tracking in a fibre reinforced composite? If yes, was it successful?