@kodiweera : if `mask` is a binary 3D array (True/False for within/outside the region of interest) and `data` is a 4D array containing diffusion data, then `data[np.where(mask)]` should be a 2D array with each row being a voxel and each column being a direction of measurement.
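A minimal sketch of that indexing (the shapes below are made up purely for illustration):

```python
import numpy as np

# Hypothetical shapes: 4D diffusion data and a 3D boolean mask.
data = np.random.rand(96, 96, 60, 64)      # x, y, z, n_directions
mask = np.zeros((96, 96, 60), dtype=bool)
mask[30:60, 30:60, 20:40] = True           # region of interest

masked = data[np.where(mask)]              # equivalent to data[mask]
print(masked.shape)                        # (n_voxels_in_mask, 64) -> (18000, 64) here
```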
@arokem Thanks. Also, for 3D: fslmaths <input> -mas <mask> <output>
Hello @/all ,
Dr. Leevi Kerkelä (UCL) will discuss today (Thursday 15 April at 1pm EST / 7pm CET / 10am PT) his project on integrating Q-Space Trajectory Imaging in DIPY.
The zoom link for the open meeting is https://iu.zoom.us/j/84926066336.
We hope to see you online!
Cheers !
Hi DIPY team. I wanted to ask about a memory error I am getting when running `dipy_track` on a Linux server.
File "~/cowenswalton/conda-envs/envs/buan_env/lib/python3.8/site-packages/dipy/tracking/utils.py", line 881, in transform_tracking_output yield np.dot(sl, lin_T) + offset MemoryError
Super grateful for any advice, or a pointer to where I can find some.
Patch2Self (standalone) should do as well as Patch2Self + MPPCA. Any noise that persists after MPPCA will be suppressed by P2S anyway.
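For reference, a minimal sketch of running Patch2Self on its own (the file names are placeholders, not from this thread):

```python
# Minimal sketch: denoise a DWI volume with Patch2Self alone.
from dipy.io.image import load_nifti, save_nifti
from dipy.io.gradients import read_bvals_bvecs
from dipy.denoise.patch2self import patch2self

data, affine = load_nifti("dwi.nii.gz")                    # placeholder file names
bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")

denoised = patch2self(data, bvals)                         # default OLS regressor
save_nifti("dwi_denoised.nii.gz", denoised, affine)
```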
Hello @/all ,
Quick reminder for the meeting today at 1pm EST / 7pm CET / 10am PT!
@gabknight will talk about the new Tractography competition.
We will have a brief talk about our Tracking framework
@skoudoro will give a quick overview of the upcoming release. Let us know if you have any requests.
Cheers!
ps: same link as usual: https://iu.zoom.us/j/84926066336
Hi @smeisler
Currently, dipy_labelsbundles does not take reference files as input. However, you can write a simple Python script to bring bundles into the native space. Here's the bundle segmentation tutorial: https://dipy.org/documentation/1.4.0./examples_built/bundle_extraction/#example-bundle-extraction
At the end of this tutorial, after extracting the bundle, we save it in the native space. Something like this:
```python
reco_af_l = StatefulTractogram(target[af_l_labels], target_header,
                               Space.RASMM)
save_trk(reco_af_l, "AF_L.trk", bbox_valid_check=False)
```
You can use the labels.npy file saved by the RecoBundles command line.
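A minimal sketch of that workflow, assuming (hypothetically) that the whole-brain tractogram is in target.trk and that the RecoBundles command line wrote AF_L_labels.npy:

```python
# Sketch only: file names are placeholders, adjust to your own outputs.
import numpy as np
from dipy.io.streamline import load_trk, save_trk
from dipy.io.stateful_tractogram import StatefulTractogram, Space

# Whole-brain tractogram in native space.
target_sft = load_trk("target.trk", reference="same", bbox_valid_check=False)
target = target_sft.streamlines

# Streamline indices saved by the RecoBundles command line.
af_l_labels = np.load("AF_L_labels.npy")

# Rebuild the bundle in native (RASMM) space and save it.
reco_af_l = StatefulTractogram(target[af_l_labels], target_sft, Space.RASMM)
save_trk(reco_af_l, "AF_L.trk", bbox_valid_check=False)
```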
When trying to apply cross-validation to the DKI model on 7T adult HCP data, the following error arises:
```
(base) synapsi@charlie:/media/synapsi/dkeTest/new_version/code$ python goodness_of_fit.py
/home/synapsi/anaconda3/lib/python3.8/site-packages/dipy/core/gradients.py:295: UserWarning: b0_threshold (value: 50) is too low, increase your b0_threshold. It should be higher than the lowest b0 value (55.0).
  warn("b0_threshold (value: {0}) is too low, increase your \
Data & Mask Loaded!
Traceback (most recent call last):
  File "goodness_of_fit.py", line 21, in <module>
    dki_cc = xval.kfold_xval(dki_model, cc_vox, 2)
  File "/home/synapsi/anaconda3/lib/python3.8/site-packages/dipy/reconst/cross_validation.py", line 107, in kfold_xval
    raise ValueError(msg)
ValueError: np.mod(143, 2) is 1
```
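The ValueError is a divisibility check: kfold_xval leaves out an equal share of the measurements in each fold, so the number it reports (143 here) must be divisible by the number of folds. A minimal sketch, reusing the dki_model and cc_vox variables from the script above (143 = 11 × 13, so 11 or 13 folds split it evenly):

```python
# Sketch: choose a fold count that divides the reported number (143) evenly.
import dipy.reconst.cross_validation as xval

# folds=2 fails because np.mod(143, 2) == 1; 11 (or 13) divides 143 exactly.
dki_cc = xval.kfold_xval(dki_model, cc_vox, 11)
```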
I'm computing affine transforms for co-registering anatomical MRI scans.
Sometimes I would like to re-use those transforms, so in a case similar to the example here
https://dipy.org/documentation/1.2.0./examples_built/affine_registration_3d/#example-affine-registration-3d
my final transform is the result of this command:
```python
rigid = affreg.optimize(static, moving, transform, params0,
                        static_affine, moving_affine,
                        starting_affine=translation.affine)
final = rigid
```
after first having aligned the centres of gravity.
After this command I can call `resampled = final.transform(moving)` and then I get the desired result.
What I don't understand is how to store the transform and re-use it later.
I have tried to just save the object 'final' with `np.savez('transformation.npz', final)` and then load it later with
```python
with np.load('transformation.npz', allow_pickle=True) as npzfile:
    final = npzfile['arr_0']
```
and then re-run `resampled = final.transform(moving)`,
but that returns the following error:
```
resampled = final.transform(moving)
AttributeError: 'numpy.ndarray' object has no attribute 'transform'
```
So what I understand from this is that the output 'rigid' from the affreg.optimize() command has a 'transform' attribute, but after saving and re-loading it, it no longer does.
Is there a(nother) way to save and (re)load transforms?
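One possibility (a sketch, not from this thread): since the optimizer returns an AffineMap, you could save just its 4x4 affine matrix and rebuild the map later, assuming the static/moving grid shapes and grid-to-world affines are still available:

```python
# Sketch: persist only the estimated 4x4 matrix and rebuild the AffineMap later.
import numpy as np
from dipy.align.imaffine import AffineMap

np.save("rigid_affine.npy", final.affine)          # save the 4x4 matrix

# Later: rebuild the map from the matrix plus the image grids.
affine = np.load("rigid_affine.npy")
final = AffineMap(affine,
                  static.shape, static_affine,
                  moving.shape, moving_affine)
resampled = final.transform(moving)
```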
Hi, all. I have some problems when fitting the free water component. Does anyone know how to fix this? Thanks a lot!
I have been using fwdtimodel and fwdtifit to map the free water compartment.
The test data was downloaded from HCP, with 288 directions and 2 b-values.
The code is as follows:
```python
fwdtimodel = fwdti.FreeWaterTensorModel(gtab)
fwdtifit = fwdtimodel.fit(data, mask=mask)
fwvolume = fwdtifit.f
```
But there are some warnings:
```
/usr/local/lib64/python3.6/site-packages/scipy/optimize/minpack.py:475: RuntimeWarning: Number of calls to function has reached maxfev = 1800.
  warnings.warn(errors[info][0], RuntimeWarning)
/usr/local/lib64/python3.6/site-packages/dipy/reconst/fwdti.py:311: RuntimeWarning: overflow encountered in exp
  SIpred = (1-FS)*np.exp(np.dot(W, all_new_params)) + FS*S0*SFW.T
/usr/local/lib64/python3.6/site-packages/dipy/reconst/fwdti.py:312: RuntimeWarning: overflow encountered in square
  F2 = np.sum(np.square(SI - SIpred), axis=0)
/usr/local/lib64/python3.6/site-packages/dipy/reconst/fwdti.py:458: RuntimeWarning: overflow encountered in exp
  y = (1-f) * np.exp(np.dot(design_matrix, tensor[:7])) + \
```
I wonder whether these warnings affect the output images, and whether there is any way to fix them? Many thanks for your help!
To save/load the transformation object you can try using pickle instead of numpy:
```python
import pickle

# save
with open('transform.obj', 'wb') as filehandler:
    pickle.dump(final, filehandler)

# load
with open('transform.obj', 'rb') as filehandler:
    final = pickle.load(filehandler)
```
Hi @drombas, thanks very much! pickle does the trick :)
I thought that np.savez / np.load with allow_pickle=True would work, assuming it would do the same as pickle (it doesn't), or that using JSON would be possible; unfortunately it isn't (I tried the nested serialisation but failed). The reason I looked for alternatives is that many people have reservations about pickle.
For me it does the job perfectly.
There are also `write_mapping` and `read_mapping` in https://github.com/dipy/dipy/blob/master/dipy/align/_public.py#L218:
`from dipy.align import write_mapping, read_mapping`
We use the NIfTI format to save this.
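A minimal sketch of how those helpers might be used. This assumes the signatures are write_mapping(mapping, fname) and read_mapping(disp, domain_img, codomain_img), and that they operate on a diffeomorphic (SyN) mapping rather than a plain affine; the file and variable names are placeholders:

```python
# Sketch: persist a diffeomorphic mapping to NIfTI and reload it later.
# `mapping` is assumed to be a DiffeomorphicMap from SyN registration.
from dipy.align import write_mapping, read_mapping

# Save the displacement fields to a NIfTI file.
write_mapping(mapping, "mapping_disp.nii.gz")

# Later: rebuild the mapping from the saved displacements plus the
# static (domain) and moving (codomain) images.
mapping = read_mapping("mapping_disp.nii.gz", "static.nii.gz", "moving.nii.gz")
warped = mapping.transform(moving_data)
```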