scale_function
to 1. So imagine two events that occur within 1 second of each other. From what we know about the brain's response, there is a subadditivity of those two presentations: the evoked response will be larger than if only one event occurred, but it likely won't be twice as large. If you set the scaling to 0, then the peak of the outputted stimulus response will be twice as high as for a single event, which would be wrong. If you set the scaling to 1, then the peak of two events will be 1, just like the peak of a single event, although the shape of the function will otherwise be the same. Hence, when scaling is set to 1, there is no additivity. Note that this same logic goes into GLMs using tools like FEAT: they also just assume a convolution of the event boxcar. Still, building a realistic non-linearity would be valuable, although it would likely depend largely on empirical details, since different events will elicit different amounts of additivity.
import numpy as np
from brainiak.utils import fmrisim as sim
import matplotlib.pyplot as plt
# Inputs for generate_stimfunction
onsets = [10, 12]
event_durations = [1]
tr_duration = 2
duration = 100
scale_function = 1
# Create the time course for the signal to be generated
stimfunction = sim.generate_stimfunction(onsets=onsets,
                                         event_durations=event_durations,
                                         total_time=duration,
                                         )
# Create the signal function
signal_function = sim.convolve_hrf(stimfunction=stimfunction,
                                   tr_duration=tr_duration,
                                   scale_function=scale_function,
                                   )
plt.plot(signal_function)
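To make the additivity point concrete, here is a plain-numpy sketch of the scaling logic, assuming a simple double-gamma HRF (this is an illustration of the idea, not fmrisim's actual HRF or code):

```python
import numpy as np

# Double-gamma HRF sketch (gamma pdfs with shapes 6 and 12, unit scale)
t = np.arange(0, 30, 0.1)
hrf = t**5 * np.exp(-t) / 120 - 0.35 * t**11 * np.exp(-t) / 39916800

# Two events 2 s apart, at 0.1 s resolution
stim = np.zeros(1000)
stim[[100, 120]] = 1
raw = np.convolve(stim, hrf)[:1000]

# One event, for comparison
single = np.zeros(1000)
single[100] = 1
one_event = np.convolve(single, hrf)[:1000]

# Unscaled, the two-event peak approaches twice the one-event peak (pure
# additivity); dividing by the max -- which is what scaling to 1 amounts
# to -- forces the peak to 1 no matter how many events overlap.
scaled = raw / raw.max()
print(raw.max() / one_event.max())  # noticeably above 1, approaching 2
print(scaled.max())                 # 1.0
```

A realistic subadditive model would sit somewhere between these two extremes, which is exactly the empirical gap described above.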
isc(data). Do you want to share any specific barriers or code issues you are having?
Hi! I'm also new to BrainIAK and working on implementing an ISC analysis. I went through the ISC tutorial (number 10) on Colab and am now running everything locally with my own dataset.
I created a conda env and installed brainiak from there, exactly as recommended. However, when importing the isc methods I receive the following error:
from brainiak.isc import isc, isfc, permutation_isc
>>> ModuleNotFoundError: No module named 'brainiak.isc'
I have tried various ways of importing isc, but so far no success. Weirdly, it does work on Colab, and all my other brainiak imports work perfectly. Do you have any idea what the cause is?
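One way to narrow this down is to check which brainiak your local Python actually resolves, since an older release installed elsewhere on the path would lack the module (if I remember correctly, early BrainIAK versions exposed ISC under brainiak.isfc rather than brainiak.isc). A generic, hedged check:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if `name` is importable in the current environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # find_spec raises this when a parent package is missing entirely
        return False

# In your environment you would check, e.g.:
#   has_module("brainiak"), has_module("brainiak.isc")
print(has_module("importlib"))  # True in any modern Python
```

If brainiak imports but brainiak.isc does not, printing brainiak.__version__ and brainiak.__file__ should reveal whether conda is picking up an old or shadowed installation.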
Hi all ... I'm using FCMA with BrainIAK for my analysis. Everything made sense to me until I started to do the permutation test mentioned in Wang et al., 2015 for information mapping. Based on my understanding, after I randomize the data, the classification accuracy (per voxel) I obtain in the tuple during the feature selection step should be around chance level -- just as a sanity check. However, the top voxel's accuracy I got for each permutation run is around 60%, which is way above chance and does not make sense to me -- it suggests that even if I throw random time series into the classifier, the performance is still above chance. To make sure this is replicable, I ran the FCMA permutation test with the face-scene dataset from the BrainIAK datasets and got a similar result -- with the original data, the top voxel accuracy is around 80%, and for the permutation test, after randomization, the top voxel accuracy is around 70% (still above chance).
The only change I made to the feature selection script is adding the RandomType argument, as follows:
raw_data, _, labels = prepare_fcma_data(images, epoch_list, mask, random=RandomType.REPRODUCIBLE)
I'm not sure which step I did wrong that ended up causing this problem, or whether I misunderstood the concept and this above-chance performance is expected for a permutation test. I also posted this same question as an issue on the BrainIAK GitHub. I'm grateful for any sort of help, and thank you all in advance!
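Part of this may be a max-statistic effect rather than a bug: the "top voxel" accuracy is the maximum over tens of thousands of voxels, and the maximum of many chance-level accuracies sits well above 50% even for pure noise. A quick numpy illustration with made-up numbers (not FCMA code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_test = 30000, 40  # made-up: voxel count, test samples per voxel

# Under the null, every voxel's accuracy is Binomial(n_test, 0.5) / n_test
chance_acc = rng.binomial(n_test, 0.5, size=n_voxels) / n_test

print(chance_acc.mean())  # close to 0.5, as expected
print(chance_acc.max())   # far above 0.5, despite pure chance everywhere
```

For that reason, max-statistic inference compares the observed top-voxel accuracy against the distribution of each permutation's *maximum* accuracy, not against a flat 50%.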
Hi all, I want to extract the whole-brain voxel-wise time series. I have pre-processed my data in SPM and my data dimensions are 79 x 95 x 79. I have a few questions on which I would like some guidance.
I am a newbie to BrainIAK, so any help would be really appreciated.
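In case it helps with the first step, extracting a time-by-voxels matrix from a 4D volume is usually just masking and transposing. A minimal numpy sketch with stand-in sizes (in practice the 4D array would come from nibabel, e.g. nib.load("func.nii").get_fdata() on the SPM output, and the mask from your segmentation):

```python
import numpy as np

# Stand-in for a 4D functional image; real data would be e.g. 79 x 95 x 79 x n_TR
vol = np.random.randn(10, 12, 10, 50)   # x, y, z, TR (made-up sizes)

# Crude stand-in mask: keep voxels with any temporal variance
mask = vol.std(axis=3) > 0

timeseries = vol[mask].T                # TR x voxels
print(timeseries.shape)                 # (50, number of in-mask voxels)
```

This TR x voxels layout (stacked into a third subjects axis) is what BrainIAK's ISC functions expect per subject.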
C, the SVC hyperparameter, in order to make the margin narrower. But it turns out that C=1, C=100, and C=10000 show no differences in terms of classification accuracy or classification confidence. My guess is that the precomputed kernel may play a role? Anyway, I do not fully understand why this is the case; any help on either SVM in general or FCMA in particular would be super helpful! Thank you all in advance!
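On the C question specifically: one common reason C stops mattering is that the training patterns are already separable with a comfortable margin, so the slack penalty that C weights never kicks in. A toy scikit-learn sketch with a precomputed linear kernel (made-up data, not FCMA's correlation kernel):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated classes: C has nothing to trade off
X = np.vstack([rng.normal(-5, 1, size=(50, 10)),
               rng.normal(5, 1, size=(50, 10))])
y = np.array([0] * 50 + [1] * 50)

K = X @ X.T  # precomputed linear kernel (Gram matrix)

accs = []
for C in (1, 100, 10000):
    accs.append(SVC(kernel="precomputed", C=C).fit(K, y).score(K, y))
print(accs)  # identical accuracy for every C
```

If FCMA's normalized kernels leave the patterns near-separable, accuracy and confidence would be similarly insensitive to C; lowering C well below 1 is the direction more likely to change the solution.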
Hi, I am using the brainiak.reprsimil package and have pre-processed the fMRI data the way it is needed to perform a GBRSA. All data are in the required format, i.e. the ROIs to be fitted are in a list (of length #subjects), with each element of the list being an array of shape time points x voxels, where the runs are concatenated along the time points. Furthermore, the design matrix is as needed, and the scan onsets and some nuisance regressors are provided correctly. When initiating the instance I set auto_nuisance=False, in order to use only the given nuisance regressors.
The GBRSA has been running for a while now and does not seem to come to an end. Does anyone have experience with how long this may take for about 20 subjects with about 2000 time points each, or a way to parallelise the computation and monitor its progress?
Any advice or idea would be helpful, thank you very much! :)
Hello! I am helping a grad student use the searchlight code on a really big dataset/analysis, and we are running into a strange problem I have never witnessed. In particular, the code runs for approximately 8 hours, then just seems to freeze and stops producing any more outputs, even if we let it run for multiple days.
To give some more details: the kernel computation takes 8-10 s, we have 230k voxels, and we have used up to 120 cores to run this, although we get similar results with fewer cores. The way we track progress is that we print to a log file the timestamp at which every searchlight was run. No error messages are printed in the log; it just times out after hanging for multiple days without producing a new result. Using a back-of-the-envelope calculation, this code should only take 5 hours on 120 cores, so it is already running slow.
@manojneuro @mjanderson09
Hi everyone, I am trying to calculate ISC over time. Right now the data structure is TR x voxels x subjects. The code that I am working with is the following:
n_TR = 190  # total number of time points
window_width = 10
T_iscs = []
for start in np.arange(0, n_TR, 10):
    window_data = data[start:start + 9, :, :]
    window_isc = isc(window_data, pairwise=False)
    T_iscs.append(window_isc)
It gives me an output as a list, with len(T_iscs) being 63. However, all the values are either nan or 1.
How can I solve this?
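Two things jump out: the slice data[start:start+9, :, :] takes 9 TRs rather than window_width, and correlations over such short windows are fragile -- any near-constant segment degenerates to nan, and windows with only a couple of usable samples give trivial correlations of 1. As a reference point, here is a plain-numpy leave-one-out sliding-window ISC (an illustrative reimplementation, not brainiak.isc.isc):

```python
import numpy as np

def sliding_window_isc(data, window_width):
    """Leave-one-out ISC per sliding window.
    data: TR x voxels x subjects array. Illustrative reimplementation,
    not brainiak.isc.isc. Returns one (subjects x voxels) array per window."""
    n_tr, n_vox, n_sub = data.shape
    iscs = []
    # Only full windows, so no short window at the end
    for start in range(0, n_tr - window_width + 1, window_width):
        win = data[start:start + window_width]   # window_width TRs, not 9
        win_isc = np.empty((n_sub, n_vox))
        for s in range(n_sub):
            # Mean time course of all other subjects
            others = np.delete(win, s, axis=2).mean(axis=2)
            for v in range(n_vox):
                win_isc[s, v] = np.corrcoef(win[:, v, s], others[:, v])[0, 1]
        iscs.append(win_isc)
    return iscs
```

Also worth checking: with n_TR = 190 and a step of 10, the loop runs 19 times, so a list of length 63 suggests the snippet shown is not quite what was executed (e.g. T_iscs accumulated across repeated runs, or data.shape[0] is not 190).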
Hello everyone, I'm a beginner with BrainIAK. Where can I find the utils.py used in this class: https://brainiak.org/tutorials/02-data-handling/?
https://github.com/brainiak/brainiak-tutorials/blob/master/tutorials/utils.py
Hi everyone, I am trying to install BrainIAK on a Mac Mini. I installed Miniconda and activated it.
While installing BrainIAK I encounter the following problem. Please help. Thanks!
Verifying transaction: | WARNING conda.core.path_actions:verify(962): Unable to create environments file. Path not writable.
environment location: /Users/wu_lab/.conda/environments.txt
done
Executing transaction: | WARNING conda.core.envs_manager:register_env(50): Unable to register environment. Path not writable or missing.
environment location: /Users/wu_lab/miniconda3/envs/venv
registry file: /Users/wu_lab/.conda/environments.txt
Hi everyone! BrainIAK is such a great program; thanks for making it so available to everyone. I am currently trying to do an ISFC analysis on 2 groups. Specifically, I have independently calculated the ISFC for my two groups (i.e., I only calculated the within-group ISFCs), and I would now like to compare them -- i.e., find the edges/connections where one group has significantly stronger connection strength than the other. Kind of like a two-group ISC permutation analysis, but with ISFC data. Does anyone know what the appropriate statistical test would be for this data? Would I be able to use the permutation_isc function on ISFC data?
thanks!
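I can't say what the canonical test is, but a straightforward option is a group-label permutation test on each edge: pool the subject-level ISFC estimates, shuffle which subjects belong to which group, and build a null distribution for the group difference. A sketch assuming each subject's ISFC has been vectorized into an edge vector (illustrative code, the function name is made up, not a BrainIAK function):

```python
import numpy as np

def edge_permutation_test(isfc_a, isfc_b, n_perm=1000, seed=0):
    """Two-group permutation test on each ISFC edge.
    isfc_a: subjects_a x edges, isfc_b: subjects_b x edges.
    Returns a two-sided permutation p-value per edge."""
    rng = np.random.default_rng(seed)
    observed = isfc_a.mean(0) - isfc_b.mean(0)
    pooled = np.vstack([isfc_a, isfc_b])
    n_a = isfc_a.shape[0]
    count = np.zeros(observed.shape)
    for _ in range(n_perm):
        # Shuffle group labels and recompute the group-mean difference
        perm = rng.permutation(pooled.shape[0])
        diff = pooled[perm[:n_a]].mean(0) - pooled[perm[n_a:]].mean(0)
        count += np.abs(diff) >= np.abs(observed)
    return (count + 1) / (n_perm + 1)
```

Two caveats: leave-one-out ISFC values are not independent across subjects, so the p-values should be read cautiously, and you would want multiple-comparison correction (e.g. FDR) across edges. If I recall the API correctly, permutation_isc accepts a group_assignment argument and operates on subject-by-feature arrays, so it may also work on vectorized ISFC edges -- worth checking the docs.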
Hello everyone, I am currently trying to compute spatial ISC. I am having a little trouble understanding the input data format. Right now, following the tutorial, I am also unable to plot the ISCs: instead of plotting the mean linear correlation, it is plotting values for 300 time points with subjects on the X-axis.
When computing spatial ISC, should my data format be voxels x TR x subjects? Do I need to change my data format beforehand, given that there are 2 transposes in 2 different steps?
Any help is appreciated. Thanks!
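On the format question: temporal ISC takes TR x voxels x subjects and returns one correlation per voxel; for spatial ISC the voxel and TR axes swap roles, so the same function returns one correlation per time point -- which would also explain a plot running over 300 time points. A minimal numpy sketch of the reshaping (sizes are made up; the isc call itself would be BrainIAK's):

```python
import numpy as np

# Made-up sizes: 300 TRs, 1000 voxels, 12 subjects
data = np.random.randn(300, 1000, 12)    # TR x voxels x subjects (temporal ISC input)

# Swap the first two axes so spatial patterns play the role of time courses
spatial_input = np.swapaxes(data, 0, 1)  # voxels x TR x subjects (spatial ISC input)

print(data.shape)           # (300, 1000, 12)
print(spatial_input.shape)  # (1000, 300, 12)
```

So whether you transpose beforehand depends on which orientation your array starts in; printing data.shape before each isc call is the quickest way to confirm which axis is being correlated over.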