Hello, @kshitijd20. There are several links suggested in the message. Have you tried the one starting with "127"?
If that doesn't work, could you please confirm that your Docker for Windows installation is properly set up by testing Nginx in your browser, following the official Docker documentation?
`scale_function` to 1. So imagine two events that occur within 1 second of each other. From what we know about the brain's response, there is a subadditivity to those two presentations: the evoked response will be larger than if only one event had occurred, but it likely won't be twice as large. If you set scaling to 0, then the peak of the outputted stimulus response will be twice as high, which would be wrong. If you set scaling to 1, then the peak for two events will be 1, just like the peak for one event, although the shape of the function will be the same. Hence, when scaling is set to 1, there is no additivity. Note that the same logic goes into GLMs using tools like FEAT: they also just assume a convolution of the event boxcar. Still, building a realistic non-linearity would be valuable, although it would likely depend heavily on empirical details, since different events elicit different amounts of additivity.
```python
import numpy as np
from brainiak.utils import fmrisim as sim
import matplotlib.pyplot as plt

# Inputs for generate_stimfunction
onsets = [10, 12]
event_durations = [1]  # value was lost in the original formatting; [1] assumed
tr_duration = 2
duration = 100
scale_function = 1

# Create the time course for the signal to be generated
stimfunction = sim.generate_stimfunction(onsets=onsets,
                                         event_durations=event_durations,
                                         total_time=duration,
                                         )

# Create the signal function
signal_function = sim.convolve_hrf(stimfunction=stimfunction,
                                   tr_duration=tr_duration,
                                   scale_function=scale_function,
                                   )

plt.plot(signal_function)
```
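The scaling logic can be illustrated without brainiak itself. A minimal NumPy sketch (the toy HRF shape and event timing here are assumptions for illustration, not fmrisim internals): two events 1 s apart, convolved with an HRF, give a peak well above 1 under linear addition, while normalizing the output (the scale-to-1 behavior) caps the peak at exactly 1.

```python
import numpy as np

# Toy gamma-like HRF peaking near 5 s, normalized to peak 1.
tr = 0.1
t = np.arange(0, 30, tr)
hrf = t**5 * np.exp(-t)
hrf /= hrf.max()

# Two events 1 s apart, as in the subadditivity example above.
stim = np.zeros_like(t)
stim[[int(10 / tr), int(11 / tr)]] = 1

response = np.convolve(stim, hrf)[:len(t)]
print(response.max())                     # linear addition: peak well above 1
print((response / response.max()).max())  # normalized (scaling = 1): peak exactly 1
```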
`isc(data)`. Do you want to share any specific barriers or code issues you are having?
Hi! I'm also new to brainiak and working on implementing an ISC analysis. I went through the ISC tutorial (number 10) on Colab and am now running everything locally with my own dataset.
I created a conda env and installed brainiak from there, exactly as recommended. However, when importing the isc methods I receive the following error:
```python
>>> from brainiak.isc import isc, isfc, permutation_isc
ModuleNotFoundError: No module named 'brainiak.isc'
```
I tried several different ways of importing isc, but so far no success. Weirdly, it does work on Colab, and all my other brainiak imports work perfectly. Do you have any idea what the cause is?
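One hedged way to narrow down this kind of `ModuleNotFoundError` is to check which interpreter and which brainiak install Python is actually picking up (the module name here is taken from the error above; everything else is standard library):

```python
import importlib.util
import sys

# Print the interpreter in use and where (if anywhere) brainiak resolves from.
# If this path is not inside the conda env, the wrong Python is being used.
print(sys.executable)
spec = importlib.util.find_spec("brainiak")
print(spec.origin if spec else "brainiak not found on this interpreter")
```

If the printed path points outside the conda env (or brainiak is not found at all), activating the env or reinstalling into it would be the first thing to try.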
Hi all ... I'm using FCMA with BrainIAK for my analysis. Everything made sense to me until I started the permutation test described in Wang et al., 2015 for information mapping. Based on my understanding, after I randomize the data, the per-voxel classification accuracy I obtain in the tuple during the feature selection step should be around chance level, just as a sanity check. However, the top voxel's accuracy in each permutation run is around 60%, which is way above chance and does not make sense to me: it suggests that even if I throw random time series into the classifier, performance is still above chance. To make sure this is replicable, I ran the FCMA permutation test with the face-scene dataset from the BrainIAK datasets and got a similar result: with the original data, the top voxel accuracy is around 80%, and for the permutation test, after randomization, the top voxel accuracy is around 70% (still above chance).
The only change I made to the feature selection script is adding the RandomType argument, as follows:

```python
from brainiak.fcma.preprocessing import prepare_fcma_data, RandomType

raw_data, _, labels = prepare_fcma_data(images, epoch_list, mask,
                                        random=RandomType.REPRODUCIBLE)
```
I'm not sure which step I got wrong that caused this problem, or whether I misunderstood the concept and this above-chance performance is expected for a permutation test. I also posted this question as an issue on the BrainIAK GitHub. I'm grateful for any sort of help, and thank you all in advance!
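One statistical point that may be relevant here (a hedged aside, not a diagnosis of the script): the accuracy of the *top* voxel is a maximum taken over many voxels, and a max statistic sits well above 50% even under a pure null. A quick NumPy simulation of chance-level per-voxel accuracies shows this:

```python
import numpy as np

# Simulated per-voxel accuracies under the null: each voxel's accuracy is
# binomial at 50% chance over n_epochs test examples (numbers are made up).
rng = np.random.default_rng(0)
n_voxels, n_epochs = 10_000, 40
accs = rng.binomial(n_epochs, 0.5, size=n_voxels) / n_epochs

print(accs.mean())  # close to 0.5: the average voxel is at chance
print(accs.max())   # the top voxel is far above chance by selection alone
```

This is why permutation-based significance for information mapping is usually assessed against the null distribution of the *maximum* accuracy, not against 50%.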
Hi all, I want to extract whole-brain voxel-wise time series. I have preprocessed my data in SPM, and my data dimensions are 79 x 95 x 79. I have a few questions I would like some guidance on.
I am a newbie with BrainIAK, so any help would be really appreciated.
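A minimal sketch of one common way to do this, assuming the preprocessed data are a 4D NIfTI that would in practice be loaded with nibabel (`nib.load('func.nii').get_fdata()`); the zeros array here just stands in for real data:

```python
import numpy as np

# Stand-in for a 4D volume of shape 79 x 95 x 79 x n_TRs
# (in practice: data = nib.load('func.nii').get_fdata()).
data = np.zeros((79, 95, 79, 5))

# Collapse the three spatial axes to get a (voxels x time) matrix.
timeseries = data.reshape(-1, data.shape[-1])
print(timeseries.shape)  # (592895, 5)
```

Each row of `timeseries` is then one voxel's time course; a brain mask can be applied first to drop non-brain voxels.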
`C`, the SVC hyperparameter, in order to make the margin narrower. But it turns out that `C=10000` makes no difference in terms of classification accuracy or classification confidence. My guess is that the precomputed kernel may play a role? Anyway, I do not fully understand why this is the case; any help on either SVMs in general or FCMA specifically would be super helpful! Thank you all in advance!
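For reference, a self-contained sketch of how a precomputed kernel is passed to scikit-learn's SVC (the data here are random, just to show the API; whether a large `C` changes anything will depend on the actual kernel and how separable the data are):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
y = np.array([0, 1] * 10)

# With kernel='precomputed', fit/predict take a Gram matrix, not features.
K = X @ X.T
clf = SVC(kernel='precomputed', C=10000)
clf.fit(K, y)
pred = clf.predict(K)
```

Note that if the training data are already perfectly separable at a small `C`, raising `C` further leaves the solution (and hence accuracy) unchanged, which is one kernel-agnostic reason `C=1` and `C=10000` can look identical.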
Hi, I am using the brainiak.reprsimil package and have preprocessed the fMRI data as required for a GBRSA. All data are in the needed format: the ROIs to be fitted are in a list (of length n_subjects), with each element an array of shape time points x voxels and the runs concatenated along the time dimension. Furthermore, the design matrix is as required, and the scan onsets and some nuisance regressors are provided correctly. When instantiating GBRSA I set auto_nuisance=False, in order to use only the given nuisance regressors.
The GBRSA has been running for a while now and does not seem to come to an end. Does anyone have experience with how long this may take for about 20 subjects with about 2000 time points each, or a way to parallelise the computation and monitor its progress?
Any advice or idea would be helpful, thank you very much! :)
Hello! I am helping a grad student use the searchlight code on a really big dataset/analysis, and we are running into a strange problem I have never seen before. The code runs for approximately 8 hours, then seems to freeze and stops producing any more output, even if we let it run for multiple days.
To give some more detail: each kernel computation takes 8-10 s, we have 230k voxels, and we have used up to 120 cores to run this, although we get similar results with fewer cores. We track progress by printing to a log file the timestamp at which every searchlight finishes. No error messages appear in the log; it just hangs for multiple days without producing a new result until the job times out. By a back-of-the-envelope calculation, this code should take only about 5 hours on 120 cores, so it is already running slow.
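The back-of-the-envelope figure can be checked directly (using the midpoint of the reported 8-10 s per-searchlight time and assuming perfect parallelism across cores):

```python
n_voxels = 230_000
secs_per_searchlight = 9   # midpoint of the reported 8-10 s
n_cores = 120

# Total serial work divided evenly across cores, in hours.
hours = n_voxels * secs_per_searchlight / n_cores / 3600
print(round(hours, 1))  # ~4.8, i.e. roughly the 5 hours quoted above
```

So an 8-hour run that then hangs is consistent with a subset of workers stalling rather than with the estimate simply being too optimistic.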