    CameronTEllis
    @CameronTEllis
    Hi @crewalsh Sorry for the delay, if this hasn't already been answered. Unfortunately the BrainIAK ISC isn't set up to deal with NaNs in the left-out participant. The solution I have for that is to restrict which time points I consider for an ISC based on the specific time points in the LOO participant that are not NaNs. This means the time course will be different lengths for different participants, but it will prevent NaN issues.
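    A minimal numpy sketch of that workaround (the names and shapes here are assumptions, not BrainIAK API; `data` is taken to be a TR x voxel x subject array):

```python
import numpy as np

def valid_trs_for_loo(data, loo_subject):
    """Boolean mask of TRs where the left-out participant has no NaNs.

    data: array of shape (n_TRs, n_voxels, n_subjects)
    loo_subject: index of the left-out participant
    """
    return ~np.isnan(data[:, :, loo_subject]).any(axis=1)

# keep only those time points before computing ISC for this participant:
# data_clean = data[valid_trs_for_loo(data, loo_subject)]
```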
    1 reply
    vyaivo
    @vyaivo
    Hi @squinto13 , thanks for your message about the IEM module. (I believe we met briefly at a conference through Tommy, actually! ) My apologies for the delay, my previous reply to your thread disappeared into Gitter. I adapted the IEM module for Brainiak from code written by David Huberdeau. I definitely agree that it should be expanded and generalized to capture a wider range of stimulus spaces. Adapting some of the tutorial code to the sklearn format would be a great place to start! If you are interested in working on this we would be happy to have it in Brainiak. I am happy to answer any further questions about adapting it for Brainiak (along with @mihaic), and to review the code along the way. Feel free to reach out here or by email (vy.vo@intel.com).
    Peeta Li
    @peetal

    Hi all ... I'm using FCMA with BrainIAK for my analysis. Everything makes sense to me until I started to do the permutation test, as mentioned in Wang et al., 2015, for information mapping. Based on my understanding, after I randomize the data, the classification accuracy (per voxel) I obtain in the tuple during the feature selection step should be around chance level -- just as a sanity check. However, the top voxel's accuracy I got for each permutation run is around 60%, which is way above chance and does not make sense to me -- it suggests that even if I throw random time series into the classifier, the performance is still above chance. To make sure that this is replicable, I ran the FCMA permutation test with the face-scene dataset from the BrainIAK dataset and got a similar result -- with the original data, the top voxel accuracy is around 80%, and for the permutation test, after randomization, the top voxel accuracy is around 70% (still above chance).

    The only change I made to the feature selection script is adding the RandomType argument, as follows:
    raw_data, _, labels = prepare_fcma_data(images, epoch_list, mask, random=RandomType.REPRODUCIBLE)

    I'm not sure which step I did wrong that ended up causing this problem, or if I misunderstood the concept and this above-chance performance is expected for a permutation test. I also posted this same question as an issue on the BrainIAK GitHub. I'm grateful for any sort of help, and thank you all in advance!

    4 replies
    Ghupo
    @Ghupo

    Hi all, I want to extract the whole-brain voxel-wise time series. I have pre-processed my data in SPM and my data dimension is 79 x 95 x 79. I have a few questions on which I would like some guidance:

    1. Do I have to create the mask of the whole brain (as mentioned in the 01-Setup In[2])? Secondly, on what basis do I choose my mask dimension (the volume we want to create)? Is it going to be the same dimension as my con_000i.nii image?
    2. Next, after applying the mask to the data, in what format am I going to get the output? voxel x TR x intensity?

    I am a newbie in BrainIAK, so any help would be really appreciated.
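    On question 2: with numpy boolean indexing, applying a 3D mask to a 4D image yields a 2D voxel x TR array (the array values are the intensities, so there is no separate intensity axis). A toy sketch with made-up shapes:

```python
import numpy as np

# toy stand-ins for a preprocessed 4D BOLD image and a brain mask
bold = np.random.randn(10, 12, 10, 50)    # (x, y, z, TR), toy size
mask = np.zeros((10, 12, 10), dtype=bool)
mask[2:8, 3:9, 2:8] = True                # hypothetical "brain" region

masked = bold[mask]                       # shape: (n_voxels_in_mask, n_TRs)
print(masked.shape)                       # (216, 50)
```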

    4 replies
    Limbicode Ⓥ
    @limbicode_twitter
    Hi there. Just tried to install BrainIAK using "conda install -c brainiak -c defaults -c conda-forge brainiak".
    It does not work. I get the error "PackagesNotFoundError":
    the brainiak package is not available from the current channels.
    Thanks for the help!
    6 replies
    Peeta Li
    @peetal
    Hi all ... I'm using FCMA with BrainIAK for my analysis. I'm trying to do some analyses with the decision function outputs. I noticed that the classification outputs are all between -0.5 and 0.5 for FCMA classification, meaning that all data points fall within the margin of the SVC. I'm not sure whether this is necessarily a bad thing. The problem is, I tried to broaden the classifier's confidence range by increasing the value of C, the SVC hyperparameter, in order to make the margin narrower, but it turns out that C=1, C=100 and C=10000 make no difference in terms of classification accuracy or classification confidence. My guess is that the precomputed kernel may play a role? Anyway, I do not fully understand why this is the case; any help on either just SVM or just FCMA would be super helpful! Thank you all in advance!
    6 replies
    anoop023
    @anoop023
    Hi all... I am using ADNI .nii data for a project and need to do 2 things:
    1. Extract the time series data from the .nii files.
    2. Map the voxel-level information to ROI-level information.
      I am trying to use the BrainIAK tool for task 1, but I am not sure whether I have to do some kind of preprocessing of the ADNI fMRI data (which is already in .nii format); if yes, then which preprocessing tasks are required.
      Secondly, is there a common place in the code where the file should be put, or do I have to change the paths used in the code in all places?
      Thank you all in advance.
    2 replies
    manojneuro
    @manojneuro
    @anoop023 the analysis techniques in BrainIAK assume that all the pre-processing steps you deem necessary are completed before you start using BrainIAK. You can use your preferred pre-processing pipeline.
    You may also load in your data for analysis using the steps outlined in the tutorials. You can check these out here: https://brainiak.org/tutorials/. You can start by looking at the second tutorial.
    laierkasten23
    @laierkasten23

    Hi, I am using the brainiak.reprsimil package and have pre-processed the fMRI data the way it is necessary to perform a GBRSA. All data are in the needed format, i.e. the ROIs to be fitted are in a list (of length #subjects), with each element of the list being an array of shape time points x voxels, whereby the runs are concatenated along the time points. Furthermore, the design matrix is as needed, and the scan onsets and some nuisance regressors are provided correctly. When initiating the instance I set auto_nuisance=False, in order to rely entirely on the given nuisance regressors.

    The GBRSA has been running for a while now and does not seem to come to an end. Does anyone have experience of how long this may take for about 20 subjects with about 2000 time points each, or a way to parallelise the computation and visualise the progress?
    Any advice or idea would be helpful, thank you very much! :)

    9 replies
    Sheetal Jadhav
    @sheetalouette_twitter
    Can anybody tell me where the readme file for the latatt dataset is (tutorial no. 8, named connectivity)?
    2 replies
    Ruimin-Kochi
    @Ruimin-Kochi
    Dear all, I am just a beginner here. It takes me 20 hours to finish processing a dataset using the run_searchlight function. How should I improve the processing speed? Should I increase 'max_blk_edge' as much as possible (e.g., to the same number as the cores of my computer)?
    CameronTEllis
    @CameronTEllis
    Hi @Ruimin-Kochi, sorry to hear things are running so slowly. The searchlight function was optimized for parallel computing in a cluster environment. While you can run it on your computer, unless you have a certain computer environment the analysis will run in serial and thus be slow. However, 20 hrs is very slow for most searchlight computations (e.g., SVM). How long does it take to run the computation you are running on a single searchlight? You can test this by making a mask containing only one voxel and then running the searchlight code on that mask to see how long it takes.
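    A sketch of that one-voxel timing test, assuming the brain mask has already been loaded as a 3D boolean numpy array (file loading omitted; the mask here is a toy stand-in):

```python
import numpy as np

mask = np.zeros((10, 10, 10), dtype=bool)  # stand-in for a real brain mask
mask[3:7, 3:7, 3:7] = True

# keep a single in-mask voxel to time one searchlight evaluation
coords = np.argwhere(mask)
single_voxel_mask = np.zeros_like(mask)
single_voxel_mask[tuple(coords[len(coords) // 2])] = True
assert single_voxel_mask.sum() == 1
# pass single_voxel_mask to the searchlight in place of the full mask,
# and time how long one run takes
```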
    Ruimin-Kochi
    @Ruimin-Kochi
    Dear @CameronTEllis, thank you for your kind reply. By increasing the value of 'max_blk_edge', I can now complete the searchlight code in 2-3 hrs. I am performing a searchlight analysis on the output of fMRIPrep: "space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz" for the EPI input and "space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz" for the mask input. However, when I view the results, I find a few outliers. Is this normal, or am I making the wrong input selection? Do you have any suggestions or demo examples for handling the fMRIPrep output in BrainIAK?
    CameronTEllis
    @CameronTEllis
    @Ruimin-Kochi, that is great that you sped it up. I am not sure what you mean by 'a few outliers'. The fMRIPrep outputs should be directly usable by BrainIAK functions like searchlight. Indeed, we have a few papers using fMRIPrep and BrainIAK together (e.g. Yates, Ellis, Turk-Browne, 2020).
    Ghupo
    @Ghupo
    Hi again @CameronTEllis, I am facing a 'memory error' while trying to compute voxel-wise whole-brain ISC for 21 subjects. It works fine when I run it for 5-7 subjects, but not for more. The system I am working on is an Intel i7 with 32GB RAM. Is there a workaround to use less memory while calculating ISC?
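    One possible workaround (a numpy-only sketch, not the brainiak.isc implementation) is to compute a leave-one-out ISC over voxel chunks, so only a slice of the data is held in the correlation step at any one time:

```python
import numpy as np

def loo_isc_chunked(data, chunk=10000):
    """Leave-one-out ISC per voxel, processed in voxel chunks.

    data: (n_TRs, n_voxels, n_subjects)
    returns: (n_subjects, n_voxels) array of Pearson correlations
    """
    n_TRs, n_voxels, n_subj = data.shape
    iscs = np.empty((n_subj, n_voxels))
    for start in range(0, n_voxels, chunk):
        d = data[:, start:start + chunk, :]
        for s in range(n_subj):
            # mean time series of all other subjects in this chunk
            others = np.mean(np.delete(d, s, axis=2), axis=2)
            a = d[:, :, s] - d[:, :, s].mean(axis=0)
            b = others - others.mean(axis=0)
            denom = np.sqrt((a ** 2).sum(0) * (b ** 2).sum(0))
            iscs[s, start:start + chunk] = (a * b).sum(0) / denom
    return iscs
```

    Smaller chunks bound the peak memory at the cost of more loop iterations.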
    5 replies
    Solim LeGris
    @AlephG
    Hi, I am a cognitive neuroscience undergrad and I am looking for a first project applying ML to neuroscience. Any ideas where I could look for a feasible 2-week project?
    2 replies
    CameronTEllis
    @CameronTEllis

    Hello! I am helping a grad student use the searchlight code on a really big dataset/analysis and we are running into a strange problem I have never witnessed. In particular, the code runs for approximately 8 hours, then just seems to freeze and stops producing any more outputs, even if we let it run for multiple days.

    To give some more details, the kernel computation takes 8-10s, we have 230k voxels, and we have used up to 120 cores to run this, although we get similar results with fewer cores. The way we track progress is that we print to a log file the time stamp at which every searchlight was run. No error messages are printed in the log; it just hangs for multiple days without producing a new result and then times out. Using a back-of-the-envelope calculation, this code should only take 5 hours on 120 cores, so it is already running slow.

    @manojneuro @mjanderson09

    7 replies
    0I24N63
    @congzhaoyang
    Hello, everyone, I'm a beginner of brainiak. Where can I find utils.py of this class in https://brainiak.org/tutorials/02-data-handling/?
    1 reply
    Ghupo
    @Ghupo

    Hi everyone. I want to calculate ISC over time. Right now the data structure is TR x Voxels x Subjects. The code that I am working with is the following:

    n_TR = 190  # Total time-points
    window_width = 10
    T_iscs = []
    for start in np.arange(0, n_TR, 10):
        window_data = data[start:start+9, 0:, 0:]
        window_isc = isc(window_data, pairwise=False)
        T_iscs.append(window_isc)

    It is giving me an output as a list with len(window_iscs) as 63. However, all the values are either nan or 1.

    How can I solve this?
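    For reference, a numpy-only sketch of the windowing step with toy data (a possible fix, not one confirmed in the thread): note that data[start:start+9] takes only 9 TRs rather than window_width = 10, and that with so few TRs per window the per-pair correlations are very unstable.

```python
import numpy as np

# toy data with the same TR x voxel x subject layout
data = np.random.randn(190, 50, 5)
n_TR, window_width = data.shape[0], 10

windows = []
for start in np.arange(0, n_TR - window_width + 1, window_width):
    # start:start+window_width takes exactly window_width TRs
    # (start:start+9 would take only 9)
    windows.append(data[start:start + window_width])

print(len(windows))          # 19 non-overlapping windows
```

    Each window would then be passed to isc(window, pairwise=False) as in the loop above.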

    2 replies
    Paulo Cardoso
    @cardosop

    Hello, everyone, I'm a beginner of brainiak. Where can I find utils.py of this class in https://brainiak.org/tutorials/02-data-handling/?

    https://github.com/brainiak/brainiak-tutorials/blob/master/tutorials/utils.py

    elizabeth19r
    @elizabeth19r
    Hi, I want to use FCMA but am wondering if I can use a network ICA template map as my binary mask in the data preparation phase, rather than a whole-brain binary mask. If so, what should I use to make an ICA template map binary?
    1 reply
    Shihweiwu
    @Shihweiwu

    Hi everyone, I am trying to install BrainIAK on a Mac Mini. I installed miniconda and activated it.
    While installing BrainIAK I encountered the following problem. Please help. Thanks!

    Verifying transaction: | WARNING conda.core.path_actions:verify(962): Unable to create environments file. Path not writable.
    environment location: /Users/wu_lab/.conda/environments.txt

    done
    Executing transaction: | WARNING conda.core.envs_manager:register_env(50): Unable to register environment. Path not writable or missing.
    environment location: /Users/wu_lab/miniconda3/envs/venv
    registry file: /Users/wu_lab/.conda/environments.txt

    1 reply
    Ryann Tansey
    @rtansey

    Hi everyone! BrainIAK is such a great program, thanks for making it so available to everyone. I am currently trying to do an ISFC analysis on 2 groups. Specifically, I have independently calculated the ISFC for my two groups (i.e., I only calculated the within-group ISFCs) and I would now like to compare them - i.e. find the edges/connections where one group has a significantly stronger connection strength than the other. Kind of like a 2-group ISC permutation analysis, but with ISFC data. Does anyone know what the appropriate statistical test would be for these data? Would I be able to use the permutation_isc function on ISFC data?

    thanks!

    Ghupo
    @Ghupo

    Hello everyone, I am currently trying to compute spatial ISC. I am having a little trouble understanding the input data format. Right now, following the tutorial, I am also unable to plot the ISCs. Instead of plotting the mean linear correlation, it is plotting for 300 time-points, taking subjects on the X-axis.

    While computing spatial ISC, should my data format be voxel x TR x Sub? Do I need to change my data format prior to that, since there are 2 transposes in 2 different steps?

    Any help is appreciated. Thanks!

    1 reply
    Frederic R. Hopp
    @freddy_hopp_twitter

    [HARDWARE PURCHASE RECOMMENDATIONS]
    Hello everyone,
    our lab currently has a local compute cluster with 2 nodes (24 cores, 128GB RAM each) and 3 workstations (8 cores, 32GB each). We want to add a few new machines, and I have a very basic question: for naturalistic neuroimaging analysis (ISC, IS-RSA, etc.), would you recommend

    buying two machines with 24 cores and 128 GB RAM
    buying one machine with 64 cores and 256GB RAM

    My understanding is that two machines are better for embarrassingly parallel problems (e.g., distributing multiple subjects when running fMRIPrep), and that a single, high-powered machine is better for costly computations (e.g., phase randomization during ISC). As additional info, we use SLURM as our scheduler.

    Many thanks!
    Freddy

    1 reply
    pfdominey
    @pfdominey
    Greetings - I have been using the BrainIAK HMM segmentation tool on fMRI data. I would like to extract brain data with a mask in order to apply the HMM to data from the angular gyrus, posterior medial cortex, and medial prefrontal cortex. Is there a tutorial that would explain how to do this? Thanks in advance. - Peter
    3 replies
    gustavopamplona
    @gustavopamplona

    Dear Brainiak Team/everyone,

    Sorry for the newbie question, I'm a complete beginner in Python. I would like to run the tutorial via Python on my computer, not via Jupyter Notebook (because I want to adapt it later to my own data). Could you please point out what I need to install for this? Should I try to run what is explained under "Cluster" in https://brainiak.org/tutorials/? (If so, I'm getting some error messages whenever I try.)

    Thank you!!

    4 replies
    Nertl
    @NatalieErtl1_twitter

    Hi
    Sorry if this is a stupid question, but I am so confused about the permutation testing bit of the ISC (10) tutorial. I understand what the permutation test is doing and why, but the output from the tutorial makes no sense to me. The tutorial output is:
    observed: (98508,)
    p: (98508,)
    distribution: (1000, 98508)

    Please help!

    2 replies
    kadejentink
    @kadejentink
    hello everybody!
    not sure if this is the best place, but I'm doing a searchlight RSA analysis, and was wondering if anybody had any scripts they would be comfortable sharing! I learn best through reverse engineering :)
    3 replies
    kadejentink
    @kadejentink
    I wouldn't expect you to troubleshoot or anything, either ;)
    just something to reference would be helpful!
    Chihhao-Lien
    @Chihhao-Lien

    Dear all,

    I ran into a problem when trying to do the ISC analysis.
    All the data were preprocessed and normalized to the MNI 6th generation template (resolution: 2) via fMRIPrep, then smoothed via SPM12 and denoised via Denoiser (https://github.com/arielletambini/denoiser).

    I'm following Nastase's ISC tutorial (https://github.com/snastase/isc-tutorial/blob/master/isc_tutorial.ipynb), and I get an error when I try to use "MaskedMultiSubjectData.from_masked_images" to collate data into a single TR x voxel x subject array.

    The error is:
    ValueError: Image 19 has different shape from first image: (111, 253137) != (105, 253137)

    I find that including 3 subjects' data in the analysis causes this error. But it's weird, since all the data were preprocessed in the same way. I also checked the dimensions and voxel size of these images via the SPM Display function, and both are the same as for the other data, which don't produce this error.

    Has anyone met the same problem, or does anyone know how to solve it?

    2 replies
    Shawn Rhoads
    @shawnrhoads

    Hi all! Hope everyone is well. I have a Searchlight() question regarding multiple outputs from a kernel function. This could be multiple subjects (see example below) or multiple scores in case people like to compare multiple accuracy measures (e.g., accuracy + AUC) or distance measures (e.g., correlation distance + Euclidean distance).

    In the Searchlight tutorial for running multiple subjects, the output of the kernel function accuracy is a list of elements (with each element corresponding to the kernel's output for a different subject). When the searchlight completes, the output (e.g., sl_result_3subj) will be a 3D array with an odd form. Essentially, its shape will reflect the input voxel dimensionality (x, y, z) (e.g., (64, 64, 26)) instead of having a fourth dimension for subjects (e.g., (64, 64, 26, 3) for three subjects). This is because the searchlight only runs in voxels where mask==1. The output array therefore is a bit odd---here's a simplistic example for sl_result_3subj: np.array([None, [subj1a, subj2a, subj3a], [subj1b, subj2b, subj3b], None, None]). The shape of this array would still be (5,), not reflecting the elements with three items.

    I would like to create an output with the form (x, y, z, s) that reflects the multiple outputs from the kernel function and that takes the masked voxels into account, but unsure what the most efficient way to accomplish this would be! Any help would be super appreciated! Thanks so much!

    -Shawn
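    A possible post-processing step for this (a sketch under the assumption that each in-mask entry of the searchlight result is a length-s list, as in the multi-subject tutorial; stack_sl_result is a hypothetical helper name):

```python
import numpy as np

def stack_sl_result(sl_result, n_outputs):
    """Convert an (x, y, z) object array of lists/None into a float array
    of shape (x, y, z, s), filling NaN outside the mask."""
    out = np.full(sl_result.shape + (n_outputs,), np.nan)
    for idx in np.ndindex(sl_result.shape):
        if sl_result[idx] is not None:
            out[idx] = sl_result[idx]
    return out
```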

    1 reply
    Chihhao-Lien
    @Chihhao-Lien

    Hi everyone,

    I'm looking for a proper tool/software/viewer for inter-subject correlation (ISC) maps. I'd appreciate it if anyone can recommend a tool/software/viewer to me. (I tried to use xjview, a toolbox of SPM, but it doesn't properly support viewing ISC maps.)

    4 replies
    kadejentink
    @kadejentink

    Howdy! I have what I'm afraid is a rather basic question. I set the searchlight shape to "Ball", but when I output "sl_mask.shape" as in the tutorial, it gives me cube dimensions e.g., "(3,3,3)".

    However, If I understand right, this is expected. The documentation states that the searchlight package takes a cube shape and sets certain voxels which have "...a Euclidean distance of equal to or less than rad from the center point" equal to True.

    My question, then, is how can I see which voxels have been set to True so that I can get a better visualization of the searchlight shape?

    Thank you much!!
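    One way to see which voxels are set to True, based only on the Euclidean-distance rule quoted above (pure numpy, independent of the searchlight internals):

```python
import numpy as np

rad = 1
# offsets of each voxel in the (2*rad+1)^3 cube from the center
offsets = np.indices((2 * rad + 1,) * 3) - rad
ball = np.sqrt((offsets ** 2).sum(axis=0)) <= rad

print(ball.sum())             # 7 voxels: the center plus 6 face neighbors
print(np.argwhere(ball))      # their (x, y, z) indices within the cube
```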

    2 replies
    CameronTEllis
    @CameronTEllis
    @manojneuro @mihaic Hello, I just tried to run a colab notebook (number 2) and ran into an error using matplotlib. It really didn't like our version, but things worked when I ran !pip install matplotlib==3.1.3. I found this info in https://github.com/facebook/prophet/issues/1691. We may want to update the requirements for brainiak to suit.
    funyil
    @funyil
    Hello everyone, I'm following the fmrisim_multivariate_example notebook (https://github.com/brainiak/brainiak/blob/master/examples/utils/fmrisim_multivariate_example.ipynb). For generating the signal in part 3.1 (3.1 Specify which voxels in the brain contain signal), instead of specifying only one voxel with coordinates, it is also mentioned that you can use an ROI that specifies the signal on a group of voxels. I have the ROI as a NIfTI image that we created before for the auditory cortex and want to use that as signal_volume. My question is: do I still need to use the fmrisim.generate_signal function, or how can I use this ROI to specify that those voxels have the signal and visualize it? Thank you very much!!
    2 replies
    Nertl
    @NatalieErtl1_twitter

    Hi, I'm really struggling conceptually with the second part of the ISC (10) tutorial - ISC with statistics. I'm fairly new to Python and I don't really understand what to do with the code. The tutorial output is:

    observed: (98508,)
    p: (98508,)
    distribution: (1000, 98508)

    It has been explained to me that 98508 refers to the number of voxels and 1000 refers to the number of times the permutation is done. But how do I get a p-value which tells me whether my two conditions are significantly different or not? Below is the code I am confused about:

    # permutation testing
    n_permutations = 1000
    summary_statistic = 'mean'

    observed, p, distribution = permutation_isc(
        isc_maps_all_tasks,
        pairwise=False,
        group_assignment=group_assignment,
        summary_statistic=summary_statistic,
        n_permutations=n_permutations
    )

    p = p.ravel()
    observed = observed.ravel()

    print('observed: {}'.format(np.shape(observed)))
    print('p: {}'.format(np.shape(p)))
    print('distribution: {}'.format(np.shape(distribution)))

    Any basic explanations would be really greatly appreciated.

    14 replies
    Chihhao-Lien
    @Chihhao-Lien

    Hi, I'm following 2 ISC tutorials, the official BrainIAK tutorial (https://brainiak.org/tutorials/10-isc/) and Nastase's tutorial (https://github.com/snastase/isc-tutorial). I noticed these two tutorials use different ways to calculate ISC maps via isc(data, pairwise=False, summary_statistic=None, tolerate_nans=True).
    In the former, it calculates ISC maps for the 2 groups separately via a for loop, then uses np.vstack to concatenate the ISC maps from both tasks for the permutation test.

    # run ISC, loop over conditions 
    isc_maps = {}
    for task_name in all_task_names:
        isc_maps[task_name] = isc(bold[task_name], pairwise=False)
        print('Shape of %s condition:' % task_name, np.shape(isc_maps[task_name]))
    
    # Concatenate ISCs from both tasks
    isc_maps_all_tasks = np.vstack([isc_maps[task_name] for
                                    task_name in all_task_names])
    
    print('group_assignment: {}'.format(group_assignment))
    print('isc_maps_all_tasks: {}' .format(np.shape(isc_maps_all_tasks)))
    
    # permutation testing
    n_permutations = 1000
    summary_statistic='mean'
    
    observed, p, distribution = permutation_isc(
        isc_maps_all_tasks, 
        pairwise=False,
        group_assignment=group_assignment, 
        summary_statistic=summary_statistic,
        n_permutations=n_permutations
    )

    In contrast, the data variable in the latter contains both groups' data, and isc(...) is used only once to calculate the ISC maps.

    # Create data with noisy subset of subjects
    noisy_data = np.dstack((np.dstack((
        simulated_timeseries(n_subjects // 2, n_TRs,
                             n_voxels=n_voxels, noise=1))),
                            np.dstack((
        simulated_timeseries(n_subjects // 2, n_TRs,
                             n_voxels=n_voxels, noise=5)))))
    
    # Create group_assignment variable with group labels
    group_assignment = [1]*10 + [2]*10
    print(f"Group assignments: \n{group_assignment}")
    
    # Compute ISCs and then run two-sample permutation test on ISCs
    iscs = isc(noisy_data, pairwise=True, summary_statistic=None)
    observed, p, distribution = permutation_isc(iscs,
                                                group_assignment=group_assignment,
                                                pairwise=True,
                                                summary_statistic='median',
                                                n_permutations=200)

    I'm confused about this difference and am wondering whether it leads to different results when using permutation_isc, because I think these 2 ways will create different numbers of ISC maps with the pairwise approach. Does permutation_isc have different ways of handling ISC maps from the different approaches, so that both ways calculate ISC correctly?

    Nertl
    @NatalieErtl1_twitter
    Hi again, I'm working through the ISC tutorial with my own data. Does anyone know if it's possible to get a 4D file for each subject instead of a 3D ISC map?
    3 replies
    manojneuro
    @manojneuro
    @Chihhao-Lien here is an explanation of the distinction between using pairwise tests vs average tests: https://naturalistic-data.org/content/Intersubject_Correlation.html You lose some of the individual variability in average group tests as compared to the pairwise tests. So which type you use will depend on the type of analysis you are doing.
    2 replies
    Chihhao-Lien
    @Chihhao-Lien
    Dear all, I use the following scripts from Nastase's tutorial to correct ISC results calculated via the BrainIAK tutorial. But the ISC results for both my own data and the Pieman 2 data show no significant voxels when controlling FDR at 0.05. Given this, I'm not sure whether I am executing these scripts correctly. Could anyone give me some advice?
    from statsmodels.stats.multitest import multipletests
    
    # Get number of NaN voxels
    n_nans = np.sum(np.isnan(observed))
    print(f"{n_nans} voxels out of {observed.shape[0]} are NaNs "
          f"({n_nans / observed.shape[0] * 100:.2f}%)")
    
    # Get voxels without NaNs
    nonnan_mask = ~np.isnan(observed)
    nonnan_coords = np.where(nonnan_mask)
    
    # Mask both the ISC and p-value map to exclude NaNs
    nonnan_isc = observed[nonnan_mask]
    nonnan_p = p[nonnan_mask]
    
    # Get FDR-controlled q-values
    nonnan_q = multipletests(nonnan_p, method='fdr_by')[1]
    threshold = .05
    print(f"{np.sum(nonnan_q < threshold)} significant voxels "
          f"controlling FDR at {threshold}")
    
    # Threshold ISCs according FDR-controlled threshold
    nonnan_isc[nonnan_q >= threshold] = np.nan
    
    # Reinsert thresholded ISCs back into whole brain image
    isc_thresh = np.full(observed.shape, np.nan)
    isc_thresh[nonnan_coords] = nonnan_isc
    kadejentink
    @kadejentink
    When defining the function which executes for each searchlight cluster, how can I extract the location (i.e., coordinates) of the current searchlight center for troubleshooting purposes? I would also be semi-curious if anybody knew the logic of how each center was selected (although that's not my primary concern).
    funyil
    @funyil
    Hello, I'm using a continuous envelope as the stimulus and I'm following https://github.com/brainiak/brainiak/blob/master/examples/utils/fmrisim_multivariate_example.ipynb. In this notebook, an event-related design is used, and therefore, before convolution, the stimulus is prepared with a certain stimfunction, with events having differential effects on some voxels. My first question is whether this simulation is suitable for a continuous envelope (sound stimuli). Second, when I prepared my envelope stimulus with a stimfunction, which requires an onset time (here, the whole time series) and an event duration, the new stimulus weighted with my envelope somehow seemed to be cut to 0 after 1000 ms, while the envelope wasn't. I wonder what may cause this, and whether it is possible to convolve the envelope directly with the generate_stimfunc functions and use it as the signal? Thanks in advance!