    kshitijd20
    @kshitijd20
    After running the above command I tried to open the link in a browser, but it doesn't connect. Can anyone help please?
    Mihai Capotă
    @mihaic

    Hello, @kshitijd20. There are several links suggested in the message. Have you tried the one starting with "127"?

    If that doesn't work, could you please confirm your Docker for Windows is properly set up by testing Nginx in your browser according to the official Docker documentation?
    https://docs.docker.com/docker-for-windows/#explore-the-application

    Yoel Sanchez Araujo
    @YSanchezAraujo
    Hello, I was wondering if someone could clarify something for me. It's related to broadcasting with the searchlight module.
    For reference, here's the code I'm running: https://gist.github.com/YSanchezAraujo/cad5135cd1f47d2c2eefc58a058eb3bf
    In particular, I'm wondering about line 81 in the for loop: https://gist.github.com/YSanchezAraujo/cad5135cd1f47d2c2eefc58a058eb3bf#file-searchlight-py-L81
    Would I need to manually "clear" the space referenced by sl.broadcast on each iteration of the for loop, or will it be overwritten by default? The concern is that broadcasting in a for loop like that, without explicitly flushing the previously broadcast data, would lead to the use of all of the up-to-n things broadcast so far, as opposed to only the one for the nth iteration
    Yoel Sanchez Araujo
    @YSanchezAraujo
    ah, ok sorry, it seems that it is indeed overwritten. If I'm interpreting this correctly, checking sl.bcast_var shows me that it's changing each time
    CameronTEllis
    @CameronTEllis
    @LouisaSmith Hi, sorry about the slow reply. Are you still having issues with this? From your error output it seems like a problem with the LogisticRegression function: sklearn recently changed its error checking on the solver. You can manually specify the solver as an argument when creating the LogisticRegression, e.g. LogisticRegression(penalty='l1', solver='liblinear')
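A minimal sketch of that fix on toy data (the dataset here is made up; only the `penalty`/`solver` pairing is the point):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: the label depends only on the first feature
X = np.random.default_rng(0).standard_normal((30, 4))
y = (X[:, 0] > 0).astype(int)

# Newer scikit-learn rejects penalty='l1' with its default solver (lbfgs),
# so a solver that supports L1, such as liblinear, must be named explicitly
clf = LogisticRegression(penalty='l1', solver='liblinear')
clf.fit(X, y)
print(clf.score(X, y))
```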
    @YSanchezAraujo Yes the broadcast variable is being updated every time it is set. Is this the desired behavior?
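To make the overwrite behavior concrete, here is a tiny stand-in sketch; the class below is a mock illustrating the point under discussion, not BrainIAK's actual Searchlight implementation:

```python
class MockSearchlight:
    """Mock of the relevant bit of a searchlight object (not the real BrainIAK API)."""
    def __init__(self):
        self.bcast_var = None

    def broadcast(self, bcast_var):
        # Each call simply replaces the previously broadcast variable
        self.bcast_var = bcast_var

sl = MockSearchlight()
for i in range(3):
    sl.broadcast(i)   # later broadcasts overwrite earlier ones
print(sl.bcast_var)   # 2
```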
    Yoel Sanchez Araujo
    @YSanchezAraujo
    @CameronTEllis yep, that is the desired behavior. thanks!
    Soukhin Das
    @soukhind2
    Hello, I had a question about fmrisim. When we are creating a design sequence where we have events spaced close together in time, does fmrisim generate the combined signal taking into account the nonlinearity discussed in Friston et al. 1998, or does it linearly add up the signal?
    CameronTEllis
    @CameronTEllis
    Hi @soukhind2. The answer is mostly no. The event time course is convolved with the double-gamma HRF, which is a linear transform, but a non-linearity is applied by default by setting scale_function to 1. So imagine two events that occur within 1 second of each other. From what we know about the brain's response, there is a subadditivity for those two presentations: the evoked response will be larger than if only one event occurred, but it likely won't be twice as large. If you set scaling to 0, then the height of the output stimulus response will be twice as high, which would be wrong. If you set scaling to 1, then the peak of two events will be 1, just like the peak of 1 event, although the shape of the function will be the same. Hence when scaling is set to 1, there is no additivity. Note the same logic goes into GLMs using tools like FEAT: they also just assume a convolution of the event boxcar. Still, building a realistic non-linearity would be valuable, although it would likely depend largely on empirical details, since different events will elicit different amounts of additivity
    Here is some code to play around with this concept:
    import numpy as np
    from brainiak.utils import fmrisim as sim
    import matplotlib.pyplot as plt
    
    # Inputs for generate_stimfunction
    onsets = [10, 12]
    event_durations = [1]
    tr_duration = 2
    duration = 100
    scale_function = 1
    
    # Create the time course for the signal to be generated
    stimfunction = sim.generate_stimfunction(onsets=onsets,
                                             event_durations=event_durations,
                                             total_time=duration,
                                             )
    
    # Create the signal function
    signal_function = sim.convolve_hrf(stimfunction=stimfunction,
                                       tr_duration=tr_duration,
                                       scale_function=scale_function,
                                       )
    
    plt.plot(signal_function)
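To see the same linearity point without BrainIAK installed, here is a pure-NumPy sketch; the double-gamma parameters are assumed for illustration, not fmrisim's exact values. A plain convolution of two nearby events approaches twice the single-event peak, while rescaling the peak to 1 removes that additivity, as in the scale_function=1 discussion above:

```python
import numpy as np
from math import gamma

def double_gamma_hrf(t, a1=6.0, a2=16.0, b1=1.0, b2=1.0, c=1 / 6):
    # Canonical-style double-gamma HRF; parameter values are assumed
    peak = t ** (a1 - 1) * np.exp(-t / b1) / (gamma(a1) * b1 ** a1)
    undershoot = t ** (a2 - 1) * np.exp(-t / b2) / (gamma(a2) * b2 ** a2)
    return peak - c * undershoot

dt = 0.1  # sampling resolution in seconds
hrf = double_gamma_hrf(np.arange(0, 30, dt))

# Boxcars at 0.1 s resolution: two 1 s events 2 s apart vs. a single event
two = np.zeros(1000)
two[100:110] = 1
two[120:130] = 1
one = np.zeros(1000)
one[100:110] = 1

resp_two = np.convolve(two, hrf)[:1000]
resp_one = np.convolve(one, hrf)[:1000]

# Convolution is linear: the two-event peak approaches twice the single-event peak
print(resp_two.max() / resp_one.max())

# Rescaling the peak to 1 removes the additivity
scaled = resp_two / resp_two.max()
print(scaled.max())  # 1.0
```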
    Soukhin Das
    @soukhind2
    @CameronTEllis That is a wonderful explanation. So it is generating non-linearity by basically 'squashing' the signal so that the peaks are of the same magnitude, right?
    CameronTEllis
    @CameronTEllis
    Yes exactly. Then, the compute_signal_change function can be used to rescale it to be whatever magnitude is desired.
    Soukhin Das
    @soukhind2
    @CameronTEllis That was helpful, thanks!
    squinto13
    @squinto13
    Hello gitter peeps! I adapted an inverted encoding model tutorial by Tommy Sprague from MATLAB to Python. I see there is an IEM function recently added to BrainIAK, but the tutorial here is more general (it can be adapted for essentially any stimulus space) and I believe it is informative about the technique. Please let me know if anyone has feedback or would like me to do anything further if it seems useful for BrainIAK. Thanks! https://github.com/squinto13/IEM_python/blob/master/IEM_spatial.ipynb
    Mihai Capotă
    @mihaic
    Hello, @squinto13! Thanks for sharing your tutorial. I think @vyaivo would be interested, especially considering her Serences Lab background.
    Ari Kahn
    @ariekahn
    Hey! I was taking a quick look at the scikit-learn < 0.22 pin. It looks like the test_mvpa_voxel_selection failure is pretty much just the result of SVC defaulting to gamma='scale' instead of gamma='auto' from 0.22 onwards.
    Not sure if it matters which is used in the test, but switching it back to 'auto' seems to fix the test on 0.22.2
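A minimal sketch of that workaround on toy data (the data here is made up; only the `gamma` argument is the point):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
y = np.array([0, 1] * 10)

# scikit-learn 0.22 changed the default from gamma='auto' (1 / n_features)
# to gamma='scale'; pinning gamma='auto' restores the pre-0.22 behavior
clf = SVC(kernel='rbf', gamma='auto')
clf.fit(X, y)
print(clf.gamma)
```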
    Mihai Capotă
    @mihaic
    Thanks for the investigation, @ariekahn! We will happily accept a pull request.
    Ari Kahn
    @ariekahn
    No problem! I'm having a bit of an issue with MPI in a few of the tests, but the change resolves the ones that run locally for me.
    Ari Kahn
    @ariekahn
    @mihaic before I update the examples, just wanted to make sure: the idea would be to explicitly force all example code to use the old SVC behavior for now? Looking into why scikit-learn decided to switch the default, but haven’t seen any clear reason so far
    Ghupo
    @Ghupo
    Hi all, I am completely new to BrainIAK and wish to do an ISC analysis on my fMRI data. I have completed parts of the tutorial but am still having difficulty understanding how to construct my data structure appropriately. I have movie-watching data from 10 subjects. If I am to run ISC on them, should the input be a 3D mat file like ROI x time points x subjects? And do I have to create my own utils.py (where all the helper functions are) to carry out the analysis? Some guidance and direction would be hugely helpful. Thank you
    CameronTEllis
    @CameronTEllis
    Hi! Glad you are getting started with BrainIAK. Happy to help. You are correct that you need a 3D numpy array of voxels by time points by subjects. You shouldn't need any other functions or setup besides BrainIAK: just import the isc function and run isc(data). Do you want to share any specific barriers or code issues you are having?
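To make the expected layout concrete, here is a hand-rolled sketch with simulated data (the sizes are hypothetical, and the leave-one-out loop is written out only to show what the array shape means; with BrainIAK installed you would just call `brainiak.isc.isc(data)`):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trs, n_subjects = 50, 120, 10  # hypothetical sizes

# Shared signal plus subject-specific noise, stacked into voxels x TRs x subjects
shared = rng.standard_normal((n_voxels, n_trs))
data = np.stack(
    [shared + 0.5 * rng.standard_normal((n_voxels, n_trs))
     for _ in range(n_subjects)],
    axis=2)
print(data.shape)  # (50, 120, 10)

# Hand-rolled leave-one-out ISC per voxel
def loo_isc(data):
    n_vox, _, n_sub = data.shape
    result = np.zeros((n_sub, n_vox))
    for s in range(n_sub):
        # Average time course of everyone except subject s
        others = data[:, :, np.arange(n_sub) != s].mean(axis=2)
        for v in range(n_vox):
            result[s, v] = np.corrcoef(data[v, :, s], others[v])[0, 1]
    return result

iscs = loo_isc(data)
print(iscs.mean())  # high, because the simulated signal is shared
```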
    Daphne
    @daphnecor

    Hi! I'm also new to BrainIAK and working on implementing an ISC analysis. I went through the ISC tutorial (number 10) on Colab and am now running everything locally with my own dataset.

    I created a conda env and installed brainiak from there, exactly as recommended. However, when importing the isc methods I receive the following error:

    from brainiak.isc import isc, isfc, permutation_isc
    
    >>> ModuleNotFoundError: No module named 'brainiak.isc'

    I tried various different versions to import isc but so far no success. Weirdly it does work on colab, and all my other brainiak imports work perfectly. Do you have any idea what the cause is?

    Catherine Walsh
    @crewalsh
    Hi, I’m trying to run a spatial ISC analysis using the isc function (loosely based off of tutorial 10). I’m running into an issue where I have NaNs for some of the voxels for some of my subjects. From what I can tell, the tolerate_nans option is handling the NaNs when it calculates the average of all but the left out subject, but the correlation still isn't working because there are still NaNs in the single subject being correlated to the average. Is there a way to get around this issue, or am I missing something about how NaNs should be dealt with?
    Mihai Capotă
    @mihaic
    Hello, @crewalsh. Let's see if @manojneuro has any insight into your issue. If not, consider opening an issue on GitHub:
    https://github.com/brainiak/brainiak-tutorials/issues
    CameronTEllis
    @CameronTEllis
    Hi @crewalsh, sorry for the delay, if this hasn't already been answered. Unfortunately the BrainIAK ISC isn't set up to deal with NaNs in the left-out participant. The solution I have is to restrict which time points I consider for the ISC to the time points in the left-one-out participant that are not NaNs. This means the time course will be a different length for different participants, but it will prevent NaN issues
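A sketch of that workaround (hand-rolled for illustration, not a BrainIAK option): mask out each left-out subject's NaN time points before correlating against the average of the others.

```python
import numpy as np

def loo_isc_skipping_nans(data, min_points=3):
    # data: voxels x TRs x subjects, possibly containing NaNs
    n_vox, _, n_sub = data.shape
    result = np.full((n_sub, n_vox), np.nan)
    for s in range(n_sub):
        # nanmean tolerates NaNs when averaging the other subjects
        others = np.nanmean(data[:, :, np.arange(n_sub) != s], axis=2)
        for v in range(n_vox):
            # Keep only time points valid in both the left-out subject
            # and the average of the others
            ok = ~np.isnan(data[v, :, s]) & ~np.isnan(others[v])
            if ok.sum() >= min_points:
                result[s, v] = np.corrcoef(data[v, ok, s], others[v, ok])[0, 1]
    return result

rng = np.random.default_rng(0)
data = rng.standard_normal((20, 60, 5))
data[3, :10, 2] = np.nan  # one voxel has NaNs for one subject
iscs = loo_isc_skipping_nans(data)
print(np.isnan(iscs).any())  # False: the NaN voxel still gets an ISC value
```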
    vyaivo
    @vyaivo
    Hi @squinto13, thanks for your message about the IEM module. (I believe we met briefly at a conference through Tommy, actually!) My apologies for the delay, my previous reply to your thread disappeared into Gitter. I adapted the IEM module for BrainIAK from code written by David Huberdeau. I definitely agree that it should be expanded and generalized to capture a wider range of stimulus spaces. Adapting some of the tutorial code to the sklearn format would be a great place to start! If you are interested in working on this we would be happy to have it in BrainIAK. I am happy to answer any further questions about adapting it for BrainIAK (along with @mihaic), and to review the code along the way. Feel free to reach out here or by email (vy.vo@intel.com).
    Peeta Li
    @peetal

    Hi all ... I'm using FCMA with BrainIAK for my analysis. Everything made sense to me until I started doing the permutation test mentioned in Wang et al., 2015 for information mapping. Based on my understanding, after I randomize the data, the classification accuracy (per voxel) I obtain in the tuple during the feature selection step should be around chance level, just as a sanity check. However, the top voxel's accuracy I get for each permutation run is around 60%, which is way above chance and does not make sense to me: it suggests that even if I throw random time series into the classifier, the performance is still above chance. To make sure this is replicable, I ran the FCMA permutation test with the face-scene dataset from the BrainIAK datasets and got similar results: with the original data, the top voxel accuracy is around 80%, and for the permutation test, after randomization, the top voxel accuracy is around 70% (still above chance).

    The only change I made to the feature selection script is adding the RandomType argument as follows:
    raw_data, _, labels = prepare_fcma_data(images, epoch_list, mask, random=RandomType.REPRODUCIBLE)

    I'm not sure which step I did wrong that ended up causing this problem, or if I misunderstood the concept and this above-chance performance is expected for the permutation test. I also posted this same question as an issue on the BrainIAK GitHub. I'm grateful for any sort of help, and thank you all in advance!
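For intuition on why the top voxel can sit above 50% even on randomized data: the reported number is a maximum over many voxels, and the maximum of thousands of chance-level accuracies is well above chance, which is exactly why permutation nulls for voxel selection are typically built from top-voxel accuracies rather than compared to 50%. A toy sketch with assumed trial and voxel counts:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 10_000, 40  # assumed sizes, for illustration

# Each voxel's accuracy under the null: Binomial(n_trials, 0.5) / n_trials
acc = rng.binomial(n_trials, 0.5, size=n_voxels) / n_trials

print(acc.mean())  # ~0.5: every individual voxel is at chance
print(acc.max())   # well above 0.5: the *top* voxel always looks good
```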

    Ghupo
    @Ghupo

    Hi all, I want to extract the whole-brain voxel-wise time series. I have pre-processed my data in SPM and my data dimension is 79 x 95 x 79. I have a few questions on which I would like some guidance.

    1. Do I have to create the mask of the whole brain (as mentioned in the 01-Setup In[2])? Secondly, on what basis do I choose my mask dimension (the volume we want to create)? Is it going to be the same dimension as my con_000i.nii image?
    2. Next, after applying the mask to the data, in what format am I going to get the output? voxel x TR x intensity?

    I am a newbie in BrainIAK, so any help would be really appreciated.
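On both points, a small NumPy sketch (the spatial sizes are taken from the 79 x 95 x 79 dimensions above; the number of TRs and the threshold are made up): a mask shares the data's 3D spatial dimensions, and indexing with it flattens space so the output is voxels x TRs of intensity values.

```python
import numpy as np

rng = np.random.default_rng(0)
bold = rng.random((79, 95, 79, 10))  # stand-in 4D data: x, y, z, TRs

# A simple whole-brain mask: voxels whose mean intensity clears a threshold.
# Its shape matches the data's spatial dimensions, 79 x 95 x 79.
mask = bold.mean(axis=3) > 0.45
print(mask.shape)

# Applying the mask flattens space: rows are voxels, columns are TRs,
# and the values are the intensities
masked = bold[mask]
print(masked.shape)  # (n_voxels_in_mask, 10)
```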

    Limbicode Ⓥ
    @limbicode_twitter
    Hi there. I just tried to install BrainIAK using "conda install -c brainiak -c defaults -c conda-forge brainiak".
    It does not work: I get the error "PackagesNotFoundError", saying the brainiak package is not available from the current channels.
    Thanks for help!
    Peeta Li
    @peetal
    Hi all ... I'm using FCMA with BrainIAK for my analysis. I'm trying to do some analyses with the decision function outputs. I noticed that the decision function outputs are all between -0.5 and 0.5 for FCMA classification, meaning that all data points fall within the margin of the SVC. I'm not sure whether this is necessarily a bad thing. The problem is, I tried to broaden the classifier's confidence range by increasing the value of C, the SVC hyperparameter, in order to make the margin narrower, but it turns out that C=1, C=100 and C=10000 make no difference in either classification accuracy or classification confidence. My guess is that the precomputed kernel may play a role? Anyway, I do not fully understand why this is the case, so any help on either SVMs or FCMA would be super helpful! Thank you all in advance!
    anoop023
    @anoop023
    Hi all... I am using ADNI .nii data for a project and need to do 2 things:
    1. Extract the time series data from the .nii files
    2. Map the voxel-level information to ROI-level information.
    I am trying to use BrainIAK for task 1, but I am not sure whether I have to do some kind of preprocessing of the ADNI fMRI data (which is already in .nii format); if yes, which preprocessing tasks are required?
    Secondly, is there a common place in the code where the file should be put, or do I have to change the paths used in the code in all places?
    Thank you all in advance.
    manojneuro
    @manojneuro
    @anoop023 The analysis techniques in BrainIAK assume that all the pre-processing steps you deem necessary are completed before you start using BrainIAK. You can use your preferred pre-processing pipeline.
    You can also load your data for analysis using the steps outlined in the tutorials: https://brainiak.org/tutorials/. You can start by looking at the second tutorial.
    laierkasten23
    @laierkasten23

    Hi, I am using the brainiak.reprsimil package and have pre-processed the fMRI data the way it is necessary to perform a GBRSA. All data is in the needed format, i.e. the ROIs to be fitted are in a list (of length #subjects) with each element of the list being an array of shape time points x voxels, with the runs concatenated along the time points. Furthermore, the design matrix is as needed, and the scan onsets and some nuisance regressors are provided correctly. When initiating the instance I set auto_nuisance=False, in order to use exactly the given nuisance regressors.

    The GBRSA has been running for a while now and does not seem to come to an end. Does anyone have experience of how long this may take for about 20 subjects with about 2000 time points each, or a way to parallelise the computation and monitor the progress?
    Any advice or idea would be helpful, thank you very much! :)

    Sheetal Jadhav
    @sheetalouette_twitter
    Can anybody tell me where the README file is for the latatt dataset in tutorial no. 8, named connectivity?
    Ruimin-Kochi
    @Ruimin-Kochi
    Dear all, I am just a beginner here. It took me 20 hours to finish processing a dataset using the run_searchlight function. How should I improve the processing speed? Should I increase 'max_blk_edge' as much as possible (e.g. to the number of cores of my computer)?
    CameronTEllis
    @CameronTEllis
    Hi @Ruimin-Kochi, sorry to hear things are running so slowly. The searchlight function was optimized for parallel computing in a cluster environment. While you can run it on your computer, unless your environment is set up for parallel execution the analysis will run in serial and thus be slow. However, 20 hrs is very slow for most searchlight computations (e.g., SVM). How long does it take to run the computation on a single searchlight? You can test this by making a mask containing only one voxel and then running the searchlight code on that mask to see how long it takes
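A sketch of that one-voxel timing test (the volume dimensions are assumed; the actual run_searchlight call is left commented because it needs a configured BrainIAK Searchlight object and loaded data):

```python
import numpy as np

# A mask with every voxel off except one, so the searchlight evaluates
# the kernel exactly once
shape = (64, 64, 36)  # assumed volume dimensions
one_voxel_mask = np.zeros(shape, dtype=bool)
one_voxel_mask[32, 32, 18] = True
print(one_voxel_mask.sum())  # 1

# With a configured Searchlight object sl and 4D data, timing it might look like:
# import time
# start = time.time()
# sl.run_searchlight(data)
# print(time.time() - start)  # roughly the seconds per searchlight
```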
    Ruimin-Kochi
    @Ruimin-Kochi
    Dear @CameronTEllis, thank you for your kind reply. By increasing the value of 'max_blk_edge', I can now complete the searchlight code in 2-3 hrs. I am performing a searchlight analysis on the output of fmriprep: "space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz" for the EPI input, and "space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz" for the mask input. However, when I view the results, I find a few outliers. Is this normal, or am I making the wrong input selection? Do you have any suggestions or demo examples for handling fmriprep output in BrainIAK?
    CameronTEllis
    @CameronTEllis
    @Ruimin-Kochi, that is great that you sped it up. I am not sure what you mean by 'a few outliers'. The fmriprep outputs should be directly usable by BrainIAK functions like searchlight. Indeed we have a few papers using fmriprep and BrainIAK together (e.g. Yates, Ellis, Turk-Browne, 2020)
    Ghupo
    @Ghupo
    Hi again @CameronTEllis, I am facing a 'memory error' while trying to compute voxel-wise whole-brain ISC for 21 subjects. It works fine when I run it for 5-7 subjects but not for more. The system I am working on is an Intel i7 with 32GB RAM. Is there a workaround to use less memory while calculating ISC?
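One way to bound memory (a hand-rolled sketch for illustration, not a BrainIAK option): compute the leave-one-out ISC over chunks of voxels so only a slice of the full voxels x TRs x subjects array is processed at once.

```python
import numpy as np

def chunked_loo_isc(data, chunk=1000):
    # data: voxels x TRs x subjects; process voxels in chunks to bound memory
    n_vox, _, n_sub = data.shape
    out = np.zeros((n_sub, n_vox))
    for start in range(0, n_vox, chunk):
        block = data[start:start + chunk]
        for s in range(n_sub):
            # Average of everyone except subject s, within this voxel chunk
            others = block[:, :, np.arange(n_sub) != s].mean(axis=2)
            # Row-wise Pearson correlation between subject s and the average
            a = block[:, :, s] - block[:, :, s].mean(axis=1, keepdims=True)
            b = others - others.mean(axis=1, keepdims=True)
            r = (a * b).sum(axis=1) / np.sqrt(
                (a * a).sum(axis=1) * (b * b).sum(axis=1))
            out[s, start:start + chunk] = r
    return out

rng = np.random.default_rng(0)
data = rng.standard_normal((2500, 100, 6))
iscs = chunked_loo_isc(data, chunk=500)
print(iscs.shape)  # (6, 2500)
```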
    Solim LeGris
    @AlephG
    Hi, I am a cognitive neuroscience undergrad and I am looking for a first project applying ML to neuroscience. Any ideas where I could look for a feasible 2-week project?
    CameronTEllis
    @CameronTEllis

    Hello! I am helping a grad student use the searchlight code on a really big dataset/analysis and we are running into a strange problem I have never witnessed. In particular, the code runs for approximately 8 hours, then just seems to freeze and stops producing any more outputs, even if we let it run for multiple days.

    To give some more details, the kernel computation takes 8-10s, we have 230k voxels, and we have used up to 120 cores to run this, although we get similar results with fewer cores. The way we track progress is that we print to a log file the time stamp at which every searchlight was run. No error messages are printed in the log; it just hangs for multiple days without producing a new result and then times out. Using a back-of-the-envelope calculation, this code should only take about 5 hours on 120 cores, so it is already running slowly.
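The back-of-the-envelope calculation, spelled out with the numbers quoted above:

```python
n_voxels = 230_000
secs_per_kernel = 9   # the quoted 8-10 s kernel time
n_cores = 120

# Perfect parallelism assumed: total work divided evenly across cores
hours = n_voxels * secs_per_kernel / n_cores / 3600
print(round(hours, 1))  # 4.8
```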

    @manojneuro @mjanderson09

    0I24N63
    @congzhaoyang
    Hello everyone, I'm a beginner with BrainIAK. Where can I find the utils.py for this class in https://brainiak.org/tutorials/02-data-handling/?