    Mihai Capotă
    @mihaic
    Is nibabel at the latest version?
    CameronTEllis
    @CameronTEllis
    This version is installed by nilearn: `Requirement already satisfied: nibabel>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from nilearn) (2.3.3)`, but there is a 3.0 version available. We are on the latest release of nilearn (0.6.0)
    Mihai Capotă
    @mihaic
    Please try upgrading nibabel.
    CameronTEllis
    @CameronTEllis
    In lieu of upgrading numpy? I will try that as a fix in a second
    Mihai Capotă
    @mihaic
    Yes. Installing BrainIAK in a fresh environment should install the latest version of nibabel.
    CameronTEllis
    @CameronTEllis
    To be clear, this is a fresh environment when I run it in colaboratory. And I misspoke before, I meant in lieu of downgrading numpy. I will try this now
    CameronTEllis
    @CameronTEllis
    I tried !pip install nibabel==3.0.1 along with the colab install instructions and that worked
    Is there a chance we can change the colab install instructions or is there some other easier fix we can do? I am going to have 25 people using colab/brainiak for the first time next week and would prefer to minimize wrinkles
    Mihai Capotă
    @mihaic
    Let me check.

    Installing BrainIAK in a fresh environment (done by pr-check.sh) gets me nibabel 3.0.1. I think the culprit is the first line of the Colab instructions:

      !pip install deepdish ipython matplotlib nilearn notebook pandas seaborn watchdog

    This should come after installing BrainIAK.

    CameronTEllis
    @CameronTEllis
    Ahhh I will try that to check
    CameronTEllis
    @CameronTEllis
    This didn't fix the issue: when deepdish was installed after brainiak, it didn't update any of the packages brainiak installed (like nibabel 2.0.2)
    Mihai Capotă
    @mihaic
    If you install BrainIAK first, it should install the latest nibabel. No later upgrade should be necessary.
    CameronTEllis
    @CameronTEllis
    It might be an ordering issue: since BrainIAK is installing nilearn, it is nilearn that is installing the wrong nibabel
    Although, as far as I can tell from the nilearn git, it should be installing the latest nibabel (https://github.com/nilearn/nilearn.git). Maybe it is a mirrors issue or something?
    Mihai Capotă
    @mihaic
    Could be. I was only trying on my machine so far. Let me try on Colab as well.
    Mihai Capotă
    @mihaic
    It turns out Colab has a lot of packages preinstalled, including nibabel. A fix would be to upgrade everything when installing BrainIAK:
    !pip install -U git+https://github.com/brainiak/brainiak
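    After running the upgrade, a quick way to confirm which nibabel actually got installed is to query the package metadata (a generic sketch; `installed_version` is a hypothetical helper name, not part of BrainIAK):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string for a package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# In a Colab cell after the upgrade you would check, e.g.:
#   installed_version("nibabel")  # should report 3.x, not the preinstalled 2.x
print(installed_version("pip"))
print(installed_version("not-a-real-package"))  # → None
```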
    CameronTEllis
    @CameronTEllis
    Aha, that does make sense, and it worked for me. That seems like a good long-term solution, right? If you agree, can we update the webpage instructions?
    Mihai Capotă
    @mihaic
    @CameronTEllis, see brainiak/brainiak.github.io#63
    CameronTEllis
    @CameronTEllis
    Great thank you, I just accepted and merged
    shalailahaas
    @shalailahaas

    Hello, @zacharybretton. Thanks for the feedback. Regarding the PEP 517 error, try adding the --no-use-pep517 flag to the install command:

    python3 -m pip install --no-use-pep517 brainiak

    See brainiak/brainiak#435 for more details.
    We'll look into the problem with the newest Conda version.

    Has there been any progress on this? I can't seem to install it (neither with conda nor pip) with the latest anaconda version.

    Mihai Capotă
    @mihaic
    Hello, @shalailahaas. We added --no-use-pep517 to the Pip documentation, but Conda is not affected. What errors are you seeing with Conda?
    mwagshul
    @mwagshul
    Hi. I'm new to this software package, but it looks like the ideal environment for fMRI experimental design. Would like to use fMRIsim functions to simulate the expected response, as a function of TR and stimulus onset times, for an event-related paradigm with three stimulus types. We have already collected pilot data without any stimuli, for construction of a noise model. Can you direct me to any additional help (online or otherwise) for constructing such simulations? Thanks.
    CameronTEllis
    @CameronTEllis
    Hi @mwagshul, glad to hear you are interested in working with fmrisim. A good place to start would be the example scripts we provide in brainiak. The examples can be translated to a 3-parameter model fairly easily. To generate noise with the parameters from your pilot data, this script may be more helpful. However, please feel free to reach out to me directly if you have any questions and I can help debug any problems you might have
    mwagshul
    @mwagshul
    Thanks, Cameron. These are very helpful, but I have a few questions. Will send via email - best. Mark
    LouisaSmith
    @LouisaSmith

    Hello-- I am working through tutorial 5 and am running into an issue in section 3.2 Regularization Example: L2 vs. L1. I have not modified the code and have not run into any other issues. Any suggestions appreciated.

    05-classifier-optimization - Jupyter Notebook.pdf

    kshitijd20
    @kshitijd20
    Hi, was anyone able to run it successfully on Windows?
    image.png
    After running the above command I tried to run the link in browser but it doesn't connect. Can anyone help please?
    Mihai Capotă
    @mihaic

    Hello, @kshitijd20. There are several links suggested in the message. Have you tried the one starting with "127"?

    If that doesn't work, could you please confirm your Docker for Windows is properly set up by testing Nginx in your browser according to the official Docker documentation?
    https://docs.docker.com/docker-for-windows/#explore-the-application

    Yoel Sanchez Araujo
    @YSanchezAraujo
    Hello, I was wondering if someone could clarify something for me. It's related to broadcasting using the searchlight module.
    For reference, here's a link to the code I'm running: https://gist.github.com/YSanchezAraujo/cad5135cd1f47d2c2eefc58a058eb3bf
    In particular, I'm wondering about line 81 in the for loop: https://gist.github.com/YSanchezAraujo/cad5135cd1f47d2c2eefc58a058eb3bf#file-searchlight-py-L81
    Would I need to manually "clear" the space referenced by sl.broadcast on each iteration of the for loop, or will it be overwritten by default? The concern is that broadcasting in a for loop like that, without explicitly flushing the previously broadcast data, would lead to the use of all up-to-n broadcast values, as opposed to only the one from the nth iteration
    Yoel Sanchez Araujo
    @YSanchezAraujo
    Ah, ok, sorry, it seems that it is indeed overwritten. If I'm interpreting this correctly, checking sl.bcast_var shows that it's changing each time
    CameronTEllis
    @CameronTEllis
    @LouisaSmith Hi, sorry about the slow reply, are you still having issues with this? From reading your error output, it seems like a problem with the LogisticRegression function: sklearn recently changed its error checking on the solver. You can manually specify the solver as an argument when creating the LogisticRegression, e.g. LogisticRegression(penalty='l1', solver='liblinear')
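    A minimal sketch of that fix on a toy dataset (the data here is synthetic, not the tutorial's):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data for illustration
X, y = make_classification(n_samples=100, n_features=5, random_state=0)

# Newer sklearn defaults to solver='lbfgs', which rejects penalty='l1';
# 'liblinear' (and 'saga') support the L1 penalty.
clf = LogisticRegression(penalty='l1', solver='liblinear')
clf.fit(X, y)
print(clf.score(X, y))
```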
    @YSanchezAraujo Yes the broadcast variable is being updated every time it is set. Is this the desired behavior?
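    As a toy illustration of that behavior (this is a stand-in class, not the actual Searchlight implementation), rebinding the broadcast attribute replaces the previous value rather than accumulating it:

```python
class ToySearchlight:
    """Stand-in for the Searchlight broadcast pattern (illustrative only)."""
    def __init__(self):
        self.bcast_var = None

    def broadcast(self, value):
        # Each call rebinds bcast_var, dropping the reference to the
        # previously broadcast value; nothing accumulates across calls.
        self.bcast_var = value

sl = ToySearchlight()
for i in range(3):
    sl.broadcast({"iteration": i})

print(sl.bcast_var)  # → {'iteration': 2}: only the last value survives
```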
    Yoel Sanchez Araujo
    @YSanchezAraujo
    @CameronTEllis yep, that is the desired behavior. thanks!
    Soukhin Das
    @soukhind2
    Hello, I had a question about fmrisim. When we are creating a design sequence where events are spaced close together in time, does fmrisim generate the combined signal taking into account the nonlinearity discussed in Friston et al. 1998, or does it linearly add up the signals?
    CameronTEllis
    @CameronTEllis
    Hi @soukhind2. The answer is mostly no. The event time course is convolved with the double-gamma HRF, which is a linear transform, but a non-linearity is applied by default by setting scale_function to 1. So imagine two events that occur within 1 second of each other. From what we know about the brain's response, there is a subadditivity of those two presentations, such that the evoked response will be larger than if only one event occurred, but likely not twice as large. If you set scaling to 0, then the peak of the output stimulus response will be twice as high, which would be wrong. If you set scaling to 1, then the peak of two events will be 1, just like the peak of 1 event, although the shape of the response is preserved (it is just rescaled). Hence, when scaling is set to 1, there is no additivity. Note that the same logic goes into GLMs using tools like FEAT: they also just assume a convolution of the event boxcar. Still, building a realistic non-linearity would be valuable, although it would likely depend largely on empirical details, since different events will elicit different amounts of additivity
    Here is some code to play around with this concept:
    import numpy as np
    from brainiak.utils import fmrisim as sim
    import matplotlib.pyplot as plt
    
    # Inputs for generate_stimfunction
    onsets = [10, 12]
    event_durations = [1]
    tr_duration = 2
    duration = 100
    scale_function = 1
    
    # Create the time course for the signal to be generated
    stimfunction = sim.generate_stimfunction(onsets=onsets,
                                             event_durations=event_durations,
                                             total_time=duration,
                                             )
    
    # Create the signal function
    signal_function = sim.convolve_hrf(stimfunction=stimfunction,
                                       tr_duration=tr_duration,
                                       scale_function=scale_function,
                                       )
    
    plt.plot(signal_function)
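    To see the linear-vs-scaled behavior without installing brainiak, here is a pure-NumPy sketch; the double-gamma shape below is a simplified stand-in, not fmrisim's exact HRF parameters:

```python
import math
import numpy as np

def gamma_pdf(t, a):
    """Gamma(a, 1) density, used to build a simplified double-gamma HRF."""
    return t ** (a - 1) * np.exp(-t) / math.gamma(a)

res = 10  # samples per second
t = np.arange(0.0, 30.0, 1.0 / res)
hrf = gamma_pdf(t, 6) - gamma_pdf(t, 16) / 6.0  # peak minus undershoot

def response(onsets, total_time=100, scale=True):
    stim = np.zeros(total_time * res)
    for onset in onsets:
        stim[int(onset * res)] = 1.0  # impulse at each event onset
    sig = np.convolve(stim, hrf)[: len(stim)]
    if scale:
        sig = sig / sig.max()  # cap the peak at 1, like scale_function=1
    return sig

one = response([10], scale=False)
two = response([10, 12], scale=False)
# Pure convolution is additive: the two-event peak is well above the
# one-event peak, though under 2x because the shifted peaks don't align.
print(two.max() / one.max())
print(response([10, 12], scale=True).max())  # → 1.0 after scaling
```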
    Soukhin Das
    @soukhind2
    @CameronTEllis That is a wonderful explanation. So it is generating the non-linearity by basically 'squashing' the signal so that the peaks are of the same magnitude, right?
    CameronTEllis
    @CameronTEllis
    Yes exactly. Then, the compute_signal_change function can be used to rescale it to be whatever magnitude is desired.
    Soukhin Das
    @soukhind2
    @CameronTEllis That was helpful, thanks!
    squinto13
    @squinto13
    Hello gitter peeps! I adopted an inverted encoding model tutorial by Tommy Sprague from MATLAB into Python. I see there is an IEM function recently added to Brainiak, but the tutorial here is more general (can be adapted for essentially any stimulus space) and I believe is informative about the technique. Please let me know if anyone has feedback or would like me to do anything further if it seems useful for Brainiak. Thanks! https://github.com/squinto13/IEM_python/blob/master/IEM_spatial.ipynb
    Mihai Capotă
    @mihaic
    Hello, @squinto13! Thanks for sharing your tutorial. I think @vyaivo would be interested, especially considering her Serences Lab background.
    Ari Kahn
    @ariekahn
    Hey! I was taking a quick look at the scikit-learn < 0.22 pin. It looks like test_mvpa_voxel_selection failure is pretty much just the result of SVC defaulting to gamma='scale' instead of gamma='auto' in 0.22 onwards.
    Not sure if it matters which is being used in the test, but switching it back to 'auto' seems to fix the test using 0.22.2
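    For reference, a toy sketch of the change in question (the actual test lives in brainiak's test suite; the data here is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=4, random_state=0)

# scikit-learn 0.22 changed the SVC default from gamma='auto' (1/n_features)
# to gamma='scale' (1/(n_features * X.var())); pinning gamma='auto'
# reproduces the pre-0.22 behavior the test assumed.
clf = SVC(kernel='rbf', gamma='auto').fit(X, y)
print(clf.score(X, y))
```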
    Mihai Capotă
    @mihaic
    Thanks for the investigation, @ariekahn! We will happily accept a pull request.
    Ari Kahn
    @ariekahn
    No problem! I'm having a bit of an issue with MPI and a few of the tests, but it seems to resolve the ones that run locally for me.
    Ari Kahn
    @ariekahn
    @mihaic before I update the examples, just wanted to make sure: the idea would be to explicitly force all example code to use the old SVC behavior for now? Looking into why scikit-learn decided to switch the default, but haven’t seen any clear reason so far
    1 reply