Dan Goodman
@thesamovar
@moritzaugustin cool! I love digging out those old notes. :) Did the notes actually prove helpful in the end or did you end up going a different direction?
moritzaugustin
@moritzaugustin
the notes helped me get an overview of what is already there (complementing your 2012 review paper), and concretely led us to use aspects of the NEMO simulator for our spike-propagation data structures/algorithms. and it helped me start the day today with a laugh ;-)
Dan Goodman
@thesamovar
that works! :)
Marcel Stimberg
@mstimberg
Um, I thought I saw a question here this morning (on my phone's Gitter app) about synapses with rectangular currents, but now it seems to have disappeared :confused: Am I looking in the wrong place, am I hallucinating, or did they delete the question...?
wrongu
@wrongu
You’re not hallucinating – I deleted my question once we found a workaround.
Well, we have 2 different approximate workarounds that still need to be tested
Marcel Stimberg
@mstimberg
Ah, that's reassuring :) BTW, for this kind of synaptic model we've usually recommended using a single Synapses object but with two pathways (see https://brian2.readthedocs.io/en/stable/user/synapses.html#multiple-pathways). The first pathway would make the post-synaptic potential/current/conductance go up, the second pathway would make it go down. Since each pathway can have a different synaptic delay, you can set the width of the rectangle by choosing the appropriate delay difference.
wrongu
@wrongu
Oh, this looks like a much better method! Thanks for pointing it out.
Marcel Stimberg
@mstimberg
You're welcome, we should write these things down somewhere and make them easier to find...
wrongu
@wrongu
Would it help if I made an issue on github, so that “square wave psp in brian” becomes searchable?
Marcel Stimberg
@mstimberg
I'd prefer to keep this out of the issues (apart from maybe an issue about improving the documentation), they already get (ab)used for a lot of things that are not actually issues/bugs. It's not super easy to find, but there's at least one statement about this in the mailing list archives: https://groups.google.com/d/msg/brian-development/mjAh3SmI_SA/1TCZ1UiGBQAJ
wrongu
@wrongu
@mstimberg thanks again for the help. We now have noisy square-wave PSPs :smile:
Follow up question: can synapses have randomly jittered delays?
Marcel Stimberg
@mstimberg
Sorry for the late reply, I did not get the notification...
Delays can certainly be random: you can assign to the delay variable in the same way as to all other variables, e.g. with a string that refers to rand(). For instance, this example has some randomness in the delay: https://brian2.readthedocs.io/en/stable/examples/synapses.synapses.html
Saurabh Kumar
@isaurabhkr
Hi, my name is Saurabh. I am very interested in working on "Improving Brian's parallelization capabilities using OpenMP" for GSoC '19. I have been playing around with OpenMP for a while and I want to improve Brian by making use of parallelisation. I have not used any other parallel computing techniques yet, but I am eager to learn them as well. It would be really helpful if you could guide me on the next steps, so I can become more acquainted with the project and codebase.
Thanks!
Marcel Stimberg
@mstimberg
I guess you've already seen it, but I replied in the neurostars forum: https://neurostars.org/t/gsoc-project-idea-10-3-improving-brians-parallelization-capabilities-openmp/3286
Saurabh Kumar
@isaurabhkr
Yes.
LNaumann
@LNaumann

Hey Brian2-team. I've been having problems running very long simulations and monitoring the population rate. It's a network of 5000 LIF neurons with some plasticity and other mechanisms. 5 hrs of simulation time finish just fine but for 8 hrs I get the following:

RuntimeError: Project run failed (...)

Eventually I assumed this is a memory issue (RAM/local?) and by now I'm quite convinced the PopulationRateMonitor is the problem. If I don't monitor anything, the simulation finishes with errors but if I only include the rate monitor it gives the error above again - both on my desktop and the compute cluster. The issue is that the PopulationRateMonitor records in every time step and I can only downsample after the full run. In theory this still shouldn't take more than ~1.1 GB (8 hrs × 3600 × 1000 / 0.1 × 32 bit). Does someone know what the problem could be? Or how to work around this other than introducing another NeuronGroup taking inputs from all neurons, computing a rate variable and tracking it with a StateMonitor?
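The back-of-the-envelope estimate quoted above checks out (numbers as given in the message):

```python
# 8 h of simulated time at dt = 0.1 ms -> 10 time steps per millisecond
hours = 8
steps = hours * 3600 * 1000 * 10   # 288,000,000 time steps
gigabytes = steps * 4 / 1e9        # 4 bytes per float32 rate value
print(steps, gigabytes)            # 288000000 1.152
```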

LNaumann
@LNaumann
sorry. no monitors -> finishes without errors of course.
Marcel Stimberg
@mstimberg
Hi, this is indeed a bit curious, it does not sound like that much... I assume you calculate with 32 bit because you already changed the dtype to float32? You seem to be using the standalone mode; can you either use set_device or device.build (if you are using it) with the debug=True keyword, or run the ./main binary in the standalone code directory directly? This could potentially give a more meaningful error message. Oh, and I assume that you are using the latest version of Brian (2.2.2.1)?
LNaumann
@LNaumann
Hey,
  • yes I'm using float32 and standalone mode
  • I already ran the main binary from the standalone code and the only message I got was "Killed"
  • currently I'm using Brian 2.2.1 on both cluster and desktop so I guess not the very latest one. Could that be an issue?
  • btw I ran the same simulations computing the rate within a StateMonitor at an interval of 1 second and they finished without errors
Marcel Stimberg
@mstimberg
Hmm, no the Brian version should be fine, I don't think we fixed anything related recently (https://brian2.readthedocs.io/en/stable/introduction/release_notes.html)
How much memory do you have? The monitor also records the time of each time step, and that is stored as a 64bit floating point value alongside the rate (I admit this is not optimal for saving memory...)
Marcel Stimberg
@mstimberg
Also, both rate and time are stored in dynamic arrays (i.e. STL vectors) and the growth factor for g++ (which I assume you are using) seems to be 2. If you are unlucky, both arrays could therefore actually use ~twice the size of what you think they should use.
I'll have to think about this a bit more, but for now the only easy solution I can think of would be to record only from a subset of the neurons (assuming your neurons are not ordered in any way, you can use PopulationRateMonitor(neurons[:500]) or similar).
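Putting the two effects mentioned above together (a float64 time array stored alongside the float32 rates, plus possible 2x over-allocation from std::vector's doubling strategy), a rough worst case for the 8 h run looks like this:

```python
steps = 8 * 3600 * 1000 * 10                # 2.88e8 steps at dt = 0.1 ms
rate_bytes = steps * 4                      # float32 rate values
time_bytes = steps * 8                      # float64 time values
worst_case = 2 * (rate_bytes + time_bytes)  # growth-factor-2 over-allocation
print(worst_case / 1e9)                     # 6.912 (GB)
```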
LNaumann
@LNaumann
The desktop has 32 GB memory and on the cluster it depends on the scheduling and other users but definitely more than enough. I give the cluster 10 GB per job just to be safe but it doesn't even throw a memory error as the error doesn't seem to reach back to python.
Also I'm setting the overall dtype using b2.prefs.core.default_float_dtype = np.float32, which should also set the dtype of the time variable, right?
How does recording the rate of fewer neurons reduce the memory? I wasn't aware it depends on the number of neurons as it's the population rate.
Marcel Stimberg
@mstimberg
Oh, 32 GB should indeed be more than enough, very curious, I'll try to investigate this on my own machine.
About the other points:
  • No, setting the default float type to 32bit does not change the time variable, unfortunately. This is partly a bug, I'd say: there is a good reason for the time variable of the clock to always be double precision (see #981) but we don't necessarily have to use the same precision for recording the variable in the monitor
  • Forget about recording from fewer neurons, that does indeed not make any sense, sorry for the confusion.
Marcel Stimberg
@mstimberg
If you are fine with subsampling the firing rate (i.e. not binning/smoothing, but simply only measuring it every X timesteps), then there is a solution that works around the fact that we do not allow setting dt in the PopulationRateMonitor initializer. You can set a different clock manually via:
pop_mon._clock = b2.Clock(dt=...)
LNaumann
@LNaumann
That might be an option, thank you. Although I'm not quite sure if I'm happy with subsampling. For the time being I think I'll stick with recording the rate using a StateMonitor from a single additional neuron that a fraction of the main population projects to. I already have that set up anyway because I interrupt simulations as soon as the rate gets too high.
Marcel Stimberg
@mstimberg
Oh, one more thing: did you only change the dtype via the default_float_dtype preference, or did you also give dtype=np.float32 as an argument to PopulationRateMonitor? I think the argument to PopulationRateMonitor overwrites the default dtype, and it defaults to np.float64 if not specified...
Marcel Stimberg
@mstimberg
Hmm, I still wonder whether there is something else going on. I just ran a simulation with a PopulationRateMonitor for 8 h of biological time (i.e. 8*60*60*second) and it worked fine, never going beyond ~5 GB, which is on the order of the expected memory usage. This was of course using the default time step of 0.1 ms, but I assumed that this is what you were using as well?
Maybe a stupid idea, but you are not running out of disk space, right (in standalone mode, all the results are written to disk in the end)? I think you'd get something like "not enough space on device" instead of "Killed" in that case, though.
LNaumann
@LNaumann
I only used the default_float_dtype way because I thought this would set all dtypes, but apparently it doesn't. That definitely gives another factor of 2.
Probably the network you just simulated is a lot simpler than my full one. The memory load without any monitors is already about 4 GB, so if the PopulationRateMonitor adds another 4-5 GB this gets much closer to my 10 GB cluster limit. Although it still doesn't explain the failure on my desktop.
I have 180 GB free on the desktop and even more on the cluster so that can't be it.
Marcel Stimberg
@mstimberg
Sure, my network only had the monitor and a small NeuronGroup, but I was thinking about your desktop with 32 GB
LNaumann
@LNaumann
Yes, using the default timestep. But mostly the rk4 integration method, if that adds anything
Marcel Stimberg
@mstimberg
No, that shouldn't make a difference here.
LNaumann
@LNaumann
Is there something like a built-in limit on the disk space Brian is allowed to use? I still don't understand why there's no information about a memory error in the error message, although that seems to be the underlying problem...
Marcel Stimberg
@mstimberg
No, Brian itself does not have any limit like that.
Under Linux, you get the simple message "Killed" if the kernel kills your process, e.g. because it uses too much memory.
But I think you should get the reason why it killed the process in /var/log/kern.log? Not sure
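For reference, a kernel OOM kill typically leaves a line like the one below in the log. This sketch runs the grep on a simulated sample line; on a real system you would run the same pattern against `dmesg` output or /var/log/kern.log (path varies by distribution).

```shell
# Simulated log line; on a real machine: dmesg | grep -iE 'killed process|out of memory'
sample='Out of memory: Killed process 12345 (main) total-vm:10485760kB'
echo "$sample" | grep -iE 'killed process|out of memory'
```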
LNaumann
@LNaumann
I can check that
Marcel Stimberg
@mstimberg
Just speculating wildly: do you maybe use set_device('cpp_standalone', directory=None), i.e. have standalone mode write to a directory in /tmp/?
LNaumann
@LNaumann
Yes, or at least similar. I use set_device('cpp_standalone', build_on_run=False) and then device.build(directory=None, compile=True, run=True, clean=True, debug=False)
Probably that is the same as setting directory=None in set_device?
As far as I remember I chose to do it this way because on the cluster I'm running simulations in parallel so I would have to specify different output files manually such that they don't overwrite each other. Correct me if I'm wrong.
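On the parallel-runs point, one common pattern is to give each job its own build directory instead of directory=None and pass it to device.build. The helper name and the source of job_id below are assumptions for illustration (e.g. a scheduler array index such as $SLURM_ARRAY_TASK_ID):

```python
import os

def standalone_directory(base, job_id):
    """Return a per-job standalone output directory so parallel runs don't collide."""
    path = os.path.join(base, 'standalone_{}'.format(job_id))
    os.makedirs(path, exist_ok=True)
    return path

# In each job (hypothetical usage; job_id comes from the scheduler):
# device.build(directory=standalone_directory('output', job_id),
#              compile=True, run=True, clean=True)
```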