    saubhagya-gatech
    @saubhagya-gatech
    My apologies for bad formatting in chat. I will be careful next time. I am new to Gitter. I appreciate your help.

    In the example, a workaround is used by providing the start position before each set of iterations as:

    if np.any(GR > 1.2):
        starts = [sampled_params[chain][-1, :] for chain in range(nchains)]
        while not converged:

    However, since it is provided before the while statement, the start position is never updated after the first set of iterations (2,000 iterations in my case). Hence I see a huge shift after 4,000, 6,000, 8,000, ... iterations, as the chains are set back to their locations at the end of iteration 2,000. For now I can move the start assignment after the while statement, but it would be nice to automate this through the history file. I will be happy to provide more input if needed.

    Oscar Ortega
    @ortega2247

    Hi Saubhagya, I am sorry for the confusion. So, if you want to restart a pydream run, you must pass values for the start and model_name parameters. The start values should be the last parameter values sampled in your previous pydream run. The model_name should be the same model name you used in the previous pydream run. The model_name parameter is used to load the history of all parameters sampled, the crossover probabilities and gamma probabilities. Pydream can't use the history of parameters sampled to set the start argument because the history file doesn't have information about the chain that originated the sampled parameters, that's why it is necessary to pass the start argument manually.

    Regarding the Robertson example, you are right. There is an error in that example: the starts variable should be passed to the run_dream function. It should read something like this:

        sampled_params, log_ps = run_dream(sampled_parameter_names, likelihood, niterations=niterations,
                                           nchains=nchains, multitry=False, gamma_levels=4, adapt_gamma=True, start=starts,
                                           history_thin=1, model_name='robertson_nopysb_dreamzs_5chain', verbose=True, restart=True)
    saubhagya-gatech
    @saubhagya-gatech
    Hi Oscar, that helps a lot. Thank you very much for the clarification. I appreciate your assistance.
    Just another correction in the Robertson example: `starts = [sampled_params[chain][-1, :] for chain in range(nchains)]` should be after the while statement. Currently it is before the while statement.
    if np.any(GR > 1.2):
        while not converged:
            starts = [sampled_params[chain][-1, :] for chain in range(nchains)]
    Oscar Ortega
    @ortega2247
    yes, thanks for pointing that out
    saubhagya-gatech
    @saubhagya-gatech
    Hi Oscar, another quick question. Does PyDream have the outlier detection functionality that is available in the original DREAM?
    Oscar Ortega
    @ortega2247
    PyDREAM implements the DREAM(zs) algorithm, which samples the parameter space based on past states. Thus, quoting Vrugt: "outlier chains do not need forceful treatment. Such chains can always sample their own past and with a periodic value of gamma=1 jump directly to the mode of the target."
    saubhagya-gatech
    @saubhagya-gatech
    Ah ok, that makes sense. Thanks a lot!
    Oscar Ortega
    @ortega2247
    Happy to help!
    saubhagya-gatech
    @saubhagya-gatech
    Good Morning,
    A question regarding history file saving in PyDream. When we restart PyDream, are the previous history files appended to or overwritten? I wonder if there is any difference between running a straight 50,000-step run vs. running 25,000 steps and then restarting the run from the 25,000th step with the same model name?
    Oscar Ortega
    @ortega2247
    Yep, pydream should append the values to the history file, crossover probabilities and gamma probabilities. If you want to save the chains' sampled parameters and their corresponding logps you have to do it manually:
        for chain in range(len(sampled_params)):
            np.save('robertson_nopysb_dreamzs_5chain_sampled_params_chain_' + str(chain) + '_' + str(total_iterations),
                    sampled_params[chain])
            np.save('robertson_nopysb_dreamzs_5chain_logps_chain_' + str(chain) + '_' + str(total_iterations),
                    log_ps[chain])
    It should be the same to run 50,000 steps in one run or two sequential runs of 25,000 steps.
    saubhagya-gatech
    @saubhagya-gatech
    Ah, thanks. I have a question about the convergence check. It is very likely I have a parameter space which is bimodal, and the inferior mode is sampled less frequently by the chains. Hence, the GR factor goes below 1.2 and comes back up when a new chain ventures into the inferior mode. I have pictures of the trace plots and GR factors attached. Does this mean that I just need to run the chains for a much longer time, until all chains have sampled the inferior mode and hence all have similar variances? Or can I assume that the target distribution has been sampled the first time I get GR < 1.2? There is a mention in the PyDream documentation:
    "also note that the presence of multiple modes in the posterior distributions will cause Gelman Rubin to fail, possibly falsely."
    [Attachments: Trace_plots.png, GR Plot.png]
    Oscar Ortega
    @ortega2247
    Yes, you are right. If the posterior distribution is bimodal, the Gelman-Rubin diagnostic will fail. I don't know what to do in those cases, but I found this paper where the authors developed a modified Gelman-Rubin test that works for multimodal distributions: https://arxiv.org/pdf/1812.09384.pdf. You could try applying it to your results.
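    For reference, the standard (single-mode) Gelman-Rubin diagnostic under discussion can be sketched in a few lines of numpy. This is an illustrative implementation, not PyDREAM's internal one; the 1.2 cutoff mentioned above is the usual rule of thumb.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor for a single parameter.

    chains: array-like of shape (n_chains, n_samples).
    Values near 1 suggest convergence; > 1.2 suggests the chains
    have not yet mixed.
    """
    chains = np.asarray(chains, dtype=float)
    n = chains.shape[1]
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return float(np.sqrt(var_hat / W))
```

    As the documentation quote below warns, this statistic can be misleading for multimodal posteriors: chains stuck in different modes inflate the between-chain variance even when each mode is well sampled.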
    saubhagya-gatech
    @saubhagya-gatech
    Thanks. I will try that. I am having a little bit of a hard time understanding how gamma is adapted. Since its default value is False, it appears that it is not a key functionality of DREAM(ZS). If I keep gamma adaptation disabled, I do not see any argument to pass a gamma_0 value. I was exploring this option to improve the acceptance rate by reducing the gamma_0 value. Or is increasing gamma_levels a better way to do this?
    Oscar Ortega
    @ortega2247
    You can set adapt_gamma=True if you want the gamma probabilities to be adapted. Otherwise, the gamma probabilities are defined as gamma_probabilities = [1/float(gamma_levels) for i in range(gamma_levels)], so if you want to include lower gamma values you can increase the integer passed to the gamma_levels argument.
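    As a minimal sketch of the default behaviour just described (matching the inline expression above):

```python
def default_gamma_probabilities(gamma_levels):
    # With adapt_gamma=False, each gamma level is chosen with equal
    # probability, so a larger gamma_levels spreads probability over
    # more (and smaller) gamma values.
    return [1 / float(gamma_levels) for _ in range(gamma_levels)]
```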
    vw-liane
    @vw-liane

    Hi there! I hope you and yours are well.

    New person to Gitter & PySB here. I have some questions after going through the tutorial at https://pysb.readthedocs.io/en/latest/tutorial.html.

    1) When I run the cmd visualization command, I don't get errors, but neither do I get any visual output. Would you know what could be causing this or things I should try to debug? I can post the screenshots (how do I attach something?).

    2) Is there a spot for me to learn more about the "Model calibration" and "Modules" sections? They are empty at the moment.

    3) I saw multiple typos in the tutorial. Can I send you somehow the locations of the typos? Sorry about that; spelling is one of those small things that bugs me.

    Thank you for your time and I look forward to hearing from you.

    Alex Lubbock
    @alubbock
    Hi @vw-liane,
    1) What operating system are you using? Unfortunately, I think Gitter doesn't have a way to upload images directly. You'd have to upload the image somewhere like imgur and paste the link here.
    2) For model calibration, PyDREAM has PySB integration (here is a publication on that). There's more detail on the PySB modules here. We'll update the tutorial with a few links in the next release. Thanks for bringing it to my attention.
    3) We'd welcome your suggestions for improvements and fixes to the documentation. You could either open an issue on Github describing the issues, or even better, if you're comfortable with Github you could make the edits to the tutorial directly and open a pull request.
    vw-liane
    @vw-liane

    Hi @alubbock ,
    Sorry for my delay; life is wild.
    1) I am on Windows 10 64 bit. For now I will work around the visualization aspect until I explore that deeper in my project. Thank you for the tip on using imgur!
    2) I'm checking out PyDREAM as well! Thank you for that insight.
    3) Yes, I think I could do the edit and pull request! That sounds like the best option.

    When I have more thoughts/ questions, I will let you know! Thank you for your help so far.

    Alex Lubbock
    @alubbock
    @vw-liane No problem; thanks for taking the time to put together a pull request. I've just merged it.
    saubhagya-gatech
    @saubhagya-gatech

    Dear forum members,

    I am a PyDream user. I have a quick question. Is there a way to define different forms of priors for different parameters?
    Currently I am using uniform priors for all my parameters, via the following line of code:

    parameters_to_sample = SampledParam(uniform, loc=lower_limits, scale=scale)
    sampled_parameter_names = [parameters_to_sample]

    Now, I want to give uniform priors to a few parameters and gaussians to the rest. What is the best way to go about it?

    Oscar Ortega
    @ortega2247

    Hi,

    You can use any of the scipy distributions (https://docs.scipy.org/doc/scipy/reference/stats.html) as a prior in pydream.

    To use a uniform and a gaussian distribution you can do something like this:

    from scipy.stats import norm, uniform
    from pydream.parameters import SampledParam

    par1 = SampledParam(uniform, loc=lower_limits, scale=scale)
    par2 = SampledParam(norm, loc=mean, scale=std)

    sampled_parameters = [par1, par2]

    Hope this is helpful

    saubhagya-gatech
    @saubhagya-gatech
    Oh ok. This makes so much sense. Essentially, we have to give a list of scipy distributions. Can we give distributions other than uniform or normal? Like some fitted kernel?
    something like this:
    [Attachment: image.png]
    Oscar Ortega
    @ortega2247
    Interesting. I have never done anything like that, but in theory it should be doable. Check this example from StackOverflow on creating a new scipy distribution. You just have to make sure that your distribution has rvs and interval methods.
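    For illustration, here is one hypothetical way to wrap a kernel density estimate fitted to samples so it exposes the rvs and interval methods mentioned above. This is a sketch, not tested against pydream's internals; pydream may require additional methods (e.g. logpdf for evaluating the prior), so logpdf is included as well. The class name and quantile-based interval are assumptions.

```python
import numpy as np
from scipy import stats

class KDEPrior:
    """Hypothetical prior wrapping a Gaussian KDE fitted to 1-D samples."""

    def __init__(self, samples):
        self.samples = np.asarray(samples, dtype=float)
        self.kde = stats.gaussian_kde(self.samples)

    def rvs(self, size=1):
        # Draw random variates from the fitted density.
        return self.kde.resample(size).flatten()

    def logpdf(self, x):
        # Log-density, delegated to the KDE.
        return self.kde.logpdf(x)

    def interval(self, alpha):
        # Approximate central interval covering fraction `alpha`,
        # estimated from the fitted sample's quantiles.
        lo = np.quantile(self.samples, (1 - alpha) / 2)
        hi = np.quantile(self.samples, 1 - (1 - alpha) / 2)
        return lo, hi
```

    An object like this could then be passed wherever a scipy frozen distribution is accepted, subject to the caveats above.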
    saubhagya-gatech
    @saubhagya-gatech
    I see. So, I can give any distribution that is definable in Scipy. That's great.
    pietromicheli
    @pietromicheli
    Hi all, I'm new to the forum.
    I have a very quick question: is it somehow possible to pass as input to the simulation a list of values (one for each time point) that describes how a Parameter (or Expression) entity must vary during the simulation?
    Oscar Ortega
    @ortega2247

    Hi @pietromicheli, if your parameters change over time as a function of the concentration of one or multiple species in your model, you can create an Expression and pass it as a rule's rate. To do something like this, you can check this example:

    https://github.com/pysb/pysb/blob/master/pysb/examples/expression_observables.py

    If you just want to pass a list of parameter values to be used at different time points, I am not aware of a function like that in pysb. However, you could simulate the first time points with the parameters that you want and then use the simulated results as the initial conditions of the next simulation that has different parameter values. For an example like that, take a look at this function:

    https://github.com/LoLab-VU/pydyno/blob/9b93be55f5e6159cc98847e41d1e44290f4dcfcb/pydyno/tools/miscellaneous_analysis.py#L9

    @alubbock might have some better ideas :)

    Leonard Harris
    @lh64
    @pietromicheli Can you explain a bit more what you’re trying to do or provide a short example? There are piecewise linear functions that might work but without an example it’s a bit hard for us to determine what the best solution might be for your problem.
    pietromicheli
    @pietromicheli

    Hi @ortega2247 and @lh64, thank you for the answers! :)

    @lh64 you're definitely right, I apologize. I'm trying to model a post-synaptic neuron activity:
    First, I simulate the gating of post-synaptic ion channels in a Pysb model. Then I use the trajectory of the open channels to calculate, for each time point, the quantity of Calcium ions that flow in the time unit. This post simulation calculation will create an array (of length equal to the time span array used for the first pysb simulation) that basically describes the time course of the post-synaptic calcium influx. What I'm trying to do now is to pass this array to a second Pysb model which contains some Calcium-dependent reactions. The goal here is to use the values of my array (one for each time step) to drive a synthesis-like reaction for a Calcium monomer that can be used by all the Calcium-dependent reactions. I really hope it's clear enough! :)

    Thanks a lot @ortega2247, your function seems super cool for creating a kind of discrete event during the simulation, but in this case I want my parameter to change continuously at each time step :)

    saubhagya-gatech
    @saubhagya-gatech
    Good Morning Forum Members,
    I am a PyDream user. I wanted to confirm that there is no strict requirement for the number of chains in DREAM(ZS), right? For 19 parameters, can I use 5 chains? For MCMC-DE, we had to use a minimum of D/2 chains, where D is the dimension of the parameter space. I read that DREAM(ZS) does not have any such requirement. Just wanted to double-check.
    Thanks
    Rodrigo Santibáñez
    @glucksfall
    Hello everyone. I hope you're well. Is there a way to delete a Rule from the model?
    Leonard Harris
    @lh64
    @glucksfall There's not a simple function for deleting a Rule but you can try manually recreating the model.rules object by excluding the Rule(s) you don't want, using model.reset_equations to clear out the reactions, species, etc. created by BNG, and then regenerating the network. Here's a small example script I put together doing that:
    Rodrigo Santibáñez
    @glucksfall
    Thank you @lh64.
    saubhagya-gatech
    @saubhagya-gatech

    Hello everyone,

    I was wondering if it is possible to provide multivariate distribution as prior instead of independent priors for parameters in PyDream?

    Oscar Ortega
    @ortega2247
    Hi @saubhagya-gatech, sorry for the late reply. DREAM(ZS) has a minimum requirement of 3 chains. Unfortunately, pydream doesn't support the use of multivariate distributions as a prior.
    saubhagya-gatech
    @saubhagya-gatech
    @ortega2247 Thank you!
    Leonard Harris
    @lh64
    @pietromicheli You should be able to accomplish what you want using the initials or param_values arguments to the run method of the simulator. I'm attaching an example here. I defined two simple rules: X() >> None and A() + X() >> X(). In the code, I first run a simulation of the full system. Then, I run two sequential simulations, the first with only the X() >> None rule and the second with only the A() + X() >> X() rule, where the concentration of X() from the first simulation is fed into the simulator for the second simulation using the initials argument to ScipyOdeSimulator.run. To be clear, the second "simulation" is actually a series of simulations from output time point to output time point, with the final concentration of A() from the previous step and the corresponding value of X() passed into the simulator at each step. I have it loop over three different numbers of output points (10, 100, and 1000) to show how the output of the sequential simulation approaches the true result from the full simulation as the number of output points increases. I'm attaching those plots here as well. Hopefully this helps. If you have any further questions just let us know.
    [Attachments: Figure_1.png, figure_2.png, figure_3.png]
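    The stepwise pattern Leonard describes (the final state of one segment fed in as the initial condition of the next, with a parameter changed per segment) can be sketched generically with scipy, independent of PySB. The model, parameter schedule, and names here are purely illustrative, not the attached script.

```python
import numpy as np
from scipy.integrate import solve_ivp

def decay(t, y, k):
    # Simple one-species decay model: dy/dt = -k * y
    return [-k * y[0]]

t_points = np.linspace(0.0, 1.0, 11)      # output time grid
k_values = np.linspace(0.5, 1.5, 10)      # one (illustrative) rate per segment

y0 = [1.0]
trajectory = [y0[0]]
for i in range(len(t_points) - 1):
    # Integrate one segment with this segment's parameter value...
    seg = solve_ivp(decay, (t_points[i], t_points[i + 1]), y0,
                    args=(k_values[i],))
    # ...then feed the final state in as the next initial condition.
    y0 = seg.y[:, -1]
    trajectory.append(float(y0[0]))
```

    As Leonard notes, the finer the output grid, the closer this piecewise result tracks a simulation with a continuously varying parameter.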
    pietromicheli
    @pietromicheli

    @lh64 Thank you for the answer!

    Actually my first approach to this problem was to iteratively run one simulation for each time point, just as you suggested, but I wasn't very happy with the computational cost of such a sequential simulation (especially for a high number of time points). However, looking at your code I noticed that I (stupidly) created a new simulator object at each iteration (via the ScipyOdeSimulator() constructor). It turned out that this unnecessary step tripled the cost of each iteration, compared to running the simulations (via the .run() method) with the same simulator object every time. The overall cost is still quite high obviously, but for a reasonably small number of time points it is tolerable.

    Your answer and your code have been really helpful, thanks a lot! :)

    saubhagya-gatech
    @saubhagya-gatech
    Hello @ortega2247
    Was there any change made to PyDream in the last month? I installed the package again and the "start" argument is not working. No matter what chain initialization I give, the chains start at random positions. Any help here would be appreciated. Before I reinstalled PyDream yesterday, I did not have this issue.