In the example, a workaround is used by providing the start positions before each set of iterations:
if np.any(GR > 1.2):
    starts = [sampled_params[chain][-1, :] for chain in range(nchains)]
    while not converged:
However, since it is provided before the while statement, the start position is never updated after the first set of iterations (2000 iterations in my case), so I see a huge shift after 4000, 6000, 8000, ... iterations, as the chains are set back to their location at the end of iteration 2000. For now I can move the start assignment after the while statement, but it would be nice to automate this through the history file. I will be happy to provide more input if needed.
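For anyone reading along, the GR > 1.2 test above refers to the Gelman-Rubin convergence diagnostic computed per parameter. Here is a rough numpy sketch of that statistic (not pydream's own implementation, and the chains are made-up random draws):

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor per parameter.

    chains: array of shape (nchains, niterations, nparams).
    Values near 1 indicate convergence; the example uses 1.2 as the cutoff.
    """
    chains = np.asarray(chains, dtype=float)
    m, n, _ = chains.shape
    chain_means = chains.mean(axis=1)            # per-chain means, (m, nparams)
    B = n * chain_means.var(axis=0, ddof=1)      # between-chain variance
    W = chains.var(axis=1, ddof=1).mean(axis=0)  # within-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
# Three well-mixed chains drawn from the same distribution -> GR close to 1
GR = gelman_rubin(rng.normal(size=(3, 2000, 4)))
converged = not np.any(GR > 1.2)
```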
Hi Saubhagya, I am sorry for the confusion. So, if you want to restart a pydream run, you must pass values for the start and model_name parameters. The start values should be the last parameter values sampled in your previous pydream run. The model_name should be the same model name you used in the previous pydream run. The model_name parameter is used to load the history of all parameters sampled, the crossover probabilities, and the gamma probabilities. Pydream can't use the history of sampled parameters to set the start argument because the history file doesn't have information about which chain each sampled parameter came from; that's why it is necessary to pass the start argument manually.
Regarding the Robertson example, you are right. There is an error in that example: the starts variable should be passed to the run_dream function. It should read something like this:
sampled_params, log_ps = run_dream(sampled_parameter_names, likelihood, niterations=niterations, nchains=nchains, multitry=False, gamma_levels=4, adapt_gamma=True, start=starts, history_thin=1, model_name='robertson_nopysb_dreamzs_5chain', verbose=True, restart=True)
if np.any(GR > 1.2):
    while not converged:
        starts = [sampled_params[chain][-1, :] for chain in range(nchains)]
for chain in range(len(sampled_params)):
    np.save('robertson_nopysb_dreamzs_5chain_sampled_params_chain_' + str(chain) + '_' + str(total_iterations), sampled_params[chain])
    np.save('robertson_nopysb_dreamzs_5chain_logps_chain_' + str(chain) + '_' + str(total_iterations), log_ps[chain])
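To make the control flow concrete, here is a minimal runnable sketch of a convergence loop that refreshes the start positions on every pass. run_dream_stub is a made-up stand-in for pydream's run_dream (it just returns random per-chain sample arrays), so only the loop structure is being illustrated:

```python
import numpy as np

nchains, niterations, nparams = 5, 2000, 3
rng = np.random.default_rng(1)

def run_dream_stub(start):
    """Stand-in for pydream's run_dream: one (niterations, nparams)
    sample array per chain, wandering from the given start points."""
    return [np.asarray(s) + rng.normal(scale=0.1, size=(niterations, nparams))
            for s in start]

starts = [np.zeros(nparams) for _ in range(nchains)]
total_iterations = 0
converged = False
while not converged:
    sampled_params = run_dream_stub(starts)
    total_iterations += niterations
    # Refresh the start positions from the *latest* samples on each pass,
    # instead of computing them once before the loop.
    starts = [sampled_params[chain][-1, :] for chain in range(nchains)]
    converged = total_iterations >= 6000  # placeholder for the GR < 1.2 check
```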
Hi there! I hope you and yours are well.
New person to Gitter & PySB here. I have some questions after going through the tutorial at https://pysb.readthedocs.io/en/latest/tutorial.html.
1) When I run the cmd visualization command, I don't get errors, but neither do I get any visual output. Would you know what could be causing this or things I should try to debug? I can post the screenshots (how do I attach something?).
2) Is there a spot for me to learn more about the "Model calibration" and "Modules" sections? They are empty at the moment.
3) I saw multiple typos in the tutorial. Is there a way I can send you the locations of the typos? Sorry about that; spelling is one of those small things that bugs me.
Thank you for your time and I look forward to hearing from you.
Hi @alubbock ,
Sorry for my delay; life is wild.
1) I am on Windows 10 64 bit. For now I will work around the visualization aspect until I explore that deeper in my project. Thank you for the tip on using imgur!
2) I'm checking out PyDREAM as well! Thank you for that insight.
3) Yes, I think I could do the edit and pull request! That sounds like the best option.
When I have more thoughts/questions, I will let you know! Thank you for your help so far.
Dear forum members,
I am a PyDream user. I have a quick question. Is there a way to define different forms of priors for different parameters?
Currently I am using uniform priors for all my parameters, with the following line of code:
parameters_to_sample = SampledParam(uniform, loc=lower_limits, scale=scale)
sampled_parameter_names = [parameters_to_sample]
Now, I want to give uniform priors to a few parameters and Gaussian priors to the rest. What is the best way to go about it?
You can use any of the scipy distributions (https://docs.scipy.org/doc/scipy/reference/stats.html) as a prior in pydream.
To use a uniform and a gaussian distribution you can do something like this:
from scipy.stats import norm, uniform

par1 = SampledParam(uniform, loc=lower_limits, scale=scale)
par2 = SampledParam(norm, loc=mean, scale=std)
sampled_parameters = [par1, par2]
Hope this is helpful!
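As far as I understand, SampledParam wraps a frozen scipy distribution, so each prior exposes scipy's usual sampling and log-density interface. A quick sketch of the two distributions themselves, with made-up values standing in for lower_limits, scale, mean, and std:

```python
from scipy.stats import norm, uniform

# Hypothetical values standing in for lower_limits / scale / mean / std
uni = uniform(loc=-5.0, scale=10.0)   # uniform on [-5, 5]
gauss = norm(loc=0.0, scale=1.0)      # standard normal

# Both expose the same interface: sampling and log-density
draw_u = uni.rvs(random_state=0)
draw_g = gauss.rvs(random_state=0)
logp = uni.logpdf(0.0) + gauss.logpdf(0.0)
```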
Hi @pietromicheli, if your parameters change over time as a function of the concentration of one or multiple species in your model, you can create an Expression and pass it as a rule's rate. To do something like this, you can check this example:
If you just want to pass a list of parameter values to be used at different time points, I am not aware of a function like that in pysb. However, you could simulate the first time points with the parameters that you want and then use the simulated results as the initial conditions of the next simulation, which has different parameter values. For an example like that, take a look at this function:
@alubbock might have some better ideas :)
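To illustrate that chained-simulations idea outside pysb, here is a toy sketch with scipy's odeint on a made-up decay model: run the first stretch with one rate, then restart from the final state with a different rate.

```python
import numpy as np
from scipy.integrate import odeint

def decay(y, t, k):
    # Toy model: simple exponential decay, dy/dt = -k * y
    return -k * y

t1 = np.linspace(0, 5, 51)
t2 = np.linspace(5, 10, 51)

# Stage 1: simulate with the first parameter value (k = 0.5)
y1 = odeint(decay, y0=[100.0], t=t1, args=(0.5,))
# Stage 2: reuse the final state as the initial condition, new parameter (k = 0.1)
y2 = odeint(decay, y0=y1[-1], t=t2, args=(0.1,))

final = y2[-1, 0]
```

The same handoff (final species amounts of run one become the initial conditions of run two) is what the pysb version of this approach would do.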
Hi @ortega2247 and @lh64, thank you for the answers! :)
@lh64 you're definitely right, I apologize. I'm trying to model a post-synaptic neuron activity:
First, I simulate the gating of post-synaptic ion channels in a Pysb model. Then I use the trajectory of the open channels to calculate, for each time point, the quantity of Calcium ions that flow per unit of time. This post-simulation calculation will create an array (of length equal to the time span array used for the first pysb simulation) that basically describes the time course of the post-synaptic calcium influx. What I'm trying to do now is to pass this array to a second Pysb model which contains some Calcium-dependent reactions. The goal here is to use the values of my array (one for each time step) to drive a synthesis-like reaction for a Calcium monomer that can be used by all the Calcium-dependent reactions. I really hope it's clear enough! :)
Thanks a lot @ortega2247, your function seems super cool for creating a kind of discrete event during the simulation, but in this case I want my parameter to change continuously at each time step :)
You can do this by modifying the model.rules object, excluding the Rule(s) you don't want, using model.reset_equations to clear out the reactions, species, etc. created by BNG, and then regenerating the network. Here's a small example script I put together doing that: