BridgingBot
@GitterIRCbot
[slack] <Datseris> The main estimator is indeed the histogram-based one, that's what we reference as well, while others are simply mentioned. But once again, I do not agree that sparsity would be of any concern here because (a) the histograms are at most 2D, which makes it rather straightforward to compute with binning, and (b) the user gets to decide how densely the histograms are covered by deciding how many random parameter samples to pick. The sparser the joint pdf is, the easier it will be for this method to identify that "the observable depends strongly on the parameter p".
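A minimal sketch of the binned estimator described above, from paired samples of a parameter p and a scalar observable x; the helper name mutual_information and the bin count are made up for illustration, not taken from any package:

using StatsBase

# Histogram (binned) estimate of the mutual information between a parameter
# and a scalar observable, computed from paired samples. Purely illustrative.
function mutual_information(p::AbstractVector, x::AbstractVector; nbins = 32)
    h = fit(Histogram, (p, x); nbins = nbins)   # 2D histogram over (parameter, observable)
    P = h.weights ./ sum(h.weights)             # joint probability table
    Pp = vec(sum(P, dims = 2))                  # marginal of the parameter
    Px = vec(sum(P, dims = 1))                  # marginal of the observable
    mi = 0.0
    for i in axes(P, 1), j in axes(P, 2)
        P[i, j] > 0 || continue
        mi += P[i, j] * log(P[i, j] / (Pp[i] * Px[j]))
    end
    return mi                                   # in nats; larger means stronger dependence
end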
BridgingBot
@GitterIRCbot
[slack] <adammaclean> late to this but I've been teaching an undergrad class with "Modeling life" - Garfinkel, Guo, Shevtsov. It starts from extremely low expectations of maths (i.e. basically no calculus) but nonetheless quickly builds up to the analysis of interesting model behaviors, e.g. via phase planes, linear stability, etc., in a concept-heavy manner. I think for biologists looking for a way in, it can be ideal.
[slack] <adammaclean> It does not have any code though - I think @Datseris's book is better for that (which I've just bought btw! Looking forward to reading it)
BridgingBot
@GitterIRCbot
[slack] <Frederik Banning> Ahaha, nope, that's definitely not suited for my introductory series on writing ABMs in Julia and even less so for the third post. 😄
[slack] <Frederik Banning> You should really just look into how to create a new space for Agents.jl if that's the kind of model you want to build. 🙂
BridgingBot
@GitterIRCbot
[slack] <krishnab> Haha, I was of course just fooling around @Frederik Banning. Yeah, I read through the documentation and creating a new space does not seem that bad. I basically just need to write implementations for those 5 functions, add_single_agent, etc. Are there any pitfalls or challenges to look out for? I imagine I will encounter some confusing error messages that I have not seen before while developing the new space 🙂.
BridgingBot
@GitterIRCbot
[slack] <Ramiro Vignolo> hi everybody! this question might be too naïve but I just wanted to ask if it is possible to add agents at each step, i.e. make the population grow. thanks!
[slack] <Ramiro Vignolo> or do I have to set them at the beginning and somehow "activate" them at each step?
BridgingBot
@GitterIRCbot
[slack] <Datseris> of course you can add agents during the simulation: https://juliadynamics.github.io/Agents.jl/stable/examples/predator_prey/ ...
BridgingBot
@GitterIRCbot
[slack] <krishnab> @Ramiro Vignolo yes, exactly what George was saying. in the agents' step function, you can add as many additional agents as you would like. It is really easy.
BridgingBot
@GitterIRCbot
[slack] <Manuela Vanegas Ferro> Almost same as Frederik! 😄 In my case I’d take Graphs off the list and just add Distributed for large batch runs.
BridgingBot
@GitterIRCbot
[slack] <krishnab> @Manuela Vanegas Ferro oh man, I would love to see a tutorial on running large scale simulations using Distributed. Is there a tutorial for that?
BridgingBot
@GitterIRCbot
[slack] <Frederik Banning> You have to do that in the model_step! function, not the agent_step! function.
[slack] <Frederik Banning> I mean, sure, theoretically you also have access to the model in the agents' stepping function, but that one is iterated over internally instead of just run once. So the model stepping function is much better suited for that purpose.
[slack] <Frederik Banning> I've used it for running parameter scans locally on my machine across all eight cores. Adding external cores to the process would be relatively easy, e.g. via ssh connections.
[slack] <Frederik Banning> Maybe it's something to include at a (much) later stage in my abm tutorial. 🙂
[slack] <Frederik Banning> And of course, @Manuela Vanegas Ferro, I also use some built-in packages like Downloads, Statistics and Distributed. Didn't think that these were noteworthy here but you're totally right
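A hedged sketch of the Distributed setup described above; initialize_model, agent_step!, model_step! and count_happy are hypothetical placeholders for whatever defines and summarizes your own model:

using Distributed
addprocs(8)   # local workers; remote machines can be added over ssh, e.g. addprocs([("user@host", 4)])

@everywhere begin
    using Agents
    # Your model code must be available on every worker; the calls below are placeholders.
    function run_one(p)
        model = initialize_model(; noise = p)
        step!(model, agent_step!, model_step!, 500)
        return count_happy(model)
    end
end

results = pmap(run_one, 0.0:0.1:1.0)   # one independent run per parameter value, spread over workers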
BridgingBot
@GitterIRCbot
[slack] <Ramiro Vignolo> super clear, thank you very much!
BridgingBot
@GitterIRCbot
[slack] <Datseris> In the wolf-sheep model it actually makes more sense to make agents in the agent stepping function, because each wolf or each sheep may generate an offspring at each step according to how well they are fed. So I guess whether you generate agents in the model or agent stepping function is context specific. But the point is: you can do it wherever the hell you want; in fact (NOT RECOMMENDED) you could even generate agents from within the data collecting functions if you are enough of a hacker ;D
[slack] <Frederik Banning> True true
[slack] <Ramiro Vignolo> understood
BridgingBot
@GitterIRCbot
[slack] <krishnab> @Frederik Banning yeah you are correct. I misspoke about the agent step function. The most common application would likely generate agents in the model step, but like George said the user can generate agents anywhere.
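For reference, a minimal sketch of spawning agents during a run with the Agents.jl API in use at the time of this chat (v5-era); the Sheep type and its reproduction rule are made up for illustration:

using Agents, Random

@agent Sheep GridAgent{2} begin
    energy::Float64
end

function agent_step!(sheep, model)
    sheep.energy -= 1
    if sheep.energy > 20                              # made-up reproduction rule
        sheep.energy /= 2
        add_agent!(sheep.pos, model, sheep.energy)    # offspring at the parent's position
    end
end

model = AgentBasedModel(Sheep, GridSpace((10, 10)); rng = MersenneTwister(42))
add_agent!(model, 30.0)            # one initial sheep with energy 30, placed at random
step!(model, agent_step!, 10)      # new agents appear whenever the rule fires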
BridgingBot
@GitterIRCbot
[slack] <Datseris> > Mutable struct fields may now be annotated as const to prevent changing them after construction, providing for greater clarity and optimization ability of these objects (JuliaLang/julia#43305).
This is a new feature of Julia 1.8, and it seems to me to be a large benefit for ABMs. In many cases agent structs are initialized with some fields that change, but many that do not.
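A tiny sketch of that Julia 1.8 feature, using a made-up Walker type rather than anything from Agents.jl:

mutable struct Walker
    const id::Int    # never changes after construction
    pos::Float64     # updated every step
end

w = Walker(1, 0.0)
w.pos = 1.5          # fine: pos is an ordinary mutable field
# w.id = 2           # would throw an error on Julia >= 1.8, since id is const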
BridgingBot
@GitterIRCbot
[slack] <Ramiro Vignolo> Hi! Another simple question. Does it make sense to run a monte carlo simulation on top of the agent based simulation? Thanks!
BridgingBot
@GitterIRCbot
[slack] <Datseris> monte carlo is a fancy name for "random sampling", so you can apply it to literally anything scientific whose initial condition can be randomized.
[slack] <Datseris> (also, please consider never using "monte carlo" again as a name. I am on an eternal quest to eradicate this utterly stupid name from usage in science, and replace it with the hyper-massively more understandable "random sampling")
[slack] <Ramiro Vignolo> thanks for the quick reply!
BridgingBot
@GitterIRCbot
[slack] <Ramiro Vignolo> but let me elaborate a little bit more. Is it common to perform random samplings for this kind of method? I mean, would it be okay if I just run the same model:
AgentBasedModel(Agent{T}; properties, scheduler, rng=MersenneTwister(seed))
with different seeds?
[slack] <Ramiro Vignolo> Because you referred to a random initial condition, but what I was referring to was a random seed, all else being equal
BridgingBot
@GitterIRCbot
[slack] <Datseris> well, it depends on whether you actually use the seed somewhere...? E.g. for populating the model with random agents at random positions...? Or for moving to random positions.
BridgingBot
@GitterIRCbot
[slack] <Ramiro Vignolo> yes, model.rng is used in many, many places
BridgingBot
@GitterIRCbot
[slack] <Datseris> New package Virtual.jl may solve the performance deficits Agents.jl has for multi-agent models. If anyone is interested in hacking a difficult, but interesting and impactful problem, have a look here : https://github.com/JuliaDynamics/Agents.jl/issues/445#issuecomment-1187730099
BridgingBot
@GitterIRCbot
[slack] <Manuela Vanegas Ferro> Yes, I've used it for parameter scans and that sort of context where you need to run your model a lot of times independently. I have access to my university's cluster, so I do it there. @krishnab, is this what you were referring to with "large scale simulations"? In my not-so-long-term plans, I'd like to write a tutorial or something similar, but @Frederik Banning might beat me to it! 😄
BridgingBot
@GitterIRCbot
[slack] <Manuela Vanegas Ferro> Hi @Ramiro Vignolo. Yes, it makes sense to run the same model and just vary the seed. If you're using random sampling in your model (which I understand you are because of the use of model.rng), there is no way for you to know if the result you see after just one run is an outlier or a common behavior of your model. You'll want to run it a number of times in order to see the distribution of the results. Running the model once per seed value is a way to sample from the distribution of all the possible outcomes your model could give you. Now, the number of seed values, and therefore model runs, that you need in order to have a good sample depends on the model itself, but there are some heuristics to try to figure it out. Did I understand your question correctly?
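A minimal sketch of this seed ensemble, with initialize_model and total_wealth as hypothetical stand-ins for your own model constructor and summary observable:

using Agents, Random, Statistics

results = map(1:100) do seed
    model = initialize_model(; rng = MersenneTwister(seed))   # same parameters, new seed
    step!(model, agent_step!, model_step!, 500)
    total_wealth(model)                                       # one summary number per run
end

mean(results), std(results)   # the spread across seeds shows how variable the model's outcomes are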
BridgingBot
@GitterIRCbot
[slack] <Ramiro Vignolo> Hi Manuela! Thank you for your super clear response and clarifications. You understood correctly and I will be doing that. Thanks!!
BridgingBot
@GitterIRCbot
[slack] <mcabbott> I missed this last week, sorry. Indeed, no hash collision. Mutual information indeed sounds like a good idea.
[slack] <mcabbott> I have not read this chapter 7, but if you have trajectories (N variables at M time points), then it seems like histogram-like methods will be hopeless, as R^(M*N) will be far too large to discretise. But you can do monte carlo things, or a KDE approximation.
BridgingBot
@GitterIRCbot
[slack] <Datseris> Guys, it seems like you are not really following what I am saying. It doesn't matter how many trajectories you have. You get an observable from them, and that's one dimension. You have many data points, not many dimensions. If you think of a scatter plot with 10000 points, is this 10000-dimensional for you, or is it two dimensions? The second dimension is the parameter values that all the data points have. KDE methods are also hopeless in thousand-dimensional spaces as far as I know.
BridgingBot
@GitterIRCbot
[slack] <mcabbott> Right, if you observe one number, not a whole trajectory, then that's obviously lower-dim, and some simple binning may work fine.
[slack] <mcabbott> But if you don't know the right summary statistic, you can also compute MI directly between parameters and observed trajectories, which is the high-dim problem. It's likely that some parameter combinations still won't matter.
BridgingBot
@GitterIRCbot
[slack] <Ramiro Vignolo> Hi everybody. Quick question. So, once I run the simulation 100 times, I would like to plot the solution using the mean and variance at each time step with a ribbon. Is there a way to do it without coding too much? Something like DifferentialEquations.jl has for EnsembleSolutions: https://diffeq.sciml.ai/stable/features/ensemble/ (go to the bottom and see the plot)
BridgingBot
@GitterIRCbot
[slack] <krishnab> Say George, not to beat a dead horse, as they say, but I was wondering if there is any relationship between the mutual information criterion and these things I keep seeing about "Global Sensitivity Analysis". Are these the same thing, or totally different? I am referring to this video, specifically: https://www.youtube.com/watch?v=wzTpoINJyBQ&t=116s . So would this Global Sensitivity Analysis perspective let someone detect bifurcations or qualitative changes in the system, or would it simply identify which parameters were responsible for most of the dynamics, as ostensibly mutual information will also do?
BridgingBot
@GitterIRCbot
[slack] <hperez16> I don’t think there is something like what you show in the link, but if you have your results stored in a dataframe, it should just require a call to transform! to calculate the mean and variance for each time step and then plot the mean column with the variance column as the ribbon.
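A hedged sketch of that suggestion, using groupby + combine for the per-step summary, and assuming the collected data frame adf has a :step column and an observable column :price across all runs (both names are illustrative):

using DataFrames, Statistics, Plots

per_step = combine(groupby(adf, :step),
                   :price => mean => :price_mean,
                   :price => std  => :price_std)

plot(per_step.step, per_step.price_mean;
     ribbon = per_step.price_std,          # shaded band of one standard deviation
     xlabel = "step", ylabel = "mean price", legend = false)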
BridgingBot
@GitterIRCbot
[slack] <Datseris> I'd recommend actually coding "too much" and writing the five lines of code required to do this manually :) it will be better for you in the long run
[slack] <Ramiro Vignolo> simulations worked great!: https://files.slack.com/files-pri/T68168MUP-F03Q7K83BB7/download/mean-price.png
[slack] <Ramiro Vignolo> thanks for the cool package, super easy to use
Amelia Ernst
@AnErnst
Hello! I'm BACK
I've found something horrible, please help :(
Also note: switching to FiniteWall does not fix this
Amelia Ernst
@AnErnst
Nevermind, I'm dumb and had my normal vectors pointing in the wrong direction