    Zachary Sunberg
    @zsunberg
    is helpful, especially the part about type stable functions
    and avoiding abstract fields
    BridgingBot
    @GitterIRCbot
    [slack] <Robert Moss> Pardon the PR spam, I've bumped the [compat] version for POMDPPolicies to 0.4 in a few JuliaPOMDP packages
    BridgingBot
    @GitterIRCbot
    [slack] <boutonm> I'll try to merge them today; we should add CompatHelper on those packages
    Zachary Sunberg
    @zsunberg
    Great, thanks Robert! Yes, we should put CompatHelper on all packages that we can
    Aarti Malhotra
    @aarti9
    Hi, has anyone run the code in the ARDESPOT repo? I need some help. I could run the basic TigerPOMDP one; I'm trying to run tree.jl but facing issues
    Zachary Sunberg
    @zsunberg
    Hi Aarti, can you give some more information about what you're trying to do?
    the tree.jl file is not meant to be run by itself
    Aarti Malhotra
    @aarti9
    Thanks Zachary. I have my own custom .pomdpx file like the one the original DESPOT code uses, and I have debugged that code, but I was looking for a simpler implementation with tree visualization, so I turned my attention to the Julia version, ARDESPOT. If I can run and debug it and get the visualization, that will be great
    Eventually my Python code needs to interact with the DESPOT solver
    I can debug the original DESPOT code in the CLion IDE
    BridgingBot
    @GitterIRCbot
    [slack] <Tabea Wilke> Thank you! That solved the generate_sor error, but it doesn't work yet (it wants me to implement observation); I will give it another try.
    BridgingBot
    @GitterIRCbot
    [slack] <sunbergzach> For POMCPOW, you need an explicit observation model so that it can weight the particles
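    A minimal sketch of what an explicit observation model looks like. All names here are illustrative: MyPOMDP and the 0.85 sensor accuracy are made up, and SparseCat comes from POMDPModelTools (later folded into POMDPTools).

    ```julia
    using POMDPs
    using POMDPModelTools  # provides SparseCat

    # Hypothetical POMDP with Int states, Int actions, Bool observations
    struct MyPOMDP <: POMDP{Int,Int,Bool} end

    # POMCPOW needs this explicit distribution so it can weight each
    # particle by the likelihood of the observation actually received.
    function POMDPs.observation(m::MyPOMDP, a::Int, sp::Int)
        p = sp > 0 ? 0.85 : 0.15   # made-up sensor accuracy
        return SparseCat([true, false], [p, 1.0 - p])
    end
    ```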
    BridgingBot
    @GitterIRCbot
    [slack] <idontgetoutmuch> I am trying to use ParticleFilters
    [slack] <idontgetoutmuch> I can see how to use predict and reweight but I can’t easily see how to use resample
    [slack] <idontgetoutmuch> Are there any examples that I have missed?
    [slack] <idontgetoutmuch> I’ve put a few more details here: JuliaPOMDP/ParticleFilters.jl#45
    BridgingBot
    @GitterIRCbot
    [slack] <idontgetoutmuch> I feel I am getting close with this
    julia> resample(ImportanceResampler, WeightedParticleBelief(pm,wm,sum(wm)), pm)
    ERROR: MethodError: no method matching resample(::Type{ImportanceResampler}, ::WeightedParticleBelief{AgentBasedModel{Union{Grass, Sheep, Wolf},GridSpace{SimpleGraph{Int64},2,Int64},Agents.var"#by_union#9"{Bool,Bool},Nothing}}, ::Array{AgentBasedModel{Union{Grass, Sheep, Wolf},GridSpace{SimpleGraph{Int64},2,Int64},Agents.var"#by_union#9"{Bool,Bool},Nothing},1})
    Closest candidates are:
      resample(::Any, ::WeightedParticleBelief, ::Any, !Matched::Any, !Matched::Any, !Matched::Any, !Matched::Any, !Matched::Any) at /Users/dom/.julia/packages/ParticleFilters/fCWcv/src/resamplers.jl:17
      resample(::Any, ::WeightedParticleBelief, !Matched::Union{POMDPs.MDP, POMDPs.POMDP}, !Matched::Any, !Matched::Any, !Matched::Any, !Matched::Any, !Matched::Any) at /Users/dom/.julia/packages/ParticleFilters/fCWcv/src/resamplers.jl:19
      resample(!Matched::ImportanceResampler, ::AbstractParticleBelief{S}, !Matched::AbstractRNG) where S at /Users/dom/.julia/packages/ParticleFilters/fCWcv/src/resamplers.jl:37
    BridgingBot
    @GitterIRCbot
    [slack] <idontgetoutmuch> b1 = resample(ImportanceResampler(n), WeightedParticleBelief(pm,wm,sum(wm)), MersenneTwister(42)) works
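    For anyone hitting the same MethodError: the fix is that ImportanceResampler must be instantiated with a particle count rather than passed as a type, and resample takes an RNG as its third argument. A self-contained sketch with toy integer particles (rather than the Agents.jl models in the error above):

    ```julia
    using ParticleFilters
    using Random

    pm = collect(1:10)                  # toy particle states
    wm = rand(MersenneTwister(1), 10)   # unnormalized weights

    b = WeightedParticleBelief(pm, wm, sum(wm))

    # Instantiate the resampler with the number of particles to draw
    b1 = resample(ImportanceResampler(10), b, MersenneTwister(42))
    ```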
    BridgingBot
    @GitterIRCbot
    [slack] <Andrew Dinhobl> In the POMDPs.jl ecosystem, is there a way to get a vector of the training "trajectory", or the reward per episode? I would like to compare multiple methods by plotting their convergence
    BridgingBot
    @GitterIRCbot
    [slack] <boutonm> Yes, you should check out the POMDPSimulators.jl docs, in particular the HistoryRecorder
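    A minimal sketch of the HistoryRecorder workflow, using TigerPOMDP and a random policy purely as placeholders:

    ```julia
    using POMDPs, POMDPModels, POMDPPolicies, POMDPSimulators

    pomdp = TigerPOMDP()
    policy = RandomPolicy(pomdp)

    # Record the full (s, a, o, r) trajectory of one episode
    hr = HistoryRecorder(max_steps=20)
    hist = simulate(hr, pomdp, policy)

    # Per-step rewards, e.g. for plotting convergence across episodes
    rewards = [step.r for step in hist]
    total = discounted_reward(hist)
    ```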
    BridgingBot
    @GitterIRCbot
    [slack] <Andrew Dinhobl> I think the HistoryRecorder is what I was looking for. Thank you!
    BridgingBot
    @GitterIRCbot
    [slack] <ExpandingMan> hello all, anybody still in this channel? I have some confusion about online solvers
    [slack] <Robert Moss> Yeah we're around! What's up?
    BridgingBot
    @GitterIRCbot
    [slack] <ExpandingMan> does simulate re-initialize solvers on every call? I'm running the MCTS solver, and looking at the tree state, I feel like it is doing that (though I haven't completely figured out how to read the tree state yet)
    [slack] <ExpandingMan> I guess a better question is: what's the recommended way to train online solvers? Is it with simulate?
    BridgingBot
    @GitterIRCbot
    [slack] <ExpandingMan> actually, maybe I'm misunderstanding how this whole thing works. I suppose MCTS just assumes that the state space is so large that every encounter is new, so it has to create an entirely new tree at every step no matter what
    [slack] <ExpandingMan> that seems to be what's going on
    BridgingBot
    @GitterIRCbot
    [slack] <ExpandingMan> yup, OK, so this was a stupid question. My initial thinking was that because many MDPs have a specific starting state (e.g. most games), this thing would have some sort of hysteresis
    [slack] <ExpandingMan> but I guess that's only in modified versions of MCTS
    BridgingBot
    @GitterIRCbot
    [slack] <boutonm> There might be an option to reuse the tree when you initialize MCTS
    [slack] <boutonm> reuse_tree::Bool: If this is true, the tree information is re-used for calculating the next plan. Of course, clear_tree! can always be called to override this. default: false
    [slack] <boutonm> I do not remember exactly how it is implemented though, whether it just saves compute time, or whether it adds new information (saved tree + what you would have computed at this time).
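    For reference, the option quoted above is set when constructing the solver. A sketch with MCTS.jl on a toy grid world (the parameter values are arbitrary):

    ```julia
    using POMDPs, POMDPModels, MCTS

    mdp = SimpleGridWorld()

    # reuse_tree=true keeps tree information between calls to `action`
    # instead of planning from scratch; clear_tree! resets it manually.
    solver = MCTSSolver(n_iterations=100, depth=10,
                        exploration_constant=5.0, reuse_tree=true)
    planner = solve(solver, mdp)
    a = action(planner, GWPos(1, 1))
    ```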
    BridgingBot
    @GitterIRCbot
    [slack] <ExpandingMan> ah, I see... yeah, the kinds of things I have in mind probably also involve setting custom Q and N functions
    [slack] <ExpandingMan> it's pretty cool that the ability to do that is built in, though... this looks almost flexible enough that you could plug Flux into it and do an AlphaGo
    BridgingBot
    @GitterIRCbot
    [slack] <boutonm> There is an example notebook on setting Q and N, the estimate_value parameter is also very useful 🙂
    There is an AlphaZero.jl package as well but it is not part of the POMDPs.jl ecosystem.
    Tomas Omasta
    @Omastto1

    Hi all!

    As part of my bachelor thesis, I am to implement a finite horizon interface for POMDPs.jl.

    So far, I have played around with DiscreteValueIteration.jl and made direct modifications to the ordered_states method using a reversed topological sort to achieve the desired behaviour, and I have tested it on a few finite horizon MDP benchmarks (some GridWorld examples and an acyclic graph example).
    However, as I am new to Julia, I am unsure as to how to implement a more general interface, such as the one described here.

    So far, based on the proposed interface in the readme, my plan is the following:
    1) The user implements the finite horizon interface in their MDP (stage_states, stage_actions, ordered_states, ...).
    2) The user selects a solver that uses the explicit POMDPs.jl interfaces.
    3) The user calls FiniteHorizonMDP's solve(solver, mdp), which:
    a) solves each epoch in reversed order with correctly ordered states and
    b) returns a joined Policy struct.

    My plan is also to implement Infinite Horizon to Finite Horizon conversion later on.

    Can you either confirm or reject that this is an appropriate approach?

    I will be glad for any tip which will make the implementation better.

    Tom
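    The plan above, sketched as pseudocode (every name here — stage_states, solve_stage, FiniteHorizonPolicy, horizon — belongs to the proposed interface, not to any existing API):

    ```julia
    # Pseudocode only: illustrates the proposed finite-horizon solve flow
    function solve(solver, mdp)  # mdp implements the finite horizon interface
        stage_policies = []
        for t in horizon(mdp):-1:1            # solve each epoch in reversed order
            states = stage_states(mdp, t)     # correctly ordered states per stage
            push!(stage_policies, solve_stage(solver, mdp, t, states))
        end
        return FiniteHorizonPolicy(reverse!(stage_policies))  # joined policy
    end
    ```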

    Zachary Sunberg
    @zsunberg
    Hi Tom, great! That would be a really helpful contribution! Could we move this over to GitHub Discussions, since I think that is a better place to have more in-depth technical discussions?
    Perhaps the best way to communicate about this is through some simple examples, say a 1-dimensional grid world with 10 states and rewards on either end with a horizon of 5.
    Zachary Sunberg
    @zsunberg
    Do you want to take a stab at writing out what you think an implementation of that problem should look like, and then we can comment on it?
    Tomas Omasta
    @Omastto1
    That's awesome!
    I will give it a look during Christmas and will let you know afterward.
    Do you prefer using POMDPs.jl's Discussions, or will you create them for FiniteHorizonPOMDPs.jl as well?
    Zachary Sunberg
    @zsunberg
    Ok, great. I think for now we should keep all of the discussions for JuliaPOMDP packages on the POMDPs.jl Discussions tab
    BridgingBot
    @GitterIRCbot
    [slack] <rejuvyesh> Hey Omastto! Feel free to create a repository and we can take a look and help transfer to JuliaPOMDP. Just ping any of us on GitHub (@rejuvyesh)
    danortega2014
    @danortega2014
    Hi all! I was wondering if I could get some help with a generative POMDP I'm creating. I believe my issue is defining the state space, which
    can't be explicitly defined because I am repeatedly adding to it in the transition function. Here is a link to the code: https://github.com/danortega2014/v3-/blob/main/README.md
    I get the error "ERROR: MethodError: Cannot convert an object of type Array{Float64,1} to an object of type Int64"
    Thanks again for the help!
    danortega2014
    @danortega2014
    I'm going to post this in the discussion tab.
    Zachary Sunberg
    @zsunberg
    Hi @danortega2014 I actually just opened your link and will look at it in a few minutes - but yes, I think it is better to post in the discussions tab
    danortega2014
    @danortega2014