COCPORN
@COCPORN
I don't understand what you mean by 30 minutes. The repo I shared is meant for exploratory coding at best, so if that is what you're referring to it is probably random.
Tom Nelson
@Zeroshi
one thing i really need to get up to speed on is the release pipeline for Orleans
COCPORN
@COCPORN
Let's do the meetup then!
I will be the heckler, you can be the voice of reason.
Tom Nelson
@Zeroshi
yeah, you just had it hard-coded. i wasn't sure if your actors generally cover 90% of the needs in 30 min
i already have it on my calendar!
i ported it to linkedin as well
COCPORN
@COCPORN
In related news, I still haven't gotten a good (in my opinion) response to my idea of only releasing grains on configurable memory pressure.
Tom Nelson
@Zeroshi
define "memory pressure"
COCPORN
@COCPORN
As in: I have a bunch of grains, and they default to being retired after 2 hours of idling.
But also: I have 14GB of RAM on this computer. And retiring them when I have 10GB available is not sensible.
Tom Nelson
@Zeroshi
ah, got it
COCPORN
@COCPORN
So you can set a limit for when you start retiring grains to, say, 6GB.
Everything else will work the same.
It is literally just an if-statement in the codebase.
Tom Nelson
@Zeroshi
yeah, i like that. use a sorted list based on max idle time
COCPORN
@COCPORN
No, you don't even need to do that. It already buckets grains, so it knows which ones to retire.
Tom Nelson
@Zeroshi
i had to do that for sql cache
i meant the silo's list
COCPORN
@COCPORN
Just don't even start that process if you have a lot of memory available. This is my suggestion.
Tom Nelson
@Zeroshi
good idea
i was thinking of it as a trigger based on memory usage
then clean up x amount of the sorted list that is owned by the silo
COCPORN
@COCPORN
It runs the eviction of old grains on a timer.
Tom Nelson
@Zeroshi
yeah, but that's 1 trigger, the other can be the memory amount
COCPORN
@COCPORN
info: Orleans.Runtime.Catalog[100507]
      Before collection#2: memory=13MB, #activations=3, collector=<#Activations=2, #Buckets=1, buckets=[1h:59m:47s.591ms->2 items]>.
info: Orleans.Runtime.Catalog[100508]
      After collection#2: memory=13MB, #activations=3, collected 0 activations, collector=<#Activations=2, #Buckets=1, buckets=[1h:59m:47s.582ms->2 items]>, collection time=00:00:00.0086615.
It does this.
You could literally just have an if statement on memory to tell it to not collect.
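Roughly something like this (just a sketch; the option type and the sweep entry point are made up, they are not the actual Orleans Catalog code):

using System;

// Hypothetical option: skip idle-grain collection while there is plenty of free memory.
public class MemoryAwareCollectionOptions
{
    // Do not collect idle grains while at least this much memory is still free.
    public long MinimumFreeMemoryMb { get; set; } = 6_000;
}

public class IdleGrainSweeper
{
    private readonly MemoryAwareCollectionOptions options;

    public IdleGrainSweeper(MemoryAwareCollectionOptions options) => this.options = options;

    // Called from the existing collection timer.
    public void Sweep()
    {
        var usedMb = GC.GetTotalMemory(forceFullCollection: false) / (1024 * 1024);
        var freeMb = GC.GetGCMemoryInfo().TotalAvailableMemoryBytes / (1024 * 1024) - usedMb;

        // The "if statement" from the discussion: plenty of headroom, do nothing.
        if (freeMb > options.MinimumFreeMemoryMb)
        {
            return;
        }

        // ...otherwise fall through to the normal bucket-based eviction of idle grains.
    }
}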
Tom Nelson
@Zeroshi
yup, you are right
COCPORN
@COCPORN
I am not sure if I am right. It is hard to combine with drinking and yapping, my two favorite hobbies after drinking and yapping.
Anyhow, I am heading out. Nice talking to you so far.
Tom Nelson
@Zeroshi
why couldn't you add an if statement to determine if it should run? @sergeybykov any idea?
you too @COCPORN
have a great night
COCPORN
@COCPORN
Sergey is going to be: "OMG, this again. *rolls eyes*". But that's fine. :)
Tom Nelson
@Zeroshi
oh hahaha @sergeybykov Sorry! hahah
Zonciu Liang
@Zonciu
Which option could make silo failure detection faster, so that grains transfer to other silos faster?
Tom Nelson
@Zeroshi
you may want to expand on that question a bit more @Zonciu
Zonciu Liang
@Zonciu
When a silo crashes or is shut down, grains will be transferred to other silos, but it takes time to detect the silo's status. How can I make the detection timeout shorter?
Reuben Bond
@ReubenBond
ClusterMembershipOptions.ProbeTimeout @Zonciu
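e.g. in your silo configuration (values are illustrative; check the defaults for your Orleans version, and note that shorter timeouts mean faster detection but more false positives on a flaky network):

using System;
using Orleans.Configuration;

// siloBuilder is the ISiloBuilder used when configuring the host.
siloBuilder.Configure<ClusterMembershipOptions>(options =>
{
    options.ProbeTimeout = TimeSpan.FromSeconds(5);   // how long each liveness probe waits
    options.NumMissedProbesLimit = 2;                 // missed probes before a silo is suspected dead
});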
Zonciu Liang
@Zonciu
@ReubenBond Thanks
Alex Meyer-Gleaves
@alexmg
Can anyone confirm that when injecting multiple IPersistentState<T> instances into the constructor of a Grain, they are loaded from the persistence store concurrently during activation? Looking at the code it appears a PersistentStateBridge<T> is created for each IPersistentState<T> instance and those subscribe to the SetupState lifecycle stage. Inside the LifecycleSubject these seem to be started concurrently for a given stage and awaited using Task.WhenAll. I'm hoping this is indeed the case, because splitting up state for a single grain is a nice way of reducing the amount of data that needs to be persisted if a logical partitioning is present. The example I have is configuration data and runtime data, where configuration data is only required on activation to configure the work to be done, and runtime data is stored more frequently as work is performed. That gives you a performance boost on the write side, so I am hoping the read side on activation is concurrent so there is a read benefit to be had too.
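i.e. something like this (grain and state names are made up; the question is whether both states are read concurrently on activation):

using System.Threading.Tasks;
using Orleans;
using Orleans.Runtime;

public interface IWorkerGrain : IGrainWithGuidKey { Task DoWorkAsync(); }

public class WorkerConfig { public int BatchSize { get; set; } }
public class WorkerRuntimeData { public long ItemsProcessed { get; set; } }

public class WorkerGrain : Grain, IWorkerGrain
{
    private readonly IPersistentState<WorkerConfig> config;
    private readonly IPersistentState<WorkerRuntimeData> runtime;

    // Two logically separate pieces of state injected into one grain.
    public WorkerGrain(
        [PersistentState("config", "configStore")] IPersistentState<WorkerConfig> config,
        [PersistentState("runtime", "runtimeStore")] IPersistentState<WorkerRuntimeData> runtime)
    {
        this.config = config;
        this.runtime = runtime;
    }

    public async Task DoWorkAsync()
    {
        // Only the runtime state is written on the hot path;
        // the configuration state is read once at activation and left alone.
        runtime.State.ItemsProcessed++;
        await runtime.WriteStateAsync();
    }
}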
David Christensen
@OracPrime
I think I might have misunderstood cross-silo behaviour :(. I have a grain which I Get an instance of in my client. The client calls an Init method on the interface to set some values. This updates the grain in Silo1. Silo2 executes some code which requires the initial grain. I was expecting it to get a copy of the grain from Silo1 or (more likely) for any calls on it to go cross-silo to Silo1. But I seem to be getting a new instance instantiated in Silo2 with uninitialised values. Have I missed the point?
Jorge Candeias
@JorgeCandeias
@OracPrime That "more likely" expectation is correct. Orleans doesn't "copy" grains, it directs messages sent from interfaces to wherever those grains live. The interface hides a proxy, "getting the grain" only gets the proxy, not the grain instance itself. What type of grain is it (regular or stateless), and are you using the same primary key on all calls? Could the grain be getting deactivated without persisting its state between calls?
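i.e. roughly this (the interface and key are made up for illustration):

using System.Threading.Tasks;
using Orleans;

public interface IParams : IGrainWithStringKey
{
    Task Init(int value);
    Task<int> GetValue();
}

public static class ProxyExample
{
    // Works the same whether it runs inside a silo or on a client:
    // GetGrain only builds a local proxy, it does not activate or copy the grain.
    public static async Task<int> ReadSharedParams(IGrainFactory grainFactory)
    {
        var paramsRef = grainFactory.GetGrain<IParams>("shared-params");

        // The call is routed to the single activation, wherever it lives.
        return await paramsRef.GetValue();
    }
}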
David Christensen
@OracPrime
I have a regular grain (Params) and a stateless ParamsCache (one per silo). I instantiate Params, call Params.Init. Thereafter things use ParamsCache, which in its activation gets Params, interrogates its values, and caches them. It works fine in the first silo to fire up, but a few seconds later a call to ParamsCache (with the same id) activates on Silo2, instantiates a new Params in Silo2, which hasn't had Init called on it, and returns the unset values from that.
@JorgeCandeias I'm a bit surprised that the later call to ParamsCache doesn't get routed to the one that already exists.
I'll put some deactivation logging in, just to be sure.
Reuben Bond
@ReubenBond
It should be routed to the original grain, @OracPrime. How soon after startup is all of this happening? Is it possible the cluster is still being configured during this time?
Without persistent state you can't guarantee that there are single instances for something which is purely in-memory. We are making the directory pluggable, which will open up options there and allow you to choose the behavior on a per-grain-type basis
David Christensen
@OracPrime
It's after server startup. The two activations in the different silos have the same timestamp to within a second. It doesn't have persistent state though.
Reuben Bond
@ReubenBond
How soon after startup, immediately?