Lars Thomas Denstad
@COCPORN
It is literally just an if-statement in the codebase.
Tom Nelson
@Zeroshi
yeah, i like that. use a sorted list based on max idle time
Lars Thomas Denstad
@COCPORN
No, you don't even need to do that. It already buckets grains, so it knows which ones to retire.
Tom Nelson
@Zeroshi
i had to do that for sql cache
i meant the silo's list
Lars Thomas Denstad
@COCPORN
Just don't even start that process if you have a lot of memory available. This is my suggestion.
Tom Nelson
@Zeroshi
good idea
i was thinking of it as a trigger based on memory usage
then clean up x amount of the sorted list that is owned by the silo
Lars Thomas Denstad
@COCPORN
It runs the eviction of old grains on a timer.
Tom Nelson
@Zeroshi
yeah, but that's 1 trigger, the other can be the memory amount
Lars Thomas Denstad
@COCPORN
info: Orleans.Runtime.Catalog[100507]
      Before collection#2: memory=13MB, #activations=3, collector=<#Activations=2, #Buckets=1, buckets=[1h:59m:47s.591ms->2 items]>.
info: Orleans.Runtime.Catalog[100508]
      After collection#2: memory=13MB, #activations=3, collected 0 activations, collector=<#Activations=2, #Buckets=1, buckets=[1h:59m:47s.582ms->2 items]>, collection time=00:00:00.0086615.
It does this.
You could literally just have an if statement on memory to tell it to not collect.
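A minimal sketch of the suggestion above, purely illustrative; the method and field names are hypothetical and not the actual Orleans Catalog internals:

// Hypothetical sketch: skip the timer-driven collection pass entirely while the
// process still has plenty of memory headroom. "memoryHeadroomThresholdMb" and
// "CollectEligibleActivations" are made-up names, not real Orleans APIs.
private void OnCollectionTimerTick()
{
    long usedMb = GC.GetTotalMemory(forceFullCollection: false) / (1024 * 1024);

    if (usedMb < memoryHeadroomThresholdMb)
    {
        // Plenty of memory available: leave idle activations alone for now.
        return;
    }

    CollectEligibleActivations();
}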
Tom Nelson
@Zeroshi
yup, you are right
Lars Thomas Denstad
@COCPORN
I am not sure if I am right. It is hard to combine with drinking and yapping, my two favorite hobbies after drinking and yapping.
Anyhows, I am heading out. Nice to talk to you so far.
Tom Nelson
@Zeroshi
why couldn't you add an if statement to determine if it should run? @sergeybykov any idea?
you too @COCPORN
have a great night
Lars Thomas Denstad
@COCPORN
Sergey is going to be: "OMG, this again. rollseyes" . But that's fine. :)
Tom Nelson
@Zeroshi
oh hahaha @sergeybykov Sorry! hahah
Zonciu Liang
@Zonciu
Which option could make silo failure detection faster? To make grains transfer to other silos faster
Tom Nelson
@Zeroshi
you may want to expand on that question a bit more @Zonciu
Zonciu Liang
@Zonciu
When a silo crashes or closes, grains will be transferred to other silos, but it takes time to detect the silo's status. How can I make the detection timeout shorter?
Reuben Bond
@ReubenBond
ClusterMembershipOptions.ProbeTimeout @Zonciu
Zonciu Liang
@Zonciu
@ReubenBond Thanks
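For reference, configuring that on the silo host might look roughly like the following (the values are illustrative, and NumMissedProbesLimit is an assumed companion setting; a shorter probe timeout means faster detection at the cost of more false positives on a flaky network):

using System;
using Orleans.Configuration;
using Orleans.Hosting;

var builder = new SiloHostBuilder()
    .Configure<ClusterMembershipOptions>(options =>
    {
        options.ProbeTimeout = TimeSpan.FromSeconds(5); // how long to wait for each probe reply
        options.NumMissedProbesLimit = 2;               // assumption: missed probes before voting a silo dead
    });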
Alex Meyer-Gleaves
@alexmg
Can anyone confirm that when injecting multiple IPersistentState<T> instances into the constructor of a Grain, they are loaded from the persistence store concurrently during activation? Looking at the code, it appears a PersistentStateBridge<T> is created for each IPersistentState<T> instance and those subscribe to the SetupState lifecycle stage. Inside the LifecycleSubject they seem to be started concurrently for a given stage and awaited using Task.WhenAll. I'm hoping this is indeed the case, because splitting up state for a single grain is a nice way of reducing the amount of data that needs to be persisted if a logical partitioning is present. The example I have is configuration data and runtime data, where configuration data is only required on activation to configure the work to be done, and runtime data is stored more frequently as work is performed. That gives you a performance boost on the write side, so I'm hoping the read side on activation is concurrent and there's a read benefit to be had too.
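For context, a minimal sketch of the pattern being described; the grain, interface, and state/storage names here are made up for illustration:

using System.Threading.Tasks;
using Orleans;
using Orleans.Runtime;

public interface IWorkerGrain : IGrainWithGuidKey
{
    Task DoWorkAsync();
}

public class WorkerConfig { public int BatchSize { get; set; } }
public class WorkerRuntimeData { public int ItemsProcessed { get; set; } }

public class WorkerGrain : Grain, IWorkerGrain
{
    private readonly IPersistentState<WorkerConfig> _config;       // read on activation, rarely written
    private readonly IPersistentState<WorkerRuntimeData> _runtime; // written frequently as work happens

    public WorkerGrain(
        [PersistentState("config", "store")] IPersistentState<WorkerConfig> config,
        [PersistentState("runtime", "store")] IPersistentState<WorkerRuntimeData> runtime)
    {
        _config = config;
        _runtime = runtime;
    }

    public async Task DoWorkAsync()
    {
        // Only the small, frequently changing part is persisted here.
        _runtime.State.ItemsProcessed += _config.State.BatchSize;
        await _runtime.WriteStateAsync();
    }
}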
David Christensen
@OracPrime
I think I might have misunderstood cross-silo behaviour :(. I have a grain which I Get an instance of in my client. The client calls an Init method on the interface to set some values. This updates the grain in Silo1. Silo2 executes some code which requires the initial grain. I was expecting it to get a copy of the grain from Silo1 or (more likely) for any calls on it to go cross-silo to Silo1. But I seem to be getting a new instance instantiated in Silo2 with uninitialised values. Have I missed the point?
Jorge Candeias
@JorgeCandeias
@OracPrime That "more likely" expectation is correct. Orleans doesn't "copy" grains, it directs messages sent from interfaces to wherever those grains live. The interface hides a proxy; "getting the grain" only gets the proxy, not the grain instance itself. What type of grain is it (regular or stateless), and are you using the same primary key on all calls? Could the grain be getting deactivated without persisting its state between calls?
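To illustrate the point about proxies (the interface name and key below are hypothetical):

using System.Threading.Tasks;
using Orleans;

public interface IParamsGrain : IGrainWithStringKey
{
    Task InitAsync();
    Task<string> GetValueAsync();
}

public static class ProxyExample
{
    // Both references below are just proxies; calls through either are routed to
    // the single logical grain identified by the key "tenant-42".
    public static async Task RunAsync(IClusterClient client, IGrainFactory grainFactory)
    {
        IParamsGrain fromClient = client.GetGrain<IParamsGrain>("tenant-42");
        IParamsGrain fromSilo = grainFactory.GetGrain<IParamsGrain>("tenant-42");

        await fromClient.InitAsync();                  // activates the grain if needed
        string value = await fromSilo.GetValueAsync(); // handled by that same activation
    }
}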
David Christensen
@OracPrime
I have a regular grain (Params) and a stateless ParamsCache (one per silo). I instantiate Params, call Params.Init. Thereafter things use ParamsCache, which in its activate gets Params, interrogates its values, and caches them. It works fine in the first silo to fire up, but a few seconds later a call to ParamsCache (with the same id) activates on silo2, instantiates a new Params in silo 2, which hasn't had init called on it, and returns the unset values from that.
@JorgeCandeias I'm a bit surprised that the later call to ParamsCache doesn't get routed to the one that already exists.
I'll put some deactivation logging in, just to be sure.
Reuben Bond
@ReubenBond
It should be routed to the original grain, @OracPrime. How soon after startup is all of this happening? Is it possible the cluster is still being configured during this time?
Without persistent state you can't guarantee that there are single instances for something which is purely in-memory. We are making the directory pluggable which will open up the options to choose behavior there and allow you to choose on a per-grain-type basis
David Christensen
@OracPrime
It's after server startup. The two activations in the different silos have the same timestamp to within a second. It doesn't have persistent state though.
Reuben Bond
@ReubenBond
How soon after startup, immediately?
Jorge Candeias
@JorgeCandeias
Would this be something you could put in a repro to look at in an issue?
Jim
@jfritchman
Newbie question. If I have grains for security (users, profiles, etc.) and other grains for a cart, order, etc., do I deploy those to the same silo or do I create more domain-oriented silos?
Reuben Bond
@ReubenBond
@JorgeCandeias is right, @OracPrime - this is better served by an issue where we can keep everything together.
@jfritchman It's up to you. I would keep them together for simplicity, but if your requirements call for some physical separation then you can split them up.
Max Shnurenok
@MaxCrank
Hi again guys, our integration attempts are under way, so another question has appeared. Let's say we have a data structure representing an Account which has an internal ID as well as several more IDs bound to external systems. E.g. Account has Id as well as ExternalId1, ExternalId2, etc. In different use cases we have only one of these IDs as the input for getting an Account... but if we use Orleans to operate on an Account through the Grain, we need uniform access to leverage reentrancy and other features properly, i.e. we need to use exactly the same ID. I see several ways of handling this, each of them requiring intermediate storage calls to look up the internal Account ID by one of the external IDs, which isn't really good. Formally, we could extract each piece of external info into a separate entity to follow "one entity, one ID" (and also "best practices"), but that won't come without a price either... Maybe somebody has had the same case?
Samir Mowade
@sammym1982

Without persistent state you can't guarantee that there are single instances for something which is purely in-memory.

@ReubenBond can you expand on the above statement from :point_up: January 20, 2020 9:06 AM?
By persistent state, did you mean grains that have state using the [PersistentState] or similar attribute? I want to make sure I'm reading it right :) Not related to the original question, but I wanted to understand whether there are some nuances I should be aware of.

Reuben Bond
@ReubenBond

The in-built grain directory implementation in Orleans is eventually consistent, @sammym1982. That can become an issue when the cluster isn't in a steady state and calls for the same grain come in quick succession - if nothing is done to mitigate it. The mitigation is persistent state. Any time a grain activation tries to write state, a concurrency check is performed to make sure the grain activation has the latest state (i.e., it has seen all previous writes). If that is not true, then an exception is thrown and the grain is deactivated, making the convergence quicker.

Without that mitigation (or something to a similar effect), it's possible for multiple activations of the grain to exist simultaneously for a short period of time. Those duplicates are proactively killed as the directory converges.

A similar mitigation is to use leasing. Strong single-activation guarantees are something which we intend to add as an optional feature (it comes at a cost, but it could be opted into on a per-grain-type basis).
Samir Mowade
@sammym1982
Thanks @ReubenBond. Does the runtime look for a particular etag concurrency exception in such a case, or will any exception type during WriteStateAsync trigger that correction?
Reuben Bond
@ReubenBond
Yes, that's right
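As a side note, the etag-conflict case typically surfaces as Orleans.Storage.InconsistentStateException from the storage provider. A rough sketch only, assuming _state is an IPersistentState<T> field on the grain:

using System.Threading.Tasks;
using Orleans.Storage;

public async Task SaveAsync()
{
    try
    {
        await _state.WriteStateAsync();
    }
    catch (InconsistentStateException)
    {
        // Another activation wrote newer state first. Rethrow rather than swallow,
        // so the runtime can deactivate this stale activation and converge faster.
        throw;
    }
}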
David Christensen
@OracPrime
Thank you @ReubenBond. The problem I had was that I was using LocalClustering and had failed to specify a primary silo. So it now looks like this (there's more after it, but this is the crucial bit):
ISiloHostBuilder builder = new SiloHostBuilder()
    .AddGrainControl()
    .UseLocalhostClustering(cBaseSiloPort + instanceNum, cBaseGatewayPort + instanceNum, new IPEndPoint(IPAddress.Loopback, cBaseSiloPort))
Previously I was lacking the third parameter. With that the silos "play nicely together" and stuff works.
Harald Schult Ulriksen
@Ulriksen
@MaxCrank we have a case where several external IDs can point to our internal ID (added bonus - the external IDs can change). We use what we call an "IdMapper" for this. This grain maps an external id to our internal id (we also have a reverse mapper). Since this is used at the edge of our API (most API calls to our system use external ids), we've used a cache pattern where we have stateless IdMapper grains running in each silo which load the mapping into memory from the persistent IdMapper grain. We then use the internal ID from there on and try to only use this in our internal actor model.
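A rough sketch of that pattern, with all names hypothetical: a [StatelessWorker] grain in each silo memoizes lookups, while a regular grain keyed by the external id owns the authoritative mapping.

using System.Collections.Concurrent;
using System.Threading.Tasks;
using Orleans;
using Orleans.Concurrency;

public interface IIdMapper : IGrainWithStringKey        // keyed by external id; owns the persisted mapping
{
    Task<string> GetInternalIdAsync();
}

public interface IIdMapperCache : IGrainWithIntegerKey  // per-silo cache front
{
    Task<string> GetInternalIdAsync(string externalId);
}

[StatelessWorker]
public class IdMapperCacheGrain : Grain, IIdMapperCache
{
    private readonly ConcurrentDictionary<string, string> _cache = new();

    public async Task<string> GetInternalIdAsync(string externalId)
    {
        if (_cache.TryGetValue(externalId, out var internalId))
            return internalId;

        // Miss: ask the persistent mapping grain, then cache the result in memory.
        internalId = await GrainFactory.GetGrain<IIdMapper>(externalId).GetInternalIdAsync();
        _cache[externalId] = internalId;
        return internalId;
    }
}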
Max Shnurenok
@MaxCrank
@Ulriksen Thank you very much for the response, looks like no silver bullet then :) One of the things we discussed is literally the same thing - making a generic ID provider abstraction based on some kind of cache under the hood and implementing the specific providers as needed, starting with the one for Account entities...
Harald S. Ulriksen
@hsulriksen_twitter
@MaxCrank another option is to use a state store which is queryable, such as Cosmos/Mongo or SQL, and use that as the mapping cache
Reuben Bond
@ReubenBond
dotnet/orleans#6250 << we will finally support directly configuring ISiloBuilder & IHostBuilder from TestClusters ☺ The implementation is very simple in the end.
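Once that lands, a test-cluster setup could look roughly like this - a sketch under the assumption of an ISiloConfigurator-style hook; see the PR for the actual shape of the API:

using System.Threading.Tasks;
using Orleans.Hosting;
using Orleans.TestingHost;

// Assumed shape: a configurator class that receives ISiloBuilder directly.
public class StorageConfigurator : ISiloConfigurator
{
    public void Configure(ISiloBuilder siloBuilder)
        => siloBuilder.AddMemoryGrainStorage("store");
}

public static class TestClusterSetup
{
    public static async Task<TestCluster> StartAsync()
    {
        var builder = new TestClusterBuilder();
        builder.AddSiloBuilderConfigurator<StorageConfigurator>();

        var cluster = builder.Build();
        await cluster.DeployAsync();
        return cluster;
    }
}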