Lars Thomas Denstad
@COCPORN
Sergey is going to be: "OMG, this again. *rolls eyes*". But that's fine. :)
Tom Nelson
@Zeroshi
oh hahaha @sergeybykov Sorry! hahah
Zonciu Liang
@Zonciu
Which option can make silo failure detection faster, so that grains transfer to other silos faster?
Tom Nelson
@Zeroshi
you may want to expand on that question a bit more @Zonciu
Zonciu Liang
@Zonciu
When a silo crashes or is closed, grains will be transferred to other silos, but it takes time to detect the silo's status. How can I make the detection timeout shorter?
Reuben Bond
@ReubenBond
ClusterMembershipOptions.ProbeTimeout @Zonciu
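For illustration, a minimal sketch of tightening failure detection via ClusterMembershipOptions; the siloBuilder variable and the specific values are assumptions, not recommendations:

siloBuilder.Configure<ClusterMembershipOptions>(options =>
{
    // Shorter probe timeout: an unresponsive silo is suspected sooner,
    // at the cost of more false positives on a congested network.
    options.ProbeTimeout = TimeSpan.FromSeconds(5);
    // Fewer missed probes before a silo is voted dead.
    options.NumMissedProbesLimit = 2;
});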
Zonciu Liang
@Zonciu
@ReubenBond Thanks
Alex Meyer-Gleaves
@alexmg
Can anyone confirm that when injecting multiple IPersistentState<T> instances into the constructor of a Grain, they are loaded from the persistence store concurrently during activation? Looking at the code, it appears a PersistentStateBridge<T> is created for each IPersistentState<T> instance, and those subscribe to the SetupState lifecycle stage. Inside the LifecycleSubject these seem to be started concurrently for a given stage and awaited using Task.WhenAll. I'm hoping this is indeed the case, because splitting up state for a single grain is a nice way of reducing the amount of data that needs to be persisted if a logical partitioning is present. The example I have is configuration data and runtime data, where configuration data is only required on activation to configure the work to be done, and runtime data is stored more frequently as work is performed. That gives you a performance boost on the write side, so I'm hoping the read side on activation is concurrent and there is a read benefit to be had too.
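For reference, a minimal sketch of the split-state pattern described above; the grain, interface, state classes, and storage provider name ("store") are hypothetical:

public interface IWorkerGrain : IGrainWithGuidKey
{
    Task RecordProgress(int processed);
}

public class ConfigState { public string WorkSpec { get; set; } }
public class RuntimeState { public int Processed { get; set; } }

public class WorkerGrain : Grain, IWorkerGrain
{
    private readonly IPersistentState<ConfigState> _config;
    private readonly IPersistentState<RuntimeState> _runtime;

    public WorkerGrain(
        [PersistentState("config", "store")] IPersistentState<ConfigState> config,
        [PersistentState("runtime", "store")] IPersistentState<RuntimeState> runtime)
    {
        // Both states are loaded during the SetupState lifecycle stage on
        // activation; observers of the same stage are awaited together.
        _config = config;
        _runtime = runtime;
    }

    public Task RecordProgress(int processed)
    {
        _runtime.State.Processed = processed;
        // Only the small, frequently-changing runtime state is written here;
        // the config state is not touched.
        return _runtime.WriteStateAsync();
    }
}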
David Christensen
@OracPrime
I think I might have misunderstood cross-silo behaviour :(. I have a grain which I get an instance of in my client. The client calls an Init method on the interface to set some values. This updates the grain in Silo1. Silo2 executes some code which requires the initialised grain. I was expecting it to get a copy of the grain from Silo1 or (more likely) for any calls on it to go cross-silo to Silo1. But I seem to be getting a new instance instantiated in Silo2 with uninitialised values. Have I missed the point?
Jorge Candeias
@JorgeCandeias
@OracPrime That "more likely" expectation is correct. Orleans doesn't "copy" grains; it directs messages sent via interfaces to wherever those grains live. The interface hides a proxy: "getting the grain" only gets the proxy, not the grain instance itself. What type of grain is it (regular or stateless), and are you using the same primary key on all calls? Could the grain be getting deactivated without persisting its state between calls?
David Christensen
@OracPrime
I have a regular grain (Params) and a stateless ParamsCache (one per silo). I instantiate Params and call Params.Init. Thereafter things use ParamsCache, which on activation gets Params, interrogates its values, and caches them. It works fine in the first silo to fire up, but a few seconds later a call to ParamsCache (with the same id) activates on Silo2, instantiates a new Params in Silo2, which hasn't had Init called on it, and returns the unset values from that.
@JorgeCandeias I'm a bit surprised that the later call to ParamsCache doesn't get routed to the one that already exists.
I'll put some deactivation logging in, just to be sure.
Reuben Bond
@ReubenBond
It should be routed to the original grain, @OracPrime. How soon after startup is all of this happening? Is it possible the cluster is still being configured during this time?
Without persistent state you can't guarantee that there are single instances for something which is purely in-memory. We are making the directory pluggable which will open up the options to choose behavior there and allow you to choose on a per-grain-type basis
David Christensen
@OracPrime
It's after server startup. The two activations in the different silos have the same timestamp to within a second. It doesn't have persistent state though.
Reuben Bond
@ReubenBond
How soon after startup, immediately?
Jorge Candeias
@JorgeCandeias
Would this be something you could put on a repro to look at in an issue?
Jim
@jfritchman
Newbie question. If I have grains for security (users, profiles, etc.) and other grains for a cart, order, etc., do I deploy those to the same silo or do I create more domain-oriented silos?
Reuben Bond
@ReubenBond
@JorgeCandeias is right, @OracPrime - this is better served by an issue where we can keep everything together.
@jfritchman It's up to you. I would keep them together for simplicity, but if your requirements call for some physical separation then you can split them up.
Max Shnurenok
@MaxCrank
Hi again guys, our integration attempts are under way, so another question has appeared. Let's say we have a data structure representing an Account which has an internal ID as well as several more IDs bound to external systems, e.g. Account has Id as well as ExternalId1, ExternalId2, etc. In different use cases we have only one of these IDs as the input for getting an Account... but if we use Orleans to operate on an Account through a grain, we need uniform access to leverage reentrancy and other features properly, i.e. we need to use exactly the same ID. I see several ways of handling this, each of them requiring intermediate storage calls to look up the internal Account ID by any of the external IDs for the same entity, which isn't really good. Formally, we could extract each piece of external info into a separate entity to keep to "one entity - single ID" (and also follow "best practices"), but that won't come without a price either... Maybe somebody has had the same case?
Samir Mowade
@sammym1982

Without persistent state you can't guarantee that there are single instances for something which is purely in-memory.

@ReubenBond can you expand on the above statement from January 20, 2020 9:06 AM?
By persistent state, did you mean grains that keep state using the [PersistentState] or similar attribute? I want to make sure I'm not reading it wrong :) Not related to the original question, but I wanted to understand whether there are some nuances I should be aware of.

Reuben Bond
@ReubenBond

The built-in grain directory implementation in Orleans is eventually consistent, @sammym1982. That can become an issue when the cluster isn't in a steady state and calls for the same grain come in quick succession - if nothing is done to mitigate it. The mitigation is persistent state. Any time a grain activation tries to write state, a concurrency check is performed to make sure the grain activation has the latest state (i.e., it has seen all previous writes). If that is not true, then an exception is thrown and the grain is deactivated, making the convergence quicker.

Without that mitigation (or something to a similar effect), it's possible for multiple activations of the grain to exist simultaneously for a short period of time. Those duplicates are proactively killed as the directory converges.

A similar mitigation is to use leasing. Strong single-activation guarantees are something which we intend to add as an optional feature (it comes at a cost, but it could be opted into on a per-grain-type basis)
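As a rough sketch of how that concurrency check surfaces in grain code, assuming _state is the grain's IPersistentState<T> and that the storage provider reports an etag conflict as Orleans' InconsistentStateException:

public async Task Save()
{
    try
    {
        await _state.WriteStateAsync();
    }
    catch (InconsistentStateException)
    {
        // The etag check failed: another activation has written newer state.
        // Deactivating this activation helps the directory converge on one.
        DeactivateOnIdle();
        throw;
    }
}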
Samir Mowade
@sammym1982
Thanks @ReubenBond. Does the runtime look for a particular etag concurrency exception in that case, or will any exception type during WriteStateAsync trigger that correction?
Reuben Bond
@ReubenBond
Yes, that's right
David Christensen
@OracPrime
Thank you @ReubenBond. The problem I had was that I was using localhost clustering and had failed to specify a primary silo. It now looks like this (there's more after it, but this is the crucial bit):
ISiloHostBuilder builder = new SiloHostBuilder()
    .AddGrainControl()
    .UseLocalhostClustering(cBaseSiloPort + instanceNum, cBaseGatewayPort + instanceNum, new IPEndPoint(IPAddress.Loopback, cBaseSiloPort))
Previously I was lacking the third parameter. With that the silos "play nicely together" and stuff works.
Harald Schult Ulriksen
@Ulriksen
@MaxCrank we have a case where several external IDs can point to our internal ID (added bonus: the external IDs can change). We use what we call an "IdMapper" for this. This grain maps an external ID to our internal ID (we also have a reverse mapper). Since this is used at the edge of our API (most API calls to our system use external IDs), we've applied a cache pattern: stateless IdMapper grains running in each silo load the mapping into memory from the persistent IdMapper grain. We then use the internal ID from there on and try to use only it in our internal actor model.
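A hypothetical sketch of that IdMapper pattern; the interface names, key choice, and the persistent mapper grain (IPersistentIdMapperGrain and its Lookup method) are invented stand-ins:

public interface IIdMapperGrain : IGrainWithIntegerKey
{
    Task<Guid> GetInternalId(string externalId);
}

// Local cache in front of the persistent mapper grain.
[StatelessWorker]
public class CachedIdMapperGrain : Grain, IIdMapperGrain
{
    // Grain turns are single-threaded, so a plain dictionary is safe here.
    private readonly Dictionary<string, Guid> _cache = new Dictionary<string, Guid>();

    public async Task<Guid> GetInternalId(string externalId)
    {
        if (_cache.TryGetValue(externalId, out var internalId))
            return internalId;

        // Cache miss: ask the persistent IdMapper grain, then cache locally.
        var mapper = GrainFactory.GetGrain<IPersistentIdMapperGrain>(0);
        internalId = await mapper.Lookup(externalId);
        _cache[externalId] = internalId;
        return internalId;
    }
}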
Max Shnurenok
@MaxCrank
@Ulriksen Thank you very much for the response; looks like there's no silver bullet then :) One of the things we discussed is literally the same thing - making a generic ID provider abstraction based on some kind of cache under the hood and implementing specific providers as needed, starting with one for Account entities...
Harald S. Ulriksen
@hsulriksen_twitter
@MaxCrank another option is to use a state store which is queryable, such as Cosmos/Mongo or SQL, and use that as the mapping cache
Reuben Bond
@ReubenBond
dotnet/orleans#6250 << we will finally support directly configuring ISiloBuilder & IHostBuilder from TestClusters ☺ The implementation is very simple in the end.
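Based on the shape of that change, usage looks roughly like this; a sketch only (exact names per the PR), and MySiloConfig is hypothetical:

public class MySiloConfig : ISiloConfigurator
{
    public void Configure(ISiloBuilder siloBuilder) =>
        siloBuilder.AddMemoryGrainStorage("store");
}

// Inside an async test method:
var builder = new TestClusterBuilder();
builder.AddSiloBuilderConfigurator<MySiloConfig>();
var cluster = builder.Build();
await cluster.DeployAsync();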
Tom Nelson
@Zeroshi
hi @sergeybykov, I was checking in to see if you had a chance to read that essay.
Kyle Dodson
@seniorquico

Has anyone encountered the following error before?

Exc level 0: Orleans.Runtime.OrleansException: Reminder Service is still initializing and it is taking a long time. Please retry again later.

Sergey Bykov
@sergeybykov

@Zeroshi @COCPORN

why couldn't you add an if statement to determine if it should run? @sergeybykov any idea?

Early on, we were concerned about deactivating grains under perceived memory pressure, primarily because it wasn't clear how to measure the amount of memory actually being used by the app logic, as opposed to the amount of Gen2 garbage accumulated since the last Gen2 pass of the memory manager. I suspect it should be clearer these days.
It's a bit more than adding an if statement, because grain activations are tracked in buckets, from most recently used to least recently used. Other than that, at least conceptually, it could be a matter of checking for memory pressure (if configured) upon every activation collection cycle and adding the oldest bucket or two for collection.
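For reference, the collection cycle itself is already configurable; a small sketch using GrainCollectionOptions (the siloBuilder variable and the values are arbitrary examples):

siloBuilder.Configure<GrainCollectionOptions>(options =>
{
    // Idle activations older than this become eligible for collection.
    options.CollectionAge = TimeSpan.FromMinutes(10);
    // The time quantum the collector uses for its age buckets.
    options.CollectionQuantum = TimeSpan.FromMinutes(1);
});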

Sergey Bykov
@sergeybykov

@Zeroshi

hi @sergeybykov, I was checking in to see if you had a chance to read that essay.

Sorry, I dropped the ball when we suddenly got some snow. I've read it now. I think it'd be better for us to chat it through over Teams/Skype than for me to try to type the same thoughts and then go back and forth. A voice dialog is an efficient means of arriving at something. 🙂

Lars Thomas Denstad
@COCPORN
@sergeybykov Yes, it was hyperbole from me. About the buckets, perhaps I've misunderstood how they work. It looks to just be sorted on expiry time, and then I suppose any grain that has activity moves itself to another bucket based on the time to live configuration. I could take a stab at implementing my suggestion, as I think the amount of work needed to have something that is useful (or can at least be evaluated for usefulness) isn't that big.
Panu Oksala
@PanuOksala_twitter
I'm running a heterogeneous cluster with multiple silos, and it seems that all the silos need to reference the entities that are exposed in grain interfaces - is this true?
Basically, I need a shared entity library that is referenced from all the silos. This breaks the microservice architecture.
If I don't reference this shared library from one of the silos, grain calls may crash during binary serialization, because the silo the call passes through doesn't know about the entities.
Lars Thomas Denstad
@COCPORN
Perhaps I am misunderstanding; I am not sure what you mean by a heterogeneous cluster if the silos are different. How do you expect a silo to be able to deserialize something that it doesn't have an implementation class for?
Panu Oksala
@PanuOksala_twitter
I expect that when calling a silo that does not have the implementation for the actor, it would pass the call on to another silo which contains the implementation, without deserializing the method parameters
Veikko Eeva
@veikkoeeva
Heh, mentioned Orleans indirectly at https://twitter.com/hiddenmarkov/status/1219288129716334592 and https://twitter.com/_dotnetbot_ picked it up. :D Though the truth is, I believe, Orleans and the actor model in general could fit even complex CPS deployments. There are just gnarly problems to consider otherwise.
Panu Oksala
@PanuOksala_twitter

Silo A hosts Grain A1
Silo B hosts Grain B1 which has method DoSomething(Customer customer)

I call grain B1 from the cluster client; the invocation goes to Silo A, which deserializes the message (and crashes because it does not know about Customer) and does not forward the message on to B1

Lars Thomas Denstad
@COCPORN
@PanuOksala_twitter I don't know about this kind of behavior. Is it a documented feature?
Panu Oksala
@PanuOksala_twitter
If I add the Customer class into a shared DLL and reference it from Silo A, everything works.
The documentation says: "All silos should reference interfaces of all grain types of the cluster, but grain classes should only be referenced by the silos that will host them."
From what I have tested, this is not quite true: the method parameter types are the problem, not the grain interfaces.
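To make that concrete, a hypothetical sketch: because Customer appears in a grain interface signature, the type has to live in (or be referenced by) the shared interface assembly that every silo loads:

// Shared grain-interface assembly, referenced by Silo A, Silo B, and the client.
[Serializable]
public class Customer
{
    public string Name { get; set; }
}

public interface IGrainB1 : IGrainWithGuidKey
{
    // Customer is part of the interface contract, so any silo that has to
    // deserialize this call needs the type available, not just Silo B.
    Task DoSomething(Customer customer);
}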
Lars Thomas Denstad
@COCPORN
Where is this documentation? I am curious.