These are chat archives for akkadotnet/

Apr 2016
Thomas Lazar
Apr 27 2016 07:14 UTC
anyone here know their way around IL generation stuff? any experience?
via Reflection.Emit
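For context, the kind of Reflection.Emit work being asked about looks roughly like this: a minimal sketch that builds a small method at runtime with `DynamicMethod` and `ILGenerator` (the method name and delegate shape are illustrative):

```csharp
using System;
using System.Reflection.Emit;

public static class Program
{
    public static void Main()
    {
        // Build "int Add(int a, int b)" at runtime via Reflection.Emit.
        var method = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
        var il = method.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);  // push a
        il.Emit(OpCodes.Ldarg_1);  // push b
        il.Emit(OpCodes.Add);      // a + b
        il.Emit(OpCodes.Ret);      // return the sum

        var add = (Func<int, int, int>)method.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(add(19, 23));  // prints 42
    }
}
```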
Dave Sansum
Apr 27 2016 08:11 UTC

Any advice on the below would be much appreciated.

I'm currently using a child-per-entity model and, after getting this running locally, I'm starting to look into the remoting/clustering elements. It seems the clustering is really geared towards actors that are functional rather than entity based, and I'm struggling to find any documentation on dynamic systems. What I'm looking for is location transparency: if entity A lives on node A and node A fails, the entity can be brought up seamlessly on node B. It seems cluster sharding is the right (only) thing for this, but it doesn't seem that mature at the moment and depends on Akka persistence, which I don't currently require?
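A child-per-entity model can be moved to Akka.Cluster.Sharding along these lines. This is a minimal sketch assuming the Akka.Cluster.Sharding NuGet package of that era; `CustomerActor`, `ShardEnvelope`, and the shard count are illustrative names/values, not part of the library:

```csharp
using Akka.Actor;
using Akka.Cluster.Sharding;

// Illustrative envelope: pairs an entity id with the payload.
public sealed class ShardEnvelope
{
    public string EntityId { get; }
    public object Payload { get; }
    public ShardEnvelope(string entityId, object payload)
    {
        EntityId = entityId;
        Payload = payload;
    }
}

// Maps incoming messages to entity ids; shard ids are derived from a hash.
public sealed class CustomerMessageExtractor : HashCodeMessageExtractor
{
    public CustomerMessageExtractor() : base(maxNumberOfShards: 30) { }

    public override string EntityId(object message)
        => (message as ShardEnvelope)?.EntityId;

    public override object EntityMessage(object message)
        => (message as ShardEnvelope)?.Payload;
}

// Illustrative entity actor.
public sealed class CustomerActor : ReceiveActor
{
    public CustomerActor()
    {
        ReceiveAny(msg => Sender.Tell($"handled {msg}"));
    }
}

public static class Program
{
    public static void Main()
    {
        var system = ActorSystem.Create("my-cluster");

        // Start a shard region on every node that should host entities.
        var region = ClusterSharding.Get(system).Start(
            typeName: "customer",
            entityProps: Props.Create<CustomerActor>(),
            settings: ClusterShardingSettings.Create(system),
            messageExtractor: new CustomerMessageExtractor());

        // Messages go through the region; the coordinator routes them to
        // whichever node currently hosts the entity, and the entity is
        // re-created on another node if its host goes down.
        region.Tell(new ShardEnvelope("customer-42", "hello"));
    }
}
```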

Bartosz Sypytkowski
Apr 27 2016 08:22 UTC
@dave-sansum in your case cluster sharding is the way to go, and unfortunately, atm persistence is required in order to work with it
since you need to reliably recover the shards' state between nodes in case of crashes or failures
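Because the shard coordinator recovers its state through Akka.Persistence, a journal has to be configured for sharding to work. A minimal HOCON sketch, where the SQL Server plugin and the placeholder connection string are illustrative only (any shared journal that all nodes can reach would do):

```hocon
akka.persistence {
  # Illustrative: requires the Akka.Persistence.SqlServer package.
  journal.plugin = "akka.persistence.journal.sql-server"
  journal.sql-server {
    connection-string = "<your connection string>"
    auto-initialize = on
  }
}
```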
Dave Sansum
Apr 27 2016 09:01 UTC
thanks @Horusiath
Pablo Castilla
Apr 27 2016 09:17 UTC
How about cluster singleton?
Dave Sansum
Apr 27 2016 09:27 UTC
@pablocastilla have you used that yourself? / do you know what the maturity of it is?
Pablo Castilla
Apr 27 2016 09:37 UTC
No, never tried. I only know that it is slower. @Aaronontheweb maybe knows more
Alex Valuyskiy
Apr 27 2016 14:04 UTC
@Aaronontheweb you fixed the persistence default config in 1.0.8. But it seems Cluster Singleton also doesn't have a default config
Kris Schepers
Apr 27 2016 14:19 UTC
Hmm, anyone else noticing this: when a ClusterClientReceptionist is started on every node of a role (running locally on one dev machine), those nodes consume all CPU power.
When you run a single node, everything is fine.
Christian Duhard
Apr 27 2016 14:56 UTC
has anyone ever said that distributed systems are kinda hard? ;)
Apr 27 2016 15:21 UTC
Hi guys I have a question related to remoting.
The main question is actually whether remoting should be resilient/robust against temporary network issues (network partitioning, host not responding, not receiving any deathwatch heartbeat responses...).
To be more specific, is it acceptable that an ActorSystem can become quarantined because of a temporary network issue?
I see no issue with heartbeat systems that try to detect issues with the network and drop messages because of detected network issues, but I find it problematic that a system gets quarantined because of temporary network issues, because in Akka this means that the quarantined system needs to restart!
This is something I find not "Reactive", since no recovery is possible (except the truly drastic recovery of restarting the actor system, which in a server application is perhaps not possible).
We have an application in production (a lot of clients connecting to one server) that uses remoting, and because of network errors a client marks the remote server system as quarantined.
Which means that that client will not be able to connect until the server restarts/recycles (or at least restarts its actor system, which is not really feasible/desirable).
I have no problem that a state like "quarantined" exists, but I have a problem that something can get quarantined because of (temporary) network errors or because the deathwatch heartbeat responses are not received. A system does not get corrupted by such errors and as such should not get quarantined.
What do you guys think about this? Is this a bug that needs to be fixed (I don't mean that quarantining itself is a bug, but that getting quarantined because of temporary network issues might be)?
Am I looking at this in the wrong way?
What are the options to handle this (network errors are not that rare a condition)?
My current solution is to set the parameter prune-quarantine-marker-after = 0 s (which is not recommended in the docs!)
I also tried increasing some of the other heartbeat parameters (acceptable-heartbeat-pause in the transport-failure-detector and the watch-failure-detector), but that only had the effect that the system would not recover at all.
If I'm not using the deathwatch monitor then the system can recover (meaning that after being gated it tries to associate/connect again), but with deathwatch enabled (by watching an actor) there is suddenly some interaction that makes it unable to reassociate (seems to be a bug), not even trying, which results in the deathwatch heartbeats getting dropped until the pause threshold parameter value is exceeded, which in turn triggers the quarantining.
Version info : using (but I also did a test with the version in the dev git branch at the beginning of this week)
Kind regards,
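For reference, the parameters discussed above all live under `akka.remote`. A HOCON sketch with illustrative values (check the reference configuration shipped with your Akka.Remote version for the actual defaults before tuning):

```hocon
akka.remote {
  # How long a quarantine marker is kept before being pruned.
  # Setting this to 0 s is the workaround described above,
  # but the docs advise against it.
  prune-quarantine-marker-after = 5 d

  transport-failure-detector {
    # How long missing heartbeats are tolerated before the
    # transport itself is considered failed.
    acceptable-heartbeat-pause = 120 s
  }

  watch-failure-detector {
    # Same idea, but for deathwatch between actors on different nodes.
    acceptable-heartbeat-pause = 10 s
  }
}
```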
Aaron Stannard
Apr 27 2016 15:33 UTC
@alexgoeman 1.0.8, which came out yesterday, fixes some known endpoint management issues related to that
but there are also issues with Helios at startup that I'm working on fixing right now
I won't go into detail on them now because I'm not finished with them yet, but Helios has some race conditions on startup that can cause this
@alexvaluyskiy I'm not involved with Akka.Persistence and Akka.Cluster.Sharding much, but it sounds like you and @Horusiath need to come up with a release strategy that maintains configuration integrity between releases
since that's been a persistent issue (no pun intended) across more than one release of those
default configurations should always have explicit, easily understandable regression tests
if you don't have one, that's the easiest place to create a breaking change by accident
and compared to most of the test suite, they're 100x easier tests to write than virtually anything else
I'd be happy to help, but I'm operating with very limited bandwidth. I'm pretty focused on getting Akka.Cluster and its dependencies out of beta
Apr 27 2016 15:39 UTC
@Aaronontheweb : Do you then agree in principle that a deathwatch failure should not trigger quarantining? (PS: I also did testing with 1.0.8 using the latest version I could get via GitHub and still had recovery issues, so do you mean that more changes were made yesterday, or that those changes were not available in git?)
Aaron Stannard
Apr 27 2016 16:02 UTC
you then agree in principle that death watch failure should not trigger quarantining?
I 100% do not agree with that
totally depends on when it happens
if it happens during startup, if the node you're connecting to can't complete the handshake for whatever reason
quarantining is the right thing to do
as I said, there are issues down the stack I'm working on right now
that I believe are responsible for this
check back with me later - there were no additional changes made yesterday other than those published. You can easily check that by taking a look at the number of commits since the release on GitHub
Apr 27 2016 16:11 UTC
@Aaronontheweb : So because a handshake procedure cannot be completed, why do you assume corruption? You can clean up any resources linked to the connection and just retry later.
Aaron Stannard
Apr 27 2016 16:11 UTC
search the codebase for HopelessAssociationException and read the source
if you want an explanation
quarantines typically only happen as a result of repeated failures - I don't remember the entire flow for it offhand
by default Akka.Remote will gate a connection temporarily during an unplanned failure
in order to give the other side time to recover
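The gating window Aaron describes is configurable in HOCON; a sketch (the value shown is illustrative, not necessarily the default):

```hocon
akka.remote {
  # How long an association stays gated after an unplanned disconnect
  # before reconnection attempts are allowed again.
  retry-gate-closed-for = 5 s
}
```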
Apr 27 2016 16:22 UTC
@Aaronontheweb : In the network configuration file I found some doc saying that deathwatch triggers quarantining (I tested this). And not after a few deathwatches, but immediately when some parameterized pause in heartbeat responses gets exceeded. So by just disconnecting the network long enough, the other system will get quarantined. So there is no corruption, but when the connection can be established again, one system refuses to connect to the other just because it is quarantined. I have no problem with the gating, since it indeed avoids unnecessary communication, and when communication is possible again after the gating is over, the connection will succeed. Which is a good thing. But the deathwatch system just marking the other system as quarantined makes recovery impossible.
Apr 27 2016 16:41 UTC
@Aaronontheweb : Tried looking for "HopelessAssociationException" (just downloaded the zip file, opened it in Visual Studio), but could not find that class/string in the project. Am I looking in the wrong place?
Marc Piechura
Apr 27 2016 16:43 UTC
@alexgoeman search without exception
Apr 27 2016 23:26 UTC
Guys, any news on this issue: akkadotnet/ ? It seems to affect every cluster that disconnects