Aaron Stannard
@Aaronontheweb
I've seen that issue before - that's unrelated
how many cores are you running those nodes on?
Vladyslav Pyshnenko
@Pisha91
8
Aaron Stannard
@Aaronontheweb
ok, you should be good there
running on very few cores is where that issue occurs
the network isn't even a factor at that point
Vladyslav Pyshnenko
@Pisha91
yeah, and after that the node shuts down
Aaron Stannard
@Aaronontheweb
might be something off with the sequencing at startup there, but basically one of the system actors failed to start on time
I would start by looking at the system actors there and see if there's a race condition - it's going to be easier to spot than a traditional one, because everything is happening inside actors here. Coverage is probably missing in some edge case where resource A gets a request before it gets something it needs from resource B
and rather than buffering the request / poking resource B, it waits indefinitely and times out
I personally don't have time to look into that now (I am but one man) - but if you file a bug I'll get on it
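(For reference, a minimal sketch of the buffering approach described above, using Akka.NET's stash facility; the actor and message names here are hypothetical and not from the actual codebase.)

```csharp
using Akka.Actor;

// Hypothetical messages for illustration only
public sealed class Request { }
public sealed class ResourceBReady { }

// "Resource A" buffers requests until it hears from "resource B",
// instead of letting them sit and time out
public class ResourceA : ReceiveActor, IWithUnboundedStash
{
    public IStash Stash { get; set; }

    public ResourceA()
    {
        Receive<Request>(_ => Stash.Stash());   // not ready yet: buffer the request
        Receive<ResourceBReady>(_ =>
        {
            Become(Ready);
            Stash.UnstashAll();                 // replay the buffered requests
        });
    }

    private void Ready()
    {
        Receive<Request>(_ => Sender.Tell("handled"));
    }
}
```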
Vladyslav Pyshnenko
@Pisha91
ok, i will create a bug tomorrow
thanks
Thomas Lazar
@thomaslazar
morning
anyone here know their way around IL generation stuff? any experience?
via Reflection.Emit
Dave Sansum
@dave-sansum

Any advice on the below would be much appreciated.

I'm currently using a child-per-entity model and, after getting this running locally, I'm starting to look into the remoting/clustering elements. It seems the clustering is really geared towards actors that are functional rather than entity based, and I'm struggling to find any documentation on dynamic systems. What I'm looking for is location transparency: if entity A lives on node A and node A fails, the entity can be brought up seamlessly on node B. It seems cluster sharding is the right (only) thing for this, but it doesn't seem that mature at the moment and depends on Akka.Persistence, which I don't currently require?

Bartosz Sypytkowski
@Horusiath
@dave-sansum in your case cluster sharding is the way to go, and unfortunately, atm persistence is required in order to work with it
since you need to reliably recover the shards' state between nodes in case of crashes or failures
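(For anyone following along, a rough sketch of what starting a shard region looks like with Akka.Cluster.Sharding, which was still pre-release at the time; the entity actor, envelope, and payload types below are made up purely for illustration.)

```csharp
using Akka.Actor;
using Akka.Cluster.Sharding;

// Hypothetical envelope so the extractor can route by entity id
public sealed class Envelope
{
    public Envelope(string entityId, object payload)
    {
        EntityId = entityId;
        Payload = payload;
    }

    public string EntityId { get; }
    public object Payload { get; }
}

public sealed class EntityMessageExtractor : HashCodeMessageExtractor
{
    public EntityMessageExtractor(int maxShards) : base(maxShards) { }

    public override string EntityId(object message) => (message as Envelope)?.EntityId;
    public override object EntityMessage(object message) => (message as Envelope)?.Payload;
}

public static class ShardingBootstrap
{
    public static IActorRef StartRegion(ActorSystem system)
    {
        // Entities are recreated on a surviving node if their current host crashes;
        // the persistence journal is what lets the shard coordinator recover that placement.
        return ClusterSharding.Get(system).Start(
            typeName: "entity",
            entityProps: Props.Create<MyEntityActor>(),   // hypothetical entity actor
            settings: ClusterShardingSettings.Create(system),
            messageExtractor: new EntityMessageExtractor(maxShards: 100));
    }
}
```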
Dave Sansum
@dave-sansum
thanks @Horusiath
Pablo Castilla
@pablocastilla
How about cluster singleton?
Dave Sansum
@dave-sansum
@pablocastilla have you used that yourself? / do you know what the maturity of it is?
Pablo Castilla
@pablocastilla
No, never tried. I only know that it is slower. @Aaronontheweb maybe knows more
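(For context, a hedged sketch of the cluster singleton setup being discussed, via Akka.Cluster.Tools, which was still in beta at the time; the singleton actor and names are placeholders.)

```csharp
using Akka.Actor;
using Akka.Cluster.Tools.Singleton;

public static class SingletonBootstrap
{
    public static IActorRef Start(ActorSystem system)
    {
        // One instance of MySingletonActor (hypothetical) runs on the oldest node of the cluster;
        // if that node dies, the singleton is handed over to the next oldest node.
        system.ActorOf(
            ClusterSingletonManager.Props(
                singletonProps: Props.Create<MySingletonActor>(),
                terminationMessage: PoisonPill.Instance,
                settings: ClusterSingletonManagerSettings.Create(system)),
            name: "my-singleton");

        // Every node talks to the singleton through a proxy that tracks where it currently lives
        return system.ActorOf(
            ClusterSingletonProxy.Props(
                singletonManagerPath: "/user/my-singleton",
                settings: ClusterSingletonProxySettings.Create(system)),
            name: "my-singleton-proxy");
    }
}
```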
Alex Valuyskiy
@alexvaluyskiy
@Aaronontheweb you fixed a persistence default config in 1.0.8. But it seems Cluster Singleton also doesn't have a default config
Kris Schepers
@schepersk
Hmm, anyone else noticing this: When a ClusterClientReceptionist is started on every node of a role (running locally on 1 dev machine), those nodes consume all CPU power.
When you run a single node, everything is fine...
Christian Duhard
@cduhard
has anyone ever said that distributed systems are kinda hard? ;)
alexgoeman
@alexgoeman
Hi guys I have a question related to remoting.
Main question is actually whether remoting should be resilient/robust against temporary network issues (network partitioning, host not responding, not receiving any deathwatch heartbeat responses...).
To be more specific, is it acceptable that an ActorSystem can become quarantined because of a temporary network issue?
I see no issue with heartbeat systems that try to detect issues with the network and drop messages because of detected network issues, but I find it problematic that a system gets quarantined because there were some temporary network issues. I find this problematic because in Akka this means that the quarantined system needs to restart!
This is something I find not very "Reactive", since no recovery is possible (except the really drastic recovery of restarting the actor system, which in a server application is perhaps not possible).
We have an application in production (a lot of clients connecting to one server) that uses remoting, and because of network errors a client marks the remote server system as quarantined.
Which means that that client will not be able to connect until the server restarts/recycles (or at least restarts its actor system, which is not really feasible/desirable).
I have no problem with a "quarantined" state existing, but I do have a problem with something getting quarantined because of (temporary) network errors or because the deathwatch heartbeat responses are not received. A system should not get corrupted because of such errors, and as such should not get quarantined.
What do you guys think about this? Is this a bug that needs to be fixed (I do not mean that quarantining is a bug, but that getting quarantined because of temporary network issues is a possible bug)?
Am I looking at this in the wrong way?
What are the options to handle this (network errors are not that rare a condition)?
My current solution is to set the parameter prune-quarantine-marker-after = 0 s (which is not recommended in the docs!).
I also tried increasing some of the other heartbeat parameters (acceptable-heartbeat-pause in the transport-failure-detector and the watch-failure-detector), but that had more the effect that the system would not recover at all.
If I'm not using the death-watch monitor then the system can recover (meaning after being gated it tries to associate/connect again), but with death watch enabled (by watching an actor) there is suddenly some interaction that makes it unable to reassociate (seems to be a bug), not even trying, which results in the death-watch heartbeats getting dropped until the pause threshold is reached, which in turn triggers the quarantining.
Version info: using Akka.NET 1.0.6.16 (but I also did a test with the version in the dev git branch at the beginning of this week).
Kind regards,
Alex
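(For reference, the settings Alex mentions live under akka.remote; a minimal sketch of how they can be tuned, with purely illustrative values. The 0s prune setting is exactly the workaround the docs warn about, so this is not a recommendation.)

```csharp
using Akka.Actor;
using Akka.Configuration;

public static class RemotingTuning
{
    public static ActorSystem Create()
    {
        // Illustrative values only; loosening the failure detectors delays quarantining
        // but does not change the underlying behaviour being described.
        var config = ConfigurationFactory.ParseString(@"
            akka.remote {
                transport-failure-detector.acceptable-heartbeat-pause = 30s
                watch-failure-detector.acceptable-heartbeat-pause = 20s
                # the workaround mentioned above; advised against in the docs
                prune-quarantine-marker-after = 0s
            }");

        return ActorSystem.Create("demo", config.WithFallback(ConfigurationFactory.Default()));
    }
}
```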
Aaron Stannard
@Aaronontheweb
@alexgoeman 1.0.8, which came out yesterday, fixes some known endpoint management issues related to that
but there are also issues with Helios at startup that I'm working on fixing right now
I won't go into detail on them now because I'm not finished with them yet, but Helios has some race conditions on startup that can cause this
@alexvaluyskiy I'm not involved with Akka.Persistence and Akka.Cluster.Sharding much, but it sounds like you and @Horusiath need to come up with a release strategy that maintains configuration integrity between releases
since that's been a persistent issue (no pun intended) across more than one release of those
default configurations should always have explicit, easily understandable regression tests
if you don't have one, that's the easiest place to create a breaking change by accident
and compared to most of the test suite, they're 100x easier tests to write than virtually anything else
I'd be happy to help, but I'm operating with very limited bandwidth. I'm pretty focused on getting Akka.Cluster and its dependencies out of beta
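(A sketch of the kind of default-config regression test being suggested here, assuming Akka.Cluster.Tools exposes its embedded HOCON through a DefaultConfig() accessor the way other Akka.NET modules do; the spec name and assertions are illustrative.)

```csharp
using Akka.Cluster.Tools.Singleton;
using Xunit;

public class ClusterSingletonDefaultConfigSpec
{
    [Fact]
    public void Cluster_singleton_should_ship_with_a_default_config()
    {
        // Assumption: the module exposes its embedded reference config via DefaultConfig();
        // adjust the accessor if the actual API differs.
        var config = ClusterSingletonManager.DefaultConfig();

        var section = config.GetConfig("akka.cluster.singleton");
        Assert.NotNull(section);
        Assert.False(section.IsEmpty);
    }
}
```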
alexgoeman
@alexgoeman
@Aaronontheweb: Do you then agree in principle that death watch failure should not trigger quarantining? (PS: I also did testing with 1.0.8, using the latest version I could get via GitHub, and still had recovery issues, so do you mean that more changes were made yesterday, or that those changes were not available in git?)
Aaron Stannard
@Aaronontheweb
Do you then agree in principle that death watch failure should not trigger quarantining?
I 100% do not agree with that
totally depends on when it happens
if it happens during startup, i.e. if the node you're connecting to can't complete the handshake for whatever reason
quarantining is the right thing to do
as I said, there are issues down the stack I'm working on right now
that I believe are responsible for this
check back with me later - there were no additional changes made yesterday other than those published. You can easily check that by taking a look at the number of commits since the release on GitHub
alexgoeman
@alexgoeman
@Aaronontheweb: So because a handshake procedure cannot be completed, why do you assume corruption? You can clean up any resources linked to the connection and just retry later.