Bartosz Sypytkowski
@Horusiath
@trbngr you may need to specify roles for sharding nodes (I'm not 100% sure if it's necessary) and use them in sharding config
Chris Martin
@trbngr
ok. I'll give it a shot
Bartosz Sypytkowski
@Horusiath
because cluster sharding may assume that Lighthouse will also host shards
Chris Martin
@trbngr
    cluster {
      sharding {
        role = "projections"
      }
      # will inject this node as a self-seed node at run-time
      seed-nodes = [
        "akka.tcp://eventdayprojections@168.62.228.228:4053",
        "akka.tcp://eventdayprojections@23.96.183.175:4053"
      ]
      roles = [projections]
    }
look right?
Bartosz Sypytkowski
@Horusiath
yes
Chris Martin
@trbngr
still not happening :(
Bartosz Sypytkowski
@Horusiath
any errors?
Chris Martin
@trbngr
Trying to register to coordinator at [], but no acknowledgement. Total [1] buffered messages.
oh! I don't have any journal setup
sorry. I've been in Scala-land for the last few months. Trying to get my head back here for a bit ;)
hmm. defaults to inmem, right?
Bartosz Sypytkowski
@Horusiath
yes, it won't work right between processes
Chris Martin
@trbngr
right right
what can I use without setting anything up?
leveldb?
Bartosz Sypytkowski
@Horusiath
sqlite (if all akka processes will point to the same file)
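A minimal sketch of what that journal setup could look like, assuming the Akka.Persistence.Sqlite package; the connection string and file name here are placeholders:

```hocon
akka.persistence {
  journal {
    plugin = "akka.persistence.journal.sqlite"
    sqlite {
      # every process must point at the same file for sharding to coordinate
      connection-string = "Data Source=shared-journal.db"
      auto-initialize = on
    }
  }
}
```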
Chris Martin
@trbngr
hmm. so far this is only one instance. It should work.
Bartosz Sypytkowski
@Horusiath
the Wire serializer is also advised (I know that the existing JSON-based one may have problems with some of the cluster sharding messages)
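A Wire serializer binding is a couple of HOCON lines; this is a sketch, and the assembly-qualified type name assumes the Akka.Serialization.Wire package, which may differ by version:

```hocon
akka.actor {
  serializers {
    wire = "Akka.Serialization.WireSerializer, Akka.Serialization.Wire"
  }
  serialization-bindings {
    # route everything through Wire instead of the default JSON serializer
    "System.Object" = wire
  }
}
```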
Chris Martin
@trbngr
Oh yes. I remember that
Chris Martin
@trbngr
hmm. the db isn't being created. Seems like the persistence module isn't initializing at all
updated gist if you have time to look
wondering if having my seeds over the internet is the problem at this point?
Bartosz Sypytkowski
@Horusiath
if db is not created, you should have some error messages
could you put logs on the gist?
Chris Martin
@trbngr
I got it working after starting lighthouse locally.
Big question here is what happens when a node goes down? Do the shards get recreated on another node?
Bartosz Sypytkowski
@Horusiath
yes - basically shards can be handed over to another node, or rebalanced when the difference in number of shards between nodes goes over some specified threshold
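That threshold is tunable in the sharding config; a sketch using the least-shard-allocation-strategy keys (the values here are illustrative, not recommendations):

```hocon
akka.cluster.sharding {
  least-shard-allocation-strategy {
    # rebalance when the difference in shard count between the most- and
    # least-loaded nodes exceeds this threshold
    rebalance-threshold = 10
    max-simultaneous-rebalance = 3
  }
}
```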
Chris Martin
@trbngr
seems to hold true ;)
but only if auto-down is set. Well.. I can't tell for sure. Too many logs to see if my messages are received.
Bartosz Sypytkowski
@Horusiath
I know that @Aaronontheweb often says to be careful with auto-down, but to be honest - unless you'll specify your own logic for downing nodes, I think it's reasonable to use it (at least for clusters which fit into a single datacenter).
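Auto-down itself is a single HOCON switch; the timeout value below is illustrative:

```hocon
akka.cluster {
  # automatically mark unreachable nodes as down after this period
  auto-down-unreachable-after = 10s
}
```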
Chris Martin
@trbngr
I think that's reasonable too. It's always in the back of my mind though. The Akka guys warn against that hard.
I suppose it's not a huge deal with a small system either
Bartosz Sypytkowski
@Horusiath
in Akka on the JVM you can use split-brain resolver strategies; we don't have them on the .NET side yet
Chris Martin
@trbngr
ok. thanks for your help, man. Ima go code now
Aaron Stannard
@Aaronontheweb
where auto-down kicks you in the nuts is when the unexpected happens
if there's a hardware failure inside azure that knocks out the network for a minute or two
then you have to take your service offline and do a full reboot if every node has downed every other node
it's a remote possibility though
and most businesses that couldn't tolerate that issue
wouldn't tolerate that type of outage in the first place
and would have some sort of data-center level failover
it'll be useful once we have downing providers implemented
makes the strategy for doing automatic downing pluggable
Chris Ochs
@gamemachine
wanted to confirm behavior of the singleton actor: that it's being unreachable that triggers moving it, and it doesn't have to be flagged as down before it moves
Chris Ochs
@gamemachine
The behavior I want in the cluster is that it just degrades as nodes become unreachable. Singleton is always available (minus downtime while being unreachable until started on another node). auto-down disabled. App is a multiplayer game so small number of mostly vertically scaled boxes. Singleton is for a global registry of active games/online players
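For reference, the singleton's main HOCON knobs look roughly like this; the key names follow the `akka.cluster.singleton` defaults, and the values are illustrative:

```hocon
akka.cluster.singleton {
  # actor name of the child singleton actor
  singleton-name = "registry"
  # restrict the singleton to nodes with this role ("" means any node)
  role = ""
  # retry interval while the singleton is being handed over to another node
  hand-over-retry-interval = 1s
}
```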
Aaron Stannard
@Aaronontheweb
taking a look at the source real quick
to confirm how it works
since I don't know the answer myself