hisabir
@hisabir
release 1.5...
Bartosz Sypytkowski
@Horusiath
@hisabir the exact date is not yet known
most of the changes it introduces are available right now in the form of optional settings or plugins
Alessandro Rizzotto
@easysoft2k15
Hi, I'm trying to figure out the best way to implement this architecture:
  • in an ASP.NET application a controller receives requests: each request must be dispatched to an entity represented by an actor.
  • there are many instances of the same actor: each request must be dispatched to the right instance of the actor (based on a parameter which is available within the controller action)
  • there should be a Coordinator actor that is responsible for the lifetime of all instance actors: if the instance actor does not exist when the request comes in, the coordinator must create the actor. Based on some event the coordinator actor must also kill instances that are not needed anymore.
    The obvious choice seems to be one coordinator actor managing the list of instance actors (through a dictionary of instances). The problem with this solution is that it doesn't scale: the coordinator actor is the bottleneck.
    What is the best approach?
    It would be great if the actor instances could be spread across remote machines as well.
    I think that in the end the problem can be stated as follows: what is the best way to manage the lifetime of many actors of the same type that can potentially be spread across many remote machines?
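A minimal sketch of the coordinator pattern described above, in C# with Akka.NET (the `EntityActor` and `DispatchToEntity` names are illustrative assumptions, not from the chat):

```csharp
using Akka.Actor;

// Hypothetical envelope carrying the entity id extracted in the controller action.
public sealed class DispatchToEntity
{
    public DispatchToEntity(string entityId, object payload)
    {
        EntityId = entityId;
        Payload = payload;
    }
    public string EntityId { get; }
    public object Payload { get; }
}

// Illustrative per-entity actor.
public class EntityActor : ReceiveActor
{
    public EntityActor()
    {
        ReceiveAny(msg => { /* handle the entity's work here */ });
    }
}

// Coordinator that creates children on demand (child-per-entity).
// As noted above, a single coordinator like this becomes a bottleneck at scale.
public class CoordinatorActor : ReceiveActor
{
    public CoordinatorActor()
    {
        Receive<DispatchToEntity>(msg =>
        {
            // Reuse the child if it already exists, otherwise create it.
            var child = Context.Child(msg.EntityId);
            if (child.Equals(ActorRefs.Nobody))
                child = Context.ActorOf(Props.Create<EntityActor>(), msg.EntityId);
            child.Forward(msg.Payload);
        });
    }
}
```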
Arjen Smits
@Danthar
Sounds to me like you want to use a consistent hashing group: http://getakka.net/docs/working-with-actors/Routers#consistenthashing
When it comes to distributing this across machines, I think cluster sharding is what you want to look at: http://doc.akka.io/docs/akka/current/scala/cluster-sharding.html http://getakka.net/docs/clustering/cluster-sharding
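A consistent-hashing group along those lines can be sketched like this; the worker paths and message type are illustrative assumptions, not from the chat:

```csharp
using Akka.Actor;
using Akka.Routing;

// Messages implement IConsistentHashable so the router knows what to hash on.
public sealed class EntityCommand : IConsistentHashable
{
    public EntityCommand(string entityId) { EntityId = entityId; }
    public string EntityId { get; }
    // All messages with the same key end up at the same routee.
    public object ConsistentHashKey => EntityId;
}

public static class RouterSetup
{
    public static IActorRef CreateRouter(ActorSystem system)
    {
        // Group router over already-created worker actors.
        var paths = new[] { "/user/workers/w1", "/user/workers/w2", "/user/workers/w3" };
        return system.ActorOf(new ConsistentHashingGroup(paths).Props(), "entity-router");
    }
}
```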
Bartosz Sypytkowski
@Horusiath
@easysoft2k15 sounds like the case for akka-cluster-sharding
Alessandro Rizzotto
@easysoft2k15
@Horusiath , @Danthar Thank You. I was thinking about consistent hashing as well. Regarding cluster-sharding I don't have experience right now. I'll take a look
Aaron Stannard
@Aaronontheweb
@hisabir .NET Core kind of threw a wrench into our release schedule
since that came out sooner than expected
Lev Lehn
@llehn
@easysoft2k15's questions look like DDD-related ones to me. I'm exploring the possibilities of implementing a DDD/CQRS system with Akka.NET too. Regarding @Danthar's answer with a ConsistentHashingGroup router: can we add more routees at runtime, after the creation of the router? In DDD terms: let's say I have an entity actor Order with an ID, and let's say I have 100 of them. This entity actor handles the CancelOrder command, which is delivered to it via the consistent-hash group router, which hashes on the order ID. Now we receive a CreateOrder command with an empty/special ID, so it goes to a special actor which creates a new Order entity actor. Now it needs to add the newly created entity to the router, so all subsequent messages directed to /orders/<newly-created-id> go to the new entity actor. How can this be done?
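For what it's worth, Akka.NET routers do accept routee-management messages at runtime; a sketch, assuming the router was created as a group router (the method and actor names here are illustrative):

```csharp
using Akka.Actor;
using Akka.Routing;

public static class RouterAdmin
{
    // Sends a routee-management message to a running group router.
    // Note: enlarging a consistent-hash ring remaps a fraction of the
    // existing keys to different routees, so any per-key state held by
    // the old routees may need to be migrated.
    public static void AddOrderRoutee(IActorRef router, IActorRef newOrderActor)
    {
        router.Tell(new AddRoutee(Routee.FromActorRef(newOrderActor)));
    }
}
```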
David Rivera
@mithril52
@Aaronontheweb Speaking of .NET Core and Akka.NET, does that mean that a .NET Core version of Akka is in the near future? Before 1.5?
Or even part of 1.5?
Alessandro Rizzotto
@easysoft2k15
@llehn I'm not sure (I'm not an expert in Akka.NET) but I don't think the system is designed to support the solution you suggested. More likely you need an intermediate layer of actors (the router's actors) that manage the Order actors and send messages to them. That way the Order actors do the heavy computation and the router actors just dispatch messages to them, which keeps the dispatching mechanism from becoming a bottleneck. This is the approach I'm taking right now. Not sure if it's the best way to do it, though.
Bartosz Sypytkowski
@Horusiath
@easysoft2k15 @llehn Akka.Cluster.Sharding is a plugin designed to work with actors as aggregate roots: created on demand, then migrated, routed and rebalanced across cluster nodes.
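A minimal Akka.Cluster.Sharding startup sketch; the `orders` type name, `OrderActor`, and `IOrderCommand` contract are illustrative assumptions, not from the chat:

```csharp
using Akka.Actor;
using Akka.Cluster.Sharding;

// Hypothetical command contract: every sharded message exposes its order id.
public interface IOrderCommand
{
    string OrderId { get; }
}

public class OrderActor : ReceiveActor
{
    public OrderActor()
    {
        ReceiveAny(msg => { /* handle order commands here */ });
    }
}

// Maps each message to an entity id; the shard id is derived from its hash.
public sealed class OrderMessageExtractor : HashCodeMessageExtractor
{
    public OrderMessageExtractor(int maxNumberOfShards) : base(maxNumberOfShards) { }
    public override string EntityId(object message) => ((IOrderCommand)message).OrderId;
}

public static class ShardingSetup
{
    public static IActorRef StartOrderRegion(ActorSystem system)
    {
        // Returns the shard region ref; send all order commands through it
        // and the plugin creates/locates the entity actor on some node.
        return ClusterSharding.Get(system).Start(
            typeName: "orders",
            entityProps: Props.Create<OrderActor>(),
            settings: ClusterShardingSettings.Create(system),
            messageExtractor: new OrderMessageExtractor(maxNumberOfShards: 100));
    }
}
```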
Alessandro Rizzotto
@easysoft2k15
@Horusiath I'll dig deeper into the cluster sharding docs. At the moment (keeping everything on a single machine) I'm adopting the solution I explained above.
Chris Martin
@trbngr
I'm trying to use Cluster.Sharding with lighthouse but my shards aren't being resolved. Please tell me I can use Lighthouse as just a dumb seed.
and not have to launch the region there too
Bartosz Sypytkowski
@Horusiath
@trbngr you may need to specify roles for sharding nodes (I'm not 100% sure if it's necessary) and use them in sharding config
Chris Martin
@trbngr
ok. I'll give it a shot
Bartosz Sypytkowski
@Horusiath
because cluster sharding may assume that Lighthouse will also host shards
Chris Martin
@trbngr
            akka {
              cluster {
                sharding {
                  role = "projections"
                }
                # will inject this node as a self-seed node at run-time
                seed-nodes = [
                  "akka.tcp://eventdayprojections@168.62.228.228:4053",
                  "akka.tcp://eventdayprojections@23.96.183.175:4053"
                ]
                roles = [projections]
              }
            }
look right?
Bartosz Sypytkowski
@Horusiath
yes
Chris Martin
@trbngr
still not happening :(
Bartosz Sypytkowski
@Horusiath
any errors?
Chris Martin
@trbngr
Trying to register to coordinator at [], but no acknowledgement. Total [1] buffered messages.
oh! I don't have any journal setup
sorry. I've been in Scala-land for the last few months. Trying to get my head back here for a bit ;)
hmm. defaults to inmem, right?
Bartosz Sypytkowski
@Horusiath
yes, it won't work right between processes
Chris Martin
@trbngr
right right
what can I use without setting anything up?
leveldb?
Bartosz Sypytkowski
@Horusiath
sqlite (if all akka processes will point to the same file)
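A sqlite journal setup along those lines might look like this in HOCON (the connection string is illustrative; every process must point at the same database file):

```
akka.persistence {
  journal {
    plugin = "akka.persistence.journal.sqlite"
    sqlite {
      # all akka processes in the cluster must share this file
      connection-string = "Datasource=shared-journal.db"
      auto-initialize = on
    }
  }
}
```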
Chris Martin
@trbngr
hmm. so far this is only one instance. It should work.
Bartosz Sypytkowski
@Horusiath
the Wire serializer is also advised (I know that the existing JSON-based one may have some problems with some of the cluster sharding messages)
Chris Martin
@trbngr
Oh yes. I remember that
Chris Martin
@trbngr
hmm. the db isn't being created. Seems like the persistence module isn't initializing at all
updated gist if you have time to look
wondering if having my seeds over the internet is the problem at this point?
Bartosz Sypytkowski
@Horusiath
if db is not created, you should have some error messages
could you put logs on the gist?
Chris Martin
@trbngr
I got it working after starting lighthouse locally.
Big question here is what happens when a node goes down? Do the shards get recreated on another node?
Bartosz Sypytkowski
@Horusiath
yes - basically shards can be handed over to another node, or rebalanced when the difference in the number of shards between nodes goes over some specified threshold
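The rebalance threshold mentioned here is configurable for the default least-shard-allocation strategy; roughly:

```
akka.cluster.sharding {
  least-shard-allocation-strategy {
    # rebalance when the busiest node has at least this many
    # more shards than the least-loaded node
    rebalance-threshold = 10
    # cap on how many shards are moved at once
    max-simultaneous-rebalance = 3
  }
}
```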
Chris Martin
@trbngr
seems to hold true ;)
but only if auto-down is set. Well.. I can't tell for sure. Too many logs to see if my messages are received.
Bartosz Sypytkowski
@Horusiath
I know that @Aaronontheweb often says to be careful with auto-down, but to be honest, unless you specify your own logic for downing nodes, I think it's reasonable to use it (at least for clusters which fit into a single datacenter).
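For reference, auto-down is a single setting; as cautioned above, it automatically removes unreachable nodes after the timeout, which can split the cluster during a network partition:

```
akka.cluster {
  # "off" (the default) disables auto-downing
  auto-down-unreachable-after = 30s
}
```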