Well that explanation did not clear this question for me, that is why I am asking again.
What I understood from that is that you have to manually unstash messages before termination for them to go to deadLetters.
But my tests indicate that INFO messages regarding "deadLetters" and "StashedMessage" still appear even if I do not unstash my messages manually.
@Inkp you are right.
When you have messages on the stash and the actor is killed, by PoisonPill or otherwise.
So stopped, not restarted.
The stashed messages are automatically dead-lettered.
Also, when you have stashed messages and the actor restarts, it automatically unstashes them.
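A minimal sketch of the behavior described above (the actor name and message type here are illustrative, not from the conversation):

```csharp
// Sketch: an actor that stashes incoming messages, to show what happens
// to the stash on stop vs. restart. Assumes Akka.NET's IWithUnboundedStash.
public class MyActor : ReceiveActor, IWithUnboundedStash
{
    public IStash Stash { get; set; } // injected by Akka.NET for IWithUnboundedStash

    public MyActor()
    {
        Receive<string>(msg => Stash.Stash()); // stash everything, for illustration
    }
}

// actorRef.Tell("a"); actorRef.Tell(PoisonPill.Instance);
//   -> the actor is stopped (not restarted): the stashed "a" goes to DeadLetters.
// If the actor instead throws and is restarted by its supervisor,
//   the stashed messages are automatically unstashed back into the mailbox.
```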
Okay, thanks for clearing this for me! How do I handle stashed messages in case I want them discarded if actor stops/restarts? Do I Stash.ClearStash() on PostStop()?
And a more philosophical question. What is the reasoning behind unstashing messages on actor restart? Isn't the stash part of the actor's state? Thus it should be discarded like any other non-persistent actor state.
@Inkp you would have to try it out, never used ClearStash(). But on PostStop() seems reasonable.
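A hedged sketch of the ClearStash()-in-PostStop idea suggested above (untested by the participants; whether PostStop runs before the stash would be dead-lettered is worth verifying):

```csharp
// Sketch: discard stashed messages when the actor stops, instead of
// letting them be dead-lettered. Actor name is illustrative.
public class DiscardingActor : ReceiveActor, IWithUnboundedStash
{
    public IStash Stash { get; set; }

    public DiscardingActor()
    {
        Receive<string>(msg => Stash.Stash()); // illustrative: stash everything
    }

    protected override void PostStop()
    {
        Stash.ClearStash(); // drop stashed messages so they are not dead-lettered
        base.PostStop();
    }
}
```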
As to the philosophical question: not sure. I just know it's really handy in most cases :P
hi, when unit testing actors via Testkit... is it still required in latest version to tear down the test actor system?
or is this handled by the base class "Testkit"?
@Ralf1108 that's handled by the TestKit
but is it normal that after a test completes there are so many debug log entries like
DeadLetter from [akka://test/user] to [akka://test/user]: <<DeathWatchNotification>: [akka://test/user/TestProbe_DetailPageVersion_9c57b66d-5cc9-4b73-8ad4-d4bdef0ad324], ExistenceConfirmed=True, AddressTerminated=False>
do I have to collect these DeathWatchNotifications by myself?
or can I suppress them?
@Ralf1108 it's normal. What is happening is that as the ActorSystem is being shut down, various actors are sending DeathWatchNotifications (which are system-level messages) to actors that are already gone.
Thus the dead-letter logs.
Nothing to worry about.
is it possible to suppress them, as they pollute the test log? :-)
There is currently some work being done to suppress deadletter notifications for system messages, but it needs more work.
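In the meantime, a config sketch that may cut down the noise, using Akka's standard dead-letter logging settings (this throttles regular dead-letter logging; system messages like DeathWatchNotification may still slip through, per the remark above):

```hocon
# Sketch: reduce dead-letter noise in test logs.
akka {
  log-dead-letters = 0                  # or "off"; max dead letters to log
  log-dead-letters-during-shutdown = off
}
```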
I've used ConEmu for a while, it is v good. Noticed that it is based off it, v cool
heh.. I use Git-bash
Has anyone used app/web config transformations for HOCON? Any sample?
I just replace the entire HOCON section full stop
Yeah, I suspected that replacing the whole section was required
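A sketch of the "replace the entire section" approach with a standard XDT transform (the section layout assumes the usual Akka.NET `<akka><hocon>` configuration section; the hostname value is purely illustrative):

```xml
<!-- Sketch: Web.Release.config fragment that swaps the whole HOCON
     section, since individual HOCON keys cannot be transformed. -->
<akka xdt:Transform="Replace">
  <hocon>
    <![CDATA[
      akka {
        loglevel = WARNING
        remote.helios.tcp.hostname = "prod-host"   # illustrative value
      }
    ]]>
  </hocon>
</akka>
```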
Question on Cluster Sharding: Can you have more than one instance of a given type of entity on a single Node? The only statement I could find of this issue was "One entity instance may live only at one node at the time" in the documentation, but I'm not really sure what that is supposed to mean.
@cfjames yes you can. The statement means that if you have, for example, an actor representing a User with id=1, you may be sure that, as long as the cluster is not partitioned, no more than one actor representing that user will be present at the same time, keeping your user state consistent
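A sketch of that guarantee in code: many entities of one type can live on a single node, and the promise is only that each entity id maps to at most one live actor. The names (UserActor, UserEnvelope, the extractor) are illustrative, not from the conversation.

```csharp
// Sketch: envelope + message extractor for a sharded "user" entity type.
public sealed class UserEnvelope
{
    public string UserId { get; }
    public object Payload { get; }
    public UserEnvelope(string userId, object payload)
    {
        UserId = userId;
        Payload = payload;
    }
}

public sealed class UserMessageExtractor : HashCodeMessageExtractor
{
    public UserMessageExtractor() : base(maxNumberOfShards: 100) { }
    public override string EntityId(object message) => (message as UserEnvelope)?.UserId;
    public override object EntityMessage(object message) => ((UserEnvelope)message).Payload;
}

// var region = ClusterSharding.Get(system).Start(
//     typeName: "user",
//     entityProps: Props.Create<UserActor>(),
//     settings: ClusterShardingSettings.Create(system),
//     messageExtractor: new UserMessageExtractor());
// region.Tell(new UserEnvelope("1", "hi")); // always the single live actor for id=1
// region.Tell(new UserEnvelope("2", "hi")); // may well live on the same node as id=1
```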
@Horusiath got it thanks.
Any example out there on how to use Cluster Sharding with Lighthouse?
Hello, I am moving my router configuration to HOCON, but I am using my own hash mapping function that I pass to WithHashMapping. Does this disqualify router configuration in the HOCON file?
When Lighthouse is running and a node with a Sharding Region starts, I see the following message in the Lighthouse logging: "Message Register from akka.tcp://clustername@ip:port/user/sharding/typename to akka://clustername/user/sharding/typenameCoordinator/singleton/coordinator was not delivered"
@cconstantin need some info for the MongoDb Persistence. Just aligned to akka 1.0.7 on my fork, now I have to implement AtomicWrites. Is it possible to have a call to WriteMessagesAsync with AtomicWrites on more than one PersistenceId?
ok, I seem to be forgetting something. I have a Topshelf service running an ActorSystem; I added all the akka.remote stuff, put the config in the app.config, and the logs show the ActorSystem gets started with that config. But it doesn't open up a port to listen on when I start the service in the debugger. Do I still need something else?
ok, my own fault, and HOCON comes again to bite me in the ass... I put the remote section under the actor section because of copy'n'pasta.
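For anyone hitting the same thing, a sketch of the correct nesting: `remote` is a sibling of `actor` under `akka`, not a child of it (hostname/port values are illustrative; the `helios.tcp` transport name matches the Akka.NET 1.0.x era discussed here):

```hocon
# Sketch: akka.remote sits alongside akka.actor, not inside it.
akka {
  actor {
    provider = "Akka.Remote.RemoteActorRefProvider, Akka.Remote"
  }
  remote {
    helios.tcp {
      hostname = "127.0.0.1"
      port = 8081
    }
  }
}
```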
@schepersk you probably need to restrict your cluster sharding to nodes that are not Lighthouse. One of the available sharding settings is the role, which tells what role a cluster node needs to have in order to support cluster sharding. Without it, the shard region will assume that all nodes are capable of sharding (while Lighthouse is not)
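A config sketch of that role restriction (the role name is illustrative):

```hocon
# Sketch: restrict sharding to nodes carrying a given role, so that
# Lighthouse nodes (which host no entities) are excluded.
akka.cluster {
  roles = [ "sharding-node" ]        # set on the entity-hosting nodes only
  sharding.role = "sharding-node"    # shard regions use only nodes with this role
}
```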
@andreabalducci no, it's not possible; a persistence id describes the boundaries of write operations for persistence
@Horusiath Indeed, I was just about to try that when I saw it in the source code :-) Thanks!
@Horusiath So, does this also mean that you can restrict the type of a shard to a node role? Let's say you have a shard region for AR1 running on role ABC and you want to start a shard for AR2 on a completely different node with role XYZ..