TCP idle-timeout encountered on connection to [sqs.us-east-1.amazonaws.com:443], no bytes passed in the last 1 minute. What's the recommended way of handling this error? Increasing the timeout seems like one solution, but it's not clear what a reasonable value should be (also this is a global setting), or why the Alpakka SQS connector isn't self-healing in this respect. Any advice?
EventSourcedBehavior: What if I have an actor that, besides some state to be persisted, should also hold regular actor state that is forgotten as soon as the actor stops? As an example of the case I'm envisioning: I have a stateful actor consume commands that sometimes mutate the state and sometimes broadcast messages, derived from that state, to a set of subscribing actors. I'm sure I'm just not seeing something, but since you're not supposed to mix
EventSourcedBehavior with any other behavior, I'm having difficulties. So what's the established way of doing this? Can I mark some fields of my state as not to be persisted, or something?
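One common way to approach this (a sketch only, assuming Akka Persistence Typed; the Subscribe/DoThing/ThingDone names are made up for illustration) is to keep the ephemeral state outside the persisted State, e.g. as a local in the Behaviors.setup closure, and put only durable fields in State:

```scala
import akka.actor.typed.{ActorRef, Behavior}
import akka.actor.typed.scaladsl.Behaviors
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

sealed trait Command
final case class Subscribe(ref: ActorRef[String]) extends Command // hypothetical
final case class DoThing(payload: String) extends Command         // hypothetical

final case class ThingDone(payload: String)   // persisted event
final case class State(history: List[String]) // persisted state only

def apply(id: String): Behavior[Command] =
  Behaviors.setup { ctx =>
    // Ephemeral, non-persisted state: lives only as long as the actor instance,
    // and is gone after a stop/restart (subscribers would need to re-subscribe).
    var subscribers = Set.empty[ActorRef[String]]

    EventSourcedBehavior[Command, ThingDone, State](
      persistenceId = PersistenceId.ofUniqueId(id),
      emptyState = State(Nil),
      commandHandler = (state, cmd) =>
        cmd match {
          case Subscribe(ref) =>
            subscribers += ref // mutate ephemeral state, persist nothing
            Effect.none
          case DoThing(p) =>
            // persist the event, then broadcast based on it as a side effect
            Effect.persist(ThingDone(p)).thenRun { _ =>
              subscribers.foreach(_ ! s"done: $p")
            }
        },
      eventHandler = (state, evt) => State(evt.payload :: state.history)
    )
  }
```

The trade-off is that anything kept this way really is forgotten on restart, which is usually exactly what you want for subscriber lists and similar transient bookkeeping.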
hey all, I'm running an akka based play app in production and occasionally seeing this error -- trying to figure out what it means
"Sink.asPublisher(fanout = false) only supports one subscriber (which is allowed, see reactive-streams specification, rule 1.12)"
does anyone have any ideas or could point me in the right direction?
@amine.chikhaoui:matrix.org: I apologize for some vagueness, btw, as it's not totally clear what you mean by "write to a database". If it's CRUD-style operations, then the transaction can be modeled in a persistent actor as:
Those events then allow either a projection to publish to Kafka or for the state of pending publishes to be tracked by the persistent actor.
If the DB write is itself appending internal state-changing events (i.e. you're already event-sourcing), and you want a message eventually published to Kafka that isn't easily derivable from the internal state events alone (e.g. it depends on both the command and the internal state), I've sometimes defined a "command in an event's clothing" (e.g.
RecordedNeedToPublishDomainEventToKafka) which wraps the message to be published to Kafka and is tagged. An events-by-tag query in a projection then looks for that tag and publishes the wrapped message. The event itself is a no-op/identity function in the event handler.
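A minimal sketch of that pattern (assuming Akka Persistence Typed; Command, State, persistenceId, emptyState and commandHandler are assumed defined elsewhere, and the "kafka-publish" tag name is illustrative):

```scala
import akka.persistence.typed.scaladsl.EventSourcedBehavior

sealed trait Event
final case class InternalStateChanged(data: String) extends Event
// "Command in an event's clothing": carries the Kafka payload, changes no state
final case class RecordedNeedToPublishDomainEventToKafka(payload: String) extends Event

// In the event handler the wrapper event is an identity function on state
def eventHandler(state: State, event: Event): State =
  event match {
    case InternalStateChanged(d)                    => state.updated(d)
    case _: RecordedNeedToPublishDomainEventToKafka => state // no-op
  }

// Tag only the wrapper event so an eventsByTag projection can find it
// and publish the wrapped message to Kafka
val behavior =
  EventSourcedBehavior[Command, Event, State](
      persistenceId, emptyState, commandHandler, eventHandler)
    .withTagger {
      case _: RecordedNeedToPublishDomainEventToKafka => Set("kafka-publish")
      case _                                          => Set.empty
    }
```

The nice property is that the intent to publish is durably recorded in the same journal write as the rest of the transaction, so the projection can deliver it at-least-once without a dual-write problem.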
With persistentActor ? DoAction(foo), the reply would be sent from within the persist callback as well, then the rest continues.
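In Akka Persistence Typed, that reply-after-persist shape is typically expressed with Effect.persist(...).thenReply (a sketch; DoAction, ActionDone, State and the Reply type are placeholders, not from the original code):

```scala
import akka.actor.typed.ActorRef
import akka.persistence.typed.scaladsl.Effect

sealed trait Reply
case object Done extends Reply

final case class DoAction(foo: String, replyTo: ActorRef[Reply]) // hypothetical
final case class ActionDone(foo: String)                         // hypothetical event
final case class State(log: List[String])                        // hypothetical state

def commandHandler(state: State, cmd: DoAction): Effect[ActionDone, State] =
  // The reply is sent only after the event has been successfully persisted,
  // so the asker never sees an acknowledgement for a lost write.
  Effect.persist(ActionDone(cmd.foo)).thenReply(cmd.replyTo)(_ => Done)
```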
I am trying to migrate some old code from 2.5 to 2.6. It was using
ActorPublisher. I created a custom source which now has to use the ask pattern to get messages from the actor. The problem I have is that now I am hitting the case where the timeout expires just before the actor sends the response. I can’t just increase the timeout as I don’t know when the actor will reply (this is for a websocket). As far as I can tell this was never an issue with the old
ActorPublisher since it was all actors. Is there a solution to this?
The best that I can come up with is to include a deadline in the request message so that the receiver knows not to send any replies after that deadline. The deadline has to be earlier than the ask timeout to avoid the race condition, but not too much earlier, as that creates a period of unresponsiveness. I can use a long timeout to reduce how often this happens. What are the downsides of making the ask timeout extremely long?
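The deadline-in-the-message idea can be sketched with plain scala.concurrent.duration.Deadline (the Request type and maybeReply helper are made up for illustration; the point is that the receiver checks the deadline before replying):

```scala
import scala.concurrent.duration._

final case class Request(payload: String, deadline: Deadline) // hypothetical message

// Receiver side: only reply while the asker can still be waiting.
// Returns true if a reply was sent, false if it was suppressed.
def maybeReply(req: Request, send: String => Unit): Boolean =
  if (req.deadline.hasTimeLeft()) {
    send(s"reply to ${req.payload}")
    true
  } else {
    false // the asker has (or is about to) time out; stay silent
  }

// Sender side: set the deadline slightly earlier than the ask timeout,
// so a reply is never racing the AskTimeoutException.
val askTimeout = 30.seconds
val request    = Request("hello", Deadline.now + (askTimeout - 2.seconds))
```

The margin (2 seconds here) should cover message transit plus clock skew between sender and receiver; it is the "period of unresponsiveness" traded away to close the race.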
Source.ask suffers from this same problem.