so it seems to want to use the squbs actor system still somewhere
If anyone has any ideas, that would be a big help.. thanks :)
@anilgursel thanks anyways :)
Sorry for chiming in late. Yes, it seems a message destined for io-surfkit-nubank... arrived at a squbs actorsystem. Also note the address and port numbers are the same. Configuration issue somehow? It is hard to tell without knowing how you set up the cluster and why we happen to have the same port on the same host for supposedly different actorsystems.
thanks for the feedback :)
If I’m adding a perpetual stream to a primarily http-server application, the likely home for that would be the cube module, right?
I debated adding it as a separate module, but I'm wondering what others think..
If you go by the default 3-module project structure, then I'd say so. Although you could argue this is transport-specific and rename the svc module to, for instance, consumer.
Submitted PR #696 to bump all the dependency versions.
Looks good, @sebady!
It seems we have a stable build now with #697. I'll keep testing the builds but so far everything seems very positive. If no objections, please merge and restart all pending PR builds.
We may have other build instabilities so it may not be 100%. I'll chase the rest as we encounter them.
Hi @akara ! Thanks a lot for the really nice job done in the persistent buffer :-)
A couple of questions, referring to akka/alpakka#258: is there any plan to integrate the persistent buffer into Alpakka itself? (The main advantage is that there is already a team supporting Alpakka.)
I have integrated a Source and a Sink on top of the PersistentQueue and I would like to contribute them back.
At the moment my problem is that I need to publish the persistent buffer implementation for both Akka 2.4 and Akka 2.5. Do you mind if I publish it from my organization's account?
It may also be useful to mark the Akka dependency as provided ... and I need to publish for Scala 2.11 and 2.12 as well.
We want to implement integration test cases for my application. Can anyone provide any GitHub repos or links to start with the implementation?
I have sent 5 parallel requests to the squbs server.
Hi, I want to shut down squbs at runtime. The squbs website says "squbs runtime can be properly shutdown by sending the Unicomplex() a GracefulStop message". Do I need to extend the Unicomplex class to do the shutdown, or how? Please let me know.
Really sorry about the delay. No, you do not need to extend the Unicomplex class. You can just call Unicomplex(system).uniActor ! GracefulStop.
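To make that concrete, here is a minimal sketch of the shutdown call described above (import paths are per the squbs docs; `system` is a placeholder for whatever ActorSystem name is in scope in your app):

```scala
import org.squbs.lifecycle.GracefulStop
import org.squbs.unicomplex.Unicomplex

// `system` is the ActorSystem booting the Unicomplex (hypothetical name here).
// No subclassing needed: just send GracefulStop to the Unicomplex actor,
// and the squbs runtime shuts down cleanly.
Unicomplex(system).uniActor ! GracefulStop
```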
@ganeshjanu Sorry for the delay on the response. I don't think you have to tune anything to make things execute concurrently. However, your fork-join executor with parallelism-min at 1024 is going to kill the system. The optimal parallelism is usually 1x the number of CPU cores, and could go up to 3x. I'm sure you're not testing on a 1000-core system; the FJ pool will kill you at that rate.
It is more important to see what you're doing in your request handling code to cause it to run in serial.
If you can please share that, I'd be happy to take a look.
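For concreteness, a fork-join dispatcher tuned along the lines suggested above might look like this in `application.conf` (the dispatcher name and exact numbers are illustrative, not from this discussion):

```hocon
my-forkjoin-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    # Keep parallelism near the CPU core count, not in the thousands.
    parallelism-min = 2
    parallelism-factor = 3.0   # up to ~3x CPU cores, per the guidance above
    parallelism-max = 24
  }
}
```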
Thank you. Regarding the system: my server is a 2-core system.
I need to handle parallel requests for an orchestration process that talks to multiple processes on different systems.
What would be the recommended configuration in my case? Please let me know. Thank you in advance.
@ganeshjanu Your configuration, except for the parallelism, seems all good. The fact that you're handling requests serially points to something blocking in your request handling code. Do you make a call to a blocking API, directly or indirectly? Can it be a non-blocking API? A non-blocking API is strongly recommended. If not, we need to make sure the blocking API is called on a blocking dispatcher, which can have hundreds of threads. You must not use fork-join for that dispatcher; generally, we use a standard ThreadPoolExecutor for such cases. There is already a blocking-io-dispatcher available in Akka. You just need to change its parallelism.
For a blocking dispatcher, hundreds of threads are quite fine.
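As a sketch, Akka's built-in blocking dispatcher can be resized in `application.conf`; the 100-thread figure below follows the "hundreds of threads" guidance above and should be tuned for your workload:

```hocon
akka.actor.default-blocking-io-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 100   # hundreds of threads are fine for blocking work
  }
}
```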
@akara The blocking dispatcher is the default one that contains the thread-pool executor in the squbs unicomplex reference.conf. To override it, I have defined a fork-join executor that replaces the thread-pool executor. Why does squbs default to a blocking dispatcher?
How can we achieve parallelism in such a case?
Nope, you should never use the FJ pool for blocking. Leave it as is. Just make sure any blocking code runs on the blocking dispatcher.
Also, that blocking dispatcher was defined before Akka had a blocking-io-dispatcher in their reference.conf. At this time it is probably better to use the Akka one.
How you get your blocking code to run on the blocking dispatcher depends on how your code is structured. For instance, you can configure a particular actor to run on the blocking-io-dispatcher.
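For example, one way to pin an actor to the blocking dispatcher is through deployment configuration (a sketch; `/myBlockingActor` is a hypothetical actor path):

```hocon
akka.actor.deployment {
  /myBlockingActor {
    # Run this actor on Akka's built-in blocking dispatcher
    dispatcher = akka.actor.default-blocking-io-dispatcher
  }
}
```

The same can be done in code with `Props(...).withDispatcher("akka.actor.default-blocking-io-dispatcher")`.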
Or if your blocking code is in a Future, you can set the execution context to the blocking-io-dispatcher programmatically before firing up that future.
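The Future variant can be sketched in plain Scala without an Akka dependency (`blockingCall` is a hypothetical blocking API, and `blockingEc` stands in for looking up Akka's blocking-io-dispatcher via `system.dispatchers.lookup(...)`):

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object BlockingFutureSketch {
  // A dedicated pool for blocking work, so it never starves the main dispatcher.
  // In Akka this would be system.dispatchers.lookup("akka.actor.default-blocking-io-dispatcher").
  val blockingEc: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

  // Hypothetical blocking API call (e.g. a JDBC query).
  def blockingCall(): Int = { Thread.sleep(10); 42 }

  def main(args: Array[String]): Unit = {
    // Pass the blocking execution context explicitly when firing up the future.
    val f = Future(blockingCall())(blockingEc)
    println(Await.result(f, 1.second)) // prints 42
  }
}
```

The key point is that the execution context is passed explicitly, so only the blocking work lands on the dedicated pool.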
That's why I said: if you can show code, or at least explain what your code does structurally (i.e., svc A calls actor B, which makes a blocking DB call), we can help figure out the best way to make sure you're not blocked.
I am not using the blocking dispatcher with any of my actors. My doubt is whether the squbs route uses this dispatcher by default,
because I am creating RESTful services based on squbs routes.