These are chat archives for akkadotnet/akka.net

16th Mar 2018
jameswilddev
@jameswilddev
Mar 16 2018 14:12
Hello! We have a scenario where sometimes we need to send messages totalling hundreds of kilobytes from an Akka cluster node in response to an .Ask from another application. Is Akka Streams suited to this? It seems from a quick look to be better suited to big, persistent pipelines of data than breaking a single message into a few pieces.
Onur Gumus
@OnurGumus
Mar 16 2018 14:13
@jameswilddev can't say for sure, but the number 1 reason to use streams is back pressure.
jameswilddev
@jameswilddev
Mar 16 2018 14:14
That's what I was thinking. We don't really have a bottleneck either end here; we just need to get that data somewhere else.
(it can't be produced or consumed in a streaming manner)
Onur Gumus
@OnurGumus
Mar 16 2018 14:14
then I would use plain actors.
with hyperion serializer
jameswilddev
@jameswilddev
Mar 16 2018 14:14
Just break up the message into smaller messages and send/reassemble those?
Onur Gumus
@OnurGumus
Mar 16 2018 14:15
What would you gain by breaking up the messages?
jameswilddev
@jameswilddev
Mar 16 2018 14:15
We're currently just sending one message but it's too big (exceeds the default message size by several times) and the general advice is to break up large messages.
external system -> request message -> cluster -> response message (too big; often nearing a megabyte) -> external system
Onur Gumus
@OnurGumus
Mar 16 2018 14:18
if that is the case you can split or compress the message, fall back to streams, or use Akka.IO and drop down to raw TCP/UDP
or increase the acceptable message size
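As a sketch of that last option: in Akka.NET the remoting frame size is set via HOCON. The 4 MB value below is illustrative only, sized to the "often nearing a megabyte" responses mentioned above, not a recommendation:

```hocon
akka.remote.dot-netty.tcp {
  # default maximum-frame-size is 128000b; raise it to fit the largest expected message
  maximum-frame-size = 4000000b
  # send/receive buffers should be at least as large as the frame size
  send-buffer-size = 4000000b
  receive-buffer-size = 4000000b
}
```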
jameswilddev
@jameswilddev
Mar 16 2018 14:21
Hmm. Thanks. To make sure the system works at all I've been developing with a larger message size but I'm aware of the problems that will cause in a production environment. Will just have to pre-serialize the message, transport it in chunks, reassemble at the other end and deserialize.
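A minimal sketch of that chunk/transport/reassemble step, in plain Python for illustration; `Chunk`, `split`, and `reassemble` are hypothetical names, not Akka.NET APIs, and in practice each `Chunk` would be an ordinary (small) actor message:

```python
from dataclasses import dataclass

CHUNK_SIZE = 64 * 1024  # stay well under the remoting frame size


@dataclass
class Chunk:
    message_id: str  # correlates chunks belonging to the same logical message
    index: int       # position of this chunk within the message
    total: int       # total number of chunks to expect
    payload: bytes   # slice of the pre-serialized message


def split(message_id: str, data: bytes, chunk_size: int = CHUNK_SIZE) -> list[Chunk]:
    """Break a serialized message into sequenced chunks."""
    total = max(1, -(-len(data) // chunk_size))  # ceiling division, at least one chunk
    return [
        Chunk(message_id, i, total, data[i * chunk_size:(i + 1) * chunk_size])
        for i in range(total)
    ]


def reassemble(chunks: list[Chunk]) -> bytes:
    """Rebuild the original bytes once every chunk has arrived (order-independent)."""
    ordered = sorted(chunks, key=lambda c: c.index)
    if len(ordered) != ordered[0].total:
        raise ValueError("missing chunks")
    return b"".join(c.payload for c in ordered)
```

The receiving actor would buffer chunks per `message_id` and deserialize only when `total` have arrived; a receive timeout can evict incomplete buffers.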
Marc Piechura
@marcpiechura
Mar 16 2018 14:47
@jameswilddev I wouldn’t use Streams only for backpressure; the ability to model your workflow as a stream has some advantages even if you never actually need the backpressure, and with the new StreamRefs (streams over the network) it sounds like a good fit
If it’s only for splitting up the message into chunks it’s a bit overkill, yes, but if you could implement the rest of the receiving workflow, maybe even the sending part, with streams too, then you’d have a nice stream workflow even if you don’t need backpressure
Aaron Stannard
@Aaronontheweb
Mar 16 2018 20:46
@jackowild regarding Akka.Persistence.MongoDb: AkkaNetContrib/Akka.Persistence.MongoDB#37
upgraded the build system; fixed TeamCity so that CI verification now runs correctly
and the build system now supports stuff like DocFx if you wanted to build web docs
I added you and @diegolinan to the set of people with write access to that repo
so you can push a new release whenever you feel it necessary. Releases have to go through the master branch.
Nightlies go through the dev branch.
there's a few outstanding PRs there that look like they'd probably be helpful - I'll leave it up to you guys if you want to merge them in or not. I'm not familiar enough with MongoDb to say so.
Diego Liñan
@diegolinan
Mar 16 2018 22:00
@jackowild So, no support for current Akka.net persistence. How about adding the latest mongo db driver which supports immutability out of the box? That would save a lot of time on registering those mappings. About Aaron's PR, I leave it up to you to merge it or not.