Just to close this out and give some extra clarification (I work with abdul): the reason we have separate hosts is that we've created a service that can start up several of our other microservices inside itself, so that we can do networkless/serializationless transport in some scenarios, such as long-running simulations.
We thought about throwing all the options into one big host, but we were worried about crosstalk between the dependencies, for instance if two different services consume the same interfaced object. I briefly considered nested containers (I was one of the monsters who used them heavily with StructureMap), but decided that was a rabbit hole we didn't necessarily want to go down.
We figured out what we were missing, though: when the sending agents were created, the handler pipelines were not being shared with our custom transport implementation. We created a wrapper IHandlerPipeline that accepts multiple injected HandlerPipelines and wires them up to the single sender inside the SendingAgent. This is a biiiit of a hack, but it seems to be working decently well. We're still ironing out some of the kinks, such as keeping a separate collection for each configured queue, but I think that will mainly be a performance benefit so it doesn't have to check every pipeline per message, as our messages are strongly typed (though I think the namespace gets pulled out of the type name, so that may not be any protection).
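For anyone curious, the wrapper idea is roughly the following sketch. To be clear, none of these type shapes come from the library itself: IHandlerPipeline and Envelope here are simplified stand-ins I made up for illustration, and the real interfaces are considerably richer.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Simplified stand-in contracts, not the real framework types.
public interface IHandlerPipeline
{
    bool CanHandle(Envelope envelope);
    Task InvokeAsync(Envelope envelope);
}

public class Envelope
{
    public Type? MessageType { get; set; }
    public object? Message { get; set; }
}

// Wrapper that holds several injected pipelines and dispatches each message
// to the first one that can handle it, so a single sending agent can serve
// every service hosted inside the combined host.
public class CompositeHandlerPipeline : IHandlerPipeline
{
    private readonly IReadOnlyList<IHandlerPipeline> _pipelines;

    public CompositeHandlerPipeline(IEnumerable<IHandlerPipeline> pipelines)
        => _pipelines = pipelines.ToList();

    public bool CanHandle(Envelope envelope)
        => _pipelines.Any(p => p.CanHandle(envelope));

    public async Task InvokeAsync(Envelope envelope)
    {
        foreach (var pipeline in _pipelines)
        {
            if (pipeline.CanHandle(envelope))
            {
                await pipeline.InvokeAsync(envelope);
                return; // first matching pipeline wins
            }
        }

        throw new InvalidOperationException(
            $"No pipeline can handle {envelope.MessageType?.Name}");
    }
}
```

This is where the per-queue collection mentioned above would help: instead of scanning every pipeline per message, you'd key a dictionary by queue (or message type) so dispatch is a single lookup.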
I'm going to try to get permission from on high to open source some of the work we're doing on this, as I think it's pretty interesting in some cases. Would definitely be interested to hear what you think of it!
Endpoint.ExecutionOptions (which .MaximumThreads() operates on) is only constructed using the default constructor of ExecutionDataflowBlockOptions, and the parallelism is never changed from the defaults.
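For reference, the default-constructed ExecutionDataflowBlockOptions (from the System.Threading.Tasks.Dataflow package) processes messages sequentially unless you override it:

```csharp
using System;
using System.Threading.Tasks.Dataflow;

var options = new ExecutionDataflowBlockOptions();

// The TPL Dataflow default is sequential processing: one message at a time.
Console.WriteLine(options.MaxDegreeOfParallelism); // 1
```

So unless something like a .MaximumThreads() call actually sets MaxDegreeOfParallelism on those options before the block is built, the endpoint will run single-threaded.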