Neil Houghton
@nizmow
JsonMessageSerializer.SerializerSettings
On application start you can manipulate that and change the MT serialiser settings.
It might not be ideal, but maybe you can try doing something with that and see if it resolves your problem.
(You could add your own JsonConverter, for example.)
Chris
@Rocky-25
I've got this in ConfigureServices, but it seems to be completely ignored:

JsonConvert.DefaultSettings = () => new JsonSerializerSettings
{
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore
};
I'm trying to pass ODataQueryOptions down to the microservice to apply at the database level
I've just found a setting within MT:

cfg.ConfigureJsonSerializer(x => new JsonSerializerSettings
{
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore
});
Chris
@Rocky-25
When setting that I get a new error:
---> System.Runtime.Serialization.SerializationException: Failed to serialize message
---> System.IO.IOException: Stream was too long.
Neil Houghton
@nizmow
I didn't realise ConfigureJsonSerializer was there
Chris
@Rocky-25
Neither did I. Not sure how I deal with the next issue now, though :(
Has anyone ever used MT with OData? And if so, how do you get the OData query down to the microservice layer from the API?
Neil Houghton
@nizmow
I'm not really sure what this has to do with MT, and I've never used OData
Amila
@amilarox1
Hi guys, just getting started with MassTransit. I have a publisher which works. The subscriber has the code below, but the message always ends up in the dead-letter queue. Have I missed anything in the configuration? Any help really appreciated. Cheers
void ConfigureMassTransit(IServiceCollectionConfigurator configurator)
{
    configurator.AddConsumer(typeof(NewsBroadcastCreatedConsumer));

    configurator.AddBus(provider =>
    {
        return Bus.Factory.CreateUsingAzureServiceBus(cfg =>
        {
            cfg.Host("");
            cfg.Message<NewsBroadcastCreatedIntegrationEvent>(msgConfig =>
            {
                msgConfig.SetEntityName("news-broadcast");
            });
            cfg.SubscriptionEndpoint<NewsBroadcastCreatedIntegrationEvent>("news-broadcast-api", e =>
            {
                e.ConfigureConsumer<NewsBroadcastCreatedConsumer>(provider);
            });
        });
    });
}

services.AddMassTransit(ConfigureMassTransit);
services.AddHostedService<HostedService>();
Chris Patterson
@phatboyg
@Rocky-25 you can't create a new serializer settings object; you just have to change the value on the one provided and return it. Or you'll break, literally, everything, as you're seeing :)
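Applying Chris's advice to the earlier snippet, a corrected call would mutate and return the settings instance the delegate receives instead of constructing a new one. A sketch, assuming the delegate is handed MT's current JsonSerializerSettings:

```csharp
// Sketch: change the provided settings and return the same instance,
// rather than replacing it with a new JsonSerializerSettings.
cfg.ConfigureJsonSerializer(settings =>
{
    settings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
    return settings;
});
```

This preserves the converters and binder MT has already configured, which is what a fresh JsonSerializerSettings was throwing away.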
@amilarox1 does the publisher also configure the entity name for the event so that it is published to the topic? I'm guessing yes. When you say dead-letter queue, is that the Azure DLQ? If so, your consumer is likely failing to consume the message with an exception at the transport level. What do you see in the logs?
Amila
@amilarox1
@phatboyg Yes, the publisher also configures the entity name for the event. Just for testing, I added the subscription endpoint configuration and the consumer to the publisher application, alongside the publish endpoint configuration, and the consumer there received the message. But not when I move it to a separate application. Sorry for the confusion, I meant the skipped queue.
Chris Patterson
@phatboyg
Ah, did you change the namespace + name of the type? They must be the same, end-to-end.
I'm guessing your separate application moved the message contract to a new namespace. They must match, entirely.
Amila
@amilarox1
@phatboyg yep, that sorted it out, thanks. Is it a requirement of Azure Service Bus or MT? I thought since the message is JSON, it would deserialize into the type you configured on the subscription endpoint.
Chris Patterson
@phatboyg
It's a requirement for MT to know the message type.
It's written into the message envelope on send, and used to match messages to consumers on receive.
You can see the details of the envelope in the documentation: https://masstransit-project.com/architecture/interoperability.html#json-bson-xml
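Concretely, the envelope carries the contract's full name as a URN, and that is what has to match on both ends. A trimmed sketch of the JSON envelope (field values and the "Contracts" namespace here are illustrative, not from this thread):

```json
{
  "messageId": "...",
  "messageType": [
    "urn:message:Contracts:NewsBroadcastCreatedIntegrationEvent"
  ],
  "message": {
  }
}
```

If the consuming application declares the type in a different namespace, the URN no longer matches and the message is moved to the skipped queue.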
I need to add that as a common gotcha to the documentation.
Amila
@amilarox1
:D yeah, I was pulling my hair out over the past couple of days. Thanks @phatboyg. So I will have to create a class library with all the contracts and use it with both the sender and receiver.
Chris Robison
@chrisdrobison
@phatboyg A new bus is created because a new process is created to handle a request, and that process sends report generation requests. Once it's done, the process exits. There are reasons for that for this particular process, and I realize it's not ideal, but there you have it. We do see things stabilize around 1000, but then as load comes in, we see it go past that. In an hour you could probably see 3000-4000 report requests.
Chris Patterson
@phatboyg
That, or you could just put the same namespace/class/interface in each project.
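Either way, the requirement is that the contract's full name is identical in both projects. A hedged sketch using the event from the snippet above (the namespace and property are assumptions, for illustration only):

```csharp
// This declaration must have the same namespace + type name in the
// publisher and the subscriber, whether it lives in a shared class
// library or is duplicated in each project.
namespace NewsBroadcast.Contracts   // assumed namespace, for illustration
{
    public interface NewsBroadcastCreatedIntegrationEvent
    {
        Guid NewsId { get; }        // hypothetical property
    }
}
```
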
Amila
@amilarox1
Got it, thanks again.
Chris Patterson
@phatboyg
@chrisdrobison I'm pretty sure I've suggested an alternative for this before. You said it's fire-and-forget, right? So why respond?
I also have an open issue to change it so that responses are sent via the same RabbitMQ channel that received the message
Chris Robison
@chrisdrobison
I think I've created a little confusion; it's a combination of things. The first message is a fire-and-forget "upload reports"; a consumer process takes that (it doesn't respond to it) and spins off a process to handle the request. That new process then needs all the reports for that request, so it fires off ten or so request/response "generate report" messages and waits for the responses.
Chris Patterson
@phatboyg
Ah, okay. So the responses are needed. Gotcha.
These are the defaults, I mean, every ten seconds it should prune the bucket list.
static SendEndpointCacheDefaults()
{
    Capacity = 1000;
    MinAge = TimeSpan.FromSeconds(10);
    MaxAge = TimeSpan.FromHours(24);
}
Only on a request though, it doesn't prune in the background.
Chris
@Rocky-25
Is it possible to pass an Expression through MT?
e.g.
public Expression<Func<Domain.Models.Course, bool>> Filter { get; set; }
When I put this on the contract, the endpoint no longer picks it up
danmalcolm
@danmalcolm

I have a question around aggregate roots in a domain model and how they relate to MT sagas.

For example, say I have a saga running in OrderService that is interested in events published from other services as the order is fulfilled. My first thought was to split responsibilities as follows:

  1. Use the saga framework to handle the mechanics of correlating messages and any state relevant to the saga (e.g. timeouts to check delays in fulfilment)
  2. Have a separate Order aggregate root as the "source of truth" in terms of the order's data and business rules (e.g. do shipment details on an event make sense given the order lines?). This would involve storing an OrderId property in the saga state and loading the Order object as part of the saga's behaviour.

The documentation and other examples don't seem to suggest this. In the example used in the documentation, all business logic (valid transitions) and state is held in the OrderState and OrderStateMachine classes. We see a similar approach in https://github.com/MassTransit/TheCoffeeShop. I can see the advantages of this in terms of speed, simplicity and concurrency management. Is this a sign that I'm on the wrong track? Or are we just keeping the examples simple to focus on the saga infrastructure?

I can see that the separate AR approach introduces new complications, like persisting entity and saga state in the same transaction, separate concurrency management for entities etc. However, I can't see an alternative if I do want to keep most business logic within a self-contained domain model. Any other reasons not to go along this route?

This is my first post, so I should offer a big thanks to MassTransit contributors and the community on here.

Chris Patterson
@phatboyg
@Rocky-25 methods/behaviors on message contracts aren't really supported, nor recommended.
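Since an Expression can't travel on a contract, one common workaround (a hedged suggestion, not from this thread) is to send the filter as plain data and rebuild the expression on the consuming side. All names below are illustrative:

```csharp
// Instead of Expression<Func<Course, bool>> on the contract, carry
// serializable filter criteria and translate them back into an
// expression in the consumer.
public class CourseFilter                  // hypothetical contract property type
{
    public string TitleContains { get; set; }
    public bool? IsActive { get; set; }
}

// In the consumer, rebuild the expression from the criteria
// (Course.Title and Course.IsActive are assumed properties):
Expression<Func<Domain.Models.Course, bool>> ToExpression(CourseFilter f)
{
    return c =>
        (f.TitleContains == null || c.Title.Contains(f.TitleContains)) &&
        (f.IsActive == null || c.IsActive == f.IsActive);
}
```

The rebuilt expression can then be handed to the data layer as usual, which also keeps the wire contract free of behavior.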
@danmalcolm I've seen both, with trade-offs in either direction. I prefer to use only commands against roots, implemented by the saga with state stored in the saga repository. Events produced by the saga/aggregate are used to update views and other related stores which support read-only queries. Queries directly against the aggregate go via messaging to the saga (using .Respond in the state machine) so that accurate source-of-truth information is available.
The coffee shop is an example of this.
But I would say that fulfillment may be separate from an order, it's all about decomposing a large domain thing like an order into the parts of order fulfillment
Intake, order, billing, payment, fulfillment, shipping, proof of delivery, etc. - those are all separate concerns that may push events into an overall order history and current "state" but that's a bigger question with its own architectural considerations.
Ryan Langton
@ryanlangton
A number of people in my company want to use Azure Event Grid because of its pub/sub spec/design. I'm pushing more for just using MT bus.Publish(). What are the arguments I can use against Event Grid, or the benefits of using MT instead, for pub/sub?
They're also arguing that Event Grid is cheaper than Azure Service Bus
Ryan Langton
@ryanlangton
I'm leaning towards proposing we use MT pub/sub (with Azure Service Bus) for internal events, and use Azure Event Grid for things that external customers can webhook into. Has anyone seen this sort of architecture/design?
Ryan Langton
@ryanlangton
this article didn't help my cause lol
https://blog.eldert.net/choosing-your-pub-sub-messaging-service-service-bus-and-event-grid/
"So in conclusion, use Event Grid by default, but don't hesitate to bring in Service Bus for those specific scenarios."
danmalcolm
@danmalcolm
@phatboyg Fair point on the fulfilment domain, this was just an example scenario.
danmalcolm
@danmalcolm
I'm still tied to the old-fashioned hexagonal architecture, with the core logic baked into a domain model at the centre and messaging part of the "outer" infrastructure. I can see now that with well-factored smaller services, managing state in a saga might be all you need.
Paul VanRoosendaal
@pvanroos
@phatboyg I'm hosting a saga in an ASP.NET Core 3.1 Web API service. I'm using Azure SB as the transport. When I send or publish (from another process), I've been using the SetSessionId method to add a unique Guid for the SessionId. Is there a way to set this in the SagaStateMachineInstance by convention, rather than by explicitly setting it in the publish/send?