Stuart Lang
@slang25
Cool, I've got it recreated here! Thanks for your help. I'll see if I can figure out what we can do
Mat Robichaud
@robichaud
awesome, thanks!
Stuart Lang
@slang25
So it looks like it's FluentValidation and JustSaying having conflicting version requirements for a couple of packages. It should be safe to ignore the warning in this case, so you can add NU1605 to your NoWarn, or, if you want to be a bit more controlled about it (I wouldn't personally), add this to the netstandard2.0 projects' package references:
<PackageReference Include="System.Runtime.Handles" Version="4.3.0" NoWarn="NU1605" />
<PackageReference Include="System.IO.FileSystem.Primitives" Version="4.3.0" NoWarn="NU1605" />
Mat Robichaud
@robichaud
Ok thanks for your time
Stuart Lang
@slang25
No worries
Stuart Lang
@slang25
I think it's a FluentValidation oversight actually, they shouldn't be explicitly referencing those packages on netstandard2.0, I'll send them a PR
Anthony Steele
@AnthonySteele
A microbenchmark on the new message context store. https://gist.github.com/AnthonySteele/3544807f54831ca9bce8ef32c31f5ebd
Stuart Lang
@slang25
Nice work @AnthonySteele, I agree that it doesn't look like it's worthwhile shortcutting it. We will always have lower hanging fruit.
Anthony Steele
@AnthonySteele
True. JustSaying will probably never be a "zero-allocation, lowest possible overhead" framework.
FYI, about "trusty" Ubuntu on Travis - I noticed afterwards that it's LTS, but only until April 2019. So not long to move off it!
Stuart Lang
@slang25
Agreed, and it doesn't need to be; it's intended to be used in an inherently asynchronous manner. We might want publishing to be optimal, though.
Anthony Steele
@AnthonySteele
I think you're also right that "the latest Travis supports is xenial" - I specified "bionic" in the travis.yml here https://github.com/NuKeeperDotNet/NuKeeper/pull/626/files, but the build reported:
OS Version: 16.04 ( https://travis-ci.org/NuKeeperDotNet/NuKeeper/builds/472961483?utm_source=github_status&utm_medium=notification )
So Travis picked xenial, not bionic.
Stuart Lang
@slang25
Yeah, it seems it will default to xenial when it's empty or doesn't recognise it
Anthony Steele
@AnthonySteele
Could have given a warning that it was ignoring what I put there :(
Bogdan Radacina
@bradacina
Hi, when I try to shut down a service that has JustSaying subscribers listening, is there a way, after stopping listening, to wait for any incomplete IAsyncHandler tasks to finish (handlers that are in flight)?
Stuart Lang
@slang25
There is nothing out-of-the-box for this, and that is deliberate, because message handlers should be idempotent and expected to occasionally fail.
You can implement it yourself, however, by wrapping the handlers and keeping count, or by implementing a custom throttling strategy that has an awaitable method for when all workers are available again.
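A rough sketch of the wrapping-and-counting approach (illustrative only: the handler interface is assumed to be the IAsyncHandler<T> mentioned above with a Task<bool> Handle(T message) signature, and the type names here are made up):
using System;
using System.Threading;
using System.Threading.Tasks;

// Decorator that counts in-flight handlers so shutdown can wait for them to drain.
public class InFlightTrackingHandler<T> : IAsyncHandler<T>
{
    private readonly IAsyncHandler<T> _inner;
    private readonly InFlightTracker _tracker;

    public InFlightTrackingHandler(IAsyncHandler<T> inner, InFlightTracker tracker)
    {
        _inner = inner;
        _tracker = tracker;
    }

    public async Task<bool> Handle(T message)
    {
        _tracker.Increment();
        try
        {
            return await _inner.Handle(message);
        }
        finally
        {
            _tracker.Decrement();
        }
    }
}

public class InFlightTracker
{
    private int _count;

    public void Increment() => Interlocked.Increment(ref _count);
    public void Decrement() => Interlocked.Decrement(ref _count);

    // Call after you've stopped the subscribers: polls until nothing is in flight.
    public async Task WaitForDrainAsync(TimeSpan pollInterval, CancellationToken token = default)
    {
        while (Volatile.Read(ref _count) > 0)
        {
            await Task.Delay(pollInterval, token);
        }
    }
}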
Ronald Suharta
@RonaldSuharta_twitter
Hi, I have been trying to find the code that causes a message to go into the error queue.
In the MessageDispatcher.cs -> DispatchMessage method, the exception is being suppressed,
so how does the SQS client know if there is an exception?
BTW, I'm using v6.0.1.
try
{
    if (typedMessage != null)
    {
        typedMessage.ReceiptHandle = message.ReceiptHandle;
        typedMessage.QueueUrl = _queue.Url;
        handlingSucceeded = await CallMessageHandler(typedMessage).ConfigureAwait(false);
    }

    if (handlingSucceeded)
    {
        await DeleteMessageFromQueue(message.ReceiptHandle).ConfigureAwait(false);
    }
}
catch (Exception ex)
{
    var errorText = $"Error handling message [{message.Body}]";
    _log.LogError(0, ex, errorText);

    if (typedMessage != null)
    {
        _messagingMonitor.HandleException(typedMessage.GetType());
    }

    _onError(ex, message);

    lastException = ex;
}
finally
{
    if (!handlingSucceeded && _messageBackoffStrategy != null)
    {
        await UpdateMessageVisibilityTimeout(message, message.ReceiptHandle, typedMessage, lastException).ConfigureAwait(false);
    }
}
Brian Murphy
@brainmurphy
@RonaldSuharta_twitter SQS handles the retry policy, including moving the message to the error queue. The act of JustSaying receiving a message from a queue increments a counter on that message - once all processing attempts have failed, SQS moves the message to the error queue. Have a read here: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
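The "all processing attempts have failed" threshold Brian describes is the queue's redrive policy. A minimal sketch of what that configuration looks like at the SQS level, set here directly with the AWS SDK for .NET and with placeholder queue URL/ARN values:
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public static class RedrivePolicySketch
{
    public static async Task ConfigureAsync(IAmazonSQS sqs)
    {
        await sqs.SetQueueAttributesAsync(new SetQueueAttributesRequest
        {
            QueueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/orders",
            Attributes = new Dictionary<string, string>
            {
                // Move a message to the error (dead-letter) queue after 5 failed receives.
                ["RedrivePolicy"] =
                    "{\"deadLetterTargetArn\":\"arn:aws:sqs:eu-west-1:123456789012:orders_error\",\"maxReceiveCount\":\"5\"}"
            }
        });
    }
}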
Ronald Suharta
@RonaldSuharta_twitter
@brainmurphy Thanks. I'm trying to prove that returning false from the consumer's 'Handle' method will put the message into the error queue (if there is no retry, or after the max retries are reached). How can I prove this?
For SQS to increment the counter, the handler (consumer) must give some sort of 'true/false' indication back to SQS that it failed to process the message, right? But I cannot see the code that does so.
Ronald Suharta
@RonaldSuharta_twitter
Is it because we need to delete the message from the queue, otherwise SQS will keep re-attempting to deliver the message (until the max retries are reached)?
Brian Murphy
@brainmurphy
That's exactly how it works: SQS increments the count when the message is read from the queue,
moving it to the error queue if the maximum number of processing attempts is reached. JustSaying doesn't have to do anything other than read the message. After that, the only action JustSaying performs is deleting the message if it's successfully handled.
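To make that concrete, a minimal illustrative handler (OrderPlaced is a made-up message type, and the handler interface name and Task<bool> Handle(T message) signature are assumed to match the JustSaying version in use):
using System.Threading.Tasks;
using JustSaying.Models; // assumption: JustSaying's Message base class lives here

// Hypothetical message type; JustSaying messages derive from Message.
public class OrderPlaced : Message { }

// Returning false means JustSaying does not delete the message, so it reappears
// after the visibility timeout; SQS counts each receive and, once maxReceiveCount
// is exceeded, SQS itself moves the message to the error queue.
public class OrderPlacedHandler : IHandlerAsync<OrderPlaced>
{
    public Task<bool> Handle(OrderPlaced message)
    {
        // Always report failure here, purely to observe the redelivery behaviour.
        return Task.FromResult(false);
    }
}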
Ronald Suharta
@RonaldSuharta_twitter
thanks
Peter Kneale
@PeterKneale
justeat/JustSaying#472 Some ideas on how diagrams might help explain some of the JustSaying concepts
Idea is basically that we use PlantUML to produce diagrams like this:
[rendered sequence diagram]
from this:
plantuml:
@startuml component

title PubSub using JustSaying and AWS SNS/SQS
hide footbox
skinparam BackgroundColor #EEEBDC
skinparam BoxPadding 20
skinparam ParticipantPadding 20

box "Publishing\nApplication" #lightgrey
    participant Publisher order 1
end box
box "AWS" #orange
    participant "AWS SNS Topic" as SNS_Publisher order 2
    participant "AWS SQS Queue #1" as SQS_Subscriber1 order 3
    participant "AWS SQS Queue #2" as SQS_Subscriber2 order 4
end box
box "Subscribing\nApplication #1" #lightgrey
    participant "Subscriber" as Subscriber_1 order 5
end box
box "Subscribing\nApplication #2" #lightgrey
    participant "Subscriber" as Subscriber_2 order 6
end box

== Publishing ==
group publishing
    Publisher-[#Green]>SNS_Publisher: Publish message to SNS Topic
    SNS_Publisher-[#gray]>SQS_Subscriber1: Deliver message to SQS Queue
    SNS_Publisher-[#gray]>SQS_Subscriber2: Deliver message to SQS Queue
    return ok
end

== Polling ==
group subscriber polling
    SQS_Subscriber1<[#Green]-Subscriber_1: List messages
    return Message returned
end 
group subscriber polling
    SQS_Subscriber2<[#Green]-Subscriber_2: List messages
    return Message returned
end 

== Acknowledging ==
group subscriber acknowledging
    SQS_Subscriber1<-[#Green]-Subscriber_1 !! : Delete message
end 
group subscriber acknowledging
    SQS_Subscriber2<-[#Green]-Subscriber_2 !! : Delete message
end 
@enduml
using this
Ronald Suharta
@RonaldSuharta_twitter
Does anyone know what the default consumer concurrency is? And where is the setting?
Ronald Suharta
@RonaldSuharta_twitter
I've seen a few occasions of SQS delivering the same message twice within a 20-50 ms interval.
If the default consumer concurrency is set to 1, how could this happen?
Brian Murphy
@brainmurphy
@RonaldSuharta_twitter SQS guarantees "at least once" delivery of messages. Have a read: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html#standard-queues-at-least-once-delivery Idempotent message handling is an important principle.
Ronald Suharta
@RonaldSuharta_twitter
@brainmurphy I understand about idempotency, but I'm trying to figure out how many concurrent consumers there are by default.
Batches are fetched in one asynchronous loop, so no concurrency there. The messages are then passed to the "workers" for concurrent processing. At the point you receive them from AWS you have a visibility timeout for each message so that they don't appear in the queue for anyone else to process.
I would echo what Brian says, SQS will do at least once delivery, so what you're reporting is definitely something you should expect.
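Not JustSaying's actual code, but a rough sketch of the pattern described above (a single long-polling receive loop handing messages to a bounded set of concurrent workers), using the AWS SDK for .NET with illustrative names:
using System;
using System.Threading;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public static class ReceiveLoopSketch
{
    // One loop fetches batches; a semaphore caps how many messages are processed at once.
    // Each received message is hidden from other consumers for its visibility timeout.
    public static async Task RunAsync(IAmazonSQS sqs, string queueUrl, Func<Message, Task> dispatch, int maxWorkers, CancellationToken ct)
    {
        var throttle = new SemaphoreSlim(maxWorkers);
        while (!ct.IsCancellationRequested)
        {
            var response = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
            {
                QueueUrl = queueUrl,
                MaxNumberOfMessages = 10,
                WaitTimeSeconds = 20 // long polling
            }, ct);

            foreach (var message in response.Messages)
            {
                await throttle.WaitAsync(ct);
                _ = Task.Run(async () =>
                {
                    try { await dispatch(message); }
                    finally { throttle.Release(); }
                }, ct);
            }
        }
    }
}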
Ronald Suharta
@RonaldSuharta_twitter
Is there any plan to integrate with the Polly framework for smarter retries? I know I can do this within the application itself, but it would be nice to be able to just attach a retry policy during subscriber registration.
Alex Burgett
@aburgett87
Hey guys,
What are the current plans for releasing 7.0? I'm currently using 6.0.1 and looking forward to some of the new constructs in 7.0, e.g. MessageContext.
Brian Murphy
@brainmurphy
@RonaldSuharta_twitter That's an interesting question, but the short answer is "no". At the moment we just rely on the SQS redrive policy and dead letter queue for retries. There have been some thoughts about how JustSaying could better distinguish between different failure modes - "I can't deserialise this message", "exception in handler", etc. - but nothing is planned.
(apologies for the delay in replying! 🙁)
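If you do want Polly-style retries today, the "within the application itself" route Ronald mentions is straightforward; a minimal sketch wrapping the work inside a handler (OrderPlaced, ProcessAsync and the policy values are illustrative, and the handler interface/signature is assumed to match the JustSaying version in use):
using System;
using System.Threading.Tasks;
using JustSaying.Models; // assumption: JustSaying's Message base class lives here
using Polly;

// Hypothetical message type for the sketch.
public class OrderPlaced : Message { }

// Retry transient failures inside the handler before giving up. Returning false
// after retries are exhausted leaves the message for the SQS redrive policy.
public class RetryingOrderPlacedHandler : IHandlerAsync<OrderPlaced>
{
    private static readonly IAsyncPolicy RetryPolicy = Policy
        .Handle<Exception>()
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    public async Task<bool> Handle(OrderPlaced message)
    {
        try
        {
            await RetryPolicy.ExecuteAsync(() => ProcessAsync(message));
            return true;
        }
        catch (Exception)
        {
            return false;
        }
    }

    private Task ProcessAsync(OrderPlaced message) => Task.CompletedTask; // placeholder work
}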
Brian Murphy
@brainmurphy

@aburgett87 Unfortunately, I don't have a timeline to share. JustSaying contributors are enthusiastic engineers, but still very much volunteers.

Have I mentioned that we're very happy to have new contributors helping us out? 😉

Alex Burgett
@aburgett87
@brainmurphy Haha, I'm more than happy to give my time to the project. What needs to be done to release version 7?
Jarrod
@JarrodJ83
Has anyone used JustSaying to consume messages in a WebForms app (.NET 4.7.1)? I am able to publish messages just fine, but the consumer seems to fail to start, without any errors (at least none that are logged).
I start the bus from the Application_Start of the Global.asax