📢 The Dapr community has moved to Discord (https://aka.ms/dapr-discord). This Gitter community will no longer be monitored and maintained. Please post questions/comments over at Discord and we'll be happy to answer there!
While debugging with the VS Code extension I am getting:
time="2020-08-25T14:22:23.6736781+03:00" level=warning msg="error establishing client to placement service: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp [::1]:50005: connectex: No connection could be made because the target machine actively refused it.\"" app_id=clientcred instance=xxxxxx scope=dapr.runtime.actor type=log ver=0.10.0
Docker hosts the placement service on port 6050, so I applied "placementAddress": "localhost:6050" in tasks.json. After that the message disappeared.
Hi there, sending a message to a pubsub topic results in the following error in the Dapr sidecar. time="2020-08-26T16:51:45.271826823Z" level=error msg="error from operator stream: rpc error: code = Unavailable desc = transport is closing" app_id=exampleappname instance=exampleinstancename-5bfcd46f9-f6254 scope=dapr.runtime type=log ver=0.10.0
How will I know that the pubsub component is functioning properly?
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
  namespace: default
spec:
  type: pubsub.rabbitmq
  metadata:
  - name: host
    value: amqp://localhost:5672 # Required. Example: "amqp://rabbitmq.default.svc.cluster.local:5672", "amqp://localhost:5672"
  - name: consumerID
    value: rabbitMqConsumer # Required. Any unique ID. Example: "myConsumerID"
  # - name: durable
  #   value: "true" # Optional. Default: "false"
  # - name: deletedWhenUnused
  #   value: <REPLACE-WITH-DELETE-WHEN-UNUSED> # Optional. Default: "false"
  - name: autoAck
    value: "true" # Optional. Default: "false"
  # - name: deliveryMode
  #   value: "0" # Optional. Default: "0". Values between 0 - 2.
  # - name: requeueInFailure
  #   value: "true" # Optional. Default: "false".
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
  namespace: default
spec:
  type: pubsub.rabbitmq
  metadata:
  - name: host
    value: amqp://user:GSlU5LpIa1:32421 # Required. Example: "amqp://rabbitmq.default.svc.cluster.local:5672", "amqp://localhost:5672"
  - name: consumerID
    value: rabbitMqConsumer # Required. Any unique ID. Example: "myConsumerID"
  # - name: durable
  #   value: "true" # Optional. Default: "false"
  # - name: deletedWhenUnused
  #   value: <REPLACE-WITH-DELETE-WHEN-UNUSED> # Optional. Default: "false"
  - name: autoAck
    value: "true" # Optional. Default: "false". Metadata values must be quoted strings.
  # - name: deliveryMode
  #   value: "0" # Optional. Default: "0". Values between 0 - 2.
  # - name: requeueInFailure
  #   value: "true" # Optional. Default: "false".
== APP == 2020/09/02 00:27:58 event - PubsubName: messagebus, Topic: neworder, ID: 5dc38ceb-9a1a-4d71-9c8e-0d44f98f4845, Data: ping
== DAPR == time="2020-09-02T00:27:58.172224+08:00" level=error msg="rabbitmq pub/sub: error handling message from topic 'neworder', error returned from app while processing pub/sub event: . status code returned: 500" app_id=faildsub1 instance=ArideMacBook-Air.local scope=dapr.contrib type=log ver=0.10.0
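That 500 is returned by your app's subscription handler for the neworder topic, so daprd delivered the message but the app failed to process it; the handler route is the first thing to check. Also worth noting: with autoAck set to "true" RabbitMQ treats the message as acknowledged on delivery, so a message that fails in the app is not redelivered. If you want failed messages retried, something along these lines should behave better (a sketch of just the relevant metadata, reusing the names from your component above):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
  namespace: default
spec:
  type: pubsub.rabbitmq
  metadata:
  - name: host
    value: amqp://localhost:5672 # keep your real connection string here
  - name: consumerID
    value: rabbitMqConsumer
  - name: autoAck
    value: "false" # let the sidecar ack only after the app handles the event successfully
  - name: requeueInFailure
    value: "true" # requeue the message when the app returns an error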
Dapr won't distribute messages from an input binding to multiple consumer instances?
I have the following scenario and issue:
1) An external system publishes 500 TODO job messages to RabbitMQ.
2) The Consumer Pod uses an input binding (AMQP protocol) to receive messages from RabbitMQ, and the Consumer Pod runs on demand (if there are no messages, no instances are running).
3) KEDA scales the Consumer Pod from 0 to 5 based on a custom metric for the RabbitMQ queueLength.
4) Kubernetes creates 5 instances of the Consumer Pod based on the scale rule (500 incoming messages).
5) The daprd sidecar of the first Consumer Pod instance gets all 500 messages and processes them one by one. Each message takes 20 seconds to complete, and in the meantime the status of all messages in RabbitMQ changes from Ready to Unacked (unacknowledged).
6) The other 4 Consumer Pod instances did NOT get any unprocessed messages from RabbitMQ and stayed idle the whole time.
I need all 5 Consumer Pod instances to process all outstanding messages in RabbitMQ in parallel. What configuration did I miss or get wrong?
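This sounds less like a Dapr routing problem and more like RabbitMQ pushing every ready message to the first open channel: without a per-consumer prefetch limit, the first daprd sidecar pulls all 500 messages into Unacked, leaving nothing for the other 4 pods (and KEDA then sees queueLength drop). If your Dapr version's RabbitMQ binding supports the prefetchCount metadata field, limiting it should spread the work. A rough sketch, assuming the standard bindings.rabbitmq component (the component name, queue name, and host below are made up, replace them with yours):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: todo-jobs-input # hypothetical component name
  namespace: default
spec:
  type: bindings.rabbitmq
  metadata:
  - name: queueName
    value: todo-jobs # hypothetical queue name
  - name: host
    value: amqp://rabbitmq.default.svc.cluster.local:5672
  - name: durable
    value: "true"
  - name: prefetchCount # max unacked messages delivered to each sidecar
    value: "1" # each pod holds one 20-second job at a time

With prefetchCount at 1, each sidecar only holds one unacknowledged message at a time, so the remaining Ready messages stay visible to the other instances (and to KEDA's queueLength trigger).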