    Phaero_X
    @HemSoft
    Hi again. Are there any samples that demonstrate the retry logic? For some reason PublishEventAsync() stopped retrying when I return 500 from my subscriber to exercise it. AFAIK the retry logic is currently linear and not configurable - is that right? Trying to troubleshoot this issue. Thanks.
    1 reply
    Gunnar L. Rasmussen
    @gunnarlr
    The https://github.com/dapr/quickstarts/tree/master/pub-sub example shows subscriptions set up with three keys: "pubsubName", "topic" and "route". The first two make sense to me. What does "route" mean?
    1 reply
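    In Dapr's pub/sub model, "route" is the path on the subscribing app to which the sidecar delivers messages for that topic. A minimal subscription list as returned from the app's /dapr/subscribe endpoint (the component, topic, and handler names here are illustrative):

    ```json
    [
      {
        "pubsubName": "messagebus",
        "topic": "neworder",
        "route": "/neworder"
      }
    ]
    ```

    With this, the sidecar delivers each "neworder" message as an HTTP POST to the app's /neworder endpoint.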
    sedat-eyuboglu
    @sedat-eyuboglu

    While debugging with the VS Code extension I am getting

    time="2020-08-25T14:22:23.6736781+03:00" level=warning msg="error establishing client to placement service: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp [::1]:50005: connectex: No connection could be made because the target machine actively refused it.\"" app_id=clientcred instance=xxxxxx scope=dapr.runtime.actor type=log ver=0.10.0

    Docker hosts the placement service at 6050, so I applied "placementAddress": "localhost:6050" in tasks.json. After that the message disappeared.

    2 replies
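    The fix above amounts to pointing the extension's debug task at the placement port Docker actually exposes. A sketch of the relevant tasks.json fragment, assuming the Dapr VS Code extension's "daprd" task type (fields other than "placementAddress" are illustrative):

    ```json
    {
      "type": "daprd",
      "appId": "clientcred",
      "placementAddress": "localhost:6050"
    }
    ```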
    Hanz van Aardt
    @hanzvanaardt
    Hi there, I am having a hard time troubleshooting the daprd sidecars and would love some guidance. For example, I have a working statestore, yet when I try to access it I receive {"message":"Could not get state."}. The daprd sidecar does not show any log, and my main application container simply shows {"message":"Could not get state."}. I am at a loss as to how to start troubleshooting: is Dapr working, is the statestore working, or is the issue the communication between the main application container and daprd? Any assistance will be very welcome. Thank you.
    10 replies
    Mariusz Zieliński
    @mariozski
    Welcome everyone! I'm trying to get Dapr working on macOS with docker-machine running in VMware Fusion. I'm not entirely sure how to configure things... when I go through getting started and run "dapr run --app-id nodeapp --app-port 3000 --dapr-http-port 3500 node app.js" I get this error message: == DAPR == 2020/08/25 23:34:57 failed to send the request: Post "http://localhost:9411/api/v2/spans": dial tcp [::1]:9411: connect: connection refused
    8 replies
    I am not sure if that error is because the dapr cli is trying to reach port 9411 on the local machine? (Docker is running in a VM with a different IP address.)
    AbserAri
    @abserari
    I saw someone in the community write an article about Dapr, and I wanted to write one too: Introduction to Dapr Source Code With a Pub-Sub Sample. Hope it will be helpful.
    Edward.Chan
    @EdwardChange4
    How do I debug a Dapr program with VS2019?
    1 reply
    sedat-eyuboglu
    @sedat-eyuboglu
    @yaron2 According to what @georgestevens99 writes, I am trying to understand why the sidecar in a k8s cluster is NOT HA. When I deploy multiple replicas of my pod, the sidecar is also multiplied. Am I wrong? Can you clarify? Also, can you share the link to that call's recording? Thank you
    Yaron Schneider
    @yaron2
    @sedat-eyuboglu the sidecars being highly available are a result of your deployment being highly available, so like you say, it is HA when running multiple replicas.
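    Concretely, the sidecar is injected per pod, so the replica count drives the sidecar count. A minimal Deployment sketch (names are illustrative, and the annotation spelling varies across Dapr versions):

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3                     # 3 pods => 3 daprd sidecars, one per pod
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
          annotations:
            dapr.io/enabled: "true"   # ask the injector to add a daprd sidecar
            dapr.io/app-id: "myapp"   # older releases spell this dapr.io/id
        spec:
          containers:
          - name: myapp
            image: myapp:latest
    ```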
    Hanz van Aardt
    @hanzvanaardt

    Hi there, sending a message to a pubsub topic results in the following error in the Dapr sidecar: time="2020-08-26T16:51:45.271826823Z" level=error msg="error from operator stream: rpc error: code = Unavailable desc = transport is closing" app_id=exampleappname instance=exampleinstancename-5bfcd46f9-f6254 scope=dapr.runtime type=log ver=0.10.0

    How will I know that the pubsub component is functioning properly?

    1 reply
    Yanzhi Li
    @Li-Yanzhi
    If I use dapr and a pod to process queue messages via input bindings, will the messages be processed sequentially (one by one) or concurrently? To my understanding, the pod needs to expose a webapi endpoint to the dapr sidecar; will this result in dapr posting a new message to the pod before a previous message has finished processing (meaning the pod will process multiple messages at the same time)?
    2 replies
    FluentGuru
    @FluentGuru
    Hey guys! How do you do service discovery for deployments in docker compose?
    7 replies
    sedat-eyuboglu
    @sedat-eyuboglu
    Is there any known issue with dapr in OpenShift?
    1 reply
    ozturkcagtay
    @ozturkcagtay
    Hi, I am developing service calls with dapr. I have a problem: when I make my service call using the dapr client, there is no problem. But whenever I connect to the corporate network with VPN, the service calls fail with "RpcException: Status(StatusCode=Unknown, Detail=\"context deadline exceeded\")".
    Both services are running on my local machine, so what causes this situation?
    1 reply
    Tugay Ersoy
    @Admiralkheir
    Hi, I have a problem with pub/sub messaging. I am running 2 services on my local machine with local dapr, each with different ports (grpc, metrics, etc.). One service publishes an event, the subscriber is triggered and returns 500 (to test the pub/sub mechanism). After that I didn't notice any further triggering of my subscriber, but according to the documentation it must be triggered again: https://github.com/dapr/docs/tree/master/howto/consume-topic. Help please, thank you.
    11 replies
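    One way to make a redelivery test like the one above reproducible is to drive the failure from the subscriber itself: return 500 for the first few deliveries, then 200. A stdlib-only sketch of an HTTP subscriber, assuming a pubsub component named "messagebus" and a topic "neworder" (hypothetical names); note that whether a 500 actually triggers redelivery depends on the pub/sub component and its settings (e.g. autoAck/requeueInFailure for RabbitMQ):

    ```python
    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    FAIL_FIRST_N = 2  # return 500 for the first N deliveries to exercise retry


    class Subscriber(BaseHTTPRequestHandler):
        attempts = 0  # class-level counter shared across requests

        def do_GET(self):
            # The sidecar asks the app for its subscriptions at /dapr/subscribe.
            if self.path == "/dapr/subscribe":
                subs = [{"pubsubName": "messagebus",
                         "topic": "neworder",
                         "route": "/neworder"}]
                body = json.dumps(subs).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

        def do_POST(self):
            # Topic messages arrive as POSTs on the subscribed route.
            if self.path == "/neworder":
                Subscriber.attempts += 1
                length = int(self.headers.get("Content-Length", 0))
                self.rfile.read(length)  # drain the CloudEvent payload
                if Subscriber.attempts <= FAIL_FIRST_N:
                    self.send_response(500)  # non-2xx: ask for redelivery
                else:
                    self.send_response(200)  # ack: message processed
                self.end_headers()
            else:
                self.send_response(404)
                self.end_headers()

        def log_message(self, *args):
            pass  # keep test output quiet


    def serve(port=0):
        """Start the subscriber on a background thread; port 0 picks a free port."""
        server = HTTPServer(("127.0.0.1", port), Subscriber)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        return server
    ```

    Run it, point a sidecar's --app-port at it, and publish to the topic; the attempt counter shows how many deliveries actually arrived.
    
    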
    Yanzhi Li
    @Li-Yanzhi
    Hi there, does anyone have experience with using dapr in a kubernetes job? If I run a normal pod, I can expose a webapi to the dapr sidecar to receive messages from an input binding, but a kubernetes job is usually a console application and does not expose a webapi, so how can the job application (e.g. a console app) receive messages from the dapr sidecar?
    2 replies
    AbserAri
    @abserari
    Hello guys, I wonder how Dapr realizes at-least-once semantics. Does Dapr depend on the underlying message bus? NATS doesn't promise acks but NATS Streaming does. Does that mean that when using NATS as the pubsub component, Dapr can't promise at-least-once semantics?
    2 replies
    Ravindra
    @Ravindra-a
    Hi, today when I installed dapr in my AKS cluster I see the following error in the dapr-operator pod:
    kubectl logs -n dapr-system dapr-operator-d7fb8dc96-srrn5
    time="2020-08-30T19:07:44.595756486Z" level=info msg="log level set to: info" instance=dapr-operator-d7fb8dc96-srrn5 scope=dapr.operator type=log ver=0.10.0
    time="2020-08-30T19:07:44.596221515Z" level=info msg="metrics server started on :9090/" instance=dapr-operator-d7fb8dc96-srrn5 scope=dapr.metrics type=log ver=0.10.0
    time="2020-08-30T19:07:44.59679205Z" level=info msg="starting Dapr Operator -- version 0.10.0 -- commit 6032dc2" instance=dapr-operator-d7fb8dc96-srrn5 scope=dapr.operator type=log ver=0.10.0
    time="2020-08-30T19:07:45.005094919Z" level=info msg="Dapr Operator is started" instance=dapr-operator-d7fb8dc96-srrn5 scope=dapr.operator type=log ver=0.10.0
    I0830 19:07:45.005169 1 leaderelection.go:242] attempting to acquire leader lease dapr-system/operator.dapr.io...
    E0830 19:07:45.007903 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list v1alpha1.Component: v1alpha1.ComponentList.Items: []v1alpha1.Component: v1alpha1.Component.Spec: v1alpha1.ComponentSpec.Metadata: []v1alpha1.MetadataItem: v1alpha1.MetadataItem.Value: ReadString: expects " or n, but found 6, error found in #10 byte of ...|,"value":60},{"name"|..., bigger context ...|gqc7ypnZUV4Ixw="},{"name":"timeoutInSec","value":60},{"name":"disableEntityManagement","value":false|...
    (the same reflector.go:153 "Failed to list v1alpha1.Component" error repeats every second)
    I see that there is now version 0.10; this was working fine 2 weeks ago when the version was 0.9. Is there a way to specify the version during helm chart installation, as I am using helm to install dapr?
    Ravindra
    @Ravindra-a
    My main issue is that I have configured pub-sub using Azure Service Bus; however, when I publish anything, although it gets posted to the sidecar, it doesn't get posted to the Service Bus.
    When I check the logs for my pod I see this:
    {"app_id":"notification","instance":"<redacted>","level":"info","msg":"mTLS is disabled. Skipping certificate request and tls validation","scope":"dapr.runtime","time":"2020-08-30T19:03:27.049587447Z","type":"log","ver":"0.10.0"}
    {"app_id":"notification","instance":"<redacted>","level":"warning","msg":"failed to load components: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 10.0.120.180:80: connect: connection refused\"","scope":"dapr.runtime","time":"2020-08-30T19:03:36.068384958Z","type":"log","ver":"0.10.0"}
    Ravindra
    @Ravindra-a
    Finally figured out my issue. Something is definitely wrong in 0.10.0, so I used --set global.tag=0.9.0. Eventually there wasn't any error in the logs; however, none of my components were being detected, so I had to install everything in the same namespace, including the core dapr components. Found this link talking about scopes: dapr/dapr#1181. Is it necessary to have everything within the same namespace?
    19 replies
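    On version pinning: both the chart version and the image tag can be pinned at install time. A sketch assuming the dapr/dapr Helm chart and Helm 3 syntax (worth double-checking the flags against your Helm version):

    ```
    helm repo update
    helm install dapr dapr/dapr \
      --namespace dapr-system \
      --version 0.9.0 \
      --set global.tag=0.9.0
    ```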
    Yaron Schneider
    @yaron2
    hey @Ravindra-a, I responded in a new thread on your latest message.
    Tugay Ersoy
    @Admiralkheir
    Hi, I have a problem with RabbitMQ. I am running 2 services on my local machine with local dapr, each with different ports (grpc, metrics, etc.). One service publishes an event, the subscriber is triggered and returns 500 (to test the pub/sub mechanism). After that I restarted and didn't notice any further triggering of my subscriber. I edited my yaml component like this:
    apiVersion: dapr.io/v1alpha1
    kind: Component
    metadata:
      name: messagebus
      namespace: default
    spec:
      type: pubsub.rabbitmq
      metadata:
      - name: host
        value: amqp://localhost:5672 # Required. Example: "amqp://rabbitmq.default.svc.cluster.local:5672", "amqp://localhost:5672"
      - name: consumerID
        value: rabbitMqConsumer # Required. Any unique ID. Example: "myConsumerID"
      # - name: durable
      #   value: "true" # Optional. Default: "false"
      # # - name: deletedWhenUnused
      # #   value: <REPLACE-WITH-DELETE-WHEN-UNUSED> # Optional. Default: "false"
      - name: autoAck
        value: "true" # Optional. Default: "false"
      # # - name: deliveryMode
      # #   value: "0" # Optional. Default: "0". Values between 0 - 2.
      # - name: requeueInFailure
      #   value: "true" # Optional. Default: "false".
    Amit Hansda
    @amitHansda
    @Admiralkheir Does your publishing work? I think you need to specify the username and password in the host value in spec.metadata.
    @Admiralkheir Watch the dapr logs. If a component is not being initialized properly, they ought to say so.
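    The credential form being referred to here is standard AMQP URI syntax: user and password go before the host. An illustrative fragment (guest/guest is the default for a local RabbitMQ broker):

    ```yaml
    - name: host
      value: amqp://guest:guest@localhost:5672   # user:password@host:port
    ```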
    Tugay Ersoy
    @Admiralkheir
    Yes, it worked @amitHansda, and the component initialized.
    Amit Hansda
    @amitHansda
    @Admiralkheir So publishing an event is working, as you said. What are you using for the subscribing application? Go, NodeJS or dotnetcore?
    Tugay Ersoy
    @Admiralkheir
    dotnetcore @amitHansda
    With the redis component it worked, but with rabbit the ack mechanism is not working.
    Amit Hansda
    @amitHansda
    @Admiralkheir Okay... in my case autoAck needs to be set to false. Let me try again with it set to false.
    Tugay Ersoy
    @Admiralkheir
    it did not work
    Amit Hansda
    @amitHansda
    @Admiralkheir Try cloning my repo https://github.com/amitHansda/store-manager
    It uses tye to start all the services together. Also, in the components folder you may need to change the connection string; mine possibly uses a different username and password than yours.
    3 replies
    Tugay Ersoy
    @Admiralkheir
    Did you try this mechanism? Does it work with your app?
    Tugay Ersoy
    @Admiralkheir
    The problem is the ack mechanism / retry logic. I don't have a problem with publishing or subscription.
    FluentGuru
    @FluentGuru
    Hey guys, is there a sample for service-to-service RPC? I can't see it in dapr/samples.
    ozturkcagtay
    @ozturkcagtay
    Is there an example project of grpc invocation with dotnet sdk?
    2 replies
    AbserAri
    @abserari
    @Admiralkheir image.png Hah, I think our YAML is set correctly, so if there is a problem, it's in the rabbitMQ component implementation.
    Tugay Ersoy
    @Admiralkheir
    @abserari so what should I do? I don't get it
    AbserAri
    @abserari
    @Admiralkheir I did the test again and it looks like the autoAck mechanism didn't work.
    Unfortunately, no matter how many times I restarted the subscriber, the message that got a 500 was never redelivered. The next step, I think, is to debug the dapr runtime and check whether the autoAck parameter is read according to our YAML.
    apiVersion: dapr.io/v1alpha1
    kind: Component
    metadata:
      name: messagebus
      namespace: default
    spec:
      type: pubsub.rabbitmq
      metadata:
      - name: host
        value: amqp://user:GSlU5LpIa1:32421 # Required. Example: "amqp://rabbitmq.default.svc.cluster.local:5672", "amqp://localhost:5672"
      - name: consumerID
        value: rabbitMqConsumer # Required. Any unique ID. Example: "myConsumerID"
      # - name: durable
      #   value: "true" # Optional. Default: "false"
      # # - name: deletedWhenUnused
      # #   value: <REPLACE-WITH-DELETE-WHEN-UNUSED> # Optional. Default: "false"
      - name: autoAck
        value: true # Optional. Default: "false"
      # # - name: deliveryMode
      # #   value: "0" # Optional. Default: "0". Values between 0 - 2.
      # - name: requeueInFailure
      #   value: "true" # Optional. Default: "false".
    and I even get the log, but the at-least-once semantics don't work.
    == APP == 2020/09/02 00:27:58 event - PubsubName: messagebus, Topic: neworder, ID: 5dc38ceb-9a1a-4d71-9c8e-0d44f98f4845, Data: ping
    
    == DAPR == time="2020-09-02T00:27:58.172224+08:00" level=error msg="rabbitmq pub/sub: error handling message from topic 'neworder', error returned from app while processing pub/sub event: . status code returned: 500" app_id=faildsub1 instance=ArideMacBook-Air.local scope=dapr.contrib type=log ver=0.10.0
    Yaron Schneider
    @yaron2
    hey everyone, community call starting now. password: eWRhSklVTjJjSnhTaURDcFZaU2ZzQT09. url: https://us02web.zoom.us/j/85305980190
    David Aronchick
    @aronchick
    Is anyone having trouble on the android client with the passcode? It only allows ~8 characters :(
    2 replies
    Yaron Schneider
    @yaron2
    @abserari I'll debug this today
    2 replies
    Yanzhi Li
    @Li-Yanzhi

    Dapr won't distribute messages from an input binding to multiple instances of a consumer?

    I have the following scenario and issue:

    1) An external system publishes 500 TODO job messages to RabbitMQ
    2) A Consumer Pod uses an input binding (AMQP protocol) to receive messages from RabbitMQ, and the Consumer Pod runs on demand (if there are no messages, no instances are running)
    3) KEDA scales the Consumer Pod from 0 to 5 based on a custom metric of RabbitMQ queueLength
    4) Kubernetes creates 5 instances of the Consumer Pod based on the scale rule (500 incoming messages)
    5) The daprd sidecar of the first instance of the Consumer Pod gets all 500 messages and processes them one by one; each message takes 20 seconds to complete, and meanwhile the status of all messages in RabbitMQ changes from Ready to Unacked (unacknowledged)
    6) The other 4 instances of the Consumer Pod did NOT get any unprocessed messages from RabbitMQ and stayed idle the whole time

    I need all 5 instances of the Consumer Pod to process all outstanding messages in RabbitMQ in parallel. What configuration did I miss or get wrong?

    13 replies
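    A likely root cause for the scenario above (an assumption, not confirmed in the thread): with no per-consumer prefetch limit, RabbitMQ pushes every ready message to the first consumer that opens a channel, so the other replicas see an empty queue. Capping unacknowledged deliveries per consumer (AMQP basic.qos) makes RabbitMQ distribute work round-robin instead. A sketch of the component, assuming the RabbitMQ binding in your Dapr version exposes a prefetchCount metadata field (check the binding reference; the field name is an assumption for older releases):

    ```yaml
    apiVersion: dapr.io/v1alpha1
    kind: Component
    metadata:
      name: todo-queue
    spec:
      type: bindings.rabbitmq
      metadata:
      - name: queueName
        value: todo
      - name: host
        value: amqp://user:password@rabbitmq:5672
      - name: prefetchCount      # assumed field: maps to AMQP basic.qos per consumer
        value: "1"               # at most 1 unacked message per consumer spreads the work
    ```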