    Jade Koskela
    Also, on an unrelated issue: AFAIK I am getting "stream closed" errors that seem to be initiated by the C++ client, but I can't find the reason. The client sends RST_STREAM, but I didn't cancel anything. I don't see any errors in ProcMon. This is not causing as many problems as the issue above, though. The error above is on Linux; this one is on Windows.
    I1001 14:27:10.299000000 33600 chttp2_transport.cc:852] W:00000226962FC000 CLIENT state WRITING -> IDLE [finish writing]
    I1001 14:27:10.299000000 33600 stream_lists.cc:71] 00000226962FC000[61][cli]: pop from writing
    I1001 14:27:10.299000000 33600 call_combiner.cc:69] ==> grpc_call_combiner_start() [0000022699A33C00] closure=0000022699A34B30 [on_complete] error="No Error"
    I1001 14:27:10.299000000 33600 call_combiner.cc:78]   size: 0 -> 1
    I1001 14:27:10.300000000 33600 call_combiner.cc:87]   EXECUTING IMMEDIATELY
    I1001 14:27:10.300000000 33600 call_combiner.cc:106] ==> grpc_call_combiner_stop() [0000022699A33C00] [on_complete]
    I1001 14:27:10.300000000 33600 call_combiner.cc:113]   size: 1 -> 0
    I1001 14:27:10.300000000 33600 call_combiner.cc:141]   queue empty
    I1001 14:27:10.300000000 33600 completion_queue.cc:1206] grpc_completion_queue_pluck(cq=0000022698A2B5B0, tag=000000BFB45F8FB0, deadline=gpr_timespec { tv_sec: 9223372036854775807, tv_nsec: 0, clock_type: 1 }, reserved=0000000000000000)
    I1001 14:27:10.300000000 33600 tcp_windows.cc:190] TCP:0000022693D09FB0 on_read
    I1001 14:27:10.300000000 33600 chttp2_transport.cc:852] W:00000226962FC000 CLIENT state IDLE -> WRITING [RST_STREAM] // but why?
    I1001 14:27:10.300000000 33600 call_combiner.cc:69] ==> grpc_call_combiner_start() [0000022699A33C00] closure=0000022699A34CD0 [recv_message_ready] error="No Error"
    I1001 14:27:10.300000000 33600 call_combiner.cc:78]   size: 0 -> 1
    I1001 14:27:10.300000000 33600 call_combiner.cc:87]   EXECUTING IMMEDIATELY
    I1001 14:27:10.300000000 33600 call_combiner.cc:69] ==> grpc_call_combiner_start() [0000022699A33C00] closure=0000022699A34E68 [recv_trailing_metadata_ready] error="No Error"
    Doug Fawley
    @nibbleshift_twitter the grpc-go release does not include the code generator. The version you are running is a pre-GA version. We did our first 1.x release last week. You should use that via go get google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.0
    Guillaume Delbergue
    Hi all. What's the right way to decode a grpc-web payload directly from an HTTP response body? Using the deserializeBinary function with a base64-decoded response body triggers an AssertionError in "google-protobuf". (Not sure this is the right channel to find help on grpc-web.)
    4 replies
    Konstantin Morozov
    The async server from the documentation does not build:
    usr/bin/ld: CMakeFiles/greeter_server.dir/server.cpp.o: undefined reference to symbol 'gpr_log'.
    //usr/local/lib/libgpr.so.12: error adding: DSO missing from command line
    What could be wrong?
    I'm building with -lpthread.
    1 reply
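
    A likely fix, judging from the "DSO missing from command line" message: libgpr has to be named explicitly on the link line, after the objects that reference it. A sketch of a manual link command (library set and paths vary by installation; with CMake, linking the imported gRPC::grpc++ target normally pulls in gpr transitively):

```shell
# Library order matters with GNU ld: libraries go after the objects
# that reference them, and -lgpr must appear explicitly when the
# linker does not resolve transitive shared-library dependencies.
g++ server.cpp -o greeter_server \
    -lgrpc++ -lgrpc -lgpr -lprotobuf -lpthread
```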
    Vineeth Sagar
    What's the preferred method to cite gRPC in publications?
    Is it possible to configure my own SubchannelPicker without rewriting the whole LB policy? The LB policy is the same but the pick logic is different; right now I have to copy the full LB policy as a new one, which feels unnecessary.
    3 replies
    Nejc Galof
    Hello. Is there an option in C++ to get the status of clients on the server? That is, information on when a client connects to the server and when it disconnects?
    1 reply
    Jayanth Inakollu

    Hi, will building a new channel on the client trigger DNS resolution for the same target address? Or is DNS caching applied at a higher/global level? I'm trying to use a headless service for load balancing and stumbled upon the comment below in one of the Google Groups threads.

    If you use "headless" services in k8s on the other hand, the DNS query should return multiple pod IPs. This causes the gRPC client to in turn maintain multiple connections, one to each destination pod. However, I am not sure how responsive this will be to topology changes (as pod instances are auto-scaled, or killed and rescheduled, they will move around and change IP address). It would require disabling DNS caching and making sure the service host name is resolved/polled regularly

    2 replies
    Can we use multiple stubs / a new stub per request with one channel (which will be instantiated when the service comes up, with the Channel as a Spring singleton bean)? Does this approach hamper performance in any way? I'm using a blocking stub as of now.
    2 replies
    Hello! Why do I have to add "gpr" to my compilation dependencies in some cases?
    Otherwise I can't find the symbol: Undefined symbols for architecture x86_64: "_gpr_log"
    But on some OSes it seems unnecessary for the same code (on macOS and Arch Linux I have to add this dependency, but on CentOS/Ubuntu it's fine without it). Is that related to the grpc version?
    1 reply

    Hey, I'm currently attempting to build the google-cloud-cpp, with an already built copy of grpc. The packaging system I'm using (by internal convention) builds shared libraries, exporting them for use in downstream programs.

    However, a downstream program crashes when loading both the grpc and google-cloud-cpp shared libraries in the same process, when the linker calls the init functions.

    It seems that this crash is due to an ODR violation related to the status.proto file that is present in both the cpp sdk and grpc++. Specifically, it seems that google::rpc::Status is defined twice, in two copies of the generated status.pb.cc in both projects, and the grpc++ code is trying to re-initialize parts of the google-cloud-sdk's version.

    2 replies
    Serena Xu


    I am new to Go and I am trying to create a grpc-gateway for my gRPC hello world example. I am following this tutorial: https://grpc-ecosystem.github.io/grpc-gateway/docs/usage.html.

    My folder structure looks like this:

        - hello
          - client.py
          - server.py
          - hello_pb2.py
          - hello_pb2_grpc.py
        - proto
          - hello
            - hello.proto
            - hello.pb.go
            - hello.pb.gw.go
          - hello_server.go

    I have an issue running the entry point of the proxy server as in step 6. I get the error:

    package hello-world-python/hello is not in GOROOT (/usr/local/Cellar/go/1.14.3/libexec/src/hello-world-python/hello). Could anyone help with this error?
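
    That GOROOT error usually means the project is being built in GOPATH mode without living under GOPATH. With Go 1.14, initializing a module at the repository root makes the hello-world-python/hello import path resolvable. A sketch, assuming the module name from the error message and the folder structure above:

```shell
# Run from the repository root so generated code resolves by module path.
go mod init hello-world-python
go mod tidy      # fetches grpc-gateway and other dependencies
go run proto/hello_server.go
```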

    Jayanth Inakollu
    Hi, Is there a recommended way to identify whether a channel is bad and shut it down in grpc-java?
    24 replies
    When using grpc-java, the first call to the server takes a long time: 1690 ms for the first call, while later calls only need 10 ms. So there are many slow responses after an application restart. How can I optimize this? Is there a way to warm up and create all connections when the application starts? I tried io.grpc.ManagedChannel#getState, and the time decreased, but it's still many times longer than normal.
    5 replies
    Shubham Singhal
    Hey folks! I am a grad student at UIUC. As part of a semester project, we are trying to replace TCP with QUIC in CockroachDB: https://github.com/cockroachdb/cockroach. Cockroach uses gRPC for communication between the nodes. From my understanding, gRPC serves traffic over HTTP/2, which runs on TCP. From https://grpc.io/blog/grpc-stacks/ it seems that we can swap out components, at least for gRPC-Java, to use QUIC. Is that possible for gRPC-Go (which Cockroach uses)? If so, how could we go about doing it?
    6 replies
    Paul Harrison

    I have a C++ server with reflection turned on, but it does not seem to be able to give full information to grpc_cli, e.g.

    $ grpc_cli ls localhost:9090


    $ grpc_cli ls localhost:9090 jboproto.otcz.OTCZControl 
    Service or method jboproto.otcz.OTCZControl not found.

    however, I can query information about types

    $ grpc_cli type localhost:9090 jboproto.core.TelescopeID
    message TelescopeID {
      oneof id {
        string name = 1;
        string abbrev = 2;
        uint32 number = 3;

    Does anyone know what is wrong?

    2 replies

    Good morning/afternoon/evening everyone.

    I'm having a pretty odd issue with PECL gRPC v1.30 and up (but not v1.29.1 and below) and I would love to get some input or pointers. Since v1.30 I'm getting the following assert error:

    [PHP-FPM    ] Oct 15 13:42:48 |INFO   | REQUES Matched route "app_overview_list". method="GET" request_uri="" route="app_overview_list" route_parameters={"_controller":"App\\Controller\\OverviewController::list","_route":"app_overview_list","entity":"personal"}
    [PHP-FPM    ] Oct 15 13:42:48 |INFO   | SECURI Guard authentication successful! authenticator="App\\Security\\StubAuthenticator" token={"Symfony\\Component\\Security\\Guard\\Token\\PostAuthenticationGuardToken":"PostAuthenticationGuardToken(user=\"Foo Bar\", authenticated=true, roles=\"ROLE_FOOBAR\")"}
    [PHP-FPM    ] E1015 15:42:48.435412000 4602330560 ssl_transport_security.cc:473]     assertion failed: (int)peer->property_count == current_insert_index
    [Web Server ] Oct 15 15:42:48 |INFO   | SERVER GET  (200) /favicon.ico ip=""
    [Web Server ] Oct 15 15:42:48 |ERROR  | SERVER issue with server callback error="unable to fetch the response from the backend: unexpected EOF"

    This assert is thrown by the following line: https://github.com/grpc/grpc/blob/master/src/core/tsi/ssl_transport_security.cc#L473

    On the client side (PHP) I use the following to connect to the server (through NGINX with a GRPC pass):

    return new AccountAPIClient($host, [
        'credentials' => Grpc\ChannelCredentials::createSsl(\file_get_contents($caFilePath)),

    On the server side (Golang) we use the following:

    // Set TLS options
        ClientAuth:   tls.RequireAndVerifyClientCert,
        Certificates: []tls.Certificate{certificate},
        ClientCAs:    certPool,

    The NGINX config we use (which creates the actual mTLS connection):

    server {
      listen 7790 ssl http2;
      client_max_body_size 15m;
      server_name foo.bar.com;
      ssl_certificate /etc/freeipa/certs/server.crt;
      ssl_certificate_key /etc/freeipa/certs/server.key; 
      location ~ ^/foo.bar.* {
        grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        grpc_ssl_certificate /etc/private/ssl/foobar/foobar-client.crt;
        grpc_ssl_certificate_key /etc/private/ssl/foobar/foobar-client.key;
        grpc_pass grpcs://foobarservers;
        access_log /var/log/nginx/access_foobar_7790.log;
        error_log /var/log/nginx/error_foobar_7790.log;

    Does anyone have any idea why this would happen and what I could do to resolve it? I don't want to stay at v1.29.1 :sweat_smile:

    3 replies
    This does seem like a bug to me since it worked perfectly before and in a new version it just :boom:.

    Hello! Our builds are broken after the release of v2.0.0. Is there a quick fix to rescue our builds?

    Error compiling your-: gateway/build/gopath/src/broker/broker.pb.gw.go:16:2: cannot find package "github.com/grpc-ecosystem/grpc-gateway/v2/runtime

    I need:

    BTW, how do I get access to your Slack channel?
    Why is there almost no documentation on how to use gRPC functions? Every time I need to ask here or search through issues. Is there a plan to complete the documentation?
    Richard Van Camp
    Can anyone confirm whether protoc-gen-validate is compatible with the generated kotlin-grpc classes?
    Richard Van Camp
    Well, it seems to work, although I get a warning about illegal reflective access. Not entirely sure the warning is related.
    A question about gRPC versions: can 1.21 talk to 1.29?

    I am trying to use mTLS in gRPC v1.28.2, I am employing self-signed ECDSA certs with ecparam -name secp384r1 for the cpp helloworld example client and server. However, I see the following error at the server when the client tries to establish a connection with the server -

    E1009 04:42:47.079755845      97 ssl_transport_security.cc:1379] Handshake failed with fatal error SSL_ERROR_SSL: error:1417A0C1:SSL routines:tls_post_process_client_hello:no shared cipher.

    Upon packet inspection I found that the handshake fails just after client hello, the client provides cipher suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (0xc02c) but the server closes the connection.
    I also tried explicitly setting the environment variable GRPC_SSL_CIPHER_SUITES to ECDHE-ECDSA-AES256-GCM-SHA384 at both the server and the client, still getting the same error.
    Does anyone have any idea how to resolve this issue?

    Jacob Janco
    Quick question: are there best practices for simply creating a gRPC client and passing it around your app? Currently I'm abstracting the connection instantiation and passing it into a client for every request rather than passing a client around throughout my app.
    I can't seem to find any information about sub-channel-separator

    Is there any library that can help me write API tests in Golang for an RPC-based API written in Golang?

    How are you all testing APIs built with gRPC?

    Hi, when I use proto3's map<string, google.protobuf.Any> all = 5; with Golang, it always shows me the error: error getting request data: json: cannot unmarshal number into Go value of type map[string]*json.RawMessage

    just follow https://grpc.io/docs/languages/go/quickstart/

    How do I map Any types in proto3? Using map<string, google.protobuf.Any> all = 5;? I get an error with that.
    Christopher Pisz


    How can I check whether the RPC client actually made a successful connection to the RPC server when I create the channel and stub?

    I want to be able to throw an exception or signal if the connection failed. I am not sure what method qualifies as an "operation" to try here. I don't want to make any of the RPC calls that we defined, as they all have effects that will occur... unless the only option is to implement some "Hi, I am here" RPC method, but that seems silly, no?

    void ChatSyncClient::connect(QUrl const& endpoint)
    {
        // TODO - How do we tell if this fails?
        //        The comments say a lame channel will be returned, where all operations fail.
        //        How are we supposed to check right here rather than later?
        //        (One option may be channel->WaitForConnected(deadline), which blocks
        //        until the channel connects or the deadline expires.)
        auto channel = ::grpc::CreateChannel(endpoint.toString().toStdString(),
                                             ::grpc::InsecureChannelCredentials());
        m_stub = std::make_unique<chat_sync::Chat::Stub>(channel);
        emit signalOnConnected();
    }
    Also, is there an example where client and server communicate via a method that uses async and bidirectional streaming?
    My ticket at work wants to keep the connection open and send messages both ways at any time. I see separate async and streaming examples in the grpc repo, but not one that uses both at the same time.
    Hey guys,
    I keep getting this error on Docker running Node grpc:
    E1022 18:31:45.240350610 1 cpu_linux.cc:75] Cannot handle hot-plugged CPUs
    Hello, can anyone tell me how I can gracefully handle the "Address already in use" exception without handling the system signal? I am using C++ grpc; the cpp_version is 1.20.0. I searched but cannot find any useful information.
    Hi, I get a linking issue with absl::lts when building from sources (cpp),
    gRPC v1.33.1,
    on Ubuntu 20.04.
    Am I doing something wrong?
    Simon Bentgsson
    Hi. I'm developing a gRPC server in Node and a client in .NET. I'm working with streams that should be open most of the time, and I've searched for a "larger" project than the examples to get a better understanding of how to ensure that a connection is alive and how to close streams gracefully. Does anyone know of any good resources? (In C# we have to run .NET Framework 4.5, so we use Grpc.Core 1.22.1; an example that isn't in .NET Core would be perfect.)
    João Santos
    Hi there, can someone help me with this error:
    protoc: stdout: . stderr: --grpckt_out: protoc-gen-grpckt: Plugin failed with status code 3221225477.
    This happens when I try to generate kotlin code from proto using Gradle on a Windows machine. Thanks!