    oscollabus
    @oshoval

    @oshoval Stop (https://godoc.org/google.golang.org/grpc#Server.Stop) force closes all RPCs. But GracefulStop (https://godoc.org/google.golang.org/grpc#Server.GracefulStop) waits for existing RPCs to finish.

    Thanks Menghan. As long as they all use the context, the logic added by the user can even create goroutines and such, right?
    (And GracefulStop would wait until the whole handling is finished, if the chain is guarded.)
    In other words, is there any document that spells out what limitations the user's handler code should obey, please?

    We had a bug where serveStreams wasn't waited on by Stop. I proposed using GracefulStop because it does wait for s.conns to become empty; Stop doesn't wait, because serveStreams runs in a goroutine which isn't waited for and just removes s.conn when done.

    Just wondering if there is a limitation on the logic that we add in our RPC handlers (i.e. don't spawn orphan goroutines that we want to wait for).

    Menghan Li
    @menghanl
    GracefulStop() doesn't add limitations to how streams can be used. Your spawned goroutines can use the stream as normal. The server waits for the service handler to return.
    1 reply
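
    For reference, a minimal grpc-go sketch of that difference; the pb package, greeterServer type, and shutdown channel are hypothetical:

    // Sketch: GracefulStop waits for in-flight handlers to return,
    // while Stop would force-close all active RPCs.
    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"
    )

    func main() {
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatal(err)
        }
        s := grpc.NewServer()
        pb.RegisterGreeterServer(s, &greeterServer{}) // handlers may spawn goroutines that use the stream
        go func() {
            <-shutdown       // hypothetical channel, e.g. fed by signal.Notify
            s.GracefulStop() // blocks until existing RPC handlers have returned
            // s.Stop()      // would instead force-close all in-flight RPCs
        }()
        if err := s.Serve(lis); err != nil {
            log.Fatal(err)
        }
    }
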
    Alexander Scammon
    @stackedsax
    Can anyone suggest who or where to ask to get a CI label put on protocolbuffers/protobuf#7470?
    Gus Narea
    @gnarea
    Hi there. Has anyone created a Docker image with an HTTP proxy for the gRPC Health Check protocol?
    I'm trying to deploy a gRPC service to GCP, but the load balancer can only do HTTP/1-2 probes, and according to GCP staff the best solution seems to be creating a simple HTTP proxy for my gRPC service,
    which returns 200 OK when a gRPC call works.
    I couldn't find any image on hub.docker.com, but maybe I missed it or I'm just using the wrong keywords.
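
    A hedged sketch of such a proxy in Go, assuming the backend implements the standard grpc.health.v1 Health service; the addresses, port, and path are illustrative:

    package main

    import (
        "context"
        "log"
        "net/http"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // Dial the gRPC backend once; the HTTP handler reuses the connection.
        conn, err := grpc.Dial("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        health := healthpb.NewHealthClient(conn)

        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
            defer cancel()
            resp, err := health.Check(ctx, &healthpb.HealthCheckRequest{Service: ""})
            if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
                http.Error(w, "unhealthy", http.StatusServiceUnavailable)
                return
            }
            w.WriteHeader(http.StatusOK) // 200 OK when the gRPC health check succeeds
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }
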
    Alexander Scammon
    @stackedsax
    I wish I could answer my own question :)
    matrixbot
    @matrixbot
    xionbox Have many transitioned from protobufs to flatbuffers in a gRPC service? And if so, what was your experience?
    Bob Reselman
    @reselbob
    I am spending a lot of time researching companies that are using gRPC for mission-critical, production work. It seems that there just aren't that many. Is my perception accurate?
    matrixbot
    @matrixbot
    xionbox Bob Reselman (Gitter): google uses it to run their data centers around the world and several space companies use it internally to run their spacecraft operation centers (I know of two separate ones). I'd argue that's mission critical
    Ronak Mehta
    @ronak521_twitter
    I'm very new to gRPC. Can anyone suggest any good resources for learning gRPC in Node for an Angular web application? I just require more documentation or more examples.
    cyberco
    @cyberco

    Hi, everybody. I've been struggling with passing a JSON object with Message values back and forth between systems.

    message Result {
        struct/value/any? variables = 1;
    }
    
    // Where variables would contain a (1 level deep) JSON with different types of values, e.g.:
    {
        "key1": 1,
        "key2": true,
        "key3": proto.MessageA
    }

    The Result is passed from a web browser (grpc-web) to a Python backend. The Python backend should serialize Result to JSON and back again.

    I've tried Struct for the variables field, but there are problems with turning a Message into proper JSON with grpc-web. I would really like to use proto messages in variables. The reason for picking protobuf/gRPC was exactly this: being able to use the same types throughout our complete platform, but this seems to be blocking that goal. Did I miss something? What would you do?

    6 replies
    cyberco
    @cyberco

    Update: Got a bit further but it still doesn't work.

    // proto
    message Request {
        google.protobuf.Struct variables = 1;
    }
    
    // obj - Where variables would contain a (1 level deep) JSON with different types of values, e.g.:
    {
        "key1": 1,
        "key2": true,
        "key3": proto.MessageA
    }
    
    // code
    struct = new proto.google.protobuf.Struct(obj);
    req = new Request;
    req.variables = struct;

    Checking req.variables before sending it shows that it's indeed a Struct with all the correct fields in it. But once the other end (the server) receives it, req.variables is an empty Struct. For testing purposes I tried an obj that is simply {'key': 'value'}, but the result was the same.

    I'm sending the req from a browser using grpc-web and receiving it in a Python gRPC server.

    cyberco
    @cyberco

    I just found out about proto.google.protobuf.Struct.fromJavaScript and tried it:

    // code
    struct = proto.google.protobuf.Struct.fromJavaScript(vars);
    req = new Request;
    req.variables = struct;

    This works for a simple obj (e.g. {"key": "val"}), but for an obj with a proto message field (such as above) it resulted in:

    struct_pb.js:875 Uncaught Error: Unexpected struct type.
        at Function.proto.google.protobuf.Value.fromJavaScript (struct_pb.js:875)
        at Function.proto.google.protobuf.Struct.fromJavaScript (struct_pb.js:941)
        at Function.proto.google.protobuf.Value.fromJavaScript (struct_pb.js:871)
        at Function.proto.google.protobuf.Struct.fromJavaScript (struct_pb.js:941)
        at Function.proto.google.protobuf.Value.fromJavaScript (struct_pb.js:871)
        at Function.proto.google.protobuf.Struct.fromJavaScript (struct_pb.js:941)
        at Function.proto.google.protobuf.Value.fromJavaScript (struct_pb.js:871)
        at Function.proto.google.protobuf.Struct.fromJavaScript (struct_pb.js:941)
    1 reply
    cyberco
    @cyberco

    Or can I, instead of going through all the trouble with protobuf/JSON in JavaScript, just use a map?

    // proto
    message Request {
        map<string, ?type?> variables = 1;
    }

    But what would ?type? then be if the values can be anything (proto.MessageX, string, boolean, etc.)?
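
    For what it's worth, when the map values can be arbitrary messages, google.protobuf.Any is the usual candidate for the value type, and the well-known wrapper types can carry scalars; Any is also available in the Python and JavaScript protobuf runtimes. A hedged sketch of the packing side, shown in Go purely for illustration (field and message names are hypothetical):

    // Assuming a (hypothetical) proto field:
    //   map<string, google.protobuf.Any> variables = 1;
    import (
        "google.golang.org/protobuf/types/known/anypb"
        "google.golang.org/protobuf/types/known/wrapperspb"
    )

    func packVariables(msgA *pb.MessageA) (map[string]*anypb.Any, error) {
        vars := map[string]*anypb.Any{}

        num, err := anypb.New(wrapperspb.Double(1)) // "key1": 1
        if err != nil {
            return nil, err
        }
        vars["key1"] = num

        flag, err := anypb.New(wrapperspb.Bool(true)) // "key2": true
        if err != nil {
            return nil, err
        }
        vars["key2"] = flag

        wrapped, err := anypb.New(msgA) // "key3": proto.MessageA
        if err != nil {
            return nil, err
        }
        vars["key3"] = wrapped

        return vars, nil
    }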

    cyberco
    @cyberco
    One advantage is that the receiving end knows all the proto definitions it might receive.
    Gautam
    @gautam1858
    Parallel-File-Sharing, github link https://github.com/gautam1858/Parallel-File-Sharing, please star if you find it interesting
    Alexander Scammon
    @stackedsax
    @gautam1858 created a PR to fix a couple of spelling mistakes on the README -- at least, I assumed they were mistakes
    Nikos Skalis
    @nskalis
    hi, I am quite new to gRPC, and I have a question to ask: given a third-party gRPC server that publishes its .proto files, is it possible to use a regular HTTP/2 client (instead of a generated stub) to query the gRPC service?
    6 replies
    Keyvhinng
    @keyvhinng
    Hi there, is it possible to get the IP of the client (a Node.js client) when the server receives the call?
    7 replies
    David Bell
    @dastbe

    hey all, have a few questions about the scenarios under which certain status codes would be generated by libraries.

    To start w/, would "cancelled" only ever be generated client-side automatically when a deadline is exceeded? Would a server-side framework ever automatically send a "cancelled" response back to a client?

    2 replies
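
    In grpc-go at least, an expired deadline and an explicit client-side cancellation surface as two different codes; a small hedged sketch (the pb package and Greeter client are hypothetical):

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // Cancelling the context on the client yields codes.Canceled,
    // while letting a deadline expire yields codes.DeadlineExceeded.
    func callAndCancel(client pb.GreeterClient) {
        ctx, cancel := context.WithCancel(context.Background())
        go func() {
            time.Sleep(10 * time.Millisecond)
            cancel() // explicit client-side cancellation
        }()
        _, err := client.SayHello(ctx, &pb.HelloRequest{Name: "x"})
        switch status.Code(err) {
        case codes.Canceled:
            log.Println("RPC was cancelled by the client")
        case codes.DeadlineExceeded:
            log.Println("deadline expired before the RPC finished")
        }
    }
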
    adiplus
    @adiplus
    hi all, I have a very strange issue with gRPC and TypeScript (using proto-loader and grpc to load a proto file). I do a simple unary call, but for some strange reason I get no data back in the TypeScript version. client_interceptor.js's callback response has an empty read array (Uint8Array) and I have absolutely no clue why.
    The same proto file works in a simple Node project, but the same call fails with compiled TypeScript. Any ideas? I'm trying to understand where the array buffer is set, but debugging compiled code is no fun at all lol
    5 replies
    yoni122
    @yoni122
    Hi,
    I am looking for an example of how to use the new python asyncio server.
    Is there a need to pass some special flags to grpc_tools.protoc?
    Richard Belleville
    @gnossen
    @yoni122 Nope. It should work right out of the box.
    Examples are being worked on, but there are some unit tests you can look at: https://github.com/grpc/grpc/blob/master/src/python/grpcio_tests/tests_aio/unit/_test_server.py
    2 replies
    HelloWood
    @helloworlde
    Hi,
    I'm using the grpc-java retry function, but I couldn't find where the retry logic is implemented. Is it in the LoadBalancer? Could someone tell me which class performs the retry after a failure? Thank you very much.
    2 replies
    Blake Willoughby
    @byblakeorriver
    When is it safe to use a blocking stub versus a future stub?
    2 replies
    Yannick Koechlin
    @yannick
    hi!
    I have a weird effect where my Dart client always gets a disconnect/reset before headers from our load balancer with a Let's Encrypt cert; the Go client works fine.
    anyone seen something like that before?
    matrixbot
    @matrixbot
    xionbox Does anyone use flatbuffers instead of protobuffers for their gRPC implementation?
    HelloWood
    @helloworlde
    Hi, right now I can only trace sync methods with Arthas, but async methods can't be traced, so I can't tell which method costs the most time. Is there a tool to trace how long every method takes, including async ones?
    hawk01
    @hawk01_gitlab
    hey there,
    in grpc-web, if the Envoy proxy is not attached to the host it does not work. How do I use it in Kubernetes? If it only works on the host network, should the frontend proxy and the backend be in a single pod?
    5 replies
    shimmy
    @lilshim
    @ejona86 is the reason gRPC doesn't allow you to access the actual request directly within interceptCall that, for streaming RPCs, the server may receive more than one message for a single RPC call (potentially lots more)?
    1 reply
    Dexter Bradshaw
    @DexterB
    Hello, I need some help. I am doing some gRPC C++ development and was wondering if anyone is using C++ smart pointers with arena allocation. In particular, what happens on destroy if I use an arena-allocated raw pointer to create a std::unique_ptr object?
    Sreenidhy Sreepathy
    @sreenidhy_twitter
    Hi!
    I have a gRPC-java service, written as a Spring Boot app, that serves about 20 different RPCs. Many of them are server-streaming RPCs. Overall, the response time is anywhere between 150 ms and 1 s, varying by RPC. The primary client of this gRPC server is a fleet of about 5000 client processes, spread across different data centers and geographic regions.
    Sometimes, due to a bug on the client, it can happen that all 5000 clients end up making about 10 different RPCs at almost the same time. This has caused the gRPC server to get into a state where it is not able to accept incoming requests beyond a point (clients start seeing io.grpc.StatusRuntimeException: UNKNOWN: channel closed). And since the clients have some back-off logic, they end up making these calls again once they have seen channel closed. This eventually leads to a cascading failure, because the gRPC server is inundated with requests (a friendly DDoS) and other clients (not the 5000 primary ones) start seeing issues connecting to the server.
    I understand this is a classic use case for circuit breakers and rate limiters. What I'd like to know from the gRPC viewpoint is: what's causing the connections to be rejected? My guess is that somehow the Netty queue/buffer is getting filled up and then requests (maybe responses too) are getting dropped. Is there a way to validate this by enabling some detailed logs? Is it possible to somehow monitor this buffer and emit metrics so that I can be alerted? Is the buffer size configurable by the application, or is it a low-level gRPC thing which cannot be configured?
    @ejona86 or anyone who has experience dealing with gRPC-java servers at scale?
    43 replies
    shimmy
    @lilshim
    @ejona86 if I have a service that parses arbitrary requests (requests unrelated to the service's responsibilities), e.g. message_of_any_type.unpack(Class.forName(message_of_any_type.getClass().getTypeName())), will that service need to have all the proto dependencies of every type of request proto that it parses?
    1 reply
    Dexter Bradshaw
    @DexterB

    @ejona86 Do you know anyone who can help me with this?

    Hello, I need some help. I am doing some gRPC C++ development and was wondering if anyone is using C++ smart pointers with arena allocation. In particular, what happens on destroy if I use an arena-allocated raw pointer to create a std::unique_ptr object?

    shimmy
    @lilshim
    Is there a way to save only part of a protobuf message (due to a size limit)? If not, I can always just convert the message to JSON and save part of that to fit within the size limit.
    3 replies
    Yura
    @durkmurder
    Some time ago I created this issue: grpc/grpc#22724
    I am really waiting for async support with callbacks, without manually controlling dispatch as it is now. Any chance that it's on the roadmap?
    Mike
    @unlikemikeshmay
    Hello, I am very new to gRPC and am wondering how to connect a server and client that are not in the same directory. My server will be hosted and written in Go and my client will be an Android Kotlin application. Would I just have to generate code from the same proto file for both, or how else would they share methods?
    1 reply
    All the examples I read have the client and server sharing directories and a single proto file.
    Yura
    @durkmurder
    The server defines the proto file where the API is described. The client needs access to that file to generate sources in the target language. Then it's just a matter of connecting the client to the server over the right communication channel (the simplest way is an insecure connection, specifying host and port).
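
    A minimal Go sketch of that client side (the Android app would do the equivalent with a grpc-java/grpc-kotlin ManagedChannel); the host, port, and Greeter stub are hypothetical:

    import (
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    // The client code is generated from the same .proto the server publishes;
    // after that, connecting is just a matter of the server's address.
    func main() {
        conn, err := grpc.Dial("api.example.com:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        client := pb.NewGreeterClient(conn) // hypothetical generated stub
        _ = client                          // call RPCs on the stub as needed
    }
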
    Jake Arkinstall, PhD
    @JakeArkinstall_twitter

    Hi all, I'm new to gRPC and microservices in general, and I'm looking for some advice. I'm migrating from a monolithic low-latency system. The aim of creating the microservices is not one of replication, load balancing etc, but of separation of concerns and configuration flexibility (e.g. being able to swap out components). Each 'run' involves launching the microservices, launching the controllers, and running the master controller, then killing everything at the end.

    I have a main controller M that connects to a child-controller A, which runs a data pipeline through other microservices a1 -> a2 -> a3 -> a4. All of these are stateful, in that they need to be aware of their own history (e.g. for detecting changes, sampling, etc.). Note that a1...a4 is a simplification: I have multiple of these child-controllers, each controlling a different number of microservices.

    My current approach is

    Request : M  -> A -> a4 -> a3 -> a2 -> a1
    Response: M  <- A <- a4 <- a3 <- a2 <- a1

    I'm finding that this isn't great for throughput due to the requests and the responses being passed along the chain. It'd be fine if I could do it in batch, but I need the result from A as soon as possible for the next stage in the pipeline. When I run these in batch (i.e. collecting results into a repeated field and flushing every 10k of them) it runs almost as fast as the monolithic version (3.9s for the monolithic, 4.1s for the batch microservices). When I run them one-by-one, more closely matching the real-life implementation requirements, it takes 10-100x that.

    One solution would be to remove the middle-men and do:

    Request:   M -> A ----------------------> a1
    Response: M <- A <- a4 <- a3 <- a2 <- a1

    Is it at all possible to do that, bearing in mind that the A that receives the response from a4 is the same A that makes the initial request to a1? I.e. by passing direct connection details down the chain, or something along those lines? All of these components are in C++, using the gRPC C++ API.

    11 replies
    Abhijit
    @abhijit8234
    Do we really need gRPC in a web application, mainly used in the browser?
    1 reply
    Jonas Beyer
    @jonasbeyer

    Is there someone who can take a look at this issue in grpc-dart? grpc/grpc-dart#135

    It would be really helpful to have access to the response trailers like in grpc-java.

    Alex Kaszynski
    @akaszynski

    Hi all,

    I'm streaming large arrays over gRPC using a message defined as:

    message Chunk{
      bytes payload = 1;
    }

    On our server we serialize the array with:

    inline void StreamChunks(ServerWriter<Chunk>* writer, int chunk_size,
                 const void* array, int n_bytes){
      const char* bytes = (const char*) array;
    
      Chunk chunk;
      for (int c = 0; c < n_bytes; c += chunk_size) {
        if (c + chunk_size > n_bytes) { // last chunk, send up to the end of the array
          chunk.set_payload(&bytes[c], n_bytes - c);
          writer->Write(chunk);
        }
        else {  // Send max chunk size (chunk_size)
          chunk.set_payload(&bytes[c], chunk_size);
          writer->Write(chunk);
        }
      }
    }

    And deserialize it on the client with:

        // size the array
        reader->WaitForInitialMetadata();
        const int array_size = GetInitialMetadataValue<int>(&context, "size", 0);
        int *array = new int[array_size];
        char *rawarray = (char*)array;
    
        // Read data from the server in chunks
        Chunk chunk;
        while (reader->Read(&chunk)) {
          const std::string payload = chunk.payload();
          memcpy(rawarray, payload.data(), payload.size());
          rawarray += payload.size();
        }

    We're getting great performance on both Windows and Linux (around 1 GB/s) compared to repeated messages (anywhere between 100 and 400 MB/s). But I'd like to get even closer to memcpy speeds for IPC.

    Is there a way to optimize our client/server model to improve performance? I've looked at gRPC arena allocation, but it seems to be suited to messages rather than streams. I've also looked into FlatBuffers, but it's out of sync with gRPC and won't work with 1.25 (which is a hard requirement).

    shimmy
    @lilshim
    Is there an easy way to check the size of headers/metadata the client is sending out?
    HelloWood
    @helloworlde
    Hi guys, what is the meaning of the drained status of a substream in grpc-java? I'm Chinese, and the translation I get is "exhausted"; is that correct?