    Laksh1997
    @Laksh1997
    but it seems to crash the server (i.e. all the packages start installing again in the logs)
    and I set the memory to 8G and it still happens - is this expected?
    hi @deliahu I just changed CUDA to cu101
    and it downloaded the new image - but torch.cuda.is_available() is still False
    David Eliahu
    @deliahu
    @Laksh1997 it could be because pip install torch might not give you the GPU version by default
    Try using this in the Dockerfile instead: pip install torch==1.5.0+cu101 torchvision==0.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
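    For example, a quick sanity check inside the container (a minimal sketch; these are standard torch attributes):
        import torch
        # the CUDA-enabled wheel reports its CUDA version; the CPU-only wheel reports None
        print(torch.__version__)          # e.g. "1.5.0+cu101" for the GPU build
        print(torch.version.cuda)         # e.g. "10.1", or None on the CPU-only wheel
        print(torch.cuda.is_available())  # True only when a GPU is visible at runtime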
    Laksh1997
    @Laksh1997
    thanks
    ok trying now
    any idea about the data frame?
    When I change the dataframe to 10,000 x 2, it works fine
    David Eliahu
    @deliahu
    Regarding the crashing, it could be due to out-of-memory, but I'm not sure. Does cortex get show any error info when it fails? Also, does it crash if, right before you serialize, you return a dummy response instead?
    Ok so it sounds like a memory issue then
    Laksh1997
    @Laksh1997
    hmmm
    yeah, and also when I do like 10,000 x 20 it takes much, much longer to return
    David Eliahu
    @deliahu
    Do you have to use JSON? There may be other serialization methods that don't use as much memory. Alternatively, you could try with more memory.
    Laksh1997
    @Laksh1997
    well, I return the df via df.to_dict()
    I don't know what else I could use
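    (For context, a rough sketch of why df.to_dict() plus JSON gets expensive for large frames; the shape below is illustrative only:)
        import json
        import numpy as np
        import pandas as pd

        df = pd.DataFrame(np.random.rand(10_000, 20))
        as_dict = df.to_dict()         # nested dicts: one Python object per cell
        as_json = json.dumps(as_dict)  # every float re-encoded as decimal text
        print(len(as_json))            # typically several MB for this shape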
    David Eliahu
    @deliahu
    is the client that's calling the API written in Python?
    Laksh1997
    @Laksh1997
    yes
    uses requests
    David Eliahu
    @deliahu
    I can't vouch for this article since I only found it just now by Googling, but I would try some of the approaches it lists:
    it's a pretty old article, so there might be better solutions out there now, but it's a good starting point
    Laksh1997
    @Laksh1997
    thanks, will look into it!
    David Eliahu
    @deliahu
    You'll want to change the response type of your API to be bytes
    Laksh1997
    @Laksh1997
    Okay thanks!
    as in change the configuration file?
    David Eliahu
    @deliahu
    no changes need to be made to the configuration file, you can just return the bytes object
    Cortex should set the appropriate headers on the response, but if you need more control over the Content-Type header, you can return a Starlette Response object as shown in one of the examples I linked
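    (For illustration, a minimal sketch of both options, assuming the Python predictor interface of the Cortex version in use; serialize_df is a hypothetical stand-in for whichever serializer you pick:)
        from starlette.responses import Response

        class PythonPredictor:
            def __init__(self, config):
                pass

            def predict(self, payload):
                body = serialize_df(payload)  # hypothetical helper returning bytes
                # returning `body` directly also works; wrapping it in a Starlette
                # Response gives explicit control over the Content-Type header
                return Response(content=body, media_type="application/octet-stream")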
    Laksh1997
    @Laksh1997
    What do you mean?
    I'm going to try messagepack
    David Eliahu
    @deliahu
    You should be able to return the msgpack-serialized object from your predict(), and then read the body (as bytes) using the requests library. I don't think you'll have to make any changes to the headers.
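    (A sketch of that round trip; url and inp below are placeholders:)
        # server side, at the end of predict(): return bytes instead of a dict
        import msgpack
        payload_bytes = msgpack.packb(df.to_dict())  # then `return payload_bytes`

        # client side: read the raw response body and unpack it
        import msgpack
        import requests
        res = requests.post(url, json=inp)  # url/inp stand in for your request
        result = msgpack.unpackb(res.content, strict_map_key=False)  # int row keys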
    Laksh1997
    @Laksh1997
    @deliahu cuda works now thanks!
    so requests.post(**args).body ?
    Laksh1997
    @Laksh1997
    @deliahu The request seems to work but I can't seem to read it on the client side
    In Predictor I'm returning:
        import pyarrow  # imported at the top of the predictor file
        # serialize the DataFrame into plain bytes with pyarrow
        context = pyarrow.default_serialization_context()
        df_bytestring = context.serialize(df).to_buffer().to_pybytes()
        return df_bytestring
    On the client side, I do:
    import pyarrow
    import requests
    context = pyarrow.default_serialization_context()
    res = requests.post(**inp)  # inp holds the request kwargs
    context.deserialize(res.content)
    And I get: OSError: buffer_index out of range.
    However, when I do the serialization and deserialization all on the client side (i.e. in a Python notebook) it works fine
    Laksh1997
    @Laksh1997
    @deliahu when I upgrade to the latest pandas and pyarrow on the client side, it works
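    (That OSError is consistent with a pyarrow version mismatch between the image and the client; a quick check to run on both ends, for reference:)
        import pandas
        import pyarrow
        # pyarrow's legacy serialize()/deserialize() format isn't guaranteed to be
        # stable across versions, so both environments should match
        print(pyarrow.__version__, pandas.__version__)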
    David Eliahu
    @deliahu
    @Laksh1997 sounds good, so is everything now working as you expect?
    Laksh1997
    @Laksh1997
    Yep - everything works!
    Cheers
    David Eliahu
    @deliahu
    :+1:
    Laksh1997
    @Laksh1997
    It's also much faster and handles the 10,000 x 2,000 df easily
    cheers for that!
    David Eliahu
    @deliahu
    awesome, glad to hear it!