These are chat archives for beniz/deepdetect

3rd Jan 2017
Jordan Green
@jordan-green
Jan 03 2017 05:31
Hi all, are there any benchmarks of this project vs Flask-based endpoints, or something like CaffeOnSpark and TF Serving?
Emmanuel Benazera
@beniz
Jan 03 2017 05:33
@jordan-green not that I know of, feel free to report. Though I'm not sure what you are considering, training or prediction?
Jordan Green
@jordan-green
Jan 03 2017 05:51
both training and inference are of interest :)
Emmanuel Benazera
@beniz
Jan 03 2017 05:55
Training time is orders of magnitude longer than server call overhead, so it shouldn't matter there. Inference time depends on many factors, from network FLOPs to CPU/GPU hardware and batch size, so in practice server time (if this is what you are asking about) is not really the issue. That being said, it's a multithreaded C++ server based on cpp-netlib, and you can time a dummy /info call to study an almost pure HTTP GET response time.
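For example, here is a minimal sketch of such a timing run; it assumes a DeepDetect server already listening on the default localhost:8080 (the port is an assumption, adjust to your setup):
```python
# Minimal sketch: time repeated GET /info calls against a DeepDetect
# server, assumed to be running on localhost:8080 (the default port).
import time
import urllib.request

URL = "http://localhost:8080/info"  # /info does no model work, so this
                                    # is close to pure HTTP GET overhead
N = 100
latencies = []
for _ in range(N):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    latencies.append(time.perf_counter() - start)

latencies.sort()
print(f"min    {latencies[0] * 1000:.2f} ms")
print(f"median {latencies[N // 2] * 1000:.2f} ms")
print(f"max    {latencies[-1] * 1000:.2f} ms")
```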