These are chat archives for beniz/deepdetect

8th Sep 2016
Ardalan
@ArdalanM
Sep 08 2016 13:42
Hi beniz, I am looking for the proper way to do back-to-back training calls on the same model, i.e. multiple "fit" calls. How should I proceed?
Emmanuel Benazera
@beniz
Sep 08 2016 13:44
I don't understand... cross-validation?
Ardalan
@ArdalanM
Sep 08 2016 13:45
Even simpler: I just want to be able to call dd.post_train() twice.
Emmanuel Benazera
@beniz
Sep 08 2016 13:47
Resuming from a previous training call?
Ardalan
@ArdalanM
Sep 08 2016 13:49
Nope, the first training call (dd.post_train()) is finished, but I want to execute another dd.post_train() so that the second training call starts from the saved model parameters.
Just like scikit-learn, where we can do multiple .fit() calls back to back.
Emmanuel Benazera
@beniz
Sep 08 2016 13:50
the saved model weights? use resume:true
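For context, here is a minimal sketch of what two back-to-back training calls with resume might look like against the DeepDetect /train endpoint, assuming a CSV-based service. The server URL, service name, data path, and solver settings are placeholders, not taken from this conversation.

```python
import requests

DD_URL = "http://localhost:8080"      # assumed DeepDetect server address
SERVICE = "myserv"                    # hypothetical service name
DATA = ["/path/to/train.csv"]         # placeholder training data

def train(resume=False):
    """Launch a blocking training call; resume=True restarts from the saved state."""
    body = {
        "service": SERVICE,
        "async": False,
        "parameters": {
            "input": {"shuffle": True},
            "mllib": {
                "gpu": False,
                "resume": resume,               # second call picks up the saved weights / solver state
                "solver": {"iterations": 1000}, # placeholder solver settings
            },
            "output": {"measure": ["acc"]},
        },
        "data": DATA,
    }
    return requests.post(DD_URL + "/train", json=body).json()

train()             # first training call
train(resume=True)  # second call resumes from the previously saved model
```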
Ardalan
@ArdalanM
Sep 08 2016 14:18
That resolved my issue, thanks.
Emmanuel Benazera
@beniz
Sep 08 2016 14:34
Ok
Ardalan
@ArdalanM
Sep 08 2016 14:47
I am facing a core dump when I try to do dd.post_train(); here is the server error: *** Error in `./main/dede': free(): invalid size: 0x00007fcdacf54cd0 ***
but it was working a couple of minutes ago :)
Service creation is fine though
Emmanuel Benazera
@beniz
Sep 08 2016 14:49
yeah, you'll need to post all API calls, etc...
Ardalan
@ArdalanM
Sep 08 2016 15:00
I might have found what caused the core dump: setting layers:[2].
Setting layers:[3] or more works, but layers:[2] leads to a core dump. I wanted a small number of neurons to emulate logistic-regression behavior.
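As a rough sketch of the kind of service definition being described, assuming the caffe "mlp" template over a CSV connector; the service name, repository path, and nclasses below are placeholders. The layers value is the parameter reported to trigger the crash.

```python
import requests

DD_URL = "http://localhost:8080"   # assumed DeepDetect server address
SERVICE = "myserv"                 # hypothetical service name

# Service creation with the "mlp" template; "layers" lists the hidden-layer sizes.
# As reported above, layers=[2] triggered the crash while layers=[3] did not.
body = {
    "mllib": "caffe",
    "description": "mlp with a tiny hidden layer",
    "type": "supervised",
    "parameters": {
        "input": {"connector": "csv"},
        "mllib": {"template": "mlp", "nclasses": 2,   # nclasses is a placeholder
                  "layers": [2], "activation": "relu"},
    },
    "model": {"repository": "/path/to/model/repo"},   # placeholder repository
}
print(requests.put(DD_URL + "/services/" + SERVICE, json=body).json())
```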
Ardalan
@ArdalanM
Sep 08 2016 15:06
Should I open an issue?
Emmanuel Benazera
@beniz
Sep 08 2016 15:15
You should use template:lregression for logistic regression. You can open an issue anyway, since the server should never crash; it should either execute the request or return an error. Thanks!
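A minimal sketch of the suggested alternative, creating the service with template:lregression instead of an mlp with a very small hidden layer; again, the service name, repository path, and nclasses are placeholders.

```python
import requests

DD_URL = "http://localhost:8080"   # assumed DeepDetect server address

# Logistic regression through the dedicated template.
body = {
    "mllib": "caffe",
    "description": "logistic regression",
    "type": "supervised",
    "parameters": {
        "input": {"connector": "csv"},
        "mllib": {"template": "lregression", "nclasses": 2},  # nclasses is a placeholder
    },
    "model": {"repository": "/path/to/model/repo"},            # placeholder repository
}
print(requests.put(DD_URL + "/services/lregserv", json=body).json())
```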
Ardalan
@ArdalanM
Sep 08 2016 15:19
I am using lregression as well, but I wanted to toy with the MLP. I'll open an issue, thanks.