These are chat archives for beniz/deepdetect

1st
Nov 2017
rperdon
@rperdon
Nov 01 2017 12:53
I'll read up on this thanks!
rperdon
@rperdon
Nov 01 2017 14:28
I just had something odd break in my extract_layer_nsfw.py scripts. I now get:

input_name = caffe_net.inputs[0]
IndexError: list index out of range

even when running the original extract_layer_nsfw code. For the life of me, the only thing I installed the other day was jq; I tried uninstalling it, to no avail.
rperdon
@rperdon
Nov 01 2017 14:47
I think I traced it to my deploy.prototxt file. I had multiple copies open, so it's possible I saved over the one I was loading into the extract-layer model
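(A quick sanity check for that failure mode, a minimal sketch using only standard pycaffe calls; the file names are placeholders. If the deploy file loses its input definition, net.inputs comes back empty and net.inputs[0] raises exactly that IndexError.)

import caffe

caffe.set_mode_cpu()
# If deploy.prototxt was overwritten and lost its input layer,
# net.inputs will be an empty list and indexing it fails.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)
print(net.inputs)  # expect something like ['data']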
rperdon
@rperdon
Nov 01 2017 15:09
Something on the caffe preprocess and Transformer class caught my eye:

def set_input_scale(self, in_, scale):
    """
    Set the scale of preprocessed inputs s.t. the blob = blob * scale.
    N.B. input_scale is done AFTER mean subtraction and other preprocessing
    while raw_scale is done BEFORE.
    """
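(For concreteness, a minimal sketch of how those two scales interact in caffe.io.Transformer; the 255 and 0.017 values are illustrative only, not taken from either model:)

import numpy as np
import caffe

# Effective order inside preprocess():
# blob = ((transposed/swapped image * raw_scale) - mean) * input_scale
t = caffe.io.Transformer({'data': (1, 3, 224, 224)})
t.set_transpose('data', (2, 0, 1))              # HxWxC -> CxHxW
t.set_raw_scale('data', 255)                    # applied BEFORE mean subtraction
t.set_mean('data', np.array([104, 117, 123]))   # per-channel mean subtraction
t.set_input_scale('data', 0.017)                # applied AFTER mean subtraction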
cchadowitz-pf
@cchadowitz-pf
Nov 01 2017 15:10
did you manage to resolve the extract_layer_nsfw.py error?
rperdon
@rperdon
Nov 01 2017 15:11
I think I overwrote the deploy.prototxt file by accident. I am actually keeping the same model in 3 different directories, where the deploy file is modified for use with deepdetect
Since there is a scale difference in the conv1 layer between deepdetect and the extract/classify codes, I am looking into whether something funky is going on in the order of operations of the transform during the preprocess function.
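(To check that conv1 discrepancy directly, a minimal sketch of reading layer activations after a forward pass; net and blob are assumed to be the already-loaded network and preprocessed image, and the blob name 'conv1' is taken from the deploy file:)

import numpy as np

# Load the preprocessed image into the input blob and run a forward pass.
net.blobs['data'].data[...] = blob[np.newaxis, ...]
net.forward()
print(net.blobs['conv1'].data.ravel()[:10])  # first few conv1 activations, to compare across scripts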
cchadowitz-pf
@cchadowitz-pf
Nov 01 2017 15:13
:+1:
rperdon
@rperdon
Nov 01 2017 15:40
I looked at the section for the image transform and did a quick dump of the numpy array of the image after the transform in example.py. The transformed result does differ from the extract_nsfw post-transform output. While they both use the caffe.io transform functions, it looks as though the parameters differ from application to application.

Pre-processing

class Transformer:
    """
    Transform input for feeding into a Net.

    Note: this is mostly for illustrative purposes and it is likely better
    to define your own input preprocessing routine for your needs.
    """
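(To make that comparison concrete, a minimal sketch of diffing the two preprocessed arrays; the .npy file names are placeholders for arrays dumped right after each script's preprocess() call, e.g. via np.save:)

import numpy as np

a = np.load('example_blob.npy')       # from example.py / classify code
b = np.load('extract_nsfw_blob.npy')  # from extract_layer_nsfw.py
print(a.shape, b.shape)
print('max abs diff:', np.abs(a - b).max())
print('close?', np.allclose(a, b, atol=1e-5))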
cchadowitz-pf
@cchadowitz-pf
Nov 01 2017 15:55
right, you can configure how you want the image preprocessed per application. I looked through that same io.py file a while back myself.
I printed out the matrix after the transformation line and, while it was a numpy array, I did not recognize the transformation done to it.
So I grabbed the output after the preprocess call in both the yahoo and classify code. The values are quite different. Prior to the call, the image is loaded from OpenCV in both cases, and both show the same loaded matrix. That would seem to indicate that the preprocess step is done differently for each classifier.
In the classify example, transformer = get_transformer(deploy_file, mean_file), indicating it gets its parameters from the model files.
Extract nsfw hardcodes the transformer:

Note that the parameters are hard-coded for best results

caffe_transformer = caffe.io.Transformer({'data': nsfw_net.blobs['data'].data.shape})
caffe_transformer.set_transpose('data', (2, 0, 1))  # move image channels to outermost *** convert from HxWxC to CxHxW
caffe_transformer.set_mean('data', np.array([104, 117, 123]))  # subtract the dataset-mean value in each channel
caffe_transformer.set_raw_scale('data', 255)  # rescale from [0, 1] to [0, 255]
caffe_transformer.set_channel_swap('data', (2, 1, 0))  # swap channels from RGB to BGR
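(A minimal sketch of applying that hard-coded transformer and dumping the resulting blob for side-by-side comparison; nsfw_net is assumed to have been loaded earlier in the script, and Hei1.png is the test image from this conversation:)

import numpy as np
import caffe

img = caffe.io.load_image('Hei1.png')            # HxWxC float image in [0, 1]
blob = caffe_transformer.preprocess('data', img) # apply the hard-coded pipeline
np.save('extract_nsfw_blob.npy', blob)           # dump for diffing against the classify code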
rperdon
@rperdon
Nov 01 2017 18:03
On the yahoo nsfw code, I removed the scaler line to see if that would bring the values of the transformed image into line, but they still ended up being way off.
Emmanuel Benazera
@beniz
Nov 01 2017 18:04
Hi (holiday here in France, so not at my desk), you could comment out the transformer's operations to see if you can locate the one that makes the difference.
When you are referring to the yahoo_nsfw, are you referring to the model? If you keep using that Anime model, that's good too for me to reproduce.
rperdon
@rperdon
Nov 01 2017 18:05
I'm testing out those ideas now. Enjoy the holiday, I'll just keep inputting what I find! I appreciate the help though
I'm still using the same anime model
cchadowitz-pf
@cchadowitz-pf
Nov 01 2017 18:05
note that the transformer for the yahoo nsfw was hardcoded from the original yahoo nsfw model repo, so it's likely that those are simply the specific parameters for that model, not necessarily general parameters for any model
rperdon
@rperdon
Nov 01 2017 18:06
I'm limiting the loads to opencv image load so the focus will be on the same image (Hei1.png) and the same model
Emmanuel Benazera
@beniz
Nov 01 2017 18:06
The reasoning is that if you can match the DD output from a modified Python script with DIGITS, this would definitely mean that the inputs are the culprit
rperdon
@rperdon
Nov 01 2017 18:07
The yahoo hardcoding of the transformer gives me some values to work with, so I can align them to the classify transform and potentially find a way to align them to the DD transform
cchadowitz-pf
@cchadowitz-pf
Nov 01 2017 18:07
for reference, the nsfw script you're using (if it's the one I put in my issue) is based on the original: https://github.com/yahoo/open_nsfw/blob/master/classify_nsfw.py
rperdon
@rperdon
Nov 01 2017 18:08
yes I noted where you got it from
Emmanuel Benazera
@beniz
Nov 01 2017 18:08
maybe it comes from DIGITS as well...
rperdon
@rperdon
Nov 01 2017 18:08
I have learned a lot these past few weeks about how each step in the process works as well
I have not ruled out the possibility of DIGITS doing its own thing as well
I'm hoping it's limited to something within the transformer
rperdon
@rperdon
Nov 01 2017 18:48
I retrieved the calculated mean value to be subtracted, which the DIGITS classify.py file loads from the mean file in the model. The pixel value is:
[ 136.25361633  145.42053223  163.41444397]
I inputted these numbers into the yahoo code to see where this gets us
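(For reference, a minimal sketch of pulling that per-channel pixel mean out of a Caffe mean.binaryproto, which is roughly what the DIGITS example does; the file name is a placeholder:)

import numpy as np
import caffe

# Parse the binaryproto mean file shipped with the model.
blob = caffe.proto.caffe_pb2.BlobProto()
with open('mean.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())

mean = caffe.io.blobproto_to_array(blob)[0]   # shape: C x H x W
pixel_mean = mean.mean(axis=(1, 2))           # collapse to one value per channel
print(pixel_mean)                             # e.g. [136.25... 145.42... 163.41...]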
cchadowitz-pf
@cchadowitz-pf
Nov 01 2017 18:51
those are different from the standard [104,117,123] mean, interesting
Emmanuel Benazera
@beniz
Nov 01 2017 18:51
You can pass mean values to DD through the API as well. I tested that on your anime model yesterday and I don't believe it comes from there, but who knows, still possible.
Though the mean had a hard impact on the NSFW model IIRC
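(A minimal sketch of passing mean values to DD at service-creation time via its REST API, here through Python requests; the service name, model path, image dimensions, and nclasses are placeholders:)

import requests

# Hypothetical service creation against a local DeepDetect server;
# 'mean' sets the per-channel values the image connector subtracts.
payload = {
    'mllib': 'caffe',
    'description': 'anime classifier',
    'type': 'supervised',
    'parameters': {
        'input': {'connector': 'image', 'width': 224, 'height': 224,
                  'mean': [136.25, 145.42, 163.41]},
        'mllib': {'nclasses': 2},
    },
    'model': {'repository': '/path/to/model'},
}
r = requests.put('http://localhost:8080/services/anime', json=payload)
print(r.json())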
rperdon
@rperdon
Nov 01 2017 18:51
I'll look into that as well
cchadowitz-pf
@cchadowitz-pf
Nov 01 2017 18:51
it did, yeah
rperdon
@rperdon
Nov 01 2017 19:42
My last thought of the day was to add the input-scale value into the extract_layers nsfw file, and I was able to get the numbers onto the same scale, but the values are still off. So far I have added in the "correct" mean values and an input scale of 0.16 to get the transform values "closer".
rperdon
@rperdon
Nov 01 2017 19:48
I also commented out the raw scale line
So I'm close yet not close.
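(Putting those changes together, the modified hard-coded transformer would look roughly like this; nsfw_net is assumed from the surrounding script, and the 0.16 input scale is the empirical value mentioned above, not a documented constant:)

import numpy as np
import caffe

caffe_transformer = caffe.io.Transformer({'data': nsfw_net.blobs['data'].data.shape})
caffe_transformer.set_transpose('data', (2, 0, 1))
caffe_transformer.set_mean('data', np.array([136.25361633, 145.42053223, 163.41444397]))  # model's own mean
# caffe_transformer.set_raw_scale('data', 255)   # raw-scale line commented out, as described above
caffe_transformer.set_input_scale('data', 0.16)  # empirical scale, applied after mean subtraction
caffe_transformer.set_channel_swap('data', (2, 1, 0))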