These are chat archives for beniz/deepdetect

6th
Apr 2017
Emmanuel Benazera
@beniz
Apr 06 2017 05:37
@cchadowitz-pf I believe the model is multi-label (https://github.com/openimages/dataset/issues/3#issuecomment-259546768) whereas our post-processed one is single-label. In other words, I believe DD returns the correct labels, but normalized so as to sum to 1.
Emmanuel Benazera
@beniz
Apr 06 2017 07:00
this is because we are getting the endpoints from the inception_v3 TF graph definition, while the openimages model uses a sigmoid cross-entropy loss as its endpoint.
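(For illustration only, with made-up logits that are not from the real model, the difference between the two endpoints looks like this:)
import tensorflow as tf

# Made-up logits for one image over 4 classes, purely illustrative.
logits = tf.constant([[2.0, 1.5, -1.0, 0.3]])

# Softmax (what the converted model ends with): scores are forced to sum to 1,
# so several strong labels end up splitting the probability mass.
softmax_scores = tf.nn.softmax(logits)

# Independent sigmoids (what the multi-label OpenImages model ends with):
# each class gets its own confidence in (0, 1), with no sum constraint.
sigmoid_scores = tf.sigmoid(logits)

with tf.Session() as sess:
    print(sess.run(softmax_scores))  # row sums to 1
    print(sess.run(sigmoid_scores))  # entries independent of each other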
Emmanuel Benazera
@beniz
Apr 06 2017 07:16
loading the meta graph is not possible at the moment, cf https://github.com/openimages/dataset/issues/21#issuecomment-276109025
I'm pretty sure there's a way around it through rebuilding a tf graph, I might look into it, but going through TF's bloated codebase is not something I take for breakfast...
cchadowitz-pf
@cchadowitz-pf
Apr 06 2017 13:48
Right, I saw that the original model was multi-label. I wasn't sure if DD was doing anything, but from what you said it sounds like either DD is normalizing the confidences to sum to 1, and/or the model converted to conform to DD requirements is actually using a softmax layer instead of a sigmoid cross-entropy loss layer at the end. I've converted TF model checkpoints to .pb's in the past... but that was in TF v0.11 or so... not sure how it's changed since then. I may give it a go if it's not a long process...
But are you saying the model layer is responsible for having normalized the confidences to sum to 1, or is DD doing that once it's done the forward pass through the network?
Emmanuel Benazera
@beniz
Apr 06 2017 15:15
the conversion is responsible for the softmax
I couldn't find the sigmoid cross-entropy layer with logits to add to the bottom of inception_v3, though I only took 20 mins to look at it.
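(A rough sketch of that "rebuild the graph" route with slim, putting a plain sigmoid at the bottom of inception_v3; the class count, checkpoint path and node names are assumptions, and tf.nn.sigmoid_cross_entropy_with_logits is only needed for training anyway, a plain sigmoid on the logits is enough at inference:)
import tensorflow as tf
from tensorflow.contrib import slim
from tensorflow.contrib.slim.nets import inception

NUM_CLASSES = 6012        # assumption: should come from the OpenImages label dict
CKPT_PATH = "model.ckpt"  # assumption: path to the released checkpoint

# Rebuild the inception_v3 graph in code instead of loading the meta graph.
images = tf.placeholder(tf.float32, [None, 299, 299, 3], name="InputImage")
with slim.arg_scope(inception.inception_v3_arg_scope()):
    logits, _ = inception.inception_v3(images, num_classes=NUM_CLASSES,
                                       is_training=False)

# Multi-label endpoint: an element-wise sigmoid instead of the usual softmax.
predictions = tf.sigmoid(logits, name="multi_predictions")

with tf.Session() as sess:
    # Restore the released weights into the rebuilt graph; this only works
    # if the variable names match the checkpoint.
    saver = tf.train.Saver(slim.get_model_variables())
    saver.restore(sess, CKPT_PATH)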
cchadowitz-pf
@cchadowitz-pf
Apr 06 2017 16:51
Hmm, it's been a while since I last looked into this stuff, but does the original model not include the graph itself? Normally I think I've just needed to load the checkpoint and graph, freeze it, and write it back out as a .pb
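(The generic recipe, roughly as it worked around TF v0.11; the paths and the output node name below are placeholders:)
import tensorflow as tf
from tensorflow.python.framework import graph_util

META_PATH = "model.ckpt.meta"      # placeholder paths
CKPT_PATH = "model.ckpt"
OUTPUT_NODE = "multi_predictions"  # placeholder output node name
FROZEN_PB = "frozen_graph.pb"

with tf.Session() as sess:
    # Load the graph structure and the checkpointed weights.
    saver = tf.train.import_meta_graph(META_PATH, clear_devices=True)
    saver.restore(sess, CKPT_PATH)

    # Bake the variables into constants, keeping only the subgraph needed
    # to compute OUTPUT_NODE, then write the frozen graph out as a .pb.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), [OUTPUT_NODE])
    with tf.gfile.GFile(FROZEN_PB, "wb") as f:
        f.write(frozen.SerializeToString())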
Emmanuel Benazera
@beniz
Apr 06 2017 16:57
I've tried this morning, with the error I reported earlier
cchadowitz-pf
@cchadowitz-pf
Apr 06 2017 17:01
I see - did you get as far as this? https://github.com/openimages/dataset/issues/21#issuecomment-276107322 I'm only just now getting around to looking into it more myself.
Emmanuel Benazera
@beniz
Apr 06 2017 17:20
not exactly, I went the slacker way and tried to add a sigmoid layer in place of the inception_v3 softmax
this was motivated by the fact that since the softmax is getting the right classes, the whole network must be the right one.
cchadowitz-pf
@cchadowitz-pf
Apr 06 2017 17:21
heh :) it's definitely getting the right classes, but the confidences are definitely misleading
if you have some code or snippets that are share-able, i'd be happy to take a bit of a go at it
Emmanuel Benazera
@beniz
Apr 06 2017 17:36
on my phone :(
cchadowitz-pf
@cchadowitz-pf
Apr 06 2017 17:37
ah, no problem :) i'll let you know if i get anything going
cchadowitz-pf
@cchadowitz-pf
Apr 06 2017 20:37
@beniz it's possible I got somewhere, but I realized I'm not sure what sort of input tensor layer DD expects the model to have. At the moment, the model has the following two layers defined first:
node {
  name: "Placeholder"
  op: "Placeholder"
  attr {
    key: "dtype"
    value {
      type: DT_STRING
    }
  }
  attr {
    key: "shape"
    value {
      shape {
      }
    }
  }
}
node {
  name: "DecodeJpeg"
  op: "DecodeJpeg"
  input: "Placeholder"
  attr {
    key: "acceptable_fraction"
    value {
      f: 1.0
    }
  }
  attr {
    key: "channels"
    value {
      i: 3
    }
  }
  attr {
    key: "dct_method"
    value {
      s: ""
    }
  }
  attr {
    key: "fancy_upscaling"
    value {
      b: true
    }
  }
  attr {
    key: "ratio"
    value {
      i: 1
    }
  }
  attr {
    key: "try_recover_truncated"
    value {
      b: false
    }
  }
}
If I try creating the TF service with no inputlayer defined, it defaults to trying the "Placeholder" layer, but throws an error:
Internal: Output 0 of type float does not match declared output type string for node _recv_Placeholder_0 = _Recv[client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=7065586674243576256, tensor_name="Placeholder", tensor_type=DT_STRING, _device="/job:localhost/replica:0/task:0/cpu:0"]()
E /opt/deepdetect/src/services.h:496] service openimagesinceptionv3 mllib internal error: Internal: Output 0 of type float does not match declared output type string for node _recv_Placeholder_0 = _Recv[client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=7065586674243576256, tensor_name="Placeholder", tensor_type=DT_STRING, _device="/job:localhost/replica:0/task:0/cpu:0"]()
I also tried defining "inputlayer":"DecodeJpeg" on service creation, but it again throws an error:
Internal: Output 0 of type float does not match declared output type uint8 for node _recv_DecodeJpeg_0 = _Recv[client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=7316938532010187566, tensor_name="DecodeJpeg", tensor_type=DT_UINT8, _device="/job:localhost/replica:0/task:0/cpu:0"]()
E /opt/deepdetect/src/services.h:496] service openimagesinceptionv3 mllib internal error: Internal: Output 0 of type float does not match declared output type uint8 for node _recv_DecodeJpeg_0 = _Recv[client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=7316938532010187566, tensor_name="DecodeJpeg", tensor_type=DT_UINT8, _device="/job:localhost/replica:0/task:0/cpu:0"]()
so it looks like I definitely don't have the right input layer defined in my graph def for DD to use
fwiw i didn't modify the graph def, simply wrote it to disk and then converted it and the ckpt to a frozen .pb
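(One way to double-check what DD will try to feed is to dump the placeholders and their declared dtypes straight from the frozen .pb; the file name here is a placeholder:)
import tensorflow as tf

FROZEN_PB = "frozen_graph.pb"  # placeholder path

graph_def = tf.GraphDef()
with tf.gfile.GFile(FROZEN_PB, "rb") as f:
    graph_def.ParseFromString(f.read())

# Print every placeholder and its declared dtype: a DT_STRING or DT_UINT8
# input can't be fed with the float batch that the service sends.
for node in graph_def.node:
    if node.op == "Placeholder":
        print(node.name, tf.as_dtype(node.attr["dtype"].type).name)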
Emmanuel Benazera
@beniz
Apr 06 2017 20:42
not sure what this JSON is. Basically you need to freeze the graph with a sigmoid at the end. You can send me the .pb file and I'll try tomorrow
cchadowitz-pf
@cchadowitz-pf
Apr 06 2017 20:43
yeah I have a binary frozen pb with a sigmoid at the end, it's the input layer that I'm having trouble with. The json is the first two layers from the plaintext graph_def dump prior to converting it to a binary .pb with ckpt weights included
Emmanuel Benazera
@beniz
Apr 06 2017 20:46
what the google guy says is that he used an internal layer that is not in the public tf as far as I understand. use the same input as inception_v3 instead
cchadowitz-pf
@cchadowitz-pf
Apr 06 2017 20:48
yup i am using inception_v3 - and adding some preprocessing layers at the beginning. it's those input layers i'm not sure about with respect to how DD expects a TF model to be structured.
does it expect encoded jpg data, or raw image data, or a file path, etc
Emmanuel Benazera
@beniz
Apr 06 2017 20:54
a tensor of encoded images but you can quickly check that in tflib.cc and tfinputconns.h I think
i.e. it takes the batch as input; I think you need to specify a variable size in the input tensor
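(i.e. something like this, where the None batch dimension is the variable size; the node name and the 299x299x3 shape are assumptions for inception_v3:)
import tensorflow as tf

# The None batch dimension is the "variable size": the client can feed any
# number of preprocessed images in one call.
images = tf.placeholder(tf.float32, shape=[None, 299, 299, 3], name="InputImage")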
cchadowitz-pf
@cchadowitz-pf
Apr 06 2017 21:01
hmm ok. if it helps, here's a quick screencap of the full graph
graph-run.png
cchadowitz-pf
@cchadowitz-pf
Apr 06 2017 21:12
Comparing it to the inception-resnet-v2 graph that works out of the box with DD, I think I can see what I need to do....
cchadowitz-pf
@cchadowitz-pf
Apr 06 2017 22:24
Got it! Basically had to shoehorn a combination of the preprocessing tensors together with the placeholder tensor DD expects.
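(Roughly, the shoehorning amounts to importing the frozen graph_def with an input_map that swaps the old string-Placeholder/DecodeJpeg front end for a float batch placeholder plus the tail of the preprocessing; all node names below are assumptions to be read off the real graph_def dump:)
import tensorflow as tf

FROZEN_PB = "openimages_sigmoid_frozen.pb"  # placeholder path
OLD_PREPROCESSED_OUTPUT = "Mul:0"           # assumption: last preprocessing node in the old graph
OUTPUT_NODE = "multi_predictions:0"         # assumption: sigmoid output node

graph_def = tf.GraphDef()
with tf.gfile.GFile(FROZEN_PB, "rb") as f:
    graph_def.ParseFromString(f.read())

# Float batch placeholder with a variable batch dimension, replacing the
# string Placeholder -> DecodeJpeg front end.
images = tf.placeholder(tf.float32, [None, 299, 299, 3], name="InputImage")

# Redo the tail of the preprocessing on the already-decoded batch
# (scale pixel values to [-1, 1] as inception_v3 expects).
preprocessed = tf.subtract(tf.multiply(images, 1.0 / 127.5), 1.0)

# Splice the new front end into the frozen graph in place of the old
# preprocessing output.
outputs = tf.import_graph_def(
    graph_def,
    input_map={OLD_PREPROCESSED_OUTPUT: preprocessed},
    return_elements=[OUTPUT_NODE],
    name="")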
Emmanuel Benazera
@beniz
Apr 06 2017 22:25
congrats :)
cchadowitz-pf
@cchadowitz-pf
Apr 06 2017 22:25
heh, may have spoken too soon. not sure the labels are lining up correctly...
seems to be getting the same labels no matter what. that's not good :D I'll probably resume tomorrow. Thanks for the tips :)
Emmanuel Benazera
@beniz
Apr 06 2017 22:29
if you lay down your steps in a gist I'll look them up tomorrow
cchadowitz-pf
@cchadowitz-pf
Apr 06 2017 22:57