SavedModelBundle and loading it should restore its last state so that you can continue training
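On the Python side, a minimal sketch of the save-and-resume workflow (the /tmp path and tiny model are illustrative, assuming a Keras model saved in SavedModel format, which also captures the optimizer state):

```python
import tensorflow as tf

# Build and briefly train a tiny model, then save it in SavedModel format.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((8, 4))
y = tf.random.normal((8, 1))
model.fit(x, y, epochs=1, verbose=0)
model.save("/tmp/continue_demo")

# Reloading restores the architecture, weights, and optimizer state,
# so training can pick up where it left off.
restored = tf.keras.models.load_model("/tmp/continue_demo")
restored.fit(x, y, epochs=1, verbose=0)
```

The same directory can then be opened from Java with SavedModelBundle.load for inference.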
serving_default_input_1, and the output seems to be in StatefulPartitionedCall (StatefulPartitionedCall:1, etc., if using multiple outputs). Is there a way to change these names so that I can use more meaningful names in Java? (I named the output layers, and that did affect the layers themselves when I inspect the graph in Java, but not the output names.)
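One way to get meaningful names is to attach an explicitly named signature at export time: naming the TensorSpec sets the input name, and returning a dict sets the output names in the SignatureDef. A sketch (the "score"/"tokens"/"logits" names and the /tmp path are illustrative):

```python
import tensorflow as tf

class Scorer(tf.Module):
    def __init__(self):
        self.w = tf.Variable(2.0)

    # The TensorSpec name controls the input key in the SignatureDef;
    # returning a dict controls the output keys.
    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32, name="tokens")])
    def score(self, tokens):
        return {"logits": tokens * self.w}

scorer = Scorer()
tf.saved_model.save(scorer, "/tmp/named_sig_demo",
                    signatures={"score": scorer.score})
```

In Java you can then resolve the logical keys ("tokens", "logits") through the bundle's SignatureDef map rather than hard-coding the raw serving_default_input_1 / StatefulPartitionedCall tensor names.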
Is it easy to integrate TF.Text operations into TF Java?
When I try to load a SavedModel with these ops, I get errors like:
Exception in thread "main" org.tensorflow.exceptions.TensorFlowException: Op type not registered 'CaseFoldUTF8' in binary running on localhost. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f905e935595, pid=30392, tid=30399
#
# JRE version: OpenJDK Runtime Environment (220.127.116.11+1) (build 18.104.22.168+1-Ubuntu-0ubuntu1.20.10)
# Java VM: OpenJDK 64-Bit Server VM (22.214.171.124+1-Ubuntu-0ubuntu1.20.10, mixed mode, sharing, tiered, compressed oops, g1 gc, linux-amd64)
# Problematic frame:
# C  [libtensorflow_framework.so.2+0x121f595]  tensorflow::shape_inference::InferenceContext::Concatenate(tensorflow::shape_inference::ShapeHandle, tensorflow::shape_inference::ShapeHandle, tensorflow::shape_inference::ShapeHandle*)+0x185
#
# Core dump will be written. Default location: Core dumps may be processed with "/usr/share/apport/apport %p %s %c %d %P %E" (or dumping to .../core.30392)
#
# An error report file with more information is saved as:
# .../hs_err_pid30392.log
#
# If you would like to submit a bug report, please visit:
#   https://bugs.launchpad.net/ubuntu/+source/openjdk-lts
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
SavedModel is optimal - it loads in Keras when we have compile=False, but fails otherwise. Also, it runs well in TF-Serving.
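For reference, compile=False tells Keras to skip deserializing the loss/optimizer configuration, which is typically the part that fails (e.g. when custom objects aren't importable). A self-contained sketch (the /tmp path and tiny model are illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse")
model.save("/tmp/compile_demo")

# compile=False skips restoring the training configuration, so the
# load succeeds even if the loss/optimizer can't be deserialized.
restored = tf.keras.models.load_model("/tmp/compile_demo", compile=False)

# Re-attach the training configuration by hand before calling fit():
restored.compile(optimizer="adam", loss="mse")
```

TF-Serving only needs the serving signature, not the Keras training config, which is consistent with the model serving fine there while the compiled Keras load fails.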