    Ryan Nett
    @rnett
    Also, I had no idea this channel existed until I got mentioned; maybe put it in the README or a contributors doc?
    Alexey Zinoviev
    @zaleslaw
    Yes, we're looking in the same direction, @rnett
    Karl Lessard
    @karllessard
    I'm trying to get some information from the Google Brain team about this, as we will clearly be impacted if they are effectively reconsidering the purpose of having a pure C API. I'm also planning to start a new thread on the TF developers mailing list.
    So even if they are planning to wrap all the C++ stuff in C, how long will it take after the C++ version is released? Will it be a priority, or just something left in the backlog? Will the Python client use the C++ version directly?
    Alexey Zinoviev
    @zaleslaw
    Merry Christmas to everyone who is celebrating
    Karl Lessard
    @karllessard
    Thanks Alexey, same to everyone!
    Ryan Nett
    @rnett
    I can't run tests locally (with the dev profile) via IntelliJ; I get java.lang.UnsatisfiedLinkError: no jnitensorflow in java.library.path. Is this a known issue with the dev profile?
    Sidney Lann
    @SidneyLann
    I want to port GAT from Python to Java-TF, but Java-TF has no basic ops such as einsum() from tensorflow.python. What can I do? Thanks.
    Sidney Lann
    @SidneyLann
    tf.einsum("...NHI , IHO -> ...NHO", x, self.attn_kernel_self)
    How can I implement this in Java-TF?
    Karl Lessard
    @karllessard
    There is the tf.linalg.einsum operation in Java, have you tried it?
    Sidney Lann
    @SidneyLann
    You mean org.tensorflow.op.linalg.Einsum? It may work, I'll try. Thanks.
    Sidney Lann
    @SidneyLann
    Can a Java-TF 2 model be saved to a file and restored to train again?
    Karl Lessard
    @karllessard

    > You mean org.tensorflow.op.linalg.Einsum? It may work, I'll try. Thanks.

    Yes, but you should access it via an instance of Ops, which we normally call tf, like this:

    Ops tf = Ops.create(…)
    ...
    Einsum e = tf.linalg.einsum(…)

    > Can a Java-TF 2 model be saved to a file and restored to train again?

    Yes, you can save checkpoints using Session.save.
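
    A minimal sketch of what that checkpointing might look like (the path prefix is hypothetical, and Session.restore is assumed to be the loading counterpart; the exact API may differ by version):

    import org.tensorflow.Graph;
    import org.tensorflow.Session;

    try (Graph graph = new Graph(); Session session = new Session(graph)) {
        // ... build the graph and run some training steps ...
        session.save("/tmp/checkpoints/model");  // writes checkpoint files under this prefix
    }

    // Later, with a new session over the same graph structure:
    // session.restore("/tmp/checkpoints/model");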

    Karl Lessard
    @karllessard
    Or better, you can simply export your model as a SavedModelBundle; loading it should restore its last state, from which you can continue training.
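
    The reloading side of that round trip could look like this (the path and tag are placeholders; SavedModelBundle.load and session() are the load-side entry points):

    import org.tensorflow.SavedModelBundle;
    import org.tensorflow.Session;

    // "serve" is the conventional tag for models exported for serving.
    try (SavedModelBundle model = SavedModelBundle.load("/tmp/my-model", "serve")) {
        Session session = model.session();  // variables come back in their saved state
        // ... keep feeding training batches and running the training op on this session ...
    }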
    Sidney Lann
    @SidneyLann
    Great! Thanks.
    Sidney Lann
    @SidneyLann
    Does Java-TF 2 support autodiff now?
    Adam Pocock
    @Craigacp
    Gradients work in graph mode, but we don't currently have access to the gradient tape for gradients in eager mode.
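
    For the graph-mode case, a minimal sketch (the placeholder/tensor calls follow the 0.3-era API and may differ in other 0.x releases):

    import org.tensorflow.Graph;
    import org.tensorflow.Output;
    import org.tensorflow.Session;
    import org.tensorflow.Tensor;
    import org.tensorflow.op.Ops;
    import org.tensorflow.types.TFloat32;

    // dy/dx for y = x * x, computed symbolically in the graph.
    try (Graph graph = new Graph()) {
        Ops tf = Ops.create(graph);
        Output<TFloat32> x = tf.placeholder(TFloat32.class).asOutput();
        Output<TFloat32> y = tf.math.mul(x, x).asOutput();

        // addGradients returns one gradient output per requested input.
        Output<?>[] grad = graph.addGradients(y, new Output<?>[] {x});

        try (Session session = new Session(graph);
             TFloat32 in = TFloat32.scalarOf(3.0f);
             Tensor out = session.runner().feed(x, in).fetch(grad[0]).run().get(0)) {
            System.out.println(((TFloat32) out).getFloat());  // 6.0
        }
    }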
    Sidney Lann
    @SidneyLann
    For eager mode, are we waiting for the gradient tape to move to C++ so it can then be used from Java?
    Adam Pocock
    @Craigacp
    We're trying to figure out what the plan is for a stable C or C++ API to access the gradient tape; then we'll look at adding it to Java.
    Sidney Lann
    @SidneyLann
    ok
    Lai Wei
    @roywei
    Hi guys, is there any plan to upgrade the MKL-DNN version for TF Java here? https://github.com/tensorflow/java/blob/fa6e6e1db26c32ad5ac6f59eec86aa213561cece/tensorflow-core/pom.xml#L62
    Samuel Audet
    @saudet
    MKL-DNN 1.x apparently doesn't work on Mac and Windows:
    https://github.com/tensorflow/java/blob/master/tensorflow-core/tensorflow-core-api/build.sh#L21
    I think we're just waiting on that...
    Adam Pocock
    @Craigacp
    oneDNN supports macOS, Linux, and Windows, and is the new name for MKL-DNN.
    Samuel Audet
    @saudet
    Right, but they still call it MKL-DNN in the source code... It seems it's not yet fully supported in TF.
    jxtps
    @jxtps
    When I create a Keras model in Python TF 2.3.1, save it using model.save("mypath"), and then load it in TF Java 0.2.0, the name of the input appears to be serving_default_input_1, and the output seems to be StatefulPartitionedCall:0 (StatefulPartitionedCall:1, etc., if using multiple outputs). Is there a way to change these names so that I can use more meaningful names in Java? (I named the output layers, and that did affect the layers themselves when I inspect the graph in Java, but not the output names.)
    jxtps
    @jxtps
    Ah, I have to read the metaGraphDef and get the "serving_default" signature to find the inputs & outputs, which do map to the tensor names I indicated, but the signatures also contain the names of the layers.
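
    That lookup can be sketched like this (the model path is a placeholder; the accessors are the generated protobuf API for MetaGraphDef):

    import org.tensorflow.SavedModelBundle;
    import org.tensorflow.proto.framework.SignatureDef;

    try (SavedModelBundle model = SavedModelBundle.load("/tmp/my-model", "serve")) {
        SignatureDef sig = model.metaGraphDef().getSignatureDefMap().get("serving_default");

        // Logical (layer) names -> actual tensor names in the graph,
        // e.g. "input_1" -> "serving_default_input_1:0", "output_1" -> "StatefulPartitionedCall:0"
        sig.getInputsMap().forEach((name, info) ->
            System.out.println("input  " + name + " -> " + info.getName()));
        sig.getOutputsMap().forEach((name, info) ->
            System.out.println("output " + name + " -> " + info.getName()));
    }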
    jxtps
    @jxtps
    Is there a canonical way to run the graph based on a signature? Or some example code? Up until now I've "just" been doing runner.fetch("output-name"), runner.feed("input-name"), then runner.run().
    Ryan Nett
    @rnett
    Take a look at ConcreteFunction, I think it does what you need.
    Karl Lessard
    @karllessard
    Yep, @rnett is right, ConcreteFunction is now the recommended way to run a graph, especially for that reason.
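
    A sketch of that usage (the model path and the logical input name "input_1" are placeholders, and this is written against the API where SavedModelBundle.function(...) returns the ConcreteFunction for a signature):

    import java.util.Map;
    import org.tensorflow.ConcreteFunction;
    import org.tensorflow.SavedModelBundle;
    import org.tensorflow.Tensor;
    import org.tensorflow.ndarray.StdArrays;
    import org.tensorflow.types.TFloat32;

    try (SavedModelBundle model = SavedModelBundle.load("/tmp/my-model", "serve")) {
        // Look the function up by signature name instead of raw tensor names.
        ConcreteFunction serving = model.function("serving_default");
        try (TFloat32 input = TFloat32.tensorOf(StdArrays.ndCopyOf(new float[][] {{1f, 2f, 3f}}))) {
            // Inputs and outputs are keyed by the logical names from the signature.
            Map<String, Tensor> outputs = serving.call(Map.of("input_1", input));
            outputs.values().forEach(Tensor::close);
        }
    }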
    jxtps
    @jxtps
    ConcreteFunction looks great, thanks!
    jxtps
    @jxtps
    Can I use model quantization of any kind with Java CPU inference? https://www.tensorflow.org/model_optimization/guide/quantization/post_training
    Adam Pocock
    @Craigacp
    Not using the tflite stack. You could quantize the models yourself and run them as regular TF models. It would probably be fairly complex to do.
    jxtps
    @jxtps
    Is it possible to use TFLite in server-side Java? All the docs I've found talk about Android; is there a breaking difference?
    Adam Pocock
    @Craigacp
    TFLite is separate from core TF, and I think its Java API is Android-only. That said, I don't think anyone has looked into it particularly deeply yet.
    Alexey Zinoviev
    @zaleslaw
    Hi, community! I'm facing the issue "No gradient defined for op: SelectV2" when using tf.select(predicate, true_branch, false_branch). We know it's a typical situation, but does anyone know additional ways to emulate "if"? I tried to use switchCond, but have no idea how to do it correctly.
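
    One common workaround, assuming both branches are safe to evaluate everywhere, is to blend them with a cast of the predicate so the whole expression stays differentiable (a sketch, not an official fix):

    import org.tensorflow.Operand;
    import org.tensorflow.op.Ops;
    import org.tensorflow.types.TBool;
    import org.tensorflow.types.TFloat32;

    // pred ? a : b, written as pred*a + (1-pred)*b so gradients flow through both branches.
    static Operand<TFloat32> selectLike(Ops tf, Operand<TBool> pred,
                                        Operand<TFloat32> a, Operand<TFloat32> b) {
        Operand<TFloat32> mask = tf.dtypes.cast(pred, TFloat32.class);
        Operand<TFloat32> inv = tf.math.sub(tf.constant(1.0f), mask);
        return tf.math.add(tf.math.mul(mask, a), tf.math.mul(inv, b));
    }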
    Karl Lessard
    @karllessard
    @saudet continuing the discussion in the doc for MKL... Frank Liu's point was that the MKL version on other platforms looks slower than the versions without it, and I've observed the same thing on my side. So if that's really the case, why use it?
    I haven't tried many models on my side, so I assumed it was just not hitting the « happy path », but maybe he did.
    Samuel Audet
    @saudet
    Did you make sure to use 1.x and not 0.x?
    Jacob Eisinger
    @jeisinge

    Is it easy to integrate TF.Text operations into TF Java?

    When I try to load a SavedModel with these ops, I get errors like:

    Exception in thread "main" org.tensorflow.exceptions.TensorFlowException: Op type not registered 'CaseFoldUTF8' in binary running on localhost. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
    Samuel Audet
    @saudet
    It should work transparently, after loading the custom ops: tensorflow/java#82
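
    A sketch of that sequence (the library path is hypothetical, and TensorFlow.loadLibrary is assumed to be the entry point that wraps TF_LoadLibrary):

    import org.tensorflow.SavedModelBundle;
    import org.tensorflow.TensorFlow;

    // Register the custom kernels before loading any graph that uses them.
    TensorFlow.loadLibrary("/path/to/libtensorflow_text_ops.so");

    try (SavedModelBundle model = SavedModelBundle.load("/tmp/text-model", "serve")) {
        // CaseFoldUTF8 and the other TF.Text ops now resolve when the graph is imported.
    }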
    Jacob Eisinger
    @jeisinge
    @saudet, thank you for that pointer; I believe it resolved the TensorFlow Text issue! Now I am getting core dumps around shape inference of a concat operation. Is there documentation on how to debug these types of model issues in the JVM?
    Adam Pocock
    @Craigacp
    Core dumps as in it's taken down the JVM? Or it's throwing an exception that you aren't catching?
    The former is something we need to fix; the JVM should not crash. The latter is more of a modelling problem, though it's surprising if this model works fine in other runtimes.
    Jacob Eisinger
    @jeisinge
    Yeah, it is taking down the JVM with a core dump:
    #
    # A fatal error has been detected by the Java Runtime Environment:
    #
    #  SIGSEGV (0xb) at pc=0x00007f905e935595, pid=30392, tid=30399
    #
    # JRE version: OpenJDK Runtime Environment (11.0.9.1+1) (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.10)
    # Java VM: OpenJDK 64-Bit Server VM (11.0.9.1+1-Ubuntu-0ubuntu1.20.10, mixed mode, sharing, tiered, compressed oops, g1 gc, linux-amd64)
    # Problematic frame:
    # C  [libtensorflow_framework.so.2+0x121f595]  tensorflow::shape_inference::InferenceContext::Concatenate(tensorflow::shape_inference::ShapeHandle, tensorflow::shape_inference::ShapeHandle, tensorflow::shape_inference::ShapeHandle*)+0x185
    #
    # Core dump will be written. Default location: Core dumps may be processed with "/usr/share/apport/apport %p %s %c %d %P %E" (or dumping to .../core.30392)
    #
    # An error report file with more information is saved as:
    # .../hs_err_pid30392.log
    #
    # If you would like to submit a bug report, please visit:
    #   https://bugs.launchpad.net/ubuntu/+source/openjdk-lts
    # The crash happened outside the Java Virtual Machine in native code.
    # See problematic frame for where to report the bug.
    #
    It is not clear if my SavedModel is entirely well-formed: it loads in Keras with compile=False, but fails otherwise. Also, it runs well in TF Serving.
    Adam Pocock
    @Craigacp
    Could you open an issue on GitHub with the rest of the hs_err log in it? There should be a full stack trace in there.
    Jacob Eisinger
    @jeisinge
    Sure thing, see tensorflow/java#194.