    Jason Zaman
    @perfinion
    Well, those are unrelated; the sequential and functional APIs both end up with a tf.keras.Model object which you just call model.fit() on the same way.
    Also, you can pull from the DB in many threads, then interleave into the main dataset which goes to the GPU.
    If that ends up not fast enough, look into just packing the data into TFRecords and training off those; it uses more disk space but might be worth it.
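    For illustration, a rough sketch of what that multi-threaded pull-and-interleave pattern could look like with tf.data; the query_shard helper, shard count, and tensor shapes below are hypothetical placeholders:

```python
import tensorflow as tf

def rows_from_db(shard_id):
    # Hypothetical generator: pulls one shard's rows out of the database.
    for features, label in query_shard(int(shard_id)):
        yield features, label

# One source dataset per DB shard/reader, interleaved into the main pipeline.
dataset = tf.data.Dataset.range(8).interleave(
    lambda shard: tf.data.Dataset.from_generator(
        rows_from_db,
        args=(shard,),
        output_signature=(
            tf.TensorSpec(shape=(28, 28), dtype=tf.float32),  # assumed feature shape
            tf.TensorSpec(shape=(), dtype=tf.int64),
        ),
    ),
    cycle_length=8,
    num_parallel_calls=tf.data.AUTOTUNE,
)
dataset = dataset.batch(32).prefetch(tf.data.AUTOTUNE)
# model.fit(dataset) on the resulting tf.keras.Model, same as with any other input.
```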
    Gili Tzabari
    @cowwoc
    I'll try to start small. I'll pull a bit of data from the DB, stuff it into a tf.TensorArray, train, rinse and repeat. If performance becomes an issue I will revisit it but I don't think this will happen in the near future.
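    As a sketch of that start-small loop (fetch_rows here is a hypothetical DB helper, not a real API):

```python
import tensorflow as tf

# Hypothetical helper returning a small chunk of rows (lists of floats) from the DB.
rows = fetch_rows(limit=1024)

ta = tf.TensorArray(dtype=tf.float32, size=len(rows))
for i, row in enumerate(rows):
    ta = ta.write(i, tf.convert_to_tensor(row, dtype=tf.float32))
features = ta.stack()  # shape: (num_rows, num_features)

# model.fit(features, labels, ...) then repeat with the next chunk of rows.
```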
    Jakob Sultan Ericsson
    @jakeri
    Are snapshots of PRs published anywhere? I would like to test tensorflow/java#322 without building it myself. :-)
    Adam Pocock
    @Craigacp
    We don't build snapshots of PRs.
    As that doesn't touch the native code you should be able to build it with the -Pdev flag and it'll pull down the latest snapshot of the native code.
    Karl Lessard
    @karllessard
    Well looks like it’s ready to get merged anyway, so let’s do both :)
    Jakob Sultan Ericsson
    @jakeri
    Even better. :)
    Gili Tzabari
    @cowwoc
    Given tensors a = [1, 2, 3] and b = [4, 5], how do I construct a tensor c which is [1, 2, 3, 4, 5]? I know I can use tf.reshape and tf.concat, but is there an easier way to do this in a single step? I'm using the Python API in case that makes a difference.
    Jason Zaman
    @perfinion
    tf.stack
    Jakob Sultan Ericsson
    @jakeri

    Both good and bad about the latest 0.4.0-SNAPSHOT

    The good thing: TString does not core dump anymore on 0.4.0-SNAPSHOT.

    The bad thing: results on OSX are random/garbage.

    We have a unit test that loads a savedmodel (CNN classification using MNIST).
    I have run the data through the model using Python and extracted the resulting tensors. Our unit test loads this model using TF Java and runs the same data through.
    When I use TF Java on Linux our results match nicely (the unit test passes every time), but on OSX the results are way off and random on every run.

    TF Java Linux (from our build server)

    Test best category: 0 (should be 0) categorical: CategoricalPredictionCell [categorical={0=1.0, 1=2.5633107E-22, 2=1.5087728E-8, 3=2.6744433E-16, 4=2.867041E-14, 5=1.9830472E-16, 6=2.6495522E-10, 7=6.265893E-15, 8=7.546605E-10, 9=4.7946207E-9}]

    TF Java OSX (locally)

    Test best category: 2 (should be 0) categorical: CategoricalPredictionCell [categorical={0=0.0, 1=0.0, 2=1.0, 3=0.0, 4=0.0, 5=0.0, 6=0.0, 7=0.0, 8=0.0, 9=0.0}]

    And for reference python output values (without the matching category)

    1.0000000e+00f, 2.5633206e-22f, 1.5087728e-08f, 2.6744229e-16f, 2.8670517e-14f, 1.9830397e-16f, 2.6495622e-10f, 6.2658695e-15f, 7.5466194e-10f, 4.7946207e-09f

    We did not experience this kind of problem when we were running 0.2.0.
    And we are not using any GPU or MKL extensions.
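    For reference, the Python side of that comparison could look roughly like this (the model path and input file are hypothetical placeholders):

```python
import numpy as np
import tensorflow as tf

# Hypothetical path to the exported MNIST CNN classifier
model = tf.keras.models.load_model("mnist_cnn_savedmodel")

# Hypothetical file holding one test image, shape (1, 28, 28, 1)
image = np.load("test_image.npy").astype(np.float32)

probs = model.predict(image)
print(probs)  # reference per-class probabilities to compare against the TF Java run
```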

    Adam Pocock
    @Craigacp
    Could you open another issue and post the model & test there?
    Presumably this isn't using a TString at all?
    Jakob Sultan Ericsson
    @jakeri
    True. This one doesn't use TStrings. The TStrings were for another use case where we generate images as byte[].
    Still, I thought it could have been related.
    Adam Pocock
    @Craigacp
    Well, the initialization for TString was definitely broken, but we've fixed that now, so let's try to run this one down too.
    Jakob Sultan Ericsson
    @jakeri
    Actually, we do use TString for the input: TFRecords, we have our own writer.
    Adam Pocock
    @Craigacp
    Ok.
    Jakob Sultan Ericsson
    @jakeri
    I'll submit it tomorrow. Getting late in Europe.
    Karl Lessard
    @karllessard
    I’m trying to think what has changed that much between 0.2.0 and 0.4.0. Definitely the TString backend has completely changed. Other than that, we are now automatically mapping tensor native memory to the JVM, but I can’t see how that could impact the results of a session.
    Jakob Sultan Ericsson
    @jakeri
    argh. When I created an isolated test case for this, it magically started to work. I'm trying to figure out why this is happening. I see one difference from my failing run: this line is present:
    Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
    Exactly the same test.
    But our real code has a lot more dependencies.
    Jakob Sultan Ericsson
    @jakeri
    😭 I accidentally copied in the mkl version of macosx-x86_64.
    So the bug is that macosx-x86_64-mkl is not a good thing to use.
    We have seen problems with this one before (on Linux) and realized that it was basically slower and more memory-intensive than the regular build.
    Karl Lessard
    @karllessard
    Yes, the MKL version has never shown any improvement over the vanilla version, and sometimes even the opposite. It is still a mystery why (and to be honest, nobody has spent a lot of time figuring it out either), but MKL support in the TF native library itself seems a bit clumsy, afaik.
    So you are saying that you don’t have any issue with the vanilla version?
    Jakob Sultan Ericsson
    @jakeri
    Correct. It works fine now. It was a classic copy-and-paste snafu: I accidentally copied the mkl version into our pom.xml when I was upgrading.
    Sorry for that.
    Karl Lessard
    @karllessard
    Don’t be sorry, that’s good news! :)
    Karl Lessard
    @karllessard
    General announcement: Probably some of you already know, but Google launched its official forum platform for TensorFlow a few days ago: https://discuss.tensorflow.org/
    I invite you all to subscribe to it and, from now on, post your questions, solutions, suggestions or anything you want to talk about regarding TensorFlow Java on that platform.
    To do so, make sure to tag your posts with java and/or sig_jvm so they get categorized and filtered properly.
    Gili Tzabari
    @cowwoc
    @perfinion You suggested using tf.stack to combine a = [1, 2, 3] and b = [4, 5] into tensor c which is [1, 2, 3, 4, 5]. Can you elaborate? As far as I can see, tf.stack will just stack the two tensors on top of each other instead of concatenating the vectors into a [1, 5] shape.
    Karl Lessard
    @karllessard
    @cowwoc I think what you want is tf.concat, did you try it out?
    Jason Zaman
    @perfinion
    oh yeah, you want either tf.stack or tf.concat, depending on which way you want to combine them; they both take an axis= param telling which way to combine them
    stack is nicer if you have a list of tensors, concat is nicer if you have separate tensors, but all those functions can be used to do anything, so just use whichever is cleaner
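    A quick illustration of the difference (just a sketch; note that for stack the inputs must have the same shape, so b here has three elements):

```python
import tensorflow as tf

a = tf.constant([1, 2, 3])
b = tf.constant([4, 5, 6])

# concat joins along an existing axis -> shape (6,)
tf.concat([a, b], axis=0)   # [1, 2, 3, 4, 5, 6]

# stack creates a new axis -> shape (2, 3)
tf.stack([a, b], axis=0)    # [[1, 2, 3], [4, 5, 6]]
```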
    Gili Tzabari
    @cowwoc
    @karllessard Yes, my example is actually not precise enough. I actually have a = [1, 2, 3] and b = [[4, 5], [6, 7]], and I want to end up with c = [1, 2, 3, 4, 5, 6, 7]. I am currently using c = tf.concat([a, tf.reshape(b, [-1])], axis=0), but I was wondering if there is an easier/more readable way to collapse and concatenate everything in a single step.
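    For the record, a runnable version of that flatten-then-concat pattern (just the approach already described above, spelled out):

```python
import tensorflow as tf

a = tf.constant([1, 2, 3])
b = tf.constant([[4, 5], [6, 7]])

c = tf.concat([a, tf.reshape(b, [-1])], axis=0)
# c == [1, 2, 3, 4, 5, 6, 7]
```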
    Gili Tzabari
    @cowwoc

    I've got inputs with different dtypes. One is an int64, another is a float64. I saw a tutorial on feeding TensorFlow multiple inputs where they used tf.concatenate() to combine the various inputs, but when I tried to do the same I got:

    Tensor conversion requested dtype int64 for Tensor with dtype float32: <tf.Tensor 'concatenate/Cast:0' shape=(None, 4180, 5) dtype=float32>

    Any ideas?

    Adam Pocock
    @Craigacp
    You don't need to concatenate the inputs. If there are two separate input placeholders, you feed each input to the appropriate placeholder.
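    A minimal sketch of that two-input setup in Keras (the input names and layer sizes here are made up for illustration):

```python
import tensorflow as tf

# One input per dtype; no need to concatenate the raw inputs up front.
int_input = tf.keras.Input(shape=(1,), dtype=tf.int64, name="int_feature")
float_input = tf.keras.Input(shape=(1,), dtype=tf.float64, name="float_feature")

# Cast both to a common float dtype inside the model, then combine.
x = tf.keras.layers.Concatenate()([
    tf.cast(int_input, tf.float32),
    tf.cast(float_input, tf.float32),
])
x = tf.keras.layers.Dense(32, activation="relu")(x)
output = tf.keras.layers.Dense(1)(x)

model = tf.keras.Model(inputs=[int_input, float_input], outputs=output)
model.compile(optimizer="adam", loss="mse")

# Each input is fed separately, e.g.:
# model.fit({"int_feature": int_array, "float_feature": float_array}, labels)
```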
    Gili Tzabari
    @cowwoc
    @Craigacp https://www.tensorflow.org/guide/autodiff#3_took_gradients_through_an_integer_or_string says "Integers and strings are not differentiable"... Does that mean I can't use int* types at all?
    Or am I misunderstanding something?
    Adam Pocock
    @Craigacp
    It depends what you're using the int for. For example, MNIST is usually stored as integers in the range 0-255, and you can feed that into the model. Usually the first step in the model is then to convert it into a float and proceed as normal; since you aren't taking gradients of the conversion procedure, it doesn't matter. It's also useful to feed in other tensors to control model behaviour (e.g. an integer step or epoch counter to control the learning rate); again, these usually aren't involved in the gradient updates, so it doesn't matter.
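    For example, an MNIST-style model that takes integer pixels and casts them to float as its first step (a sketch; the layer sizes are arbitrary):

```python
import tensorflow as tf

# Pixel values arrive as integers in 0-255
inputs = tf.keras.Input(shape=(28, 28), dtype=tf.uint8)

# First step inside the model: convert to float and scale. No gradient is taken
# through the cast itself, so integer inputs are fine here.
x = tf.cast(inputs, tf.float32) / 255.0
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
```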
    Gili Tzabari
    @cowwoc
    The integers in my case represent timestamps as time since epoch.
    Adam Pocock
    @Craigacp
    And you want to use them as features in your model?
    Gili Tzabari
    @cowwoc
    Yes. I'm dealing with behavior that is tied to weather, and weather follows certain patterns as a function of time. I've also got outdoor temperature as an input, but I'm thinking (just a guess) it can't hurt to add in the timestamp.
    I've actually also got a second case of integers: I've got inputs that are enums, so I converted their ordinal value to an int. There I can obviously just cast it to a float. It's the timestamps where things get more complicated.