    Jakob Sultan Ericsson
    @jakeri
    Or we do use TString for the input. For TFRecords, we have our own writer.
    Adam Pocock
    @Craigacp
    Ok.
    Jakob Sultan Ericsson
    @jakeri
    I'll submit it tomorrow. Getting late in Europe.
    Karl Lessard
    @karllessard
    I’m trying to think what has changed that much between 0.2.0 and 0.4.0. Definitely the TString backend has completely changed. Other than that, we are now automatically mapping tensor native memory to the JVM, but I can’t see how that could impact the results of a session.
    Jakob Sultan Ericsson
    @jakeri
    argh. When I created an isolated test case for this, it magically started to work. I'm trying to figure out why this is happening. I see one difference from my failing run: this row is present:
    Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
    Exactly the same test.
    But our real code has a lot more dependencies.
    Jakob Sultan Ericsson
    @jakeri
    😭 I accidentally copied in the mkl-version of macosx-x86_64
    So the bug is that macosx-x86_64-mkl is not a good thing to use.
    We have seen problems with this one before (on linux) and realized that it was basically slower and more memory intensive than the regular build.
    Karl Lessard
    @karllessard
    Yes, the MKL version has never shown any improvement compared to the vanilla version, and sometimes even the opposite. It is still a mystery why (and to be honest, nobody has spent a lot of time figuring it out either), but MKL support in the TF native library itself seems a bit clumsy, afaik.
    So you are saying that you don’t have any issue with the vanilla version?
    Jakob Sultan Ericsson
    @jakeri
    Correct. It works fine now. It was a classic copy & paste snafu. I accidentally copied the mkl-version into our pom.xml when I was upgrading.
    Sorry for that.
    Karl Lessard
    @karllessard
    Don’t be sorry, that’s good news! :)
    Karl Lessard
    @karllessard
    General announcement: Probably some of you already know, but Google launched its official forum platform for TensorFlow a few days ago: https://discuss.tensorflow.org/
    I invite you all to subscribe to it and start posting your questions, solutions, suggestions or anything you want to talk about regarding TensorFlow Java on this platform from now on.
    To do so, make sure to tag your posts with java and/or sig_jvm so they get categorized and filtered properly
    Gili Tzabari
    @cowwoc
    @perfinion You suggested using tf.stack to combine a = [1, 2, 3] and b = [4, 5] into tensor c which is [1, 2, 3, 4, 5]. Can you elaborate? As far as I can see, tf.stack will just stack the two tensors on top of each other instead of concatenating the vectors into a [1, 5] shape.
    Karl Lessard
    @karllessard
    @cowwoc I think what you want is tf.concat, did you try it out?
    Jason Zaman
    @perfinion
    oh, yeah, you want either tf.stack or tf.concat, depending on which way you want to combine them; they both take an axis= param telling which way to combine them
    stack is nicer if you have a list of tensors, concat is nicer if you have separate tensors, but all those functions can be used to do anything, so just use whichever is cleaner
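    For reference, a minimal sketch of the difference (Python, TF 2.x; the tensors are illustrative only):

        import tensorflow as tf

        a = tf.constant([1, 2, 3])
        b = tf.constant([4, 5, 6])

        # stack adds a new axis: result shape is (2, 3)
        stacked = tf.stack([a, b], axis=0)        # [[1, 2, 3], [4, 5, 6]]

        # concat joins along an existing axis: result shape is (6,)
        concatenated = tf.concat([a, b], axis=0)  # [1, 2, 3, 4, 5, 6]

    Note that tf.stack needs all inputs to have the same shape (which is why b has three elements here), whereas tf.concat only needs the non-concatenated dimensions to match.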
    Gili Tzabari
    @cowwoc
    @karllessard Yes, my example is actually not precise enough. I actually have a = [1, 2, 3] and b = [[4, 5], [6, 7]], and I want to end up with c = [1, 2, 3, 4, 5, 6, 7]. I am currently using c = tf.concat([a, tf.reshape(b, shape=[-1])], axis=0), but I was wondering if there is an easier/more readable way to collapse and concatenate everything in a single step.
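    Spelled out, the flatten-and-concatenate approach looks roughly like this (a sketch, assuming TF 2.x in Python; tf.concat takes a single list of tensors plus an axis):

        import tensorflow as tf

        a = tf.constant([1, 2, 3])
        b = tf.constant([[4, 5], [6, 7]])

        # flatten b to rank 1, then concatenate along axis 0 -> [1 2 3 4 5 6 7]
        c = tf.concat([a, tf.reshape(b, [-1])], axis=0)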
    Gili Tzabari
    @cowwoc

    I've got inputs with different dtypes. One is an int64, another is a float64. I saw a tutorial on feeding TensorFlow multiple inputs where they used tf.concatenate() to combine the various inputs, but when I tried to do the same I got:

    Tensor conversion requested dtype int64 for Tensor with dtype float32: <tf.Tensor 'concatenate/Cast:0' shape=(None, 4180, 5) dtype=float32>

    Any ideas?

    Adam Pocock
    @Craigacp
    You don't need to concatenate the inputs if there are two separate input placeholders; you feed each input to the appropriate placeholder.
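    A minimal sketch of that idea with the Keras functional API, assuming two inputs with different dtypes (the names and shapes here are made up); the dtypes only get mixed inside the model, after a cast:

        import tensorflow as tf

        # two separate inputs, each fed with its own dtype
        int_in = tf.keras.Input(shape=(1,), dtype=tf.int64, name="counts")
        float_in = tf.keras.Input(shape=(4,), dtype=tf.float32, name="measurements")

        # cast the integer input before combining it with the float features
        x = tf.keras.layers.Concatenate()([tf.cast(int_in, tf.float32), float_in])
        out = tf.keras.layers.Dense(1)(x)

        model = tf.keras.Model(inputs=[int_in, float_in], outputs=out)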
    Gili Tzabari
    @cowwoc
    @Craigacp https://www.tensorflow.org/guide/autodiff#3_took_gradients_through_an_integer_or_string says "Integers and strings are not differentiable"... Does that mean I can't use int* types at all?
    Or am I misunderstanding something?
    Adam Pocock
    @Craigacp
    It depends what you're using the int for. For example, MNIST is usually stored as integers in the range 0-255, and you can feed that into the model. Usually the first step in the model is then to convert it into a float and proceed as normal. There, as you aren't taking gradients of the conversion procedure, it doesn't matter. It's also useful to feed in other tensors to control model behaviour (e.g. to use an integer step or epoch counter to control the learning rate); again, these usually aren't involved in the gradient updates, so it doesn't matter.
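    That first conversion step is just a cast, e.g. (a toy sketch, values made up):

        import tensorflow as tf

        # e.g. MNIST-style integer pixels in [0, 255]
        pixels = tf.constant([[0, 128, 255]], dtype=tf.uint8)

        # first step inside the model: convert to float, then proceed as usual
        x = tf.cast(pixels, tf.float32) / 255.0   # [[0.0, ~0.502, 1.0]]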
    Gili Tzabari
    @cowwoc
    The integers in my case represent timestamps as time since epoch.
    Adam Pocock
    @Craigacp
    And you want to use them as features in your model?
    Gili Tzabari
    @cowwoc
    Yes. I'm dealing with behavior that is tied into weather and weather follows certain patterns as a function of time. I've also got outdoor temperature as an input but I'm thinking (just a guess) it can't hurt to add in the timestamp.
    I've actually also got a second case of integers... I've got inputs that are enums, so I converted their ordinal value to an int. There I can obviously just cast it to a float. It's the timestamps where things get more complicated.
    Adam Pocock
    @Craigacp
    I wouldn't pass in a timestamp to an ML system as a monotonically increasing integer. It's probably better to split it out into categoricals which represent months, days, possibly the season, along with the hour of the day. If you pass in the timestamp directly then the model has to expend capacity learning the cyclic behaviour and parsing the timestamp.
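    A rough sketch of that kind of split, assuming Unix-epoch seconds and plain Python preprocessing before the data reaches the model (the exact set of features is just one option):

        import datetime

        def timestamp_features(ts_seconds):
            """Split a Unix timestamp into categorical time features."""
            dt = datetime.datetime.fromtimestamp(ts_seconds, tz=datetime.timezone.utc)
            return {
                "month": dt.month,               # 1-12
                "day_of_week": dt.weekday(),     # 0-6
                "hour": dt.hour,                 # 0-23
                "season": (dt.month % 12) // 3,  # 0=winter .. 3=autumn (northern hemisphere)
            }

        timestamp_features(1620000000)  # {'month': 5, 'day_of_week': 0, 'hour': 0, 'season': 1}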
    Gili Tzabari
    @cowwoc
    Okay. So if later on I want a model that also predicts the timestamp of an event (e.g. it is currently 20 degrees, predict what time we will hit 23 degrees) should the output again contain the timestamp broken down into time categoricals?
    Adam Pocock
    @Craigacp
    Yeah I think that's probably easiest. Otherwise it's hard to parse the signal.
    Plus if the loss is split out into different chunks you can reward the model for predicting the correct hour & day even if it gets the number of minutes wrong.
    Whereas with a timestamp it's harder to design the loss function to do that.
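    A sketch of what a split-out loss could look like, assuming the model emits separate hour-of-day and day-of-week predictions (all names and shapes here are invented for illustration):

        import tensorflow as tf

        # hypothetical: two output heads, one over 24 hour classes, one over 7 day classes
        hour_logits = tf.random.normal([8, 24])
        day_logits = tf.random.normal([8, 7])
        hour_true = tf.random.uniform([8], maxval=24, dtype=tf.int32)
        day_true = tf.random.uniform([8], maxval=7, dtype=tf.int32)

        # each component gets its own loss term, so the model is rewarded for a correct day
        # even when the hour is off, and vice versa
        hour_loss = tf.keras.losses.sparse_categorical_crossentropy(hour_true, hour_logits, from_logits=True)
        day_loss = tf.keras.losses.sparse_categorical_crossentropy(day_true, day_logits, from_logits=True)
        total_loss = tf.reduce_mean(hour_loss + day_loss)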
    Gili Tzabari
    @cowwoc
    Hmm, I found an interesting tutorial at https://www.tensorflow.org/tutorials/structured_data/time_series#time ... they break down timestamps into sin/cos components, which I would never have thought to do (sketch below).
    So, what's the point of TensorFlow having integer, boolean, etc. types if only float is really usable? Are they there just to let you convert integers to float on the graph (late binding)? And do you always need to convert to float before feeding the values into an Input node?
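    The sin/cos encoding from that tutorial looks roughly like this (a sketch; the period here is one day in seconds, and the function name is made up):

        import numpy as np

        DAY = 24 * 60 * 60  # seconds in a day

        def time_of_day_signal(timestamp_seconds):
            """Encode time-of-day as two smooth, cyclic features."""
            angle = timestamp_seconds * (2 * np.pi / DAY)
            return np.sin(angle), np.cos(angle)

        time_of_day_signal(0)        # (0.0, 1.0)  -> midnight
        time_of_day_signal(DAY / 4)  # (1.0, ~0.0) -> 06:00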
    Adam Pocock
    @Craigacp
    TensorFlow is a computation graph and an autodiff system. The autodiff only applies to floats, as gradients are harder to define on non-continuous spaces. But you can use the computational graph on other types just fine. For example, if you're doing object detection, that's going to return a bounding box on an image which needs to be integers to line up with the pixels, so the natural return type is an integer tensor. Also, boolean is useful for controlling graph elements with tf.cond (i.e. if statements).
    You can compute functions of integer tensors without any trouble, but if you want to differentiate those functions to perform gradient descent that's where you hit the issue.
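    For instance, a boolean tensor steering tf.cond (a toy sketch; no gradient ever flows through the boolean):

        import tensorflow as tf

        x = tf.constant(4.0)
        use_square = tf.constant(True)  # boolean tensor controlling the graph

        # tf.cond picks a branch when the graph runs
        y = tf.cond(use_square, lambda: tf.square(x), lambda: tf.sqrt(x))  # y == 16.0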
    Gili Tzabari
    @cowwoc
    Don't you have to perform gradient descent on all nodes in the graph for backprop to work?
    I mean, what's the point of having nodes in the graph that autodiff does not run on? When would that be fine?
    Adam Pocock
    @Craigacp
    All nodes between your inputs and outputs.
    Gili Tzabari
    @cowwoc
    Sorry, what? You're saying that you do or do not need all nodes between your inputs and outputs to be differentiable?
    Adam Pocock
    @Craigacp
    You need all the nodes that connect your outputs to the parameters you want to learn to be differentiable.
    Gili Tzabari
    @cowwoc
    Right. So when would you want to use non-differentiable nodes in Tensorflow? What lives outside the path between the input and output nodes?
    Adam Pocock
    @Craigacp
    You can add nodes which trigger printouts or saving based on specific computation conditions, you can construct the paths that you want to load data from, and you can perform operations on the outputs of your ML model (e.g. in the bounding box example, you might want to colour the boxes based on the probability of correct classification).
    All these things you can add into the computational graph.
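    For example, a printout node triggered by a condition on the computation (a toy sketch; the function and threshold are made up, and AutoGraph turns the if into a graph op):

        import tensorflow as tf

        @tf.function
        def train_step(step, loss):
            # a conditional printout node in the graph; it never needs a gradient
            if loss > 10.0:
                tf.print("high loss at step", step, ":", loss)
            return loss * 0.9  # stand-in for the real update

        train_step(tf.constant(1), tf.constant(12.5))  # prints, returns 11.25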