`a = [1, 2, 3]` and
`b = [4, 5]` into tensor `c`, which is
`[1, 2, 3, 4, 5]`. Can you elaborate? As far as I can see,
`tf.stack` will just stack the two tensors on top of each other instead of concatenating the vectors into a single flat tensor.
`a = [1, 2, 3]` and
`b = [[4, 5], [6, 7]]`, and I want to end up with
`c = [1, 2, 3, 4, 5, 6, 7]`. I am currently using
`c = tf.concat([a, tf.reshape(b, shape=[-1])], axis=0)`, but I was wondering if there is an easier/more-readable way to collapse and concatenate everything in a single step.
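A minimal sketch of the flatten-then-concatenate pattern, shown with NumPy so it runs standalone (the TensorFlow equivalent is `tf.concat([a, tf.reshape(b, [-1])], axis=0)`); the array values are taken from the question:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([[4, 5], [6, 7]])

# Flatten b to rank 1, then concatenate along axis 0.
c = np.concatenate([a, b.reshape(-1)], axis=0)
print(c)  # [1 2 3 4 5 6 7]
```

As far as I know there is no single TF op that flattens and concatenates ragged-rank inputs in one call, so reshape-then-concat is the usual idiom.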
I've got inputs with different dtypes: one is int64, another is float64. I saw a tutorial on feeding TensorFlow multiple inputs where they used
`tf.concatenate()` to combine the various inputs, but when I tried to do the same I got:
Tensor conversion requested dtype int64 for Tensor with dtype float32: <tf.Tensor 'concatenate/Cast:0' shape=(None, 4180, 5) dtype=float32>
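A common fix is to cast every input to one common dtype before concatenating. A sketch with NumPy (in TensorFlow the analogous call is `tf.cast(x, tf.float32)` on each input); the values and shapes below are illustrative, not taken from the question:

```python
import numpy as np

ints = np.array([1, 2, 3], dtype=np.int64)
floats = np.array([0.5, 1.5, 2.5], dtype=np.float64)

# Cast both inputs to a common dtype before concatenating,
# mirroring tf.cast(..., tf.float32) on each input tensor.
common = np.float32
combined = np.concatenate([ints.astype(common), floats.astype(common)])
print(combined.dtype)  # float32
```

float32 is the usual choice because most Keras layers default to it, so downstream layers need no further casting.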
`tf.cond` (i.e. if statements).
`sin` of the timestamp. Is there a point to passing both to the model as input? Or do they only use one of them?
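For context on why such tutorials pass both: a single sine is ambiguous, since two different times in the cycle can share the same sin value, while the (sin, cos) pair identifies a unique point on the circle. A small NumPy illustration, assuming a 24-hour period:

```python
import numpy as np

period = 24.0       # hours; assumed daily cycle
t1, t2 = 3.0, 9.0   # two different hours of the day

sin1 = np.sin(2 * np.pi * t1 / period)
sin2 = np.sin(2 * np.pi * t2 / period)
# sin alone cannot tell these two times apart:
print(np.isclose(sin1, sin2))  # True

cos1 = np.cos(2 * np.pi * t1 / period)
cos2 = np.cos(2 * np.pi * t2 / period)
# ...but the cos component differs, so (sin, cos) is unambiguous.
print(np.isclose(cos1, cos2))  # False
```

So the model genuinely needs both features to recover the exact position in the cycle.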
`math.nan` in place of sensor data, but this broke training (the loss function returned
`nan`). What should I do in this case? Set the values to zero? Set them to random values? Ideally I want the model to skip them and not use them for training.
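One common approach (a sketch, not from the thread): replace the NaNs with a neutral filler value and mask those positions out of the loss, so they contribute nothing to the gradient. Shown with NumPy; in TensorFlow/Keras the same idea is usually expressed as a masked loss or via `sample_weight`:

```python
import numpy as np

targets = np.array([1.0, np.nan, 3.0, np.nan])
preds = np.array([0.5, 9.9, 2.0, -4.0])

# Mask of valid entries; NaN targets are excluded from the loss.
valid = ~np.isnan(targets)

# Replace NaNs with 0 so arithmetic stays finite; the mask ensures
# these filler values never influence the loss.
safe_targets = np.where(valid, targets, 0.0)

sq_err = (preds - safe_targets) ** 2
loss = sq_err[valid].mean()  # mean over valid positions only
print(loss)  # 0.625
```

Zeroing without the mask would still bias training toward the filler value; the mask is what makes the model actually "skip" the missing readings.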
We have gone from TF Java 0.2.0 to 0.3.1 on our Linux hosts.
Basically we only changed what was needed so that it compiles. We load and unload a bunch of different SavedModels.
We are now experiencing OutOfMemory errors in Bytedeco, as if the native memory is not reclaimed. We estimate the size of each incoming SavedModel and use a Guava cache, weighted by that approximate size, to keep loaded models to roughly half of the host's available memory.
We are closing all Tensors, SavedModelBundles, etc. (nothing really changed here from 0.2.0).
We are about to try 0.4.0-SNAPSHOT and also put together a more specific test case.