Are there any examples that show the usage of the Java framework API for basic workflow like this?
```python
training_images = training_images / 255.0
test_images = test_images / 255.0

model = tf.keras.models.Sequential([
    # tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
```
Specifically, I don't see the equivalent of `Placeholder`. Do I invoke `Ops.tensorArray()` of the same size as the placeholder, then populate that, then eventually invoke `Runner.feed(placeholder, array)`? Or is there a better way?
You can create a `TFloat32`, either an empty one or by copying from some source, then feed it to the placeholder. Note that the empty one returns memory which has not been zeroed yet; see tensorflow/java#271.
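To make that concrete, here is a minimal sketch of the allocate-and-feed pattern, assuming the TF-Java 0.x `org.tensorflow` API; the tiny "multiply by two" graph is just an illustration, and exact method names may differ between releases:

```java
import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.ndarray.Shape;
import org.tensorflow.ndarray.StdArrays;
import org.tensorflow.op.Ops;
import org.tensorflow.op.core.Placeholder;
import org.tensorflow.types.TFloat32;

public class PlaceholderFeed {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);
      // Graph-mode input: the rough equivalent of a TF1 Placeholder.
      Placeholder<TFloat32> x =
          tf.placeholder(TFloat32.class, Placeholder.shape(Shape.of(2, 2)));
      var doubled = tf.math.mul(x, tf.constant(2.0f));

      // Build a TFloat32 by copying from a Java array. TFloat32.tensorOf(Shape.of(2, 2))
      // would instead allocate an empty tensor (not zeroed, per tensorflow/java#271).
      try (TFloat32 input =
               TFloat32.tensorOf(StdArrays.ndCopyOf(new float[][] {{1f, 2f}, {3f, 4f}}));
           Session s = new Session(g)) {
        TFloat32 out = (TFloat32) s.runner().feed(x, input).fetch(doubled).run().get(0);
        System.out.println(out.getFloat(1, 1)); // 8.0
      }
    }
  }
}
```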
My plan is to do `model.fit()` in Python until I get something working... then I can move `model.fit()` to Java, while `model.predict()` would always sit in Java.
So to some extent it depends on how complicated your model is. Tribuo's next release exposes TF models but wraps up all the fitting, evaluation, and prediction in its interface to make it a lot simpler. It's not the same as Keras; it's a little more like scikit-learn, as we don't have callbacks in Tribuo.
However, TF-Java will have this in the future; it's just a lot of stuff to build with a much smaller team than the Keras team.
Some parts of my data are `int32`; other parts are `float64`. Any ideas?
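One option (a sketch; the column layout and helper name are hypothetical) is to cast everything to `float32` on the Java side before building tensors, since a single dense input expects one dtype; casting inside the graph with `tf.dtypes.cast` is the other route:

```java
public class DtypeCast {
  /**
   * Hypothetical helper: merge an int32 column and a float64 column into one
   * float32 feature matrix, since a Dense input layer expects a single dtype.
   */
  static float[][] toFloat32(int[] intCol, double[] doubleCol) {
    float[][] features = new float[intCol.length][2];
    for (int i = 0; i < intCol.length; i++) {
      features[i][0] = intCol[i];            // int32 -> float32 (exact up to 2^24)
      features[i][1] = (float) doubleCol[i]; // float64 -> float32 (loses precision)
    }
    return features;
  }

  public static void main(String[] args) {
    float[][] f = toFloat32(new int[] {7, 8}, new double[] {0.25, 0.5});
    System.out.println(f[0][0] + " " + f[1][1]); // 7.0 0.5
  }
}
```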
Okay, thank you.
Another question: my dataset is dynamically generated from a database. All the online tutorials I've seen focus on pulling in an existing dataset (e.g. MNIST) or reading CSV files from disk. Neither seems like a good fit for my case. Should I be "streaming" data from the database to the model for training somehow? Or am I expected to construct a fixed-size tensor and populate it column by column from the database resultset?
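Either can work; a common middle ground is to pull rows in fixed-size batches and copy each batch into a tensor before feeding it, so the full dataset is never materialized in memory. A sketch of the batching half (the `Row` record is a hypothetical stand-in for one JDBC `ResultSet` row; in real code the iterator would wrap the resultset):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class BatchFeeder {
  /** Stand-in for one database row; real code would read this from a ResultSet. */
  record Row(float feature, float label) {}

  /**
   * Drain the row iterator into fixed-size [feature, label] batches. Each batch
   * could then be copied into a TFloat32 and fed to the training op. A partial
   * final batch is simply dropped in this sketch.
   */
  static List<float[][]> toBatches(Iterator<Row> rows, int batchSize) {
    List<float[][]> batches = new ArrayList<>();
    float[][] current = new float[batchSize][2];
    int i = 0;
    while (rows.hasNext()) {
      Row r = rows.next();
      current[i][0] = r.feature();
      current[i][1] = r.label();
      if (++i == batchSize) {
        batches.add(current);
        current = new float[batchSize][2];
        i = 0;
      }
    }
    return batches;
  }

  public static void main(String[] args) {
    List<Row> fake = List.of(new Row(1f, 0f), new Row(2f, 1f),
                             new Row(3f, 0f), new Row(4f, 1f), new Row(5f, 0f));
    List<float[][]> b = toBatches(fake.iterator(), 2);
    System.out.println(b.size());       // 2 full batches (the 5th row is dropped)
    System.out.println(b.get(1)[0][0]); // 3.0
  }
}
```

Because the tensor for each batch always has the same fixed shape, the same placeholder (or the same exported training function) can be reused for every batch.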