    Hi guys, this is a more general question. I have successfully implemented LSTM models for anomaly detection on IoT devices; huge thanks to deeplearning4j.
    Now I have another possible case on my table. Imagine a building: material comes in, and multiple machines for different purposes process it as it flows through them (it can be steel blocks, etc., simply some material with attributes such as weight, size, and many more). We have existing, human-built 3D models and simulations of this material flow through the machines in the building, modeled down to a very fine level of detail; think of it as a fully featured model of the manufacturing process, right down to a button on some panel. What do you think AI could do here, especially to improve the efficiency of the process, so that the material flow is improved and the time to produce the finished goods is reduced? And, in very rough numbers, how difficult would it be to design and implement such an AI? I am only looking for some kind of prototype. Thanks for any feedback.
    I am more interested in what kind of existing approach could be used for this type of problem. AI is already used to make object shapes more efficient, by studying the physics and improving the shape of aircraft, so I am looking for something similar, except that my "laws of physics" take the form of a process configuration: building size; machine sizes, speeds, etc.; material attributes; the capabilities of the humans working with the machinery. I understand this is already quite a challenging task, but can someone just point me in the right direction, to a possibly existing study paper or sample prototype?
    Fei Hu
    Hi guys, do you know how, in REGISTER_OP, to define an attribute whose default is float infinity?
    I know this is off topic here, but we are trying to gather as much data as we can. Can you please help me understand your use of agile methods by completing this one-minute survey? https://www.surveymonkey.com/r/98JMTJ2
    Hi guys, I want to use TensorFlow to build a word2vec + NN model. The NN input dimension is [500, 532, 128], where 500 is the batch size. When I build the first layer, I set the weight dimension to [532, 128, 100], but it did not work and shows me this: "In[0].dim(0) and In[1].dim(0) must be the same: [500,532,128] vs [532,128,100]". How do I solve this problem?
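For what it's worth, that error reports a shape mismatch in the contraction: for an input of shape [batch, seq, embed], the first layer's weight should be 2-D, [embed, hidden], not [seq, embed, hidden]. A minimal sketch of the shape arithmetic in NumPy (the 500/532/128 sizes are scaled down so it runs quickly; 100 hidden units is my assumption):

```python
import numpy as np

# stand-ins for the question's [500, 532, 128] input and a 100-unit first layer
batch, seq, embed, hidden = 5, 6, 8, 10

x = np.random.rand(batch, seq, embed)   # input: [batch, seq, embed]
w = np.random.rand(embed, hidden)       # weight: [embed, hidden], NOT [seq, embed, hidden]

# the input's last axis contracts against the weight's first axis
y = x @ w
assert y.shape == (batch, seq, hidden)
```

In TF 1.x graph mode the same contraction can be written as tf.tensordot(x, w, axes=[[2], [0]]), or by reshaping x to [batch * seq, embed], applying tf.matmul, and reshaping back.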
    Does anyone use tensorflow.js?
    Traceback (most recent call last):
    File "C:/work/object_detection/Models/research/object_detection/object_detection_tutorial.py", line 69, in <module>
    label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
    AttributeError: module 'utils.label_map_util' has no attribute 'load_labelmap'
    I'm getting this error; no clue why load_labelmap is not found.
    I printed dir(label_map_util) and got this: ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'cv2', 'label_map_util', 'np', 'os', 'sys', 'tf', 'vis_util']
    Check the label_map_util file; it may not contain the load_labelmap function.
    Vivek Aswal
    Hello everyone!! Any recommendations on text classification in TensorFlow.js?
    Bigyan Karki
    Hey everyone. I am trying to solve this problem; any help or suggestions would be hugely appreciated. Here is the link: http://www.bigyankarki.com/natural-commentary-in-ea-sports-fifa/
    Can you use the tf.data.Dataset API with tf.contrib.slim?
    Hello, everybody. This is my first day in the TensorFlow room.
    hello, everybody.
    hello, everybody.
    Jegathesan Shanmugam
    Hello, everyone. Please find the TensorFlow container setup with NVIDIA GPU support for Ubuntu 16.04 here: https://gist.github.com/nullbyte91/0f969cc2d41dd052d52bef378918c163
    Lukasz Zmudzinski

    Hey there, maybe someone knows what's up here: I'm trying to use my own dataset with the object detection API in TensorFlow and I get the following error:

    Traceback (most recent call last):
      File "D:\Work\Python Stuff\models-master\research\object_detection\model_main.py", line 109, in <module>
      File "D:\Work\Anaconda\envs\vehicle-detection\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
      File "D:\Work\Python Stuff\models-master\research\object_detection\model_main.py", line 71, in main
      File "D:\Work\Anaconda\envs\vehicle-detection\lib\site-packages\object_detection-0.1-py3.5.egg\object_detection\model_lib.py", line 589, in create_estimator_and_inputs
        model_config=model_config, predict_input_config=eval_input_configs[0])
    IndexError: list index out of range

    I'm not sure what is wrong here.
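The IndexError on eval_input_configs[0] means the pipeline file produced an empty list of eval input configs. One guess worth checking (the paths below are placeholders, not real ones): the pipeline .config may be missing, or have a typo in, its eval_input_reader block, which normally looks roughly like:

```
eval_config: {
  num_examples: 100
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "PATH/TO/eval.record"
  }
  label_map_path: "PATH/TO/label_map.pbtxt"
}
```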

    Hey guys, I need some help, please.
    I used tf.estimator.DNNRegressor to predict a couple of prices, but how do I compare the predicted values against the target values?
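One way to do that (a sketch, assuming you first materialize the estimator's predictions into a list, e.g. preds = [p["predictions"][0] for p in estimator.predict(input_fn=eval_input_fn)]): score the collected predictions against the targets with metrics such as RMSE and MAE:

```python
import numpy as np

preds = np.array([10.2, 19.5, 31.0])    # hypothetical predicted prices
targets = np.array([10.0, 20.0, 30.0])  # hypothetical true prices

rmse = np.sqrt(np.mean((preds - targets) ** 2))  # root mean squared error
mae = np.mean(np.abs(preds - targets))           # mean absolute error
print(rmse, mae)
```

Plotting preds against targets (e.g. a scatter plot with a y = x reference line) is another common way to eyeball regression quality.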
    Hi everyone....
    Can anybody explain which parameters face detection accuracy depends on?
    Because when I train on a dataset with some images and test with other images, I still get 1.00 accuracy...
    Sebastian Riegelbauer


    Do you know if a pretrained ResNet-101 TensorFlow model exists for the MPII human pose dataset? I could not find one so far.

    Mukul Agrawal
    Hey everyone, this is Mukul. I am learning TensorFlow and have a question. I am using get_variable and named the variable x, but I don't have any variable named x initialized beforehand. When I print the value of x, it always shows some random value. In a situation like this, where no variable exists and I still use get_variable, will it always show a random value? Does that value have a range?
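If I remember right, tf.get_variable with no explicit initializer falls back to TF 1.x's default, the Glorot (Xavier) uniform initializer, so yes: the values are random, but they do have a range. A sketch of that range in NumPy (the 532x100 shape is just illustrative):

```python
import numpy as np

# Glorot uniform draws from U(-limit, limit), limit = sqrt(6 / (fan_in + fan_out))
fan_in, fan_out = 532, 100
limit = np.sqrt(6.0 / (fan_in + fan_out))

sample = np.random.uniform(-limit, limit, size=(fan_in, fan_out))
assert np.all(np.abs(sample) <= limit)  # every element stays within the bound
```

Passing an explicit initializer (e.g. tf.zeros_initializer()) to get_variable removes the randomness entirely.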
    hello, everybody
    Adam Goh

    Hi guys, I'm currently attempting to use TFLite from C++ to load an existing .pb file. After compiling successfully with bazel, I encounter this error at this code:

    tflite::MutableOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    Didn't find op for builtin opcode 'ADD' version '1'
    Registration failed.

    What does this mean, and how can I deal with it? I tried to check whether I had added all the necessary header files, but to no avail so far.

    FYI, the model is a tflite::FlatBufferModel loaded using FlatBufferModel::BuildFromFile, if that matters to anyone.
    Adam Goh
    I found it out; turns out I should be using tflite::ops::builtin::BuiltinOpResolver instead of MutableOpResolver.
    Getting this error while trying to run the TensorFlow object detection API.
    Armando Fandango
    Which git branch is being used for TF 2.0 preview development, and where can I find the documentation for the TF 2.0 preview, please?
    Where can I find datasets to train my model in tensorflow.js? Which data formats does tensorflow.js support?
    Does anyone have experience with mixed-precision training using the TensorFlow Estimator API? The error messages are also relatively uninformative: "Tried to convert 'x' to a tensor and failed. Error: None values not supported"
    Hi! Has anyone used Google BERT for next sentence prediction?
    Hello all. Are there any advantages to running TensorFlow on a local machine rather than in a container like Docker?
    Sathyamoorthy R
    Hello there, have you worked with TensorFlow Serving?
    Shawn Tian
    Does anyone know when object detection for TF 2.0 will be released?
    Elvin Ugonna
    I am using flags in my TensorFlow code, and I had not been getting errors until I upgraded from TF 1.x to TF 2.0; these are the errors I am getting. I need assistance resolving this issue.

    def del_all_flags(FLAGS):
        flags_dict = FLAGS._flags()
        keys_list = [keys for keys in flags_dict]
        for keys in keys_list:
            FLAGS.__delattr__(keys)

    flags = tf.app.flags

    FLAGS = tf.app.flags.FLAGS

    flags.DEFINE_float("learning_rate", default=0.0001, help="Initial learning rate.")
    flags.DEFINE_integer("epochs", default=700, help="Number of epochs to train for.")
    flags.DEFINE_integer("batch_size", default=128, help="Batch size.")
    flags.DEFINE_integer("eval_freq", default=400, help="Frequency at which to validate the model.")
    flags.DEFINE_float("kernel_posterior_scale_mean", default=-0.9, help="Initial kernel posterior mean of the scale (log var) for q(w).")
    flags.DEFINE_float("kernel_posterior_scale_constraint", default=0.2, help="Posterior kernel constraint for the scale (log var) for q(w).")
    flags.DEFINE_float("kl_annealing", default=50, help="Epochs to anneal the KL term (anneals from 0 to 1).")
    flags.DEFINE_integer("num_hidden_layers", default=4, help="Number of hidden layers.")

                     help="Network draws to compute predictive probabilities.")

    tf.compat.v1.app.flags.DEFINE_string('f', '', 'kernel')

    I end up getting this error while doing some manipulation: DuplicateFlagError: The flag 'batch_size' is defined twice. First from D:/Python/workspace/FCN_dataset/FCN.tensorflow-master/FCN.py, Second from D:/Python/workspace/FCN_dataset/FCN.tensorflow-master/FCN.py. Description from first occurrence: batch size for training and
    TypeError: delattr() missing 1 required positional argument: 'flag_name'
    Mihai Maruseac
    Please use triple backticks to wrap your code so that the formatting is preserved and it is easy to read.
    Frank Ottey

    Hello everyone - I'm trying to generate a SavedModel that does some initialization work after loading a graph. When using the tf.saved_model.Builder class, I can pass an operation to the init_op parameter for this purpose.

    Some of the initialization I would like to happen is for some tf.contrib.lookup.HashTable objects to be initialized. I know I can get these objects' initializer operations by invoking table_var.initializer. What I would additionally like to do, after initializing these tables, is add them to a collection via tf.add_to_collection(...). I want this because I need to access the table from some other point in the same session/graph, where I won't have a Python reference to it, so my solution was to store the object in a collection and fetch it back by key. As far as I can tell, though, tf.add_to_collection(...) isn't an operation, and even if it were, I don't know how to create an operation that sequences that function call after the table initializer. I know tf.control_dependencies exists, but it returns a context manager, not an op, which still only lets me call tf.add_to_collection(...) eagerly again.

    Ideally, I would wish something like the following would work:

    table = tf.contrib.lookup.HashTable(initializer=tf.contrib.lookup.TextFileIdTableInitializer(
    init_op = tf.Operation(node_def=lambda: tf.add_to_collection(table.name, table),
    # Elsewhere in the universe
    key = '1'
    with tf.Session() as sess:
        sess.run(init_op) # this is only for demonstration - I want init_op to be an actual tf.Operation because I want to pass it to the init_op parameter of the tf.saved_model.Builder class's __init__ method...
        [ table_handle ] = tf.get_collection('my_table')
        value = table_handle.lookup(tf.constant(value=key))

    Does anyone know a way I might be able to achieve the above?

    Alternatively, when resources like tables are initialized, are they automatically placed somewhere I could retrieve them, and if so, how? I think the table initializer itself is stored in tf.GraphKeys.TABLE_INITIALIZERS, but I don't want to initialize the table in "elsewhere"; I just want to fetch the existing, initialized table.

    Sebastian Riegelbauer
    Hi guys! I would like to implement a deterministic CNN model with TensorFlow. PyTorch uses torch.backends.cudnn.deterministic = True
    (https://pytorch.org/docs/stable/notes/randomness.html). Is there anything comparable in TensorFlow?
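I'm not aware of a single switch that matches torch.backends.cudnn.deterministic, but a commonly suggested combination is below (my assumptions: the TF_DETERMINISTIC_OPS environment variable works on recent TF builds, and you still need to seed every random number generator yourself):

```python
import os
import random

import numpy as np

# opt into deterministic GPU kernels where TF supports it (assumption: recent TF builds)
os.environ["TF_DETERMINISTIC_OPS"] = "1"
os.environ["PYTHONHASHSEED"] = "0"

# seed every random number generator in play
random.seed(0)
np.random.seed(0)
# plus, inside TensorFlow itself: tf.compat.v1.set_random_seed(0)

# sanity check that seeding makes NumPy draws repeatable
np.random.seed(0)
a = np.random.rand(3)
np.random.seed(0)
b = np.random.rand(3)
assert np.array_equal(a, b)  # same seed, same draws
```

Even with all of this, some GPU ops may remain nondeterministic depending on the TF version, so it's worth verifying by running training twice and diffing the losses.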
    What are the best deep learning books for learning everything?
    John F. Davis
    Hello, has anyone built TF from source and hit a linker error?
    Mihai Maruseac
    What is the linker error? What operating system?
    John F. Davis
    I have a weird problem with gcloud and TensorBoard. Sometimes I get curves in TensorBoard, but mostly I get just a single dot for evaluate. Any idea why?
    I am using a custom Estimator. I want to save the best model and use it on another computer to perform prediction; it is not necessarily for production model serving. Right now I can save the recent models, and I see the BestExporter class, but I could not understand why and how I should provide a serving input function to save the best model based on evaluation. Would you please help? I tried several pieces of code, but none of them worked for me. I am using TFRecord files and have training and validation input functions.