    khanyuriy
    @khanyuriy
    hi, all! I walked through this tutorial https://www.tensorflow.org/tutorials/wide and I have graph.pbtxt in the output dir, but I need the graph in .pb format to use it with TensorFlowSharp. Please point me to any way to get it.
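    A common route (a sketch, not the only way) is to freeze the graph together with a trained checkpoint into a single binary .pb using TensorFlow's freeze_graph tool; the paths and the output node name below are placeholders for this particular model:
        from tensorflow.python.tools import freeze_graph

        freeze_graph.freeze_graph(
            input_graph='output/graph.pbtxt',
            input_saver='',
            input_binary=False,                    # graph.pbtxt is a text proto
            input_checkpoint='output/model.ckpt',  # checkpoint saved during training
            output_node_names='output',            # replace with the real output op name
            restore_op_name='save/restore_all',
            filename_tensor_name='save/Const:0',
            output_graph='output/frozen_graph.pb',
            clear_devices=True,
            initializer_nodes='')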
    AngryPowman
    @AngryPowman
    Has anybody compiled static libraries (libtensorflow.a) for Android?
    Trevor R.H. Clarke
    @tclarke
    trying to write an input pipeline for some data in tf 1.2 python. I have a file format that has 3 bytes (number as string) followed by PNG data (variable length). I'm using the cifar10 tutorial as a starting point.
    I have a filename queue which I put into a WholeFileReader and .read() to get (filename, raw_data). I do tf.string_to_number(tf.substr(raw_data, 0, 3), tf.int32) to get the numeric value (it's the class number for the image)
    having trouble getting the png data. tried tf.image.decode_png(tf.substr(raw_data, 3, -1)) and it said the -1 wasn't a valid index. Next I tried tf.image.decode_png(tf.substr(raw_data, 3, string_length(raw_data))) where string_length is
    def string_length(t): return tf.py_func(lambda p: [len(x) for x in p], [t], [tf.int32])[0]
    with that I get ValueError: Shape must be rank -1 but is rank 0 for 'Substr' (op: 'Substr') with input shapes: [], [], ?.
    I'm guessing that might mean that my rank 0 int32 tensor can't be used? I don't want to pass in a session and eval() the string_length result forcing the whole input pipe to eval at that point. How else can I do this?
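    One possible workaround (an untested sketch, not a confirmed fix): keep the py_func but give its result a static scalar shape with set_shape, so Substr can match it against the scalar pos argument, and subtract the 3-byte header from the total length (raw_data below is the string tensor from the reader above):
        import numpy as np
        import tensorflow as tf

        def string_length(t):
            # py_func outputs have unknown static shape; declare it as a scalar
            length = tf.py_func(lambda s: np.int32(len(s)), [t], tf.int32)
            length.set_shape([])
            return length

        png_bytes = tf.substr(raw_data, 3, string_length(raw_data) - 3)
        image = tf.image.decode_png(png_bytes)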
    Shivam Kalra
    @shivamkalra
    Can anyone explain how conv2d works on a multi-channel image?
    Is the convolution calculated per channel and then summed per filter?
    Simon Ho
    @bawongfai
    Is there any way to implement multi-task learning with Estimator?
    i.e. multiple outputs with model_fn
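    One sketch of an approach (the feature/label keys and layer sizes below are made up for illustration, not an official pattern): a model_fn with two output heads trained on the sum of their losses.
        import tensorflow as tf

        def model_fn(features, labels, mode):
            hidden = tf.layers.dense(features['x'], 64, activation=tf.nn.relu)
            logits_a = tf.layers.dense(hidden, 10)   # task A: classification head
            pred_b = tf.layers.dense(hidden, 1)      # task B: regression head
            predictions = {'task_a': tf.argmax(logits_a, axis=-1), 'task_b': pred_b}

            if mode == tf.estimator.ModeKeys.PREDICT:
                return tf.estimator.EstimatorSpec(mode, predictions=predictions)

            # train on the sum of the per-task losses
            loss_a = tf.losses.sparse_softmax_cross_entropy(labels['task_a'], logits_a)
            loss_b = tf.losses.mean_squared_error(labels['task_b'], pred_b)
            loss = loss_a + loss_b
            train_op = tf.train.AdamOptimizer().minimize(
                loss, global_step=tf.train.get_global_step())
            return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op,
                                              predictions=predictions)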
    Sebastian Raschka
    @rasbt
    Shivam, regarding conv2d: say we have a 5x5 image with 3 channels, i.e. the image is 5x5x3. Now you have 2 kernels, each of size 3x3x3. The first kernel does the convolution over all three RGB channels, adds the results together, and gives you a 3x3 feature map (each patch position gives you a single scalar). Then you do the same thing with the 2nd kernel. Now you have two 3x3 feature maps and you stack them together. So the result of applying the 2 kernels to the 5x5x3 image is a 3x3x2 feature map.
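    A quick way to check that shape arithmetic in code (a minimal sketch, TF 1.x style): two 3x3x3 kernels applied to one 5x5x3 image with VALID padding produce a 3x3x2 output.
        import tensorflow as tf

        image = tf.random_normal([1, 5, 5, 3])      # one 5x5 RGB image (NHWC)
        kernels = tf.random_normal([3, 3, 3, 2])    # two 3x3 kernels spanning all 3 channels
        features = tf.nn.conv2d(image, kernels, strides=[1, 1, 1, 1], padding='VALID')

        with tf.Session() as sess:
            print(sess.run(features).shape)         # (1, 3, 3, 2)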
    Kiwee
    @oiwio
    Hey guys, I've hit a problem: when I use sess.run(), my validation set runs much more slowly than the training set, even though I don't run the optimizer on the validation set. I'd like to know what's happening. Another question: the time to run sess.run([aaa, bbb, ccc]) is approximately equal to the time to run sess.run(aaa) (or bbb, or ccc) alone; does sess.run() execute the ops in parallel?
    hhxxttxsh
    @hhxxttxsh
    Hi guys, does anyone know how to get reproducible results in TensorFlow?
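    A common starting point (a sketch, not a complete guarantee: op-level nondeterminism, especially on the GPU, can still cause run-to-run differences) is to fix all the relevant seeds before building the graph:
        import random
        import numpy as np
        import tensorflow as tf

        random.seed(42)
        np.random.seed(42)        # seeds any numpy-based preprocessing
        tf.set_random_seed(42)    # graph-level seed; set before creating ops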
    Akshay Mankar
    @akki6843
    @oiwio yes, the TensorFlow framework uses parallel execution by default
    Amitayus
    @Amitayus
    I have a problem when using bazel to 'bazel run' a model in the model zoo, i.e., github.com/models/domain_adaptation/. Though I follow the steps one by one, it outputs error information like: 'ERROR: D:/workspace/slim/BUILD:56:12: in deps attribute of py_library rule //slim:download_and_convert_cifar10: file '//tensorflow:tensorflow' is misplaced here (expected no files).' I am using Windows 10 x64 + Bazel 0.54.
    Shiva Manne
    @manneshiva
    Hi guys,
    I have been working on benchmarking commonly used frameworks/libraries for unsupervised learning of word embeddings (word2vec). Since learning embeddings is a frequently used technique, this should be helpful for many people working in this field.
    I am currently comparing tensorflow (cpu/gpu), gensim, deeplearning4j and the original C code on standard metrics like training time, peak memory usage and quality of the learned vectors.
    Link to my github repo (still working on it).
    I have directly picked up the training code for each framework from the example given in its official github repository. I ran the benchmark on the text8 corpus (I plan to run it on a much larger corpus later for the true picture), which gave me strange results.
    I would really appreciate it if you could have a look at the tensorflow code (for word2vec) and give feedback/suggest changes.
    Thanks for your time!
    Shiva.
    promach
    @promach
    What is the purpose of combining max pooling with 1x1 convolution? http://iamaaditya.github.io/2016/03/one-by-one-convolution/
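    One common reason (as in Inception-style blocks): max pooling with stride 1 preserves the channel count, so a 1x1 convolution after it is a cheap way to shrink the depth. A shape sketch, with made-up sizes:
        import tensorflow as tf

        x = tf.random_normal([1, 28, 28, 192])
        pooled = tf.nn.max_pool(x, ksize=[1, 3, 3, 1], strides=[1, 1, 1, 1], padding='SAME')
        reduced = tf.layers.conv2d(pooled, filters=32, kernel_size=1)  # 1x1 conv: 192 -> 32 channels

        print(pooled.shape)   # (1, 28, 28, 192)
        print(reduced.shape)  # (1, 28, 28, 32)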
    Ad34
    @ad34
    anyone familiar with magenta here? I'm running it on a Mac with Docker and trying to generate from my own dataset
    I successfully built the dataset, but when I start the training it seems to begin with logs like INFO:tensorflow:Starting training loop...
    INFO:tensorflow:Create CheckpointSaverHook. and then remains stuck for hours
    my load avg is 0.95 on the Docker instance, so I don't think it's a CPU issue
    ErasRasmuson
    @ErasRasmuson
    Hi, do you know of any good playground for RNNs? Like this: http://playground.tensorflow.org/
    Vaibhav Satam
    @SatioO
    Hi guys, I am using the Estimator API in tensorflow. In the input function I am reading a CSV using the decode_csv function and it's working perfectly, but one thing I don't understand is how to do preprocessing of the data with this, like imputing or transforming values. I have seen implementations where people load data using pandas, preprocess it there, and then feed it in the input function. What's the best practice for preprocessing data in tensorflow?
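    One option (a sketch assuming the tf.data Dataset API from around TF 1.4; the column schema and scaling below are made up) is to keep the preprocessing inside input_fn as extra map steps after decode_csv, so it stays part of the graph instead of a separate pandas pass:
        import tensorflow as tf

        def input_fn(csv_path, batch_size=32):
            def parse_line(line):
                # made-up schema: one float feature column and an integer label
                feature, label = tf.decode_csv(line, record_defaults=[[0.0], [0]])
                return {'x': feature}, label

            def preprocess(features, label):
                # example transform: simple scaling; imputation logic would go here too
                features['x'] = (features['x'] - 10.0) / 5.0
                return features, label

            dataset = (tf.data.TextLineDataset(csv_path)
                       .map(parse_line)
                       .map(preprocess)
                       .shuffle(1000)
                       .batch(batch_size))
            return dataset.make_one_shot_iterator().get_next()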
    Vaibhav Satam
    @SatioO

    I am writing to ask about how to feed large training data to a TensorFlow model. My training data is hosted in CSV file(s), and basically I am using the code below to load the data into a queue.

    filename_queue = tf.train.string_input_producer([...])
    reader = tf.TextLineReader()
    _, line = reader.read(filename_queue)

    # record_defaults gives one default per CSV column; the first column is assumed to be the label
    columns = tf.decode_csv(line, record_defaults=default)
    label, feature = columns[0], tf.stack(columns[1:])
    label_batch, feature_batch = tf.train.shuffle_batch(
        [label, feature], batch_size=batch_size, capacity=512,
        min_after_dequeue=256, num_threads=8)

    armundle
    @armundle
    Does anyone have ideas about fixing this: tensorflow/tensorflow#12522
    Trevor R.H. Clarke
    @tclarke
    trying to use TF-Slim to evaluate an inception_v4 net
    I'm classifying a number of images and need to evaluate repeatedly, but I won't have all the images at once, so I can't create a single batch and call evaluate_once
    I can repeatedly call evaluate_once, which works, but it reloads the checkpoint and rebuilds the net each time, which is slow
    can someone point me to a way to load the checkpoint once, then set the input batch differently and eval the net each time using TF-Slim? or do I need to use raw tensorflow to do this?
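    A sketch of the raw-TensorFlow route (not a confirmed TF-Slim recipe; it assumes the slim model zoo's nets package is on the path and that the placeholder shape and class count match the checkpoint): build the net once against a placeholder, restore the checkpoint once, then feed each new batch into session.run:
        import tensorflow as tf
        from nets import inception   # from the tensorflow/models slim package
        slim = tf.contrib.slim

        images = tf.placeholder(tf.float32, [None, 299, 299, 3])
        with slim.arg_scope(inception.inception_v4_arg_scope()):
            logits, _ = inception.inception_v4(images, num_classes=1001, is_training=False)

        saver = tf.train.Saver()
        sess = tf.Session()
        saver.restore(sess, 'inception_v4.ckpt')   # done once

        # then, for each new batch as it arrives:
        # predictions = sess.run(logits, feed_dict={images: batch})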
    Saurabh Vyas
    @saurabhvyas
    can anyone please help me with the tensorflow Dataset API? I want to create a simple dataset for speech recognition; each element consists of MFCC features and a target transcription. There is one problem: MFCC is not implemented by default in tensorflow, so I am using a Python implementation via tf.py_func, but I am getting a strange error
    UnimplementedError: Unsupported object type Tensor
         [[Node: PyFunc = PyFunc[Tin=[DT_STRING, DT_STRING], Tout=[DT_DOUBLE, DT_STRING], token="pyfunc_7"](arg0, arg1)]]
         [[Node: IteratorGetNext_7 = IteratorGetNext[output_shapes=[<unknown>, <unknown>], output_types=[DT_DOUBLE, DT_STRING], _device="/job:localhost/replica:0/task:0/cpu:0"](OneShotIterator_7)]]
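    That error usually appears when the function wrapped by tf.py_func is handed, or returns, a Tensor instead of plain numpy values. A sketch of the shape such a map step can take (python_mfcc, wav_paths and transcripts are hypothetical placeholders):
        import numpy as np
        import tensorflow as tf

        def load_example(wav_path, transcript):
            # runs as ordinary Python: inputs arrive as numpy bytes, outputs must be numpy too
            mfcc = python_mfcc(wav_path.decode('utf-8'))   # hypothetical helper returning a float array
            return mfcc.astype(np.float64), transcript

        dataset = tf.data.Dataset.from_tensor_slices((wav_paths, transcripts))
        dataset = dataset.map(
            lambda p, t: tuple(tf.py_func(load_example, [p, t], [tf.double, tf.string])))
        features, label = dataset.make_one_shot_iterator().get_next()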
    gumnn
    @arita37
    A friend has a project involving some TensorFlow development (paid work);
    if anyone is interested, please PM me directly. Thanks!
    MachineLearning
    @zhangshengshan
    Caused by: java.io.InvalidClassException: org.apache.spark.unsafe.types.UTF8String; local class incompatible: stream classdesc serialVersionUID = -2992553500466442037, local class serialVersionUID = -5670082246090726217
    Hello, when I run spark-shell
    I come across this problem
    Some say it is because of the Hadoop version that Spark was built against.
    MachineLearning
    @zhangshengshan
    However, I don't know how to specify the Hadoop version when compiling the Spark source code! Any suggestion would be appreciated!
    Loreto Parisi
    @loretoparisi
    is anyone aware of ONNX open model support in TF?
    some import/export has been done, but for TF nothing official yet: https://github.com/onnx/tutorials
    neverdie88
    @neverdie88
    please help me with a basic question: what is the difference between tf.maximum and tf.reduce_max? Are their derivatives different from each other? If I implement maxout/minout, should I use tf.maximum or tf.reduce_max?
    Jay Kim (Data Scientist)
    @bravekjh
    Hi everyone. I joined this room for the first time today, nice to meet you all
    Jay Kim (Data Scientist)
    @bravekjh
    does anyone know how to run tensorflow-on-spark?
    gumnn
    @arita37
    there is a Yahoo wrapper
    Amitayus
    @Amitayus
    @neverdie88 tf.reduce_max finds the maximum across specified dims of a single tensor, while tf.maximum takes the element-wise max between two tensors.
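    A tiny illustration of the difference (a sketch; both route gradients only to the winning elements, so either can back a maxout layer, with tf.reduce_max over the pieces dimension being the usual choice):
        import tensorflow as tf

        a = tf.constant([1.0, 5.0, 3.0])
        b = tf.constant([4.0, 2.0, 6.0])

        elementwise = tf.maximum(a, b)   # [4., 5., 6.]  element-wise max of two tensors
        reduced = tf.reduce_max(a)       # 5.            max over one tensor's dimensions

        with tf.Session() as sess:
            print(sess.run([elementwise, reduced]))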
    Jay Kim (Data Scientist)
    @bravekjh
    @arita37 do you know how to configure tensorflowOnSpark?
    @arita37 ?
    gumnn
    @arita37
    you can use Yahoo's TensorFlowOnSpark
    Jay Kim (Data Scientist)
    @bravekjh
    I know, but I am asking what the steps are. @arita37
    specifically