HarikrishnanBalagopal
@HarikrishnanBalagopal
for instance this doesn't work https://bpaste.net/show/23C2
it fails to train the model
the documentation for @tf.function is not detailed enough to explain this
Billy Lamberta
@lamberta
work in progress. check out this detailed section on autograph/tf.function: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/index.md
HarikrishnanBalagopal
@HarikrishnanBalagopal
it seems to be the while loop causing the issue
or not
very hard to debug
Billy Lamberta
@lamberta
Check out https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/debugging.md
I'm sure they would love feedback/PRs on it. Still early for their docs, though
HarikrishnanBalagopal
@HarikrishnanBalagopal
I am wondering if this is a google colab issue, if I include the loop inside the @tf.function then sometimes it goes into an infinite loop
might have crashed the runtime and colab just not updating
@lamberta uh the example they give there is slightly incorrect
original function
@tf.function
def f(a):
  pdb.set_trace()
  if a > 0:
    tf.print(a, 'is positive')
during debugging
>l
      8 def f(a):
      9   pdb.set_trace()
---> 10   tf.print(a)
     11   if a > 0:
     12     tf.print('is positive')
where did the extra tf.print(a) line come from?
HarikrishnanBalagopal
@HarikrishnanBalagopal
@lamberta ok I figured it out: you can't use for i in range(64): or something like that, since those are Python objects
you have to use for i in tf.range(64): for the loop to get converted to TF ops by @tf.function
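The distinction can be sketched with a toy example (my own, not from the TF docs): a loop over a Python range() is unrolled into separate graph ops at trace time, while a loop over tf.range() is converted by AutoGraph into a single tf.while_loop.

```python
import tensorflow as tf

@tf.function
def sum_python_range():
    total = tf.constant(0)
    for i in range(64):      # Python ints: the loop is unrolled during tracing
        total += i
    return total

@tf.function
def sum_tf_range():
    total = tf.constant(0)
    for i in tf.range(64):   # tf.Tensor: converted into a tf.while_loop op
        total += i
    return total

print(sum_python_range().numpy())  # 2016
print(sum_tf_range().numpy())      # 2016
```

Both return the same value; the difference is the shape of the traced graph, which is why a large Python-level loop can make tracing extremely slow or appear to hang.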
Billy Lamberta
@lamberta
nice. might be worth a PR to those tf.function docs
HarikrishnanBalagopal
@HarikrishnanBalagopal
with that fix it no longer goes into an infinite loop
while loop also seems to work
@lamberta where, though?
it's also already kind of mentioned in the docs:
Key Point: Only statements that are conditioned on, or iterate over, a TensorFlow object such as tf.Tensor, are converted into TensorFlow ops.
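The same Key Point applies to conditionals; here is a small sketch of my own (not from the docs) where the if is conditioned on a tf.Tensor and therefore gets converted to tf.cond:

```python
import tensorflow as tf

@tf.function
def sign01(x):
    if x > 0:                # x is a tf.Tensor, so AutoGraph emits tf.cond
        y = tf.constant(1)
    else:
        y = tf.constant(0)
    return y

print(sign01(tf.constant(3.0)).numpy())   # 1
print(sign01(tf.constant(-1.0)).numpy())  # 0
```

If x were a plain Python number instead, the if would be evaluated once at trace time and only one branch would end up in the graph.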
HarikrishnanBalagopal
@HarikrishnanBalagopal
@lamberta are there any plans to add a simple way to release GPU memory?
I have a lot of tests for functions that build networks. These tests create temporary networks, run them once, and finish.
However, looking in TensorBoard and at GPU memory usage in Google Colab, the memory is not getting released.
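Two standard mitigations (a sketch, not a full fix: TensorFlow generally does not return allocated GPU memory to the OS within a process) are enabling on-demand memory growth and clearing Keras' global state between tests:

```python
import tensorflow as tf

# Allocate GPU memory on demand instead of grabbing it all up front.
# Must run before any GPU has been initialized.
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

# Between tests: drop the global Keras state so old models and layers
# can be garbage-collected and their memory reused by later tests.
tf.keras.backend.clear_session()
```

This lets temporary networks reuse the same pool of memory rather than shrinking the process's footprint.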
TruongSinh Tran-Nguyen
@truongsinh
Noob question: how can I generate HTML for https://github.com/tensorflow/docs locally? python3 setup.py build only generates an egg file
Reuben Morais
@reuben
@truongsinh you can't; you can only build Markdown, and then you have to render it yourself. I built some simple tooling around it for creating docsets, maybe it'll be useful for you:
it doesn't look nearly as refined as the official docs though; for that you'll have to scrape the website
TruongSinh Tran-Nguyen
@truongsinh
thanks @reuben, I'll check it out
TruongSinh Tran-Nguyen
@truongsinh
@reuben is that tool only for API docs? Can I use it for community translation docs like guides and tutorials?
Reuben Morais
@reuben
@truongsinh I only tested it for API docs, not sure if the other sub projects have the same structure
TruongSinh Tran-Nguyen
@truongsinh
gotcha
Vishesh Mangla
@XtremeGood
(attached image: image.png)
I have been waiting for more than 10 minutes and this still isn't complete. If it were a neural network, the computations would definitely be faster. What's the reason? Also, only a little chunk of RAM is being used.
3-4 layers of 30*30 conv get computed with 10 epochs in 2 minutes or so, or even faster, but why is this so slow?
I'm using GPU
I tried TPU too
I converted my code from SciPy to TensorFlow only for the GPU power
because the matrices are 1000*1000 dimensions
Also, in TF 2.0, contrib is removed
Vishesh Mangla
@XtremeGood
where can I find integrate.odeint?
HarikrishnanBalagopal
@HarikrishnanBalagopal
@lamberta is non-eager mode still supported in TensorFlow 2?
Sean Morgan
@seanpmorgan
You can decorate eager functions with @tf.function to build a graph representation under the hood. Alternatively, the public API from TF 1.x is available under tf.compat.v1: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/compat/v1
So you can still run sessions/graphs if you really want
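A minimal sketch of that compat path, using a toy graph of my own rather than anything from the original paste:

```python
import tensorflow as tf

# Opt this program out of eager execution, then build and run a
# TF 1.x-style graph with placeholders and an explicit Session.
tf.compat.v1.disable_eager_execution()

a = tf.compat.v1.placeholder(tf.float32, shape=())
b = a * 2.0

with tf.compat.v1.Session() as sess:
    result = sess.run(b, feed_dict={a: 21.0})
print(result)  # 42.0
```

Note that disable_eager_execution() is global for the process, so it's an all-or-nothing switch rather than something to mix freely with eager code.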
HarikrishnanBalagopal
@HarikrishnanBalagopal
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input

def testing():
    model = Sequential([
        Input(shape=(1,)),
        Dense(2),
        Dense(1, activation='sigmoid')
    ])
    model.compile('adam', tf.keras.losses.BinaryCrossentropy(), metrics=[tf.keras.metrics.BinaryAccuracy()])

    xs = np.linspace(-1, 1, 100)
    ys = np.where(xs <= 0, 1, 0)
    plt.plot(xs, ys)

    results = model.evaluate(xs, ys)
    print('before training test loss, test acc:', results)
    yps = model.predict(xs)
    print(np.mean(yps), yps[:10].flatten())

    model.fit(xs, ys, batch_size=100, epochs=400, verbose=0)

    results = model.evaluate(xs, ys)
    print('after training, test loss, test acc:', results)
    yps = model.predict(xs)
    print(np.mean(yps), yps[:10].flatten())

testing()
This prints 100% accuracy before and after training.
100/100 [==============================] - 0s 1ms/sample - loss: 0.4602 - binary_accuracy: 1.0000
before training test loss, test acc: [0.4602195370197296, 1.0]
0.49999997 [0.75605214 0.7518128  0.7475245  0.7431873  0.73880166 0.7343679
 0.72988635 0.72535753 0.7207818  0.7161597 ]
100/100 [==============================] - 0s 137us/sample - loss: 0.2889 - binary_accuracy: 1.0000
after training, test loss, test acc: [0.28886041820049285, 1.0]
0.50000006 [0.9287006  0.9251896  0.92152035 0.91768706 0.9136842  0.90950584
 0.90514624 0.9005994  0.89585984 0.8909216 ]
HarikrishnanBalagopal
@HarikrishnanBalagopal
Well it does SOMETIMES