Sean Morgan
@seanpmorgan
Hmmm, my guess would be the image pre-processing pipeline? Can you increase your dataset buffer?
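(For context, the "dataset buffer" here refers to the tf.data input pipeline; a minimal sketch of the knobs usually involved, where load_image and filenames are placeholders for your own pre-processing function and file list:)
import tensorflow as tf

# Hypothetical pipeline: parallelize the pre-processing and overlap it with training.
dataset = (
    tf.data.Dataset.from_tensor_slices(filenames)
    .map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    .shuffle(buffer_size=10000)                # larger shuffle buffer
    .batch(32)
    .prefetch(tf.data.experimental.AUTOTUNE)   # keep the accelerator fed
)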
Sean Morgan
@seanpmorgan
Hi SIG, please come by the contributor summit today + tomorrow if you’re at TF World
Jay Vercellone
@jverce
Hi there! I'm trying to build tensorflow/addons from source, using a pre-built tensorflow_gpu that I built with CUDA 10.1. However, I'm having a hard time building addons with that version of CUDA. Is version 10.1 supported? Has anybody tried the same?
Sean Morgan
@seanpmorgan
Hi! Unfortunately we only support CUDA 10.0 at the moment; once the upstream https://github.com/tensorflow/custom-op supports 10.1 we will follow suit
I don't believe there is an issue in custom-op for that at the moment so you may want to file an issue
hmmm tensorflow/custom-op#15 looks like it was discussed but not a great outcome there
Jay Vercellone
@jverce
Great (as in "I'll stop banging my head against the wall"), thanks @seanpmorgan!
Sean Morgan
@seanpmorgan
Feel free to file an issue in Addons so future people may be spared your trouble. Also, we can update you when it's supported
joao guilherme
@joaogui1
@seanpmorgan I tried that, and I got the following error
FailedPreconditionError: Error while reading resource variable training/SGD/learning_rate from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/training/SGD/learning_rate) [[node ReadVariableOp (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1751) ]]
Andrei Nesterov
@manifest

Hey everybody, I'm trying to figure out how to use tf.addons.seq2seq for a neural machine translation problem. I decided to implement a simple encoder/decoder model from the Sequence Models course by deeplearning.ai on Coursera that translates dates from a human-readable format ("25th of June, 2009") into a machine-readable format ("2009-06-25").

After 35 epochs of training, accuracy on the training set is about 90%. But I'm getting very bad predictions, even on data from the same training set. I suppose there is some error during inference. I'd appreciate any suggestions.

import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa

# encoder
encoder_inputs = tf.keras.layers.Input(shape=[None], dtype=np.int32, name='x_seq')
encoder_embeddings = tf.keras.layers.Embedding(n_xvocab, n_e)
encoder = tf.keras.layers.LSTM(n_a, return_state=True, name='encoder')

x_emb = encoder_embeddings(encoder_inputs)
encoder_outputs, state_h, state_c = encoder(x_emb)
encoder_state = [state_h, state_c]

# decoder
decoder_lengths = tf.keras.layers.Input(shape=[], dtype=np.int32, name='s_seqn')
decoder_inputs = tf.keras.layers.Input(shape=[None], dtype=np.int32, name='s_seq')
decoder_embeddings = tf.keras.layers.Embedding(n_yvocab, n_e)

sampler = tfa.seq2seq.sampler.TrainingSampler()
decoder_cell = tf.keras.layers.LSTMCell(n_s)
output_layer = tf.keras.layers.Dense(n_xvocab)
decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler, output_layer=output_layer, name='decoder')

y_emb = decoder_embeddings(decoder_inputs)
decoder_outputs, s_seq_fin, y_seqn_fin = decoder(
    y_emb,
    initial_state=encoder_state,
    sequence_length=decoder_lengths)
Y_proba = tf.nn.softmax(decoder_outputs.rnn_output)

model = tf.keras.models.Model(
    inputs=[encoder_inputs, decoder_inputs, decoder_lengths],
    outputs=[Y_proba])

model.compile(loss="sparse_categorical_crossentropy", optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), metrics=["accuracy"])
model.fit([x_seq, s_seq, y_seqn], y_seq, epochs=35)
# ['9 may 1998', '10.09.70', '4/28/90']
x_ex = x_corpus[0:8]
m_ex = len(x_ex)
x_seq_ex = serialize(x_ex, x_tokenizer, Tx)
s_seq_ex = np.zeros((m_ex, Ty), dtype=np.int32)
y_seqn_ex = np.full((m_ex), Ty, dtype=np.int32)

h_seq_logits_ex = model.predict([x_seq_ex, s_seq_ex, y_seqn_ex])
y_seq_ex = np.argmax(h_seq_logits_ex, axis=-1)
# array([[3, 3, 3, 3, 3, 3, 3, 3, 3, 3], [3, 3, 3, 3, 3, 3, 3, 3, 3, 3], [3, 5, 4, 2, 3, 3, 3, 3, 3, 3]])
deserialize(y_seq_ex, y_tokenizer)
# ['1111111111', '1111111111', '1290111111'] instead of ['1998-05-09', '1970-09-10', '1990-04-28']
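(One likely cause of the bad predictions: at prediction time the decoder is still driven by the TrainingSampler but is fed all-zero decoder inputs, so it never sees its own previous outputs. The usual pattern with tfa.seq2seq is to build a second decoder for inference that uses a GreedyEmbeddingSampler to feed each predicted token back in. A rough, untested sketch; sos_id and eos_id are placeholder start/end token ids, and the exact call signature is worth double-checking against the tfa.seq2seq docs:)
# inference decoder (reuses the trained decoder_cell / embeddings / output_layer)
inference_sampler = tfa.seq2seq.sampler.GreedyEmbeddingSampler()
inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder(
    decoder_cell, inference_sampler, output_layer=output_layer,
    maximum_iterations=Ty)

start_tokens = tf.fill([m_ex], sos_id)       # sos_id: placeholder start-of-sequence id
final_outputs, final_state, final_seq_lengths = inference_decoder(
    decoder_embeddings.variables[0],         # embedding matrix used to embed fed-back ids
    start_tokens=start_tokens,
    end_token=eos_id,                        # eos_id: placeholder end-of-sequence id
    initial_state=encoder_state)
y_pred_ex = tf.argmax(final_outputs.rnn_output, axis=-1)
(In the functional-API setup above, these outputs would be wrapped in a second tf.keras.Model that reuses the trained layers.)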
Moritz Kröger
@Smokrow
Hey there. What is the correct usage of get_hyper() in optimizers? Right now I am trying to hold every hyperparameter with set_hyper(), but when running if self._get_hyper("beta1"): in graph mode I get
OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
Should I just change the parameter to a Python class variable, or is it better to create a TensorFlow function?
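(For reference, the pattern the built-in tf.keras optimizers follow is to route only numeric hyperparameters through _set_hyper/_get_hyper and keep boolean flags as plain Python attributes, so `if` never tests a tf.Tensor in graph mode. A rough sketch with made-up names (MySGD, use_decay):)
import tensorflow as tf

class MySGD(tf.keras.optimizers.Optimizer):
    def __init__(self, learning_rate=0.01, use_decay=False, name="MySGD", **kwargs):
        super().__init__(name, **kwargs)
        self._set_hyper("learning_rate", learning_rate)  # numeric hyperparameter: may be a tensor
        self.use_decay = use_decay                       # boolean flag: plain Python attribute

    def _resource_apply_dense(self, grad, var):
        lr = self._get_hyper("learning_rate", var.dtype.base_dtype)
        if self.use_decay:            # safe: a Python bool, not a tf.Tensor
            lr = lr * 0.99
        return var.assign_sub(lr * grad)

    def _resource_apply_sparse(self, grad, var, indices):
        raise NotImplementedError

    def get_config(self):
        config = super().get_config()
        config.update({"learning_rate": self._serialize_hyperparameter("learning_rate"),
                       "use_decay": self.use_decay})
        return config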
Mike Walmsley
@mwalmsley

Hi!

I'm trying to use tfa on the GCloud tensorflow-gpu image and I'm having trouble getting it to import, even with a minimal setup. Can anyone help?

Dockerfile:
FROM gcr.io/deeplearning-platform-release/tf2-gpu.2-0
...
RUN pip install --upgrade pip
RUN pip install -r requirements.txt # including tensorflow-addons==0.6.0

Python:
Python 3.5.6 |Anaconda, Inc.| (default, Aug 26 2018, 21:41:56)
[GCC 7.3.0] on linux
...
import tensorflow_addons as tfa

2019-11-06 15:29:43.940868: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/miniconda3/lib/python3.5/site-packages/tensorflow_addons/init.py", line 21, in <module>
from tensorflow_addons import activations
File "/root/miniconda3/lib/python3.5/site-packages/tensorflow_addons/activations/init.py", line 21, in <module>
from tensorflow_addons.activations.gelu import gelu
File "/root/miniconda3/lib/python3.5/site-packages/tensorflow_addons/activations/gelu.py", line 25, in <module>
get_path_to_datafile("custom_ops/activations/_activation_ops.so"))
File "/root/miniconda3/lib/python3.5/site-packages/tensorflow_core/python/framework/load_library.py", line 61, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: /root/miniconda3/lib/python3.5/site-packages/tensorflow_addons/custom_ops/activations/_activation_ops.so: undefined symbol: _ZN10tensorflow12OpDefBuilder4AttrESs

Looks like some gnarly incompatibility between TF and TFA installs/versions, but the error is way above my head.

Mike Walmsley
@mwalmsley
Also created an issue here tensorflow/addons#676 if that's more appropriate
Sean Morgan
@seanpmorgan
Hi @mwalmsley, sorry for the late reply. Yes, this is most likely a compiler incompatibility. There is ongoing work to make sure this doesn't happen (tensorflow/community#133)
But in the meantime we can look into this, because I checked the gcc version of that Docker image and it should match
Will post updates in the issue. Thanks for the post!
Mike Walmsley
@mwalmsley
Brilliant @seanpmorgan , thanks so much for getting back to me!
palak
@developer22-university
Hey, who are the admins of this group? Please tell me how I can contribute as a GCI mentor.
Sean Morgan
@seanpmorgan
We have a few maintainers and a larger group of core members. Not sure I understand the question, but if you're looking for ways to contribute, then issues with the "Good first issue" or "Help wanted" tags are the best way to start
Sean Morgan
@seanpmorgan
Should have read: a few admins and a larger group of maintainers*
Evan Casey
@evancasey
Hey everyone, I'm getting extremely slow inference times on CPU with the CorrelationCost layer (opticalflow.py)
Is this expected? It runs fast on my GPU, but I need to do inference on the CPU...
Evan Casey
@evancasey
issue created here: tensorflow/addons#688
Tzu-Wei Sung
@WindQAQ
Hi @evancasey, I have submitted a PR: tensorflow/addons#689
Hope it helps you
Evan Casey
@evancasey
@WindQAQ i saw it. thanks again!
Sean Morgan
@seanpmorgan
If anyone would like to answer there is a post on the discuss google groups regarding TFA.seq2seq and keras functional API:
https://groups.google.com/a/tensorflow.org/forum/#!topic/discuss/C1_9zvAtYw0
Also, congratulations everyone on our repo getting to 500 stars. A great community is building around our SIG and it shows :)
Philip May
@PhilipMay
Yay to us. 🎉🎊
Philip May
@PhilipMay
I added a test file in addons (fork from master).
I want to run it and do a 'pip install -e .' but get this error message:
   ERROR: Command errored out with exit status 1:
     command: /usr/local/miniconda3/envs/tf2/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/Users/mike/develop/git/addons/setup.py'"'"'; __file__='"'"'/Users/mike/develop/git/addons/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
         cwd: /Users/mike/develop/git/addons/
    Complete output (16 lines):
    running develop
    running egg_info
    writing tensorflow_addons.egg-info/PKG-INFO
    writing dependency_links to tensorflow_addons.egg-info/dependency_links.txt
    writing requirements to tensorflow_addons.egg-info/requires.txt
    writing top-level names to tensorflow_addons.egg-info/top_level.txt
    reading manifest file 'tensorflow_addons.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    /usr/local/miniconda3/envs/tf2/lib/python3.6/site-packages/setuptools/dist.py:475: UserWarning: Normalizing '0.7.0-dev' to '0.7.0.dev0'
      normalized_version,
    warning: no files found matching '*.so' under directory 'tensorflow_addons/'
    writing manifest file 'tensorflow_addons.egg-info/SOURCES.txt'
    running build_ext
    building '_foo' extension
    creating build
    error: could not create 'build': File exists
    ----------------------------------------
ERROR: Command errored out with exit status 1: /usr/local/miniconda3/envs/tf2/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/Users/mike/develop/git/addons/setup.py'"'"'; __file__='"'"'/Users/mike/develop/git/addons/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.
I am on Mac - what can I do?
Kshitij09
@Kshitij09
Hi, when is the next release? I have to use CyclicalLearningRate and the Mish activation function. Will report issues (if any)
Sean Morgan
@seanpmorgan
Hi @Kshitij09 we just got a 2.1 RC, so hopefully in the next couple of days; see tensorflow/addons#725, as there is a blocker we have to complete first
Hi @PhilipMay is this after building the package using Bazel? Addons is difficult to package because we have C++ ops that have to be compiled before packaging. Would heavily recommend you run your tests using Bazel. Let me know if you'd like specifics on how to do that
Philip May
@PhilipMay

@seanpmorgan No. I did not build anything with Bazel before. I will try that.

Let me know if you'd like specifics on how to do that

Yes - that would be great. I know how to start all tests with Bazel. But how do I start just one test script or those of a specific module?

Sean Morgan
@seanpmorgan
bazel test -c opt -k \
--test_output=all \
//tensorflow_addons/losses:giou_loss_test
As an example: the package is the path after the double slash (//tensorflow_addons/losses) and the target is specified after the colon (giou_loss_test)
Target names will be found in the BUILD file of that package: https://github.com/tensorflow/addons/blob/master/tensorflow_addons/losses/BUILD
Best to do this in the Docker container, but it should work fine in a host env if you have Bazel installed
--test_output=all means that it'll print everything to stdout, so you can put typical Python prints etc. in there and it should work fine. I believe pdb will work as well, but I'm not positive
Please feel free to add any updates to the CONTRIBUTING.md doc if you feel anything is insufficient
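(Two more invocations that follow the same package:target pattern; :all and /... are standard Bazel wildcards, while concrete target names come from the BUILD file:)
bazel test -c opt -k --test_output=all //tensorflow_addons/losses:all   # every target in the losses package
bazel test -c opt -k //tensorflow_addons/...                            # every test target in the repo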
Moritz Kröger
@Smokrow
Is there a monthly meeting at the moment?
Philip May
@PhilipMay
Yes there was.
Moritz Kröger
@Smokrow
Weird. I wasn’t able to join this time
Philip May
@PhilipMay
Had the same problem before. Very annoying. Did you try to log in to the conference with the same email address you use for the Addons Google group?
Moritz Kröger
@Smokrow
I think so. The funny thing is that it worked 2 months earlier 😂
Philip May
@PhilipMay
Same here. Sometimes it works, sometimes it does not.
Sean Morgan
@seanpmorgan
:( I'll see if we can schedule an alternative meeting where we can troubleshoot. That or we can move to Zoom or something if we want