Cleaning up...
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/numpy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-5hYSAi-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/numpy
Traceback (most recent call last):
  File "/usr/bin/pip", line 9, in <module>
    load_entry_point('pip==1.5.4', 'console_scripts', 'pip')()
  File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 235, in main
    return command.main(cmd_args)
  File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 161, in main
    text = '\n'.join(complete_log)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 72: ordinal not in range(128)
oh! nope... I have created another network:
best_inputs, best_out, best_scaled_out = create_network()
best_network_params = tf.trainable_variables()
Then I create this op:
update_best_network_params = [best_network_params[i].assign(original_network_params[i])
                              for i in range(len(original_network_params))]
Later in my program I invoke:
And the problem is that it seems to be assigning a reference rather than doing a deep copy.
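A side note worth checking (an assumption about the setup above, since not all the code is shown): in TF1, tf.trainable_variables() returns every trainable variable in the graph, so calling it after creating the second network would include the first network's variables too unless you filter by scope. As for reference vs. deep copy: a TF1 assign op, once actually run in a session, copies values into the target variable's storage. The distinction can be sketched with plain NumPy, standing in for the variables:

```python
import numpy as np

# Not TF itself -- a NumPy sketch of the distinction in question:
# copying *values* into existing storage (what an assign op does when run
# in a session) versus rebinding a Python reference to the same object.
original_params = [np.ones(3), np.full(3, 2.0)]
best_params = [np.zeros(3), np.zeros(3)]

# Value copy, analogous to sess.run(update_best_network_params):
for best, orig in zip(best_params, original_params):
    best[...] = orig          # in-place copy of the values

original_params[0][0] = 42.0  # later training updates the original network

print(best_params[0][0])      # -> 1.0: the copy is independent of the source
```

If it still looks like a reference in the real program, a common cause is that the assign ops were built but never passed to sess.run(), or that the two parameter lists don't line up one-to-one because trainable_variables() returned more than expected.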
shape_invariants argument of tf.while_loop, or set_shape() on the loop variables.
I guess the issue is in Keras itself, but I would like to revert to 0.10.0. Do you guys know if that can be done with pip? I'm on macOS.
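Yes, pip can pin an exact version with a `==` specifier. A sketch, assuming the package name and that this particular version actually exists on PyPI (it isn't clear from the message whether 0.10.0 refers to Keras or to TensorFlow):

```shell
# Pin a package to an exact version with pip
# (assumes the package/version combination exists on PyPI).
pip uninstall -y keras
pip install "keras==0.10.0"

# If 0.10.0 refers to TensorFlow instead, the same pattern applies:
#   pip install "tensorflow==0.10.0"
```

Doing this inside a virtualenv avoids downgrading the system-wide install.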
My question has no relation to this topic. I want to know why people write ::testing::InitGoogleTest instead of testing::InitGoogleTest when using gtest for unit tests. In other words, I'm confused about the difference between ::testing::InitGoogleTest and testing::InitGoogleTest. Sorry to take up your time with such a basic question.
Can anyone help me with this confusion? Thanks.
pip install tensorflow-gpu and running
python -c "import tensorflow as tf; tf.InteractiveSession()" should also tell you if TF finds your GPU, if that's what you're interested in
tf.py_func() and tools like TensorBoard, tdb, tfdbg
I have a question regarding the use of
I have a numpy array of size
x_shape = (50, 30, 10),
batch size = 50,
max length of series (max_time) = 30
input vector length = 10.
I'm getting an error of
TypeError: 'Tensor' object is not iterable.
According to the documentation:
If time_major == False (default), this must be a Tensor of shape: [batch_size, max_time, ...], or a nested tuple of such elements.
How should I format my input if not as an array of rank three, e.g. [50, 30, 10]? Perhaps a list of 30 elements, each of which is a vector of length 10?
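Assuming the function in question is tf.nn.dynamic_rnn (the name is cut off above, so this is a guess): with time_major=False a rank-3 array of shape [batch_size, max_time, input_dim] is fed as-is, no splitting needed. The "'Tensor' object is not iterable" error commonly comes from the older static tf.nn.rnn, which expects a Python list of max_time tensors of shape [batch_size, input_dim] instead of a single rank-3 tensor. A sketch of both shapes:

```python
import numpy as np

batch_size, max_time, input_dim = 50, 30, 10

# Rank-3 input, as dynamic_rnn (time_major=False) takes it directly:
#   outputs, state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)
x = np.random.rand(batch_size, max_time, input_dim).astype(np.float32)

# What the older static tf.nn.rnn expects instead: a Python list of
# max_time tensors, each of shape [batch_size, input_dim].
as_list = [x[:, t, :] for t in range(max_time)]

print(len(as_list), as_list[0].shape)  # -> 30 (50, 10)
```

So if the error comes from the static RNN, either switch to dynamic_rnn and keep the rank-3 array, or split the array into a per-timestep list as above.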