Aurélien Geron
@ageron
Trying out gitter...
Aurélien Geron
@ageron
If you have tried out the Jupyter notebooks and/or read the book, I'd love to get your feedback.
dolaameng
@dolaameng
Hi @ageron, I am reading the early release of your book and have gone through the first 11 chapters so far. I just want to say that I really enjoy it! Some thoughts:
  1. I found chapter 11 extremely useful as an introduction to TensorFlow. I really love the way it is organized, covering important practical points with a highlight on solving different challenges. I think it would be great if you could add a complete example using all the tricks, e.g., using the bottleneck features of VGG to classify cats and dogs (from Kaggle)? (See the sketch after this message.)
  2. A quick question on a minor detail in the notebook "11_deep_learning.ipynb": you use reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) to get the l1 penalty and later add it to base_loss, but since tf.get_collection returns a list, it probably makes sense to sum them, e.g., with tf.add_n? Besides, would it be better to explicitly add a coefficient for the reg_loss, since it is not guaranteed that base_loss will always be directly comparable with reg_loss? Or am I missing a point here?
Thanks for the book!
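A rough sketch of that bottleneck-features idea, assuming Keras's pretrained VGG16; the image batch, labels, and the small classifier below are illustrative placeholders, not code from the book:

import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.models import Sequential
from keras.layers import Dense

# Convolutional base only (include_top=False); global average pooling turns
# each image into a 512-dimensional "bottleneck" feature vector.
base = VGG16(weights="imagenet", include_top=False, pooling="avg")

images = np.random.rand(8, 224, 224, 3).astype("float32") * 255.0  # placeholder batch
labels = np.random.randint(0, 2, size=(8,))                        # 0 = cat, 1 = dog
features = base.predict(preprocess_input(images))                  # shape (8, 512)

# Train only a small classifier on top of the frozen bottleneck features.
clf = Sequential([Dense(64, activation="relu", input_shape=(512,)),
                  Dense(1, activation="sigmoid")])
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
clf.fit(features, labels, epochs=2, batch_size=4)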
Aurélien Geron
@ageron
@dolaameng Thanks for your kind words and useful feedback. I like your suggestion of a complete example using all the tricks. I'm having a bit of a bandwidth problem right now, with a lot of consulting, conferences, webinars, translating my book into French, etc., so I'm really not sure when I will be able to do that. Do you think you could create a first draft? Regarding the error, you are absolutely right, excellent catch: it should be loss = tf.add_n([base_loss] + reg_losses, name="loss"). As for the coefficient, the regularization hyperparameter is provided to the l1_regularizer() function: tf.contrib.layers.l1_regularizer(0.01).
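A minimal sketch of the corrected pattern (the placeholder shapes and layer sizes are illustrative, not the notebook's exact code):

import tensorflow as tf

X = tf.placeholder(tf.float32, shape=(None, 784), name="X")
y = tf.placeholder(tf.int64, shape=(None,), name="y")

# The regularization hyperparameter is the scale passed to the regularizer.
regularizer = tf.contrib.layers.l1_regularizer(0.01)

hidden = tf.layers.dense(X, 100, activation=tf.nn.relu,
                         kernel_regularizer=regularizer)
logits = tf.layers.dense(hidden, 10, kernel_regularizer=regularizer)

xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
base_loss = tf.reduce_mean(xentropy, name="base_loss")

# tf.get_collection() returns a *list* of tensors (one per regularized
# weight matrix), so sum them together with the base loss via tf.add_n().
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
loss = tf.add_n([base_loss] + reg_losses, name="loss")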
dolaameng
@dolaameng
@ageron : thanks for the update! I would love to contribute to the draft of the example. I will make a PR when I get something! And thanks for the answers to my questions.
Justin Francis
@wagonhelm

initializer = tf.contrib.layers.xavier_initializer()

# CONVOLUTION 1 - 1
with tf.name_scope('conv1_1'):
    conv1_1 = convolution2d(X, num_outputs=16, kernel_size=(8,8),
                            stride=4, padding="VALID",
                            activation_fn=tf.nn.relu,
                            weights_initializer=initializer)

# CONVOLUTION 1 - 2
with tf.name_scope('conv1_2'):
    conv1_2 = convolution2d(conv1_1, num_outputs=32, kernel_size=(4,4),
                            stride=2, padding="VALID",
                            activation_fn=tf.nn.relu,
                            weights_initializer=initializer)

# FULLY CONNECTED 1
with tf.name_scope('fc1') as scope:
    conv1_2_flat = tf.reshape(conv1_2, shape=[-1, 256])
    fc1 = fully_connected(conv1_2_flat, 256, activation_fn=tf.nn.relu,
                          weights_initializer=initializer)

# FULLY CONNECTED 2
with tf.name_scope('fc2') as scope:
    fc2 = fully_connected(fc1, n_actions, activation_fn=tf.nn.relu,
                          weights_initializer=initializer)

# SOFTMAX OUTPUT
with tf.name_scope('softmax') as scope:
    output = tf.nn.softmax(fc2)

What am I doing wrong here?
I'm not used to using contrib.learn.
My output is giving me a [?, 2] tensor when I want a [1, 2].
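For context, the leading ? in [?, 2] is the batch dimension, which stays unknown until data is fed in; feeding a single observation yields a (1, 2) array at runtime. A tiny self-contained illustration (not the network above, just a stand-in with the same output shape):

import numpy as np
import tensorflow as tf

# The leading None (printed as '?') is the batch size: it stays unknown
# until runtime, so a static shape of [?, 2] is expected.
X = tf.placeholder(tf.float32, shape=[None, 4], name="X")
output = tf.nn.softmax(tf.layers.dense(X, 2))
print(output.get_shape())  # (?, 2)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    obs = np.zeros((1, 4), dtype=np.float32)  # a batch holding one observation
    print(sess.run(output, feed_dict={X: obs}).shape)  # (1, 2)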
Vinit
@bodhwani
Hello guys! I am very interested in ML and AI. I'm looking to contribute to this repo, really excited!
nu007a
@nu007a
hello

Diego Quintana
@diegoquintanav
So I'm on page 52, about stratification of data. Is the choice of dividing by 1.5 arbitrary? Is there a rule of thumb in these cases, based on how the data is distributed?
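For reference, the passage on page 52 builds an income-category attribute roughly like this (a sketch using a synthetic stand-in for the book's California housing DataFrame):

import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedShuffleSplit

# Synthetic stand-in for the book's housing DataFrame.
housing = pd.DataFrame({"median_income": np.random.lognormal(1.0, 0.5, 1000)})

# Dividing by 1.5 simply limits the number of income categories, and capping
# at 5 merges the sparse high-income tail so every stratum stays sizeable.
housing["income_cat"] = np.ceil(housing["median_income"] / 1.5)
housing["income_cat"].where(housing["income_cat"] < 5, 5.0, inplace=True)

split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing["income_cat"]):
    strat_train_set = housing.loc[train_index]
    strat_test_set = housing.loc[test_index]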
Ashok Bakthavathsalam
@kgashok
Hello, is there an eBook version of the book available for purchase?