These are chat archives for arita37/tensorflow

6th
Jul 2016
Danijar Hafner
@danijar
Jul 06 2016 21:00
@off99555 I can think of some reasons: you don't waste memory, since you only need to store the weights once. The model is less likely to overfit because it has fewer parameters. And when both parts receive different inputs, the shared weights are effectively trained on more data, which can make them better.
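
A minimal sketch of what that weight sharing might look like with the TF 1.x variable-scope API of that era (the `dense` helper and the placeholder shapes here are illustrative assumptions, not from the chat):

```python
import tensorflow as tf

def dense(x, units, reuse):
    # With reuse=True, tf.get_variable returns the variables created on the
    # first call instead of making new ones, so both branches share weights.
    with tf.variable_scope('shared', reuse=reuse):
        w = tf.get_variable('w', [x.get_shape()[-1].value, units])
        b = tf.get_variable('b', [units], initializer=tf.constant_initializer(0.0))
        return tf.matmul(x, w) + b

# Two different inputs pass through the same layer; the weights are stored
# once and receive gradients from both branches during training.
left = tf.placeholder(tf.float32, [None, 128])
right = tf.placeholder(tf.float32, [None, 128])
left_out = dense(left, 64, reuse=False)   # creates the variables
right_out = dense(right, 64, reuse=True)  # reuses the same variables
```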