With my fresh eyes on the subject, I can help point out a few little things that would benefit from some more polish:
the README and command-line help could be improved easily
some issues/PRs should be closed quickly to help focus on what is useful
then we should complement the roadmap with more specific/detailed missing features
For instance, I believe that having some (optional) test/validation data would be nice (issue #33)
Hi, I just started playing around with char-rnn-tensorflow and was wondering why you chose to use an encoder-decoder model along with sequence_loss_by_example. Is it also possible (and would it be better) to use static_rnn and softmax_cross_entropy_with_logits instead?
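Not the maintainer, but for what it's worth: with uniform weights, sequence_loss_by_example reduces to the average per-timestep softmax cross entropy, so swapping in softmax_cross_entropy_with_logits per step should give the same number. A toy sketch in plain Python (no TensorFlow; the logits and targets are made-up illustrative values):

```python
import math

def softmax_xent(logits, target):
    # cross entropy for one timestep: -log softmax(logits)[target]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target]

# one sequence of 3 timesteps over a 4-char vocabulary (toy numbers)
logits = [[2.0, 0.5, 0.1, -1.0],
          [0.0, 1.5, 0.3, 0.2],
          [1.0, 1.0, 1.0, 1.0]]
targets = [0, 1, 2]
weights = [1.0, 1.0, 1.0]  # uniform, as char-rnn-tensorflow passes them

# sequence_loss_by_example-style: weighted sum of per-step xents / sum of weights
seq_loss = sum(w * softmax_xent(l, t)
               for l, t, w in zip(logits, targets, weights)) / sum(weights)

# per-step softmax cross entropy, averaged over time
per_step = [softmax_xent(l, t) for l, t in zip(logits, targets)]
avg_xent = sum(per_step) / len(per_step)

assert abs(seq_loss - avg_xent) < 1e-12  # same quantity
```

So the choice is mostly API convenience, not a different objective.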
Hi @sherjilozair, I am playing around with the OpenAI RfR 'Train a language model on a jokes corpus' and have forked char-rnn-tensorflow as a starting point for exploring the problem space. I hope this is ok; I've observed the MIT licence and kept your copyright notice.
At the moment it's still not very funny; the best joke it has come up with is "your mom is unit unblong?"
I wanted to say thanks for your work on this too, it's been a great starting point for generative RNNs on tensorflow
Hi @sherjilozair, I was running your char-rnn-tensorflow. Could you tell me how to sample from a specific checkpoint?
Hello, everyone! Has anyone dealt with grid LSTM for chars? I work at a university and I really need a research direction on this topic. char-rnn-tensorflow (LSTM) is cool, but it's not enough. Really hope for your help, thanks!
Hi, has anyone had success in directly replicating char-rnn? I ask because I attempted a tensorflow implementation Jan'17 but I was unable to get good output as described by char-rnn. I've only just now got Torch7 working to validate char-rnn's performance (and, indeed, it works as described). So now I can either learn Torch, or, if this project works the same way, I can continue with Tensorflow using this codebase. Cheers!
maybe this is a Python 3.5.4 + Windows issue? train.py works fine, but sample.py gets lost on encoding:

File "C:\Users\peter\AppData\Local\Programs\Python\Python35\lib\encodings\cp437.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_map)
UnicodeEncodeError: 'charmap' codec can't encode character '\xc8' in position 1: character maps to <undefined>
okay - got it, seems that's in fact a 3.5 problem, solved with 3.6 :)
seems there's also a conflict on Windows with CUDA support, i.e. Python 3.6 + CUDA TensorFlow don't work ... but that may be a TF issue, not char-rnn's
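One workaround that sidesteps the console codepage entirely, regardless of Python version: write the sampled text to a UTF-8 file instead of printing it, so cp437 never sees it. A minimal sketch (the filename sample_out.txt is just an example; setting the PYTHONIOENCODING=utf-8 environment variable before running sample.py is another common fix):

```python
# Stand-in for sample.py's output, containing characters cp437 can't encode
text = "caf\xe9 \xc8"

# Writing with an explicit encoding avoids the console codepage entirely
with open("sample_out.txt", "w", encoding="utf-8") as f:
    f.write(text)

# Round-trips cleanly no matter what the console encoding is
with open("sample_out.txt", encoding="utf-8") as f:
    assert f.read() == text

# cp437 really is the culprit: it has no mapping for '\xc8'
failed = False
try:
    text.encode("cp437")
except UnicodeEncodeError:
    failed = True
assert failed
```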
Hi guys, has anyone tried to export the model to a .pb file and serve it with the TensorFlow Serving framework?
I'm confused about how to find the output node name among more than 10,000 node names: sherjilozair/char-rnn-tensorflow#122
Even after running python3 train.py --init_from=save, the model starts training from step 0. I have checkpoints saved in my save directory. Can anybody tell me what happened and how to overcome this?
Any help is appreciated
Also, it seems the training is happening on my CPU (htop shows all cores above 90%). How can I make the training run on my GPU?
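On the resume question: I can't tell from here why --init_from restarts, but a first thing to check is the plain-text "checkpoint" index file that TF1 writes into the save dir; tf.train.get_checkpoint_state reads it to locate the latest checkpoint, and if it's missing or points at files that were moved, restoring silently has nothing to load. A sketch of what that lookup amounts to (illustrative only, not the repo's actual code):

```python
import os
import re
import tempfile

def latest_checkpoint(save_dir):
    """Mimic what tf.train.get_checkpoint_state finds in a TF1 save dir."""
    index = os.path.join(save_dir, "checkpoint")
    if not os.path.exists(index):
        return None  # nothing to resume from -> training starts at step 0
    with open(index) as f:
        m = re.search(r'model_checkpoint_path:\s*"([^"]+)"', f.read())
    return m.group(1) if m else None

# demo with a fake save dir laid out like TF1 leaves it
save = tempfile.mkdtemp()
with open(os.path.join(save, "checkpoint"), "w") as f:
    f.write('model_checkpoint_path: "model.ckpt-1000"\n'
            'all_model_checkpoint_paths: "model.ckpt-500"\n')

assert latest_checkpoint(save) == "model.ckpt-1000"
```

Also make sure --init_from points at the same directory you passed as --save_dir during the first run.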
hey guys, I'm wondering why we don't get any info on validation loss; it doesn't seem to be computed. How can I find out whether my models are overfitting?
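The repo itself doesn't compute one (that's what issue #33 asks for), but a rough DIY version is to hold out a slice of the batches and periodically evaluate on it without an optimizer step; training loss falling while held-out loss rises is the classic overfitting signal. The split itself is trivial (sketch with a placeholder list standing in for (x, y) batch pairs):

```python
# Stand-in for the list of (x, y) batches TextLoader produces
batches = list(range(100))

# Hold out the last 10% as a validation set
split = int(len(batches) * 0.9)
train_batches, val_batches = batches[:split], batches[split:]

assert len(train_batches) == 90
assert len(val_batches) == 10
# every few epochs: run the model on val_batches with no train_op and
# compare the resulting loss against the training loss
```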
@yashjakhotiya you need to install the gpu version of tensorflow.
Hey Felix, this repository isn't actively maintained. Maybe try finding other more maintained repositories?
How can I train it on my own dataset?
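As far as I can tell, the repo reads one plain-text file named input.txt from the directory you pass via --data_dir (the default is data/tinyshakespeare), so it's just a matter of laying your corpus out the same way. A sketch (the directory name my_corpus and the placeholder text are mine):

```shell
# create a data directory the loader understands
mkdir -p data/my_corpus
printf 'your training text goes here\n' > data/my_corpus/input.txt

# then point train.py at it (not run here):
# python train.py --data_dir data/my_corpus
```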
Hello everyone. Does anyone know why TensorFlow is using the CPU instead of the GPU?
I have tensorflow-gpu installed
does anyone know how to export the generated samples into an output.txt file?
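Since sample.py just prints the generated text to stdout, the simplest export is a shell redirect, something like `python sample.py > output.txt`. Inside your own fork you can do the same thing explicitly by writing the sampled string to a file instead of printing it; a minimal sketch (sample_text stands in for the real model.sample(...) result):

```python
# Stand-in for the string sample.py gets back from the model
sample_text = "your mom is unit unblong?"

# Write it out with an explicit encoding (also avoids Windows console issues)
with open("output.txt", "w", encoding="utf-8") as f:
    f.write(sample_text + "\n")
```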
Can anyone tell me how to convert a checkpoint to a SavedModel?