    Ajay Unagar
    @ajayunagar
    Awesome man!
    Paarth Neekhara
    @paarthneekhara
    Thanks :smile:
    ankitsxn5143
    @ankitsxn5143
    nice :-D
    Liang Shuailong
    @Shuailong
    Cool
    jayathungek
    @jayathungek
    Hey guys! I was just wondering if there is still some work to be done on this project. I'm looking to contribute somehow.
    Paarth Neekhara
    @paarthneekhara
    Did you come across this?
    This is some recent work in this area done by the author of the same paper. You can work on extending this project to replicate this new work in tensorflow.
    jayathungek
    @jayathungek
    Ah thanks, I'll check that out :smile:
    Hao
    @zsdonghao
    This repo helped me a lot :smile:
    I was just wondering: if I would like to train the word embedding and RNN end-to-end, when should I update the embedding matrix and RNN variables?
    Paarth Neekhara
    @paarthneekhara
    Glad it helped you! I think you'll be using the same weights for the RNN in the discriminator and generator. It makes sense to update them while optimizing both the discriminator and generator loss. You may need to experiment with it.. I haven't tried it out yet. Do go ahead.
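    A rough TF 1.x-style sketch of that suggestion, for concreteness: one RNN text encoder feeds both the generator and the discriminator, and its variables are added to both optimizers' var_lists so it is updated with d_loss as well as g_loss. All scope names, sizes, and the toy G/D bodies below are illustrative assumptions, not this repo's actual code.

```python
import tensorflow as tf

captions    = tf.placeholder(tf.int32,   [None, 20])            # caption token ids
real_images = tf.placeholder(tf.float32, [None, 64 * 64 * 3])   # flattened real images
z           = tf.placeholder(tf.float32, [None, 100])           # noise vector

# Shared text encoder: its variables live under one scope so they can be
# collected by name below.
with tf.variable_scope('txt_rnn'):
    embed  = tf.get_variable('word_embedding', [5000, 256])
    inputs = tf.nn.embedding_lookup(embed, captions)
    cell   = tf.nn.rnn_cell.GRUCell(128)
    _, caption_vec = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

with tf.variable_scope('g'):
    fake_images = tf.layers.dense(tf.concat([z, caption_vec], 1),
                                  64 * 64 * 3, tf.nn.tanh, name='g_out')

def discriminate(images, reuse=False):
    with tf.variable_scope('d', reuse=reuse):
        h = tf.layers.dense(tf.concat([images, caption_vec], 1),
                            128, tf.nn.leaky_relu, name='d_h1')
        return tf.layers.dense(h, 1, name='d_logit')

d_real = discriminate(real_images)
d_fake = discriminate(fake_images, reuse=True)

bce = tf.nn.sigmoid_cross_entropy_with_logits
d_loss = tf.reduce_mean(bce(logits=d_real, labels=tf.ones_like(d_real))) + \
         tf.reduce_mean(bce(logits=d_fake, labels=tf.zeros_like(d_fake)))
g_loss = tf.reduce_mean(bce(logits=d_fake, labels=tf.ones_like(d_fake)))

# The encoder's variables appear in BOTH var_lists, so the RNN/embedding is
# updated while optimizing the discriminator loss and the generator loss.
t_vars   = tf.trainable_variables()
rnn_vars = [v for v in t_vars if v.name.startswith('txt_rnn/')]
d_vars   = [v for v in t_vars if v.name.startswith('d/')] + rnn_vars
g_vars   = [v for v in t_vars if v.name.startswith('g/')] + rnn_vars

d_optim = tf.train.AdamOptimizer(2e-4, beta1=0.5).minimize(d_loss, var_list=d_vars)
g_optim = tf.train.AdamOptimizer(2e-4, beta1=0.5).minimize(g_loss, var_list=g_vars)
```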
    Hao
    @zsdonghao
    @paarthneekhara Thanks, I will let you know when I'm done :D
    Hao
    @zsdonghao
    @paarthneekhara Hi Paarth, I am still working on training the word embedding and RNN end-to-end, but I found that the d_loss is very high while the g_loss is very low. Do you have any idea about that?
    Hao
    @zsdonghao
    I wonder why there are two dense layers to reduce the output size of the RNN, one in the generator and one in the discriminator, separately.
    Paarth Neekhara
    @paarthneekhara
    Not much can be inferred from g_loss and d_loss; both of them keep increasing and decreasing throughout the training. Did you try training the existing model? Do d_loss and g_loss follow a similar trend? They should follow a trend similar to the graphs shown here: https://github.com/carpedm20/DCGAN-tensorflow
    Working with separate layers for the generator and discriminator perhaps makes more sense. I think the same was done in the Torch implementation of the paper. You may verify that.
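    For concreteness, a minimal sketch of what those two reduction layers could look like, assuming TF 1.x-style code; the names and sizes here are illustrative, not necessarily the ones used in this repo.

```python
import tensorflow as tf

# The text/RNN embedding is reduced by two *separate* dense layers, one owned
# by the generator and one by the discriminator, rather than a single shared
# reduction layer.
caption_embedding = tf.placeholder(tf.float32, [None, 1024])   # e.g. an RNN / skip-thought output

with tf.variable_scope('g'):
    # generator's own compressed text code (trained with g_loss)
    g_txt = tf.layers.dense(caption_embedding, 128, tf.nn.leaky_relu, name='g_embed')

with tf.variable_scope('d'):
    # discriminator's own, independently learned compression (trained with d_loss)
    d_txt = tf.layers.dense(caption_embedding, 128, tf.nn.leaky_relu, name='d_embed')
```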
    Hao
    @zsdonghao
    Thank you, Paarth. You are right, the embedding layers should be separate. I am now able to generate images, but the g_loss is still very high (around 5), and it generates the same image from the same caption with different noise (z)..
    Paarth Neekhara
    @paarthneekhara
    Glad to know you were able to work it out. Is the generated image corresponding to the caption? Do you mean changing does not change the image at all? And which dataset are you working on?
    Paarth Neekhara
    @paarthneekhara
    *changing z
    Hao
    @zsdonghao
    Hi Paarth, I am using the flowers dataset. I can see the images somehow match the caption, but the strange thing is that no matter how I change the noise (z), the images look the same for the same caption.. the pixel values are only slightly different..
    Paarth Neekhara
    @paarthneekhara
    I would like to have a look at the code. I am not really sure what happened there.. Are the z_dim and caption embedding dimension the same as in my code?
    Hao
    @zsdonghao
    Hi Paarth, I found that the RNN should only be updated when I update D; that solved the problem.
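    A hedged sketch of that fix, again in TF 1.x style: the text encoder's variables go only into the discriminator's var_list, so the RNN/embedding is updated with D and left untouched during the G step. The tiny stand-in graph below exists only to make the var_list split concrete; every name is illustrative.

```python
import tensorflow as tf

with tf.variable_scope('txt_rnn'):
    txt_vec = tf.get_variable('w', [1, 128])                # stand-in for the RNN text encoder
with tf.variable_scope('g'):
    g_out = tf.layers.dense(txt_vec, 16, name='g_fc')       # stand-in generator
with tf.variable_scope('d'):
    d_logit = tf.layers.dense(g_out, 1, name='d_fc')        # stand-in discriminator

d_loss = tf.reduce_mean(d_logit)                            # placeholder losses, just so
g_loss = -tf.reduce_mean(d_logit)                           # minimize() has something to optimize

t_vars   = tf.trainable_variables()
rnn_vars = [v for v in t_vars if v.name.startswith('txt_rnn/')]
d_vars   = [v for v in t_vars if v.name.startswith('d/')]
g_vars   = [v for v in t_vars if v.name.startswith('g/')]

# The RNN/embedding variables ride along with the discriminator update only;
# the generator step no longer touches them.
d_optim = tf.train.AdamOptimizer(2e-4, beta1=0.5).minimize(d_loss, var_list=d_vars + rnn_vars)
g_optim = tf.train.AdamOptimizer(2e-4, beta1=0.5).minimize(g_loss, var_list=g_vars)
```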
    Paarth Neekhara
    @paarthneekhara
    oh great! So now everything is working fine? Which GPU are you using? How much time did it take to train?
    Hao
    @zsdonghao
    I am using a Titan X (Pascal); it takes 26 seconds per epoch.
    Paarth Neekhara
    @paarthneekhara
    and in how many epochs did the training converge?
    Hao
    @zsdonghao
    There is still one problem: for 100 epochs everything looks good, but as I train more, the g_loss increases: 5 at 500 epochs, 10 at 1000 epochs ...
    So if I train for more epochs, the generator "breaks" and generates strange images.
    Paarth Neekhara
    @paarthneekhara
    ooh.. I am not sure, but you could try updating the generator weights more than once per batch..
    Hao
    @zsdonghao
    I update twice per batch.
    Paarth Neekhara
    @paarthneekhara
    I see.. Will let you know if I get more ideas..
    Hao
    @zsdonghao
    Do you mean I should update it more in this situation?
    Thanks
    Paarth Neekhara
    @paarthneekhara
    ya.. I mean maybe try thrice..
    Hao
    @zsdonghao
    I see, let me try and let you know.
    Paarth Neekhara
    @paarthneekhara
    Cool!
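    A minimal sketch of the update schedule being discussed above: one discriminator step followed by several generator steps on the same batch. The two helpers are hypothetical stand-ins for whatever runs d_optim / g_optim with the batch's feed_dict in the real training script.

```python
G_UPDATES_PER_BATCH = 2   # Hao is using 2 here; the suggestion above is to try 3

def run_d_step(batch):
    """Stand-in for sess.run(d_optim, feed_dict=...) in the real code."""
    pass

def run_g_step(batch):
    """Stand-in for sess.run(g_optim, feed_dict=...) in the real code."""
    pass

def train_epoch(batches):
    for batch in batches:
        run_d_step(batch)                        # one discriminator update per batch
        for _ in range(G_UPDATES_PER_BATCH):     # several generator updates per batch
            run_g_step(batch)
```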
    Ajay Unagar
    @ajayunagar
    Making images more realistic with StackGAN
    Paarth Neekhara
    @paarthneekhara
    Yes.. saw that.. pretty cool :smile:
    Ajay Unagar
    @ajayunagar
    Paarth, on which system did you run your program?
    anushaGundapaneni
    @anushaGundapaneni
    hi.. I need to know how to find out how accurate my output image is.. if anyone knows, please tell me
    Sandu Ursu
    @sursu
    @paarthneekhara thanks a lot for the code. There is an issue with python download_datasets.py: flowers_text_c10.tar.gz appears to be missing.
    Funky yoda
    @anonymouspogo_twitter
    hey guys, I am confused by this portion. Can anyone help me with this step: "Download the pretrained models and vocabulary for skip thought vectors as per the instructions given here. Save the downloaded files in Data/skipthoughts."
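    A hedged sketch for that step: fetch the pretrained skip-thought files into Data/skipthoughts. The URLs below are the ones listed in the ryankiros/skip-thoughts README and are an assumption here; if a download fails, check that README for the current locations. (Python 3 shown.)

```python
import os
from urllib.request import urlretrieve   # Python 3

# File list and base URL as given in the ryankiros/skip-thoughts README;
# verify them there if anything has moved.
BASE  = 'http://www.cs.toronto.edu/~rkiros/models/'
FILES = ['dictionary.txt', 'utable.npy', 'btable.npy',
         'uni_skip.npz', 'uni_skip.npz.pkl', 'bi_skip.npz', 'bi_skip.npz.pkl']

target_dir = os.path.join('Data', 'skipthoughts')
os.makedirs(target_dir, exist_ok=True)

for fname in FILES:
    dest = os.path.join(target_dir, fname)
    if not os.path.exists(dest):          # skip files already downloaded
        print('downloading', fname, '...')
        urlretrieve(BASE + fname, dest)
```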
    prem1yadav
    @prem1yadav
    Can you help me? As a beginner, where should I start?