HuniyaArif
@HuniyaArif
http://deeplearning.net/tutorial/lstm.html I ran this LSTM code on my own data, but I don't understand the output: Train 0.281, Valid 0.0556, Test 0.5. This line is printed multiple times. How can I find the overall accuracy of the LSTM from these numbers?
Nicu Tofan
@TNick
The inner loop prints the accuracy for the train subset, the valid subset, and the test subset
After training ends, these values are computed one last time
HuniyaArif
@HuniyaArif
So the last time is the overall accuracy? For accuracy I look at the test value?
Nicu Tofan
@TNick
The value you're interested in, I think, is the value after the test in the last printed line. That tells you how well your model performs on an input that the model never saw before / is not used in any way in training that model.
HuniyaArif
@HuniyaArif
@TNick This is my result http://i.stack.imgur.com/Cp78v.png
Nicu Tofan
@TNick
Right; the relevant lines that produce that results line are:
train_err = pred_error(f_pred, prepare_data, train, kf_train_sorted)
valid_err = pred_error(f_pred, prepare_data, valid, kf_valid)
test_err = pred_error(f_pred, prepare_data, test, kf_test)
As you can see, the code simply does a forward propagation and computes the result
Interpretation: on the train dataset the model predicts the wrong thing 28% of the time (and gets it right 72% of the time)
on the validation dataset the error goes down to 5%
on the test dataset the error is 50%
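(Since these are error rates, accuracy is simply 1 minus the error. A minimal sketch of how `pred_error`-style numbers relate to accuracy, using made-up predictions rather than the tutorial's actual outputs:)

```python
import numpy as np

def pred_error_rate(predictions, targets):
    """Fraction of examples where the predicted class differs from the
    target, i.e. the kind of number the tutorial's pred_error reports."""
    predictions = np.asarray(predictions)
    targets = np.asarray(targets)
    return float(np.mean(predictions != targets))

# Hypothetical predicted labels vs. true labels on a tiny test set:
preds = [1, 0, 1, 1]
labels = [1, 0, 0, 1]

err = pred_error_rate(preds, labels)
acc = 1.0 - err  # accuracy is just 1 - error
print(err, acc)  # 0.25 0.75
```

So "Test 0.5" in the printout means 50% test error, i.e. 50% test accuracy.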
I should mention that I'm only speaking in general terms here, as I don't have the time to read the code in http://deeplearning.net/tutorial/code/lstm.py and understand exactly what it's doing
I'm not fluent enough in these matters to understand the details at a glance
HuniyaArif
@HuniyaArif
Thank you so much!
Nicu Tofan
@TNick
sure, welcome
HuniyaArif
@HuniyaArif
I switched to lstm code by keras library and got better results (86%!)
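(For anyone curious, the Keras LSTM being referred to typically looks something like the sketch below. The vocabulary size, layer widths, and binary output are assumptions for illustration, not HuniyaArif's actual configuration.)

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

# A minimal LSTM sequence classifier: token ids -> embeddings -> LSTM -> sigmoid
model = Sequential([
    Embedding(input_dim=10000, output_dim=128),  # assumed vocab of 10k tokens
    LSTM(128),                                   # assumed hidden size
    Dense(1, activation="sigmoid"),              # assumed binary label
])
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])

# model.fit(x_train, y_train, validation_data=(x_valid, y_valid))
# would then report accuracy directly, unlike the tutorial's error rates.
```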
Nicu Tofan
@TNick
for train subset?
HuniyaArif
@HuniyaArif
Test
Nicu Tofan
@TNick
oh. By the way, do you plan to give TensorFlow a try? I don't have the time to do that myself these days.
Prantik
@Prantik13278284_twitter
Hello, I am Prantik Chakraborty. I am a coding and deep learning enthusiast and I want to contribute to open source libraries and learn about them
shriyashish
@shriyashish

Hello everyone! I'm Shriyashish. I'm yet another engineering student but yes I do have something exciting for all of you! I'm a part of IEEE- VIT, one of the most active chapters in Region 10 of IEEE International. In Gravitas 2020, the biggest Techno-Management Extravaganza in VIT, IEEEVIT brings you OpenCon: A Virtual Tech Conference with Speakers from well-established companies like Google, Microsoft, Uber, Spotify who are experienced in various domains ranging from UI/UX and Web development to AI/ML and Big Data Analysis.

These industry experts are going to be talking about their journey in their respective domains and will also be interacting with the attendees, answering their queries, and providing them with the right guidance to kickstart their journey in the world of tech.

Register for OpenCon at opencon.ieeevit.org