These are chat archives for FreeCodeCamp/DataScience
discussion on how we can use statistical methods to measure and improve the efficacy of http://freeCodeCamp.com
The process of tuning parameters is usually a combination of trial and error plus experience in the field. The more experience you have, the easier the selection.
All those libraries offer ways to experiment with combinations of parameters. In R there is
caret, in scikit-learn the
GridSearchCV class. From what I saw it seems you are using TensorFlow. The person helping you on Stack Exchange mentions
DeepMind for your case.
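To make the grid-search idea concrete, here is a minimal sketch with scikit-learn's GridSearchCV on synthetic data (the dataset, model, and parameter values are made up for illustration):

```python
# Minimal grid-search sketch: try every combination of candidate
# hyperparameters with cross-validation and keep the best one.
# Data and parameter values are hypothetical.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# Candidate values to try for each hyperparameter.
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}

search = GridSearchCV(SVC(), param_grid, cv=3)
search.fit(X, y)

print(search.best_params_)  # combination with the best cross-validated score
```

caret's `train()` in R plays the same role: you hand it a grid of tuning values and it evaluates each combination by resampling.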
If you are new to this, plain TensorFlow will be hard: it is built around neural networks, and those are very difficult to grasp. Additionally, it is a myth that neural networks are the best for everything. There are many cases, especially commercial solutions, where the BEST option with the BEST outcome will always be a simpler model. If you are new to this, I seriously advise you to look for a simpler solution and understand the ML methodologies before getting into something so complicated.
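As a sketch of what "start simple" looks like in practice: a logistic regression baseline is a few lines and gives you a cross-validated score to beat before reaching for a neural net (data here is synthetic, just for illustration):

```python
# A simple, interpretable baseline: logistic regression with
# cross-validation. Hypothetical data; the point is the workflow,
# not the numbers.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=0)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())  # baseline accuracy a fancier model should beat
```

If a neural network cannot clearly beat a baseline like this, the extra complexity is not buying you anything.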
You also wrote:
How can I best tweak the above lines in order to minimize the loss and have the learning curve look like a normal one?
What exactly are you expecting to behave as "normal"? This comment is unclear: the "learning curve" you are mentioning is generally not expected to look "normal", but asymptotic:
And even after reaching asymptotic behaviour you might need to tweak the model to reach a higher level of accuracy. Read the following solution to see why:
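To illustrate what asymptotic behaviour means here, this toy sketch (made-up numbers, not real training output) models a loss curve that decays toward a floor instead of toward zero:

```python
# Illustrative only: a loss curve often decays roughly exponentially
# toward an irreducible floor (the asymptote), not toward zero.
import math

floor = 0.35  # hypothetical irreducible loss the model plateaus at
losses = [floor + 2.0 * math.exp(-0.1 * epoch) for epoch in range(100)]

# Early epochs improve a lot; late epochs barely move the loss.
print(losses[0] - losses[1])    # large early improvement
print(losses[98] - losses[99])  # near-zero late improvement
```

Once the curve flattens near that floor, further epochs do little, and pushing accuracy higher means changing the model or the data, not just training longer.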
You have an answer there on Stack Exchange. I agree with the poster: you are experimenting at the moment, and you might need to get more insight into ML. The modification to your code made by the person on Stack Exchange won't help if you don't understand neural networks well.