These are chat archives for FreeCodeCamp/DataScience
discussion on how we can use statistical methods to measure and improve the efficacy of http://freeCodeCamp.com
Just watching Andrew Ng's Coursera Deep Learning course, https://www.coursera.org/learn/neural-networks-deep-learning/lecture/NYnog/vectorization. I don't consider myself an expert, but I have already seen several courses on the topic, so I probably know more than some of you.
I can say that despite what he claims, it might be HARD for some to follow. It is THE course, in my opinion - it covers the basis of backprop, so derivatives, the explanation of loss and cost functions, etc. are already part of the course.
It also covers concepts like vectorization, which could be a bit hard for a novice to grasp, especially with no notion of matrices.
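To give a feel for what vectorization means in practice, here is a minimal sketch (my own hypothetical example, not taken from the course) comparing a Python loop with the equivalent vectorized NumPy call:

```python
# Hypothetical illustration of vectorization: computing a weighted sum w.x
# element by element vs. as a single vector operation.
import numpy as np

w = np.array([0.2, 0.5, 0.3])  # example weights
x = np.array([1.0, 2.0, 3.0])  # example inputs

# Loop version: one multiply-add per element.
z_loop = 0.0
for i in range(len(w)):
    z_loop += w[i] * x[i]

# Vectorized version: the same computation as one matrix/vector operation,
# which NumPy dispatches to optimized (and GPU-friendly) linear algebra code.
z_vec = np.dot(w, x)

print(z_loop, z_vec)  # both give the same weighted sum
```

On tiny arrays like this the difference is invisible, but on the large matrices used in neural networks the vectorized form is dramatically faster, which is exactly why the course spends time on it.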
I am just at the beginning, jumping around but stopping in some places to revisit some topics.
This is the kind of thing you want to know if you really want to learn to fine-tune an implementation knowing more or less what you are doing - sometimes an advantage over pure trial and error, where you don't understand a single thing about why you are doing it.
A bit of science added to the handcraft.
Python (yuppiiii!!!) + NVIDIA GPUs - according to the introduction.
If you have time and you are ready for a more in-depth introduction to the basics, I would advise this one over any other I have seen so far.
REMEMBER: it is true that you don't need all those concepts to run your NNs, but you might not be able to really understand your implementation, change it, or fine-tune it if you don't have any clue what is happening. You probably won't have the full picture even after this or any intensive course, but having a bit of an idea is better than not having a clue.