These are chat archives for FreeCodeCamp/DataScience
discussion on how we can use statistical methods to measure and improve the efficacy of http://freeCodeCamp.com
evaristoc sends brownie points to @erictleung :sparkles: :thumbsup: :sparkles:
I've been studying support vector machines lately and ran into some really good material for understanding them. This lecture at MIT (49 minutes) has a wonderful professor teaching the concept. I don't think there are many prerequisites to understand it. Some linear algebra may help.
The one thing brought up that I didn't quite understand was Lagrange multipliers, but I found this (10 minutes) helpful for understanding them. Cheers!
@erictleung thanks! really good.
@darwinrc I remember you were taking the Andrew Ng ML course? He explains SVM in a very straightforward way. If you want a better idea of the derivation of the SVM, check @erictleung's link above (the MIT lecture by Winston): EX-CE-LLENT!!!
Furthermore: are you into mathematical history and want a contemporary story? Just a simple anecdote about how long it took before people started taking the work by V. Vapnik and A. Chervonenkis on SVM seriously: 30 years!!! Check the last minutes of the MIT YouTube lesson to learn more...
Well, people like Copernicus had a worse time, though... But that still says a lot about how lucky you have to be anyway...
evaristoc sends brownie points to @erictleung and @darwinrc :sparkles: :thumbsup: :sparkles:
@erictleung Lagrange multipliers are normally used to solve optimisation problems with equality and inequality constraints. I didn't know until I saw the video that the first basic problem by Vapnik and Chervonenkis was a quadratic programming problem subject to the Karush-Kuhn-Tucker conditions.
Check for example: http://cs229.stanford.edu/notes/cs229-notes3.pdf
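For anyone who wants to play with the idea rather than just read the proofs, here's a minimal sketch (my own toy example, not from the lecture or the notes) of the kind of equality-constrained problem Lagrange multipliers solve: maximise f(x, y) = x + y on the unit circle. scipy's SLSQP solver enforces the KKT conditions numerically.

```python
# Toy constrained optimisation: maximise x + y subject to x^2 + y^2 = 1.
# The analytic answer via Lagrange multipliers is (1/sqrt(2), 1/sqrt(2)).
import numpy as np
from scipy.optimize import minimize

objective = lambda v: -(v[0] + v[1])  # negate: minimising -f maximises f
constraint = {'type': 'eq', 'fun': lambda v: v[0]**2 + v[1]**2 - 1}

result = minimize(objective, x0=np.array([1.0, 0.0]),
                  method='SLSQP', constraints=[constraint])
print(result.x)  # approx [0.707, 0.707]
```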
The way they found the optimising variables (in the dot product), the later discovery that simply replacing that dot product with another kernel was enough to find an optimum for non-linear problems, AND the fact that the optimisation function is always convex is just... beautiful.
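You can see the kernel trick in two lines with sklearn (again, my own sketch, not from the lecture): the same SVM solver, with only the kernel swapped, separates data that no linear boundary can.

```python
# Concentric circles: linearly inseparable, but an RBF kernel handles them.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

for kernel in ('linear', 'rbf'):
    clf = SVC(kernel=kernel).fit(X, y)
    print(kernel, clf.score(X, y))
# linear scores around chance; rbf is near 1.0 -- only the dot product changed
```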
I really enjoyed the lesson. I hope to have more time to follow more of them in the future, really...