These are chat archives for FreeCodeCamp/DataScience

21st
May 2016
Jacob Bogers
@jacobbogers
May 21 2016 00:40
@erictleung cluster analysis is chapter 8 of that book I am reading, not there yet
oops, chapter 13, sorry, misread
cluster analysis is chapter 8 in this book, "Multivariate Data Analysis" (Black and Hair)
265 USD, wtf?
Jacob Bogers
@jacobbogers
May 21 2016 01:12
hello, anyone online here?
Jacob Bogers
@jacobbogers
May 21 2016 01:18
anyone want to study multivariate statistics with me?
Jacob Bogers
@jacobbogers
May 21 2016 01:32
would be cool to get a study group going, it's 15 chapters and lots of exercises
Koustuv Sinha
@koustuvsinha
May 21 2016 05:54
hey @jacobbogers .. I'm interested. Which MOOC?
Gayathry Dasika
@gayathry2612
May 21 2016 06:24
Hey @jacobbogers . I'm interested too.
evaristoc
@evaristoc
May 21 2016 08:40
@jacobbogers please let us know?
@erictleung thanks for sharing that link!
CamperBot
@camperbot
May 21 2016 08:41
evaristoc sends brownie points to @erictleung :sparkles: :thumbsup: :sparkles:
:cookie: 350 | @erictleung |http://www.freecodecamp.com/erictleung
Jacob Bogers
@jacobbogers
May 21 2016 12:00
Hi people, you can join me in my channel "mathematics".
It's 700 pages (index starts at 734).
I think if we do this book, we will be more than knowledgeable about multivariate statistics
Jacob Bogers
@jacobbogers
May 21 2016 12:14
Not sure if we should use R or something else; I think we should stick to R
evaristoc
@evaristoc
May 21 2016 12:23
@jacobbogers could you try some of those analyses on our data instead? Then we can all learn from your examples.
evaristoc
@evaristoc
May 21 2016 12:55
Ahh! @jacobbogers I found that you studied in Delft!!! Nice!!! How did you find it? Applied Physics is a bit tough! Did you have to do that in Dutch?
Eric Leung
@erictleung
May 21 2016 16:38
@evaristoc I think you mentioned needing a hand with responsive D3.js. I still don't have experience with it but I was interested in responsive behavior for charts. If you haven't stumbled upon this StackOverflow post, it looks promising.
Eric Leung
@erictleung
May 21 2016 16:45

I've been studying support vector machines lately and ran into some really good material to understand it. This lecture at MIT (49 minutes) has a wonderful professor teaching this concept. I don't think there are many prerequisites to understand it. Some linear algebra may help.

The one thing that was brought up that I didn't quite understand was Lagrange multipliers, but I found this (10 minutes) to help me understand that. Cheers!

Jacob Bogers
@jacobbogers
May 21 2016 17:24
if anyone is interested in joining my self-study group, let me know; join my channel "mathematics"
Jacob Bogers
@jacobbogers
May 21 2016 17:34
Proviso, of course: you need to have finished first-year analysis (is that what it's called in the USA?) and linear algebra, ..., plus an introductory course in statistics (know how to derive the Student's t-distribution, etc.)
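To give an idea of the level, a sketch of the standard starting point for that derivation (just for illustration): if $Z \sim N(0,1)$ and $V \sim \chi^2_\nu$ are independent, then

$$T = \frac{Z}{\sqrt{V/\nu}} \sim t_\nu,$$

and for a sample $X_1, \dots, X_n$ from $N(\mu, \sigma^2)$ this gives

$$\frac{\bar{X} - \mu}{S/\sqrt{n}} \sim t_{n-1},$$

since $\bar{X}$ and $S^2$ are independent and $(n-1)S^2/\sigma^2 \sim \chi^2_{n-1}$. That is the kind of argument I mean by "derive".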
evaristoc
@evaristoc
May 21 2016 20:42

@erictleung thanks! really good.

@darwinrc I remember you were taking the Andrew Ng course in ML? He explains SVM in a very straightforward way. Want to get a better idea of the derivation of the SVM? Then check the link @erictleung provided above (the MIT lecture by Winston): EX-CE-LLENT!!!

Furthermore: are you into mathematical history and want to hear a contemporary story? Just a simple anecdote about how long it took before people started taking the work by V. Vapnik and Chervonenkis on SVMs seriously: 30 years!!! Check the last minutes of the MIT YouTube lesson to learn more...

Well, people like Copernicus had a worse time though... But that still says a lot about how lucky you must be anyway...

CamperBot
@camperbot
May 21 2016 20:42
evaristoc sends brownie points to @erictleung and @darwinrc :sparkles: :thumbsup: :sparkles:
:cookie: 351 | @erictleung |http://www.freecodecamp.com/erictleung
:cookie: 422 | @darwinrc |http://www.freecodecamp.com/darwinrc
evaristoc
@evaristoc
May 21 2016 21:40

@erictleung Lagrange multipliers are normally used to solve optimisation problems with equality and inequality constraints. I didn't know until I saw the video that the first basic problem by Vapnik and Chervonenkis resembled a specific constrained optimisation problem (a quadratic program) with Karush-Kuhn-Tucker conditions.

Check for example: http://cs229.stanford.edu/notes/cs229-notes3.pdf
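Roughly the setup from those notes, just a sketch (see the link for the full treatment): for the hard-margin case you solve

$$\min_{w,b}\ \tfrac{1}{2}\|w\|^2 \quad \text{s.t.}\quad y_i(w^\top x_i + b) \ge 1,\ i = 1,\dots,n$$

Form the Lagrangian with multipliers $\alpha_i \ge 0$:

$$L(w, b, \alpha) = \tfrac{1}{2}\|w\|^2 - \sum_i \alpha_i\,\big[\,y_i(w^\top x_i + b) - 1\,\big]$$

Setting the derivatives with respect to $w$ and $b$ to zero gives $w = \sum_i \alpha_i y_i x_i$ and $\sum_i \alpha_i y_i = 0$, so the dual becomes

$$\max_{\alpha \ge 0}\ \sum_i \alpha_i - \tfrac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j\, x_i^\top x_j$$

and the KKT complementary slackness condition $\alpha_i[y_i(w^\top x_i + b) - 1] = 0$ is exactly why only the support vectors end up with $\alpha_i > 0$. Note that the data enter only through the dot products $x_i^\top x_j$.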

The way they found the optimising variables (in the dot product), then discovered that merely translating that dot product into another kernel was enough to find an optimum for non-linear problems, AND that the optimisation function is always convex, is just... beautiful.
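A tiny sketch of that kernel swap in R (assuming the e1071 package; the data and settings here are made up purely for illustration): the fitting call stays the same, only the kernel argument changes.

```r
# Minimal sketch with e1071 (assumed installed): same svm() call,
# only the kernel changes -- the "swap the dot product" idea.
library(e1071)

set.seed(1)
# Two-class data that is not linearly separable:
# the class depends on the distance from the origin.
n <- 200
x <- matrix(rnorm(2 * n), ncol = 2)
y <- factor(ifelse(sqrt(rowSums(x^2)) > 1.2, "outer", "inner"))
dat <- data.frame(x1 = x[, 1], x2 = x[, 2], y = y)

# Linear kernel: plain dot product x_i . x_j -- struggles on this data
fit_linear <- svm(y ~ ., data = dat, kernel = "linear", cost = 1)

# Radial (RBF) kernel: exp(-gamma * ||x_i - x_j||^2) replaces the dot product
fit_radial <- svm(y ~ ., data = dat, kernel = "radial", cost = 1)

# Compare training accuracy of the two models
mean(predict(fit_linear, dat) == dat$y)
mean(predict(fit_radial, dat) == dat$y)
```

On data like this the radial kernel should separate the classes easily while the linear one cannot, which is the whole point of the trick.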

I really enjoyed the lesson. I hope to have more time to follow more of those in the future, really...