These are chat archives for FreeCodeCamp/DataScience

4th
Dec 2018
Niranjan Salimath
@srniranjan
Dec 04 2018 17:30
Hey Guys, I was reading "Elements of Statistical Learning" by Hastie, Tibshirani, and Friedman (Vol. 2)
On page 12 they say,
In the (p + 1)-dimensional input–output space, (X, Ŷ) represents a hyperplane. If the constant is included in X, then the hyperplane includes the origin and is a subspace; if not, it is an affine set cutting the Y-axis at the point (0, β̂₀). From now on we assume that the intercept is included in β̂.
Anyone else reading / read that book?
Can anyone help me understand that paragraph?
Alice Jiang
@becausealice2
Dec 04 2018 18:25
@srniranjan I haven't read that book, and I'm not sure I could guess at that excerpt's meaning without a bit more context. Sorry :/
Eric Leung
@erictleung
Dec 04 2018 21:51
@srniranjan all that statement is saying is that the fitted hyperplane can pass through parts of the coordinate space away from the origin, and to do that you need the intercept term β̂₀ to offset it. It is a very mathy way to say something very simple. I wouldn't worry about it too much.
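If a concrete example helps, here's a rough numpy sketch of the same idea (the data and numbers are made up): folding a constant column of 1s into X lets one coefficient vector β̂ carry the intercept β̂₀ as its first entry, which is what the book means by "the intercept is included in β̂".
```python
import numpy as np

# Made-up example data (hypothetical numbers): n = 5 samples, p = 2 features.
X = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [3.0, 1.5],
              [4.0, 3.0],
              [5.0, 2.5]])
y = np.array([3.1, 3.9, 6.2, 9.0, 10.1])

# "Including the intercept in beta-hat" means prepending a constant column of 1s
# to X, so the first fitted coefficient is beta_0 and a single vector beta_hat
# describes the whole fit. In the augmented input space, y_hat = X1 @ beta_hat is
# linear rather than affine, so the fitted hyperplane passes through the origin;
# without the 1s column you would carry a separate intercept, and the fit would be
# an affine set crossing the Y-axis at (0, beta_0).
X1 = np.column_stack([np.ones(len(X)), X])

# Ordinary least squares fit
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)

print("beta_0 (intercept):", beta_hat[0])
print("beta_1..p:", beta_hat[1:])
print("fitted values:", X1 @ beta_hat)
```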
Norvin Burrus
@ndburrus
Dec 04 2018 23:01
@MahmoudElsayad this may be helpful :sparkles:
Eric Leung
@erictleung
Dec 04 2018 23:46
@srniranjan here's an unofficial set of solutions for the Introduction to Statistical Learning book (http://blog.princehonest.com/stat-learning/) and solutions to selected problems in the ESL book (https://waxworksmath.com/Authors/G_M/Hastie/hastie.html).