Hey guys, I was reading "The Elements of Statistical Learning" by Hastie, Tibshirani, and Friedman (2nd ed.).

On page 12 they say,

`In the (p + 1)-dimensional input–output space, (X, Ŷ) represents a hyperplane. If the constant is included in X, then the hyperplane includes the origin and is a subspace; if not, it is an affine set cutting the Y-axis at the point (0, β̂₀). From now on we assume that the intercept is included in β̂.`

Anyone else reading / read that book?

Can anyone help me understand that paragraph?

@srniranjan I haven't read that book, and I'm not sure I could guess at that excerpt's meaning without a bit more context. Sorry :/

@srniranjan all that statement is saying is that the fitted surface doesn't have to pass through the origin — the intercept β̂₀ shifts it off. If you prepend a constant 1 to every input vector, the intercept becomes just another coefficient inside β̂, and then the fitted set in the (p+1)-dimensional (x, ŷ) space does pass through the origin (a linear subspace rather than an affine set). It's a very mathy way to say something simple; I wouldn't worry about it too much.
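@srniranjan if it helps to see it numerically, here's a minimal sketch (my own toy data, not from the book) of the "include the constant in X" trick the passage describes: augmenting X with a column of ones makes the intercept just another entry of β̂.

```python
import numpy as np

# Toy data with a known intercept: y = 2 + 3x (illustrative values).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=20)
y = 2.0 + 3.0 * x

# Without a constant column, least squares forces the fit through the origin,
# so it cannot recover the intercept of 2.
beta_no_const, *_ = np.linalg.lstsq(x.reshape(-1, 1), y, rcond=None)

# Augment X with a column of ones: the intercept is absorbed into beta-hat,
# and the fitted set {(x, x @ beta)} passes through the origin of the
# (p + 1)-dimensional input-output space.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta)  # beta[0] is the intercept beta_0, beta[1] the slope
```

Since the toy data is exactly linear, `beta` comes back as `[2.0, 3.0]` (up to floating point), while the no-constant fit cannot match both the slope and the offset.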

@srniranjan here's an unofficial set of solutions for the **Introduction** to Statistical Learning book http://blog.princehonest.com/stat-learning/ and solutions to selected problems in the ESL book https://waxworksmath.com/Authors/G_M/Hastie/hastie.html.