These are chat archives for FreeCodeCamp/DataScience
discussion on how we can use statistical methods to measure and improve the efficacy of http://freeCodeCamp.com
I was checking, and your
inv with the dot product seems to match the correct expression. What error are you getting?
And to solve your issue in the Jupyter notebook, check this:
Notice that I am referring to SO. I am more than happy to help, and glad you are asking here, but it may be that more of your problems are already solved somewhere else :) .
I advise you to print the shapes of the matrices you are working with.
Another thing: it is easy to get confused, but there is a huge difference between the inner (dot) product and the outer product when multiplying matrices. Also, check for transposes as you did for the second case.
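A quick sketch of both points, with made-up vectors (the names here are illustrative, not from your notebook):

```python
import numpy as np

theta = np.array([1.0, 2.0, 3.0])  # hypothetical parameter vector
x = np.array([4.0, 5.0, 6.0])      # hypothetical feature vector

# Always print the shapes first when a matrix expression misbehaves
print(theta.shape, x.shape)  # (3,) (3,)

inner = np.dot(theta, x)   # inner (dot) product -> a single scalar
outer = np.outer(theta, x) # outer product -> a 3x3 matrix
print(inner)               # 32.0
print(outer.shape)         # (3, 3)
```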
I think this multiplication will give you a vector:
thetta.T[i] * x[i]
If I am not wrong, you are expecting a scalar from the error function.
If your purpose is to compare before and after, I don't see the need to divide by 2*m; it won't change the trend. Furthermore, I am not sure whether you are making a conceptual error there?
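A minimal sketch of the vector-vs-scalar point above, with invented numbers: elementwise `*` returns a vector, while the dot product returns the scalar an error term needs.

```python
import numpy as np

theta = np.array([0.5, 1.5])  # hypothetical parameters
x = np.array([2.0, 4.0])      # hypothetical features

print(theta * x)     # elementwise: [1. 6.] -- still a vector
print(theta.dot(x))  # inner product: 7.0 -- a scalar
```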
The normal equation seems to be a bit tricky.
This seems to be the way it is written:
Theta_hat = inv(X.T.dot(X)).dot(X.T).dot(y)
However, I got an error: the matrix inside the inversion is, as said, singular.
You might run into problems with the inversion if the resulting matrix doesn't satisfy certain conditions. It is not true that all square matrices are invertible.
Not necessarily your case, but it can happen.
The pseudo-inverse is proposed as a solution for an overdetermined system, where there are more equations than unknowns. That is also a very likely situation when dealing with regression problems.
You can also check the following:
In order to avoid the complexity of your example, I would suggest simplifying it. You could try to solve for:
import numpy
A = numpy.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [11, 12, 13]])
y = numpy.array([,,,])  # fill in the four target values here
numpy.linalg.pinv(A).dot(y)
Additionally, I suggested finding a vectorized way to compute the error (which you still call the error sum; I personally find that confusing).
I think you can easily calculate the error by using the numpy
norm method after converting into matrices. I am inferring it from an example here.
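A minimal sketch of that vectorized error, assuming a linear-regression setup (the data here is made up; `X`, `y`, and `theta` stand for your design matrix, targets, and parameters):

```python
import numpy as np

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # hypothetical design matrix
y = np.array([2.0, 3.0, 4.0])                        # hypothetical targets
theta = np.array([1.0, 1.0])                         # hypothetical parameters

residual = X.dot(theta) - y  # vectorized: no loop over individual examples
error = np.linalg.norm(residual) ** 2 / (2 * len(y))
print(error)  # 0.0 here, since this theta fits the made-up data exactly
```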
@bigyankarki: may I please ask you to let us know why you are asking all this? ;)
Notice that the matrix from the example I gave you using numpy
is singular and doesn't have an inverse, so you can use
pinv to complete the task.
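To make the singularity concrete, here is a sketch using the `A` from the earlier snippet, assuming the matrix being inverted in the normal equation is the Gram matrix `A.T.dot(A)`:

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [11, 12, 13]], dtype=float)
G = A.T.dot(A)  # the 3x3 matrix the normal equation tries to invert

# The middle column of A is the average of the outer two, so the rank is 2
print(np.linalg.matrix_rank(G))  # 2 -> G is singular, inv(G) is not defined

G_pinv = np.linalg.pinv(G)  # the Moore-Penrose pseudo-inverse always exists
print(G_pinv.shape)          # (3, 3)
```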
I like reading articles on the BBC and found a few that might be of interest:
Kids and Entrepreneurs:
The future Bot Economy and the impact on labour:
The specific case of Bots replacing administrative tasks:
I love Biology and Anthropology topics...