These are chat archives for FreeCodeCamp/DataScience

16th Mar 2018
Can anyone look at this code and figure out what's wrong? Also, is there any better way I could have implemented it?
evaristoc
@evaristoc
Mar 16 2018 19:00

@bigyankarki

I was checking, and your inv with the dot product seems to match the correct expression. What error are you getting?

evaristoc
@evaristoc
Mar 16 2018 20:06

@bigyankarki

And to solve your issue in the Jupyter notebook, check this:
https://stackoverflow.com/a/30059652

Notice that I am referring to SO. I am more than happy to help and glad you are asking here, but it might be that more of your problems are already solved somewhere else :).

evaristoc
@evaristoc
Mar 16 2018 21:06

@bigyankarki
I advise you to print the shapes of the matrices you are working with.
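For example (a minimal sketch with made-up shapes, not your actual variables):

import numpy

X = numpy.random.rand(4, 3)      # hypothetical design matrix
theta = numpy.random.rand(3, 1)  # hypothetical parameter vector
print(X.shape, theta.shape)      # (4, 3) (3, 1)
print(X.dot(theta).shape)        # (4, 1) -- the inner dimensions have to match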

Another thing: it is very confusing, but there is a huge difference between the inner (dot) and the outer product when multiplying matrices. Also, check for transposes, as you did for the second case.
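A quick illustration of the difference (made-up vectors):

import numpy

a = numpy.array([1, 2, 3])
b = numpy.array([4, 5, 6])
print(numpy.dot(a, b))    # inner product: 32, a single scalar
print(numpy.outer(a, b))  # outer product: a 3x3 matrix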

I think this multiplication will give you a vector:
thetta.T[i] * x[i]

If I am not wrong, you are expecting a scalar for the error function.
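Something like this (made-up numbers, not your actual theta or x) shows the difference:

import numpy

theta = numpy.array([1.0, 2.0, 3.0])
x_i = numpy.array([0.5, 1.0, 1.5])
print(theta * x_i)      # elementwise product: still a vector, [0.5 2. 4.5]
print(theta.dot(x_i))   # inner product: a scalar, 7.0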

If your purpose is to compare before and after, I don't see the need to divide by 2*m; it won't change the trend. Furthermore, I am not sure whether you are incurring a conceptual error there?
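What I mean is that the 1/(2*m) factor scales every candidate's cost equally, so the comparison does not change (sketch with made-up residuals):

import numpy

res_before = numpy.array([3.0, -1.0, 2.0])   # hypothetical residuals before
res_after = numpy.array([1.0, 0.5, -0.5])    # hypothetical residuals after
m = len(res_before)
sse_before = (res_before ** 2).sum()         # 14.0
sse_after = (res_after ** 2).sum()           # 1.5
# dividing both by 2*m does not change which one is smaller
print(sse_before > sse_after)                        # True
print(sse_before / (2 * m) > sse_after / (2 * m))    # True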

The normal equation seems to be a bit tricky.

This seems to be the way it is put:
Theta_hat = inv(X.T.dot(X)).dot(X.T).dot(y)
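For illustration, on made-up full-rank data (not your dataset) that line runs fine:

import numpy
from numpy.linalg import inv

X = numpy.array([[1.0, 1.0],
                 [1.0, 2.0],
                 [1.0, 3.0],
                 [1.0, 4.0]])   # first column acts as the intercept term
y = numpy.array([[6.0], [5.0], [7.0], [10.0]])

Theta_hat = inv(X.T.dot(X)).dot(X.T).dot(y)
print(Theta_hat)   # the least-squares estimate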

However, I got an error: the dot product inside the inversion is, as said, singular.

You might run into problems with the inversion if the resulting matrix doesn't satisfy certain conditions. It is not true that all square matrices are invertible.
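For instance, this made-up square matrix has a zero determinant, and numpy refuses to invert it:

import numpy

B = numpy.array([[1.0, 2.0],
                 [2.0, 4.0]])   # second row is twice the first, so det(B) = 0
try:
    numpy.linalg.inv(B)
except numpy.linalg.LinAlgError as err:
    print("not invertible:", err)   # raises "Singular matrix"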

Not necessarily your case but it can happen.

The pseudo-inverse is proposed as a solution for an overdetermined system, where there are more equations than unknowns. That is also a very likely case when dealing with regression problems.
https://www.youtube.com/watch?v=pTUfUjIQjoE

You can also check the following:
https://eli.thegreenplace.net/2014/derivation-of-the-normal-equation-for-linear-regression


In order to avoid the complexity of your example, I would suggest simplifying it. You could try to solve for:

import numpy

A = numpy.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [11, 12, 13]])
y = numpy.array([[1], [2], [1], [1]])

# In Python, the pseudo-inverse solution is
numpy.linalg.pinv(A).dot(y)
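As a cross-check (assuming a recent numpy where rcond=None is accepted), numpy.linalg.lstsq solves the same least-squares problem and should agree with the pinv result:

solution, residuals, rank, sv = numpy.linalg.lstsq(A, y, rcond=None)
print(solution)   # should match numpy.linalg.pinv(A).dot(y)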

Additionally, I suggested finding a vectorized way to compute the error (which you still call the error sum; I personally find that confusing).

I think you can easily calculate the error by using the numpy norm method after converting your arrays into matrices. I am assuming it from an example here.
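For instance (a minimal sketch with made-up X, theta, and y, not your exact variables):

import numpy

X = numpy.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
theta = numpy.array([[0.5], [1.0]])
y = numpy.array([[1.0], [3.0], [3.0]])
residual = X.dot(theta) - y
error = numpy.linalg.norm(residual) ** 2 / (2 * len(y))   # a single scalar
print(error)   # 0.125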

@bigyankarki: May I please ask you to let us know why you are asking all this? ;)

evaristoc
@evaristoc
Mar 16 2018 21:19

@bigyankarki

Notice that for the example I gave you using numpy:

A.T.dot(A)

is a singular matrix and doesn't have an inverse, so you can use pinv to complete the task.
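You can verify that yourself (reusing the same A and y from the example above):

print(numpy.linalg.matrix_rank(A.T.dot(A)))   # 2, but the matrix is 3x3, so it is singular
print(numpy.linalg.pinv(A).dot(y))            # the pseudo-inverse still gives a solution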

evaristoc
@evaristoc
Mar 16 2018 21:24

PEOPLE

I like reading articles on the BBC and found a few that might be of interest.