These are chat archives for harthur/brain

22nd
May 2017
xidorn5
@xidorn5
May 22 2017 01:36
Hello. I am not very familiar with JavaScript. However, I would like to get some information out of the ConvNetJS Image regression demo. In particular, I need to extract the loss over a number of iterations. Any pointers on how I can do this, please?
Right now the loss is being displayed, but I don't know if it can be piped into a database.
Robert Plummer
@robertleeplummerjr
May 22 2017 01:59
I'm familiar. It'd be super simple. Ping me tomorrow?
xidorn5
@xidorn5
May 22 2017 02:32
Great! Thank you
What time, please? @robertleeplummerjr
use something like this ^
xidorn5
@xidorn5
May 22 2017 17:31
Thank you ! @robertleeplummerjr
Robert Plummer
@robertleeplummerjr
May 22 2017 17:31
np!
xidorn5
@xidorn5
May 22 2017 17:35
I understand what the function does (roughly), but how do I run it though?
It just iterates training and loss using an interval
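A minimal sketch of that kind of interval-driven loop, with the loss collected into an array; step() and stats.loss are placeholder names standing in for whatever the demo's callback actually does:

// Collect the loss reported on each training step into an array.
var losses = [];

var timer = setInterval(function () {
  var stats = step();           // placeholder: run one training iteration
  losses.push(stats.loss);      // placeholder: the loss the demo reports
  if (losses.length >= 5000) {  // stop after the iterations of interest
    clearInterval(timer);
    console.log(JSON.stringify(losses)); // copy/paste or save this output
  }
}, 0);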
xidorn5
@xidorn5
May 22 2017 17:37
Yes, but I need to get the numbers out so I can make a plot in python
Robert Plummer
@robertleeplummerjr
May 22 2017 17:38
Why mix the languages?
xidorn5
@xidorn5
May 22 2017 17:39
Because I don't know how to create the plot in JavaScript actually
Robert Plummer
@robertleeplummerjr
May 22 2017 17:40
what is a plot in python?
Maybe you could easily create one in js to cut down on tech debt?
xidorn5
@xidorn5
May 22 2017 17:41
OK... yes, I guess I could
Robert Plummer
@robertleeplummerjr
May 22 2017 17:41
I guess it doesn’t matter too much
xidorn5
@xidorn5
May 22 2017 17:41
I am used to the matlab plot function
No, it doesn't.
Robert Plummer
@robertleeplummerjr
May 22 2017 17:41
matlab = magic
xidorn5
@xidorn5
May 22 2017 17:41
I just need to trace the loss over iterations
Robert Plummer
@robertleeplummerjr
May 22 2017 17:42
do you have the loss stored as a variable?
xidorn5
@xidorn5
May 22 2017 17:42
No, that's what I need to do
That's what I mean by extracting the numbers
Robert Plummer
@robertleeplummerjr
May 22 2017 17:44
xidorn5
@xidorn5
May 22 2017 17:46
Yes, it does
So I can store that in an array and plot a graph from it (I'm looking at plotting over 5000 iterations. Thinking about it, that's a pretty long array)
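One way to get those numbers out of the browser and into a Python plot, sketched under the assumption that the losses have already been collected into an array as above: build a CSV string and trigger a download.

// Turn the collected losses into CSV text and download it from the browser,
// so it can be read into Python/matplotlib (or anything else).
function downloadLosses(losses) {
  var csv = 'iteration,loss\n' +
    losses.map(function (loss, i) { return i + ',' + loss; }).join('\n');
  var blob = new Blob([csv], { type: 'text/csv' });
  var link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'losses.csv';
  link.click();
}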
Robert Plummer
@robertleeplummerjr
May 22 2017 17:49
bah 5k is tiny
xidorn5
@xidorn5
May 22 2017 17:49
:D (Newbie alert, I guess )
Robert Plummer
@robertleeplummerjr
May 22 2017 17:51
if you know python, and matlab, you are far from a newbie ;)
But I know what you mean
xidorn5
@xidorn5
May 22 2017 17:51
:D
I'm still confused about running it. I assume I can't run it off my system. Would JSBin work? Or do I need html?
Robert Plummer
@robertleeplummerjr
May 22 2017 17:52
You don’t need HTML; JSBin would be fine.
or jsfiddle
or console
xidorn5
@xidorn5
May 22 2017 17:52
OK.
Robert Plummer
@robertleeplummerjr
May 22 2017 17:52
or node :)
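If the script runs under Node instead of the browser, the same array (assumed here to be called losses, as in the earlier sketch) can simply be written to disk:

// Write the collected losses to a file when running under Node.
var fs = require('fs');
fs.writeFileSync('losses.json', JSON.stringify(losses));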
xidorn5
@xidorn5
May 22 2017 17:52
Oh. wow!
So many options. I'll check them out and see what I can make happen
Robert Plummer
@robertleeplummerjr
May 22 2017 17:53
brain will have convolutions soon that run on cpu, so you can have wicked fast neural nets, fyi.
xidorn5
@xidorn5
May 22 2017 17:53
Cool!!!!!
Talking about networks, I have a theoretical question on feedforward networks
Are you by any chance into those?
Robert Plummer
@robertleeplummerjr
May 22 2017 17:54
absolutely
xidorn5
@xidorn5
May 22 2017 17:54
:D
Robert Plummer
@robertleeplummerjr
May 22 2017 17:54
a feedforward net is one of the simplest nets
xidorn5
@xidorn5
May 22 2017 17:54
That's what I read
But I am absolutely new to neural nets
Robert Plummer
@robertleeplummerjr
May 22 2017 17:55
brain’s initial net is feedforward with backpropagation.
xidorn5
@xidorn5
May 22 2017 17:55
So still trying to wrap my mind around them
OK
Robert Plummer
@robertleeplummerjr
May 22 2017 17:55
ah, very good
xidorn5
@xidorn5
May 22 2017 17:55
There's a theory that any function can be represented by a feedforward network
Robert Plummer
@robertleeplummerjr
May 22 2017 17:56
Very intrigued, yes
xidorn5
@xidorn5
May 22 2017 17:57
OK. Opening it
But basically, you get an arbitrary graph and are meant to design a feedforward network
Robert Plummer
@robertleeplummerjr
May 22 2017 17:57
Right
xidorn5
@xidorn5
May 22 2017 17:57
I understand what went down in the paper I shared
ok, cool
xidorn5
@xidorn5
May 22 2017 17:58
But when trying to work with a graph that wasn't a bar chart, using ReLU... I could segment the graph and get the line equations
But after that I had no idea how to design the network
The graph isn't a precise summation of the different equations of a line. For example, it has a certain value for 0 < x < 5, then another equation for 5 <= x <= 8, etc.
How are all these summed to design a neural network?
Robert Plummer
@robertleeplummerjr
May 22 2017 18:00
They aren’t; they are summed for the layer’s neurons leading into the neuron: https://github.com/harthur-org/brain.js/blob/master/src/neural-network.js#L111
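A simplified sketch of the forward step that linked code performs: each neuron sums its weighted inputs plus a bias and passes the result through the activation (sigmoid here). This is an illustration of the idea, not the brain.js source itself.

// Sigmoid activation squashes the weighted sum into the range (0, 1).
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// One neuron's output: weighted sum of its inputs plus a bias, then activation.
function neuronOutput(inputs, weights, bias) {
  var sum = bias;
  for (var i = 0; i < inputs.length; i++) {
    sum += weights[i] * inputs[i];
  }
  return sigmoid(sum);
}

// e.g. neuronOutput([0.5, 0.2], [1.3, -0.7], 0.1) -> a value between 0 and 1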
xidorn5
@xidorn5
May 22 2017 18:00
I imagine ReLU can be used to set the different thresholds instead of just being activated when x is non-negative
ok....
So each neuron represents a line equation?
Robert Plummer
@robertleeplummerjr
May 22 2017 18:01
In this context: forward, yes, backwards no
xidorn5
@xidorn5
May 22 2017 18:02
Yes, I know backwards is more complicated
And then the slope == weight and intercept == bias?
Robert Plummer
@robertleeplummerjr
May 22 2017 18:03
That I’m not sure, I have not thought of it like that before.
xidorn5
@xidorn5
May 22 2017 18:03
But how will the network know which neuron to send a signal to? For example, when 0 < x < 5, only one of the neurons should give an output, not all
@robertleeplummerjr It most probably isn't correct. It's just that the equations look alike
Robert Plummer
@robertleeplummerjr
May 22 2017 18:04
Unless you design the network to have a single-neuron layer for its output, it’ll always have more than one neuron output.
The inefficient design of neurons, in the mathematical sense, is that every neuron fires in current implementations of a neural net.
Because we are, in a sense, simulating electricity.
xidorn5
@xidorn5
May 22 2017 18:05
Hm..OK
Robert Plummer
@robertleeplummerjr
May 22 2017 18:05
or whatever means a real neuron uses to activate.
xidorn5
@xidorn5
May 22 2017 18:05
Is that where the thresholding function comes in?
Robert Plummer
@robertleeplummerjr
May 22 2017 18:05
So the output won’t be “This is the one neuron!!! Woohoo”
xidorn5
@xidorn5
May 22 2017 18:06
:D :D
Robert Plummer
@robertleeplummerjr
May 22 2017 18:06
It’ll be more like this: [0.0001, 0.002, 0.1, 0.85, 0.5]
In this scenario, index 43 is the activated neuron.
xidorn5
@xidorn5
May 22 2017 18:06
ok
Robert Plummer
@robertleeplummerjr
May 22 2017 18:06
that is an array in js
xidorn5
@xidorn5
May 22 2017 18:07
oh. You start counting from 1 not 0?
Because 0.85 is the highest number
Robert Plummer
@robertleeplummerjr
May 22 2017 18:07
No, I just can’t count.
index of 3, sorry
xidorn5
@xidorn5
May 22 2017 18:07
:D
Robert Plummer
@robertleeplummerjr
May 22 2017 18:07
you are correct.
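Picking the activated neuron out of such an output array is just an argmax; a small sketch:

// Find the index of the largest output, i.e. the "activated" neuron.
var output = [0.0001, 0.002, 0.1, 0.85, 0.5];
var winner = output.indexOf(Math.max.apply(null, output));
console.log(winner); // 3 (the 0.85 neuron, counting from 0)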
xidorn5
@xidorn5
May 22 2017 18:11
Ok. Makes sense
Then where does the ReLU (or sigmoid) non-linearity come in?
In my mind, I see one input neuron (x); a hidden layer with neurons for each function; and an output layer with the same number of neurons as the hidden layer?
Robert Plummer
@robertleeplummerjr
May 22 2017 18:27
that is where the current neuron connects with the previous layer
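A sketch of a whole layer's forward pass, showing where the non-linearity (ReLU here; sigmoid works the same way) sits between the previous layer's outputs and the current layer's neurons. The names are illustrative, not brain.js's own:

// ReLU activation: pass positive values through, clamp negatives to zero.
function relu(x) {
  return Math.max(0, x);
}

// Forward pass for one layer: each neuron in the current layer applies the
// non-linearity to its weighted sum of the previous layer's outputs.
function layerForward(prevOutputs, weights, biases, activation) {
  // weights holds one array of incoming weights per neuron in this layer
  return weights.map(function (neuronWeights, j) {
    var sum = biases[j];
    for (var i = 0; i < prevOutputs.length; i++) {
      sum += neuronWeights[i] * prevOutputs[i];
    }
    return activation(sum); // the non-linearity sits on the connection
  });
}

// e.g. layerForward([0.5, 0.2], [[1.3, -0.7], [0.4, 0.9]], [0.1, -0.2], relu)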
xidorn5
@xidorn5
May 22 2017 18:32
Ah
So it's not graphically depicted
Robert Plummer
@robertleeplummerjr
May 22 2017 18:33
it is somewhat “tricky” to display, yes
xidorn5
@xidorn5
May 22 2017 18:33
Hm
How are weights and biases determined in feedforward-only networks? I know backpropagation is used in the others. So how is training done when there is no way of checking accuracy?
Robert Plummer
@robertleeplummerjr
May 22 2017 18:55
At first, they are random.
If it were truly feedforward only, you’d just keep picking the network that had the highest success, always starting with random weights.
But in anything brain.js, there is delta, which is essentially the error rate, which is fed back through the network.
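A simplified sketch of that delta idea for a single sigmoid output neuron, reusing the neuronOutput sketch from above; this is the textbook update, not brain.js's exact code:

// Compute the output neuron's delta (error scaled by the sigmoid derivative)
// and nudge each incoming weight toward reducing that error.
function trainOutputNeuron(inputs, weights, bias, target, learningRate) {
  var output = neuronOutput(inputs, weights, bias);       // forward pass
  var delta = (target - output) * output * (1 - output);  // error * sigmoid'
  for (var i = 0; i < weights.length; i++) {
    weights[i] += learningRate * delta * inputs[i];        // update weights in place
  }
  return bias + learningRate * delta;                      // updated bias
}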
xidorn5
@xidorn5
May 22 2017 18:57
Cool!
Thank you so much!
Robert Plummer
@robertleeplummerjr
May 22 2017 18:58
np
neural nets are fantastically fun
and simple
xidorn5
@xidorn5
May 22 2017 19:02
OK. I hope I get to tell someone that someday soon :D
Robert Plummer
@robertleeplummerjr
May 22 2017 19:03
ha, I’m just beginning as well.
Knowing the architecture doesn’t mean we know what goes on in the net.
xidorn5
@xidorn5
May 22 2017 19:03
:D :D
True