xidorn5
@xidorn5
Is that where the thresholding function comes in?
Robert Plummer
@robertleeplummerjr
So the output won’t be “This is the one neuron!!! Woohoo”
xidorn5
@xidorn5
:D :D
Robert Plummer
@robertleeplummerjr
It’ll be more like this: [0.0001, 0.002, 0.1, 0.85, 0.5]
In this scenario, index 43 is the activated neuron.
xidorn5
@xidorn5
ok
Robert Plummer
@robertleeplummerjr
that is an array in js
xidorn5
@xidorn5
oh. You start counting from 1 not 0?
Because 0.85 is the highest number
Robert Plummer
@robertleeplummerjr
No, I just can’t count.
index of 3, sorry
xidorn5
@xidorn5
:D
Robert Plummer
@robertleeplummerjr
you are correct.
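The step being described — picking the activated neuron as the position of the largest output value — is just an argmax over the output array. A minimal sketch in plain JavaScript (illustrative, not brain.js internals):

```javascript
// Hypothetical output vector from a network's final layer.
const output = [0.0001, 0.002, 0.1, 0.85, 0.5];

// The "activated" neuron is the index of the largest value (argmax).
function argmax(arr) {
  let best = 0;
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] > arr[best]) best = i;
  }
  return best;
}

console.log(argmax(output)); // 3 — JS arrays are zero-indexed
```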
xidorn5
@xidorn5
Ok. Makes sense
Then where does the ReLU (or sigmoid) non-linearity come in
In my mind, I see one input neuron (x); a hidden layer with neurons for each function; and an output layer with the same number of neurons as the hidden layer?
Robert Plummer
@robertleeplummerjr
that is where the current neuron connects with the previous layer
xidorn5
@xidorn5
Ah
So it's not graphically depicted
Robert Plummer
@robertleeplummerjr
it is somewhat “tricky” to display, yes
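The point about the non-linearity living where a neuron connects to the previous layer can be sketched as a single neuron: weight the incoming values, add a bias, then apply the activation to the sum. This is a toy illustration with made-up numbers, not brain.js source:

```javascript
// Sigmoid squashes the weighted sum into the range (0, 1).
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// One neuron: weight each input from the previous layer, add a bias,
// then apply the non-linearity to the total.
function neuron(inputs, weights, bias) {
  let sum = bias;
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i];
  }
  return sigmoid(sum);
}

console.log(neuron([1, 0.5], [0.4, -0.2], 0.1)); // a value between 0 and 1
```

Swapping `sigmoid` for `Math.max(0, x)` would give the ReLU variant mentioned above.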
xidorn5
@xidorn5
Hm
How are weights and biases determined in feedforward-only networks? I know back propagation is used in the others. So how is training done when there is no way of checking accuracy?
Robert Plummer
@robertleeplummerjr
At first, they are random.
If it were truly feedforward, you’d just keep picking the network that had the highest success, always starting with random weights.
But in anything brain.js, there is delta, which is essentially the error rate, which is fed back through the network.
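The idea above — start from random weights, then feed the error ("delta") back to nudge them — can be sketched with a single weight. This is a toy example of the concept, not brain.js code:

```javascript
// One weight learning to map an input to a target by feeding the
// error back after each forward pass.
let weight = Math.random();   // weights start out random
const input = 0.5;
const target = 0.8;
const learningRate = 0.5;

for (let epoch = 0; epoch < 200; epoch++) {
  const output = input * weight;          // forward pass
  const delta = target - output;          // error fed back through the "network"
  weight += learningRate * delta * input; // nudge the weight to shrink the error
}

console.log(input * weight); // ≈ 0.8 after training
```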
xidorn5
@xidorn5
Cool!
Thank you so much!
Robert Plummer
@robertleeplummerjr
np
neural nets are fantastically fun
and simple
xidorn5
@xidorn5
OK. I hope I get to tell someone that someday soon :D
Robert Plummer
@robertleeplummerjr
ha, I’m just beginning as well.
Knowing the architecture doesn’t mean we know what goes on in the net.
xidorn5
@xidorn5
:D :D
True
skaraman
@skaraman
hi all, i'm brand new here and would like to know what you think about brain.js vs synapticjs and neatapticjs
Maxime ROBIN
@Waxo
hi, sorry, I saw your post a few days ago, but I’m quite busy atm. It depends what kind of neural networks you need
Robert Plummer
@robertleeplummerjr
With brain.js we want wicked fast neural networks everywhere, node and web
We are doing a great deal of work to change the underlying technologies to achieve this.
But the most important thing is they must be simple.
Simple enough to teach a child.
[link to line 52 of a source file]
You'll see that a rather complex idea, a gated recurrent unit, becomes fairly straightforward
Robert Plummer
@robertleeplummerjr
Right now we've spent a great deal of time in gpu.js, as it allows us to write very simple JavaScript that is translated to GPU code, with a CPU fallback when the GPU is not supported.
We are starting to implement it and will have it in v2.
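The gpu.js model described here is roughly: you write a plain JavaScript kernel function indexed by `this.thread.x`, and the library compiles it for the GPU (or runs it on the CPU as a fallback). Below is a CPU-only mock of that programming model to show the shape of a kernel — the real gpu.js API differs, so treat the `createKernel` helper here as hypothetical:

```javascript
// CPU-only mock of the gpu.js kernel model: run the kernel function once
// per output index, exposing the index as this.thread.x.
function createKernel(kernelFn, outputLength) {
  return (...args) =>
    Array.from({ length: outputLength }, (_, x) =>
      kernelFn.apply({ thread: { x } }, args)
    );
}

// Element-wise vector addition, written gpu.js-style.
const add = createKernel(function (a, b) {
  return a[this.thread.x] + b[this.thread.x];
}, 4);

console.log(add([1, 2, 3, 4], [10, 20, 30, 40])); // [11, 22, 33, 44]
```

The appeal is that the same kernel source could, in principle, be shipped to the GPU or run on the CPU unchanged.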
skaraman
@skaraman
that's awesome thanks for letting me know what you're up to
skaraman
@skaraman
@robertleeplummerjr how come the gpu branch is nn-gpu but you linked to the master branch? is the gpu.js build kinda scattered across the repo? what src would be the best to use?
Robert Plummer
@robertleeplummerjr
nn-gpu is where we are experimenting with the initial feed forward neural networks.
Recurrent neural networks are the next target.