These are chat archives for FreeCodeCamp/DataScience

3rd Jul 2018
Fikri Taswin
@fikrietaswin
Jul 03 2018 04:35
Hello everyone
Eric Leung
@erictleung
Jul 03 2018 23:11
@fikrietaswin welcome welcome! What kind of interest in data do you have? Are you trying to get into it, or are you already analyzing lots of data?

@GoldbergData very nice, thanks for sharing! I didn't know they had already responded AND revised their paper so fast! Ah, I really like the section on "What This Paper Is NOT":

Any smooth regression/classification can be approximated by NNs, or by polynomials, so it may at first appear that our work here is to show that NNs are approximately polynomials. But we show a subtle but much stronger connection than that. We are interested in the NN fitting process itself; we show that it mimics PR, with a higher-degree polynomial emerging from each hidden layer.

That definitely makes it clear what they are getting at in this paper. I misread their intention the first time I read it.
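If you want to see the flavor of that claim on toy data, here's a minimal sketch — not the paper's experiment; the target function, polynomial degree, and layer width are all arbitrary choices — comparing a polynomial regression with a one-hidden-layer net in scikit-learn:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

# Toy data: a smooth 1-D target with a little noise
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)

# Polynomial regression (PR); degree 5 is an arbitrary choice here
pr = make_pipeline(PolynomialFeatures(degree=5), LinearRegression()).fit(X, y)

# Small feed-forward net with a single hidden layer
nn = MLPRegressor(hidden_layer_sizes=(50,), max_iter=5000, random_state=0).fit(X, y)

# Side-by-side predictions: the two fitted curves track each other closely
X_test = np.linspace(-2, 2, 9).reshape(-1, 1)
print(np.c_[pr.predict(X_test), nn.predict(X_test)])
```

Of course, two models agreeing on a toy fit is far weaker than what the paper argues; their point is about the fitting process itself, not just the fitted curves.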

Eric Leung
@erictleung
Jul 03 2018 23:17

Also, good to know about the universal approximation theorem, which states that

a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of R^n, under mild assumptions on the activation function.
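Here's a minimal numeric sketch of that flavor (not a proof!): random sigmoid hidden units with only the output weights fitted by least squares, so you can watch the sup-norm error on the compact interval [0, 1] shrink as the hidden layer grows. The target function, widths, and weight scales are all arbitrary choices.

```python
import numpy as np

# Target: a continuous function on the compact interval [0, 1]
def f(x):
    return np.sin(6 * x) + 0.5 * np.cos(13 * x)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 500)

for width in (5, 20, 100):
    # One hidden layer of `width` sigmoid units with random weights/biases;
    # only the linear output layer is fitted (via least squares), which is
    # enough to see the approximation improve as the layer widens
    W = rng.normal(scale=10, size=width)
    b = rng.uniform(-10, 0, size=width)
    H = 1 / (1 + np.exp(-(np.outer(x, W) + b)))  # hidden activations, shape (500, width)
    c, *_ = np.linalg.lstsq(H, f(x), rcond=None)
    err = np.max(np.abs(H @ c - f(x)))
    print(f"width={width:4d}  max |error| = {err:.4f}")
```

The theorem guarantees existence of a good single-hidden-layer approximation for any width large enough; it says nothing about how gradient descent finds one, which is exactly the gap the NN-vs-PR paper is poking at.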