These are chat archives for FreeCodeCamp/DataScience
discussion on how we can use statistical methods to measure and improve the efficacy of http://freeCodeCamp.com
@GoldbergData very nice, thanks for sharing! I didn't know they'd responded AND already revised their paper so quickly! Ah, I really like the section on "What This Paper Is NOT":
Any smooth regression/classification can be approximated by NNs, or by polynomials,
so it may at first appear that our work here is to show that NNs are
approximately polynomials. But we show a subtle but much stronger connection
than that. We are interested in the NN fitting process itself; we show that it
mimics PR, with a higher-degree polynomial emerging from each hidden layer.
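Here's a minimal numpy sketch of that degree-per-layer point: with a polynomial activation like s(z) = z^2, a feed-forward net's output is an exact polynomial in its input, and the degree doubles at each hidden layer (all weights, layer sizes, and the seed below are my own arbitrary illustration choices, not from the paper):

```python
import numpy as np

# With quadratic activation, a 2-hidden-layer net is exactly a
# degree-4 polynomial in x, so a degree-4 polynomial fit should
# reproduce it to machine precision.
rng = np.random.default_rng(0)
act = lambda z: z ** 2                        # quadratic activation

W1, b1 = rng.normal(size=3), rng.normal(size=3)
W2, b2 = rng.normal(size=(2, 3)), rng.normal(size=2)
W3, b3 = rng.normal(size=2), rng.normal()

def net(x):                                   # x: (n,) scalar inputs
    h1 = act(np.outer(W1, x) + b1[:, None])   # degree 2 in x
    h2 = act(W2 @ h1 + b2[:, None])           # degree 4 in x
    return W3 @ h2 + b3                       # output: still degree 4

xs = np.linspace(-1.0, 1.0, 50)
coeffs = np.polyfit(xs, net(xs), deg=4)       # exact degree-4 fit
x_test = np.linspace(-1.0, 1.0, 501)
err = np.max(np.abs(np.polyval(coeffs, x_test) - net(x_test)))
print(f"max |poly fit - net|: {err:.2e}")
```

With a smooth non-polynomial activation (tanh, sigmoid) the match is only approximate, which is the Taylor-expansion angle the paper leans on.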
That definitely makes it clear what they're getting at in this paper. I misread their intention the first time I read it.
Also, good to know about the universal approximation theorem:
a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of R^n, under mild assumptions on the activation function.
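For ReLU activations the theorem has a very concrete flavor: a single hidden layer can reproduce any piecewise-linear interpolant, one unit per knot. A quick numpy sketch (the target f(x) = x^2 and the knot count are my own arbitrary choices):

```python
import numpy as np

# Constructive single-hidden-layer ReLU net: each hidden unit's output
# weight is the change in slope at its knot, so the net equals the
# piecewise-linear interpolant of f on the knots.
def relu(z):
    return np.maximum(z, 0.0)

f = lambda x: x ** 2
knots = np.linspace(0.0, 1.0, 21)             # 20 segments on [0, 1]
slopes = np.diff(f(knots)) / np.diff(knots)   # slope on each segment
weights = np.diff(slopes, prepend=0.0)        # slope changes at knots

def single_layer_net(x):
    # hidden layer: one ReLU unit per left knot; output: linear combo
    hidden = relu(x[:, None] - knots[:-1][None, :])
    return f(knots[0]) + hidden @ weights

xs = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(single_layer_net(xs) - f(xs)))
print(f"max |net - f| on [0,1] with 20 units: {err:.2e}")
```

Halving the knot spacing cuts the error by about 4x (it scales like h^2 for a smooth target), which matches the "finite number of neurons, arbitrary accuracy on a compact set" statement.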