These are chat archives for FreeCodeCamp/DataScience

26th Dec 2017
Yingjie (Iris) Hu
@huyingjie
Dec 26 2017 02:02
I made a curated list of shiny apps for learning statistics. If you have a shiny app related to basic statistics, please consider contributing to the repo. Thank you.
evaristoc
@evaristoc
Dec 26 2017 11:45

@deanhu2

This is a good basic description of conv-NN. The other name usually assigned to it is "filter". Bear in mind that it is still an overloaded term, though.

I am not sure, but I believe I remember seeing some references using the term "filter" to refer to stacked layers for multidimensional analysis (e.g. when running a conv-NN over RGB channels, which is usual).

Still, that doesn't answer why darknet requires two parameters to define what they call a "filter".

The values you shared seem to be more related to the defaults of the maxpool parameters in the darknet.cfg file.

With the available information I can't help you track down the relation, sorry.

The breakdown you supplied is good, thanks!

Here is where I definitely advise you to get some training:

fails to state how it has shrunk the layers...it goes from 416x416x3 -> 416x416x16 down to 208x208x16 with only a mention of using a leaky relu in between...so i can't figure out how it shrinks the input down to 208...

I must say the material might be a bit confusing, but it gives some clues about what is happening. I don't really think the material is failing in that regard.

(NOTE: we have talked about kernels, windows and filters; if you read the breakdown reference carefully you will at least suspect where the rest of the stacked matrices, from 3 to 16, might come from).
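A minimal sketch of the shape arithmetic being hinted at. Assumptions (not stated in the chat, but typical for tiny-YOLO): the convolutions are 3x3 with stride 1 and "same" padding, the maxpool is 2x2 with stride 2, and the leaky ReLU changes no shapes:

```python
# Hedged sketch of the layer-shape arithmetic for the 416x416x3 -> 416x416x16
# -> 208x208x16 sequence discussed above. All layer hyperparameters here are
# assumptions, not taken from the darknet.cfg in question.

def conv_same(h, w, c_in, n_filters):
    # 3x3 convolution, stride 1, padding 1 ("same"): spatial size is unchanged,
    # and the channel count becomes the number of filters.
    return h, w, n_filters

def maxpool2(h, w, c):
    # 2x2 max pooling with stride 2 halves each spatial dimension.
    return h // 2, w // 2, c

shape = (416, 416, 3)            # RGB input: 3 stacked matrices
shape = conv_same(*shape, 16)    # conv with 16 filters -> (416, 416, 16)
print(shape)
shape = maxpool2(*shape)         # the maxpool, not the relu, does the shrinking
print(shape)                     # -> (208, 208, 16)
```

So the jump from 3 to 16 comes from the number of filters in the conv layer, and the jump from 416 to 208 comes from the stride-2 maxpool, assuming those hyperparameters.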

@deanhub2 bear in mind that I have studied convNN but I am not really an expert. It would be hard for me to provide you with more details.

I think the (mini-) yolo script is worth looking at! Good luck with the rest of the translation.

CamperBot
@camperbot
Dec 26 2017 11:45
evaristoc sends brownie points to @deanhu2 and @deanhub2 :sparkles: :thumbsup: :sparkles:
api offline
evaristoc
@evaristoc
Dec 26 2017 11:48

@deanhu2

It would be easier for you to understand convNN first and then try to replicate the procedure, instead of translating the code.

I strongly recommend studying convNN a bit more.

evaristoc
@evaristoc
Dec 26 2017 12:17
PS: @deanhu2 what I suggested calling "filters" are apparently called "channels" in the breakdown. Intuitively, "channels" seems to be the more appropriate term. That is also a reason to study the topic in depth: terminology can be confusing, so it is important to understand how it is (mis)used in each case.
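To make the filters-vs-channels distinction concrete, here is a small sketch of one conv layer's weight tensor. The shapes and the (filters, in_channels, k, k) layout are an illustrative convention (PyTorch-style), not something taken from darknet or the breakdown:

```python
# Sketch: how "filters" and "channels" relate in a single conv layer.
import numpy as np

in_channels = 3    # input channels, e.g. the R, G, B planes of the image
n_filters = 16     # each filter spans ALL input channels...
kernel = 3         # ...with a 3x3 spatial window

# One weight tensor for the whole layer: 16 filters, each of shape (3, 3, 3).
weights = np.zeros((n_filters, in_channels, kernel, kernel))
print(weights.shape)    # (16, 3, 3, 3)

# Each filter produces exactly one output channel (feature map), so the
# layer's output has n_filters channels: the 16 in 416x416x16.
print(n_filters)
```

The overloading is visible here: "filter" names one (3, 3, 3) slab of weights, while "channel" names one plane of the input or output volume.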
Dean
@deanhu2
Dec 26 2017 12:25
@evaristoc ah great, I'll get into convNN and try that. You're probably right, understanding the terminology seems to be key to not wasting time...wish they would standardise more, thanks for all your help
CamperBot
@camperbot
Dec 26 2017 12:25
deanhu2 sends brownie points to @evaristoc :sparkles: :thumbsup: :sparkles:
:cookie: 392 | @evaristoc |http://www.freecodecamp.org/evaristoc
Dean
@deanhu2
Dec 26 2017 16:43
Ok I've tested the creation of the network in convvnet and not managed to replicate it in tiny_dnn and it all seems fine...but I'm confused about the training aspect...the data format for tiny_dnn is a vector of floating point numbers, but the format used here: https://www.youtube.com/watch?v=NM6lrxy0bxs and described in the article states I need to train using multiple bounding box values (the cells which contain a dog etc.) and decrease the values of non-objects...how can I represent such a format as a vector of floats in a way the network can train on?
Dean
@deanhu2
Dec 26 2017 16:51
managed to*
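One common way to get a YOLO-style target into a flat float vector is to serialise the grid cell by cell: each cell gets its box coordinates, an objectness value, and a one-hot class, with empty cells left at zero. A sketch under assumed sizes (S=7 grid, 1 box per cell, 2 classes; none of these numbers come from the article or video):

```python
# Hedged sketch: flattening grid-cell bounding-box targets into the single
# vector of floats a framework like tiny_dnn trains against.
S, B, C = 7, 1, 2    # grid size, boxes per cell, classes (assumed values)

def encode_target(boxes):
    """boxes: list of (x, y, w, h, class_id) with x, y, w, h in [0, 1]."""
    t = [0.0] * (S * S * (B * 5 + C))        # everything starts at 0
    for x, y, w, h, cls in boxes:
        col, row = int(x * S), int(y * S)    # grid cell holding the box centre
        base = (row * S + col) * (B * 5 + C)
        t[base:base + 5] = [x, y, w, h, 1.0]  # coords + objectness = 1.0
        t[base + 5 + cls] = 1.0               # one-hot class for this cell
    return t

# One dog-sized box centred in the image, class 1:
v = encode_target([(0.5, 0.5, 0.2, 0.3, 1)])
print(len(v))    # 7 * 7 * (5 + 2) = 343 floats
```

"Decreasing the values of non-objects" then falls out of training: cells left at 0.0 push the objectness prediction down, while occupied cells pull it toward 1.0.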