    Keith Gould
    Hey @elpidiovaldez thanks for the feedback. Is there a paper or post somewhere describing the strategy of using a net per action/label? Happy to learn more about it.
    Keith Gould

    Also @elpidiovaldez , great question:

    Now here is the rub; it also sets a target of 0 for the action which was not taken. Why 0?

    New to this but I'm trying to do the same thing in tiny-dnn that I found in pytorch. This works really well, as in when I run the simulation (in python/pytorch) the cartpole balances well.

In the above, specifically on this line, it looks like they are passing in the normalized reward per step. I'm trying to do the same, though I'm probably not doing it correctly.

    Not that I know of. I am not sure it is the right thing to do, but it is the solution that seemed sensible to me. There is a problem updating a network that outputs values for all actions when only 1 value is known. @MaxSavenkov proposes another solution, explicitly passing zero error gradients for the actions that were not tried. That may be better, but it had not occurred to me. When the error for the action that was tried is back-propagated through SHARED weights it WILL affect the outputs for all other actions, although that may not stop the network converging to a good solution, since outputs may be corrected when the corresponding actions are tried. I will think more about it.
    Keith Gould
    Also if it helps, I created a pseudocode section to summarize what I did (not to be confused with what is RIGHT)
    Keith Gould
    Also, just to get a wider audience I wrote up the question on Reddit's Reinforcement Learning channel here
@keithmgould after thinking a bit more, I realised that in the case of cartpole you only have two actions. I think you can use a single NN with a single output to compute the probability p of performing one of the actions. The probability of the other action is 1-p. Instead of softmax you use the logistic function to force the output between 0 and 1. I would still like to know how best to handle the general many-action case though.
    Keith Gould
    tried asking in developers but that channel is looking pretty quiet:
Hey guys, from what I can tell there is no way to modulate a gradient with a reward using tiny-dnn as it stands today (especially when using the train/fit methods). Am I correct? I'm new to reinforcement learning, but a central idea is being able to do this (which amounts to multiplying a gradient by a reward). Examples are found in Karpathy's pixels-to-pong gist and also in pytorch when computing the gradient here
    Christophe Zoghbi
    Hey guys, was anyone able to build tiny-dnn with LibDNN for GPU support? I can't seem to get it to work and I would appreciate any insights. Thanks!
hey, i was wondering why the input layer only accepts one parameter as a size_t, when most other NN libraries require you to define 3 parameters for an image input layer (width, height, colour depth)?
    could anyone show me how to setup a cnn with the following properties? Input (416, 416, 3)
    Convolution 3×3 1 (416, 416, 16)
    MaxPooling 2×2 2 (208, 208, 16)
    i get the following error:
    output size of Nth layer must be equal to input of (N+1)th layer
    layerN: leaky-relu-activation in:2742336([[414x414x16]]), out:2742336([[414x414x16]])
    layerN+1: max-pool in:519168([[416x416x3]]... ...}
    from net
    << convolutional_layer(w, h,3,16,16)
    << leaky_relu_layer(0.1F)
    << max_pooling_layer(shape3d(w , h , 3),2,2, core::default_engine());
Hi all! I am new to tiny-dnn; I wonder how to control the number of CPU threads used for training? Currently I am getting about 9% reported CPU load on a 24-thread machine (with the program reportedly using 10 threads), and I would like to make it 100%. Or maybe the reason is that I am using it on too low-dimensional (dim = 7) data?
    I am not building it, all I am doing is #including the header file in my program
    Yash Dusing
Hey! I'm new to tiny-dnn and would like to get started with it. I've run an XOR example to cross-verify that everything's set up correctly.
I'm experiencing issues running the mnist example provided
    Yash Dusing
    @edgarriba @prlz77
hello! I'm new here; could you offer me some study materials?
    Ankit Dhankhar
I've run the recurrent addition example in tiny-dnn/examples, but it gives an error of more than 10%. Is that alright, or is there some error?
    Akash Nagaraj
    Hey, I'm having a little trouble finding the weights of the hidden layers. Could someone please help me out?
    Akash Nagaraj
More specifically, how do I read the weights file?
    Tomaz Stih
Hi. So far I've only seen samples of convolution, pooling, relu. But can tiny-dnn do deconvolution (as a basis for image segmentation experiments)? And if not, what would it take to implement it (I might take this task if provided enough support from current developers :) )?
Hi all! I'm new to tiny-dnn and I don't know how to get the values of the weights. Can somebody help me? (ps: sorry for my English)
Hi all! I want to participate in GSoC on this project. Where should I start?

    Hi! Does anyone know a possible cause for this runtime error?

    /TinyDNN/./tiny-dnn/tiny_dnn/nodes.h:193: void tiny_dnn::nodes::label2vec(const label_t*, size_t, std::vector<std::vector<float, tiny_dnn::aligned_allocator<float, 64ul> > >&) const: Assertion `t[i] < outdim' failed. Aborted (core dumped)

My label vector is a std::vector<label_t> with dimensions 1470x1, my data has dimensions 1470x88, and my network is as follows:

fc fc1(88, 100);
fc fc2(100, 100);
fc fc3(100, 1);
net << fc1 << batch_norm(fc1) << linear(100) << relu()
    << fc2 << batch_norm(fc2) << linear(100) << relu()
    << fc3;
    I'm trying to train the network for regression using:
    net.fit<absolute>(optimizer, train_data, train_labels, n_minibatch, n_train_epochs, on_enumerate_minibatch, on_enumerate_epoch);

    Martin Chang
Hi, how is the gradient of the loss function used in tiny-dnn? I can only find a function called gradient that calls loss_func_class::df, but gradient does not seem to be called anywhere.
    Holy shit, I just realized that line was from fucking February. What the fuck.
    Sure is dead here...
Does anyone know whether tiny-dnn's conv layers are GEMM-based, or just a naive direct implementation of convolution?
How do I configure OpenCV 3.4 and tiny-cnn together (Windows)? Thanks.
    Martin Chang
Just sharing: tiny-dnn can run on Jupyter with cling (although a bug in cling prohibits multi-threading). Interesting...
    John Milton
    I work at a sports-tech startup in the Seattle area and we're looking for developers that know how to work with tiny-dnn. Please let me know if you might be interested. My email address is john@seattlesportsciences.com.
Hi, I want to implement a DNN model with 2 hidden layers and a softmax layer in C++. Has anyone tried that? I also need back-propagation for loss estimation. I am new to this.
    Noureldin Hendy
    Hi I want to contribute to maintaining tiny-dnn who can I get in touch with?
    Max "Isika" Gotts
hi guys! do ppl still use this? is it still vaguely functional?
Hello everyone...
I use the cv::dnn::readNetFromDarknet method,
but it throws a bad allocation error.
Please help me...
Thank you
    Geunsik Lim
What is the tiny-dnn equivalent of TensorFlow's Conv2d(64,(1,1),strides=(1,1),bn_momentum=bn_momentum)?

I want to connect an output of shape 1x3x3 to a fully connected layer's weight parameters. I use tiny-dnn and didn't find how to do it. I tried to connect directly (i2_reshape << fc_mul), but it doesn't work. How can I do this?

    Here is my code:
    using namespace std;
    using namespace tiny_dnn;
    using namespace tiny_dnn::layers;
    using namespace tiny_dnn::activation;

    void build_tnet(int n, int k) {
      layers::input i1(shape3d(n, 1, k));
      layers::input i1_reshape(shape3d(1, k, n));
      layers::input i2_reshape(shape3d(1, k, k));
      layers::conv conv0(n, 1, 1, 1, k, 64, padding::same);
      layers::conv conv1(n, 1, 1, 1, 64, 128, padding::same);
      layers::conv conv2(n, 1, 1, 1, 128, 1024, padding::same);
      layers::max_pool mp(n, 1, 1024, n, 1, 1, 1);
      layers::fc fc1(1024, 512);
      layers::fc fc2(512, 256);
      layers::fc fc3(256, k * k);
      layers::fc fc_mul(k, k, false);
      // i2_reshape << fc_mul; // I tried this but it didn't help me.
      network<graph> net;
      construct_graph(net, {&i1}, {&fc_mul});
      std::ofstream ofs("graph_net_example.txt");
      graph_visualizer viz(net, "graph");
      for (int i = 0; i < net.depth(); i++) {
        cout << "#layer:" << i << "\n";
        cout << "layer type:" << net[i]->layer_type() << "\n";
        cout << "input:" << net[i]->in_data_size() << "(" << net[i]->in_data_shape() << ")\n";
        cout << "output:" << net[i]->out_data_size() << "(" << net[i]->out_data_shape() << ")\n";
      }
      system("dot -Tgif graph_net_example.txt -o graph.gif");
      system("gwenview graph.gif");
    }

    int main(){

Hi, I just started to use tiny-dnn. Is this still an active Gitter group?
    nope :(