bhack
@bhack
@deanhu2 tiny-dnn/tiny-dnn#866
Dean
@deanhu2
Ah, thanks for the link; hopefully it happens in the future, then.
Dean
@deanhu2
Would it take a particularly long time to convert certain layers of tiny_dnn (mostly the CNN layers) to run on the GPU? Or is the system tied together in another way, so that a lot of refactoring further down would have to take place? I want to improve the performance, and I like tiny-dnn; it fits well with continuous integration setups.
Manvendra Singh
@manu-chroma
Hi, I'm looking forward to contributing to tiny-dnn. I'm not very experienced with coding in C++ apart from my coursework, though I have experience contributing to open-source projects and experience in Python. Is there a beginner's issue I might take a shot at? Thanks.
bhack
@bhack
tiny-dnn/tiny-dnn#921
xiaoerlageid
@xiaoerlaigeid
Hi all! I want to participate in GSoC with this project. Where should I start?
Ranjodh Singh
@singhranjodh
Hi! I want to contribute to tiny-dnn. Is there any checklist of the milestones available?
appanacca
@appanacca
How about using SYCL as a backend for GPU support?
Also, I am interested in contributing to the project. Is there anyone who can indicate the priorities of the project and/or suggest things to work on?
Alex Giokas
@alexge233
Hi, is tiny-dnn still maintained? The examples you have do not correspond to release 1.0.0.a3, and the master branch doesn't compile.
han-so1omon
@han-so1omon
is there a room open for reinforcement-learning related development?
han-so1omon
@han-so1omon
or has anyone integrated automatic differentiation?
Dendi Suhubdy
@dendisuhubdy
No, not yet.
Ravshan90
@Ravshan90
Hello! Does anybody use tiny-dnn with libdnn?
beru
@beru
libdnn looks really good but it's not header only and uses ViennaCL for OpenCL.
Ravshan90
@Ravshan90
Yes, I installed ViennaCL and CUDA, but I get some errors when I build. Can you help me?
Ravshan90
@Ravshan90
error LNK2019: unresolved external symbol "public: thiscall greentea::device::device(int,int,enum greentea::Backend)" (??0device@greentea@@QAE@HHW4Backend@1@@Z) referenced in function "public: thiscall std::_Ref_count_obj<class greentea::device>::_Ref_count_obj<class greentea::device><int,int,enum greentea::Backend>(int &&,int &&,enum greentea::Backend &&)" (??$?0HHW4Backend@greentea@@@?$_Ref_count_obj@Vdevice@greentea@@@std@@QAE@$QAH0$QAW4Backend@greentea@@@Z)
error LNK2019: unresolved external symbol _cuGetErrorString@8 referenced in function "private: class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __thiscall CLCudaAPI::CLCudaAPIError::GetErrorString(enum cudaError_enum)const " (?GetErrorString@CLCudaAPIError@CLCudaAPI@@ABE?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@W4cudaError_enum@@@Z)
jianhui
@fantimond
Is this chat room still active?
Sylvain Corlay
@SylvainCorlay
now it is :)
Stanislav Arnaudov
@palikar
Hello, everyone! Is the library still being actively developed/used?
I just came across the note re: possible abandonment on the readme.
Are there any recommended alternatives?
Caleb
@caleb221
hi! does anyone know about support for an ESP-32?
taoliqiang1
@taoliqiang1
tiny_char_rnn bot: say something and I'll try to answer :D
Usage: @tiny_char_rnn query
Febin P Sunny
@codegit001
Hello, is AVX absolutely necessary for tiny-dnn to run?
Is it okay if I turn it off using -mno-avx2 or by editing the config file?
Febin P Sunny
@codegit001
I am trying to get some traces for a research paper, using gem5, on x86.
beru
@beru
It's OK to turn it off. You can check CMakeLists.txt to see how the C preprocessor symbols are defined.
taoliqiang1
@taoliqiang1
Hello, I am @codegit001. In CMakeLists.txt, this line: option(USE_AVX "Build tiny-dnn with AVX library support" OFF) sets USE_AVX to OFF. AVX improves program efficiency, so it can be enabled, but it is not absolutely necessary.
Febin P Sunny
@codegit001
Thanks
taoliqiang1
@taoliqiang1
Hello, beru. I debugged char_rnn with Eclipse and found some unusual output. In recurrent_layer.h, line 29, we have "bool reset_state = true"; that is to say, the default value of "reset_state" is true, and it is never updated anywhere else in the program. As a result, the following two lines (recurrent_layer.h, lines 189-190) are never executed: "for (size_t b = 0; b < batch_size; b++) data[b] = buffer[b]; // copy state". So in the forward computation, the computed states "h_next" and "c_next" are never copied back to the LSTM edges (in_data[1]: h_prev, in_data[2]: c_prev); they are only stored in the temporary variable "input_buffer". During backpropagation, "h_prev" and "c_prev" are therefore always zero (their default value when initialized in layer.h, in the set_sample_count function), which means the variables "dWh2c", "dWh2f", "dWh2i", "dWh2o", "dWx2f", and "db2f" (in lstm_cell_op_internal.h) are always zero: the h matrix is never updated and stays constant. Is this a bug, or is something wrong with my understanding?
@beru
Febin P Sunny
@codegit001
Hello
I would like to cite your framework in my research
How do I do that? You have not provided a citation format for it. Do you have a paper in which this was published, so that I can cite that?
tinaba96
@tinaba96
Hello,
Has anyone implemented bidirectional_recurrent_layer.h?
I am struggling with the implementation of a bidirectional LSTM.
I know that I just need to make one more LSTM which reads the input backward, and then concatenate both LSTMs.
I am trying to edit recurrent_layer.h and lstm_cell.h, but I don't really know what I am supposed to do. The code is very difficult for me to read. Could someone help me?
Thanks
dmitriys566
@dmitriys566
Hello!
What is the replacement for tensorflow Conv2D(64,(1,1),strides=(1,1),bn_momentum=bn_momentum)?
Pierre Gabin FODOP GUMETE
@teriterance
Hello all
I'm adding a new layer to the library, but I don't understand the role of the "scale" function in the activation layer.
yikangle
@yikangle
tiny_char_rnn bot: say something and I'll try to answer :D
Usage: @tiny_char_rnn query
Martin Chang
@marty1885
Wow, devs are still active!?
hello
@tiny_char_rnn hello