Hi all, quick comment about the naming conventions being used. There are lots of files with the same name at different paths, e.g. lite/kernels/kernel_utils.c and lite/micro/kernels/kernel_utils.c.
When these are compiled into a .a file you actually get duplicate members (kernel_utils.o and kernel_utils.o). If you then try to extract that .a file using something like ar -x, the files end up overwriting each other.
Hi all, I am developing on the HiFi4 and have a question about the TFLM Xtensa kernels. Many of the ops wrap the implementation in
#if defined(HIFI4) || defined(HIFI5), so the HiFi4 and HiFi5 share the same code.
However, I noticed that in
pooling.cc the various pooling kernels are wrapped in
#if defined(HIFI5) alone. Why is the HiFi4 omitted here?
I tried just adding
defined(HIFI4) like the other kernels do, and it compiles and seems to run just fine, at least for the inputs I've tried. But I'm wondering if I'm missing some subtlety as to why it wasn't included.
Hi everyone! I'm trying to run the CIFAR10 MLCommons Tiny model: https://github.com/mlcommons/tiny/blob/master/v0.5/reference_submissions/image_classification/ic/ic_model_quant_data.cc
Hardware used: Silicon Labs Thunderboard Sense 2
Trained with: tensorflow version 2.4.1
TfLite Micro Version: 2.4.1
When running this particular model, I get the following error:
"Didn't find op for builtin opcode 'ADD' version '2'".
Do you have any clue what I could have done wrong?
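For context, this error usually means the op resolver handed to the interpreter either doesn't register ADD at all, or registers a kernel too old to support version 2 of the op (a TFLM checkout older than the TF 2.4.1 converter that produced the model can cause the latter). Below is a hedged sketch of the registration side using MicroMutableOpResolver; the op list is illustrative and must be matched to the ops actually present in your model.

```cpp
// Sketch only: every op in the model must be registered explicitly.
// Ops other than AddAdd() are illustrative examples, not the actual
// op list of the CIFAR10 model.
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

// Template argument = number of distinct ops registered below.
static tflite::MicroMutableOpResolver<4> op_resolver;

void RegisterOps() {
  op_resolver.AddAdd();            // the op the error message complains about
  op_resolver.AddConv2D();
  op_resolver.AddAveragePool2D();
  op_resolver.AddSoftmax();
}
```

If AddAdd() is already there, it is worth checking that the TFLM sources being built are at least as new as the TensorFlow version used for conversion.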
Hi there, I have a model in flatbuffer format that runs on the HiFi4. However, when trying to run this model on the HiFi5 I get errors because the bias vectors in my model are not 8-byte aligned. Previously, the optimized HiFi4 matrix multiply functions only required 4-byte alignment. Is there a way to ensure that all vectors inside the compiled model are 8-byte aligned?
Currently there doesn't seem to be a way to align all the weight vectors. If only you could compile the model the way Edge Impulse does!
Hi, does anyone know whether tensors that contain fewer than 1024 elements get quantized with TFLite? I get the following log output:
Skipping quantization of tensor model_1/conv2d_3/Conv2D because it has fewer than 1024 elements (120)
But if I inspect the tflite file with the
tflite_visualize tool, it shows everything as INT8/INT32. I am using
Also, is it expected that a quantized tflite file will be substantially smaller than a tflite file with floating-point weights? In my case the model contains roughly 7k parameters; the float tflite file is 24 KB and the quantized file is 20.6 KB.
`TfLiteIntArrayGetSizeInBytes': common.cc:(.text.TfLiteIntArrayGetSizeInBytes+0x0): multiple definition of `TfLiteIntArrayGetSizeInBytes'; thirdparty/built-in.a(common.o):common.cc:(.text.TfLiteIntArrayGetSizeInBytes+0x0): first defined here
`TfLiteIntArrayEqualsArray': common.cc:(.text.TfLiteIntArrayEqualsArray+0x0): multiple definition of `TfLiteIntArrayEqualsArray'; thirdparty/built-in.a(common.o):common.cc:(.text.TfLiteIntArrayEqualsArray+0x0): first defined here
`TfLiteIntArrayEqual': common.cc:(.text.TfLiteIntArrayEqual+0x0): multiple definition of `TfLiteIntArrayEqual'; thirdparty/built-in.a(common.o):common.cc:(.text.TfLiteIntArrayEqual+0x0): first defined here
`TfLiteFloatArrayGetSizeInBytes': common.cc:(.text.TfLiteFloatArrayGetSizeInBytes+0x0): multiple definition of `TfLiteFloatArrayGetSizeInBytes'; thirdparty/built-in.a(common.o):common.cc:(.text.TfLiteFloatArrayGetSizeInBytes+0x0): first defined here
`TfLiteTypeGetName': common.cc:(.text.TfLiteTypeGetName+0x0): multiple definition of `TfLiteTypeGetName'; thirdparty/built-in.a(common.o):common.cc:(.text.TfLiteTypeGetName+0x0): first defined here
`TfLiteDelegateCreate': common.cc:(.text.TfLiteDelegateCreate+0x0): multiple definition of `TfLiteDelegateCreate'; thirdparty/built-in.a(common.o):common.cc:(.text.TfLiteDelegateCreate+0x0): first defined here
I am trying to use the person_detection code on my microcontroller (Arduino Nano 33 BLE), but when I run it I get a static output:
"Person score: -115 No person score: 72
Invoke() called after initialization failed"
I just updated the hexadecimal model data and the micro_op_resolver; do I have to change anything else in the code?
I am new to this field, so please guide me.
@cortensinger you need to use C++ to interface with TFLM; at least I'm not aware of any wrapper libraries for plain C.
You can of course build TFLM for the Cortex-M architecture that is used in nRF chips: build the
microlite library and then link it into your project.