    Niranjan Yadla
    @tjgarcia-mchp yes, that's correct.
    thank you nyadla!
    I'm getting "tensor value not set, got 5 but expected 4 for input 0"
    Chris Knorowski

    Hi all, quick comment about the naming conventions being used. There are lots of files with the same name on different paths, e.g. lite/kernels/kernel_utils.c and lite/micro/kernels/kernel_utils.c.

    When these are compiled into a .a file you actually get duplicate members: kernel_utils.o and kernel_utils.o. If you try to extract that .a file using something like ar -x, the files end up overwriting each other.

    It might be worth adding a prefix to duplicated file names, e.g. lite/micro/kernels/micro_kernel_utils.c, to avoid that issue.
    I only noticed this because I was building a .so from the .a file and there were many undefined references.
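    The overwriting behavior described above can be reproduced with a small experiment (file and symbol names below are made up for illustration; assumes gcc, ar, and nm on the PATH):

```shell
# Two object files with the same basename, from different directories.
mkdir -p a b
printf 'int fa(void){return 1;}\n' > a/util.c
printf 'int fb(void){return 2;}\n' > b/util.c
gcc -c a/util.c -o a/util.o
gcc -c b/util.c -o b/util.o

# ar stores only the basename, so the archive holds two members named util.o.
ar rcs libboth.a a/util.o b/util.o
ar t libboth.a        # lists util.o twice

# Plain extraction writes both members to ./util.o; the second member
# overwrites the first, so only fb survives.
ar x libboth.a
nm util.o
```

    GNU ar's `N` modifier (e.g. `ar xN 2 libboth.a util.o`) can extract a specific instance, but the linker-level duplicate problem remains until the names differ.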
    Joseph O'Brien Antognini

    Hi all, I am developing on the HiFi4 and had a question about the TFLM Xtensa kernels. Many of the ops wrap the implementation in #if defined(HIFI4) || defined(HIFI5), so the same code is used for the HiFi4 and HiFi5.

    However, I noticed that in pooling.cc the various pooling kernels are wrapped in #if defined(HIFI5) only. Why is the HiFi4 omitted here?

    I tried adding defined(HIFI4) like the other kernels, and it compiles and seems to run just fine, at least for the inputs I've tried. But I'm wondering if I'm missing some subtlety as to why it wasn't included.

    I'm trying to follow the Hello World example, but it gives me an error when I try to build the code: "find: '../google/': No such file or directory". How do I fix this?
    Can I calculate all the storage (flash and RAM) I need to deploy my .tflite model?
    1 reply
    Is there some way to calculate values in a TensorFlow model (TF Lite)?

    Hi everyone! I'm trying to run the CIFAR10 MLCommons Tiny Model: https://github.com/mlcommons/tiny/edit/master/v0.5/reference_submissions/image_classification/ic/ic_model_quant_data.cc

    Hardware used: Silicon Labs Thunderboard Sense 2
    Trained with: tensorflow version 2.4.1
    TfLite Micro Version: 2.4.1

    When running this particular model, I get the following error:

    "Didn't find op for builtin opcode 'ADD' version '2'".

    Do you have any clue what I could have done wrong?

    Vikram Dattu
    Is the monthly meeting schedule changed?
    Is On-Device Training possible in TFL-Micro 2.7?
    For TensorFlow Lite there is an example for this feature https://www.tensorflow.org/lite/examples/on_device_training/overview
    Hi everyone, there are generate_keil_project.py and generate_keil_project_test.sh in the make folder, but they fail to run. Can anyone explain how to generate a Keil project for a Cortex-M0?
    Just wondering how tagging works: the new repo doesn't have 2.7 or 2.8 tags, or any tags at all. How can I get the full picture?
    Rameen Irfan
    Good evening. I am trying to port tflite onto Zephyr. Is it possible to port TensorFlow Lite to Zephyr? If it is, then please guide me through the procedure.
    @rameenirfan Yeah, you can port TensorFlow Lite to Zephyr.
    try this
    Andres Gomez
    Apollo Blue.png
    Hi everyone! I'm having some issues running the TFLM hello world example on an ambiq-3p board (very similar to the SparkFun Edge). I'm able to compile and flash the project, but the output is not as expected: the Y value is always 0, and I don't really understand why. Does anyone have an idea how to debug this?
    Andres Gomez
    In case anyone else has this issue: I was missing a few flags during the compilation phase, things like -D__FPU_PRESENT=1 and -DARM_MATH_CM4. When using all the flags from the standard SparkFun makefile, I finally obtained the expected results.
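    For reference, a sketch of the kind of flag set involved; this is an illustrative fragment combining the flags mentioned above with typical Cortex-M4F options, so verify the authoritative list against the actual SparkFun Edge makefile:

```make
# Illustrative only; check your toolchain and board support package.
CXXFLAGS += -mcpu=cortex-m4 -mthumb -mfpu=fpv4-sp-d16 -mfloat-abi=hard \
            -D__FPU_PRESENT=1 -DARM_MATH_CM4
```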
    Hi there, I have a model in flatbuffer format that runs on the HiFi4. However, when trying to run this model on the HiFi5 I get errors due to bias vectors in my model not being 8-byte aligned. Previously, the optimized HiFi4 matrix multiply functions only required 4-byte alignment. Is there a way to ensure that all vectors inside of the compiled model are 8-byte aligned?
    2 replies
    Vikram Dattu

    Hi there, I have a model in flatbuffer format that runs on the HiFi4. However, when trying to run this model on the HiFi5 I get errors due to bias vectors in my model not being 8-byte aligned. Previously, the optimized HiFi4 matrix multiply functions only required 4-byte alignment. Is there a way to ensure that all vectors inside of the compiled model are 8-byte aligned?

    Currently there doesn't seem to be a way to align all the weight vectors. If only you could compile the model the way Edge Impulse does!

    Sigurd Odden
    Hello, has anyone tried to implement TFLM for the Nordic nRF?
    Hi, does anyone know of a keras-tuner-like optimizer where you can easily define a RAM or ROM target?
    Basile B.
    Is the ELL framework from Microsoft abandoned? I can't find any statement about it.

    Hi, does anyone know whether tensors that contain fewer than 1024 elements get quantized with tflite? I get the following log output: "Skipping quantization of tensor model_1/conv2d_3/Conv2D because it has fewer than 1024 elements (120)"
    But if I inspect the tflite file with the tflite_visualize tool, it shows everything as INT8/INT32. I am using tensorflow==2.8.0.

    Also, is it expected that a quantized tflite file should be substantially smaller than a tflite file with floating-point weights? In my case the model contains roughly 7k parameters; the float tflite file is 24 KB and the quantized file is 20.6 KB.

    Hi everyone, I am trying to deploy a model to a board that has CMSIS in its source code, but I do not know how to change that source code. Has anyone done this before, or does anyone know of examples of this kind of deployment?
    Hi, I got a build error when adding the library to an M33 target:
    /usr/bin/gcc-arm-none-eabi-9-2019-q4-major/bin/../lib/gcc/arm-none-eabi/9.2.1/../../../../arm-none-eabi/bin/ld: thirdparty/built-in.a(common.o): in function `TfLiteIntArrayGetSizeInBytes': common.cc:(.text.TfLiteIntArrayGetSizeInBytes+0x0): multiple definition of `TfLiteIntArrayGetSizeInBytes'; thirdparty/built-in.a(common.o):common.cc:(.text.TfLiteIntArrayGetSizeInBytes+0x0): first defined here
    /usr/bin/gcc-arm-none-eabi-9-2019-q4-major/bin/../lib/gcc/arm-none-eabi/9.2.1/../../../../arm-none-eabi/bin/ld: thirdparty/built-in.a(common.o): in function `TfLiteIntArrayEqualsArray': common.cc:(.text.TfLiteIntArrayEqualsArray+0x0): multiple definition of `TfLiteIntArrayEqualsArray'; thirdparty/built-in.a(common.o):common.cc:(.text.TfLiteIntArrayEqualsArray+0x0): first defined here
    /usr/bin/gcc-arm-none-eabi-9-2019-q4-major/bin/../lib/gcc/arm-none-eabi/9.2.1/../../../../arm-none-eabi/bin/ld: thirdparty/built-in.a(common.o): in function `TfLiteIntArrayEqual': common.cc:(.text.TfLiteIntArrayEqual+0x0): multiple definition of `TfLiteIntArrayEqual'; thirdparty/built-in.a(common.o):common.cc:(.text.TfLiteIntArrayEqual+0x0): first defined here
    /usr/bin/gcc-arm-none-eabi-9-2019-q4-major/bin/../lib/gcc/arm-none-eabi/9.2.1/../../../../arm-none-eabi/bin/ld: thirdparty/built-in.a(common.o): in function `TfLiteFloatArrayGetSizeInBytes': common.cc:(.text.TfLiteFloatArrayGetSizeInBytes+0x0): multiple definition of `TfLiteFloatArrayGetSizeInBytes'; thirdparty/built-in.a(common.o):common.cc:(.text.TfLiteFloatArrayGetSizeInBytes+0x0): first defined here
    /usr/bin/gcc-arm-none-eabi-9-2019-q4-major/bin/../lib/gcc/arm-none-eabi/9.2.1/../../../../arm-none-eabi/bin/ld: thirdparty/built-in.a(common.o): in function `TfLiteTypeGetName': common.cc:(.text.TfLiteTypeGetName+0x0): multiple definition of `TfLiteTypeGetName'; thirdparty/built-in.a(common.o):common.cc:(.text.TfLiteTypeGetName+0x0): first defined here
    /usr/bin/gcc-arm-none-eabi-9-2019-q4-major/bin/../lib/gcc/arm-none-eabi/9.2.1/../../../../arm-none-eabi/bin/ld: thirdparty/built-in.a(common.o): in function `TfLiteDelegateCreate': common.cc:(.text.TfLiteDelegateCreate+0x0): multiple definition of `TfLiteDelegateCreate'; thirdparty/built-in.a(common.o):common.cc:(.text.TfLiteDelegateCreate+0x0): first defined here
    collect2: error: ld returned 1 exit status
    Hello, I fixed this bug by updating the code.
    Abdulrahman Alghaligah


    I am trying to use the person_detection code and apply it to my microcontroller (Arduino Nano 33 BLE), but I have a problem: when I run the code it gives me a static output:

    "Person score: -115 No person score: 72
    Invoke() called after initialization failed"

    I just updated the hexadecimal values and the micro_op_resolver; do I have to change anything else in the code?

    I am new to this field, so please guide me.

    Corten Singer
    Hello everyone, I have a quick question; thank you in advance! I see earlier in this chat that someone asked whether there is any support for TFLM on a Nordic nRF platform, and the response was this link: https://devzone.nordicsemi.com/nordic/nordic-blog/b/blog/posts/nrf-tensorflow-support. Unfortunately, this blog post and the accompanying GitHub repo are from a student project, and they warn that the code is not stable, supported, or even functional.

    I understand that there may not be C-specific support for the TFLM project. I simply want to confirm with folks who are very familiar with TFLM that this is the case before I move on to finding a new solution for my embedded application. For completeness: I am working with the nRF52840 chip on a custom board, and the code is written in C (not C++, with which I have zero experience). Is there a supported way to interface with TFLM in C for embedded applications?

    I have successfully created a good model on my desktop, and my goal is to get it working on my nRF52 microcontroller. I also used Bazel to try to build a C version of tensorflow/tensorflow/lite/c (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/c), but found out that the /micro/ subdirectory has recently been pulled out of that repo into a standalone repo, and now I'm not sure whether a C-based option exists for the tflite-micro project. Cheers!
    Hello, I would like to run the voice examples for the ESP32. Since the migration last year there is no ESP target (or at least I did not see one). Can somebody guide me on this?
    Vikram Dattu
    Hi @monoapp3, ESP32 support has now moved to a separate repository. Please find it here: https://github.com/espressif/tflite-micro-esp-examples
    Atis Elsts

    @cortensinger you need to use C++ to interface with TFLM; at least, I'm not aware of any wrapper libraries for plain C.

    You can of course build TFLM for the Cortex-M architecture used in nRF chips: build the microlite library and then link it with your project.

    @vikramdattu thanks. Hope they also have a chat room.
    Hao Zhao
    Hi all, I am new to the group. I found the agenda item 'LSTM from TFL to TFLM, fused-op vs fine-grain-ops' for the recent SIG Micro meeting. I am wondering whether anyone has managed to run LSTM/GRU on an MCU? I saw that the TFLM kernels for Xtensa (https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/kernels/xtensa) contain an LSTM implementation; hopefully the LSTM layer will be officially included in TFLM soon.
    Hello everyone, I have a problem importing TensorFlow. Can someone help me, please?
    Rahul Arasikere
    Hello folks, I am trying to run an evaluation of a neural network model on a Cortex-M7-based STM32F7 chip, using the generic-cortex-m target with the cmsis_nn kernels enabled. But when I try to create an interpreter and get the input tensor to feed data into before invoking the model, the input tensor is a nullptr, even though, as far as I can tell, the tensor allocations were all successful. I have aligned the tensor arena as well as the model to 16-byte boundaries. I was hoping I could get some pointers or advice in the right direction.
    1 reply
    Hi all, I am new to the group. I am using tflite-micro to run an application and got errors such as "xx operator is not registered". The operator is in the tf-lite namespace, but not in micro's. I'm wondering whether anyone can give me a pointer on how to add this additional operator to micro? Thanks in advance.
    Michael O'Cleirigh
    I'm looking at vector similarity, where instead of running the embedded ML model to a complete result you stop a few layers earlier, where you have a vector/tensor with many values. Does anyone know of any examples or papers in this area? I'm interested in use cases where you could compute the vector on device and then upload it to a remote vector DB for querying.
    Michael O'Cleirigh
    @xianxianzhang look at the documentation here about how to add/request tflite ops from lite to micro: https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/docs/porting_reference_ops.md
    Mo, Fan Vincent
    Hello, I'm planning to run TF Lite Micro with some on-device training. I found that TF Lite (v2.7.0) gained the feature where we can run a 'train' signature to train the last layers. I suppose that TFLM still does not have that. I know it will be challenging, but can anyone kindly give some clues about what I would probably need to change or add?
    Prerna Khanna
    Hi, I am trying to figure out whether there is any tutorial for getting started with TFLite Micro on other ARM Cortex-M MCUs. To be precise, I am trying to run it on the MAX78000 board (https://www.maximintegrated.com/en/products/microcontrollers/MAX78000.html).
    Hello everyone, has anyone tried to train a tflite model on an Arduino?