    Michael O'Cleirigh
    @mocleiri

    @petewarden_twitter the pretrained models for micro_speech are incorrect. They are uint8 quantized instead of int8 quantized. I just filed tensorflow/tensorflow#48752

    For MicroPython, due to heap limits there is only 100K or so of RAM, and my original approach of loading the model as an array in a MicroPython file was taking > 70KB because of the dynamic compiling. I copied what OpenMV did and loaded the model.tflite file directly from the filesystem, which reduced the RAM impact to only about 18K.

    As I didn't know how to reverse the xxd command, I just downloaded the pretrained files and used the model.tflite file directly.

    Inference hasn't been working for me, and I just found out that it's because the model I've been using is uint8 quantized, while I was expecting int8 quantized, since that is what comes from the spectrogram data.
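    A minimal sketch of that direct-load approach (C/C++, assuming POSIX stdio is available; the file name and buffer handling are illustrative, and the flatbuffer needs suitable alignment in RAM):

    // Read model.tflite into RAM instead of compiling it in via xxd.
    FILE* f = fopen("model.tflite", "rb");
    fseek(f, 0, SEEK_END);
    long model_len = ftell(f);
    rewind(f);
    uint8_t* model_buf = (uint8_t*)malloc(model_len);
    fread(model_buf, 1, model_len, f);
    fclose(f);
    const tflite::Model* model = tflite::GetModel(model_buf);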

    peter197321
    @peter197321
    @petewarden_twitter is there a comparison of the actually supported operations between TFLite-micro and TFLite?
    Daniel Konegen
    @konegen
    Hi there. Does anyone know if it is possible to measure the execution time of each layer of a TensorFlow Lite model while executing on a microcontroller?
    TCal
    @tcal-x
    @konegen - short answer is, probably. Is your difficulty with adding the profiling (adding a MicroProfiler to the MicroInterpreter), or with accessing the output from the microcontroller?
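    For reference, a minimal sketch of wiring in the profiler (constructor arguments and profiler APIs vary between TFLM versions, so treat this as an outline rather than the exact API):

    // Pass a MicroProfiler to the interpreter and dump per-op timings
    // after Invoke(); the profiler's position in the constructor differs
    // across TFLM versions.
    tflite::MicroProfiler profiler;
    tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                         kTensorArenaSize, nullptr, &profiler);
    interpreter.AllocateTensors();
    interpreter.Invoke();
    profiler.Log();  // prints one timing line per kernel invocation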
    Daniel Konegen
    @konegen
    Hi all,
    does anyone know how to implement a custom layer or operator for TensorFlow Lite for Microcontrollers? If anyone does, could you give me step-by-step instructions? The instructions on the TensorFlow website (https://www.tensorflow.org/lite/guide/ops_custom) didn't really help me; I don't really know where all the code has to go. Can anyone who already has experience with this topic help me?
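    In case it helps later readers, a rough sketch of the microcontroller side, using a hypothetical op called "MyOp" (the name must match the custom op name stored in the .tflite file; registration details vary across TFLM versions):

    // Hypothetical custom op "MyOp": supply Prepare/Eval callbacks in a
    // TfLiteRegistration and map the op's flatbuffer name to it.
    TfLiteStatus MyOpPrepare(TfLiteContext* context, TfLiteNode* node) {
      return kTfLiteOk;  // shape/type checks would go here
    }
    TfLiteStatus MyOpEval(TfLiteContext* context, TfLiteNode* node) {
      // read node->inputs, compute, write node->outputs
      return kTfLiteOk;
    }
    TfLiteRegistration* Register_MY_OP() {
      static TfLiteRegistration r = {};  // init/free left null
      r.prepare = MyOpPrepare;
      r.invoke = MyOpEval;
      return &r;
    }
    // Then register it by name on the resolver:
    static tflite::MicroMutableOpResolver<4> resolver;
    resolver.AddCustom("MyOp", Register_MY_OP());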
    Þórir Már
    @ThorirMar_twitter

    Hello all!

    I'm wondering if you know of an official website for TFLite Micro that goes over which layers and operations are supported?

    siddanth-digi
    @siddanth-digi

    Hello all!

    I'm wondering if you know of an official website for TFLite Micro that goes over which layers and operations are supported?

    https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/kernels/micro_ops.h

    Supported layers are the ones declared under the ops { micro { namespaces in that header.

    As far as I know, official documentation / an official website is not currently available, but this might help you.
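    That is, the header declares one registration function per supported kernel, roughly like this (abbreviated):

    namespace tflite {
    namespace ops {
    namespace micro {
    TfLiteRegistration* Register_CONV_2D();
    TfLiteRegistration* Register_FULLY_CONNECTED();
    TfLiteRegistration* Register_SOFTMAX();
    // ... one Register_*() declaration per supported op ...
    }  // namespace micro
    }  // namespace ops
    }  // namespace tflite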


    chuantunglin
    @chuantunglin
    Hi all,
    Does anyone know how to build the examples (e.g. hello_world and micro_speech in TFLite Micro) for RISC-V?
    What are the steps and commands?
    I'm using the RISC-V GNU Compiler Toolchain: riscv64-unknown-elf-g++.
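    Not an authoritative answer, but older trees shipped a riscv32_mcu make target, so something along these lines may be a starting point (the target and goal names are assumptions to verify against your tree):

    make -f tensorflow/lite/micro/tools/make/Makefile TARGET=riscv32_mcu hello_world_bin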
    juliowissing
    @juliowissing
    Hello everyone,
    is there a way to disable dynamic quantization when converting a TF model to a TF Lite model? We need this to obtain a float baseline for float-vs-int inference benchmarking on Arduino.
    juliowissing
    @juliowissing


    The official website also references the all_ops_resolver.cc file for a list of supported operations.

    Daniel Konegen
    @konegen
    Hi @petewarden_twitter, hi all, do any of you know if it is possible in the meantime to calculate in advance the tensor arena size that a TFLite model will need on a microcontroller? Currently you have to find it out by trial and error. Does anyone know how to do this, or know a technique for implementing it?
    Daniel Konegen
    @konegen
    Also, I have another question. Is it possible to determine, from a TensorFlow/Keras or TensorFlow Lite model, the operations needed for TensorFlow Lite for Microcontrollers, so that tflite::MicroMutableOpResolver can be used instead of tflite::AllOpsResolver to save memory and improve execution speed? Can anyone help me with this?
    JosephBushagour
    @JosephBushagour
    I find https://netron.app/ helpful for determining what ops are needed @konegen
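    For example, once netron shows which ops the model uses, the resolver can list exactly those (the op names here are illustrative):

    // Register only the ops the model actually uses; the template
    // parameter is the number of registered ops.
    static tflite::MicroMutableOpResolver<3> resolver;
    resolver.AddFullyConnected();
    resolver.AddSoftmax();
    resolver.AddReshape();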
    driedler
    @driedler

    Has anyone ever tried running TFLM on a Raspberry Pi Zero?
    Nothing popped up in my searches so I made my own port:
    https://github.com/driedler/tflite_micro_runtime

    Using the CMSIS-NN kernels I saw a ~8x speed-up compared to the tflite_runtime Python package.
    This makes sense because the RPi0's ARMv6 core doesn't have any acceleration and the
    CMSIS kernels are far better optimized than tflite_runtime's default kernels.

    mathewgeorge88
    @mathewgeorge88
    Hi all, I would like to create an object detection model to run on an ESP32. I have a few questions:
    1) Is there newer documentation or an example for training a custom model with TF2? The tf.slim method seems to be outdated - https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/person_detection/training_a_model.md
    2) Is there a recommended architecture like MobileNets, EfficientDet etc. to use on a microcontroller/ESP32?
    I am a beginner in this field. Thanks for your help!
    ricoGG
    @ricoGG
    When I run person_detection from TFLite Micro, I get "person score: -72, no person score: 72". I am not quite sure whether that is correct; has anyone met the same problem?
    Vikram Dattu
    @vikramdattu

    When I run person_detection from TFLite Micro, I get "person score: -72, no person score: 72". I am not quite sure whether that is correct; has anyone met the same problem?

    I have observed the same thing since the example was updated to use an int8 model (the previous one was grayscale uint8).
    Is this the case for all the archs, and not just ESP32? Maybe it's not an issue, but rather the way the output is interpreted?
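    One thing worth checking on that theory: with the int8 model the raw scores are quantized, so a sketch of dequantizing them before interpreting (the output index name is an assumption):

    // Map the int8 score back to float using the output tensor's
    // quantization parameters.
    TfLiteTensor* output = interpreter.output(0);
    float person_score =
        (output->data.int8[kPersonIndex] - output->params.zero_point) *
        output->params.scale;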

    Jens Elofsson
    @jenselofsson
    I'm trying to run the script tensorflow/lite/micro/tools/ci_build/test_all.sh, and I'm getting an error related to the flags passed to the git command. Is there a specific git version (or max/min version) that is intended to be used?
    ricoGG
    @ricoGG
    I am using TensorFlow 2.5.0 for testing. I tested grayscale after adding the uint8 ops; detection accuracy is better than with the int8 version, but it's too slow. For the int8 version, should I still input a grayscale 96x96 image?
    siddanth-digi
    @siddanth-digi

    Hi @petewarden_twitter, hi all, do any of you know if it is possible in the meantime to calculate in advance the tensor arena size that a TFLite model will need on a microcontroller? Currently you have to find it out by trial and error. Does anyone know how to do this, or know a technique for implementing it?

    You can use the following method in your TFLM C++ code, if you have the luxury of allocating more up front, to find out the exact number of bytes used:

    size_t used_bytes = static_interpreter.arena_used_bytes();
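    In context, that might look like this (the arena size is an arbitrary over-allocation):

    // Over-allocate the arena, call AllocateTensors(), then read back the
    // real requirement; arena_used_bytes() is only meaningful after a
    // successful AllocateTensors().
    constexpr size_t kTensorArenaSize = 100 * 1024;  // deliberately generous
    static uint8_t tensor_arena[kTensorArenaSize];
    // ... construct static_interpreter with tensor_arena ...
    static_interpreter.AllocateTensors();
    size_t used_bytes = static_interpreter.arena_used_bytes();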

    siddanth-digi
    @siddanth-digi

    Hi all, I would like to create an object detection model to run on an ESP32. I have a few questions:
    1) Is there newer documentation or an example for training a custom model with TF2? The tf.slim method seems to be outdated - https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/person_detection/training_a_model.md
    2) Is there a recommended architecture like MobileNets, EfficientDet etc. to use on a microcontroller/ESP32?
    I am a beginner in this field. Thanks for your help!

    An object detection model can be run, but the detection post-process op is not available. Also, only a selected set of operations is supported; you can decide on the model based on that.
    Coming to the model: its size needs to be less than 4MB, as you can only access 4MB of RODATA from flash (the model is stored as rodata).
    You will need at least 8MB of flash and 8MB of PSRAM.

    I am using TensorFlow 2.5.0 for testing. I tested grayscale after adding the uint8 ops; detection accuracy is better than with the int8 version, but it's too slow. For the int8 version, should I still input a grayscale 96x96 image?

    You have to typecast the image from uint8 to int8 before feeding it to the interpreter.

    Has anyone ever tried running TFLM on a Raspberry Pi Zero?
    Nothing popped up in my searches so I made my own port:
    https://github.com/driedler/tflite_micro_runtime

    Using the CMSIS-NN kernels I saw a ~8x speed-up compared to the tflite_runtime Python package.
    This makes sense because the RPi0's ARMv6 core doesn't have any acceleration and the
    CMSIS kernels are far better optimized than tflite_runtime's default kernels.

    Thanks for the info.

    ricoGG
    @ricoGG
    I tried to typecast the image from uint8 to int8 as below:
    for (int i = 0; i < image_width * image_height * channels; i++) {
        image_data[i] = grayscale_data[i] - 128;
    }
    but it doesn't work, maybe it's even worse. Is it wrong? Thanks
    Diogo Santiago
    @dsantiago
    Hi all... I am getting this error when trying the Makefile tests: tensorflow/lite/micro/sparkfun_edge/system_setup.cc:22:10: fatal error: 'am_bsp.h' file not found
    siddanth-digi
    @siddanth-digi

    I tried to typecast the image from uint8 to int8 as below:
    for (int i = 0; i < image_width * image_height * channels; i++) {
        image_data[i] = grayscale_data[i] - 128;
    }
    but it doesn't work, maybe it's even worse. Is it wrong? Thanks

    @ricoGG
    do a typecast:
    image_data[i] = (int8_t)(grayscale_data[i] - 128);

    Can you provide more details about what hardware you are using?
    Also, the code snippet alone is not enough to provide a solution.

    @ricoGG

    for (int i = 0; i < IMAGE_HEIGHT * IMAGE_WIDTH * CHANNELS; i++)
    {
        image_data[i] = (int8_t)(rgb_buf[i] - 128);
    }

    this completely works for me

    ricoGG
    @ricoGG
    @siddanth-digi thanks for your response. I tested on a HaaS EDU K1 with a Cortex-M4. Do you mean that if I use the int8 version, I have to input RGB888 96x96 data rather than grayscale 96x96?
    jam244
    @jam244
    Hi, I'm new to TFLite Micro. I am attempting to build the hello_world example for ESP32. The README https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/examples/hello_world says "The sample has been tested on ESP-IDF version 4.0", but the IDF docs https://docs.espressif.com/projects/esp-idf/en/latest/esp32/get-started-legacy/windows-setup.html say "Since ESP-IDF V4.0, the default build system is based on CMake." The README then proceeds to use "make" to generate the project. So should I use the "Legacy GNU Make" setup for the toolchain and IDF? What version of the toolchain was used for building the example? Would esp32_win32_msys2_environment_and_esp2020r2_toolchain-20200601 work?
    Michael O'Cleirigh
    @mocleiri
    @jam244 that directory is the base. You actually have to generate the project, which is what gives you the IDF-compatible project. The instructions are in the readme: https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/examples/micro_speech#generate-example-project
    jam244
    @jam244
    @mocleiri Thanks. I suspect the builds in CI and the README are focused on a Linux environment. I managed to run the example on an ESP32 DevKitC using VirtualBox. It may be helpful to mention this in the README too.
    jam244
    @jam244
    Hi, I have a bidirectional-GRU-based model which I would like to convert to TFLite Micro. Is this possible? I checked the ops here: https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/all_ops_resolver.cc and could not find any operations for adding GRU layers. The closest I could get is the following commit: https://github.com/tensorflow/tflite-micro/pull/281/commits/66e4d2529e20f3fc0279fe8bc7d9ad9c0ea5dee4
    So I suspect LSTM is still being ported. Can you confirm?
    Michael O'Cleirigh
    @mocleiri
    @jam244 yes, there is a pull request for it: tensorflow/tflite-micro#281. Based on the review so far, it looks to me like there will be more revisions before it's merged.
    Kunal Khatri
    @kunal15
    Hi guys, I am using a simple model with TFLM; however, it prints "aborted" on stdout and terminates. I figured out that this happens at the "TfLiteStatus allocate_status = interpreter->AllocateTensors();" line. Any idea what the reason might be? The model size is 1.03 MB.
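    A common cause is a tensor arena that is too small for the model; a sketch of failing gracefully instead of aborting (MicroPrintf assumes a recent TFLM tree):

    // Check the status rather than assuming success; if this fails, the
    // usual first fix is a larger tensor arena.
    TfLiteStatus allocate_status = interpreter->AllocateTensors();
    if (allocate_status != kTfLiteOk) {
      MicroPrintf("AllocateTensors() failed; try increasing the arena size");
      return;
    }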
    Kunal Khatri
    @kunal15
    Hi guys, I want to cross-compile the Hello World project for an ARM CPU. Can someone please tell me how I can proceed with that? I am currently able to do it for x86, but not able to generate a binary that can run on ARM.
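    Not a definitive answer, but the Makefile supports cross targets; a hedged example for a Cortex-M build (the target names are assumptions, so check tensorflow/lite/micro/tools/make/targets for what your tree supports):

    make -f tensorflow/lite/micro/tools/make/Makefile TARGET=cortex_m_generic TARGET_ARCH=cortex-m4 hello_world_bin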
    siddanth-digi
    @siddanth-digi

    @siddanth-digi thanks for your response. I tested on a HaaS EDU K1 with a Cortex-M4. Do you mean that if I use the int8 version, I have to input RGB888 96x96 data rather than grayscale 96x96?

    No, no, that's not what I meant. For a grayscale model you should use a grayscale image only; I just showed the conversion. First subtract, then typecast.

    Atis Elsts
    @atiselsts
    Hello folks! I'm trying to build TFLite Micro natively on a Raspberry Pi, for performance evaluation / comparison purposes.
    1) Is native compilation something that is officially supported?
    2) I got this compilation error: /home/pi/tensorflow/tensorflow/lite/micro/tools/make/downloads/flatbuffers/include/flatbuffers/flatbuffers.h:78: undefined reference to `_xassert(char const*, int)'. After commenting out the offending line, the code compiles and works. Is this something that I can fix somehow?
    TCal
    @tcal-x
    Hello TFLmicro+Python experts... Is there a way to satisfy the new PIL dependency by installing a Debian package? I tried sudo apt install python3-pil but I still get the "No module named 'PIL'" error. Is pip3 the only way to go?
    hubertsumarno
    @hubertsumarno
    Hi,
    I am trying to use TensorFlow Lite on a Cortex-M4 microcontroller by ST (Nucleo-64 board) to do animal recognition. The system will be trained using the CIFAR-10 dataset from http://www.cs.toronto.edu/~kriz/cifar.html.
    As a first step, I would like to generate a Keil project for a Hello World example. I have cloned the TFLite Micro repo from https://github.com/tensorflow/tflite-micro and then tried to use 'make' (please see the command below) to generate the Keil project.
    The command "make -f tensorflow/lite/micro/tools/make/Makefile TARGET=stm32f4 TARGET_ARCH=cortex-m4 test_hello_world_test" does not seem to generate any template projects in the path tflite-micro\tensorflow\lite\micro\tools\make\gen\linux_x86_64_default\prj\hello_world\keil
    Any help as to what should be done / done correctly?
    Andrew Cavanaugh
    @andrewxcav
    Sorry if this has already been asked, but does anyone have a way to work out which upstream tensorflow hash was the most recent merge into the micro repo? I am having issues with a model that has gone through an external_representation -> TF -> TFLite conversion, and it has a newer version of leaky_relu than what is in the current head of the micro repo. Thanks!
    Sandeep Singh
    @arm-ssingh
    @hubertsumarno you can build the project with:
    make -f tensorflow/lite/micro/tools/make/Makefile TARGET=stm32f4 TARGET_ARCH=cortex-m4 OPTIMIZED_KERNEL_DIR=cmsis_nn generate_hello_world_keil_project
    N1ko7aj
    @N1ko7aj
    Trying to build the person detection example for ESP-EYE using the ESP-IDF through PowerShell, following the instructions in the example README.md. The IDF doesn't recognize the "make" command and requires a CMakeLists file in order to build the project. What am I missing?
    jmaha
    @jmaha
    I've managed to build TensorFlow Lite for Microcontrollers for a Nordic nRF52833. I'm having an issue loading my model, which is a fairly simple model with an LSTM layer. I get a message indicating the model uses the REDUCE_PROD operator, which isn't supported by TF Lite for Microcontrollers. Does anyone know how to work around this issue?
    N1ko7aj
    @N1ko7aj

    Getting this error message:

    "make: *** No rule to make target 'generate_person_detection_esp_project'. Stop."

    It appears when I try to generate person_detection for ESP32. I'm following all the instructions given in the example here: https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/examples/person_detection#running-on-esp32. Has anybody successfully built the example?

    Charalampos Eleftheriadis
    @LambisElef
    Hi everyone! I am trying to run TensorFlow Micro on a Linux machine (very limited in resources, which is why I need Micro). I have managed to make it compile successfully, but when I try to get the type of the input, I get kTfLiteNoType instead of the kTfLiteInt8 or kTfLiteFloat32 (for quantized and non-quantized models) that I should be getting. Does anyone know why this is happening? I tried with models converted with TF 2.4 and 2.6; same issue.
    RealKuri
    @RealKuri
    Hi, I'm using STM32CubeIDE to implement my neural network.
    Through X-Cube-AI I generated the application template file, however I'm having problems sending and receiving the results from the network, the functions ( acquire_and_process_data, post_process) come blank and this is my attempt but I couldn't... My network has 7 float inputs and 4 outputs. https://i.imgur.com/fyAPxGg.png | https://pastebin.com/MfT4f5Lh