oveRTOS C API
Embedded RTOS framework — build system, configuration, and portable C API
Data Structures | Enumerations
ML Inference

Portable inference API for running TFLite models via LiteRT for Microcontrollers (formerly TensorFlow Lite Micro). More...

Data Structures

struct  ove_tensor_info
 Tensor descriptor returned by ove_model_input() / ove_model_output(). More...
 
struct  ove_model_config
 Configuration for an ML inference session. More...
 

Enumerations

enum  ove_tensor_type {
  OVE_TENSOR_FLOAT32 = 0, OVE_TENSOR_INT8 = 1, OVE_TENSOR_UINT8 = 2, OVE_TENSOR_INT16 = 3,
  OVE_TENSOR_INT32 = 4
}
 Tensor element types. More...
 

Detailed Description

Portable inference API for running TFLite models via LiteRT for Microcontrollers (formerly TensorFlow Lite Micro).

Provides a C API for loading pre-trained .tflite FlatBuffer models and running inference on them. The same model binary runs unchanged across all four oveRTOS backends (FreeRTOS, Zephyr, NuttX, POSIX).
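A typical session might look like the sketch below. ove_model_input(), ove_model_output(), the ove_model_config and ove_tensor_info structs, and OVE_TENSOR_FLOAT32 appear in this reference; the header name, the config/tensor field names, and the lifecycle calls ove_model_create(), ove_model_invoke(), and ove_model_destroy() are assumptions for illustration and may differ in the actual API.

```c
#include <stddef.h>
#include <string.h>

#include "ove/infer.h"            /* assumed header name */

/* Model FlatBuffer embedded at build time (e.g. via objcopy or xxd). */
extern const unsigned char g_model_tflite[];
extern const size_t        g_model_tflite_len;

static unsigned char s_arena[16 * 1024];   /* illustrative arena size */

int classify(const float *features, size_t n, float *scores, size_t k)
{
    /* Field names in ove_model_config are assumptions. */
    struct ove_model_config cfg = {
        .model_data = g_model_tflite,
        .model_size = g_model_tflite_len,
        .arena      = s_arena,
        .arena_size = sizeof s_arena,
    };

    /* Assumed lifecycle call; consult the function reference. */
    ove_model_t *mdl = ove_model_create(&cfg);
    if (!mdl)
        return -1;

    /* ove_model_input()/ove_model_output() return a tensor
     * descriptor (ove_tensor_info), per the struct list above. */
    struct ove_tensor_info in  = ove_model_input(mdl, 0);
    struct ove_tensor_info out = ove_model_output(mdl, 0);

    if (in.type != OVE_TENSOR_FLOAT32)   /* field name assumed */
        goto fail;

    memcpy(in.data, features, n * sizeof *features);

    if (ove_model_invoke(mdl) != 0)      /* assumed call */
        goto fail;

    memcpy(scores, out.data, k * sizeof *scores);
    ove_model_destroy(mdl);
    return 0;

fail:
    ove_model_destroy(mdl);
    return -1;
}
```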

Two allocation strategies are available:

Note
Requires CONFIG_OVE_INFER.
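Assuming oveRTOS uses Kconfig-style configuration fragments (as Zephyr and NuttX do), enabling the subsystem would look like:

```
# Hypothetical fragment; only CONFIG_OVE_INFER is named in this reference.
CONFIG_OVE_INFER=y
```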

Enumeration Type Documentation

◆ ove_tensor_type

Tensor element types.

Subset of TFLite tensor types that are relevant for microcontroller inference (quantised int8/int16 and float32).