oveRTOS C API
Embedded RTOS framework — build system, configuration, and portable C API
Portable inference API for running TFLite models via LiteRT (formerly TensorFlow Lite Micro). More...
Data Structures | |
| struct | ove_tensor_info |
| Tensor descriptor returned by ove_model_input() / ove_model_output(). More... | |
| struct | ove_model_config |
| Configuration for an ML inference session. More... | |
Enumerations | |
| enum | ove_tensor_type { OVE_TENSOR_FLOAT32 = 0 , OVE_TENSOR_INT8 = 1 , OVE_TENSOR_UINT8 = 2 , OVE_TENSOR_INT16 = 3 , OVE_TENSOR_INT32 = 4 } |
| Tensor element types. More... | |
Portable inference API for running TFLite models via LiteRT (formerly TensorFlow Lite Micro).
Provides a C API for loading pre-trained .tflite FlatBuffer models and running inference on them. The same model binary runs unchanged across all four oveRTOS backends (FreeRTOS, Zephyr, NuttX, POSIX).
Two allocation strategies are available:
- _create() / _destroy() — unified API that works in both heap and zero-heap mode. In heap mode the tensor arena is allocated internally. In zero-heap mode these are macros that generate per-call-site static storage and arena; arena_size must be a compile-time constant.
- _init() / _deinit() — explicit storage control with a caller-supplied arena buffer.

The module is enabled via CONFIG_OVE_INFER.

Enumeration Type Documentation
| enum ove_tensor_type |
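To illustrate the two strategies, here is a hedged sketch of typical call sequences. The type names ove_model_config, ove_tensor_info and the _create()/_destroy() and _init()/_deinit() suffixes come from this page; the concrete identifiers ove_model, ove_model_create, ove_model_init, ove_model_invoke and the config field names are assumptions for illustration and may not match the real headers:

```c
#include <stddef.h>
#include <stdint.h>

/* Strategy 1: _create()/_destroy(). In heap mode the arena is allocated
 * internally from arena_size; in zero-heap mode this is a macro that emits
 * per-call-site static storage, so arena_size must be a constant.
 * (All identifiers below are hypothetical.) */
void heap_mode_sketch(const uint8_t *model_data, size_t model_len)
{
    ove_model_config cfg = {
        .model      = model_data,   /* pre-trained .tflite FlatBuffer  */
        .model_size = model_len,
        .arena_size = 16 * 1024,    /* compile-time constant           */
    };
    ove_model *m = ove_model_create(&cfg);
    ove_model_invoke(m);            /* run one inference               */
    ove_model_destroy(m);
}

/* Strategy 2: _init()/_deinit() with a caller-supplied arena buffer,
 * giving the caller full control over where tensor memory lives. */
static uint8_t tensor_arena[16 * 1024];

void explicit_storage_sketch(const uint8_t *model_data, size_t model_len)
{
    ove_model m;
    ove_model_init(&m, model_data, model_len,
                   tensor_arena, sizeof tensor_arena);
    ove_model_invoke(&m);
    ove_model_deinit(&m);
}
```

The same sketch applies on any of the four backends, since the API is portable; only the arena placement (heap, per-call-site static, or caller-chosen section) differs between the strategies.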
Tensor element types.
Subset of TFLite tensor types that are relevant for microcontroller inference (quantised int8/int16 and float32).
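Since each ove_tensor_type value documented above corresponds to a fixed element width, a small helper can translate a tensor's type into its per-element byte size (for example, to compute a buffer size from an element count). The enum values are copied from this page; the helper function itself is an illustrative sketch, not part of the documented API:

```c
#include <stddef.h>

/* Mirrors the ove_tensor_type values documented above. */
typedef enum {
    OVE_TENSOR_FLOAT32 = 0,
    OVE_TENSOR_INT8    = 1,
    OVE_TENSOR_UINT8   = 2,
    OVE_TENSOR_INT16   = 3,
    OVE_TENSOR_INT32   = 4,
} ove_tensor_type;

/* Byte width of one tensor element (illustrative helper). */
static size_t ove_tensor_elem_size(ove_tensor_type t)
{
    switch (t) {
    case OVE_TENSOR_FLOAT32: return 4;
    case OVE_TENSOR_INT8:    return 1;
    case OVE_TENSOR_UINT8:   return 1;
    case OVE_TENSOR_INT16:   return 2;
    case OVE_TENSOR_INT32:   return 4;
    }
    return 0; /* unknown type */
}
```

For a quantised int8 model, a 1x96x96x1 input tensor therefore needs 9216 * ove_tensor_elem_size(OVE_TENSOR_INT8) = 9216 bytes of arena space, before alignment padding.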