STM32Cube.AI (X-CUBE-AI v10.0)

Free AI model optimizer for STM32

STM32Cube.AI allows you to optimize and deploy trained neural network models from the most popular AI frameworks on any STM32 microcontroller. It now includes support for the Neural-ART Accelerator NPU embedded in the STM32N6.
The tool is available through a graphical interface in the STM32CubeMX environment as well as through a command-line interface. It is also available online in the ST Edge AI Developer Cloud.

New in version 10.0


– Support for the Neural-ART Accelerator NPU
– Extended support for new layers and operators
– Improved control of the data layout for input and output tensors

From Neural Networks to STM32-optimized code

Identify the right STM32 MCU for your project and generate suitable code from your trained neural network model

1. Load NN model
2. Analyse NN model
3. Validate
4. Optimize
5. Generate code
Select your MCU and load your trained model from your favorite AI framework: TensorFlow, PyTorch, ONNX, scikit-learn... STM32Cube.AI supports FLOAT32 and quantized INT8 weight formats (input file formats: .tflite, .h5, and .onnx).
The model analysis provides a complete set of information about your model, such as the number of parameters, the per-layer complexity in MACCs (multiply-accumulate operations), and detailed RAM and flash size requirements.
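To put the MACC figure in perspective, the count for a single layer follows directly from its shape. The sketch below uses the standard multiply-accumulate estimate for a 2D convolution; it is given for orientation only and is not necessarily the exact accounting STM32Cube.AI applies.

    /* Rough multiply-accumulate (MACC) estimate for one Conv2D layer.
     * Standard formula, shown for orientation only; the exact per-layer
     * accounting reported by STM32Cube.AI may differ. */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t conv2d_macc(uint32_t out_h, uint32_t out_w, uint32_t out_ch,
                                uint32_t k_h, uint32_t k_w, uint32_t in_ch)
    {
      return (uint64_t)out_h * out_w * out_ch * k_h * k_w * in_ch;
    }

    int main(void)
    {
      /* Example: 3x3 convolution, 16 -> 32 channels, 28x28 output map. */
      printf("MACC = %llu\n",
             (unsigned long long)conv2d_macc(28, 28, 32, 3, 3, 16));
      return 0;
    }

For this single layer the estimate is about 3.6 million MACCs, which is the kind of per-layer figure the analysis report aggregates.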
Validate your model with a data set or with random values to check that the generated C code matches the original trained model. Validation can be performed either on the desktop computer or on an STM32 board connected to it.
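Validation boils down to comparing, element by element, the outputs of the generated code against those of the original model. As a hedged illustration of that kind of comparison (not necessarily the exact metric the tool reports), a relative L2 error between a reference output and the on-target output can be computed as follows:

    /* Relative L2 error between a reference output (original model) and the
     * output produced by the generated C code. Illustrative only; the metrics
     * actually reported by the validation step may differ. */
    #include <math.h>
    #include <stddef.h>

    static float l2_relative_error(const float *ref, const float *test, size_t n)
    {
      double diff2 = 0.0, ref2 = 0.0;
      for (size_t i = 0; i < n; ++i) {
        double d = (double)ref[i] - (double)test[i];
        diff2 += d * d;
        ref2  += (double)ref[i] * (double)ref[i];
      }
      return (ref2 > 0.0) ? (float)sqrt(diff2 / ref2) : (float)sqrt(diff2);
    }

A value close to zero indicates that the generated implementation reproduces the original model within numerical tolerance.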
Optimize your model by managing memory usage layer by layer and by choosing the right balance between internal and external memory resources.
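When the weights do not fit in internal flash, part of the model can be mapped to external memory. If you handle such placement by hand rather than through the tool's memory settings, one common pattern is to pin the weight table to a dedicated linker section located in memory-mapped external flash. The sketch below is illustrative only: the section name, symbol names, and size are assumptions, not X-CUBE-AI output, and a matching region must be defined in the linker script.

    /* Illustrative placement of a large constant weight table in external
     * flash through a dedicated linker section (GCC syntax). Section and
     * symbol names are assumptions; the external flash must be accessible
     * in memory-mapped mode for the data to be read directly. */
    #include <stdint.h>
    #include <stddef.h>

    #define MODEL_WEIGHTS_SIZE  (512u * 1024u)   /* hypothetical size */

    __attribute__((section(".ext_flash_weights")))
    static const uint8_t model_weights[MODEL_WEIGHTS_SIZE] = { 0x00 };
    /* Weight bytes go here, or the region is programmed separately. */

    /* Hand the base address and size to whatever code initializes the model. */
    const void *model_weights_addr(void) { return model_weights; }
    size_t model_weights_size(void)      { return MODEL_WEIGHTS_SIZE; }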
Generate the optimized C code for your AI inference model.
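As an illustration, the sketch below shows how application code typically drives the C files generated for a model named "network". The symbol names (ai_network_create_and_init, AI_NETWORK_DATA_ACTIVATIONS_SIZE, and so on) follow the usual generated pattern but depend on the model name chosen in STM32CubeMX and on the X-CUBE-AI version, so treat this as a hedged outline rather than drop-in code.

    /* Minimal usage sketch for code generated by X-CUBE-AI for a model named
     * "network". Names follow the typical generated pattern (assumption);
     * check the generated network.h / network_data.h for the exact symbols. */
    #include "network.h"        /* generated model interface */
    #include "network_data.h"   /* generated weight/activation descriptors */

    static ai_handle network = AI_HANDLE_NULL;

    /* Scratch buffer for intermediate activations, sized by a generated macro. */
    AI_ALIGNED(32)
    static ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];

    static ai_buffer *ai_input;
    static ai_buffer *ai_output;

    int ai_bootstrap(void)                      /* hypothetical helper name */
    {
      const ai_handle act_addr[] = { activations };

      /* Create the network instance and bind its activation buffer. */
      ai_error err = ai_network_create_and_init(&network, act_addr, NULL);
      if (err.type != AI_ERROR_NONE)
        return -1;

      /* Retrieve the generated input/output buffer descriptors. */
      ai_input  = ai_network_inputs_get(network, NULL);
      ai_output = ai_network_outputs_get(network, NULL);
      return 0;
    }

    int ai_infer(const void *in_data, void *out_data)   /* hypothetical helper */
    {
      /* Point the descriptors at the application buffers and run one batch. */
      ai_input[0].data  = AI_HANDLE_PTR(in_data);
      ai_output[0].data = AI_HANDLE_PTR(out_data);

      ai_i32 n_batch = ai_network_run(network, ai_input, ai_output);
      return (n_batch == 1) ? 0 : -1;
    }

The application remains responsible for providing the input and output buffers and for linking the network runtime library delivered with the pack.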

Optimize your AI models for peak performance

STM32Cube.AI simplifies the deployment of artificial intelligence on microcontrollers. By optimizing memory usage and inference time, it enables smooth integration and execution of AI models on your MCU.

– Up to 70% faster inference time*
– Up to 75% space freed up in FLASH and RAM*

* versus TensorFlow Lite for Microcontrollers

STM32 model zoo – Find the best edge AI model

The STM32 AI model zoo is a collection of pre-trained machine learning models that are optimized to run on STM32 microcontrollers. Available on GitHub, this is a valuable resource for anyone looking to add AI capabilities to their STM32-based projects.

Get started with STM32Cube.AI

Discover how to optimize your AI Neural Network and create processing libraries for your STM32 project