Nowadays, products are becoming smarter in order to provide additional value to their users. To optimize their usage, smart objects need to be aware of their environment. Artificial intelligence can interpret data from various sensors, such as accelerometers or microphones, to make these data meaningful to humans. For example, we have taught a neural network to distinguish scenes (indoor, outdoor, in-vehicle) so that equipment behavior can be optimized for the user's environment. After optimization with STM32Cube.AI, the AI model can run on an ultra-low-power microcontroller, embedding intelligence everywhere. This approach can easily be adapted to many other use cases or environments by retraining the AI model with new data.
The goal of Acoustic Scene Classification (ASC) is to classify the current environment into one of three predefined classes (indoor, outdoor, in-vehicle), characterized by the acoustic signal captured by a single digital microphone. The demo runs on the small form-factor SensorTile board, which comes with a smartphone application connected through Bluetooth Low Energy.
The ASC configuration captures audio at 16 kHz (16-bit, mono) using the on-board MEMS microphone. Every millisecond, a DMA interrupt delivers the last 16 PCM audio samples. These samples are accumulated into a sliding window of 1024 samples with 50% overlap, so every 512 samples (i.e., every 32 ms) the buffer is passed to the ASC preprocessing for feature extraction. The ASC preprocessing extracts audio features into a (30×32) LogMel spectrogram.
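The buffering scheme above can be sketched as follows. This is a minimal illustration, not the firmware code: the class and variable names are hypothetical, and the real implementation accumulates DMA chunks in a fixed C buffer rather than a Python list.

```python
FRAME = 1024    # analysis window length (samples)
HOP = 512       # 50% overlap -> window advances by 512 samples (32 ms at 16 kHz)
DMA_CHUNK = 16  # samples delivered per DMA interrupt (1 ms at 16 kHz)

class SlidingWindow:
    """Accumulates DMA chunks into overlapping 1024-sample windows."""

    def __init__(self):
        self.buf = []

    def push(self, chunk):
        """Add one DMA chunk; return a full window when one is ready, else None."""
        self.buf.extend(chunk)
        if len(self.buf) >= FRAME:
            window = self.buf[:FRAME]
            # Keep the second half of the window so the next one overlaps by 50%.
            self.buf = self.buf[HOP:]
            return window
        return None
```

Feeding 64 chunks (1024 samples) produces the first window; every 32 further chunks (512 samples, 32 ms) produces the next one.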
For computational efficiency and optimized memory management, this step is divided into two routines:
– The first routine computes one of the 32 spectrogram columns, transforming the time-domain input signal to the mel scale by applying an FFT and a filter bank (30 mel bands).
– The second routine, executed once all 32 columns have been computed (i.e., after 1024 ms), applies log scaling to the mel-scaled spectrogram, producing the input feature for the ASC convolutional neural network.
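The 30-band filter bank in the first routine is built on the mel frequency scale. The sketch below uses the standard HTK-style mel formulas to place the band edges; the exact constants and band placement in the ST preprocessing library are an assumption here.

```python
import math

def hz_to_mel(f):
    """HTK-style Hz-to-mel mapping (assumed; the ST library may differ)."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_edges(n_mels=30, fmin=0.0, fmax=8000.0):
    """Band-edge frequencies (Hz) for n_mels triangular filters,
    spaced uniformly on the mel scale up to Nyquist (8 kHz at 16 kHz)."""
    lo, hi = hz_to_mel(fmin), hz_to_mel(fmax)
    mels = [lo + (hi - lo) * i / (n_mels + 1) for i in range(n_mels + 2)]
    return [mel_to_hz(m) for m in mels]
```

Each of the 30 triangular filters spans three consecutive edges, so the edges are denser at low frequencies, mimicking human pitch perception.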
Every 1024 ms, the (30×32) LogMel spectrogram is fed to the input of the ASC convolutional neural network, which then classifies it into one of the output labels: indoor, outdoor, or in-vehicle.
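The log-scaling step that turns the accumulated mel spectrogram into the CNN input can be sketched as below. The dB-style 10·log10 scaling and the epsilon floor (to avoid log of zero) are assumptions; the ST preprocessing may use a different log variant.

```python
import math

N_MELS, N_COLS = 30, 32  # spectrogram shape fed to the CNN
EPS = 1e-10              # floor to avoid log(0) (assumed value)

def log_scale(mel_spec):
    """Apply log scaling to a mel spectrogram.

    mel_spec: list of N_COLS columns, each a list of N_MELS band energies.
    Returns the (30 x 32) LogMel feature for the CNN input.
    """
    return [[10.0 * math.log10(max(e, EPS)) for e in col] for col in mel_spec]
```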
– Microphone: digital MEMS microphone (ref. MP34DT05-A)
– Dataset: 22 h 53 min of audio samples
– Model: ST quantized convolutional neural network
– Input size: 30×32
– Complexity: 517 kMACC
– Memory: 31 KB Flash for weights, 18 KB RAM for activations
Performance on STM32L476 (low-power) @ 80 MHz:
– Use case: 1 classification/s
– Pre/post-processing: 3.7 MHz average CPU load
– NN processing: 6 MHz average CPU load
– Power consumption (1.8 V)
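A back-of-the-envelope check ties these figures together. The cycles-per-MACC and duty-cycle values below are derived from the numbers above, not official ST benchmarks.

```python
# Figures from the spec above.
CPU_MHZ = 80.0       # STM32L476 clock
NN_LOAD_MHZ = 6.0    # average CPU load for NN processing
PRE_POST_MHZ = 3.7   # average CPU load for pre/post-processing
MACC = 517e3         # model complexity per inference
RATE = 1.0           # classifications per second

# Derived: how many CPU cycles each multiply-accumulate costs on average.
cycles_per_macc = NN_LOAD_MHZ * 1e6 / (MACC * RATE)

# Derived: fraction of the 80 MHz budget spent on the whole ASC pipeline,
# leaving the rest for sleep and other tasks (hence the low-power fit).
duty_cycle = (NN_LOAD_MHZ + PRE_POST_MHZ) / CPU_MHZ
```

Under these assumptions the quantized network costs roughly 11–12 cycles per MACC, and the full pipeline keeps the CPU busy only about 12% of the time.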