Machines interact with their environment by emitting a variety of signals. These signals are a rich source of information about equipment health, and being able to interpret them opens up significant optimization opportunities. For example, before an anomaly or failure occurs, a machine typically generates slightly abnormal vibration patterns, audible noise, and ultrasound. By placing a sensor on the machine, we can monitor its activity, and with Artificial Intelligence we can build a smart solution that detects these abnormalities. As a result, the equipment can automatically classify the criticality of an anomaly and send a relevant alert to the maintenance team. We implemented this approach on a fan motor for demonstration purposes, but it can easily be adapted to many industrial machines.
The ultrasound-based classification model takes almost one second of data, preprocesses it into mel-frequency cepstral coefficients (MFCC), and feeds the result to a pre-trained neural network trained on four classes: [ ‘Off’, ‘Normal’, ‘Clogging’, ‘Friction’ ]. Audio is captured at 192 kHz (16-bit, 1 channel) using the on-board analog MEMS microphone. Every millisecond, a DMA interrupt delivers the last 192 audio samples. These samples are accumulated into a 4096-sample buffer with no overlap, which is injected every 21.33 ms into the USC preprocessing stage for feature extraction. The USC preprocessing extracts the ultrasound features: an MFCC (46×32) spectrogram.
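As a sanity check on these numbers, a short Python sketch (constants taken directly from the figures above) confirms that a 4096-sample buffer at 192 kHz spans about 21.33 ms, and that 46 such buffers cover roughly the one-second window:

```python
# Buffer timing for the USC front end (values taken from the text above).
SAMPLE_RATE_HZ = 192_000   # MEMS microphone sampling rate
SAMPLES_PER_DMA = 192      # samples delivered per 1 ms DMA interrupt
BUFFER_SAMPLES = 4096      # non-overlapping analysis buffer
N_COLUMNS = 46             # MFCC columns per spectrogram

buffer_period_ms = 1000 * BUFFER_SAMPLES / SAMPLE_RATE_HZ
window_ms = N_COLUMNS * buffer_period_ms

print(f"buffer period: {buffer_period_ms:.2f} ms")   # ~21.33 ms
print(f"spectrogram window: {window_ms:.1f} ms")     # ~981 ms
```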
For computational efficiency and memory optimization, this step is divided into two routines:
– The first routine computes one of the 46 MFCC columns, transforming the time-domain input signal to the Mel scale using an FFT and a filter bank (32 mel bands).
– The second routine, once all 46 columns have been computed (i.e., after 981 ms), applies log scaling to the mel-scaled spectrogram, producing the input feature for the USC convolutional neural network.
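To illustrate the mel-scale mapping used by the first routine, here is a minimal Python sketch that computes the edge frequencies of 32 mel bands. It assumes, hypothetically, an analysis band of 0 to 96 kHz (the Nyquist frequency at 192 kHz sampling) and the standard HTK mel formula; the actual band limits used by the firmware are not stated in the text:

```python
import math

def hz_to_mel(f_hz):
    # Standard HTK mel-scale conversion.
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    # Inverse of hz_to_mel.
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_edges(n_bands, f_min_hz, f_max_hz):
    # n_bands triangular filters need n_bands + 2 edge frequencies,
    # evenly spaced on the mel scale.
    lo, hi = hz_to_mel(f_min_hz), hz_to_mel(f_max_hz)
    return [mel_to_hz(lo + i * (hi - lo) / (n_bands + 1))
            for i in range(n_bands + 2)]

edges = mel_band_edges(32, 0.0, 96_000.0)  # hypothetical 0-96 kHz band
print(len(edges))  # 34 edge frequencies define 32 triangular bands
```

Each triangular filter spans three consecutive edges, so the bands become progressively wider toward high frequencies, which is what compresses the 2048-bin FFT output down to 32 mel values per column.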
Every 981 ms, the (46×32) MFCC spectrogram is fed to the input of the USC convolutional neural network. The model classifies anomalies among four classes: [ ‘Off’, ‘Normal’, ‘Clogging’, ‘Friction’ ]. This model was created for a USB fan running at maximum speed and does not perform well when tested at other speeds.
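Downstream of the network, the class scores can be reduced to a label and an alert decision. A minimal Python sketch follows; the class names come from the text, while the anomaly grouping, softmax scores, and confidence threshold are hypothetical:

```python
CLASSES = ["Off", "Normal", "Clogging", "Friction"]
ANOMALY_CLASSES = {"Clogging", "Friction"}  # hypothetical grouping

def classify(scores, threshold=0.6):
    # Pick the highest-scoring class; raise an alert only for
    # anomaly classes whose score exceeds the confidence threshold.
    idx = max(range(len(scores)), key=scores.__getitem__)
    label = CLASSES[idx]
    alert = label in ANOMALY_CLASSES and scores[idx] >= threshold
    return label, alert

print(classify([0.05, 0.10, 0.80, 0.05]))  # ('Clogging', True)
print(classify([0.02, 0.95, 0.02, 0.01]))  # ('Normal', False)
```

In the demo described here, the alert would be forwarded to the maintenance team rather than printed.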
Analog MEMS microphone with frequency response up to 80 kHz (reference: IMP23ABSU)
Data format: 2–3 hours of data recorded in various conditions, balanced among the four classes, at a fixed speed of 1500 rpm.
Model: ST Convolutional Neural Network Quantized
Input size: 46×32
Complexity: 565 K MACC
163 KB Flash for weights
74 KB RAM for activations
Performance on STM32L4R9 (Low Power) @ 120 MHz:
Pre-processing: 24 MHz; 46 MFCC column computations per second, 4.2 ms per column
NN processing: 10 MHz; 1 inference per second, 78 ms per inference
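These figures are mutually consistent. A quick Python check, using only the numbers quoted above, shows that the pre-processing and NN budgets correspond to roughly 20% and 8% CPU load on the 120 MHz core:

```python
CPU_MHZ = 120

# Pre-processing: 46 columns per second at 4.2 ms each.
pre_busy_ms_per_s = 46 * 4.2            # ~193 ms of work per second
pre_load = pre_busy_ms_per_s / 1000     # ~0.19, vs quoted 24 MHz / 120 MHz = 0.20

# NN: 1 inference per second at 78 ms each.
nn_load = 78 / 1000                     # 0.078, vs quoted 10 MHz / 120 MHz ~ 0.083

print(f"pre-processing load: {pre_load:.1%} (quoted budget: {24 / CPU_MHZ:.1%})")
print(f"NN load: {nn_load:.1%} (quoted budget: {10 / CPU_MHZ:.1%})")
```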