Real-time fault detection in multirotor UAVs using lightweight deep learning and high-fidelity simulation data with single and double fault magnitudes
Journal article   Open access   Peer reviewed

Md Najmul Mowla, Davood Asadi and Ferdous Sohel
Complex & Intelligent Systems, Vol. 12(2), 62
2026
CC BY 4.0 Open Access

Abstract

Keywords: Deep learning · Fault detection · High-fidelity simulation · Loss function optimization · Motor fault analysis · Multirotor UAV
Robust fault detection and diagnosis (FDD) in multirotor unmanned aerial vehicles (UAVs) remains challenging due to limited actuator redundancy, nonlinear dynamics, and environmental disturbances. This work introduces two lightweight deep learning architectures: the Convolutional-LSTM Fault Detection Network (CLFDNet), which combines multi-scale one-dimensional convolutional neural networks (1D-CNNs), long short-term memory (LSTM) units, and an adaptive attention mechanism for spatio-temporal fault feature extraction; and the Autoencoder LSTM Multi-loss Fusion Network (AELMFNet), a soft attention-enhanced LSTM autoencoder optimized via multi-loss fusion for fine-grained fault severity estimation. Both models are trained and evaluated on UAV-Fault Magnitude V1, a high-fidelity simulation dataset of 114,230 labeled samples covering motor degradation levels from 5% to 40% across the take-off, hover, navigation, and descent phases, representing the most probable and recoverable fault scenarios in quadrotor UAVs. The inclusion of coupled (double) faults enables the models to learn correlated degradation patterns and actuator interactions while controllability is maintained under standard flight laws. CLFDNet achieves 96.81% precision in fault severity classification and 100% accuracy in motor fault localization with only 19.6K parameters, demonstrating its suitability for real-time onboard applications. AELMFNet attains the lowest reconstruction loss of 0.001 with the Huber loss and an inference latency of 6 ms/step, underscoring its efficiency for embedded deployment. Comparative experiments against 15 baselines, comprising five classical machine learning models, five state-of-the-art fault detection methods, and five attention-based deep learning variants, validate the effectiveness of the proposed architectures. These findings confirm that lightweight deep models enable accurate and efficient diagnosis of UAV faults with minimal sensing.
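The CLFDNet pipeline described in the abstract (multi-scale 1D convolution over sensor channels followed by attention-weighted temporal pooling) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the channel count (4), kernel sizes (3 and 7), filter width (8), random weights, and the helper names `conv1d` and `attention_pool` are all hypothetical choices made only to show the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernel):
    # Valid 1D convolution: x is (T, C_in), kernel is (k, C_in, C_out),
    # output is (T - k + 1, C_out).
    k = kernel.shape[0]
    return np.stack([
        np.tensordot(x[t:t + k], kernel, axes=([0, 1], [0, 1]))
        for t in range(x.shape[0] - k + 1)
    ])

def attention_pool(h, w):
    # Soft attention over time: score each timestep with vector w,
    # softmax the scores, and return the weighted sum of features.
    scores = h @ w
    a = np.exp(scores - scores.max())
    a /= a.sum()
    return a @ h  # (d,)

# Toy input: 100 timesteps of 4 motor-related channels (hypothetical).
x = rng.normal(size=(100, 4))

# Two multi-scale branches with kernel sizes 3 and 7, 8 filters each.
feats = [conv1d(x, rng.normal(size=(k, 4, 8)) * 0.1) for k in (3, 7)]

# Crop branches to a common length and concatenate channel-wise.
t_min = min(f.shape[0] for f in feats)
h = np.concatenate([f[:t_min] for f in feats], axis=1)  # (t_min, 16)

# Attention-pooled spatio-temporal feature vector.
z = attention_pool(np.tanh(h), rng.normal(size=h.shape[1]))
print(z.shape)  # prints (16,)
```

In the full model, the pooled vector would feed an LSTM stack and a small classification head for fault severity and motor localization; the sketch only shows how convolution at multiple kernel sizes plus attention pooling collapses a multi-channel time series into a fixed-size feature.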

Details

UN Sustainable Development Goals (SDGs)

This output has contributed to the advancement of the following goals:

#11 Sustainable Cities and Communities

Source: InCites

Metrics

1 file view/download
4 record views

InCites Highlights

These are selected metrics from the InCites Benchmarking & Analytics tool related to this output.

Collaboration types
Domestic collaboration
International collaboration
Citation topics
4 Electrical Engineering, Electronics & Computer Science
4.13 Telecommunications
4.13.2202 UAV Communications
Web of Science research areas
Computer Science, Artificial Intelligence
ESI research areas
Computer Science