Failure Prognostic of Turbofan Engines with Uncertainty Quantification and Explainable AI (XAI)


Ahmad Kamal Mohd Nor

Abstract

Deep learning is quickly becoming essential to the human ecosystem. However, the opacity of certain deep learning models poses a legal barrier to their adoption for high-stakes purposes. Explainable AI (XAI) is a recent paradigm intended to tackle this issue: it explains the predictions produced by black-box AI models, making it extremely valuable for safety-, security-, or financially critical decision making. Moreover, most deep learning studies produce point-estimate predictions with no measure of uncertainty, which is vital for decision making; such works are therefore ill-suited to real-world applications. This paper presents a Remaining Useful Life (RUL) estimation method for turbofan engines equipped with both prognostic explainability and uncertainty quantification. A single-input, multi-output probabilistic Long Short-Term Memory (LSTM) network is employed to predict the RUL distribution of the turbofans, and the SHapley Additive exPlanations (SHAP) approach is applied to explain the prognostic produced. The explainable probabilistic LSTM is thus able to express its confidence in its predictions and to explain the estimations it produces. The performance of the proposed method is comparable to that of several other published works.
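The abstract describes a network that predicts a distribution over RUL rather than a point estimate. A common way to train such a multi-output probabilistic model is to have it emit a mean and a log-variance per sample and minimise the heteroscedastic Gaussian negative log-likelihood. The sketch below illustrates that loss; the paper's exact loss function and network details are not given here, so this formulation is an assumption, not the authors' implementation.

```python
import math

def gaussian_nll(mu, log_var, y):
    """Heteroscedastic Gaussian negative log-likelihood.

    mu      -- predicted mean RULs
    log_var -- predicted log predictive variances (one per sample)
    y       -- true RULs
    Minimising this loss trains a model to report both a point
    estimate and how uncertain it is about that estimate.
    """
    total = 0.0
    for m, lv, t in zip(mu, log_var, y):
        var = math.exp(lv)
        total += 0.5 * (math.log(2 * math.pi * var) + (t - m) ** 2 / var)
    return total / len(y)

# Toy check with hypothetical RULs (in engine cycles): for the same
# prediction errors, a confidently wrong model (small variance) is
# penalised more than one that admits its uncertainty.
y = [100.0, 80.0, 60.0]      # true RULs
mu = [95.0, 85.0, 55.0]      # predicted means (5-cycle errors)
confident = gaussian_nll(mu, [0.0, 0.0, 0.0], y)  # sigma^2 = 1
cautious = gaussian_nll(mu, [4.0, 4.0, 4.0], y)   # sigma^2 ~ 54.6
```

Predicting a log-variance (rather than the variance directly) is a standard device that keeps the variance positive without constraining the network's output.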


Article Details

How to Cite
Nor, A. K. M. (2021). Failure Prognostic of Turbofan Engines with Uncertainty Quantification and Explainable AI (XAI). Turkish Journal of Computer and Mathematics Education (TURCOMAT), 12(3), 3494–3504. Retrieved from https://www.turcomat.org/index.php/turkbilmat/article/view/1624
Section
Research Articles