This study presents a novel framework that integrates explainable artificial intelligence (XAI) to enhance the classification of aircraft flight states in the context of pilot training. By combining the probabilistic outputs of AI models, the approach distinguishes high-confidence from low-confidence predictions. The Shapley Additive Explanations (SHAP) technique is then employed to uncover feature-level insights.
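As a concrete illustration of this pipeline, the following minimal Python sketch flags low-confidence predictions by thresholding the maximum class probability and then applies SHAP to the flagged samples. The random-forest classifier, the synthetic data, and the 0.7 confidence threshold are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch: confidence-based flagging plus SHAP attribution.
# Assumptions (not from the study): a random-forest classifier on
# synthetic tabular data, and a 0.7 confidence threshold.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: rows are flight records, columns are sensor-derived features.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Probabilistic outputs: the maximum class probability serves as a
# per-sample confidence score.
proba = model.predict_proba(X_test)
confidence = proba.max(axis=1)

THRESHOLD = 0.7  # assumed cut-off separating high- from low-confidence
ambiguous = confidence < THRESHOLD
print(f"{ambiguous.sum()} of {len(X_test)} predictions flagged as ambiguous")

# SHAP attributions for the flagged (ambiguous) samples only.
explainer = shap.TreeExplainer(model)
explanation = explainer(X_test[ambiguous])

# explanation.values has shape (n_samples, n_features, n_classes) here;
# averaging absolute attributions over samples and classes ranks the
# features most influential in the ambiguous predictions.
importance = np.abs(explanation.values).mean(axis=(0, 2))
for i in np.argsort(importance)[::-1]:
    print(f"feature_{i}: {importance[i]:.4f}")
```

In the study's setting, the flagged samples would correspond to ambiguous flight states, and the ranked attributions would indicate which flight parameters drive their classification.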
The main contributions of this study are to show how the model identifies ambiguous flight states and to determine the features most influential in classifying those states. The study further demonstrates how interpretable AI can support the development of intelligent, data-driven pilot training systems grounded in real flight data and model transparency.
