Adopting Artificial Intelligence (AI) models for financial applications presents significant challenges, as this domain demands high social and ethical standards. In such contexts, besides model accuracy, explainability has emerged as a key factor in building trustworthy AI systems. Widely used Explainable AI (XAI) methods, including LIME and SHAP, are increasingly applied to real-world problems to enhance model interpretability. Beyond generating explanations, XAI outputs can also support a common challenge in applying AI models: feature selection. By leveraging feature importance metrics such as SHAP values, high-impact and low-impact features can be distinguished, thereby guiding selection decisions. This process not only reduces unnecessary computational and resource consumption but can also improve the overall predictive accuracy of the model.
Evaluating Feature Importance Metrics Using XAI
Evaluating the contribution of features in an AI model involves estimating a quantitative score representing how each input feature affects the model's predictions. Among XAI techniques, SHAP-based methods provide such a measure through SHAP values, which represent a feature's marginal contribution to a prediction. Specifically, a SHAP value quantifies the average change in the model's output when a feature is added to all possible subsets of the other features. This allows assessment of both the magnitude and direction of a feature's impact: a positive SHAP value indicates that the feature increases the model's prediction, while a negative value indicates that it decreases the prediction.
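As a concrete illustration, the minimal sketch below estimates SHAP values for a simple regression model with the open-source shap library. The synthetic dataset and the random-forest model are illustrative assumptions, not the setup used in the work described later.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic tabular data: only the first two of five features actually drive the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Kernel SHAP estimates each feature's marginal contribution by evaluating the model
# on coalitions of features, using a background sample to stand in for "absent" features.
explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:20])   # one row of SHAP values per explained sample

# Positive values push a prediction above the baseline (expected value); negative values push it below.
print("baseline:", explainer.expected_value)
print("SHAP matrix shape:", shap_values.shape)  # (20, 5)
```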
Adopting XAI-Based Feature Importance for Feature Selection
Feature selection, which involves removing irrelevant or redundant features, is a critical step in data preprocessing to improve machine learning performance. A common approach is the filter method, which selects features based on their importance scores. Applying feature importance computed by XAI for feature selection is both simple and effective: retain the smallest subset of features whose cumulative importance reaches a predefined threshold of the total contribution of all features. However, the reliability of this approach depends on the quality of the feature importance estimates from XAI. Many XAI techniques provide local explanations that reflect feature contributions for individual predictions, which may not be sufficient for global feature selection. Therefore, choosing an XAI method capable of providing global explanations is crucial. In this context, Kernel SHAP is particularly appropriate [1].
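A minimal sketch of this filter-style selection is given below: it ranks features by mean absolute SHAP value and keeps the smallest subset whose cumulative share of total importance reaches a chosen threshold. The threshold value and the stand-in SHAP matrix are illustrative assumptions; in practice the matrix would come from a Kernel SHAP explainer, as in the previous sketch.

```python
import numpy as np

def select_by_cumulative_shap(shap_values, threshold=0.8):
    """Return indices of the smallest feature subset covering `threshold` of total importance."""
    importance = np.abs(shap_values).mean(axis=0)             # global importance per feature
    order = np.argsort(importance)[::-1]                      # most important first
    cumulative = np.cumsum(importance[order]) / importance.sum()
    n_keep = int(np.searchsorted(cumulative, threshold)) + 1  # smallest prefix reaching the threshold
    return order[:n_keep]

# Stand-in SHAP matrix (samples x features); replace with real explainer output.
shap_values = np.random.default_rng(1).normal(size=(100, 8))
selected = select_by_cumulative_shap(shap_values, threshold=0.8)
print("retained feature indices:", selected)
```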
Application Case: Enhancing Cloud Cost Forecasting with XAI
In our recent research presented at ICTAI 2025 (37th International Conference on Tools with Artificial Intelligence) [1], we explored the use of XAI to improve cloud cost forecasting. In our experiments, we retained only the features with the highest SHAP values, selecting a subset whose cumulative contribution accounted for 80% of the total feature importance. The results demonstrated that the simplified models, including CNN, LSTM, CNN-LSTM, and CNN-LSTM with Attention, achieved improved accuracy across all evaluation metrics. However, models integrated with a dropout mechanism occasionally showed a slight increase in prediction errors. This may result from excessive feature elimination, since these models already include a mechanism that deactivates redundant parameters associated with less informative features. Consequently, applying an additional feature reduction step may lead to a loss of useful information, thereby degrading model accuracy.
Conclusion
The advantages of XAI extend beyond simply providing explanations. Recent studies have highlighted its potential for feature selection, which can enhance model accuracy and efficiency. XAI is a rapidly evolving field that continues to attract significant attention. With the rise of Generative AI, new research directions are emerging. Soon, the integration of XAI and GenAI could enable models to intuitively explain their inner workings and prediction processes, making AI more accessible even to non-technical users [2].
References
[1] Ha Nhi Ngo, Mouna Ben Mabrouk, and Ines Ben Kraiem. "Enhancing Cloud Cost Forecasting with Explainable Artificial Intelligence." In Proceedings of the 2025 37th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2025.
[2] Jonas Bokstaller et al. "Enhancing ML Model Interpretability: Leveraging Fine-Tuned Large Language Models for Better Understanding of AI." arXiv preprint arXiv:2505.02859 (2025).