Fairness And Bias Mitigation in AI Models for Diabetes Diagnosis: A Comparative Evaluation of Algorithmic Approaches

Authors

  • Muhammad Danial Haikal Mohd Hamdan, Faculty of Computer Science and Mathematics, Universiti Teknologi MARA Johor Branch, Pasir Gudang Campus, 81750 Masai, Johor, Malaysia
  • Mohamad Faizal Ab Jabal, Faculty of Computer Science and Mathematics, Universiti Teknologi MARA Johor Branch, Pasir Gudang Campus, 81750 Masai, Johor, Malaysia; Applied Mathematics & System Development (AMSys) Special Interest Group (SIG), Universiti Teknologi MARA Johor Branch, Pasir Gudang Campus, 81750 Masai, Johor, Malaysia. https://orcid.org/0000-0002-1137-0088
  • Shuzlina Abdul Rahman, Faculty of Computer Science and Mathematics, Universiti Teknologi MARA Selangor Branch, Shah Alam Campus, 40450 Shah Alam, Selangor, Malaysia
  • Azyan Yusra Kapi, Faculty of Computer Science and Mathematics, Universiti Teknologi MARA Johor Branch, Pasir Gudang Campus, 81750 Masai, Johor, Malaysia

DOI:

https://doi.org/10.54554/jtec.2025.17.03.004

Keywords:

Diabetes Diagnosis, Bias Mitigation, Healthcare Predictive Modelling

Abstract

Bias in AI-driven diagnostic models has raised serious concerns regarding fairness in healthcare delivery, particularly for chronic diseases like diabetes. This study investigates algorithmic bias in diabetes prediction models and evaluates the effectiveness of three fairness-aware approaches: Fairness-Aware Interpretable Modelling (FAIM), Fairness-Aware Machine Learning (FAML), and Fairness-Aware Oversampling (FAWOS). The same dataset and experimental setup were used to ensure a fair comparison across models. FAIM employs interpretable decision trees to enhance transparency but lacks explicit fairness mechanisms. FAML incorporates adversarial fairness constraints, achieving perfect fairness metrics while maintaining acceptable accuracy. FAWOS addresses class imbalance using SMOTE, improving overall classification accuracy without enforcing fairness. Results show that while each method has strengths, none independently achieves an optimal balance of accuracy, fairness, and interpretability. Therefore, this paper proposes a hybrid approach that integrates multiple bias mitigation strategies to support fairer and more reliable AI applications in clinical settings. This study contributes a structured comparative evaluation framework and offers actionable insights for the development of ethical AI models in healthcare diagnostics.
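The abstract describes the three approaches only at a high level. As a rough illustration of how such a pipeline might be assembled, the sketch below combines SMOTE-style oversampling (as in FAWOS), an interpretable decision tree (as in FAIM), and a simple group-fairness check in the spirit of the fairness metrics the study reports. The dataset file, column names, and the age-based sensitive group are illustrative assumptions, not the authors' actual experimental setup.

```python
# Minimal, hypothetical sketch: SMOTE oversampling + interpretable decision tree
# + a demographic-parity check. Not the authors' implementation.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Assumed diabetes dataset with a binary "Outcome" label (e.g., the Pima-style CSV).
df = pd.read_csv("diabetes.csv")
X = df.drop(columns=["Outcome"])
y = df["Outcome"]
sensitive = (df["Age"] >= 50).astype(int)  # illustrative sensitive-group split by age

X_train, X_test, y_train, y_test, s_train, s_test = train_test_split(
    X, y, sensitive, test_size=0.3, stratify=y, random_state=42
)

# FAWOS-like step: rebalance the minority class with SMOTE before training.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

# FAIM-like step: a shallow, interpretable decision tree classifier.
clf = DecisionTreeClassifier(max_depth=4, random_state=42).fit(X_res, y_res)
y_pred = clf.predict(X_test)

# Simple fairness check: difference in positive prediction rates between groups.
s_arr = s_test.to_numpy()
rate_g1 = y_pred[s_arr == 1].mean()
rate_g0 = y_pred[s_arr == 0].mean()
print("Accuracy:", (y_pred == y_test.to_numpy()).mean())
print("Demographic parity difference:", abs(rate_g1 - rate_g0))
```

A hybrid approach of the kind the paper proposes would extend a pipeline like this with an explicit fairness constraint (e.g., the adversarial mechanism attributed to FAML) rather than relying on oversampling and interpretability alone.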

Published

2025-09-30

How to Cite

Mohd Hamdan, M. D. H., Ab Jabal, M. F., Abdul Rahman, S., & Kapi, A. Y. (2025). Fairness And Bias Mitigation in AI Models for Diabetes Diagnosis: A Comparative Evaluation of Algorithmic Approaches. Journal of Telecommunication, Electronic and Computer Engineering (JTEC), 17(3), 27–34. https://doi.org/10.54554/jtec.2025.17.03.004