Fairness And Bias Mitigation in AI Models for Diabetes Diagnosis: A Comparative Evaluation of Algorithmic Approaches
DOI: https://doi.org/10.54554/jtec.2025.17.03.004

Keywords: Diabetes Diagnosis, Bias Mitigation, Healthcare Predictive Modelling

Abstract
Bias in AI-driven diagnostic models has raised serious concerns regarding fairness in healthcare delivery, particularly for chronic diseases like diabetes. This study investigates algorithmic bias in diabetes prediction models and evaluates the effectiveness of three fairness-aware approaches: Fairness-Aware Interpretable Modelling (FAIM), Fairness-Aware Machine Learning (FAML), and Fairness-Aware Oversampling (FAWOS). The same dataset and experimental setup were used to ensure a fair comparison across models. FAIM employs interpretable decision trees to enhance transparency but lacks explicit fairness mechanisms. FAML incorporates adversarial fairness constraints, achieving perfect fairness metrics while maintaining acceptable accuracy. FAWOS addresses class imbalance using SMOTE, improving overall classification accuracy without enforcing fairness. Results show that while each method has strengths, none independently achieves an optimal balance of accuracy, fairness, and interpretability. Therefore, this paper proposes a hybrid approach that integrates multiple bias mitigation strategies to support fairer and more reliable AI applications in clinical settings. This study contributes a structured comparative evaluation framework and offers actionable insights for the development of ethical AI models in healthcare diagnostics.
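
The abstract names three concrete mechanisms: interpretable decision trees (FAIM), adversarial fairness constraints (FAML), and SMOTE-based oversampling (FAWOS), evaluated with accuracy and fairness metrics. The sketch below is an illustrative, hypothetical example (not the authors' code) of how such a comparison could be assembled: it trains a shallow decision tree on a synthetic stand-in dataset, applies SMOTE-style rebalancing, and reports accuracy alongside two common group-fairness metrics. The dataset, the sensitive attribute, and the specific fairness metrics are assumptions, since the abstract does not specify them.

```python
# Illustrative sketch only: an interpretable decision tree on a synthetic
# diabetes-like dataset, SMOTE rebalancing, and two group-fairness metrics.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE  # assumed available; FAWOS-style step

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's diabetes data (the real dataset is not
# named in the abstract); the last column is a hypothetical sensitive attribute.
X, y = make_classification(n_samples=2000, n_features=8, weights=[0.7, 0.3],
                           random_state=0)
sensitive = rng.integers(0, 2, size=len(y))   # e.g. a binarised demographic group
X = np.column_stack([X, sensitive])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# FAWOS-style step (simplified): oversample the minority class before training.
X_tr_bal, y_tr_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# FAIM-style step (simplified): an interpretable, shallow decision tree.
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr_bal, y_tr_bal)
y_hat = clf.predict(X_te)
group = X_te[:, -1].astype(int)

def demographic_parity_diff(y_pred, a):
    """|P(y_hat = 1 | a = 0) - P(y_hat = 1 | a = 1)|"""
    return abs(y_pred[a == 0].mean() - y_pred[a == 1].mean())

def equal_opportunity_diff(y_true, y_pred, a):
    """|TPR of group 0 - TPR of group 1|"""
    tpr = lambda g: y_pred[(a == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

print(f"accuracy                 : {accuracy_score(y_te, y_hat):.3f}")
print(f"demographic parity diff. : {demographic_parity_diff(y_hat, group):.3f}")
print(f"equal opportunity diff.  : {equal_opportunity_diff(y_te, y_hat, group):.3f}")
```

In this kind of setup, a fairness-constrained learner such as FAML would be swapped in at the model-training step, and the same metric functions would be reused so the three approaches are scored on identical test data.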
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.