Journal of Medicine and Palliative Care
Volume: 4, Issue: 2

How to explain a machine learning model: HbA1c classification example

Authors: Deniz TOPCU
Pages: 117-125
DOI: 10.47582/jompac.1259507
Views: 13 | Downloads: 14
Publication Date: 2023-03-27
Article Type: Research Paper
Abstract: Aim: Machine learning (ML) tools have various applications in healthcare. However, the implementation of developed models is still limited because of various challenges. One of the most important problems is the lack of explainability of machine learning models. Explainability refers to the capacity to reveal the reasoning and logic behind the decisions made by AI systems, making it straightforward for human users to understand how the system arrived at a specific outcome. The study aimed to compare the performance of different model-agnostic explanation methods using two different ML models created for HbA1c classification. Material and Method: The H2O AutoML engine was used to develop two ML models, a gradient boosting machine (GBM) and a distributed random forest (DRF), using 3,036 records from the NHANES open data set. Both global and local model-agnostic explanation methods, including performance metrics, feature importance analysis, and partial dependence, break-down, and Shapley additive explanations (SHAP) plots, were applied to the developed models. Results: While the GBM and DRF models had similar performance metrics, such as mean per class error and area under the receiver operating characteristic curve, they showed slightly different variable importance rankings. Local explainability methods also attributed different contributions to the features. Conclusion: This study evaluated the significance of explainable machine learning techniques for understanding complicated models and their role in incorporating AI into healthcare. The results indicate that although current explainability methods have limitations, particularly for clinical use, both global and local explanation methods offer a way to evaluate a model and can be used to improve or compare models.
Keywords: Machine learning, explainable artificial intelligence, glycated hemoglobin
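For readers who want to reproduce the general workflow described in the abstract, the following is a minimal sketch using H2O's Python API. The file name "nhanes_hba1c.csv", the target column "hba1c_class", the split ratio, and the AutoML settings are illustrative assumptions, not details taken from the paper.

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Hypothetical file and column names: the paper's exact NHANES
# extract and preprocessing are not specified on this page.
df = h2o.import_file("nhanes_hba1c.csv")
y = "hba1c_class"                       # assumed target column
x = [c for c in df.columns if c != y]
df[y] = df[y].asfactor()                # treat HbA1c class as categorical
train, test = df.split_frame(ratios=[0.8], seed=42)

# Restrict AutoML to the two model families compared in the study
aml = H2OAutoML(include_algos=["GBM", "DRF"], max_models=10, seed=42)
aml.train(x=x, y=y, training_frame=train)
print(aml.leaderboard)

# The performance summary includes mean per class error and AUC,
# the metrics the abstract reports
best = aml.leader
perf = best.model_performance(test)
print(perf)
```

Restricting include_algos to GBM and DRF mirrors the two algorithms the abstract names; everything else about the run (model count, seed, split) is a placeholder.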

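The explanation step can be sketched the same way, continuing from the objects defined above. h2o.explain() covers the global methods the abstract lists (variable importance, SHAP summary, partial dependence), and h2o.explain_row() gives a local, per-record view. Note that the break-down plots mentioned in the abstract come from a separate toolkit (e.g., the DALEX ecosystem) and are not produced by these calls.

```python
# Continues from the training sketch above (best, test defined there).

# Global explanations: variable importance, SHAP summary, and
# partial dependence plots for the given model and frame
h2o.explain(best, test)

# Local explanation for one record (row 0 is illustrative):
# per-feature contributions to that single prediction
h2o.explain_row(best, test, row_index=0)

# Raw SHAP-style contribution values as a frame; supported for
# H2O tree-based models such as GBM
contribs = best.predict_contributions(test)
print(contribs.head())
```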

* There may have been changes to the journal, article, conference, book, or preprint information; it is therefore best to consult the official page of the source. The information here is shared for informational purposes. IAD is not responsible for incorrect or missing information.


Index of Academic Documents
İzmir Academy Association
Copyright © 2023-2025