Prediction Of Transfusion Based On Machine Learning

Document Type : Primary Research paper

Authors

1 Enterprise Architect, Information Technology, UST-Global, Inc., Ohio, USA

2 Director of Sales, Career Soft Solutions Inc, 145 Talmadge Rd, Edison, NJ 08817, Middlesex, USA

Abstract

The ability to anticipate transfusions during a hospital stay may allow more efficient blood supply management and increase patient safety by ensuring a sufficient supply of red blood cells (RBCs) for a given patient. We therefore tested the accuracy of four machine learning–based prediction algorithms for predicting transfusion, large transfusion, and the number of transfusions in hospitalized patients. A retrospective observational study was conducted at three adult tertiary care institutions in Western Australia between January 2008 and June 2017. The primary outcome measures for the classification tasks were the area under the receiver operating characteristic curve (AUROC), the F1 score, and the average precision of the four machine learning algorithms: artificial neural networks (NN), logistic regression (LR), random forests (RF), and gradient boosting (GB) trees. Transfusion of at least 1 unit of RBCs could be predicted with good accuracy by the four prediction models (sensitivity for NN, LR, RF, and GB: 0.898, 0.894, 0.584, and 0.872, respectively; specificity: 0.958, 0.966, 0.964, and 0.965). The four approaches were less successful at predicting large transfusion (sensitivity for NN, LR, RF, and GB: 0.780, 0.721, 0.002, and 0.797, respectively; specificity: 0.994, 0.995, 0.993, and 0.995). The total number of packed RBC units transfused was likewise predicted with poor accuracy. This study shows that the need for in-hospital transfusion can be predicted with reasonable accuracy, but that the number of RBC units transfused during a hospital stay is more difficult to predict.
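The comparison described above — four classifiers evaluated on a binary transfusion outcome with AUROC, F1, average precision, sensitivity, and specificity — can be sketched with scikit-learn. This is an illustrative sketch only, not the authors' code: the real patient data are not public, so a synthetic, class-imbalanced dataset stands in, and all model hyperparameters are assumptions.

```python
# Illustrative sketch (not the study's actual pipeline): fit the four
# classifier families named in the abstract on synthetic data and report
# the metrics the study uses as outcome measures.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (average_precision_score, confusion_matrix,
                             f1_score, roc_auc_score)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the clinical data: ~10% positive class, mimicking
# the imbalance of a transfusion outcome.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "NN": MLPClassifier(max_iter=500, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=0),
    "GB": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]   # scores for AUROC / avg. precision
    pred = model.predict(X_te)               # hard labels for F1 / sens / spec
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    sensitivity = tp / (tp + fn)  # recall on the positive (transfused) class
    specificity = tn / (tn + fp)  # recall on the negative class
    print(f"{name}: AUROC={roc_auc_score(y_te, prob):.3f} "
          f"F1={f1_score(y_te, pred):.3f} "
          f"AP={average_precision_score(y_te, prob):.3f} "
          f"Sens={sensitivity:.3f} Spec={specificity:.3f}")
```

On imbalanced outcomes such as transfusion, average precision and the sensitivity/specificity pair are more informative than raw accuracy, which is why the study reports them alongside AUROC.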

Keywords