Medical College of Wisconsin

Applying Large Language Models for Surgical Case Length Prediction. JAMA Surg 2025 Aug 01;160(8):894-902

Date

07/09/2025

Pubmed ID

40632526

Pubmed Central ID

PMC12242817

DOI

10.1001/jamasurg.2025.2154

Scopus ID

2-s2.0-105013527436

Abstract

IMPORTANCE: Accurate prediction of surgical case duration is critical for operating room (OR) management, as inefficient scheduling can lead to reduced patient and surgeon satisfaction while incurring considerable financial costs.

OBJECTIVE: To evaluate the feasibility and accuracy of large language models (LLMs) in predicting surgical case length using unstructured clinical data compared with existing estimation methods.

DESIGN, SETTING, AND PARTICIPANTS: This was a retrospective study analyzing elective surgical cases performed between January 2017 and December 2023 at a single academic medical center and affiliated community hospital ORs. Analysis included 125 493 eligible surgical cases, with 1950 used for LLM fine-tuning and 2500 for evaluation. An additional 500 cases from a community site were used for external validation. Cases were randomly sampled within strata to ensure representation across surgical specialties.

EXPOSURES: Eleven LLMs, including base models (GPT-4, GPT-3.5, Mistral, Llama-3, Phi-3) and 2 fine-tuned variants (GPT-4 fine-tuned, GPT-3.5 fine-tuned), were used to predict surgical case length based on clinical notes.

MAIN OUTCOMES AND MEASURES: The primary outcome was average error between predicted and actual surgical case length (wheels-in to wheels-out time). The secondary outcome was prediction accuracy, defined as predicted length within 20% of actual duration.
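The two outcome measures defined above can be sketched in a few lines; this is an illustrative computation only, with hypothetical durations and function names not taken from the study.

```python
# Sketch of the study's two outcome metrics (illustrative; data are hypothetical).

def mean_absolute_error(actual, predicted):
    """Mean absolute error (minutes) between actual and predicted case lengths."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def accuracy_within_20pct(actual, predicted):
    """Fraction of cases whose prediction falls within 20% of the actual duration."""
    hits = sum(1 for a, p in zip(actual, predicted) if abs(a - p) <= 0.2 * a)
    return hits / len(actual)

# Hypothetical wheels-in to wheels-out durations, in minutes
actual = [120, 90, 240, 60]
predicted = [100, 95, 200, 80]

print(mean_absolute_error(actual, predicted))    # 21.25
print(accuracy_within_20pct(actual, predicted))  # 0.75
```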

RESULTS: Fine-tuned GPT-4 achieved the best performance with a mean absolute error (MAE) of 47.64 minutes (95% CI, 45.71-49.56) and R2 of 0.61, matching the performance of current OR scheduling (MAE, 49.34 minutes; 95% CI, 47.60-51.09; R2, 0.63; P = .10). Both GPT-4 fine-tuned and GPT-3.5 fine-tuned significantly outperformed current scheduling methods in accuracy (46.12% and 46.08% vs 40.92%, respectively; P < .001). GPT-4 fine-tuned outperformed all other models during external validation with similar performance metrics (MAE, 48.66 minutes; 95% CI, 45.31-52.00; accuracy, 46.0%). Base models demonstrated variable performance, with GPT-4 showing the highest performance among non-fine-tuned models (MAE, 59.20 minutes; 95% CI, 56.88-61.52).

CONCLUSION AND RELEVANCE: The findings in this study suggest that fine-tuned LLMs can predict surgical case length with accuracy comparable to or exceeding current institutional scheduling methods. This indicates potential for LLMs to enhance operating room efficiency through improved case length prediction using existing clinical documentation.

Author List

Ramamurthi A, Neupane B, Deshpande P, Hanson R, Vegesna S, Cray D, Crotty BH, Somai M, Brown KR, Pawar SS, Taylor B, Kothari AN

Authors

Bradley H. Crotty MD Associate Professor in the Medicine department at Medical College of Wisconsin
Anai N. Kothari MD Assistant Professor in the Surgery department at Medical College of Wisconsin
Sachin S. Pawar MD Chief, Associate Professor in the Otolaryngology department at Medical College of Wisconsin
Bradley W. Taylor Chief Research Informatics Officer in the Clinical and Translational Science Institute department at Medical College of Wisconsin

MeSH terms used to index this publication - Major topics in bold

Elective Surgical Procedures
Feasibility Studies
Female
Humans
Male
Middle Aged
Operating Rooms
Operative Time
Retrospective Studies