Reliability of Large Language Model-Based Artificial Intelligence in AIS Assessment: Lenke Classification and Fusion-Level Suggestion


Aktan C., Kosar A., Unal M., Korkmaz M., Kaya O., Akgül T., et al.

DIAGNOSTICS, vol. 15, no. 24, 2025 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 15, Issue: 24
  • Publication Date: 2025
  • DOI: 10.3390/diagnostics15243219
  • Journal Name: DIAGNOSTICS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, EMBASE, Directory of Open Access Journals
  • Istanbul University Affiliated: Yes

Abstract

Background: Accurate deformity classification and fusion-level planning are essential in adolescent idiopathic scoliosis (AIS) surgery and are traditionally guided by Cobb angle measurement and the Lenke system. Multimodal large language models (LLMs) (e.g., ChatGPT-4.0, Claude 3.7 Sonnet, Gemini 2.5 Pro, DeepSeek-R1-0528 Chat) are increasingly used for image interpretation despite limited validation for radiographic decision-making. This study evaluated the agreement and reproducibility of contemporary multimodal LLMs for AIS assessment compared with expert spine surgeons.

Methods: This single-center retrospective study included 125 AIS patients (94 females, 31 males; mean age 14.8 ± 1.9 years) who underwent posterior instrumentation (2020–2024). Two experienced spine surgeons independently performed Lenke classification (including lumbar and sagittal modifiers) and selected fusion levels (UIV–LIV) on standing AP, lateral, and side-bending radiographs; discrepancies were resolved by consensus to establish the reference standard. The same radiographs were analyzed by four paid multimodal LLMs using standardized zero-shot prompts. Because the LLMs showed inconsistent end-vertebra selection, LLM-derived Cobb angles lacked a common anatomical reference frame and were excluded from quantitative analysis. Agreement with expert consensus and test-retest reproducibility (repeat analyses one week apart) were assessed using Cohen's kappa. Evaluation times were recorded.

Results: Surgeon agreement was high for Lenke classification (92.0%, kappa = 0.913) and fusion-level selection (88.8%, kappa = 0.879). All LLMs demonstrated chance-level test-retest reproducibility and very low agreement with expert consensus (Lenke: 1.6–10.2%, kappa = 0.001–0.036; fusion: 0.8–12.0%, kappa = 0.003–0.053). Claude produced missing outputs in 17 Lenke and 29 fusion-level cases. Although the LLMs completed assessments far faster than surgeons (seconds vs. ~11–12 min), speed did not translate into clinically acceptable reliability.

Conclusions: Current general-purpose multimodal LLMs do not provide reliable Lenke classification or fusion-level planning in AIS. Their poor agreement with expert surgeons and marked internal inconsistency indicate that LLM-generated interpretations should not be used for surgical decision-making or patient self-assessment without task-specific validation.
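The agreement metric used throughout the study is the two-rater Cohen's kappa, which corrects raw percent agreement for agreement expected by chance: kappa = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is chance agreement from the raters' marginal label frequencies. A minimal sketch of that computation, using made-up Lenke-style labels (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Two-rater Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases where both raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of marginal label frequencies, summed over labels.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical Lenke-type assignments for two raters (illustration only).
surgeon = ["1A", "1A", "2A", "3C", "5C", "1B", "2A", "1A"]
model   = ["1A", "2A", "2A", "3C", "1A", "1B", "5C", "1A"]
print(round(cohens_kappa(surgeon, model), 3))  # 0.5
```

Here p_o = 5/8 and p_e = 16/64 = 0.25, giving kappa = 0.5; the near-zero kappa values reported for the LLMs mean their agreement with the surgeons was essentially what chance alone would produce.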