Commentary: Comparison of radiological interpretation made by veterinary radiologists and state-of-the-art commercial AI software for canine and feline radiographic studies
Published (Version of Record), CC BY 4.0, Open Access
Abstract
Keywords: artificial intelligence (AI), radiology, veterinary, study design and best practice, critical review
Ndiaye et al. report a head-to-head comparison of a commercial artificial intelligence radiology software (AI) and veterinary radiologists interpreting canine and feline radiographs (1). The conclusion states that the AI “performs almost as well as the best veterinary radiologist in all settings of descriptive radiographic findings” (1). The AI showed high specificity (the ability to correctly identify normal findings) but, compared with the radiologists, lower sensitivity (the ability to correctly detect abnormal findings) (1), leading the authors to suggest that its strength lies in confirming normal cases. It is further postulated that the AI's performance is comparable to that of human experts and that the AI “will likely complement rather than replace human experts” in veterinary radiology (1). These are noteworthy findings given the scarcity of veterinary radiologists and the prospect of AI aiding or augmenting clinical practice. However, a closer examination reveals several methodological and interpretive limitations that temper these optimistic conclusions. This commentary discusses the most impactful concerns, from the study's lack of a true gold-standard reference and its biased sample to statistical and ethical issues, and highlights their implications for real-world veterinary radiology.
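For reference, the two metrics discussed above are the standard confusion-matrix definitions (a textbook reminder, not a formula reproduced from the study itself), computed from true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP):

\[
\text{sensitivity} = \frac{TP}{TP + FN},
\qquad
\text{specificity} = \frac{TN}{TN + FP}
\]

These definitions make the trade-off concrete: a reader that labels nearly every study as normal can achieve high specificity while missing true lesions, which is why a high-specificity, low-sensitivity system is better suited to confirming normal cases than to screening for disease.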