Mohammad Asadi, Jack W. O’Sullivan, Fang Cao, Tahoura Nedaee, Kamyar Rajabalifardi, Fei-Fei Li, Ehsan Adeli, Euan Ashley.
Department of Electrical Engineering, Stanford University, CA, USA
Division of Cardiology, Department of Medicine, Stanford University, CA, USA
Department of Biomedical Data Science, Stanford University, CA, USA
Department of Biology, Stanford University, CA, USA
Department of Computer Science, Stanford University, CA, USA
Department of Psychiatry and Behavioral Sciences, Stanford University, CA, USA
Mirage: The Illusion of Visual Understanding. (https://arxiv.org/pdf/2603.21687)
Multimodal AI systems have achieved remarkable performance across a broad range of real-world tasks, yet the mechanisms underlying visual–language reasoning remain surprisingly poorly understood. We report three findings that challenge prevailing assumptions about how these systems process and integrate visual information. First, frontier models readily generate detailed image descriptions and elaborate reasoning traces, including pathology-biased clinical findings, for images that were never provided; we term this phenomenon mirage reasoning.
Second, even without any image input, models attain strikingly high scores across general and medical multimodal benchmarks, calling their utility and design into question. In the most extreme case, our model achieved the top rank on a standard chest X-ray question-answering benchmark without access to any images.
Third, when models were explicitly instructed to guess answers without image access, rather than being implicitly prompted to assume images were present, performance declined markedly. Explicit guessing appears to engage a more conservative response regime, in contrast to the mirage regime in which models behave as though images have been provided.
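The contrast between the two regimes can be illustrated with a minimal sketch. The template wording, function name, and answer-option formatting below are hypothetical assumptions for illustration, not the paper's actual evaluation protocol; both prompts are text-only, differing solely in whether the absence of the image is made explicit.

```python
# Hypothetical sketch of the two text-only prompt regimes: "mirage"
# (implicitly phrased as if an image accompanies the question) versus
# "explicit" (the model is told no image exists and asked to guess).
# Template wording is an illustrative assumption, not the paper's protocol.

def build_prompt(question: str, options: list[str], regime: str) -> str:
    """Build a text-only prompt; no image is attached in either regime."""
    opts = "\n".join(f"{chr(65 + i)}. {o}" for i, o in enumerate(options))
    if regime == "mirage":
        # Implicit condition: the prompt presupposes an image is present.
        return f"Look at the provided image and answer.\n{question}\n{opts}"
    if regime == "explicit":
        # Explicit condition: the model is told there is no image.
        return f"No image is provided. Guess the most likely answer.\n{question}\n{opts}"
    raise ValueError(f"unknown regime: {regime!r}")

question = "What abnormality is visible in the chest X-ray?"
options = ["Cardiomegaly", "Pneumothorax", "No finding"]
print(build_prompt(question, options, "mirage"))
print(build_prompt(question, options, "explicit"))
```

Under this framing, the finding above amounts to the same question–option text eliciting markedly different accuracy depending only on which template wraps it.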
These findings expose fundamental vulnerabilities in how visual–language models reason and are evaluated, pointing to an urgent need for private benchmarks that eliminate textual cues enabling non-visual inference, particularly in medical contexts where miscalibrated AI carries the greatest consequence. We introduce B-Clean as a principled solution for fair, vision-grounded evaluation of multimodal AI systems.
Note: this piece of scientific hokum is provided purely for entertainment and cannot serve as a serious argument in debates about the applicability of artificial intelligence to particular tasks.