The authors propose a novel, transparent, and clinically interpretable AI model for detecting lung cancer in chest X-rays. The model is based on a concept bottleneck architecture, which splits the traditional image-to-label classification pipeline into two separate models.
The first model, the concept prediction model, takes a chest X-ray as input and outputs prediction scores for a predetermined set of clinical concepts extracted from associated medical reports. These concepts were defined under the guidance of a consultant radiologist and represent key features used when manually reading chest X-rays.
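The summary does not include the authors' code, but a minimal sketch helps pin down the shape of this first stage. The PyTorch snippet below is an assumption-laden illustration: the ResNet-18 backbone, input size, and all names are hypothetical, with only the 28-concept output dimension taken from the paper.

```python
# Minimal sketch of a concept prediction model (not the authors' code).
# Backbone choice and input format are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CONCEPTS = 28  # clinical concepts defined with a consultant radiologist

class ConceptPredictor(nn.Module):
    def __init__(self, num_concepts: int = NUM_CONCEPTS):
        super().__init__()
        backbone = models.resnet18(weights=None)  # hypothetical backbone
        # Replace the classification head with one output per concept.
        backbone.fc = nn.Linear(backbone.fc.in_features, num_concepts)
        self.backbone = backbone

    def forward(self, xray: torch.Tensor) -> torch.Tensor:
        # One sigmoid score per clinical concept (multi-label prediction).
        return torch.sigmoid(self.backbone(xray))

model = ConceptPredictor()
scores = model(torch.randn(1, 3, 224, 224))  # a single placeholder X-ray
print(scores.shape)  # torch.Size([1, 28])
```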
The second model, the label prediction model, then uses the concept prediction scores to classify the image as either cancerous or healthy. The authors experiment with several architectures for the label prediction model, including Decision Trees, SVMs, and MLPs, and find that the Decision Tree performs best in terms of precision.
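Under the same caveat, the second stage could look like the scikit-learn sketch below: a Decision Tree fit on concept score vectors rather than raw pixels. The synthetic data and the toy labeling rule are placeholders, not the paper's dataset or hyperparameters.

```python
# Sketch of the label prediction stage: a Decision Tree that maps
# concept score vectors to a cancerous/healthy label. Data is synthetic;
# in the paper this input would come from the concept prediction model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
concept_scores = rng.random((200, 28))             # 200 images x 28 concepts
labels = (concept_scores[:, 0] > 0.5).astype(int)  # toy rule: 1 = cancerous

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
clf.fit(concept_scores, labels)

# The learned tree is directly inspectable, which is what makes this
# stage interpretable: every split is a threshold on a named concept.
print(clf.predict(concept_scores[:5]))
```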
The authors evaluate their approach against post-hoc image-based XAI techniques like LIME and SHAP, as well as the textual XAI tool CXR-LLaVA. They find that their concept-based explanations are more stable, clinically relevant, and reliable than the explanations generated by these existing methods.
The authors also experiment with clustering the original 28 clinical concepts into 6 broader categories, which leads to significant improvements in both concept prediction accuracy (97.1% for top-1 concept) and label prediction performance, outperforming the baseline InceptionV3 model.
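As a purely mechanical illustration of this grouping step (the paper's concept-to-category assignment was clinically guided; the mapping below is invented), the 28 concept scores could be aggregated into 6 category scores, for instance by taking the maximum score within each category:

```python
# Sketch of collapsing 28 concept scores into 6 broader category scores.
# The grouping here is a placeholder, not the paper's clinical mapping.
import numpy as np

# Hypothetical mapping: category index -> list of concept indices.
CATEGORY_TO_CONCEPTS = {
    0: list(range(0, 5)),
    1: list(range(5, 10)),
    2: list(range(10, 14)),
    3: list(range(14, 19)),
    4: list(range(19, 24)),
    5: list(range(24, 28)),
}

def to_category_scores(concept_scores: np.ndarray) -> np.ndarray:
    """Aggregate per-concept scores (n, 28) into per-category scores (n, 6)."""
    return np.stack(
        [concept_scores[:, idx].max(axis=1)
         for idx in CATEGORY_TO_CONCEPTS.values()],
        axis=1,
    )

cat_scores = to_category_scores(np.random.default_rng(0).random((4, 28)))
print(cat_scores.shape)  # (4, 6)
```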
Overall, the authors demonstrate the effectiveness of their transparent and clinically interpretable AI approach for lung cancer detection in chest X-rays, providing a promising solution that can build trust and enable better integration of AI systems in healthcare.
Source: Amy Rafferty et al., arXiv, 2024-03-29, https://arxiv.org/pdf/2403.19444.pdf