Super-resolution ultrasound localization microscopy (ULM) image quality depends heavily on the accuracy of microbubble (MB) detection; in particular, the false-positive (FP) and false-negative (FN) rates strongly affect image sharpness and structural similarity.
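As a rough illustration of how FP and FN rates for microbubble detections might be computed against ground-truth localizations, here is a minimal sketch; the greedy nearest-neighbour matching and the matching radius `tol` are assumptions for illustration, not the evaluation protocol of any specific ULM paper.

```python
import numpy as np

def detection_errors(pred, gt, tol=0.5):
    """Greedy nearest-neighbour matching of predicted vs. ground-truth
    microbubble positions (N x 2 arrays, in pixels). Returns (FP rate, FN rate).
    `tol` is an assumed matching radius in pixels."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    unmatched_gt = list(range(len(gt)))
    tp = 0
    for p in pred:
        if not unmatched_gt:
            break
        d = np.linalg.norm(gt[unmatched_gt] - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= tol:
            tp += 1
            unmatched_gt.pop(j)
    fp = len(pred) - tp
    fn = len(gt) - tp
    return fp / max(len(pred), 1), fn / max(len(gt), 1)

# Example: 3 detections vs. 3 true bubbles, one detection is spurious.
fp_rate, fn_rate = detection_errors([[1.0, 1.0], [5.0, 5.2], [9.0, 2.0]],
                                    [[1.1, 0.9], [5.0, 5.0], [7.0, 7.0]])
print(fp_rate, fn_rate)  # ~0.33 FP rate, ~0.33 FN rate
```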
This paper introduces SISMIK, a novel deep learning-based method for estimating and correcting in-plane rigid-body motion artifacts in brain MRI by analyzing k-space data, offering a promising solution for improving image quality without relying on motion-free references or introducing hallucinations.
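For background, a rigid in-plane translation of the imaged object appears in k-space as a linear phase ramp (the Fourier shift theorem), which is the kind of signature a k-space based estimator can exploit. The sketch below only simulates that forward effect; it is not the SISMIK estimator itself, and the function name and conventions are illustrative.

```python
import numpy as np

def apply_inplane_shift(kspace, dx, dy):
    """Simulate a rigid in-plane translation of (dx, dy) pixels in the
    underlying image by multiplying 2D k-space with the corresponding
    linear phase ramp (Fourier shift theorem)."""
    ny, nx = kspace.shape
    ky = np.fft.fftfreq(ny)[:, None]   # cycles per pixel along y
    kx = np.fft.fftfreq(nx)[None, :]   # cycles per pixel along x
    phase = np.exp(-2j * np.pi * (kx * dx + ky * dy))
    return kspace * phase

# Round trip: for integer shifts this matches np.roll of the image.
img = np.zeros((64, 64)); img[20:30, 12:22] = 1.0
k = np.fft.fft2(img)
shifted = np.fft.ifft2(apply_inplane_shift(k, dx=3, dy=-5)).real
assert np.allclose(shifted, np.roll(img, shift=(-5, 3), axis=(0, 1)), atol=1e-8)
```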
FlowMRI-Net is a novel deep learning framework that leverages physics-driven unrolled optimization and a complex-valued convolutional recurrent neural network to achieve fast and accurate reconstruction of undersampled 4D flow MRI data, demonstrating superior performance compared to existing compressed sensing and deep learning methods in both aortic and cerebrovascular applications.
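A minimal sketch of one physics-driven unrolled iteration (a gradient step on the data-consistency term followed by a learned regularizer) is shown below; the plain real-valued CNN standing in for the complex-valued convolutional recurrent network, the single-coil FFT forward model, and all hyperparameters are assumptions, not the FlowMRI-Net architecture.

```python
import torch, torch.nn as nn

class UnrolledRecon(nn.Module):
    """Generic unrolled reconstruction sketch:
    x_{k+1} = D_theta( x_k - alpha * A^H (A x_k - y) ),
    with A the undersampling Fourier operator and D_theta a learned prior."""
    def __init__(self, n_iters=5):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))
        # Real/imaginary parts stacked as 2 channels; a small real CNN stands
        # in for the paper's complex-valued convolutional recurrent network.
        self.denoiser = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1))
        self.n_iters = n_iters

    def forward(self, y, mask):
        """y: undersampled k-space (B, H, W), complex; mask: sampling mask (H, W)."""
        x = torch.fft.ifft2(y)                      # zero-filled initial estimate
        for _ in range(self.n_iters):
            resid = mask * (torch.fft.fft2(x) - y)  # A x - y on sampled locations
            x = x - self.alpha * torch.fft.ifft2(resid)
            xc = torch.stack([x.real, x.imag], dim=1)
            xc = xc + self.denoiser(xc)             # residual learned prior step
            x = torch.complex(xc[:, 0], xc[:, 1])
        return x
```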
To advance spine-related image analysis research, a large-scale spine CT dataset, CTSpine1K, comprising 1,005 CT volumes (over 500,000 labeled slices and more than 11,000 vertebrae), was constructed and publicly released.
Nailfold capillaroscopy can identify specific capillary abnormalities associated with various nail conditions, providing a potential non-invasive diagnostic tool.
Segmentation quality, as measured by the Dice coefficient, is bounded by the accuracy of volume prediction, quantified by the volume prediction error (vpe); incorporating volumetric prediction accuracy into segmentation evaluation therefore provides a more complete picture of model performance in clinical applications.
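The bound follows directly from the definition of Dice: the overlap can never exceed the smaller of the predicted and true volumes, so DSC = 2|A∩B|/(|A|+|B|) <= 2*min(V_pred, V_gt)/(V_pred + V_gt). A small sketch of that bound is below; how the paper normalizes its vpe is an assumption here.

```python
def dice_upper_bound(v_pred, v_gt):
    """Best-case Dice given predicted and true volumes:
    DSC = 2|A∩B| / (|A|+|B|) <= 2*min(v_pred, v_gt) / (v_pred + v_gt),
    since the overlap can never exceed the smaller of the two volumes."""
    return 2.0 * min(v_pred, v_gt) / (v_pred + v_gt)

# E.g. a 20% volume over-prediction caps Dice at ~0.91 no matter how
# well the predicted voxels are placed.
print(dice_upper_bound(120.0, 100.0))  # 0.909...
```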
By explicitly modeling the discrepancy between the outputs of a segmentation model like U-Net and the ground truth using Denoising Diffusion Probabilistic Models (DDPMs), Re-DiffiNet can improve brain tumor segmentation performance, especially on boundary-distance metrics like Hausdorff Distance.
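One way to read "modeling the discrepancy" is sketched below: a generative model is trained to predict where the base U-Net disagrees with the ground truth, and its output is then used to flip those voxels. The binary disagreement encoding and the XOR-style refinement are assumptions for illustration, not Re-DiffiNet's exact formulation, and the DDPM itself is omitted.

```python
import torch

def discrepancy_target(unet_logits, gt_mask):
    """Training target for a discrepancy model: the voxelwise disagreement
    between the base U-Net prediction and the ground truth (1 where the
    U-Net is wrong). Binary encoding is an assumption for illustration."""
    unet_mask = (torch.sigmoid(unet_logits) > 0.5).float()
    return (unet_mask != gt_mask).float()

def refine(unet_logits, predicted_discrepancy):
    """Flip the base prediction wherever the discrepancy model says the
    U-Net is likely wrong (XOR of the two binary maps)."""
    unet_mask = (torch.sigmoid(unet_logits) > 0.5).float()
    return torch.abs(unet_mask - (predicted_discrepancy > 0.5).float())
```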
LATUP-Net, a lightweight 3D U-Net variant, incorporates parallel convolutions and attention mechanisms to achieve high brain tumor segmentation performance with significantly reduced computational costs compared to state-of-the-art models.
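A sketch of a lightweight 3D block combining parallel convolutions with channel attention is given below; the kernel sizes, channel split, and squeeze-and-excitation attention variant are assumptions, not the published LATUP-Net block.

```python
import torch, torch.nn as nn

class ParallelConvBlock(nn.Module):
    """Illustrative 3D block: parallel convolutions with different kernel
    sizes, concatenated and reweighted by squeeze-and-excitation attention."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3
        self.b1 = nn.Conv3d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv3d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv3d(in_ch, out_ch - 2 * branch_ch, kernel_size=5, padding=2)
        self.se = nn.Sequential(                       # channel attention
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(out_ch, out_ch // 4, 1), nn.ReLU(),
            nn.Conv3d(out_ch // 4, out_ch, 1), nn.Sigmoid())
        self.act = nn.ReLU()

    def forward(self, x):
        y = torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)
        return self.act(y * self.se(y))

# e.g. ParallelConvBlock(4, 48) on a (B, 4, D, H, W) multimodal MRI patch
```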
Score-based diffusion models can be used to solve the ill-posed inverse problem of reconstructing photoacoustic tomography images from limited sensor measurements.
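The usual recipe alternates a learned prior (score) step with a data-consistency step for the photoacoustic forward operator; the sketch below is a generic annealed Langevin-style sampler in which the step sizes, noise schedule, and operator interfaces (`score_model`, `A`, `At`) are hypothetical, not a specific published sampler.

```python
import torch

def posterior_sample(score_model, A, At, y, steps=500, lam=1.0,
                     shape=(1, 1, 128, 128)):
    """Illustrative reverse sampler for an ill-posed inverse problem:
    alternate a learned score (prior) update with a gradient step on the
    data-fidelity term ||A x - y||^2, where A/At are the assumed forward
    operator and its adjoint."""
    x = torch.randn(shape)
    sigmas = torch.linspace(1.0, 0.01, steps)         # assumed noise schedule
    for sigma in sigmas:
        eps = 0.5 * sigma ** 2                        # assumed step size
        score = score_model(x, sigma)                 # learned grad log p(x)
        x = x + eps * score + torch.sqrt(2 * eps) * torch.randn_like(x)
        x = x - lam * eps * At(A(x) - y)              # data-consistency gradient
    return x
```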
A novel transparent and clinically interpretable AI model uses both chest X-ray images and associated medical reports to accurately detect lung cancer, outperforming baseline deep learning models while providing reliable and clinically relevant explanations.