Learning with imperfect datasets in medical image segmentation

Publication Type:
Thesis
Issue Date:
2024
Medical image segmentation partitions medical images into distinct physiological regions, such as organs and lesions, a task essential for diagnosis and treatment planning. Deep neural networks have recently advanced this field, yet real-world performance remains unsatisfactory due to imperfect data and high accuracy requirements. First, scaling up training data is challenging because of privacy concerns and the need for expert annotations. Second, real-world medical image quality varies, causing significant performance drops on outlier cases. Finally, accurate predictions are crucial for safety-critical medical applications, but existing models often fall short. To address these challenges, this thesis proposes deep learning methods for effective medical image segmentation with limited and low-quality data. The proposed suite includes: (1) applying image registration to generate realistic and diverse training samples and adopting barely-supervised learning paradigms to enable learning with insufficient annotated data; (2) creating a region-aware fusion module to tackle the missing-modality problem; (3) integrating automatic and interactive segmentation into a single model and training session to achieve practical segmentation performance. Extensive experiments on tasks such as brain tumor, brain structure, and abdominal organ segmentation demonstrate the proposed techniques' effectiveness and efficiency.
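
To illustrate the first contribution, the following is a minimal sketch (not the thesis code) of registration-based augmentation: a dense deformation field, e.g. obtained by registering a labeled scan toward an unlabeled one, is used to warp both the image and its label map so that a new, anatomically plausible training pair is produced. All names, shapes, and the displacement channel ordering are illustrative assumptions.

import torch
import torch.nn.functional as F

def warp_pair(image, label, displacement):
    """Warp an image/label pair with a dense displacement field.

    image:        (1, 1, D, H, W) float tensor
    label:        (1, 1, D, H, W) integer label map
    displacement: (1, 3, D, H, W) voxel displacements, channels assumed
                  ordered (x, y, z), e.g. produced by a registration model
    """
    _, _, D, H, W = image.shape
    # Identity sampling grid in normalized [-1, 1] coordinates, shape (1, D, H, W, 3).
    grid = F.affine_grid(torch.eye(3, 4).unsqueeze(0),
                         size=(1, 1, D, H, W), align_corners=True)
    # Convert voxel displacements to normalized coordinates and add to the grid.
    scale = torch.tensor([2.0 / (W - 1), 2.0 / (H - 1), 2.0 / (D - 1)])
    disp = displacement.permute(0, 2, 3, 4, 1) * scale
    warped_grid = grid + disp
    # Trilinear interpolation for the image, nearest neighbour for the label map
    # so that label values stay discrete.
    new_image = F.grid_sample(image, warped_grid, mode='bilinear',
                              align_corners=True)
    new_label = F.grid_sample(label.float(), warped_grid, mode='nearest',
                              align_corners=True).long()
    return new_image, new_label

In practice, such warped pairs would be mixed with the original annotated scans during training to enlarge and diversify the labeled set.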