Tag Archives: Deep learning

Investigating CoordConv for Fully and Weakly Supervised Medical Image Segmentation (in Proc. IPTA’20)

By Rosana El Jurdi, Thomas Dargent, Caroline Petitjean, Paul Honeine, Fahed Abdallah.

In Proceedings of the 10th International Conference on Image Processing Theory, Tools and Applications (IPTA), Paris, France, 9 – 12 November 2020.

Investigating CoordConv for Fully and Weakly Supervised Medical Image Segmentation [link] [pdf]   doi:10.1109/IPTA50016.2020.9286633

Abstract. Convolutional neural networks (CNN) have established state-of-the-art performance in computer vision tasks such as object detection and segmentation. One of the major remaining challenges concerns their ability to capture consistent spatial attributes, especially in medical image segmentation. A way to address this issue is to integrate a localization prior into the network architecture. CoordConv layers are extensions of convolutional layers wherein the convolution is conditioned on spatial coordinates. This paper investigates CoordConv as a substitute for standard convolutional layers in organ segmentation, in both fully and weakly supervised settings. Experiments are conducted on two public datasets: SegTHOR, which focuses on the segmentation of thoracic organs at risk in computed tomography (CT) images, and ACDC, which addresses segmentation of the ventricular endocardium of the heart in MR images. We show that, while CoordConv does not significantly increase accuracy with respect to standard convolution, it may interestingly speed up model convergence at almost no additional computational cost.
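The core CoordConv idea is simple: before convolving, append channels holding each pixel's normalized coordinates, so the filters can condition on position. A minimal NumPy sketch of that coordinate-channel step (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def add_coord_channels(x):
    """Append normalized (row, col) coordinate channels to a batch of
    feature maps, as done before a CoordConv convolution.

    x: array of shape (N, C, H, W).
    Returns an array of shape (N, C + 2, H, W), where the two extra
    channels hold the row and column coordinates scaled to [-1, 1].
    """
    n, c, h, w = x.shape
    rows = np.linspace(-1.0, 1.0, h).reshape(1, 1, h, 1)
    cols = np.linspace(-1.0, 1.0, w).reshape(1, 1, 1, w)
    row_chan = np.broadcast_to(rows, (n, 1, h, w))
    col_chan = np.broadcast_to(cols, (n, 1, h, w))
    return np.concatenate([x, row_chan, col_chan], axis=1)
```

A subsequent ordinary convolution over the augmented tensor then "sees" position, which is why the extra cost is negligible: only two input channels are added to the first convolution that follows.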

BB-UNet: U-Net with Bounding Box Prior (in IEEE Journal of Selected Topics in Signal Processing 2020)

By Rosana El Jurdi, Caroline Petitjean, Paul Honeine, Fahed Abdallah.

In IEEE Journal of Selected Topics in Signal Processing, 14(6): 1189-1198, October 2020.

BB-UNet: U-Net with Bounding Box Prior [pdf] paper   doi:10.1109/JSTSP.2020.3001502

Abstract. Medical image segmentation is the process of anatomically isolating organs for analysis and treatment. Leading works within this domain emerged with the well-known U-Net. Despite its success, recent works have shown the limitations of U-Net in conducting segmentation given image particularities such as noise, corruption or lack of contrast. Integrating prior knowledge helps overcome such segmentation ambiguities. This paper introduces BB-UNet (Bounding Box U-Net), a deep learning model that integrates location and shape priors into model training. The proposed model is inspired by U-Net and incorporates priors through a novel convolutional layer introduced at the level of the skip connections. This layer presents attention kernels to the network during training in order to guide the model on where to look for the organs, and fine-tunes the encoder layers based on positional constraints. The proposed model is exploited within two main paradigms: as a solo model in a fully supervised framework, and as an ancillary model in a weakly supervised setting. In the current experiments, manual bounding boxes are fed at inference, so BB-UNet is exploited in a semi-automatic setting; however, BB-UNet has the potential of being part of a fully automated process if it relies on a preliminary object detection step. To validate the performance of the proposed model, experiments are conducted on two public datasets: the SegTHOR dataset, which focuses on the segmentation of thoracic organs at risk in computed tomography (CT) images, and the Cardiac dataset, a mono-modal MRI dataset released as part of the Decathlon challenge and dedicated to segmentation of the left atrium. Results show that the proposed method outperforms state-of-the-art methods in fully supervised learning frameworks and achieves relevant results in the weakly supervised setting.
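To make the bounding-box prior concrete: one simple way to inject a box at a skip connection is to rasterize it into a binary mask and gate the skip features with it, suppressing activations outside the box. The sketch below is a deliberately simplified illustration of that masking step, not the authors' BB-UNet layer (helper names and the box convention are assumptions):

```python
import numpy as np

def bbox_prior_mask(h, w, box):
    """Rasterize a bounding box (y0, x0, y1, x1), exclusive on the
    far edges, into a binary prior mask of shape (h, w)."""
    y0, x0, y1, x1 = box
    mask = np.zeros((h, w))
    mask[y0:y1, x0:x1] = 1.0
    return mask

def gate_skip_features(feats, box):
    """Gate a (C, H, W) skip-connection feature map with the box
    mask, zeroing activations outside the box so the decoder is
    pointed at where the organ lies."""
    c, h, w = feats.shape
    return feats * bbox_prior_mask(h, w, box)[None, :, :]
```

In the actual model the prior is incorporated through a learned convolutional layer rather than a hard zero mask, but the gating sketch conveys why the decoder receives location-aware features.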

Organ Segmentation in CT Images With Weak Annotations: A Preliminary Study (in GRETSI’19)

By Rosana El Jurdi, Caroline Petitjean, Paul Honeine, Fahed Abdallah.

In Proceedings of the 27th GRETSI Symposium on Signal and Image Processing (Colloque GRETSI sur le Traitement du Signal et des Images), Lille, France, 26 – 29 August 2019.

Organ Segmentation in CT Images With Weak Annotations: A Preliminary Study [pdf] paper

Abstract. Medical image segmentation presents unprecedented challenges compared to natural image segmentation, in particular because of the scarcity of annotated datasets. Of particular interest is the ongoing 2019 SegTHOR competition, which consists in Segmenting THoracic Organs at Risk in CT images. While the fully supervised framework (i.e., pixel-level annotation) is considered in this competition, this paper seeks to push the competition toward a new paradigm: weakly supervised segmentation, namely training with only bounding boxes that enclose the organs. After a pre-processing step, the proposed method applies the GrabCut algorithm to transform the images into pixel-level annotated ones. A deep neural network is then trained on the medical images, where several segmentation loss functions are examined. Experiments show the relevance of the proposed method, providing results comparable to those of the ongoing fully supervised segmentation competition.
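The key preprocessing step is turning each bounding-box annotation into a pixel-level pseudo-label that can train a standard segmentation network. The paper uses GrabCut for this; below is a deliberately simplified stand-in (an intensity-similarity rule inside the box, not GrabCut's graph-cut optimization) that illustrates the box-to-mask conversion; the function, its parameters, and the box convention are assumptions for illustration:

```python
import numpy as np

def box_to_pseudo_mask(img, box, tol=0.5):
    """Convert a bounding-box annotation into a rough pixel-level
    pseudo-label. Simplified stand-in for GrabCut: pixels inside the
    box whose intensity lies within `tol` of the box's mean intensity
    are marked foreground; everything else is background.

    img: 2-D array with values in [0, 1].
    box: (y0, x0, y1, x1), exclusive on the far edges.
    """
    y0, x0, y1, x1 = box
    mask = np.zeros(img.shape, dtype=np.uint8)
    region = img[y0:y1, x0:x1]
    mu = region.mean()
    inside = np.abs(region - mu) <= tol
    mask[y0:y1, x0:x1] = inside.astype(np.uint8)
    return mask
```

The resulting mask plays the role of a pixel-level annotation, so the downstream network and loss functions can be trained exactly as in the fully supervised setting.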