ShapeDetector: a surgical tool detector

Abstract

Detecting tools in surgical videos is an important ingredient for context-aware computer-assisted intervention systems. We propose a new two-stage pipeline for tool detection and pose estimation in 2D images, named ShapeDetector. Our approach is data-driven and avoids strong assumptions about the geometry, number, and position of tools in the image. The method has been validated on three pose parameters: overall position, tip location, and orientation, using a new surgical tool dataset, the NeuroSurgicalTools dataset, made of 2476 monocular images acquired from neurosurgical microscopes during in-vivo surgeries.

Results


NeuroSurgicalTools dataset statistics


Results and data


Detection results

Detection results obtained following our validation protocol for surgical tool detection (described in Section 4 of the paper) will be available soon in .txt format. These files were used to generate the figures shown in the paper and can be used to compare the results of new algorithms against ours.
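Until the files are released, their exact layout is not fixed. As a rough sketch only, assuming one detection per line of the form "image_name tip_x tip_y orientation score" (this column layout is an assumption, not the documented format), the results could be loaded for comparison like this:

```python
from collections import defaultdict

def load_detections(txt_path):
    """Load detections from a plain-text results file.

    Assumed (hypothetical) layout: one detection per line,
    "image_name tip_x tip_y orientation_deg score", whitespace-separated.
    Adjust the parsing once the official format is published.
    """
    detections = defaultdict(list)
    with open(txt_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            name, tip_x, tip_y, orientation, score = line.split()
            detections[name].append({
                "tip": (float(tip_x), float(tip_y)),
                "orientation": float(orientation),
                "score": float(score),
            })
    return detections
```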

Images and annotations

We provide separate train and test splits, as well as the corresponding annotations in the LabelMe format (one annotation file per image); a parsing sketch is given below. The strategy followed to assemble the dataset is described in Chapter 3, Section 3 of the thesis manuscript.
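LabelMe annotations are XML files with one `<object>` element per annotated tool and a `<polygon>` of `<pt>` points. The sketch below shows one way to read such a file with Python's standard library; it assumes the generic LabelMe schema, and the file name in the usage example is a placeholder, not guaranteed to match the released files.

```python
import xml.etree.ElementTree as ET

def load_labelme_annotation(xml_path):
    """Parse a LabelMe XML file and return a list of (label, polygon) pairs.

    Assumes the generic LabelMe schema: an <annotation> root containing
    <object> elements, each with a <name> and a <polygon> of <pt><x>/<y> points.
    """
    root = ET.parse(xml_path).getroot()
    objects = []
    for obj in root.findall("object"):
        # Skip objects flagged as deleted in the LabelMe editor, if present.
        deleted = obj.findtext("deleted")
        if deleted is not None and deleted.strip() == "1":
            continue
        label = (obj.findtext("name") or "").strip()
        polygon = [
            (float(pt.findtext("x")), float(pt.findtext("y")))
            for pt in obj.findall("polygon/pt")
        ]
        objects.append((label, polygon))
    return objects

# Example usage (hypothetical file name):
# tools = load_labelme_annotation("annotations/img_0001.xml")
```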

Videos (available soon)

The original videos, from which the images of the NeuroSurgicalTools dataset were extracted, will soon be available for download. The videos are provided without corresponding annotations.
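For readers who wish to extract their own frames from the videos, a minimal OpenCV sketch follows. The sampling step and output naming are illustrative assumptions, not the procedure used to build the dataset.

```python
import cv2  # OpenCV

def extract_frames(video_path, out_pattern="frame_{:06d}.png", step=25):
    """Save every `step`-th frame of a video to disk and return the count saved.

    Illustrative sketch only; not the exact extraction procedure used
    for the NeuroSurgicalTools dataset.
    """
    cap = cv2.VideoCapture(video_path)
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(out_pattern.format(saved), frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```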

If you use the detection results or data, please cite:

@article{bouget2015detecting,
  title={Detecting Surgical Tools by Modelling Local Appearance and Global Shape},
  author={Bouget, David and Benenson, Rodrigo and Omran, Mohamed and Riffaud, Laurent and Schiele, Bernt and Jannin, Pierre},
  journal={IEEE Transactions on Medical Imaging},
  volume={34},
  number={12},
  pages={2603--2617},
  year={2015},
  publisher={IEEE}
}

Contact