With the introduction of electronic systems in the operating room, such as patient monitoring, laparoscopic surgery, and robotic assistance, more and more data is recorded during surgical procedures. This trend gives data-driven systems, such as machine learning models, the opportunity to take on a more prominent role in the surgical environment.
We explore the development of deep learning-based algorithms for scene understanding in operating room videos, for example by detecting medical staff and identifying their roles, or by recognizing the clinical phases of surgical procedures. Automatic scene understanding can form the basis for higher-level algorithms that assist the surgical team, help to evaluate completed procedures, or provide insights for surgery planning. We approach the development of new algorithms from the perspective of geometric deep learning, with a focus on end-to-end differentiable methods and graphical models.
In our work, we pay explicit attention to the privacy of the medical staff and the patients. Our goal is to find practical and effective solutions that respect the privacy of everyone who enters the observed environment, and thereby to contribute to the digital trend that makes the operating room safer, more efficient, and more pleasant.