Extracted from Communications and Public Relations Office, dated 5 Aug 2025
Embodied intelligence technology leads to breakthroughs in surgical robot automation
Surgical robots have performed millions of minimally invasive procedures worldwide. Autonomy is envisioned for next-generation surgical robots to enhance operational efficiency and consistency while alleviating pressure on medical resources.
Professor Dou Qi, Assistant Professor in CUHK’s Department of Computer Science and Engineering, who led the study, said: “Traditional surgical automation approaches often relied on additional sensors or predefined models, which limited their clinical applicability. We used innovative AI techniques to create a brand-new embodied intelligence framework for surgical robot automation, contributing a data-driven and purely vision-based solution that is the first of its kind globally.”
This surgical embodied intelligence framework analyses endoscopic images in real time, without additional sensors. It integrates advanced visual foundation models, reinforcement learning and visual servoing techniques to achieve accurate, efficient and safe automation of various surgical tasks. Its foundation-model-based visual perception enables robust surgical scene understanding and depth estimation in practice. The reinforcement learning-based control policy was trained in SurRoL, an embodied AI simulator developed by the team, and the simulation-trained policy can be deployed directly on real-world robots via zero-shot sim-to-real transfer.

In this research, the AI system was seamlessly integrated into the Sentire® Surgical System, which has distinctive AI-ready and AI-friendly characteristics. This data-driven paradigm eliminates the need for task-specific engineering, providing a general-purpose solution for versatile surgical autonomy through embodied AI and accelerating the translation from concept to pre-clinical testing.
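To make the closed-loop structure described above concrete, the sketch below shows what a purely vision-based control cycle of this kind could look like: perceive an endoscopic frame, map the perceived state to a motion command with a learned policy, then servo toward it. This is a minimal illustrative sketch only; all function names, state shapes and the simple stand-in computations are assumptions, not the team's actual system or the SurRoL API.

```python
import numpy as np

def perceive(frame: np.ndarray) -> np.ndarray:
    """Stand-in for foundation-model perception (hypothetical).
    The real system would run scene segmentation and depth estimation
    on the endoscopic image; here we just pool the frame into a
    compact state vector."""
    h, w = frame.shape[:2]
    # Average-pool the image into a 4x4 grid and flatten to a 16-dim state.
    pooled = frame[: h // 4 * 4, : w // 4 * 4].reshape(4, h // 4, 4, w // 4)
    return pooled.mean(axis=(1, 3)).flatten()

def policy(state: np.ndarray) -> np.ndarray:
    """Stand-in for a simulation-trained RL policy (hypothetical):
    maps the perceived state to a bounded 6-DoF end-effector
    velocity command."""
    rng = np.random.default_rng(0)          # fixed weights for the sketch
    W = rng.standard_normal((6, state.size)) * 0.01
    return np.tanh(W @ state)               # bounded action in [-1, 1]

def control_step(pose: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """One closed-loop cycle: perceive, decide, apply a small
    servoing update toward the commanded motion."""
    action = policy(perceive(frame))
    return pose + 0.05 * action

# Dummy grayscale endoscopic frame; a real loop would capture images live.
frame = np.full((64, 64), 0.5)
pose = np.zeros(6)
for _ in range(10):
    pose = control_step(pose, frame)
print(pose.shape)
```

The key property this sketch mirrors is that the loop consumes only images: no extra sensors or task-specific models appear anywhere in the control path.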
