The Chinese University of Hong Kong
Department of Computer Science and Engineering


Title: Weakly Supervised Image Understanding
Date: February 7, 2018 (Wednesday)
Time: 3:00 p.m. - 4:00 p.m.
Venue: Room 121, 1/F, Ho Sin-hang Engineering Building,
The Chinese University of Hong Kong,
Shatin, N.T.
Speaker: Prof. Ming-Ming Cheng
Nankai University



Semantic segmentation of natural images is a fundamental problem in computer vision. While significant research progress has been made in the last few years, the success of most existing methods relies heavily on large-scale, pixel-accurate annotations. Humans, however, effortlessly learn robust and accurate visual cognitive models without requiring huge amounts of pixel-accurate semantic annotation. During childhood, we learn to robustly recognize and precisely locate object regions with only limited supervision from parents and other sources. Inspired by this process, our research focuses on weakly supervised image understanding modeled on human cognition. By utilizing visual attention, category-independent edge detection, region clustering, and related cues, we observe consistent performance gains in weakly supervised image understanding.



Ming-Ming Cheng is a professor with CCCE, Nankai University. He received his PhD degree from Tsinghua University in 2012, and then worked for two years as a research fellow with Prof. Philip Torr at the University of Oxford. Dr. Cheng's research primarily centers on algorithmic issues in image understanding and processing, including image segmentation, editing, and retrieval. He has published over 30 papers in leading journals and conferences, such as IEEE TPAMI, ACM TOG, ACM SIGGRAPH, IEEE CVPR, and IEEE ICCV. He has designed a series of popular methods and novel systems, as indicated by 7,000+ paper citations (2,000+ citations to his first-author paper on salient object detection). His work has been reported by prominent international media, including the BBC, The Telegraph (UK), Der Spiegel, and The Huffington Post.


Enquiries: Ms. Crystal Tam at tel. 3943 8439

For more information, please refer to