Cross-modal Representation Learning for Images and Language
|Date:||November 22, 2018 (Thursday)|
|Time:||11:00 am - 12:00 pm|
|Venue:||Room 121, 1/F, Ho Sin-hang Engineering Building, The Chinese University of Hong Kong, Shatin, N.T.|
|Speaker:||Dr. Liwei WANG, Tencent AI Lab at Seattle|
Cross-modal learning for images and language is vital to many AI applications, including image-text retrieval, visual grounding, and image captioning. In this talk, I will first introduce our two-branch neural networks for matching images and language in a joint embedding space, and demonstrate that this framework is flexible enough to adapt to various AI tasks. Second, I will present our recent work on deep generative models that produce human-like language descriptions; our approaches not only generate diverse descriptions conditioned on the input image but also improve the accuracy of the generated results. Finally, I will introduce my recent efforts to improve traditional AI tasks such as captioning and ranking with reinforcement learning.
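To give a flavor of the two-branch matching idea mentioned above, here is a minimal NumPy sketch: each modality gets its own projection into a shared joint space, and a margin ranking loss pushes matching image-caption pairs closer than mismatched ones. All dimensions, the single-layer branches, and the margin value are illustrative assumptions; the models discussed in the talk are deeper networks trained end to end, not this toy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes: 512-d image features (e.g. from a CNN),
# 300-d text features (e.g. averaged word vectors), a 128-d joint space.
D_IMG, D_TXT, D_JOINT = 512, 300, 128

# Each branch is sketched as a single linear projection; real two-branch
# networks stack several layers with nonlinearities per branch.
W_img = rng.standard_normal((D_IMG, D_JOINT)) * 0.01
W_txt = rng.standard_normal((D_TXT, D_JOINT)) * 0.01

def embed(x, W):
    """Project features into the joint space and L2-normalize them."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def triplet_loss(img, pos_txt, neg_txt, margin=0.2):
    """Margin ranking loss: a matching caption should score higher than a
    non-matching one by at least `margin` (similarity = dot product of
    unit vectors, i.e. cosine similarity)."""
    zi = embed(img, W_img)
    zp = embed(pos_txt, W_txt)
    zn = embed(neg_txt, W_txt)
    s_pos = np.sum(zi * zp, axis=-1)
    s_neg = np.sum(zi * zn, axis=-1)
    return np.maximum(0.0, margin - s_pos + s_neg).mean()

# A tiny batch of random "features" standing in for real encoder outputs.
img = rng.standard_normal((4, D_IMG))
cap_pos = rng.standard_normal((4, D_TXT))
cap_neg = rng.standard_normal((4, D_TXT))
print(float(triplet_loss(img, cap_pos, cap_neg)))
```

In practice the negatives would be hard negatives mined within the batch, and both branches would be trained jointly by backpropagating this loss.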
Dr. Liwei Wang is a Senior Researcher at Tencent AI Lab in Seattle, USA. His research focuses on artificial intelligence, covering computer vision, natural language processing, deep learning, and reinforcement learning. He received his PhD in computer science from the University of Illinois at Urbana-Champaign in 2018, where he worked with Prof. Svetlana Lazebnik on cross-modal representation learning for general AI tasks and published in top AI conferences and journals.
Enquiries: Ms. Crystal Tam at tel. 3943 8439
For more information, please refer to http://www.cse.cuhk.edu.hk/en/events
**** ALL ARE WELCOME ****