Title: When Robust Deep Learning Meets Noisy Supervision
Date: June 27, 2019 (Thursday)
Time: 3:00 pm - 4:00 pm
Venue: Room 121, 1/F, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Shatin, N.T.
Speaker: Dr. Bo Han

Abstract:

It is challenging to train deep neural networks robustly with noisy labels, because the capacity of deep networks is so high that they can completely fit these noisy labels. In this talk, I will introduce three orthogonal techniques for robust deep learning with noisy labels: a data perspective (estimating the noise transition matrix), a training perspective (training on selected samples), and a regularization perspective (conducting scaled stochastic gradient ascent).

First, as an approximation of real-world corruption, noisy labels can be modeled as ground-truth labels corrupted by an unknown noise transition matrix, so the accuracy of classifiers can be improved by estimating this matrix. We present a human-assisted approach called “Masking”. Masking conveys human cognition of invalid class transitions, and thereby speculates the structure of the noise transition matrix. Given this structure information, we only need to learn the noise transition probabilities, which reduces the estimation burden.

Second, motivated by the memorization effect of deep networks, which shows that networks fit clean instances first and noisy ones later, we present a new paradigm called “Co-teaching” that combats even extremely noisy labels. We train two networks simultaneously. In each mini-batch, each network first filters out likely-noisy instances based on the memorization effect, then feeds the remaining instances to its peer network for updating the peer’s parameters. To tackle the consensus issue in Co-teaching, we propose a robust learning paradigm called “Co-teaching+”, which bridges the “Update by Disagreement” strategy with the original Co-teaching.

Third, deep networks inevitably memorize some noisy labels, which degrades their generalization. We propose a meta algorithm called “Pumpout” to overcome this memorization problem. By using scaled stochastic gradient ascent, Pumpout actively squeezes the negative effects of noisy labels out of the training model, instead of passively waiting for these effects to be forgotten. We leverage Pumpout to robustify two representative methods: MentorNet and Backward Correction.
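To make the first perspective concrete, here is a minimal sketch of how a noise transition matrix relates clean and noisy label distributions; the 3-class matrix T and its structural zeros are hypothetical, illustrating the Masking idea that human knowledge of invalid class transitions fixes parts of the matrix in advance, leaving fewer entries to estimate:

```python
import numpy as np

# Hypothetical 3-class noise transition matrix T, where T[i, j] is the
# probability that a ground-truth label i is flipped to noisy label j.
# The structural zeros encode human knowledge of invalid transitions
# (the "Masking" idea), so only the nonzero entries need estimating.
T = np.array([
    [0.8, 0.2, 0.0],   # class 0 may be confused with class 1, never 2
    [0.1, 0.8, 0.1],
    [0.0, 0.3, 0.7],   # class 2 may be confused with class 1, never 0
])

def noisy_posterior(clean_probs, T):
    """Map clean class probabilities to noisy-label probabilities."""
    return clean_probs @ T

clean = np.array([1.0, 0.0, 0.0])   # an instance truly of class 0
print(noisy_posterior(clean, T))    # row 0 of T: [0.8, 0.2, 0.0]
```

With T estimated, a classifier trained on noisy labels can be corrected by inverting this mapping, which is why a good estimate of T improves accuracy.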
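The Co-teaching sample exchange described above can be sketched as follows; this toy version, with made-up per-instance losses and a hypothetical `keep_ratio` parameter, shows only the selection-and-swap logic, not full network training:

```python
import numpy as np

def co_teaching_step(loss_a, loss_b, keep_ratio):
    """One mini-batch of the Co-teaching exchange (sketch).

    Each network ranks the batch by its own per-instance loss and keeps
    the small-loss (likely clean) fraction; the kept indices are handed
    to the *peer* network, which updates its parameters on them.
    """
    n_keep = int(len(loss_a) * keep_ratio)
    clean_for_b = np.argsort(loss_a)[:n_keep]  # A selects, B trains on these
    clean_for_a = np.argsort(loss_b)[:n_keep]  # B selects, A trains on these
    return clean_for_a, clean_for_b

# Toy losses for a batch of 6 instances under two networks.
loss_a = np.array([0.1, 2.5, 0.3, 1.9, 0.2, 3.0])
loss_b = np.array([0.2, 2.0, 0.1, 2.2, 0.4, 2.8])
print(co_teaching_step(loss_a, loss_b, keep_ratio=0.5))
```

Because the two networks start from different initializations, they make different mistakes, so the cross-network exchange filters noise more effectively than either network selecting samples for itself.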
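The scaled-gradient-ascent idea behind Pumpout can be sketched on a single parameter update; the learning rate, the scaling factor `gamma`, and the boolean noise judgment are all hypothetical stand-ins, not values from the paper:

```python
import numpy as np

def pumpout_update(w, grad, is_noisy, lr=0.1, gamma=0.5):
    """Pumpout-style update (sketch): gradient descent on instances
    judged clean, scaled gradient *ascent* on instances judged noisy,
    actively un-learning what was memorized from them.
    """
    if is_noisy:
        return w + lr * gamma * grad   # scaled ascent: squeeze out
    return w - lr * grad               # ordinary descent: learn

w = np.array([1.0, -1.0])
g = np.array([0.2, 0.4])
print(pumpout_update(w, g, is_noisy=False))  # descent step
print(pumpout_update(w, g, is_noisy=True))   # scaled ascent step
```

The scaling keeps the ascent step smaller than the descent step, so un-learning noisy labels does not destabilize training on the clean ones.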

Speaker’s Bio: 

Bo Han is a postdoctoral fellow at the RIKEN Center for Advanced Intelligence Project (RIKEN-AIP), advised by Prof. Masashi Sugiyama. He will be a visiting postdoctoral fellow at the Montreal Institute for Learning Algorithms (MILA). He pursued his Ph.D. degree in Computer Science at the University of Technology Sydney, advised by Prof. Ivor W. Tsang and Prof. Ling Chen. He was a research intern at RIKEN-AIP, working with Prof. Masashi Sugiyama and Dr. Gang Niu. His current research interests lie in machine learning and its real-world applications. His long-term goal is to develop intelligent systems that can automatically learn from a massive volume of complex data, i.e., data that is weakly-supervised, adversarial, or private, in forms such as single-/multi-label, ranking, graph, and demonstration data. He has published 15 journal articles and conference papers, including articles in MLJ, TNNLS, and TKDE and papers at NeurIPS, ICML, IJCAI, and ECML. He has served on the program committees of NeurIPS, ICML, ICLR, AISTATS, UAI, AAAI, and ACML. He received the UTS Research Publication Award (2017 and 2018).

Enquiries: Ms. Shirley Lau at tel. 3943 8439

For more information, please refer to http://www.cse.cuhk.edu.hk/en/events