Towards Understanding Generalization in Generative Adversarial Networks

Speaker:
Prof. FARNIA Farzan

Abstract:
Generative Adversarial Networks (GANs) represent a game between two machine players designed to learn the distribution of observed data.

Since their introduction in 2014, GANs have achieved state-of-the-art performance on a wide array of machine learning tasks. However, their success has been observed to depend heavily on the minimax optimization algorithm used for their training. This dependence is commonly attributed to the convergence speed of the underlying optimization algorithm. In this seminar, we focus on the generalization properties of GANs and present theoretical and numerical evidence that the minimax optimization algorithm also plays a key role in the successful generalization of the learned GAN model from training samples to unseen data. To this end, we analyze the generalization behavior of standard gradient-based minimax optimization algorithms through the lens of algorithmic stability. Using this framework, we compare the generalization performance of standard simultaneous-update and non-simultaneous-update gradient-based algorithms. Our theoretical analysis suggests the superiority of simultaneous-update algorithms in achieving a smaller generalization error for the trained GAN model.

Finally, we present numerical results demonstrating the role of simultaneous-update minimax optimization algorithms in the proper generalization of trained GAN models.
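As a rough, hypothetical sketch of the distinction between simultaneous-update and non-simultaneous (alternating) gradient updates mentioned in the abstract, the following Python snippet contrasts the two update rules on a toy bilinear objective. It is not the speaker's algorithm or analysis; it only illustrates how each scheme computes its iterates, not their generalization behavior in GAN training.

# Hypothetical toy illustration (not from the talk): contrast the two update
# schemes on the bilinear minimax problem min_x max_y f(x, y) = x * y.

def simultaneous_update(x, y, lr=0.1, steps=100):
    """Both players compute gradients at the same iterate before either moves."""
    for _ in range(steps):
        gx, gy = y, x                      # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy    # descent step for x, ascent step for y
    return x, y

def alternating_update(x, y, lr=0.1, steps=100):
    """The max player reacts to the min player's freshly updated iterate."""
    for _ in range(steps):
        x = x - lr * y                     # min player moves first
        y = y + lr * x                     # max player uses the new x
    return x, y

print(simultaneous_update(1.0, 1.0))
print(alternating_update(1.0, 1.0))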

Biography:
Farzan Farnia is an Assistant Professor of Computer Science and Engineering at The Chinese University of Hong Kong. Prior to joining CUHK, he was a postdoctoral research associate at the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, from 2019 to 2021. He received his master’s and PhD degrees in electrical engineering from Stanford University and his bachelor’s degrees in electrical engineering and mathematics from Sharif University of Technology. At Stanford, he was a graduate research assistant at the Information Systems Laboratory, advised by Professor David Tse. Farzan’s research interests span statistical learning theory, information theory, and convex optimization. He received the Stanford Graduate Fellowship (Sequoia Capital Fellowship) from 2013 to 2016 and the Numerical Technology Founders Prize as the second top performer in Stanford’s electrical engineering PhD qualifying exams in 2014.

Enquiries: Miss Karen Chan at Tel. 3943 8439

Date: Oct 20, 2021

Time: 3:00 pm - 4:00 pm

Location: Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
