Seminars Archives
October 2024
19 October
9:00 am - 6:00 pm
September 2024
20 September
3:00 pm - 4:00 pm
Machine Learning for Embodied Artificial Intelligence: from Surgical Robotics to Multi-robot Coordination
Location
L1, 1/F, Science Centre (SC L1), CUHK
Category
Seminar Series 2024/2025
09 September
10:00 am - 11:00 am
High-Performance Systems for Graph Analytics
Location
Lecture Theatre 1 (1/F), Lady Shaw Building (LSB)
Category
Seminar Series 2024/2025
06 September
11:00 am - 12:00 pm
Machine Learning in EDA: When and How
Location
L1, 1/F, Science Centre (SC L1), CUHK
Category
Seminar Series 2024/2025
06 September
2:00 pm - 3:00 pm
Exact and Optimal Dynamic Parameterized Subset Sampling on Bounded Precision Machines
Location
Room 803, 8/F, William M W Mong Engineering Building, CUHK
Category
Seminar Series 2024/2025
04 September
11:00 am - 12:00 pm
Artificial Intelligence: Past, Present and Future
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2024/2025
August 2024
09 August
10:00 am - 11:00 am
MVSG-based Compact Models for GaN Devices
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor WEI Lan
Associate Professor
University of Waterloo
Abstract:
Given its high mobility, high breakdown voltage and decent thermal conductivity, GaN technologies have shown great promise for high-power high-frequency (HP-HF) applications, rapidly rising as a front runner for mm-wave to THz analog/RF circuits for IoT and 5G/6G wireless communication. Meanwhile, GaN is also heavily explored for power electronic applications such as fast charging, data centers, and electric vehicles. As GaN technology continues to improve, the challenges of high design cost and sub-optimal system performance emerge as bottlenecks preventing wide-scale deployment. Accurate, scalable and efficient compact models are key to overcoming these challenges.
This presentation will provide a brief overview of the family of MVSG GaN compact models, including models for GaN HEMTs, GaN multi-channel diodes and GaN transmission-line resistors. The model formulation and various features will be introduced. Application examples will also be demonstrated, showing the potential of this group of physics-based compact models.
Biography:
Prof. Lan Wei received her B.S. in Microelectronics from Peking University, China (2001), and her M.S. and Ph.D. in Electrical Engineering from Stanford University, USA (2007 and 2010, respectively). She is currently an Associate Professor at the University of Waterloo, Canada. She has extensive experience in device-physics-based compact modeling for silicon and GaN technologies, device-circuit interactive design and optimization, integrated nanoelectronic systems with low-dimensional materials, and cryogenic CMOS device modeling and circuit design for quantum computing. She has authored/co-authored more than 90 peer-reviewed publications and served on technical program committees including IEDM, ICCAD, DATE, ISQED, BCICTS, etc.
Enquiries:
Professor YU Bei (byu@cse.cuhk.edu.hk)
Mr. WONG O Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
01 August
3:30 pm - 4:30 pm
Decision trees in a formal world: machine learning (with constraints), controller verification, and unsatisfiability proofs for graph problems
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. Emir Demirovic
Assistant Professor
Delft University of Technology (TU Delft)
Abstract:
Decision trees are an effective and concise way of conveying information, easily understood by virtually everyone regardless of the topic. Given the recent interest in explainable AI and related fields, decision trees stand out as a popular choice. From the algorithmic side, the unique structure of decision trees is interesting since it may be exploited to obtain much more efficient algorithms than structure-oblivious approaches.
In this talk, I will give an overview of the research we have been doing on leveraging the decision tree structure from three distinct angles: 1) machine learning with constraints, where the goal is to construct the optimal regression/decision tree representing tabular data while potentially respecting different types of constraints such as fairness; 2) controller/policy verification, where the aim is to verify that a decision tree controller satisfies desired properties in continuous time; and 3) explaining the unsatisfiability of a combinatorial optimisation problem on a graph, by representing proofs of unsatisfiability as a tree using graph-specific concepts. We show that for each of these problems, exploiting the decision tree structure is important in obtaining orders-of-magnitude runtime improvements and/or interpretability.
The talk summarises about half a dozen of our papers (AAAI’21/24, JMLR’22, NeurIPS’22/23, ICML’23/24) and is meant to be accessible to all backgrounds, with plenty of time for discussion!
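For readers less familiar with decision trees, the sketch below shows a greedily grown, depth-limited scikit-learn tree for contrast with the provably optimal trees discussed in the talk; the Iris dataset and depth budget are illustrative choices, not from the talk.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Greedy (CART-style) induction: scikit-learn grows the tree one split at a
# time by local impurity reduction, whereas the talk's methods search for
# provably optimal trees under size budgets and side constraints (e.g., fairness).
X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(clf))  # a depth-2 tree is small enough to read directly
```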
Biography:
Emir Demirovic is an assistant professor at TU Delft (Netherlands). He leads the Constraint Solving (“ConSol”) research group, which advances combinatorial optimisation algorithms for a wide range of (real-world) problems, and co-directs the explainable AI in transportation lab (“XAIT”) as part of the Delft AI Labs. Prior to his appointment at TU Delft, Emir worked at the University of Melbourne, Vienna University of Technology, National Institute of Informatics (Tokyo), and at a production planning and scheduling company.
The focus point of Emir’s current work is solving techniques based on constraint programming, optimising decision trees, and explainable methods for combinatorial optimisation. He is also interested in industrial applications, robust/resilient optimisation, and the integration of optimisation and machine learning. He publishes in leading AI conferences (e.g., AAAI, NeurIPS) and specialised venues (e.g., CP, CPAIOR), attends scientific events such as Dagstuhl seminars, Lorentz workshops, and the Simons-Berkeley programme, and frequently organises incoming and outgoing visits, e.g., EPFL, ANITI/CNRS, CUHK, Monash University, TU Wien.
Enquiries:
Professor LEE Ho Man Jimmy (jlee@cse.cuhk.edu.hk)
Mr. WONG O Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
July 2024
12 July
2:30 pm - 3:30 pm
Data Science at Old Dominion University
Location
Room 1027, 10/F, Ho Sin-Hang Engineering Building (SHB)
Category
Seminar Series 2023/2024
Speaker:
Professor Frank Liu
Professor and Inaugural Director, School of Data Science
Old Dominion University
Abstract:
Old Dominion University is a large public university located on the southeastern coast of Virginia in the US. First established as a branch of the College of William & Mary, its roots can be traced to 17th-century England. The School of Data Science is a newly established academic unit at Old Dominion University that encourages interdisciplinary research and education across the campus, as well as the region. I will give a brief introduction to the data science program, followed by an open floor for Q&A and discussion.
Biography:
Frank Liu is a Professor of Computer Science and ECE at Old Dominion University. He is the founding director of the School of Data Science, with research experience spanning academia, national laboratories, and corporate research labs. He is a Fellow of the IEEE.
Enquiries:
Professor YOUNG Fung Yu (fyyoung@cse.cuhk.edu.hk)
Mr. WONG O Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
June 2024
25 June
2:30 pm - 3:30 pm
Generative AI in Drug Development
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. Chang-Yu (Kim) HSIEH
Professor, College of Pharmaceutical Sciences
Zhejiang University
Abstract:
In recent years, generative AI has gained significant traction as a tool for designing novel molecules for therapeutic purposes. Advanced deep learning techniques have been increasingly adapted for drug design, yielding varying levels of success. In this seminar, I will provide an overview of this emerging field, highlighting the key challenges in applying generative AI to drug design and presenting our proposed solutions. Specifically, we combine principles from physics and chemistry with deep learning methods to discover more realistic drug candidates within the vast chemical space. Our results are supported by benchmark studies and validated through experimental wet lab testing.
Biography:
Dr. Chang-Yu (Kim) Hsieh is the QiuShi Engineering Professor at the College of Pharmaceutical Sciences, Zhejiang University. Before joining Zhejiang University, he led the Theory Division at Tencent Quantum Lab in Shenzhen, focusing on AI and quantum simulation for drug and material discovery. Prior to that, he was a postdoctoral researcher in the Department of Chemistry at MIT. His primary research interests lie in leveraging advanced computing technologies, including AI and quantum computing, to simulate and model material and molecular properties.
Enquiries:
Professor HENG Pheng Ann (pheng@cse.cuhk.edu.hk)
Ms. NG Man Nga Vivien (vivien@cse.cuhk.edu.hk)
Mr. WONG O Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
21 June
11:00 am - 12:00 pm
Constraint Transformation for Faster SMT Solving
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor ZHANG Qirun
Assistant Professor, School of Computer Science
Georgia Institute of Technology
Abstract:
SMT formulas are first-order formulas extended with various theories. SMT solvers are fundamental tools for many program analysis and software engineering problems. The effectiveness and scalability of SMT solvers influence the performance of the underlying client analyzers. The most popular approach to improving SMT solving is by developing new constraint-solving algorithms. In this talk, we will discuss a new perspective on improving SMT solving via compiler optimization. Our basic idea involves translating SMT formulas to LLVM IR and leveraging LLVM optimization passes to simplify the IR. Then, we translate the simplified IR back to SMT formulas. In addition, this strategy can be extended to enhance the solving of unbounded SMT theories by utilizing their bounded counterparts.
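As a rough, self-contained illustration of the simplify-before-solve idea (an editorial sketch using Z3's built-in simplifier; the talk's actual pipeline round-trips through LLVM IR and its optimization passes):

```python
from z3 import Ints, And, Solver, simplify

x, y = Ints("x y")
# A formula with algebraic redundancy, analogous to unoptimized IR:
f = And(x + 0 == y * 1, x > 2, x > 1)  # x > 1 is subsumed by x > 2
g = simplify(f)  # rewrite pass, loosely akin to a compiler optimization
print(g)         # arithmetic identities removed: x + 0 == y*1 becomes x == y

s = Solver()
s.add(g)          # solve the simplified formula
print(s.check())  # sat
print(s.model())
```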
Biography:
Qirun Zhang is an Assistant Professor in the School of Computer Science at Georgia Tech. His general research areas are programming languages and software engineering, focusing on developing new program analysis frameworks to improve software reliability. He received a PLDI 2020 Distinguished Paper Award, an FSE 2023 Distinguished Paper Award, an NSF CAREER Award, and an Amazon Research Award in Automated Reasoning. He served on the program committees of FSE, ICSE, ISSTA, OOPSLA, PLDI, and POPL.
Enquiries:
Professor LYU Rung Tsong Michael (lyu@cse.cuhk.edu.hk)
Ms. NG Man Nga Vivien (vivien@cse.cuhk.edu.hk)
Mr. WONG O Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
19 June
10:30 am - 11:30 am
Model Evaluation and Test-time Methods in Medical Image Segmentation
Location
Room 402, 4/F, William M W Mong Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. ZHANG Yizhe
Associate Professor, School of Computer Science and Engineering
Nanjing University of Science and Technology
Abstract:
With advancements in deep learning and AI techniques, medical image segmentation has experienced rapid development over the past decade. Modern DL-based models, utilizing large labeled datasets, often produce impressive benchmark results. However, practical issues, such as reliability and trustworthiness, persist when these models are implemented in hospitals and medical facilities.
This talk addresses two related aspects of medical image segmentation for improving model deployment: model evaluation and test-time methods. First, we will discuss our recent work on deployment-centric model evaluation, evaluation of foundation models and related techniques. Next, we will cover a series of test-time methods that we have developed to improve video segmentation consistency, enhance the quality of medical image segmentation, and more recently, advance segmenting anything in medical images.
Finally, we will briefly highlight several other projects from my group and discuss directions in medical image segmentation research that we find promising and important.
Biography:
Yizhe Zhang, Ph.D., is an associate professor at Nanjing University of Science and Technology. He received his Ph.D. from the University of Notre Dame in the United States. Before returning to Nanjing, he was a senior research engineer at Qualcomm AI Research, San Diego, where he worked on efficient video segmentation and the spatiotemporal consistency of segmentation. He has conducted research on topics such as active learning, semi-supervised learning, model design, training and evaluation in medical image segmentation. As the first author, he has published papers in conferences and journals including MICCAI, Medical Image Analysis, IEEE TMI, BIBM, ICCV, AAAI, and WACV. As a key contributor, he was involved in biomedical image modeling and analysis work that won the 2017 Cozzarelli Prize awarded by the National Academy of Sciences.
Enquiries:
Professor HENG Pheng Ann (pheng@cse.cuhk.edu.hk)
Ms. NG Man Nga Vivien (vivien@cse.cuhk.edu.hk)
Mr. WONG O Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
13 June
3:30 pm - 4:30 pm
When Apps Become Super: Dissecting the Security Risks of Super Apps
Location
Room 1021 & 1021B, 10/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Mr. YANG Yuqing
Ph.D. candidate, Department of Computer Science and Engineering
The Ohio State University
Abstract:
The Super App computing paradigm, debuted in 2017 by the world's social computing giant WeChat, has revolutionized mobile app architecture. By integrating a standalone execution engine for lightweight miniapp packages, super apps allow third-party developers to deliver customized services to billions of super app users. At the same time, the powerful features provided by super apps bring imminent risk from attackers, who actively attempt to exploit the super app ecosystem, inflicting privacy breaches and financial losses on billions of users, as well as millions of developers.
In this talk, Yuqing will dissect the super app paradigm with a specific focus on the security risks from super app vulnerabilities and miniapp malware. First, he will discuss communication channel vulnerabilities we identified in front-ends and back-ends, followed by a dissection of miniapp malware that evades miniapp vetting, and malicious behaviors against the platform before and after the miniapp vetting process. Finally, he will discuss mitigation countermeasures and open problems for improving security and privacy in the realm of super apps.
Biography:
Yuqing Yang is a third-year Ph.D. candidate in the Department of Computer Science and Engineering at The Ohio State University. His research primarily focuses on vulnerability and malware detection in mobile and web security, particularly in super apps. His papers have been published in prestigious conferences, such as ACM CCS, SIGMETRICS, and ICSE. He has been a reviewer for many top-tier journals and conferences, including TIFS, TOSEM, DSN, USENIX Security, IEEE Security & Privacy, and ACM CCS. His research has also been acknowledged by top super app vendors, including Tencent and Baidu.
Enquiries:
Professor MENG Wei (wei@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
04 June
3:00 pm - 4:00 pm
On Physics-Inspired Generative Models
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. XU Yilun
Ph.D., Massachusetts Institute of Technology (MIT)
Research Scientist, NVIDIA Research (from July 2024)
Abstract:
Physics-inspired generative models such as diffusion models constitute a powerful family of generative models. The advantages of models in this family come from a relatively stable training process and high capacity, yet a number of improvements remain possible. In this talk, I will discuss the enhancement and design of physics-inspired generative models. I will first present a sampling algorithm that combines the best of previous samplers, greatly accelerating the generation speed of text-to-image Stable Diffusion models. Additionally, I will discuss sampling methods that promote diversity in finite samples by adding mutual repulsion forces between samples in the generative process. Secondly, I will discuss a training framework that introduces learnable discrete latents into continuous diffusion models. These latents simplify complex noise-to-data mappings and reduce the curvature of generative trajectories. Finally, I will introduce Poisson Flow Generative Models (PFGM), a new generative model arising from electrostatic theory that rivals leading diffusion models. The extended version, PFGM++, places diffusion models and PFGM under the same framework and introduces new, better models. Several algorithms discussed in this talk are state-of-the-art across standard benchmarks.
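For background, the standard score-based diffusion formulation that such samplers discretize (textbook background, not the speaker's specific contribution) is:

```latex
% Forward noising SDE and its reverse-time counterpart; the score
% \nabla_x \log p_t(x) is what the trained network approximates, and
% samplers differ in how they discretize the reverse dynamics.
\begin{align}
  \mathrm{d}x &= f(x, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w, \\
  \mathrm{d}x &= \left[ f(x, t) - g(t)^{2}\,\nabla_x \log p_t(x) \right]\mathrm{d}t
                 + g(t)\,\mathrm{d}\bar{w}.
\end{align}
```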
Biography:
Yilun Xu is an incoming research scientist at NVIDIA Research. He obtained his Ph.D. from MIT CSAIL in 2024, and his B.S. from Peking University in 2020. His research focuses on machine learning, with a current emphasis on the new family of physics-inspired generative models, as well as the development of training and sampling algorithms for diffusion models. Previously, he conducted research aimed at bridging information theory and machine learning.
Enquiries:
Professor HENG Pheng Ann (pheng@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
May 2024
08 May
10:00 am - 11:00 am
Revisiting Constraint Solving – From Non-Binary to Binary
Location
Room 1021 & 1021B, 10/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor Roland Yap
National University of Singapore (NUS)
Zoom:
https://cuhk.zoom.us/j/94749652768
Meeting ID: 947 4965 2768
(Students must login with CUHK account, i.e., @link.cuhk.edu.hk, for valid attendance record)
Abstract:
Solving finite domain constraints, e.g., Constraint Satisfaction Problems (CSPs), is an intractable problem which is nevertheless of great practical significance. Due to this intractability, in practice, inference techniques, usually local consistencies, are used, which combine neatly with search heuristics. In general, a constraint may be either a binary or a non-binary relation, and the typical consistency used is either Arc Consistency (for binary constraints) or Generalised Arc Consistency (for non-binary constraints). The natural form for many constraints is non-binary (having more than two variables). However, it is known that binary CSPs are also NP-complete. For a long time, most efforts have been placed on non-binary techniques as they were believed to be more efficient.
In this talk, we will revisit the question of binary vs non-binary. We first show the reason binary approaches were believed to be inefficient. Then we show that this belief is mistaken, and that binary approaches, through better encodings and algorithms, can outperform existing non-binary techniques. We will discuss improvements to old encodings as well as present new encodings and associated algorithms.
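For concreteness, a textbook AC-3 sketch of arc consistency on a binary CSP is below (an illustrative toy; the talk's encodings and algorithms are far more efficient than this version):

```python
from collections import deque

def ac3(domains, constraints):
    """Enforce arc consistency on a binary CSP (textbook AC-3).
    domains: {var: set(values)}; constraints: {(x, y): pred(vx, vy)}."""
    arcs = deque(constraints)
    while arcs:
        x, y = arcs.popleft()
        pred = constraints[(x, y)]
        # Prune values of x that have no supporting value in y's domain.
        pruned = {vx for vx in domains[x]
                  if not any(pred(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            if not domains[x]:
                return False  # domain wipe-out: no solution
            # Re-examine arcs whose consistency may depend on x's domain.
            arcs.extend(arc for arc in constraints if arc[1] == x)
    return True

# Toy binary CSP: x < y with x, y in {1, 2, 3}; AC prunes x=3 and y=1.
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
cons = {("x", "y"): lambda a, b: a < b,
        ("y", "x"): lambda a, b: b < a}
print(ac3(doms, cons), doms)
```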
Biography:
Roland Yap is an Associate Professor in the Department of Computer Science, National University of Singapore. He received his PhD from Monash University, Australia. He has done pioneering work in the development of Constraint Logic Programming languages and the field of Constraint Programming. Together with Christian Bessiere, Jean-Charles Régin, and Yuanlin Zhang, his work on (Generalized) Arc Consistency was awarded the AI Journal Classic Paper Award in 2022. His current research interests include AI, Big Data, Constraints, Operating Systems, Programming Languages, Security and Social Networks.
Enquiries:
Professor Jimmy Lee (jlee@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
07 May
3:30 pm - 4:30 pm
Hardening Software Against Memory Errors
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor Roland Yap
National University of Singapore (NUS)
Abstract:
Memory errors are often the root cause of security vulnerabilities and exploitation in low-level languages such as C/C++. We will first give an overview of the difficulties of dealing with memory errors and existing techniques for detecting and preventing them. This talk will focus on the challenging problem of hardening a closed-source binary against memory errors. We introduce RedFat, a binary rewriter which hardens x86_64 binaries against memory errors. The challenge is that without source code, it is difficult to instrument reliably, and at the binary level much of the semantics of the original code has disappeared. To deal with the missing semantics while still providing as much hardening as possible, RedFat uses a complementary error detection methodology: it combines low-fat pointers with red zones. RedFat makes minimal assumptions about the binary and is able to operate on stripped and non-PIC binaries. It is also language-agnostic and has been evaluated on C/C++/Fortran benchmarks.
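As a didactic illustration of the red-zone half of this design (a toy Python sketch; RedFat itself rewrites x86_64 machine code and pairs red zones with low-fat pointers):

```python
# Toy red-zone check: guard bytes surround each object, and an out-of-bounds
# write that strays into the guard region is caught at the next check.
REDZONE = b"\xde\xad\xbe\xef" * 2   # 8 guard bytes on each side

def rz_alloc(n):
    return bytearray(REDZONE + bytes(n) + REDZONE)

def rz_check(buf):
    if bytes(buf[:len(REDZONE)]) != REDZONE or bytes(buf[-len(REDZONE):]) != REDZONE:
        raise MemoryError("red-zone corruption: out-of-bounds access detected")

buf = rz_alloc(16)
buf[len(REDZONE) + 3] = 0x41    # in-bounds write: passes the check
rz_check(buf)
buf[len(REDZONE) + 16] = 0x42   # one byte past the object: clobbers the guard
rz_check(buf)                   # raises MemoryError
```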
Biography:
Roland Yap is an Associate Professor in the Department of Computer Science, National University of Singapore. He received his PhD from Monash University, Australia. He has done pioneering work in the development of Constraint Logic Programming languages and the field of Constraint Programming. Together with Christian Bessiere, Jean-Charles Régin, and Yuanlin Zhang, his work on (Generalized) Arc Consistency was awarded the AI Journal Classic Paper Award in 2022. His current research interests include AI, Big Data, Constraints, Operating Systems, Programming Languages, Security and Social Networks.
Enquiries:
Professor Jimmy Lee (jlee@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
April 2024
24 April
10:00 am - 11:00 am
General Memory Specialization for Massive Multi-Cores
Location
Zoom
Category
Seminar Series 2023/2024
Speaker:
Mr. WANG Zhengrong
Postdoc researcher
University of California, Los Angeles (UCLA)
Zoom:
https://cuhk.zoom.us/j/97200491693?pwd=Uk1ORHhTQmEzbWJiVDRCMVdzZHpYdz09
Meeting ID: 972 0049 1693 // Passcode: 202400
(Students must login with CUHK account, i.e., @link.cuhk.edu.hk, for valid attendance record)
Abstract:
In the last two decades, computer architects have relied heavily on specialization and scaling up to continue performance and energy efficiency improvements as Moore's law fades away. The former customizes the system for particular program behaviors (e.g., the neural engine in Apple chips to accelerate machine learning), while the latter has evolved into massive multi-core systems (e.g., 96 cores in the AMD EPYC 9654 CPU).
This works until we hit the "memory wall": as modern systems continue to scale up, data movement has increasingly become the bottleneck. Unfortunately, conventional memory systems are extremely inefficient at reducing data movement, suffering from excessive NoC traffic and limited off-chip bandwidth to bring data to the computing cores.
These inefficiencies originate from the essentially core-centric view: the memory hierarchy simply reacts to individual requests from the core but is unaware of high-level program behaviors. This leaves the hardware oblivious, as it must guess highly irregular and transient memory semantics from the primitive memory abstraction of simple load and store instructions.
This calls for a fundamental redesign of the memory interface to express rich memory semantics, so that the memory system can promptly adjust to evolving program behaviors and efficiently orchestrate data and computation together throughout the entire system. For example, simple computations can be directly associated with memory requests and naturally distributed across the memory hierarchy without bringing all the data to the core. More importantly, the new interface should integrate seamlessly with conventional von Neumann ISAs, enabling end-to-end memory specialization while maintaining generality and transparency. Overall, in this talk, I will discuss our solution to enable general memory specialization for massive multi-core systems, which unlocks order-of-magnitude speedup and energy-efficiency gains on plain-C programs. Such data-computation orchestration is the key to continuing the performance and energy efficiency scaling.
Biography:
Zhengrong is currently a post-doc researcher at UCLA. His research aims to build general, automatic, and end-to-end near-data acceleration by revolutionizing the orchestration between data and computation throughout the entire system. His open-source work has been accepted by multiple top-tier conferences in computer architecture, including ISCA, MICRO, ASPLOS, HPCA, and awarded Best Paper Runner-Ups as well as IEEE Micro Top Pick Honorable Mentions. He is also one of the maintainers of gem5, a widely used cycle accurate simulator in computer architecture.
Enquiries:
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
18 April
11:30 am - 12:30 pm
Towards Generalizable and Robust Multimodal AI for Healthcare
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. CHEN Cheng
Postdoctoral Research Fellow
Harvard Medical School
Abstract:
Artificial Intelligence (AI) is catalyzing a paradigm shift in healthcare, promising to reshape the landscape of patient care. At the heart of this transformation is medical imaging, where AI-enabled technologies hold substantial promise for precise and personalized image-based diagnosis and treatment. Despite these advances, such models often underperform in real-world deployment, particularly due to the heterogeneous data distributions and varying modalities in healthcare applications. In this talk, I will introduce our work dedicated to tackling these real-world challenges to advance model generalizability and multimodal robustness. First, I will show how we can leverage generative networks and model adaptation to generalize models under data distribution shifts. Next, I will describe how to achieve robust multimodal learning with missing modalities and with imaging and non-imaging clinical information. Finally, I will present our work that extends to large-scale datasets and more diverse modalities based on foundation models for generalizable multimodal representation learning.
Biography:
Dr. Cheng CHEN is a postdoc research fellow at the Center for Advanced Medical Computing and Analysis, Harvard Medical School. She obtained her Ph.D. degree in Computer Science and Engineering at The Chinese University of Hong Kong in 2021. She received her M.S. and B.S. degrees from Johns Hopkins University and Zhejiang University, respectively. Her research interests lie in the interdisciplinary area of AI and healthcare, with a focus on generalizable, robust, and multimodal medical image analysis. She has over 25 papers published at top AI and medical imaging venues, reaching over 2300 Google Scholar citations with an h-index of 16. Her first-authored papers have been recognized as an ESI “Highly cited paper”, selected as oral presentations, and received travel awards from AAAI and MICCAI. She has been named one of the Global Top 80 Chinese Young Female Scholars in AI and won the MICCAI Federated Brain Tumor Segmentation Challenge. She also serves as Area Chair of MICCAI 2024 and reviewer for multiple top journals and conferences.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
17 April
11:30 am - 12:30 pm
Cryptographic Metamorphosis: Bridging Realms and Fostering Futures
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. XIAO Liang
Postdoctoral Fellow
NTT Research
Abstract:
Modern cryptography has evolved beyond its initial focus on information privacy and has become deeply integrated into various aspects of computer science. An extraordinary example in this regard is the “love-hate” relationship between Cryptography and Quantum Computing, which stands among the central topics of today’s theoretical computer science (TCS) research. On the one hand, quantum techniques (e.g., Shor’s algorithm) jeopardize the foundational assumptions for Cryptography; on the other hand, the unique features of quantum information (e.g., Heisenberg’s Uncertainty Principle) enable new cryptographic functionalities that were provably impossible in a classical world. A key focus of this talk will be my effort in re-establishing the quantum theory for central cryptography tasks like Secure Multi-Party Computation (MPC) and Zero-Knowledge (ZK) Proofs, underscoring the role of this interdisciplinary field as a fertile ground for both classical and quantum TCS innovations.
As for the “classical” aspect of my research, I will discuss my pursuits in designing concurrently-secure, black-box MPC (and ZK) protocols, addressing the inherent tension between security and efficiency. I will also talk about my passion for leveraging cryptography for system/network security tasks, instantiating my belief in bridging theoretical research with real-world applications.
The presentation will culminate with an outline of a future research agenda, as well as my aspirations to contribute to the CSE department, including the designs of a new course on mathematical tools for TCS, a new course on quantum cryptography, and a semi-annual “Crypto-Plus” seminar in Hong Kong.
Biography:
Xiao LIANG is currently a Postdoctoral Fellow at NTT Research, specializing in Cryptography. Prior to this role, he gained valuable postdoctoral experience at Rice University and Indiana University Bloomington. His expertise encompasses Zero-Knowledge Protocols, Secure Multi-Party Computation, Non-Malleability, and Digital Signature, with a deliberate effort to establish connections with adjacent domains like System/Network Security. A notable highlight of Xiao’s work is the emphasis on the convergence of cryptography and quantum computing, contributing to the dynamic interdisciplinary advancements in this burgeoning field. His research has consistently resulted in publications at esteemed conferences for both cryptography and theoretical computer science in general, such as FOCS, CRYPTO, and ICALP. Xiao Liang holds a Ph.D. in Computer Science and an M.S. in Applied Mathematics, both earned from Stony Brook University, and a B.S. in Economics from Beijing Institute of Technology.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
12 April
11:30 am - 12:30 pm
Log-Driven Intelligent Software Reliability Engineering
Location
ERB404, William M W Mong Engineering Building (ERB)
Category
Seminar Series 2023/2024
Speaker:
Ms. HUO Yintong
Ph.D. candidate
The Chinese University of Hong Kong
Abstract:
Software systems serve various aspects of our daily activities, from search engines to communication platforms. Traditional software reliability engineering (SRE) practices, which rely heavily on manual effort, encounter challenges due to the 1) sheer volume, 2) high variety, and 3) rapid evolution of modern software. My research is centered on enhancing software reliability through automated fault management. In this talk, I will present my work on intelligent SRE, with a focus on utilizing log data for the three major fault management phases: fault prevention, fault removal, and fault tolerance.
The talk starts with a semantic-aware log analysis framework tailored to identifying system failures during software operation, so that proper fault tolerance mechanisms can be invoked. The resulting work, SemParser, is inspired by an insightful understanding of the distinctions between human-written language (log events) and machine-generated tokens (variables). Then, we will discuss “AutoLog”, a novel log sequence simulation framework that leverages program analysis to overcome the limitation of insufficient log data. Unlike existing log data gathered from a limited number of workloads, AutoLog for the first time acquires far more comprehensive and scalable log datasets, paving the way for proactive and practical anomaly detection solutions. Finally, I will discuss my recent research progress in LLM-powered SRE, which demonstrates the possibility of new designs that integrate LLMs into resolving real-world software engineering challenges.
My past research has showcased the effectiveness of log-driven methods in advancing SRE. To conclude, I will outline my research roadmap with various directions, which extends from intelligent log operations to diverse applications in software development.
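To ground the distinction between human-written event templates and machine-generated variables that this line of work builds on, here is a toy template-extraction sketch (illustrative regexes and log lines, not SemParser's actual method):

```python
import re

LOGS = [
    "Connection from 10.0.0.5 closed after 120 ms",
    "Connection from 10.0.0.9 closed after 87 ms",
    "Disk /dev/sda1 usage at 91 percent",
]

VAR = r"\d+\.\d+\.\d+\.\d+|/\S+|\b\d+\b"  # IPs, paths, numbers

def parse(line):
    # Replace variable-looking tokens with placeholders, leaving the
    # natural-language event template behind.
    variables = re.findall(VAR, line)
    template = re.sub(VAR, "<*>", line)
    return template, variables

for line in LOGS:
    print(parse(line))
# The first two lines collapse to one template:
# "Connection from <*> closed after <*> ms"
```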
Biography:
HUO Yintong is currently a Ph.D. candidate at The Chinese University of Hong Kong, advised by Prof. Michael R. Lyu. Her research area is intelligent Software Engineering (SE), with a focus on software reliability through automated software development, testing, and operations. She has published 12 papers in top-tier SE conferences, including ICSE, FSE, ASE, ISSTA, and ISSRE. She is the recipient of an IEEE Open Software Services Award for the LogPAI project (3k+ Stars, 70k+ Downloads).
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
11 April
10:00 am - 11:00 am
Harnessing Game-Theoretic Optimization for Adversarial, Hierarchical, and Scalable Machine Learning Models
Location
Zoom
Category
Seminar Series 2023/2024
Speaker:
Dr. LU Songtao
Senior Research Scientist
Mathematics and Theoretical Computer Science Department
IBM Thomas J. Watson Research Center
Zoom:
https://cuhk.zoom.us/j/99213225761?pwd=L2FHTkJBaVMxeDVkUENyUGNOZ1hCZz09
Meeting ID: 992 1322 5761 // Passcode: 202400
(Students must login with CUHK account, i.e., @link.cuhk.edu.hk, for valid attendance record)
Abstract:
As machine learning continues to permeate our daily lives with the deployment of large-scale foundational models across diverse domains, we are witnessing an unprecedented era of data collection and exploration through smart devices. This abundance of data holds the potential to bring groundbreaking advancements across numerous industries and disciplines. However, effectively leveraging and safeguarding this wealth of data requires increasingly advanced mathematical techniques.
My research is centered on designing computationally efficient methods backed by theory to drive adversarial, hierarchical, and scalable machine learning models. In this talk, I will delve into my recent work on developing gradient-based optimization algorithms tailored to address game theory-related machine learning problems. Unlike traditional theories focused on convex/concave problems, my focus lies in nonconvex zero-sum games and Stackelberg games, which are essential for tackling nonconvex objective functions prevalent in neural network training. These advancements not only offer theoretical insights into stabilizing iterative numerical algorithms but also provide more generalizable solutions for downstream learning tasks. I will demonstrate the practical significance of these algorithms in addressing real-world machine learning challenges, including adversarial attacks, data hyper-cleaning, and automatic speech recognition. Furthermore, I will highlight the broader impact of the proposed learning framework on emerging problems, such as multilingual multitask learning, reinforcement learning with human feedback, and multi-agent RL.
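As a minimal illustration of the gradient-based saddle-point methods the talk studies, the sketch below runs gradient descent-ascent on a toy zero-sum game (a convex-concave quadratic chosen for clarity; the talk's algorithms target nonconvex settings):

```python
# Gradient descent-ascent (GDA) on min_x max_y f(x, y),
# with f(x, y) = x*y + 0.1*x**2 - 0.1*y**2 (convex in x, concave in y).
def f_grad(x, y):
    return y + 0.2 * x, x - 0.2 * y   # (df/dx, df/dy)

x, y, lr = 1.0, 1.0, 0.05
for _ in range(2000):
    gx, gy = f_grad(x, y)
    x -= lr * gx   # minimizing player descends
    y += lr * gy   # maximizing player ascends
print(x, y)  # converges toward the saddle point (0, 0)
```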
Biography:
Songtao Lu is a Senior Research Scientist in the Mathematics and Theoretical Computer Science Department at the IBM Thomas J. Watson Research Center in Yorktown Heights, NY, USA. Additionally, he serves as a principal investigator at the MIT-IBM Watson AI Lab in Cambridge, MA, USA. He obtained his Ph.D. from the Department of Electrical and Computer Engineering at Iowa State University in 2018 and held a Post-Doctoral Associate position at the University of Minnesota Twin Cities from 2018 to 2019. His research primarily focuses on foundational machine learning models and algorithms, with applications in trustworthy learning, meta-learning, and distributed learning. He received the Best Paper Runner-Up Award at UAI in 2022, an Outstanding Paper Award from FL-NeurIPS in 2022, an IBM Entrepreneur Award in 2023, and an IBM Outstanding Research Accomplishment Award. Furthermore, he has multiple papers selected for oral/spotlight/long oral presentations at prestigious machine learning conferences, including ICML, NeurIPS, ICLR, AAAI, and UAI.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
March 2024
27 March
11:30 am - 12:30 pm
Advancing Software Reliability: A Journey from Code to Compiler
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Mr. LI Shaohua
Ph.D. Candidate
ETH Zurich
Abstract:
In today’s digital landscape, software governs every critical aspect of our lives: communication, transportation, finance, healthcare, and many more. Consequently, software reliability emerges as a critical pillar for the functioning of our society. Yet, the intricate process from source code to executable binary, integral to software development and deployment, presents substantial challenges to both reliability and security.
In this talk, I will discuss my research on advancing the reliability of modern software systems by detecting and eliminating various defects in code, code analysis, and code compilation. At the code level, I will present my research on designing a general methodology for detecting unstable code in software. At the code analysis level, I will discuss the robustness of current detection tools and introduce a novel validation framework for solidifying their robustness. At the code compilation level, I will present a data-driven program generation approach for validating compilers. Finally, I will conclude the talk with my vision and future research on building reliable software systems.
Biography:
Shaohua Li is a final-year Ph.D. candidate in the Department of Computer Science at ETH Zurich, advised by Prof. Zhendong Su (https://people.inf.ethz.ch/suz/). His research interests are compilers, programming languages, and software engineering, with a particular emphasis on their reliability and security. His research has led to the discovery and fixing of hundreds of critical issues in well-established software and systems, such as OpenSSL, Address Sanitizer, GCC, LLVM, etc. His research has received recognition from both industry and academia, including the 2022 Meta Security Research Award, the 2023 ACM Distinguished Paper Award at OOPSLA, and the 2024 ACM Distinguished Artifact Award at ASPLOS.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
25 March
11:30 am - 12:30 pm
Designing Algorithms for Massive Graphs
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Mr. CHEN Yu
Postdoc
École Polytechnique Fédérale de Lausanne (EPFL)
Abstract:
As the scale of the problems we want to solve in real life becomes larger, it becomes difficult to store the whole input, or it takes a very long time to read the entire input. In these cases, classical algorithms, even when they run in linear time and linear space, may no longer be feasible options as the input size is too large. To deal with this situation, we need to design algorithms that use much less space or time than the input size. We call this kind of algorithm a sublinear algorithm. My primary research interest is designing sublinear algorithms for combinatorial problems and proving lower bounds to understand the limits of sublinear computation. I also study graph sparsification problems, an important technique for designing sublinear algorithms on graphs, usually used as a pre-processing step to speed up algorithms. In this talk, I'll cover some of my work on sublinear algorithms and graph sparsification, and give more details on my recent work on vertex sparsifiers.
Biography:
I'm a postdoc in the theory group at EPFL. I obtained my PhD from the University of Pennsylvania, where I was advised by Sampath Kannan and Sanjeev Khanna. Before that, I did my undergraduate study at Shanghai Jiao Tong University. I have a broad interest in various aspects of theoretical computer science and mathematics. Currently, I focus on graph algorithms, especially sublinear algorithms on graphs and graph sparsification problems. I received the Morris and Dorothy Rubinoff Award at the University of Pennsylvania and the Best Paper Award at SODA'19.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
19 March
10:00 am - 11:00 am
Intelligent Systems that Perceive, Imagine, and Act Like Humans by Aligning Vision and Language Representations
Location
Zoom
Category
Seminar Series 2023/2024
Speaker:
Dr. LI Boyi
Postdoctoral Fellow
Berkeley Artificial Intelligence Research Lab (BAIR), UC Berkeley
Zoom:
https://cuhk.zoom.us/j/92971603994?pwd=VFRaYTl5VWJMRnh6NHhicDBodC9JZz09
Meeting ID: 929 7160 3994 // Passcode: 202400
(Students must login with CUHK account, i.e., @link.cuhk.edu.hk, for valid attendance record)
Abstract:
The machine learning community has embraced specialized models tailored to specific data domains. However, relying solely on a singular data type might constrain flexibility and generality, requiring additional labeled data and hindering user interaction. To address these challenges, my research objective is to build efficient, generalizable, interactive intelligent systems that learn from the perception of the physical world and their interactions with humans to execute diverse and complex tasks to assist people. These systems should support seamless interactions with humans and computers in digital software environments and tangible real-world contexts by aligning representations from vision and language. In this talk, I will elaborate on my approaches across three dimensions: perception, imagination, and action, focusing on large language models, generative models, and robotics. These findings effectively mitigate the limitations of existing model setups that cannot be overcome by simply scaling up, opening avenues for multimodal representations to unify a wide range of signals within a single, comprehensive model.
Biography:
Boyi Li is a postdoctoral scholar at UC Berkeley, advised by Prof. Jitendra Malik and Prof. Trevor Darrell. She is also a researcher at NVIDIA Research. She received her Ph.D. at Cornell University, advised by Prof. Serge Belongie and Prof. Kilian Q. Weinberger. Her research interest is in machine learning and multimodal systems. Her research aims to develop generalizable algorithms and interactive intelligent systems, focusing on large language models, generative models, and robotics, by aligning representations from multimodal data, such as vision and language.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
18 March
11:30 am - 12:30 pm
On-Device Personalized AI for Mobile and Implantable Devices for Better Healthcare
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. JIA Zhenge
Postdoctoral Research Associate
Department of Computer Science and Engineering, University of Notre Dame
Abstract:
The rise in chronic diseases, combined with an aging population and a shortage of healthcare professionals, has driven the extensive use of mobile and implantable devices for effective management of diverse health conditions. Recent years have seen burgeoning interest in exploiting the rapid advancements in artificial intelligence (AI) to augment these devices' performance. This development leads to improved patient outcomes, reduced healthcare costs, and enhanced patient autonomy. However, due to individual differences, a one-for-all AI model cannot always provide the best performance, and personalized AI is needed to tailor the model to each individual. In addition, given privacy, security, and safety constraints, model personalization must often be done on the medical device itself, with limited hardware resources. In this talk, I will first illustrate the resource sustainability issues in the development of AI/ML for health, and demonstrate our proposed on-device personalized AI techniques that can potentially transform the landscape of mobile and implantable devices. Additionally, I will showcase the world's first TinyML design contest for health, organized at ICCAD 2022, and the next-generation Implantable Cardioverter Defibrillator (ICD) design enabled by our research.
Biography:
Zhenge Jia is currently a postdoctoral research associate in the Department of Computer Science and Engineering at the University of Notre Dame. He obtained his Ph.D. degree in Electrical and Computer Engineering at the University of Pittsburgh in 2022. He received his B.S. degree with honors in Computer Science from the Australian National University in 2017. His research interests include personalized deep learning and on-device AI for health. He has published more than 15 papers in venues including Nature Machine Intelligence, DAC, ICCAD, and TCAD, and received the Second Place Award in the Ph.D. forum at DAC 2023. He has served on the technical program committee of ICCAD and as a reviewer for IEEE TC, TCAD, TNNLS, JETC, etc.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
08 March
3:00 pm - 4:00 pm
Towards Acoustic Cameras: Neural Deconvolution and Rendering for Synthetic Aperture Sonar
Location
Room 407, 4/F, William M W Mong Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. Suren Jayasuriya
Assistant Professor
Arizona State University
Abstract:
Acoustic imaging leverages sound to form visual products with applications including biomedical ultrasound and sonar. In particular, synthetic aperture sonar (SAS) has been developed to generate high-resolution imagery of both in-air and underwater environments. In this talk, we explore the application of implicit neural representations and neural rendering for SAS imaging and highlight how such techniques can enhance acoustic imaging for both 2D and 3D reconstructions. Specifically, we discuss challenges of neural rendering applied to acoustic imaging especially when handling the phase of reflected acoustic waves that is critical for high spatial resolution in beamforming. We present two recent works on enhanced 2D circular SAS deconvolution in air as well as a general neural rendering framework for 3D volumetric SAS. This research is the starting point for realizing the next generation of acoustic cameras for a variety of applications in air and water environments for the future.
Biography:
Dr. Suren Jayasuriya has been an assistant professor at Arizona State University, in the School of Arts, Media and Engineering (AME) and Electrical, Computer and Energy Engineering (ECEE), since 2018. Before this, he was a postdoctoral fellow at the Robotics Institute at Carnegie Mellon University in 2017. Suren received his Ph.D. in ECE at Cornell University in January 2017 and graduated from the University of Pittsburgh in 2012 with a B.S. in Mathematics (with departmental honors) and a B.A. in Philosophy. His research interests span computational cameras, computer vision and graphics, and acoustic imaging/remote sensing. His website can be found at: https://sites.google.com/asu.edu/imaging-lyceum
Enquiries:
Professor GU Jinwei (jwgu@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
07 March
10:00 am - 11:00 am
Program Interfaces for Grounded, Transparent, and Reasoning AI
Location
Zoom
Category
Seminar Series 2023/2024
Speaker:
Mr. LUO Hongyin
Postdoctoral associate
MIT Computer Science and Artificial Intelligence Laboratory
Zoom:
https://cuhk.zoom.us/j/96031770790?pwd=RmI0Z25Qa1RFRzJKWUtOOG52YXlQdz09
Meeting ID: 960 3177 0790 // Passcode: 202400
Abstract:
Recent language models have achieved strong generalization ability over a vast range of tasks, but have also raised concerns about hallucinations, harmful stereotypes, and a lack of reliability in reasoning tasks. Our research emphasizes that the core solution to these problems is improving the grounding and reasoning abilities of language models. More specifically, we build trustworthy AI systems that (1) follow an explicit grounding-planning-reasoning pipeline for transparency and reliability, and (2) combine autoregressive generation with first-principles reasoning engines. By integrating large language models with knowledge graphs, entailment models, and program interpreters under a program scaffolding instead of natural language, we have significantly improved the accuracy, transparency, and efficiency of large language models on a wide range of numeric, symbolic, and natural language tasks.
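The following toy sketch illustrates the program-scaffolding idea: the model emits a small program rather than a prose answer, and a restricted interpreter produces the result (the model output here is a hypothetical hard-coded stand-in, not the speaker's system):

```python
import ast
import operator as op

# `llm_output` stands in for a model response; a trusted, restricted
# evaluator executes it, making the reasoning step transparent and checkable.
llm_output = "(17 * 24) + 3"

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def safe_eval(node):
    if isinstance(node, ast.Expression):
        return safe_eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](safe_eval(node.left), safe_eval(node.right))
    raise ValueError("disallowed construct")

print(safe_eval(ast.parse(llm_output, mode="eval")))  # 411
```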
Biography:
Hongyin LUO is a postdoctoral associate at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He received a bachelor's degree from Tsinghua University in 2016 and obtained a Ph.D. degree in computer science at MIT EECS in 2022. His research focuses on improving the efficiency, transparency, and reasoning ability of language models. His latest research has combined natural language with different formal reasoning engines, including entailment models and program interpreters. He has built self-trained language understanding models that outperform GPT3-175B with 1/500 of the computation, retrieval-augmented language models that handle noise from search engines, and natural language embedded programs that achieve accurate reasoning without task-specific examples.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
05 March
1:00 pm - 4:00 pm
Lossy Compression for HPC Scientific Data
Location
L4, 2/F, Science Centre (SC L4), CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor HE Xubin
Professor, Department of Computer and Information Sciences
Temple University
Director, Storage Technology and Architecture Research (STAR) Lab
Abstract:
Scientific simulations generate large amounts of floating-point data, which are often not very compressible using traditional reduction schemes such as deduplication or lossless compression. The emergence of lossy floating-point compression holds promise to satisfy the data reduction demand from HPC applications. In this talk, I will share our exploration of lossy compression for HPC scientific data in three aspects: 1) understanding and modelling lossy compression schemes on HPC scientific data; 2) compression ratio modelling and estimation across error bounds for lossy compression; and 3) exploring autoencoders to compress scientific data.
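As a minimal illustration of the error-bound guarantee at the heart of such compressors (a sketch of the quantization step only; real compressors such as SZ add prediction and entropy coding on top):

```python
import numpy as np

# Error-bounded uniform quantization: rounding to a grid of spacing 2*eb
# guarantees a pointwise absolute reconstruction error of at most eb.
def compress(data, eb):
    return np.round(data / (2 * eb)).astype(np.int64)   # integer codes

def decompress(codes, eb):
    return codes * (2 * eb)

rng = np.random.default_rng(0)
data = rng.standard_normal(1_000_000)
eb = 1e-3                                    # absolute error bound
recon = decompress(compress(data, eb), eb)
print(np.max(np.abs(data - recon)))          # <= eb by construction
```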
Biography:
Dr. Xubin He is a Professor in the Department of Computer and Information Sciences at Temple University. He is also the Director of the Storage Technology and Architecture Research (STAR) lab. Dr. He received his PhD in Electrical and Computer Engineering from the University of Rhode Island, USA in 2002 and both his MS and BS degrees in Computer Science from Huazhong University of Science and Technology, China, in 1997 and 1995, respectively. His research interests focus on data storage and I/O systems, including big data, cloud storage, Non-Volatile Storage, and scalability for large storage systems. He has published more than 100 refereed articles in prestigious journals such as IEEE Transactions on Parallel and Distributed Systems (TPDS), Journal of Parallel and Distributed Computing (JPDC), ACM Transactions on Storage, and IEEE Transactions on Dependable and Secure Computing (TDSC), and at various international conferences, including USENIX FAST, USENIX ATC, Eurosys, IEEE/IFIP DSN, IEEE INFOCOM, IEEE IPDPS, MSST, ICPP, MASCOTS, LCN, etc. He is the program co-chair for ccGRID’2024, IPCCC’2017, ICPADS’2016, MSST’2010, general co-chair for IEEE NAS’2009, and general vice co-chair for IPCCC’2018. Dr. He has served as a proposal review panelist for NSF and a committee member for many professional conferences in the field. Dr. He was a recipient of the ORAU Ralph E. Powe Junior Faculty Enhancement Award in 2004, the TTU Chapter Sigma Xi Research Award in 2010 and 2005, TTU ECE Most Outstanding Teaching Faculty Award in 2010, and VCU ECE Outstanding Research Faculty in 2015. He holds one U.S. patent. He is a senior member of the IEEE, a member of the IEEE Computer Society, and USENIX.
Enquiries:
Professor SHAO Zili (shao@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
04 March
11:30 am - 12:30 pm
Towards Principled Modeling of Inductive Bias for Generalizable Machine Learning
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Mr. LIU Weiyang
Ph.D. Candidate
Max Planck Institute for Intelligent Systems
University of Cambridge
Abstract:
Machine learning (ML) is becoming increasingly ubiquitous, as it enables scalable and accurate decision making in many applications, ranging from autonomous driving to medical diagnosis. Despite its unprecedented success, ensuring that ML systems are trustworthy and generalize as intended remains a huge challenge. To address this challenge, my research aims to build generalizable ML algorithms through principled modeling of inductive bias. To this end, I introduce three methods for modeling inductive biases: (1) value-based modeling, (2) data-centric modeling, and (3) structure-guided modeling. While briefly touching upon all three methods, I will focus on my recent efforts in value-based modeling and how it can effectively improve the adaptation of foundation models. Finally, I will conclude by highlighting the critical role of principled inductive bias modeling in unlocking new possibilities in the age of foundation models.
Biography:
Weiyang LIU is currently a final-year PhD student at the University of Cambridge and the Max Planck Institute for Intelligent Systems, advised by Prof. Adrian Weller and Prof. Bernhard Schölkopf under the Cambridge-Tuebingen Machine Learning Fellowship. His research focuses on the principled modeling of inductive biases to achieve generalizable and reliable machine learning. He has received the Baidu Fellowship and the Hitachi Fellowship, and was a Qualcomm Innovation Fellowship Finalist. His works have received the 2023 IEEE Signal Processing Society Best Paper Award, the Best Demo Award at HCOMP 2022, and multiple oral/spotlight presentations at conferences such as ICLR, NeurIPS and CVPR.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
February 2024
02 February
3:00 pm - 4:00 pm
Intelligent Digital Design and Implementation with Machine Learning in EDA
Location
Lecture Theatre 2 (1/F), Lady Shaw Building (LSB)
Category
Seminar Series 2023/2024
Speaker:
Professor XIE Zhiyao
Assistant Professor, Department of Electronic and Computer Engineering (ECE),
The Hong Kong University of Science and Technology (HKUST)
Abstract:
As integrated circuit (IC) complexity keeps increasing, chip design cost is skyrocketing. Semiconductor companies face ever greater demand for experienced manpower and are stressed by unprecedented turnaround times. Therefore, there is a compelling need for design efficiency improvement through new electronic design automation (EDA) techniques. In this talk, I will present multiple design automation techniques based on machine learning (ML) methods, whose major strength is to explore highly complex correlations based on prior circuit data. These techniques cover various chip-design objectives and design stages, including the layout, netlist, register-transfer level (RTL), and micro-architectural levels. I will focus on the different challenges in design objective prediction at different stages and present our customized solutions. In addition, I will share our latest observations on design generation with large language models (LLMs).
Biography:
Zhiyao Xie is an Assistant Professor in the ECE Department at Hong Kong University of Science and Technology. He received his Ph.D. in 2022 from Duke University. His research focuses on electronic design automation (EDA) and machine learning for VLSI design. Zhiyao has received multiple prestigious awards, including the UGC Early Career Award 2023, ACM Outstanding Dissertation Award in EDA 2023, EDAA Outstanding Dissertation Award 2023, MICRO 2021 Best Paper Award, ASP-DAC 2023 Best Paper Award, ACM SIGDA SRF Best Poster Award 2022, etc. During his Ph.D. studies, Zhiyao also worked as a research intern at leading semiconductor companies such as Nvidia, Arm, Cadence, and Synopsys. Now he also serves as the Seminar Chair of IEEE CEDA Hong Kong.
Enquiries:
Professor XU Qiang (qxu@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
January 2024
23 January
3:00 pm - 4:00 pm
Foundation Models for Life Science
Location
L3, 1/F, Science Centre (SC L3), CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor SONG Le
CTO and Chief AI Scientist, BioMap
Professor, Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
Abstract:
Can we leverage large amounts of unsupervised data to accelerate life-science discovery and drug design in industry? In this talk, I will introduce the xTrimo family of large-scale pretrained models spanning multiple scales of biological processes, integrating a huge amount of data from protein sequences, structures, protein-protein interactions, and single-cell transcriptomics. The pretrained models can be used as the foundation for many predictive problems arising in life science and drug design, achieving SOTA performance.
Biography:
Le Song is the CTO and Chief AI Scientist of BioMap. He directs the research and development of the xTrimo family of foundation models for life sciences, the largest model family in the area, consisting of more than 100B parameters and achieving SOTA performance on tens of downstream problems. This technology also led to the first foundation-model deal with a big pharmaceutical company (Sanofi), totaling 1 billion dollars in contract value. Academically, Le Song is a full professor at MBZUAI, was a tenured associate professor at Georgia Tech, and served as conference program chair of ICML 2022. He is an expert in machine learning and AI and has won many best paper awards at leading AI conferences such as NeurIPS, ICML, and AISTATS. Recently, his work on using large language models for protein structure prediction was featured as the cover story in Nature Machine Intelligence.
Enquiries:
Professor LI Yu (liyu@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
19 January
3:00 pm - 4:00 pm
Generative AI for EDA and Chip Design
Location
Room 407, 4/F, William M W Mong Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. REN Haoxing
Director of Design Automation Research
NVIDIA
Abstract:
This talk explores the transformative potential of Generative AI (GenAI) techniques for EDA and Chip Design. First, we introduce the physical design scaling challenge and propose leveraging GenAI to meet this challenge, particularly in core areas of physical design such as gate sizing and buffering. Using GenAI, we have achieved speed-ups that are multiple orders of magnitude faster than existing commercial tools. Additionally, we delve into the challenges associated with training and inference in GenAI models. To facilitate this, we introduce CircuitOps, an open-source tool that efficiently gathers and processes EDA data for the training and inference phases of GenAI models. Secondly, we explore the application of Large Language Models (a key GenAI technology) to improve industrial chip design productivity. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we trained domain-adapted LLMs (ChipNeMo) with internal design documents and source code. We evaluated ChipNeMo on three selected LLM applications for chip design: an engineering assistant chatbot, EDA script generation, and bug summarization and analysis. Our results show that domain adaptation techniques enable significant LLM performance improvements over general-purpose base models. We also find that domain adaptation is orthogonal to retrieval augmented generation (RAG). On the engineering assistant application, our best model achieved 20% higher performance than GPT-4 with RAG.
Biography:
Haoxing Ren (Mark) serves as the Director of Design Automation Research at NVIDIA, where he focuses on leveraging machine learning and GPU-accelerated tools to enhance chip design quality and productivity. Prior to joining NVIDIA in 2016, he dedicated 15 years to EDA algorithm research and design methodology innovation at IBM Microelectronics and IBM Research. Mark is widely recognized for his contributions to physical design, AI, and GPU acceleration for EDA, achievements that have earned him several prestigious awards, including the IBM Corporate Award and best paper awards at ISPD, DAC, TCAD, and MLCAD. He holds over twenty patents and has co-authored over 100 papers, a book on ML for EDA, and several book chapters on physical design and logic synthesis. He holds Bachelor's and Master's degrees from Shanghai Jiao Tong University and Rensselaer Polytechnic Institute, respectively, and earned his PhD from the University of Texas at Austin. He is a Fellow of the IEEE.
Enquiries: Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
15 January
11:30 am - 12:30 pm
Learning to Perceive and Model the World at Scale for Autonomous AI
Location
L4, 2/F, Science Centre (SC L4), CUHK
Category
Seminar Series 2023/2024
Speaker:
Mr. XIONG Yuwen
Ph.D. Candidate
Department of Computer Science, University of Toronto
Abstract:
Developing truly autonomous AI systems like self-driving cars has the potential to transform various industries and improve our daily lives. Accomplishing such a system hinges on two crucial components. First, precise perception of the world is necessary; second, modeling and predicting the world’s dynamics is essential to interact with the real world effectively.
In this talk, I will outline my research efforts in perception and world modeling, focusing on developing scalable deep learning algorithms and models beyond controlled environments.
Regarding perception, I will delve into the development of core deep-learning operators that fundamentally augment the capabilities of deep learning models, followed by discussions on how to perform unsupervised pretraining and design unified neural network architectures for efficient and effective image segmentation.
As for world modeling, I will show how to learn prior knowledge of the world and then learn to accurately predict world dynamics at the observational level, both in a scalable and unsupervised manner. Lastly, I will discuss my future research plans to advance perception and world modeling further. This involves integrating multi-modal information into the models and systematically incorporating external knowledge, which is crucial for realizing intelligent autonomous AI systems.
Biography:
Yuwen Xiong is a Ph.D. candidate at the University of Toronto, advised by Professor Raquel Urtasun. He was a research scientist in industrial labs (e.g., Uber ATG, Waabi), conducting cutting-edge research on autonomous driving. Before coming to Toronto, he collaborated with Dr. Jifeng Dai at Microsoft Research Asia. Yuwen Xiong's primary interest lies at the intersection of computer vision, robotics, and machine learning. His long-term vision is to build autonomous AI systems that can learn like humans and operate reliably in the real world. To this end, he leverages his knowledge across the full spectrum of autonomy, including perception, prediction, decision-making, and 3D generation, to create systems that are flexible enough to handle real-world complexities, robust to uncertainties, and generalizable to novel scenes. He is a recipient of the Canada Graduate Scholarships – Doctoral and the Borealis AI Fellowship. More information about him can be found at https://www.cs.toronto.edu/~yuwen/.
Enquiries: WONG O-Bong (obong@cse.cuhk.edu.hk)
12 January
3:00 pm - 4:00 pm
Translating Computer Vision Research to the Real-world Applications
Location
Room 407, 4/F, William M W Mong Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor Moi Hoon Yap
Professor of Image & Vision Computing
Manchester Metropolitan University
Abstract:
This seminar introduces research and education in the Department of Computing and Mathematics, Manchester Metropolitan University, UK, and offers a pathway to potential research and education collaboration. Prof. Yap will share the conceptual foundations and procedures used in developing medical and computer vision datasets over the past decade, with a timeline to demonstrate progress. The talk covers data-capture methods, an overview of research in developing private and public datasets, the related computer vision tasks (the facial micro-expression challenges and the diabetic foot ulcer challenges), and the future direction of this research. As a leading investigator at a leading institution in these fields, she aims to share the technical challenges encountered, along with good practices in dataset development, and to motivate other researchers to participate in data sharing in this domain. Future work includes a call for effort to establish an international consortium and an international repository of medical imaging datasets.
For more details, please refer to:
https://dfu-challenge.github.io/ (DFU challenges)
https://megc2023.github.io/ (Facial micro-expressions Challenges)
Biography:
Prof. Moi Hoon Yap is the Research Lead of the Department of Computing and Mathematics, Manchester Metropolitan University, UK. Her leadership in both research and education has attracted international students and research collaborations. She leads the Human-Centred Computing Group (20 staff members and 12 research scholars), with expertise in computer vision and deep learning. As the holder of a Royal Society Industry Fellowship (2016-2022), hosted by Image Metrics Ltd, her research is driven by industrial needs. In addition, her research provides new insights and breakthroughs for medical image analysis and facial analysis. Moi Hoon has received research funding from The Royal Society, EU funding, EPSRC, Innovate UK, Cancer Research UK, and industry partners. She serves as an Associate Editor of the Journal of Computers and Programs in Biomedicine and as a panel member for UK funding bodies. She leads technology development for multiple computer vision projects, has created novel datasets for reproducible research, and has conducted international computer vision challenges.
Enquiries: WONG O-Bong (obong@cse.cuhk.edu.hk)
December 2023
19 December
11:00 am - 12:00 pm
Textual Inversion
Location
ERB405, 4/F, William M W Mong Engineering Building (ERB)
Category
Seminar Series 2023/2024
Speaker:
Professor Daniel Cohen-Or
Professor
School of Computer Science, Tel Aviv University
Abstract:
Text-to-image models offer unprecedented freedom to guide creation through natural language. Yet, it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes. In other words, we ask: how can we use language-guided models to turn our cat into a painting, or imagine a new product based on our favourite toy? Here we present a simple approach that allows such creative freedom. Using only 3-5 images of a user-provided concept, like an object or a style, we learn to represent it through new “words” in the embedding space of a frozen text-to-image model. These “words” can be composed into natural language sentences, guiding personalized creation in an intuitive way. Notably, we find evidence that a single word embedding is sufficient for capturing unique and varied concepts.
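To make the mechanism concrete, here is a rough PyTorch-style sketch of the optimization loop (our own simplification of the idea above, not the authors' code; load_frozen_model, sample_concept_image, add_noise, and embed_prompt_with_placeholder are hypothetical helpers standing in for a real pipeline):

import torch

# Hypothetical handles to a frozen text-to-image diffusion model; the
# loader and helper names are illustrative, not an actual library API.
text_encoder, unet, vae = load_frozen_model()        # all weights frozen
embedding_table = text_encoder.token_embeddings

# The only trainable parameter: one new "word" vector for the
# placeholder token S*, initialized near the mean of existing embeddings.
v_star = torch.nn.Parameter(embedding_table.weight.mean(dim=0).clone())
opt = torch.optim.AdamW([v_star], lr=5e-3)

for step in range(3000):
    img = sample_concept_image()                     # one of the 3-5 user images
    latents = vae.encode(img)
    noise = torch.randn_like(latents)
    t = torch.randint(0, 1000, (1,))
    noisy = add_noise(latents, noise, t)             # forward diffusion step

    # Encode a template prompt such as "a photo of S*", splicing v_star
    # in at the placeholder position before running the text encoder.
    cond = embed_prompt_with_placeholder("a photo of S*", v_star)

    loss = torch.nn.functional.mse_loss(unet(noisy, t, cond), noise)
    loss.backward(); opt.step(); opt.zero_grad()

# After training, "S*" composes into ordinary sentences, e.g.
# "an oil painting of S*", to guide personalized generation.

The key design point is that only the single embedding vector is optimized; the generative model itself never changes, which is what keeps the new "word" compatible with the rest of the vocabulary.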
Biography:
Daniel Cohen-Or is a professor in the School of Computer Science, Tel Aviv University. He received his B.Sc. cum laude in both mathematics and computer science (1985) and his M.Sc. cum laude in computer science (1986) from Ben-Gurion University, and his Ph.D. from the Department of Computer Science (1991) at the State University of New York at Stony Brook. He has served on the editorial boards of a number of international journals and on the program committees of many international conferences. He was the recipient of the Eurographics Outstanding Technical Contributions Award in 2005. In 2013 he received The People's Republic of China Friendship Award, and in 2015 he was named a Thomson Reuters Highly Cited Researcher. He received the ACM SIGGRAPH Computer Graphics Achievement Award in 2018, The Kadar Family Award for Outstanding Research in 2019, and The Eurographics Distinguished Career Award in 2020. His research interests are in computer graphics, in particular synthesis, processing, and modelling techniques.
Enquiries: WONG O-Bong (obong@cse.cuhk.edu.hk)
13 December
10:30 am - 11:30 am
Designing Secure Datacenter Transport Protocol
Location
ERB804, 8/F, William M W Mong Engineering Building (ERB)
Category
Seminar Series 2023/2024
Speaker:
Professor Michio Honda
Lecturer (Assistant Professor)
School of Informatics, University of Edinburgh
Abstract:
Datacenter operators and tenants need end-to-end encryption due to workload co-location and untrusted network infrastructure. The status quo is TLS/TCP and QUIC, but these are unfit for datacenters due to unsuitable abstractions and host software overheads. This talk presents our work on a secure datacenter transport protocol (SDP), which reuses the legacy hardware offload for cryptographic operations available in commodity NICs while building on a new datacenter transport protocol, such as Homa, as its basis. SDP thus lets operators that currently rely on TLS over TCP adopt it without sacrificing hardware-offload opportunities.
Biography:
Michio Honda is a lecturer (equivalent to assistant professor in the US) in the School of Informatics at the University of Edinburgh. His best-known work includes identifying TCP extensibility against middlebox interference and building the first TCP/IP network stack for persistent memory. His current research interests include networked storage systems and secure datacenter transport protocols. He is a recipient of the IRTF Applied Networking Research Prize (2011), a Facebook Research Award (2021), and a Google Research Scholar Award (2022).
Enquiries: WONG O-Bong (obong@cse.cuhk.edu.hk)
08 December
9:00 am - 10:00 am
AI for Chip Design & EDA: Everything, Everywhere, All at Once
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor David Z. Pan
Professor & Silicon Laboratories Endowed Chair
Department of Electrical and Computer Engineering, The University of Texas at Austin
Abstract:
AI for chip design and EDA has received tremendous interest from both academia and industry in recent years. It touches everything chip designers care about, from power/performance/area (PPA) to cost, yield, turnaround time, and security. It is everywhere: at all levels of design abstraction, in testing, verification, DFM, and mask synthesis, for digital designs as well as some aspects of analog/mixed-signal/RF design. It has also been used to tune the overall design flow and its hyper-parameters, but not yet all at once, e.g., generative AI going from design specification all the way to layout in a correct-by-construction manner. In this talk, I will cover some recent advances and breakthroughs in AI for chip design/EDA and share my perspectives.
Biography:
Prof. David Pan (Fellow of ACM, IEEE, and SPIE) is a Full Professor and holder of the Silicon Laboratories Endowed Chair at the Chandra Department of Electrical and Computer Engineering, The University of Texas at Austin. His research interests include electronic design automation, synergistic AI and IC co-optimization, design for manufacturing, hardware security, and design/CAD for analog/mixed-signal and emerging technologies. He has published over 480 refereed journal/conference papers and holds 9 US patents. He has served on many editorial boards and conference committees, including various leadership roles such as DAC 2024 Technical Program Chair, DAC 2023 Technical Program Co-Chair, ICCAD 2019 General Chair, and ISPD 2008 General Chair. He has received many awards, including 20 Best Paper Awards (from TCAD, DAC, ICCAD, DATE, ASP-DAC, ISPD, HOST, SRC, IBM, etc.), the SRC Technical Excellence Award, the DAC Top 10 Author Award in Fifth Decade, the ASP-DAC Frequently Cited Author Award, the NSF CAREER Award, the IBM Faculty Award (4 times), and many international CAD contest awards. He has held various advisory, consulting, or visiting positions in academia and industry, including MIT and Google. He has graduated 52 PhD students and postdocs who have won many awards, including the ACM Student Research Competition Grand Finals 1st Place (twice, 2018 and 2021) and Outstanding PhD Dissertation Awards from ACM/SIGDA and EDAA five times.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
08 December
10:30 am - 11:30 am
Resource Management and Runtime Reconfiguration for Distributed Streaming Systems
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor Richard T. B. Ma
Associate Professor
School of Computing, National University of Singapore
Abstract:
Due to the long-run and unpredictable nature of stream processing, any statically configured execution of stream jobs fails to process data in a timely and efficient manner. To achieve performance requirements, stream jobs need to be reconfigured dynamically.
In the first part of the talk, we will discuss DRS, a dynamic resource scaling framework for cloud-based stream data analytics systems. DRS overcomes three fundamental challenges: 1) how to model the relationship between the provisioned resources and the application performance, 2) where to best place resources, and 3) how to measure the system load with minimal overhead. In particular, DRS includes an accurate performance model based on the theory of Jackson open queueing networks and is capable of handling arbitrary operator topologies, possibly with loops, splits, and joins. Extensive experiments with real data show that DRS is capable of detecting sub-optimal resource allocation and making quick and effective resource adjustment.
In the second part of the talk, we present Trisk, a control plane that supports versatile reconfigurations while maintaining high efficiency with easy-to-use programming APIs. Trisk enables versatile reconfigurations with usability based on a task-centric abstraction, and encapsulates primitive operations such that reconfigurations can be described by composing the primitive operations on the abstraction. Trisk adopts a partial pause-and-resume design for efficiency, through which synchronization mechanisms in the native stream systems can further be leveraged. We implement Trisk on Apache Flink and demonstrate its usage and performance under realistic application scenarios. We show that Trisk executes reconfigurations with shorter completion times and comparable latency compared to a state-of-the-art fluid mechanism for state management.
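As a toy illustration of the queueing-theoretic modeling in the first part of the talk (our own invented topology and rates, not the DRS implementation), one can treat each operator as an M/M/c queue, estimate end-to-end latency from the model, and greedily place extra replicas where the model predicts the largest latency reduction:

import math

def mmc_wait(lam, mu, c):
    # Expected queueing delay in an M/M/c queue (Erlang C formula);
    # a simple stand-in for the Jackson-network operator model.
    rho = lam / (c * mu)
    if rho >= 1.0:
        return float("inf")                        # overloaded operator
    a = lam / mu
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    erlang_c = (a**c / (math.factorial(c) * (1 - rho))) * p0
    return erlang_c / (c * mu - lam)

# Toy pipeline: operator -> (arrival rate in tuples/s, service rate per replica).
ops = {"parse": (80.0, 30.0), "join": (80.0, 25.0), "sink": (80.0, 50.0)}

def total_latency(alloc):
    # Sojourn time = queueing delay + mean service time, summed over operators.
    return sum(mmc_wait(lam, mu, alloc[op]) + 1.0 / mu
               for op, (lam, mu) in ops.items())

# Start with the minimum stable allocation, then add replicas where the
# model predicts the biggest gain ("where to best place resources").
alloc = {op: int(math.ceil(lam / mu)) for op, (lam, mu) in ops.items()}
for _ in range(4):
    gain = lambda op: total_latency(alloc) - total_latency({**alloc, op: alloc[op] + 1})
    alloc[max(ops, key=gain)] += 1
print(alloc, total_latency(alloc))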
Biography:
Prof. Richard T. B. Ma received the B.Sc. (Hons.) degree in computer science and M.Phil. degree in computer science and engineering from The Chinese University of Hong Kong in 2002 and 2004, respectively, and the Ph.D. degree in electrical engineering from Columbia University in 2010. During his Ph.D. study, he worked as a Research Intern at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA, and Telefonica Research, Barcelona, Spain. From 2010–2014, he worked as a Research Scientist at the Advanced Digital Science Center (ADSC), University of Illinois at Urbana–Champaign, Champaign, IL, USA. He is currently an Associate Professor with the School of Computing, National University of Singapore. His current research interests include distributed systems and network economics. He was a recipient of the Best Paper Award Runners-up from the ACM Mobihoc 2020 and a co-recipient of the Best Paper Award from the IEEE IC2E 2013, the IEEE ICNP 2014, and the IEEE Workshop on Smart Data Pricing 2015. He is a Senior Member of ACM and IEEE.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
05 December
10:00 am - 11:00 am
Neural Acceleration with Full Stack Optimization
Location
Lecture Theatre 2 (LT2), 1/F, Lady Shaw Building (LSB)
Category
Seminar Series 2023/2024
Speaker:
Professor Meng Li
Assistant Professor
Institute for Artificial Intelligence, School of Integrated Circuits, Peking University
Abstract:
Recent years have witnessed the fast evolution of AI and deep learning (DL) in fields such as computer vision and natural language processing. Though promising, DL faces serious challenges due to exponential network scaling and network heterogeneity. In this talk, I will discuss some of our recent works that leverage network/hardware co-design and co-optimization to improve the efficiency of DL. I will cover our recent work on tiny language models for MCUs, memory-aware scheduling, and hardware accelerator designs based on a new computing paradigm, stochastic computing. I will also discuss interesting future directions to further improve the efficiency and security of AI.
Biography:
Prof. Meng Li is a tenure-track assistant professor at Peking University, jointly affiliated with the Institute for Artificial Intelligence and the School of Integrated Circuits. Before joining Peking University, he was a staff research scientist and tech lead at Meta Reality Labs, part of the world's largest social media company, focusing on research and productization of efficient AI algorithms and hardware/systems for next-generation AR/VR devices. Dr. Li received his Ph.D. degree from the University of Texas at Austin in 2018 and his bachelor's degree from Peking University in 2013.
Prof. Meng Li's research interests lie in efficient and secure multi-modal AI acceleration algorithms and hardware. He has published more than 60 papers and received two best paper awards, from HOST 2017 and GLSVLSI 2018. He has also received the EDAA Outstanding Dissertation Award, First Place in the ACM Student Research Competition Grand Final (Graduate Category), and Best Poster Awards at the ASP-DAC Student Research Forum, among others.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
04 December
1:30 pm - 2:30 pm
Qwen: Towards a Generalist Model
Location
Lecture Theatre 2 (LT2), 1/F, Lady Shaw Building (LSB)
Category
Seminar Series 2023/2024
Speaker:
Mr. Junyang Lin
Staff Engineer, Leader of Qwen Team,
Alibaba Group
Abstract:
This talk introduces the large language and multimodal model series Qwen, which stands for Tongyi Qianwen (通义千问), published and open-sourced by Alibaba Group. The Qwen models have achieved competitive performance against both open-source and proprietary LLMs and LMMs in benchmark and human evaluations. The talk provides a brief overview of the model series and then delves into the details of building the LLMs and LMMs, including pretraining, alignment, multimodal extension, and the open-source release. It also points out limitations and discusses future work for both the research community and industry in this field.
Biography:
Mr. Junyang Lin is a staff engineer at Alibaba Group, where he leads the Qwen Team. He has been doing research in natural language processing and multimodal representation learning, with a focus on large-scale pretraining, and has around 3,000 citations. Recently his team released and open-sourced the Qwen series, including the large language model Qwen, the large vision-language model Qwen-VL, and the large audio-language model Qwen-Audio. Previously, he focused on large-scale multimodal pretraining and developed the open-sourced models OFA, Chinese-CLIP, and others.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
04 December
3:00 pm - 4:00 pm
Classical simulation of one-query quantum distinguishers
Location
Lecture Theatre 2 (LT2), 1/F, Lady Shaw Building (LSB)
Category
Seminar Series 2023/2024
Speaker:
Professor Andrej Bogdanov
Professor
School of Electrical Engineering and Computer Science, University of Ottawa
Abstract:
A distinguisher is an algorithm that tells whether its input was sampled from one distribution or from another. The computational complexity of distinguishers is important for much of cryptography, pseudorandomness, and statistical inference.
We study the relative advantage of classical and quantum distinguishers of bounded query complexity over n-bit strings. Our focus is on a single quantum query, which is already quite powerful: Aaronson and Ambainis (STOC 2015) constructed a pair of distributions that is ε-distinguishable by a one-query quantum algorithm, but O(εk/√n)-indistinguishable by any non-adaptive k-query classical algorithm.
We show that every pair of distributions that is ε-distinguishable by a one-query quantum algorithm is distinguishable with k classical queries and (1) advantage min{Ω(ε√(k/n)), Ω(ε²k²/n)} non-adaptively (i.e., in one round), and (2) advantage Ω(ε²k/√(n log n)) in two rounds. The second bound is tight in k and n up to a (log n) factor.
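Restated in display form (the same bounds as above, with ε the quantum advantage, k the number of classical queries, and n the string length):

\[
\mathrm{adv}_{\text{non-adaptive}}(k) \;\ge\; \min\Bigl\{\Omega\bigl(\varepsilon\sqrt{k/n}\bigr),\; \Omega\bigl(\varepsilon^{2}k^{2}/n\bigr)\Bigr\},
\qquad
\mathrm{adv}_{\text{two-round}}(k) \;\ge\; \Omega\Bigl(\varepsilon^{2}k\big/\sqrt{n\log n}\Bigr).
\]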
Based on joint work with Tsun Ming Cheung (McGill), Krishnamoorthy Dinesh (IIT Palakkad), and John C.S. Lui (CUHK).
Biography:
Prof. Andrej Bogdanov is a professor in the School of Electrical Engineering and Computer Science at the University of Ottawa. He is interested in cryptography, pseudorandomness, and computational complexity. Andrej obtained his Ph.D. from UC Berkeley. Before joining uOttawa he taught at the Chinese University of Hong Kong. He was a visiting professor at the Tokyo Institute of Technology in 2013 and at the Simons Institute for the Theory of Computing in 2017 and 2021.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
November 2023
30 November
10:00 am - 11:00 am
Compact AI Representations for Game Theory: Models, Computations, and Applications
Location
Zoom
Category
Seminar Series 2023/2024
Speaker:
Professor Hau Chan
Assistant Professor
School of Computing, University of Nebraska-Lincoln
Abstract:
In the last few decades, game theory has become a prominent construct for modeling and predicting outcomes of strategic interactions of rational agents in various real-world environments, ranging from adversarial (e.g., attacker-defender in the security domain) to collaborative (e.g., public-good contributions). In turn, these predicted outcomes can be used to facilitate, inform, and improve agents' and policymakers' decision-making. Unfortunately, because of the domain characteristics of real-world environments, classical game-theoretic modeling and computational approaches (for predicting outcomes) can often take exponential space and time.
In this talk, I will discuss compact AI representations of strategic interactions (or games) that provide efficient approaches for a wide range of applications. I will demonstrate how they can be used to model and predict outcomes in scenarios we have examined previously, such as attacker-defender settings, resource congestion, residential segregation, and public project contributions.
More specifically, I will first present aggregate games, a compact AI representation of games where each agent’s utility function depends on their own actions and the aggregation or summarization of the actions of all agents, and resource graph games, a compact AI representation of games where agents have exponential numbers of actions. For these games, I will then present our computational results for determining and computing Nash Equilibria (NE), a fundamental solution concept to specify predicted outcomes in games, and their related problems.
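As a toy example of why this compactness matters (our own illustration, not from the talk): in an aggregate game, a best response depends only on an agent's own action and the sum of everyone's actions, so it can be computed against the aggregate rather than by enumerating exponentially many opponent profiles. The sketch below finds a pure Nash equilibrium of a small public-good game by iterated best response:

# Toy aggregate game: payoff depends only on own contribution and the
# aggregate (sum) of all contributions. All parameters are invented.
def utility(own, aggregate, benefit=2.0, cost=1.5):
    return benefit * (aggregate ** 0.5) - cost * own

def best_response(others_sum, actions=range(11)):
    # Needs only the aggregate of the others, not their full action profile.
    return max(actions, key=lambda a: utility(a, a + others_sum))

contrib = [0, 0, 0, 0]                     # four agents, start at zero
for _ in range(50):                        # sequential best-response dynamics
    changed = False
    for i in range(len(contrib)):
        br = best_response(sum(contrib) - contrib[i])
        if br != contrib[i]:
            contrib[i], changed = br, True
    if not changed:                        # no one wants to deviate: pure NE
        break
print("pure Nash equilibrium:", contrib)

Iterated best response need not converge in general games, but when it does stop, the fixed point is a Nash equilibrium by construction; the aggregate structure is what makes each step cheap.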
Biography:
Prof. Hau Chan is an assistant professor in the School of Computing at the University of Nebraska-Lincoln. He received his Ph.D. in Computer Science from Stony Brook University in 2015 and completed three years of Postdoctoral Fellowships, including at the Laboratory for Innovation Science at Harvard University in 2018. His main research areas focus on modeling and algorithmic aspects of AI and multi-agent interactions (e.g., via game theory, mechanism design, and applied machine learning), addressing several cross-disciplinary societal problems and applications. His recent application areas include improving accessibility to public facilities, reducing substance usage, and making fair collective decisions. His research has been supported by NSF, NIH, and USCYBERCOM. He has received several Best Paper Awards at SDM and AAMAS and distinguished/outstanding SPC/PC member recognitions at IJCAI and WSDM. He has given tutorials and talks on computational game theory and mechanism design at venues such as AAMAS and IJCAI, including an Early Career Spotlight at IJCAI 2022. He has served as a co-chair for Demonstrations, Doctoral Consortium, Scholarships, and Diversity & Inclusion Activities at AAMAS and IJCAI.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93836939970
Meeting ID: 938 3693 9970
Passcode: 202300
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
29 November
2:00 pm - 3:00 pm
Cryo-Electron Microscopy Image Analysis: from 2D class averaging to 3D reconstruction
Location
L2, 1/F, Science Centre (SC L2), CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor Zhizhen Zhao
William L. Everitt Fellow and Associate Professor
Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Abstract:
Cryo-electron microscopy (EM) single particle reconstruction is an entirely general technique for 3D structure determination of macromolecular complexes. This talk focuses on algorithms for 2D class averaging and 3D reconstruction of single-particle images, assuming no conformational changes of the macromolecules. In the first part, I will introduce multi-frequency vector diffusion maps (MFVDM) to improve the efficiency and accuracy of cryo-EM 2D image classification and denoising. This framework incorporates different irreducible representations of the estimated alignments between similar images. In addition, we use a graph filtering scheme to denoise the images using the eigenvalues and eigenvectors of the MFVDM matrices. In the second part, I will present a 3D reconstruction approach that follows a line of work, starting from Kam (1977), employing autocorrelation analysis for single particle reconstruction. Our approach does not require per-image pose estimation and imposes a spatial non-negativity constraint. At the end of the talk, I will briefly review the challenges of, and existing approaches to, addressing continuous heterogeneity in cryo-EM data.
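The graph-filtering step can be illustrated with a plain graph Laplacian standing in for the MFVDM matrices (a toy of ours with synthetic 1D "images", not cryo-EM data): project the noisy signals onto the smoothest eigenvectors of a similarity graph built from the data themselves.

import numpy as np

rng = np.random.default_rng(0)
# 200 synthetic "images" of 64 pixels lying near a 1D manifold, plus noise.
clean = np.sin(np.linspace(0, 4 * np.pi, 200))[:, None] * rng.normal(size=(1, 64))
noisy = clean + 0.5 * rng.normal(size=clean.shape)

# Affinity graph between images (Gaussian kernel on pairwise distances).
d2 = ((noisy[:, None, :] - noisy[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / np.median(d2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W              # unnormalized graph Laplacian

# Low-pass graph filter: keep the 10 smoothest eigenvectors and project.
vals, vecs = np.linalg.eigh(L)
U = vecs[:, :10]
denoised = U @ (U.T @ noisy)

print("error before:", np.linalg.norm(noisy - clean))
print("error after: ", np.linalg.norm(denoised - clean))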
Biography:
Prof. Zhizhen Zhao is an Associate Professor and William L. Everitt Fellow in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. She joined University of Illinois in 2016. From 2014 to 2016, she was a Courant Instructor at the Courant Institute of Mathematical Sciences, New York University. She received the B.A. and M.Sc. degrees in physics from Trinity College, Cambridge University in 2008, and the Ph.D. degree in physics from Princeton University in 2013. She is a recipient of Alfred P. Sloan Research Fellowship (2020). Her research interests include computational imaging, data science, and machine learning.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
28 November
10:00 am - 11:00 am
Structure for Scalable Verification
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. Lauren Pick
Postdoctoral Researcher
Department of Computer Sciences, University of Wisconsin-Madison and Department of Electrical Engineering and Computer Sciences, University of California, Berkeley
Abstract:
Given the critical role of software systems in society, it is important that we understand how such systems behave and interact. Formal specifications can help us in this task by providing rigorous and unambiguous descriptions of system behaviors. Automated verification can be applied to automate the process of proving formal specifications hold for software systems, making it easier to ensure that the underlying systems function as intended. Unfortunately, the application of automated verification to real-world systems remains hindered by scalability limitations. In this talk, I describe my work on addressing these limitations by leveraging the problem-specific structure of specifications and systems. I specifically illustrate my approach for handling concrete problems in security and distributed domains, where taking advantage of structure enables scalable verification.
Biography:
Dr. Lauren Pick is a postdoctoral researcher at the University of California, Berkeley and the University of Wisconsin-Madison. She received her Ph.D. from Princeton University in January 2022. Her research focuses on developing techniques for automated verification and synthesis, with the goal of enabling formal reasoning about real-world systems. To this end, she has developed techniques that take advantage of structural aspects of target systems and their desired properties to enable efficient verification and synthesis. She is a Computing Innovation fellow and was a recipient of the NSF GRFP Fellowship.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
27 November
10:30 am - 11:30 am
Shape Geometric Processing and Analysis of Large Aviation Equipments
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor Mingqiang Wei
Professor
School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics (NUAA)
Abstract:
Large aircraft are among the most complex pieces of high-end equipment in modern society; they are the culmination of interdisciplinary, cross-domain advanced technologies and occupy the top of the manufacturing industry's technology and value chains. With the emergence of a batch of national key equipment such as the Y-20, C919, and Jiaolong-600, China has made breakthrough progress in large aircraft manufacturing and gradually established a relatively complete production and development system. However, owing to an insufficient technological foundation, and compared with international aerospace manufacturing giants, Chinese aviation enterprises have not yet achieved integrated manufacturing and measurement capabilities or effective precision control. "High-precision rapid 3D scanning analysis and quality control technology" has therefore become an important factor affecting the development of large aircraft in China. Geometric deep learning, with its powerful ability to learn geometric features, has shown great potential in the analysis of large aircraft shapes. However, existing network structures lack domain-specific expertise in aviation, there is no publicly available large-scale aircraft 3D dataset, and the latest machine learning technologies have not been deeply integrated into geometric deep learning, making it difficult to analyze comprehensively and efficiently the complex features and stringent accuracy requirements of large aircraft shapes. This talk will introduce the interdisciplinary technical issues involved in the analysis of large aircraft shapes.
Biography:
Prof. Mingqiang Wei received his Ph.D. degree (2014) in Computer Science and Engineering from The Chinese University of Hong Kong (CUHK). He is a professor in the School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics (NUAA). He received the Excellent Youth Fund Project of the National Natural Science Foundation of China in 2023. Before joining NUAA, he served as an assistant professor at Hefei University of Technology and as a postdoctoral fellow at CUHK. He received the CUHK Young Scholar Thesis Award in 2014. He is an Associate Editor of ACM TOMM, The Visual Computer, and the Journal of Electronic Imaging, and a Guest Editor of IEEE Transactions on Multimedia. His research interests focus on 3D vision, computer graphics, and deep learning.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
27 November
2:30 pm - 3:30 pm
Looking behind the Seen
Location
L3, 1/F, Science Centre (SC L3), CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor Alexander Schwing
Associate Professor
Department of Electrical and Computer Engineering & Department of Computer Science, University of Illinois at Urbana-Champaign
Abstract:
Our goal is to develop methods which anticipate. For this, four foundational questions need to be answered: (1) How can methods accurately forecast high-dimensional observations?; (2) How can algorithms holistically understand objects, e.g., when reasoning about occluded parts?; (3) How can accurate probabilistic models be recovered from limited amounts of labeled data and for rare events?; and (4) How can autonomous agents be trained effectively to collaborate?
In this talk we present vignettes of our research addressing those questions. We start by discussing MaskFormer and Mask2Former, recent architectures that achieve state-of-the-art results on three tasks: panoptic, instance, and semantic segmentation. We then discuss the importance of memory for video object segmentation and its combination with foundation models for open-world segmentation. Finally, time permitting, we discuss SDFusion, a generative model that infers the unobserved parts of an object. For additional info and questions, please browse to http://alexander-schwing.de.
Biography:
Prof. Alexander Schwing is an Associate Professor at the University of Illinois at Urbana-Champaign working with talented students on computer vision and machine learning topics. He received his B.S. and diploma in Electrical Engineering and Information Technology from Technical University of Munich in 2006 and 2008 respectively, and obtained a PhD in Computer Science from ETH Zurich in 2014. Afterwards he joined University of Toronto as a postdoctoral fellow until 2016. His research interests are in the area of computer vision and machine learning, where he has co-authored numerous papers on topics in scene understanding, inference and learning algorithms, deep learning, image and language processing and generative modeling. His PhD thesis was awarded an ETH medal and his team’s research was awarded an NSF CAREER award. For additional info, please browse to http://alexander-schwing.de.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
22 November
9:00 am - 10:00 am
Open-Source Accelerator-Based Edge AI Architectures for a Sustainable World
Location
Lecture Theatre 3, 1/F, Lady Shaw Building (LSB)
Category
Seminar Series 2023/2024
Speaker:
Professor David Atienza
Professor
Department of Electrical and Computer Engineering, The École Polytechnique Fédérale de Lausanne (EPFL)
Abstract:
Edge computing is becoming an essential concept covering multiple domains as our world becomes increasingly connected to enable the Internet of Things (IoT). In addition, the new wave of Artificial Intelligence (AI), particularly complex Machine Learning (ML) and Deep Learning (DL) models, demands new computing paradigms beyond traditional general-purpose computing to make IoT a viable reality in a sustainable world.
In this seminar, Prof. Atienza will discuss new approaches to effectively design the next generation of edge AI computing architectures by taking inspiration from how biological computing systems operate. These novel bioinspired edge AI architectures include two key concepts. First, they accept computing inexactness and integrate multiple computing acceleration engines and low-power principles to create a new open-source eXtended and Heterogeneous Energy-Efficient hardware Platform (called x-HEEP). Second, x-HEEP can be instantiated for different edge AI application domains to operate ensembles of neural networks that improve the robustness of ML/DL outputs at the system level, while minimizing memory and computation resources for the target application. Overall, x-HEEP instantiations for edge AI applications include in-memory computing or run-time reconfigurable coarse-grained accelerators to minimize energy according to the required precision of the target application.
Biography:
Prof. David Atienza is a professor of Electrical and Computer Engineering and leads both the Embedded Systems Laboratory (ESL) and the new EcoCloud Sustainable Computing Center at EPFL, Switzerland. He received his M.Sc. and Ph.D. degrees in Computer Science and Engineering from UCM (Spain) and IMEC (Belgium). His research interests include system-level design methodologies for high-performance multi-processor system-on-chip (MPSoC) and low-power Internet-of-Things (IoT) systems, including edge AI architectures for wearables and IoT systems, as well as thermal-aware design for MPSoCs and many-core servers. He has co-authored more than 400 papers and two books, and holds 14 licensed patents in these topics. He served as DATE General Chair and Program Chair and is currently Editor-in-Chief of IEEE TCAD. Among other honors, Dr. Atienza has received the ICCAD 10-Year Retrospective Most Influential Paper Award, the DAC Under-40 Innovators Award, the IEEE TC-CPS Mid-Career Award, and the ACM SIGDA Outstanding Faculty Award. He is a Fellow of IEEE, a Fellow of ACM, served as IEEE CEDA President (2018-2019), and is currently the Chair of the European Design Automation Association (EDAA).
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
October 2023
20 October
10:00 am - 11:00 am
Heads-Up Computing: Towards The Next Generation Interactive
Location
Lecture Theatre 5 (1/F), Lady Shaw Building (LSB)
Category
Seminar Series 2023/2024
Speaker:
Prof. Shengdong Zhao
Associate Professor
Department of Computer Science, National University of Singapore
Abstract:
Heads-up computing is an emerging concept in human-computer interaction (HCI) that focuses on natural and intuitive interaction with technology. By making technology more seamlessly integrated into our lives, heads-up computing has the potential to revolutionize the way we interact with devices. With the rise of large language models (LLMs) such as ChatGPT and GPT4, the vision of heads-up computing is becoming much easier to realize. The combination of LLMs and heads-up computing can create more proactive, personalized, and responsive systems that are more human-centric. However, technology is a double-edged sword. While technology provides us with great power, it also comes with the responsibility to ensure that it is used ethically and for the benefit of all. That’s why it is essential to place fundamental human values at the center of research programs and work collaboratively among disciplines. As we navigate through this historic transition, it is crucial to shape a future that reflects our values and enhances our quality of life.
Biography:
Dr. Shengdong Zhao is an Associate Professor in the Department of Computer Science at the National University of Singapore, where he established and leads the NUS-HCI research lab. He received his Ph.D. degree in Computer Science from the University of Toronto and a Master’s degree in Information Management & Systems from the University of California, Berkeley. With a wealth of experience in developing new interface tools and applications, Dr. Zhao regularly publishes his research in top-tier HCI conferences and journals. He has also worked as a senior consultant with the Huawei Consumer Business Group in 2017. In addition to his research, Dr. Zhao is an active member of the HCI community, frequently serving on program committees for top HCI conferences and as the paper chair for the ACM SIGCHI 2019 and 2020 conferences. For more information about Dr. Zhao and the NUS-HCI lab, please visit http://www.shengdongzhao.com and http://www.nus-hci.org .
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
16 October
1:00 pm - 2:00 pm
Robust AI for Security
Location
L2, 1/F, Science Centre (SC L2), CUHK
Category
Seminar Series 2023/2024
Speaker:
Prof. Yizheng Chen
Assistant Professor
Department of Computer Science, University of Maryland
Abstract:
Artificial Intelligence is becoming more powerful than ever: GitHub Copilot suggests code to developers, and Large Language Model (LLM) plugins will soon assist with many tasks in our daily lives. We can utilize the power of AI to solve security problems, but such AI needs to be robust against new attacks and new vulnerabilities.
In this talk, I will first discuss how to develop robust AI techniques for malware detection. Our research finds that after training an Android malware classifier on one year's worth of data, the F1 score quickly dropped from 0.99 to 0.76 after 6 months of deployment on new test samples. I will present new methods to make machine learning for Android malware detection more effective against data distribution shift. My vision is that continuous learning with a human-in-the-loop setup can achieve robust malware detection. Our results show that, to maintain a steady F1 score over time, we can achieve an 8X reduction in the labels needed from security analysts.
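A schematic of that continuous-learning loop (our own toy, with synthetic drifting features in place of real malware data; scikit-learn's SGDClassifier stands in for the detector):

import numpy as np
from sklearn.linear_model import SGDClassifier   # needs scikit-learn >= 1.1

rng = np.random.default_rng(0)

def month(t, n=500):
    # Synthetic "apps": the discriminative features rotate over time,
    # mimicking the distribution shift seen by deployed malware detectors.
    y = rng.integers(0, 2, n)
    d = np.zeros(20)
    d[t % 20] = d[(t + 1) % 20] = 2.0
    return rng.normal(size=(n, 20)) + y[:, None] * d, y

clf = SGDClassifier(loss="log_loss", random_state=0)
X0, y0 = month(0)
clf.partial_fit(X0, y0, classes=[0, 1])

BUDGET = 25                                  # analyst labels per month
for t in range(1, 7):
    X, y = month(t)                          # new, initially unlabeled batch
    proba = clf.predict_proba(X)[:, 1]
    pick = np.argsort(np.abs(proba - 0.5))[:BUDGET]   # most uncertain samples
    clf.partial_fit(X[pick], y[pick])        # the human-in-the-loop step:
                                             # analysts supply these labels
    print(f"month {t}: accuracy {clf.score(X, y):.2f} using {BUDGET} labels")

The point of the sketch is the label economics: only the samples the model is least sure about are sent to analysts, rather than relabeling every month's data.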
Next, I will discuss the potential of using large language models to solve security problems, using vulnerable source code detection as a case study. We propose and release a new vulnerable source code dataset, DiverseVul. Using the new dataset, we study 11 model architectures belonging to 4 families for vulnerability detection. Our results indicate that developing code-specific pre-training tasks is a promising research direction for using LLMs for security. We also demonstrate an important generalization challenge for the deployment of deep learning-based models.
In closing, I will discuss security issues of LLMs and future research directions.
Biography:
Yizheng Chen is an Assistant Professor of Computer Science at University of Maryland. She works at the intersection of AI and security. Her research focuses on AI for Security and robustness of AI models. Previously, she received her Ph.D. in Computer Science from the Georgia Institute of Technology, and was a postdoc at University of California, Berkeley and Columbia University. Her work has received an ACM CCS Best Paper Award Runner-up and a Google ASPIRE Award. She is a recipient of the Anita Borg Memorial Scholarship.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
September 2023
27 September
10:00 am - 11:30 am
Geometric Robot Learning for Generalizable Skills Acquisition
Location
Room 123, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Prof. Xiaolong Wang
Associate Professor
Department of Electrical and Computer Engineering, University of California, San Diego
Abstract:
Robot learning has witnessed significant progress in generalization over the past few years. At the heart of this generalization, advances in representation learning, such as image and text foundation models, play an important role. While these achievements are encouraging, most tasks conducted so far are relatively simple. In this talk, I will describe our recent efforts on learning generalizable skills, focusing on tasks with complex physical contact and geometric reasoning. Specifically, I will discuss our research on: (i) the use of a large number of low-cost, binary force sensors to enable Sim2Real manipulation; (ii) unifying 3D and semantic representation learning to generalize policy learning across diverse objects and scenes. I will showcase the real-world applications of our research, including dexterous manipulation, language-driven manipulation, and legged locomotion control.
Biography:
Xiaolong Wang is an Assistant Professor in the ECE department at the University of California, San Diego, affiliated with the TILOS NSF AI Institute. He received his Ph.D. in Robotics at Carnegie Mellon University. His postdoctoral training was at the University of California, Berkeley. His research focuses on the intersection between computer vision and robotics. His specific interest lies in learning 3D and dynamics representations from videos and physical robotic interaction data. These comprehensive representations are utilized to facilitate the learning of robot skills, with the goal of generalizing the robot to interact effectively with a wide range of objects and environments in the real physical world. He is the recipient of the NSF CAREER Award, Intel Rising Star Faculty Award, and Research Awards from Sony, Amazon, Adobe, and Cisco.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
27 September
2:00 pm - 3:00 pm
Disentangled Representation from Generative Networks
Location
L2, 1/F, Science Centre (SC L2), CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. LIU Sifei
Abstract:
Disentangled representation in computer vision refers to encoding visual data into distinct, independent factors. These representations are critical for enhancing interpretability, improving generalization across tasks, and enabling controlled manipulation of specific visual attributes. Learning disentangled representation is challenging, primarily because obtaining ground-truth factorizations is often elusive.
In this talk, I will discuss our latest efforts to extract disentangled representations from GANs and diffusion models, for both 2D images and 3D textured shapes. I will demonstrate how, in the absence of annotations, our approaches can discern and extract fine-grained structural information, such as correspondence maps, in a self-supervised manner. Building on this space, I will introduce our work on a generalizable network designed for controlled generation and editing in a feed-forward paradigm. Additionally, I will spotlight our recent exploration into generating hand-object interactions, leveraging the disentanglement of layout and content through image diffusion models.
Biography:
Dr. LIU Sifei is a staff-level Senior Research Scientist at NVIDIA, where she is part of the LPR team led by Jan Kautz. Her work primarily revolves around developing generalizable visual representations and data-efficient learning for images, videos, and 3D content. Prior to this, she pursued her Ph.D. at the VLLAB under the guidance of Ming-Hsuan Yang. Sifei has received several prestigious awards and recognitions: she was honored with the Baidu Graduate Fellowship in 2013, followed by the NVIDIA Pioneering Research Award in 2017 and the Rising Star in EECS accolade in 2019. Additionally, she was nominated for the VentureBeat Women in AI Award in 2020.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
21 September
4:30 pm - 5:30 pm
Towards Scalable, Secure and Privacy-Preserving Metaverse
Location
L1, 1/F, Science Centre (SC L1), CUHK
Category
Seminar Series 2023/2024
Speaker:
Prof. DAI Hong-Ning
Associate Professor
Department of Computer Science, Hong Kong Baptist University (HKBU)
Abstract:
The metaverse is constructed from multiple technologies, including virtual reality (VR), augmented reality (AR), mixed reality (MR), artificial intelligence (AI), digital twins (DT), blockchain, and 5G communications. Its advent has led to a proliferation of VR/AR apps on top of diverse VR/AR devices, such as Meta Quest 2, MS Hololens, Sony PlayStation VR, ByteDance Pico, and Apple Vision Pro. Meanwhile, diverse metaverse applications have emerged, in gaming, healthcare, industry, the creator economy, and digital arts. However, the development of the metaverse is still at an early stage because of the complexity and heterogeneity of the entire system, which cannot yet scale to meet the increasing number of participants or the stringent demands of metaverse applications. Moreover, emerging security vulnerabilities and privacy-leakage concerns have also prevented the metaverse from gaining wide adoption. In this talk, I will first briefly review the metaverse and its relevant technologies. I will then elaborate on its challenges as well as potential solutions. Finally, I will discuss several future directions in this promising area.
Biography:
Hong-Ning Dai is an associate professor in the Department of Computer Science, Hong Kong Baptist University (HKBU). He obtained a Ph.D. degree in Computer Science and Engineering from The Chinese University of Hong Kong. Before joining HKBU, he gained more than 10 years of academic experience at The Chinese University of Hong Kong, Macau University of Science and Technology (Macau), and Lingnan University (Hong Kong). His current research interests include the Internet of Things, blockchain, and big data analytics. Prof. Dai has published more than 200 papers in refereed journals and conferences, and his publications have received more than 15,000 citations. He was included in the world's top 2% of scientists for career-long impact (2021, 2022) by Stanford University, USA, and was conferred the AI 2000 Most Influential Scholar Award (Honorable Mention) in Internet of Things in 2023. He holds 1 U.S. patent. He is a senior member of IEEE and ACM. Prof. Dai has served as an associate editor for IEEE Communications Surveys & Tutorials, IEEE Transactions on Intelligent Transportation Systems, IEEE Transactions on Industrial Informatics, IEEE Transactions on Industrial Cyber-Physical Systems, Ad Hoc Networks (Elsevier), and Connection Science (Taylor & Francis).
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
20 September
4:00 pm - 5:00 pm
The da Vinci Research Kit: System Description, Research Highlights, and Surgical Robotics Challenge
Location
CK TSE, G/F, Chung Chi College Elisabeth Luce Moore Library
Category
Seminar Series 2023/2024
Speaker:
Prof. Peter Kazanzides
Research Professor
Department of Computer Science, Johns Hopkins University
Abstract:
The da Vinci Research Kit (dVRK) is an open research platform that couples open-source control electronics and software with the mechanical components of the da Vinci surgical robot. This presentation will describe the dVRK system architecture, followed by selected research enabled by this system, including mixed reality for the first assistant, autonomous camera motion, and force estimation for bilateral teleoperation. The presentation will conclude with an overview of the AccelNet Surgical Robotics Challenge, which includes both simulated and physical environments.
Biography:
Peter Kazanzides received the Ph.D. degree in electrical engineering from Brown University in 1988. He began work on surgical robotics in March 1989 as a postdoctoral researcher at the IBM T.J. Watson Research Center and co-founded Integrated Surgical Systems (ISS) in November 1990. As Director of Robotics and Software at ISS, he was responsible for the design, implementation, validation, and support of the ROBODOC System, which has been used for more than 20,000 hip and knee replacement surgeries. Dr. Kazanzides joined Johns Hopkins University in December 2002 and currently holds an appointment as a Research Professor of Computer Science. His research focuses on computer-integrated surgery, space robotics, and mixed reality.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
12 September
9:30 am - 10:30 am
Smart Reconfigurable Computing for GNN and Transformer using Agile High Level Synthesis
Location
Zoom
Category
Seminar Series 2023/2024
Speaker:
Dr. HAO Cong, Callie
Assistant Professor
Department of Electrical and Computer Engineering (ECE), Georgia Institute of Technology (GaTech)
Abstract:
In this talk, we introduce two architectures: one for graph neural networks (GNNs), called FlowGNN, and one for vision transformers (ViTs), called Edge-MoE. FlowGNN proposes a generic dataflow architecture for GNN acceleration that supports a wide range of GNN models without graph pre-processing. GNNBuilder is then introduced as an automated, end-to-end GNN accelerator generation framework, allowing the generation of accelerators for various GNN models with minimal overhead. Next, Edge-MoE presents an FPGA accelerator for multi-task Vision Transformers with architectural innovations, achieving improved energy efficiency compared to GPUs and CPUs. The talk demonstrates the performance of these approaches, with code and measurements publicly available. Finally, we briefly introduce LightningSim, a fast simulation tool for High-Level Synthesis (HLS) designs that can significantly improve HLS design simulation speed.
Biography:
Dr. HAO Cong, Callie is an assistant professor in ECE at Georgia Tech. She received the Ph.D. degree in Electrical Engineering from Waseda University in 2017. Her primary research interests lie in the joint area of efficient hardware design and machine learning algorithms, as well as reconfigurable and high-efficiency computing and agile electronic design automation tools.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96351056844?pwd=cDBJcVY3ZHlGMSt2V0FUQVdUVnAwZz09
Meeting ID: 963 5105 6844
Passcode: 471978
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
01 September
11:00 am - 12:00 pm
An Evolution of Learning Neural Implicit Representations for 3D Shapes
Location
Room 407, 4/F, William M W Mong Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor ZHANG Hao, Richard
Amazon Scholar, Professor
School of Computing Science, Simon Fraser University, Canada
Abstract:
Neural implicit representations are the immediate precursors to neural radiance fields (NeRF). In a short span of only four years, they have quickly become the representation of choice for learning reconstructive and generative models of 3D shapes. Unlike traditional convolutional neural networks, which have been widely applied to reason about images and video, neural implicit models encode shape boundaries in a continuous manner, leading to superior visual quality; they are also amenable to simple network architectures, facilitating a variety of extensions and adaptations. In this talk, I will recount a brief history of the development of neural implicit representations, focusing mainly on several paths of follow-up from our recent works, including structured implicit models, direct mesh generation, CSG assemblies, and the use of contextual, query-specific feature encoding for category-agnostic and generalizable shape representation learning.
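For readers unfamiliar with the representation, a toy sketch of the core interface follows: a function maps a 3D coordinate to a signed distance, and the shape is the zero level set. An analytic sphere stands in here for a trained coordinate network:

import numpy as np

def sdf(p):                         # stand-in for a trained coordinate MLP
    return np.linalg.norm(p) - 1.0  # unit sphere: negative inside, positive outside

# The representation is continuous: it can be queried at any point and any
# resolution, with no fixed voxel grid or mesh.
for p in [np.zeros(3), np.array([0.5, 0.5, 0.5]), np.array([2.0, 0.0, 0.0])]:
    print(p, round(float(sdf(p)), 3), "inside" if sdf(p) < 0 else "outside")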
Biography:
ZHANG Hao, Richard is a professor in the School of Computing Science at Simon Fraser University, Canada. Currently, he holds a Distinguished University Professorship and is an Amazon Scholar. Richard earned his Ph.D. from the University of Toronto, and MMath and BMath degrees from the University of Waterloo. His research is in computer graphics and visual computing with special interests in geometric modeling, shape analysis, 3D vision, geometric deep learning, as well as computational design and fabrication. He has published more than 180 papers on these topics, including over 60 articles in SIGGRAPH (+Asia) and ACM Transactions on Graphics (TOG), the top venue in computer graphics. Awards won by Richard include a Canadian Human-Computer Communications Society Achievement Award in Computer Graphics (2022), a Google Faculty Award (2019), a National Science Foundation of China Overseas Outstanding Young Researcher Award (2015), an NSERC Discovery Accelerator Supplement Award (2014), a Best Dataset Award from ChinaGraph (2020), as well as faculty grants/gifts from Adobe, Autodesk, Google, and Huawei. He and his students have won the CVPR 2020 Best Student Paper Award and Best Paper Awards at SGP 2008 and CAD/Graphics 2017.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
August 2023
28 August
2:00 pm - 3:00 pm
Towards predictive spatiotemporal modeling of single cells
Location
Room 803, 8/F, William M W Mong Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. Xiaojie Qiu
Incoming Assistant Professor
Department of Genetics, Department of Computer Science, Stanford University
Abstract:
Single-cell RNA-seq, together with RNA velocity and metabolic labeling, reveals cellular states and transitions at unprecedented resolution. Fully exploiting these data, however, requires kinetic models capable of unveiling governing regulatory functions.
In the first part of my talk, I will introduce an analytical framework, dynamo (https://github.com/aristoteleo/dynamo-release), and highlight dynamo's power to overcome fundamental limitations of conventional splicing-based RNA velocity analyses and enable accurate velocity estimation on a metabolically labeled human hematopoiesis scRNA-seq dataset. Furthermore, differential geometry analyses reveal mechanisms driving early megakaryocyte appearance and elucidate asymmetrical regulation within the PU.1-GATA1 circuit. Leveraging the least-action-path method, dynamo accurately predicts drivers of numerous hematopoietic transitions. Finally, in silico perturbations predict cell-fate diversions induced by gene perturbations. Dynamo thus represents an important step in advancing quantitative and predictive theories of cell-state transitions.
Cells do not live in a vacuum, but in a milieu defined by cell–cell communication that can be quantified via recent advances in spatial transcriptomics. In the second part of my talk, I will present Spateo, a general framework for quantitative spatiotemporal modeling of single-cell resolution spatial transcriptomics. Spateo develops a comprehensive framework of cell-cell interaction to reveal spatial effects of niche factors and cell type-specific ligand-receptor interactions. Furthermore, Spateo reconstructs 3D models of whole embryos and performs 3D morphometric analyses. Lastly, Spateo introduces the concept of a "morphometric vector field" of cell migrations and integrates spatial differential geometry to unveil regulatory programs underlying various organogenesis patterns of Drosophila. Thus, Spateo enables the study of the ecology of organs at a molecular level in 3D space, beyond isolated single cells.
Biography:
Dr. Xiaojie Qiu is an incoming assistant professor at the Department of Genetics, the BASE program, and the Department of Computer Science at Stanford. Xiaojie's Ph.D. work at the University of Washington with Dr. Cole Trapnell made substantial contributions to the field of single-cell genomics, exemplified by the development of Monocle 2 and Monocle 3, which can accurately and robustly reconstruct complex developmental trajectories from scRNA-seq data. During his postdoc at the Whitehead Institute with Dr. Jonathan Weissman, Xiaojie developed Dynamo (aristoteleo/dynamo-release), which infers absolute RNA velocity from metabolic-labeling-enabled single-cell RNA-seq, reconstructs continuous vector fields that predict the fates of individual cells, employs differential geometry to extract underlying gene regulatory interactions, and ultimately predicts optimal reprogramming paths and makes nontrivial in silico perturbation predictions. Recently he also developed a powerful toolkit, Spateo (aristoteleo/spateo-release), for advanced multi-dimensional spatiotemporal modeling of single-cell resolution spatial transcriptomics; in addition to the capabilities described in the abstract above, Spateo delivers novel methods for digitizing spatial layers/columns to identify spatially polar genes.
The Qiu lab at Stanford will officially start on Dec. 16, 2024. Xiaojie will continue leveraging his unique background in single-cell genomics, mathematical modeling, and machine learning to lead a research team that bridges the gap between the "big data" from single-cell and spatial genomics and quantitative/predictive modeling, in order to address fundamental questions in mammalian cell fate transitions, especially those of heart development and disease. The lab will pursue four main directions: 1) dissect the mechanisms of mammalian cell differentiation, reprogramming, and maintenance, including that of cardiac cells, through differentiable deep learning frameworks; 2) integrate multi-omics, harmonize short-term RNA velocities with long-term lineage tracing, and apply such methods to heart development and congenital heart disease; 3) build predictive in silico 3D spatiotemporal models of mammalian organogenesis, with a focus on heart morphogenesis; and 4) establish a foundational software ecosystem for predictive and mechanistic modeling of single-cell and spatial transcriptomics.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
14 August
11:00 am - 12:00 pm
The characteristics and relationships between deep generative modelling approaches
Location
Room 1027, 10/F, Ho Sin-hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Professor Chris G. Willcocks
Associate Professor
Department of Computer Science, Durham University
Abstract:
There are several key equations in the generative modelling literature, most of which estimate the probability of data. Each modelling approach (Flows, EBMs, VAEs, GANs, OT, Autoregressive, …) has trade-offs in terms of (i) modelling quality, (ii) inference time/depth, and (iii) distribution coverage/mode collapse. Building on findings in our TPAMI 2022 review, “Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models”, this talk covers high-level theoretical insights into the different generative modelling paradigms, discussing where gaps remain in the current theory and looking at promising directions, such as optimal transport theory and implicit networks, to address upcoming challenges.
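As one concrete instance of "estimating the probability of data", the sketch below computes a Monte Carlo estimate of the VAE's evidence lower bound (ELBO) for a toy one-dimensional Gaussian model; the numbers and model are invented for illustration, not taken from the review:

import numpy as np
rng = np.random.default_rng(0)

x = 1.5               # one observed datum
mu, sigma = 0.8, 0.5  # encoder output: q(z|x) = N(mu, sigma^2)

# ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)), with p(z) = N(0, 1)
z = mu + sigma * rng.standard_normal(10_000)                 # reparameterized samples
log_px_given_z = -0.5 * ((x - z) ** 2 + np.log(2 * np.pi))   # decoder p(x|z) = N(z, 1)
kl = 0.5 * (sigma**2 + mu**2 - 1.0 - np.log(sigma**2))       # closed-form Gaussian KL
elbo = log_px_given_z.mean() - kl
print(f"ELBO estimate: {elbo:.3f} (a lower bound on log p(x))")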
Biography:
Chris G. Willcocks is an associate professor in computer science at Durham University where he leads the deep learning and reinforcement learning modules. His research is in theoretical aspects of deep learning, with a particular emphasis on non-adversarial methodologies such as probabilistic diffusion models and stochastic processes. Research within his group has led to several impactful results in generative modelling including an extension of diffusion models to infinite dimensions without requiring latent vector compression, and an approach that shows you don’t need encoders in traditional autoencoders. He is a Fellow of the Higher Education Academy (FHEA), an area chair for BMVC, and has authored over 30 peer-reviewed publications in venues such as ICLR, CVPR, ECCV and TPAMI.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
14 August
3:00 pm - 4:00 pm
Fair and Private Backpropagation: A Scalable Framework for Fair and Private Learning
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Meisam Razaviyayn
Associate Professor
University of Southern California
Abstract:
Machine learning models are increasingly used in high-stakes decision-making systems. In such applications, a major concern is that these models sometimes discriminate against certain demographic groups, such as individuals of a certain race, gender, or age. Another major concern in these applications is the violation of the privacy of users. While fair learning algorithms have been developed to mitigate discrimination issues, these algorithms can still leak sensitive information, such as individuals’ health or financial records. Utilizing the notion of differential privacy (DP), prior works aimed at developing learning algorithms that are both private and fair. However, existing algorithms for DP fair learning require a full batch of data in each iteration of the algorithm to be able to impose fairness. Moreover, the fairness/accuracy of the model can degrade significantly in prior DP training algorithms. In this work, we developed a mini-batch (stochastic) differentially private algorithm for fair learning (with a theoretical convergence guarantee). Here, the term “stochastic” refers to the fact that our proposed algorithm converges even when mini-batches of data are used at each iteration (i.e. stochastic optimization). Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds. In addition, our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes. Our numerical experiments show that the proposed algorithm consistently offers significant performance gains over the state-of-the-art baselines, and can be applied to larger-scale problems with non-binary target/sensitive attributes.
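To give a flavor of what a mini-batch private-and-fair step might look like, here is a hypothetical sketch (not the speaker's algorithm): DP-SGD-style per-sample clipping and noise combined with a crude demographic-parity penalty on logistic regression. The function name, constants, and the fairness-gradient approximation are all illustrative:

import numpy as np
rng = np.random.default_rng(0)

def dp_fair_step(w, X, y, group, lam=1.0, clip=1.0, noise=0.5, lr=0.1):
    p = 1 / (1 + np.exp(-X @ w))                       # logistic predictions
    grads = (p - y)[:, None] * X                       # per-sample loss gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip)      # clip each sample's gradient
    g = grads.mean(0) + noise * clip * rng.standard_normal(w.shape) / len(X)
    gap = p[group == 1].mean() - p[group == 0].mean()  # demographic-parity gap
    # the fairness gradient is only sketched here (sign of the gap):
    return w - lr * (g + lam * np.sign(gap) * X.mean(0))

w = np.zeros(3)
X = rng.standard_normal((32, 3))
y = rng.integers(0, 2, 32).astype(float)
group = np.arange(32) % 2                              # binary sensitive attribute
print(dp_fair_step(w, X, y, group))                    # one private, fairness-aware step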
Biography:
Meisam Razaviyayn is an associate professor of Industrial and Systems Engineering, Computer Science, Quantitative and Computational Biology, and Electrical Engineering at the University of Southern California. He is also the associate director of the USC-Meta Center for Research and Education in AI and Learning. Prior to joining USC, he was a postdoctoral research fellow in the Department of Electrical Engineering at Stanford University. He received his PhD in Electrical Engineering, with a minor in Computer Science, from the University of Minnesota, where he also obtained his M.Sc. degree in Mathematics. Meisam Razaviyayn is the recipient of the 2022 NSF CAREER Award, the 2022 Northrop Grumman Excellence in Teaching Award, the 2021 AFOSR Young Investigator Award, the 2021 3M Non-Tenured Faculty Award, the 2020 ICCM Best Paper Award in Mathematics, the IEEE Data Science Workshop Best Paper Award in 2019, and the Signal Processing Society Young Author Best Paper Award in 2014, and he was a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization in 2013 and 2016. He is also a silver medalist of Iran’s National Mathematics Olympiad. His research interests include the design and study of fundamental aspects of optimization algorithms that arise in the modern data science era.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
02 August
2:00 pm - 3:00 pm
On the Model-misspecification of Reinforcement Learning
Location
Room 1027, 10/F, Ho Sin-hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Dr. YANG Lin
Assistant Professor
Electrical and Computer Engineering Department, University of California, Los Angeles
Abstract:
The success of reinforcement learning (RL) heavily depends on the approximation of functions such as policy, value, or models. Misspecification—a mismatch between the ground-truth and the best function approximators—often occurs, particularly when the ground-truth is complex. Because the misspecification error does not disappear even with an infinite number of samples, it is crucial to design algorithms that demonstrate robustness under misspecification. In this talk, we will first present a lower bound illustrating that RL can be inefficient (e.g., possessing exponentially large complexity) if the features can only represent the optimal value functions approximately but with high precision. Subsequently, we will show that this issue can be mitigated by approximating the transition probabilities. In such a setting, we will demonstrate that both policy-based and value-based approaches can be resilient to model misspecifications. Specifically, we will show that these methods can maintain accuracy even under large, locally-bounded misspecification errors. Here, the function class might have an Ω(1) approximation error in specific states and actions, but it remains small on average under a policy-induced state distribution. Such robustness to model misspecification partially explains why practical algorithms perform so well, paving the way for new directions in understanding model misspecifications.
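A tiny worked example of the "large locally, small on average" regime described above (the numbers are invented purely for illustration):

import numpy as np

err = np.array([1.0, 0.01, 0.01, 0.01])    # Omega(1) approximation error on state 0 only
d_pi = np.array([0.01, 0.33, 0.33, 0.33])  # policy-induced state distribution (state 0 is rare)
print("worst-case error:", err.max())         # 1.0
print("average error   :", float(err @ d_pi)) # ~0.02, small under the policy's visitation

The worst-case error is constant, yet the error that the algorithm actually pays, averaged under where the policy goes, stays small.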
Biography:
Dr. Lin Yang is an Assistant Professor in the Electrical and Computer Engineering Department at the University of California, Los Angeles. His current research focuses on the theory and applications of reinforcement learning. Previously, he served as a postdoctoral researcher at Princeton University. He earned two Ph.D. degrees, in Computer Science and in Physics & Astronomy, from Johns Hopkins University. Prior to that, he obtained a Bachelor’s degree in Math & Physics from Tsinghua University. Dr. Yang has numerous publications in premier machine learning venues like ICML and NeurIPS, and has served as an area chair for these conferences. He has received an Amazon Faculty Award, a Simons-Berkeley Research Fellowship, the JHU MINDS Best Dissertation Award, and the JHU Dean Robert H. Roy Fellowship.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
June 2023
16 June
2:30 pm - 3:30 pm
Towards Application-oriented Big Data and ML Systems
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Professor ZHANG Hong
Assistant Professor
Cheriton School of Computer Science, University of Waterloo
Abstract:
The world is undergoing a data revolution. Emerging big data and ML applications are harnessing massive volumes of data to uncover hidden patterns, correlations, and other valuable insights, transforming information and knowledge production. As the data volume keeps growing explosively, these applications require high-performance big data and ML systems to efficiently transfer, store, and process data at a massive scale.
In this talk, I advocate an application-oriented principle to design big data and ML systems: fully exploiting application-specific structures — communication patterns, execution dependencies, ML model structures, etc. — to suit application-specific performance demands. I will present how I have developed the application-oriented principle throughout my PhD-Postdoc-Faculty research, and how I have applied it to build systems tailored for different big data and ML applications.
Biography:
ZHANG Hong is currently an assistant professor at the Cheriton School of Computer Science at the University of Waterloo. Previously, he was a postdoctoral scholar at UC Berkeley and obtained his Ph.D. degree in Computer Science and Engineering from HKUST. Hong is broadly interested in computer systems and networking, with special focuses on distributed data analytics and ML systems, data center networking, and serverless computing. His research work appeared in prestigious systems and networking conferences, such as SIGCOMM, NSDI, and EuroSys. He has been awarded the Google Ph.D. Fellowship in systems and networking.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
May 2023
31 May
10:00 am - 11:00 am
Contemporary Visual Computing: Storytelling & Scene Graph Generation
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Professor Chang Wen Chen
Chair Professor of Visual Computing
The Hong Kong Polytechnic University
Abstract:
Visual computing is, traditionally, a generic term for the computer science disciplines that develop algorithms for images, videos, and other types of visual data. This talk shall focus on contemporary visual computing design from several systematic perspectives. Contemporary visual computing has been substantially advanced to enhance both human understanding and machine intelligence. The ultimate goal for human understanding is for visual computing algorithms to generate human-like storytelling with a rational contextual setting and the capability to apply general knowledge. For machine intelligence, a more appropriate form of representing semantics from visual data is to utilize a well-structured scene graph generation approach to characterize the logical relationships among the subjects and objects detected in the visual data. We shall report our recent research activities in developing advanced visual computing algorithms for both human understanding and machine intelligence. These exemplary applications demonstrate several unique visual computing capabilities in understanding the real world with more accurate contextual and environmental interpretations. They also illustrate the technical challenges we face and the potential impacts that contemporary visual computing systems are making, including the paradigm-shifting visual semantic communication design for future 6G mobile networks.
Biography:
Chang Wen Chen is currently Chair Professor of Visual Computing at The Hong Kong Polytechnic University. Before his current position, he served as Dean of the School of Science and Engineering at The Chinese University of Hong Kong, Shenzhen from 2017 to 2020, and concurrently as Deputy Director of Peng Cheng Laboratory from 2018 to 2021. Previously, he was an Empire Innovation Professor at the State University of New York at Buffalo (SUNY) from 2008 to 2021 and the Allan Henry Endowed Chair Professor at the Florida Institute of Technology from 2003 to 2007.
He has served as an Editor-in-Chief for IEEE Trans. Multimedia (2014-2016) and IEEE Trans. Circuits and Systems for Video Technology (2006-2009). He has received many professional achievement awards, including ten (10) Best Paper Awards in premier publication venues, the prestigious Alexander von Humboldt Award in 2010, the SUNY Chancellor’s Award for Excellence in Scholarship and Creative Activities in 2016, and UIUC ECE Distinguished Alumni Award in 2019. He is an IEEE Fellow, a SPIE Fellow, and a Member of the Academia Europaea.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
24 May
11:00 am - 12:00 pm
Solving Extreme-Scale Problems on Sunway Supercomputers
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Prof. Haohuan Fu
Professor
Department of Earth System Science, Tsinghua University
Abstract:
Supercomputers, by definition the fastest computers in the world, have been important tools for making scientific discoveries and technology breakthroughs. In this talk, we will introduce the Sunway series of supercomputers, which demonstrate a superb example of integrating tens of millions of cores into a high-resolution numerical simulator or a large-scale machine learning engine, bringing opportunities to widen our knowledge boundaries in various domains. Application examples include ultra-high-resolution climate modeling and earthquake simulation, close-to-real-time quantum circuit simulation, unsupervised learning for nation-scale land cover mapping, and training large deep learning models of brain-scale complexity. Through these examples, we discuss the key issues and the potential of combining supercomputing and AI technologies to solve the major challenges that we face.
Biography:
Haohuan Fu is a professor in the Department of Earth System Science, Tsinghua University, and the deputy director of the National Supercomputing Center in Wuxi. Fu received his BE (2003) in CS from Tsinghua University, his MPhil (2005) in CS from the City University of Hong Kong, and his PhD (2009) in computing from Imperial College London. His research focuses on supercomputing architecture and software, and has led to three ACM Gordon Bell Prizes (nonhydrostatic atmospheric dynamic solver in 2016, nonlinear earthquake simulation in 2017, and random quantum circuit simulation in 2021).
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
11 May
4:00 pm - 5:00 pm
Probabilistic Sports Analytics
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Prof. Jin-Song Dong
Professor
School of Computing, National University of Singapore
Abstract:
Sports analytics encompasses the use of data science, artificial intelligence (AI), psychology, and advanced Internet of Things (IoT) devices to enhance sports performance, strategy, and decision-making. It involves collecting, processing, and interpreting cloud-based data from a variety of sources, such as video recordings, performance metrics, and scouting reports. The resulting insights aid in evaluating player and team performance, preventing injuries, and supporting coaches and team managers in making well-informed decisions to optimize resources and achieve superior outcomes.
One widely recognized formal method, Probabilistic Model Checking (PMC), has conventionally been employed in reliability analysis for intricate safety-critical systems. For instance, the reliability of an aircraft can be determined by evaluating the reliability of its individual components, including the engine, wings, and sensors. Our groundbreaking approach applies PMC to a novel domain: Sports Strategy Analytics. As an example, the reliability (winning percentage) of a sports player can be ascertained from the reliability (success rate) of their specific sub-skills (e.g., serve, forehand, and backhand in tennis).
In this presentation, we will discuss our recent research work, which involves the application of PMC, machine learning, and computer vision to the realm of sports strategy analytics. At the end of the presentation, we will also discuss the vision of a new international sports analytics conference series (https://formal-analysis.com/isace/2023/).
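To make the sub-skill idea above concrete, here is a toy illustration (a plain Markov-chain recursion in Python, not the PAT/PMC toolchain): deriving a tennis player's game-winning probability from a single sub-skill reliability, the point-win probability p:

def game_win_prob(p):
    q = 1 - p
    deuce = p * p / (p * p + q * q)   # P(win from deuce), solved in closed form

    def win(a, b):                    # P(win the game | current score a-b in points)
        if a >= 4 and a - b >= 2:
            return 1.0
        if b >= 4 and b - a >= 2:
            return 0.0
        if a >= 3 and b >= 3:         # deuce/advantage sub-chain
            if a == b:
                return deuce
            return p + q * deuce if a > b else p * deuce
        return p * win(a + 1, b) + q * win(a, b + 1)

    return win(0, 0)

print(round(game_win_prob(0.55), 3))  # a modest per-point edge compounds per game

The same compositional reasoning, component reliabilities rolled up into system reliability, is what PMC automates at scale.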
Biography:
Jin-Song Dong is a professor at the National University of Singapore. His research spans a range of fields, including formal methods, safety and security systems, probabilistic reasoning, sports analytics, and trusted machine learning. He co-founded the commercialized PAT verification system, which has garnered thousands of registered users from over 150 countries and received the 20-Year ICFEM Most Influential System Award. Jin Song also co-founded the commercialized trusted machine learning system Silas (www.depintel.com). He has received numerous best paper awards, including the ACM SIGSOFT Distinguished Paper Award at ICSE 2020.
He served on the editorial boards of ACM Transactions on Software Engineering and Methodology, Formal Aspects of Computing, and Innovations in Systems and Software Engineering, a NASA journal. He has successfully supervised 28 PhD students, many of whom have become tenured faculty members at leading universities worldwide. He is also a Fellow of the Institution of Engineers Australia. In his leisure time, Jin Song developed Markov Decision Process (MDP) models for tennis strategy analysis using PAT, assisting professional players with pre-match analysis (outperforming the world’s best). He is a Junior Grand Slam coach and takes pleasure in coaching tennis to his three children, all of whom have reached the #1 national junior ranking in Singapore/Australia. Two of his children have earned NCAA Division 1 full scholarships, while his second son, Chen Dong, played #1 singles for Australia in the Junior Davis Cup and participated in both the Australian Open and US Open Junior Grand Slams.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
09 May
10:00 am - 11:00 am
On the Efficiency and Robustness of Foundation Models
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Dr. CHENG Yu
Microsoft Research Redmond, USA
Abstract:
In recent years, we have been witnessing a paradigm shift in which foundation models, such as GPT-4, ChatGPT, and Codex, are consolidating into fewer but extremely large models that cover multiple modalities and tasks and significantly surpass the performance of standalone models. However, these extremely large models remain very expensive to adapt to new scenarios/tasks and to deploy for runtime inference in real-world applications, and they are vulnerable to crafted adversarial examples. In this talk, I will present the techniques we developed to enable foundation models to smoothly scale to small computational footprints and new tasks, and to robustly handle diverse/adversarial textual inputs. The talk also covers how these techniques are productionized in several applications, such as GitHub Copilot and the new Bing.
Biography:
Dr. CHENG Yu is a Principal Researcher at Microsoft Research and an Adjunct Professor at Rice University/Renmin University of China. Before joining Microsoft, he was a Research Staff Member at IBM Research & the MIT-IBM Watson AI Lab. He received his Ph.D. from Northwestern University in 2015 and his bachelor’s degree from Tsinghua University in 2010. His research covers deep learning in general, with specific interests in model compression and efficiency, deep generative models, and adversarial robustness. Yu has led several teams that productized these techniques for Microsoft-OpenAI core products (e.g., Copilot, DALL-E 2, ChatGPT, GPT-4). He serves or has served as an area chair for CVPR, NeurIPS, AAAI, IJCAI, ACMMM, WACV, and ECCV.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
08 May
11:00 am - 12:00 pm
Graph Reachability Algorithms for Program Analysis
Location
Room 1027, 10/F, Ho Sin-hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Prof. Qirun Zhang
Assistant Professor
School of Computer Science, Georgia Institute of Technology
Abstract:
Program analysis automatically reasons about program runtime behavior and provides mechanisms to determine whether a program’s execution will satisfy certain properties, offering a rich spectrum of methods for improving software reliability. A variety of program analysis problems can be formulated as graph reachability problems in edge-labeled graphs, and over the years we have witnessed the tremendous success of various graph-reachability-based program-analysis techniques. In this talk, I will discuss our work over the past three years on CFL-reachability, Dyck-reachability, and InterDyck-reachability.
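As a minimal sketch of the core idea (Dyck-1 reachability under the grammar S -> "(" S ")" | S S | eps, via the classic CFL-reachability worklist closure; real InterDyck solvers are far more elaborate):

def dyck_reach(n, opens, closes):
    # opens/closes: sets of (u, v) edges labeled "(" and ")". Returns all S-pairs.
    s = {(u, u) for u in range(n)}           # S -> eps: every node reaches itself
    work = list(s)
    while work:
        u, v = work.pop()
        new = set()
        for a, b in opens:                   # S -> ( S ): wrap with a matching pair
            if b == u:
                for c, d in closes:
                    if c == v:
                        new.add((a, d))
        new |= {(u, d) for (c, d) in s if c == v}  # S -> S S: extend on the right
        new |= {(a, v) for (a, b) in s if b == u}  # S -> S S: extend on the left
        for e in new - s:
            s.add(e)
            work.append(e)
    return s

# graph: 0 -(-> 1 -(-> 2 -)-> 3 -)-> 4 ; pairs like (1,3) and (0,4) are matched
print(sorted(dyck_reach(5, {(0, 1), (1, 2)}, {(2, 3), (3, 4)})))

In program analysis, the matched brackets typically encode balanced call/return edges or field store/load pairs.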
Biography:
Qirun Zhang is an Assistant Professor in Computer Science at Georgia Tech. His general research interests are in programming languages and software engineering, focusing on developing new static program analysis frameworks to improve software reliability. He has received a PLDI 2020 Distinguished Paper Award, an OOPSLA 2022 Distinguished Artifact award, an NSF CAREER Award, and an Amazon Research Award in Automated Reasoning. He served on the program committees of FSE, ICSE, ISSTA, OOPSLA, PLDI, and POPL.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
04 May
10:00 am - 11:00 am
Recent Advance on Neural Radiance Fields
Location
Room 804, 8/F, William M W Mong Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Prof. CAI Jianfei
Professor
Faculty of IT, Monash University
Abstract:
Neural Radiance Fields (NeRF) have emerged as a new paradigm for 3D representation, providing implicit shape information and view-dependent appearance simultaneously. Based on this new representation, seminal 3D generation approaches have been proposed that aim to generate photorealistic images from a given distribution in a 3D-aware and view-consistent manner, although their performance in 3D geometry reconstruction is limited. On the other hand, several works demonstrate that rendering neural implicit surfaces, where gradients are concentrated around surface regions, is able to produce a high-quality 3D reconstruction. However, they focus only on holistic scene representation and ignore the individual objects inside it, limiting potential downstream applications. In this talk, we will first present our recent ECCV’22 work, ObjectSDF, which provides an object-compositional neural implicit surface framework that can jointly reconstruct the scene and the objects inside it with only semantic masks. We will also introduce another ECCV’22 work of ours that can reconstruct a 3D scene modelled by NeRF, conditioned on a single-view semantic mask as input. Finally, we will discuss some future directions on this topic.
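As background on the representation itself, the sketch below evaluates the standard NeRF-style volume-rendering quadrature along a single ray, weighting each sample by transmittance times local opacity, T_i * (1 - exp(-sigma_i * delta_i)); the densities and colors are made up in place of a trained field:

import numpy as np

sigma = np.array([0.0, 0.1, 2.0, 5.0, 0.2])  # density at samples along one ray
color = np.array([0.0, 0.2, 0.9, 0.9, 0.1])  # (scalar) radiance at each sample
delta = 0.25                                 # spacing between samples

alpha = 1.0 - np.exp(-sigma * delta)         # opacity of each segment
T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance so far
weights = T * alpha
print("pixel value:", float(weights @ color))  # the opaque region dominates the ray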
Biography:
CAI Jianfei is a Professor at the Faculty of IT, Monash University, where he currently serves as the Head of the Data Science & AI Department. He is also a visiting professor at Nanyang Technological University (NTU). Before that, he was Head of the Visual and Interactive Computing Division and Head of the Computer Communications Division at NTU. His major research interests include computer vision, deep learning, and multimedia. He has successfully trained 30+ PhD students, three of whom received the NTU SCSE Outstanding PhD Thesis Award. Many of his PhD students joined leading IT companies such as Facebook, Apple, Amazon, and Adobe, or became faculty members at reputable universities. He is a co-recipient of paper awards at ACCV, ICCM, IEEE ICIP, and MMSP. He serves or has served as an Associate Editor for IJCV, IEEE T-IP, T-MM, and T-CSVT, and as Area Chair for CVPR, ICCV, ECCV, IJCAI, ACM Multimedia, ICME, and ICIP. He was the Chair of the IEEE CAS VSPC-TC during 2016-2018. He has also served as the leading TPC Chair for IEEE ICME 2012 and as the best paper award committee chair & co-chair for IEEE T-MM in 2020 and 2019. He will be the leading general chair for ACM Multimedia 2024. He is a Fellow of IEEE.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
03 May
10:00 am - 11:00 am
Adaptive and Effective Fuzzing: a Data-driven Approach
Location
Zoom
Category
Seminar Series 2022/2023
Speaker:
Mr. SHE Dongdong
PhD candidate
Department of Computer Science, Columbia University
Abstract:
Security vulnerabilities significantly impact our daily lives, from ransomware attacks costing billions of dollars every year to confidential data leakage in government, military, and industry. Fuzzing is a popular automated technique for catching these vulnerabilities in real-world programs. Despite their wide application in industry, existing fuzzers rely heavily on rule-based designs (i.e., they incorporate a set of static rules and heuristics). These fixed rules and heuristics often fail on diverse programs and severely limit fuzzing performance.
In this talk, I will present a novel approach to general fuzzing: a data-driven approach. Fuzzing is an iterative process; a data-driven approach extracts useful knowledge from the massive number of iterations and uses the learned knowledge to guide future fuzzing. Meanwhile, a data-driven approach lets us formulate fuzzing as a data-centric problem, bridging fuzzing and various other domains (e.g., machine learning, optimization, and social networks) and enabling adaptive and effective designs in the general fuzzing framework.
Biography:
SHE Dongdong is a PhD candidate in Computer Science at Columbia University. His research focuses on security and machine learning, particularly applying machine learning and other data-driven approaches to security problems. His work has been published at top-tier security and software engineering conferences (S&P, CCS, USENIX Security, and FSE). He is the recipient of an ACM CCS Best Paper runner-up award and was a finalist in the NYU CSAW applied research competition. Before attending Columbia, he obtained a Master’s in Computer Science from UC Riverside and a Bachelor’s in Electronic and Information Engineering from HUST.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92596540594?pwd=bEJKc0RlN3hXQVFNTWpmcWRmVnRFdz09
Meeting ID: 925 9654 0594
Passcode: 202300
Enquiries: Mr Jeff Liu at Tel. 3943 0624
April 2023
25 April
10:00 am - 11:00 am
Temporal-Spatial Re-configurable Approximate Computing Technologies
Location
Room 402, 4/F, William M W Mong Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Prof. Renyuan Zhang
Associate Professor
Nara Institute of Science and Technology
Abstract:
This talk introduces multi-grained re-configurable computing platforms that are elastic in both the space and time domains. As a preliminary, several approximate computing technologies from Prof. Zhang’s group are introduced for efficiently accelerating AI tasks. For the next generation of AI platforms, it is expected to explore disruptive computer architectures for ultra-high-speed, low-cost, and flexible tensor computation without relying on further benefits from Moore’s Law. For this purpose, temporal-spatial re-configurable accelerators are demanded: (1) an innovative mechanism for data processing is explored via snapshot (or, optionally, accumulative) observation of spiking (addressing time elasticity); (2) a multi-grained re-configurable architecture is developed on the basis of our novel neural network topology, “DiaNet” (addressing space elasticity).
Biography:
Prof. Renyuan Zhang (Senior Member, IEEE) received his M.E. degree from Waseda University in 2010 and his Ph.D. degree from The University of Tokyo in 2013. He was an Assistant Professor with the Japan Advanced Institute of Science and Technology from 2013 to 2017. He joined the Nara Institute of Science and Technology as an Assistant Professor in 2017 and has been an Associate Professor there since 2021. His research interests include analog–digital mixed circuits and approximate computing. He is a member of IEICE.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
19 April
4:00 pm - 5:00 pm
Overcoming Data Heterogeneity Challenges in Federated Learning
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Dr. Xiaoxiao Li,
Assistant Professor
Department of Electrical and Computer Engineering
The University of British Columbia (UBC)
Abstract:
Federated learning (FL) is a trending framework that enables multi-institutional collaboration in machine learning without sharing raw data. This presentation will discuss our ongoing progress in designing FL algorithms that embrace data heterogeneity for distributed data analysis in the FL setting. First, I will present our work on theoretically understanding FL training convergence and generalization using a neural tangent kernel, called FL-NTK. Then, I will present our algorithms for tackling data heterogeneity (in features and labels) and device heterogeneity, motivated by our theoretical foundation. Lastly, I will show promising results from applying our FL algorithms in real-world applications.
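For context, below is a minimal sketch of the FedAvg baseline on which such heterogeneity-aware algorithms build (a toy least-squares objective; this is not the speaker's FL-NTK or algorithms): each client takes local gradient steps on its own data, and the server averages the resulting models weighted by client data size:

import numpy as np
rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=5):
    for _ in range(steps):                       # local SGD on this client's data only
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

w_true = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):                               # four clients, each with private data
    X = rng.standard_normal((20, 3))
    clients.append((X, X @ w_true + 0.1 * rng.standard_normal(20)))

w = np.zeros(3)
for _ in range(10):                              # communication rounds
    local_models = [local_update(w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    w = np.average(local_models, axis=0, weights=sizes)  # server-side aggregation
print(w)                                         # approaches w_true

When client data distributions differ (the heterogeneity the talk addresses), the local models drift apart between rounds, which is exactly what makes plain averaging degrade.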
Biography:
Dr. Xiaoxiao Li is an Assistant Professor at the Department of Electrical and Computer Engineering at The University of British Columbia (UBC) starting August 2021. In addition, Dr. Li is an adjunct Assistant Professor at Yale University. Before joining UBC, Dr. Li was a Postdoc Research Fellow at Princeton University. Dr. Li obtained her Ph.D. degree from Yale University in 2020. Dr. Li’s research focuses on developing theoretical and practical solutions for enhancing the trustworthiness of AI systems in healthcare. Specifically, her recent research has been dedicated to advancing federated learning techniques and their applications in the medical field. Dr. Li’s work has been recognized with numerous publications in top-tier machine learning conferences and journals, including NeurIPS, ICML, ICLR, MICCAI, IPMI, ECCV, TMI, TNNLS, Medical Image Analysis, and Nature Methods.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
March 2023
31 March
8:00 am - 6:00 pm
Demystifying Fuzzing Strategies
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Professor Yuqun Zhang
Assistant Professor
Department of Computer Science and Engineering
Southern University of Science and Technology
Abstract:
Fuzzing (or fuzz testing) refers to feeding invalid, unexpected, or random data to programs to expose unexpected program behaviors (such as crashes, failing assertions, or memory leaks), which can be further inspected or analyzed to detect potential vulnerabilities/bugs. While there has recently been a growing trend of proposing new fuzzing techniques, limited attention has been paid to studying their common/representative strategies, e.g., exploring why and how exactly these strategies work. In this talk, I will discuss a rather common fuzzing strategy, namely Havoc, which randomly mutates seeds via a mutator-stacking mechanism and is widely adopted in coverage-guided fuzzers. I will show that it is essentially Havoc that dominates fuzzing effectiveness, including increasing coverage and exposing program bugs, rather than the strategies proposed by the coverage-guided fuzzers themselves. Moreover, the effectiveness of Havoc can be enhanced in rather simple ways.
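A hedged sketch of the Havoc idea, modeled loosely on the AFL family's scheme (the mutator set and constants here are illustrative, not the exact AFL ones): pick a random stack depth, then apply randomly chosen byte-level mutators in sequence:

import random

def bit_flip(buf):
    i = random.randrange(len(buf))
    buf[i] ^= 1 << random.randrange(8)    # flip one random bit
    return buf

def byte_set(buf):
    buf[random.randrange(len(buf))] = random.randrange(256)  # overwrite one byte
    return buf

def dup_chunk(buf):
    i = random.randrange(len(buf))
    return buf[:i] + buf[i:i + 4] + buf[i:]  # duplicate a short chunk in place

MUTATORS = [bit_flip, byte_set, dup_chunk]

def havoc(seed: bytes) -> bytes:
    buf = bytearray(seed)
    for _ in range(1 << random.randint(1, 5)):  # stack 2 to 32 mutations
        buf = bytearray(random.choice(MUTATORS)(buf))
    return bytes(buf)

random.seed(0)
print(havoc(b"GIF89a\x00\x01"))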
Biography:
Yuqun Zhang is an Assistant Professor in the Department of Computer Science and Engineering at Southern University of Science and Technology, Shenzhen, China. His research focuses on exploring new general-purpose and domain-specific quality assurance methods for software. His research output on fuzzing and taint analysis has been deployed at Tencent and Alibaba, successfully detecting hundreds of bugs/vulnerabilities. He received his PhD from UT Austin. He has been awarded an ACM SIGSOFT Distinguished Paper Award, with another paper nominated for the same award.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
30 March
10:00 am - 11:00 am
Huawei Seminar (in Mandarin)
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
6 lab managers from Huawei Cloud will hold a presentation and communication session in Room 121, HSH Engineering Building at The Chinese University of Hong Kong on March 30, from 10 to 11 am. They will introduce the following six innovative labs from Huawei Cloud:
- Algorithm Innovation Lab: Application of mathematical modeling and optimization algorithms in Huawei Cloud, presented by Dr. Wenli Zhou.
- Cloud Storage Innovation Lab: Introduction to building a high-performance, highly reliable, secure, and intelligent cloud-native storage platform (research areas include key technologies and core algorithms in block storage, object storage, file storage, memory storage, etc., including distributed data consistency, space management, metadata indexing, intelligent caching, etc.), presented by Dr. Xusheng Chen.
- Data Intelligence Innovation Lab: Provide right data to the right person at the right time, presented by Dr. Ke Xu.
- Availability Engineering Lab: Introduction to the related technologies of public cloud and large-scale distributed application architecture reliability and availability engineering, technology, and innovation capabilities center, presented by Ling Wei.
- Computing and Networking Innovation Lab: In the computing domain, focus on tapping the efficiency of large-scale computing resource reuse in Huawei Cloud + continuous research on next-generation autonomous cloud network systems in the networking domain, presented by Dr. Zengyin Yang.
- Cloud Database Innovation Lab: Innovating Cloud-Native Databases for Next-Gen Applications, presented by Dr. Hao Zhang.
Lab Introduction:
The Computing and Networking Innovation Lab focuses on the research and development of new computing and networking in Huawei Cloud. Positioned as a technical pre-research team for Huawei Cloud, it mainly studies two major areas of cloud computing:
- In the computing domain: focus on tapping the efficiency of large-scale computing resource reuse in Huawei Cloud, including cloud service application load profiling, VM/container scheduling algorithms and systems, real-time QoS detection and control systems, and new research directions in virtualization.
- In the networking domain: based on the requirements and data of cloud computing itself, continuously research the next-generation autonomous cloud network system, including the next-generation gateway platform, the P4/NP programmable device platform, a network brain combined with AI, a large-scale high-performance SDN platform, real-time network measurement and verification, and other new cloud computing network directions.
The Cloud Storage Innovation Lab is Huawei Cloud’s storage innovation research center. The research areas involve key technologies and core algorithms in block storage, object storage, file storage, memory storage, etc., including distributed data consistency, space management, metadata indexing, intelligent caching, etc. It is committed to building a high-performance, highly reliable, secure, and intelligent cloud-native storage platform, providing the best experience and cost-effective storage services for enterprises moving to the cloud.
Enquiries: Professor Michael LYU (lyu@cse.cuhk.edu.hk) / Jeff Liu (jeffliu@cse.cuhk.edu.hk)
24 March
3:00 pm - 4:00 pm
Deep Learning for Physical Design Automation of VLSI Circuits: Modeling, Optimization, and Datasets
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Professor Yibo Lin
Assistant Professor
School of Integrated Circuits
Peking University
Abstract:
Physical design is a critical step in the design flow of modern VLSI circuits. With continuous increase of design complexity, physical design becomes extremely challenging and time-consuming due to the repeated design iterations for the optimization of performance, power, and area. With recent boom of artificial intelligence, deep learning has shown its potential in various fields, like computer vision, recommendation systems, robotics, etc. Incorporating deep learning into the VLSI design flow has also become a promising trend. In this talk, we will introduce our recent studies on developing dedicated deep learning techniques for cross-stage modeling and optimization in physical design. We will also discuss the impact of large-scale and diverse datasets (e.g., CircuitNet) on improving the performance of deep learning models.
Biography:
Yibo Lin is an assistant professor in the School of Integrated Circuits at Peking University. He received the B.S. degree in microelectronics from Shanghai Jiaotong University in 2013, and his Ph.D. degree from the Electrical and Computer Engineering Department of the University of Texas at Austin in 2018. His research interests include physical design, machine learning applications, and GPU/FPGA acceleration. He has received 6 Best Paper Awards at premier venues including DATE 2022, TCAD 2021, and DAC 2019. He has also served in the Technical Program Committees of many major conferences, including ICCAD, ICCD, ISPD, and DAC.
Enquiries: Mr Jeff Liu at Tel. 3943 0624
21 March
10:00 am - 11:00 am
Data-Efficient Graph Learning
Location
Zoom
Category
Seminar Series 2022/2023
Speaker:
Mr. DING Kaize
Abstract:
The world around us — and our understanding of it — is rich in relational structure: from atoms and their interactions to objects and entities in our environments. Graphs, with nodes representing entities and edges representing relationships between entities, serve as a common language to model complex, relational, and heterogeneous systems. Despite the success of recent deep graph learning, the efficacy of existing efforts depends heavily on the ideal data quality of the observed graphs and on the sufficiency of supervision signals from human-annotated labels, so those carefully designed models easily fail in resource-constrained scenarios.
In this talk, I will present my recent research contributions centered around data-efficient learning for relational and heterogeneous graph-structured data. First, I will introduce what data-efficient graph learning is and my contributions to different research problems under its umbrella, including graph few-shot learning, graph weakly-supervised learning, and graph self-supervised learning. Based on my work, I will elucidate how to push forward the performance boundary of graph learning models, especially graph neural networks, with minimal human supervision signals. I will also touch upon the applications of data-efficient graph learning in different domains and finally conclude my talk with a brief overview of my future research agenda.
Biography:
DING Kaize is currently a Ph.D. candidate in the School of Computing and Augmented Intelligence (SCAI) at Arizona State University (ASU). Kaize works in the Data Mining and Machine Learning (DMML) Lab with Prof. Huan Liu, and previously interned at Google Brain, Microsoft Research, and Amazon Alexa AI. Kaize is broadly interested in the areas of data mining, machine learning, and natural language processing, and their interdisciplinary applications in domains including cybersecurity, social good, and healthcare. His recent research interests focus particularly on data-efficient learning and graph neural networks. He has published a series of papers in top conferences and journals such as AAAI, EMNLP, IJCAI, KDD, NeurIPS, and TheWebConf. Kaize is the recipient of the ASU Graduate College Completion Fellowship and the ASU GPSA Outstanding Research Award, among others. More information about him can be found at https://www.public.asu.edu/~kding9/ .
Join Zoom Meeting:
https://cuhk.zoom.us/j/99778568306?pwd=Nms0cm9takVNQWtRaDhuaVdaTVJ5dz09
Enquiries: Mr Jeff Liu at Tel. 3943 0624
17 March
2:30 pm - 3:30 pm
Resilience through Adaptation — the Challenge of Change
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2022/2023
Speaker:
Professor Jeff Kramer
Emeritus Professor, Department of Computing,
Imperial College London
Abstract:
Change in complex systems is inevitable. Providing rigorous techniques and tools to support dynamic system adaptation, so that it can be performed online at runtime, is certainly challenging; however, the potential resilience rewards could be great. There is a need for a software architecture and runtime support for dynamic software configuration, plan execution and plan synthesis, domain environment modelling and monitoring, and ultimately even performing some elements of requirements engineering at runtime. This talk will present our motivation and vision, describing our work to date and our hopes for the future.
Biography:
Jeff Kramer is Emeritus Professor of Computing at Imperial College London.
His research work is primarily concerned with software engineering, with particular emphasis on evolving software architectures, behaviour analysis, the use of models in requirements elaboration, and self-organising adaptive software systems. An early research result was the DARWIN language for evolving distributed architectures; a more recent one is the Three Layer Model for self-adaptive systems. One of the major research challenges in self-managed adaptation is the need to perform requirements analysis at runtime.
Jeff has been involved in many major conferences and journals, notably as program co-chair of ICSE 1999 in Los Angeles, general co-chair of ICSE 2010 in Cape Town, and Editor-in-Chief of IEEE TSE from 2006 to 2010. His awards include the 2005 ACM SIGSOFT Outstanding Research Award and the 2011 ACM SIGSOFT Distinguished Service Award. He is a Fellow of the Royal Academy of Engineering, a Fellow of the ACM, and a Member of Academia Europaea.
Enquiries: Mr Jeff Liu at Tel. 3943 0624
15 March
10:00 am - 11:00 am
Execution-Guided Learning for Software Development, Testing, and Maintenance
Location
Zoom
Category
Seminar Series 2022/2023
Speaker:
Mr. NIE Pengyu
Abstract:
Machine Learning (ML) techniques have been increasingly adopted for Software Engineering (SE) tasks, such as code completion and code summarization. However, existing ML models provide limited value for SE tasks, because they do not take into account two key characteristics of software: software is executable, and software constantly evolves. In this talk, I will present my insights and work on developing execution-guided and evolution-aware ML models for several SE tasks targeting important domains, including software testing, verification, and maintenance.
First, I will present my techniques to help developers write tests and formal proofs. This work has a direct impact on software correctness and on everyone who depends on software. I will present TeCo, the first ML model for test completion/generation, and Roosterize, the first model for lemma name generation. To achieve good performance, these two tasks require reasoning about code execution, which existing ML models are not capable of. To tackle this problem, I designed and developed ML models that integrate execution data and use such data to validate generation results.
Next, I will present my techniques to help developers maintain software. Specifically, I will present my work on comment updating, i.e., automatically updating comments when associated code changes. I proposed the first edit ML model for SE to solve this task, which learns to perform developer-like edits instead of generating comments from scratch. This model can be generalized for general-purpose software editing, including tasks such as bug fixing and automated code review.
All my code and data are open-sourced; my models are evaluated on real-world software and shown to outperform existing ML models by large margins. My contributions lay the foundation for the development of accurate, robust, and interpretable ML models for SE.
Biography:
NIE Pengyu is a Ph.D. candidate at the University of Texas at Austin, advised by Milos Gligoric. Pengyu obtained his Bachelor’s Degree at the University of Science and Technology of China. His research area is the fusion of Software Engineering (SE) and Natural Language Processing (NLP), with a focus on improving developers’ productivity during software development, testing, and maintenance. He has published 14 papers in top-tier SE, NLP, and PL conferences. He is the recipient of an ACM SIGSOFT Distinguished Paper Award (FSE 2019), and the UT Austin Graduate School Continuing Fellowship. More information can be found on his webpage: https://pengyunie.github.io.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95560110806?pwd=VFN4eXc2UU1KOTJIVk15aGU2ZkVydz09
Meeting ID: 955 6011 0806
Passcode: 202300
Enquiries: Mr Jeff Liu at Tel. 3943 0624
15 March
2:00 pm - 3:00 pm
Adaptive and Automated Deep Recommender Systems
Location
Zoom
Category
Seminar Series 2022/2023
Speaker:
Prof. ZHAO Xiangyu
Assistant Professor, School of Data Science
City University of Hong Kong (CityU)
Abstract:
Deep recommender systems have become increasingly popular in recent years, and have been utilized in a variety of domains, including movies, music, books, search queries, and social networks. They assist users in their information-seeking tasks by suggesting items (products, services, or information) that best fit their needs and preferences. Most existing recommender systems are based on static recommendation policies and hand-crafted architectures. Specifically, (i) most recommender systems consider the recommendation procedure as a static process, which may fail given the dynamic nature of the users’ preferences; (ii) existing recommendation policies aim to maximize the immediate reward from users, while completely overlooking their long-term impacts on user experience; (iii) designing architectures manually requires ample expert knowledge, non-trivial time and engineering efforts, while sometimes human error and bias can lead to suboptimal architectures. I will introduce my efforts in tackling these challenges via reinforcement learning (RL) and automated machine learning (AutoML), which can (i) adaptively update the recommendation policies, (ii) optimize the long-term user experience, and (iii) automatically design the deep architectures for recommender systems.
Biography:
Prof. Xiangyu ZHAO is an assistant professor in the School of Data Science at City University of Hong Kong (CityU). His current research interests include data mining and machine learning and their applications in recommender systems, smart cities, healthcare, carbon neutrality, and finance. He has published more than 60 papers in top conferences (e.g., KDD, WWW, AAAI, SIGIR, IJCAI, ICDE, CIKM, ICDM, WSDM, RecSys, ICLR) and journals (e.g., TOIS, SIGKDD, SIGWeb, EPL, APS). His research has been recognized with ICDM’22 and ICDM’21 Best-ranked Papers, Global Top 100 Chinese New Stars in AI, the CCF-Ant Research Fund, the CCF-Tencent Open Fund, a Criteo Faculty Research Award, a Bytedance Research Collaboration Award, and a nomination for the Joint AAAI/ACM SIGAI Doctoral Dissertation Award. He serves as a (senior) program committee member and session chair for top data science conferences, and as a guest editor and reviewer for journals. He was an organizer of DRL4KDD@KDD’19/WWW’21 and DRL4IR@SIGIR’20/21/22, and a lead tutor at WWW’21/22/23, IJCAI’21, and WSDM’23. He is also a founding academic committee member of MLNLP, the largest Chinese AI community, with millions of members/followers. Models and algorithms from his research have been launched in the online systems of many companies. Please find more information at https://zhaoxyai.github.io/.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96382199967
Meeting ID: 963 8219 9967
Enquiries: Mr Jeff Liu at Tel. 3943 0624
13 March
10:00 am - 11:00 am
Designing and Analyzing Machine Learning Algorithms in the Presence of Strategic Behavior
Location
Zoom
Category
Seminar Series 2022/2023
Speaker:
Mr. ZHANG Hanrui
Abstract:
Machine learning algorithms now play a major part in all kinds of decision-making scenarios. When the stakes are high, self-interested agents — about whom decisions are being made — are increasingly tempted to manipulate the machine learning algorithm, in order to better fulfill their own goals, which are generally different from the decision maker’s. This highlights the importance of making machine learning algorithms robust against manipulation. In this talk, I will focus on generalization (i.e., the bridge between training and testing) in strategic classification: Traditional wisdom suggests that a classifier trained on historical observations (i.e., the training set) usually also works well on future data points to be classified (i.e., the test set). I will show how this very general principle fails when agents being classified strategically respond to the classifier, and present an intuitive fix that leads to provable (and in fact, optimal) generalization guarantees under strategic manipulation. I will then discuss the role of incentive-compatibility in strategic classification, and present experimental results that illustrate how the theoretical results can guide practice. If time permits, I will also discuss distinguishing strategic agents with samples, and/or dynamic decision making with strategic agents.
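A toy sketch of the failure mode described above (all assumptions here, classifier, budget, and labels, are invented for illustration): agents within a manipulation budget cross a linear classifier's boundary, so accuracy measured on non-strategic data overestimates accuracy under strategic response:

import numpy as np
rng = np.random.default_rng(0)

w, b = np.array([1.0, 1.0]), -1.0         # linear classifier: accept iff w.x + b >= 0
budget = 0.5                              # maximum manipulation each agent can afford

X = rng.standard_normal((1000, 2))
y = (X @ w + b >= 0).astype(int)          # labels agree with the rule pre-manipulation

margin = X @ w + b
dist = -margin / np.linalg.norm(w)        # distance a rejected agent must move
movers = (margin < 0) & (dist <= budget)  # rejected agents close enough to game the rule
accepted = ((margin >= 0) | movers).astype(int)

print("accuracy, non-strategic test set:", (y == (margin >= 0)).mean())
print("accuracy under strategic response:", (y == accepted).mean())

The gap between the two printed numbers is exactly the generalization failure the talk addresses: training-time performance no longer predicts test-time performance once agents respond to the classifier.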
Biography:
ZHANG Hanrui is a PhD student at Carnegie Mellon University, advised by Vincent Conitzer. He was named a finalist for the 2021 Facebook Fellowship. His work won the Best Student Paper Award at the European Symposium on Algorithms (ESA) and an Honorable Mention for the Best Paper Award at the AAAI Conference on Human Computation and Crowdsourcing (HCOMP). He received his bachelor’s degree from Yao’s Class, Tsinghua University, where he won the Outstanding Undergraduate Thesis Award.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96485699602?pwd=aXZZd0Z4aDVzVjhWdTRiVGt5cytvdz09
Meeting ID: 964 8569 9602
Passcode: 202300
Enquiries: Mr Jeff Liu at Tel. 3943 0624
07 March
10:00 am - 11:00 am
Efficient Reinforcement Learning Through Uncertainties
Location
Zoom
Category
Seminar Series 2022/2023
Speaker:
Mr. ZHOU Dongruo
Abstract:
Reinforcement learning (RL) has achieved great empirical success in many real-world problems in the last few years. However, many RL algorithms are inefficient due to their data-hungry nature. Whether there exists a universal way to improve the efficiency of existing RL algorithms remains an open question.
In this talk, I will give a selective overview of my research, which suggests that efficient (and optimal) RL can be built through the lens of uncertainties. I will show that uncertainties can not only guide RL to make decisions efficiently, but also accelerate the learning of the optimal policy from a finite number of data samples collected from the unknown environment. By utilizing the proposed uncertainty-based framework, I design computationally efficient and statistically optimal RL algorithms under various settings, which improve on existing baseline algorithms both theoretically and empirically. At the end of this talk, I will briefly discuss several additional works and my future research plan for designing next-generation decision-making algorithms.
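For a flavor of how uncertainty drives such algorithms, here is a textbook UCB bandit sketch in Python (a toy, not the speaker's algorithms): the bonus term is large for poorly explored actions and shrinks as samples accumulate:

import math, random

means = [0.3, 0.5, 0.7]                  # true reward rates, unknown to the learner
counts, sums = [0, 0, 0], [0.0, 0.0, 0.0]

for t in range(1, 5001):
    def score(a):
        if counts[a] == 0:
            return float('inf')          # maximal uncertainty: try it at least once
        bonus = math.sqrt(2 * math.log(t) / counts[a])   # uncertainty bonus
        return sums[a] / counts[a] + bonus
    a = max(range(3), key=score)
    counts[a] += 1
    sums[a] += 1.0 if random.random() < means[a] else 0.0

print(counts)   # pulls concentrate on the best arm (index 2) as uncertainty shrinks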
Biography:
ZHOU Dongruo is a final-year PhD student in the Department of Computer Science at UCLA, advised by Prof. Quanquan Gu. His research is broadly on the foundation of machine learning, with a particular focus on reinforcement learning and stochastic optimization. He aims to provide a theoretical understanding of machine learning methods, as well as to develop new machine learning algorithms with better performance. He is a recipient of the UCLA dissertation year fellowship.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93549469461?pwd=R0FOaFdxOG5LS0s2Q1RmaFdNVm4zZz09
Meeting ID: 935 4946 9461
Passcode: 202300
Enquiries: Mr Jeff Liu at Tel. 3943 0624
February 2023
24 February
4:00 pm - 5:00 pm
Learning Deep Feature Representations of 3D Point Cloud Data
Decision trees in a formal world: machine learning (with constraints), controller verification, and unsatisfiability proofs for graph problems
Location
Speaker:
Dr. Emir Demirovic
Assistant Professor
TU Delft
Abstract:
Decision trees are an effective and concise way of conveying information, easily understood by virtually everyone regardless of the topic. Given the recent interest in explainable AI and related fields, decision trees stand out as a popular choice. From the algorithmic side, the unique structure of decision trees is interesting since it may be exploited to obtain much more efficient algorithms than structure-oblivious approaches.
In this talk, I will give an overview of the research we have been doing on leveraging the decision tree structure from three distinct angles: 1) machine learning with constraints, where the goal is to construct the optimal regression/decision tree representing tabular data whilst potentially respecting different types of constraints such as fairness, 2) controller/policy verification, where the aim is to verify that a decision tree controller satisfies desired properties in continuous time, and 3) explaining the unsatisfiability of a combinatorial optimisation problem on a graph by representing proofs of unsatisfiability as a tree using graph-specific concepts. We show that for each of these problems, exploiting the decision tree structure is key to obtaining orders-of-magnitude runtime improvements and/or interpretability.
The talk summarises about half a dozen of our papers (AAAI’21/24, JMLR’22, NeurIPS’22/23, ICML’23/24) and is meant to be accessible to all backgrounds, with plenty of time for discussion!
Biography:
Emir Demirovic is an assistant professor at TU Delft (Netherlands). He leads the Constraint Solving (“ConSol”) research group, which advances combinatorial optimisation algorithms for a wide range of (real-world) problems, and co-directs the explainable AI in transportation lab (“XAIT”) as part of the Delft AI Labs. Prior to his appointment at TU Delft, Emir worked at the University of Melbourne, Vienna University of Technology, National Institute of Informatics (Tokyo), and at a production planning and scheduling company.
Emir’s current work focuses on solving techniques based on constraint programming, optimising decision trees, and explainable methods for combinatorial optimisation. He is also interested in industrial applications, robust/resilient optimisation, and the integration of optimisation and machine learning. He publishes in leading AI conferences (e.g., AAAI, NeurIPS) and specialised venues (e.g., CP, CPAIOR), attends scientific events such as Dagstuhl seminars, Lorentz workshops, and the Simons-Berkeley programme, and frequently organises incoming and outgoing visits, e.g., EPFL, ANITI/CNRS, CUHK, Monash University, TU Wien.
Enquiries:
Professor LEE Ho Man Jimmy (jlee@cse.cuhk.edu.hk)
Mr. WONG O Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
Data Science at Old Dominion University
Location
Speaker:
Professor Frank Liu
Professor and Inaugural Director, School of Data Science
Old Dominion University
Abstract:
Old Dominion University is a large public university located on the southeastern coast of Virginia in the US. First established as a branch of the College of William and Mary, its roots can be traced to seventeenth-century England. The School of Data Science is a newly established academic unit at Old Dominion University that encourages interdisciplinary research and education across the campus as well as the region. I will give a brief introduction to the data science program, followed by an open floor for Q&A and discussion.
Biography:
Frank Liu is a Professor of Computer Science and ECE at Old Dominion University. He is the founding director of the School of Data Science, with research experience spanning academia, national laboratories, and corporate research labs. He is a Fellow of IEEE.
Enquiries:
Professor YOUNG Fung Yu (fyyoung@cse.cuhk.edu.hk)
Mr. WONG O Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
Generative AI in Drug Development
Location
Speaker:
Dr. Chang-Yu (Kim) Hsieh
Professor, College of Pharmaceutical Sciences
Zhejiang University
Abstract:
In recent years, generative AI has gained significant traction as a tool for designing novel molecules for therapeutic purposes. Advanced deep learning techniques have been increasingly adapted for drug design, yielding varying levels of success. In this seminar, I will provide an overview of this emerging field, highlighting the key challenges in applying generative AI to drug design and presenting our proposed solutions. Specifically, we combine principles from physics and chemistry with deep learning methods to discover more realistic drug candidates within the vast chemical space. Our results are supported by benchmark studies and validated through experimental wet lab testing.
Biography:
Dr. Chang-Yu (Kim) Hsieh is the QiuShi Engineering Professor at the College of Pharmaceutical Sciences, Zhejiang University. Before joining Zhejiang University, he led the Theory Division at Tencent Quantum Lab in Shenzhen, focusing on AI and quantum simulation for drug and material discovery. Prior to that, he was a postdoctoral researcher in the Department of Chemistry at MIT. His primary research interests lie in leveraging advanced computing technologies, including AI and quantum computing, to simulate and model material and molecular properties.
Enquiries:
Professor HENG Pheng Ann (pheng@cse.cuhk.edu.hk)
Ms. NG Man Nga Vivien (vivien@cse.cuhk.edu.hk)
Mr. WONG O Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
Constraint Transformation for Faster SMT Solving
Location
Speaker:
Professor ZHANG Qirun
Assistant Professor, School of Computer Science
Georgia Institute of Technology
Abstract:
SMT formulas are first-order formulas extended with various theories. SMT solvers are fundamental tools for many program analysis and software engineering problems. The effectiveness and scalability of SMT solvers influence the performance of the underlying client analyzers. The most popular approach to improving SMT solving is by developing new constraint-solving algorithms. In this talk, we will discuss a new perspective on improving SMT solving via compiler optimization. Our basic idea involves translating SMT formulas to LLVM IR and leveraging LLVM optimization passes to simplify the IR. Then, we translate the simplified IR back to SMT formulas. In addition, this strategy can be extended to enhance the solving of unbounded SMT theories by utilizing their bounded counterparts.
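As a loose analogy for this transform-then-solve pipeline, the snippet below uses z3's built-in rewriter in place of the LLVM passes (a sketch of the idea only, not the speaker's tool):

from z3 import Ints, Solver, simplify

x, y = Ints('x y')
f = (x + 0) * 1 + (y - y) > 5     # redundancy a compiler-style pass can remove
g = simplify(f)                   # stands in for the IR-level optimization step
print(g)                          # expected: x > 5, or an equivalent form

s = Solver()
s.add(g)                          # the solver now works on the simplified formula
print(s.check())                  # sat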
Biography:
Qirun Zhang is an Assistant Professor in the School of Computer Science at Georgia Tech. His general research areas are programming languages and software engineering, focusing on developing new program analysis frameworks to improve software reliability. He received a PLDI 2020 Distinguished Paper Award, an FSE 2023 Distinguished Paper Award, an NSF CAREER Award, and an Amazon Research Award in Automated Reasoning. He served on the program committees of FSE, ICSE, ISSTA, OOPSLA, PLDI, and POPL.
Enquiries:
Professor LYU Rung Tsong Michael (lyu@cse.cuhk.edu.hk)
Ms. NG Man Nga Vivien (vivien@cse.cuhk.edu.hk)
Mr. WONG O Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
Model Evaluation and Test-time Methods in Medical Image Segmentation
Location
Speaker:
Dr. Yizhe ZHANG
Associate Professor, School of Computer Science and Engineering,
Nanjing University of Science and Technology
Abstract:
With advancements in deep learning and AI techniques, medical image segmentation has experienced rapid development over the past decade. Modern DL-based models, utilizing large labeled datasets, often produce impressive benchmark results. However, practical issues, such as reliability and trustworthiness, persist when these models are implemented in hospitals and medical facilities.
This talk addresses two related aspects of medical image segmentation for improving model deployment: model evaluation and test-time methods. First, we will discuss our recent work on deployment-centric model evaluation, evaluation of foundation models and related techniques. Next, we will cover a series of test-time methods that we have developed to improve video segmentation consistency, enhance the quality of medical image segmentation, and more recently, advance segmenting anything in medical images.
Finally, we will briefly highlight several other projects from my group and discuss directions in medical image segmentation research that we find promising and important.
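One widely used test-time recipe, shown below, adapts only the normalization parameters by minimizing prediction entropy on an unlabeled test batch (in the spirit of entropy-minimization methods such as TENT; a generic PyTorch sketch, not necessarily the methods presented in this talk):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 8), nn.BatchNorm1d(8), nn.ReLU(), nn.Linear(8, 3))
# Adapt only the normalization parameters, a common low-risk choice at test time.
bn_params = [p for m in model.modules() if isinstance(m, nn.BatchNorm1d)
             for p in m.parameters()]
opt = torch.optim.SGD(bn_params, lr=1e-3)

x = torch.randn(32, 16)                       # one unlabeled test batch
probs = model(x).softmax(dim=1)
entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
opt.zero_grad()
entropy.backward()                            # push toward confident predictions
opt.step()                                    # one adaptation step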
Biography:
Yizhe Zhang, Ph.D., is an associate professor at Nanjing University of Science and Technology. He received his Ph.D. from the University of Notre Dame in the United States. Before returning to Nanjing, he was a senior research engineer at Qualcomm AI Research, San Diego, where he worked on efficient video segmentation and the spatiotemporal consistency of segmentation. He has conducted research on topics such as active learning, semi-supervised learning, model design, training and evaluation in medical image segmentation. As the first author, he has published papers in conferences and journals including MICCAI, Medical Image Analysis, IEEE TMI, BIBM, ICCV, AAAI, and WACV. As a key contributor, he was involved in biomedical image modeling and analysis work that won the 2017 Cozzarelli Prize awarded by the National Academy of Sciences.
Enquiries:
Professor HENG Pheng Ann (pheng@cse.cuhk.edu.hk)
Ms. NG Man Nga Vivien (vivien@cse.cuhk.edu.hk)
Mr. WONG O Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
When Apps Become Super: Dissecting the Security Risks of Super Apps
Location
Speaker:
Mr. YANG Yuqing
Ph.D. candidate, Department of Computer Science and Engineering
The Ohio State University
Abstract:
The Super App computing paradigm, debuted in 2017 by the world’s social-computing giant WeChat, has revolutionized mobile app architecture. By integrating a standalone execution engine for lightweight miniapp packages, super apps allow third-party developers to deliver customized services to billions of super app users. At the same time, the powerful features that super apps provide come with imminent risk from attackers, who actively attempt to exploit the super app ecosystem, inflicting privacy breaches and financial losses on billions of users as well as millions of developers.
In this talk, Yuqing will dissect the super app paradigm with a specific focus on the security risks posed by super app vulnerabilities and miniapp malware. First, he will discuss communication-channel vulnerabilities we identified in front-ends and back-ends, followed by a dissection of miniapp malware that evades miniapp vetting, covering malicious behaviors against the platform both before and after the vetting process. Finally, he will discuss mitigation countermeasures and open problems for improving security and privacy in the realm of super apps.
Biography:
Yuqing Yang is a third-year PhD candidate at the Department of Computer Science and Engineering of The Ohio State University. His research interest primarily focuses on vulnerability and malware detection in mobile and web security, particularly in super apps. His papers have been published in prestigious conferences, such as ACM CCS, SIGMETRICS, and ICSE. He has been a reviewer for many top-tier journals and conferences, including TIFS, TOSEM, DSN, USENIX Security, IEEE Security & Privacy, and ACM CCS. His research has also been acknowledged by top super app vendors, including Tencent and Baidu.
Enquiries:
Professor MENG Wei (wei@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
On Physics-Inspired Generative Models
Location
Speaker:
Dr. XU Yilun
Ph.D., Massachusetts Institute of Technology (MIT)
Research Scientist, NVIDIA Research (from July 2024)
Abstract:
Physics-inspired generative models such as diffusion models constitute a powerful family of generative models. The advantages of models in this family come from a relatively stable training process and high capacity. Nevertheless, a number of improvements remain possible. In this talk, I will discuss the enhancement and design of physics-inspired generative models. I will first present a sampling algorithm that combines the best of previous samplers, greatly accelerating the generation speed of text-to-image Stable Diffusion models. Additionally, I will discuss sampling methods that promote diversity in finite samples by adding mutual repulsion forces between samples in the generative process. Secondly, I will discuss a training framework that introduces learnable discrete latents into continuous diffusion models. These latents simplify complex noise-to-data mappings and reduce the curvature of generative trajectories. Finally, I will introduce Poisson Flow Generative Models (PFGM), a new generative model arising from electrostatic theory that rivals leading diffusion models. The extended version, PFGM++, places diffusion models and PFGM under the same framework and introduces new, better models. Several of the algorithms discussed in this talk are state-of-the-art across standard benchmarks.
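The mechanics behind such samplers can be seen on a toy 1-D problem where the score is known in closed form (a probability-flow sketch for a Gaussian target with no learned network; it illustrates reverse-time sampling generically, not any specific model from the talk):

import numpy as np

m, s2 = 2.0, 0.25                            # target data distribution: N(2, 0.25)

def score(x, t):
    # Closed-form score of the forward Ornstein-Uhlenbeck marginal at time t.
    mu = np.exp(-t) * m
    var = np.exp(-2 * t) * s2 + 1.0 - np.exp(-2 * t)
    return -(x - mu) / var

T, steps = 5.0, 500
h = T / steps
x = np.random.default_rng(0).normal(size=100000)   # start from the prior N(0, 1)
t = T
for _ in range(steps):
    x = x + h * (x + score(x, t))            # reverse Euler step of the flow ODE
    t -= h

print(x.mean(), x.var())                     # approaches the target: 2.0 and 0.25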
Biography:
Yilun Xu is an incoming research scientist at NVIDIA Research. He obtained his Ph.D. from MIT CSAIL in 2024, and his B.S. from Peking University in 2020. His research focuses on machine learning, with a current emphasis on a new family of physics-inspired generative models, as well as the development of training and sampling algorithms for diffusion models. Previously, he did research aimed at bridging information theory and machine learning.
Enquiries:
Professor HENG Pheng Ann (pheng@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
** ALL ARE WELCOME **
Revisiting Constraint Solving – From Non-Binary to Binary
Location
Speaker:
Professor Roland Yap
National University of Singapore (NUS)
Zoom: https://cuhk.zoom.us/j/94749652768
Meeting ID: 947 4965 2768 (Students must login with CUHK account, i.e., @link.cuhk.edu.hk, for valid attendance record)
Abstract:
Solving finite domain constraints, e.g., Constraint Satisfaction Problems (CSPs), is an intractable problem which is nevertheless of great practical significance. Due to the intractability, inference techniques, usually local consistencies, are used in practice, and these combine neatly with search heuristics. In general, a constraint may be either a binary or a non-binary relation, and the typical consistency used is Arc Consistency (for binary) or Generalised Arc Consistency (for non-binary). The natural form for many constraints is non-binary (having more than two variables). However, it is known that binary CSPs are also NP-complete. For a long time, most effort has been placed on non-binary techniques, as they were believed to be more efficient.
In this talk, we will revisit the question of binary vs non-binary. We first explain why binary approaches were believed to be inefficient. We then show that this belief is mistaken: binary approaches, through better encodings and algorithms, can outperform existing non-binary techniques. We will discuss improvements to old encodings as well as present new encodings and associated algorithms.
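For a small taste of what a binary encoding looks like, the sketch below shows the classic hidden-variable encoding, which turns one non-binary constraint into binary constraints against a tuple-valued variable (a textbook construction for illustration, not the new encodings from the talk):

from itertools import product

dom = [0, 1, 2]
# A ternary (non-binary) constraint over (x, y, z): all tuples summing to 3.
allowed = [t for t in product(dom, repeat=3) if sum(t) == 3]

# The hidden variable h ranges over the allowed tuples. The binary constraint
# linking h to original variable i holds iff h[i] equals that variable's value.
def revise(h_dom, var_dom, i):
    # One arc-consistency "revise" step: keep tuples whose i-th value is supported.
    return [t for t in h_dom if t[i] in var_dom]

h_dom = list(allowed)
x_dom = [0]                         # suppose search has committed x = 0
h_dom = revise(h_dom, x_dom, 0)
print(h_dom)                        # only tuples with first coordinate 0 remain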
Biography:
Roland Yap is an Associate Professor in the Department of Computer Science, National University of Singapore, Singapore. He received his PhD from Monash University, Australia. He has done pioneering work in the development of Constraint Logic Programming languages and in the field of Constraint Programming. His joint work with Christian Bessiere, Jean-Charles Régin, and Yuanlin Zhang on (Generalized) Arc Consistency was awarded the AI Journal Classic Paper Award in 2022. His current research interests include AI, Big Data, Constraints, Operating Systems, Programming Languages, Security and Social Networks.
Enquiries:
Professor Jimmy Lee (jlee@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Hardening Software Against Memory Errors
Location
Speaker:
Professor Roland Yap
National University of Singapore (NUS)
Abstract:
Memory errors are often the root cause of security vulnerabilities and exploitation in low-level languages such as C/C++. We will first give an overview of the difficulties of dealing with memory errors and of existing techniques for detecting and preventing them. This talk will focus on the challenging problem of hardening a closed-source binary against memory errors. We introduce RedFat, a binary rewriter that hardens x86_64 binaries against memory errors. The challenge is that, without source code, reliable instrumentation is difficult, and much of the semantics of the original code has disappeared at the binary level. To deal with the missing semantics while providing as much hardening as possible, RedFat uses a complementary error-detection methodology that combines low-fat pointers with red zones. RedFat makes minimal assumptions about the binary and is able to operate on stripped and non-PIC binaries. It is also language agnostic and has been evaluated on C/C++/Fortran benchmarks.
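The low-fat pointer half of that combination can be sketched in a few lines: partition a hypothetical address space into per-size-class regions, so an object's size and base are computable from the pointer value alone (a pure-Python toy model of the general idea, not RedFat's actual layout or checks):

REGION = 1 << 32                    # one hypothetical region per size class

def size_class(ptr):
    # Region k (k = 0, 1, ...) holds objects of 16 * (k + 1) bytes.
    return 16 * (ptr // REGION + 1)

def base(ptr):
    # Allocations are aligned to their size, so the base is recoverable.
    return ptr - ptr % size_class(ptr)

def in_bounds(ptr, nbytes):
    # The bounds check a binary rewriter would instrument at each memory access.
    return ptr + nbytes <= base(ptr) + size_class(ptr)

p = (1 << 32) + 40                  # a pointer into region 1 (32-byte objects)
print(size_class(p), in_bounds(p, 8), in_bounds(p, 100))   # 32 True False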
Biography:
Roland Yap is an Associate Professor in the Department of Computer Science, National University of Singapore, Singapore. He received his PhD from Monash University, Australia. He has done pioneering work in the development of Constraint Logic Programming languages and in the field of Constraint Programming. His joint work with Christian Bessiere, Jean-Charles Régin, and Yuanlin Zhang on (Generalized) Arc Consistency was awarded the AI Journal Classic Paper Award in 2022. His current research interests include AI, Big Data, Constraints, Operating Systems, Programming Languages, Security and Social Networks.
Enquiries:
Professor Jimmy Lee (jlee@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
General Memory Specialization for Massive Multi-Cores
Location
Speaker:
Mr. WANG Zhengrong
Postdoc researcher
University of California, Los Angeles (UCLA)
Zoom: https://cuhk.zoom.us/j/97200491693?pwd=Uk1ORHhTQmEzbWJiVDRCMVdzZHpYdz09
Meeting ID: 972 0049 1693 // Passcode: 202400 (Students must login with CUHK account, i.e., @link.cuhk.edu.hk, for valid attendance record)
Abstract:
In the last two decades, computer architects have relied heavily on specialization and scaling up to continue performance and energy-efficiency improvements as Moore’s law fades away. The former customizes the system for particular program behaviors (e.g., the neural engine in Apple chips to accelerate machine learning), while the latter evolves into massive multi-core systems (e.g., 96 cores in the AMD EPYC 9654 CPU).
This works until we hit the “memory wall”: as modern systems continue to scale up, data movement has increasingly become the bottleneck. Unfortunately, conventional memory systems are extremely inefficient at reducing data movement, suffering from excessive NoC traffic and limited off-chip bandwidth to bring data to the computing cores.
These inefficiencies originate from the essentially core-centric view: the memory hierarchy simply reacts to individual requests from the core but is unaware of high-level program behaviors. This leaves the hardware oblivious, as it must guess highly irregular and transient memory semantics from the primitive memory abstraction of simple load and store instructions.
This calls for a fundamental redesign of the memory interface to express rich memory semantics, so that the memory system can promptly adjust to evolving program behaviors and efficiently orchestrate data and computation together throughout the entire system. For example, simple computations can be directly associated with memory requests and naturally distributed across the memory hierarchy without bringing all the data to the core. More importantly, the new interface should integrate seamlessly with conventional von Neumann ISAs, enabling end-to-end memory specialization while maintaining generality and transparency. Overall, in this talk, I will discuss our solution to enable general memory specialization for massive multi-core systems that unlock order-of-magnitude speedup/energy efficiency on plain-C programs. Such data-computation orchestration is the key to continuing the performance and energy efficiency scaling.
Biography:
Zhengrong is currently a postdoc researcher at UCLA. His research aims to build general, automatic, and end-to-end near-data acceleration by revolutionizing the orchestration between data and computation throughout the entire system. His open-source work has been accepted at multiple top-tier conferences in computer architecture, including ISCA, MICRO, ASPLOS, and HPCA, and has received Best Paper Runner-Up awards as well as IEEE Micro Top Picks Honorable Mentions. He is also one of the maintainers of gem5, a widely used cycle-accurate simulator in computer architecture.
Enquiries:
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Towards Generalizable and Robust Multimodal AI for Healthcare
Location
Speaker:
Dr. CHEN Cheng
Postdoctoral Research Fellow
Harvard Medical School
Abstract:
Artificial Intelligence (AI) is catalyzing a paradigm shift in healthcare, promising to reshape the landscape of patient care. At the heart of this transformation is medical imaging, where AI-enabled technologies hold substantial promise for precise and personalized image-based diagnosis and treatment. Despite these advances, such models often underperform in real-world deployment, particularly due to the heterogeneous data distributions and varying modalities in healthcare applications. In this talk, I will introduce our work dedicated to tackling these real-world challenges to advance model generalizability and multimodal robustness. First, I will show how we can leverage generative networks and model adaptation to generalize models under data distribution shifts. Next, I will describe how to achieve robust multimodal learning with missing modalities and with imaging and non-imaging clinical information. Finally, I will present our work that extends to large-scale datasets and more diverse modalities, based on foundation models, for generalizable multimodal representation learning.
Biography:
Dr. Cheng CHEN is a postdoc research fellow at the Center for Advanced Medical Computing and Analysis, Harvard Medical School. She obtained her Ph.D. degree in Computer Science and Engineering at The Chinese University of Hong Kong in 2021. She received her M.S. and B.S. degrees from Johns Hopkins University and Zhejiang University, respectively. Her research interests lie in the interdisciplinary area of AI and healthcare, with a focus on generalizable, robust, and multimodal medical image analysis. She has over 25 papers published at top AI and medical imaging venues, reaching over 2300 Google Scholar citations with an h-index of 16. Her first-authored papers have been recognized as an ESI “Highly cited paper”, selected as oral presentations, and received travel awards from AAAI and MICCAI. She has been named one of the Global Top 80 Chinese Young Female Scholars in AI and won the MICCAI Federated Brain Tumor Segmentation Challenge. She also serves as Area Chair of MICCAI 2024 and reviewer for multiple top journals and conferences.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
Cryptographic Metamorphosis: Bridging Realms and Fostering Futures
Location
Speaker:
Dr. XIAO Liang
Postdoctoral Fellow
NTT Research
Abstract:
Modern cryptography has evolved beyond its initial focus on information privacy and has become deeply integrated into various aspects of computer science. An extraordinary example in this regard is the “love-hate” relationship between Cryptography and Quantum Computing, which stands among the central topics of today’s theoretical computer science (TCS) research. On the one hand, quantum techniques (e.g., Shor’s algorithm) jeopardize the foundational assumptions for Cryptography; on the other hand, the unique features of quantum information (e.g., Heisenberg’s Uncertainty Principle) enable new cryptographic functionalities that were provably impossible in a classical world. A key focus of this talk will be my efforts in re-establishing the quantum theory for central cryptographic tasks like Secure Multi-Party Computation (MPC) and Zero-Knowledge (ZK) Proofs, underscoring the role of this interdisciplinary field as a fertile ground for both classical and quantum TCS innovations.
As for the “classical” aspect of my research, I will discuss my pursuits in designing concurrently-secure, black-box MPC (and ZK) protocols, addressing the inherent tension between security and efficiency. I will also talk about my passion for leveraging cryptography for system/network security tasks, instantiating my belief in bridging theoretical research with real-world applications.
The presentation will culminate with an outline of a future research agenda, as well as my aspirations to contribute to the CSE department, including the designs of a new course on mathematical tools for TCS, a new course on quantum cryptography, and a semi-annual “Crypto-Plus” seminar in Hong Kong.
Biography:
Xiao LIANG is currently a Postdoctoral Fellow at NTT Research, specializing in Cryptography. Prior to this role, he gained valuable postdoctoral experience at Rice University and Indiana University Bloomington. His expertise encompasses Zero-Knowledge Protocols, Secure Multi-Party Computation, Non-Malleability, and Digital Signature, with a deliberate effort to establish connections with adjacent domains like System/Network Security. A notable highlight of Xiao’s work is the emphasis on the convergence of cryptography and quantum computing, contributing to the dynamic interdisciplinary advancements in this burgeoning field. His research has consistently resulted in publications at esteemed conferences for both cryptography and theoretical computer science in general, such as FOCS, CRYPTO, and ICALP. Xiao Liang holds a Ph.D. in Computer Science and an M.S. in Applied Mathematics, both earned from Stony Brook University, and a B.S. in Economics from Beijing Institute of Technology.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
Log-driven intelligent software reliability engineering
Location
Speaker:
Ms. HUO Yintong
Ph.D. candidate
The Chinese University of Hong Kong
Abstract:
Software systems serve various aspects of our daily activities, from search engines to communication platforms. Traditional software reliability engineering (SRE) practices, which rely heavily on manual effort, face challenges due to the 1) sheer volume, 2) high variety, and 3) rapid evolution of modern software. My research is centered on enhancing software reliability through automated fault management. In this talk, I will present my work on intelligent SRE, with a focus on utilizing log data for the three major fault management phases: fault prevention, fault removal, and fault tolerance.
The talk starts with an initial investigation into a semantic-aware log analysis framework tailored to identifying system failures during software operation, so that proper fault-tolerance mechanisms can be invoked. The resulting work, SemParser, is inspired by an insightful understanding of the distinctions between human-written language (log events) and machine-generated tokens (variables). Then, we will discuss AutoLog, a novel log sequence simulation framework that leverages program analysis to overcome the limitations of insufficient log data. Unlike existing log data gathered from a limited number of workloads, AutoLog for the first time acquires far more comprehensive and scalable log datasets, paving the way for proactive and practical anomaly detection solutions. Finally, I will discuss my recent research progress in LLM-powered SRE, which demonstrates the possibility of new designs that integrate LLMs into resolving real-world software engineering challenges.
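The template-extraction core that log parsers build on can be shown in miniature: separate the constant, human-written template from the machine-generated parameters (a toy regex heuristic, far simpler than SemParser's semantic analysis):

import re

PAT = re.compile(r'\d+\.\d+\.\d+\.\d+|\d+')   # toy rule: IPs and integers vary

def parse(line):
    # Return the constant template and the extracted variable parameters.
    return PAT.sub('<*>', line), PAT.findall(line)

line = "Received block blk_3587 of size 67108864 from 10.251.42.84"
print(parse(line))
# ('Received block blk_<*> of size <*> from <*>',
#  ['3587', '67108864', '10.251.42.84'])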
My past research has showcased the effectiveness of log-driven methods in advancing SRE. To conclude, I will outline my research roadmap with various directions, which extends from intelligent log operations to diverse applications in software development.
Biography:
HUO Yintong is currently a Ph.D. candidate at The Chinese University of Hong Kong, advised by Michael R. Lyu. Her research area is intelligent Software Engineering (SE), with a focus on software reliability through automated software development, testing, and operations. She has published 12 papers in top-tier SE conferences, including ICSE, FSE, ASE, ISSTA, and ISSRE. She is the recipient of an IEEE Open Software Services Award for the LogPAI project (3k+ stars, 70k+ downloads).
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
Harnessing Game-Theoretic Optimization for Adversarial, Hierarchical, and Scalable Machine Learning Models
Location
Speaker:
Dr. LU Songtao
Senior Research Scientist
Mathematics and Theoretical Computer Science Department
IBM Thomas J. Watson Research Center
Zoom: https://cuhk.zoom.us/j/99213225761?pwd=L2FHTkJBaVMxeDVkUENyUGNOZ1hCZz09
Meeting ID: 992 1322 5761 // Passcode: 202400 (Students must login with CUHK account, i.e., @link.cuhk.edu.hk, for valid attendance record)
Abstract:
As machine learning continues to permeate our daily lives with the deployment of large-scale foundational models across diverse domains, we are witnessing an unprecedented era of data collection and exploration through smart devices. This abundance of data holds the potential to bring groundbreaking advancements across numerous industries and disciplines. However, effectively leveraging and safeguarding this wealth of data requires increasingly advanced mathematical techniques.
My research is centered on designing computationally efficient methods backed by theory to drive adversarial, hierarchical, and scalable machine learning models. In this talk, I will delve into my recent work on developing gradient-based optimization algorithms tailored to address game theory-related machine learning problems. Unlike traditional theories focused on convex/concave problems, my focus lies in nonconvex zero-sum games and Stackelberg games, which are essential for tackling nonconvex objective functions prevalent in neural network training. These advancements not only offer theoretical insights into stabilizing iterative numerical algorithms but also provide more generalizable solutions for downstream learning tasks. I will demonstrate the practical significance of these algorithms in addressing real-world machine learning challenges, including adversarial attacks, data hyper-cleaning, and automatic speech recognition. Furthermore, I will highlight the broader impact of the proposed learning framework on emerging problems, such as multilingual multitask learning, reinforcement learning with human feedback, and multi-agent RL.
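The instability these algorithms must overcome is easy to reproduce: on the classic bilinear zero-sum objective f(x, y) = x*y, simultaneous gradient descent-ascent spirals away from the equilibrium at (0, 0) rather than converging (a toy illustration, not the speaker's methods, which target exactly this kind of failure):

import math

x, y, lr = 1.0, 1.0, 0.1
for t in range(20):
    gx, gy = y, x                    # df/dx = y (minimizer), df/dy = x (maximizer)
    x, y = x - lr * gx, y + lr * gy  # simultaneous descent-ascent step
print(x, y, "distance from equilibrium:", math.hypot(x, y))
# The distance from (0, 0) grows by a factor of sqrt(1 + lr**2) every step.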
Biography:
Songtao Lu is a Senior Research Scientist in the Mathematics and Theoretical Computer Science Department at the IBM Thomas J. Watson Research Center in Yorktown Heights, NY, USA. Additionally, he serves as a principal investigator at the MIT-IBM Watson AI Lab in Cambridge, MA, USA. He obtained his Ph.D. from the Department of Electrical and Computer Engineering at Iowa State University in 2018 and held a Post-Doctoral Associate position at the University of Minnesota Twin Cities from 2018 to 2019. His research primarily focuses on foundational machine learning models and algorithms, with applications in trustworthy learning, meta-learning, and distributed learning. He received the Best Paper Runner-Up Award at UAI in 2022, an Outstanding Paper Award from FL-NeurIPS in 2022, an IBM Entrepreneur Award in 2023, and an IBM Outstanding Research Accomplishment Award. Furthermore, he has multiple papers selected for oral/spotlight/long oral presentations at prestigious machine learning conferences, including ICML, NeurIPS, ICLR, AAAI, and UAI.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
Advancing Software Reliability: A Journey from Code to Compiler
Location
Speaker:
Mr. LI Shaohua
Ph.D. Candidate
ETH Zurich
Abstract:
In today’s digital landscape, software governs every critical aspect of our lives: communication, transportation, finance, healthcare, and many more. Consequently, software reliability emerges as a critical pillar for the functioning of our society. Yet, the intricate process from source code to executable binary, integral to software development and deployment, presents substantial challenges to both reliability and security.
In this talk, I will discuss my research on advancing the reliability of modern software systems by detecting and eliminating various defects in code, code analysis, and code compilation. At the code level, I will present my research on designing a general methodology for detecting unstable code in software. At the code analysis level, I will discuss the robustness of current detection tools and introduce a novel validation framework for solidifying their robustness. At the code compilation level, I will present a data-driven program generation approach for validating compilers. Finally, I will conclude the talk with my vision and future research on building reliable software systems.
Biography:
Shaohua Li is a final-year Ph.D. candidate in the Department of Computer Science at ETH Zurich, advised by Prof. Zhendong Su (https://people.inf.ethz.ch/suz/). His research interests are compilers, programming languages, and software engineering, with a particular emphasis on their reliability and security. His research has led to the discovery and fixing of hundreds of critical issues in well-established software and systems, such as OpenSSL, Address Sanitizer, GCC, LLVM, etc. His research has received recognition from both industry and academia, including the 2022 Meta Security Research Award, the 2023 ACM Distinguished Paper Award at OOPSLA, and the 2024 ACM Distinguished Artifact Award at ASPLOS.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
Designing Algorithms for Massive Graphs
Location
Speaker:
Mr. CHEN Yu
Postdoc
École Polytechnique Fédérale de Lausanne (EPFL)
Abstract:
As the scale of the problems we want to solve in real life becomes larger, it may be infeasible to store the whole input, or it may take a very long time to read the entire input. In these cases, classical algorithms, even those that run in linear time and linear space, may no longer be feasible options as the input size is too large. To deal with this situation, we need to design algorithms that use much less space or time than the input size; we call such an algorithm a sublinear algorithm. My primary research interest is designing sublinear algorithms for combinatorial problems and proving lower bounds to understand the limits of sublinear computation. I also study graph sparsification problems, an important technique for designing sublinear algorithms on graphs, usually used as a pre-processing step to speed up algorithms. In this talk, I’ll cover some of my work on sublinear algorithms and graph sparsification, and I’ll give more details on my recent work on vertex sparsifiers.
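A minimal sketch of the sparsification idea: keep each edge with probability p and reweight survivors by 1/p, so every cut keeps its weight in expectation (uniform sampling is only a toy baseline; practical sparsifiers sample by effective resistance or connectivity):

import random

random.seed(0)
n, p = 50, 0.2
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]   # complete graph K50

# Survivors get weight 1/p, so any cut's expected weight is unchanged.
sparse = [(u, v, 1.0 / p) for (u, v) in edges if random.random() < p]

S = set(range(n // 2))                                        # one cut to check
orig = sum(1 for u, v in edges if (u in S) != (v in S))
approx = sum(w for u, v, w in sparse if (u in S) != (v in S))
print(orig, round(approx, 1))   # close, with roughly a p fraction of the edges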
Biography:
I’m a postdoc in the theory group at EPFL. I obtained my PhD from the University of Pennsylvania, where I was advised by Sampath Kannan and Sanjeev Khanna. Before that, I did my undergraduate study at Shanghai Jiao Tong University. I have a broad interest in various aspects of theoretical computer science and mathematics. Currently, I focus on graph algorithms, especially sublinear algorithms on graphs and graph sparsification problems. I received the Morris and Dorothy Rubinoff Award at the University of Pennsylvania and the Best Paper award at SODA’19.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
Intelligent Systems that Perceive, Imagine, and Act Like Humans by Aligning Vision and Language Representations
Location
Speaker:
Dr. LI Boyi
Postdoctoral Fellow
Berkeley Artificial Intelligence Research Lab (BAIR), UC Berkeley
Zoom: https://cuhk.zoom.us/j/92971603994?pwd=VFRaYTl5VWJMRnh6NHhicDBodC9JZz09
Meeting ID: 929 7160 3994 // Passcode: 202400 (Students must login with CUHK account, i.e., @link.cuhk.edu.hk, for valid attendance record)
Abstract:
The machine learning community has embraced specialized models tailored to specific data domains. However, relying solely on a singular data type might constrain flexibility and generality, requiring additional labeled data and hindering user interaction. To address these challenges, my research objective is to build efficient, generalizable, interactive intelligent systems that learn from the perception of the physical world and their interactions with humans to execute diverse and complex tasks to assist people. These systems should support seamless interactions with humans and computers in digital software environments and tangible real-world contexts by aligning representations from vision and language. In this talk, I will elaborate on my approaches across three dimensions: perception, imagination, and action, focusing on large language models, generative models, and robotics. These findings effectively mitigate the limitations of existing model setups that cannot be overcome by simply scaling up, opening avenues for multimodal representations to unify a wide range of signals within a single, comprehensive model.
Biography:
Boyi Li is a postdoctoral scholar at UC Berkeley, advised by Prof. Jitendra Malik and Prof. Trevor Darrell. She is also a researcher at NVIDIA Research. She received her Ph.D. at Cornell University, advised by Prof. Serge Belongie and Prof. Kilian Q. Weinberger. Her research interest is in machine learning and multimodal systems. Her research aims to develop generalizable algorithms and interactive intelligent systems, focusing on large language models, generative models, and robotics, by aligning representations from multimodal data, such as vision and language.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
On-Device Personalized AI to Mobile and Implantable Devices for Better Healthcare
Location
Speaker:
Dr. JIA Zhenge
Postdoctoral Research Associate
Department of Computer Science and Engineering, University of Notre Dame
Abstract:
The rise in chronic diseases, combined with an aging population and a shortage of healthcare professionals, has driven the extensive use of mobile and implantable devices for effective management of diverse health conditions. Recent years have seen burgeoning interest in exploiting the rapid advancements in artificial intelligence (AI) to augment these devices’ performance. This development leads to improved patient outcomes, reduced healthcare costs, and enhanced patient autonomy. However, due to individual differences, a one-for-all AI model cannot always provide the best performance, and personalized AI is needed to tailor the model to each individual. In addition, compounded by privacy, security, and safety constraints, model personalization must often be done on the medical device itself, with limited hardware resources. In this talk, I will first illustrate the resource sustainability issues in the development of AI/ML for health, and demonstrate our proposed on-device personalized AI techniques that can potentially transform the landscape of mobile and implantable devices. Additionally, I will showcase the world’s first TinyML design contest for health, organized at ICCAD 2022, and the next-generation Implantable Cardioverter Defibrillator (ICD) design enabled by our research.
Biography:
Zhenge Jia is currently a postdoctoral research associate in the Department of Computer Science and Engineering at the University of Notre Dame. He obtained his Ph.D. degree in Electrical and Computer Engineering at the University of Pittsburgh in 2022. He received his B.S. degree with honors in Computer Science from the Australian National University in 2017. His research interests include personalized deep learning and on-device AI for health. He has published more than 15 papers in Nature Machine Intelligence, DAC, ICCAD, and TCAD, and received the Second Place Award in the Ph.D. forum at DAC 2023. He has served on the technical program committee of ICCAD and as a reviewer for IEEE TC, TCAD, TNNLS, JETC, etc.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
Towards Acoustic Cameras: Neural Deconvolution and Rendering for Synthetic Aperture Sonar
Location
Speaker:
Dr. Suren Jayasuriya
Assistant Professor
Arizona State University
Abstract:
Acoustic imaging leverages sound to form visual products with applications including biomedical ultrasound and sonar. In particular, synthetic aperture sonar (SAS) has been developed to generate high-resolution imagery of both in-air and underwater environments. In this talk, we explore the application of implicit neural representations and neural rendering for SAS imaging and highlight how such techniques can enhance acoustic imaging for both 2D and 3D reconstructions. Specifically, we discuss challenges of neural rendering applied to acoustic imaging especially when handling the phase of reflected acoustic waves that is critical for high spatial resolution in beamforming. We present two recent works on enhanced 2D circular SAS deconvolution in air as well as a general neural rendering framework for 3D volumetric SAS. This research is the starting point for realizing the next generation of acoustic cameras for a variety of applications in air and water environments for the future.
Biography:
Dr. Suren Jayasuriya is an assistant professor at Arizona State University, in the School of Arts, Media and Engineering (AME) and Electrical, Computer and Energy Engineering (ECEE), since 2018. Before this, he was a postdoctoral fellow at the Robotics Institute at Carnegie Mellon University in 2017. Suren received his Ph.D. in ECE from Cornell University in January 2017 and graduated from the University of Pittsburgh in 2012 with a B.S. in Mathematics (with departmental honors) and a B.A. in Philosophy. His research interests span computational cameras, computer vision and graphics, and acoustic imaging/remote sensing. His website can be found at: https://sites.google.com/asu.edu/imaging-lyceum
Enquiries:
Professor GU Jinwei (jwgu@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Program Interfaces for Grounded, Transparent, and Reasoning AI
Location
Speaker:
Mr. LUO Hongyin
Postdoctoral associate
MIT Computer Science and Artificial Intelligence Laboratory
Zoom: https://cuhk.zoom.us/j/96031770790?pwd=RmI0Z25Qa1RFRzJKWUtOOG52YXlQdz09
Meeting ID: 960 3177 0790 // Passcode: 202400
Abstract:
Recent language models have achieved strong generalization ability across a vast range of tasks, but have also raised concerns about hallucinations, harmful stereotypes, and a lack of reliability in reasoning tasks. Our research emphasizes that the core solution to these problems is improving the grounding and reasoning abilities of language models. More specifically, we build trustworthy AI systems that (1) follow an explicit grounding-planning-reasoning pipeline for transparency and reliability, and (2) combine autoregressive generation with first-principles reasoning engines. By integrating large language models with knowledge graphs, entailment models, and program interpreters under a program scaffolding instead of natural language, we have significantly improved the accuracy, transparency, and efficiency of large language models on a wide range of numeric, symbolic, and natural language tasks.
Biography:
Hongyin LUO is a postdoctoral associate at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He received a bachelor’s degree from Tsinghua University in 2016 and obtained a Ph.D. degree in computer science in 2022 at MIT EECS. His research focuses on improving the efficiency, transparency, and reasoning ability of language models. His latest research has combined natural language with different formal reasoning engines, including entailment models and program interpreters. He has built self-trained language understanding models outperforming GPT3-175B with 1/500 of the computation, retrieval-augmented language models that handle noise from search engines, and natural language embedded programs that achieve accurate reasoning without task-specific examples.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
Lossy Compression for HPC Scientific Data
Location
Speaker:
Professor HE Xubin
Professor, Department of Computer and Information Sciences
Temple University
Director, Storage Technology and Architecture Research (STAR) Lab
Abstract:
Scientific simulations generate large amounts of floating-point data, which are often not very compressible using traditional reduction schemes such as deduplication or lossless compression. The emergence of lossy floating-point compression holds promise to satisfy the data reduction demand from HPC applications. In this talk, I will share our exploration of lossy compression for HPC scientific data, specifically in three aspects: 1) understanding and modelling lossy compression schemes on HPC scientific data, 2) compression ratio modelling and estimation across error bounds for lossy compression, and 3) exploring autoencoders to compress scientific data.
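The core mechanism of error-bounded lossy compression is simple to sketch: quantize each value into a bin of width 2*eb, so the point-wise reconstruction error never exceeds eb (this is the quantization step in the spirit of SZ-style compressors; the prediction and entropy-coding stages that deliver the actual size reduction are omitted):

import numpy as np

def compress(data, eb):
    # Bin width 2*eb guarantees |x - reconstruction| <= eb after rounding.
    return np.round(data / (2 * eb)).astype(np.int64)

def decompress(bins, eb):
    return bins * (2 * eb)

x = np.random.default_rng(0).normal(size=1_000_000)   # stand-in simulation field
eb = 1e-3                                             # absolute error bound
xr = decompress(compress(x, eb), eb)
print(bool(np.abs(x - xr).max() <= eb))               # True: the bound holds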
Biography:
Dr. Xubin He is a Professor in the Department of Computer and Information Sciences at Temple University. He is also the Director of the Storage Technology and Architecture Research (STAR) lab. Dr. He received his PhD in Electrical and Computer Engineering from the University of Rhode Island, USA in 2002 and both his MS and BS degrees in Computer Science from Huazhong University of Science and Technology, China, in 1997 and 1995, respectively. His research interests focus on data storage and I/O systems, including big data, cloud storage, Non-Volatile Storage, and scalability for large storage systems. He has published more than 100 refereed articles in prestigious journals such as IEEE Transactions on Parallel and Distributed Systems (TPDS), Journal of Parallel and Distributed Computing (JPDC), ACM Transactions on Storage, and IEEE Transactions on Dependable and Secure Computing (TDSC), and at various international conferences, including USENIX FAST, USENIX ATC, Eurosys, IEEE/IFIP DSN, IEEE INFOCOM, IEEE IPDPS, MSST, ICPP, MASCOTS, LCN, etc. He is the program co-chair for ccGRID’2024, IPCCC’2017, ICPADS’2016, MSST’2010, general co-chair for IEEE NAS’2009, and general vice co-chair for IPCCC’2018. Dr. He has served as a proposal review panelist for NSF and a committee member for many professional conferences in the field. Dr. He was a recipient of the ORAU Ralph E. Powe Junior Faculty Enhancement Award in 2004, the TTU Chapter Sigma Xi Research Award in 2010 and 2005, TTU ECE Most Outstanding Teaching Faculty Award in 2010, and VCU ECE Outstanding Research Faculty in 2015. He holds one U.S. patent. He is a senior member of the IEEE, a member of the IEEE Computer Society, and USENIX.
Enquiries:
Professor SHAO Zili (shao@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Towards Principled Modeling of Inductive Bias for Generalizable Machine Learning
Location
Speaker:
Mr. LIU Weiyang
Ph.D. Candidate
Max Planck Institute for Intelligent Systems
University of Cambridge
Abstract:
Machine learning (ML) becomes increasingly ubiquitous nowadays, as it enables scalable and accurate decision making in many applications, ranging from autonomous driving to medical diagnosis. Despite its unprecedented success, how to ensure that ML systems are trustworthy and generalize as intended remains a huge challenge. To address this challenge, my research aims to build generalizable ML algorithms through a principled modeling of inductive bias. To this end, I introduce three methods for modeling inductive biases: (1) value-based modeling, (2) data-centric modeling, and (3) structure-guided modeling. While briefly touching upon all three methods, I will focus on my recent efforts in value-based modeling and how it can effectively improve the adaptation of foundation models. Finally, I will conclude by highlighting the critical role of principled inductive bias modeling in unlocking new possibilities in the age of foundation models.
Biography:
Weiyang LIU is currently a final-year PhD student at the University of Cambridge and the Max Planck Institute for Intelligent Systems, advised by Prof. Adrian Weller and Prof. Bernhard Schölkopf under the Cambridge-Tuebingen Machine Learning Fellowship. His research focuses on the principled modeling of inductive biases to achieve generalizable and reliable machine learning. He has received a Baidu Fellowship and a Hitachi Fellowship, and was a Qualcomm Innovation Fellowship finalist. His work has received the 2023 IEEE Signal Processing Society Best Paper Award, the Best Demo Award at HCOMP 2022, and multiple oral/spotlight presentations at conferences such as ICLR, NeurIPS and CVPR.
Enquiries:
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Ms. FUNG Wing Chi Mary (maryfung@cse.cuhk.edu.hk)
Intelligent Digital Design and Implementation with Machine Learning in EDA
Location
Speaker:
Professor XIE Zhiyao
Assistant Professor, Department of Electronic and Computer Engineering (ECE),
The Hong Kong University of Science and Technology (HKUST)
Abstract:
As integrated circuit (IC) complexity keeps increasing, chip design cost is skyrocketing. Semiconductor companies face ever-greater demand for experienced manpower and are stressed by unprecedented turnaround times. Therefore, there is a compelling need to improve design efficiency through new electronic design automation (EDA) techniques. In this talk, I will present multiple design automation techniques based on machine learning (ML) methods, whose major strength is to explore highly complex correlations based on prior circuit data. These techniques cover various chip-design objectives and design stages, including the layout, netlist, register-transfer level (RTL), and micro-architectural levels. I will focus on the different challenges in design objective prediction at different stages and present our customized solutions. In addition, I will share our latest observations on design generation with large language models (LLMs).
Biography:
Zhiyao Xie is an Assistant Professor in the ECE Department at Hong Kong University of Science and Technology. He received his Ph.D. in 2022 from Duke University. His research focuses on electronic design automation (EDA) and machine learning for VLSI design. Zhiyao has received multiple prestigious awards, including the UGC Early Career Award 2023, ACM Outstanding Dissertation Award in EDA 2023, EDAA Outstanding Dissertation Award 2023, MICRO 2021 Best Paper Award, ASP-DAC 2023 Best Paper Award, ACM SIGDA SRF Best Poster Award 2022, etc. During his Ph.D. studies, Zhiyao also worked as a research intern at leading semiconductor companies such as Nvidia, Arm, Cadence, and Synopsys. Now he also serves as the Seminar Chair of IEEE CEDA Hong Kong.
Enquiries:
Professor XU Qiang (qxu@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Foundation Models for Life Science
Location
Speaker:
Professor SONG Le
CTO and Chief AI Scientist, BioMap
Professor, Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
Abstract:
Can we leverage a large amount of unsupervised data to accelerate life science discovery and drug design in industry? In this talk, I will introduce the xTrimo family of large-scale pretrained models spanning multiple scales of biological processes, integrating a huge amount of data from protein sequences, structures, protein-protein interactions, and single-cell transcriptomics. The pretrained models can be used as the foundation to address many predictive problems arising in life science and drug design, and they achieve SOTA performance.
Biography:
Le Song is the CTO and Chief AI Scientist of BioMap. He directs the research and development of the xTrimo family of foundation models for life sciences, the largest model family in the area, consisting of more than 100B parameters and achieving SOTA performance on tens of downstream problems. This new technology also led to the first foundation model deal with a big pharmaceutical company (Sanofi), totaling 1 billion dollars in contract value. Academically, Le Song is a full professor at MBZUAI, was a tenured associate professor at Georgia Tech, and was the conference program chair of ICML 2022. He is an expert in machine learning and AI, and has won many best paper awards at leading AI conferences such as NeurIPS, ICML and AISTATS. Recently, his work on using large language models for protein structure prediction was featured as the cover story in Nature Machine Intelligence.
Enquiries:
Professor LI Yu (liyu@cse.cuhk.edu.hk)
Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Generative AI for EDA and Chip Design
Location
Speaker:
Dr. REN Haoxing
Director of Design Automation Research
NVIDIA
Abstract:
This talk explores the transformative potential of Generative AI (GenAI) techniques for EDA and Chip Design. First, we introduce the physical design scaling challenge and propose leveraging GenAI to meet this challenge, particularly in core areas of physical design such as gate sizing and buffering. Using GenAI, we have achieved speed-ups that are multiple orders of magnitude faster than existing commercial tools. Additionally, we delve into the challenges associated with training and inference in GenAI models. To facilitate this, we introduce CircuitOps, an open-source tool that efficiently gathers and processes EDA data for the training and inference phases of GenAI models. Second, we explore the application of Large Language Models (a key GenAI technology) to improve industrial chip design productivity. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we trained domain-adapted LLMs (ChipNeMo) with internal design documents and source code. We evaluated ChipNeMo on three selected LLM applications for chip design: an engineering assistant chatbot, EDA script generation, and bug summarization and analysis. Our results show that domain adaptation techniques enable significant LLM performance improvements over general-purpose base models. We also find that domain adaptation is orthogonal to retrieval augmented generation (RAG). On the engineering assistant application, our best model achieved 20% higher performance than GPT-4 with RAG.
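One point worth unpacking is why domain adaptation and RAG compose: retrieval edits the prompt, while adaptation edits the model, so the two can be applied independently. The minimal sketch below illustrates a generic RAG loop under that reading; embed() and generate() are hypothetical stand-ins, not ChipNeMo components.

# Minimal retrieval-augmented generation loop (all parts hypothetical).
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding; a real system would use a trained encoder.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

def generate(prompt: str) -> str:
    return "(domain-adapted LLM output)"       # stub for the model call

docs = ["Timing closure guide ...", "Tcl script conventions ...",
        "Bug triage workflow ..."]
index = np.stack([embed(d) for d in docs])     # offline document index

def answer(question: str, k: int = 2) -> str:
    scores = index @ embed(question)           # cosine similarity
    top = np.argsort(scores)[::-1][:k]         # k most relevant docs
    context = "\n".join(docs[i] for i in top)
    return generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

print(answer("How do I rerun the timing flow?"))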
Biography:
Haoxing Ren (Mark) serves as the Director of Design Automation Research at NVIDIA, where he focuses on leveraging machine learning and GPU-accelerated tools to enhance chip design quality and productivity. Prior to joining NVIDIA in 2016, he dedicated 15 years to EDA algorithm research and design methodology innovation at IBM Microelectronics and IBM Research. Mark is widely recognized for his contributions to physical design, AI, and GPU acceleration for EDA, achievements that have earned him several prestigious awards, including the IBM Corporate Award and best paper awards at ISPD, DAC, TCAD, and MLCAD. He holds over twenty patents and has co-authored over 100 papers and books, including a book on ML for EDA and several book chapters on physical design and logic synthesis. He holds Bachelor’s and Master’s degrees from Shanghai Jiao Tong University and Rensselaer Polytechnic Institute, respectively, and earned his PhD from the University of Texas at Austin. He is a Fellow of the IEEE.
Enquiries: Mr. WONG O-Bong (obong@cse.cuhk.edu.hk)
Learning to Perceive and Model the World at Scale for Autonomous AI
Location
Speaker:
Mr. XIONG Yuwen
Ph.D. Candidate
Department of Computer Science, University of Toronto
Abstract:
Developing truly autonomous AI systems like self-driving cars has the potential to transform various industries and improve our daily lives. Building such a system hinges on two crucial components: first, precise perception of the world; second, modeling and predicting the world’s dynamics in order to interact with the real world effectively.
In this talk, I will outline my research efforts in perception and world modeling, focusing on developing scalable deep learning algorithms and models beyond controlled environments.
Regarding perception, I will delve into the development of core deep-learning operators that fundamentally augment the capabilities of deep learning models, followed by discussions on how to perform unsupervised pretraining and design unified neural network architectures for efficient and effective image segmentation.
As for world modeling, I will show how to learn prior knowledge of the world and then learn to accurately predict world dynamics at the observational level, both in a scalable and unsupervised manner. Lastly, I will discuss my future research plans to advance perception and world modeling further. This involves integrating multi-modal information into the models and systematically incorporating external knowledge, which is crucial for realizing intelligent autonomous AI systems.
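As a rough, hypothetical sketch of what predicting dynamics at the observational level can look like in code: encode a frame into a latent state, roll the state forward, decode a prediction of the next frame, and train only on prediction error, with no labels. The architecture and sizes below are invented for illustration and are not the speaker's models.

# Toy unsupervised world-model training step (illustrative only).
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(64 * 64, 256), nn.ReLU())  # observation -> latent
dyn = nn.GRUCell(input_size=256, hidden_size=256)        # latent dynamics
dec = nn.Linear(256, 64 * 64)                            # latent -> observation

params = [*enc.parameters(), *dyn.parameters(), *dec.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

frames = torch.rand(16, 5, 64 * 64)      # toy batch: 16 clips of 5 frames
h = torch.zeros(16, 256)
loss = torch.tensor(0.0)
for t in range(4):
    h = dyn(enc(frames[:, t]), h)        # roll the latent state forward
    pred = dec(h)                        # predict the next observation
    loss = loss + nn.functional.mse_loss(pred, frames[:, t + 1])
opt.zero_grad(); loss.backward(); opt.step()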
Biography:
Yuwen Xiong is a Ph.D. candidate at the University of Toronto, advised by Professor Raquel Urtasun. He was a research scientist in industrial labs (e.g., Uber ATG, Waabi), conducting cutting-edge research on autonomous driving. Before coming to Toronto, he collaborated with Dr. Jifeng Dai at Microsoft Research Asia. Yuwen Xiong’s primary interest lies at the intersection of computer vision, robotics, and machine learning. His long-term vision is to build autonomous AI systems that can learn like humans and operate reliably in the real world. To this end, he leverages his knowledge across the full spectrum of autonomy, including perception, prediction, decision-making, and 3D generation, to create systems that are flexible enough to handle real-world complexities, robust to uncertainties, and generalizable to novel scenes. He is a recipient of the Canada Graduate Scholarships – Doctoral and the Borealis AI Fellowship. More information about him can be found at https://www.cs.toronto.edu/~yuwen/.
Enquiries: WONG O-Bong (obong@cse.cuhk.edu.hk)
Translating Computer Vision Research to the Real-world Applications
Location
Speaker:
Professor Moi Hoon Yap
Professor of Image & Vision Computing
Manchester Metropolitan University
Abstract:
This seminar introduces research and education in the Department of Computing and Mathematics, Manchester Metropolitan University, UK, and offers a pathway for potential research and education collaboration. Prof. Yap will share the conceptual foundations and procedures used in the development of medical and computer vision datasets over the past decade, with a timeline to demonstrate progress. The talk covers data capture methods, an overview of research in developing private and public datasets, the related computer vision tasks (the facial micro-expression challenges and the diabetic foot ulcer challenges), and the future direction of this research. As a leading investigator at one of the leading institutions in these fields, her aim is to share the technical challenges encountered, together with good practices in dataset development, and to motivate other researchers to participate in data sharing in this domain. Future work includes a call for effort to establish an international consortium that would form an international repository of medical imaging datasets.
For more details, please refer to:
https://dfu-challenge.github.io/ (DFU challenges)
https://megc2023.github.io/ (Facial micro-expressions Challenges)
Biography:
Prof. Moi Hoon Yap is the Research Lead of the Department of Computing and Mathematics, Manchester Metropolitan University, UK. Her leadership in both research and education has attracted international students and research collaborations. She leads the Human-Centred Computing Group (20 staff members and 12 research scholars), with expertise in computer vision and deep learning. As the holder of The Royal Society Industry Fellowship (2016-2022), hosted by Image Metrics Ltd, her research is driven by industrial needs. In addition, her research provides new insights and breakthroughs in medical image analysis and facial analysis. Moi Hoon has received research funding from The Royal Society, EU funding, EPSRC, Innovate UK, Cancer Research UK, and industry partners. She serves as an Associate Editor of the Journal of Computers and Programs in Biomedicine and as a panel member for UK funding bodies. She leads technology development for multiple computer vision projects, has created novel datasets for reproducible research, and has conducted international computer vision challenges.
Enquiries: WONG O-Bong (obong@cse.cuhk.edu.hk)
Textual Inversion
Location
Speaker:
Professor Daniel Cohen-Or
Professor
School of Computer Science, Tel Aviv University
Abstract:
Text-to-image models offer unprecedented freedom to guide creation through natural language. Yet, it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes. In other words, we ask: how can we use language-guided models to turn our cat into a painting, or imagine a new product based on our favourite toy? Here we present a simple approach that allows such creative freedom. Using only 3-5 images of a user-provided concept, like an object or a style, we learn to represent it through new “words” in the embedding space of a frozen text-to-image model. These “words” can be composed into natural language sentences, guiding personalized creation in an intuitive way. Notably, we find evidence that a single word embedding is sufficient for capturing unique and varied concepts.
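Mechanically, the approach is compact enough to sketch: a single new embedding vector is optimized through the frozen generator, while every pretrained weight stays fixed. In the toy loop below, frozen_loss() is a hypothetical stand-in for the frozen diffusion model's denoising loss, and all sizes are placeholders.

# Core of textual inversion, heavily simplified and hypothetical.
import torch

emb_table = torch.nn.Embedding(49409, 768)       # pretrained table + 1 new slot
for p in emb_table.parameters():
    p.requires_grad_(False)                      # freeze the whole table...

v_star = torch.randn(768, requires_grad=True)    # ...except the new "word" v*
opt = torch.optim.Adam([v_star], lr=5e-3)

user_images = torch.rand(8, 3, 64, 64)           # the 3-5 concept images (toy)

def frozen_loss(images, text_emb):
    # Stand-in for the frozen model's denoising loss; a real system runs
    # the frozen U-Net conditioned on text_emb and compares noise estimates.
    return (text_emb.mean() - images.mean()) ** 2

def encode_prompt(token_ids, new_token_pos):
    e = emb_table(token_ids).clone()
    e[new_token_pos] = v_star                    # splice v* into the prompt
    return e

for step in range(100):
    token_ids = torch.randint(0, 49408, (8,))    # toy "a photo of S*" prompt
    loss = frozen_loss(user_images, encode_prompt(token_ids, new_token_pos=3))
    opt.zero_grad(); loss.backward(); opt.step()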
Biography:
Daniel Cohen-Or is a professor in the School of Computer Science. He received his B.Sc. cum laude in both mathematics and computer science (1985) and M.Sc. cum laude in computer science (1986) from Ben-Gurion University, and his Ph.D. from the Department of Computer Science (1991) at the State University of New York at Stony Brook. He has served on the editorial boards of a number of international journals and on the program committees of several international conferences. He was the recipient of the Eurographics Outstanding Technical Contributions Award in 2005. In 2013 he received The People’s Republic of China Friendship Award. In 2015 he was named a Thomson Reuters Highly Cited Researcher. He received the ACM SIGGRAPH Computer Graphics Achievement Award in 2018. In 2019 he won The Kadar Family Award for Outstanding Research. In 2020, he received The Eurographics Distinguished Career Award. His research interests are in computer graphics, in particular synthesis, processing and modelling techniques.
Enquiries: WONG O-Bong (obong@cse.cuhk.edu.hk)
Designing Secure Datacenter Transport Protocol
Location
Speaker:
Professor Michio Honda
Lecturer (Assistant Professor)
School of Informatics, University of Edinburgh
Abstract:
Datacenter operators and tenants need end-to-end encryption due to workload co-location and untrusted network infrastructure. The status quo is TLS/TCP and QUIC, but those are unfit for datacenters due to unsuitable abstractions and host software overheads. This talk presents our work on a secure datacenter transport protocol (SDP), which reuses the legacy cryptographic hardware offloads available in commodity NICs while building on a new datacenter transport protocol, such as Homa, as its basis. This allows operators that currently rely on TLS over TCP to adopt SDP without sacrificing hardware-offloading opportunities.
Biography:
Michio Honda is a lecturer (equivalent to assistant professor in US) in the School of Informatics at the University of Edinburgh. His best-known work is identifying TCP extensibility against middlebox interference and building the first TCP/IP network stack for persistent memory. His current research interests include networked storage systems and secure datacenter transport protocols. He is a recipient of IRTF Applied Networking Research Prize (2011), Facebook Research Award (2021) and Google Research Scholar Award (2022).
Enquiries: WONG O-Bong (obong@cse.cuhk.edu.hk)
AI for Chip Design & EDA: Everything, Everywhere, All at Once
Location
Speaker:
Professor David Z. Pan
Professor & Silicon Laboratories Endowed Chair
Department of Electrical and Computer Engineering, The University of Texas at Austin
Abstract:
AI for chip design and EDA has received tremendous interest from both academia and industry in recent years. It touches everything that chip designers care about, from power/performance/area (PPA) to cost/yield, turn-around time, and security, among others. It is everywhere, at all levels of design abstraction, in testing, verification, DFM, and mask synthesis, for digital as well as some aspects of analog/mixed-signal/RF designs. It has also been used to tune the overall design flow and hyper-parameters, but not yet all at once, e.g., generative AI from design specification all the way to layouts in a correct-by-construction manner. In this talk, I will cover some recent advances and breakthroughs in AI for chip design/EDA and share my perspectives.
Biography:
Prof. David Pan (Fellow of ACM, IEEE, and SPIE) is a Full Professor and holder of Silicon Laboratories Endowed Chair at the Chandra Department of Electrical and Computer Engineering, The University of Texas at Austin. His research interests include electronic design automation, synergistic AI and IC co-optimizations, design for manufacturing, hardware security, and design/CAD for analog/mixed-signal and emerging technologies. He has published over 480 refereed journal/conference papers and 9 US patents. He has served in many editorial boards and conference committees, including various leadership roles such as DAC 2024 Technical Program Chair, DAC 2023 Technical Program Co-Chair, ICCAD 2019 General Chair, and ISPD 2008 General Chair. He has received many awards, including 20 Best Paper Awards (from TCAD, DAC, ICCAD, DATE, ASP-DAC, ISPD, HOST, SRC, IBM, etc.), SRC Technical Excellence Award, DAC Top 10 Author Award in Fifth Decade, ASP-DAC Frequently Cited Author Award, NSF CAREER Award, IBM Faculty Award (4 times), and many international CAD contest awards. He has held various advisory, consulting, or visiting positions in academia and industry, such as MIT and Google. He has graduated 52 PhD students and postdocs who have won many awards, including ACM Student Research Competition Grand Finals 1st Place (twice, 2018 and 2021), and Outstanding PhD Dissertation Awards 5 times from ACM/SIGDA and EDAA.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
Resource Management and Runtime Reconfiguration for Distributed Streaming Systems
Location
Speaker:
Professor Richard T. B. Ma
Associate Professor
School of Computing, National University of Singapore
Abstract:
Due to the long-running and unpredictable nature of stream processing, any statically configured execution of stream jobs fails to process data in a timely and efficient manner. To meet performance requirements, stream jobs need to be reconfigured dynamically.
In the first part of the talk, we will discuss DRS, a dynamic resource scaling framework for cloud-based stream data analytics systems. DRS overcomes three fundamental challenges: 1) how to model the relationship between the provisioned resources and the application performance, 2) where to best place resources, and 3) how to measure the system load with minimal overhead. In particular, DRS includes an accurate performance model based on the theory of Jackson open queueing networks and is capable of handling arbitrary operator topologies, possibly with loops, splits, and joins. Extensive experiments with real data show that DRS is capable of detecting sub-optimal resource allocation and making quick and effective resource adjustment.
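To make the queueing intuition concrete: if an operator with r replicas is approximated as a single M/M/1 server of capacity r·mu facing arrival rate lambda, its expected sojourn time is 1/(r·mu − lambda), and end-to-end latency is the sum along the topology. The toy sketch below uses invented numbers and this crude approximation — not the DRS model itself — to show how such a model flags where an extra replica helps most.

# Toy Jackson-network-style latency model (hypothetical numbers).
def sojourn(lam: float, mu: float, replicas: int) -> float:
    cap = replicas * mu                 # pooled service capacity
    if cap <= lam:
        return float("inf")             # unstable: queue grows without bound
    return 1.0 / (cap - lam)            # expected time in queue + service

lam = 900.0                             # tuples/sec entering the job
ops = [(400.0, 3), (250.0, 4), (500.0, 2)]   # (mu per replica, replicas)

latency = sum(sojourn(lam, mu, r) for mu, r in ops)
print(f"expected end-to-end latency: {latency * 1e3:.2f} ms")

# Where would one extra replica reduce latency the most?
for j, _ in enumerate(ops):
    new = sum(sojourn(lam, mu, r + (i == j)) for i, (mu, r) in enumerate(ops))
    print(f"add a replica to operator {j}: saves {(latency - new) * 1e3:.2f} ms")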
In the second part of the talk, we present Trisk, a control plane that supports versatile reconfigurations while keeping high efficiency with easy-to-use programming APIs. Trisk enables versatile reconfigurations with usability based on a task-centric abstraction, and encapsulates primitive operations such that reconfigurations can be described by compositing the primitive operations on the abstraction. Trisk adopts a partial pause-and-resume design for efficiency, through which synchronization mechanisms in the native stream systems can further be leveraged. We implement Trisk on Apache Flink and demonstrate its usage and performance under realistic application scenarios. We show that Trisk executes reconfigurations with shorter completion time and comparable latency compared to a state-of-the-art fluid mechanism for state management.
Biography:
Prof. Richard T. B. Ma received the B.Sc. (Hons.) degree in computer science and M.Phil. degree in computer science and engineering from The Chinese University of Hong Kong in 2002 and 2004, respectively, and the Ph.D. degree in electrical engineering from Columbia University in 2010. During his Ph.D. study, he worked as a Research Intern at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA, and Telefonica Research, Barcelona, Spain. From 2010–2014, he worked as a Research Scientist at the Advanced Digital Science Center (ADSC), University of Illinois at Urbana–Champaign, Champaign, IL, USA. He is currently an Associate Professor with the School of Computing, National University of Singapore. His current research interests include distributed systems and network economics. He was a recipient of the Best Paper Award Runners-up from the ACM Mobihoc 2020 and a co-recipient of the Best Paper Award from the IEEE IC2E 2013, the IEEE ICNP 2014, and the IEEE Workshop on Smart Data Pricing 2015. He is a Senior Member of ACM and IEEE.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
Neural Acceleration with Full Stack Optimization
Location
Speaker:
Professor Meng Li
Assistant Professor
Institute for Artificial Intelligence, School of Integrated Circuits, Peking University
Abstract:
Recent years have witnessed the fast evolution of AI and deep learning (DL) in fields such as computer vision and natural language processing. Though promising, DL faces serious challenges due to exponential network scaling and network heterogeneity. In this talk, I will discuss some of our recent works that leverage network/hardware co-design and co-optimization to improve the efficiency of DL. I will cover our recent works on tiny language models for MCUs, memory-aware scheduling, and hardware accelerator designs based on a new computing paradigm, i.e., stochastic computing. I will also discuss interesting future directions to further improve the efficiency and security of AI.
Biography:
Prof. Meng Li is currently a tenure-track assistant professor at Peking University, jointly affiliated with the Institute for Artificial Intelligence and the School of Integrated Circuits. Before joining Peking University, he was a staff research scientist and tech lead at Reality Labs, Meta, the world’s largest social media company, focusing on research and productization of efficient AI algorithms and hardware/systems for next-generation AR/VR devices. Dr. Li received his Ph.D. degree from the University of Texas at Austin in 2018 and his bachelor’s degree from Peking University in 2013.
Prof. Meng Li’s research interests lie in the field of efficient and secure multi-modal AI acceleration algorithms and hardware. He has published more than 60 papers and received two best paper awards, from HOST 2017 and GLSVLSI 2018. He has also received the EDAA Outstanding Dissertation Award, First Place in the ACM Student Research Competition Grand Final (Graduate Category), Best Poster Awards at the ASP-DAC Student Research Forum, etc.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
Qwen: Towards a Generalist Model
Location
Speaker:
Mr. Junyang Lin
Staff Engineer, Leader of Qwen Team,
Alibaba Group
Abstract:
This talk introduces the large language and multimodal model series Qwen, which stands for Tongyi Qianwen (通义千问), published and opensourced by Alibaba Group. The Qwen models have achieved competitive performance against both opensource and proprietary LLMs and LMMs in both benchmark evaluation and human evaluation. This talk provides a brief overview of the model series, and then delves into the details of building the LLMs and LMMs, including pretraining, alignment, multimodal extension, as well as the opensource release. Additionally, it points out the limitations and discusses future work for both the research community and industry in this field.
Biography:
Mr. Junyang Lin is a staff engineer at Alibaba Group, and he now leads the Qwen Team. He has been doing research in natural language processing and multimodal representation learning, with a focus on large-scale pretraining, and he has around 3000 citations. Recently his team released and opensourced the Qwen series, including the large language model Qwen, the large vision-language model Qwen-VL, and the large audio-language model Qwen-Audio. Previously, he focused on large-scale pretraining, particularly multimodal pretraining, and developed the opensourced models OFA, Chinese-CLIP, etc.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
Classical simulation of one-query quantum distinguishers
Location
Speaker:
Professor Andrej Bogdanov
Professor
School of Electrical Engineering and Computer Science, University of Ottawa
Abstract:
A distinguisher is an algorithm that tells whether its input was sampled from one distribution or from another. The computational complexity of distinguishers is important for much of cryptography, pseudorandomness, and statistical inference.
We study the relative advantage of classical and quantum distinguishers of bounded query complexity over n-bit strings. Our focus is on a single quantum query, which is already quite powerful: Aaronson and Ambainis (STOC 2015) constructed a pair of distributions that is ε-distinguishable by a one-query quantum algorithm, but O(εk/√n)-indistinguishable by any non-adaptive k-query classical algorithm.
We show that every pair of distributions that is ε-distinguishable by a one-query quantum algorithm is distinguishable with k classical queries and (1) advantage min{Ω(ε√(k/n)), Ω(ε²k²/n)} non-adaptively (i.e., in one round), and (2) advantage Ω(ε²k/√(n log n)) in two rounds. The second bound is tight in k and n up to a (log n) factor.
Based on joint work with Tsun Ming Cheung (McGill), Krishnamoorthy Dinesh (IIT Palakkad), and John C.S. Lui (CUHK).
Biography:
Prof. Andrej Bogdanov is a professor in the School of Electrical Engineering and Computer Science at the University of Ottawa. He is interested in cryptography, pseudorandomness, and computational complexity. Andrej obtained his Ph.D. from UC Berkeley. Before joining uOttawa he taught at the Chinese University of Hong Kong. He was a visiting professor at the Tokyo Institute of Technology in 2013 and at the Simons Institute for the Theory of Computing in 2017 and 2021.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
Compact AI Representations for Game Theory: Models, Computations, and Applications
Location
Speaker:
Professor Hau Chan
Assistant Professor
School of Computing, University of Nebraska-Lincoln
Abstract:
In the last few decades, game theory has become a prominent construct for modeling and predicting outcomes of strategic interactions of rational agents in various real-world environments, ranging from adversarial (e.g., attacker-defender in the security domain) to collaborative (e.g., public good contributions). In turn, these predicted outcomes can be used to facilitate, inform, and improve agents’ and policymakers’ decision-making. Unfortunately, because of the domain characteristics of real-world environments, classical game-theoretic modeling and computational approaches (for predicting outcomes) can often take exponential space and time.
In this talk, I will discuss compact AI representations for strategic interactions (or games) that provide efficient approaches for a wide range of applications. I will demonstrate how they can be used to model and predict outcomes in scenarios we have examined previously, such as attacker-defender interactions, resource congestion, residential segregation, and public project contributions.
More specifically, I will first present aggregate games, a compact AI representation of games where each agent’s utility function depends on their own actions and the aggregation or summarization of the actions of all agents, and resource graph games, a compact AI representation of games where agents have exponential numbers of actions. For these games, I will then present our computational results for determining and computing Nash Equilibria (NE), a fundamental solution concept to specify predicted outcomes in games, and their related problems.
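To see why the aggregate makes these games compact, consider the toy public-good game below: each agent's utility depends only on its own action and the number of contributors, so best responses never need the full joint action profile. The payoff function is hypothetical, and the simple asynchronous best-response sweep used here happens to converge in this example but is not a general equilibrium-computation algorithm.

# Toy aggregate game: contribute (1) or not (0); utility depends only
# on the agent's own action and the aggregate number of contributors.
N = 10
BENEFIT, COST = 1.0, 0.6

def utility(own: int, aggregate: int) -> float:
    return BENEFIT * (aggregate ** 0.5) - COST * own   # concave public benefit

def best_response(others_sum: int) -> int:
    # The aggregate is all an agent needs to know about everyone else.
    return 1 if utility(1, others_sum + 1) > utility(0, others_sum) else 0

actions = [0] * N
changed = True
while changed:                          # asynchronous best-response sweeps
    changed = False
    for i in range(N):
        br = best_response(sum(actions) - actions[i])
        if br != actions[i]:
            actions[i], changed = br, True

print("pure Nash profile:", actions, "aggregate:", sum(actions))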
Biography:
Prof. Hau Chan is an assistant professor in the School of Computing at the University of Nebraska-Lincoln. He received his Ph.D. in Computer Science from Stony Brook University in 2015 and completed three years of Postdoctoral Fellowships, including at the Laboratory for Innovation Science at Harvard University in 2018. His main research areas focus on modeling and algorithmic aspects of AI and multi-agent interactions (e.g., via game theory, mechanism design, and applied machine learning), addressing several cross-disciplinary societal problems and applications. His recent application areas include improving accessibility to public facilities, reducing substance usage, and making fair collective decisions. His research has been supported by NSF, NIH, and USCYBERCOM. He has received several Best Paper Awards at SDM and AAMAS and distinguished/outstanding SPC/PC member recognitions at IJCAI and WSDM. He has given tutorials and talks on computational game theory and mechanism design at venues such as AAMAS and IJCAI, including an Early Career Spotlight at IJCAI 2022. He has served as a co-chair for Demonstrations, Doctoral Consortium, Scholarships, and Diversity & Inclusion Activities at AAMAS and IJCAI.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93836939970
Meeting ID: 938 3693 9970
Passcode: 202300
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
Cryo-Electron Microscopy Image Analysis: from 2D class averaging to 3D reconstruction
Location
Speaker:
Professor Zhizhen Zhao
William L. Everitt Fellow and Associate Professor
Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Abstract:
Cryo-electron microscopy (EM) single particle reconstruction is an entirely general technique for 3D structure determination of macromolecular complexes. This talk focuses on the algorithms for 2D class averaging and 3D reconstruction for the single-particle images, assuming no conformation changes of the macromolecules. In the first part, I will introduce the multi-frequency vector diffusion maps to improve the efficiency and accuracy of cryo-EM 2D image classification and denoising. This framework incorporates different irreducible representations of the estimated alignment between similar images. In addition, we use a graph filtering scheme to denoise the images using the eigenvalues and eigenvectors of the MFVDM matrices. In the second part, I will present a 3D reconstruction approach, which follows a line of works starting from Kam (1977) that employs the autocorrelation analysis for the single particle reconstruction. Our approach does not require per image pose estimation and imposes spatial non-negativity constraint. At the end of the talk, I will briefly review the challenges and existing approaches for addressing the continuous heterogeneity in cryo-EM data.
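As a generic illustration of the graph-filtering step (not the MFVDM construction itself), the sketch below denoises a batch of signals by projecting them onto the leading eigenvectors of a symmetrically normalized affinity matrix — a low-pass filter on the data graph.

# Generic spectral low-pass denoising on an affinity graph (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 64
clean = np.sin(np.linspace(0, 6, n))[:, None] * rng.standard_normal((1, d))
noisy = clean + 0.5 * rng.standard_normal((n, d))

# Gaussian-kernel affinity between samples, symmetrically normalized.
dist2 = ((noisy[:, None, :] - noisy[None, :, :]) ** 2).sum(-1)
W = np.exp(-dist2 / np.median(dist2))
dinv = 1.0 / np.sqrt(W.sum(1))
S = dinv[:, None] * W * dinv[None, :]

# Keep the top eigenvectors of S: the smooth modes on the graph.
vals, vecs = np.linalg.eigh(S)
U = vecs[:, -8:]
denoised = U @ (U.T @ noisy)            # project signals onto smooth modes

rel = lambda x: np.linalg.norm(x - clean) / np.linalg.norm(clean)
print(f"relative error: noisy {rel(noisy):.2f} -> denoised {rel(denoised):.2f}")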
Biography:
Prof. Zhizhen Zhao is an Associate Professor and William L. Everitt Fellow in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. She joined University of Illinois in 2016. From 2014 to 2016, she was a Courant Instructor at the Courant Institute of Mathematical Sciences, New York University. She received the B.A. and M.Sc. degrees in physics from Trinity College, Cambridge University in 2008, and the Ph.D. degree in physics from Princeton University in 2013. She is a recipient of Alfred P. Sloan Research Fellowship (2020). Her research interests include computational imaging, data science, and machine learning.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
Structure for Scalable Verification
Location
Speaker:
Dr. Lauren Pick
Postdoctoral Researcher
Department of Computer Sciences, University of Wisconsin-Madison and Department of Electrical Engineering and Computer Sciences, University of California, Berkeley
Abstract:
Given the critical role of software systems in society, it is important that we understand how such systems behave and interact. Formal specifications can help us in this task by providing rigorous and unambiguous descriptions of system behaviors. Automated verification can be applied to automate the process of proving formal specifications hold for software systems, making it easier to ensure that the underlying systems function as intended. Unfortunately, the application of automated verification to real-world systems remains hindered by scalability limitations. In this talk, I describe my work on addressing these limitations by leveraging the problem-specific structure of specifications and systems. I specifically illustrate my approach for handling concrete problems in security and distributed domains, where taking advantage of structure enables scalable verification.
Biography:
Dr. Lauren Pick is a postdoctoral researcher at the University of California, Berkeley and the University of Wisconsin-Madison. She received her Ph.D. from Princeton University in January 2022. Her research focuses on developing techniques for automated verification and synthesis, with the goal of enabling formal reasoning about real-world systems. To this end, she has developed techniques that take advantage of structural aspects of target systems and their desired properties to enable efficient verification and synthesis. She is a Computing Innovation Fellow and was a recipient of the NSF GRFP Fellowship.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
Shape Geometric Processing and Analysis of Large Aviation Equipments
Location
Speaker:
Professor Mingqiang Wei
Professor
School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics (NUAA)
Abstract:
Large aircraft are among the most complex high-end equipment in modern society: the culmination of interdisciplinary, cross-domain advanced technologies, occupying the top of the manufacturing industry’s technology and value chains. With the emergence of a batch of national key equipment such as the Y-20, C919, and Jiaolong-600, China has made breakthrough progress in large aircraft manufacturing and gradually established a relatively complete production and development system. However, owing to an insufficient technological foundation, and compared with international aerospace manufacturing giants, Chinese aviation enterprises have not yet achieved integrated manufacturing-and-measurement capabilities or effective precision control. “High-precision rapid 3D scanning analysis and quality control technology” has therefore become an important factor affecting the development of large aircraft in China.
Geometric deep learning, with its powerful ability to learn geometric features, has shown great potential in the analysis of large aircraft shapes. However, existing network structures lack aviation domain expertise, there is no publicly available large-scale aircraft 3D dataset, and the latest machine learning techniques have not been deeply integrated into geometric deep learning, making it difficult to analyze, comprehensively and efficiently, the complex features and stringent accuracy requirements of large aircraft shapes. This talk will introduce the interdisciplinary technical issues involved in the analysis of large aircraft shapes.
Biography:
Prof. Mingqiang Wei received his Ph.D. degree (2014) in Computer Science and Engineering from the Chinese University of Hong Kong (CUHK). He is a professor at the School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics (NUAA). He received the Excellent Youth Fund of the National Natural Science Foundation of China in 2023. Before joining NUAA, he served as an assistant professor at Hefei University of Technology and as a postdoctoral fellow at CUHK. He received a CUHK Young Scholar Thesis Award in 2014. He is now an Associate Editor for ACM TOMM, The Visual Computer, and the Journal of Electronic Imaging, and a Guest Editor for IEEE Transactions on Multimedia. His research interests focus on 3D vision, computer graphics, and deep learning.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
Looking behind the Seen
Location
Speaker:
Professor Alexander Schwing
Associate Professor
Department of Electrical and Computer Engineering & Department of Computer Science, University of Illinois at Urbana-Champaign
Abstract:
Our goal is to develop methods which anticipate. For this, four foundational questions need to be answered: (1) How can methods accurately forecast high-dimensional observations?; (2) How can algorithms holistically understand objects, e.g., when reasoning about occluded parts?; (3) How can accurate probabilistic models be recovered from limited amounts of labeled data and for rare events?; and (4) How can autonomous agents be trained effectively to collaborate?
In this talk we present vignettes of our research addressing those questions. We start by discussing MaskFormer and Mask2Former, recent architectures which achieve state-of-the-art results on three tasks: panoptic, instance and semantic segmentation. We then discuss the importance of memory for video object segmentation and its combination with foundation models for open-world segmentation. Finally, and if time permits, we discuss SDFusion, a generative model to infer parts of an object that are unobserved. For additional info and questions, please browse to http://alexander-schwing.de.
Biography:
Prof. Alexander Schwing is an Associate Professor at the University of Illinois at Urbana-Champaign working with talented students on computer vision and machine learning topics. He received his B.S. and diploma in Electrical Engineering and Information Technology from Technical University of Munich in 2006 and 2008 respectively, and obtained a PhD in Computer Science from ETH Zurich in 2014. Afterwards he joined University of Toronto as a postdoctoral fellow until 2016. His research interests are in the area of computer vision and machine learning, where he has co-authored numerous papers on topics in scene understanding, inference and learning algorithms, deep learning, image and language processing and generative modeling. His PhD thesis was awarded an ETH medal and his team’s research was awarded an NSF CAREER award. For additional info, please browse to http://alexander-schwing.de.
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
Open-Source Accelerator-Based Edge AI Architectures for a Sustainable World
Location
Speaker:
Professor David Atienza
Professor
Department of Electrical and Computer Engineering, The École Polytechnique Fédérale de Lausanne (EPFL)
Abstract:
Edge computing is becoming an essential concept covering multiple domains nowadays as our world becomes increasingly connected to enable the Internet of Things (IoT) concept. In addition, the new wave of Artificial Intelligence (AI), particularly complex Machine Learning (ML) and Deep Learning (DL) models, is demanding new computing paradigms beyond traditional general-purpose computing to make IoT a viable reality in a sustainable world.
In this seminar, Prof. Atienza will discuss new approaches to effectively design the next generation of edge AI computing architectures by taking inspiration from how biological computing systems operate. These novel bioinspired edge AI architectures rest on two key concepts. First, they accept computing inexactness and integrate multiple computing acceleration engines and low-power principles to create a new open-source eXtended and Heterogeneous Energy-Efficient hardware Platform (called x-HEEP). Second, x-HEEP can be instantiated for different edge AI application domains to operate ensembles of neural networks that improve the robustness of ML/DL outputs at the system level, while minimizing memory and computation resources for the target application. Overall, x-HEEP instantiations for edge AI applications include in-memory computing or run-time reconfigurable coarse-grained accelerators to minimize energy according to the required precision of the target application.
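A rough sketch of the ensemble idea at system level, assuming nothing about x-HEEP internals: average the softmax outputs of several small networks, so that noise injected by inexact computation in any one member is damped in the combined prediction.

# Toy ensemble inference under simulated compute noise (illustrative).
import torch
import torch.nn as nn

def tiny_net():
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

ensemble = [tiny_net() for _ in range(5)]
x = torch.randn(1, 16)

probs = []
with torch.no_grad():
    for net in ensemble:
        noisy_logits = net(x) + 0.1 * torch.randn(1, 4)   # inexact compute
        probs.append(torch.softmax(noisy_logits, dim=-1))

avg = torch.stack(probs).mean(0)        # the ensemble average is the output
print("ensemble prediction:", avg.argmax().item())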
Biography:
Prof. David Atienza is a professor of Electrical and Computer Engineering, and leads both the Embedded Systems Laboratory (ESL) and the new EcoCloud Sustainable Computing Center at EPFL, Switzerland. He received his M.Sc. and Ph.D. degrees in Computer Science and Engineering from UCM (Spain) and IMEC (Belgium). His research interests include system-level design methodologies for high-performance multi-processor system-on-chip (MPSoC) and low-power Internet-of-Things (IoT) systems, including edge AI architectures for wearables and IoT systems as well as thermal-aware designs for MPSoCs and many-core servers. He is a co-author of more than 400 papers, two books, and has 14 licensed patents in these topics. He served as DATE General Chair and Program Chair, and is currently Editor-in-Chief of IEEE TCAD. Among others, Dr. Atienza has received the ICCAD 10-Year Retrospective Most Influential Paper Award, the DAC Under-40 Innovators Award, the IEEE TC-CPS Mid-Career Award, and the ACM SIGDA Outstanding Faculty Award. He is a Fellow of IEEE, a Fellow of ACM, served as IEEE CEDA President (period 2018-2019), and he is currently the Chair of the European Design Automation Association (EDAA).
Enquiries: Jeff Liu ( jeffliu@cse.cuhk.edu.hk )
Heads-Up Computing: Towards The Next Generation Interactive
Location
Speaker:
Prof. Shengdong Zhao
Associate Professor
Department of Computer Science, National University of Singapore
Abstract:
Heads-up computing is an emerging concept in human-computer interaction (HCI) that focuses on natural and intuitive interaction with technology. By making technology more seamlessly integrated into our lives, heads-up computing has the potential to revolutionize the way we interact with devices. With the rise of large language models (LLMs) such as ChatGPT and GPT4, the vision of heads-up computing is becoming much easier to realize. The combination of LLMs and heads-up computing can create more proactive, personalized, and responsive systems that are more human-centric. However, technology is a double-edged sword. While technology provides us with great power, it also comes with the responsibility to ensure that it is used ethically and for the benefit of all. That’s why it is essential to place fundamental human values at the center of research programs and work collaboratively among disciplines. As we navigate through this historic transition, it is crucial to shape a future that reflects our values and enhances our quality of life.
Biography:
Dr. Shengdong Zhao is an Associate Professor in the Department of Computer Science at the National University of Singapore, where he established and leads the NUS-HCI research lab. He received his Ph.D. degree in Computer Science from the University of Toronto and a Master’s degree in Information Management & Systems from the University of California, Berkeley. With a wealth of experience in developing new interface tools and applications, Dr. Zhao regularly publishes his research in top-tier HCI conferences and journals. He has also worked as a senior consultant with the Huawei Consumer Business Group in 2017. In addition to his research, Dr. Zhao is an active member of the HCI community, frequently serving on program committees for top HCI conferences and as the paper chair for the ACM SIGCHI 2019 and 2020 conferences. For more information about Dr. Zhao and the NUS-HCI lab, please visit http://www.shengdongzhao.com and http://www.nus-hci.org .
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
Robust AI for Security
Location
Speaker:
Prof. Yizheng Chen
Assistant Professor
Department of Computer Science, University of Maryland
Abstract:
Artificial Intelligence is becoming more powerful than ever; e.g., GitHub Copilot suggests code to developers, and Large Language Model (LLM) plugins will soon assist many tasks in our daily lives. We can utilize the power of AI to solve security problems, but such solutions need to be robust against new attacks and new vulnerabilities.
In this talk, I will first discuss how to develop robust AI techniques for malware detection. Our research finds that after an Android malware classifier is trained on one year’s worth of data, its F1 score quickly drops from 0.99 to 0.76 over 6 months of deployment on new test samples. I will present new methods to make machine learning for Android malware detection more effective against data distribution shift. My vision is that continuous learning with a human-in-the-loop setup can achieve robust malware detection. Our results show that, to maintain a steady F1 score over time, we can achieve an 8X reduction in the labels needed from security analysts.
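One standard way to realize such a human-in-the-loop setup — though the talk's method may differ — is uncertainty sampling: each period, send only the samples the classifier is least confident about to analysts, add their labels, and retrain. A minimal sketch on synthetic drifting data:

# Label-efficient continuous learning via uncertainty sampling (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.standard_normal((500, 20))
y_train = (X_train[:, 0] > 0).astype(int)       # stand-in ground truth
clf = LogisticRegression().fit(X_train, y_train)

BUDGET = 20                                     # analyst labels per period
for month in range(6):
    X_new = rng.standard_normal((1000, 20)) + 0.1 * month   # drifting data
    p = clf.predict_proba(X_new)[:, 1]
    ask = np.argsort(np.abs(p - 0.5))[:BUDGET]  # least-confident samples
    y_analyst = (X_new[ask, 0] > 0).astype(int) # analyst labels (oracle stub)
    X_train = np.vstack([X_train, X_new[ask]])
    y_train = np.concatenate([y_train, y_analyst])
    clf = LogisticRegression().fit(X_train, y_train)   # periodic retraining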
Next, I will discuss the potential of using large language models to solve security problems, using vulnerable source code detection as a case study. We propose and release a new vulnerable source code dataset, DiverseVul. Using the new dataset, we study 11 model architectures belonging to 4 families for vulnerability detection. Our results indicate that developing code-specific pre-training tasks is a promising research direction for using LLMs for security. We also demonstrate an important generalization challenge for the deployment of deep learning-based models.
In closing, I will discuss security issues of LLMs and future research directions.
Biography:
Yizheng Chen is an Assistant Professor of Computer Science at University of Maryland. She works at the intersection of AI and security. Her research focuses on AI for Security and robustness of AI models. Previously, she received her Ph.D. in Computer Science from the Georgia Institute of Technology, and was a postdoc at University of California, Berkeley and Columbia University. Her work has received an ACM CCS Best Paper Award Runner-up and a Google ASPIRE Award. She is a recipient of the Anita Borg Memorial Scholarship.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
Geometric Robot Learning for Generalizable Skills Acquisition
Location
Speaker:
Prof. Xiaolong Wang
Associate Professor
Department of Electrical and Computer Engineering, University of California, San Diego
Abstract:
Robot learning has witnessed significant progress in generalization in the past few years. At the heart of such generalization, advances in representation learning, such as image and text foundation models, play an important role. While these achievements are encouraging, most tasks conducted so far are relatively simple. In this talk, I will present our recent efforts on learning generalizable skills, focusing on tasks with complex physical contacts and geometric reasoning. Specifically, I will discuss our research on: (i) the use of a large number of low-cost, binary force sensors to enable Sim2Real manipulation; (ii) unifying 3D and semantic representation learning to generalize policy learning across diverse objects and scenes. I will showcase the real-world applications of our research, including dexterous manipulation, language-driven manipulation, and legged locomotion control.
Biography:
Xiaolong Wang is an Assistant Professor in the ECE department at the University of California, San Diego, affiliated with the TILOS NSF AI Institute. He received his Ph.D. in Robotics at Carnegie Mellon University. His postdoctoral training was at the University of California, Berkeley. His research focuses on the intersection between computer vision and robotics. His specific interest lies in learning 3D and dynamics representations from videos and physical robotic interaction data. These comprehensive representations are utilized to facilitate the learning of robot skills, with the goal of generalizing the robot to interact effectively with a wide range of objects and environments in the real physical world. He is the recipient of the NSF CAREER Award, Intel Rising Star Faculty Award, and Research Awards from Sony, Amazon, Adobe, and Cisco.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
Disentangled Representation from Generative Networks
Location
Speaker:
Dr. LIU Sifei
Abstract:
Disentangled representation in computer vision refers to encoding visual data into distinct, independent factors. These representations are critical for enhancing interpretability, improving generalization across tasks, and enabling controlled manipulation of specific visual attributes. Learning disentangled representation is challenging, primarily because obtaining ground-truth factorizations is often elusive.
In this talk, I will discuss our latest efforts to extract disentangled representations from GANs and diffusion models, for both 2D images and 3D textured shapes. I will demonstrate how, in the absence of annotations, our approaches can discern and extract fine-grained structural information, such as correspondence maps, in a self-supervised manner. Building on this space, I will introduce our work on a generalizable network designed for controlled generation and editing in a feed-forward paradigm. Additionally, I will spotlight our recent exploration into generating hand-object interactions, leveraging the disentanglement of layout and content through image diffusion models.
Biography:
Dr. LIU Sifei is a staff-level Senior Research Scientist at NVIDIA, where she is part of the LPR team led by Jan Kautz. Her work primarily revolves around the development of generalizable visual representations and data-efficient learning for images, videos, and 3D content. Prior to this, she pursued her Ph.D. at the VLLAB, under the guidance of Ming-Hsuan Yang. Sifei has received several prestigious awards and recognitions. In 2013, she was honored with the Baidu Graduate Fellowship. This was followed by the NVIDIA Pioneering Research Award in 2017 and the Rising Star in EECS accolade in 2019. Additionally, she was nominated for the VentureBeat Women in AI Award in 2020.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
Towards Scalable, Secure and Privacy-Preserving Metaverse
Location
Speaker:
Prof. DAI Hong-Ning
Associate Professor
Department of Computer Science, Hong Kong Baptist University (HKBU)
Abstract:
The metaverse is constructed from multiple technologies, including virtual reality (VR), augmented reality (AR), mixed reality (MR), artificial intelligence (AI), digital twins (DT), blockchain, and 5G communications. The advent of the metaverse has spurred a proliferation of VR/AR apps on diverse VR/AR devices, such as Meta Quest 2, MS Hololens, Sony PlayStation VR, ByteDance Pico, and Apple Vision Pro. Meanwhile, diverse metaverse applications have emerged, such as gaming, healthcare, industry, the creator economy, and digital arts. However, the development of the metaverse is still at an early stage because of the complexity and heterogeneity of the entire system, which cannot yet scale to the increasing number of participants or the stringent demands of metaverse applications. Moreover, emerging security vulnerabilities and privacy-leakage concerns have also prevented the metaverse from wide adoption. In this talk, I will first briefly review the metaverse and relevant technologies. I will then elaborate on its challenges as well as potential solutions. Finally, I will discuss several future directions in this promising area.
Biography:
Hong-Ning Dai is an associate professor in the Department of Computer Science, Hong Kong Baptist University (HKBU). He obtained a Ph.D. degree in Computer Science and Engineering from The Chinese University of Hong Kong. Before joining HKBU, he gained more than 10 years of academic experience at The Chinese University of Hong Kong, Macau University of Science and Technology (Macau), and Lingnan University (Hong Kong). His current research interests include the Internet of Things, blockchain, and big data analytics. Prof. Dai has published more than 200 papers in refereed journals and conferences. His publications have received more than 15,000 citations. He was included in the world’s top 2% of scientists for career-long impact (2022, 2021) by Stanford University, USA, and was conferred the AI 2000 Most Influential Scholar Award (Honorable Mention) in Internet of Things, 2023. He is the holder of 1 U.S. patent. He is a senior member of IEEE and ACM. Prof. Dai has served as an associate editor for IEEE Communications Surveys & Tutorials, IEEE Transactions on Intelligent Transportation Systems, IEEE Transactions on Industrial Informatics, IEEE Transactions on Industrial Cyber-Physical Systems, Ad Hoc Networks (Elsevier), and Connection Science (Taylor & Francis).
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
The da Vinci Research Kit: System Description, Research Highlights, and Surgical Robotics Challenge
Location
Speaker:
Prof. Peter Kazanzides
Research Professor
Department of Computer Science, Johns Hopkins University
Abstract:
The da Vinci Research Kit (dVRK) is an open research platform that couples open-source control electronics and software with the mechanical components of the da Vinci surgical robot. This presentation will describe the dVRK system architecture, followed by selected research enabled by this system, including mixed reality for the first assistant, autonomous camera motion, and force estimation for bilateral teleoperation. The presentation will conclude with an overview of the AccelNet Surgical Robotics Challenge, which includes both simulated and physical environments.
Biography:
Peter Kazanzides received the Ph.D. degree in electrical engineering from Brown University in 1988. He began work on surgical robotics in March 1989 as a postdoctoral researcher at the IBM T.J. Watson Research Center and co-founded Integrated Surgical Systems (ISS) in November 1990. As Director of Robotics and Software at ISS, he was responsible for the design, implementation, validation and support of the ROBODOC System, which has been used for more than 20,000 hip and knee replacement surgeries. Dr. Kazanzides joined Johns Hopkins University in December 2002 and currently holds an appointment as a Research Professor of Computer Science. His research focuses on computer-integrated surgery, space robotics and mixed reality.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
Smart Reconfigurable Computing for GNN and Transformer using Agile High Level Synthesis
Location
Speaker:
Dr. HAO Cong, Callie
Assistant Professor
Department of Electrical and Computer Engineering (ECE), Georgia Institute of Technology (GaTech)
Abstract:
In this talk, we introduce two architectures: one for graph neural networks (GNNs), called FlowGNN, and one for vision transformers (ViTs), called Edge-MoE. FlowGNN proposes a generic dataflow architecture for GNN acceleration that supports a wide range of GNN models without graph pre-processing. GNNBuilder is then introduced as an automated, end-to-end GNN accelerator generation framework, allowing the generation of accelerators for various GNN models with minimal overhead. Next, Edge-MoE presents an FPGA accelerator for multi-task Vision Transformers (ViTs) with architectural innovations, achieving improved energy efficiency compared to GPUs and CPUs. The talk demonstrates the performance of these approaches, with code and measurements publicly available. Finally, we briefly introduce LightningSim, a fast simulation tool for High-Level Synthesis (HLS) designs that can significantly improve HLS design simulation speed.
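For context on what such accelerators actually stream: the computational core of a GNN layer is message passing — gather neighbor features along edges, reduce them per node, then update. A NumPy toy version of one layer (not FlowGNN itself) follows.

# One message-passing layer, streamed edge by edge (illustrative).
import numpy as np

num_nodes, dim = 5, 8
x = np.random.randn(num_nodes, dim)             # node features
edges = [(0, 1), (1, 2), (2, 0), (3, 4)]        # directed src -> dst pairs
W = 0.1 * np.random.randn(dim, dim)

agg = np.zeros_like(x)
for src, dst in edges:                          # no graph pre-processing
    agg[dst] += x[src]                          # message = neighbor feature

x_next = np.maximum(0.0, (x + agg) @ W)         # update: ReLU((x + agg) W)
print(x_next.shape)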
Biography:
Dr. HAO Cong, Callie is an assistant professor in ECE at Georgia Tech. She received the Ph.D. degree in Electrical Engineering from Waseda University in 2017. Her primary research interests lie in the joint area of efficient hardware design and machine learning algorithms, as well as reconfigurable and high-efficiency computing and agile electronic design automation tools.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96351056844?pwd=cDBJcVY3ZHlGMSt2V0FUQVdUVnAwZz09
Meeting ID: 963 5105 6844
Passcode: 471978
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
An Evolution of Learning Neural Implicit Representations for 3D Shapes
Location
Speaker:
Professor ZHANG Hao, Richard
Amazon Scholar, Professor
School of Computing Science, Simon Fraser University, Canada
Abstract:
Neural implicit representations are the immediate precursors to neural radiance fields (NeRF). In a short span of only four years, they have quickly become the representation of choice for learning reconstructive and generative models of 3D shapes. Unlike traditional convolutional neural networks that have been widely applied to reason about images and video, neural implicit models encode shape boundaries in a continuous manner to lead to superior visual quality; they are also amenable to simple network architectures to facilitate a variety of extensions and adaptations. In this talk, I will recount a brief history of the development of neural implicit representations, while focusing mainly on several paths of follow-ups from our recent works, including structured implicit models, direct mesh generation, CSG assemblies, and the use of contextual, query-specific feature encoding for category-agnostic and generalizable shape representation learning.
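The core object is easy to sketch: an MLP mapping a 3D coordinate to a signed distance, whose zero level set is the shape boundary — continuous by construction, which is the source of the visual-quality advantage over discretized grids. The toy example below fits such a network to an analytic sphere; it is illustrative only, not any of the cited systems.

# Fitting a neural signed-distance function to a sphere (toy sketch).
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                  nn.Linear(128, 128), nn.ReLU(),
                  nn.Linear(128, 1))
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

for step in range(500):
    pts = torch.rand(1024, 3) * 2 - 1                # sample the unit cube
    sdf_gt = pts.norm(dim=1, keepdim=True) - 0.5     # sphere of radius 0.5
    loss = nn.functional.mse_loss(f(pts), sdf_gt)
    opt.zero_grad(); loss.backward(); opt.step()

# f(x) < 0 means inside; the boundary is the continuous zero level set.
print(f(torch.tensor([[0.0, 0.0, 0.0]])).item())     # ideally close to -0.5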
Biography:
ZHANG Hao, Richard is a professor in the School of Computing Science at Simon Fraser University, Canada. Currently, he holds a Distinguished University Professorship and is an Amazon Scholar. Richard earned his Ph.D. from the University of Toronto, and MMath and BMath degrees from the University of Waterloo. His research is in computer graphics and visual computing with special interests in geometric modeling, shape analysis, 3D vision, geometric deep learning, as well as computational design and fabrication. He has published more than 180 papers on these topics, including over 60 articles in SIGGRAPH (+Asia) and ACM Transactions on Graphics (TOG), the top venue in computer graphics. Awards won by Richard include a Canadian Human-Computer Communications Society Achievement Award in Computer Graphics (2022), a Google Faculty Award (2019), a National Science Foundation of China Overseas Outstanding Young Researcher Award (2015), an NSERC Discovery Accelerator Supplement Award (2014), a Best Dataset Award from ChinaGraph (2020), as well as faculty grants/gifts from Adobe, Autodesk, Google, and Huawei. He and his students have won the CVPR 2020 Best Student Paper Award and Best Paper Awards at SGP 2008 and CAD/Graphics 2017.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
Towards predictive spatiotemporal modeling of single cells
Location
Speaker:
Dr. Xiaojie Qiu
Incoming Assistant Professor
Department of Genetics, Department of Computer Science, Stanford University
Abstract:
Single-cell RNA-seq, together with RNA velocity and metabolic labeling, reveals cellular states and transitions at unprecedented resolution. Fully exploiting these data, however, requires kinetic models capable of unveiling governing regulatory functions.
In the first part of my talk, I will introduce an analytical framework dynamo (https://github.com/aristoteleo/dynamo-release), and highlight dynamo’s power to overcome fundamental limitations of conventional splicing-based RNA velocity analyses to enable accurate velocity estimations on a metabolically labeled human hematopoiesis scRNA-seq dataset. Furthermore, differential geometry analyses reveal mechanisms driving early megakaryocyte appearance and elucidate asymmetrical regulation within the PU.1-GATA1 circuit. Leveraging the least-action-path method, dynamo accurately predicts drivers of numerous hematopoietic transitions. Finally, in silico perturbations predict cell-fate diversions induced by gene perturbations. Dynamo, thus, represents an important step in advancing quantitative and predictive theories of cell-state transitions.
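A generic, hypothetical version of the vector-field step — not dynamo's implementation — is kernel ridge regression: fit a smooth function v(x) to noisy per-cell velocity estimates so that downstream quantities (Jacobians for differential geometry, least-action paths) can be computed from a continuous field.

# Continuous vector field from per-cell velocities (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))            # cells in a 2-D embedding
V = np.stack([-X[:, 1], X[:, 0]], axis=1)    # toy rotational dynamics
V += 0.1 * rng.standard_normal(V.shape)      # noisy velocity estimates

def kernel(A, B, h=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h * h))

K = kernel(X, X)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), V)   # ridge solve

def v(x):
    # Evaluate the fitted field at any point, not just observed cells.
    return kernel(np.atleast_2d(x), X) @ alpha

print(v(np.array([1.0, 0.0])))               # roughly [0, 1] for this toy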
Cells do not live in a vacuum, but in a milieu defined by cell–cell communication that can be quantified via recent advances in spatial transcriptomics. In the second part of my talk, I will present Spateo, a general framework for quantitative spatiotemporal modeling of single-cell resolution spatial transcriptomics. Spateo develops a comprehensive framework of cell–cell interaction to reveal spatial effects of niche factors and cell type-specific ligand–receptor interactions. Furthermore, Spateo reconstructs 3D models of whole embryos and performs 3D morphometric analyses. Lastly, Spateo introduces the concept of a “morphometric vector field” of cell migrations, and integrates spatial differential geometry to unveil regulatory programs underlying various organogenesis patterns of Drosophila. Thus, Spateo enables the study of the ecology of organs at a molecular level in 3D space, beyond isolated single cells.
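Since the abstract points to the open-source repository, a rough sketch of a typical dynamo analysis might look as follows; the call names below are my recollection of the package's tutorials and should be treated as assumptions to verify against the documentation at https://github.com/aristoteleo/dynamo-release.

# Hypothetical minimal dynamo workflow; function names are approximate.
import dynamo as dyn
import anndata as ad

# Assumed input: an AnnData object from a metabolic-labeling experiment.
adata = ad.read_h5ad("labeling_experiment.h5ad")
dyn.pp.recipe_monocle(adata)     # preprocessing: normalization, gene selection
dyn.tl.dynamics(adata)           # estimate kinetics / absolute RNA velocity
dyn.tl.reduceDimension(adata)    # low-dimensional embedding (e.g., UMAP)
dyn.tl.cell_velocities(adata)    # project velocities onto the embedding
dyn.vf.VectorField(adata)        # reconstruct a continuous vector field for
                                 # downstream differential-geometry analyses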
Biography:
Dr. Xiaojie Qiu is an incoming assistant professor at the Department of Genetics, the BASE program, and the Department of Computer Science at Stanford. Xiaojie’s Ph.D. work at the University of Washington with Dr. Cole Trapnell made substantial contributions to the field of single-cell genomics, exemplified by the development of Monocle 2 and Monocle 3, which can accurately and robustly reconstruct complex developmental trajectories from scRNA-seq data. In his postdoc at the Whitehead Institute with Dr. Jonathan Weissman, Xiaojie developed Dynamo (aristoteleo/dynamo-release), which infers absolute RNA velocity from metabolic-labeling-enabled single-cell RNA-seq, reconstructs continuous vector fields that predict the fates of individual cells, employs differential geometry to extract underlying gene regulatory interactions, and ultimately predicts optimal reprogramming paths and makes nontrivial in silico perturbation predictions. Recently he also developed Spateo (aristoteleo/spateo-release), a powerful toolkit for advanced multi-dimensional spatiotemporal modeling of single-cell resolution spatial transcriptomics, which delivers novel methods for digitizing spatial layers/columns to identify spatially-polar genes, alongside the cell–cell interaction, 3D whole-embryo reconstruction, and morphometric vector field capabilities described in the abstract above.
The Qiu lab at Stanford will officially start on Dec. 16, 2024. Xiaojie will continue leveraging his unique background in single-cell genomics, mathematical modeling, and machine learning to lead a research team that bridges the gap between the “big data” from single-cell and spatial genomics and quantitative/predictive modeling, in order to address fundamental questions in mammalian cell fate transitions, especially heart development and disease. The lab will pursue four main directions: 1) dissecting the mechanisms of mammalian cell differentiation, reprogramming, and maintenance, including those of cardiac cells, through differentiable deep learning frameworks; 2) integrating multi-omics and harmonizing short-term RNA velocities with long-term lineage tracing, and applying such methods to heart development and congenital heart disease; 3) building predictive in silico 3D spatiotemporal models of mammalian organogenesis, with a focus on heart morphogenesis; and 4) establishing a foundational software ecosystem for predictive and mechanistic modeling of single-cell and spatial transcriptomics.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
The characteristics and relationships between deep generative modelling approaches
Location
Speaker:
Professor Chris G. Willcocks
Associate Professor
Department of Computer Science, Durham University
Abstract:
There are several key equations in the generative modelling literature, most of which estimate the probability of data. The related modelling approaches (Flows, EBMs, VAEs, GANs, OT, Autoregressive, …) each have trade-offs in terms of (i) modelling quality, (ii) inference time/depth, and (iii) distribution coverage/mode collapse. Building on findings in our TPAMI 2022 review, “Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models”, this talk covers high-level theoretical insights into the different generative modelling paradigms, discussing where there are gaps in the current theory and looking at promising directions, such as optimal transport theory and implicit networks, to address upcoming challenges.
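As one concrete instance of such a key equation (standard material rather than anything specific to the review), normalizing flows obtain the exact data likelihood from the change-of-variables formula for an invertible map f from data x to a latent z with prior p_Z:

\[
\log p_X(x) = \log p_Z\bigl(f(x)\bigr) + \log\left|\det\frac{\partial f(x)}{\partial x}\right|,
\]

whereas VAEs only lower-bound this quantity with the ELBO and GANs avoid likelihoods altogether, which is one root of the trade-offs listed above.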
Biography:
Chris G. Willcocks is an associate professor in computer science at Durham University where he leads the deep learning and reinforcement learning modules. His research is in theoretical aspects of deep learning, with a particular emphasis on non-adversarial methodologies such as probabilistic diffusion models and stochastic processes. Research within his group has led to several impactful results in generative modelling including an extension of diffusion models to infinite dimensions without requiring latent vector compression, and an approach that shows you don’t need encoders in traditional autoencoders. He is a Fellow of the Higher Education Academy (FHEA), an area chair for BMVC, and has authored over 30 peer-reviewed publications in venues such as ICLR, CVPR, ECCV and TPAMI.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Fair and Private Backpropagation: A Scalable Framework for Fair and Private Learning
Location
Speaker:
Meisam Razaviyayn
Associate Professor
University of Southern California
Abstract:
Machine learning models are increasingly used in high-stakes decision-making systems. In such applications, a major concern is that these models sometimes discriminate against certain demographic groups, such as individuals of a certain race, gender, or age. Another major concern in these applications is the violation of the privacy of users. While fair learning algorithms have been developed to mitigate discrimination issues, these algorithms can still leak sensitive information, such as individuals’ health or financial records. Utilizing the notion of differential privacy (DP), prior works aimed to develop learning algorithms that are both private and fair. However, existing algorithms for DP fair learning require a full batch of data in each iteration of the algorithm to be able to impose fairness. Moreover, the fairness/accuracy of the model can degrade significantly in prior DP training algorithms. In this work, we developed a mini-batch (stochastic) differentially private algorithm for fair learning with a theoretical convergence guarantee. Here, the term “stochastic” refers to the fact that our proposed algorithm converges even when mini-batches of data are used at each iteration (i.e., stochastic optimization). Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds. In addition, our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes. Our numerical experiments show that the proposed algorithm consistently offers significant performance gains over the state-of-the-art baselines, and can be applied to larger-scale problems with non-binary target/sensitive attributes.
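To make the setting concrete, a generic sketch of a mini-batch private-and-fair update might look as follows (an illustration of the ingredients, not the authors' algorithm; the privacy accounting and the privatization of the fairness statistic are omitted).

# Sketch: mini-batch DP-SGD-style training with a demographic-parity penalty.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
s = rng.integers(0, 2, size=n)                   # binary sensitive attribute
y = ((X[:, 0] + 0.5 * s + rng.normal(size=n)) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
clip, sigma, lr, lam = 1.0, 1.0, 0.1, 1.0        # clip norm, noise, step, fairness weight
for step in range(200):
    idx = rng.choice(n, size=64, replace=False)  # mini-batch: the "stochastic" part
    Xb, yb, sb = X[idx], y[idx], s[idx]
    p = sigmoid(Xb @ w)
    grads = (p - yb)[:, None] * Xb               # per-example logistic-loss gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip)            # clip each example
    g_priv = grads.mean(axis=0) + rng.normal(scale=sigma * clip / len(idx), size=d)
    if (sb == 1).any() and (sb == 0).any():
        gap = p[sb == 1].mean() - p[sb == 0].mean()          # demographic-parity gap
        pg = (p * (1 - p))[:, None] * Xb
        g_fair = pg[sb == 1].mean(axis=0) - pg[sb == 0].mean(axis=0)
        g_priv = g_priv + lam * 2 * gap * g_fair             # gradient of lam * gap^2
    w = w - lr * g_priv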
Biography:
Meisam Razaviyayn is an associate professor of Industrial and Systems Engineering, Computer Science, Quantitative and Computational Biology, and Electrical Engineering at the University of Southern California. He is also the associate director of the USC-Meta Center for Research and Education in AI and Learning. Prior to joining USC, he was a postdoctoral research fellow in the Department of Electrical Engineering at Stanford University. He received his PhD in Electrical Engineering with a minor in Computer Science from the University of Minnesota, where he also obtained his M.Sc. degree in Mathematics. Meisam Razaviyayn is the recipient of the 2022 NSF CAREER Award, the 2022 Northrop Grumman Excellence in Teaching Award, the 2021 AFOSR Young Investigator Award, the 2021 3M Non-Tenured Faculty Award, the 2020 ICCM Best Paper Award in Mathematics, the IEEE Data Science Workshop Best Paper Award in 2019, and the Signal Processing Society Young Author Best Paper Award in 2014, and he was a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization in 2013 and 2016. He is also a silver medalist of Iran’s National Mathematics Olympiad. His research interests include the design and the study of the fundamental aspects of optimization algorithms that arise in the modern data science era.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
On the Model-misspecification of Reinforcement Learning
Location
Speaker:
Dr. YANG Lin
Assistant Professor
Electrical and Computer Engineering Department, University of California, Los Angeles
Abstract:
The success of reinforcement learning (RL) heavily depends on the approximation of functions such as the policy, value, or model. Misspecification, a mismatch between the ground truth and the best function approximator, often occurs, particularly when the ground truth is complex. Because the misspecification error does not disappear even with an infinite number of samples, it is crucial to design algorithms that remain robust under misspecification. In this talk, we will first present a lower bound illustrating that RL can be inefficient (e.g., possessing exponentially large complexity) if the features can only represent the optimal value functions approximately, even with high precision. Subsequently, we will show that this issue can be mitigated by approximating the transition probabilities. In such a setting, we will demonstrate that both policy-based and value-based approaches can be resilient to model misspecification. Specifically, we will show that these methods can maintain accuracy even under large, locally bounded misspecification errors: the function class might have an Ω(1) approximation error at specific states and actions, yet the error remains small on average under a policy-induced state distribution. Such robustness to model misspecification partially explains why practical algorithms perform so well, paving the way for new directions in understanding model misspecification.
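A rough formalization of this "locally bounded" condition (my paraphrase, not the speaker's exact statement), with features φ and parameters θ: the pointwise error may be large while the policy-averaged error stays small,

\[
\max_{s,a}\,\bigl|Q^{\pi}(s,a)-\phi(s,a)^{\top}\theta\bigr| = \Omega(1)
\quad\text{while}\quad
\mathbb{E}_{s\sim d^{\pi}}\Bigl[\max_{a}\bigl|Q^{\pi}(s,a)-\phi(s,a)^{\top}\theta\bigr|\Bigr]\le \varepsilon,
\]

where d^π denotes the state distribution induced by policy π.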
Biography:
Dr. Lin Yang is an Assistant Professor in the Electrical and Computer Engineering Department at the University of California, Los Angeles. His current research focuses on the theory and applications of reinforcement learning. Previously, he served as a postdoctoral researcher at Princeton University. He earned two Ph.D. degrees, in Computer Science and in Physics & Astronomy, from Johns Hopkins University, and before that a Bachelor’s degree in Math & Physics from Tsinghua University. Dr. Yang has numerous publications in premier machine learning venues such as ICML and NeurIPS, and has served as an area chair for these conferences. He has received an Amazon Faculty Award, a Simons-Berkeley Research Fellowship, the JHU MINDS Best Dissertation Award, and the JHU Dean Robert H. Roy Fellowship.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Towards Application-oriented Big Data and ML Systems
Location
Speaker:
Professor ZHANG Hong
Assistant Professor
Cheriton School of Computer Science, University of Waterloo
Abstract:
The world is undergoing a data revolution. Emerging big data and ML applications are harnessing massive volumes of data to uncover hidden patterns, correlations, and other valuable insights, transforming information and knowledge production. As the data volume keeps growing explosively, these applications require high-performance big data and ML systems to efficiently transfer, store, and process data at a massive scale.
In this talk, I advocate an application-oriented principle for designing big data and ML systems: fully exploiting application-specific structures — communication patterns, execution dependencies, ML model structures, etc. — to suit application-specific performance demands. I will present how I have developed this principle throughout my PhD, postdoc, and faculty research, and how I have applied it to build systems tailored for different big data and ML applications.
Biography:
ZHANG Hong is currently an assistant professor at the Cheriton School of Computer Science at the University of Waterloo. Previously, he was a postdoctoral scholar at UC Berkeley and obtained his Ph.D. degree in Computer Science and Engineering from HKUST. Hong is broadly interested in computer systems and networking, with a special focus on distributed data analytics and ML systems, data center networking, and serverless computing. His research work has appeared in prestigious systems and networking conferences, such as SIGCOMM, NSDI, and EuroSys. He has been awarded the Google Ph.D. Fellowship in systems and networking.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Contemporary Visual Computing: Storytelling & Scene Graph Generation
Location
Speaker:
Professor Chang Wen Chen
Chair Professor of Visual Computing
The Hong Kong Polytechnic University
Abstract:
Visual computing, traditionally, is a generic term for all computer science disciplines for algorithmic development dealing with images, videos, and other types of visual data. This talk shall focus on contemporary visual computing design from several systematic perspectives. Contemporary visual computing has been substantially advanced to enhance both human understanding and machine intelligence. The ultimate goal for human understanding will be for visual computing algorithms to generate human-like storytelling with a rational contextual setting and the capability to apply general knowledge. For machine intelligence, a more appropriate form of representing semantics from visual data will be to utilize a well-structured scene graph generation approach to characterize the logical relationship among the subjects and objects detected from the visual data. We shall report our recent research activities in developing advanced visual computing algorithms for both human understanding and machine intelligence. These exemplary applications demonstrate several unique visual computing capabilities in understanding the real world with more accurate contextual and environmental interpretations. These examples also illustrate the technical challenges we are facing and the potential impacts that contemporary visual computing systems are making, including the paradigm-shifting visual semantic communication design for the future 6G mobile networks.
Biography:
Chang Wen Chen is currently Chair Professor of Visual Computing at The Hong Kong Polytechnic University. Before his current position, he served as Dean of the School of Science and Engineering at The Chinese University of Hong Kong, Shenzhen from 2017 to 2020, and concurrently as Deputy Director at Peng Cheng Laboratory from 2018 to 2021. Previously, he was an Empire Innovation Professor at the State University of New York at Buffalo (SUNY) from 2008 to 2021 and the Allan Henry Endowed Chair Professor at the Florida Institute of Technology from 2003 to 2007.
He has served as an Editor-in-Chief for IEEE Trans. Multimedia (2014-2016) and IEEE Trans. Circuits and Systems for Video Technology (2006-2009). He has received many professional achievement awards, including ten (10) Best Paper Awards in premier publication venues, the prestigious Alexander von Humboldt Award in 2010, the SUNY Chancellor’s Award for Excellence in Scholarship and Creative Activities in 2016, and UIUC ECE Distinguished Alumni Award in 2019. He is an IEEE Fellow, a SPIE Fellow, and a Member of the Academia Europaea.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Solving Extreme-Scale Problems on Sunway Supercomputers
Location
Speaker:
Prof. Haohuan Fu
Professor
Department of Earth System Science, Tsinghua University
Abstract:
Supercomputers, by definition the fastest computers in the world, have been important tools for making scientific discoveries and technology breakthroughs. In this talk, we will introduce a series of Sunway Supercomputers, which exemplify the integration of tens of millions of cores into a high-resolution numerical simulator or a large-scale machine learning engine, bringing opportunities for widening our knowledge boundaries in various domains. Application examples include ultra-high-resolution climate modeling and earthquake simulation, close-to-real-time quantum circuit simulation, unsupervised learning to achieve nation-scale land cover mapping, and training large deep learning models of brain-scale complexity. Through these examples, we discuss the key issues and the potential of combining supercomputing and AI technologies to solve the major challenges that we face.
Biography:
Haohuan Fu is a professor in the Department of Earth System Science, Tsinghua University, and the deputy director of the National Supercomputing Center in Wuxi. Fu received his BE (2003) in CS from Tsinghua University, his MPhil (2005) in CS from the City University of Hong Kong, and his PhD (2009) in computing from Imperial College London. His research work focuses on supercomputing architecture and software, and has led to three ACM Gordon Bell Prizes (nonhydrostatic atmospheric dynamic solver in 2016, nonlinear earthquake simulation in 2017, and random quantum circuit simulation in 2021).
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Probabilistic Sports Analytics
Location
Speaker:
Prof. Jin-Song Dong
Professor
School of Computing, National University of Singapore
Abstract:
Sports analytics encompasses the utilization of data science, artificial intelligence (AI), psychology, and advanced Internet of Things (IoT) devices to enhance sports performance, strategy, and decision-making. This process involves the collection, processing, and interpretation of cloud-based data from a variety of sources, such as video recordings, performance metrics, and scouting reports. The resulting insights aid in evaluating player and team performance, preventing injuries, and supporting coaches and team managers in making well-informed decisions to optimize resources and achieve superior outcomes.
One widely recognized formal method, Probabilistic Model Checking (PMC), has conventionally been employed in reliability analysis for intricate safety-critical systems. For instance, the reliability of an aircraft can be determined by evaluating the reliability of its individual components, including the engine, wings, and sensors. Our groundbreaking approach applies PMC to a novel domain: Sports Strategy Analytics. As an example, the reliability (winning percentage) of a sports player can be ascertained from the reliability (success rate) of their specific sub-skill sets (e.g., serve, forehand, backhand, etc., in tennis).
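As a hypothetical illustration of how a sub-skill success rate propagates to a winning percentage (a toy Markov recursion, not the speaker's PAT models), consider deriving the probability that a server wins a tennis game from the probability p of winning a single point:

def game_win_prob(p, a=0, b=0):
    """P(server wins the game) from the score (a, b) in points."""
    if a >= 4 and a - b >= 2:
        return 1.0
    if b >= 4 and b - a >= 2:
        return 0.0
    if a == 3 and b == 3:
        # from deuce: win two points in a row, otherwise return to deuce
        return p * p / (p * p + (1 - p) * (1 - p))
    return p * game_win_prob(p, a + 1, b) + (1 - p) * game_win_prob(p, a, b + 1)

print(round(game_win_prob(0.55), 3))  # a 55% point rate yields roughly 62% of games

The amplification from 55% of points to roughly 62% of games is exactly the kind of component-to-system reliability derivation that PMC automates.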
In this presentation, we will discuss our recent research work, which involves the application of PMC, machine learning, and computer vision to the realm of sports strategy analytics. At the end of the presentation, we will also discuss the vision of a new international sports analytics conference series (https://formal-analysis.com/isace/2023/).
Biography:
Jin-Song Dong is a professor at the National University of Singapore. His research spans a range of fields, including formal methods, safety and security systems, probabilistic reasoning, sports analytics, and trusted machine learning. He co-founded the commercialized PAT verification system, which has garnered thousands of registered users from over 150 countries and received the 20-Year ICFEM Most Influential System Award. Jin Song also co-founded the commercialized trusted machine learning system Silas (www.depintel.com). He has received numerous best paper awards, including the ACM SIGSOFT Distinguished Paper Award at ICSE 2020.
He served on the editorial board of ACM Transactions on Software Engineering and Methodology, Formal Aspects of Computing, and Innovations in Systems and Software Engineering, A NASA Journal. He has successfully supervised 28 PhD students, many of whom have become tenured faculty members at leading universities worldwide. He is also a Fellow of the Institute of Engineers Australia. In his leisure time, Jin Song developed Markov Decision Process (MDP) models for tennis strategy analysis using PAT, assisting professional players with pre-match analysis (outperforming the world’s best). He is a Junior Grand Slam coach and takes pleasure in coaching tennis to his three children, all of whom have reached the #1 national junior ranking in Singapore/Australia. Two of his children have earned NCAA Division 1 full scholarships, while his second son, Chen Dong, played #1 singles for Australia in the Junior Davis Cup and participated in both the Australian Open and US Open Junior Grand Slams.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
On the Efficiency and Robustness of Foundation Models
Location
Speaker:
Dr. CHENG Yu
Microsoft Research Redmond, USA
Abstract:
In recent years, we have been witnessing a paradigm shift in which foundation models, such as GPT-4, ChatGPT, and Codex, are consolidating into fewer, but extremely large, models that cover multiple modalities and tasks and significantly surpass the performance of standalone models. However, these extremely large models are still very expensive to adapt to new scenarios/tasks and to deploy for runtime inference in real-world applications, and they are vulnerable to crafted adversarial examples. In this talk, I will present the techniques we developed to enable foundation models to smoothly scale to small computational footprints and new tasks, and to robustly handle diverse/adversarial textual inputs. The talk also introduces how to productionize these techniques in several applications, such as GitHub Copilot and the new Bing.
Biography:
Dr. CHENG Yu is a Principal Researcher at Microsoft Research and an Adjunct Professor at Rice University/Renmin University of China. Before joining Microsoft, he was a Research Staff Member at IBM Research & MIT-IBM Watson AI Lab. He received a Ph.D. from Northwestern University in 2015 and a bachelor’s degree from Tsinghua University in 2010. His research covers deep learning in general, with specific interests in model compression and efficiency, deep generative models, and adversarial robustness. Yu has led several teams and productized these techniques for Microsoft-OpenAI core products (e.g., Copilot, DALL-E 2, ChatGPT, GPT-4). He serves or has served as an area chair for CVPR, NeurIPS, AAAI, IJCAI, ACMMM, WACV, and ECCV.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Graph Reachability Algorithms for Program Analysis
Location
Speaker:
Prof. Qirun Zhang
Assistant Professor
School of Computer Science, Georgia Institute of Technology
Abstract:
Program analysis automatically reasons about program runtime behavior and provides mechanisms to determine whether a program’s execution will satisfy certain properties. Program analysis offers a rich spectrum of methods for improving software reliability. A variety of program analysis problems can be formulated as graph reachability problems in edge-labeled graphs. Over the years, we have witnessed the tremendous success of various graph-reachability-based program-analysis techniques. In this talk, I will discuss our work over the past three years on CFL-reachability, Dyck-reachability, and InterDyck-reachability.
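To give a flavor of this formulation, here is a naive worklist sketch of Dyck-reachability for a single bracket kind (an illustration only; the algorithms discussed in the talk are far more general and efficient). Grammar: M -> eps | "(" M ")" | M M, i.e., pairs of nodes connected by a balanced-parenthesis path.

from collections import defaultdict
from itertools import product

def dyck_reachable(nodes, edges):
    opens, closes, M = defaultdict(set), defaultdict(set), set()
    for u, lbl, v in edges:
        if lbl == "(":
            opens[u].add(v)
        elif lbl == ")":
            closes[u].add(v)
        else:                       # plain (epsilon-labeled) edge
            M.add((u, v))
    M |= {(n, n) for n in nodes}    # M -> eps
    while True:
        new = set()
        for x, y in M:              # M -> "(" M ")"
            for u in nodes:
                if x in opens[u]:
                    new |= {(u, v) for v in closes[y]}
        for (a, b), (c, d) in product(M, M):  # M -> M M
            if b == c:
                new.add((a, d))
        if new <= M:
            return M
        M |= new

nodes = ["a", "b", "c", "d"]
edges = [("a", "(", "b"), ("b", "e", "c"), ("c", ")", "d")]
print(("a", "d") in dyck_reachable(nodes, edges))  # True: the brackets match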
Biography:
Qirun Zhang is an Assistant Professor in Computer Science at Georgia Tech. His general research interests are in programming languages and software engineering, focusing on developing new static program analysis frameworks to improve software reliability. He has received a PLDI 2020 Distinguished Paper Award, an OOPSLA 2022 Distinguished Artifact award, an NSF CAREER Award, and an Amazon Research Award in Automated Reasoning. He served on the program committees of FSE, ICSE, ISSTA, OOPSLA, PLDI, and POPL.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Recent Advance on Neural Radiance Fields
Location
Speaker:
Prof. CAI Jianfei
Professor
Faculty of IT, Monash University
Abstract:
Neural Radiance Fields (NeRF) have become a new paradigm for 3D representation, providing implicit shape information and view-dependent appearance simultaneously. Based on this new representation, seminal 3D generation approaches have been proposed that aim to generate photorealistic images from a given distribution in a 3D-aware and view-consistent manner, although their performance in 3D geometry reconstruction is limited. On the other hand, several works demonstrate that rendering neural implicit surfaces, where gradients are concentrated around surface regions, can produce high-quality 3D reconstructions. However, these works focus only on holistic scene representation and ignore the individual objects inside the scene, limiting potential downstream applications. In this talk, we will first present our recent ECCV’22 work, ObjectSDF, an object-compositional neural implicit surface framework that can jointly reconstruct a scene and the objects inside it from semantic masks alone. We will also introduce another of our ECCV’22 works, which reconstructs a 3D scene modelled by NeRF conditioned on a single-view semantic mask as input. Finally, we will discuss some future directions on this topic.
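For background, NeRF represents a scene as a field of density σ and view-dependent color c, rendered along a camera ray r(t) = o + td by the standard volume-rendering integral:

\[
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,dt,
\qquad
T(t) = \exp\Bigl(-\int_{t_n}^{t}\sigma(\mathbf{r}(s))\,ds\Bigr).
\]

Surface-based variants such as ObjectSDF derive the density from a signed distance function, which concentrates gradients near surfaces and yields the higher-quality geometry discussed above.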
Biography:
CAI Jianfei is a Professor at the Faculty of IT, Monash University, where he currently serves as the Head of the Data Science & AI Department. He is also a visiting professor at Nanyang Technological University (NTU). Before that, he was Head of the Visual and Interactive Computing Division and Head of the Computer Communications Division at NTU. His major research interests include computer vision, deep learning, and multimedia. He has successfully trained 30+ PhD students, three of whom received the NTU SCSE Outstanding PhD Thesis Award. Many of his PhD students joined leading IT companies such as Facebook, Apple, Amazon, and Adobe, or became faculty members at reputable universities. He is a co-recipient of paper awards at ACCV, ICCM, IEEE ICIP, and MMSP. He serves or has served as an Associate Editor for IJCV, IEEE T-IP, T-MM, and T-CSVT, as well as an Area Chair for CVPR, ICCV, ECCV, IJCAI, ACM Multimedia, ICME, and ICIP. He was the Chair of the IEEE CAS VSPC-TC during 2016-2018. He also served as the leading TPC Chair for IEEE ICME 2012 and as the Best Paper Award committee chair and co-chair for IEEE T-MM in 2020 and 2019. He will be the leading general chair for ACM Multimedia 2024. He is a Fellow of IEEE.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Adaptive and Effective Fuzzing: a Data-driven Approach
Location
Speaker:
Mr. SHE Dongdong
PhD candidate
Department of Computer Science, Columbia University
Abstract:
Security vulnerabilities significantly impact our daily lives, from ransomware attacks costing billions of dollars every year to confidential data leakage in government, military and industry. Fuzzing is a popular automated technique to catch these vulnerabilities in real-world programs. Despite the wide application in industry, existing fuzzers heavily rely on rule-based designs (i.e., incorporating a set of static rules and heuristics). These fixed rules and heuristics often fail on diverse programs and severely limit fuzzing performance.
In this talk, I will present a novel approach to general fuzzing: the data-driven approach. Fuzzing is an iterative process; a data-driven approach extracts useful knowledge from the massive number of iterations in fuzzing and uses the learned knowledge to perform future fuzzing smartly. Meanwhile, in a data-driven approach, we can formulate fuzzing as a data-centric problem, thus bridging fuzzing and various domains (e.g., machine learning, optimization, and social networks) and enabling adaptive and effective designs in the general fuzzing framework.
Biography:
SHE Dongdong is a PhD candidate in Computer Science at Columbia University. His research focuses on security and machine learning, particularly applying machine learning and other data-driven approaches to security problems. His work has been published at top-tier security and software engineering conferences (S&P, CCS, USENIX Security, and FSE). He is the recipient of an ACM CCS Best Paper runner-up award and was a finalist in the NYU CSAW applied research competition. Before attending Columbia, he obtained a Master’s in Computer Science from UC Riverside and a Bachelor’s in Electronic and Information Engineering from HUST.
Join Zoom Meeting:
https://cuhk.zoom.us/j/92596540594?pwd=bEJKc0RlN3hXQVFNTWpmcWRmVnRFdz09
Meeting ID: 925 9654 0594
Passcode: 202300
Enquiries: Mr Jeff Liu at Tel. 3943 0624
Temporal-Spatial Re-configurable Approximate Computing Technologies
Location
Speaker:
Prof. Renyuan Zhang
Associate Professor
Nara Institute of Science and Technology
Abstract:
This talk introduces multi-grained re-configurable computing platforms that are elastic in both the space and time domains. As a preliminary, several approximate computing technologies from Prof. Zhang’s group are introduced for efficiently accelerating AI tasks. For the next generation of AI platforms, we expect to explore disruptive computer architectures for ultra-high-speed, low-cost, and flexible tensor computation that no longer rely on the benefits of Moore’s Law. For this purpose, temporal-spatial re-configurable accelerators are demanded: (1) an innovative mechanism for data processing is explored via snapshot (or, optionally, accumulative) observation of spiking (addressing time elasticity); (2) a multi-grained re-configurable architecture is developed on the basis of our novel neural network topology, “DiaNet” (addressing space elasticity).
Biography:
Prof. Renyuan Zhang (Senior Member, IEEE) received his M.E. degree from Waseda University in 2010 and his Ph.D. degree from The University of Tokyo in 2013. He was an Assistant Professor with the Japan Advanced Institute of Science and Technology from 2013 to 2017, and has been with the Nara Institute of Science and Technology as an Assistant Professor since 2017 and as an Associate Professor since 2021. His research interests include analog–digital mixed circuits and approximate computing. He is a member of IEICE.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Overcoming Data Heterogeneity Challenges in Federated Learning
Location
Speaker:
Dr. Xiaoxiao Li
Assistant Professor
Department of Electrical and Computer Engineering
The University of British Columbia (UBC)
Abstract:
Federated learning (FL) is a trending framework for enabling multi-institutional collaboration in machine learning without sharing raw data. This presentation will discuss our ongoing progress in designing FL algorithms that embrace data heterogeneity for distributed data analysis in the FL setting. First, I will present our work on theoretically understanding FL training convergence and generalization using a neural tangent kernel, called FL-NTK. Then, I will present our algorithms for tackling data heterogeneity (in features and labels) and device heterogeneity, motivated by our previous theoretical foundation. Lastly, I will show promising results from applying our FL algorithms in real-world applications.
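To fix ideas, here is a generic FedAvg-style skeleton for the FL setting described above (a baseline sketch, not FL-NTK or the speaker's heterogeneity-aware algorithms): clients with deliberately non-IID data train locally, and a server averages their parameters each round.

import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    # a few epochs of local logistic-regression training on one client's data
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Heterogeneous clients: each holds data from a shifted distribution.
clients = []
for k in range(4):
    X = rng.normal(loc=k * 0.5, size=(200, 3))
    y = (X[:, 0] > k * 0.5).astype(float)
    clients.append((X, y))

w_global = np.zeros(3)
for rnd in range(20):                             # communication rounds
    local = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local, axis=0)             # server-side weight averaging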
Biography:
Dr. Xiaoxiao Li is an Assistant Professor at the Department of Electrical and Computer Engineering at The University of British Columbia (UBC) starting August 2021. In addition, Dr. Li is an adjunct Assistant Professor at Yale University. Before joining UBC, Dr. Li was a Postdoc Research Fellow at Princeton University. Dr. Li obtained her Ph.D. degree from Yale University in 2020. Dr. Li’s research focuses on developing theoretical and practical solutions for enhancing the trustworthiness of AI systems in healthcare. Specifically, her recent research has been dedicated to advancing federated learning techniques and their applications in the medical field. Dr. Li’s work has been recognized with numerous publications in top-tier machine learning conferences and journals, including NeurIPS, ICML, ICLR, MICCAI, IPMI, ECCV, TMI, TNNLS, Medical Image Analysis, and Nature Methods.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Demystifying Fuzzing Strategies
Location
Speaker:
Professor Yuqun Zhang
Assistant Professor
Department of Computer Science and Engineering
Southern University of Science and Technology
Abstract:
Fuzzing (or fuzz testing) refers to feeding invalid, unexpected, or random data to programs to expose unexpected program behaviors (such as crashes, failing assertions, or memory leaks), which can be further inspected or analyzed to detect potential vulnerabilities/bugs. While there has recently been a growing trend to propose new fuzzing techniques, limited attention has been paid to studying their common/representative strategies, e.g., exploring why and how exactly their strategies work. In this talk, I will discuss a rather common fuzzing strategy, namely Havoc, which randomly mutates seeds via a mutator stacking mechanism and is widely adopted in coverage-guided fuzzers. I will show that it is essentially Havoc that dominates fuzzing effectiveness, including increasing coverage and exposing program bugs, rather than the strategies proposed by the coverage-guided fuzzers. Moreover, the effectiveness of Havoc can be enhanced in rather simple ways.
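For concreteness, a toy havoc-style mutator stack might look like the following sketch (my illustration; real coverage-guided fuzzers use many more mutators with tuned probabilities): pick a random power-of-two stack size, then apply randomly chosen byte-level mutators in sequence.

import random

def havoc(seed: bytes, rng: random.Random) -> bytes:
    data = bytearray(seed)
    for _ in range(1 << rng.randint(1, 7)):   # stack 2..128 random mutations
        pos = rng.randrange(len(data))
        choice = rng.randrange(3)
        if choice == 0:                        # flip a random bit
            data[pos] ^= 1 << rng.randrange(8)
        elif choice == 1:                      # overwrite with a random byte
            data[pos] = rng.randrange(256)
        else:                                  # splice in a copy of a random chunk
            a = rng.randrange(len(data))
            b = rng.randrange(a, len(data) + 1)
            data[pos:pos] = data[a:b]
    return bytes(data)

print(havoc(b"hello world", random.Random(0))[:20])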
Biography:
Yuqun Zhang is an Assistant Professor in the Department of Computer Science and Engineering at Southern University of Science and Technology, Shenzhen, China. His research focuses on exploring new general-purpose and domain-specific quality assurance methods for software. His research output on fuzzing and taint analysis has been deployed at Tencent and Alibaba, where it has successfully detected hundreds of bugs/vulnerabilities. He received his PhD from UT Austin. He has been awarded one ACM SIGSOFT Distinguished Paper Award, as well as one nomination.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Huawei Seminar (in Mandarin)
Location
6 lab managers from Huawei Cloud will hold a presentation and communication session in Room 121, HSH Engineering Building at The Chinese University of Hong Kong on March 30th, from 10 to 11 am. They will introduce the following six innovative labs from Huawei Cloud:
- Algorithm Innovation Lab: Application of mathematical modeling and optimization algorithms in Huawei Cloud, presented by Dr. Wenli Zhou.
- Cloud Storage Innovation Lab: Introduction to building a high-performance, highly reliable, secure, and intelligent cloud-native storage platform (research areas include key technologies and core algorithms in block storage, object storage, file storage, memory storage, etc., including distributed data consistency, space management, metadata indexing, intelligent caching, etc.), presented by Dr. Xusheng Chen.
- Data Intelligence Innovation Lab: Provide right data to the right person at the right time, presented by Dr. Ke Xu.
- Availability Engineering Lab: Introduction to the related technologies of public cloud and large-scale distributed application architecture reliability and availability engineering, technology, and innovation capabilities center, presented by Ling Wei.
- Computing and Networking Innovation Lab: In the computing domain, focus on tapping the efficiency of large-scale computing resource reuse in Huawei Cloud + continuous research on next-generation autonomous cloud network systems in the networking domain, presented by Dr. Zengyin Yang.
- Cloud Database Innovation Lab: Innovating Cloud-Native Databases for Next-Gen Applications, presented by Dr. Hao Zhang.
Lab Introduction:
The Computing and Networking Innovation Lab focuses on the research and development of new computing and networking in Huawei Cloud. Positioned as a technical pre-research team for Huawei Cloud, it mainly studies two major areas of cloud computing:
- In the computing domain, focus on tapping the efficiency of large-scale computing resource reuse in Huawei Cloud, including cloud service application load profiling, VM/container scheduling algorithms and systems, real-time QoS detection and control systems, and new research directions in virtualization.
- In the networking domain, based on the requirements and data of cloud computing itself, continuously research the next-generation autonomous cloud network system, including the next-generation gateway platform, P4/NP programmable device platform, network brain combined with AI, large-scale high-performance SDN platform, real-time network measurement and verification, and other new cloud computing network directions.
The Cloud Storage Innovation Lab is Huawei Cloud’s storage innovation research center. The research areas involve key technologies and core algorithms in block storage, object storage, file storage, memory storage, etc., including distributed data consistency, space management, metadata indexing, intelligent caching, etc. It is committed to building a high-performance, highly reliable, secure, and intelligent cloud-native storage platform, providing the best experience and cost-effective storage services for enterprises moving to the cloud.
Enquiries: Professor Michael LYU (lyu@cse.cuhk.edu.hk) / Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Deep Learning for Physical Design Automation of VLSI Circuits: Modeling, Optimization, and Datasets
Location
Speaker:
Professor Yibo Lin
Assistant Professor
School of Integrated Circuits
Peking University
Abstract:
Physical design is a critical step in the design flow of modern VLSI circuits. With the continuous increase in design complexity, physical design becomes extremely challenging and time-consuming due to the repeated design iterations needed to optimize performance, power, and area. With the recent boom of artificial intelligence, deep learning has shown its potential in various fields, such as computer vision, recommendation systems, and robotics. Incorporating deep learning into the VLSI design flow has also become a promising trend. In this talk, we will introduce our recent studies on developing dedicated deep learning techniques for cross-stage modeling and optimization in physical design. We will also discuss the impact of large-scale and diverse datasets (e.g., CircuitNet) on improving the performance of deep learning models.
Biography:
Yibo Lin is an assistant professor in the School of Integrated Circuits at Peking University. He received the B.S. degree in microelectronics from Shanghai Jiaotong University in 2013, and his Ph.D. degree from the Electrical and Computer Engineering Department of the University of Texas at Austin in 2018. His research interests include physical design, machine learning applications, and GPU/FPGA acceleration. He has received 6 Best Paper Awards at premier venues including DATE 2022, TCAD 2021, and DAC 2019. He has also served in the Technical Program Committees of many major conferences, including ICCAD, ICCD, ISPD, and DAC.
Enquiries: Mr Jeff Liu at Tel. 3943 0624
Data-Efficient Graph Learning
Location
Speaker:
Mr. DING Kaize
Abstract:
The world around us — and our understanding of it — is rich in relational structure: from atoms and their interactions to objects and entities in our environments. Graphs, with nodes representing entities and edges representing relationships between entities, serve as a common language to model complex, relational, and heterogeneous systems. Despite the success of recent deep graph learning, the efficacy of existing efforts heavily depends on the ideal data quality of the observed graphs and the sufficiency of the supervision signals provided by human-annotated labels, so these carefully designed models easily fail in resource-constrained scenarios.
In this talk, I will present my recent research contributions centered around data-efficient learning for relational and heterogeneous graph-structured data. First, I will introduce what data-efficient graph learning is and my contributions to different research problems under its umbrella, including graph few-shot learning, graph weakly-supervised learning, and graph self-supervised learning. Based on my work, I will elucidate how to push forward the performance boundary of graph learning models, especially graph neural networks, with minimal human supervision signals. I will also touch upon the applications of data-efficient graph learning in different domains and finally conclude my talk with a brief overview of my future research agenda.
Biography:
DING Kaize is currently a Ph.D. candidate in the School of Computing and Augmented Intelligence (SCAI) at Arizona State University (ASU). Kaize works in the Data Mining and Machine Learning (DMML) Lab with Prof. Huan Liu, and previously interned at Google Brain, Microsoft Research, and Amazon Alexa AI. Kaize is broadly interested in data mining, machine learning, and natural language processing and their interdisciplinary applications in domains including cybersecurity, social good, and healthcare. His recent research interests particularly focus on data-efficient learning and graph neural networks. He has published a series of papers in top conferences and journals such as AAAI, EMNLP, IJCAI, KDD, NeurIPS, and TheWebConf. Kaize is the recipient of the ASU Graduate College Completion Fellowship and the ASU GPSA Outstanding Research Award, among others. More information about him can be found at https://www.public.asu.edu/~kding9/.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99778568306?pwd=Nms0cm9takVNQWtRaDhuaVdaTVJ5dz09
Enquiries: Mr Jeff Liu at Tel. 3943 0624
Resilience through Adaptation — the Challenge of Change
Location
Speaker:
Professor Jeff Kramer
Emeritus Professor, Department of Computing,
Imperial College London
Abstract:
Change in complex systems is inevitable. Providing rigorous techniques and tools to support dynamic system adaptation so that it can be performed online, at runtime, is certainly challenging. However, the potential resilience rewards could be great. There is a need for a software architecture and runtime support for dynamic software configuration, plan execution and plan synthesis, domain environment modelling and monitoring, and ultimately even potentially performing some elements of requirements engineering at runtime! This talk will present our motivation and vision, describing our work to date and our hopes for the future.
Biography:
Jeff Kramer is Emeritus Professor of Computing at Imperial College London.
His research work is primarily concerned with software engineering, with particular emphasis on evolving software architectures, behaviour analysis, the use of models in requirements elaboration, and self-organising adaptive software systems. An early research result was the DARWIN language for evolving distributed architectures; a more recent one is the Three Layer Model for self-adaptive systems. One of the major research challenges in self-managed adaptation is the need to perform requirements analysis at runtime.
Jeff has been involved in many major conferences and journals, notably as program co-chair of ICSE 1999 in Los Angeles, general co-chair of ICSE 2010 in Cape Town, and Editor-in-Chief of IEEE TSE from 2006 to 2010. His awards include the 2005 ACM SIGSOFT Outstanding Research Award and the 2011 ACM SIGSOFT Distinguished Service Award. He is a Fellow of the Royal Academy of Engineering, a Fellow of the ACM, and a Member of Academia Europaea.
Enquiries: Mr Jeff Liu at Tel. 3943 0624
Execution-Guided Learning for Software Development, Testing, and Maintenance
Location
Speaker:
Mr. NIE Pengyu
Abstract:
Machine Learning (ML) techniques have been increasingly adopted for Software Engineering (SE) tasks, such as code completion and code summarization. However, existing ML models provide limited value for SE tasks because they do not take into account two key characteristics of software: software is executable, and software constantly evolves. In this talk, I will present my insights and work on developing execution-guided and evolution-aware ML models for several SE tasks targeting important domains, including software testing, verification, and maintenance.
First, I will present my techniques to help developers write tests and formal proofs. This work has a direct impact on software correctness and on everyone who depends on software. I will present TeCo, the first ML model for test completion/generation, and Roosterize, the first model for lemma name generation. To achieve good performance, these two tasks require reasoning about code execution, which existing ML models are not capable of. To tackle this problem, I designed and developed ML models that integrate execution data and use such data to validate generation results.
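As a sketch of the generate-then-validate idea (my illustration, not TeCo's actual pipeline), candidates from a model can be filtered by whether they actually execute:

import pathlib
import subprocess
import tempfile

def first_passing(candidates):
    # Return the first candidate that runs successfully; execution acts as
    # the validator that pure sequence models lack.
    for code in candidates:
        with tempfile.TemporaryDirectory() as d:
            path = pathlib.Path(d) / "candidate_test.py"
            path.write_text(code)
            result = subprocess.run(["python3", str(path)], capture_output=True)
            if result.returncode == 0:
                return code
    return None

# Hypothetical model outputs, ranked by model score; execution re-ranks them.
print(first_passing(["assert 1 + 1 == 3\n", "assert 1 + 1 == 2\n"]))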
Next, I will present my techniques to help developers maintain software. Specifically, I will present my work on comment updating, i.e., automatically updating comments when associated code changes. I proposed the first edit ML model for SE to solve this task, which learns to perform developer-like edits instead of generating comments from scratch. This model can be generalized for general-purpose software editing, including tasks such as bug fixing and automated code review.
All my code and data are open-sourced, evaluated on real-world software, and shown to outperform existing ML models by large margins. My contributions lay the foundation for the development of accurate, robust, and interpretable ML models for SE.
Biography:
NIE Pengyu is a Ph.D. candidate at the University of Texas at Austin, advised by Milos Gligoric. Pengyu obtained his Bachelor’s Degree at the University of Science and Technology of China. His research area is the fusion of Software Engineering (SE) and Natural Language Processing (NLP), with a focus on improving developers’ productivity during software development, testing, and maintenance. He has published 14 papers in top-tier SE, NLP, and PL conferences. He is the recipient of an ACM SIGSOFT Distinguished Paper Award (FSE 2019), and the UT Austin Graduate School Continuing Fellowship. More information can be found on his webpage: https://pengyunie.github.io.
Join Zoom Meeting:
https://cuhk.zoom.us/j/95560110806?pwd=VFN4eXc2UU1KOTJIVk15aGU2ZkVydz09
Meeting ID: 955 6011 0806
Passcode: 202300
Enquiries: Mr Jeff Liu at Tel. 3943 0624
Adaptive and Automated Deep Recommender Systems
Location
Speaker:
Prof. ZHAO Xiangyu
Assistant Professor, School of Data Science
City University of Hong Kong (CityU)
Abstract:
Deep recommender systems have become increasingly popular in recent years, and have been utilized in a variety of domains, including movies, music, books, search queries, and social networks. They assist users in their information-seeking tasks by suggesting items (products, services, or information) that best fit their needs and preferences. Most existing recommender systems are based on static recommendation policies and hand-crafted architectures. Specifically, (i) most recommender systems consider the recommendation procedure as a static process, which may fail given the dynamic nature of the users’ preferences; (ii) existing recommendation policies aim to maximize the immediate reward from users, while completely overlooking their long-term impacts on user experience; (iii) designing architectures manually requires ample expert knowledge, non-trivial time and engineering efforts, while sometimes human error and bias can lead to suboptimal architectures. I will introduce my efforts in tackling these challenges via reinforcement learning (RL) and automated machine learning (AutoML), which can (i) adaptively update the recommendation policies, (ii) optimize the long-term user experience, and (iii) automatically design the deep architectures for recommender systems.
Biography:
Prof. Xiangyu ZHAO is an assistant professor in the School of Data Science at City University of Hong Kong (CityU). His current research interests include data mining and machine learning, and their applications in recommender systems, smart cities, healthcare, carbon neutrality, and finance. He has published more than 60 papers in top conferences (e.g., KDD, WWW, AAAI, SIGIR, IJCAI, ICDE, CIKM, ICDM, WSDM, RecSys, ICLR) and journals (e.g., TOIS, SIGKDD, SIGWeb, EPL, APS). His research has been recognized with ICDM’22 and ICDM’21 Best-ranked Papers, Global Top 100 Chinese New Stars in AI, the CCF-Ant Research Fund, the CCF-Tencent Open Fund, a Criteo Faculty Research Award, a Bytedance Research Collaboration Award, and a nomination for the Joint AAAI/ACM SIGAI Doctoral Dissertation Award. He serves as a (senior) program committee member and session chair for top data science conferences, and as a journal guest editor and reviewer. He was an organizer of DRL4KDD@KDD’19/WWW’21 and DRL4IR@SIGIR’20/21/22, and a lead tutor at WWW’21/22/23, IJCAI’21, and WSDM’23. He is also a founding academic committee member of MLNLP, the largest Chinese AI community, with millions of members/followers. The models and algorithms from his research have been deployed in the online systems of many companies. Please find more information at https://zhaoxyai.github.io/.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96382199967
Meeting ID: 963 8219 9967
Enquiries: Mr Jeff Liu at Tel. 3943 0624
Designing and Analyzing Machine Learning Algorithms in the Presence of Strategic Behavior
Location
Speaker:
Mr. ZHANG Hanrui
Abstract:
Machine learning algorithms now play a major part in all kinds of decision-making scenarios. When the stakes are high, self-interested agents — about whom decisions are being made — are increasingly tempted to manipulate the machine learning algorithm, in order to better fulfill their own goals, which are generally different from the decision maker’s. This highlights the importance of making machine learning algorithms robust against manipulation. In this talk, I will focus on generalization (i.e., the bridge between training and testing) in strategic classification: Traditional wisdom suggests that a classifier trained on historical observations (i.e., the training set) usually also works well on future data points to be classified (i.e., the test set). I will show how this very general principle fails when agents being classified strategically respond to the classifier, and present an intuitive fix that leads to provable (and in fact, optimal) generalization guarantees under strategic manipulation. I will then discuss the role of incentive-compatibility in strategic classification, and present experimental results that illustrate how the theoretical results can guide practice. If time permits, I will also discuss distinguishing strategic agents with samples, and/or dynamic decision making with strategic agents.
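The failure mode, and the flavor of fix, can be seen in a toy example (my illustration, not the construction from the talk): agents just below a published threshold inflate their feature up to a budget, so a threshold learned on honest data misclassifies them at test time, while shifting the published threshold by the budget restores accuracy.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5000)              # honest (training-time) feature
y = (x > 0).astype(int)                # ground-truth label
budget = 0.5                           # maximum feasible manipulation

def best_response(x, t, budget):
    # agents within `budget` below the published threshold t move exactly to t
    move = (x < t) & (x >= t - budget)
    return np.where(move, t, x)

naive_t = 0.0                          # optimal threshold on honest data
x_naive = best_response(x, naive_t, budget)
x_robust = best_response(x, naive_t + budget, budget)
print(round(((x_naive >= naive_t).astype(int) == y).mean(), 3))            # ~0.81
print(round(((x_robust >= naive_t + budget).astype(int) == y).mean(), 3))  # ~1.0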
Biography:
ZHANG Hanrui is a PhD student at Carnegie Mellon University, advised by Vincent Conitzer. He was named a finalist for the 2021 Facebook Fellowship. His work won the Best Student Paper Award at the European Symposium on Algorithms (ESA) and an Honorable Mention for the Best Paper Award at the AAAI Conference on Human Computation and Crowdsourcing (HCOMP). He received his bachelor’s degree from Yao’s Class, Tsinghua University, where he won the Outstanding Undergraduate Thesis Award.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96485699602?pwd=aXZZd0Z4aDVzVjhWdTRiVGt5cytvdz09
Meeting ID: 964 8569 9602
Passcode: 202300
Enquiries: Mr Jeff Liu at Tel. 3943 0624
Efficient Reinforcement Learning Through Uncertainties
Location
Speaker:
Mr. ZHOU Dongruo
Abstract:
Reinforcement learning (RL) has achieved great empirical success in many real-world problems in the last few years. However, many RL algorithms are inefficient due to their data-hungry nature. Whether there exists a universal way to improve the efficiency of existing RL algorithms remains an open question.
In this talk, I will give a selective overview of my research, which suggests that efficient (and optimal) RL can be built through the lens of uncertainties. I will show that uncertainties can not only guide RL to make decisions efficiently, but also accelerate the learning of the optimal policy from a finite number of data samples collected from the unknown environment. By utilizing the proposed uncertainty-based framework, I design computationally efficient and statistically optimal RL algorithms for various settings, which improve on existing baseline algorithms in both theoretical and empirical respects. At the end of this talk, I will briefly discuss several additional works, and my future research plan for designing next-generation decision-making algorithms.
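One standard template for such uncertainty-guided decision making under linear function approximation (a generic example, not necessarily the exact algorithms of the talk) augments the estimated value with an elliptical confidence bonus:

\[
\widehat{Q}_h(s,a) \leftarrow \min\Bigl\{ r_h(s,a) + \bigl[\widehat{\mathbb{P}}_h \widehat{V}_{h+1}\bigr](s,a)
+ \beta\sqrt{\phi(s,a)^{\top}\Lambda_h^{-1}\phi(s,a)},\; H \Bigr\},
\qquad
\Lambda_h = \lambda I + \sum_{\tau}\phi(s_h^{\tau},a_h^{\tau})\,\phi(s_h^{\tau},a_h^{\tau})^{\top},
\]

where the square-root term quantifies the uncertainty of the least-squares estimate in the direction φ(s,a); acting greedily with respect to the optimistic value drives exploration toward poorly understood state-action pairs, which is what makes the learning data-efficient.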
Biography:
ZHOU Dongruo is a final-year PhD student in the Department of Computer Science at UCLA, advised by Prof. Quanquan Gu. His research is broadly on the foundation of machine learning, with a particular focus on reinforcement learning and stochastic optimization. He aims to provide a theoretical understanding of machine learning methods, as well as to develop new machine learning algorithms with better performance. He is a recipient of the UCLA dissertation year fellowship.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93549469461?pwd=R0FOaFdxOG5LS0s2Q1RmaFdNVm4zZz09
Meeting ID: 935 4946 9461
Passcode: 202300
Enquiries: Mr Jeff Liu at Tel. 3943 0624