Seminar Series 2023/2024
December 2023
08 December
10:30 am - 11:30 am
Resource Management and Runtime Reconfiguration for Distributed Streaming Systems
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor Richard T. B. Ma
Associate Professor
School of Computing, National University of Singapore
Abstract:
Due to the long-run and unpredictable nature of stream processing, any statically configured execution of stream jobs fails to process data in a timely and efficient manner. To achieve performance requirements, stream jobs need to be reconfigured dynamically.
In the first part of the talk, we will discuss DRS, a dynamic resource scaling framework for cloud-based stream data analytics systems. DRS overcomes three fundamental challenges: 1) how to model the relationship between the provisioned resources and the application performance, 2) where to best place resources, and 3) how to measure the system load with minimal overhead. In particular, DRS includes an accurate performance model based on the theory of Jackson open queueing networks and is capable of handling arbitrary operator topologies, possibly with loops, splits, and joins. Extensive experiments with real data show that DRS is capable of detecting sub-optimal resource allocations and making quick and effective resource adjustments.
In the second part of the talk, we present Trisk, a control plane that supports versatile reconfigurations while maintaining high efficiency through easy-to-use programming APIs. Trisk enables versatile, usable reconfigurations based on a task-centric abstraction, and encapsulates primitive operations such that reconfigurations can be described by composing those primitive operations on the abstraction. For efficiency, Trisk adopts a partial pause-and-resume design, through which synchronization mechanisms in the native stream systems can further be leveraged. We implement Trisk on Apache Flink and demonstrate its usage and performance under realistic application scenarios. We show that Trisk executes reconfigurations with shorter completion time and comparable latency relative to a state-of-the-art fluid mechanism for state management.
Biography:
Prof. Richard T. B. Ma received the B.Sc. (Hons.) degree in computer science and the M.Phil. degree in computer science and engineering from The Chinese University of Hong Kong in 2002 and 2004, respectively, and the Ph.D. degree in electrical engineering from Columbia University in 2010. During his Ph.D. study, he worked as a Research Intern at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA, and at Telefonica Research, Barcelona, Spain. From 2010 to 2014, he worked as a Research Scientist at the Advanced Digital Science Center (ADSC), University of Illinois at Urbana–Champaign, Champaign, IL, USA. He is currently an Associate Professor with the School of Computing, National University of Singapore. His current research interests include distributed systems and network economics. He was a recipient of the Best Paper Award Runner-up at ACM MobiHoc 2020 and a co-recipient of the Best Paper Award at IEEE IC2E 2013, IEEE ICNP 2014, and the IEEE Workshop on Smart Data Pricing 2015. He is a Senior Member of ACM and IEEE.
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
05 December
10:00 am - 11:00 am
Neural Acceleration with Full Stack Optimization
Location
Lecture Theatre 2 (LT2), 1/F, Lady Shaw Building (LSB)
Category
Seminar Series 2023/2024
Speaker:
Professor Meng Li
Assistant Professor
Institute for Artificial Intelligence, School of Integrated Circuits, Peking University
Abstract:
Recent years have witnessed the fast evolution of AI and deep learning (DL) in fields such as computer vision and natural language processing. Though promising, DL faces serious challenges due to exponential network scaling and network heterogeneity. In this talk, I will discuss some of our recent works that leverage network/hardware co-design and co-optimization to improve the efficiency of DL. I will cover our recent works on tiny language models for MCUs, memory-aware scheduling, and hardware accelerator designs based on a new computing paradigm, i.e., stochastic computing. I will also discuss interesting future directions to further improve the efficiency and security of AI.
Biography:
Prof. Meng Li is currently a tenure-track assistant professor at Peking University, jointly affiliated with the Institute for Artificial Intelligence and the School of Integrated Circuits. Before joining Peking University, he was a staff research scientist and tech lead at Meta Reality Labs, focusing on the research and productization of efficient AI algorithms and hardware/systems for next-generation AR/VR devices. He received his Ph.D. degree from the University of Texas at Austin in 2018 and his bachelor's degree from Peking University in 2013.
Prof. Meng Li's research interests lie in efficient and secure multi-modal AI acceleration algorithms and hardware. He has published more than 60 papers and received two best paper awards, at HOST 2017 and GLSVLSI 2018. He has also received the EDAA Outstanding Dissertation Award, First Place in the ACM Student Research Competition Grand Final (Graduate Category), and a Best Poster Award at the ASPDAC Student Research Forum, among others.
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
04 December
1:30 pm - 2:30 pm
Qwen: Towards a Generalist Model
Location
Lecture Theatre 2 (LT2), 1/F, Lady Shaw Building (LSB)
Category
Seminar Series 2023/2024
Speaker:
Mr. Junyang Lin
Staff Engineer, Leader of Qwen Team,
Alibaba Group
Abstract:
This talk introduces the large language and multimodal model series Qwen, which stands for Tongyi Qianwen (通义千问), published and open-sourced by Alibaba Group. The Qwen models have achieved competitive performance against both open-source and proprietary LLMs and LMMs in benchmark and human evaluations. This talk provides a brief overview of the model series, and then delves into the details of building the LLMs and LMMs, including pretraining, alignment, multimodal extension, as well as the open-source release. Additionally, it points out the limitations and discusses future work for both the research community and industry in this field.
Biography:
Mr. Junyang Lin is a staff engineer at Alibaba Group, where he leads the Qwen Team. He has been doing research in natural language processing and multimodal representation learning, with a focus on large-scale pretraining, and his work has around 3000 citations. Recently his team released and open-sourced the Qwen series, including the large language model Qwen, the large vision-language model Qwen-VL, and the large audio-language model Qwen-Audio. Previously, he focused on large-scale pretraining with an emphasis on multimodal pretraining, and developed the open-sourced models OFA, Chinese-CLIP, etc.
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
04 December
3:00 pm - 4:00 pm
Classical simulation of one-query quantum distinguishers
Location
Lecture Theatre 2 (LT2), 1/F, Lady Shaw Building (LSB)
Category
Seminar Series 2023/2024
Speaker:
Professor Andrej Bogdanov
Professor
School of Electrical Engineering and Computer Science, University of Ottawa
Abstract:
A distinguisher is an algorithm that tells whether its input was sampled from one distribution or from another. The computational complexity of distinguishers is important for much of cryptography, pseudorandomness, and statistical inference.
We study the relative advantage of classical and quantum distinguishers of bounded query complexity over n-bit strings. Our focus is on a single quantum query, which is already quite powerful: Aaronson and Ambainis (STOC 2015) constructed a pair of distributions that is 𝜀-distinguishable by a one-query quantum algorithm, but O(𝜀k/√n)-indistinguishable by any non-adaptive k-query classical algorithm.
We show that every pair of distributions that is 𝜀-distinguishable by a one-query quantum algorithm is distinguishable with k classical queries and (1) advantage min{𝛺(𝜀√(k/n)), 𝛺(𝜀^2k^2/n)} non-adaptively (i.e., in one round), and (2) advantage 𝛺(𝜀^2k/√(n log n)) in two rounds. The second bound is tight in k and n up to a (log n) factor.
Based on joint work with Tsun Ming Cheung (McGill), Krishnamoorthy Dinesh (IIT Palakkad), and John C.S. Lui (CUHK).
Biography:
Prof. Andrej Bogdanov is a professor in the School of Electrical Engineering and Computer Science at the University of Ottawa. He is interested in cryptography, pseudorandomness, and computational complexity. Andrej obtained his Ph.D. from UC Berkeley. Before joining uOttawa he taught at the Chinese University of Hong Kong. He was a visiting professor at the Tokyo Institute of Technology in 2013 and at the Simons Institute for the Theory of Computing in 2017 and 2021.
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
November 2023
30 November
10:00 am - 11:00 am
Compact AI Representations for Game Theory: Models, Computations, and Applications
Location
Zoom
Category
Seminar Series 2023/2024
Speaker:
Professor Hau Chan
Assistant Professor
School of Computing, University of Nebraska-Lincoln
Abstract:
In the last few decades, game theory has become a prominent construct for modeling and predicting outcomes of strategic interactions of rational agents in various real-world environments, ranging from adversarial (e.g., attacker-defender interactions in the security domain) to collaborative (e.g., public good contributions). In turn, these predicted outcomes can be used to facilitate, inform, and improve agents' and policymakers' decision-making. Unfortunately, because of the domain characteristics of real-world environments, classical game-theoretic modeling and computational approaches (for predicting outcomes) can often take exponential space and time.
In this talk, I will discuss compact AI representations for strategic interactions (or games) to provide efficient approaches for a wide range of applications. I will demonstrate how they can be used to model and predict outcomes in scenarios we examined previously such as attacker-defenders, resource congestions, residential segregations, and public project contributions.
More specifically, I will first present aggregate games, a compact AI representation of games where each agent’s utility function depends on their own actions and the aggregation or summarization of the actions of all agents, and resource graph games, a compact AI representation of games where agents have exponential numbers of actions. For these games, I will then present our computational results for determining and computing Nash Equilibria (NE), a fundamental solution concept to specify predicted outcomes in games, and their related problems.
Biography:
Prof. Hau Chan is an assistant professor in the School of Computing at the University of Nebraska-Lincoln. He received his Ph.D. in Computer Science from Stony Brook University in 2015 and completed three years of Postdoctoral Fellowships, including at the Laboratory for Innovation Science at Harvard University in 2018. His main research areas focus on modeling and algorithmic aspects of AI and multi-agent interactions (e.g., via game theory, mechanism design, and applied machine learning), addressing several cross-disciplinary societal problems and applications. His recent application areas include improving accessibility to public facilities, reducing substance usage, and making fair collective decisions. His research has been supported by NSF, NIH, and USCYBERCOM. He has received several Best Paper Awards at SDM and AAMAS and distinguished/outstanding SPC/PC member recognitions at IJCAI and WSDM. He has given tutorials and talks on computational game theory and mechanism design at venues such as AAMAS and IJCAI, including an Early Career Spotlight at IJCAI 2022. He has served as a co-chair for Demonstrations, Doctoral Consortium, Scholarships, and Diversity & Inclusion Activities at AAMAS and IJCAI.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93836939970
Meeting ID: 938 3693 9970
Passcode: 202300
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
29 November
2:00 pm - 3:00 pm
Cryo-Electron Microscopy Image Analysis: from 2D class averaging to 3D reconstruction
Location
L2, 1/F, Science Centre (SC L2), CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor Zhizhen Zhao
William L. Everitt Fellow and Associate Professor
Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Abstract:
Cryo-electron microscopy (cryo-EM) single particle reconstruction is an entirely general technique for 3D structure determination of macromolecular complexes. This talk focuses on the algorithms for 2D class averaging and 3D reconstruction for single-particle images, assuming no conformational changes of the macromolecules. In the first part, I will introduce multi-frequency vector diffusion maps (MFVDM) to improve the efficiency and accuracy of cryo-EM 2D image classification and denoising. This framework incorporates different irreducible representations of the estimated alignment between similar images. In addition, we use a graph filtering scheme to denoise the images using the eigenvalues and eigenvectors of the MFVDM matrices. In the second part, I will present a 3D reconstruction approach, which follows a line of works starting from Kam (1977) that employs autocorrelation analysis for single particle reconstruction. Our approach does not require per-image pose estimation and imposes a spatial non-negativity constraint. At the end of the talk, I will briefly review the challenges and existing approaches for addressing the continuous heterogeneity in cryo-EM data.
Biography:
Prof. Zhizhen Zhao is an Associate Professor and William L. Everitt Fellow in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. She joined University of Illinois in 2016. From 2014 to 2016, she was a Courant Instructor at the Courant Institute of Mathematical Sciences, New York University. She received the B.A. and M.Sc. degrees in physics from Trinity College, Cambridge University in 2008, and the Ph.D. degree in physics from Princeton University in 2013. She is a recipient of Alfred P. Sloan Research Fellowship (2020). Her research interests include computational imaging, data science, and machine learning.
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
28 November
10:00 am - 11:00 am
Structure for Scalable Verification
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. Lauren Pick
Postdoctoral Researcher
Department of Computer Sciences, University of Wisconsin-Madison and Department of Electrical Engineering and Computer Sciences, University of California, Berkeley
Abstract:
Given the critical role of software systems in society, it is important that we understand how such systems behave and interact. Formal specifications can help us in this task by providing rigorous and unambiguous descriptions of system behaviors. Automated verification can be applied to automate the process of proving formal specifications hold for software systems, making it easier to ensure that the underlying systems function as intended. Unfortunately, the application of automated verification to real-world systems remains hindered by scalability limitations. In this talk, I describe my work on addressing these limitations by leveraging the problem-specific structure of specifications and systems. I specifically illustrate my approach for handling concrete problems in security and distributed domains, where taking advantage of structure enables scalable verification.
Biography:
Dr. Lauren Pick is a postdoctoral researcher at the University of California, Berkeley and the University of Wisconsin-Madison. She received her Ph.D. from Princeton University in January 2022. Her research focuses on developing techniques for automated verification and synthesis, with the goal of enabling formal reasoning about real-world systems. To this end, she has developed techniques that take advantage of structural aspects of target systems and their desired properties to enable efficient verification and synthesis. She is a Computing Innovation fellow and was a recipient of the NSF GRFP Fellowship.
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
27 November
10:30 am - 11:30 am
Shape Geometric Processing and Analysis of Large Aviation Equipment
Location
Room 801, 8/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor Mingqiang Wei
Professor
School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics (NUAA)
Abstract:
Large aircraft, among the most complex pieces of high-end equipment in modern society, represent the culmination of interdisciplinary, cross-domain advanced technologies and occupy the top of the manufacturing industry's technology and value chains. With the emergence of a batch of national key equipment such as the Y-20, C919, and Jiaolong-600, China has made breakthrough progress in large aircraft manufacturing and gradually established a relatively complete production and development system. However, owing to an insufficient technological foundation, and in comparison with international aerospace manufacturing giants, Chinese aviation enterprises have not yet achieved integrated manufacturing and measurement capabilities or effective precision control. "High-precision rapid 3D scanning analysis and quality control technology" has thus become an important factor affecting the development of large aircraft in China.

Geometric deep learning, with its powerful ability to learn geometric features, has shown great potential in the analysis of large aircraft shapes. However, existing network structures lack domain-specific expertise in aviation, there is no publicly available large-scale aircraft 3D dataset, and the latest machine learning technologies have not been deeply integrated into geometric deep learning, making it difficult to comprehensively and efficiently analyze the complex features of large aircraft shapes while meeting their stringent accuracy requirements. This talk will introduce the interdisciplinary technical issues involved in the analysis of large aircraft shapes.
Biography:
Prof. Mingqiang Wei received his Ph.D. degree (2014) in Computer Science and Engineering from The Chinese University of Hong Kong (CUHK). He is a professor at the School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics (NUAA). He was a recipient of the Excellent Youth Fund of the National Natural Science Foundation of China in 2023. Before joining NUAA, he served as an assistant professor at Hefei University of Technology and a postdoctoral fellow at CUHK. He received a CUHK Young Scholar Thesis Award in 2014. He is now an Associate Editor for ACM TOMM, The Visual Computer Journal, and the Journal of Electronic Imaging, and a Guest Editor for IEEE Transactions on Multimedia. His research interests focus on 3D vision, computer graphics, and deep learning.
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
27 November
2:30 pm - 3:30 pm
Looking behind the Seen
Location
L3, 1/F, Science Centre (SC L3), CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor Alexander Schwing
Associate Professor
Department of Electrical and Computer Engineering & Department of Computer Science, University of Illinois at Urbana-Champaign
Abstract:
Our goal is to develop methods which anticipate. For this, four foundational questions need to be answered: (1) How can methods accurately forecast high-dimensional observations?; (2) How can algorithms holistically understand objects, e.g., when reasoning about occluded parts?; (3) How can accurate probabilistic models be recovered from limited amounts of labeled data and for rare events?; and (4) How can autonomous agents be trained effectively to collaborate?
In this talk we present vignettes of our research to address those questions. We start by discussing MaskFormer and Mask2Former, recent architectures which achieve state-of-the-art results on three tasks: panoptic, instance, and semantic segmentation. We then discuss the importance of memory for video object segmentation and its combination with foundation models for open-world segmentation. Finally, and if time permits, we discuss SDFusion, a generative model to infer parts of an object that are unobserved. For additional info and questions, please browse to http://alexander-schwing.de.
Biography:
Prof. Alexander Schwing is an Associate Professor at the University of Illinois at Urbana-Champaign working with talented students on computer vision and machine learning topics. He received his B.S. and diploma in Electrical Engineering and Information Technology from Technical University of Munich in 2006 and 2008 respectively, and obtained a PhD in Computer Science from ETH Zurich in 2014. Afterwards he joined University of Toronto as a postdoctoral fellow until 2016. His research interests are in the area of computer vision and machine learning, where he has co-authored numerous papers on topics in scene understanding, inference and learning algorithms, deep learning, image and language processing and generative modeling. His PhD thesis was awarded an ETH medal and his team’s research was awarded an NSF CAREER award. For additional info, please browse to http://alexander-schwing.de.
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
22 November
9:00 am - 10:00 am
Open-Source Accelerator-Based Edge AI Architectures for a Sustainable World
Location
Lecture Theatre 3, 1/F, Lady Shaw Building (LSB)
Category
Seminar Series 2023/2024
Speaker:
Professor David Atienza
Professor
Department of Electrical and Computer Engineering, The École Polytechnique Fédérale de Lausanne (EPFL)
Abstract:
Edge computing is becoming an essential concept covering multiple domains nowadays as our world becomes increasingly connected to enable the Internet of Things (IoT) concept. In addition, the new wave of Artificial Intelligence (AI), particularly complex Machine Learning (ML) and Deep Learning (DL) models, is demanding new computing paradigms beyond traditional general-purpose computing to make IoT a viable reality in a sustainable world.
In this seminar, Prof. Atienza will discuss new approaches to effectively design the next generation of edge AI computing architectures by taking inspiration from how biological computing systems operate. In particular, these novel bio-inspired edge AI architectures include two key concepts. First, they accept computing inexactness and integrate multiple computing acceleration engines and low-power principles to create a new open-source eXtended and Heterogeneous Energy-Efficient hardware Platform (called x-HEEP). Second, x-HEEP can be instantiated for different edge AI application domains to operate ensembles of neural networks that improve the robustness of ML/DL outputs at the system level, while minimizing memory and computation resources for the target application. Overall, x-HEEP instantiations for edge AI applications include in-memory computing or run-time reconfigurable coarse-grained accelerators to minimize energy according to the required precision of the target application.
Biography:
Prof. David Atienza is a professor of Electrical and Computer Engineering, and leads both the Embedded Systems Laboratory (ESL) and the new EcoCloud Sustainable Computing Center at EPFL, Switzerland. He received his M.Sc. and Ph.D. degrees in Computer Science and Engineering from UCM (Spain) and IMEC (Belgium). His research interests include system-level design methodologies for high-performance multi-processor system-on-chip (MPSoC) and low-power Internet-of-Things (IoT) systems, including edge AI architectures for wearables and IoT systems as well as thermal-aware designs for MPSoCs and many-core servers. He is a co-author of more than 400 papers, two books, and has 14 licensed patents in these topics. He served as DATE General Chair and Program Chair, and is currently Editor-in-Chief of IEEE TCAD. Among others, Dr. Atienza has received the ICCAD 10-Year Retrospective Most Influential Paper Award, the DAC Under-40 Innovators Award, the IEEE TC-CPS Mid-Career Award, and the ACM SIGDA Outstanding Faculty Award. He is a Fellow of IEEE, a Fellow of ACM, served as IEEE CEDA President (period 2018-2019), and he is currently the Chair of the European Design Automation Association (EDAA).
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
October 2023
20 October
10:00 am - 11:00 am
Heads-Up Computing: Towards The Next Generation Interactive
Location
Lecture Theatre 5 (1/F), Lady Shaw Building (LSB)
Category
Seminar Series 2023/2024
Speaker:
Prof. Shengdong Zhao
Associate Professor
Department of Computer Science, National University of Singapore
Abstract:
Heads-up computing is an emerging concept in human-computer interaction (HCI) that focuses on natural and intuitive interaction with technology. By making technology more seamlessly integrated into our lives, heads-up computing has the potential to revolutionize the way we interact with devices. With the rise of large language models (LLMs) such as ChatGPT and GPT-4, the vision of heads-up computing is becoming much easier to realize. The combination of LLMs and heads-up computing can create more proactive, personalized, and responsive systems that are more human-centric. However, technology is a double-edged sword. While technology provides us with great power, it also comes with the responsibility to ensure that it is used ethically and for the benefit of all. That is why it is essential to place fundamental human values at the center of research programs and to work collaboratively across disciplines. As we navigate this historic transition, it is crucial to shape a future that reflects our values and enhances our quality of life.
Biography:
Dr. Shengdong Zhao is an Associate Professor in the Department of Computer Science at the National University of Singapore, where he established and leads the NUS-HCI research lab. He received his Ph.D. degree in Computer Science from the University of Toronto and a Master's degree in Information Management & Systems from the University of California, Berkeley. With a wealth of experience in developing new interface tools and applications, Dr. Zhao regularly publishes his research in top-tier HCI conferences and journals. He has also worked as a senior consultant with the Huawei Consumer Business Group in 2017. In addition to his research, Dr. Zhao is an active member of the HCI community, frequently serving on program committees for top HCI conferences and as the paper chair for the ACM SIGCHI 2019 and 2020 conferences. For more information about Dr. Zhao and the NUS-HCI lab, please visit http://www.shengdongzhao.com and http://www.nus-hci.org.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
16 October
1:00 pm - 2:00 pm
Robust AI for Security
Location
L2, 1/F, Science Centre (SC L2), CUHK
Category
Seminar Series 2023/2024
Speaker:
Prof. Yizheng Chen
Assistant Professor
Department of Computer Science, University of Maryland
Abstract:
Artificial Intelligence is becoming more powerful than ever, e.g., GitHub Copilot suggests code to developers, and Large Language Model (LLM) Plugins will soon assist many tasks in our daily lives. We can utilize the power of AI to solve security problems, which needs to be robust against new attacks and new vulnerabilities.
In this talk, I will first discuss how to develop robust AI techniques for malware detection. Our research finds that the F1 score of an Android malware classifier trained on one year's worth of data quickly dropped from 0.99 to 0.76 after 6 months of deployment on new test samples. I will present new methods to make machine learning for Android malware detection more effective against data distribution shift. My vision is that continuous learning with a human-in-the-loop setup can achieve robust malware detection. Our results show that to maintain a steady F1 score over time, we can achieve an 8X reduction in the labels needed from security analysts.
Next, I will discuss the potential of using large language models to solve security problems, using vulnerable source code detection as a case study. We propose and release a new vulnerable source code dataset, DiverseVul. Using the new dataset, we study 11 model architectures belonging to 4 families for vulnerability detection. Our results indicate that developing code-specific pre-training tasks is a promising research direction for using LLMs for security. We also demonstrate an important generalization challenge for the deployment of deep learning-based models.
In closing, I will discuss security issues of LLMs and future research directions.
Biography:
Yizheng Chen is an Assistant Professor of Computer Science at University of Maryland. She works at the intersection of AI and security. Her research focuses on AI for Security and robustness of AI models. Previously, she received her Ph.D. in Computer Science from the Georgia Institute of Technology, and was a postdoc at University of California, Berkeley and Columbia University. Her work has received an ACM CCS Best Paper Award Runner-up and a Google ASPIRE Award. She is a recipient of the Anita Borg Memorial Scholarship.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
September 2023
27 September
10:00 am - 11:30 am
Geometric Robot Learning for Generalizable Skills Acquisition
Location
Room 123, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Prof. Xiaolong Wang
Associate Professor
Department of Electrical and Computer Engineering, University of California, San Diego
Abstract:
Robot learning has witnessed significant progress in terms of generalization in the past few years. At the heart of such generalization, the advancement of representation learning, such as image and text foundation models, plays an important role. While these achievements are encouraging, most tasks conducted are relatively simple. In this talk, I will present our recent efforts on learning generalizable skills, focusing on tasks with complex physical contacts and geometric reasoning. Specifically, I will discuss our research on: (i) the use of a large number of low-cost, binary force sensors to enable Sim2Real manipulation; and (ii) unifying 3D and semantic representation learning to generalize policy learning across diverse objects and scenes. I will showcase the real-world applications of our research, including dexterous manipulation, language-driven manipulation, and legged locomotion control.
Biography:
Xiaolong Wang is an Assistant Professor in the ECE department at the University of California, San Diego, affiliated with the TILOS NSF AI Institute. He received his Ph.D. in Robotics at Carnegie Mellon University. His postdoctoral training was at the University of California, Berkeley. His research focuses on the intersection between computer vision and robotics. His specific interest lies in learning 3D and dynamics representations from videos and physical robotic interaction data. These comprehensive representations are utilized to facilitate the learning of robot skills, with the goal of generalizing the robot to interact effectively with a wide range of objects and environments in the real physical world. He is the recipient of the NSF CAREER Award, Intel Rising Star Faculty Award, and Research Awards from Sony, Amazon, Adobe, and Cisco.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
27 September
2:00 pm - 3:00 pm
Disentangled Representation from Generative Networks
Location
L2, 1/F, Science Centre (SC L2), CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. LIU Sifei
Abstract:
Disentangled representation in computer vision refers to encoding visual data into distinct, independent factors. Such representations are critical for enhancing interpretability, improving generalization across tasks, and enabling controlled manipulation of specific visual attributes. Learning disentangled representations is challenging, primarily because ground-truth factorizations are rarely available.
In this talk, I will discuss our latest efforts to extract disentangled representations from GANs and diffusion models, for both 2D images and 3D textured shapes. I will demonstrate how, in the absence of annotations, our approaches can discern and extract fine-grained structural information, such as correspondence maps, in a self-supervised manner. Building on this space, I will introduce our work on a generalizable network designed for controlled generation and editing in a feed-forward paradigm. Additionally, I will spotlight our recent exploration into generating hand-object interactions, leveraging the disentanglement of layout and content through image diffusion models.
Biography:
Dr. LIU Sifei is a staff-level Senior Research Scientist at NVIDIA, where she is part of the LPR team led by Jan Kautz. Her work primarily revolves around the development of generalizable visual representations and data-efficient learning for images, videos, and 3D content. Prior to this, she pursued her Ph.D. at the VLLAB under the guidance of Ming-Hsuan Yang. Sifei has received several prestigious awards and recognitions: the Baidu Graduate Fellowship in 2013, the NVIDIA Pioneering Research Award in 2017, and the Rising Stars in EECS recognition in 2019. Additionally, she was nominated for the VentureBeat Women in AI Award in 2020.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
21 September
4:30 pm - 5:30 pm
Towards Scalable, Secure and Privacy-Preserving Metaverse
Location
L1, 1/F, Science Centre (SC L1), CUHK
Category
Seminar Series 2023/2024
Speaker:
Prof. DAI Hong-Ning
Associate Professor
Department of Computer Science, Hong Kong Baptist University (HKBU)
Abstract:
The metaverse is constructed from multiple technologies, including virtual reality (VR), augmented reality (AR), mixed reality (MR), artificial intelligence (AI), digital twins (DT), blockchain, and 5G communications. The advent of the metaverse has spurred a proliferation of VR/AR apps on diverse VR/AR devices, such as Meta Quest 2, MS HoloLens, Sony PlayStation VR, ByteDance Pico, and Apple Vision Pro. Meanwhile, diverse metaverse applications have emerged in gaming, healthcare, industry, the creator economy, and digital arts. However, the development of the metaverse is still at an early stage: the complexity and heterogeneity of the entire system mean it cannot yet scale to the increasing number of participants or the stringent demands of metaverse applications. Moreover, emerging security vulnerabilities and privacy-leakage concerns have also prevented the metaverse from wide adoption. In this talk, I will first briefly review the metaverse and its relevant technologies. I will then elaborate on its challenges as well as potential solutions. Finally, I will discuss several future directions in this promising area.
Biography:
Hong-Ning Dai is an associate professor in the Department of Computer Science, Hong Kong Baptist University (HKBU). He obtained a Ph.D. degree in Computer Science and Engineering from The Chinese University of Hong Kong. Before joining HKBU, he had more than 10 years of academic experience at The Chinese University of Hong Kong, Macau University of Science and Technology (Macau), and Lingnan University (Hong Kong). His current research interests include the Internet of Things, blockchain, and big data analytics. Prof. Dai has published more than 200 papers in refereed journals and conferences, and his publications have received more than 15,000 citations. He was included in Stanford University's list of the world's top 2% scientists for career-long impact (2021, 2022), and was conferred the AI 2000 Most Influential Scholar Award (Honorable Mention) in Internet of Things in 2023. He holds one U.S. patent and is a Senior Member of IEEE and ACM. Prof. Dai has served as an associate editor for IEEE Communications Surveys & Tutorials, IEEE Transactions on Intelligent Transportation Systems, IEEE Transactions on Industrial Informatics, IEEE Transactions on Industrial Cyber-Physical Systems, Ad Hoc Networks (Elsevier), and Connection Science (Taylor & Francis).
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
20 September
4:00 pm - 5:00 pm
The da Vinci Research Kit: System Description, Research Highlights, and Surgical Robotics Challenge
Location
CK TSE, G/F, Chung Chi College Elisabeth Luce Moore Library
Category
Seminar Series 2023/2024
Speaker:
Prof. Peter Kazanzides
Research Professor
Department of Computer Science, Johns Hopkins University
Abstract:
The da Vinci Research Kit (dVRK) is an open research platform that couples open-source control electronics and software with the mechanical components of the da Vinci surgical robot. This presentation will describe the dVRK system architecture, followed by selected research enabled by this system, including mixed reality for the first assistant, autonomous camera motion, and force estimation for bilateral teleoperation. The presentation will conclude with an overview of the AccelNet Surgical Robotics Challenge, which includes both simulated and physical environments.
Biography:
Peter Kazanzides received the Ph.D. degree in electrical engineering from Brown University in 1988. He began working on surgical robotics in March 1989 as a postdoctoral researcher at the IBM T.J. Watson Research Center and co-founded Integrated Surgical Systems (ISS) in November 1990. As Director of Robotics and Software at ISS, he was responsible for the design, implementation, validation and support of the ROBODOC System, which has been used for more than 20,000 hip and knee replacement surgeries. Dr. Kazanzides joined Johns Hopkins University in December 2002 and currently holds an appointment as a Research Professor of Computer Science. His research focuses on computer-integrated surgery, space robotics and mixed reality.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
12 September
9:30 am - 10:30 am
Smart Reconfigurable Computing for GNN and Transformer using Agile High Level Synthesis
Location
Zoom
Category
Seminar Series 2023/2024
Speaker:
Dr. HAO Cong, Callie
Assistant Professor
Department of Electrical and Computer Engineering (ECE), Georgia Institute of Technology (GaTech)
Abstract:
In this talk, we introduce two architectures: one for graph neural networks (GNNs) called FlowGNN, and one for vision transformers (ViTs) called Edge-MoE. In FlowGNN, a generic dataflow architecture for GNN acceleration is proposed, supporting a wide range of GNN models without graph pre-processing. GNNBuilder is then introduced as an automated, end-to-end GNN accelerator generation framework, allowing the generation of accelerators for various GNN models with minimal overhead. Next, Edge-MoE presents an FPGA accelerator for multi-task Vision Transformers with architectural innovations, achieving improved energy efficiency compared to GPUs and CPUs. The talk demonstrates the performance of these approaches, with code and measurements available for public access. Finally, we briefly introduce LightningSim, a fast simulation tool for High-Level Synthesis (HLS) designs, which can significantly improve HLS design simulation speed.
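As a concrete reference point for the per-layer computation such accelerators target, here is a minimal message-passing GNN layer with mean aggregation, sketched in plain numpy. This illustrates the generic edge-wise dataflow only, not FlowGNN's actual hardware architecture; the tiny graph and weights are invented for the example.

```python
import numpy as np

def mp_layer(X, edges, W_self, W_nbr):
    """One message-passing layer: each node combines its own (transformed)
    feature with the mean of its in-neighbors' features, then applies ReLU."""
    n = X.shape[0]
    agg = np.zeros_like(X)
    deg = np.zeros(n)
    for u, v in edges:                  # stream messages over the edge list
        agg[v] += X[u]
        deg[v] += 1
    agg /= np.maximum(deg, 1)[:, None]  # mean of incoming messages
    return np.maximum(0, X @ W_self + agg @ W_nbr)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 0)]        # directed 3-cycle
W = np.eye(2)                           # identity weights for readability
H = mp_layer(X, edges, W, W)
print(H)                                # rows: [2, 1], [1, 1], [1, 2]
```

A dataflow accelerator pipelines exactly this edge loop, which is why avoiding graph pre-processing matters for end-to-end latency.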
Biography:
Dr. HAO Cong, Callie is an assistant professor in ECE at Georgia Tech. She received the Ph.D. degree in Electrical Engineering from Waseda University in 2017. Her primary research interests lie in the joint area of efficient hardware design and machine learning algorithms, as well as reconfigurable and high-efficiency computing and agile electronic design automation tools.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96351056844?pwd=cDBJcVY3ZHlGMSt2V0FUQVdUVnAwZz09
Meeting ID: 963 5105 6844
Passcode: 471978
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
01 September
11:00 am - 12:00 pm
An Evolution of Learning Neural Implicit Representations for 3D Shapes
Location
Room 407, 4/F, William M W Mong Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Professor ZHANG Hao, Richard
Amazon Scholar, Professor
School of Computing Science, Simon Fraser University, Canada
Abstract:
Neural implicit representations are the immediate precursors to neural radiance fields (NeRF). In a short span of only four years, they have quickly become the representation of choice for learning reconstructive and generative models of 3D shapes. Unlike traditional convolutional neural networks that have been widely applied to reason about images and video, neural implicit models encode shape boundaries in a continuous manner to lead to superior visual quality; they are also amenable to simple network architectures to facilitate a variety of extensions and adaptations. In this talk, I will recount a brief history of the development of neural implicit representations, while focusing mainly on several paths of follow-ups from our recent works, including structured implicit models, direct mesh generation, CSG assemblies, and the use of contextual, query-specific feature encoding for category-agnostic and generalizable shape representation learning.
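To make the representation concrete: an implicit model is a continuous function whose zero level set is the shape's surface, so the shape can be queried at any point rather than at fixed grid resolution. The sketch below uses the analytic signed distance function of a sphere as a stand-in for a trained network; it illustrates the general idea only, not any specific model from the talk.

```python
import numpy as np

def sdf_sphere(p, radius=1.0):
    """Signed distance to a sphere: negative inside, zero on the surface,
    positive outside. A trained neural implicit plays the same role."""
    return np.linalg.norm(p, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],   # center: inside  -> negative
                [1.0, 0.0, 0.0],   # on the surface  -> zero
                [0.0, 2.0, 0.0]])  # outside         -> positive
print(sdf_sphere(pts))  # [-1.  0.  1.]
```

Because the boundary is encoded continuously, meshes can later be extracted at any resolution (e.g., by finding sign changes), which is the source of the superior visual quality mentioned above.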
Biography:
ZHANG Hao, Richard is a professor in the School of Computing Science at Simon Fraser University, Canada. Currently, he holds a Distinguished University Professorship and is an Amazon Scholar. Richard earned his Ph.D. from the University of Toronto, and MMath and BMath degrees from the University of Waterloo. His research is in computer graphics and visual computing with special interests in geometric modeling, shape analysis, 3D vision, geometric deep learning, as well as computational design and fabrication. He has published more than 180 papers on these topics, including over 60 articles in SIGGRAPH (+Asia) and ACM Transactions on Graphics (TOG), the top venue in computer graphics. Awards won by Richard include a Canadian Human-Computer Communications Society Achievement Award in Computer Graphics (2022), a Google Faculty Award (2019), a National Science Foundation of China Overseas Outstanding Young Researcher Award (2015), an NSERC Discovery Accelerator Supplement Award (2014), a Best Dataset Award from ChinaGraph (2020), as well as faculty grants/gifts from Adobe, Autodesk, Google, and Huawei. He and his students have won the CVPR 2020 Best Student Paper Award and Best Paper Awards at SGP 2008 and CAD/Graphics 2017.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
August 2023
28 August
2:00 pm - 3:00 pm
Towards predictive spatiotemporal modeling of single cells
Location
Room 803, 8/F, William M W Mong Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Dr. Xiaojie Qiu
Incoming Assistant Professor
Department of Genetics, Department of Computer Science, Stanford University
Abstract:
Single-cell RNA-seq, together with RNA velocity and metabolic labeling, reveals cellular states and transitions at unprecedented resolution. Fully exploiting these data, however, requires kinetic models capable of unveiling governing regulatory functions.
In the first part of my talk, I will introduce an analytical framework, dynamo (https://github.com/aristoteleo/dynamo-release), and highlight dynamo’s power to overcome fundamental limitations of conventional splicing-based RNA velocity analyses to enable accurate velocity estimation on a metabolically labeled human hematopoiesis scRNA-seq dataset. Furthermore, differential geometry analyses reveal mechanisms driving early megakaryocyte appearance and elucidate asymmetrical regulation within the PU.1-GATA1 circuit. Leveraging the least-action-path method, dynamo accurately predicts drivers of numerous hematopoietic transitions. Finally, in silico perturbations predict cell-fate diversions induced by gene perturbations. Dynamo thus represents an important step in advancing quantitative and predictive theories of cell-state transitions.
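The vector-field idea underlying this framework can be sketched generically: given cell states x and estimated velocities v, fit a smooth function f with v ≈ f(x), which can then be analyzed or integrated to predict fates. The toy below uses ridge regression on polynomial features as a stand-in for dynamo's actual sparse kernel-based vector-field learning; the data and features are made up for illustration.

```python
import numpy as np

# Synthetic "cells": 2D states with a rotational velocity field v = (-y, x).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))            # cell states
V = np.stack([-X[:, 1], X[:, 0]], axis=1)        # measured velocities

# Ridge regression on low-order polynomial features: v ≈ f(x).
feats = np.hstack([X, X**2, X[:, :1] * X[:, 1:], np.ones((200, 1))])
A = feats.T @ feats + 1e-3 * np.eye(feats.shape[1])
W = np.linalg.solve(A, feats.T @ V)              # fitted coefficients

pred = feats @ W
err = np.abs(pred - V).max()                     # true field is linear, so
print(err)                                       # recovery is near-exact
```

Once f is in hand, downstream analyses (Jacobians for regulatory interactions, least-action paths between states) operate on the fitted field rather than the raw samples.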
Cells do not live in a vacuum, but in a milieu defined by cell–cell communication that can be quantified via recent advances in spatial transcriptomics. In the second part of my talk, I will present Spateo, a general framework for quantitative spatiotemporal modeling of single-cell resolution spatial transcriptomics. Spateo develops a comprehensive framework of cell-cell interaction to reveal spatial effects of niche factors and cell type-specific ligand-receptor interactions. Furthermore, Spateo reconstructs 3D models of whole embryos and performs 3D morphometric analyses. Lastly, Spateo introduces the concept of a “morphometric vector field” of cell migrations, and integrates spatial differential geometry to unveil regulatory programs underlying various organogenesis patterns of Drosophila. Thus, Spateo enables the study of the ecology of organs at a molecular level in 3D space, beyond isolated single cells.
Biography:
Dr. Xiaojie Qiu is an incoming assistant professor at the Department of Genetics, the BASE program, and the Department of Computer Science at Stanford. Xiaojie’s Ph.D. work at the University of Washington with Dr. Cole Trapnell made substantial contributions to the field of single-cell genomics, exemplified by the development of Monocle 2 and Monocle 3, which can accurately and robustly reconstruct complex developmental trajectories from scRNA-seq data. In his postdoc at the Whitehead Institute with Dr. Jonathan Weissman, Xiaojie developed Dynamo (aristoteleo/dynamo-release), which infers absolute RNA velocity from metabolic-labeling-enabled single-cell RNA-seq, reconstructs continuous vector fields that predict the fates of individual cells, employs differential geometry to extract underlying gene regulatory interactions, and ultimately predicts optimal reprogramming paths and makes nontrivial in silico perturbation predictions. Recently he also developed a powerful toolkit, Spateo (aristoteleo/spateo-release), for advanced multi-dimensional spatiotemporal modeling of single-cell resolution spatial transcriptomics. Spateo delivers novel methods for digitizing spatial layers/columns to identify spatially polar genes, and develops a comprehensive framework of cell-cell interaction to reveal spatial effects of niche factors and cell type-specific ligand-receptor interactions. Furthermore, Spateo reconstructs 3D models of whole embryos and performs 3D morphometric analyses. Lastly, Spateo introduces the concept of a “morphometric vector field” of cell migrations, and integrates spatial differential geometry to unveil regulatory programs underlying various organogenesis patterns of Drosophila.
The Qiu lab at Stanford will officially start on Dec. 16, 2024. Xiaojie will continue leveraging his unique background in single-cell genomics, mathematical modeling, and machine learning to lead a research team that bridges the gap between the “big data” from single-cell and spatial genomics and quantitative/predictive modeling, in order to address fundamental questions in mammalian cell fate transitions, especially those of heart development and disease. The lab will pursue four main directions: 1) dissect the mechanisms of mammalian cell differentiation, reprogramming, and maintenance, including those of cardiac cells, through differentiable deep learning frameworks; 2) integrate multi-omics and harmonize short-term RNA velocities with long-term lineage tracing, and apply such methods to heart development and congenital heart disease; 3) build predictive in silico 3D spatiotemporal models of mammalian organogenesis, with a focus on heart morphogenesis; and 4) establish a foundational software ecosystem for predictive and mechanistic modeling of single-cell and spatial transcriptomics.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
14 August
3:00 pm - 4:00 pm
Fair and Private Backpropagation: A Scalable Framework for Fair and Private Learning
Location
Room 121, 1/F, Ho Sin-Hang Engineering Building, CUHK
Category
Seminar Series 2023/2024
Speaker:
Meisam Razaviyayn
Associate Professor
University of Southern California
Abstract:
Machine learning models are increasingly used in high-stakes decision-making systems. In such applications, a major concern is that these models sometimes discriminate against certain demographic groups, such as individuals of a certain race, gender, or age. Another major concern is the violation of users’ privacy. While fair learning algorithms have been developed to mitigate discrimination, these algorithms can still leak sensitive information, such as individuals’ health or financial records. Utilizing the notion of differential privacy (DP), prior works aimed to develop learning algorithms that are both private and fair. However, existing algorithms for DP fair learning require a full batch of data in each iteration to be able to impose fairness, and the fairness/accuracy of the model can degrade significantly under prior DP training algorithms. In this work, we developed a mini-batch (stochastic) differentially private algorithm for fair learning with a theoretical convergence guarantee. Here, the term “stochastic” refers to the fact that our proposed algorithm converges even when mini-batches of data are used at each iteration (i.e., stochastic optimization). Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds. In addition, our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes. Our numerical experiments show that the proposed algorithm consistently offers significant performance gains over the state-of-the-art baselines, and can be applied to larger-scale problems with non-binary target/sensitive attributes.
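For orientation, the standard mechanism that mini-batch DP training builds on is DP-SGD: clip each per-example gradient to a fixed norm, sum, and add Gaussian noise before the update. The sketch below shows that generic recipe on a logistic loss; it is not the authors' fair-learning algorithm (the fairness term is omitted entirely), and all hyperparameters and data are invented for illustration.

```python
import numpy as np

def dp_sgd_step(w, X, y, rng, lr=0.1, clip=1.0, noise_mult=1.0):
    """One DP-SGD step on logistic loss: per-example gradient clipping
    plus Gaussian noise, the standard recipe from Abadi et al. (2016)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))                # predicted probabilities
    grads = (p - y)[:, None] * X                      # per-example gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip)     # clip each example to `clip`
    noisy = grads.sum(0) + rng.normal(0, noise_mult * clip, w.shape)
    return w - lr * noisy / len(X)

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))
y = (X[:, 0] > 0).astype(float)   # label depends only on coordinate 0
w = np.zeros(3)
for _ in range(50):               # each step is one noisy mini-batch update
    w = dp_sgd_step(w, X, y, rng=rng)
print(w)                          # coordinate 0 dominates despite the noise
```

Clipping bounds each individual's influence on the update, which is what makes the Gaussian noise calibration (and hence the privacy accounting) possible even with small mini-batches.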
Biography:
Meisam Razaviyayn is an associate professor of Industrial and Systems Engineering, Computer Science, Quantitative and Computational Biology, and Electrical Engineering at the University of Southern California. He is also the associate director of the USC-Meta Center for Research and Education in AI and Learning. Prior to joining USC, he was a postdoctoral research fellow in the Department of Electrical Engineering at Stanford University. He received his PhD in Electrical Engineering with a minor in Computer Science at the University of Minnesota. He obtained his M.Sc. degree in Mathematics from the University of Minnesota. Meisam Razaviyayn is the recipient of the 2022 NSF CAREER Award, the 2022 Northrop Grumman Excellence in Teaching Award, the 2021 AFOSR Young Investigator Award, the 2021 3M Nontenured Faculty award, 2020 ICCM Best Paper Award in Mathematics, IEEE Data Science Workshop Best Paper Award in 2019, the Signal Processing Society Young Author Best Paper Award in 2014, and the finalist for Best Paper Prize for Young Researcher in Continuous Optimization in 2013 and 2016. He is also the silver medalist of Iran’s National Mathematics Olympiad. His research interests include the design and the study of the fundamental aspects of optimization algorithms that arise in the modern data science era.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Seminar Series 2023/2024
Resource Management and Runtime Reconfiguration for Distributed Streaming Systems
Location
Speaker:
Professor Richard T. B. Ma
Associate Professor
School of Computing, National University of Singapore
Abstract:
Due to the long-run and unpredictable nature of stream processing, any statically configured execution of stream jobs fails to process data in a timely and efficient manner. To achieve performance requirements, stream jobs need to be reconfigured dynamically.
In the first part of the talk, we will discuss DRS, a dynamic resource scaling framework for cloud-based stream data analytics systems. DRS overcomes three fundamental challenges: 1) how to model the relationship between the provisioned resources and the application performance, 2) where to best place resources, and 3) how to measure the system load with minimal overhead. In particular, DRS includes an accurate performance model based on the theory of Jackson open queueing networks and is capable of handling arbitrary operator topologies, possibly with loops, splits, and joins. Extensive experiments with real data show that DRS is capable of detecting sub-optimal resource allocation and making quick and effective resource adjustment.
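As a toy illustration of the kind of queueing-theoretic model involved (a generic Jackson-network calculation, not DRS itself), one can treat each operator as an M/M/1 station, solve the traffic equations for effective arrival rates, and read off per-operator utilization and expected end-to-end latency. The rates and routing below are invented for the example.

```python
import numpy as np

# Toy Jackson open network for a 3-operator streaming job.
gamma = np.array([10.0, 0.0, 0.0])      # external arrivals (tuples/sec) at op 0
P = np.array([[0.0, 0.6, 0.4],          # op 0 splits its output to ops 1 and 2
              [0.0, 0.0, 1.0],          # op 1 feeds op 2 (a join at op 2)
              [0.0, 0.0, 0.0]])         # op 2 emits final results
mu = np.array([15.0, 12.0, 20.0])       # provisioned service rates

# Traffic equations: lambda = gamma + P^T lambda.
lam = np.linalg.solve(np.eye(3) - P.T, gamma)
assert np.all(lam < mu), "unstable: provision more parallelism"

rho = lam / mu                          # per-operator utilization
T = 1.0 / (mu - lam)                    # M/M/1 mean sojourn time per station
# Little's law: total jobs in system / external arrival rate = mean latency.
latency = float(np.sum(lam * T) / gamma.sum())
print(rho, latency)                     # utilizations [0.67, 0.5, 0.5]; 0.4 s
```

A scaling controller in this spirit would flag the most-utilized operator (here op 0) as the first place to add parallel instances when latency targets are violated.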
In the second part of the talk, we present Trisk, a control plane that supports versatile reconfigurations while keeping high efficiency with easy-to-use programming APIs. Trisk enables versatile reconfigurations with usability based on a task-centric abstraction, and encapsulates primitive operations such that reconfigurations can be described by compositing the primitive operations on the abstraction. Trisk adopts a partial pause-and-resume design for efficiency, through which synchronization mechanisms in the native stream systems can further be leveraged. We implement Trisk on Apache Flink and demonstrate its usage and performance under realistic application scenarios. We show that Trisk executes reconfigurations with shorter completion time and comparable latency compared to a state-of-the-art fluid mechanism for state management.
Biography:
Prof. Richard T. B. Ma received the B.Sc. (Hons.) degree in computer science and M.Phil. degree in computer science and engineering from The Chinese University of Hong Kong in 2002 and 2004, respectively, and the Ph.D. degree in electrical engineering from Columbia University in 2010. During his Ph.D. study, he worked as a Research Intern at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA, and Telefonica Research, Barcelona, Spain. From 2010–2014, he worked as a Research Scientist at the Advanced Digital Science Center (ADSC), University of Illinois at Urbana–Champaign, Champaign, IL, USA. He is currently an Associate Professor with the School of Computing, National University of Singapore. His current research interests include distributed systems and network economics. He was a recipient of the Best Paper Award Runners-up from the ACM Mobihoc 2020 and a co-recipient of the Best Paper Award from the IEEE IC2E 2013, the IEEE ICNP 2014, and the IEEE Workshop on Smart Data Pricing 2015. He is a Senior Member of ACM and IEEE.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Neural Acceleration with Full Stack Optimization
Location
Speaker:
Professor Meng Li
Assistant Professor
Institute for Artificial Intelligence, School of Integrated Circuits, Peking University
Abstract:
Recent years have witnessed the fast evolution of AI and deep learning (DL) in fields such as computer vision and natural language processing. Though promising, DL faces serious challenges due to exponential network scaling and network heterogeneity. In this talk, I will discuss some of our recent works that leverage network/hardware co-design and co-optimization to improve the efficiency of DL. I will cover our recent works on tiny language models for MCUs, memory-aware scheduling, and hardware accelerator designs based on a new computing paradigm, stochastic computing. I will also discuss interesting future directions to further improve the efficiency and security of AI.
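As a taste of the stochastic-computing paradigm mentioned above: a value p in [0, 1] is encoded as a random bitstream with P(bit = 1) = p, and multiplying two independent values reduces to a bitwise AND of their streams. A minimal sketch of that core trick (a generic illustration, not the speaker's accelerator design):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                       # bitstream length: accuracy ~ 1/sqrt(N)
a, b = 0.6, 0.5

sa = rng.random(N) < a            # bitstream encoding a: P(bit=1) = 0.6
sb = rng.random(N) < b            # independent bitstream encoding b
prod = (sa & sb).mean()           # bitwise AND decodes to approx a*b = 0.3
print(prod)
```

The appeal for hardware is that a multiplier becomes a single AND gate per bit; the cost is that precision grows only with stream length, hence the co-design trade-offs discussed in the talk.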
Biography:
Prof. Meng Li is currently a tenure-track assistant professor at Peking University, jointly affiliated with the Institute for Artificial Intelligence and the School of Integrated Circuits. Before joining Peking University, he was a staff research scientist and tech lead at Meta Reality Labs, focusing on research and productization of efficient AI algorithms and hardware/systems for next-generation AR/VR devices. Dr. Li received his Ph.D. degree from the University of Texas at Austin in 2018 and his bachelor’s degree from Peking University in 2013.
Prof. Meng Li’s research interests lie in the field of efficient and secure multi-modal AI acceleration algorithms and hardware. He has published more than 60 papers and received two best paper awards, from HOST 2017 and GLSVLSI 2018. He has also received the EDAA Outstanding Dissertation Award, First Place in the ACM Student Research Competition Grand Final (Graduate Category), and a Best Poster Award at the ASPDAC Student Research Forum, among others.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Qwen: Towards a Generalist Model
Location
Speaker:
Mr. Junyang Lin
Staff Engineer, Leader of Qwen Team,
Alibaba Group
Abstract:
This talk introduces Qwen, the large language and multimodal model series whose name stands for Tongyi Qianwen (通义千问), published and open-sourced by Alibaba Group. The Qwen models have achieved competitive performance against both open-source and proprietary LLMs and LMMs in both benchmark and human evaluations. This talk provides a brief overview of the model series, then delves into the details of building the LLMs and LMMs, including pretraining, alignment, multimodal extension, and the open-source release. Additionally, it points out current limitations and discusses future work for both the research community and industry in this field.
Biography:
Mr. Junyang Lin is a staff engineer at Alibaba Group, where he leads the Qwen Team. He has been doing research in natural language processing and multimodal representation learning, with a focus on large-scale pretraining, and his work has around 3,000 citations. Recently his team released and open-sourced the Qwen series, including the large language model Qwen, the large vision-language model Qwen-VL, and the large audio-language model Qwen-Audio. Previously, he focused on large-scale multimodal pretraining and developed the open-source models OFA, Chinese-CLIP, and others.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Classical simulation of one-query quantum distinguishers
Location
Speaker:
Professor Andrej Bogdanov
Professor
School of Electrical Engineering and Computer Science, University of Ottawa
Abstract:
A distinguisher is an algorithm that tells whether its input was sampled from one distribution or from another. The computational complexity of distinguishers is important for much of cryptography, pseudorandomness, and statistical inference.
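Concretely, a distinguisher's advantage is |Pr[A(D0) = 1] − Pr[A(D1) = 1]|, and it can be estimated empirically by sampling. A toy sketch with made-up distributions (fair versus biased bits over n-bit strings) and a majority-vote distinguisher:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 15, 200_000

def majority_accepts(samples):
    """Distinguisher A: output 1 iff a majority of the n bits are 1."""
    return (samples.sum(axis=1) > n // 2).mean()

d0 = rng.random((trials, n)) < 0.5   # D0: fair bits
d1 = rng.random((trials, n)) < 0.6   # D1: each bit biased toward 1

# Empirical advantage |Pr[A(D0)=1] - Pr[A(D1)=1]|, here roughly 0.29.
adv = abs(majority_accepts(d0) - majority_accepts(d1))
print(adv)
```

The quantum-versus-classical question in the abstract asks how much of this advantage survives when the distinguisher is restricted to k classical queries instead of one quantum query.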
We study the relative advantage of classical and quantum distinguishers of bounded query complexity over n-bit strings. Our focus is on a single quantum query, which is already quite powerful: Aaronson and Ambainis (STOC 2015) constructed a pair of distributions that is 𝜀-distinguishable by a one-query quantum algorithm, but O(𝜀k/√n)-indistinguishable by any non-adaptive k-query classical algorithm.
We show that every pair of distributions that is 𝜀-distinguishable by a one-query quantum algorithm is distinguishable with k classical queries and (1) advantage min{𝛺(𝜀√(k/n)), 𝛺(𝜀^2k^2/n)} non-adaptively (i.e., in one round), and (2) advantage 𝛺(𝜀^2k/√(n log n)) in two rounds. The second bound is tight in k and n up to a (log n) factor.
Based on joint work with Tsun Ming Cheung (McGill), Krishnamoorthy Dinesh (IIT Palakkad), and John C. S. Lui (CUHK).
Biography:
Prof. Andrej Bogdanov is a professor in the School of Electrical Engineering and Computer Science at the University of Ottawa. He is interested in cryptography, pseudorandomness, and computational complexity. Andrej obtained his Ph.D. from UC Berkeley. Before joining uOttawa he taught at the Chinese University of Hong Kong. He was a visiting professor at the Tokyo Institute of Technology in 2013 and at the Simons Institute for the Theory of Computing in 2017 and 2021.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Compact AI Representations for Game Theory: Models, Computations, and Applications
Location
Speaker:
Professor Hau Chan
Assistant Professor
School of Computing, University of Nebraska-Lincoln
Abstract:
In the last few decades, game theory has become a prominent construct for modeling and predicting outcomes of strategic interactions of rational agents in various real-world environments, ranging from adversarial (e.g., attacker-defender in the security domain) to collaborative (e.g., public good contributions). In turn, these predicted outcomes can be used to facilitate, inform, and improve agents’ and policymakers’ decision-making. Unfortunately, because of the domain characteristics of real-world environments, classical game-theoretic modeling and computational approaches (for predicting outcomes) can often take exponential space and time.
In this talk, I will discuss compact AI representations for strategic interactions (or games) that provide efficient approaches for a wide range of applications. I will demonstrate how they can be used to model and predict outcomes in scenarios we have examined previously, such as attacker-defender settings, resource congestion, residential segregation, and public project contributions.
More specifically, I will first present aggregate games, a compact AI representation of games where each agent’s utility function depends on their own actions and the aggregation or summarization of the actions of all agents, and resource graph games, a compact AI representation of games where agents have exponential numbers of actions. For these games, I will then present our computational results for determining and computing Nash Equilibria (NE), a fundamental solution concept to specify predicted outcomes in games, and their related problems.
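A minimal example of an aggregate game is linear Cournot competition, where each player's profit depends only on its own quantity and the aggregate output, and iterated best responses converge to the unique Nash equilibrium. The sketch below is a generic textbook illustration with invented parameters, not one of the representations from the talk:

```python
# Linear Cournot: player i chooses quantity q_i, price is a - b * Q with
# Q = sum(q), unit cost c. Profit_i = q_i * (a - b*Q) - c*q_i depends on
# q_i and the aggregate Q only -- the defining property of an aggregate game.
a, b, c, n = 10.0, 1.0, 2.0, 3

q = [0.0] * n
for _ in range(100):                  # sequential (Gauss-Seidel) best responses
    for i in range(n):
        others = sum(q) - q[i]
        # Best response maximizes q_i * (a - b*(q_i + others)) - c*q_i.
        q[i] = max(0.0, (a - c - b * others) / (2 * b))
print(q)                              # each converges to (a-c)/(b*(n+1)) = 2.0
```

Because the best response needs only the aggregate of the other players' actions, the per-iteration cost stays polynomial even as the number of players grows, which is exactly the structure compact representations exploit.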
Biography:
Prof. Hau Chan is an assistant professor in the School of Computing at the University of Nebraska-Lincoln. He received his Ph.D. in Computer Science from Stony Brook University in 2015 and completed three years of Postdoctoral Fellowships, including at the Laboratory for Innovation Science at Harvard University in 2018. His main research areas focus on modeling and algorithmic aspects of AI and multi-agent interactions (e.g., via game theory, mechanism design, and applied machine learning), addressing several cross-disciplinary societal problems and applications. His recent application areas include improving accessibility to public facilities, reducing substance usage, and making fair collective decisions. His research has been supported by NSF, NIH, and USCYBERCOM. He has received several Best Paper Awards at SDM and AAMAS and distinguished/outstanding SPC/PC member recognitions at IJCAI and WSDM. He has given tutorials and talks on computational game theory and mechanism design at venues such as AAMAS and IJCAI, including an Early Career Spotlight at IJCAI 2022. He has served as a co-chair for Demonstrations, Doctoral Consortium, Scholarships, and Diversity & Inclusion Activities at AAMAS and IJCAI.
Join Zoom Meeting:
https://cuhk.zoom.us/j/93836939970
Meeting ID: 938 3693 9970
Passcode: 202300
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Cryo-Electron Microscopy Image Analysis: from 2D class averaging to 3D reconstruction
Location
Speaker:
Professor Zhizhen Zhao
William L. Everitt Fellow and Associate Professor
Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Abstract:
Cryo-electron microscopy (EM) single particle reconstruction is an entirely general technique for 3D structure determination of macromolecular complexes. This talk focuses on the algorithms for 2D class averaging and 3D reconstruction for the single-particle images, assuming no conformation changes of the macromolecules. In the first part, I will introduce the multi-frequency vector diffusion maps to improve the efficiency and accuracy of cryo-EM 2D image classification and denoising. This framework incorporates different irreducible representations of the estimated alignment between similar images. In addition, we use a graph filtering scheme to denoise the images using the eigenvalues and eigenvectors of the MFVDM matrices. In the second part, I will present a 3D reconstruction approach, which follows a line of works starting from Kam (1977) that employs the autocorrelation analysis for the single particle reconstruction. Our approach does not require per image pose estimation and imposes spatial non-negativity constraint. At the end of the talk, I will briefly review the challenges and existing approaches for addressing the continuous heterogeneity in cryo-EM data.
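The graph-filtering idea can be illustrated generically: build a similarity graph over noisy observations and project onto the smoothest Laplacian eigenvectors, a low-pass filter that shares information across similar samples. The sketch below uses a plain graph Laplacian on synthetic 1D signals; the actual MFVDM construction additionally encodes rotational-alignment information between images, which is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 40)
clean = np.sin(t)
noisy = clean[None, :] + 0.3 * rng.normal(size=(60, 40))  # 60 noisy copies

# Similarity graph over the 60 samples (Gaussian kernel on pairwise distances).
d2 = ((noisy[:, None, :] - noisy[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
L = np.diag(W.sum(1)) - W                 # unnormalized graph Laplacian

# Low-pass filter: keep only the 5 smoothest Laplacian eigenvectors.
vals, vecs = np.linalg.eigh(L)
U = vecs[:, :5]
denoised = U @ (U.T @ noisy)              # project each column onto span(U)

err_before = np.abs(noisy - clean).mean()
err_after = np.abs(denoised - clean).mean()
print(err_before, err_after)              # filtering reduces the error
```

Since all 60 samples share the same underlying signal, the smooth graph modes capture the signal while discarding most of the per-sample noise; the frequency-aware filters in the talk refine this by weighting eigenvalues rather than hard truncation.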
Biography:
Prof. Zhizhen Zhao is an Associate Professor and William L. Everitt Fellow in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. She joined the University of Illinois in 2016. From 2014 to 2016, she was a Courant Instructor at the Courant Institute of Mathematical Sciences, New York University. She received B.A. and M.Sc. degrees in physics from Trinity College, Cambridge University in 2008, and a Ph.D. degree in physics from Princeton University in 2013. She is a recipient of an Alfred P. Sloan Research Fellowship (2020). Her research interests include computational imaging, data science, and machine learning.
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Structure for Scalable Verification
Location
Speaker:
Dr. Lauren Pick
Postdoctoral Researcher
Department of Computer Sciences, University of Wisconsin-Madison and Department of Electrical Engineering and Computer Sciences, University of California, Berkeley
Abstract:
Given the critical role of software systems in society, it is important that we understand how such systems behave and interact. Formal specifications can help us in this task by providing rigorous and unambiguous descriptions of system behaviors. Automated verification can then prove that these formal specifications hold for software systems, making it easier to ensure that the underlying systems function as intended. Unfortunately, the application of automated verification to real-world systems remains hindered by scalability limitations. In this talk, I describe my work on addressing these limitations by leveraging the problem-specific structure of specifications and systems. I illustrate this approach on concrete problems in the security and distributed-systems domains, where taking advantage of structure enables scalable verification.
Biography:
Dr. Lauren Pick is a postdoctoral researcher at the University of California, Berkeley and the University of Wisconsin-Madison. She received her Ph.D. from Princeton University in January 2022. Her research focuses on developing techniques for automated verification and synthesis, with the goal of enabling formal reasoning about real-world systems. To this end, she has developed techniques that take advantage of structural aspects of target systems and their desired properties to enable efficient verification and synthesis. She is a Computing Innovation Fellow and a recipient of an NSF Graduate Research Fellowship.
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Shape Geometric Processing and Analysis of Large Aviation Equipment
Location
Speaker:
Professor Mingqiang Wei
Professor
School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics (NUAA)
Abstract:
Large aircraft are among the most complex high-end equipment in modern society: the culmination of interdisciplinary, cross-domain advanced technologies, occupying the top of the manufacturing industry’s technology and value chains. With the emergence of a batch of national key equipment such as the Y-20, C919, and Jiaolong-600, China has made breakthrough progress in large aircraft manufacturing and has gradually established a relatively complete production and development system. However, owing to an insufficient technological foundation, Chinese aviation enterprises, compared with international aerospace manufacturing giants, have not yet achieved integrated manufacturing and measurement capabilities or effective precision control. High-precision rapid 3D scanning analysis and quality control technology has therefore become an important factor affecting the development of large aircraft in China. Geometric deep learning, with its powerful ability to learn geometric features, has shown great potential for analyzing large aircraft shapes. However, existing network architectures lack aviation-specific domain expertise, there is no publicly available large-scale 3D aircraft dataset, and the latest machine learning techniques have not been deeply integrated into geometric deep learning, making it difficult to comprehensively and efficiently analyze the complex features of large aircraft shapes and meet their stringent accuracy requirements. This talk will introduce the interdisciplinary technical issues involved in the analysis of large aircraft shapes.
Biography:
Prof. Mingqiang Wei received his Ph.D. degree (2014) in Computer Science and Engineering from The Chinese University of Hong Kong (CUHK). He is a professor at the School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics (NUAA). He was a recipient of the Excellent Youth Fund Project of the National Natural Science Foundation of China in 2023. Before joining NUAA, he served as an assistant professor at Hefei University of Technology and as a postdoctoral fellow at CUHK. He received the CUHK Young Scholar Thesis Award in 2014. He is now an Associate Editor for ACM TOMM, The Visual Computer, and the Journal of Electronic Imaging, and a Guest Editor for IEEE Transactions on Multimedia. His research interests focus on 3D vision, computer graphics, and deep learning.
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Looking behind the Seen
Location
Speaker:
Professor Alexander Schwing
Associate Professor
Department of Electrical and Computer Engineering & Department of Computer Science, University of Illinois at Urbana-Champaign
Abstract:
Our goal is to develop methods which anticipate. For this, four foundational questions need to be answered: (1) How can methods accurately forecast high-dimensional observations?; (2) How can algorithms holistically understand objects, e.g., when reasoning about occluded parts?; (3) How can accurate probabilistic models be recovered from limited amounts of labeled data and for rare events?; and (4) How can autonomous agents be trained effectively to collaborate?
In this talk we present vignettes of our research addressing these questions. We start by discussing MaskFormer and Mask2Former, recent architectures that achieve state-of-the-art results on three tasks: panoptic, instance, and semantic segmentation. We then discuss the importance of memory for video object segmentation and its combination with foundation models for open-world segmentation. Finally, and if time permits, we discuss SDFusion, a generative model to infer parts of an object that are unobserved. For additional info and questions, please browse to http://alexander-schwing.de.
Biography:
Prof. Alexander Schwing is an Associate Professor at the University of Illinois at Urbana-Champaign working with talented students on computer vision and machine learning topics. He received his B.S. and diploma in Electrical Engineering and Information Technology from Technical University of Munich in 2006 and 2008 respectively, and obtained a PhD in Computer Science from ETH Zurich in 2014. Afterwards he joined University of Toronto as a postdoctoral fellow until 2016. His research interests are in the area of computer vision and machine learning, where he has co-authored numerous papers on topics in scene understanding, inference and learning algorithms, deep learning, image and language processing and generative modeling. His PhD thesis was awarded an ETH medal and his team’s research was awarded an NSF CAREER award. For additional info, please browse to http://alexander-schwing.de.
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Open-Source Accelerator-Based Edge AI Architectures for a Sustainable World
Location
Speaker:
Professor David Atienza
Professor
Department of Electrical and Computer Engineering, The École Polytechnique Fédérale de Lausanne (EPFL)
Abstract:
Edge computing is becoming an essential concept across multiple domains as our world grows increasingly connected to realize the Internet of Things (IoT). In addition, the new wave of Artificial Intelligence (AI), particularly complex Machine Learning (ML) and Deep Learning (DL) models, is demanding new computing paradigms beyond traditional general-purpose computing to make IoT a viable reality in a sustainable world.
In this seminar, Prof. Atienza will discuss new approaches to effectively design the next generation of edge AI computing architectures by taking inspiration from how biological computing systems operate. These novel bio-inspired edge AI architectures rest on two key concepts. First, they accept computing inexactness and integrate multiple computing acceleration engines and low-power design principles in a new open-source eXtended and Heterogeneous Energy-Efficient hardware Platform (x-HEEP). Second, x-HEEP can be instantiated for different edge AI application domains to operate ensembles of neural networks that improve the robustness of ML/DL outputs at the system level, while minimizing memory and computation resources for the target application. x-HEEP instantiations for edge AI applications have included in-memory computing and run-time reconfigurable coarse-grained accelerators that minimize energy according to the precision required by the target application.
Biography:
Prof. David Atienza is a professor of Electrical and Computer Engineering, and leads both the Embedded Systems Laboratory (ESL) and the new EcoCloud Sustainable Computing Center at EPFL, Switzerland. He received his M.Sc. and Ph.D. degrees in Computer Science and Engineering from UCM (Spain) and IMEC (Belgium). His research interests include system-level design methodologies for high-performance multi-processor system-on-chip (MPSoC) and low-power Internet-of-Things (IoT) systems, including edge AI architectures for wearables and IoT systems as well as thermal-aware designs for MPSoCs and many-core servers. He is a co-author of more than 400 papers, two books, and has 14 licensed patents in these topics. He served as DATE General Chair and Program Chair, and is currently Editor-in-Chief of IEEE TCAD. Among others, Dr. Atienza has received the ICCAD 10-Year Retrospective Most Influential Paper Award, the DAC Under-40 Innovators Award, the IEEE TC-CPS Mid-Career Award, and the ACM SIGDA Outstanding Faculty Award. He is a Fellow of IEEE, a Fellow of ACM, served as IEEE CEDA President (period 2018-2019), and he is currently the Chair of the European Design Automation Association (EDAA).
Enquiries: Jeff Liu (jeffliu@cse.cuhk.edu.hk)
Heads-Up Computing: Towards The Next Generation Interactive
Location
Speaker:
Prof. Shengdong Zhao
Associate Professor
Department of Computer Science, National University of Singapore
Abstract:
Heads-up computing is an emerging concept in human-computer interaction (HCI) that focuses on natural and intuitive interaction with technology. By making technology more seamlessly integrated into our lives, heads-up computing has the potential to revolutionize the way we interact with devices. With the rise of large language models (LLMs) such as ChatGPT and GPT-4, the vision of heads-up computing is becoming much easier to realize. The combination of LLMs and heads-up computing can create more proactive, personalized, and responsive systems that are more human-centric. However, technology is a double-edged sword. While technology provides us with great power, it also comes with the responsibility to ensure that it is used ethically and for the benefit of all. That is why it is essential to place fundamental human values at the center of research programs and to work collaboratively across disciplines. As we navigate this historic transition, it is crucial to shape a future that reflects our values and enhances our quality of life.
Biography:
Dr. Shengdong Zhao is an Associate Professor in the Department of Computer Science at the National University of Singapore, where he established and leads the NUS-HCI research lab. He received his Ph.D. degree in Computer Science from the University of Toronto and a Master’s degree in Information Management & Systems from the University of California, Berkeley. With a wealth of experience in developing new interface tools and applications, Dr. Zhao regularly publishes his research in top-tier HCI conferences and journals. He has also worked as a senior consultant with the Huawei Consumer Business Group in 2017. In addition to his research, Dr. Zhao is an active member of the HCI community, frequently serving on program committees for top HCI conferences and as the paper chair for the ACM SIGCHI 2019 and 2020 conferences. For more information about Dr. Zhao and the NUS-HCI lab, please visit http://www.shengdongzhao.com and http://www.nus-hci.org .
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
Robust AI for Security
Location
Speaker:
Prof. Yizheng Chen
Assistant Professor
Department of Computer Science, University of Maryland
Abstract:
Artificial Intelligence is becoming more powerful than ever: GitHub Copilot suggests code to developers, and Large Language Model (LLM) plugins will soon assist with many tasks in our daily lives. We can harness the power of AI to solve security problems, but such AI needs to be robust against new attacks and new vulnerabilities.
In this talk, I will first discuss how to develop robust AI techniques for malware detection. Our research finds that after training an Android malware classifier on one year’s worth of data, its F1 score quickly dropped from 0.99 to 0.76 after six months of deployment on new test samples. I will present new methods that make machine learning for Android malware detection more robust to data distribution shift. My vision is that continuous learning with a human-in-the-loop setup can achieve robust malware detection. Our results show that, to maintain a steady F1 score over time, we can achieve an 8X reduction in the labels needed from security analysts.
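The label-reduction idea, asking analysts to label only the most informative new samples, can be sketched with generic uncertainty sampling. This is an assumption for illustration; the talk’s actual sample-selection strategy may differ.

```python
import numpy as np

def select_for_labeling(malware_probs, budget):
    """Pick the `budget` samples whose predicted malware probability is
    closest to the 0.5 decision boundary, i.e. the samples the classifier
    is least sure about. Only these go to human analysts for labeling;
    confidently classified samples are skipped, saving analyst effort.
    """
    uncertainty = np.abs(np.asarray(malware_probs, dtype=float) - 0.5)
    # Indices sorted from most uncertain (closest to 0.5) to least.
    return np.argsort(uncertainty)[:budget].tolist()
```

In a continuous-learning loop, the newly labeled samples would be added to the training set before the next periodic retraining, which is one way a steady F1 score could be maintained under distribution shift.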
Next, I will discuss the potential of using large language models to solve security problems, using vulnerable source code detection as a case study. We propose and release a new vulnerable source code dataset, DiverseVul. Using the new dataset, we study 11 model architectures belonging to 4 families for vulnerability detection. Our results indicate that developing code-specific pre-training tasks is a promising research direction of using LLMs for security. We demonstrate an important generalization challenge for the deployment of deep learning-based models.
In closing, I will discuss security issues of LLMs and future research directions.
Biography:
Yizheng Chen is an Assistant Professor of Computer Science at University of Maryland. She works at the intersection of AI and security. Her research focuses on AI for Security and robustness of AI models. Previously, she received her Ph.D. in Computer Science from the Georgia Institute of Technology, and was a postdoc at University of California, Berkeley and Columbia University. Her work has received an ACM CCS Best Paper Award Runner-up and a Google ASPIRE Award. She is a recipient of the Anita Borg Memorial Scholarship.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
Geometric Robot Learning for Generalizable Skills Acquisition
Location
Speaker:
Prof. Xiaolong Wang
Associate Professor
Department of Electrical and Computer Engineering, University of California, San Diego
Abstract:
Robot learning has witnessed significant progress in terms of generalization in the past few years. At the heart of such generalization, advances in representation learning, such as image and text foundation models, play an important role. While these achievements are encouraging, most tasks conducted so far are relatively simple. In this talk, I will present our recent efforts on learning generalizable skills, focusing on tasks with complex physical contacts and geometric reasoning. Specifically, I will discuss our research on: (i) the use of a large number of low-cost, binary force sensors to enable Sim2Real manipulation; and (ii) unifying 3D and semantic representation learning to generalize policy learning across diverse objects and scenes. I will showcase the real-world applications of our research, including dexterous manipulation, language-driven manipulation, and legged locomotion control.
Biography:
Xiaolong Wang is an Assistant Professor in the ECE department at the University of California, San Diego, affiliated with the TILOS NSF AI Institute. He received his Ph.D. in Robotics at Carnegie Mellon University. His postdoctoral training was at the University of California, Berkeley. His research focuses on the intersection between computer vision and robotics. His specific interest lies in learning 3D and dynamics representations from videos and physical robotic interaction data. These comprehensive representations are utilized to facilitate the learning of robot skills, with the goal of generalizing the robot to interact effectively with a wide range of objects and environments in the real physical world. He is the recipient of the NSF CAREER Award, Intel Rising Star Faculty Award, and Research Awards from Sony, Amazon, Adobe, and Cisco.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
Disentangled Representation from Generative Networks
Location
Speaker:
Dr. LIU Sifei
Abstract:
Disentangled representation in computer vision refers to encoding visual data into distinct, independent factors. These representations are critical for enhancing interpretability, improving generalization across tasks, and enabling controlled manipulation of specific visual attributes. Learning disentangled representation is challenging, primarily because obtaining ground-truth factorizations is often elusive.
In this talk, I will discuss our latest efforts to extract disentangled representations from GANs and diffusion models, for both 2D images and 3D textured shapes. I will demonstrate how, in the absence of annotations, our approaches can discern and extract fine-grained structural information, such as correspondence maps, in a self-supervised manner. Building on this space, I will introduce our work on a generalizable network designed for controlled generation and editing in a feed-forward paradigm. Additionally, I will spotlight our recent exploration into generating hand-object interactions, leveraging the disentanglement of layout and content through image diffusion models.
Biography:
Dr. LIU Sifei is a staff-level Senior Research Scientist at NVIDIA, where she is part of the LPR team led by Jan Kautz. Her work primarily revolves around developing generalizable visual representations and data-efficient learning for images, videos, and 3D content. Prior to this, she pursued her Ph.D. at the VLLAB under the guidance of Ming-Hsuan Yang. Sifei has received several prestigious awards and recognitions: the Baidu Graduate Fellowship in 2013, the NVIDIA Pioneering Research Award in 2017, and the EECS Rising Star accolade in 2019. Additionally, she was nominated for the VentureBeat Women in AI Award in 2020.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
Towards Scalable, Secure and Privacy-Preserving Metaverse
Location
Speaker:
Prof. DAI Hong-Ning
Associate Professor
Department of Computer Science, Hong Kong Baptist University (HKBU)
Abstract:
The metaverse is essentially constructed from multiple technologies, including virtual reality (VR), augmented reality (AR), mixed reality (MR), artificial intelligence (AI), digital twins (DT), blockchain, and 5G communications. Its advent has spurred a proliferation of VR/AR apps on diverse VR/AR devices, such as Meta Quest 2, MS HoloLens, Sony PlayStation VR, ByteDance Pico, and Apple Vision Pro. Meanwhile, diverse metaverse applications have emerged in gaming, healthcare, industry, the creator economy, and digital arts. However, the development of the metaverse is still at an early stage because of the complexity and heterogeneity of the entire system, which cannot yet scale to serve the increasing number of participants or meet the stringent demands of metaverse applications. Moreover, emerging security vulnerabilities and privacy-leakage concerns have also prevented the metaverse from wide adoption. In this talk, I will first briefly review the metaverse and its enabling technologies. I will then elaborate on its challenges as well as potential solutions. Finally, I will discuss several future directions in this promising area.
Biography:
Hong-Ning Dai is an associate professor in the Department of Computer Science, Hong Kong Baptist University (HKBU). He obtained a Ph.D. degree in Computer Science and Engineering from The Chinese University of Hong Kong. Before joining HKBU, he accumulated more than ten years of academic experience at The Chinese University of Hong Kong, the Macau University of Science and Technology (Macau), and Lingnan University (Hong Kong). His current research interests include the Internet of Things, blockchain, and big data analytics. Prof. Dai has published more than 200 papers in refereed journals and conferences, and his publications have received more than 15,000 citations. He was included in Stanford University’s list of the world’s top 2% scientists for career-long impact (2021, 2022), and was conferred the AI 2000 Most Influential Scholar Award (Honorable Mention) in Internet of Things in 2023. He holds one U.S. patent and is a senior member of both IEEE and ACM. Prof. Dai has served as an associate editor for IEEE Communications Surveys & Tutorials, IEEE Transactions on Intelligent Transportation Systems, IEEE Transactions on Industrial Informatics, IEEE Transactions on Industrial Cyber-Physical Systems, Ad Hoc Networks (Elsevier), and Connection Science (Taylor & Francis).
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
The da Vinci Research Kit: System Description, Research Highlights, and Surgical Robotics Challenge
Location
Speaker:
Prof. Peter Kazanzides
Research Professor
Department of Computer Science, Johns Hopkins University
Abstract:
The da Vinci Research Kit (dVRK) is an open research platform that couples open-source control electronics and software with the mechanical components of the da Vinci surgical robot. This presentation will describe the dVRK system architecture, followed by selected research enabled by this system, including mixed reality for the first assistant, autonomous camera motion, and force estimation for bilateral teleoperation. The presentation will conclude with an overview of the AccelNet Surgical Robotics Challenge, which includes both simulated and physical environments.
Biography:
Peter Kazanzides received the Ph.D. degree in electrical engineering from Brown University in 1988. He began work on surgical robotics in March 1989 as a postdoctoral researcher at the IBM T.J. Watson Research Center and co-founded Integrated Surgical Systems (ISS) in November 1990. As Director of Robotics and Software at ISS, he was responsible for the design, implementation, validation, and support of the ROBODOC System, which has been used for more than 20,000 hip and knee replacement surgeries. Dr. Kazanzides joined Johns Hopkins University in December 2002 and currently holds an appointment as a Research Professor of Computer Science. His research focuses on computer-integrated surgery, space robotics, and mixed reality.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
Smart Reconfigurable Computing for GNN and Transformer using Agile High Level Synthesis
Location
Speaker:
Dr. HAO Cong, Callie
Assistant Professor
Department of Electrical and Computer Engineering (ECE), Georgia Institute of Technology (GaTech)
Abstract:
In this talk, we introduce two architectures: one for graph neural networks (GNNs), called FlowGNN, and one for vision transformers (ViTs), called Edge-MoE. FlowGNN proposes a generic dataflow architecture for GNN acceleration that supports a wide range of GNN models without graph pre-processing. GNNBuilder is then introduced as an automated, end-to-end GNN accelerator generation framework, allowing the generation of accelerators for various GNN models with minimal overhead. Next, Edge-MoE presents an FPGA accelerator for multi-task Vision Transformers with architectural innovations, achieving improved energy efficiency compared to GPUs and CPUs. The talk demonstrates the performance of these approaches, with code and measurements publicly available. Finally, we briefly introduce LightningSim, a fast simulation tool for High-Level Synthesis (HLS) designs that can significantly improve HLS design simulation speed.
Biography:
Dr. HAO Cong, Callie is an assistant professor in ECE at Georgia Tech. She received the Ph.D. degree in Electrical Engineering from Waseda University in 2017. Her primary research interests lie in the joint area of efficient hardware design and machine learning algorithms, as well as reconfigurable and high-efficiency computing and agile electronic design automation tools.
Join Zoom Meeting:
https://cuhk.zoom.us/j/96351056844?pwd=cDBJcVY3ZHlGMSt2V0FUQVdUVnAwZz09
Meeting ID: 963 5105 6844
Passcode: 471978
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
An Evolution of Learning Neural Implicit Representations for 3D Shapes
Location
Speaker:
Professor ZHANG Hao, Richard
Amazon Scholar, Professor
School of Computing Science, Simon Fraser University, Canada
Abstract:
Neural implicit representations are the immediate precursors to neural radiance fields (NeRF). In a short span of only four years, they have quickly become the representation of choice for learning reconstructive and generative models of 3D shapes. Unlike traditional convolutional neural networks, which have been widely applied to reason about images and video, neural implicit models encode shape boundaries in a continuous manner, leading to superior visual quality; they are also amenable to simple network architectures that facilitate a variety of extensions and adaptations. In this talk, I will recount a brief history of the development of neural implicit representations, focusing mainly on several paths of follow-up from our recent works, including structured implicit models, direct mesh generation, CSG assemblies, and the use of contextual, query-specific feature encoding for category-agnostic and generalizable shape representation learning.
Biography:
ZHANG Hao, Richard is a professor in the School of Computing Science at Simon Fraser University, Canada. Currently, he holds a Distinguished University Professorship and is an Amazon Scholar. Richard earned his Ph.D. from the University of Toronto, and MMath and BMath degrees from the University of Waterloo. His research is in computer graphics and visual computing with special interests in geometric modeling, shape analysis, 3D vision, geometric deep learning, as well as computational design and fabrication. He has published more than 180 papers on these topics, including over 60 articles in SIGGRAPH (+Asia) and ACM Transactions on Graphics (TOG), the top venue in computer graphics. Awards won by Richard include a Canadian Human-Computer Communications Society Achievement Award in Computer Graphics (2022), a Google Faculty Award (2019), a National Science Foundation of China Overseas Outstanding Young Researcher Award (2015), an NSERC Discovery Accelerator Supplement Award (2014), a Best Dataset Award from ChinaGraph (2020), as well as faculty grants/gifts from Adobe, Autodesk, Google, and Huawei. He and his students have won the CVPR 2020 Best Student Paper Award and Best Paper Awards at SGP 2008 and CAD/Graphics 2017.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
Towards predictive spatiotemporal modeling of single cells
Location
Speaker:
Dr. Xiaojie Qiu
Incoming Assistant Professor
Department of Genetics, Department of Computer Science, Stanford University
Abstract:
Single-cell RNA-seq, together with RNA velocity and metabolic labeling, reveals cellular states and transitions at unprecedented resolution. Fully exploiting these data, however, requires kinetic models capable of unveiling governing regulatory functions.
In the first part of my talk, I will introduce an analytical framework, dynamo (https://github.com/aristoteleo/dynamo-release), and highlight its power to overcome fundamental limitations of conventional splicing-based RNA velocity analyses, enabling accurate velocity estimation on a metabolically labeled human hematopoiesis scRNA-seq dataset. Furthermore, differential geometry analyses reveal mechanisms driving early megakaryocyte appearance and elucidate asymmetrical regulation within the PU.1-GATA1 circuit. Leveraging the least-action-path method, dynamo accurately predicts drivers of numerous hematopoietic transitions. Finally, in silico perturbations predict cell-fate diversions induced by gene perturbations. Dynamo thus represents an important step in advancing quantitative and predictive theories of cell-state transitions.
Cells do not live in a vacuum, but in a milieu defined by cell–cell communication that can be quantified via recent advances in spatial transcriptomics. In the second part of my talk, I will present Spateo, a general framework for quantitative spatiotemporal modeling of single-cell resolution spatial transcriptomics. Spateo develops a comprehensive framework of cell-cell interaction to reveal spatial effects of niche factors and cell type-specific ligand-receptor interactions. Furthermore, Spateo reconstructs 3D models of whole embryos and performs 3D morphometric analyses. Lastly, Spateo introduces the concept of a “morphometric vector field” of cell migrations, and integrates spatial differential geometry to unveil regulatory programs underlying various organogenesis patterns of Drosophila. Thus, Spateo enables the study of the ecology of organs at a molecular level in 3D space, beyond isolated single cells.
Biography:
Dr. Xiaojie Qiu is an incoming assistant professor at the Department of Genetics, the BASE program, and the Department of Computer Science at Stanford. Xiaojie’s Ph.D. work at the University of Washington with Dr. Cole Trapnell made substantial contributions to the field of single-cell genomics, exemplified by the development of Monocle 2 and Monocle 3, which accurately and robustly reconstruct complex developmental trajectories from scRNA-seq data. During his postdoc at the Whitehead Institute with Dr. Jonathan Weissman, Xiaojie developed Dynamo (aristoteleo/dynamo-release), which infers absolute RNA velocity from metabolic-labeling-enabled single-cell RNA-seq, reconstructs continuous vector fields that predict the fates of individual cells, employs differential geometry to extract underlying gene regulatory networks, and ultimately predicts optimal reprogramming paths and makes nontrivial in silico perturbation predictions. Recently he also developed Spateo (aristoteleo/spateo-release), a powerful toolkit for advanced multi-dimensional spatiotemporal modeling of single-cell resolution spatial transcriptomics, including novel methods for digitizing spatial layers and columns to identify spatially polar genes.
The Qiu lab at Stanford will officially start on Dec. 16, 2024. Xiaojie will continue leveraging his unique background in single-cell genomics, mathematical modeling, and machine learning to lead a research team that bridges the gap between the “big data” from single-cell and spatial genomics and quantitative, predictive modeling, in order to address fundamental questions in mammalian cell fate transitions, especially in heart development and disease. The lab will pursue four main directions: 1) dissecting the mechanisms of mammalian cell differentiation, reprogramming, and maintenance, including that of cardiac cells, through differentiable deep learning frameworks; 2) integrating multi-omics and harmonizing short-term RNA velocities with long-term lineage tracing, and applying such methods to heart development and congenital heart disease; 3) building predictive in silico 3D spatiotemporal models of mammalian organogenesis with a focus on heart morphogenesis; and 4) establishing a foundational software ecosystem for predictive and mechanistic modeling of single-cell and spatial transcriptomics.
Enquiries: Ms Anna Wong (annawong@cse.cuhk.edu.hk)
Fair and Private Backpropagation: A Scalable Framework for Fair and Private Learning
Location
Speaker:
Meisam Razaviyayn
Associate Professor
University of Southern California
Abstract:
Machine learning models are increasingly used in high-stakes decision-making systems. In such applications, a major concern is that these models sometimes discriminate against certain demographic groups, such as individuals of a particular race, gender, or age. Another major concern in these applications is the violation of users’ privacy. While fair learning algorithms have been developed to mitigate discrimination issues, these algorithms can still leak sensitive information, such as individuals’ health or financial records. Utilizing the notion of differential privacy (DP), prior works aimed at developing learning algorithms that are both private and fair. However, existing algorithms for DP fair learning require a full batch of data in each iteration of the algorithm to be able to impose fairness. Moreover, the fairness/accuracy of the model can degrade significantly in prior DP training algorithms. In this work, we developed a mini-batch (stochastic) differentially private algorithm for fair learning, with theoretical convergence guarantees. Here, the term “stochastic” refers to the fact that our proposed algorithm converges even when mini-batches of data are used at each iteration (i.e., stochastic optimization). Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds. In addition, our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes. Our numerical experiments show that the proposed algorithm consistently offers significant performance gains over the state-of-the-art baselines, and can be applied to larger-scale problems with non-binary target/sensitive attributes.
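To make the two fairness notions mentioned in the abstract concrete, here is a minimal, illustrative Python sketch (not the speaker’s algorithm) that computes the demographic parity gap and the equalized odds gap for a binary classifier with a binary sensitive attribute; the function names and toy data are the author’s own assumptions, not from the paper.

```python
# Illustrative sketch: two group-fairness metrics for binary predictions
# with a binary sensitive attribute (group 0 vs. group 1).

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    rates = []
    for g in (0, 1):
        preds = [p for p, s in zip(y_pred, group) if s == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])

def equalized_odds_gap(y_pred, y_true, group):
    """Worst-case (over true labels) gap in positive-prediction rates,
    i.e. the larger of the true-positive-rate gap and false-positive-rate gap."""
    gaps = []
    for label in (0, 1):
        rates = []
        for g in (0, 1):
            preds = [p for p, s, t in zip(y_pred, group, y_true)
                     if s == g and t == label]
            rates.append(sum(preds) / len(preds))
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy example: 8 individuals, 4 per group.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
y_true = [1, 0, 0, 1, 1, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

dp = demographic_parity_gap(y_pred, group)       # 0.75 vs 0.25 -> 0.5
eo = equalized_odds_gap(y_pred, y_true, group)
```

A DP fair learning method like the one presented would aim to keep such gaps small while training with clipped, noised mini-batch gradients; these metrics only quantify the fairness side of that trade-off.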
Biography:
Meisam Razaviyayn is an associate professor of Industrial and Systems Engineering, Computer Science, Quantitative and Computational Biology, and Electrical Engineering at the University of Southern California. He is also the associate director of the USC-Meta Center for Research and Education in AI and Learning. Prior to joining USC, he was a postdoctoral research fellow in the Department of Electrical Engineering at Stanford University. He received his PhD in Electrical Engineering, with a minor in Computer Science, from the University of Minnesota, where he also obtained an M.Sc. degree in Mathematics. Meisam Razaviyayn is the recipient of the 2022 NSF CAREER Award, the 2022 Northrop Grumman Excellence in Teaching Award, the 2021 AFOSR Young Investigator Award, the 2021 3M Non-Tenured Faculty Award, the 2020 ICCM Best Paper Award in Mathematics, the 2019 IEEE Data Science Workshop Best Paper Award, and the 2014 Signal Processing Society Young Author Best Paper Award, and he was a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization in 2013 and 2016. He is also a silver medalist of Iran’s National Mathematics Olympiad. His research interests include the design and the study of the fundamental aspects of optimization algorithms that arise in the modern data science era.
Enquiries: Mr Jeff Liu (jeffliu@cse.cuhk.edu.hk)