Smart Reconfigurable Computing for GNN and Transformer using Agile High Level Synthesis
Dr. HAO Cong, Callie
Department of Electrical and Computer Engineering (ECE), Georgia Institute of Technology (GaTech)
In this talk, we introduce two accelerator architectures: FlowGNN for graph neural networks (GNNs) and Edge-MoE for vision transformers (ViTs). FlowGNN is a generic dataflow architecture for GNN acceleration that supports a wide range of GNN models without graph pre-processing. We then introduce GNNBuilder, an automated, end-to-end GNN accelerator generation framework that produces accelerators for various GNN models with minimal overhead. Next, Edge-MoE presents an FPGA accelerator for multi-task Vision Transformers with architectural innovations, achieving improved energy efficiency over GPU and CPU baselines. The talk demonstrates the performance of these approaches, with code and measurements publicly available. Finally, we briefly introduce LightningSim, a fast simulation tool for High-Level Synthesis (HLS) designs that can significantly accelerate HLS design simulation.
Dr. HAO Cong, Callie is an assistant professor in ECE at Georgia Tech. She received her Ph.D. degree in Electrical Engineering from Waseda University in 2017. Her primary research interests lie at the intersection of efficient hardware design and machine learning algorithms, reconfigurable and high-efficiency computing, and agile electronic design automation tools.
Join Zoom Meeting:
Meeting ID: 963 5105 6844
Enquiries: Ms Anna Wong (email@example.com)