FastFold: Optimizing AlphaFold Training and Inference on GPU Clusters
Protein structure prediction helps us understand gene translation and protein function, and is of growing interest and importance in structural biology. The AlphaFold model, which uses a Transformer-based architecture to achieve atomic-level accuracy in protein structure prediction, was a significant breakthrough. However, training and inference of the AlphaFold model remain challenging because of its high computation and memory costs. In this work, we present FastFold, an efficient implementation of AlphaFold for both training and inference. We propose Dynamic Axial Parallelism (DAP), a novel model parallelism method. In addition, we implement a series of low-level optimizations that reduce communication, computation, and memory costs, including Duality Async Operations, highly optimized kernels, and AutoChunk, an automated search algorithm that finds the best chunking strategy to reduce peak memory. Experimental results show that FastFold scales efficiently to more GPUs using DAP, reduces overall training time from 11 days to 67 hours, and achieves a 7.5x to 9.5x speedup for long-sequence inference. Furthermore, AutoChunk reduces memory cost by over 80% during inference by automatically partitioning the intermediate tensors of the computation.
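As a rough illustration of the chunking idea behind AutoChunk, the sketch below evaluates a simplified single-head attention over query chunks, so that only a (chunk, N) score matrix is alive at any time instead of the full (N, N) intermediate. This is a minimal PyTorch sketch under those assumptions: the function name is hypothetical, not FastFold's actual API, and AutoChunk itself searches automatically for where and how to partition rather than relying on a fixed hand-written loop.

```python
import torch

def chunked_attention(x: torch.Tensor, chunk_size: int) -> torch.Tensor:
    # Process queries chunk by chunk: each iteration materializes a
    # (chunk, N) score matrix instead of the full (N, N) one, lowering
    # the peak activation memory. Softmax is row-wise, so chunking over
    # query rows is numerically exact.
    outs = []
    for start in range(0, x.shape[0], chunk_size):
        q = x[start:start + chunk_size]
        scores = q @ x.transpose(-1, -2)            # (chunk, N) intermediate
        outs.append(torch.softmax(scores, dim=-1) @ x)
    return torch.cat(outs, dim=0)

# Chunked evaluation matches the unchunked reference computation.
x = torch.randn(4096, 64)
full = torch.softmax(x @ x.transpose(-1, -2), dim=-1) @ x
assert torch.allclose(chunked_attention(x, chunk_size=512), full, atol=1e-5)
```

The trade-off is between memory and kernel-launch locality: smaller chunks lower the peak but add loop overhead, which is why an automated search over chunking strategies can outperform a fixed chunk size.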
Wed 6 Mar (times shown in the London time zone)
11:30 - 12:10

11:30 (20m) Talk | Main Conference
FastFold: Optimizing AlphaFold Training and Inference on GPU Clusters
Shenggan Cheng (National University of Singapore), Xuanlei Zhao (HPC-AI Tech), Guangyang Lu (HPC-AI Tech), Jiarui Fang (HPC-AI Tech), Tian Zheng (Xi'an Jiaotong University), Ruidong Wu (HeliXon), Xiwen Zhang (HeliXon), Jian Peng (HeliXon), Yang You (National University of Singapore)

11:50 (20m) Talk | Main Conference
AGAThA: Fast and Efficient GPU Acceleration of Guided Sequence Alignment for Long Read Mapping
Seongyeon Park (Seoul National University), Junguk Hong (Seoul National University), Jaeyong Song (Seoul National University), Hajin Kim (Yonsei University), Youngsok Kim (Yonsei University), Jinho Lee (Seoul National University)