Our brain executes very sparse computation, allowing for great speed and energy savings. Deep neural networks can also be made to exhibit high levels of sparsity without significant accuracy loss. As their size grows, it is becoming imperative that we use sparsity to improve their efficiency. This is a challenging task because the memory systems and SIMD operations that dominate today's CPUs and GPUs do not lend themselves easily to the irregular data patterns sparsity introduces. This talk will survey the role of sparsity in neural network computation, and the parallel algorithms and hardware features that nevertheless allow us to make effective use of it.
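To make the hardware challenge concrete, the sketch below (illustrative only, not material from the talk) contrasts a dense matrix-vector product, whose unit-stride accesses map naturally onto SIMD units and caches, with the same product over a pruned matrix stored in CSR (compressed sparse row) format, where the indirect loads through the column-index array produce the irregular, data-dependent memory accesses the abstract refers to.

```python
# Illustrative sketch: dense vs. CSR sparse matrix-vector multiply.
# The gather through `cols` below is what makes sparse computation
# awkward for SIMD hardware and memory systems.
import numpy as np

def dense_matvec(A, x):
    # Regular, unit-stride accesses: ideal for SIMD and caches.
    return A @ x

def csr_matvec(vals, cols, row_ptr, x):
    # vals / cols / row_ptr are the standard CSR arrays of the nonzeros.
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(row_ptr) - 1):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[cols[k]]   # gather: irregular access into x
    return y

# Tiny example: only the nonzeros of a mostly-zero matrix are stored and touched.
A = np.array([[0., 2., 0.],
              [0., 0., 0.],
              [5., 0., 3.]])
x = np.array([1., 1., 1.])
vals    = np.array([2., 5., 3.])
cols    = np.array([1, 0, 2])
row_ptr = np.array([0, 1, 1, 3])
assert np.allclose(dense_matvec(A, x), csr_matvec(vals, cols, row_ptr, x))
```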
Bio: Nir Shavit received B.Sc. and M.Sc. degrees in Computer Science from the Technion - Israel Institute of Technology in 1984 and 1986, and a Ph.D. in Computer Science from the Hebrew University of Jerusalem in 1990. Shavit is a co-author of the book The Art of Multiprocessor Programming. He is a recipient of the 2004 Gödel Prize in theoretical computer science for his work on applying tools from algebraic topology to model shared memory computability, and of the 2012 Dijkstra Prize in Distributed Computing for the introduction of Software Transactional Memory. For many years his main interests were techniques for designing, implementing, and reasoning about multiprocessor algorithms. These days he is interested in understanding the relationship between deep learning and the ways neural tissue computes, and is part of an effort to do so by extracting connectivity maps of the brain, a field called connectomics.
Wed 6 Mar (displayed time zone: London)
08:30 - 09:30 | PPoPP Keynote (Keynotes) at Pentland | Chair(s): I-Ting Angelina Lee (Washington University in St. Louis, USA)
08:30 (60m) Keynote | PPoPP Keynote: Sparsity in Deep Neural Nets | Nir Shavit (MIT CSAIL)