Indexed by:
Abstract:
The rapid development of deep learning (DL) has driven the invention of artificial intelligence (AI) chips, which combine traditional computing architectures with simulated neural network structures to improve energy efficiency. Recently, emerging deep learning AI chips have posed the challenge of allocating computing resources according to a deep neural network (DNN), so that tasks using the DNN can be processed in a parallel and distributed manner. In this paper, we combine graph theory and combinatorial optimization techniques to devise a fast floorplanning approach based on the kernel graph structure provided by Cerebras Systems Inc. for mapping the layers of a DNN onto the Wafer-Scale Engine (WSE), a mesh of computing units. Numerical experiments on public benchmarks and evaluation criteria demonstrate the performance gain of our method compared to state-of-the-art algorithms.
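To make the floorplanning setting concrete: each DNN layer corresponds to a rectangular kernel that must occupy a block of compute cores on the WSE mesh without overlapping other kernels. The sketch below is a toy shelf-packing heuristic for this placement problem, not the paper's algorithm; the grid dimensions and layer shapes are hypothetical values chosen purely for illustration.

```python
# Toy illustration of placing rectangular DNN-layer kernels on a 2D mesh
# of compute cores (the WSE floorplanning setting).  This is NOT the
# algorithm proposed in the paper; it is a simple shelf-packing heuristic
# with made-up grid and kernel sizes, shown only to clarify the problem.

from typing import List, Tuple, Optional, Dict


def shelf_place(kernels: List[Tuple[str, int, int]],
                grid_w: int, grid_h: int) -> Optional[Dict[str, Tuple[int, int]]]:
    """Place (name, width, height) kernels left-to-right in horizontal shelves.

    Returns a mapping name -> (x, y) of each kernel's lower-left corner,
    or None if the kernels do not fit on the grid with this heuristic.
    """
    placements: Dict[str, Tuple[int, int]] = {}
    x, y, shelf_h = 0, 0, 0
    # Sort by decreasing height so each shelf is opened by its tallest kernel.
    for name, w, h in sorted(kernels, key=lambda k: -k[2]):
        if x + w > grid_w:          # current shelf is full: start a new shelf
            x, y = 0, y + shelf_h
            shelf_h = 0
        if x + w > grid_w or y + h > grid_h:
            return None             # kernel cannot fit on the mesh
        placements[name] = (x, y)
        x += w
        shelf_h = max(shelf_h, h)
    return placements


if __name__ == "__main__":
    # Hypothetical layer kernels: (layer name, cores wide, cores tall).
    layers = [("conv1", 4, 3), ("conv2", 5, 2), ("fc1", 3, 3), ("fc2", 2, 2)]
    print(shelf_place(layers, grid_w=10, grid_h=8))
```

In the actual WSE setting, the objective also accounts for communication between adjacent layers (the kernel graph), which is what motivates the graph-theoretic formulation described in the abstract.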
Keyword:
Reprint Author's Address:
Version:
Source:
2021 IEEE 41ST INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2021)
ISSN: 1063-6927
Year: 2021
Page: 1114-1115
Language: English
Cited Count:
WoS CC Cited Count: 1
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0
WanFang Cited Count:
Chinese Cited Count:
Affiliated Colleges: