AIBench: An Agile Domain-specific Benchmarking Methodology and an AI Benchmark Suite
About this item
Full title
AIBench: An Agile Domain-specific Benchmarking Methodology and an AI Benchmark Suite
Author / Creator
Gao, Wanling; Tang, Fei; Zhan, Jianfeng; Lan, Chuanxin; Luo, Chunjie; Wang, Lei; Dai, Jiahui; Cao, Zheng; Xiong, Xiongwang; Jiang, Zihan; Hao, Tianshu; Fan, Fanda; Xu, Wen; Zhang, Fan; Huang, Yunyou; Chen, Jianan; Du, Mengjia; Ren, Rui; Chen, Zheng; Zheng, Daoyi; Tang, Haoning; Zhan, Kunlin; Wang, Biao; Kong, Defei; Yu, Minghe; Tan, Chongkang; Li, Huan; Tian, Xinhui; Li, Yatao; Lu, Gang; Shao, Junchao; Wang, Zhenyu; Wang, Xiaoyu and Ye, Hainan
Publisher
Ithaca: Cornell University Library, arXiv.org
Language
English
More information
Scope and Contents
Contents
Domain-specific software and hardware co-design is promising, as it is much easier to achieve efficiency for a smaller set of tasks. Agile domain-specific benchmarking speeds up the co-design process by providing not only relevant design inputs but also relevant metrics and tools. Unfortunately, modern workloads such as big data, AI, and Internet services dwarf traditional workloads in terms of code size, deployment scale, and execution path, and hence raise serious benchmarking challenges. This paper proposes an agile domain-specific benchmarking methodology. Together with seventeen industry partners, we identify ten important end-to-end application scenarios, from which sixteen representative AI tasks are distilled as the AI component benchmarks. We propose permutations of essential AI and non-AI component benchmarks as end-to-end benchmarks; an end-to-end benchmark is a distillation of the essential attributes of an industry-scale application. We design and implement a highly extensible, configurable, and flexible benchmark framework, on the basis of which we propose a guideline for building end-to-end benchmarks and present the first end-to-end Internet service AI benchmark. The preliminary evaluation shows the value of our benchmark suite, AIBench, against MLPerf and TailBench for hardware and software designers, micro-architectural researchers, and code developers. The specifications, source code, testbed, and results are publicly available from the website \url{http://www.benchcouncil.org/AIBench/index.html}.
Authors, Artists and Contributors
Author / Creator
Gao, Wanling
Tang, Fei
Zhan, Jianfeng
Lan, Chuanxin
Luo, Chunjie
Wang, Lei
Dai, Jiahui
Cao, Zheng
Xiong, Xiongwang
Jiang, Zihan
Hao, Tianshu
Fan, Fanda
Xu, Wen
Zhang, Fan
Huang, Yunyou
Chen, Jianan
Du, Mengjia
Ren, Rui
Chen, Zheng
Zheng, Daoyi
Tang, Haoning
Zhan, Kunlin
Wang, Biao
Kong, Defei
Yu, Minghe
Tan, Chongkang
Li, Huan
Tian, Xinhui
Li, Yatao
Lu, Gang
Shao, Junchao
Wang, Zhenyu
Wang, Xiaoyu
Ye, Hainan
Identifiers
Primary Identifiers
Record Identifier
TN_cdi_proquest_journals_2358137434
Permalink
https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_proquest_journals_2358137434
Other Identifiers
E-ISSN
2331-8422