Authors
Robert M Patton, J Travis Johnston, Steven R Young, Catherine D Schuman, Thomas E Potok, Derek C Rose, Seung-Hwan Lim, Junghoon Chae, Le Hou, Shahira Abousamra, Dimitris Samaras, Joel Saltz
Publication date
2019/12/9
Conference paper
2019 IEEE International Conference on Big Data (Big Data)
Pages
1488-1496
Publisher
IEEE
Description
Deep learning, through the use of neural networks, has demonstrated remarkable ability to automate many routine tasks when presented with sufficient data for training. The neural network architecture (e.g., number of layers, types of layers, connections between layers, etc.) plays a critical role in determining what, if anything, the neural network is able to learn from the training data. The trend for neural network architectures, especially those trained on ImageNet, has been to grow ever deeper and more complex. The result has been ever-increasing accuracy on benchmark datasets at the cost of increased computational demands. In this paper we demonstrate that neural network architectures can be automatically generated, tailored for a specific application, with dual objectives: accuracy of prediction and speed of prediction. Using MENNDL, an HPC-enabled software stack for neural architecture search, we …
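To make the dual-objective idea concrete, below is a minimal illustrative sketch of a search that scores candidate architectures on both accuracy and prediction speed and keeps the Pareto-optimal set. It is not MENNDL (which is an evolutionary, HPC-scale system); the architecture encoding, the `evaluate_accuracy`/`evaluate_speed` stubs, and the random-search loop are all assumptions made for illustration.

```python
# Illustrative sketch only: dual-objective architecture search over
# (accuracy, prediction latency). NOT the MENNDL implementation; the
# encoding and evaluation stubs below are hypothetical placeholders.
import random

def sample_architecture():
    """Randomly sample a small CNN-style configuration (hypothetical encoding)."""
    return {
        "num_layers": random.randint(2, 8),
        "filters": random.choice([16, 32, 64, 128]),
        "kernel_size": random.choice([3, 5, 7]),
    }

def evaluate_accuracy(arch):
    """Placeholder: a real search would train and validate the network."""
    return random.uniform(0.5, 0.99)

def evaluate_speed(arch):
    """Placeholder: a real search would time inference; here deeper/wider is slower."""
    return arch["num_layers"] * arch["filters"] * 1e-3  # pseudo-latency in seconds

def dominates(a, b):
    """a Pareto-dominates b: no worse on both objectives, strictly better on one."""
    return (a["acc"] >= b["acc"] and a["lat"] <= b["lat"]
            and (a["acc"] > b["acc"] or a["lat"] < b["lat"]))

population = []
for _ in range(50):
    arch = sample_architecture()
    population.append({"arch": arch,
                       "acc": evaluate_accuracy(arch),
                       "lat": evaluate_speed(arch)})

# Keep the Pareto front: candidates not dominated on (accuracy, latency).
front = [p for p in population
         if not any(dominates(q, p) for q in population if q is not p)]

for p in sorted(front, key=lambda p: p["lat"]):
    print(f"acc={p['acc']:.3f}  latency={p['lat']*1e3:.1f} ms  {p['arch']}")
```

The key design point the sketch shows is that with two objectives there is generally no single best network; the search returns a front of trade-offs (faster-but-less-accurate through slower-but-more-accurate), from which an application-specific choice can be made.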
Total citations
[Citations-per-year chart, 2020–2024]
Scholar articles
RM Patton, JT Johnston, SR Young, CD Schuman… - 2019 IEEE International Conference on Big Data (Big …, 2019