Efficient mapping of ANNs on hypercube massively parallel machines

QM Malluhi, MA Bayoumi, TRN Rao. IEEE Transactions on Computers, 1995. ieeexplore.ieee.org
This paper presents a technique for mapping artificial neural networks (ANNs) on hypercube massively parallel machines. The paper starts by synthesizing a parallel structure, the mesh-of-appendixed-trees (MAT), for fast ANN implementation. Then, it presents a recursive procedure to embed the MAT structure into the hypercube topology. This procedure is used as the basis for an efficient mapping of ANN computations on hypercube systems. Both the multilayer feedforward with backpropagation (FFBP) and the Hopfield ANN models are considered. Algorithms to implement the recall and the training phases of the FFBP model, as well as the recall phase of the Hopfield model, are provided. The major advantage of our technique is high performance. Unlike other techniques presented in the literature, which require O(N) time, where N is the size of the largest layer, our implementation requires only O(log N) time. Moreover, it allows pipelining of more than one input pattern and thus further improves the performance.
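The O(log N) bound rests on the fact that the dominant ANN operation, accumulating the weighted inputs of a neuron, can be done by a logarithmic-depth reduction across the processors of a hypercube. The following is a minimal sketch of that idea only, not the paper's MAT embedding or its FFBP/Hopfield algorithms: a recursive-doubling sum over 2^d simulated nodes, where each step exchanges values along one hypercube dimension. The function name and the list-based simulation are illustrative assumptions.

```python
# Illustrative sketch (not the paper's algorithm): sum-reduction over a
# simulated hypercube of 2^d nodes in d = log2(N) exchange steps.

def hypercube_sum(values):
    """Recursive-doubling sum over a simulated hypercube.

    values[i] holds the partial weighted sum at node i. In each of the
    log2(N) steps, node i combines its value with that of its neighbor
    across one hypercube dimension (index i XOR dim). After the final
    step, every node holds the full sum.
    """
    n = len(values)
    assert n > 0 and n & (n - 1) == 0, "node count must be a power of two"
    vals = list(values)
    dim = 1
    while dim < n:
        # pairwise exchange along the current hypercube dimension
        vals = [vals[i] + vals[i ^ dim] for i in range(n)]
        dim <<= 1
    return vals

# Example: 8 nodes, 3 exchange steps; every node ends with the total 3.5.
partials = [0.5, -1.0, 2.0, 0.25, 1.5, -0.5, 0.0, 0.75]
print(hypercube_sum(partials))
```

Each step halves the remaining "distance" to a full reduction, which is where the logarithmic depth comes from; on a real hypercube machine each XOR partner is a directly connected neighbor, so every step is a single nearest-neighbor communication.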