A scalable parallel algorithm for training a hierarchical mixture of neural experts

PA Estevez, H Paugam-Moisy, D Puzenat, M Ugarte - Parallel Computing, 2002 - Elsevier
Efficient parallel learning algorithms are proposed for training a powerful modular neural
network, the hierarchical mixture of experts (HME). Parallelizations are based on the
concept of modular parallelism, i.e., parallel execution of network modules. From modeling the
speedup as a function of the number of processors and the number of training examples,
several improvements are derived, such as pipelining the training examples in packets.
The theoretical models are accurate when compared with experimental measurements. For regular …
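To make the architecture concrete: an HME arranges experts in a tree, with gating networks at each level producing soft mixture weights, so the output is a gate-weighted sum of expert outputs. The sketch below is a minimal two-level HME forward pass with linear experts and softmax gates; it is an illustration of the general HME structure, not the paper's parallel implementation, and all class and parameter names are hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class HME:
    """Minimal two-level hierarchical mixture of linear experts (illustrative sketch)."""

    def __init__(self, d_in, d_out, n_groups=2, n_experts=2, seed=0):
        rng = np.random.default_rng(seed)
        # Top-level gate: assigns weights to expert groups.
        self.top_gate = 0.1 * rng.normal(size=(d_in, n_groups))
        # One second-level gate per group: assigns weights to experts within the group.
        self.sub_gates = 0.1 * rng.normal(size=(n_groups, d_in, n_experts))
        # Linear experts, indexed by (group, expert).
        self.experts = 0.1 * rng.normal(size=(n_groups, n_experts, d_in, d_out))

    def forward(self, x):
        # x: (batch, d_in) -> output: (batch, d_out)
        g_top = softmax(x @ self.top_gate)              # (batch, n_groups)
        out = np.zeros((x.shape[0], self.experts.shape[-1]))
        for i in range(self.experts.shape[0]):
            g_sub = softmax(x @ self.sub_gates[i])      # (batch, n_experts)
            for j in range(self.experts.shape[1]):
                y = x @ self.experts[i, j]              # expert output, (batch, d_out)
                # Combined gate weight g_top[:, i] * g_sub[:, j] sums to 1 over all (i, j).
                out += (g_top[:, i] * g_sub[:, j])[:, None] * y
        return out
```

In a modular parallelization of the kind the abstract describes, each expert (and gate) module is a natural unit to place on its own processor, since the inner loop body depends only on that module's parameters and the shared input batch.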