Abstract

The back-propagation (BP) algorithm is widely used for training multi-layer neural networks. The BP algorithm based on a Hadoop cluster and the MapReduce parallel programming model (MRBP) performs well on big-data problems. However, it lacks fine-grained parallelism, so its performance is relatively low when confronted with high-dimensional data and neural networks with many nodes. Moreover, since users cannot control communication among Hadoop compute nodes, existing cluster-based structure-parallel schemes cannot be applied directly to the MRBP algorithm. This paper proposes a structure-parallelism-based MRBP algorithm (SP-MRBP), which adopts a layer-wise parallelism, layer-wise ensemble (LPLE) strategy to implement structure-parallel computing. We also derive analytical expressions for the computation time of the proposed SP-MRBP algorithm and the classic MRBP algorithm, and from these obtain both the time difference between the two algorithms and the optimal number of parallel structures for SP-MRBP. To the best of the authors' knowledge, this is the first time a structure-parallelism scheme has been introduced into the MRBP algorithm. Experimental results show that, compared with the classic MRBP algorithm, the proposed algorithm achieves better processing efficiency on large neural networks.
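As a rough illustration of the layer-wise parallelism, layer-wise ensemble (LPLE) idea described above, the sketch below partitions a layer's neurons into several parallel structures, computes each partition's forward pass independently, and then ensembles the partial outputs. This is a minimal single-machine sketch, not the paper's Hadoop/MapReduce implementation; the function names (`forward_partition`, `lple_forward`) and the use of `tanh` activations are illustrative assumptions.

```python
import numpy as np

def forward_partition(x, W_part, b_part):
    # One parallel structure: forward pass for its slice of the layer's
    # neurons only (illustrative tanh activation).
    return np.tanh(x @ W_part + b_part)

def lple_forward(x, W, b, n_parts):
    # Layer-wise parallelism: split the weight matrix column-wise into
    # n_parts shards; in a real cluster each shard would run on a
    # separate compute node.
    parts = [forward_partition(x, W_part, b_part)
             for W_part, b_part in zip(np.array_split(W, n_parts, axis=1),
                                       np.array_split(b, n_parts))]
    # Layer-wise ensemble: stitch the partial activations back together.
    return np.concatenate(parts, axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # 4 samples, 8 input features
W = rng.normal(size=(8, 6))      # one layer with 6 hidden neurons
b = rng.normal(size=6)

full = np.tanh(x @ W + b)        # unpartitioned reference computation
split = lple_forward(x, W, b, n_parts=3)
assert np.allclose(full, split)  # ensembling recovers the full layer
```

The point of the sketch is that partitioning a layer by neurons leaves each shard's computation independent, which is what makes the structure-parallel step amenable to a map phase, with the ensemble acting as the reduce-style combination.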

Full text