Graphlab: A new framework for parallel machine learning

Y Low, JE Gonzalez, A Kyrola, D Bickson… - arXiv preprint arXiv:1408.2041, 2014 - arxiv.org
Designing and implementing efficient, provably correct parallel machine learning (ML) algorithms is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance. We demonstrate the expressiveness of the GraphLab framework by designing and implementing parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and Compressed Sensing. We show that using GraphLab we can achieve excellent parallel performance on large scale real-world problems.
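The abstraction the abstract describes pairs a sparse data graph with vertex update functions driven by a dynamic scheduler: an update reads and writes only a vertex's local scope and reschedules neighbors whose values it perturbed. A minimal, hypothetical Python sketch of that pattern is below (GraphLab itself is a C++ framework; none of these function names come from its actual API), using asynchronous PageRank-style iteration on a toy undirected graph as the example:

```python
# Illustrative sketch of the GraphLab-style execution model, NOT the real
# GraphLab API: a data graph, per-vertex update functions, and a dynamic
# scheduler that drains a work queue of vertices needing recomputation.
from collections import deque

def graphlab_style_run(neighbors, data, update_fn, initial_schedule):
    """Repeatedly apply update_fn to scheduled vertices until quiescence.

    update_fn(v, data, neighbors[v]) mutates data[v] and returns the
    vertices to reschedule (e.g. neighbors affected by the change).
    """
    schedule = deque(initial_schedule)
    pending = set(schedule)          # membership set to avoid duplicates
    while schedule:
        v = schedule.popleft()
        pending.discard(v)
        for u in update_fn(v, data, neighbors[v]):
            if u not in pending:
                pending.add(u)
                schedule.append(u)
    return data

def make_pagerank_update(damping=0.85, tol=1e-6):
    """PageRank-like update on an undirected graph (simplified for brevity)."""
    def update(v, rank, nbrs):
        new = (1 - damping) + damping * sum(rank[u] for u in nbrs) / len(nbrs)
        changed = abs(new - rank[v]) > tol
        rank[v] = new
        # Only reschedule neighbors if this vertex's value moved noticeably:
        # this is the "dynamic computation" that dense MapReduce-style
        # iteration cannot express.
        return nbrs if changed else []
    return update

# Toy 3-vertex cycle; all ranks converge to the fixed point 1.0.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
ranks = graphlab_style_run(neighbors, {0: 1.0, 1: 0.0, 2: 0.0},
                           make_pagerank_update(), [0, 1, 2])
```

The sketch is sequential; the point of the real framework is that the same update-function program can be executed in parallel while the runtime enforces consistency over each vertex's scope.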