Towards Decentralized Distributed Learning for Dynamic Edge Networks
Jones, Ryan
Abstract
As Machine Learning (ML) becomes ever more prevalent across disciplinary boundaries and throughout society's innovations, technological requirements and advances are pulling data storage and computation towards the edge. Federated Learning (FL) began a new wave of algorithms designed for distributed learning, and research is now progressing further, from distributed to fully decentralized distributed machine learning. This shift requires additional considerations, such as the limited computational power and communication resources that characterize systems operating over wireless networks.
Current decentralized learning algorithms do not comply with these stringent constraints. We introduce a new algorithm, Peer-to-Peer Critical-Infrastructure-Free Distributed Swarm Learning (PC-DSL), which leverages the characteristics of edge and wireless networks to enable fully decentralized distributed learning. PC-DSL reduces the maximum number of parameter weight vector communications to one per agent per step while retaining an average test accuracy of 88% on MNIST with 300 training points and 50 agents.
