Abstract: Decentralized training of deep learning models enables on-device learning
over networks, as well as efficient scaling to large compute clusters.
Experiments in earlier works reveal that, even in a data-center setup,
decentralized training often suffers from degradation in model quality: the
training and test performance of models trained in a decentralized fashion is
generally worse than that of models trained in a centralized fashion, and this
performance drop is influenced by factors such as network size, communication
topology, and data partitioning.
We identify the changing consensus distance between devices as a key
parameter that explains the gap between centralized and decentralized training. We
show in theory that when the training consensus distance is lower than a
critical quantity, decentralized training converges as fast as the centralized
counterpart. We empirically validate that the relation between generalization
performance and consensus distance is consistent with this theoretical
observation. Our empirical insights allow the principled design of better
decentralized training schemes that mitigate the performance drop. To this end,
we propose practical training guidelines for the data-center setup as an
important first step.
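
For concreteness, the consensus distance above admits a standard formalization (the notation here is our assumption, following the decentralized SGD literature): with $n$ devices holding local models $x_t^{(i)}$ at step $t$, and $\bar{x}_t$ denoting their average, one can measure
\[
\bar{x}_t = \frac{1}{n}\sum_{i=1}^{n} x_t^{(i)}, \qquad
\Xi_t = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \bigl\|\bar{x}_t - x_t^{(i)}\bigr\|_2^2}\,,
\]
so that $\Xi_t = 0$ corresponds to exact agreement between devices (the centralized regime), and the result above states that decentralized convergence matches the centralized counterpart whenever $\Xi_t$ stays below the critical quantity.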