Abstract: The stochastic gradient descent (SGD) algorithm and its variants have been used effectively to optimize neural network models. However, with the rapid growth of big data and deep learning, SGD is no longer the most suitable choice because it inherently optimizes the error function sequentially.
This has led to the development of parallel SGD algorithms, such as asynchronous SGD (ASGD) and synchronous SGD (SSGD), to train deep neural networks. Parallelization, however, introduces high variance due to the delay in parameter (weight) updates. Our proposed algorithm addresses this delay and seeks to minimize its impact. We employ guided SGD (gSGD), which encourages consistent examples to steer the convergence by compensating for the unpredictable deviation caused by the delay. Its convergence rate is similar to that of A/SSGD; however, some additional (parallel) processing is required to compensate for the delay.
The experimental results demonstrate that our proposed approach mitigates the impact of the delay on classification accuracy. The guided approach with SSGD clearly outperforms standard SSGD and even achieves accuracy close to that of sequential SGD for some benchmark datasets.