Does increasing the batch size lead to lower processing time in a neural net?
Answers
Answered by
Explanation:
We have found that increasing the batch size progressively reduces the range of learning rates ... by trying to induce more data parallelism to reduce training time on today's hardware.
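As a rough illustration of the data-parallelism point, here is a minimal PyTorch sketch that times one epoch at a few batch sizes. The model, synthetic dataset, and batch sizes are arbitrary assumptions chosen for the example, not something stated in the answer above.

```python
import time
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic data: 8192 samples with 100 features and 10 classes (assumed sizes).
X = torch.randn(8192, 100)
y = torch.randint(0, 10, (8192,))
dataset = TensorDataset(X, y)

model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

for batch_size in (32, 256, 2048):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    start = time.perf_counter()
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
    elapsed = time.perf_counter() - start
    # Larger batches mean fewer optimizer steps per epoch, which is where the
    # wall-clock saving comes from on hardware that processes a batch in parallel.
    print(f"batch_size={batch_size:5d}  steps={len(loader):4d}  time={elapsed:.2f}s")
```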
Answered by
Smaller batch sizes are used for two main reasons: they are noisy, which offers a regularizing effect and lower generalization error, and they make it easier to fit one batch worth of training data in memory (e.g. when using a GPU).
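To illustrate the memory point, here is a sketch (assuming PyTorch) of a common pattern: halve the batch size until a single forward/backward pass fits on the device. The model, tensor shapes, starting size, and helper names are hypothetical and only for illustration.

```python
import torch
from torch import nn
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)

def make_batch(batch_size):
    # Synthetic batch; in practice this would come from a DataLoader.
    xb = torch.randn(batch_size, 100, device=device)
    yb = torch.randint(0, 10, (batch_size,), device=device)
    return xb, yb

def find_fitting_batch_size(start=1024):
    """Halve the batch size until one forward/backward pass fits in memory."""
    batch_size = start
    while batch_size >= 1:
        try:
            xb, yb = make_batch(batch_size)
            F.cross_entropy(model(xb), yb).backward()
            model.zero_grad(set_to_none=True)
            return batch_size
        except RuntimeError as err:
            # PyTorch raises RuntimeError with "out of memory" in the message on OOM.
            if "out of memory" not in str(err):
                raise
            if device == "cuda":
                torch.cuda.empty_cache()
            batch_size //= 2
    raise RuntimeError("even a single sample does not fit in memory")

print("largest fitting batch size:", find_fitting_batch_size())
```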