How do gradient descent and stochastic gradient descent methods solve optimization problems?
Answer:
Gradient Descent:
Gradient descent solves an optimization problem by iteratively moving the parameters in the direction of the negative gradient of the loss function, w ← w − η∇L(w), where η is the learning rate. Repeating this update drives the loss down step by step toward a (local) minimum.

Stochastic Gradient Descent (SGD):
In typical gradient descent optimization, such as batch gradient descent, the batch is taken to be the whole dataset, so every single parameter update requires computing the gradient over all training samples, which becomes very slow on large datasets. Stochastic gradient descent solves this problem: SGD uses only a single randomly chosen sample, i.e., a batch size of one, to perform each iteration, making each update cheap at the cost of noisier steps.
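To make the contrast concrete, here is a minimal sketch in Python (not part of the original answer; the toy least-squares data, learning rates, and variable names are all illustrative choices). It fits a linear model first with batch gradient descent, then with single-sample SGD:

```python
import numpy as np

# Toy least-squares data: y = X @ true_w + noise (illustrative example)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

# Batch gradient descent: every update uses the gradient over ALL samples.
w = np.zeros(3)
lr = 0.1
for _ in range(50):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
    w -= lr * grad                           # step against the gradient
print("batch GD estimate:", w)

# Stochastic gradient descent: every update uses ONE randomly chosen sample.
# A smaller step size is used here because single-sample gradients are noisy.
w = np.zeros(3)
lr = 0.01
for _ in range(50):
    for i in rng.permutation(len(y)):        # shuffle samples each epoch
        grad_i = 2 * X[i] * (X[i] @ w - y[i])  # single-sample gradient
        w -= lr * grad_i
print("SGD estimate:     ", w)
```

Both runs recover weights close to true_w; the difference is that batch GD does one expensive, exact step per pass over the data, while SGD does one cheap, noisy step per sample.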