Computer Science, asked by rebeka21, 1 year ago

In case of limited training data, which technique, bagging or stacking, would be preferred, and why?
A. Bagging, because we can combine as many classifiers as we want by training each on a different sample of the training data
B. Bagging, because we use the same classification algorithm on all samples of the training data
C. Stacking, because each classifier is trained on all of the available data
D. Stacking, because we can use different classification algorithms on the training data

Answers

Answered by writersparadise
The correct answer is option C: Stacking, because each classifier is trained on all of the available data.

Stacking is a meta-learning approach: an ensemble of base classifiers is used to "extract features" (their predictions), which then serve as inputs to a second-layer model, the meta-learner. Because each base classifier is trained on all of the available data (typically via cross-validated out-of-fold predictions), stacking makes full use of a small training set. Bagging, by contrast, trains each model on only a bootstrap sample, so with limited data each individual model sees even less of it.
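As a concrete illustration, here is a minimal sketch of stacking using scikit-learn's `StackingClassifier` (the small synthetic dataset and the choice of base learners are assumptions for the example, not part of the question):

```python
# Sketch: stacking on a small dataset, where every base learner
# effectively sees all of the training data via cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Tiny dataset to mimic the "limited training data" scenario.
X, y = make_classification(n_samples=100, n_features=8, random_state=0)

# With cv=5, each base classifier is fit on the full data, and its
# out-of-fold predictions become the features for the meta-learner.
stack = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X, y)
print(stack.score(X, y))
```

Note that nothing is wasted on bootstrap resampling here: all 100 samples contribute to every base learner, which is exactly why stacking is preferred when data is scarce.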