Business Studies, asked by mrbaba1627, 1 year ago

You have built a classification model with 90% accuracy, but your client is not happy

because the false positive rate is very high. What will you do?

Answers

Answered by sailorking
0

If I have built a classification model with 90% accuracy and my client is still unsatisfied or unhappy, I shall try to understand what problem has caused the dissatisfaction, take the necessary actions immediately, and apologise for the mistake that occurred.

    In case the client is not happy due to a false rating, I shall try to make the client happy by replacing the product with the best one available.

Answered by indiansumalyaa
7

Answer:

In my opinion, we should not rely on accuracy alone as a performance measure, because it only summarizes the true positives and true negatives of a model as a single total. There are several other performance measures, such as recall, precision and F1-score, and all of these metrics are explained with a confusion matrix in the previous question.

Coming to this question: the classification model has 90% accuracy but a high false positive rate. The false positive rate is an error metric derived from the confusion matrix, and the confusion matrix depends on the specific model. Each classification model will therefore have a different confusion matrix, and consequently a false positive rate that may be lower or higher than that of the previous model. So we can try various classification models, such as logistic regression, decision trees, neural networks and random forests, and check the false positive rate of each using its confusion matrix. By comparing them we can conclude which machine learning or statistical model is the best fit, with high accuracy and the lowest possible false positive rate.

A machine learning paradigm known as ensemble learning can also be used in this situation. Ensemble learning is a group of different machine learning models developed on the same training dataset (some features may or may not differ across models). One way to implement it is bagging (bootstrap aggregating), in which several models are trained on resampled versions of the dataset and the mean of their outputs is taken as the prediction for the test data. Random forest is one such ensemble learning technique: it aggregates the outputs of several decision trees to get the most appropriate result. A sketch of this model comparison is given below.
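To make the comparison concrete, here is a minimal sketch of how the models named above could be compared on false positive rate alongside accuracy, precision, recall and F1-score. It assumes scikit-learn is available; the synthetic dataset, the specific model settings and the bagged-trees variant are illustrative assumptions, not something given in the question.

```python
# Minimal sketch: compare candidate classifiers on accuracy and false positive
# rate derived from each model's confusion matrix. Dataset and hyperparameters
# are placeholders standing in for the client's actual data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score, f1_score)

# Synthetic, imbalanced binary-classification data (assumed for illustration).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Candidate models mentioned in the answer, plus a bagging ensemble of trees.
models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Bagged Trees": BaggingClassifier(DecisionTreeClassifier(),
                                      n_estimators=50, random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    # Unpack the 2x2 confusion matrix and compute the false positive rate.
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    fpr = fp / (fp + tn)
    print(f"{name:20s} accuracy={accuracy_score(y_test, y_pred):.3f} "
          f"FPR={fpr:.3f} precision={precision_score(y_test, y_pred):.3f} "
          f"recall={recall_score(y_test, y_pred):.3f} "
          f"F1={f1_score(y_test, y_pred):.3f}")
```

Running the sketch prints each model's accuracy and false positive rate side by side, which is exactly the comparison described above for choosing the model with high accuracy and the lowest possible false positive rate.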

