Decision tree advantages and disadvantages
Advantages
Decision trees assist managers in evaluating upcoming choices. The tree creates a visual representation of all possible outcomes, rewards and follow-up decisions in one document. Each subsequent decision resulting from the original choice is also depicted on the tree, so you can see the overall effect of any one decision. As you go through the tree and make choices, you will see a specific path from one node to another and the impact a decision made now could have down the road.
Decision Tree Function
The tree structure provides a framework for analyzing all possible alternatives for a decision. The visual representation also includes the likelihood and potential reward for each choice. To create a tree, start with the main decision and draw a square. Extend lines out from the edge of the square for each possible solution. If the solution leads to another decision, draw a square and extend new lines to the next possible series of choices. If the outcome of a particular choice is uncertain, draw a circle instead of a square. Assign probabilities to each branch and a dollar amount for the possible payoff. Make sure to subtract any costs involved with executing the decision. Multiply the probability and the net profit for each outcome to get an adjusted expectation value for each branch of the tree.
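The per-branch calculation described above can be sketched in a few lines of Python. The payoff table below is purely illustrative (the branch names, probabilities, payoffs and costs are hypothetical, not from the text); the point is the arithmetic: expected value per branch = probability × (payoff − cost).

```python
# Hypothetical payoff table for a single decision node: each branch has a
# probability, a gross payoff, and a cost of executing that choice.
# All figures are illustrative examples.
branches = {
    "expand_plant": {"probability": 0.6, "payoff": 500_000, "cost": 150_000},
    "outsource":    {"probability": 0.4, "payoff": 300_000, "cost": 50_000},
}

def expected_value(branch):
    """Probability times net profit (payoff minus cost) for one branch."""
    return branch["probability"] * (branch["payoff"] - branch["cost"])

for name, branch in branches.items():
    # expand_plant: 0.6 * (500,000 - 150,000) = 210,000
    # outsource:    0.4 * (300,000 -  50,000) = 100,000
    print(f"{name}: {expected_value(branch):,.0f}")
```

Comparing these adjusted values across branches is what lets you pick the most promising path through the tree.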
Disadvantages
Instability - A decision tree can change substantially when the dataset is perturbed even slightly. This is undesirable: we want a classification algorithm to be robust to noise and to generalize well to future data, and instability also undercuts confidence in the tree and hurts our ability to learn from it. One solution is to switch to a tree-ensemble method that combines many decision trees fit on slightly different versions of the dataset.
Classification plateaus - A small step across a decision boundary produces an abrupt change in the prediction, so two flowers with very similar characteristics can be classified very differently. Some smoother, "rolling hill" style of classification would often work better than this plateau scheme. One solution, as above, is a tree-ensemble method that combines many decision trees fit on slightly different versions of the dataset, which smooths the decision surface.
Decision boundaries are parallel to the axes - Each split thresholds a single feature, so every boundary is axis-aligned. One could imagine diagonal decision boundaries that would perform better, e.g. for separating the setosa flowers from the versicolor flowers.
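A minimal sketch of the ensemble fix suggested above, using scikit-learn (an assumed dependency) and the iris dataset mentioned in the text: a random forest fits many trees on bootstrap resamples of the data and averages them, which mitigates both the instability and the plateau effect of a single tree. The train/test split and parameters here are illustrative choices, not a prescribed recipe.

```python
# Compare a single decision tree against a random forest (an ensemble of
# trees fit on slightly different bootstrap versions of the dataset).
# Requires scikit-learn; settings below are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# One tree: fast and interpretable, but sensitive to perturbations of the data.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# 100 trees on bootstrap resamples: averaging their votes stabilizes the
# classifier and smooths the piecewise-constant decision surface.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("single tree accuracy:", tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))
```

Note that the ensemble trades away the single tree's main advantage, a readable diagram of the decision logic, for this added robustness.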