The difference between the expected profit under conditions of risk and the expected profit with perfect information is called what?
Expected Value Models
—EMV & EOL—
Once a probability distribution has been assessed for the uncertain states of nature (and this can always be done, if only subjectively), the next step is straightforward: compute the expected value of each action alternative. Because the same problem can be viewed in two ways (actual monetary values or opportunity losses), the expected values can be computed from either payoff table.
Expected Monetary Value
Referring to the original payoff matrix, the formula for expected monetary value (EMV) is:
EMV(Ai) = E(Ai) = Σj pj Rij
where i indexes the matrix's rows (action alternatives) and j its columns (states of nature).
Thus, using the probability distribution assessed previously, we obtain the EMV of each action alternative; the same computation can equivalently be laid out on a decision tree.
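As a concrete illustration, here is a minimal Python sketch of the EMV computation. The 3×3 payoff matrix and probability distribution are made up for this sketch; they are not the table from the discussion above.

```python
# Hypothetical payoff matrix R: rows are actions A1..A3, columns are states S1..S3.
# (Illustrative numbers only -- not the table from the text.)
R = [
    [6, 4, 2],  # A1
    [3, 5, 3],  # A2
    [1, 4, 7],  # A3
]
p = [0.2, 0.5, 0.3]  # assessed probabilities for S1, S2, S3 (they sum to 1)

# EMV(Ai) = sum over j of p[j] * R[i][j]; round to tame floating-point noise.
emv = [round(sum(pj * rij for pj, rij in zip(p, row)), 6) for row in R]
print(emv)  # [3.8, 4.0, 4.3]

best = max(range(len(emv)), key=lambda i: emv[i])
print(f"Max EMV alternative: A{best + 1} with EMV = {emv[best]}")
```

With these numbers, A3 would be chosen under the Max EMV rule.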
Expected Opportunity Loss
Recall from the Savage criterion that an opportunity loss is the payoff difference between the best possible outcome under Sj and the actual outcome resulting from choosing Ai given that Sj occurs. Referring now to the opportunity loss matrix, the formula for expected opportunity loss (EOL) is:
EOL(Ai) = Σj pj OLij
The same probability distribution applies, of course, since the states of nature are unchanged.
EOL can also be depicted with a decision tree, of course. (Exercise left to the reader.)
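The EOL computation mirrors the EMV one. A minimal sketch with made-up numbers: the opportunity-loss matrix is derived from the payoff matrix by subtracting each entry from the best payoff in its column.

```python
# Hypothetical payoff matrix (rows = actions, columns = states) and probabilities.
R = [[6, 4, 2], [3, 5, 3], [1, 4, 7]]
p = [0.2, 0.5, 0.3]

# Opportunity loss: OL[i][j] = (best payoff under state Sj) - R[i][j]
col_best = [max(row[j] for row in R) for j in range(len(R[0]))]
OL = [[col_best[j] - row[j] for j in range(len(row))] for row in R]

# EOL(Ai) = sum over j of p[j] * OL[i][j]; round to tame floating-point noise.
eol = [round(sum(pj * ol for pj, ol in zip(p, row)), 6) for row in OL]
print(eol)  # [2.0, 1.8, 1.5]
print(f"Min EOL alternative: A{eol.index(min(eol)) + 1}")
```

With these numbers the minimum EOL (1.5, at A3) selects the same alternative that the Max EMV rule would.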
Note that, for a given probability distribution, EMV(Ai) + EOL(Ai) is the same constant for every action alternative Ai; in our example the sum is always 4.2. Consequently, the action with the maximum EMV is also the action with the minimum EOL.
This constant, 4.2, is the Expected Value given Perfect Information (EVgPI): the expected payoff of a lottery in which, for each state Sj, we receive the best payoff available under that state. Its expected value is:

EVgPI = Σj pj Rj*

where Rj* = maxi Rij is the best payoff in column j.
The Expected Value of Perfect Information (EVPI) is then:
EVPI = EVgPI - EMV*

where EMV* denotes the maximum EMV.
If the same lottery is evaluated using opportunity losses instead of monetary values, we get:

EVPI = EOL*

where EOL* is the minimum EOL.
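Putting the pieces together, a short Python sketch (with made-up numbers, not the 4.2 example above) checks that both routes to EVPI agree:

```python
# Hypothetical payoff matrix (rows = actions, columns = states) and probabilities.
R = [[6, 4, 2], [3, 5, 3], [1, 4, 7]]
p = [0.2, 0.5, 0.3]

# EVgPI: with perfect information we always take the best payoff in each column.
col_best = [max(row[j] for row in R) for j in range(len(R[0]))]
evgpi = round(sum(pj * b for pj, b in zip(p, col_best)), 6)

# EMV* is the best expected value attainable under risk.
emv_star = round(max(sum(pj * r for pj, r in zip(p, row)) for row in R), 6)

# EOL* is the smallest expected opportunity loss.
OL = [[col_best[j] - row[j] for j in range(len(row))] for row in R]
eol_star = round(min(sum(pj * ol for pj, ol in zip(p, row)) for row in OL), 6)

evpi = round(evgpi - emv_star, 6)
print(evgpi, emv_star, evpi)   # 5.8 4.3 1.5
print(evpi == eol_star)        # True
```

Both definitions yield the same EVPI, which answers the original question: that difference is the Expected Value of Perfect Information.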