Prove the rule for the error in a power of a quantity
Answers
Explanation:
MAXIMUM ERROR
We first consider how data errors propagate through calculations to affect error limits (or maximum error) of results. It's easiest to first consider determinate errors, which have explicit sign. This leads to useful rules for error propagation. Then we'll modify and extend the rules to other error measures and also to indeterminate errors.
The underlying mathematics is that of "finite differences," an algebra for dealing with numbers that have relatively small variations imposed upon them. The finite differences we are interested in are variations from "true values" caused by experimental errors.
Consider a result, R, calculated from the sum of two data quantities A and B. For this discussion we'll use ΔA and ΔB to represent the errors in A and B respectively. The data quantities are written to show the errors explicitly:
A + ΔA and B + ΔB
[3-1]
We allow the possibility that ΔA and ΔB may be either positive or negative, the signs being "in" the symbols "ΔA" and "ΔB."
The result of adding A and B is expressed by the equation: R = A + B. When errors are explicitly included, it is written:
(A + ΔA) + (B + ΔB) = (A + B) + (ΔA + ΔB)
So the result, with its error ΔR explicitly shown in the form R + ΔR, is:
R + ΔR = (A + B) + (ΔA + ΔB)
[3-2]
The error in R is: ΔR = ΔA + ΔB.
We conclude that the error in the sum of two quantities is the sum of the errors in those quantities. You can easily work out the case where the result is calculated from the difference of two quantities. In that case the error in the result is the difference in the errors. Summarizing:
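The sum and difference rules are easy to verify numerically. The sketch below, with made-up measurement values, checks that carrying signed (determinate) errors through a sum and a difference gives exactly ΔR = ΔA + ΔB and ΔR = ΔA − ΔB:

```python
# Numeric check of the determinate-error rules for sums and differences.
# The measured values and their signed errors are illustrative only.
A, dA = 10.0, 0.25    # A with its signed error ΔA
B, dB = 4.0, -0.125   # B with its signed error ΔB

# Sum: R = A + B, so the error is dR = dA + dB  (equation [3-2])
R_sum = A + B
dR_sum = dA + dB

# Difference: R = A - B, so the error is dR = dA - dB
R_diff = A - B
dR_diff = dA - dB

# The perturbed inputs must reproduce the perturbed results exactly:
assert (A + dA) + (B + dB) == R_sum + dR_sum
assert (A + dA) - (B + dB) == R_diff + dR_diff

print(R_sum, dR_sum)    # 14.0 0.125
print(R_diff, dR_diff)  # 6.0 0.375
```

Note that for a difference the signed errors can partially cancel, which is why the indeterminate-error treatment later replaces signed errors with error limits.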