Why is the impulse response important in the analysis of control systems?
The impulse that is referred to in the term impulse response is generally a short-duration time-domain signal. For continuous-time systems, this is the Dirac delta function δ(t), while for discrete-time systems, the Kronecker delta function δ[n] is typically used. A system's impulse response (often annotated as h(t) for continuous-time systems or h[n] for discrete-time systems) is defined as the output signal that results when an impulse is applied to the system input.
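To make that concrete, here is a minimal Python sketch (the first-order recursive filter below is just an assumed example system, not anything specific to the question): you obtain h[n] by literally feeding a Kronecker delta into the system and recording what comes out.

    import numpy as np

    def system(x):
        # hypothetical LTI system: y[n] = 0.5*y[n-1] + x[n]
        y = np.zeros(len(x))
        for n in range(len(x)):
            y[n] = (0.5 * y[n - 1] if n > 0 else 0.0) + x[n]
        return y

    delta = np.zeros(8)
    delta[0] = 1.0            # Kronecker delta δ[n]
    h = system(delta)         # impulse response h[n]
    print(h)                  # [1, 0.5, 0.25, ...], i.e. h[n] = 0.5**n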
Why is this useful? It allows us to predict what the system's output will look like in the time domain. Remember the linearity and time-invariance properties mentioned above? If we can decompose the system's input signal into a sum of a bunch of components, then the output is equal to the sum of the system outputs for each of those components. What if we could decompose our input signal into a sum of scaled and time-shifted impulses? Then, the output would be equal to the sum of copies of the impulse response, scaled and time-shifted in the same way.
For discrete-time systems, this is possible, because you can write any signal x[n] as a sum of scaled and time-shifted Kronecker delta functions:
x[n] = ∑_{k=0}^{∞} x[k] δ[n−k]
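As a quick numerical sanity check (a sketch with an arbitrary made-up signal, not anything from the question), you can rebuild x[n] from scaled, time-shifted Kronecker deltas and confirm you get the original signal back:

    import numpy as np

    N = 6
    x = np.array([3.0, -1.0, 4.0, 1.0, -5.0, 9.0])   # arbitrary example signal

    def shifted_delta(k, length):
        # Kronecker delta δ[n-k] sampled on n = 0..length-1
        d = np.zeros(length)
        d[k] = 1.0
        return d

    # x[n] = Σ_k x[k] δ[n-k]
    x_rebuilt = sum(x[k] * shifted_delta(k, N) for k in range(N))
    print(np.allclose(x, x_rebuilt))                 # True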
Each term in the sum is an impulse scaled by the value of x[n] at that time instant. What would we get if we passed x[n] through an LTI system to yield y[n]? Simple: each scaled and time-delayed impulse that we put in yields a scaled and time-delayed copy of the impulse response at the output. That is:
y[n] = ∑_{k=0}^{∞} x[k] h[n−k]
where h[n] is the system's impulse response. The above equation is the convolution theorem for discrete-time LTI systems. That is, for any signal x[n] that is input to an LTI system, the system's output y[n] is equal to the discrete convolution of the input signal and the system's impulse response.
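Writing that sum out as code makes the mechanics obvious. Here is a sketch (the particular x[n] and h[n] below are just made-up examples) that evaluates the convolution sum directly and checks it against NumPy's built-in convolution:

    import numpy as np

    x = np.array([1.0, 2.0, 0.0, -1.0])        # example input x[n]
    h = np.array([1.0, 0.5, 0.25])             # example impulse response h[n]

    M = len(x) + len(h) - 1                    # length of the full output
    y = np.zeros(M)
    for n in range(M):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                # each input sample x[k] contributes a scaled,
                # shifted copy of the impulse response
                y[n] += x[k] * h[n - k]

    print(np.allclose(y, np.convolve(x, h)))   # True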
For continuous-time systems, the above straightforward decomposition isn't possible in a strict mathematical sense (the Dirac delta has zero width and infinite height), but at an engineering level, it's an approximate, intuitive way of looking at the problem. A similar convolution theorem holds for these systems:
y(t) = ∫_{−∞}^{∞} x(τ) h(t−τ) dτ
where, again, h(t) is the system's impulse response. There are a number of ways of deriving this relationship (I think you could make a similar argument as above by claiming that Dirac delta functions at all time shifts make up an orthogonal basis for the L² Hilbert space, noting that you can use the delta function's sifting property to project any function in L² onto that basis, therefore allowing you to express system outputs in terms of the outputs associated with the basis (i.e. time-shifted impulse responses), but I'm not a licensed mathematician, so I'll leave that aside). One method that relies only upon the aforementioned LTI system properties is shown here.
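Even though the continuous-time case involves an integral, you can approximate it numerically to convince yourself the relationship works. The sketch below uses an assumed first-order system with impulse response h(t) = e^(−t) for t ≥ 0 and a unit-step input (neither comes from the question); it discretizes the integral as a Riemann sum and compares the result with the exact step response 1 − e^(−t):

    import numpy as np

    dt = 1e-3
    t = np.arange(0.0, 5.0, dt)
    x = np.ones_like(t)                  # unit-step input x(t)
    h = np.exp(-t)                       # assumed impulse response h(t), t >= 0

    # Riemann-sum approximation of y(t) = ∫ x(τ) h(t−τ) dτ
    y = np.convolve(x, h)[:len(t)] * dt

    err = np.max(np.abs(y - (1.0 - np.exp(-t))))
    print(err)                           # small discretization error, on the order of dt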