Computer Science, asked by Ashvaneet8975, 1 year ago

How can we handle autocorrelation without adding another layer to the model?

Answers

Answered by rahulgrover033

We usually assume that the error terms are independent unless there is a specific reason to think that this is not the case. Violation of this assumption usually occurs because there is a known temporal component for how the observations were drawn. The easiest way to assess whether there is dependency is by producing a scatterplot of the residuals versus the time measurement for each observation (assuming you have the data arranged in time order). If the data are independent, then the residuals should look randomly scattered about 0. However, if a noticeable pattern emerges (particularly one that is cyclical), then dependency is likely an issue.
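For instance, a minimal Python/matplotlib sketch of that diagnostic plot might look like the following; the residual series here is simulated purely for illustration and stands in for the residuals of your own fitted model.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative only: a made-up residual series with a cyclical component,
# standing in for the residuals from your own time-ordered regression fit.
time = np.arange(120)
rng = np.random.default_rng(1)
residuals = np.sin(time / 6.0) + rng.normal(scale=0.3, size=time.size)

plt.scatter(time, residuals)
plt.axhline(0, color="grey", linestyle="--")  # independent errors scatter randomly about 0
plt.xlabel("Time")
plt.ylabel("Residual")
plt.title("Residuals vs. time: a cyclical pattern suggests dependent errors")
plt.show()
```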

Recall that if we have a first-order autocorrelation with the errors, then the errors are modeled as:

ϵ_t = ρ ϵ_{t-1} + ω_t,

where |ρ| < 1 and the ω_t are iid N(0, σ²).
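As a quick illustration, a small Python sketch that simulates errors following this model might look like this (the ρ and σ values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n, rho, sigma = 200, 0.8, 1.0            # illustrative values only; note |rho| < 1

omega = rng.normal(0.0, sigma, size=n)   # omega_t iid N(0, sigma^2)
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = rho * eps[t - 1] + omega[t]  # eps_t = rho * eps_{t-1} + omega_t

# The lag-1 sample autocorrelation of eps should be close to rho (here, 0.8).
print(np.corrcoef(eps[:-1], eps[1:])[0, 1])
```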

If we suspect first-order autocorrelation with the errors, then a formal test does exist regarding the parameter ρ. In particular, the Durbin-Watson test is constructed as:

H_0: ρ = 0 versus H_A: ρ ≠ 0.

So the null hypothesis of ρ = 0 means that ϵ_t = ω_t, or that the error term in one period is not correlated with the error term in the previous period, while the alternative hypothesis of ρ ≠ 0 means the error term in one period is either positively or negatively correlated with the error term in the previous period. Oftentimes, a researcher will already have an indication of whether the errors are positively or negatively correlated. For example, a regression of oil prices (in dollars per barrel) versus the gas price index will surely have positively correlated errors. When the researcher has an indication of the direction of the correlation, the Durbin-Watson test also accommodates the one-sided alternatives H_A: ρ < 0 for negative correlations or H_A: ρ > 0 for positive correlations (as in the oil example).

The test statistic for the Durbin-Watson test on a data set of size n is given by:

D = Σ_{t=2}^{n} (e_t - e_{t-1})² / Σ_{t=1}^{n} e_t²,

where e_t = y_t - ŷ_t are the residuals from the ordinary least squares fit. The DW test statistic varies from 0 to 4, with values between 0 and 2 indicating positive autocorrelation, 2 indicating zero autocorrelation, and values between 2 and 4 indicating negative autocorrelation. Exact critical values are difficult to obtain, but tables (for certain significance values) can be used to make a decision (e.g., see the tables at this link, where T represents the sample size, n, and K represents the number of regression parameters, p).
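As a rough sketch, D can be computed directly from the OLS residuals; the example below uses simulated data and, assuming statsmodels is available, cross-checks the hand computation against its durbin_watson helper.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Simulated, time-ordered data purely for illustration.
rng = np.random.default_rng(0)
x = np.arange(100, dtype=float)
y = 2.0 + 0.5 * x + rng.normal(size=x.size)

fit = sm.OLS(y, sm.add_constant(x)).fit()
e = fit.resid  # e_t = y_t - yhat_t

# D = sum_{t=2}^{n} (e_t - e_{t-1})^2 / sum_{t=1}^{n} e_t^2
D = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

print(D, durbin_watson(e))  # the two numbers should match
```

statsmodels also reports this same statistic in its standard OLS summary output.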

The tables provide a lower and upper bound, called d_L and d_U, respectively. In testing for positive autocorrelation, if D < d_L then reject H_0; if D > d_U then fail to reject H_0; or if d_L ≤ D ≤ d_U, then the test is inconclusive. While the prospect of having an inconclusive test result is less than desirable, there are some programs which use exact and approximate procedures for calculating a p-value. These procedures require certain assumptions on the data which we will not discuss. One "exact" method is based on the beta distribution for obtaining p-values.
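A minimal sketch of that table-based decision rule (for the one-sided alternative H_A: ρ > 0) might look like the following; the d_L and d_U numbers shown are placeholders, not real table entries, and must be looked up for your own n, p, and significance level.

```python
def dw_decision(D, d_L, d_U):
    """Durbin-Watson decision rule for H_A: rho > 0 (positive autocorrelation).

    d_L and d_U are the lower/upper bounds from a Durbin-Watson table for the
    given sample size, number of regression parameters, and significance level.
    """
    if D < d_L:
        return "reject H0 (evidence of positive autocorrelation)"
    if D > d_U:
        return "fail to reject H0"
    return "inconclusive"

# Placeholder bounds for illustration only -- look up real d_L, d_U values.
print(dw_decision(D=1.2, d_L=1.5, d_U=1.7))
```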
