Difference between linear and nonlinear regression in data mining
Answer:
Linear regression requires a linear model. No surprise, right? But what does that really mean?
A model is linear when each term is either a constant or the product of a parameter and a predictor variable. A linear equation is constructed by adding the results for each term. This constrains the equation to just one basic form:
Response = constant + parameter * predictor + ... + parameter * predictor
Y = b0 + b1X1 + b2X2 + ... + bkXk
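Because the equation is linear in the parameters, ordinary least squares can estimate them directly. A minimal sketch (with hypothetical data generated from known parameters, not from any dataset in this answer):

```python
import numpy as np

# Hypothetical data generated from Y = 2 + 3*X1 + 0.5*X2 (no noise)
X1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X2 = np.array([0.0, 1.0, 0.0, 1.0, 2.0])
Y = 2 + 3 * X1 + 0.5 * X2

# Design matrix: a column of ones for the constant, then one column per predictor
A = np.column_stack([np.ones_like(X1), X1, X2])

# Least squares recovers the parameters b0, b1, b2
b, *_ = np.linalg.lstsq(A, Y, rcond=None)
print(b)  # approximately [2.0, 3.0, 0.5]
```

With noiseless data the fit recovers the generating parameters exactly, which makes the "constant + parameter * predictor" structure easy to verify.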
In statistics, a regression equation (or function) is linear when it is linear in the parameters. While the equation must be linear in the parameters, you can transform the predictor variables in ways that produce curvature. For instance, you can include a squared variable to produce a U-shaped curve.
Y = b0 + b1X1 + b2X1^2
This model is still linear in the parameters even though the predictor variable is squared. You can also use log and inverse functional forms that are linear in the parameters to produce different types of curves.
A common example is a linear regression model that uses a squared term to fit the curved relationship between BMI and body fat percentage.
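The original BMI data is not reproduced here, but the idea can be sketched with synthetic data: even though the fitted curve is U-shaped, the model stays linear in the parameters, so plain least squares still works.

```python
import numpy as np

# Synthetic stand-in for the BMI vs. body-fat data: a curved relationship
# generated from b2 = -0.05 plus a little noise (illustrative values only)
rng = np.random.default_rng(0)
bmi = np.linspace(15, 40, 50)
body_fat = -0.05 * bmi**2 + 3.2 * bmi - 20 + rng.normal(0, 1, bmi.size)

# Y = b0 + b1*X + b2*X^2 is linear in b0, b1, b2: the squared predictor
# is just another column in the design matrix
A = np.column_stack([np.ones_like(bmi), bmi, bmi**2])
b, *_ = np.linalg.lstsq(A, body_fat, rcond=None)
print(b)  # b[2] should be close to the generating value -0.05
```

The squared column produces the curvature; the estimation step is identical to ordinary linear regression.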
Nonlinear Regression Equations
While a linear equation has one basic form, nonlinear equations can take many different forms. The easiest way to determine whether an equation is nonlinear is to focus on the term “nonlinear” itself. Literally, it’s not linear. If the equation doesn’t meet the criteria above for a linear equation, it’s nonlinear.
That covers many different forms, which is why nonlinear regression provides the most flexible curve-fitting functionality. Minitab's nonlinear function catalog lists many such forms; in those functions, thetas represent the parameters and X represents the predictor. Unlike linear regression, these functions can have more than one parameter per predictor variable.
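As a sketch of what "more than one parameter per predictor" means, here is a Michaelis-Menten-style curve fit with nonlinear least squares (the function and parameter values are illustrative, not taken from the Minitab catalog):

```python
import numpy as np
from scipy.optimize import curve_fit

# Y = theta1 * X / (theta2 + X): two parameters attached to a single
# predictor, so it cannot be rewritten as "constant + parameter * predictor"
def michaelis_menten(x, theta1, theta2):
    return theta1 * x / (theta2 + x)

# Synthetic data generated from theta1 = 10, theta2 = 2
x = np.linspace(0.5, 20, 30)
y = michaelis_menten(x, 10.0, 2.0)

# Nonlinear least squares estimates the thetas iteratively from a starting guess
theta, _ = curve_fit(michaelis_menten, x, y, p0=[1.0, 1.0])
print(theta)  # approximately [10.0, 2.0]
```

Unlike the linear case, there is no closed-form solution here: the solver iterates from the starting values `p0`, which is why nonlinear regression requires reasonable initial guesses.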