Linear regression
From Rice Wiki
Latest revision as of 19:34, 17 May 2024
Linear regression is one of the simplest techniques used for predictive modeling. It estimates a linear relationship between a continuous dependent variable $y$ and attributes (also called independent variables) $X$:

<math>y = f(X)</math>
There are different types:
- Simple linear regression: one attribute
- Multiple linear regression: multiple attributes
Let the following function model the true relationship between $y$ and $X$:

<math>y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_p x_p + \epsilon</math>

where $\beta_i$ is the weight coefficient of the attribute $x_i$ to be learned, and $\epsilon$ is the residual error.
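As an illustration, the model can be simulated numerically. The sketch below assumes two attributes and hypothetical weight coefficients (intercept first); it uses NumPy, which is an assumption of this example rather than anything prescribed by the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weight coefficients: intercept beta_0, then beta_1, beta_2
beta = np.array([1.0, 2.0, -0.5])

n = 100
X = rng.normal(size=(n, 2))               # 100 samples, 2 attributes
X1 = np.column_stack([np.ones(n), X])     # prepend a ones column for the intercept
epsilon = rng.normal(scale=0.1, size=n)   # residual error

# y = beta_0 + beta_1 * x_1 + beta_2 * x_2 + epsilon
y = X1 @ beta + epsilon
```

Training would then mean recovering `beta` from `X` and `y` alone.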
To train a linear regression model is to learn the weight coefficients that minimize error. Error is quantified by a cost function, usually RSS (residual sum of squares).
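Concretely, RSS sums the squared residuals between observed and predicted values. A minimal sketch, with hypothetical sample values:

```python
import numpy as np

def rss(y_true, y_pred):
    """Residual sum of squares: sum over i of (y_i - yhat_i)^2."""
    residuals = np.asarray(y_true) - np.asarray(y_pred)
    return float(residuals @ residuals)

# Hypothetical observations and model predictions
y_true = [1.0, 2.0, 3.0]
y_pred = [1.1, 1.9, 3.2]
cost = rss(y_true, y_pred)  # 0.01 + 0.01 + 0.04 = 0.06
```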
Minimizing RSS
There are several ways to minimize the RSS:
- Ordinary least squares
- Maximum likelihood estimation
- Gradient descent
- Newton's method
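For example, ordinary least squares has a closed-form solution via the normal equations, while gradient descent iteratively steps against the RSS gradient. A minimal NumPy sketch on synthetic data (the data, learning rate, and iteration count are hypothetical choices for this example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data generated from known coefficients (intercept first)
true_beta = np.array([1.0, 2.0, -0.5])
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ true_beta + rng.normal(scale=0.01, size=50)

# Ordinary least squares: solve the normal equations (X^T X) beta = X^T y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Gradient descent on RSS: gradient is -2 X^T (y - X beta)
beta_gd = np.zeros(3)
lr = 0.005  # hypothetical learning rate
for _ in range(5000):
    beta_gd -= lr * (-2.0 * X.T @ (y - X @ beta_gd))
```

On this well-conditioned problem both routes land on essentially the same coefficients; the closed form is exact in one step, while gradient descent trades that for cheap iterations that scale to larger problems.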