'''Maximum likelihood estimation (MLE)''' is one of the methods for finding the coefficients of a linear regression model that minimize the residual sum of squares (RSS). MLE does this by maximizing the likelihood of observing the training data given the model.
= Background =
Consider the objective function

<math>y = w_0 x_0 + w_1 x_1 + \ldots + w_m x_m + \epsilon = g(x) + \epsilon</math>

where <math>y = g(x)</math> is the true relationship and <math>\epsilon</math> is the residual error/noise.
We assume that <math>\epsilon \sim \mathcal{N}(0, \sigma^2)</math>, and that the errors of different data points are independent of one another.
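The sketch below (not from the original article) simulates a dataset that satisfies these assumptions; the weights, sample size, and noise level are illustrative choices.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

n = 100                               # number of data points (illustrative)
w_true = np.array([1.0, 2.0, -0.5])   # [w_0, w_1, w_2] (illustrative)
sigma = 0.3                           # noise standard deviation (illustrative)

# Features: x_0 is fixed at 1 (intercept), x_1 and x_2 are random.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])

# y = g(x) + eps, with eps ~ N(0, sigma^2) and independent across points
eps = rng.normal(0.0, sigma, size=n)
y = X @ w_true + eps
</syntaxhighlight>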
= Likelihood function =
The likelihood function gives the likelihood of observing the data given the parameters of the model. A higher likelihood indicates a better-fitting model.
The likelihood of each data point is computed. Because the errors are assumed independent, the likelihood of the whole dataset is the product of these per-point likelihoods. The weights are then adjusted to fit the data better, and the process repeats.
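A minimal sketch of this step, assuming the Gaussian noise model above and reusing the illustrative <code>X</code>, <code>y</code>, and <code>sigma</code> from the previous snippet: the likelihood of each point is the normal density of its residual, and the dataset likelihood is their product.

<syntaxhighlight lang="python">
from scipy.stats import norm

def likelihood(w, X, y, sigma):
    residuals = y - X @ w                                  # y_i - g(x_i) for each point
    per_point = norm.pdf(residuals, loc=0.0, scale=sigma)  # likelihood of each data point
    return per_point.prod()                                # product over points (independence)

# Weights that fit better give a larger likelihood, so candidate weights
# can be compared and updated accordingly.
print(likelihood(np.zeros(3), X, y, sigma))   # poor fit: tiny (may underflow to 0.0)
print(likelihood(w_true, X, y, sigma))        # good fit: larger likelihood
</syntaxhighlight>

Note that the raw product of many small densities can underflow to zero, which is one practical reason the computation is usually carried out with the logarithm of the likelihood, as in the simplification below.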
The computation can be simplified by taking the logarithm of the likelihood. Under the Gaussian noise assumption,

<math>L(w) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y_i - g(x_i))^2}{2\sigma^2}\right)</math>

so the log-likelihood is

<math>\log L(w) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n} (y_i - g(x_i))^2.</math>

Maximizing <math>\log L(w)</math> with respect to the weights is therefore equivalent to minimizing the residual sum of squares <math>\sum_{i=1}^{n} (y_i - g(x_i))^2</math>, which is why MLE recovers the least-squares coefficients.
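As a sketch (again reusing the illustrative <code>X</code>, <code>y</code>, and <code>sigma</code> from above), minimizing the negative log-likelihood numerically recovers essentially the same weights as solving the least-squares problem directly:

<syntaxhighlight lang="python">
from scipy.optimize import minimize

def neg_log_likelihood(w, X, y, sigma):
    residuals = y - X @ w
    n = len(y)
    # Negative of the log-likelihood derived above
    return 0.5 * n * np.log(2 * np.pi * sigma**2) + (residuals @ residuals) / (2 * sigma**2)

w_mle = minimize(neg_log_likelihood, x0=np.zeros(X.shape[1]), args=(X, y, sigma)).x
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)   # minimizes the RSS directly

print(np.allclose(w_mle, w_ols, atol=1e-4))     # True: MLE and least squares agree
</syntaxhighlight>

[[Category:Machine Learning]]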