Maximum likelihood estimation
[[Category:Machine Learning]]
Revision as of 00:47, 26 April 2024
Maximum likelihood estimation (MLE) is one of the methods for finding the coefficients of a model; under Gaussian noise it is equivalent to minimizing the RSS in linear regression. MLE does this by maximizing the likelihood of observing the training data given a model.
== Background ==
Consider the objective function

<math>y_i = g(x_i) + \epsilon_i</math>

where <math>g</math> is the true relationship and <math>\epsilon_i</math> is the residual error/noise.

We assume that <math>\epsilon_i \sim \mathcal{N}(0, \sigma^2)</math>, y values are independent of each other, and the noise variance <math>\sigma^2</math> is constant across data points.
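The assumed data-generating process can be sketched as follows. This is a minimal NumPy sketch; the particular <code>g</code>, noise level, and sample size are illustrative assumptions, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Hypothetical "true relationship"; any deterministic function works here.
    return 2.0 * x + 1.0

sigma = 0.5                               # assumed constant noise std. dev.
x = rng.uniform(0.0, 10.0, size=200)      # inputs
eps = rng.normal(0.0, sigma, size=200)    # epsilon ~ N(0, sigma^2), independent
y = g(x) + eps                            # observed targets
```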
== Likelihood function ==
The likelihood function measures the likelihood of observing the data given the parameters of the model. A high likelihood indicates a good model.

The likelihood of observing the data is the product of the probabilities of observing each data point, each given by the probability density function of the normal distribution:

<math>L(\theta \mid x, y) = \prod_i \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left( -\frac{(y_i - g(x_i))^2}{2 \sigma^2} \right)</math>
The weights are then adjusted so the model fits the data better, and the process repeats.
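The product of per-point densities can be sketched as below. The linear model form <code>g(x) = theta[0] * x + theta[1]</code> and the value of <code>sigma</code> are hypothetical choices for illustration.

```python
import numpy as np

def likelihood(theta, x, y, sigma=0.5):
    # Likelihood of observing (x, y) under a hypothetical linear model
    # g(x) = theta[0] * x + theta[1] with Gaussian noise of std. dev. sigma.
    residuals = y - (theta[0] * x + theta[1])
    # Normal pdf evaluated at each residual, multiplied over all data points.
    pdf = np.exp(-residuals**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    return np.prod(pdf)
```

Parameters closer to the truth give a larger product; this product is the quantity MLE maximizes.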
== Optimizations with log ==
Multiplying many probability values is computationally expensive and numerically unstable. To optimize, the ''log'' of the likelihood function is computed. Since the log of a product is the sum of the logs of its factors, the likelihood above simplifies to

<math>\log L(\theta \mid x, y) = \sum_i \log \left( \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left( -\frac{(y_i - g(x_i))^2}{2 \sigma^2} \right) \right) = \sum_i \left( -\frac{1}{2} \log(2 \pi \sigma^2) - \frac{(y_i - g(x_i))^2}{2 \sigma^2} \right)</math>
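The same computation in log space can be sketched as a sum of per-point log densities (the function takes arbitrary model predictions; <code>sigma</code> is an illustrative assumption):

```python
import numpy as np

def log_likelihood(y, pred, sigma=0.5):
    # Sum of the log of each Gaussian density term:
    # log(1/sqrt(2*pi*sigma^2)) - (y_i - g(x_i))^2 / (2*sigma^2)
    return np.sum(
        -0.5 * np.log(2 * np.pi * sigma**2)
        - (y - pred)**2 / (2 * sigma**2)
    )
```

With many data points the raw product of densities underflows to 0.0 in floating point, while the sum of logs stays finite.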
== Loss function ==
In line with gradient descent, which minimizes a loss, we can come up with the loss function of MLE by taking the negative of the log-likelihood function, giving the negative log-likelihood (NLL). Dropping the terms that do not depend on <math>\theta</math>, minimizing the NLL amounts to minimizing <math>\sum_i (y_i - g(x_i))^2</math>, which is exactly the RSS of linear regression.
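Minimizing the NLL by gradient descent can be sketched as follows. The linear model, synthetic data, learning rate, and iteration count are all illustrative assumptions; the constant log term drops out of the gradient, so only the squared-error term drives the updates.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=200)
y = 3.0 * x + 0.5 + rng.normal(0.0, 0.1, size=200)  # hypothetical data: slope 3, intercept 0.5

sigma = 0.1
theta = np.zeros(2)   # [slope, intercept]
lr = 0.01

for _ in range(2000):
    residuals = y - (theta[0] * x + theta[1])
    # Gradient of the average NLL w.r.t. theta; the constant
    # -0.5*log(2*pi*sigma^2) term vanishes under differentiation.
    grad = np.array([-np.mean(residuals * x), -np.mean(residuals)]) / sigma**2
    theta -= lr * grad
```

After the loop, <code>theta</code> approximates the least-squares fit, illustrating the equivalence between maximizing the Gaussian likelihood and minimizing the RSS.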