User contributions for Rice
From Rice Wiki
17 April 2024
- 21:38, 17 April 2024 diff hist +45 Ordinary differential equation →Solving ODE Tag: Visual edit
- 21:37, 17 April 2024 diff hist −1 Ordinary differential equation →Classification Tag: Visual edit
- 21:35, 17 April 2024 diff hist +209 Ordinary differential equation →Example Tag: Visual edit
- 21:33, 17 April 2024 diff hist +44 N Ordinary Differential Equation Rice moved page Ordinary Differential Equation to Ordinary differential equation current Tag: New redirect
- 21:33, 17 April 2024 diff hist 0 m Ordinary differential equation Rice moved page Ordinary Differential Equation to Ordinary differential equation
- 19:59, 17 April 2024 diff hist +59 Maximum likelihood estimation →Likelihood function Tag: Visual edit
- 18:21, 17 April 2024 diff hist +1,091 N Maximum likelihood estimation Created page with "'''Maximum likelihood estimation (MLE)''' is one method of finding the coefficients of a model that minimize the RSS in linear regression. MLE does this by maximizing the likelihood of observing the training data given a model. = Background = Consider objective function <math>y = w_0 x_0 + w_1 x_1 + \ldots + w_m x_m + \epsilon = g(x) + \epsilon</math> where <math>y = g(x)</math> is the true relationship and <math>\epsilon</math> is the res..." Tag: Visual edit
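A minimal sketch of the idea behind that page, assuming Gaussian noise with unit variance (the data, names, and values below are illustrative, not from the wiki): under that assumption, maximizing the likelihood is equivalent to minimizing the RSS, so the MLE coefficients match ordinary least squares.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: y = 2*x + 1 plus Gaussian noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 1, 50)

def neg_log_likelihood(w):
    # For Gaussian residuals with unit variance, maximizing the likelihood
    # is equivalent to minimizing the residual sum of squares (RSS)
    residuals = y - (w[0] + w[1] * x)
    return 0.5 * np.sum(residuals ** 2)

w_mle = minimize(neg_log_likelihood, x0=[0.0, 0.0]).x
w_ols = np.polyfit(x, y, 1)[::-1]  # ordinary least squares, for comparison
print(w_mle, w_ols)                # both recover roughly (1, 2)
</syntaxhighlight>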
- 18:10, 17 April 2024 diff hist +152 Linear regression No edit summary Tag: Visual edit
- 15:47, 17 April 2024 diff hist +16 Karnaugh map →Matrix Tag: Visual edit
- 15:45, 17 April 2024 diff hist +89 Karnaugh map →Simplify Tag: Visual edit
- 15:43, 17 April 2024 diff hist +34 Karnaugh map →Simplify Tag: Visual edit
- 15:43, 17 April 2024 diff hist +22 Karnaugh map →How it works Tag: Visual edit
- 15:40, 17 April 2024 diff hist +420 Karnaugh map No edit summary Tag: Visual edit
- 15:29, 17 April 2024 diff hist +571 N Karnaugh map Created page with "The '''Karnaugh map (Kmap)''' is a matrix that represents the output values of a boolean function. It is primarily used to reduce digital circuits. = Minterm = A '''minterm''' is a term that consists of all inputs, complemented or not. = Matrix = Kmaps minimize equations graphically. Each cell represents an input combination. There cannot be more than one bit change from one column to the next. If there are more variables, multiple variables are placed on one axis. Note the restriction regarding b..." Tag: Visual edit
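As a small illustration of the Gray-code column ordering that page describes (the boolean function and layout below are made up for demonstration):

<syntaxhighlight lang="python">
# Lay out the truth table of f(a, b, c) = a AND (b OR c) as a Karnaugh map.
# Columns follow Gray-code order (00, 01, 11, 10), so adjacent columns
# differ by exactly one bit.
def f(a, b, c):
    return a & (b | c)

cols = [(0, 0), (0, 1), (1, 1), (1, 0)]  # Gray-code order for (b, c)
print("a\\bc  " + "  ".join(f"{b}{c}" for b, c in cols))
for a in (0, 1):
    row = "   ".join(str(f(a, b, c)) for b, c in cols)
    print(f"  {a}    {row}")
</syntaxhighlight>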
15 April 2024
- 23:14, 15 April 2024 diff hist +20 Ordinary differential equation →Solvable Classes Tag: Visual edit
- 23:14, 15 April 2024 diff hist +22 Autonomous ODE →Equilibrium Analysis Tag: Visual edit
- 22:35, 15 April 2024 diff hist +113 Equilibrium No edit summary Tag: Visual edit
- 22:33, 15 April 2024 diff hist +313 N Equilibrium Created page with "The '''equilibrium''' solution of an ODE is a value where ''y'' will not change over time. = Stability = An equilibrium is '''stable''' if slight perturbations from the equilibrium solution will not drastically change ''y''. In contrast, an equilibrium is '''unstable''' if slight perturbations cause drastic changes." Tag: Visual edit
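A worked example of that stability notion, assuming the standard first-derivative test (the logistic equation below is an illustrative choice, not from the wiki):

<syntaxhighlight lang="python">
# The logistic equation y' = F(y) = y(1 - y) has equilibria where F(c) = 0.
# An equilibrium c is stable when F'(c) < 0 (perturbations decay) and
# unstable when F'(c) > 0 (perturbations grow).
import sympy as sp

y = sp.symbols("y")
F = y * (1 - y)
for c in sp.solve(F, y):                 # equilibria: y = 0 and y = 1
    slope = sp.diff(F, y).subs(y, c)     # F'(c)
    print(c, "stable" if slope < 0 else "unstable")
# -> 0 unstable, 1 stable
</syntaxhighlight>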
- 22:28, 15 April 2024 diff hist +358 Autonomous ODE No edit summary Tag: Visual edit
- 22:20, 15 April 2024 diff hist +819 N Autonomous ODE Created page with "'''Autonomous ODEs''' have no explicit t-dependence. They come in the form <math> y' = F(y) </math> = Equilibrium = Autonomous ODEs have trivial equilibrium solutions. If <math> F(c) = 0 </math> then <math> y(t) = c </math> is an equilibrium solution of the ODE. If <math>y(t)</math> is a solution, then so is <math>z(t) = y(t + t_0)</math> for any constant <math>t_0</math> <math> \begin{aligned} y'(t) &= F(y(t))\\ z'(t) &= y'(t + t_0) \\ &= F(y(t + t_0)) \\ &= F(z(t)..."
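A small numeric check of the time-shift property quoted above; the specific <math>F(y)</math> and values are illustrative assumptions:

<syntaxhighlight lang="python">
# If y(t) solves the autonomous ODE y' = F(y), then z(t) = y(t + t0) is
# also a solution: it is the solution started from the initial value y(t0).
import numpy as np
from scipy.integrate import solve_ivp

F = lambda t, y: y * (1 - y)   # illustrative autonomous right-hand side
t0 = 2.0
t = np.linspace(0, 8, 50)

opts = dict(dense_output=True, rtol=1e-9, atol=1e-9)
y = solve_ivp(F, (0, 12), [0.1], **opts).sol
z = solve_ivp(F, (0, 12), [float(y(t0)[0])], **opts).sol

print(np.allclose(y(t + t0), z(t)))  # -> True: z(t) = y(t + t0)
</syntaxhighlight>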
- 22:13, 15 April 2024 diff hist +291 First order scalar ODE No edit summary Tag: Visual edit
- 21:47, 15 April 2024 diff hist +103 N First order scalar ODE Created page with "'''First order scalar ODEs''' are the first ODEs we study. * Scalar: One unknown * First order: ''y'''" Tag: Visual edit
- 21:44, 15 April 2024 diff hist +45 Ordinary differential equation →Solvable Classes Tag: Visual edit
- 21:35, 15 April 2024 diff hist +87 Ordinary differential equation →Other Classifications
- 21:31, 15 April 2024 diff hist +242 Ordinary differential equation →Classification Tag: Visual edit
- 21:26, 15 April 2024 diff hist −416 Ordinary differential equation →Classification Tag: Visual edit
- 21:24, 15 April 2024 diff hist +290 N Initial Value Problem a current Tag: Visual edit
- 21:12, 15 April 2024 diff hist +7 Ordinary differential equation →Example
- 21:12, 15 April 2024 diff hist +30 Ordinary differential equation →Example Tag: Visual edit
- 18:50, 15 April 2024 diff hist +54 Machine Learning No edit summary Tag: Visual edit
- 18:50, 15 April 2024 diff hist +30 Lasso regression No edit summary Tag: Visual edit
- 18:50, 15 April 2024 diff hist +30 Ridge regression No edit summary Tag: Visual edit
- 18:49, 15 April 2024 diff hist +453 N Ridge regression Created page with "'''Ridge regression''' is a regression model with an additional term called the ''regularizer''. The motive for this model is to discourage overfitting. It is similar to Regularization, except it more heavily punishes complex models. = Regularizer = The '''regularizer''' <math>\lambda</math> is an additional term to the loss function that penalizes large weights. <math>\lambda \sum w_j^2</math>" Tag: Visual edit
- 18:48, 15 April 2024 diff hist +388 N Lasso regression Created page with "'''Regularization (aka. Lasso Regression)''' is a regression model with an additional term called the ''regularizer''. The motive for this model is to discourage overfitting. = Regularizer = The '''regularizer''' <math>\lambda</math> is an additional term to the loss function that penalizes large weights. <math>\lambda \sum \left| w_j \right|</math>" Tag: Visual edit
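A hedged side-by-side of the two regularizers from the Ridge regression and Lasso regression pages, using scikit-learn (the data and penalty strengths are illustrative; scikit-learn's <code>alpha</code> plays the role of <math>\lambda</math>):

<syntaxhighlight lang="python">
# Ridge adds lambda * sum(w_j^2) to the loss; lasso adds lambda * sum(|w_j|).
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X[:, 0] * 3.0 + rng.normal(size=100)   # only the first feature matters

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.5).fit(X, y)

print(ridge.coef_.round(2))  # all coefficients shrunk, none exactly zero
print(lasso.coef_.round(2))  # irrelevant coefficients driven exactly to zero
</syntaxhighlight>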
- 18:45, 15 April 2024 diff hist +30 Polynomial Regression No edit summary current Tag: Visual edit
- 18:41, 15 April 2024 diff hist +389 Polynomial Regression No edit summary Tag: Visual edit
- 18:37, 15 April 2024 diff hist +212 Curve fitting No edit summary Tag: Visual edit
- 18:35, 15 April 2024 diff hist +961 N Linear regression Created page with "'''Linear regression''' is one of the simplest and most widely used techniques for predictive modeling. It estimates a linear relationship between a continuous dependent variable <math>y</math> and attributes (aka. independent variables) <math>X</math>. <math>y = f(X)</math> There are different types * Simple linear regression: one attribute * Multiple linear regression: multiple attributes Let the following function model the true relationship between <math>y</math> and <math>X</math> <math>\begi..." Tag: Visual edit
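A minimal sketch of simple linear regression as described on that page, fit via ordinary least squares (all names and values below are illustrative):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(30, 1))        # one attribute: simple linear regression
y = 4.0 * X[:, 0] + 2.0 + rng.normal(0, 0.5, 30)

A = np.column_stack([np.ones(len(X)), X])  # column of ones for the intercept
w, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit of y = w0 + w1*x
print(w)                                   # approximately [2.0, 4.0]
</syntaxhighlight>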
- 18:34, 15 April 2024 diff hist +82 Polynomial Regression No edit summary Tag: Visual edit
- 18:32, 15 April 2024 diff hist +246 Polynomial Regression No edit summary Tag: Visual edit
- 18:30, 15 April 2024 diff hist +35 N Polynomial Regression Model Rice moved page Polynomial Regression Model to Polynomial Regression current Tag: New redirect
- 18:30, 15 April 2024 diff hist 0 m Polynomial Regression Rice moved page Polynomial Regression Model to Polynomial Regression
- 18:29, 15 April 2024 diff hist +118 N Polynomial Regression Created page with "'''Polynomial regression''' describes the relationship between ''x'' and ''y'' as an n<sup>th</sup> degree polynomial." Tag: Visual edit
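A short sketch of fitting an n<sup>th</sup> degree polynomial (here n = 3; the data is made up for illustration):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 40)
y = x**3 - x + rng.normal(0, 0.2, 40)

coeffs = np.polyfit(x, y, deg=3)   # highest-degree coefficient first
print(np.round(coeffs, 2))         # roughly [1, 0, -1, 0]
</syntaxhighlight>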
- 18:28, 15 April 2024 diff hist +405 Gradient Descent No edit summary Tag: Visual edit
- 18:28, 15 April 2024 diff hist +100 Batch Gradient Descent No edit summary Tag: Visual edit
- 18:25, 15 April 2024 diff hist +753 N Stochastic Gradient Descent Created page with " = How it works = First, a weight <math>\bf{w}</math> is selected. This is the starting point from which we iteratively improve the solution. For ''each datapoint'' in the dataset, the ''gradient'' of the loss function with respect to the weights is computed, and a learning rate is selected. These two quantities determine the speed and direction in which the model <math>\bf{w}</math> converges. Then, a '''GD update rule''' is used to converge the weights to the desired outcome ba..." current Tag: Visual edit
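A hedged sketch of the per-datapoint update loop that page describes, for linear regression with squared loss (the weights, learning rate, and data are illustrative):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.5, -3.0]) + rng.normal(0, 0.1, 200)

w = np.zeros(2)   # starting point
lr = 0.01         # learning rate
for epoch in range(20):
    for xi, yi in zip(X, y):            # one update per data point
        grad = (xi @ w - yi) * xi       # gradient of 0.5*(x.w - y)^2 w.r.t. w
        w -= lr * grad                  # step against the gradient
print(w.round(2))                       # approaches [1.5, -3.0]
</syntaxhighlight>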
- 18:23, 15 April 2024 diff hist +309 N Batch Gradient Descent Created page with "In '''batch gradient descent''', the unit of data is the entire dataset, in contrast to Stochastic Gradient Descent, whose unit of data is one data point. It updates the weights using the ''average of the gradients'' computed over a ''batch'' of data points. * Faster * Less precise (not always)" Tag: Visual edit
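For contrast, a sketch of the batch variant on the same illustrative problem: one update per pass, using the average gradient over the whole dataset:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.5, -3.0]) + rng.normal(0, 0.1, 200)

w = np.zeros(2)
lr = 0.1
for epoch in range(200):
    grad = X.T @ (X @ w - y) / len(y)   # average gradient over the batch
    w -= lr * grad
print(w.round(2))                       # approaches [1.5, -3.0]
</syntaxhighlight>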
- 18:14, 15 April 2024 diff hist +256 N Gradient Descent Created page with "= How it works = After processing all data points, all weights are updated and one '''epoch''' is completed. == GD Update Rule == The '''GD update rule''' is used to update the weights after an iteration. === LMS === Least mean squares (LMS) is one such GD update rule." Tag: Visual edit
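A plausible form of the LMS (least mean squares, aka Widrow–Hoff) update for a single data point <math>(\mathbf{x}_i, y_i)</math> with learning rate <math>\eta</math>, assuming squared loss (the notation is an assumption, not quoted from the page):

<math>\mathbf{w} \leftarrow \mathbf{w} + \eta \left( y_i - \mathbf{w}^\top \mathbf{x}_i \right) \mathbf{x}_i</math>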
- 15:50, 15 April 2024 diff hist +400 FISC No edit summary current Tag: Visual edit
- 15:33, 15 April 2024 diff hist +35 FISC No edit summary Tag: Visual edit