
'''Newton's method''' is an alternative to [[Gradient Descent]]. Whereas GD uses only the first derivative of the loss to approach the optimal model, Newton's method also uses the ''second derivative'' (the local curvature), which typically lets it converge in ''fewer iterations''.

Newton's method has the drawback that each step is more computationally expensive, since the second derivative must be computed as well; for a model with many weights this means the Hessian matrix, of which the per-weight update below uses only the diagonal entries:

<math>
w_j = w_j - a\frac{\frac{\partial l}{\partial w_j}}{\frac{\partial^2 l}{\partial w_j^2}}
</math>

Here <math>l</math> is the loss, <math>w_j</math> is the <math>j</math>-th weight, and <math>a</math> is the step size (the classical Newton step uses <math>a = 1</math>; smaller values give a damped update).
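
A minimal sketch of this update in Python, for a single weight with an illustrative quadratic loss; the names <code>newton_step</code>, <code>grad</code>, and <code>hess</code> are assumptions for this example, not part of the page:

<syntaxhighlight lang="python">
# One damped Newton update: w <- w - a * l'(w) / l''(w).
def newton_step(w, grad, hess, a=1.0):
    return w - a * grad(w) / hess(w)

# Illustrative loss l(w) = (w - 3)^2 + 1, minimized at w = 3.
def grad(w):
    return 2.0 * (w - 3.0)   # dl/dw

def hess(w):
    return 2.0               # d^2 l / dw^2

w = 0.0
for step in range(3):
    w = newton_step(w, grad, hess)
    print(step, w)           # reaches w = 3.0 on the first step
</syntaxhighlight>

Because this example loss is quadratic, the full step <math>a = 1</math> lands exactly on the minimum in a single iteration; this one-step behavior on quadratics is the faster convergence referred to above.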

[[Category:Machine Learning]]