Gradient Descent With Momentum from Scratch
Gradient descent is an optimization algorithm that follows the negative gradient of an objective function in order to locate the minimum of the function.
A problem with gradient descent is that it can bounce around the search space on optimization problems that have large amounts of curvature or noisy gradients, and it can get stuck in flat spots in the search space that have no gradient.
Momentum is an extension to the gradient descent optimization algorithm that allows the search to build inertia in a direction in the search space and overcome the oscillations of noisy gradients and coast across flat spots of the search space.
In this tutorial, you will discover the gradient descent with momentum algorithm.
After completing this tutorial, you will know:
- Gradient descent is an optimization algorithm that uses the gradient of the objective function to navigate the search space.
- Gradient descent can be accelerated by using momentum from past updates to the search position.
- How to implement gradient descent optimization with momentum and develop an intuition for its behavior.
Let's get started.
Gradient Descent With Momentum from Scratch
Photo by Chris Barnes, some rights reserved.
Tutorial Overview
This tutorial is divided into three parts; they are:
- Gradient Descent
- Momentum
- Gradient Descent With Momentum
  - One-Dimensional Test Problem
  - Gradient Descent Optimization
  - Visualization of Gradient Descent Optimization
  - Gradient Descent Optimization With Momentum
  - Visualization of Gradient Descent Optimization With Momentum
Gradient Descent
Gradient descent is an optimization algorithm.
It is technically referred to as a first-order optimization algorithm, as it explicitly makes use of the first-order derivative of the target objective function.
First-order methods rely on gradient information to help direct the search for a minimum …
— Page 69, Algorithms for Optimization, 2019.
The first-order derivative, or simply the "derivative," is the rate of change or slope of the target function at a specific point, e.g. for a specific input.
If the target function takes multiple input variables, it is referred to as a multivariate function, and the input variables can be thought of as a vector. In turn, the derivative of a multivariate target function may also be taken as a vector and is generally referred to as the "gradient."
- Gradient: First-order derivative for a multivariate objective function.
The derivative or the gradient points in the direction of steepest ascent of the target function for a specific input.
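To make the idea of a gradient concrete, here is a minimal sketch (an illustration, not part of this tutorial's worked example) of the gradient of a simple two-variable function f(x, y) = x^2 + y^2; the function and the sample point are assumptions chosen for clarity:

```python
# illustrative sketch: the gradient of f(x, y) = x^2 + y^2 as a vector of partial derivatives
from numpy import asarray

def gradient(v):
    # [df/dx, df/dy] = [2x, 2y]
    return asarray([2.0 * v[0], 2.0 * v[1]])

point = asarray([0.5, -0.25])
print(gradient(point))  # [ 1.  -0.5], pointing in the direction of steepest ascent
```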
Gradient descent refers to a minimization optimization algorithm that follows the negative of the gradient downhill of the target function to locate the minimum of the function.
The gradient descent algorithm requires a target function that is being optimized and the derivative function for the objective function. The target function f() returns a score for a given set of inputs, and the derivative function f'() gives the derivative of the target function for a given set of inputs.
The gradient descent algorithm requires a starting point (x) in the problem, such as a randomly selected point in the input space.
The derivative is then calculated and a step is taken in the input space that is expected to result in a downhill movement in the target function, assuming we are minimizing the target function.
A downhill movement is made by first calculating how far to move in the input space, calculated as the step size (called alpha or the learning rate) multiplied by the gradient. This is then subtracted from the current point, ensuring we move against the gradient, or down the target function.
- x = x - step_size * f'(x)
The steeper the objective function at a given point, the larger the magnitude of the gradient and, in turn, the larger the step taken in the search space. The size of the step taken is scaled using a step size hyperparameter.
- Step Size (alpha): Hyperparameter that controls how far to move in the search space against the gradient each iteration of the algorithm, also called the learning rate.
If the step size is too small, the movement in the search space will be small and the search will take a long time. If the step size is too large, the search may bounce around the search space and skip over the optima.
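As a minimal sketch of the update rule above (the starting point and step size here are illustrative assumptions, not the values used in the worked example later), a single gradient descent step on f(x) = x^2 might look like this:

```python
# a single gradient descent update for f(x) = x^2 (illustrative values)
def derivative(x):
    return x * 2.0

x = 0.8          # current point
step_size = 0.1  # alpha, the learning rate
x = x - step_size * derivative(x)  # move against the gradient
print(x)  # 0.64: a small step downhill toward the minimum at x = 0.0
```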
Now that we are familiar with the gradient descent optimization algorithm, let's take a look at momentum.
Momentum
Momentum is an extension to the gradient descent optimization algorithm, often referred to as gradient descent with momentum.
It is designed to accelerate the optimization process, e.g. decrease the number of function evaluations required to reach the optima, or to improve the capability of the optimization algorithm, e.g. result in a better final outcome.
A problem with the gradient descent algorithm is that the progression of the search can bounce around the search space based on the gradient. For example, the search may progress downhill towards the minima, but during this progression, it may move in another direction, even uphill, depending on the gradient of specific points (sets of parameters) encountered during the search.
This can slow down the progress of the search, especially for those optimization problems where the broader trend or shape of the search space is more useful than specific gradients along the way.
One approach to this problem is to add history to the parameter update equation based on the gradient encountered in the previous updates.
This change is based on the metaphor of momentum from physics, where acceleration in a direction can be accumulated from past updates.
The name momentum derives from a physical analogy, in which the negative gradient is a force moving a particle through parameter space, according to Newton's laws of motion.
— Page 296, Deep Learning, 2016.
Momentum involves adding an additional hyperparameter that controls the amount of history (momentum) to include in the update equation, i.e. the step to a new point in the search space. The value for the hyperparameter is defined in the range 0.0 to 1.0 and often has a value close to 1.0, such as 0.8, 0.9, or 0.99. A momentum of 0.0 is the same as gradient descent without momentum.
First, let's break the gradient descent update equation down into two parts: the calculation of the change to the position and the update of the old position to the new position.
The change in the parameters is calculated as the gradient for the point scaled by the step size.
- change_x = step_size * f'(x)
The new position is calculated by simply subtracting the change from the current point.
- x = x - change_x
Momentum involves maintaining the change in the position and using it in the subsequent calculation of the change in position.
If we think of updates over time, then the update at the current iteration or time (t) will add the change used at the previous time (t-1) weighted by the momentum hyperparameter, as follows:
- change_x(t) = step_size * f'(x(t-1)) + momentum * change_x(t-1)
The update to the position is then performed as before.
- x(t) = x(t-1) - change_x(t)
The change in the position accumulates the magnitude and direction of changes over the iterations of the search, proportional to the size of the momentum hyperparameter.
For example, a large momentum (e.g. 0.9) will mean that the update is strongly influenced by the previous update, whereas a modest momentum (0.2) will mean very little influence.
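To build intuition for this accumulation, the following minimal sketch (the constant gradient is an illustrative assumption) iterates the change_x(t) recurrence; under a steady gradient the change approaches step_size * gradient / (1 - momentum), i.e. a larger effective step:

```python
# how momentum accumulates under a constant gradient (illustrative sketch)
step_size = 0.1
momentum = 0.9
gradient = 1.0  # assume the gradient stays constant

change = 0.0
for i in range(50):
    # change_x(t) = step_size * f'(x(t-1)) + momentum * change_x(t-1)
    change = step_size * gradient + momentum * change
print(change)  # ~0.995, approaching 0.1 * 1.0 / (1 - 0.9) = 1.0
```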
The momentum algorithm accumulates an exponentially decaying moving average of past gradients and continues to move in their direction.
— Page 296, Deep Learning, 2016.
Momentum has the effect of dampening down the change in the gradient and, in turn, the step size with each new point in the search space.
Momentum can increase speed when the cost surface is highly nonspherical because it damps the size of the steps along directions of high curvature thus yielding a larger effective learning rate along the directions of low curvature.
— Page 21, Neural Networks: Tricks of the Trade, 2012.
Momentum is most useful in optimization problems where the objective function has a large amount of curvature (e.g. changes a lot), meaning that the gradient may change a lot over relatively small regions of the search space.
The method of momentum is designed to accelerate learning, especially in the face of high curvature, small but consistent gradients, or noisy gradients.
— Page 296, Deep Learning, 2016.
It is also helpful when the gradient is estimated, such as from a simulation, and may be noisy, e.g. when the gradient has a high variance.
Finally, momentum is helpful when the search space is flat or nearly flat, e.g. has zero gradient. The momentum allows the search to progress in the same direction as before the flat spot and helpfully cross the flat region.
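A minimal sketch of this coasting behavior (the zero gradient and the starting change are illustrative assumptions): when the gradient vanishes, plain gradient descent stops moving, while the momentum term keeps the change nonzero for several iterations:

```python
# coasting across a flat spot (zero gradient) thanks to momentum (illustrative sketch)
step_size, momentum = 0.1, 0.9
change = 0.05  # change carried in from just before the flat region
for i in range(5):
    gradient = 0.0  # flat spot: the gradient provides no direction
    change = step_size * gradient + momentum * change
    print(change)  # decays geometrically (0.045, 0.0405, ...) but keeps the search moving
```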
Now that we are familiar with what momentum is, let's look at a worked example.
Gradient Descent with Momentum
In this section, we will first implement the gradient descent optimization algorithm, then update it to use momentum and compare results.
One-Dimensional Test Problem
First, let's define an optimization function.
We will use a simple one-dimensional function that squares the input and defines the range of valid inputs from -1.0 to 1.0.
The objective() function below implements this function.

```python
# objective function
def objective(x):
    return x**2.0
```
We can then sample all inputs in the range and calculate the objective function value for each.

```python
...
# define range for input
r_min, r_max = -1.0, 1.0
# sample input range uniformly at 0.1 increments
inputs = arange(r_min, r_max+0.1, 0.1)
# compute targets
results = objective(inputs)
```
Finally, we can create a line plot of the inputs (x-axis) versus the objective function values (y-axis) to get an intuition for the shape of the objective function that we will be searching.

```python
...
# create a line plot of input vs result
pyplot.plot(inputs, results)
# show the plot
pyplot.show()
```
The example below ties this together and provides a complete example of plotting the one-dimensional test function.

```python
# plot of simple function
from numpy import arange
from matplotlib import pyplot

# objective function
def objective(x):
    return x**2.0

# define range for input
r_min, r_max = -1.0, 1.0
# sample input range uniformly at 0.1 increments
inputs = arange(r_min, r_max+0.1, 0.1)
# compute targets
results = objective(inputs)
# create a line plot of input vs result
pyplot.plot(inputs, results)
# show the plot
pyplot.show()
```
Running the example creates a line plot of the inputs to the function (x-axis) and the calculated output of the function (y-axis).
We can see the familiar U-shape called a parabola.

Line Plot of Simple One-Dimensional Function
Gradient Descent Optimization
Next, we can apply the gradient descent algorithm to the problem.
First, we need a function that calculates the derivative for the objective function.
The derivative of x^2 is x * 2, and the derivative() function below implements this.

```python
# derivative of objective function
def derivative(x):
    return x * 2.0
```
We can define a function that implements the gradient descent optimization algorithm.
The procedure involves starting with a randomly selected point in the search space, then calculating the gradient, updating the position in the search space, evaluating the new position, and reporting progress. This process is then repeated for a fixed number of iterations. The final point and its evaluation are then returned from the function.
The gradient_descent() function below implements this and takes the names of the objective and gradient functions as well as the bounds on the inputs to the objective function, the number of iterations, and the step size, then returns the solution and its evaluation at the end of the search.

```python
# gradient descent algorithm
def gradient_descent(objective, derivative, bounds, n_iter, step_size):
    # generate an initial point
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # run the gradient descent
    for i in range(n_iter):
        # calculate gradient
        gradient = derivative(solution)
        # take a step
        solution = solution - step_size * gradient
        # evaluate candidate point
        solution_eval = objective(solution)
        # report progress
        print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solution, solution_eval]
```
We can then define the bounds of the objective function, the step size, and the number of iterations for the algorithm.
We will use a step size of 0.1 and 30 iterations, both found after a little experimentation.
The seed for the pseudorandom number generator is fixed so that we always get the same sequence of random numbers, and in this case, it ensures that we get the same starting point for the search each time the code is run (e.g. something interesting far from the optima).

```python
...
# seed the pseudo random number generator
seed(4)
# define range for input
bounds = asarray([[-1.0, 1.0]])
# define the total iterations
n_iter = 30
# define the step size
step_size = 0.1
# perform the gradient descent search
best, score = gradient_descent(objective, derivative, bounds, n_iter, step_size)
```
Tying this together, the complete example of applying gradient descent to our one-dimensional test function is listed below.

```python
# example of gradient descent for a one-dimensional function
from numpy import asarray
from numpy.random import rand
from numpy.random import seed

# objective function
def objective(x):
    return x**2.0

# derivative of objective function
def derivative(x):
    return x * 2.0

# gradient descent algorithm
def gradient_descent(objective, derivative, bounds, n_iter, step_size):
    # generate an initial point
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # run the gradient descent
    for i in range(n_iter):
        # calculate gradient
        gradient = derivative(solution)
        # take a step
        solution = solution - step_size * gradient
        # evaluate candidate point
        solution_eval = objective(solution)
        # report progress
        print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solution, solution_eval]

# seed the pseudo random number generator
seed(4)
# define range for input
bounds = asarray([[-1.0, 1.0]])
# define the total iterations
n_iter = 30
# define the step size
step_size = 0.1
# perform the gradient descent search
best, score = gradient_descent(objective, derivative, bounds, n_iter, step_size)
print('Done!')
print('f(%s) = %f' % (best, score))
```
Running the example starts with a random point in the search space, then applies the gradient descent algorithm, reporting performance along the way.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see that the algorithm finds a good solution after about 27 iterations, with a function evaluation of about 0.0.
Note the optima for this function is at f(0.0) = 0.0.
We would expect that gradient descent with momentum will accelerate the optimization procedure and find a similarly evaluated solution in fewer iterations.
```
>0 f([0.74724774]) = 0.55838
>1 f([0.59779819]) = 0.35736
>2 f([0.47823856]) = 0.22871
>3 f([0.38259084]) = 0.14638
>4 f([0.30607268]) = 0.09368
>5 f([0.24485814]) = 0.05996
>6 f([0.19588651]) = 0.03837
>7 f([0.15670921]) = 0.02456
>8 f([0.12536737]) = 0.01572
>9 f([0.10029389]) = 0.01006
>10 f([0.08023512]) = 0.00644
>11 f([0.06418809]) = 0.00412
>12 f([0.05135047]) = 0.00264
>13 f([0.04108038]) = 0.00169
>14 f([0.0328643]) = 0.00108
>15 f([0.02629144]) = 0.00069
>16 f([0.02103315]) = 0.00044
>17 f([0.01682652]) = 0.00028
>18 f([0.01346122]) = 0.00018
>19 f([0.01076897]) = 0.00012
>20 f([0.00861518]) = 0.00007
>21 f([0.00689214]) = 0.00005
>22 f([0.00551372]) = 0.00003
>23 f([0.00441097]) = 0.00002
>24 f([0.00352878]) = 0.00001
>25 f([0.00282302]) = 0.00001
>26 f([0.00225842]) = 0.00001
>27 f([0.00180673]) = 0.00000
>28 f([0.00144539]) = 0.00000
>29 f([0.00115631]) = 0.00000
Done!
f([0.00115631]) = 0.000001
```
Visualization of Gradient Descent Optimization
Next, we can visualize the progress of the search on a plot of the target function.
First, we can update the gradient_descent() function to store all solutions and their scores found during the optimization as lists and return them at the end of the search instead of the best solution found.
```python
# gradient descent algorithm
def gradient_descent(objective, derivative, bounds, n_iter, step_size):
    # track all solutions
    solutions, scores = list(), list()
    # generate an initial point
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # run the gradient descent
    for i in range(n_iter):
        # calculate gradient
        gradient = derivative(solution)
        # take a step
        solution = solution - step_size * gradient
        # evaluate candidate point
        solution_eval = objective(solution)
        # store solution
        solutions.append(solution)
        scores.append(solution_eval)
        # report progress
        print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solutions, scores]
```
The function can be called, and we can get the lists of the solutions and their scores found during the search.

```python
...
# perform the gradient descent search
solutions, scores = gradient_descent(objective, derivative, bounds, n_iter, step_size)
```
We can create a line plot of the objective function, as before.

```python
...
# sample input range uniformly at 0.1 increments
inputs = arange(bounds[0,0], bounds[0,1]+0.1, 0.1)
# compute targets
results = objective(inputs)
# create a line plot of input vs result
pyplot.plot(inputs, results)
```
Finally, we can plot each solution found as a red dot and connect the dots with a line so we can see how the search moved downhill.

```python
...
# plot the solutions found
pyplot.plot(solutions, scores, '.-', color='red')
```
Tying this all together, the complete example of plotting the result of the gradient descent search on the one-dimensional test function is listed below.

```python
# example of plotting a gradient descent search on a one-dimensional function
from numpy import asarray
from numpy import arange
from numpy.random import rand
from numpy.random import seed
from matplotlib import pyplot

# objective function
def objective(x):
    return x**2.0

# derivative of objective function
def derivative(x):
    return x * 2.0

# gradient descent algorithm
def gradient_descent(objective, derivative, bounds, n_iter, step_size):
    # track all solutions
    solutions, scores = list(), list()
    # generate an initial point
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # run the gradient descent
    for i in range(n_iter):
        # calculate gradient
        gradient = derivative(solution)
        # take a step
        solution = solution - step_size * gradient
        # evaluate candidate point
        solution_eval = objective(solution)
        # store solution
        solutions.append(solution)
        scores.append(solution_eval)
        # report progress
        print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solutions, scores]

# seed the pseudo random number generator
seed(4)
# define range for input
bounds = asarray([[-1.0, 1.0]])
# define the total iterations
n_iter = 30
# define the step size
step_size = 0.1
# perform the gradient descent search
solutions, scores = gradient_descent(objective, derivative, bounds, n_iter, step_size)
# sample input range uniformly at 0.1 increments
inputs = arange(bounds[0,0], bounds[0,1]+0.1, 0.1)
# compute targets
results = objective(inputs)
# create a line plot of input vs result
pyplot.plot(inputs, results)
# plot the solutions found
pyplot.plot(solutions, scores, '.-', color='red')
# show the plot
pyplot.show()
```
Running the example performs the gradient descent search on the objective function as before, except in this case, each point found during the search is plotted.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see that the search started more than halfway up the right part of the function and stepped downhill to the bottom of the basin.
We can see that in the parts of the objective function with the larger curve, the derivative (gradient) is larger, and in turn, larger steps are taken. Similarly, the gradient is smaller as we get closer to the optima, and in turn, smaller steps are taken.
This highlights that the step size is used as a scale factor on the magnitude of the gradient (curvature) of the objective function.
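A quick numeric check of this scaling (a sketch using the same step size of 0.1 and the derivative of x^2; the sample points are illustrative):

```python
# step taken = step_size * gradient, so steeper points produce larger steps
step_size = 0.1
for x in [0.8, 0.4, 0.1]:
    gradient = 2.0 * x
    print('x=%.1f gradient=%.2f step=%.3f' % (x, gradient, step_size * gradient))
```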

Plot of the Progress of Gradient Descent on a One-Dimensional Objective Function
Gradient Descent Optimization With Momentum
Next, we can update the gradient descent optimization algorithm to use momentum.
This can be achieved by updating the gradient_descent() function to take a "momentum" argument that defines the amount of momentum used during the search.
The change made to the solution must be remembered from the previous iteration of the loop, with an initial value of 0.0.

```python
...
# keep track of the change
change = 0.0
```
We can then break the update procedure down into first calculating the gradient, then calculating the change to the solution, calculating the position of the new solution, and then saving the change for the next iteration.

```python
...
# calculate gradient
gradient = derivative(solution)
# calculate update
new_change = step_size * gradient + momentum * change
# take a step
solution = solution - new_change
# save the change
change = new_change
```
The updated version of the gradient_descent() function with these changes is listed below.

```python
# gradient descent algorithm
def gradient_descent(objective, derivative, bounds, n_iter, step_size, momentum):
    # generate an initial point
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # keep track of the change
    change = 0.0
    # run the gradient descent
    for i in range(n_iter):
        # calculate gradient
        gradient = derivative(solution)
        # calculate update
        new_change = step_size * gradient + momentum * change
        # take a step
        solution = solution - new_change
        # save the change
        change = new_change
        # evaluate candidate point
        solution_eval = objective(solution)
        # report progress
        print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solution, solution_eval]
```
We can then choose a momentum value and pass it to the gradient_descent() function.
After a little trial and error, a momentum value of 0.3 was found to be effective on this problem, given the fixed step size of 0.1.

```python
...
# define momentum
momentum = 0.3
# perform the gradient descent search with momentum
best, score = gradient_descent(objective, derivative, bounds, n_iter, step_size, momentum)
```
Tying this together, the complete example of gradient descent optimization with momentum is listed below.

```python
# example of gradient descent with momentum for a one-dimensional function
from numpy import asarray
from numpy.random import rand
from numpy.random import seed

# objective function
def objective(x):
    return x**2.0

# derivative of objective function
def derivative(x):
    return x * 2.0

# gradient descent algorithm
def gradient_descent(objective, derivative, bounds, n_iter, step_size, momentum):
    # generate an initial point
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # keep track of the change
    change = 0.0
    # run the gradient descent
    for i in range(n_iter):
        # calculate gradient
        gradient = derivative(solution)
        # calculate update
        new_change = step_size * gradient + momentum * change
        # take a step
        solution = solution - new_change
        # save the change
        change = new_change
        # evaluate candidate point
        solution_eval = objective(solution)
        # report progress
        print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solution, solution_eval]

# seed the pseudo random number generator
seed(4)
# define range for input
bounds = asarray([[-1.0, 1.0]])
# define the total iterations
n_iter = 30
# define the step size
step_size = 0.1
# define momentum
momentum = 0.3
# perform the gradient descent search with momentum
best, score = gradient_descent(objective, derivative, bounds, n_iter, step_size, momentum)
print('Done!')
print('f(%s) = %f' % (best, score))
```
Running the example starts with a random point in the search space, then applies the gradient descent algorithm with momentum, reporting performance along the way.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see that the algorithm finds a good solution after about 13 iterations, with a function evaluation of about 0.0.
As expected, this is faster (fewer iterations) than gradient descent without momentum, using the same starting point and step size, which took 27 iterations.
```
>0 f([0.74724774]) = 0.55838
>1 f([0.54175461]) = 0.29350
>2 f([0.37175575]) = 0.13820
>3 f([0.24640494]) = 0.06072
>4 f([0.15951871]) = 0.02545
>5 f([0.1015491]) = 0.01031
>6 f([0.0638484]) = 0.00408
>7 f([0.03976851]) = 0.00158
>8 f([0.02459084]) = 0.00060
>9 f([0.01511937]) = 0.00023
>10 f([0.00925406]) = 0.00009
>11 f([0.00564365]) = 0.00003
>12 f([0.0034318]) = 0.00001
>13 f([0.00208188]) = 0.00000
>14 f([0.00126053]) = 0.00000
>15 f([0.00076202]) = 0.00000
>16 f([0.00046006]) = 0.00000
>17 f([0.00027746]) = 0.00000
>18 f([0.00016719]) = 0.00000
>19 f([0.00010067]) = 0.00000
>20 f([6.05804744e-05]) = 0.00000
>21 f([3.64373635e-05]) = 0.00000
>22 f([2.19069576e-05]) = 0.00000
>23 f([1.31664443e-05]) = 0.00000
>24 f([7.91100141e-06]) = 0.00000
>25 f([4.75216828e-06]) = 0.00000
>26 f([2.85408468e-06]) = 0.00000
>27 f([1.71384267e-06]) = 0.00000
>28 f([1.02900153e-06]) = 0.00000
>29 f([6.17748881e-07]) = 0.00000
Done!
f([6.17748881e-07]) = 0.000000
```
Visualization of Gradient Descent Optimization With Momentum
Finally, we can visualize the progress of the gradient descent optimization algorithm with momentum.
The complete example is listed below.
```python
# example of plotting gradient descent with momentum for a one-dimensional function
from numpy import asarray
from numpy import arange
from numpy.random import rand
from numpy.random import seed
from matplotlib import pyplot

# objective function
def objective(x):
    return x**2.0

# derivative of objective function
def derivative(x):
    return x * 2.0

# gradient descent algorithm
def gradient_descent(objective, derivative, bounds, n_iter, step_size, momentum):
    # track all solutions
    solutions, scores = list(), list()
    # generate an initial point
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # keep track of the change
    change = 0.0
    # run the gradient descent
    for i in range(n_iter):
        # calculate gradient
        gradient = derivative(solution)
        # calculate update
        new_change = step_size * gradient + momentum * change
        # take a step
        solution = solution - new_change
        # save the change
        change = new_change
        # evaluate candidate point
        solution_eval = objective(solution)
        # store solution
        solutions.append(solution)
        scores.append(solution_eval)
        # report progress
        print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solutions, scores]

# seed the pseudo random number generator
seed(4)
# define range for input
bounds = asarray([[-1.0, 1.0]])
# define the total iterations
n_iter = 30
# define the step size
step_size = 0.1
# define momentum
momentum = 0.3
# perform the gradient descent search with momentum
solutions, scores = gradient_descent(objective, derivative, bounds, n_iter, step_size, momentum)
# sample input range uniformly at 0.1 increments
inputs = arange(bounds[0,0], bounds[0,1]+0.1, 0.1)
# compute targets
results = objective(inputs)
# create a line plot of input vs result
pyplot.plot(inputs, results)
# plot the solutions found
pyplot.plot(solutions, scores, '.-', color='red')
# show the plot
pyplot.show()
```
Running the example performs the gradient descent search with momentum on the objective function as before, except in this case, each point found during the search is plotted.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, if we compare the plot to the plot created previously for the performance of gradient descent (without momentum), we can see that the search indeed reaches the optima in fewer steps, noted with fewer distinct red dots on the path to the bottom of the basin.

Plot of the Progress of Gradient Descent With Momentum on a One-Dimensional Objective Function
As an extension, try different values for momentum, such as 0.8, and review the resulting plot; a sketch of one way to do this is shown below.
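One possible way to explore this extension (a sketch that assumes the functions and variables from the visualization example above are already defined):

```python
# compare the paths taken with several momentum values (illustrative sketch)
for momentum in [0.0, 0.3, 0.8]:
    # reset the seed so every run starts from the same point
    seed(4)
    solutions, scores = gradient_descent(objective, derivative, bounds, n_iter, step_size, momentum)
    pyplot.plot(solutions, scores, '.-', label='momentum=%.1f' % momentum)
pyplot.plot(inputs, results)
pyplot.legend()
pyplot.show()
```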
Let me know what you discover in the comments below.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Books
- Algorithms for Optimization, 2019.
- Deep Learning, 2016.
- Neural Networks: Tricks of the Trade, 2012.
Summary
In this tutorial, you discovered the gradient descent with momentum algorithm.
Specifically, you learned:
- Gradient descent is an optimization algorithm that uses the gradient of the objective function to navigate the search space.
- Gradient descent can be accelerated by using momentum from past updates to the search position.
- How to implement gradient descent optimization with momentum and develop an intuition for its behavior.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.