Understanding Gradient Descent Algorithm with Python code


Gradient Descent (GD) is the basic optimization algorithm for machine learning and deep learning. This post explains the basic concept of gradient descent with Python code.

Gradient Descent

Parameter Learning

Data is the outcome of an action or activity. \[\begin{align} y, x \end{align}\] Our goal is to predict the outcome of the next action from data. To this end, we need to develop a model that describes the data properly and use it for forecasting.

A model is a function of data and parameters \(\theta = (w, b)'\). We estimate the parameters that fit the data well. \[\begin{align} \hat{y}=wx+b \end{align}\] The loss is a distance function between data and model, such as the MSE (Mean Squared Error). \[\begin{align} J(\theta) = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2 \end{align}\] Since the data are fixed and given, learning means updating the parameters. \[\begin{align} \theta = \theta - \gamma \frac{\partial J}{\partial \theta} \end{align}\] Here \(\gamma\) is the learning rate or step size and \(\frac{\partial J}{\partial \theta}\) is the gradient, the vector of partial derivatives of \(J\) with respect to \(\theta\), as follows.

\[\begin{align} \frac{\partial J}{\partial b} &= \frac{\partial}{\partial b} \frac{1}{n} \sum_{i=1}^n (y_i-\hat{y}_i)^2 \\ &= \frac{\partial}{\partial b} \frac{1}{n} \sum_{i=1}^n (y_i-wx_i-b)^2 \\ &= \frac{1}{n} \sum_{i=1}^n 2(y_i-wx_i-b) \times (-1) \end{align}\]
\[\begin{align} \frac{\partial J}{\partial w} &= \frac{\partial}{\partial w} \frac{1}{n} \sum_{i=1}^n (y_i-\hat{y}_i)^2 \\ &= \frac{\partial}{\partial w} \frac{1}{n} \sum_{i=1}^n (y_i-wx_i-b)^2 \\ &= \frac{1}{n} \sum_{i=1}^n 2(y_i-wx_i-b) \times (-x_i) \end{align}\]
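To make the update rule concrete, here is a minimal sketch of a single gradient step for the linear model on a tiny made-up dataset (the toy numbers are only for illustration, not the data used later in this post).

import numpy as np

x = np.array([0.0, 0.5, 1.0])            # toy inputs
y = np.array([1.0, 2.0, 3.0])            # toy outputs, roughly y = 2x + 1

w, b, lr = 0.0, 0.0, 0.1                 # initial parameters and learning rate

y_hat = w*x + b                          # model prediction
djdw = 2*np.mean((y - y_hat)*(-x))       # dJ/dw from the formula above
djdb = 2*np.mean((y - y_hat)*(-1))       # dJ/db from the formula above

w = w - lr*djdw                          # step against the gradient
b = b - lr*djdb
print(w, b)                              # parameters move toward w = 2, b = 1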

Illustration for Gradient Descent

The purpose of learning is to minimize a loss or cost function \(J\) with respect to the parameters. This is done by using the gradient. However, the gradient always points in the direction of steepest increase of the loss function, as can be seen in the following figure.

Gradient descent

Therefore gradient descent, which aims to find the target parameters (\(b^*\) in the figure), takes a step in the direction of the negative gradient in order to reduce the loss. In other words, for the candidate parameters to move in the direction that reduces the loss, the new parameters are obtained by subtracting the gradient scaled by the learning rate or step size. The parameters are updated automatically by the gradient descent method, but the learning rate is set by hand; it is a hyperparameter.
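Before moving to the full example, a small illustrative sketch (not part of the original code) shows why the learning rate matters. On the one-dimensional loss \(J(b) = (b - 3)^2\), whose gradient is \(2(b - 3)\) and whose minimum is \(b^* = 3\), a too-small step converges slowly, a moderate step converges quickly, and a too-large step overshoots and diverges.

def run_gd(lr, b = 0.0, n_steps = 20):
    # repeatedly step in the direction of the negative gradient of (b - 3)**2
    for _ in range(n_steps):
        b = b - lr * 2 * (b - 3)
    return b

print(run_gd(lr = 0.01))   # too small : still far from b* = 3 after 20 steps
print(run_gd(lr = 0.3))    # moderate  : close to b* = 3
print(run_gd(lr = 1.1))    # too large : overshoots and diverges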

Python Code

The following Python code implements the explanation of the gradient descent algorithm above. Thanks to its simple structure, it is straightforward to see the relevant aspects of gradient descent.

#=========================================================================#
# Financial Econometrics & Derivatives, ML/DL using R, Python, Tensorflow
# by Sang-Heon Lee
#
# https://kiandlee.blogspot.com
#-------------------------------------------------------------------------#
# Gradient Descent example
#=========================================================================#

# -*- coding: utf-8 -*-
import numpy as np

#-------------------------------------------------------------------------#
# Declaration of functions
#-------------------------------------------------------------------------#
# Model : linear model y_hat = w*x + b
def Model(x, w, b):
    y_hat = w*x + b
    return y_hat

# Gradient : partial derivatives of the MSE loss w.r.t. w and b
def Gradient(y, x, w, b):
    y_hat = Model(x, w, b)
    djdw = 2*np.mean((y - y_hat)*(-x))
    djdb = 2*np.mean((y - y_hat)*(-1))
    return djdw, djdb

# Learning : one parameter update, lr = step size or learning rate
def Learning(y, x, w, b, lr):
    djdw, djdb = Gradient(y, x, w, b)
    w_update = w - lr*djdw
    b_update = b - lr*djdb
    return w_update, b_update

#-------------------------------------------------------------------------#
# use real data
#-------------------------------------------------------------------------#
import pandas as pd
import matplotlib.pyplot as plt

url = 'https://raw.githubusercontent.com/bammuger/blog/main/sample_data.csv'
data = pd.read_csv(url)
data.head()

plt.scatter(data.inputs, data.outputs, s = 0.5)
plt.show()
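The code above downloads a small two-column CSV file (inputs, outputs) from the author's GitHub repository. If that URL is not reachable, a synthetic dataset of a similar shape can be used instead; the slope, intercept, and noise level below are assumptions for illustration only, not the original data.

# Assumed fallback: synthetic (inputs, outputs) data with a noisy linear relation
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
inputs  = rng.uniform(0, 1, 500)
outputs = 2.0*inputs + 1.0 + rng.normal(0, 0.1, 500)
data = pd.DataFrame({"inputs": inputs, "outputs": outputs})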

1) Sufficient Iteration

#-------------------------------------------------------------------------#
# Learning - sufficient iteration
#-------------------------------------------------------------------------#
# initial guess
w = 2; b = 3; step = 0.05

# Iterated learning process by parameter update using gradient descent
for i in range(0, 5000):
    y = data.outputs
    x = data.inputs
    w, b = Learning(y, x, w, b, step)

print("Learned_w: {}, Learned_b: {}".format(w, b))

X = np.linspace(0, 1, 100)
Y = w * X + b

plt.scatter(data.inputs, data.outputs, s = 0.3)
plt.plot(X, Y, '-r', linewidth = 1.5)
plt.show()
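One simple way to check that 5,000 iterations are indeed sufficient is to record the MSE loss during training and confirm that it has flattened out. The following is a minimal sketch reusing the Model and Learning functions defined above.

# track the MSE loss every 100 updates to confirm convergence
w = 2; b = 3; step = 0.05
losses = []
for i in range(0, 5000):
    w, b = Learning(data.outputs, data.inputs, w, b, step)
    if i % 100 == 0:
        losses.append(np.mean((data.outputs - Model(data.inputs, w, b))**2))

plt.plot(losses)
plt.xlabel("iteration / 100")
plt.ylabel("MSE loss")
plt.show()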

2) Insufficient Iteration

#-------------------------------------------------------------------------#
# Learning - insufficient iteration
#-------------------------------------------------------------------------#
# initial guess
w = 2; b = 3; step = 0.05

# Iterated learning process by parameter update using gradient descent
for i in range(0, 10):
    y = data.outputs
    x = data.inputs
    w, b = Learning(y, x, w, b, step)

print("Learned_w: {}, Learned_b: {}".format(w, b))

X = np.linspace(0, 1, 100)
Y = (w * X) + b

plt.scatter(data.inputs, data.outputs, s = 0.3)
plt.plot(X, Y, '-r', linewidth = 1.5)
plt.show()

In the next post, we will cover the stochastic gradient descent and mini-batch gradient descent algorithms, which are variants of GD. \(\blacksquare\)
