This article was first published on K & L Fintech Modeling, and kindly contributed to python-bloggers.

Gradient Descent (GD) is the basic optimization algorithm for machine learning or deep learning. This post explains the basic concept of gradient descent with python code.

### Parameter Learning

Data is the outcome of an action or activity, recorded as pairs $$(y, x)$$. Our goal is to predict the outcome of the next action from data. To this end, we need a model that describes the data well and can be used for forecasting.

A model is a function of data and parameters ($$\theta = (w, b)'$$). We estimate the parameters that fit the data well. \begin{align} \hat{y}=wx+b \end{align} The loss is a distance function between data and model, such as the MSE (Mean Squared Error). \begin{align} J(\theta) = \frac{1}{n} \sum_i^n (y_i - \hat{y}_i)^2 \end{align} Since the data are fixed and given, learning amounts to updating the parameters. \begin{align} \theta = \theta - \gamma \frac{\partial J}{\partial \theta} \end{align} Here $$\gamma$$ is the learning rate or step size and $$\frac{\partial J}{\partial \theta}$$ is the gradient, i.e. the vector of partial derivatives of $$J$$ with respect to $$\theta$$, which is derived as follows.

\begin{align} \frac{\partial J}{\partial b} &= \frac{\partial \frac{1}{n} \sum_i^n (y_i-\hat{y}_i)^2}{\partial b} \\ &= \frac{\partial \frac{1}{n} \sum_i^n (y_i-wx_i-b)^2}{\partial b} \\ &= \frac{1}{n} \sum_i^n 2(y_i-wx_i-b) \times (-1) \end{align}
\begin{align} \frac{\partial J}{\partial w} &= \frac{\partial \frac{1}{n} \sum_i^n (y_i-\hat{y}_i)^2}{\partial w} \\ &= \frac{\partial \frac{1}{n} \sum_i^n (y_i-wx_i-b)^2}{\partial w} \\ &= \frac{1}{n} \sum_i^n 2(y_i-wx_i-b) \times (-x_i) \end{align}
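As a quick sanity check on the derivation above, the analytic gradients can be compared against finite-difference approximations of the loss. The sketch below uses synthetic data, and the helper names (`gradients`, `numerical_gradients`) are my own, not from the original post:

```python
import numpy as np

# Analytic gradients from the derivation above
def gradients(y, x, w, b):
    y_hat = w * x + b
    djdw = 2 * np.mean((y - y_hat) * (-x))
    djdb = 2 * np.mean((y - y_hat) * (-1))
    return djdw, djdb

# Central finite differences: perturb each parameter slightly
def numerical_gradients(y, x, w, b, eps=1e-6):
    loss = lambda w_, b_: np.mean((y - (w_ * x + b_)) ** 2)
    djdw = (loss(w + eps, b) - loss(w - eps, b)) / (2 * eps)
    djdb = (loss(w, b + eps) - loss(w, b - eps)) / (2 * eps)
    return djdw, djdb

rng = np.random.default_rng(0)
x = rng.uniform(size=50)
y = 1.5 * x + 0.5 + rng.normal(scale=0.1, size=50)

print(gradients(y, x, w=2.0, b=3.0))
print(numerical_gradients(y, x, w=2.0, b=3.0))
```

The two printed pairs should agree to several decimal places, which confirms the signs and the $$\frac{1}{n}$$ factor in the formulas.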

### Illustration for Gradient Descent

The purpose of learning is to minimize the loss or cost function $$J$$ with respect to the parameters. This is done using the gradient. But the gradient always points in the direction of steepest increase of the loss function, as can be seen in the following figure.

Therefore gradient descent, which aims to find the target parameters ($$b^*$$), takes a step in the direction of the negative gradient in order to reduce the loss. To move the candidate parameters in a loss-reducing direction, the new parameters are obtained by subtracting the gradient scaled by the learning rate or step size. In other words, the parameters are determined automatically by the gradient descent method, but the learning rate is set by hand: it is a hyperparameter.
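Since the learning rate is set by hand, it is worth seeing how it affects convergence. The sketch below tracks the loss over iterations for a few learning rates on a toy linear model with synthetic data (`gd_losses` is a made-up helper name, not from the original post):

```python
import numpy as np

def loss(y, x, w, b):
    return np.mean((y - (w * x + b)) ** 2)

# Run plain gradient descent and record the loss after each update
def gd_losses(y, x, lr, n_iter=50, w=2.0, b=3.0):
    history = []
    for _ in range(n_iter):
        y_hat = w * x + b
        w -= lr * 2 * np.mean((y - y_hat) * (-x))
        b -= lr * 2 * np.mean((y - y_hat) * (-1))
        history.append(loss(y, x, w, b))
    return history

rng = np.random.default_rng(1)
x = rng.uniform(size=100)
y = 1.5 * x + 0.5 + rng.normal(scale=0.1, size=100)

for lr in (0.01, 0.1, 0.5):
    print(lr, gd_losses(y, x, lr)[-1])
```

With a small learning rate the loss decreases slowly; a larger one converges faster, but too large a step would overshoot and diverge.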

### Python Code

The following Python code implements the gradient descent algorithm explained above. Thanks to its simple structure, it is straightforward to see the relevant aspects of gradient descent.

```python
#=========================================================================#
# Financial Econometrics & Derivatives, ML/DL using R, Python, Tensorflow
# by Sang-Heon Lee
#
# https://kiandlee.blogspot.com
#-------------------------------------------------------------------------#
# Gradient Descent example
#=========================================================================#

# -*- coding: utf-8 -*-
import numpy as np

#-------------------------------------------------------------------------#
# Declaration of functions
#-------------------------------------------------------------------------#

# Model
def Model(x, w, b):
    y_hat = w*x + b
    return y_hat

# Gradient
def Gradient(y, x, w, b):
    y_hat = Model(x, w, b)
    djdw = 2*np.mean((y - y_hat)*(-x))
    djdb = 2*np.mean((y - y_hat)*(-1))
    return djdw, djdb

# Learning : lr = step size or learning rate
def Learning(y, x, w, b, lr):
    djdw, djdb = Gradient(y, x, w, b)
    w_update = w - lr*djdw
    b_update = b - lr*djdb
    return w_update, b_update

#-------------------------------------------------------------------------#
# use real data
#-------------------------------------------------------------------------#
import pandas as pd
import matplotlib.pyplot as plt

url = 'https://raw.githubusercontent.com/bammuger/blog/main/sample_data.csv'
data = pd.read_csv(url)
data.head()

plt.scatter(data.inputs, data.outputs, s = 0.5)
plt.show()
```

1) Sufficient Iteration

```python
#-------------------------------------------------------------------------#
# Learning - sufficient iteration
#-------------------------------------------------------------------------#

# initial guess
w = 2; b = 3; step = 0.05

# Iterated learning process by parameter update using gradient descent
for i in range(0, 5000):
    y = data.outputs
    x = data.inputs
    w, b = Learning(y, x, w, b, step)

print("Learned_w: {}, Learned_b: {}".format(w, b))

X = np.linspace(0, 1, 100)
Y = w * X + b

plt.scatter(data.inputs, data.outputs, s = 0.3)
plt.plot(X, Y, '-r', linewidth = 1.5)
plt.show()
```

2) Insufficient Iteration

```python
#-------------------------------------------------------------------------#
# Learning - insufficient iteration
#-------------------------------------------------------------------------#

# initial guess
w = 2; b = 3; step = 0.05

# Iterated learning process by parameter update using gradient descent
for i in range(0, 10):
    y = data.outputs
    x = data.inputs
    w, b = Learning(y, x, w, b, step)
    print("Learned_w: {}, Learned_b: {}".format(w, b))

X = np.linspace(0, 1, 100)
Y = (w * X) + b

plt.scatter(data.inputs, data.outputs, s = 0.3)
plt.plot(X, Y, '-r', linewidth = 1.5)
plt.show()
```
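For linear regression, gradient descent run long enough should agree with the closed-form least-squares solution, which gives an independent check on the iterative result. A minimal sketch, using synthetic data standing in for the author's sample_data.csv (so the numbers will differ from the post):

```python
import numpy as np

# Synthetic data in place of the blog's CSV file
rng = np.random.default_rng(2)
x = rng.uniform(size=200)
y = 1.5 * x + 0.5 + rng.normal(scale=0.1, size=200)

# Gradient descent with sufficient iterations
w, b, lr = 2.0, 3.0, 0.05
for _ in range(5000):
    y_hat = w * x + b
    w -= lr * 2 * np.mean((y - y_hat) * (-x))
    b -= lr * 2 * np.mean((y - y_hat) * (-1))

# Closed-form least squares via a degree-1 polynomial fit
w_ols, b_ols = np.polyfit(x, y, 1)
print(w, b)
print(w_ols, b_ols)
```

The two estimates should match to several decimal places; after only 10 iterations, as in the insufficient case above, they would still differ noticeably.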

In the next post, we will cover stochastic gradient descent and mini-batch gradient descent, which are variants of GD. $$\blacksquare$$
