I wanted a way to effectively parallelise some of my prebuilt machine learning models. Luckily, scikit-learn has this capability built in, and it is easy to make massive performance gains in your model runtime. I will show you how in the following article.
Loading imports and finding out my CPU cores
The first step is to load in the required libraries I am going to need:
from time import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import os
import matplotlib.pyplot as plt
This loads in the time package, as we are going to create a function to fit the model and time how long it takes to run. I then import make_classification and RandomForestClassifier from sklearn's datasets and ensemble modules; these will generate the data described below and fit a Random Forest classifier to it. I will use the os package to get the number of CPUs registered on my machine (the most important part of this article) and then import matplotlib.pyplot for generating some quick plots.
To find out how many CPUs I have, I use the following code:
# Get CPU cores
cpu_count = os.cpu_count()
print(f"This machine has {cpu_count} cores")
reserved_cpu = 1
final_cpu = int(cpu_count - reserved_cpu)
print("Saving one CPU so my PC does not lock up")
The cpu_count variable stores the number of CPUs your specific machine has; this will vary from machine to machine. This is then printed out. I then create a reserved_cpu variable to hold back one of the cores for other processing, like graphics. The final_cpu variable just adjusts the CPU count, and the following is returned to the console:
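As a side note, you do not have to compute final_cpu by hand: scikit-learn passes n_jobs through to joblib, which accepts negative values, where -1 means use all cores and -2 means use all cores but one. A minimal sketch of the equivalent setup:

from sklearn.ensemble import RandomForestClassifier

# n_jobs=-2 asks joblib for all cores but one, equivalent to the
# final_cpu calculation above
model_all_but_one = RandomForestClassifier(n_estimators=500, n_jobs=-2)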
Creating a synthetic classification dataset
I am going to create a synthetic classification dataset in scikit-learn by using the make_classification function. This is implemented in the code block below:
# Create the dataset
X, Y = make_classification(n_samples=10000, n_features=70,
                           n_informative=65, n_redundant=5)
To explain this:
- n_samples – the number of synthetic examples to generate (10K here)
- n_features – the number of columns / features to generate
- n_informative – sets how many of the features will carry signal that is useful when we fit the final model
- n_redundant – the number of redundant features, which are generated as combinations of the informative features and add no new information
This is then unpacked to the X and Y variables, i.e. the independent variables (X) and the dependent variable (Y).
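If you want to double-check what make_classification returned, a quick (optional) shape check looks like this:

# Sanity check: X should be 10,000 rows by 70 feature columns and
# Y a vector of 10,000 class labels
print(X.shape)  # (10000, 70)
print(Y.shape)  # (10000,)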
Create model fit timer function
In the next step we will create the model fit timer function to fit the model and time how long it takes to train:
# Function to run model and create a timer and list output of the model
def run_model(model, X, Y):
    start = time()
    model = model.fit(X, Y)
    end = time()
    result = end - start
    print("[MODEL INFO] The model ran in {:.2f}".format(result))
    return [model, result]
This run_model function takes in the model you want to fit, the X independent variables and the Y dependent variable, or as I call it, “the thing you want to predict”. Not sure if this will catch on?
Within this declaration:
- start – records the current time just before the fit begins (a higher-resolution alternative is sketched after this list)
- model – holds the fitted model returned by the fit call
- end – records the time once the fit has finished
- result – the elapsed time between when the start and end variables were declared
- A console output is printed to state the time it took to fit the model
- The return statement returns a list ([]) with the fitted model at index 0 and the result of the timer at index 1
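As an aside, a variant of this function (not from the original article) could use time.perf_counter, which has a higher resolution than time.time and is the usual choice for benchmarking; the name run_model_precise below is just illustrative:

from time import perf_counter

def run_model_precise(model, X, Y):
    # perf_counter is monotonic and high resolution, so it is not
    # affected by system clock adjustments mid-benchmark
    start = perf_counter()
    model = model.fit(X, Y)
    result = perf_counter() - start
    print("[MODEL INFO] The model ran in {:.2f}".format(result))
    return [model, result]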
Fitting a model on one CPU core
This will use the run_model function we created in the previous step, as well as initialise the model to be fitted:
# =============================================================================
# Single core example
# =============================================================================
# Initialise the model
model_singlecore = RandomForestClassifier(n_estimators=500, n_jobs=1)
# Create a timer and fit the model
model_single_list = run_model(model_singlecore, X, Y)
We initialise the RandomForestClassifier with 500 trees and explicitly declare 1 CPU core to use via the n_jobs parameter. I then store the output in the model_single_list variable; I have named it as such because it returns a list of two indexes: the first is the trained ML model and the second is the run time of the model training. The model trains in:
Fitting a model on all (but one) of my CPU cores
The next model will be initialised in the same way, but this time it will be trained on all (but one) of my CPU cores:
# =============================================================================
# Multicore example
# =============================================================================
model_multi = RandomForestClassifier(n_estimators=500, n_jobs=final_cpu)
model_ran = run_model(model_multi, X, Y)
This uses the final_cpu variable we calculated at the head of the script to set the number of parallel jobs to spin up, and then calls the run_model function to trigger the model fit timer. This runs in:
Okay, so we can already see an improvement in run time; it is nearly 5 times faster than the 1 core example.
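Rather than eyeballing the two console outputs, the exact speedup can be computed from the timer results that run_model returned (index 1 of each list); a small sketch:

# Compare the single core and multicore run times captured earlier
speedup = model_single_list[1] / model_ran[1]
print(f"Multicore training was {speedup:.1f}x faster than single core")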
This is a small dataset, but imagine this performance saving on millions of records, with the model having to iterate over them all; it soon mounts up. Better still, imagine distributing this over an Azure cloud server with 16 to 32 cores; that would really aid the model training.
Benchmarking by one core increase
The code in the next section trains the model against each of the cores and collects the results in a list so they can be plotted. I will explain each section of the code in greater depth under this code block:
results_list = list()
# Generate a list with each core count increasing by one - this could equally be a range object
n_cores = [1, 2, 3, 4, 5, 6, 7, 8]

for model_it in n_cores:
    model = RandomForestClassifier(n_estimators=500, n_jobs=model_it)
    model_fit = run_model(model, X, Y)
    results = model_fit[1]  # Slice index 1 of the list to get the fit time
    results_list.append(results)
The variables and components of the model are:
- results_list – this initialises an empty list I am going to use later
- n_cores – this is a list of core counts increasing by one up to 8, as this is the maximum number of cores on my machine (a range-based alternative is sketched after this list)
- The for loop uses an iterator variable (model_it) to step through the n_cores list
- A model is initialised on each iteration with 500 trees and n_jobs set to the current core count
- The run_model function we created earlier is run; this fits the model we just initialised and returns a list, where index 0 is the fitted model and index 1 is the result of the timer function that measures how long the model takes to train
- results – stores the timer result by slicing index 1 of that list
- We then use the append method to add the fit time to results_list
- The end result is a list of run times, one per core count
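As the comment in the code block notes, the hard-coded list could equally be a range object, which would make the benchmark adapt to whatever machine it runs on; a minimal sketch:

import os

# Build the core counts from the machine itself instead of hard-coding them
n_cores = list(range(1, os.cpu_count() + 1))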
I could just print the run times to the console, and that is exactly what the function does, but next I want to visualise the results in matplotlib:
# Generate plot of results
def plot_results(x_val, y_val):
    plt.plot(x_val, y_val, color="blue", linestyle="--", marker="o")
    plt.xlabel("Number of cores")
    plt.ylabel("Run time (secs)")
    plt.show()

plot_results(n_cores, results_list)
This is a nice little wrapper function that takes the x and y inputs and generates a CPU core benchmark plot. It produces the following:
The run time reduces as cores are added, and the performance gains start to level out at around 7 cores. We have taken an algorithm that trains in 30+ seconds to one that can be trained in a little over 5 seconds. That is a serious performance improvement, achieved simply by distributing the load across multiple cores.
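To put numbers on that levelling off, a short follow-on sketch (not in the original post) prints each run time alongside its speedup over the single core baseline stored at results_list[0]:

# Print the run time and relative speedup for each core count benchmarked
for cores, runtime in zip(n_cores, results_list):
    print(f"{cores} core(s): {runtime:.2f}s "
          f"({results_list[0] / runtime:.1f}x vs single core)")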
What’s next?
The code for this tutorial can be found on GitHub.
In the next article/tutorial I will show you how to optimise scikit-learn for hyperparameter tuning and cross-validation resampling of the data.
Until then: