
# How to Manually Optimize Machine Learning Model Hyperparameters

Machine learning algorithms have hyperparameters that allow the algorithms to be tailored to specific datasets.

Although the impact of hyperparameters may be understood in general, their specific effect on a dataset and their interactions during learning may not be known. Therefore, it is important to tune the values of algorithm hyperparameters as part of a machine learning project.

It is common to use naive optimization algorithms to tune hyperparameters, such as a grid search and a random search. An alternative approach is to use a stochastic optimization algorithm, like a stochastic hill climbing algorithm.

In this tutorial, you will discover how to manually optimize the hyperparameters of machine learning algorithms.

After completing this tutorial, you will know:

- Stochastic optimization algorithms can be used instead of grid and random search for hyperparameter optimization.
- How to use a stochastic hill climbing algorithm to tune the hyperparameters of the Perceptron algorithm.
- How to manually optimize the hyperparameters of the XGBoost gradient boosting algorithm.

Let's get started.

## Tutorial Overview

This tutorial is divided into three parts; they are:

- Manual Hyperparameter Optimization
- Perceptron Hyperparameter Optimization
- XGBoost Hyperparameter Optimization

## Manual Hyperparameter Optimization

Machine learning models have hyperparameters that you must set in order to customize the model to your dataset.

Often, the general effects of hyperparameters on a model are known, but how to best set a hyperparameter and combinations of interacting hyperparameters for a given dataset is challenging.

A better approach is to objectively search different values for model hyperparameters and choose a subset that results in a model that achieves the best performance on a given dataset. This is called hyperparameter optimization, or hyperparameter tuning.

A range of different optimization algorithms may be used, although two of the simplest and most common methods are random search and grid search.

- **Random Search**. Define a search space as a bounded domain of hyperparameter values and randomly sample points in that domain.
- **Grid Search**. Define a search space as a grid of hyperparameter values and evaluate every position in the grid.

Grid search is great for spot-checking combinations that are known to perform well in general. Random search is great for discovering hyperparameter combinations that you would not have guessed intuitively, although it often requires more time to execute.
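The difference between the two strategies can be sketched with scikit-learn's `ParameterGrid` and `ParameterSampler` utilities (the search space below is purely illustrative):

```python
# contrast how grid search and random search generate candidate configurations
from sklearn.model_selection import ParameterGrid, ParameterSampler

# an illustrative search space, not a recommendation
space = {'eta0': [0.01, 0.1, 1.0], 'alpha': [0.0001, 0.001, 0.01]}

# grid search: enumerate every combination in the grid
grid = list(ParameterGrid(space))
print(len(grid))  # 9 combinations

# random search: sample a fixed number of points from the space
sample = list(ParameterSampler(space, n_iter=5, random_state=1))
print(len(sample))  # 5 sampled configurations
```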

For more on grid and random search for hyperparameter tuning, see the tutorial:

Grid and random search are primitive optimization algorithms, and it is possible to use any optimization we like to tune the performance of a machine learning algorithm. For example, it is possible to use stochastic optimization algorithms. This can be desirable when good or great performance is required and there are sufficient resources available to tune the model.

Next, let's look at how we might use a stochastic hill climbing algorithm to tune the performance of the Perceptron algorithm.

## Perceptron Hyperparameter Optimization

The Perceptron algorithm is the simplest type of artificial neural network.

It is a model of a single neuron that can be used for two-class classification problems and provides the foundation for later developing much larger networks.
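As a rough sketch of the idea, a single perceptron computes a weighted sum of its inputs and predicts one of two classes based on the sign of the result (the weights below are arbitrary for illustration, not a trained model):

```python
from numpy import array, dot

def perceptron_predict(weights, bias, x):
	# fire (class 1) if the weighted sum of the inputs exceeds zero
	return 1 if dot(weights, x) + bias > 0.0 else 0

# arbitrary weights and inputs for illustration only
weights = array([0.4, -0.2])
bias = 0.1
print(perceptron_predict(weights, bias, array([1.0, 0.5])))   # 1
print(perceptron_predict(weights, bias, array([-1.0, 0.5])))  # 0
```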

In this section, we will explore how to manually optimize the hyperparameters of the Perceptron model.

First, let's define a synthetic binary classification problem that we can use as the focus of optimizing the model.

We can use the make_classification() function to define a binary classification problem with 1,000 rows and five input variables.

The example below creates the dataset and summarizes the shape of the data.

```python
# define a binary classification dataset
from sklearn.datasets import make_classification
# define dataset
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2, n_redundant=1, random_state=1)
# summarize the shape of the dataset
print(X.shape, y.shape)  # (1000, 5) (1000,)
```

Running the example prints the shape of the created dataset, confirming our expectations.

scikit-learn provides an implementation of the Perceptron model via the Perceptron class.

Before we tune the hyperparameters of the model, we can establish a baseline in performance using the default hyperparameters.

We will evaluate the model using the good practice of repeated stratified k-fold cross-validation via the RepeatedStratifiedKFold class.

The complete example of evaluating the Perceptron model with default hyperparameters on our synthetic binary classification dataset is listed below.

```python
# perceptron with default hyperparameters for binary classification
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import Perceptron
# define dataset
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2, n_redundant=1, random_state=1)
# define model
model = Perceptron()
# define evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report result
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
```

Running the example evaluates the model and reports the mean and standard deviation of the classification accuracy.

**Note**: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the model with default hyperparameters achieved a classification accuracy of about 78.5 percent.

We would hope that we can achieve better performance than this with optimized hyperparameters.

```
Mean Accuracy: 0.786 (0.069)
```

Next, we can optimize the hyperparameters of the Perceptron model using a stochastic hill climbing algorithm.

There are many hyperparameters that we could optimize, although we will focus on two that perhaps have the most impact on the learning behavior of the model; they are:

- Learning Rate (*eta0*).
- Regularization (*alpha*).

The learning rate controls the amount the model is updated based on prediction errors and controls the speed of learning. The default value of *eta0* is 1.0. Reasonable values are larger than zero (e.g. larger than 1e-8 or 1e-10) and probably less than 1.0.

By default, the Perceptron does not use any regularization, but we will enable "*elastic net*" regularization, which applies both L1 and L2 regularization during learning. This will encourage the model to seek small model weights and, in turn, often better performance.

We will tune the "*alpha*" hyperparameter that controls the weighting of the regularization, e.g. the amount it affects learning. If set to 0.0, it is as if no regularization is being used. Reasonable values are between 0.0 and 1.0.

First, we need to define the objective function for the optimization algorithm. We will evaluate a configuration using mean classification accuracy with repeated stratified k-fold cross-validation. We will seek to maximize accuracy across configurations.

The *objective()* function below implements this, taking the dataset and a list of config values. The config values (learning rate and regularization weighting) are unpacked, used to configure the model, which is then evaluated, and the mean accuracy is returned.

```python
# objective function
def objective(X, y, cfg):
	# unpack config
	eta, alpha = cfg
	# define model
	model = Perceptron(penalty='elasticnet', alpha=alpha, eta0=eta)
	# define evaluation procedure
	cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
	# evaluate model
	scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
	# calculate mean accuracy
	result = mean(scores)
	return result
```

Next, we need a function to take a step in the search space.

The search space is defined by two variables (*eta* and *alpha*). A step in the search space must have some relationship to the previous values and must be bound to sensible values (e.g. between 0 and 1).

We will use a "*step size*" hyperparameter that controls how far the algorithm is allowed to move from the existing configuration. A new configuration will be chosen probabilistically using a Gaussian distribution, with the current value as the mean of the distribution and the step size as the standard deviation of the distribution.

We can use the randn() NumPy function to generate random numbers with a Gaussian distribution.

The *step()* function below implements this and will take a step in the search space and generate a new configuration from an existing configuration.

```python
# take a step in the search space
def step(cfg, step_size):
	# unpack the configuration
	eta, alpha = cfg
	# step eta
	new_eta = eta + randn() * step_size
	# check the bounds of eta
	if new_eta <= 0.0:
		new_eta = 1e-8
	# step alpha
	new_alpha = alpha + randn() * step_size
	# check the bounds of alpha
	if new_alpha < 0.0:
		new_alpha = 0.0
	# return the new configuration
	return [new_eta, new_alpha]
```

Next, we need to implement the stochastic hill climbing algorithm that will call our *objective()* function to evaluate candidate solutions and our *step()* function to take a step in the search space.

The search first generates a random initial solution, in this case with eta and alpha values in the range 0 to 1. The initial solution is then evaluated and taken as the current best working solution.

```python
...
# starting point for the search
solution = [rand(), rand()]
# evaluate the initial point
solution_eval = objective(X, y, solution)
```

Next, the algorithm iterates for a fixed number of iterations provided as a hyperparameter to the search. Each iteration involves taking a step and evaluating the new candidate solution.

```python
...
# take a step
candidate = step(solution, step_size)
# evaluate candidate point
candidate_eval = objective(X, y, candidate)
```

If the new solution is better than the current working solution, it is taken as the new current working solution.

```python
...
# check if we should keep the new point
if candidate_eval >= solution_eval:
	# store the new point
	solution, solution_eval = candidate, candidate_eval
	# report progress
	print('>%d, cfg=%s %.5f' % (i, solution, solution_eval))
```

At the end of the search, the best solution and its performance are then returned.

Tying this together, the *hillclimbing()* function below implements the stochastic hill climbing algorithm for tuning the Perceptron algorithm, taking the dataset, objective function, number of iterations, and step size as arguments.

```python
# hill climbing local search algorithm
def hillclimbing(X, y, objective, n_iter, step_size):
	# starting point for the search
	solution = [rand(), rand()]
	# evaluate the initial point
	solution_eval = objective(X, y, solution)
	# run the hill climb
	for i in range(n_iter):
		# take a step
		candidate = step(solution, step_size)
		# evaluate candidate point
		candidate_eval = objective(X, y, candidate)
		# check if we should keep the new point
		if candidate_eval >= solution_eval:
			# store the new point
			solution, solution_eval = candidate, candidate_eval
			# report progress
			print('>%d, cfg=%s %.5f' % (i, solution, solution_eval))
	return [solution, solution_eval]
```

We can then call the algorithm and report the results of the search.

In this case, we will run the algorithm for 100 iterations and use a step size of 0.1, chosen after a little trial and error.

```python
...
# define the total iterations
n_iter = 100
# step size in the search space
step_size = 0.1
# perform the hill climbing search
cfg, score = hillclimbing(X, y, objective, n_iter, step_size)
print('Done!')
print('cfg=%s: Mean Accuracy: %f' % (cfg, score))
```

Tying this together, the complete example of manually tuning the Perceptron algorithm is listed below.

```python
# manually search perceptron hyperparameters for binary classification
from numpy import mean
from numpy.random import randn
from numpy.random import rand
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import Perceptron

# objective function
def objective(X, y, cfg):
	# unpack config
	eta, alpha = cfg
	# define model
	model = Perceptron(penalty='elasticnet', alpha=alpha, eta0=eta)
	# define evaluation procedure
	cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
	# evaluate model
	scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
	# calculate mean accuracy
	result = mean(scores)
	return result

# take a step in the search space
def step(cfg, step_size):
	# unpack the configuration
	eta, alpha = cfg
	# step eta
	new_eta = eta + randn() * step_size
	# check the bounds of eta
	if new_eta <= 0.0:
		new_eta = 1e-8
	# step alpha
	new_alpha = alpha + randn() * step_size
	# check the bounds of alpha
	if new_alpha < 0.0:
		new_alpha = 0.0
	# return the new configuration
	return [new_eta, new_alpha]

# hill climbing local search algorithm
def hillclimbing(X, y, objective, n_iter, step_size):
	# starting point for the search
	solution = [rand(), rand()]
	# evaluate the initial point
	solution_eval = objective(X, y, solution)
	# run the hill climb
	for i in range(n_iter):
		# take a step
		candidate = step(solution, step_size)
		# evaluate candidate point
		candidate_eval = objective(X, y, candidate)
		# check if we should keep the new point
		if candidate_eval >= solution_eval:
			# store the new point
			solution, solution_eval = candidate, candidate_eval
			# report progress
			print('>%d, cfg=%s %.5f' % (i, solution, solution_eval))
	return [solution, solution_eval]

# define dataset
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2, n_redundant=1, random_state=1)
# define the total iterations
n_iter = 100
# step size in the search space
step_size = 0.1
# perform the hill climbing search
cfg, score = hillclimbing(X, y, objective, n_iter, step_size)
print('Done!')
print('cfg=%s: Mean Accuracy: %f' % (cfg, score))
```

Running the example reports the configuration and result each time an improvement is seen during the search. At the end of the run, the best configuration and result are reported.

**Note**: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the best result involved using a learning rate slightly above 1 at 1.004 and a regularization weight of about 0.002, achieving a mean accuracy of about 79.1 percent, better than the default configuration that achieved an accuracy of about 78.5 percent.

**Can you get a better result?**

Let me know in the comments below.

```
>0, cfg=[0.5827274503894747, 0.260872709578015] 0.70533
>4, cfg=[0.5449820307807399, 0.3017271170801444] 0.70567
>6, cfg=[0.6286475606495414, 0.17499090243915086] 0.71933
>7, cfg=[0.5956196828965779, 0.0] 0.78633
>8, cfg=[0.5878361167354715, 0.0] 0.78633
>10, cfg=[0.6353507984485595, 0.0] 0.78633
>13, cfg=[0.5690530537610675, 0.0] 0.78633
>17, cfg=[0.6650936023999641, 0.0] 0.78633
>22, cfg=[0.9070451625704087, 0.0] 0.78633
>23, cfg=[0.9253366187387938, 0.0] 0.78633
>26, cfg=[0.9966143540220266, 0.0] 0.78633
>31, cfg=[1.0048613895650054, 0.002162219228449132] 0.79133
Done!
cfg=[1.0048613895650054, 0.002162219228449132]: Mean Accuracy: 0.791333
```

Now that we are familiar with how to use a stochastic hill climbing algorithm to tune the hyperparameters of a simple machine learning algorithm, let's look at tuning a more advanced algorithm, such as XGBoost.

## XGBoost Hyperparameter Optimization

XGBoost is short for Extreme Gradient Boosting and is an efficient implementation of the stochastic gradient boosting machine learning algorithm.

The stochastic gradient boosting algorithm, also called gradient boosting machines or tree boosting, is a powerful machine learning technique that performs well or even best on a wide range of challenging machine learning problems.

First, the XGBoost library must be installed.

You can install it using pip, as follows:
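For example, assuming a typical Python environment where pip is available:

```shell
pip install xgboost
```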

Once installed, you can confirm that it was installed successfully and that you are using a modern version by running the following code:

```python
# check xgboost version
import xgboost
print("xgboost", xgboost.__version__)
```

Running the code, you should see the following version number or higher.

Although the XGBoost library has its own Python API, we can use XGBoost models with the scikit-learn API via the XGBClassifier wrapper class.

An instance of the model can be instantiated and used just like any other scikit-learn class for model evaluation. For example:

```python
...
# define model
model = XGBClassifier()
```

Before we tune the hyperparameters of XGBoost, we can establish a baseline in performance using the default hyperparameters.

We will use the same synthetic binary classification dataset from the previous section and the same test harness of repeated stratified k-fold cross-validation.

The complete example of evaluating the performance of XGBoost with default hyperparameters is listed below.

```python
# xgboost with default hyperparameters for binary classification
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from xgboost import XGBClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2, n_redundant=1, random_state=1)
# define model
model = XGBClassifier()
# define evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report result
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
```

Running the example evaluates the model and reports the mean and standard deviation of the classification accuracy.

**Note**: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the model with default hyperparameters achieved a classification accuracy of about 84.9 percent.

We would hope that we can achieve better performance than this with optimized hyperparameters.

```
Mean Accuracy: 0.849 (0.040)
```

Next, we can adapt the stochastic hill climbing optimization algorithm to tune the hyperparameters of the XGBoost model.

There are many hyperparameters that we may want to optimize for the XGBoost model.

For an overview of how to tune the XGBoost model, see the tutorial:

We will focus on four key hyperparameters; they are:

- Learning Rate (*learning_rate*)
- Number of Trees (*n_estimators*)
- Subsample Percentage (*subsample*)
- Tree Depth (*max_depth*)

The **learning rate** controls the contribution of each tree to the ensemble. Sensible values are less than 1.0 and slightly above 0.0 (e.g. 1e-8).

The **number of trees** controls the size of the ensemble, and often, more trees is better up to a point of diminishing returns. Sensible values are between 1 tree and hundreds or thousands of trees.

The **subsample** percentage defines the random sample size used to train each tree, defined as a percentage of the size of the original dataset. Values are between a value slightly above 0.0 (e.g. 1e-8) and 1.0.

The **tree depth** is the number of levels in each tree. Deeper trees are more specific to the training dataset and may overfit. Shorter trees often generalize better. Sensible values are between 1 and 10 or 20.

First, we must update the *objective()* function to unpack the hyperparameters of the XGBoost model, configure it, and then evaluate the mean classification accuracy.

```python
# objective function
def objective(X, y, cfg):
	# unpack config
	lrate, n_tree, subsam, depth = cfg
	# define model
	model = XGBClassifier(learning_rate=lrate, n_estimators=n_tree, subsample=subsam, max_depth=depth)
	# define evaluation procedure
	cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
	# evaluate model
	scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
	# calculate mean accuracy
	result = mean(scores)
	return result
```

Next, we need to define the *step()* function used to take a step in the search space.

Each hyperparameter has quite a different range; therefore, we will define the step size (standard deviation of the distribution) separately for each hyperparameter. We will also define the step sizes inline rather than as arguments to the function, to keep things simple.

The number of trees and the depth are integers, so the stepped values are rounded.

The step sizes chosen are arbitrary, chosen after a little trial and error.

The updated step function is listed below.

```python
# take a step in the search space
def step(cfg):
	# unpack config
	lrate, n_tree, subsam, depth = cfg
	# learning rate
	lrate = lrate + randn() * 0.01
	if lrate <= 0.0:
		lrate = 1e-8
	if lrate > 1:
		lrate = 1.0
	# number of trees
	n_tree = round(n_tree + randn() * 50)
	if n_tree <= 0.0:
		n_tree = 1
	# subsample percentage
	subsam = subsam + randn() * 0.1
	if subsam <= 0.0:
		subsam = 1e-8
	if subsam > 1:
		subsam = 1.0
	# max tree depth
	depth = round(depth + randn() * 7)
	if depth <= 1:
		depth = 1
	# return new config
	return [lrate, n_tree, subsam, depth]
```

Finally, the *hillclimbing()* algorithm must be updated to define an initial solution with appropriate values.

In this case, we will define the initial solution with sensible defaults, matching the default hyperparameters, or close to them.

```python
...
# starting point for the search
solution = step([0.1, 100, 1.0, 7])
```

Tying this together, the complete example of manually tuning the hyperparameters of the XGBoost algorithm using a stochastic hill climbing algorithm is listed below.

```python
# xgboost manual hyperparameter optimization for binary classification
from numpy import mean
from numpy.random import randn
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from xgboost import XGBClassifier

# objective function
def objective(X, y, cfg):
	# unpack config
	lrate, n_tree, subsam, depth = cfg
	# define model
	model = XGBClassifier(learning_rate=lrate, n_estimators=n_tree, subsample=subsam, max_depth=depth)
	# define evaluation procedure
	cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
	# evaluate model
	scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
	# calculate mean accuracy
	result = mean(scores)
	return result

# take a step in the search space
def step(cfg):
	# unpack config
	lrate, n_tree, subsam, depth = cfg
	# learning rate
	lrate = lrate + randn() * 0.01
	if lrate <= 0.0:
		lrate = 1e-8
	if lrate > 1:
		lrate = 1.0
	# number of trees
	n_tree = round(n_tree + randn() * 50)
	if n_tree <= 0.0:
		n_tree = 1
	# subsample percentage
	subsam = subsam + randn() * 0.1
	if subsam <= 0.0:
		subsam = 1e-8
	if subsam > 1:
		subsam = 1.0
	# max tree depth
	depth = round(depth + randn() * 7)
	if depth <= 1:
		depth = 1
	# return new config
	return [lrate, n_tree, subsam, depth]

# hill climbing local search algorithm
def hillclimbing(X, y, objective, n_iter):
	# starting point for the search
	solution = step([0.1, 100, 1.0, 7])
	# evaluate the initial point
	solution_eval = objective(X, y, solution)
	# run the hill climb
	for i in range(n_iter):
		# take a step
		candidate = step(solution)
		# evaluate candidate point
		candidate_eval = objective(X, y, candidate)
		# check if we should keep the new point
		if candidate_eval >= solution_eval:
			# store the new point
			solution, solution_eval = candidate, candidate_eval
			# report progress
			print('>%d, cfg=[%s] %.5f' % (i, solution, solution_eval))
	return [solution, solution_eval]

# define dataset
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2, n_redundant=1, random_state=1)
# define the total iterations
n_iter = 200
# perform the hill climbing search
cfg, score = hillclimbing(X, y, objective, n_iter)
print('Done!')
print('cfg=[%s]: Mean Accuracy: %f' % (cfg, score))
```

Running the example reports the configuration and result each time an improvement is seen during the search. At the end of the run, the best configuration and result are reported.

**Note**: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the best result involved using a learning rate of about 0.02, 52 trees, a subsample rate of about 50 percent, and a large depth of 53 levels.

This configuration resulted in a mean accuracy of about 87.3 percent, better than the default configuration that achieved an accuracy of about 84.9 percent.

**Can you get a better result?**

Let me know in the comments below.

```
>0, cfg=[[0.1058242692126418, 67, 0.9228490731610172, 12]] 0.85933
>1, cfg=[[0.11060813799692253, 51, 0.859353656735739, 13]] 0.86100
>4, cfg=[[0.11890247679234153, 58, 0.7135275461723894, 12]] 0.86167
>5, cfg=[[0.10226257987735601, 61, 0.6086462443373852, 17]] 0.86400
>15, cfg=[[0.11176962034280596, 106, 0.5592742266405146, 13]] 0.86500
>19, cfg=[[0.09493587069112454, 153, 0.5049124222437619, 34]] 0.86533
>23, cfg=[[0.08516531024154426, 88, 0.5895201311518876, 31]] 0.86733
>46, cfg=[[0.10092590898175327, 32, 0.5982811365027455, 30]] 0.86867
>75, cfg=[[0.099469211050998, 20, 0.36372573610040404, 32]] 0.86900
>96, cfg=[[0.09021536590375884, 38, 0.4725379807796971, 20]] 0.86900
>100, cfg=[[0.08979482274655906, 65, 0.3697395430835758, 14]] 0.87000
>110, cfg=[[0.06792737273465625, 89, 0.33827505722318224, 17]] 0.87000
>118, cfg=[[0.05544969684589669, 72, 0.2989721608535262, 23]] 0.87200
>122, cfg=[[0.050102976159097, 128, 0.2043203965148931, 24]] 0.87200
>123, cfg=[[0.031493266763680444, 120, 0.2998819062922256, 30]] 0.87333
>128, cfg=[[0.023324201169625292, 84, 0.4017169945431015, 42]] 0.87333
>140, cfg=[[0.020224220443108752, 52, 0.5088096815056933, 53]] 0.87367
Done!
cfg=[[0.020224220443108752, 52, 0.5088096815056933, 53]]: Mean Accuracy: 0.873667
```

## Further Reading

This section provides more resources on the topic if you are looking to go deeper.

### Tutorials

### APIs

### Articles

## Summary

In this tutorial, you discovered how to manually optimize the hyperparameters of machine learning algorithms.

Specifically, you learned:

- Stochastic optimization algorithms can be used instead of grid and random search for hyperparameter optimization.
- How to use a stochastic hill climbing algorithm to tune the hyperparameters of the Perceptron algorithm.
- How to manually optimize the hyperparameters of the XGBoost gradient boosting algorithm.

**Do you have any questions?**

Ask your questions in the comments below and I will do my best to answer.