# Texel's Tuning Method

This is the first post in a series intended to explain the internals of the Zurichess chess engine. Machine learning is not my area of expertise, so let me know of any blatant mistakes. Comments are welcome.

Zurichess uses Texel's Tuning Method to tune its evaluation.

A chess evaluation function takes a chess position as input and outputs a real number representing how much better white stands compared to black. Large positive values mean white is winning, large negative values mean black is winning, and 0 means draw.

With Texel's Tuning Method the evaluation function is a linear combination of features extracted from the input position. For example, the simplest features are the counts of each type of figure. The evaluation function looks as follows:

$$\text{eval}(x) = (1 - p)\, W_m \cdot x + p\, W_e \cdot x$$

where $x$ is the vector of features (computed as the difference between white and black), $W_m$ and $W_e$ are the middle game and end game weight vectors, and $p$ is the game phase, a real number $\in [0, 1]$ where 0 means the opening and 1 the late end game.
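As a concrete sketch of the formula above (the function name and the five material features are my own illustration, not Zurichess code):

```python
import numpy as np

def tapered_eval(x, w_mid, w_end, phase):
    """Linear tapered evaluation: interpolate between the middle-game
    and end-game weight vectors according to the game phase in [0, 1]."""
    return (1.0 - phase) * np.dot(w_mid, x) + phase * np.dot(w_end, x)

# Example: material-count features, white minus black, for P, N, B, R, Q.
# White is up a knight; the weights here are purely illustrative.
x = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
w_mid = np.array([100.0, 400.0, 420.0, 650.0, 1390.0])
w_end = np.array([100.0, 400.0, 420.0, 650.0, 1390.0])
print(tapered_eval(x, w_mid, w_end, phase=0.5))  # 400.0
```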

Texel's Tuning Method is used to find the $W_m$ and $W_e$ that minimize the function below. One can add L1 or L2 regularization (for Zurichess I only added L1).

$$E = \frac{1}{N} \sum_{i=1}^{N} \left(y_i - S(\text{eval}(x_i))\right)^2$$

where $N$ is the number of positions, $x_i$ is the vector of features extracted from the $i$th position, $y_i$ is the expected outcome of that position, and $S$ is the sigmoid function, $S(x) = \frac{1}{1+\exp(-x)}$. $y_i$ can be generated by playing a self-game starting from the $i$th position using a strong engine such as Stockfish.
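A minimal sketch of this loss, assuming a feature matrix `X` with one row per position (the function names are mine):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def texel_loss(w_mid, w_end, X, phases, y, l1=0.0):
    """Mean squared error between the sigmoid of the tapered evaluation
    and the expected outcomes y, plus an optional L1 penalty."""
    evals = (1.0 - phases) * (X @ w_mid) + phases * (X @ w_end)
    mse = np.mean((y - sigmoid(evals)) ** 2)
    return mse + l1 * (np.abs(w_mid).sum() + np.abs(w_end).sum())
```

With zero weights every evaluation is 0, so $S(0) = 0.5$ and positions expected to end in a draw ($y_i = 0.5$) contribute no error.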

For machine learning enthusiasts: what Texel's Tuning Method does is very similar to logistic regression, except for the phase interpolation. The advantage of this method is the speed of training, which allows for extremely fast iterations. Using Tensorflow I can train the evaluation function in 45 minutes on an 8-core Intel i7 CPU, or in about 5 minutes on a modern GPU. Stockfish uses SPSA, which needs significantly more resources to tune the evaluation function.
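The post trained with Tensorflow; as a library-free illustration of the same idea, here is plain-NumPy gradient descent on synthetic data (every name and number below is made up for the sketch, not taken from Zurichess):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the training set: N positions, 5 material features
# (white-minus-black piece counts), a phase per position, and outcomes
# produced from a known "true" weight vector so convergence is observable.
N, F = 1000, 5
X = rng.integers(-2, 3, size=(N, F)).astype(float)
phases = rng.random(N)
true_w = np.array([0.1, 0.3, 0.32, 0.5, 0.9])
y = 1.0 / (1.0 + np.exp(-(X @ true_w)))

w_mid = np.zeros(F)
w_end = np.zeros(F)
lr = 0.5
for _ in range(2000):
    evals = (1.0 - phases) * (X @ w_mid) + phases * (X @ w_end)
    p = 1.0 / (1.0 + np.exp(-evals))
    # Gradient of mean((y - p)^2) with respect to each weight vector.
    g = -2.0 * (y - p) * p * (1.0 - p) / N
    w_mid -= lr * (X.T @ (g * (1.0 - phases)))
    w_end -= lr * (X.T @ (g * phases))

final_loss = np.mean((y - p) ** 2)
```

Because the data is noiseless and the model is well specified, both weight vectors drift toward `true_w` and the loss drops close to zero.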

Generating the training data can take a week, depending on your CPU resources, and is the most difficult part. I published my training set of quiet positions here. Surprisingly, I have tried many times to build a new training set, but failed to get better results. At least 3 other chess engine authors have reported significant improvements, in the range of 50-200 Elo, when using this set.

The piece values are as follows:

| Figure | MidGame | EndGame | Avg. Centipawn |
|--------|--------:|--------:|---------------:|
| Pawn   | 14364   | 13854   | 100  |
| Knight | 64220   | 49415   | 402  |
| Bishop | 66592   | 52025   | 420  |
| Rook   | 84852   | 98328   | 650  |
| Queen  | 222394  | 169927  | 1387 |
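The Avg. Centipawn column appears consistent with averaging each figure's two weights and rescaling so that a pawn is worth 100. That normalization is my guess, not something stated in the post: it reproduces the table exactly for the bishop and to within a few centipawns for the other figures, so the author may have used a slightly different phase weighting.

```python
# Internal (MidGame, EndGame) weights from the table above.
weights = {
    "Pawn":   (14364, 13854),
    "Knight": (64220, 49415),
    "Bishop": (66592, 52025),
    "Rook":   (84852, 98328),
    "Queen":  (222394, 169927),
}

pawn_avg = sum(weights["Pawn"]) / 2  # scale so the pawn maps to 100
for figure, (mid, end) in weights.items():
    avg = (mid + end) / 2
    print(figure, round(100 * avg / pawn_avg))
```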

The Avg. Centipawn values are slightly larger than those reported here for Fruit and other chess engines.

You can use Texel's Tuning Method to build strong engines. Zurichess is rated ~2800 on CCRL 40/40, and it’s not even the strongest engine using this method: the Texel chess engine, for example, is rated ~3150 on the same list.