`time=1` means to spend 1 second analysing the position, no matter what. You can lower it (e.g. `time=0.5`) to trade quality for speed. There's also `depth=20` or something like that, or `nodes=2100000` (just example values).
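For reference, a minimal sketch of how these limits look in python-chess (the engine path is a placeholder; adjust it for your system):

```python
import chess
import chess.engine

# The engine path is a placeholder; point it at your Stockfish binary.
engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")
board = chess.Board()

# Spend 1 second on the search, no matter what.
info = engine.analyse(board, chess.engine.Limit(time=1))

# Alternatively, cap the search by depth or by node count.
info = engine.analyse(board, chess.engine.Limit(depth=20))
info = engine.analyse(board, chess.engine.Limit(nodes=2100000))

print(info["score"])
engine.quit()
```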
Sounds like you're looking for `analyse(..., multipv=n)`. It has considerable extra cost, but is still better than looping through all legal moves.
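A minimal sketch of what that looks like, assuming a local Stockfish binary:

```python
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")
board = chess.Board()

# With multipv=n, analyse() returns a list of n info dicts,
# one per principal variation, ordered from best to worst.
infos = engine.analyse(board, chess.engine.Limit(depth=20), multipv=3)
for info in infos:
    print(info["multipv"], info["pv"][0], info["score"].white())

engine.quit()
```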
Thanks again for this hint, it works like a charm! But another question: often we are interested in the engine's evaluation of a given move.
I know how to analyse a position, and some scientific papers simply use formulas such as `evaluation_after_move - evaluation_before_move` (with the correct signs etc.) to obtain the score of a given move a player made.
However, is there an easier way to obtain scores for given moves? Thanks to `multipv=3` I can get the three best engine moves, but I'd like a more nuanced picture of how the engine actually rated each possible move.
There's `root_moves` (https://python-chess.readthedocs.io/en/latest/engine.html#chess.engine.Protocol.analysis), but essentially it's not better than making the move and analysing the resulting position.
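A sketch of both approaches side by side (engine path and depth are placeholders):

```python
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")
board = chess.Board()
move = chess.Move.from_uci("g1f3")

# Restrict the search to a single root move ...
info = engine.analyse(board, chess.engine.Limit(depth=20), root_moves=[move])
print(info["score"].white())

# ... which is roughly equivalent to pushing the move and analysing
# the resulting position (note the point of view flips to Black).
board.push(move)
info = engine.analyse(board, chess.engine.Limit(depth=20))
print(info["score"].white())
board.pop()

engine.quit()
```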
Hi everyone, I'm still looking for the best way to evaluate a given move by a player/engine.
Say I evaluate the base position at a given depth with `multipv=20` and get the evaluations for the best move, the second-best move, and so on. The quality of a given move should be the difference between its evaluation and the evaluation of the best move in this setup, right?
Now I take any of these 20 moves and limit the engine to analysing only this move (with `root_moves=[move]` and `multipv=1`) and ask for Stockfish's evaluation of it.
Why is this move now evaluated differently compared to the `multipv=20` case?
Thanks in advance! I'm still struggling with how to get measures of the quality of a move a player made.
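For concreteness, a sketch of the two-step comparison described above (engine path and depth are placeholders; the two scores are not expected to match exactly):

```python
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")
board = chess.Board()
limit = chess.engine.Limit(depth=20)

# Step 1: score the 20 best moves in a single multipv search.
infos = engine.analyse(board, limit, multipv=20)
multipv_scores = {info["pv"][0]: info["score"].white() for info in infos}

# Step 2: re-score one of those moves in isolation via root_moves.
move = list(multipv_scores)[8]  # e.g. the ninth-best move
info = engine.analyse(board, limit, root_moves=[move])

# The two scores for the same move at the same nominal depth can
# differ: pruning, extensions, and hash reuse vary between searches.
print(multipv_scores[move], info["score"].white())

engine.quit()
```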
Hi @niklasf, thanks for reaching out!
I attached a screenshot of the analysis using a fixed depth (20), in which we can see the following:
- Using multipv does change what is judged to be the best move for White in the starting position (g1f3 vs. c2c4), and its evaluation changes considerably (+38 vs. +14).
- If we pick any move (I just picked the ninth-best move according to the multipv evaluation), its evaluation does not change very much. This is really nice, as it may offer a fast way to evaluate not a random move but the move a player made in a real game. BUT since the evaluation of the best move changed, I obtain different results when constructing the final evaluation of that move.
This leaves me puzzled; maybe someone here has dealt with this in the past and has an idea.
I have a simple question. I have this code:
```python
import chess

board = chess.Board()
board.push(chess.Move.from_uci("e2e3"))
board.push(chess.Move.from_uci("f1c4"))
board.push(chess.Move.from_uci("d1f3"))
board.push(chess.Move.from_uci("f3f7"))
print(board.is_checkmate())
```
It prints False, which is odd, because I did the four-move checkmate thing and the game should be over, right? What am I doing wrong here?
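The snippet above pushes four White moves in a row: `push()` alternates the side to move and does not check legality, so Black's replies have to be pushed as well. Here is a sketch with replies included; the Black moves are just one illustrative line that allows the mate, not necessarily the intended sequence:

```python
import chess

board = chess.Board()

# push() alternates the side to move, so Black's replies are
# interleaved; the Black moves here are just one possible line.
for uci in ["e2e3", "e7e5", "f1c4", "b8c6", "d1f3", "a7a6", "f3f7"]:
    board.push(chess.Move.from_uci(uci))

print(board.is_checkmate())  # True
```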
Hi all, me again. I noticed another slightly strange behavior using `root_moves`:
As you can see in the screenshot, selecting a fixed depth of 20 doesn't always result in the same "seldepth", which in turn appears to influence the evaluation score of the same move.
Has anyone successfully evaluated many moves by real players in the past and figured out how to evaluate the moves correctly? In theory, a move should at best be evaluated at around 0 (with a small amount of randomness around it).
Thanks in advance!!
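In case it helps, a minimal sketch of the usual centipawn-loss approach over a sequence of played moves (engine path, depth, and the sample moves are assumptions, not a recommendation):

```python
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")
limit = chess.engine.Limit(depth=20)
board = chess.Board()

played = [chess.Move.from_uci(u) for u in ["e2e4", "e7e5", "g1f3"]]
for move in played:
    pov = board.turn  # evaluate from the mover's point of view
    best = engine.analyse(board, limit)["score"].pov(pov).score(mate_score=100000)
    board.push(move)
    after = engine.analyse(board, limit)["score"].pov(pov).score(mate_score=100000)
    # Centipawn loss: how much the played move fell short of the
    # engine's evaluation before the move. A best move should come
    # out near 0, up to search noise.
    print(move.uci(), best - after)

engine.quit()
```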