Hash Table handling with LMR/Futility pruning

Programming Topics (Computer Chess) and technical aspects as test techniques, book building, program tuning etc

Moderator: Andres Valverde

Hash Table handling with LMR/Futility pruning

Postby Pradu » 21 Oct 2006, 16:40

I have been trying to learn about futility pruning (as Ernst A. Heinz described it) and late move reductions, and I have been trying out some variants of them, but I've run into the problem of how to handle the hash table with these methods.

Futility pruning prunes by bounds, and we store the result of the search of the current node in the hash table (unlike null move). If we reach the same node again with different bounds, we cannot be certain that the hash entry is valid.

Similarly, LMR is reduction by move ordering. If we reach the same node again with a different move ordering, we cannot be certain that the hash entry is valid.

So how does one handle the hash table so that you have valid data when reaching the same node again in both of these cases?
Pradu
 
Posts: 343
Joined: 12 Jan 2005, 19:17
Location: Chandler, Arizona, USA

Re: Hash Table handling with LMR/Futility pruning

Postby H.G.Muller » 23 Oct 2006, 08:35

Futility pruning of frontier nodes isn't really pruning. It is merely a more efficient implementation of minimax, moving as much of the code as possible into the if-part that decides whether you are going to stand pat in QS, so that you don't waste time making and unmaking moves for a stand-pat cutoff in cases where you know in advance that such a cutoff will occur.

So the score of the 1-ply node is not really affected at all by the fact that you 'prune' the futile moves. That the score seems alpha-dependent is just the ordinary alpha dependence of scoring you get from alpha-beta: with a higher alpha the search gets lazy, and you might get a weaker upper bound (below alpha) than you would have gotten for the same search with a much lower alpha. But the hashing algorithm fully accounts for that by storing the fact that the score is merely an upper bound, so that a bound obtained with a large alpha is not accepted as a valid score when it lies above the current alpha.
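To make that concrete, here is a minimal sketch of the standard probe logic this relies on. The names, signature, and flag values are my own for illustration, not from any particular engine: a stored upper bound is only usable when it lies at or below the current alpha, a lower bound only at or above beta.

```c
/* Bound types stored alongside a transposition-table score
   (illustrative names, not from any specific engine). */
enum { TT_EXACT, TT_UPPER, TT_LOWER };

/* Returns 1 and sets *score if the stored entry is usable at the
   current (alpha, beta) window, 0 otherwise. */
int tt_usable(int tt_score, int tt_flag, int alpha, int beta, int *score)
{
    if (tt_flag == TT_EXACT) { *score = tt_score; return 1; }
    /* An upper bound only causes a cutoff when it is <= alpha... */
    if (tt_flag == TT_UPPER && tt_score <= alpha) { *score = tt_score; return 1; }
    /* ...and a lower bound only when it is >= beta. */
    if (tt_flag == TT_LOWER && tt_score >= beta)  { *score = tt_score; return 1; }
    return 0;   /* bound exists but does not decide this window */
}
```

So an upper bound of, say, 150 stored during a lazy search with alpha = 140 is simply ignored when we later probe with alpha = 100: it tells us nothing decisive about that window.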

For deeper futility pruning, the most logical solution is to treat the criterion on which you decided to prune as a score estimate. You prune because you are confident that the score will be out of window. If you are doing hard fail, that is all you have to know. If, for the benefit of storing sharper bounds in the hash table, you do soft fail, you should store the most pessimistic case. (And hard fail might still be too pessimistic.)

So if you prune when eval + victim < alpha - MARGIN, you are really doing it because the best possible score you can expect, eval + victim + MARGIN, is too low. But you can use that latter score for updating your soft-fail upper bound (in the below-alpha range). The true score might lie even below this, but you cannot know that without doing the search (to see whether the opponent has good captures that would be better than standing pat) or the true, rather than the lazy, evaluation (to conclude that the move really earns far fewer positional points than MARGIN).
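A sketch of that soft-fail bookkeeping. MARGIN, the helper name, and the running-bound convention are all assumptions for illustration: the optimistic value eval + victim + MARGIN is used both as the pruning criterion and as the candidate upper bound.

```c
#define MARGIN 200   /* illustrative positional margin in centipawns */

/* Decide whether a capture can be futility-pruned, and if so fold
   its optimistic score into the node's running soft-fail upper
   bound.  Returns 1 when the move may be skipped. */
int futility_prune(int eval, int victim_value, int alpha, int *best_upper)
{
    int optimistic = eval + victim_value + MARGIN;
    if (optimistic < alpha) {          /* same as eval + victim < alpha - MARGIN */
        if (optimistic > *best_upper)  /* keep the least pessimistic bound */
            *best_upper = optimistic;
        return 1;                      /* prune: cannot reach alpha */
    }
    return 0;                          /* must be searched */
}
```

After the move loop, *best_upper (if no move was searched, or all searched moves scored below it) is what goes into the hash table as the soft-fail upper bound.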

The same holds for null-move pruning: it assumes that your best move is better than the null-move score, so the value you store in the hash table for a position in which the null move fails high could be the null-move score rather than beta.
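As a tiny sketch of that point (the function name and the fail_soft flag are hypothetical), the value worth hashing as a lower bound after a fail-high null-move search:

```c
/* Value to store as a lower bound after a null-move search.
   If the null score fails high, a fail-soft engine can store the
   null score itself, since the best real move is assumed to be at
   least that good; a fail-hard engine clamps to beta. */
int nullmove_hash_value(int null_score, int beta, int fail_soft)
{
    if (null_score < beta)              /* no fail high: nothing special */
        return null_score;
    return fail_soft ? null_score : beta;
}
```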
H.G.Muller
 
Posts: 3170
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: Hash Table handling with LMR/Futility pruning

Postby Chan Rasjid » 24 Oct 2006, 09:02

Hello,
[edited] - depth <= 1
If search depth <= 1, i.e. at the horizon/in QS, we may do futility pruning.
A move is skipped and not searched if it will certainly fail low, i.e. when standing pat at the next ply will fail high due only to board material plus the maximum positional value the evaluation can contribute.

If we find a futile move:
1) if a current best score/move has already been found and best >= alpha, there is no need to set a best score/move.
2) if there is no best score/move yet, or the current best <= alpha, a best must be set, viz. best = alpha, move = futile-move. I think it may be necessary to revert to fail hard.

It can happen that the final result after searching all moves has a futile move as the best, i.e. the node fails low with an upper bound. Hashing score = alpha, type = UB will always be a valid hash entry for all probes, irrespective of bounds.
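A schematic sketch of that fail-hard bookkeeping. Move generation and recursion are replaced here by an array of pre-computed move scores, and the bound flags are illustrative: if no move raises alpha, the node returns alpha with an upper-bound type, which stays valid for any later probe window.

```c
enum { TT_EXACT, TT_UPPER, TT_LOWER };   /* illustrative flag names */

/* Fail-hard node loop over pre-scored moves: never report a score
   below alpha, and record which bound type the result represents. */
int fail_hard_node(const int *move_scores, int n, int alpha, int beta,
                   int *tt_flag)
{
    int best = alpha;        /* fail hard: alpha is the floor */
    *tt_flag = TT_UPPER;     /* until some move raises alpha */
    for (int i = 0; i < n; i++) {
        int score = move_scores[i];
        if (score >= beta) {             /* fail high: lower bound */
            *tt_flag = TT_LOWER;
            return beta;
        }
        if (score > best) {              /* raised alpha: exact score */
            best = score;
            *tt_flag = TT_EXACT;
        }
    }
    return best;             /* == alpha with TT_UPPER if all moves failed low */
}
```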

Best Regards
Rasjid
Chan Rasjid
 
Posts: 73
Joined: 23 Feb 2005, 16:36
Location: Singapore

Re: Hash Table handling with LMR/Futility pruning

Postby Chan Rasjid » 24 Oct 2006, 09:41

Hello,

As Muller stated, futility pruning is not true pruning, since the same chess tree will always be searched.

I only have a vague idea of LMR, but it is true pruning. It is unlikely that independent searches of the same node will search the same chess tree. As with all pruning, extension and reduction, there will always be some search instabilities. An implementation of LMR is only working, or beneficial, if hashed entries may be used and the program still tests stronger.

Best Regards,
Rasjid

Re: Hash Table handling with LMR/Futility pruning

Postby H.G.Muller » 25 Oct 2006, 09:15

Scores are supposed to be independent of move ordering. If LMR broke that rule, it would only show that the algorithm sucks. Relying on the hashed score of an LMR-reduced node doesn't introduce any error compared to searching the node again, because even though the re-search might give you a different score due to a different move ordering, you still wouldn't know whether it was the hashed score or the re-search that was at fault.

This would be different for true path dependencies, due to repetition draws.

Re: Hash Table handling with LMR/Futility pruning

Postby Pradu » 25 Oct 2006, 12:19

Thanks for the replies,

I guess if the pruning sucks, the hash tables will be buggy; if it is good, they will likely not be buggy.

Re: Hash Table handling with LMR/Futility pruning

Postby bob » 26 Oct 2006, 04:29

H.G.Muller wrote:Scores are supposed to be independent of move ordering. If LMR broke that rule, it would only show that the algorithm sucks. Relying on the hashed score of an LMR-reduced node doesn't introduce any error compared to searching the node again, because even though the re-search might give you a different score due to a different move ordering, you still wouldn't know whether it was the hashed score or the re-search that was at fault.

This would be different for true path dependencies, due to repetition draws.


LMR definitely breaks that rule, because if you search moves in a different order and use some sort of "history" value, those values will be different when the same tree is searched in a different order, all else being exactly the same... that's the nature of using things like history counters to reduce the depth of some moves...
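A sketch of the kind of history-driven reduction decision being discussed; the threshold, guard conditions, and names are all illustrative assumptions. Because history_score depends on the order in which earlier nodes were visited, the same position can receive a different reduction on a later visit:

```c
#define HISTORY_THRESHOLD 1000   /* illustrative cutoff-frequency threshold */

/* How many plies to reduce a move at this node.  Early, tactical and
   shallow cases are never reduced; late quiet moves are reduced more
   when the history counter says they rarely caused a cutoff. */
int lmr_reduction(int move_number, int depth, int history_score,
                  int is_capture, int in_check)
{
    if (move_number < 4 || depth < 3 || is_capture || in_check)
        return 0;                                   /* no reduction */
    return (history_score < HISTORY_THRESHOLD) ? 2 : 1;
}
```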
bob
 
Posts: 156
Joined: 10 May 2006, 17:59

Re: Hash Table handling with LMR/Futility pruning

Postby H.G.Muller » 28 Oct 2006, 10:52

But the whole idea of LMR is that you reduce those late moves because they are no good and so would not affect the score. If they were score-determining moves, reducing them would be a grave mistake.

The justification of any reduction scheme is this: which move is best is determined by differences between move scores. But you are not interested in the exact value of this difference, just in the sign of the difference and the score of the best move. So you can afford an error margin on non-best moves, as long as this margin stays below the difference between the two scores.

Now if a reduction in search depth corresponds to an error that increases with the magnitude of the reduction, you can afford to reduce bad moves more than approximately equal ones. If your history tables help you identify 'uninteresting' moves that have little potential for causing surprises, you can also reduce those more before the error becomes too large.

If LMR routinely reduces score-determining moves, significantly decreasing the reliability of the search, the algorithm sucks. Of course there will always be some errors, but it only makes sense to use LMR if their frequency is far below the error you would get from simply searching less deep in every branch.

But the question really was: should the presence of search instability, where you occasionally get a different score when you re-search the same position, discourage you from storing the scores in the hash table? And my answer would be: of course not! Not even so much because errors are supposed to be infrequent, but because the chance of an error during re-search isn't any smaller or bigger than the chance of an error in the hashed score.

