Posted by: Paul at 16 March 2001 04:36:42:
If my understanding of it is correct, null move pruning searches "very good"-looking moves at a reduced depth.
Why not do the same for very bad looking moves?
Not exactly; the null move algorithm basically detects "very good" _positions_, by allowing the opponent to move twice in a row and then performing a reduced-depth search: if that search returns a "high enough" score for the null-moving side (i.e., if the opponent is not able to bring the score below beta even moving twice in a row), the original position is supposed to be "good enough" and a fail-high score is returned without further searching.
I think I get the idea of the null move. Basically, if the position is so good for you that your opponent can move twice in a row and still not gain an advantage, the position is probably too good [is that what a fail high is? I never really got it, despite reading a couple of computer chess books and websites], your opponent won't allow it, and hence we don't need to search further?
Carmelo already answered this.
What exactly do you mean by searching at a reduced depth? I understand that all programs do incremental or iterative searches, e.g. search to depth 1, then 2, then 3, and so on.
So what does searching at a reduced depth mean? Say this time you are supposed to search to depth 8: when do you try a null move? And if you do it at, say, 6 plies, does searching at a reduced depth mean you search one more ply and that's it?
Yes. But how do we know that the position is good enough without moving? In general, this will need a search. So, instead of searching the position with the side to move, we pass the right to move, let the opponent move again, and search this position.
If the position with the opponent to move were searched to the same depth, nothing would be saved in general. So the idea is: when we actually want to analyze this position to depth N (which means that after any of our moves, we search the resulting position to depth N-1), then to prove that this really is a superior position it is enough to pass the right to move and analyze to depth N-(R+1). R is the null move depth reduction (my term; I am not sure if this is the correct one). So, at this point in the search, a reduced-depth search is started, which needs much less time.
If we get a fail high here, we stop searching this position and assume that the position is too good, and that probably an inferior move of the opponent further up in the search tree got refuted. This method can also fail, e.g. in zugzwang situations: there it is an actual advantage to pass the right to move to the other side, and the proof above will be wrong. There exist methods to avoid this.
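The scheme described above might be sketched like this. The game below (players alternately take one of the remaining items and bank its value; the evaluation is just the banked-material difference) is invented purely for illustration; only the null-move logic itself follows the description in the posts above.

```python
INF = 10**9
R = 2  # null-move depth reduction

def evaluate(me, opp):
    """Static evaluation from the side to move's point of view."""
    return me - opp

def search(items, me, opp, depth, alpha, beta, allow_null=True):
    """Negamax with alpha-beta and null-move pruning.

    A position is (items still on the table, my banked points, the
    opponent's banked points); a move takes one item and banks it.
    """
    if depth <= 0 or not items:
        return evaluate(me, opp)
    # Null move: pass the turn (the opponent moves twice in a row) and
    # search R extra plies shallower with a null window at beta.  If
    # even then the score stays >= beta, the real position is "good
    # enough" and we fail high without searching our own moves.
    if allow_null and depth > R + 1:
        score = -search(items, opp, me, depth - 1 - R, -beta, -beta + 1, False)
        if score >= beta:
            return beta  # fail high
    best = -INF
    for i, v in enumerate(items):
        rest = items[:i] + items[i + 1:]
        score = -search(rest, opp, me + v, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # ordinary beta cutoff
    return best
```

Note that the reduced-depth null search uses a null window around beta; a real engine would additionally skip the null move when in check or in likely-zugzwang endgames, for the reason given above.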
I guess the idea of Paul was the following: say we want to find the best move in the start position, and assume that we want to analyze all positions to depth 5 (so we want to analyze the position after each of the 20 white moves to depth 4). This is what a "normal" chess search algorithm (without pruning mechanisms) does. But now, say there is some indication that some moves are better than others. We might decide (how to do this would be the difficult thing) that e4 and d4 look most interesting, followed by Nf3, Nc3, c4 and f4. The other moves look only a little interesting. So we might want to search only e4 and d4 at full depth (4), Nf3 and its group at slightly reduced depth (3), and the other moves at even more reduced depth (2).
Nullmove does something similar really.
-- Dieter
Perhaps, yes. But this won't be a big disadvantage: "useless" doesn't really hurt. Also, after some tempo maneuver, you might get to the same position as a "real" one.
I see. So it's still a search, but a very weird one, since you search from a position where the opponent has moved twice. But wouldn't the information obtained here be useless for storing in the transposition table, since the position searched is in some sense not "real"?
Or perhaps it's still useful when you search to a deeper depth and use the null move again?
So R=1 means you search one ply less, etc. Also, does Yace usually use R=3? That seems very aggressive. Or do the values in the Yace.ini file [Yace_high/low] mean something else?
The double null move thing, right? I will go think about it.
But your [actually Paul's] idea seems to imply that you "decide" from the start position which moves will be searched to a lesser depth? Don't you actually do that deeper in the move tree?
What do you mean by "In early game the null move algorithm will use
null_low when the current search depth is small (smaller or equal null_split)
and null_high, when it is high"
[From the Yace readme file]
Is it because in the early game there are more pieces on the board, hence more moves to consider, and as such Yace usually searches to far lower depths? Hence it will use the null setting set in null_low?
So with the settings Null_high 4.0, Null_low 3.0 and Null_split 4.5, whenever Yace searches to depth 5 and above it will do a depth reduction of 4?
I must be misunderstanding something, since your default Null_split of 4.5 strikes me as not very useful: it is reached very quickly anyway? For the current Yace it makes no difference, of course, since Null_high and Null_low have the same value.
Also, does it make more sense for Null_high values to be higher than Null_low?
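For what it's worth, the rule quoted from the Yace readme seems to amount to something like the following; the constant names mirror the .ini entries, but the exact comparison is only my guess at what the readme describes:

```python
# A guess at the depth-dependent reduction rule quoted from the
# Yace readme: use null_low at small search depths (at or below
# null_split) and null_high at larger ones.  All names are taken
# from the discussion above, not from Yace source code.
NULL_LOW, NULL_HIGH, NULL_SPLIT = 3.0, 4.0, 4.5

def null_reduction(depth):
    """Null-move depth reduction as a function of current depth."""
    return NULL_LOW if depth <= NULL_SPLIT else NULL_HIGH
```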
Return to Archive (Old Parsimony Forum)