Quick analysis of some boolean features at shallow depth


Quick analysis of some boolean features at shallow depth

Postby Dann Corbit » 04 Jun 2004, 19:36

Posted by: Dann Corbit at 04 June 2004 20:36:22:

Using Beowulf's calibration function, we see some interesting results for shallow tests of binary parameters.
DEPTH=2
use_delta
(x=0.000000, y=4423.000000) t=56.000000,
(x=1.000000, y=4423.000000) t=63.000000, 0.000000 is better for use_delta at depth 2
use_eval_sc
(x=0.000000, y=4436.000000) t=69.000000,
(x=1.000000, y=4423.000000) t=64.000000, solves fewer but runs faster. Probably a net loss since speedup is very small.
use_hash
(x=0.000000, y=4414.000000) t=63.000000,
(x=1.000000, y=4423.000000) t=62.000000, 1.000000 is better for use_hash at depth 2
use_iid
(x=0.000000, y=4423.000000) t=70.000000,
(x=1.000000, y=4423.000000) t=66.000000, 1.000000 is better for use_iid at depth 2
use_killers
(x=0.000000, y=4420.000000) t=67.000000,
(x=1.000000, y=4423.000000) t=63.000000, 1.000000 is better for use_killers at depth 2
use_null
(x=0.000000, y=4423.000000) t=63.000000,
(x=1.000000, y=4423.000000) t=63.000000, does not matter either way
use_razoring
(x=0.000000, y=4423.000000) t=63.000000,
(x=1.000000, y=4423.000000) t=63.000000, does not matter either way
use_see
(x=0.000000, y=4447.000000) t=107.000000,
(x=1.000000, y=4423.000000) t=64.000000, mixed result but a big speedup. Probably better to use SEE.
use_verification
(x=0.000000, y=4423.000000) t=65.000000,
(x=1.000000, y=4423.000000) t=64.000000, 1.000000 is slightly better for use_verification at depth 2
use_window
(x=0.000000, y=4423.000000) t=64.000000,
(x=1.000000, y=4423.000000) t=65.000000, does not matter either way
DEPTH=3
use_delta
(x=0.000000, y=5111.000000) t=283.000000,
(x=1.000000, y=5109.000000) t=290.000000, 0.000000 is better for use_delta at depth 3
use_eval_sc
(x=0.000000, y=5121.000000) t=361.000000,
(x=1.000000, y=5111.000000) t=287.000000, solves fewer but runs faster. Probably a net loss since speedup is very small.
use_hash
(x=0.000000, y=5114.000000) t=342.000000,
(x=1.000000, y=5111.000000) t=287.000000, mixed result but a big speedup. Probably better to use hashing.
use_history
(x=0.000000, y=5111.000000) t=318.000000,
(x=1.000000, y=5111.000000) t=288.000000, 1.000000 is better for use_history at depth 3
use_iid
(x=0.000000, y=5111.000000) t=288.000000,
(x=1.000000, y=5111.000000) t=288.000000, does not matter either way
use_killers
(x=0.000000, y=5104.000000) t=288.000000,
(x=1.000000, y=5111.000000) t=287.000000, 1.000000 is better for use_killers at depth 3
use_null
(x=0.000000, y=5119.000000) t=276.000000,
(x=1.000000, y=5111.000000) t=287.000000, 0.000000 is better for use_null at depth 3 (a bit puzzling)
use_razoring
(x=0.000000, y=5119.000000) t=306.000000,
(x=1.000000, y=5119.000000) t=313.000000, 0.000000 is better for use_razoring at depth 3
use_see
(x=0.000000, y=5136.000000) t=420.000000,
(x=1.000000, y=5119.000000) t=327.000000, mixed result but a big speedup. Probably better to use SEE.
use_verification
(x=0.000000, y=5119.000000) t=320.000000,
(x=1.000000, y=5119.000000) t=341.000000, 0.000000 is better for use_verification at depth 3
use_window
(x=0.000000, y=5107.000000) t=346.000000,
(x=1.000000, y=5119.000000) t=327.000000, 1.000000 is better for use_window at depth 3
DEPTH=4
use_delta
(x=0.000000, y=5463.000000) t=1076.000000,
(x=1.000000, y=5460.000000) t=1105.000000, 0.000000 is better for use_delta at depth 4
use_eval_sc
(x=0.000000, y=5483.000000) t=1309.000000,
(x=1.000000, y=5463.000000) t=1098.000000, solves fewer but runs faster. Probably a net loss since speedup is very small.
use_hash
(x=0.000000, y=5452.000000) t=1536.000000,
(x=1.000000, y=5463.000000) t=1097.000000, 1.000000 is better for use_hash at depth 4 (now for sure we see benefit)
use_history
(x=0.000000, y=5470.000000) t=1229.000000,
(x=1.000000, y=5463.000000) t=1098.000000, mixed result seems much less clear than at shallower plies
use_iid
(x=0.000000, y=5463.000000) t=1097.000000,
(x=1.000000, y=5463.000000) t=1097.000000, does not matter either way
use_killers
(x=0.000000, y=5468.000000) t=1084.000000,
(x=1.000000, y=5463.000000) t=1096.000000, 0.000000 is better for use_killers at depth 4
use_null
(x=0.000000, y=5499.000000) t=1201.000000,
(x=1.000000, y=5468.000000) t=1083.000000, Hard to say for sure what the net benefit is, since the speedup also causes fewer solutions. The speedup is not dominating.
use_razoring
(x=0.000000, y=5468.000000) t=1081.000000,
(x=1.000000, y=5468.000000) t=1084.000000, 0.000000 is better for use_razoring at depth 4
use_see
(x=0.000000, y=5476.000000) t=1589.000000,
(x=1.000000, y=5468.000000) t=1081.000000, mixed result but a big speedup. Probably better to use SEE.
use_verification
(x=0.000000, y=5468.000000) t=1082.000000,
(x=1.000000, y=5468.000000) t=1081.000000, pretty much a toss up, but may be slightly better to use it
use_window
(x=0.000000, y=5452.000000) t=1097.000000,
(x=1.000000, y=5468.000000) t=1081.000000, 1.000000 is better for use_window at depth 4
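
For reference, the sweep above amounts to something like the following driver (a minimal sketch, not Beowulf's actual calibration code; set_bool_param and solve_suite_at_depth are hypothetical stand-ins for the engine's own parameter and benchmark hooks):

#include <stdio.h>
#include <time.h>

/* Hypothetical engine hooks - stand-ins, not Beowulf's real API. */
extern void set_bool_param(const char *name, int value);
extern int  solve_suite_at_depth(int depth);   /* number of positions solved */

/* Toggle one boolean parameter off and on, solving the suite each time
 * and reporting positions solved (y) and elapsed time (t, seconds here). */
static void sweep_param(const char *name, int depth)
{
    printf("%s\n", name);
    for (int x = 0; x <= 1; x++) {
        set_bool_param(name, x);
        clock_t start = clock();
        int solved = solve_suite_at_depth(depth);
        double t = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("(x=%d, y=%d) t=%.0f\n", x, solved, t);
    }
}

int main(void)
{
    static const char *params[] = {
        "use_delta", "use_eval_sc", "use_hash", "use_history", "use_iid",
        "use_killers", "use_null", "use_razoring", "use_see",
        "use_verification", "use_window"
    };
    for (int depth = 2; depth <= 4; depth++) {
        printf("DEPTH=%d\n", depth);
        for (size_t i = 0; i < sizeof params / sizeof params[0]; i++)
            sweep_param(params[i], depth);
    }
    return 0;
}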



my ftp site {remove http:// unless you like error messages}
Dann Corbit
 

Re: Quick analysis of some boolean features at shallow depth

Postby Uri Blass » 04 Jun 2004, 19:49

Posted by: Uri Blass at 04 June 2004 20:49:03:
In reply to: Quick analysis of some boolean features at shallow depth, posted by: Dann Corbit at 04 June 2004 20:36:22:
Using Beowulf's calibration function, we see some interesting results for shallow tests of binary parameters.
[snip]
What are use_delta and use_eval_sc?
I guess that you solved some test suite at a fixed depth and decided, based on the number of solutions, whether each binary parameter is better on or off, but I do not understand the meaning of most of the parameters.
Uri
Uri Blass
 

Re: Quick analysis of some boolean features at shallow depth

Postby Dann Corbit » 04 Jun 2004, 19:59

Posted by: Dann Corbit at 04 June 2004 20:59:37:
In reply to: Re: Quick analysis of some boolean features at shallow depth, posted by: Uri Blass at 04 June 2004 20:49:03:

[snip]
What are use_delta and use_eval_sc?
I guess that you solved some test suite at a fixed depth and decided, based on the number of solutions, whether each binary parameter is better on or off, but I do not understand the meaning of most of the parameters.
Uri
use_delta stands for delta cuts:
/* If this move looks like it won't improve alpha then ignore it. Simple qsearch
 * futility pruning called 'Delta cutting'. The safety margin is usually one pawn
 * score, but is adjustable in comp.h. The higher, the safer! At this stage we
 * prune only _really_ bad moves (DELTA_LEVEL + BISHOP_SCORE) */
if (USE_DELTA && (swapgain + DELTA_LEVEL + BISHOP_SCORE) < delta &&
    *move != hashmove && Skill > 6) {
  DeltaCuts++;
  move++;
  continue;
}
Also here:
/* If this looks like a good move */
if (swapgain >= 0) {
  /* Do the move */
  U = DoMove(B, *move);

  /* Experimental new (safer) type of delta cuts. Basically, if it looks like a delta
   * cut should be taken then actually check the LazyEval score to see if the estimate
   * was at all accurate. If the score plus a bound is still below delta then
   * really do the cut. */
  if (USE_DELTA && (score + swapgain + DELTA_LEVEL) < delta && Skill > 6) {
    if (IsDrawnMaterial(B)) evalscore = (Current_Board.side == WHITE ? DRAW_SCORE : -DRAW_SCORE);
    else evalscore = LazyEval(B) + IsWonGame(B);
    if (B->side == BLACK) evalscore = -evalscore;
    if ((evalscore + DELTA_LEVEL) < delta) {
      DeltaCuts++;
      UndoMove(B, *move, U);
      break;
    }
  }
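
For clarity, here is the same delta-cut idea in a generic form (a minimal sketch against assumed engine primitives; qsearch, static_eval, capture_gain, the Position/Move types and DELTA_MARGIN are illustrative names, not Beowulf's):

#define DELTA_MARGIN 200  /* assumed safety margin in centipawns; larger = safer */

/* Generic quiescence search with delta cutting: a capture is skipped without
 * being searched if even its best-case material gain, plus a safety margin,
 * cannot lift the stand-pat score up to alpha. */
int qsearch(Position *pos, int alpha, int beta)
{
    int stand_pat = static_eval(pos);
    if (stand_pat >= beta) return stand_pat;
    if (stand_pat > alpha) alpha = stand_pat;

    MoveList captures;
    generate_captures(pos, &captures);
    for (int i = 0; i < captures.count; i++) {
        Move m = captures.moves[i];

        /* Delta cut: even winning this material cannot reach alpha. */
        if (stand_pat + capture_gain(pos, m) + DELTA_MARGIN <= alpha)
            continue;

        make_move(pos, m);
        int score = -qsearch(pos, -beta, -alpha);
        unmake_move(pos, m);

        if (score >= beta) return score;
        if (score > alpha) alpha = score;
    }
    return alpha;
}
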
Here is the evaluation shortcut:

/* Check to see if we can just cutoff here - this score is so good that
 * we needn't bother working it out exactly - it's going to cause a cutoff.
 * We have to be very careful because the score difference gained by doing
 * a proper eval here might be huge, therefore we only cutoff if this
 * position is theoretically won, and beta isn't, or it is theoretically
 * lost and alpha isn't. We can't just use standard futility cutoffs like
 * we do below, because in theoretically won positions, the score returned
 * by LazyEval() will almost always be much larger than EVAL_FUTILITY. */
if (USE_EVAL_SC && score != 0 && ((score > 0 && beta < (T_WIN_BOUND)) ||
                                  (score < 0 && alpha > -(T_WIN_BOUND)))) {
  EvalCuts++;
#ifdef DEBUG_EVAL
  fprintf(stdout, "Early Cut [1] %d (A=%d,B=%d)\n", score, alpha, beta);
#endif
  return score;
}

/* Get a lazy score evaluation
 * (material score & simple positional terms plus passed pawns) */
lazyscore = LazyEval(B) + score;

/* Check to see if we can just cutoff here. The expectation is that the LazyEval
 * is always within EVAL_FUTILITY of the true score. Of course this isn't always
 * true, but we hope that it is true most of the time. */
if (USE_EVAL_SC && (lazyscore > (beta + EVAL_FUTILITY) || lazyscore < (alpha - EVAL_FUTILITY))) {
  EvalCuts++;
#ifdef DEBUG_EVAL
  fprintf(stdout, "Early Cut [2] %d (A=%d,B=%d)\n", lazyscore, alpha, beta);
#endif
  return lazyscore;
}
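
In isolation, that lazy-evaluation shortcut boils down to something like this (a sketch with made-up names; lazy_eval, full_eval and EVAL_MARGIN stand in for Beowulf's LazyEval, its full evaluation and EVAL_FUTILITY):

#define EVAL_MARGIN 150  /* assumed bound on |full eval - lazy eval| */

/* Evaluation shortcut: compute the cheap lazy score first.  If it already
 * lies far outside the (alpha, beta) window, return it directly on the
 * assumption that the exact score would fail high or low the same way;
 * otherwise pay for the full evaluation. */
int evaluate(Position *pos, int alpha, int beta)
{
    int lazy = lazy_eval(pos);   /* material plus simple positional terms */
    if (lazy > beta + EVAL_MARGIN || lazy < alpha - EVAL_MARGIN)
        return lazy;
    return full_eval(pos);
}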




my ftp site {remove http:// unless you like error messages}
Dann Corbit
 

Re: Quick analysis of some boolean features at shallow depth

Postby Dann Corbit » 04 Jun 2004, 22:00

Posted by: Dann Corbit at 04 June 2004 23:00:49:
In reply to: Re: Quick analysis of some boolean features at shallow depth, posted by: Dann Corbit at 04 June 2004 20:59:37:

Delta cuts are still bad at depth 5 (both slower and fewer solutions).
use_delta
(x=0.000000, y=5821.000000) t=3180.000000,
(x=1.000000, y=5818.000000) t=3261.000000, 0.000000 is better for use_delta at depth 5



my ftp site {remove http:// unless you like error messages}
Dann Corbit
 

Re: Quick analysis of some boolean features at shallow depth

Postby Dann Corbit » 05 Jun 2004, 01:00

Posted by: Dann Corbit at 05 June 2004 02:00:41:
In reply to: Re: Quick analysis of some boolean features at shallow depth, posted by: Dann Corbit at 04 June 2004 23:00:49:
Delta cuts are still bad at depth 5 (both slower and fewer solutions).
use_delta
(x=0.000000, y=5821.000000) t=3180.000000,
(x=1.000000, y=5818.000000) t=3261.000000, 0.000000 is better for use_delta at depth 5
Interestingly, use_eval_sc has now become worthwhile (at depth = 5 ply):
use_eval_sc
(x=0.000000, y=5821.000000) t=3387.000000,
(x=1.000000, y=5821.000000) t=2882.000000, 1.000000 is better for use_eval_sc at depth 5



my ftp site {remove http:// unless you like error messages}
Dann Corbit
 

