Ruffian Comparison

Archive of the old Parsimony forum. Some messages could not be restored. Limitations: search for authors does not work, Parsimony-specific formats do not work, and threaded view does not work properly. Posting is disabled.

Ruffian Comparison

Postby Robert Allgeuer » 03 Mar 2004, 14:57


I have compared several Ruffian versions in a kind of qualification tournament for my YABRL Blitz rating list (see http://f11.parsimony.net/forum16635/messages/62408.htm for the latest published list).
Conditions:
Time control 300+2, all 3-, 4- and 5-man EGTBs, hash 96 MB, ponder off, Athlon 1.1 GHz, Win2k, Winboard and WBTM tourney manager 0.60, Elostat 1.1b
Participants:
Ruffian 1.0.1 with 1.0.1 book
Ruffian 2.0.0 with 2.0.0 book
Ruffian Leiden with the Leiden book
Ruffian 2.0.2 with 2.0.0 book
Ruffian 2.1.0 with 2.0.0 book
Ruffian 08.02.2004 (a beta version released before 2.0.2 and 2.1.0) with 2.0.0 book
and as opponents:
Smarthink 0.17a
Gromit 3.8.2
Thinker 4.5b
Crafty 17.14DC
Crafty MPC
Aristarch 4.37

Each Ruffian version played a 20-game match against every other Ruffian version and against each opponent (Ruffian 1.0.1 had one duplicate game, which was removed).

Results:


    Program                     Elo    +   -   Games   Score   Av.Op.  Draws
  1 Ruffian 08.02.2004        : 2722   39  36   220    61.6 %   2640   37.7 %
  2 Ruffian Leiden            : 2713   40  35   220    60.2 %   2640   38.6 %
  3 Ruffian v2.1.0            : 2703   41  35   220    58.9 %   2641   37.7 %
  4 Ruffian v2.0.2            : 2695   42  31   220    57.5 %   2642   45.0 %
  5 Ruffian v2.0.0            : 2664   45  31   220    52.7 %   2645   41.8 %
  7 Ruffian v1.0.1            : 2650   47  32   219    50.5 %   2646   37.9 %
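
As a cross-check, the Elo column can be reproduced approximately from the Score and Av.Op. columns with the standard logistic performance formula; the small sketch below is only an approximation of what Elostat actually computes:

    # Rough cross-check of the table: performance rating from score and average
    # opponent rating via the standard logistic Elo formula. Elostat's own
    # algorithm is iterative, so this only approximates the published figures.
    import math

    def performance(score_pct, avg_opponent):
        p = score_pct / 100.0
        return avg_opponent - 400.0 * math.log10(1.0 / p - 1.0)

    # (name, score %, average opponent Elo) taken from the table above
    rows = [
        ("Ruffian 08.02.2004", 61.6, 2640),
        ("Ruffian Leiden",     60.2, 2640),
        ("Ruffian v2.1.0",     58.9, 2641),
        ("Ruffian v2.0.2",     57.5, 2642),
        ("Ruffian v2.0.0",     52.7, 2645),
        ("Ruffian v1.0.1",     50.5, 2646),
    ]

    for name, score, avg_op in rows:
        print(f"{name:20s} ~{performance(score, avg_op):4.0f}")
    # prints roughly 2722, 2712, 2704, 2695, 2664, 2649 -- within a point or two
    # of the Elo column computed by Elostat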


Observations:
1. These results apply, of course, only to the conditions of this test (Blitz, etc.).
2. Ruffian 2.0.0 appears to be stronger than the free Ruffian 1.0.1, although only slightly: 14 Elo points in this test, and 28 points in my more accurate YABRL rating list after more than 800 games each.
3. Ruffian Leiden and the newer versions (2.0.2, 2.1.0 and 08.02.2004) are stronger than version 2.0.0. However, they are close to each other, and it appears difficult to determine which of them is actually the strongest.
4. Looking at the results of version 2.1.0 in more detail, it becomes apparent that it consistently scores less than the other Ruffian versions (except 1.0.1) and "saves" its high rating only by scoring well in the direct matches against Ruffian 2.0.2 and 08.02.2004. In the matches against the non-Ruffian engines, 2.1.0 appears to be the weakest of the new Ruffian versions.
5. From the characteristics of its results it is apparent that 08.02.2004 is a (late) beta version of Ruffian 2.0.2 (and not of 2.1.0). It would be highly interesting to know whether this version is in fact identical to 2.0.2 (the measured 27-point difference in strength is within the error margin; see the sketch after this list) or whether changes made before the release of 2.0.2 decreased its playing strength.
6. The Leiden version seems to be one of the strongest. Version 2.0.2 is a bug-fix version of 2.0.0 and some 30 points stronger than it. If I had a wish, I would ask for a bug-fix Leiden version; that one would most probably be the strongest Ruffian of all.
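
Regarding the error margin mentioned in point 5: the +/- columns can be approximated with a simple 95% confidence interval on the score (a normal approximation; Elostat's own margin calculation may differ slightly). A 27-point gap between two versions is comfortably inside such a margin:

    # Rough reconstruction of the +/- margins (point 5). Assumes a simple 95%
    # normal confidence interval on the score; Elostat's own margin calculation
    # may differ somewhat, so treat the numbers as an illustration only.
    import math

    def perf(p, avg_op):
        return avg_op - 400.0 * math.log10(1.0 / p - 1.0)

    games, score, draws, avg_op = 220, 0.616, 0.377, 2640   # Ruffian 08.02.2004 row
    wins = score - draws / 2.0
    var  = wins + 0.25 * draws - score ** 2      # per-game variance (win=1, draw=0.5, loss=0)
    se   = math.sqrt(var / games)                # standard error of the score

    lo, hi = score - 1.96 * se, score + 1.96 * se
    print(round(perf(hi, avg_op) - perf(score, avg_op)),   # about +38
          round(perf(lo, avg_op) - perf(score, avg_op)))   # about -36; the table shows +39/-36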
Robert



YABRL (Yet Another Blitz Rating List)
Robert Allgeuer
 

Re: Ruffian Comparison

Postby Heinz van Kempen » 03 Mar 2004, 15:14

In reply to: Ruffian Comparison, posted by Robert Allgeuer at 03 March 2004 14:57:58:

If I had a wish, I would ask for a bug-fix Leiden version; that one would most probably be the strongest Ruffian of all.
Robert
Hello Robert,
I would support this. The Ruffian Leiden version also scored better for me than Ruffian 2.0.0. Additionally, after more than 400 games with Ruffian 2.0.0 and around 800 games each with Ruffian 1.0.5 and Ruffian 1.0.1, version 2.0.0 scored 38 and 40 Elo above the public versions, respectively. I have not had time to test the newer updates yet.
Best Regards
Heinz
Heinz van Kempen
 

Re: Ruffian Comparison

Postby Wael Deeb » 03 Mar 2004, 20:16

In reply to: Re: Ruffian Comparison, posted by Robert Allgeuer at 03 March 2004 19:29:57:
Hi,
it may be that the better Leiden book is the main reason that Ruffian Leiden was best in your test. If you use Ruffian 210 or Ruffian 202 with the Leiden book they will get better results.
Regards Dieter
Possibly; I have just decided to stick to testing the default configuration and settings. On the other hand, it might very well be that the Leiden book is tuned to the playing style of the Leiden version, while 2.0.2 and 2.1.0 have a different playing style that might not benefit as much from this book.
I think that at this stage we do not know yet whether it is the book or the engine that makes the difference. But I think one conclusion is valid: a bug-fixed Leiden version should be even stronger.
Robert
Hi,
I am nearly sure that the Leiden book contains solid opening lines with some tricky and not-so-popular moves which take the opponent out of its book and give an advantage to Ruffian!
Regards,
Dr.WAEL DEEB
Wael Deeb
 

Re: Ruffian Comparison

Postby Sune Fischer » 03 Mar 2004, 20:53

In reply to: Re: Ruffian Comparison, posted by Wael Deeb at 03 March 2004 20:16:12:
If you are only interested in the strength of the engines, why do the testing with books?
Books introduce a very large noise factor when used in testing.
Sometimes you get a good book line, sometimes a bad one; that makes it very hard to estimate the strength of the engines when such a potentially decisive factor is involved.
My suggestion is to use fixed start positions and to play with reversed colors; that way you can be certain that things are equal for all.
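A minimal sketch of what such a schedule could look like (the engine names and position list below are placeholders, not actual test data):

    # Sketch of a "fixed start positions, reversed colors" schedule: every pairing
    # plays each position twice, once with each color. The engine names and the
    # position list are placeholders, not actual test data.
    engines = ["Engine X", "Engine Y"]
    positions = [
        "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",  # e.g. Nunn-style FENs
        # ... more start positions ...
    ]

    schedule = []
    for fen in positions:
        for i, first in enumerate(engines):
            for second in engines[i + 1:]:
                schedule.append((first, second, fen))   # first engine plays White
                schedule.append((second, first, fen))   # same position, colors reversed

    for white, black, fen in schedule:
        print(f"{white} (White) vs {black} (Black) from {fen}")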
-S.
Sune Fischer
 

Re: Ruffian Comparison

Postby Robert Allgeuer » 03 Mar 2004, 21:20

In reply to: Re: Ruffian Comparison, posted by Sune Fischer at 03 March 2004 20:53:56:
It depends on what you decide to test: either the core engine or the complete chess-playing system. Both are valid and interesting, IMO. From a user perspective I am also interested in whether a bad book "spoils" an otherwise good engine.
Robert
Robert Allgeuer
 

Re: Ruffian Comparison

Postby Sune Fischer » 03 Mar 2004, 21:39

In reply to: Re: Ruffian Comparison, posted by Robert Allgeuer at 03 March 2004 21:20:16:
Yes, I guess the book is frequently seen as part of the engine, and many are therefore interested in testing the whole package (like the SSDF).
However, if you want to find out whether engine X version 1, 2, 3 or 4 is the strongest, then it makes little sense to use a book.
Even if it is the same book, it will not guarantee that the engines get equally good starting positions.
One of the principles of experimental research is to make sure the experiment is reproducible; I must say that books do little to help in this regard ;-)
-S.
Sune Fischer
 

Re: Ruffian Comparison

Postby Gábor Szõts » 04 Mar 2004, 10:03

In reply to: Re: Ruffian Comparison, posted by Robert Allgeuer at 03 March 2004 19:29:57:
I don't think that any of the Ruffian books is tuned to any of the engine versions. They are just books with different character.
Gábor
Gábor Szõts
 

Re: Ruffian Comparison

Postby Jose Carlos » 04 Mar 2004, 11:19

In reply to: Re: Ruffian Comparison, posted by Sune Fischer at 03 March 2004 21:39:38:
What exactly do you expect from testing without a book? Every chess player, human or computer, has a "style" and a set of positions that he plays well and others badly. A set of "neutral" starting positions (with reversed colors) is not really neutral at all; the positions will surely suit one of the players better.
IMO, the book is necessary to measure the strength of the "playing system" (or simply the "player"). If you don't use the book, you'll measure the "analyst".
I certainly agree that the book adds noise. But the "measure chess strength" experiment is not reproducible by any means, by definition. "Stronger" means "wins more games" or, more generally, "gets better results".
I'd like a situation where the book is created by the engines themselves, like humans do. We use games from other players; that's fine. The engines would be allowed to check games databases to create their books, then analyze the lines themselves and learn from their own games: a common database for all engines to start with, and private analysis from each engine kept secret. Pretty much like us. Then the endless book discussion would be solved forever.
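A minimal sketch of the "common database" part of this idea (the game list and its format are invented for illustration):

    # Sketch of the "common database" part of the idea: every engine starts from
    # the same game collection and derives its own book statistics from it. The
    # games and their format are invented for illustration; a real implementation
    # would parse PGN and add each engine's private analysis on top.
    from collections import defaultdict

    # (opening moves, result from White's point of view) -- stand-in for the shared database
    games = [
        (["e4", "c5", "Nf3"], 1.0),
        (["e4", "e5", "Nf3"], 0.5),
        (["d4", "Nf6", "c4"], 0.0),
    ]

    book = defaultdict(lambda: {"games": 0, "score": 0.0})
    for moves, white_result in games:
        line = []
        for ply, move in enumerate(moves):
            result = white_result if ply % 2 == 0 else 1.0 - white_result
            entry = book[(tuple(line), move)]   # position (as move sequence) plus the move played
            entry["games"] += 1
            entry["score"] += result
            line.append(move)

    for (line, move), stats in sorted(book.items()):
        print(" ".join(line) or "(start)", "->", move, f'{stats["score"]}/{stats["games"]}')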
José C.
Jose Carlos
 

Re: Ruffian Comparison

Postby Sune Fischer » 04 Mar 2004, 12:38

In reply to: Re: Ruffian Comparison, posted by Jose Carlos at 04 March 2004 11:19:30:
If you take a wide variety of positions, I don't see how a weaker engine could perform better than a stronger engine.
Is an engine that can only handle 1.e4 well but plays everything else like a patzer a strong engine?
Perhaps we disagree here; you would perhaps just use a 1.e4 book all the time and that would solve the problem.
I wouldn't like that solution though, and I'd consider the engine weak.
Probably a matter of taste. :-)
If you play engine X with book A against engine Y with book B, all you get is how X+A performs against Y+B.
If that is what you are interested in, then that's fine, but many times you are interested in X versus Y, with the book factor _out_ of the equation.
At least that is what I as a developer am interested in; I don't want to patch weaknesses by changing the book, I want to fix the engine instead.
It is only irreproducible because there is some inaccuracy with the timers on most systems, or some small background process that eats a little CPU; otherwise it would be reproducible, as most engines are (hopefully) deterministic when learning is off.
Think of it this way. You and I are going to run a 200-meter race, but before we begin we toss a die to see which of us gets X meters' head start. Now we race and one of us wins.
Does this experiment tell us anything about which of us is the faster runner? Is the experiment reproducible?
Obviously it is a rather silly way of testing who is faster; we would need many races before the die toss at the start averages out.
What I suggest instead is that we do two races, one where you get the inside track and one where I get the inside track.
If we run deterministically, this should even be reproducible :-)
Well, I have no problem with using books as such.
It's just that I personally don't consider it an interesting programming task; I'm solely interested in developing the engine right now, and the book factor is just a big annoyance to me.
If/when I change the book format I will do something like you are suggesting here: the engine will build the book based on its games.
It will add moves to the book when it wins against a stronger player and perhaps remove a move when losing to a weaker player, all the time keeping statistics on each move.
I guess there is nothing new to doing it like that. :-)
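A rough sketch of that rule, with hypothetical data structures (nothing here is taken from an actual engine):

    # Sketch of the learning rule just described: keep statistics on every book
    # move, reinforce the line after a win against a stronger opponent, and drop
    # the last book move after a loss to a weaker one. The data structures are
    # hypothetical, not taken from any existing engine.
    def update_book(book, own_moves, result, own_rating, opp_rating):
        """book maps (position_key, move) -> {'games', 'score', 'weight'};
        own_moves lists the (position_key, move) pairs played from book;
        result is 1.0 for a win, 0.5 for a draw, 0.0 for a loss."""
        for key in own_moves:
            entry = book.setdefault(key, {"games": 0, "score": 0.0, "weight": 1})
            entry["games"] += 1
            entry["score"] += result              # running statistics on each move

        if result == 1.0 and opp_rating > own_rating:
            for key in own_moves:
                book[key]["weight"] += 1          # prefer this line more often in future
        elif result == 0.0 and opp_rating < own_rating and own_moves:
            book.pop(own_moves[-1], None)         # drop the (probably bad) last book move
        return book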
-S.
Sune Fischer
 

Re: Ruffian Comparison

Postby Jose Carlos » 04 Mar 2004, 15:57

In reply to: Re: Ruffian Comparison, posted by Sune Fischer at 04 March 2004 12:38:43:
I also use tests similar to the Nunn test to measure progress in my engine. From a developer's point of view, the book can certainly be considered noise. But I'm talking about measuring strength.
Every time I have this discussion I see there are several definitions of "strength". This is a crucial point, because everything else depends on it. Strength, IMO, is the ability to score better. Better score -> stronger. That's how Elo works and how human competitions work.
Under this definition, understanding the KBNK ending, for example, is totally irrelevant if no KBNK ending occurs in my calibration games. Only my results count.
Force Kramnik to play 1.e4 in an important game against Shirov and see what happens. Kramnik, if left alone, would most probably play a closed opening. Is he weak because he can't play 1.e4 at the same level as 1.d4? Must Shirov play the Caro-Kann with Black to prove his strength?
My point is that to maximize my score in a chess game I can do anything I want (within the rules, of course), which includes never playing the English with White. I (JC) don't like the Sicilian with Black or the English with White, and I never play them in tournaments. I think programs should have the same right I have when strength is being measured.
Analysis capability is a different issue. If I want to buy a program to help me analyze, I'll probably want it to handle a wide variety of positions. But then, what about time handling? It also adds noise, because my search and eval might be better than yours while my time handling is much worse, and I'd lose many games to you. So which program is "stronger"? Mine searches better!
José C.
Jose Carlos
 

Re: Ruffian Comparison

Postby Sune Fischer » 04 Mar 2004, 18:11

In reply to: Re: Ruffian Comparison, posted by Jose Carlos at 04 March 2004 15:57:58:

Perhaps we have gone a bit astray here.
I usually don't respond to the how-to-do-testing threads, as I think the testing methodology must depend on what it is you want to test.
It was in the context of testing Ruffians that I responded, because, for the reasons stated, I feel it is a bad idea to involve books in these tests.
If you want to do testing with books to see if you have improved your engine, then that's all fine by me :-)
Like I said, this is probably a matter of taste.
I don't care for this way of solving problems myself, but YMMV.
There is also a difference between chess engines and humans in this regard.

Humans often have a preferred opening because they just like that type of position better; sometimes you feel like playing an attacking gambit game, other times you want a quieter game.
The opening will depend on your mood and perhaps on who your opponent is.
Chess engines have no excuses like that; they should just do everything well! ;)
Time handling is not a random parameter in the way the book lines are, so it doesn't "corrupt" the testing in the same way.
It might be interesting to remove the time handling, to see which engine is the better analysis engine.
It might also be interesting to use Nunn-type positions only from the late middlegame, to see which engine is the better endgame player, etc.
The right way is to figure out what it is you want to know; _then_ you sit down and design a test for that.
-S.
Sune Fischer
 

Re: Ruffian Comparison

Postby Robert Allgeuer » 04 Mar 2004, 19:16

In reply to: Re: Ruffian Comparison, posted by Sune Fischer at 04 March 2004 18:11:28:
It really depends on what you want to test. What counts for the user in the end is how strong the complete package is, including everything, and that you can test only by testing the _complete_ package.
If one is interested in other aspects - and this is also interesting in its own right - one can and should specifically test the search, pondering, different books, etc. However, I do not understand why testing the commercial release of an engine with its book should be "a bad idea"; if Ruffian had a crappy book that spoiled everything, that would still be a matter of concern and should not be ignored - especially when one also wants to compare against the "Leiden _package_", which has a different book ...
Robert
Robert Allgeuer
 

Re: Ruffian Comparison

Postby Sune Fischer » 04 Mar 2004, 20:09

In reply to: Re: Ruffian Comparison, posted by Robert Allgeuer at 04 March 2004 19:16:10:
If you test a new version with a new book, how do you conclude whether the new engine is better or worse in itself?
I thought that was what you wanted to know.
If you test the engines and books separately, then you can combine the best engine with the best book and get the strongest package.
That, and also because testing with books requires more games.
-S.
Sune Fischer
 

Re: Ruffian Comparison

Postby Robert Allgeuer » 04 Mar 2004, 20:23

In reply to: Re: Ruffian Comparison, posted by Sune Fischer at 04 March 2004 20:09:22:
If you test the engines and books separately, then you can combine the best engine with the best book and get the strongest package.
Not necessarily; besides, it means double the testing effort, which is not needed when you just want to determine which package is strongest.
Robert
Robert Allgeuer
 

