ELOStat algorithm ?

Programming Topics (Computer Chess) and technical aspects as test techniques, book building, program tuning etc

Moderator: Andres Valverde

Re: ELOStat algorithm ?

Postby Josh Menke » 03 May 2006, 13:25

Rémi Coulom wrote:Elos are the logarithms of gammas, not the reverse. The gammas are on an exponential scale, and should be multiplied. This works like the home-field advantage in Hunter's paper: it is multiplied by the strength of the player, not added to it.


Right, since Elos are the logarithms of gammas, the elos are e^gamma, which means a product of them would be e^(gamma_1)*e^(gamma_2)*e^(gamma_3) = e^(gamma_1+gamma_2+gamma_3). This is the model I use. I don't use Huang et al.'s model. But as I was saying, this is a model decision, not something inherent in the algorithm.

As for home field advantage, in Hunter's paper it's multiplied since it's not exponentiated. If it were exponentiated, you'd have e^advantage*e^gamma = e^(advantage+gamma).

I think you and I are saying the same thing and it's just Huang who is different.

As a side note, there is a full Bayesian chess ratings generalized linear model in Bayesian Data Analysis by Gelman, Carlin, Stern, and Rubin, in 16.7, page 432.

In this model, which is probably very similar to yours,

Pr(i defeats j | gamma) = e^(gamma_i) / (e^(gamma_i) + e^(gamma_j) + e^(delta + 0.5*(gamma_i + gamma_j + alpha)))

where alpha is the color advantage and delta determines the likelihood of a draw.
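To make the three-outcome model concrete, here is a small sketch (my own illustration, not code from the book): it evaluates the win, draw, and loss probabilities from the formula above, with arbitrary example values for the gammas, alpha, and delta.

```python
import math

def outcome_probs(gamma_i, gamma_j, alpha=0.0, delta=-0.5):
    """Win/draw/loss probabilities under the draw model quoted above.

    gamma_i, gamma_j: log-strengths of the two players.
    alpha: color advantage; delta: draw propensity (example values).
    """
    win = math.exp(gamma_i)
    loss = math.exp(gamma_j)
    draw = math.exp(delta + 0.5 * (gamma_i + gamma_j + alpha))
    z = win + draw + loss
    return win / z, draw / z, loss / z

p_win, p_draw, p_loss = outcome_probs(0.4, 0.1, alpha=0.1)
```

The three numerators share one denominator, so the probabilities sum to one, and a larger delta makes draws more likely.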

In the model I have in mind, the gamma of a team is the product of the gammas of its members. MM makes it possible to find the maximum-likelihood gamma for every individual player. It is not a problem at all if a particular team plays very few matches.


Yes, but again I'd have to store each game and who played in it, forever. I cannot do this at production time. I can only store each player's rating and maybe their rating's variance. But this is a bit off-topic here, since you don't have these problems. If you're interested in working on this some more, let me know and we can do the paper together. Especially if there's a way to do a memory-less MM (which I doubt).

It adds a quadratic term to the log likelihood. So the maximization phase of MM has to solve a second-order polynomial. This means you'll have to compute a square root at each iteration, instead of a simple division.


OK, I can see that. It's interesting that you simulate draws. One of my early models does something similar. In my application, we have maps that the teams play on, and each map has an advantage score for each side of the map. That score is determined from the wins and losses per team on that map. We've found that, to stabilize the results, it's effective to assume both teams have won an equal number of times, somewhere around 10-50.

Again, with expectation propagation you have a very fast, memory-less, and naturally time-varying algorithm. The nice thing with TrueSkill is the variance is estimated explicitly instead of calculated afterwards.


The fundamental flaw of this approach is that it is purely forward in time. Models that keep the history of games are better. Some have been used in practice on large-scale game servers (the KGS game server, as well as IGS, use such a method, as far as I know).


Yes, keeping the history will always be better, but for situations like Microsoft's, dealing with millions of matches, and even my tens of thousands of matches, it's not practical.

Thanks.
--josh
Josh Menke
 
Posts: 8
Joined: 02 May 2006, 19:15

Re: ELOStat algorithm ?

Postby Rémi Coulom » 03 May 2006, 15:34

josh wrote:
Rémi Coulom wrote:Elos are the logarithms of gammas, not the reverse. The gammas are on an exponential scale, and should be multiplied. This works like the home-field advantage in Hunter's paper: it is multiplied by the strength of the player, not added to it.


Right, since Elos are the logarithms of gammas, the elos are e^gamma, which means a product of them would be e^(gamma_1)*e^(gamma_2)*e^(gamma_3) = e^(gamma_1+gamma_2+gamma_3). This is the model I use. I don't use Huang et al.'s model. But as I was saying, this is a model decision, not something inherent in the algorithm.


What you write still looks like you got it in reverse. Elos are not e^gamma. The exact formula is Elo=400*log(gamma)/log(10). This being said, I admit that adding gammas may make sense. As you write, it is a model decision.
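In code, the conversion Rémi gives (and the familiar expected-score interpretation) looks like this, as a minimal sketch:

```python
import math

def elo_from_gamma(gamma):
    # Elo = 400 * log10(gamma)
    return 400.0 * math.log10(gamma)

def gamma_from_elo(elo):
    return 10.0 ** (elo / 400.0)

# A 200-point Elo gap gives a gamma ratio of 10**0.5 (about 3.16),
# i.e. an expected score near 0.76 for the stronger player:
ga, gb = gamma_from_elo(200.0), gamma_from_elo(0.0)
expected = ga / (ga + gb)
```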

Yes, keeping the history will always be better, but for situations like Microsoft's, dealing with millions of matches, and even my tens of thousands of matches, it's not practical.


I believe it is. The KGS server does it, and it has millions of games. The key idea is that adding one match result to this huge amount of results does not change the optimal gammas much. So computation starts already very close to the optimal parameters. A couple of local iterations of MM can be done extremely quickly. I do not know exactly how KGS does it, but it does it, and it does not surprise me.

I believe IGS uses http://www.pem.nu/mlrate/. Ratings there are updated only once a day, probably because of the computational cost of the update. The algorithm used is not MM, and is extremely inefficient. If you want instantaneous updates, I am sure you can do it with a few iterations of MM, to update only the ratings of the players that have just finished a game.

Rémi
Rémi Coulom
 
Posts: 96
Joined: 12 Nov 2004, 13:47
Location: Lille, France

Re: ELOStat algorithm ?

Postby Josh Menke » 03 May 2006, 16:05

Rémi Coulom wrote:A couple local iterations of MM can be done extremely quickly.

If you want instantaneous updates, I am sure you can do it with a few iterations of MM, to update only the ratings of the players that have just finished a game.


This sounds very interesting. I didn't think you could do that with MM since the latest performance of the current players would change the estimates in previous matches. I guess in the limit you could assume the difference is so small it doesn't matter.

To do this, would you then only iterate through the players who just played, updating only their ratings (gammas)? If so, would you still measure the likelihood with respect to ALL the matches? Or can you somehow do it based on only the most recent match? Or even a sliding window of the last N matches?

If you did something like this, it seems like you would get time-varying information since you aren't going back to change the ratings of players who did not play in the latest game, but have played against these players, etc.

How would you do this with MM? Should I just read Hunter's paper again?
--josh
Josh Menke
 
Posts: 8
Joined: 02 May 2006, 19:15

Re: ELOStat algorithm ?

Postby Josh Menke » 03 May 2006, 17:17

Read the previous post first, and then:

Rémi Coulom wrote:What you write still looks like you got it in reverse. Elos are not e^gamma. The exact formula is Elo=400*log(gamma)/log(10). This being said, I admit that adding gammas may make sense. As you write, it is a model decision.


I just realized why I'm getting this backwards for you. Using the terminology in Hunter's paper, I think of the betas as the ratings and don't worry about the gammas. So a team's rating would be e^(beta_1 + beta_2 + ... + beta_|T|), where gamma_i = e^(beta_i).

That is, beta_i = ln(gamma_i).
--josh
Josh Menke
 
Posts: 8
Joined: 02 May 2006, 19:15

Re: ELOStat algorithm ?

Postby Rémi Coulom » 03 May 2006, 17:44

josh wrote:
Rémi Coulom wrote:A couple local iterations of MM can be done extremely quickly.

If you want instantaneous updates, I am sure you can do it with a few iterations of MM, to update only the ratings of the players that have just finished a game.


This sounds very interesting. I didn't think you could do that with MM since the latest performance of the current players would change the estimates in previous matches. I guess in the limit you could assume the difference is so small it doesn't matter.

To do this, would you then only iterate through the players who just played, updating only their ratings (gammas)? If so, would you still measure the likelihood with respect to ALL the matches? Or can you somehow do it based on only the most recent match? Or even a sliding window of the last N matches.


OK, I'll explain a little more, because I am not really sure what is clear and what is not clear to you. We have N players. Their strengths are gamma_1, ..., gamma_N. We want to compute the gammas that maximize a function L(gamma_1, ..., gamma_N). L depends on the set of game results.

The MM algorithm consists in updating one gamma at a time. This update is guaranteed to increase the value of L. You don't have to update all the gammas at the same time.

Updating one gamma only requires looping over the games of the corresponding player. There is no need to consider the whole database of games.

If a new game is added, it will change the L function. It is likely that the gammas most affected by this new result are those of the players who participated in the game. So applying the MM algorithm to these players will quickly update their ratings to reasonable values.

The new game will also affect the optimal gammas of the other players, albeit less, so such a quick update is imperfect. A simple way to fix the imperfection would be to run some iterations of the MM algorithm for every player from time to time, for instance once every hour or once every day.

Also, I have the feeling you may be overestimating the complexity of the MM algorithm. My experience with bayeselo on my 3.4 GHz Pentium IV is that it can completely compute the gammas from scratch, on databases with tens of thousands of games and hundreds of players, in about one second.
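A minimal sketch of the local update being described, for the plain Bradley-Terry case without draws (bayeselo's actual model also handles draws and the color advantage, so this is only the skeleton):

```python
def mm_update(player, games, gammas):
    """One MM step for a single player's gamma (Bradley-Terry, no draws).

    games:  (winner, loser) pairs involving `player`
    gammas: dict mapping player name -> current gamma
    Only this player's games are visited, not the whole database.
    """
    wins = sum(1 for w, _ in games if w == player)
    denom = 0.0
    for w, l in games:
        opp = l if w == player else w
        denom += 1.0 / (gammas[player] + gammas[opp])
    gammas[player] = wins / denom

gammas = {"A": 1.0, "B": 1.0}
games = [("A", "B"), ("A", "B"), ("B", "A")]  # A won twice, lost once
mm_update("A", games, gammas)  # gamma_A becomes 2 / (3 * 0.5) = 4/3
```

After a new game arrives, you would run this a couple of times for each participant, as described above. Note the gammas are only defined up to a common scale factor, and this bare update sends a player with zero wins to gamma = 0, which is why real implementations add prior (virtual) games.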

How would you do this with MM? Should I just read Hunter's paper again?

If you are not sure you understand how MM works, then you should. I do not know of any better description of the algorithm.

Rémi
Rémi Coulom
 
Posts: 96
Joined: 12 Nov 2004, 13:47
Location: Lille, France

Re: ELOStat algorithm ?

Postby Josh Menke » 03 May 2006, 22:10

The problems are: 1) I can't keep a history, I just don't have the memory for it; and 2) one second is far too long, since I have only 50 ms to perform the update in my environment.

If it takes MM an entire second with only 10,000 matches and 100 players, it's going to take far too long with my 10,000+ matches and 20,000 players.

So basically, I can't use MM. It's too inefficient in terms of both space and time.

Bottou and LeCun use a provably optimal online method for optimizing a loss function in Léon Bottou and Yann LeCun, "Large-Scale On-Line Learning," in Advances in Neural Information Processing Systems 15, 2004. I'm going to try this approach because it also gives me an approximation of the Hessian and uses a memory-less update. It's basically backpropagation with an efficient way to approximate the Hessian.
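For what it's worth, a generic memory-less stochastic-gradient step on the Bradley-Terry log-likelihood looks like the sketch below. This is my own illustration of the idea, not the Bottou/LeCun method itself (their contribution is the efficient Hessian approximation used to set the step sizes):

```python
import math

def online_update(r_winner, r_loser, lr=0.05):
    """One SGD step on the Bradley-Terry log-likelihood of a single match.

    r_winner, r_loser are log-strengths (the betas); lr is the step size.
    Only the two ratings are stored -- no game history is kept.
    """
    p_win = 1.0 / (1.0 + math.exp(r_loser - r_winner))
    grad = 1.0 - p_win  # d(log-likelihood) / d(r_winner)
    return r_winner + lr * grad, r_loser - lr * grad

w, l = online_update(0.0, 0.0)  # equal ratings: each moves by lr * 0.5
```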
--josh
Josh Menke
 
Posts: 8
Joined: 02 May 2006, 19:15

Re: ELOStat algorithm ?

Postby Rémi Coulom » 04 May 2006, 12:54

josh wrote:The problems are, 1) I can't keep a history, I just don't have the memory for it


What kind of hardware do you have? It is difficult for me to imagine that you could, at the same time, need more than one kilobyte to store one game result and have less than 10 MB of RAM. I am currently running experiments with a history of 2.5 million games, and it requires "only" 250 MB of memory.

2) one second is far too long. I have 50 ms to perform the update in my environment.


The method I suggested would be much faster than this. One second is for a complete computation from scratch, that is to say more than 600 iterations of MM for all the players. You can run a couple iterations for each player involved in a match, and that will do.

Rémi
Rémi Coulom
 
Posts: 96
Joined: 12 Nov 2004, 13:47
Location: Lille, France

Re: ELOStat algorithm ?

Postby Josh Menke » 12 May 2006, 18:40

Rémi Coulom wrote:What kind of hardware do you have? It is difficult for me to imagine that you could, at the same time, need more than one kilobyte to store one game result and have less than 10 MB of RAM. I am currently running experiments with a history of 2.5 million games, and it requires "only" 250 MB of memory.


It's not so much a question of hardware as the fact that the system is built into the game. We don't want to add TOO many extra structures to the game's memory. Tracking a player's rating and variance is reasonable; beyond that it gets messy.

The method I suggested would be much faster than this. One second is for a complete computation from scratch, that is to say more than 600 iterations of MM for all the players. You can run a couple iterations for each player involved in a match, and that will do.


Yes, that's right, you mentioned that. I think I will have to at least try it offline for comparison.

On the same note, I have developed an online updating method for finding the posterior mode of my original hierarchical Bayesian model. I approximate the Hessian using a running "leaky" average, and from it I am able to extract player variances along with prediction intervals on the matches. It's a gradient-descent algorithm not unlike standard neural network training, but with Bayesian priors, etc. It gives better accuracy than my previous model, which was essentially maximum likelihood, without approximating the Hessian at all.

It's nice because it automatically tracks time-varying player strengths AND time-varying player variances (since the average is "leaky").
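The "leaky" average itself is just an exponentially-decaying running mean; a one-line sketch (the decay constant is an arbitrary example):

```python
def leaky_update(running, observed, decay=0.99):
    """Exponentially-decaying running average: old observations are
    discounted by `decay` at each step, so the estimate can track
    time-varying quantities like a player's strength or variance."""
    return decay * running + (1.0 - decay) * observed
```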

Does elostat find the maximum likelihood estimate? Or does it find the posterior mode? It looked to me like it was just finding the maximum likelihood estimate. Was it a hierarchical model or a single prior / set of priors?

If I understand elostat right, you represent your prior by implying a number of draws between each pair of players.

In my approach, there are shrinkage terms in each update that pull player ratings and variances back towards the priors and hyperpriors, for example.
--josh
Josh Menke
 
Posts: 8
Joined: 02 May 2006, 19:15
