Moderator: Andres Valverde
Thanks Dieter. You are right: matrix inversion is needed, not diagonalization. In fact, it is explained in Hunter's paper:
Dieter Bürßner wrote: Rémi, your work looks very interesting. BTW, years back, when I worked on fitting problems, I used matrix inversion to calculate the covariance matrix. But I have forgotten most of the details. I remember that some fitting problems were very demanding (in a numerical sense) and easy routines (like the rgaussi Dann mentioned, which was actually written by me, more or less as a toy in a contest-like situation) would not work well.
He discusses this further at the end of the paper, if you are interested.
David Hunter wrote: Obtaining standard error estimates for parameters in a Bradley-Terry model is easy in principle; the inverse of the Hessian matrix of the log-likelihood (or, alternatively, the inverse of the Fisher information matrix) evaluated at the MLE gives an asymptotic approximation to the covariance matrix of the MLE.
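As a sketch of that recipe in Python (the toy likelihood, data, and function names here are mine, not from the paper): evaluate the Hessian of the negative log-likelihood at the MLE, invert it, and take square roots of the diagonal to get standard errors.

```python
import numpy as np

def neg_log_likelihood(theta, wins, games):
    # Toy stand-in for any smooth log-likelihood: independent
    # binomial observations with success probability sigmoid(theta_i).
    p = 1.0 / (1.0 + np.exp(-theta))
    return -np.sum(wins * np.log(p) + (games - wins) * np.log(1.0 - p))

def numerical_hessian(f, x, h=1e-4):
    # Central finite differences; an analytic Hessian is preferable
    # in real use, this is just a sketch.
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

wins = np.array([30.0, 45.0])
games = np.array([50.0, 60.0])
theta_hat = np.log(wins / (games - wins))   # MLE: sigmoid(theta_i) = wins_i/games_i
H = numerical_hessian(lambda t: neg_log_likelihood(t, wins, games), theta_hat)
cov = np.linalg.inv(H)                      # asymptotic covariance of the MLE
std_err = np.sqrt(np.diag(cov))             # standard error of each parameter
```

For this binomial toy model the exact answer is 1/sqrt(n p (1-p)) per parameter, which the finite-difference Hessian reproduces closely.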
Dann Corbit wrote: I am curious about the problem that needs the eigenvalues/eigenvectors.
Are you fitting a curve? (It seems vaguely to me that that is what you are solving, from what I have read in your post and Dieter's follow-ups.) Why not use the Marquardt-Levenberg algorithm? The implementation in GnuPlot has been changed to public domain, BTW.
Maybe I just don't understand the issues at hand.
Rémi Coulom wrote:
Dann Corbit wrote: I am curious about the problem that needs the eigenvalues/eigenvectors.
They are not needed. I had guessed wrong that they would be necessary.
Dann Corbit wrote: Are you fitting a curve? Why not use the Marquardt-Levenberg algorithm? The implementation in GnuPlot has been changed to public domain, BTW.
I am indeed fitting the Bradley-Terry model. The maximization of the likelihood could be done with any gradient-ascent algorithm, such as conjugate gradient or Levenberg-Marquardt, as you suggest. But there is a much better approach called "minorization-maximization" that works well and that I have already implemented. The problem now is to estimate confidence intervals. That is why I need the inverse of the Hessian of the log-likelihood.
Dann Corbit wrote: Maybe I just don't understand the issues at hand.
If you'd like to understand, the whole theory is very well explained in Hunter's paper (the link is at the bottom of my web page). The paper may look a little intimidating at first, but its most important parts are understandable by anyone who knows what a probability and a logarithm are, I think (and how to solve a second-order polynomial). To really understand the article, it is necessary to make a few calculations on paper, but they are not difficult.
Rémi
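The minorization-maximization iteration mentioned above fits in a few lines. This is a minimal sketch (the results matrix and iteration count are made up), where w[i, j] counts the wins of player i against player j and the strengths gamma are renormalized each pass:

```python
import numpy as np

def bradley_terry_mm(w, iters=200):
    """Fit Bradley-Terry strengths by minorization-maximization."""
    n = w.shape[0]
    N = w + w.T                  # games played between each pair
    W = w.sum(axis=1)            # total wins of each player
    gamma = np.ones(n)           # strength parameters, gamma_i > 0
    for _ in range(iters):
        # MM update: gamma_i <- W_i / sum_{j != i} N_ij / (gamma_i + gamma_j)
        denom = N / (gamma[:, None] + gamma[None, :])
        np.fill_diagonal(denom, 0.0)
        gamma = W / denom.sum(axis=1)
        gamma /= gamma.sum()     # fix the arbitrary scale of the model
    return gamma

# Made-up round-robin results: player 0 strongest, player 2 weakest.
w = np.array([[0, 7, 9],
              [3, 0, 6],
              [1, 4, 0]], dtype=float)
gamma = bradley_terry_mm(w)
```

Each update is guaranteed not to decrease the likelihood, which is what makes the MM approach so robust compared with generic gradient methods; convergence requires that the comparison graph be strongly connected (every player has at least one win and one loss path to every other).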
Dann Corbit wrote: Actually, the confidence and prediction intervals are not difficult to produce if you can produce an inverse to the function you are fitting.
Dann Corbit wrote: Here is a logarithmic curve fit with prediction and confidence intervals that I wrote 20 years ago (so forgive the mess -- I've learned how to program since then):
[...]
C FIND LEAST SQUARES LINE FIT
[...]
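Dann's Fortran is elided above, so here is a hedged sketch of the same textbook computation in Python (the data, variable names, and hard-coded t quantile are mine): a least-squares line fit with confidence and prediction intervals at a new abscissa x0.

```python
import numpy as np

def line_fit_with_intervals(x, y, x0, t_crit):
    # Ordinary least-squares line y = a + b*x, plus confidence and
    # prediction intervals at x0 (t_crit = t quantile with n-2 df).
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    sxx = np.sum((x - xbar) ** 2)
    b = np.sum((x - xbar) * (y - ybar)) / sxx
    a = ybar - b * xbar
    resid = y - (a + b * x)
    s = np.sqrt(np.sum(resid ** 2) / (n - 2))   # residual standard error
    yhat = a + b * x0
    half_ci = t_crit * s * np.sqrt(1.0 / n + (x0 - xbar) ** 2 / sxx)
    half_pi = t_crit * s * np.sqrt(1.0 + 1.0 / n + (x0 - xbar) ** 2 / sxx)
    return yhat, (yhat - half_ci, yhat + half_ci), (yhat - half_pi, yhat + half_pi)

x = np.arange(10, dtype=float)
y = 2.0 + 0.5 * x + np.array([0.1, -0.2, 0.05, 0.0, -0.1,
                              0.2, -0.05, 0.1, -0.15, 0.05])
# 2.306 is the 97.5% Student-t quantile with 8 degrees of freedom.
yhat, ci, pi = line_fit_with_intervals(x, y, x0=5.0, t_crit=2.306)
```

The prediction interval is always wider than the confidence interval, because it accounts for the noise in a single new observation on top of the uncertainty in the fitted mean.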
Rémi Coulom wrote: You are right: matrix inversion is needed, not diagonalization.
Dieter Bürßner wrote:
Rémi Coulom wrote: You are right: matrix inversion is needed, not diagonalization.
Most problems that seem to need matrix inversion don't really need a full inverse. Often a triangular factorization is enough (for example, to solve x*A = b, mathematicians like to write the solution with the inverse of A, but one does not really need the full inverse). In fitting problems, my experience has been that the matrix (when there are many parameters) is often close to linearly dependent, giving the typical Gauss-Jordan method a hard time, even with full pivoting. I remember that the more sophisticated routines used SVD decomposition or QR factorization.
Regards,
Dieter
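Dieter's point can be illustrated in Python with NumPy (the matrices here are made up): solve the linear system through a factorization instead of forming the inverse, and fall back on an SVD-based least-squares routine when the columns are nearly linearly dependent.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
b = rng.standard_normal(5)

# Solve A x = b through an LU factorization, without forming inv(A).
x = np.linalg.solve(A, b)
x_via_inverse = np.linalg.inv(A) @ b   # same answer, but wasteful and less stable

# Nearly linearly dependent columns, as in ill-conditioned fitting problems:
B = A.copy()
B[:, 4] = B[:, 3] + 1e-10 * rng.standard_normal(5)
# SVD-based least squares: singular values below rcond * largest are
# treated as zero, which tames the near rank deficiency.
x_ls, *_ = np.linalg.lstsq(B, b, rcond=1e-8)
```

Gauss-Jordan with full pivoting still produces enormous, noise-dominated components in the near-singular case; truncating the small singular values trades a little bias for a solution of reasonable size.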
Dieter Bürßner wrote:
Dann Corbit wrote: Here is a logarithmic curve fit with prediction and confidence intervals that I wrote 20 years ago (so forgive the mess -- I've learned how to program since then):
[...]
C FIND LEAST SQUARES LINE FIT
[...]
Interesting, Dann. Around the same time I took a course, "Computer in der Chemie," at university. One of my first programs was a least-squares line fit. It was in Fortran IV on a PDP-11 (which was not running Unix; I forget the name of the OS). BTW, is your program standard Fortran? I have forgotten almost everything about Fortran by now. Does it really have $INCLUDE, $nofloatcalls (not even capitals), $PAGESIZE, $STORAGE, $DEBUG? Also note that the formatting of lines (which, as you know, is of special importance in older Fortran) does not display correctly when pasting. Using "code" and "/code" in square brackets instead of quote tags will preserve it.
Regards,
Dieter
Rémi Coulom wrote: Another exciting idea is multi-dimensional Bradley-Terry:
http://www.agro-montpellier.fr/sfds/CD/ ... usson1.pdf
I could not find an English version available online, although one should appear soon. The paper has an English abstract anyway, and the formulas are international (and Dann speaks French very well).
{snip}
Rémi
Reinhard Scharnagl wrote: To the posters in this thread: are you convinced that the thread title still corresponds to the thread content here?
Reinhard.
Dann Corbit wrote: On a file with 3.3 million PGN games (stripped of all comments and of all PGN tags beyond the required 7 tags):
C:\eco>eco -7 -s -c -C jb.pgn -oclean.pgn
Games: 3312660
Only 27588 games are actually processed:
27588 games parsed
Then most are removed during the Elo calculation:
ResultSet>elo
23907 player(s) removed
ResultSet-EloRating>
Is the source code available?
Rémi Coulom wrote: [snip]
Do you know if there is a utility to ease the process of merging different spellings and fixing such PGNs? I have been thinking about how it could be done. Maybe I will write a tool. If one does not exist, it would be very convenient for many people, I guess.
Rémi
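As a sketch of how such a tool could start (the alias table, names, and sample PGN are made up; this is not an existing utility): map known spelling variants of a player name onto one canonical form in the White and Black tag pairs.

```python
import re

# Hypothetical alias table (made up): variant spelling -> canonical name.
ALIASES = {
    "Kasparov,G": "Kasparov, Garry",
    "Kasparov, G.": "Kasparov, Garry",
    "G. Kasparov": "Kasparov, Garry",
}

TAG_RE = re.compile(r'\[(White|Black) "([^"]*)"\]')

def normalize_names(pgn_text):
    # Rewrite White/Black tag pairs whose value has a known canonical form.
    def repl(m):
        tag, name = m.group(1), m.group(2)
        return f'[{tag} "{ALIASES.get(name, name)}"]'
    return TAG_RE.sub(repl, pgn_text)

sample = '[White "Kasparov,G"]\n[Black "Deep Blue"]\n\n1. e4 c5 *\n'
cleaned = normalize_names(sample)
```

The hard part in practice is building the alias table itself; a real tool would probably cluster near-duplicate names (e.g. by edit distance) and ask the user to confirm each merge rather than rely on a hand-written dictionary.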