Add MLE/MAP functionality #1230
Conversation
I suppose that for many data points,
Yeah, the MLE estimate here is basically the mean.
Honestly, I had no clue what you meant with standard errors and tstat before I checked the code. I would have expected it to just return the mode, either as a named tuple or a VarInfo object, possibly together with the other results from the optimizer. Maybe one could have a separate function for these diagnostics?
No, this is almost completely counter to how these analyses are supposed to work. Using an MLE/MAP estimate with no information on the dispersion of that estimator is next to useless, akin to running OLS without standard errors, or to using just the median or whatever from a full posterior. I think if you are the kind of person who works in the MLE world, it's an absolute requirement to have statistics computed from the information matrix. Plus, you can use it to run hypothesis tests. Granted, perhaps this should be separated out, such that you do something like

est = MLE(model)
info_matrix = information_matrix(est)

or something equivalent. But I would prefer to hew to how OLS works, where the estimator covariance matrix is returned simultaneously with the point estimates.
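For context, the diagnostics being argued for here, standard errors and t-statistics derived from the observed information matrix, look roughly like the sketch below. The function `mle_diagnostics`, the log-likelihood `loglik`, and the use of ForwardDiff are illustrative assumptions, not the API this PR actually implements.

```julia
using ForwardDiff, LinearAlgebra

# Sketch: at the optimum θ̂, the observed information matrix is the negative
# Hessian of the log-likelihood; its inverse approximates the covariance of
# the estimator, from which standard errors and t-statistics follow.
function mle_diagnostics(loglik, θ̂)
    H = ForwardDiff.hessian(loglik, θ̂)
    info = -H                         # observed information matrix
    vcov = inv(info)                  # estimator covariance matrix
    se = sqrt.(diag(vcov))            # standard errors
    tstat = θ̂ ./ se                   # t-statistics against zero
    return (estimate = θ̂, stderror = se, tstat = tstat, vcov = vcov)
end
```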
Co-Authored-By: David Widmann <[email protected]>
Alright, tests are passing at the moment. Further comments or things you don't think have been addressed yet?
I just left one small comment. The named tuple initialization can be handled in a separate PR I think. The same needs to be done for sampling as well. Really great work @cpfiffer! I am really happy we finally have MAP and MLE out of the box.
Aight, I think it's finally good to go. 🎆
@init @require Optim="429524aa-4258-5aef-a3af-852621145aeb" @eval begin
    include("modes/ModeEstimation.jl")
    export MAP, MLE, optimize
end
Just noticed: users will have problems if they load Optim but not NamedArrays. We should add a nested @require block that checks for NamedArrays if we want to keep it optional (which I guess we want). Otherwise, one could think about returning a named tuple as default and just providing some optional way of converting it to a NamedArray.
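Something along these lines could work; this is only a rough, untested sketch of the nested @require, and the NamedArrays UUID should be double-checked against its Project.toml:

```julia
# Sketch: only load the mode-estimation code once both Optim and NamedArrays
# are available, keeping both dependencies optional via Requires.jl.
@init @require Optim="429524aa-4258-5aef-a3af-852621145aeb" begin
    @require NamedArrays="86f7a689-2022-50b4-a561-43c23ac3c673" @eval begin
        include("modes/ModeEstimation.jl")
        export MAP, MLE, optimize
    end
end
```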
I think I'd actually prefer to make
Also, the VI part would benefit from using
This PR adds MLE and MAP functions to generate point estimates. I have not yet written tests, so this is still a WIP.
Usage:
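Roughly along these lines, using the exported MLE, MAP, and optimize; the model and the exact call signature below are illustrative, not necessarily what the final version looks like:

```julia
using Turing, Optim

# A small illustrative model (the standard gdemo-style example).
@model function gdemo(x)
    s ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s))
    for i in eachindex(x)
        x[i] ~ Normal(m, sqrt(s))
    end
end

model = gdemo([1.5, 2.0, 2.3, 0.7])

# Maximum likelihood estimate.
mle_estimate = optimize(model, MLE())

# Maximum a posteriori estimate.
map_estimate = optimize(model, MAP())
```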
MLE results:
MAP results: