Target function returns NaN #435
Comments
You probably need a custom loss function where, if a NaN is returned, you set the loss of that region to 0, meaning it won't be picked again. Which learner are you using?
Okay, sounds good, I'll try that. I'm using LearnerND.
OK, then try something like:

import numpy as np
import adaptive

def custom_loss(simplex, values, value_scale):
    # Give simplices that contain a NaN zero loss, so they won't be refined again.
    if any(np.isnan(v) for v in values):
        return 0.0
    return adaptive.learner.learnerND.default_loss(simplex, values, value_scale)

learner = adaptive.LearnerND(..., loss_per_simplex=custom_loss)

Note that I didn't test this 😅
Maybe you just want to make it less likely that something in that area gets chosen. You can do that with:

import numpy as np
import adaptive

def custom_loss(simplex, values, value_scale):
    # Reduce (rather than zero out) the loss of simplices that contain a NaN.
    loss = adaptive.learner.learnerND.default_loss(simplex, values, value_scale)
    if any(np.isnan(v) for v in values):
        return loss / 10
    return loss

learner = adaptive.LearnerND(..., loss_per_simplex=custom_loss)
That seems to work - thanks!
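For completeness, here is an untested end-to-end sketch that combines the suggestions above; noisy_target, its bounds, and the stopping goal are made up for illustration and stand in for a real optimizer-backed function:

import numpy as np
import adaptive

def noisy_target(xy):
    # Hypothetical target: pretend the inner optimizer fails near the origin.
    x, y = xy
    if x * x + y * y < 0.05:
        return np.nan
    return np.sin(3 * x) * np.cos(3 * y)

def custom_loss(simplex, values, value_scale):
    # Zero-loss variant from above: simplices containing a NaN are never refined again.
    if any(np.isnan(v) for v in values):
        return 0.0
    return adaptive.learner.learnerND.default_loss(simplex, values, value_scale)

learner = adaptive.LearnerND(
    noisy_target, bounds=[(-1, 1), (-1, 1)], loss_per_simplex=custom_loss
)
adaptive.runner.simple(learner, goal=lambda l: l.npoints > 200)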
I have a complex target function that involves a nonlinear optimization under the hood. The optimizer might fail. What should the target function return to indicate to adaptive that this region of the parameter space is no good? np.nan?
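For context, the kind of failure described here could be modeled with the following hypothetical sketch, where scipy.optimize.minimize stands in for the nonlinear optimization and the target returns np.nan whenever the fit does not converge:

import numpy as np
from scipy.optimize import minimize

def target(xy):
    # Hypothetical optimizer-backed target; the inner objective is a placeholder.
    x, y = xy
    result = minimize(lambda p: (p[0] - x) ** 2 + np.cos(p[0] * y), x0=[0.0])
    if not result.success:
        return np.nan  # flag this region as unusable
    return result.fun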