Learner1D could in some situations return -inf as loss improvement, which would make balancinglearner never choose to improve #35


Closed
basnijholt opened this issue Dec 19, 2018 · 0 comments

Comments

@basnijholt
Member

(original issue on GitLab)

opened by Jorn Hoofwijk (@Jorn) at 2018-09-20T13:50:40.821Z

The following discussion from gitlab:!99 should be addressed:

To show an example, try running:

import adaptive
import numpy as np
adaptive.notebook_extension()

# if we defined f1 to have some features in the interval (-1, 0), we would never see them using the BalancingLearner

def f1(x):
    return -1 if x <= 0.1 else 1

def f2(x):
    return x**2


l1 = adaptive.Learner1D(f1, (-1, 1))
l2 = adaptive.Learner1D(f2, (-1, 1))

# now let's create the BalancingLearner and do some balancing :D
bl = adaptive.BalancingLearner([l1, l2])
for i in range(1000):
    xs, _ = bl.ask(1)
    x, = xs
    y = bl.function(x)
    bl.tell(x, y)
    
asked = l1.ask(1, add_data=False)
print(f"l1 requested {asked}, but since loss_improvement is -inf, \n\tbalancinglearner will never choose this")
print(f"npoints: l1: {l1.npoints}, l2: {l2.npoints}, almost all points are added to l2")
print(f"loss():  l1: {l1.loss()}, l2: {l2.loss()}, the actual loss of l1 is much higher than l2.loss")

l1.plot() + l2.plot()

This will output:

l1 requested ([0.10000000000000009], [-inf]), but since loss_improvement is -inf, 
	balancinglearner will never choose this
npoints: l1: 53, l2: 947, almost all points are added to l2
loss():  l1: 1.0, l2: 0.003584776382870768, the actual loss of l1 is much higher than l2.loss

I also have a notebook here: bug_learner1d_infinite_loss_improvement.ipynb, which artificially reconstructs what is happening and gives a bit more insight into why it happens.

The reason this happens:

The interval is bigger than _dx_eps, so it has a finite loss associated with it. When asked to improve, the learner finds that dividing this interval creates two intervals that are smaller than _dx_eps, so it claims the loss_improvement is -inf. This results in the BalancingLearner always choosing another learner to add a point (this second bug is a consequence of the first).
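
For illustration, here is a minimal sketch of that mechanism. This is not the actual Learner1D source; DX_EPS, interval_loss and loss_improvement_of_split are made-up names standing in for the real internals:

import numpy as np

DX_EPS = 1e-9  # stand-in for Learner1D._dx_eps, the resolution cutoff

def interval_loss(a, b):
    # intervals narrower than the cutoff are treated as "done" and get loss -inf
    return (b - a) if (b - a) > DX_EPS else -np.inf

def loss_improvement_of_split(a, b):
    # splitting (a, b) at its midpoint yields two subintervals; report the
    # largest loss among them as the improvement the new point would bring
    mid = (a + b) / 2
    return max(interval_loss(a, mid), interval_loss(mid, b))

a, b = 0.1, 0.1 + 1.5 * DX_EPS
print(interval_loss(a, b))               # finite: the parent is wider than DX_EPS
print(loss_improvement_of_split(a, b))   # -inf: both halves are narrower than DX_EPS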

The problem with the BalancingLearner can be solved by solving gitlab:#103; however, the underlying issue (returning -inf as loss_improvement) would then still not really be solved, although one would almost never notice it anymore.
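
As an illustration of what addressing the underlying issue could look like (this is only a sketch, not what gitlab:#103 proposes; finite_loss_improvements is a hypothetical helper), the reported improvements could be clamped to a finite floor before the BalancingLearner compares learners:

import numpy as np

def finite_loss_improvements(loss_improvements, floor=0.0):
    # hypothetical helper: replace -inf (or any non-finite) improvement with a
    # finite floor so comparisons between learners stay meaningful
    return [li if np.isfinite(li) else floor for li in loss_improvements]

print(finite_loss_improvements([0.5, -np.inf, 0.003]))  # [0.5, 0.0, 0.003]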
