to show an example, try running:

```python
import adaptive
import numpy as np

adaptive.notebook_extension()

# if we were to define f1 to have some features in the interval (-1, 0),
# we would never see them using the BalancingLearner
def f1(x):
    return -1 if x <= 0.1 else 1

def f2(x):
    return x**2

l1 = adaptive.Learner1D(f1, (-1, 1))
l2 = adaptive.Learner1D(f2, (-1, 1))

# now let's create the BalancingLearner and do some balancing :D
bl = adaptive.BalancingLearner([l1, l2])
for i in range(1000):
    xs, _ = bl.ask(1)
    x, = xs
    y = bl.function(x)
    bl.tell(x, y)

asked = l1.ask(1, add_data=False)
print(f"l1 requested {asked}, but since loss_improvement is -inf, \n\tbalancinglearner will never choose this")
print(f"npoints: l1: {l1.npoints}, l2: {l2.npoints}, almost all points are added to l2")
print(f"loss(): l1: {l1.loss()}, l2: {l2.loss()}, the actual loss of l1 is much higher than l2.loss")
l1.plot() + l2.plot()
```
this will output:
```
l1 requested ([0.10000000000000009], [-inf]), but since loss_improvement is -inf,
	balancinglearner will never choose this
npoints: l1: 53, l2: 947, almost all points are added to l2
loss(): l1: 1.0, l2: 0.003584776382870768, the actual loss of l1 is much higher than l2.loss
```
The reason why this happens: the interval is bigger than `_dx_eps`, so it has a finite loss associated with it. When asked to improve, the learner finds that dividing this interval would create two intervals that are smaller than `_dx_eps`, so it claims the `loss_improvement` is `-inf`. As a result, the BalancingLearner always chooses another learner to add a point (this second bug is a consequence of the first one).

The problem with the BalancingLearner can be solved by solving gitlab:#103; however, the underlying issue (returning `-inf` as `loss_improvement`) would then still not really be solved, although one would almost never notice it anymore.
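The interaction of the two bugs can be illustrated with a minimal sketch. This is not adaptive's actual implementation; `_dx_eps`, `loss_improvement`, and `choose_learner` below are simplified stand-ins, assuming only that the balancing strategy picks the learner claiming the largest loss improvement:

```python
import math

_dx_eps = 0.02  # hypothetical minimum interval width (stands in for Learner1D._dx_eps)

def loss_improvement(interval_width):
    """Sketch of the buggy behavior: an interval wider than _dx_eps has a
    finite loss, but if splitting it in half would produce intervals
    narrower than _dx_eps, the claimed improvement is -inf."""
    if interval_width / 2 < _dx_eps:
        return -math.inf          # both halves would be "too small"
    return interval_width / 2     # otherwise the improvement is finite

def choose_learner(improvements):
    """BalancingLearner-style choice: the learner claiming the largest
    loss improvement wins, so a learner reporting -inf is never picked."""
    return max(range(len(improvements)), key=improvements.__getitem__)

imp_l1 = loss_improvement(0.03)  # splitting forbidden -> -inf, despite a real loss
imp_l2 = loss_improvement(0.5)   # finite improvement
print(choose_learner([imp_l1, imp_l2]))  # always picks l2 (index 1)
```

Any finite improvement, however small, beats `-inf`, which is why nearly all 1000 points end up in l2.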
(original issue on GitLab)
opened by Jorn Hoofwijk (@Jorn) at 2018-09-20T13:50:40.821Z
The discussion from gitlab:!99, shown above, should be addressed.
I also have a notebook here: bug_learner1d_infinite_loss_improvement.ipynb, which constructs what is happening artificially and explains in a bit more detail why this happens.