Now, I'm going to use SymPy to generate a function which will be evaluated with NumPy:
```python
import dill
from adaptive import Learner1D
from adaptive.runner import simple
from sympy import *

var("x")
f = im(sqrt(-x))
# func is going to use NumPy
func = lambdify(x, f)

def goal(learner):
    return learner.loss() < 0.1

learner = Learner1D(func, bounds=(-1, 1))

# round-trip the learner through dill before running it
learner_bytes = dill.dumps(learner)
learner_loaded = dill.loads(learner_bytes)

simple(learner_loaded, goal)
learner_loaded.plot()
```
As you can see, the last evaluation point (the right boundary) has the wrong sign. Even if I change the boundaries, the last point always has the wrong sign. Why does this happen? Can I fix it with some option in the learner/runner, or is it an internal bug?
Interesting: NumPy handles (complex number) branch cuts differently depending on the input data type. For example, `f_numpy(1)` and `f_numpy(1.0)` output `1.0` and `-1.0` respectively.

It appears that there is no bug in adaptive; thanks for pointing me in the right direction. All I need to do is use float boundaries. I'm closing the issue.
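The sign flip comes from which side of the branch cut the input lands on: the principal square root has its cut along the negative real axis, and the sign of the (zero) imaginary part decides the sign of the result. A minimal sketch of the effect, independent of SymPy and adaptive (this is my own illustration using Python's `cmath`, which follows the same C99 branch-cut conventions as NumPy; `f_numpy` itself is not reproduced here):

```python
import cmath

import numpy as np

# The principal sqrt has its branch cut on the negative real axis.
# The sign of the (zero) imaginary part picks a side of the cut,
# and therefore the sign of the result.
print(cmath.sqrt(complex(-1.0, 0.0)))   # approached from above the cut: +1j
print(cmath.sqrt(complex(-1.0, -0.0)))  # approached from below the cut: -1j

# NumPy follows the same convention for complex input...
print(np.sqrt(-1 + 0j))                 # 1j
# ...but stays in the real domain for real (float) input:
with np.errstate(invalid="ignore"):
    print(np.sqrt(-1.0))                # nan
```

So a function built from complex primitives can legitimately return different signs for inputs that compare equal but differ in type or in the sign of a zero component.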