Add homogeneous sampling learner #131

Closed

jbweston opened this issue Dec 19, 2018 · 7 comments

Comments

@jbweston
Contributor

(original issue on GitLab)

opened by Anton Akhmerov (@anton-akhmerov) at 2017-07-26T17:17:15.043Z

This should be a very simple prototype that calculates f(x) by sampling the space homogeneously. It isn't adaptive in the sense that its point selection strategy does not depend on the values of f, but it can still be useful for interactive work (the user launches the learner and interrupts it manually when the result looks good).
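
For illustration, here is a minimal sketch (not adaptive's API; the function name and its arguments are made up) of what such a homogeneous strategy does: always bisect the currently largest gap, so that interrupting at any moment leaves the domain covered roughly uniformly.

import heapq

def homogeneous_points(a, b, n):
    # Yield n points in [a, b], largest-gap-first, so that any prefix of
    # the sequence covers the interval roughly uniformly.
    heap = [(-(b - a), a, b)]  # max-heap of (gap width, left, right)
    yield a
    yield b
    for _ in range(n - 2):
        _neg_gap, left, right = heapq.heappop(heap)
        mid = (left + right) / 2
        yield mid
        heapq.heappush(heap, (-(mid - left), left, mid))
        heapq.heappush(heap, (-(right - mid), mid, right))

# The user evaluates f at each point and simply stops iterating when the
# result looks good, e.g. ys = [f(x) for x in homogeneous_points(0, 1, 100)]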

@jbweston
Contributor Author

originally posted by Anton Akhmerov (@anton-akhmerov) at 2017-07-26T17:54:11.317Z on GitLab

This can be done by making the y-scaling strategy of the 1D learner user-controllable, allowing the contribution of the y-values to the loss to be reduced to effectively 0.
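
As a hedged sketch of that idea (using the same loss_per_interval hook that appears in the next comment; the factory and the y_weight parameter are illustrative, not an existing adaptive API):

from math import sqrt

def make_loss(y_weight=1.0):
    # Euclidean loss in the scaled (x, y) plane, with a user-controllable
    # weight on the y contribution; y_weight=0 gives homogeneous sampling.
    def loss(interval, scale, function_values):
        x_left, x_right = interval
        x_scale, y_scale = scale
        dx = (x_right - x_left) / x_scale
        dy = ((function_values[x_right] - function_values[x_left]) / y_scale
              if y_scale else 0)
        return sqrt(dx**2 + (y_weight * dy)**2)
    return loss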

@jbweston
Contributor Author

originally posted by Bas Nijholt (@basnijholt) at 2018-02-02T13:45:16.881Z on GitLab

I needed this just now for a Learner1D, and it can easily be realized with:

def loss_per_interval(interval, scale, function_values):
    # Ignore the function values entirely: the loss of an interval is just
    # its width normalized by the total x-range, so the learner always
    # subdivides the widest interval, i.e. it samples homogeneously.
    x_left, x_right = interval
    x_scale, _ = scale
    dx = (x_right - x_left) / x_scale
    return dx

learner = adaptive.Learner1D(f, bounds, loss_per_interval)
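
For completeness, a hypothetical, self-contained way to run it (the function, bounds, and stopping goal below are illustrative, not from the issue):

import adaptive
from adaptive.runner import simple

def f(x):
    return x**3  # any example function

learner = adaptive.Learner1D(f, bounds=(-1, 1),
                             loss_per_interval=loss_per_interval)
# Evaluate points synchronously until 100 points have been sampled;
# in a notebook one would typically use adaptive.Runner and interrupt by hand.
simple(learner, goal=lambda l: l.npoints >= 100)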

@jbweston
Contributor Author

originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-02-02T13:48:31.888Z on GitLab

Ah, cool! I imagine a similar thing would work in 2D, right?

@jbweston
Contributor Author

originally posted by Bas Nijholt (@basnijholt) at 2018-02-02T13:55:12.401Z on GitLab

Yup, this will work:

def loss_per_triangle(ip):
    # The loss of a triangle is the square root of its area, independent of
    # the function values, so the largest triangle is always subdivided
    # next and the domain is sampled homogeneously.
    import numpy as np
    from adaptive.learner.learner2D import areas
    A = areas(ip)
    return np.sqrt(A)

[Screenshot: Screen_Shot_2018-02-02_at_14.59.30]
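
And a hypothetical 2D counterpart using the loss_per_triangle defined above (the ring function, bounds, and goal are illustrative):

import adaptive
import numpy as np
from adaptive.runner import simple

def ring(xy):
    x, y = xy
    return np.exp(-(x**2 + y**2 - 0.75**2)**2 / 0.01)

learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)],
                             loss_per_triangle=loss_per_triangle)
simple(learner, goal=lambda l: l.npoints >= 500)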

@jbweston
Contributor Author

originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-02-02T17:45:21.374Z on GitLab

Nice. I'm not sure what the best way to address the issue is, then. What about introducing those loss functions in the corresponding modules and documenting their existence?
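
For instance, a sketch of what such module-level helpers might look like (the name uniform_loss and the exact placement are assumptions here, not necessarily what was eventually merged):

# e.g. in adaptive/learner/learner1D.py
def uniform_loss(interval, scale, function_values):
    """Loss depending only on the scaled interval width,
    giving homogeneous sampling in x."""
    x_left, x_right = interval
    x_scale, _ = scale
    return (x_right - x_left) / x_scale

# e.g. in adaptive/learner/learner2D.py, where the `areas` helper already lives
def uniform_loss(ip):
    """Loss depending only on the triangle areas,
    giving homogeneous sampling of the domain."""
    import numpy as np
    return np.sqrt(areas(ip))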

@jbweston
Contributor Author

originally posted by Bas Nijholt (@basnijholt) at 2018-02-19T16:21:28.553Z on GitLab

This issue is being addressed in https://gitlab.kwant-project.org/qt/adaptive/merge_requests/49.

@jbweston
Contributor Author

originally posted by Joseph Weston (@jbweston) at 2018-02-19T17:54:17.239Z on GitLab

fixed by gitlab:!49
