Bring few-shot HPO into AutoML #6577


Open · 2 of 5 tasks

LittleLittleCloud opened this issue Feb 20, 2023 · 1 comment

LittleLittleCloud (Contributor) commented Feb 20, 2023

What's few-shot/zero-shot HPO

Few-shot HPO is an HPO approach that offers both promising performance and a friendly budget. It is divided into two stages: offline processing and online processing. In the offline stage, it searches a given search space over a collection of datasets and selects a set of hyper-parameter configurations {c} according to certain criteria. The search space and the dataset collection can be very large, so that they cover as many dataset distributions as possible. In the online stage, it picks N hyper-parameter configurations from {c} based on the user's training dataset. If N = 1, the algorithm is called zero-shot, since it makes only a single attempt on the training dataset; if N > 1, it is called few-shot.
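To make the two stages concrete, here is a minimal, hypothetical sketch in plain C#. None of these types or methods exist in ML.NET; the `Config` record, the score matrix, and the greedy portfolio construction are assumptions purely for illustration.

```csharp
// Hypothetical illustration only: Config, BuildPortfolio, and Propose are not
// part of ML.NET; they just make the offline/online split concrete.
using System;
using System.Collections.Generic;
using System.Linq;

public record Config(IReadOnlyDictionary<string, double> Values);

public static class FewShotHpoSketch
{
    // Offline stage: from a large candidate pool evaluated on many datasets,
    // greedily keep a small, ordered portfolio {c} that performs well overall.
    public static List<Config> BuildPortfolio(
        IReadOnlyList<Config> candidates,
        double[,] score,        // score[i, j] = metric of candidate i on offline dataset j
        int portfolioSize)
    {
        var chosen = new List<int>();
        int datasetCount = score.GetLength(1);

        while (chosen.Count < portfolioSize)
        {
            int best = -1;
            double bestGain = double.NegativeInfinity;
            for (int i = 0; i < candidates.Count; i++)
            {
                if (chosen.Contains(i)) continue;
                // Average (over datasets) of the best score reachable if i joins the portfolio.
                double gain = 0;
                for (int j = 0; j < datasetCount; j++)
                {
                    double current = chosen.Count == 0
                        ? double.NegativeInfinity
                        : chosen.Max(c => score[c, j]);
                    gain += Math.Max(current, score[i, j]);
                }
                gain /= datasetCount;
                if (gain > bestGain) { bestGain = gain; best = i; }
            }
            if (best < 0) break;  // candidate pool exhausted
            chosen.Add(best);
        }
        return chosen.Select(i => candidates[i]).ToList();
    }

    // Online stage: propose N entries from the ordered portfolio for the user's dataset.
    // N = 1 is zero-shot (a single attempt); N > 1 is few-shot (try each, keep the best).
    // A real implementation might instead rank entries by dataset meta-features.
    public static IEnumerable<Config> Propose(IReadOnlyList<Config> portfolio, int n) =>
        portfolio.Take(n);
}
```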

The advantage of few-shot/zero-shot HPO is that it brings in prior knowledge learned during the offline stage, which helps resolve the cold-start problem. That prior knowledge also keeps the tuner from exploring dead-end configurations.

What will benefit most from few-shot/zero-shot HPO

Deep learning scenarios such as image classification and NLP tasks, where each trial is expensive.

How will few-shot/zero-shot HPO be leveraged in ML.NET

Few-shot/zero-shot HPO will be exposed as a tuning algorithm in AutoML.NET, just like the other tuning algorithms.
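Below is a hedged sketch of what an "autozero" tuner could look like behind a tuner-style abstraction. The `ISketchTuner` interface is a stand-in, not AutoML.NET's actual tuner API, and `AutoZeroTuner` reuses the hypothetical `Config` record from the sketch above.

```csharp
// Stand-in tuner abstraction; the real AutoML.NET tuner interface may differ.
using System.Collections.Generic;

public interface ISketchTuner
{
    Config? Propose();                         // next configuration to try, or null when done
    void Update(Config config, double metric); // metric observed for a proposed configuration
}

public class AutoZeroTuner : ISketchTuner
{
    private readonly Queue<Config> _portfolio; // ordered portfolio built offline
    private readonly int _budget;              // N: 1 => zero-shot, > 1 => few-shot
    private int _proposed;

    public AutoZeroTuner(IEnumerable<Config> offlinePortfolio, int budget = 1)
    {
        _portfolio = new Queue<Config>(offlinePortfolio);
        _budget = budget;
    }

    public Config? Propose()
    {
        if (_proposed >= _budget || _portfolio.Count == 0)
            return null; // budget exhausted: stop, or hand over to another tuner
        _proposed++;
        return _portfolio.Dequeue();
    }

    public void Update(Config config, double metric)
    {
        // No online model to fit: the prior knowledge is already encoded in the
        // portfolio ordering. The experiment just keeps the best of the N trials.
    }
}
```

Plugged into a binary classification, multiclass classification, or regression experiment, the experiment loop would call `Propose`, train and evaluate the pipeline with the returned configuration, and report the metric back through `Update`.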

Work items

  • add autozero tuner
  • integrate with binary classification experiment
  • integrate with multiclass classification experiment
  • integrate with regression experiment
  • add docs
@LittleLittleCloud LittleLittleCloud added the enhancement New feature or request label Feb 20, 2023
@ghost ghost added the untriaged New issue has not been triaged label Feb 20, 2023
michaelgsharp (Contributor) commented:

@LittleLittleCloud I'll add this to the "Future" milestone, but if you are planning on finishing it before the next major release please change it to the 4.0 milestone.

@michaelgsharp michaelgsharp added this to the ML.NET Future milestone Jan 23, 2024
@ghost ghost removed the untriaged New issue has not been triaged label Jan 23, 2024