
[Bug]: [RLHF] Weights update broken with V1 multiprocessing #16434


Closed
1 task done
22quinn opened this issue Apr 10, 2025 · 3 comments
Labels
bug Something isn't working

Comments

@22quinn
Contributor

22quinn commented Apr 10, 2025

Your current environment

wget https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
# For security purposes, please feel free to check the contents of collect_env.py before running it.
python collect_env.py

🐛 Describe the bug

Currently we update weights using the load_weights API, like this:

LLMEngine.model_executor.driver_worker.worker.model_runner.model.load_weights(weights=[(name, weight)])

With V1 + multiprocessing, model_executor no longer exists on the engine: ref

I saw #14185 mentioned this caveat. Based on @youkaichao's discussion in Slack, it seems there was an implementation supporting multiprocessing, but it was blocked on a hang issue. What is the remaining gap to fixing this?
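To illustrate the failure mode, here is a minimal sketch (object layout is mocked with SimpleNamespace, not vLLM's real internals): in V1 multiprocessing mode the engine-side object is only a client handle to the worker process, so the old attribute chain is simply absent.

```python
from types import SimpleNamespace

class Model:
    def load_weights(self, weights):
        # Records the weights, standing in for the real parameter update.
        self.updated = dict(weights)

# Old (in-process) layout: the model is reachable via attribute access.
v0_engine = SimpleNamespace(
    model_executor=SimpleNamespace(
        driver_worker=SimpleNamespace(
            worker=SimpleNamespace(
                model_runner=SimpleNamespace(model=Model())))))

v0_engine.model_executor.driver_worker.worker.model_runner.model.load_weights(
    weights=[("lm_head.weight", [0.0])])  # works when everything is in-process

# V1 + multiprocessing: the executor lives in a child process, and the
# engine-side handle has no model_executor attribute to traverse.
v1_engine = SimpleNamespace()
has_path = hasattr(v1_engine, "model_executor")  # False: the chain is broken
```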

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
22quinn added the bug (Something isn't working) label on Apr 10, 2025
@rbao2018

I think this patch in verl may help:
https://github.com/volcengine/verl/pull/923

@22quinn
Contributor Author

22quinn commented Apr 11, 2025

I think this patch in verl may help https://github.com/volcengine/verl/pull/923

@rbao2018 Thanks for the pointer, but that PR still uses model = self.inference_engine.llm_engine.model_executor.driver_worker.worker.model_runner.model, so I think it will still fail with multiprocessing turned on due to the missing model_executor.

@22quinn
Contributor Author

22quinn commented Apr 13, 2025

Actually, instead of directly calling LLMEngine.model_executor.driver_worker.worker.model_runner.model.load_weights, using collective_rpc("update_weight", ...) might work, as implemented in #15444.

model_executor.collective_rpc will be called here:

return self.model_executor.collective_rpc(method, timeout, args,
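The idea can be sketched with a minimal mock (the Worker.update_weight method and the dispatch logic below are simplified assumptions for illustration, not vLLM's actual implementation): callers pass a method name to the engine, which forwards it to every worker, so no one needs a direct reference to model_runner.model.

```python
class Worker:
    """Stands in for a vLLM worker that owns the model weights."""

    def __init__(self):
        self.weights = {}  # placeholder for model_runner.model's parameters

    def update_weight(self, name, weight):
        # In vLLM this would call model.load_weights(weights=[(name, weight)]).
        self.weights[name] = weight
        return name

class Engine:
    """Stands in for the engine that fans an RPC out to its workers."""

    def __init__(self, num_workers=2):
        self.workers = [Worker() for _ in range(num_workers)]

    def collective_rpc(self, method, args=(), kwargs=None):
        # Dispatch by method name to every worker; with multiprocessing this
        # hop crosses a process boundary instead of being a local call.
        kwargs = kwargs or {}
        return [getattr(w, method)(*args, **kwargs) for w in self.workers]

engine = Engine()
results = engine.collective_rpc("update_weight",
                                args=("lm_head.weight", [0.1, 0.2]))
```

Because only the method name and (serializable) arguments travel between processes, this pattern works identically whether the executor is in-process or behind a multiprocessing boundary.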

@22quinn 22quinn closed this as completed Apr 21, 2025