What problem or use case are you trying to solve?
As a user of the on-premises OpenHands Docker container, I would like the OpenHands app to offer a download-and-then-run button for the new OpenHands LM.
Describe the UX of the solution you'd like
A download button for the OpenHands LM within the app. After the LLM has been downloaded, the button changes to a play symbol (run), so that the OpenHands LM can be started and becomes available as an LLM inside the running OpenHands Docker container.
Whether the OpenHands LM runs as another container (e.g. an Ollama model) is up to the architects.
Do you have thoughts on the technical implementation?
Behind the download and run button for the LLM:
First, download the LLM, e.g. via a Docker container that installs the OpenHands LM; second, run the LLM and make it available to the running OpenHands Docker container (see the sketch below).
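A minimal sketch of how those two steps could look if the model were served through a local Ollama sidecar. Ollama's `/api/pull` and `/api/tags` endpoints are real; the host URL and the model tag `openhands-lm-32b` are placeholder assumptions, not a confirmed design:

```python
import json
import requests

OLLAMA = "http://localhost:11434"  # assumed address of an Ollama sidecar container
MODEL = "openhands-lm-32b"         # placeholder tag; the published tag may differ

def download_model() -> None:
    """Step 1: pull the model. Ollama streams progress as JSON lines."""
    with requests.post(f"{OLLAMA}/api/pull", json={"model": MODEL}, stream=True) as r:
        r.raise_for_status()
        for line in r.iter_lines():
            if line:
                print(json.loads(line).get("status", ""))

def model_is_installed() -> bool:
    """Decides which button the UI shows: 'Download' if False, play/run if True."""
    models = requests.get(f"{OLLAMA}/api/tags").json().get("models", [])
    return any(m["name"].startswith(MODEL) for m in models)

if __name__ == "__main__":
    if not model_is_installed():
        download_model()                    # step 1: download
    print("ready:", model_is_installed())   # step 2: point OpenHands at OLLAMA
```

The same `model_is_installed()` check could drive the UI state transition from "Download" to the play button.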
Describe alternatives you've considered
Directly ship another Docker container for the new OpenHands LM that runs the model and exposes it via an OpenAI-compatible API.
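For example, once such a container is running, the model would be reachable with the standard OpenAI client (Ollama already exposes an OpenAI-compatible endpoint under `/v1`; the base URL and model tag below are assumptions):

```python
from openai import OpenAI

# The local container serves an OpenAI-compatible API; the API key is unused
# locally but required by the client. Base URL and model tag are assumptions.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="openhands-lm-32b",  # placeholder tag
    messages=[{"role": "user", "content": "Say hello from OpenHands LM."}],
)
print(resp.choices[0].message.content)
```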
Additional context
This feature request is mainly meant to help OpenHands by distributing the OpenHands LM more widely.
From my side it is not urgent, as I can set this up myself.
I'm actually trying to do that right now, but the OpenHands code agent times out after 600 seconds, since I'm running the LLM on a slow local machine...
I hope we get an easy way to modify the request timeout in the default Docker image without using the dev workflow.
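For anyone hitting the same wall, a hedged sketch of a `config.toml` mounted into the container; the `base_url` pattern follows the documented local-Ollama setup, but whether a timeout key exists under exactly this name may vary by OpenHands version:

```toml
# config.toml mounted into the OpenHands container -- a sketch, not verified
# against every release; the timeout key name is an assumption.
[llm]
model = "ollama/openhands-lm-32b"               # placeholder model tag
base_url = "http://host.docker.internal:11434"
api_key = "ollama"
timeout = 1800                                  # seconds; raise for slow local inference
```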