Benchmarking #819
Comments
We have started running evaluations with RAGAS for a set of documents and the different retrievers (but not hyperparameter sets), and we will also expose a first set of metrics in the UI in the next release.
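For readers unfamiliar with RAGAS, here is a minimal sketch of what such an evaluation could look like, assuming the ragas 0.1.x-style API, an OPENAI_API_KEY in the environment, and entirely hypothetical question/answer data; the project's actual evaluation setup may differ:

```python
# Minimal RAGAS evaluation sketch (hypothetical data, ragas 0.1.x-style API).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision

# Hypothetical samples: one row per question answered by a given retriever.
samples = {
    "question": ["What products does Acme sell?"],
    "answer": ["Acme sells industrial sensors and controllers."],
    "contexts": [["Acme Corp manufactures industrial sensors and controllers."]],
    "ground_truth": ["Acme sells industrial sensors and controllers."],
}
dataset = Dataset.from_dict(samples)

# Score the retriever/generator pair on a few standard RAG metrics.
result = evaluate(
    dataset,
    metrics=[faithfulness, answer_relevancy, context_precision],
)
print(result)  # per-metric scores, e.g. faithfulness, answer_relevancy, context_precision
```

Running the same question set through each retriever configuration and comparing the per-metric scores is one way to see what the added pipeline complexity buys.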
Thanks for the pointer to RAGAS. I recently got interested in benchmarking RAG and came across:
On a less related point, it would be nice to have an easy dev setup where changes to frontend code are reflected immediately and restarting the backend is automatic or simple. Maybe a devcontainer: the ability to run a project in a Codespace directly from GitHub makes it very accessible.
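As a rough sketch of what such a devcontainer could look like (the service name, ports, module path, and start command below are assumptions about the repo layout, not verified against it):

```jsonc
// .devcontainer/devcontainer.json (sketch; service name, ports, and the
// uvicorn module path are assumptions, not the repository's actual layout)
{
  "name": "llm-graph-builder",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "backend",
  "workspaceFolder": "/workspace",
  "forwardPorts": [8000, 3000],
  // Reload the backend automatically on code changes; the frontend dev
  // server would handle hot reload on its own port.
  "postStartCommand": "uvicorn main:app --reload --host 0.0.0.0 --port 8000"
}
```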
Hi, the llm-graph-builder uses a sophisticated pipeline, and I am wondering whether any benchmarking has been done, e.g. to measure the value of the added complexity, tune hyperparameters, and compare with other RAG approaches? Thanks!