Depending on the environment, the available models are configured via the VITE_LLM_MODELS_PROD variable, so you can adjust the model list to your needs.
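For example, to expose only a subset of models in production, you could set the variable like this (the model names are taken from the configuration table below; adjust to the models you actually want to expose):

```env
VITE_LLM_MODELS_PROD="openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash"
```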
You can then run Docker Compose to build and start all components:
```bash
docker-compose up --build
```

If, however, you want to enable only vector mode or only graph mode, you can do so by specifying the modes in the env:
```env
VITE_CHAT_MODES="vector,graph"
```
#### Running Backend and Frontend separately (dev environment)
Alternatively, you can run the backend and frontend separately:

- For the backend:
1. Create the backend/.env file by copying backend/example.env. To streamline the initial setup and testing of the application, you can preconfigure user credentials directly within the backend .env file. This bypasses the login dialog and allows you to connect immediately with a predefined user.
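As a sketch, the preconfigured connection credentials might look like the following (variable names are assumed to follow backend/example.env; check that file for the names and values your setup actually uses):

```env
NEO4J_URI="neo4j://localhost:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="your_password"
NEO4J_DATABASE="neo4j"
```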
| Env Variable Name | Mandatory/Optional | Default Value | Description |
|---|---|---|---|
| VITE_GOOGLE_CLIENT_ID | Optional | | Client ID for Google authentication |
| VITE_LLM_MODELS_PROD | Optional | openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash | To distinguish models based on the environment (PROD or DEV) |
| VITE_LLM_MODELS | Optional | 'diffbot,openai_gpt_3.5,openai_gpt_4o,openai_gpt_4o_mini,gemini_1.5_pro,gemini_1.5_flash,azure_ai_gpt_35,azure_ai_gpt_4o,ollama_llama3,groq_llama3_70b,anthropic_claude_3_5_sonnet' | Supported models for the application |
| GCS_FILE_CACHE | Optional | False | If set to True, will save the files to process into GCS. If set to False, will save the files locally |
| ENTITY_EMBEDDING | Optional | False | If set to True, it will add embeddings for each entity in the database |
| LLM_MODEL_CONFIG_ollama_<model_name> | Optional | | Set Ollama config as model_name,model_local_url for local deployments |
| RAGAS_EMBEDDING_MODEL | Optional | openai | Embedding model used by the Ragas evaluation framework |
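As an illustration of the LLM_MODEL_CONFIG_ollama_<model_name> format described above, a local llama3 deployment might be configured as follows (the URL assumes Ollama's default local port 11434; adjust the model name and URL for your deployment):

```env
LLM_MODEL_CONFIG_ollama_llama3="llama3,http://localhost:11434"
```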