
Commit 859af37

Update README.md & FrontendDoc (#974)
* Update README.md
* Update frontend_docs.adoc
* Update frontend_docs.adoc
* folder structure
* Add files via upload
* Add files via upload
* Add files via upload
* Update frontend_docs.adoc
* removed unwanted screenshots
* Add files via upload
* Update frontend_docs.adoc
* Add files via upload
* Update frontend_docs.adoc
1 parent 0314b9b commit 859af37


62 files changed (+442, -114 lines)

README.md (+9, -25)
@@ -35,27 +35,8 @@ Accoroding to enviornment we are configuring the models which is indicated by VI
 EX:
 ```env
 VITE_LLM_MODELS_PROD="openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash"
-```
-According to the environment, we are configuring the models which indicated by VITE_LLM_MODELS_PROD variable we can configure models based on our needs.
-EX:
-```env
-VITE_LLM_MODELS_PROD="openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash"
-```
 
-if you only want OpenAI:
-```env
-VITE_LLM_MODELS_PROD="diffbot,openai-gpt-3.5,openai-gpt-4o"
-VITE_LLM_MODELS_PROD="diffbot,openai-gpt-3.5,openai-gpt-4o"
-OPENAI_API_KEY="your-openai-key"
 ```
-
-if you only want Diffbot:
-```env
-VITE_LLM_MODELS_PROD="diffbot"
-VITE_LLM_MODELS_PROD="diffbot"
-DIFFBOT_API_KEY="your-diffbot-key"
-```
-
 You can then run Docker Compose to build and start all components:
 ```bash
 docker-compose up --build
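
For reference, the retained instructions boil down to one model-selection variable plus a provider key for whichever models you enable. A minimal sketch of such an env file, assuming an OpenAI-only setup and reusing the variable names shown in the examples this commit removes:

```env
# Restrict the model dropdown to the models you want to expose (names as listed above)
VITE_LLM_MODELS_PROD="openai_gpt_4o,openai_gpt_4o_mini"
# Provider key for the enabled models (variable name taken from the removed OpenAI example)
OPENAI_API_KEY="your-openai-key"
```

With these set in the env file that Docker Compose reads, `docker-compose up --build` starts all components as described above.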
@@ -88,7 +69,6 @@ VITE_CHAT_MODES=""
 If however you want to specify the only vector mode or only graph mode you can do that by specifying the mode in the env:
 ```env
 VITE_CHAT_MODES="vector,graph"
-VITE_CHAT_MODES="vector,graph"
 ```
 
 #### Running Backend and Frontend separately (dev environment)
@@ -105,7 +85,7 @@ Alternatively, you can run the backend and frontend separately:
 ```
 
 - For the backend:
-1. Create the backend/.env file by copy/pasting the backend/example.env. To streamline the initial setup and testing of the application, you can preconfigure user credentials directly within the .env file. This bypasses the login dialog and allows you to immediately connect with a predefined user.
+1. Create the backend/.env file by copy/pasting the backend/example.env. To streamline the initial setup and testing of the application, you can preconfigure user credentials directly within the backend .env file. This bypasses the login dialog and allows you to immediately connect with a predefined user.
 - **NEO4J_URI**:
 - **NEO4J_USERNAME**:
 - **NEO4J_PASSWORD**:
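
Following that step, a minimal backend/.env sketch with the connection fields listed above (all values here are placeholders, not defaults from the repository):

```env
# backend/.env — copied from backend/example.env, with the Neo4j connection preconfigured
NEO4J_URI="neo4j://localhost:7687"   # placeholder URI; point this at your own instance
NEO4J_USERNAME="neo4j"               # placeholder username
NEO4J_PASSWORD="your-password"       # placeholder password
```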
@@ -139,6 +119,8 @@ Allow unauthenticated request : Yes
 ## ENV
 | Env Variable Name | Mandatory/Optional | Default Value | Description |
 |-------------------------|--------------------|---------------|--------------------------------------------------------------------------------------------------|
+| |
+| **BACKEND ENV**
 | EMBEDDING_MODEL | Optional | all-MiniLM-L6-v2 | Model for generating the text embedding (all-MiniLM-L6-v2 , openai , vertexai) |
 | IS_EMBEDDING | Optional | true | Flag to enable text embedding |
 | KNN_MIN_SCORE | Optional | 0.94 | Minimum score for KNN algorithm |
@@ -152,7 +134,13 @@ Allow unauthenticated request : Yes
 | LANGCHAIN_API_KEY | Optional | | API key for Langchain |
 | LANGCHAIN_PROJECT | Optional | | Project for Langchain |
 | LANGCHAIN_TRACING_V2 | Optional | true | Flag to enable Langchain tracing |
+| GCS_FILE_CACHE | Optional | False | If set to True, will save the files to process into GCS. If set to False, will save the files locally |
 | LANGCHAIN_ENDPOINT | Optional | https://api.smith.langchain.com | Endpoint for Langchain API |
+| ENTITY_EMBEDDING | Optional | False | If set to True, It will add embeddings for each entity in database |
+| LLM_MODEL_CONFIG_ollama_<model_name> | Optional | | Set ollama config as - model_name,model_local_url for local deployments |
+| RAGAS_EMBEDDING_MODEL | Optional | openai | embedding model used by ragas evaluation framework |
+| |
+| **FRONTEND ENV**
 | VITE_BACKEND_API_URL | Optional | http://localhost:8000 | URL for backend API |
 | VITE_BLOOM_URL | Optional | https://workspace-preview.neo4j.io/workspace/explore?connectURL={CONNECT_URL}&search=Show+me+a+graph&featureGenAISuggestions=true&featureGenAISuggestionsInternal=true | URL for Bloom visualization |
 | VITE_REACT_APP_SOURCES | Mandatory | local,youtube,wiki,s3 | List of input sources that will be available |
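
The LLM_MODEL_CONFIG_ollama_<model_name> row above gives the value format as model_name,model_local_url. A hedged example for a local Ollama deployment (the model name and URL below are assumptions, not values from the repository):

```env
# Local Ollama model; value format per the table: model_name,model_local_url
LLM_MODEL_CONFIG_ollama_llama3="llama3,http://localhost:11434"
```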
@@ -163,10 +151,6 @@ Allow unauthenticated request : Yes
 | VITE_GOOGLE_CLIENT_ID | Optional | | Client ID for Google authentication |
 | VITE_LLM_MODELS_PROD | Optional | openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash | To Distinguish models based on the Enviornment PROD or DEV
 | VITE_LLM_MODELS | Optional | 'diffbot,openai_gpt_3.5,openai_gpt_4o,openai_gpt_4o_mini,gemini_1.5_pro,gemini_1.5_flash,azure_ai_gpt_35,azure_ai_gpt_4o,ollama_llama3,groq_llama3_70b,anthropic_claude_3_5_sonnet' | Supported Models For the application
-| GCS_FILE_CACHE | Optional | False | If set to True, will save the files to process into GCS. If set to False, will save the files locally |
-| ENTITY_EMBEDDING | Optional | False | If set to True, It will add embeddings for each entity in database |
-| LLM_MODEL_CONFIG_ollama_<model_name> | Optional | | Set ollama config as - model_name,model_local_url for local deployments |
-| RAGAS_EMBEDDING_MODEL | Optional | openai | embedding model used by ragas evaluation framework |
 
 ## LLMs Supported
 1. OpenAI
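
Pulling the frontend rows together, a minimal frontend env sketch built only from values that appear in the table and examples above (illustrative, not a canonical configuration):

```env
VITE_BACKEND_API_URL="http://localhost:8000"
VITE_REACT_APP_SOURCES="local,youtube,wiki,s3"
VITE_LLM_MODELS_PROD="openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash"
VITE_CHAT_MODES="vector,graph"
```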
