
Commit 92b71ab

prakriti-solankey authored and kartikpersistent committed
Update README.md & FrontendDoc (#974)
* Update README.md
* Update frontend_docs.adoc
* Update frontend_docs.adoc
* folder structure
* Add files via upload
* Add files via upload
* Add files via upload
* Update frontend_docs.adoc
* removed unwanted screenshots
* Add files via upload
* Update frontend_docs.adoc
* Add files via upload
* Update frontend_docs.adoc
1 parent 659873c commit 92b71ab


62 files changed: +443 −111 lines

README.md

+10 −22
@@ -35,25 +35,8 @@ Accoroding to enviornment we are configuring the models which is indicated by VI
 EX:
 ```env
 VITE_LLM_MODELS_PROD="openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash"
-```
-
-In your root folder, create a .env file with your OPENAI and DIFFBOT keys (if you want to use both):
-```env
-VITE_LLM_MODELS_PROD="openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash"
-```
 
-if you only want OpenAI:
-```env
-VITE_LLM_MODELS_PROD="diffbot,openai-gpt-3.5,openai-gpt-4o"
-OPENAI_API_KEY="your-openai-key"
-```
-
-if you only want Diffbot:
-```env
-VITE_LLM_MODELS_PROD="diffbot"
-DIFFBOT_API_KEY="your-diffbot-key"
 ```
-
 You can then run Docker Compose to build and start all components:
 ```bash
 docker-compose up --build
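
The removed per-provider snippets collapse into a single root `.env`. As a minimal sketch, assuming you want both OpenAI and Diffbot enabled, reusing the `OPENAI_API_KEY` / `DIFFBOT_API_KEY` names from the removed examples (values are placeholders):

```env
# Root .env — sketch combining the removed examples; replace placeholder keys with your own
VITE_LLM_MODELS_PROD="openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash"
OPENAI_API_KEY="your-openai-key"
DIFFBOT_API_KEY="your-diffbot-key"
```

With the keys in place, `docker-compose up --build` starts all components as described above.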
@@ -101,7 +84,7 @@ Alternatively, you can run the backend and frontend separately:
 ```
 
 - For the backend:
-1. Create the backend/.env file by copy/pasting the backend/example.env. To streamline the initial setup and testing of the application, you can preconfigure user credentials directly within the .env file. This bypasses the login dialog and allows you to immediately connect with a predefined user.
+1. Create the backend/.env file by copy/pasting the backend/example.env. To streamline the initial setup and testing of the application, you can preconfigure user credentials directly within the backend .env file. This bypasses the login dialog and allows you to immediately connect with a predefined user.
 - **NEO4J_URI**:
 - **NEO4J_USERNAME**:
 - **NEO4J_PASSWORD**:
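
A minimal sketch of the backend/.env referenced in the changed line, showing only the three Neo4j variables listed above (the URI scheme and values are placeholders, not project defaults):

```env
# backend/.env — sketch; substitute your own connection details
NEO4J_URI="neo4j+s://<your-instance>.databases.neo4j.io"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="your-password"
```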
@@ -135,6 +118,8 @@ Allow unauthenticated request : Yes
 ## ENV
 | Env Variable Name | Mandatory/Optional | Default Value | Description |
 |-------------------------|--------------------|---------------|--------------------------------------------------------------------------------------------------|
+| |
+| **BACKEND ENV**
 | EMBEDDING_MODEL | Optional | all-MiniLM-L6-v2 | Model for generating the text embedding (all-MiniLM-L6-v2 , openai , vertexai) |
 | IS_EMBEDDING | Optional | true | Flag to enable text embedding |
 | KNN_MIN_SCORE | Optional | 0.94 | Minimum score for KNN algorithm |
@@ -148,7 +133,13 @@ Allow unauthenticated request : Yes
 | LANGCHAIN_API_KEY | Optional | | API key for Langchain |
 | LANGCHAIN_PROJECT | Optional | | Project for Langchain |
 | LANGCHAIN_TRACING_V2 | Optional | true | Flag to enable Langchain tracing |
+| GCS_FILE_CACHE | Optional | False | If set to True, will save the files to process into GCS. If set to False, will save the files locally |
 | LANGCHAIN_ENDPOINT | Optional | https://api.smith.langchain.com | Endpoint for Langchain API |
+| ENTITY_EMBEDDING | Optional | False | If set to True, It will add embeddings for each entity in database |
+| LLM_MODEL_CONFIG_ollama_<model_name> | Optional | | Set ollama config as - model_name,model_local_url for local deployments |
+| RAGAS_EMBEDDING_MODEL | Optional | openai | embedding model used by ragas evaluation framework |
+| |
+| **FRONTEND ENV**
 | VITE_BACKEND_API_URL | Optional | http://localhost:8000 | URL for backend API |
 | VITE_BLOOM_URL | Optional | https://workspace-preview.neo4j.io/workspace/explore?connectURL={CONNECT_URL}&search=Show+me+a+graph&featureGenAISuggestions=true&featureGenAISuggestionsInternal=true | URL for Bloom visualization |
 | VITE_REACT_APP_SOURCES | Mandatory | local,youtube,wiki,s3 | List of input sources that will be available |
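
Translated into backend/.env entries, the newly documented backend rows might look like the sketch below; the `llama3` model name and the `http://localhost:11434` URL are illustrative assumptions following the `model_name,model_local_url` format given in the table:

```env
# backend/.env — sketch of the newly added backend variables (example values, not required defaults)
GCS_FILE_CACHE=False
ENTITY_EMBEDDING=True
RAGAS_EMBEDDING_MODEL="openai"
# assumed local Ollama deployment; format is model_name,model_local_url
LLM_MODEL_CONFIG_ollama_llama3="llama3,http://localhost:11434"
```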
@@ -158,10 +149,7 @@ Allow unauthenticated request : Yes
 | VITE_CHUNK_SIZE | Optional | 5242880 | Size of each chunk of file for upload |
 | VITE_GOOGLE_CLIENT_ID | Optional | | Client ID for Google authentication |
 | VITE_LLM_MODELS_PROD | Optional | openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash | To Distinguish models based on the Enviornment PROD or DEV
-| GCS_FILE_CACHE | Optional | False | If set to True, will save the files to process into GCS. If set to False, will save the files locally |
-| ENTITY_EMBEDDING | Optional | False | If set to True, It will add embeddings for each entity in database |
-| LLM_MODEL_CONFIG_ollama_<model_name> | Optional | | Set ollama config as - model_name,model_local_url for local deployments |
-| RAGAS_EMBEDDING_MODEL | Optional | openai | embedding model used by ragas evaluation framework |
+| VITE_LLM_MODELS | Optional | 'diffbot,openai_gpt_3.5,openai_gpt_4o,openai_gpt_4o_mini,gemini_1.5_pro,gemini_1.5_flash,azure_ai_gpt_35,azure_ai_gpt_4o,ollama_llama3,groq_llama3_70b,anthropic_claude_3_5_sonnet' | Supported Models For the application
 
 ## LLMs Supported
 1. OpenAI
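
On the frontend side, a hedged sketch of how the two model lists could sit in the frontend .env, built from the table defaults and the `VITE_LLM_MODELS` row added in this commit (trim the lists to the models you actually deploy):

```env
# frontend .env — sketch using the table defaults; VITE_LLM_MODELS_PROD distinguishes the PROD model set
VITE_BACKEND_API_URL="http://localhost:8000"
VITE_LLM_MODELS_PROD="openai_gpt_4o,openai_gpt_4o_mini,diffbot,gemini_1.5_flash"
VITE_LLM_MODELS="diffbot,openai_gpt_3.5,openai_gpt_4o,openai_gpt_4o_mini,gemini_1.5_flash"
```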
