bump create-llama and update event handler #260
Merged

Changes from all commits (17 commits, all by leehuwuj):
- 81e3827 bump create-llama and update event handler
- 471988f remove wrong setup callback manager
- 3f570ff add callback manager to index
- fe552b9 fix linting
- b143f86 chore: Update llama-index packages for extractor template
- d986a2d fix
- 4abbfbe add changesets
- 8637193 remove added file
- bb85ecf fix wrong import
- 95fb444 fix pydantic v2 json
- c1b0e73 remove callback manager from llm
- cd4cec1 disable new llama_index v0.11 version in multi-agents template
- 78c57d3 bump provider packages
- 67ca22c don't use python <4.0 constraint for groq and anthropic
- b711c02 remove openai package from extractor pyproject.toml
- df49bdf add LlamaCloudConfig model
- 3b02c05 beter code
New changeset file (@@ -0,0 +1,5 @@):

```md
---
"create-llama": patch
---

Use callback manager properly
```
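The fix the commits describe ("remove callback manager from llm", "add callback manager to index") moves event handling onto the index. A minimal sketch of the pattern, assuming a debug handler in place of the template's own event handler and placeholder credentials:

```python
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler
from llama_index.indices.managed.llama_cloud import LlamaCloudIndex

# Illustrative handler; the create-llama templates ship their own.
manager = CallbackManager([LlamaDebugHandler()])

# The manager is attached to the index (as in this PR) rather than the LLM,
# so retrieval and synthesis events reach the handler during queries.
index = LlamaCloudIndex(
    name="my-pipeline",         # placeholder pipeline name
    project_name="my-project",  # placeholder project name
    api_key="llx-...",          # placeholder API key
    callback_manager=manager,
)
```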
New changeset file (@@ -0,0 +1,5 @@):

```md
---
"create-llama": patch
---

Bump create-llama version to 0.11.1
```
98 changes: 72 additions & 26 deletions
98
templates/components/vectordbs/python/llamacloud/index.py
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Original file line number | Diff line number | Diff line change |
---|---|---|
@@ -1,41 +1,87 @@ | ||
import logging | ||
import os | ||
from llama_index.indices.managed.llama_cloud import LlamaCloudIndex | ||
from typing import Optional | ||
|
||
from llama_index.core.callbacks import CallbackManager | ||
from llama_index.core.ingestion.api_utils import ( | ||
get_client as llama_cloud_get_client, | ||
) | ||
from llama_index.indices.managed.llama_cloud import LlamaCloudIndex | ||
from pydantic import BaseModel, Field, validator | ||
|
||
logger = logging.getLogger("uvicorn") | ||
|
||
|
||
def get_client(): | ||
return llama_cloud_get_client( | ||
os.getenv("LLAMA_CLOUD_API_KEY"), | ||
os.getenv("LLAMA_CLOUD_BASE_URL"), | ||
class LlamaCloudConfig(BaseModel): | ||
# Private attributes | ||
api_key: str = Field( | ||
default=os.getenv("LLAMA_CLOUD_API_KEY"), | ||
exclude=True, # Exclude from the model representation | ||
) | ||
base_url: Optional[str] = Field( | ||
default=os.getenv("LLAMA_CLOUD_BASE_URL"), | ||
exclude=True, | ||
) | ||
organization_id: Optional[str] = Field( | ||
default=os.getenv("LLAMA_CLOUD_ORGANIZATION_ID"), | ||
exclude=True, | ||
) | ||
# Configuration attributes, can be set by the user | ||
pipeline: str = Field( | ||
description="The name of the pipeline to use", | ||
default=os.getenv("LLAMA_CLOUD_INDEX_NAME"), | ||
) | ||
project: str = Field( | ||
description="The name of the LlamaCloud project", | ||
default=os.getenv("LLAMA_CLOUD_PROJECT_NAME"), | ||
) | ||
|
||
# Validate and throw error if the env variables are not set before starting the app | ||
@validator("pipeline", "project", "api_key", pre=True, always=True) | ||
@classmethod | ||
def validate_env_vars(cls, value): | ||
if value is None: | ||
raise ValueError( | ||
"Please set LLAMA_CLOUD_INDEX_NAME, LLAMA_CLOUD_PROJECT_NAME and LLAMA_CLOUD_API_KEY" | ||
" to your environment variables or config them in .env file" | ||
) | ||
return value | ||
|
||
def to_client_kwargs(self) -> dict: | ||
return { | ||
"api_key": self.api_key, | ||
"base_url": self.base_url, | ||
} | ||
|
||
def get_index(params=None): | ||
configParams = params or {} | ||
pipelineConfig = configParams.get("llamaCloudPipeline", {}) | ||
name = pipelineConfig.get("pipeline", os.getenv("LLAMA_CLOUD_INDEX_NAME")) | ||
project_name = pipelineConfig.get("project", os.getenv("LLAMA_CLOUD_PROJECT_NAME")) | ||
api_key = os.getenv("LLAMA_CLOUD_API_KEY") | ||
base_url = os.getenv("LLAMA_CLOUD_BASE_URL") | ||
organization_id = os.getenv("LLAMA_CLOUD_ORGANIZATION_ID") | ||
|
||
if name is None or project_name is None or api_key is None: | ||
raise ValueError( | ||
"Please set LLAMA_CLOUD_INDEX_NAME, LLAMA_CLOUD_PROJECT_NAME and LLAMA_CLOUD_API_KEY" | ||
" to your environment variables or config them in .env file" | ||
) | ||
|
||
index = LlamaCloudIndex( | ||
name=name, | ||
project_name=project_name, | ||
api_key=api_key, | ||
base_url=base_url, | ||
organization_id=organization_id, | ||
|
||
class IndexConfig(BaseModel): | ||
llama_cloud_pipeline_config: LlamaCloudConfig = Field( | ||
default=LlamaCloudConfig(), | ||
alias="llamaCloudPipeline", | ||
) | ||
callback_manager: Optional[CallbackManager] = Field( | ||
default=None, | ||
) | ||
|
||
def to_index_kwargs(self) -> dict: | ||
return { | ||
"name": self.llama_cloud_pipeline_config.pipeline, | ||
"project_name": self.llama_cloud_pipeline_config.project, | ||
"api_key": self.llama_cloud_pipeline_config.api_key, | ||
"base_url": self.llama_cloud_pipeline_config.base_url, | ||
"organization_id": self.llama_cloud_pipeline_config.organization_id, | ||
"callback_manager": self.callback_manager, | ||
} | ||
|
||
|
||
def get_index(config: IndexConfig = None): | ||
if config is None: | ||
config = IndexConfig() | ||
index = LlamaCloudIndex(**config.to_index_kwargs()) | ||
|
||
return index | ||
|
||
|
||
def get_client(): | ||
config = LlamaCloudConfig() | ||
return llama_cloud_get_client(**config.to_client_kwargs()) |
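For reference, a minimal usage sketch of the new config objects. The `app.engine.index` import path and the debug handler are assumptions based on the template layout, and the `LLAMA_CLOUD_*` environment variables must be set before the module is imported, since the `Field(default=os.getenv(...))` defaults are evaluated at import time:

```python
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler

from app.engine.index import IndexConfig, get_index  # path assumed from the template layout

# LlamaCloudConfig falls back to the LLAMA_CLOUD_* env vars; its validator
# raises at construction time if a required value is missing.
config = IndexConfig(callback_manager=CallbackManager([LlamaDebugHandler()]))
index = get_index(config)

# From here the index behaves like any other llama-index index.
print(index.as_query_engine().query("What is in the index?"))
```

Compared to the old `get_index(params)` dict plumbing, the pydantic model validates the environment up front and keeps the `llamaCloudPipeline` request key working via the field alias.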