add GenUIWorkflow for generating UI components from workflow events #549
Conversation
Walkthrough

This pull request restructures the emission of workflow events by introducing new event classes and updating their usage. A new `GenUIWorkflow` is added for generating UI components from workflow events.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Trigger as Workflow Trigger
    participant Workflow as GenUIWorkflow
    participant LLM as Language Model
    participant FS as File System

    Trigger->>Workflow: start(event)
    Workflow->>LLM: Generate UI description (planning)
    LLM-->>Workflow: Return UI description
    Workflow->>LLM: Create event aggregations
    LLM-->>Workflow: Aggregation logic
    Workflow->>FS: Write UI component code
    Workflow->>LLM: Refine generated code
    LLM-->>Workflow: Final UI component code
```
… checks for API key and package installation
…UI planning and aggregation function generation
templates/components/workflows/python/deep_research/workflow.py (outdated; resolved)
Actionable comments posted: 1
🔭 Outside diff range comments (1)
llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py (1)

33-94: 🛠️ Refactor suggestion: Ensure robust cleanup and error chaining in exception handling.

- The current flow removes `project_root` from `sys.path` in the `finally` block. If multiple concurrent operations set `sys.path`, they might interfere with each other. Consider using a context manager to restore `sys.path` more reliably.
- When raising new exceptions, consider `raise ValueError(...) from e` for clearer stack traces, especially for import errors (B904).
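A minimal sketch of both suggestions, assuming a helper-module layout; `prepend_sys_path` and `load_workflow_module` are illustrative names, not functions from this PR:

```python
import contextlib
import importlib
import sys


@contextlib.contextmanager
def prepend_sys_path(path: str):
    """Temporarily put `path` at the front of sys.path and restore it afterwards."""
    sys.path.insert(0, path)
    try:
        yield
    finally:
        # Remove only the entry we added, leaving other modifications untouched.
        with contextlib.suppress(ValueError):
            sys.path.remove(path)


def load_workflow_module(module_name: str, project_root: str):
    with prepend_sys_path(project_root):
        try:
            return importlib.import_module(module_name)
        except ImportError as e:
            # Chain the original error so the traceback keeps the import failure.
            raise ValueError(f"Could not import workflow module '{module_name}'") from e
```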
🧹 Nitpick comments (9)
llama-index-server/llama_index/server/api/models.py (1)

145-153: Clean implementation of the UIEvent class.

The `UIEvent` class follows the same pattern as other event classes in the file (e.g., `AgentRunEvent` and `SourceNodesEvent`). The `to_response` method properly serializes the data field using `model_dump()`.

One consideration: you might want to add validation for the `type` field to ensure consistent event types throughout your application, perhaps using an Enum similar to `AgentRunEventType` or adding a validator.
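A possible shape for such a validator, shown as a sketch only; `UIEventType`, its members, and the `ValidatedUIEvent` class are hypothetical and may differ from the real `UIEvent` in `models.py`:

```python
from enum import Enum
from typing import Any

from pydantic import BaseModel, field_validator


class UIEventType(str, Enum):
    DEEP_RESEARCH = "deep_research_event"


class ValidatedUIEvent(BaseModel):
    type: str
    data: Any

    @field_validator("type")
    @classmethod
    def check_known_type(cls, value: str) -> str:
        known = {member.value for member in UIEventType}
        if value not in known:
            raise ValueError(f"Unknown UI event type: {value!r}")
        return value
```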
llama-index-server/docs/custom_ui_component.md (4)

17-21: Clarify the file naming convention for UIEvent-based components.

Consider providing a short reminder that the `type` used in `UIEvent` (here, `"deep_research_event"`) should match the filename (e.g., `deep_research_event.jsx` or `.tsx`) to ensure proper rendering, reducing confusion for developers new to this workflow.
25-29: Consider adding optional data fields or default values.

Fields like `type`, `status`, and `content` might benefit from clearly stated defaults or optional behavior (e.g., `None`) if partial events are possible. This can prevent validation errors for incomplete event data.
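A sketch of the defaults idea; the field names come from the review, but the class name and the default values themselves are assumptions:

```python
from typing import Optional

from pydantic import BaseModel


class DeepResearchEventData(BaseModel):
    # Defaults let partial events validate instead of raising errors.
    type: str = "retrieve"
    status: str = "pending"
    content: Optional[str] = None  # not every event carries text
```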
32-43: Enrich the example with error handling.

Your example usage of `ctx.write_event_to_stream()` works as a good baseline. You might also illustrate how to handle or log errors (e.g., network issues, invalid event data) for completeness, so developers understand fallback strategies.
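One hedged way the docs example could wrap the stream write; `emit_ui_event` is an illustrative helper, not part of the PR, and it assumes an already-built event object:

```python
import logging

logger = logging.getLogger(__name__)


def emit_ui_event(ctx, event) -> None:
    """Stream a UI event, logging instead of raising if the write fails."""
    try:
        ctx.write_event_to_stream(event)
    except Exception:
        # A failed UI update should not abort the workflow step itself.
        logger.exception("Failed to stream UI event of type %s", getattr(event, "type", "?"))
```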
77-101: Explain asynchronous usage more thoroughly.

It may help to demonstrate the async setup (e.g., an event loop or async environment) required to call `await generate_ui_component(...)`. This clarifies usage for developers unfamiliar with async patterns in Python.

Also applies to: 103-103
llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py (1)

1-31: Evaluate the security implications of dynamic import and reload.

Using `importlib.import_module` and `importlib.reload` can pose security risks if `file_path` points to untrusted or user-uploaded code. Consider adding validation or sandboxing the file before importing to safeguard against malicious code.
llama-index-server/llama_index/server/gen_ui/main.py (3)

328-332: Adopt a ternary operator for brevity.

Replace the if-else block with a single-line ternary statement for clarity:

```diff
- if code_match is None:
-     code = response.text
- else:
-     code = code_match.group(1).strip()
+ code = response.text if code_match is None else code_match.group(1).strip()
```

🧰 Tools
🪛 Ruff (0.8.2)

328-332: Use ternary operator `code = response.text if code_match is None else code_match.group(1).strip()` instead of `if`-`else`-block (SIM108)
352-354: Enhance exception chaining when re-raising.

Use `raise ValueError(...) from e` or `raise ValueError(...) from None` to provide clearer exception context and preserve tracebacks when environment checks fail.

🧰 Tools
🪛 Ruff (0.8.2)

352-354: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
405-418: Suggest disclaimers for generated UI code.

The generated UI code is dynamically produced. Consider adding disclaimers or instructions about reviewing, testing, or validating it before shipping to production, to reduce the risk of runtime errors or undesired UI behaviors.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)

- `llama-index-server/poetry.lock` is excluded by `!**/*.lock`

📒 Files selected for processing (7)

- `llama-index-server/docs/custom_ui_component.md` (2 hunks)
- `llama-index-server/llama_index/server/__init__.py` (1 hunks)
- `llama-index-server/llama_index/server/api/models.py` (1 hunks)
- `llama-index-server/llama_index/server/chat_ui.py` (1 hunks)
- `llama-index-server/llama_index/server/gen_ui/main.py` (1 hunks)
- `llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py` (1 hunks)
- `llama-index-server/pyproject.toml` (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
llama-index-server/llama_index/server/__init__.py (3)
- llama-index-server/llama_index/server/api/models.py (1): `UIEvent` (145-153)
- llama-index-server/tests/test_llamaindex_server.py (1): `server` (28-36)
- llama-index-server/llama_index/server/server.py (2): `LlamaIndexServer` (52-230), `UIConfig` (16-49)

llama-index-server/llama_index/server/api/models.py (1)
- templates/components/workflows/python/deep_research/workflow.py (1): `to_response` (81-82)
🪛 Ruff (0.8.2)
llama-index-server/llama_index/server/gen_ui/main.py
328-332: Use ternary operator `code = response.text if code_match is None else code_match.group(1).strip()` instead of `if`-`else`-block (SIM108)

352-354: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
⏰ Context from checks skipped due to timeout of 90000ms (28)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --example-file)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --no-files)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --llamacloud)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --example-file)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --no-files)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files)
- GitHub Check: Unit Tests (windows-latest, 3.9)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --example-file)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --no-files)
🔇 Additional comments (4)
llama-index-server/llama_index/server/__init__.py (1)

1-4: Appropriate addition to public API exports.

The changes correctly expose the newly added `UIEvent` class through the module's public API by adding it to both the imports and the `__all__` list. This is consistent with how other public entities like `LlamaIndexServer` and `UIConfig` are exposed.
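Based on the symbols cited in the code graph above, the exports likely look roughly like this; a sketch, not the verbatim file:

```python
# llama_index/server/__init__.py (sketch)
from llama_index.server.api.models import UIEvent
from llama_index.server.server import LlamaIndexServer, UIConfig

__all__ = ["LlamaIndexServer", "UIConfig", "UIEvent"]
```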
llama-index-server/pyproject.toml (1)

66-67: ❓ Verification inconclusive

Verify existence of the CLI entrypoint

Adding a CLI entrypoint through Poetry's scripts configuration is a good practice. This will allow users to run `llama-index-server` from the command line once the package is installed. Please ensure that the `llama_index.server.cli:cli` module and function exist and are properly implemented to avoid installation errors.
🏁 Script executed:

```bash
#!/bin/bash
# Check if the CLI module exists in the codebase
find llama-index-server -name "cli.py" | grep -i server

# Check if the cli function is defined in any cli.py file
find llama-index-server -name "cli.py" -exec grep -l "def cli" {} \;
```

Length of output: 127
Verify CLI Entrypoint Implementation

The automated search did not return any output confirming the existence of the CLI entrypoint as referenced (`llama_index.server.cli:cli`). Please ensure that:

- A module file exists at the expected path (e.g. `llama_index/server/cli.py`).
- That file includes a proper `cli` function definition.

If the file or function is missing, it may lead to installation errors. Manual verification is requested to confirm that the entrypoint is implemented appropriately.
🏁 Script executed:

```bash
#!/bin/bash
echo "Searching for module file 'llama_index/server/cli.py' from the repository root:"
find . -type f -path "*/llama_index/server/cli.py"

echo "Searching for the CLI function definition 'def cli' within the module file(s):"
find . -type f -path "*/llama_index/server/cli.py" -exec grep -H "def cli" {} \;
```

Length of output: 608
Verify CLI Entrypoint Implementation

The configuration in `pyproject.toml` registers the entrypoint as `llama_index.server.cli:cli`, but our automated search did not produce any output confirming the existence of a `llama_index/server/cli.py` file or a `cli` function within it. Please manually verify that:

- A file exists at `llama_index/server/cli.py`.
- The file contains a properly implemented `cli` function.

If the file or function is not present, the entrypoint registration will lead to installation errors.
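If the module is indeed missing, a minimal `cli` callable matching the registration could look like the sketch below; the use of click and the placeholder command are assumptions, not the package's actual CLI:

```python
# llama_index/server/cli.py (hypothetical sketch)
import click


@click.group()
def cli() -> None:
    """llama-index-server command line interface."""


@cli.command("generate-ui")
@click.argument("workflow_file")
def generate_ui(workflow_file: str) -> None:
    """Placeholder command; real behavior would live here."""
    click.echo(f"Would generate UI for {workflow_file}")


if __name__ == "__main__":
    cli()
```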
llama-index-server/llama_index/server/chat_ui.py (1)

24-25: ❓ Verification inconclusive

Consider a more sustainable approach for UI package sourcing

Replacing the versioned npm registry URL with a hardcoded build URL may lead to issues:

- The function still accepts a version parameter that is now ignored
- Referencing a specific build number (`@1838`) instead of a semantic version makes updates less predictable
- The `pkg.pr.new` domain might be intended for temporary/PR testing

Consider:

- Using a versioned URL if this is a permanent change
- Adding a comment explaining why this change was made
- Using an environment variable to allow configuration of the source URL

🏁 Script executed:

```bash
#!/bin/bash
# Check if the URL is accessible
curl -s -o /dev/null -w "%{http_code}" https://pkg.pr.new/run-llama/LlamaIndexTS/@llamaindex/server@1838

# Check if there are any references to this URL elsewhere
grep -r "pkg.pr.new" llama-index-server --include="*.py"
```

Length of output: 294
UI Package Sourcing Update Required

- The current implementation still accepts a version parameter, but the hardcoded URL ignores it.
- Referencing a specific build number (`@1838`) instead of a semantic version may lead to unpredictable updates.
- The `pkg.pr.new` domain appears to be for temporary or PR-specific testing, which could affect long-term stability.

Recommendations:

- Use a versioned URL if this change is intended to be permanent.
- Add an inline comment explaining why the URL replacement was made.
- Consider making the URL configurable via an environment variable to support different deployment scenarios (see the sketch after this list).
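A sketch of the environment-variable recommendation; the variable name, default URL format, and function signature are assumptions:

```python
import os


def get_ui_package_url(version: str) -> str:
    """Resolve the UI package source, preferring an explicit override."""
    override = os.getenv("LLAMA_INDEX_SERVER_UI_PACKAGE_URL")
    if override:
        return override
    # Fall back to a versioned npm registry URL instead of a pinned PR build.
    return f"https://registry.npmjs.org/@llamaindex/server/-/server-{version}.tgz"
```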
llama-index-server/llama_index/server/gen_ui/main.py (1)

105-111: Be mindful of shared state across concurrent workflow runs.

The `GenUIWorkflow` constructor stores `llm`, `console`, and other mutable fields that might be manipulated concurrently in multiple runs. If you expect concurrency, consider creating separate workflow instances or adding thread-safety measures.
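One way to follow the suggestion is to build a fresh workflow per invocation rather than sharing one instance; the helper below is illustrative and assumes the constructor signature shown elsewhere in this review:

```python
async def run_gen_ui(llm, event_schemas):
    # A new GenUIWorkflow per run keeps the console/Live-display state isolated
    # between concurrent invocations instead of mutating a shared instance.
    workflow = GenUIWorkflow(llm=llm, timeout=500.0)
    return await workflow.run(events=event_schemas)
```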
Actionable comments posted: 2
♻️ Duplicate comments (1)
llama-index-server/llama_index/server/gen_ui/main.py (1)
407-408: 🛠️ Refactor suggestion: Handle LLM model errors gracefully.

You rely on a fixed model, `"claude-3-7-sonnet-latest"`. If the model name or availability changes, the code might break. Consider adding try-except around the LLM invocation or fallback logic in case the model is unavailable.

```diff
- llm = Anthropic(model="claude-3-7-sonnet-latest", max_tokens=4096)
- workflow = GenUIWorkflow(llm=llm, timeout=500.0)
+ # Try with the latest model, fallback to a more stable one if needed
+ try:
+     llm = Anthropic(model="claude-3-7-sonnet-latest", max_tokens=4096)
+     workflow = GenUIWorkflow(llm=llm, timeout=500.0)
+ except Exception as e:
+     console.print(f"[yellow]Warning: Failed to use latest Claude model: {str(e)}. Falling back to alternative model.[/yellow]")
+     llm = Anthropic(model="claude-3-opus-20240229", max_tokens=4096)
+     workflow = GenUIWorkflow(llm=llm, timeout=500.0)
```
🧹 Nitpick comments (2)
llama-index-server/llama_index/server/gen_ui/main.py (2)
327-333: Simplify the code extraction logic using a ternary operator.

The code extraction from the LLM response can be simplified using a ternary operator.

````diff
  # Extract code from response, handling case where code block is missing
  code_match = re.search(r"```jsx(.*)```", response.text, re.DOTALL)
- if code_match is None:
-     # If no code block found, use full response
-     code = response.text
- else:
-     code = code_match.group(1).strip()
+ # Use ternary operator for simpler code
+ code = response.text if code_match is None else code_match.group(1).strip()
````

🧰 Tools
🪛 Ruff (0.8.2)

328-332: Use ternary operator `code = response.text if code_match is None else code_match.group(1).strip()` instead of `if`-`else`-block (SIM108)
408-408: Consider making the timeout configurable.

The timeout is hardcoded to 500.0 seconds, which might not be appropriate for all workflows. Consider making this configurable.

```diff
- workflow = GenUIWorkflow(llm=llm, timeout=500.0)
+ # Default timeout is 500 seconds but can be overridden
+ timeout = float(os.getenv("GENUI_WORKFLOW_TIMEOUT", "500.0"))
+ workflow = GenUIWorkflow(llm=llm, timeout=timeout)
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)

- `.github/workflows/e2e.yml` (1 hunks)
- `llama-index-server/llama_index/server/gen_ui/main.py` (1 hunks)
- `llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py` (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- .github/workflows/e2e.yml
🚧 Files skipped from review as they are similar to previous changes (1)
- llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py
🧰 Additional context used
🪛 Ruff (0.8.2)
llama-index-server/llama_index/server/gen_ui/main.py
328-332: Use ternary operator `code = response.text if code_match is None else code_match.group(1).strip()` instead of `if`-`else`-block (SIM108)

352-354: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
⏰ Context from checks skipped due to timeout of 90000ms (25)
- GitHub Check: build
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --example-file)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --no-files)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --example-file)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --no-files)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --llamacloud)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --example-file)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --no-files)
🔇 Additional comments (3)
llama-index-server/llama_index/server/gen_ui/main.py (3)
82-102: LGTM - Well-structured code template.

The code structure definition is clear and provides a good template for the generated UI component. The JSX structure follows good practices with component exports, state handling, and proper organization.

112-133: LGTM - Excellent progress visualization.

The status update implementation using Rich's Live display provides excellent user feedback during workflow execution. The color-coded status indicators and panel display make the process easy to follow.

22-66: LGTM - Well-defined event classes.

The event classes are well-organized, have clear docstrings, and properly define the data needed for each step in the workflow. This makes the flow of data through the workflow clear and maintainable.
@leehuwuj please check comments and release
Actionable comments posted: 0
♻️ Duplicate comments (2)
llama-index-server/llama_index/server/gen_ui/main.py (2)
355-357: Adopt exception chaining in the ImportError handler.

This was raised previously, so marking as duplicate feedback. Use `from err` or `from None` to clarify the exception context:

```diff
 except ImportError as err:
-    raise ValueError(
+    raise ValueError(
         "Anthropic package is not installed. Please install it..."
-    )
+    ) from err
```

🧰 Tools
🪛 Ruff (0.8.2)

355-357: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
410-410: Handle fallback if the chosen Anthropic model is unavailable.

This was highlighted in a previous review. Relying on a fixed model name (`"claude-3-7-sonnet-latest"`) without fallback logic may lead to runtime errors if the model is deprecated or renamed.
🧹 Nitpick comments (2)
llama-index-server/llama_index/server/gen_ui/main.py (2)
77-133: Consider extracting repeated prompt text into reusable templates.

You have multiple large prompt templates spread across different methods. Factoring them into a shared helper or config could simplify future maintenance and reduce duplication (see the sketch below).
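A minimal sketch of the shared-template idea; the constant names and wording are placeholders, not the prompts used in the PR:

```python
# prompts.py (illustrative)
PLANNING_PROMPT = (
    "Given the following workflow event schemas:\n"
    "{events}\n"
    "Describe the UI that best presents them."
)

AGGREGATION_PROMPT = (
    "Given these event schemas:\n"
    "{events}\n"
    "and this UI description:\n"
    "{ui_description}\n"
    "Write a function that aggregates the events for rendering."
)


def render_prompt(template: str, **kwargs: str) -> str:
    """Single place to fill the shared prompt templates."""
    return template.format(**kwargs)
```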
299-335: Use a ternary operator for simpler code.

Static analysis suggests condensing the `if code_match is None` block into a one-line ternary expression to reduce verbosity:

```diff
- if code_match is None:
-     code = response.text
- else:
-     code = code_match.group(1).strip()
+ code = response.text if code_match is None else code_match.group(1).strip()
```

🧰 Tools
🪛 Ruff (0.8.2)

331-335: Use ternary operator `code = response.text if code_match is None else code_match.group(1).strip()` instead of `if`-`else`-block (SIM108)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)

- `llama-index-server/llama_index/server/chat_ui.py` (1 hunks)
- `llama-index-server/llama_index/server/gen_ui/main.py` (1 hunks)
- `llama-index-server/pyproject.toml` (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- llama-index-server/pyproject.toml
🚧 Files skipped from review as they are similar to previous changes (1)
- llama-index-server/llama_index/server/chat_ui.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
llama-index-server/llama_index/server/gen_ui/main.py (1)
- llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py (1): `get_workflow_event_schemas` (33-93)
🪛 Ruff (0.8.2)
llama-index-server/llama_index/server/gen_ui/main.py
331-335: Use ternary operator `code = response.text if code_match is None else code_match.group(1).strip()` instead of `if`-`else`-block (SIM108)

355-357: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
⏰ Context from checks skipped due to timeout of 90000ms (28)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --example-file)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --no-files)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --example-file)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --no-files)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --example-file)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file)
- GitHub Check: Unit Tests (windows-latest, 3.9)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --no-files)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files)
🔇 Additional comments (4)
llama-index-server/llama_index/server/gen_ui/main.py (4)
22-28: No issues with the `PlanningEvent` class.

The class definition is straightforward and aligns well with typical event data usage.

30-47: Event classes look solid and consistent.

The event classes (`WriteAggregationEvent`, `WriteUIComponentEvent`) are similarly well-defined, with appropriate fields for storing event data.

49-65: `RefineGeneratedCodeEvent` and `ExtractEventSchemaEvent` classes are clear.

They properly encapsulate the necessary fields, demonstrating a consistent event-driven approach.

67-75: `AggregatePrediction` model is concise.

Using Pydantic for typed data ensures better validation and future maintainability.
Actionable comments posted: 2
♻️ Duplicate comments (1)
llama-index-server/llama_index/server/gen_ui/main.py (1)
381-385: Handle LLM model errors gracefully.

You rely on a fixed model, `"claude-3-7-sonnet-latest"`. If the model name changes or is unavailable, the code might break. Consider adding error handling for LLM initialization.

```bash
#!/bin/bash
# Check if the Anthropic package provides fallback mechanisms or version checks
# Look for error handling patterns in the codebase
rg -A 3 -B 3 "try:.*Anthropic\(" --multiline
```
🧹 Nitpick comments (4)
llama-index-server/llama_index/server/gen_ui/main.py (4)
337-341: Simplify the code with a ternary operator for better readability.

The code block that extracts the JSX code can be simplified using a ternary operator.

````diff
- code_match = re.search(r"```jsx(.*)```", response.text, re.DOTALL)
- if code_match is None:
-     # If no code block found, use full response
-     code = response.text
- else:
-     code = code_match.group(1).strip()
+ code_match = re.search(r"```jsx(.*)```", response.text, re.DOTALL)
+ code = response.text if code_match is None else code_match.group(1).strip()
````

🧰 Tools
🪛 Ruff (0.8.2)

337-341: Use ternary operator `code = response.text if code_match is None else code_match.group(1).strip()` instead of `if`-`else`-block (SIM108)
352-424: Consider supporting both workflow_file and event_cls simultaneously.

The function requires exactly one of `workflow_file` or `event_cls`, but there could be cases where users want to generate UI for both the workflow events and additional custom events.

Consider modifying the function to support both inputs simultaneously by merging the events from both sources:

```diff
- if workflow_file is None and event_cls is None:
-     raise ValueError(
-         "Either workflow_file or event_cls must be provided. Please provide one of them."
-     )
- if workflow_file is not None and event_cls is not None:
-     raise ValueError(
-         "Only one of workflow_file or event_cls can be provided. Please provide only one of them."
-     )
+ if workflow_file is None and event_cls is None:
+     raise ValueError(
+         "Either workflow_file or event_cls must be provided. Please provide at least one of them."
+     )
+
+ event_schemas = []
```

And then later merge the schemas:

```diff
  # Get event schemas
+ event_schemas = []
  if workflow_file is not None:
      # Get event schemas from the input file
      console.rule("[bold blue]Analyzing Events[/bold blue]")
-     event_schemas = get_workflow_event_schemas(workflow_file)
+     workflow_schemas = get_workflow_event_schemas(workflow_file)
+     event_schemas.extend(workflow_schemas)
      if len(event_schemas) == 0:
          console.print(
              Panel(
@@ -403,8 +401,8 @@
              )
  elif event_cls is not None:
-     event_schemas = [
-         {"type": event_cls.__name__, "schema": event_cls.model_json_schema()}
-     ]
+     event_schemas.append(
+         {"type": event_cls.__name__, "schema": event_cls.model_json_schema()}
+     )
```
58-64: Standardize event field types across all event classes.

`ExtractEventSchemaEvent.events` is typed as `List[Any]` while other event classes use `List[Dict[str, Any]]` for their events field.

```diff
- events: List[Any]
+ events: List[Dict[str, Any]]
```
113-134: Consider optimizing the status update mechanism for performance.

The `update_status` method rebuilds the entire status display on each call, which could be inefficient for workflows with many steps. Consider a more efficient approach that updates only the changed parts of the status display, especially for complex workflows.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)

- `llama-index-server/llama_index/server/gen_ui/main.py` (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
llama-index-server/llama_index/server/gen_ui/main.py (1)
- llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py (1): `get_workflow_event_schemas` (33-93)
🪛 Ruff (0.8.2)
llama-index-server/llama_index/server/gen_ui/main.py
337-341: Use ternary operator `code = response.text if code_match is None else code_match.group(1).strip()` instead of `if`-`else`-block (SIM108)
🪛 GitHub Actions: Lint on push or pull request
llama-index-server/llama_index/server/gen_ui/main.py
[error] 1-1: Ruff formatting check failed. File would be reformatted.
⏰ Context from checks skipped due to timeout of 90000ms (28)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --example-file)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --no-files)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --example-file)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --no-files)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --example-file)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file)
- GitHub Check: Unit Tests (windows-latest, 3.9)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --no-files)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files)
🔇 Additional comments (1)
llama-index-server/llama_index/server/gen_ui/main.py (1)
284-290: Added the ui_description parameter properly.

The previous issue with the missing `ui_description` in the format call has been fixed.
…add llm parameter to generate_ui_for_workflow function.
Actionable comments posted: 0
♻️ Duplicate comments (2)
llama-index-server/llama_index/server/gen_ui/main.py (2)
381-385: Handle LLM model errors gracefully.

You rely on a fixed model, `"claude-3-7-sonnet-latest"`. If the model name or availability changes, the code might break. Consider adding try-except around the LLM invocation or fallback logic in case the model is unavailable.
412-413: Add error handling around workflow execution.

The workflow execution could fail for various reasons (network issues, LLM errors, etc.), but there's no error handling. Add try-except blocks to gracefully handle potential errors.
🧹 Nitpick comments (3)
llama-index-server/llama_index/server/gen_ui/main.py (3)
76-103: Consider making the code structure configurable.

The `code_structure` is hardcoded as a class attribute. For better flexibility, consider making it configurable via the constructor or as a parameter to the workflow execution.

````diff
 class GenUIWorkflow(Workflow):
     """
     Generate UI component for event from workflow.
     """

-    code_structure: str = """
+    DEFAULT_CODE_STRUCTURE: str = """
         ```jsx
         // Note: Only shadcn/ui and lucide-react and tailwind css are allowed (but cn() is not supported yet).
         // shadcn import pattern: import { ComponentName } from "@/components/ui/<component_path>";
         // e.g: import { Card, CardHeader, CardTitle, CardContent } from "@/components/ui/card";
         // import { Button } from "@/components/ui/button";

         // export the component
         export default function Component({ events }) {
             // logic for aggregating events (if needed)
             const aggregateEvents = () => {
                 // code for aggregating events here
             }

             // State handling
             // e.g: const [state, setState] = useState({});

             return (
                 // UI code here
             )
         }
         ```
     """

-    def __init__(self, llm: LLM, **kwargs: Any):
+    def __init__(self, llm: LLM, code_structure: Optional[str] = None, **kwargs: Any):
         super().__init__(**kwargs)
         self.llm = llm
+        self.code_structure = code_structure or self.DEFAULT_CODE_STRUCTURE
         self.console = Console()
         self._live: Optional[Live] = None
         self._completed_steps: List[str] = []
         self._current_step: Optional[str] = None
````
337-341: Simplify the code with a ternary operator.

For improved code readability, consider using a ternary operator instead of the if-else block.

````diff
- code_match = re.search(r"```jsx(.*)```", response.text, re.DOTALL)
- if code_match is None:
-     # If no code block found, use full response
-     code = response.text
- else:
-     code = code_match.group(1).strip()
+ code_match = re.search(r"```jsx(.*)```", response.text, re.DOTALL)
+ # If no code block found, use full response, otherwise extract the code
+ code = response.text if code_match is None else code_match.group(1).strip()
````

🧰 Tools
🪛 Ruff (0.8.2)

337-341: Use ternary operator `code = response.text if code_match is None else code_match.group(1).strip()` instead of `if`-`else`-block (SIM108)
412-414: Make the timeout configurable.

The timeout value is hardcoded to 500.0 seconds, which might be too long or too short for different use cases. Consider making it configurable via a parameter.

```diff
- workflow = GenUIWorkflow(llm=llm, timeout=500.0)
- code = await workflow.run(events=event_schemas)
+ workflow = GenUIWorkflow(llm=llm, timeout=timeout or 500.0)
+ code = await workflow.run(events=event_schemas)
```

And update the function signature:

```diff
 async def generate_ui_for_workflow(
     workflow_file: Optional[str] = None,
     event_cls: Optional[Type[BaseModel]] = None,
     llm: Optional[LLM] = None,
+    timeout: Optional[float] = None,
 ) -> str:
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)

- `llama-index-server/llama_index/server/gen_ui/main.py` (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
llama-index-server/llama_index/server/gen_ui/main.py
337-341: Use ternary operator `code = response.text if code_match is None else code_match.group(1).strip()` instead of `if`-`else`-block (SIM108)
⏰ Context from checks skipped due to timeout of 90000ms (28)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file)
- GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --example-file)
- GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --no-files)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --example-file)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file)
- GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --no-files)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --llamacloud)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --example-file)
- GitHub Check: Unit Tests (windows-latest, 3.9)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file)
- GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --no-files)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files)
🔇 Additional comments (9)
llama-index-server/llama_index/server/gen_ui/main.py (9)
1-19: Good structure for imports and rich console setup.

Clean organization of imports with appropriate grouping. The use of the rich library for console output is a good choice for providing interactive feedback during the UI generation process.

21-65: Well-defined event classes for workflow steps.

The event classes are clearly defined with appropriate docstrings and the necessary attributes for each step of the workflow. The class hierarchy is logical, with each event class extending the base Event class and containing the specific properties needed for its step.

66-74: Clear model definition for aggregation prediction.

The `AggregatePrediction` class is well-structured, with appropriate documentation explaining its purpose and the meaning of each field. The use of Pydantic for the model definition ensures proper type validation.

105-134: Well-implemented status update mechanism.

The status update mechanism using Rich's Live display is well-implemented, providing clear visual feedback about the workflow progress. The use of colored indicators for completed and current steps improves readability.

135-145: Good validation in the start step.

The start step properly validates that events are provided before proceeding, which prevents errors later in the workflow. Setting the events in the context for use in subsequent steps is also a good practice.

148-183: Detailed prompt template for UI planning.

The planning prompt template is well-crafted, with clear instructions and examples. The use of Markdown formatting enhances the readability of the prompt for the LLM.

190-234: Well-structured event aggregation step.

The event aggregation step is implemented with a clear prompt template that provides detailed instructions on how to aggregate events. The context passed includes both the events and the UI description, which ensures the aggregation function aligns with the desired UI layout.

275-289: Fixed UI description parameter in the format call.

The format call now correctly includes the `ui_description` parameter, which was previously missing according to past review comments.

374-380: Good input validation.

The function correctly validates that either `workflow_file` or `event_cls` is provided, but not both, which prevents ambiguity in the input parameters.
Summary by CodeRabbit
- Excluded `llama-index-server` changes from triggering E2E tests.