
add GenUIWorkflow for generating UI components from workflow events #549


Merged
24 commits merged on Apr 15, 2025

Conversation

Collaborator

@leehuwuj leehuwuj commented Apr 9, 2025

Summary by CodeRabbit

  • New Features
    • Introduced a new UI event type and data model for custom UI components.
    • Added an advanced workflow to generate React UI components automatically from backend event schemas.
    • Enabled dynamic aggregation and refinement of generated UI code using language models.
  • Documentation
    • Enhanced documentation with examples and guidelines for emitting UI events and generating UI components.
  • Chores
    • Updated GitHub Actions to exclude llama-index-server changes from triggering E2E tests.


changeset-bot bot commented Apr 9, 2025

⚠️ No Changeset found

Latest commit: e0923ea

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types



coderabbitai bot commented Apr 9, 2025

Walkthrough

This pull request restructures the emission of workflow events by introducing new event classes and updating their usage. A new UIEvent class and a specialized DeepResearchEventData model are implemented, along with corresponding documentation changes. The public API export is updated to include UIEvent. Additionally, a comprehensive UI generation workflow is added using asynchronous steps, a code parsing tool for extracting UI event schemas from workflow code, and a refined static chat UI download link. The GitHub Actions workflow is updated to ignore changes in the llama-index-server directory for E2E tests.
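As a rough sketch of what emitting one of these UI events from a workflow step might look like (only the `type`/`data` shape of `UIEvent` and the `ctx.write_event_to_stream()` call are taken from this PR; the `ResearchProgress` payload model and import paths are assumptions for illustration):

```python
# Minimal sketch; `ResearchProgress` is a hypothetical payload model.
from pydantic import BaseModel
from llama_index.server import UIEvent


class ResearchProgress(BaseModel):
    status: str
    content: str


def emit_progress(ctx, status: str, content: str) -> None:
    # The `type` field selects the custom React component that renders `data`.
    ctx.write_event_to_stream(
        UIEvent(
            type="deep_research_event",
            data=ResearchProgress(status=status, content=content),
        )
    )
```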

Changes

File(s) Change Summary
llama-index-server/docs/custom_ui_component.md, llama-index-server/llama_index/server/__init__.py, llama-index-server/llama_index/server/api/models.py Added new event classes (UIEvent and DeepResearchEventData), updated API exports to include UIEvent, and enhanced documentation with examples for emitting UI events and generating UI components.
llama-index-server/llama_index/server/gen_ui/main.py, llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py Introduced a multi-step asynchronous workflow for dynamic React UI component generation from backend workflow events, including event classes, prompt-driven code generation/refinement, and a parser to extract UI event schemas from workflow source code.
llama-index-server/llama_index/server/chat_ui.py Updated the CHAT_UI_VERSION constant from "0.1.0" to "0.1.1".
.github/workflows/e2e.yml Added paths-ignore directives for push and pull_request events to exclude changes in the llama-index-server directory from triggering E2E tests.
llama-index-server/pyproject.toml Added a trailing blank line at the end of the [tool.poetry.group.dev.dependencies] section with no other changes.

Sequence Diagram(s)

sequenceDiagram
    participant Trigger as Workflow Trigger
    participant Workflow as GenUIWorkflow
    participant LLM as Language Model
    participant FS as File System

    Trigger->>Workflow: start(event)
    Workflow->>LLM: Generate UI description (planning)
    LLM-->>Workflow: Return UI description
    Workflow->>LLM: Create event aggregations
    LLM-->>Workflow: Aggregation logic
    Workflow->>FS: Write UI component code
    Workflow->>LLM: Refine generated code
    LLM-->>Workflow: Final UI component code

Suggested reviewers

  • marcusschiesser

Poem

🐇 I hop through lines of code with glee,
New events inspire a bright UI spree.
Components spring from data so clear,
Each async step brings the future near.
With every commit, my heart does cheer—
CodeRabbit leaps, joyful and sincere!
🐰


@leehuwuj leehuwuj marked this pull request as ready for review April 14, 2025 12:33
@leehuwuj leehuwuj changed the title [draft] add GenUIWorkflow for generating UI components from workflow events add GenUIWorkflow for generating UI components from workflow events Apr 14, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py (1)

33-94: 🛠️ Refactor suggestion

Ensure robust cleanup and error chaining in exception handling.

  1. The current flow removes project_root from sys.path in the finally block. If multiple concurrent operations modify sys.path, they can interfere with one another. Consider using a context manager to restore sys.path more reliably (see the sketch below).
  2. When raising new exceptions, consider raise ValueError(...) from e for clearer stack traces, especially for import errors (B904).
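A minimal sketch of such a context manager (assuming the module currently mutates sys.path directly around the import):

```python
import sys
from contextlib import contextmanager


@contextmanager
def prepended_sys_path(path: str):
    # Temporarily put `path` at the front of sys.path and restore the original
    # list on exit, even if the import inside the block raises.
    original = list(sys.path)
    sys.path.insert(0, path)
    try:
        yield
    finally:
        sys.path[:] = original

# Usage sketch: with prepended_sys_path(project_root): module = importlib.import_module(name)
```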
🧹 Nitpick comments (9)
llama-index-server/llama_index/server/api/models.py (1)

145-153: Clean implementation of UIEvent class

The UIEvent class follows the same pattern as other event classes in the file (e.g., AgentRunEvent and SourceNodesEvent). The to_response method properly serializes the data field using model_dump().

One consideration: you might want to add validation for the type field to ensure consistent event types throughout your application. Perhaps using an Enum similar to AgentRunEventType or adding a validator.
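A sketch of what such a validator could look like (the allowed-type registry is hypothetical, the import paths are assumptions, and `Event` is assumed to behave as a standard Pydantic model here):

```python
from typing import Any

from pydantic import field_validator
from llama_index.core.workflow import Event

# Hypothetical registry of event types that have a registered frontend component.
ALLOWED_UI_EVENT_TYPES = {"deep_research_event"}


class ValidatedUIEvent(Event):
    type: str
    data: Any

    @field_validator("type")
    @classmethod
    def check_type(cls, value: str) -> str:
        # Reject event types that no frontend component is registered for.
        if value not in ALLOWED_UI_EVENT_TYPES:
            raise ValueError(f"Unknown UI event type: {value!r}")
        return value
```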

llama-index-server/docs/custom_ui_component.md (4)

17-21: Clarify file naming convention for UIEvent-based components.

Consider providing a short reminder that the type used in UIEvent (here, "deep_research_event") should match the filename (e.g., deep_research_event.jsx or .tsx) to ensure proper rendering, reducing confusion for developers new to this workflow.


25-29: Consider adding optional data fields or default values.

Fields like type, status, and content might benefit from clearly stated defaults or optional behavior (e.g., None) if partial events are possible. This can prevent validation errors for incomplete event data.
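For example, an event data model with explicit defaults might look like the following (field names taken from the comment above; the class name and defaults are illustrative, not the actual model in the docs):

```python
from typing import Optional

from pydantic import BaseModel


class ResearchEventData(BaseModel):
    # Defaults let partial events validate instead of raising on missing fields.
    type: str = "deep_research_event"
    status: Optional[str] = None
    content: Optional[str] = None
```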


32-43: Enrich example with error handling.

Your example usage of ctx.write_event_to_stream() works as a good baseline. You might also illustrate how to handle or log errors (e.g., network issues, invalid event data) for completeness, so developers understand fallback strategies.
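One possible shape for such error handling around the stream call (a sketch only; the log-and-continue behavior is an assumption about the desired fallback):

```python
import logging

logger = logging.getLogger(__name__)


def emit_ui_event_safely(ctx, event) -> None:
    # Log and continue rather than failing the whole workflow step if the
    # event cannot be serialized or written to the stream.
    try:
        ctx.write_event_to_stream(event)
    except Exception:
        logger.exception("Failed to stream UI event; continuing without UI update")
```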


77-101: Explain asynchronous usage more thoroughly.

It may help to demonstrate the async workflow setup (e.g., an async event loop or environment) required to call await generate_ui_component(...). This clarifies usage for developers unfamiliar with async patterns in Python.

Also applies to: 103-103
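For instance, a minimal async entry point for script usage might look like this (the import path and file names are assumptions; `generate_ui_for_workflow` and its `workflow_file` parameter are taken from gen_ui/main.py in this PR):

```python
import asyncio

# Import path assumed from the module layout introduced in this PR.
from llama_index.server.gen_ui.main import generate_ui_for_workflow


async def main() -> None:
    # Generate a React component from the UI events found in the workflow file.
    code = await generate_ui_for_workflow(workflow_file="app/workflow.py")
    with open("components/deep_research_event.jsx", "w") as f:
        f.write(code)


if __name__ == "__main__":
    # Scripts not already running inside an event loop can use asyncio.run().
    asyncio.run(main())
```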

llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py (1)

1-31: Evaluate security implications of dynamic import and reload.

Using importlib.import_module and importlib.reload can pose security risks if file_path points to untrusted or user-uploaded code. Consider adding validation or sandboxing the file before importing to safeguard against malicious code.
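A small validation sketch along those lines (the allowed-root policy is only an example of a possible safeguard):

```python
from pathlib import Path


def validated_workflow_path(file_path: str, allowed_root: str) -> Path:
    # Only allow importing .py files that live under a trusted project directory.
    resolved = Path(file_path).resolve()
    root = Path(allowed_root).resolve()
    if resolved.suffix != ".py" or root not in resolved.parents:
        raise ValueError(f"Refusing to import untrusted file: {file_path}")
    return resolved
```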

llama-index-server/llama_index/server/gen_ui/main.py (3)

328-332: Adopt a ternary operator for brevity.

Replace the if-else block with a single-line ternary statement for clarity:

- if code_match is None:
-     code = response.text
- else:
-     code = code_match.group(1).strip()
+ code = response.text if code_match is None else code_match.group(1).strip()
🧰 Tools
🪛 Ruff (0.8.2)

328-332: Use ternary operator code = response.text if code_match is None else code_match.group(1).strip() instead of if-else-block

(SIM108)


352-354: Enhance exception chaining when re-raising.

Use raise ValueError(...) from e or raise ValueError(...) from None to provide clearer exception context and preserve tracebacks when environment checks fail.

🧰 Tools
🪛 Ruff (0.8.2)

352-354: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


405-418: Suggest disclaimers for generated UI code.

The generated UI code is dynamically produced. Consider adding disclaimers or instructions about reviewing, testing, or validating it before shipping to production to reduce the risk of runtime errors or undesired UI behaviors.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7997cde and b5e9150.

⛔ Files ignored due to path filters (1)
  • llama-index-server/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (7)
  • llama-index-server/docs/custom_ui_component.md (2 hunks)
  • llama-index-server/llama_index/server/__init__.py (1 hunks)
  • llama-index-server/llama_index/server/api/models.py (1 hunks)
  • llama-index-server/llama_index/server/chat_ui.py (1 hunks)
  • llama-index-server/llama_index/server/gen_ui/main.py (1 hunks)
  • llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py (1 hunks)
  • llama-index-server/pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
llama-index-server/llama_index/server/__init__.py (3)
llama-index-server/llama_index/server/api/models.py (1)
  • UIEvent (145-153)
llama-index-server/tests/test_llamaindex_server.py (1)
  • server (28-36)
llama-index-server/llama_index/server/server.py (2)
  • LlamaIndexServer (52-230)
  • UIConfig (16-49)
llama-index-server/llama_index/server/api/models.py (1)
templates/components/workflows/python/deep_research/workflow.py (1)
  • to_response (81-82)
🪛 Ruff (0.8.2)
llama-index-server/llama_index/server/gen_ui/main.py

328-332: Use ternary operator code = response.text if code_match is None else code_match.group(1).strip() instead of if-else-block

(SIM108)


352-354: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

⏰ Context from checks skipped due to timeout of 90000ms (28)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files)
  • GitHub Check: Unit Tests (windows-latest, 3.9)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --no-files)
🔇 Additional comments (4)
llama-index-server/llama_index/server/__init__.py (1)

1-4: Appropriate addition to public API exports

The changes correctly expose the newly added UIEvent class through the module's public API by adding it to both imports and the __all__ list. This is consistent with how other public entities like LlamaIndexServer and UIConfig are exposed.

llama-index-server/pyproject.toml (1)

66-67:

❓ Verification inconclusive

Verify existence of CLI entrypoint

Adding a CLI entrypoint through Poetry's scripts configuration is a good practice. This will allow users to run llama-index-server from the command line once the package is installed.

Please ensure that the llama_index.server.cli:cli module and function exist and are properly implemented to avoid installation errors.


🏁 Script executed:

#!/bin/bash
# Check if the CLI module exists in the codebase
find llama-index-server -name "cli.py" | grep -i server
# Check if the cli function is defined in any cli.py file
find llama-index-server -name "cli.py" -exec grep -l "def cli" {} \;

Length of output: 127


Verify CLI Entrypoint Implementation

The automated search did not return any output confirming the existence of the CLI entrypoint as referenced (llama_index.server.cli:cli). Please ensure that:

  • A module file exists at the expected path (e.g. llama_index/server/cli.py).
  • That file includes a proper cli function definition.

If the file or function is missing, it may lead to installation errors. Manual verification is requested to confirm that the entrypoint is implemented appropriately.


🏁 Script executed:

#!/bin/bash
echo "Searching for module file 'llama_index/server/cli.py' from the repository root:"
find . -type f -path "*/llama_index/server/cli.py"
echo "Searching for the CLI function definition 'def cli' within the module file(s):"
find . -type f -path "*/llama_index/server/cli.py" -exec grep -H "def cli" {} \;

Length of output: 608


Verify CLI Entrypoint Implementation

The configuration in pyproject.toml registers the entrypoint as llama_index.server.cli:cli, but our automated search did not produce any output confirming the existence of a llama_index/server/cli.py file or a cli function within it. Please manually verify that:

  • A file exists at llama_index/server/cli.py.
  • The file contains a properly implemented cli function.

If the file or function is not present, the entrypoint registration will lead to installation errors.

llama-index-server/llama_index/server/chat_ui.py (1)

24-25:

❓ Verification inconclusive

Consider a more sustainable approach for UI package sourcing

Replacing the versioned npm registry URL with a hardcoded build URL may lead to issues:

  1. The function still accepts a version parameter that is now ignored
  2. Referencing a specific build number (@1838) instead of a semantic version makes updates less predictable
  3. The pkg.pr.new domain might be intended for temporary/PR testing

Consider:

  • Using a versioned URL if this is a permanent change
  • Adding a comment explaining why this change was made
  • Using an environment variable to allow configuration of the source URL

🏁 Script executed:

#!/bin/bash
# Check if the URL is accessible
curl -s -o /dev/null -w "%{http_code}" https://pkg.pr.new/run-llama/LlamaIndexTS/@llamaindex/server@1838
# Check if there are any references to this URL elsewhere
grep -r "pkg.pr.new" llama-index-server --include="*.py"

Length of output: 294


UI Package Sourcing Update Required

  • The current implementation still accepts a version parameter, but the hardcoded URL ignores it.
  • Referencing a specific build number (@1838) instead of a semantic version may lead to unpredictable updates.
  • The pkg.pr.new domain appears to be for temporary or PR-specific testing, which could affect long-term stability.

Recommendations:

  • Use a versioned URL if this change is intended to be permanent.
  • Add an inline comment explaining why the URL replacement was made.
  • Consider making the URL configurable via an environment variable to support different deployment scenarios.
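A sketch of the environment-variable option (the variable name is an assumption; the default is the build URL currently hardcoded in chat_ui.py):

```python
import os

DEFAULT_CHAT_UI_PACKAGE_URL = (
    "https://pkg.pr.new/run-llama/LlamaIndexTS/@llamaindex/server@1838"
)


def get_chat_ui_package_url() -> str:
    # Deployments can override the package source without a code change.
    return os.getenv("CHAT_UI_PACKAGE_URL", DEFAULT_CHAT_UI_PACKAGE_URL)
```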
llama-index-server/llama_index/server/gen_ui/main.py (1)

105-111: Be mindful of shared state across concurrent workflow runs.

The GenUIWorkflow constructor stores llm, console, and other mutable fields that might be manipulated concurrently in multiple runs. If you expect concurrency, consider creating separate workflow instances or adding thread-safety measures.
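If concurrent runs are expected, creating one workflow instance per invocation is a simple way to avoid shared mutable state, e.g. (import path assumed from this PR's layout; the constructor and run call mirror the usage shown later in this review):

```python
from llama_index.server.gen_ui.main import GenUIWorkflow


async def run_gen_ui(llm, event_schemas):
    # A fresh GenUIWorkflow per call keeps console/step-tracking state private
    # to that run instead of being shared across concurrent invocations.
    workflow = GenUIWorkflow(llm=llm, timeout=500.0)
    return await workflow.run(events=event_schemas)
```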


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
llama-index-server/llama_index/server/gen_ui/main.py (1)

407-408: 🛠️ Refactor suggestion

Handle LLM model errors gracefully

You rely on a fixed model "claude-3-7-sonnet-latest". If the model name or availability changes, the code might break. Consider adding try-except around the LLM invocation or fallback logic in case the model is unavailable.

-    llm = Anthropic(model="claude-3-7-sonnet-latest", max_tokens=4096)
-    workflow = GenUIWorkflow(llm=llm, timeout=500.0)
+    # Try with the latest model, fallback to a more stable one if needed
+    try:
+        llm = Anthropic(model="claude-3-7-sonnet-latest", max_tokens=4096)
+        workflow = GenUIWorkflow(llm=llm, timeout=500.0)
+    except Exception as e:
+        console.print(f"[yellow]Warning: Failed to use latest Claude model: {str(e)}. Falling back to alternative model.[/yellow]")
+        llm = Anthropic(model="claude-3-opus-20240229", max_tokens=4096)
+        workflow = GenUIWorkflow(llm=llm, timeout=500.0)
🧹 Nitpick comments (2)
llama-index-server/llama_index/server/gen_ui/main.py (2)

327-333: Simplify code extraction logic using ternary operator

The code extraction from the LLM response can be simplified using a ternary operator.

    # Extract code from response, handling case where code block is missing
    code_match = re.search(r"```jsx(.*)```", response.text, re.DOTALL)
-    if code_match is None:
-        # If no code block found, use full response
-        code = response.text
-    else:
-        code = code_match.group(1).strip()
+    # Use ternary operator for simpler code
+    code = response.text if code_match is None else code_match.group(1).strip()
🧰 Tools
🪛 Ruff (0.8.2)

328-332: Use ternary operator code = response.text if code_match is None else code_match.group(1).strip() instead of if-else-block

(SIM108)


408-408: Consider making timeout configurable

The timeout is hardcoded to 500.0 seconds, which might not be appropriate for all workflows. Consider making this configurable.

-    workflow = GenUIWorkflow(llm=llm, timeout=500.0)
+    # Default timeout is 500 seconds but can be overridden
+    timeout = float(os.getenv("GENUI_WORKFLOW_TIMEOUT", "500.0"))
+    workflow = GenUIWorkflow(llm=llm, timeout=timeout)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b5e9150 and cde2eaf.

📒 Files selected for processing (3)
  • .github/workflows/e2e.yml (1 hunks)
  • llama-index-server/llama_index/server/gen_ui/main.py (1 hunks)
  • llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • .github/workflows/e2e.yml
🚧 Files skipped from review as they are similar to previous changes (1)
  • llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py
🧰 Additional context used
🪛 Ruff (0.8.2)
llama-index-server/llama_index/server/gen_ui/main.py

328-332: Use ternary operator code = response.text if code_match is None else code_match.group(1).strip() instead of if-else-block

(SIM108)


352-354: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

⏰ Context from checks skipped due to timeout of 90000ms (25)
  • GitHub Check: build
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --no-files)
🔇 Additional comments (3)
llama-index-server/llama_index/server/gen_ui/main.py (3)

82-102: LGTM - Well-structured code template

The code structure definition is clear and provides a good template for the generated UI component. The JSX structure follows good practices with component exports, state handling, and proper organization.


112-133: LGTM - Excellent progress visualization

The status update implementation using Rich's Live display provides excellent user feedback during the workflow execution. The color-coded status indicators and panel display make the process easy to follow.


22-66: LGTM - Well-defined event classes

The event classes are well-organized, have clear docstrings, and properly define the data needed for each step in the workflow. This makes the flow of data through the workflow clear and maintainable.

Collaborator

@marcusschiesser marcusschiesser left a comment


@leehuwuj please check comments and release


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
llama-index-server/llama_index/server/gen_ui/main.py (2)

355-357: Adopt exception chaining in the ImportError handler.
This was raised previously, so marking as duplicate feedback. Use from err or from None to clarify exception context:

except ImportError as err:
-    raise ValueError(
+    raise ValueError(
         "Anthropic package is not installed. Please install it..."
-    )
+    ) from err
🧰 Tools
🪛 Ruff (0.8.2)

355-357: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


410-410: Handle fallback if the chosen Anthropic model is unavailable.
This was highlighted in a previous review. Relying on a fixed model name ("claude-3-7-sonnet-latest") without fallback logic may lead to runtime errors if deprecated or renamed.

🧹 Nitpick comments (2)
llama-index-server/llama_index/server/gen_ui/main.py (2)

77-133: Consider extracting repeated prompt text into reusable templates.
You have multiple large prompt templates spread across different methods. Factoring them into a shared helper or config could simplify future maintenance and reduce duplication.


299-335: Use a ternary operator for simpler code.
Static analysis suggests condensing the if code_match is None block into a one-line ternary expression to reduce verbosity:

-        if code_match is None:
-            code = response.text
-        else:
-            code = code_match.group(1).strip()
+        code = response.text if code_match is None else code_match.group(1).strip()
🧰 Tools
🪛 Ruff (0.8.2)

331-335: Use ternary operator code = response.text if code_match is None else code_match.group(1).strip() instead of if-else-block

(SIM108)

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cde2eaf and 38249a1.

📒 Files selected for processing (3)
  • llama-index-server/llama_index/server/chat_ui.py (1 hunks)
  • llama-index-server/llama_index/server/gen_ui/main.py (1 hunks)
  • llama-index-server/pyproject.toml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • llama-index-server/pyproject.toml
🚧 Files skipped from review as they are similar to previous changes (1)
  • llama-index-server/llama_index/server/chat_ui.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
llama-index-server/llama_index/server/gen_ui/main.py (1)
llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py (1)
  • get_workflow_event_schemas (33-93)
🪛 Ruff (0.8.2)
llama-index-server/llama_index/server/gen_ui/main.py

331-335: Use ternary operator code = response.text if code_match is None else code_match.group(1).strip() instead of if-else-block

(SIM108)


355-357: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

⏰ Context from checks skipped due to timeout of 90000ms (28)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file)
  • GitHub Check: Unit Tests (windows-latest, 3.9)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files)
🔇 Additional comments (4)
llama-index-server/llama_index/server/gen_ui/main.py (4)

22-28: No issues with PlanningEvent class.
The class definition is straightforward and aligns well with typical event data usage.


30-47: Event classes look solid and consistent.
The event classes in this range (including WriteAggregationEvent and WriteUIComponentEvent) are similarly well-defined, with appropriate fields for storing event data.


49-65: RefineGeneratedCodeEvent and ExtractEventSchemaEvent classes are clear.
They properly encapsulate necessary fields, demonstrating a consistent event-driven approach.


67-75: AggregatePrediction model is concise.
Using Pydantic for typed data ensures better validations and future maintainability.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
llama-index-server/llama_index/server/gen_ui/main.py (1)

381-385: Handle LLM model errors gracefully.

You rely on a fixed model "claude-3-7-sonnet-latest". If the model name changes or is unavailable, the code might break. Consider adding error handling for LLM initialization.

#!/bin/bash
# Check if the Anthropic package provides fallback mechanisms or version checks

# Look for error handling patterns in the codebase
rg -A 3 -B 3 "try:.*Anthropic\(" --multiline
🧹 Nitpick comments (4)
llama-index-server/llama_index/server/gen_ui/main.py (4)

337-341: Simplify code with ternary operator for better readability.

The code block to extract the JSX code can be simplified using a ternary operator.

-        code_match = re.search(r"```jsx(.*)```", response.text, re.DOTALL)
-        if code_match is None:
-            # If no code block found, use full response
-            code = response.text
-        else:
-            code = code_match.group(1).strip()
+        code_match = re.search(r"```jsx(.*)```", response.text, re.DOTALL)
+        code = response.text if code_match is None else code_match.group(1).strip()
🧰 Tools
🪛 Ruff (0.8.2)

337-341: Use ternary operator code = response.text if code_match is None else code_match.group(1).strip() instead of if-else-block

(SIM108)


352-424: Consider supporting both workflow_file and event_cls simultaneously.

The function requires exactly one of workflow_file or event_cls, but there could be cases where users want to generate UI for both the workflow events and additional custom events.

Consider modifying the function to support both inputs simultaneously by merging the events from both sources:

-    if workflow_file is None and event_cls is None:
-        raise ValueError(
-            "Either workflow_file or event_cls must be provided. Please provide one of them."
-        )
-    if workflow_file is not None and event_cls is not None:
-        raise ValueError(
-            "Only one of workflow_file or event_cls can be provided. Please provide only one of them."
-        )
+    if workflow_file is None and event_cls is None:
+        raise ValueError(
+            "Either workflow_file or event_cls must be provided. Please provide at least one of them."
+        )
+    
+    event_schemas = []

And then later merge the schemas:

     # Get event schemas
+    event_schemas = []
     if workflow_file is not None:
         # Get event schemas from the input file
         console.rule("[bold blue]Analyzing Events[/bold blue]")
-        event_schemas = get_workflow_event_schemas(workflow_file)
+        workflow_schemas = get_workflow_event_schemas(workflow_file)
+        event_schemas.extend(workflow_schemas)
         if len(event_schemas) == 0:
             console.print(
                 Panel(
@@ -403,8 +401,8 @@
             )
     elif event_cls is not None:
-        event_schemas = [
-            {"type": event_cls.__name__, "schema": event_cls.model_json_schema()}
-        ]
+        event_schemas.append(
+            {"type": event_cls.__name__, "schema": event_cls.model_json_schema()}
+        )

58-64: Standardize event field types across all event classes.

ExtractEventSchemaEvent.events is typed as List[Any] while other event classes use List[Dict[str, Any]] for their events field.

-    events: List[Any]
+    events: List[Dict[str, Any]]

113-134: Consider optimizing the status update mechanism for performance.

The update_status method rebuilds the entire status display on each call, which could be inefficient for workflows with many steps.

Consider using a more efficient approach to update only the changed parts of the status display, especially for complex workflows.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 38249a1 and 3dd6671.

📒 Files selected for processing (1)
  • llama-index-server/llama_index/server/gen_ui/main.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
llama-index-server/llama_index/server/gen_ui/main.py (1)
llama-index-server/llama_index/server/gen_ui/parse_workflow_code.py (1)
  • get_workflow_event_schemas (33-93)
🪛 Ruff (0.8.2)
llama-index-server/llama_index/server/gen_ui/main.py

337-341: Use ternary operator code = response.text if code_match is None else code_match.group(1).strip() instead of if-else-block

(SIM108)

🪛 GitHub Actions: Lint on push or pull request
llama-index-server/llama_index/server/gen_ui/main.py

[error] 1-1: Ruff formatting check failed. File would be reformatted.

⏰ Context from checks skipped due to timeout of 90000ms (28)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file)
  • GitHub Check: Unit Tests (windows-latest, 3.9)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files)
🔇 Additional comments (1)
llama-index-server/llama_index/server/gen_ui/main.py (1)

284-290: Added ui_description parameter properly.

The previous issue with missing ui_description in the format call has been fixed.

…add llm parameter to generate_ui_for_workflow function.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
llama-index-server/llama_index/server/gen_ui/main.py (2)

381-385: Handle LLM model errors gracefully

You rely on a fixed model "claude-3-7-sonnet-latest". If the model name or availability changes, the code might break. Consider adding try-except around the LLM invocation or fallback logic in case the model is unavailable.


412-413: Add error handling around workflow execution

The workflow execution could fail for various reasons (network issues, LLM errors, etc.), but there's no error handling.

Add try-except blocks to gracefully handle potential errors:
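A minimal sketch of such a guard around the run call referenced at lines 412-413 (the error message wording is illustrative):

```python
workflow = GenUIWorkflow(llm=llm, timeout=500.0)
try:
    code = await workflow.run(events=event_schemas)
except Exception as e:
    # Surface the failure in the console output and re-raise for the caller.
    console.print(f"[red]UI generation failed: {e}[/red]")
    raise
```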

🧹 Nitpick comments (3)
llama-index-server/llama_index/server/gen_ui/main.py (3)

76-103: Consider making the code structure configurable

The code_structure is hardcoded as a class attribute. For better flexibility, consider making it configurable via the constructor or as a parameter to the workflow execution.

 class GenUIWorkflow(Workflow):
     """
     Generate UI component for event from workflow.
     """
 
-    code_structure: str = """
+    DEFAULT_CODE_STRUCTURE: str = """
         ```jsx
             // Note: Only shadcn/ui and lucide-react and tailwind css are allowed (but cn() is not supported yet).
             // shadcn import pattern: import { ComponentName } from "@/components/ui/<component_path>";
             // e.g: import { Card, CardHeader, CardTitle, CardContent } from "@/components/ui/card";
             //      import { Button } from "@/components/ui/button";
 
             // export the component
             export default function Component({ events }) {
                 // logic for aggregating events (if needed)
                 const aggregateEvents = () => {
                     // code for aggregating events here
                 }
             
                 // State handling
                 // e.g: const [state, setState] = useState({});
 
                 return (
                     // UI code here
                 )
             }
         ```
     """
 
-    def __init__(self, llm: LLM, **kwargs: Any):
+    def __init__(self, llm: LLM, code_structure: Optional[str] = None, **kwargs: Any):
         super().__init__(**kwargs)
         self.llm = llm
+        self.code_structure = code_structure or self.DEFAULT_CODE_STRUCTURE
         self.console = Console()
         self._live: Optional[Live] = None
         self._completed_steps: List[str] = []
         self._current_step: Optional[str] = None

337-341: Simplify code with ternary operator

For improved code readability, consider using a ternary operator instead of the if-else block.

-        code_match = re.search(r"```jsx(.*)```", response.text, re.DOTALL)
-        if code_match is None:
-            # If no code block found, use full response
-            code = response.text
-        else:
-            code = code_match.group(1).strip()
+        code_match = re.search(r"```jsx(.*)```", response.text, re.DOTALL)
+        # If no code block found, use full response, otherwise extract the code
+        code = response.text if code_match is None else code_match.group(1).strip()
🧰 Tools
🪛 Ruff (0.8.2)

337-341: Use ternary operator code = response.text if code_match is None else code_match.group(1).strip() instead of if-else-block

(SIM108)


412-414: Make timeout configurable

The timeout value is hardcoded to 500.0 seconds, which might be too long or too short for different use cases. Consider making it configurable via a parameter.

-    workflow = GenUIWorkflow(llm=llm, timeout=500.0)
-    code = await workflow.run(events=event_schemas)
+    workflow = GenUIWorkflow(llm=llm, timeout=timeout or 500.0)
+    code = await workflow.run(events=event_schemas)

And update the function signature:

 async def generate_ui_for_workflow(
     workflow_file: Optional[str] = None,
     event_cls: Optional[Type[BaseModel]] = None,
     llm: Optional[LLM] = None,
+    timeout: Optional[float] = None,
 ) -> str:
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3dd6671 and e0923ea.

📒 Files selected for processing (1)
  • llama-index-server/llama_index/server/gen_ui/main.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
llama-index-server/llama_index/server/gen_ui/main.py

337-341: Use ternary operator code = response.text if code_match is None else code_match.group(1).strip() instead of if-else-block

(SIM108)

⏰ Context from checks skipped due to timeout of 90000ms (28)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: typescript (18, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file)
  • GitHub Check: typescript (18, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: Unit Tests (windows-latest, 3.9)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file)
  • GitHub Check: typescript (18, 3.11, macos-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files)
🔇 Additional comments (9)
llama-index-server/llama_index/server/gen_ui/main.py (9)

1-19: Good structure for imports and rich console setup

Clean organization of imports with appropriate grouping. The use of rich library for console output is a good choice for providing interactive feedback during the UI generation process.


21-65: Well-defined event classes for workflow steps

The event classes are clearly defined with appropriate docstrings and necessary attributes for each step of the workflow. The class hierarchy is logical, with each event class extending the base Event class and containing the specific properties needed for its step.


66-74: Clear model definition for aggregation prediction

The AggregatePrediction class is well-structured with appropriate documentation explaining its purpose and the meaning of each field. The use of Pydantic for the model definition ensures proper type validation.


105-134: Well-implemented status update mechanism

The status update mechanism using Rich's Live display is well-implemented, providing clear visual feedback about the workflow progress. The use of colored indicators for completed and current steps improves readability.


135-145: Good validation in start step

The start step properly validates that events are provided before proceeding, which prevents errors later in the workflow. Setting the events in the context for use in subsequent steps is also a good practice.


148-183: Detailed prompt template for UI planning

The planning prompt template is well-crafted with clear instructions and examples. The use of Markdown formatting enhances the readability of the prompt for the LLM.


190-234: Well-structured event aggregation step

The event aggregation step is implemented with a clear prompt template that provides detailed instructions on how to aggregate events. The context passed includes both the events and the UI description, which ensures the aggregation function aligns with the desired UI layout.


275-289: Fixed UI description parameter in format call

The format call now correctly includes the ui_description parameter, which was previously missing according to past review comments.


374-380: Good input validation

The function correctly validates that either workflow_file or event_cls is provided, but not both, which prevents ambiguity in the input parameters.
