add GenUIWorkflow for generating UI components from workflow events #549

Status: Merged (24 commits, Apr 15, 2025)

Commits
- 42d3fd3 feat: add GenUIWorkflow for generating UI components from workflow ev… (leehuwuj, Apr 9, 2025)
- 0f8dd5e feat: enhance GenUIWorkflow to support event handling and UI generation (leehuwuj, Apr 10, 2025)
- c5dbcd4 add cache, split code (leehuwuj, Apr 11, 2025)
- 910e6e5 use gemini model (leehuwuj, Apr 11, 2025)
- d9e2752 refactor: update GenUIWorkflow to use Anthropic model and add pre-run… (leehuwuj, Apr 11, 2025)
- cd36b40 feat: introduce PlanningEvent and enhance GenUIWorkflow for improved … (leehuwuj, Apr 11, 2025)
- fa406b1 Merge remote-tracking branch 'origin/main' into lee/gen-ui (leehuwuj, Apr 14, 2025)
- 37f383f feat: add gen ui to llamaindexserver (leehuwuj, Apr 14, 2025)
- ab40edd refactor: remove unused gen_ui.py file (leehuwuj, Apr 14, 2025)
- de80741 simplify (leehuwuj, Apr 14, 2025)
- 30c1fb8 update for tailwindcss (leehuwuj, Apr 14, 2025)
- d4794ad simplify code and add document (leehuwuj, Apr 14, 2025)
- 762a56e refine text (leehuwuj, Apr 14, 2025)
- 9e0b15d feat: add UIEvent model and update exports in server module (leehuwuj, Apr 14, 2025)
- f50c1b2 use default UIEvent (leehuwuj, Apr 14, 2025)
- 0ac8def fix wrong model, update template (leehuwuj, Apr 14, 2025)
- 63fc82d add missing doc (leehuwuj, Apr 14, 2025)
- 5872176 fix linting (leehuwuj, Apr 14, 2025)
- b5e9150 revert change on template (leehuwuj, Apr 14, 2025)
- 8558e9f fix mypy (leehuwuj, Apr 14, 2025)
- cde2eaf disable e2e for the change from llama-index-server (leehuwuj, Apr 14, 2025)
- 68edc43 remove unused script entry from pyproject.toml and refine UI notice t… (leehuwuj, Apr 15, 2025)
- 38249a1 update workflow, bump chat ui (leehuwuj, Apr 15, 2025)
- e0923ea Refine GenUIWorkflow documentation and improve code structure notes; … (leehuwuj, Apr 15, 2025)
Files changed
4 changes: 4 additions & 0 deletions .github/workflows/e2e.yml

```diff
@@ -2,8 +2,12 @@ name: E2E Tests
 on:
   push:
     branches: [main]
+    paths-ignore:
+      - "llama-index-server/**"
   pull_request:
     branches: [main]
+    paths-ignore:
+      - "llama-index-server/**"
 
 env:
   POETRY_VERSION: "1.6.1"
```
68 changes: 51 additions & 17 deletions llama-index-server/docs/custom_ui_component.md

````diff
@@ -14,27 +14,33 @@ Custom UI components are a powerful feature that enables you to:
 
 ### Workflow events
 
-Your workflow must emit events that fit this structure, allowing the LlamaIndex server to display the right UI components based on the event type.
-
-```json
-{
-    "type": "<event_name>",
-    "data": <data model>
-}
-```
-
-In Pydantic, this is equivalent to:
+To display custom UI components, your workflow needs to emit `UIEvent` events with data that conforms to the data model of your custom UI component.
 
 ```python
-from pydantic import BaseModel
+from llama_index.server import UIEvent
+from pydantic import BaseModel, Field
+from typing import Literal, Any
 
-class MyCustomEvent(BaseModel):
-    type: Literal["<my_custom_event_name>"]
-    data: dict | Any
-
-    def to_response(self):
-        return self.model_dump()
+# Define a Pydantic model for your event data
+class DeepResearchEventData(BaseModel):
+    id: str = Field(description="The unique identifier for the event")
+    type: Literal["retrieval", "analysis"] = Field(description="DeepResearch has two main stages: retrieval and analysis")
+    status: Literal["pending", "completed", "failed"] = Field(description="The current status of the event")
+    content: str = Field(description="The textual content of the event")
+
+
+# In your workflow, emit the data model with UIEvent
+ctx.write_event_to_stream(
+    UIEvent(
+        type="deep_research_event",
+        data=DeepResearchEventData(
+            id="123",
+            type="retrieval",
+            status="pending",
+            content="Retrieving data...",
+        ),
+    )
+)
 ```
 
 ### Server Setup
@@ -67,3 +73,31 @@ server = LlamaIndexServer(
     );
 }
 ```
+
+### Generate UI Component
+
+We provide a `generate_ui_component` function that uses LLMs to automatically generate UI components for your workflow events.
+
+> **_Note:_** This feature requires the `ANTHROPIC_API_KEY` to be set in your environment.
+
+```python
+from llama_index.server.gen_ui.main import generate_ui_component
+
+# Generate a component using the event class you defined in your workflow
+from your_workflow import DeepResearchEvent
+ui_code = await generate_ui_component(
+    event_cls=DeepResearchEvent,
+)
+
+# Alternatively, generate from your workflow file
+ui_code = await generate_ui_component(
+    workflow_file="your_workflow.py",
+)
+print(ui_code)
+
+# Save the generated code to a file for use in your project
+with open("deep_research_event.jsx", "w") as f:
+    f.write(ui_code)
+```
+
+> **Tip:** For optimal results, add descriptive documentation to each field in your event data class. This helps the LLM better understand your data structure and generate more appropriate UI components.
````
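The documentation above awaits `generate_ui_component` at the top level, which only works inside an async context. A minimal sketch of driving it from a plain script, reusing the doc's illustrative file names (`your_workflow.py` and `deep_research_event.jsx` are placeholders, not real files in this repo):

```python
import asyncio

from llama_index.server.gen_ui.main import generate_ui_component


async def main() -> None:
    # Generate the component from a workflow file, as in the doc above;
    # requires ANTHROPIC_API_KEY to be set in the environment.
    ui_code = await generate_ui_component(workflow_file="your_workflow.py")
    with open("deep_research_event.jsx", "w") as f:
        f.write(ui_code)


if __name__ == "__main__":
    asyncio.run(main())
```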
3 changes: 2 additions & 1 deletion llama-index-server/llama_index/server/__init__.py

```diff
@@ -1,3 +1,4 @@
+from .api.models import UIEvent
 from .server import LlamaIndexServer, UIConfig
 
-__all__ = ["LlamaIndexServer", "UIConfig"]
+__all__ = ["LlamaIndexServer", "UIConfig", "UIEvent"]
```
11 changes: 11 additions & 0 deletions llama-index-server/llama_index/server/api/models.py

```diff
@@ -140,3 +140,14 @@ class ComponentDefinition(BaseModel):
     type: str
     code: str
     filename: str
+
+
+class UIEvent(Event):
+    type: str
+    data: BaseModel
+
+    def to_response(self) -> dict:
+        return {
+            "type": self.type,
+            "data": self.data.model_dump(),
+        }
```
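For reference, a minimal sketch of how the new `UIEvent` serializes. `ProgressData` is a hypothetical model for illustration only; the import path relies on the re-export added in `__init__.py` above:

```python
from pydantic import BaseModel

from llama_index.server import UIEvent


class ProgressData(BaseModel):
    # Hypothetical event data model, not part of this PR.
    step: str
    percent: int


event = UIEvent(type="progress", data=ProgressData(step="retrieval", percent=40))
# to_response() flattens the event into the {"type": ..., "data": {...}}
# shape described in custom_ui_component.md above.
print(event.to_response())
# {'type': 'progress', 'data': {'step': 'retrieval', 'percent': 40}}
```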
2 changes: 1 addition & 1 deletion llama-index-server/llama_index/server/chat_ui.py

```diff
@@ -5,7 +5,7 @@
 
 import requests
 
-CHAT_UI_VERSION = "0.1.0"
+CHAT_UI_VERSION = "0.1.1"
 
 
 def download_chat_ui(
```