
Commit 241d82a

feat: add create-llama artifacts template (python) (#586)

* add artifact template for python
* Add artifact workflows for code and document generation
  - Introduced `CodeArtifactWorkflow` and `DocumentArtifactWorkflow` classes to handle code and document artifacts respectively.
  - Updated README to include instructions for modifying the factory method to select the appropriate workflow.
  - Enhanced clarity in class documentation and improved naming conventions for better understanding.
* bump packages
* fix wrong name
* add ts workflows
* revert change for TS
* docs: fix docs
* add metadata fields

Co-authored-by: Marcus Schiesser <[email protected]>

1 parent b16cfd8 commit 241d82a

File tree

10 files changed: +954 −9 lines changed


.changeset/chilly-foxes-remain.md

Lines changed: 5 additions & 0 deletions (new file)

```md
---
"create-llama": patch
---

Add artifacts use case (python)
```

packages/create-llama/helpers/python.ts

Lines changed: 1 addition & 1 deletion

```diff
@@ -562,7 +562,7 @@ const installLlamaIndexServerTemplate = async ({
     process.exit(1);
   }
 
-  await copy("workflow.py", path.join(root, "app"), {
+  await copy("*.py", path.join(root, "app"), {
     parents: true,
     cwd: path.join(templatesDir, "components", "workflows", "python", useCase),
   });
```
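This one-line change broadens the template install from copying a single `workflow.py` to copying every `.py` file in the use case's template directory, which the artifacts template needs since it ships several Python modules. A rough, hypothetical Python equivalent of that copy step (the helper name and paths here are illustrative, not create-llama's actual code):

```python
import shutil
from pathlib import Path

def copy_py_files(template_dir: Path, app_dir: Path) -> list[str]:
    """Copy every .py file from the use-case template into the app folder,
    mirroring the `copy("*.py", ...)` call in the diff above."""
    app_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in sorted(template_dir.glob("*.py")):
        shutil.copy(src, app_dir / src.name)
        copied.append(src.name)
    return copied
```

Non-Python files (such as a README) in the template directory are intentionally left behind by the glob.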

packages/create-llama/helpers/types.ts

Lines changed: 2 additions & 1 deletion

```diff
@@ -57,7 +57,8 @@ export type TemplateUseCase =
   | "form_filling"
   | "extractor"
   | "contract_review"
-  | "agentic_rag";
+  | "agentic_rag"
+  | "artifacts";
 
 // Config for both file and folder
 export type FileSourceConfig =
   | {
```

packages/create-llama/questions/simple.ts

Lines changed: 22 additions & 6 deletions

```diff
@@ -6,7 +6,11 @@ import { ModelConfig, TemplateFramework } from "../helpers/types";
 import { PureQuestionArgs, QuestionResults } from "./types";
 import { askPostInstallAction, questionHandlers } from "./utils";
 
-type AppType = "agentic_rag" | "financial_report" | "deep_research";
+type AppType =
+  | "agentic_rag"
+  | "financial_report"
+  | "deep_research"
+  | "artifacts";
 
 type SimpleAnswers = {
   appType: AppType;
@@ -42,6 +46,12 @@ export const askSimpleQuestions = async (
         description:
           "Researches and analyzes provided documents from multiple perspectives, generating a comprehensive report with citations to support key findings and insights.",
       },
+      {
+        title: "Artifacts",
+        value: "artifacts",
+        description:
+          "Build your own Vercel's v0 or OpenAI's canvas-styled UI.",
+      },
     ],
   },
   questionHandlers,
@@ -52,7 +62,7 @@ export const askSimpleQuestions = async (
 
   let useLlamaCloud = false;
 
-  if (appType !== "extractor" && appType !== "contract_review") {
+  if (appType !== "artifacts") {
     const { language: newLanguage } = await prompts(
       {
         type: "select",
@@ -111,10 +121,10 @@ const convertAnswers = async (
   args: PureQuestionArgs,
   answers: SimpleAnswers,
 ): Promise<QuestionResults> => {
-  const MODEL_GPT4o: ModelConfig = {
+  const MODEL_GPT41: ModelConfig = {
     provider: "openai",
     apiKey: args.openAiKey,
-    model: "gpt-4o",
+    model: "gpt-4.1",
     embeddingModel: "text-embedding-3-large",
     dimensions: 1536,
     isConfigured(): boolean {
@@ -135,13 +145,19 @@ const convertAnswers = async (
       template: "llamaindexserver",
       dataSources: EXAMPLE_10K_SEC_FILES,
       tools: getTools(["interpreter", "document_generator"]),
-      modelConfig: MODEL_GPT4o,
+      modelConfig: MODEL_GPT41,
     },
     deep_research: {
       template: "llamaindexserver",
       dataSources: EXAMPLE_10K_SEC_FILES,
       tools: [],
-      modelConfig: MODEL_GPT4o,
+      modelConfig: MODEL_GPT41,
+    },
+    artifacts: {
+      template: "llamaindexserver",
+      dataSources: [],
+      tools: [],
+      modelConfig: MODEL_GPT41,
     },
   };
```

Lines changed: 137 additions & 0 deletions (new file)

```jsx
import { Badge } from "@/components/ui/badge";
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
import { Progress } from "@/components/ui/progress";
import { Skeleton } from "@/components/ui/skeleton";
import { cn } from "@/lib/utils";
import { Markdown } from "@llamaindex/chat-ui/widgets";
import { ListChecks, Loader2, Wand2 } from "lucide-react";
import { useEffect, useState } from "react";

const STAGE_META = {
  plan: {
    icon: ListChecks,
    badgeText: "Step 1/2: Planning",
    gradient: "from-blue-100 via-blue-50 to-white",
    progress: 33,
    iconBg: "bg-blue-100 text-blue-600",
    badge: "bg-blue-100 text-blue-700",
  },
  generate: {
    icon: Wand2,
    badgeText: "Step 2/2: Generating",
    gradient: "from-violet-100 via-violet-50 to-white",
    progress: 66,
    iconBg: "bg-violet-100 text-violet-600",
    badge: "bg-violet-100 text-violet-700",
  },
};

function ArtifactWorkflowCard({ event }) {
  const [visible, setVisible] = useState(event?.state !== "completed");
  const [fade, setFade] = useState(false);

  useEffect(() => {
    if (event?.state === "completed") {
      setVisible(false);
    } else {
      setVisible(true);
      setFade(false);
    }
  }, [event?.state]);

  if (!event || !visible) return null;

  const { state, requirement } = event;
  const meta = STAGE_META[state];

  if (!meta) return null;

  return (
    <div className="flex justify-center items-center w-full min-h-[180px] py-2">
      <Card
        className={cn(
          "w-full shadow-md rounded-xl transition-all duration-500",
          "border-0",
          fade && "opacity-0 pointer-events-none",
          `bg-gradient-to-br ${meta.gradient}`,
        )}
        style={{
          boxShadow:
            "0 2px 12px 0 rgba(80, 80, 120, 0.08), 0 1px 3px 0 rgba(80, 80, 120, 0.04)",
        }}
      >
        <CardHeader className="flex flex-row items-center gap-2 pb-1 pt-2 px-3">
          <div
            className={cn(
              "rounded-full p-1 flex items-center justify-center",
              meta.iconBg,
            )}
          >
            <meta.icon className="w-5 h-5" />
          </div>
          <CardTitle className="text-base font-semibold flex items-center gap-2">
            <Badge className={cn("ml-1", meta.badge, "text-xs px-2 py-0.5")}>
              {meta.badgeText}
            </Badge>
          </CardTitle>
        </CardHeader>
        <CardContent className="px-3 py-1">
          {state === "plan" && (
            <div className="flex flex-col items-center gap-2 py-2">
              <Loader2 className="animate-spin text-blue-400 w-6 h-6 mb-1" />
              <div className="text-sm text-blue-900 font-medium text-center">
                Analyzing your request...
              </div>
              <Skeleton className="w-1/2 h-3 rounded-full mt-1" />
            </div>
          )}
          {state === "generate" && (
            <div className="flex flex-col gap-2 py-2">
              <div className="flex items-center gap-1">
                <Loader2 className="animate-spin text-violet-400 w-4 h-4" />
                <span className="text-violet-900 font-medium text-sm">
                  Working on the requirement:
                </span>
              </div>
              <div className="rounded-lg border border-violet-200 bg-violet-50 px-2 py-1 max-h-24 overflow-auto text-xs">
                {requirement ? (
                  <Markdown content={requirement} />
                ) : (
                  <span className="text-violet-400 italic">
                    No requirements available yet.
                  </span>
                )}
              </div>
            </div>
          )}
        </CardContent>
        <div className="px-3 pb-2 pt-1">
          <Progress
            value={meta.progress}
            className={cn(
              "h-1 rounded-full bg-gray-200",
              state === "plan" && "bg-blue-200",
              state === "generate" && "bg-violet-200",
            )}
            indicatorClassName={cn(
              "transition-all duration-500",
              state === "plan" && "bg-blue-500",
              state === "generate" && "bg-violet-500",
            )}
          />
        </div>
      </Card>
    </div>
  );
}

export default function Component({ events }) {
  const aggregateEvents = () => {
    if (!events || events.length === 0) return null;
    return events[events.length - 1];
  };

  const event = aggregateEvents();

  return <ArtifactWorkflowCard event={event} />;
}
```
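The component above keys everything off the latest event: only the last entry in `events` is rendered, its `state` field (`plan`, `generate`, or `completed`) selects the card stage, and an optional `requirement` string is shown while generating. A small, hypothetical Python sketch of that selection logic and payload shape (field names are inferred from the component, not taken from the server template):

```python
# Hypothetical mirror of the card's aggregateEvents logic: only the most
# recent workflow event drives the UI, and "completed" hides the card.
def latest_event(events):
    return events[-1] if events else None

def card_is_visible(event):
    # The card renders only for the two stages with STAGE_META entries.
    return event is not None and event.get("state") in ("plan", "generate")

# Example stream as the component would receive it, oldest first.
stream = [
    {"state": "plan"},
    {"state": "generate", "requirement": "Draft the report outline."},
]
```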
Lines changed: 69 additions & 0 deletions (new file)

This is a [LlamaIndex](https://www.llamaindex.ai/) project using [Workflows](https://docs.llamaindex.ai/en/stable/understanding/workflows/).

## Getting Started

First, set up the environment with uv:

> **_Note:_** This step is not needed if you are using the dev-container.

```shell
uv sync
```

Next, check the parameters that have been pre-configured in the `.env` file in this directory. Make sure you have set `OPENAI_API_KEY` for the LLM.

Then run the development server:

```shell
uv run fastapi dev
```

Open [http://localhost:8000](http://localhost:8000) in your browser to start the chat UI.
To start the app optimized for **production**, run:

```shell
uv run fastapi run
```
## Configure LLM and Embedding Model

You can configure the [LLM](https://docs.llamaindex.ai/en/stable/module_guides/models/llms) and [embedding model](https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings) in [settings.py](app/settings.py).
## Use Case

We have prepared two artifact workflows:

- [Code Workflow](app/code_workflow.py): generates code and displays it in the UI, like Vercel's v0.
- [Document Workflow](app/document_workflow.py): generates and updates a document, like OpenAI's canvas.

Modify the factory method in [`workflow.py`](app/workflow.py) to decide which artifact workflow to use. Without any changes, the Code Workflow is used.
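The factory selection can be sketched as follows. This is a hypothetical, self-contained stand-in: in the generated project the real classes live in `app/code_workflow.py` and `app/document_workflow.py` and take their own constructor arguments, so check `workflow.py` for the actual factory signature.

```python
# Hypothetical stand-ins for the two artifact workflows shipped with the
# template; the real classes are defined in app/code_workflow.py and
# app/document_workflow.py.
class CodeArtifactWorkflow:
    """v0-style code generation workflow."""

class DocumentArtifactWorkflow:
    """Canvas-style document generation workflow."""

def create_workflow(use_case: str = "code"):
    # The shipped template defaults to the Code Workflow; change the
    # return value to switch to the Document Workflow.
    if use_case == "document":
        return DocumentArtifactWorkflow()
    return CodeArtifactWorkflow()
```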
You can start by sending a request in the [chat UI](http://localhost:8000), or you can test the `/api/chat` endpoint with the following curl request:

```shell
curl --location 'localhost:8000/api/chat' \
--header 'Content-Type: application/json' \
--data '{ "messages": [{ "role": "user", "content": "Create a report comparing the finances of Apple and Tesla" }] }'
```
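The same call can be scripted in Python with only the standard library. A minimal sketch (it assumes the dev server from `uv run fastapi dev` is listening on port 8000, so the actual network call is left commented out):

```python
import json
from urllib.request import Request, urlopen

# Same JSON body as the curl example above.
body = json.dumps({
    "messages": [
        {
            "role": "user",
            "content": "Create a report comparing the finances of Apple and Tesla",
        }
    ]
}).encode()

req = Request(
    "http://localhost:8000/api/chat",
    data=body,
    headers={"Content-Type": "application/json"},
)
# Uncomment once the dev server is running:
# with urlopen(req) as resp:
#     print(resp.read().decode())
```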
## Customize the UI

To customize the UI, you can start by modifying the [./components/ui_event.jsx](./components/ui_event.jsx) file.

You can also generate new UI code for the workflow with an LLM by running:

```shell
uv run generate_ui
```
## Learn More

To learn more about LlamaIndex, take a look at the following resources:

- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex.
- [Workflows Introduction](https://docs.llamaindex.ai/en/stable/understanding/workflows/) - learn about LlamaIndex workflows.
- [LlamaIndex Server](https://pypi.org/project/llama-index-server/)

You can check out [the LlamaIndex GitHub repository](https://github.com/run-llama/llama_index) - your feedback and contributions are welcome!
