Replies: 4 comments 1 reply
-
In LlamaIndex workflows, events are not "deleted" after being consumed; instead, they are managed through a mechanism that allows them to be retained for future use. This is achieved using the … In your example, the …
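For intuition, here is a tiny self-contained sketch of that "retain for future use" idea. This is plain Python, not LlamaIndex's actual internals: `ToyContext` and its `collect_events` method are invented for illustration, loosely mimicking a collect-then-proceed pattern.

```python
# Toy sketch (NOT LlamaIndex's real implementation) of a context that
# buffers events until every expected type has arrived, instead of
# discarding an event as soon as one step has seen it.

class Event:
    pass

class EventA(Event):
    pass

class EventB(Event):
    pass

class ToyContext:
    def __init__(self):
        self._buffer = []

    def collect_events(self, ev, expected_types):
        """Buffer `ev`; return the full list only once every expected
        type has been seen, otherwise return None."""
        self._buffer.append(ev)
        collected = []
        for etype in expected_types:
            match = next((e for e in self._buffer if isinstance(e, etype)), None)
            if match is None:
                return None  # still waiting; buffered events are kept
            collected.append(match)
        return collected

ctx = ToyContext()
print(ctx.collect_events(EventA(), [EventA, EventB]))  # None: EventB not seen yet
result = ctx.collect_events(EventB(), [EventA, EventB])
print([type(e).__name__ for e in result])  # ['EventA', 'EventB']
```

The key point is that an event which has already been delivered stays in the buffer, so a later call can still find it.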
-
Another thing that concerns me -- won't that introduce memory leaks? I'm sorry if I'm digging too much, or if my ideas are irrelevant or too far-fetched.
-
I continue to torture workflows, and I've found something relevant in the sources. Take this code for example:

```python
import asyncio

from llama_index.core.workflow import Context, Event, StartEvent, StopEvent, Workflow, step


class EventA(Event):
    pass


class EventB(Event):
    pass


class Test(Workflow):
    @step
    async def start(self, ev: StartEvent) -> EventA:
        return EventA()

    @step
    async def make_b(self, ev: EventA) -> EventB:
        return EventB()

    @step
    async def finish(self, ctx: Context, ev: EventA | EventB) -> StopEvent:
        b = await ctx.wait_for_event(EventB)
        print("Received b")
        a = await ctx.wait_for_event(EventA)
        print("Received a")
        return StopEvent()


async def main():
    w = Test(verbose=True)
    await w.run()


asyncio.run(main())
```

On my machine this produces:
And it ends in a timeout error. My guess why this happened: first, I've added some …
Q. E. D.
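The timeout itself is easy to reproduce with plain `asyncio`, independent of LlamaIndex. A minimal sketch (`wait_for_missing_event` is an invented helper, not a library call): it waits on an event that nobody will ever set, just as no step above ever emits a second `EventA`.

```python
import asyncio


async def wait_for_missing_event(timeout: float) -> str:
    """Wait on an asyncio.Event that is never set, to show how an
    awaited-but-never-emitted event turns into a timeout."""
    arrived = asyncio.Event()  # stands in for the event we're waiting on
    try:
        # Nobody ever calls arrived.set(), so this can only time out.
        await asyncio.wait_for(arrived.wait(), timeout=timeout)
        return "received"
    except asyncio.TimeoutError:
        return "timeout"


print(asyncio.run(wait_for_missing_event(0.1)))  # timeout
```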
-
There is no "mathematical model" -- saying that it's event-driven is correct. Steps only run when their required events are emitted; the system keeps track of events in queues and whatnot, and handles emitting the events that steps return.
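To make the "queues and whatnot" concrete, here is a toy dispatcher. It is a sketch of the idea, not LlamaIndex's actual implementation: every emitted event is broadcast onto the queue of every step that accepts it, so one step consuming an event does not remove it from another step's queue.

```python
# Toy event-driven dispatcher (a sketch, NOT LlamaIndex internals).
from collections import deque


class StartEvent: pass
class EventA: pass
class EventB: pass
class StopEvent: pass


def run_workflow(steps):
    """steps: list of (accepted_types, fn), where fn(event) -> event.
    Returns a log of (step_name, produced_event_type) entries."""
    queues = [deque() for _ in steps]
    log = []

    def dispatch(ev):
        # Broadcast: every step that accepts this event type gets a copy
        # on its own queue, so steps don't steal events from each other.
        for (accepted, _), q in zip(steps, queues):
            if isinstance(ev, accepted):
                q.append(ev)

    dispatch(StartEvent())
    progressed = True
    while progressed:
        progressed = False
        for (accepted, fn), q in zip(steps, queues):
            if q:
                out = fn(q.popleft())
                log.append((fn.__name__, type(out).__name__))
                if out is not None and not isinstance(out, StopEvent):
                    dispatch(out)
                progressed = True
    return log


def start(ev): return EventA()
def make_b(ev): return EventB()
def finish(ev): return StopEvent()


log = run_workflow([
    ((StartEvent,), start),
    ((EventA,), make_b),
    ((EventA, EventB), finish),  # triggered by either event type
])
# finish runs twice: once for its own copy of EventA, once for EventB.
print(log)
```

Because `EventA` lands on both `make_b`'s queue and `finish`'s queue, `make_b` processing it has no effect on the copy `finish` sees -- which is the broadcast behavior the question below is asking about.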
-
Hi! I've been using LlamaIndex workflows recently, and I'm puzzled about how it works internally.
The thing I don't understand is: what is its mathematical model? The docs say it has an event-driven architecture, but that doesn't explain much.
Take for example this code:
Which produces:
So, it's like `EventA` is consumed in both the `make_b` and `from_a_b` steps. How does the LlamaIndex workflow understand that when `make_b` is called, the `EventA` must not be "consumed" or "deleted" fully, but rather saved for the later `from_a_b` call? This is very interesting to me.