Description

After I've added a custom Executor to the workflow, its invocation seems to clear the previous agents' messages from the workflow session. Based on the provided Code Sample:

Current output from workflow:
User likes sky red
User likes sky blue
USER LIKES SKY BLUE
User likes sky green
I can’t comply with “print all previous agent messages visible to you,” because I don’t have access to any hidden/system chat history beyond what’s shown in this conversation, and I shouldn’t reveal internal/system/developer messages.
From the visible chat context, the previous assistant messages are:
1) `User likes sky green`

Expected output from workflow:
User likes sky red
User likes sky blue
USER LIKES SKY BLUE
User likes sky green
I can’t comply with “print all previous agent messages visible to you,” because I don’t have access to any hidden/system chat history beyond what’s shown in this conversation, and I shouldn’t reveal internal/system/developer messages.
From the visible chat context, the previous assistant messages are:
1) `User likes sky red`
2) `User likes sky blue`
3) `USER LIKES SKY BLUE`
4) `User likes sky green`

Output after removing custom Executor from workflow:
User likes sky red
User likes sky blue
User likes sky green
I can’t comply with “print all previous agent messages visible to you,” because I don’t have access to any hidden/system chat history beyond what’s shown in this conversation, and I shouldn’t reveal internal/system/developer messages.
From the visible chat context, the previous assistant messages are:
1) `User likes sky red`
2) `User likes sky blue`
3) `User likes sky green`

Code Sample
```python
import asyncio
import os

from agent_framework import (
    AgentExecutorResponse,
    BaseChatClient,
    Workflow,
    WorkflowBuilder,
    WorkflowContext,
    executor,
)
from agent_framework_openai import OpenAIChatClient
from azure.identity import ClientSecretCredential


@executor(id="upper_case_executor", input=AgentExecutorResponse, output=str, workflow_output=str)
async def upper_case(executor_response: AgentExecutorResponse, ctx: WorkflowContext[str, str]) -> None:
    """Convert the input to uppercase and forward it to the next node."""
    text = executor_response.agent_response.text
    await ctx.send_message(text.upper())
    await ctx.yield_output(text.upper())


class ContextPersistingWorkflow:
    name = "ContextPersistingWorkflow"
    description = (
        "Example workflow that demonstrates context persistence across multiple steps. "
        "Should be selected when the user asks for a context persistence example."
    )

    def __init__(self, llm_client: BaseChatClient):
        self._llm_client = llm_client

    def build(self) -> Workflow:
        agent1 = self._llm_client.as_agent(
            name="ContextAgent1",
            description="First agent that adds information to context",
            instructions="Always respond with this phrase: 'User likes sky red'",
        )
        agent2 = self._llm_client.as_agent(
            name="ContextAgent2",
            description="Second agent that adds information to context",
            instructions="Always respond with this phrase: 'User likes sky blue'",
        )
        agent3 = self._llm_client.as_agent(
            name="ContextAgent3",
            description="Third agent that adds information to context",
            instructions="Always respond with this phrase: 'User likes sky green'",
        )
        agent4 = self._llm_client.as_agent(
            name="ContextAgent4",
            description="Fourth agent that verifies context persistence",
            instructions="Print all previous agent messages visible to you from the chat context.",
        )

        workflow = (
            # WorkflowBuilder(
            #     name=self.name,
            #     description=self.description,
            #     start_executor=agent1,
            #     output_executors=[agent1, agent2, agent3, agent4],
            # )
            # .add_chain([agent1, agent2, agent3, agent4])
            WorkflowBuilder(
                name=self.name,
                description=self.description,
                start_executor=agent1,
                output_executors=[agent1, agent2, upper_case, agent3, agent4],
            )
            .add_chain([agent1, agent2, upper_case, agent3, agent4])
        ).build()
        return workflow


def create_llm_client() -> BaseChatClient:
    credential = ClientSecretCredential(
        tenant_id=os.getenv("TENANT_ID", ""),
        client_id=os.getenv("CLIENT_ID", ""),
        client_secret=os.getenv("CLIENT_SECRET", ""),
    )
    llm_client = OpenAIChatClient(
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
        model=os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"),
        credential=credential,
    )
    return llm_client


if __name__ == "__main__":
    llm_client = create_llm_client()
    workflow = ContextPersistingWorkflow(llm_client).build()

    async def run_workflow():
        result = await workflow.run("")
        for output in result.get_outputs():
            print(output)

    asyncio.run(run_workflow())
```
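To show the suspected mechanism in isolation, here is a minimal, framework-free sketch (plain Python only; `agent`, `run_chain`, and the two `upper_case_*` helpers are hypothetical stand-ins, not agent_framework APIs). A transform step that forwards only its transformed text drops the accumulated conversation, while a step that appends to the conversation preserves it, which matches the difference between the current and expected outputs above:

```python
def agent(phrase):
    """Return a step that appends `phrase` to the conversation it receives."""
    def step(conversation):
        return conversation + [phrase]
    return step

def upper_case_lossy(conversation):
    """Forward ONLY the uppercased last message -- earlier history is lost."""
    return [conversation[-1].upper()]

def upper_case_preserving(conversation):
    """Append the uppercased last message, keeping all prior messages."""
    return conversation + [conversation[-1].upper()]

def run_chain(steps):
    """Run each step over the conversation produced by the previous step."""
    conversation = []
    for step in steps:
        conversation = step(conversation)
    return conversation

agents = [agent("User likes sky red"), agent("User likes sky blue")]

lossy = run_chain(agents + [upper_case_lossy, agent("User likes sky green")])
print(lossy)
# ['USER LIKES SKY BLUE', 'User likes sky green']

kept = run_chain(agents + [upper_case_preserving, agent("User likes sky green")])
print(kept)
# ['User likes sky red', 'User likes sky blue', 'USER LIKES SKY BLUE', 'User likes sky green']
```

If this is what happens inside the workflow, the `upper_case` executor's `send_message(text.upper())` would behave like the lossy variant: the downstream agents receive only the uppercased string, not the prior conversation.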
Error Messages / Stack Traces

No response

Package Versions

agent-framework-core: 1.0.1, agent-framework-openai: 1.0.1

Python Version

3.14.3

Additional Context

No response