LangChain agents are built on LangGraph, so they support the same streaming stack with agent-focused projections for messages, tool calls, state, and custom updates. For most application and frontend use cases, use Event Streaming through `stream_events(..., version="v3")`. Event Streaming returns a run object with typed projections, so each projection can be consumed independently instead of parsing stream-mode tuples.
What you can stream
| Projection | Use |
|---|---|
| `for event in stream` | Raw protocol events with full envelope and access to every channel. |
| `stream.messages` | Model message streams, one per LLM call. |
| `message.text` | Text deltas and final text for a message. |
| `message.reasoning` | Reasoning deltas for models that expose reasoning content. |
| `message.tool_calls` | Tool-call argument chunks and finalized tool calls. |
| `message.output` | Final message object after the model call completes. |
| `stream.values` | Agent state snapshots. |
| `stream.output` | Final agent state. |
| `stream.subgraphs` | Nested graph runs (sub-agents and plain subgraphs). |
| `stream.extensions` | Custom transformer projections. |
| `stream.tool_calls` | Tool execution lifecycle, inputs, output deltas, final output, and errors. |
`stream.messages` yields `ChatModelStream` objects. Each message stream exposes `.text`, `.reasoning`, `.tool_calls`, and `.output`. Sync projections are iterable for live deltas and drainable for final values: use `str(message.text)` for final text and `message.tool_calls.get()` for finalized tool calls.
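The iterate-then-drain behavior can be sketched with a minimal stand-in. The `TextProjection` class below is hypothetical, not the real LangChain type; it only mirrors the two behaviors named above: iterating yields live deltas, and `str(...)` drains anything unconsumed and returns the final text.

```python
class TextProjection:
    """Hypothetical stand-in for a sync text projection like message.text."""

    def __init__(self, deltas):
        self._pending = list(deltas)
        self._seen = []

    def __iter__(self):
        # Iterating yields live deltas as they arrive.
        while self._pending:
            delta = self._pending.pop(0)
            self._seen.append(delta)
            yield delta

    def __str__(self):
        # Drain whatever has not been consumed yet, then
        # return the final accumulated text.
        for _ in self:
            pass
        return "".join(self._seen)


text = TextProjection(["Hel", "lo ", "world"])
deltas = [d for d in text]   # live deltas: ["Hel", "lo ", "world"]
final = str(text)            # final text: "Hello world"
```

Because `str(...)` drains before joining, calling it without iterating first still produces the full final text.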
Agent messages
Use `stream.messages` when you want model output from each LLM call.
`message.output` gives you the finalized AI message, including provider-specific content blocks. In TypeScript, use `message.usage` when you only need token counts or other usage metadata; in Python, read usage from `message.output.usage_metadata`.
Reasoning content
Reasoning content uses the same shape as text content, but it is available only when the selected model emits reasoning blocks.
Tool calls
There are two useful tool-call projections:
- `message.tool_calls` streams tool-call argument chunks while the model is producing the tool call.
- `stream.tool_calls` streams the lifecycle of tool execution after the tool call starts.
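The model-side half of that split can be sketched with a stand-in. `ToolCallChunks` below is hypothetical, not the real LangChain class; it only mirrors the drainable shape described above, where iteration yields argument fragments and `.get()` returns the finalized tool calls.

```python
import json


class ToolCallChunks:
    """Hypothetical stand-in for message.tool_calls: iterable argument
    deltas plus a .get() that returns the finalized tool calls."""

    def __init__(self, arg_deltas, finalized):
        self._arg_deltas = arg_deltas
        self._finalized = finalized

    def __iter__(self):
        # Argument JSON arrives in fragments while the model streams.
        return iter(self._arg_deltas)

    def get(self):
        # After the model finishes, the parsed tool calls are available.
        return self._finalized


chunks = ToolCallChunks(
    ['{"query": "weat', 'her in SF"}'],
    [{"name": "search", "args": {"query": "weather in SF"}}],
)

streamed_args = "".join(chunks)   # raw argument JSON assembled from deltas
final_calls = chunks.get()
assert json.loads(streamed_args) == final_calls[0]["args"]
```

Tool *execution* (inputs, output deltas, errors) is a separate stream entirely, which is why it lives on `stream.tool_calls` rather than on the message.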
Streaming sub-agents
When a `create_agent` call invokes another `create_agent` (typically via a wrapping tool), the inner agent's events flow at a nested namespace and surface as a handle on `stream.subgraphs`. Each handle exposes the inner agent's own `.messages`, `.values`, `.tool_calls`, and `.output` projections. The `name=` you pass to `create_agent` becomes `subagent.graph_name` (Python) / `subagent.name` (JS), which lets you filter and label per agent.
Every nested `CompiledStateGraph` shows up on `stream.subgraphs`; `create_agent` instances are one specific kind. The same applies to plain `StateGraph` subgraphs invoked from a tool: set `name=` in `.compile(name=...)` to get a label in `subagent.graph_name`. There is no separate sub-agent-only projection; the filter is whatever you write into your loop, so filter on the name to act only on the runs you care about.
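That name filter is just a conditional in the loop. A sketch, using hypothetical record dicts in place of real subgraph handles (the real handles carry the name as an attribute, per the docs above):

```python
# Hypothetical stand-ins for subgraph handles; only the name field matters here.
subgraph_runs = [
    {"graph_name": "researcher"},
    {"graph_name": "writer"},
    {"graph_name": "researcher"},
]

# Act only on the sub-agents you care about by filtering on the name
# you passed to create_agent(name=...) or .compile(name=...).
researcher_runs = [
    run for run in subgraph_runs if run["graph_name"] == "researcher"
]
```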
State and final output
Use `stream.values` for state snapshots and `stream.output` for the final agent state.
Multiple projections
Use `stream.interleave(...)` when you want one sync loop over multiple projections.
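A minimal sketch of the idea: the `interleave` helper below is written from scratch to show the consumption pattern of merging several projections into one loop; the real `stream.interleave(...)` signature and item shape may differ.

```python
def interleave(**projections):
    """Merge several sync iterables into one loop, tagging each item
    with the name of the projection it came from."""
    iterators = {name: iter(p) for name, p in projections.items()}
    while iterators:
        # Round-robin across whatever projections still have items.
        for name in list(iterators):
            try:
                yield name, next(iterators[name])
            except StopIteration:
                del iterators[name]


events = list(interleave(messages=["hi"], values=[{"step": 1}, {"step": 2}]))
# each event pairs an item with its source projection name
```

Tagging each item with its source lets one loop dispatch to per-projection handlers without separate threads or tasks.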
Custom updates
Use custom stream transformers when your application needs a projection that is not built in, such as retrieval progress, artifacts, or domain-specific events.
Related
- Streaming covers low-level Pregel stream modes.
- Build your own projection covers writing application-specific projections.
- Frontend streaming patterns shows UI use cases built on streamed state.

