Introduction
AI agents that can plan, reason, and execute complex multi-step tasks are the next evolution of AI applications. In this tutorial, we will use LangGraph (by LangChain) and Claude to build a production-ready AI agent that can research topics, write content, and manage workflows autonomously.
Prerequisites
- Python 3.10+
- Anthropic API key
- Basic understanding of LLMs and APIs
- Familiarity with async Python
Step 1: Environment Setup
```bash
pip install langgraph langchain-anthropic langchain-community tavily-python
```

Step 2: Define Agent State
```python
from typing import TypedDict, Annotated

from langgraph.graph.message import add_messages


class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    task: str
    plan: str
    result: str
    iteration: int
```

Step 3: Create Agent Nodes
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import SystemMessage, HumanMessage

llm = ChatAnthropic(model="claude-opus-4-1")


async def planner(state: AgentState) -> AgentState:
    """Plans the approach for the given task."""
    response = await llm.ainvoke([
        SystemMessage(content="You are a planning agent. Break the task into clear steps."),
        HumanMessage(content=f"Task: {state['task']}\nCreate a step-by-step plan.")
    ])
    return {"plan": response.content, "messages": [response]}


async def executor(state: AgentState) -> AgentState:
    """Executes the planned steps."""
    response = await llm.ainvoke([
        SystemMessage(content="You are an execution agent. Follow the plan precisely."),
        HumanMessage(content=f"Plan: {state['plan']}\nExecute and provide results.")
    ])
    return {"result": response.content, "messages": [response]}


async def reviewer(state: AgentState) -> AgentState:
    """Reviews the execution results."""
    response = await llm.ainvoke([
        SystemMessage(content="You are a review agent. Evaluate the quality of the result."),
        HumanMessage(content=f"Task: {state['task']}\nResult: {state['result']}\nIs this complete and accurate? Reply with the word 'approved' if so.")
    ])
    return {"messages": [response], "iteration": state.get("iteration", 0) + 1}
```

Note that the reviewer prompt explicitly asks for the word "approved" — the routing function in the next step looks for that token in the reviewer's reply.

Step 4: Build the Graph
```python
from langgraph.graph import StateGraph, START, END


def should_continue(state: AgentState) -> str:
    if state.get("iteration", 0) >= 3:
        return "end"
    last_message = state["messages"][-1].content.lower()
    if "approved" in last_message or "complete" in last_message:
        return "end"
    return "revise"


graph = StateGraph(AgentState)
graph.add_node("planner", planner)
graph.add_node("executor", executor)
graph.add_node("reviewer", reviewer)

graph.add_edge(START, "planner")
graph.add_edge("planner", "executor")
graph.add_edge("executor", "reviewer")
graph.add_conditional_edges("reviewer", should_continue, {
    "end": END,
    "revise": "planner"
})

agent = graph.compile()
```
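Before making real model calls, the routing logic in should_continue can be sanity-checked in isolation. A minimal sketch, using a hypothetical StubMessage dataclass in place of real LangChain message objects:

```python
from dataclasses import dataclass


@dataclass
class StubMessage:
    content: str


def should_continue(state) -> str:
    # Same routing logic as the reviewer's conditional edge above.
    if state.get("iteration", 0) >= 3:
        return "end"
    last_message = state["messages"][-1].content.lower()
    if "approved" in last_message or "complete" in last_message:
        return "end"
    return "revise"


# An approving review ends the run...
print(should_continue({"iteration": 1, "messages": [StubMessage("Approved, well done.")]}))  # end
# ...a critical review sends the task back to the planner...
print(should_continue({"iteration": 1, "messages": [StubMessage("Needs more detail.")]}))    # revise
# ...and the iteration cap ends the run regardless of the verdict.
print(should_continue({"iteration": 3, "messages": [StubMessage("Needs more detail.")]}))    # end
```

Keyword matching like this is simple but brittle; for production use, a structured output field from the reviewer is a more robust signal.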
Step 5: Add Tool Use

```python
from langchain_community.tools.tavily_search import TavilySearchResults

search = TavilySearchResults(max_results=3)
tools = [search]
llm_with_tools = llm.bind_tools(tools)
```

To put the tools to work, call llm_with_tools in place of llm inside the executor node and execute any tool calls the model returns; TavilySearchResults also expects a TAVILY_API_KEY environment variable to be set.
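bind_tools only advertises the tools to the model — something still has to execute the calls the model returns. The shape of that loop can be sketched with plain-Python stubs (the FakeModel and fake_search below are hypothetical stand-ins for Claude and Tavily; in a real graph, LangGraph's prebuilt ToolNode plays this role):

```python
def fake_search(query: str) -> str:
    """Stand-in for the Tavily search tool."""
    return f"results for: {query}"


class FakeModel:
    """Stand-in for llm_with_tools: requests one search, then answers."""

    def __init__(self):
        self.calls = 0

    def invoke(self, messages):
        self.calls += 1
        if self.calls == 1:
            # First turn: the model asks for a tool call instead of answering.
            return {"tool_calls": [{"name": "search", "args": {"query": "quantum computing"}}],
                    "content": ""}
        # Second turn: with the tool output in context, it answers directly.
        return {"tool_calls": [], "content": "summary based on " + messages[-1]}


def run_with_tools(model, tools, messages):
    # Keep invoking until the model stops requesting tools.
    while True:
        response = model.invoke(messages)
        if not response["tool_calls"]:
            return response["content"]
        for call in response["tool_calls"]:
            result = tools[call["name"]](**call["args"])
            messages.append(result)  # tool output goes back into the context


answer = run_with_tools(FakeModel(), {"search": fake_search},
                        ["Task: summarize quantum computing"])
print(answer)  # summary based on results for: quantum computing
```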
Step 6: Run the Agent

```python
import asyncio


async def main():
    result = await agent.ainvoke({
        "task": "Research the latest developments in quantum computing and write a brief summary",
        "messages": [],
        "plan": "",
        "result": "",
        "iteration": 0,
    })
    print(result["result"])


asyncio.run(main())
```

Troubleshooting
- Agent loops endlessly: cap the iteration count in the conditional edge, as should_continue does above
- Tool calls fail: verify that ANTHROPIC_API_KEY and TAVILY_API_KEY are set and that the tool is reachable
- Poor plan quality: tighten the planner node's system prompt with concrete scope and formatting requirements
- State management issues: initialize every state field up front and read optional fields with state.get(...) defaults
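On the last point, one simple guard is to build the initial state from a helper so every field the graph reads is present, and to read optional fields defensively. A small sketch (the initial_state helper is not part of the tutorial code above, just an illustration of the pattern):

```python
def initial_state(task: str) -> dict:
    """Build a fully populated input state so no node hits a missing key."""
    return {
        "task": task,
        "messages": [],
        "plan": "",
        "result": "",
        "iteration": 0,
    }


state = initial_state("Summarize quantum computing news")

# Inside nodes, prefer .get() with a default for fields another node
# may not have written yet; state["iteration"] would raise KeyError
# if the field were ever omitted.
iteration = state.get("iteration", 0)
print(iteration)  # 0
```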
Conclusion
LangGraph provides the structure needed to build reliable AI agents. The graph-based approach makes it easy to add nodes, modify workflows, and implement human-in-the-loop controls.
Key Takeaways
- Use typed state for predictable agent behavior
- Implement review loops with maximum iteration limits
- Add tool use for real-world data access
- Start simple and add complexity incrementally