
Today we continue with the basics of LangGraph. The previous two articles covered LangGraph's basic usage and conditional branching; in this article we look at how to build a cycle (loop) structure with LangGraph.

0. Complete Code

0.1 Complete Code

As usual, the complete code comes first; the demo is adapted from the official LangGraph example.

from langchain_community.tools.tavily_search import TavilySearchResults
tools = [TavilySearchResults(max_results=1)]
from langgraph.prebuilt import ToolExecutor
tool_executor = ToolExecutor(tools)
from langchain_openai import ChatOpenAI
# temperature=0 makes the model's responses deterministic
model = ChatOpenAI(temperature=0)
from langchain.tools.render import format_tool_to_openai_function
functions = [format_tool_to_openai_function(t) for t in tools]
model = model.bind_functions(functions)
from typing import TypedDict, Annotated, Sequence
import operator
from langchain_core.messages import BaseMessage
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]

from langgraph.prebuilt import ToolInvocation
import json
from langchain_core.messages import FunctionMessage
# Define the function that determines whether to continue or not
def should_continue(state):
    messages = state['messages']
    last_message = messages[-1]
    # If there is no function call, then we finish
    if "function_call" not in last_message.additional_kwargs:
        return "end"
    # Otherwise if there is, we continue
    else:
        return "continue"
# Define the function that calls the model
def call_model(state):
    messages = state['messages']
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}
# Define the function to execute tools
def call_tool(state):
    messages = state['messages']
    # Based on the continue condition
    # we know the last message involves a function call
    last_message = messages[-1]
    # We construct a ToolInvocation from the function_call
    action = ToolInvocation(
        tool=last_message.additional_kwargs["function_call"]["name"],
        tool_input=json.loads(last_message.additional_kwargs["function_call"]["arguments"]),
    )
    # We call the tool_executor and get back a response
    response = tool_executor.invoke(action)
    # We use the response to create a FunctionMessage
    function_message = FunctionMessage(content=str(response), name=action.tool)
    # We return a list, because this will get added to the existing list
    return {"messages": [function_message]}
from langgraph.graph import StateGraph, END

# Define a new graph
workflow = StateGraph(AgentState)
# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)
# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.set_entry_point("agent")
# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call `should_continue`, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If `continue`, then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END
    }
)
# We now add a normal edge from `action` to `agent`.
# This means that after `action` is called, the `agent` node is called next.
workflow.add_edge('action', 'agent')
# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()
from langchain_core.messages import HumanMessage
inputs = {"messages": [HumanMessage(content="what is the weather in sf")]}
response = app.invoke(inputs)
print(response)
0.2 Preparation Before Running

This official demo requires a Tavily API key. You can apply for one at https://app.tavily.com/home.
After you get the key, remember to add it to your environment variables:

TAVILY_API_KEY = "tvly-xxxxxxxxxxx"
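If you prefer to set the key from Python rather than your shell configuration, a minimal sketch (the key value below is just a placeholder):

import os

# Placeholder value; replace with your own Tavily API key.
os.environ["TAVILY_API_KEY"] = "tvly-xxxxxxxxxxx"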
1. Code Walkthrough

1.1 Creating the Graph - StateGraph

The first step is to instantiate a graph. Unlike the earlier articles, which used MessageGraph, this example uses StateGraph.

from langgraph.graph import StateGraph, END

# Define a new graph
workflow = StateGraph(AgentState)
MessageGraph and StateGraph are not all that different: MessageGraph is essentially a StateGraph whose entire state is just a list of messages, while StateGraph lets us define our own state schema (here, AgentState with a messages field).

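The Annotated[..., operator.add] part of AgentState is what makes the loop accumulate history: each node returns a partial update like {"messages": [new_message]}, and LangGraph applies the reducer declared in the annotation to append it to the existing list instead of overwriting it. A minimal standalone sketch of that merge behaviour (not part of the demo itself):

import operator

existing = ["first message", "second message"]  # messages already in the state
update = ["new message"]                        # what a node just returned

# LangGraph applies the reducer from the Annotated type hint when merging updates:
merged = operator.add(existing, update)
print(merged)  # ['first message', 'second message', 'new message']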
1.2 Adding Nodes - node

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)
Two nodes are added:
· The entry node agent, which calls the large language model
· The node action, which executes the tools via tool_executor (you can also try the executor on its own, as shown below)
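Before wiring the action node into the graph, you can exercise the tool executor directly; a minimal sketch (hypothetical query, requires a valid TAVILY_API_KEY):

from langgraph.prebuilt import ToolInvocation

# Invoke the search tool directly, outside the graph.
action = ToolInvocation(
    tool=tools[0].name,
    tool_input={"query": "weather in san francisco"},
)
print(tool_executor.invoke(action))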
1.3 Adding Edges

# We now add a conditional edge
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        # If `continue`, then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END
    }
)
workflow.add_edge('action', 'agent')
We have now added one conditional edge and one normal edge. The graph structure looks like this: from agent, the graph goes to action when should_continue returns "continue", or to END when it returns "end"; action always loops back to agent.
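If you want to inspect the structure from code rather than a diagram, the compiled graph exposes its topology; a small sketch, assuming the graph has already been compiled (app = workflow.compile(), as in the complete code above) and the optional grandalf package is installed for ASCII rendering:

# Print an ASCII rendering of the compiled graph's nodes and edges.
# Requires: pip install grandalf
app.get_graph().print_ascii()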
Now let's look at the condition used by the conditional edge, should_continue:

def should_continue(state):
    messages = state['messages']
    last_message = messages[-1]
    # If there is no function call, then we finish
    if "function_call" not in last_message.additional_kwargs:
        return "end"
    # Otherwise if there is, we continue
    else:
        return "continue"
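To make the routing concrete, here is a minimal sanity check with two hand-constructed messages (hypothetical examples, not real model output):

from langchain_core.messages import AIMessage

# An assistant message that asks for a tool call
tool_call_msg = AIMessage(
    content="",
    additional_kwargs={
        "function_call": {
            "name": "tavily_search_results_json",
            "arguments": '{"query": "weather in sf"}',
        }
    },
)
# An assistant message that is a plain final answer
final_msg = AIMessage(content="It is sunny in San Francisco.")

print(should_continue({"messages": [tool_call_msg]}))  # "continue" -> routed to the action node
print(should_continue({"messages": [final_msg]}))      # "end"      -> routed to END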
In short, should_continue checks whether the last message contains a function call, i.e. a tool that still needs to run: if it does, the graph follows the agent ---> action edge; if not, it follows the agent ---> END edge.

1.4 Summary

At this point the implementation of the loop structure should be fairly clear. Compared with the conditional-branch code from the previous article, the most important change is this single line:

workflow.add_edge('action', 'agent')
The edge changed from action ---> END to action ---> agent, which is exactly what turns the branch into a cycle.

One more point worth noting:

# Define the function that calls the model
def call_model(state):
    messages = state['messages']
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}
When calling model.invoke, pay attention to the argument type: we pass messages = state['messages'], i.e. the list of messages stored inside the state (HumanMessage, AIMessage, and so on), not the state dict itself.
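A minimal illustration of that point (hypothetical snippet, assuming the model object from the demo above):

from langchain_core.messages import HumanMessage

state = {"messages": [HumanMessage(content="what is the weather in sf")]}

# Correct: pass the list of messages held in the state
response = model.invoke(state["messages"])

# Incorrect: passing the whole state dict would raise an error,
# because the chat model expects a string, a prompt value, or a sequence of messages.
# model.invoke(state)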
If you found this article helpful, a like and a follow would be much appreciated ~~~

· Hi, I'm 同學小張, and I regularly share AI knowledge and hands-on case studies.
· Likes and follows are welcome; I keep learning and keep sharing practical content.
· WeChat: jasper_8017, let's discuss and improve together.
