Notion Blog
Tech Sharing · 1 min read

AI Workflow Orchestration with LangGraph

from typing_extensions import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from IPython.display import Image, display

llm = ChatOpenAI(model="gpt-4o-mini",
                 openai_api_key='sk-XXXXX',
                 openai_api_base='https://XX.XX.cm/v1',
                 temperature=0)

# Graph state
class State(TypedDict):
    topic: str
    joke: str
    improved_joke: str
    final_joke: str


# Nodes
def generate_joke(state: State):
    """First LLM call to generate initial joke"""

    msg = llm.invoke(f"Write a short joke about {state['topic']}")
    return {"joke": msg.content}


def check_punchline(state: State):
    """Gate function to check if the joke has a punchline"""

    # Simple check - does the joke contain "?" or "!"
    if "?" in state["joke"] or "!" in state["joke"]:
        return "Pass"
    return "Fail"


def improve_joke(state: State):
    """Second LLM call to improve the joke"""

    msg = llm.invoke(f"Make this joke funnier by adding wordplay: {state['joke']}")
    return {"improved_joke": msg.content}


def polish_joke(state: State):
    """Third LLM call for final polish"""

    msg = llm.invoke(f"Add a surprising twist to this joke: {state['improved_joke']}")
    return {"final_joke": msg.content}


# Build workflow
workflow = StateGraph(State)

# Add nodes
workflow.add_node("generate_joke", generate_joke)
workflow.add_node("improve_joke", improve_joke)
workflow.add_node("polish_joke", polish_joke)

# Add edges to connect nodes
workflow.add_edge(START, "generate_joke")
workflow.add_conditional_edges(
    "generate_joke", check_punchline, {"Fail": "improve_joke", "Pass": END}
)
workflow.add_edge("improve_joke", "polish_joke")
workflow.add_edge("polish_joke", END)

# Compile
chain = workflow.compile()

# Show workflow
display(Image(chain.get_graph().draw_mermaid_png()))

# Invoke
state = chain.invoke({"topic": "cats"})
print("Initial joke:")
print(state["joke"])
print("\n--- --- ---\n")
if "improved_joke" in state:
    print("Improved joke:")
    print(state["improved_joke"])
    print("\n--- --- ---\n")

    print("Final joke:")
    print(state["final_joke"])
else:
    print("Joke passed the quality gate on the first try - no improvement needed.")

Workflow logic

Start at START and enter the generate_joke node, which generates the initial joke.
check_punchline then gauges joke quality:
  - If it passes (Pass), the workflow ends immediately (END).
  - If it fails (Fail), the improve_joke node improves the joke.
The improved joke then goes to the polish_joke node for a final polish.
Finally the workflow reaches END.
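The branching above can be sketched in plain Python, with no LangGraph and no LLM calls. Here fake_llm is a made-up stand-in that just echoes the prompt, so this shows only the control flow, not the real behavior:

```python
# Plain-Python sketch of the graph's control flow (no LangGraph, no LLM).
def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in for llm.invoke(...).content
    return f"[joke for: {prompt}]"

def run_workflow(topic: str) -> dict:
    state = {"topic": topic}
    state["joke"] = fake_llm(f"Write a short joke about {topic}")

    # Gate: a joke containing "?" or "!" counts as having a punchline -> Pass -> END
    if "?" in state["joke"] or "!" in state["joke"]:
        return state  # Pass: stop early, no improvement steps run

    # Fail: improve, then polish
    state["improved_joke"] = fake_llm(f"Add wordplay: {state['joke']}")
    state["final_joke"] = fake_llm(f"Add a twist: {state['improved_joke']}")
    return state

result = run_workflow("cats")
print(sorted(result))  # ['final_joke', 'improved_joke', 'joke', 'topic']
```

Because fake_llm never emits "?" or "!", the Fail branch runs and all four keys end up in the state, mirroring what chain.invoke returns when the gate fails.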

Code walkthrough: a LangGraph state-graph workflow

This code is a complete example of building a joke-generation workflow with LangGraph's StateGraph. Each part is explained below:

1. State definition

class State(TypedDict):
    topic: str
    joke: str
    improved_joke: str
    final_joke: str

This defines a typed dictionary State that represents the workflow's state, with four fields:

topic: the joke's subject (e.g. "cats")
joke: the initially generated joke
improved_joke: the improved joke
final_joke: the final, polished joke
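Note that each node returns only the keys it changes, and LangGraph merges that partial dict into the shared state. The merge behaves roughly like a plain dict update — a simplified sketch, not LangGraph's actual reducer machinery:

```python
from typing import TypedDict

# total=False lets a dict carry only some of the keys, like mid-workflow state.
class State(TypedDict, total=False):
    topic: str
    joke: str
    improved_joke: str
    final_joke: str

# Simulate how a node's partial return value is folded into the state.
state: State = {"topic": "cats"}
node_output = {"joke": "Why did the cat sit on the laptop?"}  # what generate_joke returns
state.update(node_output)  # LangGraph's default merge is similar in spirit

print(state)
```

This is why generate_joke can return just {"joke": ...} without touching the other fields.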

2. Node functions

Generating the initial joke

def generate_joke(state: State):
    msg = llm.invoke(f"Write a short joke about {state['topic']}")
    return {"joke": msg.content}

Calls the LLM to generate an initial joke about the given topic.

Checking for a punchline

def check_punchline(state: State):
    if "?" in state["joke"] or "!" in state["joke"]:
        return "Pass"
    return "Fail"

A simple quality check: the joke is judged to have a punchline if it contains a question mark or an exclamation mark.
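As a quick sanity check, the gate can be exercised on its own — the sample jokes below are made up for illustration:

```python
def check_punchline(state: dict) -> str:
    """Gate: a joke containing "?" or "!" is treated as having a punchline."""
    if "?" in state["joke"] or "!" in state["joke"]:
        return "Pass"
    return "Fail"

print(check_punchline({"joke": "Why don't cats play poker? Too many cheetahs!"}))  # Pass
print(check_punchline({"joke": "A cat walked into a bar."}))  # Fail
```

The return value is a plain string; it only gains meaning through the mapping passed to add_conditional_edges later on.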

Improving the joke

def improve_joke(state: State):
    msg = llm.invoke(f"Make this joke funnier by adding wordplay: {state['joke']}")
    return {"improved_joke": msg.content}

If the initial joke fails the gate, the LLM is asked to improve it by adding wordplay.

Polishing the joke

def polish_joke(state: State):
    msg = llm.invoke(f"Add a surprising twist to this joke: {state['improved_joke']}")
    return {"final_joke": msg.content}

Gives the improved joke a final polish by adding a surprising twist.

3. Building the workflow

Creating the state graph

workflow = StateGraph(State)

Adding nodes

workflow.add_node("generate_joke", generate_joke)
workflow.add_node("improve_joke", improve_joke)
workflow.add_node("polish_joke", polish_joke)

Adding edges and the conditional branch

workflow.add_edge(START, "generate_joke")  # from START to joke generation
workflow.add_conditional_edges(            # conditional branch
    "generate_joke",
    check_punchline,
    {"Fail": "improve_joke", "Pass": END}
)
workflow.add_edge("improve_joke", "polish_joke")  # improve, then polish
workflow.add_edge("polish_joke", END)             # end after polishing
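The third argument to add_conditional_edges is just a mapping from the router's return value to the next node. In plain Python the dispatch amounts to a dict lookup — a simplified sketch, where "END" is a stand-in string rather than LangGraph's actual END sentinel:

```python
END = "END"  # stand-in for LangGraph's END sentinel

def check_punchline(state: dict) -> str:
    return "Pass" if ("?" in state["joke"] or "!" in state["joke"]) else "Fail"

# The mapping passed to add_conditional_edges, viewed as a plain routing table.
route = {"Fail": "improve_joke", "Pass": END}

next_node = route[check_punchline({"joke": "What do cats read? Mewspapers!"})]
print(next_node)  # END: the joke has a punchline, so the workflow stops here
```

Any return value of the router not present in the mapping would be an error, which is why check_punchline only ever returns "Pass" or "Fail".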

4. Compiling and running

Compiling the workflow

chain = workflow.compile()

Visualizing the workflow

display(Image(chain.get_graph().draw_mermaid_png()))

Renders the workflow as a Mermaid diagram and displays it (in a Jupyter environment).

Running the workflow

state = chain.invoke({"topic": "cats"})

Passes in the initial state (topic "cats") and runs the workflow; invoke returns the final state dict.

Printing the results

print("Initial joke:")
print(state["joke"])

if "improved_joke" in state:  # the joke failed the gate and was improved
    print("\nImproved joke:")
    print(state["improved_joke"])
    print("\nFinal joke:")
    print(state["final_joke"])
else:  # the initial joke passed the gate directly
    print("Joke passed the quality gate on the first try - no improvement needed.")

This example shows how to use LangGraph's state-graph API to build a multi-step LLM workflow with conditional branching.

If you run into questions while using this, feel free to leave a comment below so we can discuss.
