Building AI Agents for Intelligent Task Automation in Business (Part 2)

April 11th, 2025


Learn how to build a simple AI agent using LangGraph to plan, execute tasks, and generate answers - ideal for automating business research workflows.

Read more: Building AI Agents for Intelligent Task Automation in Business (Part 1)

In the previous part, we talked about the use cases and architecture of AI agents. In today's blog, to show LangGraph in action, we'll build a simplified AI agent that demonstrates LLM reasoning, tool use, memory management, and planning.

Imagine we want an agent that can answer complex user queries by doing web research and then responding with a summary. This is a common scenario in business (think of an agent that gathers market data to answer a strategic question). We will use a plan-and-execute style: the agent will first plan what to do (e.g. decide to search the web), then perform the search, possibly plan again if needed, and finally produce an answer. For brevity, our "web search" tool will be a dummy function (to keep it self-contained), but one could integrate an actual search API.

Let’s go step by step!

Step 1: Define the State and Initialize the Graph

First, we decide what information our agent's state will hold. In this case, we want to track the conversation messages (so the agent remembers what the user asked and how it responded) and a list of tasks the agent plans. We also include a place to store the latest tool result. We'll use LangGraph's StateGraph to start building the graph.

from typing_extensions import TypedDict
from typing import Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

# Define the structure of the agent's state
class AgentState(TypedDict):
  messages: Annotated[list, add_messages]  # conversation history (new messages are appended)
  tasks: list[str]  # list of tasks (plan steps) to execute
  tool_result: str  # result from the last tool execution (if any)

# Initialize the graph builder with the state schema
graph_builder = StateGraph(AgentState)

Here we defined AgentState with three keys. We used add_messages for the messages key so that when new messages are added by nodes, LangGraph knows to append them to the list rather than overwrite. The tasks list will hold plan steps (as simple strings describing each task), and tool_result will hold data returned by our search tool.
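To see why the reducer matters, here is a minimal sketch in plain Python (not LangGraph's actual implementation) of how an append-style reducer like add_messages merges a node's output into the state, versus the default overwrite behavior used for keys like tool_result:

```python
def append_reducer(existing: list, update: list) -> list:
  # An add_messages-style reducer: concatenate instead of replacing
  return existing + update

def overwrite_reducer(existing, update):
  # Default behavior for keys without a reducer (like tool_result)
  return update

state = {"messages": ["What is LangGraph?"], "tool_result": "old result"}
node_output = {"messages": ["LangGraph is a graph framework."], "tool_result": "new result"}

state["messages"] = append_reducer(state["messages"], node_output["messages"])
state["tool_result"] = overwrite_reducer(state["tool_result"], node_output["tool_result"])

print(state["messages"])     # both messages are kept
print(state["tool_result"])  # only the latest value survives
```

This is exactly the difference our schema encodes: messages accumulates history across nodes, while tool_result always reflects the most recent tool call.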

Step 2: Define the Tools and LLMs

For this example, let’s set up a fake search tool and an LLM. In practice, you might use an actual API or a LangChain tool. We’ll also set up an LLM via LangChain – for instance, using OpenAI (with function calling enabled) or Anthropic. To keep it general, assume we have an llm object that can be called to generate text.

from langchain_openai import ChatOpenAI
from langchain_core.messages import AIMessage, HumanMessage

# Dummy tool: web search function
def web_search(query: str) -> str:
  # In real usage, call an API like SerpAPI or Bing API.
  # Here we just return a placeholder string.
  return f"Results for '{query}' [Dummy data]"

llm = ChatOpenAI(
  model="gpt-4o-mini",
  temperature=0,
  max_tokens=None,
  timeout=None,
  max_retries=2,
  # api_key="...",  # if you prefer to pass the API key directly instead of using env vars
  # base_url="...",
  # organization="...",
  # other params...
)

Step 3: Add Nodes for Planning, Execution, and Answering

We will create three main nodes in our graph:

  • planner node – uses the LLM to generate a plan (a list of tasks) based on the user’s query.
  • executor node – takes the next task from state["tasks"] and executes it. If the task is a search query, it will call web_search and store the result. (We could imagine other types of tasks too, but keep it simple.)
  • answer node – uses the LLM to generate the final answer for the user, using all available information (including any tool results in the state).


Pic 1: Graph visualization

Additionally, we might want a re-planning node so the agent can revise its plan after executing a task, but in this simple workflow we'll assume one round of planning is enough (or the planner could include producing the final answer as the last task).

# Node 1: Planner – LLM generates a list of tasks given the user question
def planner_node(state: AgentState) -> dict:
  user_question = state["messages"][-1].content
  prompt = (
    "You are an AI agent that will plan steps to answer the question.\n"
    f"Question: {user_question}\n"
    "Think of what steps or tools are needed. Provide a list of tasks to perform, numbered."
  )
  plan_response = llm.invoke(prompt)
  tasks = [line.strip() for line in plan_response.content.split('\n') if line.strip()]
  # Add a planning message to the state
  return {
    "tasks": tasks,
    "messages": [AIMessage(content="Planning steps to answer the question:\n" + "\n".join(f"{i+1}. {task}" for i, task in enumerate(tasks)))]
  }

# Node 2: Executor – take the next task and execute it
def executor_node(state: AgentState) -> dict:
  if not state.get("tasks"):
    return {}
  task_list = list(state["tasks"])
  current_task = task_list.pop(0)
  lowered = current_task.lower()
  if "search" in lowered:
    # Everything after the word "search" (matched case-insensitively) is treated as the query
    query = current_task[lowered.index("search") + len("search"):].strip().strip(':')
    result = web_search(query)
  else:
    result = f"(Executed task: {current_task})"
  # Add an execution message to the state
  return {
    "tool_result": result,
    "tasks": task_list,
    "messages": [AIMessage(content=f"Executing task: {current_task}\nResult: {result}")]
  }

# Node 3: Answer – LLM generates a final answer
def answer_node(state: AgentState) -> dict:
  # The user's question is the first message in the history
  user_question = state["messages"][0].content
  info = state.get("tool_result", "")
  prompt = (
    "You are an AI agent answering the question.\n"
    f"Question: {user_question}\n"
    f"Information gathered: {info}\n"
    "Provide a concise answer to the user."
  )
  final_answer = llm.invoke(prompt)
  return {"messages": [AIMessage(content=final_answer.content)]}

Let’s unpack the above:

  • In planner_node, we craft a prompt asking the LLM to come up with a plan (a numbered list of tasks). We then split the LLM’s response by lines to get a list of tasks. In a more robust implementation, we might parse a structured output or ensure the LLM returns JSON. For example, using OpenAI’s function calling, we could define a function schema for plan(tasks: List[str]) and have the LLM populate it. But to keep things simple, we do a quick text parse. The output of this node includes {"tasks": tasks}, which LangGraph merges into the state (replacing the tasks list with the new one).
  • In executor_node, we take the next task from state["tasks"]. We simply pop the first item. (Because we return the updated "tasks": task_list, the state’s tasks list is now one shorter, effectively advancing the progress.) We then decide what to do with this task. If it contains the word "search", we treat it as a web search instruction and call our web_search tool. For any other type of task, in a full agent we might call different tools or even ask the LLM to handle it. Here we just return a string indicating it was “executed”. The node returns a tool_result (the outcome of the task) and the updated tasks list. LangGraph’s state management will append the new tool result to the messages if we had set it up that way, but in our state we didn’t specify a reducer for tool_result (so it will just overwrite each time). We might append tool results to messages as well if we wanted the conversation history to include them.
  • In answer_node, we assume that after some tasks, we have enough info to answer the user. We take the original question and any information gathered (we use the latest tool_result here for simplicity, though we might have accumulated multiple results). We prompt the LLM to provide a final answer, and the output is added to the messages list as the agent’s response. (Because we wrap it in an AIMessage, the conversation history distinguishes user and agent turns as role-based objects.)
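As a sketch of the more robust parsing mentioned above, the hypothetical helper below strips list markers from the planner's raw text instead of keeping each line verbatim. It is a lightweight alternative to full JSON or function-calling output, under the assumption that the LLM returns one task per line:

```python
import re

def parse_plan(raw: str) -> list[str]:
  # Strip leading numbering ("1.", "2)") or bullets ("-", "*") from each line
  tasks = []
  for line in raw.split("\n"):
    cleaned = re.sub(r"^\s*(?:\d+[.)]|[-*])\s*", "", line).strip()
    if cleaned:
      tasks.append(cleaned)
  return tasks

plan_text = "1. Search for recent news on EV adoption in Europe.\n2. Summarize the key points."
print(parse_plan(plan_text))
```

Swapping this in for the one-line split in planner_node would keep the tasks list free of stray numbering, which makes the executor's keyword checks more reliable.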

Step 4: Define the Graph Flow with Edges

Now we connect these nodes in the order they should run. A reasonable flow is: start -> planner -> executor -> (loop back to executor if more tasks) -> answer -> end. We will use a conditional edge to handle the loop: after the executor node, if there are still tasks remaining, we go back to executor to do the next one; if no tasks remain, we proceed to the answer node.

# First, add all nodes
graph_builder.add_node("planner", planner_node)
graph_builder.add_node("executor", executor_node)
graph_builder.add_node("answer", answer_node)  # make sure this is added before creating edges

# Then set the entry point and connect the nodes with edges
graph_builder.set_entry_point("planner")
graph_builder.add_edge("planner", "executor")

# Conditional edge: if tasks remain after executor, go back to executor; if not, go to answer
def has_more_tasks(state: AgentState):
  # If there are tasks left in the list, return "executor", else "answer"
  tasks = state.get("tasks", [])
  return "executor" if tasks else "answer"

graph_builder.add_conditional_edges("executor", has_more_tasks, {"executor": "executor", "answer": "answer"})

# Finally, connect answer to END (finish after answering)
graph_builder.add_edge("answer", END)

# Compile the graph
agent_graph = graph_builder.compile()

We used set_entry_point("planner") (which under the hood is the same as adding an edge from START to planner), then a direct edge from planner to executor. The conditional edge from executor checks the state: our has_more_tasks function returns the string "executor" if there are still tasks to do, otherwise "answer" to move on to the final answer. We map those outputs to the actual node names in the add_conditional_edges call. This creates a loop in which the executor node keeps running until it exhausts the tasks list, then transitions to answer. Finally, the answer node goes to END (we could also have made the conditional edge send to END directly, but this way is clearer). The compile() step produces a runnable graph object.

Step 5: Running the Agent

To run the agent, we need to initialize a state and invoke the compiled graph. Typically, you’d start with an empty history and then add a user message as input.

# Initialize the state and run the agent
initial_state = {"messages": [], "tasks": [], "tool_result": ""}

# Simulate a user question
user_question = "What is the latest news on electric vehicle adoption in Europe?"
initial_state["messages"].append(HumanMessage(content=user_question))

# Use agent_graph.invoke() to run the compiled graph
final_state = agent_graph.invoke(initial_state)

# Print messages in a cleaner format
print("\n=== Conversation Flow ===\n")
for msg in final_state["messages"]:
  if isinstance(msg, HumanMessage):
    print(f" Human: {msg.content}")
  elif isinstance(msg, AIMessage):
    if "Planning steps" in msg.content:
      print("\n🤖 Agent (Planning):")
      # Split the planning steps and print them nicely
      steps = msg.content.split('\n')[1:]  # skip the first line
      for step in steps:
        if step.strip():
          print(f"  • {step.strip()}")
    elif "Executing task" in msg.content:
      print("\n🤖 Agent (Executing):")
      # Extract task and result
      task = msg.content.split('\n')[0].replace('Executing task:', '').strip()
      result = msg.content.split('\n')[1].replace('Result:', '').strip()
      print(f"  Task: {task}")
      print(f"  Result: {result}")
    else:
      print(f"\n🤖 Agent (Final Answer):\n{msg.content}")
  print("\n" + "="*50 + "\n")

When this agent runs, here’s what would happen step by step:

1. Planner Node: The LLM sees the question “What is the latest news on electric vehicle adoption in Europe?” and comes up with a plan. It might return something like: “1. Search for recent news on EV adoption in Europe. 2. Summarize the key points.” (This gets parsed into tasks = ["Search for recent news on EV adoption in Europe.", "Summarize the key points."] in state.)

2. Executor Node (1st loop): The first task contains "Search", so the agent calls web_search("for recent news on EV adoption in Europe."). Our dummy tool returns a placeholder string like "Results for 'for recent news on EV adoption in Europe.' [Dummy data]". The state’s tool_result is set to that, and the task is removed from the list (now tasks = ["Summarize the key points."]).

3. Executor Node (loop again): After executing, the conditional edge finds another task remaining, so it goes back to the executor node. Now current_task is "Summarize the key points." Since this doesn’t involve a tool, our executor’s else-branch simply returns the string "(Executed task: Summarize the key points.)" – in a real scenario, we might instead have called the LLM to summarize using the gathered info, but for demonstration let’s proceed. The tool_result becomes that string, and the tasks list is now empty.

4. Executor finishes -> conditional edge -> Answer Node: The conditional edge sees no tasks left, so it directs to the answer node.

5. Answer Node: The LLM is prompted with the original question and the tool_result (which currently is the summary placeholder). It then produces a final answer, e.g.: “Electric vehicle adoption in Europe is accelerating, with recent news highlighting record-high EV sales and expanded charging infrastructure across EU countries.” This answer is appended to messages as an AIMessage.

6. END: The graph ends, and we can take the agent’s answer from the final state.

In a more fleshed-out implementation, the “Summarize” task would ideally be handled by another LLM call (maybe a smaller model, or the same llm with a prompt to summarize the tool_result). We could have designed a second executor node or integrated that into the executor logic. The goal here is to see how LangGraph lets us break the problem into nodes and transitions, which we did: one node for planning, a loop node for execution, and one for the final answer.
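One way to handle the “Summarize” case is to dispatch tasks by type. The sketch below uses a hypothetical summarize_fn standing in for that second LLM call; in a real agent, it would wrap llm.invoke with a summarization prompt:

```python
def dispatch_task(task: str, tool_result: str, summarize_fn) -> str:
  # Route each task to the right handler instead of a single search-only branch
  lowered = task.lower()
  if "search" in lowered:
    return f"Results for '{task}' [Dummy data]"
  if "summarize" in lowered:
    return summarize_fn(tool_result)  # e.g. a second LLM call in a real agent
  return f"(Executed task: {task})"

# A stand-in summarizer for demonstration; a real agent would call llm.invoke(...)
fake_summarize = lambda text: f"Summary: {text[:40]}"

print(dispatch_task("Summarize the key points", "Results for 'EV news' [Dummy data]", fake_summarize))
```

Dropping this dispatcher into executor_node keeps the node small while making each task type explicit.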

Despite the simplifications, this code showcases LangGraph’s core ideas: a state that carries info forward, distinct nodes for different functionalities, and explicit transitions controlling the flow (including a loop). With this structure, adding features is straightforward. For example, to include a memory of past Q&A so the agent can handle follow-up questions, we’d keep the messages list and include the conversation context in prompts. To add an error handling node, we could catch exceptions in the executor (say if a tool fails) and set an error flag in state, then have a conditional edge route to an error_handler node that maybe apologizes or attempts an alternate approach. LangGraph’s design makes such extensions modular.
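The error-handling extension could be sketched like this (plain Python with hypothetical node names): wrap the tool call so a failure sets an error flag in state, and route on that flag in a conditional edge:

```python
def safe_executor(state: dict) -> dict:
  # Wrap the tool call so a failure sets an error flag instead of crashing the graph
  try:
    result = state["tool"](state["query"])
    return {"tool_result": result, "error": ""}
  except Exception as exc:
    return {"tool_result": "", "error": str(exc)}

def route_after_executor(state: dict) -> str:
  # Conditional edge: detour to a hypothetical error_handler node when the flag is set
  return "error_handler" if state.get("error") else "answer"

def failing_tool(query: str) -> str:
  raise RuntimeError("search API unavailable")

update = safe_executor({"tool": failing_tool, "query": "EV news"})
print(route_after_executor(update))  # prints "error_handler"
```

The error_handler node could then apologize to the user or retry with an alternate tool, and because routing lives in an explicit edge function, the fallback path stays visible in the graph rather than buried in a try/except.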

All the code for this blog is at the Colab link.

Conclusion

AI agents built on LLMs are opening up new frontiers in business automation – from handling routine customer inquiries to conducting complex data analysis workflows. By leveraging frameworks like LangGraph, developers can implement these agents with a high degree of control and transparency. We discussed how LangGraph’s state machine approach, with nodes for reasoning, tool use, memory, and planning, provides a robust foundation for building intelligent agents that are both autonomous and reliable. We also saw how important it is to design good prompts, utilize LLM features like function calling, and include error-handling logic to make agents perform well in real-world conditions. With the provided example and best practices, you should have a blueprint for creating your own AI agents tailored to your business needs – be it in operations, support, sales, finance, or beyond.

Tags
Emerging Technologies
Artificial Intelligence