
The Unreasonable Effectiveness of OpenAPI Specs for Agent Tool Use

Max Nihlen, Co-Founder & CTO · August 22, 2025

Summary

In the rapidly evolving landscape of AI agents and autonomous systems, we're constantly searching for elegant ways to expose functionality to our digital assistants. While many new protocols are emerging, there's an unexpected hero in this story: the humble OpenAPI specification. Originally designed for documenting REST APIs, OpenAPI specs have proven remarkably effective at enabling AI agents to understand and use external tools. Let's explore why.

What are OpenAPI Specifications?

An OpenAPI Specification (formerly known as Swagger) is a standard way to describe REST APIs in a machine-readable format. At its core, it's a YAML or JSON document that describes:

  • available endpoints and HTTP methods
  • request/response schemas
  • authentication requirements
  • parameter types and validation rules
  • rich descriptions of what each operation does

Traditionally, developers use OpenAPI specs to:

  • generate interactive API documentation
  • create client SDKs in various programming languages
  • set up API testing and validation
  • enable API-first development workflows

The Power of Decoupling: Backend Meets Frontend

Where OpenAPI truly shines is in its ability to serve as a contract between backend and frontend teams. This decoupling allows:

  1. Parallel development: frontend developers can start building against the API specification while backend teams implement the actual endpoints
  2. Clear communication: the spec becomes the single source of truth, eliminating ambiguity about data structures and endpoints
  3. Automatic validation: both sides can validate their implementation against the spec
  4. Mock servers: frontend teams can generate mock servers from the spec for testing

This separation of concerns has made OpenAPI the de facto standard for API-driven development. But here's where it gets interesting: what if we think of AI agents as just another type of "frontend" consumer?

From Human Developers to AI Agents

The same properties that make OpenAPI specs excellent for human collaboration make them perfect for AI agent integration:

  • Self-describing: the detailed descriptions in OpenAPI specs help agents understand what each endpoint does
  • Structured: the schema definitions ensure agents know exactly what data to send and expect
  • Standardized: agents can use the same parsing logic for any OpenAPI-compliant API
  • Tool-ready: existing OpenAPI tooling can be repurposed for agent use cases
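
To make the "standardized" point concrete, here is a minimal sketch of how an agent runtime could turn the operations in an OpenAPI document into generic tool descriptors. This is not any particular library's API - the helper and the descriptor shape are illustrative, and the spec fragment is a hand-written slice of the task API we'll define below:

```python
# a tiny, hand-written slice of an OpenAPI document, already parsed from YAML/JSON
spec = {
    "paths": {
        "/tasks": {
            "get": {
                "operationId": "listTasks",
                "summary": "List all tasks",
                "description": "Retrieves a list of all tasks, optionally filtered by status",
                "parameters": [
                    {
                        "name": "status",
                        "in": "query",
                        "schema": {"type": "string", "enum": ["pending", "completed", "in_progress"]},
                        "description": "Filter tasks by their status",
                    }
                ],
            }
        }
    }
}


def extract_tools(spec: dict) -> list[dict]:
    """Walk the paths section and emit one tool descriptor per operation."""
    tools = []
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            # fold each parameter's schema and description into a JSON-Schema-style object
            properties = {
                p["name"]: {**p.get("schema", {}), "description": p.get("description", "")}
                for p in op.get("parameters", [])
            }
            tools.append({
                "name": op["operationId"],
                "description": op.get("description") or op.get("summary", ""),
                "parameters": {"type": "object", "properties": properties},
            })
    return tools


tools = extract_tools(spec)
print(tools[0]["name"])                              # listTasks
print(sorted(tools[0]["parameters"]["properties"]))  # ['status']
```

The same ~20 lines of parsing logic work for any OpenAPI-compliant API, which is exactly the property that makes the format attractive for agents.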

A Minimal Example: Task Management API

Let's walk through a concrete example. Here's a minimal OpenAPI spec for a simple task management API:

yaml
openapi: 3.1.0
info:
  title: Task Management API
  version: 1.0.0
  description: A simple API for managing tasks
servers:
  - url: https://api.tasks.example.com/v1
paths:
  /tasks:
    get:
      operationId: listTasks
      summary: List all tasks
      description: Retrieves a list of all tasks, optionally filtered by status
      parameters:
        - name: status
          in: query
          schema:
            type: string
            enum: [pending, completed, in_progress]
          description: Filter tasks by their status
      responses:
        '200':
          description: A list of tasks
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Task'
    post:
      operationId: createTask
      summary: Create a new task
      description: Creates a new task with the provided details
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [title]
              properties:
                title:
                  type: string
                  description: The task title
                description:
                  type: string
                  description: Detailed description of the task
                due_date:
                  type: string
                  format: date-time
                  description: When the task is due
      responses:
        '201':
          description: Task created successfully
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Task'
components:
  schemas:
    Task:
      type: object
      properties:
        id:
          type: string
          description: Unique identifier for the task
        title:
          type: string
          description: Task title
        description:
          type: string
          description: Detailed task description
        status:
          type: string
          enum: [pending, completed, in_progress]
          description: Current status of the task
        due_date:
          type: string
          format: date-time
          description: Task due date
        created_at:
          type: string
          format: date-time
          description: When the task was created

Generating a Python Client

Now let's use OpenAPI Generator to create a Python client from this spec. We'll use the Java-based OpenAPI Generator CLI rather than the pure-Python package (openapi-python-client), because it can generate pydantic models for the API's request and response data structures. We'll use uv as our Python package manager for the project setup. First, save the YAML above as openapi_tasks_api.yaml, install the OpenAPI Generator CLI, then generate the client:

bash
# install the OpenAPI generator CLI (requires Node.js)
npm install -g @openapitools/openapi-generator-cli

# generate the python client from the OpenAPI spec
openapi-generator-cli generate \
    -i openapi_tasks_api.yaml \
    -g python \
    -o task_management_client

# initialize a new UV project
uv init

# add packages that we will use
uv add langchain_core langgraph langchain_openai
# install the client generated above
# Note: you might have to manually change the license entry in task_management_client/pyproject.toml from "NoLicense" to e.g. "MIT" 
# due to a known bug in the generator
uv add --editable task_management_client/

# export a key for your LLM of choice; we'll use an OpenRouter key
export OPENROUTER_API_KEY=your_key

This generates a fully typed Python client with methods for each operation/endpoint. The generated client includes:

  • pydantic models for type-safe request/response handling
  • automatic serialization/deserialization
  • error handling
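
The generated module and class names depend on the generator's configuration, so as an illustration, here is a hand-written stand-in for the kind of Task model the generator emits (assuming pydantic v2), showing the typed deserialization and serialization you get for free:

```python
from datetime import datetime
from enum import Enum
from typing import Optional

from pydantic import BaseModel


class TaskStatus(str, Enum):
    PENDING = "pending"
    COMPLETED = "completed"
    IN_PROGRESS = "in_progress"


class Task(BaseModel):
    """Hand-written stand-in for the model the generator emits from the Task schema."""
    id: Optional[str] = None
    title: Optional[str] = None
    description: Optional[str] = None
    status: Optional[TaskStatus] = None
    due_date: Optional[datetime] = None
    created_at: Optional[datetime] = None


# deserialization: a raw JSON payload becomes a typed object
payload = {
    "id": "t1",
    "title": "Write report",
    "status": "pending",
    "due_date": "2025-01-15T10:00:00Z",
}
task = Task.model_validate(payload)
print(type(task.due_date).__name__)  # datetime
print(task.status == "pending")      # True

# serialization: back to JSON-compatible data for the wire
print(task.model_dump(mode="json", exclude_none=True)["status"])  # pending
```

Note how the ISO date-time string is parsed into a real datetime and the status string is coerced into an enum member - type errors surface at the boundary instead of deep inside your agent code.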

[Screenshot: the generated client methods for the different endpoints and their validation]

Building a ToolFactory for AI Agents

Here's where we bridge the gap to AI agents. We'll create a ToolFactory class that wraps the generated client and exposes each operation/endpoint as a tool. For clarity in this blog post, we'll use an explicit approach where each tool is created by a dedicated method - this makes it easy to see, for example, how the generated pydantic models are used and how docstrings provide instructions for the agents. (The caveat of this approach is that we have to manually copy each endpoint's description into its method, and add a new method for every endpoint we want to support. It's possible to generalize the tool factory with a single method that introspects the API operations, but for the sake of readability we'll stick to the simpler approach here.)

python
from typing import List, Optional

from langchain_core.tools import tool, BaseTool
from pydantic import BaseModel, Field

# we import the generated client and its pydantic models
from openapi_client import ApiClient, Configuration
from openapi_client.api.default_api import DefaultApi
from openapi_client.models.task import Task
from openapi_client.models.create_task_request import CreateTaskRequest
from openapi_client.exceptions import ApiException


# the OpenAPI generator doesn't create models for query parameters
# so we create a simple one for the list_tasks endpoint
class ListTasksInput(BaseModel):
    """Input for listing tasks - the only endpoint without a generated input model"""

    status: Optional[str] = Field(
        default=None,
        description="Filter tasks by their status: 'pending', 'completed', or 'in_progress'",
    )


class TaskAPIToolFactory:
    """
    Factory for creating LangChain tools from the task management API.
    """

    def __init__(self, base_url: str = "https://api.tasks.example.com/v1"):
        configuration = Configuration(host=base_url)
        self.api_client = ApiClient(configuration)
        self.api = DefaultApi(self.api_client)

    def get_list_tasks_tool(self) -> BaseTool:
        """Creates the list_tasks tool."""

        @tool(args_schema=ListTasksInput)
        def list_tasks(status: Optional[str] = None) -> List[Task]:
            """List all tasks

            Retrieves a list of all tasks, optionally filtered by status.
            Use this tool when asked for a list of tasks or to count tasks.

            Parameters:
            - status: Filter tasks by their status - must be one of: 'pending', 'completed', or 'in_progress' (optional)

            Returns a list of Task objects, each with the following fields:
            - id: Unique identifier for the task
            - title: Task title
            - description: Detailed task description
            - status: Current status of the task (pending/completed/in_progress)
            - due_date: Task due date
            - created_at: When the task was created
            """
            try:
                return self.api.list_tasks(status=status)
            except ApiException as e:
                return f"API Error: {e.body}"

        return list_tasks

    def get_create_task_tool(self) -> BaseTool:
        """Creates the create_task tool using the generated CreateTaskRequest model."""

        @tool(args_schema=CreateTaskRequest)
        def create_task(
            title: str,
            description: Optional[str] = None,
            due_date: Optional[str] = None,
        ) -> Task:
            """Create a new task with the provided details

            Creates a new task with the provided details and returns the created task object.

            Parameters:
            - title: The task title (required)
            - description: Detailed description of the task (optional)
            - due_date: When the task is due in ISO date-time format (optional, e.g., '2025-01-15T10:00:00Z')

            Returns a Task object with the following fields:
            - id: Unique identifier for the task
            - title: Task title
            - description: Detailed task description
            - status: Current status of the task (pending/completed/in_progress)
            - due_date: Task due date
            - created_at: When the task was created
            """
            try:
                request = CreateTaskRequest(
                    title=title, description=description, due_date=due_date
                )
                return self.api.create_task(create_task_request=request)
            except ApiException as e:
                return f"API Error: {e.body}"

        return create_task

    def get_all_tools(self) -> List[BaseTool]:
        """Returns all available tools as a list."""
        return [self.get_list_tasks_tool(), self.get_create_task_tool()]


This might seem like overkill for an API with only two endpoints, but it lets us support additional endpoints by simply adding new methods to the factory class. If we want to add endpoints from an entirely different API, or perhaps a completely different kind of tool, we can create a new factory class and combine the tools in a composite factory. For example:

python
class EmailAPIToolFactory:
    """Mock factory for email search and send API tools"""
    
    def get_search_emails_tool(self) -> BaseTool:
        ...
    
    def get_send_email_tool(self) -> BaseTool:
        ...
    
    def get_all_tools(self) -> List[BaseTool]:
        return [
            self.get_search_emails_tool(),
            self.get_send_email_tool()
        ]

class CombinedToolFactory:
    """Composite factory"""
    
    def __init__(self):
        self.task_factory = TaskAPIToolFactory()
        self.email_factory = EmailAPIToolFactory()
    
    def get_all_tools(self) -> List[BaseTool]:
        """Returns all tools from all factories."""
        all_tools = []
        all_tools.extend(self.task_factory.get_all_tools())
        all_tools.extend(self.email_factory.get_all_tools())
        return all_tools

combined_factory = CombinedToolFactory()
all_tools = combined_factory.get_all_tools()

Putting It All Together: Using Tools with LangGraph

Now that we've built our tool factory, let's see how it integrates with actual AI agents. We'll use LangGraph as our agent framework for convenience, as it provides good support for tool-calling. LangGraph isn't a requirement, though: our OpenAPI-based approach works with any agent framework that supports LangChain tools, and it can easily be adapted to other frameworks as well.

python
import os

from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

def create_task_agent():
    # initialize our tool factory
    factory = TaskAPIToolFactory()
    tools = factory.get_all_tools()
    
    # create the agent with our generated tools
    llm = ChatOpenAI(
        model="openai/gpt-4.1-mini",
        base_url="https://openrouter.ai/api/v1",  # for OpenRouter
        api_key=os.getenv("OPENROUTER_API_KEY"),
    )
    agent = create_react_agent(llm, tools)
    
    return agent

agent = create_task_agent()

# the agent now has access to your tools
response = agent.invoke({
    "messages": [
        ("human", "Create a task to review our Q4 financial reports, due January 15th, then list all pending tasks")
    ]
})

Of course, there's one small problem with this example: we just made up our OpenAPI specification, so there's no actual backend server running to handle these tool calls! Fortunately, having a well-defined OpenAPI spec comes to our rescue: we can automatically generate mock backends from it for testing and development.

Testing with Prism Mock Servers

Prism is a powerful mock server that generates realistic responses from OpenAPI specifications: it creates a fully functional HTTP server that validates requests and returns example responses based on your spec.

To run a mock server for our Task Management API:

bash
npx @stoplight/prism-cli mock openapi_tasks_api.yaml

This starts a mock server (typically on http://127.0.0.1:4010) that:

  • validates incoming requests against your OpenAPI schema
  • returns realistic mock responses using examples or generated data
  • provides immediate feedback for testing your agent integration

You can then update the base_url in your agent configuration to point to the mock server:

python
def create_task_agent():
    factory = TaskAPIToolFactory(base_url="http://127.0.0.1:4010")
    tools = factory.get_all_tools()
    llm = ChatOpenAI(
      model="openai/gpt-4.1-mini",
      base_url="https://openrouter.ai/api/v1",  # for OpenRouter
      api_key=os.getenv("OPENROUTER_API_KEY"),
    )
    agent = create_react_agent(llm, tools)
    return agent

agent = create_task_agent()

response = agent.invoke({
    "messages": [
        ("human", "Create a task to review our Q4 financial reports, due January 15th, then list all pending tasks")
    ]
})

# extract and show the tool calls made by the agent
ai_message = response["messages"][1]
print("The agent made these tool calls:")
for tool_call in ai_message.tool_calls:
    print(f"- {tool_call['name']}: {tool_call['args']}")

print(f"\nFinal response: {response['messages'][-1].content}")

We can now see:

[Screenshot: the agent's tool calls and the mock server's responses]

From the first tool call we can see that the agent correctly populated the fields in the request body, and in the second tool call it correctly sent "pending" for the status parameter. The return data is made up by the mock server, but we can at least see that the agent can access the tools and use the tool descriptions to infer parameter values.

An additional benefit here is that the pydantic models automatically validate tool inputs, ensuring required fields are provided, types are correct, and invalid calls are rejected before reaching the API. This creates a robust validation layer that prevents malformed requests and provides clear error messages to agents.
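
To illustrate that validation layer, here is a small self-contained sketch (assuming pydantic v2; CreateTaskInput is a hand-written stand-in for the generated CreateTaskRequest model) showing a malformed call being rejected before any HTTP request is made:

```python
from typing import Optional

from pydantic import BaseModel, ValidationError


class CreateTaskInput(BaseModel):
    """Hand-written stand-in for the generated CreateTaskRequest model."""
    title: str                        # required by the spec
    description: Optional[str] = None
    due_date: Optional[str] = None


# a well-formed call passes validation
ok = CreateTaskInput(title="Review Q4 financial reports")
print(ok.title)  # Review Q4 financial reports

# a malformed call (missing the required title) is rejected client-side
try:
    CreateTaskInput(description="no title given")
except ValidationError as e:
    print(e.errors()[0]["loc"])  # ('title',)
```

When such a model is wired in as a tool's args_schema, the agent framework surfaces the validation error back to the LLM, which can then retry with corrected arguments instead of sending a broken request to the API.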

Comparing with MCP Servers

The astute reader might have noticed that there are similarities between the OpenAPI-based tool creation approach we've explored and the Model Context Protocol (MCP) that Anthropic introduced in late 2024. Both fundamentally address the same challenge: how do we enable AI agents to discover, understand, and effectively use external tools and services?

This convergence isn't accidental: both approaches recognize that successful agent-tool integration requires standardized, machine-readable interfaces with detailed metadata. Let's examine how the two approaches compare and when you might choose one over the other:

Core Similarities

Both approaches share fundamental design principles that make them effective for agent-tool integration:

  1. Standardized interface: both OpenAPI and MCP provide standardized ways to describe available operations, enabling consistent agent behavior across different tools
  2. Tool discovery: agents can dynamically discover available tools and their capabilities without manual configuration
  3. Type safety: both support strongly-typed parameters and responses, enabling robust validation and error prevention
  4. Rich metadata: both allow detailed descriptions of operations, parameters, and expected behaviors to guide agent decision-making
  5. Language agnostic: both can be implemented in any programming language, promoting ecosystem diversity

Where They Diverge

Despite these similarities, the approaches differ in several key areas that reflect their origins and intended use cases:

  1. Transport layer:

    • OpenAPI: HTTP/REST-based with stateless requests
    • MCP: uses stdio or HTTP-based transports (Server-Sent Events, and streamable HTTP in newer spec revisions) for persistent connections, enabling real-time communication
  2. Design philosophy:

    • OpenAPI: originally designed for web APIs and human-developer consumption, then naturally adapted for agent use
    • MCP: purpose-built for AI agent integration from the ground up, with agent needs as the primary consideration
  3. Context management:

    • OpenAPI: each request is independent, following REST principles
    • MCP: supports persistent context across multiple interactions, allowing for more sophisticated workflows
  4. Tool complexity:

    • OpenAPI: excels at CRUD operations and straightforward request/response patterns
    • MCP: better suited for complex, stateful interactions, streaming responses, and workflows that require memory
  5. Ecosystem maturity:

    • OpenAPI: benefits from over a decade of tooling, generators, documentation tools, and community knowledge
    • MCP: newer but rapidly growing, with native support emerging in major AI platforms

Choosing the Right Approach

Understanding these differences helps determine which approach best fits your use case:

Choose OpenAPI-based tools when:

  • you already have REST APIs that you want to expose to agents
  • you need to support both human developers and AI agents with the same interface
  • you want to leverage existing API infrastructure and tooling
  • your operations are primarily stateless CRUD operations
  • you value the mature ecosystem and extensive tooling support
  • you prefer the familiar HTTP/REST paradigm

Choose MCP when:

  • building agent-first tools from scratch with no existing API constraints
  • you need persistent connections, streaming responses, or real-time interactions
  • complex state management across multiple tool calls is required
  • you want native integration with MCP-supporting AI platforms
  • your tools benefit from persistent context between agent interactions
  • you're building workflows that require sophisticated agent-tool collaboration

The Best of Both Worlds

Rather than viewing these approaches as competing alternatives, the most pragmatic approach may be to leverage their complementary strengths. Since both fundamentally solve the same core problems with different trade-offs, you can use OpenAPI specs to generate tool schemas for MCP servers, or build MCP servers that proxy existing REST APIs while adding state management capabilities.
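
As a sketch of that bridging idea: an MCP tool is described by a name, a description, and a JSON Schema for its input - all information an OpenAPI operation already carries. The mapping below is illustrative rather than a complete converter, and the operation dict mirrors the createTask operation from our spec:

```python
# the createTask operation, as it appears (parsed) in our spec
operation = {
    "operationId": "createTask",
    "summary": "Create a new task",
    "description": "Creates a new task with the provided details",
    "requestBody": {
        "required": True,
        "content": {
            "application/json": {
                "schema": {
                    "type": "object",
                    "required": ["title"],
                    "properties": {
                        "title": {"type": "string", "description": "The task title"},
                        "description": {"type": "string", "description": "Detailed description of the task"},
                        "due_date": {"type": "string", "format": "date-time", "description": "When the task is due"},
                    },
                }
            }
        },
    },
}


def openapi_operation_to_mcp_tool(op: dict) -> dict:
    """Map one OpenAPI operation onto MCP's tool shape (name/description/inputSchema)."""
    # reuse the JSON request-body schema as the tool's input schema
    body_schema = (
        op.get("requestBody", {})
        .get("content", {})
        .get("application/json", {})
        .get("schema", {"type": "object", "properties": {}})
    )
    return {
        "name": op["operationId"],
        "description": op.get("description") or op.get("summary", ""),
        "inputSchema": body_schema,
    }


tool = openapi_operation_to_mcp_tool(operation)
print(tool["name"])                     # createTask
print(tool["inputSchema"]["required"])  # ['title']
```

A real converter would also fold query and path parameters into the input schema, but the core observation stands: the spec already contains everything MCP needs to describe a tool.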

Conclusion

The effectiveness of OpenAPI specs for agent tool use isn't "unreasonable" at all - it's a natural evolution of their core strengths. By providing machine-readable, strongly-typed, self-documenting API descriptions, OpenAPI specs give AI agents exactly what they need to understand and use external services effectively. The next time you're designing tools for AI agents, consider starting with an OpenAPI specification. You'll benefit from over a decade of established patterns, extensive tooling, and a mature ecosystem. Whether you're exposing existing REST APIs to agents or designing new services with agent integration in mind, OpenAPI provides a robust foundation that supports both human developers and AI agents.

Max Nihlen

Co-Founder & CTO

Contributing author at ekona, sharing insights on AI strategy and implementation for enterprise organisations.
