
This guide covers production-ready function calling patterns that go beyond the basics. You will learn the complete multi-turn flow, multi-function orchestration, combining custom functions with built-in tools, robust error handling, and parallel function call processing.
This guide assumes familiarity with the Agent API and its tool definitions. For parameter reference and basic usage, see the Tools documentation.

Prerequisites

Install the Perplexity SDK:
pip install perplexityai
If you don’t have an API key yet, navigate to the API Keys tab in the API Portal and generate a new key.
Then export your API key as an environment variable:
export PERPLEXITY_API_KEY="your-api-key"
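Every example in this guide reads the key from the environment, so a quick sanity check before the first request can save a confusing authentication error later. A minimal sketch (`require_api_key` is a local helper, not part of the SDK):

```python
import os

def require_api_key() -> str:
    """Fail fast if the key is missing, before any API call is attempted."""
    key = os.environ.get("PERPLEXITY_API_KEY")
    if not key:
        raise RuntimeError("PERPLEXITY_API_KEY is not set")
    return key
```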
For a reference of built-in tools and schema fields, see Tools. This guide focuses on orchestration patterns in application code.

Complete Multi-Turn Flow

The core function calling loop follows a specific pattern: send a request with tool definitions, detect function_call items in the response, execute your functions locally, then return the results as function_call_output items.
from perplexity import Perplexity
import json

client = Perplexity()

# Step 1: Define tools
tools = [
    {
        "type": "function",
        "name": "lookup_order",
        "description": "Look up an order by order ID. Returns order status, items, and shipping info.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {
                    "type": "string",
                    "description": "The unique order identifier, e.g. ORD-12345"
                }
            },
            "required": ["order_id"]
        }
    }
]

# Your actual function implementation
def lookup_order(order_id: str) -> dict:
    # In production, query your database or order management system
    return {
        "order_id": order_id,
        "status": "shipped",
        "items": ["Wireless Headphones", "USB-C Cable"],
        "tracking_number": "1Z999AA10123456784",
        "estimated_delivery": "2026-03-02"
    }

# Step 2: Send the initial request
response = client.responses.create(
    model="anthropic/claude-sonnet-4-6",
    tools=tools,
    input="Where is my order ORD-98712?"
)

# Step 3: Process the response and handle function calls
next_input = [item.model_dump() for item in response.output]

for item in response.output:
    if item.type == "function_call":
        # Step 4: Parse arguments and execute the function
        args = json.loads(item.arguments)
        result = lookup_order(**args)

        # Step 5: Append the function result
        next_input.append({
            "type": "function_call_output",
            "call_id": item.call_id,
            "output": json.dumps(result)
        })

# Step 6: Send results back to get the final response
# (pass tools again so the model can make follow-up calls if needed)
final_response = client.responses.create(
    model="anthropic/claude-sonnet-4-6",
    tools=tools,
    input=next_input
)

print(final_response.output_text)
Always use json.loads() (Python) or JSON.parse() (TypeScript) on the arguments field. It is a JSON string, not a parsed object.

Multi-Function Orchestration

When you provide multiple tools, the model decides which to call and in what order. This example registers three functions that work together to answer a complex query.
from perplexity import Perplexity
import json

client = Perplexity()

# Define multiple tools
tools = [
    {
        "type": "function",
        "name": "get_weather",
        "description": "Get the current weather forecast for a city. Returns temperature, conditions, and precipitation chance.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"]
        }
    },
    {
        "type": "function",
        "name": "get_calendar_events",
        "description": "Retrieve today's calendar events for a user. Returns a list of events with times and locations.",
        "parameters": {
            "type": "object",
            "properties": {
                "user_id": {"type": "string", "description": "The user ID"},
                "date": {"type": "string", "description": "Date in YYYY-MM-DD format"}
            },
            "required": ["user_id", "date"]
        }
    },
    {
        "type": "function",
        "name": "send_email",
        "description": "Send an email to a recipient with a subject and body.",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {"type": "string", "description": "Recipient email address"},
                "subject": {"type": "string", "description": "Email subject line"},
                "body": {"type": "string", "description": "Email body text"}
            },
            "required": ["to", "subject", "body"]
        }
    }
]

# Function implementations
def get_weather(city: str) -> dict:
    return {"city": city, "temp_f": 72, "conditions": "Partly cloudy", "precipitation_chance": 0.15}

def get_calendar_events(user_id: str, date: str) -> dict:
    return {
        "events": [
            {"time": "09:00", "title": "Team standup", "location": "Conference Room B"},
            {"time": "12:00", "title": "Lunch with client", "location": "Riverside Park (outdoor)"},
            {"time": "15:00", "title": "Sprint review", "location": "Zoom"}
        ]
    }

def send_email(to: str, subject: str, body: str) -> dict:
    # In production, integrate with your email service
    return {"status": "sent", "message_id": "msg-20260226-001"}

# Map function names to implementations
function_map = {
    "get_weather": get_weather,
    "get_calendar_events": get_calendar_events,
    "send_email": send_email,
}

# Multi-turn loop: keep sending requests until no more function calls
input_messages = [
    {"role": "user", "content": (
        "I'm user U-100 in San Francisco. What's my schedule today (2026-02-26) "
        "and is the weather good for my outdoor events? "
        "If there's rain risk, email me at alice@example.com with a reminder to bring an umbrella."
    )}
]

response = client.responses.create(
    model="anthropic/claude-sonnet-4-6",
    tools=tools,
    input=input_messages
)

# Loop until the model produces a final text response with no pending function calls
while any(item.type == "function_call" for item in response.output):
    next_input = [item.model_dump() for item in response.output]

    for item in response.output:
        if item.type == "function_call":
            args = json.loads(item.arguments)
            fn = function_map[item.name]
            result = fn(**args)

            next_input.append({
                "type": "function_call_output",
                "call_id": item.call_id,
                "output": json.dumps(result)
            })

    response = client.responses.create(
        model="anthropic/claude-sonnet-4-6",
        tools=tools,
        input=next_input
    )

print(response.output_text)
The model may call functions across multiple turns. The while loop above keeps running until the model finishes all function calls and produces a final text response. In some turns the model may call one function, and in the next turn call another based on the results it received.
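One way to keep the dispatch map from drifting out of sync with the implementations is a small registration decorator. A sketch of that pattern with stub implementations (`register` is a local convention, not an SDK feature):

```python
# Registry so the dispatch map is built automatically as functions are defined.
function_map: dict = {}

def register(fn):
    function_map[fn.__name__] = fn
    return fn

@register
def get_weather(city: str) -> dict:
    return {"city": city, "temp_f": 72}

@register
def send_email(to: str, subject: str, body: str) -> dict:
    return {"status": "sent"}

# Dispatch by the name the model sent back:
result = function_map["get_weather"](city="San Francisco")
```

The tool schemas still need to be declared separately; the registry only guarantees that every declared name resolves to a callable.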
Combining Built-in Tools with Custom Functions

You can mix built-in tools like web_search and fetch_url with your own custom functions in the same tools array. The model decides autonomously which tool to use. This is powerful for workflows that need live web data combined with actions in your own systems.
from perplexity import Perplexity
import json

client = Perplexity()

tools = [
    # Built-in web search
    {"type": "web_search"},
    # Custom function to persist data
    {
        "type": "function",
        "name": "save_to_db",
        "description": "Save a research summary to the internal database. Call this after gathering information to persist findings.",
        "parameters": {
            "type": "object",
            "properties": {
                "topic": {"type": "string", "description": "The research topic"},
                "summary": {"type": "string", "description": "A concise summary of the findings"},
                "sources": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "List of source URLs"
                }
            },
            "required": ["topic", "summary", "sources"]
        }
    }
]

def save_to_db(topic: str, summary: str, sources: list) -> dict:
    # In production, write to your database
    record_id = "rec-" + topic.lower().replace(" ", "-")[:20]
    print(f"Saved to DB: {record_id}")
    return {"record_id": record_id, "status": "saved"}

response = client.responses.create(
    model="anthropic/claude-sonnet-4-6",
    tools=tools,
    input="Research the latest developments in solid-state batteries, then save your findings to our database.",
    instructions="First search the web for current information, then use save_to_db to persist your summary."
)

# The model will use web_search automatically (no function_call for built-in tools),
# then call save_to_db which we need to handle.
while any(item.type == "function_call" for item in response.output):
    next_input = [item.model_dump() for item in response.output]

    for item in response.output:
        if item.type == "function_call":
            args = json.loads(item.arguments)
            result = save_to_db(**args)
            next_input.append({
                "type": "function_call_output",
                "call_id": item.call_id,
                "output": json.dumps(result)
            })

    response = client.responses.create(
        model="anthropic/claude-sonnet-4-6",
        tools=tools,
        input=next_input
    )

print(response.output_text)
Built-in tools like web_search are executed server-side by the API. You only need to handle function_call items for your custom functions. The model seamlessly interleaves built-in and custom tool usage.

Error Handling Patterns

When a function call fails, return a structured error in the function_call_output so the model can adapt its response. Never silently swallow errors; the model can often recover or inform the user gracefully.
from perplexity import Perplexity
import json
import traceback

client = Perplexity()

def execute_function(name: str, args: dict) -> dict:
    """Dispatch and execute a function call with error handling."""
    function_map = {
        "lookup_order": lookup_order,
        "cancel_order": cancel_order,
    }

    if name not in function_map:
        return {"error": True, "message": f"Unknown function: {name}"}

    try:
        result = function_map[name](**args)
        return result
    except KeyError as e:
        return {"error": True, "message": f"Missing required field: {e}"}
    except TimeoutError:
        return {"error": True, "message": "The request timed out. Please try again."}
    except Exception as e:
        return {"error": True, "message": f"Function failed: {str(e)}"}


def lookup_order(order_id: str) -> dict:
    if not order_id.startswith("ORD-"):
        raise ValueError(f"Invalid order ID format: {order_id}")
    return {"order_id": order_id, "status": "delivered"}


def cancel_order(order_id: str) -> dict:
    # Simulate a failure
    raise ConnectionError("Order service is temporarily unavailable")


def run_agent(user_input: str, tools: list) -> str:
    """Run the full function calling loop with error handling."""
    response = client.responses.create(
        model="anthropic/claude-sonnet-4-6",
        tools=tools,
        input=user_input
    )

    max_turns = 10
    turn = 0

    while any(item.type == "function_call" for item in response.output) and turn < max_turns:
        next_input = [item.model_dump() for item in response.output]

        for item in response.output:
            if item.type == "function_call":
                args = json.loads(item.arguments)
                result = execute_function(item.name, args)

                next_input.append({
                    "type": "function_call_output",
                    "call_id": item.call_id,
                    "output": json.dumps(result)
                })

        response = client.responses.create(
            model="anthropic/claude-sonnet-4-6",
            tools=tools,
            input=next_input
        )
        turn += 1

    if turn >= max_turns:
        return "Error: Maximum function call turns exceeded."

    return response.output_text
Key principles for error handling:
  • Return errors as structured data, not exceptions. Include "error": true and a human-readable "message" so the model can relay the issue to the user.
  • Catch specific exceptions (timeouts, auth failures, validation errors) and map them to clear messages.
  • Cap the number of turns to prevent infinite loops.
  • Never return raw stack traces to the model. They waste tokens and may leak internal details.
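Transient failures such as the ConnectionError simulated above are often worth retrying once or twice before reporting back to the model. A sketch of a generic retry wrapper (`with_retries` and the backoff values are illustrative, not part of the SDK):

```python
import time

# Exceptions worth retrying; anything else fails immediately.
TRANSIENT = (TimeoutError, ConnectionError)

def with_retries(fn, args: dict, attempts: int = 3, base_delay: float = 0.5) -> dict:
    """Run fn(**args), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn(**args)
        except TRANSIENT as e:
            if attempt == attempts - 1:
                return {"error": True, "message": f"Failed after {attempts} attempts: {e}"}
            time.sleep(base_delay * (2 ** attempt))
        except Exception as e:
            # Non-transient errors are returned to the model immediately
            return {"error": True, "message": str(e)}
    return {"error": True, "message": "No attempts were made"}
```

Wrap the dispatch call (`with_retries(function_map[name], args)`) rather than each implementation, so retry policy lives in one place.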

Parallel Function Calls

When the model determines that multiple function calls are independent, it may return several function_call items in a single response. Process all of them before sending results back in one batch.
from perplexity import Perplexity
import json
from concurrent.futures import ThreadPoolExecutor

client = Perplexity()

tools = [
    {
        "type": "function",
        "name": "get_stock_price",
        "description": "Get the current stock price for a ticker symbol.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {"type": "string", "description": "Stock ticker symbol, e.g. AAPL"}
            },
            "required": ["ticker"]
        }
    },
    {
        "type": "function",
        "name": "get_company_info",
        "description": "Get basic company information for a ticker symbol.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {"type": "string", "description": "Stock ticker symbol"}
            },
            "required": ["ticker"]
        }
    }
]

def get_stock_price(ticker: str) -> dict:
    prices = {"AAPL": 245.12, "GOOGL": 192.45, "TSLA": 371.80}
    return {"ticker": ticker, "price": prices.get(ticker, 0.0), "currency": "USD"}

def get_company_info(ticker: str) -> dict:
    info = {
        "AAPL": {"name": "Apple Inc.", "sector": "Technology", "market_cap": "3.7T"},
        "GOOGL": {"name": "Alphabet Inc.", "sector": "Technology", "market_cap": "2.4T"},
    }
    return info.get(ticker, {"name": "Unknown", "sector": "Unknown", "market_cap": "N/A"})

function_map = {
    "get_stock_price": get_stock_price,
    "get_company_info": get_company_info,
}

response = client.responses.create(
    model="anthropic/claude-sonnet-4-6",
    tools=tools,
    input="Compare the current stock prices and company details for AAPL and GOOGL."
)

while any(item.type == "function_call" for item in response.output):
    # Collect all pending function calls
    pending_calls = [item for item in response.output if item.type == "function_call"]
    next_input = [item.model_dump() for item in response.output]

    # Execute all function calls in parallel
    def run_call(item):
        args = json.loads(item.arguments)
        result = function_map[item.name](**args)
        return {
            "type": "function_call_output",
            "call_id": item.call_id,
            "output": json.dumps(result)
        }

    with ThreadPoolExecutor(max_workers=len(pending_calls)) as executor:
        results = list(executor.map(run_call, pending_calls))

    next_input.extend(results)

    response = client.responses.create(
        model="anthropic/claude-sonnet-4-6",
        tools=tools,
        input=next_input
    )

print(response.output_text)
The model may emit multiple function_call items in a single response when it determines the calls are independent. Using ThreadPoolExecutor (Python) or Promise.all (TypeScript) lets you execute them concurrently, reducing total latency.
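If your application is async, the same fan-out works with asyncio.gather, pushing blocking implementations into threads via asyncio.to_thread. A condensed sketch with a stub implementation, assuming the same call_id, name, and arguments fields as the example above:

```python
import asyncio
import json

def get_stock_price(ticker: str) -> dict:
    # Stub implementation; replace with a real data source
    return {"ticker": ticker, "price": 245.12}

function_map = {"get_stock_price": get_stock_price}

async def run_call(call_id: str, name: str, arguments: str) -> dict:
    # to_thread keeps blocking implementations off the event loop
    args = json.loads(arguments)
    result = await asyncio.to_thread(function_map[name], **args)
    return {
        "type": "function_call_output",
        "call_id": call_id,
        "output": json.dumps(result),
    }

async def run_all(calls: list[tuple[str, str, str]]) -> list[dict]:
    # gather preserves input order, so outputs line up with calls
    return await asyncio.gather(*(run_call(*c) for c in calls))

outputs = asyncio.run(run_all([
    ("call_1", "get_stock_price", '{"ticker": "AAPL"}'),
    ("call_2", "get_stock_price", '{"ticker": "GOOGL"}'),
]))
```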

Next Steps

Tools Reference

Review built-in tools and custom function schema fields.

Models

Choose a model for your function-calling workload.

Agent API Quickstart

Get up and running with the Agent API in minutes.

Output Control

Combine function calling with structured outputs and response shaping.