

Overview

The people_search tool enables models to find people and retrieve their professional information such as names, job titles, and companies, powering workflows like lead research, recruiting pipelines, and organizational mapping. Use it when your application needs to:
  • Look up a specific person’s professional background
  • Find employees at a company by role or title
  • Identify professionals in a particular field or location
  • Research leadership teams or organizational structures
The model decides when to invoke people_search based on your prompt and instructions.
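In practice, that means you enable the tool on the request and let the model call it when the prompt warrants. As a rough sketch (the payload field names and tool-enabling shape here are assumptions inferred from the tier configurations on this page, not confirmed API details):

```python
# Sketch of a request body that makes people_search available to the model.
# Field names ("model", "messages", "tools") are assumptions, not confirmed
# API details -- check the Agent API reference before relying on them.
import json

def build_people_search_request(prompt: str) -> dict:
    """Build a request body that exposes people_search to the model."""
    return {
        "model": "openai/gpt-5-mini",           # any tier model from this page
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{"type": "people_search"}],   # model decides when to invoke it
    }

body = build_people_search_request("Who is the Head of Design at Figma?")
print(json.dumps(body, indent=2))
```

The model is not forced to call the tool; a people-oriented prompt like the one above simply makes an invocation likely.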

Query tips

For the best results, guide the model with specific details in your prompt:
| Approach | Example prompt |
| --- | --- |
| Name + company | "Find John Smith who works at Google" |
| Role + company | "Who is the Head of Design at Figma?" |
| Role + location | "Find marketing directors in San Francisco" |
| Role + field | "Find machine learning researchers at Stanford" |
The tool works best for people-related queries — it is not suited for general web search.

Tiered Configurations

The following five tiered configurations span the speed/quality tradeoff for workloads that mix people_search with web_search and fetch_url. Each tier defines a model, reasoning effort, tool selection, per-tool token budgets, and step limits. Use them as starting points and adjust per your latency, depth, and accuracy needs.
| Tier | Model | Reasoning | Tools | Max Steps | Use When |
| --- | --- | --- | --- | --- | --- |
| fast | google/gemini-3-flash-preview | low | web_search | 1 | Quick lookups; you want the fastest answer with a single tool call |
| pro | openai/gpt-5-mini | medium | people_search, web_search, fetch_url | 5 | Balanced people/web research with moderate depth |
| deep | google/gemini-3-flash-preview | high | people_search, web_search, fetch_url | 10 | Deeper analysis when latency budget is moderate but quality matters |
| advanced-deep | openai/gpt-5 | medium | people_search, web_search, fetch_url | 10 | High-quality, multi-step research with long context |
| ultra-deep | openai/gpt-5.5 | high | people_search, web_search, fetch_url | 50 | Maximum-depth investigations with the largest token budgets and step counts |
The bigtokens settings used by pro, deep, and advanced-deep refer to max_tokens=10000, max_tokens_per_page=1000, max_results_per_query=10, and max_results_per_request=30 on the people_search and web_search tools. The xltokens settings used by ultra-deep refer to max_tokens=20000, max_tokens_per_page=2000, max_results_per_query=30, and max_results_per_request=50.
ultra-deep heads-up: openai/gpt-5.5 with high reasoning and streaming may be flaky upstream. If requests hang, fall back to medium reasoning effort or disable streaming.
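The two presets can be captured as reusable constants so tool entries stay consistent across tiers. The numbers below come from this page; the dict layout and helper function are illustrative, not part of the API:

```python
# Per-tool token presets from the tier descriptions above.
# Only the numeric values come from this page; the structure is illustrative.
BIGTOKENS = {                      # used by pro, deep, and advanced-deep
    "max_tokens": 10000,
    "max_tokens_per_page": 1000,
    "max_results_per_query": 10,
    "max_results_per_request": 30,
}
XLTOKENS = {                       # used by ultra-deep
    "max_tokens": 20000,
    "max_tokens_per_page": 2000,
    "max_results_per_query": 30,
    "max_results_per_request": 50,
}

def tool_config(tool_type: str, preset: dict) -> dict:
    """Attach a token preset to a people_search or web_search tool entry."""
    return {"type": tool_type, **preset}
```

For example, `tool_config("web_search", XLTOKENS)` yields the web_search entry used by the ultra-deep tier.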

fast

Single-tool, low-latency configuration for quick factual lookups.
model: google/gemini-3-flash-preview
reasoning:
  effort: low
tools:
  - type: web_search
max_steps: 1

pro

Balanced configuration with all three tools enabled and moderate reasoning effort.
model: openai/gpt-5-mini
reasoning:
  effort: medium
tools:
  - type: people_search
    max_tokens: 10000
    max_tokens_per_page: 1000
    max_results_per_query: 10
    max_results_per_request: 30
  - type: web_search
    max_tokens: 10000
    max_tokens_per_page: 1000
    max_results_per_query: 10
    max_results_per_request: 30
  - type: fetch_url
max_steps: 5

deep

Higher reasoning effort and step count with a generous output budget for fuller multi-source answers.
model: google/gemini-3-flash-preview
reasoning:
  effort: high
tools:
  - type: people_search
    max_tokens: 10000
    max_tokens_per_page: 1000
    max_results_per_query: 10
    max_results_per_request: 30
  - type: web_search
    max_tokens: 10000
    max_tokens_per_page: 1000
    max_results_per_query: 10
    max_results_per_request: 30
  - type: fetch_url
max_steps: 10
max_tokens: 16000

advanced-deep

A frontier-model configuration for high-quality, multi-step research when latency budget is generous.
model: openai/gpt-5
reasoning:
  effort: medium
tools:
  - type: people_search
    max_tokens: 10000
    max_tokens_per_page: 1000
    max_results_per_query: 10
    max_results_per_request: 30
  - type: web_search
    max_tokens: 10000
    max_tokens_per_page: 1000
    max_results_per_query: 10
    max_results_per_request: 30
  - type: fetch_url
max_steps: 10

ultra-deep

Maximum-depth configuration with the largest token budgets, the highest step count, and xltokens per-tool settings. Best for exhaustive investigations.
openai/gpt-5.5 with high reasoning and streaming may be flaky upstream. If requests hang, switch to medium effort or use a non-streaming call.
model: openai/gpt-5.5
reasoning:
  effort: high
tools:
  - type: people_search
    max_tokens: 20000
    max_tokens_per_page: 2000
    max_results_per_query: 30
    max_results_per_request: 50
  - type: web_search
    max_tokens: 20000
    max_tokens_per_page: 2000
    max_results_per_query: 30
    max_results_per_request: 50
  - type: fetch_url
max_steps: 50
max_tokens: 32000
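To switch tiers programmatically, one option is a small lookup of each tier's headline parameters (model, reasoning effort, max steps) as listed above. The helper and its structure are illustrative, not part of the API:

```python
# Headline parameters for each tier on this page.
# The lookup-table approach is illustrative, not part of the API.
TIERS = {
    "fast":          ("google/gemini-3-flash-preview", "low",    1),
    "pro":           ("openai/gpt-5-mini",             "medium", 5),
    "deep":          ("google/gemini-3-flash-preview", "high",   10),
    "advanced-deep": ("openai/gpt-5",                  "medium", 10),
    "ultra-deep":    ("openai/gpt-5.5",                "high",   50),
}

def tier_params(name: str) -> dict:
    """Return the model, reasoning effort, and step limit for a named tier."""
    model, effort, max_steps = TIERS[name]
    return {"model": model, "reasoning": {"effort": effort}, "max_steps": max_steps}
```

Per-tool token settings (bigtokens or xltokens) would still be attached to each tool entry separately, as shown in the full configurations above.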

Tool Pricing

The people_search tool is billed at $5 per 1,000 tool invocations. See the Pricing page for full details.
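At that rate, estimating tool spend is simple arithmetic; for example, 2,500 invocations cost $12.50:

```python
PRICE_PER_1000_CALLS = 5.00  # USD, people_search pricing from this page

def people_search_cost(invocations: int) -> float:
    """Cost in USD for a given number of people_search invocations."""
    return invocations * PRICE_PER_1000_CALLS / 1000

print(people_search_cost(2500))  # 2,500 invocations -> 12.5 (USD)
```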

Next Steps

Tools Overview

See all available Agent API tools including Web Search, Fetch URL, and more.

Agent API Quickstart

Get started with the Agent API.