Overview
The Pro Search Classifier is an intelligent system that automatically determines whether a query requires the advanced multi-step tool usage of Pro Search or can be effectively answered with standard Fast Search. This optimization helps you balance performance needs with cost efficiency. In addition to the explicit "pro" and "fast" search types, you can use "auto" to let the classifier make the optimal decision for each query.
How It Works
When you set search_type: "auto", the classifier analyzes your query across multiple dimensions:
Query Complexity Analysis
- Number of sub-questions or aspects
- Requirement for comparative analysis
- Need for multi-step reasoning
- Complexity of information synthesis required
Classification Decision
- Pro Search for complex, multi-faceted queries requiring multi-step tool usage
- Fast Search for straightforward information retrieval
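The dimensions above can be illustrated with a toy heuristic. This is NOT Perplexity's actual classifier (which is not published on this page) — just a minimal sketch of how a complexity score over those dimensions could drive a pro/fast decision:

```python
# Illustrative toy heuristic only -- not Perplexity's real classifier.
# It scores a query on the documented dimensions: number of sub-questions,
# comparative analysis, and multi-step reasoning/synthesis cues.

COMPARATIVE_CUES = ("compare", "versus", " vs ", "difference between", "pros and cons")
REASONING_CUES = ("analyze", "synthesize", "evaluate", "research")

def toy_classify(query: str) -> str:
    """Return "pro" or "fast" based on a rough complexity score."""
    q = query.lower()
    score = 0
    score += 1 if q.count("?") > 1 else 0              # multiple sub-questions
    score += 1 if q.count(" and ") >= 2 else 0         # multiple aspects
    score += 1 if any(c in q for c in COMPARATIVE_CUES) else 0  # comparison
    score += 1 if any(c in q for c in REASONING_CUES) else 0    # multi-step reasoning
    return "pro" if score >= 2 else "fast"
```

For example, a query like "Compare the top Python web frameworks and analyze their performance and community support" would score on several dimensions and route to "pro", while "What is the capital of France?" would score zero and route to "fast".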
Classification Patterns
Queries Classified as Pro Search
Complex queries that benefit from multi-step tool usage are automatically routed to Pro Search:
Multi-Part Questions
- Requires information about three different frameworks
- Needs comparative analysis across multiple dimensions
- Involves gathering expert opinions and recommendations
- Benefits from synthesis of diverse sources
- Multiple web searches for each framework
- URL fetching for benchmark data and official documentation
Research Synthesis
- Requires finding multiple research papers
- Needs access to full paper content, not just abstracts
- Involves extracting specific data (sample sizes, limitations)
- Requires synthesis across multiple studies
- Web search for recent peer-reviewed papers
- fetch_url_content to read full papers
- Information extraction and synthesis
Time-Sensitive Complex Analysis
- Requires very recent information
- Needs multi-source verification
- Involves sector-by-sector analysis
- Benefits from expert opinion gathering
- Multiple targeted web searches
- URL fetching for financial analysis reports
- Synthesis of diverse expert opinions
Queries Classified as Fast Search
Straightforward queries that don't require multi-step reasoning are efficiently handled by Fast Search:
Simple Factual Questions
- Single, well-established fact
- No calculation or analysis needed
- Information readily available in search snippets
- Single web search
- Direct answer from search results
- No need for multi-step reasoning
Straightforward Information Retrieval
- Single product inquiry
- Information available in product descriptions
- No comparative analysis required
- No calculations needed
- Search for product specifications
- Extract and list features
- Synthesize from search results
Single-Topic Queries
- Single concept definition
- No multi-part analysis required
- Standard information readily available
- Search for machine learning explanations
- Synthesize clear definition
- Provide context from reliable sources
Basic Definitional Requests
- Simple definition request
- No complex analysis needed
- Information readily available
- Quick search for API definition
- Explain acronym and basic usage
- Provide clear, concise answer
Cost Implications
Understanding the cost difference helps you optimize your API usage:
Classified as Pro Search
- Complex multi-part questions
- Requests requiring calculation or analysis
- Comparative research across sources
- Time-sensitive information needs
Classified as Fast Search
- Simple factual questions
- Straightforward information retrieval
- Single-topic queries
- Basic definitional requests
Pricing Comparison
Pro Search Rates:
- Input: $3 per 1M tokens
- Output: $15 per 1M tokens
- Request fees: $14-$22 per 1,000 requests (based on context size)
Fast Search Rates:
- Input: $3 per 1M tokens
- Output: $15 per 1M tokens
- Request fees: $6-$14 per 1,000 requests (based on context size - same as standard Sonar Pro)
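The rates above can be turned into a quick cost estimate. The low end of each request-fee range is used here as an assumption; actual fees depend on context size:

```python
# Cost-estimate sketch using the rates listed in this document.
# Request fees vary with context size; the low end of each range is
# assumed here for simplicity.

INPUT_PER_M = 3.00    # $ per 1M input tokens (both modes)
OUTPUT_PER_M = 15.00  # $ per 1M output tokens (both modes)
REQUEST_FEE_PER_K = {"pro": 14.00, "fast": 6.00}  # $ per 1,000 requests (low end)

def estimate_cost(mode: str, input_tokens: int, output_tokens: int, requests: int = 1) -> float:
    """Rough dollar cost for a batch: totals of input/output tokens plus request fees."""
    token_cost = (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000
    request_cost = requests * REQUEST_FEE_PER_K[mode] / 1_000
    return round(token_cost + request_cost, 6)
```

For example, 1,000 requests totaling 2M input and 500K output tokens cost about $27.50 under Pro Search ($13.50 in tokens + $14 in request fees) versus about $19.50 under Fast Search — the token rates are identical, so the request fee drives the difference.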
Usage Examples
Using Automatic Classification
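The page's original code sample is not reproduced here; the following is a minimal sketch. The endpoint URL and model name are assumptions based on typical Perplexity API usage — only the search_type: "auto" parameter comes from this documentation:

```python
# Sketch of a request body using automatic classification.
# Assumptions: the chat-completions endpoint and "sonar-pro" model name
# are illustrative; search_type: "auto" is the parameter documented here.

import json

API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint

payload = {
    "model": "sonar-pro",  # assumed model name
    "messages": [
        {"role": "user", "content": "Compare the top three Python web frameworks."}
    ],
    "search_type": "auto",  # let the classifier pick Pro vs Fast per query
}

body = json.dumps(payload)  # serialize for your HTTP client of choice
```

You would POST this body with your API key, e.g. `requests.post(API_URL, json=payload, headers={"Authorization": f"Bearer {API_KEY}"})`.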
Manual Override
You can still manually specify the search type when you know what you need:
- Force Pro Search
- Force Fast Search
Forcing Pro Search makes sense when:
- You know the query needs multi-step reasoning
- Previous auto-classification was Fast but you need deeper analysis
- Critical queries where you want maximum capability
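A manual override simply pins search_type to "pro" or "fast" instead of "auto". As above, the payload shape beyond search_type is an assumption:

```python
# Manual-override sketch: pin the search type explicitly.
# The model name and payload shape are assumptions; search_type values
# ("pro", "fast", "auto") are the ones documented on this page.

def build_payload(query: str, search_type: str) -> dict:
    """Build a request body with an explicit search type."""
    if search_type not in ("pro", "fast", "auto"):
        raise ValueError("search_type must be 'pro', 'fast', or 'auto'")
    return {
        "model": "sonar-pro",  # assumed model name
        "messages": [{"role": "user", "content": query}],
        "search_type": search_type,
    }

deep = build_payload("Analyze Q3 earnings trends across the tech sector", "pro")  # force Pro
quick = build_payload("What is the capital of France?", "fast")                   # force Fast
```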
Best Practices
Default to automatic classification
Use search_type: "auto" and let the classifier optimize.
Monitor classification patterns
- Review queries that consistently use Pro Search
- Identify opportunities to rephrase queries for Fast Search when appropriate
- Understand which user questions require advanced capabilities
Use manual override strategically
- You have specific performance requirements
- Testing and comparing Pro vs Fast results
- Building features with known complexity levels
Design queries effectively
Classification Transparency
You can verify the classification decision in the response metadata.
When to Use Each Mode
Auto (Recommended)
Manual Pro
Manual Fast
Common Questions
How accurate is the classifier?
If the classifier's decisions don't match your expectations, you can:
- Rephrase queries to be more specific
- Use manual override for those query types
- Consider your use case’s specific needs
Can I see which mode was used?
Yes — the response metadata shows:
- Which search type was used
- Why the classification was made (when using auto)
- Cost breakdown by search type
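Reading those fields might look like the following hypothetical sketch. The actual metadata key names are NOT specified on this page, so "search_type", "classification_reason", and "cost" below are placeholders for whatever keys your responses really carry:

```python
# Hypothetical sketch only: field names are placeholders, not confirmed
# by this documentation. Inspect a real response to find the actual keys.

response = {  # stand-in for a parsed JSON API response
    "usage": {
        "search_type": "pro",                         # which mode ran
        "classification_reason": "multi-step query",  # why (when using auto)
        "cost": {"request_fee": 0.014},               # cost breakdown
    }
}

usage = response.get("usage", {})
mode = usage.get("search_type", "unknown")
reason = usage.get("classification_reason", "n/a")
```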
Does automatic classification add latency?
What if I disagree with the classification?
If a classification doesn't fit your needs, consider:
- Making queries more specific
- Using manual override for those query types
- Reviewing whether your use case needs consistent Pro or Fast mode