Use Perplexity’s Sonar API with OpenAI’s client libraries for seamless integration.
Base URL: `https://api.perplexity.ai`

Request parameters:

- `model` - Model name (use Perplexity model names)
- `messages` - Chat messages array
- `temperature` - Sampling temperature (0-2)
- `max_tokens` - Maximum tokens in response
- `top_p` - Nucleus sampling parameter
- `frequency_penalty` - Frequency penalty (-2.0 to 2.0)
- `presence_penalty` - Presence penalty (-2.0 to 2.0)
- `stream` - Enable streaming responses
- `search_domain_filter` - Limit or exclude specific domains
- `search_recency_filter` - Filter by content recency
- `return_images` - Include image URLs in response
- `return_related_questions` - Include related questions
- `search_mode` - "web" (default) or "academic" mode selector

Response fields:

- `choices[0].message.content` - The AI-generated response
- `model` - The model name used
- `usage` - Token consumption details
- `id`, `created`, `object` - Standard response metadata
- `search_results` - Array of web sources with titles, URLs, and dates
- `usage.search_context_size` - Search context setting used
Notes:

- Use Perplexity model names (`sonar-pro`, `sonar-reasoning`, etc.)
- Perplexity-specific parameters are passed via `extra_body` (Python) or as root fields (TypeScript)
- Authentication uses the `Bearer` token format in the `Authorization` header
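The request parameters above can be sketched with the OpenAI Python client pointed at Perplexity's base URL. This is a minimal sketch, not the official quickstart: it assumes the `openai` package is installed, and `PPLX_API_KEY` is a hypothetical environment-variable name for your key. Standard OpenAI parameters go at the top level; Perplexity-specific ones go in `extra_body`.

```python
import os


def build_request_kwargs(question: str) -> dict:
    """Assemble keyword arguments for chat.completions.create().

    Standard OpenAI parameters sit at the top level; Perplexity-specific
    parameters (search_* and friends) go under extra_body in the Python client.
    """
    return {
        "model": "sonar-pro",  # a Perplexity model name
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.2,    # sampling temperature (0-2)
        "max_tokens": 512,     # cap on response length
        "extra_body": {
            "search_recency_filter": "week",            # prefer recent sources
            "search_domain_filter": ["wikipedia.org"],  # limit to these domains
            "search_mode": "web",                       # or "academic"
        },
    }


# The live call is skipped unless a key is present (hypothetical variable name).
if __name__ == "__main__" and os.environ.get("PPLX_API_KEY"):
    from openai import OpenAI  # pip install openai

    client = OpenAI(
        api_key=os.environ["PPLX_API_KEY"],    # sent as a Bearer token
        base_url="https://api.perplexity.ai",  # Perplexity endpoint
    )
    response = client.chat.completions.create(
        **build_request_kwargs("What changed in the latest CPython release?")
    )
    print(response.choices[0].message.content)  # the AI-generated response
    print(response.usage)                       # token consumption details
```

Perplexity-specific response fields such as `search_results` are returned alongside the standard chat-completion fields; how you access them depends on the client version, so inspect the raw response if the typed object does not expose them.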
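Setting `stream` to true makes the server send the response as incremental chunks rather than one payload; concatenating each chunk's content delta reconstructs the full reply. A sketch of that accumulation pattern, where `assemble_stream` is a hypothetical helper and `PPLX_API_KEY` again a hypothetical variable name:

```python
import os
from typing import Iterable


def assemble_stream(chunks: Iterable) -> str:
    """Concatenate the content deltas from a streamed chat completion."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta may be None
            parts.append(delta)
    return "".join(parts)


# The live call is skipped unless a key is present (hypothetical variable name).
if __name__ == "__main__" and os.environ.get("PPLX_API_KEY"):
    from openai import OpenAI  # pip install openai

    client = OpenAI(
        api_key=os.environ["PPLX_API_KEY"],
        base_url="https://api.perplexity.ai",
    )
    stream = client.chat.completions.create(
        model="sonar-reasoning",
        messages=[{"role": "user", "content": "Summarize today's tech news."}],
        stream=True,  # server sends incremental chunks
    )
    print(assemble_stream(stream))
```

Printing each delta as it arrives (instead of collecting them) gives the familiar token-by-token display.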