

The shared prompting best practices live in the Agent API Prompt Guide and apply to Sonar without modification — be specific, cap result counts, don’t ask for URLs in prose, avoid few-shot content, and prefer parameters over prose for filters. This page covers the one structural difference that changes how Sonar is prompted: the system prompt does not influence search.
For new applications, we recommend the Agent API. The agent loop, custom tools, and richer prompt control make it the better default.

Shape Search Through the User Message

Sonar runs a web search before generating its answer, and only the user message drives that search. The system prompt is not visible to search; it reaches the model only at answer time, when results are already in hand. Use the system prompt for tone, style, and grounding rules, but treat the user message as both the question for the model and the seed for the search.

The practical consequence: phrasing in the user message directly affects which sources show up. A specific, descriptive question produces better results than a vague one, and a polished system prompt cannot rescue a vague user message. If retrieval quality matters, invest there first.

Good Example: “What guidance has the FDA issued on AI in medical devices in the past year, and which device categories does it cover?”

Poor Example: “Tell me about FDA AI rules.”
Do not put search instructions in the system prompt. Phrases like “search only on Wikipedia” or “look for the latest results” have no effect. For hard constraints like domain, recency, or region, use the dedicated search filter parameters on the request body rather than trying to express the constraint in prose.
from perplexity import Perplexity

client = Perplexity()

completion = client.chat.completions.create(
    model="sonar",
    messages=[
        {"role": "user", "content": "What guidance has the FDA issued on AI in medical devices in the past year?"}
    ],
    # Hard constraints belong in parameters, not in the prompt text:
    search_domain_filter=["fda.gov"],   # restrict search to fda.gov
    search_recency_filter="month"       # only results from the past month
)

print(completion.choices[0].message.content)
This contrasts with the Agent API, where instructions are re-read on every turn of the agent loop and shape both tool calls and the final answer. Sonar has no equivalent of the instructions field: system messages influence only generation, never retrieval.
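The division of labor can be sketched as a request body. This is an illustrative sketch, not part of the official docs: the system message carries only tone and grounding rules, while the user message carries every detail that should influence the search (topic, timeframe, scope). The exact wording of both messages is hypothetical.

```python
# Illustrative request body: search-relevant detail lives in the user
# message; the system message carries only tone and grounding rules.
request_body = {
    "model": "sonar",
    "messages": [
        {
            # Answer-time only: shapes style and grounding, never retrieval.
            "role": "system",
            "content": (
                "You are a concise regulatory analyst. "
                "Answer only from the provided search results."
            ),
        },
        {
            # Search seed: topic, timeframe, and scope all go here.
            "role": "user",
            "content": (
                "What guidance has the FDA issued on AI in medical "
                "devices in the past year, and which device categories "
                "does it cover?"
            ),
        },
    ],
}
```

Anything moved out of the user message here (the timeframe, the device-category angle) would simply vanish from the search step.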

Reduce Hallucinations

LLMs are tuned to be helpful, which can occasionally lead them to provide an answer when search results are thin or off-target rather than flagging the gap. The system prompt doesn’t shape the search step itself, but it does shape how the model uses the search results when writing the final response, which makes it the right place for grounding rules. Two short additions cover most of these edge cases.

Give the model permission to say it didn’t find anything. With an explicit out in the system prompt, the model is more likely to acknowledge insufficient results instead of leaning on training data to fill the gap.
System Prompt
Only answer using the search results provided. If the results do not contain the answer, say so explicitly rather than guessing.
Require disclosure of near-misses. Search sometimes returns related but non-matching results (a different year, a parent company instead of a subsidiary, a similar product). Asking the model to surface the mismatch up front keeps these cases from being presented as direct answers.
System Prompt
If the search results are related but do not match the question (a different year, a parent company, or a similar product), state the mismatch explicitly before answering.
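Both grounding rules fit in a single system message. A minimal sketch, reusing the wording from the two prompts above (the user question is a hypothetical example):

```python
# Both grounding rules combined into one system message; the wording
# is taken from the prompts above and can be tuned per application.
GROUNDING_RULES = (
    "Only answer using the search results provided. "
    "If the results do not contain the answer, say so explicitly "
    "rather than guessing. "
    "If the search results are related but do not match the question "
    "(a different year, a parent company, or a similar product), "
    "state the mismatch explicitly before answering."
)

messages = [
    {"role": "system", "content": GROUNDING_RULES},
    # Hypothetical example question; all search-relevant detail stays here.
    {"role": "user", "content": "What did the most recent FDA draft "
                                "guidance say about AI-enabled device "
                                "software functions?"},
]
```

Because the system message never reaches search, adding these rules costs nothing in retrieval quality; it only changes how honestly the model reports what came back.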

What Carries Over from the Agent API Guide

The same core prompting rules apply with no changes:
  • Be specific and descriptive in the user message. Vague queries produce scattered results.
  • Cap result counts. If a list is needed, say how long.
  • Don’t few-shot content. Pasting a written-out example answer can cause the search step to latch onto the example topic. Few-shotting structure is fine; for guaranteed shape use response_format.
  • Don’t ask for URLs in the response text. Sonar always returns sources in the top-level citations and search_results fields. Read them from there.
  • Use parameters, not prose, for filters. The search backend reads parameters; it does not read the system prompt.
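Sources come back on the response object, not in the answer text. A minimal sketch of reading them, assuming the response shape described above (top-level citations and search_results fields); the sample payload and URLs are illustrative, not real API output:

```python
import json

# Illustrative response payload; real responses carry more fields.
raw = """
{
  "choices": [{"message": {"content": "The FDA issued draft guidance..."}}],
  "citations": ["https://www.fda.gov/example-guidance"],
  "search_results": [
    {"title": "Example FDA guidance page",
     "url": "https://www.fda.gov/example-guidance"}
  ]
}
"""
response = json.loads(raw)

# Read sources from the top-level fields instead of asking for URLs in prose.
answer = response["choices"][0]["message"]["content"]
citations = response["citations"]
sources = [(r["title"], r["url"]) for r in response["search_results"]]
```

Asking the model to embed URLs in the answer text only invites malformed or hallucinated links; the structured fields are authoritative.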

Next Steps

Agent API Prompt Guide

The full prompting guide. Most rules apply to Sonar as well.

Search Filters

Domain, recency, and date filters for narrowing Sonar search results.

Pro Search

Multi-step search and reasoning when single-shot is not enough.

Agent API Quickstart

Recommended for new applications. Multi-turn loop and custom tools.