Changelog

We’re excited to announce the release of our new academic filter feature, allowing you to tailor your searches specifically to academic and scholarly sources. By setting search_mode: "academic" in your API requests, you can now prioritize results from peer-reviewed papers, journal articles, and research publications.

This feature is particularly valuable for:

  • Students and researchers working on academic papers
  • Professionals requiring scientifically accurate information
  • Anyone seeking research-based answers instead of general web content

The academic filter works seamlessly with other search parameters like search_context_size and date filters, giving you precise control over your research queries.

Example:

curl --request POST \
  --url https://api.perplexity.ai/chat/completions \
  --header 'accept: application/json' \
  --header 'authorization: Bearer YOUR_API_KEY' \
  --header 'content-type: application/json' \
  --data '{
    "model": "sonar-pro",
    "messages": [{"role": "user", "content": "What is the scientific name of the lions mane mushroom?"}],
    "stream": false,
    "search_mode": "academic",
    "web_search_options": {"search_context_size": "low"}
}'

For detailed documentation and implementation examples, please see our Academic Filter Guide.

We’re excited to announce our new reasoning effort feature for sonar-deep-research. This lets you control how much computational effort the AI dedicates to each query. You can choose from “low”, “medium”, or “high” to get faster, simpler answers or deeper, more thorough responses.

This feature directly affects the number of reasoning tokens consumed per query, giving you the ability to control costs while balancing speed and thoroughness.

Options:

  • "low": Faster, simpler answers with reduced token usage
  • "medium": Balanced approach (default)
  • "high": Deeper, more thorough responses with increased token usage

Example:

curl --request POST \
  --url https://api.perplexity.ai/chat/completions \
  --header 'accept: application/json' \
  --header 'authorization: Bearer ${PPLX_KEY}' \
  --header 'content-type: application/json' \
  --data '{
    "model": "sonar-deep-research",
    "messages": [{"role": "user", "content": "What should I know before markets open today?"}],
    "stream": true,
    "reasoning_effort": "low"
  }'

For detailed documentation and implementation examples, please see: Sonar Deep Research Documentation

We’re excited to announce the addition of an asynchronous API for Sonar Deep Research, designed specifically for research-intensive tasks that may take longer to process.

This new API allows you to submit requests and retrieve results later, making it ideal for complex research queries that require extensive processing time.

The asynchronous API endpoints include:

  1. GET https://api.perplexity.ai/async/chat/completions - Lists all asynchronous chat completion requests for the authenticated user
  2. POST https://api.perplexity.ai/async/chat/completions - Creates an asynchronous chat completion job
  3. GET https://api.perplexity.ai/async/chat/completions/{request_id} - Retrieves the status and result of a specific asynchronous chat completion job

Note: Async requests have a time-to-live (TTL) of 7 days. After this period, the request and its results will no longer be accessible.
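A typical submit-then-poll workflow against these endpoints can be sketched as follows. The HTTP layer is abstracted behind callables so the polling logic stands alone; the `id` and `status` response fields, and the `COMPLETED` status value, are assumptions based on common async-job conventions, so check the linked documentation for the exact response shape.

```python
import time

# Endpoint from the list above.
ASYNC_URL = "https://api.perplexity.ai/async/chat/completions"

def submit_job(post, payload):
    """Submit an async chat completion job.

    `post` is any callable that POSTs JSON to a URL and returns the
    decoded response body. Returns the job's request id (assumed field).
    """
    response = post(ASYNC_URL, payload)
    return response["id"]

def poll_job(get, request_id, interval_s=5.0, max_attempts=60):
    """Poll GET /async/chat/completions/{request_id} until the job is done.

    `get` is any callable that GETs a URL and returns the decoded JSON.
    The "COMPLETED" status value is illustrative, not authoritative.
    """
    for _ in range(max_attempts):
        job = get(f"{ASYNC_URL}/{request_id}")
        if job.get("status") == "COMPLETED":
            return job
        time.sleep(interval_s)
    raise TimeoutError(f"job {request_id} did not complete in time")
```

Because the transport is injected, the same polling logic works with any HTTP client, and remember that results expire after the 7-day TTL.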

For detailed documentation and implementation examples, please see: Sonar Deep Research Documentation

We’ve improved our API responses to give you more visibility into search data by adding a new search_results field to the JSON response object.

This enhancement provides direct access to the search results used by our models, giving you more transparency and control over the information being used to generate responses.

The search_results field includes:

  • title: The title of the search result page
  • url: The URL of the search result
  • date: The publication date of the content

Example:

"search_results": [
  {
    "title": "Understanding Large Language Models",
    "url": "https://example.com/llm-article",
    "date": "2023-12-25"
  },
  {
    "title": "Advances in AI Research",
    "url": "https://example.com/ai-research",
    "date": "2024-03-15"
  }
]

This update makes it easier to:

  • Verify the sources used in generating responses
  • Create custom citation formats for your applications
  • Filter or prioritize certain sources based on your needs
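For instance, a small helper can turn the documented search_results fields into a numbered citation list, a sketch using only the title, url, and date fields shown above:

```python
def format_citations(search_results):
    """Render search_results entries as numbered, human-readable citations."""
    lines = []
    for i, result in enumerate(search_results, start=1):
        # Fall back to "n.d." (no date) if the date field is missing.
        date = result.get("date") or "n.d."
        lines.append(f'[{i}] {result["title"]} ({date}) - {result["url"]}')
    return "\n".join(lines)
```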

Important: The citations field is being deprecated in favor of the new search_results field. Please update your applications to use the search_results field, as the citations field will be removed in a future update.

The search_results field is now available across all our search-enabled models.

We are excited to announce the release of our new API portal, designed to help you better manage your organization and API usage.

With this portal, you can:

  • Organize and manage your API keys more effectively.
  • Gain insights into your API usage and team activity.
  • Streamline collaboration within your organization.

Check it out here:
https://www.perplexity.ai/account/api/group

Looking to narrow down your search results based on users’ locations?
We now support user location filtering, allowing you to tailor search results to a specific user location.

Check out the guide.
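A request using location filtering might look like the sketch below. The `user_location` field names inside `web_search_options` follow the linked guide; treat them as illustrative rather than authoritative.

```python
# Hypothetical payload sketch; see the user location filtering guide
# for the exact parameter names and accepted values.
payload = {
    "model": "sonar",
    "messages": [{"role": "user", "content": "What are the best coffee shops nearby?"}],
    "web_search_options": {
        "user_location": {
            "latitude": 37.7749,    # assumed field name
            "longitude": -122.4194, # assumed field name
            "country": "US",        # assumed field name
        }
    },
}
```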

You can now upload images to Sonar and use them as part of your multimodal search experience.
Give it a try by following our image upload guide:
https://docs.perplexity.ai/guides/image-guide
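A multimodal message can be built by embedding the image as a base64 data URI. The content-part shape below (`text` plus `image_url` entries) follows the common OpenAI-compatible convention and is an assumption here; confirm the exact format in the image upload guide.

```python
import base64

def image_message(question, image_bytes, mime="image/png"):
    """Build a user message pairing a text question with an inline image.

    The content-part structure is an assumed OpenAI-compatible shape,
    not confirmed by this changelog entry.
    """
    encoded = base64.b64encode(image_bytes).decode()
    data_uri = f"data:{mime};base64,{encoded}"
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": data_uri}},
        ],
    }
```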

Looking to narrow down your search results to specific dates?
We now support date range filtering, allowing you to retrieve results only from a particular timeframe.

Check out the guide:
https://docs.perplexity.ai/guides/date-range-filter-guide
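A date-filtered request could be sketched as below. The parameter names `search_after_date_filter` / `search_before_date_filter` and the month/day/year date format follow the linked guide; treat them as illustrative until you have checked it.

```python
# Hypothetical payload sketch; parameter names and date format are
# taken from the date range filter guide and should be verified there.
payload = {
    "model": "sonar-pro",
    "messages": [{"role": "user", "content": "Summarize recent fusion energy news."}],
    "search_after_date_filter": "3/1/2025",   # assumed parameter name
    "search_before_date_filter": "3/31/2025", # assumed parameter name
}
```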

We’ve fully transitioned to our new pricing model: citation tokens are no longer charged.
If you were already using the search_context_size parameter, you’ve been on this model already.

This change makes pricing simpler and cheaper for everyone, with no downside.

View the updated pricing:
https://docs.perplexity.ai/guides/pricing

We’ve removed all feature gating based on tiered spending. These gated features were previously available only to users of Tier 3 and above.

That means every user now has access to all API capabilities, regardless of usage volume or spend. Rate limits are still applicable.
Whether you’re just getting started or scaling up, you get the full power of Sonar out of the box.

We’re excited to announce that structured outputs are now available to all Perplexity API users, regardless of tier level. Based on valuable feedback from our developer community, we’ve removed the previous Tier 3 requirement for this feature.

What’s available now:

  • JSON structured outputs are supported across all models
  • Both JSON and Regex structured outputs are supported for sonar and sonar-reasoning models

Coming soon:

  • Full Regex support for all models
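A minimal request using JSON structured outputs might look like the sketch below. The `response_format` / `json_schema` shape is an assumption modeled on common structured-output APIs; check the structured outputs documentation for the exact field names.

```python
# Hypothetical payload sketch; the response_format structure is assumed,
# not confirmed by this changelog entry.
payload = {
    "model": "sonar",
    "messages": [{"role": "user", "content": "Name the tallest mountain and its height in meters."}],
    "response_format": {
        "type": "json_schema",  # assumed field layout
        "json_schema": {
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "height_m": {"type": "number"},
                },
                "required": ["name", "height_m"],
            }
        },
    },
}
```

Constraining the output to a schema like this is what makes downstream parsing reliable from day one.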

This change allows developers to create more reliable and consistent applications from day one. We believe in empowering our community with the tools they need to succeed, and we’re committed to continuing to improve accessibility to our advanced features.

Thank you for your feedback—it helps us make Perplexity API better for everyone.

We’re excited to announce significant improvements to our Sonar models that deliver superior performance at lower costs. Our latest benchmark testing confirms that Sonar and Sonar Pro now outperform leading competitors while maintaining more affordable pricing.

Key updates include:

  • Three new search modes across most Sonar models:

    • High: Maximum depth for complex queries
    • Medium: Balanced approach for moderate complexity
    • Low: Cost-efficient for straightforward queries (equivalent to current pricing)
  • Simplified billing structure:

    • Transparent pricing for input/output tokens
    • No charges for citation tokens in responses (except for Sonar Deep Research)
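Assuming the new search modes are selected through the `search_context_size` parameter shown in earlier examples (an assumption; the opt-in mechanism may differ), a small helper can attach a mode to any request payload:

```python
def with_search_mode(payload, mode):
    """Return a copy of `payload` with a search mode attached via
    web_search_options.search_context_size (assumed opt-in mechanism)."""
    if mode not in {"low", "medium", "high"}:
        raise ValueError(f"unknown search mode: {mode}")
    options = dict(payload.get("web_search_options", {}))
    options["search_context_size"] = mode
    # Shallow-copy so the caller's payload is left untouched.
    return {**payload, "web_search_options": options}
```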

The current billing structure will be supported as the default option for 30 days (until April 18, 2025). During this period, the new search modes will be available as opt-in features.

Important Note: After April 18, 2025, Sonar Pro and Sonar Reasoning Pro will not return citation tokens or the number of search results in the usage field of the API response.

Please note that as of February 22, 2025, several models and model name aliases will no longer be accessible. The following model names will no longer be available via API:

llama-3.1-sonar-small-128k-online

llama-3.1-sonar-large-128k-online

llama-3.1-sonar-huge-128k-online

We recommend updating your applications to use our recently released Sonar or Sonar Pro models – you can learn more about them here. Thank you for being a Perplexity API user.

We are expanding API offerings with the most efficient and cost-effective search solutions available: Sonar and Sonar Pro.

  • Sonar gives you fast, straightforward answers.
  • Sonar Pro tackles complex questions that need deeper research and provides more sources.

Both models offer built-in citations, automated scaling of rate limits, and public access to advanced features like structured outputs and search domain filters. And don’t worry, we never train on your data. Your information stays yours.

You can learn more about our new APIs here: http://sonar.perplexity.ai/

We are excited to announce the public availability of citations in the Perplexity API. In addition, we have also increased our default rate limit for the sonar online models to 50 requests/min for all users.

Effective immediately, all API users will see citations returned as part of their requests by default. This is not a breaking change. The return_citations parameter will no longer have any effect.

If you have any questions or need assistance, feel free to reach out to our team at api@perplexity.ai.

We are excited to announce the launch of our latest Perplexity Sonar models:

Online Models:

  • llama-3.1-sonar-small-128k-online
  • llama-3.1-sonar-large-128k-online

Chat Models:

  • llama-3.1-sonar-small-128k-chat
  • llama-3.1-sonar-large-128k-chat

These new additions surpass the performance of the previous iteration. For detailed information on our supported models, please visit our model card documentation.

[Action Required] Model Deprecation Notice: Please note that several models will no longer be accessible effective 8/12/2024. We recommend updating your applications to use models in the Llama-3.1 family immediately.

The following model names will no longer be available via API:

  • llama-3-sonar-small-32k-online
  • llama-3-sonar-large-32k-online
  • llama-3-sonar-small-32k-chat
  • llama-3-sonar-large-32k-chat
  • llama-3-8b-instruct
  • llama-3-70b-instruct
  • mistral-7b-instruct
  • mixtral-8x7b-instruct

We recommend switching to models in the Llama-3.1 family:

Online Models:

  • llama-3.1-sonar-small-128k-online
  • llama-3.1-sonar-large-128k-online

Chat Models:

  • llama-3.1-sonar-small-128k-chat
  • llama-3.1-sonar-large-128k-chat

Instruct Models:

  • llama-3.1-70b-instruct
  • llama-3.1-8b-instruct

If you have any questions, please email support@perplexity.ai. Thank you for being a Perplexity API user.

Stay curious,

Team Perplexity

Please note that as of May 14, several models and model name aliases will no longer be accessible. We recommend updating your applications to use models in the Llama-3 family immediately. The following model names will no longer be available via API:

  • codellama-70b-instruct
  • mistral-7b-instruct
  • mixtral-8x22b-instruct
  • pplx-7b-chat
  • pplx-7b-online
