Model Deprecation Notice
Please note that as of May 14, several models and model name aliases will no longer be accessible. We recommend updating your applications to use models in the Llama-3 family immediately. The following model names will no longer be available via the API:
Cheaper Output Tokens
Effective immediately, input and output tokens are charged at the same rate. Previously, output tokens were more expensive than input tokens, so prices have generally gone down as a result.
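To illustrate the effect of unified pricing: when input and output tokens share one rate, the cost of a request depends only on the total token count, not on the input/output split. This is a minimal sketch; the rate below is a made-up placeholder, not an actual pplx-api price.

```python
# Placeholder per-token rate for illustration only (NOT a real price).
RATE_PER_1K_TOKENS = 0.0005  # assumed USD per 1,000 tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request when input and output tokens share a single rate."""
    total_tokens = input_tokens + output_tokens
    return total_tokens / 1000 * RATE_PER_1K_TOKENS

# With a single rate, only the total matters:
print(request_cost(800, 200))  # same cost as request_cost(500, 500)
```

Under the previous scheme, the 800-input/200-output request would have been cheaper than the 500/500 one; under unified pricing they cost the same.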
New Model: mixtral-8x7b-instruct
We're excited to announce that pplx-api is now serving the latest open-source mixture-of-experts model, mixtral-8x7b-instruct, at the blazingly fast speed of inference you are accustomed to.
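A request to the new model can be sketched as follows. This is a hedged illustration assuming an OpenAI-style chat-completions endpoint; the endpoint URL and field names are assumptions drawn from common API conventions, not from this changelog.

```python
import json

# Assumed chat-completions endpoint (not confirmed by this changelog).
API_URL = "https://api.perplexity.ai/chat/completions"

def build_request(model: str, user_message: str) -> str:
    """Serialize a minimal chat-completion request body as JSON."""
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

# Build a request body for the new mixture-of-experts model:
body = build_request("mixtral-8x7b-instruct", "Say hello.")
print(body)
```

The same payload shape works for any served model: switching models is just a matter of changing the `model` string.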
Online LLMs and general availability for pplx-api
We're excited to share two new PPLX models: pplx-7b-online and pplx-70b-online. These first-of-their-kind models are integrated with our in-house search technology for factual grounding. Read our blog post for more information!
https://blog.perplexity.ai/blog/introducing-pplx-online-llms
Models removed: replit-code-v1.5-3b and openhermes-2-mistral-7b
We have removed support for replit-code-v1.5-3b and openhermes-2-mistral-7b. There are no immediate plans to add these models back. If you enjoyed openhermes-2-mistral-7b, try our in-house models pplx-7b-chat and pplx-70b-chat instead!
Versioning
The Perplexity AI API is currently in beta release v0. Clients are not protected from backwards-incompatible changes and cannot specify their desired API version. Examples of backwards-incompatible changes include...
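Since v0 offers no compatibility guarantees, clients may want to parse responses defensively: read only the fields they need and ignore anything unrecognized, so that additive changes don't break them. A minimal sketch, assuming an OpenAI-style response shape (the field names here are assumptions, not a documented contract):

```python
import json

def extract_reply(raw: str) -> str:
    """Pull the assistant's text out of a raw JSON response, tolerating
    missing or extra fields rather than assuming a fixed schema."""
    data = json.loads(raw)
    choices = data.get("choices") or []
    if not choices:
        return ""
    message = choices[0].get("message") or {}
    return message.get("content", "")

# A response containing unknown fields is still handled gracefully:
sample = json.dumps({
    "choices": [{"message": {"role": "assistant", "content": "hi"},
                 "some_new_field": 1}],
    "another_new_field": True,
})
print(extract_reply(sample))  # prints "hi"
```

Tolerant parsing like this won't protect against every backwards-incompatible change (renamed or removed fields will still break), but it does shield a client from purely additive ones.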