Overview

The langchain-perplexity package provides LangChain integrations for Perplexity’s API, enabling you to build LLM applications with real-time web search capabilities.
LangChain is a popular Python framework for building applications powered by large language models. It provides composable components for chains, agents, and retrieval-augmented generation (RAG). Learn more at langchain.com.
The integration includes:
  • ChatPerplexity - Chat model with Pro Search, streaming, and search controls
  • PerplexitySearchRetriever - Retriever for RAG applications
  • PerplexitySearchResults - Tool for LangChain agents

Installation

pip install langchain-perplexity

API Key Setup

Set your Perplexity API key as an environment variable:
import os

os.environ["PPLX_API_KEY"] = "your_api_key_here"
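If the key is unset, requests will fail at call time with an authentication error. A minimal sketch of a fail-fast check (the helper name is illustrative, not part of the package; langchain-perplexity reads `PPLX_API_KEY` itself):

```python
import os

def require_pplx_api_key() -> str:
    """Return the Perplexity API key, or raise a clear error if it is unset.

    Illustrative helper: surfaces a missing key at startup instead of at
    the first API call.
    """
    key = os.environ.get("PPLX_API_KEY")
    if not key:
        raise RuntimeError(
            "PPLX_API_KEY is not set. Generate a key in the Perplexity "
            "dashboard and export it before constructing ChatPerplexity."
        )
    return key
```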

Get API Key

Generate your API key from the Perplexity dashboard.

Quick Start: Chat Models

Use ChatPerplexity for conversational AI with web search:
from langchain_perplexity import ChatPerplexity

chat = ChatPerplexity(model="sonar")

response = chat.invoke("What breakthroughs in fusion energy have been announced this year?")
print(response.content)
Enable multi-step reasoning with Pro Search:
from langchain_perplexity import ChatPerplexity, WebSearchOptions

chat = ChatPerplexity(
    model="sonar-pro",
    web_search_options=WebSearchOptions(search_type="pro")
)

response = chat.invoke("How does the electoral college work?")

# Access reasoning steps
if reasoning := response.additional_kwargs.get("reasoning_steps"):
    for step in reasoning:
        print(f"Thought: {step['thought']}")

Search Controls

Filter search results by domain or recency, and optionally return images:
chat = ChatPerplexity(
    model="sonar",
    search_domain_filter=["wikipedia.org", "nature.com"],
    search_recency_filter="month",
    return_images=True
)

response = chat.invoke("Solar system planets")

# Access citations and images
print("Citations:", response.additional_kwargs.get("citations", []))
print("Images:", response.additional_kwargs.get("images", []))

Streaming

for chunk in chat.stream("Explain quantum computing"):
    print(chunk.content, end="", flush=True)
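Each chunk carries only a fragment of the response; concatenating the chunk contents reconstructs the full answer. A pure-Python sketch of that accumulation pattern, with a stand-in generator in place of `chat.stream` (which needs a live API key):

```python
def fake_stream(text: str, size: int = 8):
    """Stand-in for chat.stream(): yield the response in small chunks."""
    for i in range(0, len(text), size):
        yield text[i:i + size]

# Chunks print incrementally; joining them reconstructs the full answer.
pieces = []
for chunk in fake_stream("Quantum computing uses qubits."):
    print(chunk, end="", flush=True)
    pieces.append(chunk)
print()
full_response = "".join(pieces)
```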

Quick Start: Retriever

Use PerplexitySearchRetriever for RAG applications:
from langchain_perplexity import PerplexitySearchRetriever

retriever = PerplexitySearchRetriever(k=5)

docs = retriever.invoke("What is nuclear fusion?")

for doc in docs:
    print(f"Title: {doc.metadata['title']}")
    print(f"URL: {doc.metadata['url']}")
    print(f"Content: {doc.page_content[:200]}...")
    print("---")

RAG Chain Example

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
from langchain_perplexity import ChatPerplexity, PerplexitySearchRetriever

llm = ChatPerplexity(model="sonar")
retriever = PerplexitySearchRetriever(k=3)

template = """Answer based on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

answer = rag_chain.invoke("What is the current status of ITER?")
print(answer)
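The dict at the head of the chain fans the input question out to two branches: one runs the retriever and formats the documents into `context`, the other passes the question through unchanged; both feed the prompt. A plain-Python sketch of that data flow, with stub functions standing in for the retriever and model:

```python
def retrieve(question):
    # Stub retriever: returns document texts for the question.
    return [
        "ITER is an international fusion research project.",
        "Construction continues in southern France.",
    ]

def format_docs(docs):
    # Same joining logic as in the chain above.
    return "\n\n".join(docs)

def build_prompt(context, question):
    # Fills the same template as ChatPromptTemplate.from_template(template).
    return f"Answer based on the following context:\n{context}\n\nQuestion: {question}\n"

def run_chain(question):
    # Mirrors: {"context": retriever | format_docs,
    #           "question": RunnablePassthrough()} | prompt
    context = format_docs(retrieve(question))
    return build_prompt(context, question)

prompt_text = run_chain("What is the current status of ITER?")
```

In the real chain, `prompt_text` would then be sent to the model and the reply parsed to a string by `StrOutputParser`.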

Quick Start: Tool

Use PerplexitySearchResults with LangChain agents:
from langchain_perplexity import PerplexitySearchResults

tool = PerplexitySearchResults()

results = tool.invoke("LangChain framework")

for result in results:
    print(f"Title: {result['title']}")
    print(f"URL: {result['url']}")
    print(f"Snippet: {result['snippet'][:100]}...")
    print("---")
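When passing tool output to a model, it often helps to flatten the result dicts into one compact string. A small illustrative helper (the `title`/`url`/`snippet` keys match the example above; the rendering format is an assumption):

```python
def render_results(results):
    """Join search-result dicts (title/url/snippet keys) into one context string."""
    lines = []
    for r in results:
        lines.append(f"{r['title']} ({r['url']}): {r['snippet']}")
    return "\n".join(lines)
```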

Agent Example

from langchain.chat_models import init_chat_model
from langchain_perplexity import PerplexitySearchResults
from langgraph.prebuilt import create_react_agent

model = init_chat_model(model="gpt-4o", model_provider="openai")
search_tool = PerplexitySearchResults()

agent = create_react_agent(model, [search_tool])

for step in agent.stream(
    {"messages": [("user", "What are the latest LangChain releases?")]},
    stream_mode="values",
):
    step["messages"][-1].pretty_print()

Available Models

The integration supports all Perplexity models:
  • sonar - Fast, cost-effective search model
  • sonar-pro - Advanced model with Pro Search support
  • sonar-reasoning-pro - Advanced reasoning capabilities
  • sonar-deep-research - Deep research with comprehensive analysis
See the full list of models on our models page.
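Since the model is just a string passed to `ChatPerplexity(model=...)`, a small lookup can centralize the choice. A sketch using the names from the list above (the selection categories are illustrative):

```python
# Model names from the list above; the selection heuristic is illustrative.
PERPLEXITY_MODELS = {
    "fast": "sonar",
    "pro_search": "sonar-pro",
    "reasoning": "sonar-reasoning-pro",
    "deep_research": "sonar-deep-research",
}

def pick_model(need: str) -> str:
    """Map a coarse requirement to a model name, defaulting to 'sonar'."""
    return PERPLEXITY_MODELS.get(need, "sonar")
```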

Resources

  • LangChain Docs - Full LangChain integration documentation
  • ChatPerplexity - Detailed chat model documentation
  • Retriever Docs - PerplexitySearchRetriever documentation
  • Tool Docs - PerplexitySearchResults documentation
  • PyPI Package - View on PyPI
  • API Reference - LangChain API reference

Support

Need help with the integration?