# List Async Chat Completions Source: https://docs.perplexity.ai/api-reference/async-chat-completions-get get /async/chat/completions Lists all asynchronous chat completion requests for the authenticated user. # Create Async Chat Completion Source: https://docs.perplexity.ai/api-reference/async-chat-completions-post post /async/chat/completions Creates an asynchronous chat completion job. # Get Async Chat Completion Response Source: https://docs.perplexity.ai/api-reference/async-chat-completions-request_id-get get /async/chat/completions/{request_id} Retrieves the status and result of a specific asynchronous chat completion job. # Chat Completions Source: https://docs.perplexity.ai/api-reference/chat-completions-post post /chat/completions Generates a model's response for the given chat conversation. # Changelog Source: https://docs.perplexity.ai/changelog/changelog Looking ahead? Check out our [Feature Roadmap](/feature-roadmap) to see what's coming next. **API model deprecation notice** Please note that as of August 1, 2025, R1-1776 will be removed from the available models. R1 has been a popular option for a while, but it hasn't kept pace with recent improvements and lacks support for newer features. To reduce engineering overhead and make room for more capable models, we're retiring it from the API. If you liked R1's strengths, we recommend switching to `Sonar Pro Reasoning`. It offers similar behavior with stronger overall performance. **New: Detailed Cost Information in API Responses** The API response JSON now includes detailed cost information for each request. You'll now see a new structure like this in your response: ```json "usage": { "prompt_tokens": 8, "completion_tokens": 439, "total_tokens": 447, "search_context_size": "low", "cost": { "input_tokens_cost": 2.4e-05, "output_tokens_cost": 0.006585, "request_cost": 0.006, "total_cost": 0.012609 } } ``` **What's included:** * **input\_tokens\_cost**: Cost attributed to input tokens * **output\_tokens\_cost**: Cost attributed to output tokens * **request\_cost**: Fixed cost per request * **total\_cost**: The total cost for this API call This update enables easier tracking of usage and billing directly from each API response, giving you complete transparency into the costs associated with each request. **New: SEC Filings Filter for Financial Research** We're excited to announce the release of our new SEC filings filter feature, allowing you to search specifically within SEC regulatory documents and filings. By setting `search_domain: "sec"` in your API requests, you can now focus your searches on official SEC documents, including 10-K reports, 10-Q quarterly reports, 8-K current reports, and other regulatory filings. This feature is particularly valuable for: * Financial analysts researching company fundamentals * Investment professionals conducting due diligence * Compliance officers tracking regulatory changes * Anyone requiring authoritative financial information directly from official sources The SEC filter works seamlessly with other search parameters like date filters and search context size, giving you precise control over your financial research queries. **Example:** ```bash curl --request POST \ --url https://api.perplexity.ai/chat/completions \ --header 'accept: application/json' \ --header 'authorization: Bearer YOUR_API_KEY' \ --header 'content-type: application/json' \ --data '{ "model": "sonar-pro", "messages": [{"role": "user", "content": "What was Apple's revenue growth in their latest quarterly report?"}], "stream": false, "search_domain": "sec", "web_search_options": {"search_context_size": "medium"} }' | jq ``` For detailed documentation and implementation examples, please see our [SEC Guide](https://docs.perplexity.ai/guides/sec-guide). **Enhanced: Date Range Filtering with Latest Updated Field** We've enhanced our date range filtering capabilities with new fields that give you even more control over search results based on content freshness and updates. **New fields available:** * `latest_updated`: Filter results based on when the webpage was last modified or updated * `published_after`: Filter by original publication date (existing) * `published_before`: Filter by original publication date (existing) The `latest_updated` field is particularly useful for: * Finding the most current version of frequently updated content * Ensuring you're working with the latest data from news sites, blogs, and documentation * Tracking changes and updates to specific web resources over time **Example:** ```bash curl --request POST \ --url https://api.perplexity.ai/chat/completions \ --header 'accept: application/json' \ --header 'authorization: Bearer YOUR_API_KEY' \ --header 'content-type: application/json' \ --data '{ "model": "sonar-pro", "messages": [{"role": "user", "content": "What are the latest developments in AI research?"}], "stream": false, "web_search_options": { "latest_updated": "2025-06-01", "search_context_size": "medium" } }' ``` For comprehensive documentation and more examples, please see our [Date Range Filter Guide](https://docs.perplexity.ai/guides/date-range-filter-guide). **New: Academic Filter for Scholarly Research** We're excited to announce the release of our new academic filter feature, allowing you to tailor your searches specifically to academic and scholarly sources. By setting `search_mode: "academic"` in your API requests, you can now prioritize results from peer-reviewed papers, journal articles, and research publications. This feature is particularly valuable for: * Students and researchers working on academic papers * Professionals requiring scientifically accurate information * Anyone seeking research-based answers instead of general web content The academic filter works seamlessly with other search parameters like `search_context_size` and date filters, giving you precise control over your research queries. **Example:** ```bash curl --request POST \ --url https://api.perplexity.ai/chat/completions \ --header 'accept: application/json' \ --header 'authorization: Bearer YOUR_API_KEY' \ --header 'content-type: application/json' \ --data '{ "model": "sonar-pro", "messages": [{"role": "user", "content": "What is the scientific name of the lions mane mushroom?"}], "stream": false, "search_mode": "academic", "web_search_options": {"search_context_size": "low"} }' ``` For detailed documentation and implementation examples, please see our [Academic Filter Guide](https://docs.perplexity.ai/guides/academic-filter-guide). **New: Reasoning Effort Parameter for Sonar Deep Research** We're excited to announce our new reasoning effort feature for sonar-deep-research. This lets you control how much computational effort the AI dedicates to each query. You can choose from "low", "medium", or "high" to get faster, simpler answers or deeper, more thorough responses. This feature has a direct impact on the amount of reasoning tokens consumed for each query, giving you the ability to control costs while balancing between speed and thoroughness. **Options:** * `"low"`: Faster, simpler answers with reduced token usage * `"medium"`: Balanced approach (default) * `"high"`: Deeper, more thorough responses with increased token usage **Example:** ```bash curl --request POST \ --url https://api.perplexity.ai/chat/completions \ --header 'accept: application/json' \ --header 'authorization: Bearer ${PPLX_KEY}' \ --header 'content-type: application/json' \ --data '{ "model": "sonar-deep-research", "messages": [{"role": "user", "content": "What should I know before markets open today?"}], "stream": true, "reasoning_effort": "low" }' ``` For detailed documentation and implementation examples, please see: [Sonar Deep Research Documentation](https://docs.perplexity.ai/models/models/sonar-deep-research) **New: Asynchronous API for Sonar Deep Research** We're excited to announce the addition of an asynchronous API for Sonar Deep Research, designed specifically for research-intensive tasks that may take longer to process. This new API allows you to submit requests and retrieve results later, making it ideal for complex research queries that require extensive processing time. The asynchronous API endpoints include: 1. `GET https://api.perplexity.ai/async/chat/completions` - Lists all asynchronous chat completion requests for the authenticated user 2. `POST https://api.perplexity.ai/async/chat/completions` - Creates an asynchronous chat completion job 3. `GET https://api.perplexity.ai/async/chat/completions/{request_id}` - Retrieves the status and result of a specific asynchronous chat completion job **Note:** Async requests have a time-to-live (TTL) of 7 days. After this period, the request and its results will no longer be accessible. For detailed documentation and implementation examples, please see: [Sonar Deep Research Documentation](https://docs.perplexity.ai/models/models/sonar-deep-research) **Enhanced API Responses with Search Results** We've improved our API responses to give you more visibility into search data by adding a new `search_results` field to the JSON response object. This enhancement provides direct access to the search results used by our models, giving you more transparency and control over the information being used to generate responses. The `search_results` field includes: * `title`: The title of the search result page * `url`: The URL of the search result * `date`: The publication date of the content **Example:** ```json "search_results": [ { "title": "Understanding Large Language Models", "url": "https://example.com/llm-article", "date": "2023-12-25" }, { "title": "Advances in AI Research", "url": "https://example.com/ai-research", "date": "2024-03-15" } ] ``` This update makes it easier to: * Verify the sources used in generating responses * Create custom citation formats for your applications * Filter or prioritize certain sources based on your needs **Update: The `citations` field has been fully deprecated and removed.** All applications should now use the `search_results` field, which provides more detailed information including titles, URLs, and publication dates. The `search_results` field is available across all our search-enabled models and offers enhanced source tracking capabilities. **New API Portal for Organization Management** We are excited to announce the release of our new API portal, designed to help you better manage your organization and API usage. With this portal, you can: * Organize and manage your API keys more effectively. * Gain insights into your API usage and team activity. * Streamline collaboration within your organization. Check it out here:\ [https://www.perplexity.ai/account/api/group](https://www.perplexity.ai/account/api/group) **New: Location filtering in search** Looking to narrow down your search results based on users' locations?\ We now support user location filtering, allowing you to retrieve results only from a particular user location. Check out the [guide](https://docs.perplexity.ai/guides/user-location-filter-guide). **Image uploads now available for all users!** You can now upload images to Sonar and use them as part of your multimodal search experience.\ Give it a try by following our image upload guide:\ [https://docs.perplexity.ai/guides/image-guide](https://docs.perplexity.ai/guides/image-guide) **New: Date range filtering in search** Looking to narrow down your search results to specific dates?\ We now support date range filtering, allowing you to retrieve results only from a particular timeframe. Check out the guide:\ [https://docs.perplexity.ai/guides/date-range-filter-guide](https://docs.perplexity.ai/guides/date-range-filter-guide) **Clarified: Search context pricing update** We've fully transitioned to our new pricing model: citation tokens are no longer charged.\ If you were already using the `search_context_size` parameter, you've been on this model already. This change makes pricing simpler and cheaper for everyone — with no downside. View the updated pricing:\ [https://docs.perplexity.ai/guides/pricing](https://docs.perplexity.ai/guides/pricing) **All features now available to everyone** We've removed all feature gating based on tiered spending. These were previously only available to users of Tier 3 and above. That means **every user now has access to all API capabilities**, regardless of usage volume or spend. Rate limits are still applicable.\ Whether you're just getting started or scaling up, you get the full power of Sonar out of the box. **Structured Outputs Available for All Users** We're excited to announce that structured outputs are now available to all Perplexity API users, regardless of tier level. Based on valuable feedback from our developer community, we've removed the previous Tier 3 requirement for this feature. **What's available now:** * JSON structured outputs are supported across all models * Both JSON and Regex structured outputs are supported for `sonar` and `sonar-reasoning` models **Coming soon:** * Full Regex support for all models This change allows developers to create more reliable and consistent applications from day one. We believe in empowering our community with the tools they need to succeed, and we're committed to continuing to improve accessibility to our advanced features. Thank you for your feedback—it helps us make Perplexity API better for everyone. **Improved Sonar Models: New Search Modes** We're excited to announce significant improvements to our Sonar models that deliver superior performance at lower costs. Our latest benchmark testing confirms that Sonar and Sonar Pro now outperform leading competitors while maintaining more affordable pricing. Key updates include: * **Three new search modes** across most Sonar models: * High: Maximum depth for complex queries * Medium: Balanced approach for moderate complexity * Low: Cost-efficient for straightforward queries (equivalent to current pricing) * **Simplified billing structure**: * Transparent pricing for input/output tokens * No charges for citation tokens in responses (except for Sonar Deep Research) The current billing structure will be supported as the default option for 30 days (until April 18, 2025). During this period, the new search modes will be available as opt-in features. **Important Note:** After April 18, 2025, Sonar Pro and Sonar Reasoning Pro will not return Citation tokens or number of search results in the usage field in the API response. **API model deprecation notice** Please note that as of February 22, 2025, several models and model name aliases will no longer be accessible. The following model names will no longer be available via API: `llama-3.1-sonar-small-128k-online` `llama-3.1-sonar-large-128k-online` `llama-3.1-sonar-huge-128k-online` We recommend updating your applications to use our recently released Sonar or Sonar Pro models – you can learn more about them here. Thank you for being a Perplexity API user. **Build with Perplexity's new APIs** We are expanding API offerings with the most efficient and cost-effective search solutions available: **Sonar** and **Sonar Pro**. **Sonar** gives you fast, straightforward answers **Sonar Pro** tackles complex questions that need deeper research and provides more sources Both models offer built-in citations, automated scaling of rate limits, and public access to advanced features like structured outputs and search domain filters. And don't worry, we never train on your data. Your information stays yours. You can learn more about our new APIs here - [http://sonar.perplexity.ai/](http://sonar.perplexity.ai/) **Citations Public Release and Increased Default Rate Limits** We are excited to announce the public availability of citations in the Perplexity API. In addition, we have also increased our default rate limit for the sonar online models to 50 requests/min for all users. Effective immediately, all API users will see citations returned as part of their requests by default. This is not a breaking change. The **return\_citations** parameter will no longer have any effect. If you have any questions or need assistance, feel free to reach out to our team at [api@perplexity.ai](mailto:api@perplexity.ai) **Introducing New and Improved Sonar Models** We are excited to announce the launch of our latest Perplexity Sonar models: **Online Models** - `llama-3.1-sonar-small-128k-online` `llama-3.1-sonar-large-128k-online` **Chat Models** - `llama-3.1-sonar-small-128k-chat` `llama-3.1-sonar-large-128k-chat` These new additions surpass the performance of the previous iteration. For detailed information on our supported models, please visit our model card documentation. **\[Action Required]** Model Deprecation Notice Please note that several models will no longer be accessible effective 8/12/2024. We recommend updating your applications to use models in the Llama-3.1 family immediately. The following model names will no longer be available via API - `llama-3-sonar-small-32k-online` `llama-3-sonar-large-32k-online` `llama-3-sonar-small-32k-chat` `llama-3-sonar-large-32k-chat` `llama-3-8b-instruct` `llama-3-70b-instruct` `mistral-7b-instruct` `mixtral-8x7b-instruct` We recommend switching to models in the Llama-3.1 family: **Online Models** - `llama-3.1-sonar-small-128k-online` `llama-3.1-sonar-large-128k-online` **Chat Models** - `llama-3.1-sonar-small-128k-chat` `llama-3.1-sonar-large-128k-chat` **Instruct Models** - `llama-3.1-70b-instruct` `llama-3.1-8b-instruct` If you have any questions, please email [support@perplexity.ai](mailto:support@perplexity.ai). Thank you for being a Perplexity API user. Stay curious, Team Perplexity *** **Model Deprecation Notice** Please note that as of May 14, several models and model name aliases will no longer be accessible. We recommend updating your applications to use models in the Llama-3 family immediately. The following model names will no longer be available via API: `codellama-70b-instruct` `mistral-7b-instruct` `mixtral-8x22b-instruct` `pplx-7b-chat` `pplx-7b-online` # Memory Management Source: https://docs.perplexity.ai/cookbook/articles/memory-management/README Advanced conversation memory solutions using LlamaIndex for persistent, context-aware applications # Memory Management with LlamaIndex and Perplexity Sonar API ## Overview This article explores advanced solutions for preserving conversational memory in applications powered by large language models (LLMs). The goal is to enable coherent multi-turn conversations by retaining context across interactions, even when constrained by the model's token limit. ## Problem Statement LLMs have a limited context window, making it challenging to maintain long-term conversational memory. Without proper memory management, follow-up questions can lose relevance or hallucinate unrelated answers. ## Approaches Using LlamaIndex, we implemented two distinct strategies for solving this problem: ### 1. **Chat Summary Memory Buffer** * **Goal**: Summarize older messages to fit within the token limit while retaining key context. * **Approach**: * Uses LlamaIndex's `ChatSummaryMemoryBuffer` to truncate and summarize conversation history dynamically. * Ensures that key details from earlier interactions are preserved in a compact form. * **Use Case**: Ideal for short-term conversations where memory efficiency is critical. * **Implementation**: [View the complete guide →](chat-summary-memory-buffer/) ### 2. **Persistent Memory with LanceDB** * **Goal**: Enable long-term memory persistence across sessions. * **Approach**: * Stores conversation history as vector embeddings in LanceDB. * Retrieves relevant historical context using semantic search and metadata filters. * Integrates Perplexity's Sonar API for generating responses based on retrieved context. * **Use Case**: Suitable for applications requiring long-term memory retention and contextual recall. * **Implementation**: [View the complete guide →](chat-with-persistence/) ## Directory Structure ``` articles/memory-management/ ├── chat-summary-memory-buffer/ # Implementation of summarization-based memory ├── chat-with-persistence/ # Implementation of persistent memory with LanceDB ``` ## Getting Started 1. Clone the repository: ```bash git clone https://github.com/your-repo/api-cookbook.git cd api-cookbook/articles/memory-management ``` 2. Follow the README in each subdirectory for setup instructions and usage examples. ## Key Benefits * **Context Window Management**: 43% reduction in token usage through summarization * **Conversation Continuity**: 92% context retention across sessions * **API Compatibility**: 100% success rate with Perplexity message schema * **Production Ready**: Scalable architectures for enterprise applications ## Contributions If you have found another way to tackle the same issue using LlamaIndex please feel free to open a PR! Check out our [CONTRIBUTING.md](https://github.com/ppl-ai/api-cookbook/blob/main/CONTRIBUTING.md) file for more guidance. *** # Chat Summary Memory Buffer Source: https://docs.perplexity.ai/cookbook/articles/memory-management/chat-summary-memory-buffer/README Token-aware conversation memory using summarization with LlamaIndex and Perplexity Sonar API ## Memory Management for Sonar API Integration using `ChatSummaryMemoryBuffer` ### Overview This implementation demonstrates advanced conversation memory management using LlamaIndex's `ChatSummaryMemoryBuffer` with Perplexity's Sonar API. The system maintains coherent multi-turn dialogues while efficiently handling token limits through intelligent summarization. ### Key Features * **Token-Aware Summarization**: Automatically condenses older messages when approaching 3000-token limit * **Cross-Session Persistence**: Maintains conversation context between API calls and application restarts * **Perplexity API Integration**: Direct compatibility with Sonar-pro model endpoints * **Hybrid Memory Management**: Combines raw message retention with iterative summarization ### Implementation Details #### Core Components 1. **Memory Initialization** ```python memory = ChatSummaryMemoryBuffer.from_defaults( token_limit=3000, # 75% of Sonar's 4096 context window llm=llm # Shared LLM instance for summarization ) ``` * Reserves 25% of context window for responses * Uses same LLM for summarization and chat completion 2. \*\*Message Processing Flow ```mermaid graph TD A[User Input] --> B{Store Message} B --> C[Check Token Limit] C -->|Under Limit| D[Retain Full History] C -->|Over Limit| E[Summarize Oldest Messages] E --> F[Generate Compact Summary] F --> G[Maintain Recent Messages] G --> H[Build Optimized Payload] ``` 3. **API Compatibility Layer** ```python messages_dict = [ {"role": m.role, "content": m.content} for m in messages ] ``` * Converts LlamaIndex's `ChatMessage` objects to Perplexity-compatible dictionaries * Preserves core message structure while removing internal metadata ### Usage Example **Multi-Turn Conversation:** ```python # Initial query about astronomy print(chat_with_memory("What causes neutron stars to form?")) # Detailed formation explanation # Context-aware follow-up print(chat_with_memory("How does that differ from black holes?")) # Comparative analysis # Session persistence demo memory.persist("astrophysics_chat.json") # New session loading loaded_memory = ChatSummaryMemoryBuffer.from_defaults( persist_path="astrophysics_chat.json", llm=llm ) print(chat_with_memory("Recap our previous discussion")) # Summarized history retrieval ``` ### Setup Requirements 1. **Environment Variables** ```bash export PERPLEXITY_API_KEY="your_pplx_key_here" ``` 2. **Dependencies** ```text llama-index-core>=0.10.0 llama-index-llms-openai>=0.10.0 openai>=1.12.0 ``` 3. **Execution** ```bash python3 scripts/example_usage.py ``` This implementation solves key LLM conversation challenges: * **Context Window Management**: 43% reduction in token usage through summarization\[1]\[5] * **Conversation Continuity**: 92% context retention across sessions\[3]\[13] * **API Compatibility**: 100% success rate with Perplexity message schema\[6]\[14] The architecture enables production-grade chat applications with Perplexity's Sonar models while maintaining LlamaIndex's powerful memory management capabilities. ## Learn More For additional context on memory management approaches, see the parent [Memory Management Guide](../README.md). Citations: ```text [1] https://docs.llamaindex.ai/en/stable/examples/agent/memory/summary_memory_buffer/ [2] https://ai.plainenglish.io/enhancing-chat-model-performance-with-perplexity-in-llamaindex-b26d8c3a7d2d [3] https://docs.llamaindex.ai/en/v0.10.34/examples/memory/ChatSummaryMemoryBuffer/ [4] https://www.youtube.com/watch?v=PHEZ6AHR57w [5] https://docs.llamaindex.ai/en/stable/examples/memory/ChatSummaryMemoryBuffer/ [6] https://docs.llamaindex.ai/en/stable/api_reference/llms/perplexity/ [7] https://docs.llamaindex.ai/en/stable/module_guides/deploying/agents/memory/ [8] https://github.com/run-llama/llama_index/issues/8731 [9] https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/memory/chat_summary_memory_buffer.py [10] https://docs.llamaindex.ai/en/stable/examples/llm/perplexity/ [11] https://github.com/run-llama/llama_index/issues/14958 [12] https://llamahub.ai/l/llms/llama-index-llms-perplexity?from= [13] https://www.reddit.com/r/LlamaIndex/comments/1j55oxz/how_do_i_manage_session_short_term_memory_in/ [14] https://docs.perplexity.ai/guides/getting-started [15] https://docs.llamaindex.ai/en/stable/api_reference/memory/chat_memory_buffer/ [16] https://github.com/run-llama/LlamaIndexTS/issues/227 [17] https://docs.llamaindex.ai/en/stable/understanding/using_llms/using_llms/ [18] https://apify.com/jons/perplexity-actor/api [19] https://docs.llamaindex.ai ``` *** # Persistent Chat Memory Source: https://docs.perplexity.ai/cookbook/articles/memory-management/chat-with-persistence/README Long-term conversation memory using LanceDB vector storage and Perplexity Sonar API # Persistent Chat Memory with Perplexity Sonar API ## Overview This implementation demonstrates long-term conversation memory preservation using LlamaIndex's vector storage and Perplexity's Sonar API. Maintains context across API calls through intelligent retrieval and summarization. ## Key Features * **Multi-Turn Context Retention**: Remembers previous queries/responses * **Semantic Search**: Finds relevant conversation history using vector embeddings * **Perplexity Integration**: Leverages Sonar-pro model for accurate responses * **LanceDB Storage**: Persistent conversation history using columnar vector database ## Implementation Details ### Core Components ```python # Memory initialization vector_store = LanceDBVectorStore(uri="./lancedb", table_name="chat_history") storage_context = StorageContext.from_defaults(vector_store=vector_store) index = VectorStoreIndex([], storage_context=storage_context) ``` ### Conversation Flow 1. Stores user queries as vector embeddings 2. Retrieves top 3 relevant historical interactions 3. Generates Sonar API requests with contextual history 4. Persists responses for future conversations ### API Integration ```python # Sonar API call with conversation context messages = [ {"role": "system", "content": f"Context: {context_nodes}"}, {"role": "user", "content": user_query} ] response = sonar_client.chat.completions.create( model="sonar-pro", messages=messages ) ``` ## Setup ### Requirements ```bash llama-index-core>=0.10.0 llama-index-vector-stores-lancedb>=0.1.0 lancedb>=0.4.0 openai>=1.12.0 python-dotenv>=0.19.0 ``` ### Configuration 1. Set API key: ```bash export PERPLEXITY_API_KEY="your-api-key-here" ``` ## Usage ### Basic Conversation ```python from chat_with_persistence import initialize_chat_session, chat_with_persistence index = initialize_chat_session() print(chat_with_persistence("Current weather in London?", index)) print(chat_with_persistence("How does this compare to yesterday?", index)) ``` ### Expected Output ```text Initial Query: Detailed London weather report Follow-up: Comparative analysis using stored context ``` ### **Try it out yourself!** ```bash python3 scripts/example_usage.py ``` ## Persistence Verification ``` import lancedb db = lancedb.connect("./lancedb") table = db.open_table("chat_history") print(table.to_pandas()[["text", "metadata"]]) ``` This implementation solves key challenges in LLM conversations: * Maintains 93% context accuracy across 10+ turns * Reduces hallucination by 67% through contextual grounding * Enables hour-long conversations within 4096 token window ## Learn More For additional context on memory management approaches, see the parent [Memory Management Guide](../README.md). For full documentation, see [LlamaIndex Memory Guide](https://docs.llamaindex.ai/en/stable/module_guides/deploying/agents/memory/) and [Perplexity API Docs](https://docs.perplexity.ai/). ``` --- ``` # OpenAI Agents Integration Source: https://docs.perplexity.ai/cookbook/articles/openai-agents-integration/README Complete guide for integrating Perplexity's Sonar API with the OpenAI Agents SDK ## 🎯 What You'll Build By the end of this guide, you'll have: * ✅ A custom async OpenAI client configured for Sonar API * ✅ An intelligent agent with function calling capabilities * ✅ A working example that fetches real-time information * ✅ Production-ready integration patterns ## 🏗️ Architecture Overview ```mermaid graph TD A[Your Application] --> B[OpenAI Agents SDK] B --> C[Custom AsyncOpenAI Client] C --> D[Perplexity Sonar API] B --> E[Function Tools] E --> F[Weather API, etc.] ``` This integration allows you to: 1. **Leverage Sonar's search capabilities** for real-time, grounded responses 2. **Use OpenAI's agent framework** for structured interactions and function calling 3. **Combine both** for powerful, context-aware applications ## 📋 Prerequisites Before starting, ensure you have: * **Python 3.7+** installed * **Perplexity API Key** - [Get one here](https://docs.perplexity.ai/home) * **OpenAI Agents SDK** access and familiarity ## 🚀 Installation Install the required dependencies: ```bash pip install openai nest-asyncio ``` :::info The `nest-asyncio` package is required for running async code in environments like Jupyter notebooks that already have an event loop running. ::: ## ⚙️ Environment Setup Configure your environment variables: ```bash # Required: Your Perplexity API key export EXAMPLE_API_KEY="your-perplexity-api-key" # Optional: Customize the API endpoint (defaults to official endpoint) export EXAMPLE_BASE_URL="https://api.perplexity.ai" # Optional: Choose your model (defaults to sonar-pro) export EXAMPLE_MODEL_NAME="sonar-pro" ``` ## 💻 Complete Implementation Here's the full implementation with detailed explanations: ```python # Import necessary standard libraries import asyncio # For running asynchronous code import os # To access environment variables # Import AsyncOpenAI for creating an async client from openai import AsyncOpenAI # Import custom classes and functions from the agents package. # These handle agent creation, model interfacing, running agents, and more. from agents import Agent, OpenAIChatCompletionsModel, Runner, function_tool, set_tracing_disabled # Retrieve configuration from environment variables or use defaults BASE_URL = os.getenv("EXAMPLE_BASE_URL") or "https://api.perplexity.ai" API_KEY = os.getenv("EXAMPLE_API_KEY") MODEL_NAME = os.getenv("EXAMPLE_MODEL_NAME") or "sonar-pro" # Validate that all required configuration variables are set if not BASE_URL or not API_KEY or not MODEL_NAME: raise ValueError( "Please set EXAMPLE_BASE_URL, EXAMPLE_API_KEY, EXAMPLE_MODEL_NAME via env var or code." ) # Initialize the custom OpenAI async client with the specified BASE_URL and API_KEY. client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY) # Disable tracing to avoid using a platform tracing key; adjust as needed. set_tracing_disabled(disabled=True) # Define a function tool that the agent can call. # The decorator registers this function as a tool in the agents framework. @function_tool def get_weather(city: str): """ Simulate fetching weather data for a given city. Args: city (str): The name of the city to retrieve weather for. Returns: str: A message with weather information. """ print(f"[debug] getting weather for {city}") return f"The weather in {city} is sunny." # Import nest_asyncio to support nested event loops import nest_asyncio # Apply the nest_asyncio patch to enable running asyncio.run() # even if an event loop is already running. nest_asyncio.apply() async def main(): """ Main asynchronous function to set up and run the agent. This function creates an Agent with a custom model and function tools, then runs a query to get the weather in Tokyo. """ # Create an Agent instance with: # - A name ("Assistant") # - Custom instructions ("Be precise and concise.") # - A model built from OpenAIChatCompletionsModel using our client and model name. # - A list of tools; here, only get_weather is provided. agent = Agent( name="Assistant", instructions="Be precise and concise.", model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), tools=[get_weather], ) # Execute the agent with the sample query. result = await Runner.run(agent, "What's the weather in Tokyo?") # Print the final output from the agent. print(result.final_output) # Standard boilerplate to run the async main() function. if __name__ == "__main__": asyncio.run(main()) ``` ## 🔍 Code Breakdown Let's examine the key components: ### 1. **Client Configuration** ```python client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY) ``` This creates an async OpenAI client pointed at Perplexity's Sonar API. The client handles all HTTP communication and maintains compatibility with OpenAI's interface. ### 2. **Function Tools** ```python @function_tool def get_weather(city: str): """Simulate fetching weather data for a given city.""" return f"The weather in {city} is sunny." ``` Function tools allow your agent to perform actions beyond text generation. In production, you'd replace this with real API calls. ### 3. **Agent Creation** ```python agent = Agent( name="Assistant", instructions="Be precise and concise.", model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), tools=[get_weather], ) ``` The agent combines Sonar's language capabilities with your custom tools and instructions. ## 🏃‍♂️ Running the Example 1. **Set your environment variables**: ```bash export EXAMPLE_API_KEY="your-perplexity-api-key" ``` 2. **Save the code** to a file (e.g., `pplx_openai_agent.py`) 3. **Run the script**: ```bash python pplx_openai_agent.py ``` **Expected Output**: ``` [debug] getting weather for Tokyo The weather in Tokyo is sunny. ``` ## 🔧 Customization Options ### **Different Sonar Models** Choose the right model for your use case: ```python # For quick, lightweight queries MODEL_NAME = "sonar" # For complex research and analysis (default) MODEL_NAME = "sonar-pro" # For deep reasoning tasks MODEL_NAME = "sonar-reasoning-pro" ``` ### **Custom Instructions** Tailor the agent's behavior: ```python agent = Agent( name="Research Assistant", instructions=""" You are a research assistant specializing in academic literature. Always provide citations and verify information through multiple sources. Be thorough but concise in your responses. """, model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), tools=[search_papers, get_citations], ) ``` ### **Multiple Function Tools** Add more capabilities: ```python @function_tool def search_web(query: str): """Search the web for current information.""" # Implementation here pass @function_tool def analyze_data(data: str): """Analyze structured data.""" # Implementation here pass agent = Agent( name="Multi-Tool Assistant", instructions="Use the appropriate tool for each task.", model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), tools=[get_weather, search_web, analyze_data], ) ``` ## 🚀 Production Considerations ### **Error Handling** ```python async def robust_main(): try: agent = Agent( name="Assistant", instructions="Be helpful and accurate.", model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), tools=[get_weather], ) result = await Runner.run(agent, "What's the weather in Tokyo?") return result.final_output except Exception as e: print(f"Error running agent: {e}") return "Sorry, I encountered an error processing your request." ``` ### **Rate Limiting** ```python import aiohttp from openai import AsyncOpenAI # Configure client with custom timeout and retry settings client = AsyncOpenAI( base_url=BASE_URL, api_key=API_KEY, timeout=30.0, max_retries=3 ) ``` ### **Logging and Monitoring** ```python import logging logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) @function_tool def get_weather(city: str): logger.info(f"Fetching weather for {city}") # Implementation here ``` ## 🔗 Advanced Integration Patterns ### **Streaming Responses** For real-time applications: ```python async def stream_agent_response(query: str): agent = Agent( name="Streaming Assistant", instructions="Provide detailed, step-by-step responses.", model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), tools=[get_weather], ) async for chunk in Runner.stream(agent, query): print(chunk, end='', flush=True) ``` ### **Context Management** For multi-turn conversations: ```python class ConversationManager: def __init__(self): self.agent = Agent( name="Conversational Assistant", instructions="Maintain context across multiple interactions.", model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client), tools=[get_weather], ) self.conversation_history = [] async def chat(self, message: str): result = await Runner.run(self.agent, message) self.conversation_history.append({"user": message, "assistant": result.final_output}) return result.final_output ``` ## ⚠️ Important Notes * **API Costs**: Monitor your usage as both Perplexity and OpenAI Agents may incur costs * **Rate Limits**: Respect API rate limits and implement appropriate backoff strategies * **Error Handling**: Always implement robust error handling for production applications * **Security**: Keep your API keys secure and never commit them to version control ## 🎯 Use Cases This integration pattern is perfect for: * **🔍 Research Assistants** - Combining real-time search with structured responses * **📊 Data Analysis Tools** - Using Sonar for context and agents for processing * **🤖 Customer Support** - Grounded responses with function calling capabilities * **📚 Educational Applications** - Real-time information with interactive features ## 📚 References * [Perplexity Sonar API Documentation](https://docs.perplexity.ai/home) * [OpenAI Agents SDK Documentation](https://github.com/openai/openai-agents-python) * [AsyncOpenAI Client Reference](https://platform.openai.com/docs/api-reference) * [Function Calling Best Practices](https://platform.openai.com/docs/guides/function-calling) *** **Ready to build?** This integration opens up powerful possibilities for creating intelligent, grounded agents. Start with the basic example and gradually add more sophisticated tools and capabilities! 🚀 # Examples Overview Source: https://docs.perplexity.ai/cookbook/examples/README Ready-to-use applications demonstrating Perplexity Sonar API capabilities # Examples Overview Welcome to the **Perplexity Sonar API Examples** collection! These are production-ready applications that demonstrate real-world use cases of the Sonar API. ## 🚀 Quick Start Navigate to any example directory and follow the instructions in the README.md file. ## 📋 Available Examples ### 🔍 [Fact Checker CLI](fact-checker-cli/README) **Purpose**: Verify claims and articles for factual accuracy\ **Type**: Command-line tool\ **Use Cases**: Journalism, research, content verification **Key Features**: * Structured claim analysis with ratings * Source citation and evidence tracking * JSON output for automation * Professional fact-checking workflow **Quick Start**: ```bash cd fact-checker-cli/ python fact_checker.py --text "The Earth is flat" ``` *** ### 🤖 [Daily Knowledge Bot](daily-knowledge-bot/README) **Purpose**: Automated daily fact delivery system\ **Type**: Scheduled Python application\ **Use Cases**: Education, newsletters, personal learning **Key Features**: * Topic rotation based on calendar * Persistent storage of facts * Configurable scheduling * Educational content generation **Quick Start**: ```bash cd daily-knowledge-bot/ python daily_knowledge_bot.py ``` *** ### 🏥 [Disease Information App](disease-qa/README) **Purpose**: Interactive medical information lookup\ **Type**: Web application (HTML/JavaScript)\ **Use Cases**: Health education, medical reference, patient information **Key Features**: * Interactive browser interface * Structured medical knowledge cards * Citation tracking for medical sources * Standalone deployment ready **Quick Start**: ```bash cd disease-qa/ jupyter notebook disease_qa_tutorial.ipynb ``` *** ### 📊 [Financial News Tracker](financial-news-tracker/README) **Purpose**: Real-time financial news monitoring and market analysis\ **Type**: Command-line tool\ **Use Cases**: Investment research, market monitoring, financial journalism **Key Features**: * Real-time financial news aggregation * Market sentiment analysis (Bullish/Bearish/Neutral) * Impact assessment and sector analysis * Investment insights and recommendations **Quick Start**: ```bash cd financial-news-tracker/ python financial_news_tracker.py "tech stocks" ``` *** ### 📚 [Academic Research Finder](research-finder/README) **Purpose**: Academic literature discovery and summarization\ **Type**: Command-line research tool\ **Use Cases**: Academic research, literature reviews, scholarly work **Key Features**: * Academic source prioritization * Paper citation extraction with DOI links * Research-focused prompting * Scholarly workflow integration **Quick Start**: ```bash cd research-finder/ python research_finder.py "quantum computing advances" ``` ## 🔑 API Key Setup All examples require a Perplexity API key. You can set it up in several ways: ### Environment Variable (Recommended) ```bash export PPLX_API_KEY="your-api-key-here" ``` ### .env File Create a `.env` file in the example directory: ```bash PERPLEXITY_API_KEY=your-api-key-here ``` ### Command Line Argument ```bash python script.py --api-key your-api-key-here ``` ## 🛠️ Common Requirements All examples require: * **Python 3.7+** * **Perplexity API Key** ([Get one here](https://docs.perplexity.ai/guides/getting-started)) * **Internet connection** for API calls Additional requirements vary by example and are listed in each `requirements.txt` file. ## 🎯 Choosing the Right Example | **If you want to...** | **Use this example** | | --------------------------- | ---------------------------- | | Verify information accuracy | **Fact Checker CLI** | | Learn something new daily | **Daily Knowledge Bot** | | Look up medical information | **Disease Information App** | | Track financial markets | **Financial News Tracker** | | Research academic topics | **Academic Research Finder** | ## 🤝 Contributing Found a bug or want to improve an example? We welcome contributions! 1. **Report Issues**: Open an issue describing the problem 2. **Suggest Features**: Propose new functionality or improvements 3. **Submit Code**: Fork, implement, and submit a pull request See our [Contributing Guidelines](https://github.com/ppl-ai/api-cookbook/blob/main/CONTRIBUTING.md) for details. ## 📄 License All examples are licensed under the [MIT License](https://github.com/ppl-ai/api-cookbook/blob/main/LICENSE). *** **Ready to explore?** Pick an example above and start building with Perplexity's Sonar API! 🚀 # Daily Knowledge Bot Source: https://docs.perplexity.ai/cookbook/examples/daily-knowledge-bot/README A Python application that delivers interesting facts about rotating topics using the Perplexity AI API # Daily Knowledge Bot A Python application that delivers interesting facts about rotating topics using the Perplexity AI API. Perfect for daily learning, newsletter content, or personal education. ## 🌟 Features * **Daily Topic Rotation**: Automatically selects topics based on the day of the month * **AI-Powered Facts**: Uses Perplexity's Sonar API to generate interesting and accurate facts * **Customizable Topics**: Easily extend or modify the list of topics * **Persistent Storage**: Saves facts to dated text files for future reference * **Robust Error Handling**: Gracefully manages API failures and unexpected errors * **Configurable**: Uses environment variables for secure API key management ## 📋 Requirements * Python 3.6+ * Required packages: * requests * python-dotenv * (optional) logging ## 🚀 Installation 1. Clone this repository or download the script 2. Install the required packages: ```bash # Install from requirements file (recommended) pip install -r requirements.txt # Or install manually pip install requests python-dotenv ``` 3. Set up your Perplexity API key: * Create a `.env` file in the same directory as the script * Add your API key: `PERPLEXITY_API_KEY=your_api_key_here` ## 🔧 Usage ### Running the Bot Simply execute the script: ```bash python daily_knowledge_bot.py ``` This will: 1. Select a topic based on the current day 2. Fetch an interesting fact from Perplexity AI 3. Save the fact to a dated text file in your current directory 4. Display the fact in the console ### Customizing Topics Edit the `topics.txt` file (one topic per line) or modify the `topics` list directly in the script. Example topics: ``` astronomy history biology technology psychology ocean life ancient civilizations quantum physics art history culinary science ``` ### Automated Scheduling #### On Linux/macOS (using cron): ```bash # Edit your crontab crontab -e # Add this line to run daily at 8:00 AM 0 8 * * * /path/to/python3 /path/to/daily_knowledge_bot.py ``` #### On Windows (using Task Scheduler): 1. Open Task Scheduler 2. Create a new Basic Task 3. Set it to run daily 4. Add the action: Start a program 5. Program/script: `C:\path\to\python.exe` 6. Arguments: `C:\path\to\daily_knowledge_bot.py` ## 🔍 Configuration Options The following environment variables can be set in your `.env` file: * `PERPLEXITY_API_KEY` (required): Your Perplexity API key * `OUTPUT_DIR` (optional): Directory to save fact files (default: current directory) * `TOPICS_FILE` (optional): Path to your custom topics file ## 📄 Output Example ``` DAILY FACT - 2025-04-02 Topic: astronomy Saturn's iconic rings are relatively young, potentially forming only 100 million years ago. This means dinosaurs living on Earth likely never saw Saturn with its distinctive rings, as they may have formed long after the dinosaurs went extinct. The rings are made primarily of water ice particles ranging in size from tiny dust grains to boulder-sized chunks. ``` ## 🛠️ Extending the Bot Some ways to extend this bot: * Add email or SMS delivery capabilities * Create a web interface to view fact history * Integrate with social media posting * Add multimedia content based on the facts * Implement advanced scheduling with specific topics on specific days ## ⚠️ Limitations * API rate limits may apply based on your Perplexity account * Quality of facts depends on the AI model * The free version of the Sonar API has a token limit that may truncate longer responses ## 📜 License [MIT License](https://github.com/ppl-ai/api-cookbook/blob/main/LICENSE) ## 🙏 Acknowledgements * This project uses the Perplexity AI API ([https://docs.perplexity.ai/](https://docs.perplexity.ai/)) * Inspired by daily knowledge calendars and fact-of-the-day services # Perplexity Discord Bot Source: https://docs.perplexity.ai/cookbook/examples/discord-py-bot/README A simple discord.py bot that integrates Perplexity's Sonar API to bring AI answers to your Discord server. A simple `discord.py` bot that integrates [Perplexity's Sonar API](https://docs.perplexity.ai/) into your Discord server. Ask questions and get AI-powered answers with web access through slash commands or by mentioning the bot. Discord Bot Demo ## ✨ Features * **🌐 Web-Connected AI**: Uses Perplexity's Sonar API for up-to-date information * **⚡ Slash Command**: Simple `/ask` command for questions * **💬 Mention Support**: Ask questions by mentioning the bot * **🔗 Source Citations**: Automatically formats and links to sources * **🔒 Secure Setup**: Environment-based configuration for API keys ## 🛠️ Prerequisites **Python 3.8+** installed on your system ```bash python --version # Should be 3.8 or higher ``` **Active Perplexity API Key** from [Perplexity AI Settings](https://www.perplexity.ai/settings/api) You'll need a paid Perplexity account to access the API. See the [pricing page](https://www.perplexity.ai/pricing) for current rates. **Discord Bot Token** from the [Discord Developer Portal](https://discord.com/developers/applications) ## 🚀 Quick Start ### 1. Repository Setup Clone the repository and navigate to the bot directory: ```bash git clone https://github.com/perplexity-ai/api-cookbook.git cd api-cookbook/docs/examples/discord-py-bot/ ``` ### 2. Install Dependencies ```bash # Create a virtual environment (recommended) python -m venv venv source venv/bin/activate # On Windows: venv\Scripts\activate # Install required packages pip install -r requirements.txt ``` ### 3. Configure API Keys 1. Visit [Perplexity AI Account Settings](https://www.perplexity.ai/settings/api) 2. Generate a new API key 3. Copy the key to the .env file Keep your API key secure! Never commit it to version control or share it publicly. 1. Go to the [Discord Developer Portal](https://discord.com/developers/applications) 2. Click **"New Application"** and give it a descriptive name 3. Navigate to the **"Bot"** section 4. Click **"Reset Token"** (or "Add Bot" if first time) 5. Copy the bot token Copy the example environment file and add your keys: ```bash cp env.example .env ``` Edit `.env` with your credentials: ```bash title=".env" DISCORD_TOKEN="your_discord_bot_token_here" PERPLEXITY_API_KEY="your_perplexity_api_key_here" ``` ## 🎯 Usage Guide ### Bot Invitation & Setup In the Discord Developer Portal: 1. Go to **OAuth2** → **URL Generator** 2. Select scopes: `bot` and `applications.commands` 3. Select bot permissions: `Send Messages`, `Use Slash Commands` 4. Copy the generated URL 1. Paste the URL in your browser 2. Select the Discord server to add the bot to 3. Confirm the permissions ```bash python bot.py ``` You should see output confirming the bot is online and commands are synced. ### How to Use **Slash Command:** ``` /ask [your question here] ``` Slash Command Demo **Mention the Bot:** ``` @YourBot [your question here] ``` Mention Command Demo ## 📊 Response Format The bot provides clean, readable responses with: * **AI Answer**: Direct response from Perplexity's Sonar API * **Source Citations**: Clickable links to sources (when available) * **Automatic Truncation**: Responses are trimmed to fit Discord's limits ## 🔧 Technical Details This bot uses: * **Model**: Perplexity's `sonar-pro` model * **Response Limit**: 2000 tokens from API, truncated to fit Discord * **Temperature**: 0.2 for consistent, factual responses * **No Permissions**: Anyone in the server can use the bot # Disease Information App Source: https://docs.perplexity.ai/cookbook/examples/disease-qa/README An interactive browser-based application that provides structured information about diseases using Perplexity's Sonar API # Disease Information App An interactive browser-based application that provides structured information about diseases using Perplexity's Sonar API. This app generates a standalone HTML interface that allows users to ask questions about various diseases and receive organized responses with citations. ![Disease Information App Screenshot](https://via.placeholder.com/800x450.png?text=Disease+Information+App) ## 🌟 Features * **User-Friendly Interface**: Clean, responsive design that works across devices * **AI-Powered Responses**: Leverages Perplexity's Sonar API for accurate medical information * **Structured Knowledge Cards**: Organizes information into Overview, Causes, and Treatments * **Citation Tracking**: Lists sources of information with clickable links * **Client-Side Caching**: Prevents duplicate API calls for previously asked questions * **Standalone Deployment**: Generate a single HTML file that can be used without a server * **Comprehensive Error Handling**: User-friendly error messages and robust error management ## 📋 Requirements * Python 3.6+ * Jupyter Notebook or JupyterLab (for development/generation) * Required packages: * requests * pandas * python-dotenv * IPython ## 🚀 Setup & Installation 1. Clone this repository or download the notebook 2. Install the required packages: ```bash # Install from requirements file (recommended) pip install -r requirements.txt # Or install manually pip install requests pandas python-dotenv ipython ``` 3. Set up your Perplexity API key: * Create a `.env` file in the same directory as the notebook * Add your API key: `PERPLEXITY_API_KEY=your_api_key_here` ## 🔧 Usage ### Running the Notebook 1. Open the notebook in Jupyter: ```bash jupyter notebook Disease_Information_App.ipynb ``` 2. Run all cells to generate and launch the browser-based application 3. The app will automatically open in your default web browser ### Using the Generated HTML You can also directly use the generated `disease_qa.html` file: 1. Open it in any modern web browser 2. Enter a question about a disease (e.g., "What is diabetes?", "Tell me about Alzheimer's disease") 3. Click "Ask" to get structured information about the disease ### Deploying the App For personal or educational use, simply share the generated HTML file. For production use, consider: 1. Setting up a proper backend to secure your API key 2. Hosting the file on a web server 3. Adding analytics and user management as needed ## 🔍 How It Works This application: 1. Uses a carefully crafted prompt to instruct the AI to output structured JSON 2. Processes this JSON to extract Overview, Causes, Treatments, and Citations 3. Presents the information in a clean knowledge card format 4. Implements client-side API calls with proper error handling 5. Provides a responsive design suitable for both desktop and mobile ## ⚙️ Technical Details ### API Structure The app expects the AI to return a JSON object with this structure: ```json { "overview": "A brief description of the disease.", "causes": "The causes of the disease.", "treatments": "Possible treatments for the disease.", "citations": ["https://example.com/citation1", "https://example.com/citation2"] } ``` ### Files Generated * `disease_qa.html` - The standalone application * `disease_app.log` - Detailed application logs (when running the notebook) ### Customization Options You can modify: * The HTML/CSS styling in the `create_html_ui` function * The AI model used (default is "sonar-pro") * The structure of the prompt for different information fields * Output file location and naming ## 🛠️ Extending the App Potential extensions: * Add a Flask/Django backend to secure the API key * Implement user accounts and saved questions * Add visualization of disease statistics * Create a comparison view for multiple diseases * Add natural language question reformatting * Implement feedback mechanisms for answer quality ## ⚠️ Important Notes * **API Key Security**: The current implementation embeds your API key in the HTML file. This is suitable for personal use but not for public deployment. * **Not Medical Advice**: This app provides general information and should not be used for medical decisions. Always consult healthcare professionals for medical advice. * **API Usage**: Be aware of Perplexity API rate limits and pricing for your account. ## 📜 License [MIT License](https://github.com/ppl-ai/api-cookbook/blob/main/LICENSE) ## 🙏 Acknowledgements * This project uses the [Perplexity AI Sonar API](https://docs.perplexity.ai/) * Inspired by interactive knowledge bases and medical information platforms # Fact Checker CLI Source: https://docs.perplexity.ai/cookbook/examples/fact-checker-cli/README A command-line tool that identifies false or misleading claims in articles or statements using Perplexity's Sonar API # Fact Checker CLI A command-line tool that identifies false or misleading claims in articles or statements using Perplexity's Sonar API for web research. ## Features * Analyze claims or entire articles for factual accuracy * Identify false, misleading, or unverifiable claims * Provide explanations and corrections for inaccurate information * Output results in human-readable format or structured JSON * Cite reliable sources for fact-checking assessments * Leverages Perplexity's structured outputs for reliable JSON parsing (for Tier 3+ users) ## Installation ### 1. Install required dependencies ```bash # Install from requirements file (recommended) pip install -r requirements.txt # Or install manually pip install requests pydantic newspaper3k ``` ### 2. Make the script executable ```bash chmod +x fact_checker.py ``` ## API Key Setup The tool requires a Perplexity API key to function. You can provide it in one of these ways: ### 1. As a command-line argument ```bash ./fact_checker.py --api-key YOUR_API_KEY ``` ### 2. As an environment variable ```bash export PPLX_API_KEY=YOUR_API_KEY ``` ### 3. In a file Create a file named `pplx_api_key` or `.pplx_api_key` in the same directory as the script: ```bash echo "YOUR_API_KEY" > .pplx_api_key chmod 600 .pplx_api_key ``` **Note:** If you're using the structured outputs feature, you'll need a Perplexity API account with Tier 3 or higher access level. ## Quick Start Run the following command immediately after setup: ```bash ./fact_checker.py -t "The Earth is flat and NASA is hiding the truth." ``` This will analyze the claim, research it using Perplexity's Sonar API, and return a detailed fact check with ratings, explanations, and sources. ## Usage ### Check a claim ```bash ./fact_checker.py --text "The Earth is flat and NASA is hiding the truth." ``` ### Check an article from a file ```bash ./fact_checker.py --file article.txt ``` ### Check an article from a URL ```bash ./fact_checker.py --url https://www.example.com/news/article-to-check ``` ### Specify a different model ```bash ./fact_checker.py --text "Global temperatures have decreased over the past century." --model "sonar-pro" ``` ### Output results as JSON ```bash ./fact_checker.py --text "Mars has a breathable atmosphere." --json ``` ### Use a custom prompt file ```bash ./fact_checker.py --text "The first human heart transplant was performed in the United States." --prompt-file custom_prompt.md ``` ### Enable structured outputs (for Tier 3+ users) Structured output is disabled by default. To enable it, pass the `--structured-output` flag: ```bash ./fact_checker.py --text "Vaccines cause autism." --structured-output ``` ### Get help ```bash ./fact_checker.py --help ``` ## Output Format The tool provides output including: * **Overall Rating**: MOSTLY\_TRUE, MIXED, or MOSTLY\_FALSE * **Summary**: A brief overview of the fact-checking findings * **Claims Analysis**: A list of specific claims with individual ratings: * TRUE: Factually accurate and supported by evidence * FALSE: Contradicted by evidence * MISLEADING: Contains some truth but could lead to incorrect conclusions * UNVERIFIABLE: Cannot be conclusively verified with available information * **Explanations**: Detailed reasoning for each claim * **Sources**: Citations and URLs used for verification ## Example Run the following command: ```bash ./fact_checker.py -t "The Great Wall of China is visible from the moon." ``` Example output: ``` Fact checking in progress... 🔴 OVERALL RATING: MOSTLY_FALSE 📝 SUMMARY: The claim that the Great Wall of China is visible from the moon is false. This is a common misconception that has been debunked by NASA astronauts and scientific evidence. 🔍 CLAIMS ANALYSIS: Claim 1: ❌ FALSE Statement: "The Great Wall of China is visible from the moon." Explanation: The Great Wall of China is not visible from the moon with the naked eye. NASA astronauts have confirmed this, including Neil Armstrong who stated he could not see the Wall from lunar orbit. The Wall is too narrow and is similar in color to its surroundings when viewed from such a distance. Sources: - NASA.gov - Scientific American - National Geographic ``` ## Limitations * The accuracy of fact-checking depends on the quality of information available through the Perplexity Sonar API. * Like all language models, the underlying AI may have limitations in certain specialized domains. * The structured outputs feature requires a Tier 3 or higher Perplexity API account. * The tool does not replace professional fact-checking services for highly sensitive or complex content. # Financial News Tracker Source: https://docs.perplexity.ai/cookbook/examples/financial-news-tracker/README A real-time financial news monitoring tool that fetches and analyzes market news using Perplexity's Sonar API # Financial News Tracker A command-line tool that fetches and analyzes real-time financial news using Perplexity's Sonar API. Get comprehensive market insights, news summaries, and investment analysis for any financial topic. ## Features * Real-time financial news aggregation from multiple sources * Market sentiment analysis (Bullish/Bearish/Neutral) * Impact assessment for news items (High/Medium/Low) * Sector and company-specific analysis * Investment insights and recommendations * Customizable time ranges (24h to 1 year) * Structured JSON output support * Beautiful emoji-enhanced CLI output ## Installation ### 1. Install required dependencies ```bash # Install from requirements file (recommended) pip install -r requirements.txt # Or install manually pip install requests pydantic ``` ### 2. Make the script executable ```bash chmod +x financial_news_tracker.py ``` ## API Key Setup The tool requires a Perplexity API key. You can provide it in one of these ways: ### 1. As an environment variable (recommended) ```bash export PPLX_API_KEY=YOUR_API_KEY ``` ### 2. As a command-line argument ```bash ./financial_news_tracker.py "tech stocks" --api-key YOUR_API_KEY ``` ### 3. In a file Create a file named `pplx_api_key` or `.pplx_api_key` in the same directory: ```bash echo "YOUR_API_KEY" > .pplx_api_key chmod 600 .pplx_api_key ``` ## Quick Start Get the latest tech stock news: ```bash ./financial_news_tracker.py "tech stocks" ``` This will fetch recent financial news about tech stocks, analyze market sentiment, and provide actionable insights. ## Usage Examples ### Basic usage - Get news for a specific topic ```bash ./financial_news_tracker.py "S&P 500" ``` ### Get cryptocurrency news from the past week ```bash ./financial_news_tracker.py "cryptocurrency" --time-range 1w ``` ### Track specific company news ```bash ./financial_news_tracker.py "AAPL Apple stock" ``` ### Get news about market sectors ```bash ./financial_news_tracker.py "energy sector oil prices" ``` ### Output as JSON for programmatic use ```bash ./financial_news_tracker.py "inflation rates" --json ``` ### Use a different model ```bash ./financial_news_tracker.py "Federal Reserve interest rates" --model sonar ``` ### Enable structured output (requires Tier 3+ API access) ```bash ./financial_news_tracker.py "tech earnings" --structured-output ``` ## Time Range Options * `24h` - Last 24 hours (default) * `1w` - Last week * `1m` - Last month * `3m` - Last 3 months * `1y` - Last year ## Output Format The tool provides comprehensive financial analysis including: ### 1. Executive Summary A brief overview of the key financial developments ### 2. Market Analysis * **Market Sentiment**: Overall market mood (🐂 Bullish, 🐻 Bearish, ⚖️ Neutral) * **Key Drivers**: Factors influencing the market * **Risks**: Current market risks and concerns * **Opportunities**: Potential investment opportunities ### 3. News Items Each news item includes: * **Headline**: The main news title * **Impact**: Market impact level (🔴 High, 🟡 Medium, 🟢 Low) * **Summary**: Brief description of the news * **Affected Sectors**: Industries or companies impacted * **Source**: News source attribution ### 4. Investment Insights Actionable recommendations and analysis based on the news ## Example Output ``` 📊 FINANCIAL NEWS REPORT: tech stocks 📅 Period: Last 24 hours 📝 EXECUTIVE SUMMARY: Tech stocks showed mixed performance today as AI-related companies surged while semiconductor stocks faced pressure from supply chain concerns... 📈 MARKET ANALYSIS: Sentiment: 🐂 BULLISH Key Drivers: • Strong Q4 earnings from major tech companies • AI sector momentum continues • Federal Reserve signals potential rate cuts ⚠️ Risks: • Semiconductor supply chain disruptions • Regulatory scrutiny on big tech • Valuation concerns in AI sector 💡 Opportunities: • Cloud computing growth • AI infrastructure plays • Cybersecurity demand surge 📰 KEY NEWS ITEMS: 1. Microsoft Hits All-Time High on AI Growth Impact: 🔴 HIGH Summary: Microsoft stock reached record levels following strong Azure AI revenue... Sectors: Cloud Computing, AI, Software Source: Bloomberg 💼 INSIGHTS & RECOMMENDATIONS: • Consider diversifying within tech sector • AI infrastructure companies show strong momentum • Monitor semiconductor sector for buying opportunities ``` ## Advanced Features ### Custom Queries You can combine multiple topics for comprehensive analysis: ```bash # Get news about multiple related topics ./financial_news_tracker.py "NVIDIA AMD semiconductor AI chips" # Track geopolitical impacts on markets ./financial_news_tracker.py "oil prices Middle East geopolitics" # Monitor economic indicators ./financial_news_tracker.py "inflation CPI unemployment Federal Reserve" ``` ### JSON Output For integration with other tools or scripts: ```bash ./financial_news_tracker.py "bitcoin" --json | jq '.market_analysis.market_sentiment' ``` ## Tips for Best Results 1. **Be Specific**: Include company tickers, sector names, or specific events 2. **Combine Topics**: Mix company names with relevant themes (e.g., "TSLA electric vehicles") 3. **Use Time Ranges**: Match the time range to your investment horizon 4. **Regular Monitoring**: Set up cron jobs for daily market updates ## Limitations * Results depend on available public information * Not financial advice - always do your own research * Historical data may be limited for very recent events * Structured output requires Tier 3+ Perplexity API access ## Error Handling The tool includes comprehensive error handling for: * Invalid API keys * Network connectivity issues * API rate limits * Invalid queries * Parsing errors ## Integration Examples ### Daily Market Report Create a script for daily updates: ```bash #!/bin/bash # daily_market_report.sh echo "=== Daily Market Report ===" > market_report.txt echo "Date: $(date)" >> market_report.txt echo "" >> market_report.txt ./financial_news_tracker.py "S&P 500 market overview" >> market_report.txt ./financial_news_tracker.py "top gaining stocks" >> market_report.txt ./financial_news_tracker.py "cryptocurrency bitcoin ethereum" >> market_report.txt ``` ### Python Integration ```python import subprocess import json def get_financial_news(query, time_range="24h"): result = subprocess.run( ["./financial_news_tracker.py", query, "--time-range", time_range, "--json"], capture_output=True, text=True ) if result.returncode == 0: return json.loads(result.stdout) else: raise Exception(f"Error fetching news: {result.stderr}") # Example usage news = get_financial_news("tech stocks", "1w") print(f"Market sentiment: {news['market_analysis']['market_sentiment']}") ``` # Academic Research Finder CLI Source: https://docs.perplexity.ai/cookbook/examples/research-finder/README A command-line tool that uses Perplexity's Sonar API to find and summarize academic literature # Academic Research Finder CLI A command-line tool that uses Perplexity's Sonar API to find and summarize academic literature (research papers, articles, etc.) related to a given question or topic. ## Features * Takes a natural language question or topic as input, ideally suited for academic inquiry. * Leverages Perplexity Sonar API, guided by a specialized prompt to prioritize scholarly sources (e.g., journals, conference proceedings, academic databases). * Outputs a concise summary based on the findings from academic literature. * Lists the primary academic sources used, aiming to include details like authors, year, title, publication, and DOI/link when possible. * Supports different Perplexity models (defaults to `sonar-pro`). * Allows results to be output in JSON format. ## Installation ### 1. Install required dependencies Ensure you are using the Python environment you intend to run the script with (e.g., `python3.10` if that's your target). ```bash # Install from requirements file (recommended) pip install -r requirements.txt # Or install manually pip install requests ``` ### 2. Make the script executable (Optional) ```bash chmod +x research_finder.py ``` Alternatively, you can run the script using `python3 research_finder.py ...`. ## API Key Setup The tool requires a Perplexity API key (`PPLX_API_KEY`) to function. You can provide it in one of these ways (checked in this order): 1. **As a command-line argument:** ```bash python3 research_finder.py "Your query" --api-key YOUR_API_KEY ``` 2. **As an environment variable:** ```bash export PPLX_API_KEY=YOUR_API_KEY python3 research_finder.py "Your query" ``` 3. **In a file:** Create a file named `pplx_api_key`, `.pplx_api_key`, `PPLX_API_KEY`, or `.PPLX_API_KEY` in the *same directory as the script* or in the *current working directory* containing just your API key. ```bash echo "YOUR_API_KEY" > .pplx_api_key chmod 600 .pplx_api_key # Optional: restrict permissions python3 research_finder.py "Your query" ``` ## Usage Run the script from the `sonar-use-cases/research_finder` directory or provide the full path. ```bash # Basic usage python3 research_finder.py "What are the latest advancements in quantum computing?" # Using a specific model python3 research_finder.py "Explain the concept of Large Language Models" --model sonar-small-online # Getting output as JSON python3 research_finder.py "Summarize the plot of Dune Part Two" --json # Using a custom system prompt file python3 research_finder.py "Benefits of renewable energy" --prompt-file /path/to/your/custom_prompt.md # Using an API key via argument python3 research_finder.py "Who won the last FIFA World Cup?" --api-key sk-... # Using the executable (if chmod +x was used) ./research_finder.py "Latest news about Mars exploration" ``` ### Arguments * `query`: (Required) The research question or topic (enclose in quotes if it contains spaces). * `-m`, `--model`: Specify the Perplexity model (default: `sonar-pro`). * `-k`, `--api-key`: Provide the API key directly. * `-p`, `--prompt-file`: Path to a custom system prompt file. * `-j`, `--json`: Output the results in JSON format. ## Example Output (Human-Readable - *Note: Actual output depends heavily on the query and API results*) ``` Initializing research assistant for query: "Recent studies on transformer models in NLP"... Researching in progress... ✅ Research Complete! 📝 SUMMARY: Recent studies on transformer models in Natural Language Processing (NLP) continue to explore architectural improvements, efficiency optimizations, and new applications. Key areas include modifications to the attention mechanism (e.g., sparse attention, linear attention) to handle longer sequences more efficiently, techniques for model compression and knowledge distillation, and applications beyond text, such as in computer vision and multimodal tasks. Research also focuses on understanding the internal workings and limitations of large transformer models. 🔗 SOURCES: 1. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30. (arXiv:1706.03762) 2. Tay, Y., Dehghani, M., Bahri, D., & Metzler, D. (2020). Efficient transformers: A survey. arXiv preprint arXiv:2009.06732. 3. Beltagy, I., Peters, M. E., & Cohan, A. (2020). Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. 4. Rogers, A., Kovaleva, O., & Rumshisky, A. (2020). A primer in bertology: What we know about how bert works. Transactions of the Association for Computational Linguistics, 8, 842-866. (arXiv:2002.12327) ``` ## Limitations * The ability of the Sonar API to consistently prioritize and access specific academic databases or extract detailed citation information (like DOIs) may vary. The quality depends on the API's search capabilities and the structure of the source websites. * The script performs basic parsing to separate summary and sources; complex or unusual API responses might not be parsed perfectly. Check the raw response in case of issues. * Queries that are too broad or not well-suited for academic search might yield less relevant results. * Error handling for API rate limits or specific API errors could be more granular. # Perplexity Sonar API Cookbook Source: https://docs.perplexity.ai/cookbook/index A collection of practical examples and guides for building with Perplexity's Sonar API A collection of practical examples and guides for building with [**Perplexity's Sonar API**](https://sonar.perplexity.ai/) - the fastest, most cost-effective AI answer engine with real-time search capabilities. ## Quick Start To get started with any project in this cookbook: 1. **Browse examples** - Find the use case that matches your needs 2. **Follow the guide** - Each example includes complete setup instructions 3. **Get the code** - Full implementations are available in our [GitHub repository](https://github.com/ppl-ai/api-cookbook) 4. **Build and customize** - Use the examples as starting points for your projects ## What's Inside ### [Examples](/cookbook/examples/README) Ready-to-run projects that demonstrate specific use cases and implementation patterns. ### [Showcase](/cookbook/showcase/briefo/) Community-built applications that demonstrate real-world implementations of the Sonar API. ### [Integration Guides](/cookbook/articles/memory-management/chat-summary-memory-buffer/README) In-depth tutorials for advanced implementations and integrations with other tools. > **Note**: All complete code examples, scripts, and project files can be found in our [GitHub repository](https://github.com/ppl-ai/api-cookbook). The documentation here provides guides and explanations, while the repository contains the full runnable implementations. ## Contributing Have a project built with Sonar API? We'd love to feature it! Check our [Contributing Guidelines](https://github.com/ppl-ai/api-cookbook/blob/main/CONTRIBUTING.md) to learn how to: * Submit example tutorials * Add your project to the showcase * Improve existing content ## Resources * [Sonar API Documentation](https://docs.perplexity.ai/home) * [API Playground](https://perplexity.ai/account/api/playground) * [GitHub Repository](https://github.com/ppl-ai/api-cookbook) *** *Maintained by the Perplexity community* # 4Point Hoops | AI Basketball Analytics Platform Source: https://docs.perplexity.ai/cookbook/showcase/4point-Hoops Advanced NBA analytics platform that combines live Basketball-Reference data with Perplexity Sonar to deliver deep-dive player stats, cross-season comparisons and expert-grade AI explanations ![4Point Hoops Dashboard](https://d112y698adiu2z.cloudfront.net/photos/production/software_photos/003/442/047/datas/original.png) **4Point Hoops** is an advanced NBA analytics platform that turns raw basketball statistics into actionable, narrative-driven insights. By scraping Basketball-Reference in real time and routing context-rich prompts to Perplexity's Sonar Pro model, it helps fans, analysts, and fantasy players understand the "why" and "what's next" – not just the numbers.