Discussions

ANSWERED

Domain Filter

Hi Aarash. Any news on when the domain filter will be fixed for Sonar online? BTW, if you'd like to see what we are doing with the charities we support, I'd love to email you a link and have a call to discuss. We'd love a public association with you guys, and it might be good for P too. Best, /Mike
ANSWERED

Inquiry Regarding Perplexity API Endpoint Configuration

Hello, I am currently working on a development project that incorporates Perplexity. However, I have encountered an issue with setting up the endpoint. Despite extensive searching, I have been unable to find accurate information, so I am asking for help here. Below are the trial URLs and their results. Your guidance would be greatly appreciated.

- <https://api.openai.com/v1/chat/completions>: 401 error
- <https://api.perplexity.ai>: 404 error
- <https://api.perplexity.ai/search>: 404 error
- <https://api.perplexity.ai/v1/search>: 404 error
- <https://api.perplexity.ai/v1/endpoint>: 404 error
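For reference, here is a minimal sketch of the request I am making (Python with the `requests` library). The base URL and path are exactly the part I am unsure about, and the model name is just the one I have been testing with:

```python
import requests

API_KEY = "pplx-..."  # my Perplexity API key
BASE_URL = "https://api.perplexity.ai"  # one of the base URLs I have tried

# The /chat/completions path is a guess based on OpenAI-style APIs; confirming
# the correct path is exactly what I am asking about.
response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "llama-3-sonar-small-32k-online",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
print(response.status_code, response.text)
```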
ANSWERED

Please update the roadmap

Which of the following features from the June roadmap have been implemented?

- Enforce JSON grammar in responses
- N>1 sampling for multiple responses
- Model Info endpoint for listing metadata of available models
- Status page for API uptime and availability
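To make sure we are talking about the same things, here is roughly what I would expect the first two items to look like on a request. These are OpenAI-style parameters and purely hypothetical; I do not know what syntax, if any, the Perplexity API actually uses for them:

```python
# Hypothetical request body, only to illustrate the roadmap items listed above.
payload = {
    "model": "llama-3-sonar-large-32k-online",
    "messages": [{"role": "user", "content": "List three facts as JSON."}],
    "response_format": {"type": "json_object"},  # hypothetical: "enforce JSON grammar in responses"
    "n": 3,                                      # hypothetical: "N>1 sampling for multiple responses"
}
```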
ANSWERED

Can any of these models be used with make.com

Greetings, I want real-time information and thus one of the pplx online models. They aren't offered via Make.com. Is there a known workaround, or are there any plans to integrate with Make?
ANSWERED

Will I be charged if I make >1000 requests with the online model?

I just topped up my account, and the pricing page says the online models have a flat charge of $5 per 1000 requests. If I make fewer than 1000 requests (I'm experimenting for a side project), will I be charged?
ANSWERED

Different results in API and Web UI.

Thanks, developers, for this beautiful AI tool. It works wonders in the web interface. I researched many AI tools, but Perplexity is clearly ahead. Afterwards, I decided to use it with the API, but there is a big difference between API and web UI results: the web UI gives much better results. Is there any way to solve this? I'm adding an example below.

**Question:** **Who are the top 5 glass producers in Europe and what is their production capacity?**

_**Result from Web UI with "SONAR LARGE 32K"**_ (it gives very good results):

```
The top 5 glass producers in Europe are:

AGC Inc.: With a production capacity of over 10 million square meters of coated glass per year, AGC Inc. is one of the leading glass producers in Europe.

Saint-Gobain: Saint-Gobain has a production capacity of up to 250,000 tonnes of glass annually, equating to up to 10 miles of glass produced each day.

Vitro: Vitro has a production capacity of over 6 million square meters of coated glass per year and is one of the leading glass manufacturers in North America.

Şişecam: Şişecam has a production capacity that places it 5th in the world and 1st in Europe in terms of flat glass production, with an annual production capacity of over 2.5 million tons.

Guardian Glass: Guardian Glass is a leading glass manufacturer in Europe, with a significant production capacity, although the exact figure is not publicly disclosed.

These companies are leaders in the European glass market, with significant production capacities and a strong presence in the industry.
```

_**Result from API "llama-3-sonar-large-32k-online"**_ (it does not provide capacity information for manufacturers other than Şişecam):

```json
{
  "id": "efb09074-b1fc-4f43-9e17-a6ab8144d68d",
  "model": "llama-3-sonar-small-32k-online",
  "created": 1719606123,
  "usage": {
    "prompt_tokens": 66,
    "completion_tokens": 285,
    "total_tokens": 351
  },
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "The top 5 glass producers in Europe are:\n\n1. **Saint-Gobain Glass**: Known for its extensive operations in over 68 countries, employing around 171,000 people, and having a revenue of $49.3 billion. Saint-Gobain is a leading global glass manufacturer with significant production capacity.\n\n2. **AGC Glass Europe**: As the European branch of AGC (Japan), it operates over 100 sites across Europe, employing around 16,500 people. AGC Glass Europe is a major flat glass manufacturer for the construction, automotive, and solar industries.\n\n3. **Şişecam**: After acquiring Sangalli, Şişecam became the largest flat-glass manufacturer in Europe with an annual production capacity of 220,000 tons.\n\n4. **Vetropack Group**: This Swiss-based company is known for its innovative manufacturing processes and produces lightweight and durable glass bottles. They have clients in over 7 countries and employ around 3,000 people.\n\n5. **Verallia**: Headquartered in France, Verallia has been producing durable glass bottles for wines, beer, liquors, and juices since 1827. They cater to a wide range of industries, including food and beverages.\n\nThese companies are prominent players in the European glass manufacturing industry, contributing significantly to the continent's overall glass production capacity."
      },
      "delta": {
        "role": "assistant",
        "content": ""
      }
    }
  ]
}
```
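In case the difference comes from how I am calling the API, here is roughly my client code (a sketch using the OpenAI-compatible Python client; I am not setting a system prompt or any optional parameters, and the model name is the one I intend to use):

```python
from openai import OpenAI

# Perplexity's API is OpenAI-compatible, so I call it through the OpenAI client.
client = OpenAI(api_key="pplx-...", base_url="https://api.perplexity.ai")

response = client.chat.completions.create(
    model="llama-3-sonar-large-32k-online",
    messages=[
        {
            "role": "user",
            "content": "Who are the top 5 glass producers in Europe "
            "and what is their production capacity?",
        }
    ],
)
print(response.choices[0].message.content)
```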
ANSWERED

looking for DBRX

Is it no longer available?
ANSWERED

llama-3-sonar-large-32k-online is a bit slow

Hi, I am working with llama-3-sonar-large-32k-online and it is a bit slower than I expected. Do you have any suggestions to improve the speed, or any paper or article to refer to? I am one of the founders of 433.ai.
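For context, this is roughly how I am measuring the latency (a sketch using the OpenAI-compatible client; the prompt is just an example, and the numbers obviously vary from run to run):

```python
import time

from openai import OpenAI

# Perplexity's API is OpenAI-compatible, so I call it through the OpenAI client.
client = OpenAI(api_key="pplx-...", base_url="https://api.perplexity.ai")

start = time.perf_counter()
first_token_at = None

# Stream the response so time-to-first-token and total time can be measured separately.
stream = client.chat.completions.create(
    model="llama-3-sonar-large-32k-online",
    messages=[{"role": "user", "content": "Summarise today's top technology news."}],
    stream=True,
)
for chunk in stream:
    if first_token_at is None and chunk.choices and chunk.choices[0].delta.content:
        first_token_at = time.perf_counter()

end = time.perf_counter()
print(f"time to first token: {(first_token_at or end) - start:.2f}s, total: {end - start:.2f}s")
```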
ANSWERED

Poor Identification of Important Words Leads to No Relevant Chunks Gathered

Below are two similar prompts which cause different input chunks to be delivered to llama-3-sonar-large-32k-online with a domain filter of zsl.org. The second prompt is the same as the first but with an additional "safety suffix":

1. Who is Oliver cupit
2. Who is Oliver cupit. If the information is not in your context say: 'I don't know, it's not in my context'

The first prompt returns 3 citations from zsl.org, indicating the LLM has been fed chunks from the zsl.org filter domain. The second prompt returns no citations from the zsl.org domain (although it does return several from outside it, which you are going to fix per my earlier question, so soon this prompt will return no citations at all). Given that the first step of your process is (presumably) something like a semantic site search of zsl.org, it seems that the "safety suffix" of my second prompt blurs the key question about Oliver, and therefore no site pages are identified and no zsl chunks are passed to the LLM. **Question**: can P better identify the important words in the user's question (and eliminate the residue) prior to the chunk-gathering stage? (With other models we would ordinarily put the "safety suffix" in the system prompt, but P has said sonar does not attend to it.)
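For concreteness, here is a minimal sketch of the two calls I am comparing. The model, domain filter, and prompts are exactly as described above; the parameter name `search_domain_filter` is simply what I have been using, and everything else is incidental:

```python
import requests

API_KEY = "pplx-..."  # Perplexity API key
URL = "https://api.perplexity.ai/chat/completions"

def ask(question: str) -> dict:
    """Send one question with the zsl.org domain filter applied."""
    response = requests.post(
        URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "llama-3-sonar-large-32k-online",
            "messages": [{"role": "user", "content": question}],
            "search_domain_filter": ["zsl.org"],
        },
        timeout=60,
    )
    return response.json()

# Prompt 1: the bare question -- this one comes back with zsl.org citations.
plain = ask("Who is Oliver cupit")

# Prompt 2: the same question plus the "safety suffix" -- no zsl.org citations.
suffixed = ask(
    "Who is Oliver cupit. If the information is not in your context say: "
    "'I don't know, it's not in my context'"
)
```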
ANSWERED

API responses are very different from the app's.

I tested a bunch of prompts in the app and the API. For example: "write python code to connect to the WIX CMS API to list collections. Use the latest version of the rest API." The app provides the latest API endpoints with working code, which is the expected output. The API, using llama-3-70b-instruct, provides old endpoint URLs and says there is a data cut-off point. Does the API not access the internet the same way the app does?
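For reference, this is roughly the side-by-side test I am running (a sketch; the prompt is exactly the one above, the online sonar model is included only for comparison, and I leave all optional parameters at their defaults):

```python
import requests

API_KEY = "pplx-..."  # Perplexity API key
PROMPT = (
    "write python code to connect to the WIX CMS API to list collections. "
    "Use the latest version of the rest API."
)

# Send the same prompt to the offline instruct model and an online sonar model.
for model in ["llama-3-70b-instruct", "llama-3-sonar-large-32k-online"]:
    response = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "messages": [{"role": "user", "content": PROMPT}]},
        timeout=120,
    )
    answer = response.json()["choices"][0]["message"]["content"]
    print(f"--- {model} ---\n{answer[:300]}\n")
```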