POST /chat/completions

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
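
A minimal sketch of attaching this header in Python with the requests library; the base URL and token below are placeholders, since neither appears on this page:

    import requests

    url = "https://api.perplexity.ai/chat/completions"  # assumed base URL
    headers = {
        "Authorization": "Bearer YOUR_API_TOKEN",  # replace with your auth token
        "Content-Type": "application/json",
    }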

Body

application/json
model
string
required

The name of the model that will complete your prompt. Refer to Supported Models to find all the models offered.

messages
object[]
required

A list of messages comprising the conversation so far.
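
Each element pairs a role with its content. This page does not expand the message schema, so the role and content keys below follow the common chat-completion convention and should be treated as an assumption:

    payload = {
        "model": "MODEL_NAME",  # placeholder; see Supported Models
        "messages": [
            {"role": "system", "content": "Be precise and concise."},
            {"role": "user", "content": "How many moons does Jupiter have?"},
        ],
    }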

max_tokens
integer

The maximum number of completion tokens returned by the API. The total number of tokens requested in max_tokens plus the number of prompt tokens sent in messages must not exceed the context window token limit of the requested model. If left unspecified, the model will generate tokens until it either reaches its stop token or the end of its context window.
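
In other words, prompt_tokens + max_tokens must fit within the model's context window. A quick sanity check with hypothetical numbers:

    context_window = 128_000  # hypothetical context limit for the chosen model
    prompt_tokens = 1_200     # hypothetical token count of the messages sent
    max_tokens = 4_000        # requested completion budget

    # The request is only valid if the combined total fits the context window.
    assert prompt_tokens + max_tokens <= context_window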

temperature
number
default: 0.2

The amount of randomness in the response, valued between 0 inclusive and 2 exclusive. Higher values are more random, and lower values are more deterministic.

Required range: 0 <= x < 2

top_p
number
default: 0.9

The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers only the tokens comprising the top_p probability mass. We recommend altering either top_k or top_p, but not both.

Required range: 0 <= x <= 1
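
Extending the payload sketch above, a more deterministic request could lower temperature while leaving the nucleus threshold at its default; the values here are illustrative:

    payload["temperature"] = 0.1  # less random; must satisfy 0 <= x < 2
    payload["top_p"] = 0.9        # default nucleus threshold, left unchanged
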
search_domain_filter
any[]

Given a list of domains, limit the citations used by the online model to URLs from the specified domains. Currently limited to 3 domains for whitelisting and blacklisting. To blacklist a domain, add a - to the beginning of the domain string. This filter is in closed beta.
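
For example, to whitelist two domains and blacklist a third (the domain names here are illustrative):

    payload["search_domain_filter"] = [
        "wikipedia.org",   # citations may come from this domain
        "nasa.gov",        # ...and this one
        "-pinterest.com",  # leading "-" blacklists this domain
    ]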

return_images
boolean
default: false

Determines whether or not a request to an online model should return images. Images are in closed beta.

return_related_questions
boolean
default: false

Determines whether or not a request to an online model should return related questions. Related questions are in closed beta.
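
Both closed-beta flags default to false and slot into the same request body; a sketch:

    payload["return_images"] = True             # closed beta
    payload["return_related_questions"] = True  # closed beta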

search_recency_filter
string

Returns search results within the specified time interval; does not apply to images. Accepted values: month, week, day, hour.
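
For example, to restrict search results to the past week:

    payload["search_recency_filter"] = "week"  # one of: month, week, day, hour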

top_k
integer
default: 0

The number of tokens to keep for top-k filtering, specified as an integer between 0 and 2048 inclusive. If set to 0, top-k filtering is disabled. We recommend altering either top_k or top_p, but not both.

Required range: 0 <= x <= 2048

stream
boolean
default: false

Determines whether or not to incrementally stream the response using server-sent events with content-type: text/event-stream.
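
A sketch of consuming the stream with the same requests setup as above; the per-chunk schema and the [DONE] terminator are assumptions borrowed from common SSE chat APIs, as this page does not specify them:

    import json

    payload["stream"] = True
    with requests.post(url, headers=headers, json=payload, stream=True) as resp:
        for line in resp.iter_lines():
            if not line or not line.startswith(b"data: "):
                continue  # skip SSE keep-alives and non-data lines
            data = line[len(b"data: "):]
            if data == b"[DONE]":  # assumed end-of-stream marker
                break
            chunk = json.loads(data)  # one incremental completion chunk
            print(chunk)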

presence_penalty
number
default: 0

A value between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.

Required range: -2 <= x <= 2

frequency_penalty
number
default: 1

A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty. Incompatible with presence_penalty.

Required range: x > 0
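
Since the two penalties are incompatible, set at most one of them. For example, to discourage verbatim repetition:

    payload["frequency_penalty"] = 1.5  # > 1.0 penalizes frequent tokens; 1.0 is no penalty
    # payload["presence_penalty"] = 0.5  # alternative: use this instead, never both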

Response

200 - application/json
id
string

An ID generated uniquely for each response.

model
string

The model used to generate the response.

object
string

The object type, which always equals chat.completion.

created
integer

The Unix timestamp (in seconds) of when the completion was created.

citations
any[]

Citations for the generated answer.

choices
object[]

The list of completion choices the model generated for the input prompt.

usage
object

Usage statistics for the completion request.
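
Putting the pieces together, a non-streaming call might read these fields as follows; the shape of each element of choices is not expanded on this page, so only the documented top-level fields are accessed:

    resp = requests.post(url, headers=headers, json=payload)
    resp.raise_for_status()
    body = resp.json()

    print(body["id"], body["model"], body["created"])  # response metadata
    print(body.get("citations", []))  # citations for the generated answer
    print(body["choices"][0])         # first completion choice
    print(body["usage"])              # usage statistics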