v2025.6.2
All Bundles

Response

Model response for a given text and files

Operations

GetCreatedAt

Unix timestamp (in seconds) of when the object instance was created

method : public : GetCreatedAt() ~ Int

Return

Type | Description
Int | time when the object instance was created

GetId

Get object instance API ID

method : public : GetId() ~ String

Return

Type | Description
String | instance ID

GetModel

ID of the model to use

method : public : GetModel() ~ String

Return

Type | Description
String | ID of the model to use

GetObject

Get the object type

method : public : GetObject() ~ String

Return

Type | Description
String | object type

GetOutputs

Get response outputs

method : public : GetOutputs() ~ Vector<Output>

Return

Type | Description
Vector<Output> | response outputs

GetText

Get response text

method : public : GetText() ~ String

Return

Type | Description
String | response text

Respond

Model response for the given query

function : Respond(model:String, message:Pair<String,String>, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
message | Pair<String,String> | completion message
token | String | API token

Return

Type | Description
Response | completion response
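
A minimal usage sketch for this overload. The model name and token value are placeholders, and the `use` paths and `Pair` construction follow common Objeck conventions; adjust to the actual bundle layout:

```objeck
use Collection;
use API.OpenAI;

class Example {
  function : Main(args : String[]) ~ Nil {
    token := "sk-...";  # placeholder API token
    message := Pair->New("user", "Hello!")<String, String>;

    response := Response->Respond("gpt-4.1", message, token);
    if(response <> Nil) {
      response->GetText()->PrintLine();
    };
  }
}
```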

Respond

Model response for the given query

function : Respond(model:String, message:Pair<String,String>, schema:ParameterType, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
message | Pair<String,String> | completion message
schema | ParameterType | output schema
token | String | API token

Return

Type | Description
Response | completion response

Respond

Model response for the given query

function : Respond(model:String, message:Pair<String,String>, max_tokens:Int, temperature:Float, top_p:Float, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
message | Pair<String,String> | completion message
max_tokens | Int | maximum number of completion tokens returned by the API
temperature | Float | amount of randomness in the response, valued between 0 inclusive and 2 exclusive
top_p | Float | nucleus sampling threshold, valued between 0 and 1 inclusive
token | String | API token

Return

Type | Description
Response | completion response
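
A sketch of calling this overload with sampling controls; the model name, token, and parameter values are illustrative only:

```objeck
use Collection;
use API.OpenAI;

class SamplingExample {
  function : Main(args : String[]) ~ Nil {
    token := "sk-...";  # placeholder API token
    message := Pair->New("user", "Write a haiku about autumn")<String, String>;

    # cap output at 64 tokens, low randomness, full nucleus mass
    response := Response->Respond("gpt-4.1", message, 64, 0.2, 1.0, token);
    if(response <> Nil) {
      response->GetText()->PrintLine();
    };
  }
}
```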

Respond

Model response for the given query

function : Respond(model:String, message:Pair<String,String>, max_tokens:Int, temperature:Float, top_p:Float, schema:ParameterType, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
message | Pair<String,String> | completion message
max_tokens | Int | maximum number of completion tokens returned by the API
temperature | Float | amount of randomness in the response, valued between 0 inclusive and 2 exclusive
top_p | Float | nucleus sampling threshold, valued between 0 and 1 inclusive
schema | ParameterType | output schema
token | String | API token

Return

Type | Description
Response | completion response

Respond

Model response for the given query

function : Respond(model:String, messages:Vector<Pair<String,String>>) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
messages | Vector<Pair<String,String>> | list of messages comprising the conversation

Return

Type | Description
Response | completion response

Respond

Model response for the given file and query

function : Respond(model:String, message:API.OpenAI.FileQuery, schema:ParameterType, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
message | API.OpenAI.FileQuery | completion message and file query
schema | ParameterType | output schema
token | String | API token

Return

Type | Description
Response | completion response

Respond

Model response for the given file and query

function : Respond(model:String, message:API.OpenAI.FileQuery, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
message | API.OpenAI.FileQuery | completion message and file query
token | String | API token

Return

Type | Description
Response | completion response
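
The file-query overloads can be sketched as below. The FileQuery constructor shown (prompt text plus file path) is an assumption, not a confirmed signature; consult the API.OpenAI.FileQuery documentation for its actual shape:

```objeck
use API.OpenAI;

class FileExample {
  function : Main(args : String[]) ~ Nil {
    token := "sk-...";  # placeholder API token
    # hypothetical constructor: prompt text plus path to the file to attach
    query := FileQuery->New("Summarize this document", "report.pdf");

    response := Response->Respond("gpt-4.1", query, token);
    if(response <> Nil) {
      response->GetText()->PrintLine();
    };
  }
}
```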

Respond

Model response for the given file and query

function : Respond(model:String, message:API.OpenAI.FileQuery, max_tokens:Int, temperature:Float, top_p:Float, schema:ParameterType, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
message | API.OpenAI.FileQuery | completion message and file query
max_tokens | Int | maximum number of completion tokens returned by the API
temperature | Float | amount of randomness in the response, valued between 0 inclusive and 2 exclusive
top_p | Float | nucleus sampling threshold, valued between 0 and 1 inclusive
schema | ParameterType | output schema
token | String | API token

Return

Type | Description
Response | completion response

Respond

Model response for the given file and query

function : Respond(model:String, message:API.OpenAI.FileQuery, max_tokens:Int, temperature:Float, top_p:Float, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
message | API.OpenAI.FileQuery | completion message and file query
max_tokens | Int | maximum number of completion tokens returned by the API
temperature | Float | amount of randomness in the response, valued between 0 inclusive and 2 exclusive
top_p | Float | nucleus sampling threshold, valued between 0 and 1 inclusive
token | String | API token

Return

Type | Description
Response | completion response

Respond

Model response for the given file and query

function : Respond(model:String, messages:Vector<FileQuery>, schema:ParameterType, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
messages | Vector<FileQuery> | list of messages comprising the conversation
schema | ParameterType | output schema
token | String | API token

Return

Type | Description
Response | completion response

Respond

Model response for the given file and query

function : Respond(model:String, messages:Vector<FileQuery>, max_tokens:Int, temperature:Float, top_p:Float, schema:ParameterType, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
messages | Vector<FileQuery> | list of messages comprising the conversation
max_tokens | Int | maximum number of completion tokens returned by the API
temperature | Float | amount of randomness in the response, valued between 0 inclusive and 2 exclusive
top_p | Float | nucleus sampling threshold, valued between 0 and 1 inclusive
schema | ParameterType | output schema
token | String | API token

Return

Type | Description
Response | completion response

Respond

Model response for the given image and query

function : Respond(model:String, message:Pair<String,API.OpenAI.ImageQuery>, schema:ParameterType, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
message | Pair<String,API.OpenAI.ImageQuery> | completion message and image query
schema | ParameterType | output schema
token | String | API token

Return

Type | Description
Response | completion response

Respond

Model response for the given image and query

function : Respond(model:String, message:Pair<String,API.OpenAI.ImageQuery>, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
message | Pair<String,API.OpenAI.ImageQuery> | completion message and image query
token | String | API token

Return

Type | Description
Response | completion response
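
An image-query sketch under stated assumptions: the ImageQuery constructor (taking an image file path) is hypothetical, and the model and token values are placeholders:

```objeck
use Collection;
use API.OpenAI;

class ImageExample {
  function : Main(args : String[]) ~ Nil {
    token := "sk-...";  # placeholder API token
    # hypothetical constructor taking an image path
    image := ImageQuery->New("photo.png");
    message := Pair->New("What is in this picture?", image)<String, ImageQuery>;

    response := Response->Respond("gpt-4.1", message, token);
    if(response <> Nil) {
      response->GetText()->PrintLine();
    };
  }
}
```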

Respond

Model response for the given image and query

function : Respond(model:String, message:Pair<String,API.OpenAI.ImageQuery>, max_tokens:Int, temperature:Float, top_p:Float, schema:ParameterType, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
message | Pair<String,API.OpenAI.ImageQuery> | completion message and image query
max_tokens | Int | maximum number of completion tokens returned by the API
temperature | Float | amount of randomness in the response, valued between 0 inclusive and 2 exclusive
top_p | Float | nucleus sampling threshold, valued between 0 and 1 inclusive
schema | ParameterType | output schema
token | String | API token

Return

Type | Description
Response | completion response

Respond

Model response for the given image and query

function : Respond(model:String, message:Pair<String,API.OpenAI.ImageQuery>, max_tokens:Int, temperature:Float, top_p:Float, token:String) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
message | Pair<String,API.OpenAI.ImageQuery> | completion message and image query
max_tokens | Int | maximum number of completion tokens returned by the API
temperature | Float | amount of randomness in the response, valued between 0 inclusive and 2 exclusive
top_p | Float | nucleus sampling threshold, valued between 0 and 1 inclusive
token | String | API token

Return

Type | Description
Response | completion response

Respond

Model response for the given image and query

function : Respond(model:String, messages:Vector<Pair<String,API.OpenAI.ImageQuery>>) ~ Response

Parameters

Name | Type | Description
model | String | ID of the model to use
messages | Vector<Pair<String,API.OpenAI.ImageQuery>> | list of messages comprising the conversation

Return

Type | Description
Response | completion response

ToString

String representation of the object

method : public : ToString() ~ String

Return

Type | Description
String | string representation