openairesponses

package
v0.4.0
Published: Mar 21, 2026 License: Apache-2.0 Imports: 23 Imported by: 0

Documentation

Overview

Package openairesponses implements a client for the OpenAI Responses API.

It is described at https://platform.openai.com/docs/api-reference/responses/create

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func ProcessStream

func ProcessStream(chunks iter.Seq[ResponseStreamChunkResponse]) (iter.Seq[genai.Reply], func() (genai.Usage, [][]genai.Logprob, error))

ProcessStream converts the raw packets from the streaming API into Reply fragments.

func Scoreboard

func Scoreboard() scoreboard.Score

Scoreboard returns the scoreboard for OpenAI.

Types

type APIError

type APIError struct {
	Code    string `json:"code"` // "server_error"
	Message string `json:"message"`
}

APIError represents an API error in the response.

func (*APIError) Error

func (e *APIError) Error() string

type Annotation

type Annotation struct {
	// "file_citation", "url_citation", "container_file_citation", "file_path"
	Type string `json:"type,omitzero"`

	// Type == "file_citation", "container_file_citation", "file_path"
	FileID string `json:"file_id,omitzero"`

	// Type == "file_citation", "file_path"
	Index int64 `json:"index,omitzero"`

	// Type == "url_citation"
	URL   string `json:"url,omitzero"`
	Title string `json:"title,omitzero"`

	// Type == "url_citation", "container_file_citation"
	StartIndex int64 `json:"start_index,omitzero"`
	EndIndex   int64 `json:"end_index,omitzero"`
}

Annotation represents annotations in output text.

type Background

type Background string

Background is only supported on gpt-image-1.

const (
	BackgroundAuto        Background = "auto"
	BackgroundTransparent Background = "transparent"
	BackgroundOpaque      Background = "opaque"
)

Background mode values.

type Batch

type Batch struct {
	CancelledAt      base.Time `json:"cancelled_at"`
	CancellingAt     base.Time `json:"cancelling_at"`
	CompletedAt      base.Time `json:"completed_at"`
	CompletionWindow string    `json:"completion_window"` // "24h"
	CreatedAt        base.Time `json:"created_at"`
	Endpoint         string    `json:"endpoint"`      // Same as BatchRequest.Endpoint
	ErrorFileID      string    `json:"error_file_id"` // File ID containing the outputs of requests with errors.
	Errors           struct {
		Data []struct {
			Code    string `json:"code"`
			Line    int64  `json:"line"`
			Message string `json:"message"`
			Param   string `json:"param"`
		} `json:"data"`
	} `json:"errors"`
	ExpiredAt     base.Time         `json:"expired_at"`
	ExpiresAt     base.Time         `json:"expires_at"`
	FailedAt      base.Time         `json:"failed_at"`
	FinalizingAt  base.Time         `json:"finalizing_at"`
	ID            string            `json:"id"`
	InProgressAt  base.Time         `json:"in_progress_at"`
	InputFileID   string            `json:"input_file_id"` // Input data
	Metadata      map[string]string `json:"metadata"`
	Object        string            `json:"object"`         // "batch"
	OutputFileID  string            `json:"output_file_id"` // Output data
	RequestCounts struct {
		Completed int64 `json:"completed"`
		Failed    int64 `json:"failed"`
		Total     int64 `json:"total"`
	} `json:"request_counts"`
	Status string `json:"status"` // "completed", "in_progress", "validating", "finalizing"
}

Batch is documented at https://platform.openai.com/docs/api-reference/batch/object

type BatchRequest

type BatchRequest struct {
	CompletionWindow string            `json:"completion_window"` // Must be "24h"
	Endpoint         string            `json:"endpoint"`          // One of /v1/responses, /v1/chat/completions, /v1/embeddings, /v1/completions
	InputFileID      string            `json:"input_file_id"`     // File must be JSONL
	Metadata         map[string]string `json:"metadata,omitzero"` // Maximum 16 keys of 64 chars, values max 512 chars
}

BatchRequest is documented at https://platform.openai.com/docs/api-reference/batch/create

type BatchRequestInput

type BatchRequestInput struct {
	CustomID string   `json:"custom_id"`
	Method   string   `json:"method"` // "POST"
	URL      string   `json:"url"`    // "/v1/chat/completions", "/v1/embeddings", "/v1/completions", "/v1/responses"
	Body     Response `json:"body"`
}

BatchRequestInput is documented at https://platform.openai.com/docs/api-reference/batch/request-input
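The batch input file is JSONL: one BatchRequestInput object per line. A self-contained sketch of producing that format, using a pared-down mirror of the struct (the real Body is a Response):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// batchLine mirrors BatchRequestInput's JSON shape for illustration only.
type batchLine struct {
	CustomID string `json:"custom_id"`
	Method   string `json:"method"` // "POST"
	URL      string `json:"url"`    // e.g. "/v1/responses"
	Body     any    `json:"body"`
}

// toJSONL serializes requests as JSONL: one JSON object per line.
func toJSONL(lines []batchLine) (string, error) {
	var b strings.Builder
	for _, l := range lines {
		raw, err := json.Marshal(l)
		if err != nil {
			return "", err
		}
		b.Write(raw)
		b.WriteByte('\n')
	}
	return b.String(), nil
}

func main() {
	s, _ := toJSONL([]batchLine{{
		CustomID: "req-1",
		Method:   "POST",
		URL:      "/v1/responses",
		Body:     map[string]any{"model": "gpt-4o-mini"},
	}})
	fmt.Print(s)
}
```

The resulting file is what InputFileID refers to once uploaded through the files API.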

type BatchRequestOutput

type BatchRequestOutput struct {
	CustomID string   `json:"custom_id"`
	ID       string   `json:"id"`
	Error    APIError `json:"error"`
	Response struct {
		StatusCode int      `json:"status_code"`
		RequestID  string   `json:"request_id"` // To use when contacting support
		Body       Response `json:"body"`
	} `json:"response"`
}

BatchRequestOutput is documented at https://platform.openai.com/docs/api-reference/batch/request-output

type Client

type Client struct {
	base.NotImplemented
	// contains filtered or unexported fields
}

Client is a client for the OpenAI Responses API.

func New

func New(ctx context.Context, opts ...genai.ProviderOption) (*Client, error)

New creates a new client to talk to the OpenAI Responses API.

If ProviderOptionAPIKey is not provided, it tries to load the key from the OPENAI_API_KEY environment variable. If none is found, it still returns a client, coupled with a base.ErrAPIKeyRequired error. Get your API key at https://platform.openai.com/settings/organization/api-keys

To use multiple models, create multiple clients. Use one of the models listed at https://platform.openai.com/docs/models

Documents

OpenAI supports many types of documents, listed at https://platform.openai.com/docs/assistants/tools/file-search#supported-files

func (*Client) Capabilities added in v0.2.0

func (c *Client) Capabilities() genai.ProviderCapabilities

Capabilities implements genai.Provider.

func (*Client) GenAsync added in v0.2.0

func (c *Client) GenAsync(ctx context.Context, msgs genai.Messages, opts ...genai.GenOption) (genai.Job, error)

GenAsync implements genai.Provider.

It uses the OpenAI Responses API background mode to submit a request that is processed asynchronously. The returned Job is the response ID that can be polled with PokeResult.

https://platform.openai.com/docs/api-reference/responses/create
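Since the returned Job is just a response ID, the caller is responsible for the polling loop around PokeResult. A self-contained sketch of such a loop; the poke callback and status strings stand in for the real client call:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntilDone repeatedly calls poke until a terminal status is reported,
// sleeping between attempts. poke is a stand-in for a PokeResult-style call.
func pollUntilDone(poke func() (string, error), interval time.Duration, maxTries int) (string, error) {
	for i := 0; i < maxTries; i++ {
		status, err := poke()
		if err != nil {
			return "", err
		}
		switch status {
		case "completed", "failed", "incomplete":
			return status, nil
		}
		time.Sleep(interval)
	}
	return "", errors.New("gave up polling")
}

func main() {
	calls := 0
	status, err := pollUntilDone(func() (string, error) {
		calls++
		if calls < 3 {
			return "in_progress", nil
		}
		return "completed", nil
	}, time.Millisecond, 10)
	fmt.Println(status, err) // completed <nil>
}
```

A production loop would typically also honor context cancellation and use exponential backoff.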

func (*Client) GenStream

func (c *Client) GenStream(ctx context.Context, msgs genai.Messages, opts ...genai.GenOption) (iter.Seq[genai.Reply], func() (genai.Result, error))

GenStream implements genai.Provider.

func (*Client) GenStreamRaw

func (c *Client) GenStreamRaw(ctx context.Context, in *Response) (iter.Seq[ResponseStreamChunkResponse], func() error)

GenStreamRaw provides access to the raw API.

func (*Client) GenSync

func (c *Client) GenSync(ctx context.Context, msgs genai.Messages, opts ...genai.GenOption) (genai.Result, error)

GenSync implements genai.Provider.

func (*Client) GenSyncRaw

func (c *Client) GenSyncRaw(ctx context.Context, in, out *Response) error

GenSyncRaw provides access to the raw API.

func (*Client) HTTPClient

func (c *Client) HTTPClient() *http.Client

HTTPClient returns the HTTP client to fetch results (e.g. videos) generated by the provider.

func (*Client) ListModels

func (c *Client) ListModels(ctx context.Context) ([]genai.Model, error)

ListModels implements genai.Provider.

func (*Client) ModelID

func (c *Client) ModelID() string

ModelID implements genai.Provider.

It returns the selected model ID.

func (*Client) Name

func (c *Client) Name() string

Name implements genai.Provider.

It returns the name of the provider.

func (*Client) OutputModalities

func (c *Client) OutputModalities() genai.Modalities

OutputModalities implements genai.Provider.

It returns the output modalities, i.e. the kinds of output the model will generate (text, audio, image, video, etc.).

func (*Client) PokeResult added in v0.2.0

func (c *Client) PokeResult(ctx context.Context, job genai.Job) (genai.Result, error)

PokeResult implements genai.Provider.

It polls the status of a background response by its ID.

https://platform.openai.com/docs/api-reference/responses/get

func (*Client) PokeResultRaw added in v0.2.0

func (c *Client) PokeResultRaw(ctx context.Context, job genai.Job) (*Response, error)

PokeResultRaw provides raw access to poll a background response.

func (*Client) Scoreboard

func (c *Client) Scoreboard() scoreboard.Score

Scoreboard implements genai.Provider.

type Content

type Content struct {
	Type ContentType `json:"type,omitzero"`

	// Type == ContentInputText, ContentOutputText
	Text string `json:"text,omitzero"`

	// Type == ContentInputImage, ContentInputFile
	FileID string `json:"file_id,omitzero"`

	// Type == ContentInputImage
	ImageURL string `json:"image_url,omitzero"` // URL or base64
	Detail   string `json:"detail,omitzero"`    // "high", "low", "auto" (default)

	// Type == ContentInputFile
	FileData string `json:"file_data,omitzero"` // TODO: confirm if base64
	Filename string `json:"filename,omitzero"`

	// Type == ContentOutputText
	Annotations []Annotation `json:"annotations,omitzero"`
	Logprobs    []Logprobs   `json:"logprobs,omitzero"`

	// Type == ContentRefusal
	Refusal string `json:"refusal,omitzero"`
}

Content represents different types of input content.

func (*Content) FromReply

func (c *Content) FromReply(in *genai.Reply) error

FromReply converts from a genai reply.

func (*Content) FromRequest

func (c *Content) FromRequest(in *genai.Request) error

FromRequest converts from a genai request.

func (*Content) To

func (c *Content) To() ([]genai.Reply, error)

To converts to the genai equivalent.

type ContentType

type ContentType string

ContentType defines the data being transported. It only includes actual data (text, files), not tool calls or results.

const (
	// ContentInputText is an input text content type.
	ContentInputText  ContentType = "input_text"
	ContentInputImage ContentType = "input_image"
	ContentInputFile  ContentType = "input_file"

	// ContentOutputText is an output text content type.
	ContentOutputText ContentType = "output_text"
	ContentRefusal    ContentType = "refusal"
)

Content type values.

type ErrorResponse

type ErrorResponse struct {
	ErrorVal struct {
		Message string `json:"message"`
		Type    string `json:"type"`
		Code    string `json:"code"`
		Param   string `json:"param"`
	} `json:"error"`
}

ErrorResponse represents an error response from the OpenAI API.

func (*ErrorResponse) Error

func (e *ErrorResponse) Error() string

func (*ErrorResponse) IsAPIError

func (e *ErrorResponse) IsAPIError() bool

IsAPIError implements base.ErrorResponseI.

type File

type File struct {
	Bytes         int64     `json:"bytes"` // File size
	CreatedAt     base.Time `json:"created_at"`
	ExpiresAt     base.Time `json:"expires_at"`
	Filename      string    `json:"filename"`
	ID            string    `json:"id"`
	Object        string    `json:"object"`         // "file"
	Purpose       string    `json:"purpose"`        // One of: assistants, assistants_output, batch, batch_output, fine-tune, fine-tune-results and vision
	Status        string    `json:"status"`         // Deprecated
	StatusDetails string    `json:"status_details"` // Deprecated
}

File is documented at https://platform.openai.com/docs/api-reference/files/object

func (*File) GetDisplayName

func (f *File) GetDisplayName() string

GetDisplayName implements genai.CacheItem.

func (*File) GetExpiry

func (f *File) GetExpiry() time.Time

GetExpiry implements genai.CacheItem.

func (*File) GetID

func (f *File) GetID() string

GetID implements genai.Model.

type FileDeleteResponse

type FileDeleteResponse struct {
	ID      string `json:"id"`
	Object  string `json:"object"` // "file"
	Deleted bool   `json:"deleted"`
}

FileDeleteResponse is documented at https://platform.openai.com/docs/api-reference/files/delete

type FileListResponse

type FileListResponse struct {
	Data   []File `json:"data"`
	Object string `json:"object"` // "list"
}

FileListResponse is documented at https://platform.openai.com/docs/api-reference/files/list

type GenOptionImage added in v0.2.0

type GenOptionImage struct {
	// Background is only supported on gpt-image-1.
	Background Background
}

GenOptionImage defines OpenAI specific options.

func (*GenOptionImage) Validate added in v0.2.0

func (o *GenOptionImage) Validate() error

Validate implements genai.Validatable.

type GenOptionText added in v0.2.0

type GenOptionText struct {
	// ReasoningEffort is the amount of effort (number of tokens) the LLM can use to think about the answer.
	//
	// When unspecified, defaults to medium.
	ReasoningEffort ReasoningEffort
	// ServiceTier specify the priority.
	ServiceTier ServiceTier
	// Truncation controls automatic shortening of long conversations.
	Truncation Truncation
	// PreviousResponseID enables server-side conversation state, avoiding re-transmitting full history.
	PreviousResponseID string
}

GenOptionText defines OpenAI Responses specific options.

func (*GenOptionText) Validate added in v0.2.0

func (o *GenOptionText) Validate() error

Validate implements genai.Validatable.

type ImageChoiceData

type ImageChoiceData struct {
	B64JSON       []byte `json:"b64_json"`
	RevisedPrompt string `json:"revised_prompt"` // dall-e-3 only
	URL           string `json:"url"`            // Unsupported for gpt-image-1
}

ImageChoiceData is the data for one image generation choice.

type ImageRequest

type ImageRequest struct {
	Prompt            string     `json:"prompt"`
	Model             string     `json:"model,omitzero"`              // Default to dall-e-2, unless a gpt-image-1 specific parameter is used.
	Background        Background `json:"background,omitzero"`         // Default "auto"
	Moderation        string     `json:"moderation,omitzero"`         // gpt-image-1: "low" or "auto"
	N                 int64      `json:"n,omitzero"`                  // Number of images to return
	OutputCompression float64    `json:"output_compression,omitzero"` // Defaults to 100. Only supported on gpt-image-1 with webp or jpeg
	OutputFormat      string     `json:"output_format,omitzero"`      // "png", "jpeg" or "webp". Defaults to png. Only supported on gpt-image-1.
	Quality           string     `json:"quality,omitzero"`            // "auto", gpt-image-1: "high", "medium", "low". dall-e-3: "hd", "standard". dall-e-2: "standard".
	ResponseFormat    string     `json:"response_format,omitzero"`    // "url" or "b64_json"; url is valid for 60 minutes; gpt-image-1 only returns b64_json
	Size              string     `json:"size,omitzero"`               // "auto", gpt-image-1: "1024x1024", "1536x1024", "1024x1536". dall-e-3: "1024x1024", "1792x1024", "1024x1792". dall-e-2: "256x256", "512x512", "1024x1024".
	Style             string     `json:"style,omitzero"`              // dall-e-3: "vivid", "natural"
	User              string     `json:"user,omitzero"`               // End-user to help monitor and detect abuse
}

ImageRequest is documented at https://platform.openai.com/docs/api-reference/images
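The Size field comment above encodes per-model constraints. A sketch of checking them client-side before sending a request; the table is transcribed from the field comment and is not part of the package API (whether "auto" is accepted beyond gpt-image-1 is not stated here, so it is only listed for that model):

```go
package main

import (
	"fmt"
	"slices"
)

// validSizes transcribes the per-model size lists from the Size field comment.
var validSizes = map[string][]string{
	"gpt-image-1": {"auto", "1024x1024", "1536x1024", "1024x1536"},
	"dall-e-3":    {"1024x1024", "1792x1024", "1024x1792"},
	"dall-e-2":    {"256x256", "512x512", "1024x1024"},
}

// sizeOK reports whether size is documented as valid for model.
func sizeOK(model, size string) bool {
	return slices.Contains(validSizes[model], size)
}

func main() {
	fmt.Println(sizeOK("dall-e-3", "1792x1024"), sizeOK("dall-e-2", "1792x1024")) // true false
}
```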

func (*ImageRequest) Init

func (i *ImageRequest) Init(msg *genai.Message, model string, opts ...genai.GenOption) error

Init initializes the request from the given parameters.

type ImageResponse

type ImageResponse struct {
	Created base.Time         `json:"created"`
	Data    []ImageChoiceData `json:"data"`
	Usage   struct {
		InputTokens        int64 `json:"input_tokens"`
		OutputTokens       int64 `json:"output_tokens"`
		TotalTokens        int64 `json:"total_tokens"`
		InputTokensDetails struct {
			TextTokens  int64 `json:"text_tokens"`
			ImageTokens int64 `json:"image_tokens"`
		} `json:"input_tokens_details"`
	} `json:"usage"`
	Background   string `json:"background"`    // "opaque"
	Size         string `json:"size"`          // e.g. "1024x1024"
	Quality      string `json:"quality"`       // e.g. "medium"
	OutputFormat string `json:"output_format"` // e.g. "png"
}

ImageResponse is the provider-specific image generation response.

type IncompleteDetails

type IncompleteDetails struct {
	Reason string `json:"reason"`
}

IncompleteDetails represents details about why a response is incomplete.

type Logprobs

type Logprobs struct {
	Token       string  `json:"token,omitzero"`
	Bytes       []byte  `json:"bytes,omitzero"`
	Logprob     float64 `json:"logprob,omitzero"`
	TopLogprobs []struct {
		Token   string  `json:"token,omitzero"`
		Bytes   []byte  `json:"bytes,omitzero"`
		Logprob float64 `json:"logprob,omitzero"`
	} `json:"top_logprobs,omitzero"`
}

Logprobs is the provider-specific log probabilities.

func (*Logprobs) To

func (l *Logprobs) To() []genai.Logprob

To converts to the genai equivalent.

type Message

type Message struct {
	Type MessageType `json:"type,omitzero"`

	// Type == MessageMessage
	Role    string    `json:"role,omitzero"` // "user", "assistant", "system", "developer"
	Content []Content `json:"content,omitzero"`

	// Type == MessageMessage, MessageFileSearchCall, MessageFunctionCall, MessageReasoning
	Status string `json:"status,omitzero"` // "in_progress", "completed", "incomplete", "searching", "failed"

	// Type == MessageMessage (with Role == "assistant"), MessageFileSearchCall, MessageItemReference,
	// MessageFunctionCall, MessageFunctionCallOutput, MessageReasoning
	ID string `json:"id,omitzero"` // MessageItemReference: an internal identifier for an item to reference; Others: tool call ID

	// Type == MessageFileSearchCall
	Queries []string `json:"queries,omitzero"`
	Results []struct {
		Attributes map[string]string `json:"attributes,omitzero"`
		FileID     string            `json:"file_id,omitzero"`
		Filename   string            `json:"filename,omitzero"`
		Score      float64           `json:"score,omitzero"` // [0, 1]
		Text       string            `json:"text,omitzero"`
	} `json:"results,omitzero"`

	// Type == MessageFunctionCall
	Arguments string `json:"arguments,omitzero"` // JSON
	Name      string `json:"name,omitzero"`

	// Type == MessageFunctionCall, MessageFunctionCallOutput
	CallID string `json:"call_id,omitzero"`

	// Type == MessageFunctionCallOutput
	Output string `json:"output,omitzero"` // JSON

	// Type == MessageReasoning
	EncryptedContent string             `json:"encrypted_content,omitzero"`
	Summary          []ReasoningSummary `json:"summary,omitzero"`

	// Type == MessageWebSearchCall
	Action struct {
		Type    string `json:"type,omitzero"` // "search"
		Query   string `json:"query,omitzero"`
		Sources []struct {
			Type string `json:"type,omitzero"` // "url"
			URL  string `json:"url,omitzero"`
		} `json:"sources,omitzero"`
	} `json:"action,omitzero"`
}

Message represents a message input or output to the model.

In OpenAI Responses API, Message is a mix of Message and Content because the tool call type is in the Message.Type.

func (*Message) From

func (m *Message) From(in *genai.Message) (bool, error)

From must be called with at most one ToolCallResults.

func (*Message) To

func (m *Message) To(out *genai.Message) error

To is different here because it can be called multiple times on the same out.

In the Responses API, Message is actually a mix of Message and Content.

type MessageType

type MessageType string

MessageType controls what kind of content is allowed.

This means a single message cannot contain multiple kinds of calls at the same time. I really don't know why they did this, especially since they support parallel tool calling.

const (
	// MessageMessage represents inputs and outputs.
	MessageMessage MessageType = "message"
	// MessageFileSearchCall represents outputs.
	MessageFileSearchCall      MessageType = "file_search_call"
	MessageComputerCall        MessageType = "computer_call"
	MessageWebSearchCall       MessageType = "web_search_call"
	MessageFunctionCall        MessageType = "function_call"
	MessageReasoning           MessageType = "reasoning"
	MessageImageGenerationCall MessageType = "image_generation_call"
	MessageCodeInterpreterCall MessageType = "code_interpreter_call"
	MessageLocalShellCall      MessageType = "local_shell_call"
	MessageMcpListTools        MessageType = "mcp_list_tools"
	MessageMcpApprovalRequest  MessageType = "mcp_approval_request"
	MessageMcpCall             MessageType = "mcp_call"
	// MessageComputerCallOutput represents inputs.
	MessageComputerCallOutput   MessageType = "computer_call_output"
	MessageFunctionCallOutput   MessageType = "function_call_output"
	MessageLocalShellCallOutput MessageType = "local_shell_call_output"
	MessageMcpApprovalResponse  MessageType = "mcp_approval_response"
	MessageItemReference        MessageType = "item_reference"
)

Message type values for inputs and outputs.

type Model

type Model struct {
	ID      string    `json:"id"`
	Object  string    `json:"object"`
	Created base.Time `json:"created"`
	OwnedBy string    `json:"owned_by"`
}

Model is documented at https://platform.openai.com/docs/api-reference/models/object

Sadly the modalities aren't reported. The only way I can think of to find them at run time is to fetch https://platform.openai.com/docs/models/gpt-4o-mini-realtime-preview, find the div containing "Modalities:", then extract the modalities from the text.

func (*Model) Context

func (m *Model) Context() int64

Context implements genai.Model.

func (*Model) GetID

func (m *Model) GetID() string

GetID implements genai.Model.

func (*Model) String

func (m *Model) String() string

type ModelsResponse

type ModelsResponse struct {
	Object string  `json:"object"` // "list"
	Data   []Model `json:"data"`
}

ModelsResponse represents the response structure for OpenAI models listing.

func (*ModelsResponse) ToModels

func (r *ModelsResponse) ToModels() []genai.Model

ToModels converts OpenAI models to genai.Model interfaces.

type ReasoningConfig

type ReasoningConfig struct {
	Effort  ReasoningEffort `json:"effort,omitzero"`
	Summary string          `json:"summary,omitzero"` // "auto", "concise", "detailed"
}

ReasoningConfig represents reasoning configuration for o-series models.

type ReasoningEffort

type ReasoningEffort string

ReasoningEffort is the effort the model should put into reasoning. Default is Medium.

https://platform.openai.com/docs/api-reference/assistants/createAssistant#assistants-createassistant-reasoning_effort https://platform.openai.com/docs/guides/reasoning

const (
	ReasoningEffortNone    ReasoningEffort = "none"
	ReasoningEffortMinimal ReasoningEffort = "minimal"
	ReasoningEffortLow     ReasoningEffort = "low"
	ReasoningEffortMedium  ReasoningEffort = "medium"
	ReasoningEffortHigh    ReasoningEffort = "high"
	ReasoningEffortXHigh   ReasoningEffort = "xhigh"
)

Reasoning effort values.

func (ReasoningEffort) Validate added in v0.2.0

func (r ReasoningEffort) Validate() error

Validate implements genai.Validatable.

type ReasoningSummary

type ReasoningSummary struct {
	Type string `json:"type,omitzero"` // "summary_text"
	Text string `json:"text,omitzero"`
}

ReasoningSummary represents reasoning summary content.

type Response

type Response struct {
	Model                string            `json:"model"`
	Background           bool              `json:"background"`
	Instructions         string            `json:"instructions,omitzero"`
	MaxOutputTokens      int64             `json:"max_output_tokens,omitzero"`
	MaxToolCalls         int64             `json:"max_tool_calls,omitzero"`
	Metadata             map[string]string `json:"metadata,omitzero"`
	ParallelToolCalls    bool              `json:"parallel_tool_calls,omitzero"`
	PreviousResponseID   string            `json:"previous_response_id,omitzero"`
	PromptCacheKey       struct{}          `json:"prompt_cache_key,omitzero"`
	PromptCacheRetention struct{}          `json:"prompt_cache_retention,omitzero"`
	Reasoning            ReasoningConfig   `json:"reasoning,omitzero"`
	SafetyIdentifier     struct{}          `json:"safety_identifier,omitzero"`
	ServiceTier          ServiceTier       `json:"service_tier,omitzero"`
	Store                bool              `json:"store"`
	Temperature          float64           `json:"temperature,omitzero"`
	Text                 struct {
		Format struct {
			Type        string             `json:"type"` // "text", "json_schema", "json_object"
			Name        string             `json:"name,omitzero"`
			Description string             `json:"description,omitzero"`
			Schema      *jsonschema.Schema `json:"schema,omitzero"`
			Strict      bool               `json:"strict,omitzero"`
		} `json:"format"`
		Verbosity string `json:"verbosity,omitzero"` // "low", "medium", "high"
	} `json:"text,omitzero"`
	TopLogprobs int64    `json:"top_logprobs,omitzero"` // [0, 20]
	TopP        float64  `json:"top_p,omitzero"`
	ToolChoice  string   `json:"tool_choice,omitzero"` // "none", "auto", "required"
	Truncation  string   `json:"truncation,omitzero"`  // "disabled", "auto"
	Tools       []Tool   `json:"tools,omitzero"`
	User        string   `json:"user,omitzero"`    // Deprecated, use SafetyIdentifier and PromptCacheKey
	Include     []string `json:"include,omitzero"` // "web_search_call.action.sources"

	// Request only
	Input  []Message `json:"input,omitzero"`
	Stream bool      `json:"stream,omitzero"`

	// Response only
	ID                string            `json:"id,omitzero"`
	Object            string            `json:"object,omitzero"` // "response"
	CreatedAt         base.Time         `json:"created_at,omitzero"`
	CompletedAt       base.Time         `json:"completed_at,omitzero"`
	Status            string            `json:"status,omitzero"` // "completed"
	IncompleteDetails IncompleteDetails `json:"incomplete_details,omitzero"`
	Error             APIError          `json:"error,omitzero"`
	Output            []Message         `json:"output,omitzero"`
	Usage             Usage             `json:"usage,omitzero"`
	Billing           map[string]string `json:"billing,omitzero"` // e.g. {"payer": "openai"}
}

Response represents both a request to and a response from the OpenAI Responses API; the request-only and response-only fields are marked above.

https://platform.openai.com/docs/api-reference/responses/object

func (*Response) Init

func (r *Response) Init(msgs genai.Messages, model string, opts ...genai.GenOption) error

Init implements base.InitializableRequest.

func (*Response) SetStream

func (r *Response) SetStream(stream bool)

SetStream implements base.InitializableRequest.

func (*Response) ToResult

func (r *Response) ToResult() (genai.Result, error)

ToResult implements base.ResultConverter.

type ResponseStreamChunkResponse

type ResponseStreamChunkResponse struct {
	Type           ResponseType `json:"type,omitzero"`
	SequenceNumber int64        `json:"sequence_number,omitzero"`

	// Type == ResponseCreated, ResponseInProgress, ResponseCompleted, ResponseFailed, ResponseIncomplete,
	// ResponseQueued
	Response Response `json:"response,omitzero"`

	// Type == ResponseOutputItemAdded, ResponseOutputItemDone, ResponseContentPartAdded,
	// ResponseContentPartDone, ResponseOutputTextDelta, ResponseOutputTextDone, ResponseRefusalDelta,
	// ResponseRefusalDone, ResponseFunctionCallArgumentsDelta, ResponseFunctionCallArgumentsDone,
	// ResponseReasoningSummaryPartAdded, ResponseReasoningSummaryPartDone, ResponseReasoningSummaryTextDelta,
	// ResponseReasoningSummaryTextDone, ResponseOutputTextAnnotationAdded
	OutputIndex int64 `json:"output_index,omitzero"`

	// Type == ResponseOutputItemAdded, ResponseOutputItemDone
	Item Message `json:"item,omitzero"`

	// Type == ResponseContentPartAdded, ResponseContentPartDone, ResponseOutputTextDelta,
	// ResponseOutputTextDone, ResponseRefusalDelta, ResponseRefusalDone,
	// ResponseOutputTextAnnotationAdded
	ContentIndex int64 `json:"content_index,omitzero"`

	// Type == ResponseContentPartAdded, ResponseContentPartDone, ResponseOutputTextDelta,
	// ResponseOutputTextDone, ResponseRefusalDelta, ResponseRefusalDone, ResponseFunctionCallArgumentsDelta,
	// ResponseFunctionCallArgumentsDone, ResponseReasoningSummaryPartAdded, ResponseReasoningSummaryPartDone,
	// ResponseReasoningSummaryTextDelta, ResponseReasoningSummaryTextDone, ResponseOutputTextAnnotationAdded
	ItemID string `json:"item_id,omitzero"`

	// Type == ResponseContentPartAdded, ResponseContentPartDone, ResponseReasoningSummaryPartAdded,
	// ResponseReasoningSummaryPartDone
	Part Content `json:"part,omitzero"`

	// Type == ResponseOutputTextDelta, ResponseRefusalDelta, ResponseFunctionCallArgumentsDelta,
	// ResponseReasoningSummaryTextDelta
	Delta string `json:"delta,omitzero"`

	// Type == ResponseOutputTextDone, ResponseReasoningSummaryTextDone
	Text string `json:"text,omitzero"`

	// Type == ResponseRefusalDone
	Refusal string `json:"refusal,omitzero"`

	// Type == ResponseFunctionCallArgumentsDone
	Arguments string `json:"arguments,omitzero"`

	// Type == ResponseReasoningSummaryPartAdded, ResponseReasoningSummaryPartDone,
	// ResponseReasoningSummaryTextDelta, ResponseReasoningSummaryTextDone
	SummaryIndex int64 `json:"summary_index,omitzero"`

	// Type == ResponseOutputTextAnnotationAdded
	Annotation      Annotation `json:"annotation,omitzero"`
	AnnotationIndex int64      `json:"annotation_index,omitzero"`

	// Type == ResponseError
	ErrorResponse

	Logprobs []Logprobs `json:"logprobs,omitzero"`

	Obfuscation string `json:"obfuscation,omitzero"`
}

ResponseStreamChunkResponse represents a streaming response chunk.

https://platform.openai.com/docs/api-reference/responses-streaming
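Consumers typically switch on Type, appending delta events and cross-checking against the final done event. A self-contained sketch of that dispatch for output text, using a minimal stand-in for the chunk type:

```go
package main

import "fmt"

// chunk is a minimal stand-in for ResponseStreamChunkResponse, carrying only
// the fields this sketch uses.
type chunk struct {
	Type  string
	Delta string
	Text  string
}

// accumulate concatenates response.output_text.delta fragments and verifies
// them against the terminal response.output_text.done event.
func accumulate(chunks []chunk) (string, error) {
	var out string
	for _, c := range chunks {
		switch c.Type {
		case "response.output_text.delta":
			out += c.Delta
		case "response.output_text.done":
			if c.Text != out {
				return out, fmt.Errorf("done text %q != accumulated %q", c.Text, out)
			}
		}
	}
	return out, nil
}

func main() {
	got, err := accumulate([]chunk{
		{Type: "response.output_text.delta", Delta: "Hel"},
		{Type: "response.output_text.delta", Delta: "lo"},
		{Type: "response.output_text.done", Text: "Hello"},
	})
	fmt.Println(got, err) // Hello <nil>
}
```

The same pattern extends to refusal, function-call-arguments, and reasoning-summary deltas, each of which has its own delta/done event pair.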

type ResponseType

type ResponseType string

ResponseType is one of the event types listed at https://platform.openai.com/docs/api-reference/responses-streaming

const (
	ResponseCompleted                       ResponseType = "response.completed"
	ResponseContentPartAdded                ResponseType = "response.content_part.added"
	ResponseContentPartDone                 ResponseType = "response.content_part.done"
	ResponseCreated                         ResponseType = "response.created"
	ResponseError                           ResponseType = "error"
	ResponseFailed                          ResponseType = "response.failed"
	ResponseFileSearchCallCompleted         ResponseType = "response.file_search_call.completed"
	ResponseFileSearchCallInProgress        ResponseType = "response.file_search_call.in_progress"
	ResponseFileSearchCallSearching         ResponseType = "response.file_search_call.searching"
	ResponseFunctionCallArgumentsDelta      ResponseType = "response.function_call_arguments.delta"
	ResponseFunctionCallArgumentsDone       ResponseType = "response.function_call_arguments.done"
	ResponseImageGenerationCallCompleted    ResponseType = "response.image_generation_call.completed"
	ResponseImageGenerationCallGenerating   ResponseType = "response.image_generation_call.generating"
	ResponseImageGenerationCallInProgress   ResponseType = "response.image_generation_call.in_progress"
	ResponseImageGenerationCallPartialImage ResponseType = "response.image_generation_call.partial_image"
	ResponseInProgress                      ResponseType = "response.in_progress"
	ResponseIncomplete                      ResponseType = "response.incomplete"
	ResponseMCPCallArgumentsDelta           ResponseType = "response.mcp_call.arguments.delta"
	ResponseMCPCallArgumentsDone            ResponseType = "response.mcp_call.arguments.done"
	ResponseMCPCallCompleted                ResponseType = "response.mcp_call.completed"
	ResponseMCPCallFailed                   ResponseType = "response.mcp_call.failed"
	ResponseMCPCallInProgress               ResponseType = "response.mcp_call.in_progress"
	ResponseMCPListToolsCompleted           ResponseType = "response.mcp_list_tools.completed"
	ResponseMCPListToolsFailed              ResponseType = "response.mcp_list_tools.failed"
	ResponseMCPListToolsInProgress          ResponseType = "response.mcp_list_tools.in_progress"
	ResponseCodeInterpreterCallCompleted    ResponseType = "response.code_interpreter_call.completed"
	ResponseCodeInterpreterCallDelta        ResponseType = "response.code_interpreter_call.delta"
	ResponseCodeInterpreterCallDone         ResponseType = "response.code_interpreter_call.done"
	ResponseCodeInterpreterCallInterpreting ResponseType = "response.code_interpreter_call.interpreting"
	ResponseCustomToolCallInputDelta        ResponseType = "response.custom_tool_call_input.delta"
	ResponseCustomToolCallInputDone         ResponseType = "response.custom_tool_call_input.done"
	ResponseOutputItemAdded                 ResponseType = "response.output_item.added"
	ResponseOutputItemDone                  ResponseType = "response.output_item.done"
	ResponseOutputTextDelta                 ResponseType = "response.output_text.delta"
	ResponseOutputTextDone                  ResponseType = "response.output_text.done"
	ResponseOutputTextAnnotationAdded       ResponseType = "response.output_text.annotation.added"
	ResponseQueued                          ResponseType = "response.queued"
	ResponseReasoningSummaryPartAdded       ResponseType = "response.reasoning_summary_part.added"
	ResponseReasoningSummaryPartDone        ResponseType = "response.reasoning_summary_part.done"
	ResponseReasoningSummaryTextDelta       ResponseType = "response.reasoning_summary_text.delta"
	ResponseReasoningSummaryTextDone        ResponseType = "response.reasoning_summary_text.done"
	ResponseReasoningTextDelta              ResponseType = "response.reasoning_text.delta"
	ResponseReasoningTextDone               ResponseType = "response.reasoning_text.done"
	ResponseRefusalDelta                    ResponseType = "response.refusal.delta"
	ResponseRefusalDone                     ResponseType = "response.refusal.done"
	ResponseWebSearchCallCompleted          ResponseType = "response.web_search_call.completed"
	ResponseWebSearchCallInProgress         ResponseType = "response.web_search_call.in_progress"
	ResponseWebSearchCallSearching          ResponseType = "response.web_search_call.searching"
)

Response event type values.
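The event types above arrive as discriminators on streamed chunks. A minimal sketch of dispatching on them, using a hypothetical pared-down event struct (real chunks such as ResponseStreamChunkResponse carry many more fields):

```go
package main

import "fmt"

// streamEvent is a hypothetical, pared-down view of a streaming chunk:
// just the event type string and an optional text delta.
type streamEvent struct {
	Type  string
	Delta string
}

// collectText accumulates output text deltas and reports whether the
// text stream finished; other event types are ignored.
func collectText(events []streamEvent) (string, bool) {
	var text string
	done := false
	for _, ev := range events {
		switch ev.Type {
		case "response.output_text.delta":
			text += ev.Delta
		case "response.output_text.done":
			done = true
		}
	}
	return text, done
}

func main() {
	events := []streamEvent{
		{Type: "response.in_progress"},
		{Type: "response.output_text.delta", Delta: "Hello, "},
		{Type: "response.output_text.delta", Delta: "world"},
		{Type: "response.output_text.done"},
	}
	text, done := collectText(events)
	fmt.Printf("%q done=%v\n", text, done) // prints: "Hello, world" done=true
}
```

In the real package, ProcessStream performs this kind of accumulation for you, converting raw packets into genai.Reply fragments.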

type ServiceTier

type ServiceTier string

ServiceTier selects the quality of service that determines the request's priority.

const (
	// ServiceTierAuto uses scale tier credits until they are exhausted if the Project is Scale tier
	// enabled; otherwise the request is processed using the default service tier, with a lower uptime
	// SLA and no latency guarantee.
	//
	// https://openai.com/api-scale-tier/
	ServiceTierAuto ServiceTier = "auto"
	// ServiceTierDefault processes the request using the default service tier, with a lower uptime SLA
	// and no latency guarantee.
	ServiceTierDefault ServiceTier = "default"
	// ServiceTierFlex processes the request using the Flex Processing service tier.
	//
	// Flex processing is in beta, and currently only available for GPT-5, o3 and o4-mini models.
	//
	// https://platform.openai.com/docs/guides/flex-processing
	ServiceTierFlex ServiceTier = "flex"
)

func (ServiceTier) Validate added in v0.2.0

func (s ServiceTier) Validate() error

Validate implements genai.Validatable.
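The body of Validate is not shown here. A plausible sketch, under the assumption that it simply checks membership in the known tier values (the package's actual implementation may differ, e.g. in how it treats the empty string):

```go
package main

import "fmt"

type ServiceTier string

const (
	ServiceTierAuto    ServiceTier = "auto"
	ServiceTierDefault ServiceTier = "default"
	ServiceTierFlex    ServiceTier = "flex"
)

// Validate reports an error for any value outside the known tiers.
// The empty string is accepted here so the zero value means "unset";
// this is an assumption, not necessarily the real package's behavior.
func (s ServiceTier) Validate() error {
	switch s {
	case "", ServiceTierAuto, ServiceTierDefault, ServiceTierFlex:
		return nil
	}
	return fmt.Errorf("unknown service tier %q", s)
}

func main() {
	fmt.Println(ServiceTierFlex.Validate())     // prints: <nil>
	fmt.Println(ServiceTier("fast").Validate()) // prints the error
}
```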

type TokenDetails

type TokenDetails struct {
	CachedTokens    int64 `json:"cached_tokens,omitzero"`
	AudioTokens     int64 `json:"audio_tokens,omitzero"`
	ReasoningTokens int64 `json:"reasoning_tokens,omitzero"`
}

TokenDetails provides detailed token usage breakdown.

type Tool

type Tool struct {
	// "function", "file_search", "computer_use_preview", "mcp", "code_interpreter", "image_generation",
	// "local_shell", "web_search"
	Type string `json:"type,omitzero"`

	// Type == "function"
	Name        string             `json:"name,omitzero"`
	Description string             `json:"description,omitzero"`
	Parameters  *jsonschema.Schema `json:"parameters,omitzero"`
	Strict      bool               `json:"strict,omitzero"`

	// Type == "file_search"
	FileSearchVectorStoreIDs []string `json:"vector_store_ids,omitzero"`

	// Type == "web_search"
	Filters struct {
		AllowedDomains []string `json:"allowed_domains,omitzero"`
	} `json:"filters,omitzero"`
	SearchContextSize string `json:"search_context_size,omitzero"` // "low", "medium", "high"
	UserLocation      struct {
		Type     string `json:"type,omitzero"`    // "approximate"
		Country  string `json:"country,omitzero"` // "GB"
		City     string `json:"city,omitzero"`    // "London"
		Region   string `json:"region,omitzero"`  // "London"
		Timezone string `json:"timezone,omitzero"`
	} `json:"user_location,omitzero"`
}

Tool represents a tool that can be called by the model.

type Truncation added in v0.2.0

type Truncation string

Truncation controls the truncation strategy for long conversations.

const (
	// TruncationDisabled means the request fails if the input exceeds the model's context window.
	TruncationDisabled Truncation = "disabled"
	// TruncationAuto means the input is automatically truncated to fit the model's context window.
	TruncationAuto Truncation = "auto"
)

type Usage

type Usage struct {
	InputTokens        int64 `json:"input_tokens"`
	InputTokensDetails struct {
		CachedTokens int64 `json:"cached_tokens"`
	} `json:"input_tokens_details"`
	OutputTokens        int64 `json:"output_tokens"`
	OutputTokensDetails struct {
		ReasoningTokens int64 `json:"reasoning_tokens"`
	} `json:"output_tokens_details"`
	TotalTokens int64 `json:"total_tokens"`
}

Usage represents token usage statistics.
