Documentation ¶
Overview ¶
Package graph provides a high-level API for the Geode graph database, built on top of the database/sql-compatible driver. It adds batch execution, pipeline processing, schema management, and transaction ergonomics that go beyond what database/sql offers.
Index ¶
- type BatchResult
- type Conn
- func (c *Conn) ApplySchema(ctx context.Context, stmts []Statement) error
- func (c *Conn) ApplySchemaFS(ctx context.Context, fsys fs.FS, dir string) error
- func (c *Conn) BatchExec(ctx context.Context, stmts []Statement) (BatchResult, error)
- func (c *Conn) Close() error
- func (c *Conn) DB() *sql.DB
- func (c *Conn) ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error)
- func (c *Conn) PingContext(ctx context.Context) error
- func (c *Conn) PipelineExec(ctx context.Context, stmts []Statement, opts ...PipelineOption) (PipelineResult, error)
- func (c *Conn) QueryContext(ctx context.Context, query string, args ...any) (*sql.Rows, error)
- func (c *Conn) TX(ctx context.Context, opts *sql.TxOptions, fn TXFunc) (err error)
- type Option
- type Options
- type PipelineOption
- type PipelineResult
- type Statement
- type TXFunc
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type BatchResult ¶
type BatchResult struct {
	Total     int   // number of statements in the batch
	Succeeded int   // number that executed successfully
	Failed    int   // number that failed
	Err       error // error that stopped execution, if any
}
BatchResult summarises the outcome of a BatchExec call.
type Conn ¶
type Conn struct {
	// contains filtered or unexported fields
}
Conn wraps a *sql.DB with graph-specific configuration such as retry policy, batch size, and worker count. It serves as the foundation for higher-level graph operations (transactions, batch execution, pipeline processing, and schema management).
func NewConn ¶
NewConn creates a new Conn wrapping the provided *sql.DB. It returns an error if db is nil. Functional options are applied on top of the defaults provided by ApplyOptions.
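As a usage sketch (not runnable as-is): the import path, driver name, and DSN below are invented for illustration, and the option-constructor argument shapes are inferred from the Options fields and option descriptions.

```go
import (
	"database/sql"
	"time"

	"example.com/geode/graph" // hypothetical import path
)

func openConn() (*graph.Conn, error) {
	// Driver name and DSN are placeholders; consult the driver's docs.
	db, err := sql.Open("geode", "geode://localhost/app")
	if err != nil {
		return nil, err
	}
	// NewConn returns an error if db is nil; options are applied on top
	// of the ApplyOptions defaults.
	return graph.NewConn(db,
		graph.WithRetry(5, 2*time.Second), // argument order assumed
		graph.WithWorkers(16),
	)
}
```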
func (*Conn) ApplySchema ¶
func (c *Conn) ApplySchema(ctx context.Context, stmts []Statement) error
ApplySchema executes stmts sequentially against the database. Execution stops at the first error. An empty or nil slice returns a nil error. The context is checked between statements.
func (*Conn) ApplySchemaFS ¶
func (c *Conn) ApplySchemaFS(ctx context.Context, fsys fs.FS, dir string) error
ApplySchemaFS is a convenience method that parses all .gql files from dir in fsys and then applies them via ApplySchema.
func (*Conn) BatchExec ¶
func (c *Conn) BatchExec(ctx context.Context, stmts []Statement) (BatchResult, error)
BatchExec executes stmts sequentially. Each statement is retried according to the retry policy derived from the Conn options. Execution stops at the first unrecoverable error. An empty slice returns a zero BatchResult with a nil error.
func (*Conn) ExecContext ¶
func (c *Conn) ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error)
ExecContext executes a query that does not return rows. It delegates directly to the underlying *sql.DB.
func (*Conn) PingContext ¶
func (c *Conn) PingContext(ctx context.Context) error
PingContext verifies the connection to the database is still alive. It delegates directly to the underlying *sql.DB.
func (*Conn) PipelineExec ¶
func (c *Conn) PipelineExec(ctx context.Context, stmts []Statement, opts ...PipelineOption) (PipelineResult, error)
PipelineExec executes stmts in parallel using a bounded worker pool. Each statement is retried according to the retry policy derived from the Conn options. The first unrecoverable error cancels all pending work (first-error-stops semantics). An empty slice returns a zero PipelineResult with a nil error.
func (*Conn) QueryContext ¶
func (c *Conn) QueryContext(ctx context.Context, query string, args ...any) (*sql.Rows, error)
QueryContext executes a query that returns rows. It delegates directly to the underlying *sql.DB.
func (*Conn) TX ¶
func (c *Conn) TX(ctx context.Context, opts *sql.TxOptions, fn TXFunc) (err error)
TX begins a transaction, executes fn inside it, and commits on success. Any error returned by fn causes an immediate rollback, and that error is returned to the caller. A panic inside fn also triggers a rollback before the panic is re-propagated. This ensures transactions are never leaked.
opts may be nil; if so the database default isolation level is used.
type Option ¶
type Option func(*Options)
Option is a functional option for configuring a Conn.
func WithBatchSize ¶
WithBatchSize sets the number of items processed per batch operation.
func WithLogger ¶
WithLogger sets a structured logger for the Conn.
func WithRetry ¶
WithRetry configures the maximum number of retries and the backoff duration between attempts.
func WithWorkers ¶
WithWorkers sets the number of concurrent workers used in pipeline and batch operations.
type Options ¶
type Options struct {
	MaxRetries   int
	RetryBackoff time.Duration
	Logger       *slog.Logger
	BatchSize    int
	Workers      int
}
Options holds configuration for a Conn.
func ApplyOptions ¶
ApplyOptions returns an Options populated with defaults, then applies each provided option in order.
Defaults:
- MaxRetries: 3
- RetryBackoff: 1s
- BatchSize: 100
- Workers: 8
type PipelineOption ¶
type PipelineOption func(*pipelineConfig)
PipelineOption is a functional option for configuring a PipelineExec call.
func WithProgress ¶
func WithProgress(fn func(completed, total int)) PipelineOption
WithProgress registers a callback that is invoked after each successful statement execution. The callback receives the number of completed statements and the total number of statements in the pipeline.
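A wiring sketch: conn, ctx, and stmts are assumed to already exist, and the fmt/log reporter is illustrative. Since statements run in parallel, the callback is presumably invoked from multiple worker goroutines, so it should be safe for concurrent use.

```go
res, err := conn.PipelineExec(ctx, stmts,
	graph.WithProgress(func(completed, total int) {
		fmt.Printf("applied %d/%d statements\n", completed, total)
	}),
)
if err != nil {
	log.Printf("pipeline stopped: %v (succeeded=%d failed=%d)", err, res.Succeeded, res.Failed)
}
```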
type PipelineResult ¶
type PipelineResult struct {
	Total     int   // number of statements submitted
	Succeeded int   // number that executed successfully
	Failed    int   // number that failed
	Err       error // first error encountered, if any
}
PipelineResult summarises the outcome of a PipelineExec call.
type Statement ¶
Statement is a GQL query with its bound arguments.
func ParseSchemaFS ¶
ParseSchemaFS reads all .gql files from dir in fsys, parses their contents into Statements, and returns them in lexicographic file order. Files that are not regular .gql files (directories, non-.gql extensions) are silently skipped. An error is returned if the directory cannot be read, a file cannot be read, or a statement uses an unsupported keyword.