cache

package
v0.0.0-...-1d8266f

Published: Dec 23, 2025 License: Apache-2.0 Imports: 9 Imported by: 0

Documentation

Overview

Package cache provides a small cache abstraction and a simple in-memory implementation with optional TTL expiration.

The in-memory implementation (MemCache) is thread-safe and supports:

  • Lazy eviction: expired entries are removed when they are read via Get
  • Periodic background cleaner: removes expired items in bounded batches
  • Optional TTL expiration per entry
  • Metrics tracking for cache performance

Example usage:

ctx := context.Background()

c := cache.New(logger) // logger is a zerolog.Logger
defer c.Close(ctx)

if err := c.Set(ctx, "key", "value", time.Minute); err != nil {
    // handle error
}

value, err := c.Get(ctx, "key")
if err != nil {
    // handle error (e.g. cache.ErrNotFound on a miss)
}
_ = value

Index

Constants

const (
	// MaxDeletesPerRun is an upper bound on the number of expired items removed
	// by the background cleaner in a single tick.
	MaxDeletesPerRun = 10000
	// DefaultCleanupInterval is the default interval for the background cleaner.
	DefaultCleanupInterval = time.Minute
)

Variables

var (
	// ErrType indicates a value exists but cannot be converted to the requested type.
	ErrType = errors.New("type error")
	// ErrNotFound indicates that a key does not exist or is expired.
	ErrNotFound = errors.New("not found")
	// ErrAborted indicates an operation was cancelled via context.
	ErrAborted = errors.New("operation aborted")
)

Functions

This section is empty.

Types

type Cache

type Cache interface {
	Set(ctx context.Context, key string, value any, ttl time.Duration) error
	Get(ctx context.Context, key string) (any, error)
	Delete(ctx context.Context, keys ...string) error
	Digest(ctx context.Context, key string) Digest
	Close(ctx context.Context) error
}

Cache is a minimal cache interface with TTL support.

TTL semantics:

  • ttl > 0: the entry expires at now+ttl
  • ttl <= 0: the entry does not expire

Implementations may evict expired items lazily (on reads) and/or via a background cleanup process.
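
For example (a minimal sketch, assuming c is a Cache and ctx is a context.Context):

// ttl > 0: "session" expires one minute from now.
_ = c.Set(ctx, "session", "abc123", time.Minute)

// ttl <= 0: "config" never expires until it is explicitly deleted.
_ = c.Set(ctx, "config", "v1", 0)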

type Digest

type Digest uint64

Digest is a stable, cheap fingerprint of a cached value.

For this package's MemCache implementation, Digest is defined and stable for primitive types (string, []byte, bool, ints, uints, floats). For other types, or when a key is missing or expired, implementations may return 0.
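
As an illustration (a sketch, assuming c is a Cache and "counter" holds a primitive value), successive digests can be compared to detect a change without reading the value itself:

before := c.Digest(ctx, "counter")
// ... other writers may Set "counter" in the meantime ...
after := c.Digest(ctx, "counter")
if after != 0 && after != before {
    // the value changed (or appeared); re-read it with Get
}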

type MemCache

type MemCache struct {
	// contains filtered or unexported fields
}

MemCache is a thread-safe in-memory cache with optional TTL expiration.

It performs eviction:

  • lazily on Get (expired items are removed on access)
  • periodically via a background cleaner (best-effort, capped per run)

MemCache is guarded by a read-write mutex: Get takes a read lock, so concurrent reads do not block each other (it upgrades to a write lock only to remove an expired entry), while Set and Delete take the write lock and briefly block readers.

Close must be called to stop the background cleaner and release resources.

func New

func New(log zerolog.Logger) *MemCache

New returns a MemCache using DefaultCleanupInterval for background cleanup. The returned cache starts a background goroutine that periodically removes expired entries. Call Close to stop the background cleaner.

func WithDeleteInterval

func WithDeleteInterval(cleanupInterval time.Duration, log zerolog.Logger) *MemCache

WithDeleteInterval returns a MemCache that runs the background cleaner at the provided interval.

The background cleaner removes expired entries in batches, processing at most MaxDeletesPerRun items per interval to bound cleanup work.

If cleanupInterval is <= 0, DefaultCleanupInterval is used instead to avoid ticker panics.

The returned cache starts a background goroutine. Call Close to stop it.
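
A minimal construction sketch (zerolog.Nop() is used here only as a placeholder logger):

mc := cache.WithDeleteInterval(10*time.Second, zerolog.Nop())
defer mc.Close(context.Background())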

func (*MemCache) Close

func (mc *MemCache) Close(_ context.Context) error

Close stops the background cleaner. It is safe to call Close multiple times.

func (*MemCache) Delete

func (mc *MemCache) Delete(ctx context.Context, keys ...string) error

Delete removes the given keys from the cache.

Delete is safe for concurrent use. If a key does not exist, it is ignored. The operation can be cancelled via the context. If cancelled, Delete returns ErrAborted. Metrics are still updated for keys deleted before cancellation.
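
A usage sketch, assuming mc is a *MemCache:

if err := mc.Delete(ctx, "a", "b", "c"); err != nil {
    if errors.Is(err, cache.ErrAborted) {
        // ctx was cancelled mid-operation; some keys may remain
    }
}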

func (*MemCache) Digest

func (mc *MemCache) Digest(_ context.Context, key string) Digest

Digest returns a fingerprint for the current (non-expired) value of key.

If key is missing or expired, Digest returns 0. For MemCache, the digest is defined and stable for primitive types (string, []byte, bool, ints, uints, floats). For other value types, Digest returns 0.

Digest is safe for concurrent use. It does not remove expired entries; use Get for lazy eviction.

func (*MemCache) Get

func (mc *MemCache) Get(_ context.Context, key string) (any, error)

Get returns the cached value for key.

If the entry is missing or expired, Get returns ErrNotFound. Expired entries are removed lazily on access (when Get is called) or by the background cleaner. Get guarantees that it never returns a value that is expired at the time of the final check.

Get is safe for concurrent use. It uses read locks for fast access and only acquires a write lock when deleting expired entries.
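
A typical read path (a sketch, assuming string values were stored):

v, err := mc.Get(ctx, "user:42")
switch {
case errors.Is(err, cache.ErrNotFound):
    // miss: the key is absent or expired; fall back to the source of truth
case err != nil:
    // unexpected error
default:
    if s, ok := v.(string); ok {
        _ = s // use the cached value
    }
}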

func (*MemCache) Metrics

func (mc *MemCache) Metrics() Metrics

Metrics returns a snapshot of internal metrics.

The returned snapshot is a point-in-time copy of all metrics, safe for concurrent access. Metrics include hits, misses, sets, deletes, and eviction statistics.
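
For example, a caller might derive a hit rate from a snapshot (sketch):

m := mc.Metrics()
if total := m.Hits + m.Misses; total > 0 {
    hitRate := float64(m.Hits) / float64(total)
    _ = hitRate // e.g. export as a gauge
}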

func (*MemCache) MetricsJSON

func (mc *MemCache) MetricsJSON() string

MetricsJSON returns a JSON snapshot of internal metrics as a string.

The returned JSON is a point-in-time snapshot, safe for concurrent access. Useful for logging or monitoring endpoints.
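
For example, the snapshot can be attached to a structured log line (a sketch, assuming log is the zerolog.Logger passed to New and that the returned string is valid JSON, as documented):

log.Info().RawJSON("cache_metrics", []byte(mc.MetricsJSON())).Msg("cache stats")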

func (*MemCache) Set

func (mc *MemCache) Set(_ context.Context, key string, value any, ttl time.Duration) error

Set stores key/value with the provided TTL.

TTL semantics:

  • ttl > 0: expires at now+ttl
  • ttl <= 0: does not expire

Set is safe for concurrent use. If the key already exists, it is overwritten.
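
For example, re-setting an existing key replaces the stored value, and per the TTL semantics above the new call's TTL applies (sketch):

_ = mc.Set(ctx, "token", "v1", time.Minute)
// Overwrite: "token" now holds "v2" and expires an hour from this call.
_ = mc.Set(ctx, "token", "v2", time.Hour)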

func (*MemCache) Size

func (mc *MemCache) Size() int

Size returns the current number of entries in the cache.

Size includes both expired and non-expired entries. Expired entries are removed lazily on Get or by the background cleaner, so Size may include entries that would return ErrNotFound on Get.
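
A sketch of why Size and Get can disagree for a short window:

_ = mc.Set(ctx, "ephemeral", 1, time.Millisecond)
time.Sleep(5 * time.Millisecond)

n := mc.Size()                     // may still count the expired entry
_, err := mc.Get(ctx, "ephemeral") // ErrNotFound; the entry is evicted lazily here
_ = n
_ = err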

type Metrics

type Metrics struct {
	Hits                  uint32 `json:"hits"`
	Misses                uint32 `json:"misses"`
	Sets                  uint32 `json:"sets"`
	Deletes               uint32 `json:"deletes"`
	LazyEvictions         uint32 `json:"lazy_evictions"`           // Expired items found during Get
	ScheduledEvictions    uint32 `json:"scheduled_evictions"`      // Expired items removed by cleaner
	CleanupRuns           uint32 `json:"cleanup_runs"`             // Number of scheduled cleanup runs
	LastCleanupDurationMs uint64 `json:"last_cleanup_duration_ms"` // Duration of last cleanup in milliseconds
	LastCleanupItems      uint32 `json:"last_cleanup_items"`       // Items cleaned in last run
}

Metrics tracks cache operations and evictions.

All fields are updated atomically and are safe to read concurrently.

func (*Metrics) AddCleanupRun

func (m *Metrics) AddCleanupRun(duration time.Duration, itemsCleaned uint32)

func (*Metrics) AddDelete

func (m *Metrics) AddDelete(count uint32)

func (*Metrics) AddHit

func (m *Metrics) AddHit()

func (*Metrics) AddLazyEviction

func (m *Metrics) AddLazyEviction()

func (*Metrics) AddMiss

func (m *Metrics) AddMiss()

func (*Metrics) AddScheduledEviction

func (m *Metrics) AddScheduledEviction(count uint32)

func (*Metrics) AddSet

func (m *Metrics) AddSet()

func (*Metrics) JSONStr

func (m *Metrics) JSONStr() string

JSONStr returns a JSON snapshot of the current metrics.

func (*Metrics) Snapshot

func (m *Metrics) Snapshot() Metrics

Snapshot returns an atomic-load copy of all metrics fields.
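
A sketch for callers holding a Metrics value directly (for example, when embedding one in their own component):

var m cache.Metrics
m.AddHit()
m.AddMiss()

snap := m.Snapshot() // atomic-load copy; read fields without further synchronization
_ = snap.Hits
_ = snap.Misses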
