Documentation ¶
Index ¶
- type BloomSet
- func (b *BloomSet) Add(hash common.Hash)
- func (b *BloomSet) AddMany(hashes []common.Hash)
- func (b *BloomSet) Contains(hash common.Hash) bool
- func (b *BloomSet) Count() uint
- func (b *BloomSet) FilterNotContained(hashes []common.Hash) []common.Hash
- func (b *BloomSet) MemoryUsage() uint
- func (b *BloomSet) Reset()
- func (b *BloomSet) Rotate()
- type BloomSetOptions
- type BoundedSet
- type LRU
- func (c *LRU[K, V]) Add(key K, value V)
- func (c *LRU[K, V]) AddBatch(keys []K, values []V)
- func (c *LRU[K, V]) Get(key K) (V, bool)
- func (c *LRU[K, V]) Peek(key K) (V, bool)
- func (c *LRU[K, V]) PeekMany(keys []K) []V
- func (c *LRU[K, V]) PeekManyWithKeys(keys []K) ([]K, []V)
- func (c *LRU[K, V]) Remove(key K) (V, bool)
- func (c *LRU[K, V]) Update(key K, updateFn func(V) V)
- type LRUOptions
- type Locked
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type BloomSet ¶
type BloomSet struct {
// contains filtered or unexported fields
}
BloomSet is a memory-efficient probabilistic set for tracking seen hashes. It uses a rotating dual-bloom-filter design:
- "current" filter receives all new additions
- "previous" filter is checked during lookups for recency
- Rotate() moves current to previous and creates a fresh current
Trade-offs vs LRU cache:
- Pro: ~10x less memory, minimal GC pressure (fixed-size arrays)
- Pro: O(1) add/lookup with very low constant factor
- Con: False positives possible (~1% with default settings)
- Con: No exact eviction control (use Rotate for approximate TTL)
For knownTxs, a false positive means a transaction occasionally is not broadcast to a peer that does not yet have it. This is acceptable, since the peer will receive it from another source.
This implementation wraps holiman/bloomfilter/v2, the same battle-tested bloom filter library used by geth for state pruning.
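The rotating dual-filter design described above can be illustrated with a self-contained toy sketch. This is not the package's implementation (which wraps holiman/bloomfilter/v2); the filter here is a minimal bit array using double hashing derived from a single FNV-64 digest, and all names are invented for illustration:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a minimal bloom filter: m bits and k probe positions
// derived from one FNV-64 digest via double hashing.
type bloom struct {
	bits []uint64
	m, k uint64
}

func newBloom(m, k uint64) *bloom {
	return &bloom{bits: make([]uint64, (m+63)/64), m: m, k: k}
}

func (f *bloom) indexes(item string) []uint64 {
	h := fnv.New64a()
	h.Write([]byte(item))
	d := h.Sum64()
	h1, h2 := d&0xffffffff, d>>32
	idx := make([]uint64, f.k)
	for i := uint64(0); i < f.k; i++ {
		idx[i] = (h1 + i*h2) % f.m
	}
	return idx
}

func (f *bloom) add(item string) {
	for _, i := range f.indexes(item) {
		f.bits[i/64] |= 1 << (i % 64)
	}
}

func (f *bloom) contains(item string) bool {
	for _, i := range f.indexes(item) {
		if f.bits[i/64]&(1<<(i%64)) == 0 {
			return false
		}
	}
	return true
}

// rotatingSet wraps two filters: adds go to current, lookups
// consult both, and Rotate discards the old previous filter.
type rotatingSet struct {
	current, previous *bloom
	m, k              uint64
}

func newRotatingSet(m, k uint64) *rotatingSet {
	return &rotatingSet{current: newBloom(m, k), previous: newBloom(m, k), m: m, k: k}
}

func (s *rotatingSet) Add(item string) { s.current.add(item) }

func (s *rotatingSet) Contains(item string) bool {
	return s.current.contains(item) || s.previous.contains(item)
}

func (s *rotatingSet) Rotate() {
	s.previous = s.current
	s.current = newBloom(s.m, s.k)
}

func main() {
	s := newRotatingSet(1<<16, 7)
	s.Add("tx1")
	fmt.Println(s.Contains("tx1")) // true
	s.Rotate()                     // tx1 survives one rotation via previous
	fmt.Println(s.Contains("tx1")) // true
	s.Rotate()                     // second rotation discards it
	fmt.Println(s.Contains("tx1")) // false: both filters are now empty of tx1
}
```

Note how an item added just before a rotation remains "known" for one full rotation period, which is what gives the set its approximate-TTL behavior.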
func NewBloomSet ¶
func NewBloomSet(opts BloomSetOptions) *BloomSet
NewBloomSet creates a new BloomSet with the given options. If options are zero-valued, defaults are applied.
func (*BloomSet) Contains ¶
func (b *BloomSet) Contains(hash common.Hash) bool
Contains checks if a hash might be in the set. Returns true if the hash is probably in the set (may have false positives). Returns false if the hash is definitely not in the set.
func (*BloomSet) Count ¶
func (b *BloomSet) Count() uint
Count returns the approximate number of elements added since last rotation. This uses the bloom filter's internal count of added elements.
func (*BloomSet) FilterNotContained ¶
func (b *BloomSet) FilterNotContained(hashes []common.Hash) []common.Hash
FilterNotContained returns hashes that are definitely not in the set. Hashes that might be in the set (including false positives) are excluded.
func (*BloomSet) MemoryUsage ¶
func (b *BloomSet) MemoryUsage() uint
MemoryUsage returns the approximate memory usage in bytes.
func (*BloomSet) Rotate ¶
func (b *BloomSet) Rotate()
Rotate moves the current filter to previous and creates a fresh current. Call this periodically to maintain approximate recency (e.g., every N minutes). After rotation, lookups still check the previous filter, so recently-added items remain "known" for one more rotation period.
type BloomSetOptions ¶
type BloomSetOptions struct {
// Size is the number of bits in the bloom filter.
// Larger size = lower false positive rate but more memory.
// Recommended: 10 * expected_elements for ~1% false positive rate.
Size uint
// HashCount is the number of hash functions to use.
// Recommended: 7 for ~1% false positive rate.
HashCount uint
}
BloomSetOptions contains configuration for creating a BloomSet.
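The sizing guidance above follows the standard bloom filter formulas: for n expected elements and a target false positive rate p, the optimal bit count is m = -n·ln(p)/(ln 2)² and the optimal hash count is k = (m/n)·ln 2. A small sketch (helper name invented for illustration) shows how the "10 bits per element, 7 hashes, ~1%" rule of thumb falls out:

```go
package main

import (
	"fmt"
	"math"
)

// bloomParams returns the bit count m and hash count k that minimise
// the false positive rate p for n expected elements, using
// m = -n*ln(p)/(ln 2)^2 and k = (m/n)*ln 2.
func bloomParams(n, p float64) (m, k uint) {
	mf := math.Ceil(-n * math.Log(p) / (math.Ln2 * math.Ln2))
	kf := math.Round(mf / n * math.Ln2)
	return uint(mf), uint(kf)
}

func main() {
	// For ~32K elements at 1% false positives: roughly 9.6 bits per
	// element (~38 KB per filter) and 7 hash functions.
	m, k := bloomParams(32768, 0.01)
	fmt.Printf("bits: %d (~%d KB), hashes: %d\n", m, m/8/1024, k)
}
```

This is consistent with the defaults described below: two ~40 KB filters and a HashCount of 7.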
func DefaultBloomSetOptions ¶
func DefaultBloomSetOptions() BloomSetOptions
DefaultBloomSetOptions returns sensible defaults for tracking ~32K elements with approximately 1% false positive rate. Memory usage: ~80KB per BloomSet (2 filters of ~40KB each).
type BoundedSet ¶
type BoundedSet[T comparable] struct {
	// contains filtered or unexported fields
}
BoundedSet is a simple set-based collection with a maximum size. When the set reaches capacity, the oldest element is evicted via Pop(). This provides lower memory overhead compared to a full LRU cache when only membership tracking is needed without value storage.
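The membership-only trade-off can be sketched as follows. This is an illustrative stand-in, not the package's code (the real eviction goes through an internal Pop, whose ordering this sketch approximates with an explicit insertion-order slice); values of type struct{} cost no storage, which is where the memory saving over an LRU comes from:

```go
package main

import "fmt"

// boundedSet keeps at most max elements; when full, the oldest
// insertion is evicted first.
type boundedSet[T comparable] struct {
	members map[T]struct{}
	order   []T // insertion order; index 0 is oldest
	max     int
}

func newBoundedSet[T comparable](max int) *boundedSet[T] {
	return &boundedSet[T]{members: make(map[T]struct{}, max), max: max}
}

func (b *boundedSet[T]) Add(elem T) {
	if _, ok := b.members[elem]; ok {
		return // already present
	}
	if len(b.order) == b.max {
		oldest := b.order[0]
		b.order = b.order[1:]
		delete(b.members, oldest)
	}
	b.members[elem] = struct{}{}
	b.order = append(b.order, elem)
}

func (b *boundedSet[T]) Contains(elem T) bool {
	_, ok := b.members[elem]
	return ok
}

func main() {
	s := newBoundedSet[int](2)
	s.Add(1)
	s.Add(2)
	s.Add(3) // at capacity 2, adding 3 evicts the oldest element (1)
	fmt.Println(s.Contains(1), s.Contains(2), s.Contains(3)) // false true true
}
```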
func NewBoundedSet ¶
func NewBoundedSet[T comparable](max int) *BoundedSet[T]
NewBoundedSet creates a new BoundedSet with the specified maximum size.
func (*BoundedSet[T]) Add ¶
func (b *BoundedSet[T]) Add(elem T)
Add adds an element to the set, evicting the oldest element if at capacity.
func (*BoundedSet[T]) Clear ¶
func (b *BoundedSet[T]) Clear()
Clear removes all elements from the set.
func (*BoundedSet[T]) Contains ¶
func (b *BoundedSet[T]) Contains(elem T) bool
Contains returns true if the element exists in the set.
func (*BoundedSet[T]) Len ¶
func (b *BoundedSet[T]) Len() int
Len returns the number of elements in the set.
type LRU ¶
type LRU[K comparable, V any] struct {
	// contains filtered or unexported fields
}
LRU is a thread-safe LRU cache with optional TTL-based expiration.
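The core recency mechanics behind such a cache can be sketched with the classic map + container/list combination. This is an illustrative minimal version (names invented; TTL expiration omitted), not the package's implementation:

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

type entry[K comparable, V any] struct {
	key   K
	value V
}

// lru is a minimal thread-safe LRU: a doubly linked list keeps
// recency order (front = most recent) and a map gives O(1) lookup.
type lru[K comparable, V any] struct {
	mu      sync.Mutex
	maxSize int
	order   *list.List
	items   map[K]*list.Element
}

func newLRU[K comparable, V any](maxSize int) *lru[K, V] {
	return &lru[K, V]{maxSize: maxSize, order: list.New(), items: make(map[K]*list.Element)}
}

func (c *lru[K, V]) Add(key K, value V) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		el.Value.(*entry[K, V]).value = value
		c.order.MoveToFront(el)
		return
	}
	c.items[key] = c.order.PushFront(&entry[K, V]{key, value})
	if c.maxSize > 0 && c.order.Len() > c.maxSize {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry[K, V]).key)
	}
}

func (c *lru[K, V]) Get(key K) (V, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el) // Get refreshes recency; Peek would not
		return el.Value.(*entry[K, V]).value, true
	}
	var zero V
	return zero, false
}

func main() {
	c := newLRU[string, int](2)
	c.Add("a", 1)
	c.Add("b", 2)
	c.Get("a")    // refresh "a", leaving "b" least recently used
	c.Add("c", 3) // evicts "b"
	_, ok := c.Get("b")
	fmt.Println(ok) // false
}
```

The Get/Peek distinction documented below maps directly onto whether the lookup calls MoveToFront.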
func NewLRU ¶
func NewLRU[K comparable, V any](opts LRUOptions) *LRU[K, V]
NewLRU creates a new LRU cache with the given options. If opts.MaxSize <= 0, the cache has no size limit. If opts.TTL is 0, entries never expire based on time.
func (*LRU[K, V]) Add ¶
func (c *LRU[K, V]) Add(key K, value V)
Add adds or updates a value in the cache.
func (*LRU[K, V]) AddBatch ¶
func (c *LRU[K, V]) AddBatch(keys []K, values []V)
AddBatch adds multiple key-value pairs to the cache. It takes a single write lock for all additions, reducing lock contention compared to calling Add in a loop. The keys and values slices must have the same length.
func (*LRU[K, V]) Peek ¶
func (c *LRU[K, V]) Peek(key K) (V, bool)
Peek retrieves a value from the cache without updating LRU ordering. Uses a read lock for better concurrency.
func (*LRU[K, V]) PeekMany ¶
func (c *LRU[K, V]) PeekMany(keys []K) []V
PeekMany retrieves multiple values from the cache without updating LRU ordering. Uses a single read lock for all lookups, providing better concurrency than GetMany when LRU ordering updates are not needed. Returns only found values (indices don't correspond to input keys). Use PeekManyWithKeys if you need key-value pairs.
func (*LRU[K, V]) PeekManyWithKeys ¶
func (c *LRU[K, V]) PeekManyWithKeys(keys []K) ([]K, []V)
PeekManyWithKeys retrieves multiple key-value pairs from the cache without updating LRU ordering. Returns parallel slices of found keys and values. Uses a single read lock for all lookups.
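The difference between the two batch-lookup result shapes is easy to get wrong, so here is a sketch of the documented semantics over a plain map (free functions invented for illustration; the real methods also handle locking and TTL):

```go
package main

import "fmt"

// peekMany returns only the values for keys actually present, so
// output indices do not line up with input indices.
func peekMany[K comparable, V any](m map[K]V, keys []K) []V {
	out := make([]V, 0, len(keys))
	for _, k := range keys {
		if v, ok := m[k]; ok {
			out = append(out, v)
		}
	}
	return out
}

// peekManyWithKeys returns parallel slices of the found keys and
// their values, preserving the association peekMany loses.
func peekManyWithKeys[K comparable, V any](m map[K]V, keys []K) ([]K, []V) {
	ks := make([]K, 0, len(keys))
	vs := make([]V, 0, len(keys))
	for _, k := range keys {
		if v, ok := m[k]; ok {
			ks = append(ks, k)
			vs = append(vs, v)
		}
	}
	return ks, vs
}

func main() {
	m := map[string]int{"a": 1, "c": 3}
	fmt.Println(peekMany(m, []string{"a", "b", "c"})) // [1 3]: "b" is skipped
	ks, vs := peekManyWithKeys(m, []string{"a", "b", "c"})
	fmt.Println(ks, vs) // [a c] [1 3]
}
```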
func (*LRU[K, V]) Update ¶
func (c *LRU[K, V]) Update(key K, updateFn func(V) V)
Update atomically updates a value in the cache using the provided update function. The update function receives the current value (or zero value if not found) and returns the new value to store. This is thread-safe and prevents race conditions in get-modify-add patterns.
type LRUOptions ¶
LRUOptions contains configuration for LRU caches with TTL.