The Trinity Beast — ElastiCache Key Definitions

Complete reference for all key patterns stored in ElastiCache (Valkey). Covers price cache, API key cache, cluster stats, usage log indexes, adaptive governor, rate limiting, KCC dashboard, and deduplication keys.

Engine: Valkey 7.2 | Node: cache.r7g.2xlarge | Updated: May 4, 2026 | Version: v16

Table of Contents

  1. Connection Details
  2. Price Cache
  3. API Key Cache
  4. Application Configuration
  5. Usage Log Indexes
  6. Usage Log Data
  7. Report Usage Logs
  8. Cluster Stats (Metrics Publisher)
  9. Adaptive Governor
  10. Rate Limit Token Buckets
  11. KCC Daily Dashboard
  12. Session Deduplication
  13. Webhook Deduplication
  14. High-Water Marks (Sync Job)

1. Connection Details

| Property | Value |
| --- | --- |
| Endpoint | master.trinity-beast-cache.ptsbmm.use2.cache.amazonaws.com:6379 |
| Node Type | cache.r7g.2xlarge (8 vCPU, 52.8 GB) |
| Engine | Valkey 7.2 |
| TLS | Enabled |
| Client Library | go-redis UniversalClient (works with both cluster and standalone modes) |
| Connection Pool | 300 per container (1,200 total across 4 ECS containers) |

2. Price Cache

Cached cryptocurrency prices written by the Kraken prewarm batch and REST fallback path. Read on every LPO price request when the Tier 2 (in-memory) cache misses.

price:{ASSET}

STRING (JSON) TTL: prewarm_interval + 60s

| Field | Type | Description |
| --- | --- | --- |
| asset | string | Cryptocurrency ticker (BTC, ETH, SOL, etc.) |
| price | float | Current price in USD |
| timestamp | string (ISO 8601) | When the price was fetched |
| readable_timestamp | string | Human-readable timestamp |
| source | string | Price source (coinbase-ws, gemini-ws, kraken-ws, gateio-ws, bybit-ws, okx-ws, kraken-prewarm) |
| latency_ms | integer | Source fetch latency in milliseconds |
| cached | boolean | Always false when written; set to true when served from cache |

Written by: Kraken prewarm batch, FlushToElastiCache (30s cycle from all 6 WebSocket feeds), REST fallback cache
Read by: LPO price handler (Tier 2 cache miss path), Webhook delivery engine (price resolution)
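
A consumer can model this payload as a small Go struct. A minimal sketch, assuming the raw JSON string has already been fetched from the cache (the struct and function names, and the sample values, are illustrative, not the production types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PricePayload mirrors the documented price:{ASSET} JSON fields.
type PricePayload struct {
	Asset             string  `json:"asset"`
	Price             float64 `json:"price"`
	Timestamp         string  `json:"timestamp"`
	ReadableTimestamp string  `json:"readable_timestamp"`
	Source            string  `json:"source"`
	LatencyMs         int     `json:"latency_ms"`
	Cached            bool    `json:"cached"`
}

// decodePrice parses a cached price entry and flips cached to true,
// matching the documented serve-from-cache behavior.
func decodePrice(raw string) (PricePayload, error) {
	var p PricePayload
	if err := json.Unmarshal([]byte(raw), &p); err != nil {
		return p, err
	}
	p.Cached = true
	return p, nil
}

func main() {
	raw := `{"asset":"BTC","price":64250.5,"source":"kraken-ws","latency_ms":12,"cached":false}`
	p, err := decodePrice(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(p.Asset, p.Source, p.Cached)
}
```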

3. API Key Cache

Full API key records cached as hashes. Populated by the sync job on a regular cycle and on cache miss by the API key store. Every inbound API request reads this key for authentication and rate limit enforcement.

apikey:{api_key}

HASH TTL: 24 hours (set by sync job)

| Field | Type | Description |
| --- | --- | --- |
| id | string (uuid) | API key record ID |
| user_id | string (uuid) | Owner user ID |
| name | string | Subscriber name |
| tier | string | Subscription tier |
| query_limit | integer | Monthly query limit |
| current_usage | integer | Current month's query count |
| rate_limit_qps | integer | Queries per second limit |
| burst_limit | integer | Token bucket burst capacity |
| burst_tokens | float | Current token balance |
| api_key | string | The API key string |
| minimum_wait_seconds | float | Minimum time between throttled requests |
| revoked | boolean | Whether the key is disabled |
| lrs_enabled | boolean | Whether unlimited LRS reports are enabled |
| last_used | string (ISO 8601) | Last API call timestamp |
| created_at | string (ISO 8601) | Key creation date |
| last_success | string (ISO 8601) | Last successful API call |

Written by: Sync job, API key store on cache miss
Read by: Every API request for key validation
Invalidated by: /admin/invalidate-key endpoint
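
Since HGETALL returns every field as a string, the validation path has to parse numeric fields before checking limits. A simplified sketch of checks against the documented fields (the function and its policy are illustrative, not the production middleware):

```go
package main

import (
	"fmt"
	"strconv"
)

// allowRequest applies simplified checks against an apikey:{api_key}
// hash as returned by HGETALL, where every value arrives as a string.
func allowRequest(h map[string]string) (bool, string) {
	if h["revoked"] == "true" {
		return false, "key revoked"
	}
	usage, _ := strconv.Atoi(h["current_usage"])
	limit, _ := strconv.Atoi(h["query_limit"])
	if limit > 0 && usage >= limit {
		return false, "monthly limit reached"
	}
	return true, ""
}

func main() {
	h := map[string]string{
		"revoked":       "false",
		"current_usage": "4200",
		"query_limit":   "10000",
	}
	ok, reason := allowRequest(h)
	fmt.Println(ok, reason)
}
```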

4. Application Configuration

All runtime application parameters from the Aurora application_parameters table, cached as a single hash. No TTL — persists until explicitly overwritten. Read on every config poll cycle by all containers.

app:config

HASH TTL: none (persistent)

Contains all 56+ application parameters from Aurora. Example fields include:

| Example Field | Description |
| --- | --- |
| adaptive_max_concurrent | Max concurrent requests before adaptive throttling |
| cache_pool_size | ElastiCache connection pool size per container |
| coinbase_prewarm_assets | Comma-separated list of assets to prewarm from Coinbase |
| gemini_prewarm_assets | Comma-separated list of assets to prewarm from Gemini |
| kraken_prewarm_assets | Comma-separated list of assets to prewarm from Kraken |
| log_level | Application log level (debug, info, warn, error) |
| cache_ttl_seconds | Default cache TTL for price entries |

Written by: Sync job, /admin/reload-params, /admin/system-mode
Read by: LoadApplicationParameters on every config poll cycle
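
The *_prewarm_assets values are stored as comma-separated strings, so readers must split and trim them. A small illustrative helper (the function name is an assumption, not an actual symbol from the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// prewarmAssets splits a comma-separated *_prewarm_assets value as
// stored in the app:config hash, dropping whitespace and empty items.
func prewarmAssets(v string) []string {
	var out []string
	for _, a := range strings.Split(v, ",") {
		if a = strings.TrimSpace(a); a != "" {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	fmt.Println(prewarmAssets("BTC, ETH,SOL, "))
}
```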

5. Usage Log Indexes

Sorted sets that index usage log entries by timestamp. The global index plus per-API-key and per-asset indexes enable fast range queries for LRS reports without scanning the full dataset.

usage_logs:index

SORTED SET TTL: managed by sync job (93-day retention)

Score: Unix timestamp  |  Members: Usage log UUIDs

Global index of all usage log entries. Used by LRS usage report queries for date-range filtering.

usage_logs:api_key:{api_key_id}

SORTED SET TTL: managed by sync job (93-day retention)

Score: Unix timestamp  |  Members: Usage log UUIDs

Per-API-key usage log index. Enables efficient filtering of usage logs by subscriber.

usage_logs:asset:{asset}

SORTED SET TTL: managed by sync job (93-day retention)

Score: Unix timestamp  |  Members: Usage log UUIDs

Per-asset usage log index. Enables efficient filtering of usage logs by cryptocurrency asset.
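
Because these indexes score members by Unix timestamp, a date-range report reduces to converting the requested ISO 8601 window into numeric scores for a ZRANGEBYSCORE call. An illustrative helper (the function name is an assumption):

```go
package main

import (
	"fmt"
	"time"
)

// scoreRange converts an ISO 8601 window into the Unix-timestamp
// scores used by the usage_logs:* sorted sets; a handler would pass
// these as the min/max arguments of ZRANGEBYSCORE.
func scoreRange(from, to string) (int64, int64, error) {
	f, err := time.Parse(time.RFC3339, from)
	if err != nil {
		return 0, 0, err
	}
	t, err := time.Parse(time.RFC3339, to)
	if err != nil {
		return 0, 0, err
	}
	return f.Unix(), t.Unix(), nil
}

func main() {
	lo, hi, err := scoreRange("2026-05-01T00:00:00Z", "2026-05-02T00:00:00Z")
	if err != nil {
		panic(err)
	}
	fmt.Println(lo, hi)
}
```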

6. Usage Log Data

Individual usage log entries stored as hashes. Each entry is a complete snapshot of a single LPO API request. Written by the sync job from Aurora and retained for 93 days.

usage_log:{id}

HASH TTL: 93 days

| Field | Type | Description |
| --- | --- | --- |
| api_key_id | string | API key that made the request |
| asset | string | Cryptocurrency asset queried |
| price | float | Price returned |
| source | string | Price source |
| cached | boolean | Whether served from cache |
| latency_ms | integer | Source fetch latency |
| duration_ms | integer | Total request processing time |
| ip_address | string | Client IP address |
| timestamp | string (ISO 8601) | When the request occurred |
| readable_timestamp | string | Human-readable timestamp |
| cache_age_seconds | float | Age of cached price at time of request |
| cluster_node | string | ECS container that handled the request |
| region | string | AWS region |
| monthly_usage | integer | Subscriber's usage count at time of request |
| monthly_limit | integer | Subscriber's limit at time of request |

Written by: Sync job (from Aurora)
Read by: LRS report handlers

7. Report Usage Logs

Indexes and data for LRS report request logs. Mirrors the Aurora report_usage_logs table in ElastiCache for fast report-on-report queries.

report_usage_logs:index

SORTED SET TTL: managed by sync job

Score: Unix timestamp  |  Members: Report usage log UUIDs

Global index of all report usage entries.

report_usage_logs:api_key:{api_key_id}

SORTED SET TTL: managed by sync job

Score: Unix timestamp  |  Members: Report usage log UUIDs

Per-API-key report usage index for subscriber-scoped report queries.

report_usage_log:{id}

HASH TTL: managed by sync job

| Field | Type | Description |
| --- | --- | --- |
| api_key_id | string | API key that requested the report |
| report_type | string | Report type (usage/summary/report-usage/report-summary) |
| format | string | Output format (json/csv/tsv/text) |
| filters | string (JSON) | Query filters applied |
| row_count | integer | Number of rows returned |
| duration_ms | integer | Processing time |
| ip_address | string | Client IP address |
| timestamp | string (ISO 8601) | When the report was requested |
| readable_timestamp | string | Human-readable timestamp |
| cluster_node | string | ECS container that handled the request |
| region | string | AWS region |
| protocol | string | TCP or UDP |
| status_code | integer | HTTP status returned |

Written by: Sync job (from Aurora)
Read by: LRS report-usage and report-summary handlers

8. Cluster Stats (Metrics Publisher)

Each ECS container publishes a JSON snapshot of its runtime metrics to ElastiCache every 3 seconds. Read by /admin/cluster-stats and /public/status for cluster-wide aggregation.

cluster:stats:{node_name}

STRING (JSON) TTL: 30 seconds

Node names: BeastMain, BeastMirror, BeastLRS, BeastWebhook

| Field | Type | Description |
| --- | --- | --- |
| uptime_seconds | float | Container uptime in seconds |
| tcp_requests | integer | Total TCP requests handled |
| udp_requests | integer | Total UDP requests handled |
| lrs_requests | integer | Total LRS report requests |
| total_rps | float | Requests per second (all protocols) |
| syncmap_hits | integer | Tier 1 (local sync.Map) cache hits |
| elasticache_hits | integer | Tier 2 (ElastiCache) cache hits |
| cache_misses | integer | Full cache misses (REST fallback) |
| errors_5xx | integer | Server error count |
| errors_4xx | integer | Client error count |
| rate_limit_hits | integer | Rate limit rejections |
| bg_work_dropped | integer | Background tasks dropped (pool full) |

Written by: Metrics publisher goroutine (every 3 seconds per container)
Read by: /admin/cluster-stats, /public/status, KCC daily dashboard
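
A cluster-wide view is just the field-by-field sum of the per-node snapshots. A minimal sketch of that aggregation over a subset of the documented fields (the type and function names are illustrative, not the real handler):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NodeStats mirrors a subset of the cluster:stats:{node_name} JSON.
type NodeStats struct {
	TCPRequests     int     `json:"tcp_requests"`
	UDPRequests     int     `json:"udp_requests"`
	TotalRPS        float64 `json:"total_rps"`
	SyncmapHits     int     `json:"syncmap_hits"`
	ElasticacheHits int     `json:"elasticache_hits"`
	CacheMisses     int     `json:"cache_misses"`
}

// aggregate sums per-node snapshots into cluster-wide totals.
func aggregate(raw []string) (NodeStats, error) {
	var out NodeStats
	for _, r := range raw {
		var s NodeStats
		if err := json.Unmarshal([]byte(r), &s); err != nil {
			return out, err
		}
		out.TCPRequests += s.TCPRequests
		out.UDPRequests += s.UDPRequests
		out.TotalRPS += s.TotalRPS
		out.SyncmapHits += s.SyncmapHits
		out.ElasticacheHits += s.ElasticacheHits
		out.CacheMisses += s.CacheMisses
	}
	return out, nil
}

func main() {
	snaps := []string{
		`{"tcp_requests":100,"total_rps":12.5}`,
		`{"tcp_requests":40,"total_rps":7.5}`,
	}
	total, err := aggregate(snaps)
	if err != nil {
		panic(err)
	}
	fmt.Println(total.TCPRequests, total.TotalRPS)
}
```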

9. Adaptive Governor

Distributed counters and flags used by the adaptive governor to coordinate throttling decisions across all ECS containers. Uses a hash tag {adaptive:lpo} to ensure all keys land on the same shard for atomic operations.

{adaptive:lpo}:successes

STRING (integer) TTL: managed by governor cycle
Distributed success counter across all containers. Incremented on each successful price fetch. Reset at the start of each governor evaluation window.

{adaptive:lpo}:total

STRING (integer) TTL: managed by governor cycle
Distributed total request counter across all containers. Incremented on every price request (success or failure). Used with successes to calculate the success rate.

{adaptive:lpo}:throttle

STRING (boolean) TTL: managed by governor cycle
Distributed throttle flag. When set to true, all containers reduce outbound requests to protect upstream price sources. Evaluated on every inbound request.
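
Because the hash tag pins all three keys to one shard, the governor can read both counters together and derive the window's success rate. A sketch of that evaluation (the 0.90 threshold is an assumed example value, not the production setting):

```go
package main

import "fmt"

// shouldThrottle computes the window success rate from the two
// distributed counters and compares it to a minimum rate.
func shouldThrottle(successes, total int64, minRate float64) bool {
	if total == 0 {
		return false // no traffic in the window: nothing to protect
	}
	return float64(successes)/float64(total) < minRate
}

func main() {
	// 850 successes out of 1000 requests is an 85% success rate,
	// below the example 90% threshold, so the flag would be set.
	fmt.Println(shouldThrottle(850, 1000, 0.90))
}
```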

10. Rate Limit Token Buckets

Per-API-key rate limit state stored in ElastiCache for cluster-wide enforcement. The token bucket algorithm tracks remaining tokens and last refill time. Read and updated on every API request.

ratelimit:{api_key}

STRING (JSON) TTL: 5 minutes (public tiers) / 60 minutes (partner)

| Field | Type | Description |
| --- | --- | --- |
| tokens | float | Current token balance in the bucket |
| last_refill | float (unix) | Timestamp of last token refill |

Written by: Rate limiter middleware on every API request
Read by: Rate limiter middleware for token check
TTL policy: 5 minutes for public tiers (free, pro, enterprise, unlimited, lifetime), 60 minutes for partner and stress tiers

11. KCC Daily Dashboard

Stores the collected daily infrastructure metrics as a single JSON blob. Written by bash scripts/kcc.sh daily-collect and read by the CLI daily command and the KCC Live Dashboard.

kcc:daily

STRING (JSON) TTL: 24 hours

Contains a comprehensive snapshot of all infrastructure metrics: service health, ECS cluster stats, Valkey metrics, Lambda status, nightly sync results, SQS queue depth, and 7-day website analytics.

Written by: KCC daily-collect command (via /admin/valkey endpoint)
Read by: KCC daily command, KCC Live Dashboard (docs/dashboard.html)

12. Session Deduplication

Prevents double-processing of Stripe checkout session completion events. The receipt Lambda writes the full response after processing and checks for existence before re-processing.

receipt:session:{session_id}

STRING (JSON) TTL: 1 hour

Contains: Full Lambda response JSON from the receipt processing

Written by: Receipt Lambda after processing a checkout session
Read by: Receipt Lambda to prevent double-processing of the same session

13. Webhook Deduplication

Prevents duplicate processing of Stripe webhook events. Stripe may deliver the same event multiple times; this key ensures idempotent handling.

webhook:event:{event_id}

STRING TTL: 24 hours
Written by: Webhook handler after processing a Stripe event
Read by: Webhook handler to skip duplicate Stripe events
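
Both deduplication keys follow the same check-then-set idempotency pattern. A minimal runnable sketch using an in-memory map as a stand-in for the cache client (in production the guard would be made atomic with SET NX plus the documented TTL; the names here are illustrative):

```go
package main

import "fmt"

// kv is an in-memory stand-in for the cache so the pattern runs
// without a live Valkey node.
type kv map[string]string

// processOnce skips work when the key already exists (a duplicate
// delivery) and otherwise runs the work and stores its result.
func processOnce(store kv, key string, work func() string) (result string, ran bool) {
	if v, ok := store[key]; ok {
		return v, false
	}
	v := work()
	store[key] = v
	return v, true
}

func main() {
	store := kv{}
	calls := 0
	work := func() string { calls++; return "receipt-ok" }
	processOnce(store, "receipt:session:cs_123", work)
	// Second delivery of the same session is skipped.
	out, ran := processOnce(store, "receipt:session:cs_123", work)
	fmt.Println(out, ran, calls)
}
```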

14. High-Water Marks (Sync Job)

Tracks the last-synced timestamp for incremental data synchronization from Aurora to ElastiCache. The sync job reads these marks to determine which new records need to be copied.

sync:hwm:usage_logs

STRING (ISO timestamp) TTL: none (persistent)
Last synced usage_logs timestamp. The sync job queries Aurora for all usage_logs with timestamp > this value.

sync:hwm:report_usage_logs

STRING (ISO timestamp) TTL: none (persistent)
Last synced report_usage_logs timestamp. The sync job queries Aurora for all report_usage_logs with timestamp > this value.
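
The incremental pass these marks enable can be sketched as: select records with a timestamp strictly greater than the stored mark, then advance the mark to the newest timestamp seen. ISO 8601 timestamps in a single zone and format sort lexicographically in chronological order, which the sketch relies on (the function name is illustrative, not the sync job's actual code):

```go
package main

import "fmt"

// advanceHWM returns the records newer than the mark and the new
// high-water mark to store back.
func advanceHWM(mark string, timestamps []string) ([]string, string) {
	var newer []string
	newMark := mark
	for _, ts := range timestamps {
		if ts > mark { // same-format ISO 8601 compares chronologically
			newer = append(newer, ts)
			if ts > newMark {
				newMark = ts
			}
		}
	}
	return newer, newMark
}

func main() {
	rows := []string{
		"2026-05-04T09:00:00Z",
		"2026-05-04T10:30:00Z",
		"2026-05-04T11:00:00Z",
	}
	newer, mark := advanceHWM("2026-05-04T10:00:00Z", rows)
	fmt.Println(len(newer), mark)
}
```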