The Trinity Beast — Application Parameters

Complete reference for all 46+ runtime parameters — system mode profiles, cache tuning, DB pool settings, SQS usage-log batching, and price source configuration.

Parameters: 46+ | Profiles: 16 | Storage: Aurora application_parameters | Updated: May 2026

Table of Contents

  1. Overview
  2. System Mode Profiles
  3. Complete Parameter Reference
  4. Parameter Categories
    1. Logging & Diagnostics
    2. Cache & Prewarming
    3. Database Connection Pool
    4. SQS Queued Write Pipeline
    5. Price Source Configuration
    6. Rate Limiting & API
    7. System Configuration
    8. UDP Server Tuning (v8)

Overview

Application parameters control the runtime behavior of The Trinity Beast system. All parameters are stored in the application_parameters table in Aurora (the authoritative source) and cached in ElastiCache as the app:config hash for fast reads.

  • Startup: Parameters are loaded from ElastiCache first (fast path). If ElastiCache is unavailable, Aurora is queried directly (fallback).
  • Polling: Parameters are reloaded periodically based on application_parameter_interval_minutes (default: every 5 minutes).
  • Write-through: After loading from Aurora, parameters are written through to ElastiCache so subsequent reads are fast.
  • Hot-reloadable: Most parameters take effect within one polling cycle — no container restart needed. DB pool settings are re-applied automatically on each reload.
  • Restart required: Some parameters are only read once at container startup. Changing these in Aurora requires a container restart (--force-new-deployment) to take effect. See the table below.
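The flow above reduces to a short read-through routine. A minimal sketch, assuming a go-redis client and a database/sql handle (the function name and wiring are illustrative, not the actual implementation):

```go
// Hypothetical sketch of the parameter load path: ElastiCache fast path,
// Aurora fallback, write-through. Client wiring is illustrative.
package params

import (
	"context"
	"database/sql"
	"log"

	"github.com/redis/go-redis/v9"
)

func loadParameters(ctx context.Context, rdb *redis.Client, db *sql.DB) (map[string]string, error) {
	// Fast path: read the app:config hash from ElastiCache.
	if params, err := rdb.HGetAll(ctx, "app:config").Result(); err == nil && len(params) > 0 {
		return params, nil
	}
	// Fallback: Aurora is the authoritative source.
	rows, err := db.QueryContext(ctx, `SELECT key, value FROM application_parameters`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	params := make(map[string]string)
	for rows.Next() {
		var k, v string
		if err := rows.Scan(&k, &v); err != nil {
			return nil, err
		}
		params[k] = v
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	// Write-through so subsequent reads take the fast path again.
	fields := make([]interface{}, 0, len(params)*2)
	for k, v := range params {
		fields = append(fields, k, v)
	}
	if err := rdb.HSet(ctx, "app:config", fields...).Err(); err != nil {
		log.Printf("write-through to ElastiCache failed: %v", err) // non-fatal
	}
	return params, nil
}
```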

⚠️ Restart Required vs Hot-Reloadable

🔄 Hot-Reloadable (no restart)

  • cache_ttl_seconds, config_poll_interval_seconds
  • qps, burst, log_level
  • sqs_batch_size, sqs_flush_ms, sqs_buffer_size, sqs_timeout_ms
  • db_max_open_conns, db_max_idle_conns, db_conn_max_lifetime_minutes, db_conn_max_idle_time_minutes
  • admin_api_key, demo_api_key
  • prewarm_assets_list, prewarm_interval_minutes, prewarm_delay_ms
  • coinbase_prewarm_assets, gemini_prewarm_assets, kraken_prewarm_assets
  • gateio_prewarm_assets, bybit_prewarm_assets, okx_prewarm_assets
  • price_source_* (all price source params)
  • usage_log_* (all batch/flush params)
  • adaptive_* (all governor params)
  • payment_grace_period_days, usage_warning_green_pct, usage_warning_yellow_pct, usage_warning_red_pct

🔁 Restart Required

  • cache_max_retries
  • http_read_timeout_seconds, http_write_timeout_seconds, http_idle_timeout_seconds, http_read_header_timeout_seconds, http_max_header_bytes
  • udp_read_buffer_bytes, udp_write_buffer_bytes, udp_reader_goroutines
  • udp_max_concurrent_lpo, udp_max_concurrent_lrs
  • udp_workers_per_socket, udp_batch_size, udp_pre_serialize_responses
  • cache_pool_size, cache_min_idle_conns
  • cache_dial_timeout_ms, cache_read_timeout_ms, cache_write_timeout_ms

Why? Hot-reloadable parameters update the in-memory Config struct and are read on every request. Restart-required parameters are consumed once, when the HTTP server, UDP listeners, or ElastiCache client are created at startup — none of these objects can be reconfigured after creation.
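A sketch of the distinction, with hypothetical names: the poller swaps an atomic Config pointer that request paths re-read, while the HTTP server copies its timeout out of the config exactly once.

```go
// Sketch (hypothetical names) of why some parameters hot-reload and others
// don't: the live Config is swapped atomically each poll cycle, but the HTTP
// server copies its timeouts out of it exactly once, at creation.
package config

import (
	"net/http"
	"sync/atomic"
	"time"
)

type Config struct {
	CacheTTLSeconds        int // hot-reloadable: read on every request
	HTTPReadTimeoutSeconds int // restart-required: consumed once below
}

var current atomic.Pointer[Config] // the poller calls current.Store(newCfg)

// CacheTTL re-reads the live config, so a poll-cycle update to
// cache_ttl_seconds takes effect on the very next request.
func CacheTTL() time.Duration {
	return time.Duration(current.Load().CacheTTLSeconds) * time.Second
}

// NewServer copies http_read_timeout_seconds once; later Config swaps never
// touch the running server, hence the container restart requirement.
func NewServer() *http.Server {
	cfg := current.Load()
	return &http.Server{
		Addr:        ":8080",
		ReadTimeout: time.Duration(cfg.HTTPReadTimeoutSeconds) * time.Second,
	}
}
```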

```sql
-- Aurora: authoritative source
SELECT key, value FROM application_parameters;

-- ElastiCache: fast cache (app:config hash)
HGETALL app:config

-- Update a parameter (takes effect within one poll cycle)
INSERT INTO application_parameters (key, value)
VALUES ('cache_ttl_seconds', '30')
ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value;
```

System Mode Profiles

The /admin/system-mode?mode=<profile_name> endpoint applies a predefined profile from the application_parameter_profiles table in Aurora. Profiles can be added or modified via SQL without code deploys. There are currently 16 profiles covering production, demo, debug, and stress testing across all protocol/topology combinations.

Note: System mode also updates the demo API key's rate_limit_qps and burst_limit in the api_keys table and invalidates the demo key cache. QPS and Burst values shown below are applied to the demo API key.
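For illustration, applying a profile from an admin script might look like the following. The base URL is a placeholder and the use of POST is an assumption (the docs above specify only the path, the mode query parameter, and X-Admin-Key authentication):

```go
// Hypothetical admin-side call to apply a profile. Base URL and HTTP method
// are assumptions; the mode query parameter and X-Admin-Key header come
// from the documentation above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	req, err := http.NewRequest(http.MethodPost,
		"https://trinity.example.com/admin/system-mode?mode=stress", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-Admin-Key", os.Getenv("ADMIN_API_KEY"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```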

Key tuning rules applied across all profiles:

  • db_max_idle always equals db_max_open — prevents connection churn under load (p99 fix: 1,266ms → 8.9ms)
  • cache_min_idle ≈ 33% of cache_pool_size — optimal pre-warm ratio
  • cache_pool_size is always a multiple of 3 — one pool per container
  • ALB/NLB profiles use smaller pools (1,998) since load is distributed across 3 containers
  • UDP profiles use smaller batches and longer flush intervals — frees CPU for packet processing
  • Combined profiles use db_max_open=180 — headroom for LRS report queries alongside LPO writes
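The cache-pool rules above are plain arithmetic; a small sketch of the ratio (the helper is illustrative):

```go
// Worked check of the cache-pool rules: cache_min_idle sits at ~33% of
// cache_pool_size. Exact thirds for the big pools (2,997 -> 999,
// 1,998 -> 666); the small demo/fresh-price pools round up by one
// (150 -> 51, 600 -> 201).
package sizing

// MinIdleFor derives the ~33% pre-warm target from a pool size.
func MinIdleFor(poolSize int) int {
	return poolSize / 3
}
```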

Production & Operational Profiles

| Parameter | demo | debug | fresh-price |
|---|---|---|---|
| Purpose | Live demos | Troubleshooting | Production |
| qps | 3 | 3 | 1,000 |
| burst | 3 | 3 | 1,000 |
| log_level | info | debug | error |
| cache_ttl | 30 | 60 | 13 |
| config_poll | 90 | 30 | 300 |
| sqs_batch_size | 10 | 10 | 10 |
| sqs_flush_ms | 500 | 500 | 100 |
| sqs_buffer_size | 10,000 | 10,000 | 50,000 |
| sqs_timeout_ms | 5,000 | 5,000 | 3,000 |
| db_max_open | 30 | 30 | 150 |
| db_max_idle | 30 | 30 | 150 |
| db_conn_lifetime_min | 10 | 10 | 10 |
| db_conn_idle_min | 5 | 5 | 5 |
| cache_pool_size | 150 | 150 | 600 |
| cache_min_idle | 51 | 51 | 201 |
| cache_max_retries | 3 | 3 | 1 |
| cache_dial_ms | 3,000 | 3,000 | 500 |
| cache_read_ms | 3,000 | 3,000 | 500 |
| cache_write_ms | 3,000 | 3,000 | 500 |

Stress Test Profiles — LPO Only (APP_SERVER)

| Parameter | stress | tcp-direct | tcp-alb | udp-direct | udp-nlb |
|---|---|---|---|---|---|
| Topology | Baseline | 1 node | 3 nodes | 1 node | 3 nodes |
| qps | 100K | 100K | 100K | 100K | 100K |
| burst | 100K | 100K | 100K | 100K | 100K |
| log_level | error | error | error | error | error |
| cache_ttl | 300 | 300 | 300 | 300 | 300 |
| sqs_batch_size | 10 | 10 | 10 | 10 | 10 |
| sqs_flush_ms | 50 | 50 | 50 | 50 | 50 |
| sqs_buffer_size | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 |
| sqs_timeout_ms | 3,000 | 3,000 | 3,000 | 3,000 | 3,000 |
| db_max_open | 150 | 150 | 150 | 150 | 150 |
| db_max_idle | 150 | 150 | 150 | 150 | 150 |
| db_conn_idle_min | 10 | 10 | 10 | 10 | 10 |
| cache_pool_size | 2,997 | 2,997 | 1,998 | 2,997 | 1,998 |
| cache_min_idle | 999 | 999 | 666 | 999 | 666 |

Values that differ from the baseline stress profile are the ALB/NLB pool settings: cache_pool_size drops to 1,998 and cache_min_idle to 666 because load is distributed across 3 containers. UDP profiles use smaller batches and longer flush intervals to free CPU for packet processing.

Stress Test Profiles — LRS Only

| Parameter | lrs-direct | lrs-alb |
|---|---|---|
| Topology | 1 node, LRS only | 3 nodes, LRS only |
| qps | 100K | 100K |
| burst | 100K | 100K |
| log_level | error | error |
| cache_ttl | 300 | 300 |
| sqs_batch_size | 10 | 10 |
| sqs_flush_ms | 50 | 50 |
| sqs_buffer_size | 100,000 | 100,000 |
| sqs_timeout_ms | 3,000 | 3,000 |
| db_max_open | 180 | 180 |
| db_max_idle | 180 | 180 |
| db_conn_idle_min | 10 | 10 |
| cache_pool_size | 2,997 | 1,998 |
| cache_min_idle | 999 | 666 |
| cache_read_ms | 1,000 | 1,000 |

LRS profiles use db_max_open=180 for read-heavy report queries and cache_read_ms=1,000 because reports fetch more data per ElastiCache call than single price lookups. Write volume is low (report_usage_logs only), so flush intervals are relaxed.

Stress Test Profiles — Combined LPO + LRS (APP_REPORT_SERVER)

| Parameter | combined-direct | combined-alb |
|---|---|---|
| Topology | 1 node, LPO + LRS | 3 nodes, LPO + LRS |
| qps | 100K | 100K |
| burst | 100K | 100K |
| log_level | error | error |
| cache_ttl | 300 | 300 |
| sqs_batch_size | 10 | 10 |
| sqs_flush_ms | 50 | 50 |
| sqs_buffer_size | 100,000 | 100,000 |
| sqs_timeout_ms | 3,000 | 3,000 |
| db_max_open | 180 | 180 |
| db_max_idle | 180 | 180 |
| db_conn_idle_min | 10 | 10 |
| cache_pool_size | 2,997 | 1,998 |
| cache_min_idle | 999 | 666 |

Combined profiles use db_max_open=180 to give LRS report queries headroom alongside LPO batch writes. This is the production configuration (APP_REPORT_SERVER).

Run 17 Profiles — v8 UDP Optimizations

| Parameter | r17-lb-all | r17-tcp-direct | r17-udp-direct | r17-all-direct |
|---|---|---|---|---|
| Phase | 1: Load Balancers | 2: TCP Direct | 3: UDP Direct | 4: All Direct |
| Protocols | All 4 via ALB+NLB | TCP-LPO only | UDP-LPO only | All 4 direct |
| db_max_open | 150 | 150 | 150 | 180 |
| sqs_batch_size | 10 | 10 | 10 | 10 |
| sqs_flush_ms | 50 | 50 | 50 | 50 |
| sqs_buffer_size | 100,000 | 100,000 | 100,000 | 100,000 |
| sqs_timeout_ms | 3,000 | 3,000 | 3,000 | 3,000 |
| cache_pool_size | 1,998 | 2,997 | 2,997 | 2,997 |
| cache_read_ms | 1,000 | 500 | 500 | 1,000 |
| udp_reader_goroutines | 8 | 8 | 8 | 8 |
| udp_read_buffer_mb | 32 | 32 | 32 | 32 |
| udp_workers_per_socket | 128 | 128 | 128 | 128 |
| udp_batch_size | 32 | 32 | 32 | 32 |
| udp_pre_serialize | true | true | true | true |

Run 17 profiles include v8 UDP tuning columns. All phases use SO_REUSEPORT (8 sockets), recvmmsg batch reads (32 datagrams/syscall), 32 MB socket buffers, and pre-serialized response cache. Phase 3 (UDP direct) uses minimal batch/flush settings to maximize CPU for packet processing — this is the 200K+ UDP chase.

Complete Parameter Reference

All application parameters recognized by the system. Parameters with values in the Demo/Perf/Debug columns are updated by the /admin/system-mode endpoint. The v8 UDP socket and buffer parameters (udp_reader_goroutines, udp_read_buffer_bytes, udp_write_buffer_bytes) are detailed in the UDP Server Tuning (v8) section.

| Parameter Key | Type | Default | Demo | Perf | Debug | Description |
|---|---|---|---|---|---|---|
| admin_api_key | string | "" | | | | Admin API key for X-Admin-Key header authentication. SECRET |
| application_parameter_interval_minutes | int | 5 | | | | How often (minutes) to reload all parameters from Aurora |
| cache_ttl_seconds | int | 60 | 30 | 9 | 60 | How long a cached price is considered fresh (seconds) |
| cacheRetentionDays | int | 93 | | | | Days to retain cached data in ElastiCache |
| config_poll_interval_seconds | int | 60 | 90 | 300 | 30 | How often (seconds) to poll for config interval changes |
| db_conn_max_idle_time_minutes | int | 1 | 1 | 5 | 1 | Max time a DB connection can sit idle before being closed |
| db_conn_max_lifetime_minutes | int | 5 | 5 | 10 | 5 | Max lifetime of a DB connection before recycling |
| db_max_idle_conns | int | 30 | 15 | 90 | 15 | Max idle connections in the Aurora connection pool |
| db_max_open_conns | int | 50 | 30 | 180 | 30 | Max open connections to Aurora |
| debug_messages | string | "false" | | | | DEPRECATED — use log_level=debug instead |
| default_query_limit | int | 1000 | | | | Default pagination limit for queries |
| demo_api_key | string | "" | | | | The public demo API key value (e.g., demo-public-2026-03-01-abc123) |
| demo_usage_row_cap | int | 30 | | | | Max rows returned for demo usage detail and report-usage detail reports |
| demo_summary_row_cap | int | 300 | | | | Max rows aggregated for demo usage summary and report-usage summary reports |
| usage_warning_threshold_pct | int | 90 | | | | Deprecated — replaced by the graduated thresholds below. Kept for backward compatibility. |
| usage_warning_green_pct | int | 85 | | | | When monthly usage reaches this percentage, responses include usage_warning: "green" and status shows ✅🟡. First tier of graduated warnings. |
| usage_warning_yellow_pct | int | 90 | | | | When monthly usage reaches this percentage, responses include usage_warning: "yellow" and status shows ✅⚠️. Second tier — caution. |
| usage_warning_red_pct | int | 95 | | | | When monthly usage reaches this percentage, responses include usage_warning: "red" and status shows ✅🔴. Third tier — critical, near limit. |
| coinbase_prewarm_assets | string | "btc,eth,sol,...,rlc" | | | | Coinbase WebSocket feed assets (24). See WebSocket Feed Asset Lists section. |
| gemini_prewarm_assets | string | "aave,ada,...,rad" | | | | Gemini WebSocket feed assets (24). See WebSocket Feed Asset Lists section. |
| kraken_prewarm_assets | string | "nano,sc,lsk,...,flow" | | | | Kraken WebSocket feed assets (24). See WebSocket Feed Asset Lists section. |
| gateio_prewarm_assets | string | "bnb,trx,...,ftm" | | | | Gate.io WebSocket feed assets (24). See WebSocket Feed Asset Lists section. |
| bybit_prewarm_assets | string | "ton,wld,...,gala" | | | | Bybit WebSocket feed assets (24). See WebSocket Feed Asset Lists section. |
| okx_prewarm_assets | string | "kas,tia,...,floki" | | | | OKX WebSocket feed assets (24). See WebSocket Feed Asset Lists section. |
| kraken_prewarm_interval_minutes | int | 3 | | | | How often to run Kraken batch prewarm |
| kraken_prewarm_offset_seconds | int | 15 | | | | Offset from main prewarm to stagger Kraken prewarm |
| log_level | string | "info" | info | error | debug | Logging level: debug, info, error |
| max_cache_size | int | 10000 | | | | Maximum entries in the local price cache |
| minimum_to_wait | float | 1.0 | | | | Minimum wait time (seconds) between requests for rate limiting |
| prewarm_assets_list | string | "btc,eth,sol,doge,xrp,link,dot,ltc,avax,uni,aave" | | | | Comma-separated list of assets to prewarm via WebSocket and REST |
| prewarm_delay_ms | int | 600 | | | | Delay between individual asset prewarm requests (ms) |
| prewarm_interval_minutes | int | 10 | | | | How often to run the main prewarm cycle |
| price_source_fallback_on_error | bool | true | | | | Whether to try the next source on error |
| price_source_health_window_sec | int | 300 | | | | Time window for source health tracking |
| price_source_min_success_rate | float | 0.85 | | | | Minimum success rate to consider a source healthy |
| price_source_order | string | "coinbase,kraken,gemini" | | | | REST fallback source priority order |
| price_source_prefer_low_latency | bool | true | | | | Whether to prefer lower-latency sources |
| price_source_timeouts_ms | string | {'coinbase':150,'kraken':120,'gemini':200} | | | | Per-source REST timeout in milliseconds |
| price_source_weights | string | {'coinbase':0.50,'kraken':0.35,'gemini':0.15} | | | | Source selection weights for weighted routing |
| seconds_to_wait | float | 15.0 | | | | Seconds to wait before retrying a failed source |
| stripe_key | string | "" | | | | Stripe API key for payment processing. SECRET |
| sqs_batch_size | int | 10 | 10 | 10 | 10 | Messages per SQS SendMessageBatch call (range: 1–10) |
| sqs_flush_ms | int | 100 | 500 | 50 | 500 | SQS producer flush interval in milliseconds |
| sqs_buffer_size | int | 50000 | 10,000 | 100,000 | 10,000 | SQS producer channel buffer capacity |
| sqs_timeout_ms | int | 3000 | 5,000 | 3,000 | 5,000 | Per-batch SQS API call timeout in milliseconds |
| udp_workers_per_socket | int | 128 | | | | Worker goroutines per UDP socket. v8: configurable (was hardcoded at 128) |
| udp_batch_size | int | 32 | | | | Datagrams per recvmmsg batch read. 0 disables batching (v7 fallback) |
| udp_pre_serialize_responses | bool | true | | | | Pre-serialized response cache for UDP cache hits |

Parameter Categories

Logging & Diagnostics

Controls log verbosity and diagnostic output. The log_level parameter is the primary control; debug_messages is deprecated but still honored for backward compatibility.

| Parameter | Type | Default | Description |
|---|---|---|---|
| log_level | string | "info" | Logging level: debug, info, error. Applied immediately via callback — no restart needed. |
| debug_messages | string | "false" | DEPRECATED. When set to "true", forces log level to debug. Use log_level=debug instead. |

Cache & Prewarming

Controls cache freshness, retention, size limits, and the prewarm cycles that keep the cache warm for popular assets. The main prewarm cycle covers top assets via WebSocket and REST; the Kraken prewarm handles exchange-exclusive assets separately.

| Parameter | Type | Default | Description |
|---|---|---|---|
| cache_ttl_seconds | int | 60 | How long a cached price is considered fresh (seconds). Lower values mean fresher prices but more REST fallback calls. |
| cacheRetentionDays | int | 93 | Days to retain cached data in ElastiCache. Entries older than this are eligible for eviction. |
| max_cache_size | int | 10000 | Maximum entries in the local in-memory price cache (sync.Map). Prevents unbounded memory growth. |
| prewarm_assets_list | string | "btc,eth,sol,...,bat" | Legacy combined list of assets for the REST prewarm cycle. 24 assets (Coinbase + Gemini lists combined). |
| prewarm_interval_minutes | int | 10 | How often (minutes) to run the REST prewarm cycle for assets in prewarm_assets_list. |
| prewarm_delay_ms | int | 600 | Delay (ms) between individual asset prewarm requests. Prevents thundering herd on external APIs. |
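A sketch of how these three knobs could drive the REST prewarm cycle (the loop structure and prewarmAsset callback are illustrative, not the actual implementation):

```go
// Hypothetical REST prewarm loop: every prewarm_interval_minutes, walk
// prewarm_assets_list with prewarm_delay_ms between requests so external
// APIs never see a thundering herd.
package prewarm

import (
	"context"
	"strings"
	"time"
)

func Run(ctx context.Context, assetsCSV string, interval, delay time.Duration,
	prewarmAsset func(ctx context.Context, asset string)) {

	ticker := time.NewTicker(interval) // prewarm_interval_minutes
	defer ticker.Stop()
	for {
		for _, asset := range strings.Split(assetsCSV, ",") {
			prewarmAsset(ctx, strings.TrimSpace(asset))
			select {
			case <-time.After(delay): // prewarm_delay_ms between assets
			case <-ctx.Done():
				return
			}
		}
		select {
		case <-ticker.C: // wait for the next cycle
		case <-ctx.Done():
			return
		}
	}
}
```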

WebSocket Feed Asset Lists (6 Exchanges × 24 Assets = 144 Total)

Each exchange has its own dedicated asset list. Assets must not overlap between exchanges. All lists are configurable via application parameters — no hardcoding. Changes take effect on the next parameter reload, but require a container restart for the WebSocket connections to re-subscribe.

| Parameter | Exchange | Default (24 assets) | WebSocket Endpoint |
|---|---|---|---|
| coinbase_prewarm_assets | Coinbase | btc,eth,sol,doge,xrp,link,dot,ltc,avax,uni,pepe,xlm,rndr,jasmy,icp,eos,egld,zec,enj,ankr,lrc,skl,coti,rlc | wss://advanced-trade-ws.coinbase.com |
| gemini_prewarm_assets | Gemini | aave,ada,matic,atom,near,arb,mkr,crv,grt,fil,shib,bat,mana,sand,axs,chz,storj,amp,ren,uma,bond,ctsi,rly,rad | wss://ws.gemini.com |
| kraken_prewarm_assets | Kraken | nano,sc,lsk,kava,bico,rari,ocean,cfg,cqt,algo,fet,flow,mina,glmr,movr,ksm,astr,phala,nodl,para,kilt,aca,teer,lit | wss://ws.kraken.com/v2 |
| gateio_prewarm_assets | Gate.io | bnb,trx,apt,inj,op,sui,vet,hbar,ftm,celr,dent,hot,one,reef,win,tfuel,stmx,troy,vite,oax,pundix,ach,bel,chess | wss://api.gateio.ws/ws/v4/ |
| bybit_prewarm_assets | Bybit | ton,wld,ape,blur,imx,ens,ldo,snx,comp,1inch,sushi,gala,magic,rdnt,hook,id,edu,cyber,arkm,ntrn,mav,sei,woo,agld | wss://stream.bybit.com/v5/public/spot |
| okx_prewarm_assets | OKX | kas,tia,jup,strk,pyth,w,zro,pendle,ondo,render,wif,floki,people,mask,looks,high,rss3,perp,badger,alcx,fxs,tribe,alpha,dodo | wss://ws.okx.com:8443/ws/v5/public |

144 assets, 6 exchanges, zero overlap. Each exchange subscribes to 24 unique assets via its own persistent WebSocket connection. Prices are pushed in real-time (sub-second) directly into the in-process sync.Map — zero network calls on the hot path. The source exchange is tracked in every response and usage log via the source field (e.g., coinbase-ws, kraken-ws, okx-ws).

Database Connection Pool

Controls the Go database/sql connection pool settings for Aurora Serverless v2. Changes are applied immediately via database.ApplyPoolSettings() after each parameter reload.

| Parameter | Type | Default | Description |
|---|---|---|---|
| db_max_open_conns | int | 50 | Max open connections to Aurora. Higher values support more concurrent queries but consume more ACUs. |
| db_max_idle_conns | int | 30 | Max idle connections kept warm in the pool. Should be less than or equal to db_max_open_conns. |
| db_conn_max_lifetime_minutes | int | 5 | Max lifetime of a DB connection before it is closed and recycled. Prevents stale connections. |
| db_conn_max_idle_time_minutes | int | 1 | Max time a DB connection can sit idle before being closed. Frees resources during low traffic. |
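These four parameters map one-to-one onto Go's database/sql pool setters, all of which are safe to call on a live pool — which is what makes them hot-reloadable. A sketch of a re-apply on reload (the real database.ApplyPoolSettings may differ in detail):

```go
// Sketch of re-applying pool settings on each parameter reload. All four
// setters can be called on a running *sql.DB without disruption.
package database

import (
	"database/sql"
	"time"
)

type PoolConfig struct {
	MaxOpenConns           int // db_max_open_conns
	MaxIdleConns           int // db_max_idle_conns
	ConnMaxLifetimeMinutes int // db_conn_max_lifetime_minutes
	ConnMaxIdleTimeMinutes int // db_conn_max_idle_time_minutes
}

func ApplyPoolSettings(db *sql.DB, cfg PoolConfig) {
	db.SetMaxOpenConns(cfg.MaxOpenConns)
	db.SetMaxIdleConns(cfg.MaxIdleConns)
	db.SetConnMaxLifetime(time.Duration(cfg.ConnMaxLifetimeMinutes) * time.Minute)
	db.SetConnMaxIdleTime(time.Duration(cfg.ConnMaxIdleTimeMinutes) * time.Minute)
}
```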

SQS Queued Write Pipeline

Usage logs are now sent to SQS via a channel-buffered producer, consumed by the trinity-beast-queued-writer Lambda, and batch-inserted into Aurora. No more direct Aurora writes from the hot path.

| Parameter | Type | Default | Description |
|---|---|---|---|
| sqs_batch_size | int | 10 | Messages per SQS SendMessageBatch call (range: 1–10) |
| sqs_flush_ms | int | 100 | SQS producer flush interval in milliseconds |
| sqs_buffer_size | int | 50000 | SQS producer channel buffer capacity |
| sqs_timeout_ms | int | 3000 | Per-batch SQS API call timeout in milliseconds |
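A sketch of the producer shape this table configures, using aws-sdk-go-v2 (struct layout and error handling are illustrative):

```go
// Hypothetical shape of the channel-buffered SQS producer: the hot path
// drops logs into a buffered channel; this background loop flushes when a
// batch fills (sqs_batch_size) or the flush timer fires (sqs_flush_ms).
package usagelog

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
	"github.com/aws/aws-sdk-go-v2/service/sqs/types"
)

type Producer struct {
	ch       chan string // buffered to sqs_buffer_size
	client   *sqs.Client
	queueURL string
}

func (p *Producer) run(batchSize int, flushEvery, timeout time.Duration) {
	batch := make([]string, 0, batchSize)
	ticker := time.NewTicker(flushEvery) // sqs_flush_ms
	defer ticker.Stop()
	for {
		select {
		case msg := <-p.ch:
			batch = append(batch, msg)
			if len(batch) < batchSize { // sqs_batch_size (SQS caps this at 10)
				continue
			}
		case <-ticker.C:
			if len(batch) == 0 {
				continue
			}
		}
		entries := make([]types.SendMessageBatchRequestEntry, len(batch))
		for i, body := range batch {
			entries[i] = types.SendMessageBatchRequestEntry{
				Id:          aws.String(fmt.Sprintf("m%d", i)),
				MessageBody: aws.String(body),
			}
		}
		ctx, cancel := context.WithTimeout(context.Background(), timeout) // sqs_timeout_ms
		_, _ = p.client.SendMessageBatch(ctx, &sqs.SendMessageBatchInput{
			QueueUrl: aws.String(p.queueURL),
			Entries:  entries,
		})
		cancel()
		batch = batch[:0]
	}
}
```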

Price Source Configuration

Controls the REST fallback price source behavior — priority order, per-source timeouts, weighted routing, health tracking, and error handling. These only apply when all WebSocket feeds are stale and the system falls back to REST.

| Parameter | Type | Default | Description |
|---|---|---|---|
| price_source_order | string | "coinbase,kraken,gemini" | REST fallback source priority order. Sources are tried in this order when fallback is needed. |
| price_source_weights | string | {'coinbase':0.50,'kraken':0.35,'gemini':0.15} | Source selection weights for weighted routing. Weights should sum to 1.0. |
| price_source_timeouts_ms | string | {'coinbase':150,'kraken':120,'gemini':200} | Per-source REST timeout in milliseconds. Requests exceeding this timeout are cancelled. |
| price_source_fallback_on_error | bool | true | Whether to try the next source in price_source_order when the current source returns an error. |
| price_source_health_window_sec | int | 300 | Time window (seconds) over which source health metrics are tracked. Older data is discarded. |
| price_source_min_success_rate | float | 0.85 | Minimum success rate (0.0–1.0) to consider a source healthy. Sources below this threshold may be deprioritized. |
| price_source_prefer_low_latency | bool | true | Whether to prefer lower-latency sources when multiple healthy sources are available. |
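A sketch of the ordered-fallback walk these parameters describe (the Fetcher wiring is illustrative):

```go
// Hypothetical REST fallback walk: try sources in price_source_order, each
// under its own price_source_timeouts_ms deadline, moving to the next one
// when price_source_fallback_on_error is set.
package pricesource

import (
	"context"
	"errors"
	"time"
)

type Fetcher func(ctx context.Context, asset string) (float64, error)

func FetchWithFallback(ctx context.Context, asset string,
	order []string, timeoutsMS map[string]int,
	fallbackOnError bool, fetchers map[string]Fetcher) (float64, string, error) {

	for _, src := range order { // e.g., ["coinbase", "kraken", "gemini"]
		timeout := time.Duration(timeoutsMS[src]) * time.Millisecond
		srcCtx, cancel := context.WithTimeout(ctx, timeout)
		price, err := fetchers[src](srcCtx, asset)
		cancel()
		if err == nil {
			return price, src, nil // source name is recorded in the usage log
		}
		if !fallbackOnError {
			return 0, src, err
		}
	}
	return 0, "", errors.New("all price sources failed")
}
```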

Rate Limiting & API

Controls rate limiting behavior, the demo API key, and default query pagination.

| Parameter | Type | Default | Description |
|---|---|---|---|
| demo_api_key | string | "" | The public demo API key value. When set, the system resolves the key's UUID from ElastiCache or Aurora for fast lookups. |
| demo_usage_row_cap | int | 30 | Max rows returned for demo usage detail and report-usage detail reports. Adjustable at runtime for specific demos. |
| demo_summary_row_cap | int | 300 | Max rows aggregated for demo usage summary and report-usage summary reports. Adjustable at runtime for specific demos. |
| usage_warning_threshold_pct | int | 90 | Deprecated — replaced by the graduated thresholds below. Kept for backward compatibility. |
| usage_warning_green_pct | int | 85 | At or above this usage percentage, responses include usage_warning: "green" and status shows ✅🟡. First tier of graduated warnings. |
| usage_warning_yellow_pct | int | 90 | At or above this usage percentage, responses include usage_warning: "yellow" and status shows ✅⚠️. Second tier — caution. |
| usage_warning_red_pct | int | 95 | At or above this usage percentage, responses include usage_warning: "red" and status shows ✅🔴. Third tier — critical, near limit. |
| minimum_to_wait | float | 1.0 | Minimum wait time (seconds) between requests for rate limiting. |
| seconds_to_wait | float | 15.0 | Seconds to wait before retrying a failed source. |
| default_query_limit | int | 1000 | Default pagination limit for queries when no explicit limit is provided. |
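The graduated warning thresholds compose into a simple highest-tier-wins check; a sketch:

```go
// Sketch of the graduated usage-warning logic: the highest matching tier
// wins, so at 96% of the monthly limit the response carries
// usage_warning: "red".
package usage

// WarningTier maps a usage percentage to the usage_warning value using the
// configured thresholds (defaults: green=85, yellow=90, red=95). An empty
// string means no warning field is added.
func WarningTier(usedPct, greenPct, yellowPct, redPct int) string {
	switch {
	case usedPct >= redPct:
		return "red" // critical, near limit
	case usedPct >= yellowPct:
		return "yellow" // caution
	case usedPct >= greenPct:
		return "green" // first tier
	default:
		return ""
	}
}
```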

System Configuration

Core system parameters that control polling intervals, authentication, and external service integration.

| Parameter | Type | Default | Description |
|---|---|---|---|
| application_parameter_interval_minutes | int | 5 | How often (minutes) to reload all parameters from Aurora. This is the master polling interval for the parameter loader. |
| config_poll_interval_seconds | int | 60 | How often (seconds) to poll for config interval changes. This is a faster inner loop that checks if the main interval has changed. |
| admin_api_key | string | "" | Admin API key for X-Admin-Key header authentication. Applied immediately via callback to update the admin auth middleware. SECRET |
| stripe_key | string | "" | Stripe API key for payment processing. SECRET |
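A sketch of the two-loop relationship between these intervals (structure and names are illustrative):

```go
// Hypothetical two-tier poller: the fast inner loop (config_poll_interval_
// seconds) only checks whether the master interval changed, while the full
// reload runs every application_parameter_interval_minutes.
package poller

import (
	"context"
	"time"
)

func Poll(ctx context.Context, fetchIntervalMinutes func() int, reloadAll func()) {
	interval := time.Duration(fetchIntervalMinutes()) * time.Minute
	reload := time.NewTimer(interval)
	check := time.NewTicker(60 * time.Second) // config_poll_interval_seconds
	defer reload.Stop()
	defer check.Stop()
	for {
		select {
		case <-reload.C:
			reloadAll() // full reload from ElastiCache/Aurora
			reload.Reset(interval)
		case <-check.C:
			// Fast inner loop: pick up a changed master interval quickly.
			if next := time.Duration(fetchIntervalMinutes()) * time.Minute; next != interval {
				interval = next
				reload.Reset(interval)
			}
		case <-ctx.Done():
			return
		}
	}
}
```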

UDP Server Tuning (v8)

v8 introduced six new parameters controlling the UDP hot path. These parameters are restart-required — they configure socket options and goroutine counts at startup.

v8 UDP Architecture: Each protocol (LPO on port 2679, LRS on port 2680) opens udp_reader_goroutines SO_REUSEPORT sockets. The kernel distributes incoming packets across sockets by source IP hash. Each socket has its own reader goroutine that uses recvmmsg to pull up to udp_batch_size datagrams per syscall, dispatching them to udp_workers_per_socket worker goroutines. For cache hits, udp_pre_serialize_responses skips the response build step entirely — the pre-built JSON payload is written directly to the socket.

| Parameter Key | Type | Default | Description |
|---|---|---|---|
| udp_reader_goroutines | int | 8 | Number of SO_REUSEPORT sockets per UDP protocol. Each socket gets its own kernel receive queue. v7 used 3; v8 bumped this to 8 after the manual parser freed CPU headroom. Total workers = readers × workers_per_socket. |
| udp_read_buffer_bytes | int | 33554432 | Per-socket kernel receive buffer (SO_RCVBUF). v8 default: 32 MB (was 8 MB). Absorbs burst spikes before the kernel drops packets. Set via conn.SetReadBuffer(). |
| udp_write_buffer_bytes | int | 33554432 | Per-socket kernel send buffer (SO_SNDBUF). v8 default: 32 MB (was 8 MB). Prevents write backpressure under high response rates. |
| udp_workers_per_socket | int | 128 | Worker goroutines per socket. Each worker processes one packet at a time (parse → validate → cache lookup → response). 128 is optimal for dual-protocol operation without OOM risk; 256 caused OOM under combined TCP+UDP load. |
| udp_batch_size | int | 32 | Max datagrams per recvmmsg batch read. At 75K RPS with batch=32, read syscalls drop from 75K/s to ~2.3K/s per socket. Set to 0 or 1 to disable batching and use the v7 single-read path. |
| udp_pre_serialize_responses | bool | true | Enable the pre-serialized response cache for hot assets. When a price is cached in sync.Map, the JSON response payload is pre-built once; subsequent cache hits skip all strconv.AppendFloat / AppendInt formatting and just memcpy the pre-built bytes, appending per-request fields (api_key, usage, ip). ~2x faster than the v7 zero-copy path for cache hits. |
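A sketch of the restart-required socket setup the table describes, using golang.org/x/sys/unix on Linux (function name and error handling are illustrative):

```go
// Sketch of v8-style UDP socket setup: udp_reader_goroutines SO_REUSEPORT
// listeners on one port, each with a udp_read_buffer_bytes kernel buffer.
// Both values are consumed here, at creation — hence restart-required.
package udpserver

import (
	"context"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

func OpenReusePortSockets(ctx context.Context, addr string, readers, rcvbufBytes int) ([]*net.UDPConn, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			if err := c.Control(func(fd uintptr) {
				// Lets every reader bind the same port; the kernel
				// hashes incoming packets across their receive queues.
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			}); err != nil {
				return err
			}
			return sockErr
		},
	}
	conns := make([]*net.UDPConn, 0, readers) // udp_reader_goroutines
	for i := 0; i < readers; i++ {
		pc, err := lc.ListenPacket(ctx, "udp", addr)
		if err != nil {
			return nil, err
		}
		conn := pc.(*net.UDPConn)
		_ = conn.SetReadBuffer(rcvbufBytes) // udp_read_buffer_bytes (SO_RCVBUF)
		conns = append(conns, conn)
	}
	return conns, nil
}
```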

Stress Report Cache (v8): For stress-tier API keys, LRS report results are cached in ElastiCache under stress:report:{type}:{api_key} with a 2-hour TTL. This turns the LRS hot path from a multi-key pipeline query (176ms avg) into a single GET (sub-ms). Seeded automatically on first query or via /admin/seed-stress-report?api_key=<key>. Production tiers always run the real query pipeline.
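A sketch of that read-through pattern with go-redis (the key format and TTL come from the description above; the query callback is illustrative):

```go
// Hypothetical read-through for the v8 stress report cache: a single GET on
// the hot path, seeded with a 2-hour TTL on miss. Production tiers bypass
// this and always run the real query pipeline.
package report

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func StressReport(ctx context.Context, rdb *redis.Client, reportType, apiKey string,
	runQuery func(ctx context.Context) (string, error)) (string, error) {

	key := fmt.Sprintf("stress:report:%s:%s", reportType, apiKey)
	if cached, err := rdb.Get(ctx, key).Result(); err == nil {
		return cached, nil // sub-ms hit instead of the ~176ms pipeline query
	}
	result, err := runQuery(ctx) // first query seeds the cache
	if err != nil {
		return "", err
	}
	_ = rdb.Set(ctx, key, result, 2*time.Hour).Err() // 2-hour TTL
	return result, nil
}
```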