The Trinity Beast — Go Application Features

A single Go binary. 6 real-time exchange feeds. 150 prewarmed assets. Table-driven everything. v8 UDP engine. 24-counter telemetry. One person runs it all.

Language: Go 1.26 · Binary: trinity-beast-lpo-server · Packages: 9 · Handlers: 17 files · Updated: May 2026

1. At a Glance

Exchanges: 6
Prewarmed Assets: 150
WebSocket Feeds: 6
REST Polling: 0
Handler Files: 17
Aurora Tables: 29
Tunable Profiles: 16
Telemetry Counters: 24

The Trinity Beast is a single Go binary that serves real-time cryptocurrency prices from 6 exchanges, reports usage analytics in 4 formats, manages subscriptions via Stripe, processes partner applications, and publishes 30+ CloudWatch metrics — all from one main.go controlled by a SERVER_TYPE environment variable.

2. Table-Driven Exchange Manager

Adding a new exchange to The Trinity Beast requires zero code changes. The Exchange Manager reads its configuration from two Aurora tables and launches WebSocket connections dynamically.

Add Exchange #7: Insert one row into exchange_feeds (endpoint, subscribe template, JSON paths, ping interval) and 24 rows into exchange_asset_map (asset-to-symbol translations). Restart the container. The new exchange is live — receiving real-time trades, caching prices, and appearing in the demo dropdown. No Go code touched.

Diagram 2.1 — Exchange Manager Data Flow
graph TD
    TBCC[TBCC Exchange Manager] -->|"CRUD API"| EF[(exchange_feeds, 19 columns)]
    TBCC -->|"CRUD API"| EAM[(exchange_asset_map, 150 rows)]
    EF -->|"on startup"| EM[Exchange Manager Go Engine]
    EAM -->|"symbol lookup"| EM
    EM -->|"launches"| WS1[Coinbase WS]
    EM -->|"launches"| WS2[Gemini WS]
    EM -->|"launches"| WS3[Kraken WS]
    EM -->|"launches"| WS4[Gate.io WS]
    EM -->|"launches"| WS5[Bybit WS]
    EM -->|"launches"| WS6[OKX WS]
    WS1 & WS2 & WS3 & WS4 & WS5 & WS6 -->|"prices"| PC[sync.Map Price Cache]
    PC --> API[/price API/]
    EF -->|"public"| DD[Demo Dropdown /exchanges]
    style TBCC fill:#4a5568,stroke:#718096,color:#e2e8f0
    style EF fill:#2d4a6f,stroke:#4a7ab5,color:#cbd5e1
    style EAM fill:#2d4a6f,stroke:#4a7ab5,color:#cbd5e1
    style EM fill:#2d5a4a,stroke:#4a9a7a,color:#cbd5e1
    style PC fill:#2d5a4a,stroke:#4a9a7a,color:#cbd5e1
    style API fill:#4a5568,stroke:#718096,color:#e2e8f0
    style DD fill:#3d3a5c,stroke:#6b6399,color:#cbd5e1
    style WS1 fill:#334155,stroke:#64748b,color:#94a3b8
    style WS2 fill:#334155,stroke:#64748b,color:#94a3b8
    style WS3 fill:#334155,stroke:#64748b,color:#94a3b8
    style WS4 fill:#334155,stroke:#64748b,color:#94a3b8
    style WS5 fill:#334155,stroke:#64748b,color:#94a3b8
    style WS6 fill:#334155,stroke:#64748b,color:#94a3b8

exchange_feeds — 19 Columns

Each row defines a complete WebSocket connection: endpoint URL, subscribe message template with {SYMBOLS} placeholder, JSON paths for extracting price/symbol/timestamp from trade messages, timestamp format (RFC3339, unix_ms, unix_ns), ping keepalive interval and payload, and an enabled flag to disable an exchange without a deploy.
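
As a rough illustration of what one exchange_feeds row carries, the struct below mirrors the column groups just described. The field names are assumptions, not the actual 19-column schema.

```go
// Illustrative sketch only; field names are assumptions, not the real exchange_feeds schema.
type ExchangeFeed struct {
	Name              string // e.g. "coinbase"
	Endpoint          string // wss:// URL
	SubscribeTemplate string // JSON subscribe message with a {SYMBOLS} placeholder
	PricePath         string // JSON path to the trade price
	SymbolPath        string // JSON path to the traded symbol
	TimestampPath     string // JSON path to the trade timestamp
	TimestampFormat   string // "RFC3339", "unix_ms", or "unix_ns"
	PingIntervalSec   int    // keepalive interval
	PingPayload       string // keepalive message body
	Enabled           bool   // disable an exchange without a deploy
}
```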

exchange_asset_map — Symbol Translation

Every exchange uses a different symbol format for the same asset. BTC is BTC-USD on Coinbase, btcusd on Gemini, BTC/USD on Kraken, BTC_USDT on Gate.io, BTCUSDT on Bybit, and BTC-USDT on OKX. The translation table maps each normalized asset (BTC) to its exchange-specific symbol. 150 rows (25 per exchange), zero overlap, all data-driven.
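
A minimal sketch of the translation lookup this table drives. In production the mapping is loaded from exchange_asset_map; the map layout and function name here are illustrative.

```go
// Illustrative only: normalized asset to exchange-specific symbol,
// keyed first by exchange name, then by asset.
var symbolMap = map[string]map[string]string{
	"coinbase": {"BTC": "BTC-USD"},
	"gemini":   {"BTC": "btcusd"},
	"kraken":   {"BTC": "BTC/USD"},
	"gateio":   {"BTC": "BTC_USDT"},
	"bybit":    {"BTC": "BTCUSDT"},
	"okx":      {"BTC": "BTC-USDT"},
}

// exchangeSymbol returns the symbol an exchange uses for a normalized asset.
func exchangeSymbol(exchange, asset string) (string, bool) {
	assets, ok := symbolMap[exchange]
	if !ok {
		return "", false
	}
	sym, ok := assets[asset]
	return sym, ok
}
```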

CRUD API

| Endpoint | Purpose |
| --- | --- |
| GET /admin/exchange-feeds | List all exchanges with asset counts and connection status |
| POST /admin/exchange-feeds/save | Create or update an exchange configuration (upsert) |
| POST /admin/exchange-feeds/toggle | Enable or disable an exchange with one call |
| GET /admin/exchange-assets | List asset mappings (filterable by exchange) |
| POST /admin/exchange-assets/save | Batch create/update asset-to-symbol mappings |
| GET /exchanges | Public endpoint — powers the demo dropdown on the subscription page |

3. Six-Exchange WebSocket Engine

Every price in The Trinity Beast arrives via a persistent WebSocket connection — no REST polling, no scheduled fetches, no stale data. Each container maintains 6 independent WebSocket connections, one per exchange.

| Exchange | Endpoint | Pair Format | Assets | Source Tag |
| --- | --- | --- | --- | --- |
| Coinbase | wss://advanced-trade-ws.coinbase.com | BTC-USD | 24 | coinbase-ws |
| Gemini | wss://ws.gemini.com | btcusd | 24 | gemini-ws |
| Kraken | wss://ws.kraken.com/v2 | BTC/USD | 24 | kraken-ws |
| Gate.io | wss://api.gateio.ws/ws/v4/ | BTC_USDT | 24 | gateio-ws |
| Bybit | wss://stream.bybit.com/v5/public/spot | BTCUSDT | 24 | bybit-ws |
| OKX | wss://ws.okx.com:8443/ws/v5/public | BTC-USDT | 24 | okx-ws |

Each feed auto-reconnects with exponential backoff (1s → 60s max), respects the shutdown signal for graceful termination, and stores prices in both the per-exchange WsPriceCache and the main PriceCache via CachePriceLocal. The source exchange is tracked in every API response and every usage log — total transparency.
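
The reconnect behavior described above follows a standard Go pattern. The sketch below uses gorilla/websocket for illustration and stands in for, rather than reproduces, the actual feed code.

```go
import (
	"context"
	"time"

	"github.com/gorilla/websocket"
)

// runFeed keeps one exchange feed alive: dial, subscribe, read trades,
// and reconnect with exponential backoff (1s doubling to a 60s cap).
func runFeed(ctx context.Context, url, subscribeMsg string, handle func([]byte)) {
	backoff := time.Second
	for ctx.Err() == nil { // respect the shutdown signal
		conn, _, err := websocket.DefaultDialer.DialContext(ctx, url, nil)
		if err != nil {
			time.Sleep(backoff)
			backoff *= 2
			if backoff > 60*time.Second {
				backoff = 60 * time.Second
			}
			continue
		}
		backoff = time.Second // reset after a successful connect

		// Subscribe with the template already expanded for this exchange's symbols.
		if err := conn.WriteMessage(websocket.TextMessage, []byte(subscribeMsg)); err != nil {
			conn.Close()
			continue
		}
		for {
			_, msg, err := conn.ReadMessage()
			if err != nil {
				conn.Close()
				break // drop back to the reconnect loop
			}
			handle(msg) // e.g. parse the trade and cache the price locally
		}
	}
}
```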

4. Three-Tier Price Cache

Diagram 4.1 — Three-Tier Cache Lookup
graph LR
    REQ[Price Request] --> T1
    T1[Tier 1: sync.Map, Zero Network] -->|"HIT ~0ms"| RESP[Response]
    T1 -->|"MISS"| T2
    T2[Tier 2: ElastiCache, Sub-ms Network] -->|"HIT ~1ms"| RESP
    T2 -->|"MISS"| T3
    T3[Tier 3: REST Fallback, Exchange API] -->|"50-200ms"| RESP
    WS[6 WebSocket Feeds] -->|"every trade"| T1
    style REQ fill:#4a5568,stroke:#718096,color:#e2e8f0
    style T1 fill:#2d5a4a,stroke:#4a9a7a,color:#cbd5e1
    style T2 fill:#2d4a6f,stroke:#4a7ab5,color:#cbd5e1
    style T3 fill:#5a4a2d,stroke:#9a7a4a,color:#cbd5e1
    style RESP fill:#2d5a4a,stroke:#4a9a7a,color:#cbd5e1
    style WS fill:#3d3a5c,stroke:#6b6399,color:#cbd5e1

Tier 1 — sync.Map (Zero Network)

In-process Go sync.Map populated by WebSocket feeds. Sub-microsecond reads. No network call. This is the hot path — 99%+ of requests are served here under stress testing with 300s TTL.

Tier 2 — ElastiCache (Sub-Millisecond)

Valkey 7.2 on cache.r7g.2xlarge (52 GB). Shared across all 3 containers. Prices written by the sync job and WebSocket feeds. Falls through to Tier 3 only if both Tier 1 and Tier 2 miss.

Tier 3 — REST Fallback (On-Demand)

Direct REST API calls to Coinbase, Gemini, or Kraken. Only triggered for assets not in any cache — typically first-time queries for non-prewarmed assets. Result is cached in Tier 1 and Tier 2 for subsequent requests.

The cache TTL is configurable per profile — 13 seconds in production (fresh-price), 300 seconds during stress testing. WebSocket-fed assets stay sub-second fresh regardless of TTL because every trade pushes a new price into Tier 1.
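
A simplified sketch of the lookup order across the three tiers. getFromElastiCache and fetchFromExchangeREST are hypothetical stand-ins for the real Tier 2 and Tier 3 paths, not the actual functions.

```go
import (
	"context"
	"errors"
	"sync"
)

type cachedPrice struct {
	Price  float64
	Source string // e.g. "coinbase-ws"
}

// Tier 1: in-process price cache, fed by the six WebSocket feeds.
var priceCache sync.Map // asset -> cachedPrice

// Hypothetical stand-ins for the real Tier 2 (Valkey) and Tier 3 (exchange REST) paths.
func getFromElastiCache(ctx context.Context, asset string) (cachedPrice, bool, error) {
	return cachedPrice{}, false, nil
}
func fetchFromExchangeREST(ctx context.Context, asset string) (cachedPrice, error) {
	return cachedPrice{}, errors.New("not implemented in this sketch")
}

// lookupPrice walks the tiers in order and backfills the faster tiers on a miss.
func lookupPrice(ctx context.Context, asset string) (cachedPrice, error) {
	// Tier 1: sync.Map, zero network, sub-microsecond.
	if v, ok := priceCache.Load(asset); ok {
		return v.(cachedPrice), nil
	}
	// Tier 2: shared ElastiCache, roughly 1ms.
	if p, ok, err := getFromElastiCache(ctx, asset); err == nil && ok {
		priceCache.Store(asset, p)
		return p, nil
	}
	// Tier 3: REST fallback to an exchange, 50-200ms, then cache the result.
	p, err := fetchFromExchangeREST(ctx, asset)
	if err != nil {
		return cachedPrice{}, err
	}
	priceCache.Store(asset, p)
	return p, nil
}
```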

5. UDP v8 — SO_REUSEPORT, recvmmsg, Pre-Serialized Responses

Three generations of UDP optimizations, each targeting a specific bottleneck. The progression from v6 to v8 transformed UDP from a protocol that failed above 1,500 concurrent into one that achieves 100% success at 21,000 concurrent — the first perfect run in Trinity Beast history.

| Version | Optimization | Before | After | Impact |
| --- | --- | --- | --- | --- |
| v6 | Zero-alloc response builder | json.Marshal (reflection) | buildUDPResponse() — direct byte append | ~70% faster, zero heap allocations |
| v6 | Multi-socket architecture | Single shared net.UDPConn | One socket per reader goroutine | 3× write parallelism |
| v6 | Per-socket worker pools | Shared across all readers | Dedicated channel per socket | Zero cross-socket contention |
| v7 | Manual byte-scan JSON parser | encoding/json.Unmarshal | Direct byte scanning for fields | ~5× faster parsing, no reflection |
| v7 | Zero-copy response write | Build → copy → write | Build → write from pool → return | Eliminates 1 alloc per response |
| v8 | SO_REUSEPORT | Single kernel receive queue | Per-socket kernel receive queue via net.ListenConfig.Control | Eliminated receive buffer bottleneck |
| v8 | recvmmsg batch reads | 1 datagram per syscall | 32 datagrams per syscall via ipv4.PacketConn.ReadBatch | ~32× reduction in read syscalls |
| v8 | Pre-serialized response cache | Build JSON per request | sync.Map of pre-built byte slices | ~2× faster for cache hits |
| v8 | 32 MB socket buffers | 8 MB per socket | 32 MB per socket | Absorbs burst spikes before drops |
| v8 | 8 reader goroutines per protocol | 3 readers | 8 SO_REUSEPORT sockets × 128 workers = 1,024 handlers | More packets drained before drops |

Result: 100% UDP success through all 13 concurrency levels (30 to 21,000 concurrent). 487,900 UDP RPS sustained for 30 minutes. 0.2ms average latency. The v8 engine combined with persistent socket pools in the stress client eliminated every bottleneck in the UDP path.
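
The two kernel-level techniques in the v8 rows, SO_REUSEPORT via net.ListenConfig.Control and batched reads via ipv4.PacketConn.ReadBatch, follow a well-known Go pattern. The sketch below shows that pattern under stated assumptions (a single IPv4 socket, 1,500-byte per-datagram buffers, and the batch size of 32 from the table); it is not the production reader.

```go
import (
	"context"
	"net"
	"syscall"

	"golang.org/x/net/ipv4"
	"golang.org/x/sys/unix"
)

// openReusePortUDP opens one UDP socket with SO_REUSEPORT set, so several
// sockets can bind the same port and the kernel spreads packets across
// their receive queues (Linux-specific option).
func openReusePortUDP(addr string) (net.PacketConn, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var serr error
			err := c.Control(func(fd uintptr) {
				serr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			})
			if err != nil {
				return err
			}
			return serr
		},
	}
	return lc.ListenPacket(context.Background(), "udp", addr)
}

// readLoop drains up to 32 datagrams per ReadBatch call (recvmmsg under the hood).
func readLoop(pc net.PacketConn, handle func(payload []byte, addr net.Addr)) error {
	p := ipv4.NewPacketConn(pc)
	msgs := make([]ipv4.Message, 32)
	for i := range msgs {
		msgs[i].Buffers = [][]byte{make([]byte, 1500)}
	}
	for {
		n, err := p.ReadBatch(msgs, 0)
		if err != nil {
			return err
		}
		for i := 0; i < n; i++ {
			handle(msgs[i].Buffers[0][:msgs[i].N], msgs[i].Addr)
		}
	}
}
```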

6. Runtime Telemetry — 24 Atomic Counters

Every container runs 24 atomic.Int64 counters that track the complete request lifecycle in real time. The overhead is approximately 1 nanosecond per increment — invisible at any throughput level.

Throughput

tcp_requests, udp_requests, lrs_requests — real-time RPS by protocol

Cache Layers

syncmap_hits, elasticache_hits, cache_misses — three-tier visibility

UDP Health

udp_packets_received, udp_packets_dropped, udp_packets_sent — packet loss detection

Background Pool

bg_work_submitted, bg_work_dropped, bg_work_completed — housekeeping saturation

Batch Pipeline

batch_rows_queued — SQS messages sent (usage log entries queued for Lambda consumer)

DB Connections

db_open_conns, db_in_use_conns, db_wait_count — pool utilization

Access via GET /admin/stress-stats — returns all 24 counters plus derived metrics (RPS, hit percentages, drop rates) in a single JSON response. Reset with GET /admin/stress-reset before each test run. Six key metrics are also published to CloudWatch for dashboard visibility.
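
A minimal sketch of the atomic-counter pattern and a stress-stats style snapshot handler. The struct, the subset of counters shown, and the derived fields are illustrative, not the actual telemetry code.

```go
import (
	"encoding/json"
	"net/http"
	"sync/atomic"
)

// Illustrative subset of the 24 counters; each increment is a single
// atomic add, so the hot path never blocks on telemetry.
type stressStats struct {
	TCPRequests       atomic.Int64
	UDPRequests       atomic.Int64
	SyncMapHits       atomic.Int64
	ElastiCacheHits   atomic.Int64
	CacheMisses       atomic.Int64
	UDPPacketsDropped atomic.Int64
}

var stats stressStats

// statsHandler sketches a /admin/stress-stats style endpoint: snapshot every
// counter and compute a derived hit percentage on the way out.
func statsHandler(w http.ResponseWriter, r *http.Request) {
	hits := stats.SyncMapHits.Load() + stats.ElastiCacheHits.Load()
	total := hits + stats.CacheMisses.Load()
	var hitPct float64
	if total > 0 {
		hitPct = 100 * float64(hits) / float64(total)
	}
	json.NewEncoder(w).Encode(map[string]any{
		"tcp_requests":        stats.TCPRequests.Load(),
		"udp_requests":        stats.UDPRequests.Load(),
		"syncmap_hits":        stats.SyncMapHits.Load(),
		"elasticache_hits":    stats.ElastiCacheHits.Load(),
		"cache_misses":        stats.CacheMisses.Load(),
		"udp_packets_dropped": stats.UDPPacketsDropped.Load(),
		"cache_hit_pct":       hitPct,
	})
}
```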

7. Cluster-Wide Aggregation via ElastiCache

Each container publishes its 24-counter metrics snapshot to ElastiCache every 3 seconds. The /admin/cluster-stats endpoint reads all 3 snapshots in a single ElastiCache pipeline call: one round trip, sub-millisecond, and guaranteed to include all 3 containers.

How It Works

Diagram 7.1 — ElastiCache Cluster Stats Pipeline
graph LR
    subgraph Containers
        M[BeastMain
24 counters] -->|"every 3s"| EC Mi[BeastMirror
24 counters] -->|"every 3s"| EC L[BeastLRS
24 counters] -->|"every 3s"| EC end EC[(ElastiCache
cluster:stats:*
TTL 30s)] EC -->|"1 pipeline read"| CS[/admin/cluster-stats/] CS --> TBCC[Command Center
Cluster Health Widget] CS --> CW[CloudWatch
6 Runtime Metrics] style M fill:#334155,stroke:#64748b,color:#94a3b8 style Mi fill:#334155,stroke:#64748b,color:#94a3b8 style L fill:#334155,stroke:#64748b,color:#94a3b8 style EC fill:#5a3a3a,stroke:#8a5a5a,color:#e2c8c8 style CS fill:#2d5a4a,stroke:#4a9a7a,color:#cbd5e1 style TBCC fill:#2d4a6f,stroke:#4a7ab5,color:#cbd5e1 style CW fill:#3d3a5c,stroke:#6b6399,color:#cbd5e1

ElastiCache Keys

| Key | TTL | Content |
| --- | --- | --- |
| cluster:stats:BeastMain | 30 seconds | Full metrics snapshot + cluster_node + region + published_at |
| cluster:stats:BeastMirror | 30 seconds | Same structure |
| cluster:stats:BeastLRS | 30 seconds | Same structure |

If a container hasn't published in 30 seconds, its key expires and it shows as missing in the cluster view — an immediate signal that something is wrong. The publish interval (3 seconds) and TTL (30 seconds) are candidates for future application parameters.

Before: 30 HTTP requests through the ALB, hoping the load balancer routes to different containers. Slow, unreliable, wasteful.

After: 1 ElastiCache pipeline read. Sub-millisecond. Guaranteed to include all 3 containers. The data is always fresh because each container publishes independently every 3 seconds.
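
Below is a hedged sketch of the publish loop and the pipeline read using the go-redis client against Valkey. The key names, the 3-second interval, and the 30-second TTL come from this section; the client setup and function shapes are assumptions.

```go
import (
	"context"
	"encoding/json"
	"time"

	"github.com/redis/go-redis/v9"
)

// publishSnapshot writes this node's metrics under cluster:stats:<node>
// every 3 seconds with a 30-second TTL.
func publishSnapshot(ctx context.Context, rdb *redis.Client, node string, snapshot func() any) {
	ticker := time.NewTicker(3 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			b, _ := json.Marshal(snapshot())
			rdb.Set(ctx, "cluster:stats:"+node, b, 30*time.Second)
		}
	}
}

// readClusterStats fetches all 3 snapshots in a single pipeline round trip.
func readClusterStats(ctx context.Context, rdb *redis.Client) (map[string]string, error) {
	nodes := []string{"BeastMain", "BeastMirror", "BeastLRS"}
	pipe := rdb.Pipeline()
	cmds := make([]*redis.StringCmd, len(nodes))
	for i, n := range nodes {
		cmds[i] = pipe.Get(ctx, "cluster:stats:"+n)
	}
	if _, err := pipe.Exec(ctx); err != nil && err != redis.Nil {
		return nil, err
	}
	out := make(map[string]string, len(nodes))
	for i, n := range nodes {
		if v, err := cmds[i].Result(); err == nil {
			out[n] = v // a missing key means that node hasn't published recently
		}
	}
	return out, nil
}
```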

Response Structure

GET /admin/cluster-stats returns per-node snapshots and aggregated totals.
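
The exact response schema is not reproduced in this document. As a rough illustration only, a shape like the following would match that description; every field name below is an assumption except cluster_node, region, and published_at, which appear in the key table above.

```go
import "time"

// Illustrative shape only; the JSON field names are assumptions, not the actual schema.
type clusterStatsResponse struct {
	Nodes  map[string]nodeSnapshot `json:"nodes"`  // keyed by BeastMain, BeastMirror, BeastLRS
	Totals map[string]int64        `json:"totals"` // counters summed across all reporting nodes
}

type nodeSnapshot struct {
	ClusterNode string           `json:"cluster_node"`
	Region      string           `json:"region"`
	PublishedAt time.Time        `json:"published_at"`
	Counters    map[string]int64 `json:"counters"` // the 24 atomic counters
}
```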

8. Partner Tier — Zero Friction Access

AWS Partners connect via PrivateLink (TCP) or VPC Peering (UDP) directly to containers. Partner API keys bypass all rate limiting, monthly caps, and billing checks — in both the TCP and UDP handlers.

The exchanges we depend on — Coinbase, Gemini, Kraken, Gate.io, Bybit, OKX — share their data with us at no cost. We pass that forward. If your AWS application needs live crypto prices, we provide them free. We receive freely, we give freely.

Partners apply through a public form at /partner-apply.html, receive a professional SES confirmation email, and can check their application status at /partner-status.html. Applications are reviewed in the TBCC Partner Management widget.

9. Application Parameter Profiles

16 named profiles stored in Aurora's application_parameter_profiles table. Each profile defines a complete set of tuning parameters — rate limits, cache TTLs, connection pool sizes, batch settings, and logging levels. Applied instantly via GET /admin/system-mode?mode=<name>.

| Category | Profiles | Key Difference |
| --- | --- | --- |
| Production | demo, debug, fresh-price | TTL, logging, pool sizes |
| LPO Stress | stress, tcp-direct, tcp-alb, udp-direct, udp-nlb | Flush intervals, batch sizes, pool topology |
| LRS Stress | lrs-direct, lrs-alb, lrs-udp-direct, lrs-udp-nlb | db_max_open=180, cache_read_ms=1000 |
| Combined | combined-direct, combined-alb, combined-udp-direct, combined-udp-nlb | db_max_open=180 for LPO+LRS headroom |

Key tuning rule: db_max_idle always equals db_max_open — the fix that dropped p99 latency from 1,266ms to 8.9ms. All cache pool sizes are multiples of 3 (one pool per container).
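
The db_max_idle = db_max_open rule maps directly onto the standard database/sql pool knobs. A sketch of how a profile value might be applied (the function and the connection lifetime are illustrative):

```go
import (
	"database/sql"
	"time"
)

// applyDBProfile is an illustrative sketch. Keeping MaxIdleConns equal to
// MaxOpenConns means connections released under load return to the idle pool
// instead of being closed and re-dialed on the next request.
func applyDBProfile(db *sql.DB, maxOpen int) {
	db.SetMaxOpenConns(maxOpen)            // e.g. db_max_open=180 for the LRS/combined profiles
	db.SetMaxIdleConns(maxOpen)            // db_max_idle always equals db_max_open
	db.SetConnMaxLifetime(5 * time.Minute) // lifetime value here is illustrative
}
```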

10. SQS Queued Write Pipeline

Usage logs are never written synchronously — and never written directly to Aurora from the hot path. Every price request fires a message to SQS via a channel-buffered producer. A purpose-built Go Lambda (trinity-beast-queued-writer) consumes batches from the queue and batch-inserts into Aurora. Zero logs shed. Zero latency added to the price response.

Messages use a type-routed envelope format ({"type":"usage_log","payload":{...}}) — extensible to any future write type without changing queue or Lambda infrastructure. During Run 17 at 746,374 combined RPS, every usage log was delivered to Aurora via SQS with zero shedding — solving the data integrity gap discovered in earlier stress tests where only ~12K of 1.34B requests produced Aurora rows.
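
A rough sketch of the envelope and a channel-buffered producer. Only the {"type":"usage_log","payload":{...}} envelope format comes from the text above; the buffer size, drop behavior, and use of aws-sdk-go-v2 are assumptions.

```go
import (
	"context"
	"encoding/json"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

// envelope is the type-routed message format: new write types can be added
// without changing the queue or the Lambda consumer.
type envelope struct {
	Type    string          `json:"type"`    // e.g. "usage_log"
	Payload json.RawMessage `json:"payload"`
}

// queue is a buffered channel so the price hot path only does a non-blocking send.
var queue = make(chan envelope, 10000) // buffer size is illustrative

// enqueueUsageLog never blocks the request path; a full buffer simply
// reports false so the caller can count the drop.
func enqueueUsageLog(payload json.RawMessage) bool {
	select {
	case queue <- envelope{Type: "usage_log", Payload: payload}:
		return true
	default:
		return false
	}
}

// drain runs in a background goroutine and ships envelopes to SQS.
func drain(ctx context.Context, client *sqs.Client, queueURL string) {
	for env := range queue {
		body, _ := json.Marshal(env)
		if _, err := client.SendMessage(ctx, &sqs.SendMessageInput{
			QueueUrl:    aws.String(queueURL),
			MessageBody: aws.String(string(body)),
		}); err != nil {
			log.Printf("sqs send failed: %v", err) // real code would retry or count drops
		}
	}
}
```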

11. Distributed Adaptive Governor

A cluster-wide TCP connection throttle coordinated via ElastiCache. Each container tracks its success rate over a rolling window. When the rate drops below the threshold, the governor introduces a configurable delay to shed load gracefully — preventing cascade failures under extreme concurrency.

Disabled during direct-to-container stress tests (where we want to find the raw ceiling). Enabled during ALB/NLB tests (where we want production-like behavior).
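
A simplified sketch of the rolling-window idea behind the governor. The 95% threshold, window size, and delay curve are all assumptions, and the cluster-wide coordination via ElastiCache is not shown.

```go
import (
	"sync"
	"time"
)

// governor is an illustrative per-container success-rate throttle.
type governor struct {
	mu        sync.Mutex
	successes int
	failures  int
	enabled   bool
}

// record tallies one connection outcome into the current window.
func (g *governor) record(ok bool) {
	g.mu.Lock()
	defer g.mu.Unlock()
	if ok {
		g.successes++
	} else {
		g.failures++
	}
}

// delay returns how long to hold a new TCP connection when the success rate
// over the current window drops below 95% (threshold and curve are illustrative).
func (g *governor) delay() time.Duration {
	g.mu.Lock()
	defer g.mu.Unlock()
	total := g.successes + g.failures
	if !g.enabled || total < 100 {
		return 0
	}
	rate := float64(g.successes) / float64(total)
	if rate >= 0.95 {
		return 0
	}
	// Shed load gracefully: the further below threshold, the longer the delay.
	return time.Duration((0.95-rate)*1000) * time.Millisecond
}
```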

12. Single Binary, Three Modes

One Docker image. One Go binary. Three operational modes controlled by the SERVER_TYPE environment variable:

| Mode | SERVER_TYPE | Services | Ports |
| --- | --- | --- | --- |
| LPO Only | APP_SERVER | Price API + UDP | 8080, 8081 (health), 2679 |
| LRS Only | REPORT_SERVER | Reports + UDP | 9090, 9091 (health), 2680 |
| Combined | APP_REPORT_SERVER | All services | 8080, 8081, 9090, 9091, 2679, 2680 |

All 3 ECS services run the same image. The only difference is the environment variable. This means one build, one push, one image — deployed to 3 services with different configurations.
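
The mode selection can be pictured as a switch on SERVER_TYPE in main. startLPO and startLRS below are placeholders, not the actual function names.

```go
import (
	"log"
	"os"
)

// Placeholders for the real wiring.
func startLPO() { /* price API on 8080/8081, UDP on 2679 */ }
func startLRS() { /* report API on 9090/9091, UDP on 2680 */ }

func main() {
	switch os.Getenv("SERVER_TYPE") {
	case "APP_SERVER": // LPO only
		startLPO()
	case "REPORT_SERVER": // LRS only
		startLRS()
	case "APP_REPORT_SERVER": // combined: all services, all ports
		startLPO()
		startLRS()
	default:
		log.Fatal("SERVER_TYPE must be APP_SERVER, REPORT_SERVER, or APP_REPORT_SERVER")
	}
	select {} // block while the services run in their own goroutines
}
```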

13. Secrets-Driven Configuration

Zero hardcoded credentials or email addresses. Everything sensitive is loaded from AWS Secrets Manager at startup and injected into the Config struct. 16 keys covering database credentials, Stripe API key, SMTP settings, and all SES sender addresses.

To change a sender email address, update it in Secrets Manager and restart the container. No code deploy, no binary rebuild.
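
A sketch of loading configuration at startup with aws-sdk-go-v2. Whether the 16 keys live in one JSON secret or several is not specified here; this sketch assumes a single JSON secret, and the Config field names are illustrative.

```go
import (
	"context"
	"encoding/json"

	"github.com/aws/aws-sdk-go-v2/aws"
	awsconfig "github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/secretsmanager"
)

// Config holds everything sensitive; the field names here are illustrative.
type Config struct {
	DBUser       string `json:"db_user"`
	DBPassword   string `json:"db_password"`
	StripeAPIKey string `json:"stripe_api_key"`
	SMTPHost     string `json:"smtp_host"`
	SESSender    string `json:"ses_sender"`
	// ...remaining keys of the 16
}

// loadConfig reads one JSON secret at startup; changing a value and restarting
// the container picks it up with no code deploy or binary rebuild.
func loadConfig(ctx context.Context, secretID string) (*Config, error) {
	awsCfg, err := awsconfig.LoadDefaultConfig(ctx)
	if err != nil {
		return nil, err
	}
	sm := secretsmanager.NewFromConfig(awsCfg)
	out, err := sm.GetSecretValue(ctx, &secretsmanager.GetSecretValueInput{
		SecretId: aws.String(secretID),
	})
	if err != nil {
		return nil, err
	}
	var cfg Config
	if err := json.Unmarshal([]byte(aws.ToString(out.SecretString)), &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}
```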

14. Every Query Funds Freedom

100% of subscription revenue from The Trinity Beast goes directly to Cross Power Ministries of Pakistan — funding freedom from brick kiln debt bondage, clean water, medical camps, wheelchairs, education, and Bibles. When a developer calls /price?asset=BTC, they're not just getting a number. They're funding freedom.

This is not a feature of the Go application. It is the reason the Go application exists.