Complete folder structure, file descriptions, and integration guide for all three Go applications
The Trinity Beast platform consists of three independent Go applications, each with its own repository, Docker image, and deployment pipeline. All three follow the standard Go project layout conventions and share a common infrastructure on AWS (Aurora PostgreSQL, ElastiCache, Secrets Manager).
| Application | Purpose | Runtime | Docker Image |
|---|---|---|---|
| trinity-beast-lpo-server | Listener Price Oracle + Listener Report Server — the main application serving TCP/UDP price queries and usage reports | ECS Fargate (3 services) | trinity-beast-lpo-server:latest |
| trinity-beast-sync-job | Nightly sync job — reads usage_logs from Aurora, writes to ElastiCache for LRS reporting. 93-day rolling retention. | ECS Fargate (standalone task) | trinity-beast-sync-job:latest |
| trinity-beast-receipt-lambda | Post-checkout receipt processor — reads Stripe sessions, records transactions in Aurora, generates API keys, sends SES email receipts | AWS Lambda (provided.al2023) | N/A (zip deployment) |
The main application powering the Listener Price Oracle and Listener Report Server. Runs as three ECS Fargate services (BeastMain, BeastMirror, BeastLRS) from a single Docker image. Handles real-time crypto price queries via TCP and UDP, usage reporting, rate limiting, caching, and CloudWatch metrics.
trinity-beast-lpo-server/
├── go.mod, go.sum
├── Makefile
├── README.md
├── .dockerignore
│
├── bin/ — Compiled binaries
│   ├── price-server — Local server build
│   ├── trinity-beast-demo-linux — Demo client (Linux)
│   ├── trinity-beast-demo-mac — Demo client (macOS)
│   ├── trinity-beast-demo.exe — Demo client (Windows)
│   ├── trinity-stress — Stress test client (macOS)
│   └── trinity-stress-linux — Stress test client (Linux)
│
├── cmd/ — Application entry points (6 binaries)
│   ├── server/
│   │   ├── main.go — LPO + LRS server (2554 lines)
│   │   └── metrics.go — CloudWatch metrics publisher
│   ├── demo/
│   │   └── main.go — Interactive demo client for API testing
│   ├── receipt/
│   │   └── main.go — Receipt handler (legacy, see receipt-lambda)
│   ├── stress/
│   │   └── main.go — Go stress test client for load testing
│   ├── sync/
│   │   └── main.go — Sync job (legacy, see sync-job repo)
│   └── queued-writer/
│       └── main.go — SQS usage log consumer Lambda (Go)
│
├── deployments/
│   ├── cloudformation/
│   │   ├── trinity-beast-stack.yaml — Complete infrastructure template
│   │   └── inventory/ — 70+ JSON snapshots of every AWS resource
│   ├── docker/
│   │   ├── Dockerfile — LPO server multi-stage build
│   │   ├── Dockerfile.receipt — Receipt Lambda build
│   │   ├── Dockerfile.queued-writer — Queued-writer Lambda build
│   │   └── Dockerfile.sync — Sync job build
│   ├── ses/
│   │   ├── DemoWelcome.json — Demo welcome email template
│   │   ├── donation-receipt.json — Donation receipt template
│   │   └── subscription-receipt.json — Subscription receipt template
│   └── task-definitions/ — ECS task definition JSON files (15 files)
│       ├── main-service-config.json
│       ├── mirror-service-config.json
│       ├── lrs-service-config.json
│       └── ... (versioned task defs)
│
├── docs/ — Technical documentation (16 HTML + index)
│   ├── index.html
│   ├── Trinity-Beast-API-Reference.html
│   ├── Trinity-Beast-Application-Parameters.html
│   ├── Trinity-Beast-Architecture-Guide.html
│   ├── Trinity-Beast-CloudFormation-Guide.html
│   ├── Trinity-Beast-CloudWatch-Guide.html
│   ├── Trinity-Beast-Docker-Setup-Guide.html
│   ├── Trinity-Beast-Infrastructure-Specification.html
│   ├── Trinity-Beast-LRS-Report-Management.html
│   ├── Trinity-Beast-Management-Console-Guide.html
│   ├── Trinity-Beast-Optimization-Guide.html
│   ├── Trinity-Beast-Performance-Report.html
│   ├── Trinity-Beast-Project-Structure.html
│   ├── Trinity-Beast-Quick-Reference.html
│   ├── Trinity-Beast-Stripe-Implementation.html
│   └── stripe-payment-links-setup.html
│
├── internal/ — Private application packages (10 packages)
│   ├── apikeys/
│   │   └── apikeys.go — API key store: load, cache, validate, refresh
│   ├── cache/
│   │   └── redis.go — ElastiCache connection, price cache read/write
│   ├── config/
│   │   ├── params.go — Application parameter loader (Aurora → ElastiCache → runtime)
│   │   └── secrets.go — AWS Secrets Manager reader
│   ├── database/
│   │   ├── postgres.go — Aurora connection pool, reader/writer endpoints
│   │   ├── shutdown.go — Graceful shutdown: drain connections, flush buffers
│   │   └── sqs_producer.go — SQS message producer for queued usage log pipeline
│   ├── handlers/
│   │   ├── admin.go — Admin endpoints: /admin/invalidate-key, /admin/system-mode, /admin/feed-status
│   │   ├── adminsql.go — /admin/sql: read/write SQL with DDL blocking
│   │   ├── adminsqlbatch.go — /admin/sql-batch: unrestricted DDL + multi-statement SQL
│   │   ├── adminstressreport.go — /admin/seed-stress-report: stress test result seeding
│   │   ├── adminvalkey.go — /admin/valkey: direct ElastiCache command execution
│   │   ├── analytics.go — Page analytics: /analytics/pageview, /analytics/event, /admin/page-analytics
│   │   ├── checkout.go — /checkout: Stripe Payment Link redirect with locale passthrough
│   │   ├── demo_leads.go — Demo lead capture + SES welcome email
│   │   ├── deps.go — HandlerDeps struct: shared dependencies for all handlers
│   │   ├── email.go — SES email sending helper + email admin CRUD
│   │   ├── errors.go — Standardized error response formatting (TBC + plain)
│   │   ├── exchangefeeds.go — Exchange Manager: CRUD for exchange_feeds + exchange_asset_map
│   │   ├── mappins.go — Map pin data endpoint for impact map
│   │   ├── newsletter.go — Newsletter subscribe/unsubscribe/send/admin endpoints
│   │   ├── partners.go — Partner application: submit, status check, admin review
│   │   ├── pipeline.go — Background pipeline: batch Aurora writes, SQS integration
│   │   ├── price.go — /price endpoint: fetch, cache, rate-limit, respond
│   │   ├── profiles.go — Application parameter profiles: migration, seed, list
│   │   ├── public_status.go — /public/status: public infrastructure dashboard (no auth)
│   │   ├── response.go — Unified JSON response writer (TBC envelope format)
│   │   ├── stressstats.go — /admin/stress-stats, /admin/cluster-stats (24 atomic counters)
│   │   ├── support.go — Support ticket submission + admin management
│   │   ├── udp.go — UDP v6: multi-socket, zero-alloc response builder, worker pools
│   │   ├── webhook.go — Webhook Push management: /webhook/configure, /verify, /status, /assets
│   │   └── webhook_delivery.go — Webhook delivery engine: table-driven push via UDP + HTTPS
│   ├── lrs/
│   │   ├── counter.go — LRS report counter: per-key monthly limit tracking
│   │   ├── errors.go — LRS-specific error types
│   │   ├── handlers.go — /reports/usage, /reports/summary endpoints
│   │   ├── middleware.go — LRS authentication and rate limiting middleware
│   │   ├── params.go — LRS configuration parameters
│   │   ├── types.go — LRS data types: UsageReport, SummaryReport
│   │   └── usagelogger.go — LRS usage log writer for report-on-report tracking
│   ├── middleware/
│   │   ├── adaptive.go — Distributed Adaptive Governor: cross-node rate coordination
│   │   ├── adminauth.go — Admin endpoint authentication middleware
│   │   ├── cors.go — CORS middleware for browser requests
│   │   └── ratelimit.go — Token bucket rate limiter with ElastiCache backing
│   ├── models/
│   │   ├── config.go — Config struct: all application parameters
│   │   ├── lpo.go — LPO-specific models: PriceResponse, SourceHealth
│   │   ├── models.go — Shared models: APIKeyData, UsageLog, Secret
│   │   └── timer.go — TimerManager: periodic task scheduling
│   ├── pricing/
│   │   ├── cache.go — Three-tier price cache: sync.Map → ElastiCache → REST
│   │   ├── cache_flush.go — FlushToElastiCache: batch pipeline write from local WS cache
│   │   ├── engine.go — PriceEngine: orchestrates source selection and fallback
│   │   ├── exchange_manager.go — Generic table-driven Exchange Manager: loads configs from Aurora
│   │   ├── health.go — Source health tracking: staleness detection, failover
│   │   ├── prewarm.go — Asset prewarming: Kraken batch + exchange_asset_map translation
│   │   ├── sources.go — Price sources: 6 WebSocket + 3 REST fallbacks
│   │   ├── websocket.go — WebSocket feeds: Coinbase WS, Gemini WS
│   │   ├── websocket_bybit.go — Bybit WebSocket v5 feed
│   │   ├── websocket_gateio.go — Gate.io WebSocket v4 feed
│   │   ├── websocket_helpers.go — Shared helpers: parseFloat, parseInt64
│   │   ├── websocket_kraken.go — Kraken WebSocket v2 feed
│   │   └── websocket_okx.go — OKX WebSocket v5 feed
│   ├── metrics/
│   │   └── runtime.go — RuntimeMetrics: 24 atomic counters for stress test observability
│   └── ratelimit/ — (empty, rate limiting in middleware/ratelimit.go)
│
├── pkg/
│   └── logger/
│       └── logger.go — Structured logger: module-tagged, level-filtered, JSON-ready
│
├── scripts/
│   ├── migrate_database.sql — Aurora schema migration
│   ├── perf-test-suite.sh — Performance test automation
│   ├── run_migration.sh — Migration runner
│   ├── seed-templates.sh — SES template seeder
│   ├── tbmcstart — Start TBCC (The Trinity Beast Command Center)
│   ├── tbmcstop — Stop TBCC
│   ├── terminal-relay.py — Python terminal relay for TBCC
│   ├── test.sh — Go test runner
│   ├── udp-stress-test.py — Python UDP load tester
│   ├── v4-setup-test-instance.sh — EC2 stress test instance setup
│   └── v4-stress-test.py — Python v4 stress test client
│
├── tests/stress/ — Compiled stress test binaries
│   ├── stress.go — Basic stress test
│   ├── stress_direct.go — Direct (ALB bypass) stress test
│   └── stress_full.go — Full ALB stress test
│
└── test-results/ — Stress test result documentation
    ├── run14-stress-results.md — Run 14 results
    └── v33-alb-bypass-rationale.md — ALB bypass testing rationale
API key store with in-memory sync.Map cache. Loads keys from Aurora on startup, refreshes periodically. Validates keys, checks tier limits, tracks current usage.
apikeys.go — API key store: load, cache, validate, refresh
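The load/cache/validate pattern described above can be sketched as follows. This is a minimal illustration, not the real apikeys package: the `KeyData` fields and `Refresh` signature are assumptions, and the production store loads from Aurora on a ticker rather than taking a map.

```go
// Minimal sketch of a sync.Map-backed API key store (illustrative types).
package main

import (
	"fmt"
	"sync"
)

type KeyData struct {
	Tier      string
	RateLimit int
	Active    bool
}

type KeyStore struct {
	keys sync.Map // map[string]KeyData
}

// Refresh replaces cached entries; production code would read them from Aurora.
func (s *KeyStore) Refresh(fresh map[string]KeyData) {
	for k, v := range fresh {
		s.keys.Store(k, v)
	}
}

// Validate returns the key's data if the key exists and is active.
func (s *KeyStore) Validate(apiKey string) (KeyData, bool) {
	v, ok := s.keys.Load(apiKey)
	if !ok {
		return KeyData{}, false
	}
	kd := v.(KeyData)
	return kd, kd.Active
}

func main() {
	store := &KeyStore{}
	store.Refresh(map[string]KeyData{"tb_demo_123": {Tier: "basic", RateLimit: 10, Active: true}})
	if kd, ok := store.Validate("tb_demo_123"); ok {
		fmt.Println(kd.Tier)
	}
}
```

Because `sync.Map` allows lock-free concurrent reads, validation on the request hot path never contends with the periodic refresh.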
ElastiCache (Valkey 7.2) connection management. Handles TLS connections, price cache read/write, app config hash, and connection health checks.
redis.go — ElastiCache connection, price cache read/write
Application parameter loading pipeline: Aurora → ElastiCache → runtime config struct. params.go loads the 33 parameters from the app:config hash (fast path) or Aurora (fallback). secrets.go reads credentials from AWS Secrets Manager.
params.go — Application parameter loader (Aurora → ElastiCache → runtime)
secrets.go — AWS Secrets Manager reader
Aurora PostgreSQL connection management. postgres.go manages dual connection pools (writer + reader endpoints). sqs_producer.go implements the channel-buffered SQS message producer that queues usage logs for the Lambda consumer. shutdown.go handles graceful connection draining.
postgres.go — Aurora connection pool, reader/writer endpoints
shutdown.go — Graceful shutdown: drain connections, flush buffers
sqs_producer.go — SQS message producer for queued usage log pipeline
All HTTP and UDP request handlers. deps.go defines the shared HandlerDeps struct. price.go is the core /price endpoint. udp.go runs dual UDP v6 servers with multi-socket zero-alloc architecture. exchangefeeds.go provides full CRUD for the table-driven Exchange Manager. webhook.go handles Webhook Push management (configure, verify, status, assets). webhook_delivery.go is the table-driven delivery engine that pushes prices to Associates via UDP + HTTPS. public_status.go powers the public Infrastructure Live dashboard. checkout.go handles Stripe checkout redirects with locale passthrough. analytics.go provides privacy-first page analytics.
admin.go — Admin endpoints: /admin/invalidate-key, /admin/system-mode, /admin/feed-status
adminsql.go — /admin/sql: read/write SQL with DDL blocking
adminsqlbatch.go — /admin/sql-batch: unrestricted DDL + multi-statement SQL
adminstressreport.go — /admin/seed-stress-report: stress test result seeding
adminvalkey.go — /admin/valkey: direct ElastiCache command execution
analytics.go — Page analytics: /analytics/pageview, /analytics/event, /admin/page-analytics
checkout.go — /checkout: Stripe Payment Link redirect with locale passthrough
demo_leads.go — Demo lead capture + SES welcome email
deps.go — HandlerDeps struct: shared dependencies for all handlers
email.go — SES email sending helper + email admin CRUD
errors.go — Standardized error response formatting (TBC + plain)
exchangefeeds.go — Exchange Manager: CRUD for exchange_feeds + exchange_asset_map + public /exchanges
mappins.go — Map pin data endpoint for impact map
newsletter.go — Newsletter subscribe/unsubscribe/send/admin endpoints
partners.go — Partner application: submit, status check, admin review, SES confirmation
pipeline.go — Background pipeline: batch Aurora writes, SQS integration
price.go — /price endpoint: fetch, cache, rate-limit, respond + background housekeeping
profiles.go — Application parameter profiles: migration, seed (16 profiles), list endpoint
public_status.go — /public/status: public infrastructure dashboard (no auth, powers Infrastructure Live page)
response.go — Unified JSON response writer (TBC envelope format)
stressstats.go — /admin/stress-stats, /admin/stress-reset, /admin/cluster-stats (24 atomic counters)
support.go — Support ticket submission, admin management, SES notifications
udp.go — UDP v6: multi-socket, zero-alloc response builder, per-socket worker pools
webhook.go — Webhook Push management: /webhook/configure, /verify, /status, /assets + tier template migration
webhook_delivery.go — Webhook delivery engine: table-driven push via UDP fire-and-forget + HTTPS signed POST
Complete Listener Report Service implementation. handlers.go serves /reports/usage and /reports/summary with JSON, CSV, TSV, and text output formats. counter.go tracks per-key monthly report limits in ElastiCache. middleware.go handles LRS-specific auth and rate limiting. usagelogger.go implements report-on-report tracking (LRS queries generate their own usage logs). types.go and params.go define data structures and configuration. errors.go provides LRS-specific error types.
counter.go — LRS report counter: per-key monthly limit tracking
errors.go — LRS-specific error types
handlers.go — /reports/usage, /reports/summary endpoints
middleware.go — LRS authentication and rate limiting middleware
params.go — LRS configuration parameters
types.go — LRS data types: UsageReport, SummaryReport
usagelogger.go — LRS usage log writer for report-on-report tracking
Request processing middleware. adaptive.go implements the Distributed Adaptive Governor — cross-node rate coordination via ElastiCache counters. ratelimit.go is the per-key token bucket rate limiter backed by ElastiCache. adminauth.go authenticates admin endpoints. cors.go handles CORS headers for browser requests.
adaptive.go — Distributed Adaptive Governor: cross-node rate coordination
adminauth.go — Admin endpoint authentication middleware
cors.go — CORS middleware for browser requests
ratelimit.go — Token bucket rate limiter with ElastiCache backing
All data structures. config.go defines the Config struct with all 33 application parameters and their defaults. models.go has APIKeyData, UsageLog, Secret, and other shared types. lpo.go has PriceResponse, SourceHealth, WSPrice. timer.go provides the TimerManager for periodic task scheduling.
config.go — Config struct: all application parameters
lpo.go — LPO-specific models: PriceResponse, SourceHealth
models.go — Shared models: APIKeyData, UsageLog, Secret
timer.go — TimerManager: periodic task scheduling
Price fetching engine. engine.go orchestrates source selection with automatic failover. websocket.go maintains persistent WebSocket connections to Coinbase and Gemini. Four additional WebSocket files (websocket_kraken.go, websocket_gateio.go, websocket_bybit.go, websocket_okx.go) provide real-time feeds from 6 exchanges total — 150 prewarmed assets with zero REST polling. Binance was removed (permanently geo-blocked from US). exchange_manager.go is the table-driven generic Exchange Manager that loads configurations from Aurora's exchange_feeds and exchange_asset_map tables. prewarm.go uses exchange_asset_map for Kraken symbol translation. cache_flush.go batch-writes local WebSocket prices to ElastiCache every 30 seconds.
cache.go — Three-tier price cache: sync.Map → ElastiCache → REST
cache_flush.go — FlushToElastiCache: batch pipeline write from local WS cache
engine.go — PriceEngine: orchestrates source selection and fallback
exchange_manager.go — Generic table-driven Exchange Manager: loads configs from Aurora
health.go — Source health tracking: staleness detection, failover
prewarm.go — Asset prewarming: Kraken batch + exchange_asset_map translation
sources.go — Price sources: 6 WebSocket + 3 REST fallbacks
websocket.go — WebSocket feeds: Coinbase WS, Gemini WS
websocket_bybit.go — Bybit WebSocket v5 feed
websocket_gateio.go — Gate.io WebSocket v4 feed
websocket_helpers.go — Shared helpers: parseFloat, parseInt64
websocket_kraken.go — Kraken WebSocket v2 feed
websocket_okx.go — OKX WebSocket v5 feed
Runtime observability for stress testing. runtime.go defines 24 atomic counters (lock-free, ~1ns per increment) covering throughput, cache performance, UDP health, background pool saturation, batch pipeline pressure, and DB connection utilization. The /admin/stress-stats endpoint reads all counters in ~50ns and computes derived metrics (RPS, hit percentages, drop rates). Zero overhead on the hot path.
runtime.go — RuntimeMetrics: 24 atomic counters, TakeSnapshot(), Reset()
Rate limiting logic lives in middleware/ratelimit.go and middleware/adaptive.go. This directory is reserved but currently unused.
Main LPO+LRS+Webhook server (production). Contains main.go which is the single-file server implementing all LPO, LRS, and Webhook Push functionality. The SERVER_TYPE environment variable controls which services start: APP_SERVER (LPO only), REPORT_SERVER (LRS only), APP_REPORT_SERVER (both LPO + LRS), or WEBHOOK_SERVER (outbound price push delivery). Also contains metrics.go for CloudWatch metric publishing to the TrinityBeast/LPO and TrinityBeast/LRS namespaces.
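The SERVER_TYPE dispatch can be pictured as a simple switch over the four documented values. The `servicesFor` helper below is a hypothetical stand-in for the real startup logic, which starts actual TCP/UDP listeners rather than returning names.

```go
// Illustrative SERVER_TYPE dispatch for the single-binary server.
package main

import (
	"fmt"
	"os"
)

// servicesFor maps SERVER_TYPE to the listeners that should start.
func servicesFor(serverType string) []string {
	switch serverType {
	case "APP_SERVER":
		return []string{"LPO"}
	case "REPORT_SERVER":
		return []string{"LRS"}
	case "APP_REPORT_SERVER":
		return []string{"LPO", "LRS"}
	case "WEBHOOK_SERVER":
		return []string{"WEBHOOK"}
	default:
		return nil // unknown type: start nothing, fail fast
	}
}

func main() {
	st := os.Getenv("SERVER_TYPE")
	if st == "" {
		st = "APP_REPORT_SERVER" // assumed default for local runs
	}
	fmt.Println(servicesFor(st))
}
```

This is what lets one Docker image back all three ECS services: only the task definition's environment differs.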
Interactive demo client for API testing. A command-line tool that exercises the LPO and LRS endpoints, useful for verifying deployments and demonstrating API capabilities to potential subscribers.
Receipt handler — legacy copy. The production version of this code lives in the trinity-beast-receipt-lambda repository. This copy is kept for reference but is not deployed from here.
Go stress test client used for load testing. Compiles to platform-specific binaries in bin/ (trinity-stress for macOS, trinity-stress-linux for Linux). Used during Run 17 performance testing to achieve 487,900 UDP and 369,600 TCP requests per second. The v5.0 distributed client drove 746,374 combined RPS across 3 stress clients.
Sync job — legacy copy. The production version of this code lives in the trinity-beast-sync-job repository. This copy is kept for reference but is not deployed from here.
Purpose-built Go Lambda that drains the trinity-beast-queued-usage-logs SQS queue and batch-inserts usage log entries into Aurora. Routes on a message type field for extensibility — currently handles usage_log type. Replaces the legacy in-process UsageWriter that shed logs under extreme load.
A standalone Go application that runs as a scheduled ECS Fargate task. Triggered nightly at 1 AM EST by EventBridge rule trinity-beast-nightly-sync. Reads usage_logs from Aurora PostgreSQL and writes them to ElastiCache as Redis hashes and sorted set indexes for fast LRS reporting. Maintains 93 days (~3 months) of rolling data.
trinity-beast-sync-job/
├── go.mod — Go module definition and dependencies
├── go.sum — Dependency checksums (auto-generated)
├── Dockerfile — Multi-stage build: Go 1.26 builder → Alpine 3.19 runtime (linux/amd64)
├── README.md — Project overview and usage
│
├── cmd/ — Application entry point
│   └── sync/ — Sync job application
│       └── main.go — Single-file sync: Aurora → ElastiCache (historical + incremental + prune)
│
├── internal/ — Private packages (reserved for future refactoring)
│   ├── models/ — Data structures: UsageLog (reserved)
│   └── sync/ — Sync logic: batch storage, pruning (reserved)
│
└── sync/ — Legacy directory (deprecated)
Contains the single main.go file that implements the entire sync pipeline. On first run (ElastiCache empty), loads 93 days of historical data. On subsequent runs, loads yesterday through current moment and prunes records older than 93 days. Stores each usage log as a Redis hash with 18 fields, plus sorted set indexes by timestamp, API key, and asset. All Redis keys have 93-day TTLs.
Unlike the LPO server, the Dockerfile lives at the project root. Same multi-stage pattern: Go 1.26 Alpine builder compiles a static linux/amd64 binary, then copies it into a minimal Alpine 3.19 runtime image. Runs as non-root appuser.
Required environment variables: CACHE_URL, DB_SECRET_NAME, AWS_REGION. Optional: FORCE_FULL_SYNC=true to reload all 93 days.
A Go Lambda function that handles post-checkout processing for donations, subscriptions, and LRS add-on purchases. Called by the thank-you pages via API Gateway after Stripe redirects. Reads the Stripe checkout session, records transactions in Aurora, generates API keys for subscribers, and sends branded SES email receipts.
trinity-beast-receipt-lambda/
├── go.mod — Go module definition and dependencies
├── go.sum — Dependency checksums (auto-generated)
├── bootstrap — Compiled Linux binary (Lambda runtime expects this name)
├── function.zip — Packaged zip for Lambda deployment (contains bootstrap)
│
├── cmd/ — Application entry point
│   └── handler/ — Lambda handler
│       └── main.go — Handler: Stripe session reader, Aurora inserts, SES email sender
│
└── handler/ — Legacy directory (deprecated)
Contains main.go implementing three flows: donation (transaction → SES receipt), subscription (users → api_keys → transaction → SES receipt with API key), and lrs-addon (validate subscriber → enable LRS → transaction → SES receipt). Uses direct HTTP calls to the Stripe API (no SDK). Reads secrets from AWS Secrets Manager. Tier configuration is read from the rate_limit_template table in Aurora — rate limits, burst limits, and query limits per tier are data-driven, not hardcoded.
Lambda on provided.al2023 runtime expects the binary to be named bootstrap. The function.zip contains just this binary. Built with GOOS=linux GOARCH=amd64 CGO_ENABLED=0 and stripped with -ldflags="-w -s" for minimal size.
Environment variables: DB_SECRET_NAME=trinity-beast-secrets, SES_FROM=No-Reply@CPMP-Site.org. AWS_REGION is provided automatically by Lambda.
The public-facing website for Cross Power Ministries of Pakistan. Contains 26 public HTML pages, 4 admin pages, 18 technical documents, shared CSS/JS, newsletter templates, and media assets. Deployed to S3 bucket trinity-beast-website-east2 and served via CloudFront distribution E110PRKEIYQVLL.
cpmp-redesign/ — Live website (S3 → CloudFront)
├── index.html — Homepage
├── donate.html — Donation page with impact cards
├── give.html — Alternative giving page
├── subscribe-listener.html — LPO subscription page with code examples
├── thank-you.html — Donation thank-you page
├── thank-you-listener.html — Subscription/LRS addon thank-you page
├── map.html — Impact map (Leaflet + marker clusters)
├── medical-camps.html — Medical camps impact page
├── freedom-moments.html — Freedom moments impact page
├── provisions.html — Provisions impact page
├── training.html — Education/training impact page
├── word-of-life.html — Bible distribution impact page
├── wheelchairs.html — Wheelchair impact page
├── clean-water.html — Clean water impact page
├── team.html — Team page
├── support.html — Support/contact page
├── newsletters.html — Newsletter archive page
├── newsletter-optout-cpmp.html — CPMP newsletter unsubscribe
├── newsletter-optout-lpo.html — LPO newsletter unsubscribe
├── authority.html — Statement of authority
├── privacy.html — Privacy policy
├── terms.html — Terms of service
├── copyright.html — Copyright notice
│
├── admin/ — Admin console pages
│   ├── trinity-beast-command-center.html — TBCC dashboard
│   ├── newsletteradmin.html — Newsletter administration
│   ├── emailadmin.html — Email administration
│   └── supportadmin.html — Support ticket administration
│
├── css/
│   └── style.css — Shared stylesheet
│
├── js/
│   └── theme.js — Theme switching
│
├── includes/
│   ├── header.html — Shared navigation header
│   └── footer.html — Shared footer
│
├── docs/ — Technical documentation (18 documents + index)
│   ├── index.html — Document Library landing page (grouped by category)
│   └── Trinity-Beast-*.html — 18 technical documents
│
├── templates/
│   ├── newsletter-template-cpmp.html — CPMP newsletter template
│   └── newsletter-template-lpo.html — LPO newsletter template
│
├── images/ — 55+ image assets (photos, favicons, QR codes)
├── icons/ — 4 SVG crypto token icons
├── videos/ — 6 MP4 impact videos
└── originals/ — Pre-redesign pages (reference only, NOT deployed)
The root directory contains all public-facing pages: the homepage (index.html), the donation flow (donate.html, give.html, thank-you.html), the subscription flow (subscribe-listener.html, thank-you-listener.html), seven impact category pages, the interactive impact map (map.html, using Leaflet with marker clusters), the team page, the support page, the newsletter archive, two newsletter opt-out pages, and four legal/policy pages.
Four admin console pages. The Trinity Beast Command Center (trinity-beast-command-center.html) provides a real-time dashboard for monitoring the LPO server. newsletteradmin.html, emailadmin.html, and supportadmin.html provide administration interfaces for newsletters, email, and support tickets respectively.
The complete Trinity Beast Documentation Library — 21 HTML documents plus an index page. The index page groups documents by category (Core Reference, Architecture, Data & Reports, Operations, Performance). All documents follow the same dark-theme HTML convention and are accessible from the subscription page at cpmp-site.org/docs/.
images/ contains 55+ assets including impact photos, favicons, and QR codes. icons/ has 4 SVG crypto token icons used on the subscription page. videos/ contains 6 MP4 impact videos embedded on category pages. originals/ holds pre-redesign pages for reference only — these are NOT deployed to S3.
Kiro steering files provide persistent project knowledge that is auto-loaded into AI context. Spec files capture iterative feature design with requirements, design documents, and task lists.
.kiro/
├── steering/ — Persistent project knowledge (auto-loaded into AI context)
│   ├── architecture.md — Canonical AWS resource names, infrastructure decisions
│   ├── conventions.md — Coding standards, branding, workflow rules
│   ├── deployment.md — Exact deployment commands for each component
│   └── project-structure.md — File locations and key files to watch
│
├── specs/ — Feature specifications (iterative design docs)
│   ├── lrs-report-limit-enforcement/
│   │   └── requirements.md, design.md, tasks.md
│   └── subscription-lifecycle-management/
│       └── requirements.md, design.md, tasks.md
│
└── settings/
    └── mcp.json — MCP server configuration
Four markdown files that define the canonical project knowledge. architecture.md lists every AWS resource name, ARN pattern, and infrastructure decision. conventions.md captures coding standards, branding rules, and workflow preferences. deployment.md contains the exact commands for deploying each component. project-structure.md maps file locations and key files to watch across all repositories.
Feature specifications follow a three-document pattern: requirements.md (acceptance criteria), design.md (technical approach), and tasks.md (implementation checklist). Each spec lives in its own directory named after the feature.
All three applications share the same Aurora PostgreSQL database and ElastiCache cluster, accessed via credentials stored in AWS Secrets Manager (trinity-beast-secrets).
LPO Server (main.go)
├── Receives price queries (TCP :8080, UDP :2679)
├── Fetches prices from Coinbase, Kraken, Gemini
├── Caches prices in ElastiCache
├── Writes usage_logs to Aurora (batched INSERT)
├── Serves LRS reports (TCP :9090, UDP :2680)
└── Reads usage data from ElastiCache

Sync Job (cmd/sync/main.go)
├── Triggered nightly by EventBridge at 1 AM EST
├── Reads usage_logs from Aurora
├── Writes to ElastiCache (hashes + sorted sets)
└── Prunes data older than 93 days

Receipt Lambda (cmd/handler/main.go)
├── Called by thank-you pages via API Gateway
├── Reads Stripe checkout session (HTTP)
├── Inserts into Aurora (users, api_keys, transactions)
├── Sends email via SES templates
└── Calls LPO Server /admin/invalidate-key for cache busting
| Shared Resource | LPO Server | Sync Job | Receipt Lambda |
|---|---|---|---|
| Aurora PostgreSQL | Read/Write (api_keys, usage_logs, application_parameters) | Read (usage_logs) | Write (users, api_keys, transactions) |
| ElastiCache (Valkey) | Read/Write (price cache + LRS queries) | Write (usage log hashes + indexes) | — |
| Secrets Manager | Read (trinity-beast-secrets) | Read (trinity-beast-secrets) | Read (trinity-beast-secrets + STRIPE_SECRET_KEY) |
| CloudWatch | Write (TrinityBeast/LPO + TrinityBeast/LRS metrics) | Write (logs only) | Write (logs only) |
| SES | Send (newsletters, support, demo welcome) | — | Send (DonationReceipt, SubscriptionReceipt, DemoWelcome templates) |
| LPO Server API | — | — | POST /admin/invalidate-key (cache busting after new subscriptions) |
Engineering techniques and architectural patterns used across the Trinity Beast platform.
Persistent WebSocket connections to Coinbase and Gemini provide zero-network-hop price updates. Prices land in a local sync.Map (zero-latency read). REST APIs (Coinbase, Kraken, Gemini) serve as fallbacks only when all WS feeds go stale. This eliminates per-request HTTP overhead for the hot path.
Layer 1: sync.Map (in-process, zero-latency). Layer 2: ElastiCache (cross-node, sub-millisecond). Layer 3: REST API call (network, ~50–200ms). Each layer falls through to the next only on miss or staleness. WebSocket feeds continuously populate Layer 1.
Cross-node rate coordination using ElastiCache atomic counters. Each ECS container tracks its own request rate and reads the cluster-wide counter to make local admission decisions. Prevents any single node from consuming disproportionate capacity. Implemented in middleware/adaptive.go.
Price responses are sent to the client immediately after cache/source lookup. Background goroutines then handle usage logging, metrics publishing, cache updates, and rate limit accounting. This minimizes perceived latency — the client gets their price before any bookkeeping happens.
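The respond-first pattern can be sketched with a buffered channel and a background consumer. This is an assumption-laden miniature: the real pipeline batches into Aurora and the type names here are invented.

```go
// Respond-first sketch: the handler returns the price immediately and hands
// bookkeeping to a background goroutine via a non-blocking channel send.
package main

import (
	"fmt"
	"sync"
)

type usageEvent struct{ apiKey, asset string }

type asyncLogger struct {
	ch     chan usageEvent
	logged []usageEvent
	wg     sync.WaitGroup
}

func newAsyncLogger(buf int) *asyncLogger {
	l := &asyncLogger{ch: make(chan usageEvent, buf)}
	l.wg.Add(1)
	go func() {
		defer l.wg.Done()
		for ev := range l.ch {
			l.logged = append(l.logged, ev) // real code: batch INSERT into Aurora
		}
	}()
	return l
}

func (l *asyncLogger) close() { close(l.ch); l.wg.Wait() }

func handlePrice(l *asyncLogger, apiKey, asset string) string {
	resp := asset + "=67000.00" // respond immediately (price value invented)
	select {
	case l.ch <- usageEvent{apiKey, asset}: // never blocks the hot path
	default: // buffer full: shed the log rather than stall the client
	}
	return resp
}

func main() {
	l := newAsyncLogger(1024)
	fmt.Println(handlePrice(l, "tb_abc", "BTC"))
	l.close()
	fmt.Println(len(l.logged)) // 1
}
```

The non-blocking `select` is the key design choice: under backpressure the client still gets its price, at the cost of a dropped log entry (which the SQS pipeline was later added to avoid).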
Usage logs are buffered in memory and flushed to Aurora in configurable batches (default 100 rows per INSERT). A background goroutine flushes on batch-full or timer expiry (whichever comes first). This reduces Aurora write IOPS by 100x compared to per-request INSERTs.
The same server binary serves both TCP (HTTP via ALB) and UDP (raw datagrams via NLB) on separate port pairs. LPO: TCP :8080 + UDP :2679. LRS: TCP :9090 + UDP :2680. UDP provides lower latency for high-frequency price consumers.
One Docker image runs all 3 ECS services. The SERVER_TYPE environment variable (APP_SERVER, REPORT_SERVER, APP_REPORT_SERVER) controls which listeners start. This simplifies builds, reduces ECR storage, and ensures all nodes run identical code.
Each subsystem gets its own named logger (e.g., logger.New("CoinbaseWS"), logger.New("Pricing")). All log output includes region, cluster node, module name, and level. Enables precise CloudWatch Logs Insights filtering by module.
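A module-tagged logger in this spirit can be sketched briefly. The line format and level names below are illustrative, not the real pkg/logger output (which also carries region and cluster node).

```go
// Module-tagged, level-filtered logger sketch.
package main

import "fmt"

type Level int

const (
	Debug Level = iota
	Info
	Warn
)

type Logger struct {
	module string
	min    Level
}

// New returns a logger tagged with its subsystem name, e.g. New("CoinbaseWS").
func New(module string) *Logger { return &Logger{module: module, min: Info} }

func (l *Logger) log(lvl Level, name, msg string) string {
	if lvl < l.min {
		return "" // level-filtered: below threshold, emit nothing
	}
	line := fmt.Sprintf("[%s] %s: %s", l.module, name, msg)
	fmt.Println(line)
	return line
}

func (l *Logger) Info(msg string) string  { return l.log(Info, "INFO", msg) }
func (l *Logger) Debug(msg string) string { return l.log(Debug, "DEBUG", msg) }

func main() {
	log := New("CoinbaseWS")
	log.Info("connected")       // printed with the module tag
	log.Debug("frame received") // filtered out at Info level
}
```

Because every line carries the module tag, a CloudWatch Logs Insights query can filter to one subsystem with a simple substring match.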
GOGC=300 (3x default) trades memory for throughput by reducing garbage collection frequency. Under high load, fewer GC pauses mean more consistent latency. Configurable via environment variable at container level.
SIGTERM triggers an ordered shutdown: stop accepting new connections → drain in-flight requests → flush usage log buffer to Aurora → close database connections → close ElastiCache connections. Prevents data loss during ECS rolling deployments.
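The ordered shutdown can be sketched as a signal handler driving a fixed step sequence. Each step below is a named stand-in for the real work; only the ordering is the point.

```go
// Ordered-shutdown sketch: on SIGTERM, run each teardown step in sequence.
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

type step struct {
	name string
	run  func()
}

// shutdown executes steps strictly in order and returns the order for inspection.
func shutdown(steps []step) []string {
	var order []string
	for _, s := range steps {
		s.run()
		order = append(order, s.name)
	}
	return order
}

func main() {
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGTERM, os.Interrupt)

	steps := []step{
		{"stop-listeners", func() {}},
		{"drain-inflight", func() {}},
		{"flush-usage-logs", func() {}},
		{"close-aurora", func() {}},
		{"close-elasticache", func() {}},
	}

	// Simulate ECS sending SIGTERM during a rolling deployment.
	go func() { sig <- syscall.SIGTERM }()
	<-sig
	fmt.Println(shutdown(steps))
}
```

The ordering matters: the usage-log buffer must be flushed before the Aurora pool closes, or the final batch is lost.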
The receipt Lambda runs outside the VPC to avoid NAT Gateway costs ($32/month). It calls the LPO server's public /admin/invalidate-key endpoint to bust cached API keys after creating new subscriptions. This saves infrastructure cost while maintaining cache consistency.
The nightly sync job tracks the last-synced timestamp in ElastiCache. On each run, it queries Aurora for only records newer than the high-water mark, avoiding full-table scans. First run (empty cache) loads 93 days of historical data.
LRS report queries generate their own usage logs, enabling meta-analytics: how often each API key queries reports, which report types are most popular, and report generation latency. Implemented in lrs/usagelogger.go.
The LPO server maintains two connection pools: one for the Aurora writer endpoint (INSERTs, UPDATEs) and one for the reader endpoint (SELECTs). This distributes load across Aurora's read replicas and prevents read queries from competing with writes.
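The routing decision between the two pools can be sketched as below; in the server each pool would be a *sql.DB opened against the corresponding Aurora cluster endpoint, and this prefix check is an illustrative simplification:

```go
package main

import (
	"fmt"
	"strings"
)

// poolFor routes SELECTs to the reader-endpoint pool and everything
// else (INSERT/UPDATE/DELETE/DDL) to the writer-endpoint pool, so
// read traffic lands on Aurora's read replicas.
func poolFor(query string) string {
	q := strings.ToUpper(strings.TrimSpace(query))
	if strings.HasPrefix(q, "SELECT") {
		return "reader"
	}
	return "writer"
}

func main() {
	fmt.Println(poolFor("SELECT price FROM quotes WHERE symbol = $1"))
	fmt.Println(poolFor("INSERT INTO usage_logs (api_key) VALUES ($1)"))
}
```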
The 33 application parameters are cached in ElastiCache as a single hash (app:config). On startup, the server reads from ElastiCache first (fast path). If unavailable, it falls back to Aurora. Parameters are refreshed every 60 seconds. This provides sub-millisecond config reads without hitting the database.
All three projects follow the standard Go project layout:
| Directory | Convention |
|---|---|
| cmd/ | Application entry points. Each subdirectory is a separate binary; its main.go is the entry point that gets compiled. |
| internal/ | Private packages. Go enforces that internal/ packages cannot be imported by code outside the parent module. Used for application-specific logic that shouldn't be shared. |
| pkg/ | Public packages. Code here can be imported by other projects. Reserved for shared utilities. |
| deployments/ | Infrastructure-as-code: Dockerfiles, CloudFormation templates, ECS task definitions, SES templates. |
| scripts/ | Operational scripts: database migrations, test runners, load testers. |
| configs/ | Configuration files (YAML, JSON, TOML). Currently application parameters are stored in Aurora's application_parameters table instead. |
| test-results/ | Stress test result documentation and analysis. Contains markdown files documenting performance test runs and rationale. |
| go.mod / go.sum | Go module files. go.mod declares the module path and dependencies; go.sum contains cryptographic checksums for reproducible builds. Never edit go.sum manually. |
Step-by-step instructions for building and deploying each application to its AWS runtime.
The LPO server image is shared by all three ECS services (BeastMain, BeastMirror, BeastLRS). One build and push updates all three.
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 211998422884.dkr.ecr.us-east-2.amazonaws.com
docker buildx build --platform linux/amd64 --no-cache \
-f trinity-beast-lpo-server/deployments/docker/Dockerfile \
-t 211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-lpo-server:latest \
--push --provenance=false \
trinity-beast-lpo-server/
Key flags: --platform linux/amd64 ensures the image runs on Fargate (even when building on Apple Silicon). --provenance=false prevents multi-arch manifest issues. --no-cache ensures a clean build with latest code changes.
aws ecs update-service --cluster trinity-beast-fargate-cluster \
--service trinity-beast-main-service --force-new-deployment --region us-east-2
aws ecs update-service --cluster trinity-beast-fargate-cluster \
--service trinity-beast-mirror-service --force-new-deployment --region us-east-2
aws ecs update-service --cluster trinity-beast-fargate-cluster \
--service trinity-beast-lrs-service --force-new-deployment --region us-east-2
Each service pulls the :latest image from ECR. The rolling deployment starts a new task, waits for it to pass health checks, then drains the old task. Typically completes in 2–3 minutes per service.
aws ecs describe-services --cluster trinity-beast-fargate-cluster \
--services trinity-beast-main-service trinity-beast-mirror-service trinity-beast-lrs-service \
--region us-east-2 \
--query 'services[*].{name:serviceName,running:runningCount,deployments:length(deployments)}'
All three should show running: 1 and deployments: 1 when complete. If deployments: 2, the rollout is still in progress.
curl https://api.cpmp-site.org/health — LPO health
curl https://api.cpmp-site.org/reports/usage?page_size=1 — LRS health
The sync job runs as a one-off Fargate task, not a persistent service. It's triggered nightly by EventBridge but can also be run manually.
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 211998422884.dkr.ecr.us-east-2.amazonaws.com
docker buildx build --platform linux/amd64 --no-cache \
-t 211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-sync-job:latest \
--push --provenance=false \
trinity-beast-sync-job/
The Dockerfile is at the project root (not in a deployments/ subfolder).
aws ecs run-task \
--cluster trinity-beast-fargate-cluster \
--task-definition trinity-beast-sync-job:1 \
--launch-type FARGATE \
--network-configuration '{
"awsvpcConfiguration": {
"subnets": ["subnet-06781ce7266a4b870"],
"securityGroups": ["sg-050b617f93b2388f6"],
"assignPublicIp": "ENABLED"
}
}' \
--region us-east-2
No force-new-deployment needed — the task definition points to :latest and pulls fresh on each run.
aws logs tail /aws/ecs/trinity-beast --filter-pattern "Sync" --region us-east-2 --since 5m
Look for: Incremental sync complete: N logs loaded and Sync complete in Xms - N logs in ElastiCache.
The Lambda is deployed as a zip file containing a single Go binary named bootstrap.
cd ~/trinity-beast-receipt-lambda
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build \
-ldflags="-w -s" -o bootstrap ./cmd/handler/
CGO_ENABLED=0 produces a static binary. -ldflags="-w -s" strips debug info for smaller size. The output must be named bootstrap for the provided.al2023 runtime.
zip -j function.zip bootstrap
The -j flag strips directory paths so bootstrap is at the zip root.
aws lambda update-function-code \
--function-name trinity-beast-receipt \
--zip-file fileb://function.zip \
--region us-east-2
Takes effect immediately — no rolling deployment. The next invocation uses the new code.
curl -X POST https://6vz2eswz55.execute-api.us-east-2.amazonaws.com/receipt \
-H "Content-Type: application/json" \
-d '{"session_id":"test","type":"donation"}'
Expected response: {"success":false,"message":"Failed to read payment session"} — confirms the Lambda is running and reachable. A real Stripe session_id is needed for a full test.