The short version: This is your insurance policy. If every single piece of The Trinity Beast Infrastructure were deleted tomorrow — every server, every database, every network — this one YAML file recreates all of it with a single command.
Think of CloudFormation as a blueprint for a building. When a contractor builds a house, they don't wing it — they follow a blueprint that describes every wall, every pipe, every electrical outlet. CloudFormation is the same idea, but for cloud infrastructure.
You write a YAML file that describes what you want: "I need a network, a database, three servers, a load balancer..." and AWS reads that file and builds everything for you. Every time. Exactly the same way. No clicking through consoles, no forgetting a step, no "it worked on my machine."
The Trinity Beast template is 126 resources defined in a single file. That's the entire platform — networking, databases, servers, load balancers, DNS, CDN, monitoring, and scheduling. One file. One command (`aws cloudformation create-stack`). Everything.
Why this matters: Without this template, recreating The Trinity Beast would mean manually clicking through dozens of AWS console screens, remembering exact settings, and hoping you don't miss anything. With this template, it's one command and a 30–45 minute wait. That's the difference between a weekend of panic and a coffee break.
The template defines 126 AWS resources organized into 9 layers. Each layer builds on the ones before it — you can't have servers without a network, and you can't have a load balancer without servers. CloudFormation handles the ordering automatically.
The foundation. Think of the VPC as the building itself — it defines the walls and rooms. Subnets are the rooms. The Internet Gateway is the front door. Security groups are the locks on each room's door.
| Resource | Details | Purpose |
|---|---|---|
| VPC | 10.0.0.0/16 (65,536 IPs) | The private network — your building |
| Public Subnets | 3 subnets across 3 AZs | Rooms with windows — internet-facing (load balancers live here) |
| Private Subnets | 3 subnets across 3 AZs | Interior rooms — no direct internet access (databases, servers live here) |
| Internet Gateway | 1 IGW attached to VPC | The front door to the internet |
| Route Tables | 4 route tables | Hallway signs — tell traffic where to go |
| Security Groups | 7 security groups | Door locks — control who can talk to what |
| VPC Endpoints | 10 endpoints | Private back doors to AWS services (no internet needed) |
| Flow Logs | VPC Flow Logs → CloudWatch | Security cameras — record all network traffic |
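In the template's YAML, the foundation pieces from this table take roughly this shape. A trimmed sketch: logical IDs such as `TrinityVPC` are illustrative, not the template's actual names, and most properties are omitted.

```yaml
Resources:
  TrinityVPC:                       # The building: 10.0.0.0/16 (65,536 IPs)
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true

  PublicSubnetA:                    # A room with windows (internet-facing)
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref TrinityVPC        # !Ref wires the subnet into the VPC
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: us-east-2a
      MapPublicIpOnLaunch: true

  InternetGateway:                  # The front door
    Type: AWS::EC2::InternetGateway

  GatewayAttachment:                # Hang the front door on the building
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref TrinityVPC
      InternetGatewayId: !Ref InternetGateway
```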
IAM roles are like employee badges — they define what each service is allowed to do. A server can read from the database but can't delete the network. Each role follows the principle of least privilege: only the permissions needed, nothing more.
| Role | Purpose |
|---|---|
| ECS Task Execution Role | Lets ECS pull Docker images from ECR and write logs to CloudWatch |
| ECS Task Role | Lets the running containers access Secrets Manager, S3, SES, and other AWS services |
| EventBridge Role | Lets the scheduler (EventBridge) launch ECS tasks for the nightly sync job |
| Flow Logs Role | Lets VPC Flow Logs write to CloudWatch Logs |
| Lambda Execution Role | Lets the receipt Lambda function write logs and access needed services |
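As a sketch, a role like the ECS Task Execution Role is declared by stating who may assume it and what it may do. The logical ID is illustrative; the managed policy ARN is AWS's standard execution policy.

```yaml
Resources:
  ECSTaskExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:           # Only ECS tasks may wear this badge
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:                  # Pull from ECR, write to CloudWatch Logs
        - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
```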
Where your data lives. Aurora is the main database (think: filing cabinets). ElastiCache is the speed cache (think: sticky notes on your desk for things you need instantly). ECR stores your Docker images. S3 stores your website files.
| Resource | Details | Purpose |
|---|---|---|
| Aurora Serverless v2 | PostgreSQL 17.7, 1–21 ACU, Optimized I/O | Main database — writer + reader instances |
| ElastiCache for Valkey | Valkey 7.3, cache.r7g.large, 13 GB, TLS | In-memory cache — sub-millisecond reads |
| ECR Repositories | 5 repositories | Docker image storage (like a private Docker Hub) |
| S3 Bucket | cpmp-ministry-site-east2 | Website files, CloudFormation templates, backups |
| Secrets Manager | trinity-beast-secrets | Encrypted storage for DB passwords, Stripe keys, API keys |
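The Aurora cluster from the table might be sketched like this. Illustrative logical IDs and trimmed properties (subnet groups and security groups omitted); the scaling range and Optimized I/O setting follow the table above.

```yaml
Resources:
  AuroraCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-postgresql
      EngineVersion: '17.7'
      StorageType: aurora-iopt1             # Aurora Optimized I/O
      MasterUsername: !Ref DBUsername
      MasterUserPassword: !Ref DBPassword   # NoEcho parameter, never logged
      ServerlessV2ScalingConfiguration:
        MinCapacity: 1                      # 1–21 ACU, per the table above
        MaxCapacity: 21
```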
The workers. ECS Fargate runs your containers without you managing servers. Think of it as hiring workers and only paying for the hours they work — no building maintenance. Lambda runs small, one-off tasks (like processing a receipt after a purchase).
| Resource | Details | Purpose |
|---|---|---|
| ECS Cluster | trinity-beast-fargate-cluster | The factory floor — organizes all services |
| Main Service | trinity-beast-main-service — 8 vCPU / 32 GB | LPO + LRS (SERVER_TYPE: APP_REPORT_SERVER) |
| Mirror Service | trinity-beast-mirror-service — 8 vCPU / 32 GB | LPO + LRS (SERVER_TYPE: APP_REPORT_SERVER) |
| LRS Service | trinity-beast-lrs-service — 8 vCPU / 32 GB | LPO + LRS (SERVER_TYPE: APP_REPORT_SERVER) |
| Webhook Service | trinity-beast-webhook-service — 8 vCPU / 32 GB | Webhook Push delivery engine (SERVER_TYPE: WEBHOOK_SERVER) |
| Sync Job Task Def | trinity-beast-sync-job | Nightly database sync task definition |
| Lambda Function | trinity-beast-receipt — Go (provided.al2023) | Post-checkout receipt processing |
| SQS Queue | trinity-beast-queued-usage-logs — Standard | Decoupled usage log write pipeline |
| Lambda Function | trinity-beast-queued-writer — Go (provided.al2023) | SQS consumer — batch-inserts usage logs into Aurora |
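A trimmed sketch of one of the 8 vCPU / 32 GB task definitions. Logical IDs, role names, and the container name are illustrative, not the template's exact values.

```yaml
Resources:
  MainTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: '8192'                 # 8 vCPU
      Memory: '32768'             # 32 GB
      ExecutionRoleArn: !GetAtt ECSTaskExecutionRole.Arn
      TaskRoleArn: !GetAtt ECSTaskRole.Arn
      ContainerDefinitions:
        - Name: lpo-server
          Image: !Sub '${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/trinity-beast-lpo-server:${LPOImageTag}'
          Environment:
            - Name: SERVER_TYPE   # Same image, different role per service
              Value: APP_REPORT_SERVER
```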
Load balancers are traffic cops. They stand at the front door and direct incoming requests to the right server. The ALB handles web traffic (HTTP/HTTPS). The NLB handles UDP traffic (real-time price feeds).
| Resource | Listeners | Purpose |
|---|---|---|
| ALB (Trinity-Beast-TCP-ALB) | Port 80, 443, 8080, 9090 | Web traffic — API, LRS reports, HTTPS redirect |
| NLB (Trinity-Beast-UDP-NLB) | UDP 2679, 2680 | Real-time UDP price feeds |
| Target Groups | 4 target groups | Route traffic to the right ECS service |
Listener breakdown: Port 80 redirects to 443 (HTTPS). Port 443 serves the API. Port 8080 routes to the main service. Port 9090 routes to LRS services. Ports 8081 and 9091 serve dedicated health checks (isolated from production traffic). UDP 2679 and 2680 deliver real-time price data. The Webhook service (BeastWebhook) does not use the ALB or NLB — it pushes prices outbound to subscribers via UDP datagrams and signed HTTPS POSTs.
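The port-80-to-443 redirect from the breakdown above is a one-resource pattern. Sketched here with illustrative logical IDs:

```yaml
Resources:
  HTTPListener:                   # Port 80: redirect everything to HTTPS
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref ALB
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: redirect
          RedirectConfig:
            Protocol: HTTPS
            Port: '443'
            StatusCode: HTTP_301  # Permanent redirect
```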
DNS is the phone book of the internet. When someone types cpmp-site.org, DNS tells their browser which server to talk to. Route 53 manages all of this.
| Record | Points To | Purpose |
|---|---|---|
| cpmp-site.org | CloudFront | Main website |
| www.cpmp-site.org | CloudFront | WWW alias for website |
| api.cpmp-site.org | ALB | REST API endpoint |
| lrs.cpmp-site.org | ALB | LRS report server |
| udp.cpmp-site.org | NLB | UDP price feed endpoint |
| MX record | SES inbound | Email receiving |
| SPF record | TXT record | Email authentication — "yes, we're allowed to send email" |
| DMARC record | TXT record | Email policy — tells receivers how to handle our email |
| SES verification | TXT/CNAME records | Proves we own the domain for sending email |
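A sketch of one of these records, `api.cpmp-site.org` as an alias to the ALB. Logical IDs are illustrative; `!GetAtt` resolves the load balancer's DNS name at deploy time.

```yaml
Resources:
  ApiRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: cpmp-site.org.      # Trailing dot is required
      Name: api.cpmp-site.org
      Type: A
      AliasTarget:                        # Alias, not a hardcoded IP
        DNSName: !GetAtt ALB.DNSName
        HostedZoneId: !GetAtt ALB.CanonicalHostedZoneID
```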
CloudFront is like having copies of your website in cities around the world. Instead of everyone connecting to Ohio, visitors get served from the nearest location. Faster for them, less load on your servers.
| Setting | Value |
|---|---|
| Origin | S3 bucket (cpmp-ministry-site-east2) |
| HTTP → HTTPS | Automatic redirect |
| TLS Version | TLS 1.2 minimum |
| Certificate | ACM certificate in us-east-1 (CloudFront requirement) |
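Those settings map to a distribution roughly like this. A trimmed sketch: the managed `CachingOptimized` cache policy ID is AWS's published constant, everything else (logical IDs, origin ID) is illustrative.

```yaml
Resources:
  Distribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Origins:
          - Id: s3-website
            DomainName: cpmp-ministry-site-east2.s3.us-east-2.amazonaws.com
            S3OriginConfig: {}
        DefaultCacheBehavior:
          TargetOriginId: s3-website
          ViewerProtocolPolicy: redirect-to-https   # HTTP -> HTTPS, automatic
          CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6  # Managed-CachingOptimized
        ViewerCertificate:
          AcmCertificateArn: !Ref ACMCertificateArnEast1       # Must live in us-east-1
          MinimumProtocolVersion: TLSv1.2_2021
          SslSupportMethod: sni-only
```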
Monitoring is your early warning system. CloudWatch watches metrics (CPU usage, error rates, database health) and SNS sends you a text or email when something goes wrong. You find out about problems before your users do.
| Category | Alarms | What They Watch |
|---|---|---|
| ECS CPU | CPU alarms per service | Container CPU usage — alerts if consistently high |
| ECS Service Count | Running task count | Alerts if a service has zero running tasks (it's down) |
| Aurora | CPU, connections, storage | Database health — CPU spikes, connection exhaustion |
| ElastiCache | CPU, memory, connections | Cache health — memory pressure, connection limits |
| ALB/NLB | 5xx errors, unhealthy targets | Load balancer health — are requests failing? |
Alert delivery: SNS topic sends to both email ([email]) and SMS ([phone]). You get notified both ways for every critical alarm.
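One of the ECS CPU alarms might be sketched like this. The logical ID and SNS topic reference are illustrative, and the threshold and evaluation periods are example values, not necessarily the template's.

```yaml
Resources:
  MainServiceCPUAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Main service CPU consistently high
      Namespace: AWS/ECS
      MetricName: CPUUtilization
      Dimensions:
        - Name: ClusterName
          Value: trinity-beast-fargate-cluster
        - Name: ServiceName
          Value: trinity-beast-main-service
      Statistic: Average
      Period: 300                # 5-minute windows...
      EvaluationPeriods: 3       # ...high for 15 minutes straight
      Threshold: 80
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - !Ref AlertTopic        # SNS topic fans out to email + SMS
```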
EventBridge is your cron job in the cloud. It runs tasks on a schedule without you lifting a finger.
| Rule | Schedule | What It Does |
|---|---|---|
| trinity-beast-nightly-sync | cron(0 6 * * ? *) = 1:00 AM EST | Launches the sync job ECS task to synchronize database data nightly |
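In the template, a rule like this might be expressed as follows. A sketch: logical IDs and the subnet reference are illustrative, not the template's exact resources.

```yaml
Resources:
  NightlySyncRule:
    Type: AWS::Events::Rule
    Properties:
      Name: trinity-beast-nightly-sync
      ScheduleExpression: cron(0 6 * * ? *)   # 06:00 UTC = 1:00 AM EST
      Targets:
        - Id: sync-job
          Arn: !GetAtt ECSCluster.Arn
          RoleArn: !GetAtt EventBridgeRole.Arn  # Permission to launch ECS tasks
          EcsParameters:
            TaskDefinitionArn: !Ref SyncJobTaskDefinition
            LaunchType: FARGATE
            NetworkConfiguration:
              AwsVpcConfiguration:
                Subnets: [!Ref PrivateSubnetA]  # Runs in a private subnet
```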
Parameters are the blanks you fill in when deploying. Think of them as the customization options on an order form. Most have sensible defaults — you only need to provide two values.
Only 2 required: DBPassword and SecretValue are the only parameters you absolutely must provide. Everything else has a default that matches the current production setup.
| Parameter | Purpose | Default | Secret? | Required? |
|---|---|---|---|---|
| DBPassword | PostgreSQL master password for Aurora cluster | None — you must provide this | Yes (NoEcho) | YES |
| SecretValue | JSON string for trinity-beast-secrets (DB creds, Stripe keys, API keys) | None — you must provide this | Yes (NoEcho) | YES |
| DBUsername | PostgreSQL master username | postgres | No | No |
| DBName | Aurora database name | CPMP_Backend_Aurora | No | No |
| DomainName | Primary domain name | cpmp-site.org | No | No |
| ACMCertificateArnEast2 | ACM certificate ARN in us-east-2 (for ALB HTTPS) | Current production cert ARN | No | No |
| ACMCertificateArnEast1 | ACM certificate ARN in us-east-1 (for CloudFront) | Current production cert ARN | No | No |
| AlertEmail | Email address for critical alerts | [email] | No | No |
| AlertSMS | Phone number for SMS alerts | [phone] | No | No |
| SESFromAddress | SES sender address for receipt emails | CPMP Mission <No-Reply@CPMP-Site.org> | No | No |
| LPOImageTag | Docker image tag for LPO server | latest | No | No |
| SyncImageTag | Docker image tag for sync job | latest | No | No |
About SecretValue: This is a JSON string containing all the secrets your application needs. Format: {"DB_PASSWORD":"...","STRIPE_SECRET_KEY":"...","STRIPE_WEBHOOK_SECRET":"...",...}. Keep this stored securely outside of AWS — in a password manager, encrypted file, or similar. If you lose this, you'll need to regenerate all the API keys and passwords it contains.
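As a rough sketch of how the required and defaulted parameters might be declared in the template's Parameters section (the structure is standard CloudFormation; the exact descriptions are illustrative):

```yaml
Parameters:
  DBPassword:
    Type: String
    NoEcho: true          # Never echoed in console output or describe-stacks
    Description: PostgreSQL master password for the Aurora cluster
  SecretValue:
    Type: String
    NoEcho: true
    Description: JSON string of application secrets for trinity-beast-secrets
  DBUsername:
    Type: String
    Default: postgres     # Optional; the default matches production
```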
CloudFormation doesn't just create resources in isolation — it wires them together using two key mechanisms: !Ref (reference another resource's ID) and !GetAtt (get a specific attribute like an endpoint URL). Here's what that looks like in plain English.
graph TB
Internet["Internet"]
subgraph DNS_CDN["DNS & CDN"]
Route53["Route 53
cpmp-site.org"]
CloudFront["CloudFront
Static Website"]
S3["S3 Bucket
cpmp-ministry-site-east2"]
end
subgraph LoadBalancers["Load Balancers"]
ALB["ALB
Trinity-Beast-TCP-ALB
TCP: 80, 443 → 8080, 9090"]
NLB["NLB
Trinity-Beast-UDP-NLB
UDP: 2679, 2680"]
end
subgraph ECS["ECS Fargate Cluster — 4 Services"]
Main["BeastMain · AZ 2a
APP_REPORT_SERVER
8 vCPU / 32 GB"]
Mirror["BeastMirror · AZ 2b
APP_REPORT_SERVER
8 vCPU / 32 GB"]
LRS["BeastLRS · AZ 2c
APP_REPORT_SERVER
8 vCPU / 32 GB"]
Webhook["BeastWebhook
WEBHOOK_SERVER
8 vCPU / 32 GB"]
end
subgraph Data["Data Layer"]
Aurora["Aurora PostgreSQL
Serverless v2
1–21 ACU"]
MemDB["ElastiCache
Valkey 7.3
cache.r7g.large · 13 GB"]
Secrets["Secrets Manager
trinity-beast-secrets"]
end
Lambda["Lambda
trinity-beast-receipt
Go / provided.al2023"]
EventBridge["EventBridge
Nightly Sync
1 AM EST"]
SyncJob["Sync Job
0.5 vCPU / 1 GB"]
SNS["SNS
Critical Alerts
Email + SMS"]
CW["CloudWatch
4 Dashboards
14 Alarms"]
Internet --> Route53
Route53 -->|"cpmp-site.org"| CloudFront
Route53 -->|"api / lrs"| ALB
Route53 -->|"udp"| NLB
CloudFront --> S3
Internet -->|"Stripe Webhook"| Lambda
ALB -->|"TCP 8080"| Main
ALB -->|"TCP 8080"| Mirror
ALB -->|"TCP 9090"| LRS
NLB -->|"UDP 2679"| Main
NLB -->|"UDP 2679"| Mirror
NLB -->|"UDP 2680"| LRS
Main --> Aurora
Main --> MemDB
Mirror --> Aurora
Mirror --> MemDB
LRS --> Aurora
LRS --> MemDB
Webhook --> Aurora
Webhook --> MemDB
Main -.->|"!GetAtt Endpoint"| Aurora
Main -.->|"!GetAtt Endpoint"| MemDB
Main -.->|"Reads"| Secrets
Lambda --> Aurora
Lambda --> MemDB
Lambda -.->|"Reads"| Secrets
EventBridge -->|"cron 0 6 * * ?"| SyncJob
SyncJob --> Aurora
SyncJob --> MemDB
CW -.->|"ALARM"| SNS
%% Internet → DNS — white
linkStyle 0 stroke:#e2e8f0,stroke-width:2px
%% DNS → CDN — pink (website path)
linkStyle 1 stroke:#f472b6,stroke-width:2px
%% DNS → ALB — blue (TCP)
linkStyle 2 stroke:#60a5fa,stroke-width:2px
%% DNS → NLB — orange (UDP)
linkStyle 3 stroke:#FF9900,stroke-width:2px
%% CloudFront → S3 — pink
linkStyle 4 stroke:#f472b6,stroke-width:2px
%% Internet → Lambda — violet (Stripe)
linkStyle 5 stroke:#a78bfa,stroke-width:2px
%% ALB → ECS — blue (TCP)
linkStyle 6 stroke:#60a5fa,stroke-width:2px
linkStyle 7 stroke:#60a5fa,stroke-width:2px
linkStyle 8 stroke:#60a5fa,stroke-width:2px
%% NLB → ECS — orange (UDP)
linkStyle 9 stroke:#FF9900,stroke-width:2px
linkStyle 10 stroke:#FF9900,stroke-width:2px
linkStyle 11 stroke:#FF9900,stroke-width:2px
%% ECS → Aurora — red (database)
linkStyle 12 stroke:#f87171,stroke-width:2px
linkStyle 14 stroke:#f87171,stroke-width:2px
linkStyle 16 stroke:#f87171,stroke-width:2px
linkStyle 18 stroke:#f87171,stroke-width:2px
%% ECS → ElastiCache — green (cache)
linkStyle 13 stroke:#10b981,stroke-width:2px
linkStyle 15 stroke:#10b981,stroke-width:2px
linkStyle 17 stroke:#10b981,stroke-width:2px
linkStyle 19 stroke:#10b981,stroke-width:2px
%% !GetAtt / !Ref wiring — cyan (dashed)
linkStyle 20 stroke:#22d3ee,stroke-width:1.5px
linkStyle 21 stroke:#22d3ee,stroke-width:1.5px
%% Secrets reads — rose
linkStyle 22 stroke:#fca5a5,stroke-width:1.5px
%% Lambda → Aurora — red
linkStyle 23 stroke:#f87171,stroke-width:2px
%% Lambda → ElastiCache — green
linkStyle 24 stroke:#10b981,stroke-width:2px
%% Lambda → Secrets — rose
linkStyle 25 stroke:#fca5a5,stroke-width:1.5px
%% EventBridge → SyncJob — yellow
linkStyle 26 stroke:#facc15,stroke-width:2px
%% SyncJob → Aurora — red
linkStyle 27 stroke:#f87171,stroke-width:2px
%% SyncJob → ElastiCache — green
linkStyle 28 stroke:#10b981,stroke-width:2px
%% CloudWatch → SNS — rose (alerts)
linkStyle 29 stroke:#fca5a5,stroke-width:1.5px
style Internet fill:#FF9900,color:#0f172a,stroke:#FF9900
style ALB fill:#1e293b,stroke:#60a5fa,color:#e2e8f0
style NLB fill:#1e293b,stroke:#FF9900,color:#e2e8f0
style Main fill:#064e3b,stroke:#10b981,color:#e2e8f0
style Mirror fill:#064e3b,stroke:#10b981,color:#e2e8f0
style LRS fill:#064e3b,stroke:#10b981,color:#e2e8f0
style Webhook fill:#064e3b,stroke:#10b981,color:#e2e8f0
style Aurora fill:#1e293b,stroke:#f87171,color:#e2e8f0
style MemDB fill:#1e293b,stroke:#10b981,color:#e2e8f0
style Lambda fill:#1e293b,stroke:#a78bfa,color:#e2e8f0
style CloudFront fill:#1e293b,stroke:#f472b6,color:#e2e8f0
style S3 fill:#1e293b,stroke:#f472b6,color:#e2e8f0
style Route53 fill:#1e293b,stroke:#e2e8f0,color:#e2e8f0
style Secrets fill:#7f1d1d,stroke:#fca5a5,color:#e2e8f0
style SNS fill:#7f1d1d,stroke:#fca5a5,color:#e2e8f0
style CW fill:#1e293b,stroke:#60a5fa,color:#e2e8f0
style SyncJob fill:#1e293b,stroke:#facc15,color:#e2e8f0
style EventBridge fill:#1e293b,stroke:#facc15,color:#e2e8f0
The Aurora cluster endpoint is automatically passed to all 4 ECS task definitions as the DB_HOST environment variable. When CloudFormation creates the Aurora cluster, it gets an endpoint like trinity-beast-aurora.cluster-xxxxx.us-east-2.rds.amazonaws.com. That endpoint is injected into every container so they know where the database is — no hardcoding needed.
The ElastiCache cluster endpoint is passed to all 4 ECS task definitions as the CACHE_URL environment variable. Same idea — CloudFormation creates the cache, gets the endpoint, and passes it to the containers automatically.
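A sketch of how that injection might look inside a task definition. Logical IDs like `AuroraCluster` are illustrative, and this assumes the cache is declared as an `AWS::ElastiCache::ReplicationGroup`.

```yaml
      ContainerDefinitions:
        - Name: lpo-server
          Environment:
            - Name: DB_HOST      # Aurora writer endpoint, resolved at deploy time
              Value: !GetAtt AuroraCluster.Endpoint.Address
            - Name: CACHE_URL    # ElastiCache primary endpoint
              Value: !GetAtt CacheReplicationGroup.PrimaryEndPoint.Address
```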
Security groups reference each other to create a chain of trust: the load balancers' security groups accept traffic from the internet, the ECS containers' security group accepts traffic only from the load balancers' groups, and the database and cache security groups accept traffic only from the ECS containers' group.
This is defense in depth. Even if someone bypasses the load balancer, they can't reach the database directly because the security group only trusts traffic from the ECS containers.
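A minimal sketch of one link in that chain (logical IDs are illustrative): the database's security group admits PostgreSQL traffic only from the ECS containers' security group, referenced by group ID rather than by IP.

```yaml
  DatabaseSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Aurora accepts traffic only from ECS containers
      VpcId: !Ref TrinityVPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 5432                                 # PostgreSQL
          ToPort: 5432
          SourceSecurityGroupId: !Ref ECSSecurityGroup   # Trust the group, not an address
```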
Route 53 records use !GetAtt to point at the load balancers:
- api.cpmp-site.org → ALB's DNS name (automatically resolved)
- udp.cpmp-site.org → NLB's DNS name
- cpmp-site.org → CloudFront distribution domain

If the load balancer's address changes (e.g., after a rebuild), the DNS records update automatically because they reference the resource, not a hardcoded IP.
The Secrets Manager secret ARN is passed to ECS task definitions. At runtime, ECS pulls the secret values and injects them as environment variables. Your application code never sees the raw secret — it just reads environment variables like DB_PASSWORD and STRIPE_SECRET_KEY.
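In a task definition, that pattern might look like this. The secret's logical ID is illustrative; the `:KEY::` suffix is ECS's syntax for pulling one key out of a JSON secret (key, then empty version stage and version ID).

```yaml
      ContainerDefinitions:
        - Name: lpo-server
          Secrets:   # ECS fetches these from Secrets Manager at task start
            - Name: DB_PASSWORD
              ValueFrom: !Sub '${TrinityBeastSecrets}:DB_PASSWORD::'
            - Name: STRIPE_SECRET_KEY
              ValueFrom: !Sub '${TrinityBeastSecrets}:STRIPE_SECRET_KEY::'
```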
Deploying the stack means telling CloudFormation: "Build everything described in this template." Here's how, step by step.
Before deploying, verify your prerequisites:

- Run `aws sts get-caller-identity` to confirm you're authenticated to account 211998422884.
- An ACM certificate exists in us-east-2 (for the ALB).
- An ACM certificate exists in us-east-1 (for CloudFront — this is an AWS requirement, CloudFront only uses us-east-1 certs).

Important: Replace the placeholder values below with your actual secrets. Never commit real secrets to version control.
aws cloudformation create-stack \
--stack-name trinity-beast-stack \
--template-body file://trinity-beast-stack.yaml \
--capabilities CAPABILITY_NAMED_IAM \
--region us-east-2 \
--parameters \
ParameterKey=DBPassword,ParameterValue='YOUR_DB_PASSWORD_HERE' \
ParameterKey=SecretValue,ParameterValue='{"DB_PASSWORD":"...","STRIPE_SECRET_KEY":"...","STRIPE_WEBHOOK_SECRET":"..."}' \
ParameterKey=DBUsername,ParameterValue=postgres \
ParameterKey=DBName,ParameterValue=CPMP_Backend_Aurora \
ParameterKey=DomainName,ParameterValue=cpmp-site.org \
ParameterKey=AlertEmail,ParameterValue='[email]' \
ParameterKey=AlertSMS,ParameterValue='[phone]' \
ParameterKey=LPOImageTag,ParameterValue=latest \
ParameterKey=SyncImageTag,ParameterValue=latest
What's CAPABILITY_NAMED_IAM? This flag tells CloudFormation: "Yes, I know this template creates IAM roles, and I'm okay with that." It's a safety check — AWS wants you to explicitly acknowledge that you're granting permissions.
CloudFormation reads the template, figures out the dependency order, and starts creating resources. Here's the rough timeline:
| Time | What's Happening |
|---|---|
| 0–2 min | VPC, subnets, internet gateway, route tables created |
| 2–5 min | Security groups, VPC endpoints, IAM roles created |
| 5–15 min | Aurora cluster spinning up (this is the slowest part) |
| 5–15 min | ElastiCache cluster spinning up (runs in parallel with Aurora) |
| 10–20 min | ECR repos, S3 bucket, Secrets Manager created |
| 15–25 min | ECS cluster, task definitions, ALB, NLB created |
| 20–30 min | ECS services start (4 services — they'll fail health checks until images are pushed) |
| 25–35 min | Route 53 records, CloudFront distribution, CloudWatch alarms |
| 30–45 min | Stack complete — all 126 resources created |
The stack creates the infrastructure, but some things need to be done manually after:
# Authenticate Docker to ECR
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 211998422884.dkr.ecr.us-east-2.amazonaws.com
# Tag and push the LPO server image
docker tag trinity-beast-lpo-server:latest 211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-lpo-server:latest
docker push 211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-lpo-server:latest
# Tag and push the sync job image
docker tag trinity-beast-sync-job:latest 211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-sync-job:latest
docker push 211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-sync-job:latest
aws lambda update-function-code \
--function-name trinity-beast-receipt \
--zip-file fileb://bootstrap.zip \
--region us-east-2
aws s3 sync ./website/ s3://cpmp-ministry-site-east2/ --delete
aws cloudfront create-invalidation \
--distribution-id E110PRKEIYQVLL \
--paths "/*"
dig api.cpmp-site.org
dig udp.cpmp-site.org
dig cpmp-site.org
curl -I https://api.cpmp-site.org/health
# Check overall stack status
aws cloudformation describe-stacks --stack-name trinity-beast-stack --region us-east-2
# Watch events in real-time (useful during creation)
aws cloudformation describe-stack-events \
--stack-name trinity-beast-stack \
--region us-east-2 \
--query 'StackEvents[0:10].[Timestamp,ResourceType,LogicalResourceId,ResourceStatus]' \
--output table
# List all resources in the stack
aws cloudformation list-stack-resources \
--stack-name trinity-beast-stack \
--region us-east-2
Success looks like: "StackStatus": "CREATE_COMPLETE". If you see CREATE_FAILED or ROLLBACK_COMPLETE, check the events for the specific resource that failed — CloudFormation will tell you exactly what went wrong.
Need to change something? Edit the YAML file and tell CloudFormation to update. It's smart enough to figure out what changed and only touch those resources.
aws cloudformation update-stack \
--stack-name trinity-beast-stack \
--template-body file://trinity-beast-stack.yaml \
--capabilities CAPABILITY_NAMED_IAM \
--region us-east-2 \
--parameters \
ParameterKey=DBPassword,UsePreviousValue=true \
ParameterKey=SecretValue,UsePreviousValue=true
Notice UsePreviousValue=true — for secret parameters, you don't need to re-enter them on every update. CloudFormation remembers the values from the last deployment.
CloudFormation compares your new template against the current state and categorizes each change:
| Change Type | What Happens | Example |
|---|---|---|
| No Interruption | Resource is updated in place — no downtime | Changing a CloudWatch alarm threshold |
| Some Interruption | Resource is briefly interrupted during update | Changing an ECS task definition (rolling deploy) |
| Replacement | Old resource is deleted and a new one is created | Changing VPC CIDR block, Aurora engine version |
⚠️ Replacement Warning: Some changes force CloudFormation to delete and recreate a resource. This can cause data loss. Before updating, always preview changes with a Change Set:
aws cloudformation create-change-set \
--stack-name trinity-beast-stack \
--template-body file://trinity-beast-stack.yaml \
--change-set-name my-changes \
--capabilities CAPABILITY_NAMED_IAM \
--region us-east-2
# Review what will change
aws cloudformation describe-change-set \
--stack-name trinity-beast-stack \
--change-set-name my-changes \
--region us-east-2
This shows you exactly what will be modified, added, or replaced — before anything happens. Always use Change Sets for production updates.
The scenario: everything is gone. The AWS account was compromised, someone deleted the stack, or you need to rebuild from scratch in a new account. Here's the step-by-step playbook.
Estimated recovery time: 1–2 hours from "everything is gone" to "everything is running." The stack itself takes 30–45 minutes. The rest is pushing code and verifying.
This is the big one. One command creates all 126 resources:
aws cloudformation create-stack \
--stack-name trinity-beast-stack \
--template-body file://trinity-beast-stack.yaml \
--capabilities CAPABILITY_NAMED_IAM \
--region us-east-2 \
--parameters \
ParameterKey=DBPassword,ParameterValue='YOUR_DB_PASSWORD' \
ParameterKey=SecretValue,ParameterValue='YOUR_SECRETS_JSON'
Then wait. Monitor progress:
aws cloudformation describe-stacks --stack-name trinity-beast-stack --region us-east-2 \
--query 'Stacks[0].StackStatus' --output text
Watch for CREATE_COMPLETE. This takes 30–45 minutes. Aurora and ElastiCache are the slowest. Go get coffee.
The ECR repos are empty — they're just containers waiting for images. Build and push:
# Authenticate
aws ecr get-login-password --region us-east-2 | \
docker login --username AWS --password-stdin \
211998422884.dkr.ecr.us-east-2.amazonaws.com
# Build and push LPO server
docker build -t trinity-beast-lpo-server:latest .
docker tag trinity-beast-lpo-server:latest \
211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-lpo-server:latest
docker push 211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-lpo-server:latest
# Build and push sync job
docker build -t trinity-beast-sync-job:latest -f Dockerfile.sync .
docker tag trinity-beast-sync-job:latest \
211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-sync-job:latest
docker push 211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-sync-job:latest
ECS will automatically pick up the new images and start the services.
# Build the Go binary for Lambda
GOOS=linux GOARCH=arm64 go build -o bootstrap cmd/receipt/main.go
zip bootstrap.zip bootstrap
# Deploy to Lambda
aws lambda update-function-code \
--function-name trinity-beast-receipt \
--zip-file fileb://bootstrap.zip \
--region us-east-2
aws s3 sync ./cpmp-redesign/ s3://cpmp-ministry-site-east2/ --delete
aws cloudfront create-invalidation \
--distribution-id E110PRKEIYQVLL \
--paths "/*"
If the hosted zone was recreated, it gets new nameservers. You must update them at your domain registrar (wherever you bought cpmp-site.org).
# Get the new nameservers
aws route53 get-hosted-zone --id YOUR_HOSTED_ZONE_ID \
--query 'DelegationSet.NameServers' --output text
Copy those 4 nameserver values and update them at your registrar. DNS propagation can take up to 48 hours, but usually completes within 1–2 hours.
# Check ECS services
aws ecs describe-services \
--cluster trinity-beast-fargate-cluster \
--services trinity-beast-main-service trinity-beast-mirror-service trinity-beast-lrs-service trinity-beast-webhook-service \
--region us-east-2 \
--query 'services[].{name:serviceName,running:runningCount,desired:desiredCount,status:status}'
# Check API health
curl -s https://api.cpmp-site.org/health | jq .
# Check ALB target health
aws elbv2 describe-target-health \
--target-group-arn YOUR_TARGET_GROUP_ARN \
--region us-east-2
# Check Aurora
aws rds describe-db-clusters \
--db-cluster-identifier trinity-beast-aurora \
--region us-east-2 \
--query 'DBClusters[0].Status'
# Check ElastiCache
aws elasticache describe-cache-clusters \
--cache-cluster-id trinity-beast-cache \
--region us-east-2 \
--query 'CacheClusters[0].CacheClusterStatus'
Database note: The stack creates an empty Aurora database. You'll need to run your database migrations to recreate the schema and seed data. If you have a database backup (RDS snapshot), you can restore from that instead — but that's a manual step outside the CloudFormation template.
Honesty time. The CloudFormation template is powerful, but it doesn't cover everything. Some things require manual steps, either because AWS doesn't support them in CloudFormation or because they involve external services.
These items require manual action after the stack is deployed. The template creates the infrastructure, but these pieces must be configured separately.
| Item | Why It's Not in the Template | What You Need to Do |
|---|---|---|
| ACM Certificates | Certificates require DNS validation — a chicken-and-egg problem (you need DNS to validate, but DNS is in the template) | Request certificates in ACM for us-east-2 (ALB) and us-east-1 (CloudFront). Validate via DNS. Pass the ARNs as parameters. |
| SES Domain Verification & DKIM | SES verification involves external DNS records and waiting for AWS to verify | Verify cpmp-site.org in SES. Set up DKIM signing. Move out of SES sandbox if needed. |
| Stripe Webhook Configuration | Stripe is an external service — CloudFormation can't configure it | Log into Stripe Dashboard. Create webhook endpoint pointing to https://api.cpmp-site.org/webhook/stripe. Copy the webhook secret into your SecretValue parameter. |
| Database Schema & Seed Data | CloudFormation creates the database engine, not the tables inside it | After Aurora is up, connect and run your migrations: go run cmd/migrate/main.go. Seed any required reference data. |
| Docker Images | The template creates ECR repos (the shelves) but not the images (the books) | Build your Docker images locally or in CI/CD, then push to ECR. See Section 7, Step 3. |
| Website Content | The template creates the S3 bucket (the filing cabinet) but not the files inside it | Upload your website files: aws s3 sync ./cpmp-redesign/ s3://cpmp-ministry-site-east2/ |
| Compute Savings Plan | Savings Plans are billing commitments, not infrastructure — they can't be defined in CloudFormation | Purchase a Compute Savings Plan through the AWS Cost Explorer console after your infrastructure is stable. |
| Actual Secret Values | Secrets should never be stored in a template file — that would be a security risk | You provide secrets as parameters at deploy time. Store them securely in a password manager or encrypted vault. |
Think of it this way: The template builds the house — walls, plumbing, electrical, locks on the doors. But you still need to move in your furniture (Docker images), hang your pictures (website content), set up your mail forwarding (SES), and give the locksmith your key preferences (secrets). The house is ready, but it needs to be lived in.
The CloudFormation template and supporting files are stored in multiple locations for redundancy.
trinity-beast-lpo-server/deployments/cloudformation/trinity-beast-stack.yaml
This is the canonical version. All edits should be made here and then synced to S3.
s3://cpmp-ministry-site-east2/cloudformation/trinity-beast-stack.yaml
A copy in S3 for redundancy. You can deploy directly from S3 using --template-url instead of --template-body:
aws cloudformation create-stack \
--stack-name trinity-beast-stack \
--template-url https://cpmp-ministry-site-east2.s3.us-east-2.amazonaws.com/cloudformation/trinity-beast-stack.yaml \
--capabilities CAPABILITY_NAMED_IAM \
--region us-east-2 \
--parameters ...
trinity-beast-lpo-server/deployments/cloudformation/inventory/
This directory contains 72 JSON files — one for every resource's current configuration as captured from the live AWS environment. These serve as a reference if you need to verify that the CloudFormation template matches what's actually deployed. They're snapshots, not live data.
Keep these in sync: When you update the template locally, remember to upload the new version to S3: aws s3 cp trinity-beast-stack.yaml s3://cpmp-ministry-site-east2/cloudformation/