Multi-Stage Builds, Troubleshooting Toolkit, and Container Deployment
The Trinity Beast Infrastructure uses three Dockerfiles located in deployments/docker/:
- `Dockerfile` — LPO Server (runs all 4 ECS services: Main, Mirror, LRS, Webhook)
- `Dockerfile.sync` — Nightly sync job (EventBridge scheduled task)
- `Dockerfile.receipt` — Receipt Lambda (builder only — produces a `bootstrap` binary)

All share:

- `linux/amd64` for AWS Fargate
- `CGO_ENABLED=0`, `-ldflags="-w -s"`
- Non-root user (`appuser:1000`) for security

Three images, two deployed as containers, one deployed as a Lambda zip:
| Image | ECR Repository | Runs | Fargate Specs |
|---|---|---|---|
| LPO Server | `trinity-beast-lpo-server` | All 4 ECS services (Main / Mirror / LRS / Webhook) | 8 vCPU / 32 GB |
| Sync Job | `trinity-beast-sync-job` | Nightly EventBridge task | 0.5 vCPU / 1 GB |
| Receipt Lambda | Not containerized | Builds a `bootstrap` binary, deployed as a zip to Lambda | N/A — Lambda `provided.al2023` |
The primary Dockerfile lives at `deployments/docker/Dockerfile`. This single image runs all four ECS services, differentiated only by the `SERVER_TYPE` environment variable.
```dockerfile
# Multi-stage build for The Trinity Beast LPO Server
# Optimized for AWS Fargate (AMD64)

# Stage 1: Build
FROM --platform=linux/amd64 golang:1.26.1-alpine AS builder

# Install build dependencies
RUN apk add --no-cache git ca-certificates tzdata

# Set working directory
WORKDIR /build

# Copy go mod files
COPY go.mod go.sum ./

# Download dependencies
RUN go mod download

# Copy source code
COPY cmd/ ./cmd/
COPY pkg/ ./pkg/
COPY internal/ ./internal/

# Build the application
# CGO_ENABLED=0 for static binary
# -ldflags="-w -s" to reduce binary size
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build \
    -ldflags="-w -s" \
    -o trinity-beast-lpo-server \
    ./cmd/server

# Stage 2: Runtime
FROM --platform=linux/amd64 alpine:3.19

# Install runtime dependencies + troubleshooting toolkit
RUN apk add --no-cache \
    ca-certificates \
    tzdata \
    curl \
    postgresql16-client \
    python3 \
    jq \
    bind-tools \
    redis \
    htop \
    strace \
    tcpdump \
    busybox-extras \
    && rm -rf /var/cache/apk/*

# Create non-root user
RUN addgroup -g 1000 appuser && \
    adduser -D -u 1000 -G appuser appuser

# Set working directory
WORKDIR /app

# Copy binary from builder
COPY --from=builder /build/trinity-beast-lpo-server .

# Change ownership
RUN chown -R appuser:appuser /app

# Switch to non-root user
USER appuser

# Expose ports
# LPO: 8080 (TCP), 8081 (health), 2679 (UDP)
# LRS: 9090 (TCP), 9091 (health), 2680 (UDP)
EXPOSE 8080 8081 9090 9091 2679/udp 2680/udp

# Health check — dedicated health port (8081) isolated from production traffic
HEALTHCHECK --interval=60s --timeout=30s --start-period=30s --retries=10 \
    CMD curl -f http://localhost:8081/health || exit 1

# Run the application
ENTRYPOINT ["/app/trinity-beast-lpo-server"]
```
Build highlights:

- Builder: `golang:1.26.1-alpine` with git, ca-certificates, and tzdata
- Copies `cmd/`, `pkg/`, `internal/`
- `CGO_ENABLED=0` with stripped debug symbols (`-ldflags="-w -s"`) for a small static binary
- Output: `/build/trinity-beast-lpo-server`, run as `appuser:1000`
- Health check: `curl -f http://localhost:8081/health` every 60 seconds (dedicated health port, isolated from production traffic)

The same image runs all four ECS services. The `SERVER_TYPE` env var (set in the ECS task definition) determines which mode the server runs in:
| SERVER_TYPE | ECS Service | Behavior |
|---|---|---|
| `APP_REPORT_SERVER` | `trinity-beast-main-service` | Primary LPO + LRS server (TCP + UDP) |
| `APP_REPORT_SERVER` | `trinity-beast-mirror-service` | Mirror LPO + LRS server |
| `APP_REPORT_SERVER` | `trinity-beast-lrs-service` | LRS server (TCP + UDP on 9090/2680) |
| `WEBHOOK_SERVER` | `trinity-beast-webhook-service` | Outbound webhook price push (health on TCP 8083, no ALB) |
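The dispatch on `SERVER_TYPE` can be sketched as below. `modeFor` and the mode names are illustrative stand-ins, not the actual identifiers in `cmd/server`:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// modeFor maps the SERVER_TYPE env value to a run mode. The mode names
// ("report", "webhook") are illustrative; the real cmd/server has its own.
func modeFor(serverType string) (string, error) {
	switch serverType {
	case "APP_REPORT_SERVER":
		return "report", nil // LPO + LRS (Main / Mirror / LRS services)
	case "WEBHOOK_SERVER":
		return "webhook", nil // outbound price push, health on TCP 8083
	default:
		return "", fmt.Errorf("unknown SERVER_TYPE %q", serverType)
	}
}

func main() {
	// SERVER_TYPE is set per-service in the ECS task definition,
	// never in the Dockerfile itself.
	mode, err := modeFor(os.Getenv("SERVER_TYPE"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("starting in mode:", mode)
}
```

Failing fast on an unknown value keeps a misconfigured task definition from silently starting the wrong server.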
The sync job Dockerfile lives at `deployments/docker/Dockerfile.sync`. It uses the same multi-stage pattern as the LPO server but builds `cmd/sync` instead. No ports are exposed and there is no health check: the job runs to completion and exits.
```dockerfile
# Multi-stage build for The Trinity Beast Sync Job
# Built from the monorepo root
FROM --platform=linux/amd64 golang:1.26.1-alpine AS builder
RUN apk add --no-cache git ca-certificates tzdata
WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download
COPY cmd/ ./cmd/
COPY pkg/ ./pkg/
COPY internal/ ./internal/
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build \
    -ldflags="-w -s" \
    -o trinity-beast-sync-job \
    ./cmd/sync

FROM --platform=linux/amd64 alpine:3.19
RUN apk add --no-cache ca-certificates tzdata postgresql16-client python3 jq bind-tools redis htop strace tcpdump busybox-extras && \
    rm -rf /var/cache/apk/*
RUN addgroup -g 1000 appuser && \
    adduser -D -u 1000 -G appuser appuser
WORKDIR /app
COPY --from=builder /build/trinity-beast-sync-job .
RUN chown -R appuser:appuser /app
USER appuser
ENTRYPOINT ["/app/trinity-beast-sync-job"]
```
The only change from the LPO server build is the target: `./cmd/sync` instead of `./cmd/server`.

The receipt Lambda Dockerfile lives at `deployments/docker/Dockerfile.receipt`. It is a builder stage only: it produces a `bootstrap` binary for Lambda's `provided.al2023` runtime and is not deployed as a container.
```dockerfile
# Build for The Trinity Beast Receipt Lambda
# Built from the monorepo root — produces a bootstrap binary for Lambda
FROM --platform=linux/amd64 golang:1.26.1-alpine AS builder
RUN apk add --no-cache git ca-certificates tzdata
WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download
COPY cmd/ ./cmd/
COPY pkg/ ./pkg/
COPY internal/ ./internal/
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build \
    -ldflags="-w -s" \
    -o bootstrap \
    ./cmd/receipt
```
- Produces a `bootstrap` binary (the name Lambda expects for custom runtimes)
- Targets the `provided.al2023` runtime (Amazon Linux 2023)

Build and deploy the zip:

```bash
GOOS=linux GOARCH=amd64 go build -o bootstrap ./cmd/handler/
zip -j function.zip bootstrap
aws lambda update-function-code \
  --function-name trinity-beast-receipt \
  --zip-file fileb://function.zip \
  --region us-east-2
```
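Why the `bootstrap` name matters: on `provided.al2023`, Lambda execs the binary literally named `bootstrap`, which must then poll the Lambda runtime HTTP API in a loop. Real handlers normally use the `aws-lambda-go` library, which implements this loop for you; the stdlib-only sketch below only shows the contract (endpoint paths are from the documented `2018-06-01` runtime API, and the echo handler is a placeholder):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
)

// Endpoint paths from the Lambda custom runtime API (version 2018-06-01).
func nextURL(api string) string {
	return fmt.Sprintf("http://%s/2018-06-01/runtime/invocation/next", api)
}
func responseURL(api, reqID string) string {
	return fmt.Sprintf("http://%s/2018-06-01/runtime/invocation/%s/response", api, reqID)
}

func main() {
	// provided.al2023 execs the binary named "bootstrap" and passes the
	// runtime API address in this env var.
	api := os.Getenv("AWS_LAMBDA_RUNTIME_API")
	for {
		resp, err := http.Get(nextURL(api)) // long-polls for the next event
		if err != nil {
			os.Exit(1)
		}
		reqID := resp.Header.Get("Lambda-Runtime-Aws-Request-Id")
		payload, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// Placeholder: echo the event back; the real receipt handler
		// would process the payload here.
		http.Post(responseURL(api, reqID), "application/json", bytes.NewReader(payload))
	}
}
```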
Both container images (LPO Server and Sync Job) include a full troubleshooting toolkit. These tools are available via ECS Exec for live debugging.
| Tool | Version | Purpose |
|---|---|---|
| `psql` | PostgreSQL 16.11 | Aurora database queries and migrations |
| `python3` | 3.11.14 | Scripting, data parsing, ad-hoc automation |
| `jq` | 1.7.1 | JSON parsing and filtering |
| `dig` | DiG 9.18.44 | DNS resolution debugging |
| `redis-cli` | 7.2.9 | ElastiCache/Valkey debugging and queries |
| `htop` | — | Interactive process monitoring |
| `strace` | — | System call tracing for debugging hangs or crashes |
| `tcpdump` | — | Network packet capture for connectivity issues |
| `curl` | — | HTTP requests and API testing |
| `wget` | — | HTTP downloads |
| `nc` (netcat) | — | TCP/UDP connectivity testing |
| `netstat` | — | Network connection listing |
| `nslookup` | — | DNS lookups |
| `top` | — | Basic process monitoring |
All tools are available via ECS Exec. Connect with:
```bash
aws ecs execute-command \
  --cluster trinity-beast-fargate-cluster \
  --task TASK_ID \
  --container CONTAINER_NAME \
  --interactive \
  --region us-east-2 \
  --command sh
```
All builds run from the monorepo root (`trinity-beast-lpo-server/`). The `-f` flag points to the Dockerfile in `deployments/docker/`.
```bash
# LPO Server image
docker build --platform linux/amd64 \
  -t trinity-beast-lpo-server \
  -f deployments/docker/Dockerfile .

# Sync Job image
docker build --platform linux/amd64 \
  -t trinity-beast-sync-job \
  -f deployments/docker/Dockerfile.sync .

# Receipt Lambda (zip, not an image)
GOOS=linux GOARCH=amd64 go build -o bootstrap ./cmd/handler/
zip -j function.zip bootstrap
```
```bash
# Authenticate Docker to ECR
aws ecr get-login-password --region us-east-2 | \
  docker login --username AWS --password-stdin \
  211998422884.dkr.ecr.us-east-2.amazonaws.com

# Tag and push the LPO Server image
docker tag trinity-beast-lpo-server:latest \
  211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-lpo-server:latest
docker push \
  211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-lpo-server:latest

# Tag and push the Sync Job image
docker tag trinity-beast-sync-job:latest \
  211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-sync-job:latest
docker push \
  211998422884.dkr.ecr.us-east-2.amazonaws.com/trinity-beast-sync-job:latest

# Update the Receipt Lambda from the zip
aws lambda update-function-code \
  --function-name trinity-beast-receipt \
  --zip-file fileb://function.zip \
  --region us-east-2
```
Force a new deployment so ECS pulls the new image from ECR and starts fresh tasks:
```bash
# Force deploy all 4 services (picks up new image from ECR)
for svc in trinity-beast-main-service trinity-beast-mirror-service trinity-beast-lrs-service trinity-beast-webhook-service; do
  aws ecs update-service \
    --cluster trinity-beast-fargate-cluster \
    --service $svc \
    --force-new-deployment \
    --region us-east-2
done

# Monitor rollout status
for svc in trinity-beast-main-service trinity-beast-mirror-service trinity-beast-lrs-service trinity-beast-webhook-service; do
  aws ecs describe-services \
    --cluster trinity-beast-fargate-cluster \
    --services $svc \
    --region us-east-2 \
    --query "services[0].[serviceName,deployments[0].rolloutState]" \
    --output text
done
```
Rolling deployment: new tasks start, pass health checks, then old tasks drain. Zero downtime — `MinimumHealthyPercent: 100`, `MaximumPercent: 200`.
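Under those two settings the task-count window during a rollout is easy to compute. A sketch using the ECS rounding rules (the minimum rounds up, the maximum rounds down); `rolloutBounds` is an illustrative helper, not part of the codebase:

```go
package main

import (
	"fmt"
	"math"
)

// rolloutBounds returns the minimum running (healthy) and maximum total task
// counts ECS allows during a rolling deployment, per the scheduler rules:
// minimumHealthyPercent rounds up, maximumPercent rounds down.
func rolloutBounds(desired, minHealthyPct, maxPct int) (minTasks, maxTasks int) {
	minTasks = int(math.Ceil(float64(desired) * float64(minHealthyPct) / 100))
	maxTasks = int(math.Floor(float64(desired) * float64(maxPct) / 100))
	return
}

func main() {
	// With MinimumHealthyPercent: 100 and MaximumPercent: 200, ECS must keep
	// every desired task healthy and may run up to double the desired count
	// while new tasks start and old ones drain — hence zero downtime.
	minT, maxT := rolloutBounds(2, 100, 200)
	fmt.Printf("desired=2 -> min healthy=%d, max total=%d\n", minT, maxT)
}
```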
| Check Type | Method | Target |
|---|---|---|
| Docker HEALTHCHECK | `curl -f http://localhost:8081/health` | Every 60s, 30s timeout, 10 retries. Dedicated health port isolated from production traffic. |
| ALB Health Check | HTTP GET `/health` | Port 8081 (LPO) and 9091 (LRS) — dedicated health servers |
| NLB Health Check | TCP | Port 8081 (UDP 2679 TG) and 9091 (UDP 2680 TG) |
Verify the public health endpoints:

```bash
# Main API
curl -s https://api.cpmp-site.org/health

# LRS
curl -s https://lrs.cpmp-site.org/health
```
```bash
# Get task ID
TASK_ID=$(aws ecs list-tasks \
  --cluster trinity-beast-fargate-cluster \
  --service-name trinity-beast-main-service \
  --region us-east-2 \
  --query 'taskArns[0]' \
  --output text | awk -F/ '{print $NF}')

# Connect
aws ecs execute-command \
  --cluster trinity-beast-fargate-cluster \
  --task $TASK_ID \
  --container trinity-beast-main-container-lpo \
  --interactive \
  --region us-east-2 \
  --command sh
```
| Command | Purpose |
|---|---|
| `psql -h $DB_HOST -U $DB_USER -d $DB_NAME` | Connect to Aurora |
| `redis-cli -h $CACHE_URL --tls -p 6379` | Connect to ElastiCache |
| `curl localhost:8080/health` | Local health check |
| `dig trinity-beast-aurora-cluster.cluster-cvg4oeysemon.us-east-2.rds.amazonaws.com` | DNS resolution |
| `htop` | Process monitoring |
| `netstat -tlnp` | Listening ports |
Key facts:

- Runs as non-root (`appuser:1000`)
- Service mode chosen by the `SERVER_TYPE` env var (`APP_REPORT_SERVER` or `WEBHOOK_SERVER`)
- `GOGC=300` set at runtime for GC tuning (not in Dockerfile)
- Ports: 8080 (LPO TCP), 8081 (LPO health), 9090 (LRS TCP), 9091 (LRS health), 2679 (LPO UDP), 2680 (LRS UDP)