The Dual Endpoint Discovery: When Architecture Decisions Hide in Production Failures
July 21, 2025 - Part 7
The Post-Turso Production Mystery
After successfully implementing distributed Turso architecture in Part 6, our Phoenix LiveView blog was running smoothly in production. The database migration was seamless, performance was excellent, and everything seemed perfect.
Until we tried to use the mTLS API endpoints we’d built in Part 5.
The symptom: Persistent 403 Forbidden responses on all API endpoints
The assumption: Something was wrong with certificate validation
The reality: A fundamental architectural misunderstanding that would take hours to uncover
What followed was a debugging odyssey that revealed the difference between local development assumptions and production networking realities.
The Initial Confidence Crash
Me: “The API endpoints aren’t working in production. All requests are getting 403s.”
Claude: “Let me check the certificate validation logic and supervision tree configuration…”
This was the beginning of Claude’s great misdirection adventure.
The Supervision Tree Red Herring
Claude immediately dove into analyzing the application startup sequence:
Claude: “The issue might be with how Finch is starting up in the supervision tree. If the HTTP client isn’t fully initialized when the first requests come in, we could see authentication failures…”
The investigation: Deep dive into application.ex, supervision tree ordering, and startup dependencies.
The proposed fixes:
- Reordering supervisor children
- Adding startup health checks
- Implementing retry logic for HTTP client initialization
Time spent: 45 minutes debugging a non-existent race condition.
The Finch Initialization Theory
When supervision tree reordering didn’t help, Claude pivoted to HTTP client internals:
Claude: “The problem might be that Finch pools aren’t properly configured for the production environment. Let me check the pool sizing and connection limits…”
The investigation: Finch configuration analysis, connection pool debugging, HTTP/2 settings review.
The proposed fixes:
- Adjusting pool sizes for production load
- Modifying connection timeout settings
- Adding connection health monitoring
Time spent: Another 30 minutes optimizing perfectly functional HTTP client settings.
The Human Intervention
After watching Claude chase startup timing issues and HTTP client configuration rabbits for over an hour:
Me: “Stop focusing on the supervision tree and Finch. Look at the actual 403 responses. What’s the real error?”
Claude: “Let me examine the actual HTTP response details…”
Sometimes you need to redirect AI attention from theoretical problems to empirical evidence.
The 403 Response Analysis
When Claude finally looked at the actual error responses instead of theorizing about startup conditions:
The discovery: The 403s weren’t authentication failures—they were routing failures.
The evidence:
GET /api/posts -> 403 Forbidden (route not found)
POST /api/posts -> 403 Forbidden (route not found)
GET /posts -> 200 OK (works fine)
The insight: API routes existed in development but weren’t accessible in production.
The Architecture Revelation
This led to the most significant architectural discovery of the entire project:
The problem: mTLS-secured routes can’t coexist with public HTTP routes on the same Phoenix endpoint, because a single listener either requires client certificates or it doesn’t.
The constraint: Phoenix applications typically run a single endpoint, but we needed:
- Public endpoints: Blog pages, RSS feeds, public content (HTTP, port 4000)
- Authenticated endpoints: Content management API (HTTPS + mTLS, port 4001)
The solution: Dual-endpoint architecture.
The Research Phase
Claude: “We need to research how to run multiple Phoenix endpoints in a single application…”
This triggered an extensive investigation into Phoenix architecture patterns:
Option 1: Single endpoint with conditional SSL
- Complex middleware to detect certificate presence
- Routing logic based on request headers
- Mixed security contexts in same process
Option 2: Reverse proxy with SSL termination
- External nginx or HAProxy handling certificates
- Application-level routing complexity
- Additional infrastructure dependency
Option 3: Multiple Phoenix endpoints
- Separate Endpoint modules for different security contexts
- Independent port binding and SSL configuration
- Clean separation of concerns
The decision: Option 3 provided the cleanest architecture.
The Implementation Strategy
The dual-endpoint solution required:
# Main public endpoint (HTTP)
defmodule BlogWeb.Endpoint do
  use Phoenix.Endpoint, otp_app: :blog

  # Public routes: blog pages, RSS, search
end

# Authenticated API endpoint (HTTPS + mTLS)
defmodule BlogWeb.ApiEndpoint do
  use Phoenix.Endpoint, otp_app: :blog

  # API routes: content management, image upload
end
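Each endpoint then plugs its own router, which is what keeps the two route sets disjoint. A minimal sketch of the API side, assuming illustrative names like BlogWeb.ApiRouter and PostController (the project’s actual modules may differ):

defmodule BlogWeb.ApiRouter do
  use BlogWeb, :router

  pipeline :api do
    plug :accepts, ["json"]
  end

  scope "/api", BlogWeb do
    pipe_through :api

    get "/posts", PostController, :index
    post "/posts", PostController, :create
  end
end

BlogWeb.ApiEndpoint plugs BlogWeb.ApiRouter as its final plug, while BlogWeb.Endpoint keeps plugging the existing public router.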
Configuration separation:
# config/prod.exs
config :blog, BlogWeb.Endpoint,
  http: [port: 4000],
  url: [host: "blog.example.com", port: 80]

config :blog, BlogWeb.ApiEndpoint,
  https: [
    port: 4001,
    cipher_suite: :strong,
    verify: :verify_peer,
    fail_if_no_peer_cert: true,
    # Paths to PEM files for the server certificate, key, and CA
    certfile: cert_path,
    keyfile: key_path,
    cacertfile: ca_path
  ],
  url: [host: "blog.example.com", port: 443]
The Supervision Tree Integration
Both endpoints needed to be supervised independently:
def start(_type, _args) do
  children = [
    Blog.Repo,
    {DNSCluster, query: Application.get_env(:blog, :dns_cluster_query) || :ignore},
    {Phoenix.PubSub, name: Blog.PubSub},
    {Finch, name: Blog.Finch},
    BlogWeb.Endpoint,    # Public HTTP endpoint
    BlogWeb.ApiEndpoint  # mTLS HTTPS endpoint
  ]

  opts = [strategy: :one_for_one, name: Blog.Supervisor]
  Supervisor.start_link(children, opts)
end
The insight: Two separate Phoenix endpoints in one application, each with independent networking and security configuration.
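Calling the mTLS endpoint from Elixir also requires client-side certificate options. A hedged sketch using Finch (already in the supervision tree above); the pool name, URL, and certificate paths are assumptions for illustration:

# A Finch pool whose TLS options present the client certificate
{:ok, _pid} =
  Finch.start_link(
    name: MtlsFinch,
    pools: %{
      "https://localhost:4001" => [
        conn_opts: [
          transport_opts: [
            verify: :verify_peer,
            cacertfile: "priv/cert/ca-cert.pem",
            certfile: "priv/cert/client-cert.pem",
            keyfile: "priv/cert/client-key.pem"
          ]
        ]
      ]
    }
  )

# The certificate is presented during the TLS handshake, not per request
{:ok, response} =
  Finch.build(:get, "https://localhost:4001/api/posts")
  |> Finch.request(MtlsFinch)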
The Testing Infrastructure Challenge
The dual-endpoint architecture immediately broke existing tests:
The problem: Tests were hitting BlogWeb.Endpoint, but API routes lived on BlogWeb.ApiEndpoint.
The error pattern:
test "API authentication", %{conn: conn} do
# This hits BlogWeb.Endpoint (port 4000)
get(conn, "/api/posts") # 404 - route doesn't exist here
end
The solution: Test configuration override:
describe "API endpoint tests" do
@endpoint BlogWeb.ApiEndpoint # Override default endpoint
test "requires client certificate", %{conn: conn} do
# Now hits BlogWeb.ApiEndpoint (port 4001)
get(conn, "/api/posts") # Works correctly
end
end
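When many test modules target the API endpoint, the override can be centralized in a case template rather than repeated per describe block. A minimal sketch, assuming a hypothetical BlogWeb.ApiConnCase module:

defmodule BlogWeb.ApiConnCase do
  use ExUnit.CaseTemplate

  using do
    quote do
      import Plug.Conn
      import Phoenix.ConnTest

      # Every test in a module that uses this case hits the mTLS endpoint
      @endpoint BlogWeb.ApiEndpoint
    end
  end

  setup _tags do
    {:ok, conn: Phoenix.ConnTest.build_conn()}
  end
end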
The Certificate Testing Infrastructure
The dual-endpoint architecture also required comprehensive SSL certificate testing:
Created: scripts/generate_test_certs.sh
#!/bin/bash
# Generate self-signed certificates for testing mTLS
mkdir -p priv/cert

# CA certificate
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
  -subj "/C=US/ST=Test/L=Test/O=TestCA/CN=TestCA" \
  -keyout priv/cert/ca-key.pem \
  -out priv/cert/ca-cert.pem

# Server certificate (CSR, then signed by the test CA)
openssl req -new -newkey rsa:2048 -days 365 -nodes \
  -subj "/C=US/ST=Test/L=Test/O=Test/CN=localhost" \
  -keyout priv/cert/server-key.pem \
  -out priv/cert/server-cert.csr
openssl x509 -req -in priv/cert/server-cert.csr -days 365 \
  -CA priv/cert/ca-cert.pem -CAkey priv/cert/ca-key.pem -CAcreateserial \
  -out priv/cert/server-cert.pem

# Client certificate for mTLS (CSR, then signed by the test CA)
openssl req -new -newkey rsa:2048 -days 365 -nodes \
  -subj "/C=US/ST=Test/L=Test/O=TestClient/CN=TestClient" \
  -keyout priv/cert/client-key.pem \
  -out priv/cert/client-cert.csr
openssl x509 -req -in priv/cert/client-cert.csr -days 365 \
  -CA priv/cert/ca-cert.pem -CAkey priv/cert/ca-key.pem -CAcreateserial \
  -out priv/cert/client-cert.pem
Integration: CI workflow generates certificates before running tests, ensuring full SSL testing coverage.
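Locally, the same script can be hooked into test_helper.exs so a bare mix test also has certificates available. A sketch, assuming the guard file and script path shown above:

# test/test_helper.exs
# Generate test certificates once, before any mTLS test runs
unless File.exists?("priv/cert/ca-cert.pem") do
  {_output, 0} = System.cmd("bash", ["scripts/generate_test_certs.sh"])
end

ExUnit.start()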
The Production Deployment Complexity
The dual-endpoint architecture added deployment complexity:
Port Management
- Public endpoint: Port 4000 (HTTP)
- API endpoint: Port 4001 (HTTPS + mTLS)
- Fly.io configuration: Both ports exposed in fly.toml
SSL Certificate Distribution
- Development: Test certificates generated locally
- Production: Real certificates from certificate authority
- Environment isolation: Different certificate sources per environment
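A hedged sketch of how that isolation can look in config/runtime.exs (the environment variable names are assumptions, not the project’s real ones):

# config/runtime.exs
import Config

if config_env() == :prod do
  # Production reads real certificate paths from the environment;
  # dev and test keep using the locally generated test certificates
  config :blog, BlogWeb.ApiEndpoint,
    https: [
      port: 4001,
      certfile: System.fetch_env!("API_CERT_PATH"),
      keyfile: System.fetch_env!("API_KEY_PATH"),
      cacertfile: System.fetch_env!("API_CA_PATH")
    ]
end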
Health Check Configuration
# fly.toml
[[services]]
  internal_port = 4000 # Public endpoint health
  http_checks = []

  [[services.ports]]
    handlers = ["http"]
    port = 80

[[services]]
  internal_port = 4001 # API endpoint (no health check - mTLS protected)

  [[services.ports]]
    handlers = ["tls"]
    port = 443
The Performance Implications
The dual-endpoint architecture had unexpected performance characteristics:
Resource Utilization
- Memory overhead: ~15MB additional per endpoint process
- CPU impact: Negligible - routing efficiency improved
- Network efficiency: Separate connection pools optimized per use case
Connection Handling
- Public traffic: Standard HTTP/1.1 with keep-alive
- API traffic: HTTP/2 with connection reuse and certificate caching
- Isolation benefit: API load can’t impact public site performance
Monitoring Complexity
- Separate metrics: Each endpoint reports independently (sketched below)
- Health checks: Different strategies for public vs authenticated endpoints
- Error tracking: Route-specific error patterns easier to identify
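Phoenix supports the per-endpoint split cleanly: each endpoint module declares its own Plug.Telemetry event prefix, so the two traffic classes report under separate names. The prefixes here are illustrative assumptions:

# In BlogWeb.Endpoint (public traffic)
plug Plug.Telemetry, event_prefix: [:blog, :endpoint]

# In BlogWeb.ApiEndpoint (API traffic); a distinct prefix keeps
# API latency and error metrics out of the public page metrics
plug Plug.Telemetry, event_prefix: [:blog, :api_endpoint]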
The AI Debugging Pattern Recognition
This debugging experience revealed interesting patterns in AI problem-solving:
The Theoretical Trap
- AI tendency: Jump to complex theoretical explanations
- Human correction: Focus on empirical evidence first
- Example: “Supervision tree race condition” vs “Look at the actual 403 response”
The Solution Research Excellence
- AI strength: Comprehensive analysis of architectural options
- Human guidance: Strategic decision-making between options
- Result: Well-researched implementation with clear trade-offs
The Implementation Thoroughness
- AI execution: Complete dual-endpoint implementation
- Human oversight: Practical constraints and deployment considerations
- Outcome: Production-ready architecture with testing infrastructure
The Security Architecture Victory
The final dual-endpoint architecture achieved complete security separation:
Public Endpoint (BlogWeb.Endpoint):
- HTTP only, no SSL overhead
- Public content accessible to everyone
- Search, RSS, blog pages
- Standard web security headers
API Endpoint (BlogWeb.ApiEndpoint):
- HTTPS + mTLS required
- Client certificate validation
- Content management operations
- Full request/response encryption
Zero security context mixing: Each endpoint has independent security configuration.
The Testing Infrastructure Win
The dual-endpoint pattern enabled more realistic testing:
Before: Mock SSL and certificate validation
After: Real SSL certificates and actual mTLS handshakes in tests (a sketch follows the list below)
Test coverage improvements:
- Certificate validation logic tested with real certificates
- SSL configuration validated in CI/CD pipeline
- Network routing tested across both endpoints
- Production deployment process validated with test certificates
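As an illustration, an actual-handshake test can drive Erlang’s :ssl client against the running API endpoint. A minimal sketch, assuming the endpoint is listening on port 4001 with the certificates from the generator script:

test "mTLS handshake succeeds with a valid client certificate" do
  {:ok, socket} =
    :ssl.connect(~c"localhost", 4001,
      verify: :verify_peer,
      cacertfile: ~c"priv/cert/ca-cert.pem",
      certfile: ~c"priv/cert/client-cert.pem",
      keyfile: ~c"priv/cert/client-key.pem"
    )

  :ok = :ssl.close(socket)
end

test "handshake fails without a client certificate" do
  # fail_if_no_peer_cert: true on the server side rejects bare connections
  assert {:error, _reason} =
           :ssl.connect(~c"localhost", 4001,
             verify: :verify_peer,
             cacertfile: ~c"priv/cert/ca-cert.pem"
           )
end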
The Documentation Recursion
As I document this dual-endpoint discovery, I’m accessing content through the very architectural pattern described here:
- This devlog entry: Served via public HTTP endpoint (BlogWeb.Endpoint)
- Content creation API: Protected by mTLS endpoint (BlogWeb.ApiEndpoint)
- Image assets: Served through public endpoint, stored via mTLS API
The meta-architecture: The infrastructure serving this documentation demonstrates the security separation it describes.
What This Architecture Discovery Reveals
About Production vs Development
- Local assumptions: Single endpoint works fine in development
- Production reality: Security requirements force architectural decisions
- Deployment complexity: Multiple endpoints require sophisticated configuration
About AI Debugging Approaches
- Theoretical bias: AI tends to theorize about complex internal issues
- Empirical redirection: Humans can focus AI attention on actual evidence
- Research excellence: AI excels at comprehensive solution analysis once properly directed
About System Architecture Evolution
- Organic discovery: Architecture requirements emerge from real constraints
- Security-driven design: mTLS requirements forced dual-endpoint pattern
- Testing infrastructure: Architecture changes require testing strategy evolution
The Credo Code Quality Integration
After discovering and implementing the dual-endpoint architecture, one more obstacle remained: the CI pipeline rebelled.
The notification: CI/CD workflow failed
The culprit: 30 Credo violations across the dual-endpoint codebase
[The code quality cleanup story continues as originally written…]
Looking Forward: Architecture Lessons Learned
This dual-endpoint discovery established several important principles:
Security-First Architecture
- Separation of concerns: Different security contexts require different endpoints
- Certificate management: mTLS adds complexity but provides strong authentication
- Testing infrastructure: Security architecture needs comprehensive test coverage
Production-Driven Design
- Development assumptions: What works locally may not work in production
- Deployment complexity: Multi-endpoint applications require careful configuration
- Monitoring strategy: Different endpoints need different observability approaches
AI-Human Debugging Collaboration
- Problem identification: Humans excel at directing AI attention to relevant evidence
- Solution research: AI excels at comprehensive option analysis
- Implementation execution: AI can handle thorough, systematic implementation
What’s Next?
We’ve now built a Phoenix LiveView blog with:
- Authentication with 2FA (Part 2)
- Polished UI and search (Parts 3-4)
- Production deployment and mTLS API security (Part 5)
- Distributed database architecture (Part 6)
- Dual-endpoint architecture discovery and code quality infrastructure (Part 7)
The platform has evolved from “functional prototype” to “production-ready application with professional security architecture and development practices.”
The next frontier? With robust architecture and quality infrastructure in place, maybe it’s time to explore more ambitious features or push the boundaries of AI-assisted development even further.
The adventure continues, now with properly separated security contexts and 100% static analysis compliance.
This post was served through the public HTTP endpoint while being created via the mTLS-protected API endpoint. The dual-endpoint architecture that took hours to discover and implement is now transparently serving and protecting the very content that documents its own existence.
Sometimes the most important architectural decisions are the ones you discover by accident.