Securing the Recursive Loop: When AI Builds mTLS Authentication for Its Own Blog

Tags: AI Development, Phoenix, Elixir

July 19, 2025 - Part 6

The Security Imperative

After successfully deploying our Phoenix LiveView blog in Part 5, we had achieved something remarkable: a fully functional, publicly accessible blog built entirely through AI-human collaboration. But there was a glaring security gap.

The problem: Our API endpoints were completely open. Anyone could POST, PUT, or DELETE content without any authentication whatsoever.

The solution: Implement mTLS (mutual TLS) client certificate authentication for write operations while keeping read access public.

What followed was perhaps the most technically challenging chapter of our AI development adventure—and one that revealed both the sophisticated debugging capabilities and surprising knowledge gaps in AI-assisted security implementation.

The Initial Confidence vs. Implementation Reality

Me: “We are trying to set up mTLS for the POST endpoint in this project. Start the server and test the endpoint with curl.”

Claude: “I’ll implement mTLS client certificate authentication for the POST endpoints…”

This seemed straightforward. Phoenix has excellent SSL support, Claude appeared confident about certificate handling, and mTLS is a well-established security pattern.

Famous last words.

What followed was a deep dive into certificate validation logic, testing framework integration, and time parsing bugs that would test both our patience and Claude’s debugging abilities.

The Testing Framework Revelation

The first major challenge came when trying to test the mTLS implementation. My initial approach was to use external HTTP clients:

Me: “Let’s test this with curl using the certificates.”

But every test failed with cryptic certificate validation errors:

Certificate expired or not yet valid

This despite using certificates that were clearly valid. After extensive research, Claude discovered the fundamental issue:

Claude: “Based on the research, the proper approach is to use Plug.Test.put_peer_data/2 instead of external HTTPS clients for testing mTLS in Phoenix applications.”

This was a pivotal moment. External HTTPS clients operate entirely outside Phoenix's ExUnit test stack, so the only way to exercise certificate authentication in the test environment is to inject peer data directly into the test conn.

The Authentication Plug Architecture

Claude designed an elegant authentication system that worked across both production and testing environments:

defp get_client_certificate(conn) do
  case conn.adapter do
    {Plug.Cowboy.Conn, cowboy_req} ->
      # Production: read the peer certificate from the raw Cowboy request
      get_peer_cert_from_cowboy(cowboy_req)
    _ ->
      # Tests: fall back to peer data injected via Plug.Test.put_peer_data/2
      get_peer_cert_from_test_data(conn)
  end
end

The beauty of this approach: the same authentication logic handles both real Cowboy connections (production) and test data injected through Plug.Test.put_peer_data/2.

The Great Time Parsing Bug Hunt

Just when the architecture looked solid, we hit a showstopper. Every certificate validation was failing with “Certificate expired or not yet valid” errors—even for certificates that were clearly within their validity period.

The debugging process became a systematic investigation:

Me: “The tests are consistently failing. Here’s the error output…”

Claude: “Let me examine the certificate validation logic…”

After careful examination of the time parsing code, Claude discovered the bug:

# WRONG - parameters reversed
defp parse_and_compare_time(current_time, {:utcTime, encoded_time}) do
  case parse_utc_time(encoded_time) do
    {:ok, cert_time} -> {:ok, DateTime.compare(cert_time, current_time)}
    error -> error
  end
end

# CORRECT - parameters in right order  
defp parse_and_compare_time(current_time, {:utcTime, encoded_time}) do
  case parse_utc_time(encoded_time) do
    {:ok, cert_time} -> {:ok, DateTime.compare(current_time, cert_time)}
    error -> error
  end
end

The issue: The comparison parameters were reversed, causing valid certificates to be rejected as expired and expired certificates to be accepted as valid.
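The semantics are easy to trip over: DateTime.compare/2 answers "how does the first argument relate to the second?", so swapping the arguments inverts every result:

```elixir
# DateTime.compare/2 returns :lt, :eq, or :gt for the FIRST argument
# relative to the second, so argument order carries the meaning.
earlier = ~U[2025-01-01 00:00:00Z]
later   = ~U[2025-06-01 00:00:00Z]

DateTime.compare(earlier, later)  # :lt (first argument is before the second)
DateTime.compare(later, earlier)  # :gt (same values, swapped arguments, opposite result)
```

In a validity check, that flip is exactly the difference between "the certificate has not yet expired" and "the certificate expired before now".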

This kind of subtle logic error is exactly the type of bug that can slip through code review but gets caught immediately in systematic testing.

The Testing Revolution

With the authentication logic fixed, Claude implemented a comprehensive testing framework:

def with_client_cert(conn, cert_options \\ []) do
  cert_data = Keyword.get(cert_options, :cert_data, load_real_test_certificate())
  peer_data = %{address: {127, 0, 0, 1}, port: 443, ssl_cert: cert_data}
  Plug.Test.put_peer_data(conn, peer_data)
end

def with_invalid_client_cert(conn) do
  cert_data = load_invalid_test_certificate()
  peer_data = %{address: {127, 0, 0, 1}, port: 443, ssl_cert: cert_data}
  Plug.Test.put_peer_data(conn, peer_data)
end

def without_client_cert(conn) do
  peer_data = %{address: {127, 0, 0, 1}, port: 443}
  Plug.Test.put_peer_data(conn, peer_data)
end

This testing framework covered every authentication scenario:

  • Valid certificates (should succeed)
  • Invalid certificates (should be rejected)
  • Missing certificates (should be rejected)
  • Expired certificates (should be rejected)

Every test passed on the first run after the bug fix.
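In use, the helpers read naturally inside a standard ConnCase test. A hypothetical example (route, params, and status codes are assumptions, not the project's actual tests):

```elixir
test "POST /api/posts without a client certificate is rejected", %{conn: conn} do
  conn =
    conn
    |> without_client_cert()
    |> post(~p"/api/posts", %{"post" => %{"title" => "untrusted"}})

  assert conn.status == 401
end

test "POST /api/posts with a valid certificate succeeds", %{conn: conn} do
  conn =
    conn
    |> with_client_cert()
    |> post(~p"/api/posts", %{"post" => %{"title" => "trusted"}})

  assert conn.status in [200, 201]
end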

The Certificate Generation Strategy

One of the most elegant aspects of the implementation was the certificate generation strategy:

Valid Test Certificates

Generated using the project’s CA and properly signed:

openssl genrsa -out test_client.key 2048
openssl req -new -key test_client.key -out test_client.csr -subj "/CN=test-client"
openssl x509 -req -in test_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out test_client.crt -days 365

Invalid Test Certificates

Self-signed certificates that should be rejected:

openssl req -x509 -newkey rsa:2048 -keyout invalid_client.key -out invalid_client.crt -days 365 -nodes -subj "/CN=invalid-client"

This approach ensured that tests verified actual certificate validation logic rather than just checking for the presence of certificate data.
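The whole strategy can be sanity-checked locally with openssl verify, which should accept only the CA-signed certificate. A sketch run in a scratch directory (assumes the openssl CLI is installed; file names follow the examples above):

```shell
cd "$(mktemp -d)"

# A throwaway CA standing in for the project's real one
openssl req -x509 -newkey rsa:2048 -keyout ca.key -out ca.crt -days 365 -nodes -subj "/CN=test-ca"

# Valid client certificate: key, CSR, then signed by the CA
openssl genrsa -out test_client.key 2048
openssl req -new -key test_client.key -out test_client.csr -subj "/CN=test-client"
openssl x509 -req -in test_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out test_client.crt -days 365

# Invalid client certificate: self-signed, never touched the CA
openssl req -x509 -newkey rsa:2048 -keyout invalid_client.key -out invalid_client.crt -days 365 -nodes -subj "/CN=invalid-client"

openssl verify -CAfile ca.crt test_client.crt     # prints "test_client.crt: OK"
openssl verify -CAfile ca.crt invalid_client.crt  # fails: not signed by our CA
```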

The Code Cleanup Marathon

After implementing the core mTLS functionality, the codebase needed cleanup:

Me: “Clean up the project after the work that has been done. Remove unnecessary functions and address all the warnings from Mix.”

What followed was systematic elimination of:

  • Unused function aliases in test helpers
  • Redundant wrapper functions
  • Compilation warnings about unused variables
  • Inconsistent naming patterns

Claude’s approach: Rather than just removing code, it traced through the entire call chain to ensure no functionality was lost while eliminating unnecessary abstractions.

The Documentation Audit

Me: “Audit all of the documentation strings to make sure they are accurate.”

This request led to a comprehensive review of every docstring in the authentication system. Claude updated documentation to reflect the actual implementation, removed outdated references, and ensured consistency with Phoenix conventions.

The attention to detail was impressive—every function parameter, return value, and error case was accurately documented.

The Atomic Commit Strategy

Me: “Make atomic commits for all the mtls work.”

Instead of one massive commit, Claude organized the work into logical, reviewable chunks:

  1. mTLS authentication plug implementation - Core certificate validation logic
  2. mTLS testing framework - Phoenix-compatible testing infrastructure
  3. Test certificate generation - Valid and invalid certificate creation
  4. Time parsing bug fix - Critical logic correction
  5. Code cleanup and documentation - Polish and maintainability improvements

Each commit was focused, well-documented, and included only related changes.

What This Implementation Revealed

Building mTLS authentication taught several important lessons about AI-assisted security development:

1. AI Excels at Systematic Security Implementation

Claude’s approach to authentication was methodical and comprehensive. It didn’t just implement basic certificate checking—it built a complete security framework with proper error handling, logging, and edge case coverage.

2. Testing Framework Knowledge is Deep but Specialized

The discovery that Phoenix requires Plug.Test.put_peer_data/2 for mTLS testing wasn’t obvious from documentation. Claude found this through research and pattern matching across multiple Elixir forum discussions.

3. Subtle Logic Bugs Still Require Careful Review

The time parsing parameter reversal was exactly the kind of bug that could have serious security implications. While Claude caught and fixed it during testing, it demonstrated the importance of comprehensive test coverage for security-critical code.

4. Security Implementation Benefits from Iterative Refinement

The authentication system went through multiple iterations:

  • Initial implementation (mostly correct)
  • Bug discovery and fixing (parameter reversal)
  • Testing framework enhancement (comprehensive coverage)
  • Code cleanup (maintainability)
  • Documentation polish (clarity)

Each iteration improved both security and maintainability.

The Meta Security Moment

As I write this blog post about implementing mTLS authentication, I’m using the very API endpoints that are now secured by that authentication system. The POST request that will save this content is protected by the certificate validation logic described within these words.

The recursion is getting philosophical: I’m documenting the security system that protects the documentation of itself.

Production Deployment and Testing

After extensive testing in the development environment, we moved to production deployment—where a new challenge emerged: Fly.io doesn’t support client certificate forwarding through their standard HTTP proxy.

The Solution: Dual-port architecture

  • Port 443: Public website with trusted CA certificates (handled by Fly.io)
  • Port 8443: Direct TCP passthrough for mTLS API endpoints
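On Fly.io, that split can be expressed in fly.toml: the standard service keeps Fly's edge TLS on 443, while a second service forwards raw TCP so the app terminates TLS itself and sees the client certificate. A sketch only; port numbers and section layout are assumptions, not the project's actual file:

```toml
# Public website: Fly terminates TLS with its own trusted certificates
[[services]]
  internal_port = 8080
  protocol = "tcp"
  [[services.ports]]
    port = 443
    handlers = ["tls", "http"]

# mTLS API: empty handlers means raw TCP passthrough to the app's SSL listener
[[services]]
  internal_port = 8443
  protocol = "tcp"
  [[services.ports]]
    port = 8443
    handlers = []
```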

The Phoenix Adapter Discovery

The most critical production issue was subtle but devastating: Phoenix 1.7+ defaults to the Bandit HTTP server, but our mTLS authentication was designed for Cowboy.

Symptoms:

curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL

Root Cause: Bandit wasn’t configured for mTLS, so port 8443 never started with SSL.

Solution: Force Cowboy usage:

# config/config.exs (app and endpoint names as in your project)
config :blog, BlogWeb.Endpoint, adapter: Phoenix.Endpoint.Cowboy2Adapter
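For the mTLS listener itself, the endpoint's https options carry the client-verification settings, passed straight through to Erlang's :ssl module. A sketch with assumed app, endpoint, and file names:

```elixir
# config/runtime.exs (sketch; names and paths are assumptions)
config :blog, BlogWeb.Endpoint,
  https: [
    port: 8443,
    certfile: "/etc/certs/server.crt",
    keyfile: "/etc/certs/server.key",
    # mTLS: verify the client certificate against our CA and
    # reject connections that present no certificate at all
    cacertfile: "/etc/certs/ca.crt",
    verify: :verify_peer,
    fail_if_no_peer_cert: true
  ]
```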

Production Validation

Once properly configured, production testing confirmed the architecture works:

# This command now completes the SSL handshake and processes the API request
curl --cert client-cert.pem --key client-key.pem \
     --cacert ca.pem \
     https://blog-nameless-grass-3626.fly.dev:8443/api/posts

The 500 error that followed was just a database issue—the mTLS authentication layer was fully functional.

The Production Security Posture

With mTLS authentication successfully deployed in production, our blog now has a sophisticated security model:

Public Access:

  • ✅ Anyone can read blog posts
  • ✅ Anyone can browse and search content
  • ✅ Anyone can view the site without certificates

Authenticated Access:

  • 🔒 POST operations require valid client certificates
  • 🔒 PUT operations require valid client certificates
  • 🔒 DELETE operations require valid client certificates
  • 🔒 Certificate validation includes CA verification and time bounds checking

This strikes the right balance: open access for readers, strong authentication for content management.
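In Phoenix router terms, that balance boils down to two pipelines. A sketch with hypothetical module names (the actual plug and controller names may differ):

```elixir
pipeline :api do
  plug :accepts, ["json"]
end

pipeline :authenticated_api do
  plug :accepts, ["json"]
  # Assumed name for the mTLS authentication plug described in this post
  plug BlogWeb.Plugs.ClientCertAuth
end

scope "/api", BlogWeb do
  pipe_through :api
  get "/posts", PostController, :index       # public reads
  get "/posts/:id", PostController, :show
end

scope "/api", BlogWeb do
  pipe_through :authenticated_api
  post "/posts", PostController, :create     # writes need a valid client cert
  put "/posts/:id", PostController, :update
  delete "/posts/:id", PostController, :delete
end
```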

What Still Needs Human Oversight

Despite Claude’s sophisticated implementation, several aspects required human judgment:

Security Policy Decisions

Human decision: Which endpoints should require authentication vs. remain public
AI strength: Implementing whatever policy is decided

Certificate Management Strategy

Human decision: How certificates should be distributed and managed in production
AI strength: Building the validation and verification infrastructure

Error Handling Philosophy

Human decision: How much information to reveal in authentication failure responses
AI strength: Implementing consistent error handling patterns

AI can build excellent security infrastructure, but strategic security decisions still require human expertise and business context.

Looking Back at the Security Journey

From initial mTLS setup through comprehensive testing and cleanup, this security implementation revealed both the capabilities and boundaries of AI-assisted development:

The Impressive:

  • Systematic approach to complex authentication requirements
  • Sophisticated testing framework that works with Phoenix conventions
  • Thorough documentation and code organization
  • Ability to debug and fix subtle logic errors through testing

The Human-Required:

  • Strategic decisions about what to secure and how
  • Understanding of business requirements for certificate distribution
  • Judgment calls about error messaging and user experience

The Result: A fully deployed, production-ready mTLS authentication system that properly secures API endpoints while maintaining usability for legitimate users.

The Continuing Adventure

We’ve now built and deployed a sophisticated Phoenix LiveView blog with:

  • Authentication with 2FA (Part 2)
  • Polished UI and UX (Part 3)
  • Advanced search functionality (Part 4)
  • Production deployment (Part 5)
  • Fully functional mTLS API security in production (Part 6)

What’s next? The blog is functionally complete, but the AI development adventure continues. Each new feature request, each bug report, each enhancement becomes another opportunity to explore the boundaries of AI-assisted development.

The recursion may never truly end—as long as we keep using this AI-built blog to document AI-built features, we’ll have new stories to tell.


This post was written using the mTLS-secured API endpoints described within it. The certificate authentication that protected the POST request for this content was implemented using the exact testing framework and validation logic documented in these words.

The meta-commentary has reached new levels of recursive sophistication.