When AI Debugging Meets Real-World Chaos: Building Search That Actually Works

AI Development • Phoenix • Elixir

June 29, 2025 - Part 4

The Setup: Search Shouldn’t Be This Hard

After polishing the blog’s reading experience to near-perfection in Part 3, I felt confident tackling the next challenge: building a comprehensive search and filtering system. How hard could it be? We had Phoenix LiveView, a solid foundation, and Claude 4’s proven track record.

Famous last words.

What followed was a masterclass in collaborative debugging that tested both my patience and Claude’s problem-solving abilities—made even more interesting by an unexpected interruption when I hit Claude’s usage limits mid-development.

The Vision vs. Reality Gap

My initial request seemed straightforward:

Me: “The selection of tags doesn’t seem to do anything and selecting them doesn’t indicate that they are actually actively selected. I can’t tell. There is a hover color, but it doesn’t persist when the tag is clicked.”

What I thought would be a simple CSS fix turned into a deep dive through Phoenix LiveView internals, JavaScript debugging, and the discovery of some fundamental architectural issues.

Claude: “You’re right! Let me debug this step by step…”

What I expected: 10 minutes of CSS tweaks.
What happened: 2 hours of systematic problem diagnosis that revealed the entire LiveView event system wasn’t working.

The Great JavaScript Detective Story

The first clue something was seriously wrong came when Claude added a simple test button:

<button phx-click="test_click" class="mb-2 px-2 py-1 bg-red text-white text-xs">
  TEST LIVEVIEW (check server logs)
</button>
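
For context, the server-side counterpart to a probe like that is nothing more than a logging clause in the LiveView module; if the message never appears in the server logs, the click never reached the server. The sketch below is illustrative rather than our exact code:

# Hypothetical handle_event/3 clause in the LiveView module (e.g. BlogWeb.HomeLive).
# It only logs, so silence in the server logs means the event never arrived.
def handle_event("test_click", _params, socket) do
  IO.puts("TEST LIVEVIEW: test_click received")
  {:noreply, socket}
end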

Me: “I can’t see anything happening when I click the test button.”

This single sentence changed everything. We weren’t dealing with a styling issue—LiveView events weren’t working at all.

The Console Output Investigation

Claude immediately asked me to check the browser developer console. Here’s what I found:

app.js:30 Uncaught Error: Dynamic require of "svelte/server" is not supported
    at app.js:30:11
    at render.js:2:22
    at app.js:54:21

Claude: “Perfect! That’s the issue. There’s a JavaScript error preventing LiveView from working properly. The error is related to the Svelte integration trying to do a dynamic require of ‘svelte/server’ which isn’t supported in the browser environment.”

The Debugging Dance: Copy-Paste Partnership

What followed was a fascinating back-and-forth of diagnostic information. Claude would request specific outputs, I’d copy-paste from terminal or browser, and Claude would immediately identify the next investigation step:

Claude: “Can you check what appears in the browser’s developer console when you first load the page and then click the test button?”

Me: [pastes console output]

Claude: “Now let me check the server logs to see if events are being triggered at all…”

Me: [pastes server terminal output showing no event messages]

Claude: “The issue is clear now. The JavaScript error is preventing the LiveView socket from connecting properly.”

This copy-paste debugging process became our primary collaboration method—Claude providing systematic diagnostic steps, me providing raw output data.

The Forced Intermission: When AI Has Limits

Right in the middle of debugging the JavaScript issues, I hit something I hadn’t encountered before:

Claude API: “I’ve reached my usage limit for this conversation. Please continue in a new session.”

Suddenly, I found myself staring at a development session frozen mid-debug, with a broken search system and notes scattered across terminal windows. It was like having your pair programming partner suddenly disappear during the most complex part of the problem.

The handoff challenge: How do you resume AI-assisted development when the AI has no memory of the last two hours of collaborative debugging?

Resuming with Fresh Context

When Claude came back online, I had to provide a compressed summary of our entire debugging journey:

Me: “This session is being continued from a previous conversation that ran out of context. The conversation is summarized below… [detailed session summary] Please continue the conversation from where we left it off without asking any further questions.”

Claude: Immediately picks up the debugging thread and continues from where we left off

This forced pause actually revealed something interesting: the systematic nature of Claude’s debugging approach meant the session could be resumed seamlessly. The problem diagnosis was logical and methodical enough to survive the context break.

The LiveSvelte Revelation

Once we resumed, the fix was surprisingly simple. The JavaScript error was coming from LiveSvelte integration—a library we’d added for potential Svelte components but never actually used:

// The problematic import causing browser errors
import { getHooks } from "../../deps/live_svelte/priv/static/live_svelte.esm.js"

Claude: “The LiveSvelte library is trying to use server-side rendering functions in the client code. Since we’re not actually using Svelte components, let’s remove these hooks.”

The fix: Remove the unused imports and simplify the LiveSocket configuration.

// Clean configuration without problematic hooks
let liveSocket = new LiveSocket("/live", Socket, {
  longPollFallbackMs: 2500,
  params: {_csrf_token: csrfToken}
})

Result: LiveView events started working immediately.

The Server Log Symphony

With JavaScript fixed, testing revealed new issues. Each problem triggered another round of copy-paste debugging:

Me: “When clicking a tag button, an error toast appears for a moment and the console output is…”

Claude: “The LiveView is crashing. Let me check the server logs to see the actual error.”

Me: [pastes massive server error log]

[error] GenServer #PID<0.875.0> terminating
** (UndefinedFunctionError) function BlogWeb.HomeLive.handle_params/3 is undefined or private

Claude: “Ah! The issue is that we’re using push_patch to update the URL, but LiveView expects a handle_params/3 callback to handle URL parameter changes. Let me add the missing function.”
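
The shape of that fix is standard LiveView: read the URL params and update the socket assigns. A rough sketch, with illustrative param and assign names rather than our exact code:

# Illustrative handle_params/3 clause; the real param names and assigns may differ.
def handle_params(params, _uri, socket) do
  selected_tags =
    params
    |> Map.get("tags", "")
    |> String.split(",", trim: true)

  query = Map.get(params, "q", "")

  {:noreply, assign(socket, selected_tags: selected_tags, query: query)}
end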

This pattern repeated multiple times—me providing raw error output, Claude immediately identifying the specific problem and implementing the exact fix needed.

The “Simple” Search Request That Wasn’t

Once events were working, I made what seemed like a reasonable request:

Me: “The search bar isn’t really working as I envisioned. It definitely shouldn’t be a dropdown. And tab-completion isn’t working. Also, when a suggested autocomplete tag is selected, the text box should accept the selected tag as a bubble in the search box and then allow searching another tag, repeating the behavior. It should also support free text search of blog content. All combined, not one or the other.”

Translation: “Can you rebuild the entire search interface with completely different UX patterns and complex state management?”

Claude: “I’ll implement multi-tag bubbles, inline autocomplete, free text search, and combined functionality.”

What followed was the most sophisticated UI component we’d built yet.

The Parameter Mismatch Detective Work

As the new search system came online, more copy-paste debugging ensued:

Me: “Something is going very wrong when typing into the search box. Here is an error log…”

[pastes function clause error from server logs]

Claude: “The issue is that phx-keyup events send different parameters than I expected. The event sends %{'key' => 'l', 'value' => 'el'} but my handler expects %{'query' => query}. Let me fix the pattern matching.”
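
To make the mismatch concrete: phx-keyup delivers the input's current text under the "value" key (plus the "key" that was pressed), so the handler has to match on that. Roughly, with an illustrative event name and assign:

# phx-keyup payloads arrive as %{"key" => "l", "value" => "el"};
# match on "value" rather than the "query" key the handler originally expected.
# (Event name and assign are illustrative, not our exact code.)
def handle_event("search_input", %{"value" => value}, socket) do
  {:noreply, assign(socket, :query, value)}
end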

This kind of parameter debugging would have taken me significant time to figure out, but with the copy-paste partnership, Claude diagnosed and fixed it in minutes.

The Browser Console Partnership

Throughout development, the browser console became our shared debugging workspace:

Me: “Here is the console output…”

app.js:16 LiveSocket initialized and connected
app.js:2 phx-GE1lt4q1WDDbaxQh mount:  -  {0: {…}, 1: {…}, s: Array(3), t: 'Blog'}

Claude: “Excellent! The JavaScript is now working properly. LiveSocket is connecting and the LiveView is mounting successfully.”

Then when testing tag clicks:

Me: “Logs are showing…”

[pastes server logs showing successful event handling]

Claude: “Perfect! LiveView events are now working correctly. The tag buttons should be functional.”

Rebuilding Search: The Tag Bubble Revolution

The new search system Claude designed was genuinely impressive:

Multi-Tag Bubbles

Selected tags became removable bubbles within the search box:

<span class="inline-flex items-center gap-1 px-2 py-1 bg-blue text-white text-xs rounded-full">
  elixir
  <button phx-click="remove_tag" phx-value-tag="elixir">×</button>
</span>
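
The × on each bubble fires a remove_tag event, and the server side is a small clause that drops the tag and re-filters. A sketch (the filtering helper is an assumed name, not our exact code):

# Remove a tag bubble and re-run the filter.
# `filter_posts/2` is an assumed helper, not the project's exact function.
def handle_event("remove_tag", %{"tag" => tag}, socket) do
  selected = List.delete(socket.assigns.selected_tags, tag)
  {:noreply, assign(socket, selected_tags: selected, posts: filter_posts(selected, socket.assigns.query))}
end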

Real-Time Suggestions

Type 2+ characters and get instant tag suggestions:

<div class="absolute z-10 w-full mt-1 bg-surface0 border rounded-lg shadow-lg">
  <button phx-click="add_tag_from_search" phx-value-tag="elixir">
    <span class="w-2 h-2 bg-blue rounded-full"></span>
    elixir
  </button>
</div>
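
Behind that dropdown, the suggestion logic is just a case-insensitive match over tags that aren't already selected, kicking in once you've typed two characters. Something along these lines (the Content helper is an assumption):

# Suggest tags that match the typed text and aren't already selected.
# `Blog.Content.all_tags/0` is an assumed helper, not necessarily the real API.
defp suggest_tags(query, selected_tags) when byte_size(query) >= 2 do
  query = String.downcase(query)

  Blog.Content.all_tags()
  |> Enum.reject(&(&1 in selected_tags))
  |> Enum.filter(&String.contains?(String.downcase(&1), query))
  |> Enum.take(5)
end

defp suggest_tags(_query, _selected_tags), do: []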

Combined Search Logic

The brilliant part: it handles both tag filtering AND free text search simultaneously.
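
In outline: a post survives the filter if it matches the free-text query and at least one selected tag (OR across tags), with empty inputs matching everything. A hedged sketch of that logic; module, function, and field names here are assumptions, not the exact Content module:

# Combined filter: free-text match AND at least one selected tag (OR across tags).
# An empty query or empty tag list matches everything.
def search_posts(posts, query, selected_tags) do
  Enum.filter(posts, fn post ->
    matches_query?(post, query) and matches_tags?(post, selected_tags)
  end)
end

defp matches_query?(_post, ""), do: true

defp matches_query?(post, query) do
  q = String.downcase(query)
  String.contains?(String.downcase(post.title), q) or
    String.contains?(String.downcase(post.body), q)
end

defp matches_tags?(_post, []), do: true
defp matches_tags?(post, tags), do: Enum.any?(tags, &(&1 in post.tags))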

The Testing Revolution

Perhaps most impressively, Claude proactively suggested comprehensive testing:

Claude: “Please make sure that if this functionality is unit testable, appropriate unit tests are added.”

What followed was the creation of 21 comprehensive Content module tests and 11 LiveView interaction tests covering every edge case we’d encountered during our copy-paste debugging sessions.

Every test passed on the first run.
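
To give a flavor, a representative Content-module test looks something like this (the module and function names follow the sketch above and are assumptions, not our exact test file):

# Representative ExUnit test for OR-based tag filtering; names are illustrative.
defmodule Blog.ContentSearchTest do
  use ExUnit.Case, async: true

  test "tag filtering is OR-based across selected tags" do
    posts = [
      %{title: "LiveView tips", body: "notes on LiveView", tags: ["elixir"]},
      %{title: "Deploy notes", body: "notes on shipping", tags: ["devops"]}
    ]

    results = Blog.Content.search_posts(posts, "", ["elixir", "phoenix"])

    assert Enum.map(results, & &1.title) == ["LiveView tips"]
  end
end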

The Polish Phase: Micro-Adjustments

As we neared completion, I made increasingly specific UX requests:

Me: “Instead of the clear selection button being at the top of the page, make it appear next to ‘Popular tags’ if any tag selection is active.”

Me: “Remove the background from behind the search box, and change the placeholder text to ‘Search posts and/or filter by tags…’”

Me: “Can the search box be left justified?”

Each request was handled instantly and correctly, regardless of how minor.

What This Phase Revealed

Building the search system taught me several things about AI-human collaboration:

1. Copy-Paste Debugging is Surprisingly Effective

The pattern of “Claude requests specific output → I provide raw logs → Claude diagnoses exact issue” worked better than traditional debugging. No interpretation errors, no lost context.

2. Usage Limits Create Interesting Handoff Challenges

Having to resume development with a fresh AI instance mid-debug was initially jarring, but Claude’s systematic approach meant the handoff worked seamlessly.

3. Raw Error Output is AI’s Best Friend

Instead of me trying to interpret and summarize errors, just pasting the actual console output or server logs led to immediate, accurate diagnoses.

4. Complex UX Patterns Don’t Intimidate AI

My request for “tag bubbles with autocomplete and combined search” would be a significant feature in most apps. Claude treated it as a straightforward implementation challenge.

The Final Product: Search Done Right

After this development marathon (including the forced intermission), we ended up with:

  • Multi-tag bubble interface with removable tags
  • Real-time autocomplete that suggests unselected tags
  • Combined search logic for both tags and content
  • OR-based tag filtering (posts matching ANY selected tag)
  • URL persistence for bookmarkable search states
  • Comprehensive testing with 32 passing tests
  • Polished interactions with contextual UI elements

The search experience went from “completely broken” to “actually better than most production apps” in a single development session.

The Meta Moment: Testing Our Own System

The ultimate test came when I used the new search system to find content for this very blog post:

  • Typed “elixir” → Got tag suggestions
  • Clicked suggestion → Added as bubble
  • Typed “phoenix” → Added second tag
  • Instantly saw filtered results showing only posts with Elixir OR Phoenix tags

I was literally using the search system we’d just built to research the search system we’d just built.

What the Usage Limit Interruption Taught Me

The forced pause in the middle of development revealed something important about AI-assisted coding:

Traditional Pair Programming: Relies on shared mental models and continuous context
AI-Assisted Development: Can be systematically resumed from documentation and error logs

The fact that we could pick up exactly where we left off after a complete context break suggests that AI development might be more resilient to interruptions than human collaboration.

Looking Forward

We’ve now built:

  • Authentication with 2FA
  • Content management with markdown
  • Polished reading experience
  • Sophisticated search and filtering (with battle-tested debugging)

Next up: Deployment to Gigalixir with hot code reloading and CI/CD pipeline setup.

The question: Will Claude maintain this level of systematic problem-solving when dealing with production deployment, environment configuration, and DevOps challenges?

Stay tuned for the final chapter: Taking this thing live…


Written using the search system described within this post. The copy-paste debugging workflow that built this search feature was used to research previous posts while writing this one.

The recursion is getting ridiculous.