The Code Quality Crusade: When AI Meets Credo's Wrath
July 27, 2025 - Part 13
The Quality Reckoning
After building comprehensive observability infrastructure (Part 12), our Phoenix LiveView blog was professionally monitored but had accumulated some technical debt. It was time for a code quality audit.
The trigger: Running mix credo revealed several cyclomatic complexity warnings—functions that had grown too complex and needed refactoring.
The challenge: Transform complex, monolithic functions into clean, maintainable code without breaking existing functionality.
What followed was a masterclass in AI-driven refactoring, where Claude systematically decomposed complex functions into elegant, single-purpose modules.
The Credo Warnings: A Code Quality Wake-Up Call
$ mix credo
┃ Warnings - please take a look
┃
┣━ [W] ↗ lib/blog/content.ex:827 Function has a cyclomatic complexity of 11
┃ (max is 9). Consider refactoring.
┃
┣━ [W] ↗ lib/blog_web/live/home_live.ex:29 Function has a cyclomatic complexity of 14
┃ (max is 9). Consider refactoring.
┃
┗━ [W] ↗ lib/blog_web/live/home_live.ex:145 Function has a cyclomatic complexity of 12
┃ (max is 9). Consider refactoring.
The diagnosis: Three functions had grown beyond Credo’s complexity threshold, indicating they were trying to do too much.
Me: “Fix the Credo warnings.”
Claude: “I’ll analyze each complex function and extract smaller, focused helper functions to reduce cyclomatic complexity…”
This is where AI-driven refactoring really shines—systematic decomposition without losing the bigger picture.
The Content Module Surgery
The Problem: get_series_empty_state_by_id/1 (Complexity: 11)
The original function was a monolithic state machine:
defp get_series_empty_state_by_id(series_id) do
  # Count published posts
  published_count =
    from(p in Post, where: p.series_id == ^series_id and p.published == true)
    |> RepoService.all()
    |> case do
      {:ok, posts} -> length(posts)
      {:error, _} -> 0
    end

  # Count total posts
  total_count =
    from(p in Post, where: p.series_id == ^series_id)
    |> RepoService.all()
    |> case do
      {:ok, posts} -> length(posts)
      {:error, _} -> 0
    end

  # Complex conditional logic for state determination
  cond do
    total_count == 0 -> :no_posts
    published_count > 0 -> :has_published
    true ->
      # Find earliest unpublished date with complex query
      earliest_date =
        from(p in Post,
          where: p.series_id == ^series_id and p.published == false,
          select: min(p.published_at),
          order_by: [asc: :series_position])
        |> RepoService.one()
        |> case do
          {:ok, nil} -> nil
          {:ok, datetime} -> datetime
          {:error, _} -> nil
        end

      {:upcoming_only, earliest_date}
  end
end
Claude’s Surgical Refactoring
The approach: Split the single complex function into focused, single-purpose helpers:
defp get_series_empty_state_by_id(series_id) do
  published_count = get_series_published_count(series_id)
  total_count = get_series_total_count(series_id)
  determine_series_empty_state(series_id, published_count, total_count)
end

defp get_series_published_count(series_id) do
  from(p in Post,
    where: p.series_id == ^series_id and p.published == true,
    select: count(p.id))
  |> RepoService.one()
  |> case do
    {:ok, count} -> count
    {:error, _} -> 0
  end
end

defp get_series_total_count(series_id) do
  from(p in Post,
    where: p.series_id == ^series_id,
    select: count(p.id))
  |> RepoService.one()
  |> case do
    {:ok, count} -> count
    {:error, _} -> 0
  end
end

defp determine_series_empty_state(_series_id, _published_count, 0), do: :no_posts

defp determine_series_empty_state(_series_id, published_count, _total_count)
     when published_count > 0, do: :has_published

defp determine_series_empty_state(series_id, 0, total_count) when total_count > 0 do
  earliest_date = get_earliest_unpublished_date(series_id)
  {:upcoming_only, earliest_date}
end

defp get_earliest_unpublished_date(series_id) do
  from(p in Post,
    where: p.series_id == ^series_id and p.published == false,
    select: min(p.published_at),
    order_by: [asc: :series_position])
  |> RepoService.one()
  |> case do
    {:ok, nil} -> nil
    {:ok, datetime} -> datetime
    {:error, _} -> nil
  end
end
The transformation:
- Before: 1 function with complexity 11
- After: 5 functions with complexity ≤ 3 each
- Benefits: Each function has a single, clear responsibility
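One payoff of the extraction is that the decision logic became a pure function. As a hedged, standalone sketch (the real functions are private helpers inside the content module, so this illustrative module exposes the clause structure publicly and stubs out the date lookup), each state can be exercised directly:

```elixir
# Illustrative module mirroring the refactored clause structure; the
# earliest-date query is stubbed so the decision logic stays pure.
defmodule SeriesStateSketch do
  def determine(_published_count, 0), do: :no_posts
  def determine(published_count, _total) when published_count > 0, do: :has_published
  def determine(0, total) when total > 0, do: {:upcoming_only, :stubbed_date}
end

# Pattern matches double as assertions: a mismatch raises MatchError.
:no_posts = SeriesStateSketch.determine(0, 0)
:has_published = SeriesStateSketch.determine(3, 7)
{:upcoming_only, _} = SeriesStateSketch.determine(0, 5)
```

Each clause maps to exactly one empty-state outcome, which is what drops the per-function complexity so sharply.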
The LiveView Parameter Parsing Overhaul
The Problem: handle_params/3 (Complexity: 14)
The HomeLive parameter parsing had grown into a nested conditional nightmare:
def handle_params(params, _uri, socket) do
  # Complex nested parameter parsing
  tags =
    case params["tags"] do
      nil -> []
      "" -> []
      tags_string ->
        tags_string |> String.split(",") |> Enum.map(&String.trim/1) |> Enum.reject(&(&1 == ""))
    end

  series =
    case params["series"] do
      nil -> []
      "" -> []
      series_string when is_binary(series_string) ->
        series_string |> String.split(",") |> Enum.map(&String.trim/1) |> Enum.reject(&(&1 == ""))
      series_list when is_list(series_list) -> series_list
    end

  search =
    case params["search"] do
      nil -> nil
      "" -> nil
      search_term -> String.trim(search_term)
    end

  # More complex logic...
end
Claude’s Parameter Parsing Extraction
The solution: Extract specialized parsing functions:
def handle_params(params, _uri, socket) do
  tags = parse_tags_param(params["tags"])
  series = parse_series_param(params["series"])
  search = parse_search_param(params["search"])
  # Clean, linear logic flow...
end

defp parse_tags_param(nil), do: []
defp parse_tags_param(""), do: []

defp parse_tags_param(tags_string) when is_binary(tags_string) do
  tags_string
  |> String.split(",")
  |> Enum.map(&String.trim/1)
  |> Enum.reject(&(&1 == ""))
end

defp parse_series_param(nil), do: []
defp parse_series_param(""), do: []

defp parse_series_param(series_string) when is_binary(series_string) do
  series_string
  |> String.split(",")
  |> Enum.map(&String.trim/1)
  |> Enum.reject(&(&1 == ""))
end

defp parse_series_param(series_list) when is_list(series_list), do: series_list

defp parse_search_param(nil), do: nil
defp parse_search_param(""), do: nil

defp parse_search_param(search_term) when is_binary(search_term) do
  case String.trim(search_term) do
    "" -> nil
    trimmed -> trimmed
  end
end
The elegance: Pattern matching and guard clauses replaced nested conditional logic, making each case explicit and testable.
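Because each parser is now a total function of its raw parameter, every branch can be hit in isolation. A minimal, self-contained sketch (the module name is illustrative; the real parsers are private LiveView helpers):

```elixir
defmodule ParamParsingSketch do
  # Mirrors parse_tags_param/1 from the refactoring above.
  def parse_tags_param(nil), do: []
  def parse_tags_param(""), do: []

  def parse_tags_param(tags_string) when is_binary(tags_string) do
    tags_string
    |> String.split(",")
    |> Enum.map(&String.trim/1)
    |> Enum.reject(&(&1 == ""))
  end
end

# One clause per observable behavior; a mismatch raises MatchError.
[] = ParamParsingSketch.parse_tags_param(nil)
[] = ParamParsingSketch.parse_tags_param("")
["elixir", "phoenix"] = ParamParsingSketch.parse_tags_param(" elixir, phoenix ,")
```

Note how whitespace and trailing commas are normalized away by the trim/reject pipeline, so callers never see empty tag entries.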
The Post Loading Function Decomposition
The Problem: load_posts/1 (Complexity: 12)
The post loading function was handling multiple concerns in a single massive function:
defp load_posts(socket) do
  # Complex query building
  # Analytics tracking
  # Search term processing
  # Empty state determination
  # Error handling
  # Socket assignment updates
  # ... all mixed together
end
Claude’s Separation of Concerns
The refactoring strategy: Separate data loading, analytics, and presentation concerns:
defp load_posts(socket) do
  query_opts =
    build_post_query_opts(
      socket.assigns.page,
      socket.assigns.per_page,
      socket.assigns.tags,
      socket.assigns.series,
      socket.assigns.search
    )

  posts = Content.list_posts(query_opts)

  socket
  |> assign(:posts, posts)
  |> maybe_add_search_term()
  |> maybe_add_empty_state()
  |> track_analytics()
end

defp build_post_query_opts(page, per_page, tags, series, search) do
  [
    page: page,
    per_page: per_page,
    tags: tags,
    series: series,
    search: search
  ]
end

defp maybe_add_search_term(%{assigns: %{search: search}} = socket) when search != nil do
  assign(socket, :search_term, build_search_term(search, socket.assigns.tags, socket.assigns.series))
end

defp maybe_add_search_term(socket), do: socket

defp maybe_add_empty_state(%{assigns: %{posts: []}} = socket) do
  if has_search_criteria?(socket.assigns.search, socket.assigns.tags, socket.assigns.series) do
    assign(socket, :empty_state, :no_results)
  else
    assign(socket, :empty_state, get_series_empty_state(socket.assigns.series))
  end
end

defp maybe_add_empty_state(socket), do: assign(socket, :empty_state, nil)
The transformation: One complex function became a pipeline of focused, composable operations.
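The pipeline leans on small predicates such as has_search_criteria?/3, whose implementation isn't shown in the post. A plausible sketch (an assumption for illustration, not the project's actual code) would treat any active filter as search criteria:

```elixir
defmodule SearchCriteriaSketch do
  # Hypothetical: true when any filter (search text, tags, or series) is active.
  def has_search_criteria?(search, tags, series) do
    search != nil or tags != [] or series != []
  end
end

true = SearchCriteriaSketch.has_search_criteria?("elixir", [], [])
true = SearchCriteriaSketch.has_search_criteria?(nil, ["otp"], [])
false = SearchCriteriaSketch.has_search_criteria?(nil, [], [])
```

Keeping such predicates one-liners is what lets maybe_add_empty_state/1 read as a single decision rather than nested conditionals.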
The Refactoring Philosophy: AI vs Human Approaches
This refactoring revealed interesting differences between AI and human refactoring strategies:
Where Claude Excels
Systematic decomposition: Claude methodically identified each distinct responsibility within complex functions and extracted them consistently.
Pattern recognition: Claude recognized repeated patterns (like parameter parsing) and extracted them into reusable functions with consistent interfaces.
Comprehensive coverage: The refactoring touched all complexity issues simultaneously, rather than fixing them piecemeal.
The AI Refactoring Advantages
No emotional attachment: Claude had no bias toward preserving existing code structure—it ruthlessly optimized for clarity and maintainability.
Consistency: All extracted functions followed the same naming patterns and structural approaches.
Completeness: The refactoring addressed every Credo warning, not just the most obvious ones.
The Testing Validation
After the refactoring, the critical question: Did we break anything?
$ mix test
Finished in 2.1 seconds (0.00s async, 2.1s sync)
129 tests, 0 failures
Randomized with seed 42
Result: All tests passed. The refactoring preserved functionality while dramatically improving code quality.
The lesson: Proper refactoring maintains external behavior while improving internal structure.
The Credo Victory
$ mix credo
Checking 47 files ...
Analysis took 0.4 seconds (0.2s to load, 0.2s running 52 checks on 47 files)
129 mods/funs, found no issues.
Use `mix credo --strict` for stricter analysis.
Achievement unlocked: Zero Credo warnings.
Cyclomatic complexity results:
- get_series_empty_state_by_id: 11 → 3 (73% reduction)
- handle_params: 14 → 6 (57% reduction)
- load_posts: 12 → 4 (67% reduction)
All functions now well within Credo’s complexity guidelines.
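The "max is 9" threshold in the warnings comes from Credo's CyclomaticComplexity check, which is tunable per project in .credo.exs. A representative fragment (the value shown is Credo's default; this project's actual config may differ):

```elixir
# .credo.exs (fragment): the check behind the warnings in this post.
%{
  configs: [
    %{
      name: "default",
      checks: [
        {Credo.Check.Refactor.CyclomaticComplexity, max_complexity: 9}
      ]
    }
  ]
}
```

Raising max_complexity would have silenced the warnings without fixing anything; refactoring to get under the default is the durable solution.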
The Maintainability Payoff
The real test of refactoring comes during future development. Two weeks later, when we needed to modify the series empty state logic (spoiler for Part 12), the refactored code made the changes trivial:
Before refactoring: Modifying series logic would have required understanding and changing a complex, monolithic function.
After refactoring: The change was isolated to determine_series_empty_state/3—a focused, testable function with clear inputs and outputs.
The refactoring investment paid dividends immediately.
What AI Refactoring Teaches About Code Quality
This refactoring session revealed several insights about AI-assisted code improvement:
AI’s Systematic Approach to Complexity
Human refactoring: Often tactical, fixing the most painful problems first.
AI refactoring: Strategic and comprehensive, addressing all complexity issues with consistent patterns.
The advantage: AI treats refactoring as an optimization problem, finding global solutions rather than local fixes.
The Function Extraction Philosophy
Claude’s approach to function extraction followed clear principles:
- Single Responsibility: Each extracted function had one clear purpose
- Meaningful Names: Function names clearly described their behavior
- Consistent Interfaces: Similar functions used similar parameter patterns
- Pattern Matching: Leveraged Elixir’s strengths for conditional logic
The Cognitive Load Reduction
Before: Understanding the code required holding multiple concerns in your head simultaneously.
After: Each function could be understood independently, with clear inputs and outputs.
The impact: Code became self-documenting through structure and naming.
The Recursive Documentation Quality Check
As I write this devlog entry, I’m applying the same principles Claude used for refactoring:
- Single purpose sections: Each section covers one aspect of the refactoring
- Clear structure: The narrative flows from problem → analysis → solution → results
- Consistent patterns: Similar refactoring examples follow similar explanation formats
Even documentation benefits from the principles that make code maintainable.
Looking Back: The Code Quality Foundation
This refactoring established a foundation for all future development:
Technical debt eliminated: Complex functions that would have become maintenance nightmares were decomposed into manageable pieces.
Development velocity increased: Future changes could be made with confidence, knowing the code structure was clean and testable.
Quality standards established: The codebase now has consistent patterns for handling complexity, providing templates for future code.
What’s Next After the Quality Crusade?
With code quality issues resolved, we were ready for the next major architectural challenge: implementing a complete database migration system that could bridge Ecto’s expectations with Turso’s HTTP API.
The foundation was solid. Now it was time to build something truly complex on top of it.
Sometimes the most important work is the work that makes future work possible.
This post documents the systematic refactoring that eliminated all cyclomatic complexity warnings in our Phoenix LiveView blog. The clean, maintainable code described here became the foundation for the more complex database adapter patterns implemented in Part 12.
Quality first, features second—a principle that pays compound returns in software development.