A real-world debugging story that illustrates how human creativity and out-of-the-box thinking still outpace LLM capabilities in complex problem-solving.
The Context
This is a story about the current state of human versus AI capabilities in software engineering. The author uses LLMs regularly for code reviews, exploring ideas, and testing approaches - they're not anti-AI. But this experience highlights how human creativity still has a significant edge in complex problem-solving.
The Problem
While working on Vector Sets for Redis, the author tackled a subtle problem: making RDB and RESTORE payloads resistant to corruption. The challenge involved:
The Architecture: To optimize performance, the system serializes the HNSW (Hierarchical Navigable Small World) graph representation directly rather than element-vector pairs - storing node links as integers and resolving them to pointers. This makes loading 100x faster but creates vulnerability to corrupted data.
The Bug Scenario:
- Corrupted data indicates node A links to node B, but B doesn't link back to A
- When node B is deleted, the broken reciprocity means A's link to B isn't cleared
- During graph scanning, accessing B through A triggers a use-after-free error
The Validation Challenge: After loading data, the system needs to verify all links are reciprocal. The straightforward approach is O(N²) - for each node, scan all levels and neighbors, checking reciprocal links. This caused loading time for 20 million vectors to jump from 45 to 90 seconds.
The Human-LLM Collaboration
LLM's First Suggestion: Gemini 2.5 Pro proposed sorting neighbor link pointers to enable binary search. While valid, this wasn't clearly better for arrays of 16-32 pointers and wouldn't dramatically improve performance.
Human's Initial Innovation: Use a hash table to track link pairs. When seeing A linking to B at level X, store "A:B:X" (with A and B sorted so A>B). When the same link appears reciprocally, remove it from the hash table. Any remaining entries indicate non-reciprocal links.
LLM's Concern: Pointed out overhead from snprintf() for key creation and hashing time.
Human's Refinement: Eliminate snprintf() by using memcpy() with fixed-size keys.
Human's Breakthrough: Replace the hash table entirely with a fixed 12-byte accumulator. XOR each link (A:B:X = 8+8+4 bytes) into the accumulator. Since XORing the same value twice cancels out, a non-zero final value indicates orphaned links.
LLM's Valid Critique: Identified collision risks - pointers have similar structures, so three spurious links L1, L2, L3 might XOR to zero (L1 XOR L2 = L3), creating false negatives. Allocators are predictable and externally guessable, which could be exploited.
Human's Final Solution:
- Generate a random seed S from /dev/urandom
- For each link A:B:X, compute murmur-128(S:A:B:X)
- XOR the 128-bit hash output into a register
- Verify the register is zero at the end
LLM's Assessment: This approach effectively prevents both casual collisions and deliberate attacks, since the seed S is unknown, pointers can't be fully controlled, and combining these factors is extremely difficult.
The Insight
This exchange demonstrates a crucial distinction: humans excel at creative, imprecise, out-of-the-box solutions that work better than conventional approaches. The progression from hash tables to XOR accumulators to seeded hashing represents a type of lateral thinking that remains challenging for LLMs.
However, the LLM served as an invaluable "smart duck" - a conversational partner for verifying ideas, identifying weaknesses, and exploring implications. The author might not have developed these ideas in the same way without having an intelligent sounding board.
The Takeaway
LLMs are useful tools for coding - excellent for reviews, exploration, and validation. But human creativity, particularly the ability to envision unconventional solutions and make intuitive leaps, still has a significant edge. The most effective approach combines human inventiveness with AI's analytical capabilities, using each for what it does best.
Human intelligence remains far ahead in creative problem-solving, even as AI becomes increasingly useful as a collaborative tool.