# ChessLens
**Experiment: complete.** Specialized APIs beat general AI.
A 2-hour prototype proving that purpose-built tools (chess engine + vision model) outperform general-purpose AI assistants at domain-specific analysis.
## The Problem
General-purpose AI assistants like Perplexity confidently provide incorrect chess analysis. They hallucinate move sequences and misread positions, but users assume AI confidence equals correctness.
## The Approach
Combine three specialized components: Gemini vision extracts the board position from a screenshot, Stockfish calculates the objectively best moves, and Gemini generates a grandmaster-style explanation of why the engine's line is best.
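The composition above can be sketched as a three-stage pipeline. This is a minimal illustration, not the project's actual code: the function names are hypothetical, and the stub bodies stand in for the real Gemini and Stockfish calls.

```typescript
type Fen = string;

interface Analysis {
  fen: Fen;
  bestMove: string;
  explanation: string;
}

// Stage 1 (stub): in the real app, Gemini vision reads the screenshot
// and returns the position as a FEN string.
async function extractFen(_screenshot: unknown): Promise<Fen> {
  return "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1";
}

// Stage 2 (stub): in the real app, Stockfish searches the position
// and returns its best move in UCI notation.
async function findBestMove(_fen: Fen): Promise<string> {
  return "e2e4";
}

// Stage 3 (stub): in the real app, Gemini writes the grandmaster-style note.
async function explainMove(_fen: Fen, move: string): Promise<string> {
  return `In this position, ${move} stakes a claim in the center.`;
}

// Each stage does exactly one thing, so a wrong answer is attributable
// to exactly one component.
async function analyze(screenshot: unknown): Promise<Analysis> {
  const fen = await extractFen(screenshot);
  const bestMove = await findBestMove(fen);
  const explanation = await explainMove(fen, bestMove);
  return { fen, bestMove, explanation };
}
```

The point of the composition is separability: if the FEN is wrong, blame vision; if the move is weak, blame the engine settings; if the prose is off, blame the prompt.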
## Features
### Technical Highlights
- Entirely client-side except for API calls to Gemini
- Stockfish runs in a Web Worker (non-blocking)
- State machine: IDLE → ANALYZING_IMAGE → VERIFYING_BOARD → CALCULATING → RESULT
- Multi-PV analysis showing alternative candidate moves
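The state machine listed above can be sketched as a transition table. The transitions are assumed from the state names, not taken from the actual implementation; the reset edge from RESULT back to IDLE is an assumption.

```typescript
type State =
  | "IDLE"
  | "ANALYZING_IMAGE"
  | "VERIFYING_BOARD"
  | "CALCULATING"
  | "RESULT";

// Each state has exactly one successor, so the UI can never skip
// verification or show a result before the engine has finished.
const NEXT: Record<State, State> = {
  IDLE: "ANALYZING_IMAGE",
  ANALYZING_IMAGE: "VERIFYING_BOARD",
  VERIFYING_BOARD: "CALCULATING",
  CALCULATING: "RESULT",
  RESULT: "IDLE", // assumed: finishing an analysis resets for the next one
};

function advance(state: State): State {
  return NEXT[state];
}
```

For the Multi-PV item, note that this maps onto the standard UCI option (`setoption name MultiPV value 3`), which makes Stockfish report its top N candidate lines instead of only the best one.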
## Learnings
- Use specialized tools for specialized domains: chess has 50+ years of engine development behind it
- AI composition beats an AI monolith: vision + engine + explanation is more reliable than one model doing everything
- Prototype speed matters: proving a point in 2 hours changes the nature of an argument