
ChessLens

Experiment · Complete

Specialized APIs beat general AI.

A 2-hour prototype proving that purpose-built tools (chess engine + vision model) outperform general-purpose AI assistants at domain-specific analysis.

Gemini · Chess · Vision AI · React/TS

The Problem

General-purpose AI assistants like Perplexity confidently provide incorrect chess analysis: they hallucinate move sequences and misread positions, yet users assume AI confidence equals correctness.

The Approach

Combine three specialized components: Gemini vision extracts the board position from a screenshot, Stockfish calculates the objectively best moves, and Gemini generates a grandmaster-style explanation of why the engine's line is best.
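
A rough TypeScript sketch of how the three stages compose; the helper names below are illustrative placeholders, not the actual ChessLens code.

```ts
// Illustrative composition of the three stages; these helpers are
// placeholders, not the real ChessLens implementation.
declare function extractFenFromImage(image: Blob): Promise<string>;           // Gemini vision
declare function findBestLine(fen: string, depth: number): Promise<string[]>; // Stockfish
declare function explainLine(fen: string, line: string[]): Promise<string>;   // Gemini text

async function analyzeScreenshot(image: Blob) {
  const fen = await extractFenFromImage(image);      // 1. read the board
  const line = await findBestLine(fen, 20);          // 2. find the best line
  const explanation = await explainLine(fen, line);  // 3. explain why it works
  return { fen, line, explanation };
}
```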

Features

Upload chess board screenshots for instant analysis
Stockfish engine analysis with principal variation
Interactive board editing to fix vision errors
Animated move playback
King escape analysis with threat visualization

Technical Highlights

Entirely client-side except API calls to Gemini
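
A minimal sketch of what the browser-side vision call could look like with the @google/generative-ai SDK; the model name, prompt, and function name are assumptions, not necessarily what ChessLens uses.

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

// Sketch only: ask Gemini to read a board screenshot and return a FEN string.
async function extractFen(base64Png: string, apiKey: string): Promise<string> {
  const genAI = new GoogleGenerativeAI(apiKey);
  const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
  const result = await model.generateContent([
    "Return this chess position as a single FEN string, nothing else.",
    { inlineData: { data: base64Png, mimeType: "image/png" } },
  ]);
  return result.response.text().trim();
}
```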

Stockfish runs in Web Worker (non-blocking)
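
A minimal sketch of the worker setup, assuming a stockfish.js build served at /stockfish.js that speaks UCI as plain strings over postMessage; the path and search depth are placeholders.

```ts
const engine = new Worker("/stockfish.js");

// UCI output arrives as text lines; the search never blocks the React UI.
engine.onmessage = (event: MessageEvent<string>) => {
  if (event.data.startsWith("bestmove")) {
    console.log("Engine suggests:", event.data.split(" ")[1]);
  }
};

engine.postMessage("uci");
engine.postMessage("position startpos");
engine.postMessage("go depth 20");
```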

State machine: IDLE → ANALYZING_IMAGE → VERIFYING_BOARD → CALCULATING → RESULT
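
One way to encode that flow in TypeScript is a discriminated union; the payload fields here are assumptions, not the actual ChessLens state shape.

```ts
type AppState =
  | { phase: "IDLE" }
  | { phase: "ANALYZING_IMAGE"; image: Blob }
  | { phase: "VERIFYING_BOARD"; fen: string }
  | { phase: "CALCULATING"; fen: string }
  | { phase: "RESULT"; fen: string; pv: string[]; explanation: string };

// Transitions only ever move forward; anything else is a no-op.
function onBoardConfirmed(state: AppState): AppState {
  return state.phase === "VERIFYING_BOARD"
    ? { phase: "CALCULATING", fen: state.fen }
    : state;
}
```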

Multi-PV analysis showing alternative candidate moves
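
Candidate moves come from the standard UCI MultiPV option; a rough sketch of requesting and collecting them is below (the regex is a simplification of real "info" lines, and the depth is arbitrary).

```ts
// Ask for three candidate lines and collect them from the engine's
// "info ... multipv N ... pv ..." output.
function requestCandidates(engine: Worker, fen: string): Map<number, string> {
  const candidates = new Map<number, string>(); // multipv rank -> move list
  engine.onmessage = (event: MessageEvent<string>) => {
    const match = event.data.match(/multipv (\d+).* pv (.+)$/);
    if (match) candidates.set(Number(match[1]), match[2]);
  };
  engine.postMessage("setoption name MultiPV value 3");
  engine.postMessage(`position fen ${fen}`);
  engine.postMessage("go depth 18");
  return candidates; // fills in asynchronously as info lines arrive
}
```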

Learnings

1. Use specialized tools for specialized domains: chess has 50+ years of engine development behind it.

2. AI composition beats AI monolith: vision + engine + explanation is more reliable than one model.

3. Prototype speed matters: proving a point in 2 hours changes the nature of arguments.