AI IQ Launches Model Scorecard, Sparks Precision vs. Simplicity Debate

A new site called AI IQ has launched a framework for scoring frontier language models on a single intelligence quotient, mapping over 50 models onto a bell curve like the one used in human IQ tests. The visualization has drawn quick praise from enterprise technologists for making the fragmented model landscape legible, but sharp criticism from researchers who argue that reducing jagged, uneven AI capabilities to a single number creates false precision. The methodology groups 12 benchmarks across four reasoning dimensions (abstract, mathematical, programmatic, and academic) and applies hand-calibrated difficulty curves to prevent easier benchmarks from inflating scores, with GPT-5.5 currently leading at an estimated IQ of 136.
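AI IQ's exact formula is not spelled out here, but a minimal sketch of how benchmark results could be mapped onto an IQ-style scale might look like the following. It assumes a plain unweighted average per model (where the site reportedly applies hand-calibrated difficulty curves) and a z-score normalization rescaled to the conventional IQ mean of 100 and standard deviation of 15; the model names and scores are hypothetical.

```python
from statistics import mean, stdev

# Hypothetical benchmark accuracies (0-1) for a few models; the real AI IQ
# inputs span 12 benchmarks across four reasoning dimensions.
scores = {
    "model_a": [0.92, 0.81, 0.77, 0.88],
    "model_b": [0.85, 0.79, 0.70, 0.83],
    "model_c": [0.60, 0.55, 0.48, 0.62],
}

# Collapse each model's benchmarks into one composite (an unweighted mean here;
# AI IQ reportedly weights by hand-calibrated difficulty curves instead).
composites = {name: mean(vals) for name, vals in scores.items()}

# Map composites onto an IQ-style bell curve: z-score against the model
# population, then rescale to mean 100 and standard deviation 15.
mu, sigma = mean(composites.values()), stdev(composites.values())
iqs = {name: 100 + 15 * (c - mu) / sigma for name, c in composites.items()}

for name, iq in sorted(iqs.items(), key=lambda kv: -kv[1]):
    print(f"{name}: estimated IQ {iq:.0f}")
```

The key takeaway from the sketch is that the scale is relative: an "IQ" here only says how far a model sits from the average of whatever population it is normalized against, which is one reason the weighting and calibration choices matter so much.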
TL;DR
- AI IQ, created by Ryan Shea (Stacks co-founder), assigns estimated IQs to 50+ language models using 12 benchmarks across four reasoning dimensions
- GPT-5.5 leads at IQ 136, followed closely by Opus 4.7 (132), GPT-5.4 (131), and Gemini 3.1 Pro (131), showing unprecedented convergence at the frontier
- The framework uses hand-calibrated difficulty curves to prevent easier benchmarks from inflating scores and conservatively handles missing data
- Enterprise technologists praise the visualization for clarity, while researchers warn the single-number reduction obscures AI's uneven, jagged capabilities
Why it matters
The AI model market has fragmented into dozens of models with different strengths and weaknesses, making like-for-like comparison difficult. A standardized scoring framework, even a contested one, addresses a real need for operators and buyers to understand relative model capabilities at a glance. However, the backlash highlights a fundamental tension in AI evaluation: whether complex, multidimensional systems can be meaningfully reduced to a single metric without losing critical information about real-world performance.
Business relevance
For enterprise buyers and product teams evaluating which frontier models to integrate, a legible comparison tool reduces decision friction and research overhead. At the same time, the framework's limitations mean teams cannot rely on AI IQ scores alone for procurement decisions and must still conduct task-specific benchmarking. The site's visibility also signals growing demand for transparent, accessible model evaluation as the market matures.
Key implications
- Single-number metrics risk becoming a proxy for capability in market perception, potentially influencing model adoption and pricing even if the underlying methodology has significant blind spots
- The tight clustering at the frontier (top models within 7 IQ points) suggests diminishing returns on raw capability gains and may shift competitive focus toward specialized performance, cost, and inference speed
- Methodology choices like difficulty curve calibration and missing data handling embed subjective judgments that can shift rankings (see the sketch below); transparency is necessary but may not resolve fundamental disagreements about what AI intelligence means
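To make the missing-data point concrete, the toy comparison below (not AI IQ's actual policy, and with invented scores) shows how two defensible imputation choices, scoring an absent benchmark as zero versus as the model's own average, can swap the ranking of two otherwise similar models.

```python
from statistics import mean

# Hypothetical benchmark results; None marks a benchmark a model was never run on.
results = {
    "model_x": [0.90, 0.85, None, 0.80],
    "model_y": [0.84, 0.83, 0.82, 0.81],
}

def composite(vals, policy):
    """Aggregate benchmark scores under a given missing-data policy."""
    if policy == "zero":          # conservative: treat a missing run as a failure
        filled = [v if v is not None else 0.0 for v in vals]
    else:                         # "self_mean": assume the model's average performance
        observed = [v for v in vals if v is not None]
        filled = [v if v is not None else mean(observed) for v in vals]
    return mean(filled)

for policy in ("zero", "self_mean"):
    ranking = sorted(results, key=lambda m: -composite(results[m], policy))
    print(policy, "->", ranking)
```

Under the zero policy model_y leads; under the self-mean policy model_x does, which is the sense in which seemingly minor methodology choices embed judgments that can reorder a leaderboard.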
What to watch
Monitor whether AI IQ becomes an industry standard reference point or remains a niche tool, and track how model creators respond to their rankings (defensive statements, methodology critiques, or attempts to optimize for the framework). Watch for competing evaluation frameworks that attempt to address the jagged capability problem differently, and observe whether the tight convergence at the frontier persists or widens as new models launch.