NanoKnow: Mapping How LLMs Encode Knowledge

Researchers have released NanoKnow, a benchmark dataset that labels questions from Natural Questions and SQuAD according to whether their answers appear in nanochat's fully transparent pre-training corpus. This enables direct measurement of how LLMs encode and rely on parametric knowledge versus external evidence. Experiments across eight nanochat checkpoints show that answer frequency in the training data strongly predicts closed-book accuracy, that external evidence reduces this dependence but complements rather than replaces parametric knowledge, and that irrelevant context actively harms performance, with the damage depending on its position and volume.
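
The core construction is simple to picture. Below is a minimal sketch of the partitioning idea, assuming the corpus fits in memory as a list of document strings and that case-insensitive exact string matching approximates the paper's answer-presence check; the function names and data shapes are illustrative, not NanoKnow's actual API.

```python
import re

def answer_frequency(answer: str, corpus_docs: list[str]) -> int:
    # Count case-insensitive exact-string occurrences of the answer
    # across the corpus documents (an assumed stand-in for however
    # NanoKnow actually detects answer presence).
    pattern = re.compile(re.escape(answer), re.IGNORECASE)
    return sum(len(pattern.findall(doc)) for doc in corpus_docs)

def partition_qa(qa_pairs, corpus_docs):
    # Split QA pairs into "seen" (answer occurs at least once in the
    # pre-training text) and "unseen" buckets, keeping the raw
    # frequency so downstream analysis can bin by it.
    seen, unseen = [], []
    for question, answer in qa_pairs:
        freq = answer_frequency(answer, corpus_docs)
        (seen if freq > 0 else unseen).append((question, answer, freq))
    return seen, unseen
```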
TL;DR
- NanoKnow partitions QA datasets by answer presence in nanochat's open pre-training data, enabling transparent analysis of knowledge sources
- Closed-book accuracy correlates strongly with answer frequency in pre-training, showing parametric knowledge is frequency-dependent (a binning sketch follows this list)
- External evidence mitigates frequency bias but does not eliminate it, indicating parametric and external knowledge are complementary rather than substitutable
- Non-relevant context degrades accuracy, with the effect depending on its position and quantity, highlighting the importance of retrieval quality
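
To make the frequency-dependence claim concrete, here is one way such an analysis could be run: bucket questions by order of magnitude of answer frequency and compute per-bucket closed-book accuracy. The binning scheme and the `(freq, correct)` record shape are assumptions for illustration, not the paper's reported methodology.

```python
import math
from collections import defaultdict

def accuracy_by_frequency(results):
    # `results` is an iterable of (freq, correct) pairs: the answer's
    # occurrence count in pre-training data and whether one checkpoint
    # answered the question correctly closed-book.
    bins = defaultdict(lambda: [0, 0])  # log10 bucket -> [correct, total]
    for freq, correct in results:
        bucket = -1 if freq == 0 else int(math.log10(freq))
        bins[bucket][0] += int(correct)
        bins[bucket][1] += 1
    # Accuracy per frequency bucket; a curve that rises with the bucket
    # index is the frequency-dependence signature described above.
    return {b: correct / total for b, (correct, total) in sorted(bins.items())}
```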
Why it matters
Understanding how LLMs encode knowledge has been opaque because pre-training data is typically proprietary or inaccessible. NanoKnow leverages nanochat's open training corpus to directly measure this, providing empirical grounding for how models balance learned knowledge against external information. This work clarifies fundamental questions about model behavior that affect reliability, interpretability, and design choices in production systems.
Business relevance
For teams building RAG systems and retrieval-augmented applications, these findings quantify the tradeoff between relying on model weights versus external sources. The result that irrelevant context actively harms performance has direct implications for retrieval pipeline design and cost, while the frequency-dependence finding suggests that fine-tuning or continued pre-training on underrepresented domains may be necessary for specialized applications.
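
One plausible way to act on the irrelevant-context finding in a retrieval pipeline is to filter and cap retrieved passages before they reach the prompt. The sketch below assumes a generic relevance scorer (for example, a cross-encoder reranker); the threshold and passage cap are illustrative defaults, not values from the paper.

```python
def build_context(query, passages, scorer, min_score=0.5, max_passages=3):
    # `scorer` is any callable mapping (query, passage) to a relevance
    # score in [0, 1], e.g. a cross-encoder reranker.
    scored = sorted(((scorer(query, p), p) for p in passages), reverse=True)
    # Drop passages below the relevance threshold and cap the count:
    # per the findings above, extra low-relevance context is a net
    # negative, not harmless padding.
    kept = [p for score, p in scored if score >= min_score][:max_passages]
    return "\n\n".join(kept)
```

Ordering the kept passages most-relevant-first also hedges against the position effect the study reports.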
Key implications
- LLM knowledge is not uniformly distributed: parametric knowledge is frequency-biased, and this bias persists even with external evidence, so it requires explicit mitigation strategies
- Retrieval quality matters more than quantity: non-relevant context is actively harmful, and the position of irrelevant information affects performance, both of which should inform RAG system architecture
- Open pre-training data enables reproducible analysis of model behavior, and similar transparency efforts could accelerate understanding of larger models and their knowledge boundaries
What to watch
Monitor whether other model developers adopt similar transparency practices around pre-training data, as this enables reproducible knowledge auditing. Watch for follow-up work applying NanoKnow methodology to larger models and different domains, and observe whether these findings influence RAG system design patterns in production deployments.