Anthropic's ID Verification Tightens Grip on China Access

Anthropic has begun requiring government-issued photo ID and facial verification from some customers, escalating enforcement of its restrictions on access from countries deemed U.S. adversaries, including China, Russia, and North Korea. The policy shift follows a year of incremental steps to block users in these regions. At least one Beijing-based AI startup co-founder lost access to Claude Code after the announcement and switched to OpenAI's Codex, though OpenAI maintains similar geographic restrictions without requiring ID verification.
TL;DR
- Anthropic now requires government ID and facial verification from select customers to comply with restrictions on adversary nations
- A Beijing AI startup co-founder's Claude Code access was shut down after the policy announcement, forcing a switch to OpenAI's Codex
- OpenAI enforces the same geographic restrictions but does not require ID verification, creating a competitive disadvantage for Anthropic
- The move represents Anthropic's toughest enforcement yet after a year of incremental restrictions targeting China, Russia, and North Korea
Why it matters
Anthropic's shift from passive geographic blocking to active identity verification signals a hardening of U.S. export controls on frontier AI capabilities. This escalation reflects broader geopolitical tensions around AI access and raises questions about how AI companies will balance compliance with market reach. The policy also highlights a potential enforcement gap between Anthropic and competitors like OpenAI, which maintain restrictions without the same verification burden.
Business relevance
For founders and operators in restricted regions, Anthropic's move closes workarounds that had previously allowed access despite geographic blocks. The requirement to provide government ID creates friction and legal risk that may push users toward competitors with lower verification barriers. This could reshape the competitive dynamics of AI coding tools and foundation model access in Asia and other regions.
Key implications
- Anthropic's ID verification requirement may accelerate adoption of alternative AI services in restricted regions, benefiting competitors like OpenAI and local AI providers
- The policy creates a precedent for other AI companies to implement similar identity verification, potentially becoming an industry standard for compliance
- Chinese and other adversary-nation startups may face increased operational friction when integrating U.S. AI tools, incentivizing investment in domestic alternatives
What to watch
Monitor whether other major AI providers adopt similar ID verification requirements and how strictly they enforce geographic restrictions. Track whether the policy measurably reduces Anthropic's usage in restricted regions or simply shifts users to competitors. Watch for any regulatory or diplomatic responses from affected countries, and whether Anthropic's enforcement approach becomes a model for other U.S. AI companies.
vff Briefing



