vff — the signal in the noise
News · Trending

Enterprise Software Ditches Flat Fees for AI Usage Pricing

Aaron Holmes

Enterprise software companies are abandoning flat per-user subscription fees in favor of usage-based pricing tied to AI consumption. By the end of 2025, 79 of the 500 largest software firms tracked by analyst Kyle Poyar, including HubSpot, Adobe, and Salesforce, had implemented additional charges based on AI usage, more than doubling the count from 2024. This shift reflects how AI capabilities are disrupting traditional seat-based licensing models that no longer capture the value these tools generate.

TL;DR

  • 79 of 500 major software companies now charge extra for AI usage, up from roughly 35 in 2024
  • HubSpot, Adobe, and Salesforce among firms moving away from flat per-user fees
  • Usage-based pricing reflects AI's threat to legacy seat-based subscription models
  • Shift accelerated through 2025 as AI features became core product differentiators

Why it matters

This pricing migration signals that AI is no longer a peripheral feature but a primary value driver in enterprise software. Companies can no longer sustain traditional per-seat models when AI usage varies wildly across customers and generates outsized value for heavy users. The shift also indicates consolidation around usage-based economics as the industry standard for AI-augmented products.

Business relevance

For operators and founders, this trend validates usage-based pricing as a viable model for AI-heavy products and suggests customers will accept incremental charges for AI capabilities. It also creates pressure on legacy software vendors to restructure pricing or risk losing customers to more flexible competitors. Startups building AI tools should consider usage-based models from the outset rather than retrofitting traditional licensing.

Key implications

  • Seat-based licensing is becoming obsolete for software with meaningful AI components, forcing legacy vendors to restructure revenue models
  • Usage-based pricing may increase customer acquisition costs initially but allows vendors to capture more value from power users
  • Customers gain flexibility but face unpredictable costs if AI usage scales unexpectedly, creating new procurement and budgeting challenges (a rough cost sketch follows this list)
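
To make the budgeting point concrete, here is a minimal back-of-the-envelope sketch comparing a flat per-seat subscription with a hybrid usage-based plan. All figures are hypothetical assumptions for illustration; the $50 seat price, $0.02 per AI action, and usage tiers are not drawn from any vendor mentioned above.

```python
# Hypothetical numbers only: how a metered AI surcharge can undercut a flat
# per-seat price at low volumes and overshoot it when usage scales.

def seat_based_cost(seats: int, price_per_seat: float) -> float:
    """Flat licensing: monthly cost depends only on headcount."""
    return seats * price_per_seat

def usage_based_cost(seats: int, base_per_seat: float,
                     ai_actions: int, price_per_action: float) -> float:
    """Hybrid pricing: a lower base fee plus a metered AI usage charge."""
    return seats * base_per_seat + ai_actions * price_per_action

if __name__ == "__main__":
    seats = 200
    for monthly_ai_actions in (10_000, 100_000, 1_000_000):
        flat = seat_based_cost(seats, price_per_seat=50.0)
        metered = usage_based_cost(seats, base_per_seat=30.0,
                                   ai_actions=monthly_ai_actions,
                                   price_per_action=0.02)
        print(f"{monthly_ai_actions:>9,} AI actions: "
              f"flat ${flat:,.0f} vs usage-based ${metered:,.0f}")
```

With these assumed numbers, the metered plan is cheaper at low volumes but roughly 2.6x the flat price at a million AI actions per month, which is exactly the variance procurement teams now have to plan around.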

What to watch

Monitor whether usage-based AI pricing becomes standard across all software categories or remains concentrated in specific verticals. Watch for customer backlash or churn if AI charges become too aggressive, and track whether startups gain competitive advantage by offering more transparent or predictable AI pricing models. Also observe how enterprise procurement teams adapt to managing variable AI costs in their budgets.

Related stories

Lightweight Model Beats GPT-4o at Robot Gesture Prediction
Research

Researchers have developed a lightweight transformer model that generates co-speech gestures for robots by predicting both semantic gesture placement and intensity from text and emotion signals alone, without requiring audio input at inference time. The model outperforms GPT-4o on the BEAT2 dataset for both gesture classification and intensity regression tasks. The approach is computationally efficient enough for real-time deployment on embodied agents, addressing a gap in current robot systems that typically produce only rhythmic beat-like motions rather than semantically meaningful gestures.

3 days ago · ArXiv (cs.AI)
AWS Launches G7e GPU Instances for Cheaper Large Model Inference
Trending · Model Release

AWS has launched G7e instances on Amazon SageMaker AI, powered by NVIDIA RTX PRO 6000 Blackwell GPUs with 96 GB of GDDR7 memory per GPU. The instances deliver up to 2.3x the inference performance of previous-generation G6e instances and support configurations of 1 to 8 GPUs, enabling deployment of large language models of up to 300B parameters on the largest 8-GPU node. This represents a significant upgrade in memory bandwidth, networking throughput, and model capacity for generative AI inference workloads.
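
As a rough sanity check on the 300B-parameter claim, the sketch below assumes half-precision (BF16/FP16) weights at 2 bytes per parameter and ignores KV cache, activations, and runtime overhead; it is an illustrative estimate, not AWS's sizing methodology.

```python
# Back-of-the-envelope memory check (assumptions: 2-byte BF16/FP16 weights;
# KV cache, activations, and framework overhead are ignored).
params = 300e9            # 300B parameters
bytes_per_param = 2       # BF16/FP16
gpus = 8                  # largest G7e configuration
memory_per_gpu_gb = 96    # GDDR7 per RTX PRO 6000 Blackwell GPU

weights_gb = params * bytes_per_param / 1e9   # ~600 GB of weights
node_memory_gb = gpus * memory_per_gpu_gb     # 768 GB across the node

print(f"Weights: ~{weights_gb:.0f} GB, node GPU memory: {node_memory_gb} GB")
```

Under those assumptions the weights alone occupy roughly 600 GB of the node's 768 GB, consistent with "up to 300B parameters" leaving only modest headroom for KV cache and batching.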

6 days ago · AWS Machine Learning Blog
Anthropic Launches Claude Design for Non-Designers
Model Release

Anthropic has launched Claude Design, a new product aimed at helping non-designers like founders and product managers create visuals quickly to communicate their ideas. The tool addresses a gap for early-stage teams and individuals who need to share concepts visually but lack design expertise or resources. Claude Design integrates with Anthropic's Claude AI platform, leveraging its capabilities to streamline the visual creation process. The launch reflects growing demand for AI-powered design tools that lower barriers to entry for non-technical users.

7 days ago · TechCrunch AI
Google Splits TPUs Into Training and Inference Chips

Google is splitting its eighth-generation tensor processing units into separate chips optimized for AI training and inference, a shift the company says reflects the rise of AI agents and their distinct computational needs. The training chip delivers 2.8 times the performance of its predecessor at the same price, while the inference processor (TPU 8i) achieves 80% better performance and includes triple the SRAM of the prior generation. Both chips will launch later this year as Google continues its effort to compete with Nvidia in custom AI silicon, though the company is not directly benchmarking against Nvidia's offerings.

5 days ago · Direct