vff — the signal in the noise

Finetuning Multimodal Models with Sentence Transformers

Sentence Transformers, a Python library for embedding and reranker models, now supports training and finetuning multimodal models that handle text, images, audio, and video. The post demonstrates finetuning Qwen/Qwen3-VL-Embedding-2B for Visual Document Retrieval, raising NDCG@10 from the base model's 0.888 to 0.947 and outperforming larger existing models. The training pipeline mirrors the text-only approach, with the model's processor handling multimodal data; the familiar components remain the same: model selection, domain-specific datasets, loss functions, and evaluation tools.
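The pipeline described above follows the standard Sentence Transformers training API. As a rough sketch only: the dataset name, hyperparameters, and output paths below are placeholders (not from the original post), and imports are kept inside the function so the sketch's shape can be checked without the library or model weights available.

```python
def finetune():
    """Sketch of a Sentence Transformers finetuning run for Visual
    Document Retrieval. Dataset name and hyperparameters are
    illustrative placeholders, not values from the original post."""
    from datasets import load_dataset
    from sentence_transformers import (
        SentenceTransformer,
        SentenceTransformerTrainer,
        SentenceTransformerTrainingArguments,
        losses,
    )

    # Base multimodal embedding model; its processor handles image
    # preprocessing during training, just as a tokenizer would for text.
    model = SentenceTransformer("Qwen/Qwen3-VL-Embedding-2B")

    # A (query, positive document image) pair dataset -- placeholder name.
    train_dataset = load_dataset("my-org/visual-doc-retrieval-pairs", split="train")

    # In-batch negatives loss, a common choice for retrieval finetuning.
    loss = losses.MultipleNegativesRankingLoss(model)

    args = SentenceTransformerTrainingArguments(
        output_dir="qwen3-vl-embedding-2b-vdr",
        num_train_epochs=1,
        per_device_train_batch_size=8,
        learning_rate=2e-5,
    )

    trainer = SentenceTransformerTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        loss=loss,
    )
    trainer.train()
    model.save_pretrained("qwen3-vl-embedding-2b-vdr/final")


if __name__ == "__main__":
    finetune()
```

The key point the post makes is that this is the same `SentenceTransformerTrainer` workflow used for text-only models; only the base model and the dataset change.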

TL;DR

  • Sentence Transformers now enables training multimodal embedding and reranker models on custom data for domain-specific tasks
  • Finetuning a 2B parameter vision-language model on Visual Document Retrieval improved performance from 0.888 to 0.947 NDCG@10, beating larger models
  • Training pipeline uses the same SentenceTransformerTrainer as text-only models, with automatic image preprocessing handled by the model's processor
  • Domain-specific finetuning addresses the limitation that general-purpose multimodal models rarely optimize for specialized tasks like document retrieval
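For context on the headline metric: NDCG@10 scores a ranking by summing each retrieved item's relevance, discounted logarithmically by rank position, then normalizing by the score of the ideal ordering. A minimal pure-Python sketch (illustrative, not the benchmark's implementation):

```python
import math


def ndcg_at_k(relevances, k=10):
    """NDCG@k for one query: `relevances` lists relevance grades in the
    order the system ranked the candidates. Gains are discounted by
    log2(rank + 1) and normalized against the ideal (sorted) ranking."""
    def dcg(rels):
        # rank is 0-based here, so the discount is log2(rank + 2)
        return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(rels[:k]))

    ideal = dcg(sorted(relevances, reverse=True))
    if ideal == 0:
        return 0.0
    return dcg(relevances) / ideal


# The single relevant document ranked 2nd out of 10 candidates:
print(ndcg_at_k([0, 1, 0, 0, 0, 0, 0, 0, 0, 0]))  # ≈ 0.6309 (= 1 / log2(3))
```

A perfect ranking scores 1.0, so the reported jump from 0.888 to 0.947 closes roughly half of the base model's remaining gap to ideal.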

Why it matters

Multimodal embedding models are increasingly central to retrieval-augmented generation and semantic search workflows, but off-the-shelf models often underperform on specialized tasks. This post demonstrates that modest-sized models can match or exceed much larger competitors when finetuned on domain data, lowering the barrier to building effective multimodal retrieval systems without massive compute budgets.

Business relevance

For teams building document retrieval, visual search, or multimodal RAG systems, finetuning enables competitive performance with smaller, cheaper models. A 2B parameter model achieving state-of-the-art results on Visual Document Retrieval reduces inference costs and deployment complexity compared to larger alternatives, making multimodal retrieval more accessible to resource-constrained organizations.

Key implications

  • Domain-specific finetuning is a practical path to competitive multimodal performance without scaling to larger models, reducing infrastructure and inference costs
  • The standardized training pipeline across text and multimodal models lowers the learning curve for teams already familiar with Sentence Transformers
  • Visual Document Retrieval performance gains suggest multimodal finetuning is particularly effective for document-heavy tasks involving layout, charts, and tables

What to watch

Monitor whether practitioners adopt this finetuning approach for other multimodal tasks beyond document retrieval, and track whether smaller finetuned models continue to outperform larger general-purpose alternatives in benchmarks. Also watch for community-contributed finetuned models and datasets that emerge on Hugging Face, which could accelerate adoption across different domains.

Related stories

AWS Launches G7e GPU Instances for Cheaper Large Model Inference

AWS has launched G7e instances on Amazon SageMaker AI, powered by NVIDIA RTX PRO 6000 Blackwell GPUs with 96 GB of GDDR7 memory per GPU. The instances deliver up to 2.3x inference performance compared to previous-generation G6e instances and support configurations from 1 to 8 GPUs, enabling deployment of large language models up to 300B parameters on the largest 8-GPU node. This represents a significant upgrade in memory bandwidth, networking throughput, and model capacity for generative AI inference workloads.

about 11 hours ago · AWS Machine Learning Blog
Anthropic Launches Claude Design for Non-Designers

Anthropic has launched Claude Design, a new product aimed at helping non-designers like founders and product managers create visuals quickly to communicate their ideas. The tool addresses a gap for early-stage teams and individuals who need to share concepts visually but lack design expertise or resources. Claude Design integrates with Anthropic's Claude AI platform, leveraging its capabilities to streamline the visual creation process. The launch reflects growing demand for AI-powered design tools that lower barriers to entry for non-technical users.

1 day ago · TechCrunch AI
Phononic Eyes $1.5B+ Valuation in AI Data Center Cooling Play

Phononic, a 17-year-old Durham, North Carolina semiconductor company that makes cooling components for AI data center servers, is in talks with potential buyers at a valuation of at least $1.5 billion, with some buyers expressing interest above $2 billion. The company has engaged investment bank Lazard to evaluate its options since early 2026. This valuation would more than double its last private funding round, reflecting broader investor appetite for industrial suppliers tied to AI infrastructure demand. Phononic may also choose to raise additional capital instead of pursuing a sale.

about 12 hours ago · The Information