Pet Camera Startup Cuts Inference Costs with AWS Inferentia2

Tomofun, maker of the Furbo pet camera, migrated its vision-language model inference from GPU-based EC2 instances to AWS Inferentia2 chips to reduce costs while maintaining real-time pet behavior detection at scale. The company deployed the BLIP model on Inf2 instances using the Neuron SDK, allowing it to handle continuous inference workloads across hundreds of thousands of devices without rewriting its existing PyTorch code. The architecture uses a two-tier Auto Scaling setup that can route requests to either GPU or Inferentia2 backends in real time, providing both cost efficiency and high availability.
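As a rough illustration of what that Neuron SDK workflow can look like, the sketch below compiles BLIP's vision encoder for Inferentia2 with torch-neuronx. The checkpoint name, input resolution, and the choice to trace only the vision encoder are assumptions for illustration; the case study does not publish Tomofun's actual model configuration.

```python
# Minimal sketch: compiling BLIP's vision encoder for Inferentia2 with the
# Neuron SDK (torch-neuronx). Checkpoint and input shape are illustrative
# assumptions, not details from the case study.
import torch
import torch_neuronx
from transformers import BlipForConditionalGeneration


class VisionEncoderWrapper(torch.nn.Module):
    """Wraps BLIP's vision encoder so tracing sees plain tensor outputs."""

    def __init__(self, vision_model):
        super().__init__()
        self.vision_model = vision_model

    def forward(self, pixel_values):
        # return_dict=False yields a tuple of tensors, which tracing requires
        return self.vision_model(pixel_values, return_dict=False)


model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)
model.eval()

# Neuron compiles for a fixed input shape, so camera frames would be
# resized to this resolution before inference (384x384 is BLIP's default).
example_pixels = torch.randn(1, 3, 384, 384)

neuron_encoder = torch_neuronx.trace(
    VisionEncoderWrapper(model.vision_model), example_pixels
)

# The compiled artifact behaves like a TorchScript module and can be saved
# here, then loaded on an Inf2 instance with torch.jit.load.
torch.jit.save(neuron_encoder, "blip_vision_neuron.pt")
```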
TL;DR
- Tomofun switched pet behavior detection inference from GPUs to AWS Inferentia2 to cut costs on always-on workloads
- BLIP vision-language model was compiled using Neuron SDK and deployed on EC2 Inf2 instances without major code rewrites
- Two-tier Auto Scaling architecture allows real-time switching between GPU and Inferentia2 backends for flexibility and availability
- System processes image streams from hundreds of thousands of Furbo cameras through load-balanced API and inference layers (a serving sketch follows this list)
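On the serving side, the compiled artifact loads like any TorchScript module on an Inf2 instance. The file name, frame source, and preprocessing below are assumed for illustration rather than taken from the case study.

```python
# Minimal serving sketch on an Inf2 instance: load the compiled encoder and
# run it on an incoming camera frame. File name, image source, and
# preprocessing are illustrative assumptions.
import torch
import torch_neuronx  # registers the Neuron runtime so the artifact can execute
from PIL import Image
from transformers import BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
neuron_encoder = torch.jit.load("blip_vision_neuron.pt")


def embed_frame(path: str) -> torch.Tensor:
    """Resize a camera frame to the traced input shape and return image features."""
    image = Image.open(path).convert("RGB")
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    # The traced encoder returns a tuple; the first element is the
    # last_hidden_state used by downstream behavior-detection heads.
    return neuron_encoder(pixel_values)[0]


features = embed_frame("frame.jpg")
print(features.shape)
```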
Why it matters
This case demonstrates a practical path for cost-optimizing inference at scale without sacrificing model capability or availability. As vision-language models become standard in production applications, the ability to run them efficiently on purpose-built accelerators like Inferentia2 becomes critical for companies managing continuous, high-volume inference workloads.
Business relevance
For operators running always-on inference services, this shows how switching to specialized hardware can significantly reduce operational costs while maintaining performance. Founders building real-time AI features at scale should consider that GPU-based inference may not be the most cost-effective path, and that hardware-specific optimization tools like Neuron SDK can enable such transitions without major architectural rewrites.
Key implications
- Purpose-built AI accelerators like Inferentia2 can deliver cost advantages for continuous inference workloads that don't require peak GPU throughput
- Vision-language models can be optimized for specialized hardware using SDK tools without requiring developers to abandon existing PyTorch codebases
- Multi-backend inference architectures allow companies to balance cost and performance by routing requests dynamically, reducing lock-in to any single hardware type (a routing sketch follows this list)
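One way to implement that kind of dynamic routing on AWS is weighted target groups on an Application Load Balancer, shifting traffic between GPU and Inferentia2 backends at runtime. The ARNs, weights, and region below are placeholders; the case study does not describe Tomofun's actual routing mechanism.

```python
# Hedged sketch: shifting inference traffic between GPU and Inferentia2
# backends using weighted target groups on an Application Load Balancer.
# All ARNs and the region are placeholders, not values from the case study.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/..."        # placeholder
GPU_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/gpu/..."   # placeholder
INF2_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/inf2/..." # placeholder


def set_backend_weights(inf2_weight: int, gpu_weight: int) -> None:
    """Adjust the share of inference traffic each backend receives."""
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[
            {
                "Type": "forward",
                "ForwardConfig": {
                    "TargetGroups": [
                        {"TargetGroupArn": INF2_TG_ARN, "Weight": inf2_weight},
                        {"TargetGroupArn": GPU_TG_ARN, "Weight": gpu_weight},
                    ]
                },
            }
        ],
    )


# Example: send most traffic to Inf2 for cost, keep GPUs warm as a fallback.
set_backend_weights(inf2_weight=90, gpu_weight=10)
```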
What to watch
Monitor whether other pet-tech and IoT companies adopt similar hardware-switching strategies as inference costs become a larger operational expense. Also track how widely Neuron SDK adoption spreads beyond individual case studies like this one, and whether competing accelerator vendors develop comparable optimization tooling for vision-language models.