MIT's AromaGen Generates Custom Scents from Text Using LLMs

Researchers at MIT and collaborators have developed AromaGen, an AI-powered wearable that generates custom scents from text or image inputs using a multimodal language model. The system maps semantic descriptions to mixtures of 12 base odorants released through a neck-worn dispenser, and users can refine results through natural language feedback. In a 26-person study, AromaGen matched human-composed aromas in zero-shot generation and significantly outperformed them after iterative refinement, achieving median similarity scores of 8/10 to real food scents while reducing perceived artificiality.
TL;DR
- AromaGen uses multimodal LLMs to generate custom aromas from free-form text or visual inputs in real time
- The system combines 12 carefully selected base odorants and allows iterative refinement through natural language feedback
- User study results show the system matches human-composed mixtures in zero-shot generation and surpasses them after refinement cycles
- Sidesteps a major constraint in olfactory AI, the scarcity of large-scale olfactory datasets, by leveraging latent knowledge already encoded in LLMs
Why it matters
This work demonstrates a practical application of multimodal LLMs to a domain where AI has been severely limited by data scarcity and hardware constraints. By mapping semantic inputs to structured odorant mixtures rather than attempting to generate novel scents from scratch, AromaGen sidesteps the need for massive olfactory datasets while showing that language models contain sufficient latent knowledge to guide scent composition. The result is a working system that bridges the gap between AI capability and real-world sensory experience.
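To make the mapping concrete, the sketch below shows one way an LLM-driven pipeline could turn a model's proposed mixture into dispenser-ready weights over a fixed 12-odorant palette. The odorant names, JSON response format, and parsing logic are illustrative assumptions, not details from the paper; the actual LLM call is mocked.

```python
import json

# Hypothetical 12-odorant palette (names are illustrative, not from the paper).
PALETTE = [
    "citrus", "floral", "woody", "minty", "smoky", "sweet",
    "earthy", "spicy", "green", "fruity", "creamy", "marine",
]

def parse_mixture(llm_json: str) -> dict:
    """Convert an LLM's JSON mixture proposal into normalized palette weights.

    Assumes the model was prompted (elsewhere) to return {"odorant": weight, ...}.
    Unknown odorant names are dropped, negative weights are clamped to zero,
    and the result is normalized so the weights sum to 1.
    """
    raw = json.loads(llm_json)
    weights = {name: max(0.0, float(raw.get(name, 0.0))) for name in PALETTE}
    total = sum(weights.values())
    if total == 0:
        raise ValueError("LLM proposed an empty mixture")
    return {name: w / total for name, w in weights.items()}

# Mock LLM response for a prompt like "fresh lemon tart"; a real system would
# call a multimodal model here and pass its JSON output through the same parser.
mock_response = '{"citrus": 0.6, "sweet": 0.3, "creamy": 0.2, "vanilla_pod": 1.0}'
mix = parse_mixture(mock_response)
```

Constraining the model to a closed palette and normalizing its output is what lets a language model guide composition without any olfactory training data: the LLM only has to rank familiar odor concepts, and the hardware handles the rest.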
Business relevance
Olfactory interfaces represent an emerging category in immersive technology and consumer hardware, with applications in food, wellness, entertainment, and remote communication. AromaGen's approach of using LLMs to enable general-purpose aroma generation from text or images could lower barriers to entry for companies building scent-enabled products, reducing dependence on fixed cartridge libraries or manual composition. The iterative refinement loop also suggests a model for personalized scent experiences, relevant to luxury goods, hospitality, and metaverse applications.
Key implications
- Multimodal LLMs can effectively encode domain-specific knowledge even in data-scarce modalities, opening pathways for AI in other sensory or specialized domains
- Wearable olfactory interfaces are moving from prototype to functional systems, potentially enabling new interaction paradigms in AR/VR and physical spaces
- The ability to refine outputs through natural language feedback in real time suggests a template for interactive AI systems in non-visual domains
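The natural-language refinement loop could plausibly be implemented as the LLM translating feedback into per-odorant adjustments that are applied and renormalized, as in this minimal sketch. The adjustment dict, example mixture, and function shape are assumptions for illustration, not the paper's method.

```python
def refine(mixture: dict, adjustments: dict) -> dict:
    """Apply multiplicative per-odorant adjustments, then renormalize to sum to 1.

    `adjustments` stands in for what an LLM might emit for feedback such as
    "less sweet, more citrus" (hypothetical format, not from the paper).
    """
    adjusted = {name: w * adjustments.get(name, 1.0) for name, w in mixture.items()}
    total = sum(adjusted.values())
    return {name: w / total for name, w in adjusted.items()}

# Current mixture for "lemon tart", then one refinement round on the
# feedback "less sweet, more citrus":
mixture = {"citrus": 0.5, "sweet": 0.4, "creamy": 0.1}
refined = refine(mixture, {"sweet": 0.5, "citrus": 1.5})
```

Because each round only rescales and renormalizes existing weights, the loop stays cheap enough to run between dispenses, which is what makes real-time iterative refinement feasible.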
What to watch
Monitor whether AromaGen or similar systems move beyond research into commercial products, and track adoption in immersive media, food tech, or wellness applications. Watch for expansion of the base odorant palette and whether the approach scales to more complex or novel scent compositions. Also observe whether other sensory modalities (taste, texture) adopt similar LLM-based generation strategies.
vff Briefing



