At the Augmented World Expo today, Snap teased a real-time, on-device image diffusion model that can generate vivid AR experiences. The company also introduced generative AI tools for AR creators.
At the event, Snap CTO Bobby Murphy discussed an AI model that runs instantly on smartphones, re-rendering frames in real time as guided by text prompts. That speed opens creative possibilities that slower, cloud-based models cannot match.
Snap aims to bring the model to Lenses by the end of the year, so users should soon see AR effects with highly realistic generated imagery that responds to movement in real time.
An updated Lens Studio makes AR building even easier: creators get new generative tools for crafting face filters, 3D objects, textures and more, sometimes in minutes rather than weeks.
The latest version also includes an AI assistant that answers creators' questions as they work. That guidance, combined with faster creation, should encourage more innovative Lenses.
Snap also showed demos of the new generative capabilities, such as creating fully customizable characters from a text description. Its vision of lifelike, mobile-powered AR points to an immersive future.