WITT: AI-Generated Visuals

As part of the VMX Project, we wanted to explore a bold question:
If we equip artists with AI tools and a bit of training, how easily can they generate their own live visuals—without needing a technical background?

To find out, we teamed up with WITT, an electronic music act and student project from PXL-Music. The goal was to let the artist take full creative control over the visual aspect of her performance—using AI as a creative partner rather than relying on external VJ or motion design support.

We introduced WITT to a set of AI tools and gave guidance on how to use them effectively. What followed was a striking result:
Within just a few days, WITT was able to generate the entire visual identity for her live show—a rich, stylistically coherent set of visuals that evolved with the energy and emotion of her music.

This experiment showed that generative AI is not only powerful, but increasingly artist-friendly. It opens the door for more musicians to create their own custom visual worlds—even with limited time or technical skills.

Live Camera Meets AI: StreamDiffusion in TouchDesigner

As a secondary exploration, we took things a step further:
What happens when live camera input becomes part of the visual feedback loop?

Using StreamDiffusion in TouchDesigner, we set up a system where a roaming camera could capture real-time footage of the audience during WITT’s performance. This footage was then processed by AI, generating visuals in the style defined by the artist. Importantly, we could blend between the raw camera input and the AI-transformed visuals, giving the audience a clear sense that they were influencing what appeared on screen.
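To make the blend concrete, here is a minimal, self-contained Python sketch, assuming OpenCV for camera capture. The `stylize()` function is a hypothetical placeholder for the StreamDiffusion img2img pass, which in the real setup runs inside TouchDesigner; there, one natural way to implement the same crossfade is a Cross TOP between the camera feed and the StreamDiffusion output.

```python
import cv2

def stylize(frame):
    """Hypothetical stand-in for the StreamDiffusion img2img pass.

    The real transformation runs on the GPU inside TouchDesigner;
    inverting the colors here just keeps the sketch runnable.
    """
    return cv2.bitwise_not(frame)

def main():
    cap = cv2.VideoCapture(0)   # the roaming camera
    blend = 0.5                 # 0.0 = raw camera, 1.0 = fully AI-styled

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break

        styled = stylize(frame)
        # Crossfade between the raw input and the AI-transformed frame,
        # so the audience can recognize themselves inside the visuals.
        out = cv2.addWeighted(frame, 1.0 - blend, styled, blend, 0.0)
        cv2.imshow("live blend (sketch)", out)

        key = cv2.waitKey(1) & 0xFF
        if key == ord("q"):
            break
        elif key == ord("a"):   # lean toward the raw camera
            blend = max(0.0, blend - 0.05)
        elif key == ord("d"):   # lean toward the AI visuals
            blend = min(1.0, blend + 0.05)

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```

Driving `blend` from a controller or an animated parameter instead of the keyboard turns the same idea into something performable during the show.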

This created a uniquely responsive performance environment—one where the visuals became a dialogue between artist, audience, and machine.

Generating AI Visuals from Simulation

Building on this idea, we tried a similar approach using fluid simulations instead of camera input.

These simulations, created in TouchDesigner, became dynamic input sources for StreamDiffusion. The result? AI-generated visuals that evolved based on the shapes, speed, and flow of the simulated fluids—a kind of digital choreography between physical logic and visual abstraction.
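The same pattern works with any frame source. Below is a hedged, self-contained sketch of the idea: a swirling dye field built with NumPy and OpenCV stands in for the TouchDesigner fluid simulation, and `stylize()` is again a hypothetical placeholder for the StreamDiffusion pass.

```python
import cv2
import numpy as np

def stylize(frame):
    # Hypothetical placeholder for the StreamDiffusion img2img pass.
    return cv2.bitwise_not(frame)

H, W = 512, 512
rng = np.random.default_rng(0)
dye = np.zeros((H, W), np.float32)

# A divergence-free swirl field: rotate the gradient of a smooth random
# potential by 90 degrees, then normalize it to a few pixels per step.
pot = cv2.GaussianBlur(rng.standard_normal((H, W)).astype(np.float32), (0, 0), 32)
gy, gx = np.gradient(pot)
vx, vy = gy, -gx
scale = 3.0 / (np.hypot(vx, vy).max() + 1e-8)
vx, vy = vx * scale, vy * scale

ys, xs = np.mgrid[0:H, 0:W].astype(np.float32)
map_x = (xs - vx).astype(np.float32)   # semi-Lagrangian backtrace
map_y = (ys - vy).astype(np.float32)

while True:
    # Inject dye at the center, advect it along the swirl, let it fade.
    cv2.circle(dye, (W // 2, H // 2), 12, 1.0, -1)
    dye = cv2.remap(dye, map_x, map_y, cv2.INTER_LINEAR) * 0.995

    frame = cv2.applyColorMap((dye.clip(0, 1) * 255).astype(np.uint8),
                              cv2.COLORMAP_INFERNO)
    cv2.imshow("fluid -> AI (sketch)", stylize(frame))
    if cv2.waitKey(16) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
```

The key point is that the simulation's motion, not a change in prompt, is what steers the AI output from frame to frame.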

This use case showed how AI can be driven by more than just text prompts or video feeds. Motion, flow, and even chaos can become creative input.

All tracks used in the immersive live show belong to WITT and are available on Spotify.

This article belongs to the following project:

Virtual Music Experiences
