Basic Info

Momentum detects the audience's gestures and transforms them into surreal visuals while they dance to energetic music playing in the background.

I made everything in TouchDesigner instead of p5.js, so here's the link to the project file (models not included).

Description

I'm planning to build a body-gesture-based audio-visual performance that reacts in real time as the audience dances to the music. I'll use a combination of machine learning techniques and models, including body pose detection, body segmentation, text generation, and image generation, to create a surreal, chaotic, yet playful experience. TouchDesigner will be my main platform for running these models and generating the visuals.
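The writeup doesn't detail the wiring between the models and TouchDesigner, but one common pattern is to run pose detection in a separate Python process and stream landmarks into TouchDesigner over OSC (received by an OSC In CHOP). A minimal sketch using MediaPipe and python-osc; the port and address pattern are placeholder assumptions, not from the project:

```python
# Sketch: webcam pose detection streamed to TouchDesigner over OSC.
# Assumes: pip install mediapipe opencv-python python-osc
# The port (7000) and OSC address pattern are arbitrary placeholders.
import cv2
import mediapipe as mp
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)  # TouchDesigner: OSC In CHOP on port 7000
pose = mp.solutions.pose.Pose(model_complexity=1)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV delivers BGR
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        for i, lm in enumerate(results.pose_landmarks.landmark):
            # One OSC message per landmark: normalized x, y plus MediaPipe's z estimate
            client.send_message(f"/pose/{i}", [lm.x, lm.y, lm.z])
cap.release()
```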

Inspiration

MANS O: 25 fps real-time mirroring and tracking at Videocittà, inside a gasometer. Visuals: @sandufi. Music & performance: @mans_o. Technical assist: @pura.cadera.

Audience & Context


Proposal

Body Gesture

Azure Kinect DK: camera input for the body-gesture-based audio-visual performance.
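Inside TouchDesigner, the Kinect Azure CHOP exposes tracked joints as channels that scripts can sample. A rough sketch of reading two joints and driving a parameter from them; the operator path and channel names ('kinectazure1', 'p1/hand_l:tx', ...) are assumptions to check against the actual CHOP's channel list:

```python
# TouchDesigner Python sketch: read hand positions from a Kinect Azure CHOP.
# Operator name and channel naming are assumptions; verify in the CHOP viewer.
def get_joint(chop, joint, player='p1'):
    # Returns (x, y, z) for one tracked joint of one body, or None if missing
    chans = [chop.chan(f'{player}/{joint}:t{axis}') for axis in 'xyz']
    if any(c is None for c in chans):
        return None
    return tuple(c.eval() for c in chans)

kinect = op('kinectazure1')           # Kinect Azure CHOP
left = get_joint(kinect, 'hand_l')    # left hand position in camera space
right = get_joint(kinect, 'hand_r')   # right hand position
if left and right:
    # e.g. drive a visual parameter from the distance between the hands
    spread = sum((a - b) ** 2 for a, b in zip(left, right)) ** 0.5
    op('constant1').par.value0 = spread
```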

Angle from camera
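"Angle from camera" presumably means deriving joint angles from the tracked camera-space positions. A generic sketch: the angle at a middle joint (e.g. the elbow) from three points, via the dot product:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at point b, formed by points a-b-c (e.g. shoulder-elbow-wrist)."""
    a, b, c = map(np.asarray, (a, b, c))
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: a slightly bent arm
print(joint_angle((0, 1, 0), (0, 0, 0), (1, 0.2, 0)))  # ~79 degrees
```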

Fake depth map
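The writeup doesn't say how the fake depth map is produced; if it's estimated from a plain RGB frame rather than read from the Kinect's depth sensor, a monocular estimator such as MiDaS is one option, and its normalized output can serve as depth conditioning for the image model. A sketch (the input path is a placeholder):

```python
# Sketch: monocular ("fake") depth from an RGB frame with MiDaS (one possible approach).
# Assumes: pip install torch opencv-python timm; "frame.png" is a placeholder input.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))                       # (1, H', W') inverse depth
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().numpy()

# Normalize to 0-255 for use as a depth-map image (e.g. ControlNet conditioning)
depth = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth.png", depth)
```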

Image Generation

StreamDiffusion
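StreamDiffusion trades most denoising steps for real-time throughput, which fits a per-frame img2img loop over the dancer's camera feed. A rough sketch adapted from the library's README; the model ID, prompt, and frame helpers (get_camera_frame, display) are placeholder assumptions:

```python
# Sketch of a StreamDiffusion img2img loop, adapted from the project's README.
# Model ID, prompt, and the two frame helpers are placeholder assumptions.
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline
from streamdiffusion import StreamDiffusion
from streamdiffusion.image_utils import postprocess_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5").to(device=torch.device("cuda"), dtype=torch.float16)

# Few-step denoising: t_index_list picks which timesteps actually run
stream = StreamDiffusion(pipe, t_index_list=[32, 45], torch_dtype=torch.float16)
stream.load_lcm_lora()
stream.fuse_lora()
stream.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd").to(
    device=pipe.device, dtype=pipe.dtype)

stream.prepare(prompt="a surreal chrome dancer dissolving into liquid light")

init_frame = get_camera_frame()   # hypothetical helper returning a PIL image
for _ in range(2):                # warm-up passes to fill the batched pipeline
    stream(init_frame)

while True:
    x = stream(get_camera_frame())
    out = postprocess_image(x, output_type="pil")[0]
    display(out)                  # hypothetical display helper
```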

ControlNet
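ControlNet is what lets the generated image follow the dancer's pose or depth rather than drift freely. A minimal diffusers sketch using an OpenPose-conditioned SD 1.5 ControlNet; model IDs, prompt, and file paths are placeholder assumptions:

```python
# Sketch: pose-conditioned generation with ControlNet via diffusers.
# Model IDs, prompt, and paths are placeholder assumptions.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

# Conditioning image: an OpenPose skeleton rendered from the tracked body
pose_image = load_image("pose.png")  # placeholder path

image = pipe(
    "a surreal chrome figure dancing in fog",
    image=pose_image,
    num_inference_steps=20,
).images[0]
image.save("out.png")
```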

Text Generation
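The writeup doesn't specify a text-generation model; one lightweight option is to generate surreal prompt fragments with the Hugging Face transformers pipeline and feed them to the image model. A sketch, with the model choice as a placeholder:

```python
# Sketch: generating surreal prompt text for the image model.
# The model ID is a placeholder; any small causal LM works the same way.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
seed = "a dancer's silhouette melting into"
out = generator(seed, max_new_tokens=24, do_sample=True, temperature=1.1)
prompt = out[0]["generated_text"]
print(prompt)  # becomes the prompt for StreamDiffusion / ControlNet
```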