Momentum detects the audience’s gestures and transforms them into something surreal as they dance to the energetic music playing in the background.
I made everything in TouchDesigner instead of p5.js, so here’s the link to the project file (models not included).
I’m planning on building a body-gesture-based audio-visual performance that reacts in real time to the audience as they dance to the music. I’ll be using a combination of machine learning models such as body pose detection, body segmentation, text generation, and image generation to create a surreal, chaotic, yet playful experience. I’ll mainly be using TouchDesigner as my platform to run all these models and generate the visuals.
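As a rough sketch of the pose-detection piece, this is how the skeleton tracking could be prototyped outside TouchDesigner with MediaPipe and OpenCV before wiring the landmarks into the network. The webcam capture and the printed landmark are stand-ins for illustration, not the actual Kinect pipeline used in the piece:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)  # webcam as a stand-in for the Kinect color stream
with mp_pose.Pose(model_complexity=1) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Landmarks are normalized to [0, 1] in image coordinates;
            # these values could be sent to TouchDesigner over OSC
            nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
            print(f"nose: x={nose.x:.2f} y={nose.y:.2f}")
cap.release()
```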
Azure Kinect DK: camera input
Angle from camera
Fake Depth map
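A minimal sketch of how a fake depth map could be derived from the body mask, assuming the segmentation step exports a white-on-black person mask; the vertical gradient is just one way to fake depth-like values for conditioning an image model, and the file names are placeholders:

```python
import cv2
import numpy as np

# Load an assumed person mask exported from the segmentation step
mask = cv2.imread("person_mask.png", cv2.IMREAD_GRAYSCALE)
h, w = mask.shape

# Multiply the mask by a vertical gradient so the lower body reads as "closer"
gradient = np.tile(np.linspace(255, 80, h, dtype=np.float32)[:, None], (1, w))
fake_depth = ((mask / 255.0) * gradient).astype(np.uint8)
cv2.imwrite("fake_depth.png", fake_depth)
```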
ControlNet stabilizes the body gesture of the output image with a segmentation mask from my camera.
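For reference, here is a minimal offline sketch of that conditioning step using the diffusers library and the public sd-controlnet-seg checkpoint, rather than whatever integration the live TouchDesigner version runs; the prompt and file names are placeholders:

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Segmentation-conditioned ControlNet on top of Stable Diffusion 1.5
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Mask exported from the camera/segmentation pipeline (placeholder file)
seg_mask = load_image("segmentation_mask.png")

image = pipe(
    "a surreal dancer made of liquid light",
    image=seg_mask,
    num_inference_steps=20,
).images[0]
image.save("output.png")
```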
Installation Prototype in 3D