Fu Manchu - Giving TouchDesigner a Go

2018-02-09 - posted in: computer-graphics, live-visuals, music

Recently I was a bit frustrated with Jitter’s performance when it came to particle systems, and with the complexity of merging 2D and 3D imagery, so I decided to take a closer look at TouchDesigner, which I know is heavily used, and loved, by my colleagues in the live visuals sector.

So, first off, here’s the result:

To be sure, there are tons of more sophisticated TD animations out there, but I still find it intriguing how far I got after only a week of trying out TD. So here’s a little bit of documentation about my fail-fast, deliver-early approach to making a music visualization for Fu Manchu.

So, for music visualization we need some sort of audio analysis, right?

fu manchu audio analysis

We get that using an AudioAnalysis CHOP, and afterwards use a Select CHOP to pick the first magnitude channel from it (audio_spectral_magnitude). I also use a lowpass Filter CHOP with an extremely low cutoff (1 Hz) as an envelope follower.
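Outside of TD, the envelope-follower idea boils down to a one-pole lowpass on the rectified signal. Here's a minimal sketch in plain Python (not actual TouchDesigner code; the function and parameter names are my own, and the 60 Hz frame rate is an assumption):

```python
import math

def envelope_follower(samples, cutoff_hz=1.0, sample_rate=60.0):
    """Smooth a stream of magnitude values with a one-pole lowpass,
    analogous to a Filter CHOP with a very low cutoff."""
    # Smoothing coefficient derived from the cutoff frequency.
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    env = 0.0
    out = []
    for x in samples:
        env += alpha * (abs(x) - env)  # slowly track the rectified input
        out.append(env)
    return out
```

The lower the cutoff, the more sluggishly the envelope follows the input — which is exactly what makes it usable as a smooth control signal.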

The container to the right computes a simple spectral flux descriptor:

fu manchu spectral flux

We feed the spectrum back on itself to get a one-window-sized delay, and again select the first magnitude channel for simplicity. We then subtract the two spectra using a Math CHOP (Combine CHOPs: Subtract) and take the absolute value (Channel Post OP: Positive). Finally, with an Analyze CHOP, we sum the values in a frame, i.e. get the flux.
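Conceptually, that CHOP chain computes the summed absolute difference between the current and the previous magnitude spectrum. A plain-Python sketch of the same math (my own function name, not TD code):

```python
def spectral_flux(prev_frame, cur_frame):
    """Sum of absolute bin-wise differences between two magnitude
    spectra - large when the spectral content changes quickly."""
    return sum(abs(c - p) for p, c in zip(prev_frame, cur_frame))
```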

Using a CHOP to TOP, we fetch the current spectral content and transform it to sit in the lower-right corner of a 640x640 square. What’s more, I set the input smoothness of every TOP to nearest pixel to preserve that edgy digital look:

fu manchu spectral line
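The coordinate transform amounts to mapping the spectrum bins onto a sub-region of the square canvas. A rough Python sketch (function name, `region` fraction, and the y-axis convention are my assumptions, not taken from the TD network):

```python
def spectrum_to_points(magnitudes, size=640, region=0.5):
    """Map N magnitude bins to (x, y) pixel coordinates so the spectral
    line sits in the lower-right part of a size x size canvas.
    y grows downward, so a larger magnitude gives a smaller y."""
    n = len(magnitudes)
    x0 = size * (1.0 - region)                      # left edge of the region
    pts = []
    for i, m in enumerate(magnitudes):
        x = x0 + (i / max(n - 1, 1)) * size * region
        y = size - m * size * region                 # baseline at the bottom
        pts.append((int(x), int(y)))
    return pts
```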

A Cache TOP stores 100 frames of this stream, which is read out at offsets 0, -16, -32, -48, -64, and -80. Those Cache Select TOPs are then translated up by certain procedural values - see below - then darkened a bit and composited using a screen operation.

fu manchu cache select
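The screen operation brightens without hard clipping, which is why it works well for layering several darkened copies. A plain-Python sketch of the cache-plus-screen idea (names and the `darken` factor are hypothetical; pixel values are floats in [0, 1], and the newest frame sits at the end of the history list):

```python
def screen(a, b):
    """Screen blend: 1 - (1-a)(1-b); brightens, never exceeds 1."""
    return 1.0 - (1.0 - a) * (1.0 - b)

def composite_delays(history, delays, darken=0.8):
    """Screen together frames read from a history buffer at the given
    non-positive delay offsets (0 = newest), darkening each first."""
    out = 0.0
    for d in delays:
        out = screen(out, history[d - 1] * darken)  # d=0 -> newest frame
    return out
```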

The result is several spectral lines moving according to the following chain of CHOPs: basically a slow ramp with a superimposed sine wave - giving the accelerating/braking effect - read out at 4 different delay times:

fu manchu upward movement
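The ramp-plus-sine motion, read at staggered delays, can be sketched like this in plain Python (all parameter values here are made up for illustration, not taken from the patch):

```python
import math

def line_offsets(t, num_lines=4, delay=0.5, speed=0.1, wobble=0.05, freq=0.5):
    """Vertical offset of each spectral line at time t: a slow linear
    ramp plus a sine wobble, sampled at a per-line delay."""
    offsets = []
    for i in range(num_lines):
        tt = t - i * delay                  # staggered read-out time
        offsets.append(speed * tt + wobble * math.sin(2 * math.pi * freq * tt))
    return offsets
```

Because every line reads the same signal at a different delay, they all accelerate and brake in the same pattern, just phase-shifted against each other.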

Afterwards the picture is displaced based on a ramp CHOP. The displacement amount is tied to the envelope follower mentioned above:

fu manchu displace ramp
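The idea of an audio-reactive displacement can be sketched as a per-row pixel shift whose amount is the product of the ramp value and the envelope (plain Python, hypothetical names and `max_shift`, wrap-around behavior assumed):

```python
def displace_row(row, ramp_value, envelope, max_shift=8):
    """Shift a row of pixels horizontally by an amount proportional
    to ramp * envelope, wrapping around at the edges."""
    shift = int(round(max_shift * ramp_value * envelope))
    if shift == 0:
        return list(row)
    return list(row[-shift:]) + list(row[:-shift])
```

When the envelope is near zero the image stays put; loud passages push the shift toward `max_shift`.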

Last in the processing chain is a simple resampling step. I boost the image with a Level TOP while downsampling it to a quarter of the resolution, then make it grayscale. This grayscale image is then used to matte the original over the resampled content.

fu manchu resample matte
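The two halves of that step — nearest-pixel downsampling and matte-based blending — look roughly like this in plain Python, treating images as 2D lists of grayscale floats (function names are mine; the blend formula is the standard linear matte, assumed to match what the Matte TOP does):

```python
def downsample(img, factor=4):
    """Nearest-pixel downsample a 2D grayscale image and re-expand it
    to the original size, giving the blocky quarter-resolution look."""
    h, w = len(img), len(img[0])
    return [[img[(y // factor) * factor][(x // factor) * factor]
             for x in range(w)] for y in range(h)]

def matte_over(original, resampled, matte):
    """Per-pixel blend: matte=1 keeps the original, matte=0 shows the
    resampled content."""
    return [[m * o + (1.0 - m) * r
             for o, r, m in zip(orow, rrow, mrow)]
            for orow, rrow, mrow in zip(original, resampled, matte)]
```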

As for the 3D scene setup, it’s just the basic stuff you’ll find in any tutorial: a geo container, a distant light, a camera, and a Render TOP to connect them. I used a simple Phong shader with its emit map set to the output of the audio analysis imagery chain from above.

fu manchu phong emit

There is one final tweak: a displacement of the rendered stream based on a Perlin noise TOP, scaled by the spectral flux computed above.

fu manchu glitch
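The control logic of that last glitch reduces to a single product: a noise value scaled by the flux, so the stronger the spectral change, the harder the image jumps. A minimal Python sketch (function name, `strength`, and the uniform-random stand-in for Perlin noise are my assumptions):

```python
import random

def glitch_shift(flux, noise=None, strength=0.02):
    """Displacement amount in UV units: noise in [-1, 1] scaled by the
    spectral flux. A uniform random value stands in for Perlin noise."""
    if noise is None:
        noise = random.uniform(-1.0, 1.0)
    return strength * flux * noise
```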