Okay I found out what one would have to modify, but I can't extract the information from the music :/
For the eyes, put a TransformMesh between Disc and Mesh with Scale.X = 0.5 and Rot.X changing between, I think, -70 and 50.
For the mouth, change the shader line
v.y = (v.y*sin(v.x*PI+PI/2)+sin(v.x*PI+PI/2))*in1;
in both shaders (VS_Program) to
v.y = (v.y*sin(v.x*PI+PI/2)+sin(v.x*PI+PI/2))*in1 + cos(v.x*PI)*K;
with K ranging, I think, from -0.25 (very happy) to 0.25 (very sad).
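A quick sanity check of that bend term (Python here just to verify the math on the CPU, not actual shader code): at the mouth corners (v.x = ±1) the added term cos(v.x*PI)*K equals -K, while at the center (v.x = 0) it equals +K, so a negative K lifts the corners relative to the center, i.e. a smile.

```python
import math

PI = math.pi

def mouth_y(v_x, v_y, in1, K):
    # The modified shader line: original curve plus the cos(v.x*PI)*K bend
    return (v_y * math.sin(v_x * PI + PI / 2)
            + math.sin(v_x * PI + PI / 2)) * in1 + math.cos(v_x * PI) * K

# With K = -0.25 ("very happy") the corners rise and the center drops:
corner_neutral = mouth_y(1.0, 0.0, 1.0, 0.0)    # -1.0
corner_happy   = mouth_y(1.0, 0.0, 1.0, -0.25)  # -0.75, corner lifted
center_happy   = mouth_y(0.0, 0.0, 1.0, -0.25)  #  0.75, center lowered
```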
Now I only have to find out how to extract Rot.X and K from the music. If you use beat(nomusic) in an expression, what's the range?
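If the music signal can be normalized to 0..1 (the actual range of beat(nomusic) is exactly the open question here), the mapping to the two parameters would just be linear interpolation. A rough sketch, with the mapping directions being my guess:

```python
def lerp(a, b, t):
    """Linear interpolation; t is expected in [0, 1]."""
    return a + (b - a) * t

def mouth_k(pitch01):
    # low pitch -> K = 0.25 (very sad), high pitch -> K = -0.25 (very happy)
    return lerp(0.25, -0.25, pitch01)

def eye_rot_x(loudness01):
    # quiet -> Rot.X = 50 (^-shaped), loud -> Rot.X = -70 (v-shaped);
    # the direction of this mapping is an assumption
    return lerp(50.0, -70.0, loudness01)
```

Whatever beat(nomusic) actually returns would first have to be rescaled into that [0, 1] input.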
Good ideas! I'll see what I can do
Looks cool :D
Would it be possible to bend the corners of the mouth according to, e.g., the music's pitch, so low pitch -> sad face, high pitch -> happy face? (You could also vary the eye shape based on loudness: loud music -> v-shaped, like in >:-( or >:-), quiet music -> ^-shaped, like in <:-) (content) or <|-) (sleeping). That's not the best way to show what I mean D-:)