Generic Avatar visemes
J Bruening
Visemes currently do not work for Generic rigs. Please move the code that initializes blend shapes with the lip sync controller out of the block that checks whether the animator is humanoid, so that Generic rigs can use visemes too.
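The requested change can be sketched roughly like this. This is an illustrative guess, not VRChat's actual code (the client is closed source): `SetupIK` and `InitLipSync` are hypothetical names, while `Animator.isHuman` and the descriptor's `VisemeBlendShapes` field are real Unity/SDK identifiers.

```csharp
// Hypothetical sketch -- VRChat's client code is closed source.
// Reported behaviour: viseme setup only happens for humanoid rigs.
void SetupAvatar(Animator animator, SkinnedMeshRenderer face, VRC_AvatarDescriptor desc)
{
    if (animator.isHuman)
    {
        SetupIK(animator);                            // humanoid-only IK setup (hypothetical)
        InitLipSync(face, desc.VisemeBlendShapes);    // <- trapped inside the humanoid check
    }
}

// Requested behaviour: blend shapes don't depend on the rig type,
// so initialize lip sync unconditionally.
void SetupAvatarFixed(Animator animator, SkinnedMeshRenderer face, VRC_AvatarDescriptor desc)
{
    if (animator.isHuman)
    {
        SetupIK(animator);                            // humanoid-only features stay gated
    }
    InitLipSync(face, desc.VisemeBlendShapes);        // now runs for Generic rigs too
}
```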
SubsonicJackal
This was implemented on SDK3 Avatars.
Ashelia
Confirmed: even after Unity 2018, visemes still don't work on a Generic rig. I'd love my cartoon animal (with custom four-legged walking/running) to visibly talk, and while humanoid trickery works (at a serious performance cost), I'd have to give up the custom walk/run :(
How hard can this even be...?
Fred Jones
It's been two years and we still don't have this.
Eremite
+1 for this. It would be really great to be able to take advantage of the things humanoid rigs can do without having to do weird rigging magic to force a model to work as humanoid. Right now Generic rigs can't use: speech visemes, gesture overrides, chairs (sit), emotes, the menu when the viewball is too low, animation overrides for non-walk/idle motions (unless I just suck at animation states), or crouch/prone poses (they just move the viewball up and down). I'm probably forgetting something else that's broken.
Space Jammer Slammi
As an addendum, it would be nice if eye tracking weren't tied to Humanoid rigging as well. Divorcing both visemes and eye-bone movement from this rig requirement would be really beneficial for a wide variety of reasons.
Lhun
Perhaps a more complete solution would be a VRChat "rigger" that doesn't depend on Humanoid at all, but could be built from a modified avatar descriptor routine and a VRChat-created armature/bone rigging tool not unlike Mixamo (which focuses on humanoids). This would allow root node modification on new custom prefabs to fix or create IK on anything a user creates, and users would be able to more easily describe custom emotes, gestures, and viseme blends. (Plus, animation conversion of facial blends from pmx2fbx would benefit me, lol!) One other benefit: you could hardcode full-body IK pucks as detected to the avatar, rather than having the user "strap in" every time. VRChat is currently bombarded with "how do I custom avatar?" more than anything else, and I think the accessibility of letting people be whoever they want to be is key to its success.
AeridicCore
This is essential for quadruped models to have a proper talking animation. Not having this option is quite absurd: blend shapes aren't rig-dependent AT ALL.
I recommend rewriting the algorithm so that the humanoid rig check runs AFTER visemes are applied, if any.
EvolvedAnt
I agree with this. VRChat specifically supports non-humanoid avatar rigs set to Generic, as long as you use a custom animation controller to provide your own animations. However, for some reason, visemes are ignored when you choose not to use a humanoid rig. Blend shapes do not in any way require a humanoid rig to function, so there shouldn't be a need for a humanoid-rigged avatar to enable them. In fact, I have humanoid avatars with generic-rigged objects attached to my hand, and visemes work perfectly on the object I'm holding.
If a generic-rigged object on my hand works with visemes perfectly just because the avatar itself is detected as humanoid, then an avatar set up as Generic, with a custom animation controller where we do all the walking/jumping/etc. animations ourselves, should work with visemes as well. Especially since the various visemes don't need to be autodetected: we tell the SDK specifically which blend shape to use for each viseme.
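The point that blend shapes are rig-independent is easy to demonstrate: Unity's `SkinnedMeshRenderer` API never consults the Animator's rig type. A minimal sketch, with the caveat that the weight source below is stand-in code (VRChat's real lip sync driver is not shown) and the blend-shape name is an assumed convention:

```csharp
using UnityEngine;

// Illustrative sketch: driving a viseme blend shape directly.
// Nothing here touches Animator.isHuman -- blend shapes behave the same
// on Generic and Humanoid rigs.
public class VisemeDriver : MonoBehaviour
{
    public SkinnedMeshRenderer face;       // mesh that has the viseme blend shapes
    public string visemeName = "vrc.v_aa"; // assumed blend-shape naming convention

    int visemeIndex = -1;

    void Start()
    {
        // Look the blend shape up by name, just as an explicit
        // descriptor mapping would allow.
        visemeIndex = face.sharedMesh.GetBlendShapeIndex(visemeName);
    }

    void Update()
    {
        // Stand-in for a real lip sync weight in the 0-100 range.
        float weight = Mathf.PingPong(Time.time * 100f, 100f);
        if (visemeIndex >= 0)
            face.SetBlendShapeWeight(visemeIndex, weight);
    }
}
```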
Please allow non humanoid avatars to have visemes enabled.