Avatar SDK 512 Bits instead of 256 Bits (Face Tracking)
Shinyflvres
VRChat’s Avatar SDK still supports only 256 bits for parameters in late 2025 and likely even going into 2026. This is far from sufficient, especially with current face-tracking technologies. We now have new capabilities such as eye tracking, mouth tracking, and more. Many upcoming headsets will include these features, and there are also external VR face tracking modules on the way.
I have created face tracking setups for Karin, Rusk, Manuka, and several other avatars. To achieve even remotely good face tracking, you already need around 210 to 240 bits. After allocating that, there are no remaining bits for any toggles at all, no clothing switches, no material changes, no animation triggers. Even installing systems like GoGo Loco becomes nearly impossible because there are simply no bits left.
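To show where those numbers come from, here is a rough back-of-the-envelope budget. The parameter groups and counts are my own illustration of a typical detailed setup, not an official list; the bit costs (1 bit per synced bool, 8 bits per synced int or float) are VRChat's documented values.

```python
# Rough sync-bit budget for a detailed face-tracking setup.
# VRChat synced parameter costs: bool = 1 bit, int = 8 bits, float = 8 bits.
FLOAT_BITS = 8
BUDGET = 256

# Illustrative parameter counts, not an official list.
groups = {
    "eyes (gaze X/Y, lids, squint, widen)": 8,
    "mouth/jaw (open, smile, frown, pucker, funnel, lip raise, ...)": 14,
    "cheeks (puff, squint)": 3,
    "tongue (out, X/Y, roll)": 4,
}
ft_floats = sum(groups.values())   # 29 synced floats
ft_bits = ft_floats * FLOAT_BITS   # 232 bits
print(f"{ft_floats} floats -> {ft_bits} bits of the {BUDGET}-bit budget")
print(f"remaining for toggles: {BUDGET - ft_bits} bits")
```

At 29 floats you are already at 232 bits, leaving 24 bits for everything else, which matches the 210 to 240 range above.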
By the end of 2025, VRChat should finally increase the limit to 512 bits on PC, while Quest can stay at 256 if necessary. We are living in the future, not the past.

Shinyflvres
I want to share a possible way:
Example:
256 normal parameters (toggles, contacts, sliders): keep as-is.
256 additional face-tracking parameters (tracking only): cull these based on distance and visibility.
If a player is 50 meters away, there is no need to sync face-tracking parameters to a remote client, because the client will not be able to see the face clearly at that distance. Only sync them when needed, such as when the player is close enough, or when the avatar is in view (frustum). Do not sync them when the player is out of sight. Cull when not needed to keep performance reasonable. This is standard development practice.
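A minimal sketch of what that per-remote-player decision could look like on the client. All names here are hypothetical and this is not VRChat code; it just shows the distance-plus-frustum check described above.

```python
import math
from dataclasses import dataclass

@dataclass
class Player:
    position: tuple  # world-space (x, y, z)

CULL_DISTANCE = 50.0  # meters; could just as well be a configurable float

def should_sync_face_tracking(local: Player, remote: Player,
                              in_frustum: bool) -> bool:
    """Sync FT parameters only when the remote avatar is close and visible."""
    if not in_frustum:
        return False  # avatar is not on screen at all
    return math.dist(local.position, remote.position) <= CULL_DISTANCE

# A player 60 m away, even if on screen, needs no FT sync:
print(should_sync_face_tracking(Player((0, 0, 0)), Player((60, 0, 0)), True))
```

The same check gates other per-frame data in most networked games; the only new part would be applying it to a reserved face-tracking block of parameters.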
✩Frisk✩
Shinyflvres I’d say give a float value for how far it should be until culled, instead of a fixed 50 meters
I’m all for it, especially with the Steam Frame coming out; that would blow Quest users out of the water
Smash-ter
I feel that it'd be better to ask them to have native support for face tracking
Shinyflvres
Smash-ter
This would break many face-tracking creator prefabs that rely on blendshapes driven by parameters and blend trees. Native face tracking would still need either bones or blendshapes to be animated. So adding “native face tracking” does not change the core requirement.
Both the current system and any native system still require parameters to be synchronized over the network at a high update rate to look smooth. Even with native support, you are still synchronizing the same underlying data, whether it drives bones or blendshapes. In practice, there is no meaningful difference in network load or in how many parameters must be synced.
// EDIT
Furthermore, if they decide to create a native system that uses bones for the jaw, eyes, cheeks, and more, the authors of Booth or Gumroad models would have to add those bones to their rigs. Some creators would, and some would not.
If a creator does not include the required bones, users would be forced to find workarounds and add them manually. That is close to impossible in practice. You generally cannot redistribute modified models on Booth or Gumroad, because doing so would violate many creators’ licenses.
On top of that, many users prefer blendshapes over bones because they want VTuber-style face tracking. Adjusting mouth and eye shapes is often not achievable in the same way with bones alone, and blendshapes are usually the more practical approach for that style.
✩Frisk✩
Smash-ter if VRChat is trying to perfect their marketing plan then making this native would be a bad deal for others.
Adding additional params rather than making it native is far more ideal.
Smash-ter
Shinyflvres
- I expect things to break as this would be things that are officially added to the SDK for creators to use.
- OP is asking for more sync parameters, which would still be as expensive as adding native support.
- Native support would mean an official parameter set for a generic facial/eye-tracking solution, supported natively inside the SDK itself.
- The concerns about bones and blendshapes can be mitigated with predefined parameters paired with a link that'd redirect to a guide about this stuff.
Smash-ter
✩Frisk✩ when I say "make it native" I literally mean putting in VRC defined parameters for facial and eye tracking. Having 512 bits is a lot on the network, especially if you have 80 people in an instance
SebrinaRena
I agree. I've recently added toe tracking, and that currently has to eat an additional 52 bits of expression data on top of face tracking. Many compromises had to be made on tracking accuracy, since we have no way to sync real quaternion rotations.
In order to have an avatar with maximum visual expression, big compromises have to be made, even with MemoryOptimizer or other compression tools compressing down clothing/accessories toggles.
And things that require realtime updating cannot be compressed down without compromising tracking quality.
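On the quaternion point: the standard trick in game networking is "smallest three" encoding, where you drop the largest component (it can be reconstructed from unit length) and quantize the other three. A rough Python sketch of the idea, not anything the current SDK lets you express with synced parameters:

```python
import math

def pack_smallest_three(q, bits=10):
    """Quantize a unit quaternion (x, y, z, w) by dropping its largest component."""
    largest = max(range(4), key=lambda i: abs(q[i]))
    sign = 1.0 if q[largest] >= 0 else -1.0   # flip q so the dropped component is positive
    scale = (1 << bits) - 1
    limit = 1.0 / math.sqrt(2.0)              # remaining components lie in [-1/sqrt2, 1/sqrt2]
    packed = [round((sign * q[i] / limit * 0.5 + 0.5) * scale)
              for i in range(4) if i != largest]
    return largest, packed                    # 2 bits + 3*bits -> 32 bits total at bits=10

def unpack_smallest_three(largest, packed, bits=10):
    scale = (1 << bits) - 1
    limit = 1.0 / math.sqrt(2.0)
    comps = [((p / scale) - 0.5) * 2.0 * limit for p in packed]
    missing = math.sqrt(max(0.0, 1.0 - sum(c * c for c in comps)))
    comps.insert(largest, missing)            # q and -q are the same rotation
    return tuple(comps)
```

At 10 bits per component that is 32 bits per rotation, which is why real quaternion support would be far cheaper than approximating a rotation with several 8-bit floats.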
✩Frisk✩
All platforms would have to be 512 to keep things synced.
Shinyflvres
✩Frisk✩
That is true, and there are ways to keep it performant. Culling already exists, and if necessary, synced parameters that are not required can be culled as well.
As in the example from my earlier comment: keep the 256 normal parameters synced as-is, and cull the additional 256 face-tracking parameters based on distance and visibility, since nobody can see a face clearly from 50 meters away anyway.
pinkiceygirl
Either that or finally support face tracking natively so that we don’t really need these systems in the first place.
Shinyflvres
pinkiceygirl
I wrote this in a comment above to another user. It does not matter whether they add “native” or “non-native” support. The end result is the same: bones or blendshapes still need to be animated, and the driving parameters still need to be synchronized over the network. There is no meaningful difference.
BaconPancake·
The requirement for a parameter compressor on most featured avatars with face tracking is unreasonable.
Shinyflvres
BaconPancake·
A parameter compressor will most likely not work well for face tracking. Parameter compression typically packs multiple integers and floats into a shared pool, which works best for simple cases where values are similar or drive the same kind of animation.
Face tracking uses too many unique variables, and most of them cannot be meaningfully compressed. Even if you could compress some of them, it would likely create more problems than benefits in practice.
inseyy
agree
MatuLives
With a compressor + binary parameter budgeting + encoded wardrobe modes, you can run full FT (eye/mouth/cheek/brow/tongue), OSC extras, and even BCI appendage control today without slamming into the memory wall, at the cost of slamming my own skull against a wall
But none of this removes the need for a 512-bit upgrade on PC, we’re overdue
Shinyflvres
MatuLives
I’m aware that you can compress parameters; that’s what I already do. Even so, detailed face tracking needs a lot of bits regardless of compression. 256 is not nearly enough, especially when you want to add at least a few toggles for clothes, materials, or props. Even with compression, I already reach 175–200 bits for detailed face tracking (eye, mouth, tongue, and cheek tracking).
Big Soulja
MatuLives Param compression is expensive in itself animator-wise; it adds a lot of frame time, and the same goes for parameter smoothing. With 512 bits plus native smoothing options in the SDK, face-tracked avatars would be less CPU-heavy.
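For context on the smoothing point: the animator-based smoothing people build today is essentially per-parameter exponential smoothing done with blend trees, one layer of animator evaluation per smoothed value. The math itself is trivial, which is why a native option would be cheap. An illustrative sketch, not SDK code:

```python
def smooth(current, target, smoothing, dt):
    """Frame-rate-independent exponential smoothing toward target.
    smoothing: fraction of the remaining distance kept after one second (0..1)."""
    alpha = 1.0 - smoothing ** dt
    return current + (target - current) * alpha

# Remote FT values arrive in steps (e.g. 10 Hz); smoothing hides the steps.
value = 0.0
for _ in range(90):                       # ~1 s of frames at 90 fps toward 1.0
    value = smooth(value, 1.0, 0.01, 1 / 90)
print(round(value, 3))                    # -> 0.99
```

One multiply-add per parameter per frame in engine code, versus an entire animator layer per parameter today.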
MatuLives
Big Soulja I agree, these are just options we have to work with for now, this post is definitely not asking for luxury
For a bit, it was fun traversing through increasingly cursed animator graphs
But it's about to be 2026 and we are still solving a 2019 limitation with hacks
anthonyjh2020
agreed
SergioMarquina
I do agree. After face tracking, we have barely anything left for cool stuff. Face tracking with extreme detail already requires such an insane number of bits that sometimes you can’t add anything at all, let alone cool stuff. 512 bits should be what’s supported going forward.