I would like to propose the addition of native parameter smoothing functionality within VRChat, directly supported by the VRChat client and SDK. Currently, creators must rely on animator-based feedback-loop smoothing to mitigate the slow network update rate of float parameters, particularly for advanced use cases such as full face tracking. However, this approach introduces significant inefficiencies and complexity.
Feedback-loop smoothing via animator logic results in extremely bloated and overly complex animators. This not only complicates the animation setup process, but also has measurable negative performance impacts.
Based on my own testing and benchmarking, a single avatar using full face tracking with feedback-loop smoothing can increase CPU frame times enough to cause a 5–10 FPS drop in CPU-limited environments, with mirrors disabled. This is fully reproducible, especially with the OSCmooth and Pawlygons face-tracking templates, and the cost comes from the smoothing logic alone, not the face-tracking logic itself. The performance degradation scales linearly with the number of face-tracked avatars present in a given instance, which makes the current approach unsustainable as more users adopt advanced tracking setups.
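For context, what these animator setups effectively approximate is a simple per-frame exponential smoothing of each float parameter. The Python below is only a sketch of that math as I understand OSCmooth-style feedback loops to behave; it is not VRChat or OSCmooth code, and all names are illustrative.

```python
# Sketch only: the math an animator feedback-loop smoother approximates each
# rendered frame. 'smoothing' runs from 0.0 (no smoothing) to 1.0 (frozen).
def smooth_step(current: float, target: float, smoothing: float) -> float:
    """One frame of exponential smoothing: lerp the current value toward the target."""
    return current + (target - current) * (1.0 - smoothing)

# Example: a face-tracking float that only changes a few times per second over
# the network, smoothed every rendered frame in between.
smoothed = 0.0
for target in [0.0, 1.0, 1.0, 1.0, 0.2, 0.2]:  # sparse network values
    smoothed = smooth_step(smoothed, target, smoothing=0.8)
    print(round(smoothed, 3))
```

Evaluated natively, this is roughly one multiply-add per parameter per frame; reproduced through animator logic, the same result needs extra layers, blend trees, and feedback parameters for every smoothed float, which is where the bloat and CPU cost come from.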
I propose that VRChat introduce native parameter smoothing functionality configurable directly through the VRChat Unity SDK. This could be implemented by expanding the existing animator parameter list interface in the SDK to include the following (a rough sketch follows the list):
  1. A checkbox to enable native smoothing on a per-parameter basis.
  2. Two smoothing strength values, ranging from 0.0 to 1.0:
  • Local Smoothing: Applied to the user's own view of their avatar (e.g., for responsive and accurate feedback).
  • Remote Smoothing: Applied to how other users see the avatar (e.g., to compensate for low parameter update frequency over the network).
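To make the proposal concrete, here is a purely illustrative sketch of how such per-parameter settings might look and how the client could apply them each frame. None of these type or field names exist in the VRChat SDK; they are assumptions for illustration only.

```python
# Purely illustrative sketch of the proposed per-parameter settings and how
# the client could apply them; these names are hypothetical, not SDK API.
from dataclasses import dataclass

@dataclass
class SmoothedParameter:
    name: str
    native_smoothing: bool = False   # the proposed per-parameter checkbox
    local_smoothing: float = 0.0     # 0.0-1.0, applied to the wearer's own view
    remote_smoothing: float = 0.0    # 0.0-1.0, applied to everyone else's view

def apply(param: SmoothedParameter, current: float, target: float,
          is_local: bool) -> float:
    """One frame of client-side smoothing for a single float parameter."""
    if not param.native_smoothing:
        return target  # behave exactly as today
    k = param.local_smoothing if is_local else param.remote_smoothing
    return current + (target - current) * (1.0 - k)

# Example: heavier smoothing for remote viewers, lighter for the wearer.
jaw_open = SmoothedParameter("FT/JawOpen", native_smoothing=True,
                             local_smoothing=0.3, remote_smoothing=0.85)
```

Remote smoothing would typically be set higher than local smoothing, since remote viewers only receive a handful of parameter updates per second, while the wearer's own values update every frame.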
This native implementation would eliminate the need for animator-driven smoothing workarounds, significantly reduce complexity, and greatly improve performance across all face-tracked avatars going forward, especially in CPU-constrained scenarios (which is me 99% of the time, and likely most players nowadays).
Once standardized, such a feature would deliver a measurable performance improvement over the current hacky avatar-based feedback-loop method. It would also have huge positive implications for Mobile/Quest, which are far more CPU-restricted yet currently place no limits on face tracking or animator complexity. We need more tools to optimize animators, and this would be a great step forward. Thank you.